I was waiting for other customers to respond because I'm probably partial in some ways.
Using an external load balancer has always worked well in my experience. I like it in particular because it can add and remove devices from a pool very smoothly without interrupting end users. It can also perform health checks, issuing GET requests and actually interpreting the response (success or fail).
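For illustration, the kind of health probe an external load balancer runs against a proxy might look like this minimal Python sketch (the probe URL, port, and thresholds here are my own assumptions, not anything MWG- or ACE-specific):

```python
import http.client

def proxy_health_check(proxy_host, proxy_port, probe_url="http://hp.com/", timeout=5):
    """Send a GET through the proxy and treat any 2xx/3xx status as healthy.

    When talking to a forward proxy, the request line carries the full URL,
    so we open a connection to the proxy itself and ask for the probe URL.
    Any connection failure or timeout counts as unhealthy.
    """
    try:
        conn = http.client.HTTPConnection(proxy_host, proxy_port, timeout=timeout)
        conn.request("GET", probe_url)
        status = conn.getresponse().status
        return 200 <= status < 400
    except (OSError, http.client.HTTPException):
        return False
```

A real load balancer would run several of these probes against different reliable sites and remove the device from the pool after a few consecutive failures.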
On the other hand, the built-in ProxyHA is useful if you do not already have an external load balancer. One appliance "handles" the traffic (it is the director node and owns the virtual IP), and the other node is a scanning node. The director node is also a scanning node, so it processes traffic too. There is a built-in load-sharing algorithm that takes into account resource usage and the number of active connections, so if one appliance is overloaded, the other gets more traffic to compensate.
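The exact load-sharing algorithm isn't documented in this thread, but a toy illustration of resource-aware sharing along those lines (my own scoring formula, purely for illustration) could be:

```python
def pick_node(nodes):
    """Pick the least-loaded node from a list of (name, cpu_load, active_conns).

    Hypothetical scoring: combine CPU utilization (0.0-1.0) with active
    connection count so an overloaded appliance receives fewer new
    connections. The real director's formula is not published here.
    """
    def score(node):
        _, cpu, conns = node
        return cpu + conns / 100.0

    return min(nodes, key=score)[0]
```

With this kind of scheme, a node at 90% CPU with 500 connections loses out to one at 20% CPU with 100 connections, which matches the "overloaded appliance gets less traffic" behavior described above.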
For information on setting up Proxy HA and on external load balancers, see pages 41-42 and 102-103 of the Product Guide (https://kc.mcafee.com/corporate/index?page=content&id=PD24047).
Hope this helps!
Is ProxyHA using ICAP between the systems to accommodate work sharing?
No, it's using a kernel-level redirect. The Web Gateway has a network driver called "mfend" that does this.
FWIW, we use an external load balancer (Cisco ACE at the moment) and find it a very workable solution. We have 8 MWG 5000s spread across two physical data centers. We have a load balancer in front of each set of 4, and use DNS to let the client select which data center to go to. The primary reason we went this way was our already strong expertise and robust processes around the ACEs.
We do not do any SSL inspection, nor caching, on the MWGs.
As Jon says, it makes it easy to take a device out of use: just turn it off and traffic is not sent that way. The ACEs do a good job of spreading the load evenly, and with the keep-alive going to three or four reliable sites like hp.com, microsoft.com, etc., they quickly remove a device should it fail.
Al, if I might ask: you said you use DNS to allow the client to decide where they will go. Could you explain that more?
We currently use PAC files; they can be finicky, but the time-to-convergence when we make a change or a failure occurs is very good.
We considered using DNS, but you have to set a low TTL to make sure systems fail over quickly in the event of an outage.
We also considered 3DNS, but the same problem exists.
It seemed to us that the 'best' solution would be PAC files pointing to DNS-registered VIPs on a load balancer.
That's pretty much how we have it. 4 devices per data center, with a single LB VIP in front of them. Since we have two datacenters, the DNS name that is given in the pac file resolves to two VIPs.
Should we lose all devices in a data center, or the load balancer itself, we would have to make a DNS change and remove one of the VIPs. Of course, should we lose a data center there will be a lot of excitement for other reasons, and I'll have time to slip in a DNS change and let the TTL expire before too many people notice that only half of their requests to CNN.com are getting through.
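To illustrate the client side of that setup: the hostname in the PAC file resolves to both VIPs, and pulling one out of DNS (with a low TTL) is what shifts clients to the surviving data center. A quick Python sketch of the lookup (the hostname and port are placeholders, not the actual setup):

```python
import socket

def resolve_vips(hostname, port=9090):
    """Collect all IPv4 addresses behind a proxy hostname.

    With two data centers, the PAC-file name should return two
    load-balancer VIPs. After a DNS change removes one VIP, clients
    converge on the remaining one once the TTL expires.
    """
    return sorted({info[4][0]
                   for info in socket.getaddrinfo(hostname, port, socket.AF_INET)})
```

Running this against the proxy name before and after a failover is a simple way to confirm the DNS change has propagated.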
Thanks, I appreciate it.
May I ask what you use for health checks of the MWGs?