We are trying to configure a high availability setup with 4 Web Gateway 7.3 nodes in the same subnet.
We want to configure 2 directors, with one backup director / scanner for each, to distribute load across the directors. We are having a hard time configuring it, and the Web Gateway manual is not very clear about high availability in the first place. What we see is that having 2 directors in the same subnet causes strange problems (state "failed" on some nodes with the mfend-lb -s command in the console). Does anybody have experience with different Web Gateway HA setups, or does anybody have a good Web Gateway HA manual available?
Could you perhaps post a screenshot of your ProxyHA settings to see if we can get things working?
What are the strange problems you are noticing? If the two directors are aware of each other, the one with the higher priority (or started first if they have equal priority) will take over the virtual IP. Is the virtual IP pingable?
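For what it's worth, the takeover rule described above can be illustrated with a small sketch (plain Python for illustration only, not Web Gateway code; the names, priorities, and start times are made up):

```python
# Illustration of the director-election rule: the director with the highest
# priority takes the virtual IP; on a priority tie, the one that started
# first keeps it. Not actual Web Gateway logic, just the rule as stated.

def elect_director(directors):
    """directors: list of (name, priority, start_time) tuples."""
    # Highest priority wins; among equal priorities, earliest start wins.
    return max(directors, key=lambda d: (d[1], -d[2]))[0]

nodes = [
    ("director-1", 100, 10),  # priority 100, started at t=10
    ("director-2", 100, 25),  # same priority, started later
]
print(elect_director(nodes))  # director-1: equal priority, started first
```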
Our test setup looks like this:
xxx.xxx.xxx.40 = VRRP virtual IP 1 (director 1 with scanner 1 standby)
xxx.xxx.xxx.41 = director 1
xxx.xxx.xxx.42 = scanner 1 / standby director 1
xxx.xxx.xxx.43 = director 2
xxx.xxx.xxx.44 = scanner 2 / standby director 2
xxx.xxx.xxx.45 = VRRP virtual IP 2 (director 2 with scanner 2 standby)
Console output director 1
[root@tlabcache-xxxxxxx1 ~]# mfend-lb -s
Console output director 2
[root@tlabcache-xxxxxx3 ~]# mfend-lb -s
Director 1 central management settings:
Director 1 proxy ha settings:
Director 2 central management settings:
Director 2 proxy ha settings:
For load distribution we want two directors in the same firewall DMZ subnet. If possible it would be nice if they fell under the same central management (for automatic policy distribution), but this is not required.
Yeah, your settings aren't what I'd want for what you described.
More or less, what you want is two HA clusters (three nodes per HA cluster) and one centrally managed cluster (for policy settings).
1. You have changed the "Central Management" settings, which is not something you want or need to do. Please revert to the default 'all' for each 'group' value. Then you can join all the nodes together (to sync policy).
2. Then you need to configure the "Proxies" settings AND a command-line parameter.
Under the settings you have proxy ports 9090 and 81 defined. You MUST have each of these defined as proxy ports, otherwise the virtual IP will not work and you will get a CONFLICT (I think). Please send a screenshot of your proxy ports.
3. To distinguish between cluster 1 and cluster 2 please check out: https://community.mcafee.com/message/225129
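In short, the linked post comes down to giving each HA cluster its own load-balancer ID in the mfend configuration file. A hypothetical sketch of the idea (the exact file path and syntax are assumptions; verify them against the linked post and your appliance version):

```
# In the mfend config file on the nodes of HA cluster 1
# (director 1 + scanner 1, virtual IP 1):
MFEND_LBID=1

# In the mfend config file on the nodes of HA cluster 2
# (director 2 + scanner 2, virtual IP 2):
MFEND_LBID=2
```

With a distinct ID per cluster, the two director pairs stop seeing each other as members of the same HA group even though they share a subnet.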
Let me know if this helps.
Adding the MFEND_LBID values in the mfend config file and reverting to the original settings under Central Management resolved the conflict after rebooting all the test appliances.
The proxy settings seem correct now (all virtual IPs are working); see below.
I still need to add both HA clusters under the same central management.
Can I just add all systems under central management, or do I need to change some settings on the central management config page?
Is there any extra documentation available regarding what all the different central management options are normally used for?
Glad to hear that resolved most of the problems. So at the moment each virtual IP is pingable?
As far as joining all of the nodes into one managed cluster, all you need to do is click the 'Add' button under the Configuration tab and type the IP of the node in question. There is no need to change any other configuration.
Central Management happens over port 12346, so you must make sure that port is open between the nodes (and that you haven't goofed the port configuration on each node).
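If you want to sanity-check that port from one node to another, a generic TCP connect test is enough. A minimal sketch in Python (nothing here is Web-Gateway-specific, and the commented host IP is a placeholder; substitute the real node address):

```python
# Generic TCP reachability check, usable for the Central Management
# port (12346, per the post above) or any other port.
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with a placeholder node IP:
# print(port_open("10.0.0.43", 12346))
```

If this returns False between two nodes, fix the firewall or the per-node port configuration before troubleshooting Central Management itself.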
As far as documentation on the different central management options, you can check out page 303 of the product guide (https://kc.mcafee.com/corporate/index?page=content&id=PD24047). But honestly there isn't much you need to configure unless the clusters are in different locations and you need the respective clusters to get updates in a special fashion.
I would suggest reviewing the section on the types of groups (network, update, runtime).