I'm hoping someone else has seen this; I cannot find a resolution anywhere.
I stood up a new McAfee ePO 4.5 server last Thursday after testing it in a virtual environment for several months with no issues. I exported the system tree, along with the system names, from our production ePO 4.0 server to the newly built ePO 4.5 server. Since my virtual test environment was 4.5, I exported the policies and imported them into the new 4.5 server as well.

I then created a query to find all unmanaged systems and built an automation task to install agents on them. I set the task to run every 4 hours, but it was only installing the agent on a few machines at a time and nothing more; the task was stopping in less than a minute. So I went on and had a network guru incorporate the agent install into the logon script. I got great results and thought we were on the way to great success.

When I got back to work Monday, the ePO DB was almost 43 GB. The server was being flooded with 1095 events. Naturally, I turned off the event, but they kept coming. At that point I had to kill all the ePO services because the load started to bog down other enterprise apps (HEAT, BES) that have databases on the same SQL server. After changing the ports in ePO so the endpoints could not talk to it, I had the DBA restore the ePO DB from an earlier backup. I then went back in, made sure the 1095 event was still turned off, and restored the ports so the endpoints could talk again.

I understand that the endpoints were caching data while the services and ports were unreachable; however, even after turning off 1095 and performing an agent wake-up, the events have continued to flood in for almost 48 hours. I have exercised every avenue possible, including McAfee Gold Support, with no end in sight. Does this sound like a case of cached data on endpoints that cannot get written fast enough, or something more serious? Any input would be greatly appreciated.
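For anyone curious what the logon-script piece looked like, here is a minimal sketch of the kind of batch snippet we used. The share path is a placeholder (use wherever you exported FramePkg.exe from ePO), the service name "McAfeeFramework" is the ePO 4.x default but worth verifying on your agents, and the /install=agent /silent switches should be checked against your version's product guide:

```shell
@echo off
REM Logon-script agent install -- a sketch, not our exact script.
REM Assumptions: FramePkg.exe (exported from ePO) lives on a share
REM (\\server\share\ below is a placeholder path), and the installed
REM agent registers the "McAfeeFramework" service (ePO 4.x default).

REM Skip the install if the McAfee Agent service already exists,
REM so the script is cheap on machines that are already managed.
sc query McAfeeFramework >nul 2>&1
if %errorlevel%==0 goto :eof

REM Quiet agent install; confirm these switches for your agent version.
\\server\share\FramePkg.exe /install=agent /silent
```

The service check matters in a logon script: without it, every logon re-runs the installer and you hammer both the share and the endpoints.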