Loading dashboards, the tree, etc. (everything) from the ePO management console has become terribly slow since yesterday.
For example, a standard dashboard that used to load within 5 seconds now takes over a minute.
I have already restarted the ePO server without any improvement.
Another weird issue: if I start the console on the ePO server itself, it keeps loading endlessly (stuck trying to load the logon screen).
We have been having very similar issues and have so far been unable to determine the root cause. The servers don't appear to be under any physical stress in terms of CPU/memory usage; however, they are constantly hitting the 245-connection limit ("Server is too busy (245 connections) to process request"). We haven't found any issue with our SAN disks, but SQL has logged many deadlocks in the database. We are still working with McAfee on this, but it is very frustrating.
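If you're chasing those deadlocks, one low-effort option (assuming SQL Server 2005 or later; worth confirming with your DBA before enabling server-wide) is to turn on deadlock tracing so the participating statements get written to the SQL errorlog. The Python below is just a sketch that prints the DBCC commands you'd paste into a query window:

```python
# Sketch: DBCC commands a DBA might run to log deadlock details to the
# SQL Server errorlog. Trace flag 1222 (SQL Server 2005+) emits detailed
# deadlock graphs; 1204 is the older SQL 2000-style output.
def deadlock_trace_commands(use_legacy_format=False):
    """Return the T-SQL commands to enable and verify deadlock tracing."""
    flag = 1204 if use_legacy_format else 1222
    return [
        "DBCC TRACEON (%d, -1);" % flag,  # -1 = enable server-wide
        "DBCC TRACESTATUS (-1);",         # confirm the flag is active
    ]

if __name__ == "__main__":
    for cmd in deadlock_trace_commands():
        print(cmd)
```

Once enabled, the next deadlock should show up in the errorlog with the victim and the competing statements, which is a lot more to go on than just a deadlock count.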
I had a similar problem, but only with certain dashboards. It turned out the database used by McAfee was highly fragmented; after a defrag, things improved significantly.
This McAfee KB article will give some guidance on DB maintenance configuration for ePO:
KB67184 - Recommended maintenance plan for ePO 4.0/ 4.5 database using SQL Server Management Studio
You can search for it through the service portal.
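For anyone who wants to check fragmentation before scheduling a full maintenance plan, something along these lines can help. This is only a sketch: the database name `ePO4_SERVER` is a placeholder, the index name in the example is hypothetical, and the 5%/30% thresholds are the common rule of thumb rather than McAfee's guidance (defer to the KB article above for that). The Python just assembles the T-SQL you'd paste into SQL Server Management Studio:

```python
# Sketch: generate the T-SQL a DBA might run to list fragmented indexes in
# the ePO database and then reorganize or rebuild them.
# "ePO4_SERVER" is a placeholder database name -- substitute your own.

DB_NAME = "ePO4_SERVER"  # placeholder, not necessarily your ePO DB name

def fragmentation_check_sql(db_name):
    """T-SQL listing indexes over 5% fragmented (SQL Server 2005+ DMV)."""
    return (
        "SELECT OBJECT_NAME(ips.object_id) AS table_name,\n"
        "       i.name AS index_name,\n"
        "       ips.avg_fragmentation_in_percent\n"
        "FROM sys.dm_db_index_physical_stats(DB_ID('%s'), NULL, NULL, NULL, 'LIMITED') ips\n"
        "JOIN sys.indexes i\n"
        "  ON i.object_id = ips.object_id AND i.index_id = ips.index_id\n"
        "WHERE ips.avg_fragmentation_in_percent > 5\n"
        "ORDER BY ips.avg_fragmentation_in_percent DESC;" % db_name
    )

def maintenance_sql(table, index, frag_percent):
    """Rule of thumb: REORGANIZE at 5-30 percent fragmentation, REBUILD above 30."""
    action = "REBUILD" if frag_percent > 30 else "REORGANIZE"
    return "ALTER INDEX [%s] ON [%s] %s;" % (index, table, action)

if __name__ == "__main__":
    print(fragmentation_check_sql(DB_NAME))
    # "IX_Example" is a made-up index name for illustration
    print(maintenance_sql("EPOEvents", "IX_Example", 42.0))
```

Running the check query first and only rebuilding what's actually fragmented is usually much cheaper than a blanket weekly reindex of everything.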
Thanks for the suggestion, but we already reindex the DB on a weekly basis. We have actually traced our performance issues back to the max-open-agent-connection issue I referred to earlier. For some reason we start hitting that max at around 2pm every Friday for at least an hour and a half, even though there are no jobs running in ePO or on the local server. We have also contacted our jr. admins to make sure they aren't running any SMS type ASCI script. We have monitored the number of records being updated in the DB during this window, and it is no higher than when activity is normal.

The only way we have been able to replicate this is by running a deploy job to a few thousand machines: if those machines are offline it really hammers the system, as it seems to leave the open connections in progress for much longer than the 5-minute timeout. We are supposed to have someone from McAfee look at the system this Friday, since it has been happening like clockwork for the past few months. I'll post back if we find anything...
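In case it helps anyone watching for the same symptom: a rough way to see how close you're getting to the 245-connection cap is to count ESTABLISHED connections to the agent-server port over time. A minimal sketch, with assumptions flagged: port 80 is only the ePO default agent-server communication port (change it if yours differs), and the field positions assume Windows `netstat -an` output.

```python
# Rough connection-cap watcher: tally ESTABLISHED TCP connections whose
# local endpoint is the ePO agent-server port, from `netstat -an` output.
import subprocess

AGENT_PORT = 80  # assumption: default ePO agent-server communication port

def count_established(netstat_output, port):
    """Count ESTABLISHED connections to :port (Windows netstat -an layout)."""
    count = 0
    for line in netstat_output.splitlines():
        fields = line.split()
        # Windows layout: TCP  local_addr:port  remote_addr:port  STATE
        if len(fields) < 4 or fields[-1] != "ESTABLISHED":
            continue
        local = fields[1]
        if local.endswith(":%d" % port):
            count += 1
    return count

if __name__ == "__main__":
    try:
        out = subprocess.run(["netstat", "-an"],
                             capture_output=True, text=True).stdout
        print("established agent connections:", count_established(out, AGENT_PORT))
    except OSError:
        pass  # netstat not available on this box
```

Run on a schedule (e.g. every minute via Task Scheduler) and logged to a file, this would at least show whether the Friday-afternoon spike is a slow ramp or a sudden flood of agent connections.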
We are experiencing similar problems; I see the same messages in our server.log. Our server goes down every other week, and we have not had a full month where ePO has worked smoothly since transitioning from ePO 3.6. We also reindex our DB weekly and see no indication that it is a DB issue. Are you running ePO on VMware? We are running ePO and the DB on two separate virtual servers.
We are not running on virtual servers; we actually have both our app and DB servers clustered. This issue is still happening on a semi-consistent basis every Thursday and Friday between 2pm and 4pm. Every now and then it skips a day or isn't as bad (only 100 connections/second), especially when we have McAfee on the line ;-). However, last week McAfee tier 3 got to see it for a full 2 hours (longer than normal). They were able to collect some data, and we're hoping they can come up with something from that. At first they believed it was a DB locking issue, but as we saw during the incident, some locks would come and go while the issue persisted even when there was no locking.
FYI - we are still experiencing this issue on a regular basis at the end of the week. McAfee tier 3 has looked at it a few times, and we haven't gotten any viable solutions from them yet. Very frustrating...today it took two hours and a restart of the app server just to be able to open Server Tasks...