We have an issue where queries pulling just one week of data run very slowly, and the dashboards take forever to load.
Our ePO deployment is 4.5 and has been in the environment for years.
Our database is currently about 300 GB, and we just did a big purge, re-index, and re-org.
We have checked settings like Max Degree of Parallelism, etc., but still to no avail.
I am thinking of backing up the database and then going through and truncating the largest tables to see if this helps.
Has anyone else tried this?
It's hard to identify whether my issues are due to locks on the database rows or a bad query... who knows.
Through the SQL Activity Monitor, I can kind of see which SQL query is taking the longest, but I cannot really correlate that to an actual query in ePO.
Is there a table somewhere that correlates the SQL query to the query name in ePO?
Anyhow, if anyone has done a truncate, did it improve performance, and is it a recommended procedure?
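I'm not aware of an ePO table that maps its query names to the SQL text, but one way to at least capture the full statement behind a slow spid is to use the standard SQL Server DMVs (a sketch, nothing ePO-specific; run it against the ePO database instance):

```sql
-- Currently executing requests with their full SQL text and wait info.
-- Uses standard SQL Server DMVs: sys.dm_exec_requests, sys.dm_exec_sql_text.
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.total_elapsed_time,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID   -- exclude this monitoring session
ORDER BY r.total_elapsed_time DESC;
```

The table names in the captured text (event tables, PA/FIM tables, etc.) usually make it obvious which ePO feature the query belongs to, even without a name-to-name mapping.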
There are any number of Knowledge Base articles to help you identify and reduce the size of the ePO database.
Please also note that ePO 4.5 reached its EOL date on Dec 31st, 2013.
Yeah, tried all those...no luck.
So... anyway, is there any way I can correlate the SQL query running in SQL Activity Monitor to the query name in ePO?
And are there any pitfalls to look out for if I decide to truncate?
As rackroyd already mentioned, ePO 4.5 is EOL. Have you considered upgrading to a newer version? We have made tons of improvements since 4.5 that speed up query processing in ePO.
With 4.5 I've seen performance issues due to a few different things. I'd agree with everyone that there is little reason to truncate any tables, but there are a few that are notorious for filling up fast, and you should get to know them.
Some issues I have encountered:
1. The obvious one, and what most of the KBs say: large numbers of events that are not being purged. There are several purge tasks, not just the ones specific to threat events. Check them all and make sure they are running regularly. Reduce the purge window to something smaller than recommended, 30 days for instance. Check *all* your table counts to see if they are reasonable.
2. Policy Auditor and FIM. If you have a lot of systems cached to perform integrity scans, I've seen the database crawl. If you have this, fix or reduce the policies and clear the cached systems.
3. Rogue System Detection. It appears fixed in newer versions, but at one time having unreasonably large numbers of sensors (and a poorly designed table view) seemed to cause slowdowns. Remove duplicate sensors on the same subnet and ensure the RSD tables are getting purged.
4. Large numbers of bad server tasks / automated tasks. Go through each task and make sure it is necessary and efficient.
5. Large numbers of bad queries. If you have lots of users creating horrible queries that are used a lot (e.g. attached to a main dashboard with an aggressive refresh rate), this can cause serious slowdown.
6. Way too many VLFs. Depending on how your database grew over time, you could have a hundred thousand VLFs, which can lead to weird performance symptoms. Check the VLF count; if it's in the 100k range, you're running way too high. Shrink the log and regrow it in large increments so it never gets that high again.
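To check the VLF count from point 6, a quick sketch (DBCC LOGINFO is undocumented but widely used on SQL Server versions of that era; the file and database names in the comments are placeholders, not your actual names):

```sql
-- Each row returned by DBCC LOGINFO is one virtual log file (VLF).
-- Run this in the ePO database; a count in the tens of thousands
-- means the log grew in many tiny increments.
DBCC LOGINFO;

-- Rough remediation sketch (adjust names/sizes to your environment):
-- DBCC SHRINKFILE (YourEpoLogFileName, 1);
-- ALTER DATABASE [YourEpoDb]
--     MODIFY FILE (NAME = YourEpoLogFileName, SIZE = 8GB);
```

Regrowing the log in a few large, fixed-size steps (rather than small autogrowth increments) keeps the VLF count low going forward.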
Note: if you are seeing certain suspended processes or SQL queries taking up a large amount of time, look at which tables are involved. The names are pretty straightforward; tables with PA or FIM in the name relate to Policy Auditor, for example. That will tell you real quick where the problem seems to be coming from.
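For the "check all your table counts" step, a sketch using standard catalog views makes the bloated tables jump out immediately:

```sql
-- Approximate row counts per table, largest first. Helps spot event,
-- audit, PA/FIM, or RSD tables that the purge tasks are missing.
SELECT t.name        AS table_name,
       SUM(p.rows)   AS row_count
FROM sys.tables AS t
JOIN sys.partitions AS p
  ON p.object_id = t.object_id
 AND p.index_id IN (0, 1)   -- heap (0) or clustered index (1) only
GROUP BY t.name
ORDER BY row_count DESC;
```

If a handful of tables dwarf everything else, that is usually where the slow queries, the locks, and the purge-task gaps all meet.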