Details of the implementation:
running since 2014,
with the common components plus
a small install of about 1000 clients (700 workstations).
Now the issue: within a few weeks the DB jumped to 100 GB!
The culprit is the scor_async_job table.
Any idea what to do to clean it? I can hardly believe this is a normal size...
Observations from endpoints are initially entered in this table. An internal ePO task runs every few minutes to process the observations, after which they are deleted from the table. When the scor_async_job table gets too large, the task does not have time to finish and never deletes the rows. You will also see very high traffic on the network between ePO and the SQL server while this is happening.
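To make the failure mode concrete, here is a toy backlog model (not ePO code; the arrival and processing numbers are made up for illustration). Each cycle, new observation rows are inserted and the cleanup task deletes at most a fixed number before its time slice ends. Once arrivals exceed that per-cycle capacity, the table grows every cycle and never drains, which is why the DB balloons:

```python
def simulate(arrivals_per_cycle, capacity_per_cycle, start_backlog, cycles):
    """Toy model of the scor_async_job backlog after each processing cycle."""
    backlog = start_backlog
    history = []
    for _ in range(cycles):
        backlog += arrivals_per_cycle                 # new observations inserted
        backlog -= min(backlog, capacity_per_cycle)   # task deletes what it can
        history.append(backlog)
    return history

# Healthy: the task keeps up and the table stays empty
print(simulate(1000, 1500, 0, 5))   # -> [0, 0, 0, 0, 0]
# Unhealthy: arrivals exceed capacity, backlog grows without bound
print(simulate(2000, 1500, 0, 5))   # -> [500, 1000, 1500, 2000, 2500]
```

This also shows why clearing the table once (resetting the backlog to zero) only helps if the arrival rate has dropped back below what the task can process per run.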
I've seen this after an agent handler crash that went unnoticed. The solution in my case was to delete all rows in the table to allow the task to get on top of things.
Create a support ticket and confirm the correct action with them; they will provide the SQL to run to clear the table.