I wonder about the answer to this question too
I see this regularly as well. Had an SR open on it, never resolved.
We had a different but related situation with our combo VM: it was falling under the minimum disk space daily and deleting partitions, leaving us with rarely more than 6 days of normalized data in active partitions on the box itself. This was fixed the other day via an SR through our reseller, and it has now reached 9 days in ESM with 210GB free (43M records).
It turns out we still had packet data going back to last October; the max records per partition setting was causing it to retain far more than it was supposed to. If I remember right, this was a leftover from a bug fixed in 9.4.2 (which we are running).
Try this (to also check the packet partition info, substitute packet for alert). It outputs more than shown here, but note the totals and max limits:
NitroTID -d '/usr/local/ess/data/ngcp.dfl|::1|1111' -t alert -4
Nitro Table Information Display (NitroTID)
DFL=/usr/local/ess/data/ngcp.dfl|::1|1111 TABLE=alert PARTITIONS
Retrieving information. Please wait...
alert (table IS open)
Table Version - 193956454654519
Partition Type - Time based partition
Total Partitions - 3
Total Active - 3
Total Inactive - 0
Partitioning Field - LastTime
Partitioning Time Unit - 1 day(s)
Min Records / Partition - 25,000,000
Max Records / Partition - 25,000,000
Allowed Attached - 101,000,000 record(s) OR 5 partition(s)
Max Before Deletion - 101,000,000 record(s) OR 5 partition(s)
Max Empty Gap - 30 partition unit(s)
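The "Max Before Deletion - 101,000,000 record(s) OR 5 partition(s)" line above is the key limit: the oldest partitions should be dropped once either cap is exceeded. As a rough illustration of that rule (a toy sketch only, not McAfee's actual code; the function name prune_partitions and the list-based model are my own assumptions):

```python
from collections import deque

def prune_partitions(record_counts, max_records, max_partitions):
    """Toy model of a 'N records OR M partitions' retention limit.
    record_counts: per-partition record totals, oldest first.
    Returns the partitions that survive pruning."""
    active = deque(record_counts)
    # Drop the oldest partition while EITHER cap is exceeded.
    while active and (len(active) > max_partitions
                      or sum(active) > max_records):
        active.popleft()
    return list(active)

# Six partitions of 30 records each against caps of 101 records / 5 partitions:
# the three oldest are dropped before both limits are satisfied.
print(prune_partitions([30, 30, 30, 30, 30, 30], 101, 5))  # [30, 30, 30]
```

The bug described above would correspond to the record cap not being enforced, so partitions accumulated well past the intended retention window.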
I was told by Support that it is an informational message letting you know it's dropping the packet data off the receivers. If you have an ELM in your environment that is keeping all the raw logs -- which most clients do -- it's really of no consequence, since you still have your full packet data stored there. You can easily click the "ELM retrieval" button while viewing a normalized event if you want more details.
I was given the same information by Support.
What I would like to know now is how to suppress that event so it won't change my "flags" to red!
I initially dropped the severity to "1" on this rule, and eventually just disabled it, but we continue to get those "critical" alerts.
Event partition detach
I realize this thread is pretty old, but I just tried to run the command you wrote out, and I got this error:
ERROR: Could not open the .cfg file handle (error 105)
Do I need to stop the ESM before running this?
If you have any thoughts, I'd be grateful to hear them.