We've had a very strange behavior out of MVM 7.0 this month, and I'm wondering if anyone else has seen it or maybe even has a solution...
We do monthly scanning, split into several jobs to divide the IP pool a bit and balance things across multiple engines. We're talking about roughly 20K hosts split over 6 jobs and 2 engines (FS850s).
For the past two months, the jobs have not been completing, and this month only 1 of the 6 jobs finished. Everything else has been stuck for several days at anywhere from 79% to 98% complete.
Looking at traffic out of the engines with Wireshark, they do appear to still be hitting devices, but we have had no progress on any of these scans for 3 business days now (they're running on an 8-5 business-hours schedule).
The only thing we can think of that has changed recently is the introduction of SEP12 into the environment. Other than that, we've not been able to find anything that accounts for this problem. These jobs had been running in this configuration for quite a long time without a hitch.
Has anyone else run into this behavior?
Without looking through the daily log files of the scan engines it is hard to tell what the holdup could be. If it appears packets are going out, I suspect memory allocation issues. If the scan engine does not have enough memory to launch a new batch, it will hold the next batch until enough memory frees up. You can search for 'waiting for resources' in your log files to confirm this. I suggest opening a Service Request to be sure; we may be able to pinpoint the holdup.
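A quick way to check for that message across the daily logs is a recursive grep. This is just a sketch: the log directory path is an assumption, so point LOG_DIR at wherever your scan engine actually writes its daily log files (the demo setup lines are only there so the command has something to match, and can be removed in practice):

```shell
# LOG_DIR is an assumption -- replace with your scan engine's log directory.
LOG_DIR="${LOG_DIR:-/tmp/mvm_logs}"

# Demo setup so the search below has something to match (remove in practice).
mkdir -p "$LOG_DIR"
echo "batch 12 waiting for resources (low memory)" > "$LOG_DIR/engine-daily.log"

# Case-insensitive (-i), recursive (-r) search; -l lists only matching file names.
grep -ril "waiting for resources" "$LOG_DIR"
```

If any file names come back, the engine has been parking batches waiting on memory, which would line up with scans that keep sending packets but never advance.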