I was just searching on this same issue. I am seeing exactly the same thing you are, and I see no one was able to offer any input.
I am curious, have you opened a support case yet? Wondering whether it is worth my time or not.
Try exempting VSE from scanning its own folders, both those in Program Files and those under the profile/all-users data folders.
I am wondering if there has been any further resolution on this particular issue, as the problem is McAfee scanning its own files. I have had a couple of workstations start to have this happen daily, with the following error:
A thread in process C:\Program Files\McAfee\VirusScan Enterprise\Mcshield.exe took longer than 90000 ms to complete a request.
The process will be terminated. Thread id : 5112 (0x13f8)
Thread address : 0x7C90E514
Thread message :
Build VSCORE.220.127.116.115 / 5400.1158
Object being scanned = \Device\HarddiskVolume1\Documents and Settings\All Users\Application Data\McAfee\Common Framework\AgentEvents\2009112012004168576080000177C.txml
by C:\Program Files\McAfee\Common Framework\FrameworkService.exe
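For anyone triaging a lot of these events, the text is regular enough to parse mechanically. A minimal Python sketch that pulls the timeout, the scanned object, and the requesting process out of the event body; the field labels are taken from the sample above, not from any official McAfee log format specification:

```python
import re

# Pull the key fields out of a McShield 5051-style event body.
# Labels below match the sample text in this thread; this is not
# an official McAfee log format specification.
def parse_5051(text):
    fields = {}
    m = re.search(r"took longer than (\d+) ms", text)
    if m:
        fields["timeout_ms"] = int(m.group(1))
    m = re.search(r"Object being scanned = (.+)", text)
    if m:
        fields["object"] = m.group(1).strip()
    m = re.search(r"^by (.+)$", text, re.MULTILINE)
    if m:
        fields["process"] = m.group(1).strip()
    return fields

sample = r"""A thread in process C:\Program Files\McAfee\VirusScan Enterprise\Mcshield.exe took longer than 90000 ms to complete a request.
Object being scanned = \Device\HarddiskVolume1\Documents and Settings\All Users\Application Data\McAfee\Common Framework\AgentEvents\2009112012004168576080000177C.txml
by C:\Program Files\McAfee\Common Framework\FrameworkService.exe"""

info = parse_5051(sample)
print(info)
```

Running this over exported Application-log text makes it easy to see whether it is always the same object (here, a .txml agent event file) and the same requesting process.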
Actually, scanning our own files is expected, and even beneficial, in order to prevent an infection. The problem here is more about why we are having trouble scanning such files.
A .txml file is tiny and should take only a fraction of a second to scan. However, in this case it exceeded our scan timeout, causing the on-access scanner (OAS) to be stopped and restarted.
This is usually caused by third-party driver conflicts. Since that can be many things, opening a ticket to gather process dumps, or at least more detailed diagnostic information, would be a good idea.
You could, prior to opening a ticket, attempt to remove any recently added applications, in an attempt to find the application that is fighting us for I/O.
If you are able to capture a ProcMon log during the issue and post it here, we could give some more specific advice. This is usually not a trivial task, though, so working with McAfee Support will ensure a timely resolution.
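If you do capture a ProcMon trace, one way to narrow it down is to export the capture to CSV and filter for slow operations touching the AgentEvents folder. A minimal sketch, assuming the export includes ProcMon's optional "Duration" column (verify the column names against your own export):

```python
import csv, io

# Filter a ProcMon CSV export for slow file operations on the
# McAfee AgentEvents folder. Assumes the optional "Duration"
# column was enabled before exporting; thresholds are arbitrary.
def slow_agentevents_ops(csv_text, min_seconds=1.0):
    hits = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if "AgentEvents" not in row.get("Path", ""):
            continue
        try:
            duration = float(row.get("Duration", "0"))
        except ValueError:
            continue
        if duration >= min_seconds:
            hits.append((row["Process Name"], row["Operation"],
                         row["Path"], duration))
    return hits

# Synthetic two-row example in the same shape as a ProcMon export.
sample = (
    '"Time of Day","Process Name","PID","Operation","Path","Result","Duration"\n'
    '"1:00:01","Mcshield.exe","5112","ReadFile",'
    '"C:\\Documents and Settings\\All Users\\Application Data\\McAfee'
    '\\Common Framework\\AgentEvents\\evt.txml","SUCCESS","92.5"\n'
    '"1:00:02","notepad.exe","100","ReadFile","C:\\temp\\a.txt","SUCCESS","0.01"\n'
)
for proc, op, path, dur in slow_agentevents_ops(sample):
    print(proc, op, dur)
```

Anything else (a backup agent, another filter driver) showing long operations on the same paths at the same time is a good candidate for the I/O conflict described above.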
Had same issues here...
Resolved them though....
Basically, we had global updating enabled with a 20-minute interval: a DAT update comes in, and all servers update within 20 minutes. Our VMware ESX servers store their data on an n-series SAN. Because all the virtual servers were running off the same storage, there was heavy I/O during a DAT update, even though the updates were staggered by 20 minutes. When mcshield.exe would start after a DAT update (even with "processes on enable" disabled), there would be a spike in swap file activity. With all our virtual servers trying to do DAT updates within minutes of each other, the n-series would basically slow to a crawl.
The error we'd see in the Windows event logs was not mcshield causing the problem, but mcshield timing out because the server had stalled. This feature of mcshield is basically a self-protection mechanism: if mcshield stops responding, it kills itself and restarts.
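The shared-storage point generalizes: if every guest on the same SAN pulls the same DAT within a short window, the I/O stacks up. One mitigation, beyond turning off global updating, is to spread client update tasks across a wider window. A toy sketch of stable per-host offsets, purely illustrative (in ePO you would express this as separately scheduled tasks per group, not in code):

```python
import hashlib

# Derive a stable, roughly uniform update offset (in minutes) from
# the host name, so each host lands at a different point in the
# window without needing central coordination. Illustrative only.
def update_offset_minutes(hostname, window_minutes=240):
    digest = hashlib.sha256(hostname.encode()).digest()
    return int.from_bytes(digest[:4], "big") % window_minutes

for host in ["vm-fileserver", "vm-sql01", "vm-ts01"]:
    print(host, update_offset_minutes(host))
```

The point is simply that a 20-minute stagger across dozens of guests on one array still concentrates the I/O; widening the window spreads it out.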
We have experienced the same issue and have opened a ticket. It has been noted on 10% of our workstations.
We are running AV/AS 8.7, engine 5400, HIPS, and various agent versions. This was first noticed before upgrading to Patch 2 or engine 5400.
We see these crashes on a roughly 24-hour cycle, even with DAT updates disabled, and they happen even when no user is on the workstation.
We only noticed the AV crashes while investigating why workstations would hang for upwards of 8 minutes and then come out of the coma.
We have excluded scanning of Java, we have excluded the McAfee folders, and we have disabled On-Access Scanning, and the crashing still happens.
We have uploaded the Process dumps and are working with support on a resolution.
I will update as more information becomes available. We have been troubleshooting internally for a long time and with McAfee for a month.
I have this same problem after updating to 8.7 Patch 2, but only on my Windows Terminal Servers, and it's happening quite frequently.
5051 error, then mcshield dumps and restarts several times. On my 2003 TS boxes, no one notices. However, on my 2000 TS box, it locks up solid and requires a hard reboot.
Opened a ticket yesterday on the issue due to the severity of the problem and was given a link to KB52441. The only practical solution so far has been to increase the timeout threshold, incrementing it until the problem stops.
I suggested that this problem only started after the upgrade to Patch 2 and was summarily blown off. We also ran into several Windows 2000 CA ARCserve backups stalling after upgrading to Patch 2 and had to exclude files opened for backup from on-access scanning for the first time. They blew off the suggestion that this started with Patch 2 on that call, too...
Same issue here with a couple of machines, also since we applied Patch 2 for VirusScan Enterprise 8.7i. We are trying to see if there's a pattern, but no luck so far.
It always happens when scanning .xml or .txml files in the Agent folder, and it is always once a day:
A thread in process C:\Archivos de programa\McAfee\VirusScan Enterprise\Mcshield.exe took longer than 90000 ms to complete a request.
The process will be terminated. Thread id : 5040 (0x13b0)
Thread address : 0x7C91E514
Thread message :
Build VSCORE.18.104.22.1685 / 5400.1158
Object being scanned = \Device\HarddiskVolume1\Documents and Settings\All Users\Datos de programa\McAfee\Common Framework\AgentEvents\2009120410061176905410000131C.txml
by C:\Archivos de programa\Network Associates\Common Framework\FrameworkService.exe
McAfee had us turn off Global Updating in ePO, under Configuration, Server Settings. We are updating through Automation, Server Tasks.
In addition, they had us change our AV policies so that separate On-Access Scan Low-Risk/Default/High-Risk process policies were being used; we had been using one policy for all.
Specifically, naprdmgr.exe, frameworkservice.exe, and mcscript_inuse.exe were added to the low-risk processes.
Our results have been mixed. 12/7/09 will be the first lengthy test of the changes. I will follow up with additional details.
I will say that the crashes recorded in the application logs are gone, but concurrently with the application crash the affected machines would hang for several minutes. The hang is our larger concern.