The biggest issue I have with the 3-hour update windows is that if a staff member switches on their computer at 9:02am, they won't get an update until the 12pm update window. I'm trying to resolve this without having to use a "missed update" task.
I'm also not sure whether the "missed update" time gets counted from the original starting time or from the interval that got computed using the randomized time. If the latter, you can't be sure you've missed an update at any given time...
As for your original question: we run two types of updates internally and no "emergency" ones (we reserve that type for an actual emergency). The first task updates the DAT, engine, BOF and the ePO agent key updater. A separate task handles the patches, since we have to follow a gradual patching pattern, starting with servers and then workstations. This patch task is enabled only when it comes time to patch. We also make provisions to avoid reinstalling VSE at a lower patch level.
The update task has a maximum running time of 2 hours (we have 20+ internal repositories), a randomization of 15 minutes, and is set to run if missed after a 5-minute delay. It is scheduled to run every hour, daily from 6am to 10pm.
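As a rough illustration of how these settings interact (this is a simplified model, not McAfee's actual scheduling internals), the effective run time of a task with randomization and a run-if-missed delay can be sketched like this:

```python
import random

def next_run_time(scheduled_minute, randomization_window, boot_minute, missed_delay):
    """Rough model of when an update task actually fires.

    scheduled_minute: nominal start time (minutes since midnight)
    randomization_window: task randomization in minutes
    boot_minute: when the machine came online
    missed_delay: the "run missed task" delay in minutes
    All of this is illustrative, not ePO's real behaviour.
    """
    # The task's real start is the nominal time plus a random offset.
    actual = scheduled_minute + random.uniform(0, randomization_window)
    if boot_minute <= actual:
        return actual  # machine was already on; runs at the randomized time
    # Machine booted after the (randomized) slot: treat as missed,
    # run shortly after boot instead.
    return boot_minute + missed_delay

# A machine booting at 9:02 with an hourly 9:00 task, 15 min randomization
# and a 5 min missed-task delay will update within a few minutes either way:
t = next_run_time(9 * 60, 15, 9 * 60 + 2, 5)
```

Under this model the machine never waits for the next full window, which is the behaviour the hourly-schedule-plus-missed-delay setup is relying on.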
We update the engine, DAT and BOC,
randomised over 40 minutes, with "run missed task" set to a 10-minute delay,
repeating every 4 hours starting at 7am.
Patches are on a separate task.
The "run missed task" checkbox allows our machines to update soon after they are switched on. That would solve your problem with the window, and by using the delay and randomisation you would avoid mass hits on the repositories as well as any performance issues on the machine at startup.
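A quick back-of-the-envelope simulation (the boot-time distribution and machine count are hypothetical, not measured data) shows how the delay and randomisation spread out the repository hits compared to everyone updating straight after boot:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical: 500 machines boot around 9:00am (minute 540), std dev ~7 min.
boots = [random.gauss(540, 7) for _ in range(500)]

def update_minute(boot, delay=10, window=40):
    # Each machine waits the missed-task delay, then a random share of the
    # randomisation window, before hitting a repository.
    return boot + delay + random.uniform(0, window)

# Per-minute repository hits with randomisation...
hits = Counter(int(update_minute(b)) for b in boots)
peak = max(hits.values())

# ...versus everyone updating exactly <delay> minutes after boot.
no_rand = Counter(int(b + 10) for b in boots)
peak_no_rand = max(no_rand.values())
```

With the 40-minute window the per-minute peak drops substantially, which is why the randomisation matters more for small links than the total traffic does.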
Why don't you want to enable this?
I'm trying to avoid a situation where users switch on their computers after a weekend away and, depending on the "Run Missed Task" delay setting, flood the repositories over a short window. Assuming people arrive at the office and boot their computers at fairly similar times, this creates a peak at certain times. It's not so much the repositories I'm worried about as flooding some of our smaller links. Setting up SuperAgents at those sites isn't really something we want to do, though we are considering it.
My company's experience with VSE is that false positives have tended to be more of a threat than detection lag. The events of a couple of months ago, where a lot of early-updating customers got quite burned by a bad DAT update, only strengthened this resolve. Granted, McAfee now probably has the most scrutiny on QA in the industry given the black eye suffered there, but we still run a daily update cycle for desktops: we apply the "latest as of last night while we slept" DAT to a fraction of the environment first thing in the morning; then, assuming that doesn't brick any of those users, and after checking for any screams of panic in this forum, we manually promote that DAT to current and let the rest of the corporate desktops get it midday (after Asia-Pac has worked a full day with it). Windows servers, which generally don't have those crazy users with their pointy-clicky web browsers on them, get updates a little later still.
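The staged promotion described above can be sketched as a simple gate (the function and parameter names here are illustrative of the manual process, not an ePO API):

```python
def promote_dat(pilot_results, forum_reports):
    """Decide whether last night's DAT moves from the pilot group to Current.

    pilot_results: list of (machine, ok) tuples from the morning pilot group
    forum_reports: count of false-positive panics seen in the community
    Purely a sketch of the human decision, not an automated call.
    """
    all_ok = all(ok for _, ok in pilot_results)
    return all_ok and forum_reports == 0

# If the pilot fraction survived the morning and the forum is quiet,
# the DAT is promoted and the remaining desktops pick it up midday.
```

The point of the gate is that a bad DAT only ever reaches the pilot fraction, never the whole fleet at once.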
So, in short, I'm not sure I'd worry that much about missed updates given the windows you're talking about.
If I were interested in being on the leading edge of detection on the desktop, I'd fire up Artemis (aka heuristic network checks), which shrinks the discovery-to-detection window considerably. But there again, it's a tradeoff between how much you fear the impact of a false positive detection and how much a new signature for a fast-moving threat would actually help you.
If you have a web proxy gateway with in-line AV scanning capability, and AV on your inbound mail feed, I feel THOSE are good places to run the latest and greatest signatures and be very aggressive with updates, because the impact of a false positive is a lot lower there versus, say, one that deletes a DLL and renders a Windows machine unbootable.