Yes, I have seen this done many times. Splitting 10k nodes across 3 servers is very reasonable. But why is your randomization only 90 minutes? If you can increase it to three hours, it will significantly reduce the stress on your repos.
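Rough numbers behind that suggestion, assuming the node and server counts mentioned above and a perfectly even spread of clients across the randomization window (an idealized sketch, not a measurement):

```python
# Average update sessions per minute hitting each repository, assuming
# clients spread evenly across the randomization window (idealized).
def sessions_per_min(nodes: int, repos: int, window_min: int) -> float:
    return nodes / repos / window_min

# 10,000 nodes across 3 repositories:
rate_90 = sessions_per_min(10_000, 3, 90)    # ~37 sessions/min per repo
rate_180 = sessions_per_min(10_000, 3, 180)  # ~18.5 sessions/min per repo
```

Doubling the window halves the average arrival rate per repository, which is where the reduced stress comes from.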
I have to consider fast updating during an outbreak situation, as well as false-positive issues, DAT retractions, and so on.
But yeah, I agree that in normal times three hours is probably a lot safer.
I'd like to ask you something: how do you want to use these Linux servers, in a load-balanced setup or individually (as three repositories)?
To my knowledge, an update process means a number of FTP connections per client per update task.
I have taken a look in McScript.log and it seems that the updater does not close the FTP connections within each update session; this might mean some FTP threads are kept open until the FTP server disconnects them due to inactivity. I may be wrong, but in earlier VirusScan versions this was certainly the case.
So it could consume more resources than you/we initially thought.
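If connections really do linger until the server's idle timeout, a quick Little's-law estimate shows how many sockets a repo could hold open at steady state. The figures below are assumptions for illustration (the ~37 sessions/min comes from spreading 10k nodes over 3 repos and a 90-minute window; the 5-minute idle timeout is a guess, check your FTP server's config):

```python
# Steady-state concurrent connections ~= arrival rate * connection lifetime
# (Little's law). Assumed: ~37 new sessions/min per repo, connections held
# open for a 5-minute server idle timeout before being dropped.
def concurrent_connections(sessions_per_min: float, lifetime_min: float) -> float:
    return sessions_per_min * lifetime_min

held = concurrent_connections(37.0, 5.0)  # ~185 sockets held open per repo
```

That is well within what an FTP server can handle, but worth checking against any max-connections limit configured on the servers.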
It is planned to be in a load-balancing configuration, so hopefully clients will correctly bounce between the servers.
Still looking for any suggested testing tools; there is plenty of FTP stress-testing software on the net, but I only need a simple app for this job.
... any suggestions from people on FTP stress testing tools that can be used?
I want to finalise my testing and get these into full production, but I need to provide data showing they're up to the job first!
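For a simple job like this, a short script may do instead of a dedicated tool. Below is a minimal sketch using Python's standard-library ftplib: it spins up N threads, each logging in and downloading one file, then reports timings. The host, file name, and anonymous login are placeholder assumptions you would replace with your repo's actual values.

```python
# Minimal FTP stress sketch: N parallel simulated clients each download one
# file from the repository; min/max/mean download times are reported.
import concurrent.futures
import io
import time
from ftplib import FTP

HOST = "repo1.example.com"   # hypothetical repository host -- replace
REMOTE_FILE = "catalog.z"    # hypothetical file each client fetches -- replace
CLIENTS = 50                 # number of simulated concurrent clients

def one_client(_: int) -> float:
    """Connect, download REMOTE_FILE once, return elapsed seconds."""
    start = time.monotonic()
    with FTP(HOST, timeout=30) as ftp:
        ftp.login()  # anonymous; use ftp.login(user, passwd) otherwise
        buf = io.BytesIO()
        ftp.retrbinary(f"RETR {REMOTE_FILE}", buf.write)
    return time.monotonic() - start

def summarize(times: list[float]) -> tuple[float, float, float]:
    """Return (min, max, mean) download time in seconds."""
    return min(times), max(times), sum(times) / len(times)

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        times = list(pool.map(one_client, range(CLIENTS)))
    lo, hi, mean = summarize(times)
    print(f"{len(times)} downloads: min {lo:.2f}s, max {hi:.2f}s, mean {mean:.2f}s")
```

Ramp CLIENTS up gradually while watching server-side CPU, memory, and connection counts; that gives you the "up to the job" numbers without needing a full-blown load-testing suite.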