Hmm... no one seems to know! Maybe someone smarter can chime in. LOL!
There's no hard limit as far as I know - but there are certainly practical limits. The main one of these is replication time: it will take a finite amount of time for one server to replicate to 500 repositories.
How many machines do you have per site? Ten repositories per site seems like a lot...
I also think it's possible to use fewer distributed repositories and add multiple SuperAgent repositories instead...
There is no published limit. I have run as many as 55.
There is a limit as to how many can be replicated to at the same time. I once saw something that indicated 16 times the number of processors. I have a single-processor server, and I have found that when I keep it under 16 in a single replication task, they all finish up at about the same time. More than that and some seem to start much later than the others. So I break up my replication into multiple tasks, each launching an hour or so apart.
If you just "lump" them all into one job, it simply multithreads up to the limit it can handle, and as soon as one repository finishes it starts on the next in the list, so this can be more efficient.
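To illustrate the batching approach described above, here is a minimal sketch. The 16-per-task figure is the rule of thumb mentioned, not a documented ePO limit, and the repository names are hypothetical:

```python
# Split a repository list into replication tasks of at most
# MAX_CONCURRENT repositories each, so all members of a task
# finish at roughly the same time (rule of thumb: 16 threads
# per processor on a single-CPU server).
MAX_CONCURRENT = 16

def batch_repositories(repos, batch_size=MAX_CONCURRENT):
    """Return a list of replication-task batches."""
    return [repos[i:i + batch_size] for i in range(0, len(repos), batch_size)]

# Hypothetical example: 55 distributed repositories.
repos = [f"Repo-{n:02d}" for n in range(1, 56)]
tasks = batch_repositories(repos)
print(len(tasks))      # 4 tasks
print(len(tasks[0]))   # 16 repositories in the first task
print(len(tasks[-1]))  # 7 in the last
```

Each batch would then be scheduled as its own replication task, launched an hour or so after the previous one.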
You are more likely to be limited by the practicalities of setting up and maintaining a large list of repositories as an administrator than any physical limitations on the ePO server.
500 sounds like an awful lot. At most you should consider one per network, and aim to reduce it to far fewer than that if at all possible.
The fewer you have, the easier they'll be to maintain and troubleshoot.
Thanks for all the help guys.
I actually have 1 Distributed Repository per site - 3 per Region. I "daisy chain" the replication in each Region so that each Repository performs replication several hours apart. My thinking is that if I were to set the replication 1 hr apart in each Region, I could be faced with a site unable to update for 3 hrs (based on 1 hr/replication/per Repository).
I believe that when replication occurs, the Repositories are flagged "off-line" - that is, unavailable for clients to pull updates from. By design, this prevents a client from pulling files from a Repository that is out-of-date.
My thought is that if I choose 1 Repository at each site and have it replicate immediately after the Master Pull (from the Source Repository), there will be at least 1 Repository per Region that is not "out-of-date" - based on catalogue version (sitestat and sitelist). This way the clients in each Region are guaranteed to have 1 Repository from which they can pull updates. If I were to replicate each Repository back-to-back in each Region, they would all be flagged "off-line" until the replication completes.
For example, in 1 Region:
The first Repository replicates at, say, 1am, the next Repository at 3am, and the final one at 5am. The problem I might be experiencing is that the 1am Repository will be up-to-date thanks to the immediate replication after the Master Pull, but the 3am and 5am Repositories will not be up-to-date until their replication times occur. It might leave me stuck with only 1 up-to-date Repository per Region from which clients can pull updates.
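The timing concern above can be sketched as follows. The 1am/3am/5am schedule is the hypothetical one from the post, and the 1-hour replication duration is an assumption (actual duration depends on link speed and package size):

```python
# Model the staggered-replication window: a repository is
# "current" only once its own replication has completed.
REPLICATION_HOURS = 1  # assumed duration; varies with link speed

# Start hours after the midnight Master Pull (hypothetical schedule).
schedule = {"Repo-1am": 1, "Repo-3am": 3, "Repo-5am": 5}

def current_repos(at_hour, schedule, duration=REPLICATION_HOURS):
    """Repositories that have finished replicating by `at_hour`."""
    return [r for r, start in schedule.items() if start + duration <= at_hour]

print(current_repos(2, schedule))  # ['Repo-1am'] - only one up-to-date repo
print(current_repos(4, schedule))  # ['Repo-1am', 'Repo-3am']
```

This shows the trade-off: between roughly 2am and 4am, clients in the Region have only one current Repository to pull from, exactly as described.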
Am I correct in this analysis, or am I way off? Can anyone tell me how they have their replication tasks running?
Thanks in advance.
Please bear in mind that, barring an emergency posting, we update the DATs on the McAfee source sites every 24 hours.
It's true that during replication a repository is flagged as unavailable. In that case an agent will try the next one in its sitelist. If all fail and there is a fallback site defined it will try that.
It's also true that with multiple repositories per site, there will potentially be differences between repositories until they have all replicated.