7 Replies Latest reply on Nov 18, 2011 8:02 AM by JoeBidgood

    Super Agent Sizing

      Dear users,

       

      I am new to this community; I have read some of your threads, but this is the first one I have started myself.

       

      I have a problem with the sizing of Super Agents inside our WAN network with regard to bandwidth consumption. I have two official references here from McAfee:

      this...

      https://community.mcafee.com/community/business/epo/blog

      and the

      "Best Practices Guide for ePO 4.5"

       

      Both sources state that a normal daily DAT distribution to a host takes about 200 kB, while the DAT distribution to a Super Agent takes about 70 MB. The Best Practices Guide gives a short explanation of why it is that large in chapter 3, subchapter "Calculating bandwidth of repository replication":

      "If you are only replicating DAT files, the bandwidth use will be approximately 70Mb of replication per"

      day. Agents don't use all the DAT files that are copied to the repository, but there are 35 incremental

      DAT files that must be available to all agents in case they are behind on DATs.

       

      But if my math skills are still good enough, then 200 kB x 35 (incremental updates) is only 7 MB, not 70 MB.

      What am I missing here?

       

      This is essential for us to properly size and place the Super Agents.

       

      Best regards,

      lohr

        • 1. Re: Super Agent Sizing
          JoeBidgood

           Actually, it's worse than that - the numbers you quote are a bit out of date.

           

           A full DAT set consists of 35 incremental DAT files, plus the full DAT zip file, which is required by machines that are more than 35 DATs out of date. As of yesterday this works out to about 131MB.

           Assuming we're doing incremental replication, the amount of data that needs to be replicated each day for the DAT update is that day's incremental .GEM file plus that day's full DAT zip: yesterday this would have been about 122MB.
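
           To put that arithmetic in one place, here is a minimal sketch in Python (purely illustrative - the sizes are the rough figures above and change daily, and none of these names come from ePO itself):

           # Rough DAT replication sizing, using the approximate figures quoted above.
           # All sizes are in MB and drift from day to day as DAT content grows.
           INCREMENTAL_GEM_MB = 0.2    # ~200 kB for one daily incremental .GEM file
           FULL_DAT_ZIP_MB = 121.8     # approximate size of the full DAT zip
           INCREMENTALS_KEPT = 35      # incrementals kept in a repository

           # Size of a complete DAT set held in a repository (older incrementals are
           # often a bit larger than the newest one, hence the ~131 MB seen in practice)
           full_dat_set_mb = INCREMENTALS_KEPT * INCREMENTAL_GEM_MB + FULL_DAT_ZIP_MB

           # Daily incremental replication: one new .GEM plus the new full DAT zip
           daily_replication_mb = INCREMENTAL_GEM_MB + FULL_DAT_ZIP_MB

           print(f"Full DAT set in the repo: ~{full_dat_set_mb:.0f} MB")
           print(f"Daily replication load:   ~{daily_replication_mb:.0f} MB")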

           

           The choice of where to put distributed repositories depends to a great extent on how many clients are on a site, and the type of usage of those machines - how "up-to-date" they are generally. This is probably best demonstrated by a number of examples - this might get a bit complex, so bear with me. Note: at this point, I'm talking only about "normal" distributed repositories that you replicate to each day.

           

           Firstly, imagine we have a remote office with fifty machines in it. These machines are left on all the time, run regular updates without issue, and so are usually no more than a day out of date, maybe two. The important point here is that none of these machines are ever going to ask for the full DAT zip, or certainly not on a regular basis. There is no point in putting a distributed repository on this site, as the daily requirement is 50 machines x 200kb = 10MB, which is a lot less than the 122MB daily replication load: the best bet would be to let them update over the WAN or directly from the internet.
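
           In code form, the same comparison (again just a sketch using the rough numbers from this thread):

           # Remote office: 50 up-to-date machines, each pulls one ~200 kB incremental per day
           machines = 50
           incremental_mb = 0.2
           daily_pull_without_repo_mb = machines * incremental_mb    # 50 x 0.2 = 10 MB over the WAN
           daily_replication_mb = 122                                # ~122 MB to keep a classic repo in sync
           print(daily_pull_without_repo_mb < daily_replication_mb)  # True -> skip the local repo here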

           

           Now imagine a site that contains your IT department's build lab. It only has about 20 machines in it, but at least three new machines are brought online each day. This site needs a repo, since the new machines are going to need the full DAT zip each day, so it's more economical to copy the full zip locally once - i.e. replicate to that site - than to let three machines pull it over the WAN.
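
           And the same sort of sketch for the build lab ("fresh installs" here just means machines that need the full DAT zip; the sizes are the same rough figures as before):

           # Build lab: ~3 fresh VSE installs per day, each needing the full ~122 MB DAT zip
           fresh_installs_per_day = 3
           full_dat_zip_mb = 122
           wan_without_repo_mb = fresh_installs_per_day * full_dat_zip_mb  # ~366 MB pulled over the WAN
           wan_with_repo_mb = full_dat_zip_mb                              # replicate the full set once instead
           print(wan_with_repo_mb < wan_without_repo_mb)                   # True -> a local repo pays off here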

           

           That deals with "classic" distributed repositories - i.e. they contain a full copy of the relevant product files, and are kept updated by replication tasks. However, if you are currently in the planning phase of this, and minimising bandwidth usage is your priority, then I strongly recommend you consider ePO 4.6 and MA 4.6 with Lazy Caching enabled. Please see the MA 4.6 Product Guide for a description of Lazy Caching, but in a nutshell, MA 4.6 superagent repositories with Lazy Caching enabled do not require a regular replication task; instead they retrieve from the master repo only the files their clients need, and then keep a local copy. If we return to the first example above, where we had 50 machines on a site - if we put a Lazy Cache superagent repo on that site, then that site would only need to pull one incremental DAT a day.

           

          HTH -

           

          Joe

          • 2. Re: Super Agent Sizing

            Hi Joe,

             

            thanks for your really good answer, well explained for me.

             

             So, as a conclusion for me, that would mean: when I know that a new agent is going to be deployed on a site every day, for example, then a repository would more or less already be "needed". Is that correct?

            Because I understood from your answer that when a new agent is deployed the ~120 MB file needs to be delivered.

             

            Best regards,

            lohr

            • 3. Re: Super Agent Sizing
              JoeBidgood

              Hi...

               

              Not when a new agent is deployed, no - only when there is a new installation of VSE.  The agent itself doesn't need any DATs, but VSE does, and a new install of VSE almost always needs the full DAT package.

               

              HTH -

               

              Joe

              • 4. Re: Super Agent Sizing

                Hi Joe,

                 

                 Oh yes, sorry, that is what I meant... because nearly every agent in our company has VSE installed with it.

                 

                Best regards,

                lohr

                • 5. Re: Super Agent Sizing
                  JoeBidgood

                  No problem   

                  Hopefully this will help with your design.

                   

                  Regards -

                   

                  Joe

                  • 6. Re: Super Agent Sizing

                    Hi Joe,

                     

                    one last question.

                     I had a closer look at Lazy Caching in ePO 4.6. I do not understand how it is going to help me with the daily DAT updates. I understood that it is good for product updates, but for the daily DATs for VSE it is not very useful, is it?

                     

                     So the sizing problem would still need to be calculated the same way, based on your information above?

                     

                    Best regards,

                    lohr

                    • 7. Re: Super Agent Sizing
                      JoeBidgood

                       It's actually the other way round - Lazy Caching's greatest benefit is for files that change each day, like DAT updates. For files that don't change, like product installers, there is less benefit.

                       

                      Imagine a site with 100 machines, connected to the main ePO server over a WAN link. All the machines are up to date. The next day, all 100 machines will need one .GEM file in order to update.

                       

                       If there was a "normal" distributed repository on the site, then the traffic over the WAN link would be 122MB: the full DAT zip and one incremental. Almost all of this traffic - everything except the ~200kb incremental - would be completely wasted, as none of the client machines on that site need the full DAT.

                       

                      If there was no repository on the site, then each of the 100 machines would need to pull one incremental file across the WAN: so total WAN traffic would be 100 x 200kb = 20MB.

                       

                      If there was a 4.6 Lazy Caching superagent repository on the site, the total WAN traffic would be 200kb. The first machine to update would ask the SA for the incremental file, causing it to be fetched from the master: the next 99 machines would get the same file from the SA itself.
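
                       As a quick sketch of those three options in code (same rough sizes as earlier in the thread; purely illustrative):

                       # 100 up-to-date machines on a WAN site, one new ~200 kB incremental per day
                       machines = 100
                       incremental_mb = 0.2
                       full_dat_zip_mb = 121.8

                       classic_repo_wan_mb = full_dat_zip_mb + incremental_mb  # ~122 MB, almost all of it unused
                       no_repo_wan_mb = machines * incremental_mb              # 100 x 0.2 = 20 MB
                       lazy_cache_wan_mb = incremental_mb                      # fetched from the master once, served locally 99 more times

                       print(f"Classic repo:  ~{classic_repo_wan_mb:.0f} MB")
                       print(f"No repo:        {no_repo_wan_mb:.0f} MB")
                       print(f"Lazy caching:   {lazy_cache_wan_mb:.1f} MB")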

                       

                       Hope that makes things a bit clearer.

                       

                      Regards -

                       

                      Joe