6 Replies Latest reply on Feb 11, 2011 8:44 AM by sroering

    Secure Web Reporter. Database storage overhead?



      We currently have version 5.0.1 running on a MS-SQL 2005 backend. We currently have more than 1300 GB of storage for approximately 18 months of logs.


      So now my questions are:


      - Is there an official estimate of the log storage overhead with 5.0.1?

      - Is the overhead the same for 5.1.1?

      - Is there a way to reduce database storage without compromising performance too much?



        • 1. Re: Secure Web Reporter. Database storage overhead?



          Are you sure that you are using 5.0.1 and not 5.1.0?


          There are a couple of potential issues if you are truly using 5.0.1.


          1) Upgrading to Web Reporter 5.1.x or newer requires you to be on version 5.0.2.x, which means you would need to upgrade from 5.0.1 -> 5.0.2.x -> (latest version).

          2) The upgrade to 5.0.2 will require a very long DB schema update.  The schema update process has since been completely overhauled for performance, but getting to 5.0.2.x still uses the old process.

          3) A 1.3 TB database probably won't finish the migration to 5.0.2.x.  If it did, you would be looking at approximately 1 hour per 10 GB, and you would need about 1.5x the size of your DB in free space for the migration.
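          To put those rule-of-thumb numbers in perspective, here is a quick back-of-the-envelope sketch (the 1 hour per 10 GB and 1.5x figures are the rough estimates above, not guarantees):

```python
# Rough cost estimate for migrating a 1.3 TB database to 5.0.2.x,
# using the rule-of-thumb figures above: ~1 hour per 10 GB of data
# and ~1.5x the DB size in free disk space during the migration.

db_size_gb = 1300  # current database size in GB

migration_hours = db_size_gb / 10      # ~1 hour per 10 GB
staging_space_gb = db_size_gb * 1.5    # working space needed on disk

print(f"Estimated migration time: {migration_hours:.0f} hours "
      f"(~{migration_hours / 24:.1f} days)")
print(f"Free disk space needed:   {staging_space_gb:.0f} GB")
```

          At roughly 130 hours of downtime plus nearly 2 TB of working space, you can see why the in-place migration isn't realistic at this size.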


          So if you are correct about your version, the upgrade on a DB that size probably isn't realistic.  Once you are on the latest version, future DB schema updates shouldn't be such an issue.


          My recommendation would be to keep the old reporting server and DB in their current deployment for as long as necessary, and start a new DB with a fresh Web Reporter 5.1.1 install.  I don't know how committed you are to migrating your configuration, but starting fresh there is also best.  I'm sure you have more questions and concerns; I'd be happy to explain in more detail if you would like.

          • 2. Re: Secure Web Reporter. Database storage overhead?

            I looked at the version and I was wrong; we are currently running 5.02.03 Build 885.


            We will probably go for a fresh install of the latest SWR version. Considering our current amount of data (1.3 TB), what would be the best method of importing it into the latest version? We have the raw log files ready to be imported; we only need to be sure that this is feasible and that, once the latest version is installed on a fresh database, we will be able to import the files without much trouble.


            I would appreciate any comments or suggestions on how to approach the clean install given the information I already sent.



            • 3. Re: Secure Web Reporter. Database storage overhead?

              Really, I have more questions than answers at this point.


              1. How many months of data?
              2. How many log sources?
              3. How much log data per day (note if compressed or uncompressed) ?
              4. Which DB (Oracle, Microsoft SQL Server)?
              5. Does the DB share the same hardware as Web Reporter?
              6. What are the hardware specifications for the server(s)?
              7. Do you have page views enabled for log parsing options?


              There is really only one method for importing data.  Install and set your configuration.  Make sure the DB is tuned to grow files in reasonable chunks.  Make sure you have one log source per proxy/gateway.  Then import the data as you have disk space available to stage the processing queue.


              The next major update for Web Reporter (version 5.2.0) adds database partitioning support in the application.  You would probably want to use this; unfortunately, it's not expected to be ready until sometime during Q2 and would require a DB schema update.  If the current setup has been working for you, perhaps you might want to wait to import your legacy data until after upgrading to 5.2.

              • 4. Re: Secure Web Reporter. Database storage overhead?

                Hi sroering,


                Please find below my answers:


                1. 18 months of data.

                2. 6 log sources.

                3. Approximately 10-15 GB of uncompressed data per day.  That is, raw log files as they come out of the proxy servers.

                4. MS-SQL 2008

                5. SWR and SQL Server are on the same physical machine. It is a dedicated server, no other service is running.

                6. Quad-core 3 GHz Xeon (4 cores) with 8 GB of RAM.


                Unfortunately we cannot wait until Q2, so we will have to go with the latest version available when the update is required, which means in two weeks' time.


                So, given this information what is your recommendation?



                • 5. Re: Secure Web Reporter. Database storage overhead?

                  I would also need to know how much DB space is required to store 1 GB of raw uncompressed log data, so that I can correctly provision the space I require.

                  • 6. Re: Secure Web Reporter. Database storage overhead?

                    I'd recommend going with this as a new deployment unless you are able to commit a few days to performing the DB schema update.  I think everything will just be cleaner in the end.  I think you would benefit from having the servers on separate hardware, but if 5.0.2 is working in this configuration, 5.1 should be fine.


                    For Web Reporter

                    • Install 64-bit version.
                    • Under Administration -> Options -> Performance, you can increase the JVM memory to 2 GB.  I don't think you would need more than that, and going above it would just be robbing the DB of memory.
                    • Disable summary data roll-ups:  Administration -> Tools -> Database Maintenance.  If you're keeping detailed data, roll-ups only amount to single-digit percentage disk savings at the expensive cost of running them (time) and heavy index fragmentation.
                    • After you've imported a week of access logs into the DB, check the cache statistics.  If any individual cache is full, increase it to accommodate your needs.  The exception is the aggregate record cache, which is essentially your summary data: the aggregate cache size provides no benefit once it's large enough to hold a full day's worth of summary data.  (Administration -> Options -> Performance -> Advanced Options :: caching).  Do not switch the caching to "Load all entries"; I don't think you have sufficient RAM, and I don't believe you would benefit.


                    For the database:

                    • Set the recovery model to Simple
                    • Check your file auto-growth settings.  Anything reasonable is fine; the important thing is that you're not growing 1 MB per increment.  200 MB-10 GB, or 10%-15%, is probably reasonable.
                    • Check the indexes for fragmentation on a regular basis and reorganize/rebuild as necessary.
                    • Make sure table statistics are updated on a regular basis.  "Regular basis" means whenever you start noticing a slow-down in your report times.  I've seen a 20-minute statistics update change a 6-hour report time to 30 seconds.
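                    As a rough T-SQL sketch of the items above (the database name WebReporter, logical file name WebReporter_Data, and table dbo.SomeLogTable are placeholders for your actual names, and the fragmentation thresholds are a common starting point, not an official recommendation):

```sql
-- Set the recovery model to Simple.
ALTER DATABASE WebReporter SET RECOVERY SIMPLE;

-- Grow the data file in reasonable chunks rather than tiny increments
-- (logical file name is a placeholder).
ALTER DATABASE WebReporter
    MODIFY FILE (NAME = WebReporter_Data, FILEGROWTH = 512MB);

-- Check index fragmentation; reorganize at roughly 5-30%, rebuild above ~30%.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5;

-- Reorganize (light fragmentation) or rebuild (heavy) as necessary.
ALTER INDEX ALL ON dbo.SomeLogTable REORGANIZE;
-- ALTER INDEX ALL ON dbo.SomeLogTable REBUILD;

-- Refresh statistics when report times start to degrade.
UPDATE STATISTICS dbo.SomeLogTable;
```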


                    That should be a good start anyway.