7 Replies Latest reply on Feb 19, 2010 9:04 AM by peter_eepc Branched to a new discussion.

    Dbcfg.ini File

      I have a database that contains 10,000+ users and 6,200+ machines, and growing. I'm creating a DBCFG.ini file to see if it will improve performance. Below is what I have created; please give me recommendations. The HashCount is mainly what I'm questioning whether I have correct. According to the documentation it is calculated by taking the square root of the number of users. The default is 16, but 100 is the square root of 10,000.
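The square-root rule of thumb from the post can be sketched as a tiny helper (this is just an illustration of the arithmetic, not anything from the product docs; the function name is made up):

```python
import math

def recommended_hash_count(user_count: int) -> int:
    """Rule of thumb described above: HashCount is roughly the square root
    of the number of users, never below the documented default of 16."""
    return max(16, math.isqrt(user_count))

print(recommended_hash_count(10_000))  # -> 100, matching the post's figure
```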




      Thank You!
        • 1. RE: Dbcfg.ini File
          Looks good. If you've not enabled the index before, it will make a BIG difference to the speed of object creation, login events, etc.

          You can see if it's working by looking in the object class directory (for example sbdata\00000001) for names.dat files.

          You'll need to restart your server and log in to SBAdmin or something for the new file to take effect. You should also consider implementing the "ToastCache" script as a scheduled job to refresh the index at a time you decide, rather than leaving it to refresh of its own volition.

          • 2. RE: Dbcfg.ini File
            Thank you for looking at this. Would you recommend backing up the database prior to implementing this?

            What is the toast cache script?
            • 3. RE: Dbcfg.ini File
              Backups are always a good idea, but the cache is not going to touch the DB itself; it's separate.

              ToastCache can be found on the tools CD. It's a script you can run as a scheduled job which rebuilds the cache at a given time.

              Without it, the cache rebuilds on the first access after it expires, NOT at the time it expires.
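The trade-off described above (lazy rebuild on first access vs. a scheduled refresh) is a generic caching pattern. A minimal sketch of the idea, not the actual ToastCache script, with all names invented for illustration:

```python
import time

class LazyCache:
    """Cache that expires after `ttl` seconds but is only rebuilt on the
    next access, so the first caller after expiry pays the rebuild cost."""

    def __init__(self, build, ttl):
        self._build = build        # callable that (re)builds the cached value
        self._ttl = ttl
        self._value = None
        self._built_at = None      # None means never built

    def get(self):
        now = time.monotonic()
        if self._built_at is None or now - self._built_at > self._ttl:
            self._value = self._build()   # lazy rebuild happens here
            self._built_at = now
        return self._value

    def refresh(self):
        """What a scheduled job does: rebuild proactively, off the hot path."""
        self._value = self._build()
        self._built_at = time.monotonic()
```

Scheduling `refresh()` at a quiet hour means no user request ever triggers the expensive rebuild.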

              • 4. RE: Dbcfg.ini File
                Hey Simon, at which point would you recommend indexing your database... meaning how many objects, etc. before it's worth it?
                • 5. RE: Dbcfg.ini File
                  I'd turn it on for anything. Windows is slow enough to start with...
                  • 6. RE: Dbcfg.ini File
                    Are there any filesystem or OS tweaks you would suggest? We did a few TCP tweaks and disabled the pagefile on the server, but are there any NTFS hacks to speed up access to our 500,000+ file SBDATA folder?

                    Or should we have set manual cluster sizes when creating the partition that SBDATA lives in? I also think there is a setting to have the OS not track last access time on files.
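The last-access setting mentioned above does exist: NTFS tracks last-access timestamps by default, and on Windows Server 2003 this can be turned off with the built-in fsutil tool (a sketch; run from an elevated prompt and reboot for it to take effect):

```shell
# Stop NTFS from updating the last-access timestamp on every file read,
# which saves write I/O on volumes with very many small files.
fsutil behavior set disablelastaccess 1

# Verify the current setting.
fsutil behavior query disablelastaccess
```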
                    • 7. Re: RE: Dbcfg.ini File

                      You might have to adjust MFT settings prior to creating the database, though (or backup, change the MFT setting, restore).

                      It is documented in the Best Practices document. Did you read that?


                      Windows Server as a File Server

                      Tune the Microsoft Windows 2003 server to be a file server. See the Microsoft article http://support.microsoft.com/kb/174619 about this.

                      Increase the NTFS MFT (Master File Table, used to be FAT) zone to 50% of the disk space. The result is that small files are stored in the MFT and not as separate files in the NTFS. This helps a lot because we have thousands of small files.



                      1. Open the Registry Editor (regedit).

                      2. Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem.

                      3. In the right pane, look for the DWORD named NtfsMftZoneReservation.

                      4. If it exists, change the DWORD to 4.

                      5. If it does not exist, create a new DWORD NtfsMftZoneReservation in the registry and set its value to 4.

                      EXTRA INFO

                      The default value for this key is 1. This is good for a drive that will contain relatively few large files. The options are:

                      1 — Default file allocation
                      2 — Medium file allocation
                      3 — Larger file allocation
                      4 — Maximum file allocation

                      Unfortunately, Microsoft doesn't give any clear guidelines as to what distinguishes Medium from Larger and Maximum levels of files. Suffice it to say, if you plan to store lots of small files on your workstation, you may want to consider a value of 3 or 4 instead of the default value of 1.
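The same registry change can be scripted with the standard reg.exe tool instead of editing by hand (a sketch; a reboot is still required before NTFS honors the new MFT zone reservation):

```shell
# Set NtfsMftZoneReservation to 4 (maximum MFT zone), creating the
# value if it does not already exist. /f suppresses the overwrite prompt.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" ^
    /v NtfsMftZoneReservation /t REG_DWORD /d 4 /f

# Confirm the value was written.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsMftZoneReservation
```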


