Anyone have any experience with using Hot Backup in a large database environment? Say for example that Hot Backup is configured for a database that contains 40,000 users and corresponding machines. In a small test environment I have configured Hot Backup with Object Change Log enabled as shown below.
This is set to repeat every 5 minutes, but I would suggest 15-20 minutes in a production environment. Objects updated in a small user environment synchronize over the network just fine. But what happens when passwords, audit logs, object properties, etc. are updated frequently in a 40,000-user environment? Will the impact on the database or the network be so tremendous that Hot Backup cannot be implemented? A large environment should have the network pipeline to handle the traffic, but will database performance be degraded? Maybe I will have to set this to repeat every 30-45 minutes to make sure the synchronization is complete.
On my test environment I captured Hot Backup data between the production and backup servers. A single user creation, user move, token reset, or user-property change averages about 28KB across the network when I choose the option All files in the source folder and object change log. It averages only about 4KB when I choose Files that are newer than the destination and object change log.
I am experiencing another problem with the option Files that are newer than the destination and object change log. Objects created on the production server get synchronized to the backup server, but when I delete an object on the production server the deletion does not synchronize to the backup server. Hot Backup only works correctly when I choose All files in the source folder and object change log. Very strange behaviour.
Changes such as deleting a user do not get synchronized to the backup server until a new user is created, and changes such as editing user properties never sync over no matter what happens on the production database. I guess we will have to live with the option....
Backup type    : All files
Use change log : Yes
This option has worked so far. Say that 1000 users change their passwords in the span of an hour; at roughly 28KB per change that would be about 27MB over the network. I think everyone can live with that. This is only theoretical and I hope someone has some real numbers for me.
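For anyone checking the math, here is a minimal back-of-envelope sketch. The 28KB-per-change figure is the average I measured in my tests; the 1000 changes per hour is an assumed load, not a measurement:

```python
# Rough Hot Backup traffic estimate: N object changes * average bytes per
# change. 28 KB/change is the measured test average; 1000 password changes
# per hour is a hypothetical load.
KB = 1024
per_change_bytes = 28 * KB
changes_per_hour = 1000

total_mb = per_change_bytes * changes_per_hour / (KB * KB)
print(f"~{total_mb:.1f} MB per hour")  # ~27.3 MB per hour
```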
I set up a McAfee Endpoint Encryption failover in a Windows 2003 Server Cluster with VMware, and I would recommend using this method instead of Hot Backup. The clients only have to point to one IP or DNS address, and failover is seamless in case of a server fault. This is not an easy setup, but I hope this will all be resolved when Endpoint Encryption is fully integrated with ePO. If anyone has implemented this method, please post any feedback.
Thanks for the good data Chris. I am a SafeBoot/McAfee SE and have set up both clusters and separate servers for backup in large environments. The cluster is optimal for high availability but is risky as a disaster recovery solution, since the servers are usually in the same building.
If you do have to send the database over a network, there are some things you can do to make it faster. For example, you can enable database compression, which will dramatically reduce the number of files in the database: each object will have one file instead of dozens. This option is discussed in the admin guide, under the dbcfg.ini heading.
Also, I noticed in your initial config that you have "clear archive bit" set to yes. This should be unnecessary if you have the change log enabled (which you do). I'd try your tests with that set to "no" and see if you get a benefit. So change it to look like this:
Starting backup at 02/20/09 00:43:00
Source path          : C:\Program Files\McAfee\Endpoint Encryption Manager\SBDATA
Destination path     : \\neodev11\sbdata
Backup type          : Newer files
Use change log       : Yes
Retry locked files   : Yes
-->Clear archive bit : No (optional)
Reset change log     : Yes
Thanks for the advice DLarson. I have implemented indexing before and have seen the improvement on a large database. I am concerned that compression may lead to data corruption, but I would guess that is a rare occurrence. Hot Backup was implemented for my client in production after I first posted. They are having success with it and are not experiencing any lengthy synchronizations. I see what you are saying about a clustered environment by itself being risky. I would think the best practice for high availability and fault tolerance is a clustered environment plus a Hot Backup server offsite that synchronizes every 30-60 minutes.
Does your DR backup have to be accurate to the hour? We just use the DB backup tool to create a local copy in another folder, then we use a script to zip the folder and archive it that way. You could then back up just the nightly zips.
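In case it helps, here is a minimal Python sketch of that zip-and-archive step. The folder names are made up for illustration; point src at wherever your DB backup tool writes its local copy:

```python
import shutil
import tempfile
from datetime import datetime
from pathlib import Path

# Hypothetical layout: the DB backup tool has already written a local copy
# of the database into src; we zip it into a dated nightly archive.
base = Path(tempfile.mkdtemp())          # stand-in for your backup drive
src = base / "sbdata_copy"
src.mkdir()
(src / "object.db").write_text("demo")   # stand-in for the database files

archive_dir = base / "nightly"
archive_dir.mkdir()

# One dated zip per night, e.g. sbdata_20090220.zip
stamp = datetime.now().strftime("%Y%m%d")
zip_path = shutil.make_archive(str(archive_dir / f"sbdata_{stamp}"), "zip", src)
print("archived to", zip_path)
```

A scheduled task (or cron job) running this nightly gives you the dated zips to back up offsite.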
It depends on the amount being backed up and the speed of the network. It also depends on your sensitivity to lost data. If you have to redo 5 password resets as a result of a failover, does it really matter?
The "average" customer will do these incrementals a few times a day, about every 4 hours. If you are smaller, you can do them hourly. If you are huge or have a slow network, then it is better to do them every 30 minutes. By increasing the frequency, we reduce the amount transferred per sync. It really is something that you have to test and fine-tune for your environment.
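That test-and-fine-tune step can be sketched as a back-of-envelope calculation (all numbers below are hypothetical): estimate how fast changes accumulate, then pick the longest interval whose per-sync payload still fits your transfer budget.

```python
def pick_interval(change_rate_kb_per_min, budget_kb, candidates=(240, 60, 30)):
    """Return the longest candidate interval (minutes, listed longest first)
    whose expected sync payload stays within budget_kb; fall back to the
    shortest interval if none fit."""
    for minutes in candidates:
        if change_rate_kb_per_min * minutes <= budget_kb:
            return minutes
    return candidates[-1]

# Example: ~470 KB/min of object changes, 30 MB per-sync budget
print(pick_interval(470, 30 * 1024))  # 60 -> hourly syncs fit the budget
```

The candidate intervals (4 hours, hourly, every 30 minutes) mirror the schedules mentioned above; syncing more often shrinks each transfer, which is the point being made.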