Before I came onboard here the admins tried to set this up and had very little success.
I have never used this tool and was just wondering: how many of you out there use it, and is it reliable? Does it take a long time to run? Does it eat up large chunks of your bandwidth? How often do you "sync" the two servers? etc....
We want to use the change log to copy over only new/changed objects (after a full copy, of course). We have a couple of databases with over 43,000 users and 25,000 machines, and we are afraid it will kill the network trying to copy all that data over the WAN during business hours - but maybe not?
Thanks for any info.
Not in the way it was designed, no. But it is still useful to take a full backup on the least busy day of the week and incrementals on the other days.
In our situation we run backups online to local storage. It takes approx 10h for a full and 1h for an incremental. We do not transfer data to the secondary server directly, but rather through controlled compression -> WAN transfer -> decompression -> reload. The secondary server is just for serious disaster recovery and some data integrity checks. I do not see this method as robust enough to bring to production under normal circumstances. Luckily we have never had to use it in the last 4 years.
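Roughly, the flow looks like this (just a sketch - the paths are placeholders and the WAN hop is simulated with a local copy; the real transfer would be scp/rsync/SFTP to the secondary server):

```shell
#!/bin/sh
# Sketch of the compression -> WAN transfer -> decompression -> reload flow.
# All paths are stand-ins created with mktemp so the sketch is self-contained.
set -e

SRC_DIR=$(mktemp -d)          # stands in for the local backup directory
DR_DIR=$(mktemp -d)           # stands in for the secondary server's restore area
ARCHIVE="${SRC_DIR}.tar.gz"   # compressed archive staged for the WAN
echo "object data" > "$SRC_DIR/objects.dat"

# 1. Compress before the data touches the WAN
tar -czf "$ARCHIVE" -C "$SRC_DIR" .

# 2. Transfer (simulated with cp; in reality something like
#    scp "$ARCHIVE" dr-server:/restore/)
cp "$ARCHIVE" "$DR_DIR/backup.tar.gz"

# 3. Decompress on the DR side, ready for reload
mkdir -p "$DR_DIR/restore"
tar -xzf "$DR_DIR/backup.tar.gz" -C "$DR_DIR/restore"

ls "$DR_DIR/restore"
```

The point of compressing first is that one big archive crosses the WAN instead of millions of tiny files.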
Yeah, right now we are pretty similar to your setup, Peter. I just wrote a script that does the backup to the local HDD, and then we zip it and move it to the DR server.
If it worked, it would be awesome to get rid of that script and just use HotBackup, but I am feeling less enthusiastic about it by the minute lol.
And our full backups only take about 2.5 hours. How big is your DB that a full backup takes 10 hours? YIKES!
Yes, I'm using it. Not happy with it.
~150 users, 23,000 machines.
Incrementals with changelog: ~3 hours (DB stays up)
Full backups across the WAN to the second server: ~40 hours (DB stays up)
Full backup to local disk: ~2 hours w/ the DB service down
I'll be converting the production jobs to a combination of local backup + robocopy. Maybe compression, I'm not sure on that yet. My concern is how to tie together the ODBBackup jobs with the extra batch jobs to kick off the actual transfer process, start/stop the db, etc.
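The chaining part could look roughly like this (a sketch only - `odb_backup` and `transfer_to_dr` are placeholder functions standing in for the real ODBBackup job and the robocopy/transfer step, not actual CLI names):

```shell
#!/bin/sh
# Sketch: kick off the transfer job only after the backup job verifiably
# succeeded, logging either way. Both commands are stubbed as placeholders.
LOG=$(mktemp)

odb_backup()      { echo "backup ran"   >> "$LOG"; return 0; }  # placeholder
transfer_to_dr()  { echo "transfer ran" >> "$LOG"; return 0; }  # placeholder

if odb_backup; then
    transfer_to_dr                       # transfer only on success
else
    echo "backup FAILED; transfer skipped" >> "$LOG"
    exit 1
fi
cat "$LOG"
```

The same check-exit-status-then-continue pattern works whether the glue is a batch file on Windows or a shell script.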
You don't need to stop the database for backups; that defeats the purpose of a "hot" backup. A local backup is faster than going over the LAN/WAN.
23,000 machines and only 150 users? Your company is RICH
I have a script that is scheduled to run at certain times.
The script does the following:
1.) Stops the MEE DB Service
2.) Creates a folder called Backup on an external HDD that is plugged into the server and appends that day's date to the folder name
3.) Copies the entire SBDATA folder into that newly created folder
4.) Compares the size, file count, etc. of the original SBDATA folder and the backup folder and outputs that to a log file for review the next morning
5.) Runs ToastCache to re-index the DB
6.) Starts the MEE DB Service
All this usually takes about 2.5 hours on our largest server.
If ever we need to roll over to our DR server all we have to do is zip up the latest backup, SFTP it over to our DR server, drop it in there and fire it up.
It ain't pretty, but it works, and the only action I have to take is to check the logs to make sure the backups completed successfully. The script does everything else for me.
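Stripped down, the script is basically this (a shell sketch, not the real thing - the service stop/start and ToastCache steps are stubbed as comments, and SBDATA is simulated with a temp directory):

```shell
#!/bin/sh
# Sketch of the six steps above: stop service, make dated folder, copy SBDATA,
# compare and log, re-index, restart service.
set -e

SBDATA=$(mktemp -d)               # stands in for the real SBDATA folder
echo "db object" > "$SBDATA/obj1"
BACKUP_ROOT=$(mktemp -d)          # stands in for the external HDD
STAMP=$(date +%Y-%m-%d)
DEST="$BACKUP_ROOT/Backup-$STAMP" # step 2: Backup folder with today's date
LOG="$BACKUP_ROOT/backup.log"

# stop_mee_service                # step 1 (placeholder for the real stop command)
mkdir -p "$DEST"
cp -R "$SBDATA/." "$DEST/"        # step 3: copy the entire SBDATA folder

# step 4: compare file counts and log the result for morning review
src_count=$(find "$SBDATA" -type f | wc -l)
dst_count=$(find "$DEST"   -type f | wc -l)
echo "files: src=$src_count dst=$dst_count" >> "$LOG"
if [ "$src_count" -eq "$dst_count" ]; then
    echo "OK" >> "$LOG"
else
    echo "MISMATCH" >> "$LOG"
fi

# run_toastcache                  # step 5 (placeholder re-index)
# start_mee_service               # step 6 (placeholder for the real start command)
cat "$LOG"
```

The morning check is then just reading that log; anything other than "OK" means the copy needs a second look.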
You could make things more efficient by considering which object classes you need to back up - for example, if you don't regularly make changes to the filesets, you can skip object class 5.
That could save at least a few hundred MB of unnecessary work?
But that approach would only save about 0.1% of the backup time for a large enterprise database.
The great majority of the time is spent backing up the millions of little files that correspond to each object property.
Hey, what's up Mike. I've only ever seen this used successfully in Boston at the DOT. Their DB is nowhere near as large as yours, so it always ran pretty smoothly, and it was only used for us to upgrade/make changes on prod. Still zipping those files in DC I see....geez.