Funnily enough I'm looking at the same thing. I found this which may be good for you but it's not using the Scheduled Jobs function:
I'm looking at using HTTPS.
At the moment I have
1) weekly scheduled job (job id "weeklybackup") - this is configured to run the job below when it finishes.
2) an upload job that is activated by the weekly backup job (job id "uploadbackup")
The upload job sends to a webserver. I was thinking of using the webreporter instance but this didn't work too well! The test file seemed to upload (via curl) to the /logloader dir but I then couldn't find the file.
So I think I'm just going to use an IIS server.
From then what I may do is monitor the directory I upload to and when a new file is detected, rename with current date/ time and possibly move to another folder....
I've seen this done in places I've worked in the past, but I'm not sure how easy it is!!
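A minimal sketch of that "watch the upload folder" idea, run from cron every few minutes rather than watching continuously. All paths are placeholders, and this is the Unix-side version (on an IIS box the equivalent would be a PowerShell FileSystemWatcher):

```shell
#!/bin/sh
# archive_uploads INCOMING ARCHIVE
# Move every file found in INCOMING into ARCHIVE, renamed with a
# date/time stamp so successive backups never overwrite each other.
archive_uploads() {
    incoming=$1
    archive=$2
    mkdir -p "$archive"
    for f in "$incoming"/*; do
        [ -f "$f" ] || continue          # skip if the glob matched nothing
        stamp=$(date +%Y%m%d%H%M%S)
        mv "$f" "$archive/$(basename "$f").$stamp"
    done
}

# Example crontab entry (wrapper script path is a placeholder):
# */5 * * * * /usr/local/bin/archive_uploads.sh /var/www/uploads /var/www/uploads/archive
```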
Thanks. I'm glad to see that I'm not alone in this situation. It appears this is a pretty big hole built into the product: the whole web gateway platform seems set up for backup/restore rather than true disaster recovery (you can't import the entire config as-is due to an invalid UUID).
We run the email gateway appliance and the GUI with it allows for automatic export to a remote location via SCP or FTP. Seems like it should have been shared amongst platforms.
I have a scheduled job that saves the configuration daily. Config files are saved off as proxy.backup and the previous backup is renamed to proxy.backup1301290000 with the date designation changing for each day.
For the second portion, you could probably script something to grab the latest file and scp or ftp it to wherever you want and then cron the job.
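A sketch of that second portion, assuming key-based SSH auth so cron can run it unattended; the directory, host, and user names are placeholders:

```shell
#!/bin/sh
# latest_backup DIR : print the name of the newest file in DIR
# (ls -t sorts by modification time, newest first)
latest_backup() {
    ls -t "$1" | head -n 1
}

# push_latest DIR REMOTE : scp the newest backup in DIR to REMOTE
push_latest() {
    f=$(latest_backup "$1")
    [ -n "$f" ] && scp "$1/$f" "$2"
}

# Example crontab entry -- ship last night's backup at 02:00 daily
# via a wrapper script that calls push_latest:
# 0 2 * * * /usr/local/bin/push-backup.sh
# where push-backup.sh runs:
# push_latest /var/backups/mwg backupuser@backuphost:/srv/mwg-backups/
```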
I've solved the same problem using a different approach.
Instead of thinking about MWG sending the backup to a remote location, I developed a small script to GET it there from another machine.
I do this using the REST interface provided by the product. Check the product guide.
I have 4 servers in my production environment and another one to test new configurations. Every Saturday I restore my production policy configuration onto the test server.
#!/bin/sh
# Back up the proxies; restore into the staging server on Saturday
servidores="server1 server2 server3 server4"
user=admin; pass=secret                  # REST credentials (placeholders)
repositorio=/backup/mwg                  # local backup repository
producao=server1                         # config to restore into staging
data=$(date +%Y%m%d)
email=admin@example.com; cc=team@example.com

# To list the appliances: GET $REST/appliances -o XML, then
#echo "cat /feed/entry/title/text()" | xmllint --shell XML | grep -v "\-\-" | grep -v ">"

for name in $servidores; do
    REST="https://$name:4712/Konfigurator/REST"   # adjust host/port for your environment
    ## Log on and authenticate
    curl -c cookies.txt -X POST --tlsv1 -k "$REST/login?userName=$user&pass=$pass"
    ## Create backup file
    curl -b cookies.txt -X POST --tlsv1 -k "$REST/backup" -o $repositorio/$name.$data.cfg
    ## Log off
    curl -b cookies.txt -X POST --tlsv1 -k "$REST/logout"

    # A missing file, or one smaller than 20k, indicates a login or backup failure
    if [ ! -f $repositorio/$name.$data.cfg ] || \
       [ $(stat -c%s $repositorio/$name.$data.cfg) -lt 20000 ]; then
        log="Errors found during backup!!"
        echo $log | mail -c $cc -s "Backup Proxy - FAILED" $email
    fi
done

## Restore into staging on Saturday
if [ $(date +%a) = "Sat" ]; then
    sleep 300   # wait 5 min for Tomcat to release the devices
    REST="https://staging:4712/Konfigurator/REST"
    curl -c cookies.txt -X POST --tlsv1 -k "$REST/login?userName=$user&pass=$pass"
    curl -b cookies.txt --data-binary @$repositorio/$producao.$data.cfg -X POST --tlsv1 -k "$REST/restore" -H "Content-Type: text/plain;charset=UTF-8"
    curl -b cookies.txt -X POST --tlsv1 -k "$REST/logout"
fi