We have a requirement to transfer repository files from a distributed repository to another server via a secure transfer method. This other server will act as a source site for another instance of ePO that is not allowed any internet access. Although it sounds relatively simple, I need to ensure that nothing bad happens from a partially successful transfer; i.e. if the pkgcatalog file is transferred but there is a problem transferring the other files, the remote ePO server would think there were updates, but the updates would not actually be there. Obviously we would schedule the pulls at sensible times, e.g.:
1) File replication from local ePO repository to remote server (acting as source site) @ 01:00
2) Remote ePO repository pull from the source site (remote server) @ 04:00
Whatever file transfer method we use, we would most likely roll back the transfer if any part did not succeed. However, *just in case* this goes wrong, what would be the worst-case scenario? (i.e. what if ePO pulls a corrupt DAT file from the remote source site server because the pkgcatalog.z file advised that the source site was all up to date?)
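One way to reduce the window for exactly that failure mode would be to order the transfer so the catalog file is always copied last: if the transfer breaks partway, the old catalog is still in place and the remote ePO never sees updates that aren't fully there. Here is a minimal Python sketch of that idea; the filenames (`pkgcatalog.z`, the sample DAT file in the usage note) are just illustrative, and a real implementation would go over SCP/FTPS rather than a local copy:

```python
import shutil
from pathlib import Path

# Hypothetical catalog filename, per the post; adjust to your repository layout.
CATALOG = "pkgcatalog.z"

def transfer_repo(src: Path, dst: Path) -> None:
    """Copy every repository file except the catalog first, and copy the
    catalog last, so an interrupted transfer never advertises new content."""
    files = [p for p in src.rglob("*") if p.is_file()]
    payload = [p for p in files if p.name != CATALOG]
    catalog = [p for p in files if p.name == CATALOG]
    for p in payload + catalog:  # catalog is always the final copy
        target = dst / p.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)  # copy2 preserves file timestamps
```

This doesn't make the transfer atomic, but it does mean the only partially-transferred state a pull could ever see is "old catalog plus some extra files", which is harmless.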
Apologies if this is as clear as mud :-) Let me know if any further clarification is required. I will get round to labbing this at some point :-)
thanks in advance,
Someone will correct me if I'm wrong, but I don't believe you can update an ePO master repository (i.e. your second instance of ePO) from a manually created source.
The only places you can update it from are McAfee's FTP or HTTP servers.
You definitely can - I have done it :-) You can create a new source site for ePO and disable the McAfee source sites. Just wondering if there are any caveats or gotchas that I have not thought of or found, before I get the chance to lab it (and seeing what happens when I break the transfer to the new source site).
Apologies, I stand corrected (a quick browse through the system setup screen in ePO confirms it)... you learn a new thing every day. You can even set the source site to a UNC path (even more of a revelation).
In theory, then, it's down to what kind of process you use for the file copying and what options you have, e.g. SCP over SSH, FTPS, etc.
Maybe, if it was scripted via batch files, you could first copy to a staging area on the remote server, validate the files with an MD5 checksum, and then, when everything matches, copy from the staging area to the source site folder. The staging-area-to-source-site step would be a local copy on the remote server and would therefore minimize any chance of corruption. You wouldn't have to roll back, as the source site should always be 'clean'.
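The stage-then-verify-then-promote idea above could be sketched as follows. This is a minimal Python illustration rather than a batch file, and the function names and the checksum manifest format are my own invention; the point is just that nothing touches the source site folder unless every staged file matches its expected MD5:

```python
import hashlib
import shutil
from pathlib import Path

def md5sum(path: Path) -> str:
    """Return the MD5 checksum of a file, read in chunks."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def promote_if_valid(staging: Path, source_site: Path,
                     checksums: dict[str, str]) -> bool:
    """Verify every staged file against its expected MD5; only if all
    match, copy the lot (a local copy) into the source site folder."""
    for name, expected in checksums.items():
        if md5sum(staging / name) != expected:
            return False  # mismatch: leave the source site untouched
    for name in checksums:
        target = source_site / name
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(staging / name, target)
    return True
```

Because the verification happens entirely before the promote step, a failed or corrupted transfer simply leaves the previous, known-good source site contents in place, so the subsequent ePO pull sees no change at all.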
I like your thinking! We are possibly looking at an enterprise file transfer solution, so hopefully that should cover it. That just leaves the question of what the worst-case scenario would be if a corrupt file did happen to sneak through. Minimal chance of this happening given the above setup, but I am definitely curious!