you're not being dense - the system is simply telling you that some other user/process has that object locked at the moment.
Usually this means that a connection has been broken and Windows, for reasons known only to itself, has not released the locks on the file. Mostly this happens when clients time out during a sync event.
Left to its own devices, Windows will clean this kind of stuff up after a couple of hours (the TCP keepalive timeout) - you can reduce that time, though, with the appropriate registry settings.
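For reference, the relevant knob should be the TCP `KeepAliveTime` value (milliseconds; the default of 7,200,000 is the "couple of hours" mentioned above). A sketch of a .reg fragment that would drop it to 10 minutes - back up the key first, and a reboot is needed before it takes effect:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; KeepAliveTime is in milliseconds; 0x000927c0 = 600000 = 10 minutes
; (default is 7200000, i.e. 2 hours)
"KeepAliveTime"=dword:000927c0
```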
If you can bear it, rebooting the server will clear all the locks for sure, or if you want to play, you can clear them up manually with something like Process Explorer from Sysinternals.
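If you'd rather not dig through Process Explorer, the built-in `net file` command can do the same from a prompt on the server holding the shares (the ID below is just an example):

```
:: List files held open by remote sessions, with their numeric IDs
net file

:: Force-close a specific stale handle by its ID
net file 1234 /close
```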
alternatively, it could be the account you're running the server under - if your DB is remote, you need to edit the service properties and give it a username/password which can access the remote UNC - remember, LocalSystem only has access to the .... local system.
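A sketch of doing that from the command line instead of the service properties dialog - the service name and account here are hypothetical placeholders, not SafeBoot's actual service name:

```
:: Point the service at a domain account that can reach the remote UNC
:: (sc is picky: the space after obj= and password= is required)
sc config SafeBootServer obj= "DOMAIN\svc_safeboot" password= "secret"
```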
You should really put your server process and database on the same box though - by storing the db remotely you're putting about 20x the load on the environment compared to having them co-hosted.
If I had one server on one site, I'd agree with you. As it happens, I've got 40+ sites scattered across a wide area so I need servers at each site but I also need one object directory. Now, if you could host the object directory inside active directory, we might be talking.... :)
So instead of routing one port from the 40 sites to one server/db combo, you're routing NetBIOS traffic (which is 20x bigger) over the WAN between each site and your database?
What am I missing here? Are you distributing the database somehow to the 40 sites?
Instead of having 800 machines trying to talk to one database server across the WAN, I'll have 800 machines talking to 40 database servers over the LAN and those 40 servers talking to the object directory over the WAN. Surely this is better?
If I'm on completely the wrong track, it is, fortunately, not too late for me to change my mind :)
The server>database traffic is around 20x the client>server traffic, so in theory your architecture should be slow or unusable.
Think about it like this: imagine you want to copy a file between your client and server. In our case, the communication between the client and server is compressed, but the communication between the server and database is not. So, from a network perspective, wouldn't it be better to copy the file compressed, direct to the database?
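To put rough numbers on the compression point, here's a toy sketch - the payload and the resulting ratio are made-up assumptions for illustration, nothing to do with SafeBoot's actual protocol:

```python
import zlib

# Hypothetical payload: ~1.1 MB of repetitive, text-like data standing in
# for an object-directory entry being synced.
payload = b"object-directory entry " * 50000

# What the client>server leg would carry (compressed)...
compressed = zlib.compress(payload, level=6)

# ...versus the server>database leg, which carries the raw bytes.
ratio = len(payload) / len(compressed)
print(f"raw: {len(payload)} bytes, compressed: {len(compressed)} bytes, "
      f"~{ratio:.0f}x more on the wire uncompressed")
```

Highly repetitive data like this gives a flattering ratio and real traffic compresses far less, but the direction of the argument holds either way: the uncompressed leg is the one you want on the LAN, not the WAN.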
SafeBoot is not like EPO AV - each server does not cache the database content, it just exposes it over IP. Every request for data by a client ends up accessing files in the database, so splitting the two does not save traffic, it adds to it.
I hope that helps explain things? Personally I hate to be gloomy, but I've never seen a customer successfully set up the way you intend - the architecture wasn't built to support it. One server/db combo will support 800 machines without any problems, and think of all the hardware you'll be saving. You could buy a really nice box+backup and still have budget to spare. ;)
That explains things alright. Thanks for that! I'll be changing my strategy then :)
What you didn't point out, but which I did realise over the weekend, is that if I did things my way, the traffic to and from the database would be unencrypted :(
the server>db data is encrypted, but at a lower level than the client>server communications. It's the server which does the db decryption in memory.
We have about 6000 machines all across the US calling home to a single DB server. So if you were concerned about 800... I wouldn't worry.