Good deal! Let us know if that helps. Also, are there any other differences between the two ePO servers, such as extension versions, products being managed, etc.?
Yes, definitely different extensions and products managed on each. The affected server in this case is the same one I had opened a ticket for last week because it was throwing an "OSServicePackVer" error following the upgrade to 5.10. This one has the MOVE multi-platform extensions, and the "OSServicePackVer" error is thrown when I add or use any of the columns associated with MOVE MP SVM Product Properties. So this current DB server key check error is just another wrinkle I've run into following the upgrade to 5.10.
Yes, I remember that. One thing about that column name: it seems ePO 5.10 changed it to something else - don't ask me why, I don't know. If you have the retain policies and tasks option enabled in server settings, back up your policies and assignments just in case (if this isn't resolved yet - I don't recall), then remove and reinstall the affected extension. I believe you might have done that already; I'm just repeating it for anyone else's benefit. That should pick up the new column name of OSCsdVersion.
Anyway, different product extensions also have different internal tasks that run, some more resource intensive than others. So that can definitely affect memory usage, threads open to the database, etc.
I was getting ready yesterday to remove and reinstall the MOVE extension when this other issue popped up, so I plan on circling back to that before the end of the week. I'll update this thread with feedback when it's done.
Removing and reinstalling the MOVE extensions appears to have resolved the other issue with the "OSServicePackVer" error.
Just to clarify, there are two steps here?
Step 1: I would at least triple the memory, then adjust the memory settings as follows.
Step 2: In [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\MCAFEETOMCATSRV530\Parameters\Java], change JvmMx to half the installed RAM. So if you install 24 GB of RAM, for example, half of that is 12 GB, and you would calculate the value as 12 x 1024 - 1. For some reason, that is how ePO calculates it on install. So in your case, you would set JvmMx to 12287. You then want to leave about 4 GB available for the OS, which leaves you 8 GB for SQL.
Step 3: In the properties for the SQL server, set max memory to 8 x 1024, or 8192. That should clear up the issue. Those are just examples, obviously; it will depend on how much RAM you actually install (there is a rough sketch of the math below).
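Here is a rough command-prompt sketch of that arithmetic, just to make the numbers concrete. TOTAL_GB is an assumption - set it to whatever you actually install - and the reg query line only reads the current value; back up the registry key and stop the ePO services before you actually change JvmMx.

    @echo off
    REM TOTAL_GB is an assumption - set it to the installed RAM in GB
    set /a TOTAL_GB=24

    REM JvmMx = half of RAM in MB, minus 1 (12287 for 24 GB)
    set /a JVMMX=TOTAL_GB*1024/2-1

    REM Leave roughly 4 GB for the OS; the rest goes to SQL (8192 MB for 24 GB)
    set /a SQL_MAX_MB=(TOTAL_GB/2-4)*1024

    echo JvmMx value:                 %JVMMX%
    echo SQL max server memory in MB: %SQL_MAX_MB%

    REM Read-only check of the current JvmMx value under the key quoted above
    reg query "HKLM\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\MCAFEETOMCATSRV530\Parameters\Java" /v JvmMx

With TOTAL_GB set to 16 instead, the same math works out to 8191 for JvmMx and 4096 MB for SQL.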
My VM team asked me to start by doubling the memory to see if that resolves the issue, so I only upgraded to 16 GB. That would mean adjusting the registry entry you referenced for JvmMx to 8 x 1024 - 1 = 8191.
I'm a little foggy on step 3 with setting max memory for the SQL server. Where is that setting?
JvmMx to: 8 x 1024-1 = 8191 - that is correct.
For SQL, I would set it to use 4 GB (4 x 1024 = 4096). To do that, in SQL Server Management Studio, at the very top of the tree where the SQL Server instance is listed when you connect to the server, right-click on that instance, go to Properties, then go to the Memory page. Under maximum server memory (in MB), enter the 4 GB value there. That should take effect immediately without a restart of the SQL services. If you would rather script it, see the sketch below.
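Just as an option, here is what that looks like from a command prompt instead of the GUI. This is only a sketch under a couple of assumptions - a local default instance (-S .) with Windows authentication, and the 4096 MB value from above - so point it at whichever instance actually hosts the ePO database. Note that 'max server memory (MB)' is an advanced option, so it has to be exposed first.

    REM Set max server memory to 4 GB (4096 MB) - example values only
    sqlcmd -S . -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;"

    REM Verify the configured and running values afterwards
    sqlcmd -S . -E -Q "EXEC sp_configure 'max server memory (MB)';"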
After upgrading the VM to 16 GB of memory, the server locked up again a few days later. What appears to be happening is that the tomcat7.exe process (ePO 5.10), in combination with sqlservr.exe (SQL 2016), is causing TCP/IP port exhaustion errors, and then the server becomes unresponsive. Since rebooting the server four hours ago this morning, both processes have already established well over 1,000 connections each, and the number increases by roughly 100 every 15 minutes or so. This morning my VM team upgraded the server to the recommended 24 GB of memory, but I'm doubtful that will cure the issue of exhausting ephemeral ports. The system event log shows TCPIP warnings 4231 and 4227 when the system locks up. I imagine the next move should be running a MER and opening a ticket?
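In case it's useful while I wait, these are the sorts of built-in checks I can run to quantify the port usage; the :1433 filter assumes SQL is still listening on its default port.

    REM Current dynamic (ephemeral) port range the OS is allocating from
    netsh int ipv4 show dynamicport tcp

    REM Total TCP connections, and how many are sitting in TIME_WAIT
    netstat -an | find /c "TCP"
    netstat -an | find /c "TIME_WAIT"

    REM Connections involving the SQL listener (assumes the default port 1433)
    netstat -an | find /c ":1433 "

If the total is closing in on the size of that dynamic port range, it would line up with the port exhaustion warnings in the event log.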
There are a couple of things to look for first.
1. Are you running RSD and if so, what version?
2. Are you running the smartscheduler extension and if so, what version?
3. Do you have any 5.6.0.878 versions of the agent out there - any at all?
Are the connections from agents to Apache, or from ePO to SQL? You can check that with a netstat -an > c:\ports.txt output; that will show whether the connections are inbound from agents to the server, or outbound from ePO to the SQL port (see the sketch below for one way to tally them).
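If it helps, here is one way to tally that output from a command prompt. The port numbers are assumptions about a typical setup - 443 for agent-to-server traffic and 1433 for a default SQL instance - so substitute whatever ports your environment actually uses.

    REM Capture the current connection table
    netstat -an > c:\ports.txt

    REM Connections involving the agent-server port (assumes 443)
    find /c ":443 " c:\ports.txt

    REM Connections involving the SQL port (assumes the default 1433)
    find /c ":1433 " c:\ports.txt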
If they are Apache connections from agents, how frequent is your ASCI, how many systems are being managed, how frequently do update/deployment tasks run, and are you using distributed repositories to offload some of the traffic?
1. I see RSD version 5.0.3 in the master repository and extension 5.0.5.123
2. No, I don't have smart scheduler in the extensions.
3. I ran a report which doesn't show any systems running the 5.6.0.878 agent. All systems are running 5.6.1.157 except half a dozen still on 5.5.1.388, and one system with 5.6.0.702.
Connections appear to be ePO to SQL, since the results of netstat -an > C:\ports.txt show the IP address of the ePO server with a varying port number connected to the IP address of the ePO server on the port used by SQL.