lpp wrote:
We are using REST only temporarily, to test this new approach (collecting small chunks of logs every few minutes instead of processing a huge volume at night), as WebGateway FTP push has always been difficult to get working (curl error 7) and takes time to set up.
Most people push logs from the MWG after log rotation. You shouldn't be getting curl errors unless the files are really large. 100MB with compression enabled seems to be a common and good choice.
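As an aside: curl error 7 (CURLE_COULDNT_CONNECT) means the TCP connection itself never completed, so it is worth ruling out basic reachability before blaming file sizes. Here is a minimal Python sketch for that check; the host and port are placeholders for your environment, not anything Web Reporter-specific.

```python
import socket

# Placeholder values; substitute your Web Reporter / FTP target.
HOST = "reporter.example.com"
PORT = 21  # FTP control port; adjust for SFTP (22) or a REST endpoint (443)

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection succeeds within `timeout` seconds.

    curl error 7 corresponds to exactly this failure: the TCP handshake
    never completes (firewall, wrong port, or service not listening).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"connect to {host}:{port} failed: {exc}")
        return False

if __name__ == "__main__":
    print("reachable" if can_connect(HOST, PORT) else "not reachable")
```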
The overall settings are basically default, except for a few parameters.
I set the maximum concurrent log processing jobs to 2.
Since the CPUs are basically idle, I hoped that this parameter would increase parallelism during log import.
If the system is idle and there are jobs to be processed, it would be best to find out why it is slow. If adding more log parsing jobs keeps it busy, that would indicate the log parsers are getting stuck on something. Only two things come to mind:
1) Directories: if you have a directory attached, the log parser does an LDAP lookup for the display name.
2) The "Use hostname" log parsing option is enabled: if you have a lot of unauthenticated traffic from IP addresses not in DNS (such as DHCP addresses), Windows will try to resolve them using NetBIOS broadcasts. That puts the thread on hold for 30 seconds on every record.
In those two cases, having more log parsing threads can help ride over the wait times, but I would still try to address the real problem instead of using more threads as a work-around.
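To make the stall concrete, here is a small Python sketch (my own illustration, not Web Reporter code) of how a blocking lookup holds a worker for the full timeout, and why extra threads merely overlap those waits instead of removing them. The addresses and timings are made up.

```python
import time
from concurrent.futures import ThreadPoolExecutor

LOOKUP_TIMEOUT = 2.0  # stand-in for the ~30 s NetBIOS broadcast timeout

def parse_record(ip: str) -> str:
    """Pretend to parse one log record; unresolvable IPs block the thread."""
    if ip.startswith("10."):           # pretend these are not in DNS
        time.sleep(LOOKUP_TIMEOUT)     # thread sits idle, CPU stays idle
        return f"{ip} -> <unresolved>"
    return f"{ip} -> host-{ip.split('.')[-1]}"

records = ["192.168.1.5", "10.0.0.7", "192.168.1.9", "10.0.0.8"]

for workers in (1, 2):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(parse_record, records))
    print(f"{workers} worker(s): {time.perf_counter() - start:.1f} s")
# More workers overlap the waits (4 s -> 2 s here), but the lookups are
# still wasted time; fixing name resolution removes them entirely.
```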
We have always loved page views (along with the detailed web data), but again we thought that clearing that option on the processing tab would decrease import time. I have re-enabled page views. I also doubled the IP cache size, User cache size and Site request cache size, as the hit ratios were 99.9%, 100% and 98.9% respectively.
Page views will always be faster. Detail data has the most significant impact on the size of the DB, but shouldn't affect log parsing too much. Log records are "detailed data" by default; it is actually more work to condense that data into summary records. That is where cache management is important, and why you don't want to change values unless you know what you are doing. I've documented all the advanced performance options here:
https://kc.mcafee.com/corporate/index?page=content&id=KB71665
Increasing the IP/User/Site cache sizes isn't likely to make any difference. The number of records to load is relatively low and won't blow out memory, but it would be rare for the defaults not to be enough. Don't go above double on the aggregate cache.
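As a back-of-the-envelope illustration of why a 99.9% hit ratio leaves almost nothing to gain from a bigger cache, here is a small Python sketch of an LRU cache that tracks its own hit ratio. The capacities and access pattern are invented for the demo, not Web Reporter internals.

```python
from collections import OrderedDict
import random

class LRUCache:
    """Tiny LRU cache that tracks its own hit ratio."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data: OrderedDict[int, int] = OrderedDict()
        self.hits = self.lookups = 0

    def get_or_load(self, key: int) -> int:
        self.lookups += 1
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)         # mark as recently used
        else:
            if len(self.data) >= self.capacity:
                self.data.popitem(last=False)  # evict least recently used
            self.data[key] = key               # "load" the record
        return self.data[key]

    @property
    def hit_ratio(self) -> float:
        return self.hits / self.lookups if self.lookups else 0.0

random.seed(0)
# A skewed access pattern: most lookups touch a small working set,
# much like a handful of users/sites dominating the logs.
keys = [int(random.paretovariate(1.2)) for _ in range(200_000)]

for capacity in (1_000, 2_000):
    cache = LRUCache(capacity)
    for k in keys:
        cache.get_or_load(k)
    print(f"capacity {capacity:>5}: hit ratio {cache.hit_ratio:.2%}")
# Once the working set is resident, doubling capacity barely moves the
# ratio: the remaining fraction of misses are mostly one-off keys.
```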
Thank you again for your valuable help.
Best regards,
LPP
You're welcome.
I am sorry, I didn't understand the cache hit ratio results I got.
Now I am using defaults for the IP cache size, User cache size and Site request cache size, and I doubled the Aggregate record cache.
Thank you for your attention.
LPP
lpp wrote:
I am sorry, I didn't understand the cache hit ratio results I got.
Now I am using defaults for the IP cache size, User cache size and Site request cache size, and I doubled the Aggregate record cache.
Thank you for your attention.
LPP
After parsing several hours of logs, all the cache hit ratios should be at or near 99%, except the aggregate cache. The aggregate cache will never hit 100% unless you are parsing the same log file in a loop. The aggregate cache holds summary data, and we need to load summary data when parsing logs because the summaries are grouped by hour.
https://kc.mcafee.com/corporate/index?page=content&id=KB71665
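For intuition, here is a small Python sketch (my own illustration, not Reporter's implementation) of why grouping summaries by hour guarantees some aggregate-cache misses: the first record for each (user, site, hour) key always has to load or create its summary row.

```python
from datetime import datetime

# Each summary row is keyed by (user, site, hour-bucket). A log record
# either updates a row already in the aggregate cache (a hit) or forces
# a load of the summary row from the DB (a miss).
def hour_bucket(ts: str) -> str:
    return datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")

records = [
    ("alice", "example.com", "2023-05-01 09:05:00"),
    ("alice", "example.com", "2023-05-01 09:40:00"),  # same bucket -> hit
    ("alice", "example.com", "2023-05-01 10:02:00"),  # new hour    -> miss
    ("bob",   "example.com", "2023-05-01 09:50:00"),  # new key     -> miss
]

cache: dict[tuple, int] = {}
hits = 0
for user, site, ts in records:
    key = (user, site, hour_bucket(ts))
    if key in cache:
        hits += 1
    cache[key] = cache.get(key, 0) + 1  # aggregate: bump the record count

print(f"hit ratio: {hits}/{len(records)} = {hits / len(records):.0%}")
# Every first record for a (user, site, hour) key is necessarily a miss,
# so the aggregate cache can only approach 100% on repeated identical logs.
```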