Just an FYI: you can open multiple browser windows to your ESM, log in, and work in one tab while you are exporting in another. That way you are not waiting on the export function and being unproductive.
Thanks for that. I tend not to use this approach because errors may be missed. In the case I mentioned, where the Formatting File stage started looping, this would not have been immediately apparent if I hadn't had the window open and been able to watch it get to 100%, flip between 0% and 100% a few times, then start all over again. This is not an isolated experience, so I leave the screens open until I gain confidence that the behaviour is predictable.
Option 3 - query on the ELM and export the results - is also a fail. Scattered through the exported log records are corrupt records where it looks like two or more records have been concatenated. Some of the records that have been globbed together do not match the search criteria, and the corrupt records appear to be terminated by a CR rather than an LF character, so you see something like <bad record>CR<record>CR<bad record>LF where the bulk are just <record>LF.
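For anyone wanting to spot these glued-together records programmatically, a quick sketch like the following can flag lone CR terminators in an export. This is just an illustration; the function name and the assumption that healthy records end in LF (or CRLF) are mine, not anything from the product.

```python
# Hypothetical helper: scan an exported ELM log file for records that end in
# a bare CR with no following LF -- the symptom described above where two or
# more records appear concatenated.

def find_cr_terminated(data: bytes):
    """Return byte offsets of CR characters not followed by LF."""
    suspects = []
    pos = 0
    while True:
        cr = data.find(b"\r", pos)
        if cr == -1:
            break
        # A CRLF pair is a normal line ending; a lone CR marks a suspect
        # boundary where records were glued together.
        if data[cr + 1:cr + 2] != b"\n":
            suspects.append(cr)
        pos = cr + 1
    return suspects
```

Running it over the raw export bytes (e.g. `find_cr_terminated(open("export.log", "rb").read())`) gives you the offsets to inspect by hand.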
I've raised (another) SR
Please post a resolution if you receive one from your SR. One of my managers is always requesting reports which produce an enormous number of results. The only way I can get them is via an ELM search (not enhanced, as I like it to run in the background). My issue is that I always hit either the file size exceeded or the time limit exceeded error. That leaves me running about 20+ reports with smaller time ranges, as increasing the file size just makes the report useless.
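If it helps anyone doing the same 20-reports-with-smaller-ranges dance, the slicing itself is easy to script. This is just a generic sketch for producing the time windows; the slice count and dates are illustrative and nothing here touches the ESM API.

```python
# Split a long reporting window into N contiguous slices so each ELM search
# stays under the file-size / time limits. Purely illustrative values.
from datetime import datetime, timedelta

def time_slices(start: datetime, end: datetime, parts: int):
    """Split [start, end) into `parts` contiguous (start, end) windows."""
    step = (end - start) / parts
    return [(start + i * step, start + (i + 1) * step) for i in range(parts)]
```

You can then feed each `(start, end)` pair to whatever search you are running, e.g. `time_slices(datetime(2023, 1, 1), datetime(2023, 1, 21), 20)` for twenty one-day windows.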
I wish there were an easier way to create reports with summary data. The summaries I've been able to create are less than intuitive.
I'm running into a similar issue. It took me quite a while to realize that exporting from the events list only exports the current page view (default 100 records), and even if you export multiple pages, it is SIEM data, not the raw logs. What I need to be able to do is, once the events are found in the SIEM, somehow relay that information to the ELM and get back the raw logs that correspond.
If I'm able to click an event, retrieve the log from the ELM, click another event, and retrieve the log from the ELM again, the system should be able to automate the same, don't you think?
The standard answer I usually get is that I can search the ELM directly, but that doesn't work if, say, I parse out all "freeradius" events and want the subset of all usernames that logged in as "admin". Searching for freeradius is too big in scope, searching for "admin" is bound to give me other stuff I don't need, and while I could do some regex magic, that doesn't always work right and can be very slow.
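The "regex magic" can at least be made cheaper by filtering in two stages: a fast substring test to narrow to freeradius lines, then the regex only on that subset. The log format below is invented for illustration; real freeradius messages will differ, so treat this as a sketch, not a working parser.

```python
# Two-stage filter: cheap substring check first, regex only on the survivors.
# "Login OK: [admin]" is an assumed message shape, not the real format.
import re

ADMIN_LOGIN = re.compile(r"Login OK: \[(?P<user>admin)\]")

def admin_logins(lines):
    """Yield freeradius lines that record an admin login."""
    for line in lines:
        if "freeradius" not in line:   # drop everything non-freeradius fast
            continue
        if ADMIN_LOGIN.search(line):   # regex runs on the small subset only
            yield line
```

The point of the design is that the expensive regex never sees the "other stuff I don't need" -- it only runs on lines that already matched the cheap scope check.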
Extending the "ELM Archive" tab to allow the retrieval of all events referenced by a view would be brilliant. It would remove the need to concoct a regex and provide filtering capabilities (Normalized ID, for example) that would be beyond the capabilities of a regex (I think). If the output could be parsed as well and contain all the columns referenced in the event list, I would be very happy!
I ran into similar issues trying to extract one specific type of event for a month from the ELM, because I was always hitting the file size or time limit.
After digging a little on the forum, I found that it is possible to retrieve raw logs using SFTP, as described here: https://community.mcafee.com/thread/70302?start=0&tstart=0
However, this was not a good solution for me for several reasons:
- I do not have the extra disk space to process the data.
- I only wanted specific events from all the datasources.
After looking at how the data extraction works through the GUI, I developed a Python script which can be used to retrieve and store raw logs on the ELM, so that you can gzip them before downloading to your computer.
This script is available on my github: https://github.com/0x8008135/McAfee
To keep the script execution from being interrupted, I strongly suggest you run it inside a screen session on the ELM.
Also, if you only want events from specific datasources, you can create a file containing the datasource IDs and add "ids=<filename>" to the subprocess call.
Feedback is more than welcome!
This looks really interesting. Can you tell me a bit more about how it runs? I.e., is it one file that is extracted, or is it multiple files, sliced by time to get around the file size or time limit?
This is a bit old, but I just saw it when I was searching for something else. If you can get the data from an event table in a view, you should be able to do the same in the CSV. 1 million events should be OK; you just need to keep the query below the point where it is going to fill the disk, as running queries go to the local index_hd directory for temp storage. I just ran one that was about 75k events and the file was around 8 MB, so 1 million should end up between 100-200 MB. I think it would be adventurous (not to mention taxing) to go too far beyond this; it seems like a weekend or overnight style report.
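As a sanity check on that estimate, the arithmetic works out: scaling the observed 75k-events/8 MB run linearly puts 1 million events comfortably inside the quoted range. (Linear scaling is an assumption; actual size depends on the fields selected.)

```python
# Back-of-the-envelope projection from the observed run: ~75k events in ~8 MB.
bytes_per_event = 8 * 1024 * 1024 / 75_000                     # ~112 bytes/event
projected_mb = 1_000_000 * bytes_per_event / (1024 * 1024)     # ~107 MB
```

About 107 MB for a million events, i.e. the low end of the 100-200 MB estimate above.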
What you will most likely need to do is build a new PDF report with a table in it. The table will allow you to build a new event query; name the event query whatever you want and select all the fields you want in the CSV file. Save the PDF report - you can even delete it now, as you only needed it to build the query. Now create a new report using the CSV option with all your filters and the event query you created in the previous step. Then you should be fine.
If this doesn't make sense, let me know an example of the fields you need and I can screenshot the process.