The parameter ids= is optional and refers to a file, normally created within the wd directory, that contains a list of the DsrcIDs (one value per line) associated with the logs to be considered during the search. If this parameter is not specified, no DsrcID constraints will be imposed.
If you need to know the DsrcID, I suggest you run a dummy search query (preferably a time-consuming one) from the GUI with the required datasources filter, then look at the ELM process and find the elmsearch job. You should see the parameter "ids=<path to file>"; you can then copy that file to your custom wd folder and add the parameter to the subprocess.call invocation within the script.
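To make the step above concrete, here is a minimal sketch of writing a DsrcID list file and building the extra "ids=<path>" argument to append to the subprocess.call arguments. The helper name and the DsrcID values are made up for illustration; the real IDs come from the elmsearch job you inspected.

```python
import os
import tempfile

def build_ids_argument(wd, dsrc_ids):
    """Write one DsrcID per line into wd/ids and return the
    "ids=<path to file>" argument to pass to elmsearch.
    (Hypothetical helper -- the file/argument shape mirrors the thread.)"""
    ids_file = os.path.join(wd, "ids")
    with open(ids_file, "w") as f:
        f.write("\n".join(str(i) for i in dsrc_ids) + "\n")
    return f"ids={ids_file}"

# Example with made-up DsrcIDs and a temp dir standing in for your wd folder:
wd = tempfile.mkdtemp()
arg = build_ids_argument(wd, [144115188075855872, 144115188075855873])
# arg can now be appended to the argument list passed to subprocess.call
```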
Let me know if something is unclear.
and naturally that file had to have a relative path ... 🙂
May I abuse your kindness and ask:
How exactly do you know a search has been truncated? You mention looking at the logs, but look for what? 🙂
Sorry for my late answer...
Each job this script runs will generate status lines on stdout in the following format:
Which looks like:
As you can see in the script, each subprocess call redirects its output to the logfile (stdout=l).
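That redirection pattern, stripped to its essentials, looks like this. It is a sketch only: "echo" stands in for the real elmsearch job, and the logfile path is an example.

```python
import os
import subprocess
import tempfile

# Each job's stdout (including its status lines) is captured in a logfile
# by passing the open file handle as stdout, exactly the stdout=l pattern
# from the script. "echo" is a stand-in for the real job command.
logfile = os.path.join(tempfile.mkdtemp(), "job.log")
with open(logfile, "w") as l:
    subprocess.call(["echo", "COMPLETE"], stdout=l)
```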
To answer your question, here are the possible statuses:
'RUNNING', which means the program is running;
'FAILED', which means the program failed to execute;
'COMPLETE', which means the program completed normally;
'DONE', which means the program stopped early because the total amount of allowed output was reached.
You can check if something went wrong using this command:
grep -E "FAILED|DONE" <logfile>
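If you would rather do that check inside the script itself, the grep above can be sketched in Python like this (assuming the status keyword simply appears somewhere on the line, as in the log format described above):

```python
def find_problem_statuses(logfile):
    """Return the log lines containing FAILED or DONE -- the same
    lines the grep -E "FAILED|DONE" command would match."""
    hits = []
    with open(logfile) as f:
        for line in f:
            if "FAILED" in line or "DONE" in line:
                hits.append(line.rstrip("\n"))
    return hits
```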
Let me know if you have any questions
This is a bit old, but I just saw it while searching for something else. If you can get the data from an event table in a view, you should be able to do the same in the CSV. 1 million events should be OK. You just need to keep the query below the point where it will fill the disk, as running queries use the local index_hd directory for temp storage. I just ran one that was about 75k events and the file was around 8 MB, so 1 million should end up between 100-200 MB. I think it would be adventurous (not to mention taxing) to go much beyond this; it seems like a weekend or overnight style report.
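For reference, the arithmetic behind that estimate, scaling the observed 75k-events/8 MB data point linearly:

```python
# ~8 MB for ~75,000 events gives the observed bytes-per-event ratio;
# scale it linearly up to 1 million events.
bytes_per_event = 8 * 1024 * 1024 / 75_000            # ~112 bytes per event
estimate_mb = 1_000_000 * bytes_per_event / (1024 * 1024)
print(round(estimate_mb))                             # prints 107
```

So a linear scale-up lands at roughly 107 MB, the low end of the 100-200 MB range quoted above (actual size varies with the fields selected per event).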
What you will most likely need to do is build a new PDF report with a table in it; the table will let you build a new event query. Name the event query whatever you want and select all the fields you want in the CSV file. Save the PDF report - you can even delete it afterwards, as you only need it to build the query. Now create a new report using the CSV option with all your filters and the event query you created in the previous step. Then you should be fine.
If this doesn't make sense, send me an example of the fields and I can screenshot the process.
Has anyone tried or thought about implementing something like Logstash to view and pull the logs more easily? I don't want to give terminal access to some of my users who request this data constantly.
Until version 9.5, this was the only piece of the McAfee solution that actually worked... However, since version 9.5 the receiver has been displaying some instability, and we are now moving away from using it as our primary log destination.
Anyhow, back to your question:
We enabled archiving at the receiver level and started exporting all data to platforms capable of processing it... The initial destination was Hadoop HDFS (using an NFS head and batch loading), but we are now moving to Hadoop-based streaming technologies like Spark.
Just a warning: starting down this path is like the pill dilemma in the Matrix movies. There is no turning back... Yeah, you could turn it off, but in reality you'll end up asking: why bother with the Nitro SIEM any longer? 🙂
Can you specify what kind of processing you're trying to accomplish? Do you plan to send the data to another correlation/log management device, or just do the analysis by hand? In option 1 you stated there are some fields that are not being processed that you want. Can you tell me what those fields are?
We have dashboards for proxy servers that can present data for the current month or the previous month.