It could be both.
I suggest looking at where your use case is coming from. If it came from a gap observed via hunting or log review, run the use case against the logs that contain the issue; if it doesn't fire, your use case setup is faulty. If the use case came from an external source and doesn't fire when run against your logs or replayed in your ESM/ACE, you might not have the necessary logs. I suggest setting up a test scenario for the use case and actually observing stimulus and response in your lab or test area.
I see what you mean. I was hoping there was an option where the SIEM could tell you whether the correlation is correct or wrong.
Well... A correlation will always be "correct". The point would be... "Is it relevant?"...
Try to think out of the technology when creating a new rule. Create a model then implement it into the SIEM.
- Each option added in the SIEM should correspond to an element of your model
- Each element of the model should be implemented.
Then think about the conditions under which the base events are generated, and turn that into a test procedure. Ideally, you will run this procedure on a regular basis. It can also be an input for your audits/pentests. Two birds with one stone.
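To make that concrete, here is a minimal sketch of such a test procedure. Everything in it is hypothetical: `send_stimulus` and `query_alerts` stand in for whatever injection and query mechanisms your SIEM actually exposes (syslog forwarding, a REST API, event replay), and the brute-force rule is just an example correlation. The point is the shape of the check: inject known base events, then assert the rule fired.

```python
# Hypothetical stimulus-response harness for a SIEM use case.
# send_stimulus / query_alerts are stubs so the logic runs standalone;
# in practice they would wrap your SIEM's real ingestion and alert APIs.

FAKE_LOG: list = []  # stand-in for the SIEM's event store

def send_stimulus(event: dict) -> None:
    """Inject a synthetic base event (stub: append to a fake log)."""
    FAKE_LOG.append(event)

def query_alerts(rule_name: str) -> list:
    """Ask the SIEM which alerts a rule produced (stub correlation:
    fire if 3 or more login failures were observed)."""
    failures = [e for e in FAKE_LOG if e.get("action") == "login_failure"]
    return [{"rule": rule_name}] if len(failures) >= 3 else []

def run_test_procedure() -> bool:
    """Stimulus-response check: generate the base events the model
    predicts, then confirm the correlation rule actually fired."""
    for _ in range(3):
        send_stimulus({"host": "10.0.0.5", "action": "login_failure"})
    return len(query_alerts("brute-force-detection")) > 0

if __name__ == "__main__":
    print("PASS" if run_test_procedure() else "FAIL")
```

Run on a schedule, a `FAIL` tells you either the use case or the log pipeline broke, which is exactly the "is it my rule or my logs" question from earlier in the thread.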
Doing so, you will avoid "blind progress". But you cannot guarantee 100% true positives and 0% false positives.