
    How to test use case correlation

    ecan007

      Is there a way to test use case correlation?

      For example: when you create a new use case and it doesn't trigger,

      does that mean the correlation couldn't find any matching events, or that the correlation rule itself isn't correct?

        • 1. Re: How to test use case correlation
          mikeofmany

          It could be both.

           

          I suggest looking at where your Use Case comes from.  If it comes from a gap observed via hunting or log review, then run the Use Case against the logs you already have that contain the issue; if it doesn't fire, it's your Use Case setup that is faulty.  If the Use Case comes from some external source, then running it against your logs or replaying them in your ESM/ACE might not fire simply because you don't have the necessary logs -- in that case I suggest setting up a test scenario for the Use Case and actually looking at stimulus and response within your lab or test area.
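          To make the stimulus/response idea concrete, here is a minimal, hedged sketch (plain Python, not an ESM feature; the Receiver address, port, and log lines are placeholders you would swap for whatever events your Use Case actually correlates on):

              # Inject synthetic "stimulus" events so you can watch whether the
              # correlation fires. Assumes the Receiver accepts syslog over UDP 514.
              import socket
              from datetime import datetime

              RECEIVER = "10.0.0.50"   # hypothetical Receiver address
              PORT = 514

              def send_stimulus(message, host="testhost"):
                  """Send one RFC 3164-style syslog line to the Receiver."""
                  timestamp = datetime.now().strftime("%b %d %H:%M:%S")
                  line = f"<134>{timestamp} {host} testapp: {message}"
                  with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                      sock.sendto(line.encode("utf-8"), (RECEIVER, PORT))

              # Replay the events your rule is supposed to chain together, then
              # check in the ESM whether the correlated event / alarm fired.
              send_stimulus("Failed password for admin from 192.0.2.10")
              send_stimulus("Accepted password for admin from 192.0.2.10")

          If the rule still doesn't fire with the stimulus present in the logs, the problem is the rule itself rather than missing data.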

          • 2. Re: How to test use case correlation
            ecan007

            Thx Mike,

             

            I see what you mean. I was hoping there was an option where the SIEM could tell you whether the correlation is correct or not.

            • 3. Re: How to test use case correlation
              gehinger

              Well... A correlation will always be "correct". The point would be... "Is it relevant?"...

               

              Try to think outside of the technology when creating a new rule. Create a model first, then implement it in the SIEM (see the sketch after the list below):

              - Each option added in the SIEM should correspond to an element of your model

              - Each element of the model should be implemented.
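               
              As a rough illustration of that two-way check (nothing McAfee-specific; the model elements and rule options below are made-up examples):

                  # Keep the rule's model and its SIEM implementation side by side,
                  # then check both directions of the mapping.
                  model_elements = {
                      "source is external",
                      "5 failed logons",
                      "within 10 minutes",
                      "same destination account",
                  }

                  rule_options = {  # option configured in the SIEM -> model element it covers
                      "filter: src_ip not in internal_nets": "source is external",
                      "threshold: 5 matches of 'logon failure'": "5 failed logons",
                      "time window: 600 s": "within 10 minutes",
                      "group by: destination user": "same destination account",
                  }

                  implemented = set(rule_options.values())
                  print("Model elements missing from the rule:", model_elements - implemented)
                  print("Rule options with no model element:",
                        {o for o, e in rule_options.items() if e not in model_elements})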

               

              Then think about the conditions under which the base events are generated and turn them into a test procedure. Ideally, you will run this procedure on a regular basis; it can also be an input for your audits/pentests. Two birds with one stone.
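               
              A hedged sketch of what such a recurring procedure could look like (the scenarios are invented, and send_stimulus() is the injection helper sketched earlier in this thread):

                  import csv
                  from datetime import datetime

                  # One entry per use case you want to regression-test on a schedule.
                  scenarios = [
                      {
                          "name": "brute force then success",
                          "stimuli": [
                              "Failed password for admin from 192.0.2.10",
                              "Failed password for admin from 192.0.2.10",
                              "Accepted password for admin from 192.0.2.10",
                          ],
                          "expected_alarm": "Possible credential compromise",
                      },
                  ]

                  with open("correlation_test_run.csv", "a", newline="") as f:
                      writer = csv.writer(f)
                      for s in scenarios:
                          for line in s["stimuli"]:
                              send_stimulus(line)  # helper from the earlier sketch
                          # Verification stays manual here: check the ESM for the
                          # expected alarm and keep the timestamped row as an audit trail.
                          writer.writerow([datetime.now().isoformat(), s["name"],
                                           s["expected_alarm"], "verify in ESM"])

              Run it from cron or Task Scheduler and the CSV doubles as the audit/pentest input mentioned above.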

               

              Doing so, you will avoid "blind progress". But you cannot guarantee 100% true positives and 0% false positives.