Event rate describes the distribution of events after they are parsed and inserted into the database. It has no memory of how or when those events reached us.
Collection rate describes how events arrive at the SIEM. Events may arrive in bursts, may have latency between the time they are generated and the time they are sent to the SIEM, and may arrive out of order.
As an extreme, simplistic example, imagine a data source that generates a reliable 10 events/sec. The events are batched up and sent to the SIEM once per hour. At the end of the hour, once we collect, parse, and store the events for that hour, you will see an event rate of a solid 10 events/sec. The collection rate, however, will be 0 for most of the hour, and 36,000 events/sec (10 events/sec * 60 sec/min * 60 min/hr) for the last second.
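The arithmetic in that example can be sketched in a few lines (this is just an illustration of the numbers above, not any SIEM's actual calculation):

```python
# Illustration: a source generating a steady 10 events/sec whose events
# are batched and shipped to the SIEM once per hour.
EVENTS_PER_SEC = 10
SECONDS_PER_HOUR = 60 * 60

# The whole hour's events arrive in a single burst.
batch_size = EVENTS_PER_SEC * SECONDS_PER_HOUR
print(batch_size)  # 36000 events in one delivery

# Event rate (post-ingest): averaged over the hour the events cover.
event_rate = batch_size / SECONDS_PER_HOUR
print(event_rate)  # 10.0 events/sec

# Collection rate: 0 events/sec for 3,599 seconds, then the entire
# batch lands in roughly the final second.
collection_rate_peak = batch_size / 1
print(collection_rate_peak)  # 36000 events/sec
```

Same events, two very different rates, depending on whether you measure when the events were generated or when they were collected.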
Ohh. Nice. Thanks for your answer.
One last thing: is there a way for the SIEM to determine whether logs are missing from the different data sources, i.e., whether there is a gap between the normalized logs?
Best practice is to use inactivity timers on your data sources to alert you to unexpected gaps in log collection. You can configure inactivity timers as appropriate for each data source, and then use Alarms to alert you to unexpected outages.
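The core of an inactivity timer is simple: track the timestamp of the last event seen per source and alarm when that age exceeds a per-source threshold. Here's a minimal generic sketch (the names `last_seen` and `timeout_sec` are illustrative assumptions, not any particular SIEM's API):

```python
import time

def silent_sources(last_seen, timeout_sec, now=None):
    """Return data sources whose most recent event is older than
    their configured inactivity threshold.

    last_seen:   {source_name: unix_timestamp_of_last_event}
    timeout_sec: {source_name: inactivity_threshold_in_seconds}
    """
    now = now if now is not None else time.time()
    return [src for src, ts in last_seen.items()
            if now - ts > timeout_sec.get(src, 3600)]  # default: 1 hour

# Example: firewall last checked in 10 minutes ago (within its 15-minute
# threshold); domain controller last seen 3 hours ago (past its 30-minute
# threshold), so it would trigger an alarm.
now = 1_700_000_000
last_seen = {"firewall": now - 600, "dc": now - 3 * 3600}
timeout_sec = {"firewall": 900, "dc": 1800}
print(silent_sources(last_seen, timeout_sec, now))  # ['dc']
```

Note that the thresholds should reflect each source's normal cadence: a chatty firewall can have a tight timer, while a source that legitimately batches once per hour needs a longer one, or you'll alarm on normal behavior.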