Strongly agree. This would go a long way toward helping people understand the best way to organize data sources and create effective policies, and it would likely decrease the number of how-do-I support calls. Understanding the underlying operation and architecture of a product like this is important to using it properly and producing the desired outcome.
To the examples already provided by scott3boy, I'd add that it's important to understand how the data flow changes depending on the type of data source (standalone, parent, client, child) and the collection method (syslog, MEF, WMI, CIF).
Here is the path an event takes when it reaches the ERC (a rough code sketch of the same pipeline follows the list):
1. Receiver Filter Rules are applied.
2. The event is passed to the parser (ASP Rules) for the data source type configured for the IP address the event was received from.
3. Metadata (Data Source Rules) is added to the parsed event to further identify its content.
4. The event is aggregated and stored in the ERC database, where it awaits retrieval by the ESM.
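To make the order of operations concrete, here's a minimal Python sketch of that four-step pipeline. This is purely illustrative: every name in it (Event, process_event, the lookup tables, the aggregation key) is a hypothetical stand-in I made up, not the Receiver's actual implementation or API.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    source_ip: str                                 # IP the event arrived from
    raw: str                                       # raw event payload
    fields: dict = field(default_factory=dict)     # parsed fields (step 2)
    metadata: dict = field(default_factory=dict)   # Data Source Rule tags (step 3)

# Hypothetical stand-ins for the Receiver's configuration.
FILTER_RULES = [lambda e: "noise" in e.raw]           # drop rules (step 1)
SOURCE_TYPE_BY_IP = {"10.0.0.5": "linux_syslog"}      # data source config keyed by IP
PARSERS_BY_TYPE = {"linux_syslog": lambda e: {"msg": e.raw.strip()}}  # "ASP rules"
DATA_SOURCE_RULES = {"linux_syslog": {"vendor": "generic", "format": "syslog"}}

def process_event(event: Event, db: dict) -> None:
    # 1. Receiver Filter Rules: discard events that match a filter.
    if any(rule(event) for rule in FILTER_RULES):
        return

    # 2. Parse with the rules for the data source type configured
    #    for the IP address the event came from.
    source_type = SOURCE_TYPE_BY_IP.get(event.source_ip)
    if source_type is None:
        return  # no data source configured for this IP
    event.fields = PARSERS_BY_TYPE[source_type](event)

    # 3. Data Source Rules: attach metadata identifying the content.
    event.metadata = DATA_SOURCE_RULES.get(source_type, {})

    # 4. Aggregate and store: identical events collapse into one record
    #    with a counter; the ESM later retrieves these records.
    key = (event.source_ip, event.fields.get("msg"))
    record = db.setdefault(key, {"event": event, "count": 0})
    record["count"] += 1

db: dict = {}
process_event(Event("10.0.0.5", "sshd: accepted password"), db)
process_event(Event("10.0.0.5", "sshd: accepted password"), db)
print(next(iter(db.values()))["count"])  # -> 2: duplicates are counted, not stored twice
```

The aggregation key above is deliberately simplistic; the real product's aggregation criteria are its own, but the sketch shows the basic idea that duplicate events are counted rather than stored individually before the ESM pulls them.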
Hope this helps.