First, I need to qualify my response: I know very little about IntelMQ. However, after a cursory look at the FAQ, the high-level descriptions, and the examples, it appears that the goals of the two projects are quite different.
The primary goals of IntelMQ appear to be:

- Normalize a large number of data feeds (security feeds, log files, tweets) using a message queuing protocol
- Support a wide variety of these data feeds in a consistent format (JSON, etc.)
- Persist the feeds in a variety of systems (Splunk, Elasticsearch, etc.)
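To make the normalization goal concrete, here is a toy sketch. The dotted field names (`source.ip`, `classification.type`, `time.source`) follow IntelMQ's harmonization convention, but the raw feed line and the parser itself are hypothetical, not an actual IntelMQ bot:

```python
import json

# Hypothetical raw entry from a security feed (illustrative only).
raw = "2024-01-15T10:00:00+00:00 198.51.100.7 malware-distribution"

def normalize(line):
    """Map one raw feed line onto IntelMQ-style dotted field names."""
    timestamp, ip, classification = line.split()
    return {
        "time.source": timestamp,
        "source.ip": ip,
        "classification.type": classification,
    }

event = normalize(raw)
print(json.dumps(event, sort_keys=True))
```

Whatever the original feed format, the output is a flat JSON object with a consistent vocabulary, which is what makes the downstream persistence targets interchangeable.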
The primary goals of DXL are:

- Connect a large number of clients (hundreds of thousands to millions) on a distributed fabric (which may span large geographic regions with fault tolerance)
- Share near-real-time security events with those clients (reputation change for a file, etc.)
- Let security products integrate easily with the fabric (TIE, MAR, Rapid7, Aruba, Check Point, etc.) and expose their functionality to connected clients in a way that hides deployment details (topic-based communication)
- Secure the fabric in a consistent way (PKI-based mutual authentication and certificate-based authorization)
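The topic-based communication mentioned above can be sketched with a minimal in-memory pub/sub model. This is a conceptual toy, not the OpenDXL client API: the `Fabric` class and the topic string are illustrative, and a real DXL deployment adds brokers, TLS, and authorization on top of this pattern:

```python
from collections import defaultdict

class Fabric:
    """Toy message fabric: subscribers register per topic, and
    publishers never learn who (or where) the subscribers are."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self._subscribers[topic]:
            callback(payload)

fabric = Fabric()
seen = []
# Hypothetical topic for file-reputation change events.
fabric.subscribe("/event/file/repchange", seen.append)
fabric.publish("/event/file/repchange",
               {"hash": "abc123", "reputation": "malicious"})
```

The point of the pattern is that the publisher addresses a topic, not an endpoint, so products can be swapped in and out behind the fabric without clients noticing.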
The two projects actually seem quite complementary. Exposing IntelMQ's normalized events on the DXL fabric would seem fairly straightforward (IntelMQ has an example of a similar integration with Splunk).
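A bridge between the two could be as simple as deriving a DXL topic from the normalized event's classification and forwarding the JSON as the payload. Everything here is a hypothetical sketch of that mapping; the topic scheme and function name are not from either project:

```python
import json

def to_dxl_message(event):
    """Hypothetical mapping: derive a topic from the event's
    classification and serialize the normalized fields as the payload."""
    topic = "/intelmq/event/" + event["classification.type"]
    return topic, json.dumps(event, sort_keys=True)

topic, payload = to_dxl_message({
    "source.ip": "198.51.100.7",
    "classification.type": "malware-distribution",
})
```

Because IntelMQ has already normalized the event, the bridge needs no feed-specific logic, which is what makes the integration look straightforward.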