More of a sanity check based on the product guide, but can you confirm whether my understanding is right? Apologies if this is confusing...
Datacentre 1 = Multiple prod MWGs + 1 x test MWG
Datacentre 2 = Multiple prod MWGs + 1 x test MWG
Datacentre 3 = 1 x prod MWG
Prod traffic from Datacentre 1 & Datacentre 2 is load-balanced by BIG-IP
I have collected some details. I hope I can help you a little bit, but I am not too confident with the cluster stuff :-)
Important to know: the policy (everything on the policy tab) is always synced across ALL members of a cluster. You cannot split test and prod within one cluster by using network groups, etc. If you want a single cluster but different policies on prod and test, you have to build a policy that reflects this. For example, you can create two top-level rule sets, one called "Test" and one called "Prod". You could then create two lists containing the hostnames of all prod nodes and all test nodes, and use a criterion such as "System.HostName is in list 'prod'" to apply the production policy.
By doing so you have ONE policy on all nodes, but depending on whether a node's hostname is in the "prod" or "test" list, only the matching part of the policy is applied. Complete separation is not possible.
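To make the idea concrete, here is a toy Python sketch of the hostname-based branching described above. This is NOT actual MWG rule syntax, and the hostnames and function name are made-up examples; it only mimics what "System.HostName is in list 'prod'" does at the top of the shared policy.

```python
# Toy illustration of one shared policy branching on the node's
# hostname. All names below are hypothetical examples.

PROD_NODES = {"mwg-prod-01", "mwg-prod-02", "mwg-prod-03"}
TEST_NODES = {"mwg-test-01", "mwg-test-02"}

def select_policy(hostname: str) -> str:
    """Mimics top-level rule sets gated by 'System.HostName is in list ...'."""
    if hostname in PROD_NODES:
        return "prod policy"
    if hostname in TEST_NODES:
        return "test policy"
    return "no matching top-level rule set"

print(select_policy("mwg-prod-01"))  # prod policy
print(select_policy("mwg-test-02"))  # test policy
```

Every node evaluates the same policy, but only the rule set whose list contains its own hostname takes effect, which is exactly why this gives logical (not complete) separation.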
Additionally, I would like to note that the runtime group is only used to exchange runtime information such as quota data; it does not affect distribution of the policy either. Network groups are only used for routing information.
Priorities are an interesting topic. They only come into play when there is no manual intervention, e.g. a node goes offline, the configuration gets changed, and the node comes back online. If you add a node that was stand-alone before, it will receive the current policy of the cluster, independent of its priority. Also, a manual change in the GUI always overwrites the policy, even if the system you make the change on has a lower priority.
To create a master/site scenario, you can pick one node as the master, give it a higher priority, and turn off the GUI on all other nodes (after the cluster has been set up). That way you only have one node on which to make changes. If a node goes down and comes back with a different configuration, it will load the configuration from the "master" (because the master has the highest priority).
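The two rules above (rejoining nodes pull the policy from the highest-priority node; a manual GUI change always wins) can be sketched as a simplified model. This is an assumption-laden toy, not real MWG code; the class and function names are invented for illustration.

```python
# Simplified model of the priority behaviour described above.
# Not real MWG code; names are hypothetical.

class Node:
    def __init__(self, name: str, priority: int, policy: str):
        self.name, self.priority, self.policy = name, priority, policy

def rejoin(node: "Node", cluster: list) -> None:
    """A node coming back online adopts the policy of the
    highest-priority node in the cluster."""
    master = max(cluster, key=lambda n: n.priority)
    node.policy = master.policy

def gui_change(edited_node: "Node", cluster: list, new_policy: str) -> None:
    """A manual GUI change overwrites the policy everywhere,
    regardless of the editing node's priority."""
    for n in cluster:
        n.policy = new_policy
    edited_node.policy = new_policy
```

For example, a stale site node calling `rejoin` against a cluster containing the high-priority master ends up with the master's policy, while `gui_change` made on a low-priority node still replaces the policy on every node.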
I hope this helps you to get started.