For my whitelists, blacklists, and almost every other list that relates to a URL, I break them into two types: hosts only and full URLs.
Name: Global Whitelist: Domains or Global Whitelist: URLs
URL.Host.BelongsToDomains (Global Whitelist: Domains) equals true OR
URL matches in list Global Whitelist: URLs
The string list Global Whitelist: Domains contains just domain or host names:

adobe.com
apache.org
blackberry.com
broker.gotoassist.com
cdc.gov
cisco.com
The wildcard list Global Whitelist: URLs is where I put full wildcard URLs:

http://www.google.com/uds/api/visualization/1.0/*
http://www.google.com/uds/modules/gviz/1.0/*
http://ajax.googleapis.com/ajax/static/modules/gviz/1.0/*
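To make the two-list split concrete, here is a minimal Python sketch of the rule logic above: a host-only check (mimicking URL.Host.BelongsToDomains) tried first, falling back to full-URL wildcard matching. The list contents are copied from the examples; the function names and the use of fnmatch for wildcards are my own assumptions, not the product's implementation.

```python
from urllib.parse import urlparse
from fnmatch import fnmatch

# Hypothetical stand-ins for the two lists described above.
WHITELIST_DOMAINS = {"adobe.com", "apache.org", "blackberry.com",
                     "broker.gotoassist.com", "cdc.gov", "cisco.com"}
WHITELIST_URLS = [
    "http://www.google.com/uds/api/visualization/1.0/*",
    "http://www.google.com/uds/modules/gviz/1.0/*",
    "http://ajax.googleapis.com/ajax/static/modules/gviz/1.0/*",
]

def host_belongs_to_domains(host, domains):
    """Approximates URL.Host.BelongsToDomains: true if the host equals a
    listed domain or is a subdomain of one (label-boundary suffix match)."""
    return any(host == d or host.endswith("." + d) for d in domains)

def is_whitelisted(url):
    """First criterion: host belongs to a whitelisted domain.
    Second criterion (OR): full URL matches a wildcard pattern."""
    host = urlparse(url).hostname or ""
    if host_belongs_to_domains(host, WHITELIST_DOMAINS):
        return True
    return any(fnmatch(url, pattern) for pattern in WHITELIST_URLS)
```

Note the label-boundary check (`"." + d`): a plain substring or suffix test would wrongly whitelist hosts like notadobe.com.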
For those who haven't come across it yet, it's important to note the difference between "Stop Cycle" and "Stop Rule Set". We use "Stop Cycle" for our global whitelist, which sits upstream of authentication, anti-malware scanning, and DLP, so we are cautious about what goes into the global whitelist.
We use a completely different set of lists for things that should not be blocked but do still need the scanning and authentication.
On a separate note, there is a performance consideration. Wildcard patterns that start with a wildcard, e.g. *.google.com, may have to scan an entire string before giving up (unless there is some behind-the-scenes optimization). So if you need matches of that type, matching against URL.Host or URL.Domain instead can improve performance and can even eliminate some of those leading-wildcard patterns entirely.
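The cost difference can be sketched in a few lines of Python. This is an illustration of the general principle, not the product's matcher: a naive wildcard engine facing a leading `*` probes every starting offset in the subject, while a host/domain check reduces to a single suffix comparison.

```python
def wildcard_suffix_match(host, pattern):
    """Naive matcher for a pattern beginning with '*': it must probe
    every starting offset in the subject before it can give up."""
    suffix = pattern[1:]  # drop the leading '*'
    return any(host[i:] == suffix for i in range(len(host) + 1))

def domain_suffix_match(host, domain):
    """Equivalent result for subdomains as one suffix comparison.
    Unlike the literal pattern '*.google.com', this also matches the
    bare domain itself, which is usually what you want in a whitelist."""
    return host == domain or host.endswith("." + domain)
```

This is why moving leading-wildcard entries out of a URL wildcard list and into a host/domain string list both speeds things up and simplifies the list.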