Unfortunately, there is no way for the user to control this web cache behavior. How long a file is kept is up to MWG.
The web server can, of course, decide how long a cache is allowed to keep a file and whether it needs to be revalidated, but if, for example, disk space runs low, MWG may still delete cache entries even if they could have been kept longer.
Yes, Alok is right that it is up to MWG whether a file is held. Holding files for a very long time is not recommended, since this will fill up the /opt partition (the default is 10 minutes, I think).
I do not know if this completely meets your wishes, but I think it should be possible to avoid scanning the same file again and again when other users download it.
A rule set could be built that writes the MD5 hash of files larger than X MB into PD storage (in case the file is considered clean), where you can decide how long the value is stored.
You could then check whether the hash is already in PD storage and, if so, skip scanning. This should also renew the PD storage counter for the configured time (so the value would be stored for another 24 hours, for example).
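To make the idea concrete, here is a minimal Python sketch of that hash-with-TTL logic (not an actual MWG rule set; the size threshold, TTL, and the in-memory dictionary standing in for PD storage are all assumptions for illustration):

```python
import hashlib
import time

# Hypothetical values: "files larger than X MB" and the PD storage lifetime.
SIZE_THRESHOLD = 5 * 1024 * 1024  # X MB; smaller files are always scanned
TTL_SECONDS = 24 * 60 * 60        # how long a clean hash stays stored

# Stand-in for PD storage: MD5 hex digest -> expiry timestamp.
pd_storage = {}

def md5_of(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def should_scan(data: bytes, now: float = None) -> bool:
    """Return True if the file must go through AV scanning."""
    if now is None:
        now = time.time()
    if len(data) <= SIZE_THRESHOLD:
        return True  # small files: just scan, no caching
    digest = md5_of(data)
    expiry = pd_storage.get(digest)
    if expiry is not None and expiry > now:
        # Known clean: renew the counter and skip scanning.
        pd_storage[digest] = now + TTL_SECONDS
        return False
    return True

def mark_clean(data: bytes, now: float = None) -> None:
    """Call after a scan reported the (large) file as clean."""
    if now is None:
        now = time.time()
    if len(data) > SIZE_THRESHOLD:
        pd_storage[md5_of(data)] = now + TTL_SECONDS
```

Each hit renews the expiry, so a frequently downloaded file stays in the store; once nobody downloads it for the configured time, the entry expires and the next download is scanned again.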
I haven't seen such a rule set yet, only a limited one that did not cover all possible scenarios.
But it should be possible to create one. In the end, the files would still be downloaded, but not scanned again and again.