Currently, files uploaded to HubSpot are served from a static URL with the Cache-Control header directive "max-age=1209600", which instructs client browsers to cache the file they receive for 14 days (1,209,600 seconds). HubSpot does provide the ability to replace a file with an updated version, but because the replacement is served from the same URL, users returning to the website or landing page will continue to see the original cached version for up to two weeks after the file was replaced. The only exception is if the user forcibly reloads the file URL or otherwise clears their browser cache, but (a) users have to know that they need to do this, and (b) most browsers provide no facility for refreshing links to downloadable files, like Word documents or other assets, that are not natively rendered by the browser.

This idea has two parts:

1. Website/landing pages that link to a file should include a version query parameter in the rendered link, and that version should change whenever the file is updated. This ensures that users who refresh the website/landing page see an updated link to the file, so their browsers fetch the updated file rather than relying on the older cached version.

2. File assets should expire from browser caches more frequently than once every two weeks -- at least within 24 hours, ideally within 1-4 hours. This change need not sacrifice bandwidth: a browser that already has the file cached will automatically perform a conditional request for the content, and your CDN provider can simply respond with a "304 Not Modified" status if the file has not changed. The only sacrifice is the latency of a single round trip from the client browser to the CDN edge server to perform the conditional revalidation.
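To make the two parts concrete, here is a minimal Python sketch. The URL and timestamp values are hypothetical, and this is not HubSpot's actual implementation -- just an illustration of (1) deriving a version token for the rendered link so it changes whenever the file is replaced, and (2) the server-side conditional-revalidation logic that answers 304 when the client's cached copy is still current:

```python
from hashlib import sha256
from typing import Optional, Tuple
from urllib.parse import urlencode

def versioned_url(base_url: str, last_modified: str) -> str:
    """Append a short version token derived from the file's last-modified
    timestamp, so the rendered link changes whenever the file is replaced."""
    token = sha256(last_modified.encode()).hexdigest()[:8]
    return f"{base_url}?{urlencode({'v': token})}"

def revalidate(request_etag: Optional[str], current_etag: str) -> Tuple[int, Optional[str]]:
    """Conditional-request handling: the browser sends its cached ETag in
    If-None-Match; if it still matches, reply 304 with no body so the
    browser reuses its cached copy. Otherwise send the full file (200)."""
    if request_etag is not None and request_etag == current_etag:
        return 304, None          # nothing to transfer; cache is still fresh
    return 200, current_etag      # full response plus the new ETag
```

With this in place, a stale cached copy costs only one lightweight round trip per revalidation, while a replaced file changes both the link and the ETag, so browsers pick it up on the next page load.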
The final part required is to ensure that your CDN/edge caches (which are controlled by HubSpot) do not serve older versions of the file for too long once a new version has been uploaded. As an external developer, I don't have visibility into this part of your system, but every CDN provider I'm aware of (Cloudflare, Akamai, AWS CloudFront) offers controls -- static configuration, per-response headers, and on-demand invalidations -- that can be used to ensure the CDN-cached content is fresh. I imagine the ideal scenario would be for the CDN cache to have a long TTL, and then to perform on-demand invalidation of the existing URL whenever a new version of a file is uploaded (since uploads presumably occur far less frequently than the content is served from cache).
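The long-TTL-plus-invalidation strategy can be sketched with a toy in-memory model. This is not any provider's actual API (real CDNs expose invalidation through their own interfaces, e.g. CloudFront's invalidation requests); it only illustrates the design: entries live at the edge for a long TTL, but replacing a file purges its URL immediately so the next request goes back to origin.

```python
import time

class EdgeCache:
    """Toy model of a CDN edge cache with a long TTL and on-demand
    invalidation when a file is replaced. Purely illustrative."""

    def __init__(self, origin, ttl_seconds=14 * 24 * 3600):
        self.origin = origin              # path -> current file contents at origin
        self.ttl = ttl_seconds            # long TTL: few origin fetches
        self._cache = {}                  # path -> (contents, cached_at)

    def get(self, path, now=None):
        now = time.time() if now is None else now
        hit = self._cache.get(path)
        if hit and now - hit[1] < self.ttl:
            return hit[0]                 # served from the edge, no origin fetch
        contents = self.origin[path]      # cache miss or expired: fetch from origin
        self._cache[path] = (contents, now)
        return contents

    def replace_file(self, path, new_contents):
        """Upload a new version AND invalidate the stale cached copy,
        so the next request fetches the fresh file from origin."""
        self.origin[path] = new_contents
        self._cache.pop(path, None)       # on-demand invalidation
```

Without the `replace_file` invalidation step, an updated origin file would keep being served stale from the edge for the full TTL, which is exactly the behavior this idea is asking HubSpot to avoid.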