
GCS override cache-control on private buckets

I'd like to use a CDN to cache commonly requested assets that live in a private bucket. The current blocker is that CDNs respect the Cache-Control header, and GCS serves objects in private buckets with Cache-Control: private.

I've tried overriding the Cache-Control metadata on objects within my private bucket to public with a long TTL, but it seems to be ignored.

How would one set the Cache-Control header on a private bucket so that CDNs can cache the responses?

3 REPLIES

What are your max-age settings?

If max-age is not specified in the response, Cloud CDN caches it for 3600 seconds by default. Since static content is less likely to change, you can set max-age even higher, as described in the documentation, which should further improve the cache hit ratio.
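For example, a longer max-age can be set directly on an object's metadata with gsutil (a minimal sketch, assuming a 24-hour TTL; the bucket and object names are placeholders):

```shell
# Set Cache-Control metadata on the object so Cloud CDN can cache it
# for 24 hours. [BUCKET_NAME]/[OBJECT_NAME] are placeholders.
gsutil setmeta -h "Cache-Control:public, max-age=86400" gs://[BUCKET_NAME]/[OBJECT_NAME]
```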

Why would the CDN cache by default if the Cache-Control directive from Google states no-cache? Overriding this doesn't seem to have any effect on a private bucket either.

Even with "CACHE_ALL_STATIC" or "FORCE_CACHE_ALL" enabled, if you have not specified the max-age parameter, the default TTL is 3600 seconds. If the objects are unlikely to change, you can set max-age higher, which will improve the cache hit ratio.

If an applicable object doesn't have a Cache-Control metadata entry, Cloud Storage uses the default value of public, max-age=3600.

When objects are served from a public Cloud Storage bucket, they have a Cache-Control: public, max-age=3600 response header applied by default. This allows the objects to be cached when Cloud CDN is enabled. You can refer to the documentation to set up Cloud CDN with a Cloud Storage bucket.

Also, you cannot control caching through headers in your HTTP request for an object: Cloud Storage ignores them and sets the response Cache-Control header based on the stored metadata values. This is covered in the documentation.
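To confirm which Cache-Control value Cloud Storage will actually serve for an object, you can inspect its stored metadata (a sketch; the bucket and object names are placeholders):

```shell
# Print object metadata, including the Cache-Control entry that GCS
# will use for the response header.
gsutil stat gs://[BUCKET_NAME]/[OBJECT_NAME]
```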

There's a subtle difference between responses internally, which results in GCS telling Cloud CDN not to cache the item. So, although the bucket itself might be publicly readable, the item itself is still considered private (although readable!).

You can try granting read access to all users there. To make an individual object readable to everyone on the public internet, run the command below:

  gsutil acl ch -u AllUsers:R gs://[BUCKET_NAME]/[OBJECT_NAME]

It seems that without this, the item can't be cached, even though the bucket has allUsers:objectViewer as per here.

According to the CDN caching rules, backend buckets must have their objects shared publicly. Therefore, you need to add the same '"entity": "allUsers"' permission to all your objects in order to share them publicly, which satisfies the CDN caching rules.
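As a sketch, the same read permission can be applied to every existing object in the bucket with a recursive ACL change (the bucket name is a placeholder; -m parallelizes the update across objects):

```shell
# Grant read access to AllUsers on every object in the bucket,
# recursively and in parallel.
gsutil -m acl ch -r -u AllUsers:R gs://[BUCKET_NAME]
```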

Is it possible to modify the default max-age of the object at the CDN level without setting the object metadata in the bucket? We need to keep the content in the CDN for a longer duration and don't want to change our code just to set this value, since uploads happen from different sources.

Ans: Yes, using the FORCE_CACHE_ALL mode. Cloud CDN has settings to change the default TTL. However, if GCS sets max-age in its responses (or Cache-Control: private), that takes precedence over the CDN-level setting, except in FORCE_CACHE_ALL mode. FORCE_CACHE_ALL also overrides the VPC-SC-related cache disablement that GCS enforces. But if you want to stick with CACHE_ALL_STATIC, you will probably need to make sure the GCS metadata is correct.
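A minimal sketch of setting this at the CDN level with gcloud, assuming your Cloud CDN origin is a backend bucket named my-backend-bucket (a hypothetical name):

```shell
# Force Cloud CDN to cache all responses regardless of the
# Cache-Control headers GCS sends, with a default TTL of one day.
gcloud compute backend-buckets update my-backend-bucket \
    --cache-mode=FORCE_CACHE_ALL \
    --default-ttl=86400
```

Note that FORCE_CACHE_ALL caches everything the backend serves, so it is only appropriate when the bucket contains no per-user or otherwise sensitive content.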

Let me know if the information above makes sense. You can also refer to the Public Issue Tracker for better understanding.