Getting Access Denied when fetching files from Cloud CDN

Hi,

We're using Cloud CDN with a GCS bucket as the backend. We're also using Signed URLs with URLPrefix to prevent public access.

This mostly works fine, but now and then, some users experience 403 responses:

<?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message></Error>

What's worth noticing is that these kinds of errors can happen in the middle of many successful requests. E.g. a single user can have the following requests:

https://domain.com/url-prefix/file1?URLPrefix=prefixbase64&Expires=1712220069&KeyName=our-key&Signat...  - Code 200

https://domain.com/url-prefix/file2?URLPrefix=prefixbase64&Expires=1712220069&KeyName=our-key&Signat... - Code 200

https://domain.com/url-prefix/file3?URLPrefix=prefixbase64&Expires=1712220069&KeyName=our-key&Signat... - Code 403

https://domain.com/url-prefix/file4?URLPrefix=prefixbase64&Expires=1712220069&KeyName=our-key&Signat... - Code 200

I.e. only the request for file3 fails with 403. (Notice that the prefix, expiration and signature are the same for all the requests.)

Furthermore, I get the following warning in the load balancer logs for the failed request:

{
  "insertId": "ephmc7fjnrey6",
  "jsonPayload": {
    "@type": "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry",
    "backendTargetProjectNumber": "",
    "cacheId": "FRA-1209ea83",
    "remoteIp": "...",
    "cacheDecision": [
      "RESPONSE_HAS_CACHE_CONTROL",
      "RESPONSE_CACHE_CONTROL_DISALLOWED_CACHING",
      "RESPONSE_HAS_EXPIRES",
      "RESPONSE_HAS_CONTENT_TYPE",
      "CACHE_MODE_CACHE_ALL_STATIC"
    ],
    "statusDetails": "response_sent_by_backend"
  },
  "httpRequest": {
    "requestMethod": "GET",
    "requestUrl": "...",
    "requestSize": "249",
    "status": 403,
    "responseSize": "425",
    "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36 Edg/123.0.0.0",
    "remoteIp": "",
    "referer": "..",
    "cacheLookup": true,
    "serverIp": "...",
    "latency": "0.059912s"
  },
  "resource": {
    "type": "http_load_balancer",
    "labels": {
      "url_map_name": "some-load-balancer",
      "project_id": "...",
      "zone": "global",
      "target_proxy_name": "...-proxy-2",
      "forwarding_rule_name": "...-forwarding-rule-https",
      "backend_service_name": ""
    }
  },
  "timestamp": "2024-04-04T07:48:37.615757Z",
  "severity": "WARNING",
  "logName": "...",
  "trace": "...",
  "receiveTimestamp": "2024-04-04T07:48:38.293388621Z",
  "spanId": "574a047ae0176190"
}

However, there's nothing in the log that gives me any clue.

Any help here would be greatly appreciated. 


Hi @simen-andresen, welcome to the Google Cloud Community!

If I understand correctly, you are trying to give a user access to files in a GCS bucket using signed URLs. There are a couple of possible reasons why the user receives a 403 error when trying to access the files.

The most common reasons for 403 errors are:

  • The request has no signature or token and is therefore treated as invalid.
  • The signature or token has already expired and is therefore considered invalid.

In your case, since you mentioned that the prefix, expiration and signature are the same for all requests, it might be possible that the cookie of the URL has already expired as it is time-limited. Storing and reusing this URL will cause issues and you should always make a fresh request to follow the redirect every time you access a resource.

Consider the recommended bucket architecture for securing the data in your bucket.

You can also visit this public documentation to guide you in maintaining HTTPS-based static websites on Cloud Storage with Cloud CDN.

For further assistance, consider filing a ticket with our Google Support team. They are well-equipped to handle issues like these.

Thanks for your reply, @Rfelizardo.


@Rfelizardo wrote:

.. it might be possible that the cookie of the URL has already expired as it is time-limited.


I cannot find any documentation on "cookie of the URL" for Cloud CDN's Signed URLs. Are you thinking of Signed Cookies?


@Rfelizardo wrote:

Storing and reusing this URL will cause issues and you should always make a fresh request to follow the redirect every time you access a resource.


I should probably have mentioned that we create signed URLs with a prefix as described here. I think it should be fine to reuse the signature/key to create multiple custom signed URLs; at least I cannot find any documentation saying you shouldn't. Also, if we cannot reuse the signature, we'd lose a lot of performance, since the end user is making 10-20 requests per second.
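
For reference, here's roughly what our signing looks like. This is a minimal Python sketch, not our actual code; the prefix and the base64 key value below are placeholders, and it assumes the URLPrefix signing format described in the Cloud CDN docs (URL-safe base64 prefix, HMAC-SHA1 signature over the URLPrefix/Expires/KeyName string):

import base64
import hashlib
import hmac
import time

def sign_url_prefix(url_prefix, key_name, base64_key, expires_in_seconds):
    """Return the query string appended to every URL under url_prefix."""
    # The prefix (scheme + host + path prefix) is URL-safe base64 encoded.
    encoded_prefix = base64.urlsafe_b64encode(url_prefix.encode()).decode()
    expires = int(time.time()) + expires_in_seconds
    policy = f"URLPrefix={encoded_prefix}&Expires={expires}&KeyName={key_name}"
    # The signature is an HMAC-SHA1 over the policy string, using the decoded CDN key.
    key = base64.urlsafe_b64decode(base64_key)
    signature = base64.urlsafe_b64encode(
        hmac.new(key, policy.encode(), hashlib.sha1).digest()).decode()
    return f"{policy}&Signature={signature}"

# The query string is computed once and reused for every file under the prefix.
# (Placeholder key value; the real key is the one registered under KeyName.)
params = sign_url_prefix("https://domain.com/url-prefix/", "our-key",
                         "nZtRohdNF9m3cKM24IcK4w==", 3600)
for name in ("file1", "file2", "file3", "file4"):
    print(f"https://domain.com/url-prefix/{name}?{params}")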

Good day @simen-andresen,

That is correct; you may opt to use Signed Cookies instead of Signed URLs.

In situations where signing a large number of URLs for each user is impractical, signed cookies are a viable solution.

It is also advisable to use Signed Cookies when giving time-limited access to a set of files.
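
For illustration, here is a minimal Python sketch of how a Cloud CDN signed cookie could be generated. The prefix and key value are placeholders; it assumes the documented Cloud-CDN-Cookie format with colon-separated URLPrefix, Expires, KeyName, and Signature fields, signed with HMAC-SHA1:

import base64
import hashlib
import hmac
import time

def signed_cookie_value(url_prefix, key_name, base64_key, expires_in_seconds):
    """Build the value of the Cloud-CDN-Cookie cookie."""
    encoded_prefix = base64.urlsafe_b64encode(url_prefix.encode()).decode()
    expires = int(time.time()) + expires_in_seconds
    policy = f"URLPrefix={encoded_prefix}:Expires={expires}:KeyName={key_name}"
    key = base64.urlsafe_b64decode(base64_key)  # decoded signing key (placeholder below)
    signature = base64.urlsafe_b64encode(
        hmac.new(key, policy.encode(), hashlib.sha1).digest()).decode()
    return f"{policy}:Signature={signature}"

# Set once via a Set-Cookie header; the browser then sends the cookie with every
# request under the prefix, so the individual URLs need no query parameters.
cookie = "Cloud-CDN-Cookie=" + signed_cookie_value(
    "https://domain.com/url-prefix/", "our-key", "nZtRohdNF9m3cKM24IcK4w==", 3600)
print(cookie)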

Let me know if this works for you.

Hello again @Rfelizardo, and thanks for the reply.

Very good, I'll give the signed cookies a shot!

Just curious: using the signed URL mechanism to sign URLs for a large set of files through Cloud CDN isn't really impractical from my perspective, and it works in ~98% of cases. Also, given the URLPrefix method, I thought it would be a great fit for our case. So I still find it strange that we get the errors mentioned above. At the very least, I'd expect some documentation saying to use signed cookies rather than signed URLs when dealing with a large number of files, or alternatively a more descriptive error message.