Since late on June 2nd, I've been getting 503 responses from my Apigee proxy endpoints that call the OpenAI API. The failure rate varies between 25% and 100% of requests. I haven't made any proxy or configuration changes recently, and I'm not hitting OpenAI API rate limits.
I have separate endpoints for "chat completions" and "speech recognition" that call OpenAI, and both are affected. My APIs that don't call OpenAI (e.g. ones that call services on Azure instead) don't have this issue.
The message I get in the CORSResponseOrErrorFlowExecution step of the PostFlow is:
Status: 503
Reason phrase: Service Unavailable
Body: {"fault":{"faultstring":"The Service is temporarily unavailable","detail":{"errorcode":"messaging.adaptors.http.flow.ServiceUnavailable","reason":"TARGET_CONNECT_TIMEOUT"}}}
I've tried:
Do you recommend any other debugging steps, or any kind of network configuration I could investigate? Might moving to NAT IPs help?
Also, is there any way for you to check that connections to "api.openai.com" are working reliably? I'm not getting far with OpenAI's support bot, but they suggested: