So an Apps Script task ran fine and created a bunch of files. Apps Script tends to be pretty robust...kudos Google!
After it had created the files, it sent an API request to AppSheet to add a row to a table for each of the created files. Once a row is created, it triggers the next part of an automation.
However...a Google Drive Internal Server Error 500 scuppered all of that...bad Google!
That was the API response at the bottom, and of course none of the rows in the payload were added, leaving my workflow in a state that makes me want to cry...no really...actually cry!
The [FamilyMemberID] column is a Ref to 'FamilyMembers', so when AppSheet tries to add the row it probably has to look up the Ref ID to satisfy VCs etc. The Drive Error 500 scuppered all of that, though, and consequently the API gave up...much too soon in my opinion.
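For anyone unfamiliar with the call involved: the request body for an AppSheet API "Add" action looks roughly like this. This is only a sketch — the column names and values below are illustrative placeholders, not the actual rows from my app.

```javascript
// Sketch of an AppSheet REST API "Add" request body, built Apps Script style.
// Column names (FamilyMemberID, DocID, FileName) and values are placeholders.
function buildAddRowsRequest(rows) {
  return {
    Action: "Add",
    Properties: { Locale: "en-GB" },
    Rows: rows, // one object per file created by the script
  };
}

const payload = buildAddRowsRequest([
  { FamilyMemberID: "FM-001", DocID: "doc-id-placeholder", FileName: "report.pdf" },
]);

// In Apps Script this would then be posted with something like:
// UrlFetchApp.fetch(url, {
//   method: "post",
//   contentType: "application/json",
//   payload: JSON.stringify(payload),
// });
```

The key point for this thread is that the whole `Rows` array is one request — so when the backend hits the Drive 500, the entire batch fails together.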
Has anyone else experienced an I/O error from Google that made them want to cry?
Sometimes I find the error messages don't really point to the true error.
I always start with the bottom-most error first. It is usually the first one encountered and may then cause secondary errors.
"Error while reading" - What is the relationship between the DocID in the error message and those listed in the data rows to be added? Is there a data row that references the doc indicated in the error? If so, have you tried opening that doc? Does it open successfully when tried manually?
The "not accessible" portion of the message may be secondary but it might be worth checking that the table DOESN'T have some configuration implemented that might prevent the Bot user from accessing the table.
Lastly, "bad request" could refer either to data that is not formatted properly OR to "required" columns that are missing from the data. I see some columns in the data rows with blank values. Is that expected? Are ALL of the columns represented?
I hope this helps!
Maybe try implementing an exponential back-off - I've found that it smooths out these little hiccups.
It's a way of handling certain transient errors; here is an example that I use when retrieving responses from OpenAI.
Here is what Appster had to say:
_______________________________________________________________________________
Here’s a quick primer on exponential back-offs for scripting:
What is an Exponential Back-off?
An exponential back-off is a strategy for handling retries in scripts when an action, such as an API call or network request, fails temporarily. Instead of retrying immediately, which can add strain or waste resources, the script waits progressively longer between retries.
How It Works:
1. **Initial Attempt**: When a request fails, instead of retrying immediately, the script waits for a small initial delay (e.g., 1 second).
2. **Increasing Delay**: If the retry fails again, the delay before the next retry is increased exponentially. For example, the script might wait 2 seconds after the first retry, 4 seconds after the second retry, 8 seconds after the third, and so on.
3. **Max Wait & Retries**: You set a cap on the maximum delay (e.g., 32 seconds) and the number of retries, so the script eventually stops trying if the issue persists.
Why Use It?
- **Reduces Resource Usage**: Prevents overwhelming a server or service by spacing out retries.
- **Better Error Handling**: Gives time for temporary issues (like network blips or server overloads) to resolve themselves.
- **Improves Reliability**: Makes the script more resilient to transient failures, which are common in network interactions.
Usage Notes:
- Make sure to set realistic limits for delay and retry counts.
- It’s ideal for handling temporary issues, but persistent errors need a different approach.
This approach is common in networked systems, cloud-based scripts, and APIs to improve script stability and robustness.
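The steps in the primer above can be sketched as a small retry wrapper. This is my own minimal version, not an official Google or AppSheet helper — the function and parameter names are made up for illustration. In Apps Script you would swap the `sleep()` helper for the built-in `Utilities.sleep(ms)`.

```javascript
// Synchronous wait helper; Apps Script provides Utilities.sleep(ms) instead.
function sleep(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // busy-wait, fine for a short script
}

// Retry fn() with exponentially increasing delays, capped in both
// delay length and retry count, as described in the primer above.
function withBackoff(fn, maxRetries, baseDelayMs, maxDelayMs) {
  for (let attempt = 0; ; attempt++) {
    try {
      return fn(); // e.g. a UrlFetchApp.fetch(...) call that may throw on 500
    } catch (e) {
      if (attempt >= maxRetries) throw e; // persistent failure: give up
      // Delays double each attempt: 1s, 2s, 4s, 8s... capped at maxDelayMs.
      sleep(Math.min(baseDelayMs * 2 ** attempt, maxDelayMs));
    }
  }
}
```

A transient Drive 500 would then cost a few seconds of waiting instead of killing the whole batch; only if the error persists past the retry cap does it surface to the workflow.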