I want to use an AppSheet Action to store the search criteria (the text entered in the native AppSheet search field) in a table row. I want to perform SQL server-side batch processing on the subset of records shown to the user based on their search criteria.
If I mark each visible record with process_this_row=true, all these changes (dozens) need to sync back to the server first, which takes a while. Instead, I'm imagining sending the search criteria to the server as a single new row in a control table, triggering a batch job that uses the search criteria from the AppSheet search field.
Is there a function in AppSheet to get the search criteria string? Or a function that returns a list of IDs for the filtered records? Or is there a way to save changes to the server in a sub-second (or near sub-second) batch?
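To illustrate the control-table idea, here is a minimal sketch of what such a table might look like, assuming a MySQL-compatible backend; every name here is hypothetical:

```sql
-- Hypothetical control table: one row per batch request.
-- The AppSheet client would insert a single row here instead of
-- flagging dozens of records individually.
CREATE TABLE SearchBatchRequest (
  id           INT AUTO_INCREMENT PRIMARY KEY,
  search_text  VARCHAR(255) NOT NULL,  -- the user's search criteria
  requested_by VARCHAR(255) NOT NULL,  -- e.g. USEREMAIL() from AppSheet
  requested_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```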
Nope. You'd have to create your own search form as an alternative.
Nope.
I don't understand this one.
The last question,

"Or is there a way to save changes to the server in a sub-second (or near sub-second) batch?"

is about watching the pending row updates sync one row at a time back to the server. Our use case involves flagging a large collection of rows as reviewed=true using an Action. The user reviews all the rows, then uses the Action to batch-update the rows they've just reviewed. This works well within the app; however, the server needs to wait many seconds for the updates.
For 25 rows, this shows up as 25 in the red dot next to the sync button at the top right. We watch this number gradually decrement over the course of about 15-30 seconds. The app has low latency, but the server lags behind until the updates are complete.
Is it possible to expedite the server updates through the design of our app or a setting in AppSheet?
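To make the gap concrete, here is a rough sketch of the difference, using a hypothetical Reviews table; this illustrates the sync behavior described above, not AppSheet's actual SQL:

```sql
-- What the row-at-a-time sync effectively does: one statement per row,
-- paced out over roughly 15-30 seconds for 25 rows.
UPDATE Reviews SET reviewed = TRUE WHERE id = 101;
UPDATE Reviews SET reviewed = TRUE WHERE id = 102;
-- ...and so on, once for each flagged row.

-- What a single server-side batch could do instead:
UPDATE Reviews SET reviewed = TRUE WHERE id IN (101, 102 /* , ... */);
```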
Ah! Thanks for the explanation!
I can't speak from experience, but I believe a SQL-based data store isn't subject to the delay. My impression is that the slow pacing is to avoid rate limits with Google and other cloud storage providers.
But, since this is outside my experience, I'm going to invoke @Aleksi in the hope he can shed some light.
Had he been able to, he would have commented here, I imagine. You could contact support@appsheet.com, too, for a more formal response.
@Stuart_Carmichael Sorry about the delay.
When the user multiselects records and then the action is triggered, yes, it will happen one by one, and as far as I know there is no workaround unless you are able to do it with a Workflow/Data Change and "Execute an action on a set of rows". That will do it with one action, though you need to find a way to select those records other than multiselecting from a table view.
We do use the "Execute an action on a set of rows" pattern, and it makes all the expected updates on the client side; then we watch them gradually sync back to the underlying database (AWS RDS in our case).
My understanding is that client updates back into the cloud are throttled to respect Google Sheets throughput policies. Since we're not using Google Sheets to back our tables, we can hopefully get away with relaxed throttling.
Our usual workaround for this situation is a table (for example, TaskDt) in our AWS RDS into which our AppSheet client inserts, using an Action, a single row with parameters (for example, 'FLAG_ALL_RECORDS', 'stuart@foodmesh.ca', '2020-06-03'). A database-side trigger detects the single insert, then updates all the records in the other table.
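The insert issued from the client would look roughly like this; the column names are hypothetical, and only the values come from the example above:

```sql
-- Hypothetical column names; values taken from the example parameters.
INSERT INTO TaskDt (TaskName, RequestedBy, TaskDate)
VALUES ('FLAG_ALL_RECORDS', 'stuart@foodmesh.ca', '2020-06-03');
```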
CREATE TRIGGER `TaskDtInsert` BEFORE INSERT
ON TaskDt ...
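Filled out, the trigger might look something like the sketch below. This is a minimal illustration rather than our production code: it assumes MySQL syntax, the hypothetical TaskDt columns from above, and a hypothetical Reviews target table.

```sql
DELIMITER $$

CREATE TRIGGER `TaskDtInsert` BEFORE INSERT ON TaskDt
FOR EACH ROW
BEGIN
  -- One inserted control row fans out into a single set-based update,
  -- instead of the client syncing dozens of row changes one by one.
  IF NEW.TaskName = 'FLAG_ALL_RECORDS' THEN
    UPDATE Reviews
       SET reviewed = TRUE
     WHERE reviewed_by = NEW.RequestedBy
       AND reviewed = FALSE;
  END IF;
END$$

DELIMITER ;
```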
The next time the App syncs, it loads the changes we made behind the scenes.
We're trying to avoid this pattern for the app we're currently developing.
Are you trying to avoid the needed sync or something else? I'm not sure…
We have strategies that mitigate it, but it works best for us to stay within the AppSheet flow of client updates synced back to the backing tables. Ultimately, we are looking to have sync happen faster. Because we are working with AWS backing tables, we are hoping to avoid throttling that might apply to Google backing tables.