One of my app's functions involves regularly moving thousands of rows from one table to another (copy from table A to table B, then delete from table A). While I wouldn't be surprised to learn that this design is suboptimal for my use case to begin with, it's the best I've come up with (if you're interested, details are in the full-story explanation below). However, the way I've implemented it takes a really long time for the user: each individual row addition or deletion seems to be a separate change that needs to be synced from the local app to the server. Is there a more efficient way to move that data? Maybe change just one value in the app to trigger an automation that performs all the data moving server-side--or would that result in the same sync processing time, just in the other direction? Maybe this is what the AppSheet API is for?
Each table's data source is a worksheet in the same Google Sheets file. My action implementation is essentially:
As always, I appreciate any guidance.
Here's the context of my use case in case anyone wants to dig in and advise how I perhaps should be approaching this differently.
One of my app's views includes a function invoked from a parent table that enables importing a .csv file to add records to a child table. It's somewhat complex under the hood (leveraging nifty techniques explained by @TeeSee1, @Kirk_Masden, @MultiTech, and others) and, unfortunately, the user has to perform the import in 2 steps:
Nonetheless, the system-processing aspects have been efficient from a user's perspective--i.e., the import would process very quickly.
However, as my app has begun to be used more, I've observed some points of fragility in my approach. Multiple users have made errors in the upload file, resulting in unintentionally blank columns. Also, it occurred to me that by importing directly to the table that's the final destination for the data, all the columns that are not intended to be populated via .csv import are vulnerable to intentional or inadvertent corruption if they happen to be included in the .csv file. It would be great to be able to manage this via the properties proposed in the "More control over csv import action" feature request, but that's not available.
To address these issues, I implemented a technique that @TeeSee1 recommended to begin with, which is to import the .csv to a preview table before then copying the data to the permanent table. That enables me to accomplish the following:
Problematically, though, this revamped approach has increased the app's processing time for the import by many orders of magnitude.
API. 100%
WOW!
I really appreciate your taking the time to read my challenge and provide guidance. It's tremendously helpful to know which is the right path before barreling down it.
I've been wondering for a while when I'd need to dive into figuring out the AppSheet API. I had been put off since all the Pricing page's mentions of APIs are in the context of the Enterprise plans, and I've encountered lots of challenges figuring out pricing and availability of those plans. Now I see that perhaps those are only references to non-AppSheet APIs.
In my app, I created a parallel path that utilizes AppSheet API to move rows from preview table to permanent table. Works like a charm. In a test for 1K records, automation process execution time was 19 seconds, with only a little more than that on either end for the data change to trigger the automation and the sync back to the app reflecting the updated data.
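For anyone else heading down this path, the request itself is pleasantly simple. Here's a minimal sketch of the API call that adds rows to the permanent table in bulk (the app ID, access key, table, and column names below are placeholders, not my actual setup):

POST https://api.appsheet.com/api/v2/apps/<app ID>/tables/<permanent table>/Action
ApplicationAccessKey: <application access key>

{
  "Action": "Add",
  "Properties": {
    "Locale": "en-US"
  },
  "Rows": [
    {
      "Child ID": "example-id-1",
      "Some Column": "example value"
    }
  ]
}

Deleting the originals from the preview table is the same call with "Action": "Delete" and just the key column in each row, so the whole move is two requests regardless of row count.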
Now, I'll move on to deprecating the prior slow actions and then evaluate where else I should be using the AppSheet API.
Thank you very much!
Automation can do this, with perhaps fewer "technical" requirements than you'll find with the API.
---------------------------------------------------------------------
I would 100% use automation to do this
Thinking about this though.... it would be helpful to have a higher-level parent to group all the imported records under, from which you could then execute all your various actions on that group of records. The parent would provide you that grouping, as well as record some meta-level info (X records were uploaded on Y date by Z user, etc.)--providing a paper trail of sorts.
Having a high-level grouping record provides you with SOOOO many opportunities for interaction, many of which would be impossible without its inclusion.
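For instance, the parent record would naturally carry a related-rows virtual column over the imported children; assuming a child table named ImportPreview with a ref column Parent Ref back to the parent (hypothetical names), that's just:

REF_ROWS("ImportPreview", "Parent Ref")

Everything else--counts, summaries, bulk actions--can hang off that list.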
Here's a video with a little more:
I could see the following interaction flow:
-------------------------------------------------------------------------------------------
I love this idea about an import preview table
I’m so glad you found the post useful and think others will too. I’ve benefited a lot from this collaborative community and am grateful to be able to contribute myself. For reference by you and anyone else landing here who’s interested, here’s some further info about past lessons learned and current techniques I’m trying or considering, which are surely the basis of future lessons learned.
My implementation indeed already uses some of the techniques you suggest. As noted in the original post, I run everything from a parent table. In addition to the benefits you cite, that's how even in my original implementation I managed to make the parent ID available to the child rows imported via csv. I know you previously recommended constructing keys, but I didn't want to use that approach in this particular scenario for other reasons. So, the first action in initiating the import is populating the parent row with the user's ID so that can be used in the initial value property of the child table's ref column back to the parent table. Here's an outline of that initial value expression (ANY() reduces the SELECT() result list to the single matching parent key):
ANY( SELECT(ParentTable[Parent ID], [Column for Import User ID] = (expression that yields current user's User ID)) )
When I initially implemented that approach without pausing before invoking the csv import action, the import would often proceed server-side before that User ID linking value, which is populated client-side, had synced up to the server, so the import would frequently fail. That's when I realized I needed to use @Kirk_Masden's technique to confirm the server was updated before proceeding to the csv import action.
Also, you’re right on about action buttons in the parent view; see the screenshot further on. I expect that to be very user friendly.
You're also right that the virtual column with the REF_ROWS() expression is very handy. While it's true that @Marc_Dillon's suggestion to use the AppSheet API is more technical than relying solely on AppSheet's predefined data change actions, it's also not necessarily overwhelming if you follow an existing example--the great @QREW_Apps team has a worthwhile video. That's especially the case if you take the hybrid approach of defining the action details in a custom action and then invoking that from the API. Anyway, that virtual column makes using the API even less technical than it otherwise would be. For example, here's the very simple body of my webhook step's API request to invoke my action that I had already defined within AppSheet to copy rows across tables:
{
  "Action": "My custom action",
  "Properties": {
    "Locale": "en-US",
    "RunAsUserEmail": "<<USEREMAIL()>>"
  },
  "Rows": [
    <<Start: [Related Import Previews]>>
    {
      "Child ID": "<<[Child ID]>>"
    }
    <<End>>
  ]
}
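(For reference, that body gets POSTed to the API's action-invocation endpoint for the preview table--something along the lines of https://api.appsheet.com/api/v2/apps/<app ID>/tables/<preview table>/Action, with the ApplicationAccessKey header set in the webhook step. The Start/End template expressions expand to one row object per related preview record, so the whole batch goes over in a single request.)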
Again, you also hit the nail on the head regarding additional possibilities that open up as a result of working from a parent table. As you hypothesized, that provides a way to summarize warnings or outright errors to the user. For now, I just have a summary; I plan to explore citing specific flagged rows in the summary. See the following screenshot.
Here’s an illustration of what I have in place in the parent view for action buttons and import warnings:
The catalyst for this approach was a good suggestion from @TeeSee1.
As you noted, it’s also straightforward to use format rules to flag values within the import preview itself. See the screenshot further on.
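For example, a format rule on the preview table with a condition like the following (column name hypothetical) can highlight rows where a required value didn't survive the upload:

ISBLANK([Required Column])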
Another idea I utilized that I’m guessing will be handy for users is a virtual column in the import preview table that assigns an index number to each row. That number should make it easy to find the corresponding row of the original csv file if users open that in a spreadsheet editor. See the following screenshot.
Here's an illustration of my import preview's index number and flagged values:
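In case it helps anyone replicate the index number, here's a minimal sketch of one way to compute it as a virtual column (table and column names hypothetical, matching the example above)--it counts the sibling preview rows at or above the current row's position in the worksheet:

COUNT(
  SELECT(ImportPreview[Child ID],
    AND(
      [Parent Ref] = [_THISROW].[Parent Ref],
      [_RowNumber] <= [_THISROW].[_RowNumber]
    )
  )
)

(That assumes each import's rows land in the sheet in the same order as the csv; adjust if your setup differs.)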
I also love your additional ideas of:
I look forward to the additional improvements and suggestions that others will undoubtedly offer!