Lost Data

Hello. I have an AppSheet app deployed to many users, with more than 10 tables. The problem is that often when users input some data, click the SAVE button, and wait for the sync to finish, the record they just entered is lost. Sometimes it takes 3 or 4 seconds, sometimes more, sometimes it's gone immediately. That breaks synchronization with other tables and produces inaccurate data. Any help will be appreciated.

28 REPLIES

I don't think I get what you are trying to explain. Could you rephrase it to see exactly what happens?

Also, this seems like an important problem for you, so ask AppSheet Support for help.

Thank you for your response, sir.
When a user inputs a record, after 3 or 4 seconds or more, the record is lost.

If multiple users may be editing the same row simultaneously, the distinct updates may conflict. If this is relevant to your app, be sure to review Concurrent usage with multiple users - AppSheet Help

Thank you for your advice, sir.
It says:
"If two or more client requests arrive for the same table at or near the same time, the server normally serializes the client requests and performs them one by one. As a result, the second client request is only performed after the first client request is completely performed."
If that is what applies, that's very good; there would be no worries about adding some data at the same time.

"... For example, one client may be updating a record for one customer while a second client is updating a record for a different customer. These two concurrent updates will not collide."
This is the condition that happened to mine. If the two concurrent updates will not collide, then why are my records lost 😞?

but it also says :
"If two clients happen to change the same row at nearly the same time, the last client's change is what persists. This is referred to as "the last writer wins" conflict resolution strategy."
I'm afraid this is what really happened, but I'm not sure if it's relevant to mine, because, as I said above, two different users also add two different records, so the updates won't collide.
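To make the "last writer wins" behavior from the help article concrete, here is a toy sketch in plain Python (an illustration of the concept, not AppSheet internals): the server applies each client's update in arrival order, so when two clients change the same field of the same row, only the final arrival persists.

```python
# Toy model of "last writer wins": updates are serialized and applied in
# arrival order, so a later write to the same field replaces an earlier one.
def apply_updates(row, updates):
    """Apply (field, value) updates to a row in arrival order."""
    for field, value in updates:
        row[field] = value
    return row

row = {"status": "open"}
# Client A and client B edit the same row at nearly the same time;
# the server serializes the writes, and B's later write overwrites A's.
apply_updates(row, [("status", "approved by A"), ("status", "rejected by B")])
print(row["status"])  # rejected by B
```

Note that this conflict only arises for writes to the *same* row; updates to different rows do not collide, which matches the article's description.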

 


@lanlan wrote:

"... For example, one client may be updating a record for one customer while a second client is updating a record for a different customer. These two concurrent updates will not collide."
This is the condition that happened to mine. If the two concurrent updates will not collide, then why are my records lost 😞?


If you have multiple users simultaneously updating different records, then that's not the cause of your problem.


@lanlan wrote:

"If two clients happen to change the same row at nearly the same time, the last client's change is what persists. This is referred to as "the last writer wins" conflict resolution strategy."
I'm afraid this is what really happened, but I'm not sure if it's relevant to mine, because, as I said above, two different users also add two different records, so the updates won't collide.


Again, if you have multiple users simultaneously updating data but no two users are updating the same record, then the issue described here is not the cause of your problem.


@lanlan wrote:

When a user inputs a record, after 3 or 4 seconds or more, the record is lost.


Confirm whether the data is truly lost--i.e., does not exist in the data source--or just disappears from the user's device. If your problem is the former, then, as @SkrOYC suggested, contact AppSheet Support. If your problem is the latter, review the configuration settings that control what data remains on the user's device and what data is displayed.

Does this also apply to the EDIT section, sir? I also have a problem there: sometimes when a user updates a record and it saves successfully, 3 seconds or more later it changes back to the former data.

Or should I ask this in a new post?

Steve
Platinum 4

@lanlan wrote:

if a user updates a record and it saves successfully, 3 seconds or more later it changes back to the former data


A possible reason this can happen is that a condition on the server or at the data source itself prevents the updated row from being saved.

If you are using a spreadsheet as the data source, make sure the table in the app is configured with Access mode set to as app creator. If set to as app user, the spreadsheet itself must allow each and every app user the needed access.

Steve_0-1671032189018.png

 

Yes, the data source uses a spreadsheet, and the tables are already set as app creator.

What should be done about the condition of the server?

Make sure there are no bots that would affect the rows.

Well, many tables have an automation attached; it's necessary for a report. What should I do?

Hi lanlan,

I think what you are seeing is normal, as I too have this problem. I have up to 10 people doing audits and have found that, to avoid losing data or overwriting existing data, I have to preallocate each user a distinct range of the unique ID.

So if the audit is intended to cover, say, 10,000 items and I have 10 auditors, then I will allocate as follows:

Auditor 1 = 0-999

Auditor 2 = 1000 - 1999 etc.

In this way, each user is assigned a unique range, but they may still overwrite their own data or even mistakenly enter and overwrite another's.

That is why at the end of each day I also have to do a clean-up, as I have found AppSheet (like most others) has yet to find a foolproof way to avoid this.

What I am saying above requires you to pre-load every unique key rather than auto-create them, so rows 1-10000 would have to be pre-created (even though each row is not yet populated). You could then make user rules to restrict each user to their unique ID domain, e.g., 1000-1999.
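The allocation described above can be sketched in plain Python (purely illustrative; the auditor names and row counts come from this example, and this is not a built-in AppSheet feature):

```python
def allocate_ranges(total_rows, auditors):
    """Split IDs 0..total_rows-1 into equal contiguous ranges, one per auditor."""
    size = total_rows // len(auditors)
    return {name: (i * size, (i + 1) * size - 1)
            for i, name in enumerate(auditors)}

# 10,000 pre-created rows shared among 10 auditors, as in the example above.
ranges = allocate_ranges(10000, [f"Auditor {n}" for n in range(1, 11)])
print(ranges["Auditor 1"])  # (0, 999)
print(ranges["Auditor 2"])  # (1000, 1999)
```

Because each auditor only ever writes inside their own range, two auditors can never race for the same pre-created row.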

Hope this helps.

I don't follow the details of this need or solution, but it suggests another point for @lanlan to consider: Confirm that you have a robust system for assigning a unique key value to each row--e.g., a dedicated key column with an initial value generated via the UNIQUEID function.

That is what I refer to as prepopulating the unique ID.

The keys in the app are mostly made by combining columns. What I get from @gregdiana1 and @dbaum is that this key setup is not working, so I need to set another key with a generated unique ID column. Don't I?

My point was that you need to ensure each row has a unique key. There are multiple ways to do that, and the UNIQUEID function is one easy way. If your "keys... made by combining columns" can result in multiple rows having identical keys, then that may indeed be the source of the problems your users are experiencing.
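To illustrate the point, here's a hypothetical sketch in plain Python (the column values are invented): a key built by combining columns is only unique if the combination itself is guaranteed unique per row, whereas a dedicated generated ID sidesteps the problem.

```python
import uuid

# Hypothetical composite key built from two columns; it is only unique
# if no two rows ever share the same column combination.
def composite_key(customer, visit_date):
    return f"{customer}-{visit_date}"

# Two different rows for the same customer on the same date collide, so in
# a key-indexed table the second row silently replaces the first.
table = {}
table[composite_key("ACME", "2022-12-15")] = {"note": "first visit"}
table[composite_key("ACME", "2022-12-15")] = {"note": "second visit"}
assert len(table) == 1  # one row was lost

# A dedicated generated ID (analogous to AppSheet's UNIQUEID()) keeps both.
table2 = {}
table2[uuid.uuid4().hex[:8]] = {"note": "first visit"}
table2[uuid.uuid4().hex[:8]] = {"note": "second visit"}
assert len(table2) == 2  # both rows persist
```

The dictionary stands in for the data source: whatever the storage, two rows that share a key value cannot both survive.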

@lanlan This is getting interesting. Would you please send me a personal message with these details and I will check the reason:
1) Full app name, 2) Affected table name, 3) Latest timestamp (UTC+0) when this happened, 4) User's email address, 5) Let's focus on adds, not updates. Also, please check that the permission is granted in MyAccount > Settings.

Having a unique ID is only one part of the problem. The issue is that if, at any time, multiple users create a new record at the same time, whose data is going to be saved? I have had this problem of "racing" where data is either lost or overwritten.

Let us say the last record is 36. If more than one user enters data at the same time, it may transpire that each is assigned 37 as the next valid record. When these multiple users eventually save their data, the issue is whose data is recorded, lost, or overwritten in row 37.

The issue is the time lag between when a new row ID is allocated and when it is updated. I partially resolved this by first pre-populating all unique IDs and then assigning all users a distinct range, which helped resolve matters.

The keys built from other columns are not the issue, as they are used or combined to form a new result, each of which should have a unique ID. Unless, of course, the rows of the keys used themselves suffer from the same problem when they are formed.


@gregdiana1 wrote:

Let us say the last record is 36. If more than one user enters data at the same time, it may transpire that each is assigned 37 as the next valid record. When these multiple users eventually save their data, the issue is whose data is recorded, lost, or overwritten in row 37.


Thanks for explaining your point. I hadn't appreciated (nor observed, nor seen referenced anywhere) the possibility that new rows simultaneously created by different users could end up assigned to the same row number in the data source and, therefore, one overwriting another. If that's indeed how the platform works, it seems like a fundamental issue that would (or, @lizlynch, at least ought to) be called out explicitly in Concurrent usage with multiple users - AppSheet Help

Will escalate this question. Thank you!

Thanks, @Aleksi. I'm familiar with that article, which, as referenced in the article's title, seems to focus on sequentially numbered keys. Are you saying that it also addresses the situation that I understand @gregdiana1 to be describing, which is where a table uses a dedicated key column populated using the UNIQUEID function and nonetheless can still end up with conflicting rows added concurrently by different users because those new rows are assigned to the same row number? If so, it would help for that to be clearer in both the article I cited and the one you cited.


@lanlan wrote:

The problem is that often when users input some data, click the SAVE button, and wait for the sync to finish, the record they just entered is lost.


To clarify everyone's point.

When ADDING NEW rows: if you are using data to derive the key value AND two users input the same data values at nearly the same time, the last entry will overwrite the first. If there is sufficient time in between, AppSheet will prevent entry of the second, provided a sync has been performed so that the second device is AWARE of the first entry. If new rows are added by actions, all bets are off! The new will always overwrite the old!

To fix and prevent this potential issue, add a new column dedicated as the row key and assign it using the UNIQUEID() function.

 

When EDITING EXISTING rows, it will always be the case that a second edit overwrites a first edit. Last one in wins! If you have a situation where multiple people are editing a row at almost the same time, then extra care must be taken: you cannot prevent overwriting of rows programmatically. To resolve this as a problem, you either need to:

  • Change the business process - e.g., assign certain people responsibility for editing certain sets of rows, maybe by customer.
  • Break the table into smaller tables and use Ref columns - this might work if different areas of a row are edited. For example, maybe one person updates order information while another updates customer information for the order. With the data in different tables, different rows would be edited, so there is no chance of overwriting changes.
  • Both - a combination of the two above.
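The second option can be pictured with a toy sketch in plain Python (table and column names are invented): when related data lives in separate tables joined by a reference, concurrent edits land on different rows, so neither overwrites the other.

```python
# Two tables linked by a reference: each order row points at a customer row.
customers = {"C-9": {"name": "ACME", "phone": "555-0100"}}
orders = {"O-1": {"customer_ref": "C-9", "qty": 2}}

# User A edits the order while user B edits the customer, at the same time.
orders["O-1"]["qty"] = 3                # user A's change
customers["C-9"]["phone"] = "555-0199"  # user B's change

# Different rows in different tables: both changes persist.
print(orders["O-1"]["qty"], customers["C-9"]["phone"])  # 3 555-0199
```

Had both fields lived in one wide row, the second save could have overwritten the first under "last writer wins".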

I hope this helps!

@dbaum Even UNIQUEID() is not 100% bulletproof for producing a unique key value. Within the last 5 years I have seen 3 or 4 support tickets where, after investigation, the result of UNIQUEID() was the same.

Yes, understood. Nonetheless, that edge case is not what I understand @gregdiana1 to be describing as the reason for pre-assigning users to rows with pre-populated UNIQUEID values.


@gregdiana1 wrote:

If more than one user enters data at the same time, it may transpire that each is assigned 37 as the next valid record. When these multiple users eventually save their data, the issue is whose data is recorded, lost, or overwritten in row 37.


Hello, write to (PII Removed by Staff) and I will help.

Steve
Platinum 4

@lanlan wrote:

the keys from the app are mostly made by combining columns


This is your problem.


@lanlan wrote:

so I need to set another key with a generated unique ID column. Don't I?


Yes.


@dbaum wrote:

If that's indeed how the platform works, it seems like a fundamental issue that would (or, @lizlynch,  at least ought to) be called out explicitly in Concurrent usage with multiple users - AppSheet Help


It is:

Steve_0-1671483453349.png


@dbaum wrote:

Are you saying that it also addresses the situation that I understand @gregdiana1 to be describing, which is where a table uses a dedicated key column populated using the UNIQUEID function and nonetheless can still end up with conflicting rows added concurrently by different users because those new rows are assigned to the same row number?


The row number only ever matters if it is, or is part of, the key. Otherwise, the fact that two users happen to add rows that are assigned the same row number will not prevent both rows from being saved without data loss.

If you're concerned about duplicate IDs being generated by UNIQUEID(), read the help doc for more options.


@Steve wrote:

It is:

Steve_0-1671483453349.png


Yes, I'm aware. Thanks for confirming. Again, my replies that you quoted are in the context of @gregdiana1's seeming point that the row number can matter even if it is not part of the key.

@lanlan, when I overlooked something as I developed an app today, I was reminded of a scenario that perhaps is related to yours. In case it's helpful, here's an explanation.

If your automation is triggered by a data change that is part of a grouped action, ensure that the grouped action does not include actions that change the row subsequent to the action that triggers the automation. That can create the type of conflict that Steve mentioned in an earlier reply. Following is an illustration.

In this grouped action, the "trigger" action changes a column's data in a manner that triggers an automation, and the "reset" action clears that same column's value. In turn, the triggered automation populates some other columns in the same row. It had been a while since I last created this type of action/automation pair, and I had forgotten that it's necessary to include the "reset" action in the automation rather than in the grouped action. Otherwise, the row updates from the grouped action's actions subsequent to the trigger can conflict with the row updates from the automation.

dbaum_0-1671570086842.png

When my app didn't behave as I expected, I began troubleshooting by reviewing the row's edit history in the data source. That showed me that all my row updates were being saved, but in a sequential manner where some were overwriting others. If you haven't already tried, reviewing row- or cell-level edit history is a handy technique. If your data source is a Google Sheets spreadsheet, here's how to show a cell's edit history.
