Avoid duplicate entries when uploading a CSV file

In AppSheet, when bulk uploading from a CSV file, how can we allow only new entries and block existing ones to avoid duplicates? I tried using a Valid If expression, so a row is rejected if its business name is already present:

NOT(IN([_THIS], SELECT(VENDORS[BUSINESS NAME], [VENDOR_ID] <> [_ThisRow].[VENDOR_ID])))

But I get stuck when the duplicate entries are inside the CSV file itself. Please advise.

For a similar need, I handled duplicates after the import. You can, for example, create a slice with:

COUNT(FILTER("Vendors", [vendorID] = [_ThisRow].[vendorID])) > 1

I created a "Duplicate Records" view on this slice, and with inline actions I give the user the option to either delete the record or acknowledge it as not a duplicate. If you are sure and don't need the user's intervention, you can simply remove these duplicates with an automation that runs after the import.
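One caveat if you automate the deletion: COUNT(...) > 1 is true for every copy of a duplicate, including the one you presumably want to keep. A minimal sketch of an alternative slice filter that flags all but the first occurrence, assuming duplicates are matched on [vendorID] (adjust the table and column names to your schema):

[_RowNumber] <> MIN(SELECT(Vendors[_RowNumber], [vendorID] = [_ThisRow].[vendorID]))

Deleting everything in that slice then leaves exactly one row per vendorID.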

One workaround I have used when most of the data contains duplicates: first import the CSV into a temporary table, then trigger a webhook/bot for each newly imported row, because the bot can automatically check whether the row already exists before copying it over. Then finally delete all the temporary rows with the same webhook/bot.
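A minimal sketch of the filter for that approach, assuming the question's VENDORS table and matching on business name (the staging table name and exact bot layout are up to you): the step that copies a staged row into VENDORS would only run when

NOT(IN([BUSINESS NAME], VENDORS[BUSINESS NAME]))

is true for the triggering row. In principle, a second occurrence of the same business name in the CSV would already find the first one in VENDORS and get skipped, covering in-file duplicates too, though bot execution order during a bulk import isn't guaranteed, so test this on your data.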

I had this problem where users would import a file that had already been imported. I got around the duplicates by making sure my key column is NOT part of the CSV file being imported; it's just a UNIQUEID(), as we usually do. Then I created a bot with event source "App" on the table I'm watching, and chose "Adds". For the condition, I used:

COUNT(SELECT(Package Records[ScannableId], [ScannableId] = [_ThisRow].[ScannableId])) > 1

This just compares [ScannableId], a column from the data file that should be unique per record. For the next step I chose a data action that deletes the row. This isn't perfect: with a large file it's going to take forever to import everything and then delete every single row individually, but it's what I could get working for now.
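If you'd rather not rely on the triggering row counting itself, a variant of the same condition excludes it by key; here [KeyCol] is a placeholder for whatever your UNIQUEID() key column is called:

COUNT(SELECT(Package Records[ScannableId], AND([ScannableId] = [_ThisRow].[ScannableId], [KeyCol] <> [_ThisRow].[KeyCol]))) > 0

This deletes a newly added row only when some other row already carries the same ScannableId.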
