
Application Integration - MySQL connector task

np1

I am trying to copy data from one MySQL table to another, but when fetching the data, only 25 rows are passed to the second flow (which loads the data into the target table) instead of the full data set.

Below are my configurations,

Flow1: to fetch data from the 1st table

[screenshots of the Flow1 connector and mapping configuration]

Flow2: to load data to the 2nd table

After testing, Flow1 always ends in the succeeded state, but only 25 rows are inserted. Can anyone suggest how to fetch the full data set?

Solved
1 ACCEPTED SOLUTION

Hi @np1 

Spookily enough, we were just looking at this yesterday!

On your mapping step you can set ListEntitiesPageSize, so you can load in many more rows. Depending on your payload size you may be able to get, say, 500 rows at a time. (There is a connector payload limit of 8 MB according to https://cloud.google.com/application-integration/docs/quotas#data-processing-limits)

If you need more rows you will really need to loop: when you call the first 'page' of data, consume a returned variable called listEntitiesNextPageToken. This then has to be fed into your next loop iteration to get, say, the 2nd page of data.

More info on it can be found here:

https://cloud.google.com/application-integration/docs/configure-connectors-task#config-prop

I haven't been able to test the latter on Postgres, as it doesn't appear to return a listEntitiesNextPageToken, but hopefully MySQL is better.
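The page-token loop described above can be sketched in plain Python against an in-memory table. This is only an illustration of the loop logic: `list_entities` is a hypothetical stand-in for the Connectors task LIST call (with listEntitiesPageSize and listEntitiesPageToken mapped as inputs), not a real API.

```python
# Simulated sketch of the pagination loop, assuming a fake data source.
TABLE = [{"id": i} for i in range(60)]  # pretend MySQL table


def list_entities(page_size, page_token=None):
    """Stand-in for the connector's LIST operation.

    Returns one page of rows plus the token for the next page
    (the role played by listEntitiesNextPageToken).
    """
    start = int(page_token) if page_token else 0
    page = TABLE[start:start + page_size]
    more = start + page_size < len(TABLE)
    next_token = str(start + page_size) if more else ""
    return page, next_token


def fetch_all_rows(page_size=25):
    """Keep requesting pages, feeding each next-page token back in."""
    rows, token = [], None
    while True:
        page, token = list_entities(page_size, token)
        rows.extend(page)
        if not token:  # an empty next-page token means the last page
            return rows
```

With the default page size of 25 and a 60-row table, this makes three LIST calls before the empty token stops the loop.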

 


8 REPLIES


As per your suggestion, I have set ListEntitiesPageSize to 1000 and mapped listEntitiesNextPageToken to listEntitiesPageToken as below:

[screenshot of the mapping configuration]

Does this make sense? I am just trying to fetch the data when it exceeds the 1000-row page size, but this is not working.

Hi @np1 

Pls take a look at this thread for fetching large payload response from a connector: https://www.googlecloudcommunity.com/gc/Integration-Services/API-Pagination-in-Application-Integrati...

 

Lmk if this answers your question

Hi @Meenchou ,

I have tried this pagination flow but I am getting the below error:

[screenshot of the error]

 

Hi @np1 

I was having trouble getting the NextPageToken back, so I raised a case with Google. It looks like you have to map both a page size and a sort column to get the token back (listEntitiesPageSize and listEntitiesSortByColumns [NB: String Array]). I managed to do this and get a token back, which I then fed into a 2nd connector call (also with the page size and sort column mapped), and it then returned the subsequent 25 rows.

Not sure why you have to specify a page size to get the token back, instead of it just picking up the default of 25!

It also looks like if you have a primary key on the table you don't need the sort definition (but I haven't tested that).

Good luck

Hey @Graham-Vosper , I'm having the same issue as above.

I manage to get a token back when the only value I add is listEntitiesPageSize, but I continue to get the same error as @np1 whenever I try to set the listEntitiesSortByColumns value to anything, or when I try to set the value of listEntitiesPageToken to the listEntitiesNextPageToken.

Do you know if there is a specific format that should be used for the sort or page-token columns? I noticed that listEntitiesSortByColumns should be a string array, and I am entering it in the format below but continue to receive an error:

[screenshot of the listEntitiesSortByColumns value]

Maybe it could be down to that?

Hi @mfers 

For listEntitiesSortByColumns, what I did was create a variable of type 'String Array' and then include the database column name as the Default Value, as shown here:

[screenshot of the String Array variable definition]

(col_int is the name of my database column; note it doesn't allow you to have multiple sort columns)

On the question of the next-page-token mapping, it should be a case of mapping the 'NextPageToken' from your first call to the 'PageToken' of the next. For my dummy integration I had 2 connector calls, one to get the first 25 rows and another to get the next 25, as shown here.

[screenshot of the integration with two connector calls]

The mapping stage prior to the 'next 25' call had:

[screenshot of the page-token mapping]

 

This simple test worked fine for me to prove the concept. If I wanted to build something production-like, I suspect I would need something that, say, reads the first 500 rows and, if the results are non-blank, calls the same integration to process the next 500 via an API Trigger call, passing the next-page token across.

* If you plan to do this API Trigger call then shout, as there is another 'feature'/workaround you need to be aware of to make this work.
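The production-style hand-off described above can be simulated in Python. Everything here is a hypothetical sketch: `run_connector_page`, `load_rows`, and `call_api_trigger` stand in for the Connectors task LIST call, the target-table insert, and the API Trigger call into the same integration; in this simulation the trigger simply recurses so the token hand-off can be traced end to end.

```python
# Simulated self-retriggering pattern: each "execution" processes one
# page, then hands the next-page token to a fresh execution.
SOURCE = [{"id": i} for i in range(1200)]  # pretend source table
SINK = []                                  # pretend target table


def run_connector_page(page_size, page_token):
    """Stand-in for the connector LIST call: one page + next token."""
    start = int(page_token) if page_token else 0
    page = SOURCE[start:start + page_size]
    more = start + page_size < len(SOURCE)
    return page, (str(start + page_size) if more else "")


def load_rows(rows):
    """Stand-in for the insert into the target table."""
    SINK.extend(rows)


def call_api_trigger(page_token):
    """Stand-in for the API Trigger call into the same integration."""
    process_page(page_token)


def process_page(page_token=None, page_size=500):
    rows, next_token = run_connector_page(page_size, page_token)
    if rows:
        load_rows(rows)
    if next_token:
        # Non-blank token: hand off to the next execution rather than
        # looping inside this one.
        call_api_trigger(next_token)
```

Calling `process_page()` with a 1200-row source and a page size of 500 results in three chained "executions" before the empty token ends the chain.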

 
