Hi
I am using the MongoDB to BigQuery CDC Dataflow template and was able to create a job successfully.
However, when a MongoDB document gains a new field, the schema to write to BigQuery changes and the pipeline fails. How do I create a pipeline that adjusts the schema it uses and keeps writing to the same BigQuery table?
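To illustrate what I mean by the schema changing, here is a minimal sketch of the drift check I think the pipeline would need (the field and column names are made up for illustration): compare an incoming document's keys against the table's known columns and surface the new ones.

```python
def new_columns(document: dict, table_columns: set) -> list:
    """Return fields present in the incoming document that the
    BigQuery table does not have yet (sketch; nested fields ignored)."""
    return sorted(k for k in document if k not in table_columns)

# Hypothetical existing table columns.
columns = {"_id", "name", "email"}

# A document that just gained a new field.
doc = {"_id": "64ab", "name": "Ana", "email": "a@x.io", "phone": "555-1234"}

print(new_columns(doc, columns))  # → ['phone']
```

My understanding is that these new columns would then have to be appended to the table schema as NULLABLE fields (e.g. via the google-cloud-bigquery client's `update_table`) before the write, or the rows get rejected, but I'm not sure how to wire that into the template.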
Also, I have to keep a Python script running on a VM to do the streaming, and I find it cumbersome because when the VM shuts down, everything stops. Is there an alternative way to run the script that streams data from MongoDB so that it is not separate from the Dataflow pipeline? Something like providing the streaming script that sends messages to Pub/Sub inside the container Google creates to launch my Dataflow pipeline?
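For context, this is roughly what my VM script does (a minimal sketch; the pymongo change-stream loop and the Pub/Sub publisher are only shown as comments, and all names are placeholders): serialize each change-stream event and publish it to a topic.

```python
import json
from datetime import datetime, timezone

def event_to_message(change: dict) -> bytes:
    """Flatten a MongoDB change-stream event into the JSON payload
    published to Pub/Sub (sketch only)."""
    payload = {
        "operation": change["operationType"],
        "collection": change["ns"]["coll"],
        "document": change.get("fullDocument", {}),
        "published_at": datetime.now(timezone.utc).isoformat(),
    }
    # default=str covers BSON types (ObjectId, datetime) json can't encode.
    return json.dumps(payload, default=str).encode("utf-8")

# The actual loop on the VM looks roughly like this (needs pymongo and
# google-cloud-pubsub, so not runnable as-is):
# with db.my_collection.watch(full_document="updateLookup") as stream:
#     for change in stream:
#         publisher.publish(topic_path, event_to_message(change))
```

Ideally this loop would live with the Dataflow job instead of on a VM I have to keep alive.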
I would really appreciate your help solving this; I have already spent many days on it.
Thanks in advance.