
Migrating Data from BigQuery to S3 Buckets using AWS Glue

We are trying to migrate data from BigQuery to an S3 bucket using AWS Glue, but we are not able to configure the connection properly.

We are following the steps provided here: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect-bigquery-home.html 

When creating the new connection and testing it we get the following error: 

(screenshot of the connection-test error attached: rifath_0-1742825054470.png)

We tried decoding the encoded string and it returns the correct JSON file that was provided. Additionally, the AWS Glue BigQuery connector lists the following fields as required:
"type", "project_id", "private_key_id", "private_key", "client_email", "client_id", "auth_uri", "token_uri", "auth_provider_x509_cert_url", "client_x509_cert_url", "universe_domain", "spark.hadoop.google.cloud.auth.service.account.json.keyfile"

Of these, "spark.hadoop.google.cloud.auth.service.account.json.keyfile" is not present in the JSON keyfile (it looks like a Spark configuration property rather than a field inside the keyfile itself).
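For reference, this is roughly how we produce and round-trip the base64 string that goes into Secrets Manager. This is a minimal sketch with hypothetical keyfile values (a real keyfile downloaded from Google Cloud IAM has the same top-level keys); after decoding, the "type" field is clearly present, which is why the error is confusing:

```python
import base64
import json

# Hypothetical stand-in for the service-account keyfile downloaded from
# Google Cloud IAM; the real file has these same top-level keys.
keyfile = {
    "type": "service_account",
    "project_id": "my-project",
    "private_key_id": "abc123",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
    "client_email": "glue@my-project.iam.gserviceaccount.com",
    "client_id": "1234567890",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/glue",
    "universe_domain": "googleapis.com",
}

# Base64-encode the raw JSON text; this encoded string is what we store
# as the secret value for the Glue connection.
encoded = base64.b64encode(json.dumps(keyfile).encode("utf-8")).decode("utf-8")

# Round-trip: decoding the stored string yields JSON whose top-level
# "type" key is present.
decoded = json.loads(base64.b64decode(encoded))
print(decoded["type"])  # service_account
```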

(screenshot of the connector's required fields attached: rifath_1-1742825433504.png)

We are wondering if that is what causes the field 'type' not to be found. We have also submitted a ticket with AWS, but we are not able to pinpoint what we are doing wrong or what the actual issue is.

We have also tested the service account itself and are able to connect with it. We likewise fetched the encoded string using boto3 and it decodes to the correct JSON.
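The boto3 check we ran boils down to the sketch below: decode the secret value and confirm every keyfile field the connector asks for is present (the actual fetch from Secrets Manager, with our real secret name, is shown only as a comment since it needs AWS credentials):

```python
import base64
import json

# Keyfile fields the connector lists as required, excluding the
# spark.hadoop.* entry, which is not a keyfile field.
REQUIRED_KEYS = {
    "type", "project_id", "private_key_id", "private_key", "client_email",
    "client_id", "auth_uri", "token_uri", "auth_provider_x509_cert_url",
    "client_x509_cert_url", "universe_domain",
}

def missing_keys(encoded_secret: str) -> set:
    """Decode a base64-encoded secret value and report any required keys it lacks."""
    decoded = json.loads(base64.b64decode(encoded_secret))
    return REQUIRED_KEYS - set(decoded)

# In the real check, the encoded string comes from Secrets Manager
# (secret name is ours, omitted here):
#   encoded = boto3.client("secretsmanager").get_secret_value(
#       SecretId="...")["SecretString"]

# Demo with a hypothetical keyfile deliberately missing "universe_domain":
sample = base64.b64encode(
    json.dumps({k: "x" for k in REQUIRED_KEYS - {"universe_domain"}}).encode()
).decode()
print(missing_keys(sample))  # {'universe_domain'}
```

In our case this check returns an empty set, i.e. the decoded JSON has every required field, which is why the 'type not found' error is puzzling.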

Has anyone experienced this issue? Any tips or suggestions would be greatly appreciated! Thanks
