Hi Team,
We have recently started working with BigLake Metastore and Iceberg tables. I am using Apache Spark stored procedures to create catalogs and manage databases and tables.
Metastore, database, and table creation work fine, and the data and metadata are written to GCS, but the BigQuery linking step fails with internal server errors.
I was following this document, and my query looks exactly like the example in it. Specifically, it fails when I run the external table command to link the Iceberg table to a BigQuery table.
Spark Table Creation & Linking:
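For reference, here is a minimal sketch of the pattern I am following. All project, bucket, dataset, catalog, and connection names are placeholders rather than my real values, and the Iceberg/BigLake catalog is assumed to already be configured on the stored procedure's Spark session with its warehouse pointing at a GCS bucket:

```python
# --- Step 1: inside the Apache Spark stored procedure (PySpark) ---
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("biglake-iceberg").getOrCreate()

# Create the database (namespace) and an Iceberg table; data and metadata
# files show up in the GCS warehouse as expected.
spark.sql("CREATE NAMESPACE IF NOT EXISTS my_catalog.my_db")
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_catalog.my_db.my_table (
        id   BIGINT,
        name STRING)
    USING iceberg
""")
spark.sql("INSERT INTO my_catalog.my_db.my_table VALUES (1, 'test')")

# --- Step 2: link the table in BigQuery (this is the step that fails) ---
# Shown through the Python client purely for illustration; the DDL is the
# external table command mentioned above.
from google.cloud import bigquery

bq = bigquery.Client(project="my-project")
link_ddl = """
CREATE OR REPLACE EXTERNAL TABLE `my-project.my_dataset.my_table`
WITH CONNECTION `my-project.us.my-connection`
OPTIONS (
  format = 'ICEBERG',
  uris = ['gs://my-bucket/warehouse/my_db.db/my_table/metadata/v1.metadata.json']
)
"""
bq.query(link_ddl).result()  # fails with Error: 80324028 / jobInternalError
```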
Can you please assist?
Error message:
An internal error occurred and the request could not be completed. This is usually caused by a transient issue. Retrying the job with back-off as described in the BigQuery SLA should solve the problem: https://cloud.google.com/bigquery/sla. If the error continues to occur please contact support at https://cloud.google.com/support. Error: 80324028
"code" : 400,
"errors" : [ {
    "domain" : "global",
    "message" : "The job encountered an internal error during execution and was unable to complete successfully.",
    "reason" : "jobInternalError"
} ],
"message" : "The job encountered an internal error during execution and was unable to complete successfully.",
"status" : "INVALID_ARGUMENT"
Transient errors are not uncommon in distributed systems, and a back-off strategy is often effective. However, if the problem continues, Google Cloud support will be able to assist further.
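For example, a rough sketch of retrying the linking DDL with exponential back-off (placeholder project name; the DDL string stands in for the CREATE EXTERNAL TABLE statement above):

```python
import time

from google.api_core.exceptions import GoogleAPICallError
from google.cloud import bigquery

bq = bigquery.Client(project="my-project")
link_ddl = "CREATE OR REPLACE EXTERNAL TABLE ..."  # the statement from the sketch above

delay = 2  # seconds
for attempt in range(5):
    try:
        bq.query(link_ddl).result()
        break  # the table was linked successfully
    except GoogleAPICallError:
        if attempt == 4:
            raise  # still failing after back-off: open a support case
        time.sleep(delay)
        delay *= 2  # exponential back-off between attempts
```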