Is it possible to assign a Dataproc cluster (standard or serverless) to the Spark connection in BigQuery? I want to change the network settings of the cluster, which I am not able to modify when using a PySpark stored procedure.
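For context, this is the kind of PySpark stored procedure being referred to: BigQuery provisions its own serverless Spark environment via a Spark connection, with no option in the DDL to point at an existing Dataproc cluster or its network. A minimal sketch (project, dataset, and connection names are placeholders):

```sql
-- The connection must be a BigQuery connection of type SPARK.
-- Nothing here exposes the underlying cluster's VPC/subnet settings.
CREATE OR REPLACE PROCEDURE `my_project.my_dataset.spark_proc`()
WITH CONNECTION `my_project.us.my-spark-connection`
OPTIONS (engine = 'SPARK', runtime_version = '1.1')
LANGUAGE PYTHON AS R"""
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").getOrCreate()
spark.sql("SELECT 1 AS x").show()
"""
```

The question is whether the compute behind `WITH CONNECTION` can instead be a user-managed Dataproc cluster (server or serverless) whose network configuration the user controls.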