I have created a Cloud Function that triggers when an audio file is uploaded to Cloud Storage. The function loads a machine learning model from a .pkl file and runs it on a sample of the incoming audio to generate predictions, which are then written back to Cloud Storage as a .txt file. The model is built with TensorFlow.
However, after deploying the function, when I uploaded an audio file to my bucket, no output was generated in the bucket and I saw the following errors.
I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/split_2_grad/concat/split_2/split_dim' with dtype int32
[[{{node gradients/split_2_grad/concat/split_2/split_dim}}]]
I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/split_grad/concat/split/split_dim' with dtype int32
I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/split_1_grad/concat/split_1/split_dim' with dtype int32
Layer 'conv2d' expected 2 variables, but received 0 variables during loading. Expected: ['conv2d/kernel:0', 'conv2d/bias:0']
Good day @Bangkit_cohort,
Welcome to Google Cloud Community!
If I understood correctly, your model is stored in a bucket and your Cloud Function retrieves it from there. There are several possible reasons for this error; you can try the following options:
1. This might be an issue with the TensorFlow version. If you are currently using version 2.12, try downgrading to 2.11, and also check that your model is retrieved and saved into the /tmp directory (the only writable path in Cloud Functions). See if that solves the problem.
2. You mentioned that you saved your model in .pkl format. If you are using TensorFlow, make sure the model artifact is saved in .pb format; .pkl is used when you are working with scikit-learn. Please also make sure its name is "model" or "saved_model" (for TensorFlow), which means your artifact must be saved as saved_model.pb; the first sketch after this list shows the export and the /tmp loading flow. You can check this link for more information: https://cloud.google.com/vertex-ai/docs/model-registry/import-model#upload_model_artifacts_to
3. Instead of loading the model from GCS yourself, I would suggest importing it into the Vertex AI Model Registry, creating an endpoint, and deploying the model to that endpoint. After that you can keep the same flow you have; the only difference is that your Cloud Function will send a prediction request to the endpoint instead of retrieving the model from Cloud Storage (see the second sketch after this list). This is recommended especially if you are running GPU-based online predictions. You can check this link on how to create an endpoint: https://cloud.google.com/vertex-ai/docs/samples/aiplatform-create-endpoint-sample#aiplatform_create_...
https://cloud.google.com/vertex-ai/docs/model-registry/import-model
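To make options 1 and 2 concrete, here is a rough sketch of the function-side loading, not a drop-in implementation: the bucket name, GCS prefix, and local directory are placeholders I made up, and it assumes a Python Cloud Function with tensorflow==2.11 and google-cloud-storage pinned in requirements.txt. The idea is to export the trained model in SavedModel format (saved_model.pb plus a variables/ folder) and have the function copy that folder into /tmp before loading it.

```python
# (a) At training time, export in SavedModel format instead of pickling:
#         model.save("saved_model")   # writes saved_model.pb + variables/
#
# (b) In the Cloud Function, copy that folder from GCS into /tmp and load it.
import os

import tensorflow as tf
from google.cloud import storage

MODEL_BUCKET = "your-model-bucket"   # placeholder: bucket holding the exported model
MODEL_PREFIX = "saved_model/"        # placeholder: GCS folder with saved_model.pb + variables/
LOCAL_DIR = "/tmp/saved_model"       # /tmp is the only writable location in Cloud Functions

_model = None  # cached so warm invocations do not re-download the model


def load_model_from_gcs():
    """Download the SavedModel directory from GCS into /tmp and load it."""
    global _model
    if _model is None:
        client = storage.Client()
        for blob in client.list_blobs(MODEL_BUCKET, prefix=MODEL_PREFIX):
            if blob.name.endswith("/"):  # skip folder placeholder objects
                continue
            local_path = os.path.join(LOCAL_DIR, os.path.relpath(blob.name, MODEL_PREFIX))
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            blob.download_to_filename(local_path)
        _model = tf.keras.models.load_model(LOCAL_DIR)
    return _model
```

Caching the loaded model in a module-level variable keeps warm invocations from re-downloading and re-loading it on every trigger.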
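For option 3, this is a minimal sketch of the prediction call the Cloud Function would make, assuming the model is already deployed to a Vertex AI endpoint and google-cloud-aiplatform is listed in requirements.txt. The project ID, region, endpoint ID, and the example feature vector are placeholders; the instances payload must match whatever input signature your SavedModel exposes.

```python
from google.cloud import aiplatform

# Placeholders: replace with your project, region, and the endpoint ID shown
# in the Vertex AI console after deploying the imported model.
PROJECT_ID = "your-project-id"
REGION = "us-central1"
ENDPOINT_ID = "1234567890"


def predict_on_vertex(features):
    """Send one online prediction request to the deployed Vertex AI endpoint."""
    aiplatform.init(project=PROJECT_ID, location=REGION)
    endpoint = aiplatform.Endpoint(ENDPOINT_ID)
    # `features` must match the model's input signature; the list wrapper makes
    # it a batch of one instance.
    response = endpoint.predict(instances=[features])
    return response.predictions


# Illustrative call with a dummy feature vector:
# predictions = predict_on_vertex([0.1, 0.2, 0.3, 0.4])
```

With this setup the function no longer needs TensorFlow installed at all, which keeps the deployment small and sidesteps the version issue from option 1.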
Hope this is useful!