I spun up a new Compute Engine VM instance (n1-standard-4 with 2 T4 GPUs) and tried to install the NVIDIA drivers on it, following the instructions here: https://cloud.google.com/compute/docs/gpus/install-drivers-gpu
Configuration: Linux Debian x86_64 version 12
When I ran the utility script provided by GCP (install_gpu_driver.py), I received the following error:
ERROR: modpost: GPL-incompatible module nvidia.ko uses GPL-only symbol '__rcu_read_lock'
ERROR: modpost: GPL-incompatible module nvidia.ko uses GPL-only symbol '__rcu_read_unlock'
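For context, since this modpost error depends on the kernel the driver module is being built against, here is how I checked the kernel and distro versions on the instance:

```shell
# Kernel release the NVIDIA module is compiled against
uname -r

# Debian point release on the instance
cat /etc/debian_version
```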
I received the same error when installing the NVIDIA drivers via the CUDA Toolkit runfile:
wget https://developer.download.nvidia.com/compute/cuda/12.3.2/local_installers/cuda_12.3.2_545.23.08_lin...
sudo sh cuda_12.3.2_545.23.08_linux.run
From https://forums.developer.nvidia.com/t/linux-6-7-3-545-29-06-550-40-07-error-modpost-gpl-incompatible... it appears there is a GPL-only-symbol incompatibility between the current NVIDIA drivers and this kernel version, which should be fixed in a future driver release. Until then, how can I get NVIDIA drivers installed on a VM instance so I'm unblocked for GPU training on Google Cloud Platform? Can I configure a new VM instance with an older kernel version?
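One workaround I'm considering (not yet tested) is creating the instance from a Debian 11 image, which ships an older kernel that the current drivers build against. A sketch of the gcloud command, with the instance name and zone as placeholders:

```shell
# Create a GPU instance from the Debian 11 image family instead of Debian 12.
# "my-gpu-vm" and the zone are placeholders; adjust to your project's setup.
gcloud compute instances create my-gpu-vm \
  --zone=us-central1-a \
  --machine-type=n1-standard-4 \
  --accelerator=type=nvidia-tesla-t4,count=2 \
  --maintenance-policy=TERMINATE \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --boot-disk-size=100GB
```

Would that be a reasonable path, or is there a supported way to pin an older kernel on Debian 12?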