When creating a batch job in GCP Batch, the VM provisioning model under resource specification does not list the C4A machine types.
We want to use this Arm based processor as it offers faster speed for our ML task.
Hello @rohitg-brt, welcome to the Google Cloud Community.
C4A is available for me:
Are you perhaps using the free tier?
--
cheers,
Damian Sztankowski
LinkedIn medium.com Cloudskillsboost Sessionize Youtube
Hey @DamianS, thanks for replying.
I am not using the free tier; this is quite an old GCP project.
I can also see the C4A machines on the Compute Engine VM creation page, but they are not listed among the machine types when creating a Batch job.
Ah yes, my bad. I went through the Batch release notes as well as the Axion processor documentation and could not find any mention of C4A support. On the other hand, the Compute Engine release notes do mention the C4A Arm VMs: https://cloud.google.com/compute/docs/release-notes#October_30_2024
Based on that, C4A is not yet supported on Batch.
--
cheers,
Damian Sztankowski
LinkedIn medium.com Cloudskillsboost Sessionize Youtube
Hello there,
The UI does not show C4A support for Batch yet, but there is a way to work around that. Here is what you can do:
1. Use the gcloud CLI to submit the Batch job, for example: gcloud batch jobs submit c4ajob --config job.json
2. For the job.json file, here is a sample:
{
  "taskGroups": [
    {
      "taskSpec": {
        "computeResource": {
          "cpuMilli": "2000",
          "memoryMib": "2000"
        },
        "runnables": [
          {
            "script": {
              "text": "sleep 1000"
            }
          }
        ]
      }
    }
  ],
  "allocationPolicy": {
    "instances": [
      {
        "policy": {
          "machineType": "c4a-standard-1",
          "bootDisk": {
            "type": "hyperdisk-balanced",
            "sizeGb": "10",
            "image": "projects/debian-cloud/global/images/debian-12-bookworm-arm64-v20241009"
          }
        }
      }
    ]
  },
  "labels": {
    "department": "finance",
    "env": "testing"
  },
  "logsPolicy": {
    "destination": "CLOUD_LOGGING"
  }
}
The key part is the allocationPolicy section, which pairs the c4a machine type with a Hyperdisk boot disk (hyperdisk-balanced here) and an Arm64 image.
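If you generate job.json from a script rather than writing it by hand, a small sketch like the one below builds the same config. This is only an illustration, not an official client: the function name, defaults, and the C4A-prefix check are my own, and the resource values mirror the sample above.

```python
import json

def c4a_batch_job(machine_type="c4a-standard-1", boot_disk_gb=10,
                  image="projects/debian-cloud/global/images/"
                        "debian-12-bookworm-arm64-v20241009"):
    """Build a dict equivalent to the job.json sample above.

    Hypothetical helper: pins the boot disk to hyperdisk-balanced and
    expects an arm64 image, matching the workaround in this thread.
    """
    if not machine_type.startswith("c4a-"):
        raise ValueError("expected a C4A machine type")
    return {
        "taskGroups": [{
            "taskSpec": {
                "computeResource": {"cpuMilli": "2000", "memoryMib": "2000"},
                "runnables": [{"script": {"text": "sleep 1000"}}],
            }
        }],
        "allocationPolicy": {
            "instances": [{
                "policy": {
                    "machineType": machine_type,
                    "bootDisk": {
                        # C4A pairs with Hyperdisk, per the workaround above
                        "type": "hyperdisk-balanced",
                        "sizeGb": str(boot_disk_gb),
                        "image": image,
                    },
                }
            }]
        },
        "labels": {"department": "finance", "env": "testing"},
        "logsPolicy": {"destination": "CLOUD_LOGGING"},
    }

if __name__ == "__main__":
    # Write the config, then submit with:
    #   gcloud batch jobs submit c4ajob --config job.json
    with open("job.json", "w") as f:
        json.dump(c4a_batch_job(), f, indent=2)
```

Since the dict round-trips through json.dump, this also guards against the trailing-comma mistakes that are easy to make when editing the JSON by hand.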
Thanks, this worked!