Typically, you would deploy to a service like Cloud Run, App Engine, or GKE. However, you can also (standalone, via gcloud) start a new GCE VM and have it run a Docker image.
Normally, if you were to do this via a gcloud command, it would look something like:
gcloud compute instances create-with-container [VM_NAME] \
    --container-image gcr.io/[PROJECT_ID]/[IMAGE_NAME]:[IMAGE_TAG]
The doc is here: https://cloud.google.com/compute/docs/containers/deploying-containers. If memory serves, you won't have to specify any port mappings, as the container essentially runs with --net=host, so the only thing you will need to do is make sure you have a firewall rule allowing ingress on port 3000.
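Creating that rule would look something like this (a sketch; the rule name is made up, and 0.0.0.0/0 opens the port to the whole internet, so tighten the source range as needed):

gcloud compute firewall-rules create allow-ingress-3000 \
    --direction=INGRESS \
    --allow=tcp:3000 \
    --source-ranges=0.0.0.0/0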
As for why your 3rd step (step #2) doesn't work: those are NOT local commands run from the machine your container is to be deployed on. They are commands run in ephemeral containers that have the binary you need (i.e. docker or gcloud). Think of those commands as being run from your laptop or desktop: running docker run -p 3000:3000 myimage:mytag would only run the image locally on your laptop, but it wouldn't run it on your target GCE VM.
For your 3rd step (step #2), I think you'd need something like:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
    - 'compute'
    - 'instances'
    - 'create-with-container'
    - etc.
    - etc.
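Filled in, that step might look something like this (a sketch only; my-vm and the zone are placeholders you'd replace with your own, and $PROJECT_ID is a built-in Cloud Build substitution):

- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
    - 'compute'
    - 'instances'
    - 'create-with-container'
    - 'my-vm'
    - '--zone=us-central1-a'
    - '--container-image=gcr.io/$PROJECT_ID/myimage:mytag'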
DISCLAIMER: I've never tried this, so I don't know if it will actually work; Cloud Build isn't really meant to do this, although in theory it should as long as it has the right permissions when running the gcloud command... which brings me to my next point: the default Cloud Build service account won't have permission to run gcloud compute commands, so you'll either have to add the Compute Instance Admin (v1) role to it, or point Cloud Build at another service account that does have the correct perms.
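If you take the first route, granting that role to the default Cloud Build service account ([PROJECT_NUMBER]@cloudbuild.gserviceaccount.com) would look something like:

gcloud projects add-iam-policy-binding [PROJECT_ID] \
    --member=serviceAccount:[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com \
    --role=roles/compute.instanceAdmin.v1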