Using Terraform, I'm trying to spin up a VM on GCP with a container inside it.
By default, this container has too little memory and crashes when I try to run yarn install and similar commands, so I need to increase its memory.
Is there a way to do it? Some providers have a way to set this, but I need it for the Google provider.
Here is my main.tf:
terraform {
  required_providers {
    coder = {
      source  = "coder/coder"
      version = "~> 0.6.12"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 4.34.0"
    }
  }
}
variable "project_id" {
  description = "Which Google Compute Project should your workspace live in?"
}

variable "zone" {
  description = "What region should your workspace live in?"
  default     = "us-east1-b"
  validation {
    condition     = contains(["northamerica-northeast1-a", "us-east1-b", "us-west2-c", "europe-west4-b", "southamerica-east1-a"], var.zone)
    error_message = "Invalid zone!"
  }
}
provider "google" {
  zone    = var.zone
  project = var.project_id
}

data "google_compute_default_service_account" "default" {
}

data "coder_workspace" "me" {
}
resource "coder_agent" "main" {
  auth                   = "google-instance-identity"
  arch                   = "amd64"
  os                     = "linux"
  login_before_ready     = false
  startup_script_timeout = 180
  startup_script         = <<-EOT
    set -e
    # install and start code-server
    curl -fsSL https://code-server.dev/install.sh | sh -s -- --method=standalone --prefix=/tmp/code-server --version 4.8.3
    /tmp/code-server/bin/code-server --auth none --port 13337 >/tmp/code-server.log 2>&1 &
  EOT
}
# code-server
resource "coder_app" "code-server" {
  agent_id     = coder_agent.main.id
  slug         = "code-server"
  display_name = "code-server"
  icon         = "/icon/code.svg"
  url          = "http://localhost:13337?folder=/home/coder"
  subdomain    = false
  share        = "owner"
  healthcheck {
    url       = "http://localhost:13337/healthz"
    interval  = 3
    threshold = 10
  }
}
module "gce-container" {
  source  = "terraform-google-modules/container-vm/google"
  version = "3.0.0"

  container = {
    image   = "codercom/enterprise-node:ubuntu"
    command = ["sh"]
    args    = ["-c", coder_agent.main.init_script]
    securityContext = {
      privileged : true
    }
  }
}
resource "google_compute_instance" "dev" {
  zone         = var.zone
  count        = data.coder_workspace.me.start_count
  name         = "coder-${lower(data.coder_workspace.me.owner)}-${lower(data.coder_workspace.me.name)}"
  machine_type = "e2-standard-2"

  network_interface {
    network    = "leap-dev-network"
    subnetwork = "leap-dev-east-1-subnet"
    access_config {
      // Ephemeral public IP
    }
  }

  boot_disk {
    initialize_params {
      image = module.gce-container.source_image
    }
  }

  service_account {
    email  = data.google_compute_default_service_account.default.email
    scopes = ["cloud-platform"]
  }

  metadata = {
    "gce-container-declaration" = module.gce-container.metadata_value
  }

  labels = {
    container-vm = module.gce-container.vm_container_label
  }
}
resource "coder_agent_instance" "dev" {
  count       = data.coder_workspace.me.start_count
  agent_id    = coder_agent.main.id
  instance_id = google_compute_instance.dev[0].instance_id
}

resource "coder_metadata" "workspace_info" {
  count       = data.coder_workspace.me.start_count
  resource_id = google_compute_instance.dev[0].id
  item {
    key   = "image"
    value = module.gce-container.container.image
  }
}
Howdy,
In your google_compute_instance machine_type you can encode both the number of vCPUs you want and a custom amount of memory by using one of Google Cloud's custom machine types.
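For example, keeping the rest of your google_compute_instance.dev resource unchanged, a custom E2 machine type with 4 vCPUs and 16 GB of RAM might look like this (the specific e2-custom-4-16384 value is just an illustration; custom machine types take the memory figure in MB, and a predefined type such as e2-highmem-2 would also work):

resource "google_compute_instance" "dev" {
  zone  = var.zone
  count = data.coder_workspace.me.start_count
  name  = "coder-${lower(data.coder_workspace.me.owner)}-${lower(data.coder_workspace.me.name)}"

  # 4 vCPUs with 16384 MB (16 GB) of RAM instead of the 8 GB of e2-standard-2.
  machine_type = "e2-custom-4-16384"

  # ... everything else stays exactly as in your main.tf ...
}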
When I think of memory on Linux systems, I think of two facets. The first is RAM, which is the underlying physical memory of the machine. The second is paging (swap), which is the ability for the machine to appear to have more addressable memory than there is actual RAM by swapping blocks (pages) of RAM to and from disk as they are referenced. When you talk about "increasing the resources of the VM", I am assuming you mean the RAM that the VM has available to it.
When we run a container on a Google Cloud Compute Engine instance, we run the container under Docker. However, Docker and the OS that runs Docker have to come from somewhere. Google makes available a machine image that can be booted on a Compute Engine instance called the "Container-Optimized OS" ... see here. Your last post got me thinking: if a Compute Engine instance has been granted (say) 32 GB of RAM, what does the Docker environment see as its available RAM? My gut says that since the Compute Engine container launcher supports (by default) just one container launched at boot, the COS will give Docker as much RAM as the VM has.
So let's approach the puzzle from a different direction. Imagine you had a container that was nothing but a simple Linux OS. If you launched it on a Compute Engine instance with 16 GB of RAM and launched the SAME container on an instance with 32 GB of RAM, my contention is that if we logged into both, the first would believe it has 16 GB of RAM available and the second would believe it has 32 GB available. From your last post, I get the impression that you think the answer would be the same for both of them, even though the two VMs have different actual RAM sizes. How are you measuring the memory available to the VM (for example, with free -m or by reading /proc/meminfo)?
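If it helps to run that experiment, here is a minimal Terraform sketch (the memtest names, the default network, and the e2-standard-4 / e2-standard-8 machine types are assumptions for illustration, not taken from your config). It boots the same container image on a 16 GB VM and a 32 GB VM so you can log into each and compare what free -m reports:

# Boot the same container image on two VMs with different amounts of RAM.
module "memtest_container" {
  source  = "terraform-google-modules/container-vm/google"
  version = "3.0.0"

  container = {
    image = "codercom/enterprise-node:ubuntu"
  }
}

resource "google_compute_instance" "memtest" {
  # e2-standard-4 has 16 GB of RAM, e2-standard-8 has 32 GB.
  for_each     = toset(["e2-standard-4", "e2-standard-8"])
  name         = "memtest-${each.value}"
  zone         = var.zone
  machine_type = each.value

  boot_disk {
    initialize_params {
      image = module.memtest_container.source_image
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }

  metadata = {
    "gce-container-declaration" = module.memtest_container.metadata_value
  }

  labels = {
    container-vm = module.memtest_container.vm_container_label
  }
}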