Hey,
we started using Custom Compute Classes.
We created the following config:
apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: cost-optimized
spec:
  priorities:
  - machineFamily: c2
    spot: true
    minCores: 32
  - machineFamily: c2
    spot: true
  - machineFamily: c2d
    spot: true
  - machineFamily: n2
    spot: true
  - machineFamily: n2d
    spot: true
  - machineFamily: c2
    spot: false
    minCores: 32
  activeMigration:
    optimizeRulePriority: true
  autoscalingPolicy:
    consolidationDelayMinutes: 5
    consolidationThreshold: 70
  nodePoolAutoCreation:
    enabled: true
It created about 10 duplicate nap-c2-standard-60-spot node pools. Is this the expected behaviour? I was expecting a single node pool for c2-standard-60 spot.
We're running k8s 1.30.5-gke.1014001
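For reference, this is roughly how we're listing the auto-created pools and counting nodes per pool (cluster name and location are placeholders):

# List the auto-created node pools (placeholder cluster/location)
gcloud container node-pools list \
  --cluster my-cluster --location us-central1 \
  --filter="name~^nap-c2-standard-60"

# Count nodes per pool via the standard GKE node-pool label
kubectl get nodes -L cloud.google.com/gke-nodepool --no-headers \
  | awk '{print $NF}' | sort | uniq -c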
Could you also provide a scrubbed manifest for the workloads that you deployed to use this class?
Are you interested in anything in particular in these manifests? We deployed a lot of workloads using this class (~20 deployments with 2,000+ pods in total).
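Here's a scrubbed sketch of one of them (the name, image, and resource requests are placeholders; the relevant part is the compute-class node selector):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-workload            # placeholder name
spec:
  replicas: 100
  selector:
    matchLabels:
      app: example-workload
  template:
    metadata:
      labels:
        app: example-workload
    spec:
      nodeSelector:
        cloud.google.com/compute-class: cost-optimized
      containers:
      - name: app
        image: example.io/app:latest   # placeholder image
        resources:
          requests:
            cpu: "2"                   # placeholder requests
            memory: 4Gi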
Ah yeah, I was trying to see whether your node selectors/affinity/resource requests explain why it's creating multiple node pools. How many nodes are in each of those pools?
This is the current state
Are Pods from different deployments landing on nodes in the same node pool, or is every node pool dedicated to a single deployment's Pods? I'm trying to see whether it's separating the different deployments.
Pods from different deployments are landing on nodes in the same node pool.
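In case it helps, this is roughly how we checked (the pool name is a placeholder; GKE node names embed the pool name, so grepping the wide pod listing is enough):

# Pods running on nodes of one auto-created pool; the owning deployment is
# recoverable from the pod name prefix (pool name is a placeholder)
kubectl get pods -A -o wide --no-headers \
  | grep nap-c2-standard-60-spot-abc123 \
  | awk '{print $1, $2}' | sort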