I created an autoscaling policy and attached it to a cluster with 2 primary worker nodes (2 cores, 8 GB each). I ran a PySpark job with the number of executors set to 50, but it did not trigger autoscaling.
I went through the documentation, but most of it is theoretical explanation with no real samples. I even tried a core-based autoscaling policy.
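For context, here is the back-of-the-envelope math behind why I expected scaling (a rough sketch with my own numbers; the per-executor memory is an assumption, and I am ignoring YARN overhead):

```python
# Rough sketch: what 50 executors would demand vs. what the
# 2-worker cluster can supply (executor memory is assumed).
executor_mem_mb = 2048                 # assumed per-executor memory
requested_mb = 50 * executor_mem_mb    # total memory 50 executors would ask for
cluster_capacity_mb = 2 * 8 * 1024     # 2 workers x 8 GB, ignoring YARN overhead
shortfall_mb = requested_mb - cluster_capacity_mb
print(shortfall_mb)  # memory I expected to show up as pending in YARN
```

So by my math there should be tens of GB of pending YARN memory, which I expected to trigger a scale-up.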
Could someone provide a real-life configuration as a demonstration?
Here is the policy I have:
{
  "id": "xyz",
  "name": "projects/unravel-dataproc/regions/us-central1/autoscalingPolicies/xyz",
  "basicAlgorithm": {
    "yarnConfig": {
      "scaleUpFactor": 1,
      "gracefulDecommissionTimeout": "3600s"
    },
    "cooldownPeriod": "120s"
  },
  "workerConfig": {
    "minInstances": 2,
    "maxInstances": 2,
    "weight": 1
  },
  "secondaryWorkerConfig": {
    "maxInstances": 5,
    "weight": 1
  }
}
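From my reading of the docs, the scale decision is driven by pending vs. available YARN memory, roughly like this (a minimal sketch of my understanding, not the Dataproc API; the function and variable names are mine):

```python
def recommended_worker_delta(pending_mem_mb, available_mem_mb,
                             mem_per_worker_mb,
                             scale_up_factor=1.0, scale_down_factor=1.0):
    """Signed change in worker count, per my reading of the
    Dataproc basic-algorithm docs (a sketch, not official code)."""
    diff_mb = pending_mem_mb - available_mem_mb
    factor = scale_up_factor if diff_mb > 0 else scale_down_factor
    return factor * diff_mb / mem_per_worker_mb

# With scaleUpFactor=1 and ~8 GB workers, ~86 GB of pending memory
# would suggest roughly +10 workers. But note my workerConfig pins
# primaries at min=max=2, so only secondary workers (cap 5) can grow.
print(recommended_worker_delta(86016, 0, 8192))  # 10.5
```

Given that, I would have expected at least the secondary workers to scale out, which is why the lack of any scaling confuses me.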