Using Karpenter with vCluster Platform

vCluster is compatible with Karpenter, and vCluster's features can be used in conjunction with Karpenter to further reduce infrastructure costs.

This article explains how to Create Designated Node Pool for a vCluster's Workload Pods, Create Designated Node Pool for a Project, and Target Node Pool Based on vCluster Pod Labels, all of which can be powerful cost-saving tools.

This article assumes you are using a host cluster provisioned on one of the following managed Kubernetes services: EKS (Elastic Kubernetes Service) or AKS (Azure Kubernetes Service).

The pattern <value> appears repeatedly in this documentation; it means you should replace the value, angle brackets included, with your own.

Much of this article borrows from and builds upon the Getting Started with Karpenter documentation and the Karpenter Provider Azure README.md. Most, if not all, scripts and YAML manifests were written by the authors of those articles or of other AKS and Azure documentation; some have been modified here, while others remain unchanged.

Prerequisites

  1. Either an EKS Kubernetes cluster or an AKS Kubernetes cluster.
  2. vCluster Platform installed on the host cluster. See the Deploy the vCluster Platform docs.
  3. EKS-specific requirements for vCluster Platform compatibility:
    1. EBS driver installed. Get the Amazon EBS CSI driver add-on.
    2. Delete the gp2 StorageClass with kubectl delete storageclass gp2 and replace it with gp3 by applying the following YAML:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
allowVolumeExpansion: true
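
For example (the manifest filename below is illustrative; save the YAML above to any file you like):

kubectl delete storageclass gp2
kubectl apply -f gp3-storageclass.yaml
kubectl get storageclass # verify gp3 is now marked (default)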

Install Karpenter

  1. Install Karpenter with Helm:
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \
--set "settings.clusterName=<cluster_name_in_provider>" \
--set "settings.interruptionQueue=<cluster_name_in_provider>" \
--set controller.resources.requests.cpu=1 \
--set controller.resources.requests.memory=1Gi \
--set controller.resources.limits.cpu=1 \
--set controller.resources.limits.memory=1Gi \
--wait
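
The command above assumes the KARPENTER_NAMESPACE and KARPENTER_VERSION shell variables are set. The values below are only examples; check the Karpenter releases page for a current version compatible with the v1beta1 APIs used in this article:

# Example values only; adjust to your environment.
export KARPENTER_NAMESPACE="kube-system"
export KARPENTER_VERSION="0.37.0"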

Configure Host Cluster Kubernetes RBAC permissions

To deploy the manifests presented in the remainder of this article, your user needs the following Kubernetes permissions in the host cluster:

  • Create NodePool (karpenter.sh/v1beta1) resources in the host cluster.
  • Create EC2NodeClass (karpenter.k8s.aws/v1beta1) resources in the host cluster.
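
A minimal sketch of a ClusterRole granting these permissions (the role name is illustrative; bind it to your user with a ClusterRoleBinding):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: karpenter-resource-creator # illustrative name
rules:
  # NodePool lives in the karpenter.sh API group
  - apiGroups: ["karpenter.sh"]
    resources: ["nodepools"]
    verbs: ["create"]
  # EC2NodeClass lives in the karpenter.k8s.aws API group
  - apiGroups: ["karpenter.k8s.aws"]
    resources: ["ec2nodeclasses"]
    verbs: ["create"]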

Create Node Pools

Create Designated Node Pool for a vCluster's Workload Pods

vCluster Platform Permissions

The following permissions will be required in vCluster Platform:

  • Create virtual clusters

Create Node Pool

This section uses taints and tolerations to guide virtual cluster pods to the desired nodes. This works best if every node pool or node carries a taint; otherwise, the pods may be scheduled on untainted nodes.

  1. Select "Create Virtual Cluster"
  2. Select "Configure" and add a toleration to virtual cluster pods using the yaml editor. If the below yaml is out of date then see the applying tolerations docs.
sync:
  toHost:
    pods:
      enforceTolerations:
        - vclusterID=<vcluster-name>:NoSchedule
  3. Create a node pool with a taint matching the toleration, and create a node class for it to use.
cat <<EOF | kubectl apply -f -
---
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: <name-of-your-choosing>
spec:
  template:
    spec:
      taints:
        - key: vclusterID
          value: <vcluster-name>
          effect: NoSchedule
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["2"]
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: <node-class-name>
  limits:
    cpu: 100
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: Never
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: <name-of-your-choosing>
spec:
  amiFamily: AL2 # Amazon Linux 2
  role: "KarpenterNodeRole-<cluster-name>"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "<cluster-name>"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "<cluster-name>"
  amiSelectorTerms:
    - id: "<arm-ami-id>"
    - id: "<amd-ami-id>"
EOF
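
You can confirm both resources were created in the host cluster:

kubectl get nodepools.karpenter.sh,ec2nodeclasses.karpenter.k8s.aws
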
  4. Confirm the node pool is working. Run kubectl get nodes --watch, then open a separate terminal tab and connect to your vCluster by running vcluster connect <vcluster-name>. Then run:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1
EOF

kubectl scale deployment inflate --replicas 5

You should see nodes being provisioned for the deployment. If you repeat the above on a vCluster without the toleration, no nodes from this node pool should be provisioned.
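
To double-check that the provisioned nodes carry the expected taint, you can list the taint keys per node:

# Print each node's name alongside the keys of its taints.
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'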

  5. Tear down the deployment used above.
kubectl delete deployment inflate

Create Designated Node Pool for a Project

We will designate a node pool for a project by creating a virtual cluster template and enforcing that template for the project.

vCluster Platform Permissions

The following permissions will be required in vCluster Platform:

  • Create vCluster Templates
  • Create Projects

Create Node Pool

  1. In vCluster Platform, add a template: Templates -> Virtual Clusters -> Add Virtual Cluster Template.
  2. In the YAML editor box, which contains the vcluster.yaml, add the following:
sync:
  toHost:
    pods:
      enforceTolerations:
        - <project-name>=true:NoSchedule
  3. Create a node pool with the taint corresponding to the toleration in step 2, e.g.:
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: <name-of-your-choosing>
spec:
  template:
    spec:
      taints:
        - key: <project-name>
          value: "true"
          effect: NoSchedule
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["2"]
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: <node-class-name>
  limits:
    cpu: 100
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: Never
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: <name-of-your-choosing>
spec:
  amiFamily: AL2 # Amazon Linux 2
  role: "<karpenter-node-role>" # replace with your Karpenter node role
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "<cluster-name>"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "<cluster-name>"
  amiSelectorTerms:
    - id: "<arm-ami-id>"
    - id: "<amd-ami-id>"
EOF
  4. Create a new project that only allows templates with the required tolerations. Go to the project dropdown and create a new project. In the Allowed Templates section, remove the All Templates option and select the templates that include tolerations for the node pool created in step 3. Now all pods created in virtual clusters belonging to this project will deploy to the designated node pool.

Target Node Pool Based on vCluster Pod Labels

vCluster Platform Permissions

The following permissions will be required in vCluster Platform:

  • Create virtual clusters

Create Node Pool

  1. Create a node pool that requires a custom label key, e.g. my.org/environ.
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["2"]
        - key: my.org/environ
          operator: Exists
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: <node-class-name>
  limits:
    cpu: 100
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: Never
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: <name-of-your-choosing>
spec:
  amiFamily: AL2 # Amazon Linux 2
  role: "<karpenter-node-role>" # replace with your Karpenter node role
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "<cluster-name>"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "<cluster-name>"
  amiSelectorTerms:
    - id: "<arm-ami-id>"
    - id: "<amd-ami-id>"
EOF
  2. Create a virtual cluster.

  3. In your virtual cluster, add a nodeSelector to your deployments that targets the label from step 1 with the desired value.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      nodeSelector:
        my.org/environ: <value-of-your-choosing> # e.g. "prod", "dev", or "test"
      terminationGracePeriodSeconds: 0
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1
EOF

kubectl scale deployment inflate --replicas 5

If you create a deployment with a key/value pair that matches an existing node, its pods can be scheduled to that node; otherwise, a new node is provisioned. Since the node pool requires only the key, not a value, deployments using different values necessitate different nodes. You can read more about user-defined labels in Karpenter's User-Defined Labels docs.
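
To see which my.org/environ value each provisioned node received, you can list the label as an extra column:

kubectl get nodes -L my.org/environ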