Deploy with Isolated Workloads

vCluster offers several different policies to automatically isolate workloads in a virtual cluster. For all options within each policy, review the policies documentation.

Isolating your Workloads

To isolate workloads on a vCluster, you need to enable the following configuration options.

  1. Set a Pod Security Standard for the syncer component. If you enable either the baseline or the restricted policy, vCluster follows the standards outlined in Kubernetes. For example, with baseline as the Pod Security Standard, pods that try to run as a privileged container or mount a host path are not synced to the host cluster (see the manifest sketch after this list). Although Pod Security Standards are a Kubernetes concept that only applies to certain Kubernetes versions, vCluster supports this check regardless of the Kubernetes version, because it is implemented directly in vCluster. Rejected pods stay pending in the vCluster, and in newer Kubernetes versions they are denied by the admission controller as well.
  2. Enable a resource quota as well as a limit range. This restricts the resource consumption of vCluster workloads. When enabled, sensible defaults are chosen for both resources.
  3. Enable a network policy that restricts access by vCluster workloads, as well as by the vCluster control plane, to other pods in the host cluster. This works only if the CNI of your host cluster supports network policies.
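
For instance, under the baseline standard, a pod like the following minimal sketch would not be synced to the host cluster, because it requests privileged mode and mounts a host path (the pod, container, and volume names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: privileged-example       # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      securityContext:
        privileged: true         # violates baseline: privileged container
      volumeMounts:
        - name: host-root
          mountPath: /host
  volumes:
    - name: host-root
      hostPath:                  # violates baseline: hostPath volume
        path: /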

Example vcluster.yaml

Enabling the recommended settings for workload isolation.

policies:
  # empty, baseline, restricted can be used here
  podSecurityStandard: baseline

  resourceQuota:
    enabled: true

  limitRange:
    enabled: true

  networkPolicy:
    enabled: true
info

When enabling resource quotas on a local Kubernetes cluster, make sure to add the --expose-local=false flag to your vcluster create [...] command. By default, the vCluster CLI automatically exposes the vCluster using NodePorts when it interacts with a local Kubernetes cluster.
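
For example, assuming a virtual cluster named my-vcluster (an illustrative name):

vcluster create my-vcluster --expose-local=false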

Network Only Isolation

Workloads created by vCluster are able to communicate with other workloads in the host cluster through their cluster IPs. This can sometimes be beneficial if you want to purposely access a host cluster service, and it is a good method for sharing services between virtual clusters. However, you often want to isolate namespaces and do not want the pods running inside the vCluster to have access to other workloads in the host cluster. You can accomplish this by using network policies in the namespace where vCluster is installed.
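
For reference, a manually created policy could look like the following minimal sketch: it allows traffic only within the vCluster's namespace plus DNS egress. The namespace name vcluster-ns, the policy name, and the kube-system DNS assumption are illustrative, not taken from vCluster itself:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-vcluster-workloads   # illustrative name
  namespace: vcluster-ns             # assumption: namespace hosting the vCluster
spec:
  podSelector: {}                    # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    # allow traffic between pods within the same namespace
    - to:
        - podSelector: {}
    # allow DNS lookups (assumption: cluster DNS runs in kube-system)
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

Note that this sketch also blocks egress to destinations outside the cluster; the automatically deployed policy described next is the simpler route.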

vCluster can automatically deploy a network policy in your host cluster by enabling the following option in your vcluster.yaml:

policies:
  networkPolicy:
    enabled: true

warning

Network policies do not work in all Kubernetes clusters and need to be supported by the underlying CNI plugin.

Advanced Isolation

Besides the basic workload isolation using Pod Security Standards and resource quotas, you can apply more advanced isolation methods, such as isolating the workloads on separate nodes or running them through another container runtime.

Be aware that pods created in the vCluster set their own tolerations, which affects scheduling decisions. To prevent these pods from being scheduled to undesirable nodes, use the sync.fromHost.nodes.selector.labels option or an admission controller, as shown in the sketch below.
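
A minimal sketch that restricts which host nodes are synced into the vCluster based on a node label; the label key and value environment: vcluster-workloads are an assumption for illustration:

sync:
  fromHost:
    nodes:
      selector:
        labels:
          environment: vcluster-workloads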

Workload & Network Isolation within the vCluster

The methods mentioned above also work for isolating workloads inside the vCluster itself: you can deploy resource quotas, limit ranges, admission controllers, and network policies there. For network policies created inside the vCluster to function correctly, the corresponding configuration options must be enabled in your vcluster.yaml, as sketched below.
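
A minimal sketch, assuming the sync.toHost.networkPolicies option is the relevant switch for syncing NetworkPolicy objects from the virtual cluster to the host:

sync:
  toHost:
    networkPolicies:
      enabled: true   # assumption: syncs vCluster NetworkPolicies to the host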

Secret-based Service Account tokens

By default, vCluster creates a Service Account Token for each pod and injects it as an annotation in the respective pod's metadata. If this does not comply with your security practices, you can instead have vCluster create a separate Secret for each pod's Service Account Token and mount it using a projected volume. Enable this in your vcluster.yaml:

sync:
  toHost:
    pods:
      useSecretsForSATokens: true