
What is vCluster?

vCluster is an open source solution that enables teams to run virtual Kubernetes clusters inside existing infrastructure. It helps platform engineers create secure, isolated environments for development, testing, CI/CD, and even production workloads, without the cost or overhead of managing separate physical clusters.

vCluster supports a wide range of tenancy models, from lightweight namespace-based setups to more advanced configurations with private nodes, GPUs, and bare metal. Environments are defined declaratively, allowing teams to provision repeatable clusters that match their isolation and performance needs.
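
For example, environments are typically described in a vcluster.yaml file. The following is a minimal, illustrative sketch using the v0.20+ configuration format; the exact fields depend on your vCluster version, so check the vcluster.yaml reference rather than treating this as authoritative:

```yaml
# vcluster.yaml: a minimal, illustrative sketch of a declarative
# virtual cluster definition (v0.20+ config format assumed).
controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true   # keep virtual cluster state in embedded etcd
sync:
  toHost:
    ingresses:
      enabled: true     # additionally sync tenant Ingress objects to the host
```

Because the definition is a plain file, the same configuration can be applied to any host cluster to produce an identical environment.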

By consolidating workloads onto fewer host clusters, vCluster reduces infrastructure sprawl, lowers Kubernetes costs, and simplifies multi-tenant platform operations.

What is a virtual cluster?

A virtual cluster is a fully functional Kubernetes cluster that runs on top of another Kubernetes cluster.

Typically, a virtual cluster runs inside a namespace of a host cluster, but it operates as an independent environment, with its own API server, control plane, and resource set.

Depending on the tenancy model, virtual clusters may share or isolate compute and networking resources. Regardless of the underlying setup, they remain abstracted from the host cluster’s global state, enabling strong workload separation and tenant autonomy.

vCluster extends this concept to support a full spectrum of tenancy models, from simple namespace syncing to advanced configurations using shared nodes, virtual nodes, dedicated nodes, or even private nodes.

Virtual clusters are Certified Kubernetes Distributions, adhering to upstream Kubernetes standards while maintaining isolation from the host cluster.

This flexibility allows you to select the ideal tenancy model for your team’s security, cost, and performance requirements, while also benefiting from faster provisioning and centralized management.

Tenancy Models Overview

vCluster supports a range of tenancy models, allowing you to choose the right balance of isolation, cost-efficiency, and operational complexity for your platform. Each virtual cluster runs on top of a host Kubernetes cluster, but how it isolates workloads, consumes resources, and interacts with the underlying infrastructure depends on the tenancy model you select.

Below are the tenancy models supported by vCluster:

Shared Nodes

  • The control plane of the virtual cluster is deployed as a container on a host cluster.
  • Worker nodes of the virtual cluster are from the same host cluster.

How it works

All virtual clusters run in a single Kubernetes host cluster and schedule pods onto the same shared node pool. The vCluster control plane enforces separation at the API, RBAC, and CRD levels, but it does not restrict pod scheduling unless additional mechanisms (e.g., taints, affinities) are applied. Tenants interact with their own virtual clusters as if they were separate environments, but their workloads run side by side with those from other virtual clusters at the node level. Infrastructure components such as the container runtime, CNI, and CSI drivers are shared across all tenants.
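
As a sketch of those additional mechanisms, the following standard Kubernetes manifest pins a tenant workload onto a labeled node pool and tolerates a matching taint. The pool label and dedicated taint are hypothetical names used for illustration, not vCluster-defined values:

```yaml
# Illustrative only: standard Kubernetes scheduling primitives applied to a
# tenant workload. The pool=team-a label and "dedicated" taint are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-a-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: team-a-app
  template:
    metadata:
      labels:
        app: team-a-app
    spec:
      nodeSelector:
        pool: team-a            # schedule only onto nodes labeled pool=team-a
      tolerations:
        - key: dedicated        # tolerate a taint reserving those nodes
          operator: Equal
          value: team-a
          effect: NoSchedule
      containers:
        - name: app
          image: nginx:1.27
```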

Figure: vCluster Architecture for Shared Nodes

Dedicated Nodes

  • The control plane of the virtual cluster is deployed as a container on a host cluster.
  • Worker nodes of the virtual cluster are a dedicated subset of nodes in the same host cluster.

How it works

Each vCluster is configured with a Kubernetes nodeSelector (or affinity rules) that ensures all tenant workloads are scheduled only to nodes with specific labels. For example, a virtual cluster assigned to nodegroup=team-a will only run pods on nodes matching that label.

While compute is scoped to these dedicated nodes, all other components—like the CNI, CSI, and underlying Kubernetes host cluster—remain shared. The vCluster itself maintains full API isolation, separate CRDs, tenant-specific RBAC, and control plane security.
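
A minimal sketch of such a configuration, using the node-syncing selector from the v0.20+ vcluster.yaml format (verify the exact fields against the configuration reference for your version; depending on the release, enforcing scheduling onto the selected nodes may involve additional settings):

```yaml
# Sketch: expose only labeled host nodes to this virtual cluster
# (v0.20+ vcluster.yaml format assumed; the nodegroup=team-a label
# matches the example above).
sync:
  fromHost:
    nodes:
      enabled: true
      selector:
        labels:
          nodegroup: team-a   # only host nodes with this label are synced in
```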

Figure: vCluster Architecture for Dedicated Nodes

Private Nodes

  • The control plane of the virtual cluster is deployed as a container on a host cluster.
  • Worker nodes of the virtual cluster are individual nodes that are not connected to any other Kubernetes cluster.

How it works

Each virtual cluster is provisioned with its own dedicated set of worker machines (VMs or bare metal hosts) that join the virtual cluster directly instead of any host cluster. The CNI, CSI, kube-proxy, and all other node-level Kubernetes components run per tenant and are fully isolated.

Because only the control plane runs as a container on the host cluster, the virtual cluster keeps the benefits of virtual cluster abstraction (fast startup, CRD freedom, sleep mode, etc.) while adding a hard isolation boundary beneath it. Tenants cannot interfere with one another's environments at any layer, from the API server down to the node kernel.
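
This mode is also enabled declaratively. A minimal sketch, assuming the privateNodes section available in recent vCluster releases (check the configuration reference for your version):

```yaml
# Sketch: a virtual cluster with private nodes (privateNodes section
# assumed from recent vCluster releases). The control plane still runs
# as a container on the host cluster, while worker machines join the
# virtual cluster directly.
privateNodes:
  enabled: true
```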