# What is vCluster?
vCluster provisions fully isolated Kubernetes environments, called tenant clusters, on your infrastructure or directly on bare metal. Each tenant gets a dedicated API server, its own CRDs and RBAC, and a cluster experience indistinguishable from a dedicated Kubernetes cluster.
The control plane is completely invisible to tenants. There are no shared control plane nodes, no in-cluster agent pods, and no lateral path between environments. vCluster suits any environment where isolation is a hard requirement, from developer platforms and CI/CD pipelines to GPU cloud infrastructure serving paying tenants.
## How to start
Two questions determine your deployment path.
### 1. Do you already have a Kubernetes cluster?
Yes — your existing cluster becomes the Control Plane Cluster that hosts tenant cluster control planes. You need no additional infrastructure to get started.
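On this path, a first tenant cluster is a short CLI session away. A minimal sketch, assuming the vCluster CLI is installed and your kubeconfig currently points at the host cluster (`my-tenant` and `team-a` are illustrative names):

```bash
# Create a tenant cluster in its own namespace on the host cluster
vcluster create my-tenant --namespace team-a

# Point your kubeconfig at the tenant cluster
vcluster connect my-tenant

# Lists namespaces inside the tenant, not the host
kubectl get namespaces
```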
No — choose one of:
- vCluster Standalone — a zero-dependency Kubernetes distribution that runs as a self-contained binary on bare metal or VMs. Suited for AI cloud providers and neoclouds building from bare metal, as well as edge and air-gapped production deployments.
- vind (vCluster in Docker) — runs a complete cluster in Docker containers with no Kubernetes dependency. Suited for local development and CI/CD pipelines.
### 2. How will worker nodes run?
Shared nodes — Tenant workloads run on the same cluster that hosts the control planes, sharing the existing node pool. Multiple tenant clusters run side by side on the same physical nodes. Lower overhead, fastest to set up. Suited for developer environments, CI/CD, and high-density internal platforms.
Private nodes — Each tenant cluster gets dedicated nodes that join through a token-based process. The network, storage, and compute are fully isolated per tenant. No cross-tenant visibility exists at the infrastructure level. This is the isolation model GPU workloads, regulated industries, and multi-tenant AI cloud platforms require. Nodes can come from any Linux infrastructure. Join nodes to your vCluster from bare metal servers using vMetal, cloud VMs through a Terraform node provider, or any manually joined Linux machine, including nodes from other cloud environments.
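In the vcluster.yaml configuration, this mode is switched on explicitly. A minimal sketch (the exact field layout is an assumption here; check the vcluster.yaml reference for your version):

```yaml
# vcluster.yaml (sketch): run tenant workloads on dedicated nodes
# instead of syncing them to the host cluster's shared node pool
privateNodes:
  enabled: true
```

With this set, the control plane still runs on the Control Plane Cluster, but workloads schedule only onto nodes that have joined the tenant through the token-based process.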
## Deployment paths
| | Shared nodes | Private nodes |
|---|---|---|
| Existing K8s cluster | Deploy the control plane | Join private nodes |
| vCluster Standalone | Install Standalone for density | Install Standalone for isolation |
| vind (Docker) | — | Deploy in Docker |
vind always uses private nodes. It automatically provisions worker nodes as Docker containers.
## How it works
Each tenant cluster runs a dedicated virtualized control plane as a containerized process on the Control Plane Cluster. The control plane manages all operations within the tenant cluster and cannot be accessed by other tenants.
With shared nodes, a syncer component translates workload resources (Pods, Secrets, ConfigMaps, Services) from each tenant cluster into a dedicated namespace on the underlying cluster. Tenants see their own resources. The translated copies are invisible to them.
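To keep the host-side copies unambiguous, the translation has to encode the tenant origin into each resource name. The naming scheme below is an illustrative sketch, not necessarily the exact format your syncer version produces: the tenant pod name is combined with its tenant namespace and the tenant cluster's name.

```shell
# Hypothetical sketch of disambiguating a synced pod name on the host:
# tenant pod "web" in tenant namespace "default", tenant cluster "team-a".
pod="web"
tenant_ns="default"
vcluster="team-a"
host_name="${pod}-x-${tenant_ns}-x-${vcluster}"
echo "${host_name}"   # web-x-default-x-team-a
```

Tenants only ever see `web`; the combined name exists solely in the host namespace the syncer writes to, which is why two tenants can each run a pod named `web` without colliding.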
With private nodes, vCluster synchronizes only control plane state. Workloads run directly on dedicated infrastructure with no sync overhead. Tenants see an environment identical to a dedicated single-tenant cluster.
Tenant clusters are certified Kubernetes distributions. Any conformant tool works against them without modification: kubectl, Helm, Argo, Crossplane, and others.
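For example, standard tooling can be run against a tenant cluster through the CLI without permanently switching contexts (a sketch; `my-tenant` is an illustrative name):

```bash
# Run one-off commands with the tenant cluster's kubeconfig
vcluster connect my-tenant -- kubectl get pods -A
vcluster connect my-tenant -- helm list -A
```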
## Features and plans
vCluster is available as open source and as an enterprise-grade platform with additional features across paid tiers.
## Next steps
- Architecture — control plane internals, syncer behavior, and networking
- Private Nodes — dedicated infrastructure for GPU tenants and regulated workloads
- vCluster Standalone — zero-dependency Kubernetes for bare metal and edge
- Building a GPU cloud platform — deployment models for AI cloud providers
- vcluster.yaml reference — full configuration reference