
Design Principles

Lightweight with low overhead

vCluster is designed to be as lightweight as possible to minimize resource overhead on the host cluster.

Implementation: This is achieved by bundling the virtual cluster's Kubernetes components inside a single Pod.
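
To make this concrete, here is an illustrative sketch of what the control plane pod of a virtual cluster might look like in its host namespace. The `my-vcluster` and `team-a` names and the image reference are placeholders, and the exact container layout varies by vCluster version and distro:

```yaml
# Illustrative only: names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: my-vcluster-0    # the single control plane pod of the virtual cluster
  namespace: team-a      # the host namespace the vCluster was deployed into
spec:
  containers:
    - name: syncer             # API server, controller manager, data store,
      image: loftsh/vcluster   # and syncer bundled together (placeholder image)
```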

No performance degradation

Workloads running inside a virtual cluster (even inside nested vCluster instances) should run with the same performance as workloads running directly on the underlying host cluster. Computing power, access to underlying persistent storage, and network performance should not be degraded at all.

Implementation: This is achieved by synchronizing pods to the host cluster, where they are scheduled and started just like regular pods deployed directly on the host cluster.
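
For illustration, a pod named `nginx` in the virtual cluster's `default` namespace could appear on the host roughly as follows. The `my-vcluster` and `team-a` names are placeholders, and the exact name-translation scheme depends on the vCluster version:

```yaml
# Inside the virtual cluster, the user sees an ordinary pod: default/nginx.
# On the host cluster, the syncer creates a corresponding pod that the host
# scheduler places on a node like any other workload:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-x-default-x-my-vcluster  # translated name (scheme may vary)
  namespace: team-a                    # the vCluster's host namespace
spec:
  nodeName: worker-2                   # assigned by the host cluster's scheduler
  containers:
    - name: nginx
      image: nginx
```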

Reduce requests to the host cluster

vCluster should greatly reduce the number of requests to the Kubernetes API server of the underlying host cluster by ensuring that all high-level resources, e.g. Deployments and StatefulSets, remain only in the virtual cluster.

Implementation: This is achieved by using a separate API server that handles all requests to the virtual cluster and a separate data store that stores all objects inside the virtual cluster. The syncer component synchronizes only a few low-level resources to the host cluster and thus requires very few API server requests. All of this happens in an asynchronous, non-blocking fashion, which follows Kubernetes design principles.
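
To sketch the effect, creating a Deployment inside the virtual cluster only ever results in low-level resources on the host; all names below are illustrative:

```
Created inside the virtual cluster (stored only in the vCluster data store,
served entirely by the virtual API server, never synced to the host):
  Deployment  default/web
  ReplicaSet  default/web-7d4b9c
Created on the host cluster by the syncer (the only host API requests):
  Pod         team-a/web-7d4b9c-q8xkz-x-default-x-my-vcluster
```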

Flexible deployment

vCluster does not make any assumptions about how users deploy virtual clusters. Users should be able to create a virtual cluster in any host cluster without installing any server-side components. Deployment is possible with client-only tools such as the vCluster CLI, Helm, or Terraform. Users can add an operator or CRDs to manage virtual clusters, e.g. with Argo CD, but a server-side management plane should never be required to deploy a virtual cluster.

Implementation: This is achieved by making vCluster run as a simple StatefulSet (or Deployment) + Service that can be deployed using any Kubernetes tool.
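
Because the chart is client-deployable, a GitOps tool can manage it like any other Helm chart. As a minimal sketch, an Argo CD `Application` could look like this; the names, namespaces, and chart version are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-vcluster
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.loft.sh   # public vCluster Helm repository
    chart: vcluster
    targetRevision: "*"               # placeholder; pin a real chart version
  destination:
    server: https://kubernetes.default.svc
    namespace: team-a                 # host namespace for the virtual cluster
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
```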

No admin privileges required

To deploy a virtual cluster, a user does not need any cluster-wide permissions. If a user has the RBAC permissions to deploy a simple web app to a namespace, they can also deploy a virtual cluster to that namespace.

Implementation: This is achieved by making vCluster run as a simple StatefulSet (or Deployment) + Service. Typically, any user with namespace-level Kubernetes access already has the privileges to create these resources.
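
As an illustrative sketch, a namespace-scoped Role along the following lines, with no ClusterRole or cluster-wide binding, is the kind of access that typically suffices. The Role name is hypothetical, and the exact resource list depends on the vCluster version and enabled features:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vcluster-deployer   # hypothetical name
  namespace: team-a
rules:
  - apiGroups: ["apps"]
    resources: ["statefulsets", "deployments", "replicasets"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints", "secrets",
                "configmaps", "serviceaccounts", "persistentvolumeclaims"]
    verbs: ["*"]
```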

Single namespace encapsulation

Each virtual cluster, including all of its workloads and data, is encapsulated in a single namespace on the host cluster. Even if the virtual cluster contains hundreds of namespaces, everything still lives in one namespace on the host cluster.

Implementation: This is achieved by using a separate API server and data store. Additionally, the syncer component synchronizes everything to a single host namespace while renaming resources during the sync to prevent naming conflicts when mapping from multiple namespaces inside the virtual cluster to a single namespace in the host cluster.
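
Illustratively, workloads from many virtual namespaces collapse into the single host namespace, with the virtual namespace encoded into the translated name (the exact scheme is version-dependent; all names are placeholders):

```
Virtual cluster "my-vcluster", deployed into host namespace "team-a":
  virtual: team-frontend/web  ->  host: team-a/web-x-team-frontend-x-my-vcluster
  virtual: team-backend/api   ->  host: team-a/api-x-team-backend-x-my-vcluster
```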

Easy cleanup

Virtual clusters are not hard-wired to the underlying cluster. Deleting a virtual cluster, or simply deleting its host namespace, has no negative impact on the underlying host cluster (e.g. no namespaces stuck in a terminating state). Deletion also removes all vCluster-related resources cleanly and immediately, without leaving any orphaned resources behind.

Implementation: This is achieved by not adding any control plane or server-side elements to the deployment of a virtual cluster, which is just a StatefulSet (or Deployment) and a few other Kubernetes resources. All synchronized resources in the host namespace carry an appropriate owner reference, so deleting the virtual cluster itself causes Kubernetes to automatically delete everything that belongs to it. This is similar to the mechanism Deployments and StatefulSets use to clean up their pods.
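
As a sketch of what this looks like on a synced resource: the owner kind, name, and UID below are placeholders (vCluster versions differ in which host object they use as the owner):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-x-default-x-my-vcluster
  namespace: team-a
  ownerReferences:
    - apiVersion: v1
      kind: Service      # placeholder owner; deleting the vCluster lets
      name: my-vcluster  # Kubernetes garbage-collect this pod automatically
      uid: 00000000-0000-0000-0000-000000000000  # placeholder UID
```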