
Create control plane

Supported Configurations
Run the control plane as a binary with vCluster Standalone. When scaling out with additional worker nodes, the nodes join as private nodes.

Overview​

When deploying a vCluster Standalone cluster managed by vCluster Platform, the control plane node is provisioned and managed through the platform using Auto Nodes.

Predeployment configuration options​

Before deploying, review the configuration options that cannot be changed after deployment. Changing any of these options requires deploying a new vCluster instead of upgrading your existing one.

Control-plane options​

  • High availability - Run multiple control plane nodes.
  • CoreDNS - Currently, only the CoreDNS deployment created by vCluster during startup is supported.
  • Backing store - Decide how the data of your cluster is stored. It must be either embedded SQLite (the default) or embedded etcd.
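For example, selecting embedded etcd instead of the default SQLite is done in vcluster.yaml before the first deployment. This is a sketch assuming the standard controlPlane.backingStore schema from the vCluster configuration reference:

    vcluster.yaml fragment: embedded etcd backing store
    # Cannot be changed after the cluster is deployed.
    controlPlane:
      backingStore:
        etcd:
          embedded:
            enabled: true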

Node Roles​

Decide whether the control plane node also acts as a worker node. Once a node joins the cluster, its roles cannot change.

By default, the control plane node also acts as a worker node. To deploy a dedicated control plane that does not run workloads, set controlPlane.standalone.joinNode.enabled to false.
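Using the option named above, a dedicated control plane that schedules no workloads would look like this vcluster.yaml fragment, written before the first deployment:

    vcluster.yaml fragment: dedicated control plane node
    controlPlane:
      standalone:
        enabled: true
        joinNode:
          enabled: false # defaults to true (control plane also runs workloads)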

Worker Nodes​

With vCluster Standalone, worker node pools can only be private nodes. Since there is no host cluster, there is no concept of host nodes.

Prerequisites​

Install control plane node​

Managed by vCluster Platform​

When managing a standalone cluster through vCluster Platform, the initial control plane node is provisioned through the platform using Auto Nodes.

Use vCluster Platform to:

  1. Add a Node Provider.
  2. Add the vCluster configuration (example below).
  3. Provision the cluster from the platform UI.
vcluster.yaml for a standalone control plane managed by vCluster Platform
controlPlane:
  standalone:
    enabled: true
    autoNodes:
      provider: aws # Node provider you want to use
      quantity: 1 # Number of nodes (HA requires embedded etcd or external DB)
  distro:
    k8s:
      image:
        tag: v1.35.0 # Kubernetes version you want to use

# Worker nodes
privateNodes:
  enabled: true
  autoNodes: # (optional) Add worker nodes with Auto Nodes
    provider: aws
    dynamic:
      - name: aws-pool-1

# Networking configuration
networking:
  # Specify the pod CIDR
  podCIDR: 10.64.0.0/16
  # Specify the service CIDR
  serviceCIDR: 10.128.0.0/16

After provisioning completes, vCluster Platform manages the control plane node lifecycle. Worker node lifecycle also remains managed through the platform UI when Auto Nodes are used.

Access your cluster​

To access a standalone cluster managed by vCluster Platform, open the vCluster in the platform UI and click Connect.

As an alternative, use the vcluster platform connect vcluster command.
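As a sketch, the CLI form takes the vCluster name and, typically, a project; my-standalone and my-project below are placeholder values for your deployment:

    Connect with the vCluster CLI (placeholder host and names)
    vcluster platform login https://platform.example.com
    vcluster platform connect vcluster my-standalone --project my-project
    kubectl get nodes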

Self-managed​

Control Plane Node

All steps are performed on the control plane node.

  1. Replace VCLUSTER_VERSION with the vCluster version you want to install, and CONFIG_DIRECTORY with the directory that stores the vCluster configuration file, vcluster.yaml.

  2. Create directory for storing vCluster Standalone configuration.

    Create config directory
    mkdir -p /etc/vcluster

    Save a basic vcluster.yaml configuration file for vCluster Standalone on the control plane node.

    Create vCluster config file
    cat <<EOF > /etc/vcluster/vcluster.yaml
    controlPlane:
      distro:
        k8s:
          version: v1.34.0
    EOF
    warning

    Adding more control plane nodes is not supported unless you follow the high availability configuration steps.

  3. Authenticate with sudo privileges to access the Kubernetes context:

    sudo su -

    Then run the installation script on the control plane node:

    Install vCluster Standalone on control plane node
    curl -sfL https://github.com/loft-sh/vcluster/releases/download/v0.31.0/install-standalone.sh | sh -s -- --vcluster-name standalone
  4. Check that the control plane node is ready.

    After installation, the kubeconfig is automatically configured on the control plane node. The kubectl context is set to interact with your new vCluster Standalone instance.

    Run these commands on the control plane node:

    Check node status
    kubectl get nodes

    Expected output:

    NAME               STATUS   ROLES                  AGE   VERSION
    ip-192-168-3-131   Ready    control-plane,master   11m   v1.32.1
    Verify cluster components are running
    kubectl get pods -A

    Pods should include:

    • Flannel: CNI for container networking
    • CoreDNS: DNS service for the cluster
    • KubeProxy: Network traffic routing and load balancing
    • Konnectivity: Secure control plane to worker node communication
    • Local Path Provisioner: Dynamic storage provisioning

Available flags to use in the install script​

The install script accepts several flags:

Flag                        Description
--vcluster-name             Name of the vCluster instance
--vcluster-version          Specific vCluster version to install
--config                    Path to the vcluster.yaml configuration file
--skip-download             Skip downloading the vCluster binary (use existing)
--skip-wait                 Exit without waiting for vCluster to be ready
--extra-env                 Additional environment variables for vCluster
--platform-access-key       Access key for vCluster Platform integration
--platform-host             vCluster Platform host URL
--platform-insecure         Skip TLS verification for the Platform connection
--platform-instance-name    Instance name in vCluster Platform
--platform-project          Project name in vCluster Platform
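For example, several flags can be combined in one invocation; the version tag and config path below are placeholders to adjust for your release and setup:

    Install with explicit name, version, and config (placeholder values)
    curl -sfL https://github.com/loft-sh/vcluster/releases/download/v0.31.0/install-standalone.sh | \
      sh -s -- \
        --vcluster-name standalone \
        --vcluster-version v0.31.0 \
        --config /etc/vcluster/vcluster.yaml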

Access your cluster​

After installation, the kubeconfig is automatically configured on the control plane node, and the kubectl context is set to interact with your new vCluster Standalone instance. If you decide to use vCluster Standalone as a host cluster for virtual clusters, set your current kube context to the vCluster Standalone cluster, then create and interact with virtual clusters using the vCluster CLI.

To access the cluster from other machines, copy the kubeconfig from /var/lib/vcluster/kubeconfig.yaml on the control plane node, then replace the server field with the node's IP address or DNS name. You can also configure vCluster to make external access easier.
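Rewriting the server field can be scripted with sed. The sketch below uses /tmp/kubeconfig.yaml as a stand-in for the file copied from /var/lib/vcluster/kubeconfig.yaml, and 203.0.113.10 as a placeholder for your node's external address:

```shell
# Create a stand-in kubeconfig with the default local server address.
cat > /tmp/kubeconfig.yaml <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: standalone
EOF

# Point the server field at the node's external address (placeholder IP).
sed -i 's|server: .*|server: https://203.0.113.10:6443|' /tmp/kubeconfig.yaml

# Confirm the rewritten address.
grep server /tmp/kubeconfig.yaml
```

With the real file, export KUBECONFIG to its path and run kubectl against the cluster.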

The vCluster CLI is installed at /var/lib/vcluster/bin/vcluster-cli.