
Deploy in high availability

Supported Configurations
Run the control plane as a binary with vCluster Standalone. When scaling out, additional worker nodes join the cluster as private nodes.

Overview​

By default, vCluster Standalone is installed on one initial control plane node. This deployment method is recommended for ephemeral use cases (e.g. dev environments, CI/CD), but for production it's recommended to run vCluster with more redundancy. Deploy vCluster Standalone with multiple control plane nodes (i.e. in high availability (HA)) so the virtual cluster is more resilient to failures.

Control plane nodes are added to the cluster one at a time, starting with an initial control plane node.

Predeployment configuration options​

Backing store must be embedded etcd or external database​

When running vCluster Standalone in HA, the backing store must be either embedded etcd or an external database, and it must be explicitly enabled on the initial node.
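As an illustration, a minimal vcluster.yaml using an external database instead of embedded etcd could look like the following sketch. The key names follow the vcluster.yaml schema for external database backing stores, and the dataSource value is a placeholder, not a real endpoint:

```shell
# Sketch only: write a vcluster.yaml that enables an external database backing store.
# The dataSource connection string is a placeholder for your own database.
mkdir -p /tmp/vcluster-example
cat <<'EOF' > /tmp/vcluster-example/vcluster.yaml
controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        dataSource: "mysql://user:password@tcp(database-host:3306)/vcluster"
EOF
grep -q "enabled: true" /tmp/vcluster-example/vcluster.yaml && echo "backing store configured"
```

Whichever backing store you choose, it has to be set before the initial node is installed; it cannot be switched on afterwards to retrofit HA.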

Control Plane Node Roles​

Decide if the control plane node will also be a worker node or not.

Worker Nodes​

With vCluster Standalone, worker nodes can only be private nodes. Since there is no host cluster, there is no concept of host nodes.

Prerequisites​

Install Initial Control Plane Node​

Managed by vCluster Platform​

When managing a standalone HA cluster through vCluster Platform, the initial control plane node is provisioned through the Platform using Auto Nodes.

Use vCluster Platform to:

  1. Add a Node Provider.
  2. Add the vCluster configuration (example below).
vcluster.yaml for an HA standalone control plane managed by vCluster Platform
controlPlane:
  standalone:
    enabled: true
    autoNodes:
      provider: aws # Node provider you want to use
      quantity: 3 # Number of nodes
  distro:
    k8s:
      image:
        tag: v1.35.0 # Kubernetes version you want to use
  backingStore:
    etcd:
      embedded:
        enabled: true # Required for HA (or use external DB)

# Worker nodes
privateNodes:
  enabled: true
  autoNodes: # (optional) Add worker nodes with Auto Nodes
    provider: aws
    dynamic:
      - name: aws-pool-1

# Networking configuration
networking:
  # Specify the pod CIDR
  podCIDR: 10.64.0.0/16
  # Specify the service CIDR
  serviceCIDR: 10.128.0.0/16
  3. Provision the cluster from the platform UI.

After provisioning completes, vCluster Platform manages the control plane node lifecycle. Worker node lifecycle also remains managed through the platform UI when Auto Nodes are used.

Access your cluster​

To access a standalone cluster managed by vCluster Platform, open the vCluster in the Platform UI and click Connect.

As an alternative, use the vcluster platform connect vcluster command.

Self-managed​

When deploying vCluster Standalone outside vCluster Platform, the assets required to install the control plane are located in the GitHub releases of vCluster.

Control Plane Node

All steps are performed on the initial control plane node.

  1. Create directory for storing vCluster Standalone configuration.

    Create /etc/vcluster directory
    mkdir -p /etc/vcluster

    Save a vcluster.yaml configuration file for vCluster Standalone on the control plane node.

    Create a vcluster.yaml to enable HA for vCluster Standalone
    cat <<EOF > /etc/vcluster/vcluster.yaml
    controlPlane:
      distro:
        k8s:
          version: v1.35.0
      backingStore:
        etcd:
          embedded:
            enabled: true # Required for HA (or use external DB)
    EOF
    warning

    You cannot add additional control plane nodes later unless you follow these high availability configuration steps when installing the initial node.

  2. Run the installation script on the control plane node:

    Install vCluster Standalone on control plane node
    sudo su -
    export VCLUSTER_VERSION="v0.33.0"

    curl -sfL https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh | sh -s -- --vcluster-name standalone
  3. Check that the control plane node is ready.

    After installation, the kubeconfig is automatically configured on the control plane node. The kubectl context is set to interact with your new vCluster Standalone instance.

    Run these commands on the control plane node:

    Check node status
    kubectl get nodes

    Expected output:

    NAME               STATUS   ROLES                  AGE   VERSION
    ip-192-168-3-131   Ready    control-plane,master   11m   v1.32.1

    Verify cluster components are running
    Verify cluster components are running
    kubectl get pods -A

    Pods should include:

    • Flannel: CNI for container networking
    • CoreDNS: DNS service for the cluster
    • KubeProxy: Network traffic routing and load balancing
    • Konnectivity: Secure control plane to worker node communication
    • Local Path Provisioner: Dynamic storage provisioning
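Rather than eyeballing the pod list, you can block until the core components are up with a single command. This is a sketch that assumes the components listed above run in the kube-system namespace; adjust the namespace if your deployment places them elsewhere:

```shell
# Wait up to 5 minutes for all pods in kube-system to become Ready
kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=300s
```

If the command times out, inspect the pending pods with kubectl describe to find the failing component.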

Available flags to use in the install script​

The install script accepts several flags.

Flag                       Description
--vcluster-name            Name of the vCluster instance
--vcluster-version         Specific vCluster version to install
--config                   Path to the vcluster.yaml configuration file
--skip-download            Skip downloading vCluster binary (use existing)
--skip-wait                Exit without waiting for vCluster to be ready
--extra-env                Additional environment variables for vCluster
--platform-access-key      Access key for vCluster Platform integration
--platform-host            vCluster Platform host URL
--platform-insecure        Skip TLS verification for Platform connection
--platform-instance-name   Instance name in vCluster Platform
--platform-project         Project name in vCluster Platform
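For example, several of these flags can be combined in a single invocation. The values below are illustrative; substitute your own version, name, and config path:

```shell
# Install a specific version, using a pre-created config file,
# and return immediately without waiting for readiness
export VCLUSTER_VERSION="v0.33.0"
curl -sfL https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh \
  | sh -s -- \
      --vcluster-name standalone \
      --config /etc/vcluster/vcluster.yaml \
      --skip-wait
```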

Add Additional control plane nodes​

After installing the initial control plane node, vCluster Standalone is already running and new nodes only need to join the cluster.

Create token for control plane nodes​

To join control plane nodes, create a token from the vCluster that grants the required access and permissions. A single token can be reused to join any number of nodes, or you can create a separate token for each node.

By default, the token expires within 1 hour. The token is stored as a secret prefixed with bootstrap-token- in the kube-system namespace. The expiry timestamp is stored under the expiration key in the secret.

Create a token for control plane nodes
# Create a token
/var/lib/vcluster/bin/vcluster-cli token create --control-plane --expires=1h

The output provides a command to run on your control plane node:

Example output from creating a token
curl -sfLk https://<vcluster-endpoint>/node/join?token=<token> | sh -
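Since the token secret stores its expiry base64-encoded under the expiration key, you can inspect it before handing the join command to other nodes. The kubectl commands below are a sketch to run on the control plane node (the secret name is a placeholder); the decoding step itself works as shown:

```shell
# On the control plane node, list bootstrap token secrets and read the expiry:
#   kubectl get secrets -n kube-system | grep bootstrap-token-
#   kubectl get secret -n kube-system bootstrap-token-<id> \
#     -o jsonpath='{.data.expiration}' | base64 -d
#
# The expiration value decodes to an RFC 3339 timestamp (sample value, illustrative):
echo 'MjAyNS0wMS0wMVQxMjowMDowMFo=' | base64 -d
# → 2025-01-01T12:00:00Z
```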

Join each control plane node​

For each control plane node that you want to join to the vCluster, run the command on that node.

The new node will automatically download the necessary binaries and configuration, and join the cluster as an additional control plane node.

Kubeconfig​

After installation, the kubeconfig is automatically configured on the control plane node. The kubectl context is set to interact with your new vCluster Standalone instance.

To access the cluster from other machines, copy the kubeconfig from /var/lib/vcluster/kubeconfig.yaml on the control plane node or use the vCluster CLI to generate access credentials.
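Copying the kubeconfig to a workstation might look like the following sketch. The node address is a placeholder, and the server URL inside the copied kubeconfig must point at an endpoint reachable from your workstation (not a localhost address) for kubectl to connect:

```shell
# Copy the kubeconfig from the control plane node (replace <control-plane-ip>)
scp root@<control-plane-ip>:/var/lib/vcluster/kubeconfig.yaml ~/.kube/vcluster-standalone.yaml

# Point kubectl at the copied kubeconfig and verify access
export KUBECONFIG=~/.kube/vcluster-standalone.yaml
kubectl get nodes
```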

Add worker nodes​

After the vCluster control plane is up and running, you can add dedicated worker nodes.

The API server endpoint must be reachable from the worker nodes. You can additionally configure controlPlane.endpoint and controlPlane.proxy.extraSANs in your vCluster configuration to expose the API server.
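A configuration fragment exposing the API server behind a stable address might look like this sketch. The hostname is a placeholder for your load balancer or DNS name; the extra SAN ensures the API server's certificate is valid for that address:

```shell
# Sketch only: expose the API server via a stable endpoint with an extra SAN.
# api.vcluster.example.com is a placeholder for your own address.
cat <<'EOF' > /tmp/vcluster-endpoint.yaml
controlPlane:
  endpoint: api.vcluster.example.com:443
  proxy:
    extraSANs:
      - api.vcluster.example.com
EOF
grep -q "extraSANs" /tmp/vcluster-endpoint.yaml && echo "endpoint configured"
```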