Deploy in high availability
Overview
By default, vCluster Standalone is installed on a single initial control plane node. This deployment method is suitable for ephemeral use cases (e.g. dev environments, CI/CD), but for production it's recommended to run vCluster with more redundancy. Deploy vCluster Standalone with multiple control plane nodes (i.e. in high availability (HA)) to make the virtual cluster more resilient to failures.
Control plane nodes are added to the cluster one at a time, starting with an initial control plane node.
Predeployment configuration options
Backing store must be embedded etcd or external database
When running vCluster Standalone in HA, the backing store must be either embedded etcd or an external database, and this must be explicitly enabled on the initial node.
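As a sketch, the two backing store options look like this in vcluster.yaml (the external database connection string is a placeholder, not a real endpoint; adjust it for your database):

```yaml
controlPlane:
  backingStore:
    # Option A: embedded etcd
    etcd:
      embedded:
        enabled: true
    # Option B: external database (use instead of embedded etcd)
    # database:
    #   external:
    #     enabled: true
    #     dataSource: mysql://user:password@tcp(db.example.com:3306)/vcluster # placeholder
```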
Control plane node roles
Decide whether the control plane node will also act as a worker node.
Worker nodes
With vCluster Standalone, worker nodes can only be private nodes. Since there is no host cluster, there is no concept of host nodes.
Prerequisites
- Access to nodes that satisfy the node requirements
Install initial control plane node
Managed by vCluster Platform
When managing a standalone HA cluster through vCluster Platform, the initial control plane node is provisioned through the Platform using Auto Nodes.
Use vCluster Platform to:
- Add a Node Provider.
- Add the vCluster configuration (example below).
```yaml
controlPlane:
  standalone:
    enabled: true
    autoNodes:
      provider: aws # Node provider you want to use
      quantity: 3 # Number of nodes
  distro:
    k8s:
      image:
        tag: v1.35.0 # Kubernetes version you want to use
  backingStore:
    etcd:
      embedded:
        enabled: true # Required for HA (or use external DB)
# Worker nodes
privateNodes:
  enabled: true
  autoNodes: # (optional) Add worker nodes with Auto Nodes
    provider: aws
    dynamic:
      - name: aws-pool-1
# Networking configuration
networking:
  # Specify the pod CIDR
  podCIDR: 10.64.0.0/16
  # Specify the service CIDR
  serviceCIDR: 10.128.0.0/16
```
- Provision the cluster from the platform UI.
After provisioning completes, vCluster Platform manages the control plane node lifecycle. Worker node lifecycle also remains managed through the platform UI when Auto Nodes are used.
Access your cluster
To access a standalone cluster managed by vCluster Platform, open the vCluster in the Platform UI and click Connect.
As an alternative, use the `vcluster platform connect vcluster` command.
Self-managed
When deploying vCluster Standalone outside vCluster Platform, the assets required to install the control plane are located in the GitHub releases of vCluster.
All steps are performed on the initial control plane node.
Create a directory for storing the vCluster Standalone configuration:

```bash
mkdir -p /etc/vcluster
```

Save a vcluster.yaml configuration file for vCluster Standalone on the control plane node. Create a vcluster.yaml that enables HA for vCluster Standalone:

```bash
cat <<EOF > /etc/vcluster/vcluster.yaml
controlPlane:
  distro:
    k8s:
      version: v1.35.0
  backingStore:
    etcd:
      embedded:
        enabled: true # Required for HA (or use external DB)
EOF
```

Warning: Adding additional control plane nodes is not supported unless you follow these high availability configuration steps.
Run the installation script on the control plane node:

```bash
# Become root first so the exported variable is visible to the install command
sudo su -
export VCLUSTER_VERSION="v0.33.0"
curl -sfL https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh | sh -s -- --vcluster-name standalone
```

Check that the control plane node is ready.
After installation, the kubeconfig is automatically configured on the control plane node. The kubectl context is set to interact with your new vCluster Standalone instance.
Run these commands on the control plane node:
Check node status:

```bash
kubectl get nodes
```

Expected output:

```
NAME               STATUS   ROLES                  AGE   VERSION
ip-192-168-3-131   Ready    control-plane,master   11m   v1.32.1
```

Verify cluster components are running:

```bash
kubectl get pods -A
```

Pods should include:
- Flannel: CNI for container networking
- CoreDNS: DNS service for the cluster
- KubeProxy: Network traffic routing and load balancing
- Konnectivity: Secure control plane to worker node communication
- Local Path Provisioner: Dynamic storage provisioning
Available flags to use in the install script
Several flags can be passed to the install script.
| Flag | Description |
|---|---|
| `--vcluster-name` | Name of the vCluster instance |
| `--vcluster-version` | Specific vCluster version to install |
| `--config` | Path to the vcluster.yaml configuration file |
| `--skip-download` | Skip downloading the vCluster binary (use existing) |
| `--skip-wait` | Exit without waiting for vCluster to be ready |
| `--extra-env` | Additional environment variables for vCluster |
| `--platform-access-key` | Access key for vCluster Platform integration |
| `--platform-host` | vCluster Platform host URL |
| `--platform-insecure` | Skip TLS verification for the Platform connection |
| `--platform-instance-name` | Instance name in vCluster Platform |
| `--platform-project` | Project name in vCluster Platform |
Add additional control plane nodes
After installing the initial control plane node, vCluster Standalone is already running and new nodes only need to join the cluster.
Create token for control plane nodes
To join control plane nodes, a token must be created from the vCluster to provide access and permissions. A single token can be used to join any number of nodes, or you can create a separate token for each node.
By default, the token expires within 1 hour. The token is stored as a secret prefixed with bootstrap-token- in the kube-system namespace. The expiry timestamp is stored under the expiration key in the secret.
```bash
# Create a token
/var/lib/vcluster/bin/vcluster-cli token create --control-plane --expires=1h
```
The output provides a command to run on your control plane node:
```bash
curl -sfLk https://<vcluster-endpoint>/node/join?token=<token> | sh -
```
Join each control plane node
For each control plane node that you want to join to the vCluster, run the command on that node.
The new node will automatically download the necessary binaries and configuration, and join the cluster as an additional control plane node.
Kubeconfig
After installation, the kubeconfig is automatically configured on the control plane node. The kubectl context is set to interact with your new vCluster Standalone instance.
To access the cluster from other machines, copy the kubeconfig from /var/lib/vcluster/kubeconfig.yaml on the control plane node or use the vCluster CLI to generate access credentials.
Add worker nodes
After the vCluster control plane is up and running, you can add dedicated worker nodes.
The API Server endpoint must be reachable from the worker nodes. You can additionally configure `controlPlane.endpoint` and `controlPlane.proxy.extraSANs` in your vCluster configuration to expose the API Server.
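As a sketch, assuming the control plane is exposed behind a load balancer at vcluster.example.com (a placeholder hostname), the configuration might look like:

```yaml
controlPlane:
  # Placeholder endpoint; replace with the address worker nodes use to reach the API server
  endpoint: vcluster.example.com:443
  proxy:
    extraSANs:
      # Extra subject alternative name for the API server's serving certificate
      - vcluster.example.com
```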