Manage vCluster Standalone Control Plane Nodes
This feature is only available when running the control plane as a binary with vCluster Standalone, which uses private nodes.

Update vCluster Standalone configuration
Changes to the `vcluster.yaml` configuration must be made on all control plane nodes.

1. Modify the `vcluster.yaml`, which is located at `/etc/vcluster/vcluster.yaml`. Edit and save your changes.
2. Restart the vCluster systemd service to apply your changes:

```bash
systemctl restart vcluster.service
```
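Since every control plane node needs the same file, a small loop can apply the edit everywhere. A minimal sketch, assuming SSH access as root and hypothetical hostnames `cp-1 cp-2 cp-3`:

```bash
# Hypothetical node names; replace with your control plane hosts.
for node in cp-1 cp-2 cp-3; do
  # Copy the edited file to each node, then restart the service there.
  scp /etc/vcluster/vcluster.yaml "root@${node}:/etc/vcluster/vcluster.yaml"
  ssh "root@${node}" "systemctl restart vcluster.service"
done
```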
Removing nodes
If you have not enabled joining the control plane nodes as worker nodes, removing vCluster only requires stopping the service and removing files from the nodes. For HA clusters, there are additional steps, detailed below.
For control plane nodes that are joined to the cluster (`controlPlane.standalone.joinNode.enabled: true`), first follow this process's worker node removal and node cleanup steps.
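If you are unsure whether a node was joined this way, one option is to check the setting directly in the config file. A minimal sketch, assuming the default config path from above:

```bash
# Print the joinNode block (if present) from the standalone config.
grep -n -A 1 "joinNode" /etc/vcluster/vcluster.yaml
```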
Single Control Plane Node
1. Stop the vCluster service on the control plane node:

```bash
systemctl stop vcluster.service
```

2. Remove the vCluster files on the control plane node:

```bash
rm -rf /var/lib/vcluster && rm /etc/systemd/system/vcluster.service
```
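Because this deletes the unit file by hand, you may also want systemd to forget it. An optional sketch:

```bash
# Reload systemd so the removed unit file is no longer tracked,
# then confirm the service is gone.
systemctl daemon-reload
systemctl status vcluster.service   # expected: "Unit vcluster.service could not be found."
```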
Multiple Control Plane Nodes (HA)

1. Save the IP address of the node you want to remove; you will need it later. There are multiple ways to find it, but if you're unsure, you can get it from the vCluster logs:

```bash
journalctl -u vcluster.service | grep etcd | grep "Adding peer"
```

```
INFO etcd/add.go:48 Adding peer https://10.244.0.28:2380
```
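If you prefer to capture just the peer URL, something like the following works against the log format shown above (a sketch; adjust if your log lines differ):

```bash
# Grab the peer URL from the most recent "Adding peer" log entry.
journalctl -u vcluster.service | grep etcd | grep "Adding peer" \
  | tail -n 1 | grep -oE 'https://[0-9.]+:2380'
```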
2. Stop the vCluster service on the control plane node:

```bash
systemctl stop vcluster.service
```
3. Remove the vCluster files on the control plane node:

```bash
rm -rf /var/lib/vcluster && rm /etc/systemd/system/vcluster.service
```
4. On a different node, list the existing etcd cluster members:

```bash
/var/lib/vcluster/bin/etcdctl --endpoints=127.0.0.1:2379 \
  --cert=/var/lib/vcluster/pki/apiserver-etcd-client.crt \
  --key=/var/lib/vcluster/pki/apiserver-etcd-client.key \
  --cacert=/var/lib/vcluster/pki/etcd/ca.crt \
  member list
```
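etcdctl can also render the member list as a table (`-w table` is a standard etcdctl output flag), which can make the peer IDs easier to scan:

```bash
# Same query as above, formatted as a table.
/var/lib/vcluster/bin/etcdctl --endpoints=127.0.0.1:2379 \
  --cert=/var/lib/vcluster/pki/apiserver-etcd-client.crt \
  --key=/var/lib/vcluster/pki/apiserver-etcd-client.key \
  --cacert=/var/lib/vcluster/pki/etcd/ca.crt \
  -w table member list
```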
Find the peer ID (first column) in the output that matches the IP of the removed node saved earlier (`https://10.244.0.28:2380`):

```
8f2bcb8fdb98bd92, started, 10.244.0.26, https://10.244.0.26:2380, https://10.244.0.26:2379, false
d12398ad344dc8cf, started, 10.244.0.29, https://10.244.0.29:2380, https://10.244.0.29:2379, false
ee33f76261ecb3b3, started, 10.244.0.28, https://10.244.0.28:2380, https://10.244.0.28:2379, false
```

Here, the matching peer ID is `ee33f76261ecb3b3`.
5. Remove the etcd peer:

```bash
/var/lib/vcluster/bin/etcdctl --endpoints=127.0.0.1:2379 \
  --cert=/var/lib/vcluster/pki/apiserver-etcd-client.crt \
  --key=/var/lib/vcluster/pki/apiserver-etcd-client.key \
  --cacert=/var/lib/vcluster/pki/etcd/ca.crt \
  member remove ee33f76261ecb3b3
```
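The repeated TLS flags can be collected in a variable, and the lookup itself can be scripted. A sketch, assuming the peer URL saved from the logs earlier and the same certificate paths as above:

```bash
#!/usr/bin/env bash
ETCDCTL=/var/lib/vcluster/bin/etcdctl
FLAGS=(
  --endpoints=127.0.0.1:2379
  --cert=/var/lib/vcluster/pki/apiserver-etcd-client.crt
  --key=/var/lib/vcluster/pki/apiserver-etcd-client.key
  --cacert=/var/lib/vcluster/pki/etcd/ca.crt
)
PEER_URL="https://10.244.0.28:2380"   # saved from the vCluster logs earlier

# First column of `member list` is the member ID; match on the peer URL.
MEMBER_ID=$("$ETCDCTL" "${FLAGS[@]}" member list | grep "$PEER_URL" | cut -d',' -f1)
"$ETCDCTL" "${FLAGS[@]}" member remove "$MEMBER_ID"

# Re-list members to confirm the peer is gone.
"$ETCDCTL" "${FLAGS[@]}" member list
```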
Reusing nodes
If you want to reuse a node that was already running vCluster Standalone, you can do so by resetting the node and running the install script again.
Control Plane to Control Plane
1. To reset the node, run the install script with the `--reset-only` flag:

```bash
curl -sfL https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh | sh -s -- --reset-only
```

You should see the following output:

```
🧹 Resetting vCluster installation...
✅ Reset complete.
```

2. Run the install script again:

```bash
curl -sfL https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh | sh -s -- --vcluster-name my-new-vcluster --config ${PWD}/vcluster.yaml
```
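The commands above assume `VCLUSTER_VERSION` is set in your shell. For example (the version shown is purely illustrative; use the release you are actually installing):

```bash
# Illustrative version; pick the vCluster release you want from the releases page.
export VCLUSTER_VERSION=v0.29.0
```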
Control Plane to Worker Node

1. To reset the node, run the install script with the `--reset-only` flag:

```bash
curl -sfL https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh | sh -s -- --reset-only
```

You should see the following output:

```
🧹 Resetting vCluster installation...
✅ Reset complete.
```

2. From the primary node for the virtual cluster, create a node join token:

```bash
vcluster token create
```

You should see output similar to:

```
curl -fsSLk "https://10.244.0.16:8443/node/join?token=hzndlx.vhf08ob7pg97r2hb" | sh -
```
3. Back on your node, add the `--force-join` flag to the join command:

```bash
curl -fsSLk "https://10.244.0.16:8443/node/join?token=hzndlx.vhf08ob7pg97r2hb" | sh -s -- --force-join
```
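If you would rather not pipe straight into `sh`, you can download the join script first, review it, and then run it with the same flag (an optional pattern, not required by vCluster):

```bash
# Download the join script, inspect it, then run it with --force-join.
curl -fsSLk "https://10.244.0.16:8443/node/join?token=hzndlx.vhf08ob7pg97r2hb" -o join.sh
less join.sh
sh join.sh --force-join
```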
Run the join command on the node you want to join. You should see output similar to this:

```
Detected OS: ubuntu
Preparing node for Kubernetes installation...
Kubernetes version: v1.33.5
Installing Kubernetes binaries...
Downloading Kubernetes binaries from https://github.com/loft-sh/kubernetes/releases/download...
Resetting node...
Ensuring kubelet is stopped...
kubelet service not found
Starting containerd...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
Starting kubelet...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
Installation successful!
Joining node into cluster...
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.003203387s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```

Your node should now be joined to the cluster.
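As the join output suggests, you can confirm the node registered from the control plane:

```bash
# Run on the control plane; the newly joined node should appear in the list.
kubectl get nodes -o wide
```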
Worker Node to Control Plane

1. Follow the steps in the Worker Nodes documentation to reset the node.
2. Proceed with installing vCluster Standalone:

```bash
curl -sfL https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh | sh -s -- --vcluster-name my-new-vcluster --config ${PWD}/vcluster.yaml
```
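After reinstalling, a quick way to confirm the control plane came up is to check the service and follow its logs (same unit name used throughout this page):

```bash
# Confirm the vCluster service is active and watch it start up.
systemctl status vcluster.service
journalctl -u vcluster.service -f
```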