Migrate the platform to a different cluster
This guide explains how to migrate the platform to a different Kubernetes cluster, which may be necessary during cluster decommissioning or cloud platform migration.
When to migrate

Migrating a platform to a new Kubernetes cluster is necessary in certain cases, such as infrastructure upgrades, cluster decommissioning, or scaling needs. Migrate the platform in the following situations:
- The existing cluster is being decommissioned, and the platform must move to a new cluster.
- The platform is migrating to a new cluster, and the original cluster is going to be cleaned up.
- The platform is migrating to a new cluster, and the original cluster is going to remain connected.
Migration considerations

IMPORTANT: The platform must run on a dedicated Kubernetes cluster and must have its own ingress and DNS record before migration. Ensure these conditions are met before proceeding.

Review the following factors before migrating, as each of these scenarios requires a different migration approach:
- Loft router configurations: If the platform uses the Loft router (<something.loft.host>), special DNS handling is required. The DNS records must be updated to reflect the new cluster. Failure to update DNS settings can cause routing issues and service downtime.
- Automated deployment tools: If the platform is managed by an automated deployment tool such as ArgoCD, migration must be coordinated with GitOps workflows. Changes to cluster configuration must be reflected in the repository before deployment. Skipping this step can lead to drift between the declared and actual state of the platform.
- Virtual cluster dependencies: If the platform has virtual cluster instances running on the same Kubernetes host cluster, these instances do not migrate automatically. They require a separate migration process after the platform has been moved. Migrating only the platform without considering virtual clusters can cause service disruptions and dependency issues.
- Virtual cluster agent on the destination cluster: If the destination Kubernetes cluster has a virtual cluster agent, it must be decommissioned before migration (see the check after this list). The agent can interfere with the new platform installation, potentially leading to scheduling conflicts and unexpected behavior.
Not addressing these considerations before migration can lead to DNS misconfigurations, deployment failures, and conflicts with existing workloads.
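Before proceeding, it can help to check the destination cluster for an existing agent. The sketch below lists Helm releases; release and namespace names vary by install, so the grep patterns and the uninstall command are assumptions, not exact names.

```bash
# List Helm releases on the destination cluster and look for a platform agent.
# Release names differ between installs; these patterns are common defaults.
helm list --all-namespaces | grep -i -e loft -e vcluster

# If an agent release shows up, uninstall it before migrating, for example:
# helm uninstall <agent-release-name> -n <agent-namespace>
```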
Prerequisites

Base prerequisites

- Administrator access to a Kubernetes cluster: See Accessing Clusters with kubectl for more information. Your current kube-context must have administrative privileges, which you can verify with `kubectl auth can-i create clusterrole -A`.
  Info: To obtain a kube-context with admin access, ensure you have the necessary credentials and permissions for your Kubernetes cluster. This typically involves using `kubectl config` commands or authenticating through your cloud provider's CLI tools.
- `helm` installed: Helm v3.10 is required for deploying the platform. Refer to the Helm Installation Guide if you need to install it.
- `kubectl` installed: The Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.
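As a quick sanity check of the base prerequisites, the tool versions and cluster permissions can be verified up front; this is a minimal sketch using standard kubectl and Helm flags.

```bash
# Confirm Helm meets the v3.10 requirement and kubectl is installed.
helm version --short
kubectl version --client

# Confirm the current kube-context has administrative privileges.
kubectl auth can-i create clusterrole -A
```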
Platform-specific prerequisites

- Storage space for backups (approximately 1 GB).
- Access to DNS management for the platform's domain.
- Only one platform instance can exist per Kubernetes cluster. Do not install the platform twice in the same Kubernetes host cluster.
- The migration process requires 15 to 30 minutes of downtime. Communicate this downtime to end users.
- The source and destination installations must use the same platform version. Version changes during migration are not supported.
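Because the source and destination must run the same platform version, it is worth recording the source version before installing. A minimal sketch, assuming the release is named `loft` in the `vcluster-platform` namespace, matching the commands used later in this guide:

```bash
# Record the chart and app version of the source installation; install the
# same platform version in the destination cluster.
helm list -n vcluster-platform
```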
Migration steps

Install the platform in the new cluster

Configuration

- Use the Helm values from the original installation, making adjustments to values like `ingressClass` or `storageClass` as necessary (see the sketch after this list).
- If your project namespaces have the "loft-p-" prefix, set `projectNamespacePrefix: loft-p-` in your configuration.
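One way to start from the original values is to export them from the source release and edit the copy. A sketch, again assuming the `loft` release name and `vcluster-platform` namespace:

```bash
# Export the Helm values from the source installation, then edit the copy
# (for example ingressClass, storageClass, or projectNamespacePrefix) before
# installing into the new cluster.
helm get values loft -n vcluster-platform -o yaml > original-values.yaml
```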
Installation

Install the platform in the new cluster following the quick start guide.

Avoid DNS conflicts: When using cloud-managed ingress, use a temporary hostname to prevent conflicts with the running platform instance.
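The quick start guide remains the authoritative reference for the install command; the sketch below only illustrates its general shape. The chart name and repository here are assumptions, and the temporary hostname should be set in the edited values file.

```bash
# Sketch only: install the platform into the new cluster with the edited
# values, keeping a temporary hostname until DNS is switched over.
# Chart and repo names are assumptions; follow the quick start guide.
helm upgrade --install loft vcluster-platform \
  --repo https://charts.loft.sh \
  --namespace vcluster-platform \
  --create-namespace \
  -f original-values.yaml
```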
Prepare for migration

1. Send downtime communication to end users.
2. Optional: stop ArgoCD sync of any virtual clusters.
3. Scale down the source platform:

```bash
# Scale down the platform deployment in the source cluster.
kubectl scale deployment loft --replicas=0 -n vcluster-platform
```
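Before migrating, it can be confirmed that the source platform pods have actually terminated; the label selector in this sketch is an assumption based on default chart labels.

```bash
# Confirm no platform pods remain running in the source cluster.
# The app=loft label is an assumption; adjust to match your install.
kubectl get pods -n vcluster-platform -l app=loft
```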
Execute migration

1. Apply the license certificate to the new cluster:

```bash
# Gather the certificate data from the source installation.
TARGET_NAMESPACE='vcluster-platform'
CA_CERT=$(kubectl get secret loft-cert -n vcluster-platform -o jsonpath="{.data.ca\.crt}")
TLS_CERT=$(kubectl get secret loft-cert -n vcluster-platform -o jsonpath="{.data.tls\.crt}")
PRIVATE_KEY=$(kubectl get secret loft-cert -n vcluster-platform -o jsonpath="{.data.tls\.key}")

# Write a clean Secret manifest for the target cluster.
cat <<EOF > loft-cert-clean.yaml
apiVersion: v1
kind: Secret
metadata:
  name: loft-cert
  namespace: ${TARGET_NAMESPACE}
type: kubernetes.io/tls
data:
  ca.crt: ${CA_CERT}
  tls.crt: ${TLS_CERT}
  tls.key: ${PRIVATE_KEY}
EOF
```

```bash
# Apply the license certificate in the new cluster.
kubectl scale deployment loft --replicas=0 -n vcluster-platform
kubectl delete secret loft-cert -n ${TARGET_NAMESPACE} --ignore-not-found
kubectl create -f loft-cert-clean.yaml
```

2. Update the hostname in the platform configuration as described in the configuration guide.
3. Update the DNS record to point to the new cluster if needed (a verification sketch follows this list).
4. If the original cluster is going to be used as a connected cluster, connect it to the new platform installation.
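To verify the DNS switch, the record can be compared against the ingress address in the new cluster. A minimal sketch, with platform.example.com standing in for your actual platform hostname:

```bash
# Check what the platform hostname resolves to (placeholder hostname)...
dig +short platform.example.com

# ...and compare it with the ingress address in the new cluster.
kubectl get ingress -n vcluster-platform
```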
Post-migration validation
Log in to the platform UI and verify that the expected objects are present:
- Check for all expected projects
- Verify user access and permissions
- Confirm virtual cluster listings
Verify license status in the platform UI:
- Navigate to Settings → License
- Confirm license is active and correct
Restart platform agents on connected clusters:

```bash
# Restart the platform agents.
kubectl rollout restart deployment loft -n vcluster-platform
```
Validate the core capabilities of the platform:
- Single Sign-on: test login with all configured providers
- Log access: verify log retrieval from multiple virtual clusters
- Virtual cluster creation: create a test cluster (see the smoke-test sketch after this list)
- Sleep mode: test sleep/wake feature
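For the virtual cluster creation check, a short smoke test with the vcluster CLI can work. This sketch assumes the CLI is installed and uses a placeholder hostname and a throwaway cluster name:

```bash
# Log in to the new platform instance (placeholder hostname).
vcluster login https://platform.example.com

# Create a throwaway virtual cluster; the CLI connects to it automatically.
vcluster create migration-smoke-test
kubectl get namespaces    # runs against the virtual cluster

# Clean up after the check passes.
vcluster disconnect
vcluster delete migration-smoke-test
```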
Optional: restart ArgoCD sync if previously stopped.

Send end-of-downtime communication to end users.