Migrate the platform to a different cluster

This guide explains how to migrate the platform to a different Kubernetes cluster, which might be necessary during cluster decommissioning or cloud platform migration.

When to migrate

Migrating the platform to a new Kubernetes cluster is sometimes necessary, for example during infrastructure upgrades, cluster decommissioning, or scaling changes. Migrate the platform in the following situations:

  • The existing cluster is being decommissioned, and the platform must move to a new cluster.

  • The platform is moving to a new cluster, and the original cluster will be cleaned up afterward.

  • The platform is moving to a new cluster, and the original cluster will remain connected.

Migration considerations

warning

The platform must run on a dedicated Kubernetes cluster. It must have its own external access (using ingress or LoadBalancer) and DNS record before migration. Ensure these conditions are met before proceeding.

Review the following factors before migrating; each of these scenarios changes the migration approach.

  • Loft router configurations
    If the platform uses the Loft router (for example, example.loft.host), special DNS handling is required. The DNS records must be updated to point at the new cluster; failing to update them causes routing issues and service downtime. A sketch for recording the current DNS target follows this list.

  • Automated deployment tools
    If the platform is managed by an automated deployment tool such as ArgoCD, migration must be coordinated with GitOps workflows. Changes to cluster configuration must be reflected in the repository before deployment. Skipping this step can lead to drift between the declared and actual state of the platform.

  • Virtual cluster dependencies
    If the platform has virtual cluster instances running on the same Kubernetes host cluster, these instances do not migrate automatically. They require a separate migration process after the platform has been moved. Migrating only the platform without considering virtual clusters can cause service disruptions and dependency issues.

  • Virtual cluster agent on the destination cluster
    If the destination Kubernetes cluster has a virtual cluster agent, it must be decommissioned before migration. The agent can interfere with the new platform installation, potentially leading to scheduling conflicts and unexpected behavior.

Not addressing these considerations before migration can lead to DNS misconfigurations, deployment failures, and conflicts with existing workloads.
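
For example, you can record the platform's current DNS target and check the destination cluster for an existing agent before migrating; the hostname and kube-context names below are placeholders:

    Record the DNS target and check for an existing agent
    dig +short vcluster.example.com
    # An existing agent deployment in the destination cluster must be
    # decommissioned before installing the platform there.
    kubectl --context destination-cluster get deployments -n vcluster-platform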

Prerequisites

Base prerequisites

  • Administrator access to a Kubernetes cluster: See Accessing Clusters with kubectl for more information. Your current kube-context must have administrative privileges, which you can verify with kubectl auth can-i create clusterrole -A.

    info

    To obtain a kube-context with admin access, authenticate with credentials that have cluster-admin permissions. This typically means running your cloud provider's CLI (for example, aws eks update-kubeconfig, gcloud container clusters get-credentials, or az aks get-credentials) or selecting the right context with kubectl config use-context.

  • helm installed: Helm v3.10 or newer is required for deploying the platform. Refer to the Helm Installation Guide if you need to install it.

  • kubectl installed: Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.
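
A quick way to confirm the base prerequisites from a shell:

    Verify tooling and cluster access
    helm version --short                      # expect v3.10 or newer
    kubectl version --client                  # confirms kubectl is installed
    kubectl auth can-i create clusterrole -A  # expect "yes" with admin access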

Platform-specific prerequisites

  • Ensure at least 1 GB of available storage space for the backup.
  • Confirm that you have access to DNS management for the platform's domain.
  • Only one platform instance can run in a single Kubernetes cluster. Do not install the platform more than once in the same host cluster.
  • Plan for 15–30 minutes of downtime during the migration process. Communicate this downtime to end users in advance.
  • Use the same platform version for both the source and destination installations. Migration does not support version changes.
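
To find the version to match, check the image tag of the source platform deployment. The deployment and namespace names below assume the defaults used elsewhere in this guide; adjust them to your installation:

    Check the source platform version
    kubectl -n vcluster-platform get deployment loft \
      -o jsonpath='{.spec.template.spec.containers[0].image}'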

Migration steps

Install the platform in the new cluster

  1. Use the following configuration settings to complete the installation in the new cluster:

    • Use the Helm values from the original installation, making adjustments to values like ingressClass or storageClass as necessary.
    • If your project namespaces use the loft-p- prefix, set projectNamespacePrefix: loft-p- in your configuration.
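
    One way to capture the original values is to export them from the source cluster's Helm release; the release name loft is an assumption, so adjust it to your installation:

    Export Helm values from the source installation
    helm get values loft -n vcluster-platform > platform-values.yaml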
  2. Install the platform in the new cluster by following the quick start guide.

    Using a temporary hostname during installation

    When using cloud-managed ingress, use a temporary hostname to prevent conflicts with the running platform instance.
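
    A minimal install sketch, assuming the chart is fetched from the public Loft chart repository; treat the quick start guide as authoritative for the exact chart name and version:

    Install into the new cluster with the exported values
    # Set the temporary hostname in platform-values.yaml before installing.
    helm upgrade --install loft vcluster-platform \
      --repo https://charts.loft.sh \
      --namespace vcluster-platform --create-namespace \
      --values platform-values.yaml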

Prepare for migration

  1. Send downtime communication to end users.

  2. Optional: Stop ArgoCD sync of any virtual clusters.
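
    If Argo CD manages the virtual clusters, automated sync can be disabled per application; the application name below is hypothetical:

    Pause Argo CD automated sync
    argocd app set my-vcluster-app --sync-policy none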

  3. Perform a backup of the source platform.
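
    A sketch using the vCluster CLI's management backup command; the --filename flag is an assumption, so verify the exact command and flags for your CLI version:

    Back up the source platform
    # Confirm flags with: vcluster platform backup management --help
    vcluster platform backup management --filename backup.yaml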

Execute migration

  1. Scale down the source platform:

    Scale down the platform deployment
    # Run against the source cluster's kube-context.
    kubectl scale deployment loft --replicas=0 -n vcluster-platform
  2. Restore the backup to the new installation.
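
    If the backup is a single YAML manifest, as in the backup sketch above, a minimal restore is applying it against the new cluster; the kube-context name below is hypothetical:

    Restore the backup into the new cluster
    kubectl --context new-cluster apply -f backup.yaml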

  3. Apply the license certificate to the new cluster:

    Capture certificate data from the source cluster
    # Run these commands against the source cluster's kube-context.
    TARGET_NAMESPACE='vcluster-platform'
    CA_CERT=$(kubectl get secret loft-cert -n vcluster-platform -o jsonpath="{.data.ca\.crt}")
    TLS_CERT=$(kubectl get secret loft-cert -n vcluster-platform -o jsonpath="{.data.tls\.crt}")
    PRIVATE_KEY=$(kubectl get secret loft-cert -n vcluster-platform -o jsonpath="{.data.tls\.key}")

    cat <<EOF > loft-cert-clean.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: loft-cert
      namespace: ${TARGET_NAMESPACE}
    type: kubernetes.io/tls
    data:
      ca.crt: ${CA_CERT}
      tls.crt: ${TLS_CERT}
      tls.key: ${PRIVATE_KEY}
    EOF
    Apply certificate to the target cluster
    # Run these commands against the target cluster's kube-context.
    kubectl scale deployment loft --replicas=0 -n vcluster-platform
    kubectl delete secret loft-cert -n ${TARGET_NAMESPACE} --ignore-not-found
    kubectl create -f loft-cert-clean.yaml
    kubectl scale deployment loft --replicas=1 -n vcluster-platform
  4. Update the hostname in the platform configuration as described in the configuration guide.

  5. Update the DNS record to point to the new cluster, if needed.
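
    To find the address the record should point to, inspect the external endpoint of the new installation, then confirm resolution; the hostname below is a placeholder:

    Find the new external address and verify DNS
    kubectl get ingress -n vcluster-platform
    kubectl get svc -n vcluster-platform
    # After the DNS change propagates:
    dig +short vcluster.example.com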

  6. If you keep the original cluster as a connected cluster, connect it to the new platform installation.
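
    A sketch using the vCluster CLI, assuming a recent version that supports adding connected clusters; the cluster name below is hypothetical:

    Connect the original cluster to the new platform
    # Which cluster is added depends on your current kube-context.
    vcluster platform add cluster original-cluster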

Post-migration validation

  1. Log in to the platform UI and verify object presence:

    • Check for all expected projects.
    • Verify user access and permissions.
    • Confirm virtual cluster listings.
  2. Verify license status in the platform UI:

    • Navigate to Settings → License.
    • Confirm license is active and correct.
  3. Restart platform agents on connected clusters:

    Restart platform agent
    # Run against each connected cluster's kube-context.
    kubectl rollout restart deployment loft -n vcluster-platform
  4. Validate the core capabilities of the platform:

    • Single sign-on: Test logging in with all configured providers.
    • Log access: Verify log retrieval from multiple virtual clusters.
    • Virtual cluster creation: Create a test cluster, as sketched below.
    • Sleep mode: Test the sleep/wake feature.
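
    For example, a disposable virtual cluster confirms end-to-end provisioning; the names below are hypothetical and assume a recent vCluster CLI:

    Create and remove a test virtual cluster
    vcluster create test-vc --driver platform
    vcluster delete test-vc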
  5. Optional: Restart ArgoCD sync if previously stopped.

  6. Send end-of-downtime communication to end users.