Migrate the platform to a different cluster

This guide explains how to migrate the platform to a different Kubernetes cluster, which may be necessary during cluster decommissioning or cloud platform migration.

When to migrate

Migrating the platform to a new Kubernetes cluster is typically driven by infrastructure upgrades, cluster decommissioning, or scaling needs. Migrate the platform in the following situations:

  • The existing cluster is being decommissioned and the platform must move to a new cluster.

  • The platform is migrating to a new cluster and the original cluster will be cleaned up afterward.

  • The platform is migrating to a new cluster and the original cluster will remain connected.

Migration considerations

warning

The platform must run on a dedicated Kubernetes cluster and must have its own ingress and DNS record before migration. Ensure these conditions are met before proceeding.

Review the following factors before migrating; each scenario requires a different migration approach.

  • Loft router configurations: If the platform uses the Loft router (<something.loft.host>), special DNS handling is required. The DNS records must be updated to reflect the new cluster. Failure to update DNS settings can cause routing issues and service downtime.

  • Automated deployment tools: If the platform is managed by an automated deployment tool such as ArgoCD, migration must be coordinated with GitOps workflows. Changes to cluster configuration must be reflected in the repository before deployment. Skipping this step can lead to drift between the declared and actual state of the platform.

  • Virtual cluster dependencies: If the platform has virtual cluster instances running on the same Kubernetes host cluster, these instances do not migrate automatically. They require a separate migration process after the platform has been moved. Migrating only the platform without considering virtual clusters can cause service disruptions and dependency issues.

  • Virtual cluster agent on the destination cluster: If the destination Kubernetes cluster has a virtual cluster agent, it must be decommissioned before migration. The agent can interfere with the new platform installation, potentially leading to scheduling conflicts and unexpected behavior.

Not addressing these considerations before migration can lead to DNS misconfigurations, deployment failures, and conflicts with existing workloads.
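
Before installing anything, it can help to confirm whether the destination cluster already runs a platform agent. A minimal check, assuming the agent uses the same deployment name loft and namespace vcluster-platform referenced elsewhere in this guide:

    Check the destination cluster for an existing agent
    # With kubectl pointed at the destination cluster, look for an agent deployment
    kubectl get deployment loft -n vcluster-platform --ignore-not-found

If this returns a deployment, decommission the agent before proceeding.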

Prerequisites

Base prerequisites

  • Administrator access to a Kubernetes cluster: See Accessing Clusters with kubectl for more information. Your current kube-context must have administrative privileges, which you can verify with kubectl auth can-i create clusterrole -A.

    info

    To obtain a kube-context with admin access, ensure you have the necessary credentials and permissions for your Kubernetes cluster. This typically involves using kubectl config commands or authenticating through your cloud provider's CLI tools.
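
    For example, on managed Kubernetes services the provider CLI can write an admin kube-context for you. The cluster names and regions below are placeholders:

    Obtain a kube-context (examples)
    # Amazon EKS (replace the placeholder cluster name and region)
    aws eks update-kubeconfig --name my-cluster --region us-east-1

    # Google Kubernetes Engine
    gcloud container clusters get-credentials my-cluster --region us-central1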

  • helm installed: Helm v3.10 or later is required for deploying the platform. Refer to the Helm Installation Guide if you need to install it.

  • kubectl installed: Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.
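
A quick way to confirm both tools are installed and to check their versions:

    Verify tool versions
    helm version --short
    kubectl version --client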

Platform-specific prerequisites

  • Storage space for backup (approximately 1 GB).
  • Access to DNS management for the platform's domain.
  • Only one platform instance can exist per Kubernetes cluster. Do not install the platform twice in the same Kubernetes host cluster.
  • The migration process requires 15 to 30 minutes of downtime. Communicate this downtime to end users.
  • The source and destination installations must use the same platform version. Version changes during migration are not supported (a version check sketch follows this list).
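
To confirm the version requirement, compare the deployed chart version on the source cluster before installing the new one. A minimal check, assuming the release name loft and the vcluster-platform namespace used elsewhere in this guide:

    Check the installed platform version
    helm list -n vcluster-platform --filter '^loft$'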

Migration steps

    Install the platform in the new cluster

  1. Configuration

    • Use the Helm values from the original installation, adjusting values such as ingressClass or storageClass as necessary (see the sketch after these steps).
    • If your project namespaces have the "loft-p-" prefix, set projectNamespacePrefix: loft-p- in your configuration.
  2. Installation

    Install the platform in the new cluster following the quick start guide.

    Avoid DNS conflicts

    When using cloud-managed ingress, use a temporary hostname to prevent conflicts with the running platform instance.
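
A minimal sketch of the two steps above, assuming the release name loft, the vcluster-platform chart from https://charts.loft.sh, and placeholder values; adjust these to match your installation and the quick start guide:

    Install the platform in the new cluster (sketch)
    # On the source cluster: export the Helm values of the original installation
    helm get values loft -n vcluster-platform > values.yaml

    # Adjust values.yaml as needed (ingressClass, storageClass, temporary hostname),
    # then install into the destination cluster with the adjusted values
    helm upgrade --install loft vcluster-platform \
      --repo https://charts.loft.sh \
      --namespace vcluster-platform \
      --create-namespace \
      --values values.yaml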

    Prepare for migration

  1. Send downtime communication to end users.

  2. Optional: stop ArgoCD sync of any virtual clusters.

  3. Perform a backup of the source platform.
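
The backup itself is step 3; as a hedged illustration, assuming your backup procedure leaves you with a single backup.yaml manifest, verify it exists and keep a dated copy off the cluster before continuing:

    Safeguard the backup (sketch)
    # Assumption: the backup was written to backup.yaml in the working directory
    test -s backup.yaml && echo "backup present" || echo "backup missing or empty"
    cp backup.yaml "backup-$(date +%Y%m%d).yaml"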

    Execute migration

  1. Scale down the source platform:

    Scale down the platform deployment
    kubectl scale deployment loft --replicas=0 -n vcluster-platform
  2. Restore the backup to the new installation (see the sketch after these steps).

  3. Apply the license certificate to the new cluster:

    Export the license certificate
    TARGET_NAMESPACE='vcluster-platform'

    # Read the certificate data from the existing loft-cert secret
    CA_CERT=$(kubectl get secret loft-cert -n ${TARGET_NAMESPACE} -o jsonpath="{.data.ca\.crt}")
    TLS_CERT=$(kubectl get secret loft-cert -n ${TARGET_NAMESPACE} -o jsonpath="{.data.tls\.crt}")
    PRIVATE_KEY=$(kubectl get secret loft-cert -n ${TARGET_NAMESPACE} -o jsonpath="{.data.tls\.key}")

    # Write a clean Secret manifest containing only the certificate data
    cat <<EOF > loft-cert-clean.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: loft-cert
      namespace: ${TARGET_NAMESPACE}
    type: kubernetes.io/tls
    data:
      ca.crt: ${CA_CERT}
      tls.crt: ${TLS_CERT}
      tls.key: ${PRIVATE_KEY}
    EOF

    Apply license certificate
    # Scale down the platform before replacing its certificate secret
    kubectl scale deployment loft --replicas=0 -n ${TARGET_NAMESPACE}
    kubectl delete secret loft-cert -n ${TARGET_NAMESPACE} --ignore-not-found
    kubectl create -f loft-cert-clean.yaml
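    # Assumption, not shown above: scale the platform back up once the clean
    # certificate is in place (adjust the replica count to your installation)
    kubectl scale deployment loft --replicas=1 -n ${TARGET_NAMESPACE}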
  4. Update the hostname in the platform configuration as described in the configuration guide.

  5. Update the DNS record to point to the new cluster if needed (a verification sketch follows these steps).

  6. If the original cluster will be used as a connected cluster, connect it to the new platform installation.
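
A hedged sketch of the restore and DNS checks above, assuming the backup is a single backup.yaml manifest and that platform.example.com stands in for your real hostname:

    Restore and verify DNS (sketch)
    # Restore: apply the backup with kubectl pointed at the new cluster
    kubectl apply -f backup.yaml

    # DNS: confirm the record now resolves to the new cluster's ingress
    dig +short platform.example.com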

Post-migration validation

  1. Log in to the platform UI and verify that the expected objects are present:

    • Check for all expected projects
    • Verify user access and permissions
    • Confirm virtual cluster listings
  2. Verify license status in the platform UI:

    • Navigate to Settings → License
    • Confirm license is active and correct
  3. Restart platform agents on connected clusters:

    Restart platform agents
    kubectl rollout restart deployment loft -n vcluster-platform
  4. Validate the core capabilities of the platform:

    • Single Sign-on: test login with all configured providers
    • Log access: verify log retrieval from multiple virtual clusters
    • Virtual cluster creation: create a test cluster
    • Sleep mode: test sleep/wake feature
  5. Optional: restart ArgoCD sync if previously stopped

  6. Send end-of-downtime communication to end users
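
As a final smoke check, a few read-only commands can confirm the platform is healthy after the migration. A minimal sketch, assuming the deployment name loft and namespace vcluster-platform used throughout this guide:

    Post-migration smoke check
    # Wait for the platform deployment to become ready
    kubectl rollout status deployment loft -n vcluster-platform --timeout=5m

    # Confirm all platform pods are running
    kubectl get pods -n vcluster-platform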