Snapshot and Restore

There are multiple ways to back up and restore a virtual cluster. vCluster provides a built-in method to create and restore snapshots using its CLI.

warning

If you use an external database, such as MySQL or PostgreSQL, that does not run in the same namespace as vCluster, you must create a separate backup for the datastore. For more information, refer to the relevant database documentation.

Create a snapshot

info

This method requires vCluster version v0.24.0 or higher.

We recommend using the vCluster CLI to back up the etcd datastore. When you run a backup, vCluster creates a temporary pod to save the snapshot at the specified location and automatically determines the configured backing store. The snapshot includes:

  • Backing store data (for example, etcd or SQLite)
  • vCluster Helm release information
  • vCluster configuration (for example, vcluster.yaml)

By default, the snapshot command creates a new pod that uses the vCluster image to back up the backing store. To run the snapshot process inside an existing pod using kubectl exec instead, add the --pod-exec flag, as shown below.
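
For example, the following command takes the same container snapshot described later on this page, but runs the snapshot process inside the existing vCluster pod:

Take a snapshot using an existing pod
# Run the snapshot process inside the running vCluster pod instead of a temporary pod.
vcluster snapshot my-vcluster "container:///data/my-snapshot.tar.gz" --pod-exec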

info

The vCluster CLI backup method currently does not support backing up persistent volumes. To back up persistent volumes, use the Velero backup method.

Snapshot URL

vCluster uses a snapshot URL to save the snapshot to a specific location. The snapshot URL contains the following information:

Parameter        | Description                                | Example
Protocol         | Defines the storage type for the snapshot  | oci, s3, container
Storage location | Specifies where to save the snapshot       | oci://ghcr.io/my-user/my-repo:my-tag, s3://my-s3-bucket/my-snapshot-key, container:///data/my-snapshot.tar.gz
Optional flags   | Additional options for snapshot storage    | skip-client-credentials=true

Supported protocols

The following protocols are supported for storing snapshots:

  • oci – Stores snapshots in an OCI image registry, such as Docker Hub or GHCR.
  • s3 – Saves snapshots to an S3-compatible bucket, such as AWS S3 or MinIO.
  • container – Stores snapshots as a local file inside a vCluster container or another persistent volume claim (PVC).

For example, the following snapshot URL saves the snapshot to an OCI image registry:

Snapshot and push to an OCI image registry
vcluster snapshot my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag"

Store snapshots in OCI image registries

You can save snapshots to OCI image registries and authenticate in two ways: with locally stored OCI credentials, or by passing credentials directly in the snapshot URL.

To authenticate with local credentials, log in to your OCI registry and create the snapshot:

Authenticate with local OCI credentials and create a snapshot
# Log in to the OCI registry using a password access token.
echo $PASSWORD_ACCESS_TOKEN | docker login ghcr.io -u $USERNAME --password-stdin

# Create a snapshot and push it to an OCI image registry.
vcluster snapshot my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag"

Alternatively, you can pass authentication credentials directly in the snapshot URL. The following parameters configure authentication when you pass credentials in the URL:

Parameter               | Description                                                       | Required
username                | Username for authenticating with the OCI registry                 | Yes, when not using local credentials
password                | Base64-encoded password for authenticating with the OCI registry  | Yes, when not using local credentials
skip-client-credentials | When set to true, ignores local Docker credentials                | No, defaults to false

Pass OCI credentials directly in the snapshot URL
# Pass authentication credentials directly in the URL and create a snapshot.
export OCI_USERNAME=my-username
export OCI_PASSWORD=$(echo -n "my-password" | base64)
vcluster snapshot my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag?username=$OCI_USERNAME&password=$OCI_PASSWORD&skip-client-credentials=true"

Store snapshots in S3 buckets

Store snapshots in an S3-compatible bucket using the s3 protocol. You can authenticate in two ways: by using local environment credentials or by passing credentials directly in the URL.

To use local environment credentials, sign in with the AWS CLI, then create and store the snapshot:

Create and store a snapshot in an S3 bucket using AWS CLI credentials
# Check if you are logged in.
aws sts get-caller-identity

# Create a snapshot and store it in an S3 bucket.
vcluster snapshot my-vcluster "s3://my-s3-bucket/my-bucket-key"

Alternatively, you can pass options directly in the snapshot URL. The following options are supported:

Parameter               | Description                                                 | Required
access-key-id           | Base64-encoded S3 access key ID for authentication          | Yes, when not using local credentials
secret-access-key       | Base64-encoded S3 secret access key for authentication      | Yes, when not using local credentials
session-token           | Base64-encoded temporary session token for authentication   | Yes, when not using local credentials
region                  | Region of the S3-compatible bucket                          | No
profile                 | AWS profile to use for authentication                       | No
skip-client-credentials | Skips use of local credentials for authentication           | No, defaults to false

Run the following command to create a snapshot and store it in an S3-compatible bucket, such as AWS S3 or MinIO:

Pass S3 credentials directly in the URL to create a snapshot
# Pass S3 credentials in the URL and create a snapshot.
export ACCESS_KEY_ID=$(cat my-access-key-id.txt | base64)
export SECRET_ACCESS_KEY=$(cat my-secret-access-key.txt | base64)
export SESSION_TOKEN=$(cat my-session-token.txt | base64)
vcluster snapshot my-vcluster "s3://my-s3-bucket/my-bucket-key?access-key-id=$ACCESS_KEY_ID&secret-access-key=$SECRET_ACCESS_KEY&session-token=$SESSION_TOKEN"

Store snapshots in containers

Use the container protocol to save snapshots as local files inside a vCluster container or another PVC in the same namespace as vCluster. Run the following command to create a snapshot and store it in the specified path inside a container:

Create snapshots inside a vCluster container or PVC
# Create a snapshot to local vCluster PVC (if using embedded storage).
vcluster snapshot my-vcluster "container:///data/my-snapshot.tar.gz"

# Create a snapshot in another PVC, which must be in the same namespace as vCluster.
vcluster snapshot my-vcluster "container:///my-pvc/my-snapshot.tar.gz" --pod-mount "pvc:my-pvc:/my-pvc"

Restore an existing virtual cluster from a snapshot

Restoring from a snapshot pauses the vCluster, scales down all workload pods to 0, and launches a temporary restore pod. Once the restore completes, the vCluster resumes, and all workload pods are scaled back up. This process results in temporary downtime while the restore is in progress.

Failed Restore

If the restore fails while using vcluster restore commands, the process stops. You must retry the restore to avoid leaving the virtual cluster in an inconsistent or broken state.

Restore a vCluster using the following commands. Any supported snapshot URL works, and you can pass credentials in the URL as described above.

Restore virtual clusters from OCI, S3, or container snapshots
# Restore from an OCI snapshot using local credentials.
vcluster restore my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag"

# Restore from an OCI snapshot, passing credentials in the snapshot URL.
export OCI_USERNAME=my-username
export OCI_PASSWORD=$(echo -n "my-password" | base64)
vcluster restore my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag?username=$OCI_USERNAME&password=$OCI_PASSWORD&skip-client-credentials=true"

# Restore from an S3 snapshot using local credentials.
vcluster restore my-vcluster "s3://my-s3-bucket/my-snapshot-key"

# Restore from a local PVC snapshot (if using embedded storage).
vcluster restore my-vcluster "container:///data/my-snapshot.tar.gz"

Clone a virtual cluster to a new virtual cluster

You can use snapshots to clone an existing virtual cluster by creating a new virtual cluster from one of its snapshots. Creating a new virtual cluster from a snapshot also restores all workloads from the snapshot.

Restore failures

If the restore fails while using vcluster create, the new virtual cluster is automatically deleted.

Create a new virtual cluster from an OCI snapshot
# Create a new virtual cluster from an OCI snapshot (uses local credentials).
vcluster create my-vcluster --restore oci://ghcr.io/my-user/my-repo:my-tag

New name or namespace updates certificates

vCluster certificates change when you create the virtual cluster with a new name or in a different namespace. This behavior is expected, as virtual clusters should not share the same certificates.

Migrate and override vCluster configuration options to create a new vCluster

Some configuration options, such as the backing store and distro, cannot be changed on an existing virtual cluster during an upgrade. To change these options, migrate the virtual cluster by creating a new virtual cluster from a snapshot and applying the new configuration options.

Creating a new virtual cluster from a snapshot also restores all workloads from the snapshot.

Restore failures

If the restore fails while using vcluster create, the new virtual cluster is automatically deleted.

Migrate a virtual cluster by restoring from a snapshot
# Upgrade an existing vCluster by restoring from a snapshot and applying a new vcluster.yaml.
# Configuration options in the vcluster.yaml override the options from the snapshot.
vcluster create my-vcluster --upgrade -f vcluster.yaml --restore oci://ghcr.io/my-user/my-repo:my-tag

New name or namespace updates certificates

vCluster certificates change when you create a virtual cluster with a new name or in a different namespace. This behavior is expected, as virtual clusters should not share the same certificates.

Supported migration options

vCluster supports migration paths based on your setup. The following are the available migration options for Kubernetes distributions and backing stores.

Distros

Migrate between Kubernetes distributions based on your workload requirements. You can migrate between the following distros:

  • k3s -> k8s
  • k8s -> k3s

Backing store

Change your data store to improve efficiency, scalability, and Kubernetes compatibility. You can migrate between the following data stores:

  • Embedded database (SQLite) -> Embedded database (etcd)
  • Embedded database (SQLite) -> External database

All other configuration options can be overridden, just as when upgrading a virtual cluster and applying changes.
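
For example, to migrate from the embedded SQLite datastore to embedded etcd while restoring from a snapshot, you can combine --restore with a vcluster.yaml that enables etcd. This is a sketch: the backingStore keys below are assumptions based on the vCluster configuration schema, so verify them against the configuration reference for your version.

Sketch: migrate the backing store while restoring from a snapshot
# Write a vcluster.yaml that enables the embedded etcd backing store.
# The controlPlane.backingStore keys are assumed; verify before use.
cat > vcluster.yaml <<'EOF'
controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true
EOF

# Create the new virtual cluster from the snapshot, applying the new configuration.
vcluster create my-vcluster --upgrade -f vcluster.yaml --restore oci://ghcr.io/my-user/my-repo:my-tag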

Limitations

Snapshots and restores have the following limitations:

Virtual clusters with PVs or PVCs

  • Snapshots do not include persistent volumes (PVs) or persistent volume claims (PVCs).

Sleeping virtual clusters

  • Snapshots require a running vCluster control plane and do not work with sleeping virtual clusters.

Virtual clusters using the k0s distro

  • Use the --pod-exec flag to take a snapshot of a k0s virtual cluster.
  • k0s virtual clusters do not support restore or clone operations. Migrate them to k8s instead.

Virtual clusters using an external database

  • Backup and restore for virtual clusters that use an external database happen outside of vCluster. A database administrator must back up or restore the external database according to the database documentation. Avoid using the vCluster CLI backup and restore commands for these clusters.

Use Velero

You can use Velero to back up virtual clusters.

Ensure your cluster supports volume snapshots so that Velero can back up the persistent volumes and persistent volume claims that store the virtual cluster state. Alternatively, you can use Velero's restic integration to back up the virtual cluster state.
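
One way to check for volume snapshot support, assuming the CSI snapshot CRDs are installed on the host cluster, is to list VolumeSnapshotClass resources:

Check for volume snapshot support on the host cluster
# An empty result suggests volume snapshots are not configured; in that
# case, consider Velero's restic integration instead.
kubectl get volumesnapshotclass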

Back up a vCluster

Install the Velero CLI and the Velero server components, then run the following command:

Create a Velero backup for a vCluster namespace
velero backup create <backup-name> --include-namespaces=my-vcluster-namespace

Verify a backup was successfully created with the following command:

Verify the Velero backup status
velero backup describe <backup-name>

The output is similar to the following:

Velero backup output
Name:         <backup-name>
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.24.0
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=24

Phase:  Completed

Errors:    0
Warnings:  0

Namespaces:
  Included:  my-vcluster-namespace
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

...

Restore a vCluster

After creating a backup with the Velero CLI or a scheduled backup, you can restore a vCluster using the Velero CLI:

Restore a Velero backup
velero restore create <restore-name> --from-backup <backup-name>

Verify the restore process using the following command:

Verify the Velero restore status
velero restore logs <restore-name>

This restores the vCluster workloads, configuration, and state in the virtual cluster namespace.

Moving vCluster

Moving a vCluster from one namespace to another can be challenging because some objects, such as cluster role bindings and persistent volumes, contain namespace references. Velero supports namespace mapping, which works in most cases, but use caution: this may not be compatible with all vCluster setups.
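
Namespace mapping is passed at restore time. The following is a sketch using Velero's --namespace-mappings flag; the namespace names are placeholders:

Restore a vCluster backup into a different namespace
# Maps the backed-up namespace to a new one during restore.
velero restore create <restore-name> --from-backup <backup-name> \
  --namespace-mappings my-vcluster-namespace:my-new-namespace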

Use Velero inside vCluster

You can use Velero inside vCluster to protect your workloads and data. To use Velero for backups:

  1. Enable the hostpath-mapper component in vCluster, as sketched below.
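
    A minimal sketch of enabling it through vcluster.yaml follows; the hostPathMapper key is an assumption based on the vCluster configuration schema, so verify it against the configuration reference for your version.

    Enable hostpath-mapper in vcluster.yaml
    # vcluster.yaml (YAML shown as comments; the hostPathMapper key is assumed):
    #
    #   controlPlane:
    #     hostPathMapper:
    #       enabled: true
    #
    # Apply the updated configuration to the virtual cluster.
    vcluster create my-vcluster --upgrade -f vcluster.yaml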

  2. After enabling hostpath-mapper, install the Velero CLI (as described above), connect to your vCluster, and install Velero inside the virtual cluster:

    Velero inside the virtual cluster
    velero install --provider <provider> --bucket <bucket_name> --secret-file <your_secret_file> --plugins velero/velero-plugin-for-<provider>:<semver> --use-restic

    note

    Replace provider, bucket_name, secret-file, and other placeholders with the correct values for your environment.

  3. After installation is complete, check the status of the Velero resources:

    Verify the status of the Velero resources
    $ kubectl get all -n velero
    NAME                          READY   STATUS    RESTARTS   AGE
    pod/restic-5szkb              1/1     Running   0          118s
    pod/velero-75c5479dfd-4x7sl   1/1     Running   0          118s

    NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/restic   1         1         1       1            1           <none>          118s

    NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/velero   1/1     1            1           119s

    NAME                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/velero-75c5479dfd   1         1         1       119s
  4. Create a backup using restic:

    Create a backup with restic
    velero backup create test1 --default-volumes-to-restic
  5. Wait for the backup to complete. When it finishes, the backup description shows output similar to the following:

    Sample Velero backup output
    $ velero backup describe test1
    Name:         test1
    Namespace:    velero
    Labels:       velero.io/storage-location=default
    Annotations:  velero.io/source-cluster-k8s-gitversion=v1.25.0+k3s1
                  velero.io/source-cluster-k8s-major-version=1
                  velero.io/source-cluster-k8s-minor-version=25

    Phase:  Completed

    Errors:    0
    Warnings:  0

    Namespaces:
      Included:  *
      Excluded:  <none>

    Resources:
      Included:        *
      Excluded:        <none>
      Cluster-scoped:  auto

    ...