Create snapshots
There are multiple ways to back up and restore a virtual cluster. vCluster provides a built-in method to create and restore snapshots using its CLI.
Need automated, scheduled snapshots? vCluster Platform AutoSnapshots lets you configure automatic snapshot creation on a cron schedule with retention policies, so there is no need to write custom Kubernetes jobs.
If you use an external database, such as MySQL or PostgreSQL, that does not run in the same namespace as vCluster, you must create a separate backup for the datastore. For more information, refer to the relevant database documentation.
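For example, a minimal sketch of backing up an external PostgreSQL datastore with pg_dump (the host, user, and database name are placeholders for your own deployment):

# Hypothetical example: dump an external PostgreSQL datastore to a local file.
# Replace the host, user, and database name with your own values.
pg_dump --host=postgres.example.com --username=vcluster_user \
  --format=custom --file=vcluster-datastore.dump vcluster_db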
Create a snapshot
We recommend using the vCluster CLI vcluster snapshot create command to back up the etcd datastore. Optionally, you can also create snapshots of your CSI volumes by running vcluster snapshot create with the --include-volumes flag. Without this flag, no persistent volume backup is created.
The vCluster snapshot feature currently only supports creating snapshots for CSI persistent volumes. See Volume snapshots for more details. To back up non-CSI persistent volumes (e.g. local volumes), use the Velero backup method.
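For example, the following command creates a snapshot that also includes snapshots of CSI volumes:

# Create a snapshot and include CSI volume snapshots.
vcluster snapshot create myvcluster "container:///data/my-snapshot.tar.gz" --include-volumes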
When you run the command to create a snapshot, the vCluster CLI creates a snapshot request, which the vCluster snapshot controller then processes in the background. Here is an example:
vcluster snapshot create myvcluster "container:///data/my-snapshot.tar.gz"
13:52:09 info Beginning snapshot creation... You can check the snapshot creation progress with the following command: vcluster snapshot get myvcluster "container:///data/my-snapshot.tar.gz"
You can check the snapshot creation progress with the following command:
vcluster snapshot get myvcluster "container:///data/my-snapshot.tar.gz"
SNAPSHOT | STATUS | AGE
---------------------------------------+-----------+------
container:///data/my-snapshot.tar.gz | Completed | 28s
The vCluster snapshot controller automatically determines the configured backing store and saves the snapshot at the specified location. The snapshot includes:
- Backing store data (for example, etcd or SQLite)
- vCluster Helm release information
- vCluster configuration (for example, vcluster.yaml)
- If the snapshot was created with the --include-volumes flag:
  - List of PVCs for which the snapshots were created
  - Volume snapshot identifiers
Snapshot URL
vCluster uses a snapshot URL to save the snapshot to a specific location. The snapshot URL contains the following information:
| Parameter | Description | Example |
|---|---|---|
| Protocol | Defines the storage type for the snapshot | oci, s3, container |
| Storage location | Specifies where to save the snapshot | oci://ghcr.io/my-user/my-repo:my-tag, s3://my-s3-bucket/my-snapshot-key, container:///data/my-snapshot.tar.gz |
| Optional flags | Additional options for snapshot storage | skip-client-credentials=true |
Supported protocols
The following protocols are supported for storing snapshots:
- oci: Stores snapshots in an OCI image registry, such as Docker Hub or GHCR.
- s3: Saves snapshots to an S3-compatible bucket, such as AWS S3 or MinIO.
- container: Stores snapshots as a local file inside a vCluster container or another persistent volume claim (PVC).
For example, the following snapshot URL saves the snapshot to an OCI image registry:
vcluster snapshot create my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag"
Store snapshots in OCI image registries
You can save snapshots to OCI image registries. You can authenticate in two ways: by using locally stored OCI credentials or by passing credentials directly in the snapshot URL.
To authenticate with local credentials, log in to your OCI registry and create the snapshot:
# Log in to the OCI registry using a password access token.
echo $PASSWORD_ACCESS_TOKEN | docker login ghcr.io -u $USERNAME --password-stdin
# Create a snapshot and push it to an OCI image registry.
vcluster snapshot create my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag"
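To confirm that the artifact was pushed, you can list the repository tags, for example with the oras CLI (an assumption; any OCI-aware client works):

# List the repository tags to verify the snapshot artifact exists.
oras repo tags ghcr.io/my-user/my-repo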
Alternatively, you can pass authentication credentials directly in the snapshot URL when creating the snapshot. The following options are supported:
| Parameter | Description | Required |
|---|---|---|
| username | Username for authenticating with the OCI registry | Yes, when not using local credentials |
| password | Base64-encoded password for authenticating with the OCI registry | Yes, when not using local credentials |
| skip-client-credentials | When set to true, ignores local Docker credentials | No, defaults to false |
# Pass authentication credentials directly in the URL and create a snapshot.
export OCI_USERNAME=my-username
export OCI_PASSWORD=$(echo -n "my-password" | base64)
vcluster snapshot create my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag?username=$OCI_USERNAME&password=$OCI_PASSWORD&skip-client-credentials=true"
Store snapshots in S3 buckets
Store snapshots in an S3-compatible bucket using the s3 protocol. This works with AWS S3 and S3-compatible providers such as MinIO or Ceph, which are common in on-premise and air-gapped environments where cloud storage isn't available.
You can authenticate in two ways: by using local environment credentials or by passing credentials directly in the URL.
To use local environment credentials, log in to AWS CLI, then create and save the snapshot:
# Check if you are logged in.
aws sts get-caller-identity
# Create a snapshot and store it in an S3 bucket.
vcluster snapshot create my-vcluster "s3://my-s3-bucket/my-bucket-key"
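You can then verify that the snapshot object was uploaded:

# List the bucket contents to confirm the snapshot object exists.
aws s3 ls s3://my-s3-bucket/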
Alternatively, you can pass options directly in the snapshot URL. The following options are supported:
| Flag | Description | Required |
|---|---|---|
| access-key-id | Base64-encoded S3 access key ID for authentication | Yes, when not using local credentials |
| secret-access-key | Base64-encoded S3 secret access key for authentication | Yes, when not using local credentials |
| session-token | Base64-encoded temporary session token for authentication | Yes, when not using local credentials |
| region | Region of the S3-compatible bucket | No |
| url | Base64-encoded custom endpoint URL for S3-compatible providers (e.g. MinIO, Ceph) | No |
| force-path-style | Use path-style addressing (endpoint/bucket) instead of virtual-hosted-style. Required for most S3-compatible providers. | No, defaults to false |
| profile | AWS profile to use for authentication | No |
| skip-client-credentials | Skips use of local credentials for authentication | No, defaults to false |
| server-side-encryption | Server-side encryption method (AES256 for SSE-S3, aws:kms for SSE-KMS) | No |
| kms-key-id | KMS key ID for SSE-KMS encryption | No |
Credentials passed in the snapshot URL are Base64-encoded, not encrypted. Avoid storing snapshot commands with inline credentials in shell history, scripts, or CI logs. For production environments, prefer local credential-based authentication with aws configure or a Kubernetes Secret, and set skip-client-credentials=false.
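For example, a sketch that relies on a named AWS profile instead of inline credentials (the profile name snapshots is a placeholder):

# Configure a local AWS profile once, then reference it by name in the URL.
aws configure --profile snapshots
vcluster snapshot create my-vcluster "s3://my-s3-bucket/my-bucket-key?profile=snapshots"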
Run the following command to create a snapshot and store it in an S3 bucket:
On macOS:
# Read the AWS credentials from files and encode them with base64
# This allows them to be safely included in the S3 URL
export ACCESS_KEY_ID=$(cat my-access-key-id.txt | base64)
export SECRET_ACCESS_KEY=$(cat my-secret-access-key.txt | base64)
export SESSION_TOKEN=$(cat my-session-token.txt | base64)
vcluster snapshot create my-vcluster "s3://my-s3-bucket/my-bucket-key?access-key-id=$ACCESS_KEY_ID&secret-access-key=$SECRET_ACCESS_KEY&session-token=$SESSION_TOKEN"
On Linux:

# Read the AWS credentials from files and encode them with base64
# On Linux, the -w 0 flag prevents line wrapping of the encoded output
export ACCESS_KEY_ID=$(cat my-access-key-id.txt | base64 -w 0)
export SECRET_ACCESS_KEY=$(cat my-secret-access-key.txt | base64 -w 0)
export SESSION_TOKEN=$(cat my-session-token.txt | base64 -w 0)
vcluster snapshot create my-vcluster "s3://my-s3-bucket/my-bucket-key?access-key-id=$ACCESS_KEY_ID&secret-access-key=$SECRET_ACCESS_KEY&session-token=$SESSION_TOKEN"
S3-compatible providers
S3-compatible providers such as MinIO or Ceph require two additional parameters: url to specify the custom endpoint and force-path-style to use path-style bucket addressing. These providers don't use session tokens, so you can omit that parameter.
On macOS:
# Base64-encode the credentials and the custom endpoint URL
export ACCESS_KEY_ID=$(cat my-access-key-id.txt | base64)
export SECRET_ACCESS_KEY=$(cat my-secret-access-key.txt | base64)
export ENDPOINT=$(echo -n "https://s3.example.com:9000" | base64)
vcluster snapshot create my-vcluster "s3://my-bucket/my-snapshot-key?access-key-id=$ACCESS_KEY_ID&secret-access-key=$SECRET_ACCESS_KEY&url=$ENDPOINT&region=us-east-1&force-path-style=true"
On Linux:

# Base64-encode the credentials and the custom endpoint URL
# On Linux, the -w 0 flag prevents line wrapping of the encoded output
export ACCESS_KEY_ID=$(cat my-access-key-id.txt | base64 -w 0)
export SECRET_ACCESS_KEY=$(cat my-secret-access-key.txt | base64 -w 0)
export ENDPOINT=$(echo -n "https://s3.example.com:9000" | base64 -w 0)
vcluster snapshot create my-vcluster "s3://my-bucket/my-snapshot-key?access-key-id=$ACCESS_KEY_ID&secret-access-key=$SECRET_ACCESS_KEY&url=$ENDPOINT&region=us-east-1&force-path-style=true"
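To verify the upload against a MinIO endpoint, you can use the MinIO client mc (a sketch; the alias name and endpoint are placeholders):

# Register the MinIO endpoint under an alias, then list the bucket contents.
mc alias set myminio https://s3.example.com:9000 "$(cat my-access-key-id.txt)" "$(cat my-secret-access-key.txt)"
mc ls myminio/my-bucket/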
S3 encryption support
vCluster supports server-side encryption for S3 snapshots to meet security requirements. See the CLI reference for all available flags.
SSE-S3 (AES256)
vcluster snapshot create my-vcluster "s3://my-bucket/key" --server-side-encryption AES256
SSE-KMS
vcluster snapshot create my-vcluster "s3://my-bucket/key" --kms-key-id "12345678-1234-1234-1234-123456789012"
Example: Bucket policy requiring encryption
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "DenyUnencryptedUploads",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::my-bucket/*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "AES256"
}
}
}]
}
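You can apply such a policy with the AWS CLI, assuming the JSON above is saved as policy.json:

# Apply the bucket policy that denies unencrypted uploads.
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json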
Store snapshots in containers
Use the container protocol to save snapshots as local files inside a vCluster container.
Run the following command to create a snapshot and store it in the specified path inside a container:
# Create a snapshot in the local vCluster PVC (if using embedded storage).
vcluster snapshot create my-vcluster "container:///data/my-snapshot.tar.gz"
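Because the snapshot is written inside the vCluster pod, you may want to copy it to your local machine afterwards. A sketch using kubectl cp, assuming a default installation named my-vcluster in the namespace vcluster-my-vcluster (adjust to your setup):

# Copy the snapshot file out of the vCluster pod to the local machine.
# The pod and namespace names assume a default installation; adjust as needed.
kubectl cp vcluster-my-vcluster/my-vcluster-0:/data/my-snapshot.tar.gz ./my-snapshot.tar.gz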
Limitations
When taking snapshots and restoring virtual clusters, there are limitations:
Sleeping virtual clusters
- Snapshots require a running vCluster control plane and do not work with sleeping virtual clusters.
Virtual clusters using an external database
- Virtual clusters with an external database handle backup and restore outside of vCluster. A database administrator must back up or restore the external database according to the database documentation. Avoid using the vCluster CLI backup and restore commands for clusters with an external database.
Distribution
- Snapshots work only with the K8s distribution.