Snapshots
vCluster Platform allows you to take snapshots of a virtual cluster on a schedule.
This lets administrators capture and store the vCluster state at regular intervals to help protect against infrastructure failures, data corruption, and configuration errors. By maintaining consistent recovery points, administrators can quickly restore the vCluster to a known good state without relying on manual backup processes. For more details on how snapshots work, refer to the documentation in the Snapshot and Restore section.
In the vcluster.yaml, automatic snapshots are configured under external.platform.autoSnapshot. In the UI, you
can manage snapshots in the config options of a virtual cluster under Snapshots. Although
snapshots are configured on the virtual cluster itself, the capability and logic for scheduling snapshots
live in vCluster Platform.
Auto Snapshot is supported from platform version 4.4.0 onward and is currently in Beta.
Snapshot scheduling and retention​
Scheduling is based on a cron expression, and you can set the time zone in which the schedule is evaluated. Retention is controlled by how many days snapshots are kept (the retention period) and by the maximum number of successful snapshots to keep at any given time.
external:
  platform:
    autoSnapshot:
      enabled: true
      # Take a snapshot every 12 hours
      schedule: 0 */12 * * * 
      # Default is UTC
      # Options are at https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
      timezone: America/New_York
      retention:
        period: 30
        maxSnapshots: 14
      storage:
        type: s3
        s3:
          url: s3://my-bucket/path
      # Enables PVC snapshot
      volumes:
        enabled: true
      
Available scheduling and retention options​
| Option | Required | Description | Default | 
|---|---|---|---|
| enabled | Yes | Determines whether auto snapshots are enabled or disabled. | false | 
| timezone | No | Time zone used when evaluating the schedule (for example, when midnight occurs). | UTC | 
| schedule | Yes | Cron expression that defines when snapshots are taken. | None | 
| retention.period | No | Number of days that a snapshot will be stored in the storage service. | 30 | 
| retention.maxSnapshots | No | Number of snapshots that can be stored in the storage service. | 30 | 
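For example, a minimal sketch that takes a snapshot every night at 02:00 New York time, keeps snapshots for seven days, and retains at most seven snapshots at a time could look like the following (the bucket path is a placeholder):

external:
  platform:
    autoSnapshot:
      enabled: true
      # Every day at 02:00 in the configured time zone
      schedule: 0 2 * * *
      timezone: America/New_York
      retention:
        # Keep snapshots for 7 days
        period: 7
        # Keep at most 7 snapshots at a time
        maxSnapshots: 7
      storage:
        type: s3
        s3:
          # Placeholder bucket path
          url: s3://my-bucket/snapshots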
Volume snapshots​
Volume snapshots are currently in beta and require vCluster version 0.30 or later and platform version 4.5 or later.
You can enable vCluster volume snapshots for PersistentVolumeClaims that are provisioned by CSI drivers.
| Option | Required | Description | Default | 
|---|---|---|---|
| volumes.enabled | No | Determines whether the auto snapshot includes volumes as part of the snapshot. | false | 
To create volume snapshots, several installation and configuration steps are required in your host or virtual cluster. See the Volume Snapshot documentation page to learn how to prepare your cluster for snapshotting volumes.
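To give a rough idea of the kind of prerequisite involved, volume snapshots rely on a CSI driver with snapshot support and a VolumeSnapshotClass being available in the cluster. A minimal VolumeSnapshotClass, using the AWS EBS CSI driver purely as an assumed example, might look like the sketch below; refer to the Volume Snapshot documentation for the actual requirements in your environment.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
# Replace with the CSI driver that provisions your PVCs
driver: ebs.csi.aws.com
deletionPolicy: Delete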
Supported storage backends​
The snapshot file can be saved to the following storage backends:
- AWS S3 buckets
- OCI registries
Storage type options​
| Option | Description | 
|---|---|
| storage.type | Defines the type of storage used to store the snapshot. The platform supports the following types: s3 (AWS S3) and oci (OCI registry). | 
Store snapshots in AWS S3 buckets​
Snapshots can be stored in an AWS S3 bucket.
S3 configuration options​
| Option | Description | 
|---|---|
| storage.s3.url | URL of the AWS S3 bucket. Must be prefixed with s3://. | 
| storage.s3.credential | References the Kubernetes secret that holds the AWS credentials. | 
| storage.s3.credential.secretName | Name of the Kubernetes secret. | 
| storage.s3.credential.secretNamespace | Namespace of the Kubernetes secret. The secret must be deployed on the host cluster where the vCluster control plane pod is deployed. | 
Authenticate with AWS Pod identity​
When using AWS S3 buckets, it is recommended to authenticate using EKS Pod Identity.
The EKS Pod Identity association must be created for the vCluster control plane pod. This pod runs inside the host cluster namespace that vCluster Platform automatically creates for each vCluster, typically named loft-<project-name>-<vcluster-name>. By default, this pod uses the service account vc-<vcluster-name>. This is the service account that must be associated with your EKS Pod Identity role so that the vCluster control plane can authenticate to AWS when uploading scheduled snapshots to S3.
external:
  platform:
    autoSnapshot:
      enabled: true
      # Take a snapshot every 12 hours
      schedule: 0 */12 * * *
      storage:
        type: s3
        s3:
          # URL of location of S3-compatible bucket
          # Must be prefixed with `s3://`
          url: s3://<bucket-name>/snapshots
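The Pod Identity association itself is created on the EKS cluster, outside of the vcluster.yaml. A minimal sketch using the AWS CLI is shown below; the EKS cluster name, project and vCluster names, account ID, and IAM role name are placeholders, and the referenced IAM role must allow writing to the configured S3 bucket.

aws eks create-pod-identity-association \
  --cluster-name <eks-cluster-name> \
  --namespace loft-<project-name>-<vcluster-name> \
  --service-account vc-<vcluster-name> \
  --role-arn arn:aws:iam::<account-id>:role/<snapshot-role-name>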
Authenticate with AWS Credentials as a secret​
Alternatively, you can create a Kubernetes secret with your AWS credentials.
- Create a Kubernetes secret with your AWS credentials (a kubectl alternative that handles the base64 encoding is shown after these steps).
  - Create this secret on the host cluster where the vCluster control plane is deployed. It can be in the namespace of the vCluster or in a different namespace.
  - The secret needs to contain all three of these keys:
    - AWS_ACCESS_KEY_ID
    - AWS_SECRET_ACCESS_KEY
    - AWS_SESSION_TOKEN
Create the AWS credentials secret:

kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: aws-cred
  namespace: p-default
data:
  AWS_ACCESS_KEY_ID: "id"
  AWS_SECRET_ACCESS_KEY: "key"
  AWS_SESSION_TOKEN: "token"
EOF
- Create a vCluster referencing those credentials.

Example vcluster.yaml referencing the Kubernetes secret:

external:
  platform:
    autoSnapshot:
      enabled: true
      # Take a snapshot every 12 hours
      schedule: 0 */12 * * *
      storage:
        type: s3
        s3:
          # URL of location of S3-compatible bucket
          # Must be prefixed with `s3://`
          url: s3://<bucket-name>/<path>
          # Secret must be located on the host cluster that the vCluster is deployed on
          credential:
            secretName: aws-cred
            secretNamespace: p-default
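Because the values in a Secret's data field must be base64 encoded, you can alternatively let kubectl create the same secret and handle the encoding for you. This is a sketch using placeholder credential values; the same approach works for the OCI registry secret in the next section.

kubectl create secret generic aws-cred \
  --namespace p-default \
  --from-literal=AWS_ACCESS_KEY_ID=<access-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-access-key> \
  --from-literal=AWS_SESSION_TOKEN=<session-token>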
OCI image registries​
Snapshots can be stored in an OCI image registry.
OCI configuration options​
| Option | Description | 
|---|---|
| storage.oci.repository | OCI registry address. Must be prefixed with oci://. | 
| storage.oci.credential | References the OCI registry credentials, either inline or as a Kubernetes secret. | 
| storage.oci.credential.secretName | Name of the Kubernetes secret. | 
| storage.oci.credential.secretNamespace | Namespace of the Kubernetes secret. The secret must be deployed on the host cluster where the vCluster control plane pod is deployed. | 
| storage.oci.credential.username | Username used to access the OCI registry. | 
| storage.oci.credential.password | Password used to access the OCI registry. | 
Authenticate with credentials in a Kubernetes secret​
It's recommended to store the credentials for your OCI registry in a Kubernetes secret and reference the secret in the vcluster.yaml of your virtual cluster configuration. This protects the details of your credentials.
- Create a Kubernetes secret with your OCI registry credentials.
  - Create this secret on the host cluster where the vCluster control plane is deployed. It can be in the namespace of the vCluster or in a different namespace.
  - The secret needs to contain these keys:
    - username
    - password
Create the OCI credentials secret:

kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: oci-cred
  namespace: p-default
data:
  username: "id" # base64-encoded username to authenticate with the OCI registry
  password: "key" # base64-encoded password to authenticate with the OCI registry
EOF
- Create a vCluster referencing those credentials.

Example vcluster.yaml referencing the Kubernetes secret:

external:
  platform:
    autoSnapshot:
      enabled: true
      # Take a snapshot every 12 hours
      schedule: 0 */12 * * *
      storage:
        type: oci
        oci:
          # Location of OCI registry
          # Must be prefixed with `oci://`
          repository: oci://my-registry/snapshots
          credential:
            secretName: oci-cred
            secretNamespace: p-default
Authenticate without a Kubernetes secret​
If you do not want to use a secret, you can also explicitly set the username and password
directly in the vcluster.yaml.
external:
  platform:
    autoSnapshot:
      enabled: true
      # Take a snapshot every 12 hours
      schedule: 0 */12 * * *
      storage:
        type: oci
        oci:
          # Location of OCI registry
          # Must be prefixed with `oci://`
          repository: oci://my-registry/snapshots
          credential:
            username: "my-username"
            password: "my-password"
View snapshots​
After enabling automatic snapshots, you can view the list of snapshots for each virtual cluster in the virtual cluster details. This view is only available to Project admins and Platform admins.
Snapshot name​
Snapshots are identified by a generated name formatted as <Virtual_Cluster_Name>-<Snapshot_Timestamp>.tar.gz.
Snapshot schedule lifecycle​
Snapshots transition through different statuses as part of their execution lifecycle. A typical flow starts with the
snapshot in the Scheduled state, then moves to In Progress while the snapshot is being taken and stored. If the snapshot is stored successfully,
it moves to the Completed state. If an error occurs, the snapshot moves to the Failed state.
If a snapshot was removed from the storage backend but not from vCluster Platform, it appears in the Not Found state.
| Snapshot Status | Description | 
|---|---|
| Scheduled | A snapshot that is scheduled to be stored in the storage backend service. Only one snapshot can have the Scheduled status at a time. | 
| In Progress | A snapshot that is currently being stored in the storage backend service. | 
| Completed | A snapshot that has been successfully stored in the storage backend service. | 
| Partially Failed | A snapshot that was successfully stored in the storage backend service but failed to snapshot one or more volumes. This status only occurs when volume snapshots are enabled. | 
| Failed | A snapshot that failed to be taken. | 
| Not Found | A snapshot that appears in the list but has been removed from the storage backend service. | 
When volume snapshotting is enabled, snapshots also transition through various volume snapshot phases. These phases represent the progress of snapshotting PersistentVolumeClaims (PVCs) and storing them.
| Volume Snapshot Phase | Description | 
|---|---|
| Not Started | Volume snapshot has not started yet. | 
| Skipped | Volume snapshot was skipped because it was misconfigured. | 
| In Progress | Volume snapshot is currently being taken and stored in the backing store. | 
| Completed Cleaning Up | Volume snapshot resources (VolumeSnapshot and VolumeSnapshotContent) are being removed after the volume was successfully snapshotted. | 
| Failed Cleaning Up | Volume snapshot resources (VolumeSnapshot and VolumeSnapshotContent) are being removed after the volume failed to be snapshotted. | 
| Completed | Volume snapshot was successfully taken. | 
| Failed | Volume snapshot failed to be taken. | 
These phases are displayed in the platform UI on the Snapshot View page and help to identify the current status of each volume snapshot. You can also refer to the Troubleshoot Issues section of the vCluster Volume Snapshot documentation to help identify and resolve potential issues.
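If you need to dig deeper than the UI, you can also inspect the underlying snapshot resources with kubectl. This is a sketch that assumes your kubeconfig points at the cluster where the VolumeSnapshot and VolumeSnapshotContent objects are created; see the Volume Snapshot documentation for where these objects live in your setup.

# List VolumeSnapshot objects in all namespaces
kubectl get volumesnapshots --all-namespaces
# VolumeSnapshotContent objects are cluster-scoped
kubectl get volumesnapshotcontents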