
Resolve etcd NOSPACE alarm in vCluster

The etcd NOSPACE alarm signals that the etcd database inside the vCluster has exhausted its available disk space. When this occurs, etcd fails its health checks, which causes the control plane to become unresponsive. As a result, all cluster operations—such as deploying workloads, updating resources, or managing cluster components—are blocked, and the vCluster is unusable until the issue is resolved.

Error message

If etcd has run out of storage space, you might find the following error in the logs of your etcd pods:

etcd NOSPACE alarm
etcdhttp/metrics.go:86 /health error ALARM NOSPACE status-code 503
Identifying an etcd NOSPACE alarm in vCluster

When interacting with the affected vCluster using kubectl, API requests fail with timeout errors:

Error from server: etcdserver: request timed out

Additionally, the etcd health metrics endpoint returns a 503 status code and the following error:

etcdhttp/metrics.go:86 /health error ALARM NOSPACE status-code 503

To verify the NOSPACE alarm, run the following command against the etcd instance:

etcdctl alarm list --endpoints=https://$ETCD_SRVNAME:2379 [...]

The output displays the triggered alarm:

memberID:XXXXX alarm:NOSPACE
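
For reference, a complete invocation could look like the following. The TLS flags shown here mirror the ones used in the solution steps below and may differ in your environment:

etcdctl alarm list \
--endpoints=https://$ETCD_SRVNAME:2379 \
--cacert=/run/config/pki/etcd-ca.crt \
--key=/run/config/pki/etcd-peer.key \
--cert=/run/config/pki/etcd-peer.crt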

Causes

The NOSPACE alarm is typically triggered by one of two conditions:

  • Excessive etcd data growth: A large number of objects—such as Deployments, ConfigMaps, and Secrets—can fill etcd’s storage if regular compaction is not performed.

  • Synchronization conflicts: Conflicting objects between the vCluster and host cluster can trigger continuous sync loops. For example, a Custom Resource Definition (CRD) modified by the host cluster might sync back to the vCluster repeatedly. This behavior quickly fills etcd’s backend storage.
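
Once the API server is responsive again (or before the quota is hit), you can get a rough sense of object growth by counting the resources stored in the vCluster. The command below is only a sketch: run it with a kubeconfig that points at the affected vCluster, and extend the resource list as needed.

kubectl get deployments,configmaps,secrets --all-namespaces --no-headers | wc -l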

Solution

To resolve the issue, compact and defragment the etcd database to free up space. Then, reconfigure etcd with automatic compaction and increase its storage quota to prevent recurrence.

  1. Identify if there's a syncing conflict.

    Check for objects that might be caught in a sync loop:

    kubectl -n <namespace> logs <vcluster-pod> | grep -i "sync" | grep -i "error"

    If you find a problematic object, pause syncing for it in your vCluster config (see the configuration sketch after these steps).

  2. Compact and defragment etcd.

    • Connect to each etcd pod. Access the etcd pod using the following command:

      kubectl -n <namespace> exec -it <etcd-pod-name> -- sh
    • Set an environment variable. Export the name of the etcd pod as an environment variable:

      export ETCD_SRVNAME=<etcd-pod-name>
    • Get the current revision number. Retrieve the current revision of the etcd database using the following command:

      etcdctl endpoint status --write-out json \
      --endpoints=https://$ETCD_SRVNAME:2379 \
      --cacert=/run/config/pki/etcd-ca.crt \
      --key=/run/config/pki/etcd-peer.key \
      --cert=/run/config/pki/etcd-peer.crt
    • Compact the etcd database. Compact etcd to remove old data and free up disk space:

      etcdctl --command-timeout=600s compact <revision-number> \
      --endpoints=https://$ETCD_SRVNAME:2379 \
      --cacert=/run/config/pki/etcd-ca.crt \
      --key=/run/config/pki/etcd-peer.key \
      --cert=/run/config/pki/etcd-peer.crt

      Replace <revision-number> with the revision value reported under header in the JSON output of the previous command.

    • Defragment etcd. Defragment etcd to optimize disk usage and improve performance:

      etcdctl --command-timeout=600s defrag \
      --endpoints=https://$ETCD_SRVNAME:2379 \
      --cacert=/run/config/pki/etcd-ca.crt \
      --key=/run/config/pki/etcd-peer.key \
      --cert=/run/config/pki/etcd-peer.crt
    • Repeat for all etcd pods in your cluster.

  3. Verify disk usage reduction.

    Check that the operation freed up space:

    etcdctl endpoint status -w table \
    --endpoints=https://$ETCD_SRVNAME:2379 \
    --cacert=/run/config/pki/etcd-ca.crt \
    --key=/run/config/pki/etcd-peer.key \
    --cert=/run/config/pki/etcd-peer.crt
  4. Disarm the NOSPACE alarm.

    Remove the alarm to restore normal operation:

    etcdctl alarm disarm \
    --endpoints=https://$ETCD_SRVNAME:2379 \
    --cacert=/run/config/pki/etcd-ca.crt \
    --key=/run/config/pki/etcd-peer.key \
    --cert=/run/config/pki/etcd-peer.crt
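
If step 1 revealed a sync conflict, one way to pause syncing for the offending object is to disable it in your vcluster.yaml. The snippet below is only a sketch: the resource name is a hypothetical placeholder, and the exact keys under sync depend on the resource type and your vCluster version, so check the vcluster.yaml reference for your release.

vcluster.yaml
sync:
  fromHost:
    customResources:
      # Hypothetical CRD caught in a sync loop; replace with the conflicting resource.
      widgets.example.com:
        enabled: false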

Prevention

Update your vCluster configuration to prevent future occurrences. Use the following recommended settings to enable automatic maintenance of your etcd database:

vcluster.yaml
controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: false
      deploy:
        enabled: true
        statefulSet:
          enabled: true
          extraArgs:
            - '--auto-compaction-mode=periodic'
            - '--auto-compaction-retention=30m'
            - '--quota-backend-bytes=8589934592'

This configuration enables periodic compaction every 30 minutes, sets the etcd backend quota to 8 GiB (8589934592 bytes), and uses deployed etcd instead of embedded etcd for more control. Adjust the parameters based on your needs.
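
One way to apply the updated configuration is to upgrade the vCluster release with the new vcluster.yaml. The command below is a sketch that assumes the vCluster was installed with Helm from the loft-sh chart repository; my-vcluster and <namespace> are placeholders:

helm upgrade my-vcluster vcluster \
--repo https://charts.loft.sh \
--namespace <namespace> \
--values vcluster.yaml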

Verification

After completing the solution steps:

  1. Check that etcd pods are healthy:

    kubectl -n <namespace> get pods | grep etcd
  2. Verify that vCluster is functioning properly:

    kubectl -n <namespace> get pods
    kubectl -n <namespace> logs <vcluster-pod> | grep -i "alarm"
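
Optionally, re-run etcdctl alarm list from an etcd pod (as shown in the solution steps) and confirm that it returns no output, then check that the endpoint reports healthy. This sketch assumes the same environment variable and certificate paths used above:

etcdctl endpoint health \
--endpoints=https://$ETCD_SRVNAME:2379 \
--cacert=/run/config/pki/etcd-ca.crt \
--key=/run/config/pki/etcd-peer.key \
--cert=/run/config/pki/etcd-peer.crt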

Best practices

To ensure optimal etcd performance in vCluster:

  • Monitor etcd disk usage: Use metrics tools to track disk usage and set up alerts for high usage levels.
  • Enable automated compaction: Configure compaction with --auto-compaction-mode=periodic and --auto-compaction-retention=30m to clean up old data.
  • Size etcd storage appropriately: Set --quota-backend-bytes based on usage, with a buffer for growth.
  • Defragment etcd regularly: Optimize disk usage by defragmenting etcd periodically.
  • Resolve syncing conflicts: Identify and fix syncing issues to prevent unnecessary data growth.
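
As a starting point for the monitoring recommendation, an alert along the following lines can warn before the backend quota is reached. This is a sketch that assumes etcd's Prometheus metrics are already being scraped; the metric names are standard etcd server metrics and the threshold is illustrative:

groups:
  - name: etcd-storage
    rules:
      - alert: EtcdDatabaseNearQuota
        # Fires when the etcd database has used more than 80% of its configured quota.
        expr: etcd_mvcc_db_total_size_in_bytes / etcd_server_quota_backend_bytes > 0.8
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: etcd database size is above 80% of the backend quota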