
SELinux support

Supported Configurations

  • Running the control plane as a container with:
  • Running the control plane as a binary with vCluster Standalone. When scaling with additional worker nodes, they are joined as private nodes.

vcluster-selinux is the SELinux policy module that vCluster Labs publishes for RHEL hosts running vCluster Standalone or a Private Node worker. It ships as a signed .noarch RPM for EL 8, EL 9, and EL 10 from the vCluster SELinux repository on GitHub. With the module loaded, vCluster runs on a host with SELinux in enforcing mode without disabling it or adding host-local allow rules.

No extra steps required

The vCluster installer and the Private Node join script detect SELinux on supported RHEL hosts and install the RPM before placing any vCluster binaries. Run the standalone install or the Private Node join script the same way you would on any other host — the installer fetches, verifies, and loads the SELinux module automatically.

Supported operating systems

The SELinux module is required on hosts where the vCluster binary runs directly: vCluster Standalone control planes and Private Node workers. vCluster Platform, the Shared Nodes tenancy model, and tenant workloads running inside the cluster are unaffected.

Product-supported

OS      | SELinux mode                    | RPM  | Notes
RHEL 10 | Enforcing, Permissive, Disabled | el10 | Supported. Installer fetches and installs el10 automatically when SELinux mode is Enforcing or Permissive.
RHEL 9  | Enforcing, Permissive, Disabled | el9  | Supported. Installer fetches and installs el9 automatically when SELinux mode is Enforcing or Permissive.
RHEL 8  | Enforcing, Permissive, Disabled | el8  | Supported. Requires a Kubernetes 1.31 pin for the control plane to start regardless of SELinux mode. Installer fetches and installs el8 automatically when SELinux mode is Enforcing or Permissive.

Tested derivatives

The .noarch RPM targets the EL family as a whole. CI validates the same RPM on AlmaLinux and Rocky Linux because RHEL subscriptions are not available to the public CI runners. These distributions are covered by the same install flow as the corresponding RHEL major version, but are not product-supported.

OS              | RPM  | CI coverage
AlmaLinux 10    | el10 | Same RPM as RHEL 10; community-tested.
AlmaLinux 9     | el9  | Yes; full standalone + Private Node e2e under enforcing.
AlmaLinux 8     | el8  | Same RPM as RHEL 8; community-tested.
CentOS Stream 9 | el9  | Same RPM as RHEL 9; community-tested. Requires iptables (see node requirements).
Rocky Linux 10  | el10 | Same RPM as RHEL 10; community-tested.
Rocky Linux 9   | el9  | Same RPM as RHEL 9; community-tested. Requires iptables (see node requirements).
Rocky Linux 8   | el8  | Yes; full standalone + Private Node e2e under enforcing. Requires iptables and the Kubernetes 1.31 pin.

Prerequisites

  • vCluster 0.34 or newer. The installer and the Private Node join script install the RPM and run restorecon after binary placement starting with this version.
  • A RHEL 8, RHEL 9, or RHEL 10 host with getenforce reporting Enforcing or Permissive. RHEL 8 additionally requires a Kubernetes 1.31 pin in vcluster.yaml. See Pin Kubernetes to 1.31 on RHEL 8.
  • Network access to https://rpm.vcluster.com from the host, or a pre-staged RPM. See Install offline or with a custom RPM mirror.
  • dnf installed. The RPM declares container-selinux, policycoreutils, policycoreutils-python-utils, libselinux-utils, and selinux-policy-base as dependencies.
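
The prerequisites above can be spot-checked up front. The following is a minimal sketch using only standard EL tooling:

Pre-flight checks on the host
getenforce                                        # expect Enforcing or Permissive
. /etc/os-release && echo "${VERSION_ID%%.*}"     # expect 8, 9, or 10
curl -fsI https://rpm.vcluster.com/public.key >/dev/null && echo "rpm.vcluster.com reachable"
command -v dnf                                    # dnf must be installed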

Install

On a RHEL 8, 9, or 10 host that can reach rpm.vcluster.com, no separate SELinux step is required. Use the standalone install or Private Node join flow as documented. RHEL 8 also requires the Kubernetes 1.31 pin.

When getenforce returns Enforcing or Permissive and the RPM is not already installed, the installer:

  1. Reads ${VERSION_ID%%.*} from /etc/os-release.
  2. Writes a yum repo file for https://rpm.vcluster.com/stable/el${EL_VERSION}/noarch.
  3. Verifies the package against the GPG key at https://rpm.vcluster.com/public.key.
  4. Runs dnf install -y vcluster-selinux before placing any vCluster binaries on the host.
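
For reference, a rough shell equivalent of these steps follows. This is a sketch, not the installer's exact code; the generated repo file may differ in detail, and the full manual layout appears under Install manually:

Manual equivalent of the installer's SELinux steps
. /etc/os-release && EL_VERSION="${VERSION_ID%%.*}"
sudo tee /etc/yum.repos.d/vcluster-selinux.repo <<EOF
[vcluster-selinux-stable]
name=vCluster SELinux (stable)
baseurl=https://rpm.vcluster.com/stable/el${EL_VERSION}/noarch
enabled=1
gpgcheck=1
gpgkey=https://rpm.vcluster.com/public.key
EOF
sudo dnf install -y vcluster-selinux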

For convenience, here are the same commands as on the install pages:

Install vCluster Standalone (auto-installs vcluster-selinux on RHEL)
curl -fsSL https://github.com/loft-sh/vcluster/releases/download/v0.34.0/install-standalone.sh | sudo bash -s -- --config /etc/vcluster/vcluster.yaml
Join a Private Node worker (auto-installs vcluster-selinux on RHEL)
curl -sfLk "$JOIN_URL" | sudo bash

$JOIN_URL is the URL vcluster token create returns for the tenant cluster. See Join Manually Provisioned Nodes for how to mint a join token and the full join flow.

If SELinux is Enforcing and the RPM install fails, the installer exits non-zero before placing any vCluster binaries on the host. If SELinux is Permissive, the installer prints a warning and continues; the host then runs vCluster without enforcement of the vcluster-selinux rules.

Pin Kubernetes to 1.31 on RHEL 8

RHEL 8 ships glibc 2.28. The containerd binary in the default vCluster Kubernetes bundle (currently v1.35.x) links against glibc 2.32 or newer and will not load on an EL 8 host. The kubelet and any tenant pods stay stuck and the node never reaches Ready. The Kubernetes 1.31.x bundles are built against an older glibc and run on EL 8.
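
To confirm the mismatch on a given host, compare the host glibc with the highest GLIBC symbol version the bundled containerd requires. The check below is a sketch: it assumes the bundled containerd has already been extracted and is on PATH, and that objdump (binutils) is installed.

ldd --version | head -n1                              # RHEL 8 reports glibc 2.28
objdump -T "$(command -v containerd)" | grep -oE 'GLIBC_[0-9.]+' | sort -uV | tail -n1
# GLIBC_2.32 or newer cannot be satisfied on EL 8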

Pin the Kubernetes version in vcluster.yaml before running the installer:

/etc/vcluster/vcluster.yaml on a RHEL 8 host
controlPlane:
  standalone:
    enabled: true
    joinNode:
      enabled: true
      containerd:
        enabled: true
  distro:
    k8s:
      version: v1.31.11

With this configuration, the standalone install on RHEL 8 is otherwise identical to RHEL 9. The el8 RPM comes from rpm.vcluster.com/stable/el8/noarch, the SELinux policy loads, and systemd transitions vcluster.service into container_runtime_t.

warning

The pin is host-wide. Every tenant Kubernetes version the host serves is 1.31.x. To run a newer Kubernetes on the control plane, run the host on RHEL 9 instead.

Enable SELinux enforcement for tenant pods

Containerd applies per-pod MCS labels only when its configuration has enable_selinux = true. The installer does not enable this flag by default because it changes runtime behavior for workloads that may already be running. Pass --containerd-selinux to enable it:

Standalone with containerd SELinux enforcement
curl -fsSL https://github.com/loft-sh/vcluster/releases/download/v0.34.0/install-standalone.sh | sudo bash -s -- --config /etc/vcluster/vcluster.yaml --containerd-selinux
Private Node worker with containerd SELinux enforcement
curl -sfLk "$JOIN_URL" | sudo bash -s -- --containerd-selinux

With enable_selinux = true in /etc/containerd/config.toml, each tenant pod on the worker runs under container_t with its own MCS category. Combined with the module's container_t → vcluster_data_t deny rules, a compromised tenant pod cannot read host PKI or the backing-store database through the filesystem.
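
To spot-check the result, grep the containerd configuration and look at the label of a running tenant pod process. This is a sketch; the exact config section the flag lands in depends on the containerd version:

grep -n 'enable_selinux' /etc/containerd/config.toml
# enable_selinux = true

ps -eZ | grep container_t | head -n 3
# each tenant pod process runs as container_t with its own MCS category pair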

Air-gapped install

Hosts without egress to rpm.vcluster.com need either a reachable mirror or a pre-staged RPM. Pair this with the rest of the air-gapped guidance in Deploy Private Nodes in an air-gapped environment.

Point at a custom RPM URL

Pass --selinux-rpm-url or set the VCLUSTER_SELINUX_RPM_URL environment variable to a URL the host can reach:

Standalone with a custom RPM URL
curl -fsSL https://github.com/loft-sh/vcluster/releases/download/v0.34.0/install-standalone.sh | sudo bash -s -- --config /etc/vcluster/vcluster.yaml --selinux-rpm-url https://internal.example.com/vcluster-selinux-<ver>-<rel>.el9.noarch.rpm

--selinux-rpm-url accepts either a direct .rpm URL or a yum-repo URL. dnf install accepts both.
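
The environment-variable form is equivalent. The sketch below assumes the installer reads the variable from its own environment, so it is preserved across sudo:

Standalone with a custom RPM URL via environment variable
export VCLUSTER_SELINUX_RPM_URL=https://internal.example.com/vcluster-selinux-<ver>-<rel>.el9.noarch.rpm
curl -fsSL https://github.com/loft-sh/vcluster/releases/download/v0.34.0/install-standalone.sh | sudo --preserve-env=VCLUSTER_SELINUX_RPM_URL bash -s -- --config /etc/vcluster/vcluster.yaml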

Pre-stage the RPM at image-build time

Bake vcluster-selinux into the host image and tell the installer to skip its own fetch with --skip-selinux-rpm:

Install the RPM once, during image build
sudo dnf install -y vcluster-selinux
Run the installer later without re-fetching the RPM
curl -fsSL https://github.com/loft-sh/vcluster/releases/download/v0.34.0/install-standalone.sh | sudo bash -s -- --config /etc/vcluster/vcluster.yaml --skip-selinux-rpm
warning

Pass --skip-selinux-rpm only when vcluster-selinux is already installed on the host or SELinux is disabled. With SELinux Enforcing and no module loaded, vcluster.service fails to transition into container_runtime_t and the control plane does not start.
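
A quick pre-flight check before passing the flag, using the same commands the Verify section relies on:

rpm -q vcluster-selinux && sudo semodule -l | grep -q '^vcluster' \
  && echo "safe to pass --skip-selinux-rpm" \
  || echo "install vcluster-selinux first, or confirm SELinux is disabled"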

Install manually

To pin a specific release, download the .noarch.rpm for your RHEL major version from the vcluster-selinux releases page and install it directly:

sudo dnf install https://github.com/loft-sh/vcluster-selinux/releases/download/<tag>/vcluster-selinux-<ver>-<rel>.el9.noarch.rpm

Or install from the same yum repo the vCluster installer would have configured (the example below pins el9; substitute your EL major version):

/etc/yum.repos.d/vcluster-selinux.repo
sudo tee /etc/yum.repos.d/vcluster-selinux.repo <<'EOF'
[vcluster-selinux-stable]
name=vCluster SELinux (stable)
baseurl=https://rpm.vcluster.com/stable/el9/noarch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://rpm.vcluster.com/public.key
EOF
sudo dnf install -y vcluster-selinux

When installing the RPM manually, run install-standalone.sh or the join script with --skip-selinux-rpm so the installer does not re-fetch the RPM.

SELinux labels

Path | SELinux type
/var/lib/vcluster/bin/vcluster (the entrypoint install-standalone.sh places) | container_runtime_exec_t
/var/lib/vcluster(/.*)? (PKI, kine/etcd backing store, sockets, pid files) | vcluster_data_t
/etc/vcluster(/.*)?, /etc/vcluster-vpn(/.*)?, /etc/crictl.yaml | container_config_t
/opt/cni(/.*)?, /etc/cni(/.*)? | container_file_t
/usr/local/bin/vcluster-vpn | container_runtime_exec_t
/etc/systemd/system/vcluster* | container_unit_file_t

The module also registers a semanage fcontext override for /var/run/flannel(/.*)? → container_file_t so the flannel pod can write its runtime state. It pre-creates /var/lib/vcluster, /etc/vcluster, /etc/vcluster-vpn, /opt/cni, /etc/cni, /opt/local-path-provisioner, /run/flannel, and /run/kubernetes so the labels are correct regardless of whether the RPM or the installer runs first.

kubelet, containerd, runc, /etc/containerd, /var/lib/containerd, and /var/lib/kubelet are covered by container-selinux. This module does not change their labels.
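
To see what the module and the %post override actually register on a host, list the matching file-context entries and query the declared default contexts for a few of the paths above. A sketch; exact output varies by release:

sudo semanage fcontext -l | grep -E 'vcluster|/var/run/flannel'
matchpathcon /var/lib/vcluster /etc/vcluster /opt/cni /usr/local/bin/vcluster-vpn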

Control-plane binaries are labeled vcluster_data_t on disk

The module's .fc file declares container_runtime_exec_t for the Kubernetes control-plane binaries (kube-apiserver, kube-controller-manager, kube-scheduler, etcd, etcdctl, kine, konnectivity-server, helm, kubectl, vcluster-cli). vCluster extracts these binaries from its bundle on first start, after the RPM's %post has already run, so they inherit vcluster_data_t from their parent directory. This mismatch is harmless: container_runtime_t has manage access to vcluster_data_t and generates no AVCs. To apply the declared label on disk, run sudo restorecon -R /var/lib/vcluster/bin after vcluster.service has started once.
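
As an illustration, using kube-apiserver from the list above (the comments show the expected before and after labels):

ls -Z /var/lib/vcluster/bin/kube-apiserver
# system_u:object_r:vcluster_data_t:s0 (inherited from the parent directory)
sudo restorecon -Rv /var/lib/vcluster/bin
ls -Z /var/lib/vcluster/bin/kube-apiserver
# system_u:object_r:container_runtime_exec_t:s0 (the label the .fc file declares)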

Verify

After install-standalone.sh returns, confirm that the module is loaded, the vCluster service runs under container_runtime_t, and the audit log contains no denials for the install window:

sudo semodule -l | grep '^vcluster'
# vcluster

ls -Z /var/lib/vcluster/bin/vcluster
# system_u:object_r:container_runtime_exec_t:s0 /var/lib/vcluster/bin/vcluster

sudo cat /proc/$(systemctl show -p MainPID --value vcluster.service)/attr/current
# system_u:system_r:container_runtime_t:s0

ls -Z /var/lib/vcluster/pki/ca.key
# system_u:object_r:vcluster_data_t:s0 /var/lib/vcluster/pki/ca.key

sudo ausearch -m avc --start recent \
| grep -E 'vcluster_|container_runtime_t|container_t' || echo "no denials"

Upgrade

sudo dnf update vcluster-selinux unloads the previous policy module, installs the new one, and reruns restorecon over the paths the RPM owns. No manual steps are required. If a release adds a new control-plane binary path, the release notes call out a one-off sudo restorecon -R /var/lib/vcluster to apply the new label.
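
In command form (the restorecon line is only needed when a release's notes call it out):

sudo dnf update -y vcluster-selinux
sudo restorecon -R /var/lib/vcluster   # only when the release notes call it out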

Uninstall

sudo dnf remove vcluster-selinux

The RPM's %postun unloads the policy module, removes the flannel semanage fcontext override, and restorecons the covered paths back to their pre-install defaults.
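
To confirm the cleanup took effect:

sudo semodule -l | grep '^vcluster' || echo "policy module removed"
sudo semanage fcontext -l | grep flannel || echo "flannel override removed"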

Troubleshoot

vCluster service fails to start under enforcing

A denial like avc: denied { execute } on container_runtime_exec_t in journalctl -u vcluster.service indicates that systemd could not exec the vCluster binary with the expected label. Either the module is not loaded, or the binary was placed before the module relabeled the parent directory.

Confirm the module and the RPM are in place:

rpm -q vcluster-selinux
sudo semodule -l | grep '^vcluster'
getenforce

If rpm -q does not return a version, install the RPM (see Install). If the RPM and module are both present but the binary was placed before the RPM's %post ran, relabel and restart:

sudo restorecon -R /var/lib/vcluster /etc/vcluster
sudo systemctl restart vcluster.service

Installer fails to fetch the RPM

If the installer exits with a line like failed to install vcluster-selinux RPM, it could not reach rpm.vcluster.com and no --selinux-rpm-url was passed. Choose one of the following:

  • Pass --selinux-rpm-url (or set VCLUSTER_SELINUX_RPM_URL) to point at a mirror or direct .rpm URL the host can reach. See Point at a custom RPM URL.
  • Pre-stage the RPM on the host or in the host image and rerun the installer with --skip-selinux-rpm. See Pre-stage the RPM at image-build time.

A control-plane binary fails to execute

A denial on a binary under /var/lib/vcluster/bin/ with tcontext=...vcluster_data_t in ausearch indicates that the RPM's .fc file is missing an entry for that binary. Open an issue at loft-sh/vcluster-selinux with the binary name and the denial. As a per-host workaround:

sudo semanage fcontext -a -t container_runtime_exec_t '/var/lib/vcluster/bin/<binary>'
sudo restorecon -v /var/lib/vcluster/bin/<binary>
sudo systemctl restart vcluster.service

Flannel pod can't write /var/run/flannel

Confirm the RPM's semanage fcontext override for flannel is still in place:

sudo semanage fcontext -l | grep flannel
# /var/run/flannel(/.*)? all files system_u:object_r:container_file_t:s0

If the line is missing, reinstall the RPM; its %post scriptlet registers the override. To remove the override by hand, use semanage fcontext -d.
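
If reinstalling the RPM is not convenient, the override can also be re-added manually; this mirrors what the %post scriptlet does:

sudo semanage fcontext -a -t container_file_t '/var/run/flannel(/.*)?'
sudo restorecon -Rv /var/run/flannel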