This feature is only available for the following:
- Host Nodes
This feature is an Enterprise feature. See our pricing plans or contact our sales team for more information.
Integrate EKS Pod Identity with vCluster
This tutorial guides you through the process of integrating AWS IAM roles with your vCluster workloads using EKS Pod Identity.
Setting up Pod Identity requires you to link an AWS IAM role with the Kubernetes Service Account (KSA) used by your workloads. This KSA needs to be available in the host cluster in which your vCluster instance runs.
To achieve this setup, use the sync.toHost feature to expose the KSA in the host cluster, together with the platform API to retrieve the updated name of the KSA in the host cluster.
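When the KSA is synced, vCluster typically rewrites its name in the host cluster to avoid collisions (for example, something like demo-sa-x-default-x-my-vcluster, depending on your vCluster version and sync settings), which is why you later query the platform API for the translated name. As a rough, optional illustration, you can list the service accounts that vCluster created in the host namespace:
# Illustrative only: run this against the host cluster context, not the vCluster.
# The exact translated name depends on your vCluster version and sync settings.
kubectl get serviceaccounts -n team-x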
Prerequisites​
This guide assumes you have the following prerequisites:
- kubectl installed
- aws CLI installed and configured
- An existing EKS cluster with the CSI driver set up, the IAM OIDC provider, and the Pod Identity Agent deployed
Step-by-step guide​
1. Start the platform and create an access key​
To integrate your workloads with EKS Pod Identity, you need a running platform instance. If you don't have one already, follow the platform installation guide.
Once the platform is running, create a new access key. This allows you to use the platform API. Follow this guide to create a new access key.
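As an optional sanity check (not part of the linked guides), you can log in to the platform with the access key from your terminal; note that newer CLI versions use vcluster platform login instead of vcluster login:
# Assumes the vCluster CLI is installed and HOST / ACCESS_KEY match the values set in the next step
vcluster login "$HOST" --access-key "$ACCESS_KEY"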
2. Set up variables​
- bash
- Terraform
Define the necessary environment variables for your EKS cluster, service accounts, and authentication details.
#!/bin/bash
# Set up environment variables
export AWS_REGION="eu-central-1" # Replace with your AWS region
export CLUSTER_NAME="pod-identity-1" # Replace with your EKS cluster name
export SERVICE_ACCOUNT_NAME="demo-sa"
export SERVICE_ACCOUNT_NAMESPACE="default"
export VCLUSTER_NAME="my-vcluster"
export HOST=https://your.loft.host # Replace with your host
export ACCESS_KEY=abcd1234 # Replace with your access key
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
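You can optionally confirm that these values point at a reachable EKS cluster before continuing. This check is not part of the original steps and only assumes the AWS CLI is configured:
# Prints "ACTIVE" if the cluster name and region are correct
aws eks describe-cluster --name "$CLUSTER_NAME" --region "$AWS_REGION" --query "cluster.status" --output text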
variable "aws_region" {
description = "The AWS region to deploy the EKS cluster"
type = string
default = "eu-central-1"
}
variable "cluster_name" {
description = "The name of the EKS cluster"
type = string
default = "pod-identity-1"
}
variable "service_account_name" {
description = "K8s SA name for Pod Identity binding"
type = string
default = "demo-sa"
}
variable "service_account_namespace" {
description = "Namespace in which k8s SA is created"
type = string
default = "default"
}
variable "vcluster_name" {
description = "Name of virtual cluster"
type = string
default = "my-vcluster"
}
variable "auth_token" {
description = "Auth token for vCluster.Pro API"
type = string
default = "abcd1234"
}
3. Create vCluster configuration​
Create the vcluster.yaml file with the following content:
sync:
  toHost:
    serviceAccounts:
      enabled: true
4. Deploy vCluster​
All of the deployment options below have the following assumptions:
- A vcluster.yaml is provided. Refer to the vcluster.yaml reference docs to explore all configuration options. This file is optional and can be removed from the examples.
- The vCluster is called my-vcluster.
- The vCluster is deployed into the team-x namespace.
- vCluster CLI
- Helm
- Terraform
- ArgoCD
- Flux
- Cluster API
The vCluster CLI provides the most straightforward way to deploy and manage virtual clusters.
Install the vCluster CLI:
- Homebrew
- Mac (Intel/AMD)
- Mac (Silicon/ARM)
- Linux (AMD)
- Linux (ARM)
- Download Binary
- Windows Powershell
brew install loft-sh/tap/vcluster
The binaries in the tap are signed using the Sigstore framework for enhanced security.
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
Download the binary for your platform from the GitHub Releases page and add this binary to your $PATH.
md -Force "$Env:APPDATA\vcluster"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
Invoke-WebRequest -URI "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-windows-amd64.exe" -o $Env:APPDATA\vcluster\vcluster.exe;
$env:Path += ";" + $Env:APPDATA + "\vcluster";
[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);
Reboot Required
You may need to reboot your computer to use the CLI due to changes to the PATH variable (see below).
Check Environment Variable $PATH
Line 4 of this install script adds the install directory %APPDATA%\vcluster to the $PATH environment variable. This is only effective for the current Powershell session, i.e. when opening a new terminal window, vcluster may not be found.
Make sure to add the folder %APPDATA%\vcluster to the PATH environment variable after installing the vCluster CLI via Powershell. Afterward, a reboot might be necessary.
Confirm that you've installed the correct version of the vCluster CLI:
vcluster --version
Deploy vCluster:
vcluster create my-vcluster --namespace team-x --values vcluster.yaml
After installation, vCluster automatically switches your Kubernetes context to the new virtual cluster. You can now run kubectl commands against the virtual cluster.
Helm provides fine-grained control over the deployment process and integrates well with existing Helm-based workflows.
Deploy vCluster using the helm upgrade command:
helm upgrade --install my-vcluster vcluster \
--values vcluster.yaml \
--repo https://charts.loft.sh \
--namespace team-x \
--repository-config='' \
--create-namespace
You can use Terraform to deploy vCluster as code with version control and state management.
Create a main.tf file to define your vCluster deployment using the Terraform Helm provider:
provider "helm" {
  kubernetes = {
    config_path = "~/.kube/config"
  }
}
resource "helm_release" "my_vcluster" {
  name             = "my-vcluster"
  namespace        = "team-x"
  create_namespace = true
  repository       = "https://charts.loft.sh"
  chart            = "vcluster"

  # If you didn't create a vcluster.yaml, remove the values section.
  values = [
    file("${path.module}/vcluster.yaml")
  ]
}
Helm Provider Version
This configuration uses the Terraform Helm provider v3.x syntax where kubernetes is defined as an argument (kubernetes = {). If you're using Helm provider v2.x, use the block syntax instead (kubernetes {). To use v3.x, ensure your provider version is at least v3.0.0.
Install the required Helm provider and initialize Terraform:
terraform init
Generate a plan to preview the changes:
terraform plan
Review the plan output to verify connectivity and proposed changes.
Deploy vCluster:
terraform apply
ArgoCD deployment enables GitOps workflows for vCluster management, and provides automated deployment, drift detection, and declarative configuration management through Git repositories.
To deploy vCluster using ArgoCD, you need the following files:
- vcluster.yaml for your vCluster configuration options.
- <CLUSTER_NAME>-app.yaml for your ArgoCD Application definition. Replace <CLUSTER_NAME> with your actual cluster name.
Create the ArgoCD Application file <CLUSTER_NAME>-app.yaml, which references the vCluster Helm chart:
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-vcluster
  namespace: argocd
spec:
  project: default
  source:
    chart: vcluster
    repoURL: https://charts.loft.sh
    helm:
      releaseName: my-vcluster
      valueFiles:
        - vcluster.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: team-x
Commit and push these files to your configured ArgoCD repository.
Sync your ArgoCD repository with your configured cluster:
argocd app sync my-vcluster
Create a HelmRepository source in the Git repository that Flux monitors, so Flux can fetch the vCluster Helm charts automatically. In this example, the related files are stored under the clusters folder. Save the following file as clusters/sources/vcluster-repository.yaml:
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: vcluster
  namespace: flux-system
spec:
  interval: 1h
  url: https://charts.loft.sh
The vCluster will be deployed into the namespace team-x of the host cluster. If this namespace doesn't already exist, create it with:
kubectl create namespace team-x
Create a vCluster HelmRelease file in your Git repository. This HelmRelease tells Flux how to deploy a vCluster in your Kubernetes cluster using the configured Helm charts in Step 1. Save the following file as clusters/production/vcluster-demo.yaml:
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: vcluster-demo
  namespace: team-x
spec:
  interval: 10m
  chart:
    spec:
      chart: vcluster
      version: "0.28.x"
      sourceRef:
        kind: HelmRepository
        name: vcluster
        namespace: flux-system
  values:
    # Configure TLS SAN for the certificate
    controlPlane:
      proxy:
        extraSANs:
          - "vcluster-demo.team-x.svc.cluster.local"
      coredns:
        enabled: true
    exportKubeConfig:
      # Set a meaningful context name
      context: default
      # Use a server URL that matches the TLS SAN
      server: https://vcluster-demo.team-x.svc.cluster.local:443
      # Skip TLS verification when Flux connects to the vCluster
      insecure: true
      # Specify the secret where the KubeConfig is stored
      secret:
        name: vcluster-flux-kubeconfig
    sync:
      toHost:
        ingresses:
          enabled: true
The content in vcluster.yaml should be placed under the values key.
Unlike ArgoCD, Flux requires the KubeConfig Secret to be able to access the vCluster's API server and deploy to the vCluster. The exportKubeConfig configuration under values:
- Exports the virtual cluster KubeConfig as a Secret in the host namespace
- Makes the Secret available for Flux to use with the spec.kubeConfig field
- Uses a server URL that is accessible from the Flux controllers (replace vcluster-name and team-x with your actual values)
- Sets insecure: true to automatically skip TLS certificate verification
- Adds a TLS SAN (Subject Alternative Name) that matches the server URL, which helps prevent certificate verification errors
KubeConfig Secret key
The vCluster exportKubeConfig configuration creates a Secret with the KubeConfig data stored under the key config. When referring to this Secret in Flux resources, you must specify this key in the secretRef.key field, as shown in the examples below.
# In Flux HelmRelease
spec:
  kubeConfig:
    secretRef:
      name: vcluster-flux-kubeconfig
      key: config # Must match the key used in the vCluster-generated Secret
Certificate verification considerations
When using vCluster with Flux, proper TLS certificate configuration is essential:
- Set exportKubeConfig.insecure: true in your vCluster configuration
- Configure proper TLS SANs with the --tls-san flag in vCluster configuration
- Ensure the server URL matches the certificate's SAN
# In your vCluster configuration
controlPlane:
  proxy:
    extraSANs:
      - "vcluster-demo.team-x.svc.cluster.local"
exportKubeConfig:
  server: https://vcluster-demo.team-x.svc.cluster.local:443
  insecure: true
After adding the HelmRelease and any supporting files to your Git repository, commit and push them:
git add clusters/
git commit -m "Add vCluster demo configuration"
git push
Once the changes are pushed, Flux will automatically detect them and deploy the vCluster according to the configuration in your repository.
Cluster API (CAPI) provides lifecycle management for Kubernetes clusters. The vCluster CAPI provider enables you to manage virtual clusters using the same declarative APIs and tooling used for physical clusters. For more details, see the Cluster API Provider for vCluster documentation.
Install the clusterctl CLI.
Install the vCluster provider:
clusterctl init --infrastructure vcluster:v0.2.0
Export environment variables for the Cluster API provider to create the manifest. The manifest is applied to your Kubernetes cluster, which deploys a vCluster.
export CLUSTER_NAME=my-vcluster
export CLUSTER_NAMESPACE=team-x
export VCLUSTER_YAML=$(awk '{printf "%s\n", $0}' vcluster.yaml)
Create the namespace for the vCluster using the exported variable:
kubectl create namespace ${CLUSTER_NAMESPACE}
Generate the required manifests and apply them using the exported variables:
clusterctl generate cluster ${CLUSTER_NAME} \
--infrastructure vcluster \
--target-namespace ${CLUSTER_NAMESPACE} \
| kubectl apply -f -
Kubernetes version
The Kubernetes version for the vCluster is not set at the CAPI provider command. Configure it in the vcluster.yaml file based on your Kubernetes distribution.
Wait for vCluster to become ready by monitoring the vCluster custom resource status:
kubectl wait --for=condition=ready vcluster -n ${CLUSTER_NAMESPACE} ${CLUSTER_NAME} --timeout=300s
5. Connect to vCluster​
Establish a connection to your vCluster instance:
vcluster connect ${VCLUSTER_NAME}
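Optionally, confirm that your kubectl context now points at the virtual cluster (assuming the default context-switching behavior of vcluster connect):
kubectl config current-context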
6. Create example workload​
Create an example workload to list S3 buckets.
# Create example-workload.yaml content dynamically
cat <<EOF > example-workload.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: s3-list-buckets
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: s3-list-buckets
  template:
    metadata:
      labels:
        app: s3-list-buckets
    spec:
      serviceAccountName: demo-sa
      containers:
        - image: public.ecr.aws/aws-cli/aws-cli
          command:
            - "aws"
            - "s3"
            - "ls"
          name: aws-pod
EOF
kubectl apply -f example-workload.yaml
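As a quick, optional check, verify that the deployment was created inside the vCluster:
kubectl get pods -n default -l app=s3-list-buckets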
7. Read updated name from platform API​
- bash
- Terraform
Define a function to fetch the KSA name using curl, and use it to export the KSA_NAME environment variable.
# Define the function to get the KSA name using curl
get_ksa_name() {
  local vcluster_ksa_name=$1
  local vcluster_ksa_namespace=$2
  local vcluster_name=$3
  local host=$4
  local access_key=$5

  local resource_path="/kubernetes/management/apis/management.loft.sh/v1/translatevclusterresourcenames"
  local host_with_scheme=$([[ $host =~ ^(http|https):// ]] && echo "$host" || echo "https://$host")
  local sanitized_host="${host_with_scheme%/}"
  local full_url="${sanitized_host}${resource_path}"

  local response=$(curl -s -k -X POST "$full_url" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${access_key}" \
    -d @- <<EOF
{
  "spec": {
    "name": "${vcluster_ksa_name}",
    "namespace": "${vcluster_ksa_namespace}",
    "vclusterName": "${vcluster_name}"
  }
}
EOF
  )

  local status_name=$(echo "$response" | jq -r '.status.name')
  if [[ -z "$status_name" || "$status_name" == "null" ]]; then
    # Write the error to stderr so it is not captured as the KSA name
    echo "Error: Unable to fetch KSA name from response: $response" >&2
    exit 1
  fi
  echo "$status_name"
}
# Get the KSA name
export KSA_NAME=$(get_ksa_name "$SERVICE_ACCOUNT_NAME" "$SERVICE_ACCOUNT_NAMESPACE" "$VCLUSTER_NAME" "$HOST" "$ACCESS_KEY")
We prepared a Terraform module that you can use to easily fetch the updated resource name from the platform API.
module "synced_service_account_name" {
source = "github.com/loft-sh/vcluster-terraform-modules//single-namespace-rename"
providers = {
http.default = http.default
}
host = var.vcluster_platform_host
access_key = var.access_key
resource_name = var.service_account_name
resource_namespace = var.service_account_namespace
vcluster_name = var.vcluster_name
}
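You can then expose the translated name as an output and reuse it in the Pod Identity association. The output attribute name below is an assumption about the module's interface; check the module documentation for the exact attribute:
# Hypothetical usage: the module output attribute "name" is an assumption, verify it in the module docs
output "synced_ksa_name" {
  value = module.synced_service_account_name.name
}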
8. Create IAM policy and role for Pod Identity​
- bash
- Terraform
Create an IAM policy and role for Pod Identity. The policy below also allows s3:ListAllMyBuckets so that the example workload's aws s3 ls call succeeds.
cat >my-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}
EOF
aws iam create-policy --policy-name my-policy --policy-document file://my-policy.json
cat >trust-relationship.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF
aws iam create-role --role-name my-role --assume-role-policy-document file://trust-relationship.json --description "my-role-description"
aws iam attach-role-policy --role-name my-role --policy-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:policy/my-policy
Create the pod identity association.
The namespace parameter depends on your vCluster deployment type:
- vCluster (not using the platform): Use the namespace where vCluster is deployed
- Platform-managed vCluster: The namespace follows the pattern loft-<project-name>-v-<vcluster-name>
- vCluster without the Platform
- Platform-managed vCluster
For vCluster deployments not connected to the platform (i.e., deployed with vcluster create or Helm, without the platform):
# Set the namespace where vCluster is deployed
export VCLUSTER_NAMESPACE="team-x" # Replace with your actual namespace
aws eks create-pod-identity-association \
--cluster-name ${CLUSTER_NAME} \
--role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/my-role \
--namespace ${VCLUSTER_NAMESPACE} \
--service-account ${KSA_NAME}
For vCluster deployments managed by the platform with a project:
# Set project name and construct the namespace
export PROJECT_NAME="my-project" # Replace with your actual project name
export PLATFORM_NAMESPACE="loft-${PROJECT_NAME}-v-${VCLUSTER_NAME}"
aws eks create-pod-identity-association \
--cluster-name ${CLUSTER_NAME} \
--role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/my-role \
--namespace ${PLATFORM_NAMESPACE} \
--service-account ${KSA_NAME}
data "aws_iam_policy_document" "assume_role" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = ["pods.eks.amazonaws.com"]
}
actions = [
"sts:AssumeRole",
"sts:TagSession"
]
}
}
resource "aws_iam_role" "example" {
name = "eks-pod-identity-example"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
resource "aws_iam_role_policy_attachment" "example_s3" {
policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
role = aws_iam_role.example.name
}
9. Verify the setup​
Verify the setup by checking the logs of the example workload. If Pod Identity is configured correctly, the output lists the S3 buckets in your AWS account.
kubectl logs -l app=s3-list-buckets -n default
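If the logs show an access error instead of a bucket list, it can help to confirm the association from the AWS side. This optional troubleshooting step assumes the variables from step 2 are still set:
# Check that the association targets the expected namespace and service account
aws eks list-pod-identity-associations --cluster-name "$CLUSTER_NAME" --region "$AWS_REGION"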