This feature is available in the vCluster Pro tier. Contact us for more details and to start a trial.
Integrating EKS Pod Identity with vCluster
This tutorial guides you through integrating AWS IAM roles with your vCluster workloads using EKS Pod Identity.
Setting up Pod Identity requires you to link an AWS IAM role to the Kubernetes Service Account (KSA) used by your workloads. This KSA needs to be available in the host cluster in which your vCluster instance runs.
To achieve this setup, use the sync.toHost feature to expose the KSA in the host cluster, and the vCluster Platform API to retrieve the translated name of the KSA in the host cluster.
Prerequisites
This guide assumes you have the following prerequisites:
- kubectl installed
- aws CLI installed and configured
- An existing EKS cluster with the CSI driver set up, IAM OIDC provider, and Pod Identity agent deployed
Step-by-Step Guide
1. Start vCluster Platform and create an access key
In order to integrate your workloads with EKS Pod Identity, you'll need a vCluster Platform instance running. If you don't have one already, follow the vCluster Platform installation guide.
Once you're done, you'll need to create a new access key. This will allow you to use the vCluster Platform API. Please follow this guide to create a new access key.
2. Set Up Variables
- bash
- Terraform
Define the necessary environment variables for your EKS cluster, service accounts, and authentication details.
#!/bin/bash
# Set up environment variables
export AWS_REGION="eu-central-1" # Replace with your AWS region
export CLUSTER_NAME="pod-identity-1" # Replace with your EKS cluster name
export SERVICE_ACCOUNT_NAME="demo-sa"
export SERVICE_ACCOUNT_NAMESPACE="default"
export VCLUSTER_NAME="my-vcluster"
export HOST=https://your.loft.host # Replace with your host
export ACCESS_KEY=abcd1234 # Replace with your access key
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
variable "aws_region" {
description = "The AWS region to deploy the EKS cluster"
type = string
default = "eu-central-1"
}
variable "cluster_name" {
description = "The name of the EKS cluster"
type = string
default = "pod-identity-1"
}
variable "service_account_name" {
description = "K8s SA name for Pod Identity binding"
type = string
default = "demo-sa"
}
variable "service_account_namespace" {
description = "Namespace in which k8s SA is created"
type = string
default = "default"
}
variable "vcluster_name" {
description = "Name of virtual cluster"
type = string
default = "my-vcluster"
}
variable "auth_token" {
description = "Auth token for vCluster.Pro API"
type = string
default = "abcd1234"
}
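The Terraform module referenced in step 7 also uses var.vcluster_platform_host and var.access_key, which are not declared above. A minimal sketch of those declarations follows; the variable names are assumed from the module block in step 7, so adjust them to your configuration.
variable "vcluster_platform_host" {
  description = "Host of the vCluster Platform API, e.g. https://your.loft.host"
  type        = string
}
variable "access_key" {
  description = "vCluster Platform access key used to authenticate API requests"
  type        = string
  sensitive   = true
}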
3. Create vCluster Configuration
Create the vcluster.yaml file with the following content:
sync:
toHost:
serviceAccounts:
enabled: true
4. Deploy vCluster
- vCluster CLI
- Helm
- Terraform
- Argo CD
- Cluster API
Install the vCluster CLI.
- Homebrew
- Mac (Intel/AMD)
- Mac (Silicon/ARM)
- Linux (AMD)
- Linux (ARM)
- Download Binary
- Windows Powershell
brew install loft-sh/tap/vcluster-experimental
If you installed the CLI using brew install vcluster, you should brew uninstall vcluster and then install the experimental version. The binaries in the tap are signed using the Sigstore framework for enhanced security.

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/<VCLUSTER_VERSION>/vcluster-darwin-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

Replace <VCLUSTER_VERSION> with the version you want to download.

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/<VCLUSTER_VERSION>/vcluster-darwin-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

Replace <VCLUSTER_VERSION> with the version you want to download.

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/<VCLUSTER_VERSION>/vcluster-linux-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

Replace <VCLUSTER_VERSION> with the version you want to download.

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/<VCLUSTER_VERSION>/vcluster-linux-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

Replace <VCLUSTER_VERSION> with the version you want to download.

Download the binary for your platform from the GitHub Releases page and add this binary to your $PATH.
md -Force "$Env:APPDATA\vcluster"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
Invoke-WebRequest -URI "https://github.com/loft-sh/vcluster/releases/download/<VCLUSTER_VERSION>/vcluster-windows-amd64.exe" -o $Env:APPDATA\vcluster\vcluster.exe;
$env:Path += ";" + $Env:APPDATA + "\vcluster";
[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);

Replace <VCLUSTER_VERSION> with the version you want to download.

Reboot Required
You may need to reboot your computer to use the CLI due to changes to the PATH variable (see below).

Check Environment Variable $PATH
Line 4 of this install script adds the install directory %APPDATA%\vcluster to the $PATH environment variable. This is only effective for the current Powershell session, i.e. when opening a new terminal window, vcluster may not be found.
Make sure to add the folder %APPDATA%\vcluster to the PATH environment variable after installing the vCluster CLI via Powershell. Afterward, a reboot might be necessary.

Confirm that you've installed the correct version of the vCluster CLI.
vcluster --version
Deploy vCluster.
vcluster create my-vcluster --namespace team-x --values vcluster.yaml
When the installation finishes, you are automatically connected to the virtual cluster. Your Kubernetes context is updated to point to your new virtual cluster. You can run local
kubectl
commands for the new virtual cluster.
Deploy vCluster using the helm upgrade command.

helm upgrade --install my-vcluster vcluster \
--values vcluster.yaml \
--repo https://charts.loft.sh \
--namespace team-x \
--repository-config='' \
--create-namespace
Create a main.tf file.

provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
resource "helm_release" "my_vcluster" {
name = "my-vcluster"
namespace = "team-x"
create_namespace = true
repository = "https://charts.loft.sh"
chart = "vcluster"
# If you didn't create a vcluster.yaml, remove the values section.
values = [
file("${path.module}/vcluster.yaml")
]
}

Install the required Helm provider.
terraform init
Generate a plan.
terraform plan
Verify that the provider can access your cluster and that the proposed changes are correct.
Deploy vCluster.
terraform apply
To deploy vCluster using ArgoCD, you need the following files:
- vcluster.yaml for your configuration options of your vCluster.
- my-vcluster-app.yaml for your ArgoCD Application definition.

Create the ArgoCD Application file my-vcluster-app.yaml, which refers to the vCluster Helm chart.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-vcluster
namespace: argocd
spec:
project: default
source:
chart: vcluster
repoURL: https://charts.loft.sh
helm:
releaseName: my-vcluster
valueFiles:
- vcluster.yaml
destination:
server: https://kubernetes.default.svc
namespace: team-x

Commit and push these files to your configured ArgoCD repository.
Synchronize your ArgoCD repository with your configured cluster.
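For example, you can trigger the sync with the Argo CD CLI (assuming you are logged in to your Argo CD instance and used the Application name my-vcluster from above):
argocd app sync my-vcluster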
Learn more about Cluster API Provider for vCluster.
Install the clusterctl CLI.

Install the vCluster provider.
clusterctl init --infrastructure vcluster:v0.2.0
Export environment variables to be used by the cluster API provider to create the manifest. The manifest is applied to your Kubernetes cluster, which will deploy a vCluster.
export CLUSTER_NAME=my-vcluster
export CLUSTER_NAMESPACE=team-x
export VCLUSTER_YAML=$(awk '{printf "%s\\n", $0}' vcluster.yaml)

Create the namespace for the vCluster.
kubectl create namespace ${CLUSTER_NAMESPACE}
Generate the required manifests and apply them.
clusterctl generate cluster ${CLUSTER_NAME} \
--infrastructure vcluster \
--target-namespace ${CLUSTER_NAMESPACE} \
| kubectl apply -f -

Kubernetes Version
The Kubernetes version for the vCluster is not set with the CAPI provider command. It is configured in the vcluster.yaml file based on your Kubernetes distribution.

Wait for vCluster to come up by watching for the vCluster custom resource to report a ready status.

kubectl wait --for=condition=ready vcluster -n $CLUSTER_NAMESPACE $CLUSTER_NAME --timeout=300s
5. Connect to vCluster
Establish a connection to your vCluster instance:
vcluster connect ${VCLUSTER_NAME}
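To confirm the connection, run any kubectl command against the new context, for example:
kubectl config current-context
kubectl get namespaces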
6. Create Example Workload
Create an example workload to list S3 buckets.
# Create example-workload.yaml content dynamically
cat <<EOF > example-workload.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: demo-sa
namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: s3-list-buckets
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: s3-list-buckets
template:
metadata:
labels:
app: s3-list-buckets
spec:
serviceAccountName: demo-sa
containers:
- image: public.ecr.aws/aws-cli/aws-cli
command:
- "aws"
- "s3"
- "ls"
name: aws-pod
EOF
kubectl apply -f example-workload.yaml
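Optionally confirm that the workload was created inside the virtual cluster. Note that the pod cannot list buckets yet; it only receives AWS credentials once the Pod Identity association from step 8 is in place.
kubectl get pods -n default -l app=s3-list-buckets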
7. Read Updated Name From vCluster Platform API
- bash
- Terraform
Define a function to fetch the KSA name using curl and use it to export the KSA_NAME environment variable.
# Define the function to get the KSA name using curl
get_ksa_name() {
local vcluster_ksa_name=$1
local vcluster_ksa_namespace=$2
local vcluster_name=$3
local host=$4
local access_key=$5
local resource_path="/kubernetes/management/apis/management.loft.sh/v1/translatevclusterresourcenames"
local host_with_scheme=$([[ $host =~ ^(http|https):// ]] && echo "$host" || echo "https://$host")
local sanitized_host="${host_with_scheme%/}"
local full_url="${sanitized_host}${resource_path}"
local response=$(curl -s -k -X POST "$full_url" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${access_key}" \
-d @- <<EOF
{
"spec": {
"name": "${vcluster_ksa_name}",
"namespace": "${vcluster_ksa_namespace}",
"vclusterName": "${vcluster_name}"
}
}
EOF
)
local status_name=$(echo "$response" | jq -r '.status.name')
if [[ -z "$status_name" || "$status_name" == "null" ]]; then
echo "Error: Unable to fetch KSA name from response: $response"
exit 1
fi
echo "$status_name"
}
# Get the KSA name
export KSA_NAME=$(get_ksa_name "$SERVICE_ACCOUNT_NAME" "$SERVICE_ACCOUNT_NAMESPACE" "$VCLUSTER_NAME" "$HOST" "$ACCESS_KEY")
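Optionally print the translated name to confirm the lookup succeeded. This is the name the synced KSA carries in the host cluster and the value the Pod Identity association in step 8 expects.
echo "Synced service account name in host cluster: ${KSA_NAME}"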
We prepared a Terraform module that you can use to easily fetch the updated resource name from the Platform API.
module "synced_service_account_name" {
source = "github.com/loft-sh/vcluster-terraform-modules//single-namespace-rename"
providers = {
http.default = http.default
}
host = var.vcluster_platform_host
access_key = var.access_key
resource_name = var.service_account_name
resource_namespace = var.service_account_namespace
vcluster_name = var.vcluster_name
}
8. Create IAM Policy and Role for Pod Identity
- bash
- Terraform
Create an IAM policy and a role for Pod Identity. The policy below allows the example workload to list your S3 buckets and read objects.
cat >my-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::*"
}
]
}
EOF
aws iam create-policy --policy-name my-policy --policy-document file://my-policy.json
cat >trust-relationship.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
"Effect": "Allow",
"Principal": {
"Service": "pods.eks.amazonaws.com"
},
"Action": [
"sts:AssumeRole",
"sts:TagSession"
]
}
]
}
EOF
aws iam create-role --role-name my-role --assume-role-policy-document file://trust-relationship.json --description "my-role-description"
aws iam attach-role-policy --role-name my-role --policy-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:policy/my-policy
Create the Pod Identity association. The --namespace flag refers to the namespace in the host cluster in which the vCluster runs and the synced KSA lives (team-x in this guide).

aws eks create-pod-identity-association --cluster-name ${CLUSTER_NAME} --role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/my-role --namespace team-x --service-account ${KSA_NAME}
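You can verify that the association was created using the AWS CLI:
aws eks list-pod-identity-associations --cluster-name ${CLUSTER_NAME}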
data "aws_iam_policy_document" "assume_role" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = ["pods.eks.amazonaws.com"]
}
actions = [
"sts:AssumeRole",
"sts:TagSession"
]
}
}
resource "aws_iam_role" "example" {
name = "eks-pod-identity-example"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
resource "aws_iam_role_policy_attachment" "example_s3" {
policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
role = aws_iam_role.example.name
}
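The snippet above only creates the IAM role. Below is a minimal sketch of the matching Pod Identity association using the AWS provider's aws_eks_pod_identity_association resource. It assumes the module from step 7 exposes the translated name through an output named name and that the vCluster runs in the team-x host namespace; check the module's outputs and your namespace before applying.
resource "aws_eks_pod_identity_association" "example" {
  cluster_name    = var.cluster_name
  namespace       = "team-x" # host namespace in which the vCluster and the synced KSA live
  service_account = module.synced_service_account_name.name # assumed output name; check the module's outputs
  role_arn        = aws_iam_role.example.arn
}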
9. Verify the Setup
Verify the setup by checking the logs of the example workload.
kubectl logs -l app=s3-list-buckets -n default
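Because the example workload was created in step 6, before the Pod Identity association existed, its pod may have started without AWS credentials. In that case, restart the deployment so a new pod picks up the association, then check the logs again:
kubectl rollout restart deployment s3-list-buckets -n default
kubectl logs -l app=s3-list-buckets -n default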