
Pro Feature

This feature is available in the vCluster Pro tier. Contact us for more details and to start a trial.

Integrating EKS Pod Identity with vCluster

This tutorial guides you through integrating AWS IAM roles with your vCluster workloads using EKS Pod Identity.

Setting up Pod Identity requires linking an AWS IAM role to the Kubernetes Service Account (KSA) used by your workloads. This KSA must exist in the host cluster in which your vCluster instance runs.

To achieve this setup, you'll use the sync.toHost feature to expose the KSA in the host cluster, and the vCluster Platform API to retrieve the rewritten name the KSA receives there.
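
For example, a KSA named demo-sa in the vCluster's default namespace is synced to the host under a rewritten name. The scheme below is only illustrative and may differ between versions, which is exactly why this guide retrieves the name through the Platform API instead of hard-coding it:

# Inside the virtual cluster
default/demo-sa
# On the host cluster (illustrative rewritten name)
team-x/demo-sa-x-default-x-my-vcluster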

Prerequisites

This guide assumes you have the following prerequisites:

  • kubectl installed
  • aws CLI installed and configured
  • jq installed (used to parse Platform API responses in step 7)
  • An existing EKS cluster with the CSI driver set up, IAM OIDC provider, and Pod Identity agent deployed
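
You can spot-check the cluster-side prerequisites before starting. The DaemonSet name below assumes the Pod Identity agent was installed as the managed EKS add-on, and the cluster name and region are the example values from step 2:

# Confirm the CLIs are installed
kubectl version --client
aws --version

# Confirm the Pod Identity agent is running (assumes the managed add-on's default DaemonSet name)
kubectl get daemonset eks-pod-identity-agent -n kube-system
aws eks list-addons --cluster-name pod-identity-1 --region eu-central-1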

Step-by-Step Guide

1. Start vCluster Platform and create an access key

In order to integrate your workloads with EKS Pod Identity, you'll need a vCluster Platform instance running. If you don't have one already, follow the vCluster Platform installation guide.

Once that's running, create a new access key so you can authenticate against the vCluster Platform API. Follow this guide to create one.
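
After exporting HOST and ACCESS_KEY in the next step, you can optionally smoke-test the key. This assumes the Platform proxies the standard Kubernetes version endpoint under the same /kubernetes/management prefix used in step 7; an authenticated request should print 200 rather than 401:

# Endpoint path is an assumption based on the API prefix used in step 7
curl -s -k -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer ${ACCESS_KEY}" \
  "${HOST}/kubernetes/management/version"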

2. Set Up Variables

Define the necessary environment variables for your EKS cluster, service accounts, and authentication details.

#!/bin/bash

# Set up environment variables
export AWS_REGION="eu-central-1" # Replace with your AWS region
export CLUSTER_NAME="pod-identity-1" # Replace with your EKS cluster name
export SERVICE_ACCOUNT_NAME="demo-sa"
export SERVICE_ACCOUNT_NAMESPACE="default"
export VCLUSTER_NAME="my-vcluster"
export HOST=https://your.loft.host # Replace with your host
export ACCESS_KEY=abcd1234 # Replace with your access key
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
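
A quick sanity check catches an unauthenticated aws CLI early; the ${VAR:?} expansion aborts with an error if the variable is empty:

# Fails fast if aws sts get-caller-identity returned nothing
echo "Using AWS account: ${AWS_ACCOUNT_ID:?aws CLI is not authenticated}"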

3. Create vCluster Configuration

Create the vcluster.yaml file with the following content:

sync:
  toHost:
    serviceAccounts:
      enabled: true

4. Deploy vCluster

  1. Install the vCluster CLI.

     brew install loft-sh/tap/vcluster-experimental

    If you installed the CLI using brew install vcluster, you should brew uninstall vcluster and then install the experimental version. The binaries in the tap are signed using the Sigstore framework for enhanced security.

    Confirm that you've installed the correct version of the vCluster CLI.

    vcluster --version
  2. Deploy vCluster.

    vcluster create my-vcluster --namespace team-x --values vcluster.yaml

    When the installation finishes, you are automatically connected to the virtual cluster. Your Kubernetes context is updated to point to the new virtual cluster, so you can run kubectl commands against it directly; see the check below.
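
To double-check the deployment, list the virtual clusters known to the CLI; the STATUS column should show Running:

vcluster list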

5. Connect to vCluster

Establish a connection to your vCluster instance:

vcluster connect ${VCLUSTER_NAME}
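
Your kubeconfig now points at the virtual cluster, which you can confirm with:

kubectl config current-context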

6. Create Example Workload

Create an example workload to list S3 buckets.

# Create example-workload.yaml content dynamically
cat <<EOF > example-workload.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: s3-list-buckets
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: s3-list-buckets
  template:
    metadata:
      labels:
        app: s3-list-buckets
    spec:
      serviceAccountName: demo-sa
      containers:
        - name: aws-pod
          image: public.ecr.aws/aws-cli/aws-cli
          command:
            - "aws"
            - "s3"
            - "ls"
EOF

kubectl apply -f example-workload.yaml
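
The pod should be scheduled right away, but until the Pod Identity association exists (step 8), the aws s3 ls call is expected to fail with a credentials error:

kubectl get pods -l app=s3-list-buckets -n default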

7. Read Updated Name From vCluster Platform API

Define a helper function that fetches the translated KSA name via curl, then use it to export the KSA_NAME environment variable.

# Define the function to get the KSA name using curl
get_ksa_name() {
  local vcluster_ksa_name=$1
  local vcluster_ksa_namespace=$2
  local vcluster_name=$3
  local host=$4
  local access_key=$5

  local resource_path="/kubernetes/management/apis/management.loft.sh/v1/translatevclusterresourcenames"
  # Ensure the host has a scheme and no trailing slash
  local host_with_scheme=$([[ $host =~ ^(http|https):// ]] && echo "$host" || echo "https://$host")
  local sanitized_host="${host_with_scheme%/}"
  local full_url="${sanitized_host}${resource_path}"

  # POST the vCluster-internal name/namespace; the API returns the name on the host cluster
  local response=$(curl -s -k -X POST "$full_url" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${access_key}" \
    -d @- <<EOF
{
  "spec": {
    "name": "${vcluster_ksa_name}",
    "namespace": "${vcluster_ksa_namespace}",
    "vclusterName": "${vcluster_name}"
  }
}
EOF
  )

  local status_name=$(echo "$response" | jq -r '.status.name')
  if [[ -z "$status_name" || "$status_name" == "null" ]]; then
    # Write the error to stderr so it is not captured into KSA_NAME
    echo "Error: Unable to fetch KSA name from response: $response" >&2
    exit 1
  fi
  echo "$status_name"
}

# Get the KSA name
export KSA_NAME=$(get_ksa_name "$SERVICE_ACCOUNT_NAME" "$SERVICE_ACCOUNT_NAMESPACE" "$VCLUSTER_NAME" "$HOST" "$ACCESS_KEY")
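
You can echo the translated name and, if your kubeconfig also contains a context for the host cluster (called host-context here purely for illustration), confirm the synced service account exists in the vCluster's host namespace:

echo "Translated KSA name: ${KSA_NAME}"

# host-context is a hypothetical name; replace it with your host cluster's context
kubectl --context host-context get serviceaccount "${KSA_NAME}" -n team-x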

8. Create IAM Policy and Role for Pod Identity

Create the IAM policy and role for Pod Identity. In addition to s3:GetObject, the policy grants s3:ListAllMyBuckets so the example workload's aws s3 ls call can succeed.

cat >my-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
EOF

aws iam create-policy --policy-name my-policy --policy-document file://my-policy.json

cat >trust-relationship.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF

aws iam create-role --role-name my-role --assume-role-policy-document file://trust-relationship.json --description "my-role-description"

aws iam attach-role-policy --role-name my-role --policy-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:policy/my-policy

Create the Pod Identity association. Note that the namespace is the host namespace into which vCluster synced the service account (team-x in this example), and the service account is the translated name retrieved in step 7.

aws eks create-pod-identity-association --cluster-name ${CLUSTER_NAME} --role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/my-role --namespace team-x --service-account ${KSA_NAME}
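
Confirm the association was created:

aws eks list-pod-identity-associations --cluster-name ${CLUSTER_NAME}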

9. Verify the Setup

Verify the setup by checking the example workload's logs. If the association is working, the output lists the S3 buckets in your account.

kubectl logs -l app=s3-list-buckets -n default
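
Pod Identity injects credentials only when a pod is created, so if the workload pod started before the association existed, recreate it first and then check the logs again:

# Recreate the pod so it picks up the newly associated IAM role
kubectl rollout restart deployment s3-list-buckets -n default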