
Pro Feature

This feature is available in the vCluster Pro tier. Contact us for more details and to start a trial.

Integrating GCP Workload Identity with vCluster

This tutorial guides you through the process of integrating GCP Service Accounts with your vCluster using Workload Identity.

Setting up Workload Identity requires you to link a GCP Service Account with the Kubernetes Service Account (KSA) used by your workloads. This KSA needs to be available in the host cluster in which your vCluster instance runs.

To achieve this setup, we'll use the sync.toHost feature to expose the KSA in the host cluster, together with the vCluster Platform API to retrieve the translated name of the KSA in the host cluster.
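vCluster rewrites the names of synced resources so they don't collide with other resources in the host cluster. The exact scheme is an implementation detail, which is why this guide queries the Platform API for the translated name instead of computing it. As a rough illustration only, synced names commonly follow a NAME-x-NAMESPACE-x-VCLUSTER pattern:

```shell
# Hypothetical sketch of vCluster's name translation for synced resources.
# The real scheme is an implementation detail and may differ; always fetch
# the actual name via the Platform API as shown later in this guide.
translate_name() {
  local name=$1 namespace=$2 vcluster=$3
  echo "${name}-x-${namespace}-x-${vcluster}"
}

translate_name demo-sa default my-vcluster
```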

Prerequisites

This guide assumes you have the following prerequisites:

  • kubectl installed
  • gcloud CLI installed and configured
  • A running GKE cluster with Workload Identity Federation enabled (GCP docs)

Step-by-Step Guide

1. Start vCluster Platform and Create an Access Key

In order to integrate your workloads with GKE Workload Identity, you'll need a vCluster Platform instance running. If you don't have one already, follow the vCluster Platform installation guide.

Once you're done, you'll need to create a new access key. This will allow you to use the vCluster Platform API. Please follow this guide to create a new access key.

2. Set Up Environment Variables

Next, you need to set up the necessary environment variables. These variables include information about your GCP project, vCluster details, and authentication keys.

# Set up environment variables
export GSA_NAME=my-gke-sa
export VCLUSTER_KSA_NAME=demo-sa
export VCLUSTER_KSA_NAMESPACE=default
export GCP_PROJECT_ID=my-gcp-project-id-12345 # Replace with your actual GCP project ID
export VCLUSTER_NAME=my-vcluster
export HOST=https://my.loft.host # Replace with your actual host
export ACCESS_KEY=abcd1234 # Replace with your actual vCluster Platform access key

Before proceeding, adjust all values to your specific setup. In particular, make sure to specify the correct GCP project ID, vCluster Platform host, and access key.
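As a quick sanity check, you can verify that none of the required variables is empty before continuing. The `check_env` helper below is a convenience sketch, not part of the official setup:

```shell
# Hypothetical helper (not part of the official setup): report any required
# variable that is unset or empty before running the later steps.
check_env() {
  local var missing=0
  for var in GSA_NAME VCLUSTER_KSA_NAME VCLUSTER_KSA_NAMESPACE \
             GCP_PROJECT_ID VCLUSTER_NAME HOST ACCESS_KEY; do
    if [ -z "${!var}" ]; then
      echo "Error: $var is not set" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Usage: check_env || exit 1
```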

3. Read the Translated Name From the vCluster Platform API

Define a function that fetches the translated KSA name using curl, and use it to export the KSA_NAME environment variable.

# Define the function to get the KSA name using curl
get_ksa_name() {
  local vcluster_ksa_name=$1
  local vcluster_ksa_namespace=$2
  local vcluster_name=$3
  local host=$4
  local access_key=$5

  local resource_path="/kubernetes/management/apis/management.loft.sh/v1/translatevclusterresourcenames"
  local host_with_scheme=$([[ $host =~ ^(http|https):// ]] && echo "$host" || echo "https://$host")
  local sanitized_host="${host_with_scheme%/}"
  local full_url="${sanitized_host}${resource_path}"

  local response=$(curl -s -k -X POST "$full_url" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${access_key}" \
    -d @- <<EOF
{
  "spec": {
    "name": "${vcluster_ksa_name}",
    "namespace": "${vcluster_ksa_namespace}",
    "vclusterName": "${vcluster_name}"
  }
}
EOF
  )

  local status_name=$(echo "$response" | jq -r '.status.name')
  if [[ -z "$status_name" || "$status_name" == "null" ]]; then
    echo "Error: Unable to fetch KSA name from response: $response" >&2
    exit 1
  fi
  echo "$status_name"
}

# Get the KSA name
export KSA_NAME=$(get_ksa_name "$VCLUSTER_KSA_NAME" "$VCLUSTER_KSA_NAMESPACE" "$VCLUSTER_NAME" "$HOST" "$ACCESS_KEY")
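For reference, a successful response carries the translated name under `.status.name`. The snippet below illustrates how jq extracts it; the translated name shown is a made-up example, not a value to expect verbatim:

```shell
# Illustrative response shape (the translated name here is invented):
response='{"status":{"name":"demo-sa-x-default-x-my-vcluster"}}'
echo "$response" | jq -r '.status.name'
```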

4. Create vCluster Configuration

Create a vcluster.yaml file with the following content:

sync:
  toHost:
    serviceAccounts:
      enabled: true

5. Deploy vCluster

  1. Install the vCluster CLI.

     brew install loft-sh/tap/vcluster-experimental

    If you installed the CLI using brew install vcluster, you should brew uninstall vcluster and then install the experimental version. The binaries in the tap are signed using the Sigstore framework for enhanced security.

    Confirm that you've installed the correct version of the vCluster CLI.

    vcluster --version
  2. Deploy vCluster.

    vcluster create my-vcluster --namespace team-x --values vcluster.yaml

    When the installation finishes, you are automatically connected to the virtual cluster. Your Kubernetes context is updated to point to your new virtual cluster. You can run local kubectl commands for the new virtual cluster.

6. Connect to vCluster

Establish a connection to your vCluster instance:

vcluster connect ${VCLUSTER_NAME}

7. Create Example Workload

Create a file named gcs-list-buckets-deployment.yaml with the following content:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gcs-list-buckets
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gcs-list-buckets
  template:
    metadata:
      labels:
        app: gcs-list-buckets
    spec:
      serviceAccountName: demo-sa
      containers:
        - image: google/cloud-sdk:slim
          command:
            - "gcloud"
            - "storage"
            - "buckets"
            - "list"
          name: gcs-pod

Apply the deployment:

kubectl apply -f gcs-list-buckets-deployment.yaml

8. Create a GCP Service Account

Create a new GCP Service Account (SA) that will be used for workload identity.

# Create a new GCP Service Account
gcloud iam service-accounts create ${GSA_NAME} \
--display-name "Workload Identity experiment SA"
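The following steps repeatedly build the service account's email address from these two variables. Storing it once in a helper variable (a convenience, not required by the setup) avoids typos:

```shell
# The GCP Service Account email follows a fixed pattern:
# NAME@PROJECT_ID.iam.gserviceaccount.com
GSA_EMAIL="${GSA_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com"
echo "$GSA_EMAIL"
```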

9. Bind IAM Policies to the GCP Service Account

Bind the necessary IAM policies to the GCP Service Account, allowing the Kubernetes Service Account to impersonate it and granting it permission to list GCS buckets.

# Bind IAM policy for Workload Identity User.
# The namespace in the member string is the host cluster namespace where the
# synced KSA lives (team-x in this tutorial), not the virtual cluster namespace.
gcloud iam service-accounts add-iam-policy-binding ${GSA_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com \
--member "serviceAccount:${GCP_PROJECT_ID}.svc.id.goog[team-x/${KSA_NAME}]" \
--role "roles/iam.workloadIdentityUser"

# Bind IAM policy for Storage Object Viewer
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member "serviceAccount:${GSA_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \
--role "roles/storage.objectViewer"

# Grant a project-level role so the service account can list the project's buckets
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member "serviceAccount:${GSA_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \
--role "roles/editor"

10. Annotate the Kubernetes Service Account

Annotate the Kubernetes Service Account with the GCP Service Account email.

# Annotate the Kubernetes Service Account
kubectl annotate serviceaccount \
--namespace ${VCLUSTER_KSA_NAMESPACE} \
${VCLUSTER_KSA_NAME} \
iam.gke.io/gcp-service-account=${GSA_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com

11. Verify the Setup

Verify that the setup is complete and that the pod can list the GCS buckets using the bound IAM roles.

kubectl logs -l app=gcs-list-buckets -n default

Following these steps will integrate your GCP Service Account with your vCluster, allowing you to manage resources securely and efficiently.