Version: v0.29 Stable


Limited vCluster Tenancy Configuration Support

This feature is only available for the following:

Running the control plane as a container and the following worker node types:
  • Host Nodes
Enterprise-Only Feature

This feature is an Enterprise feature. See our pricing plans or contact our sales team for more information.

Istio integration

This guide shows how to set up Istio integration with your virtual cluster. This enables you to use one Istio installation from the host cluster instead of installing Istio in each virtual cluster.

You can include your virtual workloads in the mesh by setting istio.io/dataplane-mode=ambient label on the virtual Namespaces or Pods. You can exclude your virtual workloads from the mesh by setting istio.io/dataplane-mode=none label either on the Namespace or on the Pod.
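For example, a virtual Namespace manifest that opts all of its workloads into the mesh could look like this (the namespace name my-app is only an illustration):

```yaml
# Virtual namespace opted into the ambient mesh; every Pod created in it
# is included unless it carries the istio.io/dataplane-mode=none label.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app   # hypothetical name
  labels:
    istio.io/dataplane-mode: ambient
```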

Istio supported versions

The integration works with the following Istio / Gateway API versions:

Istio Version    Gateway API Version
v1.24            1.2
v1.25            1.2
v1.26            1.3
v1.27            1.3

Prerequisites

  • Administrator access to a Kubernetes cluster: See Accessing Clusters with kubectl for more information. Run the command kubectl auth can-i create clusterrole -A to verify that your current kube-context has administrative privileges.

    info

    To obtain a kube-context with admin access, ensure you have the necessary credentials and permissions for your Kubernetes cluster. This typically involves using kubectl config commands or authenticating through your cloud provider's CLI tools.

  • helm: Helm v3.10 is required for deploying the platform. Refer to the Helm Installation Guide if you need to install it.

  • kubectl: Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.

  • Istio: installed on your host cluster in Ambient mode with DNS capture disabled
warning

To disable DNS capture, set values.cni.ambient.dnsCapture: false in your Istio configuration. This integration works only with Istio in Ambient mode. Sidecar mode is not supported.
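As a sketch, the setting from the warning above maps to an installation values fragment like the following; only the values.cni.ambient.dnsCapture path comes from the text above, the surrounding file layout is an assumption:

```yaml
# Istio installation values (sketch): keep Ambient mode but turn off DNS
# capture, since this integration does not work with DNS capture enabled.
cni:
  ambient:
    dnsCapture: false
```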

Enable the integration

Enable the Istio integration in your virtual cluster configuration:

Enable istio integration
integrations:
  istio:
    enabled: true

This configuration:

  • Enables the integration.
  • Installs Resource Definitions for DestinationRules, Gateways and VirtualServices into the virtual cluster.
  • Exports DestinationRules, Gateways and VirtualServices from the virtual cluster to the host, re-writing service references to the translated service names in the host.
  • Adds the istio.io/dataplane-mode label to the synced Pods based on the value of this label set on the virtual namespace.
warning

Only DestinationRules, Gateways, and VirtualServices from the networking.istio.io/v1 API version are synced to the host cluster. Other kinds are not yet supported.
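Putting the options from the config reference at the bottom of this page together, a fuller vcluster.yaml sketch with all sync toggles spelled out explicitly (true is the default for each of them) looks like:

```yaml
# vcluster.yaml: enable the Istio integration and list the per-kind
# toHost sync toggles explicitly (each defaults to true when omitted)
integrations:
  istio:
    enabled: true
    sync:
      toHost:
        destinationRules:
          enabled: true
        gateways:
          enabled: true
        virtualServices:
          enabled: true
```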

Route requests based on the version label of the app

    Set up cluster contexts

  1. Setting up the host and virtual cluster contexts makes it easier to switch between them. Adjust the values below to match your setup: HOST_CONTEXT is the context of the host cluster, VCLUSTER_CONTEXT is the context used to access the created vCluster, VCLUSTER_HOST_NAMESPACE is the namespace on the host cluster where the vCluster is created, and ISTIO_NAMESPACE is the namespace in which to deploy the Istio integration. Set the Gateway API version according to the Istio / Gateway API version matrix above.

    tip

    You can find your contexts by running kubectl config get-contexts.

  2. Create the waypoint proxy in the host

  3. In this tutorial, you set the Kubernetes Service name as a host in the VirtualService spec.hosts. For this to work, you need a waypoint proxy in the virtual cluster's host namespace (in many other cases it is optional). Refer to the Istio documentation for more information on waypoint proxies. First, install the Gateway CRD in the host cluster:

    kubectl --context="your-host-context" get crd gateways.gateway.networking.k8s.io &> /dev/null || \
    kubectl --context="your-host-context" apply -f "https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml"

    This is the Gateway resource you need for the waypoint proxy:

    waypoint-gateway.yaml
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: waypoint
      labels:
        istio.io/waypoint-for: service
    spec:
      gatewayClassName: istio-waypoint
      listeners:
      - name: mesh
        port: 15008
        protocol: HBONE

    Create it in the host cluster:

    kubectl --context="your-host-context" create -f waypoint-gateway.yaml --namespace="vcluster"
  4. Create a virtual namespace with ambient mode enabled

  5. First, create a namespace for the test workloads:

    kubectl --context="vcluster-ctx" create namespace istio

    and label it with istio.io/dataplane-mode: ambient:

    kubectl --context="vcluster-ctx" label namespace istio istio.io/dataplane-mode=ambient
  6. Create two versions of your app

  7. Next, create three Deployments: two running an NGINX server and a third used to curl the other two.

    Create NGINX deployments that respond with different response bodies based on the contents of their respective ConfigMaps:

    configmap1.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-configmap-v1
      namespace: istio
    data:
      index.html: |
        <!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx v1!</title>
        </head>
        <body>
        <h1>Hello from Nginx Version 1!</h1>
        </body>
        </html>

    deployment1.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment-v1
      namespace: istio
      labels:
        app: nginx
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
          version: v1
      template:
        metadata:
          labels:
            app: nginx
            version: v1
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
            - name: nginx-index-v1
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
          volumes:
          - name: nginx-index-v1
            configMap:
              name: nginx-configmap-v1
    kubectl --context="vcluster-ctx" create -f configmap1.yaml --namespace istio
    kubectl --context="vcluster-ctx" create -f deployment1.yaml --namespace istio

    Make sure that this NGINX app is up and running:

    kubectl --context="vcluster-ctx" wait --for=condition=ready pod -l app=nginx --namespace istio --timeout=300s

    Create an additional NGINX deployment configured to serve a different response body, using a separate ConfigMap:

    configmap2.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-configmap-v2
      namespace: istio
    data:
      index.html: |
        <!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx v2!</title>
        </head>
        <body>
        <h1>Hello from Nginx Version 2!</h1>
        </body>
        </html>

    deployment2.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment-v2
      namespace: istio
      labels:
        app: nginx
        version: v2
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
          version: v2
      template:
        metadata:
          labels:
            app: nginx
            version: v2
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
            - name: nginx-index-v2
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
          volumes:
          - name: nginx-index-v2
            configMap:
              name: nginx-configmap-v2
    kubectl --context="vcluster-ctx" create -f configmap2.yaml --namespace istio
    kubectl --context="vcluster-ctx" create -f deployment2.yaml --namespace istio

    To ensure your NGINX application is up and running in your Kubernetes cluster, use the following command:

    kubectl --context="vcluster-ctx" wait --for=condition=ready pod -l app=nginx --namespace istio --timeout=300s

    Create a Service that targets Pods from both Deployments by using a shared label:

    service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
      namespace: istio
      labels:
        app: nginx
        istio.io/use-waypoint: "waypoint"
    spec:
      ports:
      - port: 80
        targetPort: 80
      selector:
        app: nginx

    The istio.io/use-waypoint: waypoint label directs Istio to route traffic for the labeled resource through the waypoint proxy within the same namespace. This configuration enables Layer 7 (L7) policy enforcement and observability features provided by the waypoint proxy. Applying this label to a namespace ensures that all Pods and Services within that namespace use the specified waypoint proxy.
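If you prefer the namespace-wide behavior described above over labeling each Service, the label can be applied to the virtual namespace instead. A sketch (the namespace name istio matches this tutorial):

```yaml
# Alternative (sketch): label the whole namespace so every Pod and Service
# in it is routed through the "waypoint" proxy, instead of labeling
# individual Services as done in service.yaml.
apiVersion: v1
kind: Namespace
metadata:
  name: istio
  labels:
    istio.io/use-waypoint: "waypoint"
```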

    To deploy the Service defined in the service.yaml file within the istio namespace of the virtual cluster specified by the VCLUSTER_CONTEXT context, use the following command:

    kubectl --context="vcluster-ctx" create -f service.yaml --namespace istio

    To test connectivity between the two NGINX deployments, deploy a temporary Pod equipped with curl:

    client_deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: client
      namespace: istio
      labels:
        app: client
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: client
      template:
        metadata:
          labels:
            app: client
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80

    kubectl --context="vcluster-ctx" create -f client_deployment.yaml --namespace istio
  8. Configure your desired traffic routing using DestinationRule and VirtualService

  9. You can create DestinationRules and VirtualServices in the virtual cluster.

    Create a pair that routes requests based on the request path:

    1. Requests to the /v2 endpoint are routed to Pods with the version=v2 label.
    2. All other requests are routed to version=v1 Pods.

    Save these DestinationRule and VirtualService definitions, and apply them in the virtual cluster:

    destination_rule.yaml
    apiVersion: networking.istio.io/v1
    kind: DestinationRule
    metadata:
      name: nginx-destination
      namespace: istio
    spec:
      host: nginx-service.istio.svc.cluster.local # vCluster translates it to the host service automatically
      subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2

    virtual_service.yaml
    apiVersion: networking.istio.io/v1
    kind: VirtualService
    metadata:
      name: nginx-service
      namespace: istio
    spec:
      hosts:
      - nginx-service.istio.svc.cluster.local # vCluster translates it to the host service automatically
      http:
      - name: "nginx-v2"
        match:
        - uri:
            prefix: "/v2"
        rewrite:
          uri: "/"
        route:
        - destination:
            host: nginx-service.istio.svc.cluster.local # vCluster translates it to the host service automatically
            subset: v2
      - name: "nginx-v1"
        route:
        - destination:
            host: nginx-service.istio.svc.cluster.local # vCluster translates it to the host service automatically
            subset: v1

    To apply the DestinationRule and VirtualService configurations to the virtual cluster specified by the VCLUSTER_CONTEXT context, use the following commands:

    kubectl --context="vcluster-ctx" create -f destination_rule.yaml
    kubectl --context="vcluster-ctx" create -f virtual_service.yaml
  10. Verify that the DestinationRule and VirtualService are synced to the host cluster

  11. kubectl --context="your-host-context" get destinationrules --namespace "vcluster"
    kubectl --context="your-host-context" get virtualservices --namespace "vcluster"

    You should see a DestinationRule named nginx-destination-x-<ISTIO_NAMESPACE>-x-vcluster and VirtualService named nginx-service-x-<ISTIO_NAMESPACE>-x-vcluster.

  12. Test traffic routing

  13. Execute a curl command from within the client Pod to verify responses from the two NGINX deployments. Depending on the request path, you should receive either "Hello from Nginx Version 1!" or "Hello from Nginx Version 2!" in the response:

    kubectl --context="vcluster-ctx" exec -it -n istio deploy/client -- curl nginx-service/v2
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx v2!</title>
    </head>
    <body>
    <h1>Hello from Nginx Version 2!</h1>
    </body>
    </html>
    kubectl --context="vcluster-ctx" exec -it -n istio deploy/client -- curl nginx-service
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx v1!</title>
    </head>
    <body>
    <h1>Hello from Nginx Version 1!</h1>
    </body>
    </html>

    Seeing this output means that the request was intercepted by Istio and routed as specified in the DestinationRule and VirtualService.

Summary

Istio integration enables you to re-use one Istio instance from the host cluster for multiple virtual clusters. Virtual cluster users can define their own Gateway, DestinationRule, and VirtualService resources without interfering with each other.

Fields translated during the sync to host

The following fields of Gateway are modified by vCluster during the sync to the host:

  • The reference to the TLS Secret in spec.servers[*].tls.credentialName is re-written. The Secret is automatically synced to the host cluster.
  • The namespace prefix (a namespace name, ., or *, followed by /) is stripped from spec.servers[*].hosts[*], so e.g. foo-namespace/loft.sh becomes loft.sh in the host object.
  • Additional labels vcluster.loft.sh/managed-by: [YOUR VIRTUAL CLUSTER NAME] and vcluster.loft.sh/namespace: [VIRTUAL NAMESPACE] are automatically added.

For additional information on how Secret and Service references are translated, read How does syncing work?
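As an illustration of the Gateway rewrites described above (hypothetical names; only the fields mentioned in the bullets plus a minimal port section are shown):

```yaml
# Virtual cluster Gateway (fragment, hypothetical names): during the sync,
# "foo-namespace/loft.sh" becomes "loft.sh" in the host object, and
# credentialName is rewritten to the translated name of the Secret that
# vCluster syncs into the host cluster.
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - foo-namespace/loft.sh
    tls:
      mode: SIMPLE
      credentialName: my-tls-secret
```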

The following fields of DestinationRule are modified by vCluster during the sync to the host:

  • The reference to the virtual Kubernetes Service is re-written for spec.host.
  • The references to TLS Secrets in spec.trafficPolicy.portLevelSettings[*].tls.credentialName and spec.trafficPolicy.tls.credentialName are re-written. Secrets are automatically synced to the host cluster.
  • Additional labels vcluster.loft.sh/managed-by: [YOUR VIRTUAL CLUSTER NAME] and vcluster.loft.sh/namespace: [VIRTUAL NAMESPACE] are automatically added to spec.subsets[*].labels.

The following fields of VirtualService are modified by vCluster during the sync to the host:

  • The reference to the virtual Kubernetes Service is re-written for:
    • spec.hosts[*]
    • spec.http[*].route[*].destination.host
    • spec.http[*].mirrors[*].destination.host
    • spec.tcp[*].route[*].destination.host
    • spec.tls[*].route[*].destination.host
  • The reference to the networking.istio.io/v1 kind: Gateway is re-written for:
    • spec.gateways[*]
    • spec.http[*].match[*].gateways[*]
    • spec.tls[*].match[*].gateways[*]
    • spec.tcp[*].match[*].gateways[*]
  • The reference to the networking.istio.io/v1 kind: VirtualService is re-written for:
    • spec.http[*].delegate

Fields not supported in VirtualService:

  • spec.exportTo
  • spec.http[*].match[*].sourceLabels
  • spec.http[*].match[*].sourceNamespace
  • spec.tcp[*].match[*].sourceLabels
  • spec.tcp[*].match[*].sourceNamespace
  • spec.tls[*].match[*].sourceLabels
  • spec.tls[*].match[*].sourceNamespace

Config reference

istio (object, required)

  Istio syncs DestinationRules, Gateways and VirtualServices from the virtual cluster to the host.

  enabled (boolean, required, default: false)

    Enabled defines if this option should be enabled.

  sync (object, required)

    toHost (object, required)

      destinationRules (object, required)

        enabled (boolean, required, default: true)

          Enabled defines if this option should be enabled.

      gateways (object, required)

        enabled (boolean, required, default: true)

          Enabled defines if this option should be enabled.

      virtualServices (object, required)

        enabled (boolean, required, default: true)

          Enabled defines if this option should be enabled.