Deploy Using ArgoCD
If you're using GitOps practices for deploying applications in your Kubernetes clusters, you'll likely want to apply the same approach to the platform itself. This guide walks through deploying the platform using GitOps practices, specifically with ArgoCD. The platform behaves like other Helm-packaged applications, so standard GitOps practices apply, and although ArgoCD is used as the example, the principles carry over to other GitOps tools.
Prerequisites

- Administrator access to a Kubernetes cluster: See Accessing Clusters with kubectl for more information. Your current kube-context must have administrative privileges, which you can verify with `kubectl auth can-i create clusterrole -A` (see the quick check after this list).

  Info: To obtain a kube-context with admin access, ensure you have the necessary credentials and permissions for your Kubernetes cluster. This typically involves using `kubectl config` commands or authenticating through your cloud provider's CLI tools.

- `helm` installed: Helm v3.10 is required for deploying the platform. Refer to the Helm Installation Guide if you need to install it.
- `kubectl` installed: the Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.
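As a quick sanity check, you can confirm which kube-context is active and that it has admin rights. This is a minimal sketch; `my-admin-context` is a placeholder for your cluster's actual context name:

```bash
# List available contexts; the active one is marked with *
kubectl config get-contexts

# Switch to a context with admin access (replace the placeholder name)
kubectl config use-context my-admin-context

# Prints "yes" if the context can create cluster-scoped RBAC objects
kubectl auth can-i create clusterrole -A
```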
ArgoCD
ArgoCD needs to be installed and configured on the host cluster. Follow the Argo CD Installation Guide to install it.
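For reference, a minimal installation using the upstream manifests looks like the following; see the Argo CD documentation for production-grade installation options:

```bash
# Install Argo CD into its own namespace using the upstream manifests
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```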
Deployment
Basic deployment
Before beginning, take a few minutes to review the installation with Helm section of the documentation. Helm is particularly well-suited for GitOps as it provides a standardized way to package and version Kubernetes applications.
The most basic GitOps platform deployment consists of an ArgoCD Application with the platform chart and your desired values.
Create ArgoCD application
Execute the following command to create a simple ArgoCD Application.
```bash
PLATFORM_VERSION=4.0.0

cat <<EOF > gitops-application.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vcluster-platform
  namespace: argocd
spec:
  destination:
    name: ""
    namespace: vcluster-platform
    server: "https://kubernetes.default.svc"
  source:
    path: ""
    repoURL: "https://charts.loft.sh"
    targetRevision: $PLATFORM_VERSION
    chart: vcluster-platform
    helm:
      parameters:
        # admin
        - name: admin.create
          value: "true"
        - name: admin.username
          value: admin
        - name: admin.password
          value: password
        # ingress
        - name: ingress.enabled
          value: "true"
        - name: ingress.host
          value: "vcluster-platform.example"
        - name: ingress.name
          value: "vcluster-platform-ingress"
        # audit
        - name: audit.enableSideCar
          value: "true"
        - name: config.audit.level
          value: "1"
        # config
        - name: config.loftHost
          value: "https://vcluster-platform.example"
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF
```
Apply ArgoCD application

To retrieve all available versions of the platform Helm chart (useful for choosing a `PLATFORM_VERSION`), run the following command:

```bash
helm search repo loft-sh/vcluster-platform --versions
```
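If the search returns no results, the chart repository may not be configured locally yet. A minimal setup, assuming `loft-sh` as the local repository alias:

```bash
helm repo add loft-sh https://charts.loft.sh
helm repo update
```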
Apply the ArgoCD Application to your cluster using the `kubectl apply` command:

```bash
kubectl apply -f gitops-application.yaml
```

Alternatively, follow the regular GitOps process and push the gitops-application.yaml file to your Git repository.
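Once applied, you can watch ArgoCD reconcile the Application. A quick check, assuming the defaults from the example above (an Application named `vcluster-platform` in the `argocd` namespace):

```bash
# Check sync and health status of the Application
kubectl get application vcluster-platform -n argocd

# Confirm the platform pods come up in the target namespace
kubectl get pods -n vcluster-platform
```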
The ArgoCD Application deploys the platform into the current-context configured for your Kubernetes cluster, in the vcluster-platform namespace. In this example, values are passed to set some basic configuration, including:

- Administrator user/password
- Ingress configuration
- Audit configuration

The platform Deployment automatically installs the platform agent in the local cluster. To manage the agent deployment manually, refer to the agent section.
If you are using ArgoCD to manage your platform instance, you won't be able to update the platform configuration through the platform UI. This shouldn't be a problem if you're following a GitOps approach, as manual updates are typically avoided. However, it's important to note that ArgoCD does not deploy the Helm release secret.
Configuration
One of the core tenets of the platform is that all platform resources are just "normal" Kubernetes resources. This means that you can manage any platform objects in the same way you'd manage any other Kubernetes resources in your GitOps workflow.
To generate the appropriate manifests to manage each platform resource type, check out the API documentation where you can find example manifests and argument references for nearly all platform resource types. Alternatively, you can use the platform UI to build out your required objects and simply snag the YAML output from the build pane and use that as your manifest content.
Here is an example of creating a `Team`, and a `Project` that the `Team` is a member of. You could manage these resources in your GitOps platform, then allow project users to create resources in the platform as they wish. This puts the core pieces under GitOps, freeing teams to control their own environments in the platform manually if they wish.

The following manifests could be added to a custom Helm chart, or used as manifests in a repo connected to ArgoCD. This is a contrived example, of course, but the main point is that all platform resources are just "normal" Kubernetes (custom) resources that can be managed with your GitOps tooling, or any other Kubernetes-centric tooling.
```yaml
---
apiVersion: management.loft.sh/v1
kind: Team
metadata:
  name: acme-team
spec:
  displayName: acme-team
  owner:
    user: admin
  access:
    - verbs:
        - "*"
      subresources:
        - "*"
      users:
        - admin
    - name: vcluster-platform-access
      verbs:
        - get
        - bind
      subresources:
        - clusters
      teams:
        - acme-team
---
kind: Project
apiVersion: management.loft.sh/v1
metadata:
  name: acme-team-project
spec:
  displayName: acme-team-project
  owner:
    user: admin
  quotas: {}
  allowedClusters:
    - name: "*"
  allowedTemplates:
    - kind: VirtualClusterTemplate
      group: storage.loft.sh
      name: "*"
    - kind: SpaceTemplate
      group: storage.loft.sh
      name: "*"
  members:
    - kind: Team
      group: storage.loft.sh
      name: acme-team
      clusterRole: loft-management-project-admin
  access:
    - name: vcluster-platform-admin-access
      verbs:
        - get
        - update
        - patch
        - delete
      subresources:
        - "*"
      users:
        - admin
      teams:
        - acme-team
```
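If you keep these manifests outside of a chart, you can apply and inspect them directly. The file name below is illustrative, and the plural resource names assume the usual Kubernetes naming conventions:

```bash
# Apply the Team and Project manifests (file name is a placeholder)
kubectl apply -f team-and-project.yaml

# List the resulting platform resources
kubectl get teams.management.loft.sh
kubectl get projects.management.loft.sh
```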
Connected clusters
One of the benefits of the platform is that you can easily manage resources located in many physical clusters by adding them to the platform and using it as your central point of management.
When you add a "connected" cluster to the platform, a `Cluster` resource is created and a platform Agent is installed in the cluster to handle local reconciliation tasks.
If you are managing the platform via GitOps, you may also wish to manage these connected clusters in a similar fashion, rather than letting the platform install and manage the Agent.
Managing connected clusters via GitOps offers several advantages:
- Consistency: Ensures all clusters are configured identically.
- Version Control: Keeps track of changes to cluster configurations over time.
- Automation: Reduces manual intervention and potential for human error.
- Auditability: Provides a clear record of who made changes and when.
Cluster resources
If you would like to manage the platform and its agents via your GitOps tooling, you likely also want to manage the connected cluster configurations that live inside the platform.
These configuration elements inform the platform of:
- The remotely connected clusters
- How to connect to those clusters to validate the agent installation
- How to proxy Kubernetes commands from the central platform instance to the remote clusters
Each cluster requires two resources:

- a `Cluster` object that simply defines the cluster name and the owner of the cluster inside the platform
- an associated `Secret` that contains relevant configuration information for the platform to connect to the cluster
A common strategy for bundling cluster data with your primary platform Application (as in an ArgoCD `Application` object) is to write a simple "parent" Helm chart that includes the platform as a dependency. This parent chart can contain anything you'd like, but in this case would be used to include the `Cluster` objects and their associated `Secret` objects.
The secret associated with a Cluster object necessarily includes authentication data to access the remote cluster, so take care to ensure that this information is handled appropriately.
A simple Chart.yaml that includes the base platform chart as a dependency may look similar to this, but you'll need to update `$PLATFORM_VERSION` with a valid platform version.
```yaml
apiVersion: v2
name: vcluster-platform-manager
description: A parent Helm chart for vCluster Platform
type: application
version: 1.0.0
dependencies:
  - name: vcluster-platform
    version: $PLATFORM_VERSION
    repository: https://charts.loft.sh
```
Values that need to be passed to the dependent platform chart can be passed by referring to the dependency name, in this case, `vcluster-platform`. For example, if you wanted to set the `replicaCount` value on the platform chart, you could do as follows in a values.yaml file:
```yaml
vcluster-platform:
  replicaCount: 3
```
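Before templating or deploying the parent chart, pull the declared dependency into its charts/ directory. The chart path is a placeholder for wherever your parent chart lives:

```bash
# Fetch the vcluster-platform chart declared in Chart.yaml
helm dependency update ./vcluster-platform-manager
```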
Remember, if you want to manage the platform agent via your GitOps workflow, make sure you set the `DISABLE_AGENT` environment variable to `true` for your platform deployment.
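With the parent-chart layout above, one way to set this is through the chart's env values, mirroring the `env.SOMEVAR` parameter style used in the agent example later in this guide. The exact values key is an assumption, so verify it against the chart's values reference:

```yaml
vcluster-platform:
  env:
    # Assumed values key; tells the platform not to install its own agent
    DISABLE_AGENT: "true"
```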
The "parent" Helm chart can now include any additional resources that you may want to deploy with your platform instance. In this case, the chart should include both the Cluster
and Secret
resources for any connected clusters. You can accomplish this by having a simple template that iterates over an array of clusters that users can provide via values, something like the following:
```yaml
{{ range .Values.clusters }}
---
apiVersion: management.loft.sh/v1
kind: Cluster
metadata:
  name: {{ .name }}
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  access:
    - subresources:
        - '*'
      users:
        - admin
      verbs:
        - '*'
  config:
    secretName: loft-cluster-config-{{ .name }}
    secretNamespace: vcluster-platform
  displayName: {{ .name }}
  owner:
    user: admin
{{ end }}

{{ range .Values.clusters }}
---
apiVersion: v1
data:
  config: {{ .config | b64enc }}
kind: Secret
metadata:
  name: loft-cluster-config-{{ .name }}
  namespace: vcluster-platform
type: Opaque
{{ end }}
```
With the preceding template, users would provide an array of maps containing a `name` and a `config` field. For example:
```yaml
clusters:
  - name: my-connected-cluster
    config: |
      apiVersion: v1
      clusters:
        - cluster:
            certificate-authority-data: YOUR-CA-DATA-HERE
            server: https://1.2.3.4:6443
          name: my-connected-cluster
      contexts:
        - context:
            cluster: my-connected-cluster
            namespace: kube-system
            user: my-connected-cluster-user
          name: my-connected-cluster-context
      current-context: my-connected-cluster-context
      kind: Config
      preferences: {}
      users:
        - name: my-connected-cluster-user
          user:
            token: YOUR-TOKEN-HERE
```
You may have noticed a strange annotation on the `Cluster` resource: `argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true`. This annotation tells ArgoCD to skip the dry run/check of the resource. You need this for initial deployments because the platform itself deploys the `Cluster` CRD into the cluster; without this annotation, the Application does not sync.
Agents
If you've turned off agent installation on your platform, ensure you manually install the platform agent in each connected cluster. Without this, the platform cannot operate within the connected cluster.
The most obvious difference in managing the agents compared to the platform manager is that the agents require installation in the connected "remote" clusters, not local to the platform manager. After adding the relevant clusters as available clusters in your ArgoCD configuration, you can simply create another ArgoCD Application to manage the agent.

Here is a basic example. Remember to replace the `$PLATFORM_VERSION` variable with a valid platform version.
```yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vcluster-platform-agent-my-other-cluster
  namespace: argocd
spec:
  destination:
    name: ""
    namespace: vcluster-platform
    server: "https://my-other-cluster:6443"
  source:
    path: ""
    repoURL: "https://charts.loft.sh"
    targetRevision: $PLATFORM_VERSION
    chart: vcluster-platform
    helm:
      parameters:
        # required parameter
        - name: agentOnly
          value: "true"
        # custom parameters
        - name: env.SOMEVAR
          value: my-value
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
Once again, when managing the agent deployments via ArgoCD or your GitOps tooling of choice, ensure that the `DISABLE_AGENT` environment variable is set to `true` for your platform deployment.
Login

If the `loftHost` is not configured in the platform settings, a random domain is automatically provisioned. You can retrieve this domain from the `loft-router-domain` secret located in the installation namespace. This domain is essential for accessing the platform interface and configuring other services.

You can easily configure your own custom domain.
Follow these steps to retrieve the domain name from the secret:

To retrieve the domain from the secret on Linux or WSL, run:

```bash
# Set the namespace where the platform is installed
PLATFORM_NAMESPACE=vcluster-platform

echo "https://$(kubectl get secret loft-router-domain \
  -n "$PLATFORM_NAMESPACE" \
  -o jsonpath="{.data.domain}" \
  | base64 --decode)"
```

To retrieve the domain from the secret on macOS, run:

```bash
# Set the namespace where the platform is installed
PLATFORM_NAMESPACE=vcluster-platform

echo "https://$(kubectl get secret loft-router-domain \
  -n "$PLATFORM_NAMESPACE" \
  -o jsonpath="{.data.domain}" \
  | base64 -D)"
```
If you encounter an error or the secret is not found, ensure that the platform is properly installed and that you have the necessary permissions to access secrets in the specified namespace.
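A couple of quick checks if the command fails, reusing the same namespace variable as above:

```bash
# Confirm the secret exists in the installation namespace
kubectl get secret loft-router-domain -n "$PLATFORM_NAMESPACE"

# Confirm your current context is allowed to read secrets there
kubectl auth can-i get secrets -n "$PLATFORM_NAMESPACE"
```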
Default credentials

If the username and password are not set in your values.yaml, the default credentials are:

- Username: `admin`
- Password: `my-password`
It is strongly recommended to change these default credentials for security reasons. You can reset the administrator password.
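To avoid shipping the defaults at all, set your own credentials at deploy time. A sketch using the parent-chart values layout from earlier, with the same `admin.*` keys shown in the ArgoCD Application example (the password is a placeholder; keep real secrets out of Git):

```yaml
vcluster-platform:
  admin:
    create: true
    username: admin
    password: use-a-strong-password-here  # placeholder; manage real secrets securely
```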
Next steps
Create virtual clusters
After logging into the UI, you'll be able to start creating virtual clusters immediately. You're automatically part of a project called Default Project.
Click on "New Virtual Cluster" and "Create" to spin one up to try out.
Find more information about creating virtual clusters in the create virtual clusters section.
Otherwise, read more about some primary concepts:
- Projects - How resources can be grouped together into different projects
- Virtual Clusters - How to create and manage virtual clusters
- Templates - How to use templates to control what types of resources can be created
- Host Clusters - How to add more host clusters to the platform