Kubernetes & OpenShift
Out-of-the-box extraction of runtime information
The Kubernetes integration provides a simple mechanism to understand which versions of your services are running on which clusters. It shows how the software packages built in your CI/CD pipeline are deployed on your K8s clusters. The integration also gives you other key details of K8s objects, such as the cluster configuration, jobs, stateful sets, persistent volume claims, and more.
If you are an existing user, refer to the release notes and migration docs to upgrade from an older version to the latest version.
To use the integration, just deploy a simple agent (discussed below) to your cluster that scans your K8s objects and sends the information directly to LeanIX VSM.
Synchronized data
The integration runs on a schedule. While setting up the integration, you can specify a schedule that works for your requirements. We advise running a scan once a day, or at most every six hours. The table below gives a high-level overview of the data objects that we scan in your K8s clusters and how they translate into LeanIX Fact Sheets.
| Data Object in K8s | LeanIX VSM |
| --- | --- |
| Deployment | Deployment Fact Sheet of sub-type K8s Deployment |
| StatefulSet | Deployment Fact Sheet of sub-type K8s Stateful Set |
| CronJob | Deployment Fact Sheet of sub-type K8s Cron Job |
| Label: `app.kubernetes.io/version` | We use this best-practice label to determine the version of the deployment. The version is used as an identifying factor to understand which clusters an image of the same service in the same version has been deployed to. |
| K8s Label or Namespace | Software Artifact Fact Sheet. We use either a dedicated K8s label or your namespaces to discover microservices. You can select the strategy during the implementation. |
| Custom Resource Definition (CRD) | Technology Fact Sheet of sub-type K8s CRD |
| Persistent Volume | Technology Fact Sheet of sub-type K8s PV |
| Image | Technology Fact Sheet of sub-type K8s Image |
| Cluster | Compute Environment Fact Sheet of sub-type Cluster |
Labels can become Tags in VSM
We not only show all the labels you set in K8s on your Deployment Fact Sheet, but we can also transform specific labels (identified by their key) into tags in VSM. You can use these tags to filter your data or display specific views in our reporting.
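For illustration, here is a minimal Deployment manifest carrying the recommended version label plus a custom label that could be mapped to a VSM tag. This is a sketch; all names, namespaces, and image references are placeholders, not values prescribed by the integration.

```
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service            # placeholder name
  namespace: shop                 # placeholder namespace
  labels:
    app.kubernetes.io/version: "1.4.2"  # read by VSM to determine the deployment version
    team: checkout                       # example label that could become a tag in VSM
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/shop/orders-service:1.4.2  # placeholder image
EOF
```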
Setup
The Kubernetes integration scans the K8s cluster and creates and updates information in the VSM workspace.
Setting up the Data Source in Integration UI
LeanIX VSM provides this integration to scan your Kubernetes cluster at Administration > Integrations > Kubernetes. Follow the setup wizard to configure the Kubernetes Data Source.

Add data source
1. Manage or create data source
- For each Kubernetes cluster, create a new "Data Source" in the Integration UI in the Admin area.
- Provide a name for your configuration and the cluster name set in your Kubernetes config.
- Based on the "Resolve Strategy", Software Artifact Fact Sheets are created. With `by namespace`, the combination of namespace and service is used; with `by label`, values within the configured label group are used instead.
- Use "Save" to create the data source configuration, and proceed with the further steps.

Configure data source
Setting up Data Source for the first time
When you create your first data source configuration, a change to the data model is required. These changes are applied automatically once you click "Confirm".

Confirm data model changes
2. Setting up the K8s connector on a cluster
- To set up the K8s connector, deploy the open-source connector as a Helm chart to every cluster you wish to scan. Make sure to use the latest version.
- The LeanIX Kubernetes Connector is deployed via a Helm chart into the Kubernetes cluster as a CronJob.
Before you set up the Kubernetes Connector via the provided Helm chart, you must first add the Helm chart repository, as shown below.

Installing the k8s connector
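As a sketch, adding the chart repository typically looks like this. The `master` branch URL is an assumption, inferred from the versioned URL shown in the downgrade section later in this article:

```
# Add the LeanIX Helm chart repository and refresh the local index.
helm repo add leanix 'https://raw.githubusercontent.com/leanix/leanix-k8s-connector/master/helm/'
helm repo update
```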
- The next step is to provide the LeanIX API token. You can either create a new technical user or re-use a previously created token. We recommend using one technical user for all Kubernetes data sources. Please choose an expiry date that is aligned with your company's policy.
- Once the technical user is created, you will find the generated API token right away in the `API token` field.
- Proceed with creating a Kubernetes secret containing the LeanIX API token by executing the command copied from the code snippet, as shown below.

Creating the technical user
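A minimal sketch of the secret creation, assuming the chart reads the token from a secret named `api-token` (matching `integrationApi.secretName` in the commands below) under the key `token`:

```
# Replace the placeholder with the API token generated for the technical user.
kubectl create secret generic api-token --from-literal=token=<LEANIX_API_TOKEN>
```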
3. Provision the connector using the Helm chart
- The connector provisioning command deploys the connector to the Kubernetes cluster.
- You can provide the list of namespaces to be excluded from the connector's scan. Multiple values can be provided as a comma-separated list, e.g. `kube-system,default`.
- We use the Helm chart to deploy the LeanIX Kubernetes Connector to the Kubernetes cluster. Copy the `helm upgrade` command from the code snippet and execute it to create the CronJob, as shown below.

Connector provisioning
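The generated command follows the shape of the example below (identical to the one in the manual-upgrade section later in this article); the workspace ID and data source name are placeholders taken from your own configuration:

```
helm upgrade --install leanix-k8s-connector leanix/leanix-k8s-connector \
--set integrationApi.fqdn=app.leanix.net \
--set integrationApi.secretName=api-token \
--set integrationApi.datasourceName=aks-cluster-k8s-connector \
--set args.lxWorkspace=00000000-0000-0000-0000-000000000000 \
--set args.verbose=true \
--set args.blacklistNamespaces="{kube-system,default}"
```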
Major version updates
To update to a new major version, for example from 6.0.0 to 7.0.0, you have to manually change the command to specify version `7.0.0-latest` instead of `6.0.0-latest`, as shown below.
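For example, pinning the new major version line would look like this (a sketch based on the version-pinning syntax shown later in this article):

```
helm upgrade --version 7.0.0-latest --install leanix-k8s-connector leanix/leanix-k8s-connector
```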
4. Start and test the connector
Trigger the Kubernetes run from the cluster by copying the generated command. Use the connector name (default: `leanix-k8s-connector`) as `name-of-cron-job` and define your own `name-of-job`.

Start k8s run
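The generated command is a standard `kubectl create job --from=cronjob/...` invocation; a sketch with a placeholder job name:

```
# Creates a one-off Job from the connector's CronJob to trigger a run immediately.
# "manual-test-run" stands in for your own name-of-job.
kubectl create job manual-test-run --from=cronjob/leanix-k8s-connector
```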
Advanced
Store logs and result to a custom storage (optional)
By default, the result of the Kubernetes scan and the log files are only stored temporarily and sent to the VSM workspace. Optionally, a custom storage backend can be used to persist them. Two storage backend types are natively supported:
- File
- Azureblob
For both, please enable the `enableCustomStorage` flag.
A. "File" storage option
The first option is a custom file storage, which lets you use any storage that can be provided to Kubernetes through a PersistentVolume and a PersistentVolumeClaim. Please use the `file.claimName` option to provide the name of the persistent volume claim to the Kubernetes connector.
When using the file storage backend, you must pre-create the PersistentVolume and PersistentVolumeClaim. Please follow the official Kubernetes docs to set up the PersistentVolume and PersistentVolumeClaim.
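As a minimal sketch, a PVC matching the `args.file.claimName` value used in the command below could be pre-created like this; the storage class and size are illustrative assumptions, not values required by the connector:

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv0002-pvc                # must match args.file.claimName
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                # illustrative size
  storageClassName: azurefile     # illustrative; note the root-access limitation below
EOF
```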
Limitations
The connector runs as a non-root user in the Kubernetes environment. You can only use persistent volumes that do not require root access to write LDIF contents to a file; e.g., an Azure File Share can be used as a persistent volume, while Amazon EBS cannot.
```
helm upgrade --install leanix-k8s-connector leanix/leanix-k8s-connector \
--set integrationApi.fqdn=app.leanix.net \
--set integrationApi.secretName=api-token \
--set integrationApi.datasourceName=aks-cluster-k8s-connector \
--set args.lxWorkspace=00000000-0000-0000-0000-000000000000 \
--set args.verbose=true \
--set args.enableCustomStorage=true \
--set args.file.claimName=pv0002-pvc
```
B. "Azureblob" storage backend option
For this option, first create an Azure Storage account.
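A sketch using the Azure CLI; the account name, resource group, and location are placeholders:

```
# Create the storage account (names and location are illustrative).
az storage account create \
--name mystorageaccount \
--resource-group my-resource-group \
--location westeurope \
--sku Standard_LRS

# Retrieve the account key needed for the Kubernetes secret in the next step.
az storage account keys list \
--account-name mystorageaccount \
--resource-group my-resource-group \
--query '[0].value' --output tsv
```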
Next, create a Kubernetes secret that contains the Azure Storage account name and the Azure Storage account key. The name and key can be retrieved directly via the Azure portal under "Access keys".
```
kubectl create secret generic azure-secret \
--from-literal=azurestorageaccountname={STORAGE_ACCOUNT_NAME} \
--from-literal=azurestorageaccountkey={STORAGE_KEY}
```
Then, configure the connector as depicted below:
```
helm upgrade --install leanix-k8s-connector leanix/leanix-k8s-connector \
--set integrationApi.fqdn=app.leanix.net \
--set integrationApi.secretName=api-token \
--set integrationApi.datasourceName=aks-cluster-k8s-connector \
--set args.lxWorkspace=00000000-0000-0000-0000-000000000000 \
--set args.verbose=true \
--set args.enableCustomStorage=true \
--set args.storageBackend=azureblob \
--set args.azureblob.secretName=azure-secret \
--set args.azureblob.container=leanixk8sconnector
```
Finally, the Helm chart deploys the LeanIX Kubernetes Connector to the Kubernetes cluster with the Azureblob storage backend enabled.
Compatibility
Our integration is independent of the K8s distribution you are using, whether it is self-hosted, Amazon EKS, Azure AKS, or OpenShift.
Pinning a version
The `helm` command allows you to use the `--version` parameter. This way you can pin any version you want to install, instead of using our `-X.0.0-latest` versions for automatic updates or omitting the version parameter and manually updating to the latest version. You can specify the version like this:
```
helm upgrade --version 6.3.1 --install leanix-k8s-connector leanix/leanix-k8s-connector
```
Downgrade to an older version
The K8s connector can be downgraded to an older version as follows.
Connector Version
If you are installing a specific older version of the Kubernetes Connector, you have to use a different URL that specifies the version number.
```
helm repo add leanix 'https://raw.githubusercontent.com/leanix/leanix-k8s-connector/5.0.0/helm/'
```
Check the current version of the K8s connector Helm chart installed on your system:
```
helm search repo leanix
```
The output of the `helm search repo leanix` command looks like this:
```
NAME                          CHART VERSION  APP VERSION  DESCRIPTION
leanix/leanix-k8s-connector   5.0.0          5.0.0        Retrieves information from Kubernetes cluster
```
The above output shows that the currently installed version of the K8s connector is `5.0.0`.
To downgrade to version `4.0.0`, the `helm upgrade` command should have the `--version` flag set to `4.0.0`. For example:
```
helm upgrade --version 4.0.0 --install leanix-k8s-connector leanix/leanix-k8s-connector \
--set integrationApi.fqdn=app.leanix.net \
--set integrationApi.secretName=api-token \
--set integrationApi.datasourceName=aks-cluster-k8s-connector \
--set args.lxWorkspace=00000000-0000-0000-0000-000000000000 \
--set args.verbose=true \
--set args.blacklistNamespaces="{kube-system,default}"
```
Manually upgrade to the latest version
If you want to update your connector manually, you can always run the `helm` command without specifying the version. That way, you will have to execute the command whenever a new version is released.
```
helm upgrade --install leanix-k8s-connector leanix/leanix-k8s-connector \
--set integrationApi.fqdn=app.leanix.net \
--set integrationApi.secretName=api-token \
--set integrationApi.datasourceName=aks-cluster-k8s-connector \
--set args.lxWorkspace=00000000-0000-0000-0000-000000000000 \
--set args.verbose=true \
--set args.blacklistNamespaces="{kube-system,default}"
```
Customisations
The integration also supports Integration API execution groups via the Integration Hub, enabling users to add custom processors. To process the data correctly, you need to add a custom processor set.

Sample data source configuration with execution group
The unique execution group name for the integration is `vsmK8sInbound`.
The Integration API will pick up your processors and merge them with the base processors at execution time. Make sure to set the Integration API run number accordingly.
For more information on execution groups, visit: https://docs.leanix.net/docs/integration-api#section-grouped-execution-of-multiple-integration-api-configurations