Kubernetes & OpenShift

Out-of-the-box extraction of runtime information

The Kubernetes integration provides a simple mechanism to understand which version of your service is running on which cluster. We show how the software packages built in your CI/CD pipeline are deployed on your K8s clusters, along with other key details of K8s objects such as the cluster configuration, jobs, stateful sets, persistent volume claims, and more.

For existing users, refer to the release notes and migration docs to upgrade from version 4.0.0 to 5.0.0.

📘

Enable the integration on your workspace

Please contact your Customer Success Manager (CSM) to enable this integration on your workspace.

To use the integration, simply deploy a lightweight agent (discussed below) to your cluster; it scans your K8s objects and sends the information directly to MI.

Synchronized data

The integration runs on a schedule. While setting up the integration, you can specify a schedule that fits your requirements; we advise running a scan every 6 hours. The table below gives a high-level overview of the data objects that we scan in your K8s clusters and how they translate into LeanIX Fact Sheets.
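For reference, an every-6-hours interval expressed as a standard cron schedule looks like this (how the schedule is actually supplied depends on your deployment; the Helm chart's values.yaml is the place to check for the exact key):

```shell
# Cron expression for "at minute 0 of every 6th hour" (00:00, 06:00, 12:00, 18:00).
SCAN_SCHEDULE="0 */6 * * *"
echo "$SCAN_SCHEDULE"
```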

Data Object in K8s → LeanIX MI

  • Deployment with label → Deployment Fact Sheet of sub-type K8s Deployment
  • StatefulSet → Deployment Fact Sheet of sub-type K8s Stateful Set
  • Job → Deployment Fact Sheet of sub-type K8s Job
  • Label app.kubernetes.io/version → the version of the deployment. We use this best-practice label as an identifying factor to understand which clusters an image of the same service in the same version has been deployed to.
  • K8s Label or Namespace → Software Artifact Fact Sheet. We use either a dedicated K8s label or your namespaces to discover microservices; you can choose during the implementation.
  • CRD → Technology Fact Sheet of sub-type Custom Resource Definition
  • Persistent Volume → Technology Fact Sheet of sub-type Persistent Volume Claim
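To make the inputs above concrete, here is a minimal, purely illustrative Deployment manifest; the service name, namespace, image, and version are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service          # hypothetical service name
  namespace: payments            # the namespace can drive Software Artifact discovery
  labels:
    app.kubernetes.io/version: "1.4.2"   # read by the connector to determine the version
spec:
  replicas: 1
  selector:
    matchLabels:
      app: payment-service
  template:
    metadata:
      labels:
        app: payment-service
        app.kubernetes.io/version: "1.4.2"
    spec:
      containers:
        - name: payment-service
          image: registry.example.com/payment-service:1.4.2
```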

📘

Labels can become Tags in MI

We not only show all the labels you set in K8s on your Deployment Fact Sheet, but we can also transform specific labels (identified by their key) into tags in MI. You can use these tags to filter your data or display specific views in our reporting.

Setup

The Kubernetes integration works with the Integration Hub in "self-start" mode: the agent scans the K8s objects, processes them into an LDIF, and automatically triggers the inbound Integration API processors to update the information in VSM.

Setting up Integration Hub data source

Create a new data source of the connector type "vsm-k8s-connector" on the workspace under Integration Hub in the Admin area. Populate the connector configuration parameters with the right details. Details about ongoing and previous data source runs can be found under "Sync logging".

🚧

"self-start" mode in Integration Hub

The Kubernetes connector starts in "self-start" mode. The Integration Hub data source should not be started or scheduled manually; the actual scheduling is taken care of by the K8s agent deployed on your K8s cluster.

Data source connector configuration

Name             Format      Value
resolveStrategy  plain text  "label" or "namespace"
resolveLabel     plain text  "{NAME_OF_LABEL}" (required only when resolveStrategy is "label")

Examples:

Using the label strategy:

{
    "connectorConfiguration": {
        "resolveLabel": "kubernetes.io/name",
        "resolveStrategy": "label"
    },
    "secretsConfiguration": {},
    "bindingKey": {
        "connectorType": "leanix-vsm-connector",
        "connectorId": "leanix-k8s-connector",
        "connectorVersion": "1.0.0",
        "processingDirection": "inbound",
        "processingMode": "full",
        "lxVersion": "1.0.0"
    }
}

Using the namespace strategy:

{
    "connectorConfiguration": {
        "resolveStrategy": "namespace"
    },
    "secretsConfiguration": {},
    "bindingKey": {
        "connectorType": "leanix-vsm-connector",
        "connectorId": "leanix-k8s-connector",
        "connectorVersion": "1.0.0",
        "processingDirection": "inbound",
        "processingMode": "full",
        "lxVersion": "1.0.0"
    }
}

Setting up k8s connector on a cluster

To set up the K8s connector, deploy the open-source connector as a Helm chart to every cluster you wish to scan, making sure to use the latest version. The LeanIX Kubernetes Connector is deployed via a Helm chart into the Kubernetes cluster as a CronJob.

Installing k8s connector

Before you can install the Kubernetes Connector via the provided Helm chart, you must first add the Helm chart repository.

helm repo add leanix 'https://raw.githubusercontent.com/leanix/leanix-k8s-connector/master/helm/'
helm repo update
helm repo list

❗️

Connector Version

If you are installing a specific older version of the Kubernetes Connector, you have to use a different URL specifying the version number.

helm repo add leanix 'https://raw.githubusercontent.com/leanix/leanix-k8s-connector/5.0.0/helm/'

The output of the helm repo list command should look like this.

NAME                  URL
stable                https://kubernetes-charts.storage.googleapis.com
local                 http://127.0.0.1:8879/charts
leanix                https://raw.githubusercontent.com/leanix/leanix-k8s-connector/master/helm/

📘

LeanIX API Token

See the LeanIX Technical Documentation on how to obtain one.

Create a Kubernetes secret with the LeanIX API token.

kubectl create secret generic api-token --from-literal=token={LEANIX_API_TOKEN}
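Kubernetes stores secret values base64-encoded. If you want to sanity-check the encoding that kubectl applies, you can reproduce it locally; the token below is a placeholder, not a real API token:

```shell
# Placeholder token -- never hard-code or commit your real LeanIX API token.
LEANIX_API_TOKEN="example-token"

# Kubernetes stores secret data base64-encoded; printf avoids encoding a trailing newline.
ENCODED=$(printf '%s' "$LEANIX_API_TOKEN" | base64)

# Decoding recovers the original token.
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```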

Deploying k8s connector using Helm chart

We use the Helm chart to deploy the LeanIX Kubernetes Connector to the Kubernetes cluster. The following command deploys the connector and overwrites the parameters in the values.yaml file.

Parameter: integrationApi.fqdn
Provided value: app.leanix.net
Notes: The FQDN or host of your LeanIX MI

Parameter: integrationApi.secretName
Provided value: api-token
Notes: The name of the Kubernetes secret containing the LeanIX API token

Parameter: integrationApi.datasourceName
Provided value: aks-cluster-k8s-connector
Notes: The name of the Integration Hub data source configured on the workspace

Parameter: clustername
Default value: kubernetes
Provided value: aks-cluster
Notes: The name of the Kubernetes cluster

Parameter: lxWorkspace
Provided value: 00000000-0000-0000-0000-000000000000
Notes: The UUID of your LeanIX MI workspace

Parameter: verbose
Default value: false
Provided value: true
Notes: Enables verbose logging on the stdout interface of the container

Parameter: enableCustomStorage
Default value: false
Notes: Enables an additional custom storage backend to store kubernetes.ldif on your preferred storage option. The connector creates the kubernetes.ldif file and logs to the leanix-k8s-connector.log file (optional in the latest version)

Parameter: storageBackend
Default value: file
Notes: The storage backend type; defaults to file if not provided. Not required if enableCustomStorage is disabled

Parameter: localFilePath
Default value: /mnt/leanix-k8s-connector
Notes: The path used for mounting the PVC into the container and storing the kubernetes.ldif and leanix-k8s-connector.log files. Not required if enableCustomStorage is disabled

Parameter: claimName
Provided value: pv0002-pvc
Notes: The name of the PVC used to store the kubernetes.ldif and leanix-k8s-connector.log files. Not required if enableCustomStorage is disabled

Parameter: blacklistNamespaces
Default value: kube-system
Provided value: kube-system, default
Notes: Namespaces that are not scanned by the connector. Must be provided in the format "{kube-system,default}" when using the --set option

helm upgrade --install leanix-k8s-connector leanix/leanix-k8s-connector \
--set integrationApi.fqdn=app.leanix.net \
--set integrationApi.secretName=api-token \
--set integrationApi.datasourceName=aks-cluster-k8s-connector \
--set args.clustername=aks-cluster \
--set args.lxWorkspace=00000000-0000-0000-0000-000000000000 \
--set args.verbose=true \
--set args.blacklistNamespaces="{kube-system,default}"

Advanced

Two storage backend types are natively supported:

  • file
  • azureblob

Enable the enableCustomStorage flag to use one of the storage options above. Even if the flag is disabled, the connector works as expected using the Integration Hub; this is only an additional storage option for the kubernetes.ldif file.

"File" storage option

The following section uses file as the storage option. The file storage backend lets you use any storage that can be provided to Kubernetes through a PersistentVolume and a PersistentVolumeClaim.

🚧

When using the file storage backend, you must pre-create the PersistentVolume and the PersistentVolumeClaim. Follow the official Kubernetes docs to set them up.
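A minimal sketch of such a PersistentVolume/PersistentVolumeClaim pair; the hostPath backend, capacity, and names are illustrative assumptions (hostPath is shown for brevity only, and the claim name matches the pv0002-pvc value used elsewhere on this page):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0002
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:                      # illustration only; in practice choose a backend writable without root
    path: /tmp/leanix-k8s-connector
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv0002-pvc               # referenced via args.file.claimName
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```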

❗️

Limitations

The connector runs as a non-root user in the Kubernetes environment. You can only use persistent volumes that do not require root access to write the LDIF contents to a file.
For example, an Azure File Share can be used as a persistent volume, while Amazon EBS cannot.

helm upgrade --install leanix-k8s-connector leanix/leanix-k8s-connector \
--set integrationApi.fqdn=app.leanix.net \
--set integrationApi.secretName=api-token \
--set integrationApi.datasourceName=aks-cluster-k8s-connector \
--set args.clustername=aks-cluster \
--set args.lxWorkspace=00000000-0000-0000-0000-000000000000 \
--set args.verbose=true \
--set args.enableCustomStorage=true \
--set args.file.claimName=pv0002-pvc \
--set args.blacklistNamespaces="{kube-system,default}"

Compatibility

Our integration is independent of the K8s distribution you are using, whether it is self-hosted, Amazon EKS, Azure AKS, or OpenShift.

Downgrade to an older version

The K8s connector can be downgraded to an older version as follows.

Check the current version of the K8s connector helm chart installed on your system:

helm search repo leanix

The output of the helm search repo leanix command should look like this:

NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                  
leanix/leanix-k8s-connector     5.0.0           5.0.0           Retrieves information from Kubernetes cluster

The above output shows that the currently installed version of the K8s connector is 5.0.0.
To downgrade to version 4.0.0, run helm upgrade with the --version flag set to 4.0.0.
For example:

helm upgrade --version 4.0.0 --install leanix-k8s-connector leanix/leanix-k8s-connector \
--set integrationApi.fqdn=app.leanix.net \
--set integrationApi.secretName=api-token \
--set integrationApi.datasourceName=aks-cluster-k8s-connector \
--set args.clustername=aks-cluster \
--set args.lxWorkspace=00000000-0000-0000-0000-000000000000 \
--set args.verbose=true \
--set args.blacklistNamespaces="{kube-system,default}"

Customisations

The integration also supports Integration API execution groups via the Integration Hub, enabling users to add custom processors. To process the data correctly, you need to add a custom processor set.

Sample data source configuration with execution group

The unique execution group name for the integration is vsmK8sInbound.

The Integration API will pick up your processors and merge them with the base processors at execution time. Make sure to set the Integration API run number accordingly.

For more information on the execution groups visit: https://docs.leanix.net/docs/integration-api#section-grouped-execution-of-multiple-integration-api-configurations
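As a rough sketch, a custom Integration API configuration joining the vsmK8sInbound execution group could look like the following. The processor shown is purely hypothetical (its name, filter, and update expressions are assumptions); take the exact processor schema from the Integration API documentation linked above:

```json
{
  "connectorType": "leanix-vsm-connector",
  "connectorId": "my-custom-k8s-processors",
  "connectorVersion": "1.0.0",
  "lxVersion": "1.0.0",
  "processingDirection": "inbound",
  "processingMode": "full",
  "executionGroups": ["vsmK8sInbound"],
  "processors": [
    {
      "processorType": "inboundFactSheet",
      "processorName": "customDeploymentProcessor",
      "processorDescription": "Hypothetical example: maps incoming records to Deployment Fact Sheets",
      "type": "Deployment",
      "filter": { "exactType": "Deployment" },
      "identifier": { "external": { "id": { "expr": "${content.id}" }, "type": { "expr": "externalId" } } },
      "updates": [
        { "key": { "expr": "name" }, "values": [ { "expr": "${content.id}" } ] }
      ]
    }
  ]
}
```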

