Understanding the general connection to your CI/CD pipeline

Connecting to your CI/CD pipeline allows developers to utilize their existing tooling for automatic documentation. Instead of tediously opening a web UI after a deployment, they can make all their changes in their IDE - and rely on LeanIX Value Stream Management (VSM) to capture those changes, make them transparent, and notify others.

Native integrations

VSM provides native plugins for some of the most popular CI/CD platforms.

All these plugins rely on the endpoints & concepts introduced below, so please read on to understand the functionality in depth.

Besides the native integrations above, there are also tutorials for integrating with GitLab and TravisCI by means of a custom integration leveraging the CI/CD endpoint described below.

CI/CD REST endpoint

Instead of using the native integrations, you can use the provided REST endpoint to integrate with your pipeline in a custom way. This might be the best solution if:

  • No native plugin is available yet (see GitLab or TravisCI as examples)
  • You run a custom setup in your pipeline, where calling a central REST endpoint is the easiest way to connect without disrupting your teams

Using the CI/CD REST endpoints implies leveraging the scripting functionality of your CI/CD provider. For a basic setup, follow these steps:

  1. Create an lx-manifest according to the metadata file documentation, or add logic to pass the microservice ID to the API call. The minimum information required in both scenarios is a microservice ID.

  2. Create two environment variables that capture the deployment stage and version at build time. Often both of these values are already available in the pipeline and can simply be referenced.

  3. (Optional) Extract the library information manually.

  4. Authenticate using LeanIX authentication mechanisms and receive a bearer token.



The token should be generated by an admin of the workspace.

curl --request POST \
  --url https://<HOST>/services/mtm/v1/oauth2/token \
  -u apitoken:<TOKEN> \
  --data grant_type=client_credentials
  5. Compile all of the above data into a POST /deployment request to the CICD connector API.
curl -X POST \
  -H 'Cache-Control: no-cache' \
  -H 'Authorization: Bearer <auth>' \
  -H 'Content-Type: multipart/form-data' \
  -F 'manifest=@<absolute path to manifest file>' \
  -F 'dependencies=@<absolute path to dependencies file>' \
  -F 'data={
  "version": "1.0.0",
  "stage": "dev",
  "dependencyManager": "MAVEN"
}' \
  '<CICD connector URL>/deployment'
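For orientation, the steps above can be combined into a single pipeline step. The following is only a sketch: HOST, TOKEN, the manifest form field name, the deployment URL, and the CI variable names (DEPLOY_STAGE, BUILD_VERSION) are placeholders or assumptions that will differ per provider.

```shell
#!/bin/sh
# Sketch of a custom pipeline step for the CICD connector.
# HOST, TOKEN, the manifest form field, and the deployment URL are placeholders.

# Naively extract the access_token field from the OAuth2 JSON response.
# Real pipelines should use jq instead of sed for JSON parsing.
extract_token() {
  printf '%s' "$1" | sed -n 's/.*"access_token" *: *"\([^"]*\)".*/\1/p'
}

# Stage and version are usually already available as CI variables
# (variable names differ per provider; these are examples).
STAGE="${DEPLOY_STAGE:-dev}"
VERSION="${BUILD_VERSION:-1.0.0}"

if [ -n "${HOST:-}" ] && [ -n "${TOKEN:-}" ]; then
  # Step 4: authenticate and receive a bearer token.
  RESPONSE=$(curl -s --request POST \
    --url "https://$HOST/services/mtm/v1/oauth2/token" \
    -u "apitoken:$TOKEN" \
    --data grant_type=client_credentials)
  BEARER=$(extract_token "$RESPONSE")

  # Step 5: compile the data into a POST /deployment request.
  curl -s -X POST \
    -H "Authorization: Bearer $BEARER" \
    -H 'Content-Type: multipart/form-data' \
    -F 'manifest=@lx-manifest.yaml' \
    -F "data={\"version\": \"$VERSION\", \"stage\": \"$STAGE\"}" \
    "<CICD connector URL>/deployment"
fi
```

The network calls are guarded so the script is a no-op unless HOST and TOKEN are set in the pipeline environment.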

API documentation is available here



We are using a multipart/form-data request, which makes the call a bit more complex. Some development tools, such as Postman, have difficulties sending a properly formatted request right out of the client. Therefore, we recommend using cURL for development and testing.

Imported Data

The CICD connector captures deployment-, microservice-, and library-related information at the time of deployment through a CD pipeline. To this end, the connector provides an API to capture individual deployments of a specific microservice. The API can either be leveraged directly or through one of our plugins.

There are three essential parts to the data input:

  1. Metadata provided by the engineering team
  2. Deployment information
  3. Library information


There are two options for adding the microservice documentation via the CICD connector:

Option 1 - Static lx-manifest.yaml

An lx-manifest.yaml is a YAML file added at the root level of each repository that you want to be recognized as a microservice. It follows a fixed base structure but can be flexibly extended with custom data.

A simple example file might look like this:

id: cicd-connector-v2
name: cicd-connector-v2
description: CI/CD revamped
owner: [email protected]
teams:
  - cider
  - leanix-vsm
links:
  - name: vcs
    url: '<repository URL>'

The supported key-value pairs for the manifest can be found below.

Each entry lists the key's description, whether it is required, its correctness criteria, and whether automatic detection at build time is possible.

  • id — Identifies the microservice. Required: yes. Correctness criteria: non-empty string; cannot contain '/' (for a valid deployment ID structure). Automatic detection at build time: yes (take the repo name as id).
  • name — Name of the microservice. Required: yes. Correctness criteria: non-empty string. Automatic detection: yes (take the repo name unless changed in VSM).
  • description — Description of the Software Artifact. Required: no. Correctness criteria: non-empty string. Automatic detection: no.
  • self — URL of the metadata file, pointing to the root of the project in the VCS. Required: yes. Correctness criteria: URL. Automatic detection: n/a, since the file does not exist yet.
  • domains — List of domain IDs from the VSM workspace. Required: no. Correctness criteria: non-empty list of strings. Automatic detection: no.
  • teams — List of team IDs from the VSM workspace; if no team ID is present, the Integration API raises warnings. Required: no. Correctness criteria: non-empty list of strings. Automatic detection: no.
  • products — List of product IDs from the VSM workspace. Required: no. Correctness criteria: non-empty list of strings. Automatic detection: no.
  • links — Links to other resources (e.g. Git repository, API documentation), as a list of name/url pairs (links: - name: '' url: ''). Required: no. Correctness criteria: data structure as agreed in the metadata. Automatic detection: no.
  • owner — Email address of the owner. Required: no. Correctness criteria: email address. Automatic detection: no.
  • tags — List of tags to be checked for every deployment and updated; tags should always be part of a LeanIX tag group (tags: - tag-group: Environment tag-name: Staging). Required: no. Correctness criteria: non-empty list of key-value pairs. Automatic detection: no universal tagging mechanism in place (possible for individual data points).
  • lifecycle — Lifecycle dates (lifecycle: planned: 2021-11-01 early-access: 2021-11-01). Required: no. Automatic detection: no.
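Putting the supported keys together, a fuller manifest might look like the following. All IDs, the owner email, and the URL are hypothetical values for illustration.

```yaml
id: cicd-connector-v2
name: cicd-connector-v2
description: CI/CD revamped
owner: jane.doe@example.com        # hypothetical owner email
teams:
  - team-id-1                      # team IDs from your VSM workspace
products:
  - product-id-1                   # product IDs from your VSM workspace
links:
  - name: vcs
    url: https://example.com/repo  # hypothetical repository URL
tags:
  - tag-group: Environment
    tag-name: Staging
lifecycle:
  planned: 2021-11-01
  early-access: 2021-11-01
```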



If you choose to add custom data on top, additional Integration API processors need to be added to your workspace to map the custom data into the workspace. Please view the section Extending the Integration for more information.

Option 2 - Dynamic Microservice data

If you have data about your microservice already available at build time and you do not want to add a YAML file to every repository, there is an option to pass the ID of the microservice as part of the API request. This will create or update a Software Artifact Fact Sheet with that specific ID.
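A custom call for this option might look like the sketch below. The exact field name for the microservice ID in the data object ("id" here), the BEARER token variable, and the endpoint URL are assumptions.

```shell
# Hypothetical dynamic-data call: the microservice ID is passed in the
# data object instead of an lx-manifest file. The "id" field name and
# the endpoint URL are assumptions.
SERVICE_ID="cicd-connector-v2"
VERSION="1.0.0"
STAGE="dev"

# Build the JSON body for the data form field.
DATA=$(printf '{"id": "%s", "version": "%s", "stage": "%s"}' \
  "$SERVICE_ID" "$VERSION" "$STAGE")

# Guarded so the call only fires when a bearer token is present.
if [ -n "${BEARER:-}" ]; then
  curl -X POST \
    -H "Authorization: Bearer $BEARER" \
    -H 'Content-Type: multipart/form-data' \
    -F "data=$DATA" \
    "<CICD connector URL>/deployment"
fi
```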

Deployment Information

Standard deployment information will be:

  • the current deployment version
  • the current deployment stage
  • the date and time of deployment

This information should be passed to the API (either through a plugin or a custom call). In the case of a custom API call, any other key-value pairs can be added to the body, which results in custom data that can be loaded into the deployment Fact Sheet.
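For example, a data object extended with one custom key might look as follows; the extra commitSha field is hypothetical and is only mapped into the workspace once matching custom processors exist.

```json
{
  "version": "1.0.0",
  "stage": "prod",
  "commitSha": "3f2a9c1"
}
```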



If you choose to add custom data on top, additional Integration API processors need to be added to your workspace to map the custom data into the workspace. Please view the section Extending the Integration for more information.

Library Information

For projects using one of the following package managers, a dependencies file can be extracted and sent to the CICD connector in order to create Library Fact Sheets and link them to the respective microservices and deployments:

  • NPM — JavaScript / Node.js
  • Maven — primarily Java

The native plugins include the processing of library information. You can also extract the information manually:

For NPM run:

license-checker --json > /path/to/dependencies.json
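The resulting dependencies.json maps each package@version to its license metadata. An abbreviated excerpt might look like this (package and fields shown for illustration):

```json
{
  "lodash@4.17.21": {
    "licenses": "MIT",
    "repository": "https://github.com/lodash/lodash"
  }
}
```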

For Maven run:

mvn license:download-licenses

For NuGet run:

dotnet tool run dotnet-project-licenses -p false -o --outfile licenses.json -j -i /path/to/csproj-directory



NuGet projects are automatically detected only by our out-of-the-box Azure Pipelines plugin.

Please note down the paths that the resulting files are written to, as we will need these to reference the files in our API call later.

For Gradle, copy the vsmCiCd-init.gradle configuration file into your repository, then run:

gradle generateLicenseReport -I /path/to/vsmCiCd-init.gradle

Extending the Integration

Custom data can be added in two places:

  • the lx-manifest.yml (as key-value attributes)
  • the data object on the request body of the deployment call (add key-value attributes)

The connector doesn’t touch or modify this data. It is merely passed through to the Integration API.

In order to process this data correctly, you need to add a custom processor set.

Please do not change the base processors added to the workspace by the connector. These will be overwritten automatically by the connector and all your changes will be lost.

Instead, just create a custom processor set and add them to the Integration API execution group by adding this snippet to your processor file:

"executionGroups": [

The integration API will pick up your processors and merge them with the base processors at execution time. Make sure to set the Integration API run number accordingly.

For more information, see the execution groups section of the Integration API documentation.



The base processors leverage a deletion scope, and the Integration API cannot merge deletion scopes. Therefore, it is not possible to add a custom deletion scope at this point in time.

The custom processors should also use the processing mode full, as the base processors do.
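As an illustration only, a minimal custom processor set might be shaped like the following. The processor name, the fact sheet type, the target key, the expressions, and the execution group value are all assumptions that must be adapted to your workspace; in particular, the execution group must match the one used by the base processors.

```json
{
  "processors": [
    {
      "processorType": "inboundFactSheet",
      "processorName": "customDeploymentData",
      "processorDescription": "Maps a hypothetical custom key from the deployment data object",
      "run": 2,
      "type": "updates",
      "filter": {
        "exactType": "Deployment"
      },
      "identifier": {
        "external": {
          "id": { "expr": "${content.id}" },
          "type": { "expr": "externalId" }
        }
      },
      "updates": [
        {
          "key": { "expr": "customField" },
          "values": [
            { "expr": "${data.commitSha}" }
          ]
        }
      ]
    }
  ],
  "executionGroups": [
    "<execution group used by the CICD connector base processors>"
  ]
}
```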