GitHub Repository

Out-of-the-box Source Code Repository Connection


A newer version of this connector is available and recommended. This documentation is intended only for existing LeanIX VSM (formerly MI) customers who already have the connector installed.

Synchronized data

The out-of-the-box setup of the connector runs on a daily schedule by default, but this can be adjusted to individual needs. The table below shows the information that the scanners pick up in GitHub and how it translates into LeanIX data objects:

| GitHub Data Object | LeanIX Data Object |
| --- | --- |
| Repository name | Microservice fact sheet (name = external ID reference). The top 3 source code contributors (based on commit frequency in the last 30 days) are added as 'Responsible' subscriptions to the respective Microservice fact sheet. |
| Team entity | Team fact sheet (name = external ID reference) |
| Team entity added to a repository | Relation between the Microservice and Team fact sheets |
| Languages of a repository | Technical Component fact sheet (name of language = external ID reference). The Technical Component fact sheet is related to the microservice on which the language occurs; the number of kilobytes written in the respective language is mapped on the relation. |


All topics are mapped as LeanIX tags of the tag group GitHub Topics by default.



Topics are a great way to pass custom data through to your VSM workspace. We will add more topic-based mapping functionality to this connector over time. Today, you can already use topics to map your data into VSM by leveraging the Integration API and custom data mappings. Find out more in the section on customization below.


The GitHub repository connector works with the Integration Hub: it scans the GitHub organisation data, processes it into LDIF, and automatically triggers the inbound Integration API processors.
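As an illustration, an LDIF content item produced by such a scan could look roughly like the following. This is a sketch only: the `type`/`id`/`data` layout follows the general LDIF structure, but the concrete keys and values shown here are assumptions, not the connector's guaranteed output.

```json
{
  "type": "Repository",
  "id": "example-org/payment-service",
  "data": {
    "name": "payment-service",
    "languages": { "Java": 1250, "Kotlin": 300 },
    "topics": ["payments", "tier-1"],
    "teams": ["payments-team"]
  }
}
```

The inbound Integration API processors then map such content items onto fact sheets, relations, and tags as described in the table above.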

Setting up Integration Hub data source

Create a new data source of the connector "mi-github-repository-connector" on the workspace under Integration Hub in the Admin area. Populate the connector configuration parameters with the correct details (see the section below for an explanation of the parameters), set up the schedule, and start the data source. Details about ongoing and previous data source runs can be found under "Sync Logging".

Connector configuration parameters

The following parameters need to be configured on the data source before starting the connector.

  1. "orgName" - The name of the organisation. Repositories under the organisation are scoped for scanning.

  2. "ghToken" - A personal access token with the permissions required to read the organisation's repositories.

  3. "repoNamesExcludeList" - Array of patterns (supports regex match) identifying repository names that should be excluded from the scanning result. (e.g.: ["allThatIncludeThisSubstring", "^start-with", "end-with$"])
Example DataSource configuration
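As a sketch, the three connector configuration parameters described above might be filled in like this (the organisation name, token, and exclusion patterns are all placeholders):

```json
{
  "orgName": "example-org",
  "ghToken": "ghp_exampleTokenValue",
  "repoNamesExcludeList": ["^archived-", "-deprecated$", "sandbox"]
}
```

With this configuration, every repository under `example-org` is scanned except those whose names match one of the exclusion patterns.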

Customization options (coming soon)

Customization of this connector is possible by creating a customer-specific data mapping via Integration API processors. When the connector runs, the standard mapping is applied to the data first, and the custom mapping is then executed seamlessly on top. No additional development is required.

At this point, custom data can be added via GitHub topics.

To process this data correctly, you need to add a custom processor set.

Please do not change the base processors added to the workspace by the connector. They are overwritten automatically on every connector run, and all your changes will be lost.

Instead, create a custom processor set and add it to the Integration API execution group by adding this snippet to your processor file (replace the placeholder with the execution group name used by the base processors in your workspace):

"executionGroups": [
  "<execution-group-of-the-base-processors>"
]

The Integration API will pick up your processors and merge them with the base processors at execution time. Make sure to set the Integration API run number accordingly.

For more information, see the Integration API documentation on execution groups.

The base processors leverage a deletion scope, and the Integration API cannot merge deletion scopes. Therefore, it is currently not possible to add a custom deletion scope.

The custom processors should also use the processing mode "full", as the base processors do.
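Putting these constraints together, the skeleton of a custom processor file might look like the following sketch. The processor shown is illustrative only (its name, type, and expressions are assumptions, not part of this connector), and the execution group placeholder must be replaced with the group name used by the base processors in your workspace:

```json
{
  "processors": [
    {
      "processorType": "inboundFactSheet",
      "processorName": "customTopicMapping",
      "processorDescription": "Illustrative custom mapping based on GitHub topics",
      "type": "Microservice",
      "enabled": true,
      "identifier": {
        "external": {
          "id": { "expr": "${content.id}" },
          "type": { "expr": "externalId" }
        }
      }
    }
  ],
  "executionGroups": ["<execution-group-of-the-base-processors>"],
  "processingMode": "full"
}
```

Note that the file declares `"processingMode": "full"` and contains no deletion scope, in line with the restrictions described above.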
