Group-level Kubernetes clusters
Similar to project-level and instance-level Kubernetes clusters, group-level Kubernetes clusters allow you to connect a Kubernetes cluster to your group, enabling you to use the same cluster across multiple projects.
GitLab provides a one-click install for various applications that can be added directly to your cluster.
Applications will be installed in a dedicated namespace called
`gitlab-managed-apps`. If you have added an existing Kubernetes cluster
with Tiller already installed, be careful: GitLab cannot detect it, and
installing Tiller via the applications will result in the cluster having
it twice. This can lead to confusion during deployments.
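If you're unsure whether Tiller is already present, one way to check is a quick `kubectl` query (a sketch, assuming you have `kubectl` access to the cluster and that Tiller was deployed with its conventional labels):

```shell
# Look for an existing Tiller deployment; Tiller pods are conventionally
# labeled app=helm,name=tiller. Searching all namespaces catches installs
# in kube-system as well as gitlab-managed-apps.
kubectl get pods --all-namespaces -l app=helm,name=tiller
```

If this returns any pods outside `gitlab-managed-apps`, Tiller is already installed and you should avoid installing it a second time through the applications page.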
| Application | GitLab version | Description | Helm Chart |
|-------------|----------------|-------------|------------|
| Helm Tiller | 11.6+ | Helm is a package manager for Kubernetes and is required to install all the other applications. It is installed in its own pod inside the cluster which can run the `helm` CLI in a safe environment. | |
| Ingress | 11.6+ | Ingress can provide load balancing, SSL termination, and name-based virtual hosting. It acts as a web proxy for your applications and is useful if you want to use Auto DevOps or deploy your own web apps. | stable/nginx-ingress |
| Cert-Manager | 11.6+ | Cert-Manager is a native Kubernetes certificate management controller that helps with issuing certificates. Installing Cert-Manager on your cluster will issue a certificate by Let's Encrypt and ensure that certificates are valid and up-to-date. | stable/cert-manager |
| Prometheus | 11.11+ | Prometheus is an open-source monitoring and alerting system useful to supervise your deployed applications. | stable/prometheus |
| GitLab Runner | 11.10+ | GitLab Runner is the open source project that is used to run your jobs and send the results back to GitLab. It is used in conjunction with GitLab CI/CD, the open-source continuous integration service included with GitLab that coordinates the jobs. When installing the GitLab Runner via the applications, it will run in privileged mode by default. Make sure you read the security implications before doing so. | runner/gitlab-runner |
NOTE: Some cluster applications are installable only for a project-level cluster. Support for installing these applications in a group-level cluster is planned for future releases. For updates, see:
- Support installing JupyterHub in group-level clusters
For each project under a group with a Kubernetes cluster, GitLab will
create a restricted service account with `edit` privileges
in the project namespace.
GitLab will use the project's cluster, if it is available and not disabled, before using any cluster belonging to the group containing the project.
In the case of subgroups, GitLab will use the cluster of the closest ancestor group of the project, provided that cluster is not disabled.
Multiple Kubernetes clusters [PREMIUM]
With GitLab Premium, you can associate more than one Kubernetes cluster with your group. That way you can have different clusters for different environments, like dev, staging, and production.
Add another cluster similar to the first one and make sure to set an environment scope that will differentiate the new cluster from the rest.
NOTE: Only available when creating clusters. Existing clusters not managed by GitLab cannot become GitLab-managed later.
You can choose to allow GitLab to manage your cluster for you. If your cluster is managed by GitLab, resources for your projects will be automatically created. See the Access controls section for details on which resources will be created.
If you choose to manage your own cluster, project-specific resources will not be created
automatically. If you are using Auto DevOps, you will need to explicitly provide the
`KUBE_NAMESPACE` deployment variable that will be used by your deployment jobs.
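For example, a minimal `.gitlab-ci.yml` fragment pinning the namespace for a self-managed cluster might look like this (the namespace name here is purely illustrative):

```yaml
variables:
  # Namespace that Auto DevOps deployment jobs should target on the
  # self-managed cluster; "my-group-staging" is a hypothetical value.
  KUBE_NAMESPACE: my-group-staging
```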
NOTE: If you install applications on your cluster, GitLab will create the resources required to run them even if you have chosen to manage your own cluster.
Introduced in GitLab 11.8.
Domains at the cluster level permit support for multiple domains
per multiple Kubernetes clusters. When specifying a domain,
this will be automatically set as an environment variable
(`KUBE_INGRESS_BASE_DOMAIN`) during the Auto DevOps stages.
The domain should have a wildcard DNS configured to the Ingress IP address.
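For example, if the Ingress controller's external IP were `203.0.113.10` (an illustrative address from the documentation range), the wildcard record in a BIND-style DNS zone might look like:

```
; Route all subdomains of example.com to the Ingress IP address.
*.example.com.   300   IN   A   203.0.113.10
```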
Environment scopes [PREMIUM]
When adding more than one Kubernetes cluster to your project, you need to differentiate them with an environment scope. The environment scope associates clusters with environments similar to how the environment-specific variables work.
While evaluating which environment matches the environment scope of a cluster, cluster precedence takes effect: the cluster at the project level takes precedence, followed by the closest ancestor group, followed by that group's parent, and so on.
For example, let's say we have the following Kubernetes clusters:

| Cluster | Environment scope |
|---------|-------------------|
| Project cluster | `*` |
| Staging cluster | `staging/*` |
| Production cluster | `production/*` |
And the following environments are set in `.gitlab-ci.yml`:

```yaml
stages:
  - test
  - deploy

test:
  stage: test
  script: sh test

deploy to staging:
  stage: deploy
  script: make deploy
  environment:
    name: staging/$CI_COMMIT_REF_NAME
    url: https://staging.example.com/

deploy to production:
  stage: deploy
  script: make deploy
  environment:
    name: production/$CI_COMMIT_REF_NAME
    url: https://example.com/
```
The result will then be:
- The Project cluster will be used for the `test` job.
- The Staging cluster will be used for the `deploy to staging` job.
- The Production cluster will be used for the `deploy to production` job.
The following features are not currently available for group-level clusters: