Integration of Kubernetes Dashboard and GitLab users

Kubernetes Dashboard is an easy-to-use tool for getting up-to-date information about a running cluster and managing it in a minimal way. You start to appreciate it even more when access to these features is needed not only by administrators and DevOps engineers, but also by those who are less comfortable with the console and/or do not intend to deal with all the subtleties of interacting with kubectl and other utilities. That is what happened with us: the developers wanted quick access to the Kubernetes web interface, and since we use GitLab, the solution suggested itself.

Why do this?


Developers themselves may be interested in a tool like the K8s Dashboard for debugging tasks. Sometimes you want to browse logs and resources, and sometimes to kill pods, scale Deployments/StatefulSets, or even go into a container console (there are such requests too, although they can also be solved in another way, e.g. via kubectl-debug).
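For comparison, here is a rough sketch of the kubectl equivalents of those operations (the namespace, pod, and Deployment names are purely illustrative):

  # tail the logs of a container
  $ kubectl -n myapp logs -f backend-76b55bc9f8-xpncp
  # kill a pod (its controller will recreate it)
  $ kubectl -n myapp delete pod backend-76b55bc9f8-xpncp
  # scale a Deployment
  $ kubectl -n myapp scale deployment/backend --replicas=3
  # open a console in a container
  $ kubectl -n myapp exec -it backend-76b55bc9f8-xpncp -- sh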

In addition, there is a psychological aspect for managers who want to look at the cluster, see that "everything is green", and thus reassure themselves that "everything works" (which, of course, is very relative... but that is beyond the scope of this article).

We use GitLab as our standard CI system, and all developers use it. Therefore, to give them access, it was logical to integrate the Dashboard with GitLab accounts.

Also note that we use NGINX Ingress. If you work with other ingress solutions, you will have to find analogous mechanisms for authorization on your own.
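For reference, here is a minimal sketch of what an Ingress in front of the panel can look like in such a scheme, with all traffic for the Dashboard host going through the oauth2-proxy service first (the host and service names are assumptions for illustration, not the exact manifests from our repository):

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: kubernetes-dashboard
    namespace: kube-system
  spec:
    rules:
    # all requests to the Dashboard host hit oauth2-proxy first
    - host: dashboard.example.com
      http:
        paths:
        - backend:
            serviceName: oauth2-proxy
            servicePort: 4180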

We're trying to integrate


Installing Dashboard


Attention: if you are going to repeat the steps described below, then, to avoid unnecessary operations, first read the next subsection.

Since we use this integration in a variety of installations, we have automated its setup. The sources needed for it are published in a dedicated GitHub repository. They are based on slightly modified YAML configurations from the official Dashboard repository, as well as a Bash script for fast deployment.

The script installs the Dashboard into a cluster and sets it up to integrate with GitLab:

  $ ./ctl.sh
  Usage: ctl.sh [OPTION]... --gitlab-url GITLAB_URL --oauth2-id ID --oauth2-secret SECRET --dashboard-url DASHBOARD_URL
  Install kubernetes-dashboard to Kubernetes cluster.
  Mandatory arguments:
    -i, --install      install into 'kube-system' namespace
    -u, --upgrade      upgrade existing installation
    -d, --delete       remove everything, including the namespace
    --gitlab-url       set gitlab url with schema (https://gitlab.example.com)
    --oauth2-id        set OAUTH2_PROXY_CLIENT_ID from gitlab
    --oauth2-secret    set OAUTH2_PROXY_CLIENT_SECRET from gitlab
    --dashboard-url    set dashboard url without schema (dashboard.example.com)
  Optional arguments:
    -h, --help         output this message

However, before using it, you need to go to GitLab (Admin area → Applications) and add a new application for the future panel. Let's call it "kubernetes dashboard":
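When creating the application, you will be asked for a redirect (callback) URI. Assuming oauth2_proxy's default callback path, it would look something like this (this exact value is an assumption; check the manifests for the actual path used):

  https://dashboard.example.com/oauth2/callback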



After adding it, GitLab will provide the hashes:



They are used as arguments to the script. As a result, the installation looks like this:

  $ ./ctl.sh -i --gitlab-url https://gitlab.example.com --oauth2-id 6a52769e... --oauth2-secret 6b79168f... --dashboard-url dashboard.example.com

After that, check that everything starts:

  $ kubectl -n kube-system get pod | egrep '(dash|oauth)'
  kubernetes-dashboard-76b55bc9f8-xpncp   1/1   Running   0   14s
  oauth2-proxy-5586ccf95c-czp2v           1/1   Running   0   14s

Sooner or later everything will start, but authorization will not work right away! The fact is that in the image being used (the situation is similar in other images) the process of catching the redirect in the callback is implemented incorrectly. This leads to oauth erasing the very cookie that it (oauth) itself gives us...

The problem is solved by building your own oauth image with a patch.

Patching oauth and reinstalling


To do this, use the following Dockerfile:

  FROM golang:1.9-alpine3.7
  WORKDIR /go/src/github.com/bitly/oauth2_proxy

  RUN apk --update add make git build-base curl bash ca-certificates wget \
      && update-ca-certificates \
      && curl -sSO https://raw.githubusercontent.com/pote/gpm/v1.4.0/bin/gpm \
      && chmod +x gpm \
      && mv gpm /usr/local/bin
  RUN git clone https://github.com/bitly/oauth2_proxy.git . \
      && git checkout bfda078caa55958cc37dcba39e57fc37f6a3c842
  ADD rd.patch .
  RUN patch -p1 < rd.patch \
      && ./dist.sh

  FROM alpine:3.7
  RUN apk --update add curl bash ca-certificates && update-ca-certificates
  COPY --from=0 /go/src/github.com/bitly/oauth2_proxy/dist/ /bin/

  EXPOSE 8080 4180
  ENTRYPOINT ["/bin/oauth2_proxy"]
  CMD ["--upstream=http://0.0.0.0:8080/", "--http-address=0.0.0.0:4180"]

And here's what the rd.patch patch looks like. Its first hunk simplifies dist.sh so that only the Linux binary is built, and the second one makes the sign-in page honor the rd parameter, so the redirect is no longer lost:
  diff --git a/dist.sh b/dist.sh
  index a00318b..92990d4 100755
  --- a/dist.sh
  +++ b/dist.sh
  @@ -14,25 +14,13 @@ goversion=$(go version | awk '{print $3}')
   sha256sum=()

   echo "... running tests"
  -./test.sh
  +#./test.sh

  -for os in windows linux darwin; do
  -    echo "... building v$version for $os/$arch"
  -    EXT=
  -    if [ $os = windows ]; then
  -        EXT=".exe"
  -    fi
  -    BUILD=$(mktemp -d ${TMPDIR:-/tmp}/oauth2_proxy.XXXXXX)
  -    TARGET="oauth2_proxy-$version.$os-$arch.$goversion"
  -    FILENAME="oauth2_proxy-$version.$os-$arch$EXT"
  -    GOOS=$os GOARCH=$arch CGO_ENABLED=0 \
  -        go build -ldflags="-s -w" -o $BUILD/$TARGET/$FILENAME || exit 1
  -    pushd $BUILD/$TARGET
  -    sha256sum+=("$(shasum -a 256 $FILENAME || exit 1)")
  -    cd .. && tar czvf $TARGET.tar.gz $TARGET
  -    mv $TARGET.tar.gz $DIR/dist
  -    popd
  -done
  +os='linux'
  +echo "... building v$version for $os/$arch"
  +TARGET="oauth2_proxy-$version.$os-$arch.$goversion"
  +GOOS=$os GOARCH=$arch CGO_ENABLED=0 \
  +    go build -ldflags="-s -w" -o ./dist/oauth2_proxy || exit 1

   checksum_file="sha256sum.txt"
   cd $DIR/dist
  diff --git a/oauthproxy.go b/oauthproxy.go
  index 21e5dfc..df9101a 100644
  --- a/oauthproxy.go
  +++ b/oauthproxy.go
  @@ -381,7 +381,9 @@ func (p *OAuthProxy) SignInPage(rw http.ResponseWriter, req *http.Request, code
   	if redirect_url == p.SignInPath {
   		redirect_url = "/"
   	}
  -
  +	if req.FormValue("rd") != "" {
  +		redirect_url = req.FormValue("rd")
  +	}
   	t := struct {
   		ProviderName  string
   		SignInMessage string

Now you can build the image and push it to your own GitLab registry. Next, in manifests/kube-dashboard-oauth2-proxy.yaml, specify the desired image (replace it with your own):

  image: docker.io/colemickens/oauth2_proxy:latest
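For reference, the build-and-push step mentioned above can look like this (the registry address and image path here are placeholders for your own GitLab registry):

  # build the patched image from the Dockerfile above
  $ docker build -t registry.company.com/kube-system/oauth2_proxy:latest .
  $ docker login registry.company.com
  $ docker push registry.company.com/kube-system/oauth2_proxy:latest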

If your registry requires authorization, do not forget to add the use of a secret for pulling the image:

  imagePullSecrets:
  - name: gitlab-registry

... and add the secret for the registry itself:

  ---
  apiVersion: v1
  data:
    .dockercfg: eyJyZWdpc3RyeS5jb21wYW55LmNvbSI6IHsKICJ1c2VybmFtZSI6ICJvYXV0aDIiLAogInBhc3N3b3JkIjogIlBBU1NXT1JEIiwKICJhdXRoIjogIkFVVEhfVE9LRU4iLAogImVtYWlsIjogIm1haWxAY29tcGFueS5jb20iCn0KfQoK
  kind: Secret
  metadata:
    name: gitlab-registry
    namespace: kube-system
  type: kubernetes.io/dockercfg

The attentive reader will notice that the long line above is the base64 encoding of this config:

  {"registry.company.com": {
  "username": "oauth2",
  "password": "PASSWORD",
  "auth": "AUTH_TOKEN",
  "email": "mail@company.com"
 }
 }  

This is the GitLab user data that Kubernetes will use to pull the image from the registry.
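Instead of composing the base64 by hand, you can have kubectl generate an equivalent secret for you (it will be of the newer kubernetes.io/dockerconfigjson type, which also works for imagePullSecrets):

  # creates a registry secret with the same credentials as the YAML above
  $ kubectl -n kube-system create secret docker-registry gitlab-registry \
      --docker-server=registry.company.com \
      --docker-username=oauth2 \
      --docker-password=PASSWORD \
      --docker-email=mail@company.com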

After everything is done, you can delete the current (incorrectly working) Dashboard installation with the command:

  $ ./ctl.sh -d  

... and reinstall everything:

  $ ./ctl.sh -i --gitlab-url https://gitlab.example.com --oauth2-id 6a52769e... --oauth2-secret 6b79168f... --dashboard-url dashboard.example.com

It is time to go to the Dashboard and find a rather archaic authorization button:



After clicking it, GitLab greets us with its usual login page (if, of course, we have not already been authorized there):



Log in with your GitLab credentials, and you're in:



About Dashboard Features


If you are a developer who hasn't worked with Kubernetes before, or simply haven't encountered the Dashboard for some reason, I'll illustrate some of its features.

First, you can see that "everything is green":



More detailed data is available for pods, such as environment variables, the image being used, launch arguments, and their state:



Deployment status:



... and other details:



... and you can also scale a Deployment:



The result of this operation:



Among the other useful features already mentioned at the beginning of the article is viewing logs:



... and entering the console of a container of the selected pod:



You can also look, for example, at limits/requests on nodes:



Of course, these are not all of the panel's features, but I hope this gives the general idea.

Disadvantages of the integration and the Dashboard


There is no access control in the described integration: with it, all users who have any access to GitLab get access to the Dashboard. They all have the same access within the Dashboard, corresponding to the rights of the Dashboard itself, which are defined in RBAC. Obviously, this does not suit everyone, but for our case it turned out to be sufficient.
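For illustration, the panel's rights are granted by binding its ServiceAccount to a ClusterRole; a minimal sketch of such a binding might look as follows (the exact manifests in our repository may differ):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: kubernetes-dashboard
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: kubernetes-dashboard
  subjects:
  # every authenticated Dashboard user acts with this ServiceAccount's rights
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system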

Among the noticeable minuses of the Dashboard panel, I will note the following:

  • it is impossible to get into an init container's console;
  • it is impossible to edit Deployments and StatefulSets, although this can be fixed in the ClusterRole (see the sketch after this list);
  • the Dashboard's compatibility with the latest versions of Kubernetes, and the future of the project in general, raise questions.
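As for the second point, here is a sketch of the kind of rule that could be added to the panel's ClusterRole to allow editing (the apiGroups, resources, and verbs here are an assumption to adjust to your needs):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: kubernetes-dashboard
  rules:
  # allows the panel to edit Deployments and StatefulSets
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets"]
    verbs: ["get", "list", "watch", "update", "patch"]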

The last problem deserves special attention.

Dashboard Status and Alternatives


The Kubernetes compatibility table presented in the latest version of the project (v1.10.1) is not very encouraging:



Despite this, there is PR #3476 (merged back in January), which announces support for K8s 1.13. In addition, among the project's issues you can find references to users working with the panel in K8s 1.14. Finally, commits to the project's code base have not stopped. So (at least!) the actual status of the project is not as bad as it may first appear from the official compatibility table.

Finally, there are alternatives to Dashboard. Among them:

  1. K8Dash is a young interface (the first commits date from March of this year) that already offers good features, such as a visual representation of the cluster's current status and management of its objects. It is positioned as a "real-time interface", because it automatically updates the displayed data without requiring a page refresh in the browser.
  2. OpenShift Console is the web interface of Red Hat OpenShift, which, however, brings the project's other developments into your cluster as well, which does not suit everyone.
  3. Kubernator is an interesting project created as a lower-level (than Dashboard) interface with the ability to view all cluster objects. However, it looks like its development has ceased.
  4. Polaris is a project announced just the other day that combines the functions of a panel (it shows the current state of the cluster but does not manage its objects) with automatic "best practices" validation (it checks the cluster for the correctness of the configurations of the Deployments running in it).

Instead of a conclusion


The Dashboard is a standard tool for the Kubernetes clusters we maintain. Its integration with GitLab has also become part of our "default installation", since many developers appreciate the capabilities this panel gives them.

Alternatives to the Kubernetes Dashboard periodically appear in the open source community (and we gladly consider them), but for now we are sticking with this solution.
