
Commit 3512ee4

Fix Keycloak issues
1 parent df3e163 commit 3512ee4

3 files changed: +109 −21 lines changed

kubernetes/namespaces/tooling/keycloak/configmap.yaml

Lines changed: 3 additions & 2 deletions
@@ -9,10 +9,11 @@ data:
   KC_HOSTNAME: "id.pydis.wtf"
 
   # Set the location of the TLS certificates generated by Vault
-  KC_HTTPS_CERTIFICATE_FILE: "/vault/secrets/server.crt"
-  KC_HTTPS_CERTIFICATE_KEY_FILE: "/vault/secrets/server.key"
+  # KC_HTTPS_CERTIFICATE_FILE: "/vault/secrets/server.crt"
+  # KC_HTTPS_CERTIFICATE_KEY_FILE: "/vault/secrets/server.key"
 
   # Proxy settings
+  KC_HTTP_ENABLED: "true"
   KC_PROXY_HEADERS: "xforwarded"
 
   # Database configuration
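
For context, environment-style keys like these typically reach the Keycloak container through `envFrom`; a minimal sketch of that wiring, in which the ConfigMap name `keycloak-config` and the image reference are illustrative assumptions not shown in this diff:

# Hypothetical wiring: every key in the ConfigMap (KC_HOSTNAME,
# KC_HTTP_ENABLED, KC_PROXY_HEADERS, ...) becomes an environment variable
# on the Keycloak container. The ConfigMap name is assumed.
containers:
  - name: keycloak
    image: quay.io/keycloak/keycloak   # tag omitted; see the actual deployment
    envFrom:
      - configMapRef:
          name: keycloak-config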

kubernetes/namespaces/tooling/keycloak/deployment.yaml

Lines changed: 2 additions & 16 deletions
@@ -14,20 +14,6 @@ spec:
     metadata:
       labels:
         app: keycloak
-      annotations:
-        vault.hashicorp.com/agent-inject: "true"
-        vault.hashicorp.com/agent-init-first: "true"
-        vault.hashicorp.com/agent-inject-secret-server.key: "internal-tls/issue/internal-tls"
-        vault.hashicorp.com/agent-inject-template-server.key: |
-          {{- with secret "internal-tls/issue/internal-tls" "common_name=id.pydis.wtf" -}}
-          {{ .Data.private_key }}
-          {{- end }}
-        vault.hashicorp.com/agent-inject-secret-server.crt: "internal-tls/issue/internal-tls"
-        vault.hashicorp.com/agent-inject-template-server.crt: |
-          {{- with secret "internal-tls/issue/internal-tls" "common_name=id.pydis.wtf" -}}
-          {{ .Data.certificate }}
-          {{- end }}
-        vault.hashicorp.com/role: "internal-tls-issuer"
     spec:
       serviceAccountName: internal-tls-issuer
       containers:
@@ -47,8 +33,8 @@ spec:
         readinessProbe:
          httpGet:
            path: /realms/master
-           port: 8443
-           scheme: HTTPS
+           port: 8080
+           scheme: HTTP
        volumeMounts:
          - name: ca-store
            mountPath: /opt/pydis/ca-store
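
The readiness probe now targets Keycloak's built-in plain-HTTP listener rather than the Vault-issued TLS endpoint. A rough sketch of the container section such a probe would sit in, assuming the container exposes port 8080 (Keycloak's default HTTP port); only the probe values themselves come from the diff above:

# Assumed surrounding container fields; only path, port and scheme are
# taken from the diff.
containers:
  - name: keycloak
    ports:
      - name: http
        containerPort: 8080   # Keycloak's default plain-HTTP port
    readinessProbe:
      httpGet:
        path: /realms/master
        port: 8080
        scheme: HTTP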

kubernetes/namespaces/tooling/keycloak/ingress.yaml

Lines changed: 104 additions & 3 deletions
@@ -4,8 +4,109 @@ metadata:
   annotations:
     nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
     nginx.ingress.kubernetes.io/auth-tls-secret: "kube-system/mtls-client-crt-bundle"
-    nginx.ingress.kubernetes.io/auth-tls-error-page: "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
-    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
+    # Kubernetes is a work of pure evil. When Kubernetes designed this YAML
+    # hell, they figured "okay, but we somehow need a way to specify which
+    # things can be specified in YAML". It's not like YAML's dozen different
+    # ways to make a string (enough for an entire website:
+    # https://yaml-multiline.info/) were already concern enough to choose this
+    # format in the first place. However, it is somewhat readable for humans,
+    # and all the complexity for having to parse it lies in the minds of the
+    # poor programmers that have to follow the specification and make it work,
+    # which I can't help but be certain will result in admissions to
+    # mental hospitals. In that regard, the ethical implications of using YAML
+    # are sort of similar to buying shoes from any major clothing brand these
+    # days, with the exception that a new pair of sneakers is not something you
+    # need to "debug" because it parses "22:22" as 1342. Oh, but only if your
+    # parser implements YAML version 1.1. Very well, you think, not my problem.
+    # Let's talk about Kubernetes. Because Kubernetes needed some way to make
+    # their millions upon millions of lines of Golang code somehow have this
+    # thing called "backwards compatible", they introduced this thing called
+    # `apiVersion`. This makes sense and is a great idea. Let's talk about
+    # Kubernetes objects. Every Kubernetes object can have a bunch of metadata.
+    # Two of these are annotations and labels, which are basically a mapping of
+    # strings to strings. They are literally what the name says: annotations
+    # are for storing some arbitrary information on the object (which is then
+    # persisted in Kubernetes' cluster key-value jingle-jungle database), and
+    # labels actually have a predefined function because you can filter for
+    # labels in e.g. `kubectl` but also do this filtering in other Kubernetes
+    # objects, like selecting a collection of pods to route traffic to by their
+    # label. So far, so good. Let's talk about annotations. Annotations are
+    # basically meant for any other thing to consume some metadata about the
+    # object, because it turns out that constraining your object to
+    # some-versioned-keys-and-values-on-top-of-YAML specification, that's kind
+    # of not enough for the fact that people have Real Problems(tm) they need
+    # to solve and actually a webserver cannot just be modelled by saying
+    # "route this there". To route traffic in Kubernetes, and I think it's
+    # only HTTP and HTTPS traffic, you need an "ingress controller". An
+    # "ingress controller" checks which "ingresses" are created in the cluster,
+    # e.g. the thing you see in this file. Installing 600 lines of YAML hell
+    # through a tool called "Helm", "kustomize" or any other hundreds of lines
+    # of Golang or other glue code to at the end have the "ingress controller"
+    # is basically the "cloud native", hip, cool and "webscale" way of doing
+    # "sudo apt install nginx". So far so good, because "sudo apt install
+    # nginx" obviously doesn't scale because it's bound to a single host and
+    # everybody knows that Kubernetes is the only thing that can solve
+    # multi-host deployments. Now that you have installed this "ingress
+    # controller", as mentioned before, it watches for ingress objects like
+    # this one. But as also mentioned before, the regular attributes that are
+    # considered by the ingress controller for routing are not enough. What's
+    # the webscale solution to it? Simply, you begin _namespacing string keys_
+    # by just adding a FQDN in front of them, like
+    # `nginx.ingress.kubernetes.io/` and then some arbitrary value. All are
+    # strings, of course. And then you can tell it, for instance, that you want
+    # a 16k proxy buffer size, just make sure to quote it, because remember,
+    # strings, strings, strings, we love the strings. Configuring these
+    # namespaced keys is basically the webscale equivalent to `sudo vim
+    # /etc/nginx/conf.d/mysite.conf`, except not really, because everybody
+    # knows that vim doesn't scale, thankfully Kubernetes is here to solve all
+    # of our problems. And also not really, because it turns out that when you
+    # throw a map of string=>string into a humongous conglomeration of Go code
+    # that probably 99% of Kubernetes operators did not look through, you also
+    # need some way to handle errors. Of course, the entire `sudo apt install
+    # nginx` thing above has kind of solved that a dozen years ago, because
+    # there's this thing called _config validation_ where you can tell your
+    # nginx server to validate your configuration file. But, remember, this
+    # obviously doesn't scale, because config validation only runs on a single
+    # host. Everybody knows that they're just like Google, truly a modern
+    # server will not be able to host their 85 MB SPA for a restaurant without
+    # the incredible risk of _downtime_, and beware having to manage server
+    # upgrades. Thankfully, Kubernetes solves all of this for us, with its
+    # rolling upgrades, with its ingress objects, and with our amazing NGINX
+    # Helm Ingress Controller Chart that makes sure we can sleep safely at
+    # night because we have another CSI volume outage^H^H^H^H^H^H^H^H^H^H^H^H^H
+    # the ingress controller makes all of it work for us. Speaking about
+    # configuration validation, of course Kubernetes also needed a solution for
+    # that, and Kubernetes of course needed a solution that would cater to the
+    # needs of our 85 MB restaurant menu SPA, especially the entire high
+    # availability story. Thank god Kubernetes is here to solve that problem
+    # for us. So how did Kubernetes solve it? Kubernetes introduced something
+    # called "Dynamic Admission Control"
+    # (https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/).
+    # If you don't understand the name, just read the first paragraph of "what
+    # are admission webhooks": "Admission webhooks are HTTP callbacks that
+    # receive admission requests and do something with them". That makes it
+    # clear. And why are we saying all of this? Because these admission
+    # webhooks are - obviously, and clearly - used for configuration
+    # validation. Of course, NGINX has the entire configuration checking thing,
+    # but it doesn't scale. Everybody knows running a command on the host
+    # doesn't scale because the host might go away. So what's happening here?
+    # The configuration statement below is commented out. Why? Because it
+    # stopped applying. It makes perfect sense to me, of course: Kubernetes
+    # detected that YouTube isn't hosted on Kubernetes and as such it's not
+    # valid to have an HTTPS website as an error page. That simply is not
+    # allowed, because HTTPS doesn't scale because it's a connection between
+    # two hosts and one of the hosts might die in the meantime. Kubernetes, or
+    # rather, the Kubernetes NGINX Ingress Controller, or rather, the
+    # Kubernetes NGINX Ingress Controller Dynamic Admission Webhook HTTP
+    # callback (that just rolls off the tongue, doesn't it) thankfully solves
+    # this problem for us by just rejecting it flat at heaven's gate. To
+    # really explain the admission webhook topic again: Basically, an admission
+    # webhook is something that takes your mental state and submits it to the
+    # Kubernetes Ingress Controller to book you a spot at a psychiatric
+    # hospital. Very well, you think, and because it's webscale, it books the
+    # spot at two psychiatric hospitals at the same time, for high
+    # availability. Thank you, Kubernetes, for solving this problem.
+    # nginx.ingress.kubernetes.io/auth-tls-error-page: "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
     nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
     nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
     nginx.ingress.kubernetes.io/server-snippet: |
@@ -29,4 +130,4 @@ spec:
             service:
               name: keycloak
               port:
-                number: 8443
+                number: 8080
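
Together with dropping the `backend-protocol: "HTTPS"` annotation (ingress-nginx then falls back to its default of plain HTTP towards the upstream), the backend port change points the ingress at Keycloak's new HTTP listener. A sketch of the resulting rule, where the host comes from `KC_HOSTNAME` in the ConfigMap and the path/pathType layout is an assumption rather than something shown in this diff:

# Assumed rule layout; only the service name and port number appear in the
# diff above.
rules:
  - host: id.pydis.wtf
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: keycloak
              port:
                number: 8080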
