@@ -4,8 +4,109 @@ metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "kube-system/mtls-client-crt-bundle"
-    nginx.ingress.kubernetes.io/auth-tls-error-page: "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
-    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
+    # Kubernetes is a work of pure evil. When Kubernetes designed this YAML
+    # hell, they figured "okay, but we somehow need a way to specify which
+    # things can be specified in YAML". It's not like YAML's dozen different
+    # ways to make a string (enough for an entire website:
+    # https://yaml-multiline.info/) were already enough of a concern when
+    # choosing this format in the first place. However, it is somewhat
+    # readable for humans, and all the complexity of having to parse it lies
+    # in the minds of the poor programmers who have to follow the
+    # specification and make it work, which I can't help but be certain will
+    # result in admissions to mental hospitals. In that regard, the ethical
+    # implications of using YAML are sort of similar to buying shoes from any
+    # major clothing brand these days, with the exception that a new pair of
+    # sneakers is not something you need to "debug" because it parses "22:22"
+    # as 1342. Oh, but only if your parser implements YAML version 1.1. Very
+    # well, you think, not my problem.
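+    # (A minimal illustration of that particular YAML 1.1 quirk, not part of
+    # this manifest and with made-up key names:
+    #
+    #   quoted: "22:22"    # the string "22:22"
+    #   unquoted: 22:22    # YAML 1.1 sexagesimal integer: 22*60 + 22 = 1342
+    #
+    # A YAML 1.2 parser reads the second one as a plain string again.)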
+    # Let's talk about Kubernetes. Because Kubernetes needed some way to make
+    # their millions upon millions of lines of Golang code somehow have this
+    # thing called "backwards compatibility", they introduced this thing
+    # called `apiVersion`. This makes sense and is a great idea. Let's talk
+    # about Kubernetes objects. Every Kubernetes object can have a bunch of
+    # metadata. Two of those metadata fields are annotations and labels,
+    # which are basically a mapping of strings to strings. They are literally
+    # what the name says: annotations store some arbitrary information on the
+    # object (which is then persisted in Kubernetes' cluster key-value
+    # jingle-jungle database), while labels actually have a predefined
+    # function, because you can filter by labels in e.g. `kubectl`, but also
+    # do this filtering in other Kubernetes objects, like selecting a
+    # collection of pods to route traffic to by their label. So far, so good.
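+    # (A minimal sketch of the two, with purely illustrative names:
+    #
+    #   metadata:
+    #     labels:
+    #       app: keycloak            # something selectors can match on
+    #     annotations:
+    #       example.com/owner: "me"  # free-form metadata for other tools
+    #
+    # and the label-filtering bit: kubectl get pods -l app=keycloak)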
+    # Let's talk about annotations. Annotations are basically meant for any
+    # other tool to consume some metadata about the object, because it turns
+    # out that constraining your object to a
+    # some-versioned-keys-and-values-on-top-of-YAML specification is kind of
+    # not enough, given that people have Real Problems(tm) they need to solve
+    # and a webserver cannot just be modelled by saying "route this there".
+    # To route traffic in Kubernetes, and I think it's only HTTP and HTTPS
+    # traffic, you need an "ingress controller". An "ingress controller"
+    # watches which "ingresses" are created in the cluster, e.g. the thing
+    # you see in this file. Installing 600 lines of YAML hell through a tool
+    # called "Helm", "kustomize" or any other hundreds of lines of Golang or
+    # other glue code, just to end up with the "ingress controller", is
+    # basically the "cloud native", hip, cool and "webscale" way of doing
+    # "sudo apt install nginx".
+    # So far, so good, because "sudo apt install nginx" obviously doesn't
+    # scale, since it's bound to a single host and everybody knows that
+    # Kubernetes is the only thing that can solve multi-host deployments.
+    # Now that you have installed this "ingress controller", as mentioned
+    # before, it watches for ingress objects like this one. But as also
+    # mentioned before, the regular attributes considered by the ingress
+    # controller for routing are not enough. What's the webscale solution?
+    # Simple: you begin _namespacing string keys_ by just adding an FQDN in
+    # front of them, like `nginx.ingress.kubernetes.io/`, and then some
+    # arbitrary value. All strings, of course. And then you can tell it, for
+    # instance, that you want a 16k proxy buffer size; just make sure to
+    # quote it, because, remember: strings, strings, strings, we love the
+    # strings. Configuring these namespaced keys is basically the webscale
+    # equivalent of `sudo vim /etc/nginx/conf.d/mysite.conf`, except not
+    # really, because everybody knows that vim doesn't scale; thankfully
+    # Kubernetes is here to solve all of our problems.
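+    # (For the record, the two proxy-buffer annotations further down are
+    # roughly the webscale spelling of what plain nginx would express in a
+    # hand-written, purely illustrative /etc/nginx/conf.d/mysite.conf as:
+    #
+    #   proxy_buffers 4 16k;
+    #   proxy_buffer_size 16k;)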
+    # And also not really, because it turns out that when you throw a map of
+    # string=>string into a humongous conglomeration of Go code that probably
+    # 99% of Kubernetes operators have never looked through, you also need
+    # some way to handle errors. Of course, the entire `sudo apt install
+    # nginx` thing above kind of solved that a dozen years ago, because
+    # there's this thing called _config validation_, where you can tell your
+    # nginx server to validate your configuration file.
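+    # (That thing being, on the single host that supposedly doesn't scale:
+    #
+    #   sudo nginx -t    # test the configuration file and exit)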
+    # But, remember, this obviously doesn't scale, because config validation
+    # only runs on a single host. Everybody knows that they're just like
+    # Google: truly, a modern server will not be able to host an 85 MB SPA
+    # for a restaurant without the incredible risk of _downtime_, and beware
+    # of having to manage server upgrades. Thankfully, Kubernetes solves all
+    # of this for us, with its rolling upgrades, with its ingress objects,
+    # and with our amazing NGINX Helm Ingress Controller Chart that makes
+    # sure we can sleep soundly at night because we have another CSI volume
+    # outage^H^H^H^H^H^H^H^H^H^H^H^H^H the ingress controller makes all of it
+    # work for us. Speaking of configuration validation, of course Kubernetes
+    # also needed a solution for that, and Kubernetes of course needed a
+    # solution that would cater to the needs of our 85 MB restaurant menu
+    # SPA, especially the entire high availability story. Thank god
+    # Kubernetes is here to solve that problem for us. So how did Kubernetes
+    # solve it? Kubernetes introduced something called "Dynamic Admission
+    # Control"
+    # (https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/).
+    # If you don't understand the name, just read the first paragraph of
+    # "what are admission webhooks": "Admission webhooks are HTTP callbacks
+    # that receive admission requests and do something with them". That makes
+    # it clear.
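+    # (If you want to look at the thing doing the judging, something along
+    # these lines works on a typical ingress-nginx install; the exact object
+    # name depends on how the chart was installed:
+    #
+    #   kubectl get validatingwebhookconfigurations
+    #   kubectl describe validatingwebhookconfiguration ingress-nginx-admission)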
+    # And why are we saying all of this? Because these admission webhooks
+    # are, obviously and clearly, used for configuration validation. Of
+    # course, NGINX has the entire configuration checking thing, but it
+    # doesn't scale. Everybody knows running a command on the host doesn't
+    # scale because the host might go away. So what's happening here? The
+    # configuration statement below is commented out. Why? Because at some
+    # point it simply stopped applying. It makes perfect sense to me, of
+    # course: Kubernetes detected that YouTube isn't hosted on Kubernetes,
+    # and as such it's not valid to have an HTTPS website as an error page.
+    # That simply is not allowed, because HTTPS doesn't scale: it's a
+    # connection between two hosts, and one of the hosts might die in the
+    # meantime. Kubernetes, or rather, the Kubernetes NGINX Ingress
+    # Controller, or rather, the Kubernetes NGINX Ingress Controller Dynamic
+    # Admission Webhook HTTP callback (that just rolls off the tongue,
+    # doesn't it?) thankfully solves this problem for us by just rejecting it
+    # flat out at heaven's gate. To really explain the admission webhook
+    # topic again: basically, an admission webhook is something that takes
+    # your mental state and submits it to the Kubernetes Ingress Controller
+    # to book you a spot at a psychiatric hospital. Very well, you think, and
+    # because it's webscale, it books the spot at two psychiatric hospitals
+    # at the same time, for high availability. Thank you, Kubernetes, for
+    # solving this problem.
+    # nginx.ingress.kubernetes.io/auth-tls-error-page: "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
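+    # (One way to check whether the webhook is going to reject your YAML
+    # before it actually rejects your YAML is to push it through admission
+    # without persisting anything; file name illustrative:
+    #
+    #   kubectl apply -f keycloak-ingress.yaml --dry-run=server)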
    nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
    nginx.ingress.kubernetes.io/server-snippet: |
@@ -29,4 +130,4 @@
          service:
            name: keycloak
            port:
-             number: 8443
+             number: 8080