Kubernetes example #914

Draft · wants to merge 2 commits into `main`

examples/hackernews/README.md (53 changes: 52 additions & 1 deletion)
@@ -4,7 +4,16 @@ This example is made of a Skip reactive service (in `reactive_service/`), a
Flask web service (in `web_service/`), a React front-end (in `www/`), a HAProxy
reverse-proxy (in `reverse_proxy/`), and a PostgreSQL database (in `db/`).

In order to run it, do:
We provide configurations to run it using either Docker Compose or Kubernetes.
The Docker Compose version is the simpler option if you just want to get
started with as few dependencies as possible; the Kubernetes version may be
useful if you already run Kubernetes for other deployments.

## Docker Compose

To build and run the application using Docker Compose, first install and run
Docker on your system, then run:

```
$ docker compose up --build
```
@@ -21,6 +30,48 @@ transparent to clients, and can be run with:
$ docker compose -f compose.distributed.yml up --build
```

## Kubernetes

To run the application in a local Kubernetes cluster, you'll need several
prerequisites in addition to Docker. The following steps build and deploy the
full application (in a distributed leader-follower configuration) to a local
Kubernetes cluster.

1. Install [`kubectl`](https://kubernetes.io/docs/tasks/tools/#kubectl)
(configuration tool to talk to a running cluster),
[`helm`](https://helm.sh/docs/intro/install/) (Kubernetes package manager)
and [`minikube`](https://minikube.sigs.k8s.io/docs/start) (local Kubernetes
cluster), and initialize a cluster with `minikube start`.
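
Once those tools are installed and the cluster has started, you can confirm
that everything is up before moving on:

```
minikube status     # host, kubelet and apiserver should report Running
kubectl get nodes   # the single minikube node should be Ready
helm version
```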

2. Enable the local Docker `registry` addon so that `minikube` can use
locally-built images: `minikube addons enable registry`, then expose its port
5000: `docker run --rm -it --network=host alpine ash -c "apk add socat &&
socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000" &`
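
If you'd rather not run the `socat` container, a `kubectl` port-forward may
work instead. This is only a sketch: it assumes the registry addon exposes a
`registry` service on port 80 in the `kube-system` namespace, which is worth
checking with `kubectl get service -n kube-system` first.

```
# Assumed service name and port for the minikube registry addon.
kubectl port-forward --namespace kube-system service/registry 5000:80 &
```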

3. Build Docker images for each component of this example, then tag and publish
each one to the `minikube` registry:
```
docker compose -f kubernetes/compose.distributed.yml build
for image in web-service reactive-service www db ; do
docker tag reactive-hackernews/$image localhost:5000/$image;
docker push localhost:5000/$image;
done
```
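
To sanity-check the pushes, you can query the registry's catalog endpoint
(part of the standard Docker Registry v2 API); it should list the four images
tagged above:

```
curl http://localhost:5000/v2/_catalog
# e.g. {"repositories":["db","reactive-service","web-service","www"]}
```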

4. Deploy these images to your local Kubernetes cluster: `kubectl apply -f 'kubernetes/*.yaml'`
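
The pods may take a little while to pull and start; the names to expect follow
directly from the manifests in `kubernetes/`:

```
# Deployments rhn-pg, rhn-web and rhn-www (with generated pod name suffixes),
# plus StatefulSet pods rhn-skip-0 through rhn-skip-3:
kubectl get pods
# Services rhn-pg, rhn-web, rhn-www, rhn-skip and rhn-skip-ingress-0 through -3:
kubectl get services
```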

5. Configure and run HAProxy as a Kubernetes ingress controller, mediating
external traffic ("ingress") and distributing it to the relevant Kubernetes
service(s).
```
kubectl create configmap haproxy-auxiliary-configmap --from-file kubernetes/haproxy-aux.cfg
helm install haproxy haproxytech/kubernetes-ingress -f kubernetes/haproxy-aux-config.yaml
```
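
If Helm doesn't already know about the `haproxytech` chart repository used
above, you may need to add it first. The URL below is what we believe to be
the standard haproxytech charts location; double-check it against the HAProxy
Kubernetes ingress controller documentation:

```
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
```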

6. Run `minikube service haproxy-kubernetes-ingress` to open a tunnel to the
now-running ingress service, and point your browser at the host/port it prints
to see the application up and running!
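
For a quick check from the command line instead of a browser, something along
these lines should work, assuming the first URL printed by `minikube service
--url` corresponds to the HTTP port:

```
URL=$(minikube service haproxy-kubernetes-ingress --url | head -n1)
curl -I "$URL/"    # front-end served by rhn-www; the API is routed under "$URL/api/"
```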

### Overall System Design with optional leader/followers

```mermaid

examples/hackernews/kubernetes/haproxy-aux-config.yaml (9 changes: 9 additions & 0 deletions)
@@ -0,0 +1,9 @@
controller:
  extraVolumes:
    - name: haproxy-auxiliary-volume
      configMap:
        name: haproxy-auxiliary-configmap
  extraVolumeMounts:
    - name: haproxy-auxiliary-volume
      mountPath: /usr/local/etc/haproxy/haproxy-aux.cfg
      subPath: haproxy-aux.cfg

examples/hackernews/kubernetes/haproxy-aux.cfg (17 changes: 17 additions & 0 deletions)
@@ -0,0 +1,17 @@
backend skip_control
    mode http
    http-request set-path %[path,regsub(^/control/,/v1/)]
    use-server leader if { path_beg -i /v1/inputs/ }
    server leader rhn-skip-ingress-0:8081 weight 0
    balance roundrobin
    server follower1 rhn-skip-ingress-1:8081
    server follower2 rhn-skip-ingress-2:8081
    server follower3 rhn-skip-ingress-3:8081

backend skip_stream
    mode http
    http-request set-path /v1/streams/%[path,field(4,/)]
    use-server %[req.hdr(Follower-Prefix)] if TRUE
    server follower1 rhn-skip-ingress-1:8080
    server follower2 rhn-skip-ingress-2:8080
    server follower3 rhn-skip-ingress-3:8080

examples/hackernews/kubernetes/ingress.yaml (36 changes: 36 additions & 0 deletions)
@@ -0,0 +1,36 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress
data:
  syslog-server: "address:stdout, format: raw, facility:daemon"
  frontend-config-snippet: |
    http-request set-header Follower-Prefix %[path,field(3,/)] if { path_beg -i /streams/ }
    use_backend skip_stream if { path_beg -i /streams/ }
    use_backend skip_control if { path_beg -i /control/ }
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: haproxy-kubernetes-ingress
spec:
  ingressClassName: haproxy
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: rhn-web
                port:
                  number: 3031
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rhn-www
                port:
                  number: 80

examples/hackernews/kubernetes/postgres.yaml (34 changes: 34 additions & 0 deletions)
@@ -0,0 +1,34 @@
apiVersion: v1
kind: Service
metadata:
  name: rhn-pg
  labels:
    app.kubernetes.io/name: rhn-pg
spec:
  selector:
    app.kubernetes.io/name: rhn-pg
  ports:
    - port: 5432
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rhn-pg
  labels:
    app.kubernetes.io/name: rhn-pg
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: rhn-pg
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rhn-pg
    spec:
      containers:
        - name: rhn-pg
          image: localhost:5000/db
          ports:
            - containerPort: 5432

examples/hackernews/kubernetes/skip.yaml (151 changes: 151 additions & 0 deletions)
@@ -0,0 +1,151 @@
apiVersion: v1
kind: Service
metadata:
  name: rhn-skip
  labels:
    app.kubernetes.io/name: rhn-skip
spec:
  ports:
    - port: 8080
      name: streaming
    - port: 8081
      name: control
  clusterIP: None
  selector:
    app.kubernetes.io/name: rhn-skip
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rhn-skip
  labels:
    app.kubernetes.io/name: rhn-skip
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: rhn-skip
  serviceName: rhn-skip
  replicas: 4
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rhn-skip
    spec:
      containers:
        - name: rhn-skip
          image: localhost:5000/reactive-service
          command:
            - bash
            - "-c"
            - |
              set -ex
              # use Kubernetes pod index as ID:
              # - index 0 is leader
              # - index >= 1 is follower, using that number in resource prefix
              [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
              id=${BASH_REMATCH[1]}
              if [[ $id -eq 0 ]]; then
                export SKIP_LEADER=true
              else
                export SKIP_FOLLOWER=true
                export SKIP_RESOURCE_PREFIX=follower$id
                export SKIP_LEADER_HOST=rhn-skip-0.rhn-skip.default.svc.cluster.local
              fi
              npm start
          ports:
            - name: streaming
              containerPort: 8080
            - name: control
              containerPort: 8081
          env:
            - name: PG_HOST
              value: "rhn-pg.default.svc.cluster.local"
            - name: PG_PORT
              value: "5432"
          readinessProbe:
            exec:
              command:
                - wget
                - "--spider"
                - http://localhost:8081/v1/healthcheck
            initialDelaySeconds: 5
            periodSeconds: 5
# Headless services giving names to followers, binding on selector
# `statefulset.kubernetes.io/pod-name`
# This is just to give a service identifier for haproxy ingress controller to
# use to distribute streaming requests to the proper followers.

# These headless services need to be created for each pod of the stateful set;
# i.e. it is required to define services rhn-skip-ingress-X for all X from 0 to
# <number of rhn-skip replicas>. It is recommended to create enough such
# headless services to support the high end of expected follower instances;
# then, horizontal scaling just requires changing the number of replicas in the
# rhn-skip StatefulSet.
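# For example, scaling out to 5 followers (pod 0 being the leader) would be
#   kubectl scale statefulset rhn-skip --replicas=6
# together with two more headless services below (rhn-skip-ingress-4 and -5)
# and matching server lines in haproxy-aux.cfg.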
---
apiVersion: v1
kind: Service
metadata:
  name: rhn-skip-ingress-0
spec:
  clusterIP: None
  ports:
    - name: streaming
      port: 8080
      targetPort: 8080
    - name: control
      port: 8081
      targetPort: 8081
  selector:
    statefulset.kubernetes.io/pod-name: rhn-skip-0
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: rhn-skip-ingress-1
spec:
  clusterIP: None
  ports:
    - name: streaming
      port: 8080
      targetPort: 8080
    - name: control
      port: 8081
      targetPort: 8081
  selector:
    statefulset.kubernetes.io/pod-name: rhn-skip-1
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: rhn-skip-ingress-2
spec:
  clusterIP: None
  ports:
    - name: streaming
      port: 8080
      targetPort: 8080
    - name: control
      port: 8081
      targetPort: 8081
  selector:
    statefulset.kubernetes.io/pod-name: rhn-skip-2
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: rhn-skip-ingress-3
spec:
  clusterIP: None
  ports:
    - name: streaming
      port: 8080
      targetPort: 8080
    - name: control
      port: 8081
      targetPort: 8081
  selector:
    statefulset.kubernetes.io/pod-name: rhn-skip-3
  type: ClusterIP

examples/hackernews/kubernetes/web.yaml (40 changes: 40 additions & 0 deletions)
@@ -0,0 +1,40 @@
apiVersion: v1
kind: Service
metadata:
  name: rhn-web
  labels:
    app.kubernetes.io/name: rhn-web
  annotations:
    haproxy.org/backend-config-snippet: |
      http-request set-path %[path,regsub(^/api/,/)]
spec:
  ports:
    - port: 3031
      name: api
  selector:
    app.kubernetes.io/name: rhn-web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rhn-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: rhn-web
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rhn-web
    spec:
      containers:
        - name: rhn-web
          image: localhost:5000/web-service
          ports:
            - containerPort: 3031
          env:
            - name: SKIP_CONTROL_URL
              value:
                "http://haproxy-kubernetes-ingress.default.svc.cluster.local/control"
              # TODO try this with port 8081 instead of /control path prefix

examples/hackernews/kubernetes/www.yaml (33 changes: 33 additions & 0 deletions)
@@ -0,0 +1,33 @@
apiVersion: v1
kind: Service
metadata:
  name: rhn-www
  labels:
    app.kubernetes.io/name: rhn-www
spec:
  ports:
    - port: 80
      name: http
  selector:
    app.kubernetes.io/name: rhn-www
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rhn-www
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: rhn-www
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rhn-www
    spec:
      containers:
        - name: rhn-www
          image: localhost:5000/www
          ports:
            - containerPort: 80