Kubernetes example #914


Merged (11 commits, Jun 3, 2025)
58 changes: 55 additions & 3 deletions examples/hackernews/README.md
Original file line number Diff line number Diff line change
@@ -4,8 +4,18 @@ This example is made of a Skip reactive service (in `reactive_service/`), a
Flask web service (in `web_service/`), a React front-end (in `www/`), a HAProxy
reverse-proxy (in `reverse_proxy/`), and a PostgreSQL database (in `db/`).

We provide configurations to run it using either Docker Compose or Kubernetes.
The Docker Compose version is simpler and easier if you just want to get
started with as few dependencies as possible; the Kubernetes version may be
useful for users who are either already using Kubernetes for other deployments
or require elastic horizontal scaling of their Skip service.

## Docker Compose

To build and run the application using Docker Compose, first install and run
Docker on your system, then run:

```bash
$ docker compose up --build
```

@@ -17,10 +27,52 @@ of computing and maintaining resources in a round-robin fashion.

This distributed configuration requires only configuration changes, is
transparent to clients, and can be run with:
```bash
$ docker compose -f compose.distributed.yml up --build
```

## Kubernetes

To run the application in a local Kubernetes cluster, you'll need several other
prerequisites in addition to Docker. Perform the following steps, which will
run and deploy the full application (in a distributed leader-follower
configuration) to a local Kubernetes cluster.

1. Install [`kubectl`](https://kubernetes.io/docs/tasks/tools/#kubectl)
(configuration tool to talk to a running cluster),
[`helm`](https://helm.sh/docs/intro/install/) (Kubernetes package manager)
and [`minikube`](https://minikube.sigs.k8s.io/docs/start) (local Kubernetes
cluster), and initialize a cluster with `minikube start`.

2. Enable the local Docker `registry` addon so that `minikube` can use
locally-built images: `minikube addons enable registry`. Then expose its port
5000: `docker run --rm -it --network=host alpine ash -c "apk add socat &&
socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000" &`

3. Build Docker images for each component of this example, then tag and publish
each one to the `minikube` registry:
```bash
docker compose -f kubernetes/compose.distributed.yml build
for image in web-service reactive-service www db ; do
docker tag reactive-hackernews/$image localhost:5000/$image;
docker push localhost:5000/$image;
done
```
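Before pushing, you can preview exactly which tag and push commands the loop
will run; this dry-run sketch just echoes them (the registry address matches
the one exposed in step 2):

```bash
# Dry run: print the tag/push commands the loop above will execute.
registry="localhost:5000"
for image in web-service reactive-service www db; do
  echo "docker tag reactive-hackernews/$image $registry/$image"
  echo "docker push $registry/$image"
done
```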

4. Deploy these images to your local Kubernetes cluster: `kubectl apply -f 'kubernetes/*.yaml'`

5. Configure and run HAProxy as a Kubernetes ingress controller, mediating
external traffic ("ingress") and distributing it to the relevant Kubernetes
service(s).
```bash
kubectl create configmap haproxy-auxiliary-configmap --from-file kubernetes/haproxy-aux.cfg
helm install haproxy haproxytech/kubernetes-ingress -f reverse_proxy/kubernetes.yaml
```

6. Run `minikube service haproxy-kubernetes-ingress` to open a tunnel to the
now-running ingress service, and point your browser at the output host/port
to see the service up and running!

### Overall System Design with optional leader/followers

```mermaid
12 changes: 12 additions & 0 deletions examples/hackernews/kubernetes/haproxy-aux.cfg
@@ -0,0 +1,12 @@
backend skip_control
mode http
http-request set-path %[path,regsub(^/control/,/v1/)]
use-server leader if { path_beg -i /v1/inputs/ }
# placeholder address for leader; will be overwritten by actual leader on startup
server leader localhost:8081 weight 0
balance roundrobin

backend skip_stream
mode http
http-request set-path /v1/streams/%[path,field(4,/)]
use-server %[req.hdr(Follower-Prefix)] if TRUE
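To make the two rewrites above concrete, here is a rough shell approximation
of HAProxy's `regsub` and `field` converters, using hypothetical request paths
(`sed` and `awk` stand in for HAProxy; this is an illustration, not part of
the deployment):

```bash
# skip_control: /control/... is rewritten to /v1/... before reaching Skip.
echo "/control/inputs/posts" | sed 's#^/control/#/v1/#'   # prints /v1/inputs/posts

# skip_stream: field(4,/) keeps only the 4th '/'-separated field (the stream
# UUID), so /streams/<follower>/<uuid> becomes /v1/streams/<uuid>.
path="/streams/follower1/abc123"
uuid=$(echo "$path" | awk -F/ '{print $4}')
echo "/v1/streams/$uuid"                                  # prints /v1/streams/abc123
```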
38 changes: 38 additions & 0 deletions examples/hackernews/kubernetes/ingress.yaml
@@ -0,0 +1,38 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: rhn-haproxy-config
data:
syslog-server: "address:stdout, format: raw, facility:daemon"
frontend-config-snippet: |
http-request set-header Follower-Prefix %[path,field(3,/)] if { path_beg -i /streams/ }
use_backend skip_stream if { path_beg -i /streams/ }
use_backend skip_control if { path_beg -i /control/ }
global-config-snippet: |
stats socket ipv4@*:9999 level admin expose-fd listeners
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: haproxy-kubernetes-ingress
spec:
ingressClassName: haproxy
rules:
- http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: rhn-web
port:
number: 3031
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: rhn-www
port:
number: 80
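The two `Prefix` rules above route `/api` traffic to the web service and
everything else to the static front-end (streaming and control requests are
diverted to the Skip backends earlier, by the frontend snippet in the
ConfigMap). A shell sketch of the resulting routing, with hypothetical
example paths:

```bash
# First matching prefix wins: /api -> rhn-web:3031, anything else -> rhn-www:80
route() {
  case "$1" in
    /api*) echo "rhn-web:3031" ;;
    *)     echo "rhn-www:80" ;;
  esac
}
route "/api/posts"    # prints rhn-web:3031
route "/index.html"   # prints rhn-www:80
```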
34 changes: 34 additions & 0 deletions examples/hackernews/kubernetes/postgres.yaml
@@ -0,0 +1,34 @@
apiVersion: v1
kind: Service
metadata:
name: rhn-pg
labels:
app.kubernetes.io/name: rhn-pg
spec:
selector:
app.kubernetes.io/name: rhn-pg
ports:
- port: 5432
protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rhn-pg
labels:
app.kubernetes.io/name: rhn-pg
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: rhn-pg
template:
metadata:
labels:
app.kubernetes.io/name: rhn-pg
spec:
containers:
- name: rhn-pg
image: localhost:5000/db
ports:
- containerPort: 5432
88 changes: 88 additions & 0 deletions examples/hackernews/kubernetes/skip.yaml
@@ -0,0 +1,88 @@
apiVersion: v1
kind: Service
metadata:
name: rhn-skip
labels:
app.kubernetes.io/name: rhn-skip
spec:
ports:
- port: 8080
name: streaming
- port: 8081
name: control
clusterIP: None
selector:
app.kubernetes.io/name: rhn-skip
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rhn-skip
labels:
app.kubernetes.io/name: rhn-skip
spec:
selector:
matchLabels:
app.kubernetes.io/name: rhn-skip
serviceName: rhn-skip
replicas: 4
template:
metadata:
labels:
app.kubernetes.io/name: rhn-skip
spec:
containers:
- name: rhn-skip
image: localhost:5000/reactive-service
command:
- bash
- "-c"
- |
set -ex
# use Kubernetes pod index as ID:
# - index 0 is leader
# - index >= 1 is follower, using that number in resource prefix
[[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
id=${BASH_REMATCH[1]}
ip=$(hostname -i)

if [[ $id -eq 0 ]]; then
export SKIP_LEADER=true
echo "set server skip_control/leader addr $ip port 8081 ; enable server skip_control/leader " | socat stdio tcp4-connect:haproxy-kubernetes-ingress:9999
else
export SKIP_FOLLOWER=true
export SKIP_RESOURCE_PREFIX=follower$id
export SKIP_LEADER_HOST=rhn-skip-0.rhn-skip.default.svc.cluster.local
# Self-register both the control and event streaming server with the haproxy load balancer.
# Calling 'set server' after 'add server' is redundant on initial scale-up, but necessary for subsequent scale-ups when a server of that name already exists.
# Enabling HAProxy TCP health checks ensures that servers are taken out of rotation when the system scales down or instances crash/disconnect for other reasons.
echo "\
add server skip_control/follower$id $ip:8081 check ;\
set server skip_control/follower$id addr $ip port 8081 ;\
enable server skip_control/follower$id ;\
enable health skip_control/follower$id ;\
add server skip_stream/follower$id $ip:8080 check ;\
set server skip_stream/follower$id addr $ip port 8080 ;\
enable server skip_stream/follower$id ;\
enable health skip_stream/follower$id\
" | socat stdio tcp4-connect:haproxy-kubernetes-ingress:9999
fi
npm start
ports:
- name: streaming
containerPort: 8080
- name: control
containerPort: 8081
env:
- name: PG_HOST
value: "rhn-pg.default.svc.cluster.local"
- name: PG_PORT
value: "5432"
readinessProbe:
exec:
command:
- wget
- "--spider"
- http://localhost:8081/v1/healthcheck
initialDelaySeconds: 1
periodSeconds: 2
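The startup script above assigns roles from the StatefulSet pod ordinal: pod 0
becomes the leader and pods 1 and up become followers. A standalone sketch of
the ordinal extraction, using the same regex as the container command
(`rhn-skip-3` is a hypothetical pod name):

```bash
# StatefulSet pods are named <statefulset>-<ordinal>; pull out the ordinal.
podname="rhn-skip-3"
[[ $podname =~ -([0-9]+)$ ]] && id=${BASH_REMATCH[1]}
if [[ $id -eq 0 ]]; then role=leader; else role=follower$id; fi
echo "$role"   # prints follower3
```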
41 changes: 41 additions & 0 deletions examples/hackernews/kubernetes/web.yaml
@@ -0,0 +1,41 @@
apiVersion: v1
kind: Service
metadata:
name: rhn-web
labels:
app.kubernetes.io/name: rhn-web
annotations:
haproxy.org/backend-config-snippet: |
http-request set-path %[path,regsub(^/api/,/)]
spec:
ports:
- port: 3031
name: api
selector:
app.kubernetes.io/name: rhn-web
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rhn-web
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: rhn-web
template:
metadata:
labels:
app.kubernetes.io/name: rhn-web
spec:
containers:
- name: rhn-web
image: localhost:5000/web-service
ports:
- containerPort: 3031
env:
- name: SKIP_CONTROL_URL
value:
"http://haproxy-kubernetes-ingress.default.svc.cluster.local/control"
- name: PG_HOST
value: "rhn-pg.default.svc.cluster.local"
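The `backend-config-snippet` annotation above strips the `/api` prefix before
requests reach the Flask web service, so the ingress path `/api/posts` arrives
as `/posts`. A `sed` approximation of that `regsub` call (hypothetical path;
illustration only):

```bash
# sed equivalent of: http-request set-path %[path,regsub(^/api/,/)]
rewritten=$(echo "/api/posts" | sed 's#^/api/#/#')
echo "$rewritten"   # prints /posts
```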
33 changes: 33 additions & 0 deletions examples/hackernews/kubernetes/www.yaml
@@ -0,0 +1,33 @@
apiVersion: v1
kind: Service
metadata:
name: rhn-www
labels:
app.kubernetes.io/name: rhn-www
spec:
ports:
- port: 80
name: http
selector:
app.kubernetes.io/name: rhn-www
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rhn-www
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: rhn-www
template:
metadata:
labels:
app.kubernetes.io/name: rhn-www
spec:
containers:
- name: rhn-www
image: localhost:5000/www
ports:
- containerPort: 80
2 changes: 2 additions & 0 deletions examples/hackernews/reactive_service/Dockerfile
@@ -1,5 +1,7 @@
FROM node:lts-alpine3.19
WORKDIR /app
RUN apk add --no-cache bash
RUN apk add --no-cache socat
COPY package.json package.json
RUN npm install
COPY . .
1 change: 1 addition & 0 deletions examples/hackernews/reactive_service/package.json
@@ -16,6 +16,7 @@
"@skip-adapter/postgres": "0.0.16"
},
"devDependencies": {
"@types/node": "^22.10.0",
"@skiplabs/eslint-config": "^0.0.1",
"@skiplabs/tsconfig": "^0.0.1"
}
10 changes: 5 additions & 5 deletions examples/hackernews/reactive_service/server.js
@@ -4,20 +4,20 @@ import { service } from "./dist/hackernews.service.js";

if (process.env["SKIP_LEADER"] == "true") {
console.log("Running as leader...");
await runService(asLeader(service)).catch(console.error);
} else if (process.env["SKIP_FOLLOWER"] == "true") {
console.log("Running as follower...");
await runService(
asFollower(service, {
leader: {
host: process.env["SKIP_LEADER_HOST"] || "skip_leader",
streaming_port: 8080,
control_port: 8081,
},
collections: ["postsWithUpvotes", "sessions"],
}),
).catch(console.error);
} else {
console.log("Running non-distributed...");
await runService(service).catch(console.error);
}
16 changes: 11 additions & 5 deletions examples/hackernews/reactive_service/src/hackernews.service.ts
@@ -40,12 +40,18 @@ type Session = User & {
user_id: number;
};

const host: string = process.env["PG_HOST"] || "db";
const port: number = Number(process.env["PG_PORT"]) || 5432;
const database: string = process.env["PG_DATABASE"] || "postgres";
const user: string = process.env["PG_USER"] || "postgres";
const password: string = process.env["PG_PASSWORD"] || "change_me";

const postgres = new PostgresExternalService({
host,
port,
database,
user,
password,
});

class UpvotesMapper {
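The service now reads its PostgreSQL connection settings from `PG_*`
environment variables, falling back to the previous hard-coded values. The
defaulting behaves like this shell sketch (the override value mirrors the
`PG_HOST` set in the Kubernetes manifests above):

```bash
# Mirrors the fallbacks in hackernews.service.ts: env var if set, else default.
unset PG_HOST PG_PORT
host=${PG_HOST:-db}
port=${PG_PORT:-5432}
echo "$host:$port"   # prints db:5432

# In the Kubernetes deployment, PG_HOST is set on the pod:
PG_HOST="rhn-pg.default.svc.cluster.local"
host=${PG_HOST:-db}
echo "$host:$port"   # prints rhn-pg.default.svc.cluster.local:5432
```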