📖 Update multi-cluster proposal with new implementation details #3
Merged
247 changes: 164 additions & 83 deletions designs/multi-cluster.md
# Multi-Cluster Support
Author: @sttts @embik

Initial implementation: @vincepri

Last Updated on: 2025-01-07

## Table of Contents

Multi-cluster use-cases require the creation of multiple managers and/or cluster
objects. This proposal is about adding native support for multi-cluster use-cases
to controller-runtime.

With this change, it will be possible to implement pluggable cluster providers
that automatically start and stop sources (and thus, cluster-aware reconcilers) when
the cluster provider adds ("engages") or removes ("disengages") a cluster.

## Motivation

This change is important because:

### Goals

- Allow 3rd-parties to implement an (optional) multi-cluster provider Go interface that controller-runtime will use (if configured on the manager) to dynamically attach and detach registered controllers to clusters that come and go.
- With that, provide a way to natively write controllers for these patterns:
  1. (UNIFORM MULTI-CLUSTER CONTROLLERS) operate on multiple clusters in a uniform way,
     i.e. reconciling the same resources on multiple clusters, **optionally**
     - sourcing information from one central hub cluster
     - sourcing information cross-cluster.

     Example: distributed `ReplicaSet` controller, reconciling `ReplicaSets` on multiple clusters.
  2. (AGGREGATING MULTI-CLUSTER CONTROLLERS) operate on one central hub cluster aggregating information from multiple clusters.

     Example: distributed `Deployment` controller, aggregating `ReplicaSets` across multiple clusters back into a central `Deployment` object.

#### Low-Level Requirements

- Allow event sources to be cross-cluster such that:
1. Multi-cluster events can trigger reconciliation in the one central hub cluster.
2. Central hub cluster events can trigger reconciliation on multiple clusters.
- Allow reconcilers to look up objects through (informer) indexes from specific other clusters.
- Minimize the amount of changes needed to make a controller-runtime controller
  multi-cluster-compatible, in a way that 3rd-party projects have no reason to
  object to these kinds of changes.

Here we call a controller multi-cluster-compatible if its reconcilers get
reconcile requests in cluster `X` and do all reconciliation in cluster `X`. This
logic.
### Examples

- Run a controller-runtime controller against a kubeconfig with arbitrary many contexts, all being reconciled.
- Run a controller-runtime controller against cluster managers like kind, Cluster API, Open-Cluster-Manager or Hypershift.
- Run a controller-runtime controller against a kcp shard with a wildcard watch.

### Non-Goals/Future Work
## Proposal

The `ctrl.Manager` _SHOULD_ be extended to get an optional `cluster.Provider` via
`ctrl.Options`, implementing:

```golang
// pkg/cluster

// Provider defines methods to retrieve clusters by name. The provider is
// responsible for discovering and managing the lifecycle of each cluster.
//
// Example: A Cluster API provider would be responsible for discovering and
// managing clusters that are backed by Cluster API resources, which can live
// in multiple namespaces in a single management cluster.
type Provider interface {
	// Get returns a cluster for the given identifying cluster name. Get
	// returns an existing cluster if it has been created before.
	Get(ctx context.Context, clusterName string) (Cluster, error)
}
```

A cluster provider is responsible for constructing `cluster.Cluster` instances and returning
them upon calls to `Get(ctx, clusterName)`. Providers should keep track of created clusters and
return them again if the same name is requested. Since providers are responsible for constructing
the `cluster.Cluster` instance, they can make decisions about e.g. reusing existing informers.

The `cluster.Cluster` _SHOULD_ be extended with a unique name identifier:

```golang
// pkg/cluster:
type Cluster interface {
	// Name returns the unique name of the cluster.
	Name() string
	// ...
}
```

A new interface for cluster-aware runnables will be provided:

```golang
// pkg/cluster
type Aware interface {
	// Engage gets called when the component should start operations for the given Cluster.
	// The given context is tied to the Cluster's lifecycle and will be cancelled when the
	// Cluster is removed or an error occurs.
	//
	// Implementers should return an error if they cannot start operations for the given Cluster,
	// and should ensure this operation is re-entrant and non-blocking.
	Engage(context.Context, Cluster) error

	// Disengage gets called when the component should stop operations for the given Cluster.
	Disengage(context.Context, Cluster) error
}
}
```
`ctrl.Manager` will implement `cluster.Aware`. As specified in the `Provider` interface,
it is the cluster provider's responsibility to call `Engage` and `Disengage` on a `ctrl.Manager`
instance when clusters join or leave the set of target clusters that should be reconciled.

The internal `ctrl.Manager` implementation in turn will call `Engage` and `Disengage` on all
its runnables that are cluster-aware (i.e. that implement the `cluster.Aware` interface).

In particular, cluster-aware controllers implement the `cluster.Aware` interface and are
responsible for starting watches on clusters when they are engaged. This is expressed through
the interface below:

```golang
// pkg/controller
type TypedMultiClusterController[request comparable] interface {
	cluster.Aware
	TypedController[request]
}
```

The standard implementing types, in particular `internal.Kind`, will adhere to
these interfaces.

The multi-cluster controller implementation reacts to engaged clusters by starting
a new `TypedSyncingSource` that also wraps the context passed down from the call to `Engage`,
which _MUST_ be canceled by the cluster provider at the end of a cluster's lifecycle.

The `ctrl.Manager` _SHOULD_ be extended by a `cluster.Cluster` getter:

```golang
// pkg/manager
type Manager interface {
	// ...
	GetCluster(ctx context.Context, clusterName string) (cluster.Cluster, error)
}
```

The embedded `cluster.Cluster` corresponds to `GetCluster(ctx, "")`. We call the
clusters with non-empty name "provider clusters" or "engaged clusters", while
the embedded cluster of the manager is called the "default cluster" or "hub
cluster".

The `reconcile.Request` _SHOULD_ be extended by an optional `ClusterName` field:
To provide information about the source cluster of a request, a new type
`reconcile.ClusterAwareRequest` _SHOULD_ be added:

```golang
// pkg/reconcile
type ClusterAwareRequest struct {
	Request
	ClusterName string
}
```

This struct embeds a `reconcile.Request` to store the "usual" information (name and namespace)
about an object, plus the name of the originating cluster.

Given that an empty cluster name represents the "default cluster", a `reconcile.ClusterAwareRequest`
can be used as `request` type even for controllers that do not have an active cluster provider.
The cluster name will simply be an empty string, which is compatible with calls to `mgr.GetCluster`.

### BYO Request Type

Instead of using the new `reconcile.ClusterAwareRequest`, implementations _CAN_ also bring their
own request type through the generics support in `Typed*` types (`request comparable`).

Optionally, a passed `TypedEventHandler` will be duplicated per engaged cluster if it
fulfills the following interface:

```golang
// pkg/handler
type TypedDeepCopyableEventHandler[object any, request comparable] interface {
	TypedEventHandler[object, request]
	DeepCopyFor(c cluster.Cluster) TypedDeepCopyableEventHandler[object, request]
}
```

This might be necessary if a BYO `TypedEventHandler` needs to store information about
the engaged cluster (e.g. because the events do not supply information about the cluster in
object annotations) that it has been started for.

### Multi-Cluster-Compatible Reconcilers

accessing code from directly accessing `mgr.GetClient()` and `mgr.GetCache()` to
going through `mgr.GetCluster(req.ClusterName).GetClient()` and
`mgr.GetCluster(req.ClusterName).GetCache()`.

A typical snippet at the beginning of a reconciler to fetch the client could look like this:

```golang
cl, err := mgr.GetCluster(ctx, req.ClusterName)
if err != nil {
	return reconcile.Result{}, err
}
client := cl.GetClient()
```

Due to `reconcile.ClusterAwareRequest`, changes to the controller builder process are minimal:

```golang
// previous
builder.TypedControllerManagedBy[reconcile.Request](mgr).
	Named("single-cluster-controller").
	For(&corev1.Pod{}).
	Complete(reconciler)

// new
builder.TypedControllerManagedBy[reconcile.ClusterAwareRequest](mgr).
	Named("multi-cluster-controller").
	For(&corev1.Pod{}).
	Complete(reconciler)
```

The builder will choose the correct `EventHandler` implementation for both `For` and `Owns`
depending on the `request` type used.

With the described changes (use `GetCluster(ctx, req.ClusterName)`, making `reconciler`
a `TypedFunc[reconcile.ClusterAwareRequest]`) an existing controller will automatically act as
a *uniform multi-cluster controller* if a cluster provider is configured.
It will reconcile resources from cluster `X` in cluster `X`.

For a manager with `cluster.Provider`, the builder _SHOULD_ create a controller
that sources events **ONLY** from the provider clusters that got engaged with
the controller.

Controllers that should be triggered by events on the hub cluster can continue
to use `For` and `Owns` and explicitly pass the intention to engage only with the
"default" cluster (this is only necessary if a cluster provider is plugged in):

```golang
builder.NewControllerManagedBy(mgr).
	WithOptions(controller.TypedOptions{
		EngageWithDefaultCluster:   ptr.To(true),
		EngageWithProviderClusters: ptr.To(false),
	}).
	For(&appsv1.Deployment{}).
	Owns(&v1.ReplicaSet{}).
	Complete(reconciler)
```

A mixed set of sources is possible, as shown in the example above.

## User Stories

### Controller Author with no interest in multi-cluster wanting the old behaviour

- Do nothing. Controller-runtime behaviour is unchanged.

### Multi-Cluster Integrator wanting to support cluster managers like Cluster API or kind

- Implement the `cluster.Provider` interface, either via polling of the cluster registry
or by watching objects in the hub cluster.
- For every new cluster create an instance of `cluster.Cluster` and call `mgr.Engage`.

### Multi-Cluster Integrator wanting to support apiservers with logical clusters (like kcp)

Expand All @@ -223,23 +317,22 @@ A mixed set of sources is possible as shown here in the example.
### Controller Author without self-interest in multi-cluster, but open for adoption in multi-cluster setups

- Replace `mgr.GetClient()` and `mgr.GetCache()` with `mgr.GetCluster(req.ClusterName).GetClient()` and `mgr.GetCluster(req.ClusterName).GetCache()`.
- Make manager and controller plumbing vendor'able to allow plugging in multi-cluster provider and BYO request type.

### Controller Author who wants to support certain multi-cluster setups

- Do the `GetCluster` plumbing as described above.
- Vendor 3rd-party multi-cluster providers and wire them up in `main.go`.

## Risks and Mitigations

- The standard behaviour of controller-runtime is unchanged for single-cluster controllers.
- The activation of the multi-cluster mode is through usage of a `request.ClusterAwareRequest` request type and
attaching the `cluster.Provider` to the manager. To make it clear that the semantics are experimental, we name
the `manager.Options` field `ExperimentalClusterProvider`.
- We only extend these interfaces and structs:
- `ctrl.Manager` with `GetCluster(ctx, clusterName string) (cluster.Cluster, error)` and `cluster.Aware`.
- `cluster.Cluster` with `Name() string`.
We think that the behaviour of these extensions is well understood and hence low risk.
Everything else behind the scenes is an implementation detail that can be changed
at any time.
Expand All @@ -258,24 +351,12 @@ A mixed set of sources is possible as shown here in the example.
- We could deepcopy the builder instead of the sources and handlers. This would
  lead to one controller and one workqueue per cluster. For the reason outlined
  in the previous alternative, this is not desirable.

## Implementation History

- [PR #2207 by @vincepri : WIP: ✨ Cluster Provider and cluster-aware controllers](https://github.com/kubernetes-sigs/controller-runtime/pull/2207) – with extensive review
- [PR #2726 by @sttts replacing #2207: WIP: ✨ Cluster Provider and cluster-aware controllers](https://github.com/kubernetes-sigs/controller-runtime/pull/2726) –
picking up #2207, addressing lots of comments and extending the approach to what kcp needs, with a `fleet-namespace` example that demonstrates a similar setup as kcp with real logical clusters.
- [PR #3019 by @embik, replacing #2726: ✨ WIP: Cluster provider and cluster-aware controllers](https://github.com/kubernetes-sigs/controller-runtime/pull/3019) -
picking up #2726, reworking existing code to support the recent `Typed*` generic changes of the codebase.
- [github.com/kcp-dev/controller-runtime](https://github.com/kcp-dev/controller-runtime) – the kcp controller-runtime fork