
Possibility to use imagePullSecrets, private registry for providers & Separate config secret when using multiple infra providers. #916

@sivarama-p-raju

Description


User Story

As an operator, I would like to be able to define an imagePullSecret and a private image registry for all provider deployments, both for customizability and to avoid depending on external image registries.

Detailed Description

By default, the cluster-api provider manifests are fetched from upstream repositories, and as a result the deployments created use images from upstream image registries.

The custom resources below all support the "spec.deployment" schema, where a custom imageUrl and imagePullSecrets can be configured:

bootstrapproviders
controlplaneproviders
infrastructureproviders
ipamproviders

This can be seen, for example, with the following:

kubectl explain ipamprovider.spec.deployment.imagePullSecrets --api-version=operator.cluster.x-k8s.io/v1alpha2 --recursive
kubectl explain ipamprovider.spec.deployment.containers --api-version=operator.cluster.x-k8s.io/v1alpha2
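
For reference, a minimal example of what this could look like on a rendered provider custom resource. The registry host, secret name and namespace below are placeholders, not values taken from the chart:

apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: IPAMProvider
metadata:
  name: in-cluster
  namespace: capi-ipam-system            # placeholder namespace
spec:
  deployment:
    imagePullSecrets:
    - name: my-registry-secret           # placeholder pull secret
    containers:
    - name: manager
      imageUrl: registry.example.com/capi-ipam-ic/cluster-api-ipam-in-cluster-controller:v1.0.3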

I am making the updates below to each of these templates to add support for this customization:

  1. kubeadm bootstrap provider:

-- bootstrap.yaml:

apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: BootstrapProvider
metadata:
  name: {{ $bootstrapName }}
  namespace: {{ $bootstrapNamespace }}
spec:
{{- if $bootstrap.deployment }}
  deployment: {{ toYaml $bootstrap.deployment | nindent 4 }}
{{- end }}

-- values:

bootstrap:
  kubeadm:
    deployment:
      imagePullSecrets:
      - name: <secret name>
      containers:
      - name: manager
        imageUrl: <private>/cluster-api/kubeadm-bootstrap-controller:v1.11.2

  2. kubeadm controlPlane provider:

-- control-plane.yaml

apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: ControlPlaneProvider
metadata:
  name: {{ $controlPlaneName }}
  namespace: {{ $controlPlaneNamespace }}
spec:
{{- if $controlPlane.deployment }}
  deployment: {{ toYaml $controlPlane.deployment | nindent 4 }}
{{- end }}

-- values:

controlPlane:
  kubeadm:
    deployment:
      imagePullSecrets:
      - name: <secret name>
      containers:
      - name: manager
        imageUrl: <private>/cluster-api/kubeadm-control-plane-controller:v1.11.2

  3. azure & nutanix infrastructure providers:

-- infra.yaml

apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
  name: {{ $infrastructureName }}
  namespace: {{ $infrastructureNamespace }}
spec:
{{- if $infra.additionalDeployments }}
  additionalDeployments: {{ toYaml $infra.additionalDeployments | nindent 4 }}
{{- end }}
{{- if $infra.deployment }}
  deployment: {{ toYaml $infra.deployment | nindent 4 }}
{{- end }}

NOTE: I also updated additionalDeployments so that it is fetched from the provider-specific config instead of the default "$.Values.additionalDeployments" (see also the sketch further below under "Anything else you would like to add").

-- values:

infrastructure:
  azure:
    deployment:
      imagePullSecrets:
      - name: <secret name>
      containers:
      - name: manager
        imageUrl: <private>/cluster-api-azure/cluster-api-azure-controller:v1.21.1
    additionalDeployments:
      azureserviceoperator-controller-manager:
        deployment:
          imagePullSecrets:
          - name: <secret>
          containers:
          - imageUrl: <private>/k8s/azureserviceoperator:v2.11.0
            name: manager
  nutanix:
    deployment:
      imagePullSecrets:
      - name: <secret name>
      containers:
      - name: manager
        imageUrl: <private>/nutanix-cloud-native/cluster-api-provider-nutanix/controller:v1.7.2

  4. cluster-api core provider:

-- core.yaml

apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: CoreProvider
metadata:
  name: {{ $coreName }}
  namespace: {{ $coreNamespace }}
spec:
{{- if $core.deployment }}
  deployment: {{ toYaml $core.deployment | nindent 4 }}
{{- end }}

-- values:

core:
  cluster-api:
    deployment:
      imagePullSecrets:
      - name: <secret>
      containers:
      - name: manager
        imageUrl: <private>/cluster-api/cluster-api-controller:v1.11.2

  5. in-cluster IPAM provider:

-- ipam.yaml

apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: IPAMProvider
metadata:
  name: {{ $ipamName }}
  namespace: {{ $ipamNamespace }}
spec:
{{- if $ipam.deployment }}
  deployment: {{ toYaml $ipam.deployment | nindent 4 }}
{{- end }}

-- values:

ipam:
  in-cluster:
    deployment:
      imagePullSecrets:
      - name: <secret>
      containers:
      - name: manager
        imageUrl: <private>/capi-ipam-ic/cluster-api-ipam-in-cluster-controller:v1.0.3

If it makes sense, could you please add this to the upstream chart? Otherwise, could you please let me know if what I am looking for is already supported in some other way?

Anything else you would like to add:

I think it also makes sense to update all provider templates to fetch "additionalDeployments" from the specific provider's values instead of from $.Values.additionalDeployments.
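
A rough sketch of what I mean for the infra template, assuming the chart renders one InfrastructureProvider per entry under .Values.infrastructure (the range construct and the namespace below are only illustrative and may not match the chart's actual template structure):

{{- range $infrastructureName, $infra := .Values.infrastructure }}
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
  name: {{ $infrastructureName }}
  namespace: {{ $infrastructureName }}-infrastructure-system   # placeholder namespace
spec:
{{- if $infra.additionalDeployments }}
  # taken from this provider's values, not from $.Values.additionalDeployments
  additionalDeployments: {{ toYaml $infra.additionalDeployments | nindent 4 }}
{{- end }}
{{- if $infra.deployment }}
  deployment: {{ toYaml $infra.deployment | nindent 4 }}
{{- end }}
{{- end }}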

It would also make sense to separate the "configSecret" when multiple Infrastructure Providers are defined. For example, the azure provider has nothing to do with the credentials for the nutanix provider, and vice versa.
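
For example, the values could look something along these lines (the configSecret layout is only a suggestion, and the secret names are placeholders):

infrastructure:
  azure:
    configSecret:
      name: azure-variables      # credentials used only by the azure provider
    deployment:
      imagePullSecrets:
      - name: <secret name>
  nutanix:
    configSecret:
      name: nutanix-variables    # credentials used only by the nutanix provider
    deployment:
      imagePullSecrets:
      - name: <secret name>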

/kind feature
