Additional worker node pools are not removed from shoot #640

Open
KsaweryZietara opened this issue Jan 31, 2025 · 1 comment
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@KsaweryZietara

Description

  1. I created an instance with two additional worker node pools:
    "additionalWorkerNodePools": [
        {
            "name": "test-ha",
            "machineType": "m6i.large",
            "haZones": true,
            "autoScalerMin": 3,
            "autoScalerMax": 3
        },
        {
            "name": "test-1",
            "machineType": "m6i.large",
            "haZones": false,
            "autoScalerMin": 1,
            "autoScalerMax": 1
        }
    ]
  2. I sent an update request to remove one of them (a sketch of such an update call follows the Runtime CR output below):
    "additionalWorkerNodePools": [
        {
            "name": "test-1",
            "machineType": "m6i.large",
            "haZones": false,
            "autoScalerMin": 1,
            "autoScalerMax": 1
        }
    ]
  3. The Runtime CR has been updated correctly:
➜  ~ k get runtimes -n kcp-system 9cb74d4d-9b41-44ad-9103-4edfbd08ff18 -oyaml
apiVersion: infrastructuremanager.kyma-project.io/v1
kind: Runtime
metadata:
  creationTimestamp: "2025-01-31T09:34:06Z"
  finalizers:
  - runtime-controller.infrastructure-manager.kyma-project.io/deletion-hook
  generation: 2
  labels:
    kyma-project.io/broker-plan-id: 5cb3d976-b85c-42ea-a636-79cadda109a9
    kyma-project.io/broker-plan-name: preview
    kyma-project.io/controlled-by-provisioner: "false"
    kyma-project.io/global-account-id: 8cd57dc2-edb2-45e0-af8b-7d881006e516
    kyma-project.io/instance-id: 6769A7CF-FEB2-46CB-990E-904BAEA77B9E
    kyma-project.io/platform-region: cf-us10-staging
    kyma-project.io/provider: AWS
    kyma-project.io/region: eu-central-1
    kyma-project.io/runtime-id: 9cb74d4d-9b41-44ad-9103-4edfbd08ff18
    kyma-project.io/shoot-name: c-7f099bf
    kyma-project.io/subaccount-id: b650c927-2641-4f31-9bc2-926c9879b11a
    operator.kyma-project.io/kyma-name: 9cb74d4d-9b41-44ad-9103-4edfbd08ff18
  name: 9cb74d4d-9b41-44ad-9103-4edfbd08ff18
  namespace: kcp-system
  resourceVersion: "4444630078"
  uid: 9dd1a8ae-e3a2-44ad-9b8f-baa10e0a39a5
spec:
  security:
    administrators:
    - [email protected]
    networking:
      filter:
        egress:
          enabled: true
        ingress:
          enabled: false
  shoot:
    kubernetes:
      kubeAPIServer:
        oidcConfig:
          clientID: 9bd05ed7-a930-44e6-8c79-e6defeb7dec9
          groupsClaim: groups
          issuerURL: https://kymatest.accounts400.ondemand.com
          signingAlgs:
          - RS256
          usernameClaim: sub
          usernamePrefix: '-'
      version: "1.31"
    name: c-7f099bf
    networking:
      nodes: 10.250.0.0/16
      pods: 10.96.0.0/13
      services: 10.104.0.0/13
      type: calico
    platformRegion: cf-us10-staging
    provider:
      additionalWorkers:
      - machine:
          image:
            name: gardenlinux
            version: 1592.4.0
          type: m6i.large
        maxSurge: 1
        maxUnavailable: 0
        maximum: 1
        minimum: 1
        name: test-1
        volume:
          size: 80Gi
          type: gp3
        zones:
        - eu-central-1c
      type: aws
      workers:
      - machine:
          image:
            name: gardenlinux
            version: 1592.4.0
          type: m6i.large
        maxSurge: 3
        maxUnavailable: 0
        maximum: 20
        minimum: 3
        name: cpu-worker-0
        volume:
          size: 80Gi
          type: gp3
        zones:
        - eu-central-1c
        - eu-central-1b
        - eu-central-1a
    purpose: development
    region: eu-central-1
    secretBindingName: sap-aws-skr-dev-cust-00002-kyma-integration
status:
  conditions:
  - lastTransitionTime: "2025-01-31T09:46:40Z"
    message: Runtime processing completed successfully
    reason: ConfigurationCompleted
    status: "True"
    type: Provisioned
  - lastTransitionTime: "2025-01-31T09:42:31Z"
    message: Gardener Cluster CR is ready.
    reason: GardenerClusterCRReady
    status: "True"
    type: KubeconfigReady
  - lastTransitionTime: "2025-01-31T09:42:31Z"
    message: OIDC configuration completed
    reason: OidcConfigured
    status: "True"
    type: OidcConfigured
  - lastTransitionTime: "2025-01-31T09:42:32Z"
    message: Cluster admin configuration complete
    reason: AdministratorsConfigured
    status: "True"
    type: Configured
  state: Ready
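
For context, a minimal sketch of the update request from step 2 sent via the broker's OSB API. The broker host, bearer token, and service ID are placeholders; the plan ID and instance ID are taken from the Runtime CR labels above, and the exact KEB endpoint prefix may differ:

```bash
# Hypothetical OSB update call (PATCH on the service instance) that keeps only
# the "test-1" additional worker node pool. <keb-broker-host>, <token>, and
# <kyma-service-id> are placeholders, not values from this report.
curl -X PATCH "https://<keb-broker-host>/v2/service_instances/6769A7CF-FEB2-46CB-990E-904BAEA77B9E?accepts_incomplete=true" \
  -H "X-Broker-API-Version: 2.14" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "service_id": "<kyma-service-id>",
    "plan_id": "5cb3d976-b85c-42ea-a636-79cadda109a9",
    "parameters": {
      "additionalWorkerNodePools": [
        {
          "name": "test-1",
          "machineType": "m6i.large",
          "haZones": false,
          "autoScalerMin": 1,
          "autoScalerMax": 1
        }
      ]
    }
  }'
```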

Expected result

In Gardener, I should see two worker groups: cpu-worker-0 and test-1.

Actual result

Instead of two worker groups, I see three, including the removed worker node pool:
[Screenshot: the Gardener dashboard lists three worker pools, including the removed test-ha pool]
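
A quick way to show the mismatch is to compare the pools requested in the Runtime CR with the pools actually present on the Shoot. The commands below are a sketch; the garden-<project> namespace is a placeholder, and the second command must be run against the Gardener cluster:

```bash
# Pools requested by KEB, read from the Runtime CR shown above (KCP cluster):
kubectl get runtimes -n kcp-system 9cb74d4d-9b41-44ad-9103-4edfbd08ff18 \
  -o jsonpath='{.spec.shoot.provider.workers[*].name} {.spec.shoot.provider.additionalWorkers[*].name}'
# -> cpu-worker-0 test-1

# Pools actually present on the Shoot (Gardener cluster; namespace is a placeholder):
kubectl get shoot c-7f099bf -n garden-<project> \
  -o jsonpath='{.spec.provider.workers[*].name}'
# observed: cpu-worker-0 test-1 test-ha   (the removed pool is still there)
```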

@tobiscr
Contributor

tobiscr commented Jan 31, 2025

@koala7659 - What looks suspicious is the status field: it is Ready. Shouldn't the Runtime CR switch into a Failed state if the deletion of the worker pool was not possible in Gardener?
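
A quick way to confirm this from the KCP side (a sketch, using the IDs from the report above):

```bash
# The overall Runtime state reported by infrastructure-manager:
kubectl get runtimes -n kcp-system 9cb74d4d-9b41-44ad-9103-4edfbd08ff18 \
  -o jsonpath='{.status.state}{"\n"}'
# currently prints: Ready

# The individual conditions and their reasons:
kubectl get runtimes -n kcp-system 9cb74d4d-9b41-44ad-9103-4edfbd08ff18 \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'
```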

tobiscr added the kind/bug label on Jan 31, 2025