feat: Ensured that we have ClusterAccess CRD installed #60
Conversation
Walkthrough
Adds a new ClusterAccess setup guide and hack docs, significantly rewrites the ClusterAccess flow and related docs, removes an obsolete multicluster flow doc, updates the README operation modes, refines multiple docs (authorization, gateway, listener, quickstart, local test, subscriptions, virtual workspaces), and enhances the create-clusteraccess.sh script with CRD install, cleanup, and token/kubeconfig handling.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- scripts/create-clusteraccess.sh (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
scripts/create-clusteraccess.sh (1)
- scripts/test-clusteraccess-integration.sh (2): log_info (12-14), log_error (20-22)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: pipe / dockerBuild / docker
- GitHub Check: pipe / testSource / test
- GitHub Check: Analyze (go)
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 0
🧹 Nitpick comments (2)
scripts/create-clusteraccess.sh (2)
102-139: Also wait when the CRD already exists; parametrize the CRD name

Nice addition. One edge case: if another process just applied the CRD, `kubectl get crd` can succeed before the CRD becomes Established. Add the same `kubectl wait` path in the "already installed" branch. While here, DRY the CRD name to avoid typos.

```diff
 ensure_crd_installed() {
-    log_info "Checking if ClusterAccess CRD is installed in management cluster..."
+    log_info "Checking if ClusterAccess CRD is installed in management cluster..."
+    local CRD_NAME="clusteraccesses.gateway.platform-mesh.io"

     # Check if CRD exists
-    if ! KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl get crd clusteraccesses.gateway.platform-mesh.io &>/dev/null; then
+    if ! KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl get crd "$CRD_NAME" &>/dev/null; then
         log_info "ClusterAccess CRD not found. Installing CRD..."
@@
-        if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f "$CRD_PATH"; then
+        if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f "$CRD_PATH"; then
             log_info "ClusterAccess CRD installed successfully"

             # Wait for CRD to reach Established condition
             log_info "Waiting for ClusterAccess CRD to become established..."
-            if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl wait --for=condition=Established crd/clusteraccesses.gateway.platform-mesh.io --timeout=60s; then
+            if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl wait --for=condition=Established "crd/${CRD_NAME}" --timeout=60s; then
                 log_info "ClusterAccess CRD is now established and ready"
             else
                 log_error "ClusterAccess CRD failed to reach Established condition within 60 seconds"
                 exit 1
             fi
         else
             log_error "Failed to install ClusterAccess CRD"
             exit 1
         fi
     else
-        log_info "ClusterAccess CRD is already installed"
+        log_info "ClusterAccess CRD is already installed"
+        log_info "Ensuring CRD is Established..."
+        if ! KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl wait --for=condition=Established "crd/${CRD_NAME}" --timeout=60s; then
+            log_error "ClusterAccess CRD exists but did not become Established within 60 seconds"
+            exit 1
+        fi
     fi
 }
```
216-218: Good placement; consider moving before cleanup to reduce noise

Calling `ensure_crd_installed` is the right addition. Optionally move it before `cleanup_existing_resources` so the initial `kubectl get clusteraccess` doesn't log a warning on clusters where the CRD isn't present yet.

```diff
-# Clean up existing resources if they exist
-cleanup_existing_resources
-
-# Ensure CRD is installed in management cluster
-ensure_crd_installed
+# Ensure CRD is installed in management cluster
+ensure_crd_installed
+
+# Clean up existing resources if they exist
+cleanup_existing_resources
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- scripts/create-clusteraccess.sh (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
scripts/create-clusteraccess.sh (1)
- scripts/test-clusteraccess-integration.sh (2): log_info (12-14), log_error (20-22)
Often I see such scripts in a directory called `hack` instead of `scripts`.
Can you leave documentation, possibly as a header comment in the script or as an additional README, on the purpose of this script and how it can be used?
Moved to the hack dir and added a small README about usage.
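For reference, a minimal sketch of what such a header comment could look like (illustrative wording, not the committed header; the flags match the usage shown later in this thread):

```bash
#!/usr/bin/env bash
# create-clusteraccess.sh
#
# Purpose: create a ClusterAccess resource so kubernetes-graphql-gateway can
# work with a standard Kubernetes cluster. Installs the ClusterAccess CRD if
# missing, creates an admin ServiceAccount and token in the target cluster,
# and stores the resulting kubeconfig/secret in the management cluster.
#
# Usage:
#   ./hack/create-clusteraccess.sh \
#     --target-kubeconfig <path>      # cluster the gateway should access
#     --management-kubeconfig <path>  # cluster where the ClusterAccess lives
```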
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
hack/create-clusteraccess.sh (1)
169-170: Move CRD installation before cleanup to ensure proper ordering.

The `cleanup_existing_resources` function (line 170) attempts to delete ClusterAccess resources, but `ensure_crd_installed` isn't called until line 217. If the CRD doesn't exist when cleanup runs, the `kubectl get clusteraccess` command at line 145 may fail or produce unexpected behavior.

Move the CRD installation earlier in the flow:

```diff
 log_info "Management cluster is accessible"

+# Ensure CRD is installed in management cluster
+ensure_crd_installed
+
 # Clean up existing resources if they exist
 cleanup_existing_resources

 # Extract server URL from target kubeconfig
```

Then remove the duplicate call:

```diff
 fi

 log_info "Management cluster is accessible"

-# Ensure CRD is installed in management cluster
-ensure_crd_installed
-
 # Create kubeconfig secret in management cluster
```

Also applies to: 216-217
🧹 Nitpick comments (1)
hack/create-clusteraccess.md (1)
3-3: Consider standardizing Kubernetes terminology.

The phrase "Standard K8S cluster" uses informal capitalization. For consistency with technical documentation standards, consider using "standard Kubernetes cluster" or "standard K8s cluster".

```diff
-This script is used to create a ClusterAccess resource, which is needed for kubernetes-graphql-gateway to work with Standard K8S cluster.
+This script is used to create a ClusterAccess resource, which is needed for kubernetes-graphql-gateway to work with a standard Kubernetes cluster.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- hack/create-clusteraccess.md (1 hunks)
- hack/create-clusteraccess.sh (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
hack/create-clusteraccess.sh (1)
- hack/test-clusteraccess-integration.sh (2): log_info (12-14), log_error (20-22)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: pipe / testSource / test
- GitHub Check: pipe / dockerBuild / docker
- GitHub Check: Analyze (go)
🔇 Additional comments (1)
hack/create-clusteraccess.md (1)
5-5: Link verified: `../docs/clusteraccess.md` exists. No further action needed.
Actionable comments posted: 0
🧹 Nitpick comments (2)
hack/create-clusteraccess.sh (2)
131-132: Consider extracting hardcoded values as script-level constants.

The ServiceAccount name and namespace are hardcoded here and repeated at lines 216-217. Extract these as constants at the top of the script for easier maintenance and consistency.

Apply this diff to extract the constants:

```diff
+# ServiceAccount configuration
+SA_NAME="kubernetes-graphql-gateway-admin"
+SA_NAMESPACE="default"
+
 # Default values
 TARGET_KUBECONFIG=""
 MANAGEMENT_KUBECONFIG="${KUBECONFIG:-$HOME/.kube/config}"
```

Then update the function to use these constants:

```diff
 cleanup_existing_resources() {
     log_info "Checking for existing ClusterAccess resource '$CLUSTER_NAME'..."

-    SA_NAME="kubernetes-graphql-gateway-admin"
-    SA_NAMESPACE="default"
-
     # Check if ClusterAccess exists in management cluster
```
249-256: Replace arbitrary sleep with polling for token availability.

The 2-second sleep is arbitrary and may not be sufficient in all cases. Token population timing can vary based on cluster load and configuration.

Apply this diff to implement proper polling:

```diff
-log_info "Waiting for token to be populated..."
-sleep 2
-
-ADMIN_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret $SA_NAME-token -n $SA_NAMESPACE -o jsonpath='{.data.token}' | base64 -d)
+log_info "Waiting for token to be populated..."
+RETRY_COUNT=0
+MAX_RETRIES=30
+ADMIN_TOKEN=""
+
+while [[ $RETRY_COUNT -lt $MAX_RETRIES ]]; do
+    ADMIN_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret $SA_NAME-token -n $SA_NAMESPACE -o jsonpath='{.data.token}' 2>/dev/null | base64 -d 2>/dev/null)
+    if [[ -n "$ADMIN_TOKEN" ]]; then
+        break
+    fi
+    sleep 1
+    RETRY_COUNT=$((RETRY_COUNT + 1))
+done
+
 if [[ -z "$ADMIN_TOKEN" ]]; then
-    log_error "Failed to retrieve admin token"
+    log_error "Failed to retrieve admin token after ${MAX_RETRIES} seconds"
     exit 1
 fi
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- README.md (2 hunks)
- docs/authorization.md (2 hunks)
- docs/clusteraccess-setup.md (1 hunks)
- docs/clusteraccess.md (1 hunks)
- docs/gateway.md (2 hunks)
- docs/listener.md (1 hunks)
- docs/local_test.md (4 hunks)
- docs/multicluster-kubeconfig-flow.md (0 hunks)
- docs/quickstart.md (3 hunks)
- docs/subscriptions.md (4 hunks)
- docs/virtual-workspaces.md (1 hunks)
- gateway/manager/manager.go (1 hunks)
- gateway/manager/roundtripper/roundtripper.go (4 hunks)
- gateway/manager/roundtripper/roundtripper_test.go (11 hunks)
- hack/create-clusteraccess.sh (4 hunks)
💤 Files with no reviewable changes (1)
- docs/multicluster-kubeconfig-flow.md
✅ Files skipped from review due to trivial changes (2)
- docs/clusteraccess-setup.md
- docs/virtual-workspaces.md
🧰 Additional context used
🧬 Code graph analysis (4)
gateway/manager/roundtripper/roundtripper_test.go (2)
- gateway/manager/mocks/mock_RoundTripper.go (1): MockRoundTripper (12-14)
- gateway/manager/roundtripper/roundtripper.go (1): New (27-48)

hack/create-clusteraccess.sh (1)
- hack/test-clusteraccess-integration.sh (2): log_info (12-14), log_error (20-22)

gateway/manager/roundtripper/roundtripper.go (1)
- common/config/config.go (1): Config (3-36)

gateway/manager/manager.go (1)
- gateway/manager/roundtripper/roundtripper.go (2): New (27-48), NewUnauthorizedRoundTripper (51-53)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: pipe / testSource / test
- GitHub Check: pipe / dockerBuild / docker
- GitHub Check: Analyze (go)
🔇 Additional comments (8)
gateway/manager/manager.go (1)
28-30: LGTM! The roundtripper factory correctly passes the `tlsConfig` parameter to align with the updated constructor signature.

gateway/manager/roundtripper/roundtripper_test.go (3)
14-14: LGTM! All test cases were consistently updated to pass the `TLSClientConfig` parameter to the roundtripper constructor, properly aligning with the updated signature.
Also applies to: 81-81, 273-273, 387-387, 462-462, 511-511, 540-540, 580-580
51-54: LGTM! The test correctly handles the baseRT transport behavior by skipping mock expectations when the real base transport is used (expectedStatusCode 0). This accurately reflects the implementation change where bearer-token authentication now uses baseRT instead of adminRT.
Also applies to: 91-95
522-522: LGTM! Skipping this test is a reasonable decision since it would require mocking an internal implementation detail (baseRT). The comment clearly explains the rationale.
gateway/manager/roundtripper/roundtripper.go (4)
27-48: LGTM! The TLS-based base transport creation is well-implemented, with appropriate error handling and a fallback to `http.DefaultTransport`. The constructor signature update properly extends the API to support TLS configuration.
84-85: LGTM! Cloning the request before header manipulation prevents unintended side effects on the original request, and unconditionally removing the Authorization header prevents credential leakage. Both are good security practices.
115-121: LGTM! In impersonation mode, the Authorization header is correctly set with the bearer token after cloning the request, and the impersonating round tripper properly wraps the adminRT. This maintains the expected behavior for impersonation.
87-90: Document bearer token transport change in docs
Non-impersonation now uses `baseRT` (the TLS-configured transport) instead of `adminRT` for bearer-token auth. Add this change to the PR description and relevant docs (e.g. CHANGELOG.md or docs/), confirm it aligns with the security model, and include tests covering user-token validation against the cluster.
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
* added nil checks in relation resolving On-behalf-of: @SAP [email protected] Signed-off-by: Artem Shcherbatiuk <[email protected]>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
force-pushed from 50ca309 to e332a37
…teway into feat/script
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
hack/create-clusteraccess.sh (1)
262-269: Don't store the personal kubeconfig; generate a kubeconfig using the SA token + CA.

Using the local kubeconfig in a Secret can leak personal creds/exec plugins and break in CI. Prefer a minimal kubeconfig built from SERVER_URL, the CA, and the ServiceAccount token you just created.

```diff
-log_info "Creating admin kubeconfig secret in management cluster..."
-KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl create secret generic "${CLUSTER_NAME}-admin-kubeconfig" \
-  --namespace="$NAMESPACE" \
-  --from-file=kubeconfig="$TARGET_KUBECONFIG" \
-  --dry-run=client -o yaml | \
-KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f -
+log_info "Creating admin kubeconfig secret in management cluster (ServiceAccount token)..."
+cat <<KCF | KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl create secret generic "${CLUSTER_NAME}-admin-kubeconfig" \
+  --namespace="$NAMESPACE" \
+  --from-file=kubeconfig=/dev/stdin \
+  --dry-run=client -o yaml | KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f -
+apiVersion: v1
+kind: Config
+clusters:
+- name: ${CLUSTER_NAME}
+  cluster:
+    server: ${SERVER_URL}
+    certificate-authority-data: $(echo -n "$CA_CERT" | base64 -w0)
+contexts:
+- name: ${CLUSTER_NAME}
+  context:
+    cluster: ${CLUSTER_NAME}
+    user: ${SA_NAME}
+current-context: ${CLUSTER_NAME}
+users:
+- name: ${SA_NAME}
+  user:
+    token: ${ADMIN_TOKEN}
+KCF
```

Also applies to: 279-297
🧹 Nitpick comments (3)
hack/create-clusteraccess.sh (1)
131-133: Avoid duplicate SA vars and a hardcoded namespace.

SA_NAME/SA_NAMESPACE are defined twice, and SA_NAMESPACE is hardcoded to "default", ignoring the --namespace flag. Define them once and derive SA_NAMESPACE from NAMESPACE.

```diff
-    SA_NAME="kubernetes-graphql-gateway-admin"
-    SA_NAMESPACE="default"
+    SA_NAME="kubernetes-graphql-gateway-admin"
+    SA_NAMESPACE="${NAMESPACE}"
 ...
-SA_NAME="kubernetes-graphql-gateway-admin"
-SA_NAMESPACE="default"
+# Reuse previously set values
+# SA_NAME and SA_NAMESPACE already set; keep them consistent with --namespace
```

Also applies to: 214-218
README.md (1)
27-30: Minor: unify casing of "ClusterAccess".

Link text shows "Clusteraccess"; elsewhere the docs use "ClusterAccess". Standardize for consistency.
docs/clusteraccess.md (1)
120-129: Add CRD apply command specificity.

Instead of the generic config/crd/ directory, reference the exact ClusterAccess CRD manifest path used by the script, for consistency.
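For example (a sketch; the manifest filename below is hypothetical and should be replaced with the actual file under config/crd/):

```bash
# Hypothetical manifest name; use the real ClusterAccess CRD file from config/crd/
kubectl apply -f config/crd/gateway.platform-mesh.io_clusteraccesses.yaml
```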
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- README.md (2 hunks)
- docs/authorization.md (2 hunks)
- docs/clusteraccess-setup.md (1 hunks)
- docs/clusteraccess.md (1 hunks)
- docs/gateway.md (2 hunks)
- docs/listener.md (1 hunks)
- docs/local_test.md (4 hunks)
- docs/multicluster-kubeconfig-flow.md (0 hunks)
- docs/quickstart.md (3 hunks)
- docs/subscriptions.md (4 hunks)
- docs/virtual-workspaces.md (1 hunks)
- hack/create-clusteraccess.md (1 hunks)
- hack/create-clusteraccess.sh (4 hunks)
💤 Files with no reviewable changes (1)
- docs/multicluster-kubeconfig-flow.md
✅ Files skipped from review due to trivial changes (3)
- hack/create-clusteraccess.md
- docs/gateway.md
- docs/listener.md
🚧 Files skipped from review as they are similar to previous changes (3)
- docs/virtual-workspaces.md
- docs/clusteraccess-setup.md
- docs/subscriptions.md
🧰 Additional context used
🧬 Code graph analysis (1)
hack/create-clusteraccess.sh (1)
- hack/test-clusteraccess-integration.sh (2): log_info (12-14), log_error (20-22)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: pipe / dockerBuild / docker
- GitHub Check: pipe / testSource / test
- GitHub Check: Analyze (go)
🔇 Additional comments (4)
hack/create-clusteraccess.sh (1)
102-126: Good: idempotent CRD apply + Established wait.

The new ensure_crd_installed flow uses kubectl apply (idempotent) and waits for Established, with clear errors and logs. LGTM.
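Condensed, the pattern the script follows is roughly (a sketch; `$CRD_PATH` and the kubeconfig variable come from the script itself):

```bash
# apply is idempotent: it succeeds whether or not the CRD already exists
KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f "$CRD_PATH"
# then block until the API server reports the CRD as usable
KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl wait \
  --for=condition=Established crd/clusteraccesses.gateway.platform-mesh.io \
  --timeout=60s
```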
docs/quickstart.md (1)
17-20: LGTM: clearer setup flow and proper Bearer header.

Operation Modes guidance and Authorization header examples are correct and consistent.
Also applies to: 55-63
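As a sanity check, the Bearer-header style described there can be exercised with curl; note the endpoint URL and query below are illustrative assumptions, not taken from the docs:

```bash
# Hypothetical gateway address and cluster path; substitute your own.
# $ADMIN_TOKEN is the ServiceAccount token produced by hack/create-clusteraccess.sh.
curl -s http://localhost:8080/my-cluster/graphql \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query":"{ __typename }"}'
```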
docs/local_test.md (1)
8-13: LGTM: updated repo path and clearer instructions.
Also applies to: 16-18, 35-40, 52-53, 72-73
docs/authorization.md (1)
21-21: Verify the config loader reads the new env var
No occurrences of `GATEWAY_INTROSPECTION_AUTHENTICATION` or the old `INTROSPECTION_AUTHENTICATION` were found in code; ensure the implementation is updated to use `GATEWAY_INTROSPECTION_AUTHENTICATION`.
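The check can be reproduced locally with something like the following (assumes ripgrep; the directories are those referenced elsewhere in this review):

```bash
# Zero hits for the new name means the config loader still needs updating.
rg -n 'GATEWAY_INTROSPECTION_AUTHENTICATION|INTROSPECTION_AUTHENTICATION' common/ gateway/
```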
```diff
-When GraphiQL page is loaded, it makes a request to fetch the GraphQL schema and there is no way to add the `Authorization` header to that request.
+When the GraphiQL page is loaded, it makes a request to fetch the GraphQL schema, and there is no way to add the `Authorization` header to that request.

 We have this [issue](https://github.com/openmfp/kubernetes-graphql-gateway/issues/217) open to fix this.
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Check GitHub issue link target.
Link points to openmfp/kubernetes-graphql-gateway#217. Confirm the org/repo is correct (this repo appears under platform-mesh).
🏁 Script executed:

```bash
#!/bin/bash
gh issue view 217 -R openmfp/kubernetes-graphql-gateway || echo "Issue not found under openmfp"
```

Length of output: 523
Update issue link and wording
- In docs/authorization.md (line 28), change the URL to point to platform-mesh/kubernetes-graphql-gateway#217 and adjust the text since the issue is closed (e.g., replace “open to fix this” with “see issue” or past tense).
🤖 Prompt for AI Agents
In docs/authorization.md around line 28, the issue link and wording reference
the wrong repo and use present-tense phrasing for a closed issue; update the URL
to https://github.com/platform-mesh/kubernetes-graphql-gateway/issues/217 (or
the repo’s correct issue path) and change the text from “open to fix this” to
past-tense or neutral phrasing such as “see issue” or “see issue #217” so the
sentence correctly points to platform-mesh/kubernetes-graphql-gateway#217 and
reflects the issue state.
```bash
# Create ClusterAccess resource with admin token authentication
./hack/create-clusteraccess.sh --target-kubeconfig ~/.kube/platform-mesh-config --management-kubeconfig ~/.kube/platform-mesh-config
```
Wording mismatch: “admin token authentication” vs kubeconfig-based spec.
Quick Setup comment says admin token, but example/spec use kubeconfigSecretRef and CA. Align the text to kubeconfig-based auth or update the spec to token-based if that’s intended.
Also applies to: 36-47
🤖 Prompt for AI Agents
In docs/clusteraccess.md around lines 12-14 (and similarly for lines 36-47), the
heading and Quick Setup text say "admin token authentication" but the
example/spec use kubeconfigSecretRef and CA (kubeconfig-based auth); update the
wording to refer to "kubeconfig-based authentication" (or "kubeconfig
authentication") and adjust the Quick Setup description to mention using a
kubeconfig secret and the --target-kubeconfig/--management-kubeconfig flags,
ensuring the language matches the provided spec and examples; apply the same
wording change and consistency fixes to lines 36-47 as well.
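To make the kubeconfig-based wording above concrete, a spec along these lines is what the comment describes; the apiVersion and field layout are inferred from the review's mention of kubeconfigSecretRef and are not verified against the actual CRD schema:

```bash
# Sketch only: apiVersion and field names are assumptions, not the verified schema.
kubectl apply -f - <<'EOF'
apiVersion: gateway.platform-mesh.io/v1alpha1
kind: ClusterAccess
metadata:
  name: my-cluster
spec:
  kubeconfigSecretRef:
    name: my-cluster-admin-kubeconfig
    namespace: default
EOF
```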
```bash
./scripts/test-clusteraccess-integration.sh
```
🧩 Analysis chain
Broken script path.
Docs reference ./scripts/test-clusteraccess-integration.sh, but the repo appears to have hack/test-clusteraccess-integration.sh.
🏁 Script executed:

```bash
#!/bin/bash
fd -a 'test-clusteraccess-integration.sh'
```

Length of output: 57
Update script path in docs/clusteraccess.md: replace `./scripts/test-clusteraccess-integration.sh` with `hack/test-clusteraccess-integration.sh`.
🤖 Prompt for AI Agents
In docs/clusteraccess.md around lines 113 to 114, the script path is outdated:
replace the reference to ./scripts/test-clusteraccess-integration.sh with the
new path hack/test-clusteraccess-integration.sh. Update the markdown snippet or
inline text accordingly so all occurrences use
hack/test-clusteraccess-integration.sh and ensure formatting/backticks remain
correct.
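Mechanically, the suggested fix is a one-line substitution (GNU sed shown; on macOS/BSD use `sed -i ''`):

```bash
# Point the doc at the script's new location under hack/
sed -i 's|\./scripts/test-clusteraccess-integration\.sh|hack/test-clusteraccess-integration.sh|g' docs/clusteraccess.md
```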
log_info "Waiting for token to be populated..." | ||
sleep 2 | ||
|
||
ADMIN_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret $SA_NAME-token -n $SA_NAMESPACE -o jsonpath='{.data.token}' | base64 -d) | ||
if [[ -z "$ADMIN_TOKEN" ]]; then | ||
log_error "Failed to retrieve admin token" | ||
exit 1 | ||
fi | ||
log_info "Admin token created successfully" |
Race condition: token may not be ready after fixed sleep.
Fixed sleep can fail intermittently. Poll for the token with a timeout and quote vars.
```diff
-log_info "Waiting for token to be populated..."
-sleep 2
-
-ADMIN_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret $SA_NAME-token -n $SA_NAMESPACE -o jsonpath='{.data.token}' | base64 -d)
-if [[ -z "$ADMIN_TOKEN" ]]; then
+log_info "Waiting for token to be populated..."
+for i in {1..30}; do
+  RAW_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret "${SA_NAME}-token" -n "${SA_NAMESPACE}" -o jsonpath='{.data.token}' 2>/dev/null || true)
+  if [[ -n "$RAW_TOKEN" ]]; then break; fi
+  sleep 1
+done
+if [[ -z "$RAW_TOKEN" ]]; then
   log_error "Failed to retrieve admin token"
   exit 1
 fi
+ADMIN_TOKEN=$(echo -n "$RAW_TOKEN" | base64 -d 2>/dev/null || true)
+if [[ -z "$ADMIN_TOKEN" ]]; then
+  log_error "ServiceAccount token found but failed to decode"
+  exit 1
+fi
 log_info "Admin token created successfully"
```
echo " - ServiceAccount: $SA_NAME (in target cluster namespace: $SA_NAMESPACE)" | ||
|
||
|
||
|
||
echo "" | ||
echo "" | ||
log_info "Admin access token for direct API access (copy-paste ready):" | ||
echo '```' | ||
echo '{' | ||
echo " \"Authorization\": \"Bearer $ADMIN_TOKEN\"" | ||
echo '}' | ||
echo '```' |
Avoid printing the admin token to stdout.
This leaks credentials into logs/history. Gate behind a flag (e.g., --print-token) or omit entirely; at least add a clear warning and prompt.
-echo ""
-echo ""
-log_info "Admin access token for direct API access (copy-paste ready):"
-echo '```'
-echo '{'
-echo " \"Authorization\": \"Bearer $ADMIN_TOKEN\""
-echo '}'
-echo '```'
+if [[ "${PRINT_TOKEN:-false}" == "true" ]]; then
+ log_warn "Printing sensitive token to stdout as requested (PRINT_TOKEN=true). Handle with care."
+ printf '\n\n'
+ log_info "Admin access token for direct API access (copy-paste ready):"
+ echo '```'
+ echo '{'
+ echo " \"Authorization\": \"Bearer '"$ADMIN_TOKEN"\""
+ echo '}'
+ echo '```'
+fi
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In hack/create-clusteraccess.sh around lines 307-318, the script currently
prints the admin token to stdout, which leaks credentials. Gate printing behind
a flag and add a clear warning: add a conditional that checks an env/CLI flag
such as PRINT_TOKEN (default false) and only print the token when it is set to
"true"; before printing, emit a log_warn message indicating sensitive output and
instructing the user to handle it carefully. Ensure the token output formatting
remains unchanged when printed, and otherwise omit the token entirely (no echo
to stdout) so it is not recorded in logs/history.
I improved the script that creates a ClusterAccess resource so that it ensures the CRD is registered first.
Summary by CodeRabbit

New Features
- The create-clusteraccess script now installs the ClusterAccess CRD when missing, cleans up existing resources, and handles token/kubeconfig creation.

Documentation
- Added a ClusterAccess setup guide and a hack/ usage README, rewrote the ClusterAccess flow docs, removed the obsolete multicluster flow doc, and updated the README operation modes.