
Conversation

vertex451
Contributor

@vertex451 vertex451 commented Sep 29, 2025

I improved the script that creates a ClusterAccess resource so that it ensures the CRD is registered first.

Summary by CodeRabbit

  • New Features

    • Streamlined ClusterAccess setup: script now installs CRDs, cleans existing resources, creates admin ServiceAccount and token, generates kubeconfig secret, and provides clearer final output.
    • Added usage guide for the ClusterAccess creation script.
  • Documentation

    • Reorganized “Operation Modes” and overhauled ClusterAccess docs with a schema-driven flow; added a comprehensive setup guide.
    • Updated gateway, listener, quickstart, authorization (Bearer tokens), subscriptions, local testing, and virtual workspaces docs.
    • Removed obsolete multicluster kubeconfig flow documentation.
    • Numerous clarity, terminology, and formatting improvements.

@vertex451 vertex451 requested a review from a team as a code owner September 29, 2025 12:17

coderabbitai bot commented Sep 29, 2025

Walkthrough

Adds a new ClusterAccess setup guide and hack docs, significantly rewrites ClusterAccess flow and related docs, removes an obsolete multicluster flow doc, updates README operation modes, refines multiple docs (authorization, gateway, listener, quickstart, local test, subscriptions, virtual workspaces), and enhances the create-clusteraccess.sh script with CRD install, cleanup, and token/kubeconfig handling.

Changes

Cohort / File(s) Change summary
ClusterAccess docs overhaul
docs/clusteraccess.md, docs/clusteraccess-setup.md, docs/multicluster-kubeconfig-flow.md
Major restructure to a schema-centric ClusterAccess flow; adds a new setup guide; removes the old kubeconfig-flow document. Updates resource model, schema metadata, and workflow/troubleshooting.
Hack: ClusterAccess helper
hack/create-clusteraccess.md, hack/create-clusteraccess.sh
Adds doc describing the script and its usage. Script gains CRD installation with readiness wait, resource cleanup, SA/token creation with cluster-admin, token extraction, admin kubeconfig secret creation, and revised logging/output.
Operation modes consolidation
README.md, docs/quickstart.md
Renames and simplifies modes to KCP vs ClusterAccess, updates references, links, and examples; improves wording and headers; standardizes Authorization header to Bearer format.
Gateway and Listener docs alignment
docs/gateway.md, docs/listener.md
Clarifies file-per-workspace/cluster semantics, URLs, component responsibilities, and package overviews; aligns with new modes and flow.
Authorization doc tweaks
docs/authorization.md
Uses Bearer token header, renames env var to GATEWAY_INTROSPECTION_AUTHENTICATION, adjusts headings and prose formatting.
Local test guidance
docs/local_test.md
Updates repo reference, adds remediation step, clarifies wording, examples, and podman/kind notes.
Subscriptions editorial pass
docs/subscriptions.md
Formatting, casing, and phrasing refinements; no behavioral changes.
Virtual Workspaces minor edit
docs/virtual-workspaces.md
Minor capitalization/phrasing change to kubeconfig field description.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title Check ✅ Passed The title directly reflects the core change of adding logic to verify that the ClusterAccess CRD is installed before creating the resource, which matches the modifications in hack/create-clusteraccess.sh and the PR objectives. It is concise and specific, clearly indicating the enhancement being introduced. The use of the conventional commit prefix “feat:” is appropriate for this addition.


@vertex451 vertex451 changed the title ensured that we have ClusterAccess CRD installed feat: Ensured that we have ClusterAccess CRD installed Sep 29, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c866c3c and 1a0dd24.

📒 Files selected for processing (1)
  • scripts/create-clusteraccess.sh (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
scripts/create-clusteraccess.sh (1)
scripts/test-clusteraccess-integration.sh (2)
  • log_info (12-14)
  • log_error (20-22)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: pipe / dockerBuild / docker
  • GitHub Check: pipe / testSource / test
  • GitHub Check: Analyze (go)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
scripts/create-clusteraccess.sh (2)

102-139: Also wait when the CRD already exists; parametrize CRD name

Nice addition. One edge case: if another process just applied the CRD, kubectl get crd can succeed before the CRD becomes Established. Add the same kubectl wait path in the “already installed” branch. While here, DRY the CRD name to avoid typos.

 ensure_crd_installed() {
-    log_info "Checking if ClusterAccess CRD is installed in management cluster..."
+    log_info "Checking if ClusterAccess CRD is installed in management cluster..."
+    local CRD_NAME="clusteraccesses.gateway.platform-mesh.io"
     
     # Check if CRD exists
-    if ! KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl get crd clusteraccesses.gateway.platform-mesh.io &>/dev/null; then
+    if ! KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl get crd "$CRD_NAME" &>/dev/null; then
         log_info "ClusterAccess CRD not found. Installing CRD..."
@@
-        if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f "$CRD_PATH"; then
+        if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f "$CRD_PATH"; then
             log_info "ClusterAccess CRD installed successfully"
             
             # Wait for CRD to reach Established condition
             log_info "Waiting for ClusterAccess CRD to become established..."
-            if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl wait --for=condition=Established crd/clusteraccesses.gateway.platform-mesh.io --timeout=60s; then
+            if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl wait --for=condition=Established "crd/${CRD_NAME}" --timeout=60s; then
                 log_info "ClusterAccess CRD is now established and ready"
             else
                 log_error "ClusterAccess CRD failed to reach Established condition within 60 seconds"
                 exit 1
             fi
         else
             log_error "Failed to install ClusterAccess CRD"
             exit 1
         fi
     else
-        log_info "ClusterAccess CRD is already installed"
+        log_info "ClusterAccess CRD is already installed"
+        log_info "Ensuring CRD is Established..."
+        if ! KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl wait --for=condition=Established "crd/${CRD_NAME}" --timeout=60s; then
+            log_error "ClusterAccess CRD exists but did not become Established within 60 seconds"
+            exit 1
+        fi
     fi
 }
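
The wait-with-timeout idea in this suggestion generalizes to a small helper. The sketch below is illustrative only — the `wait_until` name and the marker-file example are not from the script — but it shows the retry shape both the fresh-install and already-installed branches would share:

```shell
#!/usr/bin/env bash
# Generic wait-with-timeout helper: retry a command once per second
# until it succeeds or the deadline passes. Returns non-zero on timeout.
wait_until() {
    local timeout_secs="$1"; shift
    local waited=0
    until "$@"; do
        if (( waited >= timeout_secs )); then
            return 1
        fi
        sleep 1
        waited=$((waited + 1))
    done
}

# Example: wait up to 5 seconds for a marker file created in the background.
marker="$(mktemp -d)/ready"
( sleep 1; touch "$marker" ) &
if wait_until 5 test -f "$marker"; then
    echo "condition met"
else
    echo "timed out"
fi
```

In the script, `kubectl wait --for=condition=Established` already provides this loop for the CRD case; a helper like this only matters for conditions `kubectl wait` cannot express, such as the token secret below.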

216-218: Good placement; consider moving before cleanup to reduce noise

Calling ensure_crd_installed is the right addition. Optionally move it before cleanup_existing_resources so the initial kubectl get clusteraccess doesn’t log a warning on clusters where the CRD isn’t present yet.

-# Clean up existing resources if they exist
-cleanup_existing_resources
-
-# Ensure CRD is installed in management cluster
-ensure_crd_installed
+# Ensure CRD is installed in management cluster
+ensure_crd_installed
+
+# Clean up existing resources if they exist
+cleanup_existing_resources
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1a0dd24 and cec31ce.

📒 Files selected for processing (1)
  • scripts/create-clusteraccess.sh (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
scripts/create-clusteraccess.sh (1)
scripts/test-clusteraccess-integration.sh (2)
  • log_info (12-14)
  • log_error (20-22)

Contributor

@nexus49 nexus49 left a comment


Often I see such scripts in a directory called hack instead of scripts.

Can you add documentation, possibly as a header comment in the script or as an additional README, on the purpose of this script and how it can be used?

@vertex451
Contributor Author

Often I see such scripts in a directory called hack instead of scripts.

Can you add documentation, possibly as a header comment in the script or as an additional README, on the purpose of this script and how it can be used?

Moved to the hack dir and added a small README about usage.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
hack/create-clusteraccess.sh (1)

169-170: Move CRD installation before cleanup to ensure proper ordering.

The cleanup_existing_resources function (line 170) attempts to delete ClusterAccess resources, but ensure_crd_installed isn't called until line 217. If the CRD doesn't exist when cleanup runs, the kubectl get clusteraccess command at line 145 may fail or produce unexpected behavior.

Move the CRD installation earlier in the flow:

 log_info "Management cluster is accessible"
 
+# Ensure CRD is installed in management cluster
+ensure_crd_installed
+
 # Clean up existing resources if they exist
 cleanup_existing_resources
 
 # Extract server URL from target kubeconfig

Then remove the duplicate call:

 fi
 log_info "Management cluster is accessible"
 
-# Ensure CRD is installed in management cluster
-ensure_crd_installed
-
 # Create kubeconfig secret in management cluster

Also applies to: 216-217

🧹 Nitpick comments (1)
hack/create-clusteraccess.md (1)

3-3: Consider standardizing Kubernetes terminology.

The phrase "Standard K8S cluster" uses informal capitalization. For consistency with technical documentation standards, consider using "standard Kubernetes cluster" or "standard K8s cluster".

-This script is used to create a ClusterAccess resource, which is needed for kubernetes-graphql-gateway to work with Standard K8S cluster.
+This script is used to create a ClusterAccess resource, which is needed for kubernetes-graphql-gateway to work with a standard Kubernetes cluster.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cec31ce and 8c23d38.

📒 Files selected for processing (2)
  • hack/create-clusteraccess.md (1 hunks)
  • hack/create-clusteraccess.sh (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
hack/create-clusteraccess.sh (1)
hack/test-clusteraccess-integration.sh (2)
  • log_info (12-14)
  • log_error (20-22)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: pipe / testSource / test
  • GitHub Check: pipe / dockerBuild / docker
  • GitHub Check: Analyze (go)
🔇 Additional comments (1)
hack/create-clusteraccess.md (1)

5-5: Link verified: ../docs/clusteraccess.md exists. No further action needed.

@vertex451 vertex451 requested a review from nexus49 September 30, 2025 19:21

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
hack/create-clusteraccess.sh (2)

131-132: Consider extracting hardcoded values as script-level constants.

The ServiceAccount name and namespace are hardcoded here and repeated at lines 216-217. Extract these as constants at the top of the script for easier maintenance and consistency.

Apply this diff to extract the constants:

+# ServiceAccount configuration
+SA_NAME="kubernetes-graphql-gateway-admin"
+SA_NAMESPACE="default"
+
 # Default values
 TARGET_KUBECONFIG=""
 MANAGEMENT_KUBECONFIG="${KUBECONFIG:-$HOME/.kube/config}"

Then update the function to use these constants:

 cleanup_existing_resources() {
     log_info "Checking for existing ClusterAccess resource '$CLUSTER_NAME'..."
     
-    SA_NAME="kubernetes-graphql-gateway-admin"
-    SA_NAMESPACE="default"
-    
     # Check if ClusterAccess exists in management cluster

249-256: Replace arbitrary sleep with polling for token availability.

The 2-second sleep is arbitrary and may not be sufficient in all cases. Token population timing can vary based on cluster load and configuration.

Apply this diff to implement proper polling:

-log_info "Waiting for token to be populated..."
-sleep 2
-
-ADMIN_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret $SA_NAME-token -n $SA_NAMESPACE -o jsonpath='{.data.token}' | base64 -d)
+log_info "Waiting for token to be populated..."
+RETRY_COUNT=0
+MAX_RETRIES=30
+ADMIN_TOKEN=""
+
+while [[ $RETRY_COUNT -lt $MAX_RETRIES ]]; do
+    ADMIN_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret $SA_NAME-token -n $SA_NAMESPACE -o jsonpath='{.data.token}' 2>/dev/null | base64 -d 2>/dev/null)
+    if [[ -n "$ADMIN_TOKEN" ]]; then
+        break
+    fi
+    sleep 1
+    RETRY_COUNT=$((RETRY_COUNT + 1))
+done
+
 if [[ -z "$ADMIN_TOKEN" ]]; then
-    log_error "Failed to retrieve admin token"
+    log_error "Failed to retrieve admin token after ${MAX_RETRIES} seconds"
     exit 1
 fi
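
The decode step in the loop above deserves its own guard: `jsonpath` returns the secret's `.data.token` base64-encoded, so an empty or corrupt value should fail loudly both before and after decoding. A minimal, self-contained sanity check (dummy token, not the real ServiceAccount secret):

```shell
#!/usr/bin/env bash
# Dummy round trip of the token handling: validate the raw base64
# value before decoding and the decoded result afterward.
RAW_TOKEN=$(printf 'my-sa-token' | base64)
if [[ -z "$RAW_TOKEN" ]]; then
    echo "no token in secret" >&2
    exit 1
fi
ADMIN_TOKEN=$(printf '%s' "$RAW_TOKEN" | base64 -d 2>/dev/null || true)
if [[ -z "$ADMIN_TOKEN" ]]; then
    echo "token found but failed to decode" >&2
    exit 1
fi
echo "decoded: $ADMIN_TOKEN"
```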
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 77d328b and 50ca309.

📒 Files selected for processing (15)
  • README.md (2 hunks)
  • docs/authorization.md (2 hunks)
  • docs/clusteraccess-setup.md (1 hunks)
  • docs/clusteraccess.md (1 hunks)
  • docs/gateway.md (2 hunks)
  • docs/listener.md (1 hunks)
  • docs/local_test.md (4 hunks)
  • docs/multicluster-kubeconfig-flow.md (0 hunks)
  • docs/quickstart.md (3 hunks)
  • docs/subscriptions.md (4 hunks)
  • docs/virtual-workspaces.md (1 hunks)
  • gateway/manager/manager.go (1 hunks)
  • gateway/manager/roundtripper/roundtripper.go (4 hunks)
  • gateway/manager/roundtripper/roundtripper_test.go (11 hunks)
  • hack/create-clusteraccess.sh (4 hunks)
💤 Files with no reviewable changes (1)
  • docs/multicluster-kubeconfig-flow.md
✅ Files skipped from review due to trivial changes (2)
  • docs/clusteraccess-setup.md
  • docs/virtual-workspaces.md
🧰 Additional context used
🧬 Code graph analysis (4)
gateway/manager/roundtripper/roundtripper_test.go (2)
gateway/manager/mocks/mock_RoundTripper.go (1)
  • MockRoundTripper (12-14)
gateway/manager/roundtripper/roundtripper.go (1)
  • New (27-48)
hack/create-clusteraccess.sh (1)
hack/test-clusteraccess-integration.sh (2)
  • log_info (12-14)
  • log_error (20-22)
gateway/manager/roundtripper/roundtripper.go (1)
common/config/config.go (1)
  • Config (3-36)
gateway/manager/manager.go (1)
gateway/manager/roundtripper/roundtripper.go (2)
  • New (27-48)
  • NewUnauthorizedRoundTripper (51-53)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: pipe / testSource / test
  • GitHub Check: pipe / dockerBuild / docker
  • GitHub Check: Analyze (go)
🔇 Additional comments (8)
gateway/manager/manager.go (1)

28-30: LGTM!

The roundtripper factory correctly passes the tlsConfig parameter to align with the updated constructor signature.

gateway/manager/roundtripper/roundtripper_test.go (3)

14-14: LGTM!

All test cases consistently updated to pass the TLSClientConfig parameter to the roundtripper constructor, properly aligning with the updated signature.

Also applies to: 81-81, 273-273, 387-387, 462-462, 511-511, 540-540, 580-580


51-54: LGTM!

The test correctly handles the baseRT transport behavior by skipping mock expectations when the real base transport is used (expectedStatusCode 0). This accurately reflects the implementation change where bearer token authentication now uses baseRT instead of adminRT.

Also applies to: 91-95


522-522: LGTM!

Skipping this test is a reasonable decision since it would require mocking an internal implementation detail (baseRT). The comment clearly explains the rationale.

gateway/manager/roundtripper/roundtripper.go (4)

27-48: LGTM!

The TLS-based base transport creation is well-implemented with appropriate error handling and fallback to http.DefaultTransport. The constructor signature update properly extends the API to support TLS configuration.


84-85: LGTM!

Cloning the request before header manipulation prevents unintended side effects on the original request, and unconditionally removing the Authorization header prevents credential leakage. Both are good security practices.


115-121: LGTM!

In impersonation mode, the Authorization header is correctly set with the bearer token after cloning the request, and the impersonating round tripper properly wraps the adminRT. This maintains the expected behavior for impersonation.


87-90: Document bearer token transport change in docs
Non-impersonation now uses baseRT (TLS-configured transport) instead of adminRT for bearer-token auth. Add this change to the PR description and relevant docs (e.g. CHANGELOG.md or docs/), confirm it aligns with the security model, and include tests covering user-token validation against the cluster.

vertex451 and others added 9 commits October 10, 2025 15:35
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
* added nil checks in relation resolving

On-behalf-of: @SAP [email protected]
Signed-off-by: Artem Shcherbatiuk <[email protected]>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
hack/create-clusteraccess.sh (1)

262-269: Don’t store personal kubeconfig; generate kubeconfig using SA token + CA.

Using the local kubeconfig in a Secret can leak personal creds/exec plugins and break in CI. Prefer a minimal kubeconfig built from SERVER_URL, CA, and the ServiceAccount token you just created.

-log_info "Creating admin kubeconfig secret in management cluster..."
-KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl create secret generic "${CLUSTER_NAME}-admin-kubeconfig" \
-    --namespace="$NAMESPACE" \
-    --from-file=kubeconfig="$TARGET_KUBECONFIG" \
-    --dry-run=client -o yaml | \
-KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f -
+log_info "Creating admin kubeconfig secret in management cluster (ServiceAccount token)..."
+cat <<KCF | KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl create secret generic "${CLUSTER_NAME}-admin-kubeconfig" \
+  --namespace="$NAMESPACE" \
+  --from-file=kubeconfig=/dev/stdin \
+  --dry-run=client -o yaml | KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f -
+apiVersion: v1
+kind: Config
+clusters:
+- name: ${CLUSTER_NAME}
+  cluster:
+    server: ${SERVER_URL}
+    certificate-authority-data: $(echo -n "$CA_CERT" | base64 -w0)
+contexts:
+- name: ${CLUSTER_NAME}
+  context:
+    cluster: ${CLUSTER_NAME}
+    user: ${SA_NAME}
+current-context: ${CLUSTER_NAME}
+users:
+- name: ${SA_NAME}
+  user:
+    token: ${ADMIN_TOKEN}
+KCF

Also applies to: 279-297
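
For inspection, the heredoc-generated kubeconfig can be rendered standalone with placeholder values (every value below is a dummy, not real cluster data; in the real script they come from `SERVER_URL`, `CA_CERT`, `SA_NAME`, and `ADMIN_TOKEN`):

```shell
#!/usr/bin/env bash
# Render the minimal ServiceAccount-token kubeconfig shape with
# placeholder values so the structure can be inspected in isolation.
CLUSTER_NAME=demo
SERVER_URL=https://127.0.0.1:6443
SA_NAME=kubernetes-graphql-gateway-admin
ADMIN_TOKEN=dummy-token
CA_B64=$(printf 'dummy-ca' | base64)

KUBECONFIG_YAML=$(cat <<KCF
apiVersion: v1
kind: Config
clusters:
- name: ${CLUSTER_NAME}
  cluster:
    server: ${SERVER_URL}
    certificate-authority-data: ${CA_B64}
contexts:
- name: ${CLUSTER_NAME}
  context:
    cluster: ${CLUSTER_NAME}
    user: ${SA_NAME}
current-context: ${CLUSTER_NAME}
users:
- name: ${SA_NAME}
  user:
    token: ${ADMIN_TOKEN}
KCF
)
printf '%s\n' "$KUBECONFIG_YAML"
```

Unlike copying the operator's personal kubeconfig, this shape carries exactly one credential: the ServiceAccount token created for the gateway.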

🧹 Nitpick comments (3)
hack/create-clusteraccess.sh (1)

131-133: Avoid duplicate SA vars and hardcoded namespace.

SA_NAME/SA_NAMESPACE are defined twice and SA_NAMESPACE is hardcoded to "default", ignoring the --namespace flag. Define once and derive SA_NAMESPACE from NAMESPACE.

-    SA_NAME="kubernetes-graphql-gateway-admin"
-    SA_NAMESPACE="default"
+    SA_NAME="kubernetes-graphql-gateway-admin"
+    SA_NAMESPACE="${NAMESPACE}"
...
-SA_NAME="kubernetes-graphql-gateway-admin"
-SA_NAMESPACE="default"
+# Reuse previously set values
+# SA_NAME and SA_NAMESPACE already set; keep them consistent with --namespace

Also applies to: 214-218

README.md (1)

27-30: Minor: unify casing of “ClusterAccess”.

Link text shows “Clusteraccess”; elsewhere docs use “ClusterAccess”. Standardize for consistency.

docs/clusteraccess.md (1)

120-129: Add CRD apply command specificity.

Instead of generic config/crd/, reference the exact ClusterAccess CRD path used in the script for consistency.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 50ca309 and 093457e.

📒 Files selected for processing (13)
  • README.md (2 hunks)
  • docs/authorization.md (2 hunks)
  • docs/clusteraccess-setup.md (1 hunks)
  • docs/clusteraccess.md (1 hunks)
  • docs/gateway.md (2 hunks)
  • docs/listener.md (1 hunks)
  • docs/local_test.md (4 hunks)
  • docs/multicluster-kubeconfig-flow.md (0 hunks)
  • docs/quickstart.md (3 hunks)
  • docs/subscriptions.md (4 hunks)
  • docs/virtual-workspaces.md (1 hunks)
  • hack/create-clusteraccess.md (1 hunks)
  • hack/create-clusteraccess.sh (4 hunks)
💤 Files with no reviewable changes (1)
  • docs/multicluster-kubeconfig-flow.md
✅ Files skipped from review due to trivial changes (3)
  • hack/create-clusteraccess.md
  • docs/gateway.md
  • docs/listener.md
🚧 Files skipped from review as they are similar to previous changes (3)
  • docs/virtual-workspaces.md
  • docs/clusteraccess-setup.md
  • docs/subscriptions.md
🧰 Additional context used
🧬 Code graph analysis (1)
hack/create-clusteraccess.sh (1)
hack/test-clusteraccess-integration.sh (2)
  • log_info (12-14)
  • log_error (20-22)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: pipe / dockerBuild / docker
  • GitHub Check: pipe / testSource / test
  • GitHub Check: Analyze (go)
🔇 Additional comments (4)
hack/create-clusteraccess.sh (1)

102-126: Good: idempotent CRD apply + Established wait.

The new ensure_crd_installed flow uses kubectl apply (idempotent) and waits for Established; clear errors and logs. LGTM.

docs/quickstart.md (1)

17-20: LGTM: clearer setup flow and proper Bearer header.

Operation Modes guidance and Authorization header examples are correct and consistent.

Also applies to: 55-63

docs/local_test.md (1)

8-13: LGTM: updated repo path and clearer instructions.

Also applies to: 16-18, 35-40, 52-53, 72-73

docs/authorization.md (1)

21-21: Verify config loader reads new env var
No occurrences of GATEWAY_INTROSPECTION_AUTHENTICATION or the old INTROSPECTION_AUTHENTICATION found in code; ensure the implementation is updated to use GATEWAY_INTROSPECTION_AUTHENTICATION.

-When GraphiQL page is loaded, it makes a request to fetch the GraphQL schema and there is no way to add the `Authorization` header to that request.
+When the GraphiQL page is loaded, it makes a request to fetch the GraphQL schema, and there is no way to add the `Authorization` header to that request.

We have this [issue](https://github.com/openmfp/kubernetes-graphql-gateway/issues/217) open to fix this.


🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Check GitHub issue link target.

Link points to openmfp/kubernetes-graphql-gateway#217. Confirm the org/repo is correct (this repo appears under platform-mesh).


🏁 Script executed:

#!/bin/bash
gh issue view 217 -R openmfp/kubernetes-graphql-gateway || echo "Issue not found under openmfp"

Length of output: 523


Update issue link and wording

  • In docs/authorization.md (line 28), change the URL to point to platform-mesh/kubernetes-graphql-gateway#217 and adjust the text since the issue is closed (e.g., replace “open to fix this” with “see issue” or past tense).
🤖 Prompt for AI Agents
In docs/authorization.md around line 28, the issue link and wording reference
the wrong repo and use present-tense phrasing for a closed issue; update the URL
to https://github.com/platform-mesh/kubernetes-graphql-gateway/issues/217 (or
the repo’s correct issue path) and change the text from “open to fix this” to
past-tense or neutral phrasing such as “see issue” or “see issue #217” so the
sentence correctly points to platform-mesh/kubernetes-graphql-gateway#217 and
reflects the issue state.

Comment on lines +12 to 14
# Create ClusterAccess resource with admin token authentication
./hack/create-clusteraccess.sh --target-kubeconfig ~/.kube/platform-mesh-config --management-kubeconfig ~/.kube/platform-mesh-config
```


⚠️ Potential issue | 🟡 Minor

Wording mismatch: “admin token authentication” vs kubeconfig-based spec.

Quick Setup comment says admin token, but example/spec use kubeconfigSecretRef and CA. Align the text to kubeconfig-based auth or update the spec to token-based if that’s intended.

Also applies to: 36-47

🤖 Prompt for AI Agents
In docs/clusteraccess.md around lines 12-14 (and similarly for lines 36-47), the
heading and Quick Setup text say "admin token authentication" but the
example/spec use kubeconfigSecretRef and CA (kubeconfig-based auth); update the
wording to refer to "kubeconfig-based authentication" (or "kubeconfig
authentication") and adjust the Quick Setup description to mention using a
kubeconfig secret and the --target-kubeconfig/--management-kubeconfig flags,
ensuring the language matches the provided spec and examples; apply the same
wording change and consistency fixes to lines 36-47 as well.

Comment on lines +113 to 114
./scripts/test-clusteraccess-integration.sh
```


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Broken script path.

Docs reference ./scripts/test-clusteraccess-integration.sh, but the repo appears to have hack/test-clusteraccess-integration.sh.


🏁 Script executed:

#!/bin/bash
fd -a 'test-clusteraccess-integration.sh'

Length of output: 57


Update script path in docs/clusteraccess.md
Replace ./scripts/test-clusteraccess-integration.sh with hack/test-clusteraccess-integration.sh.

🤖 Prompt for AI Agents
In docs/clusteraccess.md around lines 113 to 114, the script path is outdated:
replace the reference to ./scripts/test-clusteraccess-integration.sh with the
new path hack/test-clusteraccess-integration.sh. Update the markdown snippet or
inline text accordingly so all occurrences use
hack/test-clusteraccess-integration.sh and ensure formatting/backticks remain
correct.

Comment on lines +249 to +257
log_info "Waiting for token to be populated..."
sleep 2

ADMIN_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret $SA_NAME-token -n $SA_NAMESPACE -o jsonpath='{.data.token}' | base64 -d)
if [[ -z "$ADMIN_TOKEN" ]]; then
log_error "Failed to retrieve admin token"
exit 1
fi
log_info "Admin token created successfully"


⚠️ Potential issue | 🟠 Major

Race condition: token may not be ready after fixed sleep.

Fixed sleep can fail intermittently. Poll for the token with a timeout and quote vars.

-log_info "Waiting for token to be populated..."
-sleep 2
-
-ADMIN_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret $SA_NAME-token -n $SA_NAMESPACE -o jsonpath='{.data.token}' | base64 -d)
-if [[ -z "$ADMIN_TOKEN" ]]; then
+log_info "Waiting for token to be populated..."
+for i in {1..30}; do
+  RAW_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret "${SA_NAME}-token" -n "${SA_NAMESPACE}" -o jsonpath='{.data.token}' 2>/dev/null || true)
+  if [[ -n "$RAW_TOKEN" ]]; then break; fi
+  sleep 1
+done
+if [[ -z "$RAW_TOKEN" ]]; then
     log_error "Failed to retrieve admin token"
     exit 1
 fi
+ADMIN_TOKEN=$(echo -n "$RAW_TOKEN" | base64 -d 2>/dev/null || true)
+if [[ -z "$ADMIN_TOKEN" ]]; then
+  log_error "ServiceAccount token found but failed to decode"
+  exit 1
+fi
 log_info "Admin token created successfully"

Comment on lines +307 to +318
echo " - ServiceAccount: $SA_NAME (in target cluster namespace: $SA_NAMESPACE)"



echo ""
echo ""
log_info "Admin access token for direct API access (copy-paste ready):"
echo '```'
echo '{'
echo " \"Authorization\": \"Bearer $ADMIN_TOKEN\""
echo '}'
echo '```'


⚠️ Potential issue | 🟠 Major

Avoid printing the admin token to stdout.

This leaks credentials into logs/history. Gate behind a flag (e.g., --print-token) or omit entirely; at least add a clear warning and prompt.

-echo ""
-echo ""
-log_info "Admin access token for direct API access (copy-paste ready):"
-echo '```'
-echo '{'
-echo "  \"Authorization\": \"Bearer $ADMIN_TOKEN\""
-echo '}'
-echo '```' 
+if [[ "${PRINT_TOKEN:-false}" == "true" ]]; then
+  log_warn "Printing sensitive token to stdout as requested (PRINT_TOKEN=true). Handle with care."
+  printf '\n\n'
+  log_info "Admin access token for direct API access (copy-paste ready):"
+  echo '```'
+  echo '{'
+  echo "  \"Authorization\": \"Bearer $ADMIN_TOKEN\""
+  echo '}'
+  echo '```'
+fi
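
The gating idea reduces to a self-contained sketch (dummy token; `print_token` is an illustrative name, not part of the script):

```shell
#!/usr/bin/env bash
# Sensitive output gated behind an explicit opt-in, defaulting to off.
print_token() {
    if [[ "${PRINT_TOKEN:-false}" == "true" ]]; then
        echo "Authorization: Bearer $ADMIN_TOKEN"
    else
        echo "token withheld (set PRINT_TOKEN=true to print)"
    fi
}

ADMIN_TOKEN=dummy-token
print_token                    # default: withheld
PRINT_TOKEN=true print_token   # explicit opt-in: printed
```

Defaulting to "withheld" means a plain `./create-clusteraccess.sh` run leaves no credential in shell history or CI logs, while the opt-in path still supports the copy-paste use case.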

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In hack/create-clusteraccess.sh around lines 307-318, the script currently
prints the admin token to stdout which leaks credentials; update it to gate
printing behind a flag and add a clear warning prompt: add a conditional that
checks a env/CLI flag like PRINT_TOKEN (default false) and only prints the token
when set to "true", before printing emit a log_warn message indicating sensitive
output and instructing the user to handle carefully; ensure token output
formatting remains unchanged when printed, and otherwise omit the token entirely
(no echo to stdout) so it is not recorded in logs/history.
