@Adityakk9031 commented Aug 21, 2025


## Summary

Fixes **401 Unauthorized** errors when `vector` forwards logs to `logflare` in self-hosted Kubernetes deployments.
The issue was due to authentication method mismatch:
- **Old (broken):** API key passed via `?api_key=...` query param
- **New (fixed):** API key passed via `Authorization: Bearer ...` header

This change aligns the Kubernetes Helm chart with the working Docker Compose setup and Logflare’s expected authentication model.
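Concretely, the sink change looks like the following sketch (the analytics hostname and `source_name` here are illustrative; the real values come from the chart's templates):

```yaml
sinks:
  logflare_auth:
    type: http
    # Old (broken): key leaked into the query string, which Logflare rejected with 401
    # uri: "http://supabase-analytics:4000/api/logs?api_key=SECRET[credentials.logflare_api_key]&source_name=gotrue.logs.prod"
    # New (fixed): key sent as a Bearer token in a request header
    uri: "http://supabase-analytics:4000/api/logs?source_name=gotrue.logs.prod"
    request:
      headers:
        Authorization: "Bearer SECRET[credentials.logflare_api_key]"
```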

---

## Related Issue
Closes supabase/supabase#37998

---

## Changes
- Updated **vector sink configuration** in Helm chart:
  - Replace `?api_key=SECRET[credentials.logflare_api_key]` in sink URIs with standard endpoints.
  - Add `Authorization: Bearer SECRET[credentials.logflare_api_key]` headers in each logflare sink.
- Ensured DB sink still routes through Kong for startup ordering.
- No breaking changes for users with correct `LOGFLARE_API_KEY`.

---

## Verification
1. Deployed Supabase via Helm on Kubernetes.
2. Confirmed all pods healthy (`kubectl get pods`).
3. Observed **no more `401 Unauthorized` errors** in vector logs.
4. Logs successfully appear in Logflare dashboard.
5. Curling `/health` on logflare pod returns `200 OK` as before.
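The checks above, roughly as commands (the namespace and resource names are assumptions; adjust to your release):

```shell
kubectl get pods -n supabase
kubectl logs -n supabase deploy/supabase-vector | grep "401 Unauthorized" || echo "no 401s"
kubectl exec -n supabase deploy/supabase-vector -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://supabase-analytics:4000/health
```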

---

## Checklist
- [x] Verified in Kubernetes environment
- [x] Ensured parity with Docker Compose behavior
- [x] Helm values backward-compatible
- [ ] Added release note

---

## Release Note
```markdown
Fixed an authentication bug where `vector` failed with `401 Unauthorized` when sending logs to `logflare` in self-hosted Kubernetes deployments.
Authentication is now handled via `Authorization: Bearer <LOGFLARE_API_KEY>` headers instead of query parameters.
```
@xmh1011 commented Aug 22, 2025

Thank you for your contribution. I used the latest config from your PR, but the errors still exist.

secret:
  credentials:
    type: exec
    command:
      - /etc/vector/secret.sh

api:
  enabled: true
  address: 0.0.0.0:9001

sources:
  kubernetes_host:
    type: kubernetes_logs
    extra_label_selector: app.kubernetes.io/instance=supabase-xmh-test,app.kubernetes.io/name!=supabase-vector

transforms:
  project_logs:
    type: remap
    inputs:
      - kubernetes_host
    source: |-
      .project = "default"
      .event_message = del(.message)
      .appname = del(.kubernetes.container_name)
      del(.file)
      del(.kubernetes)
      del(.source_type)
      del(.stream)
  router:
    type: route
    inputs:
      - project_logs
    route:
      kong: '.appname == "supabase-kong"'
      auth: '.appname == "supabase-auth"'
      rest: '.appname == "supabase-rest"'
      realtime: '.appname == "supabase-realtime"'
      storage: '.appname == "supabase-storage"'
      functions: '.appname == "supabase-functions"'
      db: '.appname == "supabase-db"'
  kong_logs:
    type: remap
    inputs:
      - router.kong
    source: |-
      req, err = parse_nginx_log(.event_message, "combined")
      if err == null {
          .timestamp = req.timestamp
          .metadata.request.headers.referer = req.referer
          .metadata.request.headers.user_agent = req.agent
          .metadata.request.headers.cf_connecting_ip = req.client
          .metadata.request.method = req.method
          .metadata.request.path = req.path
          .metadata.request.protocol = req.protocol
          .metadata.response.status_code = req.status
      }
      if err != null {
        abort
      }
  kong_err:
    type: remap
    inputs:
      - router.kong
    source: |-
      .metadata.request.method = "GET"
      .metadata.response.status_code = 200
      parsed, err = parse_nginx_log(.event_message, "error")
      if err == null {
          .timestamp = parsed.timestamp
          .severity = parsed.severity
          .metadata.request.host = parsed.host
          .metadata.request.headers.cf_connecting_ip = parsed.client
          url, err = split(parsed.request, " ")
          if err == null {
              .metadata.request.method = url[0]
              .metadata.request.path = url[1]
              .metadata.request.protocol = url[2]
          }
      }
      if err != null {
        abort
      }
  auth_logs:
    type: remap
    inputs:
      - router.auth
    source: |-
      parsed, err = parse_json(.event_message)
      if err == null {
          .metadata.timestamp = parsed.time
          .metadata = merge!(.metadata, parsed)
      }
  rest_logs:
    type: remap
    inputs:
      - router.rest
    source: |-
      parsed, err = parse_regex(.event_message, r'^(?P<time>.*): (?P<msg>.*)$')
      if err == null {
          .event_message = parsed.msg
          .timestamp = parse_timestamp!(parsed.time, format: "%e/%b/%Y %R %:z")
          .metadata.host = .project
      }
  realtime_logs:
    type: remap
    inputs:
      - router.realtime
    source: |-
      .metadata.project = del(.project)
      .metadata.external_id = .metadata.project
      parsed, err = parse_regex(.event_message, r'^(?P<time>\d+:\d+:\d+\.\d+) \[(?P<level>\w+)\] (?P<msg>.*)$')
      if err == null {
          .event_message = parsed.msg
          .metadata.level = parsed.level
      }
  storage_logs:
    type: remap
    inputs:
      - router.storage
    source: |-
      .metadata.project = del(.project)
      .metadata.tenantId = .metadata.project
      parsed, err = parse_json(.event_message)
      if err == null {
          .event_message = parsed.msg
          .metadata.level = parsed.level
          .metadata.timestamp = parsed.time
          .metadata.context[0].host = parsed.hostname
          .metadata.context[0].pid = parsed.pid
      }
  db_logs:
    type: remap
    inputs:
      - router.db
    source: |-
      .metadata.host = "db-default"
      .metadata.parsed.timestamp = .timestamp
      
      parsed, err = parse_regex(.event_message, r'.*(?P<level>INFO|NOTICE|WARNING|ERROR|LOG|FATAL|PANIC?):.*', numeric_groups: true)

      if err != null || parsed == null {
        .metadata.parsed.error_severity = "info"
      }
      if parsed != null {
          .metadata.parsed.error_severity = parsed.level
      }
      if .metadata.parsed.error_severity == "info" {
          .metadata.parsed.error_severity = "log"
      }
      .metadata.parsed.error_severity = upcase!(.metadata.parsed.error_severity)
sinks:
  logflare_auth:
    type: http
    inputs:
      - auth_logs
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        Authorization: "Bearer SECRET[credentials.logflare_api_key]"
    uri: "http://supabase-xmh-test-supabase-analytics:4000/api/logs?source_name=gotrue.logs.prod"

  logflare_realtime:
    type: http
    inputs:
      - realtime_logs
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        Authorization: "Bearer SECRET[credentials.logflare_api_key]"
    uri: "http://supabase-xmh-test-supabase-analytics:4000/api/logs?source_name=realtime.logs.prod"

  logflare_rest:
    type: http
    inputs:
      - rest_logs
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        Authorization: "Bearer SECRET[credentials.logflare_api_key]"
    uri: "http://supabase-xmh-test-supabase-analytics:4000/api/logs?source_name=postgREST.logs.prod"

  logflare_db:
    type: http
    inputs:
      - db_logs
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        Authorization: "Bearer SECRET[credentials.logflare_api_key]"
    # routed through kong
    uri: "http://supabase-xmh-test-supabase-kong:8000/analytics/v1/api/logs?source_name=postgres.logs"

  logflare_functions:
    type: http
    inputs:
      - router.functions
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        Authorization: "Bearer SECRET[credentials.logflare_api_key]"
    uri: "http://supabase-xmh-test-supabase-analytics:4000/api/logs?source_name=deno-relay-logs"

  logflare_storage:
    type: http
    inputs:
      - storage_logs
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        Authorization: "Bearer SECRET[credentials.logflare_api_key]"
    uri: "http://supabase-xmh-test-supabase-analytics:4000/api/logs?source_name=storage.logs.prod.2"

  logflare_kong:
    type: http
    inputs:
      - kong_logs
      - kong_err
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        Authorization: "Bearer SECRET[credentials.logflare_api_key]"
    uri: "http://supabase-xmh-test-supabase-analytics:4000/api/logs?source_name=cloudflare.logs.prod"
2025-08-22T03:04:21.260837Z ERROR sink{component_kind="sink" component_id=logflare_storage component_type=http}:request{request_id=1}: vector::sinks::util::retries: Not retriable; dropping the request. reason="Http status: 401 Unauthorized" internal_log_rate_limit=true
2025-08-22T03:04:21.260866Z ERROR sink{component_kind="sink" component_id=logflare_storage component_type=http}:request{request_id=1}: vector_common::internal_event::service: Service call failed. No retries or retries exhausted. error=None request_id=1 error_type="request_failed" stage="sending" internal_log_rate_limit=true
2025-08-22T03:04:21.260882Z ERROR sink{component_kind="sink" component_id=logflare_storage component_type=http}:request{request_id=1}: vector_common::internal_event::component_events_dropped: Events dropped intentional=false count=1 reason="Service call failed. No retries or retries exhausted." internal_log_rate_limit=true
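One thing worth double-checking with the `SECRET[credentials.logflare_api_key]` syntax: vector's `exec` secret backend runs the configured command, writes a JSON request on stdin, and expects a JSON map of secret values on stdout. A minimal sketch of what `/etc/vector/secret.sh` needs to produce (the env var name is an assumption; the chart may source the key differently):

```shell
#!/bin/sh
# vector sends {"version":"1.0","secrets":["logflare_api_key"]} on stdin;
# drain it, then answer with the secret pulled from the environment.
cat > /dev/null
printf '{"logflare_api_key": {"value": "%s", "error": null}}\n' "$LOGFLARE_API_KEY"
```

If this script emits an empty value, vector will happily send `Bearer` with a blank token, and Logflare answers 401 exactly as in the log excerpt above.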

@Adityakk9031 (Author) commented:

ok let me do some changes then try again

@Adityakk9031 (Author) commented Aug 23, 2025

now check it again

@xmh1011 commented Aug 24, 2025

> now check it again

Thank you bro. But it still doesn't work. Do these changes work in your local environment?

It's possible you're not seeing an error because Vector isn't actually collecting the logs. In the Supabase Studio launched with Docker Compose, you can see the component logs, but they are not visible when using the Kubernetes deployment.

The community-provided Kubernetes configuration is quite old and doesn't enable Vector to collect logs from the other components. As a result, it doesn't write to Logflare, and therefore, no error is reported.

You can reference my configuration, which does allow Vector to collect logs from the other components. However, the current issue I'm facing is a permission error when it tries to write to Logflare.

api:
  enabled: true
  address: 0.0.0.0:9001

data_dir: "/var/lib/vector"

sources:
  kubernetes_host:
    type: kubernetes_logs

transforms:
  add_metadata:
    type: remap
    inputs:
      - kubernetes_host
    source: |-
      .component = .kubernetes.pod_labels."app.kubernetes.io/name"
      .project = "supabase217946277541847040"
      .event_message = del(.message)
      del(.container_created_at)
      del(.container_id)
      del(.source_type)
      del(.stream)
      del(.image)
      del(.host)

  router:
    type: route
    inputs:
      - add_metadata
    route:
      kong: '.component == "kong"'
      auth: '.component == "auth"'
      rest: '.component == "rest"'
      realtime: '.component == "realtime"'
      storage: '.component == "storage"'
      functions: '.component == "edge-function"'
      unmatched: '.component != null'

  kong_logs:
    type: remap
    inputs:
      - router.kong
    source: |-
      req, err = parse_nginx_log(.event_message, "combined")
      if err == null {
          .timestamp = req.timestamp
          .metadata.request.headers.referer = req.referer
          .metadata.request.headers.user_agent = req.agent
          .metadata.request.headers.cf_connecting_ip = req.client
          .metadata.request.method = req.method
          .metadata.request.path = req.path
          .metadata.request.protocol = req.protocol
          .metadata.response.status_code = req.status
      } else {
        abort
      }

  kong_err:
    type: remap
    inputs:
      - router.kong
    source: |-
      .metadata.request.method = "GET"
      .metadata.response.status_code = 200
      parsed, err = parse_nginx_log(.event_message, "error")
      if err == null {
          .timestamp = parsed.timestamp
          .severity = parsed.severity
          .metadata.request.host = parsed.host
          .metadata.request.headers.cf_connecting_ip = parsed.client
          url, err2 = split(parsed.request, " ")
          if err2 == null {
              .metadata.request.method = url[0]
              .metadata.request.path = url[1]
              .metadata.request.protocol = url[2]
          }
      } else {
        abort
      }

  auth_logs:
    type: remap
    inputs:
      - router.auth
    source: |-
      parsed, err = parse_json(.event_message)
      if err == null {
          .metadata.timestamp = parsed.time
          .metadata = merge!(.metadata, parsed)
      }

  rest_logs:
    type: remap
    inputs:
      - router.rest
    source: |-
      parsed, err = parse_regex(.event_message, r'^(?P<time>.*): (?P<msg>.*)$')
      if err == null {
          .event_message = parsed.msg
          .timestamp = parse_timestamp!(parsed.time, format: "%Y-%m-%dT%H:%M:%S%.fZ") 
          .metadata.host = .project
      }

  realtime_logs:
    type: remap
    inputs:
      - router.realtime
    source: |-
      .metadata.project = del(.project)
      .metadata.external_id = .metadata.project
      parsed, err = parse_regex(.event_message, r'^(?P<time>\d+:\d+:\d+\.\d+) \[(?P<level>\w+)\] (?P<msg>.*)$')
      if err == null {
          .event_message = parsed.msg
          .metadata.level = parsed.level
      }

  storage_logs:
    type: remap
    inputs:
      - router.storage
    source: |-
      .metadata.project = del(.project)
      .metadata.tenantId = .metadata.project
      parsed, err = parse_json(.event_message)
      if err == null {
          .event_message = parsed.msg
          .metadata.level = parsed.level
          .metadata.timestamp = parsed.time
          .metadata.context[0].host = parsed.hostname
          .metadata.context[0].pid = parsed.pid
      }

  functions_logs:
    type: remap
    inputs:
      - router.functions
    source: |-
      parsed, err = parse_json(.event_message)
      if err == null {
          .metadata.timestamp = parsed.time
          .metadata = merge!(.metadata, parsed)
      }

  unmatched_logs:
    type: remap
    inputs:
      - router.unmatched
    source: |-
      .source_name = "unmatched"
      del(.kubernetes)
      del(.message)

sinks:
  logflare_auth:
    type: http
    inputs:
      - auth_logs
    uri: "http://supabase217946277541847040-analytics:4000/api/logs?source_name=gotrue.logs.prod"
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}
    batch:
      max_bytes: 1048576
      timeout_secs: 5

  logflare_realtime:
    type: http
    inputs:
      - realtime_logs
    uri: "http://supabase217946277541847040-analytics:4000/api/logs?source_name=realtime.logs.prod"
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}
    batch:
      max_bytes: 1048576
      timeout_secs: 5

  logflare_rest:
    type: http
    inputs:
      - rest_logs
    uri: "http://supabase217946277541847040-analytics:4000/api/logs?source_name=postgREST.logs.prod"
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}
    batch:
      max_bytes: 1048576
      timeout_secs: 5

  logflare_functions:
    type: http
    inputs:
      - functions_logs
    uri: "http://supabase217946277541847040-analytics:4000/api/logs?source_name=deno-relay-logs"
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}

    batch:
      max_bytes: 1048576
      timeout_secs: 5

  logflare_storage:
    type: http
    inputs:
      - storage_logs
    uri: "http://supabase217946277541847040-analytics:4000/api/logs?source_name=storage.logs.prod.2"
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}
    batch:
      max_bytes: 1048576
      timeout_secs: 5

  logflare_kong:
    type: http
    inputs:
      - kong_logs
      - kong_err
    uri: "http://supabase217946277541847040-analytics:4000/api/logs?source_name=cloudflare.logs.prod"
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        apikey: ${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}
    batch:
      max_bytes: 1048576
      timeout_secs: 5

  logflare_unmatched:
    type: http
    inputs:
      - unmatched_logs
    uri: "http://supabase217946277541847040-analytics:4000/api/logs?source_name=unmatched.logs"
    encoding:
      codec: json
    method: post
    request:
      retry_max_duration_secs: 10
      headers:
        x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN?LOGFLARE_PUBLIC_ACCESS_TOKEN is required}
    batch:
      max_bytes: 1048576
      timeout_secs: 5
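Given the config above, a quick way to isolate whether the 401 comes from the token or from vector's header handling is to POST one event to Logflare directly, bypassing vector (the service name and `x-api-key` header are taken from the sinks above; adjust to your release name):

```shell
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN}" \
  -d '{"event_message": "auth probe"}' \
  "http://supabase217946277541847040-analytics:4000/api/logs?source_name=unmatched.logs"
```

A 2xx here would point at the vector config; a 401 would mean the token itself (or the access-token type Logflare expects) is wrong.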

Linked issue: Vector receives 401 Unauthorized from Logflare in self-hosted Kubernetes setup despite correct API key