Conversation
Signed-off-by: Eduardo Silva <eduardo@chronosphere.io>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the review settings.
📝 **Walkthrough**

Adds OAuth2 private_key_jwt support: a new enum value and JWT-related fields in the OAuth2 config, config mappings in the HTTP output, JWT creation/signing and x5t thumbprint logic in the OAuth2 core, validation changes for the new auth method, and tests verifying private_key_jwt request contents.
**Sequence Diagram**

```mermaid
sequenceDiagram
    participant Client as HTTP Plugin Client
    participant OAuth2Core as OAuth2 Core
    participant CertLoader as Cert Loader
    participant Signer as JWT Signer
    participant TokenServer as Token Server
    Client->>OAuth2Core: request token (private_key_jwt)
    OAuth2Core->>CertLoader: load cert & private key (jwt_cert_file, jwt_key_file)
    CertLoader-->>OAuth2Core: cert data, key data
    OAuth2Core->>CertLoader: compute cert thumbprint (x5t)
    CertLoader-->>OAuth2Core: x5t (base64url)
    OAuth2Core->>Signer: build JWT header & payload (iss,sub,aud,iat,exp,jti)
    Signer->>Signer: base64url(header) + "." + base64url(payload), sign with RSA-SHA256
    Signer-->>OAuth2Core: client_assertion (signed JWT)
    OAuth2Core->>TokenServer: POST /token (grant_type, client_assertion, client_assertion_type, resource)
    TokenServer-->>OAuth2Core: access_token response
    OAuth2Core-->>Client: return token
```
**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 **Pre-merge checks**
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: eb4d42b27a
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/flb_oauth2.c`:
- Around line 824-829: flb_sds_printf failures are not detected because it can
return NULL without modifying the original sds pointer; change each call to
capture the return in a temporary (e.g. tmp) and check that tmp != NULL before
assigning back (or proceeding); specifically update the flb_sds_printf usages
that produce header_json, payload_json, signing_input and assertion to do: char
*tmp = flb_sds_printf(&header_json, ...); if (!tmp) goto error; header_json =
tmp; (and same pattern for payload_json, signing_input, assertion) so a NULL
return is correctly detected and handled.
- Around line 730-770: The oauth2_private_key_jwt_sign function is
double-hashing the JWT payload by calling flb_hash_simple to produce a digest
and then calling flb_crypto_sign_simple with FLB_HASH_SHA256 (causing EVP to
hash again); remove the manual hash step (delete the flb_hash_simple call and
the digest buffer usage) and instead pass the original data and data_len
directly into flb_crypto_sign_simple (keep FLB_CRYPTO_PRIVATE_KEY,
FLB_CRYPTO_PADDING_PKCS1, FLB_HASH_SHA256, the key buffer key_buf and key_size,
and the sig/sig_len handling), ensure key_buf is freed after the sign call, and
then continue to call oauth2_base64_url_encode as before.
In `@tests/internal/oauth2.c`:
- Around line 821-862: Add a macOS-only wait for the mock server to be ready in
both test_private_key_jwt_body and test_private_key_jwt_x5t_header: after
calling create_private_key_jwt_ctx(...) (and after verifying ctx != NULL) invoke
oauth2_mock_server_wait_ready(&server) inside an `#ifdef` FLB_SYSTEM_MACOS block
and assert the return is 0 (similar to the usage in test_caching_and_refresh);
this ensures the server started by oauth2_mock_server_start(...) is accepting
connections on macOS CI.
Signed-off-by: Eduardo Silva <eduardo@chronosphere.io>
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/internal/oauth2.c (1)
**586-699:** Duplicated base64url normalization logic between `base64_url_decode` and `base64_url_decode_bytes`.

Both functions contain the same ~30-line segment: allocate `normalized`, `memcpy` the input, substitute `-` → `+` and `_` → `/`, append `=` padding, then call `flb_base64_decode`. The only difference is the output type (NUL-terminated `char *` vs raw `unsigned char *` with size).

♻️ Suggested refactor — extract normalization into a shared helper:

```c
static int base64url_to_standard(const char *input, int in_len,
                                 unsigned char **out, size_t *out_len)
{
    int pad;
    size_t i, normalized_len;
    unsigned char *normalized;

    pad = (4 - (in_len % 4)) % 4;
    normalized_len = in_len + pad;
    normalized = flb_calloc(1, normalized_len + 1);
    if (!normalized) {
        flb_errno();
        return -1;
    }
    memcpy(normalized, input, in_len);
    for (i = 0; i < (size_t) in_len; i++) {
        if (normalized[i] == '-') {
            normalized[i] = '+';
        }
        else if (normalized[i] == '_') {
            normalized[i] = '/';
        }
    }
    for (i = 0; i < (size_t) pad; i++) {
        normalized[in_len + i] = '=';
    }
    *out = normalized;
    *out_len = normalized_len;
    return 0;
}
```

Then both decode functions call `base64url_to_standard` and proceed to `flb_base64_decode`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/internal/oauth2.c` around lines 586 - 699, The normalization logic for base64url is duplicated in base64_url_decode and base64_url_decode_bytes; extract it into a shared helper (e.g., base64url_to_standard) that takes const char *input and returns a newly allocated normalized buffer and its length (or returns error), reuse that helper from both base64_url_decode and base64_url_decode_bytes before calling flb_base64_decode, ensure the helper handles allocation, '-'→'+', '_'→'/', padding with '=' and returns errors consistently so callers free the normalized buffer on failure.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/internal/oauth2.c`:
- Around line 34-83: The committed PEM constants TEST_PRIVATE_KEY_PEM and
TEST_CERT_PEM embed a real private key (triggers Gitleaks); remove the
hard-coded PEMs and instead generate the key and cert at runtime inside the test
helper function test_setup_private_key_jwt_files using the OpenSSL C API (or
equivalent) and write them to the expected test files, or alternatively add a
targeted gitleaks allow-rule that exempts this specific test constants block;
update references to TEST_PRIVATE_KEY_PEM/TEST_CERT_PEM to use the
runtime-generated file paths or buffers.
---
Duplicate comments:
In `@src/flb_oauth2.c`:
- Around line 747-762: The code currently pre-hashes the JWT signing_input with
flb_hash_simple into digest and then calls flb_crypto_sign_simple with
FLB_HASH_SHA256, which will double-hash if flb_crypto_sign_simple uses
EVP_DigestSign; to fix, either stop pre-hashing (remove flb_hash_simple and pass
the original signing_input and its length to flb_crypto_sign_simple) when using
FLB_HASH_SHA256, or, if you intend to sign a pre-hashed digest, call
flb_crypto_sign_simple with a flag that indicates pre-hashed input (or use the
EVP_PKEY_sign codepath) so that flb_crypto_sign_simple is invoked in the same
manner as out_stackdriver/oci_logan; verify which API flb_crypto_sign_simple
uses by searching for EVP_DigestSign vs EVP_PKEY_sign in flb_crypto.c (e.g., rg
-n 'EVP_DigestSign\|EVP_PKEY_sign') and then apply the matching change to the
flb_oauth2.c block that references flb_hash_simple, digest,
flb_crypto_sign_simple, sig, key_buf, and key_size.
In `@tests/internal/oauth2.c`:
- Around line 844-847: After calling oauth2_mock_server_wait_ready(&server) in
the macOS-only block, add a flb_time_msleep(50) sleep to match the behavior used
in test_caching_and_refresh; this lets the server thread finish processing the
probe connection before the real request. Update the macOS conditional that
wraps oauth2_mock_server_wait_ready in the new tests so it calls
flb_time_msleep(50) immediately afterward (keep the existing TEST_CHECK(ret ==
0) assertion).
---
Nitpick comments:
In `@tests/internal/oauth2.c`:
- Around line 586-699: The normalization logic for base64url is duplicated in
base64_url_decode and base64_url_decode_bytes; extract it into a shared helper
(e.g., base64url_to_standard) that takes const char *input and returns a newly
allocated normalized buffer and its length (or returns error), reuse that helper
from both base64_url_decode and base64_url_decode_bytes before calling
flb_base64_decode, ensure the helper handles allocation, '-'→'+', '_'→'/',
padding with '=' and returns errors consistently so callers free the normalized
buffer on failure.
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/flb_oauth2.c (1)
**100-104:** Validate `oauth2.jwt_header` against its allowed value set.

Any unrecognized value (e.g. `"jku"` or a typo) silently falls through to the `else` branch in `oauth2_private_key_jwt_thumbprint` (line 707), producing a JWT with an arbitrary claim name and a hex-SHA1 thumbprint. The token endpoint will reject the token with no actionable diagnostic from Fluent Bit's side.

♻️ Suggested validation in `oauth2_private_key_jwt_thumbprint` or at config-apply time:

```c
if (strcasecmp(header_name, "kid") != 0 &&
    strcasecmp(header_name, "x5t") != 0 &&
    strcasecmp(header_name, "x5t#S256") != 0) {
    flb_error("[oauth2] invalid jwt_header value '%s'; "
              "allowed: kid, x5t, x5t#S256", header_name);
    return -1;
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flb_oauth2.c` around lines 100 - 104, The oauth2.jwt_header config (flb_oauth2_config.jwt_header) is not validated and unknown values fall through in oauth2_private_key_jwt_thumbprint causing an arbitrary claim name and invalid thumbprint; validate the configured value against the allowed set (e.g., "kid" and "x5t") either when applying the config or at the start of oauth2_private_key_jwt_thumbprint and return/log a clear error for unsupported values, or normalize/choose a default only when the value is one of the known options—ensure checks reference flb_oauth2_config.jwt_header and the function oauth2_private_key_jwt_thumbprint so unrecognized strings are rejected and an informative error is emitted instead of silently proceeding.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/flb_oauth2.c`:
- Line 656: The call to BIO_new_mem_buf uses file_size (size_t) where the
function expects an int; change both calls to pass an explicit cast like
(int)file_size and add a guard before calling BIO_new_mem_buf in the functions
that use file_buf/file_size: check if file_size > INT_MAX and handle that error
(return failure/log) to avoid silent narrowing on 64-bit platforms. Ensure you
update both sites that call BIO_new_mem_buf(file_buf, file_size) to use the
(int) cast and the overflow check.
---
Duplicate comments:
In `@src/flb_oauth2.c`:
- Around line 747-762: The code pre-hashes the JWT payload with flb_hash_simple
then calls flb_crypto_sign_simple with FLB_HASH_SHA256; confirm whether
flb_crypto_sign_simple / flb_crypto_transform uses EVP_PKEY_sign (expects
pre-hashed input) or EVP_DigestSign* (which will re-hash), by inspecting the
implementation for uses of EVP_PKEY_sign vs EVP_DigestSign; if
flb_crypto_sign_simple uses EVP_DigestSign, remove the manual flb_hash_simple
and pass the raw data (or switch the hash flag to indicate no pre-hash),
otherwise if it uses EVP_PKEY_sign keep the pre-hash but ensure the call
semantics/documentation and flag (FLB_HASH_SHA256) correctly indicate a
precomputed digest; update the call site or the crypto function to make the
contract explicit (or add an assert/comment) so the signing path is unambiguous.
---
Nitpick comments:
In `@src/flb_oauth2.c`:
- Around line 100-104: The oauth2.jwt_header config
(flb_oauth2_config.jwt_header) is not validated and unknown values fall through
in oauth2_private_key_jwt_thumbprint causing an arbitrary claim name and invalid
thumbprint; validate the configured value against the allowed set (e.g., "kid"
and "x5t") either when applying the config or at the start of
oauth2_private_key_jwt_thumbprint and return/log a clear error for unsupported
values, or normalize/choose a default only when the value is one of the known
options—ensure checks reference flb_oauth2_config.jwt_header and the function
oauth2_private_key_jwt_thumbprint so unrecognized strings are rejected and an
informative error is emitted instead of silently proceeding.
src/flb_oauth2.c (outdated)

```c
    return -1;
}

bio = BIO_new_mem_buf(file_buf, file_size);
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Check the relevant section of the file
head -c 100000 src/flb_oauth2.c | tail -c 100000 > /tmp/oauth2_chunk.c
wc -l src/flb_oauth2.c
```

Repository: fluent/fluent-bit
Length of output: 82

🏁 Script executed:

```shell
# Read the relevant context around lines 635-728
sed -n '630,730p' src/flb_oauth2.c | cat -n
```

Repository: fluent/fluent-bit
Length of output: 3517

🏁 Script executed:

```shell
# Search for file_size variable declarations to confirm its type
rg 'file_size' src/flb_oauth2.c -B 5 -A 2
```

Repository: fluent/fluent-bit
Length of output: 925

🏁 Script executed:

```shell
# Verify BIO_new_mem_buf function signature from OpenSSL headers
find /usr -name "bio.h" 2>/dev/null | head -5
```

Repository: fluent/fluent-bit
Length of output: 43
🌐 Web query:
BIO_new_mem_buf OpenSSL signature parameters
💡 Result:
BIO_new_mem_buf() is not where signature (RSA-PSS / ECDSA / etc.) parameters are set. It only wraps an in-memory byte buffer so OpenSSL can read things (PEM keys, DER blobs, signatures, etc.) through the BIO API.

What BIO_new_mem_buf() does (relevant details):

- `BIO_new_mem_buf(const void *buf, int len)` creates a read-only memory BIO over `buf`.
- If `len == -1`, `buf` is treated as NUL-terminated and its length is computed with `strlen()`.
- The BIO reads directly from your buffer (no copy), so the buffer must remain valid and unchanged until the BIO is freed.

Source: OpenSSL `BIO_s_mem(3)` / `BIO_new_mem_buf()` docs. (docs.openssl.org)

Where signature parameters actually go:

Use the EVP signature APIs and set parameters on the signature context, typically via the `EVP_PKEY_CTX *` returned from `EVP_DigestVerifyInit(_ex)` (or by passing `OSSL_PARAM` in the `_ex` form).

- `EVP_DigestVerifyInit_ex()` can take an `OSSL_PARAM *params` list that is "set on the context before returning", and/or you can retrieve `EVP_PKEY_CTX *pctx` and set options there. (docs.openssl.org)
- If you're doing a "raw" verify with `EVP_PKEY_verify()`, note you cannot use it to "combine" a key algorithm with a digest (i.e., don't expect "RSA + SHA256" composition there); use `EVP_DigestVerify*` when you want hashing + verify. (docs.openssl.org)
Common example: RSA-PSS verification parameters
After EVP_DigestVerifyInit(_ex) gives you pctx, set (as needed):
- padding mode to PSS: `EVP_PKEY_CTX_set_rsa_padding(pctx, RSA_PKCS1_PSS_PADDING)`
- digest: `EVP_PKEY_CTX_set_signature_md(pctx, md)`
- MGF1 digest: `EVP_PKEY_CTX_set_rsa_mgf1_md(pctx, md)`
- salt length: `EVP_PKEY_CTX_set_rsa_pss_saltlen(pctx, saltlen)`
These controls are described in the RSA-PSS EVP_PKEY_CTX documentation. (docs.openssl.org)
Citations:
- 1: https://docs.openssl.org/3.4/man3/BIO_s_mem/
- 2: https://docs.openssl.org/3.6/man3/EVP_DigestVerifyInit/?utm_source=openai
- 3: https://docs.openssl.org/3.5/man3/EVP_PKEY_verify/?utm_source=openai
- 4: https://docs.openssl.org/1.1.1/man3/EVP_PKEY_CTX_set_rsa_pss_keygen_md/?utm_source=openai
BIO_new_mem_buf receives size_t where int is expected — add explicit cast.
BIO_new_mem_buf(const void *buf, int len) takes an int for the length parameter. Passing file_size (type size_t) silently narrows the value on 64-bit platforms; this will also produce a -Wconversion diagnostic.
🛠️ Proposed fix (apply to both call sites):

```diff
- bio = BIO_new_mem_buf(file_buf, file_size);
+ bio = BIO_new_mem_buf(file_buf, (int) file_size);
```

Also applies to: 666-666
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/flb_oauth2.c` at line 656, The call to BIO_new_mem_buf uses file_size
(size_t) where the function expects an int; change both calls to pass an
explicit cast like (int)file_size and add a guard before calling BIO_new_mem_buf
in the functions that use file_buf/file_size: check if file_size > INT_MAX and
handle that error (return failure/log) to avoid silent narrowing on 64-bit
platforms. Ensure you update both sites that call BIO_new_mem_buf(file_buf,
file_size) to use the (int) cast and the overflow check.
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/flb_oauth2.c (2)
**743-768:** ⚠️ Potential issue | 🔴 Critical: Double-hashing still present; `flb_crypto_sign_simple` will hash the input internally.

Lines 752-759 pre-compute `SHA256(data)` into `digest`, then lines 762-767 pass that `digest` to `flb_crypto_sign_simple` with `FLB_HASH_SHA256`. Since `flb_crypto_sign_simple` calls `flb_crypto_sign`, which uses `EVP_DigestSign*` to hash the input internally, the signature is computed over `SHA256(SHA256(data))`, producing an invalid JWT that no token endpoint will accept. This was previously flagged and marked as fixed, but the current code still carries the same pattern (`digest[32]` at line 743 and the `flb_hash_simple` call at lines 752-759 remain).

The fix is to remove the pre-hash step and pass the raw signing input directly:

🐛 Proposed fix:

```diff
-    unsigned char digest[32];
     unsigned char sig[4096];

     ret = flb_utils_read_file((char *) private_key_file, &key_buf, &key_size);
     if (ret != 0 || !key_buf || key_size == 0) {
         flb_error("[oauth2] failed to read private key file '%s'",
                   private_key_file);
         return -1;
     }

-    ret = flb_hash_simple(FLB_HASH_SHA256,
-                          (unsigned char *) data, data_len,
-                          digest, sizeof(digest));
-    if (ret != FLB_CRYPTO_SUCCESS) {
-        flb_error("[oauth2] failed to hash JWT assertion payload");
-        flb_free(key_buf);
-        return -1;
-    }
-
     sig_len = sizeof(sig);
     ret = flb_crypto_sign_simple(FLB_CRYPTO_PRIVATE_KEY,
                                  FLB_CRYPTO_PADDING_PKCS1,
                                  FLB_HASH_SHA256,
                                  (unsigned char *) key_buf, key_size,
-                                 digest, sizeof(digest),
+                                 (unsigned char *) data, data_len,
                                  sig, &sig_len);
```

Run to confirm `flb_crypto_sign` uses `EVP_DigestSign*` (internal hashing):

```shell
#!/bin/bash
# Verify whether flb_crypto_sign hashes internally (EVP_DigestSign*)
# or expects pre-hashed input (EVP_PKEY_sign)
rg -n 'EVP_DigestSign|EVP_PKEY_sign' src/flb_crypto.c -C 5
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flb_oauth2.c` around lines 743 - 768, The code is pre-hashing the JWT signing input with flb_hash_simple then calling flb_crypto_sign_simple with FLB_HASH_SHA256, causing a double-hash; remove the pre-hash: delete the digest buffer and the flb_hash_simple call, and call flb_crypto_sign_simple with the original data pointer/length (data, data_len) instead of digest/digest size (leave sig/sig_len handling as-is and still free key_buf after the sign call). Ensure you update references to digest and its size and only pass the raw signing input to flb_crypto_sign_simple (function names: flb_hash_simple, flb_crypto_sign_simple, and use variables data and data_len).
**661:** ⚠️ Potential issue | 🟡 Minor: `BIO_new_mem_buf` receives `size_t` where `int` is expected; add explicit cast.

`BIO_new_mem_buf(const void *buf, int len)` takes `int`; `file_size` is `size_t`. This silently narrows the value on 64-bit platforms and would produce a `-Wconversion` diagnostic.

🛠️ Proposed fix (apply to both call sites):

```diff
- bio = BIO_new_mem_buf(file_buf, file_size);
+ bio = BIO_new_mem_buf(file_buf, (int) file_size);
```

Optionally add a guard before each call:

```c
if (file_size > INT_MAX) {
    flb_error("[oauth2] certificate file too large");
    flb_free(file_buf);
    return -1;
}
```

Run to confirm the OpenSSL `BIO_new_mem_buf` signature:

```shell
#!/bin/bash
rg -n 'BIO_new_mem_buf' --type c -C 2
```

Also applies to: 671-671
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flb_oauth2.c` at line 661, BIO_new_mem_buf expects an int length but file_size is size_t, so explicitly cast file_size to int when calling BIO_new_mem_buf (e.g., BIO_new_mem_buf(file_buf, (int) file_size)) and apply this change at both call sites; additionally, add a guard before each call to check if file_size > INT_MAX and handle the error (log with flb_error, free file_buf, and return -1) to avoid silent narrowing on 64-bit platforms.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/flb_oauth2.c`:
- Around line 844-849: The JWT payload is built with flb_sds_printf inserting
ctx->cfg.client_id and audience directly into payload_json which can produce
invalid JSON if those values contain " or \; update the JWT construction (where
flb_sds_printf is called for payload_json) to JSON-escape these string fields
(client_id, audience, jti) before insertion or, better, build the object with a
proper serializer/encoder instead of manual printf; ensure any escaping routine
handles quotes and backslashes (and returns error on allocation/failure) and use
the escaped values in the existing flb_sds_printf call so the generated JSON is
always valid.
---
Duplicate comments:
In `@src/flb_oauth2.c`:
- Around line 743-768: The code is pre-hashing the JWT signing input with
flb_hash_simple then calling flb_crypto_sign_simple with FLB_HASH_SHA256,
causing a double-hash; remove the pre-hash: delete the digest buffer and the
flb_hash_simple call, and call flb_crypto_sign_simple with the original data
pointer/length (data, data_len) instead of digest/digest size (leave sig/sig_len
handling as-is and still free key_buf after the sign call). Ensure you update
references to digest and its size and only pass the raw signing input to
flb_crypto_sign_simple (function names: flb_hash_simple, flb_crypto_sign_simple,
and use variables data and data_len).
- Line 661: BIO_new_mem_buf expects an int length but file_size is size_t, so
explicitly cast file_size to int when calling BIO_new_mem_buf (e.g.,
BIO_new_mem_buf(file_buf, (int) file_size)) and apply this change at both call
sites; additionally, add a guard before each call to check if file_size >
INT_MAX and handle the error (log with flb_error, free file_buf, and return -1)
to avoid silent narrowing on 64-bit platforms.
```c
tmp = flb_sds_printf(&payload_json,
                     "{\"iss\":\"%s\",\"sub\":\"%s\",\"aud\":\"%s\","
                     "\"iat\":%lu,\"exp\":%lu,\"jti\":\"%s\"}",
                     ctx->cfg.client_id, ctx->cfg.client_id, audience,
                     (unsigned long) now, (unsigned long) (now + ttl),
                     jti);
```
client_id and audience inserted into JWT JSON without escaping.
If either value contains " or \ (e.g., a token_url used as audience), the payload_json string will be malformed JSON and the assertion will be rejected by the token endpoint. Consider escaping or validating these fields before insertion, or build the JSON through a structured serialiser.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/flb_oauth2.c` around lines 844 - 849, The JWT payload is built with
flb_sds_printf inserting ctx->cfg.client_id and audience directly into
payload_json which can produce invalid JSON if those values contain " or \;
update the JWT construction (where flb_sds_printf is called for payload_json) to
JSON-escape these string fields (client_id, audience, jti) before insertion or,
better, build the object with a proper serializer/encoder instead of manual
printf; ensure any escaping routine handles quotes and backslashes (and returns
error on allocation/failure) and use the escaped values in the existing
flb_sds_printf call so the generated JSON is always valid.
🧹 Nitpick comments (2)
tests/internal/oauth2_jwt.c (1)
**75-87:** Consider asserting `has_client_id_claim == FLB_FALSE` for completeness.

The other two `appid` tests explicitly verify the flag state (`has_client_id_claim`); doing the same here would fully pin the expected state when `azp` is present.

✏️ Suggested addition:

```diff
     TEST_CHECK(jwt.claims.has_azp == FLB_TRUE);
+    TEST_CHECK(jwt.claims.has_client_id_claim == FLB_FALSE);

     flb_oauth2_jwt_destroy(&jwt);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/internal/oauth2_jwt.c` around lines 75 - 87, The test test_azp_overrides_appid should also assert that the client_id claim flag is false; add a TEST_CHECK(jwt.claims.has_client_id_claim == FLB_FALSE) after verifying has_azp and before flb_oauth2_jwt_destroy(&jwt) so the expected state when azp is present is fully validated (refer to struct flb_oauth2_jwt and jwt.claims.has_client_id_claim).

src/flb_oauth2_jwt.c (1)
**576-595:** `appid` fallback logic is correct across all field-ordering combinations.

All six ordering cases (`appid`-only, `client_id` ↔ `appid`, `azp` ↔ `appid`, `client_id` + `azp`) produce the correct `client_id`, `has_azp`, and `has_client_id_claim` outcomes. Memory ownership is clean in every path.

One minor follow-on: the `oauth2.allowed_clients` config map description at line 1514 still reads "Authorized client_id/azp values" and doesn't mention `appid`, which is now a third recognized identity token. Worth a one-word update for operators configuring ADFS tokens.

✏️ Suggested config description update (line 1514):

```diff
- "Authorized client_id/azp values for OAuth2 JWT validation"
+ "Authorized client_id/azp/appid values for OAuth2 JWT validation"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flb_oauth2_jwt.c` around lines 576 - 595, Update the oauth2.allowed_clients config map description to mention the new recognized identity token "appid" in addition to "client_id" and "azp" so operators know ADFS tokens are supported; locate the config entry labeled oauth2.allowed_clients and change its description text from "Authorized client_id/azp values" to something like "Authorized client identifiers (client_id/azp/appid)" to reflect the third accepted token name.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/flb_oauth2_jwt.c`:
- Around line 576-595: Update the oauth2.allowed_clients config map description
to mention the new recognized identity token "appid" in addition to "client_id"
and "azp" so operators know ADFS tokens are supported; locate the config entry
labeled oauth2.allowed_clients and change its description text from "Authorized
client_id/azp values" to something like "Authorized client identifiers
(client_id/azp/appid)" to reflect the third accepted token name.
In `@tests/internal/oauth2_jwt.c`:
- Around line 75-87: The test test_azp_overrides_appid should also assert that
the client_id claim flag is false; add a
TEST_CHECK(jwt.claims.has_client_id_claim == FLB_FALSE) after verifying has_azp
and before flb_oauth2_jwt_destroy(&jwt) so the expected state when azp is
present is fully validated (refer to struct flb_oauth2_jwt and
jwt.claims.has_client_id_claim).
Signed-off-by: Eduardo Silva <eduardo@chronosphere.io>
6490d46 to 761487a (compare)
Signed-off-by: Eduardo Silva <eduardo@chronosphere.io>
Adds OAuth2 client-credentials extension support for RFC7523-style private_key_jwt in Fluent Bit OAuth2 core and out_http, including ADFS-style resource token request parameter support. This enables certificate/private-key signed client assertions for token acquisition and keeps bearer-token usage unchanged on receivers like in_http.
New out_http config options:
Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.