Conversation
⚠️ Warning: Rate limit exceeded

⌛ How to resolve this issue?
After the wait time has elapsed, a review can be triggered manually. We recommend spacing out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?
CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, further reviews are re-allowed after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (5)
📝 Walkthrough

Adds an optional faucet feature: configuration, backend FaucetService with per-address/IP cooldowns and 429 handling, API endpoints (/api/faucet/info, POST /api/faucet), router/state integration, frontend pages/hooks/clients, Axios error enhancements, and docker/.env examples plus dependency updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Browser as Browser
    participant UI as FaucetPage
    participant Client as AxiosClient
    participant Server as AtlasServer
    participant Faucet as FaucetService
    participant Provider as BlockchainProvider
    Browser->>UI: user submits address
    UI->>Client: POST /api/faucet (includes client IP headers)
    Client->>Server: HTTP POST /api/faucet
    Server->>Faucet: request_faucet(address, client_ip)
    alt cooldown active
        Faucet-->>Server: TooManyRequests(retry_after_seconds)
        Server-->>Client: 429 + Retry-After header + body { retry_after_seconds }
        Client-->>UI: ApiError { retryAfterSeconds }
        UI->>UI: display cooldown/countdown
    else allowed
        Faucet->>Provider: send transaction(amount_wei)
        Provider-->>Faucet: tx_hash
        Faucet->>Faucet: record cooldown for address/IP
        Faucet-->>Server: FaucetTxResponse { tx_hash }
        Server-->>Client: 200 + body
        Client-->>UI: success (tx_hash)
        UI->>UI: show tx link and start cooldown UI
    end
```
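The cooldown bookkeeping shown in the diagram (record the last grant per address/IP key, return a `retry_after_seconds` value when throttled) can be sketched as follows. This is an illustrative model only, not the project's actual `FaucetService`; `CooldownTracker` and its method names are hypothetical.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Illustrative per-key cooldown tracker: one bucket per address or IP.
struct CooldownTracker {
    cooldown: Duration,
    last_grant: HashMap<String, Instant>,
}

impl CooldownTracker {
    fn new(cooldown: Duration) -> Self {
        Self { cooldown, last_grant: HashMap::new() }
    }

    /// Ok(()) if the request is allowed (and recorded);
    /// Err(secs) with the remaining wait otherwise.
    fn check_and_record(&mut self, key: &str, now: Instant) -> Result<(), u64> {
        if let Some(&prev) = self.last_grant.get(key) {
            let elapsed = now.duration_since(prev);
            if elapsed < self.cooldown {
                return Err((self.cooldown - elapsed).as_secs());
            }
        }
        self.last_grant.insert(key.to_string(), now);
        Ok(())
    }
}
```

In the real service this check would run for both the address key and the IP key before any transaction is sent, and the `Err` value would become the 429 body's `retry_after_seconds`.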
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
📝 Coding Plan
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (2)
frontend/src/pages/AddressPage.tsx (1)
Lines 41-62: Consider parallel fetching for improved performance.

The sequential `for` loop fetches tokens one at a time. If an address has many unique ERC-20 contracts, this could be noticeably slower. The current approach does have the benefit of isolated error handling per token.

♻️ Optional: Parallel fetch with Promise.allSettled

```diff
 useEffect(() => {
   let cancelled = false;
   (async () => {
     const unique = Array.from(new Set(
       transfers
         .filter(t => t.transfer_type === 'erc20')
         .map(t => t.contract_address.toLowerCase())
     ));
     if (unique.length === 0) return;
-    const updates: Record<string, { decimals: number }> = {};
-    for (const addr of unique) {
-      try {
-        const t = await getToken(addr);
-        updates[addr] = { decimals: t.decimals };
-      } catch {
-        updates[addr] = { decimals: 18 };
-      }
-    }
+    const results = await Promise.allSettled(unique.map(addr => getToken(addr)));
+    const updates: Record<string, { decimals: number }> = {};
+    unique.forEach((addr, i) => {
+      const result = results[i];
+      updates[addr] = { decimals: result.status === 'fulfilled' ? result.value.decimals : 18 };
+    });
     if (!cancelled) setTokenMeta(prev => ({ ...prev, ...updates }));
   })();
   return () => { cancelled = true; };
 }, [transfers]);
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/AddressPage.tsx` around lines 41-62, the effect currently fetches token metadata sequentially in the for loop, which is slow for many addresses; change it to run parallel requests using Promise.allSettled on unique (from transfers) mapped to getToken(addr), then build the same updates object by iterating the settled results (on fulfilled use result.value.decimals, on rejected set 18), and finally, if not cancelled, call setTokenMeta(prev => ({ ...prev, ...updates })); keep the cancelled flag and the same variable names (useEffect, transfers, getToken, updates, setTokenMeta) and preserve per-token error handling by interpreting rejected promises as default decimals: 18.

frontend/src/components/Layout.tsx (1)
Line 15: Avoid a global, unconditional faucet info fetch in the layout.

Line 15 triggers `useFaucetInfo()` for all app visits; with the current hook behavior this performs a `/faucet/info` fetch on layout mount even outside faucet flows, and can duplicate calls with the faucet page's data loading. Consider moving this to shared cached state (or a route-level prefetch) to avoid redundant network traffic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/Layout.tsx` at line 15, The Layout component currently calls useFaucetInfo() unconditionally (const { faucetInfo } = useFaucetInfo()), causing a global /faucet/info fetch on every mount and duplicate requests; remove that top-level call from Layout.tsx and instead load faucet info from a shared cached source or at route level: either (a) wire useFaucetInfo into a Faucet-specific page or route loader so the hook runs only on faucet routes, or (b) move the fetch into a global cache/provider (e.g., React Context or react-query with a single query key) and have useFaucetInfo read from that cache without issuing a new network request. Update consumers to read from the shared cache/provider or from the faucet route prefetch so duplicate network calls are avoided.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/crates/atlas-server/src/api/handlers/faucet.rs`:
- Around line 46-68: The current extract_client_ip function trusts the left-most
X-Forwarded-For hop (first item) which is spoofable because nginx appends the
real client IP to the end; change extract_client_ip to choose the last non-empty
hop from X-Forwarded-For (i.e., split on ',' and take the last trimmed entry)
before calling normalize_ip, and fall back to X-Real-IP as before; update any
tests to add a regression test feeding an X-Forwarded-For value like
"203.0.113.10, 127.0.0.1" and assert the resolved client IP is the right-most
hop (127.0.0.1) after normalize_ip.
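The right-most-hop selection that the prompt describes can be sketched as follows. `extract_last_hop` is a hypothetical helper name for illustration, not the project's API; the real `extract_client_ip` would feed its result into `normalize_ip` and fall back to X-Real-IP as before.

```rust
// Take the right-most (proxy-appended) hop from an X-Forwarded-For
// value, since nginx appends the real client IP to the end and the
// left-most entry is client-controlled and spoofable.
fn extract_last_hop(x_forwarded_for: &str) -> Option<String> {
    x_forwarded_for
        .split(',')
        .map(str::trim)
        .filter(|hop| !hop.is_empty())
        .last()
        .map(str::to_owned)
}
```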
In `@backend/crates/atlas-server/src/api/mod.rs`:
- Around line 147-150: The TimeoutLayer is currently applied before the
conditional faucet routes are registered, so the `/api/faucet/*` endpoints are
left unprotected; move the
TimeoutLayer::with_status_code(Duration::from_secs(10),
axum::http::StatusCode::REQUEST_TIMEOUT) call so it is applied after the faucet
route registration block (the code that conditionally inserts the faucet
service) and before the SSE merge/merge method is called, ensuring TimeoutLayer
wraps all routes including the faucet handlers; update the builder chain to
place TimeoutLayer immediately after the faucet registration and before the SSE
merge/merge call (refer to TimeoutLayer and the faucet route registration block
in mod.rs).
---
Nitpick comments:
In `@frontend/src/components/Layout.tsx`:
- Line 15: The Layout component currently calls useFaucetInfo() unconditionally
(const { faucetInfo } = useFaucetInfo()), causing a global /faucet/info fetch on
every mount and duplicate requests; remove that top-level call from Layout.tsx
and instead load faucet info from a shared cached source or at route level:
either (a) wire useFaucetInfo into a Faucet-specific page or route loader so the
hook runs only on faucet routes, or (b) move the fetch into a global
cache/provider (e.g., React Context or react-query with a single query key) and
have useFaucetInfo read from that cache without issuing a new network request.
Update consumers to read from the shared cache/provider or from the faucet route
prefetch so duplicate network calls are avoided.
In `@frontend/src/pages/AddressPage.tsx`:
- Around line 41-62: The effect currently fetches token metadata sequentially in
the for loop which is slow for many addresses; change it to run parallel
requests using Promise.allSettled on unique (from transfers) mapped to
getToken(addr), then build the same updates object by iterating the settled
results (on fulfilled use result.value.decimals, on rejected set 18), and
finally, if not cancelled call setTokenMeta(prev => ({ ...prev, ...updates }));
keep the cancelled flag and the same variable names (useEffect, transfers,
getToken, updates, setTokenMeta) and preserve per-token error handling by
interpreting rejected promises as default decimals: 18.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 464499c3-4369-4158-842f-9b3a3d573407
⛔ Files ignored due to path filters (1)
frontend/bun.lock is excluded by !**/*.lock
📒 Files selected for processing (27)
- .env.example
- backend/Cargo.toml
- backend/crates/atlas-common/src/error.rs
- backend/crates/atlas-server/Cargo.toml
- backend/crates/atlas-server/src/api/error.rs
- backend/crates/atlas-server/src/api/handlers/faucet.rs
- backend/crates/atlas-server/src/api/handlers/mod.rs
- backend/crates/atlas-server/src/api/handlers/status.rs
- backend/crates/atlas-server/src/api/mod.rs
- backend/crates/atlas-server/src/config.rs
- backend/crates/atlas-server/src/faucet.rs
- backend/crates/atlas-server/src/main.rs
- docker-compose.yml
- frontend/package.json
- frontend/src/App.tsx
- frontend/src/api/client.ts
- frontend/src/api/faucet.ts
- frontend/src/api/index.ts
- frontend/src/components/Layout.tsx
- frontend/src/hooks/index.ts
- frontend/src/hooks/useFaucetInfo.ts
- frontend/src/pages/AddressPage.tsx
- frontend/src/pages/FaucetPage.tsx
- frontend/src/pages/index.ts
- frontend/src/types/index.ts
- frontend/src/utils/apiError.ts
- frontend/src/utils/index.ts
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 09c4c2333e
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
🧹 Nitpick comments (1)
frontend/src/pages/FaucetPage.tsx (1)
Lines 92-137: Consider clearing error state when the address changes, for better UX.

Currently, validation and submission errors persist until the next submit attempt. Users might expect the error to clear when they start editing the address. This is a minor UX enhancement.

✨ Optional: Clear error on input change

Add an `onChange` handler or modify the existing one at line 177:

```diff
- onChange={(event) => setAddress(event.target.value)}
+ onChange={(event) => {
+   setAddress(event.target.value);
+   if (submitError) setSubmitError(null);
+ }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/FaucetPage.tsx` around lines 92 - 137, The form keeps validation/submission errors until the next submit; update the address input change handler (where address state is set) to clear errors so they disappear as the user edits: call setSubmitError(null) (and optionally setTxHash(null)) inside the existing onChange that updates address, ensuring you modify the same component that defines handleSubmit and uses setSubmitError, setTxHash, and address so errors clear immediately when the user types.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@frontend/src/pages/FaucetPage.tsx`:
- Around line 92-137: The form keeps validation/submission errors until the next
submit; update the address input change handler (where address state is set) to
clear errors so they disappear as the user edits: call setSubmitError(null) (and
optionally setTxHash(null)) inside the existing onChange that updates address,
ensuring you modify the same component that defines handleSubmit and uses
setSubmitError, setTxHash, and address so errors clear immediately when the user
types.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 146dc211-929b-4262-b509-69f1d6e82fd1
📒 Files selected for processing (1)
frontend/src/pages/FaucetPage.tsx
🧹 Nitpick comments (1)
backend/crates/atlas-server/src/api/handlers/faucet.rs (1)
Lines 78-82: Consider normalizing IPv4-mapped IPv6 addresses to IPv4.

The current implementation preserves the distinction between `127.0.0.1` and `::ffff:127.0.0.1`, which represent the same client. While nginx typically uses a consistent format, explicitly normalizing IPv4-mapped IPv6 addresses would strengthen rate-limiting.

♻️ Proposed enhancement

```diff
 fn normalize_ip(ip: &str) -> Result<String, AtlasError> {
     let parsed = IpAddr::from_str(ip)
         .map_err(|_| AtlasError::InvalidInput("Invalid client IP address".to_string()))?;
-    Ok(parsed.to_string())
+    // Convert IPv4-mapped IPv6 addresses to their IPv4 equivalent
+    let canonical = match parsed {
+        IpAddr::V6(v6) => v6
+            .to_ipv4_mapped()
+            .map(IpAddr::V4)
+            .unwrap_or(IpAddr::V6(v6)),
+        other => other,
+    };
+    Ok(canonical.to_string())
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/api/handlers/faucet.rs` around lines 78 - 82, normalize_ip currently returns the parsed IpAddr string verbatim, which keeps IPv4-mapped IPv6 addresses like "::ffff:127.0.0.1" distinct from "127.0.0.1"; update normalize_ip to detect an IpAddr::V6 that represents an IPv4-mapped address and convert it to the equivalent IpAddr::V4 before returning. Concretely, after parsing into IpAddr in normalize_ip, check for the V6 variant and call the appropriate conversion (e.g., use the Ipv6 -> Ipv4 conversion helper / to_ipv4() or equivalent) to produce an IpAddr::V4 and return its to_string(), otherwise return the original parsed address string.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@backend/crates/atlas-server/src/api/handlers/faucet.rs`:
- Around line 78-82: normalize_ip currently returns the parsed IpAddr string
verbatim, which keeps IPv4-mapped IPv6 addresses like "::ffff:127.0.0.1"
distinct from "127.0.0.1"; update normalize_ip to detect an IpAddr::V6 that
represents an IPv4-mapped address and convert it to the equivalent IpAddr::V4
before returning. Concretely, after parsing into IpAddr in normalize_ip, check
for the V6 variant and call the appropriate conversion (e.g., use the Ipv6 ->
Ipv4 conversion helper / to_ipv4() or equivalent) to produce an IpAddr::V4 and
return its to_string(), otherwise return the original parsed address string.
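A standalone, runnable sketch of that normalization follows; the error type is simplified to `String` here (the project's code uses `AtlasError`), and `std::net::Ipv6Addr::to_ipv4_mapped` does the conversion.

```rust
use std::net::IpAddr;
use std::str::FromStr;

// Collapse IPv4-mapped IPv6 addresses (::ffff:a.b.c.d) to plain IPv4
// so both textual forms of the same client share one rate-limit bucket.
fn normalize_ip(ip: &str) -> Result<String, String> {
    let parsed =
        IpAddr::from_str(ip).map_err(|_| "Invalid client IP address".to_string())?;
    let canonical = match parsed {
        IpAddr::V6(v6) => v6
            .to_ipv4_mapped()
            .map(IpAddr::V4)
            .unwrap_or(IpAddr::V6(v6)),
        other => other,
    };
    Ok(canonical.to_string())
}
```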
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5b7f694b-40b3-4007-914b-bc0cc6856eae
📒 Files selected for processing (2)
- backend/crates/atlas-server/src/api/handlers/faucet.rs
- backend/crates/atlas-server/src/api/mod.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/crates/atlas-server/src/api/mod.rs
pthmas
left a comment
left just one nit.
Threw the PR into claude to review the frontend and got this:
useFaucetInfo fires on every page load via Layout — Layout.tsx
Since Layout renders on every route, GET /api/faucet/info is called on every navigation. For a disabled faucet (404 response), this is a wasted request on every page. Consider caching the 404 result (e.g., in a context or with a stale-while-revalidate approach) rather than re-fetching each time.
Might be worth investigating.
```rust
use tower::util::ServiceExt;

#[derive(Clone)]
struct FakeFaucet;
```
This is a duplicate of the FakeFaucet in faucet.rs could we have a single definition?
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/crates/atlas-server/src/api/mod.rs`:
- Around line 16-24: AppState is missing the required solc_path and
admin_api_key fields; update the AppState struct to include pub solc_path:
String and pub admin_api_key: Option<String>, and then propagate these values
through the server state construction where AppState is instantiated (look for
functions/constructors that build AppState or return the app state, e.g., any
new_app_state/build_app_state/initialize_app_state or the code that previously
set pool, block_events_tx, rpc_url, chain_id, chain_name), ensuring the API uses
the API-specific PgPool and that solc_path and admin_api_key are sourced from
configuration/environment and passed into the AppState creation call.
In `@backend/crates/atlas-server/src/config.rs`:
- Around line 39-45: FaucetConfig currently derives Debug while storing the
sensitive private_key field, risking secret leakage; remove #[derive(Debug)]
from FaucetConfig and either implement a manual impl Debug for FaucetConfig that
redacts or omits the private_key field, or drop Debug entirely for this struct;
locate the FaucetConfig definition and replace the derive with a safe manual
Debug implementation (or no Debug) so that private_key is not printed or logged.
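A minimal sketch of that redaction follows; the two fields are simplified stand-ins (the real FaucetConfig has more), but the shape of the manual `Debug` impl is the point.

```rust
use std::fmt;

// Simplified stand-in for the real config struct.
struct FaucetConfig {
    enabled: bool,
    private_key: Option<String>,
}

// Manual Debug impl instead of #[derive(Debug)], so logging or
// debug-printing the config can never leak the signing key.
impl fmt::Debug for FaucetConfig {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("FaucetConfig")
            .field("enabled", &self.enabled)
            .field(
                "private_key",
                &self.private_key.as_ref().map(|_| "<redacted>"),
            )
            .finish()
    }
}
```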
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 62-85: The new conditional faucet startup logic (using
faucet_config = config::FaucetConfig::from_env(), the if faucet_config.enabled
branch that constructs PrivateKeySigner,
ProviderBuilder::new().wallet(...).connect_http(...), and FaucetService::new as
Arc<dyn faucet::FaucetBackend>) lacks unit tests; add a #[cfg(test)] mod tests
in this file that extracts a small helper to invoke the faucet initialization
logic with a provided FaucetConfig and returns whether the service was created
or an error, and then add tests for (1) disabled faucet -> returns None, (2)
enabled faucet with valid fields -> returns Some(Arc<FaucetService>), and (3)
enabled faucet with invalid rpc_url -> returns an Err from the
connect_http/parse step; use mocked or minimal valid private_key and
amount/cooldown values and assert outcomes via the helper so tests run under
cargo test --workspace.
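The extraction that prompt asks for can be sketched like this. All names here (`build_faucet`, the pared-down `FaucetConfig`/`FaucetService`) are illustrative stand-ins rather than the project's real types, and the scheme check stands in for `reqwest::Url` parsing, so the three outcomes stay testable without a live provider.

```rust
// Simplified stand-ins for the real config and service types.
struct FaucetConfig {
    enabled: bool,
    private_key: Option<String>,
    amount_wei: Option<u128>,
    cooldown_minutes: Option<u64>,
}

struct FaucetService {
    amount_wei: u128,
    cooldown_minutes: u64,
}

// Extracted helper: Ok(None) when disabled, Ok(Some(..)) when valid,
// Err on a bad RPC URL or missing validated fields.
fn build_faucet(cfg: &FaucetConfig, rpc_url: &str) -> Result<Option<FaucetService>, String> {
    if !cfg.enabled {
        return Ok(None);
    }
    // Stand-in for the reqwest::Url parse step in the real code.
    if !(rpc_url.starts_with("http://") || rpc_url.starts_with("https://")) {
        return Err(format!("Invalid RPC_URL for faucet: {rpc_url}"));
    }
    cfg.private_key.as_ref().ok_or("missing private key")?;
    Ok(Some(FaucetService {
        amount_wei: cfg.amount_wei.ok_or("missing amount")?,
        cooldown_minutes: cfg.cooldown_minutes.ok_or("missing cooldown")?,
    }))
}
```

With the logic isolated this way, the disabled, enabled-valid, and invalid-URL paths can each be asserted directly under `cargo test --workspace`.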
In `@Justfile`:
- Around line 116-119: Remove the hardcoded test faucet private key by deleting
the FAUCET_PRIVATE_KEY assignment in the Justfile test-run section and make the
key required from the caller environment: stop setting FAUCET_PRIVATE_KEY=0x...
in the Justfile, instead reference the environment variable FAUCET_PRIVATE_KEY
(and add a runtime check in the test-run task/recipe to fail with a clear error
if FAUCET_PRIVATE_KEY is unset), and ensure any documentation/examples instruct
callers to export their own key before running.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cb18361b-d2a4-47f0-ba56-3a85c61da524
📒 Files selected for processing (11)
- .env.example
- Justfile
- backend/crates/atlas-server/src/api/handlers/mod.rs
- backend/crates/atlas-server/src/api/handlers/status.rs
- backend/crates/atlas-server/src/api/mod.rs
- backend/crates/atlas-server/src/config.rs
- backend/crates/atlas-server/src/main.rs
- docker-compose.yml
- frontend/src/App.tsx
- frontend/src/components/Layout.tsx
- frontend/src/pages/index.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- frontend/src/App.tsx
- docker-compose.yml
- backend/crates/atlas-server/src/api/handlers/status.rs
- frontend/src/components/Layout.tsx
```rust
pub struct AppState {
    pub pool: PgPool,
    pub block_events_tx: broadcast::Sender<()>,
    pub head_tracker: Arc<HeadTracker>,
    pub rpc_url: String,
    pub faucet: Option<SharedFaucetBackend>,
    pub chain_id: u64,
    pub chain_name: String,
}
```
AppState is still missing required solc_path and admin_api_key fields.
Since this PR updates AppState, please align it with the required API state contract and wire these fields through state construction as well.
🧩 Suggested shape update
```diff
 pub struct AppState {
     pub pool: PgPool,
     pub block_events_tx: broadcast::Sender<()>,
     pub head_tracker: Arc<HeadTracker>,
     pub rpc_url: String,
+    pub solc_path: String,
+    pub admin_api_key: Option<String>,
     pub faucet: Option<SharedFaucetBackend>,
     pub chain_id: u64,
     pub chain_name: String,
 }
```

As per coding guidelines backend/crates/atlas-server/src/api/**/*.rs: Configure AppState to include pool: PgPool (API pool only), block_events_tx: broadcast::Sender<()>, rpc_url: String, solc_path: String, and optional admin_api_key.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/crates/atlas-server/src/api/mod.rs` around lines 16 - 24, AppState is
missing the required solc_path and admin_api_key fields; update the AppState
struct to include pub solc_path: String and pub admin_api_key: Option<String>,
and then propagate these values through the server state construction where
AppState is instantiated (look for functions/constructors that build AppState or
return the app state, e.g., any
new_app_state/build_app_state/initialize_app_state or the code that previously
set pool, block_events_tx, rpc_url, chain_id, chain_name), ensuring the API uses
the API-specific PgPool and that solc_path and admin_api_key are sourced from
configuration/environment and passed into the AppState creation call.
```rust
let faucet_config = config::FaucetConfig::from_env()?;

let faucet = if faucet_config.enabled {
    tracing::info!("Faucet enabled");
    let private_key = faucet_config
        .private_key
        .as_ref()
        .expect("validated faucet private key");
    let signer: PrivateKeySigner = private_key.parse().expect("validated faucet private key");
    let rpc_url: reqwest::Url = config
        .rpc_url
        .parse()
        .map_err(|e| anyhow::anyhow!("Invalid RPC_URL for faucet: {e}"))?;
    let provider = ProviderBuilder::new().wallet(signer).connect_http(rpc_url);
    Some(Arc::new(faucet::FaucetService::new(
        provider,
        faucet_config.amount_wei.expect("validated faucet amount"),
        faucet_config
            .cooldown_minutes
            .expect("validated faucet cooldown"),
    )) as Arc<dyn faucet::FaucetBackend>)
} else {
    None
};
```
Add unit coverage for the new faucet startup branch.
This block introduces new conditional startup logic but currently has no targeted tests in this file. Please add tests for enabled/disabled faucet paths (and invalid faucet RPC URL parse path) via a small extracted helper.
As per coding guidelines backend/crates/**/*.rs: Add unit tests for new logic in a #[cfg(test)] mod tests block in the same file, run with cargo test --workspace.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/crates/atlas-server/src/main.rs` around lines 62 - 85, The new
conditional faucet startup logic (using faucet_config =
config::FaucetConfig::from_env(), the if faucet_config.enabled branch that
constructs PrivateKeySigner,
ProviderBuilder::new().wallet(...).connect_http(...), and FaucetService::new as
Arc<dyn faucet::FaucetBackend>) lacks unit tests; add a #[cfg(test)] mod tests
in this file that extracts a small helper to invoke the faucet initialization
logic with a provided FaucetConfig and returns whether the service was created
or an error, and then add tests for (1) disabled faucet -> returns None, (2)
enabled faucet with valid fields -> returns Some(Arc<FaucetService>), and (3)
enabled faucet with invalid rpc_url -> returns an Err from the
connect_http/parse step; use mocked or minimal valid private_key and
amount/cooldown values and assert outcomes via the helper so tests run under
cargo test --workspace.

Overview
closes #28
Summary by CodeRabbit
New Features
Behavioral / API
Configuration