feat: Add action logging system for audit and debugging #3

NathanWang7 wants to merge 6 commits into chainstacklabs:main
Conversation
- New lib/action_logger.py module with thread-safe JSONL logging
- Daily log rotation with 30-day retention
- Sensitive data sanitization (private keys, API keys)
- Context manager for timing and logging operations
- New `polyclaw actions` command to view logs

Logging added to all major operations:
- markets: trending, search, details, events
- wallet: status, approve
- trade: buy
- positions: add, close, delete
- hedge: scan, analyze

Log files stored at ~/.openclaw/polyclaw/logs/actions-YYYY-MM-DD.jsonl

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
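The context-manager flow the description outlines (start a timer, attach details, record success or failure, append one JSON object per line) can be sketched minimally like this. The class and method names follow the PR summary, but this is an illustrative sketch: the real lib/action_logger.py adds thread safety, sanitization, and daily file rotation on top of this shape.

```python
import json
import time
from datetime import datetime, timezone

class ActionLogger:
    """Minimal sketch of the context-manager logging flow.

    The real module writes to a daily JSONL file and sanitizes
    params/details; here an in-memory list stands in for the file.
    """

    def __init__(self, action, params=None):
        self.action = action
        self.params = params or {}
        self.status = "unknown"
        self.details = {}
        self.entries = []  # stand-in for the daily JSONL file

    def __enter__(self):
        self._start = time.monotonic()  # start timer
        return self

    def set_details(self, details):
        self.details.update(details)

    def success(self):
        self.status = "success"

    def failure(self, error):
        self.status = "failure"
        self.details["error"] = str(error)

    def __exit__(self, exc_type, exc, tb):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": self.action,
            "status": self.status,
            "duration_ms": round((time.monotonic() - self._start) * 1000, 1),
            "params": self.params,
            "details": self.details,
        }
        self.entries.append(json.dumps(entry))  # one JSON object per line
        return False  # never swallow exceptions

with ActionLogger("markets.search", {"query": "election"}) as log:
    log.set_details({"results": 3})
    log.success()
```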
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds a new thread-safe action logging module with daily JSONL rotation and 30-day retention, plus a CLI viewer and instrumentation across multiple command scripts to emit sanitized structured action logs and provide query helpers.
Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as CLI Command
    participant Logger as ActionLogger
    participant Sanitizer as Sanitizer
    participant FS as DailyLogFile
    participant Retriever as QueryFunctions
    CLI->>Logger: __enter__(action, params)
    Logger->>Logger: start timer
    CLI->>Logger: set_details(details)
    alt error occurs
        CLI->>Logger: failure(error)
    else success
        CLI->>Logger: success()
    end
    CLI->>Logger: __exit__()
    Logger->>Sanitizer: sanitize_dict(params/details)
    Sanitizer-->>Logger: masked data
    Logger->>FS: append ActionEntry (JSONL)
    Logger->>FS: trigger cleanup_old_logs()
    FS->>FS: delete files older than 30 days
    CLI->>Retriever: get_recent_actions(...) / get_actions_by_date(...)
    Retriever->>FS: read daily JSONL files
    FS-->>Retriever: parsed entries
    Retriever-->>CLI: filtered results
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 11
🧹 Nitpick comments (3)
lib/action_logger.py (2)
261-274: Same single-day limitation as `get_recent_actions`. This function also only searches today's log when filtering by type.
248-258: `get_recent_actions` only searches today's log. If the limit exceeds today's action count, it won't return actions from previous days. This may be unexpected for users calling `polyclaw actions --limit 100` early in the day. Consider documenting this limitation or extending the search to multiple days.
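A multi-day lookback along the lines suggested here could look roughly like this. `get_actions_by_date` is stubbed with an in-memory dict and the dates are invented for illustration; the real function would read the daily JSONL files.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory stand-in for the daily JSONL log files.
_LOGS_BY_DATE = {
    "2025-01-02": [{"action": "trade.buy"}, {"action": "wallet.status"}],
    "2025-01-01": [{"action": "markets.trending"}],
}

def get_actions_by_date(date_str):
    return _LOGS_BY_DATE.get(date_str, [])

def get_recent_actions(limit=20, max_lookback_days=30, today=None):
    """Walk backwards day by day, newest first, until `limit` actions
    are collected or the lookback window is exhausted."""
    day = today or datetime.now(timezone.utc)
    collected = []
    for _ in range(max_lookback_days):
        # Each day's file is oldest-first, so reverse for recency order.
        collected.extend(reversed(get_actions_by_date(day.strftime("%Y-%m-%d"))))
        if len(collected) >= limit:
            break
        day -= timedelta(days=1)
    return collected[:limit]

recent = get_recent_actions(limit=3, today=datetime(2025, 1, 2, tzinfo=timezone.utc))
```

Capping the loop at `max_lookback_days` keeps a large `--limit` from scanning unbounded history.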
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/action_logger.py` around lines 248 - 258, get_recent_actions currently only calls get_actions_by_date for today's date, which means a large limit requested early in the day returns fewer items; change get_recent_actions to iterate backwards day-by-day (use datetime.now(timezone.utc) and subtract timedelta(days=1) each loop) calling get_actions_by_date(date_str) and appending results until you have >= limit or until a configurable max lookback is reached, then return the last `limit` items reversed to keep most-recent-first ordering; reference the existing function names get_recent_actions and get_actions_by_date and ensure you preserve timezone.utc when formatting dates and enforce the limit when slicing the final list.

scripts/positions.py (1)
193-193: `datetime.utcnow()` is deprecated in Python 3.12+. Use `datetime.now(timezone.utc)` instead for consistency with `scripts/trade.py`. Note: requires adding `timezone` to the import statement.

Proposed fix
```diff
-from datetime import datetime
+from datetime import datetime, timezone

- entry_time=datetime.utcnow().isoformat(),
+ entry_time=datetime.now(timezone.utc).isoformat(),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/positions.py` at line 193, Replace the deprecated datetime.utcnow() usage for the entry_time field with a timezone-aware call: import timezone from datetime and use datetime.now(timezone.utc).isoformat() where entry_time=datetime.utcnow().isoformat() is set (look for the entry_time assignment in scripts/positions.py); update the imports to include timezone to match the pattern used in scripts/trade.py.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/action_logger.py`:
- Around line 58-77: The unused sensitive_keys set is dead code and the current
substring check using key_lower is incomplete for fields like address, tx_hash,
tx_hashes, and clob_order_id referenced by scripts/wallet.py and
scripts/trade.py; either remove sensitive_keys or switch the check to use it
(e.g., check membership/substring of the normalized key against sensitive_keys)
and extend the sanitization logic to mask wallet addresses and transaction
hashes (value) by returning a partially redacted string (preserving
prefixes/suffixes similar to existing private/key branches) when key_lower
matches address, tx_hash, tx_hashes or clob_order_id; update the code paths
around sensitive_keys, key_lower and value accordingly so no unused variables
remain and wallet/tx identifiers are masked consistently.
In `@scripts/actions.py`:
- Around line 93-97: The conditional around args.json is redundant because both
branches call print(json.dumps(entry, indent=2)); simplify by removing the
if/else and replace with a single call to print(json.dumps(entry, indent=2))
followed by return 0, updating the code that references args, the json flag, and
the entry variable (the surrounding function in scripts/actions.py) accordingly.
- Around line 147-149: The default branch returns cmd_list(args) but args may
lack expected attributes (type, date, limit, json); before calling cmd_list in
that else branch, ensure you attach default attributes to the Namespace (e.g.,
set args.type, args.date, args.limit, args.json to the same sensible defaults
used by other subcommands or by cmd_list), then call cmd_list(args); reference
the else branch where cmd_list(args) is returned and the cmd_list function
signature to mirror required defaults.
In `@scripts/hedge.py`:
- Line 483: The print uses an unnecessary f-string with no placeholders; replace
the f-string print(f"\nAnalyzing implications...", file=sys.stderr) with a
normal string print("\nAnalyzing implications...", file=sys.stderr) to avoid
misleading formatting and lint warnings—update the print call in
scripts/hedge.py where the "\nAnalyzing implications..." message is emitted.
- Line 379: The print call uses an f-string with no placeholders
(print(f"Fetching markets...", file=sys.stderr)); replace the f-string with a
normal string literal or add the intended placeholders/variables. Locate the
print invocation in scripts/hedge.py (the "Fetching markets..." print) and
change it to use a plain string or interpolate the actual variables if dynamic
content was intended.
- Line 460: The print call uses an unnecessary f-string: change print(f"Fetching
markets...", file=sys.stderr) to a regular string literal (print("Fetching
markets...", file=sys.stderr)) or add actual placeholders if dynamic content is
intended; update the statement in scripts/hedge.py where the print("Fetching
markets...") call occurs (look for the print(...) with "Fetching markets..." to
locate it).
In `@scripts/markets.py`:
- Line 8: Remove the unused import statement "import time" from the top-level of
the module (the import line in this file); locate the import line referencing
the time module and delete it so there are no unused imports left (run
lint/flake8 after to confirm no other unused imports).
- Around line 62-67: The local variable trunc is assigned but not used; update
the question truncation to use trunc instead of the hard-coded 60 (in the loop
where question is set) so it respects args.full via trunc, e.g. compute question
= m.question if args.full else (m.question[:trunc] + "..." if len(m.question) >
trunc else m.question), referencing the loop variable m and functions
format_price/format_volume left unchanged; alternatively remove the unused trunc
assignment if you prefer not to use it.
In `@scripts/positions.py`:
- Line 229: The print call uses an unnecessary f-string: change the expression
print(f"Multiple matches, be more specific") to a normal string literal (remove
the leading f) so it becomes print("Multiple matches, be more specific"); locate
and update the exact statement shown as print(f"Multiple matches, be more
specific") in scripts/positions.py.
- Line 259: The print call uses an unnecessary f-string: replace the literal
f-string print(f"Multiple matches, be more specific") with a regular string
print("Multiple matches, be more specific") (and make the same change for the
analogous occurrence noted at the earlier location around line 229) to avoid
misleading f-string usage.
In `@scripts/trade.py`:
- Around line 291-292: The two print statements use f-strings with no
placeholders—change them to normal string literals (remove the leading 'f') or,
if you intended to include variables, add the appropriate placeholders and
variables; locate the print calls that output " CLOB: Skipped (--skip-sell)"
and " Note: You have both YES and NO tokens" (in the function handling the
sell/skip logic) and update them accordingly.
---
Nitpick comments:
In `@lib/action_logger.py`:
- Around line 248-258: get_recent_actions currently only calls
get_actions_by_date for today's date which means a large limit requested early
in the day returns fewer items; change get_recent_actions to iterate backwards
day-by-day (use datetime.now(timezone.utc) and subtract timedelta(days=1) each
loop) calling get_actions_by_date(date_str) and appending results until you have
>= limit or until a configurable max lookback is reached, then return the last
`limit` items reversed to keep most-recent-first ordering; reference the
existing function names get_recent_actions and get_actions_by_date and ensure
you preserve timezone.utc when formatting dates and enforce the limit when
slicing the final list.
In `@scripts/positions.py`:
- Line 193: Replace the deprecated datetime.utcnow() usage for the entry_time
field with a timezone-aware call: import timezone from datetime and use
datetime.now(timezone.utc).isoformat() where
entry_time=datetime.utcnow().isoformat() is set (look for the entry_time
assignment in scripts/positions.py); update the imports to include timezone to
match the pattern used in scripts/trade.py.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2c18c757-a572-449c-9101-bb0161654f07
📒 Files selected for processing (8)
- lib/action_logger.py
- scripts/actions.py
- scripts/hedge.py
- scripts/markets.py
- scripts/polyclaw.py
- scripts/positions.py
- scripts/trade.py
- scripts/wallet.py
- Add docstrings to ActionLogger.__init__, __enter__, __exit__
- Add docstring to actions.py main() function

Fixes docstring coverage warning from CodeRabbit review.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
♻️ Duplicate comments (3)
lib/action_logger.py (1)
58-62: ⚠️ Potential issue | 🟡 Minor

Unused `sensitive_keys` set: dead code. The `sensitive_keys` set is defined but never referenced. The actual check on line 67 uses hardcoded substring matching. Remove the unused variable or refactor to use it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/action_logger.py` around lines 58 - 62, The defined set sensitive_keys is dead code; either remove it or use it in the masking logic so checks aren't hardcoded. Update the masking routine in lib/action_logger.py to use sensitive_keys (e.g., normalize to lowercase and check if any(s in header_lower or s in key_lower for s in sensitive_keys) or any(s in value_lower for s in sensitive_keys)) instead of the current hardcoded substring checks, or simply delete the sensitive_keys definition if you prefer the existing substring approach.

scripts/actions.py (2)
148-150: ⚠️ Potential issue | 🟠 Major

`AttributeError` when running without subcommand. When no subcommand is provided, `args` lacks the attributes (`type`, `date`, `limit`, `json`) that `cmd_list` expects, causing an `AttributeError`.

Proposed fix
```diff
 else:
     # Default to list
+    args.json = False
+    args.type = None
+    args.date = None
+    args.limit = 20
     return cmd_list(args)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/actions.py` around lines 148 - 150, The code returns cmd_list(args) when no subcommand is provided but args may not have attributes cmd_list expects (type, date, limit, json), causing AttributeError; fix by ensuring cmd_list reads attributes defensively (use getattr(args, 'type', <default>), getattr(args, 'date', None), getattr(args, 'limit', <default>), getattr(args, 'json', False)) or construct a small argparse.Namespace with those defaults before calling cmd_list(args), updating the caller where the fallback return is used so cmd_list always receives an object with the expected symbols (cmd_list and args).
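The defaults-attaching approach described in the prompt can be sketched like this. The subcommand, flags, and default values are illustrative, not the PR's exact CLI; the point is that the fallback branch gives `args` the same attributes the subparser would have set, while `cmd_list` also reads them defensively via `getattr`.

```python
import argparse

# Defaults mirrored from the hypothetical `list` subparser below.
LIST_DEFAULTS = {"type": None, "date": None, "limit": 20, "json": False}

def cmd_list(args):
    # Read attributes defensively so a bare invocation also works.
    return {
        "limit": getattr(args, "limit", 20),
        "json": getattr(args, "json", False),
    }

def main(argv):
    parser = argparse.ArgumentParser(prog="actions")
    sub = parser.add_subparsers(dest="command")
    list_p = sub.add_parser("list")
    list_p.add_argument("--limit", type=int, default=20)
    list_p.add_argument("--json", action="store_true")
    args = parser.parse_args(argv)
    if args.command is None:
        # No subcommand: attach the same defaults `list` would have set.
        for key, value in LIST_DEFAULTS.items():
            setattr(args, key, value)
    return cmd_list(args)
```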
93-97: ⚠️ Potential issue | 🟡 Minor

Redundant conditional: both branches produce identical output. The `if/else` at lines 93-96 executes the same code regardless of `args.json`. This was likely intended to have different formatting for non-JSON output.

Proposed fix
```diff
 entry = actions[args.index]
-if getattr(args, 'json', False):
-    print(json.dumps(entry, indent=2))
-else:
-    print(json.dumps(entry, indent=2))
+print(json.dumps(entry, indent=2))
 return 0
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/actions.py` around lines 93 - 97, The conditional on args.json is redundant because both branches call print(json.dumps(entry, indent=2)); change this so that when args.json is True it prints JSON via print(json.dumps(entry, indent=2)) and when False it prints a human-readable representation (e.g., use pprint.pprint(entry) or a custom formatted string) instead of duplicating the JSON output; update the code around the args.json check in scripts/actions.py to remove the duplicate branch and implement the alternate non-JSON formatting for the variable entry.
🧹 Nitpick comments (1)
lib/action_logger.py (1)
17-28: Consider extracting shared storage path logic. `get_storage_dir()` duplicates the implementation in `lib/position_storage.py` (lines 10-14). Consider extracting this to a shared utility module to maintain a single source of truth for the storage path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/action_logger.py` around lines 17 - 28, get_storage_dir() and get_logs_dir() duplicate the storage-path construction found in position_storage.py; extract the common logic into a single helper (e.g., new function get_base_storage_dir or storage_dir_for_polyclaw) in a shared util module, update get_storage_dir to call that helper rather than re-creating Path.home() / ".openclaw" / "polyclaw", and change get_logs_dir to derive its path from the same helper; import and use this new helper in both lib/action_logger.py (affecting get_storage_dir/get_logs_dir) and in the code that originally duplicated the logic (the functions in position_storage.py) so there is one source of truth for the storage path.
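A shared helper along the suggested lines might look like this. The function names are the prompt's suggestions rather than code that exists in the PR; the idea is simply that both `action_logger` and `position_storage` derive their paths from one helper.

```python
from pathlib import Path

def get_base_storage_dir() -> Path:
    """Single source of truth for the polyclaw storage root; both
    action_logger and position_storage would derive from this."""
    return Path.home() / ".openclaw" / "polyclaw"

def get_logs_dir() -> Path:
    # Derive rather than re-building Path.home() / ".openclaw" / ...
    return get_base_storage_dir() / "logs"
```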
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/actions.py`:
- Around line 100-116: cmd_files currently ignores the --json flag (args.json)
so implement JSON output: when args.json is True, build a list of objects for
each file (use f.stem and the line count) and print a compact JSON array via
json.dumps or json.dump to stdout; otherwise preserve the existing
human-readable prints. Update function cmd_files to import json if needed and
ensure return value and exit behavior remain unchanged; reference args.json and
function cmd_files to locate changes.
---
Duplicate comments:
In `@lib/action_logger.py`:
- Around line 58-62: The defined set sensitive_keys is dead code; either remove
it or use it in the masking logic so checks aren't hardcoded. Update the masking
routine in lib/action_logger.py to use sensitive_keys (e.g., normalize to
lowercase and check if any(s in header_lower or s in key_lower for s in
sensitive_keys) or any(s in value_lower for s in sensitive_keys)) instead of the
current hardcoded substring checks, or simply delete the sensitive_keys
definition if you prefer the existing substring approach.
In `@scripts/actions.py`:
- Around line 148-150: The code returns cmd_list(args) when no subcommand is
provided but args may not have attributes cmd_list expects (type, date, limit,
json), causing AttributeError; fix by ensuring cmd_list reads attributes
defensively (use getattr(args, 'type', <default>), getattr(args, 'date', None),
getattr(args, 'limit', <default>), getattr(args, 'json', False)) or construct a
small argparse.Namespace with those defaults before calling cmd_list(args),
updating the caller where the fallback return is used so cmd_list always
receives an object with the expected symbols (cmd_list and args).
- Around line 93-97: The conditional on args.json is redundant because both
branches call print(json.dumps(entry, indent=2)); change this so that when
args.json is True it prints JSON via print(json.dumps(entry, indent=2)) and when
False it prints a human-readable representation (e.g., use pprint.pprint(entry)
or a custom formatted string) instead of duplicating the JSON output; update the
code around the args.json check in scripts/actions.py to remove the duplicate
branch and implement the alternate non-JSON formatting for the variable entry.
---
Nitpick comments:
In `@lib/action_logger.py`:
- Around line 17-28: get_storage_dir() and get_logs_dir() duplicate the
storage-path construction found in position_storage.py; extract the common logic
into a single helper (e.g., new function get_base_storage_dir or
storage_dir_for_polyclaw) in a shared util module, update get_storage_dir to
call that helper rather than re-creating Path.home() / ".openclaw" / "polyclaw",
and change get_logs_dir to derive its path from the same helper; import and use
this new helper in both lib/action_logger.py (affecting
get_storage_dir/get_logs_dir) and in the code that originally duplicated the
logic (the functions in position_storage.py) so there is one source of truth for
the storage path.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bdf8bd1e-a5ae-41f8-976a-c87ccba87169
📒 Files selected for processing (2)
- lib/action_logger.py
- scripts/actions.py
- Remove unused sensitive_keys variable in action_logger.py
- Fix redundant conditional in actions.py cmd_show
- Add default args attributes for actions.py default branch
- Remove unused import time in markets.py
- Use trunc variable in markets.py instead of hardcoded 60
- Fix f-strings without placeholders in hedge.py, positions.py, trade.py

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
♻️ Duplicate comments (1)
scripts/actions.py (1)
97-113: ⚠️ Potential issue | 🟡 Minor

Implement JSON output for `cmd_files` or remove the `--json` flag. The `files` subcommand accepts a `--json` flag (line 135), but `cmd_files` doesn't use `args.json`. This inconsistency could confuse users expecting JSON output.

Proposed fix to implement JSON output
```diff
 def cmd_files(args):
     """List available log files."""
     logs_dir = get_logs_dir()
     log_files = sorted(logs_dir.glob("actions-*.jsonl"), reverse=True)
     if not log_files:
+        if getattr(args, 'json', False):
+            print(json.dumps([]))
+        else:
+            print("No log files found.")
-        print("No log files found.")
         return 0

+    file_data = []
+    for f in log_files:
+        with open(f) as fp:
+            count = sum(1 for _ in fp)
+        file_data.append({"name": f.stem, "count": count})
+
+    if getattr(args, 'json', False):
+        print(json.dumps(file_data, indent=2))
+        return 0
+
     print("Available log files:")
-    for f in log_files:
-        # Count lines
-        with open(f) as fp:
-            count = sum(1 for _ in fp)
-        print(f"  {f.stem} ({count} actions)")
+    for fd in file_data:
+        print(f"  {fd['name']} ({fd['count']} actions)")
     return 0
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/actions.py` around lines 97 - 113, The cmd_files function currently ignores the --json flag; update cmd_files to check args.json and, if true, build a JSON array (using json.dumps) of objects with e.g. {"name": f.stem, "count": <line count>} computed by iterating over log_files (use get_logs_dir() and f.stem as already used) and print the JSON to stdout; otherwise preserve the existing human-readable output, and ensure you import json at top if not present and keep return value behavior the same.
🧹 Nitpick comments (1)
lib/action_logger.py (1)
158-161: Probabilistic cleanup trigger may have uneven distribution. Using `hash(entry.timestamp) % 100 == 0` for triggering cleanup relies on timestamp string hashing. Since timestamps follow patterns (sequential seconds, similar prefixes), the hash distribution might cluster. Consider using a random check instead for more uniform cleanup triggering.

Proposed alternative
```diff
+import random
+
 # Periodically clean up old logs
 # (Run cleanup ~1% of the time to avoid overhead)
-if hash(entry.timestamp) % 100 == 0:
+if random.random() < 0.01:
     cleanup_old_logs()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/action_logger.py` around lines 158 - 161, The probabilistic cleanup currently uses hash(entry.timestamp) % 100 == 0 which can cluster due to timestamp patterns; replace that check with a uniform random-based trigger such as using the random module (e.g., random.random() < 0.01 or random.randint(0,99) == 0) and ensure the module is imported; update the conditional around cleanup_old_logs() (the block referencing entry.timestamp and calling cleanup_old_logs()) to use the new random check.
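The uniform random trigger suggested above, written so it can be exercised deterministically by injecting the RNG. The `maybe_cleanup` wrapper is illustrative; the proposed fix inlines the `random.random() < 0.01` check.

```python
import random

def maybe_cleanup(cleanup, rate=0.01, rng=random.random):
    """Run `cleanup` roughly `rate` of the time, uniformly.

    Unlike hash(entry.timestamp) % 100, random.random() is uniform
    regardless of how similar consecutive timestamp strings look.
    """
    if rng() < rate:
        cleanup()
        return True
    return False

calls = []
# Deterministic stubs for illustration: 0.005 < 0.01 fires, 0.5 does not.
maybe_cleanup(lambda: calls.append("ran"), rng=lambda: 0.005)
maybe_cleanup(lambda: calls.append("ran"), rng=lambda: 0.5)
```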
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/hedge.py`:
- Around line 387-391: The check that handles too few markets currently calls
log.success() while returning exit code 1; update this validation branch to call
log.failure() instead, preserving the details payload (call
log.set_details({"markets_count": len(markets), "error": "Need at least 2
markets"}) first) so the failure is recorded consistently for the markets
validation in the function that contains this block; leave the print and return
1 behavior unchanged.
In `@scripts/markets.py`:
- Around line 77-80: In the block that handles empty search results (the if not
markets: branch that calls log.success() and prints f"No markets found for:
{args.query}"), change the exit code from return 1 to return 0 so an empty but
successful search returns a success status; locate and update the return
statement in that branch (keep the log.success() and print intact).
In `@scripts/positions.py`:
- Line 193: Replace the timezone-naive timestamp with a timezone-aware UTC
timestamp: add timezone to the datetime imports (e.g., ensure from datetime
import datetime, timezone is present) and change the assignment that sets
entry_time (in scripts/positions.py) from datetime.utcnow() to
datetime.now(timezone.utc) so the record is UTC-aware consistent with
lib/action_logger.py.
---
Duplicate comments:
In `@scripts/actions.py`:
- Around line 97-113: The cmd_files function currently ignores the --json flag;
update cmd_files to check args.json and, if true, build a JSON array (using
json.dumps) of objects with e.g. {"name": f.stem, "count": <line count>}
computed by iterating over log_files (use get_logs_dir() and f.stem as already
used) and print the JSON to stdout; otherwise preserve the existing
human-readable output, and ensure you import json at top if not present and keep
return value behavior the same.
---
Nitpick comments:
In `@lib/action_logger.py`:
- Around line 158-161: The probabilistic cleanup currently uses
hash(entry.timestamp) % 100 == 0 which can cluster due to timestamp patterns;
replace that check with a uniform random-based trigger such as using the random
module (e.g., random.random() < 0.01 or random.randint(0,99) == 0) and ensure
the module is imported; update the conditional around cleanup_old_logs() (the
block referencing entry.timestamp and calling cleanup_old_logs()) to use the new
random check.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 60c6d349-13e9-4f7b-8743-9a9a2bcbc905
📒 Files selected for processing (6)
- lib/action_logger.py
- scripts/actions.py
- scripts/hedge.py
- scripts/markets.py
- scripts/positions.py
- scripts/trade.py
- Implement JSON output for cmd_files in actions.py
- Use log.failure() instead of log.success() when markets < 2 in hedge.py
- Return 0 instead of 1 for empty search results in markets.py
- Replace deprecated datetime.utcnow() with datetime.now(timezone.utc) in positions.py

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🧹 Nitpick comments (3)
scripts/positions.py (1)
264-270: Logging cancellation as success may be misleading for audit analysis. When the user aborts the deletion, `log.success()` is called with `{"cancelled": True}`. While technically the operation didn't error, audit log consumers may expect cancelled operations to use a different status. Consider whether a dedicated `log.cancelled()` method or a `log.success(cancelled=True)` pattern would be clearer.

This is a minor concern: the current implementation is functional and the `cancelled` flag provides the needed information.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/positions.py` around lines 264 - 270, The audit log marks a user-aborted deletion as success; change the logging to a distinct cancelled status: replace the log.success() call in the abort branch (the block that checks args.force, reads confirm via input and uses pos['position_id'][:12]) with a dedicated cancellation log call (e.g. log.cancelled()) while preserving log.set_details({"cancelled": True}); if your logger has no log.cancelled(), instead call log.success(cancelled=True) or otherwise emit a non-success status so the cancellation is unambiguous to audit consumers.scripts/markets.py (1)
88-93: Minor inconsistency: hardcoded truncation vs. variable. Unlike `cmd_trending`, which uses the `trunc` variable, `cmd_search` hardcodes the truncation length (60). Consider using the same pattern for consistency:

```diff
+    trunc = 0 if args.full else 60
     # Terminal output: truncate unless --full
     print(f"{'ID':<12} {'YES':>6} {'NO':>6} {'24h Vol':>10} {'Question'}")
     print("-" * 80)
     for m in markets:
-        question = m.question if args.full else (m.question[:60] + "..." if len(m.question) > 60 else m.question)
+        question = m.question if trunc == 0 else (m.question[:trunc] + "..." if len(m.question) > trunc else m.question)
```

This is purely stylistic: the current code works correctly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/markets.py` around lines 88 - 93, Replace the hardcoded 60-character truncation in the market-print loop with the existing trunc variable used by cmd_trending: in the block where you iterate over markets (the for m in markets loop) change the truncation expression that uses m.question[:60] to use m.question[:trunc] and ensure the trunc variable is defined/initialized (based on args or defaults) in the same scope (and respect args.full as currently done).scripts/hedge.py (1)
458-465: Consider narrowing the exception type for market fetching. The broad `Exception` catch (flagged by Ruff BLE001) handles all errors from `gamma.get_market()`. While this works, it could mask unexpected errors. If `GammaClient` raises specific exceptions for API/network errors, consider catching those instead. However, given the external API nature and the consistent pattern across the PR, this is acceptable as-is.

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/hedge.py` around lines 458 - 465, The try/except around gamma.get_market(...) is catching broad Exception which can mask unrelated errors; update the except to catch the specific exceptions the Gamma client raises (e.g., GammaAPIError, GammaNetworkError, asyncio.TimeoutError or the library-specific exception types) and handle them by calling log.failure(...) / printing the error and returning 1 (same behavior as current), leaving other unexpected exceptions to propagate; if the client has no documented specific exceptions, add a comment explaining why Exception is used and prefer catching asyncio.TimeoutError and network-related errors around gamma.get_market in the try block referencing gamma.get_market, log.failure, print(...) and return 1.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@scripts/hedge.py`:
- Around line 458-465: The try/except around gamma.get_market(...) is catching
broad Exception which can mask unrelated errors; update the except to catch the
specific exceptions the Gamma client raises (e.g., GammaAPIError,
GammaNetworkError, asyncio.TimeoutError or the library-specific exception types)
and handle them by calling log.failure(...) / printing the error and returning 1
(same behavior as current), leaving other unexpected exceptions to propagate; if
the client has no documented specific exceptions, add a comment explaining why
Exception is used and prefer catching asyncio.TimeoutError and network-related
errors around gamma.get_market in the try block referencing gamma.get_market,
log.failure, print(...) and return 1.
In `@scripts/markets.py`:
- Around line 88-93: Replace the hardcoded 60-character truncation in the
market-print loop with the existing trunc variable used by cmd_trending: in the
block where you iterate over markets (the for m in markets loop) change the
truncation expression that uses m.question[:60] to use m.question[:trunc] and
ensure the trunc variable is defined/initialized (based on args or defaults) in
the same scope (and respect args.full as currently done).
In `@scripts/positions.py`:
- Around line 264-270: The audit log marks a user-aborted deletion as success;
change the logging to a distinct cancelled status: replace the log.success()
call in the abort branch (the block that checks args.force, reads confirm via
input and uses pos['position_id'][:12]) with a dedicated cancellation log call
(e.g. log.cancelled()) while preserving log.set_details({"cancelled": True}); if
your logger has no log.cancelled(), instead call log.success(cancelled=True) or
otherwise emit a non-success status so the cancellation is unambiguous to audit
consumers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ae210be0-419b-46df-9498-b5c360d15f9d
📒 Files selected for processing (4)
- scripts/actions.py
- scripts/hedge.py
- scripts/markets.py
- scripts/positions.py
🚧 Files skipped from review as they are similar to previous changes (1)
- scripts/actions.py
- hedge.py: Catch specific exceptions (TimeoutError, HTTPStatusError, RequestError) instead of broad Exception
- markets.py: Use trunc variable consistently in cmd_search instead of hardcoded 60
- positions.py: Use log.cancelled() for user-aborted deletion instead of log.success()
- action_logger.py: Add cancelled() method to ActionLogger for cancelled operations

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
♻️ Duplicate comments (2)
scripts/hedge.py (1)
466-466: ⚠️ Potential issue | 🟡 Minor

Remove unnecessary f-string in timeout message.
Line 466 uses an f-string without placeholders.

Proposed fix:

```diff
- print(f"Error fetching markets: Timeout")
+ print("Error fetching markets: Timeout")
```

```bash
#!/bin/bash
# Quick check for placeholder-less f-strings in this file.
rg -nP 'f"[^{}]*"' scripts/hedge.py
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/hedge.py` at line 466: the print call uses an unnecessary f-string. Locate the statement print(f"Error fetching markets: Timeout") in scripts/hedge.py (within the market-fetching/error-handling section) and change it to a plain string print("Error fetching markets: Timeout") by removing the leading f so there are no unused f-strings; keep the message identical otherwise.

lib/action_logger.py (1)
58-74: ⚠️ Potential issue | 🟠 Major

Sanitization still misses wallet/transaction identifiers.
Line 59 only checks private/secret/password/api_key, so fields like `address`, `tx_hash`, and `clob_order_id` can still be logged in clear text.

Proposed fix:

```diff
- sensitive_substrings = ["private", "secret", "password", "api_key"]
+ sensitive_substrings = [
+     "private", "secret", "password", "api_key",
+     "address", "tx_hash", "order_id", "clob_order_id",
+ ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/action_logger.py` around lines 58 - 74, The sanitization misses wallet/transaction identifiers because sensitive_substrings only contains ["private","secret","password","api_key"]; update the check in action_logger.py by expanding the sensitive_substrings list (the variable sensitive_substrings) to include common wallet/tx field names such as "address", "tx_hash", "transaction", "tx", "clob_order_id", "wallet", and "nonce" (keep it lowercase) so key_lower matching will catch them; keep the existing masking logic (value.startswith("0x") / "sk-" / default) and ensure you still only mask when isinstance(value, str) and len(value) > 10 so the masking behavior for hex/prefix cases remains consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/action_logger.py`:
- Around line 228-234: The call to log_action(action=self.action,
params=self.params, result=self.result, details=self.details,
duration_ms=duration_ms) can raise (I/O/JSON) exceptions and must not break
command execution; wrap this call in a try/except that catches Exception, logs
the logging failure to a safe fallback logger (including the exception message
and context: action, params or duration_ms) and then continues without
re-raising so the original command flow is preserved. Ensure you only suppress
exceptions from log_action and do not swallow unrelated exceptions; keep the
existing variables (action, params, result, details, duration_ms) passed to
log_action unchanged.
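One way to guard the exit-time call, sketched with a stand-in context manager: `log_fn` substitutes for the real `log_action`, and the stdlib `logging` module serves as the safe fallback logger. This mirrors the described behavior under those assumptions rather than reproducing the module's code.

```python
import logging
import time

_fallback = logging.getLogger("action_logger.fallback")

class LoggedAction:
    """Context manager whose exit-time logging can never break the command."""

    def __init__(self, action, params, log_fn):
        self.action = action
        self.params = params
        self._log_fn = log_fn  # stands in for log_action
        self.result = "success"
        self.details = {}

    def __enter__(self):
        self._start = time.monotonic()
        return self

    def __exit__(self, exc_type, exc, tb):
        duration_ms = int((time.monotonic() - self._start) * 1000)
        try:
            self._log_fn(action=self.action, params=self.params,
                         result=self.result, details=self.details,
                         duration_ms=duration_ms)
        except Exception as log_err:
            # Suppress only the logging failure; the command's own exception
            # (if any) still propagates because we return False below.
            _fallback.warning("log_action failed for %s: %s",
                              self.action, log_err)
        return False
```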
In `@scripts/markets.py`:
- Around line 102-119: The current broad except Exception around the market
lookup (the block using args.market_id and calls to client.get_market_by_slug
and client.get_market) masks fatal bugs; narrow it to catch expected
client/network errors only: catch httpx.TimeoutException and httpx.HTTPError
(import httpx) plus your client library's specific error class (e.g.,
ClientError or ApiError if available) and handle those by logging and returning
1, while letting other exceptions propagate (or re-raise) so non-recoverable
bugs aren't swallowed.
---
Duplicate comments:
In `@lib/action_logger.py`:
- Around line 58-74: The sanitization misses wallet/transaction identifiers
because sensitive_substrings only contains
["private","secret","password","api_key"]; update the check in action_logger.py
by expanding the sensitive_substrings list (the variable sensitive_substrings)
to include common wallet/tx field names such as "address", "tx_hash",
"transaction", "tx", "clob_order_id", "wallet", and "nonce" (keep it lowercase)
so key_lower matching will catch them; keep the existing masking logic
(value.startswith("0x") / "sk-" / default) and ensure you still only mask when
isinstance(value, str) and len(value) > 10 so the masking behavior for
hex/prefix cases remains consistent.
In `@scripts/hedge.py`:
- Line 466: The print call uses an unnecessary f-string: locate the print
statement that reads print(f"Error fetching markets: Timeout") in
scripts/hedge.py (within the market-fetching/error-handling section) and change
it to a plain string print("Error fetching markets: Timeout") by removing the
leading f so there are no unused f-strings; keep the message identical
otherwise.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d2412941-1f39-4d3c-8fe6-10fee5ee115b
📒 Files selected for processing (4)
- lib/action_logger.py
- scripts/hedge.py
- scripts/markets.py
- scripts/positions.py
Inline comments:
- action_logger.py: Wrap log_action call in try/except to prevent logging failures from breaking commands
- markets.py: Catch specific exceptions (TimeoutError, HTTPStatusError, RequestError) instead of broad Exception

Duplicate comments:
- action_logger.py: Expand sensitive_substrings to include wallet/tx identifiers (address, tx_hash, transaction, tx, clob_order_id, wallet, nonce)
- hedge.py: Remove unnecessary f-string from print statement

Nitpick comments:
- action_logger.py: Extract common storage path logic, update position_storage.py to import from action_logger
- action_logger.py: Use random.random() instead of hash-based cleanup trigger
- action_logger.py: Make get_recent_actions iterate backwards day-by-day with configurable max lookback

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
Add a comprehensive action logging system for audit and debugging purposes.
- New `lib/action_logger.py` module with thread-safe JSONL logging
- New `polyclaw actions` command to view logs

Changes

New Files
- `lib/action_logger.py` - Core logging module
- `scripts/actions.py` - CLI viewer for logs

Modified Files
- `scripts/markets.py` - Added logging to trending, search, details, events
- `scripts/wallet.py` - Added logging to status, approve
- `scripts/trade.py` - Added logging to buy
- `scripts/positions.py` - Added logging to add, close, delete
- `scripts/hedge.py` - Added logging to scan, analyze
- `scripts/polyclaw.py` - Added `actions` command routing

Usage
Log Format
Logs stored at `~/.openclaw/polyclaw/logs/actions-YYYY-MM-DD.jsonl`:

```jsonl
{"timestamp":"2026-03-12T10:30:00Z","action":"markets.trending","params":{"limit":20},"result":"success","details":{"count":20},"duration_ms":1523}
{"timestamp":"2026-03-12T10:31:00Z","action":"trade.buy","params":{"market_id":"abc123","position":"YES","amount":50},"result":"success","details":{"split_tx":"0x...","entry_price":0.73},"duration_ms":8500}
```

Test Plan
- `polyclaw actions` command works

🤖 Generated with Claude Code
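For reference, JSONL entries in the format shown above can be read back with a short helper. `read_actions` is hypothetical and not part of the PR (the real viewer is the `polyclaw actions` command and the module's query functions); it only assumes one JSON object per line.

```python
import json
from pathlib import Path

def read_actions(log_dir, date_str):
    """Return all entries from actions-<date>.jsonl, oldest first."""
    path = Path(log_dir) / f"actions-{date_str}.jsonl"
    if not path.exists():
        return []  # no log file for that day
    with path.open() as f:
        return [json.loads(line) for line in f if line.strip()]
```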
Summary by CodeRabbit
New Features
Refactor