
Conversation

@richiemcilroy (Member) commented Aug 25, 2025

This pull request adds new functionality for users to send diagnostic logs and recordings to the Cap team, improving the feedback and troubleshooting process. It introduces new backend endpoints and UI components to collect and submit system information, logs, and recordings, and updates dependencies to support these features.

Summary by CodeRabbit

  • New Features

    • Added “Send Diagnostic Logs” and “Send Recording” in Settings > Feedback, with automatic system info collection.
    • Recording selector with thumbnails and recent items; compresses and uploads selected recording.
    • More detailed status messages for feedback, logs, and recording submissions.
  • Chores

    • Backend endpoints added to receive diagnostics, recordings, and enriched feedback.
    • Improved reliability of feedback submission via server handling.
  • Security

    • Session cookies are now always secure, enhancing protection of signed-in sessions.

Introduces Rust backend commands to gather system info, recent logs, and compress recordings for diagnostics. Updates the feedback UI to allow users to send logs and recordings to the Cap team, and adds a dedicated logs tab. Updates Tauri command typings to include new system info and log export commands.
Feedback submission now sends data as URL-encoded form and systemInfo as a JSON string. The API endpoint parses systemInfo from a string and no longer sends device info as Discord embeds. Also, Discord webhook environment variables are updated to use DISCORD_LOGS_WEBHOOK_URL for logs and feedback, and the server environment schema is updated accordingly.
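The URL-encoded submission described above can be sketched as follows; the field names (feedback, os, version, systemInfo) are assumed from this description and have not been verified against the actual client code:

```typescript
// Hypothetical sketch of the feedback payload: plain form fields, with
// systemInfo serialized to a JSON string for the API to parse server-side.
function buildFeedbackForm(
  feedback: string,
  os: string,
  version: string,
  systemInfo: Record<string, unknown>,
): URLSearchParams {
  const form = new URLSearchParams();
  form.append("feedback", feedback);
  form.append("os", os);
  form.append("version", version);
  // systemInfo travels as a JSON string; the endpoint parses it back out
  form.append("systemInfo", JSON.stringify(systemInfo));
  return form;
}
```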
coderabbitai bot (Contributor) commented Aug 25, 2025

Walkthrough

Adds system-info collection and packaging in Tauri (new module, commands, dependencies), desktop UI for submitting feedback, logs, and recordings, web API endpoints to receive and forward data (Discord, S3), environment variable for logs webhook, and contract updates. Also sets the session token cookie secure flag to always true.

Changes

Cohort / File(s), with a summary of changes:

  • Tauri backend: system info & zipping
    apps/desktop/src-tauri/Cargo.toml, apps/desktop/src-tauri/src/lib.rs, apps/desktop/src-tauri/src/system_info.rs
    Adds num_cpus, sysinfo, and zip dependencies; introduces a system_info module with structs and commands for system-info retrieval, recent logs, log-file reading, and recording-directory zipping; exposes the commands via Tauri.
  • Desktop Tauri utils
    apps/desktop/src/utils/tauri.ts
    Exports new types (SystemInfo, RecordingLog, LogsAndSystemInfo, LogFile, LastRecording, DisplayInfo); adds command wrappers getLogsAndSystemInfo, getLogFiles, getLastRecordingZip, and getRecordingZip; renames GeneralSettingsStore.customCursorCapture2 to custom_cursor_capture2.
  • Desktop feedback UI
    apps/desktop/src/routes/(window-chrome)/settings/feedback.tsx
    Switches feedback submission to a server POST; adds diagnostics UI and logic to submit logs and a selected recording; integrates the new Tauri commands; handles states, selection, and results.
  • Web API: desktop routes
    apps/web/app/api/desktop/[...route]/root.ts
    Adds POST /recording (S3 upload + Discord embed) and POST /logs (Discord, with optional file attachments); updates POST /feedback to accept optional systemInfo.
  • Env config
    packages/env/server.ts
    Adds optional DISCORD_LOGS_WEBHOOK_URL to the server env schema.
  • API contract
    packages/web-api-contract/src/desktop.ts
    Adds optional systemInfo to feedback; defines submitRecording and submitLogs endpoints with schemas and 200 responses.
  • Auth cookie setting
    packages/database/auth/auth-options.tsx
    Sets cookies.sessionToken.secure = true unconditionally.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor U as User
  participant UI as Desktop UI (FeedbackTab)
  participant TB as Tauri Backend (system_info)
  participant API as Web API (/desktop)
  participant DC as Discord Webhook
  participant S3 as S3 Bucket

  rect rgba(200,220,255,0.25)
  note over UI,TB: Submit Logs
  U->>UI: Click "Send Diagnostic Logs"
  UI->>TB: getLogsAndSystemInfo()
  TB-->>UI: { system_info, recent_logs, app_version }
  UI->>TB: getLogFiles(paths)
  TB-->>UI: [LogFile...]
  UI->>API: POST /desktop/logs {systemInfo, recentLogs, appVersion, logFiles?}
  API-->>DC: webhook (JSON or multipart with files)
  DC-->>API: 200 OK
  API-->>UI: { success, message }
  end

  rect rgba(200,255,200,0.25)
  note over UI,S3: Submit Recording
  U->>UI: Select recording and Send
  UI->>TB: getRecordingZip(recordingPath?)
  TB-->>UI: { name, content(base64 zip), size_mb }
  UI->>API: POST /desktop/recording {systemInfo, appVersion, recording}
  API->>S3: PutObject(zip)
  S3-->>API: 200 + signed URL
  API-->>DC: webhook embed with download link
  DC-->>API: 200 OK
  API-->>UI: { success, message }
  end

  rect rgba(255,240,200,0.25)
  note over UI,API: Feedback
  U->>UI: Submit Feedback
  UI->>API: POST /desktop/feedback (form: feedback, os, version, systemInfo?)
  API-->>DC: webhook message
  DC-->>API: 200 OK
  API-->>UI: { success, message }
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • Brendonovich

Poem

I gathered logs in burrows deep,
Zipped a recording for safe keep.
Hopped to the cloud with nimble grace,
Left breadcrumbs in a Discord place.
My whiskers twitch—diagnostics sing,
A hare’s report on everything. 🐇📦🪵




coderabbitai bot (Contributor) left a comment

Actionable comments posted: 8

🧹 Nitpick comments (18)
apps/desktop/src-tauri/Cargo.toml (1)

80-82: Sanity-check new native deps for binary size and platform quirks

All three crates look appropriate for diagnostics. Consider constraining features to keep the Tauri binary lean and avoid zlib linkage surprises on Windows:

  • zip: disable default features and enable only what you need (often "deflate").
  • sysinfo: if you only need system info (not processes), disable unnecessary features.

Example tweak:

-zip = "0.6"
+zip = { version = "0.6", default-features = false, features = ["deflate"] }

and, if applicable:

-sysinfo = "0.32.1"
+sysinfo = { version = "0.32.1", default-features = false }
packages/env/server.ts (1)

44-45: Validate webhook as a URL to fail fast on misconfig

Use a stricter zod schema for webhook format. Also consider aligning DISCORD_FEEDBACK_WEBHOOK_URL similarly for consistency.

Apply within this range:

-			DISCORD_LOGS_WEBHOOK_URL: z.string().optional(),
+			DISCORD_LOGS_WEBHOOK_URL: z.string().url().optional(),

(Optionally make the same change to DISCORD_FEEDBACK_WEBHOOK_URL on Line 43.)

packages/database/auth/auth-options.tsx (1)

33-36: Don’t ship debug logging in production

Hard-coded debug: true will spam logs (and can include sensitive info). Gate it by NODE_ENV.

-    debug: true,
+    debug: process.env.NODE_ENV !== "production",
packages/web-api-contract/src/desktop.ts (5)

57-73: DRY the SystemInfo shape across endpoints

The same object schema is repeated. Extract a shared zod object to avoid drift.

Introduce near the top:

const SystemInfoZ = z.object({
  os: z.string(),
  os_version: z.string(),
  arch: z.string(),
  cpu_cores: z.number(),
  memory_gb: z.number(),
  displays: z.array(z.object({ width: z.number(), height: z.number(), scale_factor: z.number() })),
  cameras: z.array(z.string()),
  microphones: z.array(z.string()),
});

Then here:

-        systemInfo: z.object({ ... }).optional(),
+        systemInfo: SystemInfoZ.optional(),

78-108: Apply the shared SystemInfoZ and add basic input hygiene for recording payload

Also consider bounding size_mb and clarifying content encoding.

-        systemInfo: z.object({ ... }),
+        systemInfo: SystemInfoZ,
         appVersion: z.string(),
         recording: z.object({
           name: z.string(),
-          content: z.string(),
-          size_mb: z.number(),
+          // base64-encoded zip or media
+          content: z.string().min(1),
+          size_mb: z.number().nonnegative().max(1024), // sanity cap: 1GB
         }),

109-150: Normalize log schema and reuse SystemInfoZ; avoid nullable+optional overlap

  • Prefer optional alone over nullable().optional() unless null is a meaningful value.
  • Bound durations to non-negative.
  • Reuse SystemInfoZ.
-        systemInfo: z.object({ ... }),
+        systemInfo: SystemInfoZ,
         recentLogs: z.array(
           z.object({
             id: z.string(),
             timestamp: z.string(),
-            duration_seconds: z.number().nullable(),
+            duration_seconds: z.number().nonnegative().nullable(),
             error: z.string().nullable(),
             log_content: z.string().nullable(),
-            log_file_path: z.string().nullable().optional(),
+            log_file_path: z.string().optional(),
           }),
         ),
         appVersion: z.string(),
         logFiles: z.array(
           z.object({
             name: z.string(),
-            content: z.string(),
+            content: z.string().min(1),
           }),
         ).optional(),

55-57: Version field naming is inconsistent across endpoints

submitFeedback uses version while submitRecording/submitLogs use appVersion. Consider standardizing to appVersion everywhere.

If changing API is acceptable in this PR, rename submitFeedback.version to appVersion (and update server/client handlers accordingly). Otherwise, at least document the difference to avoid confusion.

Also applies to: 98-99, 139-140


55-56: Feedback os union vs. systemInfo.os string — align types

Top-level os is a union ("macos" | "windows") while systemInfo.os is a free string. Prefer one source of truth (union) to improve validation.

  • Either change SystemInfoZ.os to z.enum(["macos","windows"]), or drop the top-level os in favor of systemInfo.os.
apps/desktop/src-tauri/src/system_info.rs (3)

358-406: Zip file permissions: use 0755 for directories, 0644 for files.

Applying 0755 to files makes them executable unnecessarily. Set directory and file permissions separately.

Apply:

-    zip.add_directory(&zip_path, *options)
+    // Directories: 0o755
+    zip.add_directory(&zip_path, options.clone().unix_permissions(0o755))
         .map_err(|e| format!("Failed to add directory to zip: {}", e))?;
@@
-    zip.start_file(&zip_path, *options)
+    // Files: 0o644
+    zip.start_file(&zip_path, options.clone().unix_permissions(0o644))
         .map_err(|e| format!("Failed to start file in zip: {}", e))?;

101-117: macOS display scale factor hardcoded to 1.0; compute actual scale.

On Retina, logical bounds and pixel dimensions differ. Use pixels_wide/pixels_high to derive scale.

Example:

-                        let bounds = display.bounds();
+                        let bounds = display.bounds();
+                        let pixels_wide = display.pixels_wide() as u32;
+                        let pixels_high = display.pixels_high() as u32;
+                        let scale_x = pixels_wide as f64 / bounds.size.width as f64;
+                        let scale_y = pixels_high as f64 / bounds.size.height as f64;
                         DisplayInfo {
                             width: bounds.size.width as u32,
                             height: bounds.size.height as u32,
-                            scale_factor: 1.0,
+                            scale_factor: scale_x.max(scale_y),
                         }

149-170: Optional: filter recordings by .cap to reduce noise.

get_recent_recording_logs currently includes all subdirectories. Align with other code paths by limiting to “.cap” directories.

-            if path.is_dir() {
+            if path.is_dir() && path.extension().and_then(|s| s.to_str()) == Some("cap") {
apps/desktop/src/routes/(window-chrome)/settings/feedback.tsx (4)

15-21: Don’t send “os” when it won’t validate (e.g., Linux).

The server zod schema accepts only “macos” | “windows”. plugin-os may return “linux”. Only append OS if it’s accepted to avoid a 400.

-  formData.append("os", ostype() as any);
+  const os = ostype() as string;
+  if (os === "macos" || os === "windows") formData.append("os", os);

63-67: Windows path join could be incorrect; build thumbnailPath with Tauri path APIs.

String concatenation with “/” can break on Windows. Use @tauri-apps/api/path.join to be safe.

-      const recentRecordings = result.slice(0, 10).map(([path, meta]) => ({
-        path,
-        name: meta.pretty_name,
-        thumbnailPath: `${path}/screenshots/display.jpg`,
-      }));
+      const recentRecordings = await Promise.all(
+        result.slice(0, 10).map(async ([p, meta]) => {
+          const thumbnailPath = await (await import("@tauri-apps/api/path")).join(p, "screenshots", "display.jpg");
+          return { path: p, name: meta.pretty_name, thumbnailPath };
+        }),
+      );

80-125: Nice UX flow for recording upload; consider additional guardrails.

  • If getRecordingZip returns a very large payload, show a size warning before sending.
  • Consider truncating recording name in UI to prevent layout overflow.

Would you like a quick helper to format MB sizes and truncate long names in the list?
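Such helpers could look like the following; both names (formatMb, truncateName) and the 32-character default are hypothetical suggestions, not part of the PR:

```typescript
// Format a size in MB, switching to GB above 1024 MB.
function formatMb(sizeMb: number): string {
  return sizeMb >= 1024
    ? `${(sizeMb / 1024).toFixed(1)} GB`
    : `${sizeMb.toFixed(1)} MB`;
}

// Truncate a recording name with an ellipsis to avoid layout overflow.
function truncateName(name: string, max = 32): string {
  return name.length <= max ? name : `${name.slice(0, max - 1)}…`;
}
```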


127-172: Logs workflow is solid; consider deduplicating log paths and limiting file count.

Before calling getLogFiles, unique the path list and cap to, say, last 5 to avoid large payloads.

-      const logFilePaths = logsAndInfo.recent_logs
-        .filter((log) => log.log_file_path)
-        .map((log) => log.log_file_path as string);
+      const logFilePaths = Array.from(new Set(
+        logsAndInfo.recent_logs
+          .filter((log) => log.log_file_path)
+          .map((log) => log.log_file_path as string),
+      )).slice(0, 5);
apps/web/app/api/desktop/[...route]/root.ts (3)

60-71: Feedback: systemInfo is parsed but not used.

If collected on the client, consider including it in the Discord message (as an embed) to aid triage; otherwise drop it from the client to reduce noise.

Example:

- body: JSON.stringify({
-   content: messageContent,
- }),
+ body: JSON.stringify({
+   content: messageContent,
+   embeds: systemInfo
+     ? [{
+         title: "System Info",
+         color: 5814783,
+         fields: [
+           { name: "OS", value: `${systemInfo.os} ${systemInfo.os_version}`, inline: true },
+           { name: "Arch", value: systemInfo.arch, inline: true },
+           { name: "CPU", value: `${systemInfo.cpu_cores} cores`, inline: true },
+           { name: "Memory", value: `${systemInfo.memory_gb.toFixed(1)} GB`, inline: true },
+         ],
+       }]
+     : undefined,
+ }),

184-185: “valid for 7 days” may be inaccurate — derive TTL from provider or omit.

CloudFront/S3 signed URL TTL depends on provider config; don’t hardcode “7 days” unless guaranteed.

- value: `[Download Recording (valid for 7 days)](${downloadUrl})`,
+ value: `[Download Recording](${downloadUrl})`,

Please confirm the configured TTL of getSignedObjectUrl() for both CloudFront and plain S3 providers.


257-328: recentLogs is accepted but not surfaced in the Discord payload.

Consider summarizing the last N log IDs/timestamps/errors in an additional embed to provide quick triage context when no files are attached.
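One possible shape for that summary embed (the builder name, the 5-entry cap, and the log-entry shape are assumptions based on the contract schema above; the embed fields follow Discord's webhook embed format):

```typescript
// Minimal log entry shape, matching the recentLogs schema in the contract.
interface RecentLog {
  id: string;
  timestamp: string;
  error: string | null;
}

// Build a Discord embed summarizing the most recent log entries so a
// triager gets context even when no log files are attached.
function buildLogSummaryEmbed(recentLogs: RecentLog[], maxEntries = 5) {
  const entries = recentLogs.slice(0, maxEntries);
  return {
    title: `Recent Recordings (${entries.length} of ${recentLogs.length})`,
    color: 5814783,
    fields: entries.map((log) => ({
      name: log.id,
      value: log.error
        ? `${log.timestamp} | error: ${log.error}`
        : log.timestamp,
      inline: false,
    })),
  };
}
```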

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 55edfb2 and 3e89b36.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
  • apps/desktop/src-tauri/Cargo.toml (1 hunks)
  • apps/desktop/src-tauri/src/lib.rs (2 hunks)
  • apps/desktop/src-tauri/src/system_info.rs (1 hunks)
  • apps/desktop/src/routes/(window-chrome)/settings/feedback.tsx (1 hunks)
  • apps/desktop/src/utils/tauri.ts (2 hunks)
  • apps/web/app/api/desktop/[...route]/root.ts (3 hunks)
  • packages/database/auth/auth-options.tsx (1 hunks)
  • packages/env/server.ts (1 hunks)
  • packages/ui-solid/src/auto-imports.d.ts (1 hunks)
  • packages/web-api-contract/src/desktop.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (8)
apps/desktop/src-tauri/**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

Use tauri_specta on the Rust side for IPC (typed commands/events) and emit events via generated types

Files:

  • apps/desktop/src-tauri/src/lib.rs
  • apps/desktop/src-tauri/src/system_info.rs
{apps/desktop,packages/ui-solid}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Component naming (Solid): components in PascalCase; hooks/utilities in camelCase starting with 'use' where applicable

Files:

  • packages/ui-solid/src/auto-imports.d.ts
  • apps/desktop/src/routes/(window-chrome)/settings/feedback.tsx
  • apps/desktop/src/utils/tauri.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use strict TypeScript and avoid any; prefer shared types from packages

Files:

  • packages/ui-solid/src/auto-imports.d.ts
  • packages/database/auth/auth-options.tsx
  • packages/web-api-contract/src/desktop.ts
  • packages/env/server.ts
  • apps/desktop/src/routes/(window-chrome)/settings/feedback.tsx
  • apps/web/app/api/desktop/[...route]/root.ts
  • apps/desktop/src/utils/tauri.ts
apps/desktop/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

apps/desktop/**/*.{ts,tsx}: In the desktop app, rely on unplugin-icons auto-imports; do not manually import icon modules
Use generated tauri_specta commands/events (commands, events) in the desktop frontend; listen to generated events directly
Use @tanstack/solid-query for server state in the desktop app

Files:

  • apps/desktop/src/routes/(window-chrome)/settings/feedback.tsx
  • apps/desktop/src/utils/tauri.ts
apps/web/app/api/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

When HTTP routes are necessary, implement them under app/api/*, configure CORS correctly, and set precise revalidation

Files:

  • apps/web/app/api/desktop/[...route]/root.ts
apps/web/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for client-side server state and data fetching in the web app
Mutations should call Server Actions and perform precise cache updates with setQueryData/setQueriesData, avoiding broad invalidations
Prefer Server Components for initial data and pass initialData to client components for React Query hydration

Files:

  • apps/web/app/api/desktop/[...route]/root.ts
{apps/web,packages/ui}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

{apps/web,packages/ui}/**/*.{ts,tsx}: Use Tailwind CSS exclusively for styling in the web app and shared React UI components
Component naming: React components in PascalCase; hooks in camelCase starting with 'use'

Files:

  • apps/web/app/api/desktop/[...route]/root.ts
**/tauri.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Never edit auto-generated IPC bindings file: tauri.ts

Files:

  • apps/desktop/src/utils/tauri.ts
🧠 Learnings (3)
📚 Learning: 2025-08-25T10:58:06.134Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-25T10:58:06.134Z
Learning: Applies to apps/desktop/**/*.{ts,tsx} : In the desktop app, rely on unplugin-icons auto-imports; do not manually import icon modules

Applied to files:

  • packages/ui-solid/src/auto-imports.d.ts
📚 Learning: 2025-08-25T10:58:06.134Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-25T10:58:06.134Z
Learning: Applies to apps/desktop/src-tauri/**/*.rs : Use tauri_specta on the Rust side for IPC (typed commands/events) and emit events via generated types

Applied to files:

  • apps/desktop/src-tauri/src/system_info.rs
📚 Learning: 2025-08-25T10:58:06.134Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-25T10:58:06.134Z
Learning: Applies to apps/desktop/**/*.{ts,tsx} : Use generated tauri_specta commands/events (commands, events) in the desktop frontend; listen to generated events directly

Applied to files:

  • apps/desktop/src/utils/tauri.ts
🧬 Code graph analysis (6)
apps/desktop/src-tauri/src/lib.rs (1)
apps/desktop/src-tauri/src/system_info.rs (4)
  • get_logs_and_system_info (217-227)
  • get_log_files (237-268)
  • get_last_recording_zip (354-356)
  • get_recording_zip (279-350)
packages/database/auth/auth-options.tsx (8)
packages/database/auth/drizzle-adapter.ts (1)
  • DrizzleAdapter (8-254)
packages/database/index.ts (1)
  • db (30-35)
packages/env/server.ts (1)
  • serverEnv (81-85)
packages/database/emails/otp-email.tsx (1)
  • OTPEmail (16-66)
packages/database/emails/config.ts (1)
  • sendEmail (8-43)
packages/database/schema.ts (3)
  • users (45-87)
  • organizations (140-160)
  • organizationMembers (162-180)
packages/database/dub.ts (1)
  • dub (4-7)
packages/database/helpers.ts (1)
  • nanoId (6-9)
apps/desktop/src-tauri/src/system_info.rs (2)
apps/desktop/src-tauri/src/lib.rs (7)
  • std (1521-1539)
  • std (1563-1585)
  • list_audio_devices (1104-1113)
  • app (1206-1207)
  • app (2226-2226)
  • app (2255-2255)
  • None (2455-2455)
apps/desktop/src/utils/tauri.ts (6)
  • SystemInfo (452-452)
  • DisplayInfo (369-369)
  • RecordingLog (428-428)
  • LogsAndSystemInfo (404-404)
  • LogFile (400-400)
  • LastRecording (397-397)
apps/desktop/src/routes/(window-chrome)/settings/feedback.tsx (3)
apps/desktop/src/utils/tauri.ts (1)
  • commands (7-272)
apps/desktop/src/utils/web-api.ts (2)
  • protectedHeaders (61-65)
  • apiClient (34-37)
apps/desktop/src/utils/env.ts (1)
  • clientEnv (1-3)
apps/web/app/api/desktop/[...route]/root.ts (3)
packages/env/server.ts (1)
  • serverEnv (81-85)
apps/web/utils/s3.ts (1)
  • createBucketProvider (372-395)
packages/database/helpers.ts (1)
  • nanoId (6-9)
apps/desktop/src/utils/tauri.ts (6)
apps/desktop/src-tauri/src/windows.rs (1)
  • id (629-655)
crates/displays/src/lib.rs (4)
  • id (28-30)
  • id (118-120)
  • name (40-42)
  • name (150-152)
crates/displays/src/platform/macos.rs (4)
  • id (44-44)
  • id (288-290)
  • name (123-142)
  • name (320-332)
crates/displays/src/platform/win.rs (5)
  • id (129-129)
  • id (801-803)
  • name (218-255)
  • name (1304-1324)
  • name (1316-1321)
crates/camera-windows/src/lib.rs (2)
  • id (49-51)
  • name (53-55)
crates/camera-mediafoundation/src/lib.rs (2)
  • id (145-157)
  • name (135-143)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (rust)
🔇 Additional comments (10)
packages/env/server.ts (1)

70-76: No client-side exposure detected — resolving

  • A repo-wide search for DISCORD_LOGS_WEBHOOK_URL only returned definitions in
    • packages/env/server.ts (around line 44)
    • server‐side API routes under apps/web/app/api/desktop/[...route]/root.ts (around lines 141 and 260)
      No matches were found in any client components or pages.
  • The experimental__runtimeEnv block exists solely in packages/env/server.ts (lines 70–76) and isn’t referenced in apps/web/next.config.mjs or any other client-side code.

Since these variables remain confined to the server bundle and aren’t wired into Next.js’s client build, there’s no risk of leaking DISCORD_LOGS_WEBHOOK_URL (or any other server-only env) to the browser.

packages/database/auth/auth-options.tsx (2)

113-123: SameSite=None + Secure=true has dev implications

Always-secure session cookies are good for production and required for SameSite=None, but this will break local HTTP dev flows. If that’s intentional for desktop/embedded use, ignore; otherwise, consider env-gating.

Proposed tweak:

-          sameSite: "none",
+          sameSite: "none",
           path: "/",
-          secure: true,
+          secure: process.env.NODE_ENV === "production",

125-163: Verify cookie deletion in NextAuth signIn event

We couldn’t find any existing tests or upstream usages that confirm cookies().delete("dub_id") (and dub_partner_data) inside the events.signIn handler actually emit Set-Cookie headers on the auth callback response. Please:

  • Confirm that invoking cookies().delete("dub_id") and cookies().delete("dub_partner_data") within your signIn event truly results in the appropriate Set-Cookie: dub_id=; Max-Age=0 (etc.) headers on the /api/auth/callback response.
  • Add or update an integration test against your NextAuth /api/auth/[...nextauth] handler to assert the cookie(s) are cleared after sign-in.
  • If this proves flaky or unsupported (due to request-scoped Next.js cookies in an API route), consider moving the cleanup into a NextAuth callback (e.g. callbacks.session or callbacks.redirect) where you have direct access to the Response object and can call res.setHeader('Set-Cookie', ...).
packages/ui-solid/src/auto-imports.d.ts (1)

7-59: Formatting-only changes; consistent with auto-import expectations

No semantic changes; declarations remain intact. This aligns with our convention to rely on unplugin-icons auto-imports in desktop/UI code.

apps/desktop/src/utils/tauri.ts (3)

2-2: Do not hand-edit this file; confirm it was generated by tauri-specta.

Per our desktop guidelines, tauri.ts is generated and shouldn’t be modified manually. The header already states this; just confirming the changes came from a specta export tied to the Rust surface in this PR.

If there was any manual tweaking, please re-run the specta export from apps/desktop/src-tauri/src/lib.rs to regenerate this file.


260-271: New commands are correctly exposed and typed.

The additions (getLogsAndSystemInfo, getLogFiles, getLastRecordingZip, getRecordingZip) match the Rust commands/types in system_info.rs and are used from the feedback UI. Looks consistent with the retrieved learnings to “use generated tauri_specta commands/events.”


277-319: Events mapping remains type-safe and aligns with generated events.

The proxy-based makeEvents usage and the updated event list look good. No runtime behavior concerns identified.

apps/desktop/src-tauri/src/system_info.rs (1)

76-81: Confirm sysinfo memory units to avoid over/under-reporting memory_gb.

sysinfo’s total_memory unit changed in recent releases; verify that 0.32.1 returns bytes (vs KiB in older versions). If it returns bytes, your division by 1024^3 is correct. If not, adjust accordingly.

Please confirm by checking the exact sysinfo version in apps/desktop/src-tauri/Cargo.toml and its docs. If needed, I can provide a quick script to print sysinfo::System::total_memory on dev machines.
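A quick way to make the unit assumption explicit is to keep both conversions side by side; newer sysinfo releases return bytes from total_memory() while older ones returned KiB (the helper names here are illustrative, not from the PR):

```rust
// If total_memory() returns bytes, divide by 1024^3 to get GB.
fn bytes_to_gb(total: u64) -> f64 {
    total as f64 / (1024.0 * 1024.0 * 1024.0)
}

// If total_memory() returns KiB (older sysinfo), divide by 1024^2 instead.
fn kib_to_gb(total: u64) -> f64 {
    total as f64 / (1024.0 * 1024.0)
}

fn main() {
    // A 16 GiB machine reported in each convention:
    println!("{:.1} GB from bytes", bytes_to_gb(16 * 1024 * 1024 * 1024));
    println!("{:.1} GB from KiB", kib_to_gb(16 * 1024 * 1024));
}
```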

apps/desktop/src-tauri/src/lib.rs (2)

20-20: Module inclusion looks correct.

Adding mod system_info cleanly scopes the new commands/types and matches the specta export.


1976-1980: New commands are registered with specta; aligns with frontend bindings.

get_logs_and_system_info, get_log_files, get_last_recording_zip, and get_recording_zip are now exported to the TS bindings; consistent with apps/desktop/src/utils/tauri.ts.

Comment on lines 235 to 268
#[tauri::command]
#[specta::specta]
pub async fn get_log_files(paths: Vec<String>) -> Result<Vec<LogFile>, String> {
    let mut log_files = vec![];

    for path_str in paths {
        let path = PathBuf::from(&path_str);
        if path.exists() {
            if let Ok(content) = fs::read(&path) {
                let base64_content = base64::prelude::BASE64_STANDARD.encode(&content);
                let file_name = path
                    .file_name()
                    .and_then(|n| n.to_str())
                    .unwrap_or("recording.log")
                    .to_string();

                let parent_dir_name = path
                    .parent()
                    .and_then(|p| p.file_name())
                    .and_then(|n| n.to_str())
                    .unwrap_or("unknown");

                let full_name = format!("{}/{}", parent_dir_name, file_name);

                log_files.push(LogFile {
                    name: full_name,
                    content: base64_content,
                });
            }
        }
    }

    Ok(log_files)
}

⚠️ Potential issue

get_log_files accepts arbitrary filesystem paths — restrict scope and size to avoid exfiltration/DoS.

Frontend-controlled absolute paths can read and upload any file the app’s process can access. Add path whitelisting (only under app data recordings/screenshots), max file size, and explicit allowed filenames. Also prefer using canonicalize() before checking ancestry.

Apply a minimal hardening:

 #[tauri::command]
 #[specta::specta]
 pub async fn get_log_files(paths: Vec<String>) -> Result<Vec<LogFile>, String> {
-    let mut log_files = vec![];
+    let mut log_files = vec![];
+    // Limit number of files to mitigate abuse
+    let paths = paths.into_iter().take(8);
+
+    // Compute allowed roots
+    let allowed_roots = |app: &AppHandle| -> Result<Vec<PathBuf>, String> {
+        let base = app.path().app_data_dir().map_err(|e| e.to_string())?;
+        Ok(vec![base.join("recordings"), base.join("screenshots")])
+    };
+    // NOTE: If AppHandle not available here, pass it in or read from a cached path.

-    for path_str in paths {
-        let path = PathBuf::from(&path_str);
+    for path_str in paths {
+        let path = PathBuf::from(&path_str);
         if path.exists() {
-            if let Ok(content) = fs::read(&path) {
+            // Whitelist: ensure file is under one of our allowed roots
+            let Ok(real) = path.canonicalize() else { continue };
+            // TODO: plumb an AppHandle and use allowed_roots(&app)?
+            // Fallback: reject if path contains suspicious segments
+            if real.components().any(|c| matches!(c, std::path::Component::ParentDir)) {
+                continue;
+            }
+
+            // Limit size to e.g. 5 MiB per file
+            let size = fs::metadata(&real).map(|m| m.len()).unwrap_or(0);
+            if size > 5 * 1024 * 1024 {
+                continue;
+            }
+
+            if let Ok(content) = fs::read(&real) {
                 let base64_content = base64::prelude::BASE64_STANDARD.encode(&content);
                 let file_name = path
                     .file_name()
                     .and_then(|n| n.to_str())
                     .unwrap_or("recording.log")
                     .to_string();

If you can pass AppHandle into this command, replace the naive checks with a canonical ancestry check against app_data_dir()/recordings and screenshots.

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 214 to 221
<Button
  type="submit"
  size="md"
  disabled={!feedback().trim() || feedback().trim().length < 0}
  class="mt-2 bg-primary text-primary"
>
  {submission.pending ? "Submitting..." : "Submit Feedback"}
</Button>

⚠️ Potential issue

Submit button disabled condition is wrong; length < 0 is always false.

It currently disables only on empty feedback; mismatch with minLength={10}. Make the check consistent.

Apply:

-                disabled={!feedback().trim() || feedback().trim().length < 0}
+                disabled={!feedback().trim() || feedback().trim().length < 10}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
<Button
  type="submit"
  size="md"
  disabled={!feedback().trim() || feedback().trim().length < 0}
  class="mt-2 bg-primary text-primary"
>
  {submission.pending ? "Submitting..." : "Submit Feedback"}
</Button>
<Button
  type="submit"
  size="md"
  disabled={!feedback().trim() || feedback().trim().length < 10}
  class="mt-2 bg-primary text-primary"
>
  {submission.pending ? "Submitting..." : "Submit Feedback"}
</Button>
🤖 Prompt for AI Agents
In apps/desktop/src/routes/(window-chrome)/settings/feedback.tsx around lines
214 to 221, the disabled condition on the Submit button uses
feedback().trim().length < 0 which is always false and therefore doesn't enforce
the actual minimum length; update the condition to require the trimmed feedback
length be at least the component's minLength (10). Replace the current check
with one that computes the trimmed length and disables when it's zero or less
than 10 (e.g., disabled when feedback().trim().length < 10), keeping the rest of
the button props and the pending label logic unchanged.

Comment on lines 133 to 140
const buffer = Buffer.from(recording.content, "base64");

await bucket.putObject(recordingKey, buffer, {
  contentType: "application/zip",
});

const downloadUrl = await bucket.getSignedObjectUrl(recordingKey);


🛠️ Refactor suggestion

Recording upload via JSON base64 risks memory pressure; prefer pre-signed direct uploads.

Decoding large base64 payloads in the API process is expensive and doubles memory usage. Issue a pre-signed URL (or multipart S3 upload) to the desktop client, then only post metadata to this endpoint.

If keeping current flow, enforce a strict request size limit and reject when buffer.length exceeds a threshold (e.g., 500 MB) before putObject.
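As a sketch of that fallback guard (the helper names and the 500 MB threshold are illustrative, not from the PR), the decoded size can be bounded before any Buffer.from call by inspecting the base64 string itself:

```typescript
// Hypothetical size guard for a client-supplied base64 recording payload.
// 4 base64 characters decode to 3 bytes; '=' padding trims the final group.
const MAX_RECORDING_BYTES = 500 * 1024 * 1024; // example 500 MB cap

function decodedByteLength(base64: string): number {
  const padding = base64.endsWith("==") ? 2 : base64.endsWith("=") ? 1 : 0;
  return Math.floor((base64.length * 3) / 4) - padding;
}

// Returns an HTTP status to reject with, or null if the payload is acceptable.
function rejectStatus(base64: string): number | null {
  return decodedByteLength(base64) > MAX_RECORDING_BYTES ? 413 : null;
}
```

Because the check runs on the string length alone, oversized payloads can be rejected before the decode allocates a second copy of the data.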

🤖 Prompt for AI Agents
In apps/web/app/api/desktop/[...route]/root.ts around lines 133 to 140, avoid
decoding large base64 blobs in the API process; instead implement a two-step
flow where the server returns a pre-signed upload URL (or multipart S3 upload
credentials) to the desktop client and the client uploads the ZIP directly, with
this endpoint only accepting metadata. If you must keep the current server-side
flow, enforce a strict request size limit by validating recording.content size
(or the decoded buffer length) and immediately reject requests whose decoded
buffer would exceed a safe threshold (e.g., 500 MB) before calling
bucket.putObject, returning a 413 Payload Too Large response.

Comment on lines 331 to 351
if (logFiles && logFiles.length > 0) {
  const formData = new FormData();

  formData.append("payload_json", JSON.stringify(formattedMessage));

  logFiles.forEach((file, index) => {
    const buffer = Buffer.from(file.content, "base64");
    const blob = new Blob([buffer], { type: "text/plain" });
    formData.append(`files[${index}]`, blob, file.name);
  });

  const response = await fetch(discordWebhookUrl, {
    method: "POST",
    body: formData,
  });

  if (!response.ok)
    throw new Error(
      `Failed to send logs to Discord: ${response.statusText}`,
    );
} else {

🛠️ Refactor suggestion

Logs: good use of multipart; add caps on file count and size.

Attackers could submit many/large files. Limit to N files (e.g., 10) and per-file size (e.g., 5–10 MB) before building FormData.

- logFiles.forEach((file, index) => {
+ logFiles.slice(0, 10).forEach((file, index) => {
   const buffer = Buffer.from(file.content, "base64");
+  if (buffer.length > 10 * 1024 * 1024) return; // skip oversized
   const blob = new Blob([buffer], { type: "text/plain" });
   formData.append(`files[${index}]`, blob, file.name);
 });
📝 Committable suggestion


Suggested change
if (logFiles && logFiles.length > 0) {
  const formData = new FormData();
  formData.append("payload_json", JSON.stringify(formattedMessage));
  logFiles.forEach((file, index) => {
    const buffer = Buffer.from(file.content, "base64");
    const blob = new Blob([buffer], { type: "text/plain" });
    formData.append(`files[${index}]`, blob, file.name);
  });
  const response = await fetch(discordWebhookUrl, {
    method: "POST",
    body: formData,
  });
  if (!response.ok)
    throw new Error(
      `Failed to send logs to Discord: ${response.statusText}`,
    );
} else {
if (logFiles && logFiles.length > 0) {
  const formData = new FormData();
  formData.append("payload_json", JSON.stringify(formattedMessage));
  // Limit to at most 10 files and skip any over 10 MB
  logFiles.slice(0, 10).forEach((file, index) => {
    const buffer = Buffer.from(file.content, "base64");
    if (buffer.length > 10 * 1024 * 1024) return; // skip oversized
    const blob = new Blob([buffer], { type: "text/plain" });
    formData.append(`files[${index}]`, blob, file.name);
  });
  const response = await fetch(discordWebhookUrl, {
    method: "POST",
    body: formData,
  });
  if (!response.ok)
    throw new Error(
      `Failed to send logs to Discord: ${response.statusText}`,
    );
} else {
🤖 Prompt for AI Agents
In apps/web/app/api/desktop/[...route]/root.ts around lines 331 to 351, the code
accepts and attaches all uploaded log files to the Discord multipart request
which allows attackers to submit many or very large files; enforce a maximum
file count (e.g., maxFiles = 10) and per-file size limit (e.g., maxFileBytes = 5
* 1024 * 1024) before creating FormData. Before building the FormData,
filter/slice logFiles to the first maxFiles entries and for each file decode or
measure its byte size and skip or truncate files larger than maxFileBytes (and
record which were skipped for telemetry/logging); continue to append only
allowed files and fail early with a clear error if no files remain or optionally
return a 4xx response when limits are exceeded.

Comment on lines 19 to 21
export const config = {
maxDuration: 120,
};

💡 Verification agent

🧩 Analysis chain

Next.js route config likely ineffective here

Exporting config.maxDuration in a library file doesn’t affect the auth route timeout. This likely needs to live in the route module (e.g., apps/web/app/api/auth/[...nextauth]/route.ts).

Quick check:


🏁 Script executed:

#!/bin/bash
# Verify the auth route exports a config with maxDuration
fd -a 'route.ts' apps | rg -n 'api/auth|\[\.{3}nextauth\]'
rg -n --type=ts 'export\s+const\s+config\s*=\s*{[^}]*maxDuration' apps

Length of output: 358


🏁 Script executed:

#!/bin/bash
# Inspect the auth route file for config and maxDuration
FILE="apps/web/app/api/auth/[...nextauth]/route.ts"

echo "Searching for config export in $FILE"
rg -n "export\\s+const\\s+config\\s*=" -t ts "$FILE" || echo "No config export found"

echo "Searching for maxDuration in $FILE"
rg -n "maxDuration" -t ts "$FILE" || echo "No maxDuration found"

Length of output: 641


Add route-specific config.maxDuration export

The export const config = { maxDuration: 120 } in packages/database/auth/auth-options.tsx does not influence Next.js API route timeouts. Next.js reads route-level configs only from the route module itself.

• In apps/web/app/api/auth/[...nextauth]/route.ts, add before your handler export:

export const config = {
  maxDuration: 120,
};

• Optionally remove or deprecate the duplicate export in
packages/database/auth/auth-options.tsx if it’s no longer needed.

🤖 Prompt for AI Agents
In packages/database/auth/auth-options.tsx around lines 19-21, the exported
config with maxDuration does not affect Next.js API route timeouts because
Next.js only reads route-level configs from the route module itself; add an
export const config = { maxDuration: 120 } at the top level of
apps/web/app/api/auth/[...nextauth]/route.ts (immediately before the handler
export) so the route honors the timeout, and optionally remove or mark the
duplicate export in packages/database/auth/auth-options.tsx if it’s no longer
needed.

Comment on lines 74 to 109
async generateVerificationToken() {
  return crypto.randomInt(100000, 1000000).toString();
},
async sendVerificationRequest({ identifier, token }) {
  console.log("sendVerificationRequest");

  if (!serverEnv().RESEND_API_KEY) {
    console.log("\n");
    console.log(
      "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    );
    console.log("🔐 VERIFICATION CODE (Development Mode)");
    console.log(
      "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    );
    console.log(`📧 Email: ${identifier}`);
    console.log(`🔢 Code: ${token}`);
    console.log(`⏱️ Expires in: 10 minutes`);
    console.log(
      "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    );
    console.log("\n");
  } else {
    console.log({ identifier, token });
    const { OTPEmail } = await import("../emails/otp-email");
    const email = OTPEmail({ code: token, email: identifier });
    console.log({ email });
    await sendEmail({
      email: identifier,
      subject: `Your Cap Verification Code`,
      react: email,
    });
  }
},
}),
];

⚠️ Potential issue

Remove OTP/PII logging; this leaks verification codes into logs

Currently logs email addresses and the one-time code in both dev and prod paths. That’s sensitive.

Harden with environment-guarded logging and drop the token from logs:

-          async sendVerificationRequest({ identifier, token }) {
-            console.log("sendVerificationRequest");
-
-            if (!serverEnv().RESEND_API_KEY) {
-              console.log("\n");
-              console.log(
-                "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
-              );
-              console.log("🔐 VERIFICATION CODE (Development Mode)");
-              console.log(
-                "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
-              );
-              console.log(`📧 Email: ${identifier}`);
-              console.log(`🔢 Code: ${token}`);
-              console.log(`⏱️  Expires in: 10 minutes`);
-              console.log(
-                "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
-              );
-              console.log("\n");
-            } else {
-              console.log({ identifier, token });
-              const { OTPEmail } = await import("../emails/otp-email");
-              const email = OTPEmail({ code: token, email: identifier });
-              console.log({ email });
-              await sendEmail({
+          async sendVerificationRequest({ identifier, token }) {
+            const isDev = process.env.NODE_ENV !== "production";
+            if (!serverEnv().RESEND_API_KEY) {
+              if (isDev) {
+                // Dev-only helper: surface the code locally without persisting PII in logs.
+                console.info(`[DEV OTP] ${identifier} -> ${token}`);
+              }
+              return;
+            } else {
+              const { OTPEmail } = await import("../emails/otp-email");
+              const email = OTPEmail({ code: token, email: identifier });
+              await sendEmail({
                 email: identifier,
                 subject: `Your Cap Verification Code`,
                 react: email,
               });
             }
           },
📝 Committable suggestion


Suggested change
async generateVerificationToken() {
  return crypto.randomInt(100000, 1000000).toString();
},
async sendVerificationRequest({ identifier, token }) {
  console.log("sendVerificationRequest");
  if (!serverEnv().RESEND_API_KEY) {
    console.log("\n");
    console.log(
      "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    );
    console.log("🔐 VERIFICATION CODE (Development Mode)");
    console.log(
      "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    );
    console.log(`📧 Email: ${identifier}`);
    console.log(`🔢 Code: ${token}`);
    console.log(`⏱️ Expires in: 10 minutes`);
    console.log(
      "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    );
    console.log("\n");
  } else {
    console.log({ identifier, token });
    const { OTPEmail } = await import("../emails/otp-email");
    const email = OTPEmail({ code: token, email: identifier });
    console.log({ email });
    await sendEmail({
      email: identifier,
      subject: `Your Cap Verification Code`,
      react: email,
    });
  }
},
}),
];
async sendVerificationRequest({ identifier, token }) {
  const isDev = process.env.NODE_ENV !== "production";
  if (!serverEnv().RESEND_API_KEY) {
    if (isDev) {
      // Dev-only helper: surface the code locally without persisting PII in logs.
      console.info(`[DEV OTP] ${identifier} -> ${token}`);
    }
    return;
  } else {
    const { OTPEmail } = await import("../emails/otp-email");
    const email = OTPEmail({ code: token, email: identifier });
    await sendEmail({
      email: identifier,
      subject: `Your Cap Verification Code`,
      react: email,
    });
  }
},
🤖 Prompt for AI Agents
In packages/database/auth/auth-options.tsx around lines 74 to 109, remove all
logging that prints the verification token and full email address (console.log
of identifier, token, and email/OTPEmail content) and replace with
non-sensitive, environment-guarded messages: only log in development using a
clearly marked DEV-only flag (e.g. process.env.NODE_ENV === 'development' or a
specific RESEND_DEV flag), and if you must show the recipient, mask the email
(e.g. a***@domain.com) or log a success/failure event without the token; ensure
the production path never logs the token or PII, remove any console.log of the
email object, and keep the sendEmail call and functionality unchanged.

@Brendonovich Brendonovich (Member) left a comment

Definitely need to use presigned URLs for uploading these as coderabbit suggested, zipping to a file first would also be smart, we can then stream bytes from the file to the upload.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (5)
apps/web/app/api/desktop/[...route]/root.ts (3)

95-123: Add hard limits and basic sanity checks to the recording schema.
Attackers can claim arbitrary size and send huge base64 strings. Enforce server-side caps and validate claimed size.

Apply:

   z.object({
     systemInfo: z.object({
       os: z.string(),
       os_version: z.string(),
       arch: z.string(),
       cpu_cores: z.number(),
       memory_gb: z.number(),
       displays: z.array(
         z.object({
           width: z.number(),
           height: z.number(),
           scale_factor: z.number(),
         }),
       ),
       cameras: z.array(z.string()),
       microphones: z.array(z.string()),
     }),
     appVersion: z.string(),
     recording: z.object({
       name: z.string(),
       content: z.string(),
-      size_mb: z.number(),
+      // Cap client-claimed size to defend early and provide clearer errors.
+      size_mb: z.number().positive().lte(500),
     }),
   }),

Outside this hunk, define caps once:

// near top-level imports
const MAX_RECORDING_MB = 500;
const MAX_RECORDING_BYTES = MAX_RECORDING_MB * 1024 * 1024;

133-141: Base64 decoding of large blobs in-process risks memory pressure and DoS; prefer presigned uploads or enforce strict size limits.
This was raised earlier; still applicable. At minimum, validate decoded length vs claimed size and reject > MAX bytes with 413.

Apply:

-      const buffer = Buffer.from(recording.content, "base64");
+      // Decode cautiously and enforce caps
+      const buffer = Buffer.from(recording.content, "base64");
+      if (buffer.length > MAX_RECORDING_BYTES) {
+        return c.json(
+          { error: `Recording too large. Max ${MAX_RECORDING_MB} MB.` },
+          { status: 413 },
+        );
+      }
+      // Sanity check: ensure client-claimed size is close to actual
+      const actualSizeMb = buffer.length / (1024 * 1024);
+      if (Math.abs(actualSizeMb - recording.size_mb) > Math.max(5, recording.size_mb * 0.1)) {
+        console.warn(
+          `[RECORDING] Claimed size ${recording.size_mb} MB vs actual ${actualSizeMb.toFixed(2)} MB`,
+        );
+      }

If feasible, replace this entire flow with a presigned S3 (or multipart) upload and have this endpoint receive only metadata.


330-339: Limit file count/size and avoid unbounded base64 decoding for log files.
Prevents abuse and OOM. Also attach recentLogs as logs.json. Previously noted—still unresolved.

Apply:

-      const formData = new FormData();
+      const formData = new FormData();
+      const MAX_FILES = 10;
+      const MAX_FILE_BYTES = 10 * 1024 * 1024; // 10 MB
       formData.append("payload_json", JSON.stringify(formattedMessage));
-
-      logFiles.forEach((file, index) => {
-        const buffer = Buffer.from(file.content, "base64");
-        const blob = new Blob([buffer], { type: "text/plain" });
-        formData.append(`files[${index}]`, blob, file.name);
-      });
+      // Always attach recentLogs consolidated as JSON for reliability
+      const logsJson = JSON.stringify(recentLogs, null, 2);
+      formData.append(
+        `files[0]`,
+        new Blob([logsJson], { type: "application/json" }),
+        "recentLogs.json",
+      );
+      // Append bounded set of additional files
+      (logFiles ?? []).slice(0, MAX_FILES).forEach((file, idx) => {
+        try {
+          const buffer = Buffer.from(file.content, "base64");
+          if (buffer.length > MAX_FILE_BYTES) return; // skip oversized
+          const blob = new Blob([buffer], { type: "text/plain" });
+          // +1 because files[0] is recentLogs.json
+          formData.append(`files[${idx + 1}]`, blob, file.name);
+        } catch (e) {
+          console.warn("Skipping invalid base64 log file:", file.name);
+        }
+      });
apps/desktop/src-tauri/src/system_info.rs (2)

243-274: get_log_files accepts arbitrary filesystem paths — restrict scope and size to avoid exfiltration/DoS.

Frontend-controlled absolute paths can read and upload any file the app's process can access. Add path whitelisting (only under app data recordings/screenshots), max file size, and explicit allowed filenames. Also prefer using canonicalize() before checking ancestry.


335-363: Zipping entire recordings into memory can OOM; prefer streaming to a temp file, enforce size caps.

For large recordings, collecting the zip in Vec and then base64-encoding doubles memory pressure. Stream to a temp file, check upper bounds (e.g., 500 MB), then map as base64 in chunks or read after size verification.

🧹 Nitpick comments (11)
apps/web/app/api/desktop/[...route]/root.ts (7)

25-56: systemInfo is parsed/validated but currently unused in the /feedback payload.
Either include selected fields in the Discord payload or drop the field from the schema to avoid dead inputs.

Apply this minimal change to start using systemInfo (paired with the message builder below):

 z.object({
   feedback: z.string(),
   os: z.union([z.literal("macos"), z.literal("windows")]).optional(),
   version: z.string().optional(),
   systemInfo: z
     .string()
     .optional()
     .transform((val) => {
       if (!val) return undefined;
       try {
         return JSON.parse(val);
       } catch {
         return undefined;
       }
     })
     .pipe(
       z
         .object({
           os: z.string(),
           os_version: z.string(),
           arch: z.string(),
           cpu_cores: z.number(),
           memory_gb: z.number(),
           displays: z.array(
             z.object({
               width: z.number(),
               height: z.number(),
               scale_factor: z.number(),
             }),
           ),
           cameras: z.array(z.string()),
           microphones: z.array(z.string()),
         })
         .optional(),
     ),
 }),

59-59: Include OS/systemInfo details or remove unused variable.
You destructure systemInfo but it isn’t used later in the handler.


66-77: Fold optional OS/version/systemInfo into the feedback message; avoid unused schema.
Concise addition that keeps JSON payload while enriching context.

Apply:

-      let messageContent = `New feedback from ${c.get("user").email}:\n${feedback}`;
+      let messageContent = `New feedback from ${c.get("user").email}:\n${feedback}`;
+      if (os) messageContent += `\nOS: ${os}`;
       if (version) {
         messageContent += `\nCap version: ${version}`;
       }
-
-      const response = await fetch(discordWebhookUrl, {
+      // If systemInfo is present, add a compact embed. Otherwise send plain content.
+      const payload: Record<string, unknown> = { content: messageContent };
+      if (systemInfo) {
+        payload.embeds = [{
+          title: "System Information",
+          color: 5814783,
+          fields: [
+            { name: "OS", value: `${systemInfo.os} ${systemInfo.os_version}`, inline: true },
+            { name: "Arch", value: systemInfo.arch, inline: true },
+            { name: "CPU", value: `${systemInfo.cpu_cores} cores`, inline: true },
+            { name: "Memory", value: `${systemInfo.memory_gb.toFixed(1)} GB`, inline: true },
+          ],
+        }];
+      }
+      const response = await fetch(discordWebhookUrl, {
         method: "POST",
         headers: { "Content-Type": "application/json" },
-        body: JSON.stringify({
-          content: messageContent,
-        }),
+        body: JSON.stringify(payload),
       });

145-191: Use server-computed size in the embed and consider attaching a checksum for integrity.
Use actual buffer length; optional SHA-256 helps verify integrity on download.

Apply:

-              {
-                name: "Size",
-                value: `${recording.size_mb.toFixed(2)} MB`,
-                inline: true,
-              },
+              {
+                name: "Size",
+                value: `${(buffer.length / (1024 * 1024)).toFixed(2)} MB`,
+                inline: true,
+              },

Optionally:

// compute after decoding
const sha256 = crypto.createHash("sha256").update(buffer).digest("hex");
// add another field { name: "SHA-256", value: sha256, inline: false }

351-361: Consistent 4xx on Discord webhook failure would aid client UX.
You return 500 on failures; consider 502/503/504 or 424 (Failed Dependency) to distinguish webhook outage vs. client input problems.
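One possible mapping, sketched here with a hypothetical helper name:

```typescript
// Hypothetical helper: translate a failed Discord webhook response into a
// status that distinguishes upstream trouble from bad client input.
function statusForWebhookFailure(webhookStatus: number): number {
  if (webhookStatus === 429) return 503; // upstream rate limit: retry later
  if (webhookStatus >= 500) return 502; // upstream outage: bad gateway
  return 424; // Failed Dependency: the webhook rejected our request
}
```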


363-369: Log the error server-side for /logs, like you do for /recording.
Improves operability.

Apply:

-    } catch (error) {
-      return c.json({ error: "Failed to submit logs" }, { status: 500 });
-    }
+    } catch (error) {
+      console.error("Error submitting logs:", error);
+      return c.json({ error: "Failed to submit logs" }, { status: 500 });
+    }

95-99: Ensure CORS covers desktop origins for new POST endpoints (recording/logs).
Given desktop/Tauri flows, verify CORS policy allows only expected origins and methods, or confirm these routes are same-origin and cookie-auth only.

If needed, add a CORS middleware with explicit allowlist for POST /feedback,/recording,/logs.

Also applies to: 214-218
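A minimal allowlist predicate for such a middleware might look like the sketch below; the listed origins are placeholders and would need to be verified against the app's actual webview/dev origins. Hono's cors() accepts a function for its origin option, which this predicate could back.

```typescript
// Hypothetical origin allowlist; the exact origins used by the desktop
// webview are an assumption and must be checked against the Tauri config.
const ALLOWED_ORIGINS = new Set<string>([
  "tauri://localhost",
  "http://localhost:3001",
]);

// Returning the origin echoes it in Access-Control-Allow-Origin;
// returning null emits no CORS headers at all.
function allowOrigin(origin: string | undefined): string | null {
  return origin && ALLOWED_ORIGINS.has(origin) ? origin : null;
}
```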

apps/desktop/src-tauri/src/system_info.rs (4)

76-80: Use sysinfo consistently for memory calculation.

The current approach instantiates System::new_all(), which loads all system information (including processes) even though only memory is needed. Use System::new() with refresh_memory() instead for better performance.

 let memory_gb = {
     use sysinfo::System;
-    let sys = System::new_all();
+    let mut sys = System::new();
+    sys.refresh_memory();
     sys.total_memory() as f64 / (1024.0 * 1024.0 * 1024.0)
 };

98-128: Add fallback display detection for non-macOS platforms.

The hardcoded fallback display info (1920x1080) may not represent actual display configurations on Windows/Linux systems. Consider using platform-specific APIs or the winit crate for cross-platform display detection.

For a more robust cross-platform solution, consider using a crate like display-info or implementing Windows-specific display enumeration:

 #[cfg(not(target_os = "macos"))]
 {
-    vec![DisplayInfo {
-        width: 1920,
-        height: 1080,
-        scale_factor: 1.0,
-    }]
+    // TODO: Implement Windows/Linux display detection
+    // For now, return a reasonable default but consider using winit or display-info crate
+    vec![DisplayInfo {
+        width: 1920,
+        height: 1080,
+        scale_factor: 1.0,
+    }]
 }

191-191: Use system modification time as fallback for recording timestamp.

The current implementation defaults to chrono::Local::now() for the timestamp, but it would be more accurate to use the directory's modification time as a fallback when metadata doesn't contain a timestamp.

+        // Use directory modification time as fallback
+        let default_timestamp = recordings.iter()
+            .find(|(path, _)| path.file_name().and_then(|n| n.to_str()) == Some(&recording_id))
+            .map(|(_, time)| {
+                time.duration_since(std::time::UNIX_EPOCH)
+                    .unwrap_or_default()
+                    .as_secs()
+            })
+            .and_then(|secs| chrono::DateTime::from_timestamp(secs as i64, 0))
+            .map(|dt| dt.to_rfc3339())
+            .unwrap_or_else(|| chrono::Local::now().to_rfc3339());
+
         let mut log = RecordingLog {
             id: recording_id.clone(),
-            timestamp: chrono::Local::now().to_rfc3339(),
+            timestamp: default_timestamp,

371-418: Improve error handling and add file size validation in zip creation.

The add_dir_to_zip function loads entire files into memory without size checks, which could cause memory issues for large files. Consider adding individual file size limits and streaming large files.

 fn add_dir_to_zip<W: Write + std::io::Seek>(
     zip: &mut ZipWriter<W>,
     dir_path: &PathBuf,
     prefix: &str,
     options: &FileOptions,
 ) -> Result<(), String> {
+    const MAX_FILE_SIZE: u64 = 100 * 1024 * 1024; // 100MB per file
+    
     let entries = fs::read_dir(dir_path).map_err(|e| format!("Failed to read directory: {}", e))?;

     for entry in entries {
         let entry = entry.map_err(|e| format!("Failed to read entry: {}", e))?;
         let path = entry.path();
+        
+        if path.is_file() {
+            let file_size = path.metadata()
+                .map_err(|e| format!("Failed to get file metadata: {}", e))?
+                .len();
+            if file_size > MAX_FILE_SIZE {
+                return Err(format!("File too large: {} ({} bytes)", path.display(), file_size));
+            }
+        }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 3e89b36 and 8d6d347.

📒 Files selected for processing (6)
  • apps/desktop/src-tauri/src/system_info.rs (1 hunks)
  • apps/desktop/src/routes/(window-chrome)/settings/feedback.tsx (2 hunks)
  • apps/desktop/src/utils/tauri.ts (6 hunks)
  • apps/web/app/api/desktop/[...route]/root.ts (4 hunks)
  • packages/database/auth/auth-options.tsx (1 hunks)
  • packages/web-api-contract/src/desktop.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • packages/database/auth/auth-options.tsx
  • apps/desktop/src/routes/(window-chrome)/settings/feedback.tsx
  • apps/desktop/src/utils/tauri.ts
  • packages/web-api-contract/src/desktop.ts
🧰 Additional context used
📓 Path-based instructions (5)
apps/desktop/src-tauri/**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

Use tauri_specta on the Rust side for IPC (typed commands/events) and emit events via generated types

Files:

  • apps/desktop/src-tauri/src/system_info.rs
apps/web/app/api/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

When HTTP routes are necessary, implement them under app/api/*, configure CORS correctly, and set precise revalidation

Files:

  • apps/web/app/api/desktop/[...route]/root.ts
apps/web/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for client-side server state and data fetching in the web app
Mutations should call Server Actions and perform precise cache updates with setQueryData/setQueriesData, avoiding broad invalidations
Prefer Server Components for initial data and pass initialData to client components for React Query hydration

Files:

  • apps/web/app/api/desktop/[...route]/root.ts
{apps/web,packages/ui}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

{apps/web,packages/ui}/**/*.{ts,tsx}: Use Tailwind CSS exclusively for styling in the web app and shared React UI components
Component naming: React components in PascalCase; hooks in camelCase starting with 'use'

Files:

  • apps/web/app/api/desktop/[...route]/root.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use strict TypeScript and avoid any; prefer shared types from packages

Files:

  • apps/web/app/api/desktop/[...route]/root.ts
🧠 Learnings (1)
📚 Learning: 2025-08-25T10:58:06.134Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-25T10:58:06.134Z
Learning: Applies to apps/desktop/src-tauri/**/*.rs : Use tauri_specta on the Rust side for IPC (typed commands/events) and emit events via generated types

Applied to files:

  • apps/desktop/src-tauri/src/system_info.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (rust)
🔇 Additional comments (3)
apps/web/app/api/desktop/[...route]/root.ts (2)

3-3: LGTM: nanoId import is appropriate for collision-resistant keys.
No issues spotted.


12-12: LGTM: S3 bucket provider import aligns with new storage workflow.
No issues spotted.

apps/desktop/src-tauri/src/system_info.rs (1)

221-233: LGTM! Proper use of tauri_specta annotations.

The command correctly uses both #[tauri::command] and #[specta::specta] annotations as required by the coding guidelines for typed IPC in Tauri applications.

Comment on lines +1 to +10
use base64::prelude::*;
use serde::{Deserialize, Serialize};
use specta::Type;
use std::fs;
use std::io::{Read, Write};
use std::path::PathBuf;
use tauri::{AppHandle, Manager};
use zip::ZipWriter;
use zip::write::FileOptions;

⚠️ Potential issue

Missing cfg-gated import for macOS display functionality.

The code uses core_graphics::display::CGDisplay on macOS (lines 101-102) but the import is not present at the module level. This will cause compilation failures on macOS builds.

Add the missing import at the top of the file:

 use base64::prelude::*;
+#[cfg(target_os = "macos")]
+use core_graphics::display::CGDisplay;
 use serde::{Deserialize, Serialize};
 use specta::Type;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
use base64::prelude::*;
#[cfg(target_os = "macos")]
use core_graphics::display::CGDisplay;
use serde::{Deserialize, Serialize};
use specta::Type;
use std::fs;
use std::io::{Read, Write};
use std::path::PathBuf;
use tauri::{AppHandle, Manager};
use zip::ZipWriter;
use zip::write::FileOptions;
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/system_info.rs around lines 1 to 10, the file uses
core_graphics::display::CGDisplay on macOS (used later at ~lines 101-102) but
the module-level import is missing; add a cfg-gated import like
core_graphics::display::CGDisplay (behind #[cfg(target_os = "macos")]) to the
top imports so macOS builds can reference CGDisplay without breaking non-macOS
targets.

Comment on lines +139 to +145
const downloadUrl = await bucket.getSignedObjectUrl(recordingKey);

const discordWebhookUrl = serverEnv().DISCORD_LOGS_WEBHOOK_URL;
if (!discordWebhookUrl)
throw new Error("Discord webhook URL is not configured");

const formattedMessage = {
💡 Verification agent

🧩 Analysis chain

Mismatch between “valid for 7 days” copy and URL expiry; pass an explicit TTL when generating the signed URL.
Unless createBucketProvider defaults to 7 days, the text could be inaccurate and links may expire earlier.

Apply:

-      const downloadUrl = await bucket.getSignedObjectUrl(recordingKey);
+      const sevenDays = 7 * 24 * 60 * 60; // seconds
+      const downloadUrl = await bucket.getSignedObjectUrl(recordingKey, {
+        expiresInSeconds: sevenDays,
+      });

If the helper doesn’t accept options, extend it to pass the TTL through to the S3 signer.


🏁 Script executed:

#!/bin/bash
# Locate the createBucketProvider implementation
rg -n "createBucketProvider" -g "*.ts" -C3

# Locate the getSignedObjectUrl definition or overloads
rg -n "getSignedObjectUrl" -g "*.ts" -C3

Length of output: 29968


Ensure signed URL validity matches “valid for 7 days” copy

By default, our helper’s getSignedObjectUrl omits an explicit expiry, so it uses the AWS SDK’s default timeout (900 s when using @aws-sdk/s3-request-presigner) (github.com) or the 1 h fallback in our backend helper, both far shorter than the advertised “valid for 7 days.” AWS supports up to 604 800 s (7 days) via the expiresIn option (chatwithcloud.ai). We need to:

• Extend the S3BucketProvider interface to accept an optional expiresInSeconds parameter.
• Modify the getSignedObjectUrl implementation in apps/web/utils/s3.ts to pass that TTL through to getSignedUrl.
• Update callsites (e.g., apps/web/app/api/desktop/[...route]/root.ts) to supply 7 * 24 * 60 * 60 seconds for links meant to remain valid for 7 days.

Example diff for the helper:

--- a/apps/web/utils/s3.ts
+++ b/apps/web/utils/s3.ts
@@ -144,7 +144,8 @@ interface S3BucketProvider {
-  getSignedObjectUrl(key: string): Promise<string>;
+  getSignedObjectUrl(
+    key: string,
+    expiresInSeconds?: number
+  ): Promise<string>;
 }
 
 export async function createBucketProvider(…) {
@@ -238,7 +239,17 @@ export function createBucketProvider(…) {
-    async getSignedObjectUrl(key: string) {
-      return S3Presigner.getSignedUrl(
+    async getSignedObjectUrl(
+      key: string,
+      expiresInSeconds = 7 * 24 * 60 * 60 // default to 7 days
+    ) {
+      return S3Presigner.getSignedUrl(
         await getClient(false),
         new GetObjectCommand({ Bucket: bucket, Key: key }),
+        {
+          expiresIn: expiresInSeconds,
+        }
       );
     },

And in apps/web/app/api/desktop/[...route]/root.ts around line 139:

- const downloadUrl = await bucket.getSignedObjectUrl(recordingKey);
+ const downloadUrl = await bucket.getSignedObjectUrl(
+   recordingKey,
+   7 * 24 * 60 * 60
+ );
📝 Committable suggestion


Suggested change
      const downloadUrl = await bucket.getSignedObjectUrl(
        recordingKey,
        7 * 24 * 60 * 60
      );

      const discordWebhookUrl = serverEnv().DISCORD_LOGS_WEBHOOK_URL;
      if (!discordWebhookUrl)
        throw new Error("Discord webhook URL is not configured");

      const formattedMessage = {
🤖 Prompt for AI Agents
In apps/web/app/api/desktop/[...route]/root.ts around lines 139 to 145, the
signed URL is created via bucket.getSignedObjectUrl but no expiresIn is passed
so the link will use the SDK/helper default (much shorter than the advertised 7
days); update the S3BucketProvider interface to accept an optional
expiresInSeconds, modify apps/web/utils/s3.ts getSignedObjectUrl to forward
expiresInSeconds into getSignedUrl (using expiresIn option), and change this
callsite to pass 7 * 24 * 60 * 60 (604800) seconds when generating links that
must be valid for 7 days.
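The hard cap driving this fix is S3's SigV4 limit: presigned URLs expire after at most 604 800 seconds, and larger values are rejected. A minimal standalone sketch of the TTL handling (clampPresignTtl is a hypothetical helper, not part of the codebase) that a getSignedObjectUrl wrapper could apply before forwarding the value as `expiresIn` to @aws-sdk/s3-request-presigner's getSignedUrl:

```typescript
// SigV4 presigned URLs are capped at 604_800 seconds (7 days);
// S3 rejects requests signed with a longer X-Amz-Expires.
const MAX_PRESIGN_SECONDS = 7 * 24 * 60 * 60; // 604800

// Hypothetical helper: clamp a caller-supplied TTL into the range S3 accepts,
// falling back to a 1 hour default for missing or invalid input.
function clampPresignTtl(requestedSeconds: number): number {
  if (!Number.isFinite(requestedSeconds) || requestedSeconds <= 0) {
    return 3600; // 1 hour default, mirroring the backend helper's fallback
  }
  return Math.min(Math.floor(requestedSeconds), MAX_PRESIGN_SECONDS);
}

console.log(clampPresignTtl(7 * 24 * 60 * 60)); // → 604800
console.log(clampPresignTtl(10 * 24 * 60 * 60)); // → 604800 (clamped)
```

Centralizing the clamp keeps the advertised "valid for 7 days" copy honest even if a future callsite passes a larger TTL.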

Comment on lines +214 to +255
app.post(
"/logs",
zValidator(
"json",
z.object({
systemInfo: z.object({
os: z.string(),
os_version: z.string(),
arch: z.string(),
cpu_cores: z.number(),
memory_gb: z.number(),
displays: z.array(
z.object({
width: z.number(),
height: z.number(),
scale_factor: z.number(),
}),
),
cameras: z.array(z.string()),
microphones: z.array(z.string()),
}),
recentLogs: z.array(
z.object({
id: z.string(),
timestamp: z.string(),
duration_seconds: z.number().nullable(),
error: z.string().nullable(),
log_content: z.string().nullable(),
log_file_path: z.string().nullable().optional(),
}),
),
appVersion: z.string(),
logFiles: z
.array(
z.object({
name: z.string(),
content: z.string(),
}),
)
.optional(),
}),
),
⚠️ Potential issue

recentLogs are accepted but never used; add a limit and plan for delivery.
Cap the array length in the schema and decide how to forward it (embed vs attachment).

Apply:

   recentLogs: z.array(
     z.object({
       id: z.string(),
       timestamp: z.string(),
       duration_seconds: z.number().nullable(),
       error: z.string().nullable(),
       log_content: z.string().nullable(),
       log_file_path: z.string().nullable().optional(),
     }),
-  ),
+  ).max(200),
📝 Committable suggestion


Suggested change
recentLogs: z.array(
  z.object({
    id: z.string(),
    timestamp: z.string(),
    duration_seconds: z.number().nullable(),
    error: z.string().nullable(),
    log_content: z.string().nullable(),
    log_file_path: z.string().nullable().optional(),
  })
).max(200),
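The `.max(200)` cap is the declarative form; the equivalent imperative guard (a hypothetical standalone sketch, not the route's actual validator) makes the failure mode explicit and shows why the limit belongs at the schema boundary, before any log content is processed:

```typescript
type RecentLog = {
  id: string;
  timestamp: string;
  duration_seconds: number | null;
  error: string | null;
  log_content: string | null;
  log_file_path?: string | null;
};

const MAX_RECENT_LOGS = 200;

// Mirrors zod's `.max(200)`: reject oversized arrays up front so a
// misbehaving client cannot inflate the request body without bound.
function checkRecentLogs(
  logs: RecentLog[],
): { ok: true } | { ok: false; error: string } {
  if (logs.length > MAX_RECENT_LOGS) {
    return {
      ok: false,
      error: `recentLogs has ${logs.length} entries; limit is ${MAX_RECENT_LOGS}`,
    };
  }
  return { ok: true };
}
```

With zod, the same rejection surfaces as a 400 from zValidator before the handler body runs.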

Comment on lines +257 to +329
const { systemInfo, appVersion, logFiles } = c.req.valid("json");

try {
const discordWebhookUrl = serverEnv().DISCORD_LOGS_WEBHOOK_URL;
if (!discordWebhookUrl)
throw new Error("Discord webhook URL is not configured");

const formattedMessage = {
content: `🔧 **Logs Report from ${c.get("user").email}**`,
embeds: [
{
title: "System Information",
color: 5814783,
fields: [
{
name: "OS",
value: `${systemInfo.os} ${systemInfo.os_version}`,
inline: true,
},
{
name: "Architecture",
value: systemInfo.arch,
inline: true,
},
{
name: "App Version",
value: appVersion,
inline: true,
},
{
name: "CPU Cores",
value: systemInfo.cpu_cores.toString(),
inline: true,
},
{
name: "Memory",
value: `${systemInfo.memory_gb.toFixed(1)} GB`,
inline: true,
},
{
name: "Displays",
value: systemInfo.displays
.map((d) => `${d.width}x${d.height}`)
.join(", "),
inline: false,
},
{
name: "Cameras",
value:
systemInfo.cameras.slice(0, 3).join("\n") || "None detected",
inline: false,
},
{
name: "Microphones",
value:
systemInfo.microphones.slice(0, 3).join("\n") ||
"None detected",
inline: false,
},
],
},
] as Array<{
title: string;
color: number;
fields?: {
name: string;
value: string;
inline: boolean;
}[];
description?: string;
}>,
};


🛠️ Refactor suggestion

Avoid data loss: include recentLogs in the outbound payload (as an attachment, to avoid Discord field limits).
At the moment, recentLogs are discarded. Attach a logs.json file alongside any provided log files.

Apply (and include recentLogs in destructure):

-    const { systemInfo, appVersion, logFiles } = c.req.valid("json");
+    const { systemInfo, appVersion, logFiles, recentLogs } = c.req.valid("json");

And prepare a small, summarized description for the embed:

         embeds: [
           {
             title: "System Information",
             color: 5814783,
             fields: [
               // ...
             ],
+            description: `Attached: recentLogs (${recentLogs.length})`,
           },
         ] as Array<{
📝 Committable suggestion


Suggested change
// apps/web/app/api/desktop/[...route]/root.ts
const { systemInfo, appVersion, logFiles, recentLogs } = c.req.valid("json");

try {
  const discordWebhookUrl = serverEnv().DISCORD_LOGS_WEBHOOK_URL;
  if (!discordWebhookUrl)
    throw new Error("Discord webhook URL is not configured");

  const formattedMessage = {
    content: `🔧 **Logs Report from ${c.get("user").email}**`,
    embeds: [
      {
        title: "System Information",
        color: 5814783,
        fields: [
          {
            name: "OS",
            value: `${systemInfo.os} ${systemInfo.os_version}`,
            inline: true,
          },
          {
            name: "Architecture",
            value: systemInfo.arch,
            inline: true,
          },
          {
            name: "App Version",
            value: appVersion,
            inline: true,
          },
          {
            name: "CPU Cores",
            value: systemInfo.cpu_cores.toString(),
            inline: true,
          },
          {
            name: "Memory",
            value: `${systemInfo.memory_gb.toFixed(1)} GB`,
            inline: true,
          },
          {
            name: "Displays",
            value: systemInfo.displays
              .map((d) => `${d.width}x${d.height}`)
              .join(", "),
            inline: false,
          },
          {
            name: "Cameras",
            value:
              systemInfo.cameras.slice(0, 3).join("\n") || "None detected",
            inline: false,
          },
          {
            name: "Microphones",
            value:
              systemInfo.microphones.slice(0, 3).join("\n") ||
              "None detected",
            inline: false,
          },
        ],
        description: `Attached: recentLogs (${recentLogs.length})`,
      },
    ] as Array<{
      title: string;
      color: number;
      fields?: {
        name: string;
        value: string;
        inline: boolean;
      }[];
      description?: string;
    }>,
  };
  // …rest of your dispatch/attachment logic…
🤖 Prompt for AI Agents
In apps/web/app/api/desktop/[...route]/root.ts around lines 257 to 329,
recentLogs are not being read or sent; update the destructure to include
recentLogs from c.req.valid("json"), create a logs.json attachment (stringify
recentLogs) and include it in the outbound attachments array alongside any
existing logFiles so logs aren't discarded, and add a short summarized
description field on the embed (e.g., first N lines or a concise summary) to
surface key points in the embed while attaching the full logs.json to avoid
Discord field/size limits.
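One way to deliver the attachment described above: Discord webhooks accept multipart/form-data with a `payload_json` part plus `files[N]` parts, so the full recentLogs array can ride along as logs.json instead of being squeezed into embed fields (which cap out at 1024 characters per field). A sketch assuming a Node 18+ runtime with global FormData/Blob/fetch; buildWebhookForm is a hypothetical helper, not code from this PR:

```typescript
type RecentLog = { id: string; timestamp: string; error: string | null };

// Build the multipart body Discord's webhook endpoint expects:
// the message itself under `payload_json`, the logs as an attached file.
function buildWebhookForm(
  payload: { content: string },
  recentLogs: RecentLog[],
): FormData {
  const form = new FormData();
  form.append("payload_json", JSON.stringify(payload));
  form.append(
    "files[0]",
    new Blob([JSON.stringify(recentLogs, null, 2)], {
      type: "application/json",
    }),
    "logs.json",
  );
  return form;
}

// The send itself would then be:
// await fetch(discordWebhookUrl, { method: "POST", body: form });
```

Attaching the file sidesteps both the per-field limit and the 6000-character total embed limit, while the embed keeps a short human-readable summary.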
