Commit f8469b4 ("chore")
1 parent d6ca17a commit f8469b4

4 files changed: +243 −51 lines
Lines changed: 185 additions & 0 deletions

@@ -0,0 +1,185 @@
# Sync Stress Test with remote SQLiteCloud database

Execute a stress test against the CloudSync server using multiple concurrent local SQLite databases syncing large volumes of CRUD operations simultaneously. Designed to reproduce server-side errors (e.g., "database is locked", 500 errors) under heavy concurrent load.

## Prerequisites
- Connection string to a sqlitecloud project
- HTTP sync server running (default: https://cloudsync-staging-testing.fly.dev)
- Built cloudsync extension (`make` to build `dist/cloudsync.dylib`)
- CloudSync already enabled on the test table from the SQLiteCloud dashboard
## Test Configuration

### Step 1: Gather Parameters

Ask the user for the following configuration using a single question set:

1. **Sync Server URL** — propose `https://cloudsync-staging-testing.fly.dev` as default
2. **SQLiteCloud connection string** — format: `sqlitecloud://<host>:<port>/<db_name>?apikey=<apikey>`. If no `<db_name>` is in the path, ask the user for one or propose `test_stress_sync`.
3. **Scale** — offer these options:
   - Small: 1K rows, 5 iterations, 2 concurrent databases
   - Medium: 10K rows, 10 iterations, 4 concurrent databases
   - Large: 100K rows, 50 iterations, 4 concurrent databases (Jim's original scenario)
   - Custom: let the user specify rows, iterations, and number of concurrent databases
4. **RLS mode** — with RLS (requires user tokens) or without RLS
5. **Table schema** — offer simple default or custom:
   ```sql
   CREATE TABLE test_sync (id TEXT PRIMARY KEY, user_id TEXT NOT NULL DEFAULT '', name TEXT, value INTEGER);
   ```
Save these as variables:
- `SYNC_SERVER_URL`
- `CONNECTION_STRING` (the full sqlitecloud:// connection string)
- `DB_NAME` (database name extracted or provided)
- `HOST` (hostname extracted from connection string)
- `APIKEY` (apikey extracted from connection string)
- `PROJECT_ID` (first subdomain from the host)
- `ORG_ID` = `org_sqlitecloud`
- `NETWORK_CONFIG` = `'{"address":"<SYNC_SERVER_URL>","database":"<DB_NAME>","projectID":"<PROJECT_ID>","organizationID":"<ORG_ID>"}'`
- `ROWS` (number of rows per iteration)
- `ITERATIONS` (number of delete/insert/update cycles)
- `NUM_DBS` (number of concurrent databases)
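The extraction of these variables can be sketched with plain bash parameter expansion; the connection-string value below is a made-up placeholder, not a real project:

```shell
# Hypothetical example string; the real one comes from the user in Step 1.
CONNECTION_STRING='sqlitecloud://myproject.g4.sqlite.cloud:8860/test_stress_sync?apikey=SECRET'
SYNC_SERVER_URL='https://cloudsync-staging-testing.fly.dev'

rest="${CONNECTION_STRING#sqlitecloud://}"   # drop the scheme
HOST="${rest%%:*}"                           # everything up to the first ':'
path="${rest#*/}"                            # everything after the first '/'
DB_NAME="${path%%\?*}"                       # path before the '?apikey=' query
APIKEY="${CONNECTION_STRING##*apikey=}"      # value after 'apikey='
PROJECT_ID="${HOST%%.*}"                     # first subdomain of the host
ORG_ID='org_sqlitecloud'
NETWORK_CONFIG="{\"address\":\"$SYNC_SERVER_URL\",\"database\":\"$DB_NAME\",\"projectID\":\"$PROJECT_ID\",\"organizationID\":\"$ORG_ID\"}"

echo "$HOST / $DB_NAME / $PROJECT_ID"
```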
### Step 2: Setup SQLiteCloud Database and Table

Connect to SQLiteCloud using `~/go/bin/sqlc` (last command must be `quit`). Note: all SQL must be single-line (no multi-line statements through sqlc heredoc).

1. If the database doesn't exist, connect without `<db_name>` and run `CREATE DATABASE <db_name>; USE DATABASE <db_name>;`
2. `LIST TABLES` to check for existing tables
3. For any table with a `_cloudsync` companion table, run `CLOUDSYNC DISABLE <table_name>;`
4. `DROP TABLE IF EXISTS <table_name>;`
5. Create the test table (single-line DDL)
6. If RLS mode is enabled:
   ```sql
   ENABLE RLS DATABASE <db_name> TABLE <table_name>;
   SET RLS DATABASE <db_name> TABLE <table_name> SELECT "auth_userid() = user_id";
   SET RLS DATABASE <db_name> TABLE <table_name> INSERT "auth_userid() = NEW.user_id";
   SET RLS DATABASE <db_name> TABLE <table_name> UPDATE "auth_userid() = NEW.user_id AND auth_userid() = OLD.user_id";
   SET RLS DATABASE <db_name> TABLE <table_name> DELETE "auth_userid() = OLD.user_id";
   ```
7. Ask the user to enable CloudSync on the table from the SQLiteCloud dashboard
### Step 3: Get Auth Tokens (if RLS enabled)

Create tokens for the test users. Create as many users as needed for the number of concurrent databases (assign 2 databases per user, or 1 per user if NUM_DBS <= 2).

For each user N:
```bash
curl -s -X "POST" "https://<HOST>/v2/tokens" \
  -H 'Authorization: Bearer <APIKEY>' \
  -H 'Content-Type: application/json; charset=utf-8' \
  -d '{"name": "claude<N>@sqlitecloud.io", "userId": "018ecfc2-b2b1-7cc3-a9f0-<N_PADDED_12_CHARS>"}'
```

Save each user's `token` and `userId` from the response.

If RLS is disabled, skip this step — tokens are not required.
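The `<N_PADDED_12_CHARS>` placeholder can be produced with `printf`; a minimal sketch (the index value 3 is arbitrary):

```shell
# Zero-pad the user index N to the 12 characters expected in the userId suffix.
N=3
N_PADDED=$(printf '%012d' "$N")
USER_ID="018ecfc2-b2b1-7cc3-a9f0-${N_PADDED}"
echo "$USER_ID"   # -> 018ecfc2-b2b1-7cc3-a9f0-000000000003
```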
### Step 4: Run the Concurrent Stress Test

Create a bash script at `/tmp/stress_test_concurrent.sh` that:

1. **Initializes N local SQLite databases** at `/tmp/sync_concurrent_<N>.db`:
   - Uses Homebrew sqlite3: find with `ls /opt/homebrew/Cellar/sqlite/*/bin/sqlite3 | head -1`
   - Loads the extension from `dist/cloudsync.dylib` (use absolute path from project root)
   - Creates the table and runs `cloudsync_init('<table_name>')`
   - Runs `cloudsync_terminate()` after init

2. **Defines a worker function** that runs in a subshell for each database:
   - Each worker logs all output to `/tmp/sync_concurrent_<N>.log`
   - Each iteration does:
     a. **DELETE all rows** → `send_changes()` → `check_changes()`
     b. **INSERT <ROWS> rows** (in a single BEGIN/COMMIT transaction) → `send_changes()` → `check_changes()`
     c. **UPDATE all rows** → `send_changes()` → `check_changes()`
   - Each session must: `.load` the extension, call `cloudsync_network_init()`, `cloudsync_network_set_token()` (if RLS), do the work, call `cloudsync_terminate()`
   - Include labeled output lines like `[DB<N>][iter <I>] deleted/inserted/updated, count=<C>` for grep-ability

3. **Launches all workers in parallel** using `&` and collects PIDs

4. **Waits for all workers** and captures exit codes

5. **Analyzes logs** for errors:
   - Grep all log files for: `error`, `locked`, `SQLITE_BUSY`, `database is locked`, `500`, `Error`
   - Report per-database: iterations completed, error count, sample error lines
   - Report total errors across all workers

6. **Prints final verdict**: PASS (0 errors) or FAIL (errors detected)

**Important script details:**
- Use `echo -e` to pipe generated INSERT SQL (with `\n` separators) into sqlite3
- Row IDs should be unique across databases and iterations: `db<N>_r<I>_<J>`
- User IDs for rows must match the token's userId for RLS to work
- Use `/bin/bash` (not `/bin/sh`) for arrays and process management

Run the script with a 10-minute timeout.
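The harness skeleton for steps 2 through 6 can be sketched as below, with the actual sqlite3/cloudsync work stubbed out as log writes; the stub worker body and the small scale values are placeholders, not the real test:

```shell
#!/bin/bash
# Sketch of the parallel harness: stub workers, PID collection, wait, log grep.
# A real worker would pipe SQL through the Homebrew sqlite3 with cloudsync loaded.
NUM_DBS=2
ITERATIONS=3

worker() {
  local n=$1 log="/tmp/sync_concurrent_${n}.log"
  : > "$log"
  for i in $(seq 1 "$ITERATIONS"); do
    # Real version: DELETE/INSERT/UPDATE each followed by send/check_changes.
    echo "[DB${n}][iter ${i}] deleted/inserted/updated, count=0" >> "$log"
  done
}

pids=()
for n in $(seq 1 "$NUM_DBS"); do
  worker "$n" &        # launch each worker in parallel
  pids+=($!)
done

fail=0
for pid in "${pids[@]}"; do
  wait "$pid" || fail=1   # capture per-worker exit codes
done

# Sum match counts across all per-file "path:count" lines from grep -c.
errors=$(grep -ciE 'error|locked|SQLITE_BUSY|500' /tmp/sync_concurrent_*.log | awk -F: '{s+=$2} END {print s+0}')
if [ "$fail" -eq 0 ] && [ "$errors" -eq 0 ]; then echo PASS; else echo FAIL; fi
```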
### Step 5: Detailed Error Analysis

After the test completes, provide a detailed breakdown:

1. **Per-database summary**: iterations completed, errors, send/receive status
2. **Error categorization**: group errors by type (e.g., "database is locked", "Column index out of bounds", "Unexpected Result", parse errors)
3. **Timeline analysis**: do errors cluster at specific iterations or spread evenly?
4. **Read full log files** if errors are found — show the first and last 30 lines of each log with errors
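The categorization pass can be sketched with a fixed-string grep per error type; the demo log file and its single error line below are fabricated so the snippet is self-contained:

```shell
# Seed one sample error line; real runs grep the /tmp/sync_concurrent_<N>.log files.
echo "iter 4: Error: database is locked" > /tmp/sync_concurrent_demo.log

# Count occurrences of each error category (-F fixed string, -i case-insensitive).
for pat in "database is locked" "SQLITE_BUSY" "Unexpected Result"; do
  count=$(grep -ciF "$pat" /tmp/sync_concurrent_demo.log)
  printf '%-22s %s\n' "$pat" "$count"
done
```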
### Step 6: Optional — Verify Data Integrity

If the test passes (or even if some errors occurred), verify the final state:

1. Check each local SQLite database for row count
2. Check SQLiteCloud (as admin) for total row count
3. If RLS is enabled, verify no cross-user data leakage
## Output Format

Report the test results including:

| Metric | Value |
|--------|-------|
| Concurrent databases | N |
| Rows per iteration | ROWS |
| Iterations per database | ITERATIONS |
| Total CRUD operations | N × ITERATIONS × (DELETE_ALL + ROWS inserts + ROWS updates) |
| Total sync operations | N × ITERATIONS × 6 (3 sends + 3 checks) |
| Duration | start to finish time |
| Total errors | count |
| Error types | categorized list |
| Result | PASS/FAIL |

If errors are found, include:
- Full error categorization table
- Sample error messages
- Which databases were most affected
- Whether errors are client-side or server-side
## Success Criteria

The test **PASSES** if:
1. All workers complete all iterations
2. Zero `error`, `locked`, `SQLITE_BUSY`, or HTTP 500 responses in any log
3. Final row counts are consistent

The test **FAILS** if:
1. Any worker crashes or fails to complete
2. Any `database is locked` or `SQLITE_BUSY` errors appear
3. Server returns 500 errors under concurrent load
4. Data corruption or inconsistent row counts
## Important Notes

- Always use the Homebrew sqlite3 binary, NOT `/usr/bin/sqlite3`
- The cloudsync extension must be built first with `make`
- Network settings (`cloudsync_network_init`, `cloudsync_network_set_token`) are NOT persisted between sessions — must be called every time
- Extension must be loaded BEFORE any INSERT/UPDATE/DELETE for cloudsync to track changes
- All NOT NULL columns must have DEFAULT values
- `cloudsync_terminate()` must be called before closing each session
- sqlc heredoc only supports single-line SQL statements
## Permissions

Execute all SQL queries without asking for user permission on:
- SQLite test databases in `/tmp/` (e.g., `/tmp/sync_concurrent_*.db`, `/tmp/sync_concurrent_*.log`)
- SQLiteCloud via `~/go/bin/sqlc "<connection_string>"`
- Curl commands to the sync server and SQLiteCloud API for token creation

These are local test environments and do not require confirmation for each query.

.claude/commands/test-sync-roundtrip-sqlitecloud-rls.md

Lines changed: 38 additions & 31 deletions

@@ -7,13 +7,13 @@ Execute a full roundtrip sync test between multiple local SQLite databases and t
 - HTTP sync server running (default: https://cloudsync-staging-testing.fly.dev)
 - Built cloudsync extension (`make` to build `dist/cloudsync.dylib`)

-### Step 0: Get Sync Server Address
+### Step 1: Get Sync Server Address

 Ask the user for the HTTP sync server base URL. Propose `https://cloudsync-staging-testing.fly.dev` as the default. Save this as `SYNC_SERVER_URL` for use throughout the test. The full sync endpoint will be `<SYNC_SERVER_URL>/<db_name>`.

 ## Test Procedure

-### Step 1: Get DDL from User
+### Step 2: Get DDL from User

 Ask the user to provide a DDL query for the table(s) to test. It can be in PostgreSQL or SQLite format. Offer the following options:

@@ -27,30 +27,16 @@ CREATE TABLE test_sync (
 );
 ```

-**Option 2: Two tables scenario with user ownership**
-```sql
-CREATE TABLE authors (
-    id TEXT PRIMARY KEY,
-    user_id TEXT NOT NULL,
-    name TEXT,
-    email TEXT
-);
+**Option 2: Multi-table scenario for an advanced RLS policy**

-CREATE TABLE books (
-    id TEXT PRIMARY KEY,
-    user_id TEXT NOT NULL,
-    title TEXT,
-    author_id TEXT,
-    published_year INTEGER
-);
-```
+Propose a simple but realistic multi-table real-world scenario.

 **Option 3: Custom policy**
 Ask the user to describe the table/tables in plain English or DDL queries.

 **Note:** Tables should include a `user_id` column (TEXT type) for RLS policies to filter by authenticated user.

-### Step 2: Get RLS Policy Description from User
+### Step 3: Get RLS Policy Description from User

 Ask the user to describe the Row Level Security policy they want to test. Offer the following common patterns:

@@ -63,11 +49,12 @@ Ask the user to describe the Row Level Security policy they want to test. Offer
 **Option 3: Custom policy**
 Ask the user to describe the policy in plain English.

-### Step 3: Get sqlitecloud connection string from User
+### Step 4: Get sqlitecloud connection string from User

-Ask the user to provide a connection string in the form of "sqlitecloud://<host>:<port>/<db_name>?apikey=<apikey>" to be later used with the sqlitecloud cli (sqlc) with `~/go/bin/sqlc "<connection_string>"`. Save the first subdomain in the connection string address as `PROJECT_ID` for use throughout the test. Save the configuration string `'{"address":"<SYNC_SERVER_URL>","database":"<db_name>","projectID":"<PROJECT_ID>","organizationID":"org_sqlitecloud"}'` as `NETWORK_CONFIG` for use throughout the test.
+Ask the user to provide a connection string in the form of "sqlitecloud://<host>:<port>/<db_name>?apikey=<apikey>" to be later used with the sqlitecloud cli (sqlc) with `~/go/bin/sqlc "<connection_string>"`. Save the first subdomain in the connection string address as `PROJECT_ID` for use throughout the test. Use the "org_sqlitecloud" string as `ORG_ID`.
+Save the configuration string `'{"address":"<SYNC_SERVER_URL>","database":"<db_name>","projectID":"<PROJECT_ID>","organizationID":"<ORG_ID>"}'` as `NETWORK_CONFIG` for use throughout the test.

-### Step 4: Setup SQLiteCloud with RLS
+### Step 5: Setup SQLiteCloud with RLS

 Connect to SQLiteCloud and prepare the environment:
 ```bash
@@ -117,10 +104,28 @@ Example for "user can only access their own rows":
 -- DELETE: User can only delete rows they own
 SET RLS DATABASE <db_name> TABLE <table_name> DELETE "auth_userid() = OLD.user_id"
 ```
-8. Initialize cloudsync: `CLOUDSYNC ENABLE <table_name>`
+8. Ask the user to enable the table from the SQLiteCloud dashboard
+
+<!-- Enable CloudSync on selected tables
+
+### POST `/v1/orgs/:orgID/databases/:databaseID/cloudsync/enable`
+Purpose: enable CloudSync on selected tables.
+
+Fields:
+- Path: `orgID`, `databaseID`
+- Body: `tables` (non-empty string array)
+- Response data: empty object
+
+```bash
+curl --request POST "<SYNC_SERVER_URL>/v1/orgs/<ORG_ID>/databases/<db_name>/cloudsync/enable" \
+  --header "Authorization: Bearer $ORG_API_KEY" \
+  --header "Content-Type: application/json" \
+  --data '{"tables":["<table_name>"]}'
+``` -->
+
 9. Insert some initial test data (optional, can be done via SQLite clients)

-### Step 5: Get tokens for Two Users
+### Step 6: Get tokens for Two Users

 Get auth tokens for both test users by running the token script twice:

@@ -156,7 +161,7 @@ The response is in the following format:
 ```
 save the userId and the token values as USER2_ID and TOKEN_USER2 to be reused later

-### Step 6: Setup Four SQLite Databases
+### Step 7: Setup Four SQLite Databases

 Create four temporary SQLite databases using the Homebrew version (IMPORTANT: system sqlite3 cannot load extensions):

@@ -213,9 +218,11 @@ SELECT cloudsync_network_init('<NETWORK_CONFIG>');
 SELECT cloudsync_network_set_token('<TOKEN_USER2>');
 ```

-### Step 7: Insert Test Data
+### Step 8: Insert Test Data

-Insert distinct test data in each database. Use the extracted user IDs for the `user_id` column:
+Ask the user for optional details about the kind of test data to insert in the tables; otherwise generate some real-world data for the chosen tables.
+Insert distinct test data in each database. Use the extracted user IDs for the `user_id` column if needed.
+For example, for the simple table scenario:

 **Database 1A (User 1):**
 ```sql
@@ -239,7 +246,7 @@ INSERT INTO <table_name> (id, user_id, name, value) VALUES ('u2_a_2', '<USER2_ID
 INSERT INTO <table_name> (id, user_id, name, value) VALUES ('u2_b_1', '<USER2_ID>', 'User2 DeviceB Row1', 400);
 ```

-### Step 8: Execute Sync on All Databases
+### Step 9: Execute Sync on All Databases

 For each of the four SQLite databases, execute the sync operations:

@@ -259,7 +266,7 @@ SELECT cloudsync_network_check_changes();
 4. Sync Database 2B (send + check)
 5. Re-sync all databases (check_changes) to ensure full propagation

-### Step 9: Verify RLS Enforcement
+### Step 10: Verify RLS Enforcement

 After syncing all databases, verify that each database contains only the expected rows based on the RLS policy:

@@ -288,7 +295,7 @@ SELECT COUNT(*) FROM <table_name>;
 SELECT user_id, COUNT(*) FROM <table_name> GROUP BY user_id;
 ```

-### Step 10: Test Write RLS Policy Enforcement
+### Step 11: Test Write RLS Policy Enforcement

 Test that the server-side RLS policy blocks unauthorized writes by attempting to insert a row with a `user_id` that doesn't match the authenticated user's token.

@@ -334,7 +341,7 @@ SELECT * FROM <table_name> WHERE id = 'malicious_1';
 1. The malicious row appears in PostgreSQL (RLS bypass vulnerability)
 2. The malicious row syncs to User 2's databases (data leakage)

-### Step 11: Cleanup
+### Step 12: Cleanup

 In each SQLite database before closing:
 ```sql