Track unsettled usage #549
base: 02-24-add_fees_to_unsignedoriginatorenvelope
Conversation
Walkthrough: This pull request introduces functionality for managing unsettled usage records. It adds two new SQL operations for incrementing and querying unsettled usage (IncrementUnsettledUsage and GetPayerUnsettledUsage), along with the supporting table, helpers, and tests discussed below.
Force-pushed from f278d2a to d27de8f.
Actionable comments posted: 4
🧹 Nitpick comments (4)
pkg/testutils/random.go (1)
71-73: Consider using crypto/rand for consistency.

The implementation is correct for test purposes. However, for consistency with other random functions in this package that use crypto/rand, consider using it here as well.

 func RandomInt32() int32 {
-	return rand.Int31()
+	bytes := make([]byte, 4)
+	_, _ = cryptoRand.Read(bytes)
+	return int32(bytes[0]) | int32(bytes[1])<<8 | int32(bytes[2])<<16 | int32(bytes[3])<<24
 }

pkg/utils/time.go (1)
5-9: Add validation and consider int64 for future-proofing.

The function has potential issues:
- Using int32 could cause overflow in ~4085 years
- No validation for times before the epoch, which would return negative values

-func MinutesSinceEpoch(timestamp time.Time) int32 {
+func MinutesSinceEpoch(timestamp time.Time) (int32, error) {
 	durationSinceEpoch := timestamp.Sub(time.Unix(0, 0))
+	if durationSinceEpoch < 0 {
+		return 0, fmt.Errorf("timestamp %v is before Unix epoch", timestamp)
+	}
+	minutes := int32(durationSinceEpoch.Minutes())
+	if minutes < 0 {
+		return 0, fmt.Errorf("minutes calculation overflow: %d", minutes)
+	}
-	return int32(durationSinceEpoch.Minutes())
+	return minutes, nil
 }

pkg/migrations/00007_unsettled-usage.up.sql (1)
1-10: Consider additional constraints and indexes.

The table structure is good but could be enhanced with:
- Foreign key constraints for data integrity
- Index on originator_id if frequently queried
- CHECK constraint for spend_picodollars

 CREATE TABLE unsettled_usage(
 	payer_id INTEGER NOT NULL,
 	originator_id INTEGER NOT NULL,
 	minutes_since_epoch INTEGER NOT NULL,
-	spend_picodollars BIGINT NOT NULL,
+	spend_picodollars BIGINT NOT NULL CHECK (spend_picodollars >= 0),
 	PRIMARY KEY (payer_id, originator_id, minutes_since_epoch)
+	FOREIGN KEY (payer_id) REFERENCES payers(id),
+	FOREIGN KEY (originator_id) REFERENCES originators(id)
 );
 CREATE INDEX idx_unsettled_usage_payer_id ON unsettled_usage(payer_id);
+CREATE INDEX idx_unsettled_usage_originator_id ON unsettled_usage(originator_id);

pkg/db/tx.go (1)
38-72: Reduce code duplication with RunInTx.

The RunInTxWithResult function duplicates most of the transaction management code from RunInTx. Consider refactoring to share the common logic. Apply this diff to reduce duplication:

+func runInTxCommon(
+	ctx context.Context,
+	db *sql.DB,
+	opts *sql.TxOptions,
+) (*sql.Tx, *queries.Queries, error) {
+	querier := queries.New(db)
+	tx, err := db.BeginTx(ctx, opts)
+	if err != nil {
+		return nil, nil, err
+	}
+	return tx, querier.WithTx(tx), nil
+}
+
 func RunInTx(
 	ctx context.Context,
 	db *sql.DB,
 	opts *sql.TxOptions,
 	fn func(ctx context.Context, txQueries *queries.Queries) error,
 ) error {
-	querier := queries.New(db)
-	tx, err := db.BeginTx(ctx, opts)
+	tx, querier, err := runInTxCommon(ctx, db, opts)
 	if err != nil {
 		return err
 	}
 	var done bool
 	defer func() {
 		if !done {
 			_ = tx.Rollback()
 		}
 	}()
-	if err := fn(ctx, querier.WithTx(tx)); err != nil {
+	if err := fn(ctx, querier); err != nil {
 		return err
 	}
 	done = true
 	return tx.Commit()
 }

 func RunInTxWithResult[T any](
 	ctx context.Context,
 	db *sql.DB,
 	opts *sql.TxOptions,
 	fn func(ctx context.Context, txQueries *queries.Queries) (T, error),
 ) (T, error) {
-	querier := queries.New(db)
-	tx, err := db.BeginTx(ctx, opts)
+	tx, querier, err := runInTxCommon(ctx, db, opts)
 	if err != nil {
 		var zero T
 		return zero, err
 	}
 	var done bool
 	defer func() {
 		if !done {
 			_ = tx.Rollback()
 		}
 	}()
-	result, err := fn(ctx, querier.WithTx(tx))
+	result, err := fn(ctx, querier)
 	if err != nil {
 		var zero T
 		return zero, err
 	}
 	done = true
 	if err := tx.Commit(); err != nil {
 		var zero T
 		return zero, err
 	}
 	return result, nil
 }
🛑 Comments failed to post (4)
pkg/db/queries.sql (2)
190-199: 🛠️ Refactor suggestion

Handle potential overflow in SUM calculation.

The GetUnsettledUsageForPayerOriginator statement should handle potential overflow when summing large spend_picodollars values. Apply this diff to add overflow protection:

 SELECT
-    SUM(spend_picodollars) AS total_spend_picodollars
+    COALESCE(SUM(spend_picodollars::NUMERIC), 0)::BIGINT AS total_spend_picodollars
 FROM
     unsettled_usage
 WHERE
     payer_id = @payer_id
     AND originator_id = @originator_id
     AND minutes_since_epoch >= @start_minutes_since_epoch;
183-189: 🛠️ Refactor suggestion

Add validation for negative spend values.

The IncrementUnsettledUsage statement should prevent negative spend values to maintain data integrity. Apply this diff to clamp negative values:

 INSERT INTO unsettled_usage(payer_id, originator_id, minutes_since_epoch, spend_picodollars)
-    VALUES (@payer_id, @originator_id, @minutes_since_epoch, @spend_picodollars)
+    VALUES (@payer_id, @originator_id, @minutes_since_epoch,
+        CASE
+            WHEN @spend_picodollars < 0 THEN 0
+            ELSE @spend_picodollars
+        END)
 ON CONFLICT (payer_id, originator_id, minutes_since_epoch)
     DO UPDATE SET
-        spend_picodollars = unsettled_usage.spend_picodollars + @spend_picodollars;
+        spend_picodollars = GREATEST(0, unsettled_usage.spend_picodollars + @spend_picodollars);
pkg/db/queries_test.go (2)
163-208: 🛠️ Refactor suggestion

Add test cases for edge cases in TestIncrementUnsettledUsage.

The test should cover additional edge cases:
- Negative spend values
- Overflow scenarios
- Zero spend values

Suggested additions at the top of the test, before the original test cases:

	func TestIncrementUnsettledUsage(t *testing.T) {
		ctx := context.Background()
		db, _, cleanup := testutils.NewDB(t, ctx)
		defer cleanup()
		querier := queries.New(db)
		payerId := testutils.RandomInt32()
		originatorId := testutils.RandomInt32()
		minutesSinceEpoch := utils.MinutesSinceEpochNow()

		// Test negative spend
		require.NoError(t, querier.IncrementUnsettledUsage(ctx, queries.IncrementUnsettledUsageParams{
			PayerID:           payerId,
			OriginatorID:      originatorId,
			MinutesSinceEpoch: minutesSinceEpoch,
			SpendPicodollars:  -50,
		}))
		unsettledUsage, err := querier.GetUnsettledUsageForPayerOriginator(
			ctx,
			queries.GetUnsettledUsageForPayerOriginatorParams{
				PayerID:                payerId,
				OriginatorID:           originatorId,
				StartMinutesSinceEpoch: 0,
			},
		)
		require.NoError(t, err)
		require.Equal(t, unsettledUsage, int64(0), "Negative spend should be capped at 0")

		// Test zero spend
		require.NoError(t, querier.IncrementUnsettledUsage(ctx, queries.IncrementUnsettledUsageParams{
			PayerID:           payerId,
			OriginatorID:      originatorId,
			MinutesSinceEpoch: minutesSinceEpoch,
			SpendPicodollars:  0,
		}))
		unsettledUsage, err = querier.GetUnsettledUsageForPayerOriginator(
			ctx,
			queries.GetUnsettledUsageForPayerOriginatorParams{
				PayerID:                payerId,
				OriginatorID:           originatorId,
				StartMinutesSinceEpoch: 0,
			},
		)
		require.NoError(t, err)
		require.Equal(t, unsettledUsage, int64(0), "Zero spend should not affect total")

		// Test overflow
		require.NoError(t, querier.IncrementUnsettledUsage(ctx, queries.IncrementUnsettledUsageParams{
			PayerID:           payerId,
			OriginatorID:      originatorId,
			MinutesSinceEpoch: minutesSinceEpoch,
			SpendPicodollars:  math.MaxInt64 - 50,
		}))
		require.NoError(t, querier.IncrementUnsettledUsage(ctx, queries.IncrementUnsettledUsageParams{
			PayerID:           payerId,
			OriginatorID:      originatorId,
			MinutesSinceEpoch: minutesSinceEpoch,
			SpendPicodollars:  100,
		}))
		unsettledUsage, err = querier.GetUnsettledUsageForPayerOriginator(
			ctx,
			queries.GetUnsettledUsageForPayerOriginatorParams{
				PayerID:                payerId,
				OriginatorID:           originatorId,
				StartMinutesSinceEpoch: 0,
			},
		)
		require.NoError(t, err)
		require.Equal(t, unsettledUsage, math.MaxInt64, "Overflow should be capped at MaxInt64")

		// Original test cases...
	}
210-256: 🛠️ Refactor suggestion

Add test cases for edge cases in TestGetUnsettledUsage.

The test should cover additional edge cases:
- Empty result when no records exist
- Empty result when all records are filtered out
- Large sums that could overflow

Suggested additions at the top of the test, before the original test cases:

	func TestGetUnsettledUsage(t *testing.T) {
		ctx := context.Background()
		db, _, cleanup := testutils.NewDB(t, ctx)
		defer cleanup()
		querier := queries.New(db)
		payerId := testutils.RandomInt32()
		originatorId := testutils.RandomInt32()

		// Test empty result when no records exist
		unsettledUsage, err := querier.GetUnsettledUsageForPayerOriginator(
			ctx,
			queries.GetUnsettledUsageForPayerOriginatorParams{
				PayerID:                payerId,
				OriginatorID:           originatorId,
				StartMinutesSinceEpoch: 0,
			},
		)
		require.NoError(t, err)
		require.Equal(t, unsettledUsage, int64(0), "Empty result should return 0")

		// Test empty result when all records are filtered out
		addUsage(1, 100)
		unsettledUsage, err = querier.GetUnsettledUsageForPayerOriginator(
			ctx,
			queries.GetUnsettledUsageForPayerOriginatorParams{
				PayerID:                payerId,
				OriginatorID:           originatorId,
				StartMinutesSinceEpoch: 2,
			},
		)
		require.NoError(t, err)
		require.Equal(t, unsettledUsage, int64(0), "Filtered out records should return 0")

		// Test large sums
		addUsage(3, math.MaxInt64/2)
		addUsage(4, math.MaxInt64/2)
		unsettledUsage, err = querier.GetUnsettledUsageForPayerOriginator(
			ctx,
			queries.GetUnsettledUsageForPayerOriginatorParams{
				PayerID:                payerId,
				OriginatorID:           originatorId,
				StartMinutesSinceEpoch: 3,
			},
		)
		require.NoError(t, err)
		require.Equal(t, unsettledUsage, math.MaxInt64, "Large sums should be capped at MaxInt64")

		// Original test cases...
	}

Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 1
🧹 Nitpick comments (3)
pkg/db/unsettledUsage_test.go (2)
13-58: Enhance test coverage with edge cases and error scenarios.

While the test verifies basic functionality, consider adding test cases for:
- Negative spend values
- Zero spend values
- Maximum int64 boundary cases
- Error scenarios (e.g., database errors)
60-106: Add test cases for time-based filtering edge cases.

Consider adding test cases for:
- Empty results (no usage in time range)
- Large time ranges
- Time ranges with no data
- Boundary conditions (e.g., exactly at start time)
pkg/db/queries/queries.sql.go (1)
223-232: Consider adding an index for performance optimization.

The query filters on (payer_id, originator_id, minutes_since_epoch). Consider adding a composite index to improve query performance, especially as the table grows.
Force-pushed from d27de8f to db408a2.
Actionable comments posted: 3
🧹 Nitpick comments (1)
pkg/db/queries/queries.sql.go (1)
223-232: Review the query filtering in getUnsettledUsageForPayerOriginator.

The query filters on minutes_since_epoch >= $3, which means it will sum all records from the start time onwards without an upper bound. This could potentially return more data than intended if you're looking for usage within a specific time window. Consider whether you need an upper-bound constraint like AND minutes_since_epoch < $4 to limit the returned data to a specific time range.
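For illustration, a later revision of this PR adds both a lower and an upper bound to GetPayerUnsettledUsage. A caller-side sketch of querying an explicit window, assuming the generated parameter struct uses int64 bounds and the module path github.com/xmtp/xmtpd (both assumptions):

```go
package example

import (
	"context"
	"time"

	"github.com/xmtp/xmtpd/pkg/db/queries"
	"github.com/xmtp/xmtpd/pkg/utils"
)

// unsettledUsageInWindow sums a payer's unsettled usage for an explicit time
// window. Passing both bounds avoids the unbounded "from start time onwards"
// scan this comment is concerned about.
func unsettledUsageInWindow(
	ctx context.Context,
	querier *queries.Queries,
	payerID int32,
	start, end time.Time,
) (int64, error) {
	return querier.GetPayerUnsettledUsage(ctx, queries.GetPayerUnsettledUsageParams{
		PayerID:             payerID,
		MinutesSinceEpochGt: int64(utils.MinutesSinceEpoch(start)),
		MinutesSinceEpochLt: int64(utils.MinutesSinceEpoch(end)),
	})
}
```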
🔇 Additional comments (1)
pkg/db/queries/queries.sql.go (1)
247-253: Add validation and overflow protection.

The query lacks:
- Validation for negative spend values
- Protection against int64 overflow during addition
Consider adding a CHECK constraint in the table definition and using a safe addition operation in the UPDATE clause.
Example constraint:
CHECK (spend_picodollars >= 0)

Example safe addition:

-    spend_picodollars = unsettled_usage.spend_picodollars + $4
+    spend_picodollars = CASE
+        WHEN unsettled_usage.spend_picodollars + $4 < unsettled_usage.spend_picodollars THEN NULL -- overflow
+        ELSE unsettled_usage.spend_picodollars + $4
+    END
Force-pushed from db408a2 to 6f40deb.
Actionable comments posted: 1
♻️ Duplicate comments (2)
pkg/db/queries.sql (1)
183-188: ⚠️ Potential issue

Validate input and protect against overflow in the SQL query.

The IncrementUnsettledUsage query lacks:
- Validation for negative spend values
- Protection against integer overflow during addition
Consider adding a CHECK constraint in the table definition and using a safe addition operation in the UPDATE clause to prevent overflow:
 -- name: IncrementUnsettledUsage :exec
 INSERT INTO unsettled_usage(payer_id, originator_id, minutes_since_epoch, spend_picodollars)
     VALUES (@payer_id, @originator_id, @minutes_since_epoch, @spend_picodollars)
 ON CONFLICT (payer_id, originator_id, minutes_since_epoch)
     DO UPDATE SET
-        spend_picodollars = unsettled_usage.spend_picodollars + @spend_picodollars;
+        spend_picodollars = CASE
+            WHEN unsettled_usage.spend_picodollars + @spend_picodollars < unsettled_usage.spend_picodollars THEN NULL -- overflow
+            ELSE unsettled_usage.spend_picodollars + @spend_picodollars
+        END;

pkg/db/queries/queries.sql.go (1)
264-272: 🛠️ Refactor suggestion

Add parameter validation for IncrementUnsettledUsage.
The function doesn't validate its input parameters before executing the SQL query. This could lead to runtime errors or invalid data being stored.
Add validation for:
- Check that SpendPicodollars is non-negative
- Ensure MinutesSinceEpoch is valid (within expected range)

 func (q *Queries) IncrementUnsettledUsage(ctx context.Context, arg IncrementUnsettledUsageParams) error {
+	if arg.SpendPicodollars < 0 {
+		return fmt.Errorf("negative spend not allowed: %d", arg.SpendPicodollars)
+	}
+
 	_, err := q.db.ExecContext(ctx, incrementUnsettledUsage,
 		arg.PayerID,
 		arg.OriginatorID,
 		arg.MinutesSinceEpoch,
 		arg.SpendPicodollars,
 	)
 	return err
 }
🧹 Nitpick comments (2)
pkg/db/queries/queries.sql.go (2)
257-262
Type inconsistency between parameters.

The
MinutesSinceEpoch
field is defined asint32
inIncrementUnsettledUsageParams
but the corresponding field inGetPayerUnsettledUsageParams
usesint64
. This inconsistency could lead to issues with time ranges. For consistency and future-proofing, consider changing the type:
 type IncrementUnsettledUsageParams struct {
 	PayerID           int32
 	OriginatorID      int32
-	MinutesSinceEpoch int32
+	MinutesSinceEpoch int64
 	SpendPicodollars  int64
 }
236-240: Add validation for time range parameters.

The GetPayerUnsettledUsageParams struct doesn't validate the time range parameters. Invalid combinations (where the start time is after the end time) could lead to empty results without an error. Add a validation method to ensure the time parameters make logical sense:

func (p *GetPayerUnsettledUsageParams) Validate() error {
	// If both parameters are set, ensure they form a valid range
	if p.MinutesSinceEpochGt > 0 && p.MinutesSinceEpochLt > 0 {
		if p.MinutesSinceEpochGt >= p.MinutesSinceEpochLt {
			return fmt.Errorf("invalid time range: start (%d) must be before end (%d)",
				p.MinutesSinceEpochGt, p.MinutesSinceEpochLt)
		}
	}
	return nil
}

Then call this in the GetPayerUnsettledUsage function.
🔇 Additional comments (1)
pkg/db/queries.sql (1)
190-200: Handle NULL result in GetPayerUnsettledUsage.

This query doesn't handle the case when no records match the query criteria. PostgreSQL's SUM() will return NULL in this case, but the corresponding Go function assumes a valid int64 is always returned. Consider explicitly handling the NULL result in the Go code by using a sql.NullInt64 type or adding a COALESCE function in the SQL:

 SELECT
-    SUM(spend_picodollars) AS total_spend_picodollars
+    COALESCE(SUM(spend_picodollars), 0) AS total_spend_picodollars
 FROM
     unsettled_usage
 WHERE
     payer_id = @payer_id
     AND (@minutes_since_epoch_gt::BIGINT = 0
         OR minutes_since_epoch > @minutes_since_epoch_gt::BIGINT)
     AND (@minutes_since_epoch_lt::BIGINT = 0
         OR minutes_since_epoch < @minutes_since_epoch_lt::BIGINT);
Force-pushed from 6f40deb to 4d5834b.
Actionable comments posted: 5
🧹 Nitpick comments (1)
pkg/sync/syncWorker.go (1)
450-470: Offer assistance for TODO fee calculation.
There's a TODO comment indicating a plan to compute fees independently. If needed, I can help create a new ticket or draft logic for fee determination.
🧰 Additional context used
🪛 GitHub Check: Lint-Go
pkg/sync/syncWorker.go
[failure] 36-36: field feesCalculator is unused (unused)
🔇 Additional comments (16)
pkg/envelopes/originator.go (2)
5-5: Import usage looks good.

No issues found regarding the new time import.
66-69: Confirm potential edge cases with large nanosecond values.

If OriginatorNs() returns a large nanosecond-based timestamp, calling NsToDate() with time.Unix(0, ns) may lead to undefined behavior or a panic. Please verify that OriginatorNs() values remain within the valid range.

pkg/utils/time.go (1)
5-9: Watch for possible overflow when converting to int32.

durationSinceEpoch.Minutes() can exceed the range of an int32 for timestamps far from the UNIX epoch. Ensure that your usage scenarios do not require handling extremely large or negative timestamps.

pkg/sync/syncWorker.go (3)
14-14: Import addition acknowledged.

No immediate concerns, provided the fees package is required for future usage.
21-21: Import addition acknowledged.
No issues found.
447-449: Re-confirm valid nanosecond values with NsToDate usage.

As previously discussed in NsToDate(), if env.OriginatorNs() is a full nanosecond-based UNIX timestamp, you may encounter range issues. Ensure env.OriginatorNs() stays within valid bounds.

pkg/db/tx.go (1)
38-72: Transaction helper looks good.

This generic approach to return a typed result while rolling back on errors is correctly implemented. It adheres to the standard transaction pattern, ensuring rollback on error and proper commit on success.
pkg/db/gatewayEnvelope.go (1)
1-41: Transactional insert & increment is well-structured.

The code properly checks if a row was inserted before incrementing usage, avoiding unnecessary updates. The transaction boundary is clearly defined, and errors are handled at each step. This design should reliably maintain data integrity. A sketch of this pattern follows.
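A simplified sketch of the insert-then-conditionally-increment pattern described above. The helper name, the InsertGatewayEnvelope call, and the parameter structs are illustrative placeholders, not the PR's exact signatures; only RunInTxWithResult and IncrementUnsettledUsage are taken from the PR.

```go
package example

import (
	"context"
	"database/sql"

	"github.com/xmtp/xmtpd/pkg/db"
	"github.com/xmtp/xmtpd/pkg/db/queries"
)

// insertEnvelopeAndTrackSpend inserts a gateway envelope and, only if a new
// row was actually created, increments the payer's unsettled usage. Both
// statements run in one transaction so they succeed or fail together.
func insertEnvelopeAndTrackSpend(
	ctx context.Context,
	database *sql.DB,
	envelopeParams queries.InsertGatewayEnvelopeParams, // assumed sqlc-generated type
	usageParams queries.IncrementUnsettledUsageParams,
) (int64, error) {
	return db.RunInTxWithResult(
		ctx,
		database,
		&sql.TxOptions{},
		func(ctx context.Context, txQueries *queries.Queries) (int64, error) {
			inserted, err := txQueries.InsertGatewayEnvelope(ctx, envelopeParams) // assumed :execrows query
			if err != nil {
				return 0, err
			}
			if inserted == 0 {
				// Envelope already existed: do not double-count spend.
				return 0, nil
			}
			if err := txQueries.IncrementUnsettledUsage(ctx, usageParams); err != nil {
				return 0, err
			}
			return inserted, nil
		},
	)
}
```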
pkg/db/gatewayEnvelope_test.go (2)
38-67: Single record insertion test is clear and comprehensive.

The test verifies that one record is inserted and that unsettled usage increases by the expected amount.
68-86: Payer existence check test is valid.

Verifying that an error occurs when the payer does not exist ensures data integrity.
pkg/api/message/publishWorker.go (3)
151-151: Clearly separates the NODE_ID usage for better readability.

Extracting the Node ID into a separate variable improves readability and avoids redundant calls to p.registrant.NodeID().
154-171: Transaction management for atomic operation of envelope insertion and unsettled usage tracking.

The change properly implements unsettled usage tracking alongside gateway envelope insertion, satisfying the PR objective of tracking spending with minute-level granularity. The function now:
- Uses a transaction-based approach via InsertGatewayEnvelopeAndIncrementUnsettledUsage
- Provides all required parameters for unsettled usage tracking
- Maintains the same error handling logic
168-169
:✅ Verification successful
Verify time conversion and spend calculation.
Make sure that:
utils.MinutesSinceEpoch
returns a value that fits within an int32 (the parameter type inIncrementUnsettledUsageParams
)- The sum of
baseFee
andcongestionFee
doesn't overflow int64
🏁 Script executed:
#!/bin/bash # Check the return type of MinutesSinceEpoch rg -A 5 "func MinutesSinceEpoch" pkg/utils/ # Check the parameter type in IncrementUnsettledUsageParams rg -A 6 "type IncrementUnsettledUsageParams struct" pkg/db/Length of output: 972
Time conversion and spend calculation verified.
After reviewing the code and verifying via the provided shell scripts:
- Time Conversion:
utils.MinutesSinceEpoch
returns anint32
, which exactly matches theMinutesSinceEpoch
field inIncrementUnsettledUsageParams
.- Spend Calculation:
- Both
baseFee
andcongestionFee
are explicitly cast toint64
before addition, ensuring that the summed result is of typeint64
.- Although unlikely, please ensure that the values for
baseFee
andcongestionFee
remain within ranges that prevent anint64
overflow.No corrective actions are required.
pkg/db/queries/queries.sql.go (3)
242-247
: Handle NULL result in GetPayerUnsettledUsage implementation.The function doesn't explicitly handle the case when no records match the query criteria. PostgreSQL's SUM() will return NULL in this case, but the code assumes a valid int64 is always returned.
Add explicit NULL handling:
func (q *Queries) GetPayerUnsettledUsage(ctx context.Context, arg GetPayerUnsettledUsageParams) (int64, error) { row := q.db.QueryRowContext(ctx, getPayerUnsettledUsage, arg.PayerID, arg.MinutesSinceEpochGt, arg.MinutesSinceEpochLt) - var total_spend_picodollars int64 + var total_spend_picodollars sql.NullInt64 err := row.Scan(&total_spend_picodollars) - return total_spend_picodollars, err + if err != nil { + return 0, err + } + if !total_spend_picodollars.Valid { + return 0, nil + } + return total_spend_picodollars.Int64, nil }
249-255
: Add validation and overflow protection.The query lacks:
- Validation for negative spend values
- Protection against int64 overflow during addition
Consider adding a CHECK constraint in the table definition and using a safe addition operation in the UPDATE clause.
Example constraint:
CHECK (spend_picodollars >= 0)Example safe addition:
- spend_picodollars = unsettled_usage.spend_picodollars + $4 + spend_picodollars = CASE + WHEN unsettled_usage.spend_picodollars + $4 < unsettled_usage.spend_picodollars THEN NULL -- overflow + ELSE unsettled_usage.spend_picodollars + $4 + END
264-272
: Add parameter validation toIncrementUnsettledUsage
.The function doesn't validate its input parameters before executing the SQL query. This could lead to runtime errors or invalid data being stored.
Add validation for:
- Check that
SpendPicodollars
is non-negative- Ensure
MinutesSinceEpoch
is valid (within expected range)func (q *Queries) IncrementUnsettledUsage(ctx context.Context, arg IncrementUnsettledUsageParams) error { + if arg.SpendPicodollars < 0 { + return fmt.Errorf("negative spend not allowed: %d", arg.SpendPicodollars) + } + _, err := q.db.ExecContext(ctx, incrementUnsettledUsage, arg.PayerID, arg.OriginatorID, arg.MinutesSinceEpoch, arg.SpendPicodollars, ) return err }
Force-pushed from b21cb0d to 9a2fafe.
Force-pushed from 3d245df to 97f3fdc.
Actionable comments posted: 0
🧹 Nitpick comments (1)
pkg/db/gatewayEnvelope.go (1)
10-41: Good implementation of transaction-based insert with unsettled usage tracking.

The function correctly implements atomicity by using transactions to ensure both the envelope insertion and unsettled usage increment succeed or fail together. The early return when an envelope already exists prevents duplicate accounting.

Minor suggestion: consider adding more context in error messages to help with debugging.

-	return 0, err
+	return 0, fmt.Errorf("failed to insert gateway envelope: %w", err)

-	return 0, err
+	return 0, fmt.Errorf("failed to increment unsettled usage: %w", err)
🔇 Additional comments (1)
pkg/db/tx.go (1)
38-72: Well-implemented generic transaction with result function.

This is a good implementation of a generic transaction handler that can return a result. It follows the same pattern as the existing RunInTx function but adds the ability to return a typed result.

A few observations:
- Proper use of generics with T any
- Consistent error handling with zero value returns
- Transaction rollback is properly handled via deferred function
- Code maintains the same structure as the original RunInTx
Force-pushed from ba33fad to 87892cb.
	PayerID:           payerId,
	OriginatorID:      originatorID,
	MinutesSinceEpoch: utils.MinutesSinceEpoch(originatorTime),
	// TODO:(nm) Independently calculate fees
This is obviously wrong. Each node should independently verify the fees instead of trusting the originator. But we will need to wait until we have a consistent payload length to do that, which requires proto updates.
Force-pushed from 9a2fafe to 4b7a5dc.
Force-pushed from 87892cb to a42eda2.
Force-pushed from 4b7a5dc to d977442.
Force-pushed from a42eda2 to 3e0878f.
I don't quite understand what the time calculation is doing (minutesSinceEpoch), could you explain that to me?
There's more detail in the XIP here, but the tl;dr is that we pre-aggregate all the unsettled payer usage in 1-minute buckets. Basically, I'm trying to avoid ever having to do a sum of the messaging fees across individual messages, since there might be hundreds of thousands. The most rows we'd have to aggregate to get an originator's unsettled usage for 12 hours would be 720 (assuming the payer sent a message once in every minute).
Hopefully this calculation is cheap enough that we can use it in the hot path of message publishing to see if the Payer has a sufficient balance to send a given message.
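To make the bucket math above concrete, here is a small sketch of the scheme under stated assumptions: it uses the MinutesSinceEpoch helper from this PR, and the window size and printout are illustrative only.

```go
package main

import (
	"fmt"
	"time"

	"github.com/xmtp/xmtpd/pkg/utils"
)

func main() {
	// Every message's spend lands in a 1-minute bucket keyed by minutes since the epoch.
	now := time.Now()
	bucket := utils.MinutesSinceEpoch(now)

	// Summing a payer/originator pair's unsettled usage over the last 12 hours
	// touches at most 12 * 60 = 720 rows, even if the payer sent hundreds of
	// thousands of messages in that window.
	const windowMinutes = 12 * 60
	start := bucket - windowMinutes

	fmt.Printf("current bucket: %d, window: (%d, %d], max rows aggregated: %d\n",
		start, start, bucket, windowMinutes)
}
```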
Force-pushed from d977442 to 0078ca4.
Force-pushed from 3e0878f to e53e8d3.
Force-pushed from e53e8d3 to 58762d6.
Actionable comments posted: 1
🧹 Nitpick comments (1)
pkg/db/tx.go (1)
38-65: Consider standardizing transaction handling approaches.

The implementation follows the suggested approach from the past review comment, but it differs from the existing RunInTx function in how transactions are handled:
- Error handling in defer: RunInTx uses a done flag, while RunInTxWithResult checks err != nil
- Query object creation: RunInTx creates a querier first, while RunInTxWithResult creates it inline

Consider standardizing both functions to use the same approach for consistency and to minimize potential maintenance issues. Also, adding documentation to explain the purpose and usage of this function would be beneficial.

+// RunInTxWithResult executes the given function within a database transaction,
+// returning both a result of type T and an error. The transaction is committed if
+// the function executes successfully, or rolled back if an error occurs.
 func RunInTxWithResult[T any](
 	ctx context.Context,
 	db *sql.DB,
 	opts *sql.TxOptions,
 	fn func(ctx context.Context, txQueries *queries.Queries) (T, error),
 ) (result T, err error) {
+	querier := queries.New(db)
 	tx, err := db.BeginTx(ctx, opts)
 	if err != nil {
 		return result, err
 	}
+	var done bool
 	defer func() {
-		if err != nil {
+		if !done {
 			_ = tx.Rollback()
 		}
 	}()
-	result, err = fn(ctx, queries.New(db).WithTx(tx))
+	result, err = fn(ctx, querier.WithTx(tx))
 	if err != nil {
 		return result, err
 	}
+	done = true
 	if err = tx.Commit(); err != nil {
 		return result, err
 	}
 	return result, nil
 }
🔇 Additional comments (4)
pkg/db/queries/queries.sql.go (4)
242-247
: Handle NULL result in GetPayerUnsettledUsage function.The function doesn't explicitly handle the case when no records match the query criteria. While the SQL query uses COALESCE to return 0 for empty results, it's better to handle this at the code level for robustness.
Add explicit NULL handling:
func (q *Queries) GetPayerUnsettledUsage(ctx context.Context, arg GetPayerUnsettledUsageParams) (int64, error) { row := q.db.QueryRowContext(ctx, getPayerUnsettledUsage, arg.PayerID, arg.MinutesSinceEpochGt, arg.MinutesSinceEpochLt) - var total_spend_picodollars int64 + var total_spend_picodollars sql.NullInt64 err := row.Scan(&total_spend_picodollars) - return total_spend_picodollars, err + if err != nil { + return 0, err + } + if !total_spend_picodollars.Valid { + return 0, nil + } + return total_spend_picodollars.Int64, nil }
249-255
: Add validation and overflow protection in increment operation.The SQL query lacks protection against integer overflow during addition, which could lead to data corruption or unexpected behavior.
Example safe addition:
- spend_picodollars = unsettled_usage.spend_picodollars + $4 + spend_picodollars = CASE + WHEN unsettled_usage.spend_picodollars + $4 < unsettled_usage.spend_picodollars THEN NULL -- overflow + ELSE unsettled_usage.spend_picodollars + $4 + END
257-262
: Type mismatch in MinutesSinceEpoch parameter.There's an inconsistency in the type used for
MinutesSinceEpoch
- it's defined asint32
here but asint64
inGetPayerUnsettledUsageParams
. This could lead to data truncation if the epoch time in minutes exceeds the int32 range.type IncrementUnsettledUsageParams struct { PayerID int32 OriginatorID int32 - MinutesSinceEpoch int32 + MinutesSinceEpoch int64 SpendPicodollars int64 }
264-272
: Add parameter validation to IncrementUnsettledUsage.The function doesn't validate its input parameters before executing the SQL query, which could lead to unexpected behavior or invalid data being stored.
func (q *Queries) IncrementUnsettledUsage(ctx context.Context, arg IncrementUnsettledUsageParams) error { + if arg.SpendPicodollars < 0 { + return fmt.Errorf("negative spend not allowed: %d", arg.SpendPicodollars) + } + _, err := q.db.ExecContext(ctx, incrementUnsettledUsage, arg.PayerID, arg.OriginatorID, arg.MinutesSinceEpoch, arg.SpendPicodollars, ) return err }
const getPayerUnsettledUsage = `-- name: GetPayerUnsettledUsage :one
SELECT
    COALESCE(SUM(spend_picodollars), 0)::BIGINT AS total_spend_picodollars
FROM
    unsettled_usage
WHERE
    payer_id = $1
    AND ($2::BIGINT = 0
        OR minutes_since_epoch > $2::BIGINT)
    AND ($3::BIGINT = 0
        OR minutes_since_epoch < $3::BIGINT)
`
💡 Verification agent
🧩 Analysis chain
Consider adding a test for edge case where both time filters are 0.
The SQL query has special handling for when time filters are set to 0, treating them as no filter. While this pattern works, it would be good to ensure this behavior is tested.
🏁 Script executed:
#!/bin/bash
# Check if there are tests for the edge case where both time filters are 0
rg -A 10 "GetPayerUnsettledUsage.*MinutesSinceEpochGt: 0.*MinutesSinceEpochLt: 0" || echo "Test for edge case not found"
Length of output: 150
Action Required: Add Test for Time Filter Zero Edge Case
It appears that there isn't any test covering the scenario where both time filters are set to 0—an important edge case to ensure the SQL query correctly treats 0 as "no filter." Please add a dedicated test case to verify the behavior of the SQL query when both time filters are 0.
TL;DR

What changed?
- unsettled_usage table to track spending between payers and originators
- MinutesSinceEpoch utility functions for time tracking
- RunInTxWithResult for transaction handling with return values

How to test?
New tests in queries_test.go:
- TestIncrementUnsettledUsage: verifies incremental spending updates
- TestGetUnsettledUsage: validates retrieval of usage data with time filtering

Why make this change?
This change enables tracking of real-time usage and spending between payers and originators at a granular level, which is essential for accurate billing and usage monitoring in the system. The minute-level tracking allows for precise usage accounting and settlement processes.
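As a rough illustration of how this could feed the publish hot path mentioned earlier, here is a hedged sketch. Only GetPayerUnsettledUsage comes from this PR; the balance value and the gating helper are hypothetical and would come from the settlement layer.

```go
package example

import (
	"context"
	"fmt"

	"github.com/xmtp/xmtpd/pkg/db/queries"
)

// canPayerPublish is a hypothetical gate: it compares a payer's known balance
// (retrieved elsewhere) against their unsettled usage plus the fee of the
// message about to be published.
func canPayerPublish(
	ctx context.Context,
	querier *queries.Queries,
	payerID int32,
	balancePicodollars int64, // assumed to come from the settlement layer
	messageFeePicodollars int64,
) (bool, error) {
	unsettled, err := querier.GetPayerUnsettledUsage(ctx, queries.GetPayerUnsettledUsageParams{
		PayerID: payerID, // zero time bounds: consider all unsettled usage
	})
	if err != nil {
		return false, fmt.Errorf("fetching unsettled usage: %w", err)
	}
	return unsettled+messageFeePicodollars <= balancePicodollars, nil
}
```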