
Conversation


@kun6fup4nd4 (Contributor) commented Sep 16, 2025

PR Type

Bug fix


Description

  • Corrects group indexing logic for factv2 mode in the transaction queue (a sketch of the index lookup change follows this list)

  • Fixes loop bounds in getCorrespondingNodes for v2 logic

  • Ensures sender/receiver group calculations use the correct group arrays

  • Prevents gaps and out-of-bounds errors in node correspondence
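
For orientation, here is a minimal TypeScript sketch of the kind of index lookup this fix changes. It is not code from the repository: the GroupNode shape and the example group contents are assumptions for illustration. The point is that in factv2 mode the sender's position is now resolved against executionGroup rather than transactionGroup, so the resulting index refers to a position in the group array the correspondence calculation actually uses.

```ts
// Minimal sketch of the index lookup affected by the fix (types and data are assumed).
interface GroupNode { id: string }

const selfId = 'nodeC'
const transactionGroup: GroupNode[] = [{ id: 'nodeA' }, { id: 'nodeB' }, { id: 'nodeC' }, { id: 'nodeD' }]
const executionGroup: GroupNode[] = [{ id: 'nodeC' }, { id: 'nodeD' }] // assumed: a smaller, differently ordered group

// Before the fix (factv2): the index was taken from the transaction group
const senderIndexOld = transactionGroup.findIndex((node) => node.id === selfId) // 2

// After the fix (factv2): the index is taken from the execution group
const senderIndexNew = executionGroup.findIndex((node) => node.id === selfId) // 0

console.log({ senderIndexOld, senderIndexNew })
```

If the two groups differ in size or ordering, the two lookups return different indices for the same node, which is exactly the mismatch the fix is meant to remove.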


Changes walkthrough 📝

Relevant files

Bug fix

TransactionQueue.ts: Fix group array usage and indexing for factv2 mode
src/state-manager/TransactionQueue.ts

  • Uses executionGroup instead of transactionGroup for the sender index in factv2 mode
  • Updates the senderNodeIndex calculation to use executionGroup in factv2
  • Ensures the correct group arrays are used for node correspondence
  • +2/-1

fastAggregatedCorrespondingTell.ts: Fix v2 loop bounds in the getCorrespondingNodes function
src/utils/fastAggregatedCorrespondingTell.ts

  • Adds a step counter and maxSteps to the v2 loop to prevent infinite loops and coverage gaps
  • Adjusts the v2 loop condition to use steps < maxSteps
  • Increments steps within the loop for the v2 logic
  • Improves reliability of the node correspondence calculation (see the worked example under "Loop Termination" below)
  • +4/-1

Need help?
  • Type /help how to ... in the comments thread for any questions about PR-Agent usage.
  • Check out the documentation for more information.
@github-actions commented

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🏅 Score: 85
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Logic Consistency

The changes to group indexing for factv2 mode alter which group arrays are used for sender and receiver calculations. Ensure that these changes maintain logical consistency and do not introduce off-by-one or group-mismatch errors, especially in edge cases.

if (configContext.p2p.factv2) senderIndex = queueEntry.executionGroup.findIndex((node) => node.id === Self.id)
Loop Termination

The new loop for v2 mode in getCorrespondingNodes uses a calculated maxSteps. Confirm that this prevents infinite loops and correctly covers all intended nodes, particularly when receiverGroupSize is not a multiple of sendGroupSize.

let steps = 0
const maxSteps = v2 ? Math.ceil(receiverGroupSize / sendGroupSize) : Infinity
while (v2 ? steps < maxSteps : targetNumber < receiverGroupSize) {
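
To make the bound concrete, here is a small standalone sketch showing that maxSteps = Math.ceil(receiverGroupSize / sendGroupSize) lets a send group cover a receiver group whose size is not an exact multiple. The striding scheme is an assumption for illustration and does not reproduce the actual getCorrespondingNodes logic; only the variable names mirror the excerpt above.

```ts
// Standalone sketch: how many receiver slots each sender visits when the v2 loop
// is bounded by maxSteps = ceil(receiverGroupSize / sendGroupSize).
// Illustrative only; the real function computes its destination indices differently.
function coveredReceivers(senderIndex: number, sendGroupSize: number, receiverGroupSize: number): number[] {
  const maxSteps = Math.ceil(receiverGroupSize / sendGroupSize)
  const destinations: number[] = []
  let steps = 0
  // assumed scheme: start at the sender's own offset and stride by sendGroupSize, wrapping into the receiver group
  let index = senderIndex
  while (steps < maxSteps) {
    destinations.push(index % receiverGroupSize)
    index += sendGroupSize
    steps++
  }
  return destinations
}

console.log(coveredReceivers(0, 4, 10)) // [0, 4, 8]
console.log(coveredReceivers(3, 4, 10)) // [3, 7, 1]
```

With sendGroupSize = 4 and receiverGroupSize = 10, maxSteps is ceil(10 / 4) = 3, and the four senders together visit every receiver index 0..9, producing a couple of duplicates rather than gaps. That is the behavior the reviewer is asked to confirm for the non-multiple case.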

Comment on lines +55 to 60

let steps = 0
const maxSteps = v2 ? Math.ceil(receiverGroupSize / sendGroupSize) : Infinity
while (v2 ? steps < maxSteps : targetNumber < receiverGroupSize) {
  //send all payload to this node
  const destinationNode = wrappedIndex

Suggestion: The destinationNode is always set to wrappedIndex, which may cause duplicate or incorrect indices if wrappedIndex is not updated properly within the loop. Ensure that wrappedIndex is updated before pushing to destinationNodes to avoid incorrect node assignments, especially when v2 is true. [possible issue, importance: 6]

New proposed code:

 let steps = 0
 const maxSteps = v2 ? Math.ceil(receiverGroupSize / sendGroupSize) : Infinity
 while (v2 ? steps < maxSteps : targetNumber < receiverGroupSize) {
   //send all payload to this node
+  if (wrappedIndex >= transactionGroupSize) {
+    wrappedIndex = wrappedIndex - transactionGroupSize
+  }
   const destinationNode = wrappedIndex

   destinationNodes.push(destinationNode)
 }
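
A side note on the guard in the proposed code: if wrappedIndex could ever overshoot the end of the transaction group by more than one full group length, a modulo reduction is the more general form. The sketch below is illustrative only, under that assumption, and is not part of the proposed change.

```ts
// Illustrative alternative to the subtract-once wrap guard above.
// Assumed context: wrappedIndex is a non-negative index into a group of transactionGroupSize nodes.
function wrapIndex(index: number, transactionGroupSize: number): number {
  // The modulo also handles overshoot of more than one group length,
  // whereas `if (index >= size) index -= size` only corrects a single wrap.
  return index % transactionGroupSize
}

console.log(wrapIndex(7, 5))  // 2
console.log(wrapIndex(12, 5)) // 2 (the subtract-once guard would return 7 here)
```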

-if (configContext.p2p.factv2) senderIndex = queueEntry.transactionGroup.findIndex((node) => node.id === Self.id)
+if (configContext.p2p.factv2) senderIndex = queueEntry.executionGroup.findIndex((node) => node.id === Self.id)


Suggestion: Overwriting senderIndex here may conflict with the earlier assignment from wrappedIndex, potentially causing inconsistency if both conditions are true. Add a check to ensure only one assignment path is taken, or clarify precedence to avoid logic errors. [possible issue, importance: 7]

Suggested change

-if (configContext.p2p.factv2) senderIndex = queueEntry.executionGroup.findIndex((node) => node.id === Self.id)
+if (configContext.p2p.factv2 && wrappedIndex == null) {
+  senderIndex = queueEntry.executionGroup.findIndex((node) => node.id === Self.id)
+}

Comment on lines 5316 to 5320

if (configContext.p2p.factv2) {
  senderNodeIndex = queueEntry.executionGroup.findIndex((node) => node.id === senderNodeId)
  targetNodeIndex = queueEntry.transactionGroup.findIndex((node) => node.id === Self.id)
  targetEndIndex = targetEndIndex - 1
} else {

Suggestion: Assigning senderNodeIndex from executionGroup and targetNodeIndex from transactionGroup may cause mismatched indices if the groups are not aligned. Ensure both indices are derived from the same group to prevent logic errors in subsequent processing. [possible issue, importance: 8]

Suggested change

-if (configContext.p2p.factv2) {
-  senderNodeIndex = queueEntry.executionGroup.findIndex((node) => node.id === senderNodeId)
-  targetNodeIndex = queueEntry.transactionGroup.findIndex((node) => node.id === Self.id)
-  targetEndIndex = targetEndIndex - 1
-} else {
+if (configContext.p2p.factv2) {
+  senderNodeIndex = queueEntry.executionGroup.findIndex((node) => node.id === senderNodeId)
+  targetNodeIndex = queueEntry.executionGroup.findIndex((node) => node.id === Self.id)
+  targetEndIndex = queueEntry.executionGroup.length - 1
+} else {
 if (queueEntry.isSenderWrappedTxGroup[senderNodeId] != null) {
