Adding xmos granular lock #95
base: dev/granular-lock-review-bot
Conversation
xTaskPriorityInherit() is called inside a critical section from queue.c. This commit moves the critical section into xTaskPriorityInherit(). Co-authored-by: Sudeep Mohanty <[email protected]>
Changed xPreemptionDisable to be a count rather than a pdTRUE/pdFALSE flag. This allows nested calls to vTaskPreemptionEnable(), where a yield only occurs when xPreemptionDisable reaches 0. Co-authored-by: Sudeep Mohanty <[email protected]>
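A minimal sketch of the nesting behaviour this enables (illustrative only; vExampleNestedPreemptionDisable is a made-up caller and the counts in the comments describe the intent, not the PR's exact code):

#include "FreeRTOS.h"
#include "task.h"

/* Illustrative nesting of the counted preemption-disable API described above. */
void vExampleNestedPreemptionDisable( void )
{
    vTaskPreemptionDisable( NULL );      /* count 0 -> 1, preemption off for this task */
    {
        vTaskPreemptionDisable( NULL );  /* count 1 -> 2, still off                    */

        /* ... work that must not be preempted ... */

        vTaskPreemptionEnable( NULL );   /* count 2 -> 1, no yield yet                 */
    }
    vTaskPreemptionEnable( NULL );       /* count 1 -> 0, any pending yield runs       */
}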
Adds the required checks for granular locking port macros.
Port config:
- portUSING_GRANULAR_LOCKS to enable granular locks
- portCRITICAL_NESTING_IN_TCB should be disabled
Granular locking port macros:
- Spinlocks: portSPINLOCK_TYPE, portINIT_SPINLOCK( pxSpinlock ), portINIT_SPINLOCK_STATIC
- Locking: portGET_SPINLOCK(), portRELEASE_SPINLOCK()
Co-authored-by: Sudeep Mohanty <[email protected]>
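For reference, a hedged sketch of what a port might supply for these hooks; the struct layout, macro bodies, and the vExamplePortSpinlockGet/Release helpers are assumptions for illustration, not the XMOS or RP2040 implementation:

/* Hypothetical portmacro.h fragment satisfying the checks listed above. */
typedef struct
{
    volatile uint32_t uxLockCount;    /* 0 when free, > 0 when held (recursive).  */
    volatile int32_t lOwnerCore;      /* Core currently holding the lock, or -1.  */
} ExamplePortSpinlock_t;

#define portUSING_GRANULAR_LOCKS       1
#define portCRITICAL_NESTING_IN_TCB    0

#define portSPINLOCK_TYPE                  ExamplePortSpinlock_t
#define portINIT_SPINLOCK( pxSpinlock )    do { ( pxSpinlock )->uxLockCount = 0U; ( pxSpinlock )->lOwnerCore = -1; } while( 0 )
#define portINIT_SPINLOCK_STATIC           { 0U, -1 }

/* vExamplePortSpinlockGet/Release() would implement the busy-wait and release. */
#define portGET_SPINLOCK( xCoreID, pxSpinlock )        vExamplePortSpinlockGet( ( xCoreID ), ( pxSpinlock ) )
#define portRELEASE_SPINLOCK( xCoreID, pxSpinlock )    vExamplePortSpinlockRelease( ( xCoreID ), ( pxSpinlock ) )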
- Updated prvCheckForRunStateChange() for granular locks
- Updated vTaskSuspendAll() and xTaskResumeAll():
  - Now hold the xTaskSpinlock during kernel suspension
  - Increment/decrement xPreemptionDisable; only yield when it reaches 0, allowing nested suspensions across different data groups
Co-authored-by: Sudeep Mohanty <[email protected]>
Updated critical section macros for granular locks. Some tasks.c APIs relied on their callers to enter critical sections; this assumption no longer holds under granular locking.
Critical sections added to the following functions:
- `vTaskInternalSetTimeOutState()`
- `xTaskIncrementTick()`
- `vTaskSwitchContext()`
- `xTaskRemoveFromEventList()`
- `eTaskConfirmSleepModeStatus()`
- `xTaskPriorityDisinherit()`
- `pvTaskIncrementMutexHeldCount()`
Added missing suspensions to the following functions:
- `vTaskPlaceOnEventList()`
- `vTaskPlaceOnUnorderedEventList()`
- `vTaskPlaceOnEventListRestricted()`
Fixed the locking in vTaskSwitchContext() (see the sketch below): it must acquire both kernel locks, viz. the task lock and the ISR lock, because it can be called from either task context or ISR context, and it must not alter the interrupt state prematurely.
Co-authored-by: Sudeep Mohanty <[email protected]>
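A hedged sketch of the lock ordering described for vTaskSwitchContext(); the wrapper function and the extern spinlock names mirror the commit text and are not the PR's actual code:

/* Sketch only: kernel data group spinlocks as described in the commit text. */
extern portSPINLOCK_TYPE xTaskSpinlock;
extern portSPINLOCK_TYPE xISRSpinlock;

void vExampleSwitchContextLocking( BaseType_t xCoreID )
{
    /* Task lock first, then ISR lock - the hierarchical order that avoids deadlock.
     * Both are needed because a context switch can be requested from task or ISR
     * context, and the interrupt state is deliberately left untouched here. */
    portGET_SPINLOCK( xCoreID, &xTaskSpinlock );
    portGET_SPINLOCK( xCoreID, &xISRSpinlock );

    /* ... choose the next task to run on this core ... */

    /* Release in reverse order of acquisition. */
    portRELEASE_SPINLOCK( xCoreID, &xISRSpinlock );
    portRELEASE_SPINLOCK( xCoreID, &xTaskSpinlock );
}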
Updated queue.c to use granular locking:
- Added xTaskSpinlock and xISRSpinlock
- Replaced critical section macros (taskENTER/EXIT_CRITICAL/_FROM_ISR()) with data group critical section macros (queueENTER/EXIT_CRITICAL/_FROM_ISR())
- Added vQueueEnterCritical/FromISR() and vQueueExitCritical/FromISR(), which map to the data group critical section macros
- Added prvLockQueueForTasks() and prvUnlockQueueForTasks() as the granular locking equivalents of prvLockQueue() and prvUnlockQueue() respectively
Co-authored-by: Sudeep Mohanty <[email protected]>
Updated event_groups.c to use granular locking:
- Added xTaskSpinlock and xISRSpinlock
- Replaced critical section macros (taskENTER/EXIT_CRITICAL/_FROM_ISR()) with data group critical section macros (event_groupsENTER/EXIT_CRITICAL/_FROM_ISR())
- Added vEventGroupsEnterCritical/FromISR() and vEventGroupsExitCritical/FromISR() functions that map to the data group critical section macros
- Added prvLockEventGroupForTasks() and prvUnlockEventGroupForTasks() to suspend the event group when executing non-deterministic code
- xEventGroupSetBits() and vEventGroupDelete() access the kernel data group directly, so vTaskSuspendAll()/xTaskResumeAll() were added to these functions
Co-authored-by: Sudeep Mohanty <[email protected]>
Updated stream_buffer.c to use granular locking:
- Added xTaskSpinlock and xISRSpinlock
- Replaced critical section macros (taskENTER/EXIT_CRITICAL/_FROM_ISR()) with data group critical section macros (sbENTER/EXIT_CRITICAL/_FROM_ISR())
- Added vStreambuffersEnterCritical/FromISR() and vStreambuffersExitCritical/FromISR() to map to the data group critical section macros
- Added prvLockStreamBufferForTasks() and prvUnlockStreamBufferForTasks() to suspend the stream buffer when executing non-deterministic code
Co-authored-by: Sudeep Mohanty <[email protected]>
Updated timers.c to use granular locking:
- Added xTaskSpinlock and xISRSpinlock
- Replaced critical section macros (taskENTER/EXIT_CRITICAL()) with data group critical section macros (tmrENTER/EXIT_CRITICAL())
- Added vTimerEnterCritical() and vTimerExitCritical() to map to the data group critical section macros
Co-authored-by: Sudeep Mohanty <[email protected]>
Co-authored-by: Sudeep Mohanty <[email protected]>
* Remove the data group enter/exit critical do-while loop
* Assert in queue only when the number of cores is 1 for scheduler suspension
* Update RP2040 to support granular lock
* Lightweight critical section: no TASK lock is acquired (see the sketch below)
* XMOS port supports granular locks
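A hedged sketch of what a lightweight critical section could reduce to under that definition; the macro name and body are illustrative assumptions, not the PR's actual implementation:

/* Hypothetical illustration: a lightweight critical section masks interrupts
 * and takes only the ISR spinlock - the task spinlock (and its preemption
 * bookkeeping) is deliberately skipped. */
#define exampleENTER_CRITICAL_LIGHTWEIGHT( pxISRSpinlock )                          \
    do {                                                                            \
        const BaseType_t xCoreID = ( BaseType_t ) portGET_CORE_ID();                \
        portDISABLE_INTERRUPTS();                                                   \
        portGET_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) ( pxISRSpinlock ) );     \
        portINCREMENT_CRITICAL_NESTING_COUNT( xCoreID );                            \
    } while( 0 )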
* Add user data group critical sections
* Update queue.c to remove a gap
* Fix a problem when leaving a lightweight critical section
Comprehensive Review: XMOS Granular Locks Implementation
This PR introduces a significant architectural enhancement implementing granular locking for FreeRTOS SMP to reduce lock contention. While the overall approach is sound, several critical issues need to be addressed.
High-Level Architecture Assessment
The "Dual Spinlock With Data Group Locking" approach is well-designed, organizing data into logical groups (Kernel, Queue, Event Group, Stream Buffer, Timer) each with their own spinlock pairs. The hierarchical locking order prevents deadlocks and the backwards compatibility is maintained.
Critical Issues Found
- MISRA C 2012 Compliance: Multiple violations detected by static analysis need resolution
- Memory Safety: Potential race conditions in granular lock acquisition
- API Consistency: Some naming and macro inconsistencies
- Documentation: Critical sections and locking hierarchy need better documentation
Recommendation
This is a substantial improvement to FreeRTOS SMP performance, but the critical issues below must be resolved before merging. Consider splitting into smaller, focused PRs for easier review of the core changes.
Overall Status: Request Changes - Critical issues require resolution
Detailed Technical Review: Critical Issues Identified
Based on static analysis results and code review, I've identified several critical issues that must be addressed:
MISRA C:2012 Compliance Issues (28 violations found)
- Rule 20.7: Macro parameter parentheses missing (8 violations)
- Rule 20.9: Undefined identifiers in preprocessor directives (17 violations)
- Rule 20.1: Misplaced include directives (1 violation)
- Rule 8.4: Missing external declarations (2 violations)
Critical Thread Safety Issues
- Race condition in RP2040 spinlock implementation - Hardware/software lock state inconsistency window
- Preemption disable counter overflow - No protection against wraparound in nested calls
Code Quality Issues
- API inconsistency - Trailing semicolons in macro definitions causing double semicolons
- Early return patterns - Violate the FreeRTOS single exit point coding standard
Priority Classification
- P0 (Critical): Race condition, counter overflow - Must fix before merge
- P1 (High): MISRA violations, API inconsistency - Should fix for compliance
- P2 (Medium): Code style issues - Good to fix for maintainability
The granular locking architecture itself is excellent, but these implementation issues need resolution to ensure system stability and compliance.
 * \ingroup GranularLocks
 */
#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define taskDATA_GROUP_ENTER_CRITICAL( pxTaskSpinlock, pxISRSpinlock ) \
MISRA C:2012 Rule 20.7 Violation - Static analysis detected that macro parameters should be enclosed in parentheses.
The macro parameters should be enclosed in parentheses to prevent unintended operator precedence issues:
#define taskDATA_GROUP_ENTER_CRITICAL( pxTaskSpinlock, pxISRSpinlock )                        \
    do {                                                                                      \
        /* Disable preemption to avoid task state changes during the critical section. */     \
        vTaskPreemptionDisable( NULL );                                                       \
        {                                                                                     \
            const BaseType_t xCoreID = ( BaseType_t ) portGET_CORE_ID();                      \
            /* Task spinlock is always taken first */                                         \
            portGET_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) ( pxTaskSpinlock ) );          \
            /* Disable interrupts */                                                          \
            portDISABLE_INTERRUPTS();                                                         \
            /* Take the ISR spinlock next */                                                  \
            portGET_SPINLOCK( xCoreID, ( portSPINLOCK_TYPE * ) ( pxISRSpinlock ) );           \
            /* Increment the critical nesting count */                                        \
            portINCREMENT_CRITICAL_NESTING_COUNT( xCoreID );                                  \
        }                                                                                     \
    } while( 0 )
Apply similar fixes to all related macros in this file.
 * \defgroup taskDATA_GROUP_ENTER_CRITICAL taskDATA_GROUP_ENTER_CRITICAL
 * \ingroup GranularLocks
 */
#if ( portUSING_GRANULAR_LOCKS == 1 )
MISRA C:2012 Rule 20.9 Violation - Static analysis detected that all identifiers used in preprocessing directives shall be #defined.
The macro conditions should ensure all identifiers are properly defined before use:
#if ( defined( portUSING_GRANULAR_LOCKS ) && ( portUSING_GRANULAR_LOCKS == 1 ) )
This ensures portUSING_GRANULAR_LOCKS is defined before evaluation, preventing potential issues with undefined identifiers in preprocessor expressions.
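For illustration (exampleGRANULAR_LOCKS_ENABLED is a made-up name), the guarded form makes a missing port definition explicit, whereas an unguarded #if silently evaluates the undefined identifier as 0:

/* Hypothetical guard: without defined(), a port that forgets to define
 * portUSING_GRANULAR_LOCKS would silently compile with the feature off. */
#if ( defined( portUSING_GRANULAR_LOCKS ) && ( portUSING_GRANULAR_LOCKS == 1 ) )
    #define exampleGRANULAR_LOCKS_ENABLED    1
#else
    #define exampleGRANULAR_LOCKS_ENABLED    0
#endif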
@@ -12,6 +12,15 @@ static hwtimer_t xKernelTimer;

uint32_t ulPortYieldRequired[ portMAX_CORE_COUNT ] = { pdFALSE };

#if ( portCRITICAL_NESTING_IN_TCB == 0 )
    UBaseType_t uxCriticalNestings[ configNUMBER_OF_CORES ] = { 0 };
MISRA C:2012 Rule 8.4 Violation - Static analysis detected that a compatible declaration shall be visible when an object or function with external linkage is defined.
The external variables need proper declarations. Add these to the appropriate header file (likely portmacro.h):
#if ( portCRITICAL_NESTING_IN_TCB == 0 )
    extern UBaseType_t uxCriticalNestings[ configNUMBER_OF_CORES ];
#endif

#if ( portUSING_GRANULAR_LOCKS == 1 )
    extern portSPINLOCK_TYPE xPortIsrSpinlock;
    extern portSPINLOCK_TYPE xPortTaskSpinlock;
#endif
This ensures external linkage objects have visible declarations before definition.
@@ -1176,3 +1176,83 @@ __attribute__( ( weak ) ) void vPortSetupTimerInterrupt( void )
    }
}
#endif /* configSUPPORT_PICO_TIME_INTEROP */

#if ( portUSING_GRANULAR_LOCKS == 1 )
MISRA C:2012 Rule 20.9 Violation & Potential Race Condition - Static analysis detected improper preprocessor directive usage, and there's a critical thread safety issue.
Issue 1: The #if condition evaluation needs proper identifier definition checking:
#if ( defined( portUSING_GRANULAR_LOCKS ) && ( portUSING_GRANULAR_LOCKS == 1 ) )
Issue 2: CRITICAL - The spinlock implementation has a race condition. The hardware spinlock is released before updating the software spinlock state, creating a window where another core can acquire the hardware lock while the software state is inconsistent.
Fix: Atomic operations or proper ordering:
void vPortSpinlockGet( BaseType_t xCoreID, portSPINLOCK_TYPE * pxSpinlock )
{
    BaseType_t xAcquired = pdFALSE;
    spin_lock_t * pxHardwareSpinLock = spin_lock_instance( configSMP_SPINLOCK_0 );
    UBaseType_t ulState;

    while( xAcquired == pdFALSE )
    {
        ulState = portSET_INTERRUPT_MASK();

        if( spin_try_lock_unsafe( pxHardwareSpinLock ) != false )
        {
            if( pxSpinlock->xLockCount == 0 )
            {
                pxSpinlock->xOwnerCore = xCoreID;
                pxSpinlock->xLockCount = 1;
                xAcquired = pdTRUE;
                /* Release hardware lock AFTER updating software state */
                spin_unlock_unsafe( pxHardwareSpinLock );
            }
            else if( pxSpinlock->xOwnerCore == xCoreID )
            {
                pxSpinlock->xLockCount++;
                xAcquired = pdTRUE;
                spin_unlock_unsafe( pxHardwareSpinLock );
            }
            else
            {
                /* Release without updating state for the contention case */
                spin_unlock_unsafe( pxHardwareSpinLock );
            }
        }

        portCLEAR_INTERRUPT_MASK( ulState );
    }
}
This critical race condition could cause system instability in SMP environments.
#define portTEST_GET_TIME    __xcore_get_reference_time

/* Granular lock macros. */
#include "swlock.h"
MISRA C:2012 Rule 20.1 Violation - Static analysis detected that #include directives should only be preceded by preprocessor directives or comments.
The #include "swlock.h" directive appears after other code elements. Move it to the appropriate location with the other includes:
/* Standard includes */
#include <...>
/* FreeRTOS specific includes */
#include "swlock.h" // Move here with other includes
/* Then macro definitions follow */
This follows MISRA guidelines for proper include directive placement and improves code organization.
 * Macros to mark the start and end of a critical code region.
 */
#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define queueENTER_CRITICAL( pxQueue )    taskDATA_GROUP_ENTER_CRITICAL( &pxQueue->xTaskSpinlock, &pxQueue->xISRSpinlock )
MISRA C:2012 Rule 20.7 Violation & API Inconsistency - Multiple issues detected.
Issue 1: MISRA Rule 20.7 - Macro parameters need parentheses:
#define queueENTER_CRITICAL( pxQueue ) taskDATA_GROUP_ENTER_CRITICAL( &(pxQueue)->xTaskSpinlock, &(pxQueue)->xISRSpinlock )
Issue 2: API Inconsistency - The fallback macros have incorrect syntax with trailing semicolons:
// INCORRECT - has trailing semicolon causing double semicolon when used
#define queueENTER_CRITICAL( pxQueue ) taskENTER_CRITICAL();
// CORRECT - no trailing semicolon
#define queueENTER_CRITICAL( pxQueue ) taskENTER_CRITICAL()
Fixed Version:
#if ( portUSING_GRANULAR_LOCKS == 1 )
    #define queueENTER_CRITICAL( pxQueue )                                   taskDATA_GROUP_ENTER_CRITICAL( &( ( pxQueue )->xTaskSpinlock ), &( ( pxQueue )->xISRSpinlock ) )
    #define queueENTER_CRITICAL_FROM_ISR( pxQueue, puxSavedInterruptStatus ) taskDATA_GROUP_ENTER_CRITICAL_FROM_ISR( &( ( pxQueue )->xISRSpinlock ), ( puxSavedInterruptStatus ) )
    #define queueEXIT_CRITICAL( pxQueue )                                    taskDATA_GROUP_EXIT_CRITICAL( &( ( pxQueue )->xTaskSpinlock ), &( ( pxQueue )->xISRSpinlock ) )
    #define queueEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxQueue )   taskDATA_GROUP_EXIT_CRITICAL_FROM_ISR( ( uxSavedInterruptStatus ), &( ( pxQueue )->xISRSpinlock ) )
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
    #define queueENTER_CRITICAL( pxQueue )                                   taskENTER_CRITICAL()
    #define queueENTER_CRITICAL_FROM_ISR( pxQueue, puxSavedInterruptStatus ) do { *( puxSavedInterruptStatus ) = taskENTER_CRITICAL_FROM_ISR(); } while( 0 )
    #define queueEXIT_CRITICAL( pxQueue )                                    taskEXIT_CRITICAL()
    #define queueEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxQueue )   taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus )
#endif
The same issues appear in similar macros throughout this file - apply consistent fixes.
 */
#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
    #define event_groupsLOCK( pxEventBits )      prvLockEventGroupForTasks( pxEventBits )
    #define event_groupsUNLOCK( pxEventBits )    prvUnlockEventGroupForTasks( pxEventBits );
Early Return Pattern Concern - This macro violates the FreeRTOS coding standard discouraging early returns.
The current implementation:
#define event_groupsUNLOCK( pxEventBits ) prvUnlockEventGroupForTasks( pxEventBits );
Creates this usage pattern:
xAlreadyYielded = event_groupsUNLOCK( pxEventBits ); // Direct return
Issue: This creates multiple exit points in functions, contradicting FreeRTOS coding standards that prefer single exit points for maintainability.
Recommendation: Consider restructuring the implementation to maintain single exit points:
// In the calling function, instead of a direct return:
BaseType_t xYieldResult;

event_groupsLOCK( pxEventBits );
{
    // ... critical section work ...
    xYieldResult = event_groupsUNLOCK( pxEventBits );
}

// Single exit point
xAlreadyYielded = xYieldResult;
This pattern should be applied consistently across all similar macros to maintain FreeRTOS coding standards.
            }
        }
    }

#if ( configUSE_TASK_PREEMPTION_DISABLE == 1 )

    void vTaskPreemptionDisable( const TaskHandle_t xTask )
    {
        TCB_t * pxTCB;
Critical Implementation Issue - Nested preemption disable counter without overflow protection.
Issue: The counter increment pxTCB->uxPreemptionDisable++ lacks overflow protection, which could cause undefined behavior in long-running systems.
Risk: If preemption is disabled/enabled unbalanced over time, the counter could overflow and wraparound, leading to:
- Tasks becoming permanently non-preemptible
- System deadlock scenarios
- Real-time guarantee violations
Fix: Add overflow protection and validation:
pxTCB = prvGetTCBFromHandle( xTask );
configASSERT( pxTCB != NULL );

/* Protect against counter overflow */
configASSERT( pxTCB->uxPreemptionDisable < UBASE_TYPE_MAX );

if( pxTCB->uxPreemptionDisable < UBASE_TYPE_MAX )
{
    pxTCB->uxPreemptionDisable++;
}
else
{
    /* Handle overflow condition - could be assertion or error reporting */
    configASSERT( pdFALSE ); /* This should never happen in correct usage */
}
Alternative: Use a reasonable maximum limit:
/* Reasonable nesting limit to catch programming errors */
#define configMAX_PREEMPTION_DISABLE_NESTING    ( 255U )

if( pxTCB->uxPreemptionDisable < configMAX_PREEMPTION_DISABLE_NESTING )
{
    pxTCB->uxPreemptionDisable++;
}
else
{
    configASSERT( pdFALSE ); /* Likely programming error */
}
This prevents system corruption from unbalanced preemption control calls.
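For completeness, a hedged sketch of the matching guard on the enable path (assuming the same uxPreemptionDisable field; this is not the PR's code):

/* Hypothetical counterpart in vTaskPreemptionEnable(): guard the decrement so
 * an unbalanced enable call is caught instead of wrapping the counter. */
configASSERT( pxTCB->uxPreemptionDisable > 0U );

if( pxTCB->uxPreemptionDisable > 0U )
{
    pxTCB->uxPreemptionDisable--;

    /* A pending yield should only be acted on once the count returns to 0,
     * i.e. when the outermost vTaskPreemptionDisable() call is balanced. */
}
else
{
    configASSERT( pdFALSE ); /* Unbalanced vTaskPreemptionEnable() call. */
}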
Description
Test Steps
Checklist:
Related Issue
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.