
Spawning of task can be interrupted by hardware task that is lower priority than the (to be) spawned task #1013

Open · twantonie opened this issue Jan 22, 2025 · 2 comments

@twantonie
The situation where I ran into this: a low-priority software task (handle_ethernet) spawns a higher-priority software task (handle_mess). While that spawn is in progress, a hardware task (transfer_complete) interrupts and tries to spawn the same task, and its spawn call returns an error.

In my opinion this shouldn't happen. The task being spawned, handle_mess, has a higher priority than the hardware task, so the hardware task should be preempted until the spawned task has completed.

As a workaround I've wrapped the handle_mess::spawn(MessEvent::ReadingComplete).unwrap() call in a critical section, which seems to solve the problem.
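For illustration, the failing spawn can be modeled on the host with a simplified one-slot spawner (this is my sketch, not RTIC's actual executor code): spawn sets a pending flag and fails if the flag was already set, which is the Err that the interrupting task's unwrap hits.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Simplified host-side model (NOT RTIC's actual implementation) of a
// one-slot task spawner: `spawn` succeeds only if no run of the task
// is already pending.
struct TaskSlot {
    pending: AtomicBool,
}

impl TaskSlot {
    const fn new() -> Self {
        Self { pending: AtomicBool::new(false) }
    }

    /// Returns Err(()) if the task was already spawned and has not yet
    /// finished, mirroring how a second spawn of the same task fails.
    fn spawn(&self) -> Result<(), ()> {
        if self.pending.swap(true, Ordering::AcqRel) {
            Err(()) // slot occupied: the second spawn fails
        } else {
            Ok(())
        }
    }

    /// Called when the task's run completes, freeing the slot.
    fn finish(&self) {
        self.pending.store(false, Ordering::Release);
    }
}

fn main() {
    let slot = TaskSlot::new();
    assert!(slot.spawn().is_ok());  // first spawn: slot was free
    assert!(slot.spawn().is_err()); // second spawn before finish: errors
    slot.finish();
    assert!(slot.spawn().is_ok());  // free again after the task ran
}
```

In this model the interrupting spawn errs whenever it lands between the flag being set and the task run completing, regardless of priorities.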

The rough code layout is shown below:

#[init]
fn init(ctx: init::Context) -> (SharedResources, LocalResources) {
    handle_ethernet::spawn().unwrap();
}

enum MessEvent {
    Detect,
    ReadingComplete,
    TimerComplete,
    TransferComplete,
    SpiFinished,
    Start,
    Stop,
    // Used in the match below but elided in the original snippet;
    // field types assumed here:
    Command { size: usize, command: u8, id: u8 },
    Upload { set_data: bool, command: u8 },
}

fn ethernet_transmit_readings(ctx: &mut handle_ethernet::Context) {
    // Some stuff happens
    handle_mess::spawn(MessEvent::ReadingComplete).unwrap();
}

#[task(priority = 2)]
async fn handle_ethernet(mut ctx: handle_ethernet::Context) {
    while let Ok(event) = ctx.local.ethernet.receive_event.recv().await {
        match event {
            EthernetEvent::Poll => ethernet_poll(&mut ctx),
            EthernetEvent::FinishedCommand(code, size) => {
                ethernet_finished_command(&mut ctx, code, size)
            }
            EthernetEvent::TransmitReadings => ethernet_transmit_readings(&mut ctx),
            EthernetEvent::CheckMessagesReceived => ethernet_check_messages_received(&mut ctx),
        }
    }
}

#[task(priority = 8)]
async fn handle_mess(ctx: handle_mess::Context, event: MessEvent) {
    match event {
        MessEvent::Detect => mess_detect(ctx),
        MessEvent::TimerComplete => mess_timer_complete(ctx),
        MessEvent::TransferComplete => mess_transfer_complete(ctx).await,
        MessEvent::SpiFinished => mess_spi_finished(ctx).await,
        MessEvent::ReadingComplete => mess_reading_complete(ctx),
        MessEvent::Start => mess_start(ctx).await,
        MessEvent::Stop => mess_stop(ctx),
        MessEvent::Command { size, command, id } => mess_command(ctx, size, command, id),
        MessEvent::Upload { set_data, command } => mess_upload(ctx, set_data, command),
    }
}

#[task(binds=DMA1_STR0, priority = 7)]
fn transfer_complete(_ctx: transfer_complete::Context) {
    handle_mess::spawn(MessEvent::TransferComplete).unwrap();
}

And I got the following backtrace from it:

#0  lib::__bkpt () at asm/lib.rs:51
#1  0x00009f9a in cortex_m::asm::bkpt ()
    at /home/twan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cortex-m-0.7.7/src/call_asm.rs:19
#2  panic_semihosting::panic (info=0x2001fb34) at src/lib.rs:84
#3  0x00007b28 in core::panicking::panic_fmt ()
    at core/src/panicking.rs:76
#4  0x00008c56 in core::result::unwrap_failed ()
    at core/src/result.rs:1699
#5  0x00001fb4 in core::result::Result<(), trigger_board::app::MessEvent>::unwrap<(), trigger_board::app::MessEvent> (self=...)
    at /home/twan/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/result.rs:1104
#6  trigger_board::app::transfer_complete () at src/main.rs:1484
#7  trigger_board::app::DMA1_STR0::{closure#0} () at src/main.rs:7
#8  rtic::export::cortex_basepri::run<trigger_board::app::DMA1_STR0::{closure_env#0}> (priority=7, f=...)
    at /home/twan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/rtic-2.1.2/src/export/cortex_basepri.rs:26
#9  trigger_board::app::DMA1_STR0 () at src/main.rs:7
#10 <signal handler called>
#11 0x0000372a in core::sync::atomic::atomic_store<u8> (
    dst=0x2001ffc9, val=1, 
    order=core::sync::atomic::Ordering::Release)
    at /home/twan/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:3328
#12 core::sync::atomic::AtomicU8::store (self=0x2001ffc9, val=1, 
    order=core::sync::atomic::Ordering::Release)
    at /home/twan/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:2465
#13 portable_atomic::imp::core_atomic::AtomicU8::store (
    self=0x2001ffc9, val=1, 
    order=core::sync::atomic::Ordering::Release)
    at /home/twan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/portable-atomic-1.10.0/src/imp/core_atomic.rs:178
#14 portable_atomic::AtomicBool::store (self=0x2001ffc9)
    at /home/twan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/portable-atomic-1.10.0/src/lib.rs:782
#15 rtic::export::executor::AsyncTaskExecutor<trigger_board::app::handle_mess::{async_fn_env#0}>::set_pending<trigger_board::app::handle_mess::{async_fn_env#0}> (self=<optimized out>)
    at /home/twan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/rtic-2.1.2/src/export/executor.rs:173
#16 rtic::export::executor::AsyncTaskExecutor<trigger_board::app::handle_mess::{async_fn_env#0}>::spawn<trigger_board::app::handle_mess::{async_fn_env#0}> (future=..., self=<optimized out>)
    at /home/twan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/rtic-2.1.2/src/export/executor.rs:192
#17 trigger_board::app::__rtic_internal_handle_mess_spawn (_0=...)
    at src/main.rs:7
#18 trigger_board::app::ethernet_transmit_readings (ctx=<optimized out>) at src/main.rs:582
#19 trigger_board::app::handle_ethernet::{async_fn#0} ()
    at src/main.rs:1359
#20 rtic::export::executor::AsyncTaskExecutor<trigger_board::app::handle_ethernet::{async_fn_env#0}>::poll<trigger_board::app::handle_ethernet::{async_fn_env#0}> (self=<optimized out>, wake=<optimized out>)
    at /home/twan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/rtic-2.1.2/src/export/executor.rs:203
#21 trigger_board::app::USART2::{closure#0} () at src/main.rs:7
#22 rtic::export::cortex_basepri::run<trigger_board::app::USART2::{closure_env#0}> (priority=2, f=...)
    at /home/twan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/rtic-2.1.2/src/export/cortex_basepri.rs:26
#23 trigger_board::app::USART2 () at src/main.rs:7
#24 <signal handler called>
#25 0x000050f4 in trigger_board::app::main () at src/main.rs:7
@korken89
Collaborator

Hi,

What is most likely happening is that handle_mess, after being spawned by transfer_complete, hits its await point. If ethernet_transmit_readings then does a spawn, it fails because the high-priority task is still working on the latest event. Since there are await points in handle_mess, lower-priority tasks can run while it waits for its events.

What you can do to solve this is use a channel instead of sending messages with spawn, or use an atomic bitfield to mark the event as pending.
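The atomic-bitfield suggestion could be sketched like this (names such as PendingEvents and the bit assignments are mine for illustration, not from the project): producers set a bit per event kind instead of spawning, and a single long-lived consumer task drains the bits. Unlike spawn, setting a bit can never fail.

```rust
use std::sync::atomic::{AtomicU8, Ordering};

// Sketch of an atomic bitfield of pending events. Bit layout and
// names are illustrative assumptions, not taken from the project.
struct PendingEvents(AtomicU8);

const READING_COMPLETE: u8 = 1 << 0;
const TRANSFER_COMPLETE: u8 = 1 << 1;

impl PendingEvents {
    const fn new() -> Self {
        Self(AtomicU8::new(0))
    }

    /// Mark an event as pending; callable from any context, any number
    /// of times, and it never returns an error.
    fn set(&self, event: u8) {
        self.0.fetch_or(event, Ordering::Release);
    }

    /// Atomically take and clear all pending events.
    fn take(&self) -> u8 {
        self.0.swap(0, Ordering::Acquire)
    }
}

fn main() {
    let pending = PendingEvents::new();
    pending.set(READING_COMPLETE);
    pending.set(TRANSFER_COMPLETE); // a second producer cannot fail
    let events = pending.take();
    assert_eq!(events, READING_COMPLETE | TRANSFER_COMPLETE);
    assert_eq!(pending.take(), 0); // taking drains the bitfield
}
```

The consumer task would loop, take() the bits, and dispatch each set bit; the producers (interrupt handlers and other tasks) only ever set bits, so there is no spawn slot to collide on.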

@twantonie
Author

Hi Emil,

Thanks for your input, but the awaits are not hit in this state. I wrote the send-event path so that it never relinquishes control, as you can see here:

// If try_send succeeds we are done; only on failure do we fall
// through and handle the event inline, without ever awaiting.
let Err(_e) = spi.send_event.try_send(EthernetEvent::TransmitReadings) else {
    return;
};

mess_reading_complete(ctx);

Did you look at the backtrace? From it, it's pretty clear that handle_ethernet is running first (frame #19), calls handle_mess::spawn() (frame #17), and is interrupted by the hardware task (frame #9).

If the issue were as you describe, simply wrapping the handle_mess::spawn(MessEvent::ReadingComplete).unwrap() in a critical section shouldn't fix it.
