
Expose AtomicTask #550


Merged: 3 commits into master, Aug 16, 2017

Conversation

carllerche (Member)

This PR exposes AtomicTask as it is a generally useful API. It also removes unsafe from the API.

@carllerche (Member, Author)

CI is failing due to rustc regressions.

@alexcrichton (Member)

Ah yeah, no worries about failures on beta. Could you add a detailed comment about the concurrent behavior of `park`?

@@ -52,30 +70,35 @@ impl AtomicTask {
}
}

/// The caller must ensure mutual exclusion
pub unsafe fn park(&self) {
if let Some(ref task) = *self.task.get() {
Member

Wasn't this one of the big use cases of this cell? Is the perf impact not measurable?

Member Author

It was never measured. The main point was managing the logic around atomically swapping the consumer task while the producer is notifying. The original API was `unsafe` mostly because it was built for a specific use case (`FuturesUnordered`) where we knew there was only a single consumer and the method had to take `&self`.

Either way, if it proves to be a perf issue, we can always bring it back as `park_unchecked`.

Member

Sounds reasonable to me!

/// notify the consumer, but the consumer is in the process of being migrated to
/// a new logical task.
///
/// Roughly, consumers should call `park` before checking the result of a
Member

This isn't quite true, right? Instead, consumers should call `park` after they determine the value isn't ready, right?

Member Author

It is? If consumers call `park` after they determine the value isn't ready, then there is a race condition where:

  • Consumer checks value.
  • Producer produces value.
  • Producer calls `notify` (no task registered -> no-op).
  • Consumer calls `park`: nothing happens, and the consumer is never woken.
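To make the required ordering concrete, here is a minimal sketch (not code from this PR) of both sides using the `AtomicTask` API as it was ultimately exposed (`register` is the later name for `park`; see below). The `Cell` type and its fields are invented for illustration:

```rust
extern crate futures;

use std::sync::Mutex;
use futures::{Async, Poll};
use futures::task::AtomicTask;

// Hypothetical one-shot cell, used only to illustrate the ordering.
struct Cell {
    task: AtomicTask,
    value: Mutex<Option<u32>>,
}

impl Cell {
    // Consumer side: register the current task *before* checking the value.
    // If the producer stores a value and notifies in between, the task is
    // already visible to it and gets woken.
    fn poll_value(&self) -> Poll<u32, ()> {
        self.task.register();
        match self.value.lock().unwrap().take() {
            Some(v) => Ok(Async::Ready(v)),
            None => Ok(Async::NotReady),
        }
    }

    // Producer side: store the value first, then notify whichever task most
    // recently registered.
    fn complete(&self, v: u32) {
        *self.value.lock().unwrap() = Some(v);
        self.task.notify();
    }
}
```

With this ordering, the race above collapses to two outcomes: either the consumer sees the value immediately, or the producer sees the registered task and notifies it.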

Member

Isn't that what this block is handling?

Member Author

That block handles concurrent calls to `park`. Those states are hit when another thread is currently in the `park` path, in which case the operation is aborted, as we consider the other thread to have "won" the race.

Member

Hm, ok, reading over the park/unpark implementation again, it turns out it's significantly different in this respect than I was expecting. I initially expected this to behave like `std::thread`'s park/unpark, where if you call unpark (or `notify` in this case) it "buffers a notification" if there's no blocked task. I think that would mean you're not required to call `park` before you do any work. Do you know if there's a reason not to implement that?
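For reference, `std::thread`'s token really is buffered: an `unpark` issued before `park` makes the later `park` return immediately rather than block. A tiny standalone demo (not part of this PR):

```rust
use std::thread;

fn main() {
    // Unpark the current thread before parking: the token is buffered...
    thread::current().unpark();
    // ...so this park consumes the token and returns immediately
    // instead of blocking.
    thread::park();
    println!("park returned without blocking");
}
```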

Member Author

Yeah, that pattern was my original attempt. However, in the face of the consumer changing its task, it doesn't really work the way it does with `thread::park()`.

As you said, `thread::park()` / `unpark()` behaves as a buffered notification. If `unpark` is called first, that notification gets buffered until `park` is called to "consume" it.

AtomicTask scenario:

  • Consumer calls `park` with task A.
  • Consumer migrates to task B & checks the computation result (not completed).
  • Producer completes the computation.
  • Producer calls `notify`.
  • Consumer calls `park` on task B.

The notification is sent to task A, and task B deadlocks.

This primitive is more of a strategy for managing consumers that migrate across tasks. This is why the consumer must call `park` before checking (see the sketch below).

One disconnect might be that this PR keeps the original `park` naming. I'm open to naming suggestions.
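A rough sketch of the consumer side under migration, building on the hypothetical `Cell` from the earlier snippet: because every poll re-registers whichever task is currently running, moving the future between tasks simply overwrites the stored task, so a later `notify` wakes the new one.

```rust
extern crate futures;

use std::sync::Arc;
use futures::{Future, Poll};

// `Cell` is the hypothetical type from the earlier sketch.
struct WaitForValue {
    cell: Arc<Cell>,
}

impl Future for WaitForValue {
    type Item = u32;
    type Error = ();

    fn poll(&mut self) -> Poll<u32, ()> {
        // poll_value registers the *current* task before checking, so if this
        // future was last polled from task A and is now polled from task B,
        // the stored task becomes B and the producer's notify wakes B, not
        // the stale A.
        self.cell.poll_value()
    }
}
```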

Member

Ok yeah, that makes sense to me. Perhaps something like:

  • Rename the method from `park` to `store_current`.
  • Explicitly mention in the documentation that "park before you check" may not be expected, and that it's different from other park-like APIs like `std::thread`?

Member Author

I renamed it to `register`.

@carllerche (Member, Author)

@alexcrichton I added additional notes to `park`, but I'm not sure if you had anything specific in mind.

@alexcrichton (Member)

> I'm not sure if you had anything specific in mind.

Nah looks great to me, thanks!

alexcrichton merged commit 7a6b64c into master on Aug 16, 2017
@alexcrichton (Member)

👍

alexcrichton deleted the expose-atomic-task branch on August 16, 2017 at 16:56