Conversation

luke-gruber

WIP, needs discussion.

Having it per-ractor is weird, because it means two threads in the same
ractor can both acquire the lock (if they go through the re-entrant
version). I don't think that's how it was envisioned to work.

Also, it's currently broken if you raise an exception while the lock is
held, because `rb_ec_vm_lock_rec(ec)` gives you the ractor's lock_rec,
and that is what gets saved in `tag.lock_rec`. That value doesn't
necessarily reflect the state of the current thread or fiber's lock_rec
(or even whether that thread or fiber locked the VM lock at all), so we
can't know whether we should release it, or how many levels to release.

I changed it to work per-fiber, like Ruby mutexes. Now we can trust
that the saved value is correct for each fiber, and other threads and
fibers trying to acquire the VM lock will be blocked by the fiber that
holds it.
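For reference, this mirrors how Ruby's `Mutex` has behaved since 3.0: ownership is tracked per fiber, not per thread. A small illustration (plain Ruby, nothing from this patch):

```ruby
m = Mutex.new
m.lock
puts m.owned?                        # true: the locking fiber owns it
Fiber.new { puts m.owned? }.resume   # false: a different fiber in the
                                     # same thread is not the owner
m.unlock
```

The VM lock after this change follows the same rule: only the exact fiber that acquired it is treated as the owner for re-entrancy.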

Also, there was a "bug" (I'm not sure it can actually happen) where if
you acquire the VM lock and then call `rb_fork`, you can deadlock. After
a fork we should reset `ractor.sync.lock_rec` to 0 and `lock_owner` to
NULL, in case the VM lock was held above the `rb_fork` call site.
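The same hazard exists at the Ruby level with a mutex held across `fork`: only the forking thread survives in the child, and CRuby abandons locks owned by the now-dead threads so the child isn't stuck. Resetting `lock_rec`/`lock_owner` after fork is the analogous fix for the VM lock. A rough sketch (assumes a fork-capable platform):

```ruby
m = Mutex.new
t = Thread.new { m.synchronize { sleep } }
Thread.pass until m.locked?   # wait until the other thread holds m

pid = fork do
  # In the child, the thread that held m no longer exists; CRuby
  # abandons its lock, so this acquire does not deadlock.
  m.synchronize { puts "child acquired the lock" }
end
Process.wait(pid)
t.kill
```

Without an equivalent reset of the VM lock's recursion count and owner, the child could block forever on state inherited from the parent.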