Safer (checked) QCellOwnerSeq::new
#50
`std`'s `Rc`, `Arc`, and `ThreadId` have essentially the same pathological edge case as `QCellOwnerSeq` — incrementing the reference count (owner ID sequence) in a loop without ever decreasing it (i.e. by forgetting the cloned `Rc`/`Arc`, or always, for the monotonic `ThreadId` and `QCellOwnerSeq` counts) could overflow the counter and lead to UB. `std` declares this as pathological, unsupported behavior — when would you ever legitimately need even 2³¹ different owning references or threads, let alone 2⁶³?[1] — and aborts if this overflow would happen.[2]

At least on 64-bit targets (where exhausting the ID space is fundamentally impractical[3]), it'd be nice to have `QCellOwnerSeq::new` made safe. I'd recommend doing a cmpxchg loop (`fetch_update`) like `ThreadId`, since creating new owners doesn't seem like it'd ever be a contended operation; a sketch follows the list below.

Doing this would entail one of:

- making `QCellOwnerSeq::new` safe (in theory not API breaking, but it can cause downstream `unused_unsafe` warnings and makes the previously non-panicking path potentially panicking); or
- changing `QCellOwnerSeq::new`'s `unsafe` requirement (from "don't misuse colliding IDs" to "don't exhaust the ID space") and adding a separate constructor[4] that does checked, sequence-based owner IDs.

Since qcell doesn't expose a way to get the numeric value of a `QCellOwnerID` (and I think this is a good decision), there's no way to implement this downstream, even partially.
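Concretely, a checked constructor could allocate IDs with something like the following minimal sketch, assuming a global `AtomicU64` counter (the names `NEXT_OWNER_ID` and `next_owner_id` are hypothetical, not qcell's actual internals):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical global counter for sequence-based owner IDs.
static NEXT_OWNER_ID: AtomicU64 = AtomicU64::new(1);

/// Allocate a fresh owner ID, panicking instead of wrapping.
/// `fetch_update` runs a compare-exchange loop like `ThreadId`'s
/// allocator, so overflow is rejected before any store is published.
fn next_owner_id() -> u64 {
    NEXT_OWNER_ID
        .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |id| id.checked_add(1))
        .expect("owner ID space exhausted")
}
```

Because `checked_add` returns `None` at the top of the range, `fetch_update` fails rather than wrapping, and the panic replaces the unchecked constructor's "don't collide IDs" obligation with an overflow check that can essentially never fire on 64-bit targets.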
Footnotes

1. For some context, with 128 threads all cooperatively incrementing the same shared counter at a rate of 6 GHz, it will still take 139 days at that rate to increment the counter 2⁶³ times (2⁶³ ÷ (128 × 6×10⁹ increments per second) ≈ 1.2×10⁷ seconds ≈ 139 days). It's effectively never going to happen accidentally.
2. `Rc` uses `cell.set(cell.get() + 1)` and aborts when overflow happens. `Arc` uses `atom.fetch_add(1, Relaxed)` and aborts at `isize::MAX`, relying on the fact that having `isize::MAX` threads cloning an `Arc` concurrently is impossible (at least two bytes of address space are consumed per thread). `ThreadId` uses a `compare_exchange` loop, presumably because creating a new thread will never be performance-critical the way cloning an `Arc` can be. A sketch contrasting these two atomic strategies follows these footnotes.
3. Counting to 2⁶³ is never going to happen in a reasonable timeframe (see the previous footnote), but counting to 2³² is entirely practical. OTOH, 32-bit targets with 64-bit atomic support could still use 64-bit owner IDs, zero-extending the address-based owner IDs.
4. In both cases `QCellOwner` could also switch to using checked sequence-based IDs to avoid the alloc requirement, but do note that this would result in the ID space being consumed without reuse by `QCellOwner` as well as `QCellOwnerSeq`, which it isn't currently.
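To make the contrast in footnote 2 concrete, here is a simplified sketch of the two strategies. This is illustrative of the approach only, not `std`'s actual code, and all names are invented:

```rust
use std::process::abort;
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};

// Invented stand-ins for the counters inside `Arc` and `ThreadId`.
static STRONG_COUNT: AtomicUsize = AtomicUsize::new(1);
static THREAD_COUNTER: AtomicU64 = AtomicU64::new(0);

// `Arc`-style: a single wrapping `fetch_add` on the hot path, then abort
// if the count has climbed implausibly high. Fast, and sound only because
// `isize::MAX` simultaneous cloners cannot physically exist.
fn arc_style_clone() {
    if STRONG_COUNT.fetch_add(1, Ordering::Relaxed) > isize::MAX as usize {
        abort();
    }
}

// `ThreadId`-style: a compare-exchange loop that refuses to overflow at
// all. Slower under contention, which is fine for rare events such as
// spawning a thread (or creating a cell owner).
fn threadid_style_next() -> u64 {
    let mut cur = THREAD_COUNTER.load(Ordering::Relaxed);
    loop {
        let next = cur.checked_add(1).unwrap_or_else(|| abort());
        match THREAD_COUNTER.compare_exchange_weak(
            cur,
            next,
            Ordering::Relaxed,
            Ordering::Relaxed,
        ) {
            Ok(_) => return next,
            Err(actual) => cur = actual,
        }
    }
}
```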
Comments

Thanks for the comments, and sorry for the delay in replying. I guess this must be for a

Also, from the reaction in the forums, it seems like any unsoundness at all, even once every 139 days with a determined effort, is regarded as a demonstrated weakness and therefore a bug. So to be conservative, this is marked as `unsafe`.

Since the ID goes on every single `QCell`, I feel we got the balance about right on this one. Personally I prefer not to have long-term panics hanging over me, so using addresses or a wrapping ID means they will never run out.

Considering the option of adding both panicking and non-panicking versions of `QCellOwnerSeq::new`, I think I can do what you're asking by adding a new panicking call with its own ID range. Does this sound okay?

I do wonder what the situation is where someone would prefer a hidden panic rather than no panic and an explicit "Safety: ..." notice in the code. But really I don't need to understand that. If you want this call it is not a big deal for me to add it.

So if you agree, I have two actions from this:
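For illustration only, here is one way "a new panicking call with its own ID range" might work: the checked constructor claims IDs counting down from the top of the 64-bit space, so they can never collide with ascending sequence IDs below. All names and the exact range split here are invented, not qcell's actual design:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Invented range split: checked IDs descend from the top of the space,
// staying above a floor so they remain disjoint from IDs handed out
// elsewhere (e.g. the ascending unchecked sequence, address-based IDs).
static NEXT_CHECKED_ID: AtomicU64 = AtomicU64::new(u64::MAX);
const CHECKED_ID_FLOOR: u64 = 1 << 63;

/// Claim a fresh ID from the dedicated checked range, panicking if the
/// range is exhausted (effectively impossible on 64-bit; see footnote 1).
fn next_checked_id() -> u64 {
    NEXT_CHECKED_ID
        .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |id| {
            (id > CHECKED_ID_FLOOR).then(|| id - 1)
        })
        .expect("checked owner-ID range exhausted")
}
```

Giving the panicking call its own range means the existing unchecked and address-based constructors keep their current non-panicking guarantees untouched.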