wasmtime/runtime/gc/enabled/rooting.rs
//! Garbage collection rooting APIs.
//!
//! Rooting prevents GC objects from being collected while they are actively
//! being used.
//!
//! ## Goals
//!
//! We have a few sometimes-conflicting goals with our GC rooting APIs:
//!
//! 1. Safety: It should never be possible to get a use-after-free bug because
//!    the user misused the rooting APIs, the collector "mistakenly" determined
//!    an object was unreachable and collected it, and then the user tried to
//!    access the object. This is our highest priority.
//!
//! 2. Moving GC: Our rooting APIs should support moving collectors (such as
//!    generational and compacting collectors) where an object might get
//!    relocated after a collection and we need to update the GC root's pointer
//!    to the moved object. This means we either need cooperation and internal
//!    mutability from individual GC roots as well as the ability to enumerate
//!    all GC roots on the native Rust stack, or we need a level of indirection.
//!
//! 3. Performance: Our rooting APIs should generally be as low-overhead as
//!    possible. They definitely shouldn't require synchronization and locking
//!    to create, access, and drop GC roots.
//!
//! 4. Ergonomics: Our rooting APIs should be, if not a pleasure, then at least
//!    not a burden for users. Additionally, the API's types should be `Sync`
//!    and `Send` so that they work well with async Rust.
//!
//! The two main design axes that trade off the above goals are:
//!
//! - Where the GC reference itself is held. A root object could
//!   directly hold the underlying GC reference (offset into the GC
//!   storage area), which would allow more efficient dereferencing
//!   and access to the referred-to GC object. However, goal (2)
//!   requires that the GC is able to update references when objects
//!   move during a GC. Thus, such "direct roots" would need to be
//!   registered somehow in a global root registry, and would need to
//!   unregister themselves when dropped.
//!
//!   The alternative is to hold some indirect kind of reference to a
//!   GC reference, with the latter stored directly in the `Store` so
//!   the GC can update it freely. This adds one pointer-chasing hop
//!   to accesses, but works much more nicely with ownership
//!   semantics. Logically, the `Store` "owns" the actual pointers,
//!   and rooting types own the slots that they are stored in.
//!
//!   For the above reasons, all of our rooting types below use
//!   indirection. This avoids the need for an unsafe
//!   intrusive linked list for global registration, or a shared
//!   reference to a mutex-protected registry, or some other
//!   error-prone technique.
//!
//! - How unrooting occurs. Ideally, a rooting type would implement
//!   the Rust `Drop` trait and unroot itself when the Rust value is
//!   dropped. However, because the rooting state is held in the
//!   `Store`, this direct approach would imply keeping a shared,
//!   mutex-protected handle to the registry in every rooting
//!   object. This would add synchronization overhead to the common
//!   case, and in general would be a bad tradeoff.
//!
//!   However, there are several other approaches:
//!
//!   - The user could use an RAII wrapper that *does* own the `Store`,
//!     and defines a "scope" in which roots are created and then
//!     bulk-unrooted at the close of the scope.
//!   - The rooting type could hold a shared reference to some state
//!     *other* than the full registry, and update a flag in that
//!     state indicating it has been dropped; the `Store` could then
//!     later observe that flag and remove the root. This would have
//!     some allocation cost, but the shared state would be
//!     independent of the `Store` and specific to each root, so would
//!     impose no synchronization overhead between different roots or
//!     the GC itself.
//!   - The rooting type could provide a fully manual `unroot` method,
//!     allowing the user to make use of their own knowledge of their
//!     application's lifetimes and semantics and remove roots when
//!     appropriate.
//!
//!   We provide an implementation of the first two of these
//!   strategies below in `Rooted` and `OwnedRooted`. The last, fully
//!   manual, approach is too difficult to use correctly (it cannot
//!   implement Rust's `Drop` trait, but there is no way in Rust to
//!   enforce that a value must be consumed rather than dropped) so it
//!   is not implemented.
//!
//! ## Two Flavors of Rooting API
//!
//! Okay, with that out of the way, this module provides two flavors
//! of rooting API: one for the common, scoped-lifetime case, and one that
//! carries ownership until dropped and works as an RAII handle
//! that interacts well with Rust ownership semantics (at a minor
//! performance cost):
//!
//! 1. `RootScope` and `Rooted<T>`: These are used for temporarily rooting GC
//!    objects for the duration of a scope. The internal implementation takes
//!    advantage of the LIFO property inherent in scopes, making creating and
//!    dropping `Rooted<T>`s and `RootScope`s super fast and roughly equivalent
//!    to bump allocation.
//!
//!    This type is vaguely similar to V8's [`HandleScope`].
//!
//!    [`HandleScope`]: https://v8.github.io/api/head/classv8_1_1HandleScope.html
//!
//!    Note that `Rooted<T>` can't be statically tied to its context scope via a
//!    lifetime parameter, unfortunately, as that would allow the creation of
//!    only one `Rooted<T>` at a time, since the `Rooted<T>` would take a borrow
//!    of the whole context.
//!
//!    This supports the common use case for rooting and provides good
//!    ergonomics.
//!
//! 2. `OwnedRooted<T>`: This is a root that manages rooting and
//!    unrooting automatically with its lifetime as a Rust value. In
//!    other words, the continued existence of the Rust value ensures
//!    the rooting of the underlying GC reference; and when the Rust
//!    value is dropped, the underlying GC reference is no longer
//!    rooted.
//!
//!    Internally, this root holds a shared reference to a
//!    *root-specific* bit of state that is also tracked and observed
//!    by the `Store`. This means that there is minor memory
//!    allocation overhead (an `Arc<()>`) for each such root; this
//!    memory is shared over all clones of this root. The rooted GC
//!    reference is *logically* unrooted as soon as the last clone of
//!    this root is dropped. Internally the root may still exist until
//!    the next GC, or until the "root trim" that occurs when another
//!    `OwnedRooted` is created, but that is unobservable externally,
//!    and will not extend any GC object's lifetime because stale
//!    roots are always cleaned up before a GC.
//!
//!    This type is roughly similar to SpiderMonkey's [`PersistentRooted<T>`],
//!    although they register roots on a per-thread `JSContext`, avoiding
//!    mutation costs in a way that is not viable for Wasmtime (which needs
//!    `Store`s to be `Send`).
//!
//!    [`PersistentRooted<T>`]: http://devdoc.net/web/developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey/JSAPI_reference/JS::PersistentRooted.html
//!
//! At the end of the day, all of the above root types are just tagged
//! indices into the store's `RootSet`. This indirection allows
//! working with Rust's borrowing discipline (we use `&mut Store` to
//! represent mutable access to the GC heap) while still allowing
//! rooted references to be moved around without tying up the whole
//! store in borrows. Additionally, and crucially, this indirection
//! allows us to update the *actual* GC pointers in the `RootSet` and
//! support moving GCs (again, as mentioned above).
//!
//! ## Unrooted References
//!
//! We generally don't expose *unrooted* GC references in the Wasmtime API at
//! this time -- and I expect it will be a very long time before we do, but in
//! the limit we may want to let users define their own GC-managed types that
//! participate in GC tracing and all that -- so we don't have to worry about
//! failure to root an object causing use-after-free bugs or failing to update a
//! GC root pointer after a moving GC as long as users stick to our safe rooting
//! APIs. (The one exception is `ValRaw`, which does hold raw GC references. But
//! with `ValRaw` all bets are off and safety is 100% up to the user.)
//!
//! We do, however, have to worry about these things internally. So first of
//! all, try to avoid ever working with unrooted GC references if you
//! can. However, if you really must, consider also using an `AutoAssertNoGc`
//! across the block of code that is manipulating raw GC references.

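//! ## Quick Example
//!
//! A brief sketch of how the two flavors fit together. This is illustrative
//! only (in particular, the `to_owned_rooted` conversion name below is a
//! placeholder, not necessarily the real method); see the type-level
//! documentation on `Rooted`, `RootScope`, and `OwnedRooted` for the precise
//! APIs:
//!
//! ```ignore
//! let mut store = Store::<()>::default();
//!
//! // LIFO flavor: valid until the enclosing `RootScope` is dropped.
//! let mut scope = RootScope::new(&mut store);
//! let r: Rooted<ExternRef> = ExternRef::new(&mut scope, "temp")?;
//!
//! // Owned flavor: valid for as long as the Rust value (or any clone of
//! // it) is alive, regardless of scope structure.
//! let owned /* : OwnedRooted<ExternRef> */ = r.to_owned_rooted(&mut scope)?;
//! drop(scope); // `r` is now unrooted; `owned` keeps the object alive.
//! ```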
use crate::runtime::vm::{GcRootsList, GcStore, VMGcRef};
use crate::{
    AsContext, AsContextMut, GcRef, Result, RootedGcRef,
    error::OutOfMemory,
    store::{AsStoreOpaque, AutoAssertNoGc, StoreId, StoreOpaque},
};
use crate::{ValRaw, prelude::*};
use alloc::sync::{Arc, Weak};
use core::any;
use core::marker;
use core::mem::{self, MaybeUninit};
use core::num::{NonZeroU64, NonZeroUsize};
use core::{
    fmt::{self, Debug},
    hash::{Hash, Hasher},
    ops::{Deref, DerefMut},
};
use wasmtime_core::slab::{Id as SlabId, Slab};
mod sealed {
    use super::*;

    /// Sealed, `wasmtime`-internal trait for GC references.
    ///
    /// # Safety
    ///
    /// All types implementing this trait must:
    ///
    /// * Be a newtype of a `GcRootIndex`
    ///
    /// * Not implement `Copy` or `Clone`
    ///
    /// * Only have `&self` methods.
    pub unsafe trait GcRefImpl: Sized {
        /// Transmute a `&GcRootIndex` into an `&Self`.
        fn transmute_ref(index: &GcRootIndex) -> &Self;
    }

    /// Sealed, `wasmtime`-internal trait for the common methods on rooted GC
    /// references.
    pub trait RootedGcRefImpl<T: GcRef> {
        /// Get this rooted GC reference's raw `VMGcRef` out of the store's GC
        /// root set.
        ///
        /// Returns `None` for objects that have since been unrooted (e.g.
        /// because their associated `RootScope` was dropped).
        ///
        /// Panics if this root is not associated with the given store.
        fn get_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Option<&'a VMGcRef>;

        /// Same as `get_gc_ref` but returns an error instead of `None` for
        /// objects that have been unrooted.
        fn try_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Result<&'a VMGcRef> {
            self.get_gc_ref(store).ok_or_else(|| {
                format_err!("attempted to use a garbage-collected object that has been unrooted")
            })
        }

        /// Get a clone of this rooted GC reference's raw `VMGcRef` out of the
        /// store's GC root set.
        ///
        /// Returns `None` for objects that have since been unrooted (e.g.
        /// because their associated `RootScope` was dropped).
        ///
        /// Panics if this root is not associated with the given store.
        fn clone_gc_ref(&self, store: &mut AutoAssertNoGc<'_>) -> Option<VMGcRef> {
            let gc_ref = self.get_gc_ref(store)?.unchecked_copy();
            Some(store.clone_gc_ref(&gc_ref))
        }

        /// Same as `clone_gc_ref` but returns an error instead of `None` for
        /// objects that have been unrooted.
        fn try_clone_gc_ref(&self, store: &mut AutoAssertNoGc<'_>) -> Result<VMGcRef> {
            let gc_ref = self.try_gc_ref(store)?.unchecked_copy();
            Ok(store.clone_gc_ref(&gc_ref))
        }
    }
}
pub(crate) use sealed::*;

/// The index of a GC root inside a particular store's GC root set.
///
/// Can be either a LIFO- or owned-rooted object, depending on the
/// `PackedIndex`.
///
/// Every `T` such that `T: GcRef` must be a newtype over this `GcRootIndex`.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
// Just `pub` to avoid `warn(private_interfaces)` in public APIs, which we can't
// `allow(...)` on our MSRV yet.
#[doc(hidden)]
#[repr(C)] // NB: if this layout changes be sure to change the C API as well
pub struct GcRootIndex {
    store_id: StoreId,
    generation: u32,
    index: PackedIndex,
}

const _: () = {
    // NB: these match the C API which should also be updated if this changes
    assert!(mem::size_of::<GcRootIndex>() == 16);
    assert!(mem::align_of::<GcRootIndex>() == mem::align_of::<u64>());
};

impl GcRootIndex {
    #[inline]
    pub(crate) fn comes_from_same_store(&self, store: &StoreOpaque) -> bool {
        self.store_id == store.id()
    }

    /// Same as `RootedGcRefImpl::get_gc_ref` but not associated with any
    /// particular `T: GcRef`.
    ///
    /// We must avoid triggering a GC while holding onto the resulting raw
    /// `VMGcRef` to avoid use-after-free bugs and similar. The `'a` lifetime
    /// threaded from the `store` to the result will normally prevent GCs
    /// statically, at compile time, since performing a GC requires a mutable
    /// borrow of the store. However, if you call `VMGcRef::unchecked_copy` on
    /// the resulting GC reference, then all bets are off and this invariant is
    /// up to you to manually uphold. Failure to uphold this invariant is memory
    /// safe but will lead to general incorrectness such as panics and wrong
    /// results.
    ///
    /// # Panics
    ///
    /// Panics if `self` is not associated with the given store.
    pub(crate) fn get_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Option<&'a VMGcRef> {
        assert!(
            self.comes_from_same_store(store),
            "object used with wrong store"
        );
        if let Some(index) = self.index.as_lifo() {
            let entry = store.gc_roots().lifo_roots.get(index)?;
            if entry.generation == self.generation {
                Some(&entry.gc_ref)
            } else {
                None
            }
        } else if let Some(id) = self.index.as_owned() {
            let gc_ref = store.gc_roots().owned_rooted.get(id);
            debug_assert!(gc_ref.is_some());
            gc_ref
        } else {
            unreachable!()
        }
    }

    /// Same as `get_gc_ref` but returns an error instead of `None` if
    /// the GC reference has been unrooted.
    ///
    /// # Panics
    ///
    /// Panics if `self` is not associated with the given store.
    pub(crate) fn try_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Result<&'a VMGcRef> {
        self.get_gc_ref(store).ok_or_else(|| {
            format_err!("attempted to use a garbage-collected object that has been unrooted")
        })
    }

    /// Same as `RootedGcRefImpl::clone_gc_ref` but not associated with any
    /// particular `T: GcRef`.
    pub(crate) fn try_clone_gc_ref(&self, store: &mut AutoAssertNoGc<'_>) -> Result<VMGcRef> {
        let gc_ref = self.try_gc_ref(store)?.unchecked_copy();
        Ok(store.clone_gc_ref(&gc_ref))
    }
}

/// This is a bit-packed version of
///
/// ```ignore
/// enum {
///     Lifo(usize),
///     Owned(SlabId),
/// }
/// ```
///
/// where the high bit is the discriminant and the lower 31 bits are the
/// payload.
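///
/// For example (a sketch of the packing, where `0x8000_0000` is the
/// discriminant bit defined by the constants below):
///
/// ```ignore
/// // A LIFO index keeps its payload as-is, with the high bit clear:
/// assert_eq!(PackedIndex::new_lifo(5).0, 0x0000_0005);
/// // An owned root stores its raw `SlabId` with the high bit set:
/// assert_eq!(PackedIndex::new_owned(id).0, 0x8000_0000 | id.into_raw());
/// ```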
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
#[repr(transparent)]
struct PackedIndex(u32);

impl Debug for PackedIndex {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        if let Some(index) = self.as_lifo() {
            f.debug_tuple("PackedIndex::Lifo").field(&index).finish()
        } else if let Some(id) = self.as_owned() {
            f.debug_tuple("PackedIndex::Owned").field(&id).finish()
        } else {
            unreachable!()
        }
    }
}

impl PackedIndex {
    const DISCRIMINANT_MASK: u32 = 0b1 << 31;
    const LIFO_DISCRIMINANT: u32 = 0b0 << 31;
    const OWNED_DISCRIMINANT: u32 = 0b1 << 31;
    const PAYLOAD_MASK: u32 = !Self::DISCRIMINANT_MASK;

    fn new_lifo(index: usize) -> PackedIndex {
        let index32 = u32::try_from(index).unwrap();
        assert_eq!(index32 & Self::DISCRIMINANT_MASK, 0);
        let packed = PackedIndex(Self::LIFO_DISCRIMINANT | index32);
        debug_assert!(packed.is_lifo());
        debug_assert_eq!(packed.as_lifo(), Some(index));
        debug_assert!(!packed.is_owned());
        debug_assert!(packed.as_owned().is_none());
        packed
    }

    fn new_owned(id: SlabId) -> PackedIndex {
        let raw = id.into_raw();
        assert_eq!(raw & Self::DISCRIMINANT_MASK, 0);
        let packed = PackedIndex(Self::OWNED_DISCRIMINANT | raw);
        debug_assert!(packed.is_owned());
        debug_assert_eq!(packed.as_owned(), Some(id));
        debug_assert!(!packed.is_lifo());
        debug_assert!(packed.as_lifo().is_none());
        packed
    }

    fn discriminant(&self) -> u32 {
        self.0 & Self::DISCRIMINANT_MASK
    }

    fn is_lifo(&self) -> bool {
        self.discriminant() == Self::LIFO_DISCRIMINANT
    }

    fn is_owned(&self) -> bool {
        self.discriminant() == Self::OWNED_DISCRIMINANT
    }

    fn payload(&self) -> u32 {
        self.0 & Self::PAYLOAD_MASK
    }

    fn as_lifo(&self) -> Option<usize> {
        if self.is_lifo() {
            Some(usize::try_from(self.payload()).unwrap())
        } else {
            None
        }
    }

    fn as_owned(&self) -> Option<SlabId> {
        if self.is_owned() {
            Some(SlabId::from_raw(self.payload()))
        } else {
            None
        }
    }
}

/// The set of all embedder-API GC roots in a single store/heap.
#[derive(Debug, Default)]
pub(crate) struct RootSet {
    /// GC roots with arbitrary lifetime that are unrooted when
    /// liveness flags are cleared (seen during a trimming pass), for
    /// use with `OwnedRooted<T>`.
    owned_rooted: Slab<VMGcRef>,

    /// List of liveness flags and corresponding `SlabId`s into the
    /// `owned_rooted` slab.
    liveness_flags: Vec<(Weak<()>, SlabId)>,

    /// High-water mark for liveness-flag trimming. We use this to
    /// ensure we have amortized constant-time behavior when adding
    /// roots. See the note on `trim_liveness_flags()` below.
    liveness_trim_high_water: Option<NonZeroUsize>,

    /// Strictly LIFO-ordered GC roots, for use with `RootScope` and
    /// `Rooted<T>`.
    lifo_roots: Vec<LifoRoot>,

    /// Generation counter for entries to prevent ABA bugs with `RootScope` and
    /// `Rooted<T>`.
    lifo_generation: u32,
}

#[derive(Debug)]
struct LifoRoot {
    generation: u32,
    gc_ref: VMGcRef,
}

impl RootSet {
    pub(crate) fn trace_roots(&mut self, gc_roots_list: &mut GcRootsList) {
        log::trace!("Begin trace user LIFO roots");
        for root in &mut self.lifo_roots {
            unsafe {
                gc_roots_list.add_root((&mut root.gc_ref).into(), "user LIFO root");
            }
        }
        log::trace!("End trace user LIFO roots");

        log::trace!("Begin trace user owned roots");
        for (_id, root) in self.owned_rooted.iter_mut() {
            unsafe {
                gc_roots_list.add_root(root.into(), "user owned root");
            }
        }
        log::trace!("End trace user owned roots");
    }

    /// Enter a LIFO rooting scope.
    ///
    /// Returns an integer that should be passed unmodified to `exit_lifo_scope`
    /// when the scope is finished.
    ///
    /// Calls to `{enter,exit}_lifo_scope` must happen in a strict LIFO order.
    #[inline]
    pub(crate) fn enter_lifo_scope(&self) -> usize {
        self.lifo_roots.len()
    }

    /// Exit a LIFO rooting scope.
    ///
    /// The `scope` argument must be the result of the corresponding
    /// `enter_lifo_scope` call.
    ///
    /// Calls to `{enter,exit}_lifo_scope` must happen in a strict LIFO order.
    #[inline]
    pub(crate) fn exit_lifo_scope(&mut self, gc_store: Option<&mut GcStore>, scope: usize) {
        debug_assert!(self.lifo_roots.len() >= scope);

        // If we actually have roots to unroot, call an out-of-line slow path.
        if self.lifo_roots.len() > scope {
            self.exit_lifo_scope_slow(gc_store, scope);
        }
    }

    #[inline(never)]
    #[cold]
    fn exit_lifo_scope_slow(&mut self, mut gc_store: Option<&mut GcStore>, scope: usize) {
        self.lifo_generation += 1;

        // TODO: In the case where we have a tracing GC that doesn't need to
        // drop barriers, this should really be:
        //
        //     self.lifo_roots.truncate(scope);

        let mut lifo_roots = mem::take(&mut self.lifo_roots);
        for root in lifo_roots.drain(scope..) {
            // Only drop the GC reference if we actually have a GC store. How
            // can we have a GC reference but not a GC store? If we've only
            // created `i31ref`s, we never force a GC store's allocation. This
            // is fine because `i31ref`s never need drop barriers.
            if let Some(gc_store) = &mut gc_store {
                gc_store.drop_gc_ref(root.gc_ref);
            }
        }
        self.lifo_roots = lifo_roots;
    }

    pub(crate) fn with_lifo_scope<S, T>(store: &mut S, f: impl FnOnce(&mut S) -> T) -> T
    where
        S: ?Sized + DerefMut<Target = StoreOpaque>,
    {
        let scope = store.gc_roots().enter_lifo_scope();
        let ret = f(store);
        store.exit_gc_lifo_scope(scope);
        ret
    }

    pub(crate) fn push_lifo_root(&mut self, store_id: StoreId, gc_ref: VMGcRef) -> GcRootIndex {
        let generation = self.lifo_generation;
        let index = self.lifo_roots.len();
        let index = PackedIndex::new_lifo(index);
        self.lifo_roots.push(LifoRoot { generation, gc_ref });
        GcRootIndex {
            store_id,
            generation,
            index,
        }
    }

    /// Trim any stale (dropped) owned roots.
    ///
    /// `OwnedRooted` is implemented in a way that avoids the need to
    /// have or keep a reference to the store (and thus this struct)
    /// during its drop operation: to allow it to be independent, it
    /// holds a shared reference to some other memory and sets that
    /// "liveness flag" appropriately, then we later observe dead
    /// liveness flags during a periodic scan and actually deallocate
    /// the roots. We use an `Arc<()>` for this: it permits cheap
    /// cloning, and it has minimal memory overhead. We hold a weak
    /// ref in a list alongside the actual `GcRootIndex`, and we free
    /// that index in the slab of owned roots when we observe that
    /// only our weak ref remains.
    ///
    /// This trim step is logically separate from a full GC, though it
    /// would not be very productive to do a GC without doing a
    /// root-trim first: the root-trim should be quite a lot cheaper,
    /// and it will allow more garbage to be collected.
    ///
    /// There is, additionally, nothing stopping us from doing trims
    /// more often than just before each GC, and there are reasons
    /// this could be a good idea: for example, a user program that
    /// creates and removes many roots (perhaps as it accesses the GC
    /// object graph) but ultimately is dealing with a static graph,
    /// without further allocation, may need the "root set" to be GC'd
    /// independently from the actual heap. Thus, we could trim before
    /// adding a new root to ensure we don't grow that unboundedly (or
    /// force an otherwise unneeded GC).
    ///
    /// The first case, just before GC, wants a "full trim": there's
    /// no reason not to unroot as much as possible before we do the
    /// expensive work of tracing the whole heap.
    ///
    /// On the other hand, the second case, adding a new root, wants a
    /// kind of trim that is amortized constant time. Consider: if we
    /// have some threshold for the trim, say N roots, and the user
    /// program continually adds and removes one root such that it
    /// goes just over the threshold, we might scan all N liveness
    /// flags for each step, resulting in quadratic behavior overall.
    ///
    /// Thus, we implement a "high water mark" algorithm to guard
    /// against this latter case: on the add-a-new-root path, we trim
    /// only if the list is longer than the high-water mark, and we
    /// set the high-water mark each time based on the after-trim
    /// size. See below for details on this algorithm.
    ///
    /// `eager` chooses whether we eagerly trim roots or pre-filter
    /// using the high-water mark.
    ///
    /// # Growth Algorithm
    ///
    /// We want to balance two factors: we must ensure that the
    /// algorithmic complexity of creating a new root is amortized
    /// O(1), and we must ensure that repeated creation and deletion
    /// of roots without any GC results in a root set whose size is
    /// linear in the actual live-root-set size. Stated formally:
    ///
    /// 1. Root creation must be O(1), amortized.
    /// 2. `liveness_flags.len()` must be O(|max live owned roots|),
    ///    i.e., must not grow to larger than a constant multiple of
    ///    the maximum working-root-set size of the program.
    ///
    /// Note that a naive exponential-growth-of-threshold algorithm,
    /// where we trim when the root set reaches 1, 2, 4, 8, 16, 32,
    /// ... elements, provides the first but *not* the second
    /// property. A workload that has a constant live root set but
    /// creates and drops roots constantly (say, as it's traversing a
    /// static graph and moving "fingers" through it) will cause the
    /// `liveness_flags` array to grow unboundedly.
    ///
    /// Instead, it turns out that we can achieve both of these goals
    /// with a simple rule: we trim when the root list length reaches
    /// a high-water mark; and then *after* trimming, we set the
    /// high-water mark equal to the resulting live-root count
    /// multiplied by a factor (e.g., 2).
    ///
    /// ## Proof
    ///
    /// - Root creation is O(1)
    ///
    ///   Assume a sequence of root creation and drop events (and no
    ///   GCs, with a static GC graph, in the worst case -- only roots
    ///   are changing). We want to show that after N root creations,
    ///   we have incurred only O(N) cost scanning the
    ///   `liveness_flags` list over the whole sequence.
    ///
    ///   Assume a default high-water mark of D (e.g., 8) at
    ///   initialization with an empty root list.
    ///
    ///   Consider "epochs" in the sequence split by trim events where
    ///   we scan the root list. Proceed by induction over epochs to
    ///   show: after each epoch, we will have scanned at most 2N
    ///   roots after N root creations.
    ///
    ///   (These epochs don't exist in the algorithm: this is just a
    ///   mechanism to analyze the behavior.)
    ///
    ///   Base case: after the first epoch, with D root creations, we
    ///   will scan D roots.
    ///
    ///   Induction step: we have created N roots and scanned at most
    ///   2N roots. After the previous scan, L roots are still live;
    ///   we then set the high-water mark for the next scan at 2L. The
    ///   list already has L entries, so after another L root
    ///   creations, the epoch ends. We will then incur a scan cost of
    ///   2L (the full list). At that point we have thus seen N + L
    ///   root creations, with 2N + 2L scan cost; the invariant holds.
    ///
    ///   (It's counter-intuitive that *not* raising the high-water
    ///   mark exponentially can still result in a constant amortized
    ///   cost! One intuition to understand this is that each root
    ///   that remains alive after a scan pushes the next high-water
    ///   mark up by one, so requires a new root creation to "pay for"
    ///   its next scan. So any given root may be scanned many times,
    ///   but each such root ensures other root creations happen to
    ///   maintain the amortized cost.)
    ///
    /// - `liveness_flags.len()` is always O(|max live roots|)
    ///
    ///   Before the first trim, we have between 0 and D live roots,
    ///   which is O(1) (`D` is a compile-time constant).
    ///
    ///   Just after a trim, the `liveness_flags` list has only live
    ///   roots, and the max live-root count is at least the count at
    ///   this time, so the property holds.
    ///
    ///   The instantaneous maximum number of live roots is greater
    ///   than or equal to the maximum number of live roots observed
    ///   during a trim. (The trim is just some point in time, and the
    ///   max at some point in time is at most the overall max.)
    ///
    ///   The high-water mark is set to 2 * `liveness_flags.len()`
    ///   after a trim, i.e., twice the number of live roots at that
    ///   time, and we trim when we reach the high-water mark. So the
    ///   length of the array cannot exceed twice the live-root count
    ///   at the last trim, which in turn is at most twice the overall
    ///   max. So transitively, the list length at any time is always
    ///   O(|max live roots|).
    ///
    /// We thus have tight bounds (deterministic, not randomized) for
    /// all possible sequences of root creation/dropping, ensuring
    /// robustness.
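    ///
    /// In pseudocode, the policy implemented below is roughly (field and
    /// helper names abbreviated for illustration):
    ///
    /// ```ignore
    /// fn trim(&mut self, eager: bool) {
    ///     let hwm = self.high_water.unwrap_or(DEFAULT_HIGH_WATER);
    ///     if !eager && self.liveness_flags.len() < hwm {
    ///         return; // below the mark: skip the O(n) scan entirely
    ///     }
    ///     // O(n) scan: unroot every entry whose `OwnedRooted` was dropped.
    ///     self.liveness_flags.retain(|flag| flag.is_live());
    ///     // Reset the mark to a multiple of the *live* count.
    ///     self.high_water = max(DEFAULT_HIGH_WATER,
    ///                           GROWTH_FACTOR * self.liveness_flags.len());
    /// }
    /// ```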
    pub(crate) fn trim_liveness_flags(&mut self, gc_store: &mut GcStore, eager: bool) {
        const DEFAULT_HIGH_WATER: usize = 8;
        const GROWTH_FACTOR: usize = 2;
        let high_water_mark = self
            .liveness_trim_high_water
            .map(|x| x.get())
            .unwrap_or(DEFAULT_HIGH_WATER);
        if !eager && self.liveness_flags.len() < high_water_mark {
            return;
        }

        // Borrow the slab separately from `liveness_flags` so we can
        // deallocate entries from inside the `retain` closure.
        let owned_rooted = &mut self.owned_rooted;
        self.liveness_flags.retain(|(flag, index)| {
            if flag.strong_count() == 0 {
                // No more `OwnedRooted` instances are holding onto
                // this; dealloc the index and drop our `Weak`.
                let gc_ref = owned_rooted.dealloc(*index);
                gc_store.drop_gc_ref(gc_ref);
                // Don't retain in the list.
                false
            } else {
                // Retain in the list.
                true
            }
        });

        let post_trim_len = self.liveness_flags.len();
        let high_water_mark = core::cmp::max(
            DEFAULT_HIGH_WATER,
            post_trim_len.saturating_mul(GROWTH_FACTOR),
        );
        self.liveness_trim_high_water = Some(NonZeroUsize::new(high_water_mark).unwrap());
    }
}
716
717/// A scoped, rooted reference to a garbage-collected `T`.
718///
719/// A `Rooted<T>` is a strong handle to a garbage-collected `T`, preventing its
720/// referent (and anything else transitively referenced) from being collected by
721/// the GC during the scope within which this `Rooted<T>` was created.
722///
723/// When the context exits this `Rooted<T>`'s scope, the underlying GC object is
724/// automatically unrooted and any further attempts to use access the underlying
725/// object will return errors or otherwise fail.
726///
727/// `Rooted<T>` dereferences to its underlying `T`, allowing you to call `T`'s
728/// methods.
729///
730/// # Example
731///
732/// ```
733/// # use wasmtime::*;
734/// # fn _foo() -> Result<()> {
735/// let mut store = Store::<()>::default();
736///
737/// // Allocating a GC object returns a `Rooted<T>`.
738/// let hello: Rooted<ExternRef> = ExternRef::new(&mut store, "hello")?;
739///
740/// // Because `Rooted<T>` derefs to `T`, we can call `T` methods on a
741/// // `Rooted<T>`. For example, we can call the `ExternRef::data` method when we
742/// // have a `Rooted<ExternRef>`.
743/// let data = hello
744/// .data(&store)?
745/// .ok_or_else(|| Error::msg("externref has no host data"))?
746/// .downcast_ref::<&str>()
747/// .ok_or_else(|| Error::msg("not a str"))?;
748/// assert_eq!(*data, "hello");
749///
750/// // A `Rooted<T>` roots its underlying GC object for the duration of the
751/// // scope of the store/caller/context that was passed to the method that created
752/// // it. If we only want to keep a GC reference rooted and alive temporarily, we
753/// // can introduce new scopes with `RootScope`.
754/// {
755/// let mut scope = RootScope::new(&mut store);
756///
757/// // This `Rooted<T>` is automatically unrooted after `scope` is dropped,
758/// // allowing the collector to reclaim its GC object in the next GC.
759/// let scoped_ref = ExternRef::new(&mut scope, "goodbye");
760/// }
761///
762/// let module = Module::new(store.engine(), r#"
763/// (module
764/// (global (export "global") (mut externref) (ref.null extern))
765/// (table (export "table") 10 externref)
766/// (func (export "func") (param externref) (result externref)
767/// local.get 0
768/// )
769/// )
770/// "#)?;
771/// let instance = Instance::new(&mut store, &module, &[])?;
772///
773/// // Calls into Wasm that return GC references also yield `Rooted<T>`s
774/// // (wrapped in an `Option` when the Wasm type is nullable).
775/// let result: Option<Rooted<_>> = instance
776/// .get_typed_func::<Option<Rooted<ExternRef>>, Option<Rooted<ExternRef>>>(&mut store, "func")?
777/// .call(&mut store, Some(hello))?;
778///
779/// // Similarly, getting a GC reference from a Wasm instance's exported global
780/// // or table yields a `Rooted<T>`.
781///
782/// let global = instance
783/// .get_global(&mut store, "global")
784/// .ok_or_else(|| Error::msg("missing `global` export"))?;
785/// let global_val = global.get(&mut store);
786/// let global_ref: Option<&Rooted<_>> = global_val
787/// .externref()
788/// .ok_or_else(|| Error::msg("not an externref"))?;
789///
790/// let table = instance.get_table(&mut store, "table").unwrap();
791/// let table_elem = table
792/// .get(&mut store, 3)
793/// .ok_or_else(|| Error::msg("table out of bounds"))?;
794/// let table_elem_ref: Option<&Rooted<_>> = table_elem
795/// .as_extern()
796/// .ok_or_else(|| Error::msg("not an externref"))?;
797/// # Ok(())
798/// # }
799/// ```
800///
801/// # Differences Between `Rooted<T>` and `OwnedRooted<T>`
802///
803/// Because `Rooted<T>` is automatically unrooted when its scope is
804/// exited, it is only valid for strictly last-in-first-out (LIFO, aka
805/// stack order) lifetimes. This is in contrast to
806/// [`OwnedRooted<T>`][crate::OwnedRooted], which supports rooting GC
807/// objects for arbitrary lifetimes.
808///
809/// | Type | Supported Lifetimes | Unrooting | Cost |
810/// |----------------------------------------------|-----------------------------|-----------|----------------------------------|
811/// | [`Rooted<T>`][crate::Rooted] | Strictly LIFO / stack order | Automatic | very low (LIFO array) |
812/// | [`OwnedRooted<T>`][crate::OwnedRooted] | Arbitrary | Automatic | medium (separate `Arc` to track) |
813///
814/// `Rooted<T>` should suffice for most use cases and provides decent
815/// ergonomics. In cases where LIFO scopes are difficult to reason
816/// about (e.g. heap-managed data structures) or where they may cause
817/// erroneous behavior (e.g. errors propagated up the call stack),
818/// `OwnedRooted<T>` provides safe and ergonomic rooting at a small
819/// dynamic cost for its separate tracking allocation.
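///
/// For example, an `OwnedRooted<T>` can be stored in host-side collections
/// whose lifetimes do not follow stack order. A sketch (keeping a cache of
/// `OwnedRooted`s in the store's data is just one hypothetical setup, not a
/// Wasmtime requirement):
///
/// ```
/// # use wasmtime::*;
/// # use std::collections::HashMap;
/// # fn _foo() -> Result<()> {
/// // Host state that outlives any single rooting scope.
/// let mut store = Store::<HashMap<String, OwnedRooted<ExternRef>>>::default();
///
/// let owned = {
///     let mut scope = RootScope::new(&mut store);
///     let r = ExternRef::new(&mut scope, "cached value")?;
///     // Promote the scope-limited `Rooted` into an `OwnedRooted`.
///     r.to_owned_rooted(&mut scope)?
/// };
///
/// // The object stays rooted for as long as the `OwnedRooted` is kept around,
/// // regardless of rooting scopes.
/// store.data_mut().insert("key".to_string(), owned);
/// # Ok(())
/// # }
/// ```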
820///
821/// # Scopes
822///
823/// Wasmtime automatically creates two kinds of scopes:
824///
825/// 1. A [`Store`][crate::Store] is the outermost rooting scope. Creating a
826/// `Rooted<T>` directly inside a `Store` permanently roots the underlying
827/// object.
828///
829/// 2. A [`Caller`][crate::Caller] provides a rooting scope for the duration of
830/// a call from Wasm into a host function. Any objects rooted in a `Caller`
831/// will be unrooted after the host function returns. Note that there can be
832/// nested `Caller` scopes in the case where Wasm calls a host function,
833/// creating the first `Caller` and its rooting scope, and then the host
834/// function calls a Wasm function which then calls another host function,
835/// creating a second `Caller` and a second rooting scope. This nesting can
836/// be arbitrarily deep.
837///
838/// Additionally, if you would like to define finer-grained rooting scopes,
839/// Wasmtime provides the [`RootScope`][crate::RootScope] type.
840///
841/// Scopes are always nested in a last-in-first-out (LIFO) order. An outer scope
842/// is never exited (and the `Rooted<T>`s defined within it are never
843/// automatically unrooted) while an inner scope is still active. All inner
844/// scopes are exited before their outer scopes.
845///
846/// The following diagram illustrates various rooting scopes over time, how they
847/// nest, and when their `Rooted<T>`s are automatically unrooted:
848///
849/// ```text
850/// ----- new Store
851/// |
852/// |
853/// | let a: Rooted<T> = ...;
854/// |
855/// |
856/// | ----- call into Wasm
857/// | |
858/// | |
859/// | | ----- Wasm calls host function F
860/// | | |
861/// | | |
862/// | | | let b: Rooted<T> = ...;
863/// | | |
864/// | | |
865/// | | | ----- F calls into Wasm
866/// | | | |
867/// | | | |
868/// | | | | ----- Wasm calls host function G
869/// | | | | |
870/// | | | | |
871/// | | | | | let c: Rooted<T> = ...;
872/// | | | | |
873/// | | | | |
874/// | | | | ----- return to Wasm from host function G (unroots `c`)
875/// | | | |
876/// | | | |
877/// | | | ----- Wasm returns to F
878/// | | |
879/// | | |
880/// | | ----- return from host function F (unroots `b`)
881/// | |
882/// | |
883/// | ----- return from Wasm
884/// |
885/// |
886/// | ----- let scope1 = RootScope::new(...);
887/// | |
888/// | |
889/// | | let d: Rooted<T> = ...;
890/// | |
891/// | |
892/// | | ----- let scope2 = RootScope::new(...);
893/// | | |
894/// | | |
895/// | | | let e: Rooted<T> = ...;
896/// | | |
897/// | | |
898/// | | ----- drop `scope2` (unroots `e`)
899/// | |
900/// | |
901/// | ----- drop `scope1` (unroots `d`)
902/// |
903/// |
904/// ----- drop Store (unroots `a`)
905/// ```
906///
907/// A `Rooted<T>` can be used successfully as long as it is still rooted, so in
908/// the above diagram, `d` is valid inside `scope2` because `scope2` is wholly
909/// contained within the scope `d` was rooted within (`scope1`).
910///
911/// See also the documentation for [`RootScope`][crate::RootScope].
912#[repr(transparent)]
913pub struct Rooted<T: GcRef> {
914 inner: GcRootIndex,
915 _phantom: marker::PhantomData<T>,
916}
917
918impl<T: GcRef> Clone for Rooted<T> {
919 fn clone(&self) -> Self {
920 Rooted {
921 inner: self.inner,
922 _phantom: marker::PhantomData,
923 }
924 }
925}
926
927impl<T: GcRef> Copy for Rooted<T> {}
928
929impl<T: GcRef> Debug for Rooted<T> {
930 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
931 let name = format!("Rooted<{}>", any::type_name::<T>());
932 f.debug_struct(&name).field("inner", &self.inner).finish()
933 }
934}
935
936impl<T: GcRef> RootedGcRefImpl<T> for Rooted<T> {
937 fn get_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Option<&'a VMGcRef> {
938 assert!(
939 self.comes_from_same_store(store),
940 "object used with wrong store"
941 );
942 let index = self.inner.index.as_lifo().unwrap();
943 let entry = store.gc_roots().lifo_roots.get(index)?;
944 if entry.generation == self.inner.generation {
945 Some(&entry.gc_ref)
946 } else {
947 None
948 }
949 }
950}
951
952impl<T: GcRef> Deref for Rooted<T> {
953 type Target = T;
954
955 fn deref(&self) -> &Self::Target {
956 T::transmute_ref(&self.inner)
957 }
958}
959
960impl<T: GcRef> Rooted<T> {
961 /// Push the given `VMGcRef` onto our LIFO root set.
962 ///
963 /// `gc_ref` should belong to `store`'s heap; failure to uphold this is
964 /// memory safe but will result in general failures down the line such as
965 /// panics or incorrect results.
966 ///
967 /// `gc_ref` should be a GC reference pointing to an instance of the GC type
968 /// that `T` represents. Failure to uphold this invariant is memory safe but
969 /// will result in general incorrectness such as panics and wrong results.
970 pub(crate) fn new(store: &mut AutoAssertNoGc<'_>, gc_ref: VMGcRef) -> Rooted<T> {
971 let id = store.id();
972 let roots = store.gc_roots_mut();
973 let inner = roots.push_lifo_root(id, gc_ref);
974 Rooted {
975 inner,
976 _phantom: marker::PhantomData,
977 }
978 }
979
980 /// Create a new `Rooted<T>` from a `GcRootIndex`.
981 ///
982 /// Note that `Rooted::from_gc_root_index(my_rooted.index)` is not
983 /// necessarily an identity function, as it allows changing the `T` type
984 /// parameter.
985 ///
986 /// The given index should be a LIFO index of a GC reference pointing to an
987 /// instance of the GC type that `T` represents. Failure to uphold this
988 /// invariant is memory safe but will result in general incorrectness such
989 /// as panics and wrong results.
990 pub(crate) fn from_gc_root_index(inner: GcRootIndex) -> Rooted<T> {
991 debug_assert!(inner.index.is_lifo());
992 Rooted {
993 inner,
994 _phantom: marker::PhantomData,
995 }
996 }
997
998 #[inline]
999 pub(crate) fn comes_from_same_store(&self, store: &StoreOpaque) -> bool {
1000 debug_assert!(self.inner.index.is_lifo());
1001 self.inner.comes_from_same_store(store)
1002 }
1003
1004 /// Create an [`OwnedRooted<T>`][crate::OwnedRooted] holding onto the
1005 /// same GC object as `self`.
1006 ///
1007 /// Returns `None` if `self` is used outside of its scope and has therefore
1008 /// been unrooted.
1009 ///
1010 /// This does not unroot `self`, and `self` remains valid until its
1011 /// associated scope is exited.
1012 ///
1013 /// # Panics
1014 ///
1015    /// Panics if this object is not associated with the given store.
1016 ///
1017 /// # Example
1018 ///
1019 /// ```
1020 /// # use wasmtime::*;
1021 /// # fn _foo() -> Result<()> {
1022 /// let mut store = Store::<()>::default();
1023 ///
1024 /// let y: OwnedRooted<_> = {
1025 /// // Create a nested rooting scope.
1026 /// let mut scope = RootScope::new(&mut store);
1027 ///
1028 /// // `x` is only rooted within this nested scope.
1029 /// let x: Rooted<_> = ExternRef::new(&mut scope, "hello!")?;
1030 ///
1031 /// // Extend `x`'s rooting past its scope's lifetime by converting it
1032 /// // to an `OwnedRooted`.
1033 /// x.to_owned_rooted(&mut scope)?
1034 /// };
1035 ///
1036 /// // Now we can still access the reference outside the scope it was
1037 /// // originally defined within.
1038 /// let data = y.data(&store)?.expect("should have host data");
1039 /// let data = data.downcast_ref::<&str>().expect("host data should be str");
1040 /// assert_eq!(*data, "hello!");
1041 /// # Ok(())
1042 /// # }
1043 /// ```
1044 pub fn to_owned_rooted(&self, mut store: impl AsContextMut) -> Result<OwnedRooted<T>> {
1045 self._to_owned_rooted(store.as_context_mut().0)
1046 }
1047
1048 pub(crate) fn _to_owned_rooted(&self, store: &mut StoreOpaque) -> Result<OwnedRooted<T>> {
1049 let mut store = AutoAssertNoGc::new(store);
1050 let gc_ref = self.try_clone_gc_ref(&mut store)?;
1051 Ok(OwnedRooted::new(&mut store, gc_ref)?)
1052 }
1053
1054 /// Are these two `Rooted<T>`s the same GC root?
1055 ///
1056 /// Note that this function can return `false` even when `a` and `b` are
1057 /// rooting the same underlying GC object, but the object was rooted
1058 /// multiple times (for example in different scopes). Use
1059 /// [`Rooted::ref_eq`][crate::Rooted::ref_eq] to test whether these are
1060 /// references to the same underlying GC object or not.
1061 ///
1062 /// # Example
1063 ///
1064 /// ```
1065 /// # use wasmtime::*;
1066 /// # fn foo() -> Result<()> {
1067 /// let mut store = Store::<()>::default();
1068 ///
1069 /// let a = ExternRef::new(&mut store, "hello")?;
1070 /// let b = a;
1071 ///
1072 /// // `a` and `b` are the same GC root.
1073 /// assert!(Rooted::rooted_eq(a, b));
1074 ///
1075 /// {
1076 /// let mut scope = RootScope::new(&mut store);
1077 ///
1078 /// // `c` is a different GC root, in a different scope, even though it
1079 /// // is rooting the same object.
1080 /// let c = a.to_owned_rooted(&mut scope)?.to_rooted(&mut scope);
1081 /// assert!(!Rooted::rooted_eq(a, c));
1082 /// }
1083 ///
1084 /// let x = ExternRef::new(&mut store, "goodbye")?;
1085 ///
1086 /// // `a` and `x` are different GC roots, rooting different objects.
1087 /// assert!(!Rooted::rooted_eq(a, x));
1088 /// # Ok(())
1089 /// # }
1090 /// ```
1091 pub fn rooted_eq(a: Self, b: Self) -> bool {
1092 a.inner == b.inner
1093 }
1094
1095 /// Are these two GC roots referencing the same underlying GC object?
1096 ///
1097 /// This function will return `true` even when `a` and `b` are different GC
1098 /// roots (for example because they were rooted in different scopes) if they
1099 /// are rooting the same underlying GC object. To only test whether they are
1100 /// the same GC root, and not whether they are rooting the same GC object,
1101 /// use [`Rooted::rooted_eq`][crate::Rooted::rooted_eq].
1102 ///
1103 /// Returns an error if either `a` or `b` has been unrooted, for example
1104 /// because the scope it was rooted within has been exited.
1105 ///
1106 /// Because this method takes any `impl RootedGcRef<T>` arguments, it can be
1107    /// used to compare, for example, a `Rooted<T>` and an `OwnedRooted<T>`.
1108 ///
1109 /// # Panics
1110 ///
1111 /// Panics if either `a` or `b` is not associated with the given `store`.
1112 ///
1113 /// # Example
1114 ///
1115 /// ```
1116 /// # use wasmtime::*;
1117 /// # fn foo() -> Result<()> {
1118 /// let mut store = Store::<()>::default();
1119 ///
1120 /// let a = ExternRef::new(&mut store, "hello")?;
1121 /// let b = a;
1122 ///
1123 /// // `a` and `b` are rooting the same object.
1124 /// assert!(Rooted::ref_eq(&store, &a, &b)?);
1125 ///
1126 /// {
1127 /// let mut scope = RootScope::new(&mut store);
1128 ///
1129 /// // `c` is a different GC root, in a different scope, but still
1130 /// // rooting the same object.
1131 /// let c = a.to_owned_rooted(&mut scope)?.to_rooted(&mut scope);
1132 /// assert!(!Rooted::ref_eq(&scope, &a, &c)?);
1133 /// }
1134 ///
1135 /// let x = ExternRef::new(&mut store, "goodbye")?;
1136 ///
1137 /// // `a` and `x` are rooting different objects.
1138 /// assert!(!Rooted::ref_eq(&store, &a, &x)?);
1139 ///
1140 /// // You can also compare `Rooted<T>`s and `OwnedRooted<T>`s with this
1141 /// // function.
1142 /// let d = a.to_owned_rooted(&mut store)?;
1143 /// assert!(Rooted::ref_eq(&store, &a, &d)?);
1144 /// # Ok(())
1145 /// # }
1146 /// ```
1147 pub fn ref_eq(
1148 store: impl AsContext,
1149 a: &impl RootedGcRef<T>,
1150 b: &impl RootedGcRef<T>,
1151 ) -> Result<bool> {
1152 let store = store.as_context().0;
1153 Self::_ref_eq(store, a, b)
1154 }
1155
1156 pub(crate) fn _ref_eq(
1157 store: &StoreOpaque,
1158 a: &impl RootedGcRef<T>,
1159 b: &impl RootedGcRef<T>,
1160 ) -> Result<bool> {
1161 let a = a.try_gc_ref(store)?;
1162 let b = b.try_gc_ref(store)?;
1163 Ok(a == b)
1164 }
1165
1166 /// Hash this root.
1167 ///
1168 /// Note that, similar to `Rooted::rooted_eq`, this only operates on the
1169 /// root and *not* the underlying GC reference. That means that two
1170 /// different rootings of the same object will hash to different values
1171 /// (modulo hash collisions). If this is undesirable, use the
1172 /// [`ref_hash`][crate::Rooted::ref_hash] method instead.
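    ///
    /// # Example
    ///
    /// A sketch of hashing two copies of the same root, using
    /// `std::collections::hash_map::DefaultHasher` (any `Hasher` works):
    ///
    /// ```
    /// # use wasmtime::*;
    /// # use std::collections::hash_map::DefaultHasher;
    /// # use std::hash::Hasher;
    /// # fn _foo() -> Result<()> {
    /// let mut store = Store::<()>::default();
    /// let a = ExternRef::new(&mut store, "hello")?;
    /// let b = a; // `Rooted<T>` is `Copy`, so `b` is the same root as `a`.
    ///
    /// let mut hasher_a = DefaultHasher::new();
    /// a.rooted_hash(&mut hasher_a);
    /// let mut hasher_b = DefaultHasher::new();
    /// b.rooted_hash(&mut hasher_b);
    ///
    /// // The same root always hashes to the same value.
    /// assert_eq!(hasher_a.finish(), hasher_b.finish());
    /// # Ok(())
    /// # }
    /// ```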
1173 pub fn rooted_hash<H>(&self, state: &mut H)
1174 where
1175 H: Hasher,
1176 {
1177 self.inner.hash(state);
1178 }
1179
1180 /// Hash the underlying rooted object reference.
1181 ///
1182    /// Note that, similar to `Rooted::ref_eq`, this operates on the underlying
1183 /// rooted GC object reference, not the root. That means that two
1184 /// *different* rootings of the same object will hash to the *same*
1185 /// value. If this is undesirable, use the
1186 /// [`rooted_hash`][crate::Rooted::rooted_hash] method instead.
1187 pub fn ref_hash<H>(&self, store: impl AsContext, state: &mut H) -> Result<()>
1188 where
1189 H: Hasher,
1190 {
1191 let gc_ref = self.try_gc_ref(store.as_context().0)?;
1192 gc_ref.hash(state);
1193 Ok(())
1194 }
1195
1196 /// Cast `self` to a `Rooted<U>`.
1197 ///
1198 /// It is the caller's responsibility to ensure that `self` is actually a
1199 /// `U`. Failure to uphold this invariant will be memory safe but will
1200 /// result in general incorrectness such as panics and wrong results.
1201 pub(crate) fn unchecked_cast<U: GcRef>(self) -> Rooted<U> {
1202 Rooted::from_gc_root_index(self.inner)
1203 }
1204
1205 /// Common implementation of the `WasmTy::store` trait method for all
1206 /// `Rooted<T>`s.
1207 pub(super) fn wasm_ty_store(
1208 self,
1209 store: &mut AutoAssertNoGc<'_>,
1210 ptr: &mut MaybeUninit<ValRaw>,
1211 val_raw: impl Fn(u32) -> ValRaw,
1212 ) -> Result<()> {
1213 let gc_ref = self.inner.try_clone_gc_ref(store)?;
1214
1215 let raw = match store.optional_gc_store_mut() {
1216 Some(s) => s.expose_gc_ref_to_wasm(gc_ref),
1217 None => {
1218 // NB: do not force the allocation of a GC heap just because the
1219 // program is using `i31ref`s.
1220 debug_assert!(gc_ref.is_i31());
1221 gc_ref.as_raw_non_zero_u32()
1222 }
1223 };
1224
1225 ptr.write(val_raw(raw.get()));
1226 Ok(())
1227 }
1228
1229 /// Common implementation of the `WasmTy::load` trait method for all
1230 /// `Rooted<T>`s.
1231 pub(super) fn wasm_ty_load(
1232 store: &mut AutoAssertNoGc<'_>,
1233 raw_gc_ref: u32,
1234 from_cloned_gc_ref: impl Fn(&mut AutoAssertNoGc<'_>, VMGcRef) -> Self,
1235 ) -> Self {
1236 debug_assert_ne!(raw_gc_ref, 0);
1237 let gc_ref = VMGcRef::from_raw_u32(raw_gc_ref).expect("non-null");
1238
1239 let gc_ref = match store.optional_gc_store_mut() {
1240 Some(s) => s.clone_gc_ref(&gc_ref),
1241 None => {
1242 // NB: do not force the allocation of a GC heap just because the
1243 // program is using `i31ref`s.
1244 debug_assert!(gc_ref.is_i31());
1245 gc_ref.unchecked_copy()
1246 }
1247 };
1248
1249 from_cloned_gc_ref(store, gc_ref)
1250 }
1251
1252 /// Common implementation of the `WasmTy::store` trait method for all
1253 /// `Option<Rooted<T>>`s.
1254 pub(super) fn wasm_ty_option_store(
1255 me: Option<Self>,
1256 store: &mut AutoAssertNoGc<'_>,
1257 ptr: &mut MaybeUninit<ValRaw>,
1258 val_raw: impl Fn(u32) -> ValRaw,
1259 ) -> Result<()> {
1260 match me {
1261 Some(me) => me.wasm_ty_store(store, ptr, val_raw),
1262 None => {
1263 ptr.write(val_raw(0));
1264 Ok(())
1265 }
1266 }
1267 }
1268
1269 /// Common implementation of the `WasmTy::load` trait method for all
1270 /// `Option<Rooted<T>>`s.
1271 pub(super) fn wasm_ty_option_load(
1272 store: &mut AutoAssertNoGc<'_>,
1273 raw_gc_ref: u32,
1274 from_cloned_gc_ref: impl Fn(&mut AutoAssertNoGc<'_>, VMGcRef) -> Self,
1275 ) -> Option<Self> {
1276 let gc_ref = VMGcRef::from_raw_u32(raw_gc_ref)?;
1277 let gc_ref = store.clone_gc_ref(&gc_ref);
1278 Some(from_cloned_gc_ref(store, gc_ref))
1279 }
1280}
1281
1282/// Nested rooting scopes.
1283///
1284/// `RootScope` allows the creation of nested rooting scopes for use with
1285/// [`Rooted<T>`][crate::Rooted]. This allows for fine-grained control over how
1286/// long a set of [`Rooted<T>`][crate::Rooted]s are strongly held alive, giving
1287/// you the tools necessary to avoid holding onto GC objects longer than
1288/// necessary. `Rooted<T>`s created within a `RootScope` are automatically
1289/// unrooted when the `RootScope` is dropped. For more details on
1290/// [`Rooted<T>`][crate::Rooted] lifetimes and their interaction with rooting
1291/// scopes, see [`Rooted<T>`][crate::Rooted]'s documentation.
1292///
1293/// A `RootScope<C>` wraps a `C: AsContextMut` (that is, anything that
1294/// represents exclusive access to a [`Store`][crate::Store]) and in turn
1295/// implements [`AsContext`][crate::AsContext] and
1296/// [`AsContextMut`][crate::AsContextMut] in terms of its underlying
1297/// `C`. Therefore, `RootScope<C>` can be used anywhere you would use the
1298/// underlying `C`, for example in the [`Global::get`][crate::Global::get]
1299/// method. Any `Rooted<T>`s created by a method that a `RootScope<C>` was
1300/// passed as context to are tied to the `RootScope<C>`'s scope and
1301/// automatically unrooted when the scope is dropped.
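///
/// For example, passing a `RootScope` to [`Global::get`][crate::Global::get]
/// ties any `Rooted<T>` inside the resulting [`Val`][crate::Val] to that
/// scope. A sketch (the `global` parameter is assumed to come from an
/// already-instantiated module):
///
/// ```
/// # use wasmtime::*;
/// # fn _scoped_get(store: &mut Store<()>, global: Global) {
/// {
///     let mut scope = RootScope::new(&mut *store);
///     // Any GC reference in `val` is rooted in `scope`, not in the store's
///     // outermost scope.
///     let val: Val = global.get(&mut scope);
///     // ... use `val` here ...
/// } // roots created within `scope` are unrooted here
/// # }
/// ```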
1302///
1303/// # Example
1304///
1305/// ```
1306/// # use wasmtime::*;
1307/// # fn _foo() -> Result<()> {
1308/// let mut store = Store::<()>::default();
1309///
1310/// let a: Rooted<_>;
1311/// let b: Rooted<_>;
1312/// let c: Rooted<_>;
1313///
1314/// // Root `a` in the store's scope. It will be rooted for the duration of the
1315/// // store's lifetime.
1316/// a = ExternRef::new(&mut store, 42)?;
1317///
1318/// // `a` is rooted, so we can access its data successfully.
1319/// assert!(a.data(&store).is_ok());
1320///
1321/// {
1322/// let mut scope1 = RootScope::new(&mut store);
1323///
1324/// // Root `b` in `scope1`.
1325/// b = ExternRef::new(&mut scope1, 36)?;
1326///
1327/// // Both `a` and `b` are rooted.
1328/// assert!(a.data(&scope1).is_ok());
1329/// assert!(b.data(&scope1).is_ok());
1330///
1331/// {
1332/// let mut scope2 = RootScope::new(&mut scope1);
1333///
1334/// // Root `c` in `scope2`.
1335/// c = ExternRef::new(&mut scope2, 36)?;
1336///
1337/// // All of `a`, `b`, and `c` are rooted.
1338/// assert!(a.data(&scope2).is_ok());
1339/// assert!(b.data(&scope2).is_ok());
1340/// assert!(c.data(&scope2).is_ok());
1341///
1342/// // Drop `scope2`.
1343/// }
1344///
1345/// // Now `a` and `b` are still rooted, but `c` was unrooted when we dropped
1346/// // `scope2`.
1347/// assert!(a.data(&scope1).is_ok());
1348/// assert!(b.data(&scope1).is_ok());
1349/// assert!(c.data(&scope1).is_err());
1350///
1351/// // Drop `scope1`.
1352/// }
1353///
1354/// // And now only `a` is still rooted. Both `b` and `c` were unrooted when we
1355/// // dropped their respective rooting scopes.
1356/// assert!(a.data(&store).is_ok());
1357/// assert!(b.data(&store).is_err());
1358/// assert!(c.data(&store).is_err());
1359/// # Ok(())
1360/// # }
1361/// ```
1362pub struct RootScope<C>
1363where
1364 C: AsContextMut,
1365{
1366 store: C,
1367 scope: usize,
1368}
1369
1370impl<C> Drop for RootScope<C>
1371where
1372 C: AsContextMut,
1373{
1374 fn drop(&mut self) {
1375 self.store.as_context_mut().0.exit_gc_lifo_scope(self.scope);
1376 }
1377}
1378
1379impl<C> RootScope<C>
1380where
1381 C: AsContextMut,
1382{
1383 // NB: we MUST NOT expose a method like
1384 //
1385 // pub fn store(&mut self) -> &mut Store { ... }
1386 //
1387 // because callers could do treacherous things like
1388 //
1389 // let scope1 = RootScope::new(&mut store1);
1390 // let scope2 = RootScope::new(&mut store2);
1391 // std::mem::swap(scope1.store(), scope2.store());
1392 //
1393    // and then we would start truncating the store's GC root set's LIFO roots to
1394 // the wrong lengths.
1395 //
1396 // Instead, we just implement `AsContext[Mut]` for `RootScope`.
1397
1398 /// Construct a new scope for rooting GC objects.
1399 ///
1400 /// # Example
1401 ///
1402 /// ```
1403 /// # use wasmtime::*;
1404 /// let mut store = Store::<()>::default();
1405 ///
1406 /// {
1407 /// let mut scope = RootScope::new(&mut store);
1408 ///
1409 /// // Temporarily root GC objects in this nested rooting scope...
1410 /// }
1411 /// ```
1412 pub fn new(store: C) -> Self {
1413 let scope = store.as_context().0.gc_roots().enter_lifo_scope();
1414 RootScope { store, scope }
1415 }
1416
1417 fn gc_roots(&mut self) -> &mut RootSet {
1418 self.store.as_context_mut().0.gc_roots_mut()
1419 }
1420
1421 fn lifo_roots(&mut self) -> &mut Vec<LifoRoot> {
1422 &mut self.gc_roots().lifo_roots
1423 }
1424
1425 /// Reserve enough capacity for `additional` GC roots in this scope.
1426 ///
1427 /// # Example
1428 ///
1429 /// ```
1430 /// # use wasmtime::*;
1431 /// let mut store = Store::<()>::default();
1432 ///
1433 /// {
1434 /// let mut scope = RootScope::new(&mut store);
1435 ///
1436 /// // Ensure we have enough storage pre-allocated to root five GC
1437 /// // references inside this scope without any underlying reallocation.
1438 /// scope.reserve(5);
1439 ///
1440 /// // ...
1441 /// }
1442 /// ```
1443 pub fn reserve(&mut self, additional: usize) {
1444 self.lifo_roots().reserve(additional);
1445 }
1446}
1447
1448impl<T> AsContext for RootScope<T>
1449where
1450 T: AsContextMut,
1451{
1452 type Data = T::Data;
1453
1454 fn as_context(&self) -> crate::StoreContext<'_, Self::Data> {
1455 self.store.as_context()
1456 }
1457}
1458
1459impl<T> AsContextMut for RootScope<T>
1460where
1461 T: AsContextMut,
1462{
1463 fn as_context_mut(&mut self) -> crate::StoreContextMut<'_, Self::Data> {
1464 self.store.as_context_mut()
1465 }
1466}
1467
1468/// Internal version of `RootScope` that only wraps a `&mut StoreOpaque` rather
1469/// than a whole `impl AsContextMut<Data = T>`.
1470pub(crate) struct OpaqueRootScope<S>
1471where
1472 S: AsStoreOpaque,
1473{
1474 store: S,
1475 scope: usize,
1476}
1477
1478impl<S> Drop for OpaqueRootScope<S>
1479where
1480 S: AsStoreOpaque,
1481{
1482 fn drop(&mut self) {
1483 self.store.as_store_opaque().exit_gc_lifo_scope(self.scope);
1484 }
1485}
1486
1487impl<S> Deref for OpaqueRootScope<S>
1488where
1489 S: AsStoreOpaque,
1490{
1491 type Target = S;
1492
1493 fn deref(&self) -> &Self::Target {
1494 &self.store
1495 }
1496}
1497
1498// XXX: Don't use this `DerefMut` implementation to `mem::swap`, `mem::replace`,
1499// or otherwise move the underlying `StoreOpaque` out of an `OpaqueRootScope`!
1500// That will truncate the store's GC root set's LIFO roots to the wrong length.
1501//
1502// We don't implement `DerefMut` for `RootScope` for exactly this reason, but
1503// allow it for `OpaqueRootScope` because it is only Wasmtime-internal and not
1504// publicly exported.
1505impl<S> DerefMut for OpaqueRootScope<S>
1506where
1507 S: AsStoreOpaque,
1508{
1509 fn deref_mut(&mut self) -> &mut Self::Target {
1510 &mut self.store
1511 }
1512}
1513
1514impl<S> OpaqueRootScope<S>
1515where
1516 S: AsStoreOpaque,
1517{
1518 pub(crate) fn new(mut store: S) -> Self {
1519 let scope = store.as_store_opaque().gc_roots().enter_lifo_scope();
1520 OpaqueRootScope { store, scope }
1521 }
1522}
1523
1524/// A rooted reference to a garbage-collected `T` with automatic lifetime.
1525///
1526/// An `OwnedRooted<T>` is a strong handle to a garbage-collected `T`,
1527/// preventing its referent (and anything else transitively referenced) from
1528/// being collected by the GC until it is dropped.
1529///
1530/// An `OwnedRooted<T>` keeps its rooted GC object alive at least
1531/// until the `OwnedRooted<T>` itself is dropped. The
1532/// "de-registration" of the root is automatic and is triggered (in a
1533/// deferred way) by the drop of this type.
1534///
1535/// The primary way to create an `OwnedRooted<T>` is to promote a temporary
1536/// `Rooted<T>` into an `OwnedRooted<T>` via its
1537/// [`to_owned_rooted`][crate::Rooted::to_owned_rooted] method.
1538///
1539/// `OwnedRooted<T>` dereferences to its underlying `T`, allowing you to call
1540/// `T`'s methods.
1541///
1542/// # Example
1543///
1544/// ```
1545/// # use wasmtime::*;
1546/// # fn _foo() -> Result<()> {
1547/// let mut store = Store::<Option<OwnedRooted<ExternRef>>>::default();
1548///
1549/// // Create our `OwnedRooted` in a nested scope to avoid rooting it for
1550/// // the duration of the store's lifetime.
1551/// let x = {
1552/// let mut scope = RootScope::new(&mut store);
1553/// let x = ExternRef::new(&mut scope, 1234)?;
1554/// x.to_owned_rooted(&mut scope)?
1555/// };
1556///
1557/// // Place `x` into our store.
1558/// *store.data_mut() = Some(x);
1559///
1560/// // Do a bunch of stuff that may or may not access, replace, or take `x`...
1561/// # Ok(())
1562/// # }
1563/// ```
1564pub struct OwnedRooted<T>
1565where
1566 T: GcRef,
1567{
1568 inner: GcRootIndex,
1569 liveness_flag: Arc<()>,
1570 _phantom: marker::PhantomData<T>,
1571}
1572
1573const _: () = {
1574 use crate::{AnyRef, ExternRef};
1575
1576 // NB: these match the C API which should also be updated if this changes.
1577 //
1578    // The size is really "16 + pointer + alignment": 20 bytes on some
1579    // 32-bit platforms, and 24 bytes both on 32-bit platforms that add
1580    // an extra 4 bytes of padding (e.g., riscv32) and on 64-bit
1581    // platforms.
1582 assert!(
1583 mem::size_of::<OwnedRooted<AnyRef>>() >= 16 && mem::size_of::<OwnedRooted<AnyRef>>() <= 24
1584 );
1585 assert!(mem::align_of::<OwnedRooted<AnyRef>>() == mem::align_of::<u64>());
1586 assert!(
1587 mem::size_of::<OwnedRooted<ExternRef>>() >= 16
1588 && mem::size_of::<OwnedRooted<ExternRef>>() <= 24
1589 );
1590 assert!(mem::align_of::<OwnedRooted<ExternRef>>() == mem::align_of::<u64>());
1591};
1592
1593impl<T: GcRef> Debug for OwnedRooted<T> {
1594 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1595 let name = format!("OwnedRooted<{}>", any::type_name::<T>());
1596 f.debug_struct(&name).field("inner", &self.inner).finish()
1597 }
1598}
1599
1600impl<T: GcRef> Deref for OwnedRooted<T> {
1601 type Target = T;
1602
1603 fn deref(&self) -> &Self::Target {
1604 T::transmute_ref(&self.inner)
1605 }
1606}
1607
1608impl<T: GcRef> Clone for OwnedRooted<T> {
1609 fn clone(&self) -> Self {
1610 OwnedRooted {
1611 inner: self.inner,
1612 liveness_flag: self.liveness_flag.clone(),
1613 _phantom: marker::PhantomData,
1614 }
1615 }
1616}
1617
1618impl<T> OwnedRooted<T>
1619where
1620 T: GcRef,
1621{
1622 /// Construct a new owned GC root.
1623 ///
1624 /// `gc_ref` should belong to `store`'s heap; failure to uphold this is
1625 /// memory safe but will result in general failures down the line such as
1626 /// panics or incorrect results.
1627 ///
1628 /// `gc_ref` should be a GC reference pointing to an instance of the GC type
1629 /// that `T` represents. Failure to uphold this invariant is memory safe but
1630 /// will result in general incorrectness such as panics and wrong results.
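    ///
    /// Each `OwnedRooted` holds an `Arc<()>` liveness flag and the root set
    /// tracks a matching `Weak<()>`; a root-set entry is stale once its
    /// `Weak` can no longer be upgraded. A standalone sketch of that
    /// technique in plain `std` (the `registry` below is hypothetical, not
    /// Wasmtime's actual root-set type):
    ///
    /// ```
    /// use std::sync::{Arc, Weak};
    ///
    /// // Each live root owns an `Arc<()>`; the registry tracks a `Weak<()>`.
    /// let mut registry: Vec<Weak<()>> = Vec::new();
    ///
    /// let liveness_flag = Arc::new(());
    /// registry.push(Arc::downgrade(&liveness_flag));
    /// assert!(registry[0].upgrade().is_some()); // the root is still alive
    ///
    /// // Dropping the root (its `Arc`) makes the registry entry stale.
    /// drop(liveness_flag);
    /// assert!(registry[0].upgrade().is_none());
    ///
    /// // Trimming retains only the entries whose roots are still alive.
    /// registry.retain(|w| w.upgrade().is_some());
    /// assert!(registry.is_empty());
    /// ```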
1631 pub(crate) fn new(
1632 store: &mut AutoAssertNoGc<'_>,
1633 gc_ref: VMGcRef,
1634 ) -> Result<Self, OutOfMemory> {
1635 // We always have the opportunity to trim and unregister stale
1636 // owned roots whenever we have a mut borrow to the store. We
1637 // take the opportunity to do so here to avoid tying growth of
1638 // the root-set to the GC frequency -- it is much cheaper to
1639 // eagerly trim these roots. Note that the trimming keeps a
1640 // "high water mark" that grows exponentially, so we have
1641 // amortized constant time even though an individual trim
1642 // takes time linear in the number of roots.
1643 store.trim_gc_liveness_flags(false);
1644
1645 let roots = store.gc_roots_mut();
1646 let id = roots.owned_rooted.alloc(gc_ref)?;
1647 let liveness_flag = Arc::new(());
1648 roots
1649 .liveness_flags
1650 .push((Arc::downgrade(&liveness_flag), id));
1651 Ok(OwnedRooted {
1652 inner: GcRootIndex {
1653 store_id: store.id(),
1654 generation: 0,
1655 index: PackedIndex::new_owned(id),
1656 },
1657 liveness_flag,
1658 _phantom: marker::PhantomData,
1659 })
1660 }
1661
1662 #[inline]
1663 pub(crate) fn comes_from_same_store(&self, store: &StoreOpaque) -> bool {
1664 debug_assert!(self.inner.index.is_owned());
1665 self.inner.comes_from_same_store(store)
1666 }
1667
1668 /// Clone this `OwnedRooted<T>` into a `Rooted<T>`.
1669 ///
1670 /// This operation does not consume or unroot this `OwnedRooted<T>`.
1671 ///
1672 /// The underlying GC object is re-rooted in the given context's scope. The
1673 /// resulting `Rooted<T>` is only valid during the given context's
1674 /// scope. See the [`Rooted<T>`][crate::Rooted] documentation for more
1675 /// details on rooting scopes.
1678 ///
1679 /// # Panics
1680 ///
1681 /// Panics if this object is not associated with the given context's store.
1682 ///
1683 /// # Example
1684 ///
1685 /// ```
1686 /// # use wasmtime::*;
1687 /// # fn _foo() -> Result<()> {
1688 /// let mut store = Store::<()>::default();
1689 ///
1690 /// let root1: Rooted<_>;
1691 ///
1692 /// let owned = {
1693 /// let mut scope = RootScope::new(&mut store);
1694 /// root1 = ExternRef::new(&mut scope, 1234)?;
1695 /// root1.to_owned_rooted(&mut scope)?
1696 /// };
1697 ///
1698 /// // `root1` is no longer accessible because it was unrooted when `scope`
1699 /// // was dropped.
1700 /// assert!(root1.data(&store).is_err());
1701 ///
1702 /// // But we can re-root `owned` into this scope.
1703 /// let root2 = owned.to_rooted(&mut store);
1704 /// assert!(root2.data(&store).is_ok());
1705 /// # Ok(())
1706 /// # }
1707 /// ```
1708 pub fn to_rooted(&self, mut context: impl AsContextMut) -> Rooted<T> {
1709 self._to_rooted(context.as_context_mut().0)
1710 }
1711
1712 pub(crate) fn _to_rooted(&self, store: &mut StoreOpaque) -> Rooted<T> {
1713 assert!(
1714 self.comes_from_same_store(store),
1715 "object used with wrong store"
1716 );
1717 let mut store = AutoAssertNoGc::new(store);
1718 let gc_ref = self.clone_gc_ref(&mut store).unwrap();
1719 Rooted::new(&mut store, gc_ref)
1720 }

    /// Are these two GC roots referencing the same underlying GC object?
    ///
    /// This function will return `true` even when `a` and `b` are different GC
    /// roots (for example because they were rooted in different scopes) if they
    /// are rooting the same underlying GC object.
    ///
    /// Because this method takes any `impl RootedGcRef<T>` arguments, it can be
    /// used to compare, for example, a `Rooted<T>` and an `OwnedRooted<T>`.
    ///
    /// # Panics
    ///
    /// Panics if either `a` or `b` is not associated with the given `store`.
    ///
    /// # Example
    ///
    /// ```
    /// # use wasmtime::*;
    /// # fn foo() -> Result<()> {
    /// let mut store = Store::<()>::default();
    ///
    /// let a;
    /// let b;
    /// let x;
    ///
    /// {
    ///     let mut scope = RootScope::new(&mut store);
    ///
    ///     a = ExternRef::new(&mut scope, "hello")?.to_owned_rooted(&mut scope)?;
    ///     b = a.clone();
    ///
    ///     // `a` and `b` are rooting the same object.
    ///     assert!(OwnedRooted::ref_eq(&scope, &a, &b)?);
    ///
    ///     // `c` is a different GC root, is in a different scope, and is a
    ///     // `Rooted<T>` instead of an `OwnedRooted<T>`, but is still rooting
    ///     // the same object.
    ///     let c = a.to_rooted(&mut scope);
    ///     assert!(OwnedRooted::ref_eq(&scope, &a, &c)?);
    ///
    ///     x = ExternRef::new(&mut scope, "goodbye")?.to_owned_rooted(&mut scope)?;
    ///
    ///     // `a` and `x` are rooting different objects.
    ///     assert!(!OwnedRooted::ref_eq(&scope, &a, &x)?);
    /// }
    /// # Ok(())
    /// # }
    /// ```
    pub fn ref_eq(
        store: impl AsContext,
        a: &impl RootedGcRef<T>,
        b: &impl RootedGcRef<T>,
    ) -> Result<bool> {
        Rooted::ref_eq(store, a, b)
    }

    /// Hash this root.
    ///
    /// Note that, similar to `Rooted::rooted_eq`, this only operates on the
    /// root and *not* the underlying GC reference. That means that two
    /// different rootings of the same object will hash to different values
    /// (modulo hash collisions). If this is undesirable, use the
    /// [`ref_hash`][crate::OwnedRooted::ref_hash] method instead.
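    ///
    /// # Example
    ///
    /// A minimal sketch of hashing a root with `std`'s `DefaultHasher`; any
    /// `Hasher` implementation works the same way:
    ///
    /// ```
    /// # use wasmtime::*;
    /// # use std::collections::hash_map::DefaultHasher;
    /// # use std::hash::Hasher;
    /// # fn _foo() -> Result<()> {
    /// let mut store = Store::<()>::default();
    ///
    /// let a = ExternRef::new(&mut store, 1234)?.to_owned_rooted(&mut store)?;
    /// let b = a.clone();
    ///
    /// let hash = |r: &OwnedRooted<ExternRef>| {
    ///     let mut hasher = DefaultHasher::new();
    ///     r.rooted_hash(&mut hasher);
    ///     hasher.finish()
    /// };
    ///
    /// // Clones of one root are the *same* rooting, so they hash to the
    /// // same value.
    /// assert_eq!(hash(&a), hash(&b));
    /// # Ok(())
    /// # }
    /// ```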
    pub fn rooted_hash<H>(&self, state: &mut H)
    where
        H: Hasher,
    {
        self.inner.hash(state);
    }

    /// Hash the underlying rooted object reference.
    ///
    /// Note that, similar to `Rooted::ref_eq`, this operates on the underlying
    /// rooted GC object reference, not the root. That means that two
    /// *different* rootings of the same object will hash to the *same*
    /// value. If this is undesirable, use the
    /// [`rooted_hash`][crate::OwnedRooted::rooted_hash] method instead.
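    ///
    /// # Example
    ///
    /// A minimal sketch of hashing the underlying object with `std`'s
    /// `DefaultHasher`:
    ///
    /// ```
    /// # use wasmtime::*;
    /// # use std::collections::hash_map::DefaultHasher;
    /// # use std::hash::Hasher;
    /// # fn _foo() -> Result<()> {
    /// let mut store = Store::<()>::default();
    ///
    /// let x = ExternRef::new(&mut store, 1234)?;
    ///
    /// // Two *different* rootings of the same underlying object.
    /// let a = x.to_owned_rooted(&mut store)?;
    /// let b = x.to_owned_rooted(&mut store)?;
    ///
    /// let hash = |r: &OwnedRooted<ExternRef>| {
    ///     let mut hasher = DefaultHasher::new();
    ///     r.ref_hash(&store, &mut hasher);
    ///     hasher.finish()
    /// };
    ///
    /// // They hash to the same value because they root the same object.
    /// assert_eq!(hash(&a), hash(&b));
    /// # Ok(())
    /// # }
    /// ```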
    pub fn ref_hash<H>(&self, store: impl AsContext, state: &mut H)
    where
        H: Hasher,
    {
        let gc_ref = self
            .get_gc_ref(store.as_context().0)
            .expect("OwnedRooted's get_gc_ref is infallible");
        gc_ref.hash(state);
    }

    /// Cast `self` to an `OwnedRooted<U>`.
    ///
    /// It is the caller's responsibility to ensure that `self` is actually a
    /// `U`. Failure to uphold this invariant will be memory safe but will
    /// result in general incorrectness such as panics and wrong results.
    pub(crate) fn unchecked_cast<U: GcRef>(self) -> OwnedRooted<U> {
        OwnedRooted {
            inner: self.inner,
            liveness_flag: self.liveness_flag,
            _phantom: core::marker::PhantomData,
        }
    }

    /// Common implementation of the `WasmTy::store` trait method for all
    /// `OwnedRooted<T>`s.
    pub(super) fn wasm_ty_store(
        self,
        store: &mut AutoAssertNoGc<'_>,
        ptr: &mut MaybeUninit<ValRaw>,
        val_raw: impl Fn(u32) -> ValRaw,
    ) -> Result<()> {
        let gc_ref = self.try_clone_gc_ref(store)?;

        let raw = match store.optional_gc_store_mut() {
            Some(s) => s.expose_gc_ref_to_wasm(gc_ref),
            None => {
                debug_assert!(gc_ref.is_i31());
                gc_ref.as_raw_non_zero_u32()
            }
        };

        ptr.write(val_raw(raw.get()));
        Ok(())
    }

    /// Common implementation of the `WasmTy::load` trait method for all
    /// `OwnedRooted<T>`s.
    pub(super) fn wasm_ty_load(
        store: &mut AutoAssertNoGc<'_>,
        raw_gc_ref: u32,
        from_cloned_gc_ref: impl Fn(&mut AutoAssertNoGc<'_>, VMGcRef) -> Rooted<T>,
    ) -> Self {
        debug_assert_ne!(raw_gc_ref, 0);
        let gc_ref = VMGcRef::from_raw_u32(raw_gc_ref).expect("non-null");
        let gc_ref = store.clone_gc_ref(&gc_ref);
        RootSet::with_lifo_scope(store, |store| {
            let rooted = from_cloned_gc_ref(store, gc_ref);
            rooted._to_owned_rooted(store).expect("rooted is in scope")
        })
    }

    /// Common implementation of the `WasmTy::store` trait method for all
    /// `Option<OwnedRooted<T>>`s.
    pub(super) fn wasm_ty_option_store(
        me: Option<Self>,
        store: &mut AutoAssertNoGc<'_>,
        ptr: &mut MaybeUninit<ValRaw>,
        val_raw: impl Fn(u32) -> ValRaw,
    ) -> Result<()> {
        match me {
            Some(me) => me.wasm_ty_store(store, ptr, val_raw),
            None => {
                ptr.write(val_raw(0));
                Ok(())
            }
        }
    }

    /// Common implementation of the `WasmTy::load` trait method for all
    /// `Option<OwnedRooted<T>>`s.
    pub(super) fn wasm_ty_option_load(
        store: &mut AutoAssertNoGc<'_>,
        raw_gc_ref: u32,
        from_cloned_gc_ref: impl Fn(&mut AutoAssertNoGc<'_>, VMGcRef) -> Rooted<T>,
    ) -> Option<Self> {
        let gc_ref = VMGcRef::from_raw_u32(raw_gc_ref)?;
        let gc_ref = store.clone_gc_ref(&gc_ref);
        RootSet::with_lifo_scope(store, |store| {
            let rooted = from_cloned_gc_ref(store, gc_ref);
            Some(rooted._to_owned_rooted(store).expect("rooted is in scope"))
        })
    }

    #[doc(hidden)]
    pub fn into_parts_for_c_api(self) -> (NonZeroU64, u32, u32, *const ()) {
        (
            self.inner.store_id.as_raw(),
            self.inner.generation,
            self.inner.index.0,
            Arc::into_raw(self.liveness_flag),
        )
    }

    #[doc(hidden)]
    pub unsafe fn from_borrowed_raw_parts_for_c_api(
        a: NonZeroU64,
        b: u32,
        c: u32,
        d: *const (),
    ) -> OwnedRooted<T> {
        // We are given a *borrow* of the `Arc`. This is subtle because
        // `Arc::from_raw` takes *ownership* of the passed-in pointer, so we
        // clone the reconstructed `Arc` and then `forget` the original to
        // avoid decrementing a reference count we do not own.
        let liveness_flag = {
            let original = unsafe { Arc::from_raw(d) };
            let clone = original.clone();
            core::mem::forget(original);
            clone
        };
        OwnedRooted {
            inner: GcRootIndex {
                store_id: StoreId::from_raw(a),
                generation: b,
                index: PackedIndex(c),
            },
            liveness_flag,
            _phantom: marker::PhantomData,
        }
    }

    #[doc(hidden)]
    pub unsafe fn from_owned_raw_parts_for_c_api(
        a: NonZeroU64,
        b: u32,
        c: u32,
        d: *const (),
    ) -> OwnedRooted<T> {
        let liveness_flag = unsafe { Arc::from_raw(d) };
        OwnedRooted {
            inner: GcRootIndex {
                store_id: StoreId::from_raw(a),
                generation: b,
                index: PackedIndex(c),
            },
            liveness_flag,
            _phantom: marker::PhantomData,
        }
    }
}

impl<T: GcRef> RootedGcRefImpl<T> for OwnedRooted<T> {
    fn get_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Option<&'a VMGcRef> {
        assert!(
            self.comes_from_same_store(store),
            "object used with wrong store"
        );

        let id = self.inner.index.as_owned().unwrap();
        store.gc_roots().owned_rooted.get(id)
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::ExternRef;

    #[test]
    fn sizes() {
        // Try to keep tabs on the size of these things. Don't want them growing
        // unintentionally.
        assert_eq!(std::mem::size_of::<Rooted<ExternRef>>(), 16);
        assert!(std::mem::size_of::<OwnedRooted<ExternRef>>() <= 24);
    }
}