// wasmtime/runtime/gc/enabled/rooting.rs

//! Garbage collection rooting APIs.
//!
//! Rooting prevents GC objects from being collected while they are actively
//! being used.
//!
//! ## Goals
//!
//! We have a few sometimes-conflicting goals with our GC rooting APIs:
//!
//! 1. Safety: It should never be possible to get a use-after-free bug because
//!    the user misused the rooting APIs, the collector "mistakenly" determined
//!    an object was unreachable and collected it, and then the user tried to
//!    access the object. This is our highest priority.
//!
//! 2. Moving GC: Our rooting APIs should support moving collectors (such as
//!    generational and compacting collectors) where an object might get
//!    relocated after a collection and we need to update the GC root's pointer
//!    to the moved object. This means we either need cooperation and internal
//!    mutability from individual GC roots as well as the ability to enumerate
//!    all GC roots on the native Rust stack, or we need a level of indirection.
//!
//! 3. Performance: Our rooting APIs should generally be as low-overhead as
//!    possible. They definitely shouldn't require synchronization and locking
//!    to create, access, and drop GC roots.
//!
//! 4. Ergonomics: Our rooting APIs should be, if not a pleasure, then at least
//!    not a burden for users. Additionally, the API's types should be `Sync`
//!    and `Send` so that they work well with async Rust.
//!
//! The two main design axes that trade off the above goals are:
//!
//! - Where the GC reference itself is held. A root object could
//!   directly hold the underlying GC reference (offset into the GC
//!   storage area), which would allow more efficient dereferencing
//!   and access to the referred-to GC object. However, goal (2)
//!   requires that the GC is able to update references when objects
//!   move during a GC. Thus, such "direct roots" would need to be
//!   registered somehow in a global root registry, and would need to
//!   unregister themselves when dropped.
//!
//!   The alternative is to hold some indirect kind of reference to a
//!   GC reference, with the latter stored directly in the `Store` so
//!   the GC can update it freely. This adds one pointer-chasing hop
//!   to accesses, but works much more nicely with ownership
//!   semantics. Logically, the `Store` "owns" the actual pointers;
//!   and rooting types own the slots that they are stored in.
//!
//!   For the above reasons, all of our rooting types below use
//!   indirection. This avoids the need for an unsafe
//!   intrusive-linked-list for global registration, or a shared
//!   reference to a mutex-protected registry, or some other
//!   error-prone technique.
//!
//! - How unrooting occurs. Ideally, a rooting type would implement
//!   the Rust `Drop` trait and unroot itself when the Rust value is
//!   dropped. However, because the rooting state is held in the
//!   `Store`, this direct approach would imply keeping a shared,
//!   mutex-protected handle to the registry in every rooting
//!   object. This would add synchronization overhead to the common
//!   case, and in general would be a bad tradeoff.
//!
//!   However, there are several other approaches:
//!
//!   - The user could use an RAII wrapper that *does* own the `Store`,
//!     and defines a "scope" in which roots are created and then
//!     bulk-unrooted at the close of the scope.
//!   - The rooting type could hold a shared reference to some state
//!     *other* than the full registry, and update a flag in that
//!     state indicating it has been dropped; the `Store` could then
//!     later observe that flag and remove the root. This would have
//!     some allocation cost, but the shared state would be
//!     independent of the `Store` and specific to each root, so would
//!     impose no synchronization overhead between different roots or
//!     the GC itself.
//!   - The rooting type could provide a fully manual `unroot` method,
//!     allowing the user to make use of their own knowledge of their
//!     application's lifetimes and semantics and remove roots when
//!     appropriate.
//!
//!   We provide an implementation of the first two of these
//!   strategies below in `Rooted` and `OwnedRooted`. The last, fully
//!   manual, approach is too difficult to use correctly (it cannot
//!   implement Rust's `Drop` trait, but there is no way in Rust to
//!   enforce that a value must be consumed rather than dropped) so it
//!   is not implemented.
//!
//! ## Two Flavors of Rooting API
//!
//! Okay, with that out of the way, this module provides two flavors
//! of rooting API: one for the common, scoped-lifetime case, and one
//! that carries ownership until dropped and can work as an RAII handle
//! that interacts well with Rust ownership semantics (but at a minor
//! performance cost):
//!
//! 1. `RootScope` and `Rooted<T>`: These are used for temporarily rooting GC
//!    objects for the duration of a scope. The internal implementation takes
//!    advantage of the LIFO property inherent in scopes, making creating and
//!    dropping `Rooted<T>`s and `RootScope`s super fast and roughly equivalent
//!    to bump allocation.
//!
//!    This type is vaguely similar to V8's [`HandleScope`].
//!
//!    [`HandleScope`]: https://v8.github.io/api/head/classv8_1_1HandleScope.html
//!
//!    Note that `Rooted<T>` can't be statically tied to its context scope via a
//!    lifetime parameter, unfortunately, as that would allow the creation of
//!    only one `Rooted<T>` at a time, since the `Rooted<T>` would take a borrow
//!    of the whole context.
//!
//!    This supports the common use case for rooting and provides good
//!    ergonomics.
//!
//! 2. `OwnedRooted<T>`: This is a root that manages rooting and
//!    unrooting automatically with its lifetime as a Rust value. In
//!    other words, the continued existence of the Rust value ensures
//!    the rooting of the underlying GC reference; and when the Rust
//!    value is dropped, the underlying GC reference is no longer
//!    rooted.
//!
//!    Internally, this root holds a shared reference to a
//!    *root-specific* bit of state that is also tracked and observed
//!    by the `Store`. This means that there is minor memory
//!    allocation overhead (an `Arc<()>`) for each such root; this
//!    memory is shared over all clones of this root. The rooted GC
//!    reference is *logically* unrooted as soon as the last clone of
//!    this root is dropped. Internally, the root may still exist until
//!    the next GC, or the "root trim" when another `OwnedRooted` is
//!    created, but that is unobservable externally, and will not
//!    extend any GC object's lifetime, because stale roots are always
//!    cleaned up before a GC.
//!
//!    This type is roughly similar to SpiderMonkey's [`PersistentRooted<T>`],
//!    although they register roots on a per-thread `JSContext`, avoiding
//!    mutation costs in a way that is not viable for Wasmtime (which needs
//!    `Store`s to be `Send`).
//!
//!    [`PersistentRooted<T>`]: http://devdoc.net/web/developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey/JSAPI_reference/JS::PersistentRooted.html
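//!
//!    As a rough sketch of the intended usage (note: `to_owned_rooted` is
//!    an *assumed* name for the `Rooted`-to-`OwnedRooted` conversion here,
//!    for illustration only):
//!
//!    ```ignore
//!    // Allocation hands back a scope-rooted `Rooted<T>`...
//!    let r: Rooted<ExternRef> = ExternRef::new(&mut store, "hello")?;
//!    // ...which can be promoted into an `OwnedRooted<T>` that keeps the
//!    // object rooted across arbitrary control flow (assumed method name).
//!    let owned: OwnedRooted<ExternRef> = r.to_owned_rooted(&mut store)?;
//!    // Dropping the last clone of `owned` logically unroots the object;
//!    // the store reclaims its slot at the next root trim or GC.
//!    drop(owned);
//!    ```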
//!
//! At the end of the day, all of the above root types are just tagged
//! indices into the store's `RootSet`. This indirection allows
//! working with Rust's borrowing discipline (we use `&mut Store` to
//! represent mutable access to the GC heap) while still allowing
//! rooted references to be moved around without tying up the whole
//! store in borrows. Additionally, and crucially, this indirection
//! allows us to update the *actual* GC pointers in the `RootSet` and
//! support moving GCs (again, as mentioned above).
//!
//! ## Unrooted References
//!
//! We generally don't expose *unrooted* GC references in the Wasmtime API at
//! this time -- and I expect it will be a very long time before we do, but in
//! the limit we may want to let users define their own GC-managed types that
//! participate in GC tracing and all that -- so we don't have to worry about
//! failure to root an object causing use-after-free bugs or failing to update a
//! GC root pointer after a moving GC as long as users stick to our safe rooting
//! APIs. (The one exception is `ValRaw`, which does hold raw GC references. But
//! with `ValRaw` all bets are off and safety is 100% up to the user.)
//!
//! We do, however, have to worry about these things internally. So first of
//! all, try to avoid ever working with unrooted GC references if you
//! can. However, if you really must, consider also using an `AutoAssertNoGc`
//! across the block of code that is manipulating raw GC references.
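//!
//! For example, internal code of that shape looks roughly like this (an
//! illustrative sketch, not a prescribed pattern):
//!
//! ```ignore
//! // `AutoAssertNoGc` asserts that no GC can occur while it is alive, so
//! // the raw `VMGcRef` below cannot be invalidated out from under us.
//! let mut no_gc = AutoAssertNoGc::new(&mut store);
//! let raw: VMGcRef = root.try_clone_gc_ref(&mut no_gc)?;
//! // ... manipulate `raw`, carefully avoiding anything that can GC ...
//! drop(no_gc);
//! ```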

use crate::runtime::vm::{GcRootsList, GcStore, VMGcRef};
use crate::{
    AsContext, AsContextMut, GcRef, Result, RootedGcRef,
    store::{AsStoreOpaque, AutoAssertNoGc, StoreId, StoreOpaque},
};
use crate::{ValRaw, prelude::*};
use alloc::sync::{Arc, Weak};
use core::any;
use core::marker;
use core::mem::{self, MaybeUninit};
use core::num::{NonZeroU64, NonZeroUsize};
use core::{
    fmt::{self, Debug},
    hash::{Hash, Hasher},
    ops::{Deref, DerefMut},
};
use wasmtime_slab::{Id as SlabId, Slab};

mod sealed {
    use super::*;

    /// Sealed, `wasmtime`-internal trait for GC references.
    ///
    /// # Safety
    ///
    /// All types implementing this trait must:
    ///
    /// * Be a newtype of a `GcRootIndex`
    ///
    /// * Not implement `Copy` or `Clone`
    ///
    /// * Only have `&self` methods.
    pub unsafe trait GcRefImpl: Sized {
        /// Transmute a `&GcRootIndex` into an `&Self`.
        fn transmute_ref(index: &GcRootIndex) -> &Self;
    }

    /// Sealed, `wasmtime`-internal trait for the common methods on rooted GC
    /// references.
    pub trait RootedGcRefImpl<T: GcRef> {
        /// Get this rooted GC reference's raw `VMGcRef` out of the store's GC
        /// root set.
        ///
        /// Returns `None` for objects that have since been unrooted (e.g.
        /// because their associated `RootScope` was dropped).
        ///
        /// Panics if this root is not associated with the given store.
        fn get_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Option<&'a VMGcRef>;

        /// Same as `get_gc_ref` but returns an error instead of `None` for
        /// objects that have been unrooted.
        fn try_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Result<&'a VMGcRef> {
            self.get_gc_ref(store).ok_or_else(|| {
                anyhow!("attempted to use a garbage-collected object that has been unrooted")
            })
        }

        /// Get a clone of this rooted GC reference's raw `VMGcRef` out of the
        /// store's GC root set.
        ///
        /// Returns `None` for objects that have since been unrooted (e.g.
        /// because their associated `RootScope` was dropped).
        ///
        /// Panics if this root is not associated with the given store.
        fn clone_gc_ref(&self, store: &mut AutoAssertNoGc<'_>) -> Option<VMGcRef> {
            let gc_ref = self.get_gc_ref(store)?.unchecked_copy();
            Some(store.clone_gc_ref(&gc_ref))
        }

        /// Same as `clone_gc_ref` but returns an error instead of `None` for
        /// objects that have been unrooted.
        fn try_clone_gc_ref(&self, store: &mut AutoAssertNoGc<'_>) -> Result<VMGcRef> {
            let gc_ref = self.try_gc_ref(store)?.unchecked_copy();
            Ok(store.clone_gc_ref(&gc_ref))
        }
    }
}

pub(crate) use sealed::*;

/// The index of a GC root inside a particular store's GC root set.
///
/// Can be either a LIFO- or owned-rooted object, depending on the
/// `PackedIndex`.
///
/// Every `T` such that `T: GcRef` must be a newtype over this `GcRootIndex`.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
// Just `pub` to avoid `warn(private_interfaces)` in public APIs, which we can't
// `allow(...)` on our MSRV yet.
#[doc(hidden)]
#[repr(C)] // NB: if this layout changes be sure to change the C API as well
pub struct GcRootIndex {
    store_id: StoreId,
    generation: u32,
    index: PackedIndex,
}

const _: () = {
    // NB: these match the C API which should also be updated if this changes
    assert!(mem::size_of::<GcRootIndex>() == 16);
    assert!(mem::align_of::<GcRootIndex>() == mem::align_of::<u64>());
};

impl GcRootIndex {
    #[inline]
    pub(crate) fn comes_from_same_store(&self, store: &StoreOpaque) -> bool {
        self.store_id == store.id()
    }

    /// Same as `RootedGcRefImpl::get_gc_ref` but not associated with any
    /// particular `T: GcRef`.
    ///
    /// We must avoid triggering a GC while holding onto the resulting raw
    /// `VMGcRef` to avoid use-after-free bugs and similar. The `'a` lifetime
    /// threaded from the `store` to the result will normally prevent GCs
    /// statically, at compile time, since performing a GC requires a mutable
    /// borrow of the store. However, if you call `VMGcRef::unchecked_copy` on
    /// the resulting GC reference, then all bets are off and this invariant is
    /// up to you to manually uphold. Failure to uphold this invariant is memory
    /// safe but will lead to general incorrectness such as panics and wrong
    /// results.
    ///
    /// # Panics
    ///
    /// Panics if `self` is not associated with the given store.
    pub(crate) fn get_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Option<&'a VMGcRef> {
        assert!(
            self.comes_from_same_store(store),
            "object used with wrong store"
        );
        if let Some(index) = self.index.as_lifo() {
            let entry = store.gc_roots().lifo_roots.get(index)?;
            if entry.generation == self.generation {
                Some(&entry.gc_ref)
            } else {
                None
            }
        } else if let Some(id) = self.index.as_owned() {
            let gc_ref = store.gc_roots().owned_rooted.get(id);
            debug_assert!(gc_ref.is_some());
            gc_ref
        } else {
            unreachable!()
        }
    }

    /// Same as `get_gc_ref` but returns an error instead of `None` if
    /// the GC reference has been unrooted.
    ///
    /// # Panics
    ///
    /// Panics if `self` is not associated with the given store.
    pub(crate) fn try_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Result<&'a VMGcRef> {
        self.get_gc_ref(store).ok_or_else(|| {
            anyhow!("attempted to use a garbage-collected object that has been unrooted")
        })
    }

    /// Same as `RootedGcRefImpl::clone_gc_ref` but not associated with any
    /// particular `T: GcRef`.
    pub(crate) fn try_clone_gc_ref(&self, store: &mut AutoAssertNoGc<'_>) -> Result<VMGcRef> {
        let gc_ref = self.try_gc_ref(store)?.unchecked_copy();
        Ok(store.clone_gc_ref(&gc_ref))
    }
}

/// This is a bit-packed version of
///
/// ```ignore
/// enum {
///     Lifo(usize),
///     Owned(SlabId),
/// }
/// ```
///
/// where the high bit is the discriminant and the lower 31 bits are the
/// payload.
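///
/// For example, the tag/payload arithmetic works out like this (a sketch of
/// the invariants, using this type's constants):
///
/// ```ignore
/// let lifo = PackedIndex::new_lifo(5);
/// assert!(lifo.is_lifo());
/// assert_eq!(lifo.as_lifo(), Some(5));
///
/// // Payloads must fit in the low 31 bits; the high bit is the tag.
/// assert_eq!(PackedIndex::DISCRIMINANT_MASK, 1 << 31);
/// assert_eq!(PackedIndex::PAYLOAD_MASK, (1 << 31) - 1);
/// ```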
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
#[repr(transparent)]
struct PackedIndex(u32);

impl Debug for PackedIndex {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        if let Some(index) = self.as_lifo() {
            f.debug_tuple("PackedIndex::Lifo").field(&index).finish()
        } else if let Some(id) = self.as_owned() {
            f.debug_tuple("PackedIndex::Owned").field(&id).finish()
        } else {
            unreachable!()
        }
    }
}

impl PackedIndex {
    const DISCRIMINANT_MASK: u32 = 0b1 << 31;
    const LIFO_DISCRIMINANT: u32 = 0b0 << 31;
    const OWNED_DISCRIMINANT: u32 = 0b1 << 31;
    const PAYLOAD_MASK: u32 = !Self::DISCRIMINANT_MASK;

    fn new_lifo(index: usize) -> PackedIndex {
        let index32 = u32::try_from(index).unwrap();
        assert_eq!(index32 & Self::DISCRIMINANT_MASK, 0);
        let packed = PackedIndex(Self::LIFO_DISCRIMINANT | index32);
        debug_assert!(packed.is_lifo());
        debug_assert_eq!(packed.as_lifo(), Some(index));
        debug_assert!(!packed.is_owned());
        debug_assert!(packed.as_owned().is_none());
        packed
    }

    fn new_owned(id: SlabId) -> PackedIndex {
        let raw = id.into_raw();
        assert_eq!(raw & Self::DISCRIMINANT_MASK, 0);
        let packed = PackedIndex(Self::OWNED_DISCRIMINANT | raw);
        debug_assert!(packed.is_owned());
        debug_assert_eq!(packed.as_owned(), Some(id));
        debug_assert!(!packed.is_lifo());
        debug_assert!(packed.as_lifo().is_none());
        packed
    }

    fn discriminant(&self) -> u32 {
        self.0 & Self::DISCRIMINANT_MASK
    }

    fn is_lifo(&self) -> bool {
        self.discriminant() == Self::LIFO_DISCRIMINANT
    }

    fn is_owned(&self) -> bool {
        self.discriminant() == Self::OWNED_DISCRIMINANT
    }

    fn payload(&self) -> u32 {
        self.0 & Self::PAYLOAD_MASK
    }

    fn as_lifo(&self) -> Option<usize> {
        if self.is_lifo() {
            Some(usize::try_from(self.payload()).unwrap())
        } else {
            None
        }
    }

    fn as_owned(&self) -> Option<SlabId> {
        if self.is_owned() {
            Some(SlabId::from_raw(self.payload()))
        } else {
            None
        }
    }
}

/// The set of all embedder-API GC roots in a single store/heap.
#[derive(Debug, Default)]
pub(crate) struct RootSet {
    /// GC roots with arbitrary lifetime that are unrooted when
    /// liveness flags are cleared (seen during a trimming pass), for
    /// use with `OwnedRooted<T>`.
    owned_rooted: Slab<VMGcRef>,

    /// List of liveness flags and corresponding `SlabId`s into the
    /// `owned_rooted` slab.
    liveness_flags: Vec<(Weak<()>, SlabId)>,

    /// High-water mark for liveness flag trimming. We use this to
    /// ensure we have amortized constant-time behavior on adding
    /// roots. See note below on `trim_liveness_flags()`.
    liveness_trim_high_water: Option<NonZeroUsize>,

    /// Strictly LIFO-ordered GC roots, for use with `RootScope` and
    /// `Rooted<T>`.
    lifo_roots: Vec<LifoRoot>,

    /// Generation counter for entries to prevent ABA bugs with `RootScope` and
    /// `Rooted<T>`.
    lifo_generation: u32,
}

#[derive(Debug)]
struct LifoRoot {
    generation: u32,
    gc_ref: VMGcRef,
}

impl RootSet {
    pub(crate) fn trace_roots(&mut self, gc_roots_list: &mut GcRootsList) {
        log::trace!("Begin trace user LIFO roots");
        for root in &mut self.lifo_roots {
            unsafe {
                gc_roots_list.add_root((&mut root.gc_ref).into(), "user LIFO root");
            }
        }
        log::trace!("End trace user LIFO roots");

        log::trace!("Begin trace user owned roots");
        for (_id, root) in self.owned_rooted.iter_mut() {
            unsafe {
                gc_roots_list.add_root(root.into(), "user owned root");
            }
        }
        log::trace!("End trace user owned roots");
    }

    /// Enter a LIFO rooting scope.
    ///
    /// Returns an integer that should be passed unmodified to `exit_lifo_scope`
    /// when the scope is finished.
    ///
    /// Calls to `{enter,exit}_lifo_scope` must happen in a strict LIFO order.
    #[inline]
    pub(crate) fn enter_lifo_scope(&self) -> usize {
        self.lifo_roots.len()
    }

    /// Exit a LIFO rooting scope.
    ///
    /// The `scope` argument must be the result of the corresponding
    /// `enter_lifo_scope` call.
    ///
    /// Calls to `{enter,exit}_lifo_scope` must happen in a strict LIFO order.
    #[inline]
    pub(crate) fn exit_lifo_scope(&mut self, gc_store: Option<&mut GcStore>, scope: usize) {
        debug_assert!(self.lifo_roots.len() >= scope);

        // If we actually have roots to unroot, call an out-of-line slow path.
        if self.lifo_roots.len() > scope {
            self.exit_lifo_scope_slow(gc_store, scope);
        }
    }

    #[inline(never)]
    #[cold]
    fn exit_lifo_scope_slow(&mut self, mut gc_store: Option<&mut GcStore>, scope: usize) {
        self.lifo_generation += 1;

        // TODO: In the case where we have a tracing GC that doesn't need to
        // drop barriers, this should really be:
        //
        //     self.lifo_roots.truncate(scope);

        let mut lifo_roots = mem::take(&mut self.lifo_roots);
        for root in lifo_roots.drain(scope..) {
            // Only drop the GC reference if we actually have a GC store. How
            // can we have a GC reference but not a GC store? If we've only
            // created `i31refs`, we never force a GC store's allocation. This
            // is fine because `i31ref`s never need drop barriers.
            if let Some(gc_store) = &mut gc_store {
                gc_store.drop_gc_ref(root.gc_ref);
            }
        }
        self.lifo_roots = lifo_roots;
    }

    pub(crate) fn with_lifo_scope<S, T>(store: &mut S, f: impl FnOnce(&mut S) -> T) -> T
    where
        S: ?Sized + DerefMut<Target = StoreOpaque>,
    {
        let scope = store.gc_roots().enter_lifo_scope();
        let ret = f(store);
        store.exit_gc_lifo_scope(scope);
        ret
    }
    pub(crate) fn push_lifo_root(&mut self, store_id: StoreId, gc_ref: VMGcRef) -> GcRootIndex {
        let generation = self.lifo_generation;
        let index = self.lifo_roots.len();
        let index = PackedIndex::new_lifo(index);
        self.lifo_roots.push(LifoRoot { generation, gc_ref });
        GcRootIndex {
            store_id,
            generation,
            index,
        }
    }

    /// Trim any stale (dropped) owned roots.
    ///
    /// `OwnedRooted` is implemented in a way that avoids the need to
    /// have or keep a reference to the store (and thus this struct)
    /// during its drop operation: to allow it to be independent, it
    /// holds a shared reference to some other memory and sets that
    /// "liveness flag" appropriately, then we later observe dead
    /// liveness flags during a periodic scan and actually deallocate
    /// the roots. We use an `Arc<()>` for this: it permits cheap
    /// cloning, and it has minimal memory overhead. We hold a weak
    /// ref in a list alongside the actual `GcRootIndex`, and we free
    /// that index in the slab of owned roots when we observe that
    /// only our weak ref remains.
    ///
    /// This trim step is logically separate from a full GC, though it
    /// would not be very productive to do a GC without doing a
    /// root-trim first: the root-trim should be quite a lot cheaper,
    /// and it will allow for more garbage to exist.
    ///
    /// There is, additionally, nothing stopping us from doing trims
    /// more often than just before each GC, and there are reasons
    /// this could be a good idea: for example, a user program that
    /// creates and removes many roots (perhaps as it accesses the GC
    /// object graph) but ultimately is dealing with a static graph,
    /// without further allocation, may need the "root set" to be GC'd
    /// independently from the actual heap. Thus, we could trim before
    /// adding a new root to ensure we don't grow that unboundedly (or
    /// force an otherwise unneeded GC).
    ///
    /// The first case, just before GC, wants a "full trim": there's
    /// no reason not to unroot as much as possible before we do the
    /// expensive work of tracing the whole heap.
    ///
    /// On the other hand, the second case, adding a new root, wants a
    /// kind of trim that is amortized constant time. Consider: if we
    /// have some threshold for the trim, say N roots, and the user
    /// program continually adds and removes one root such that it
    /// goes just over the threshold, we might scan all N liveness
    /// flags for each step, resulting in quadratic behavior overall.
    ///
    /// Thus, we implement a "high water mark" algorithm to guard
    /// against this latter case: on the add-a-new-root case, we trim
    /// only if the list is longer than the high water mark, and we
    /// set the high water mark each time based on the after-trim
    /// size. See below for details on this algorithm.
    ///
    /// `eager` chooses whether we eagerly trim roots or pre-filter
    /// using the high-water mark.
    ///
    /// # Growth Algorithm
    ///
    /// We want to balance two factors: we must ensure that the
    /// algorithmic complexity of creating a new root is amortized
    /// O(1), and we must ensure that repeated creation and deletion
    /// of roots without any GC must result in a root-set that has a
    /// size linear in the actual live-root-set size. Stated formally:
    ///
    /// 1. Root creation must be O(1), amortized
    /// 2. liveness_flags.len() must be O(|max live owned roots|),
    ///    i.e., must not grow to larger than a constant multiple of
    ///    the maximum working-root-set size of the program.
    ///
    /// Note that a naive exponential-growth-of-threshold algorithm,
    /// where we trim when the root set reaches 1, 2, 4, 8, 16, 32,
    /// ... elements, provides the first but *not* the second
    /// property. A workload that has a constant live root set but
    /// creates and drops roots constantly (say, as it's traversing a
    /// static graph and moving "fingers" through it) will cause the
    /// `liveness_flags` array to grow unboundedly.
    ///
    /// Instead, it turns out that we can achieve both of these goals
    /// with a simple rule: we trim when the root list length reaches
    /// a high-water mark; and then *after* trimming, we set the
    /// high-water mark equal to the resulting live-root count
    /// multiplied by a factor (e.g., 2).
    ///
    /// ## Proof
    ///
    /// - Root creation is O(1)
    ///
    ///   Assume a sequence of root creation and drop events (and no
    ///   GCs, with a static GC graph, in the worst case -- only roots
    ///   are changing). We want to show that after N root creations,
    ///   we have incurred only O(N) cost scanning the
    ///   `liveness_flags` list over the whole sequence.
    ///
    ///   Assume a default high-water mark of D (e.g., 8) at
    ///   initialization with an empty root list.
    ///
    ///   Consider "epochs" in the sequence split by trim events where
    ///   we scan the root list. Proceed by induction over epochs to
    ///   show: after each epoch, we will have scanned at most 2N
    ///   roots after N root creations.
    ///
    ///   (These epochs don't exist in the algorithm: this is just a
    ///   mechanism to analyze the behavior.)
    ///
    ///   Base case: after the first epoch, with D root creations, we
    ///   will scan D roots.
    ///
    ///   Induction step: we have created N roots and scanned at most
    ///   2N roots. After the previous scan, L roots are still live; then
    ///   we set the high-water mark for the next scan at 2L. The list
    ///   already has L, so after another L root creations, the epoch
    ///   ends. We will then incur a scan cost of 2L (the full
    ///   list). At that point we have thus seen N + L root creations,
    ///   with 2N + 2L scan cost; the invariant holds.
    ///
    ///   (It's counter-intuitive that *not* raising the high-water
    ///   mark exponentially can still result in a constant amortized
    ///   cost! One intuition to understand this is that each root
    ///   that remains alive after a scan pushes the next high-water
    ///   mark up by one, so requires a new root creation to "pay for"
    ///   its next scan. So any given root may be scanned many times,
    ///   but each such root ensures other root creations happen to
    ///   maintain the amortized cost.)
    ///
    /// - `liveness_flags.len()` is always O(|max live roots|)
    ///
    ///   Before the first trim, we have between 0 and D live roots,
    ///   which is O(1) (`D` is a compile-time constant).
    ///
    ///   Just after a trim, the `liveness_flags` list has only live
    ///   roots, and the max live-root count is at least the count at
    ///   this time, so the property holds.
    ///
    ///   The instantaneous maximum number of live roots is greater
    ///   than or equal to the maximum number of live roots observed
    ///   during a trim. (The trim is just some point in time, and the
    ///   max at some point in time is at most the overall max.)
    ///
    ///   The high-water mark is set to 2 * `liveness_flags.len()`
    ///   after a trim, i.e., twice the number of live roots at that
    ///   time. We trim when we reach the high-water mark, so the
    ///   length of the array can never exceed twice the live-root
    ///   count at the last trim, which is at most twice the overall
    ///   max. So transitively, the list length at any time is always
    ///   O(|max live roots|).
    ///
    /// We thus have tight bounds (deterministic, not randomized) for
    /// all possible sequences of root creation/dropping, ensuring
    /// robustness.
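    ///
    /// As a small worked example of the rule (with `DEFAULT_HIGH_WATER = 8`
    /// and `GROWTH_FACTOR = 2`): if a trim leaves 3 live roots, the next
    /// non-eager trim waits until the list grows back to
    /// `max(8, 2 * 3) = 8` entries; if a trim leaves 100 live roots, the
    /// next one waits until 200 entries. In pseudocode:
    ///
    /// ```ignore
    /// if eager || liveness_flags.len() >= high_water_mark {
    ///     retain_only_live_flags();
    ///     high_water_mark = max(DEFAULT_HIGH_WATER,
    ///                           liveness_flags.len() * GROWTH_FACTOR);
    /// }
    /// ```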
682    pub(crate) fn trim_liveness_flags(&mut self, gc_store: &mut GcStore, eager: bool) {
683        const DEFAULT_HIGH_WATER: usize = 8;
684        const GROWTH_FACTOR: usize = 2;
685        let high_water_mark = self
686            .liveness_trim_high_water
687            .map(|x| x.get())
688            .unwrap_or(DEFAULT_HIGH_WATER);
689        if !eager && self.liveness_flags.len() < high_water_mark {
690            return;
691        }
692
693        self.liveness_flags.retain(|(flag, index)| {
694            if flag.strong_count() == 0 {
695                // No more `OwnedRooted` instances are holding onto
696                // this; dealloc the index and drop our Weak.
697                let gc_ref = self.owned_rooted.dealloc(*index);
698                gc_store.drop_gc_ref(gc_ref);
699                // Don't retain in the list.
700                false
701            } else {
702                // Retain in the list.
703                true
704            }
705        });
706
707        let post_trim_len = self.liveness_flags.len();
708        let high_water_mark = core::cmp::max(
709            DEFAULT_HIGH_WATER,
710            post_trim_len.saturating_mul(GROWTH_FACTOR),
711        );
712        self.liveness_trim_high_water = Some(NonZeroUsize::new(high_water_mark).unwrap());
713    }
714}
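As a standalone illustration of the liveness-flag scheme implemented above — owned roots hold an `Arc<()>`, the registry keeps only the matching `Weak<()>`, and an amortized trim drops entries whose owners are gone before resetting the high-water mark to twice the post-trim length — here is a minimal sketch. The `Registry` type and its fields are hypothetical stand-ins, not Wasmtime's actual internals:

```rust
use std::sync::{Arc, Weak};

struct Registry {
    // (liveness flag, hypothetical root index)
    entries: Vec<(Weak<()>, u32)>,
    high_water: usize,
}

impl Registry {
    fn new() -> Self {
        Registry { entries: Vec::new(), high_water: 8 }
    }

    /// Register a new root; the returned `Arc` keeps it alive.
    fn insert(&mut self, index: u32) -> Arc<()> {
        let flag = Arc::new(());
        self.entries.push((Arc::downgrade(&flag), index));
        // Amortized: only trim once we reach the high-water mark.
        if self.entries.len() >= self.high_water {
            self.trim();
        }
        flag
    }

    /// Drop entries whose `Arc` owners are all gone, then reset the
    /// high-water mark to twice the post-trim length (minimum 8).
    fn trim(&mut self) {
        self.entries.retain(|(flag, _)| flag.strong_count() > 0);
        self.high_water = std::cmp::max(8, self.entries.len().saturating_mul(2));
    }
}

fn main() {
    let mut reg = Registry::new();
    let keep = reg.insert(0);
    for i in 1..8 {
        // Each temporary root is dropped immediately, leaving a dead
        // `Weak` behind in the registry until the next trim.
        let tmp = reg.insert(i);
        drop(tmp);
    }
    reg.trim();
    assert_eq!(reg.entries.len(), 1); // only `keep`'s entry survives
    drop(keep);
}
```

Because trimming only happens at the high-water mark, each root pays O(1) amortized cost, while the bound argued in the comment above keeps the list length O(|max live roots|).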
715
716/// A scoped, rooted reference to a garbage-collected `T`.
717///
718/// A `Rooted<T>` is a strong handle to a garbage-collected `T`, preventing its
719/// referent (and anything else transitively referenced) from being collected by
720/// the GC during the scope within which this `Rooted<T>` was created.
721///
722/// When the context exits this `Rooted<T>`'s scope, the underlying GC object is
723    /// automatically unrooted and any further attempts to access the underlying
724/// object will return errors or otherwise fail.
725///
726/// `Rooted<T>` dereferences to its underlying `T`, allowing you to call `T`'s
727/// methods.
728///
729/// # Example
730///
731/// ```
732/// # use wasmtime::*;
733/// # fn _foo() -> Result<()> {
734/// let mut store = Store::<()>::default();
735///
736/// // Allocating a GC object returns a `Rooted<T>`.
737/// let hello: Rooted<ExternRef> = ExternRef::new(&mut store, "hello")?;
738///
739/// // Because `Rooted<T>` derefs to `T`, we can call `T` methods on a
740/// // `Rooted<T>`. For example, we can call the `ExternRef::data` method when we
741/// // have a `Rooted<ExternRef>`.
742/// let data = hello
743///     .data(&store)?
744///     .ok_or_else(|| Error::msg("externref has no host data"))?
745///     .downcast_ref::<&str>()
746///     .ok_or_else(|| Error::msg("not a str"))?;
747/// assert_eq!(*data, "hello");
748///
749/// // A `Rooted<T>` roots its underlying GC object for the duration of the
750/// // scope of the store/caller/context that was passed to the method that created
751/// // it. If we only want to keep a GC reference rooted and alive temporarily, we
752/// // can introduce new scopes with `RootScope`.
753/// {
754///     let mut scope = RootScope::new(&mut store);
755///
756///     // This `Rooted<T>` is automatically unrooted after `scope` is dropped,
757///     // allowing the collector to reclaim its GC object in the next GC.
758///     let scoped_ref = ExternRef::new(&mut scope, "goodbye")?;
759/// }
760///
761/// let module = Module::new(store.engine(), r#"
762///     (module
763///         (global (export "global") (mut externref) (ref.null extern))
764///         (table (export "table") 10 externref)
765///         (func (export "func") (param externref) (result externref)
766///             local.get 0
767///         )
768///     )
769/// "#)?;
770/// let instance = Instance::new(&mut store, &module, &[])?;
771///
772/// // Calls into Wasm that return GC references yield `Rooted<T>`s (wrapped in
773/// // an `Option` when the Wasm type is nullable).
774/// let result: Option<Rooted<_>> = instance
775///     .get_typed_func::<Option<Rooted<ExternRef>>, Option<Rooted<ExternRef>>>(&mut store, "func")?
776///     .call(&mut store, Some(hello))?;
777///
778/// // Similarly, getting a GC reference from a Wasm instance's exported global
779/// // or table yields a `Rooted<T>`.
780///
781/// let global = instance
782///     .get_global(&mut store, "global")
783///     .ok_or_else(|| Error::msg("missing `global` export"))?;
784/// let global_val = global.get(&mut store);
785/// let global_ref: Option<&Rooted<_>> = global_val
786///     .externref()
787///     .ok_or_else(|| Error::msg("not an externref"))?;
788///
789/// let table = instance.get_table(&mut store, "table").unwrap();
790/// let table_elem = table
791///     .get(&mut store, 3)
792///     .ok_or_else(|| Error::msg("table out of bounds"))?;
793/// let table_elem_ref: Option<&Rooted<_>> = table_elem
794///     .as_extern()
795///     .ok_or_else(|| Error::msg("not an externref"))?;
796/// # Ok(())
797/// # }
798/// ```
799///
800/// # Differences Between `Rooted<T>` and `OwnedRooted<T>`
801///
802/// Because `Rooted<T>` is automatically unrooted when its scope is
803/// exited, a `Rooted<T>` is only valid for strictly
804/// last-in-first-out (LIFO, aka stack order) lifetimes. This is in
805/// contrast to [`OwnedRooted<T>`][crate::OwnedRooted], which supports
806/// rooting GC objects for arbitrary lifetimes.
807///
808/// | Type                                         | Supported Lifetimes         | Unrooting | Cost                             |
809/// |----------------------------------------------|-----------------------------|-----------|----------------------------------|
810/// | [`Rooted<T>`][crate::Rooted]                 | Strictly LIFO / stack order | Automatic | very low (LIFO array)            |
811/// | [`OwnedRooted<T>`][crate::OwnedRooted]       | Arbitrary                   | Automatic | medium (separate `Arc` to track) |
812///
813/// `Rooted<T>` should suffice for most use cases, and provides decent
814/// ergonomics. In cases where LIFO scopes are difficult to reason
815/// about, e.g. heap-managed data structures, or when they may cause
816/// erroneous behavior, e.g. in errors that are propagated up the call
817/// stack, `OwnedRooted<T>` provides simpler, safer ergonomics, at a
818/// small dynamic cost for the separate tracking allocation.
819///
820/// # Scopes
821///
822/// Wasmtime automatically creates two kinds of scopes:
823///
824/// 1. A [`Store`][crate::Store] is the outermost rooting scope. Creating a
825///    `Rooted<T>` directly inside a `Store` permanently roots the underlying
826///    object.
827///
828/// 2. A [`Caller`][crate::Caller] provides a rooting scope for the duration of
829///    a call from Wasm into a host function. Any objects rooted in a `Caller`
830///    will be unrooted after the host function returns. Note that there can be
831///    nested `Caller` scopes in the case where Wasm calls a host function,
832///    creating the first `Caller` and its rooting scope, and then the host
833///    function calls a Wasm function which then calls another host function,
834///    creating a second `Caller` and a second rooting scope. This nesting can
835///    be arbitrarily deep.
836///
837/// Additionally, if you would like to define finer-grained rooting scopes,
838/// Wasmtime provides the [`RootScope`][crate::RootScope] type.
839///
840/// Scopes are always nested in a last-in-first-out (LIFO) order. An outer scope
841/// is never exited (and the `Rooted<T>`s defined within it are never
842/// automatically unrooted) while an inner scope is still active. All inner
843/// scopes are exited before their outer scopes.
844///
845/// The following diagram illustrates various rooting scopes over time, how they
846/// nest, and when their `Rooted<T>`s are automatically unrooted:
847///
848/// ```text
849/// ----- new Store
850///   |
851///   |
852///   | let a: Rooted<T> = ...;
853///   |
854///   |
855///   | ----- call into Wasm
856///   |   |
857///   |   |
858///   |   | ----- Wasm calls host function F
859///   |   |   |
860///   |   |   |
861///   |   |   | let b: Rooted<T> = ...;
862///   |   |   |
863///   |   |   |
864///   |   |   | ----- F calls into Wasm
865///   |   |   |   |
866///   |   |   |   |
867///   |   |   |   | ----- Wasm calls host function G
868///   |   |   |   |   |
869///   |   |   |   |   |
870///   |   |   |   |   | let c: Rooted<T> = ...;
871///   |   |   |   |   |
872///   |   |   |   |   |
873///   |   |   |   | ----- return to Wasm from host function G (unroots `c`)
874///   |   |   |   |
875///   |   |   |   |
876///   |   |   | ----- Wasm returns to F
877///   |   |   |
878///   |   |   |
879///   |   | ----- return from host function F (unroots `b`)
880///   |   |
881///   |   |
882///   | ----- return from Wasm
883///   |
884///   |
885///   | ----- let scope1 = RootScope::new(...);
886///   |   |
887///   |   |
888///   |   | let d: Rooted<T> = ...;
889///   |   |
890///   |   |
891///   |   | ----- let scope2 = RootScope::new(...);
892///   |   |   |
893///   |   |   |
894///   |   |   | let e: Rooted<T> = ...;
895///   |   |   |
896///   |   |   |
897///   |   | ----- drop `scope2` (unroots `e`)
898///   |   |
899///   |   |
900///   | ----- drop `scope1` (unroots `d`)
901///   |
902///   |
903/// ----- drop Store (unroots `a`)
904/// ```
905///
906/// A `Rooted<T>` can be used successfully as long as it is still rooted, so in
907/// the above diagram, `d` is valid inside `scope2` because `scope2` is wholly
908/// contained within the scope `d` was rooted within (`scope1`).
909///
910/// See also the documentation for [`RootScope`][crate::RootScope].
911#[repr(transparent)]
912pub struct Rooted<T: GcRef> {
913    inner: GcRootIndex,
914    _phantom: marker::PhantomData<T>,
915}
916
917impl<T: GcRef> Clone for Rooted<T> {
918    fn clone(&self) -> Self {
919        Rooted {
920            inner: self.inner,
921            _phantom: marker::PhantomData,
922        }
923    }
924}
925
926impl<T: GcRef> Copy for Rooted<T> {}
927
928impl<T: GcRef> Debug for Rooted<T> {
929    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
930        let name = format!("Rooted<{}>", any::type_name::<T>());
931        f.debug_struct(&name).field("inner", &self.inner).finish()
932    }
933}
934
935impl<T: GcRef> RootedGcRefImpl<T> for Rooted<T> {
936    fn get_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Option<&'a VMGcRef> {
937        assert!(
938            self.comes_from_same_store(store),
939            "object used with wrong store"
940        );
941        let index = self.inner.index.as_lifo().unwrap();
942        let entry = store.gc_roots().lifo_roots.get(index)?;
943        if entry.generation == self.inner.generation {
944            Some(&entry.gc_ref)
945        } else {
946            None
947        }
948    }
949}
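The generation check in `get_gc_ref` above is the classic generational-index pattern: a root stores both its slot index and the generation of its scope, and a lookup succeeds only when the stored generation still matches, so roots from exited scopes go stale rather than aliasing a reused slot. A minimal sketch with hypothetical `Roots`/`Entry` types (not Wasmtime's actual internals):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Generation(u32);

struct Entry {
    generation: Generation,
    value: u32, // stand-in for a GC reference
}

struct Roots {
    entries: Vec<Entry>,
}

impl Roots {
    /// Look up a slot, but only if the caller's generation matches the
    /// one recorded in the slot.
    fn get(&self, index: usize, generation: Generation) -> Option<u32> {
        let entry = self.entries.get(index)?;
        if entry.generation == generation {
            Some(entry.value)
        } else {
            None // stale root: its scope was exited and the slot reused
        }
    }
}

fn main() {
    let roots = Roots {
        entries: vec![Entry { generation: Generation(1), value: 42 }],
    };
    assert_eq!(roots.get(0, Generation(1)), Some(42));
    assert_eq!(roots.get(0, Generation(0)), None); // stale generation
    assert_eq!(roots.get(9, Generation(1)), None); // out of bounds
}
```

This is why a stale `Rooted<T>` fails gracefully (returns `None`/errors) instead of silently reading whatever object now occupies its old slot.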
950
951impl<T: GcRef> Deref for Rooted<T> {
952    type Target = T;
953
954    fn deref(&self) -> &Self::Target {
955        T::transmute_ref(&self.inner)
956    }
957}
958
959impl<T: GcRef> Rooted<T> {
960    /// Push the given `VMGcRef` onto our LIFO root set.
961    ///
962    /// `gc_ref` should belong to `store`'s heap; failure to uphold this is
963    /// memory safe but will result in general failures down the line such as
964    /// panics or incorrect results.
965    ///
966    /// `gc_ref` should be a GC reference pointing to an instance of the GC type
967    /// that `T` represents. Failure to uphold this invariant is memory safe but
968    /// will result in general incorrectness such as panics and wrong results.
969    pub(crate) fn new(store: &mut AutoAssertNoGc<'_>, gc_ref: VMGcRef) -> Rooted<T> {
970        let id = store.id();
971        let roots = store.gc_roots_mut();
972        let inner = roots.push_lifo_root(id, gc_ref);
973        Rooted {
974            inner,
975            _phantom: marker::PhantomData,
976        }
977    }
978
979    /// Create a new `Rooted<T>` from a `GcRootIndex`.
980    ///
981    /// Note that `Rooted::from_gc_root_index(my_rooted.index)` is not
982    /// necessarily an identity function, as it allows changing the `T` type
983    /// parameter.
984    ///
985    /// The given index should be a LIFO index of a GC reference pointing to an
986    /// instance of the GC type that `T` represents. Failure to uphold this
987    /// invariant is memory safe but will result in general incorrectness such
988    /// as panics and wrong results.
989    pub(crate) fn from_gc_root_index(inner: GcRootIndex) -> Rooted<T> {
990        debug_assert!(inner.index.is_lifo());
991        Rooted {
992            inner,
993            _phantom: marker::PhantomData,
994        }
995    }
996
997    #[inline]
998    pub(crate) fn comes_from_same_store(&self, store: &StoreOpaque) -> bool {
999        debug_assert!(self.inner.index.is_lifo());
1000        self.inner.comes_from_same_store(store)
1001    }
1002
1003    /// Create an [`OwnedRooted<T>`][crate::OwnedRooted] holding onto the
1004    /// same GC object as `self`.
1005    ///
1006    /// Returns `None` if `self` is used outside of its scope and has therefore
1007    /// been unrooted.
1008    ///
1009    /// This does not unroot `self`, and `self` remains valid until its
1010    /// associated scope is exited.
1011    ///
1012    /// # Panics
1013    ///
1014    /// Panics if this object is not associated with the given store.
1015    ///
1016    /// # Example
1017    ///
1018    /// ```
1019    /// # use wasmtime::*;
1020    /// # fn _foo() -> Result<()> {
1021    /// let mut store = Store::<()>::default();
1022    ///
1023    /// let y: OwnedRooted<_> = {
1024    ///     // Create a nested rooting scope.
1025    ///     let mut scope = RootScope::new(&mut store);
1026    ///
1027    ///     // `x` is only rooted within this nested scope.
1028    ///     let x: Rooted<_> = ExternRef::new(&mut scope, "hello!")?;
1029    ///
1030    ///     // Extend `x`'s rooting past its scope's lifetime by converting it
1031    ///     // to an `OwnedRooted`.
1032    ///     x.to_owned_rooted(&mut scope)?
1033    /// };
1034    ///
1035    /// // Now we can still access the reference outside the scope it was
1036    /// // originally defined within.
1037    /// let data = y.data(&store)?.expect("should have host data");
1038    /// let data = data.downcast_ref::<&str>().expect("host data should be str");
1039    /// assert_eq!(*data, "hello!");
1040    /// # Ok(())
1041    /// # }
1042    /// ```
1043    pub fn to_owned_rooted(&self, mut store: impl AsContextMut) -> Result<OwnedRooted<T>> {
1044        self._to_owned_rooted(store.as_context_mut().0)
1045    }
1046
1047    pub(crate) fn _to_owned_rooted(&self, store: &mut StoreOpaque) -> Result<OwnedRooted<T>> {
1048        let mut store = AutoAssertNoGc::new(store);
1049        let gc_ref = self.try_clone_gc_ref(&mut store)?;
1050        Ok(OwnedRooted::new(&mut store, gc_ref))
1051    }
1052
1053    /// Are these two `Rooted<T>`s the same GC root?
1054    ///
1055    /// Note that this function can return `false` even when `a` and `b` are
1056    /// rooting the same underlying GC object, if that object was rooted
1057    /// multiple times (for example in different scopes). Use
1058    /// [`Rooted::ref_eq`][crate::Rooted::ref_eq] to test whether these are
1059    /// references to the same underlying GC object or not.
1060    ///
1061    /// # Example
1062    ///
1063    /// ```
1064    /// # use wasmtime::*;
1065    /// # fn foo() -> Result<()> {
1066    /// let mut store = Store::<()>::default();
1067    ///
1068    /// let a = ExternRef::new(&mut store, "hello")?;
1069    /// let b = a;
1070    ///
1071    /// // `a` and `b` are the same GC root.
1072    /// assert!(Rooted::rooted_eq(a, b));
1073    ///
1074    /// {
1075    ///     let mut scope = RootScope::new(&mut store);
1076    ///
1077    ///     // `c` is a different GC root, in a different scope, even though it
1078    ///     // is rooting the same object.
1079    ///     let c = a.to_owned_rooted(&mut scope)?.to_rooted(&mut scope);
1080    ///     assert!(!Rooted::rooted_eq(a, c));
1081    /// }
1082    ///
1083    /// let x = ExternRef::new(&mut store, "goodbye")?;
1084    ///
1085    /// // `a` and `x` are different GC roots, rooting different objects.
1086    /// assert!(!Rooted::rooted_eq(a, x));
1087    /// # Ok(())
1088    /// # }
1089    /// ```
1090    pub fn rooted_eq(a: Self, b: Self) -> bool {
1091        a.inner == b.inner
1092    }
1093
1094    /// Are these two GC roots referencing the same underlying GC object?
1095    ///
1096    /// This function will return `true` even when `a` and `b` are different GC
1097    /// roots (for example because they were rooted in different scopes) if they
1098    /// are rooting the same underlying GC object. To only test whether they are
1099    /// the same GC root, and not whether they are rooting the same GC object,
1100    /// use [`Rooted::rooted_eq`][crate::Rooted::rooted_eq].
1101    ///
1102    /// Returns an error if either `a` or `b` has been unrooted, for example
1103    /// because the scope it was rooted within has been exited.
1104    ///
1105    /// Because this method takes any `impl RootedGcRef<T>` arguments, it can be
1106    /// used to compare, for example, a `Rooted<T>` and an `OwnedRooted<T>`.
1107    ///
1108    /// # Panics
1109    ///
1110    /// Panics if either `a` or `b` is not associated with the given `store`.
1111    ///
1112    /// # Example
1113    ///
1114    /// ```
1115    /// # use wasmtime::*;
1116    /// # fn foo() -> Result<()> {
1117    /// let mut store = Store::<()>::default();
1118    ///
1119    /// let a = ExternRef::new(&mut store, "hello")?;
1120    /// let b = a;
1121    ///
1122    /// // `a` and `b` are rooting the same object.
1123    /// assert!(Rooted::ref_eq(&store, &a, &b)?);
1124    ///
1125    /// {
1126    ///     let mut scope = RootScope::new(&mut store);
1127    ///
1128    ///     // `c` is a different GC root, in a different scope, but still
1129    ///     // rooting the same object.
1130    ///     let c = a.to_owned_rooted(&mut scope)?.to_rooted(&mut scope);
1131    ///     assert!(Rooted::ref_eq(&scope, &a, &c)?);
1132    /// }
1133    ///
1134    /// let x = ExternRef::new(&mut store, "goodbye")?;
1135    ///
1136    /// // `a` and `x` are rooting different objects.
1137    /// assert!(!Rooted::ref_eq(&store, &a, &x)?);
1138    ///
1139    /// // You can also compare `Rooted<T>`s and `OwnedRooted<T>`s with this
1140    /// // function.
1141    /// let d = a.to_owned_rooted(&mut store)?;
1142    /// assert!(Rooted::ref_eq(&store, &a, &d)?);
1143    /// # Ok(())
1144    /// # }
1145    /// ```
1146    pub fn ref_eq(
1147        store: impl AsContext,
1148        a: &impl RootedGcRef<T>,
1149        b: &impl RootedGcRef<T>,
1150    ) -> Result<bool> {
1151        let store = store.as_context().0;
1152        Self::_ref_eq(store, a, b)
1153    }
1154
1155    pub(crate) fn _ref_eq(
1156        store: &StoreOpaque,
1157        a: &impl RootedGcRef<T>,
1158        b: &impl RootedGcRef<T>,
1159    ) -> Result<bool> {
1160        let a = a.try_gc_ref(store)?;
1161        let b = b.try_gc_ref(store)?;
1162        Ok(a == b)
1163    }
1164
1165    /// Hash this root.
1166    ///
1167    /// Note that, similar to `Rooted::rooted_eq`, this only operates on the
1168    /// root and *not* the underlying GC reference. That means that two
1169    /// different rootings of the same object will hash to different values
1170    /// (modulo hash collisions). If this is undesirable, use the
1171    /// [`ref_hash`][crate::Rooted::ref_hash] method instead.
1172    pub fn rooted_hash<H>(&self, state: &mut H)
1173    where
1174        H: Hasher,
1175    {
1176        self.inner.hash(state);
1177    }
1178
1179    /// Hash the underlying rooted object reference.
1180    ///
1181    /// Note that, similar to `Rooted::ref_eq`, this operates on the underlying
1182    /// rooted GC object reference, not the root. That means that two
1183    /// *different* rootings of the same object will hash to the *same*
1184    /// value. If this is undesirable, use the
1185    /// [`rooted_hash`][crate::Rooted::rooted_hash] method instead.
1186    pub fn ref_hash<H>(&self, store: impl AsContext, state: &mut H) -> Result<()>
1187    where
1188        H: Hasher,
1189    {
1190        let gc_ref = self.try_gc_ref(store.as_context().0)?;
1191        gc_ref.hash(state);
1192        Ok(())
1193    }
1194
1195    /// Cast `self` to a `Rooted<U>`.
1196    ///
1197    /// It is the caller's responsibility to ensure that `self` is actually a
1198    /// `U`. Failure to uphold this invariant will be memory safe but will
1199    /// result in general incorrectness such as panics and wrong results.
1200    pub(crate) fn unchecked_cast<U: GcRef>(self) -> Rooted<U> {
1201        Rooted::from_gc_root_index(self.inner)
1202    }
1203
1204    /// Common implementation of the `WasmTy::store` trait method for all
1205    /// `Rooted<T>`s.
1206    pub(super) fn wasm_ty_store(
1207        self,
1208        store: &mut AutoAssertNoGc<'_>,
1209        ptr: &mut MaybeUninit<ValRaw>,
1210        val_raw: impl Fn(u32) -> ValRaw,
1211    ) -> Result<()> {
1212        let gc_ref = self.inner.try_clone_gc_ref(store)?;
1213
1214        let raw = match store.optional_gc_store_mut() {
1215            Some(s) => s.expose_gc_ref_to_wasm(gc_ref),
1216            None => {
1217                // NB: do not force the allocation of a GC heap just because the
1218                // program is using `i31ref`s.
1219                debug_assert!(gc_ref.is_i31());
1220                gc_ref.as_raw_non_zero_u32()
1221            }
1222        };
1223
1224        ptr.write(val_raw(raw.get()));
1225        Ok(())
1226    }
1227
1228    /// Common implementation of the `WasmTy::load` trait method for all
1229    /// `Rooted<T>`s.
1230    pub(super) fn wasm_ty_load(
1231        store: &mut AutoAssertNoGc<'_>,
1232        raw_gc_ref: u32,
1233        from_cloned_gc_ref: impl Fn(&mut AutoAssertNoGc<'_>, VMGcRef) -> Self,
1234    ) -> Self {
1235        debug_assert_ne!(raw_gc_ref, 0);
1236        let gc_ref = VMGcRef::from_raw_u32(raw_gc_ref).expect("non-null");
1237
1238        let gc_ref = match store.optional_gc_store_mut() {
1239            Some(s) => s.clone_gc_ref(&gc_ref),
1240            None => {
1241                // NB: do not force the allocation of a GC heap just because the
1242                // program is using `i31ref`s.
1243                debug_assert!(gc_ref.is_i31());
1244                gc_ref.unchecked_copy()
1245            }
1246        };
1247
1248        from_cloned_gc_ref(store, gc_ref)
1249    }
1250
1251    /// Common implementation of the `WasmTy::store` trait method for all
1252    /// `Option<Rooted<T>>`s.
1253    pub(super) fn wasm_ty_option_store(
1254        me: Option<Self>,
1255        store: &mut AutoAssertNoGc<'_>,
1256        ptr: &mut MaybeUninit<ValRaw>,
1257        val_raw: impl Fn(u32) -> ValRaw,
1258    ) -> Result<()> {
1259        match me {
1260            Some(me) => me.wasm_ty_store(store, ptr, val_raw),
1261            None => {
1262                ptr.write(val_raw(0));
1263                Ok(())
1264            }
1265        }
1266    }
1267
1268    /// Common implementation of the `WasmTy::load` trait method for all
1269    /// `Option<Rooted<T>>`s.
1270    pub(super) fn wasm_ty_option_load(
1271        store: &mut AutoAssertNoGc<'_>,
1272        raw_gc_ref: u32,
1273        from_cloned_gc_ref: impl Fn(&mut AutoAssertNoGc<'_>, VMGcRef) -> Self,
1274    ) -> Option<Self> {
1275        let gc_ref = VMGcRef::from_raw_u32(raw_gc_ref)?;
1276        let gc_ref = store.clone_gc_ref(&gc_ref);
1277        Some(from_cloned_gc_ref(store, gc_ref))
1278    }
1279}
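The `Option<Rooted<T>>` store/load helpers above rely on the convention that a raw value of 0 encodes a null reference, while real GC references are never 0 (note the `debug_assert_ne!` in `wasm_ty_load`). A tiny sketch of that encoding, using plain `u32`s as hypothetical stand-ins for raw GC references:

```rust
/// Encode an optional (non-zero) raw reference; `None` becomes 0.
fn store_option(raw: Option<u32>) -> u32 {
    // Invariant (mirroring the code above): a present reference is
    // never 0, so 0 is free to mean "null".
    raw.unwrap_or(0)
}

/// Decode a raw value; 0 means "null".
fn load_option(raw: u32) -> Option<u32> {
    if raw == 0 { None } else { Some(raw) }
}

fn main() {
    assert_eq!(store_option(None), 0);
    assert_eq!(store_option(Some(7)), 7);
    assert_eq!(load_option(0), None);
    assert_eq!(load_option(7), Some(7));
}
```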
1280
1281/// Nested rooting scopes.
1282///
1283/// `RootScope` allows the creation of nested rooting scopes for use with
1284/// [`Rooted<T>`][crate::Rooted]. This allows for fine-grained control over how
1285/// long a set of [`Rooted<T>`][crate::Rooted]s are strongly held alive, giving
1286/// you the tools necessary to avoid holding onto GC objects longer than
1287/// necessary. `Rooted<T>`s created within a `RootScope` are automatically
1288/// unrooted when the `RootScope` is dropped. For more details on
1289/// [`Rooted<T>`][crate::Rooted] lifetimes and their interaction with rooting
1290/// scopes, see [`Rooted<T>`][crate::Rooted]'s documentation.
1291///
1292/// A `RootScope<C>` wraps a `C: AsContextMut` (that is, anything that
1293/// represents exclusive access to a [`Store`][crate::Store]) and in turn
1294/// implements [`AsContext`][crate::AsContext] and
1295/// [`AsContextMut`][crate::AsContextMut] in terms of its underlying
1296/// `C`. Therefore, `RootScope<C>` can be used anywhere you would use the
1297/// underlying `C`, for example in the [`Global::get`][crate::Global::get]
1298/// method. Any `Rooted<T>`s created by a method that a `RootScope<C>` was
1299/// passed as context to are tied to the `RootScope<C>`'s scope and
1300/// automatically unrooted when the scope is dropped.
1301///
1302/// # Example
1303///
1304/// ```
1305/// # use wasmtime::*;
1306/// # fn _foo() -> Result<()> {
1307/// let mut store = Store::<()>::default();
1308///
1309/// let a: Rooted<_>;
1310/// let b: Rooted<_>;
1311/// let c: Rooted<_>;
1312///
1313/// // Root `a` in the store's scope. It will be rooted for the duration of the
1314/// // store's lifetime.
1315/// a = ExternRef::new(&mut store, 42)?;
1316///
1317/// // `a` is rooted, so we can access its data successfully.
1318/// assert!(a.data(&store).is_ok());
1319///
1320/// {
1321///     let mut scope1 = RootScope::new(&mut store);
1322///
1323///     // Root `b` in `scope1`.
1324///     b = ExternRef::new(&mut scope1, 36)?;
1325///
1326///     // Both `a` and `b` are rooted.
1327///     assert!(a.data(&scope1).is_ok());
1328///     assert!(b.data(&scope1).is_ok());
1329///
1330///     {
1331///         let mut scope2 = RootScope::new(&mut scope1);
1332///
1333///         // Root `c` in `scope2`.
1334///         c = ExternRef::new(&mut scope2, 36)?;
1335///
1336///         // All of `a`, `b`, and `c` are rooted.
1337///         assert!(a.data(&scope2).is_ok());
1338///         assert!(b.data(&scope2).is_ok());
1339///         assert!(c.data(&scope2).is_ok());
1340///
1341///         // Drop `scope2`.
1342///     }
1343///
1344///     // Now `a` and `b` are still rooted, but `c` was unrooted when we dropped
1345///     // `scope2`.
1346///     assert!(a.data(&scope1).is_ok());
1347///     assert!(b.data(&scope1).is_ok());
1348///     assert!(c.data(&scope1).is_err());
1349///
1350///     // Drop `scope1`.
1351/// }
1352///
1353/// // And now only `a` is still rooted. Both `b` and `c` were unrooted when we
1354/// // dropped their respective rooting scopes.
1355/// assert!(a.data(&store).is_ok());
1356/// assert!(b.data(&store).is_err());
1357/// assert!(c.data(&store).is_err());
1358/// # Ok(())
1359/// # }
1360/// ```
1361pub struct RootScope<C>
1362where
1363    C: AsContextMut,
1364{
1365    store: C,
1366    scope: usize,
1367}
1368
1369impl<C> Drop for RootScope<C>
1370where
1371    C: AsContextMut,
1372{
1373    fn drop(&mut self) {
1374        self.store.as_context_mut().0.exit_gc_lifo_scope(self.scope);
1375    }
1376}
1377
1378impl<C> RootScope<C>
1379where
1380    C: AsContextMut,
1381{
1382    // NB: we MUST NOT expose a method like
1383    //
1384    //     pub fn store(&mut self) -> &mut Store { ... }
1385    //
1386    // because callers could do treacherous things like
1387    //
1388    //     let scope1 = RootScope::new(&mut store1);
1389    //     let scope2 = RootScope::new(&mut store2);
1390    //     std::mem::swap(scope1.store(), scope2.store());
1391    //
1392    // and then we would start truncating the store's GC root set's LIFO roots to
1393    // the wrong lengths.
1394    //
1395    // Instead, we just implement `AsContext[Mut]` for `RootScope`.
1396
1397    /// Construct a new scope for rooting GC objects.
1398    ///
1399    /// # Example
1400    ///
1401    /// ```
1402    /// # use wasmtime::*;
1403    /// let mut store = Store::<()>::default();
1404    ///
1405    /// {
1406    ///     let mut scope = RootScope::new(&mut store);
1407    ///
1408    ///     // Temporarily root GC objects in this nested rooting scope...
1409    /// }
1410    /// ```
1411    pub fn new(store: C) -> Self {
1412        let scope = store.as_context().0.gc_roots().enter_lifo_scope();
1413        RootScope { store, scope }
1414    }
1415
1416    fn gc_roots(&mut self) -> &mut RootSet {
1417        self.store.as_context_mut().0.gc_roots_mut()
1418    }
1419
1420    fn lifo_roots(&mut self) -> &mut Vec<LifoRoot> {
1421        &mut self.gc_roots().lifo_roots
1422    }
1423
1424    /// Reserve enough capacity for `additional` GC roots in this scope.
1425    ///
1426    /// # Example
1427    ///
1428    /// ```
1429    /// # use wasmtime::*;
1430    /// let mut store = Store::<()>::default();
1431    ///
1432    /// {
1433    ///     let mut scope = RootScope::new(&mut store);
1434    ///
1435    ///     // Ensure we have enough storage pre-allocated to root five GC
1436    ///     // references inside this scope without any underlying reallocation.
1437    ///     scope.reserve(5);
1438    ///
1439    ///     // ...
1440    /// }
1441    /// ```
1442    pub fn reserve(&mut self, additional: usize) {
1443        self.lifo_roots().reserve(additional);
1444    }
1445}
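Under the hood, entering a scope (`enter_lifo_scope`) just records the current length of the store's LIFO root list, and exiting (`exit_gc_lifo_scope`, called from `Drop`) truncates back to that length, unrooting everything pushed in between. A minimal sketch of that bookkeeping with a hypothetical `LifoRootSet` type (using `&'static str`s as stand-ins for rooted GC references):

```rust
struct LifoRootSet {
    roots: Vec<&'static str>,
}

impl LifoRootSet {
    /// Entering a scope records the current stack length.
    fn enter_scope(&self) -> usize {
        self.roots.len()
    }

    fn push_root(&mut self, r: &'static str) {
        self.roots.push(r);
    }

    /// Exiting a scope truncates back to the recorded length,
    /// unrooting everything pushed since `enter_scope`.
    fn exit_scope(&mut self, scope: usize) {
        self.roots.truncate(scope);
    }
}

fn main() {
    let mut set = LifoRootSet { roots: vec![] };
    set.push_root("a");
    let scope = set.enter_scope();
    set.push_root("b");
    set.push_root("c");
    set.exit_scope(scope);
    assert_eq!(set.roots, vec!["a"]); // "b" and "c" were unrooted
}
```

This is also why the comment above forbids `mem::swap`ing the wrapped stores: the recorded length would then truncate the *wrong* store's root list.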
1446
1447impl<T> AsContext for RootScope<T>
1448where
1449    T: AsContextMut,
1450{
1451    type Data = T::Data;
1452
1453    fn as_context(&self) -> crate::StoreContext<'_, Self::Data> {
1454        self.store.as_context()
1455    }
1456}
1457
1458impl<T> AsContextMut for RootScope<T>
1459where
1460    T: AsContextMut,
1461{
1462    fn as_context_mut(&mut self) -> crate::StoreContextMut<'_, Self::Data> {
1463        self.store.as_context_mut()
1464    }
1465}
1466
1467/// Internal version of `RootScope` that only wraps a `&mut StoreOpaque` rather
1468/// than a whole `impl AsContextMut<Data = T>`.
1469pub(crate) struct OpaqueRootScope<S>
1470where
1471    S: AsStoreOpaque,
1472{
1473    store: S,
1474    scope: usize,
1475}
1476
1477impl<S> Drop for OpaqueRootScope<S>
1478where
1479    S: AsStoreOpaque,
1480{
1481    fn drop(&mut self) {
1482        self.store.as_store_opaque().exit_gc_lifo_scope(self.scope);
1483    }
1484}
1485
1486impl<S> Deref for OpaqueRootScope<S>
1487where
1488    S: AsStoreOpaque,
1489{
1490    type Target = S;
1491
1492    fn deref(&self) -> &Self::Target {
1493        &self.store
1494    }
1495}
1496
1497// XXX: Don't use this `DerefMut` implementation to `mem::{swap,replace}` or
1498// otherwise replace the underlying `StoreOpaque` in an `OpaqueRootScope`! That will result
1499// in truncating the store's GC root set's LIFO roots to the wrong length.
1500//
1501// We don't implement `DerefMut` for `RootScope` for exactly this reason, but
1502// allow it for `OpaqueRootScope` because it is only Wasmtime-internal and not
1503// publicly exported.
1504impl<S> DerefMut for OpaqueRootScope<S>
1505where
1506    S: AsStoreOpaque,
1507{
1508    fn deref_mut(&mut self) -> &mut Self::Target {
1509        &mut self.store
1510    }
1511}
1512
1513impl<S> OpaqueRootScope<S>
1514where
1515    S: AsStoreOpaque,
1516{
1517    pub(crate) fn new(mut store: S) -> Self {
1518        let scope = store.as_store_opaque().gc_roots().enter_lifo_scope();
1519        OpaqueRootScope { store, scope }
1520    }
1521}
1522
1523/// A rooted reference to a garbage-collected `T` with automatic lifetime.
1524///
1525/// An `OwnedRooted<T>` is a strong handle to a garbage-collected `T`,
1526/// preventing its referent (and anything else transitively referenced) from
1527/// being collected by the GC until this `OwnedRooted<T>` (and any clones
1528/// of it) have been dropped.
1529///
1530/// An `OwnedRooted<T>` keeps its rooted GC object alive at least
1531/// until the `OwnedRooted<T>` itself is dropped. The
1532/// "de-registration" of the root is automatic and is triggered (in a
1533/// deferred way) by the drop of this type.
1534///
1535/// The primary way to create an `OwnedRooted<T>` is to promote a temporary
1536/// `Rooted<T>` into an `OwnedRooted<T>` via its
1537/// [`to_owned_rooted`][crate::Rooted::to_owned_rooted] method.
1538///
1539/// `OwnedRooted<T>` dereferences to its underlying `T`, allowing you to call
1540/// `T`'s methods.
1541///
1542/// # Example
1543///
1544/// ```
1545/// # use wasmtime::*;
1546/// # fn _foo() -> Result<()> {
1547/// let mut store = Store::<Option<OwnedRooted<ExternRef>>>::default();
1548///
1549/// // Create our `OwnedRooted` in a nested scope to avoid rooting it for
1550/// // the duration of the store's lifetime.
1551/// let x = {
1552///     let mut scope = RootScope::new(&mut store);
1553///     let x = ExternRef::new(&mut scope, 1234)?;
1554///     x.to_owned_rooted(&mut scope)?
1555/// };
1556///
1557/// // Place `x` into our store.
1558/// *store.data_mut() = Some(x);
1559///
1560/// // Do a bunch of stuff that may or may not access, replace, or take `x`...
1561/// # Ok(())
1562/// # }
1563/// ```
1564pub struct OwnedRooted<T>
1565where
1566    T: GcRef,
1567{
1568    inner: GcRootIndex,
1569    liveness_flag: Arc<()>,
1570    _phantom: marker::PhantomData<T>,
1571}
1572
1573const _: () = {
1574    use crate::{AnyRef, ExternRef};
1575
1576    // NB: these match the C API which should also be updated if this changes.
1577    //
1578    // The size is really "16 + pointer + alignment": 20 bytes on some
1579    // 32-bit platforms, and 24 bytes on 64-bit platforms as well as
1580    // on 32-bit platforms that add an extra 4 bytes of padding
1581    // (e.g., riscv32).
1582    assert!(
1583        mem::size_of::<OwnedRooted<AnyRef>>() >= 16 && mem::size_of::<OwnedRooted<AnyRef>>() <= 24
1584    );
1585    assert!(mem::align_of::<OwnedRooted<AnyRef>>() == mem::align_of::<u64>());
1586    assert!(
1587        mem::size_of::<OwnedRooted<ExternRef>>() >= 16
1588            && mem::size_of::<OwnedRooted<ExternRef>>() <= 24
1589    );
1590    assert!(mem::align_of::<OwnedRooted<ExternRef>>() == mem::align_of::<u64>());
1591};
1592
1593impl<T: GcRef> Debug for OwnedRooted<T> {
1594    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1595        let name = format!("OwnedRooted<{}>", any::type_name::<T>());
1596        f.debug_struct(&name).field("inner", &self.inner).finish()
1597    }
1598}
1599
1600impl<T: GcRef> Deref for OwnedRooted<T> {
1601    type Target = T;
1602
1603    fn deref(&self) -> &Self::Target {
1604        T::transmute_ref(&self.inner)
1605    }
1606}
1607
1608impl<T: GcRef> Clone for OwnedRooted<T> {
1609    fn clone(&self) -> Self {
1610        OwnedRooted {
1611            inner: self.inner,
1612            liveness_flag: self.liveness_flag.clone(),
1613            _phantom: marker::PhantomData,
1614        }
1615    }
1616}
1617
1618impl<T> OwnedRooted<T>
1619where
1620    T: GcRef,
1621{
1622    /// Construct a new owned GC root.
1623    ///
1624    /// `gc_ref` should belong to `store`'s heap; failure to uphold this is
1625    /// memory safe but will result in general failures down the line such as
1626    /// panics or incorrect results.
1627    ///
1628    /// `gc_ref` should be a GC reference pointing to an instance of the GC type
1629    /// that `T` represents. Failure to uphold this invariant is memory safe but
1630    /// will result in general incorrectness such as panics and wrong results.
1631    pub(crate) fn new(store: &mut AutoAssertNoGc<'_>, gc_ref: VMGcRef) -> Self {
1632        // We always have the opportunity to trim and unregister stale
1633        // owned roots whenever we have a mut borrow to the store. We
1634        // take the opportunity to do so here to avoid tying growth of
1635        // the root-set to the GC frequency -- it is much cheaper to
1636        // eagerly trim these roots. Note that the trimming keeps a
1637        // "high water mark" that grows exponentially, so we have
1638        // amortized constant time even though an individual trim
1639        // takes time linear in the number of roots.
1640        store.trim_gc_liveness_flags(false);
1641
1642        let roots = store.gc_roots_mut();
1643        let id = roots.owned_rooted.alloc(gc_ref);
1644        let liveness_flag = Arc::new(());
1645        roots
1646            .liveness_flags
1647            .push((Arc::downgrade(&liveness_flag), id));
1648        OwnedRooted {
1649            inner: GcRootIndex {
1650                store_id: store.id(),
1651                generation: 0,
1652                index: PackedIndex::new_owned(id),
1653            },
1654            liveness_flag,
1655            _phantom: marker::PhantomData,
1656        }
1657    }
1658
1659    #[inline]
1660    pub(crate) fn comes_from_same_store(&self, store: &StoreOpaque) -> bool {
1661        debug_assert!(self.inner.index.is_owned());
1662        self.inner.comes_from_same_store(store)
1663    }
1664
1665    /// Clone this `OwnedRooted<T>` into a `Rooted<T>`.
1666    ///
1667    /// This operation does not consume or unroot this `OwnedRooted<T>`.
1668    ///
1669    /// The underlying GC object is re-rooted in the given context's scope. The
1670    /// resulting `Rooted<T>` is only valid during the given context's
1671    /// scope. See the [`Rooted<T>`][crate::Rooted] documentation for more
1672    /// details on rooting scopes.
1675    ///
1676    /// # Panics
1677    ///
1678    /// Panics if this object is not associated with the given context's store.
1679    ///
1680    /// # Example
1681    ///
1682    /// ```
1683    /// # use wasmtime::*;
1684    /// # fn _foo() -> Result<()> {
1685    /// let mut store = Store::<()>::default();
1686    ///
1687    /// let root1: Rooted<_>;
1688    ///
1689    /// let owned = {
1690    ///     let mut scope = RootScope::new(&mut store);
1691    ///     root1 = ExternRef::new(&mut scope, 1234)?;
1692    ///     root1.to_owned_rooted(&mut scope)?
1693    /// };
1694    ///
1695    /// // `root1` is no longer accessible because it was unrooted when `scope`
1696    /// // was dropped.
1697    /// assert!(root1.data(&store).is_err());
1698    ///
1699    /// // But we can re-root `owned` into this scope.
1700    /// let root2 = owned.to_rooted(&mut store);
1701    /// assert!(root2.data(&store).is_ok());
1702    /// # Ok(())
1703    /// # }
1704    /// ```
1705    pub fn to_rooted(&self, mut context: impl AsContextMut) -> Rooted<T> {
1706        self._to_rooted(context.as_context_mut().0)
1707    }
1708
1709    pub(crate) fn _to_rooted(&self, store: &mut StoreOpaque) -> Rooted<T> {
1710        assert!(
1711            self.comes_from_same_store(store),
1712            "object used with wrong store"
1713        );
1714        let mut store = AutoAssertNoGc::new(store);
1715        let gc_ref = self.clone_gc_ref(&mut store).unwrap();
1716        Rooted::new(&mut store, gc_ref)
1717    }
1718
1719    /// Are these two GC roots referencing the same underlying GC object?
1720    ///
1721    /// This function will return `true` even when `a` and `b` are different GC
1722    /// roots (for example because they were rooted in different scopes) if they
1723    /// are rooting the same underlying GC object.
1724    ///
1725    /// Because this method takes any `impl RootedGcRef<T>` arguments, it can be
1726    /// used to compare, for example, a `Rooted<T>` and an `OwnedRooted<T>`.
1727    ///
1728    /// # Panics
1729    ///
1730    /// Panics if either `a` or `b` is not associated with the given `store`.
1731    ///
1732    /// # Example
1733    ///
1734    /// ```
1735    /// # use wasmtime::*;
1736    /// # fn foo() -> Result<()> {
1737    /// let mut store = Store::<()>::default();
1738    ///
1739    /// let a;
1740    /// let b;
1741    /// let x;
1742    ///
1743    /// {
1744    ///     let mut scope = RootScope::new(&mut store);
1745    ///
1746    ///     a = ExternRef::new(&mut scope, "hello")?.to_owned_rooted(&mut scope)?;
1747    ///     b = a.clone();
1748    ///
1749    ///     // `a` and `b` are rooting the same object.
1750    ///     assert!(OwnedRooted::ref_eq(&scope, &a, &b)?);
1751    ///
1752    ///     // `c` is a different GC root, is in a different scope, and is a
1753    ///     // `Rooted<T>` instead of a `OwnedRooted<T>`, but is still rooting
1754    ///     // the same object.
1755    ///     let c = a.to_rooted(&mut scope);
1756    ///     assert!(OwnedRooted::ref_eq(&scope, &a, &c)?);
1757    ///
1758    ///     x = ExternRef::new(&mut scope, "goodbye")?.to_owned_rooted(&mut scope)?;
1759    ///
1760    ///     // `a` and `x` are rooting different objects.
1761    ///     assert!(!OwnedRooted::ref_eq(&scope, &a, &x)?);
1762    /// }
1763    /// # Ok(())
1764    /// # }
1765    /// ```
1766    pub fn ref_eq(
1767        store: impl AsContext,
1768        a: &impl RootedGcRef<T>,
1769        b: &impl RootedGcRef<T>,
1770    ) -> Result<bool> {
1771        Rooted::ref_eq(store, a, b)
1772    }
1773
1774    /// Hash this root.
1775    ///
1776    /// Note that, similar to `Rooted::rooted_eq`, this only operates on the
1777    /// root and *not* the underlying GC reference. That means that two
1778    /// different rootings of the same object will hash to different values
1779    /// (modulo hash collisions). If this is undesirable, use the
1780    /// [`ref_hash`][crate::OwnedRooted::ref_hash] method instead.
1781    pub fn rooted_hash<H>(&self, state: &mut H)
1782    where
1783        H: Hasher,
1784    {
1785        self.inner.hash(state);
1786    }
1787
1788    /// Hash the underlying rooted object reference.
1789    ///
1790    /// Note that, similar to `Rooted::ref_eq`, this operates on the underlying
1791    /// rooted GC object reference, not the root. That means that two
1792    /// *different* rootings of the same object will hash to the *same*
1793    /// value. If this is undesirable, use the
1794    /// [`rooted_hash`][crate::OwnedRooted::rooted_hash] method instead.
1795    pub fn ref_hash<H>(&self, store: impl AsContext, state: &mut H)
1796    where
1797        H: Hasher,
1798    {
1799        let gc_ref = self
1800            .get_gc_ref(store.as_context().0)
1801            .expect("OwnedRooted's get_gc_ref is infallible");
1802        gc_ref.hash(state);
1803    }
1804
1805    /// Cast `self` to an `OwnedRooted<U>`.
1806    ///
1807    /// It is the caller's responsibility to ensure that `self` is actually a
1808    /// `U`. Failure to uphold this invariant will be memory safe but will
1809    /// result in general incorrectness such as panics and wrong results.
1810    pub(crate) fn unchecked_cast<U: GcRef>(self) -> OwnedRooted<U> {
1811        OwnedRooted {
1812            inner: self.inner,
1813            liveness_flag: self.liveness_flag,
1814            _phantom: core::marker::PhantomData,
1815        }
1816    }
1817
1818    /// Common implementation of the `WasmTy::store` trait method for all
1819    /// `OwnedRooted<T>`s.
1820    pub(super) fn wasm_ty_store(
1821        self,
1822        store: &mut AutoAssertNoGc<'_>,
1823        ptr: &mut MaybeUninit<ValRaw>,
1824        val_raw: impl Fn(u32) -> ValRaw,
1825    ) -> Result<()> {
1826        let gc_ref = self.try_clone_gc_ref(store)?;
1827        let raw = store.require_gc_store_mut()?.expose_gc_ref_to_wasm(gc_ref);
1828        ptr.write(val_raw(raw.get()));
1829        Ok(())
1830    }
1831
1832    /// Common implementation of the `WasmTy::load` trait method for all
1833    /// `OwnedRooted<T>`s.
1834    pub(super) fn wasm_ty_load(
1835        store: &mut AutoAssertNoGc<'_>,
1836        raw_gc_ref: u32,
1837        from_cloned_gc_ref: impl Fn(&mut AutoAssertNoGc<'_>, VMGcRef) -> Rooted<T>,
1838    ) -> Self {
1839        debug_assert_ne!(raw_gc_ref, 0);
1840        let gc_ref = VMGcRef::from_raw_u32(raw_gc_ref).expect("non-null");
1841        let gc_ref = store.clone_gc_ref(&gc_ref);
1842        RootSet::with_lifo_scope(store, |store| {
1843            let rooted = from_cloned_gc_ref(store, gc_ref);
1844            rooted._to_owned_rooted(store).expect("rooted is in scope")
1845        })
1846    }
1847
1848    /// Common implementation of the `WasmTy::store` trait method for all
1849    /// `Option<OwnedRooted<T>>`s.
1850    pub(super) fn wasm_ty_option_store(
1851        me: Option<Self>,
1852        store: &mut AutoAssertNoGc<'_>,
1853        ptr: &mut MaybeUninit<ValRaw>,
1854        val_raw: impl Fn(u32) -> ValRaw,
1855    ) -> Result<()> {
1856        match me {
1857            Some(me) => me.wasm_ty_store(store, ptr, val_raw),
1858            None => {
1859                ptr.write(val_raw(0));
1860                Ok(())
1861            }
1862        }
1863    }
1864
1865    /// Common implementation of the `WasmTy::load` trait method for all
1866    /// `Option<OwnedRooted<T>>`s.
1867    pub(super) fn wasm_ty_option_load(
1868        store: &mut AutoAssertNoGc<'_>,
1869        raw_gc_ref: u32,
1870        from_cloned_gc_ref: impl Fn(&mut AutoAssertNoGc<'_>, VMGcRef) -> Rooted<T>,
1871    ) -> Option<Self> {
1872        let gc_ref = VMGcRef::from_raw_u32(raw_gc_ref)?;
1873        let gc_ref = store.clone_gc_ref(&gc_ref);
1874        RootSet::with_lifo_scope(store, |store| {
1875            let rooted = from_cloned_gc_ref(store, gc_ref);
1876            Some(rooted._to_owned_rooted(store).expect("rooted is in scope"))
1877        })
1878    }
1879
1880    #[doc(hidden)]
1881    pub fn into_parts_for_c_api(self) -> (NonZeroU64, u32, u32, *const ()) {
1882        (
1883            self.inner.store_id.as_raw(),
1884            self.inner.generation,
1885            self.inner.index.0,
1886            Arc::into_raw(self.liveness_flag),
1887        )
1888    }
1889
1890    #[doc(hidden)]
1891    pub unsafe fn from_borrowed_raw_parts_for_c_api(
1892        a: NonZeroU64,
1893        b: u32,
1894        c: u32,
1895        d: *const (),
1896    ) -> OwnedRooted<T> {
1897        // We are given a *borrow* of the Arc. This is a little
1898        // sketchy because `Arc::from_raw()` takes *ownership* of the
1899        // passed-in pointer, so we need to clone then forget that
1900        // original.
1901        let liveness_flag = {
1902            let original = unsafe { Arc::from_raw(d) };
1903            let clone = original.clone();
1904            core::mem::forget(original);
1905            clone
1906        };
1907        OwnedRooted {
1908            inner: GcRootIndex {
1909                store_id: StoreId::from_raw(a),
1910                generation: b,
1911                index: PackedIndex(c),
1912            },
1913            liveness_flag,
1914            _phantom: marker::PhantomData,
1915        }
1916    }
1917
1918    #[doc(hidden)]
1919    pub unsafe fn from_owned_raw_parts_for_c_api(
1920        a: NonZeroU64,
1921        b: u32,
1922        c: u32,
1923        d: *const (),
1924    ) -> OwnedRooted<T> {
1925        let liveness_flag = unsafe { Arc::from_raw(d) };
1926        OwnedRooted {
1927            inner: GcRootIndex {
1928                store_id: StoreId::from_raw(a),
1929                generation: b,
1930                index: PackedIndex(c),
1931            },
1932            liveness_flag,
1933            _phantom: marker::PhantomData,
1934        }
1935    }
1936}
1937
1938impl<T: GcRef> RootedGcRefImpl<T> for OwnedRooted<T> {
1939    fn get_gc_ref<'a>(&self, store: &'a StoreOpaque) -> Option<&'a VMGcRef> {
1940        assert!(
1941            self.comes_from_same_store(store),
1942            "object used with wrong store"
1943        );
1944
1945        let id = self.inner.index.as_owned().unwrap();
1946        store.gc_roots().owned_rooted.get(id)
1947    }
1948}
1949
1950#[cfg(test)]
1951mod tests {
1952    use crate::ExternRef;
1953
1954    use super::*;
1955
1956    #[test]
1957    fn sizes() {
1958        // Try to keep tabs on the size of these things. Don't want them growing
1959        // unintentionally.
1960        assert_eq!(std::mem::size_of::<Rooted<ExternRef>>(), 16);
1961        assert!(std::mem::size_of::<OwnedRooted<ExternRef>>() <= 24);
1962    }
1963}