Pulley bytecode operations with their operands.
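The summaries below use a compact pseudo-notation: `low32(x)`, `low16(x)`, and `low8(x)` select the low bits of a register, `zext`/`sext` zero- or sign-extend a narrower value, and `dst = ...` writes the whole destination register. A minimal sketch of what that notation means, assuming an `x` register is viewed as a plain `u64` (the helper functions here are illustrative only and are not part of the crate's API):

```rust
// Illustrative helpers for the pseudo-notation used in the listing below.
// They only model the notation's meaning on a u64-valued register; they are
// not types or functions exported by pulley_interpreter.

fn low32(x: u64) -> u32 {
    x as u32
}

fn zext32(x: u32) -> u64 {
    x as u64 // zero-extend: the upper 32 bits become 0
}

fn sext8(x: u8) -> u64 {
    x as i8 as i64 as u64 // sign-extend the low 8 bits to 64 bits
}

fn main() {
    let src: u64 = 0xffff_ffff_8000_0001;
    // `Zext32`: dst = zext(low32(src))
    assert_eq!(zext32(low32(src)), 0x8000_0001);
    // `Sext8`: dst = sext(low8(src))
    assert_eq!(sext8(src as u8), 1);
    // `Xadd32` (wrapping): low32(dst) = low32(src1) + low32(src2)
    assert_eq!(low32(src).wrapping_add(low32(src)), 2);
}
```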
Structs
- BitcastFloatFromInt32 - `low32(dst) = bitcast low32(src) as f32`
- BitcastFloatFromInt64 - `dst = bitcast src as f64`
- BitcastIntFromFloat32 - `low32(dst) = bitcast low32(src) as i32`
- BitcastIntFromFloat64 - `dst = bitcast src as i64`
- BrIf - Conditionally transfer control to the given PC offset if `low32(cond)` contains a non-zero value.
- BrIfNot - Conditionally transfer control to the given PC offset if `low32(cond)` contains a zero value.
- BrIfXeq32 - Branch if `a == b`.
- BrIfXeq64 - Branch if `a == b`.
- BrIfXeq32I8 - Branch if `a == b`.
- BrIfXeq32I32 - Branch if `a == b`.
- BrIfXeq64I8 - Branch if `a == b`.
- BrIfXeq64I32 - Branch if `a == b`.
- BrIfXneq32 - Branch if `a != b`.
- BrIfXneq64 - Branch if `a != b`.
- BrIfXneq32I8 - Branch if `a != b`.
- BrIfXneq32I32 - Branch if `a != b`.
- BrIfXneq64I8 - Branch if `a != b`.
- BrIfXneq64I32 - Branch if `a != b`.
- BrIfXsgt32I8 - Branch if signed `a > b`.
- BrIfXsgt32I32 - Branch if signed `a > b`.
- BrIfXsgt64I8 - Branch if signed `a > b`.
- BrIfXsgt64I32 - Branch if signed `a > b`.
- BrIfXsgteq32I8 - Branch if signed `a >= b`.
- BrIfXsgteq32I32 - Branch if signed `a >= b`.
- BrIfXsgteq64I8 - Branch if signed `a >= b`.
- BrIfXsgteq64I32 - Branch if signed `a >= b`.
- BrIfXslt32 - Branch if signed `a < b`.
- BrIfXslt64 - Branch if signed `a < b`.
- BrIfXslt32I8 - Branch if signed `a < b`.
- BrIfXslt32I32 - Branch if signed `a < b`.
- BrIfXslt64I8 - Branch if signed `a < b`.
- BrIfXslt64I32 - Branch if signed `a < b`.
- BrIfXslteq32 - Branch if signed `a <= b`.
- BrIfXslteq64 - Branch if signed `a <= b`.
- BrIfXslteq32I8 - Branch if signed `a <= b`.
- BrIfXslteq32I32 - Branch if signed `a <= b`.
- BrIfXslteq64I8 - Branch if signed `a <= b`.
- BrIfXslteq64I32 - Branch if signed `a <= b`.
- BrIfXugt32U8 - Branch if unsigned `a > b`.
- BrIfXugt32U32 - Branch if unsigned `a > b`.
- BrIfXugt64U8 - Branch if unsigned `a > b`.
- BrIfXugt64U32 - Branch if unsigned `a > b`.
- BrIfXugteq32U8 - Branch if unsigned `a >= b`.
- BrIfXugteq32U32 - Branch if unsigned `a >= b`.
- BrIfXugteq64U8 - Branch if unsigned `a >= b`.
- BrIfXugteq64U32 - Branch if unsigned `a >= b`.
- BrIfXult32 - Branch if unsigned `a < b`.
- BrIfXult64 - Branch if unsigned `a < b`.
- BrIfXult32U8 - Branch if unsigned `a < b`.
- BrIfXult32U32 - Branch if unsigned `a < b`.
- BrIfXult64U8 - Branch if unsigned `a < b`.
- BrIfXult64U32 - Branch if unsigned `a < b`.
- BrIfXulteq32 - Branch if unsigned `a <= b`.
- BrIfXulteq64 - Branch if unsigned `a <= b`.
- BrIfXulteq32U8 - Branch if unsigned `a <= b`.
- BrIfXulteq32U32 - Branch if unsigned `a <= b`.
- BrIfXulteq64U8 - Branch if unsigned `a <= b`.
- BrIfXulteq64U32 - Branch if unsigned `a <= b`.
- BrTable32 - Branch to the label indicated by `low32(idx)`.
- Bswap32 - `dst = byteswap(low32(src))`
- Bswap64 - `dst = byteswap(src)`
- Call - Transfer control to the PC at the given offset and set the `lr` register to the PC just after this instruction.
- Call1 - Like `call`, but also `x0 = arg1`.
- Call2 - Like `call`, but also `x0, x1 = arg1, arg2`.
- Call3 - Like `call`, but also `x0, x1, x2 = arg1, arg2, arg3`.
- Call4 - Like `call`, but also `x0, x1, x2, x3 = arg1, arg2, arg3, arg4`.
- CallIndirect - Transfer control to the PC in `reg` and set `lr` to the PC just after this instruction.
- CallIndirectHost - A special opcode to halt interpreter execution and yield control back to the host.
- F32FromF64 - `low32(dst) = demote(src)`
- F32FromX32S - `low32(dst) = checked_f32_from_signed(low32(src))`
- F32FromX32U - `low32(dst) = checked_f32_from_unsigned(low32(src))`
- F32FromX64S - `low32(dst) = checked_f32_from_signed(src)`
- F32FromX64U - `low32(dst) = checked_f32_from_unsigned(src)`
- F64FromF32 - `dst = promote(low32(src))`
- F64FromX32S - `dst = checked_f64_from_signed(low32(src))`
- F64FromX32U - `dst = checked_f64_from_unsigned(low32(src))`
- F64FromX64S - `dst = checked_f64_from_signed(src)`
- F64FromX64U - `dst = checked_f64_from_unsigned(src)`
- FConst32 - `low32(dst) = bits`
- FConst64 - `dst = bits`
- FCopySign32 - `low32(dst) = copysign(low32(src1), low32(src2))`
- FCopySign64 - `dst = copysign(src1, src2)`
- FExtractV32x4 - `low32(dst) = src[lane]`
- FExtractV64x2 - `dst = src[lane]`
- FSelect32 - `low32(dst) = low32(cond) ? low32(if_nonzero) : low32(if_zero)`
- FSelect64 - `dst = low32(cond) ? if_nonzero : if_zero`
- Fabs32 - `low32(dst) = |low32(src)|`
- Fabs64 - `dst = |src|`
- Fadd32 - `low32(dst) = low32(src1) + low32(src2)`
- Fadd64 - `dst = src1 + src2`
- Fceil32 - `low32(dst) = ieee_ceil(low32(src))`
- Fceil64 - `dst = ieee_ceil(src)`
- Fdiv32 - `low32(dst) = low32(src1) / low32(src2)`
- Fdiv64 - `dst = src1 / src2`
- Feq32 - `low32(dst) = zext(src1 == src2)`
- Feq64 - `low32(dst) = zext(src1 == src2)`
- Ffloor32 - `low32(dst) = ieee_floor(low32(src))`
- Ffloor64 - `dst = ieee_floor(src)`
- Fload32BeO32 - `low32(dst) = zext(*addr)`
- Fload32LeG32 - `low32(dst) = zext(*addr)`
- Fload32LeO32 - `low32(dst) = zext(*addr)`
- Fload32LeZ - `low32(dst) = zext(*addr)`
- Fload64BeO32 - `dst = *addr`
- Fload64LeG32 - `dst = *addr`
- Fload64LeO32 - `dst = *addr`
- Fload64LeZ - `dst = *addr`
- Flt32 - `low32(dst) = zext(src1 < src2)`
- Flt64 - `low32(dst) = zext(src1 < src2)`
- Flteq32 - `low32(dst) = zext(src1 <= src2)`
- Flteq64 - `low32(dst) = zext(src1 <= src2)`
- Fmaximum32 - `low32(dst) = ieee_maximum(low32(src1), low32(src2))`
- Fmaximum64 - `dst = ieee_maximum(src1, src2)`
- Fminimum32 - `low32(dst) = ieee_minimum(low32(src1), low32(src2))`
- Fminimum64 - `dst = ieee_minimum(src1, src2)`
- Fmov - Move between `f` registers.
- Fmul32 - `low32(dst) = low32(src1) * low32(src2)`
- Fmul64 - `dst = src1 * src2`
- Fnearest32 - `low32(dst) = ieee_nearest(low32(src))`
- Fnearest64 - `dst = ieee_nearest(src)`
- Fneg32 - `low32(dst) = -low32(src)`
- Fneg64 - `dst = -src`
- Fneq32 - `low32(dst) = zext(src1 != src2)`
- Fneq64 - `low32(dst) = zext(src1 != src2)`
- Fsqrt32 - `low32(dst) = ieee_sqrt(low32(src))`
- Fsqrt64 - `dst = ieee_sqrt(src)`
- Fstore32BeO32 - `*addr = low32(src)`
- Fstore32LeG32 - `*addr = low32(src)`
- Fstore32LeO32 - `*addr = low32(src)`
- Fstore32LeZ - `*addr = low32(src)`
- Fstore64BeO32 - `*addr = src`
- Fstore64LeG32 - `*addr = src`
- Fstore64LeO32 - `*addr = src`
- Fstore64LeZ - `*addr = src`
- Fsub32 - `low32(dst) = low32(src1) - low32(src2)`
- Fsub64 - `dst = src1 - src2`
- Ftrunc32 - `low32(dst) = ieee_trunc(low32(src))`
- Ftrunc64 - `dst = ieee_trunc(src)`
- Jump - Unconditionally transfer control to the PC at the given offset.
- MaterializeOpsVisitor - A visitor that materializes whole `Op`s as it decodes the bytecode stream.
- Nop - No-operation.
- PopFrame - `sp = fp; pop fp; pop lr`
- PopFrameRestore - Inverse of `push_frame_save`. Restores `regs` from the top of the stack, then runs `stack_free32 amt`, then runs `pop_frame`.
- PushFrame - `push lr; push fp; fp = sp`
- PushFrameSave - Macro-instruction to enter a function, allocate some stack, and then save some registers (a sketch of the underlying frame discipline appears after this list).
- Ret - Transfer control to the address in the `lr` register.
- Sext8 - `dst = sext(low8(src))`
- Sext16 - `dst = sext(low16(src))`
- Sext32 - `dst = sext(low32(src))`
- StackAlloc32 - `sp = sp.checked_sub(amt)`
- StackFree32 - `sp = sp + amt`
- Trap - Raise a trap.
- VAddF32x4 - `dst = src1 + src2`
- VAddF64x2 - `dst = src1 + src2`
- VAddI8x16 - `dst = src1 + src2`
- VAddI8x16Sat - `dst = saturating_add(src1, src2)`
- VAddI16x8 - `dst = src1 + src2`
- VAddI16x8Sat - `dst = saturating_add(src1, src2)`
- VAddI32x4 - `dst = src1 + src2`
- VAddI64x2 - `dst = src1 + src2`
- VAddU8x16Sat - `dst = saturating_add(src1, src2)`
- VAddU16x8Sat - `dst = saturating_add(src1, src2)`
- VAddpairwiseI16x8S - `dst = [src1[0] + src1[1], ..., src2[6] + src2[7]]`
- VAddpairwiseI32x4S - `dst = [src1[0] + src1[1], ..., src2[2] + src2[3]]`
- VBand128 - `dst = src1 & src2`
- VBitselect128 - `dst = (c & x) | (!c & y)`
- VBnot128 - `dst = !src1`
- VBor128 - `dst = src1 | src2`
- VBxor128 - `dst = src1 ^ src2`
- VDivF64x2 - `dst = src1 / src2`
- VF32x4FromI32x4S - Int-to-float conversion (same as `f32_from_x32_s`)
- VF32x4FromI32x4U - Int-to-float conversion (same as `f32_from_x32_u`)
- VF64x2FromI64x2S - Int-to-float conversion (same as `f64_from_x64_s`)
- VF64x2FromI64x2U - Int-to-float conversion (same as `f64_from_x64_u`)
- VFdemote - Demotes the two f64x2 lanes to f32x2 and then extends with two more zero lanes.
- VFpromoteLow - Promotes the low two lanes of the f32x4 input to f64x2.
- VI32x4FromF32x4S - Float-to-int conversion (same as `x32_from_f32_s`)
- VI32x4FromF32x4U - Float-to-int conversion (same as `x32_from_f32_u`)
- VI64x2FromF64x2S - Float-to-int conversion (same as `x64_from_f64_s`)
- VI64x2FromF64x2U - Float-to-int conversion (same as `x64_from_f64_u`)
- VInsertF32 - `dst = src1; dst[lane] = src2`
- VInsertF64 - `dst = src1; dst[lane] = src2`
- VInsertX8 - `dst = src1; dst[lane] = src2`
- VInsertX16 - `dst = src1; dst[lane] = src2`
- VInsertX32 - `dst = src1; dst[lane] = src2`
- VInsertX64 - `dst = src1; dst[lane] = src2`
- VLoad8x8SZ - Load the 64-bit source as i8x8 and sign-extend to i16x8.
- VLoad8x8UZ - Load the 64-bit source as u8x8 and zero-extend to i16x8.
- VLoad16x4LeSZ - Load the 64-bit source as i16x4 and sign-extend to i32x4.
- VLoad16x4LeUZ - Load the 64-bit source as u16x4 and zero-extend to i32x4.
- VLoad32x2LeSZ - Load the 64-bit source as i32x2 and sign-extend to i64x2.
- VLoad32x2LeUZ - Load the 64-bit source as u32x2 and zero-extend to i64x2.
- VLoad128G32 - `dst = *(ptr + offset)`
- VLoad128O32 - `dst = *addr`
- VLoad128Z - `dst = *(ptr + offset)`
- VMulF64x2 - `dst = src1 * src2`
- VMulI8x16 - `dst = src1 * src2`
- VMulI16x8 - `dst = src1 * src2`
- VMulI32x4 - `dst = src1 * src2`
- VMulI64x2 - `dst = src1 * src2`
- VPopcnt8x16 - `dst = count_ones(src)`
- VQmulrsI16x8 - `dst = signed_saturate(src1 * src2 + (1 << (Q - 1)) >> Q)`
- VShlI8x16 - `dst = src1 << src2`
- VShlI16x8 - `dst = src1 << src2`
- VShlI32x4 - `dst = src1 << src2`
- VShlI64x2 - `dst = src1 << src2`
- VShrI8x16S - `dst = src1 >> src2` (signed)
- VShrI8x16U - `dst = src1 >> src2` (unsigned)
- VShrI16x8S - `dst = src1 >> src2` (signed)
- VShrI16x8U - `dst = src1 >> src2` (unsigned)
- VShrI32x4S - `dst = src1 >> src2` (signed)
- VShrI32x4U - `dst = src1 >> src2` (unsigned)
- VShrI64x2S - `dst = src1 >> src2` (signed)
- VShrI64x2U - `dst = src1 >> src2` (unsigned)
- VShuffle - `dst = shuffle(src1, src2, mask)`
- VSplatF32 - `dst = splat(low32(src))`
- VSplatF64 - `dst = splat(src)`
- VSplatX8 - `dst = splat(low8(src))`
- VSplatX16 - `dst = splat(low16(src))`
- VSplatX32 - `dst = splat(low32(src))`
- VSplatX64 - `dst = splat(src)`
- VSubF64x2 - `dst = src1 - src2`
- VSubI8x16 - `dst = src1 - src2`
- VSubI8x16Sat - `dst = saturating_sub(src1, src2)`
- VSubI16x8 - `dst = src1 - src2`
- VSubI16x8Sat - `dst = saturating_sub(src1, src2)`
- VSubI32x4 - `dst = src1 - src2`
- VSubI64x2 - `dst = src1 - src2`
- VSubU8x16Sat - `dst = saturating_sub(src1, src2)`
- VSubU16x8Sat - `dst = saturating_sub(src1, src2)`
- VWidenHigh8x16S - Widens the high lanes of the input vector, as signed, to twice the width.
- VWidenHigh8x16U - Widens the high lanes of the input vector, as unsigned, to twice the width.
- VWidenHigh16x8S - Widens the high lanes of the input vector, as signed, to twice the width.
- VWidenHigh16x8U - Widens the high lanes of the input vector, as unsigned, to twice the width.
- VWidenHigh32x4S - Widens the high lanes of the input vector, as signed, to twice the width.
- VWidenHigh32x4U - Widens the high lanes of the input vector, as unsigned, to twice the width.
- VWidenLow8x16S - Widens the low lanes of the input vector, as signed, to twice the width.
- VWidenLow8x16U - Widens the low lanes of the input vector, as unsigned, to twice the width.
- VWidenLow16x8S - Widens the low lanes of the input vector, as signed, to twice the width.
- VWidenLow16x8U - Widens the low lanes of the input vector, as unsigned, to twice the width.
- VWidenLow32x4S - Widens the low lanes of the input vector, as signed, to twice the width.
- VWidenLow32x4U - Widens the low lanes of the input vector, as unsigned, to twice the width.
- Vabs8x16 - `dst = |src|`
- Vabs16x8 - `dst = |src|`
- Vabs32x4 - `dst = |src|`
- Vabs64x2 - `dst = |src|`
- Vabsf32x4 - `dst = |src|`
- Vabsf64x2 - `dst = |src|`
- Valltrue8x16 - Store whether all lanes are nonzero in `dst`.
- Valltrue16x8 - Store whether all lanes are nonzero in `dst`.
- Valltrue32x4 - Store whether all lanes are nonzero in `dst`.
- Valltrue64x2 - Store whether all lanes are nonzero in `dst`.
- Vanytrue8x16 - Store whether any lanes are nonzero in `dst`.
- Vanytrue16x8 - Store whether any lanes are nonzero in `dst`.
- Vanytrue32x4 - Store whether any lanes are nonzero in `dst`.
- Vanytrue64x2 - Store whether any lanes are nonzero in `dst`.
- Vavground8x16 - `dst = (src1 + src2 + 1) // 2`
- Vavground16x8 - `dst = (src1 + src2 + 1) // 2`
- Vbitmask8x16 - Collect the high bit of each lane into the low 32 bits of the destination.
- Vbitmask16x8 - Collect the high bit of each lane into the low 32 bits of the destination.
- Vbitmask32x4 - Collect the high bit of each lane into the low 32 bits of the destination.
- Vbitmask64x2 - Collect the high bit of each lane into the low 32 bits of the destination.
- Vceil32x4 - `low128(dst) = ieee_ceil(low128(src))`
- Vceil64x2 - `low128(dst) = ieee_ceil(low128(src))`
- Vconst128 - `dst = imm`
- Vdivf32x4 - `low128(dst) = low128(src1) / low128(src2)`
- Veq8x16 - `dst = src == dst`
- Veq16x8 - `dst = src == dst`
- Veq32x4 - `dst = src == dst`
- Veq64x2 - `dst = src == dst`
- VeqF32x4 - `dst = src == dst`
- VeqF64x2 - `dst = src == dst`
- Vfloor32x4 - `low128(dst) = ieee_floor(low128(src))`
- Vfloor64x2 - `low128(dst) = ieee_floor(low128(src))`
- Vfma32x4 - `dst = ieee_fma(a, b, c)`
- Vfma64x2 - `dst = ieee_fma(a, b, c)`
- VltF32x4 - `dst = src < dst`
- VltF64x2 - `dst = src < dst`
- VlteqF32x4 - `dst = src <= dst`
- VlteqF64x2 - `dst = src <= dst`
- Vmax8x16S - `dst = max(src1, src2)` (signed)
- Vmax8x16U - `dst = max(src1, src2)` (unsigned)
- Vmax16x8S - `dst = max(src1, src2)` (signed)
- Vmax16x8U - `dst = max(src1, src2)` (unsigned)
- Vmax32x4S - `dst = max(src1, src2)` (signed)
- Vmax32x4U - `dst = max(src1, src2)` (unsigned)
- Vmaximumf32x4 - `dst = ieee_maximum(src1, src2)`
- Vmaximumf64x2 - `dst = ieee_maximum(src1, src2)`
- Vmin8x16S - `dst = min(src1, src2)` (signed)
- Vmin8x16U - `dst = min(src1, src2)` (unsigned)
- Vmin16x8S - `dst = min(src1, src2)` (signed)
- Vmin16x8U - `dst = min(src1, src2)` (unsigned)
- Vmin32x4S - `dst = min(src1, src2)` (signed)
- Vmin32x4U - `dst = min(src1, src2)` (unsigned)
- Vminimumf32x4 - `dst = ieee_minimum(src1, src2)`
- Vminimumf64x2 - `dst = ieee_minimum(src1, src2)`
- Vmov - Move between `v` registers.
- Vmulf32x4 - `low128(dst) = low128(src1) * low128(src2)`
- Vnarrow16x8S - Narrows the two 16x8 vectors, assuming all input lanes are signed, to half the width. Narrowing is signed and saturating.
- Vnarrow16x8U - Narrows the two 16x8 vectors, assuming all input lanes are signed, to half the width. Narrowing is unsigned and saturating.
- Vnarrow32x4S - Narrows the two 32x4 vectors, assuming all input lanes are signed, to half the width. Narrowing is signed and saturating.
- Vnarrow32x4U - Narrows the two 32x4 vectors, assuming all input lanes are signed, to half the width. Narrowing is unsigned and saturating.
- Vnarrow64x2S - Narrows the two 64x2 vectors, assuming all input lanes are signed, to half the width. Narrowing is signed and saturating.
- Vnarrow64x2U - Narrows the two 64x2 vectors, assuming all input lanes are signed, to half the width. Narrowing is unsigned and saturating.
- Vnearest32x4 - `low128(dst) = ieee_nearest(low128(src))`
- Vnearest64x2 - `low128(dst) = ieee_nearest(low128(src))`
- Vneg8x16 - `dst = -src`
- Vneg16x8 - `dst = -src`
- Vneg32x4 - `dst = -src`
- Vneg64x2 - `dst = -src`
- VnegF64x2 - `dst = -src`
- Vnegf32x4 - `low128(dst) = -low128(src)`
- Vneq8x16 - `dst = src != dst`
- Vneq16x8 - `dst = src != dst`
- Vneq32x4 - `dst = src != dst`
- Vneq64x2 - `dst = src != dst`
- VneqF32x4 - `dst = src != dst`
- VneqF64x2 - `dst = src != dst`
- Vselect - `dst = low32(cond) ? if_nonzero : if_zero`
- Vslt8x16 - `dst = src < dst` (signed)
- Vslt16x8 - `dst = src < dst` (signed)
- Vslt32x4 - `dst = src < dst` (signed)
- Vslt64x2 - `dst = src < dst` (signed)
- Vslteq8x16 - `dst = src <= dst` (signed)
- Vslteq16x8 - `dst = src <= dst` (signed)
- Vslteq32x4 - `dst = src <= dst` (signed)
- Vslteq64x2 - `dst = src <= dst` (signed)
- Vsqrt32x4 - `low128(dst) = ieee_sqrt(low128(src))`
- Vsqrt64x2 - `low128(dst) = ieee_sqrt(low128(src))`
- Vstore128LeG32 - `*(ptr + offset) = src`
- Vstore128LeO32 - `*addr = src`
- Vstore128LeZ - `*(ptr + offset) = src`
- Vsubf32x4 - `low128(dst) = low128(src1) - low128(src2)`
- Vswizzlei8x16 - `dst = swizzle(src1, src2)`
- Vtrunc32x4 - `low128(dst) = ieee_trunc(low128(src))`
- Vtrunc64x2 - `low128(dst) = ieee_trunc(low128(src))`
- Vult8x16 - `dst = src < dst` (unsigned)
- Vult16x8 - `dst = src < dst` (unsigned)
- Vult32x4 - `dst = src < dst` (unsigned)
- Vult64x2 - `dst = src < dst` (unsigned)
- Vulteq8x16 - `dst = src <= dst` (unsigned)
- Vulteq16x8 - `dst = src <= dst` (unsigned)
- Vulteq32x4 - `dst = src <= dst` (unsigned)
- Vulteq64x2 - `dst = src <= dst` (unsigned)
- Vunarrow64x2U - Narrows the two 64x2 vectors, assuming all input lanes are unsigned, to half the width. Narrowing is unsigned and saturating.
- X32FromF32S - `low32(dst) = checked_signed_from_f32(low32(src))`
- X32FromF32SSat - `low32(dst) = saturating_signed_from_f32(low32(src))`
- X32FromF32U - `low32(dst) = checked_unsigned_from_f32(low32(src))`
- X32FromF32USat - `low32(dst) = saturating_unsigned_from_f32(low32(src))`
- X32FromF64S - `low32(dst) = checked_signed_from_f64(src)`
- X32FromF64SSat - `low32(dst) = saturating_signed_from_f64(src)`
- X32FromF64U - `low32(dst) = checked_unsigned_from_f64(src)`
- X32FromF64USat - `low32(dst) = saturating_unsigned_from_f64(src)`
- X64FromF32S - `dst = checked_signed_from_f32(low32(src))`
- X64FromF32SSat - `dst = saturating_signed_from_f32(low32(src))`
- X64FromF32U - `dst = checked_unsigned_from_f32(low32(src))`
- X64FromF32USat - `dst = saturating_unsigned_from_f32(low32(src))`
- X64FromF64S - `dst = checked_signed_from_f64(src)`
- X64FromF64SSat - `dst = saturating_signed_from_f64(src)`
- X64FromF64U - `dst = checked_unsigned_from_f64(src)`
- X64FromF64USat - `dst = saturating_unsigned_from_f64(src)`
- XAbs32 - `low32(dst) = |low32(src)|`
- XAbs64 - `dst = |src|`
- XBand32 - `low32(dst) = low32(src1) & low32(src2)`
- XBand64 - `dst = src1 & src2`
- XBnot32 - `low32(dst) = !low32(src1)`
- XBnot64 - `dst = !src1`
- XBor32 - `low32(dst) = low32(src1) | low32(src2)`
- XBor64 - `dst = src1 | src2`
- XBxor32 - `low32(dst) = low32(src1) ^ low32(src2)`
- XBxor64 - `dst = src1 ^ src2`
- XDiv32S - `low32(dst) = low32(src1) / low32(src2)` (signed)
- XDiv32U - `low32(dst) = low32(src1) / low32(src2)` (unsigned)
- XDiv64S - `dst = src1 / src2` (signed)
- XDiv64U - `dst = src1 / src2` (unsigned)
- XExtractV8x16 - `low32(dst) = zext(src[lane])`
- XExtractV16x8 - `low32(dst) = zext(src[lane])`
- XExtractV32x4 - `low32(dst) = src[lane]`
- XExtractV64x2 - `dst = src[lane]`
- XJump - Unconditionally transfer control to the PC in the specified register.
- XLoad8S32G32 - `low32(dst) = sext_8_32(*addr)`
- XLoad8S32G32Bne - `low32(dst) = sext_8_32(*addr)`
- XLoad8S32O32 - `low32(dst) = sext_8_32(*addr)`
- XLoad8S32Z - `low32(dst) = sext_8_32(*addr)`
- XLoad8U32G32 - `low32(dst) = zext_8_32(*addr)`
- XLoad8U32G32Bne - `low32(dst) = zext_8_32(*addr)`
- XLoad8U32O32 - `low32(dst) = zext_8_32(*addr)`
- XLoad8U32Z - `low32(dst) = zext_8_32(*addr)`
- XLoad16BeS32O32 - `low32(dst) = sext(*addr)`
- XLoad16BeU32O32 - `low32(dst) = zext(*addr)`
- XLoad16LeS32G32 - `low32(dst) = sext_16_32(*addr)`
- XLoad16LeS32G32Bne - `low32(dst) = sext_16_32(*addr)`
- XLoad16LeS32O32 - `low32(dst) = sext_16_32(*addr)`
- XLoad16LeS32Z - `low32(dst) = sext_16_32(*addr)`
- XLoad16LeU32G32 - `low32(dst) = zext_16_32(*addr)`
- XLoad16LeU32G32Bne - `low32(dst) = zext_16_32(*addr)`
- XLoad16LeU32O32 - `low32(dst) = zext_16_32(*addr)`
- XLoad16LeU32Z - `low32(dst) = zext_16_32(*addr)`
- XLoad32BeO32 - `low32(dst) = zext(*addr)`
- XLoad32LeG32 - `low32(dst) = *addr`
- XLoad32LeG32Bne - `low32(dst) = *addr`
- XLoad32LeO32 - `low32(dst) = *addr`
- XLoad32LeZ - `low32(dst) = *addr`
- XLoad64BeO32 - `dst = *addr`
- XLoad64LeG32 - `dst = *addr`
- XLoad64LeG32Bne - `dst = *addr`
- XLoad64LeO32 - `dst = *addr`
- XLoad64LeZ - `dst = *addr`
- XMul32 - `low32(dst) = low32(src1) * low32(src2)`
- XMul64 - `dst = src1 * src2`
- XMulHi64S - `dst = high64(src1 * src2)` (signed)
- XMulHi64U - `dst = high64(src1 * src2)` (unsigned)
- XRem32S - `low32(dst) = low32(src1) % low32(src2)` (signed)
- XRem32U - `low32(dst) = low32(src1) % low32(src2)` (unsigned)
- XRem64S - `dst = src1 % src2` (signed)
- XRem64U - `dst = src1 % src2` (unsigned)
- XSelect32 - `low32(dst) = low32(cond) ? low32(if_nonzero) : low32(if_zero)`
- XSelect64 - `dst = low32(cond) ? if_nonzero : if_zero`
- XStore8G32 - `*addr = low8(src)`
- XStore8G32Bne - `*addr = low8(src)`
- XStore8O32 - `*addr = low8(src)`
- XStore8Z - `*addr = low8(src)`
- XStore16BeO32 - `*addr = low16(src)`
- XStore16LeG32 - `*addr = low16(src)`
- XStore16LeG32Bne - `*addr = low16(src)`
- XStore16LeO32 - `*addr = low16(src)`
- XStore16LeZ - `*addr = low16(src)`
- XStore32BeO32 - `*addr = low32(src)`
- XStore32LeG32 - `*addr = low32(src)`
- XStore32LeG32Bne - `*addr = low32(src)`
- XStore32LeO32 - `*addr = low32(src)`
- XStore32LeZ - `*addr = low32(src)`
- XStore64BeO32 - `*addr = low64(src)`
- XStore64LeG32 - `*addr = src`
- XStore64LeG32Bne - `*addr = src`
- XStore64LeO32 - `*addr = src`
- XStore64LeZ - `*addr = src`
- Xadd32 - 32-bit wrapping addition: `low32(dst) = low32(src1) + low32(src2)`.
- Xadd64 - 64-bit wrapping addition: `dst = src1 + src2`.
- Xadd32U8 - Same as `xadd32` but `src2` is a zero-extended 8-bit immediate.
- Xadd32U32 - Same as `xadd32` but `src2` is a 32-bit immediate.
- Xadd32UoverflowTrap - 32-bit checked unsigned addition: `low32(dst) = low32(src1) + low32(src2)`.
- Xadd64U8 - Same as `xadd64` but `src2` is a zero-extended 8-bit immediate.
- Xadd64U32 - Same as `xadd64` but `src2` is a zero-extended 32-bit immediate.
- Xadd64UoverflowTrap - 64-bit checked unsigned addition: `dst = src1 + src2`.
- Xadd128 - `dst_hi:dst_lo = lhs_hi:lhs_lo + rhs_hi:rhs_lo`
- Xband32S8 - Same as `xband32` but `src2` is a sign-extended 8-bit immediate.
- Xband32S32 - Same as `xband32` but `src2` is a sign-extended 32-bit immediate.
- Xband64S8 - Same as `xband64` but `src2` is a sign-extended 8-bit immediate.
- Xband64S32 - Same as `xband64` but `src2` is a sign-extended 32-bit immediate.
- Xbmask32 - `low32(dst) = if low32(src) == 0 { 0 } else { -1 }`
- Xbmask64 - `dst = if src == 0 { 0 } else { -1 }`
- Xbor32S8 - Same as `xbor32` but `src2` is a sign-extended 8-bit immediate.
- Xbor32S32 - Same as `xbor32` but `src2` is a sign-extended 32-bit immediate.
- Xbor64S8 - Same as `xbor64` but `src2` is a sign-extended 8-bit immediate.
- Xbor64S32 - Same as `xbor64` but `src2` is a sign-extended 32-bit immediate.
- Xbxor32S8 - Same as `xbxor32` but `src2` is a sign-extended 8-bit immediate.
- Xbxor32S32 - Same as `xbxor32` but `src2` is a sign-extended 32-bit immediate.
- Xbxor64S8 - Same as `xbxor64` but `src2` is a sign-extended 8-bit immediate.
- Xbxor64S32 - Same as `xbxor64` but `src2` is a sign-extended 32-bit immediate.
- Xclz32 - `low32(dst) = leading_zeros(low32(src))`
- Xclz64 - `dst = leading_zeros(src)`
- Xconst8 - Set `dst = sign_extend(imm8)`.
- Xconst16 - Set `dst = sign_extend(imm16)`.
- Xconst32 - Set `dst = sign_extend(imm32)`.
- Xconst64 - Set `dst = imm64`.
- Xctz32 - `low32(dst) = trailing_zeros(low32(src))`
- Xctz64 - `dst = trailing_zeros(src)`
- Xeq32 - `low32(dst) = low32(src1) == low32(src2)`
- Xeq64 - `low32(dst) = src1 == src2`
- Xmadd32 - `low32(dst) = low32(src1) * low32(src2) + low32(src3)`
- Xmadd64 - `dst = src1 * src2 + src3`
- Xmax32S - `low32(dst) = max(low32(src1), low32(src2))` (signed)
- Xmax32U - `low32(dst) = max(low32(src1), low32(src2))` (unsigned)
- Xmax64S - `dst = max(src1, src2)` (signed)
- Xmax64U - `dst = max(src1, src2)` (unsigned)
- Xmin32S - `low32(dst) = min(low32(src1), low32(src2))` (signed)
- Xmin32U - `low32(dst) = min(low32(src1), low32(src2))` (unsigned)
- Xmin64S - `dst = min(src1, src2)` (signed)
- Xmin64U - `dst = min(src1, src2)` (unsigned)
- Xmov - Move between `x` registers.
- XmovFp - Gets the special `fp` register and moves it into `dst`.
- XmovLr - Gets the special `lr` register and moves it into `dst`.
- Xmul32S8 - Same as `xmul32` but `src2` is a sign-extended 8-bit immediate.
- Xmul32S32 - Same as `xmul32` but `src2` is a sign-extended 32-bit immediate.
- Xmul64S8 - Same as `xmul64` but `src2` is a sign-extended 8-bit immediate.
- Xmul64S32 - Same as `xmul64` but `src2` is a sign-extended 32-bit immediate.
- Xneg32 - `low32(dst) = -low32(src)`
- Xneg64 - `dst = -src`
- Xneq32 - `low32(dst) = low32(src1) != low32(src2)`
- Xneq64 - `low32(dst) = src1 != src2`
- Xone - Set `dst = 1`.
- Xpcadd - Adds `offset` to the PC of this instruction and stores it in `dst`.
- Xpopcnt32 - `low32(dst) = count_ones(low32(src))`
- Xpopcnt64 - `dst = count_ones(src)`
- Xrotl32 - `low32(dst) = rotate_left(low32(src1), low32(src2))`
- Xrotl64 - `dst = rotate_left(src1, src2)`
- Xrotr32 - `low32(dst) = rotate_right(low32(src1), low32(src2))`
- Xrotr64 - `dst = rotate_right(src1, src2)`
- Xshl32 - `low32(dst) = low32(src1) << low5(src2)`
- Xshl64 - `dst = src1 << low6(src2)`
- Xshl32U6 - `low32(dst) = low32(src1) << low5(src2)`
- Xshl64U6 - `dst = src1 << low6(src2)`
- Xshr32S - `low32(dst) = low32(src1) >> low5(src2)`
- Xshr32SU6 - `low32(dst) = low32(src1) >> low5(src2)`
- Xshr32U - `low32(dst) = low32(src1) >> low5(src2)`
- Xshr32UU6 - `low32(dst) = low32(src1) >> low5(src2)`
- Xshr64S - `dst = src1 >> low6(src2)`
- Xshr64SU6 - `dst = src1 >> low6(src2)`
- Xshr64U - `dst = src1 >> low6(src2)`
- Xshr64UU6 - `dst = src1 >> low6(src2)`
- Xslt32 - `low32(dst) = low32(src1) < low32(src2)` (signed)
- Xslt64 - `low32(dst) = src1 < src2` (signed)
- Xslteq32 - `low32(dst) = low32(src1) <= low32(src2)` (signed)
- Xslteq64 - `low32(dst) = src1 <= src2` (signed)
- Xsub32 - 32-bit wrapping subtraction: `low32(dst) = low32(src1) - low32(src2)`.
- Xsub64 - 64-bit wrapping subtraction: `dst = src1 - src2`.
- Xsub32U8 - Same as `xsub32` but `src2` is a zero-extended 8-bit immediate.
- Xsub32U32 - Same as `xsub32` but `src2` is a 32-bit immediate.
- Xsub64U8 - Same as `xsub64` but `src2` is a zero-extended 8-bit immediate.
- Xsub64U32 - Same as `xsub64` but `src2` is a zero-extended 32-bit immediate.
- Xsub128 - `dst_hi:dst_lo = lhs_hi:lhs_lo - rhs_hi:rhs_lo`
- Xult32 - `low32(dst) = low32(src1) < low32(src2)` (unsigned)
- Xult64 - `low32(dst) = src1 < src2` (unsigned)
- Xulteq32 - `low32(dst) = low32(src1) <= low32(src2)` (unsigned)
- Xulteq64 - `low32(dst) = src1 <= src2` (unsigned)
- Xwidemul64S - `dst_hi:dst_lo = sext(lhs) * sext(rhs)`
- Xwidemul64U - `dst_hi:dst_lo = zext(lhs) * zext(rhs)`
- Xzero - Set `dst = 0`.
- Zext8 - `dst = zext(low8(src))`
- Zext16 - `dst = zext(low16(src))`
- Zext32 - `dst = zext(low32(src))`
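The PushFrame/PopFrame entries above describe the function prologue and epilogue as small macro-instructions, with PushFrameSave/PopFrameRestore additionally saving and restoring a set of registers around an allocated stack region. A rough sketch of that frame discipline, assuming a stack modeled as a `Vec<u64>` (the `Machine` struct and its fields are hypothetical and only mirror the push/pop order in the summaries, not the interpreter's actual state):

```rust
// Sketch of the frame discipline described by PushFrame / PopFrame, using a
// Vec as the stack. Names here are illustrative, not pulley_interpreter API.
struct Machine {
    stack: Vec<u64>, // top of stack = end of the Vec
    fp: usize,       // frame pointer, as an index into `stack`
    lr: u64,         // link register (return address)
}

impl Machine {
    /// PushFrame: `push lr; push fp; fp = sp`
    fn push_frame(&mut self) {
        self.stack.push(self.lr);
        self.stack.push(self.fp as u64);
        self.fp = self.stack.len(); // fp = sp
    }

    /// PopFrame: `sp = fp; pop fp; pop lr`
    fn pop_frame(&mut self) {
        self.stack.truncate(self.fp); // sp = fp (drop any locals)
        self.fp = self.stack.pop().expect("saved fp") as usize;
        self.lr = self.stack.pop().expect("saved lr");
    }
}

fn main() {
    let mut m = Machine { stack: Vec::new(), fp: 0, lr: 0x1234 };
    m.push_frame();
    m.stack.push(0); // the body allocates a local (cf. StackAlloc32)
    m.pop_frame();
    assert_eq!(m.lr, 0x1234);
    assert!(m.stack.is_empty());
}
```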
Enums
- ExtendedOp - An extended operation/instruction.
- Op - A complete, materialized operation/instruction.