peachpy.function.Argument(c_type, name=None)
Bases: object
Function argument.
An argument must have a C type and a name.
Parameters:
c_type – the type of the argument in C
name – the name of the argument
Variables:
c_type (peachpy.c.types.Type) – the type of the argument in C.
When a Go function is generated, the type is automatically converted to a similar Go type. Note that the short, int, long, and long long types have no exact equivalents in Go; in particular, C’s int type is not equivalent to Go’s int type. To get Go’s int and uint types, use ptrdiff_t and size_t respectively.
name (str) – the name of the argument. If the name is not provided explicitly, PeachPy tries to parse it from the caller code. The name must follow the C rules for identifiers:
It can contain only Latin letters, digits, and the underscore character
It can not start with a digit
It can not start with a double underscore (such names are reserved for PeachPy)
It must be unique among the function’s arguments
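For illustration, a minimal sketch of constructing arguments with explicit and implicit names; the variable names and the pointer type used here (ptr(const_float_)) are chosen only for the example:

    from peachpy import Argument, size_t, ptr, const_float_

    # Explicit name:
    length = Argument(size_t, name="length")

    # Implicit name: PeachPy parses the name "data" from this
    # assignment in the caller code.
    data = Argument(ptr(const_float_))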
peachpy.x86_64.function.ABIFunction(function, abi)
Bases: object
ABI-specific x86-64 assembly function.
A function consists of a C signature, an ABI, and a list of instructions without virtual registers.
format(assembly_format='peachpy', line_separator='\n', line_number=1)
Formats the assembly listing of the function according to the specified parameters.
format_code(assembly_format='peachpy', line_separator='\n', indent=True, line_number=1)
Returns the code of assembly instructions comprising the function.
peachpy.x86_64.function.Argument(argument, abi)
Bases: peachpy.function.Argument
Extends the generic Argument object with x86-64-specific attributes required for stack frame construction.
register (peachpy.x86_64.registers.Register) – the register in which the argument is passed to the function, or None if the argument is passed on the stack.
stack_offset (int) – offset from the end of the return address on the stack to the location of the argument on the stack, or None if the argument is passed in a register and has no stack location. Note that in the Microsoft x64 ABI the first four arguments are passed in registers but have stack space reserved for their storage; for these arguments both register and stack_offset are non-null.
address (peachpy.x86_64.operand.MemoryAddress) – address of the argument on the stack, relative to rsp or rbp. The value of this attribute is None until after register allocation. In Golang ABIs this attribute is never initialized, because Golang loads arguments from the stack through its own FP pseudo-register, which is not representable in PeachPy (LOAD.ARGUMENT pseudo-instructions use stack_offset instead when formatted as Golang assembly).
peachpy.x86_64.function.EncodedFunction(function)
Bases: object
ABI-specific x86-64 assembly function.
A function consists of a C signature, an ABI, and a list of instructions without virtual registers.
format(assembly_format='peachpy', line_separator='\n')
Formats the assembly listing of the function according to the specified parameters.
format_code(assembly_format='peachpy', line_separator='\n', indent=True)
Returns the code of assembly instructions comprising the function.
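The following sketch shows how an ABIFunction and an EncodedFunction are typically obtained from a Function, following the workflow in the PeachPy README; the function name Identity is made up for illustration:

    from peachpy import Argument, int64_t
    from peachpy.x86_64 import *

    x = Argument(int64_t, name="x")

    with Function("Identity", (x,), int64_t) as asm_identity:
        r = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r, x)
        RETURN(r)

    # Bind virtual registers to physical registers for the host ABI,
    # then encode the result to machine code.
    abi_function = asm_identity.finalize(abi.detect())   # ABIFunction
    encoded_function = abi_function.encode()              # EncodedFunction
    print(encoded_function.format())                      # assembly listing
    identity = encoded_function.load()                    # callable from Python
    print(identity(42))                                   # 42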
peachpy.x86_64.function.Function(name, arguments, result_type=None, package=None, target=None, debug_level=None)
Bases: object
Generalized x86-64 assembly function.
A function consists of a C signature and a list of instructions.
At this level the function is supposed to be compatible with multiple ABIs. In particular, instructions may use virtual registers, and the instruction stream may contain pseudo-instructions, such as LOAD.ARGUMENT or RETURN.
name (str) – name of the function without mangling (as in the C language).
arguments (tuple) – a tuple of peachpy.Argument objects.
result_type (Type) – the return type of the function. None if the function returns no value (a void function).
package (str) – the name of the Go package containing this function.
target (Microarchitecture) – the target microarchitecture for this function.
debug_level (int) – the verbosity level for debug information collected for instructions. 0 means no debug information; 1 and above enables information about the lines of Python code that originated an instruction. Collecting debug information increases processing time several times.
entry (Label) – a label that marks the entry point of the function. A user can place the entry point anywhere in the function by defining this label with the LABEL pseudo-instruction. If the user does not define this label, it is placed automatically before the first instruction of the function.
_indent_level (int) – the level of indentation for instructions in assembly listings. The indentation level is changed by Loop statements.
_instructions (list) – the list of Instruction objects that comprise the function code.
_label_names (set) – a set of string names of LABEL quasi-instructions in the function. The set is populated as instructions are added and is intended to track duplicate labels.
_named_constants (dict) – a dictionary that maps names of literal constants to Constant objects. As instructions are added, the dictionary is used to track constants with the same name but different content.
attach()
Makes the function and its associated instruction stream active.
While the instruction stream is active, generated instructions are added to this function.
While the function is active, generated instructions are checked for compatibility with the function target.
c_signature
C signature (including parameter names) for the function.
detach()
Makes the function and its associated instruction stream no longer active.
The function and its instruction stream must be active before calling the method.
format(line_separator='\n')
Formats the assembly listing of the function according to the specified parameters.
format_instructions(line_separator='\n')
Formats the instruction listing, including data on input, output, available, and live registers.
go_signature
Go signature (including parameter names) for the function.
None if an argument type or the return type of the function is incompatible with Go.
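A sketch of driving the instruction stream with explicit attach()/detach() calls (the with-statement form used in the README appears to do this implicitly on entry and exit); the function name Square is hypothetical, and the exact text returned by c_signature is not shown here:

    from peachpy import Argument, int32_t
    from peachpy.x86_64 import *

    x = Argument(int32_t, name="x")

    f = Function("Square", (x,), int32_t)
    f.attach()                      # make f and its instruction stream active
    r = GeneralPurposeRegister32()
    LOAD.ARGUMENT(r, x)
    IMUL(r, r)                      # r = x * x
    RETURN(r)
    f.detach()                      # instructions are no longer added to f

    print(f.c_signature)            # C declaration string for the function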
peachpy.x86_64.registers.GeneralPurposeRegister(mask, virtual_id=None, physical_id=None)
Bases: peachpy.x86_64.registers.Register
A base class for general-purpose registers
peachpy.x86_64.registers.GeneralPurposeRegister16(physical_id=None, virtual_id=None)
Bases: peachpy.x86_64.registers.GeneralPurposeRegister
16-bit general-purpose register
peachpy.x86_64.registers.GeneralPurposeRegister32(physical_id=None, virtual_id=None)
Bases: peachpy.x86_64.registers.GeneralPurposeRegister
32-bit general-purpose register
peachpy.x86_64.registers.GeneralPurposeRegister64(physical_id=None, virtual_id=None)
Bases: peachpy.x86_64.registers.GeneralPurposeRegister
64-bit general-purpose register
peachpy.x86_64.registers.GeneralPurposeRegister8(physical_id=None, virtual_id=None, is_high=False)
Bases: peachpy.x86_64.registers.GeneralPurposeRegister
8-bit general-purpose register
peachpy.x86_64.registers.KRegister(physical_id=None, virtual_id=None)
Bases: peachpy.x86_64.registers.Register
AVX-512 mask register
kcode
Returns the register encoding
zcode
Returns encoding of the merge/zero flags
peachpy.x86_64.registers.MMXRegister(physical_id=None, virtual_id=None)
Bases: peachpy.x86_64.registers.Register
64-bit MMX technology register
peachpy.x86_64.registers.Register(mask, virtual_id=None, physical_id=None)
Bases: object
A base class for all encodable registers (rip is not encodable)
ecode
Returns bit 4 of the register encoding
ehcode
Returns bits 3-4 of the register encoding
hcode
Returns bit 3 of the register encoding
hlcode
Returns bits 0-3 of the register encoding
is_virtual
Indicates whether the register is virtual, i.e. not bound to a physical register
lcode
Returns bits 0-2 of the register encoding
peachpy.x86_64.registers.XMMRegister(physical_id=None, virtual_id=None)
Bases: peachpy.x86_64.registers.Register
128-bit xmm (SSE) register
code
Returns 5-bit encoding of the register
kcode
Returns encoding of mask register
zcode
Returns encoding of zeroing/merging flag of mask register
peachpy.x86_64.registers.YMMRegister(physical_id=None, virtual_id=None)
Bases: peachpy.x86_64.registers.Register
256-bit ymm (AVX) register
code
Returns 5-bit encoding of the register
kcode
Returns encoding of mask register
zcode
Returns encoding of zeroing/merging flag of mask register
peachpy.x86_64.registers.ZMMRegister(physical_id=None, virtual_id=None)
Bases: peachpy.x86_64.registers.Register
512-bit zmm (AVX-512) register
code
Returns 5-bit encoding of the register
kcode
Returns encoding of mask register
zcode
Returns encoding of zeroing/merging flag of mask register
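To illustrate the is_virtual distinction documented above, a small sketch (the function name CopyX is made up; rax is the predefined physical register object):

    from peachpy import Argument, int64_t
    from peachpy.x86_64 import *
    from peachpy.x86_64.registers import rax

    x = Argument(int64_t, name="x")

    with Function("CopyX", (x,), int64_t) as f:
        # A freshly created register is virtual: it has no physical id
        # until register allocation runs during finalize().
        tmp = GeneralPurposeRegister64()
        print(tmp.is_virtual)        # True
        LOAD.ARGUMENT(tmp, x)
        RETURN(tmp)

    # Predefined physical registers such as rax are not virtual.
    print(rax.is_virtual)            # False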
peachpy.x86_64.generic.
ADC
(*args, **kwargs)¶Add with Carry
Supported forms:
ADC(r8, r8/m8)
ADC(r16, r16/m16)
ADC(r32, r32/m32)
ADC(r64, r64/m64)
ADC(r8/m8, imm8)
ADC(r8/m8, r8)
ADC(r16/m16, imm16)
ADC(r16/m16, r16)
ADC(r32/m32, imm32)
ADC(r32/m32, r32)
ADC(r64/m64, imm32)
ADC(r64/m64, r64)
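The ADD/ADC carry chain is what makes multi-word arithmetic possible; below is a hedged sketch of a 128-bit addition that returns only the low 64 bits (the name AddLo128 and the argument names are illustrative):

    from peachpy import Argument, uint64_t
    from peachpy.x86_64 import *

    a_lo = Argument(uint64_t, name="a_lo")
    a_hi = Argument(uint64_t, name="a_hi")
    b_lo = Argument(uint64_t, name="b_lo")
    b_hi = Argument(uint64_t, name="b_hi")

    with Function("AddLo128", (a_lo, a_hi, b_lo, b_hi), uint64_t) as add128:
        lo = GeneralPurposeRegister64()
        hi = GeneralPurposeRegister64()
        rb_lo = GeneralPurposeRegister64()
        rb_hi = GeneralPurposeRegister64()
        LOAD.ARGUMENT(lo, a_lo)
        LOAD.ARGUMENT(hi, a_hi)
        LOAD.ARGUMENT(rb_lo, b_lo)
        LOAD.ARGUMENT(rb_hi, b_hi)

        ADD(lo, rb_lo)   # low halves; sets CF when the low addition wraps
        ADC(hi, rb_hi)   # high halves plus the carry from ADD
        RETURN(lo)       # the high half in hi is computed but not returned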
peachpy.x86_64.generic.
ADCX
(*args, **kwargs)¶Unsigned Integer Addition of Two Operands with Carry Flag
Supported forms:
ADCX(r32, r32/m32) [ADX]
ADCX(r64, r64/m64) [ADX]
peachpy.x86_64.generic.
ADD
(*args, **kwargs)¶Add
Supported forms:
ADD(r8, r8/m8)
ADD(r16, r16/m16)
ADD(r32, r32/m32)
ADD(r64, r64/m64)
ADD(r8/m8, imm8)
ADD(r8/m8, r8)
ADD(r16/m16, imm16)
ADD(r16/m16, r16)
ADD(r32/m32, imm32)
ADD(r32/m32, r32)
ADD(r64/m64, imm32)
ADD(r64/m64, r64)
peachpy.x86_64.generic.
ADOX
(*args, **kwargs)¶Unsigned Integer Addition of Two Operands with Overflow Flag
Supported forms:
ADOX(r32, r32/m32) [ADX]
ADOX(r64, r64/m64) [ADX]
peachpy.x86_64.generic.
AND
(*args, **kwargs)¶Logical AND
Supported forms:
AND(r8, r8/m8)
AND(r16, r16/m16)
AND(r32, r32/m32)
AND(r64, r64/m64)
AND(r8/m8, imm8)
AND(r8/m8, r8)
AND(r16/m16, imm16)
AND(r16/m16, r16)
AND(r32/m32, imm32)
AND(r32/m32, r32)
AND(r64/m64, imm32)
AND(r64/m64, r64)
peachpy.x86_64.generic.
ANDN
(*args, **kwargs)¶Logical AND NOT
Supported forms:
ANDN(r32, r32, r32/m32) [BMI]
ANDN(r64, r64, r64/m64) [BMI]
peachpy.x86_64.generic.
BEXTR
(*args, **kwargs)¶Bit Field Extract
Supported forms:
BEXTR(r32, r32/m32, imm32) [TBM]
BEXTR(r64, r64/m64, imm32) [TBM]
BEXTR(r32, r32/m32, r32) [BMI]
BEXTR(r64, r64/m64, r64) [BMI]
peachpy.x86_64.generic.
BLCFILL
(*args, **kwargs)¶Fill From Lowest Clear Bit
Supported forms:
BLCFILL(r32, r32/m32) [TBM]
BLCFILL(r64, r64/m64) [TBM]
peachpy.x86_64.generic.
BLCI
(*args, **kwargs)¶Isolate Lowest Clear Bit
Supported forms:
BLCI(r32, r32/m32) [TBM]
BLCI(r64, r64/m64) [TBM]
peachpy.x86_64.generic.
BLCIC
(*args, **kwargs)¶Isolate Lowest Set Bit and Complement
Supported forms:
BLCIC(r32, r32/m32) [TBM]
BLCIC(r64, r64/m64) [TBM]
peachpy.x86_64.generic.
BLCMSK
(*args, **kwargs)¶Mask From Lowest Clear Bit
Supported forms:
BLCMSK(r32, r32/m32) [TBM]
BLCMSK(r64, r64/m64) [TBM]
peachpy.x86_64.generic.
BLCS
(*args, **kwargs)¶Set Lowest Clear Bit
Supported forms:
BLCS(r32, r32/m32) [TBM]
BLCS(r64, r64/m64) [TBM]
peachpy.x86_64.generic.
BLSFILL
(*args, **kwargs)¶Fill From Lowest Set Bit
Supported forms:
BLSFILL(r32, r32/m32) [TBM]
BLSFILL(r64, r64/m64) [TBM]
peachpy.x86_64.generic.
BLSI
(*args, **kwargs)¶Isolate Lowest Set Bit
Supported forms:
BLSI(r32, r32/m32) [BMI]
BLSI(r64, r64/m64) [BMI]
peachpy.x86_64.generic.
BLSIC
(*args, **kwargs)¶Isolate Lowest Set Bit and Complement
Supported forms:
BLSIC(r32, r32/m32) [TBM]
BLSIC(r64, r64/m64) [TBM]
peachpy.x86_64.generic.
BLSMSK
(*args, **kwargs)¶Mask From Lowest Set Bit
Supported forms:
BLSMSK(r32, r32/m32) [BMI]
BLSMSK(r64, r64/m64) [BMI]
peachpy.x86_64.generic.
BLSR
(*args, **kwargs)¶Reset Lowest Set Bit
Supported forms:
BLSR(r32, r32/m32) [BMI]
BLSR(r64, r64/m64) [BMI]
peachpy.x86_64.generic.
BSF
(*args, **kwargs)¶Bit Scan Forward
Supported forms:
BSF(r16, r16/m16)
BSF(r32, r32/m32)
BSF(r64, r64/m64)
peachpy.x86_64.generic.
BSR
(*args, **kwargs)¶Bit Scan Reverse
Supported forms:
BSR(r16, r16/m16)
BSR(r32, r32/m32)
BSR(r64, r64/m64)
peachpy.x86_64.generic.
BSWAP
(*args, **kwargs)¶Byte Swap
Supported forms:
BSWAP(r32)
BSWAP(r64)
peachpy.x86_64.generic.
BT
(*args, **kwargs)¶Bit Test
Supported forms:
BT(r16/m16, imm8)
BT(r16/m16, r16)
BT(r32/m32, imm8)
BT(r32/m32, r32)
BT(r64/m64, imm8)
BT(r64/m64, r64)
peachpy.x86_64.generic.
BTC
(*args, **kwargs)¶Bit Test and Complement
Supported forms:
BTC(r16/m16, imm8)
BTC(r16/m16, r16)
BTC(r32/m32, imm8)
BTC(r32/m32, r32)
BTC(r64/m64, imm8)
BTC(r64/m64, r64)
peachpy.x86_64.generic.
BTR
(*args, **kwargs)¶Bit Test and Reset
Supported forms:
BTR(r16/m16, imm8)
BTR(r16/m16, r16)
BTR(r32/m32, imm8)
BTR(r32/m32, r32)
BTR(r64/m64, imm8)
BTR(r64/m64, r64)
peachpy.x86_64.generic.
BTS
(*args, **kwargs)¶Bit Test and Set
Supported forms:
BTS(r16/m16, imm8)
BTS(r16/m16, r16)
BTS(r32/m32, imm8)
BTS(r32/m32, r32)
BTS(r64/m64, imm8)
BTS(r64/m64, r64)
peachpy.x86_64.generic.
BZHI
(*args, **kwargs)¶Zero High Bits Starting with Specified Bit Position
Supported forms:
BZHI(r32, r32/m32, r32) [BMI2]
BZHI(r64, r64/m64, r64) [BMI2]
peachpy.x86_64.generic.
CALL
(*args, **kwargs)¶Call Procedure
Supported forms:
CALL(rel32)
CALL(r64/m64)
peachpy.x86_64.generic.
CBW
(*args, **kwargs)¶Convert Byte to Word
Supported forms:
CBW()
peachpy.x86_64.generic.
CDQ
(*args, **kwargs)¶Convert Doubleword to Quadword
Supported forms:
CDQ()
peachpy.x86_64.generic.
CDQE
(*args, **kwargs)¶Convert Doubleword to Quadword
Supported forms:
CDQE()
peachpy.x86_64.generic.
CLC
(*args, **kwargs)¶Clear Carry Flag
Supported forms:
CLC()
peachpy.x86_64.generic.
CLD
(*args, **kwargs)¶Clear Direction Flag
Supported forms:
CLD()
peachpy.x86_64.generic.
CLFLUSH
(*args, **kwargs)¶Flush Cache Line
Supported forms:
CLFLUSH(m8) [CLFLUSH]
peachpy.x86_64.generic.
CLFLUSHOPT
(*args, **kwargs)¶Flush Cache Line Optimized
Supported forms:
CLFLUSHOPT(m8) [CLFLUSHOPT]
peachpy.x86_64.generic.
CLWB
(*args, **kwargs)¶Cache Line Write Back
Supported forms:
CLWB(m8) [CLWB]
peachpy.x86_64.generic.
CLZERO
(*args, **kwargs)¶Zero-out 64-byte Cache Line
Supported forms:
CLZERO() [CLZERO]
peachpy.x86_64.generic.
CMC
(*args, **kwargs)¶Complement Carry Flag
Supported forms:
CMC()
peachpy.x86_64.generic.
CMOVA
(*args, **kwargs)¶Move if above (CF == 0 and ZF == 0)
Supported forms:
CMOVA(r16, r16/m16) [CMOV]
CMOVA(r32, r32/m32) [CMOV]
CMOVA(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVAE
(*args, **kwargs)¶Move if above or equal (CF == 0)
Supported forms:
CMOVAE(r16, r16/m16) [CMOV]
CMOVAE(r32, r32/m32) [CMOV]
CMOVAE(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVB
(*args, **kwargs)¶Move if below (CF == 1)
Supported forms:
CMOVB(r16, r16/m16) [CMOV]
CMOVB(r32, r32/m32) [CMOV]
CMOVB(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVBE
(*args, **kwargs)¶Move if below or equal (CF == 1 or ZF == 1)
Supported forms:
CMOVBE(r16, r16/m16) [CMOV]
CMOVBE(r32, r32/m32) [CMOV]
CMOVBE(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVC
(*args, **kwargs)¶Move if carry (CF == 1)
Supported forms:
CMOVC(r16, r16/m16) [CMOV]
CMOVC(r32, r32/m32) [CMOV]
CMOVC(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVE
(*args, **kwargs)¶Move if equal (ZF == 1)
Supported forms:
CMOVE(r16, r16/m16) [CMOV]
CMOVE(r32, r32/m32) [CMOV]
CMOVE(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVG
(*args, **kwargs)¶Move if greater (ZF == 0 and SF == OF)
Supported forms:
CMOVG(r16, r16/m16) [CMOV]
CMOVG(r32, r32/m32) [CMOV]
CMOVG(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVGE
(*args, **kwargs)¶Move if greater or equal (SF == OF)
Supported forms:
CMOVGE(r16, r16/m16) [CMOV]
CMOVGE(r32, r32/m32) [CMOV]
CMOVGE(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVL
(*args, **kwargs)¶Move if less (SF != OF)
Supported forms:
CMOVL(r16, r16/m16) [CMOV]
CMOVL(r32, r32/m32) [CMOV]
CMOVL(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVLE
(*args, **kwargs)¶Move if less or equal (ZF == 1 or SF != OF)
Supported forms:
CMOVLE(r16, r16/m16) [CMOV]
CMOVLE(r32, r32/m32) [CMOV]
CMOVLE(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNA
(*args, **kwargs)¶Move if not above (CF == 1 or ZF == 1)
Supported forms:
CMOVNA(r16, r16/m16) [CMOV]
CMOVNA(r32, r32/m32) [CMOV]
CMOVNA(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNAE
(*args, **kwargs)¶Move if not above or equal (CF == 1)
Supported forms:
CMOVNAE(r16, r16/m16) [CMOV]
CMOVNAE(r32, r32/m32) [CMOV]
CMOVNAE(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNB
(*args, **kwargs)¶Move if not below (CF == 0)
Supported forms:
CMOVNB(r16, r16/m16) [CMOV]
CMOVNB(r32, r32/m32) [CMOV]
CMOVNB(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNBE
(*args, **kwargs)¶Move if not below or equal (CF == 0 and ZF == 0)
Supported forms:
CMOVNBE(r16, r16/m16) [CMOV]
CMOVNBE(r32, r32/m32) [CMOV]
CMOVNBE(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNC
(*args, **kwargs)¶Move if not carry (CF == 0)
Supported forms:
CMOVNC(r16, r16/m16) [CMOV]
CMOVNC(r32, r32/m32) [CMOV]
CMOVNC(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNE
(*args, **kwargs)¶Move if not equal (ZF == 0)
Supported forms:
CMOVNE(r16, r16/m16) [CMOV]
CMOVNE(r32, r32/m32) [CMOV]
CMOVNE(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNG
(*args, **kwargs)¶Move if not greater (ZF == 1 or SF != OF)
Supported forms:
CMOVNG(r16, r16/m16) [CMOV]
CMOVNG(r32, r32/m32) [CMOV]
CMOVNG(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNGE
(*args, **kwargs)¶Move if not greater or equal (SF != OF)
Supported forms:
CMOVNGE(r16, r16/m16) [CMOV]
CMOVNGE(r32, r32/m32) [CMOV]
CMOVNGE(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNL
(*args, **kwargs)¶Move if not less (SF == OF)
Supported forms:
CMOVNL(r16, r16/m16) [CMOV]
CMOVNL(r32, r32/m32) [CMOV]
CMOVNL(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNLE
(*args, **kwargs)¶Move if not less or equal (ZF == 0 and SF == OF)
Supported forms:
CMOVNLE(r16, r16/m16) [CMOV]
CMOVNLE(r32, r32/m32) [CMOV]
CMOVNLE(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNO
(*args, **kwargs)¶Move if not overflow (OF == 0)
Supported forms:
CMOVNO(r16, r16/m16) [CMOV]
CMOVNO(r32, r32/m32) [CMOV]
CMOVNO(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNP
(*args, **kwargs)¶Move if not parity (PF == 0)
Supported forms:
CMOVNP(r16, r16/m16) [CMOV]
CMOVNP(r32, r32/m32) [CMOV]
CMOVNP(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNS
(*args, **kwargs)¶Move if not sign (SF == 0)
Supported forms:
CMOVNS(r16, r16/m16) [CMOV]
CMOVNS(r32, r32/m32) [CMOV]
CMOVNS(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVNZ
(*args, **kwargs)¶Move if not zero (ZF == 0)
Supported forms:
CMOVNZ(r16, r16/m16) [CMOV]
CMOVNZ(r32, r32/m32) [CMOV]
CMOVNZ(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVO
(*args, **kwargs)¶Move if overflow (OF == 1)
Supported forms:
CMOVO(r16, r16/m16) [CMOV]
CMOVO(r32, r32/m32) [CMOV]
CMOVO(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVP
(*args, **kwargs)¶Move if parity (PF == 1)
Supported forms:
CMOVP(r16, r16/m16) [CMOV]
CMOVP(r32, r32/m32) [CMOV]
CMOVP(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVPE
(*args, **kwargs)¶Move if parity even (PF == 1)
Supported forms:
CMOVPE(r16, r16/m16) [CMOV]
CMOVPE(r32, r32/m32) [CMOV]
CMOVPE(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVPO
(*args, **kwargs)¶Move if parity odd (PF == 0)
Supported forms:
CMOVPO(r16, r16/m16) [CMOV]
CMOVPO(r32, r32/m32) [CMOV]
CMOVPO(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVS
(*args, **kwargs)¶Move if sign (SF == 1)
Supported forms:
CMOVS(r16, r16/m16) [CMOV]
CMOVS(r32, r32/m32) [CMOV]
CMOVS(r64, r64/m64) [CMOV]
peachpy.x86_64.generic.
CMOVZ
(*args, **kwargs)¶Move if zero (ZF == 1)
Supported forms:
CMOVZ(r16, r16/m16) [CMOV]
CMOVZ(r32, r32/m32) [CMOV]
CMOVZ(r64, r64/m64) [CMOV]
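The CMOVcc forms above are the usual building block for branchless selects; a sketch of a branchless signed maximum (the name Maximum is hypothetical):

    from peachpy import Argument, int32_t
    from peachpy.x86_64 import *

    x = Argument(int32_t, name="x")
    y = Argument(int32_t, name="y")

    with Function("Maximum", (x, y), int32_t) as asm_max:
        rx = GeneralPurposeRegister32()
        ry = GeneralPurposeRegister32()
        LOAD.ARGUMENT(rx, x)
        LOAD.ARGUMENT(ry, y)

        CMP(rx, ry)
        CMOVL(rx, ry)   # if rx < ry (SF != OF), take ry instead
        RETURN(rx)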
peachpy.x86_64.generic.
CMP
(*args, **kwargs)¶Compare Two Operands
Supported forms:
CMP(r8, r8/m8)
CMP(r16, r16/m16)
CMP(r32, r32/m32)
CMP(r64, r64/m64)
CMP(r8/m8, imm8)
CMP(r8/m8, r8)
CMP(r16/m16, imm16)
CMP(r16/m16, r16)
CMP(r32/m32, imm32)
CMP(r32/m32, r32)
CMP(r64/m64, imm32)
CMP(r64/m64, r64)
peachpy.x86_64.generic.
CMPXCHG
(*args, **kwargs)¶Compare and Exchange
Supported forms:
CMPXCHG(r8/m8, r8)
CMPXCHG(r16/m16, r16)
CMPXCHG(r32/m32, r32)
CMPXCHG(r64/m64, r64)
peachpy.x86_64.generic.
CMPXCHG16B
(*args, **kwargs)¶Compare and Exchange 16 Bytes
Supported forms:
CMPXCHG16B(m128)
peachpy.x86_64.generic.
CMPXCHG8B
(*args, **kwargs)¶Compare and Exchange 8 Bytes
Supported forms:
CMPXCHG8B(m64)
peachpy.x86_64.generic.
CPUID
(*args, **kwargs)¶CPU Identification
Supported forms:
CPUID() [CPUID]
peachpy.x86_64.generic.
CQO
(*args, **kwargs)¶Convert Quadword to Octaword
Supported forms:
CQO()
peachpy.x86_64.generic.
CRC32
(*args, **kwargs)¶Accumulate CRC32 Value
Supported forms:
CRC32(r32, r8/m8) [SSE4.2]
CRC32(r32, r16/m16) [SSE4.2]
CRC32(r32, r32/m32) [SSE4.2]
CRC32(r64, r8/m8) [SSE4.2]
CRC32(r64, r64/m64) [SSE4.2]
peachpy.x86_64.generic.
CWD
(*args, **kwargs)¶Convert Word to Doubleword
Supported forms:
CWD()
peachpy.x86_64.generic.
CWDE
(*args, **kwargs)¶Convert Word to Doubleword
Supported forms:
CWDE()
peachpy.x86_64.generic.
DEC
(*args, **kwargs)¶Decrement by 1
Supported forms:
DEC(r8/m8)
DEC(r16/m16)
DEC(r32/m32)
DEC(r64/m64)
peachpy.x86_64.generic.
DIV
(*args, **kwargs)¶Unsigned Divide
Supported forms:
DIV(r8/m8)
DIV(r16/m16)
DIV(r32/m32)
DIV(r64/m64)
peachpy.x86_64.generic.
IDIV
(*args, **kwargs)¶Signed Divide
Supported forms:
IDIV(r8/m8)
IDIV(r16/m16)
IDIV(r32/m32)
IDIV(r64/m64)
peachpy.x86_64.generic.
IMUL
(*args, **kwargs)¶Signed Multiply
Supported forms:
IMUL(r8/m8)
IMUL(r16/m16)
IMUL(r32/m32)
IMUL(r64/m64)
IMUL(r16, r16/m16)
IMUL(r32, r32/m32)
IMUL(r64, r64/m64)
IMUL(r16, r16/m16, imm16)
IMUL(r32, r32/m32, imm32)
IMUL(r64, r64/m64, imm32)
peachpy.x86_64.generic.
INC
(*args, **kwargs)¶Increment by 1
Supported forms:
INC(r8/m8)
INC(r16/m16)
INC(r32/m32)
INC(r64/m64)
peachpy.x86_64.generic.
INT
(*args, **kwargs)¶Call to Interrupt Procedure
Supported forms:
INT(imm8)
peachpy.x86_64.generic.
JA
(*args, **kwargs)¶Jump if above (CF == 0 and ZF == 0)
Supported forms:
JA(rel32)
peachpy.x86_64.generic.
JAE
(*args, **kwargs)¶Jump if above or equal (CF == 0)
Supported forms:
JAE(rel32)
peachpy.x86_64.generic.
JB
(*args, **kwargs)¶Jump if below (CF == 1)
Supported forms:
JB(rel32)
peachpy.x86_64.generic.
JBE
(*args, **kwargs)¶Jump if below or equal (CF == 1 or ZF == 1)
Supported forms:
JBE(rel32)
peachpy.x86_64.generic.
JC
(*args, **kwargs)¶Jump if carry (CF == 1)
Supported forms:
JC(rel32)
peachpy.x86_64.generic.
JE
(*args, **kwargs)¶Jump if equal (ZF == 1)
Supported forms:
JE(rel32)
peachpy.x86_64.generic.
JECXZ
(*args, **kwargs)¶Jump if ECX register is 0
Supported forms:
JECXZ(rel8)
peachpy.x86_64.generic.
JG
(*args, **kwargs)¶Jump if greater (ZF == 0 and SF == OF)
Supported forms:
JG(rel32)
peachpy.x86_64.generic.
JGE
(*args, **kwargs)¶Jump if greater or equal (SF == OF)
Supported forms:
JGE(rel32)
peachpy.x86_64.generic.
JL
(*args, **kwargs)¶Jump if less (SF != OF)
Supported forms:
JL(rel32)
peachpy.x86_64.generic.
JLE
(*args, **kwargs)¶Jump if less or equal (ZF == 1 or SF != OF)
Supported forms:
JLE(rel32)
peachpy.x86_64.generic.
JMP
(*args, **kwargs)¶Jump Unconditionally
Supported forms:
JMP(rel32)
JMP(r64/m64)
peachpy.x86_64.generic.
JNA
(*args, **kwargs)¶Jump if not above (CF == 1 or ZF == 1)
Supported forms:
JNA(rel32)
peachpy.x86_64.generic.
JNAE
(*args, **kwargs)¶Jump if not above or equal (CF == 1)
Supported forms:
JNAE(rel32)
peachpy.x86_64.generic.
JNB
(*args, **kwargs)¶Jump if not below (CF == 0)
Supported forms:
JNB(rel32)
peachpy.x86_64.generic.
JNBE
(*args, **kwargs)¶Jump if not below or equal (CF == 0 and ZF == 0)
Supported forms:
JNBE(rel32)
peachpy.x86_64.generic.
JNC
(*args, **kwargs)¶Jump if not carry (CF == 0)
Supported forms:
JNC(rel32)
peachpy.x86_64.generic.
JNE
(*args, **kwargs)¶Jump if not equal (ZF == 0)
Supported forms:
JNE(rel32)
peachpy.x86_64.generic.
JNG
(*args, **kwargs)¶Jump if not greater (ZF == 1 or SF != OF)
Supported forms:
JNG(rel32)
peachpy.x86_64.generic.
JNGE
(*args, **kwargs)¶Jump if not greater or equal (SF != OF)
Supported forms:
JNGE(rel32)
peachpy.x86_64.generic.
JNL
(*args, **kwargs)¶Jump if not less (SF == OF)
Supported forms:
JNL(rel32)
peachpy.x86_64.generic.
JNLE
(*args, **kwargs)¶Jump if not less or equal (ZF == 0 and SF == OF)
Supported forms:
JNLE(rel32)
peachpy.x86_64.generic.
JNO
(*args, **kwargs)¶Jump if not overflow (OF == 0)
Supported forms:
JNO(rel32)
peachpy.x86_64.generic.
JNP
(*args, **kwargs)¶Jump if not parity (PF == 0)
Supported forms:
JNP(rel32)
peachpy.x86_64.generic.
JNS
(*args, **kwargs)¶Jump if not sign (SF == 0)
Supported forms:
JNS(rel32)
peachpy.x86_64.generic.
JNZ
(*args, **kwargs)¶Jump if not zero (ZF == 0)
Supported forms:
JNZ(rel32)
peachpy.x86_64.generic.
JO
(*args, **kwargs)¶Jump if overflow (OF == 1)
Supported forms:
JO(rel32)
peachpy.x86_64.generic.
JP
(*args, **kwargs)¶Jump if parity (PF == 1)
Supported forms:
JP(rel32)
peachpy.x86_64.generic.
JPE
(*args, **kwargs)¶Jump if parity even (PF == 1)
Supported forms:
JPE(rel32)
peachpy.x86_64.generic.
JPO
(*args, **kwargs)¶Jump if parity odd (PF == 0)
Supported forms:
JPO(rel32)
peachpy.x86_64.generic.
JRCXZ
(*args, **kwargs)¶Jump if RCX register is 0
Supported forms:
JRCXZ(rel8)
peachpy.x86_64.generic.
JS
(*args, **kwargs)¶Jump if sign (SF == 1)
Supported forms:
JS(rel32)
peachpy.x86_64.generic.
JZ
(*args, **kwargs)¶Jump if zero (ZF == 1)
Supported forms:
JZ(rel32)
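The conditional jumps above accept a Label as their rel32 target; a sketch of a counted loop using LABEL and JNZ (the names SumTo, loop_begin, and done are made up):

    from peachpy import Argument, uint32_t
    from peachpy.x86_64 import *

    n = Argument(uint32_t, name="n")

    # Sums the integers 1..n with an explicit label and conditional jumps.
    with Function("SumTo", (n,), uint32_t) as asm_sum:
        counter = GeneralPurposeRegister32()
        acc = GeneralPurposeRegister32()
        LOAD.ARGUMENT(counter, n)
        XOR(acc, acc)

        loop_begin = Label("loop_begin")
        done = Label("done")

        TEST(counter, counter)
        JZ(done)                 # skip the loop entirely when n == 0
        LABEL(loop_begin)
        ADD(acc, counter)
        SUB(counter, 1)
        JNZ(loop_begin)
        LABEL(done)
        RETURN(acc)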
peachpy.x86_64.generic.
LEA
(*args, **kwargs)¶Load Effective Address
Supported forms:
LEA(r16, m)
LEA(r32, m)
LEA(r64, m)
peachpy.x86_64.generic.
LFENCE
(*args, **kwargs)¶Load Fence
Supported forms:
LFENCE() [SSE2]
peachpy.x86_64.generic.
LZCNT
(*args, **kwargs)¶Count the Number of Leading Zero Bits
Supported forms:
LZCNT(r16, r16/m16) [LZCNT]
LZCNT(r32, r32/m32) [LZCNT]
LZCNT(r64, r64/m64) [LZCNT]
peachpy.x86_64.generic.
MFENCE
(*args, **kwargs)¶Memory Fence
Supported forms:
MFENCE() [SSE2]
peachpy.x86_64.generic.
MOV
(*args, **kwargs)¶Move
Supported forms:
MOV(r8, r8/m8)
MOV(r16, r16/m16)
MOV(r32, r32/m32)
MOV(r64, imm64)
MOV(r64, r64/m64)
MOV(r8/m8, imm8)
MOV(r8/m8, r8)
MOV(r16/m16, imm16)
MOV(r16/m16, r16)
MOV(r32/m32, imm32)
MOV(r32/m32, r32)
MOV(m64, imm32)
MOV(r64/m64, r64)
peachpy.x86_64.generic.
MOVBE
(*args, **kwargs)¶Move Data After Swapping Bytes
Supported forms:
MOVBE(r16, m16) [MOVBE]
MOVBE(r32, m32) [MOVBE]
MOVBE(r64, m64) [MOVBE]
MOVBE(m16, r16) [MOVBE]
MOVBE(m32, r32) [MOVBE]
MOVBE(m64, r64) [MOVBE]
peachpy.x86_64.generic.
MOVNTI
(*args, **kwargs)¶Store Doubleword Using Non-Temporal Hint
Supported forms:
MOVNTI(m32, r32) [SSE2]
MOVNTI(m64, r64) [SSE2]
peachpy.x86_64.generic.
MOVSX
(*args, **kwargs)¶Move with Sign-Extension
Supported forms:
MOVSX(r16, r8/m8)
MOVSX(r32, r8/m8)
MOVSX(r32, r16/m16)
MOVSX(r64, r8/m8)
MOVSX(r64, r16/m16)
peachpy.x86_64.generic.
MOVSXD
(*args, **kwargs)¶Move Doubleword to Quadword with Sign-Extension
Supported forms:
MOVSXD(r64, r32/m32)
peachpy.x86_64.generic.
MOVZX
(*args, **kwargs)¶Move with Zero-Extend
Supported forms:
MOVZX(r16, r8/m8)
MOVZX(r32, r8/m8)
MOVZX(r32, r16/m16)
MOVZX(r64, r8/m8)
MOVZX(r64, r16/m16)
peachpy.x86_64.generic.
MUL
(*args, **kwargs)¶Unsigned Multiply
Supported forms:
MUL(r8/m8)
MUL(r16/m16)
MUL(r32/m32)
MUL(r64/m64)
peachpy.x86_64.generic.
MULX
(*args, **kwargs)¶Unsigned Multiply Without Affecting Flags
Supported forms:
MULX(r32, r32, r32/m32) [BMI2]
MULX(r64, r64, r64/m64) [BMI2]
peachpy.x86_64.generic.
NEG
(*args, **kwargs)¶Two’s Complement Negation
Supported forms:
NEG(r8/m8)
NEG(r16/m16)
NEG(r32/m32)
NEG(r64/m64)
peachpy.x86_64.generic.
NOP
(*args, **kwargs)¶No Operation
Supported forms:
NOP()
peachpy.x86_64.generic.
NOT
(*args, **kwargs)¶One’s Complement Negation
Supported forms:
NOT(r8/m8)
NOT(r16/m16)
NOT(r32/m32)
NOT(r64/m64)
peachpy.x86_64.generic.
OR
(*args, **kwargs)¶Logical Inclusive OR
Supported forms:
OR(r8, r8/m8)
OR(r16, r16/m16)
OR(r32, r32/m32)
OR(r64, r64/m64)
OR(r8/m8, imm8)
OR(r8/m8, r8)
OR(r16/m16, imm16)
OR(r16/m16, r16)
OR(r32/m32, imm32)
OR(r32/m32, r32)
OR(r64/m64, imm32)
OR(r64/m64, r64)
peachpy.x86_64.generic.
PAUSE
(*args, **kwargs)¶Spin Loop Hint
Supported forms:
PAUSE()
peachpy.x86_64.generic.
PDEP
(*args, **kwargs)¶Parallel Bits Deposit
Supported forms:
PDEP(r32, r32, r32/m32) [BMI2]
PDEP(r64, r64, r64/m64) [BMI2]
peachpy.x86_64.generic.
PEXT
(*args, **kwargs)¶Parallel Bits Extract
Supported forms:
PEXT(r32, r32, r32/m32) [BMI2]
PEXT(r64, r64, r64/m64) [BMI2]
peachpy.x86_64.generic.
POP
(*args, **kwargs)¶Pop a Value from the Stack
Supported forms:
POP(r16/m16)
POP(r64/m64)
peachpy.x86_64.generic.
POPCNT
(*args, **kwargs)¶Count of Number of Bits Set to 1
Supported forms:
POPCNT(r16, r16/m16) [POPCNT]
POPCNT(r32, r32/m32) [POPCNT]
POPCNT(r64, r64/m64) [POPCNT]
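A one-instruction sketch of POPCNT inside a PeachPy function (the name PopCount is hypothetical; it assumes the selected target supports the POPCNT extension):

    from peachpy import Argument, uint64_t
    from peachpy.x86_64 import *

    v = Argument(uint64_t, name="v")

    with Function("PopCount", (v,), uint64_t) as asm_popcount:
        r = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r, v)
        POPCNT(r, r)   # count the bits set in r
        RETURN(r)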
peachpy.x86_64.generic.
PREFETCH
(*args, **kwargs)¶Prefetch Data into Caches
Supported forms:
PREFETCH(m8) [PREFETCH]
peachpy.x86_64.generic.
PREFETCHNTA
(*args, **kwargs)¶Prefetch Data Into Caches using NTA Hint
Supported forms:
PREFETCHNTA(m8) [MMX+]
peachpy.x86_64.generic.
PREFETCHT0
(*args, **kwargs)¶Prefetch Data Into Caches using T0 Hint
Supported forms:
PREFETCHT0(m8) [MMX+]
peachpy.x86_64.generic.
PREFETCHT1
(*args, **kwargs)¶Prefetch Data Into Caches using T1 Hint
Supported forms:
PREFETCHT1(m8) [MMX+]
peachpy.x86_64.generic.
PREFETCHT2
(*args, **kwargs)¶Prefetch Data Into Caches using T2 Hint
Supported forms:
PREFETCHT2(m8) [MMX+]
peachpy.x86_64.generic.
PREFETCHW
(*args, **kwargs)¶Prefetch Data into Caches in Anticipation of a Write
Supported forms:
PREFETCHW(m8) [PREFETCHW]
peachpy.x86_64.generic.
PREFETCHWT1
(*args, **kwargs)¶Prefetch Vector Data Into Caches with Intent to Write and T1 Hint
Supported forms:
PREFETCHWT1(m8) [PREFETCHWT1]
peachpy.x86_64.generic.
PUSH
(*args, **kwargs)¶Push Value Onto the Stack
Supported forms:
PUSH(imm32)
PUSH(r16/m16)
PUSH(r64/m64)
peachpy.x86_64.generic.
RCL
(*args, **kwargs)¶Rotate Left through Carry Flag
Supported forms:
RCL(r8/m8, imm8)
RCL(r8/m8, cl)
RCL(r16/m16, imm8)
RCL(r16/m16, cl)
RCL(r32/m32, imm8)
RCL(r32/m32, cl)
RCL(r64/m64, imm8)
RCL(r64/m64, cl)
peachpy.x86_64.generic.
RCR
(*args, **kwargs)¶Rotate Right through Carry Flag
Supported forms:
RCR(r8/m8, imm8)
RCR(r8/m8, cl)
RCR(r16/m16, imm8)
RCR(r16/m16, cl)
RCR(r32/m32, imm8)
RCR(r32/m32, cl)
RCR(r64/m64, imm8)
RCR(r64/m64, cl)
peachpy.x86_64.generic.
RDTSC
(*args, **kwargs)¶Read Time-Stamp Counter
Supported forms:
RDTSC() [RDTSC]
peachpy.x86_64.generic.
RDTSCP
(*args, **kwargs)¶Read Time-Stamp Counter and Processor ID
Supported forms:
RDTSCP() [RDTSCP]
peachpy.x86_64.generic.
RET
(*args, **kwargs)¶Return from Procedure
Supported forms:
RET()
RET(imm16)
peachpy.x86_64.generic.
ROL
(*args, **kwargs)¶Rotate Left
Supported forms:
ROL(r8/m8, imm8)
ROL(r8/m8, cl)
ROL(r16/m16, imm8)
ROL(r16/m16, cl)
ROL(r32/m32, imm8)
ROL(r32/m32, cl)
ROL(r64/m64, imm8)
ROL(r64/m64, cl)
peachpy.x86_64.generic.
ROR
(*args, **kwargs)¶Rotate Right
Supported forms:
ROR(r8/m8, imm8)
ROR(r8/m8, cl)
ROR(r16/m16, imm8)
ROR(r16/m16, cl)
ROR(r32/m32, imm8)
ROR(r32/m32, cl)
ROR(r64/m64, imm8)
ROR(r64/m64, cl)
peachpy.x86_64.generic.
RORX
(*args, **kwargs)¶Rotate Right Logical Without Affecting Flags
Supported forms:
RORX(r32, r32/m32, imm8) [BMI2]
RORX(r64, r64/m64, imm8) [BMI2]
peachpy.x86_64.generic.
SAL
(*args, **kwargs)¶Arithmetic Shift Left
Supported forms:
SAL(r8/m8, imm8)
SAL(r8/m8, cl)
SAL(r16/m16, imm8)
SAL(r16/m16, cl)
SAL(r32/m32, imm8)
SAL(r32/m32, cl)
SAL(r64/m64, imm8)
SAL(r64/m64, cl)
peachpy.x86_64.generic.
SAR
(*args, **kwargs)¶Arithmetic Shift Right
Supported forms:
SAR(r8/m8, imm8)
SAR(r8/m8, cl)
SAR(r16/m16, imm8)
SAR(r16/m16, cl)
SAR(r32/m32, imm8)
SAR(r32/m32, cl)
SAR(r64/m64, imm8)
SAR(r64/m64, cl)
peachpy.x86_64.generic.
SARX
(*args, **kwargs)¶Arithmetic Shift Right Without Affecting Flags
Supported forms:
SARX(r32, r32/m32, r32) [BMI2]
SARX(r64, r64/m64, r64) [BMI2]
peachpy.x86_64.generic.
SBB
(*args, **kwargs)¶Subtract with Borrow
Supported forms:
SBB(r8, r8/m8)
SBB(r16, r16/m16)
SBB(r32, r32/m32)
SBB(r64, r64/m64)
SBB(r8/m8, imm8)
SBB(r8/m8, r8)
SBB(r16/m16, imm16)
SBB(r16/m16, r16)
SBB(r32/m32, imm32)
SBB(r32/m32, r32)
SBB(r64/m64, imm32)
SBB(r64/m64, r64)
peachpy.x86_64.generic.
SETA
(*args, **kwargs)¶Set byte if above (CF == 0 and ZF == 0)
Supported forms:
SETA(r8/m8)
peachpy.x86_64.generic.
SETAE
(*args, **kwargs)¶Set byte if above or equal (CF == 0)
Supported forms:
SETAE(r8/m8)
peachpy.x86_64.generic.
SETB
(*args, **kwargs)¶Set byte if below (CF == 1)
Supported forms:
SETB(r8/m8)
peachpy.x86_64.generic.
SETBE
(*args, **kwargs)¶Set byte if below or equal (CF == 1 or ZF == 1)
Supported forms:
SETBE(r8/m8)
peachpy.x86_64.generic.
SETC
(*args, **kwargs)¶Set byte if carry (CF == 1)
Supported forms:
SETC(r8/m8)
peachpy.x86_64.generic.
SETE
(*args, **kwargs)¶Set byte if equal (ZF == 1)
Supported forms:
SETE(r8/m8)
peachpy.x86_64.generic.
SETG
(*args, **kwargs)¶Set byte if greater (ZF == 0 and SF == OF)
Supported forms:
SETG(r8/m8)
peachpy.x86_64.generic.
SETGE
(*args, **kwargs)¶Set byte if greater or equal (SF == OF)
Supported forms:
SETGE(r8/m8)
peachpy.x86_64.generic.
SETL
(*args, **kwargs)¶Set byte if less (SF != OF)
Supported forms:
SETL(r8/m8)
peachpy.x86_64.generic.
SETLE
(*args, **kwargs)¶Set byte if less or equal (ZF == 1 or SF != OF)
Supported forms:
SETLE(r8/m8)
peachpy.x86_64.generic.
SETNA
(*args, **kwargs)¶Set byte if not above (CF == 1 or ZF == 1)
Supported forms:
SETNA(r8/m8)
peachpy.x86_64.generic.
SETNAE
(*args, **kwargs)¶Set byte if not above or equal (CF == 1)
Supported forms:
SETNAE(r8/m8)
peachpy.x86_64.generic.
SETNB
(*args, **kwargs)¶Set byte if not below (CF == 0)
Supported forms:
SETNB(r8/m8)
peachpy.x86_64.generic.
SETNBE
(*args, **kwargs)¶Set byte if not below or equal (CF == 0 and ZF == 0)
Supported forms:
SETNBE(r8/m8)
peachpy.x86_64.generic.
SETNC
(*args, **kwargs)¶Set byte if not carry (CF == 0)
Supported forms:
SETNC(r8/m8)
peachpy.x86_64.generic.
SETNE
(*args, **kwargs)¶Set byte if not equal (ZF == 0)
Supported forms:
SETNE(r8/m8)
peachpy.x86_64.generic.
SETNG
(*args, **kwargs)¶Set byte if not greater (ZF == 1 or SF != OF)
Supported forms:
SETNG(r8/m8)
peachpy.x86_64.generic.
SETNGE
(*args, **kwargs)¶Set byte if not greater or equal (SF != OF)
Supported forms:
SETNGE(r8/m8)
peachpy.x86_64.generic.
SETNL
(*args, **kwargs)¶Set byte if not less (SF == OF)
Supported forms:
SETNL(r8/m8)
peachpy.x86_64.generic.
SETNLE
(*args, **kwargs)¶Set byte if not less or equal (ZF == 0 and SF == OF)
Supported forms:
SETNLE(r8/m8)
peachpy.x86_64.generic.
SETNO
(*args, **kwargs)¶Set byte if not overflow (OF == 0)
Supported forms:
SETNO(r8/m8)
peachpy.x86_64.generic.
SETNP
(*args, **kwargs)¶Set byte if not parity (PF == 0)
Supported forms:
SETNP(r8/m8)
peachpy.x86_64.generic.
SETNS
(*args, **kwargs)¶Set byte if not sign (SF == 0)
Supported forms:
SETNS(r8/m8)
peachpy.x86_64.generic.
SETNZ
(*args, **kwargs)¶Set byte if not zero (ZF == 0)
Supported forms:
SETNZ(r8/m8)
peachpy.x86_64.generic.
SETO
(*args, **kwargs)¶Set byte if overflow (OF == 1)
Supported forms:
SETO(r8/m8)
peachpy.x86_64.generic.
SETP
(*args, **kwargs)¶Set byte if parity (PF == 1)
Supported forms:
SETP(r8/m8)
peachpy.x86_64.generic.
SETPE
(*args, **kwargs)¶Set byte if parity even (PF == 1)
Supported forms:
SETPE(r8/m8)
peachpy.x86_64.generic.
SETPO
(*args, **kwargs)¶Set byte if parity odd (PF == 0)
Supported forms:
SETPO(r8/m8)
peachpy.x86_64.generic.
SETS
(*args, **kwargs)¶Set byte if sign (SF == 1)
Supported forms:
SETS(r8/m8)
peachpy.x86_64.generic.
SETZ
(*args, **kwargs)¶Set byte if zero (ZF == 1)
Supported forms:
SETZ(r8/m8)
peachpy.x86_64.generic.
SFENCE
(*args, **kwargs)¶Store Fence
Supported forms:
SFENCE() [MMX+]
peachpy.x86_64.generic.
SHL
(*args, **kwargs)¶Logical Shift Left
Supported forms:
SHL(r8/m8, imm8)
SHL(r8/m8, cl)
SHL(r16/m16, imm8)
SHL(r16/m16, cl)
SHL(r32/m32, imm8)
SHL(r32/m32, cl)
SHL(r64/m64, imm8)
SHL(r64/m64, cl)
peachpy.x86_64.generic.
SHLD
(*args, **kwargs)¶Integer Double Precision Shift Left
Supported forms:
SHLD(r16/m16, r16, imm8)
SHLD(r16/m16, r16, cl)
SHLD(r32/m32, r32, imm8)
SHLD(r32/m32, r32, cl)
SHLD(r64/m64, r64, imm8)
SHLD(r64/m64, r64, cl)
peachpy.x86_64.generic.
SHLX
(*args, **kwargs)¶Logical Shift Left Without Affecting Flags
Supported forms:
SHLX(r32, r32/m32, r32) [BMI2]
SHLX(r64, r64/m64, r64) [BMI2]
peachpy.x86_64.generic.
SHR
(*args, **kwargs)¶Logical Shift Right
Supported forms:
SHR(r8/m8, imm8)
SHR(r8/m8, cl)
SHR(r16/m16, imm8)
SHR(r16/m16, cl)
SHR(r32/m32, imm8)
SHR(r32/m32, cl)
SHR(r64/m64, imm8)
SHR(r64/m64, cl)
peachpy.x86_64.generic.
SHRD
(*args, **kwargs)¶Integer Double Precision Shift Right
Supported forms:
SHRD(r16/m16, r16, imm8)
SHRD(r16/m16, r16, cl)
SHRD(r32/m32, r32, imm8)
SHRD(r32/m32, r32, cl)
SHRD(r64/m64, r64, imm8)
SHRD(r64/m64, r64, cl)
peachpy.x86_64.generic.
SHRX
(*args, **kwargs)¶Logical Shift Right Without Affecting Flags
Supported forms:
SHRX(r32, r32/m32, r32) [BMI2]
SHRX(r64, r64/m64, r64) [BMI2]
peachpy.x86_64.generic.
STC
(*args, **kwargs)¶Set Carry Flag
Supported forms:
STC()
peachpy.x86_64.generic.
STD
(*args, **kwargs)¶Set Direction Flag
Supported forms:
STD()
peachpy.x86_64.generic.
SUB
(*args, **kwargs)¶Subtract
Supported forms:
SUB(r8, r8/m8)
SUB(r16, r16/m16)
SUB(r32, r32/m32)
SUB(r64, r64/m64)
SUB(r8/m8, imm8)
SUB(r8/m8, r8)
SUB(r16/m16, imm16)
SUB(r16/m16, r16)
SUB(r32/m32, imm32)
SUB(r32/m32, r32)
SUB(r64/m64, imm32)
SUB(r64/m64, r64)
peachpy.x86_64.generic.
SYSCALL
(*args, **kwargs)¶Fast System Call
Supported forms:
SYSCALL()
peachpy.x86_64.generic.
T1MSKC
(*args, **kwargs)¶Inverse Mask From Trailing Ones
Supported forms:
T1MSKC(r32, r32/m32) [TBM]
T1MSKC(r64, r64/m64) [TBM]
peachpy.x86_64.generic.
TEST
(*args, **kwargs)¶Logical Compare
Supported forms:
TEST(r8/m8, imm8)
TEST(r8/m8, r8)
TEST(r16/m16, imm16)
TEST(r16/m16, r16)
TEST(r32/m32, imm32)
TEST(r32/m32, r32)
TEST(r64/m64, imm32)
TEST(r64/m64, r64)
peachpy.x86_64.generic.
TZCNT
(*args, **kwargs)¶Count the Number of Trailing Zero Bits
Supported forms:
TZCNT(r16, r16/m16) [BMI]
TZCNT(r32, r32/m32) [BMI]
TZCNT(r64, r64/m64) [BMI]
peachpy.x86_64.generic.
TZMSK
(*args, **kwargs)¶Mask From Trailing Zeros
Supported forms:
TZMSK(r32, r32/m32) [TBM]
TZMSK(r64, r64/m64) [TBM]
peachpy.x86_64.generic.
UD2
(*args, **kwargs)¶Undefined Instruction
Supported forms:
UD2()
peachpy.x86_64.generic.
XADD
(*args, **kwargs)¶Exchange and Add
Supported forms:
XADD(r8/m8, r8)
XADD(r16/m16, r16)
XADD(r32/m32, r32)
XADD(r64/m64, r64)
peachpy.x86_64.generic.
XCHG
(*args, **kwargs)¶Exchange Register/Memory with Register
Supported forms:
XCHG(r8, r8/m8)
XCHG(r16, r16/m16)
XCHG(r32, r32/m32)
XCHG(r64, r64/m64)
XCHG(r8/m8, r8)
XCHG(r16/m16, r16)
XCHG(r32/m32, r32)
XCHG(r64/m64, r64)
peachpy.x86_64.generic.
XGETBV
(*args, **kwargs)¶Get Value of Extended Control Register
Supported forms:
XGETBV()
peachpy.x86_64.generic.
XOR
(*args, **kwargs)¶Logical Exclusive OR
Supported forms:
XOR(r8, r8/m8)
XOR(r16, r16/m16)
XOR(r32, r32/m32)
XOR(r64, r64/m64)
XOR(r8/m8, imm8)
XOR(r8/m8, r8)
XOR(r16/m16, imm16)
XOR(r16/m16, r16)
XOR(r32/m32, imm32)
XOR(r32/m32, r32)
XOR(r64/m64, imm32)
XOR(r64/m64, r64)
peachpy.x86_64.mmxsse.
ADDPD
(*args, **kwargs)¶Add Packed Double-Precision Floating-Point Values
Supported forms:
ADDPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
ADDPS
(*args, **kwargs)¶Add Packed Single-Precision Floating-Point Values
Supported forms:
ADDPS(xmm, xmm/m128) [SSE]
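Together with the MOVUPS loads and stores documented later in this module, ADDPS is enough for a tiny packed-float kernel; a sketch (the name Add4f and the pointer arguments are illustrative):

    from peachpy import Argument, ptr, float_, const_float_
    from peachpy.x86_64 import *

    a = Argument(ptr(const_float_), name="a")
    b = Argument(ptr(const_float_), name="b")
    out = Argument(ptr(float_), name="out")

    # out[0:4] = a[0:4] + b[0:4], using unaligned 128-bit loads/stores.
    with Function("Add4f", (a, b, out)) as add4f:
        r_a = GeneralPurposeRegister64()
        r_b = GeneralPurposeRegister64()
        r_out = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r_a, a)
        LOAD.ARGUMENT(r_b, b)
        LOAD.ARGUMENT(r_out, out)

        acc = XMMRegister()
        MOVUPS(acc, [r_a])    # load four floats from a
        ADDPS(acc, [r_b])     # packed add with four floats from b
        MOVUPS([r_out], acc)  # store four floats to out
        RETURN()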
peachpy.x86_64.mmxsse.
ADDSD
(*args, **kwargs)¶Add Scalar Double-Precision Floating-Point Values
Supported forms:
ADDSD(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
ADDSS
(*args, **kwargs)¶Add Scalar Single-Precision Floating-Point Values
Supported forms:
ADDSS(xmm, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
ADDSUBPD
(*args, **kwargs)¶Packed Double-FP Add/Subtract
Supported forms:
ADDSUBPD(xmm, xmm/m128) [SSE3]
peachpy.x86_64.mmxsse.
ADDSUBPS
(*args, **kwargs)¶Packed Single-FP Add/Subtract
Supported forms:
ADDSUBPS(xmm, xmm/m128) [SSE3]
peachpy.x86_64.mmxsse.
ANDNPD
(*args, **kwargs)¶Bitwise Logical AND NOT of Packed Double-Precision Floating-Point Values
Supported forms:
ANDNPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
ANDNPS
(*args, **kwargs)¶Bitwise Logical AND NOT of Packed Single-Precision Floating-Point Values
Supported forms:
ANDNPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
ANDPD
(*args, **kwargs)¶Bitwise Logical AND of Packed Double-Precision Floating-Point Values
Supported forms:
ANDPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
ANDPS
(*args, **kwargs)¶Bitwise Logical AND of Packed Single-Precision Floating-Point Values
Supported forms:
ANDPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
BLENDPD
(*args, **kwargs)¶Blend Packed Double Precision Floating-Point Values
Supported forms:
BLENDPD(xmm, xmm/m128, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
BLENDPS
(*args, **kwargs)¶Blend Packed Single Precision Floating-Point Values
Supported forms:
BLENDPS(xmm, xmm/m128, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
BLENDVPD
(*args, **kwargs)¶Variable Blend Packed Double Precision Floating-Point Values
Supported forms:
BLENDVPD(xmm, xmm/m128, xmm0) [SSE4.1]
peachpy.x86_64.mmxsse.
BLENDVPS
(*args, **kwargs)¶Variable Blend Packed Single Precision Floating-Point Values
Supported forms:
BLENDVPS(xmm, xmm/m128, xmm0) [SSE4.1]
peachpy.x86_64.mmxsse.
CMPPD
(*args, **kwargs)¶Compare Packed Double-Precision Floating-Point Values
Supported forms:
CMPPD(xmm, xmm/m128, imm8) [SSE2]
peachpy.x86_64.mmxsse.
CMPPS
(*args, **kwargs)¶Compare Packed Single-Precision Floating-Point Values
Supported forms:
CMPPS(xmm, xmm/m128, imm8) [SSE]
peachpy.x86_64.mmxsse.
CMPSD
(*args, **kwargs)¶Compare Scalar Double-Precision Floating-Point Values
Supported forms:
CMPSD(xmm, xmm/m64, imm8) [SSE2]
peachpy.x86_64.mmxsse.
CMPSS
(*args, **kwargs)¶Compare Scalar Single-Precision Floating-Point Values
Supported forms:
CMPSS(xmm, xmm/m32, imm8) [SSE]
peachpy.x86_64.mmxsse.
COMISD
(*args, **kwargs)¶Compare Scalar Ordered Double-Precision Floating-Point Values and Set EFLAGS
Supported forms:
COMISD(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
COMISS
(*args, **kwargs)¶Compare Scalar Ordered Single-Precision Floating-Point Values and Set EFLAGS
Supported forms:
COMISS(xmm, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
CVTDQ2PD
(*args, **kwargs)¶Convert Packed Dword Integers to Packed Double-Precision FP Values
Supported forms:
CVTDQ2PD(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
CVTDQ2PS
(*args, **kwargs)¶Convert Packed Dword Integers to Packed Single-Precision FP Values
Supported forms:
CVTDQ2PS(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
CVTPD2DQ
(*args, **kwargs)¶Convert Packed Double-Precision FP Values to Packed Dword Integers
Supported forms:
CVTPD2DQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
CVTPD2PI
(*args, **kwargs)¶Convert Packed Double-Precision FP Values to Packed Dword Integers
Supported forms:
CVTPD2PI(mm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
CVTPD2PS
(*args, **kwargs)¶Convert Packed Double-Precision FP Values to Packed Single-Precision FP Values
Supported forms:
CVTPD2PS(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
CVTPI2PD
(*args, **kwargs)¶Convert Packed Dword Integers to Packed Double-Precision FP Values
Supported forms:
CVTPI2PD(xmm, mm/m64) [SSE2]
peachpy.x86_64.mmxsse.
CVTPI2PS
(*args, **kwargs)¶Convert Packed Dword Integers to Packed Single-Precision FP Values
Supported forms:
CVTPI2PS(xmm, mm/m64) [SSE]
peachpy.x86_64.mmxsse.
CVTPS2DQ
(*args, **kwargs)¶Convert Packed Single-Precision FP Values to Packed Dword Integers
Supported forms:
CVTPS2DQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
CVTPS2PD
(*args, **kwargs)¶Convert Packed Single-Precision FP Values to Packed Double-Precision FP Values
Supported forms:
CVTPS2PD(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
CVTPS2PI
(*args, **kwargs)¶Convert Packed Single-Precision FP Values to Packed Dword Integers
Supported forms:
CVTPS2PI(mm, xmm/m64) [SSE]
peachpy.x86_64.mmxsse.
CVTSD2SI
(*args, **kwargs)¶Convert Scalar Double-Precision FP Value to Integer
Supported forms:
CVTSD2SI(r32, xmm/m64) [SSE2]
CVTSD2SI(r64, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
CVTSD2SS
(*args, **kwargs)¶Convert Scalar Double-Precision FP Value to Scalar Single-Precision FP Value
Supported forms:
CVTSD2SS(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
CVTSI2SD
(*args, **kwargs)¶Convert Dword Integer to Scalar Double-Precision FP Value
Supported forms:
CVTSI2SD(xmm, r32/m32) [SSE2]
CVTSI2SD(xmm, r64/m64) [SSE2]
peachpy.x86_64.mmxsse.
CVTSI2SS
(*args, **kwargs)¶Convert Dword Integer to Scalar Single-Precision FP Value
Supported forms:
CVTSI2SS(xmm, r32/m32) [SSE]
CVTSI2SS(xmm, r64/m64) [SSE]
peachpy.x86_64.mmxsse.
CVTSS2SD
(*args, **kwargs)¶Convert Scalar Single-Precision FP Value to Scalar Double-Precision FP Value
Supported forms:
CVTSS2SD(xmm, xmm/m32) [SSE2]
peachpy.x86_64.mmxsse.
CVTSS2SI
(*args, **kwargs)¶Convert Scalar Single-Precision FP Value to Dword Integer
Supported forms:
CVTSS2SI(r32, xmm/m32) [SSE]
CVTSS2SI(r64, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
CVTTPD2DQ
(*args, **kwargs)¶Convert with Truncation Packed Double-Precision FP Values to Packed Dword Integers
Supported forms:
CVTTPD2DQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
CVTTPD2PI
(*args, **kwargs)¶Convert with Truncation Packed Double-Precision FP Values to Packed Dword Integers
Supported forms:
CVTTPD2PI(mm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
CVTTPS2DQ
(*args, **kwargs)¶Convert with Truncation Packed Single-Precision FP Values to Packed Dword Integers
Supported forms:
CVTTPS2DQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
CVTTPS2PI
(*args, **kwargs)¶Convert with Truncation Packed Single-Precision FP Values to Packed Dword Integers
Supported forms:
CVTTPS2PI(mm, xmm/m64) [SSE]
peachpy.x86_64.mmxsse.
CVTTSD2SI
(*args, **kwargs)¶Convert with Truncation Scalar Double-Precision FP Value to Signed Integer
Supported forms:
CVTTSD2SI(r32, xmm/m64) [SSE2]
CVTTSD2SI(r64, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
CVTTSS2SI
(*args, **kwargs)¶Convert with Truncation Scalar Single-Precision FP Value to Dword Integer
Supported forms:
CVTTSS2SI(r32, xmm/m32) [SSE]
CVTTSS2SI(r64, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
DIVPD
(*args, **kwargs)¶Divide Packed Double-Precision Floating-Point Values
Supported forms:
DIVPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
DIVPS
(*args, **kwargs)¶Divide Packed Single-Precision Floating-Point Values
Supported forms:
DIVPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
DIVSD
(*args, **kwargs)¶Divide Scalar Double-Precision Floating-Point Values
Supported forms:
DIVSD(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
DIVSS
(*args, **kwargs)¶Divide Scalar Single-Precision Floating-Point Values
Supported forms:
DIVSS(xmm, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
DPPD
(*args, **kwargs)¶Dot Product of Packed Double Precision Floating-Point Values
Supported forms:
DPPD(xmm, xmm/m128, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
DPPS
(*args, **kwargs)¶Dot Product of Packed Single Precision Floating-Point Values
Supported forms:
DPPS(xmm, xmm/m128, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
EMMS
(*args, **kwargs)¶Exit MMX State
Supported forms:
EMMS() [MMX]
peachpy.x86_64.mmxsse.
EXTRACTPS
(*args, **kwargs)¶Extract Packed Single Precision Floating-Point Value
Supported forms:
EXTRACTPS(r32/m32, xmm, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
HADDPD
(*args, **kwargs)¶Packed Double-FP Horizontal Add
Supported forms:
HADDPD(xmm, xmm/m128) [SSE3]
peachpy.x86_64.mmxsse.
HADDPS
(*args, **kwargs)¶Packed Single-FP Horizontal Add
Supported forms:
HADDPS(xmm, xmm/m128) [SSE3]
peachpy.x86_64.mmxsse.
HSUBPD
(*args, **kwargs)¶Packed Double-FP Horizontal Subtract
Supported forms:
HSUBPD(xmm, xmm/m128) [SSE3]
peachpy.x86_64.mmxsse.
HSUBPS
(*args, **kwargs)¶Packed Single-FP Horizontal Subtract
Supported forms:
HSUBPS(xmm, xmm/m128) [SSE3]
peachpy.x86_64.mmxsse.
INSERTPS
(*args, **kwargs)¶Insert Packed Single Precision Floating-Point Value
Supported forms:
INSERTPS(xmm, xmm/m32, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
LDDQU
(*args, **kwargs)¶Load Unaligned Integer 128 Bits
Supported forms:
LDDQU(xmm, m128) [SSE3]
peachpy.x86_64.mmxsse.
LDMXCSR
(*args, **kwargs)¶Load MXCSR Register
Supported forms:
LDMXCSR(m32) [SSE]
peachpy.x86_64.mmxsse.
MASKMOVDQU
(*args, **kwargs)¶Store Selected Bytes of Double Quadword
Supported forms:
MASKMOVDQU(xmm, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MASKMOVQ
(*args, **kwargs)¶Store Selected Bytes of Quadword
Supported forms:
MASKMOVQ(mm, mm) [MMX+]
peachpy.x86_64.mmxsse.
MAXPD
(*args, **kwargs)¶Return Maximum Packed Double-Precision Floating-Point Values
Supported forms:
MAXPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
MAXPS
(*args, **kwargs)¶Return Maximum Packed Single-Precision Floating-Point Values
Supported forms:
MAXPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
MAXSD
(*args, **kwargs)¶Return Maximum Scalar Double-Precision Floating-Point Value
Supported forms:
MAXSD(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
MAXSS
(*args, **kwargs)¶Return Maximum Scalar Single-Precision Floating-Point Value
Supported forms:
MAXSS(xmm, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
MINPD
(*args, **kwargs)¶Return Minimum Packed Double-Precision Floating-Point Values
Supported forms:
MINPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
MINPS
(*args, **kwargs)¶Return Minimum Packed Single-Precision Floating-Point Values
Supported forms:
MINPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
MINSD
(*args, **kwargs)¶Return Minimum Scalar Double-Precision Floating-Point Value
Supported forms:
MINSD(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
MINSS
(*args, **kwargs)¶Return Minimum Scalar Single-Precision Floating-Point Value
Supported forms:
MINSS(xmm, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
MOVAPD
(*args, **kwargs)¶Move Aligned Packed Double-Precision Floating-Point Values
Supported forms:
MOVAPD(xmm, xmm/m128) [SSE2]
MOVAPD(xmm/m128, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVAPS
(*args, **kwargs)¶Move Aligned Packed Single-Precision Floating-Point Values
Supported forms:
MOVAPS(xmm, xmm/m128) [SSE]
MOVAPS(xmm/m128, xmm) [SSE]
peachpy.x86_64.mmxsse.
MOVD
(*args, **kwargs)¶Move Doubleword
Supported forms:
MOVD(mm, r32/m32) [MMX]
MOVD(r32/m32, mm) [MMX]
MOVD(xmm, r32/m32) [SSE2]
MOVD(r32/m32, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVDDUP
(*args, **kwargs)¶Move One Double-FP and Duplicate
Supported forms:
MOVDDUP(xmm, xmm/m64) [SSE3]
peachpy.x86_64.mmxsse.
MOVDQ2Q
(*args, **kwargs)¶Move Quadword from XMM to MMX Technology Register
Supported forms:
MOVDQ2Q(mm, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVDQA
(*args, **kwargs)¶Move Aligned Double Quadword
Supported forms:
MOVDQA(xmm, xmm/m128) [SSE2]
MOVDQA(xmm/m128, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVDQU
(*args, **kwargs)¶Move Unaligned Double Quadword
Supported forms:
MOVDQU(xmm, xmm/m128) [SSE2]
MOVDQU(xmm/m128, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVHLPS
(*args, **kwargs)¶Move Packed Single-Precision Floating-Point Values High to Low
Supported forms:
MOVHLPS(xmm, xmm) [SSE]
peachpy.x86_64.mmxsse.
MOVHPD
(*args, **kwargs)¶Move High Packed Double-Precision Floating-Point Value
Supported forms:
MOVHPD(xmm, m64) [SSE2]
MOVHPD(m64, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVHPS
(*args, **kwargs)¶Move High Packed Single-Precision Floating-Point Values
Supported forms:
MOVHPS(xmm, m64) [SSE]
MOVHPS(m64, xmm) [SSE]
peachpy.x86_64.mmxsse.
MOVLHPS
(*args, **kwargs)¶Move Packed Single-Precision Floating-Point Values Low to High
Supported forms:
MOVLHPS(xmm, xmm) [SSE]
peachpy.x86_64.mmxsse.
MOVLPD
(*args, **kwargs)¶Move Low Packed Double-Precision Floating-Point Value
Supported forms:
MOVLPD(xmm, m64) [SSE2]
MOVLPD(m64, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVLPS
(*args, **kwargs)¶Move Low Packed Single-Precision Floating-Point Values
Supported forms:
MOVLPS(xmm, m64) [SSE]
MOVLPS(m64, xmm) [SSE]
peachpy.x86_64.mmxsse.
MOVMSKPD
(*args, **kwargs)¶Extract Packed Double-Precision Floating-Point Sign Mask
Supported forms:
MOVMSKPD(r32, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVMSKPS
(*args, **kwargs)¶Extract Packed Single-Precision Floating-Point Sign Mask
Supported forms:
MOVMSKPS(r32, xmm) [SSE]
peachpy.x86_64.mmxsse.
MOVNTDQ
(*args, **kwargs)¶Store Double Quadword Using Non-Temporal Hint
Supported forms:
MOVNTDQ(m128, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVNTDQA
(*args, **kwargs)¶Load Double Quadword Non-Temporal Aligned Hint
Supported forms:
MOVNTDQA(xmm, m128) [SSE4.1]
peachpy.x86_64.mmxsse.
MOVNTPD
(*args, **kwargs)¶Store Packed Double-Precision Floating-Point Values Using Non-Temporal Hint
Supported forms:
MOVNTPD(m128, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVNTPS
(*args, **kwargs)¶Store Packed Single-Precision Floating-Point Values Using Non-Temporal Hint
Supported forms:
MOVNTPS(m128, xmm) [SSE]
peachpy.x86_64.mmxsse.
MOVNTQ
(*args, **kwargs)¶Store of Quadword Using Non-Temporal Hint
Supported forms:
MOVNTQ(m64, mm) [MMX+]
peachpy.x86_64.mmxsse.
MOVQ
(*args, **kwargs)¶Move Quadword
Supported forms:
MOVQ(mm, mm) [MMX]
MOVQ(mm, r64/m64) [MMX]
MOVQ(r64/m64, mm) [MMX]
MOVQ(xmm, xmm) [SSE2]
MOVQ(xmm, r64/m64) [SSE2]
MOVQ(r64/m64, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVQ2DQ
(*args, **kwargs)¶Move Quadword from MMX Technology to XMM Register
Supported forms:
MOVQ2DQ(xmm, mm) [SSE2]
peachpy.x86_64.mmxsse.
MOVSD
(*args, **kwargs)¶Move Scalar Double-Precision Floating-Point Value
Supported forms:
MOVSD(xmm, xmm/m64) [SSE2]
MOVSD(xmm/m64, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVSHDUP
(*args, **kwargs)¶Move Packed Single-FP High and Duplicate
Supported forms:
MOVSHDUP(xmm, xmm/m128) [SSE3]
peachpy.x86_64.mmxsse.
MOVSLDUP
(*args, **kwargs)¶Move Packed Single-FP Low and Duplicate
Supported forms:
MOVSLDUP(xmm, xmm/m128) [SSE3]
peachpy.x86_64.mmxsse.
MOVSS
(*args, **kwargs)¶Move Scalar Single-Precision Floating-Point Values
Supported forms:
MOVSS(xmm, xmm/m32) [SSE]
MOVSS(xmm/m32, xmm) [SSE]
peachpy.x86_64.mmxsse.
MOVUPD
(*args, **kwargs)¶Move Unaligned Packed Double-Precision Floating-Point Values
Supported forms:
MOVUPD(xmm, xmm/m128) [SSE2]
MOVUPD(xmm/m128, xmm) [SSE2]
peachpy.x86_64.mmxsse.
MOVUPS
(*args, **kwargs)¶Move Unaligned Packed Single-Precision Floating-Point Values
Supported forms:
MOVUPS(xmm, xmm/m128) [SSE]
MOVUPS(xmm/m128, xmm) [SSE]
peachpy.x86_64.mmxsse.
MPSADBW
(*args, **kwargs)¶Compute Multiple Packed Sums of Absolute Difference
Supported forms:
MPSADBW(xmm, xmm/m128, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
MULPD
(*args, **kwargs)¶Multiply Packed Double-Precision Floating-Point Values
Supported forms:
MULPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
MULPS
(*args, **kwargs)¶Multiply Packed Single-Precision Floating-Point Values
Supported forms:
MULPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
MULSD
(*args, **kwargs)¶Multiply Scalar Double-Precision Floating-Point Values
Supported forms:
MULSD(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
MULSS
(*args, **kwargs)¶Multiply Scalar Single-Precision Floating-Point Values
Supported forms:
MULSS(xmm, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
ORPD
(*args, **kwargs)¶Bitwise Logical OR of Double-Precision Floating-Point Values
Supported forms:
ORPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
ORPS
(*args, **kwargs)¶Bitwise Logical OR of Single-Precision Floating-Point Values
Supported forms:
ORPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
PABSB
(*args, **kwargs)¶Packed Absolute Value of Byte Integers
Supported forms:
PABSB(mm, mm/m64) [SSSE3]
PABSB(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PABSD
(*args, **kwargs)¶Packed Absolute Value of Doubleword Integers
Supported forms:
PABSD(mm, mm/m64) [SSSE3]
PABSD(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PABSW
(*args, **kwargs)¶Packed Absolute Value of Word Integers
Supported forms:
PABSW(mm, mm/m64) [SSSE3]
PABSW(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PACKSSDW
(*args, **kwargs)¶Pack Doublewords into Words with Signed Saturation
Supported forms:
PACKSSDW(mm, mm/m64) [MMX]
PACKSSDW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PACKSSWB
(*args, **kwargs)¶Pack Words into Bytes with Signed Saturation
Supported forms:
PACKSSWB(mm, mm/m64) [MMX]
PACKSSWB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PACKUSDW
(*args, **kwargs)¶Pack Doublewords into Words with Unsigned Saturation
Supported forms:
PACKUSDW(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PACKUSWB
(*args, **kwargs)¶Pack Words into Bytes with Unsigned Saturation
Supported forms:
PACKUSWB(mm, mm/m64) [MMX]
PACKUSWB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PADDB
(*args, **kwargs)¶Add Packed Byte Integers
Supported forms:
PADDB(mm, mm/m64) [MMX]
PADDB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PADDD
(*args, **kwargs)¶Add Packed Doubleword Integers
Supported forms:
PADDD(mm, mm/m64) [MMX]
PADDD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PADDQ
(*args, **kwargs)¶Add Packed Quadword Integers
Supported forms:
PADDQ(mm, mm/m64) [SSE2]
PADDQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PADDSB
(*args, **kwargs)¶Add Packed Signed Byte Integers with Signed Saturation
Supported forms:
PADDSB(mm, mm/m64) [MMX]
PADDSB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PADDSW
(*args, **kwargs)¶Add Packed Signed Word Integers with Signed Saturation
Supported forms:
PADDSW(mm, mm/m64) [MMX]
PADDSW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PADDUSB
(*args, **kwargs)¶Add Packed Unsigned Byte Integers with Unsigned Saturation
Supported forms:
PADDUSB(mm, mm/m64) [MMX]
PADDUSB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PADDUSW
(*args, **kwargs)¶Add Packed Unsigned Word Integers with Unsigned Saturation
Supported forms:
PADDUSW(mm, mm/m64) [MMX]
PADDUSW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PADDW
(*args, **kwargs)¶Add Packed Word Integers
Supported forms:
PADDW(mm, mm/m64) [MMX]
PADDW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PALIGNR
(*args, **kwargs)¶Packed Align Right
Supported forms:
PALIGNR(mm, mm/m64, imm8) [SSSE3]
PALIGNR(xmm, xmm/m128, imm8) [SSSE3]
peachpy.x86_64.mmxsse.
PAND
(*args, **kwargs)¶Packed Bitwise Logical AND
Supported forms:
PAND(mm, mm/m64) [MMX]
PAND(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PANDN
(*args, **kwargs)¶Packed Bitwise Logical AND NOT
Supported forms:
PANDN(mm, mm/m64) [MMX]
PANDN(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PAVGB
(*args, **kwargs)¶Average Packed Byte Integers
Supported forms:
PAVGB(mm, mm/m64) [MMX+]
PAVGB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PAVGW
(*args, **kwargs)¶Average Packed Word Integers
Supported forms:
PAVGW(mm, mm/m64) [MMX+]
PAVGW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PBLENDVB
(*args, **kwargs)¶Variable Blend Packed Bytes
Supported forms:
PBLENDVB(xmm, xmm/m128, xmm0) [SSE4.1]
peachpy.x86_64.mmxsse.
PBLENDW
(*args, **kwargs)¶Blend Packed Words
Supported forms:
PBLENDW(xmm, xmm/m128, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
PCMPEQB
(*args, **kwargs)¶Compare Packed Byte Data for Equality
Supported forms:
PCMPEQB(mm, mm/m64) [MMX]
PCMPEQB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PCMPEQD
(*args, **kwargs)¶Compare Packed Doubleword Data for Equality
Supported forms:
PCMPEQD(mm, mm/m64) [MMX]
PCMPEQD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PCMPEQQ
(*args, **kwargs)¶Compare Packed Quadword Data for Equality
Supported forms:
PCMPEQQ(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PCMPEQW
(*args, **kwargs)¶Compare Packed Word Data for Equality
Supported forms:
PCMPEQW(mm, mm/m64) [MMX]
PCMPEQW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PCMPESTRI
(*args, **kwargs)¶Packed Compare Explicit Length Strings, Return Index
Supported forms:
PCMPESTRI(xmm, xmm/m128, imm8) [SSE4.2]
peachpy.x86_64.mmxsse.
PCMPESTRM
(*args, **kwargs)¶Packed Compare Explicit Length Strings, Return Mask
Supported forms:
PCMPESTRM(xmm, xmm/m128, imm8) [SSE4.2]
peachpy.x86_64.mmxsse.
PCMPGTB
(*args, **kwargs)¶Compare Packed Signed Byte Integers for Greater Than
Supported forms:
PCMPGTB(mm, mm/m64) [MMX]
PCMPGTB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PCMPGTD
(*args, **kwargs)¶Compare Packed Signed Doubleword Integers for Greater Than
Supported forms:
PCMPGTD(mm, mm/m64) [MMX]
PCMPGTD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PCMPGTQ
(*args, **kwargs)¶Compare Packed Data for Greater Than
Supported forms:
PCMPGTQ(xmm, xmm/m128) [SSE4.2]
peachpy.x86_64.mmxsse.
PCMPGTW
(*args, **kwargs)¶Compare Packed Signed Word Integers for Greater Than
Supported forms:
PCMPGTW(mm, mm/m64) [MMX]
PCMPGTW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PCMPISTRI
(*args, **kwargs)¶Packed Compare Implicit Length Strings, Return Index
Supported forms:
PCMPISTRI(xmm, xmm/m128, imm8) [SSE4.2]
peachpy.x86_64.mmxsse.
PCMPISTRM
(*args, **kwargs)¶Packed Compare Implicit Length Strings, Return Mask
Supported forms:
PCMPISTRM(xmm, xmm/m128, imm8) [SSE4.2]
peachpy.x86_64.mmxsse.
PEXTRB
(*args, **kwargs)¶Extract Byte
Supported forms:
PEXTRB(r32, xmm, imm8) [SSE4.1]
PEXTRB(m8, xmm, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
PEXTRD
(*args, **kwargs)¶Extract Doubleword
Supported forms:
PEXTRD(r32/m32, xmm, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
PEXTRQ
(*args, **kwargs)¶Extract Quadword
Supported forms:
PEXTRQ(r64/m64, xmm, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
PEXTRW
(*args, **kwargs)¶Extract Word
Supported forms:
PEXTRW(r32, mm, imm8) [MMX+]
PEXTRW(r32, xmm, imm8) [SSE4.1]
PEXTRW(m16, xmm, imm8) [SSE4.1]
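A sketch that loads eight words and returns lane 3 as a zero-extended 32-bit value through the PEXTRW(r32, xmm, imm8) form; names are illustrative:

    from peachpy import *
    from peachpy.x86_64 import *

    p = Argument(ptr(uint16_t), "p")

    with Function("lane3", (p,), uint32_t) as asm_lane3:
        r_p = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r_p, p)

        v = XMMRegister()
        MOVDQU(v, [r_p])        # load 8 words
        lane = GeneralPurposeRegister32()
        PEXTRW(lane, v, 3)      # extract word lane 3, zero-extended into a 32-bit register
        RETURN(lane)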
peachpy.x86_64.mmxsse.
PHADDD
(*args, **kwargs)¶Packed Horizontal Add Doubleword Integer
Supported forms:
PHADDD(mm, mm/m64) [SSSE3]
PHADDD(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PHADDSW
(*args, **kwargs)¶Packed Horizontal Add Signed Word Integers with Signed Saturation
Supported forms:
PHADDSW(mm, mm/m64) [SSSE3]
PHADDSW(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PHADDW
(*args, **kwargs)¶Packed Horizontal Add Word Integers
Supported forms:
PHADDW(mm, mm/m64) [SSSE3]
PHADDW(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PHMINPOSUW
(*args, **kwargs)¶Packed Horizontal Minimum of Unsigned Word Integers
Supported forms:
PHMINPOSUW(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PHSUBD
(*args, **kwargs)¶Packed Horizontal Subtract Doubleword Integers
Supported forms:
PHSUBD(mm, mm/m64) [SSSE3]
PHSUBD(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PHSUBSW
(*args, **kwargs)¶Packed Horizontal Subtract Signed Word Integers with Signed Saturation
Supported forms:
PHSUBSW(mm, mm/m64) [SSSE3]
PHSUBSW(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PHSUBW
(*args, **kwargs)¶Packed Horizontal Subtract Word Integers
Supported forms:
PHSUBW(mm, mm/m64) [SSSE3]
PHSUBW(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PINSRB
(*args, **kwargs)¶Insert Byte
Supported forms:
PINSRB(xmm, r32, imm8) [SSE4.1]
PINSRB(xmm, m8, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
PINSRD
(*args, **kwargs)¶Insert Doubleword
Supported forms:
PINSRD(xmm, r32/m32, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
PINSRQ
(*args, **kwargs)¶Insert Quadword
Supported forms:
PINSRQ(xmm, r64/m64, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
PINSRW
(*args, **kwargs)¶Insert Word
Supported forms:
PINSRW(mm, r32, imm8) [MMX+]
PINSRW(mm, m16, imm8) [MMX+]
PINSRW(xmm, r32, imm8) [SSE2]
PINSRW(xmm, m16, imm8) [SSE2]
peachpy.x86_64.mmxsse.
PMADDUBSW
(*args, **kwargs)¶Multiply and Add Packed Signed and Unsigned Byte Integers
Supported forms:
PMADDUBSW(mm, mm/m64) [SSSE3]
PMADDUBSW(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PMADDWD
(*args, **kwargs)¶Multiply and Add Packed Signed Word Integers
Supported forms:
PMADDWD(mm, mm/m64) [MMX]
PMADDWD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PMAXSB
(*args, **kwargs)¶Maximum of Packed Signed Byte Integers
Supported forms:
PMAXSB(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PMAXSD
(*args, **kwargs)¶Maximum of Packed Signed Doubleword Integers
Supported forms:
PMAXSD(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PMAXSW
(*args, **kwargs)¶Maximum of Packed Signed Word Integers
Supported forms:
PMAXSW(mm, mm/m64) [MMX+]
PMAXSW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PMAXUB
(*args, **kwargs)¶Maximum of Packed Unsigned Byte Integers
Supported forms:
PMAXUB(mm, mm/m64) [MMX+]
PMAXUB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PMAXUD
(*args, **kwargs)¶Maximum of Packed Unsigned Doubleword Integers
Supported forms:
PMAXUD(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PMAXUW
(*args, **kwargs)¶Maximum of Packed Unsigned Word Integers
Supported forms:
PMAXUW(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PMINSB
(*args, **kwargs)¶Minimum of Packed Signed Byte Integers
Supported forms:
PMINSB(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PMINSD
(*args, **kwargs)¶Minimum of Packed Signed Doubleword Integers
Supported forms:
PMINSD(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PMINSW
(*args, **kwargs)¶Minimum of Packed Signed Word Integers
Supported forms:
PMINSW(mm, mm/m64) [MMX+]
PMINSW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PMINUB
(*args, **kwargs)¶Minimum of Packed Unsigned Byte Integers
Supported forms:
PMINUB(mm, mm/m64) [MMX+]
PMINUB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PMINUD
(*args, **kwargs)¶Minimum of Packed Unsigned Doubleword Integers
Supported forms:
PMINUD(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PMINUW
(*args, **kwargs)¶Minimum of Packed Unsigned Word Integers
Supported forms:
PMINUW(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVMSKB
(*args, **kwargs)¶Move Byte Mask
Supported forms:
PMOVMSKB(r32, mm) [MMX+]
PMOVMSKB(r32, xmm) [SSE2]
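PMOVMSKB packs the sign bit of each byte lane into a general-purpose register, which is the usual way to turn a packed byte comparison into something branchable or countable. A sketch that reports which of 16 bytes are zero, one result bit per byte; names are illustrative:

    from peachpy import *
    from peachpy.x86_64 import *

    p = Argument(ptr(uint8_t), "p")

    with Function("zero_mask16", (p,), uint32_t) as asm_zero_mask16:
        r_p = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r_p, p)

        data = XMMRegister()
        zero = XMMRegister()
        MOVDQU(data, [r_p])
        PXOR(zero, zero)        # common zeroing idiom
        PCMPEQB(data, zero)     # 0xFF in byte lanes equal to zero, 0x00 elsewhere
        mask = GeneralPurposeRegister32()
        PMOVMSKB(mask, data)    # bit i of mask = sign bit of byte lane i
        RETURN(mask)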
peachpy.x86_64.mmxsse.
PMOVSXBD
(*args, **kwargs)¶Move Packed Byte Integers to Doubleword Integers with Sign Extension
Supported forms:
PMOVSXBD(xmm, xmm/m32) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVSXBQ
(*args, **kwargs)¶Move Packed Byte Integers to Quadword Integers with Sign Extension
Supported forms:
PMOVSXBQ(xmm, xmm/m16) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVSXBW
(*args, **kwargs)¶Move Packed Byte Integers to Word Integers with Sign Extension
Supported forms:
PMOVSXBW(xmm, xmm/m64) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVSXDQ
(*args, **kwargs)¶Move Packed Doubleword Integers to Quadword Integers with Sign Extension
Supported forms:
PMOVSXDQ(xmm, xmm/m64) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVSXWD
(*args, **kwargs)¶Move Packed Word Integers to Doubleword Integers with Sign Extension
Supported forms:
PMOVSXWD(xmm, xmm/m64) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVSXWQ
(*args, **kwargs)¶Move Packed Word Integers to Quadword Integers with Sign Extension
Supported forms:
PMOVSXWQ(xmm, xmm/m32) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVZXBD
(*args, **kwargs)¶Move Packed Byte Integers to Doubleword Integers with Zero Extension
Supported forms:
PMOVZXBD(xmm, xmm/m32) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVZXBQ
(*args, **kwargs)¶Move Packed Byte Integers to Quadword Integers with Zero Extension
Supported forms:
PMOVZXBQ(xmm, xmm/m16) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVZXBW
(*args, **kwargs)¶Move Packed Byte Integers to Word Integers with Zero Extension
Supported forms:
PMOVZXBW(xmm, xmm/m64) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVZXDQ
(*args, **kwargs)¶Move Packed Doubleword Integers to Quadword Integers with Zero Extension
Supported forms:
PMOVZXDQ(xmm, xmm/m64) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVZXWD
(*args, **kwargs)¶Move Packed Word Integers to Doubleword Integers with Zero Extension
Supported forms:
PMOVZXWD(xmm, xmm/m64) [SSE4.1]
peachpy.x86_64.mmxsse.
PMOVZXWQ
(*args, **kwargs)¶Move Packed Word Integers to Quadword Integers with Zero Extension
Supported forms:
PMOVZXWQ(xmm, xmm/m32) [SSE4.1]
peachpy.x86_64.mmxsse.
PMULDQ
(*args, **kwargs)¶Multiply Packed Signed Doubleword Integers and Store Quadword Result
Supported forms:
PMULDQ(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PMULHRSW
(*args, **kwargs)¶Packed Multiply Signed Word Integers and Store High Result with Round and Scale
Supported forms:
PMULHRSW(mm, mm/m64) [SSSE3]
PMULHRSW(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PMULHUW
(*args, **kwargs)¶Multiply Packed Unsigned Word Integers and Store High Result
Supported forms:
PMULHUW(mm, mm/m64) [MMX+]
PMULHUW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PMULHW
(*args, **kwargs)¶Multiply Packed Signed Word Integers and Store High Result
Supported forms:
PMULHW(mm, mm/m64) [MMX]
PMULHW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PMULLD
(*args, **kwargs)¶Multiply Packed Signed Doubleword Integers and Store Low Result
Supported forms:
PMULLD(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PMULLW
(*args, **kwargs)¶Multiply Packed Signed Word Integers and Store Low Result
Supported forms:
PMULLW(mm, mm/m64) [MMX]
PMULLW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PMULUDQ
(*args, **kwargs)¶Multiply Packed Unsigned Doubleword Integers
Supported forms:
PMULUDQ(mm, mm/m64) [SSE2]
PMULUDQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
POR
(*args, **kwargs)¶Packed Bitwise Logical OR
Supported forms:
POR(mm, mm/m64) [MMX]
POR(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSADBW
(*args, **kwargs)¶Compute Sum of Absolute Differences
Supported forms:
PSADBW(mm, mm/m64) [MMX+]
PSADBW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSHUFB
(*args, **kwargs)¶Packed Shuffle Bytes
Supported forms:
PSHUFB(mm, mm/m64) [SSSE3]
PSHUFB(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PSHUFD
(*args, **kwargs)¶Shuffle Packed Doublewords
Supported forms:
PSHUFD(xmm, xmm/m128, imm8) [SSE2]
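The imm8 selector holds four 2-bit source indices, one per destination doubleword (bits 1:0 pick the source lane for destination lane 0, and so on). A sketch that reverses the four doublewords of a 128-bit block with selector 0x1B, which picks source lanes 3, 2, 1, 0; names are illustrative:

    from peachpy import *
    from peachpy.x86_64 import *

    p = Argument(ptr(uint32_t), "p")
    q = Argument(ptr(uint32_t), "q")

    with Function("reverse4x32", (p, q)) as asm_reverse4x32:
        r_p = GeneralPurposeRegister64()
        r_q = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r_p, p)
        LOAD.ARGUMENT(r_q, q)

        rev = XMMRegister()
        PSHUFD(rev, [r_p], 0x1B)    # 0x1B = 0b00_01_10_11: lanes 3,2,1,0
        MOVDQU([r_q], rev)
        RETURN()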
peachpy.x86_64.mmxsse.
PSHUFHW
(*args, **kwargs)¶Shuffle Packed High Words
Supported forms:
PSHUFHW(xmm, xmm/m128, imm8) [SSE2]
peachpy.x86_64.mmxsse.
PSHUFLW
(*args, **kwargs)¶Shuffle Packed Low Words
Supported forms:
PSHUFLW(xmm, xmm/m128, imm8) [SSE2]
peachpy.x86_64.mmxsse.
PSHUFW
(*args, **kwargs)¶Shuffle Packed Words
Supported forms:
PSHUFW(mm, mm/m64, imm8) [MMX+]
peachpy.x86_64.mmxsse.
PSIGNB
(*args, **kwargs)¶Packed Sign of Byte Integers
Supported forms:
PSIGNB(mm, mm/m64) [SSSE3]
PSIGNB(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PSIGND
(*args, **kwargs)¶Packed Sign of Doubleword Integers
Supported forms:
PSIGND(mm, mm/m64) [SSSE3]
PSIGND(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PSIGNW
(*args, **kwargs)¶Packed Sign of Word Integers
Supported forms:
PSIGNW(mm, mm/m64) [SSSE3]
PSIGNW(xmm, xmm/m128) [SSSE3]
peachpy.x86_64.mmxsse.
PSLLD
(*args, **kwargs)¶Shift Packed Doubleword Data Left Logical
Supported forms:
PSLLD(mm, imm8) [MMX]
PSLLD(mm, mm/m64) [MMX]
PSLLD(xmm, imm8) [SSE2]
PSLLD(xmm, xmm/m128) [SSE2]
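The immediate forms shift every lane by the same count, while the register/memory forms take the count from the low quadword of the source operand. A sketch that multiplies four unsigned doublewords by 16 via a left shift of 4; names are illustrative:

    from peachpy import *
    from peachpy.x86_64 import *

    p = Argument(ptr(uint32_t), "p")
    q = Argument(ptr(uint32_t), "q")

    with Function("shl4x32_by4", (p, q)) as asm_shl4x32_by4:
        r_p = GeneralPurposeRegister64()
        r_q = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r_p, p)
        LOAD.ARGUMENT(r_q, q)

        v = XMMRegister()
        MOVDQU(v, [r_p])
        PSLLD(v, 4)             # PSLLD(xmm, imm8): shift each doubleword left by 4
        MOVDQU([r_q], v)
        RETURN()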
peachpy.x86_64.mmxsse.
PSLLDQ
(*args, **kwargs)¶Shift Packed Double Quadword Left Logical
Supported forms:
PSLLDQ(xmm, imm8) [SSE2]
peachpy.x86_64.mmxsse.
PSLLQ
(*args, **kwargs)¶Shift Packed Quadword Data Left Logical
Supported forms:
PSLLQ(mm, imm8) [MMX]
PSLLQ(mm, mm/m64) [MMX]
PSLLQ(xmm, imm8) [SSE2]
PSLLQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSLLW
(*args, **kwargs)¶Shift Packed Word Data Left Logical
Supported forms:
PSLLW(mm, imm8) [MMX]
PSLLW(mm, mm/m64) [MMX]
PSLLW(xmm, imm8) [SSE2]
PSLLW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSRAD
(*args, **kwargs)¶Shift Packed Doubleword Data Right Arithmetic
Supported forms:
PSRAD(mm, imm8) [MMX]
PSRAD(mm, mm/m64) [MMX]
PSRAD(xmm, imm8) [SSE2]
PSRAD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSRAW
(*args, **kwargs)¶Shift Packed Word Data Right Arithmetic
Supported forms:
PSRAW(mm, imm8) [MMX]
PSRAW(mm, mm/m64) [MMX]
PSRAW(xmm, imm8) [SSE2]
PSRAW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSRLD
(*args, **kwargs)¶Shift Packed Doubleword Data Right Logical
Supported forms:
PSRLD(mm, imm8) [MMX]
PSRLD(mm, mm/m64) [MMX]
PSRLD(xmm, imm8) [SSE2]
PSRLD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSRLDQ
(*args, **kwargs)¶Shift Packed Double Quadword Right Logical
Supported forms:
PSRLDQ(xmm, imm8) [SSE2]
peachpy.x86_64.mmxsse.
PSRLQ
(*args, **kwargs)¶Shift Packed Quadword Data Right Logical
Supported forms:
PSRLQ(mm, imm8) [MMX]
PSRLQ(mm, mm/m64) [MMX]
PSRLQ(xmm, imm8) [SSE2]
PSRLQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSRLW
(*args, **kwargs)¶Shift Packed Word Data Right Logical
Supported forms:
PSRLW(mm, imm8) [MMX]
PSRLW(mm, mm/m64) [MMX]
PSRLW(xmm, imm8) [SSE2]
PSRLW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSUBB
(*args, **kwargs)¶Subtract Packed Byte Integers
Supported forms:
PSUBB(mm, mm/m64) [MMX]
PSUBB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSUBD
(*args, **kwargs)¶Subtract Packed Doubleword Integers
Supported forms:
PSUBD(mm, mm/m64) [MMX]
PSUBD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSUBQ
(*args, **kwargs)¶Subtract Packed Quadword Integers
Supported forms:
PSUBQ(mm, mm/m64) [SSE2]
PSUBQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSUBSB
(*args, **kwargs)¶Subtract Packed Signed Byte Integers with Signed Saturation
Supported forms:
PSUBSB(mm, mm/m64) [MMX]
PSUBSB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSUBSW
(*args, **kwargs)¶Subtract Packed Signed Word Integers with Signed Saturation
Supported forms:
PSUBSW(mm, mm/m64) [MMX]
PSUBSW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSUBUSB
(*args, **kwargs)¶Subtract Packed Unsigned Byte Integers with Unsigned Saturation
Supported forms:
PSUBUSB(mm, mm/m64) [MMX]
PSUBUSB(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSUBUSW
(*args, **kwargs)¶Subtract Packed Unsigned Word Integers with Unsigned Saturation
Supported forms:
PSUBUSW(mm, mm/m64) [MMX]
PSUBUSW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PSUBW
(*args, **kwargs)¶Subtract Packed Word Integers
Supported forms:
PSUBW(mm, mm/m64) [MMX]
PSUBW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PTEST
(*args, **kwargs)¶Packed Logical Compare
Supported forms:
PTEST(xmm, xmm/m128) [SSE4.1]
peachpy.x86_64.mmxsse.
PUNPCKHBW
(*args, **kwargs)¶Unpack and Interleave High-Order Bytes into Words
Supported forms:
PUNPCKHBW(mm, mm/m64) [MMX]
PUNPCKHBW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PUNPCKHDQ
(*args, **kwargs)¶Unpack and Interleave High-Order Doublewords into Quadwords
Supported forms:
PUNPCKHDQ(mm, mm/m64) [MMX]
PUNPCKHDQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PUNPCKHQDQ
(*args, **kwargs)¶Unpack and Interleave High-Order Quadwords into Double Quadwords
Supported forms:
PUNPCKHQDQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PUNPCKHWD
(*args, **kwargs)¶Unpack and Interleave High-Order Words into Doublewords
Supported forms:
PUNPCKHWD(mm, mm/m64) [MMX]
PUNPCKHWD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PUNPCKLBW
(*args, **kwargs)¶Unpack and Interleave Low-Order Bytes into Words
Supported forms:
PUNPCKLBW(mm, mm/m32) [MMX]
PUNPCKLBW(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PUNPCKLDQ
(*args, **kwargs)¶Unpack and Interleave Low-Order Doublewords into Quadwords
Supported forms:
PUNPCKLDQ(mm, mm/m32) [MMX]
PUNPCKLDQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PUNPCKLQDQ
(*args, **kwargs)¶Unpack and Interleave Low-Order Quadwords into Double Quadwords
Supported forms:
PUNPCKLQDQ(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PUNPCKLWD
(*args, **kwargs)¶Unpack and Interleave Low-Order Words into Doublewords
Supported forms:
PUNPCKLWD(mm, mm/m32) [MMX]
PUNPCKLWD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
PXOR
(*args, **kwargs)¶Packed Bitwise Logical Exclusive OR
Supported forms:
PXOR(mm, mm/m64) [MMX]
PXOR(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
RCPPS
(*args, **kwargs)¶Compute Approximate Reciprocals of Packed Single-Precision Floating-Point Values
Supported forms:
RCPPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
RCPSS
(*args, **kwargs)¶Compute Approximate Reciprocal of Scalar Single-Precision Floating-Point Values
Supported forms:
RCPSS(xmm, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
ROUNDPD
(*args, **kwargs)¶Round Packed Double Precision Floating-Point Values
Supported forms:
ROUNDPD(xmm, xmm/m128, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
ROUNDPS
(*args, **kwargs)¶Round Packed Single Precision Floating-Point Values
Supported forms:
ROUNDPS(xmm, xmm/m128, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
ROUNDSD
(*args, **kwargs)¶Round Scalar Double Precision Floating-Point Values
Supported forms:
ROUNDSD(xmm, xmm/m64, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
ROUNDSS
(*args, **kwargs)¶Round Scalar Single Precision Floating-Point Values
Supported forms:
ROUNDSS(xmm, xmm/m32, imm8) [SSE4.1]
peachpy.x86_64.mmxsse.
RSQRTPS
(*args, **kwargs)¶Compute Reciprocals of Square Roots of Packed Single-Precision Floating-Point Values
Supported forms:
RSQRTPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
RSQRTSS
(*args, **kwargs)¶Compute Reciprocal of Square Root of Scalar Single-Precision Floating-Point Value
Supported forms:
RSQRTSS(xmm, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
SHUFPD
(*args, **kwargs)¶Shuffle Packed Double-Precision Floating-Point Values
Supported forms:
SHUFPD(xmm, xmm/m128, imm8) [SSE2]
peachpy.x86_64.mmxsse.
SHUFPS
(*args, **kwargs)¶Shuffle Packed Single-Precision Floating-Point Values
Supported forms:
SHUFPS(xmm, xmm/m128, imm8) [SSE]
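For SHUFPS the two low destination lanes are selected from the first operand and the two high lanes from the second, two selector bits per lane, so with both operands equal the selector 0x1B reverses a 4-float vector. A sketch, with illustrative names:

    from peachpy import *
    from peachpy.x86_64 import *

    x = Argument(ptr(float_), "x")
    y = Argument(ptr(float_), "y")

    with Function("reverse4f", (x, y)) as asm_reverse4f:
        r_x = GeneralPurposeRegister64()
        r_y = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r_x, x)
        LOAD.ARGUMENT(r_y, y)

        v = XMMRegister()
        MOVUPS(v, [r_x])
        SHUFPS(v, v, 0x1B)      # lanes 3,2,1,0 of v -> lanes 0,1,2,3
        MOVUPS([r_y], v)
        RETURN()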
peachpy.x86_64.mmxsse.
SQRTPD
(*args, **kwargs)¶Compute Square Roots of Packed Double-Precision Floating-Point Values
Supported forms:
SQRTPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
SQRTPS
(*args, **kwargs)¶Compute Square Roots of Packed Single-Precision Floating-Point Values
Supported forms:
SQRTPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
SQRTSD
(*args, **kwargs)¶Compute Square Root of Scalar Double-Precision Floating-Point Value
Supported forms:
SQRTSD(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
SQRTSS
(*args, **kwargs)¶Compute Square Root of Scalar Single-Precision Floating-Point Value
Supported forms:
SQRTSS(xmm, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
STMXCSR
(*args, **kwargs)¶Store MXCSR Register State
Supported forms:
STMXCSR(m32) [SSE]
peachpy.x86_64.mmxsse.
SUBPD
(*args, **kwargs)¶Subtract Packed Double-Precision Floating-Point Values
Supported forms:
SUBPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
SUBPS
(*args, **kwargs)¶Subtract Packed Single-Precision Floating-Point Values
Supported forms:
SUBPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
SUBSD
(*args, **kwargs)¶Subtract Scalar Double-Precision Floating-Point Values
Supported forms:
SUBSD(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
SUBSS
(*args, **kwargs)¶Subtract Scalar Single-Precision Floating-Point Values
Supported forms:
SUBSS(xmm, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
UCOMISD
(*args, **kwargs)¶Unordered Compare Scalar Double-Precision Floating-Point Values and Set EFLAGS
Supported forms:
UCOMISD(xmm, xmm/m64) [SSE2]
peachpy.x86_64.mmxsse.
UCOMISS
(*args, **kwargs)¶Unordered Compare Scalar Single-Precision Floating-Point Values and Set EFLAGS
Supported forms:
UCOMISS(xmm, xmm/m32) [SSE]
peachpy.x86_64.mmxsse.
UNPCKHPD
(*args, **kwargs)¶Unpack and Interleave High Packed Double-Precision Floating-Point Values
Supported forms:
UNPCKHPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
UNPCKHPS
(*args, **kwargs)¶Unpack and Interleave High Packed Single-Precision Floating-Point Values
Supported forms:
UNPCKHPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
UNPCKLPD
(*args, **kwargs)¶Unpack and Interleave Low Packed Double-Precision Floating-Point Values
Supported forms:
UNPCKLPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
UNPCKLPS
(*args, **kwargs)¶Unpack and Interleave Low Packed Single-Precision Floating-Point Values
Supported forms:
UNPCKLPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.mmxsse.
XORPD
(*args, **kwargs)¶Bitwise Logical XOR for Double-Precision Floating-Point Values
Supported forms:
XORPD(xmm, xmm/m128) [SSE2]
peachpy.x86_64.mmxsse.
XORPS
(*args, **kwargs)¶Bitwise Logical XOR for Single-Precision Floating-Point Values
Supported forms:
XORPS(xmm, xmm/m128) [SSE]
peachpy.x86_64.avx.
VADDPD
(*args, **kwargs)¶Add Packed Double-Precision Floating-Point Values
Supported forms:
VADDPD(xmm, xmm, xmm/m128) [AVX]
VADDPD(ymm, ymm, ymm/m256) [AVX]
VADDPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VADDPD(zmm{k}{z}, zmm, zmm, {er}) [AVX512F]
VADDPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VADDPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VADDPD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VADDPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VADDPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VADDPS
(*args, **kwargs)¶Add Packed Single-Precision Floating-Point Values
Supported forms:
VADDPS(xmm, xmm, xmm/m128) [AVX]
VADDPS(ymm, ymm, ymm/m256) [AVX]
VADDPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VADDPS(zmm{k}{z}, zmm, zmm, {er}) [AVX512F]
VADDPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VADDPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VADDPS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VADDPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VADDPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
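The AVX forms are non-destructive three-operand encodings: the first operand receives the result and the other two are sources, so an input does not need to be copied before the add. A sketch adding two blocks of 8 floats with the 256-bit AVX form (an AVX-capable target is assumed); names are illustrative:

    from peachpy import *
    from peachpy.x86_64 import *

    x = Argument(ptr(float_), "x")
    y = Argument(ptr(float_), "y")
    z = Argument(ptr(float_), "z")

    with Function("add8f", (x, y, z)) as asm_add8f:
        r_x = GeneralPurposeRegister64()
        r_y = GeneralPurposeRegister64()
        r_z = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r_x, x)
        LOAD.ARGUMENT(r_y, y)
        LOAD.ARGUMENT(r_z, z)

        a = YMMRegister()
        VMOVUPS(a, [r_x])        # 8 floats
        VADDPS(a, a, [r_y])      # VADDPS(ymm, ymm, m256): a = a + y[0:8]
        VMOVUPS([r_z], a)
        RETURN()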
peachpy.x86_64.avx.
VADDSD
(*args, **kwargs)¶Add Scalar Double-Precision Floating-Point Values
Supported forms:
VADDSD(xmm, xmm, xmm/m64) [AVX]
VADDSD(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
VADDSD(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VADDSS
(*args, **kwargs)¶Add Scalar Single-Precision Floating-Point Values
Supported forms:
VADDSS(xmm, xmm, xmm/m32) [AVX]
VADDSS(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
VADDSS(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VADDSUBPD
(*args, **kwargs)¶Packed Double-FP Add/Subtract
Supported forms:
VADDSUBPD(xmm, xmm, xmm/m128) [AVX]
VADDSUBPD(ymm, ymm, ymm/m256) [AVX]
peachpy.x86_64.avx.
VADDSUBPS
(*args, **kwargs)¶Packed Single-FP Add/Subtract
Supported forms:
VADDSUBPS(xmm, xmm, xmm/m128) [AVX]
VADDSUBPS(ymm, ymm, ymm/m256) [AVX]
peachpy.x86_64.avx.
VALIGND
(*args, **kwargs)¶Align Doubleword Vectors
Supported forms:
VALIGND(zmm{k}{z}, zmm, m512/m32bcst, imm8) [AVX512F]
VALIGND(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VALIGND(xmm{k}{z}, xmm, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VALIGND(xmm{k}{z}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VALIGND(ymm{k}{z}, ymm, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VALIGND(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VALIGNQ
(*args, **kwargs)¶Align Quadword Vectors
Supported forms:
VALIGNQ(zmm{k}{z}, zmm, m512/m64bcst, imm8) [AVX512F]
VALIGNQ(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VALIGNQ(xmm{k}{z}, xmm, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VALIGNQ(xmm{k}{z}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VALIGNQ(ymm{k}{z}, ymm, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VALIGNQ(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VANDNPD
(*args, **kwargs)¶Bitwise Logical AND NOT of Packed Double-Precision Floating-Point Values
Supported forms:
VANDNPD(xmm, xmm, xmm/m128) [AVX]
VANDNPD(ymm, ymm, ymm/m256) [AVX]
VANDNPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512DQ]
VANDNPD(zmm{k}{z}, zmm, zmm) [AVX512DQ]
VANDNPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512DQ and AVX512VL]
VANDNPD(xmm{k}{z}, xmm, xmm) [AVX512DQ and AVX512VL]
VANDNPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512DQ and AVX512VL]
VANDNPD(ymm{k}{z}, ymm, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VANDNPS
(*args, **kwargs)¶Bitwise Logical AND NOT of Packed Single-Precision Floating-Point Values
Supported forms:
VANDNPS(xmm, xmm, xmm/m128) [AVX]
VANDNPS(ymm, ymm, ymm/m256) [AVX]
VANDNPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512DQ]
VANDNPS(zmm{k}{z}, zmm, zmm) [AVX512DQ]
VANDNPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512DQ and AVX512VL]
VANDNPS(xmm{k}{z}, xmm, xmm) [AVX512DQ and AVX512VL]
VANDNPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512DQ and AVX512VL]
VANDNPS(ymm{k}{z}, ymm, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VANDPD
(*args, **kwargs)¶Bitwise Logical AND of Packed Double-Precision Floating-Point Values
Supported forms:
VANDPD(xmm, xmm, xmm/m128) [AVX]
VANDPD(ymm, ymm, ymm/m256) [AVX]
VANDPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512DQ]
VANDPD(zmm{k}{z}, zmm, zmm) [AVX512DQ]
VANDPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512DQ and AVX512VL]
VANDPD(xmm{k}{z}, xmm, xmm) [AVX512DQ and AVX512VL]
VANDPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512DQ and AVX512VL]
VANDPD(ymm{k}{z}, ymm, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VANDPS
(*args, **kwargs)¶Bitwise Logical AND of Packed Single-Precision Floating-Point Values
Supported forms:
VANDPS(xmm, xmm, xmm/m128) [AVX]
VANDPS(ymm, ymm, ymm/m256) [AVX]
VANDPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512DQ]
VANDPS(zmm{k}{z}, zmm, zmm) [AVX512DQ]
VANDPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512DQ and AVX512VL]
VANDPS(xmm{k}{z}, xmm, xmm) [AVX512DQ and AVX512VL]
VANDPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512DQ and AVX512VL]
VANDPS(ymm{k}{z}, ymm, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VBLENDMPD
(*args, **kwargs)¶Blend Packed Double-Precision Floating-Point Vectors Using an OpMask Control
Supported forms:
VBLENDMPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VBLENDMPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VBLENDMPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VBLENDMPD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VBLENDMPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VBLENDMPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VBLENDMPS
(*args, **kwargs)¶Blend Packed Single-Precision Floating-Point Vectors Using an OpMask Control
Supported forms:
VBLENDMPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VBLENDMPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VBLENDMPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VBLENDMPS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VBLENDMPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VBLENDMPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VBLENDPD
(*args, **kwargs)¶Blend Packed Double Precision Floating-Point Values
Supported forms:
VBLENDPD(xmm, xmm, xmm/m128, imm8) [AVX]
VBLENDPD(ymm, ymm, ymm/m256, imm8) [AVX]
peachpy.x86_64.avx.
VBLENDPS
(*args, **kwargs)¶Blend Packed Single Precision Floating-Point Values
Supported forms:
VBLENDPS(xmm, xmm, xmm/m128, imm8) [AVX]
VBLENDPS(ymm, ymm, ymm/m256, imm8) [AVX]
peachpy.x86_64.avx.
VBLENDVPD
(*args, **kwargs)¶Variable Blend Packed Double Precision Floating-Point Values
Supported forms:
VBLENDVPD(xmm, xmm, xmm/m128, xmm) [AVX]
VBLENDVPD(ymm, ymm, ymm/m256, ymm) [AVX]
peachpy.x86_64.avx.
VBLENDVPS
(*args, **kwargs)¶Variable Blend Packed Single Precision Floating-Point Values
Supported forms:
VBLENDVPS(xmm, xmm, xmm/m128, xmm) [AVX]
VBLENDVPS(ymm, ymm, ymm/m256, ymm) [AVX]
peachpy.x86_64.avx.
VBROADCASTF128
(*args, **kwargs)¶Broadcast 128 Bits of Floating-Point Data
Supported forms:
VBROADCASTF128(ymm, m128) [AVX]
peachpy.x86_64.avx.
VBROADCASTF32X2
(*args, **kwargs)¶Broadcast Two Single-Precision Floating-Point Elements
Supported forms:
VBROADCASTF32X2(zmm{k}{z}, xmm/m64) [AVX512DQ]
VBROADCASTF32X2(ymm{k}{z}, xmm/m64) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VBROADCASTF32X4
(*args, **kwargs)¶Broadcast Four Single-Precision Floating-Point Elements
Supported forms:
VBROADCASTF32X4(zmm{k}{z}, m128) [AVX512F]
VBROADCASTF32X4(ymm{k}{z}, m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VBROADCASTF32X8
(*args, **kwargs)¶Broadcast Eight Single-Precision Floating-Point Elements
Supported forms:
VBROADCASTF32X8(zmm{k}{z}, m256) [AVX512DQ]
peachpy.x86_64.avx.
VBROADCASTF64X2
(*args, **kwargs)¶Broadcast Two Double-Precision Floating-Point Elements
Supported forms:
VBROADCASTF64X2(zmm{k}{z}, m128) [AVX512DQ]
VBROADCASTF64X2(ymm{k}{z}, m128) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VBROADCASTF64X4
(*args, **kwargs)¶Broadcast Four Double-Precision Floating-Point Elements
Supported forms:
VBROADCASTF64X4(zmm{k}{z}, m256) [AVX512F]
peachpy.x86_64.avx.
VBROADCASTI128
(*args, **kwargs)¶Broadcast 128 Bits of Integer Data
Supported forms:
VBROADCASTI128(ymm, m128) [AVX2]
peachpy.x86_64.avx.
VBROADCASTI32X2
(*args, **kwargs)¶Broadcast Two Doubleword Elements
Supported forms:
VBROADCASTI32X2(zmm{k}{z}, xmm/m64) [AVX512DQ]
VBROADCASTI32X2(xmm{k}{z}, xmm/m64) [AVX512DQ and AVX512VL]
VBROADCASTI32X2(ymm{k}{z}, xmm/m64) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VBROADCASTI32X4
(*args, **kwargs)¶Broadcast Four Doubleword Elements
Supported forms:
VBROADCASTI32X4(zmm{k}{z}, m128) [AVX512F]
VBROADCASTI32X4(ymm{k}{z}, m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VBROADCASTI32X8
(*args, **kwargs)¶Broadcast Eight Doubleword Elements
Supported forms:
VBROADCASTI32X8(zmm{k}{z}, m256) [AVX512DQ]
peachpy.x86_64.avx.
VBROADCASTI64X2
(*args, **kwargs)¶Broadcast Two Quadword Elements
Supported forms:
VBROADCASTI64X2(zmm{k}{z}, m128) [AVX512DQ]
VBROADCASTI64X2(ymm{k}{z}, m128) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VBROADCASTI64X4
(*args, **kwargs)¶Broadcast Four Quadword Elements
Supported forms:
VBROADCASTI64X4(zmm{k}{z}, m256) [AVX512F]
peachpy.x86_64.avx.
VBROADCASTSD
(*args, **kwargs)¶Broadcast Double-Precision Floating-Point Element
Supported forms:
VBROADCASTSD(ymm, m64) [AVX]
VBROADCASTSD(ymm, xmm) [AVX2]
VBROADCASTSD(zmm{k}{z}, xmm/m64) [AVX512F]
VBROADCASTSD(ymm{k}{z}, xmm/m64) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VBROADCASTSS
(*args, **kwargs)¶Broadcast Single-Precision Floating-Point Element
Supported forms:
VBROADCASTSS(xmm, m32) [AVX]
VBROADCASTSS(ymm, m32) [AVX]
VBROADCASTSS(xmm, xmm) [AVX2]
VBROADCASTSS(ymm, xmm) [AVX2]
VBROADCASTSS(zmm{k}{z}, xmm/m32) [AVX512F]
VBROADCASTSS(ymm{k}{z}, xmm/m32) [AVX512F and AVX512VL]
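Broadcasting a scalar from memory is the common way to set up a per-call constant factor before a packed loop. A sketch that scales 8 floats by a scalar read through a pointer, combining the VBROADCASTSS(ymm, m32) form with VMULPS and VMOVUPS from this module (AVX-capable target assumed); names are illustrative:

    from peachpy import *
    from peachpy.x86_64 import *

    s = Argument(ptr(float_), "s")     # pointer to the scale factor
    x = Argument(ptr(float_), "x")
    y = Argument(ptr(float_), "y")

    with Function("scale8f", (s, x, y)) as asm_scale8f:
        r_s = GeneralPurposeRegister64()
        r_x = GeneralPurposeRegister64()
        r_y = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r_s, s)
        LOAD.ARGUMENT(r_x, x)
        LOAD.ARGUMENT(r_y, y)

        scale = YMMRegister()
        VBROADCASTSS(scale, [r_s])   # replicate *s into all 8 lanes
        v = YMMRegister()
        VMOVUPS(v, [r_x])
        VMULPS(v, v, scale)
        VMOVUPS([r_y], v)
        RETURN()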
peachpy.x86_64.avx.
VCMPPD
(*args, **kwargs)¶Compare Packed Double-Precision Floating-Point Values
Supported forms:
VCMPPD(xmm, xmm, xmm/m128, imm8) [AVX]
VCMPPD(ymm, ymm, ymm/m256, imm8) [AVX]
VCMPPD(k{k}, zmm, m512/m64bcst, imm8) [AVX512F]
VCMPPD(k{k}, zmm, zmm, {sae}, imm8) [AVX512F]
VCMPPD(k{k}, zmm, zmm, imm8) [AVX512F]
VCMPPD(k{k}, xmm, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VCMPPD(k{k}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VCMPPD(k{k}, ymm, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VCMPPD(k{k}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCMPPS
(*args, **kwargs)¶Compare Packed Single-Precision Floating-Point Values
Supported forms:
VCMPPS(xmm, xmm, xmm/m128, imm8) [AVX]
VCMPPS(ymm, ymm, ymm/m256, imm8) [AVX]
VCMPPS(k{k}, zmm, m512/m32bcst, imm8) [AVX512F]
VCMPPS(k{k}, zmm, zmm, {sae}, imm8) [AVX512F]
VCMPPS(k{k}, zmm, zmm, imm8) [AVX512F]
VCMPPS(k{k}, xmm, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VCMPPS(k{k}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VCMPPS(k{k}, ymm, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VCMPPS(k{k}, ymm, ymm, imm8) [AVX512F and AVX512VL]
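The imm8 operand selects the comparison predicate (0 = equal, 1 = less-than, 2 = less-or-equal, and so on, following the Intel predicate encoding); the AVX forms write all-ones or all-zeros per lane, while the AVX-512 forms write a mask register. A sketch using the 128-bit AVX form to report, one bit per lane, which of four floats in a are below the corresponding float in b; VMOVMSKPS is assumed from the same module and the names are illustrative:

    from peachpy import *
    from peachpy.x86_64 import *

    a = Argument(ptr(float_), "a")
    b = Argument(ptr(float_), "b")

    with Function("less_mask4f", (a, b), uint32_t) as asm_less_mask4f:
        r_a = GeneralPurposeRegister64()
        r_b = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r_a, a)
        LOAD.ARGUMENT(r_b, b)

        va = XMMRegister()
        VMOVUPS(va, [r_a])
        cmp = XMMRegister()
        VCMPPS(cmp, va, [r_b], 1)    # predicate 1: ordered less-than
        mask = GeneralPurposeRegister32()
        VMOVMSKPS(mask, cmp)         # pack per-lane sign bits into bits 0..3
        RETURN(mask)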
peachpy.x86_64.avx.
VCMPSD
(*args, **kwargs)¶Compare Scalar Double-Precision Floating-Point Values
Supported forms:
VCMPSD(xmm, xmm, xmm/m64, imm8) [AVX]
VCMPSD(k{k}, xmm, xmm/m64, imm8) [AVX512F]
VCMPSD(k{k}, xmm, xmm, {sae}, imm8) [AVX512F]
peachpy.x86_64.avx.
VCMPSS
(*args, **kwargs)¶Compare Scalar Single-Precision Floating-Point Values
Supported forms:
VCMPSS(xmm, xmm, xmm/m32, imm8) [AVX]
VCMPSS(k{k}, xmm, xmm/m32, imm8) [AVX512F]
VCMPSS(k{k}, xmm, xmm, {sae}, imm8) [AVX512F]
peachpy.x86_64.avx.
VCOMISD
(*args, **kwargs)¶Compare Scalar Ordered Double-Precision Floating-Point Values and Set EFLAGS
Supported forms:
VCOMISD(xmm, xmm/m64) [AVX]
VCOMISD(xmm, xmm/m64) [AVX512F]
VCOMISD(xmm, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VCOMISS
(*args, **kwargs)¶Compare Scalar Ordered Single-Precision Floating-Point Values and Set EFLAGS
Supported forms:
VCOMISS(xmm, xmm/m32) [AVX]
VCOMISS(xmm, xmm/m32) [AVX512F]
VCOMISS(xmm, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VCOMPRESSPD
(*args, **kwargs)¶Store Sparse Packed Double-Precision Floating-Point Values into Dense Memory/Register
Supported forms:
VCOMPRESSPD(zmm{k}{z}, zmm) [AVX512F]
VCOMPRESSPD(m512{k}{z}, zmm) [AVX512F]
VCOMPRESSPD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCOMPRESSPD(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VCOMPRESSPD(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
VCOMPRESSPD(m256{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCOMPRESSPS
(*args, **kwargs)¶Store Sparse Packed Single-Precision Floating-Point Values into Dense Memory/Register
Supported forms:
VCOMPRESSPS(zmm{k}{z}, zmm) [AVX512F]
VCOMPRESSPS(m512{k}{z}, zmm) [AVX512F]
VCOMPRESSPS(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCOMPRESSPS(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VCOMPRESSPS(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
VCOMPRESSPS(m256{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTDQ2PD
(*args, **kwargs)¶Convert Packed Dword Integers to Packed Double-Precision FP Values
Supported forms:
VCVTDQ2PD(xmm, xmm/m64) [AVX]
VCVTDQ2PD(ymm, xmm/m128) [AVX]
VCVTDQ2PD(zmm{k}{z}, m256/m32bcst) [AVX512F]
VCVTDQ2PD(zmm{k}{z}, ymm) [AVX512F]
VCVTDQ2PD(xmm{k}{z}, m64/m32bcst) [AVX512F and AVX512VL]
VCVTDQ2PD(ymm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VCVTDQ2PD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTDQ2PD(ymm{k}{z}, xmm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTDQ2PS
(*args, **kwargs)¶Convert Packed Dword Integers to Packed Single-Precision FP Values
Supported forms:
VCVTDQ2PS(xmm, xmm/m128) [AVX]
VCVTDQ2PS(ymm, ymm/m256) [AVX]
VCVTDQ2PS(zmm{k}{z}, m512/m32bcst) [AVX512F]
VCVTDQ2PS(zmm{k}{z}, zmm, {er}) [AVX512F]
VCVTDQ2PS(zmm{k}{z}, zmm) [AVX512F]
VCVTDQ2PS(xmm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VCVTDQ2PS(ymm{k}{z}, m256/m32bcst) [AVX512F and AVX512VL]
VCVTDQ2PS(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTDQ2PS(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTPD2DQ
(*args, **kwargs)¶Convert Packed Double-Precision FP Values to Packed Dword Integers
Supported forms:
VCVTPD2DQ(xmm, xmm/m128) [AVX]
VCVTPD2DQ(xmm, ymm/m256) [AVX]
VCVTPD2DQ(ymm{k}{z}, m512/m64bcst) [AVX512F]
VCVTPD2DQ(ymm{k}{z}, zmm, {er}) [AVX512F]
VCVTPD2DQ(ymm{k}{z}, zmm) [AVX512F]
VCVTPD2DQ(xmm{k}{z}, m128/m64bcst) [AVX512F and AVX512VL]
VCVTPD2DQ(xmm{k}{z}, m256/m64bcst) [AVX512F and AVX512VL]
VCVTPD2DQ(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTPD2DQ(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTPD2PS
(*args, **kwargs)¶Convert Packed Double-Precision FP Values to Packed Single-Precision FP Values
Supported forms:
VCVTPD2PS(xmm, xmm/m128) [AVX]
VCVTPD2PS(xmm, ymm/m256) [AVX]
VCVTPD2PS(ymm{k}{z}, m512/m64bcst) [AVX512F]
VCVTPD2PS(ymm{k}{z}, zmm, {er}) [AVX512F]
VCVTPD2PS(ymm{k}{z}, zmm) [AVX512F]
VCVTPD2PS(xmm{k}{z}, m128/m64bcst) [AVX512F and AVX512VL]
VCVTPD2PS(xmm{k}{z}, m256/m64bcst) [AVX512F and AVX512VL]
VCVTPD2PS(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTPD2PS(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTPD2QQ
(*args, **kwargs)¶Convert Packed Double-Precision Floating-Point Values to Packed Quadword Integers
Supported forms:
VCVTPD2QQ(zmm{k}{z}, m512/m64bcst) [AVX512DQ]
VCVTPD2QQ(zmm{k}{z}, zmm, {er}) [AVX512DQ]
VCVTPD2QQ(zmm{k}{z}, zmm) [AVX512DQ]
VCVTPD2QQ(xmm{k}{z}, m128/m64bcst) [AVX512DQ and AVX512VL]
VCVTPD2QQ(ymm{k}{z}, m256/m64bcst) [AVX512DQ and AVX512VL]
VCVTPD2QQ(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTPD2QQ(ymm{k}{z}, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTPD2UDQ
(*args, **kwargs)¶Convert Packed Double-Precision Floating-Point Values to Packed Unsigned Doubleword Integers
Supported forms:
VCVTPD2UDQ(ymm{k}{z}, m512/m64bcst) [AVX512F]
VCVTPD2UDQ(ymm{k}{z}, zmm, {er}) [AVX512F]
VCVTPD2UDQ(ymm{k}{z}, zmm) [AVX512F]
VCVTPD2UDQ(xmm{k}{z}, m128/m64bcst) [AVX512F and AVX512VL]
VCVTPD2UDQ(xmm{k}{z}, m256/m64bcst) [AVX512F and AVX512VL]
VCVTPD2UDQ(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTPD2UDQ(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTPD2UQQ
(*args, **kwargs)¶Convert Packed Double-Precision Floating-Point Values to Packed Unsigned Quadword Integers
Supported forms:
VCVTPD2UQQ(zmm{k}{z}, m512/m64bcst) [AVX512DQ]
VCVTPD2UQQ(zmm{k}{z}, zmm, {er}) [AVX512DQ]
VCVTPD2UQQ(zmm{k}{z}, zmm) [AVX512DQ]
VCVTPD2UQQ(xmm{k}{z}, m128/m64bcst) [AVX512DQ and AVX512VL]
VCVTPD2UQQ(ymm{k}{z}, m256/m64bcst) [AVX512DQ and AVX512VL]
VCVTPD2UQQ(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTPD2UQQ(ymm{k}{z}, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTPH2PS
(*args, **kwargs)¶Convert Half-Precision FP Values to Single-Precision FP Values
Supported forms:
VCVTPH2PS(xmm, xmm/m64) [F16C]
VCVTPH2PS(ymm, xmm/m128) [F16C]
VCVTPH2PS(zmm{k}{z}, ymm/m256) [AVX512F]
VCVTPH2PS(zmm{k}{z}, ymm, {sae}) [AVX512F]
VCVTPH2PS(xmm{k}{z}, xmm/m64) [AVX512F and AVX512VL]
VCVTPH2PS(ymm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTPS2DQ
(*args, **kwargs)¶Convert Packed Single-Precision FP Values to Packed Dword Integers
Supported forms:
VCVTPS2DQ(xmm, xmm/m128) [AVX]
VCVTPS2DQ(ymm, ymm/m256) [AVX]
VCVTPS2DQ(zmm{k}{z}, m512/m32bcst) [AVX512F]
VCVTPS2DQ(zmm{k}{z}, zmm, {er}) [AVX512F]
VCVTPS2DQ(zmm{k}{z}, zmm) [AVX512F]
VCVTPS2DQ(xmm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VCVTPS2DQ(ymm{k}{z}, m256/m32bcst) [AVX512F and AVX512VL]
VCVTPS2DQ(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTPS2DQ(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTPS2PD
(*args, **kwargs)¶Convert Packed Single-Precision FP Values to Packed Double-Precision FP Values
Supported forms:
VCVTPS2PD(xmm, xmm/m64) [AVX]
VCVTPS2PD(ymm, xmm/m128) [AVX]
VCVTPS2PD(zmm{k}{z}, m256/m32bcst) [AVX512F]
VCVTPS2PD(zmm{k}{z}, ymm, {sae}) [AVX512F]
VCVTPS2PD(zmm{k}{z}, ymm) [AVX512F]
VCVTPS2PD(ymm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VCVTPS2PD(ymm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTPS2PD(xmm{k}{z}, m64/m32bcst) [AVX512F and AVX512VL]
VCVTPS2PD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTPS2PH
(*args, **kwargs)¶Convert Single-Precision FP Values to Half-Precision FP Values
Supported forms:
VCVTPS2PH(xmm/m64, xmm, imm8) [F16C]
VCVTPS2PH(xmm/m128, ymm, imm8) [F16C]
VCVTPS2PH(m256{k}{z}, zmm, imm8) [AVX512F]
VCVTPS2PH(ymm{k}{z}, zmm, {sae}, imm8) [AVX512F]
VCVTPS2PH(ymm{k}{z}, zmm, imm8) [AVX512F]
VCVTPS2PH(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VCVTPS2PH(m64{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VCVTPS2PH(xmm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VCVTPS2PH(m128{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
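The imm8 operand controls rounding: when bit 2 is set the MXCSR rounding mode is used, otherwise bits 1:0 pick the mode, with 0 meaning round to nearest even. A sketch converting four single-precision floats to four half-precision values stored as 64 bits of memory, using the F16C VCVTPS2PH(m64, xmm, imm8) form; names are illustrative:

    from peachpy import *
    from peachpy.x86_64 import *

    src = Argument(ptr(float_), "src")
    dst = Argument(ptr(uint16_t), "dst")

    with Function("f32x4_to_f16x4", (src, dst)) as asm_f32x4_to_f16x4:
        r_src = GeneralPurposeRegister64()
        r_dst = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r_src, src)
        LOAD.ARGUMENT(r_dst, dst)

        v = XMMRegister()
        VMOVUPS(v, [r_src])          # 4 single-precision floats
        VCVTPS2PH([r_dst], v, 0)     # round to nearest even, store 4 half floats (64 bits)
        RETURN()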
peachpy.x86_64.avx.
VCVTPS2QQ
(*args, **kwargs)¶Convert Packed Single Precision Floating-Point Values to Packed Signed Quadword Integer Values
Supported forms:
VCVTPS2QQ(zmm{k}{z}, m256/m32bcst) [AVX512DQ]
VCVTPS2QQ(zmm{k}{z}, ymm, {er}) [AVX512DQ]
VCVTPS2QQ(zmm{k}{z}, ymm) [AVX512DQ]
VCVTPS2QQ(xmm{k}{z}, m64/m32bcst) [AVX512DQ and AVX512VL]
VCVTPS2QQ(ymm{k}{z}, m128/m32bcst) [AVX512DQ and AVX512VL]
VCVTPS2QQ(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTPS2QQ(ymm{k}{z}, xmm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTPS2UDQ
(*args, **kwargs)¶Convert Packed Single-Precision Floating-Point Values to Packed Unsigned Doubleword Integer Values
Supported forms:
VCVTPS2UDQ(zmm{k}{z}, m512/m32bcst) [AVX512F]
VCVTPS2UDQ(zmm{k}{z}, zmm, {er}) [AVX512F]
VCVTPS2UDQ(zmm{k}{z}, zmm) [AVX512F]
VCVTPS2UDQ(xmm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VCVTPS2UDQ(ymm{k}{z}, m256/m32bcst) [AVX512F and AVX512VL]
VCVTPS2UDQ(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTPS2UDQ(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTPS2UQQ
(*args, **kwargs)¶Convert Packed Single Precision Floating-Point Values to Packed Unsigned Quadword Integer Values
Supported forms:
VCVTPS2UQQ(zmm{k}{z}, m256/m32bcst) [AVX512DQ]
VCVTPS2UQQ(zmm{k}{z}, ymm, {er}) [AVX512DQ]
VCVTPS2UQQ(zmm{k}{z}, ymm) [AVX512DQ]
VCVTPS2UQQ(xmm{k}{z}, m64/m32bcst) [AVX512DQ and AVX512VL]
VCVTPS2UQQ(ymm{k}{z}, m128/m32bcst) [AVX512DQ and AVX512VL]
VCVTPS2UQQ(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTPS2UQQ(ymm{k}{z}, xmm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTQQ2PD
(*args, **kwargs)¶Convert Packed Quadword Integers to Packed Double-Precision Floating-Point Values
Supported forms:
VCVTQQ2PD(zmm{k}{z}, m512/m64bcst) [AVX512DQ]
VCVTQQ2PD(zmm{k}{z}, zmm, {er}) [AVX512DQ]
VCVTQQ2PD(zmm{k}{z}, zmm) [AVX512DQ]
VCVTQQ2PD(xmm{k}{z}, m128/m64bcst) [AVX512DQ and AVX512VL]
VCVTQQ2PD(ymm{k}{z}, m256/m64bcst) [AVX512DQ and AVX512VL]
VCVTQQ2PD(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTQQ2PD(ymm{k}{z}, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTQQ2PS
(*args, **kwargs)¶Convert Packed Quadword Integers to Packed Single-Precision Floating-Point Values
Supported forms:
VCVTQQ2PS(ymm{k}{z}, m512/m64bcst) [AVX512DQ]
VCVTQQ2PS(ymm{k}{z}, zmm, {er}) [AVX512DQ]
VCVTQQ2PS(ymm{k}{z}, zmm) [AVX512DQ]
VCVTQQ2PS(xmm{k}{z}, m128/m64bcst) [AVX512DQ and AVX512VL]
VCVTQQ2PS(xmm{k}{z}, m256/m64bcst) [AVX512DQ and AVX512VL]
VCVTQQ2PS(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTQQ2PS(xmm{k}{z}, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTSD2SI
(*args, **kwargs)¶Convert Scalar Double-Precision FP Value to Integer
Supported forms:
VCVTSD2SI(r32, xmm/m64) [AVX]
VCVTSD2SI(r64, xmm/m64) [AVX]
VCVTSD2SI(r32, xmm/m64) [AVX512F]
VCVTSD2SI(r64, xmm/m64) [AVX512F]
VCVTSD2SI(r32, xmm, {er}) [AVX512F]
VCVTSD2SI(r64, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VCVTSD2SS
(*args, **kwargs)¶Convert Scalar Double-Precision FP Value to Scalar Single-Precision FP Value
Supported forms:
VCVTSD2SS(xmm, xmm, xmm/m64) [AVX]
VCVTSD2SS(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
VCVTSD2SS(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VCVTSD2USI
(*args, **kwargs)¶Convert Scalar Double-Precision Floating-Point Value to Unsigned Doubleword Integer
Supported forms:
VCVTSD2USI(r32, xmm/m64) [AVX512F]
VCVTSD2USI(r64, xmm/m64) [AVX512F]
VCVTSD2USI(r32, xmm, {er}) [AVX512F]
VCVTSD2USI(r64, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VCVTSI2SD
(*args, **kwargs)¶Convert Dword Integer to Scalar Double-Precision FP Value
Supported forms:
VCVTSI2SD(xmm, xmm, r32/m32) [AVX]
VCVTSI2SD(xmm, xmm, r64/m64) [AVX]
VCVTSI2SD(xmm, xmm, r32/m32) [AVX512F]
VCVTSI2SD(xmm, xmm, r64/m64) [AVX512F]
VCVTSI2SD(xmm, xmm, r64, {er}) [AVX512F]
peachpy.x86_64.avx.
VCVTSI2SS
(*args, **kwargs)¶Convert Dword Integer to Scalar Single-Precision FP Value
Supported forms:
VCVTSI2SS(xmm, xmm, r32/m32) [AVX]
VCVTSI2SS(xmm, xmm, r64/m64) [AVX]
VCVTSI2SS(xmm, xmm, r32/m32) [AVX512F]
VCVTSI2SS(xmm, xmm, r64/m64) [AVX512F]
VCVTSI2SS(xmm, xmm, r32, {er}) [AVX512F]
VCVTSI2SS(xmm, xmm, r64, {er}) [AVX512F]
peachpy.x86_64.avx.
VCVTSS2SD
(*args, **kwargs)¶Convert Scalar Single-Precision FP Value to Scalar Double-Precision FP Value
Supported forms:
VCVTSS2SD(xmm, xmm, xmm/m32) [AVX]
VCVTSS2SD(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
VCVTSS2SD(xmm{k}{z}, xmm, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VCVTSS2SI
(*args, **kwargs)¶Convert Scalar Single-Precision FP Value to Dword Integer
Supported forms:
VCVTSS2SI(r32, xmm/m32) [AVX]
VCVTSS2SI(r64, xmm/m32) [AVX]
VCVTSS2SI(r32, xmm/m32) [AVX512F]
VCVTSS2SI(r64, xmm/m32) [AVX512F]
VCVTSS2SI(r32, xmm, {er}) [AVX512F]
VCVTSS2SI(r64, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VCVTSS2USI
(*args, **kwargs)¶Convert Scalar Single-Precision Floating-Point Value to Unsigned Doubleword Integer
Supported forms:
VCVTSS2USI(r32, xmm/m32) [AVX512F]
VCVTSS2USI(r64, xmm/m32) [AVX512F]
VCVTSS2USI(r32, xmm, {er}) [AVX512F]
VCVTSS2USI(r64, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VCVTTPD2DQ
(*args, **kwargs)¶Convert with Truncation Packed Double-Precision FP Values to Packed Dword Integers
Supported forms:
VCVTTPD2DQ(xmm, xmm/m128) [AVX]
VCVTTPD2DQ(xmm, ymm/m256) [AVX]
VCVTTPD2DQ(ymm{k}{z}, m512/m64bcst) [AVX512F]
VCVTTPD2DQ(ymm{k}{z}, zmm, {sae}) [AVX512F]
VCVTTPD2DQ(ymm{k}{z}, zmm) [AVX512F]
VCVTTPD2DQ(xmm{k}{z}, m128/m64bcst) [AVX512F and AVX512VL]
VCVTTPD2DQ(xmm{k}{z}, m256/m64bcst) [AVX512F and AVX512VL]
VCVTTPD2DQ(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTTPD2DQ(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTTPD2QQ
(*args, **kwargs)¶Convert with Truncation Packed Double-Precision Floating-Point Values to Packed Quadword Integers
Supported forms:
VCVTTPD2QQ(zmm{k}{z}, m512/m64bcst) [AVX512DQ]
VCVTTPD2QQ(zmm{k}{z}, zmm, {sae}) [AVX512DQ]
VCVTTPD2QQ(zmm{k}{z}, zmm) [AVX512DQ]
VCVTTPD2QQ(xmm{k}{z}, m128/m64bcst) [AVX512DQ and AVX512VL]
VCVTTPD2QQ(ymm{k}{z}, m256/m64bcst) [AVX512DQ and AVX512VL]
VCVTTPD2QQ(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTTPD2QQ(ymm{k}{z}, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTTPD2UDQ
(*args, **kwargs)¶Convert with Truncation Packed Double-Precision Floating-Point Values to Packed Unsigned Doubleword Integers
Supported forms:
VCVTTPD2UDQ(ymm{k}{z}, m512/m64bcst) [AVX512F]
VCVTTPD2UDQ(ymm{k}{z}, zmm, {sae}) [AVX512F]
VCVTTPD2UDQ(ymm{k}{z}, zmm) [AVX512F]
VCVTTPD2UDQ(xmm{k}{z}, m128/m64bcst) [AVX512F and AVX512VL]
VCVTTPD2UDQ(xmm{k}{z}, m256/m64bcst) [AVX512F and AVX512VL]
VCVTTPD2UDQ(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTTPD2UDQ(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTTPD2UQQ
(*args, **kwargs)¶Convert with Truncation Packed Double-Precision Floating-Point Values to Packed Unsigned Quadword Integers
Supported forms:
VCVTTPD2UQQ(zmm{k}{z}, m512/m64bcst) [AVX512DQ]
VCVTTPD2UQQ(zmm{k}{z}, zmm, {sae}) [AVX512DQ]
VCVTTPD2UQQ(zmm{k}{z}, zmm) [AVX512DQ]
VCVTTPD2UQQ(xmm{k}{z}, m128/m64bcst) [AVX512DQ and AVX512VL]
VCVTTPD2UQQ(ymm{k}{z}, m256/m64bcst) [AVX512DQ and AVX512VL]
VCVTTPD2UQQ(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTTPD2UQQ(ymm{k}{z}, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTTPS2DQ
(*args, **kwargs)¶Convert with Truncation Packed Single-Precision FP Values to Packed Dword Integers
Supported forms:
VCVTTPS2DQ(xmm, xmm/m128) [AVX]
VCVTTPS2DQ(ymm, ymm/m256) [AVX]
VCVTTPS2DQ(zmm{k}{z}, m512/m32bcst) [AVX512F]
VCVTTPS2DQ(zmm{k}{z}, zmm, {sae}) [AVX512F]
VCVTTPS2DQ(zmm{k}{z}, zmm) [AVX512F]
VCVTTPS2DQ(xmm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VCVTTPS2DQ(ymm{k}{z}, m256/m32bcst) [AVX512F and AVX512VL]
VCVTTPS2DQ(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTTPS2DQ(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTTPS2QQ
(*args, **kwargs)¶Convert with Truncation Packed Single Precision Floating-Point Values to Packed Signed Quadword Integer Values
Supported forms:
VCVTTPS2QQ(zmm{k}{z}, m256/m32bcst) [AVX512DQ]
VCVTTPS2QQ(zmm{k}{z}, ymm, {sae}) [AVX512DQ]
VCVTTPS2QQ(zmm{k}{z}, ymm) [AVX512DQ]
VCVTTPS2QQ(xmm{k}{z}, m64/m32bcst) [AVX512DQ and AVX512VL]
VCVTTPS2QQ(ymm{k}{z}, m128/m32bcst) [AVX512DQ and AVX512VL]
VCVTTPS2QQ(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTTPS2QQ(ymm{k}{z}, xmm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTTPS2UDQ
(*args, **kwargs)¶Convert with Truncation Packed Single-Precision Floating-Point Values to Packed Unsigned Doubleword Integer Values
Supported forms:
VCVTTPS2UDQ(zmm{k}{z}, m512/m32bcst) [AVX512F]
VCVTTPS2UDQ(zmm{k}{z}, zmm, {sae}) [AVX512F]
VCVTTPS2UDQ(zmm{k}{z}, zmm) [AVX512F]
VCVTTPS2UDQ(xmm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VCVTTPS2UDQ(ymm{k}{z}, m256/m32bcst) [AVX512F and AVX512VL]
VCVTTPS2UDQ(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTTPS2UDQ(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTTPS2UQQ
(*args, **kwargs)¶Convert with Truncation Packed Single Precision Floating-Point Values to Packed Unsigned Quadword Integer Values
Supported forms:
VCVTTPS2UQQ(zmm{k}{z}, m256/m32bcst) [AVX512DQ]
VCVTTPS2UQQ(zmm{k}{z}, ymm, {sae}) [AVX512DQ]
VCVTTPS2UQQ(zmm{k}{z}, ymm) [AVX512DQ]
VCVTTPS2UQQ(xmm{k}{z}, m64/m32bcst) [AVX512DQ and AVX512VL]
VCVTTPS2UQQ(ymm{k}{z}, m128/m32bcst) [AVX512DQ and AVX512VL]
VCVTTPS2UQQ(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTTPS2UQQ(ymm{k}{z}, xmm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTTSD2SI
(*args, **kwargs)¶Convert with Truncation Scalar Double-Precision FP Value to Signed Integer
Supported forms:
VCVTTSD2SI(r32, xmm/m64) [AVX]
VCVTTSD2SI(r64, xmm/m64) [AVX]
VCVTTSD2SI(r32, xmm/m64) [AVX512F]
VCVTTSD2SI(r64, xmm/m64) [AVX512F]
VCVTTSD2SI(r32, xmm, {sae}) [AVX512F]
VCVTTSD2SI(r64, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VCVTTSD2USI
(*args, **kwargs)¶Convert with Truncation Scalar Double-Precision Floating-Point Value to Unsigned Integer
Supported forms:
VCVTTSD2USI(r32, xmm/m64) [AVX512F]
VCVTTSD2USI(r64, xmm/m64) [AVX512F]
VCVTTSD2USI(r32, xmm, {sae}) [AVX512F]
VCVTTSD2USI(r64, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VCVTTSS2SI
(*args, **kwargs)¶Convert with Truncation Scalar Single-Precision FP Value to Dword Integer
Supported forms:
VCVTTSS2SI(r32, xmm/m32) [AVX]
VCVTTSS2SI(r64, xmm/m32) [AVX]
VCVTTSS2SI(r32, xmm/m32) [AVX512F]
VCVTTSS2SI(r64, xmm/m32) [AVX512F]
VCVTTSS2SI(r32, xmm, {sae}) [AVX512F]
VCVTTSS2SI(r64, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VCVTTSS2USI
(*args, **kwargs)¶Convert with Truncation Scalar Single-Precision Floating-Point Value to Unsigned Integer
Supported forms:
VCVTTSS2USI(r32, xmm/m32) [AVX512F]
VCVTTSS2USI(r64, xmm/m32) [AVX512F]
VCVTTSS2USI(r32, xmm, {sae}) [AVX512F]
VCVTTSS2USI(r64, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VCVTUDQ2PD
(*args, **kwargs)¶Convert Packed Unsigned Doubleword Integers to Packed Double-Precision Floating-Point Values
Supported forms:
VCVTUDQ2PD(zmm{k}{z}, m256/m32bcst) [AVX512F]
VCVTUDQ2PD(zmm{k}{z}, ymm) [AVX512F]
VCVTUDQ2PD(xmm{k}{z}, m64/m32bcst) [AVX512F and AVX512VL]
VCVTUDQ2PD(ymm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VCVTUDQ2PD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTUDQ2PD(ymm{k}{z}, xmm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTUDQ2PS
(*args, **kwargs)¶Convert Packed Unsigned Doubleword Integers to Packed Single-Precision Floating-Point Values
Supported forms:
VCVTUDQ2PS(zmm{k}{z}, m512/m32bcst) [AVX512F]
VCVTUDQ2PS(zmm{k}{z}, zmm, {er}) [AVX512F]
VCVTUDQ2PS(zmm{k}{z}, zmm) [AVX512F]
VCVTUDQ2PS(xmm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VCVTUDQ2PS(ymm{k}{z}, m256/m32bcst) [AVX512F and AVX512VL]
VCVTUDQ2PS(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VCVTUDQ2PS(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VCVTUQQ2PD
(*args, **kwargs)¶Convert Packed Unsigned Quadword Integers to Packed Double-Precision Floating-Point Values
Supported forms:
VCVTUQQ2PD(zmm{k}{z}, m512/m64bcst) [AVX512DQ]
VCVTUQQ2PD(zmm{k}{z}, zmm, {er}) [AVX512DQ]
VCVTUQQ2PD(zmm{k}{z}, zmm) [AVX512DQ]
VCVTUQQ2PD(xmm{k}{z}, m128/m64bcst) [AVX512DQ and AVX512VL]
VCVTUQQ2PD(ymm{k}{z}, m256/m64bcst) [AVX512DQ and AVX512VL]
VCVTUQQ2PD(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTUQQ2PD(ymm{k}{z}, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTUQQ2PS
(*args, **kwargs)¶Convert Packed Unsigned Quadword Integers to Packed Single-Precision Floating-Point Values
Supported forms:
VCVTUQQ2PS(ymm{k}{z}, m512/m64bcst) [AVX512DQ]
VCVTUQQ2PS(ymm{k}{z}, zmm, {er}) [AVX512DQ]
VCVTUQQ2PS(ymm{k}{z}, zmm) [AVX512DQ]
VCVTUQQ2PS(xmm{k}{z}, m128/m64bcst) [AVX512DQ and AVX512VL]
VCVTUQQ2PS(xmm{k}{z}, m256/m64bcst) [AVX512DQ and AVX512VL]
VCVTUQQ2PS(xmm{k}{z}, xmm) [AVX512DQ and AVX512VL]
VCVTUQQ2PS(xmm{k}{z}, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VCVTUSI2SD
(*args, **kwargs)¶Convert Unsigned Integer to Scalar Double-Precision Floating-Point Value
Supported forms:
VCVTUSI2SD(xmm, xmm, r32/m32) [AVX512F]
VCVTUSI2SD(xmm, xmm, r64/m64) [AVX512F]
VCVTUSI2SD(xmm, xmm, r64, {er}) [AVX512F]
peachpy.x86_64.avx.
VCVTUSI2SS
(*args, **kwargs)¶Convert Unsigned Integer to Scalar Single-Precision Floating-Point Value
Supported forms:
VCVTUSI2SS(xmm, xmm, r32/m32) [AVX512F]
VCVTUSI2SS(xmm, xmm, r64/m64) [AVX512F]
VCVTUSI2SS(xmm, xmm, r32, {er}) [AVX512F]
VCVTUSI2SS(xmm, xmm, r64, {er}) [AVX512F]
peachpy.x86_64.avx.
VDBPSADBW
(*args, **kwargs)¶Double Block Packed Sum-Absolute-Differences on Unsigned Bytes
Supported forms:
VDBPSADBW(zmm{k}{z}, zmm, zmm/m512, imm8) [AVX512BW]
VDBPSADBW(xmm{k}{z}, xmm, xmm/m128, imm8) [AVX512BW and AVX512VL]
VDBPSADBW(ymm{k}{z}, ymm, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VDIVPD
(*args, **kwargs)¶Divide Packed Double-Precision Floating-Point Values
Supported forms:
VDIVPD(xmm, xmm, xmm/m128) [AVX]
VDIVPD(ymm, ymm, ymm/m256) [AVX]
VDIVPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VDIVPD(zmm{k}{z}, zmm, zmm, {er}) [AVX512F]
VDIVPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VDIVPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VDIVPD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VDIVPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VDIVPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VDIVPS
(*args, **kwargs)¶Divide Packed Single-Precision Floating-Point Values
Supported forms:
VDIVPS(xmm, xmm, xmm/m128) [AVX]
VDIVPS(ymm, ymm, ymm/m256) [AVX]
VDIVPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VDIVPS(zmm{k}{z}, zmm, zmm, {er}) [AVX512F]
VDIVPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VDIVPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VDIVPS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VDIVPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VDIVPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VDIVSD
(*args, **kwargs)¶Divide Scalar Double-Precision Floating-Point Values
Supported forms:
VDIVSD(xmm, xmm, xmm/m64) [AVX]
VDIVSD(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
VDIVSD(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VDIVSS
(*args, **kwargs)¶Divide Scalar Single-Precision Floating-Point Values
Supported forms:
VDIVSS(xmm, xmm, xmm/m32) [AVX]
VDIVSS(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
VDIVSS(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VDPPD
(*args, **kwargs)¶Dot Product of Packed Double Precision Floating-Point Values
Supported forms:
VDPPD(xmm, xmm, xmm/m128, imm8) [AVX]
peachpy.x86_64.avx.
VDPPS
(*args, **kwargs)¶Dot Product of Packed Single Precision Floating-Point Values
Supported forms:
VDPPS(xmm, xmm, xmm/m128, imm8) [AVX]
VDPPS(ymm, ymm, ymm/m256, imm8) [AVX]
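In the imm8 operand, the high nibble selects which element products enter the sum and the low nibble selects which destination lanes receive it. A sketch of a 4-element dot product with imm8 = 0xF1 (sum all four products, place the result in lane 0 only) that stores the scalar result through a pointer; VMOVUPS and VMOVSS are assumed from the same module, and the names are illustrative:

    from peachpy import *
    from peachpy.x86_64 import *

    a = Argument(ptr(float_), "a")
    b = Argument(ptr(float_), "b")
    out = Argument(ptr(float_), "out")

    with Function("dot4f", (a, b, out)) as asm_dot4f:
        r_a = GeneralPurposeRegister64()
        r_b = GeneralPurposeRegister64()
        r_out = GeneralPurposeRegister64()
        LOAD.ARGUMENT(r_a, a)
        LOAD.ARGUMENT(r_b, b)
        LOAD.ARGUMENT(r_out, out)

        va = XMMRegister()
        VMOVUPS(va, [r_a])
        VDPPS(va, va, [r_b], 0xF1)   # sum a[i]*b[i] for i = 0..3 into lane 0
        VMOVSS([r_out], va)          # store the scalar result
        RETURN()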
peachpy.x86_64.avx.
VEXP2PD
(*args, **kwargs)¶Approximation to the Exponential 2^x of Packed Double-Precision Floating-Point Values with Less Than 2^-23 Relative Error
Supported forms:
VEXP2PD(zmm{k}{z}, m512/m64bcst) [AVX512ER]
VEXP2PD(zmm{k}{z}, zmm, {sae}) [AVX512ER]
VEXP2PD(zmm{k}{z}, zmm) [AVX512ER]
peachpy.x86_64.avx.
VEXP2PS
(*args, **kwargs)¶Approximation to the Exponential 2^x of Packed Single-Precision Floating-Point Values with Less Than 2^-23 Relative Error
Supported forms:
VEXP2PS(zmm{k}{z}, m512/m32bcst) [AVX512ER]
VEXP2PS(zmm{k}{z}, zmm, {sae}) [AVX512ER]
VEXP2PS(zmm{k}{z}, zmm) [AVX512ER]
peachpy.x86_64.avx.
VEXPANDPD
(*args, **kwargs)¶Load Sparse Packed Double-Precision Floating-Point Values from Dense Memory
Supported forms:
VEXPANDPD(zmm{k}{z}, zmm/m512) [AVX512F]
VEXPANDPD(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VEXPANDPD(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VEXPANDPS
(*args, **kwargs)¶Load Sparse Packed Single-Precision Floating-Point Values from Dense Memory
Supported forms:
VEXPANDPS(zmm{k}{z}, zmm/m512) [AVX512F]
VEXPANDPS(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VEXPANDPS(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VEXTRACTF128
(*args, **kwargs)¶Extract Packed Floating-Point Values
Supported forms:
VEXTRACTF128(xmm/m128, ymm, imm8) [AVX]
peachpy.x86_64.avx.
VEXTRACTF32X4
(*args, **kwargs)¶Extract 128 Bits of Packed Single-Precision Floating-Point Values
Supported forms:
VEXTRACTF32X4(xmm{k}{z}, zmm, imm8) [AVX512F]
VEXTRACTF32X4(m128{k}{z}, zmm, imm8) [AVX512F]
VEXTRACTF32X4(xmm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VEXTRACTF32X4(m128{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VEXTRACTF32X8
(*args, **kwargs)¶Extract 256 Bits of Packed Single-Precision Floating-Point Values
Supported forms:
VEXTRACTF32X8(ymm{k}{z}, zmm, imm8) [AVX512DQ]
VEXTRACTF32X8(m256{k}{z}, zmm, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VEXTRACTF64X2
(*args, **kwargs)¶Extract 128 Bits of Packed Double-Precision Floating-Point Values
Supported forms:
VEXTRACTF64X2(xmm{k}{z}, zmm, imm8) [AVX512DQ]
VEXTRACTF64X2(m128{k}{z}, zmm, imm8) [AVX512DQ]
VEXTRACTF64X2(xmm{k}{z}, ymm, imm8) [AVX512DQ and AVX512VL]
VEXTRACTF64X2(m128{k}{z}, ymm, imm8) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VEXTRACTF64X4
(*args, **kwargs)¶Extract 256 Bits of Packed Double-Precision Floating-Point Values
Supported forms:
VEXTRACTF64X4(ymm{k}{z}, zmm, imm8) [AVX512F]
VEXTRACTF64X4(m256{k}{z}, zmm, imm8) [AVX512F]
peachpy.x86_64.avx.
VEXTRACTI128
(*args, **kwargs)¶Extract Packed Integer Values
Supported forms:
VEXTRACTI128(xmm/m128, ymm, imm8) [AVX2]
peachpy.x86_64.avx.
VEXTRACTI32X4
(*args, **kwargs)¶Extract 128 Bits of Packed Doubleword Integer Values
Supported forms:
VEXTRACTI32X4(xmm{k}{z}, zmm, imm8) [AVX512F]
VEXTRACTI32X4(m128{k}{z}, zmm, imm8) [AVX512F]
VEXTRACTI32X4(xmm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VEXTRACTI32X4(m128{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VEXTRACTI32X8
(*args, **kwargs)¶Extract 256 Bits of Packed Doubleword Integer Values
Supported forms:
VEXTRACTI32X8(ymm{k}{z}, zmm, imm8) [AVX512DQ]
VEXTRACTI32X8(m256{k}{z}, zmm, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VEXTRACTI64X2
(*args, **kwargs)¶Extract 128 Bits of Packed Quadword Integer Values
Supported forms:
VEXTRACTI64X2(xmm{k}{z}, zmm, imm8) [AVX512DQ]
VEXTRACTI64X2(m128{k}{z}, zmm, imm8) [AVX512DQ]
VEXTRACTI64X2(xmm{k}{z}, ymm, imm8) [AVX512DQ and AVX512VL]
VEXTRACTI64X2(m128{k}{z}, ymm, imm8) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VEXTRACTI64X4
(*args, **kwargs)¶Extract 256 Bits of Packed Quadword Integer Values
Supported forms:
VEXTRACTI64X4(ymm{k}{z}, zmm, imm8) [AVX512F]
VEXTRACTI64X4(m256{k}{z}, zmm, imm8) [AVX512F]
peachpy.x86_64.avx.
VEXTRACTPS
(*args, **kwargs)¶Extract Packed Single Precision Floating-Point Value
Supported forms:
VEXTRACTPS(r32/m32, xmm, imm8) [AVX]
VEXTRACTPS(r32/m32, xmm, imm8) [AVX512F]
peachpy.x86_64.avx.
VFIXUPIMMPD
(*args, **kwargs)¶Fix Up Special Packed Double-Precision Floating-Point Values
Supported forms:
VFIXUPIMMPD(zmm{k}{z}, zmm, m512/m64bcst, imm8) [AVX512F]
VFIXUPIMMPD(zmm{k}{z}, zmm, zmm, {sae}, imm8) [AVX512F]
VFIXUPIMMPD(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VFIXUPIMMPD(xmm{k}{z}, xmm, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VFIXUPIMMPD(xmm{k}{z}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VFIXUPIMMPD(ymm{k}{z}, ymm, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VFIXUPIMMPD(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VFIXUPIMMPS
(*args, **kwargs)¶Fix Up Special Packed Single-Precision Floating-Point Values
Supported forms:
VFIXUPIMMPS(zmm{k}{z}, zmm, m512/m32bcst, imm8) [AVX512F]
VFIXUPIMMPS(zmm{k}{z}, zmm, zmm, {sae}, imm8) [AVX512F]
VFIXUPIMMPS(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VFIXUPIMMPS(xmm{k}{z}, xmm, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VFIXUPIMMPS(xmm{k}{z}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VFIXUPIMMPS(ymm{k}{z}, ymm, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VFIXUPIMMPS(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VFIXUPIMMSD
(*args, **kwargs)¶Fix Up Special Scalar Double-Precision Floating-Point Value
Supported forms:
VFIXUPIMMSD(xmm{k}{z}, xmm, xmm/m64, imm8) [AVX512F]
VFIXUPIMMSD(xmm{k}{z}, xmm, xmm, {sae}, imm8) [AVX512F]
peachpy.x86_64.avx.
VFIXUPIMMSS
(*args, **kwargs)¶Fix Up Special Scalar Single-Precision Floating-Point Value
Supported forms:
VFIXUPIMMSS(xmm{k}{z}, xmm, xmm/m32, imm8) [AVX512F]
VFIXUPIMMSS(xmm{k}{z}, xmm, xmm, {sae}, imm8) [AVX512F]
peachpy.x86_64.avx.
VFPCLASSPD
(*args, **kwargs)¶Test Class of Packed Double-Precision Floating-Point Values
Supported forms:
VFPCLASSPD(k{k}, m512/m64bcst, imm8) [AVX512DQ]
VFPCLASSPD(k{k}, zmm, imm8) [AVX512DQ]
VFPCLASSPD(k{k}, m128/m64bcst, imm8) [AVX512DQ and AVX512VL]
VFPCLASSPD(k{k}, m256/m64bcst, imm8) [AVX512DQ and AVX512VL]
VFPCLASSPD(k{k}, xmm, imm8) [AVX512DQ and AVX512VL]
VFPCLASSPD(k{k}, ymm, imm8) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VFPCLASSPS
(*args, **kwargs)¶Test Class of Packed Single-Precision Floating-Point Values
Supported forms:
VFPCLASSPS(k{k}, m512/m32bcst, imm8) [AVX512DQ]
VFPCLASSPS(k{k}, zmm, imm8) [AVX512DQ]
VFPCLASSPS(k{k}, m128/m32bcst, imm8) [AVX512DQ and AVX512VL]
VFPCLASSPS(k{k}, m256/m32bcst, imm8) [AVX512DQ and AVX512VL]
VFPCLASSPS(k{k}, xmm, imm8) [AVX512DQ and AVX512VL]
VFPCLASSPS(k{k}, ymm, imm8) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VFPCLASSSD
(*args, **kwargs)¶Test Class of Scalar Double-Precision Floating-Point Value
Supported forms:
VFPCLASSSD(k{k}, xmm/m64, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VFPCLASSSS
(*args, **kwargs)¶Test Class of Scalar Single-Precision Floating-Point Value
Supported forms:
VFPCLASSSS(k{k}, xmm/m32, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VGATHERDPD
(*args, **kwargs)¶Gather Packed Double-Precision Floating-Point Values Using Signed Doubleword Indices
Supported forms:
VGATHERDPD(xmm, vm32x, xmm) [AVX2]
VGATHERDPD(ymm, vm32x, ymm) [AVX2]
VGATHERDPD(zmm{k}, vm32y) [AVX512F]
VGATHERDPD(xmm{k}, vm32x) [AVX512F and AVX512VL]
VGATHERDPD(ymm{k}, vm32x) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VGATHERDPS
(*args, **kwargs)¶Gather Packed Single-Precision Floating-Point Values Using Signed Doubleword Indices
Supported forms:
VGATHERDPS(xmm, vm32x, xmm) [AVX2]
VGATHERDPS(ymm, vm32y, ymm) [AVX2]
VGATHERDPS(zmm{k}, vm32z) [AVX512F]
VGATHERDPS(xmm{k}, vm32x) [AVX512F and AVX512VL]
VGATHERDPS(ymm{k}, vm32y) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VGATHERPF0DPD
(*args, **kwargs)¶Sparse Prefetch Packed Double-Precision Floating-Point Data Values with Signed Doubleword Indices Using T0 Hint
Supported forms:
VGATHERPF0DPD(vm32y{k}) [AVX512PF]
peachpy.x86_64.avx.
VGATHERPF0DPS
(*args, **kwargs)¶Sparse Prefetch Packed Single-Precision Floating-Point Data Values with Signed Doubleword Indices Using T0 Hint
Supported forms:
VGATHERPF0DPS(vm32z{k}) [AVX512PF]
peachpy.x86_64.avx.
VGATHERPF0QPD
(*args, **kwargs)¶Sparse Prefetch Packed Double-Precision Floating-Point Data Values with Signed Quadword Indices Using T0 Hint
Supported forms:
VGATHERPF0QPD(vm64z{k}) [AVX512PF]
peachpy.x86_64.avx.
VGATHERPF0QPS
(*args, **kwargs)¶Sparse Prefetch Packed Single-Precision Floating-Point Data Values with Signed Quadword Indices Using T0 Hint
Supported forms:
VGATHERPF0QPS(vm64z{k}) [AVX512PF]
peachpy.x86_64.avx.
VGATHERPF1DPD
(*args, **kwargs)¶Sparse Prefetch Packed Double-Precision Floating-Point Data Values with Signed Doubleword Indices Using T1 Hint
Supported forms:
VGATHERPF1DPD(vm32y{k}) [AVX512PF]
peachpy.x86_64.avx.
VGATHERPF1DPS
(*args, **kwargs)¶Sparse Prefetch Packed Single-Precision Floating-Point Data Values with Signed Doubleword Indices Using T1 Hint
Supported forms:
VGATHERPF1DPS(vm32z{k}) [AVX512PF]
peachpy.x86_64.avx.
VGATHERPF1QPD
(*args, **kwargs)¶Sparse Prefetch Packed Double-Precision Floating-Point Data Values with Signed Quadword Indices Using T1 Hint
Supported forms:
VGATHERPF1QPD(vm64z{k}) [AVX512PF]
peachpy.x86_64.avx.
VGATHERPF1QPS
(*args, **kwargs)¶Sparse Prefetch Packed Single-Precision Floating-Point Data Values with Signed Quadword Indices Using T1 Hint
Supported forms:
VGATHERPF1QPS(vm64z{k}) [AVX512PF]
peachpy.x86_64.avx.
VGATHERQPD
(*args, **kwargs)¶Gather Packed Double-Precision Floating-Point Values Using Signed Quadword Indices
Supported forms:
VGATHERQPD(xmm, vm64x, xmm) [AVX2]
VGATHERQPD(ymm, vm64y, ymm) [AVX2]
VGATHERQPD(zmm{k}, vm64z) [AVX512F]
VGATHERQPD(xmm{k}, vm64x) [AVX512F and AVX512VL]
VGATHERQPD(ymm{k}, vm64y) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VGATHERQPS
(*args, **kwargs)¶Gather Packed Single-Precision Floating-Point Values Using Signed Quadword Indices
Supported forms:
VGATHERQPS(xmm, vm64x, xmm) [AVX2]
VGATHERQPS(xmm, vm64y, xmm) [AVX2]
VGATHERQPS(ymm{k}, vm64z) [AVX512F]
VGATHERQPS(xmm{k}, vm64x) [AVX512F and AVX512VL]
VGATHERQPS(xmm{k}, vm64y) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VGETEXPPD
(*args, **kwargs)¶Extract Exponents of Packed Double-Precision Floating-Point Values as Double-Precision Floating-Point Values
Supported forms:
VGETEXPPD(zmm{k}{z}, m512/m64bcst) [AVX512F]
VGETEXPPD(zmm{k}{z}, zmm, {sae}) [AVX512F]
VGETEXPPD(zmm{k}{z}, zmm) [AVX512F]
VGETEXPPD(xmm{k}{z}, m128/m64bcst) [AVX512F and AVX512VL]
VGETEXPPD(ymm{k}{z}, m256/m64bcst) [AVX512F and AVX512VL]
VGETEXPPD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VGETEXPPD(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VGETEXPPS
(*args, **kwargs)¶Extract Exponents of Packed Single-Precision Floating-Point Values as Single-Precision Floating-Point Values
Supported forms:
VGETEXPPS(zmm{k}{z}, m512/m32bcst) [AVX512F]
VGETEXPPS(zmm{k}{z}, zmm, {sae}) [AVX512F]
VGETEXPPS(zmm{k}{z}, zmm) [AVX512F]
VGETEXPPS(xmm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VGETEXPPS(ymm{k}{z}, m256/m32bcst) [AVX512F and AVX512VL]
VGETEXPPS(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VGETEXPPS(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VGETEXPSD
(*args, **kwargs)¶Extract Exponent of Scalar Double-Precision Floating-Point Value as Double-Precision Floating-Point Value
Supported forms:
VGETEXPSD(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
VGETEXPSD(xmm{k}{z}, xmm, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VGETEXPSS
(*args, **kwargs)¶Extract Exponent of Scalar Single-Precision Floating-Point Value as Single-Precision Floating-Point Value
Supported forms:
VGETEXPSS(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
VGETEXPSS(xmm{k}{z}, xmm, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VGETMANTPD
(*args, **kwargs)¶Extract Normalized Mantissas from Packed Double-Precision Floating-Point Values
Supported forms:
VGETMANTPD(zmm{k}{z}, m512/m64bcst, imm8) [AVX512F]
VGETMANTPD(zmm{k}{z}, zmm, {sae}, imm8) [AVX512F]
VGETMANTPD(zmm{k}{z}, zmm, imm8) [AVX512F]
VGETMANTPD(xmm{k}{z}, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VGETMANTPD(ymm{k}{z}, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VGETMANTPD(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VGETMANTPD(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VGETMANTPS
(*args, **kwargs)¶Extract Normalized Mantissas from Packed Single-Precision Floating-Point Values
Supported forms:
VGETMANTPS(zmm{k}{z}, m512/m32bcst, imm8) [AVX512F]
VGETMANTPS(zmm{k}{z}, zmm, {sae}, imm8) [AVX512F]
VGETMANTPS(zmm{k}{z}, zmm, imm8) [AVX512F]
VGETMANTPS(xmm{k}{z}, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VGETMANTPS(ymm{k}{z}, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VGETMANTPS(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VGETMANTPS(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VGETMANTSD
(*args, **kwargs)¶Extract Normalized Mantissa from Scalar Double-Precision Floating-Point Value
Supported forms:
VGETMANTSD(xmm{k}{z}, xmm, xmm/m64, imm8) [AVX512F]
VGETMANTSD(xmm{k}{z}, xmm, xmm, {sae}, imm8) [AVX512F]
peachpy.x86_64.avx.
VGETMANTSS
(*args, **kwargs)¶Extract Normalized Mantissa from Scalar Single-Precision Floating-Point Value
Supported forms:
VGETMANTSS(xmm{k}{z}, xmm, xmm/m32, imm8) [AVX512F]
VGETMANTSS(xmm{k}{z}, xmm, xmm, {sae}, imm8) [AVX512F]
peachpy.x86_64.avx.
VHADDPD
(*args, **kwargs)¶Packed Double-FP Horizontal Add
Supported forms:
VHADDPD(xmm, xmm, xmm/m128) [AVX]
VHADDPD(ymm, ymm, ymm/m256) [AVX]
peachpy.x86_64.avx.
VHADDPS
(*args, **kwargs)¶Packed Single-FP Horizontal Add
Supported forms:
VHADDPS(xmm, xmm, xmm/m128) [AVX]
VHADDPS(ymm, ymm, ymm/m256) [AVX]
peachpy.x86_64.avx.
VHSUBPD
(*args, **kwargs)¶Packed Double-FP Horizontal Subtract
Supported forms:
VHSUBPD(xmm, xmm, xmm/m128) [AVX]
VHSUBPD(ymm, ymm, ymm/m256) [AVX]
peachpy.x86_64.avx.
VHSUBPS
(*args, **kwargs)¶Packed Single-FP Horizontal Subtract
Supported forms:
VHSUBPS(xmm, xmm, xmm/m128) [AVX]
VHSUBPS(ymm, ymm, ymm/m256) [AVX]
peachpy.x86_64.avx.
VINSERTF128
(*args, **kwargs)¶Insert Packed Floating-Point Values
Supported forms:
VINSERTF128(ymm, ymm, xmm/m128, imm8) [AVX]
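Conversely, VINSERTF128(ymm, ymm, xmm/m128, imm8) replaces one 128-bit lane of a 256-bit vector. A minimal sketch, with hypothetical names and types, that overwrites the upper four floats of an 8-float buffer:

    from peachpy import *
    from peachpy.x86_64 import *

    # Hypothetical signature: void patch_high(float* x, const float* hi); x[4 + i] = hi[i] for i = 0..3
    x = Argument(ptr(float_), name="x")
    hi = Argument(ptr(float_), name="hi")

    with Function("patch_high", (x, hi), target=uarch.default + isa.avx) as patch_high:
        reg_x = GeneralPurposeRegister64()
        reg_hi = GeneralPurposeRegister64()
        LOAD.ARGUMENT(reg_x, x)
        LOAD.ARGUMENT(reg_hi, hi)

        vec = YMMRegister()
        VMOVUPS(vec, [reg_x])                # load x[0:8]
        VINSERTF128(vec, vec, [reg_hi], 1)   # imm8 = 1: replace the upper lane with hi[0:4]
        VMOVUPS([reg_x], vec)
        RETURN()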
peachpy.x86_64.avx.
VINSERTF32X4
(*args, **kwargs)¶Insert 128 Bits of Packed Single-Precision Floating-Point Values
Supported forms:
VINSERTF32X4(zmm{k}{z}, zmm, xmm/m128, imm8) [AVX512F]
VINSERTF32X4(ymm{k}{z}, ymm, xmm/m128, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VINSERTF32X8
(*args, **kwargs)¶Insert 256 Bits of Packed Single-Precision Floating-Point Values
Supported forms:
VINSERTF32X8(zmm{k}{z}, zmm, ymm/m256, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VINSERTF64X2
(*args, **kwargs)¶Insert 128 Bits of Packed Double-Precision Floating-Point Values
Supported forms:
VINSERTF64X2(zmm{k}{z}, zmm, xmm/m128, imm8) [AVX512DQ]
VINSERTF64X2(ymm{k}{z}, ymm, xmm/m128, imm8) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VINSERTF64X4
(*args, **kwargs)¶Insert 256 Bits of Packed Double-Precision Floating-Point Values
Supported forms:
VINSERTF64X4(zmm{k}{z}, zmm, ymm/m256, imm8) [AVX512F]
peachpy.x86_64.avx.
VINSERTI128
(*args, **kwargs)¶Insert Packed Integer Values
Supported forms:
VINSERTI128(ymm, ymm, xmm/m128, imm8) [AVX2]
peachpy.x86_64.avx.
VINSERTI32X4
(*args, **kwargs)¶Insert 128 Bits of Packed Doubleword Integer Values
Supported forms:
VINSERTI32X4(zmm{k}{z}, zmm, xmm/m128, imm8) [AVX512F]
VINSERTI32X4(ymm{k}{z}, ymm, xmm/m128, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VINSERTI32X8
(*args, **kwargs)¶Insert 256 Bits of Packed Doubleword Integer Values
Supported forms:
VINSERTI32X8(zmm{k}{z}, zmm, ymm/m256, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VINSERTI64X2
(*args, **kwargs)¶Insert 128 Bits of Packed Quadword Integer Values
Supported forms:
VINSERTI64X2(zmm{k}{z}, zmm, xmm/m128, imm8) [AVX512DQ]
VINSERTI64X2(ymm{k}{z}, ymm, xmm/m128, imm8) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VINSERTI64X4
(*args, **kwargs)¶Insert 256 Bits of Packed Quadword Integer Values
Supported forms:
VINSERTI64X4(zmm{k}{z}, zmm, ymm/m256, imm8) [AVX512F]
peachpy.x86_64.avx.
VINSERTPS
(*args, **kwargs)¶Insert Packed Single Precision Floating-Point Value
Supported forms:
VINSERTPS(xmm, xmm, xmm/m32, imm8) [AVX]
VINSERTPS(xmm, xmm, xmm/m32, imm8) [AVX512F]
peachpy.x86_64.avx.
VLDDQU
(*args, **kwargs)¶Load Unaligned Integer 128 Bits
Supported forms:
VLDDQU(xmm, m128) [AVX]
VLDDQU(ymm, m256) [AVX]
peachpy.x86_64.avx.
VLDMXCSR
(*args, **kwargs)¶Load MXCSR Register
Supported forms:
VLDMXCSR(m32) [AVX]
peachpy.x86_64.avx.
VMASKMOVDQU
(*args, **kwargs)¶Store Selected Bytes of Double Quadword
Supported forms:
VMASKMOVDQU(xmm, xmm) [AVX]
peachpy.x86_64.avx.
VMASKMOVPD
(*args, **kwargs)¶Conditional Move Packed Double-Precision Floating-Point Values
Supported forms:
VMASKMOVPD(xmm, xmm, m128) [AVX]
VMASKMOVPD(ymm, ymm, m256) [AVX]
VMASKMOVPD(m128, xmm, xmm) [AVX]
VMASKMOVPD(m256, ymm, ymm) [AVX]
peachpy.x86_64.avx.
VMASKMOVPS
(*args, **kwargs)¶Conditional Move Packed Single-Precision Floating-Point Values
Supported forms:
VMASKMOVPS(xmm, xmm, m128) [AVX]
VMASKMOVPS(ymm, ymm, m256) [AVX]
VMASKMOVPS(m128, xmm, xmm) [AVX]
VMASKMOVPS(m256, ymm, ymm) [AVX]
peachpy.x86_64.avx.
VMAXPD
(*args, **kwargs)¶Return Maximum Packed Double-Precision Floating-Point Values
Supported forms:
VMAXPD(xmm, xmm, xmm/m128) [AVX]
VMAXPD(ymm, ymm, ymm/m256) [AVX]
VMAXPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VMAXPD(zmm{k}{z}, zmm, zmm, {sae}) [AVX512F]
VMAXPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VMAXPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VMAXPD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VMAXPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VMAXPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMAXPS
(*args, **kwargs)¶Return Maximum Packed Single-Precision Floating-Point Values
Supported forms:
VMAXPS(xmm, xmm, xmm/m128) [AVX]
VMAXPS(ymm, ymm, ymm/m256) [AVX]
VMAXPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VMAXPS(zmm{k}{z}, zmm, zmm, {sae}) [AVX512F]
VMAXPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VMAXPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VMAXPS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VMAXPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VMAXPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
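A common use of the packed maximum is clamping against a constant vector. The sketch below (hypothetical names and types) implements an 8-wide ReLU with VMAXPS(ymm, ymm, ymm/m256) and a zero vector produced by VXORPS from the same module.

    from peachpy import *
    from peachpy.x86_64 import *

    # Hypothetical signature: void relu8(float* x); x[i] = max(x[i], 0.0f) for i = 0..7
    x = Argument(ptr(float_), name="x")

    with Function("relu8", (x,), target=uarch.default + isa.avx) as relu8:
        reg_x = GeneralPurposeRegister64()
        LOAD.ARGUMENT(reg_x, x)

        zero = YMMRegister()
        VXORPS(zero, zero, zero)        # all-zero vector
        vec = YMMRegister()
        VMOVUPS(vec, [reg_x])
        VMAXPS(vec, vec, zero)          # element-wise maximum against zero
        VMOVUPS([reg_x], vec)
        RETURN()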
peachpy.x86_64.avx.
VMAXSD
(*args, **kwargs)¶Return Maximum Scalar Double-Precision Floating-Point Value
Supported forms:
VMAXSD(xmm, xmm, xmm/m64) [AVX]
VMAXSD(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
VMAXSD(xmm{k}{z}, xmm, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VMAXSS
(*args, **kwargs)¶Return Maximum Scalar Single-Precision Floating-Point Value
Supported forms:
VMAXSS(xmm, xmm, xmm/m32) [AVX]
VMAXSS(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
VMAXSS(xmm{k}{z}, xmm, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VMINPD
(*args, **kwargs)¶Return Minimum Packed Double-Precision Floating-Point Values
Supported forms:
VMINPD(xmm, xmm, xmm/m128) [AVX]
VMINPD(ymm, ymm, ymm/m256) [AVX]
VMINPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VMINPD(zmm{k}{z}, zmm, zmm, {sae}) [AVX512F]
VMINPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VMINPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VMINPD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VMINPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VMINPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMINPS
(*args, **kwargs)¶Return Minimum Packed Single-Precision Floating-Point Values
Supported forms:
VMINPS(xmm, xmm, xmm/m128) [AVX]
VMINPS(ymm, ymm, ymm/m256) [AVX]
VMINPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VMINPS(zmm{k}{z}, zmm, zmm, {sae}) [AVX512F]
VMINPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VMINPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VMINPS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VMINPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VMINPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMINSD
(*args, **kwargs)¶Return Minimum Scalar Double-Precision Floating-Point Value
Supported forms:
VMINSD(xmm, xmm, xmm/m64) [AVX]
VMINSD(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
VMINSD(xmm{k}{z}, xmm, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VMINSS
(*args, **kwargs)¶Return Minimum Scalar Single-Precision Floating-Point Value
Supported forms:
VMINSS(xmm, xmm, xmm/m32) [AVX]
VMINSS(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
VMINSS(xmm{k}{z}, xmm, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VMOVAPD
(*args, **kwargs)¶Move Aligned Packed Double-Precision Floating-Point Values
Supported forms:
VMOVAPD(xmm, xmm/m128) [AVX]
VMOVAPD(ymm, ymm/m256) [AVX]
VMOVAPD(xmm/m128, xmm) [AVX]
VMOVAPD(ymm/m256, ymm) [AVX]
VMOVAPD(m512{k}{z}, zmm) [AVX512F]
VMOVAPD(zmm{k}{z}, zmm/m512) [AVX512F]
VMOVAPD(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VMOVAPD(m256{k}{z}, ymm) [AVX512F and AVX512VL]
VMOVAPD(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VMOVAPD(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVAPS
(*args, **kwargs)¶Move Aligned Packed Single-Precision Floating-Point Values
Supported forms:
VMOVAPS(xmm, xmm/m128) [AVX]
VMOVAPS(ymm, ymm/m256) [AVX]
VMOVAPS(xmm/m128, xmm) [AVX]
VMOVAPS(ymm/m256, ymm) [AVX]
VMOVAPS(m512{k}{z}, zmm) [AVX512F]
VMOVAPS(zmm{k}{z}, zmm/m512) [AVX512F]
VMOVAPS(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VMOVAPS(m256{k}{z}, ymm) [AVX512F and AVX512VL]
VMOVAPS(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VMOVAPS(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
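The aligned moves fault if the memory operand is not naturally aligned (32 bytes for the ymm forms, 16 for xmm); VMOVUPS is the unaligned counterpart. A hypothetical aligned 32-byte copy, with illustrative names and types:

    from peachpy import *
    from peachpy.x86_64 import *

    # Hypothetical signature: void copy8_aligned(const float* src, float* dst)
    # Both pointers are assumed 32-byte aligned.
    src = Argument(ptr(float_), name="src")
    dst = Argument(ptr(float_), name="dst")

    with Function("copy8_aligned", (src, dst), target=uarch.default + isa.avx) as copy8_aligned:
        reg_src = GeneralPurposeRegister64()
        reg_dst = GeneralPurposeRegister64()
        LOAD.ARGUMENT(reg_src, src)
        LOAD.ARGUMENT(reg_dst, dst)

        vec = YMMRegister()
        VMOVAPS(vec, [reg_src])     # aligned 256-bit load
        VMOVAPS([reg_dst], vec)     # aligned 256-bit store
        RETURN()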
peachpy.x86_64.avx.
VMOVD
(*args, **kwargs)¶Move Doubleword
Supported forms:
VMOVD(xmm, r32/m32) [AVX]
VMOVD(r32/m32, xmm) [AVX]
VMOVD(xmm, r32/m32) [AVX512F]
VMOVD(r32/m32, xmm) [AVX512F]
peachpy.x86_64.avx.
VMOVDDUP
(*args, **kwargs)¶Move One Double-FP and Duplicate
Supported forms:
VMOVDDUP(xmm, xmm/m64) [AVX]
VMOVDDUP(ymm, ymm/m256) [AVX]
VMOVDDUP(zmm{k}{z}, zmm/m512) [AVX512F]
VMOVDDUP(xmm{k}{z}, xmm/m64) [AVX512F and AVX512VL]
VMOVDDUP(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVDQA
(*args, **kwargs)¶Move Aligned Double Quadword
Supported forms:
VMOVDQA(xmm, xmm/m128) [AVX]
VMOVDQA(ymm, ymm/m256) [AVX]
VMOVDQA(xmm/m128, xmm) [AVX]
VMOVDQA(ymm/m256, ymm) [AVX]
peachpy.x86_64.avx.
VMOVDQA32
(*args, **kwargs)¶Move Aligned Doubleword Values
Supported forms:
VMOVDQA32(m512{k}{z}, zmm) [AVX512F]
VMOVDQA32(zmm{k}{z}, zmm/m512) [AVX512F]
VMOVDQA32(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VMOVDQA32(m256{k}{z}, ymm) [AVX512F and AVX512VL]
VMOVDQA32(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VMOVDQA32(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVDQA64
(*args, **kwargs)¶Move Aligned Quadword Values
Supported forms:
VMOVDQA64(m512{k}{z}, zmm) [AVX512F]
VMOVDQA64(zmm{k}{z}, zmm/m512) [AVX512F]
VMOVDQA64(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VMOVDQA64(m256{k}{z}, ymm) [AVX512F and AVX512VL]
VMOVDQA64(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VMOVDQA64(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVDQU
(*args, **kwargs)¶Move Unaligned Double Quadword
Supported forms:
VMOVDQU(xmm, xmm/m128) [AVX]
VMOVDQU(ymm, ymm/m256) [AVX]
VMOVDQU(xmm/m128, xmm) [AVX]
VMOVDQU(ymm/m256, ymm) [AVX]
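The unaligned integer move has no alignment requirement. A hypothetical 16-byte copy using the xmm forms (names are illustrative):

    from peachpy import *
    from peachpy.x86_64 import *

    # Hypothetical signature: void copy16(const uint8_t* src, uint8_t* dst)
    src = Argument(ptr(const_uint8_t), name="src")
    dst = Argument(ptr(uint8_t), name="dst")

    with Function("copy16", (src, dst), target=uarch.default + isa.avx) as copy16:
        reg_src = GeneralPurposeRegister64()
        reg_dst = GeneralPurposeRegister64()
        LOAD.ARGUMENT(reg_src, src)
        LOAD.ARGUMENT(reg_dst, dst)

        vec = XMMRegister()
        VMOVDQU(vec, [reg_src])     # 16 bytes, any alignment
        VMOVDQU([reg_dst], vec)
        RETURN()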
peachpy.x86_64.avx.
VMOVDQU16
(*args, **kwargs)¶Move Unaligned Word Values
Supported forms:
VMOVDQU16(m512{k}{z}, zmm) [AVX512BW]
VMOVDQU16(zmm{k}{z}, zmm/m512) [AVX512BW]
VMOVDQU16(m128{k}{z}, xmm) [AVX512BW and AVX512VL]
VMOVDQU16(m256{k}{z}, ymm) [AVX512BW and AVX512VL]
VMOVDQU16(xmm{k}{z}, xmm/m128) [AVX512BW and AVX512VL]
VMOVDQU16(ymm{k}{z}, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VMOVDQU32
(*args, **kwargs)¶Move Unaligned Doubleword Values
Supported forms:
VMOVDQU32(m512{k}{z}, zmm) [AVX512F]
VMOVDQU32(zmm{k}{z}, zmm/m512) [AVX512F]
VMOVDQU32(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VMOVDQU32(m256{k}{z}, ymm) [AVX512F and AVX512VL]
VMOVDQU32(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VMOVDQU32(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVDQU64
(*args, **kwargs)¶Move Unaligned Quadword Values
Supported forms:
VMOVDQU64(m512{k}{z}, zmm) [AVX512F]
VMOVDQU64(zmm{k}{z}, zmm/m512) [AVX512F]
VMOVDQU64(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VMOVDQU64(m256{k}{z}, ymm) [AVX512F and AVX512VL]
VMOVDQU64(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VMOVDQU64(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVDQU8
(*args, **kwargs)¶Move Unaligned Byte Values
Supported forms:
VMOVDQU8(m512{k}{z}, zmm) [AVX512BW]
VMOVDQU8(zmm{k}{z}, zmm/m512) [AVX512BW]
VMOVDQU8(m128{k}{z}, xmm) [AVX512BW and AVX512VL]
VMOVDQU8(m256{k}{z}, ymm) [AVX512BW and AVX512VL]
VMOVDQU8(xmm{k}{z}, xmm/m128) [AVX512BW and AVX512VL]
VMOVDQU8(ymm{k}{z}, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VMOVHLPS
(*args, **kwargs)¶Move Packed Single-Precision Floating-Point Values High to Low
Supported forms:
VMOVHLPS(xmm, xmm, xmm) [AVX]
VMOVHLPS(xmm, xmm, xmm) [AVX512F]
peachpy.x86_64.avx.
VMOVHPD
(*args, **kwargs)¶Move High Packed Double-Precision Floating-Point Value
Supported forms:
VMOVHPD(m64, xmm) [AVX]
VMOVHPD(xmm, xmm, m64) [AVX]
VMOVHPD(m64, xmm) [AVX512F]
VMOVHPD(xmm, xmm, m64) [AVX512F]
peachpy.x86_64.avx.
VMOVHPS
(*args, **kwargs)¶Move High Packed Single-Precision Floating-Point Values
Supported forms:
VMOVHPS(m64, xmm) [AVX]
VMOVHPS(xmm, xmm, m64) [AVX]
VMOVHPS(m64, xmm) [AVX512F]
VMOVHPS(xmm, xmm, m64) [AVX512F]
peachpy.x86_64.avx.
VMOVLHPS
(*args, **kwargs)¶Move Packed Single-Precision Floating-Point Values Low to High
Supported forms:
VMOVLHPS(xmm, xmm, xmm) [AVX]
VMOVLHPS(xmm, xmm, xmm) [AVX512F]
peachpy.x86_64.avx.
VMOVLPD
(*args, **kwargs)¶Move Low Packed Double-Precision Floating-Point Value
Supported forms:
VMOVLPD(m64, xmm) [AVX]
VMOVLPD(xmm, xmm, m64) [AVX]
VMOVLPD(m64, xmm) [AVX512F]
VMOVLPD(xmm, xmm, m64) [AVX512F]
peachpy.x86_64.avx.
VMOVLPS
(*args, **kwargs)¶Move Low Packed Single-Precision Floating-Point Values
Supported forms:
VMOVLPS(m64, xmm) [AVX]
VMOVLPS(xmm, xmm, m64) [AVX]
VMOVLPS(m64, xmm) [AVX512F]
VMOVLPS(xmm, xmm, m64) [AVX512F]
peachpy.x86_64.avx.
VMOVMSKPD
(*args, **kwargs)¶Extract Packed Double-Precision Floating-Point Sign Mask
Supported forms:
VMOVMSKPD(r32, xmm) [AVX]
VMOVMSKPD(r32, ymm) [AVX]
peachpy.x86_64.avx.
VMOVMSKPS
(*args, **kwargs)¶Extract Packed Single-Precision Floating-Point Sign Mask
Supported forms:
VMOVMSKPS(r32, xmm) [AVX]
VMOVMSKPS(r32, ymm) [AVX]
peachpy.x86_64.avx.
VMOVNTDQ
(*args, **kwargs)¶Store Double Quadword Using Non-Temporal Hint
Supported forms:
VMOVNTDQ(m128, xmm) [AVX]
VMOVNTDQ(m256, ymm) [AVX]
VMOVNTDQ(m512, zmm) [AVX512F]
VMOVNTDQ(m128, xmm) [AVX512F and AVX512VL]
VMOVNTDQ(m256, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVNTDQA
(*args, **kwargs)¶Load Double Quadword Non-Temporal Aligned Hint
Supported forms:
VMOVNTDQA(xmm, m128) [AVX]
VMOVNTDQA(ymm, m256) [AVX2]
VMOVNTDQA(zmm, m512) [AVX512F]
VMOVNTDQA(xmm, m128) [AVX512F and AVX512VL]
VMOVNTDQA(ymm, m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVNTPD
(*args, **kwargs)¶Store Packed Double-Precision Floating-Point Values Using Non-Temporal Hint
Supported forms:
VMOVNTPD(m128, xmm) [AVX]
VMOVNTPD(m256, ymm) [AVX]
VMOVNTPD(m512, zmm) [AVX512F]
VMOVNTPD(m128, xmm) [AVX512F and AVX512VL]
VMOVNTPD(m256, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVNTPS
(*args, **kwargs)¶Store Packed Single-Precision Floating-Point Values Using Non-Temporal Hint
Supported forms:
VMOVNTPS(m128, xmm) [AVX]
VMOVNTPS(m256, ymm) [AVX]
VMOVNTPS(m512, zmm) [AVX512F]
VMOVNTPS(m128, xmm) [AVX512F and AVX512VL]
VMOVNTPS(m256, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVQ
(*args, **kwargs)¶Move Quadword
Supported forms:
VMOVQ(xmm, xmm) [AVX]
VMOVQ(xmm, r64/m64) [AVX]
VMOVQ(r64/m64, xmm) [AVX]
VMOVQ(xmm, xmm) [AVX512F]
VMOVQ(xmm, r64/m64) [AVX512F]
VMOVQ(r64/m64, xmm) [AVX512F]
peachpy.x86_64.avx.
VMOVSD
(*args, **kwargs)¶Move Scalar Double-Precision Floating-Point Value
Supported forms:
VMOVSD(xmm, m64) [AVX]
VMOVSD(m64, xmm) [AVX]
VMOVSD(xmm, xmm, xmm) [AVX]
VMOVSD(m64{k}, xmm) [AVX512F]
VMOVSD(xmm{k}{z}, m64) [AVX512F]
VMOVSD(xmm{k}{z}, xmm, xmm) [AVX512F]
peachpy.x86_64.avx.
VMOVSHDUP
(*args, **kwargs)¶Move Packed Single-FP High and Duplicate
Supported forms:
VMOVSHDUP(xmm, xmm/m128) [AVX]
VMOVSHDUP(ymm, ymm/m256) [AVX]
VMOVSHDUP(zmm{k}{z}, zmm/m512) [AVX512F]
VMOVSHDUP(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VMOVSHDUP(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVSLDUP
(*args, **kwargs)¶Move Packed Single-FP Low and Duplicate
Supported forms:
VMOVSLDUP(xmm, xmm/m128) [AVX]
VMOVSLDUP(ymm, ymm/m256) [AVX]
VMOVSLDUP(zmm{k}{z}, zmm/m512) [AVX512F]
VMOVSLDUP(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VMOVSLDUP(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVSS
(*args, **kwargs)¶Move Scalar Single-Precision Floating-Point Values
Supported forms:
VMOVSS(xmm, m32) [AVX]
VMOVSS(m32, xmm) [AVX]
VMOVSS(xmm, xmm, xmm) [AVX]
VMOVSS(m32{k}, xmm) [AVX512F]
VMOVSS(xmm{k}{z}, m32) [AVX512F]
VMOVSS(xmm{k}{z}, xmm, xmm) [AVX512F]
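The two-operand memory forms load or store a single float. A hypothetical sketch that returns the first element of an array; it assumes RETURN accepts an XMM register for a float_ result, and the names and type spellings are illustrative.

    from peachpy import *
    from peachpy.x86_64 import *

    # Hypothetical signature: float first(const float* p)
    p = Argument(ptr(float_), name="p")

    with Function("first", (p,), float_, target=uarch.default + isa.avx) as first:
        reg_p = GeneralPurposeRegister64()
        LOAD.ARGUMENT(reg_p, p)

        val = XMMRegister()
        VMOVSS(val, [reg_p])        # VMOVSS(xmm, m32): load one float
        RETURN(val)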
peachpy.x86_64.avx.
VMOVUPD
(*args, **kwargs)¶Move Unaligned Packed Double-Precision Floating-Point Values
Supported forms:
VMOVUPD(xmm, xmm/m128) [AVX]
VMOVUPD(ymm, ymm/m256) [AVX]
VMOVUPD(xmm/m128, xmm) [AVX]
VMOVUPD(ymm/m256, ymm) [AVX]
VMOVUPD(m512{k}{z}, zmm) [AVX512F]
VMOVUPD(zmm{k}{z}, zmm/m512) [AVX512F]
VMOVUPD(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VMOVUPD(m256{k}{z}, ymm) [AVX512F and AVX512VL]
VMOVUPD(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VMOVUPD(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMOVUPS
(*args, **kwargs)¶Move Unaligned Packed Single-Precision Floating-Point Values
Supported forms:
VMOVUPS(xmm, xmm/m128) [AVX]
VMOVUPS(ymm, ymm/m256) [AVX]
VMOVUPS(xmm/m128, xmm) [AVX]
VMOVUPS(ymm/m256, ymm) [AVX]
VMOVUPS(m512{k}{z}, zmm) [AVX512F]
VMOVUPS(zmm{k}{z}, zmm/m512) [AVX512F]
VMOVUPS(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VMOVUPS(m256{k}{z}, ymm) [AVX512F and AVX512VL]
VMOVUPS(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VMOVUPS(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMPSADBW
(*args, **kwargs)¶Compute Multiple Packed Sums of Absolute Difference
Supported forms:
VMPSADBW(xmm, xmm, xmm/m128, imm8) [AVX]
VMPSADBW(ymm, ymm, ymm/m256, imm8) [AVX2]
peachpy.x86_64.avx.
VMULPD
(*args, **kwargs)¶Multiply Packed Double-Precision Floating-Point Values
Supported forms:
VMULPD(xmm, xmm, xmm/m128) [AVX]
VMULPD(ymm, ymm, ymm/m256) [AVX]
VMULPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VMULPD(zmm{k}{z}, zmm, zmm, {er}) [AVX512F]
VMULPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VMULPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VMULPD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VMULPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VMULPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VMULPS
(*args, **kwargs)¶Multiply Packed Single-Precision Floating-Point Values
Supported forms:
VMULPS(xmm, xmm, xmm/m128) [AVX]
VMULPS(ymm, ymm, ymm/m256) [AVX]
VMULPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VMULPS(zmm{k}{z}, zmm, zmm, {er}) [AVX512F]
VMULPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VMULPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VMULPS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VMULPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VMULPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
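Combined with VBROADCASTSS from the same module, the packed multiply scales a vector by a scalar taken from memory. A hypothetical sketch (names and the ptr(float_) spelling are assumptions):

    from peachpy import *
    from peachpy.x86_64 import *

    # Hypothetical signature: void scale8(float* x, const float* s); x[i] *= *s for i = 0..7
    x = Argument(ptr(float_), name="x")
    s = Argument(ptr(float_), name="s")

    with Function("scale8", (x, s), target=uarch.default + isa.avx) as scale8:
        reg_x = GeneralPurposeRegister64()
        reg_s = GeneralPurposeRegister64()
        LOAD.ARGUMENT(reg_x, x)
        LOAD.ARGUMENT(reg_s, s)

        factor = YMMRegister()
        VBROADCASTSS(factor, [reg_s])   # splat *s across all 8 lanes
        vec = YMMRegister()
        VMOVUPS(vec, [reg_x])
        VMULPS(vec, vec, factor)        # VMULPS(ymm, ymm, ymm/m256)
        VMOVUPS([reg_x], vec)
        RETURN()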
peachpy.x86_64.avx.
VMULSD
(*args, **kwargs)¶Multiply Scalar Double-Precision Floating-Point Values
Supported forms:
VMULSD(xmm, xmm, xmm/m64) [AVX]
VMULSD(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
VMULSD(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VMULSS
(*args, **kwargs)¶Multiply Scalar Single-Precision Floating-Point Values
Supported forms:
VMULSS(xmm, xmm, xmm/m32) [AVX]
VMULSS(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
VMULSS(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VORPD
(*args, **kwargs)¶Bitwise Logical OR of Double-Precision Floating-Point Values
Supported forms:
VORPD(xmm, xmm, xmm/m128) [AVX]
VORPD(ymm, ymm, ymm/m256) [AVX]
VORPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512DQ]
VORPD(zmm{k}{z}, zmm, zmm) [AVX512DQ]
VORPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512DQ and AVX512VL]
VORPD(xmm{k}{z}, xmm, xmm) [AVX512DQ and AVX512VL]
VORPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512DQ and AVX512VL]
VORPD(ymm{k}{z}, ymm, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VORPS
(*args, **kwargs)¶Bitwise Logical OR of Single-Precision Floating-Point Values
Supported forms:
VORPS(xmm, xmm, xmm/m128) [AVX]
VORPS(ymm, ymm, ymm/m256) [AVX]
VORPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512DQ]
VORPS(zmm{k}{z}, zmm, zmm) [AVX512DQ]
VORPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512DQ and AVX512VL]
VORPS(xmm{k}{z}, xmm, xmm) [AVX512DQ and AVX512VL]
VORPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512DQ and AVX512VL]
VORPS(ymm{k}{z}, ymm, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VPABSB
(*args, **kwargs)¶Packed Absolute Value of Byte Integers
Supported forms:
VPABSB(xmm, xmm/m128) [AVX]
VPABSB(ymm, ymm/m256) [AVX2]
VPABSB(zmm{k}{z}, zmm/m512) [AVX512BW]
VPABSB(xmm{k}{z}, xmm/m128) [AVX512BW and AVX512VL]
VPABSB(ymm{k}{z}, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPABSD
(*args, **kwargs)¶Packed Absolute Value of Doubleword Integers
Supported forms:
VPABSD(xmm, xmm/m128) [AVX]
VPABSD(ymm, ymm/m256) [AVX2]
VPABSD(zmm{k}{z}, m512/m32bcst) [AVX512F]
VPABSD(zmm{k}{z}, zmm) [AVX512F]
VPABSD(xmm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VPABSD(ymm{k}{z}, m256/m32bcst) [AVX512F and AVX512VL]
VPABSD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPABSD(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPABSQ
(*args, **kwargs)¶Packed Absolute Value of Quadword Integers
Supported forms:
VPABSQ(zmm{k}{z}, m512/m64bcst) [AVX512F]
VPABSQ(zmm{k}{z}, zmm) [AVX512F]
VPABSQ(xmm{k}{z}, m128/m64bcst) [AVX512F and AVX512VL]
VPABSQ(ymm{k}{z}, m256/m64bcst) [AVX512F and AVX512VL]
VPABSQ(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPABSQ(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPABSW
(*args, **kwargs)¶Packed Absolute Value of Word Integers
Supported forms:
VPABSW(xmm, xmm/m128) [AVX]
VPABSW(ymm, ymm/m256) [AVX2]
VPABSW(zmm{k}{z}, zmm/m512) [AVX512BW]
VPABSW(xmm{k}{z}, xmm/m128) [AVX512BW and AVX512VL]
VPABSW(ymm{k}{z}, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPACKSSDW
(*args, **kwargs)¶Pack Doublewords into Words with Signed Saturation
Supported forms:
VPACKSSDW(xmm, xmm, xmm/m128) [AVX]
VPACKSSDW(ymm, ymm, ymm/m256) [AVX2]
VPACKSSDW(zmm{k}{z}, zmm, m512/m32bcst) [AVX512BW]
VPACKSSDW(zmm{k}{z}, zmm, zmm) [AVX512BW]
VPACKSSDW(xmm{k}{z}, xmm, m128/m32bcst) [AVX512BW and AVX512VL]
VPACKSSDW(xmm{k}{z}, xmm, xmm) [AVX512BW and AVX512VL]
VPACKSSDW(ymm{k}{z}, ymm, m256/m32bcst) [AVX512BW and AVX512VL]
VPACKSSDW(ymm{k}{z}, ymm, ymm) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPACKSSWB
(*args, **kwargs)¶Pack Words into Bytes with Signed Saturation
Supported forms:
VPACKSSWB(xmm, xmm, xmm/m128) [AVX]
VPACKSSWB(ymm, ymm, ymm/m256) [AVX2]
VPACKSSWB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPACKSSWB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPACKSSWB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPACKUSDW
(*args, **kwargs)¶Pack Doublewords into Words with Unsigned Saturation
Supported forms:
VPACKUSDW(xmm, xmm, xmm/m128) [AVX]
VPACKUSDW(ymm, ymm, ymm/m256) [AVX2]
VPACKUSDW(zmm{k}{z}, zmm, m512/m32bcst) [AVX512BW]
VPACKUSDW(zmm{k}{z}, zmm, zmm) [AVX512BW]
VPACKUSDW(xmm{k}{z}, xmm, m128/m32bcst) [AVX512BW and AVX512VL]
VPACKUSDW(xmm{k}{z}, xmm, xmm) [AVX512BW and AVX512VL]
VPACKUSDW(ymm{k}{z}, ymm, m256/m32bcst) [AVX512BW and AVX512VL]
VPACKUSDW(ymm{k}{z}, ymm, ymm) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPACKUSWB
(*args, **kwargs)¶Pack Words into Bytes with Unsigned Saturation
Supported forms:
VPACKUSWB(xmm, xmm, xmm/m128) [AVX]
VPACKUSWB(ymm, ymm, ymm/m256) [AVX2]
VPACKUSWB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPACKUSWB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPACKUSWB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPADDB
(*args, **kwargs)¶Add Packed Byte Integers
Supported forms:
VPADDB(xmm, xmm, xmm/m128) [AVX]
VPADDB(ymm, ymm, ymm/m256) [AVX2]
VPADDB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPADDB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPADDB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPADDD
(*args, **kwargs)¶Add Packed Doubleword Integers
Supported forms:
VPADDD(xmm, xmm, xmm/m128) [AVX]
VPADDD(ymm, ymm, ymm/m256) [AVX2]
VPADDD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPADDD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPADDD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPADDD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPADDD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPADDD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
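A minimal integer sketch with hypothetical names: add four packed 32-bit integers in place using the AVX form VPADDD(xmm, xmm, xmm/m128).

    from peachpy import *
    from peachpy.x86_64 import *

    # Hypothetical signature: void add4(int32_t* x, const int32_t* y); x[i] += y[i] for i = 0..3
    x = Argument(ptr(int32_t), name="x")
    y = Argument(ptr(int32_t), name="y")

    with Function("add4", (x, y), target=uarch.default + isa.avx) as add4:
        reg_x = GeneralPurposeRegister64()
        reg_y = GeneralPurposeRegister64()
        LOAD.ARGUMENT(reg_x, x)
        LOAD.ARGUMENT(reg_y, y)

        vec = XMMRegister()
        VMOVDQU(vec, [reg_x])
        VPADDD(vec, vec, [reg_y])   # wrap-around addition of 4 doublewords
        VMOVDQU([reg_x], vec)
        RETURN()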
peachpy.x86_64.avx.
VPADDQ
(*args, **kwargs)¶Add Packed Quadword Integers
Supported forms:
VPADDQ(xmm, xmm, xmm/m128) [AVX]
VPADDQ(ymm, ymm, ymm/m256) [AVX2]
VPADDQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPADDQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPADDQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPADDQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPADDQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPADDQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPADDSB
(*args, **kwargs)¶Add Packed Signed Byte Integers with Signed Saturation
Supported forms:
VPADDSB(xmm, xmm, xmm/m128) [AVX]
VPADDSB(ymm, ymm, ymm/m256) [AVX2]
VPADDSB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPADDSB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPADDSB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPADDSW
(*args, **kwargs)¶Add Packed Signed Word Integers with Signed Saturation
Supported forms:
VPADDSW(xmm, xmm, xmm/m128) [AVX]
VPADDSW(ymm, ymm, ymm/m256) [AVX2]
VPADDSW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPADDSW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPADDSW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPADDUSB
(*args, **kwargs)¶Add Packed Unsigned Byte Integers with Unsigned Saturation
Supported forms:
VPADDUSB(xmm, xmm, xmm/m128) [AVX]
VPADDUSB(ymm, ymm, ymm/m256) [AVX2]
VPADDUSB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPADDUSB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPADDUSB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPADDUSW
(*args, **kwargs)¶Add Packed Unsigned Word Integers with Unsigned Saturation
Supported forms:
VPADDUSW(xmm, xmm, xmm/m128) [AVX]
VPADDUSW(ymm, ymm, ymm/m256) [AVX2]
VPADDUSW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPADDUSW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPADDUSW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPADDW
(*args, **kwargs)¶Add Packed Word Integers
Supported forms:
VPADDW(xmm, xmm, xmm/m128) [AVX]
VPADDW(ymm, ymm, ymm/m256) [AVX2]
VPADDW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPADDW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPADDW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPALIGNR
(*args, **kwargs)¶Packed Align Right
Supported forms:
VPALIGNR(xmm, xmm, xmm/m128, imm8) [AVX]
VPALIGNR(ymm, ymm, ymm/m256, imm8) [AVX2]
VPALIGNR(zmm{k}{z}, zmm, zmm/m512, imm8) [AVX512BW]
VPALIGNR(xmm{k}{z}, xmm, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPALIGNR(ymm{k}{z}, ymm, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPAND
(*args, **kwargs)¶Packed Bitwise Logical AND
Supported forms:
VPAND(xmm, xmm, xmm/m128) [AVX]
VPAND(ymm, ymm, ymm/m256) [AVX2]
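VPAND treats its operands as one opaque 128- or 256-bit block, so a single instruction masks 16 bytes at once. A hypothetical in-place AND of two byte buffers (names are illustrative):

    from peachpy import *
    from peachpy.x86_64 import *

    # Hypothetical signature: void and16(uint8_t* x, const uint8_t* mask); x[i] &= mask[i] for i = 0..15
    x = Argument(ptr(uint8_t), name="x")
    mask = Argument(ptr(const_uint8_t), name="mask")

    with Function("and16", (x, mask), target=uarch.default + isa.avx) as and16:
        reg_x = GeneralPurposeRegister64()
        reg_mask = GeneralPurposeRegister64()
        LOAD.ARGUMENT(reg_x, x)
        LOAD.ARGUMENT(reg_mask, mask)

        vec = XMMRegister()
        VMOVDQU(vec, [reg_x])
        VPAND(vec, vec, [reg_mask])     # bitwise AND of two 128-bit blocks
        VMOVDQU([reg_x], vec)
        RETURN()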
peachpy.x86_64.avx.
VPANDD
(*args, **kwargs)¶Bitwise Logical AND of Packed Doubleword Integers
Supported forms:
VPANDD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPANDD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPANDD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPANDD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPANDD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPANDD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPANDN
(*args, **kwargs)¶Packed Bitwise Logical AND NOT
Supported forms:
VPANDN(xmm, xmm, xmm/m128) [AVX]
VPANDN(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPANDND
(*args, **kwargs)¶Bitwise Logical AND NOT of Packed Doubleword Integers
Supported forms:
VPANDND(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPANDND(zmm{k}{z}, zmm, zmm) [AVX512F]
VPANDND(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPANDND(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPANDND(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPANDND(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPANDNQ
(*args, **kwargs)¶Bitwise Logical AND NOT of Packed Quadword Integers
Supported forms:
VPANDNQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPANDNQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPANDNQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPANDNQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPANDNQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPANDNQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPANDQ
(*args, **kwargs)¶Bitwise Logical AND of Packed Quadword Integers
Supported forms:
VPANDQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPANDQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPANDQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPANDQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPANDQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPANDQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPAVGB
(*args, **kwargs)¶Average Packed Byte Integers
Supported forms:
VPAVGB(xmm, xmm, xmm/m128) [AVX]
VPAVGB(ymm, ymm, ymm/m256) [AVX2]
VPAVGB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPAVGB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPAVGB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPAVGW
(*args, **kwargs)¶Average Packed Word Integers
Supported forms:
VPAVGW(xmm, xmm, xmm/m128) [AVX]
VPAVGW(ymm, ymm, ymm/m256) [AVX2]
VPAVGW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPAVGW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPAVGW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPBLENDD
(*args, **kwargs)¶Blend Packed Doublewords
Supported forms:
VPBLENDD(xmm, xmm, xmm/m128, imm8) [AVX2]
VPBLENDD(ymm, ymm, ymm/m256, imm8) [AVX2]
peachpy.x86_64.avx.
VPBLENDMB
(*args, **kwargs)¶Blend Byte Vectors Using an OpMask Control
Supported forms:
VPBLENDMB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPBLENDMB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPBLENDMB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPBLENDMD
(*args, **kwargs)¶Blend Doubleword Vectors Using an OpMask Control
Supported forms:
VPBLENDMD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPBLENDMD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPBLENDMD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPBLENDMD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPBLENDMD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPBLENDMD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPBLENDMQ
(*args, **kwargs)¶Blend Quadword Vectors Using an OpMask Control
Supported forms:
VPBLENDMQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPBLENDMQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPBLENDMQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPBLENDMQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPBLENDMQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPBLENDMQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPBLENDMW
(*args, **kwargs)¶Blend Word Vectors Using an OpMask Control
Supported forms:
VPBLENDMW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPBLENDMW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPBLENDMW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPBLENDVB
(*args, **kwargs)¶Variable Blend Packed Bytes
Supported forms:
VPBLENDVB(xmm, xmm, xmm/m128, xmm) [AVX]
VPBLENDVB(ymm, ymm, ymm/m256, ymm) [AVX2]
peachpy.x86_64.avx.
VPBLENDW
(*args, **kwargs)¶Blend Packed Words
Supported forms:
VPBLENDW(xmm, xmm, xmm/m128, imm8) [AVX]
VPBLENDW(ymm, ymm, ymm/m256, imm8) [AVX2]
peachpy.x86_64.avx.
VPBROADCASTB
(*args, **kwargs)¶Broadcast Byte Integer
Supported forms:
VPBROADCASTB(xmm, xmm/m8) [AVX2]
VPBROADCASTB(ymm, xmm/m8) [AVX2]
VPBROADCASTB(zmm{k}{z}, r32) [AVX512BW]
VPBROADCASTB(zmm{k}{z}, xmm/m8) [AVX512BW]
VPBROADCASTB(xmm{k}{z}, r32) [AVX512BW and AVX512VL]
VPBROADCASTB(ymm{k}{z}, r32) [AVX512BW and AVX512VL]
VPBROADCASTB(xmm{k}{z}, xmm/m8) [AVX512BW and AVX512VL]
VPBROADCASTB(ymm{k}{z}, xmm/m8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPBROADCASTD
(*args, **kwargs)¶Broadcast Doubleword Integer
Supported forms:
VPBROADCASTD(xmm, xmm/m32) [AVX2]
VPBROADCASTD(ymm, xmm/m32) [AVX2]
VPBROADCASTD(zmm{k}{z}, xmm) [AVX512F]
VPBROADCASTD(zmm{k}{z}, r32/m32) [AVX512F]
VPBROADCASTD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPBROADCASTD(ymm{k}{z}, xmm) [AVX512F and AVX512VL]
VPBROADCASTD(xmm{k}{z}, r32/m32) [AVX512F and AVX512VL]
VPBROADCASTD(ymm{k}{z}, r32/m32) [AVX512F and AVX512VL]
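The AVX2 form accepts an m32 source, so a single VPBROADCASTD can splat a value straight from memory. A hypothetical fill sketch with illustrative names:

    from peachpy import *
    from peachpy.x86_64 import *

    # Hypothetical signature: void fill4(int32_t* dst, const int32_t* v); dst[i] = *v for i = 0..3
    dst = Argument(ptr(int32_t), name="dst")
    v = Argument(ptr(int32_t), name="v")

    with Function("fill4", (dst, v), target=uarch.default + isa.avx2) as fill4:
        reg_dst = GeneralPurposeRegister64()
        reg_v = GeneralPurposeRegister64()
        LOAD.ARGUMENT(reg_dst, dst)
        LOAD.ARGUMENT(reg_v, v)

        vec = XMMRegister()
        VPBROADCASTD(vec, [reg_v])      # splat the doubleword at *v into all 4 lanes
        VMOVDQU([reg_dst], vec)
        RETURN()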
peachpy.x86_64.avx.
VPBROADCASTMB2Q
(*args, **kwargs)¶Broadcast Low Byte of Mask Register to Packed Quadword Values
Supported forms:
VPBROADCASTMB2Q(zmm, k) [AVX512CD]
VPBROADCASTMB2Q(xmm, k) [AVX512VL and AVX512CD]
VPBROADCASTMB2Q(ymm, k) [AVX512VL and AVX512CD]
peachpy.x86_64.avx.
VPBROADCASTMW2D
(*args, **kwargs)¶Broadcast Low Word of Mask Register to Packed Doubleword Values
Supported forms:
VPBROADCASTMW2D(zmm, k) [AVX512CD]
VPBROADCASTMW2D(xmm, k) [AVX512VL and AVX512CD]
VPBROADCASTMW2D(ymm, k) [AVX512VL and AVX512CD]
peachpy.x86_64.avx.
VPBROADCASTQ
(*args, **kwargs)¶Broadcast Quadword Integer
Supported forms:
VPBROADCASTQ(xmm, xmm/m64) [AVX2]
VPBROADCASTQ(ymm, xmm/m64) [AVX2]
VPBROADCASTQ(zmm{k}{z}, xmm) [AVX512F]
VPBROADCASTQ(zmm{k}{z}, r64/m64) [AVX512F]
VPBROADCASTQ(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPBROADCASTQ(ymm{k}{z}, xmm) [AVX512F and AVX512VL]
VPBROADCASTQ(xmm{k}{z}, r64/m64) [AVX512F and AVX512VL]
VPBROADCASTQ(ymm{k}{z}, r64/m64) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPBROADCASTW
(*args, **kwargs)¶Broadcast Word Integer
Supported forms:
VPBROADCASTW(xmm, xmm/m16) [AVX2]
VPBROADCASTW(ymm, xmm/m16) [AVX2]
VPBROADCASTW(zmm{k}{z}, r32) [AVX512BW]
VPBROADCASTW(zmm{k}{z}, xmm/m16) [AVX512BW]
VPBROADCASTW(xmm{k}{z}, r32) [AVX512BW and AVX512VL]
VPBROADCASTW(ymm{k}{z}, r32) [AVX512BW and AVX512VL]
VPBROADCASTW(xmm{k}{z}, xmm/m16) [AVX512BW and AVX512VL]
VPBROADCASTW(ymm{k}{z}, xmm/m16) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPCMPB
(*args, **kwargs)¶Compare Packed Signed Byte Values
Supported forms:
VPCMPB(k{k}, zmm, zmm/m512, imm8) [AVX512BW]
VPCMPB(k{k}, xmm, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPCMPB(k{k}, ymm, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPCMPD
(*args, **kwargs)¶Compare Packed Signed Doubleword Values
Supported forms:
VPCMPD(k{k}, zmm, m512/m32bcst, imm8) [AVX512F]
VPCMPD(k{k}, zmm, zmm, imm8) [AVX512F]
VPCMPD(k{k}, xmm, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VPCMPD(k{k}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VPCMPD(k{k}, ymm, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VPCMPD(k{k}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPCMPEQB
(*args, **kwargs)¶Compare Packed Byte Data for Equality
Supported forms:
VPCMPEQB(xmm, xmm, xmm/m128) [AVX]
VPCMPEQB(ymm, ymm, ymm/m256) [AVX2]
VPCMPEQB(k{k}, zmm, zmm/m512) [AVX512BW]
VPCMPEQB(k{k}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPCMPEQB(k{k}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPCMPEQD
(*args, **kwargs)¶Compare Packed Doubleword Data for Equality
Supported forms:
VPCMPEQD(xmm, xmm, xmm/m128) [AVX]
VPCMPEQD(ymm, ymm, ymm/m256) [AVX2]
VPCMPEQD(k{k}, zmm, m512/m32bcst) [AVX512F]
VPCMPEQD(k{k}, zmm, zmm) [AVX512F]
VPCMPEQD(k{k}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPCMPEQD(k{k}, xmm, xmm) [AVX512F and AVX512VL]
VPCMPEQD(k{k}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPCMPEQD(k{k}, ymm, ymm) [AVX512F and AVX512VL]
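The legacy-style AVX forms write an all-ones/all-zeros lane mask into a vector register rather than an opmask. The hypothetical sketch below converts that lane mask into a 4-bit integer with VMOVMSKPS (listed earlier in this module); names and types are illustrative.

    from peachpy import *
    from peachpy.x86_64 import *

    # Hypothetical signature: uint32_t eqmask4(const int32_t* a, const int32_t* b)
    # Bit i of the result is set iff a[i] == b[i], for i = 0..3
    a = Argument(ptr(int32_t), name="a")
    b = Argument(ptr(int32_t), name="b")

    with Function("eqmask4", (a, b), uint32_t, target=uarch.default + isa.avx) as eqmask4:
        reg_a = GeneralPurposeRegister64()
        reg_b = GeneralPurposeRegister64()
        LOAD.ARGUMENT(reg_a, a)
        LOAD.ARGUMENT(reg_b, b)

        vec = XMMRegister()
        VMOVDQU(vec, [reg_a])
        VPCMPEQD(vec, vec, [reg_b])     # lanes become all-ones where a[i] == b[i]
        result = GeneralPurposeRegister32()
        VMOVMSKPS(result, vec)          # gather the four lane sign bits into bits 0..3
        RETURN(result)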
peachpy.x86_64.avx.
VPCMPEQQ
(*args, **kwargs)¶Compare Packed Quadword Data for Equality
Supported forms:
VPCMPEQQ(xmm, xmm, xmm/m128) [AVX]
VPCMPEQQ(ymm, ymm, ymm/m256) [AVX2]
VPCMPEQQ(k{k}, zmm, m512/m64bcst) [AVX512F]
VPCMPEQQ(k{k}, zmm, zmm) [AVX512F]
VPCMPEQQ(k{k}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPCMPEQQ(k{k}, xmm, xmm) [AVX512F and AVX512VL]
VPCMPEQQ(k{k}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPCMPEQQ(k{k}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPCMPEQW
(*args, **kwargs)¶Compare Packed Word Data for Equality
Supported forms:
VPCMPEQW(xmm, xmm, xmm/m128) [AVX]
VPCMPEQW(ymm, ymm, ymm/m256) [AVX2]
VPCMPEQW(k{k}, zmm, zmm/m512) [AVX512BW]
VPCMPEQW(k{k}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPCMPEQW(k{k}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPCMPESTRI
(*args, **kwargs)¶Packed Compare Explicit Length Strings, Return Index
Supported forms:
VPCMPESTRI(xmm, xmm/m128, imm8) [AVX]
peachpy.x86_64.avx.
VPCMPESTRM
(*args, **kwargs)¶Packed Compare Explicit Length Strings, Return Mask
Supported forms:
VPCMPESTRM(xmm, xmm/m128, imm8) [AVX]
peachpy.x86_64.avx.
VPCMPGTB
(*args, **kwargs)¶Compare Packed Signed Byte Integers for Greater Than
Supported forms:
VPCMPGTB(xmm, xmm, xmm/m128) [AVX]
VPCMPGTB(ymm, ymm, ymm/m256) [AVX2]
VPCMPGTB(k{k}, zmm, zmm/m512) [AVX512BW]
VPCMPGTB(k{k}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPCMPGTB(k{k}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPCMPGTD
(*args, **kwargs)¶Compare Packed Signed Doubleword Integers for Greater Than
Supported forms:
VPCMPGTD(xmm, xmm, xmm/m128) [AVX]
VPCMPGTD(ymm, ymm, ymm/m256) [AVX2]
VPCMPGTD(k{k}, zmm, m512/m32bcst) [AVX512F]
VPCMPGTD(k{k}, zmm, zmm) [AVX512F]
VPCMPGTD(k{k}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPCMPGTD(k{k}, xmm, xmm) [AVX512F and AVX512VL]
VPCMPGTD(k{k}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPCMPGTD(k{k}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPCMPGTQ
(*args, **kwargs)¶Compare Packed Data for Greater Than
Supported forms:
VPCMPGTQ(xmm, xmm, xmm/m128) [AVX]
VPCMPGTQ(ymm, ymm, ymm/m256) [AVX2]
VPCMPGTQ(k{k}, zmm, m512/m64bcst) [AVX512F]
VPCMPGTQ(k{k}, zmm, zmm) [AVX512F]
VPCMPGTQ(k{k}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPCMPGTQ(k{k}, xmm, xmm) [AVX512F and AVX512VL]
VPCMPGTQ(k{k}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPCMPGTQ(k{k}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPCMPGTW
(*args, **kwargs)¶Compare Packed Signed Word Integers for Greater Than
Supported forms:
VPCMPGTW(xmm, xmm, xmm/m128) [AVX]
VPCMPGTW(ymm, ymm, ymm/m256) [AVX2]
VPCMPGTW(k{k}, zmm, zmm/m512) [AVX512BW]
VPCMPGTW(k{k}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPCMPGTW(k{k}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPCMPISTRI
(*args, **kwargs)¶Packed Compare Implicit Length Strings, Return Index
Supported forms:
VPCMPISTRI(xmm, xmm/m128, imm8) [AVX]
peachpy.x86_64.avx.
VPCMPISTRM
(*args, **kwargs)¶Packed Compare Implicit Length Strings, Return Mask
Supported forms:
VPCMPISTRM(xmm, xmm/m128, imm8) [AVX]
peachpy.x86_64.avx.
VPCMPQ
(*args, **kwargs)¶Compare Packed Signed Quadword Values
Supported forms:
VPCMPQ(k{k}, zmm, m512/m64bcst, imm8) [AVX512F]
VPCMPQ(k{k}, zmm, zmm, imm8) [AVX512F]
VPCMPQ(k{k}, xmm, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VPCMPQ(k{k}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VPCMPQ(k{k}, ymm, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VPCMPQ(k{k}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPCMPUB
(*args, **kwargs)¶Compare Packed Unsigned Byte Values
Supported forms:
VPCMPUB(k{k}, zmm, zmm/m512, imm8) [AVX512BW]
VPCMPUB(k{k}, xmm, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPCMPUB(k{k}, ymm, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPCMPUD
(*args, **kwargs)¶Compare Packed Unsigned Doubleword Values
Supported forms:
VPCMPUD(k{k}, zmm, m512/m32bcst, imm8) [AVX512F]
VPCMPUD(k{k}, zmm, zmm, imm8) [AVX512F]
VPCMPUD(k{k}, xmm, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VPCMPUD(k{k}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VPCMPUD(k{k}, ymm, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VPCMPUD(k{k}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPCMPUQ
(*args, **kwargs)¶Compare Packed Unsigned Quadword Values
Supported forms:
VPCMPUQ(k{k}, zmm, m512/m64bcst, imm8) [AVX512F]
VPCMPUQ(k{k}, zmm, zmm, imm8) [AVX512F]
VPCMPUQ(k{k}, xmm, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VPCMPUQ(k{k}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VPCMPUQ(k{k}, ymm, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VPCMPUQ(k{k}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPCMPUW
(*args, **kwargs)¶Compare Packed Unsigned Word Values
Supported forms:
VPCMPUW(k{k}, zmm, zmm/m512, imm8) [AVX512BW]
VPCMPUW(k{k}, xmm, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPCMPUW(k{k}, ymm, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPCMPW
(*args, **kwargs)¶Compare Packed Signed Word Values
Supported forms:
VPCMPW(k{k}, zmm, zmm/m512, imm8) [AVX512BW]
VPCMPW(k{k}, xmm, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPCMPW(k{k}, ymm, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPCOMPRESSD
(*args, **kwargs)¶Store Sparse Packed Doubleword Integer Values into Dense Memory/Register
Supported forms:
VPCOMPRESSD(zmm{k}{z}, zmm) [AVX512F]
VPCOMPRESSD(m512{k}{z}, zmm) [AVX512F]
VPCOMPRESSD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPCOMPRESSD(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VPCOMPRESSD(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
VPCOMPRESSD(m256{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPCOMPRESSQ
(*args, **kwargs)¶Store Sparse Packed Quadword Integer Values into Dense Memory/Register
Supported forms:
VPCOMPRESSQ(zmm{k}{z}, zmm) [AVX512F]
VPCOMPRESSQ(m512{k}{z}, zmm) [AVX512F]
VPCOMPRESSQ(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPCOMPRESSQ(m128{k}{z}, xmm) [AVX512F and AVX512VL]
VPCOMPRESSQ(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
VPCOMPRESSQ(m256{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPCONFLICTD
(*args, **kwargs)¶Detect Conflicts Within a Vector of Packed Doubleword Values into Dense Memory/Register
Supported forms:
VPCONFLICTD(zmm{k}{z}, m512/m32bcst) [AVX512CD]
VPCONFLICTD(zmm{k}{z}, zmm) [AVX512CD]
VPCONFLICTD(xmm{k}{z}, m128/m32bcst) [AVX512VL and AVX512CD]
VPCONFLICTD(ymm{k}{z}, m256/m32bcst) [AVX512VL and AVX512CD]
VPCONFLICTD(xmm{k}{z}, xmm) [AVX512VL and AVX512CD]
VPCONFLICTD(ymm{k}{z}, ymm) [AVX512VL and AVX512CD]
peachpy.x86_64.avx.
VPCONFLICTQ
(*args, **kwargs)¶Detect Conflicts Within a Vector of Packed Quadword Values into Dense Memory/Register
Supported forms:
VPCONFLICTQ(zmm{k}{z}, m512/m64bcst) [AVX512CD]
VPCONFLICTQ(zmm{k}{z}, zmm) [AVX512CD]
VPCONFLICTQ(xmm{k}{z}, m128/m64bcst) [AVX512VL and AVX512CD]
VPCONFLICTQ(ymm{k}{z}, m256/m64bcst) [AVX512VL and AVX512CD]
VPCONFLICTQ(xmm{k}{z}, xmm) [AVX512VL and AVX512CD]
VPCONFLICTQ(ymm{k}{z}, ymm) [AVX512VL and AVX512CD]
peachpy.x86_64.avx.
VPERM2F128
(*args, **kwargs)¶Permute Floating-Point Values
Supported forms:
VPERM2F128(ymm, ymm, ymm/m256, imm8) [AVX]
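The imm8 selects, for each 128-bit lane of the destination, one of the four source lanes. With both sources set to the same register and imm8 = 0x01 it simply exchanges the two halves, as in this hypothetical sketch (names and types are illustrative):

    from peachpy import *
    from peachpy.x86_64 import *

    # Hypothetical signature: void swap_halves(float* x); exchanges x[0:4] and x[4:8]
    x = Argument(ptr(float_), name="x")

    with Function("swap_halves", (x,), target=uarch.default + isa.avx) as swap_halves:
        reg_x = GeneralPurposeRegister64()
        LOAD.ARGUMENT(reg_x, x)

        vec = YMMRegister()
        VMOVUPS(vec, [reg_x])
        VPERM2F128(vec, vec, vec, 0x01)  # imm8 = 0x01: low lane gets the upper half, high lane gets the lower half
        VMOVUPS([reg_x], vec)
        RETURN()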
peachpy.x86_64.avx.
VPERM2I128
(*args, **kwargs)¶Permute 128-Bit Integer Values
Supported forms:
VPERM2I128(ymm, ymm, ymm/m256, imm8) [AVX2]
peachpy.x86_64.avx.
VPERMB
(*args, **kwargs)¶Permute Byte Integers
Supported forms:
VPERMB(zmm{k}{z}, zmm, zmm/m512) [AVX512VBMI]
VPERMB(xmm{k}{z}, xmm, xmm/m128) [AVX512VL and AVX512VBMI]
VPERMB(ymm{k}{z}, ymm, ymm/m256) [AVX512VL and AVX512VBMI]
peachpy.x86_64.avx.
VPERMD
(*args, **kwargs)¶Permute Doubleword Integers
Supported forms:
VPERMD(ymm, ymm, ymm/m256) [AVX2]
VPERMD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPERMD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPERMD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMI2B
(*args, **kwargs)¶Full Permute of Bytes From Two Tables Overwriting the Index
Supported forms:
VPERMI2B(zmm{k}{z}, zmm, zmm/m512) [AVX512VBMI]
VPERMI2B(xmm{k}{z}, xmm, xmm/m128) [AVX512VL and AVX512VBMI]
VPERMI2B(ymm{k}{z}, ymm, ymm/m256) [AVX512VL and AVX512VBMI]
peachpy.x86_64.avx.
VPERMI2D
(*args, **kwargs)¶Full Permute of Doublewords From Two Tables Overwriting the Index
Supported forms:
VPERMI2D(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPERMI2D(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMI2D(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPERMI2D(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPERMI2D(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPERMI2D(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMI2PD
(*args, **kwargs)¶Full Permute of Double-Precision Floating-Point Values From Two Tables Overwriting the Index
Supported forms:
VPERMI2PD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPERMI2PD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMI2PD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPERMI2PD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPERMI2PD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPERMI2PD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMI2PS
(*args, **kwargs)¶Full Permute of Single-Precision Floating-Point Values From Two Tables Overwriting the Index
Supported forms:
VPERMI2PS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPERMI2PS(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMI2PS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPERMI2PS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPERMI2PS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPERMI2PS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMI2Q
(*args, **kwargs)¶Full Permute of Quadwords From Two Tables Overwriting the Index
Supported forms:
VPERMI2Q(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPERMI2Q(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMI2Q(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPERMI2Q(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPERMI2Q(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPERMI2Q(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMI2W
(*args, **kwargs)¶Full Permute of Words From Two Tables Overwriting the Index
Supported forms:
VPERMI2W(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPERMI2W(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPERMI2W(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPERMILPD
(*args, **kwargs)¶Permute Double-Precision Floating-Point Values
Supported forms:
VPERMILPD(xmm, xmm, xmm/m128) [AVX]
VPERMILPD(xmm, xmm/m128, imm8) [AVX]
VPERMILPD(ymm, ymm, ymm/m256) [AVX]
VPERMILPD(ymm, ymm/m256, imm8) [AVX]
VPERMILPD(zmm{k}{z}, m512/m64bcst, imm8) [AVX512F]
VPERMILPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPERMILPD(zmm{k}{z}, zmm, imm8) [AVX512F]
VPERMILPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMILPD(xmm{k}{z}, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VPERMILPD(ymm{k}{z}, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VPERMILPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPERMILPD(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPERMILPD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPERMILPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPERMILPD(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VPERMILPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMILPS
(*args, **kwargs)¶Permute Single-Precision Floating-Point Values
Supported forms:
VPERMILPS(xmm, xmm, xmm/m128) [AVX]
VPERMILPS(xmm, xmm/m128, imm8) [AVX]
VPERMILPS(ymm, ymm, ymm/m256) [AVX]
VPERMILPS(ymm, ymm/m256, imm8) [AVX]
VPERMILPS(zmm{k}{z}, m512/m32bcst, imm8) [AVX512F]
VPERMILPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPERMILPS(zmm{k}{z}, zmm, imm8) [AVX512F]
VPERMILPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMILPS(xmm{k}{z}, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VPERMILPS(ymm{k}{z}, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VPERMILPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPERMILPS(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPERMILPS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPERMILPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPERMILPS(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VPERMILPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
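In the AVX immediate form of VPERMILPS, imm8 packs four 2-bit selectors, one per destination element, choosing source elements within each 128-bit lane. A minimal PeachPy sketch that swaps adjacent single-precision pairs; the function and argument names are illustrative:

from peachpy import *
from peachpy.x86_64 import *

data = Argument(ptr(float_), name="data")

with Function("swap_pairs", (data,)) as asm_fn:
    reg_data = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_data, data)

    x = XMMRegister()
    VMOVUPS(x, [reg_data])
    # imm8 = 0b10110001 (0xB1) selects elements 1, 0, 3, 2: each adjacent pair is swapped
    VPERMILPS(x, x, 0xB1)
    VMOVUPS([reg_data], x)
    RETURN()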
peachpy.x86_64.avx.
VPERMPD
(*args, **kwargs)¶Permute Double-Precision Floating-Point Elements
Supported forms:
VPERMPD(ymm, ymm/m256, imm8) [AVX2]
VPERMPD(zmm{k}{z}, m512/m64bcst, imm8) [AVX512F]
VPERMPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPERMPD(zmm{k}{z}, zmm, imm8) [AVX512F]
VPERMPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMPD(ymm{k}{z}, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VPERMPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPERMPD(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VPERMPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMPS
(*args, **kwargs)¶Permute Single-Precision Floating-Point Elements
Supported forms:
VPERMPS(ymm, ymm, ymm/m256) [AVX2]
VPERMPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPERMPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPERMPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMQ
(*args, **kwargs)¶Permute Quadword Integers
Supported forms:
VPERMQ(ymm, ymm/m256, imm8) [AVX2]
VPERMQ(zmm{k}{z}, m512/m64bcst, imm8) [AVX512F]
VPERMQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPERMQ(zmm{k}{z}, zmm, imm8) [AVX512F]
VPERMQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMQ(ymm{k}{z}, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VPERMQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPERMQ(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VPERMQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
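The AVX2 form of VPERMQ permutes the four 64-bit elements across the whole 256-bit register according to four 2-bit selectors in imm8. A minimal PeachPy sketch (illustrative names and types) that reverses the element order with imm8 = 0x1B:

from peachpy import *
from peachpy.x86_64 import *

data = Argument(ptr(uint64_t), name="data")

with Function("reverse_qwords", (data,)) as asm_fn:
    reg_data = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_data, data)

    y = YMMRegister()
    VMOVDQU(y, [reg_data])
    # imm8 = 0x1B = 0b00011011 selects quadwords 3, 2, 1, 0: element order is reversed
    VPERMQ(y, y, 0x1B)
    VMOVDQU([reg_data], y)
    RETURN()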
peachpy.x86_64.avx.
VPERMT2B
(*args, **kwargs)¶Full Permute of Bytes From Two Tables Overwriting a Table
Supported forms:
VPERMT2B(zmm{k}{z}, zmm, zmm/m512) [AVX512VBMI]
VPERMT2B(xmm{k}{z}, xmm, xmm/m128) [AVX512VL and AVX512VBMI]
VPERMT2B(ymm{k}{z}, ymm, ymm/m256) [AVX512VL and AVX512VBMI]
peachpy.x86_64.avx.
VPERMT2D
(*args, **kwargs)¶Full Permute of Doublewords From Two Tables Overwriting a Table
Supported forms:
VPERMT2D(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPERMT2D(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMT2D(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPERMT2D(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPERMT2D(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPERMT2D(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMT2PD
(*args, **kwargs)¶Full Permute of Double-Precision Floating-Point Values From Two Tables Overwriting a Table
Supported forms:
VPERMT2PD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPERMT2PD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMT2PD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPERMT2PD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPERMT2PD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPERMT2PD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMT2PS
(*args, **kwargs)¶Full Permute of Single-Precision Floating-Point Values From Two Tables Overwriting a Table
Supported forms:
VPERMT2PS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPERMT2PS(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMT2PS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPERMT2PS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPERMT2PS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPERMT2PS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMT2Q
(*args, **kwargs)¶Full Permute of Quadwords From Two Tables Overwriting a Table
Supported forms:
VPERMT2Q(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPERMT2Q(zmm{k}{z}, zmm, zmm) [AVX512F]
VPERMT2Q(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPERMT2Q(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPERMT2Q(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPERMT2Q(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPERMT2W
(*args, **kwargs)¶Full Permute of Words From Two Tables Overwriting a Table
Supported forms:
VPERMT2W(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPERMT2W(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPERMT2W(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPERMW
(*args, **kwargs)¶Permute Word Integers
Supported forms:
VPERMW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPERMW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPERMW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPEXPANDD
(*args, **kwargs)¶Load Sparse Packed Doubleword Integer Values from Dense Memory/Register
Supported forms:
VPEXPANDD(zmm{k}{z}, zmm/m512) [AVX512F]
VPEXPANDD(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VPEXPANDD(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPEXPANDQ
(*args, **kwargs)¶Load Sparse Packed Quadword Integer Values from Dense Memory/Register
Supported forms:
VPEXPANDQ(zmm{k}{z}, zmm/m512) [AVX512F]
VPEXPANDQ(xmm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
VPEXPANDQ(ymm{k}{z}, ymm/m256) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPEXTRB
(*args, **kwargs)¶Extract Byte
Supported forms:
VPEXTRB(r32, xmm, imm8) [AVX]
VPEXTRB(m8, xmm, imm8) [AVX]
VPEXTRB(r32, xmm, imm8) [AVX512BW]
VPEXTRB(m8, xmm, imm8) [AVX512BW]
peachpy.x86_64.avx.
VPEXTRD
(*args, **kwargs)¶Extract Doubleword
Supported forms:
VPEXTRD(r32/m32, xmm, imm8) [AVX]
VPEXTRD(r32/m32, xmm, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VPEXTRQ
(*args, **kwargs)¶Extract Quadword
Supported forms:
VPEXTRQ(r64/m64, xmm, imm8) [AVX]
VPEXTRQ(r64/m64, xmm, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VPEXTRW
(*args, **kwargs)¶Extract Word
Supported forms:
VPEXTRW(r32, xmm, imm8) [AVX]
VPEXTRW(m16, xmm, imm8) [AVX]
VPEXTRW(r32, xmm, imm8) [AVX512BW]
VPEXTRW(m16, xmm, imm8) [AVX512BW]
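The register form of VPEXTRW zero-extends the selected 16-bit element into a 32-bit general-purpose register. A minimal PeachPy sketch (function name, argument name and types are illustrative) that returns element 3 of a vector loaded from memory:

from peachpy import *
from peachpy.x86_64 import *

src = Argument(ptr(const_uint16_t), name="src")

with Function("extract_lane3", (src,), uint32_t) as asm_fn:
    reg_src = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_src, src)

    x = XMMRegister()
    VMOVDQU(x, [reg_src])

    result = GeneralPurposeRegister32()
    # imm8 = 3 selects the fourth word; it is zero-extended into the 32-bit register
    VPEXTRW(result, x, 3)
    RETURN(result)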
peachpy.x86_64.avx.
VPGATHERDD
(*args, **kwargs)¶Gather Packed Doubleword Values Using Signed Doubleword Indices
Supported forms:
VPGATHERDD(xmm, vm32x, xmm) [AVX2]
VPGATHERDD(ymm, vm32y, ymm) [AVX2]
VPGATHERDD(zmm{k}, vm32z) [AVX512F]
VPGATHERDD(xmm{k}, vm32x) [AVX512F and AVX512VL]
VPGATHERDD(ymm{k}, vm32y) [AVX512F and AVX512VL]
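In the AVX2 forms of VPGATHERDD the second operand is a VSIB memory operand (base register plus a vector of signed dword indices times a scale) and the third operand is a per-element mask whose sign bits select which elements are gathered; the mask is cleared as the gather completes. The sketch below assumes PeachPy accepts the usual [base + index_vector * scale] syntax for the vm32y operand, which is an assumption here; names and argument types are likewise illustrative.

from peachpy import *
from peachpy.x86_64 import *

base = Argument(ptr(const_uint32_t), name="base")
index_ptr = Argument(ptr(const_uint32_t), name="indices")
out = Argument(ptr(uint32_t), name="out")

with Function("gather8", (base, index_ptr, out)) as asm_fn:
    reg_base = GeneralPurposeRegister64()
    reg_idx = GeneralPurposeRegister64()
    reg_out = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_base, base)
    LOAD.ARGUMENT(reg_idx, index_ptr)
    LOAD.ARGUMENT(reg_out, out)

    y_idx = YMMRegister()
    VMOVDQU(y_idx, [reg_idx])          # eight signed dword indices

    y_mask = YMMRegister()
    VPCMPEQD(y_mask, y_mask, y_mask)   # all-ones mask: gather every element

    y_dst = YMMRegister()
    VPXOR(y_dst, y_dst, y_dst)         # keep the destination defined (merged for masked-off lanes)
    # Destination, vm32y memory operand, write mask; index, mask and destination must be distinct
    VPGATHERDD(y_dst, [reg_base + y_idx * 4], y_mask)

    VMOVDQU([reg_out], y_dst)
    RETURN()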
peachpy.x86_64.avx.
VPGATHERDQ
(*args, **kwargs)¶Gather Packed Quadword Values Using Signed Doubleword Indices
Supported forms:
VPGATHERDQ(xmm, vm32x, xmm) [AVX2]
VPGATHERDQ(ymm, vm32x, ymm) [AVX2]
VPGATHERDQ(zmm{k}, vm32y) [AVX512F]
VPGATHERDQ(xmm{k}, vm32x) [AVX512F and AVX512VL]
VPGATHERDQ(ymm{k}, vm32x) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPGATHERQD
(*args, **kwargs)¶Gather Packed Doubleword Values Using Signed Quadword Indices
Supported forms:
VPGATHERQD(xmm, vm64x, xmm) [AVX2]
VPGATHERQD(xmm, vm64y, xmm) [AVX2]
VPGATHERQD(ymm{k}, vm64z) [AVX512F]
VPGATHERQD(xmm{k}, vm64x) [AVX512F and AVX512VL]
VPGATHERQD(xmm{k}, vm64y) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPGATHERQQ
(*args, **kwargs)¶Gather Packed Quadword Values Using Signed Quadword Indices
Supported forms:
VPGATHERQQ(xmm, vm64x, xmm) [AVX2]
VPGATHERQQ(ymm, vm64y, ymm) [AVX2]
VPGATHERQQ(zmm{k}, vm64z) [AVX512F]
VPGATHERQQ(xmm{k}, vm64x) [AVX512F and AVX512VL]
VPGATHERQQ(ymm{k}, vm64y) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPHADDD
(*args, **kwargs)¶Packed Horizontal Add Doubleword Integer
Supported forms:
VPHADDD(xmm, xmm, xmm/m128) [AVX]
VPHADDD(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPHADDSW
(*args, **kwargs)¶Packed Horizontal Add Signed Word Integers with Signed Saturation
Supported forms:
VPHADDSW(xmm, xmm, xmm/m128) [AVX]
VPHADDSW(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPHADDW
(*args, **kwargs)¶Packed Horizontal Add Word Integers
Supported forms:
VPHADDW(xmm, xmm, xmm/m128) [AVX]
VPHADDW(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPHMINPOSUW
(*args, **kwargs)¶Packed Horizontal Minimum of Unsigned Word Integers
Supported forms:
VPHMINPOSUW(xmm, xmm/m128) [AVX]
peachpy.x86_64.avx.
VPHSUBD
(*args, **kwargs)¶Packed Horizontal Subtract Doubleword Integers
Supported forms:
VPHSUBD(xmm, xmm, xmm/m128) [AVX]
VPHSUBD(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPHSUBSW
(*args, **kwargs)¶Packed Horizontal Subtract Signed Word Integers with Signed Saturation
Supported forms:
VPHSUBSW(xmm, xmm, xmm/m128) [AVX]
VPHSUBSW(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPHSUBW
(*args, **kwargs)¶Packed Horizontal Subtract Word Integers
Supported forms:
VPHSUBW(xmm, xmm, xmm/m128) [AVX]
VPHSUBW(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPINSRB
(*args, **kwargs)¶Insert Byte
Supported forms:
VPINSRB(xmm, xmm, r32, imm8) [AVX]
VPINSRB(xmm, xmm, m8, imm8) [AVX]
VPINSRB(xmm, xmm, r32, imm8) [AVX512BW]
VPINSRB(xmm, xmm, m8, imm8) [AVX512BW]
peachpy.x86_64.avx.
VPINSRD
(*args, **kwargs)¶Insert Doubleword
Supported forms:
VPINSRD(xmm, xmm, r32/m32, imm8) [AVX]
VPINSRD(xmm, xmm, r32/m32, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VPINSRQ
(*args, **kwargs)¶Insert Quadword
Supported forms:
VPINSRQ(xmm, xmm, r64/m64, imm8) [AVX]
VPINSRQ(xmm, xmm, r64/m64, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VPINSRW
(*args, **kwargs)¶Insert Word
Supported forms:
VPINSRW(xmm, xmm, r32, imm8) [AVX]
VPINSRW(xmm, xmm, m16, imm8) [AVX]
VPINSRW(xmm, xmm, r32, imm8) [AVX512BW]
VPINSRW(xmm, xmm, m16, imm8) [AVX512BW]
peachpy.x86_64.avx.
VPLZCNTD
(*args, **kwargs)¶Count the Number of Leading Zero Bits for Packed Doubleword Values
Supported forms:
VPLZCNTD(zmm{k}{z}, m512/m32bcst) [AVX512CD]
VPLZCNTD(zmm{k}{z}, zmm) [AVX512CD]
VPLZCNTD(xmm{k}{z}, m128/m32bcst) [AVX512VL and AVX512CD]
VPLZCNTD(ymm{k}{z}, m256/m32bcst) [AVX512VL and AVX512CD]
VPLZCNTD(xmm{k}{z}, xmm) [AVX512VL and AVX512CD]
VPLZCNTD(ymm{k}{z}, ymm) [AVX512VL and AVX512CD]
peachpy.x86_64.avx.
VPLZCNTQ
(*args, **kwargs)¶Count the Number of Leading Zero Bits for Packed Quadword Values
Supported forms:
VPLZCNTQ(zmm{k}{z}, m512/m64bcst) [AVX512CD]
VPLZCNTQ(zmm{k}{z}, zmm) [AVX512CD]
VPLZCNTQ(xmm{k}{z}, m128/m64bcst) [AVX512VL and AVX512CD]
VPLZCNTQ(ymm{k}{z}, m256/m64bcst) [AVX512VL and AVX512CD]
VPLZCNTQ(xmm{k}{z}, xmm) [AVX512VL and AVX512CD]
VPLZCNTQ(ymm{k}{z}, ymm) [AVX512VL and AVX512CD]
peachpy.x86_64.avx.
VPMADD52HUQ
(*args, **kwargs)¶Packed Multiply of Unsigned 52-bit Integers and Add the High 52-bit Products to Quadword Accumulators
Supported forms:
VPMADD52HUQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512IFMA]
VPMADD52HUQ(zmm{k}{z}, zmm, zmm) [AVX512IFMA]
VPMADD52HUQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512VL and AVX512IFMA]
VPMADD52HUQ(xmm{k}{z}, xmm, xmm) [AVX512VL and AVX512IFMA]
VPMADD52HUQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512VL and AVX512IFMA]
VPMADD52HUQ(ymm{k}{z}, ymm, ymm) [AVX512VL and AVX512IFMA]
peachpy.x86_64.avx.
VPMADD52LUQ
(*args, **kwargs)¶Packed Multiply of Unsigned 52-bit Integers and Add the Low 52-bit Products to Quadword Accumulators
Supported forms:
VPMADD52LUQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512IFMA]
VPMADD52LUQ(zmm{k}{z}, zmm, zmm) [AVX512IFMA]
VPMADD52LUQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512VL and AVX512IFMA]
VPMADD52LUQ(xmm{k}{z}, xmm, xmm) [AVX512VL and AVX512IFMA]
VPMADD52LUQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512VL and AVX512IFMA]
VPMADD52LUQ(ymm{k}{z}, ymm, ymm) [AVX512VL and AVX512IFMA]
peachpy.x86_64.avx.
VPMADDUBSW
(*args, **kwargs)¶Multiply and Add Packed Signed and Unsigned Byte Integers
Supported forms:
VPMADDUBSW(xmm, xmm, xmm/m128) [AVX]
VPMADDUBSW(ymm, ymm, ymm/m256) [AVX2]
VPMADDUBSW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMADDUBSW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMADDUBSW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMADDWD
(*args, **kwargs)¶Multiply and Add Packed Signed Word Integers
Supported forms:
VPMADDWD(xmm, xmm, xmm/m128) [AVX]
VPMADDWD(ymm, ymm, ymm/m256) [AVX2]
VPMADDWD(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMADDWD(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMADDWD(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
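VPMADDWD multiplies corresponding signed 16-bit elements and adds each adjacent pair of 32-bit products, so every result lane holds a[2i]*b[2i] + a[2i+1]*b[2i+1]; it is a common dot-product building block. A minimal PeachPy sketch of the AVX form with a memory source (function and argument names and types are illustrative):

from peachpy import *
from peachpy.x86_64 import *

a = Argument(ptr(const_int16_t), name="a")
b = Argument(ptr(const_int16_t), name="b")
out = Argument(ptr(int32_t), name="out")

with Function("madd_step", (a, b, out)) as asm_fn:
    reg_a = GeneralPurposeRegister64()
    reg_b = GeneralPurposeRegister64()
    reg_out = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_a, a)
    LOAD.ARGUMENT(reg_b, b)
    LOAD.ARGUMENT(reg_out, out)

    x_a = XMMRegister()
    VMOVDQU(x_a, [reg_a])
    # Each 32-bit result lane = a[2i]*b[2i] + a[2i+1]*b[2i+1] (signed 16-bit products)
    VPMADDWD(x_a, x_a, [reg_b])
    VMOVDQU([reg_out], x_a)
    RETURN()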
peachpy.x86_64.avx.
VPMASKMOVD
(*args, **kwargs)¶Conditional Move Packed Doubleword Integers
Supported forms:
VPMASKMOVD(xmm, xmm, m128) [AVX2]
VPMASKMOVD(ymm, ymm, m256) [AVX2]
VPMASKMOVD(m128, xmm, xmm) [AVX2]
VPMASKMOVD(m256, ymm, ymm) [AVX2]
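The first two VPMASKMOVD forms are masked loads and the last two are masked stores; in either direction only the dword lanes whose mask element has its sign bit set touch memory. A minimal PeachPy sketch of the 128-bit masked store (illustrative names and types):

from peachpy import *
from peachpy.x86_64 import *

dst = Argument(ptr(uint32_t), name="dst")
src = Argument(ptr(const_uint32_t), name="src")
mask = Argument(ptr(const_uint32_t), name="mask")

with Function("masked_store4", (dst, src, mask)) as asm_fn:
    reg_dst = GeneralPurposeRegister64()
    reg_src = GeneralPurposeRegister64()
    reg_mask = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_dst, dst)
    LOAD.ARGUMENT(reg_src, src)
    LOAD.ARGUMENT(reg_mask, mask)

    x_data = XMMRegister()
    x_mask = XMMRegister()
    VMOVDQU(x_data, [reg_src])
    VMOVDQU(x_mask, [reg_mask])
    # Store form: only dword lanes whose mask sign bit is set are written to memory
    VPMASKMOVD([reg_dst], x_mask, x_data)
    RETURN()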
peachpy.x86_64.avx.
VPMASKMOVQ
(*args, **kwargs)¶Conditional Move Packed Quadword Integers
Supported forms:
VPMASKMOVQ(xmm, xmm, m128) [AVX2]
VPMASKMOVQ(ymm, ymm, m256) [AVX2]
VPMASKMOVQ(m128, xmm, xmm) [AVX2]
VPMASKMOVQ(m256, ymm, ymm) [AVX2]
peachpy.x86_64.avx.
VPMAXSB
(*args, **kwargs)¶Maximum of Packed Signed Byte Integers
Supported forms:
VPMAXSB(xmm, xmm, xmm/m128) [AVX]
VPMAXSB(ymm, ymm, ymm/m256) [AVX2]
VPMAXSB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMAXSB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMAXSB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMAXSD
(*args, **kwargs)¶Maximum of Packed Signed Doubleword Integers
Supported forms:
VPMAXSD(xmm, xmm, xmm/m128) [AVX]
VPMAXSD(ymm, ymm, ymm/m256) [AVX2]
VPMAXSD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPMAXSD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPMAXSD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPMAXSD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPMAXSD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPMAXSD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMAXSQ
(*args, **kwargs)¶Maximum of Packed Signed Quadword Integers
Supported forms:
VPMAXSQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPMAXSQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPMAXSQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPMAXSQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPMAXSQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPMAXSQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMAXSW
(*args, **kwargs)¶Maximum of Packed Signed Word Integers
Supported forms:
VPMAXSW(xmm, xmm, xmm/m128) [AVX]
VPMAXSW(ymm, ymm, ymm/m256) [AVX2]
VPMAXSW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMAXSW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMAXSW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMAXUB
(*args, **kwargs)¶Maximum of Packed Unsigned Byte Integers
Supported forms:
VPMAXUB(xmm, xmm, xmm/m128) [AVX]
VPMAXUB(ymm, ymm, ymm/m256) [AVX2]
VPMAXUB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMAXUB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMAXUB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMAXUD
(*args, **kwargs)¶Maximum of Packed Unsigned Doubleword Integers
Supported forms:
VPMAXUD(xmm, xmm, xmm/m128) [AVX]
VPMAXUD(ymm, ymm, ymm/m256) [AVX2]
VPMAXUD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPMAXUD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPMAXUD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPMAXUD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPMAXUD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPMAXUD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMAXUQ
(*args, **kwargs)¶Maximum of Packed Unsigned Quadword Integers
Supported forms:
VPMAXUQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPMAXUQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPMAXUQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPMAXUQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPMAXUQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPMAXUQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMAXUW
(*args, **kwargs)¶Maximum of Packed Unsigned Word Integers
Supported forms:
VPMAXUW(xmm, xmm, xmm/m128) [AVX]
VPMAXUW(ymm, ymm, ymm/m256) [AVX2]
VPMAXUW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMAXUW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMAXUW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMINSB
(*args, **kwargs)¶Minimum of Packed Signed Byte Integers
Supported forms:
VPMINSB(xmm, xmm, xmm/m128) [AVX]
VPMINSB(ymm, ymm, ymm/m256) [AVX2]
VPMINSB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMINSB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMINSB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMINSD
(*args, **kwargs)¶Minimum of Packed Signed Doubleword Integers
Supported forms:
VPMINSD(xmm, xmm, xmm/m128) [AVX]
VPMINSD(ymm, ymm, ymm/m256) [AVX2]
VPMINSD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPMINSD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPMINSD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPMINSD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPMINSD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPMINSD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMINSQ
(*args, **kwargs)¶Minimum of Packed Signed Quadword Integers
Supported forms:
VPMINSQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPMINSQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPMINSQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPMINSQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPMINSQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPMINSQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMINSW
(*args, **kwargs)¶Minimum of Packed Signed Word Integers
Supported forms:
VPMINSW(xmm, xmm, xmm/m128) [AVX]
VPMINSW(ymm, ymm, ymm/m256) [AVX2]
VPMINSW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMINSW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMINSW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMINUB
(*args, **kwargs)¶Minimum of Packed Unsigned Byte Integers
Supported forms:
VPMINUB(xmm, xmm, xmm/m128) [AVX]
VPMINUB(ymm, ymm, ymm/m256) [AVX2]
VPMINUB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMINUB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMINUB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
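VPMINUB takes the unsigned byte-wise minimum of its two sources, which makes it a one-instruction upper clamp for byte data. A minimal PeachPy sketch of the AVX form with a memory source (illustrative names and types):

from peachpy import *
from peachpy.x86_64 import *

data = Argument(ptr(uint8_t), name="data")
limit = Argument(ptr(const_uint8_t), name="limit")

with Function("clamp_bytes", (data, limit)) as asm_fn:
    reg_data = GeneralPurposeRegister64()
    reg_limit = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_data, data)
    LOAD.ARGUMENT(reg_limit, limit)

    x = XMMRegister()
    VMOVDQU(x, [reg_data])
    # Unsigned byte-wise minimum: caps each byte at the corresponding limit byte
    VPMINUB(x, x, [reg_limit])
    VMOVDQU([reg_data], x)
    RETURN()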
peachpy.x86_64.avx.
VPMINUD
(*args, **kwargs)¶Minimum of Packed Unsigned Doubleword Integers
Supported forms:
VPMINUD(xmm, xmm, xmm/m128) [AVX]
VPMINUD(ymm, ymm, ymm/m256) [AVX2]
VPMINUD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPMINUD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPMINUD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPMINUD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPMINUD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPMINUD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMINUQ
(*args, **kwargs)¶Minimum of Packed Unsigned Quadword Integers
Supported forms:
VPMINUQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPMINUQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPMINUQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPMINUQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPMINUQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPMINUQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMINUW
(*args, **kwargs)¶Minimum of Packed Unsigned Word Integers
Supported forms:
VPMINUW(xmm, xmm, xmm/m128) [AVX]
VPMINUW(ymm, ymm, ymm/m256) [AVX2]
VPMINUW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMINUW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMINUW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMOVB2M
(*args, **kwargs)¶Move Signs of Packed Byte Integers to Mask Register
Supported forms:
VPMOVB2M(k, zmm) [AVX512BW]
VPMOVB2M(k, xmm) [AVX512BW and AVX512VL]
VPMOVB2M(k, ymm) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMOVD2M
(*args, **kwargs)¶Move Signs of Packed Doubleword Integers to Mask Register
Supported forms:
VPMOVD2M(k, zmm) [AVX512DQ]
VPMOVD2M(k, xmm) [AVX512DQ and AVX512VL]
VPMOVD2M(k, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VPMOVDB
(*args, **kwargs)¶Down Convert Packed Doubleword Values to Byte Values with Truncation
Supported forms:
VPMOVDB(xmm{k}{z}, zmm) [AVX512F]
VPMOVDB(m128{k}{z}, zmm) [AVX512F]
VPMOVDB(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVDB(m32{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVDB(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVDB(m64{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVDW
(*args, **kwargs)¶Down Convert Packed Doubleword Values to Word Values with Truncation
Supported forms:
VPMOVDW(ymm{k}{z}, zmm) [AVX512F]
VPMOVDW(m256{k}{z}, zmm) [AVX512F]
VPMOVDW(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVDW(m64{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVDW(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVDW(m128{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVM2B
(*args, **kwargs)¶Expand Bits of Mask Register to Packed Byte Integers
Supported forms:
VPMOVM2B(zmm, k) [AVX512BW]
VPMOVM2B(xmm, k) [AVX512BW and AVX512VL]
VPMOVM2B(ymm, k) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMOVM2D
(*args, **kwargs)¶Expand Bits of Mask Register to Packed Doubleword Integers
Supported forms:
VPMOVM2D(zmm, k) [AVX512DQ]
VPMOVM2D(xmm, k) [AVX512DQ and AVX512VL]
VPMOVM2D(ymm, k) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VPMOVM2Q
(*args, **kwargs)¶Expand Bits of Mask Register to Packed Quadword Integers
Supported forms:
VPMOVM2Q(zmm, k) [AVX512DQ]
VPMOVM2Q(xmm, k) [AVX512DQ and AVX512VL]
VPMOVM2Q(ymm, k) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VPMOVM2W
(*args, **kwargs)¶Expand Bits of Mask Register to Packed Word Integers
Supported forms:
VPMOVM2W(zmm, k) [AVX512BW]
VPMOVM2W(xmm, k) [AVX512BW and AVX512VL]
VPMOVM2W(ymm, k) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMOVMSKB
(*args, **kwargs)¶Move Byte Mask
Supported forms:
VPMOVMSKB(r32, xmm) [AVX]
VPMOVMSKB(r32, ymm) [AVX2]
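VPMOVMSKB collects the sign bit of every byte into a general-purpose register, one bit per byte, which is the usual way to turn a byte-wise comparison into a scalar bitmask. A minimal PeachPy sketch (illustrative names and types) that returns a bitmask of the zero bytes in a 16-byte block:

from peachpy import *
from peachpy.x86_64 import *

src = Argument(ptr(const_uint8_t), name="src")

with Function("zero_byte_mask", (src,), uint32_t) as asm_fn:
    reg_src = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_src, src)

    x_data = XMMRegister()
    x_zero = XMMRegister()
    VMOVDQU(x_data, [reg_src])
    VPXOR(x_zero, x_zero, x_zero)       # all-zero comparison vector
    VPCMPEQB(x_data, x_data, x_zero)    # 0xFF in every byte lane that was zero

    mask = GeneralPurposeRegister32()
    # Bit i of the result is the sign bit of byte i; a nonzero result means a zero byte exists
    VPMOVMSKB(mask, x_data)
    RETURN(mask)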
peachpy.x86_64.avx.
VPMOVQ2M
(*args, **kwargs)¶Move Signs of Packed Quadword Integers to Mask Register
Supported forms:
VPMOVQ2M(k, zmm) [AVX512DQ]
VPMOVQ2M(k, xmm) [AVX512DQ and AVX512VL]
VPMOVQ2M(k, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VPMOVQB
(*args, **kwargs)¶Down Convert Packed Quadword Values to Byte Values with Truncation
Supported forms:
VPMOVQB(xmm{k}{z}, zmm) [AVX512F]
VPMOVQB(m64{k}{z}, zmm) [AVX512F]
VPMOVQB(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVQB(m16{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVQB(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVQB(m32{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVQD
(*args, **kwargs)¶Down Convert Packed Quadword Values to Doubleword Values with Truncation
Supported forms:
VPMOVQD(ymm{k}{z}, zmm) [AVX512F]
VPMOVQD(m256{k}{z}, zmm) [AVX512F]
VPMOVQD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVQD(m64{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVQD(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVQD(m128{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVQW
(*args, **kwargs)¶Down Convert Packed Quadword Values to Word Values with Truncation
Supported forms:
VPMOVQW(xmm{k}{z}, zmm) [AVX512F]
VPMOVQW(m128{k}{z}, zmm) [AVX512F]
VPMOVQW(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVQW(m32{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVQW(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVQW(m64{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVSDB
(*args, **kwargs)¶Down Convert Packed Doubleword Values to Byte Values with Signed Saturation
Supported forms:
VPMOVSDB(xmm{k}{z}, zmm) [AVX512F]
VPMOVSDB(m128{k}{z}, zmm) [AVX512F]
VPMOVSDB(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVSDB(m32{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVSDB(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVSDB(m64{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVSDW
(*args, **kwargs)¶Down Convert Packed Doubleword Values to Word Values with Signed Saturation
Supported forms:
VPMOVSDW(ymm{k}{z}, zmm) [AVX512F]
VPMOVSDW(m256{k}{z}, zmm) [AVX512F]
VPMOVSDW(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVSDW(m64{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVSDW(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVSDW(m128{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVSQB
(*args, **kwargs)¶Down Convert Packed Quadword Values to Byte Values with Signed Saturation
Supported forms:
VPMOVSQB(xmm{k}{z}, zmm) [AVX512F]
VPMOVSQB(m64{k}{z}, zmm) [AVX512F]
VPMOVSQB(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVSQB(m16{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVSQB(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVSQB(m32{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVSQD
(*args, **kwargs)¶Down Convert Packed Quadword Values to Doubleword Values with Signed Saturation
Supported forms:
VPMOVSQD(ymm{k}{z}, zmm) [AVX512F]
VPMOVSQD(m256{k}{z}, zmm) [AVX512F]
VPMOVSQD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVSQD(m64{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVSQD(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVSQD(m128{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVSQW
(*args, **kwargs)¶Down Convert Packed Quadword Values to Word Values with Signed Saturation
Supported forms:
VPMOVSQW(xmm{k}{z}, zmm) [AVX512F]
VPMOVSQW(m128{k}{z}, zmm) [AVX512F]
VPMOVSQW(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVSQW(m32{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVSQW(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVSQW(m64{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVSWB
(*args, **kwargs)¶Down Convert Packed Word Values to Byte Values with Signed Saturation
Supported forms:
VPMOVSWB(ymm{k}{z}, zmm) [AVX512BW]
VPMOVSWB(m256{k}{z}, zmm) [AVX512BW]
VPMOVSWB(xmm{k}{z}, xmm) [AVX512BW and AVX512VL]
VPMOVSWB(m64{k}{z}, xmm) [AVX512BW and AVX512VL]
VPMOVSWB(xmm{k}{z}, ymm) [AVX512BW and AVX512VL]
VPMOVSWB(m128{k}{z}, ymm) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMOVSXBD
(*args, **kwargs)¶Move Packed Byte Integers to Doubleword Integers with Sign Extension
Supported forms:
VPMOVSXBD(xmm, xmm/m32) [AVX]
VPMOVSXBD(ymm, xmm/m64) [AVX2]
VPMOVSXBD(zmm{k}{z}, xmm/m128) [AVX512F]
VPMOVSXBD(xmm{k}{z}, xmm/m32) [AVX512F and AVX512VL]
VPMOVSXBD(ymm{k}{z}, xmm/m64) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVSXBQ
(*args, **kwargs)¶Move Packed Byte Integers to Quadword Integers with Sign Extension
Supported forms:
VPMOVSXBQ(xmm, xmm/m16) [AVX]
VPMOVSXBQ(ymm, xmm/m32) [AVX2]
VPMOVSXBQ(zmm{k}{z}, xmm/m64) [AVX512F]
VPMOVSXBQ(xmm{k}{z}, xmm/m16) [AVX512F and AVX512VL]
VPMOVSXBQ(ymm{k}{z}, xmm/m32) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVSXBW
(*args, **kwargs)¶Move Packed Byte Integers to Word Integers with Sign Extension
Supported forms:
VPMOVSXBW(xmm, xmm/m64) [AVX]
VPMOVSXBW(ymm, xmm/m128) [AVX2]
VPMOVSXBW(zmm{k}{z}, ymm/m256) [AVX512BW]
VPMOVSXBW(xmm{k}{z}, xmm/m64) [AVX512BW and AVX512VL]
VPMOVSXBW(ymm{k}{z}, xmm/m128) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMOVSXDQ
(*args, **kwargs)¶Move Packed Doubleword Integers to Quadword Integers with Sign Extension
Supported forms:
VPMOVSXDQ(xmm, xmm/m64) [AVX]
VPMOVSXDQ(ymm, xmm/m128) [AVX2]
VPMOVSXDQ(zmm{k}{z}, ymm/m256) [AVX512F]
VPMOVSXDQ(xmm{k}{z}, xmm/m64) [AVX512F and AVX512VL]
VPMOVSXDQ(ymm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVSXWD
(*args, **kwargs)¶Move Packed Word Integers to Doubleword Integers with Sign Extension
Supported forms:
VPMOVSXWD(xmm, xmm/m64) [AVX]
VPMOVSXWD(ymm, xmm/m128) [AVX2]
VPMOVSXWD(zmm{k}{z}, ymm/m256) [AVX512F]
VPMOVSXWD(xmm{k}{z}, xmm/m64) [AVX512F and AVX512VL]
VPMOVSXWD(ymm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVSXWQ
(*args, **kwargs)¶Move Packed Word Integers to Quadword Integers with Sign Extension
Supported forms:
VPMOVSXWQ(xmm, xmm/m32) [AVX]
VPMOVSXWQ(ymm, xmm/m64) [AVX2]
VPMOVSXWQ(zmm{k}{z}, xmm/m128) [AVX512F]
VPMOVSXWQ(xmm{k}{z}, xmm/m32) [AVX512F and AVX512VL]
VPMOVSXWQ(ymm{k}{z}, xmm/m64) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVUSDB
(*args, **kwargs)¶Down Convert Packed Doubleword Values to Byte Values with Unsigned Saturation
Supported forms:
VPMOVUSDB(xmm{k}{z}, zmm) [AVX512F]
VPMOVUSDB(m128{k}{z}, zmm) [AVX512F]
VPMOVUSDB(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVUSDB(m32{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVUSDB(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVUSDB(m64{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVUSDW
(*args, **kwargs)¶Down Convert Packed Doubleword Values to Word Values with Unsigned Saturation
Supported forms:
VPMOVUSDW(ymm{k}{z}, zmm) [AVX512F]
VPMOVUSDW(m256{k}{z}, zmm) [AVX512F]
VPMOVUSDW(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVUSDW(m64{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVUSDW(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVUSDW(m128{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVUSQB
(*args, **kwargs)¶Down Convert Packed Quadword Values to Byte Values with Unsigned Saturation
Supported forms:
VPMOVUSQB(xmm{k}{z}, zmm) [AVX512F]
VPMOVUSQB(m64{k}{z}, zmm) [AVX512F]
VPMOVUSQB(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVUSQB(m16{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVUSQB(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVUSQB(m32{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVUSQD
(*args, **kwargs)¶Down Convert Packed Quadword Values to Doubleword Values with Unsigned Saturation
Supported forms:
VPMOVUSQD(ymm{k}{z}, zmm) [AVX512F]
VPMOVUSQD(m256{k}{z}, zmm) [AVX512F]
VPMOVUSQD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVUSQD(m64{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVUSQD(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVUSQD(m128{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVUSQW
(*args, **kwargs)¶Down Convert Packed Quadword Values to Word Values with Unsigned Saturation
Supported forms:
VPMOVUSQW(xmm{k}{z}, zmm) [AVX512F]
VPMOVUSQW(m128{k}{z}, zmm) [AVX512F]
VPMOVUSQW(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVUSQW(m32{k}{z}, xmm) [AVX512F and AVX512VL]
VPMOVUSQW(xmm{k}{z}, ymm) [AVX512F and AVX512VL]
VPMOVUSQW(m64{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVUSWB
(*args, **kwargs)¶Down Convert Packed Word Values to Byte Values with Unsigned Saturation
Supported forms:
VPMOVUSWB(ymm{k}{z}, zmm) [AVX512BW]
VPMOVUSWB(m256{k}{z}, zmm) [AVX512BW]
VPMOVUSWB(xmm{k}{z}, xmm) [AVX512BW and AVX512VL]
VPMOVUSWB(m64{k}{z}, xmm) [AVX512BW and AVX512VL]
VPMOVUSWB(xmm{k}{z}, ymm) [AVX512BW and AVX512VL]
VPMOVUSWB(m128{k}{z}, ymm) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMOVW2M
(*args, **kwargs)¶Move Signs of Packed Word Integers to Mask Register
Supported forms:
VPMOVW2M(k, zmm) [AVX512BW]
VPMOVW2M(k, xmm) [AVX512BW and AVX512VL]
VPMOVW2M(k, ymm) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMOVWB
(*args, **kwargs)¶Down Convert Packed Word Values to Byte Values with Truncation
Supported forms:
VPMOVWB(ymm{k}{z}, zmm) [AVX512BW]
VPMOVWB(m256{k}{z}, zmm) [AVX512BW]
VPMOVWB(xmm{k}{z}, xmm) [AVX512BW and AVX512VL]
VPMOVWB(m64{k}{z}, xmm) [AVX512BW and AVX512VL]
VPMOVWB(xmm{k}{z}, ymm) [AVX512BW and AVX512VL]
VPMOVWB(m128{k}{z}, ymm) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMOVZXBD
(*args, **kwargs)¶Move Packed Byte Integers to Doubleword Integers with Zero Extension
Supported forms:
VPMOVZXBD(xmm, xmm/m32) [AVX]
VPMOVZXBD(ymm, xmm/m64) [AVX2]
VPMOVZXBD(zmm{k}{z}, xmm/m128) [AVX512F]
VPMOVZXBD(xmm{k}{z}, xmm/m32) [AVX512F and AVX512VL]
VPMOVZXBD(ymm{k}{z}, xmm/m64) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVZXBQ
(*args, **kwargs)¶Move Packed Byte Integers to Quadword Integers with Zero Extension
Supported forms:
VPMOVZXBQ(xmm, xmm/m16) [AVX]
VPMOVZXBQ(ymm, xmm/m32) [AVX2]
VPMOVZXBQ(zmm{k}{z}, xmm/m64) [AVX512F]
VPMOVZXBQ(xmm{k}{z}, xmm/m16) [AVX512F and AVX512VL]
VPMOVZXBQ(ymm{k}{z}, xmm/m32) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVZXBW
(*args, **kwargs)¶Move Packed Byte Integers to Word Integers with Zero Extension
Supported forms:
VPMOVZXBW(xmm, xmm/m64) [AVX]
VPMOVZXBW(ymm, xmm/m128) [AVX2]
VPMOVZXBW(zmm{k}{z}, ymm/m256) [AVX512BW]
VPMOVZXBW(xmm{k}{z}, xmm/m64) [AVX512BW and AVX512VL]
VPMOVZXBW(ymm{k}{z}, xmm/m128) [AVX512BW and AVX512VL]
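VPMOVZXBW widens packed unsigned bytes to words; the 128-bit AVX form reads only 64 bits of source. A minimal PeachPy sketch (illustrative names and types) that widens eight bytes to eight words:

from peachpy import *
from peachpy.x86_64 import *

src = Argument(ptr(const_uint8_t), name="src")
dst = Argument(ptr(uint16_t), name="dst")

with Function("widen_u8_to_u16", (src, dst)) as asm_fn:
    reg_src = GeneralPurposeRegister64()
    reg_dst = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_src, src)
    LOAD.ARGUMENT(reg_dst, dst)

    x = XMMRegister()
    # AVX form: reads 8 bytes from memory and zero-extends them to 8 words
    VPMOVZXBW(x, [reg_src])
    VMOVDQU([reg_dst], x)
    RETURN()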
peachpy.x86_64.avx.
VPMOVZXDQ
(*args, **kwargs)¶Move Packed Doubleword Integers to Quadword Integers with Zero Extension
Supported forms:
VPMOVZXDQ(xmm, xmm/m64) [AVX]
VPMOVZXDQ(ymm, xmm/m128) [AVX2]
VPMOVZXDQ(zmm{k}{z}, ymm/m256) [AVX512F]
VPMOVZXDQ(xmm{k}{z}, xmm/m64) [AVX512F and AVX512VL]
VPMOVZXDQ(ymm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVZXWD
(*args, **kwargs)¶Move Packed Word Integers to Doubleword Integers with Zero Extension
Supported forms:
VPMOVZXWD(xmm, xmm/m64) [AVX]
VPMOVZXWD(ymm, xmm/m128) [AVX2]
VPMOVZXWD(zmm{k}{z}, ymm/m256) [AVX512F]
VPMOVZXWD(xmm{k}{z}, xmm/m64) [AVX512F and AVX512VL]
VPMOVZXWD(ymm{k}{z}, xmm/m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMOVZXWQ
(*args, **kwargs)¶Move Packed Word Integers to Quadword Integers with Zero Extension
Supported forms:
VPMOVZXWQ(xmm, xmm/m32) [AVX]
VPMOVZXWQ(ymm, xmm/m64) [AVX2]
VPMOVZXWQ(zmm{k}{z}, xmm/m128) [AVX512F]
VPMOVZXWQ(xmm{k}{z}, xmm/m32) [AVX512F and AVX512VL]
VPMOVZXWQ(ymm{k}{z}, xmm/m64) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMULDQ
(*args, **kwargs)¶Multiply Packed Signed Doubleword Integers and Store Quadword Result
Supported forms:
VPMULDQ(xmm, xmm, xmm/m128) [AVX]
VPMULDQ(ymm, ymm, ymm/m256) [AVX2]
VPMULDQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPMULDQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPMULDQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPMULDQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPMULDQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPMULDQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
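VPMULDQ multiplies only the even-numbered signed doublewords (elements 0 and 2 of each 128-bit lane) of its sources and produces full 64-bit products, so each instruction consumes half of the input lanes. A minimal PeachPy sketch of the AVX form with a memory source (illustrative names and types):

from peachpy import *
from peachpy.x86_64 import *

a = Argument(ptr(const_int32_t), name="a")
b = Argument(ptr(const_int32_t), name="b")
out = Argument(ptr(int64_t), name="out")

with Function("widening_mul", (a, b, out)) as asm_fn:
    reg_a = GeneralPurposeRegister64()
    reg_b = GeneralPurposeRegister64()
    reg_out = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_a, a)
    LOAD.ARGUMENT(reg_b, b)
    LOAD.ARGUMENT(reg_out, out)

    x_a = XMMRegister()
    VMOVDQU(x_a, [reg_a])
    # Multiplies the signed dwords in lanes 0 and 2 of each source,
    # producing two signed 64-bit products
    VPMULDQ(x_a, x_a, [reg_b])
    VMOVDQU([reg_out], x_a)
    RETURN()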
peachpy.x86_64.avx.
VPMULHRSW
(*args, **kwargs)¶Packed Multiply Signed Word Integers and Store High Result with Round and Scale
Supported forms:
VPMULHRSW(xmm, xmm, xmm/m128) [AVX]
VPMULHRSW(ymm, ymm, ymm/m256) [AVX2]
VPMULHRSW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMULHRSW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMULHRSW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMULHUW
(*args, **kwargs)¶Multiply Packed Unsigned Word Integers and Store High Result
Supported forms:
VPMULHUW(xmm, xmm, xmm/m128) [AVX]
VPMULHUW(ymm, ymm, ymm/m256) [AVX2]
VPMULHUW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMULHUW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMULHUW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMULHW
(*args, **kwargs)¶Multiply Packed Signed Word Integers and Store High Result
Supported forms:
VPMULHW(xmm, xmm, xmm/m128) [AVX]
VPMULHW(ymm, ymm, ymm/m256) [AVX2]
VPMULHW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMULHW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMULHW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMULLD
(*args, **kwargs)¶Multiply Packed Signed Doubleword Integers and Store Low Result
Supported forms:
VPMULLD(xmm, xmm, xmm/m128) [AVX]
VPMULLD(ymm, ymm, ymm/m256) [AVX2]
VPMULLD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPMULLD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPMULLD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPMULLD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPMULLD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPMULLD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPMULLQ
(*args, **kwargs)¶Multiply Packed Signed Quadword Integers and Store Low Result
Supported forms:
VPMULLQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512DQ]
VPMULLQ(zmm{k}{z}, zmm, zmm) [AVX512DQ]
VPMULLQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512DQ and AVX512VL]
VPMULLQ(xmm{k}{z}, xmm, xmm) [AVX512DQ and AVX512VL]
VPMULLQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512DQ and AVX512VL]
VPMULLQ(ymm{k}{z}, ymm, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VPMULLW
(*args, **kwargs)¶Multiply Packed Signed Word Integers and Store Low Result
Supported forms:
VPMULLW(xmm, xmm, xmm/m128) [AVX]
VPMULLW(ymm, ymm, ymm/m256) [AVX2]
VPMULLW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPMULLW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPMULLW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPMULTISHIFTQB
(*args, **kwargs)¶Select Packed Unaligned Bytes from Quadword Sources
Supported forms:
VPMULTISHIFTQB(zmm{k}{z}, zmm, m512/m64bcst) [AVX512VBMI]
VPMULTISHIFTQB(zmm{k}{z}, zmm, zmm) [AVX512VBMI]
VPMULTISHIFTQB(xmm{k}{z}, xmm, m128/m64bcst) [AVX512VL and AVX512VBMI]
VPMULTISHIFTQB(xmm{k}{z}, xmm, xmm) [AVX512VL and AVX512VBMI]
VPMULTISHIFTQB(ymm{k}{z}, ymm, m256/m64bcst) [AVX512VL and AVX512VBMI]
VPMULTISHIFTQB(ymm{k}{z}, ymm, ymm) [AVX512VL and AVX512VBMI]
peachpy.x86_64.avx.
VPMULUDQ
(*args, **kwargs)¶Multiply Packed Unsigned Doubleword Integers
Supported forms:
VPMULUDQ(xmm, xmm, xmm/m128) [AVX]
VPMULUDQ(ymm, ymm, ymm/m256) [AVX2]
VPMULUDQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPMULUDQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPMULUDQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPMULUDQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPMULUDQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPMULUDQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPOPCNTD
(*args, **kwargs)¶Packed Population Count for Doubleword Integers
Supported forms:
VPOPCNTD(zmm{k}{z}, m512/m32bcst) [AVX512VPOPCNTDQ]
VPOPCNTD(zmm{k}{z}, zmm) [AVX512VPOPCNTDQ]
peachpy.x86_64.avx.
VPOPCNTQ
(*args, **kwargs)¶Packed Population Count for Quadword Integers
Supported forms:
VPOPCNTQ(zmm{k}{z}, m512/m64bcst) [AVX512VPOPCNTDQ]
VPOPCNTQ(zmm{k}{z}, zmm) [AVX512VPOPCNTDQ]
peachpy.x86_64.avx.
VPOR
(*args, **kwargs)¶Packed Bitwise Logical OR
Supported forms:
VPOR(xmm, xmm, xmm/m128) [AVX]
VPOR(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPORD
(*args, **kwargs)¶Bitwise Logical OR of Packed Doubleword Integers
Supported forms:
VPORD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPORD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPORD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPORD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPORD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPORD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPORQ
(*args, **kwargs)¶Bitwise Logical OR of Packed Quadword Integers
Supported forms:
VPORQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPORQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPORQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPORQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPORQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPORQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPROLD
(*args, **kwargs)¶Rotate Packed Doubleword Left
Supported forms:
VPROLD(zmm{k}{z}, m512/m32bcst, imm8) [AVX512F]
VPROLD(zmm{k}{z}, zmm, imm8) [AVX512F]
VPROLD(xmm{k}{z}, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VPROLD(ymm{k}{z}, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VPROLD(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPROLD(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPROLQ
(*args, **kwargs)¶Rotate Packed Quadword Left
Supported forms:
VPROLQ(zmm{k}{z}, m512/m64bcst, imm8) [AVX512F]
VPROLQ(zmm{k}{z}, zmm, imm8) [AVX512F]
VPROLQ(xmm{k}{z}, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VPROLQ(ymm{k}{z}, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VPROLQ(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPROLQ(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPROLVD
(*args, **kwargs)¶Variable Rotate Packed Doubleword Left
Supported forms:
VPROLVD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPROLVD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPROLVD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPROLVD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPROLVD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPROLVD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPROLVQ
(*args, **kwargs)¶Variable Rotate Packed Quadword Left
Supported forms:
VPROLVQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPROLVQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPROLVQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPROLVQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPROLVQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPROLVQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPRORD
(*args, **kwargs)¶Rotate Packed Doubleword Right
Supported forms:
VPRORD(zmm{k}{z}, m512/m32bcst, imm8) [AVX512F]
VPRORD(zmm{k}{z}, zmm, imm8) [AVX512F]
VPRORD(xmm{k}{z}, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VPRORD(ymm{k}{z}, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VPRORD(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPRORD(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPRORQ
(*args, **kwargs)¶Rotate Packed Quadword Right
Supported forms:
VPRORQ(zmm{k}{z}, m512/m64bcst, imm8) [AVX512F]
VPRORQ(zmm{k}{z}, zmm, imm8) [AVX512F]
VPRORQ(xmm{k}{z}, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VPRORQ(ymm{k}{z}, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VPRORQ(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPRORQ(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPRORVD
(*args, **kwargs)¶Variable Rotate Packed Doubleword Right
Supported forms:
VPRORVD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPRORVD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPRORVD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPRORVD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPRORVD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPRORVD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPRORVQ
(*args, **kwargs)¶Variable Rotate Packed Quadword Right
Supported forms:
VPRORVQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPRORVQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPRORVQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPRORVQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPRORVQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPRORVQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSADBW
(*args, **kwargs)¶Compute Sum of Absolute Differences
Supported forms:
VPSADBW(xmm, xmm, xmm/m128) [AVX]
VPSADBW(ymm, ymm, ymm/m256) [AVX2]
VPSADBW(zmm, zmm, zmm/m512) [AVX512BW]
VPSADBW(xmm, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSADBW(ymm, ymm, ymm/m256) [AVX512BW and AVX512VL]
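VPSADBW computes the sum of absolute byte differences within each group of eight bytes and writes the 16-bit sums into the low words of the 64-bit lanes; a SAD against zero therefore sums bytes. The sketch below (illustrative names and types) combines it with VPSHUFD, VPADDD and VMOVD, documented elsewhere in this reference, to return the sum of 16 bytes:

from peachpy import *
from peachpy.x86_64 import *

src = Argument(ptr(const_uint8_t), name="src")

with Function("sum16_bytes", (src,), uint32_t) as asm_fn:
    reg_src = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_src, src)

    x_data = XMMRegister()
    x_zero = XMMRegister()
    VMOVDQU(x_data, [reg_src])
    VPXOR(x_zero, x_zero, x_zero)
    # SAD against zero sums each group of 8 bytes into the low word of a 64-bit lane
    VPSADBW(x_data, x_data, x_zero)

    x_high = XMMRegister()
    VPSHUFD(x_high, x_data, 0x4E)    # swap the 64-bit halves
    VPADDD(x_data, x_data, x_high)   # combine the two partial sums

    total = GeneralPurposeRegister32()
    VMOVD(total, x_data)
    RETURN(total)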
peachpy.x86_64.avx.
VPSCATTERDD
(*args, **kwargs)¶Scatter Packed Doubleword Values with Signed Doubleword Indices
Supported forms:
VPSCATTERDD(vm32z{k}, zmm) [AVX512F]
VPSCATTERDD(vm32x{k}, xmm) [AVX512F and AVX512VL]
VPSCATTERDD(vm32y{k}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSCATTERDQ
(*args, **kwargs)¶Scatter Packed Quadword Values with Signed Doubleword Indices
Supported forms:
VPSCATTERDQ(vm32y{k}, zmm) [AVX512F]
VPSCATTERDQ(vm32x{k}, xmm) [AVX512F and AVX512VL]
VPSCATTERDQ(vm32x{k}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSCATTERQD
(*args, **kwargs)¶Scatter Packed Doubleword Values with Signed Quadword Indices
Supported forms:
VPSCATTERQD(vm64z{k}, ymm) [AVX512F]
VPSCATTERQD(vm64x{k}, xmm) [AVX512F and AVX512VL]
VPSCATTERQD(vm64y{k}, xmm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSCATTERQQ
(*args, **kwargs)¶Scatter Packed Quadword Values with Signed Quadword Indices
Supported forms:
VPSCATTERQQ(vm64z{k}, zmm) [AVX512F]
VPSCATTERQQ(vm64x{k}, xmm) [AVX512F and AVX512VL]
VPSCATTERQQ(vm64y{k}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSHUFB
(*args, **kwargs)¶Packed Shuffle Bytes
Supported forms:
VPSHUFB(xmm, xmm, xmm/m128) [AVX]
VPSHUFB(ymm, ymm, ymm/m256) [AVX2]
VPSHUFB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPSHUFB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSHUFB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
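VPSHUFB uses each control byte to select a source byte within the same 128-bit lane (a control byte with its high bit set zeroes the destination byte instead). The sketch below loads the control vector through PeachPy's Constant.uint8x16 helper, which is assumed here to be usable directly as the memory operand; names and types are illustrative:

from peachpy import *
from peachpy.x86_64 import *

data = Argument(ptr(uint8_t), name="data")

with Function("reverse_bytes16", (data,)) as asm_fn:
    reg_data = GeneralPurposeRegister64()
    LOAD.ARGUMENT(reg_data, data)

    x = XMMRegister()
    VMOVDQU(x, [reg_data])

    # Control bytes 15..0 pick source bytes in reverse order
    ctrl = Constant.uint8x16(15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
    VPSHUFB(x, x, ctrl)

    VMOVDQU([reg_data], x)
    RETURN()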
peachpy.x86_64.avx.
VPSHUFD
(*args, **kwargs)¶Shuffle Packed Doublewords
Supported forms:
VPSHUFD(xmm, xmm/m128, imm8) [AVX]
VPSHUFD(ymm, ymm/m256, imm8) [AVX2]
VPSHUFD(zmm{k}{z}, m512/m32bcst, imm8) [AVX512F]
VPSHUFD(zmm{k}{z}, zmm, imm8) [AVX512F]
VPSHUFD(xmm{k}{z}, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VPSHUFD(ymm{k}{z}, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VPSHUFD(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPSHUFD(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSHUFHW
(*args, **kwargs)¶Shuffle Packed High Words
Supported forms:
VPSHUFHW(xmm, xmm/m128, imm8) [AVX]
VPSHUFHW(ymm, ymm/m256, imm8) [AVX2]
VPSHUFHW(zmm{k}{z}, zmm/m512, imm8) [AVX512BW]
VPSHUFHW(xmm{k}{z}, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPSHUFHW(ymm{k}{z}, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSHUFLW
(*args, **kwargs)¶Shuffle Packed Low Words
Supported forms:
VPSHUFLW(xmm, xmm/m128, imm8) [AVX]
VPSHUFLW(ymm, ymm/m256, imm8) [AVX2]
VPSHUFLW(zmm{k}{z}, zmm/m512, imm8) [AVX512BW]
VPSHUFLW(xmm{k}{z}, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPSHUFLW(ymm{k}{z}, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSIGNB
(*args, **kwargs)¶Packed Sign of Byte Integers
Supported forms:
VPSIGNB(xmm, xmm, xmm/m128) [AVX]
VPSIGNB(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPSIGND
(*args, **kwargs)¶Packed Sign of Doubleword Integers
Supported forms:
VPSIGND(xmm, xmm, xmm/m128) [AVX]
VPSIGND(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPSIGNW
(*args, **kwargs)¶Packed Sign of Word Integers
Supported forms:
VPSIGNW(xmm, xmm, xmm/m128) [AVX]
VPSIGNW(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPSLLD
(*args, **kwargs)¶Shift Packed Doubleword Data Left Logical
Supported forms:
VPSLLD(xmm, xmm, imm8) [AVX]
VPSLLD(xmm, xmm, xmm/m128) [AVX]
VPSLLD(ymm, ymm, imm8) [AVX2]
VPSLLD(ymm, ymm, xmm/m128) [AVX2]
VPSLLD(zmm{k}{z}, m512/m32bcst, imm8) [AVX512F]
VPSLLD(zmm{k}{z}, zmm, imm8) [AVX512F]
VPSLLD(zmm{k}{z}, zmm, xmm/m128) [AVX512F]
VPSLLD(xmm{k}{z}, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VPSLLD(ymm{k}{z}, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VPSLLD(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPSLLD(xmm{k}{z}, xmm, xmm/m128) [AVX512F and AVX512VL]
VPSLLD(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VPSLLD(ymm{k}{z}, ymm, xmm/m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSLLDQ
(*args, **kwargs)¶Shift Packed Double Quadword Left Logical
Supported forms:
VPSLLDQ(xmm, xmm, imm8) [AVX]
VPSLLDQ(ymm, ymm, imm8) [AVX2]
VPSLLDQ(zmm, zmm/m512, imm8) [AVX512BW]
VPSLLDQ(xmm, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPSLLDQ(ymm, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSLLQ
(*args, **kwargs)¶Shift Packed Quadword Data Left Logical
Supported forms:
VPSLLQ(xmm, xmm, imm8) [AVX]
VPSLLQ(xmm, xmm, xmm/m128) [AVX]
VPSLLQ(ymm, ymm, imm8) [AVX2]
VPSLLQ(ymm, ymm, xmm/m128) [AVX2]
VPSLLQ(zmm{k}{z}, m512/m64bcst, imm8) [AVX512F]
VPSLLQ(zmm{k}{z}, zmm, imm8) [AVX512F]
VPSLLQ(zmm{k}{z}, zmm, xmm/m128) [AVX512F]
VPSLLQ(xmm{k}{z}, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VPSLLQ(ymm{k}{z}, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VPSLLQ(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPSLLQ(xmm{k}{z}, xmm, xmm/m128) [AVX512F and AVX512VL]
VPSLLQ(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VPSLLQ(ymm{k}{z}, ymm, xmm/m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSLLVD
(*args, **kwargs)¶Variable Shift Packed Doubleword Data Left Logical
Supported forms:
VPSLLVD(xmm, xmm, xmm/m128) [AVX2]
VPSLLVD(ymm, ymm, ymm/m256) [AVX2]
VPSLLVD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPSLLVD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPSLLVD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPSLLVD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPSLLVD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPSLLVD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSLLVQ
(*args, **kwargs)¶Variable Shift Packed Quadword Data Left Logical
Supported forms:
VPSLLVQ(xmm, xmm, xmm/m128) [AVX2]
VPSLLVQ(ymm, ymm, ymm/m256) [AVX2]
VPSLLVQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPSLLVQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPSLLVQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPSLLVQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPSLLVQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPSLLVQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSLLVW
(*args, **kwargs)¶Variable Shift Packed Word Data Left Logical
Supported forms:
VPSLLVW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPSLLVW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSLLVW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSLLW
(*args, **kwargs)¶Shift Packed Word Data Left Logical
Supported forms:
VPSLLW(xmm, xmm, imm8) [AVX]
VPSLLW(xmm, xmm, xmm/m128) [AVX]
VPSLLW(ymm, ymm, imm8) [AVX2]
VPSLLW(ymm, ymm, xmm/m128) [AVX2]
VPSLLW(zmm{k}{z}, zmm, xmm/m128) [AVX512BW]
VPSLLW(zmm{k}{z}, zmm/m512, imm8) [AVX512BW]
VPSLLW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSLLW(ymm{k}{z}, ymm, xmm/m128) [AVX512BW and AVX512VL]
VPSLLW(xmm{k}{z}, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPSLLW(ymm{k}{z}, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSRAD
(*args, **kwargs)¶Shift Packed Doubleword Data Right Arithmetic
Supported forms:
VPSRAD(xmm, xmm, imm8) [AVX]
VPSRAD(xmm, xmm, xmm/m128) [AVX]
VPSRAD(ymm, ymm, imm8) [AVX2]
VPSRAD(ymm, ymm, xmm/m128) [AVX2]
VPSRAD(zmm{k}{z}, m512/m32bcst, imm8) [AVX512F]
VPSRAD(zmm{k}{z}, zmm, imm8) [AVX512F]
VPSRAD(zmm{k}{z}, zmm, xmm/m128) [AVX512F]
VPSRAD(xmm{k}{z}, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VPSRAD(ymm{k}{z}, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VPSRAD(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPSRAD(xmm{k}{z}, xmm, xmm/m128) [AVX512F and AVX512VL]
VPSRAD(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VPSRAD(ymm{k}{z}, ymm, xmm/m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSRAQ
(*args, **kwargs)¶Shift Packed Quadword Data Right Arithmetic
Supported forms:
VPSRAQ(zmm{k}{z}, m512/m64bcst, imm8) [AVX512F]
VPSRAQ(zmm{k}{z}, zmm, imm8) [AVX512F]
VPSRAQ(zmm{k}{z}, zmm, xmm/m128) [AVX512F]
VPSRAQ(xmm{k}{z}, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VPSRAQ(ymm{k}{z}, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VPSRAQ(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPSRAQ(xmm{k}{z}, xmm, xmm/m128) [AVX512F and AVX512VL]
VPSRAQ(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VPSRAQ(ymm{k}{z}, ymm, xmm/m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSRAVD
(*args, **kwargs)¶Variable Shift Packed Doubleword Data Right Arithmetic
Supported forms:
VPSRAVD(xmm, xmm, xmm/m128) [AVX2]
VPSRAVD(ymm, ymm, ymm/m256) [AVX2]
VPSRAVD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPSRAVD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPSRAVD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPSRAVD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPSRAVD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPSRAVD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSRAVQ
(*args, **kwargs)¶Variable Shift Packed Quadword Data Right Arithmetic
Supported forms:
VPSRAVQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPSRAVQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPSRAVQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPSRAVQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPSRAVQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPSRAVQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSRAVW
(*args, **kwargs)¶Variable Shift Packed Word Data Right Arithmetic
Supported forms:
VPSRAVW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPSRAVW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSRAVW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSRAW
(*args, **kwargs)¶Shift Packed Word Data Right Arithmetic
Supported forms:
VPSRAW(xmm, xmm, imm8) [AVX]
VPSRAW(xmm, xmm, xmm/m128) [AVX]
VPSRAW(ymm, ymm, imm8) [AVX2]
VPSRAW(ymm, ymm, xmm/m128) [AVX2]
VPSRAW(zmm{k}{z}, zmm, xmm/m128) [AVX512BW]
VPSRAW(zmm{k}{z}, zmm/m512, imm8) [AVX512BW]
VPSRAW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSRAW(ymm{k}{z}, ymm, xmm/m128) [AVX512BW and AVX512VL]
VPSRAW(xmm{k}{z}, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPSRAW(ymm{k}{z}, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSRLD
(*args, **kwargs)¶Shift Packed Doubleword Data Right Logical
Supported forms:
VPSRLD(xmm, xmm, imm8) [AVX]
VPSRLD(xmm, xmm, xmm/m128) [AVX]
VPSRLD(ymm, ymm, imm8) [AVX2]
VPSRLD(ymm, ymm, xmm/m128) [AVX2]
VPSRLD(zmm{k}{z}, m512/m32bcst, imm8) [AVX512F]
VPSRLD(zmm{k}{z}, zmm, imm8) [AVX512F]
VPSRLD(zmm{k}{z}, zmm, xmm/m128) [AVX512F]
VPSRLD(xmm{k}{z}, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VPSRLD(ymm{k}{z}, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VPSRLD(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPSRLD(xmm{k}{z}, xmm, xmm/m128) [AVX512F and AVX512VL]
VPSRLD(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VPSRLD(ymm{k}{z}, ymm, xmm/m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSRLDQ
(*args, **kwargs)¶Shift Packed Double Quadword Right Logical
Supported forms:
VPSRLDQ(xmm, xmm, imm8) [AVX]
VPSRLDQ(ymm, ymm, imm8) [AVX2]
VPSRLDQ(zmm, zmm/m512, imm8) [AVX512BW]
VPSRLDQ(xmm, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPSRLDQ(ymm, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSRLQ
(*args, **kwargs)¶Shift Packed Quadword Data Right Logical
Supported forms:
VPSRLQ(xmm, xmm, imm8) [AVX]
VPSRLQ(xmm, xmm, xmm/m128) [AVX]
VPSRLQ(ymm, ymm, imm8) [AVX2]
VPSRLQ(ymm, ymm, xmm/m128) [AVX2]
VPSRLQ(zmm{k}{z}, m512/m64bcst, imm8) [AVX512F]
VPSRLQ(zmm{k}{z}, zmm, imm8) [AVX512F]
VPSRLQ(zmm{k}{z}, zmm, xmm/m128) [AVX512F]
VPSRLQ(xmm{k}{z}, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VPSRLQ(ymm{k}{z}, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VPSRLQ(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VPSRLQ(xmm{k}{z}, xmm, xmm/m128) [AVX512F and AVX512VL]
VPSRLQ(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
VPSRLQ(ymm{k}{z}, ymm, xmm/m128) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSRLVD
(*args, **kwargs)¶Variable Shift Packed Doubleword Data Right Logical
Supported forms:
VPSRLVD(xmm, xmm, xmm/m128) [AVX2]
VPSRLVD(ymm, ymm, ymm/m256) [AVX2]
VPSRLVD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPSRLVD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPSRLVD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPSRLVD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPSRLVD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPSRLVD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSRLVQ
(*args, **kwargs)¶Variable Shift Packed Quadword Data Right Logical
Supported forms:
VPSRLVQ(xmm, xmm, xmm/m128) [AVX2]
VPSRLVQ(ymm, ymm, ymm/m256) [AVX2]
VPSRLVQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPSRLVQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPSRLVQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPSRLVQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPSRLVQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPSRLVQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSRLVW
(*args, **kwargs)¶Variable Shift Packed Word Data Right Logical
Supported forms:
VPSRLVW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPSRLVW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSRLVW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSRLW
(*args, **kwargs)¶Shift Packed Word Data Right Logical
Supported forms:
VPSRLW(xmm, xmm, imm8) [AVX]
VPSRLW(xmm, xmm, xmm/m128) [AVX]
VPSRLW(ymm, ymm, imm8) [AVX2]
VPSRLW(ymm, ymm, xmm/m128) [AVX2]
VPSRLW(zmm{k}{z}, zmm, xmm/m128) [AVX512BW]
VPSRLW(zmm{k}{z}, zmm/m512, imm8) [AVX512BW]
VPSRLW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSRLW(ymm{k}{z}, ymm, xmm/m128) [AVX512BW and AVX512VL]
VPSRLW(xmm{k}{z}, xmm/m128, imm8) [AVX512BW and AVX512VL]
VPSRLW(ymm{k}{z}, ymm/m256, imm8) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSUBB
(*args, **kwargs)¶Subtract Packed Byte Integers
Supported forms:
VPSUBB(xmm, xmm, xmm/m128) [AVX]
VPSUBB(ymm, ymm, ymm/m256) [AVX2]
VPSUBB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPSUBB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSUBB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSUBD
(*args, **kwargs)¶Subtract Packed Doubleword Integers
Supported forms:
VPSUBD(xmm, xmm, xmm/m128) [AVX]
VPSUBD(ymm, ymm, ymm/m256) [AVX2]
VPSUBD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPSUBD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPSUBD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPSUBD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPSUBD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPSUBD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSUBQ
(*args, **kwargs)¶Subtract Packed Quadword Integers
Supported forms:
VPSUBQ(xmm, xmm, xmm/m128) [AVX]
VPSUBQ(ymm, ymm, ymm/m256) [AVX2]
VPSUBQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPSUBQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPSUBQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPSUBQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPSUBQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPSUBQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPSUBSB
(*args, **kwargs)¶Subtract Packed Signed Byte Integers with Signed Saturation
Supported forms:
VPSUBSB(xmm, xmm, xmm/m128) [AVX]
VPSUBSB(ymm, ymm, ymm/m256) [AVX2]
VPSUBSB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPSUBSB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSUBSB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSUBSW
(*args, **kwargs)¶Subtract Packed Signed Word Integers with Signed Saturation
Supported forms:
VPSUBSW(xmm, xmm, xmm/m128) [AVX]
VPSUBSW(ymm, ymm, ymm/m256) [AVX2]
VPSUBSW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPSUBSW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSUBSW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSUBUSB
(*args, **kwargs)¶Subtract Packed Unsigned Byte Integers with Unsigned Saturation
Supported forms:
VPSUBUSB(xmm, xmm, xmm/m128) [AVX]
VPSUBUSB(ymm, ymm, ymm/m256) [AVX2]
VPSUBUSB(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPSUBUSB(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSUBUSB(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSUBUSW
(*args, **kwargs)¶Subtract Packed Unsigned Word Integers with Unsigned Saturation
Supported forms:
VPSUBUSW(xmm, xmm, xmm/m128) [AVX]
VPSUBUSW(ymm, ymm, ymm/m256) [AVX2]
VPSUBUSW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPSUBUSW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSUBUSW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPSUBW
(*args, **kwargs)¶Subtract Packed Word Integers
Supported forms:
VPSUBW(xmm, xmm, xmm/m128) [AVX]
VPSUBW(ymm, ymm, ymm/m256) [AVX2]
VPSUBW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPSUBW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPSUBW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPTERNLOGD
(*args, **kwargs)¶Bitwise Ternary Logical Operation on Doubleword Values
Supported forms:
VPTERNLOGD(zmm{k}{z}, zmm, m512/m32bcst, imm8) [AVX512F]
VPTERNLOGD(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VPTERNLOGD(xmm{k}{z}, xmm, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VPTERNLOGD(xmm{k}{z}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VPTERNLOGD(ymm{k}{z}, ymm, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VPTERNLOGD(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
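The imm8 operand is an 8-entry truth table indexed by the bit triple (destination, first source, second source); 0x96 encodes a three-way XOR, which is why this instruction is common in checksum and carry-save code. A minimal sketch, reusing the scaffolding from the VPSUBD example and assuming AVX512F hardware, ZMMRegister virtual registers, and the VMOVDQU32 load:

    z_acc = ZMMRegister()
    z_x = ZMMRegister()
    z_y = ZMMRegister()
    VMOVDQU32(z_acc, [reg_a])
    VMOVDQU32(z_x, [reg_a + 64])
    VMOVDQU32(z_y, [reg_b])
    VPTERNLOGD(z_acc, z_x, z_y, 0x96)  # z_acc = z_acc ^ z_x ^ z_y in one instruction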
peachpy.x86_64.avx.
VPTERNLOGQ
(*args, **kwargs)¶Bitwise Ternary Logical Operation on Quadword Values
Supported forms:
VPTERNLOGQ(zmm{k}{z}, zmm, m512/m64bcst, imm8) [AVX512F]
VPTERNLOGQ(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VPTERNLOGQ(xmm{k}{z}, xmm, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VPTERNLOGQ(xmm{k}{z}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VPTERNLOGQ(ymm{k}{z}, ymm, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VPTERNLOGQ(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPTEST
(*args, **kwargs)¶Packed Logical Compare
Supported forms:
VPTEST(xmm, xmm/m128) [AVX]
VPTEST(ymm, ymm/m256) [AVX]
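VPTEST performs a non-destructive AND (and AND-NOT) of its operands and only updates ZF and CF, so a register tested against itself checks whether it is all zeros. A sketch, assuming the Label/LABEL pseudo-instructions and the ymm_a register from the earlier example:

    all_zero = Label("all_zero")
    VPTEST(ymm_a, ymm_a)   # ZF = 1 iff every bit of ymm_a is zero
    JZ(all_zero)
    # ... handle the non-zero case here ...
    LABEL(all_zero)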
peachpy.x86_64.avx.
VPTESTMB
(*args, **kwargs)¶Logical AND of Packed Byte Integer Values and Set Mask
Supported forms:
VPTESTMB(k{k}, zmm, zmm/m512) [AVX512BW]
VPTESTMB(k{k}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPTESTMB(k{k}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPTESTMD
(*args, **kwargs)¶Logical AND of Packed Doubleword Integer Values and Set Mask
Supported forms:
VPTESTMD(k{k}, zmm, m512/m32bcst) [AVX512F]
VPTESTMD(k{k}, zmm, zmm) [AVX512F]
VPTESTMD(k{k}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPTESTMD(k{k}, xmm, xmm) [AVX512F and AVX512VL]
VPTESTMD(k{k}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPTESTMD(k{k}, ymm, ymm) [AVX512F and AVX512VL]
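Unlike VPTEST, the VPTESTM family writes a mask register: bit i of the destination is set when the AND of the i-th elements is non-zero. A sketch, assuming the architectural registers zmm0, zmm1 and the mask register k1 are exposed by PeachPy under these names; the resulting mask could then feed masked instructions or be moved to a general-purpose register with KMOVW:

    VPTESTMD(k1, zmm0, zmm1)  # k1[i] = ((zmm0[i] & zmm1[i]) != 0) for 16 dword lanes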
peachpy.x86_64.avx.
VPTESTMQ
(*args, **kwargs)¶Logical AND of Packed Quadword Integer Values and Set Mask
Supported forms:
VPTESTMQ(k{k}, zmm, m512/m64bcst) [AVX512F]
VPTESTMQ(k{k}, zmm, zmm) [AVX512F]
VPTESTMQ(k{k}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPTESTMQ(k{k}, xmm, xmm) [AVX512F and AVX512VL]
VPTESTMQ(k{k}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPTESTMQ(k{k}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPTESTMW
(*args, **kwargs)¶Logical AND of Packed Word Integer Values and Set Mask
Supported forms:
VPTESTMW(k{k}, zmm, zmm/m512) [AVX512BW]
VPTESTMW(k{k}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPTESTMW(k{k}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPTESTNMB
(*args, **kwargs)¶Logical NAND of Packed Byte Integer Values and Set Mask
Supported forms:
VPTESTNMB(k{k}, zmm, zmm/m512) [AVX512F and AVX512BW]
VPTESTNMB(k{k}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPTESTNMB(k{k}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPTESTNMD
(*args, **kwargs)¶Logical NAND of Packed Doubleword Integer Values and Set Mask
Supported forms:
VPTESTNMD(k{k}, zmm, m512/m32bcst) [AVX512F]
VPTESTNMD(k{k}, zmm, zmm) [AVX512F]
VPTESTNMD(k{k}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPTESTNMD(k{k}, xmm, xmm) [AVX512F and AVX512VL]
VPTESTNMD(k{k}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPTESTNMD(k{k}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPTESTNMQ
(*args, **kwargs)¶Logical NAND of Packed Quadword Integer Values and Set Mask
Supported forms:
VPTESTNMQ(k{k}, zmm, m512/m64bcst) [AVX512F]
VPTESTNMQ(k{k}, zmm, zmm) [AVX512F]
VPTESTNMQ(k{k}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPTESTNMQ(k{k}, xmm, xmm) [AVX512F and AVX512VL]
VPTESTNMQ(k{k}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPTESTNMQ(k{k}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPTESTNMW
(*args, **kwargs)¶Logical NAND of Packed Word Integer Values and Set Mask
Supported forms:
VPTESTNMW(k{k}, zmm, zmm/m512) [AVX512F and AVX512BW]
VPTESTNMW(k{k}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPTESTNMW(k{k}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPUNPCKHBW
(*args, **kwargs)¶Unpack and Interleave High-Order Bytes into Words
Supported forms:
VPUNPCKHBW(xmm, xmm, xmm/m128) [AVX]
VPUNPCKHBW(ymm, ymm, ymm/m256) [AVX2]
VPUNPCKHBW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPUNPCKHBW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPUNPCKHBW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPUNPCKHDQ
(*args, **kwargs)¶Unpack and Interleave High-Order Doublewords into Quadwords
Supported forms:
VPUNPCKHDQ(xmm, xmm, xmm/m128) [AVX]
VPUNPCKHDQ(ymm, ymm, ymm/m256) [AVX2]
VPUNPCKHDQ(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPUNPCKHDQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPUNPCKHDQ(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPUNPCKHDQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPUNPCKHDQ(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPUNPCKHDQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPUNPCKHQDQ
(*args, **kwargs)¶Unpack and Interleave High-Order Quadwords into Double Quadwords
Supported forms:
VPUNPCKHQDQ(xmm, xmm, xmm/m128) [AVX]
VPUNPCKHQDQ(ymm, ymm, ymm/m256) [AVX2]
VPUNPCKHQDQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPUNPCKHQDQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPUNPCKHQDQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPUNPCKHQDQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPUNPCKHQDQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPUNPCKHQDQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPUNPCKHWD
(*args, **kwargs)¶Unpack and Interleave High-Order Words into Doublewords
Supported forms:
VPUNPCKHWD(xmm, xmm, xmm/m128) [AVX]
VPUNPCKHWD(ymm, ymm, ymm/m256) [AVX2]
VPUNPCKHWD(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPUNPCKHWD(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPUNPCKHWD(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPUNPCKLBW
(*args, **kwargs)¶Unpack and Interleave Low-Order Bytes into Words
Supported forms:
VPUNPCKLBW(xmm, xmm, xmm/m128) [AVX]
VPUNPCKLBW(ymm, ymm, ymm/m256) [AVX2]
VPUNPCKLBW(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPUNPCKLBW(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPUNPCKLBW(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
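A classic use of the byte unpacks is zero-extending unsigned bytes to words by interleaving with an all-zero register: VPUNPCKLBW yields the low half and VPUNPCKHBW (above) the high half. Note that the 256-bit forms interleave within each 128-bit lane, not across the full register. A sketch, with ymm_bytes standing for a hypothetical register holding 32 uint8 values:

    ymm_zero = YMMRegister()
    ymm_lo = YMMRegister()
    ymm_hi = YMMRegister()
    VPXOR(ymm_zero, ymm_zero, ymm_zero)      # all-zero register
    VPUNPCKLBW(ymm_lo, ymm_bytes, ymm_zero)  # low 8 bytes of each lane -> 16-bit words
    VPUNPCKHBW(ymm_hi, ymm_bytes, ymm_zero)  # high 8 bytes of each lane -> 16-bit words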
peachpy.x86_64.avx.
VPUNPCKLDQ
(*args, **kwargs)¶Unpack and Interleave Low-Order Doublewords into Quadwords
Supported forms:
VPUNPCKLDQ(xmm, xmm, xmm/m128) [AVX]
VPUNPCKLDQ(ymm, ymm, ymm/m256) [AVX2]
VPUNPCKLDQ(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPUNPCKLDQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPUNPCKLDQ(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPUNPCKLDQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPUNPCKLDQ(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPUNPCKLDQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPUNPCKLQDQ
(*args, **kwargs)¶Unpack and Interleave Low-Order Quadwords into Double Quadwords
Supported forms:
VPUNPCKLQDQ(xmm, xmm, xmm/m128) [AVX]
VPUNPCKLQDQ(ymm, ymm, ymm/m256) [AVX2]
VPUNPCKLQDQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPUNPCKLQDQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPUNPCKLQDQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPUNPCKLQDQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPUNPCKLQDQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPUNPCKLQDQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VPUNPCKLWD
(*args, **kwargs)¶Unpack and Interleave Low-Order Words into Doublewords
Supported forms:
VPUNPCKLWD(xmm, xmm, xmm/m128) [AVX]
VPUNPCKLWD(ymm, ymm, ymm/m256) [AVX2]
VPUNPCKLWD(zmm{k}{z}, zmm, zmm/m512) [AVX512BW]
VPUNPCKLWD(xmm{k}{z}, xmm, xmm/m128) [AVX512BW and AVX512VL]
VPUNPCKLWD(ymm{k}{z}, ymm, ymm/m256) [AVX512BW and AVX512VL]
peachpy.x86_64.avx.
VPXOR
(*args, **kwargs)¶Packed Bitwise Logical Exclusive OR
Supported forms:
VPXOR(xmm, xmm, xmm/m128) [AVX]
VPXOR(ymm, ymm, ymm/m256) [AVX2]
peachpy.x86_64.avx.
VPXORD
(*args, **kwargs)¶Bitwise Logical Exclusive OR of Packed Doubleword Integers
Supported forms:
VPXORD(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VPXORD(zmm{k}{z}, zmm, zmm) [AVX512F]
VPXORD(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VPXORD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPXORD(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VPXORD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
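VPXOR has no 512-bit form, so EVEX code XORs or zeroes ZMM registers with VPXORD or VPXORQ instead; the self-XOR zeroing idiom carries over unchanged. A sketch, assuming ZMMRegister virtual registers:

    z_zero = ZMMRegister()
    VPXORD(z_zero, z_zero, z_zero)  # dependency-breaking zeroing of a 512-bit register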
peachpy.x86_64.avx.
VPXORQ
(*args, **kwargs)¶Bitwise Logical Exclusive OR of Packed Quadword Integers
Supported forms:
VPXORQ(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VPXORQ(zmm{k}{z}, zmm, zmm) [AVX512F]
VPXORQ(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VPXORQ(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VPXORQ(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VPXORQ(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VRANGEPD
(*args, **kwargs)¶Range Restriction Calculation For Packed Pairs of Double-Precision Floating-Point Values
Supported forms:
VRANGEPD(zmm{k}{z}, zmm, m512/m64bcst, imm8) [AVX512DQ]
VRANGEPD(zmm{k}{z}, zmm, zmm, {sae}, imm8) [AVX512DQ]
VRANGEPD(zmm{k}{z}, zmm, zmm, imm8) [AVX512DQ]
VRANGEPD(xmm{k}{z}, xmm, m128/m64bcst, imm8) [AVX512DQ and AVX512VL]
VRANGEPD(xmm{k}{z}, xmm, xmm, imm8) [AVX512DQ and AVX512VL]
VRANGEPD(ymm{k}{z}, ymm, m256/m64bcst, imm8) [AVX512DQ and AVX512VL]
VRANGEPD(ymm{k}{z}, ymm, ymm, imm8) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VRANGEPS
(*args, **kwargs)¶Range Restriction Calculation For Packed Pairs of Single-Precision Floating-Point Values
Supported forms:
VRANGEPS(zmm{k}{z}, zmm, m512/m32bcst, imm8) [AVX512DQ]
VRANGEPS(zmm{k}{z}, zmm, zmm, {sae}, imm8) [AVX512DQ]
VRANGEPS(zmm{k}{z}, zmm, zmm, imm8) [AVX512DQ]
VRANGEPS(xmm{k}{z}, xmm, m128/m32bcst, imm8) [AVX512DQ and AVX512VL]
VRANGEPS(xmm{k}{z}, xmm, xmm, imm8) [AVX512DQ and AVX512VL]
VRANGEPS(ymm{k}{z}, ymm, m256/m32bcst, imm8) [AVX512DQ and AVX512VL]
VRANGEPS(ymm{k}{z}, ymm, ymm, imm8) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VRANGESD
(*args, **kwargs)¶Range Restriction Calculation For a pair of Scalar Double-Precision Floating-Point Values
Supported forms:
VRANGESD(xmm{k}{z}, xmm, xmm/m64, imm8) [AVX512DQ]
VRANGESD(xmm{k}{z}, xmm, xmm, {sae}, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VRANGESS
(*args, **kwargs)¶Range Restriction Calculation For a pair of Scalar Single-Precision Floating-Point Values
Supported forms:
VRANGESS(xmm{k}{z}, xmm, xmm/m32, imm8) [AVX512DQ]
VRANGESS(xmm{k}{z}, xmm, xmm, {sae}, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VRCP14PD
(*args, **kwargs)¶Compute Approximate Reciprocals of Packed Double-Precision Floating-Point Values
Supported forms:
VRCP14PD(zmm{k}{z}, m512/m64bcst) [AVX512F]
VRCP14PD(zmm{k}{z}, zmm) [AVX512F]
VRCP14PD(xmm{k}{z}, m128/m64bcst) [AVX512F and AVX512VL]
VRCP14PD(ymm{k}{z}, m256/m64bcst) [AVX512F and AVX512VL]
VRCP14PD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VRCP14PD(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VRCP14PS
(*args, **kwargs)¶Compute Approximate Reciprocals of Packed Single-Precision Floating-Point Values
Supported forms:
VRCP14PS(zmm{k}{z}, m512/m32bcst) [AVX512F]
VRCP14PS(zmm{k}{z}, zmm) [AVX512F]
VRCP14PS(xmm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VRCP14PS(ymm{k}{z}, m256/m32bcst) [AVX512F and AVX512VL]
VRCP14PS(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VRCP14PS(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VRCP14SD
(*args, **kwargs)¶Compute Approximate Reciprocal of a Scalar Double-Precision Floating-Point Value
Supported forms:
VRCP14SD(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
peachpy.x86_64.avx.
VRCP14SS
(*args, **kwargs)¶Compute Approximate Reciprocal of a Scalar Single-Precision Floating-Point Value
Supported forms:
VRCP14SS(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
peachpy.x86_64.avx.
VRCP28PD
(*args, **kwargs)¶Approximation to the Reciprocal of Packed Double-Precision Floating-Point Values with Less Than 2^-28 Relative Error
Supported forms:
VRCP28PD(zmm{k}{z}, m512/m64bcst) [AVX512ER]
VRCP28PD(zmm{k}{z}, zmm, {sae}) [AVX512ER]
VRCP28PD(zmm{k}{z}, zmm) [AVX512ER]
peachpy.x86_64.avx.
VRCP28PS
(*args, **kwargs)¶Approximation to the Reciprocal of Packed Single-Precision Floating-Point Values with Less Than 2^-28 Relative Error
Supported forms:
VRCP28PS(zmm{k}{z}, m512/m32bcst) [AVX512ER]
VRCP28PS(zmm{k}{z}, zmm, {sae}) [AVX512ER]
VRCP28PS(zmm{k}{z}, zmm) [AVX512ER]
peachpy.x86_64.avx.
VRCP28SD
(*args, **kwargs)¶Approximation to the Reciprocal of a Scalar Double-Precision Floating-Point Value with Less Than 2^-28 Relative Error
Supported forms:
VRCP28SD(xmm{k}{z}, xmm, xmm/m64) [AVX512ER]
VRCP28SD(xmm{k}{z}, xmm, xmm, {sae}) [AVX512ER]
peachpy.x86_64.avx.
VRCP28SS
(*args, **kwargs)¶Approximation to the Reciprocal of a Scalar Single-Precision Floating-Point Value with Less Than 2^-28 Relative Error
Supported forms:
VRCP28SS(xmm{k}{z}, xmm, xmm/m32) [AVX512ER]
VRCP28SS(xmm{k}{z}, xmm, xmm, {sae}) [AVX512ER]
peachpy.x86_64.avx.
VRCPPS
(*args, **kwargs)¶Compute Approximate Reciprocals of Packed Single-Precision Floating-Point Values
Supported forms:
VRCPPS(xmm, xmm/m128) [AVX]
VRCPPS(ymm, ymm/m256) [AVX]
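VRCPPS returns only an ~12-bit approximation, so it is usually followed by one Newton-Raphson step, r1 = r0 * (2 - d * r0), to approach full single precision. A sketch, with ymm_d standing for a hypothetical register holding the denominators and Constant.float32x8 assumed available for the broadcast constant:

    two = Constant.float32x8(2.0)
    ymm_r = YMMRegister()
    ymm_t = YMMRegister()
    ymm_c = YMMRegister()
    VRCPPS(ymm_r, ymm_d)         # r0 ~= 1/d, about 12 bits of precision
    VMULPS(ymm_t, ymm_d, ymm_r)  # t = d * r0
    VMOVUPS(ymm_c, two)          # load broadcast constant 2.0f
    VSUBPS(ymm_t, ymm_c, ymm_t)  # t = 2 - d * r0
    VMULPS(ymm_r, ymm_r, ymm_t)  # r1 = r0 * (2 - d * r0)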
peachpy.x86_64.avx.
VRCPSS
(*args, **kwargs)¶Compute Approximate Reciprocal of Scalar Single-Precision Floating-Point Values
Supported forms:
VRCPSS(xmm, xmm, xmm/m32) [AVX]
peachpy.x86_64.avx.
VREDUCEPD
(*args, **kwargs)¶Perform Reduction Transformation on Packed Double-Precision Floating-Point Values
Supported forms:
VREDUCEPD(zmm{k}{z}, m512/m64bcst, imm8) [AVX512DQ]
VREDUCEPD(zmm{k}{z}, zmm, imm8) [AVX512DQ]
VREDUCEPD(xmm{k}{z}, m128/m64bcst, imm8) [AVX512DQ and AVX512VL]
VREDUCEPD(ymm{k}{z}, m256/m64bcst, imm8) [AVX512DQ and AVX512VL]
VREDUCEPD(xmm{k}{z}, xmm, imm8) [AVX512DQ and AVX512VL]
VREDUCEPD(ymm{k}{z}, ymm, imm8) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VREDUCEPS
(*args, **kwargs)¶Perform Reduction Transformation on Packed Single-Precision Floating-Point Values
Supported forms:
VREDUCEPS(zmm{k}{z}, m512/m32bcst, imm8) [AVX512DQ]
VREDUCEPS(zmm{k}{z}, zmm, imm8) [AVX512DQ]
VREDUCEPS(xmm{k}{z}, m128/m32bcst, imm8) [AVX512DQ and AVX512VL]
VREDUCEPS(ymm{k}{z}, m256/m32bcst, imm8) [AVX512DQ and AVX512VL]
VREDUCEPS(xmm{k}{z}, xmm, imm8) [AVX512DQ and AVX512VL]
VREDUCEPS(ymm{k}{z}, ymm, imm8) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VREDUCESD
(*args, **kwargs)¶Perform Reduction Transformation on a Scalar Double-Precision Floating-Point Value
Supported forms:
VREDUCESD(xmm{k}{z}, xmm, xmm/m64, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VREDUCESS
(*args, **kwargs)¶Perform Reduction Transformation on a Scalar Single-Precision Floating-Point Value
Supported forms:
VREDUCESS(xmm{k}{z}, xmm, xmm/m32, imm8) [AVX512DQ]
peachpy.x86_64.avx.
VRNDSCALEPD
(*args, **kwargs)¶Round Packed Double-Precision Floating-Point Values To Include A Given Number Of Fraction Bits
Supported forms:
VRNDSCALEPD(zmm{k}{z}, m512/m64bcst, imm8) [AVX512F]
VRNDSCALEPD(zmm{k}{z}, zmm, {sae}, imm8) [AVX512F]
VRNDSCALEPD(zmm{k}{z}, zmm, imm8) [AVX512F]
VRNDSCALEPD(xmm{k}{z}, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VRNDSCALEPD(ymm{k}{z}, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VRNDSCALEPD(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VRNDSCALEPD(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
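The imm8 packs two fields: bits 7:4 give M, the number of fraction bits to keep (the result is rounded to a multiple of 2^-M), and bits 1:0 select the rounding mode (0 nearest, 1 down, 2 up, 3 truncate) unless bit 2 defers to MXCSR; bit 3 suppresses precision exceptions. So 0x00 rounds to the nearest integer and 0x30 rounds to the nearest multiple of 1/8. A sketch, with zmm_x and z_out as hypothetical registers:

    VRNDSCALEPD(z_out, zmm_x, 0x30)  # round each double to the nearest multiple of 1/8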
peachpy.x86_64.avx.
VRNDSCALEPS
(*args, **kwargs)¶Round Packed Single-Precision Floating-Point Values To Include A Given Number Of Fraction Bits
Supported forms:
VRNDSCALEPS(zmm{k}{z}, m512/m32bcst, imm8) [AVX512F]
VRNDSCALEPS(zmm{k}{z}, zmm, {sae}, imm8) [AVX512F]
VRNDSCALEPS(zmm{k}{z}, zmm, imm8) [AVX512F]
VRNDSCALEPS(xmm{k}{z}, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VRNDSCALEPS(ymm{k}{z}, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VRNDSCALEPS(xmm{k}{z}, xmm, imm8) [AVX512F and AVX512VL]
VRNDSCALEPS(ymm{k}{z}, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VRNDSCALESD
(*args, **kwargs)¶Round Scalar Double-Precision Floating-Point Value To Include A Given Number Of Fraction Bits
Supported forms:
VRNDSCALESD(xmm{k}{z}, xmm, xmm/m64, imm8) [AVX512F]
VRNDSCALESD(xmm{k}{z}, xmm, xmm, {sae}, imm8) [AVX512F]
peachpy.x86_64.avx.
VRNDSCALESS
(*args, **kwargs)¶Round Scalar Single-Precision Floating-Point Value To Include A Given Number Of Fraction Bits
Supported forms:
VRNDSCALESS(xmm{k}{z}, xmm, xmm/m32, imm8) [AVX512F]
VRNDSCALESS(xmm{k}{z}, xmm, xmm, {sae}, imm8) [AVX512F]
peachpy.x86_64.avx.
VROUNDPD
(*args, **kwargs)¶Round Packed Double Precision Floating-Point Values
Supported forms:
VROUNDPD(xmm, xmm/m128, imm8) [AVX]
VROUNDPD(ymm, ymm/m256, imm8) [AVX]
peachpy.x86_64.avx.
VROUNDPS
(*args, **kwargs)¶Round Packed Single Precision Floating-Point Values
Supported forms:
VROUNDPS(xmm, xmm/m128, imm8) [AVX]
VROUNDPS(ymm, ymm/m256, imm8) [AVX]
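The low two bits of imm8 select the rounding mode (0 nearest, 1 down, 2 up, 3 truncate), bit 2 defers to MXCSR instead, and bit 3 suppresses precision exceptions, so floor is conventionally encoded as 0x9 and truncation as 0xB. A sketch, with ymm_x and ymm_f as hypothetical registers:

    VROUNDPS(ymm_f, ymm_x, 0x9)  # floor: round toward -inf, inexact suppressed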
peachpy.x86_64.avx.
VROUNDSD
(*args, **kwargs)¶Round Scalar Double Precision Floating-Point Values
Supported forms:
VROUNDSD(xmm, xmm, xmm/m64, imm8) [AVX]
peachpy.x86_64.avx.
VROUNDSS
(*args, **kwargs)¶Round Scalar Single Precision Floating-Point Values
Supported forms:
VROUNDSS(xmm, xmm, xmm/m32, imm8) [AVX]
peachpy.x86_64.avx.
VRSQRT14PD
(*args, **kwargs)¶Compute Approximate Reciprocals of Square Roots of Packed Double-Precision Floating-Point Values
Supported forms:
VRSQRT14PD(zmm{k}{z}, m512/m64bcst) [AVX512F]
VRSQRT14PD(zmm{k}{z}, zmm) [AVX512F]
VRSQRT14PD(xmm{k}{z}, m128/m64bcst) [AVX512F and AVX512VL]
VRSQRT14PD(ymm{k}{z}, m256/m64bcst) [AVX512F and AVX512VL]
VRSQRT14PD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VRSQRT14PD(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VRSQRT14PS
(*args, **kwargs)¶Compute Approximate Reciprocals of Square Roots of Packed Single-Precision Floating-Point Values
Supported forms:
VRSQRT14PS(zmm{k}{z}, m512/m32bcst) [AVX512F]
VRSQRT14PS(zmm{k}{z}, zmm) [AVX512F]
VRSQRT14PS(xmm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VRSQRT14PS(ymm{k}{z}, m256/m32bcst) [AVX512F and AVX512VL]
VRSQRT14PS(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VRSQRT14PS(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VRSQRT14SD
(*args, **kwargs)¶Compute Approximate Reciprocal of a Square Root of a Scalar Double-Precision Floating-Point Value
Supported forms:
VRSQRT14SD(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
peachpy.x86_64.avx.
VRSQRT14SS
(*args, **kwargs)¶Compute Approximate Reciprocal of a Square Root of a Scalar Single-Precision Floating-Point Value
Supported forms:
VRSQRT14SS(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
peachpy.x86_64.avx.
VRSQRT28PD
(*args, **kwargs)¶Approximation to the Reciprocal Square Root of Packed Double-Precision Floating-Point Values with Less Than 2^-28 Relative Error
Supported forms:
VRSQRT28PD(zmm{k}{z}, m512/m64bcst) [AVX512ER]
VRSQRT28PD(zmm{k}{z}, zmm, {sae}) [AVX512ER]
VRSQRT28PD(zmm{k}{z}, zmm) [AVX512ER]
peachpy.x86_64.avx.
VRSQRT28PS
(*args, **kwargs)¶Approximation to the Reciprocal Square Root of Packed Single-Precision Floating-Point Values with Less Than 2^-28 Relative Error
Supported forms:
VRSQRT28PS(zmm{k}{z}, m512/m32bcst) [AVX512ER]
VRSQRT28PS(zmm{k}{z}, zmm, {sae}) [AVX512ER]
VRSQRT28PS(zmm{k}{z}, zmm) [AVX512ER]
peachpy.x86_64.avx.
VRSQRT28SD
(*args, **kwargs)¶Approximation to the Reciprocal Square Root of a Scalar Double-Precision Floating-Point Value with Less Than 2^-28 Relative Error
Supported forms:
VRSQRT28SD(xmm{k}{z}, xmm, xmm/m64) [AVX512ER]
VRSQRT28SD(xmm{k}{z}, xmm, xmm, {sae}) [AVX512ER]
peachpy.x86_64.avx.
VRSQRT28SS
(*args, **kwargs)¶Approximation to the Reciprocal Square Root of a Scalar Single-Precision Floating-Point Value with Less Than 2^-28 Relative Error
Supported forms:
VRSQRT28SS(xmm{k}{z}, xmm, xmm/m32) [AVX512ER]
VRSQRT28SS(xmm{k}{z}, xmm, xmm, {sae}) [AVX512ER]
peachpy.x86_64.avx.
VRSQRTPS
(*args, **kwargs)¶Compute Reciprocals of Square Roots of Packed Single-Precision Floating-Point Values
Supported forms:
VRSQRTPS(xmm, xmm/m128) [AVX]
VRSQRTPS(ymm, ymm/m256) [AVX]
peachpy.x86_64.avx.
VRSQRTSS
(*args, **kwargs)¶Compute Reciprocal of Square Root of Scalar Single-Precision Floating-Point Value
Supported forms:
VRSQRTSS(xmm, xmm, xmm/m32) [AVX]
peachpy.x86_64.avx.
VSCALEFPD
(*args, **kwargs)¶Scale Packed Double-Precision Floating-Point Values With Double-Precision Floating-Point Values
Supported forms:
VSCALEFPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VSCALEFPD(zmm{k}{z}, zmm, zmm, {er}) [AVX512F]
VSCALEFPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VSCALEFPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VSCALEFPD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VSCALEFPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VSCALEFPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSCALEFPS
(*args, **kwargs)¶Scale Packed Single-Precision Floating-Point Values With Single-Precision Floating-Point Values
Supported forms:
VSCALEFPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VSCALEFPS(zmm{k}{z}, zmm, zmm, {er}) [AVX512F]
VSCALEFPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VSCALEFPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VSCALEFPS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VSCALEFPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VSCALEFPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSCALEFSD
(*args, **kwargs)¶Scale Scalar Double-Precision Floating-Point Value With a Double-Precision Floating-Point Value
Supported forms:
VSCALEFSD(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
VSCALEFSD(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VSCALEFSS
(*args, **kwargs)¶Scale Scalar Single-Precision Floating-Point Value With a Single-Precision Floating-Point Value
Supported forms:
VSCALEFSS(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
VSCALEFSS(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VSCATTERDPD
(*args, **kwargs)¶Scatter Packed Double-Precision Floating-Point Values with Signed Doubleword Indices
Supported forms:
VSCATTERDPD(vm32y{k}, zmm) [AVX512F]
VSCATTERDPD(vm32x{k}, xmm) [AVX512F and AVX512VL]
VSCATTERDPD(vm32x{k}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSCATTERDPS
(*args, **kwargs)¶Scatter Packed Single-Precision Floating-Point Values with Signed Doubleword Indices
Supported forms:
VSCATTERDPS(vm32z{k}, zmm) [AVX512F]
VSCATTERDPS(vm32x{k}, xmm) [AVX512F and AVX512VL]
VSCATTERDPS(vm32y{k}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSCATTERPF0DPD
(*args, **kwargs)¶Sparse Prefetch Packed Double-Precision Floating-Point Data Values with Signed Doubleword Indices Using T0 Hint with Intent to Write
Supported forms:
VSCATTERPF0DPD(vm32y{k}) [AVX512PF]
peachpy.x86_64.avx.
VSCATTERPF0DPS
(*args, **kwargs)¶Sparse Prefetch Packed Single-Precision Floating-Point Data Values with Signed Doubleword Indices Using T0 Hint with Intent to Write
Supported forms:
VSCATTERPF0DPS(vm32z{k}) [AVX512PF]
peachpy.x86_64.avx.
VSCATTERPF0QPD
(*args, **kwargs)¶Sparse Prefetch Packed Double-Precision Floating-Point Data Values with Signed Quadword Indices Using T0 Hint with Intent to Write
Supported forms:
VSCATTERPF0QPD(vm64z{k}) [AVX512PF]
peachpy.x86_64.avx.
VSCATTERPF0QPS
(*args, **kwargs)¶Sparse Prefetch Packed Single-Precision Floating-Point Data Values with Signed Quadword Indices Using T0 Hint with Intent to Write
Supported forms:
VSCATTERPF0QPS(vm64z{k}) [AVX512PF]
peachpy.x86_64.avx.
VSCATTERPF1DPD
(*args, **kwargs)¶Sparse Prefetch Packed Double-Precision Floating-Point Data Values with Signed Doubleword Indices Using T1 Hint with Intent to Write
Supported forms:
VSCATTERPF1DPD(vm32y{k}) [AVX512PF]
peachpy.x86_64.avx.
VSCATTERPF1DPS
(*args, **kwargs)¶Sparse Prefetch Packed Single-Precision Floating-Point Data Values with Signed Doubleword Indices Using T1 Hint with Intent to Write
Supported forms:
VSCATTERPF1DPS(vm32z{k}) [AVX512PF]
peachpy.x86_64.avx.
VSCATTERPF1QPD
(*args, **kwargs)¶Sparse Prefetch Packed Double-Precision Floating-Point Data Values with Signed Quadword Indices Using T1 Hint with Intent to Write
Supported forms:
VSCATTERPF1QPD(vm64z{k}) [AVX512PF]
peachpy.x86_64.avx.
VSCATTERPF1QPS
(*args, **kwargs)¶Sparse Prefetch Packed Single-Precision Floating-Point Data Values with Signed Quadword Indices Using T1 Hint with Intent to Write
Supported forms:
VSCATTERPF1QPS(vm64z{k}) [AVX512PF]
peachpy.x86_64.avx.
VSCATTERQPD
(*args, **kwargs)¶Scatter Packed Double-Precision Floating-Point Values with Signed Quadword Indices
Supported forms:
VSCATTERQPD(vm64z{k}, zmm) [AVX512F]
VSCATTERQPD(vm64x{k}, xmm) [AVX512F and AVX512VL]
VSCATTERQPD(vm64y{k}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSCATTERQPS
(*args, **kwargs)¶Scatter Packed Single-Precision Floating-Point Values with Signed Quadword Indices
Supported forms:
VSCATTERQPS(vm64z{k}, ymm) [AVX512F]
VSCATTERQPS(vm64x{k}, xmm) [AVX512F and AVX512VL]
VSCATTERQPS(vm64y{k}, xmm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSHUFF32X4
(*args, **kwargs)¶Shuffle 128-Bit Packed Single-Precision Floating-Point Values
Supported forms:
VSHUFF32X4(zmm{k}{z}, zmm, m512/m32bcst, imm8) [AVX512F]
VSHUFF32X4(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VSHUFF32X4(ymm{k}{z}, ymm, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VSHUFF32X4(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSHUFF64X2
(*args, **kwargs)¶Shuffle 128-Bit Packed Double-Precision Floating-Point Values
Supported forms:
VSHUFF64X2(zmm{k}{z}, zmm, m512/m64bcst, imm8) [AVX512F]
VSHUFF64X2(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VSHUFF64X2(ymm{k}{z}, ymm, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VSHUFF64X2(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSHUFI32X4
(*args, **kwargs)¶Shuffle 128-Bit Packed Doubleword Integer Values
Supported forms:
VSHUFI32X4(zmm{k}{z}, zmm, m512/m32bcst, imm8) [AVX512F]
VSHUFI32X4(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VSHUFI32X4(ymm{k}{z}, ymm, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VSHUFI32X4(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSHUFI64X2
(*args, **kwargs)¶Shuffle 128-Bit Packed Quadword Integer Values
Supported forms:
VSHUFI64X2(zmm{k}{z}, zmm, m512/m64bcst, imm8) [AVX512F]
VSHUFI64X2(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VSHUFI64X2(ymm{k}{z}, ymm, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VSHUFI64X2(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSHUFPD
(*args, **kwargs)¶Shuffle Packed Double-Precision Floating-Point Values
Supported forms:
VSHUFPD(xmm, xmm, xmm/m128, imm8) [AVX]
VSHUFPD(ymm, ymm, ymm/m256, imm8) [AVX]
VSHUFPD(zmm{k}{z}, zmm, m512/m64bcst, imm8) [AVX512F]
VSHUFPD(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VSHUFPD(xmm{k}{z}, xmm, m128/m64bcst, imm8) [AVX512F and AVX512VL]
VSHUFPD(xmm{k}{z}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VSHUFPD(ymm{k}{z}, ymm, m256/m64bcst, imm8) [AVX512F and AVX512VL]
VSHUFPD(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSHUFPS
(*args, **kwargs)¶Shuffle Packed Single-Precision Floating-Point Values
Supported forms:
VSHUFPS(xmm, xmm, xmm/m128, imm8) [AVX]
VSHUFPS(ymm, ymm, ymm/m256, imm8) [AVX]
VSHUFPS(zmm{k}{z}, zmm, m512/m32bcst, imm8) [AVX512F]
VSHUFPS(zmm{k}{z}, zmm, zmm, imm8) [AVX512F]
VSHUFPS(xmm{k}{z}, xmm, m128/m32bcst, imm8) [AVX512F and AVX512VL]
VSHUFPS(xmm{k}{z}, xmm, xmm, imm8) [AVX512F and AVX512VL]
VSHUFPS(ymm{k}{z}, ymm, m256/m32bcst, imm8) [AVX512F and AVX512VL]
VSHUFPS(ymm{k}{z}, ymm, ymm, imm8) [AVX512F and AVX512VL]
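Within each 128-bit lane, imm8 bits 1:0 and 3:2 pick the two low destination elements from the first source, and bits 5:4 and 7:6 pick the two high elements from the second source; passing the same register twice with imm8 = 0x00 therefore replicates element 0 of each lane. A sketch, with ymm_v and ymm_bcast as hypothetical registers:

    VSHUFPS(ymm_bcast, ymm_v, ymm_v, 0x00)  # broadcast element 0 within each 128-bit lane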
peachpy.x86_64.avx.
VSQRTPD
(*args, **kwargs)¶Compute Square Roots of Packed Double-Precision Floating-Point Values
Supported forms:
VSQRTPD(xmm, xmm/m128) [AVX]
VSQRTPD(ymm, ymm/m256) [AVX]
VSQRTPD(zmm{k}{z}, m512/m64bcst) [AVX512F]
VSQRTPD(zmm{k}{z}, zmm, {er}) [AVX512F]
VSQRTPD(zmm{k}{z}, zmm) [AVX512F]
VSQRTPD(xmm{k}{z}, m128/m64bcst) [AVX512F and AVX512VL]
VSQRTPD(ymm{k}{z}, m256/m64bcst) [AVX512F and AVX512VL]
VSQRTPD(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VSQRTPD(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSQRTPS
(*args, **kwargs)¶Compute Square Roots of Packed Single-Precision Floating-Point Values
Supported forms:
VSQRTPS(xmm, xmm/m128) [AVX]
VSQRTPS(ymm, ymm/m256) [AVX]
VSQRTPS(zmm{k}{z}, m512/m32bcst) [AVX512F]
VSQRTPS(zmm{k}{z}, zmm, {er}) [AVX512F]
VSQRTPS(zmm{k}{z}, zmm) [AVX512F]
VSQRTPS(xmm{k}{z}, m128/m32bcst) [AVX512F and AVX512VL]
VSQRTPS(ymm{k}{z}, m256/m32bcst) [AVX512F and AVX512VL]
VSQRTPS(xmm{k}{z}, xmm) [AVX512F and AVX512VL]
VSQRTPS(ymm{k}{z}, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSQRTSD
(*args, **kwargs)¶Compute Square Root of Scalar Double-Precision Floating-Point Value
Supported forms:
VSQRTSD(xmm, xmm, xmm/m64) [AVX]
VSQRTSD(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
VSQRTSD(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VSQRTSS
(*args, **kwargs)¶Compute Square Root of Scalar Single-Precision Floating-Point Value
Supported forms:
VSQRTSS(xmm, xmm, xmm/m32) [AVX]
VSQRTSS(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
VSQRTSS(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VSTMXCSR
(*args, **kwargs)¶Store MXCSR Register State
Supported forms:
VSTMXCSR(m32) [AVX]
peachpy.x86_64.avx.
VSUBPD
(*args, **kwargs)¶Subtract Packed Double-Precision Floating-Point Values
Supported forms:
VSUBPD(xmm, xmm, xmm/m128) [AVX]
VSUBPD(ymm, ymm, ymm/m256) [AVX]
VSUBPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VSUBPD(zmm{k}{z}, zmm, zmm, {er}) [AVX512F]
VSUBPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VSUBPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VSUBPD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VSUBPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VSUBPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSUBPS
(*args, **kwargs)¶Subtract Packed Single-Precision Floating-Point Values
Supported forms:
VSUBPS(xmm, xmm, xmm/m128) [AVX]
VSUBPS(ymm, ymm, ymm/m256) [AVX]
VSUBPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VSUBPS(zmm{k}{z}, zmm, zmm, {er}) [AVX512F]
VSUBPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VSUBPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VSUBPS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VSUBPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VSUBPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VSUBSD
(*args, **kwargs)¶Subtract Scalar Double-Precision Floating-Point Values
Supported forms:
VSUBSD(xmm, xmm, xmm/m64) [AVX]
VSUBSD(xmm{k}{z}, xmm, xmm/m64) [AVX512F]
VSUBSD(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VSUBSS
(*args, **kwargs)¶Subtract Scalar Single-Precision Floating-Point Values
Supported forms:
VSUBSS(xmm, xmm, xmm/m32) [AVX]
VSUBSS(xmm{k}{z}, xmm, xmm/m32) [AVX512F]
VSUBSS(xmm{k}{z}, xmm, xmm, {er}) [AVX512F]
peachpy.x86_64.avx.
VTESTPD
(*args, **kwargs)¶Packed Double-Precision Floating-Point Bit Test
Supported forms:
VTESTPD(xmm, xmm/m128) [AVX]
VTESTPD(ymm, ymm/m256) [AVX]
peachpy.x86_64.avx.
VTESTPS
(*args, **kwargs)¶Packed Single-Precision Floating-Point Bit Test
Supported forms:
VTESTPS(xmm, xmm/m128) [AVX]
VTESTPS(ymm, ymm/m256) [AVX]
peachpy.x86_64.avx.
VUCOMISD
(*args, **kwargs)¶Unordered Compare Scalar Double-Precision Floating-Point Values and Set EFLAGS
Supported forms:
VUCOMISD(xmm, xmm/m64) [AVX]
VUCOMISD(xmm, xmm/m64) [AVX512F]
VUCOMISD(xmm, xmm, {sae}) [AVX512F]
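VUCOMISD sets ZF, PF and CF the way an unsigned integer compare would (PF flags an unordered result), so it pairs with the unsigned branch family. A sketch, with xmm_a and xmm_b as hypothetical registers and Label/LABEL assumed as in the VPTEST example:

    greater = Label("greater")
    VUCOMISD(xmm_a, xmm_b)  # compare scalar doubles, set EFLAGS
    JA(greater)             # taken when a > b and neither operand is NaN
    # ... a <= b or unordered path ...
    LABEL(greater)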
peachpy.x86_64.avx.
VUCOMISS
(*args, **kwargs)¶Unordered Compare Scalar Single-Precision Floating-Point Values and Set EFLAGS
Supported forms:
VUCOMISS(xmm, xmm/m32) [AVX]
VUCOMISS(xmm, xmm/m32) [AVX512F]
VUCOMISS(xmm, xmm, {sae}) [AVX512F]
peachpy.x86_64.avx.
VUNPCKHPD
(*args, **kwargs)¶Unpack and Interleave High Packed Double-Precision Floating-Point Values
Supported forms:
VUNPCKHPD(xmm, xmm, xmm/m128) [AVX]
VUNPCKHPD(ymm, ymm, ymm/m256) [AVX]
VUNPCKHPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VUNPCKHPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VUNPCKHPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VUNPCKHPD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VUNPCKHPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VUNPCKHPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VUNPCKHPS
(*args, **kwargs)¶Unpack and Interleave High Packed Single-Precision Floating-Point Values
Supported forms:
VUNPCKHPS(xmm, xmm, xmm/m128) [AVX]
VUNPCKHPS(ymm, ymm, ymm/m256) [AVX]
VUNPCKHPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VUNPCKHPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VUNPCKHPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VUNPCKHPS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VUNPCKHPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VUNPCKHPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VUNPCKLPD
(*args, **kwargs)¶Unpack and Interleave Low Packed Double-Precision Floating-Point Values
Supported forms:
VUNPCKLPD(xmm, xmm, xmm/m128) [AVX]
VUNPCKLPD(ymm, ymm, ymm/m256) [AVX]
VUNPCKLPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512F]
VUNPCKLPD(zmm{k}{z}, zmm, zmm) [AVX512F]
VUNPCKLPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512F and AVX512VL]
VUNPCKLPD(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VUNPCKLPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512F and AVX512VL]
VUNPCKLPD(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VUNPCKLPS
(*args, **kwargs)¶Unpack and Interleave Low Packed Single-Precision Floating-Point Values
Supported forms:
VUNPCKLPS(xmm, xmm, xmm/m128) [AVX]
VUNPCKLPS(ymm, ymm, ymm/m256) [AVX]
VUNPCKLPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512F]
VUNPCKLPS(zmm{k}{z}, zmm, zmm) [AVX512F]
VUNPCKLPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512F and AVX512VL]
VUNPCKLPS(xmm{k}{z}, xmm, xmm) [AVX512F and AVX512VL]
VUNPCKLPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512F and AVX512VL]
VUNPCKLPS(ymm{k}{z}, ymm, ymm) [AVX512F and AVX512VL]
peachpy.x86_64.avx.
VXORPD
(*args, **kwargs)¶Bitwise Logical XOR for Double-Precision Floating-Point Values
Supported forms:
VXORPD(xmm, xmm, xmm/m128) [AVX]
VXORPD(ymm, ymm, ymm/m256) [AVX]
VXORPD(zmm{k}{z}, zmm, m512/m64bcst) [AVX512DQ]
VXORPD(zmm{k}{z}, zmm, zmm) [AVX512DQ]
VXORPD(xmm{k}{z}, xmm, m128/m64bcst) [AVX512DQ and AVX512VL]
VXORPD(xmm{k}{z}, xmm, xmm) [AVX512DQ and AVX512VL]
VXORPD(ymm{k}{z}, ymm, m256/m64bcst) [AVX512DQ and AVX512VL]
VXORPD(ymm{k}{z}, ymm, ymm) [AVX512DQ and AVX512VL]
peachpy.x86_64.avx.
VXORPS
(*args, **kwargs)¶Bitwise Logical XOR for Single-Precision Floating-Point Values
Supported forms:
VXORPS(xmm, xmm, xmm/m128) [AVX]
VXORPS(ymm, ymm, ymm/m256) [AVX]
VXORPS(zmm{k}{z}, zmm, m512/m32bcst) [AVX512DQ]
VXORPS(zmm{k}{z}, zmm, zmm) [AVX512DQ]
VXORPS(xmm{k}{z}, xmm, m128/m32bcst) [AVX512DQ and AVX512VL]
VXORPS(xmm{k}{z}, xmm, xmm) [AVX512DQ and AVX512VL]
VXORPS(ymm{k}{z}, ymm, m256/m32bcst) [AVX512DQ and AVX512VL]
VXORPS(ymm{k}{z}, ymm, ymm) [AVX512DQ and AVX512VL]
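Besides the self-XOR zeroing idiom, a common use of the floating-point XORs is flipping signs by XORing with a sign-bit mask; -0.0f has only the sign bit set, so a broadcast of -0.0 serves as the mask. A sketch, with ymm_x hypothetical and a Constant assumed usable directly as the memory operand:

    sign_mask = Constant.float32x8(-0.0)
    VXORPS(ymm_x, ymm_x, sign_mask)  # negate all 8 packed floats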
peachpy.x86_64.avx.
VZEROALL
(*args, **kwargs)¶Zero All YMM Registers
Supported forms:
VZEROALL() [AVX]
peachpy.x86_64.avx.
VZEROUPPER
(*args, **kwargs)¶Zero Upper Bits of YMM Registers
Supported forms:
VZEROUPPER() [AVX]
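Mixing VEX-encoded AVX code with legacy SSE code can incur state-transition penalties on several microarchitectures, so AVX routines conventionally execute VZEROUPPER before returning or before calling code of unknown vintage. A sketch:

    VZEROUPPER()  # clear the upper halves of all YMM registers
    RETURN()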