Mon May 24 14:22:08 2021 UTC ()
qemu: Update to 6.0.0

* Add zstd dependency.

Changelog:
== System emulation ==

=== Incompatible changes ===

Consult the [https://qemu-project.gitlab.io/qemu/system/removed-features.html 'Removed features'] page for details of the suggested replacement functionality.

* The deprecated ''pc-1.0'', ''pc-1.1'', ''pc-1.2'' and ''pc-1.3'' machine types have been removed (they likely could not be used for live migration from old QEMU versions anymore anyway). Use a newer ''pc-i440fx-...'' machine type instead.
* TileGX emulation has been removed without replacement
* The ''change'' QMP command has been removed. Use ''blockdev-change-medium'' or ''change-vnc-password'' instead.
* The ''-show-cursor'' option has been removed. Use ''-display sdl,show-cursor=on'' instead (see the example after this list).
* The ''-realtime'' option has been removed. Use ''-overcommit mem-lock=on|off'' instead.
* The ''-tb-size'' option has been removed. Use ''-accel tcg,tb-size=...'' instead.
* The configure script --enable/disable-git-update args have been replaced with --with-git-submodules
* The ''-usbdevice audio'' option has been removed.  Use ''-device usb-audio'' instead.
* The ''-usbdevice ccid'' option has been removed with no replacement
* The ''-vnc'' parameter ''acl'' option, and ''acl_*'' monitor commands have been removed.
* The ''pretty'' option is no longer accepted when used with the human monitor
* The ''query-events'' QMP command has been removed
* The ''migrate_set_speed'', ''migrate_set_downtime'' and ''migrate-set-cache-size'' QMP/HMP commands have been removed.
* The ''query-cpus'' QMP command has been removed
* The ''arch'' field in the ''query-cpus-fast'' command has been removed
* The ''-chardev'' parameter ''wait'' option is no longer accepted for socket clients
* The ''ide-drive'' device type has been removed
* The ''scsi-disk'' device type has been removed
* The ''encryption_key_missing'' field has been removed from block device info data
* The ''status'' field has been removed from dirty bitmap info
* The ''dirty-bitmaps'' field has been removed from the ''BlockInfo'' struct
* The ''file'' block driver no longer permits use with block devices
* The use of ''-global'' to set floppy controllers is removed. Use ''-device floppy,...'' instead.
* The ''-drive'' option must now use ''if=none'' for drives the onboard device does not pick up.
* The ''object-add'' QMP command member ''props'' has been removed.  Its contents may be used with less nesting instead.
* The mips ''fulong2e'' machine alias has been removed. Use ''fuloong2e'' instead.
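
For illustration, here is a hedged sketch of converting a pre-6.0 command line that used a few of the removed options; only the old options and their replacements listed above are implied by this changelog, and the guest disk image is a placeholder:

 # accepted by QEMU 5.x, rejected by 6.0:
 qemu-system-x86_64 -show-cursor -usbdevice audio -tb-size 64 disk.img

 # equivalent QEMU 6.0 invocation:
 qemu-system-x86_64 -display sdl,show-cursor=on -device usb-audio \
     -accel tcg,tb-size=64 disk.img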

=== New deprecated options and features ===

Consult the [https://www.qemu.org/docs/master/system/deprecated.html "Deprecated Features"] chapter of the QEMU System Emulation User's Guide for further details of the deprecations and their suggested replacements.

* The --enable-fips option has been deprecated. Consumers wishing to have FIPS compliance must build QEMU with libgcrypt and gnutls, not nettle.
* The ''-writeconfig'' option has been deprecated. The functionality of ''-writeconfig'' is limited and the code does not even try to detect cases where it prints incorrect syntax (for example if values have a quote in them). It will be removed without replacement.
* Boolean parameters such as ''share=on'' / ''share=off'' could be written in short form as ''share'' and ''noshare''.  This is now deprecated and will cause a warning.
* The ''-chardev'' backend aliases ''tty'' and ''parport'' are deprecated and will be removed. Use the actual backend names ''serial'' and ''parallel'' instead.
* The ''delay'' option for socket character devices is now deprecated.
* Userspace local APIC with KVM (''-M kernel-irqchip=off'')
* hexadecimal sizes with scaling multipliers (e.g. ''0x20M'')
* ''-spice password=string'' is now deprecated. Use the ''password-secret'' option instead (see the example after this list).
* ''opened'' property of ''rng-*'' objects
* ''loaded'' property of ''secret'' and ''secret_keyring''
* MIPS ''Trap-and-Emulate'' KVM support
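
As a sketch of the ''-spice'' password change mentioned above (the secret id and value are placeholders, and this assumes ''password-secret'' takes the id of a ''secret'' object, as other QEMU *-secret options do):

 # deprecated:
 qemu-system-x86_64 ... -spice port=5900,password=mysecret

 # preferred: supply the password via a secret object
 qemu-system-x86_64 ... \
     -object secret,id=spice0,data=mysecret \
     -spice port=5900,password-secret=spice0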

=== 68k ===

* Add a new machine, virt, based on virtio devices

=== Alpha ===

=== Arm ===

* QEMU now supports emulation of the Arm-v8.1M architecture and the Cortex-M55 CPU
* Emulation of the ARMv8.4-TTST extension is now supported
* Emulation of the ARMv8.4-SEL2 extension is now supported
* Emulation of the FEAT_SSBS extension is now supported
* Emulation of the PAuth extension now supports an optional IMPDEF pauth algorithm which is not cryptographically secure but is much faster to compute
* Emulation of the ARMv8.4-DIT extension is now supported. (Note that QEMU's implementation does not in fact provide any timing guarantees; emulation of the extension is purely to support guests which query its presence and work with the PSTATE.DIT bit.)
* Emulation of the ARMv8.5-MemTag extension is now supported for linux-user. (It was already supported for system emulation.)
* xlnx-zynqmp boards now support the Xilinx ZynqMP CAN controllers
* the sbsa-ref board now supports Cortex-A53/57/72 cpus
* the xlnx-versal board now has USB support, and a model of the XRAMs and the XRAM controller
* the sabrelite board emulation has been improved and it can now run U-Boot
* the npcm7xx boards support more devices: ADC, PWM, SMBus, EMC, MFT
* the gdbstub's representation of SVE registers allows GDB to properly handle aliasing
* the 'virt' board now provides a mechanism for secure (EL3) firmware to power down or reset the system
* documentation for vexpress/versatile has been updated with example kernel configuration/command lines
* A new board model mps3-an524 (using Cortex-M33) is now implemented
* A new board model mps3-an547 (using Cortex-M55) is now implemented

=== AVR ===

=== Hexagon ===

* QEMU can now emulate Qualcomm's Hexagon DSP units.

=== HPPA ===

=== Microblaze ===

=== MIPS ===
* Loongson-3 "virt" machine added

=== Nios2 ===

=== OpenRISC ===

=== PowerPC ===
* Deprecated 'compat' property of server class POWER cpus removed (use the 'max-cpu-compat' machine option instead)
* You can now explicitly choose 'kvm_type=auto' rather than only being able to do that by not setting it at all.
* powernv machine type now defaults to 1GiB of RAM
* powernv now allows an external BMC
* pseries will now send MEM_UNPLUG_ERROR QAPI message in cases where it can detect that a memory unplug has failed
* pseries will now allow cpu unplug requests to be retried, even if the guest hasn't responded to them yet.
  * This will re-signal the guest, which may allow an unplug to complete that the guest previously rejected.

=== Renesas RX ===

=== Renesas SH ===

=== RISC-V ===
* Improve the sifive_u DTB generation
* Add QSPI NOR flash to Microchip PFSoC
* Improvements to the Microchip PFSoC for better support of the SDK
* A range of fixes to the Hypervisor extension
* Fix some mstatus mask defines
* Ibex PLIC and UART improvements
* OpenTitan memory layout update (Breaking change)
* Initial steps towards support for 32-bit CPUs on 64-bit builds
* Automate GDB XML generation (should fix GDB E14 errors)
* SiFive OTP: handle OTP access failures
* Correctly generate a PMP failure when no PMP entry is configured
* Fixes to PMP region checking
* Fix 32-bit Linux boot problems with DTB placement
* OpenSBI upgraded to v0.9
* Support the QMP dump-guest-memory command
* Add support for the SiFive SPI controller (sifive_u)
* Initial RISC-V system documentation
* Support for high PCIe memory in the virt machine
* Fixes to the vector extensions CSR accesses
* ramfb support in the virt machine

=== s390 ===
* Linux kernels built with clang-11 and clang-12 now work correctly under tcg

=== SPARC ===

=== TileGX ===

* TileGX has been removed without replacement. TileGX was only implemented in linux-user mode, but support for this CPU was removed from the upstream Linux kernel in 2018, and it has also been dropped from glibc, so there is no new Linux development taking place with this architecture, rendering the linux-user mode emulation rather useless. For running older binaries, users can simply use older versions of QEMU.

=== Tricore ===
* Added Triboard with tc27x SoC

=== x86 ===
* TCG can emulate the PKS feature (protection keys for supervisor pages).
* Intel PT can now be exposed to KVM guests when <code>CPUID.(EAX=14,ECX=0).ECX[LIP]</code> (bit 31) is 1. Previous versions only supported Intel PT when LIP=0
* New <code>sev-inject-launch-secret</code> QMP command
* The WHPX accelerator supports accelerated APIC ("-accel whpx,kernel-irqchip=on")
* The microvm machine type got a second (optional) ioapic for the virtio-mmio irq lines, which in turn allows 24 (instead of 8) virtio-mmio devices.
* Support for running SEV-ES encrypted guests.

=== Xtensa ===

=== Device emulation and assignment ===

==== ACPI ====
* new ''-machine'' options ''oem-id'' and ''oem-table-id'' to allow setting custom values for ''OEM ID'' and ''OEM table ID'' ACPI table fields
* In QEMU 5.1, the PCI root UID was changed from 1 to 0 for all x86 machine types; this caused issues in Windows guests, with virtio devices being re-enumerated as new devices. QEMU 6.0 fixes this by reverting the UID to 1 for 5.1 and older machine types (see commit 0a343a5add75 for details). For 5.2 and later machine types it might be necessary to reconfigure/reinstall the Windows VM if the disk image in use was created with a 5.1 or older machine type.
* Support for a user-provided PCI NIC index on the ''pc'' machine type via the new ''acpi-index'' PCI device option. For Linux guests, this enables the ''onboard'' naming scheme ''enoX'', where X is set with the ''acpi-index'' option, making NIC naming independent of the PCI slot the card is plugged into. It works with both cold- and hot-plugged NICs, as long as the PCI bus in use is managed by ACPI PCI hotplug (which is enabled by default for the PCI root bus and for bridges present at boot time on the latest ''pc'' machine type).
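
A hedged command-line sketch combining the two options above; the OEM strings, the user-mode network backend and the virtio NIC model are arbitrary placeholders:

 qemu-system-x86_64 -machine pc,oem-id=MYOEM,oem-table-id=MYTABLE1 \
     -netdev user,id=net0 \
     -device virtio-net-pci,netdev=net0,acpi-index=1

With a Linux guest using the ''onboard'' naming scheme, the NIC should then appear as ''eno1'' regardless of which PCI slot it ends up in.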

==== Audio ====

==== Block devices ====
* virtio-blk reports <tt>--device virtio-blk-pci,discard_granularity=</tt> in the virtio-blk <tt>discard_sector_alignment</tt> configuration space field so that guests with new machine types can take advantage of this information. Previously virtio-blk devices reported <tt>--device virtio-blk-pci,logical_block_size=</tt> instead.
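
A minimal sketch of wiring this up; the image file and the 4096-byte granularity are placeholders:

 qemu-system-x86_64 ... \
     -drive if=none,id=d0,file=disk.img,format=raw \
     -device virtio-blk-pci,drive=d0,discard_granularity=4096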

==== Graphics ====

==== Input devices ====

==== IPMI ====

==== Multi-process QEMU ====

* The experimental <code>-machine x-remote</code> and <code>-device x-pci-proxy-dev</code> options have been added to support out-of-process device emulation. Currently only the <code>lsi53c895</code> SCSI device can be emulated in a separate process. Please see [https://qemu.readthedocs.io/en/latest/system/multi-process.html the documentation] and [[Features/MultiProcessQEMU]] for details on this experimental feature, which is still subject to change.

==== Network devices ====

==== NVDIMM ====

* nvdimm devices will check that <code>-device nvdimm,unarmed=on</code>  option is used when using <code>-object memory-backend-file,readonly=on</code>
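
A sketch of a read-only backing file paired with an unarmed NVDIMM; the two options named above are the relevant parts, while the machine, memory-slot and size values are only illustrative:

 qemu-system-x86_64 -machine pc,nvdimm=on -m 2G,slots=2,maxmem=8G \
     -object memory-backend-file,id=mem1,mem-path=nvdimm.img,size=1G,readonly=on \
     -device nvdimm,id=nv1,memdev=mem1,unarmed=on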

==== NVMe ====

===== Emulated NVMe Controller =====

* ''Highlights''
** The implemented spec version has been bumped to v1.4
** Experimental support for Zoned Namespaces (TP 4053) has been added
** Experimental support for NVM Subsystems, multipath I/O and namespace sharing
** Experimental support for Metadata and End-to-End Data Protection
* ''New commands''
** Dataset Management
** Compare
** Simple Copy (TP 4065)
** Format NVM
** Verify
* ''Other new features''
** Support for reporting the Deallocated or Unwritten Logical Block Error (DULBE)
** Namespace UUID reported as a Namespace Descriptor
** Support for Namespace Types (TP 4056)
** Support for triggering a SMART Critical Warning through QMP
** Controller Memory Buffer support has been enhanced for NVMe v1.4 (to revert to v1.3 behavior, use the new <code>legacy-cmb</code> controller parameter)
** Persistent Memory Region RDS/WDS support
* ''New log pages''
** Commands Supported and Effects

==== PCI/PCIe ====

* The 'pvpanic-pci' device is a PCI-device version of the 'pvpanic' ISA device, which can be used on systems with only PCI and no ISA bus as a mechanism for the guest to inform QEMU that it has panicked.
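
For example, on a board without an ISA bus (the aarch64 ''virt'' machine here is just one such case, chosen for illustration):

 qemu-system-aarch64 -M virt -cpu cortex-a53 ... -device pvpanic-pci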

==== SCSI ====
* Rework of the ESP SCSI emulation to allow mixed FIFO/(P)DMA commands along with various other fixes

==== SD card ====

==== SMBIOS ====

==== TPM ====

==== USB ====

* Support for writing USB traffic to packet capture files for inspection with Wireshark has been added.  Use the new pcap=<file> property, available on all USB devices, to enable this.
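
A small sketch; the device choice and capture file name are arbitrary:

 qemu-system-x86_64 ... -usb -device usb-tablet,pcap=usb-tablet.pcap

The resulting file can be opened directly in Wireshark.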

==== VFIO ====

==== virtio ====

==== Xen ====

* A new [https://qemu.readthedocs.io/en/latest/system/guest-loader.html guest loader] which allows testing of Xen-like hypervisors booting kernels without messing around with firmware/bootloaders
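
A rough sketch based on the linked documentation; the load address, image names and kernel command line are placeholders, and the surrounding ''virt'' machine options are only one way to set this up:

 qemu-system-aarch64 -M virt,virtualization=on -cpu cortex-a57 -m 4G \
     -kernel xen-hypervisor \
     -device guest-loader,addr=0x47000000,kernel=Image,bootargs="console=hvc0"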

==== fw_cfg ====

==== 9pfs ====

==== virtiofs ====
* Security fix for CVE-2020-35517 - prevent opening of special files
* Security fix for CVE-2021-20263 - when used with xattrmap, drop remapped security.capability
* Performance improvements with new guest kernel feature FUSE_KILLPRIV_V2

==== Semihosting ====
* Added support for RiscV (ARM style semihosting)

=== Character devices ===

=== Crypto subsystem ===

==== experimental qmp interface ====

=== GUI ===
* vnc: support for cursors with alpha channel has been added.
* vnc: support for extended desktop resize has been added. With virtio-vga, the guest display resolution can follow the size of the VNC client's window.

=== TCG Plugins ===

* New API for querying details about HW access
* Bug fix to avoid double counting some instructions when using -icount

=== Host support ===

=== Memory backends ===

* hostmem-file: added a ''readonly=on|off'' option

=== Block device backends and tools ===
* The NBD server now reports sparse regions of raw images via NBD_STATE_HOLE
* ''qemu-img'' gained more accurate parsing for size values.  Previously, only 53 significant digits were supported, and large sizes could end up with inadvertent rounding; now the parser supports a full 64 bits of precision.
* The ''object-add'' QMP command is now available in qemu-storage-daemon.
* qemu-storage-daemon supports a ''--pidfile'' option now
* The ''parallels'' image format driver has gained support for dirty bitmaps in read-only mode

=== Tracing ===

=== Miscellaneous ===
* The command line option ''-object'' (or ''--object'') now accepts JSON input in all binaries (system emulators and tools). In tools, it also supports non-scalar options using the dotted key syntax known from options like ''--blockdev''. See the examples after this list.
* The QMP command ''object-add'' is now covered by the QAPI schema and clients can use schema introspection to detect object types and options supported by the given QEMU binary.
* A new command line option ''-action'', with suboptions ''panic'', ''shutdown'', ''reboot'' and ''watchdog''.  ''-action'' subsumes the pre-existing options ''-no-shutdown'' (''-action panic=pause,shutdown=pause''), ''-no-reboot'' (''-action reboot=shutdown'') and ''-watchdog-action''; plus, it allows the user to choose whether guest panic should pause the guest (''-action panic=pause''), shut it down (''-action panic=poweroff'', the default) or be ignored (''-action panic=none'').
* A new generic machine option ''confidential-guest-support'' was added to (partially) unify configuration for AMD SEV memory encryption, POWER PEF and s390 Protected Virtualization, plus future methods of protecting a guest from eavesdropping by a compromised hypervisor.
* A new [https://qemu.readthedocs.io/en/latest/system/guest-loader.html guest loader] which allows testing of Xen-like hypervisors booting kernels without messing around with firmware/bootloaders (see also the Xen section above).
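
Hedged sketches of the ''-object'' and ''-action'' items above; the secret value is a placeholder:

 # -object now also accepts JSON:
 qemu-system-x86_64 ... -object '{"qom-type":"secret","id":"sec0","data":"hunter2"}'

 # the old -no-reboot expressed with the new -action option:
 qemu-system-x86_64 ... -action reboot=shutdown

 # pause instead of powering off when the guest panics:
 qemu-system-x86_64 ... -action panic=pause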

== User-mode emulation ==
=== binfmt_misc ===

Added support for the 'P' flag (preserve-argv[0]).

With kernel v5.12, QEMU can detect whether it was started with the preserve-argv[0] flag and adjusts the list of arguments accordingly.

=== Hexagon ===

Added support for the Qualcomm Hexagon processor, in linux-user mode only.

For more information, see [https://www.youtube.com/watch?v=3EpnTYBOXCI our presentation from the 2019 KVM Forum]
or the [https://github.com/qemu/qemu/blob/master/target/hexagon/README README] file.
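
Assuming a (statically linked) Hexagon test binary is at hand, running it follows the usual linux-user pattern; the binary name below follows the qemu-<arch> convention and the program name is a placeholder:

 qemu-hexagon ./hello-hexagon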

== TCG ==

* Added support for Apple Silicon hosts (macOS)


(ryoon)
diff -r1.278 -r1.279 pkgsrc/emulators/qemu/Makefile
diff -r1.74 -r1.75 pkgsrc/emulators/qemu/PLIST
diff -r1.177 -r1.178 pkgsrc/emulators/qemu/distinfo
diff -r0 -r1.1 pkgsrc/emulators/qemu/patches/patch-accel_Kconfig
diff -r0 -r1.1 pkgsrc/emulators/qemu/patches/patch-nvmm-accel-ops.c
diff -r0 -r1.1 pkgsrc/emulators/qemu/patches/patch-nvmm-accel-ops.h
diff -r0 -r1.1 pkgsrc/emulators/qemu/patches/patch-nvmm-all.c
diff -r0 -r1.1 pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_meson.build
diff -r0 -r1.1 pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-accel-ops.c
diff -r0 -r1.1 pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-accel-ops.h
diff -r0 -r1.1 pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-all.c
diff -r1.3 -r0 pkgsrc/emulators/qemu/patches/patch-accel_stubs_nvmm-stub.c
diff -r1.3 -r0 pkgsrc/emulators/qemu/patches/patch-target_i386_helper.c
diff -r1.31 -r1.32 pkgsrc/emulators/qemu/patches/patch-configure
diff -r1.1 -r0 pkgsrc/emulators/qemu/patches/patch-contrib_ivshmem-client_ivshmem-client.c
diff -r1.1 -r0 pkgsrc/emulators/qemu/patches/patch-contrib_ivshmem-server_ivshmem-server.c
diff -r1.1 -r0 pkgsrc/emulators/qemu/patches/patch-include_sysemu_hw_accel.h
diff -r1.1 -r0 pkgsrc/emulators/qemu/patches/patch-target_i386_kvm-stub.c
diff -r1.1 -r0 pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_cpus.c
diff -r1.1 -r0 pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_cpus.h
diff -r1.1 -r1.2 pkgsrc/emulators/qemu/patches/patch-hw_mips_meson.build
diff -r1.1 -r1.2 pkgsrc/emulators/qemu/patches/patch-meson__options.txt
diff -r1.1 -r1.2 pkgsrc/emulators/qemu/patches/patch-target_i386_meson.build
diff -r0 -r1.4 pkgsrc/emulators/qemu/patches/patch-include_sysemu_hw__accel.h
diff -r1.3 -r1.4 pkgsrc/emulators/qemu/patches/patch-include_sysemu_nvmm.h
diff -r1.5 -r1.6 pkgsrc/emulators/qemu/patches/patch-meson.build
diff -r1.4 -r1.5 pkgsrc/emulators/qemu/patches/patch-qemu-options.hx
diff -r1.2 -r0 pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_all.c

cvs diff -r1.278 -r1.279 pkgsrc/emulators/qemu/Makefile

--- pkgsrc/emulators/qemu/Makefile 2021/05/23 13:53:10 1.278
+++ pkgsrc/emulators/qemu/Makefile 2021/05/24 14:22:08 1.279
@@ -1,17 +1,16 @@
1# $NetBSD: Makefile,v 1.278 2021/05/23 13:53:10 thorpej Exp $ 1# $NetBSD: Makefile,v 1.279 2021/05/24 14:22:08 ryoon Exp $
2 2
3DISTNAME= qemu-5.2.0 3DISTNAME= qemu-6.0.0
4PKGREVISION= 8 
5CATEGORIES= emulators 4CATEGORIES= emulators
6MASTER_SITES= https://download.qemu.org/ 5MASTER_SITES= https://download.qemu.org/
7EXTRACT_SUFX= .tar.xz 6EXTRACT_SUFX= .tar.xz
8 7
9MAINTAINER= pkgsrc-users@NetBSD.org 8MAINTAINER= pkgsrc-users@NetBSD.org
10HOMEPAGE= http://www.qemu-project.org/ 9HOMEPAGE= http://www.qemu-project.org/
11COMMENT= CPU emulator using dynamic translation 10COMMENT= CPU emulator using dynamic translation
12LICENSE= gnu-gpl-v2 AND gnu-lgpl-v2.1 AND mit AND modified-bsd 11LICENSE= gnu-gpl-v2 AND gnu-lgpl-v2.1 AND mit AND modified-bsd
13 12
14TOOL_DEPENDS+= ninja-build-[0-9]*:../../devel/ninja-build 13TOOL_DEPENDS+= ninja-build-[0-9]*:../../devel/ninja-build
15 14
16USE_CURSES= resize_term wide 15USE_CURSES= resize_term wide
17USE_LANGUAGES+= c c++ 16USE_LANGUAGES+= c c++
@@ -172,26 +171,27 @@ do-install:
172post-build: 171post-build:
173 ${TOUCH} ${WRKSRC}/config-host.mak 172 ${TOUCH} ${WRKSRC}/config-host.mak
174 173
175post-install: 174post-install:
176 ${INSTALL_DATA} ${FILESDIR}/Makefile.multinode-NetBSD \ 175 ${INSTALL_DATA} ${FILESDIR}/Makefile.multinode-NetBSD \
177 ${DESTDIR}${PREFIX}/share/doc/qemu/ 176 ${DESTDIR}${PREFIX}/share/doc/qemu/
178 ${RM} -f ${DESTDIR}${PREFIX}/share/doc/qemu/interop/.buildinfo 177 ${RM} -f ${DESTDIR}${PREFIX}/share/doc/qemu/interop/.buildinfo
179 ${RM} -f ${DESTDIR}${PREFIX}/share/doc/qemu/specs/.buildinfo 178 ${RM} -f ${DESTDIR}${PREFIX}/share/doc/qemu/specs/.buildinfo
180 ${RM} -f ${WRKDIR}/PLIST.STATIC 179 ${RM} -f ${WRKDIR}/PLIST.STATIC
181 cd ${DESTDIR}${PREFIX} && \ 180 cd ${DESTDIR}${PREFIX} && \
182 ${FIND} share/doc/qemu -path '*/_static/*' -type f -print > ${WRKDIR}/PLIST.STATIC 181 ${FIND} share/doc/qemu -path '*/_static/*' -type f -print > ${WRKDIR}/PLIST.STATIC
183 182
184.include "../../archivers/lzo/buildlink3.mk" 183.include "../../archivers/lzo/buildlink3.mk"
 184.include "../../archivers/zstd/buildlink3.mk"
185.include "../../devel/glib2/buildlink3.mk" 185.include "../../devel/glib2/buildlink3.mk"
186.include "../../devel/jemalloc/buildlink3.mk" 186.include "../../devel/jemalloc/buildlink3.mk"
187.include "../../devel/snappy/buildlink3.mk" 187.include "../../devel/snappy/buildlink3.mk"
188.include "../../devel/zlib/buildlink3.mk" 188.include "../../devel/zlib/buildlink3.mk"
189.include "../../graphics/hicolor-icon-theme/buildlink3.mk" 189.include "../../graphics/hicolor-icon-theme/buildlink3.mk"
190.include "../../graphics/png/buildlink3.mk" 190.include "../../graphics/png/buildlink3.mk"
191.include "../../lang/python/tool.mk" 191.include "../../lang/python/tool.mk"
192.include "../../lang/python/versioned_dependencies.mk" 192.include "../../lang/python/versioned_dependencies.mk"
193.include "../../security/libgcrypt/buildlink3.mk" 193.include "../../security/libgcrypt/buildlink3.mk"
194.include "../../www/curl/buildlink3.mk" 194.include "../../www/curl/buildlink3.mk"
195.include "../../x11/pixman/buildlink3.mk" 195.include "../../x11/pixman/buildlink3.mk"
196.include "../../mk/curses.buildlink3.mk" 196.include "../../mk/curses.buildlink3.mk"
197.include "../../mk/jpeg.buildlink3.mk" 197.include "../../mk/jpeg.buildlink3.mk"

cvs diff -r1.74 -r1.75 pkgsrc/emulators/qemu/PLIST

--- pkgsrc/emulators/qemu/PLIST 2021/04/08 13:14:51 1.74
+++ pkgsrc/emulators/qemu/PLIST 2021/05/24 14:22:08 1.75
@@ -1,14 +1,14 @@
1@comment $NetBSD: PLIST,v 1.74 2021/04/08 13:14:51 nia Exp $ 1@comment $NetBSD: PLIST,v 1.75 2021/05/24 14:22:08 ryoon Exp $
2bin/elf2dmp 2bin/elf2dmp
3${PLIST.aarch64}bin/qemu-aarch64 3${PLIST.aarch64}bin/qemu-aarch64
4${PLIST.aarch64_be}bin/qemu-aarch64_be 4${PLIST.aarch64_be}bin/qemu-aarch64_be
5${PLIST.alpha}bin/qemu-alpha 5${PLIST.alpha}bin/qemu-alpha
6${PLIST.arm}bin/qemu-arm 6${PLIST.arm}bin/qemu-arm
7${PLIST.armeb}bin/qemu-armeb 7${PLIST.armeb}bin/qemu-armeb
8${PLIST.cris}bin/qemu-cris 8${PLIST.cris}bin/qemu-cris
9bin/qemu-edid 9bin/qemu-edid
10bin/qemu-ga 10bin/qemu-ga
11${PLIST.hppa}bin/qemu-hppa 11${PLIST.hppa}bin/qemu-hppa
12${PLIST.i386}bin/qemu-i386 12${PLIST.i386}bin/qemu-i386
13bin/qemu-img 13bin/qemu-img
14bin/qemu-io 14bin/qemu-io
@@ -65,158 +65,188 @@ bin/qemu-system-sh4
65bin/qemu-system-sh4eb 65bin/qemu-system-sh4eb
66bin/qemu-system-sparc 66bin/qemu-system-sparc
67bin/qemu-system-sparc64 67bin/qemu-system-sparc64
68bin/qemu-system-tricore 68bin/qemu-system-tricore
69bin/qemu-system-x86_64 69bin/qemu-system-x86_64
70bin/qemu-system-xtensa 70bin/qemu-system-xtensa
71bin/qemu-system-xtensaeb 71bin/qemu-system-xtensaeb
72${PLIST.x86_64}bin/qemu-x86_64 72${PLIST.x86_64}bin/qemu-x86_64
73${PLIST.xtensa}bin/qemu-xtensa 73${PLIST.xtensa}bin/qemu-xtensa
74${PLIST.xtensaeb}bin/qemu-xtensaeb 74${PLIST.xtensaeb}bin/qemu-xtensaeb
75${PLIST.bridge-helper}libexec/qemu-bridge-helper 75${PLIST.bridge-helper}libexec/qemu-bridge-helper
76${PLIST.virtfs-proxy-helper}libexec/virtfs-proxy-helper 76${PLIST.virtfs-proxy-helper}libexec/virtfs-proxy-helper
77man/man1/qemu-img.1 77man/man1/qemu-img.1
 78man/man1/qemu-storage-daemon.1
78man/man1/qemu.1 79man/man1/qemu.1
79${PLIST.virtfs-proxy-helper}man/man1/virtfs-proxy-helper.1 80${PLIST.virtfs-proxy-helper}man/man1/virtfs-proxy-helper.1
80man/man7/qemu-block-drivers.7 81man/man7/qemu-block-drivers.7
81man/man7/qemu-cpu-models.7 82man/man7/qemu-cpu-models.7
82man/man7/qemu-ga-ref.7 83man/man7/qemu-ga-ref.7
83man/man7/qemu-qmp-ref.7 84man/man7/qemu-qmp-ref.7
 85man/man7/qemu-storage-daemon-qmp-ref.7
84man/man8/qemu-ga.8 86man/man8/qemu-ga.8
85man/man8/qemu-nbd.8 87man/man8/qemu-nbd.8
86man/man8/qemu-pr-helper.8 88man/man8/qemu-pr-helper.8
87share/applications/qemu.desktop 89share/applications/qemu.desktop
 90share/doc/qemu/.buildinfo
88share/doc/qemu/Makefile.multinode-NetBSD 91share/doc/qemu/Makefile.multinode-NetBSD
 92share/doc/qemu/devel/atomics.html
 93share/doc/qemu/devel/bitops.html
 94share/doc/qemu/devel/block-coroutine-wrapper.html
 95share/doc/qemu/devel/build-system.html
 96share/doc/qemu/devel/clocks.html
 97share/doc/qemu/devel/code-of-conduct.html
 98share/doc/qemu/devel/conflict-resolution.html
 99share/doc/qemu/devel/control-flow-integrity.html
 100share/doc/qemu/devel/decodetree.html
 101share/doc/qemu/devel/fuzzing.html
 102share/doc/qemu/devel/index.html
 103share/doc/qemu/devel/kconfig.html
 104share/doc/qemu/devel/loads-stores.html
 105share/doc/qemu/devel/memory.html
 106share/doc/qemu/devel/migration.html
 107share/doc/qemu/devel/multi-process.html
 108share/doc/qemu/devel/multi-thread-tcg.html
 109share/doc/qemu/devel/qgraph.html
 110share/doc/qemu/devel/qom.html
 111share/doc/qemu/devel/qtest.html
 112share/doc/qemu/devel/reset.html
 113share/doc/qemu/devel/s390-dasd-ipl.html
 114share/doc/qemu/devel/secure-coding-practices.html
 115share/doc/qemu/devel/stable-process.html
 116share/doc/qemu/devel/style.html
 117share/doc/qemu/devel/tcg-icount.html
 118share/doc/qemu/devel/tcg-plugins.html
 119share/doc/qemu/devel/tcg.html
 120share/doc/qemu/devel/testing.html
 121share/doc/qemu/devel/tracing.html
 122share/doc/qemu/genindex.html
89share/doc/qemu/index.html 123share/doc/qemu/index.html
90share/doc/qemu/interop/bitmaps.html 124share/doc/qemu/interop/bitmaps.html
91share/doc/qemu/interop/dbus-vmstate.html 125share/doc/qemu/interop/dbus-vmstate.html
92share/doc/qemu/interop/dbus.html 126share/doc/qemu/interop/dbus.html
93share/doc/qemu/interop/genindex.html 
94share/doc/qemu/interop/index.html 127share/doc/qemu/interop/index.html
95share/doc/qemu/interop/live-block-operations.html 128share/doc/qemu/interop/live-block-operations.html
96share/doc/qemu/interop/objects.inv 
97share/doc/qemu/interop/pr-helper.html 129share/doc/qemu/interop/pr-helper.html
98share/doc/qemu/interop/qemu-ga-ref.html 130share/doc/qemu/interop/qemu-ga-ref.html
99share/doc/qemu/interop/qemu-ga.html 131share/doc/qemu/interop/qemu-ga.html
100share/doc/qemu/interop/qemu-qmp-ref.html 132share/doc/qemu/interop/qemu-qmp-ref.html
101share/doc/qemu/interop/search.html 133share/doc/qemu/interop/qemu-storage-daemon-qmp-ref.html
102share/doc/qemu/interop/searchindex.js 
103share/doc/qemu/interop/vhost-user-gpu.html 134share/doc/qemu/interop/vhost-user-gpu.html
104share/doc/qemu/interop/vhost-user.html 135share/doc/qemu/interop/vhost-user.html
105share/doc/qemu/interop/vhost-vdpa.html 136share/doc/qemu/interop/vhost-vdpa.html
 137share/doc/qemu/objects.inv
 138share/doc/qemu/search.html
 139share/doc/qemu/searchindex.js
106share/doc/qemu/specs/acpi_hest_ghes.html 140share/doc/qemu/specs/acpi_hest_ghes.html
107share/doc/qemu/specs/acpi_hw_reduced_hotplug.html 141share/doc/qemu/specs/acpi_hw_reduced_hotplug.html
108share/doc/qemu/specs/genindex.html 
109share/doc/qemu/specs/index.html 142share/doc/qemu/specs/index.html
110share/doc/qemu/specs/objects.inv 
111share/doc/qemu/specs/ppc-spapr-numa.html 143share/doc/qemu/specs/ppc-spapr-numa.html
112share/doc/qemu/specs/ppc-spapr-xive.html 144share/doc/qemu/specs/ppc-spapr-xive.html
113share/doc/qemu/specs/ppc-xive.html 145share/doc/qemu/specs/ppc-xive.html
114share/doc/qemu/specs/search.html 
115share/doc/qemu/specs/searchindex.js 
116share/doc/qemu/specs/tpm.html 146share/doc/qemu/specs/tpm.html
117share/doc/qemu/system/.buildinfo 
118share/doc/qemu/system/arm/aspeed.html 147share/doc/qemu/system/arm/aspeed.html
119share/doc/qemu/system/arm/collie.html 148share/doc/qemu/system/arm/collie.html
120share/doc/qemu/system/arm/cpu-features.html 149share/doc/qemu/system/arm/cpu-features.html
121share/doc/qemu/system/arm/digic.html 150share/doc/qemu/system/arm/digic.html
122share/doc/qemu/system/arm/gumstix.html 151share/doc/qemu/system/arm/gumstix.html
123share/doc/qemu/system/arm/integratorcp.html 152share/doc/qemu/system/arm/integratorcp.html
124share/doc/qemu/system/arm/mps2.html 153share/doc/qemu/system/arm/mps2.html
125share/doc/qemu/system/arm/musca.html 154share/doc/qemu/system/arm/musca.html
126share/doc/qemu/system/arm/musicpal.html 155share/doc/qemu/system/arm/musicpal.html
127share/doc/qemu/system/arm/nseries.html 156share/doc/qemu/system/arm/nseries.html
128share/doc/qemu/system/arm/nuvoton.html 157share/doc/qemu/system/arm/nuvoton.html
129share/doc/qemu/system/arm/orangepi.html 158share/doc/qemu/system/arm/orangepi.html
130share/doc/qemu/system/arm/palm.html 159share/doc/qemu/system/arm/palm.html
131share/doc/qemu/system/arm/raspi.html 160share/doc/qemu/system/arm/raspi.html
132share/doc/qemu/system/arm/realview.html 161share/doc/qemu/system/arm/realview.html
 162share/doc/qemu/system/arm/sabrelite.html
133share/doc/qemu/system/arm/sbsa.html 163share/doc/qemu/system/arm/sbsa.html
134share/doc/qemu/system/arm/stellaris.html 164share/doc/qemu/system/arm/stellaris.html
135share/doc/qemu/system/arm/sx1.html 165share/doc/qemu/system/arm/sx1.html
136share/doc/qemu/system/arm/versatile.html 166share/doc/qemu/system/arm/versatile.html
137share/doc/qemu/system/arm/vexpress.html 167share/doc/qemu/system/arm/vexpress.html
138share/doc/qemu/system/arm/virt.html 168share/doc/qemu/system/arm/virt.html
139share/doc/qemu/system/arm/xlnx-versal-virt.html 169share/doc/qemu/system/arm/xlnx-versal-virt.html
140share/doc/qemu/system/arm/xscale.html 170share/doc/qemu/system/arm/xscale.html
141share/doc/qemu/system/build-platforms.html 171share/doc/qemu/system/build-platforms.html
142share/doc/qemu/system/cpu-hotplug.html 172share/doc/qemu/system/cpu-hotplug.html
143share/doc/qemu/system/deprecated.html 173share/doc/qemu/system/deprecated.html
144share/doc/qemu/system/gdb.html 174share/doc/qemu/system/gdb.html
145share/doc/qemu/system/genindex.html 175share/doc/qemu/system/generic-loader.html
 176share/doc/qemu/system/guest-loader.html
146share/doc/qemu/system/i386/microvm.html 177share/doc/qemu/system/i386/microvm.html
147share/doc/qemu/system/i386/pc.html 178share/doc/qemu/system/i386/pc.html
148share/doc/qemu/system/images.html 179share/doc/qemu/system/images.html
149share/doc/qemu/system/index.html 180share/doc/qemu/system/index.html
150share/doc/qemu/system/invocation.html 181share/doc/qemu/system/invocation.html
151share/doc/qemu/system/ivshmem.html 182share/doc/qemu/system/ivshmem.html
152share/doc/qemu/system/keys.html 183share/doc/qemu/system/keys.html
153share/doc/qemu/system/license.html 184share/doc/qemu/system/license.html
154share/doc/qemu/system/linuxboot.html 185share/doc/qemu/system/linuxboot.html
155share/doc/qemu/system/managed-startup.html 186share/doc/qemu/system/managed-startup.html
156share/doc/qemu/system/monitor.html 187share/doc/qemu/system/monitor.html
 188share/doc/qemu/system/multi-process.html
157share/doc/qemu/system/mux-chardev.html 189share/doc/qemu/system/mux-chardev.html
158share/doc/qemu/system/net.html 190share/doc/qemu/system/net.html
159share/doc/qemu/system/objects.inv 191share/doc/qemu/system/nvme.html
 192share/doc/qemu/system/ppc/embedded.html
 193share/doc/qemu/system/ppc/powermac.html
 194share/doc/qemu/system/ppc/powernv.html
 195share/doc/qemu/system/ppc/prep.html
 196share/doc/qemu/system/ppc/pseries.html
160share/doc/qemu/system/pr-manager.html 197share/doc/qemu/system/pr-manager.html
161share/doc/qemu/system/qemu-block-drivers.html 198share/doc/qemu/system/qemu-block-drivers.html
162share/doc/qemu/system/qemu-cpu-models.html 199share/doc/qemu/system/qemu-cpu-models.html
163share/doc/qemu/system/qemu-manpage.html 200share/doc/qemu/system/qemu-manpage.html
164share/doc/qemu/system/quickstart.html 201share/doc/qemu/system/quickstart.html
 202share/doc/qemu/system/removed-features.html
 203share/doc/qemu/system/riscv/microchip-icicle-kit.html
 204share/doc/qemu/system/riscv/sifive_u.html
165share/doc/qemu/system/s390x/3270.html 205share/doc/qemu/system/s390x/3270.html
166share/doc/qemu/system/s390x/bootdevices.html 206share/doc/qemu/system/s390x/bootdevices.html
167share/doc/qemu/system/s390x/css.html 207share/doc/qemu/system/s390x/css.html
168share/doc/qemu/system/s390x/protvirt.html 208share/doc/qemu/system/s390x/protvirt.html
169share/doc/qemu/system/s390x/vfio-ap.html 209share/doc/qemu/system/s390x/vfio-ap.html
170share/doc/qemu/system/s390x/vfio-ccw.html 210share/doc/qemu/system/s390x/vfio-ccw.html
171share/doc/qemu/system/search.html 
172share/doc/qemu/system/searchindex.js 
173share/doc/qemu/system/security.html 211share/doc/qemu/system/security.html
174share/doc/qemu/system/target-arm.html 212share/doc/qemu/system/target-arm.html
175share/doc/qemu/system/target-avr.html 213share/doc/qemu/system/target-avr.html
176share/doc/qemu/system/target-i386.html 214share/doc/qemu/system/target-i386.html
177share/doc/qemu/system/target-m68k.html 215share/doc/qemu/system/target-m68k.html
178share/doc/qemu/system/target-mips.html 216share/doc/qemu/system/target-mips.html
179share/doc/qemu/system/target-ppc.html 217share/doc/qemu/system/target-ppc.html
 218share/doc/qemu/system/target-riscv.html
180share/doc/qemu/system/target-rx.html 219share/doc/qemu/system/target-rx.html
181share/doc/qemu/system/target-s390x.html 220share/doc/qemu/system/target-s390x.html
182share/doc/qemu/system/target-sparc.html 221share/doc/qemu/system/target-sparc.html
183share/doc/qemu/system/target-sparc64.html 222share/doc/qemu/system/target-sparc64.html
184share/doc/qemu/system/target-xtensa.html 223share/doc/qemu/system/target-xtensa.html
185share/doc/qemu/system/targets.html 224share/doc/qemu/system/targets.html
186share/doc/qemu/system/tls.html 225share/doc/qemu/system/tls.html
187share/doc/qemu/system/usb.html 226share/doc/qemu/system/usb.html
188share/doc/qemu/system/virtio-net-failover.html 227share/doc/qemu/system/virtio-net-failover.html
189share/doc/qemu/system/virtio-pmem.html 228share/doc/qemu/system/virtio-pmem.html
190share/doc/qemu/system/vnc-security.html 229share/doc/qemu/system/vnc-security.html
191share/doc/qemu/tools/.buildinfo 
192share/doc/qemu/tools/genindex.html 
193share/doc/qemu/tools/index.html 230share/doc/qemu/tools/index.html
194share/doc/qemu/tools/objects.inv 
195share/doc/qemu/tools/qemu-img.html 231share/doc/qemu/tools/qemu-img.html
196share/doc/qemu/tools/qemu-nbd.html 232share/doc/qemu/tools/qemu-nbd.html
197share/doc/qemu/tools/qemu-pr-helper.html 233share/doc/qemu/tools/qemu-pr-helper.html
 234share/doc/qemu/tools/qemu-storage-daemon.html
198share/doc/qemu/tools/qemu-trace-stap.html 235share/doc/qemu/tools/qemu-trace-stap.html
199share/doc/qemu/tools/search.html 
200share/doc/qemu/tools/searchindex.js 
201share/doc/qemu/tools/virtfs-proxy-helper.html 236share/doc/qemu/tools/virtfs-proxy-helper.html
202share/doc/qemu/tools/virtiofsd.html 237share/doc/qemu/tools/virtiofsd.html
203share/doc/qemu/user/.buildinfo 
204share/doc/qemu/user/genindex.html 
205share/doc/qemu/user/index.html 238share/doc/qemu/user/index.html
206share/doc/qemu/user/main.html 239share/doc/qemu/user/main.html
207share/doc/qemu/user/objects.inv 
208share/doc/qemu/user/search.html 
209share/doc/qemu/user/searchindex.js 
210share/icons/hicolor/128x128/apps/qemu.png 240share/icons/hicolor/128x128/apps/qemu.png
211share/icons/hicolor/16x16/apps/qemu.png 241share/icons/hicolor/16x16/apps/qemu.png
212share/icons/hicolor/24x24/apps/qemu.png 242share/icons/hicolor/24x24/apps/qemu.png
213share/icons/hicolor/256x256/apps/qemu.png 243share/icons/hicolor/256x256/apps/qemu.png
214share/icons/hicolor/32x32/apps/qemu.bmp 244share/icons/hicolor/32x32/apps/qemu.bmp
215share/icons/hicolor/32x32/apps/qemu.png 245share/icons/hicolor/32x32/apps/qemu.png
216share/icons/hicolor/48x48/apps/qemu.png 246share/icons/hicolor/48x48/apps/qemu.png
217share/icons/hicolor/512x512/apps/qemu.png 247share/icons/hicolor/512x512/apps/qemu.png
218share/icons/hicolor/64x64/apps/qemu.png 248share/icons/hicolor/64x64/apps/qemu.png
219share/icons/hicolor/scalable/apps/qemu.svg 249share/icons/hicolor/scalable/apps/qemu.svg
220${PLIST.gtk}share/locale/bg/LC_MESSAGES/qemu.mo 250${PLIST.gtk}share/locale/bg/LC_MESSAGES/qemu.mo
221${PLIST.gtk}share/locale/de_DE/LC_MESSAGES/qemu.mo 251${PLIST.gtk}share/locale/de_DE/LC_MESSAGES/qemu.mo
222${PLIST.gtk}share/locale/fr_FR/LC_MESSAGES/qemu.mo 252${PLIST.gtk}share/locale/fr_FR/LC_MESSAGES/qemu.mo
@@ -322,13 +352,14 @@ share/qemu/skiboot.lid
322share/qemu/slof.bin 352share/qemu/slof.bin
323share/qemu/trace-events-all 353share/qemu/trace-events-all
324share/qemu/u-boot-sam460-20100605.bin 354share/qemu/u-boot-sam460-20100605.bin
325share/qemu/u-boot.e500 355share/qemu/u-boot.e500
326share/qemu/vgabios-ati.bin 356share/qemu/vgabios-ati.bin
327share/qemu/vgabios-bochs-display.bin 357share/qemu/vgabios-bochs-display.bin
328share/qemu/vgabios-cirrus.bin 358share/qemu/vgabios-cirrus.bin
329share/qemu/vgabios-qxl.bin 359share/qemu/vgabios-qxl.bin
330share/qemu/vgabios-ramfb.bin 360share/qemu/vgabios-ramfb.bin
331share/qemu/vgabios-stdvga.bin 361share/qemu/vgabios-stdvga.bin
332share/qemu/vgabios-virtio.bin 362share/qemu/vgabios-virtio.bin
333share/qemu/vgabios-vmware.bin 363share/qemu/vgabios-vmware.bin
334share/qemu/vgabios.bin 364share/qemu/vgabios.bin
 365@pkgdir var/run

cvs diff -r1.177 -r1.178 pkgsrc/emulators/qemu/distinfo

--- pkgsrc/emulators/qemu/distinfo 2021/05/23 13:53:10 1.177
+++ pkgsrc/emulators/qemu/distinfo 2021/05/24 14:22:08 1.178
@@ -1,55 +1,55 @@
1$NetBSD: distinfo,v 1.177 2021/05/23 13:53:10 thorpej Exp $ 1$NetBSD: distinfo,v 1.178 2021/05/24 14:22:08 ryoon Exp $
2 2
3SHA1 (palcode-clipper-qemu-5.2.0nb8) = ddbf1dffb7c2b2157e0bbe9fb7db7e57105130b1 3SHA1 (palcode-clipper-qemu-5.2.0nb8) = ddbf1dffb7c2b2157e0bbe9fb7db7e57105130b1
4RMD160 (palcode-clipper-qemu-5.2.0nb8) = 3f9fe19a40f7ca72ecfe047d1449e55b63cba3ee 4RMD160 (palcode-clipper-qemu-5.2.0nb8) = 3f9fe19a40f7ca72ecfe047d1449e55b63cba3ee
5SHA512 (palcode-clipper-qemu-5.2.0nb8) = 33695d6001d86a19793a92d5e31775607c4dfc9ab9eea019ea6c4d543a2e11e8c07f83cca4934811a13ef829b528737ea37d9d2aaf66cba6f2746d44d2aa0b43 5SHA512 (palcode-clipper-qemu-5.2.0nb8) = 33695d6001d86a19793a92d5e31775607c4dfc9ab9eea019ea6c4d543a2e11e8c07f83cca4934811a13ef829b528737ea37d9d2aaf66cba6f2746d44d2aa0b43
6Size (palcode-clipper-qemu-5.2.0nb8) = 159808 bytes 6Size (palcode-clipper-qemu-5.2.0nb8) = 159808 bytes
7SHA1 (qemu-5.2.0.tar.xz) = 146578267387e301423502d19024f8ffe35ab332 7SHA1 (qemu-6.0.0.tar.xz) = 131854b10d8c1614ae137c647aa31b756782ba2e
8RMD160 (qemu-5.2.0.tar.xz) = 2c33e773f012e333f99237e3d4ff1653ea0bc88f 8RMD160 (qemu-6.0.0.tar.xz) = 0785bb4c32f1e9d23dcdfad562f18d232677a0c6
9SHA512 (qemu-5.2.0.tar.xz) = bddd633ce111471ebc651e03080251515178808556b49a308a724909e55dac0be0cc0c79c536ac12d239678ae94c60100dc124be9b9d9538340c03a2f27177f3 9SHA512 (qemu-6.0.0.tar.xz) = ee3ff00aebec4d8891d2ff6dabe4e667e510b2a4fe3f6190aa34673a91ea32dcd2db2e9bf94c2f1bf05aa79788f17cfbbedc6027c0988ea08a92587b79ee05e4
10Size (qemu-5.2.0.tar.xz) = 106902800 bytes 10Size (qemu-6.0.0.tar.xz) = 107333232 bytes
11SHA1 (patch-accel_stubs_nvmm-stub.c) = d66d47eabb8bb6728e777da7589b43d491adbcc8 11SHA1 (patch-accel_Kconfig) = d343285a8b548d2d6387b92576aed801265d2b24
12SHA1 (patch-backends_tpm_tpm__ioctl.h) = fbd6c877ad605f7120290efbb0ac653c69f351de 12SHA1 (patch-backends_tpm_tpm__ioctl.h) = fbd6c877ad605f7120290efbb0ac653c69f351de
13SHA1 (patch-configure) = 8b392c5633c70d65f2f27af3b617a53af9772899 13SHA1 (patch-configure) = d94427a90bbb8e4d1347503e5583b4966b039e37
14SHA1 (patch-contrib_ivshmem-client_ivshmem-client.c) = 40c8751607cbf66a37e4c4e08f2664b864e2e984 
15SHA1 (patch-contrib_ivshmem-server_ivshmem-server.c) = d8f53432b5752f4263dc4ef96108a976a05147a3 
16SHA1 (patch-hw-mips-Kconfig) = c7199ad26ac45116ab4d38252db4234ae93bdf9a 14SHA1 (patch-hw-mips-Kconfig) = c7199ad26ac45116ab4d38252db4234ae93bdf9a
17SHA1 (patch-hw-mips-mipssim.c) = f701897f2c2bee4a8c3fa5222903789f991a663a 15SHA1 (patch-hw-mips-mipssim.c) = f701897f2c2bee4a8c3fa5222903789f991a663a
18SHA1 (patch-hw_alpha_alpha_sys.h) = 5908698208937ff9eb0bf1c504e1144af3d1bcc4 16SHA1 (patch-hw_alpha_alpha_sys.h) = 5908698208937ff9eb0bf1c504e1144af3d1bcc4
19SHA1 (patch-hw_alpha_dp264.c) = 856304784f098863728ecac3d0a9287aa22190d7 17SHA1 (patch-hw_alpha_dp264.c) = 856304784f098863728ecac3d0a9287aa22190d7
20SHA1 (patch-hw_alpha_typhoon.c) = 1bed5cd6f355c4163585c5331356ebf38c5c3a16 18SHA1 (patch-hw_alpha_typhoon.c) = 1bed5cd6f355c4163585c5331356ebf38c5c3a16
21SHA1 (patch-hw_core_uboot__image.h) = 17eef02349343c5fcfb7a4069cb6f8fd11efcb59 19SHA1 (patch-hw_core_uboot__image.h) = 17eef02349343c5fcfb7a4069cb6f8fd11efcb59
22SHA1 (patch-hw_display_omap__dss.c) = 6b13242f28e32346bc70548c216c578d98fd3420 20SHA1 (patch-hw_display_omap__dss.c) = 6b13242f28e32346bc70548c216c578d98fd3420
23SHA1 (patch-hw_mips_meson.build) = 4d1ed1ae2dbfb3edfe5fa5271c4561531b08efee 21SHA1 (patch-hw_mips_meson.build) = ff4bec33d9d2f86a425e02928aa3b6963c22da68
24SHA1 (patch-hw_net_etraxfs__eth.c) = e5dd1661d60dbcd27b332403e0843500ba9544bc 22SHA1 (patch-hw_net_etraxfs__eth.c) = e5dd1661d60dbcd27b332403e0843500ba9544bc
25SHA1 (patch-hw_net_xilinx__axienet.c) = ebcd2676d64ce6f31e4a8c976d4fdf530ad5e8b7 23SHA1 (patch-hw_net_xilinx__axienet.c) = ebcd2676d64ce6f31e4a8c976d4fdf530ad5e8b7
26SHA1 (patch-hw_rtc_mc146818rtc.c) = cc7a3b28010966b65b7a16db756226ac2669f310 24SHA1 (patch-hw_rtc_mc146818rtc.c) = cc7a3b28010966b65b7a16db756226ac2669f310
27SHA1 (patch-hw_scsi_scsi-disk.c) = fdbf2f962a6dcb1a115a7f8a5b8790ff9295fb33 25SHA1 (patch-hw_scsi_scsi-disk.c) = fdbf2f962a6dcb1a115a7f8a5b8790ff9295fb33
28SHA1 (patch-hw_usb_dev-mtp.c) = 94ddf53a41cc75810cfece1b8aef1831fab4ce43 26SHA1 (patch-hw_usb_dev-mtp.c) = 94ddf53a41cc75810cfece1b8aef1831fab4ce43
29SHA1 (patch-include_sysemu_hw_accel.h) = d083cd51434e28eb0d647b5107d34018b0ef63dc 27SHA1 (patch-include_sysemu_hw__accel.h) = a3cd022368a074e30dd3958932a006fa0fe011a6
30SHA1 (patch-include_sysemu_kvm.h) = 9847abe3be70bd708a521310f5d5515e45a1a5a0 28SHA1 (patch-include_sysemu_kvm.h) = 9847abe3be70bd708a521310f5d5515e45a1a5a0
31SHA1 (patch-include_sysemu_nvmm.h) = 1fe49c4f11910d6faf683ae3233f783a0b03ce5a 29SHA1 (patch-include_sysemu_nvmm.h) = 7e49abdc7dc6a03f293780c63ac6c242d3914d15
32SHA1 (patch-meson.build) = 235f4bb3f8ee244a8ee9570b2270300189800983 30SHA1 (patch-meson.build) = fe1ef65033aa387a8b029d3db206a04e341644d5
33SHA1 (patch-meson__options.txt) = 286d097f596baa5af244a990d2874f1a7ee65198 31SHA1 (patch-meson__options.txt) = 050adf1d5c07dc211fdafde7a21e2afe52db9169
34SHA1 (patch-net_tap-solaris.c) = cc953c9a624dd55ace4e130d0b31bbfb956c17d5 32SHA1 (patch-net_tap-solaris.c) = cc953c9a624dd55ace4e130d0b31bbfb956c17d5
35SHA1 (patch-qemu-options.hx) = e2f264117f703aa4ccf56219f370c3b1303e8b07 33SHA1 (patch-nvmm-accel-ops.c) = 23ef13420a61d8bfa78f36ed7eae2e1523464617
 34SHA1 (patch-nvmm-accel-ops.h) = 101b4f3f2a5775db4c93ffcf10b150e8545a3655
 35SHA1 (patch-nvmm-all.c) = 93d33e285b616a20ad2af550bef31e88c55f6a22
 36SHA1 (patch-qemu-options.hx) = 2e68ce28c9a678a666c3f23a0c1369d3568aa1eb
36SHA1 (patch-roms_qemu-palcode_hwrpb.h) = ae7b4c0680367af6f740d62a54dc86352128d76f 37SHA1 (patch-roms_qemu-palcode_hwrpb.h) = ae7b4c0680367af6f740d62a54dc86352128d76f
37SHA1 (patch-roms_qemu-palcode_init.c) = 7a0ebcd86f4106318791e7d90273fb55a424f1b8 38SHA1 (patch-roms_qemu-palcode_init.c) = 7a0ebcd86f4106318791e7d90273fb55a424f1b8
38SHA1 (patch-roms_qemu-palcode_memcpy.c) = 7761774ae9092d0f494deaf302d663ba479a09cf 39SHA1 (patch-roms_qemu-palcode_memcpy.c) = 7761774ae9092d0f494deaf302d663ba479a09cf
39SHA1 (patch-roms_qemu-palcode_memset.c) = 55fa4e52e03a351eb98475e7c4755e5edc409e6c 40SHA1 (patch-roms_qemu-palcode_memset.c) = 55fa4e52e03a351eb98475e7c4755e5edc409e6c
40SHA1 (patch-roms_qemu-palcode_pal.S) = fd13cf4ff7a4ba48a9cbb773d520eacf06615301 41SHA1 (patch-roms_qemu-palcode_pal.S) = fd13cf4ff7a4ba48a9cbb773d520eacf06615301
41SHA1 (patch-roms_qemu-palcode_pci.c) = 1d5b240fd6c940cbbe8518e4db529adba23d6fec 42SHA1 (patch-roms_qemu-palcode_pci.c) = 1d5b240fd6c940cbbe8518e4db529adba23d6fec
42SHA1 (patch-roms_qemu-palcode_pci.h) = 081c9d6d9955be24fd19455ae653339cdb133f02 43SHA1 (patch-roms_qemu-palcode_pci.h) = 081c9d6d9955be24fd19455ae653339cdb133f02
43SHA1 (patch-roms_qemu-palcode_printf.c) = 7fb158f85bd1be9a939850d9d86175013f7a142b 44SHA1 (patch-roms_qemu-palcode_printf.c) = 7fb158f85bd1be9a939850d9d86175013f7a142b
44SHA1 (patch-roms_qemu-palcode_protos.h) = 60cf9db5544cb842207a893a78fa6bbe45af4c71 45SHA1 (patch-roms_qemu-palcode_protos.h) = 60cf9db5544cb842207a893a78fa6bbe45af4c71
45SHA1 (patch-roms_qemu-palcode_sys-clipper.h) = 8983d7072b1c1e66bf0a18d2e49e503745692a46 46SHA1 (patch-roms_qemu-palcode_sys-clipper.h) = 8983d7072b1c1e66bf0a18d2e49e503745692a46
46SHA1 (patch-roms_qemu-palcode_vgaio.c) = c8d7adc053cd6655f005527d16647611040c09d2 47SHA1 (patch-roms_qemu-palcode_vgaio.c) = c8d7adc053cd6655f005527d16647611040c09d2
47SHA1 (patch-roms_u-boot-sam460ex_Makefile) = 3a1bbf19b1422c10ebdd819eb0b711fafc78e2f2 48SHA1 (patch-roms_u-boot-sam460ex_Makefile) = 3a1bbf19b1422c10ebdd819eb0b711fafc78e2f2
48SHA1 (patch-roms_u-boot_tools_imx8m__image.sh) = e4c452062f40569e33aa93eec4a65bd3af2e74fc 49SHA1 (patch-roms_u-boot_tools_imx8m__image.sh) = e4c452062f40569e33aa93eec4a65bd3af2e74fc
49SHA1 (patch-target_i386_helper.c) = 3314e65df11492438af2ec2c53ed3082a0b62b09 50SHA1 (patch-target_i386_meson.build) = 0b6430825e1f5715f6deea556043b7e5063cf10a
50SHA1 (patch-target_i386_kvm-stub.c) = 4cd2b7a8d8d8a317829f982b5acff7fdf2479d9f 51SHA1 (patch-target_i386_nvmm_meson.build) = c773fbed28a87f53263ab5299a63ca77423d164f
51SHA1 (patch-target_i386_meson.build) = d0e0d7d4dd96ea43fc386e7166bbabbd71b0f4fc 52SHA1 (patch-target_i386_nvmm_nvmm-accel-ops.c) = fdc29ccd0fcd47b72e7802655fe92b08f7d22bb9
52SHA1 (patch-target_i386_nvmm_all.c) = 9a6d85eb650b260dc33d63caee4bcd0e1f4cb49c 53SHA1 (patch-target_i386_nvmm_nvmm-accel-ops.h) = 74d6442e1ac1cdf187996f3dd82bb3efddc002ec
53SHA1 (patch-target_i386_nvmm_cpus.c) = 7f028bf2637fe31d8524f710a9e508c8ce65c822 54SHA1 (patch-target_i386_nvmm_nvmm-all.c) = cd75f6a584920093407ec254b9276b056f83132e
54SHA1 (patch-target_i386_nvmm_cpus.h) = 0a25e49929cb772fc46a4ace91127ccf3605521d 
55SHA1 (patch-target_sparc_translate.c) = 7ec2add2fd808facb48b9a66ccc345599251bf76 55SHA1 (patch-target_sparc_translate.c) = 7ec2add2fd808facb48b9a66ccc345599251bf76

File Added: pkgsrc/emulators/qemu/patches/Attic/patch-accel_Kconfig
$NetBSD: patch-accel_Kconfig,v 1.1 2021/05/24 14:22:08 ryoon Exp $

--- accel/Kconfig.orig	2021-04-29 17:18:58.000000000 +0000
+++ accel/Kconfig
@@ -1,6 +1,9 @@
 config WHPX
     bool
 
+config NVMM
+    bool
+
 config HAX
     bool
 

File Added: pkgsrc/emulators/qemu/patches/Attic/patch-nvmm-accel-ops.c
$NetBSD: patch-nvmm-accel-ops.c,v 1.1 2021/05/24 14:22:08 ryoon Exp $

--- nvmm-accel-ops.c.orig	2021-05-06 04:47:35.604520043 +0000
+++ nvmm-accel-ops.c
@@ -0,0 +1,111 @@
+/*
+ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
+ *
+ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "sysemu/kvm_int.h"
+#include "qemu/main-loop.h"
+#include "sysemu/cpus.h"
+#include "qemu/guest-random.h"
+
+#include "sysemu/nvmm.h"
+#include "nvmm-accel-ops.h"
+
+static void *qemu_nvmm_cpu_thread_fn(void *arg)
+{
+    CPUState *cpu = arg;
+    int r;
+
+    assert(nvmm_enabled());
+
+    rcu_register_thread();
+
+    qemu_mutex_lock_iothread();
+    qemu_thread_get_self(cpu->thread);
+    cpu->thread_id = qemu_get_thread_id();
+    current_cpu = cpu;
+
+    r = nvmm_init_vcpu(cpu);
+    if (r < 0) {
+        fprintf(stderr, "nvmm_init_vcpu failed: %s\n", strerror(-r));
+        exit(1);
+    }
+
+    /* signal CPU creation */
+    cpu_thread_signal_created(cpu);
+    qemu_guest_random_seed_thread_part2(cpu->random_seed);
+
+    do {
+        if (cpu_can_run(cpu)) {
+            r = nvmm_vcpu_exec(cpu);
+            if (r == EXCP_DEBUG) {
+                cpu_handle_guest_debug(cpu);
+            }
+        }
+        while (cpu_thread_is_idle(cpu)) {
+            qemu_cond_wait_iothread(cpu->halt_cond);
+        }
+        qemu_wait_io_event_common(cpu);
+    } while (!cpu->unplug || cpu_can_run(cpu));
+
+    nvmm_destroy_vcpu(cpu);
+    cpu_thread_signal_destroyed(cpu);
+    qemu_mutex_unlock_iothread();
+    rcu_unregister_thread();
+    return NULL;
+}
+
+static void nvmm_start_vcpu_thread(CPUState *cpu)
+{
+    char thread_name[VCPU_THREAD_NAME_SIZE];
+
+    cpu->thread = g_malloc0(sizeof(QemuThread));
+    cpu->halt_cond = g_malloc0(sizeof(QemuCond));
+    qemu_cond_init(cpu->halt_cond);
+    snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/NVMM",
+             cpu->cpu_index);
+    qemu_thread_create(cpu->thread, thread_name, qemu_nvmm_cpu_thread_fn,
+                       cpu, QEMU_THREAD_JOINABLE);
+}
+
+/*
+ * Abort the call to run the virtual processor by another thread, and to
+ * return the control to that thread.
+ */
+static void nvmm_kick_vcpu_thread(CPUState *cpu)
+{
+    cpu->exit_request = 1;
+    cpus_kick_thread(cpu);
+}
+
+static void nvmm_accel_ops_class_init(ObjectClass *oc, void *data)
+{
+    AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
+
+    ops->create_vcpu_thread = nvmm_start_vcpu_thread;
+    ops->kick_vcpu_thread = nvmm_kick_vcpu_thread;
+
+    ops->synchronize_post_reset = nvmm_cpu_synchronize_post_reset;
+    ops->synchronize_post_init = nvmm_cpu_synchronize_post_init;
+    ops->synchronize_state = nvmm_cpu_synchronize_state;
+    ops->synchronize_pre_loadvm = nvmm_cpu_synchronize_pre_loadvm;
+}
+
+static const TypeInfo nvmm_accel_ops_type = {
+    .name = ACCEL_OPS_NAME("nvmm"),
+
+    .parent = TYPE_ACCEL_OPS,
+    .class_init = nvmm_accel_ops_class_init,
+    .abstract = true,
+};
+
+static void nvmm_accel_ops_register_types(void)
+{
+    type_register_static(&nvmm_accel_ops_type);
+}
+type_init(nvmm_accel_ops_register_types);

File Added: pkgsrc/emulators/qemu/patches/Attic/patch-nvmm-accel-ops.h
$NetBSD: patch-nvmm-accel-ops.h,v 1.1 2021/05/24 14:22:08 ryoon Exp $

--- nvmm-accel-ops.h.orig	2021-05-06 04:47:35.605973012 +0000
+++ nvmm-accel-ops.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
+ *
+ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef NVMM_CPUS_H
+#define NVMM_CPUS_H
+
+#include "sysemu/cpus.h"
+
+int nvmm_init_vcpu(CPUState *cpu);
+int nvmm_vcpu_exec(CPUState *cpu);
+void nvmm_destroy_vcpu(CPUState *cpu);
+
+void nvmm_cpu_synchronize_state(CPUState *cpu);
+void nvmm_cpu_synchronize_post_reset(CPUState *cpu);
+void nvmm_cpu_synchronize_post_init(CPUState *cpu);
+void nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu);
+
+#endif /* NVMM_CPUS_H */

File Added: pkgsrc/emulators/qemu/patches/Attic/patch-nvmm-all.c
$NetBSD: patch-nvmm-all.c,v 1.1 2021/05/24 14:22:08 ryoon Exp $

--- nvmm-all.c.orig	2021-05-06 04:47:35.606086411 +0000
+++ nvmm-all.c
@@ -0,0 +1,1226 @@
+/*
+ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
+ *
+ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "exec/address-spaces.h"
+#include "exec/ioport.h"
+#include "qemu-common.h"
+#include "qemu/accel.h"
+#include "sysemu/nvmm.h"
+#include "sysemu/cpus.h"
+#include "sysemu/runstate.h"
+#include "qemu/main-loop.h"
+#include "qemu/error-report.h"
+#include "qapi/error.h"
+#include "qemu/queue.h"
+#include "migration/blocker.h"
+#include "strings.h"
+
+#include "nvmm-accel-ops.h"
+
+#include <nvmm.h>
+
+struct qemu_vcpu {
+    struct nvmm_vcpu vcpu;
+    uint8_t tpr;
+    bool stop;
+
+    /* Window-exiting for INTs/NMIs. */
+    bool int_window_exit;
+    bool nmi_window_exit;
+
+    /* The guest is in an interrupt shadow (POP SS, etc). */
+    bool int_shadow;
+};
+
+struct qemu_machine {
+    struct nvmm_capability cap;
+    struct nvmm_machine mach;
+};
+
+/* -------------------------------------------------------------------------- */
+
+static bool nvmm_allowed;
+static struct qemu_machine qemu_mach;
+
+static struct qemu_vcpu *
+get_qemu_vcpu(CPUState *cpu)
+{
+    return (struct qemu_vcpu *)cpu->hax_vcpu;
+}
+
+static struct nvmm_machine *
+get_nvmm_mach(void)
+{
+    return &qemu_mach.mach;
+}
+
+/* -------------------------------------------------------------------------- */
+
+static void
+nvmm_set_segment(struct nvmm_x64_state_seg *nseg, const SegmentCache *qseg)
+{
+    uint32_t attrib = qseg->flags;
+
+    nseg->selector = qseg->selector;
+    nseg->limit = qseg->limit;
+    nseg->base = qseg->base;
+    nseg->attrib.type = __SHIFTOUT(attrib, DESC_TYPE_MASK);
+    nseg->attrib.s = __SHIFTOUT(attrib, DESC_S_MASK);
+    nseg->attrib.dpl = __SHIFTOUT(attrib, DESC_DPL_MASK);
+    nseg->attrib.p = __SHIFTOUT(attrib, DESC_P_MASK);
+    nseg->attrib.avl = __SHIFTOUT(attrib, DESC_AVL_MASK);
+    nseg->attrib.l = __SHIFTOUT(attrib, DESC_L_MASK);
+    nseg->attrib.def = __SHIFTOUT(attrib, DESC_B_MASK);
+    nseg->attrib.g = __SHIFTOUT(attrib, DESC_G_MASK);
+}
+
+static void
+nvmm_set_registers(CPUState *cpu)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    struct nvmm_x64_state *state = vcpu->state;
+    uint64_t bitmap;
+    size_t i;
+    int ret;
+
+    assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu));
+
+    /* GPRs. */
+    state->gprs[NVMM_X64_GPR_RAX] = env->regs[R_EAX];
+    state->gprs[NVMM_X64_GPR_RCX] = env->regs[R_ECX];
+    state->gprs[NVMM_X64_GPR_RDX] = env->regs[R_EDX];
+    state->gprs[NVMM_X64_GPR_RBX] = env->regs[R_EBX];
+    state->gprs[NVMM_X64_GPR_RSP] = env->regs[R_ESP];
+    state->gprs[NVMM_X64_GPR_RBP] = env->regs[R_EBP];
+    state->gprs[NVMM_X64_GPR_RSI] = env->regs[R_ESI];
+    state->gprs[NVMM_X64_GPR_RDI] = env->regs[R_EDI];
+#ifdef TARGET_X86_64
+    state->gprs[NVMM_X64_GPR_R8]  = env->regs[R_R8];
+    state->gprs[NVMM_X64_GPR_R9]  = env->regs[R_R9];
+    state->gprs[NVMM_X64_GPR_R10] = env->regs[R_R10];
+    state->gprs[NVMM_X64_GPR_R11] = env->regs[R_R11];
+    state->gprs[NVMM_X64_GPR_R12] = env->regs[R_R12];
+    state->gprs[NVMM_X64_GPR_R13] = env->regs[R_R13];
+    state->gprs[NVMM_X64_GPR_R14] = env->regs[R_R14];
+    state->gprs[NVMM_X64_GPR_R15] = env->regs[R_R15];
+#endif
+
+    /* RIP and RFLAGS. */
+    state->gprs[NVMM_X64_GPR_RIP] = env->eip;
+    state->gprs[NVMM_X64_GPR_RFLAGS] = env->eflags;
+
+    /* Segments. */
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_CS], &env->segs[R_CS]);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_DS], &env->segs[R_DS]);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_ES], &env->segs[R_ES]);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_FS], &env->segs[R_FS]);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_GS], &env->segs[R_GS]);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_SS], &env->segs[R_SS]);
+
+    /* Special segments. */
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_GDT], &env->gdt);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_LDT], &env->ldt);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_TR], &env->tr);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_IDT], &env->idt);
+
+    /* Control registers. */
+    state->crs[NVMM_X64_CR_CR0] = env->cr[0];
+    state->crs[NVMM_X64_CR_CR2] = env->cr[2];
+    state->crs[NVMM_X64_CR_CR3] = env->cr[3];
+    state->crs[NVMM_X64_CR_CR4] = env->cr[4];
+    state->crs[NVMM_X64_CR_CR8] = qcpu->tpr;
+    state->crs[NVMM_X64_CR_XCR0] = env->xcr0;
+
+    /* Debug registers. */
+    state->drs[NVMM_X64_DR_DR0] = env->dr[0];
+    state->drs[NVMM_X64_DR_DR1] = env->dr[1];
+    state->drs[NVMM_X64_DR_DR2] = env->dr[2];
+    state->drs[NVMM_X64_DR_DR3] = env->dr[3];
+    state->drs[NVMM_X64_DR_DR6] = env->dr[6];
+    state->drs[NVMM_X64_DR_DR7] = env->dr[7];
+
+    /* FPU. */
+    state->fpu.fx_cw = env->fpuc;
+    state->fpu.fx_sw = (env->fpus & ~0x3800) | ((env->fpstt & 0x7) << 11);
+    state->fpu.fx_tw = 0;
+    for (i = 0; i < 8; i++) {
+        state->fpu.fx_tw |= (!env->fptags[i]) << i;
+    }
+    state->fpu.fx_opcode = env->fpop;
+    state->fpu.fx_ip.fa_64 = env->fpip;
+    state->fpu.fx_dp.fa_64 = env->fpdp;
+    state->fpu.fx_mxcsr = env->mxcsr;
+    state->fpu.fx_mxcsr_mask = 0x0000FFFF;
+    assert(sizeof(state->fpu.fx_87_ac) == sizeof(env->fpregs));
+    memcpy(state->fpu.fx_87_ac, env->fpregs, sizeof(env->fpregs));
+    for (i = 0; i < CPU_NB_REGS; i++) {
+        memcpy(&state->fpu.fx_xmm[i].xmm_bytes[0],
+            &env->xmm_regs[i].ZMM_Q(0), 8);
+        memcpy(&state->fpu.fx_xmm[i].xmm_bytes[8],
+            &env->xmm_regs[i].ZMM_Q(1), 8);
+    }
+
+    /* MSRs. */
+    state->msrs[NVMM_X64_MSR_EFER] = env->efer;
+    state->msrs[NVMM_X64_MSR_STAR] = env->star;
+#ifdef TARGET_X86_64
+    state->msrs[NVMM_X64_MSR_LSTAR] = env->lstar;
+    state->msrs[NVMM_X64_MSR_CSTAR] = env->cstar;
+    state->msrs[NVMM_X64_MSR_SFMASK] = env->fmask;
+    state->msrs[NVMM_X64_MSR_KERNELGSBASE] = env->kernelgsbase;
+#endif
+    state->msrs[NVMM_X64_MSR_SYSENTER_CS]  = env->sysenter_cs;
+    state->msrs[NVMM_X64_MSR_SYSENTER_ESP] = env->sysenter_esp;
+    state->msrs[NVMM_X64_MSR_SYSENTER_EIP] = env->sysenter_eip;
+    state->msrs[NVMM_X64_MSR_PAT] = env->pat;
+    state->msrs[NVMM_X64_MSR_TSC] = env->tsc;
+
+    bitmap =
+        NVMM_X64_STATE_SEGS |
+        NVMM_X64_STATE_GPRS |
+        NVMM_X64_STATE_CRS  |
+        NVMM_X64_STATE_DRS  |
+        NVMM_X64_STATE_MSRS |
+        NVMM_X64_STATE_FPU;
+
+    ret = nvmm_vcpu_setstate(mach, vcpu, bitmap);
+    if (ret == -1) {
+        error_report("NVMM: Failed to set virtual processor context,"
+            " error=%d", errno);
+    }
+}
+
+static void
+nvmm_get_segment(SegmentCache *qseg, const struct nvmm_x64_state_seg *nseg)
+{
+    qseg->selector = nseg->selector;
+    qseg->limit = nseg->limit;
+    qseg->base = nseg->base;
+
+    qseg->flags =
+        __SHIFTIN((uint32_t)nseg->attrib.type, DESC_TYPE_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.s, DESC_S_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.dpl, DESC_DPL_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.p, DESC_P_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.avl, DESC_AVL_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.l, DESC_L_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.def, DESC_B_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.g, DESC_G_MASK);
+}
+
+static void
+nvmm_get_registers(CPUState *cpu)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    struct nvmm_x64_state *state = vcpu->state;
+    uint64_t bitmap, tpr;
+    size_t i;
+    int ret;
+
+    assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu));
+
+    bitmap =
+        NVMM_X64_STATE_SEGS |
+        NVMM_X64_STATE_GPRS |
+        NVMM_X64_STATE_CRS  |
+        NVMM_X64_STATE_DRS  |
+        NVMM_X64_STATE_MSRS |
+        NVMM_X64_STATE_FPU;
+
+    ret = nvmm_vcpu_getstate(mach, vcpu, bitmap);
+    if (ret == -1) {
+        error_report("NVMM: Failed to get virtual processor context,"
+            " error=%d", errno);
+    }
+
+    /* GPRs. */
+    env->regs[R_EAX] = state->gprs[NVMM_X64_GPR_RAX];
+    env->regs[R_ECX] = state->gprs[NVMM_X64_GPR_RCX];
+    env->regs[R_EDX] = state->gprs[NVMM_X64_GPR_RDX];
+    env->regs[R_EBX] = state->gprs[NVMM_X64_GPR_RBX];
+    env->regs[R_ESP] = state->gprs[NVMM_X64_GPR_RSP];
+    env->regs[R_EBP] = state->gprs[NVMM_X64_GPR_RBP];
+    env->regs[R_ESI] = state->gprs[NVMM_X64_GPR_RSI];
+    env->regs[R_EDI] = state->gprs[NVMM_X64_GPR_RDI];
+#ifdef TARGET_X86_64
+    env->regs[R_R8]  = state->gprs[NVMM_X64_GPR_R8];
+    env->regs[R_R9]  = state->gprs[NVMM_X64_GPR_R9];
+    env->regs[R_R10] = state->gprs[NVMM_X64_GPR_R10];
+    env->regs[R_R11] = state->gprs[NVMM_X64_GPR_R11];
+    env->regs[R_R12] = state->gprs[NVMM_X64_GPR_R12];
+    env->regs[R_R13] = state->gprs[NVMM_X64_GPR_R13];
+    env->regs[R_R14] = state->gprs[NVMM_X64_GPR_R14];
+    env->regs[R_R15] = state->gprs[NVMM_X64_GPR_R15];
+#endif
+
+    /* RIP and RFLAGS. */
+    env->eip = state->gprs[NVMM_X64_GPR_RIP];
+    env->eflags = state->gprs[NVMM_X64_GPR_RFLAGS];
+
+    /* Segments. */
+    nvmm_get_segment(&env->segs[R_ES], &state->segs[NVMM_X64_SEG_ES]);
+    nvmm_get_segment(&env->segs[R_CS], &state->segs[NVMM_X64_SEG_CS]);
+    nvmm_get_segment(&env->segs[R_SS], &state->segs[NVMM_X64_SEG_SS]);
+    nvmm_get_segment(&env->segs[R_DS], &state->segs[NVMM_X64_SEG_DS]);
+    nvmm_get_segment(&env->segs[R_FS], &state->segs[NVMM_X64_SEG_FS]);
+    nvmm_get_segment(&env->segs[R_GS], &state->segs[NVMM_X64_SEG_GS]);
+
+    /* Special segments. */
+    nvmm_get_segment(&env->gdt, &state->segs[NVMM_X64_SEG_GDT]);
+    nvmm_get_segment(&env->ldt, &state->segs[NVMM_X64_SEG_LDT]);
+    nvmm_get_segment(&env->tr, &state->segs[NVMM_X64_SEG_TR]);
+    nvmm_get_segment(&env->idt, &state->segs[NVMM_X64_SEG_IDT]);
+
+    /* Control registers. */
+    env->cr[0] = state->crs[NVMM_X64_CR_CR0];
+    env->cr[2] = state->crs[NVMM_X64_CR_CR2];
+    env->cr[3] = state->crs[NVMM_X64_CR_CR3];
+    env->cr[4] = state->crs[NVMM_X64_CR_CR4];
+    tpr = state->crs[NVMM_X64_CR_CR8];
+    if (tpr != qcpu->tpr) {
+        qcpu->tpr = tpr;
+        cpu_set_apic_tpr(x86_cpu->apic_state, tpr);
+    }
+    env->xcr0 = state->crs[NVMM_X64_CR_XCR0];
+
+    /* Debug registers. */
+    env->dr[0] = state->drs[NVMM_X64_DR_DR0];
+    env->dr[1] = state->drs[NVMM_X64_DR_DR1];
+    env->dr[2] = state->drs[NVMM_X64_DR_DR2];
+    env->dr[3] = state->drs[NVMM_X64_DR_DR3];
+    env->dr[6] = state->drs[NVMM_X64_DR_DR6];
+    env->dr[7] = state->drs[NVMM_X64_DR_DR7];
+
+    /* FPU. */
+    env->fpuc = state->fpu.fx_cw;
+    env->fpstt = (state->fpu.fx_sw >> 11) & 0x7;
+    env->fpus = state->fpu.fx_sw & ~0x3800;
+    for (i = 0; i < 8; i++) {
+        env->fptags[i] = !((state->fpu.fx_tw >> i) & 1);
+    }
+    env->fpop = state->fpu.fx_opcode;
+    env->fpip = state->fpu.fx_ip.fa_64;
+    env->fpdp = state->fpu.fx_dp.fa_64;
+    env->mxcsr = state->fpu.fx_mxcsr;
+    assert(sizeof(state->fpu.fx_87_ac) == sizeof(env->fpregs));
+    memcpy(env->fpregs, state->fpu.fx_87_ac, sizeof(env->fpregs));
+    for (i = 0; i < CPU_NB_REGS; i++) {
+        memcpy(&env->xmm_regs[i].ZMM_Q(0),
+            &state->fpu.fx_xmm[i].xmm_bytes[0], 8);
+        memcpy(&env->xmm_regs[i].ZMM_Q(1),
+            &state->fpu.fx_xmm[i].xmm_bytes[8], 8);
+    }
+
+    /* MSRs. */
+    env->efer = state->msrs[NVMM_X64_MSR_EFER];
+    env->star = state->msrs[NVMM_X64_MSR_STAR];
+#ifdef TARGET_X86_64
+    env->lstar = state->msrs[NVMM_X64_MSR_LSTAR];
+    env->cstar = state->msrs[NVMM_X64_MSR_CSTAR];
+    env->fmask = state->msrs[NVMM_X64_MSR_SFMASK];
+    env->kernelgsbase = state->msrs[NVMM_X64_MSR_KERNELGSBASE];
+#endif
+    env->sysenter_cs  = state->msrs[NVMM_X64_MSR_SYSENTER_CS];
+    env->sysenter_esp = state->msrs[NVMM_X64_MSR_SYSENTER_ESP];
+    env->sysenter_eip = state->msrs[NVMM_X64_MSR_SYSENTER_EIP];
+    env->pat = state->msrs[NVMM_X64_MSR_PAT];
+    env->tsc = state->msrs[NVMM_X64_MSR_TSC];
+
+    x86_update_hflags(env);
+}
+
+static bool
+nvmm_can_take_int(CPUState *cpu)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    struct nvmm_machine *mach = get_nvmm_mach();
+
+    if (qcpu->int_window_exit) {
+        return false;
+    }
+
+    if (qcpu->int_shadow || !(env->eflags & IF_MASK)) {
+        struct nvmm_x64_state *state = vcpu->state;
+
+        /* Exit on interrupt window. */
+        nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_INTR);
+        state->intr.int_window_exiting = 1;
+        nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_INTR);
+
+        return false;
+    }
+
+    return true;
+}
+
+static bool
+nvmm_can_take_nmi(CPUState *cpu)
+{
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+
+    /*
+     * Contrary to INTs, NMIs always schedule an exit when they are
+     * completed. Therefore, if window-exiting is enabled, it means
+     * NMIs are blocked.
+     */
+    if (qcpu->nmi_window_exit) {
+        return false;
+    }
+
+    return true;
+}
+
+/*
+ * Called before the VCPU is run. We inject events generated by the I/O
+ * thread, and synchronize the guest TPR.
+ */
+static void
+nvmm_vcpu_pre_run(CPUState *cpu)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    struct nvmm_x64_state *state = vcpu->state;
+    struct nvmm_vcpu_event *event = vcpu->event;
+    bool has_event = false;
+    bool sync_tpr = false;
+    uint8_t tpr;
+    int ret;
+
+    qemu_mutex_lock_iothread();
+
+    tpr = cpu_get_apic_tpr(x86_cpu->apic_state);
+    if (tpr != qcpu->tpr) {
+        qcpu->tpr = tpr;
+        sync_tpr = true;
+    }
+
+    /*
+     * Force the VCPU out of its inner loop to process any INIT requests
+     * or commit pending TPR access.
+     */
+    if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
+        cpu->exit_request = 1;
+    }
+
+    if (!has_event && (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        if (nvmm_can_take_nmi(cpu)) {
+            cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
+            event->type = NVMM_VCPU_EVENT_INTR;
+            event->vector = 2;
+            has_event = true;
+        }
+    }
+
+    if (!has_event && (cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if (nvmm_can_take_int(cpu)) {
+            cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            event->type = NVMM_VCPU_EVENT_INTR;
+            event->vector = cpu_get_pic_interrupt(env);
+            has_event = true;
+        }
+    }
+
+    /* Don't want SMIs. */
+    if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
+        cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
+    }
+
+    if (sync_tpr) {
+        ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_CRS);
+        if (ret == -1) {
+            error_report("NVMM: Failed to get CPU state,"
+                " error=%d", errno);
+        }
+
+        state->crs[NVMM_X64_CR_CR8] = qcpu->tpr;
+
+        ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_CRS);
+        if (ret == -1) {
+            error_report("NVMM: Failed to set CPU state,"
+                " error=%d", errno);
+        }
+    }
+
+    if (has_event) {
+        ret = nvmm_vcpu_inject(mach, vcpu);
+        if (ret == -1) {
+            error_report("NVMM: Failed to inject event,"
+                " error=%d", errno);
+        }
+    }
+
+    qemu_mutex_unlock_iothread();
+}
+
+/*
+ * Called after the VCPU ran. We synchronize the host view of the TPR and
+ * RFLAGS.
+ */
+static void
+nvmm_vcpu_post_run(CPUState *cpu, struct nvmm_vcpu_exit *exit)
+{
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    uint64_t tpr;
+
+    env->eflags = exit->exitstate.rflags;
+    qcpu->int_shadow = exit->exitstate.int_shadow;
+    qcpu->int_window_exit = exit->exitstate.int_window_exiting;
+    qcpu->nmi_window_exit = exit->exitstate.nmi_window_exiting;
+
+    tpr = exit->exitstate.cr8;
+    if (qcpu->tpr != tpr) {
+        qcpu->tpr = tpr;
+        qemu_mutex_lock_iothread();
+        cpu_set_apic_tpr(x86_cpu->apic_state, qcpu->tpr);
+        qemu_mutex_unlock_iothread();
+    }
+}
+
+/* -------------------------------------------------------------------------- */
+
+static void
+nvmm_io_callback(struct nvmm_io *io)
+{
+    MemTxAttrs attrs = { 0 };
+    int ret;
+
+    ret = address_space_rw(&address_space_io, io->port, attrs, io->data,
+        io->size, !io->in);
+    if (ret != MEMTX_OK) {
+        error_report("NVMM: I/O Transaction Failed "
+            "[%s, port=%u, size=%zu]", (io->in ? "in" : "out"),
+            io->port, io->size);
+    }
+
+    /* Needed, otherwise infinite loop. */
+    current_cpu->vcpu_dirty = false;
+}
+
+static void
+nvmm_mem_callback(struct nvmm_mem *mem)
+{
+    cpu_physical_memory_rw(mem->gpa, mem->data, mem->size, mem->write);
+
+    /* Needed, otherwise infinite loop. */
+    current_cpu->vcpu_dirty = false;
+}
+
+static struct nvmm_assist_callbacks nvmm_callbacks = {
+    .io = nvmm_io_callback,
+    .mem = nvmm_mem_callback
+};
+
+/* -------------------------------------------------------------------------- */
+
+static int
+nvmm_handle_mem(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
+{
+    int ret;
+
+    ret = nvmm_assist_mem(mach, vcpu);
+    if (ret == -1) {
+        error_report("NVMM: Mem Assist Failed [gpa=%p]",
+            (void *)vcpu->exit->u.mem.gpa);
+    }
+
+    return ret;
+}
+
+static int
+nvmm_handle_io(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
+{
+    int ret;
+
+    ret = nvmm_assist_io(mach, vcpu);
+    if (ret == -1) {
+        error_report("NVMM: I/O Assist Failed [port=%d]",
+            (int)vcpu->exit->u.io.port);
+    }
+
+    return ret;
+}
+
+static int
+nvmm_handle_rdmsr(struct nvmm_machine *mach, CPUState *cpu,
+    struct nvmm_vcpu_exit *exit)
+{
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    struct nvmm_x64_state *state = vcpu->state;
+    uint64_t val;
+    int ret;
+
+    switch (exit->u.rdmsr.msr) {
+    case MSR_IA32_APICBASE:
+        val = cpu_get_apic_base(x86_cpu->apic_state);
+        break;
+    case MSR_MTRRcap:
+    case MSR_MTRRdefType:
+    case MSR_MCG_CAP:
+    case MSR_MCG_STATUS:
+        val = 0;
+        break;
+    default: /* More MSRs to add? */
+        val = 0;
+        error_report("NVMM: Unexpected RDMSR 0x%x, ignored",
+            exit->u.rdmsr.msr);
+        break;
+    }
+
+    ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_GPRS);
+    if (ret == -1) {
+        return -1;
+    }
+
+    state->gprs[NVMM_X64_GPR_RAX] = (val & 0xFFFFFFFF);
+    state->gprs[NVMM_X64_GPR_RDX] = (val >> 32);
+    state->gprs[NVMM_X64_GPR_RIP] = exit->u.rdmsr.npc;
+
+    ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_GPRS);
+    if (ret == -1) {
+        return -1;
+    }
+
+    return 0;
+}
+
+static int
+nvmm_handle_wrmsr(struct nvmm_machine *mach, CPUState *cpu,
+    struct nvmm_vcpu_exit *exit)
+{
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    struct nvmm_x64_state *state = vcpu->state;
+    uint64_t val;
+    int ret;
+
+    val = exit->u.wrmsr.val;
+
+    switch (exit->u.wrmsr.msr) {
+    case MSR_IA32_APICBASE:
+        cpu_set_apic_base(x86_cpu->apic_state, val);
+        break;
+    case MSR_MTRRdefType:
+    case MSR_MCG_STATUS:
+        break;
+    default: /* More MSRs to add? */
+        error_report("NVMM: Unexpected WRMSR 0x%x [val=0x%lx], ignored",
+            exit->u.wrmsr.msr, val);
+        break;
+    }
+
+    ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_GPRS);
+    if (ret == -1) {
+        return -1;
+    }
+
+    state->gprs[NVMM_X64_GPR_RIP] = exit->u.wrmsr.npc;
+
+    ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_GPRS);
+    if (ret == -1) {
+        return -1;
+    }
+
+    return 0;
+}
+
+static int
+nvmm_handle_halted(struct nvmm_machine *mach, CPUState *cpu,
+    struct nvmm_vcpu_exit *exit)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    int ret = 0;
+
+    qemu_mutex_lock_iothread();
+
+    if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+          (env->eflags & IF_MASK)) &&
+        !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        cpu->exception_index = EXCP_HLT;
+        cpu->halted = true;
+        ret = 1;
+    }
+
+    qemu_mutex_unlock_iothread();
+
+    return ret;
+}
+
+static int
+nvmm_inject_ud(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
+{
+    struct nvmm_vcpu_event *event = vcpu->event;
+
+    event->type = NVMM_VCPU_EVENT_EXCP;
+    event->vector = 6;
+    event->u.excp.error = 0;
+
+    return nvmm_vcpu_inject(mach, vcpu);
+}
+
+static int
+nvmm_vcpu_loop(CPUState *cpu)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    struct nvmm_vcpu_exit *exit = vcpu->exit;
+    int ret;
+
+    /*
+     * Some asynchronous events must be handled outside of the inner
+     * VCPU loop. They are handled here.
+     */
+    if (cpu->interrupt_request & CPU_INTERRUPT_INIT) {
+        nvmm_cpu_synchronize_state(cpu);
+        do_cpu_init(x86_cpu);
+        /* set int/nmi windows back to the reset state */
+    }
+    if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
+        cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        apic_poll_irq(x86_cpu->apic_state);
+    }
+    if (((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+         (env->eflags & IF_MASK)) ||
+        (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        cpu->halted = false;
+    }
+    if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
+        nvmm_cpu_synchronize_state(cpu);
+        do_cpu_sipi(x86_cpu);
+    }
+    if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
+        cpu->interrupt_request &= ~CPU_INTERRUPT_TPR;
+        nvmm_cpu_synchronize_state(cpu);
+        apic_handle_tpr_access_report(x86_cpu->apic_state, env->eip,
+            env->tpr_access_type);
+    }
+
+    if (cpu->halted) {
+        cpu->exception_index = EXCP_HLT;
+        qatomic_set(&cpu->exit_request, false);
+        return 0;
+    }
+
+    qemu_mutex_unlock_iothread();
+    cpu_exec_start(cpu);
+
+    /*
+     * Inner VCPU loop.
+     */
+    do {
+        if (cpu->vcpu_dirty) {
+            nvmm_set_registers(cpu);
+            cpu->vcpu_dirty = false;
+        }
+
+        if (qcpu->stop) {
+            cpu->exception_index = EXCP_INTERRUPT;
+            qcpu->stop = false;
+            ret = 1;
+            break;
+        }
+
+        nvmm_vcpu_pre_run(cpu);
+
+        if (qatomic_read(&cpu->exit_request)) {
+            nvmm_vcpu_stop(vcpu);
+        }
+
+        /* Read exit_request before the kernel reads the immediate exit flag */
+        smp_rmb();
+        ret = nvmm_vcpu_run(mach, vcpu);
+        if (ret == -1) {
+            error_report("NVMM: Failed to exec a virtual processor,"
+                " error=%d", errno);
+            break;
+        }
+
+        nvmm_vcpu_post_run(cpu, exit);
+
+        switch (exit->reason) {
+        case NVMM_VCPU_EXIT_NONE:
+            break;
+        case NVMM_VCPU_EXIT_STOPPED:
+            /*
+             * The kernel cleared the immediate exit flag; cpu->exit_request
+             * must be cleared after
+             */
+            smp_wmb();
+            qcpu->stop = true;
+            break;
+        case NVMM_VCPU_EXIT_MEMORY:
+            ret = nvmm_handle_mem(mach, vcpu);
+            break;
+        case NVMM_VCPU_EXIT_IO:
+            ret = nvmm_handle_io(mach, vcpu);
+            break;
+        case NVMM_VCPU_EXIT_INT_READY:
+        case NVMM_VCPU_EXIT_NMI_READY:
+        case NVMM_VCPU_EXIT_TPR_CHANGED:
+            break;
+        case NVMM_VCPU_EXIT_HALTED:
+            ret = nvmm_handle_halted(mach, cpu, exit);
+            break;
+        case NVMM_VCPU_EXIT_SHUTDOWN:
+            qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
+            cpu->exception_index = EXCP_INTERRUPT;
+            ret = 1;
+            break;
+        case NVMM_VCPU_EXIT_RDMSR:
+            ret = nvmm_handle_rdmsr(mach, cpu, exit);
+            break;
+        case NVMM_VCPU_EXIT_WRMSR:
+            ret = nvmm_handle_wrmsr(mach, cpu, exit);
+            break;
+        case NVMM_VCPU_EXIT_MONITOR:
+        case NVMM_VCPU_EXIT_MWAIT:
+            ret = nvmm_inject_ud(mach, vcpu);
+            break;
+        default:
+            error_report("NVMM: Unexpected VM exit code 0x%lx [hw=0x%lx]",
+                exit->reason, exit->u.inv.hwcode);
+            nvmm_get_registers(cpu);
+            qemu_mutex_lock_iothread();
+            qemu_system_guest_panicked(cpu_get_crash_info(cpu));
+            qemu_mutex_unlock_iothread();
+            ret = -1;
+            break;
+        }
+    } while (ret == 0);
+
+    cpu_exec_end(cpu);
+    qemu_mutex_lock_iothread();
+
+    qatomic_set(&cpu->exit_request, false);
+
+    return ret < 0;
+}
+
+/* -------------------------------------------------------------------------- */
+
+static void
+do_nvmm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
+{
+    nvmm_get_registers(cpu);
+    cpu->vcpu_dirty = true;
+}
+
+static void
+do_nvmm_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg)
+{
+    nvmm_set_registers(cpu);
+    cpu->vcpu_dirty = false;
+}
+
+static void
+do_nvmm_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg)
+{
+    nvmm_set_registers(cpu);
+    cpu->vcpu_dirty = false;
+}
+
+static void
+do_nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu, run_on_cpu_data arg)
+{
+    cpu->vcpu_dirty = true;
+}
+
+void nvmm_cpu_synchronize_state(CPUState *cpu)
+{
+    if (!cpu->vcpu_dirty) {
+        run_on_cpu(cpu, do_nvmm_cpu_synchronize_state, RUN_ON_CPU_NULL);
+    }
+}
+
+void nvmm_cpu_synchronize_post_reset(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_nvmm_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
+}
+
+void nvmm_cpu_synchronize_post_init(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_nvmm_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
+}
+
+void nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_nvmm_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
+}
+
+/* -------------------------------------------------------------------------- */
+
+static Error *nvmm_migration_blocker;
+
+/*
+ * The nvmm_vcpu_stop() mechanism breaks races between entering the VMM
+ * and another thread signaling the vCPU thread to exit.
+ */
+
+static void
+nvmm_ipi_signal(int sigcpu)
+{
+    if (current_cpu) {
+        struct qemu_vcpu *qcpu = get_qemu_vcpu(current_cpu);
+        struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+        nvmm_vcpu_stop(vcpu);
+    }
+}
+
+static void
+nvmm_init_cpu_signals(void)
+{
+    struct sigaction sigact;
+    sigset_t set;
+
+    /* Install the IPI handler. */
+    memset(&sigact, 0, sizeof(sigact));
+    sigact.sa_handler = nvmm_ipi_signal;
+    sigaction(SIG_IPI, &sigact, NULL);
+
+    /* Allow IPIs on the current thread. */
+    sigprocmask(SIG_BLOCK, NULL, &set);
+    sigdelset(&set, SIG_IPI);
+    pthread_sigmask(SIG_SETMASK, &set, NULL);
+}
+
+int
+nvmm_init_vcpu(CPUState *cpu)
+{
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct nvmm_vcpu_conf_cpuid cpuid;
+    struct nvmm_vcpu_conf_tpr tpr;
+    Error *local_error = NULL;
+    struct qemu_vcpu *qcpu;
+    int ret, err;
+
+    nvmm_init_cpu_signals();
+
+    if (nvmm_migration_blocker == NULL) {
+        error_setg(&nvmm_migration_blocker,
+            "NVMM: Migration not supported");
+
+        (void)migrate_add_blocker(nvmm_migration_blocker, &local_error);
+        if (local_error) {
+            error_report_err(local_error);
+            migrate_del_blocker(nvmm_migration_blocker);
+            error_free(nvmm_migration_blocker);
+            return -EINVAL;
+        }
+    }
+
+    qcpu = g_malloc0(sizeof(*qcpu));
+    if (qcpu == NULL) {
+        error_report("NVMM: Failed to allocate VCPU context.");
+        return -ENOMEM;
+    }
+
+    ret = nvmm_vcpu_create(mach, cpu->cpu_index, &qcpu->vcpu);
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Failed to create a virtual processor,"
+            " error=%d", err);
+        g_free(qcpu);
+        return -err;
+    }
+
+    memset(&cpuid, 0, sizeof(cpuid));
+    cpuid.mask = 1;
+    cpuid.leaf = 0x00000001;
+    cpuid.u.mask.set.edx = CPUID_MCE | CPUID_MCA | CPUID_MTRR;
+    ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_CPUID,
+        &cpuid);
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Failed to configure a virtual processor,"
+            " error=%d", err);
+        g_free(qcpu);
+        return -err;
+    }
+
+    ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_CALLBACKS,
+        &nvmm_callbacks);
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Failed to configure a virtual processor,"
+            " error=%d", err);
+        g_free(qcpu);
+        return -err;
+    }
+
+    if (qemu_mach.cap.arch.vcpu_conf_support & NVMM_CAP_ARCH_VCPU_CONF_TPR) {
+        memset(&tpr, 0, sizeof(tpr));
+        tpr.exit_changed = 1;
+        ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_TPR, &tpr);
+        if (ret == -1) {
+            err = errno;
+            error_report("NVMM: Failed to configure a virtual processor,"
+                " error=%d", err);
+            g_free(qcpu);
+            return -err;
+        }
+    }
+
+    cpu->vcpu_dirty = true;
+    cpu->hax_vcpu = (struct hax_vcpu_state *)qcpu;
+
+    return 0;
+}
+
+int
+nvmm_vcpu_exec(CPUState *cpu)
+{
+    int ret, fatal;
+
+    while (1) {
+        if (cpu->exception_index >= EXCP_INTERRUPT) {
+            ret = cpu->exception_index;
+            cpu->exception_index = -1;
+            break;
+        }
+
+        fatal = nvmm_vcpu_loop(cpu);
+
+        if (fatal) {
+            error_report("NVMM: Failed to execute a VCPU.");
+            abort();
+        }
+    }
+
+    return ret;
+}
+
+void
+nvmm_destroy_vcpu(CPUState *cpu)
+{
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+
+    nvmm_vcpu_destroy(mach, &qcpu->vcpu);
+    g_free(cpu->hax_vcpu);
+}
+
+/* -------------------------------------------------------------------------- */
+
+static void
+nvmm_update_mapping(hwaddr start_pa, ram_addr_t size, uintptr_t hva,
+    bool add, bool rom, const char *name)
+{
+    struct nvmm_machine *mach = get_nvmm_mach();
+    int ret, prot;
+
+    if (add) {
+        prot = PROT_READ | PROT_EXEC;
+        if (!rom) {
+            prot |= PROT_WRITE;
+        }
+        ret = nvmm_gpa_map(mach, hva, start_pa, size, prot);
+    } else {
+        ret = nvmm_gpa_unmap(mach, hva, start_pa, size);
+    }
+
+    if (ret == -1) {
+        error_report("NVMM: Failed to %s GPA range '%s' PA:%p, "
+            "Size:%p bytes, HostVA:%p, error=%d",
+            (add ? "map" : "unmap"), name, (void *)(uintptr_t)start_pa,
+            (void *)size, (void *)hva, errno);
+    }
+}
+
+static void
+nvmm_process_section(MemoryRegionSection *section, int add)
+{
+    MemoryRegion *mr = section->mr;
+    hwaddr start_pa = section->offset_within_address_space;
+    ram_addr_t size = int128_get64(section->size);
+    unsigned int delta;
+    uintptr_t hva;
+
+    if (!memory_region_is_ram(mr)) {
+        return;
+    }
+
+    /* Adjust start_pa and size so that they are page-aligned. */
+    delta = qemu_real_host_page_size - (start_pa & ~qemu_real_host_page_mask);
+    delta &= ~qemu_real_host_page_mask;
+    if (delta > size) {
+        return;
+    }
+    start_pa += delta;
+    size -= delta;
+    size &= qemu_real_host_page_mask;
+    if (!size || (start_pa & ~qemu_real_host_page_mask)) {
+        return;
+    }
+
+    hva = (uintptr_t)memory_region_get_ram_ptr(mr) +
+        section->offset_within_region + delta;
+
+    nvmm_update_mapping(start_pa, size, hva, add,
+        memory_region_is_rom(mr), mr->name);
+}
+
+static void
+nvmm_region_add(MemoryListener *listener, MemoryRegionSection *section)
+{
+    memory_region_ref(section->mr);
+    nvmm_process_section(section, 1);
+}
+
+static void
+nvmm_region_del(MemoryListener *listener, MemoryRegionSection *section)
+{
+    nvmm_process_section(section, 0);
+    memory_region_unref(section->mr);
+}
+
+static void
+nvmm_transaction_begin(MemoryListener *listener)
+{
+    /* nothing */
+}
+
+static void
+nvmm_transaction_commit(MemoryListener *listener)
+{
+    /* nothing */
+}
+
+static void
+nvmm_log_sync(MemoryListener *listener, MemoryRegionSection *section)
+{
+    MemoryRegion *mr = section->mr;
+
+    if (!memory_region_is_ram(mr)) {
+        return;
+    }
+
+    memory_region_set_dirty(mr, 0, int128_get64(section->size));
+}
+
+static MemoryListener nvmm_memory_listener = {
+    .begin = nvmm_transaction_begin,
+    .commit = nvmm_transaction_commit,
+    .region_add = nvmm_region_add,
+    .region_del = nvmm_region_del,
+    .log_sync = nvmm_log_sync,
+    .priority = 10,
+};
+
+static void
+nvmm_ram_block_added(RAMBlockNotifier *n, void *host, size_t size)
+{
+    struct nvmm_machine *mach = get_nvmm_mach();
+    uintptr_t hva = (uintptr_t)host;
+    int ret;
+
+    ret = nvmm_hva_map(mach, hva, size);
+
+    if (ret == -1) {
+        error_report("NVMM: Failed to map HVA, HostVA:%p "
+            "Size:%p bytes, error=%d",
+            (void *)hva, (void *)size, errno);
+    }
+}
+
+static struct RAMBlockNotifier nvmm_ram_notifier = {
+    .ram_block_added = nvmm_ram_block_added
+};
+
+/* -------------------------------------------------------------------------- */
+
+static int
+nvmm_accel_init(MachineState *ms)
+{
+    int ret, err;
+
+    ret = nvmm_init();
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Initialization failed, error=%d", errno);
+        return -err;
+    }
+
+    ret = nvmm_capability(&qemu_mach.cap);
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Unable to fetch capability, error=%d", errno);
+        return -err;
+    }
+    if (qemu_mach.cap.version < NVMM_KERN_VERSION) {
+        error_report("NVMM: Unsupported version %u", qemu_mach.cap.version);
+        return -EPROGMISMATCH;
+    }
+    if (qemu_mach.cap.state_size != sizeof(struct nvmm_x64_state)) {
+        error_report("NVMM: Wrong state size %u", qemu_mach.cap.state_size);
+        return -EPROGMISMATCH;
+    }
+
+    ret = nvmm_machine_create(&qemu_mach.mach);
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Machine creation failed, error=%d", errno);
+        return -err;
+    }
+
+    memory_listener_register(&nvmm_memory_listener, &address_space_memory);
+    ram_block_notifier_add(&nvmm_ram_notifier);
+
+    printf("NetBSD Virtual Machine Monitor accelerator is operational\n");
+    return 0;
+}
+
+int
+nvmm_enabled(void)
+{
+    return nvmm_allowed;
+}
+
+static void
+nvmm_accel_class_init(ObjectClass *oc, void *data)
+{
+    AccelClass *ac = ACCEL_CLASS(oc);
+    ac->name = "NVMM";
+    ac->init_machine = nvmm_accel_init;
+    ac->allowed = &nvmm_allowed;
+}
+
+static const TypeInfo nvmm_accel_type = {
+    .name = ACCEL_CLASS_NAME("nvmm"),
+    .parent = TYPE_ACCEL,
+    .class_init = nvmm_accel_class_init,
+};
+
+static void
+nvmm_type_init(void)
+{
+    type_register_static(&nvmm_accel_type);
+}
+
+type_init(nvmm_type_init);
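
Note (illustrative, not part of the patch): the segment conversion helpers
above rely on NetBSD's __SHIFTIN()/__SHIFTOUT() bitfield macros.  The
stand-alone C sketch below paraphrases how they behave; LOWEST_SET_BIT,
SHIFTIN, SHIFTOUT and DEMO_DPL_MASK are local stand-ins, and the mask value
is only assumed to match QEMU's DESC_DPL_MASK (DPL in bits 13..14).  The
authoritative definitions live in NetBSD's <sys/cdefs.h> and QEMU's
target/i386/cpu.h.

#include <assert.h>
#include <stdint.h>

/* Lowest set bit of a mask, e.g. 0x6000 -> 0x2000. */
#define LOWEST_SET_BIT(m)   ((((m) - 1) & (m)) ^ (m))
/* Extract the field selected by mask m, shifted down to bit 0. */
#define SHIFTOUT(x, m)      (((x) & (m)) / LOWEST_SET_BIT(m))
/* Place a bit-0-based value v into the field selected by mask m. */
#define SHIFTIN(v, m)       ((v) * LOWEST_SET_BIT(m))

/* Assumed to match QEMU's DESC_DPL_MASK (descriptor DPL, bits 13..14). */
#define DEMO_DPL_MASK       0x00006000u

int main(void)
{
    uint32_t flags = 0;

    flags |= SHIFTIN(3u, DEMO_DPL_MASK);           /* pack DPL = 3 */
    assert(SHIFTOUT(flags, DEMO_DPL_MASK) == 3u);  /* unpack it again */
    return 0;
}

With the accelerator class registered by type_init() above, the patched
build is selected at run time with something like
"qemu-system-x86_64 -accel nvmm ...", assuming the usual resolution of
ACCEL_CLASS_NAME("nvmm").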

File Added: pkgsrc/emulators/qemu/patches/Attic/patch-target_i386_nvmm_meson.build
$NetBSD: patch-target_i386_nvmm_meson.build,v 1.1 2021/05/24 14:22:08 ryoon Exp $

--- target/i386/nvmm/meson.build.orig	2021-05-06 05:09:24.910385600 +0000
+++ target/i386/nvmm/meson.build
@@ -0,0 +1,8 @@
+i386_softmmu_ss.add(when: 'CONFIG_NVMM', if_true:
+  files(
+  'nvmm-all.c',
+  'nvmm-accel-ops.c',
+  )
+)
+
+i386_softmmu_ss.add(when: 'CONFIG_NVMM', if_true: nvmm)

File Added: pkgsrc/emulators/qemu/patches/Attic/patch-target_i386_nvmm_nvmm-accel-ops.c
$NetBSD: patch-target_i386_nvmm_nvmm-accel-ops.c,v 1.1 2021/05/24 14:22:08 ryoon Exp $

--- target/i386/nvmm/nvmm-accel-ops.c.orig	2021-05-06 05:09:24.910489458 +0000
+++ target/i386/nvmm/nvmm-accel-ops.c
@@ -0,0 +1,111 @@
+/*
+ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
+ *
+ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "sysemu/kvm_int.h"
+#include "qemu/main-loop.h"
+#include "sysemu/cpus.h"
+#include "qemu/guest-random.h"
+
+#include "sysemu/nvmm.h"
+#include "nvmm-accel-ops.h"
+
+static void *qemu_nvmm_cpu_thread_fn(void *arg)
+{
+    CPUState *cpu = arg;
+    int r;
+
+    assert(nvmm_enabled());
+
+    rcu_register_thread();
+
+    qemu_mutex_lock_iothread();
+    qemu_thread_get_self(cpu->thread);
+    cpu->thread_id = qemu_get_thread_id();
+    current_cpu = cpu;
+
+    r = nvmm_init_vcpu(cpu);
+    if (r < 0) {
+        fprintf(stderr, "nvmm_init_vcpu failed: %s\n", strerror(-r));
+        exit(1);
+    }
+
+    /* signal CPU creation */
+    cpu_thread_signal_created(cpu);
+    qemu_guest_random_seed_thread_part2(cpu->random_seed);
+
+    do {
+        if (cpu_can_run(cpu)) {
+            r = nvmm_vcpu_exec(cpu);
+            if (r == EXCP_DEBUG) {
+                cpu_handle_guest_debug(cpu);
+            }
+        }
+        while (cpu_thread_is_idle(cpu)) {
+            qemu_cond_wait_iothread(cpu->halt_cond);
+        }
+        qemu_wait_io_event_common(cpu);
+    } while (!cpu->unplug || cpu_can_run(cpu));
+
+    nvmm_destroy_vcpu(cpu);
+    cpu_thread_signal_destroyed(cpu);
+    qemu_mutex_unlock_iothread();
+    rcu_unregister_thread();
+    return NULL;
+}
+
+static void nvmm_start_vcpu_thread(CPUState *cpu)
+{
+    char thread_name[VCPU_THREAD_NAME_SIZE];
+
+    cpu->thread = g_malloc0(sizeof(QemuThread));
+    cpu->halt_cond = g_malloc0(sizeof(QemuCond));
+    qemu_cond_init(cpu->halt_cond);
+    snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/NVMM",
+             cpu->cpu_index);
+    qemu_thread_create(cpu->thread, thread_name, qemu_nvmm_cpu_thread_fn,
+                       cpu, QEMU_THREAD_JOINABLE);
+}
+
+/*
+ * Abort the call to run the virtual processor made by another thread, and
+ * return control to that thread.
+ */
+static void nvmm_kick_vcpu_thread(CPUState *cpu)
+{
+    cpu->exit_request = 1;
+    cpus_kick_thread(cpu);
+}
+
+static void nvmm_accel_ops_class_init(ObjectClass *oc, void *data)
+{
+    AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
+
+    ops->create_vcpu_thread = nvmm_start_vcpu_thread;
+    ops->kick_vcpu_thread = nvmm_kick_vcpu_thread;
+
+    ops->synchronize_post_reset = nvmm_cpu_synchronize_post_reset;
+    ops->synchronize_post_init = nvmm_cpu_synchronize_post_init;
+    ops->synchronize_state = nvmm_cpu_synchronize_state;
+    ops->synchronize_pre_loadvm = nvmm_cpu_synchronize_pre_loadvm;
+}
+
+static const TypeInfo nvmm_accel_ops_type = {
+    .name = ACCEL_OPS_NAME("nvmm"),
+
+    .parent = TYPE_ACCEL_OPS,
+    .class_init = nvmm_accel_ops_class_init,
+    .abstract = true,
+};
+
+static void nvmm_accel_ops_register_types(void)
+{
+    type_register_static(&nvmm_accel_ops_type);
+}
+type_init(nvmm_accel_ops_register_types);
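
Note (illustrative, not part of the patch): nvmm_kick_vcpu_thread() above
pairs with the SIG_IPI handler installed by nvmm_init_cpu_signals() in
nvmm-all.c.  The self-contained C sketch below shows the general pattern
under assumed names (SIGUSR1 stands in for SIG_IPI, vcpu_thread_fn for the
QEMU vCPU thread, pause() for nvmm_vcpu_run()); it is not QEMU code.

#include <pthread.h>
#include <signal.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static atomic_bool exit_request;

static void kick_handler(int sig)
{
    (void)sig;  /* Nothing to do: arrival alone interrupts the blocking call. */
}

static void *vcpu_thread_fn(void *arg)
{
    struct sigaction sa;
    (void)arg;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = kick_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);   /* SIGUSR1 plays the role of SIG_IPI */

    while (!atomic_load(&exit_request)) {
        /* Stands in for nvmm_vcpu_run(): a blocking call that the signal
         * interrupts, after which the loop re-checks exit_request. */
        pause();
    }
    puts("vcpu thread: exit_request seen, leaving the loop");
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, vcpu_thread_fn, NULL);
    sleep(1);

    /* The "kick": set the flag, then signal the thread
     * (cf. nvmm_kick_vcpu_thread() -> cpus_kick_thread()). */
    atomic_store(&exit_request, true);
    pthread_kill(tid, SIGUSR1);

    pthread_join(tid, NULL);
    return 0;
}

This toy version still has the classic check-then-block race; in the patch
that window is closed by nvmm_vcpu_stop()/NVMM_VCPU_EXIT_STOPPED together
with the smp_rmb()/smp_wmb() pair in nvmm_vcpu_loop().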

File Added: pkgsrc/emulators/qemu/patches/Attic/patch-target_i386_nvmm_nvmm-accel-ops.h
$NetBSD: patch-target_i386_nvmm_nvmm-accel-ops.h,v 1.1 2021/05/24 14:22:08 ryoon Exp $

--- target/i386/nvmm/nvmm-accel-ops.h.orig	2021-05-06 05:09:24.910599351 +0000
+++ target/i386/nvmm/nvmm-accel-ops.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
+ *
+ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef NVMM_CPUS_H
+#define NVMM_CPUS_H
+
+#include "sysemu/cpus.h"
+
+int nvmm_init_vcpu(CPUState *cpu);
+int nvmm_vcpu_exec(CPUState *cpu);
+void nvmm_destroy_vcpu(CPUState *cpu);
+
+void nvmm_cpu_synchronize_state(CPUState *cpu);
+void nvmm_cpu_synchronize_post_reset(CPUState *cpu);
+void nvmm_cpu_synchronize_post_init(CPUState *cpu);
+void nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu);
+
+#endif /* NVMM_CPUS_H */
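
Note (illustrative, not part of the patch): the synchronize hooks declared
above implement a "dirty registers" handshake: nvmm_get_registers() pulls
the state out of the kernel and marks cpu->vcpu_dirty, and the next pass of
nvmm_vcpu_loop() pushes the (possibly modified) state back with
nvmm_set_registers() before running the guest again.  The toy program below
sketches that handshake with made-up names (toy_cpu, toy_get_registers,
toy_set_registers); it is a paraphrase, not QEMU code.

#include <stdbool.h>
#include <stdio.h>

struct toy_cpu {
    bool vcpu_dirty;   /* mirrors CPUState::vcpu_dirty */
    long rip;          /* stand-in for the full register state */
};

static void toy_get_registers(struct toy_cpu *cpu)  /* cf. nvmm_get_registers */
{
    printf("pull state from the hypervisor\n");
    cpu->vcpu_dirty = true;    /* the QEMU-side copy is now authoritative */
}

static void toy_set_registers(struct toy_cpu *cpu)  /* cf. nvmm_set_registers */
{
    printf("push state to the hypervisor (rip=%ld)\n", cpu->rip);
    cpu->vcpu_dirty = false;   /* the kernel copy is authoritative again */
}

static void toy_vcpu_loop_iteration(struct toy_cpu *cpu)
{
    if (cpu->vcpu_dirty) {     /* same check as at the top of nvmm_vcpu_loop() */
        toy_set_registers(cpu);
    }
    printf("run the guest\n");
}

int main(void)
{
    struct toy_cpu cpu = { .vcpu_dirty = true, .rip = 0x1000 };

    toy_vcpu_loop_iteration(&cpu);   /* first entry flushes the initial state */
    toy_get_registers(&cpu);         /* e.g. synchronize_state for the monitor */
    cpu.rip = 0x2000;                /* QEMU code edits the cached registers */
    toy_vcpu_loop_iteration(&cpu);   /* the edit is pushed back before running */
    return 0;
}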

File Added: pkgsrc/emulators/qemu/patches/Attic/patch-target_i386_nvmm_nvmm-all.c
$NetBSD: patch-target_i386_nvmm_nvmm-all.c,v 1.1 2021/05/24 14:22:08 ryoon Exp $

--- target/i386/nvmm/nvmm-all.c.orig	2021-05-06 05:09:24.911125954 +0000
+++ target/i386/nvmm/nvmm-all.c
@@ -0,0 +1,1226 @@
+/*
+ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
+ *
+ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "exec/address-spaces.h"
+#include "exec/ioport.h"
+#include "qemu-common.h"
+#include "qemu/accel.h"
+#include "sysemu/nvmm.h"
+#include "sysemu/cpus.h"
+#include "sysemu/runstate.h"
+#include "qemu/main-loop.h"
+#include "qemu/error-report.h"
+#include "qapi/error.h"
+#include "qemu/queue.h"
+#include "migration/blocker.h"
+#include "strings.h"
+
+#include "nvmm-accel-ops.h"
+
+#include <nvmm.h>
+
+struct qemu_vcpu {
+    struct nvmm_vcpu vcpu;
+    uint8_t tpr;
+    bool stop;
+
+    /* Window-exiting for INTs/NMIs. */
+    bool int_window_exit;
+    bool nmi_window_exit;
+
+    /* The guest is in an interrupt shadow (POP SS, etc). */
+    bool int_shadow;
+};
+
+struct qemu_machine {
+    struct nvmm_capability cap;
+    struct nvmm_machine mach;
+};
+
+/* -------------------------------------------------------------------------- */
+
+static bool nvmm_allowed;
+static struct qemu_machine qemu_mach;
+
+static struct qemu_vcpu *
+get_qemu_vcpu(CPUState *cpu)
+{
+    return (struct qemu_vcpu *)cpu->hax_vcpu;
+}
+
+static struct nvmm_machine *
+get_nvmm_mach(void)
+{
+    return &qemu_mach.mach;
+}
+
+/* -------------------------------------------------------------------------- */
+
+static void
+nvmm_set_segment(struct nvmm_x64_state_seg *nseg, const SegmentCache *qseg)
+{
+    uint32_t attrib = qseg->flags;
+
+    nseg->selector = qseg->selector;
+    nseg->limit = qseg->limit;
+    nseg->base = qseg->base;
+    nseg->attrib.type = __SHIFTOUT(attrib, DESC_TYPE_MASK);
+    nseg->attrib.s = __SHIFTOUT(attrib, DESC_S_MASK);
+    nseg->attrib.dpl = __SHIFTOUT(attrib, DESC_DPL_MASK);
+    nseg->attrib.p = __SHIFTOUT(attrib, DESC_P_MASK);
+    nseg->attrib.avl = __SHIFTOUT(attrib, DESC_AVL_MASK);
+    nseg->attrib.l = __SHIFTOUT(attrib, DESC_L_MASK);
+    nseg->attrib.def = __SHIFTOUT(attrib, DESC_B_MASK);
+    nseg->attrib.g = __SHIFTOUT(attrib, DESC_G_MASK);
+}
+
+static void
+nvmm_set_registers(CPUState *cpu)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    struct nvmm_x64_state *state = vcpu->state;
+    uint64_t bitmap;
+    size_t i;
+    int ret;
+
+    assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu));
+
+    /* GPRs. */
+    state->gprs[NVMM_X64_GPR_RAX] = env->regs[R_EAX];
+    state->gprs[NVMM_X64_GPR_RCX] = env->regs[R_ECX];
+    state->gprs[NVMM_X64_GPR_RDX] = env->regs[R_EDX];
+    state->gprs[NVMM_X64_GPR_RBX] = env->regs[R_EBX];
+    state->gprs[NVMM_X64_GPR_RSP] = env->regs[R_ESP];
+    state->gprs[NVMM_X64_GPR_RBP] = env->regs[R_EBP];
+    state->gprs[NVMM_X64_GPR_RSI] = env->regs[R_ESI];
+    state->gprs[NVMM_X64_GPR_RDI] = env->regs[R_EDI];
+#ifdef TARGET_X86_64
+    state->gprs[NVMM_X64_GPR_R8]  = env->regs[R_R8];
+    state->gprs[NVMM_X64_GPR_R9]  = env->regs[R_R9];
+    state->gprs[NVMM_X64_GPR_R10] = env->regs[R_R10];
+    state->gprs[NVMM_X64_GPR_R11] = env->regs[R_R11];
+    state->gprs[NVMM_X64_GPR_R12] = env->regs[R_R12];
+    state->gprs[NVMM_X64_GPR_R13] = env->regs[R_R13];
+    state->gprs[NVMM_X64_GPR_R14] = env->regs[R_R14];
+    state->gprs[NVMM_X64_GPR_R15] = env->regs[R_R15];
+#endif
+
+    /* RIP and RFLAGS. */
+    state->gprs[NVMM_X64_GPR_RIP] = env->eip;
+    state->gprs[NVMM_X64_GPR_RFLAGS] = env->eflags;
+
+    /* Segments. */
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_CS], &env->segs[R_CS]);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_DS], &env->segs[R_DS]);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_ES], &env->segs[R_ES]);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_FS], &env->segs[R_FS]);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_GS], &env->segs[R_GS]);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_SS], &env->segs[R_SS]);
+
+    /* Special segments. */
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_GDT], &env->gdt);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_LDT], &env->ldt);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_TR], &env->tr);
+    nvmm_set_segment(&state->segs[NVMM_X64_SEG_IDT], &env->idt);
+
+    /* Control registers. */
+    state->crs[NVMM_X64_CR_CR0] = env->cr[0];
+    state->crs[NVMM_X64_CR_CR2] = env->cr[2];
+    state->crs[NVMM_X64_CR_CR3] = env->cr[3];
+    state->crs[NVMM_X64_CR_CR4] = env->cr[4];
+    state->crs[NVMM_X64_CR_CR8] = qcpu->tpr;
+    state->crs[NVMM_X64_CR_XCR0] = env->xcr0;
+
+    /* Debug registers. */
+    state->drs[NVMM_X64_DR_DR0] = env->dr[0];
+    state->drs[NVMM_X64_DR_DR1] = env->dr[1];
+    state->drs[NVMM_X64_DR_DR2] = env->dr[2];
+    state->drs[NVMM_X64_DR_DR3] = env->dr[3];
+    state->drs[NVMM_X64_DR_DR6] = env->dr[6];
+    state->drs[NVMM_X64_DR_DR7] = env->dr[7];
+
+    /* FPU. */
+    state->fpu.fx_cw = env->fpuc;
+    state->fpu.fx_sw = (env->fpus & ~0x3800) | ((env->fpstt & 0x7) << 11);
+    state->fpu.fx_tw = 0;
+    for (i = 0; i < 8; i++) {
+        state->fpu.fx_tw |= (!env->fptags[i]) << i;
+    }
+    state->fpu.fx_opcode = env->fpop;
+    state->fpu.fx_ip.fa_64 = env->fpip;
+    state->fpu.fx_dp.fa_64 = env->fpdp;
+    state->fpu.fx_mxcsr = env->mxcsr;
+    state->fpu.fx_mxcsr_mask = 0x0000FFFF;
+    assert(sizeof(state->fpu.fx_87_ac) == sizeof(env->fpregs));
+    memcpy(state->fpu.fx_87_ac, env->fpregs, sizeof(env->fpregs));
+    for (i = 0; i < CPU_NB_REGS; i++) {
+        memcpy(&state->fpu.fx_xmm[i].xmm_bytes[0],
+            &env->xmm_regs[i].ZMM_Q(0), 8);
+        memcpy(&state->fpu.fx_xmm[i].xmm_bytes[8],
+            &env->xmm_regs[i].ZMM_Q(1), 8);
+    }
+
+    /* MSRs. */
+    state->msrs[NVMM_X64_MSR_EFER] = env->efer;
+    state->msrs[NVMM_X64_MSR_STAR] = env->star;
+#ifdef TARGET_X86_64
+    state->msrs[NVMM_X64_MSR_LSTAR] = env->lstar;
+    state->msrs[NVMM_X64_MSR_CSTAR] = env->cstar;
+    state->msrs[NVMM_X64_MSR_SFMASK] = env->fmask;
+    state->msrs[NVMM_X64_MSR_KERNELGSBASE] = env->kernelgsbase;
+#endif
+    state->msrs[NVMM_X64_MSR_SYSENTER_CS]  = env->sysenter_cs;
+    state->msrs[NVMM_X64_MSR_SYSENTER_ESP] = env->sysenter_esp;
+    state->msrs[NVMM_X64_MSR_SYSENTER_EIP] = env->sysenter_eip;
+    state->msrs[NVMM_X64_MSR_PAT] = env->pat;
+    state->msrs[NVMM_X64_MSR_TSC] = env->tsc;
+
+    bitmap =
+        NVMM_X64_STATE_SEGS |
+        NVMM_X64_STATE_GPRS |
+        NVMM_X64_STATE_CRS  |
+        NVMM_X64_STATE_DRS  |
+        NVMM_X64_STATE_MSRS |
+        NVMM_X64_STATE_FPU;
+
+    ret = nvmm_vcpu_setstate(mach, vcpu, bitmap);
+    if (ret == -1) {
+        error_report("NVMM: Failed to set virtual processor context,"
+            " error=%d", errno);
+    }
+}
+
+static void
+nvmm_get_segment(SegmentCache *qseg, const struct nvmm_x64_state_seg *nseg)
+{
+    qseg->selector = nseg->selector;
+    qseg->limit = nseg->limit;
+    qseg->base = nseg->base;
+
+    qseg->flags =
+        __SHIFTIN((uint32_t)nseg->attrib.type, DESC_TYPE_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.s, DESC_S_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.dpl, DESC_DPL_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.p, DESC_P_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.avl, DESC_AVL_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.l, DESC_L_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.def, DESC_B_MASK) |
+        __SHIFTIN((uint32_t)nseg->attrib.g, DESC_G_MASK);
+}
+
+static void
+nvmm_get_registers(CPUState *cpu)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    struct nvmm_x64_state *state = vcpu->state;
+    uint64_t bitmap, tpr;
+    size_t i;
+    int ret;
+
+    assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu));
+
+    bitmap =
+        NVMM_X64_STATE_SEGS |
+        NVMM_X64_STATE_GPRS |
+        NVMM_X64_STATE_CRS  |
+        NVMM_X64_STATE_DRS  |
+        NVMM_X64_STATE_MSRS |
+        NVMM_X64_STATE_FPU;
+
+    ret = nvmm_vcpu_getstate(mach, vcpu, bitmap);
+    if (ret == -1) {
+        error_report("NVMM: Failed to get virtual processor context,"
+            " error=%d", errno);
+    }
+
+    /* GPRs. */
+    env->regs[R_EAX] = state->gprs[NVMM_X64_GPR_RAX];
+    env->regs[R_ECX] = state->gprs[NVMM_X64_GPR_RCX];
+    env->regs[R_EDX] = state->gprs[NVMM_X64_GPR_RDX];
+    env->regs[R_EBX] = state->gprs[NVMM_X64_GPR_RBX];
+    env->regs[R_ESP] = state->gprs[NVMM_X64_GPR_RSP];
+    env->regs[R_EBP] = state->gprs[NVMM_X64_GPR_RBP];
+    env->regs[R_ESI] = state->gprs[NVMM_X64_GPR_RSI];
+    env->regs[R_EDI] = state->gprs[NVMM_X64_GPR_RDI];
+#ifdef TARGET_X86_64
+    env->regs[R_R8]  = state->gprs[NVMM_X64_GPR_R8];
+    env->regs[R_R9]  = state->gprs[NVMM_X64_GPR_R9];
+    env->regs[R_R10] = state->gprs[NVMM_X64_GPR_R10];
+    env->regs[R_R11] = state->gprs[NVMM_X64_GPR_R11];
+    env->regs[R_R12] = state->gprs[NVMM_X64_GPR_R12];
+    env->regs[R_R13] = state->gprs[NVMM_X64_GPR_R13];
+    env->regs[R_R14] = state->gprs[NVMM_X64_GPR_R14];
+    env->regs[R_R15] = state->gprs[NVMM_X64_GPR_R15];
+#endif
+
+    /* RIP and RFLAGS. */
+    env->eip = state->gprs[NVMM_X64_GPR_RIP];
+    env->eflags = state->gprs[NVMM_X64_GPR_RFLAGS];
+
+    /* Segments. */
+    nvmm_get_segment(&env->segs[R_ES], &state->segs[NVMM_X64_SEG_ES]);
+    nvmm_get_segment(&env->segs[R_CS], &state->segs[NVMM_X64_SEG_CS]);
+    nvmm_get_segment(&env->segs[R_SS], &state->segs[NVMM_X64_SEG_SS]);
+    nvmm_get_segment(&env->segs[R_DS], &state->segs[NVMM_X64_SEG_DS]);
+    nvmm_get_segment(&env->segs[R_FS], &state->segs[NVMM_X64_SEG_FS]);
+    nvmm_get_segment(&env->segs[R_GS], &state->segs[NVMM_X64_SEG_GS]);
+
+    /* Special segments. */
+    nvmm_get_segment(&env->gdt, &state->segs[NVMM_X64_SEG_GDT]);
+    nvmm_get_segment(&env->ldt, &state->segs[NVMM_X64_SEG_LDT]);
+    nvmm_get_segment(&env->tr, &state->segs[NVMM_X64_SEG_TR]);
+    nvmm_get_segment(&env->idt, &state->segs[NVMM_X64_SEG_IDT]);
+
+    /* Control registers. */
+    env->cr[0] = state->crs[NVMM_X64_CR_CR0];
+    env->cr[2] = state->crs[NVMM_X64_CR_CR2];
+    env->cr[3] = state->crs[NVMM_X64_CR_CR3];
+    env->cr[4] = state->crs[NVMM_X64_CR_CR4];
+    tpr = state->crs[NVMM_X64_CR_CR8];
+    if (tpr != qcpu->tpr) {
+        qcpu->tpr = tpr;
+        cpu_set_apic_tpr(x86_cpu->apic_state, tpr);
+    }
+    env->xcr0 = state->crs[NVMM_X64_CR_XCR0];
+
+    /* Debug registers. */
+    env->dr[0] = state->drs[NVMM_X64_DR_DR0];
+    env->dr[1] = state->drs[NVMM_X64_DR_DR1];
+    env->dr[2] = state->drs[NVMM_X64_DR_DR2];
+    env->dr[3] = state->drs[NVMM_X64_DR_DR3];
+    env->dr[6] = state->drs[NVMM_X64_DR_DR6];
+    env->dr[7] = state->drs[NVMM_X64_DR_DR7];
+
+    /* FPU. */
+    env->fpuc = state->fpu.fx_cw;
+    env->fpstt = (state->fpu.fx_sw >> 11) & 0x7;
+    env->fpus = state->fpu.fx_sw & ~0x3800;
+    for (i = 0; i < 8; i++) {
+        env->fptags[i] = !((state->fpu.fx_tw >> i) & 1);
+    }
+    env->fpop = state->fpu.fx_opcode;
+    env->fpip = state->fpu.fx_ip.fa_64;
+    env->fpdp = state->fpu.fx_dp.fa_64;
+    env->mxcsr = state->fpu.fx_mxcsr;
+    assert(sizeof(state->fpu.fx_87_ac) == sizeof(env->fpregs));
+    memcpy(env->fpregs, state->fpu.fx_87_ac, sizeof(env->fpregs));
+    for (i = 0; i < CPU_NB_REGS; i++) {
+        memcpy(&env->xmm_regs[i].ZMM_Q(0),
+            &state->fpu.fx_xmm[i].xmm_bytes[0], 8);
+        memcpy(&env->xmm_regs[i].ZMM_Q(1),
+            &state->fpu.fx_xmm[i].xmm_bytes[8], 8);
+    }
+
+    /* MSRs. */
+    env->efer = state->msrs[NVMM_X64_MSR_EFER];
+    env->star = state->msrs[NVMM_X64_MSR_STAR];
+#ifdef TARGET_X86_64
+    env->lstar = state->msrs[NVMM_X64_MSR_LSTAR];
+    env->cstar = state->msrs[NVMM_X64_MSR_CSTAR];
+    env->fmask = state->msrs[NVMM_X64_MSR_SFMASK];
+    env->kernelgsbase = state->msrs[NVMM_X64_MSR_KERNELGSBASE];
+#endif
+    env->sysenter_cs  = state->msrs[NVMM_X64_MSR_SYSENTER_CS];
+    env->sysenter_esp = state->msrs[NVMM_X64_MSR_SYSENTER_ESP];
+    env->sysenter_eip = state->msrs[NVMM_X64_MSR_SYSENTER_EIP];
+    env->pat = state->msrs[NVMM_X64_MSR_PAT];
+    env->tsc = state->msrs[NVMM_X64_MSR_TSC];
+
+    x86_update_hflags(env);
+}
+
+static bool
+nvmm_can_take_int(CPUState *cpu)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    struct nvmm_machine *mach = get_nvmm_mach();
+
+    if (qcpu->int_window_exit) {
+        return false;
+    }
+
+    if (qcpu->int_shadow || !(env->eflags & IF_MASK)) {
+        struct nvmm_x64_state *state = vcpu->state;
+
+        /* Exit on interrupt window. */
+        nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_INTR);
+        state->intr.int_window_exiting = 1;
+        nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_INTR);
+
+        return false;
+    }
+
+    return true;
+}
+
+static bool
+nvmm_can_take_nmi(CPUState *cpu)
+{
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+
+    /*
+     * Contrary to INTs, NMIs always schedule an exit when they are
+     * completed. Therefore, if window-exiting is enabled, it means
+     * NMIs are blocked.
+     */
+    if (qcpu->nmi_window_exit) {
+        return false;
+    }
+
+    return true;
+}
+
+/*
+ * Called before the VCPU is run. We inject events generated by the I/O
+ * thread, and synchronize the guest TPR.
+ */
+static void
+nvmm_vcpu_pre_run(CPUState *cpu)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    struct nvmm_x64_state *state = vcpu->state;
+    struct nvmm_vcpu_event *event = vcpu->event;
+    bool has_event = false;
+    bool sync_tpr = false;
+    uint8_t tpr;
+    int ret;
+
+    qemu_mutex_lock_iothread();
+
+    tpr = cpu_get_apic_tpr(x86_cpu->apic_state);
+    if (tpr != qcpu->tpr) {
+        qcpu->tpr = tpr;
+        sync_tpr = true;
+    }
+
+    /*
+     * Force the VCPU out of its inner loop to process any INIT requests
+     * or commit pending TPR access.
+     */
+    if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
+        cpu->exit_request = 1;
+    }
+
+    if (!has_event && (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        if (nvmm_can_take_nmi(cpu)) {
+            cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
+            event->type = NVMM_VCPU_EVENT_INTR;
+            event->vector = 2;
+            has_event = true;
+        }
+    }
+
+    if (!has_event && (cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
+        if (nvmm_can_take_int(cpu)) {
+            cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+            event->type = NVMM_VCPU_EVENT_INTR;
+            event->vector = cpu_get_pic_interrupt(env);
+            has_event = true;
+        }
+    }
+
+    /* Don't want SMIs. */
+    if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
+        cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
+    }
+
+    if (sync_tpr) {
+        ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_CRS);
+        if (ret == -1) {
+            error_report("NVMM: Failed to get CPU state,"
+                " error=%d", errno);
+        }
+
+        state->crs[NVMM_X64_CR_CR8] = qcpu->tpr;
+
+        ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_CRS);
+        if (ret == -1) {
+            error_report("NVMM: Failed to set CPU state,"
+                " error=%d", errno);
+        }
+    }
+
+    if (has_event) {
+        ret = nvmm_vcpu_inject(mach, vcpu);
+        if (ret == -1) {
+            error_report("NVMM: Failed to inject event,"
+                " error=%d", errno);
+        }
+    }
+
+    qemu_mutex_unlock_iothread();
+}
+
+/*
+ * Called after the VCPU ran. We synchronize the host view of the TPR and
+ * RFLAGS.
+ */
+static void
+nvmm_vcpu_post_run(CPUState *cpu, struct nvmm_vcpu_exit *exit)
+{
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    uint64_t tpr;
+
+    env->eflags = exit->exitstate.rflags;
+    qcpu->int_shadow = exit->exitstate.int_shadow;
+    qcpu->int_window_exit = exit->exitstate.int_window_exiting;
+    qcpu->nmi_window_exit = exit->exitstate.nmi_window_exiting;
+
+    tpr = exit->exitstate.cr8;
+    if (qcpu->tpr != tpr) {
+        qcpu->tpr = tpr;
+        qemu_mutex_lock_iothread();
+        cpu_set_apic_tpr(x86_cpu->apic_state, qcpu->tpr);
+        qemu_mutex_unlock_iothread();
+    }
+}
+
+/* -------------------------------------------------------------------------- */
+
+static void
+nvmm_io_callback(struct nvmm_io *io)
+{
+    MemTxAttrs attrs = { 0 };
+    int ret;
+
+    ret = address_space_rw(&address_space_io, io->port, attrs, io->data,
+        io->size, !io->in);
+    if (ret != MEMTX_OK) {
+        error_report("NVMM: I/O Transaction Failed "
+            "[%s, port=%u, size=%zu]", (io->in ? "in" : "out"),
+            io->port, io->size);
+    }
+
+    /* Needed, otherwise infinite loop. */
+    current_cpu->vcpu_dirty = false;
+}
+
+static void
+nvmm_mem_callback(struct nvmm_mem *mem)
+{
+    cpu_physical_memory_rw(mem->gpa, mem->data, mem->size, mem->write);
+
+    /* Needed, otherwise infinite loop. */
+    current_cpu->vcpu_dirty = false;
+}
+
+static struct nvmm_assist_callbacks nvmm_callbacks = {
+    .io = nvmm_io_callback,
+    .mem = nvmm_mem_callback
+};
+
+/* -------------------------------------------------------------------------- */
+
+static int
+nvmm_handle_mem(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
+{
+    int ret;
+
+    ret = nvmm_assist_mem(mach, vcpu);
+    if (ret == -1) {
+        error_report("NVMM: Mem Assist Failed [gpa=%p]",
+            (void *)vcpu->exit->u.mem.gpa);
+    }
+
+    return ret;
+}
+
+static int
+nvmm_handle_io(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
+{
+    int ret;
+
+    ret = nvmm_assist_io(mach, vcpu);
+    if (ret == -1) {
+        error_report("NVMM: I/O Assist Failed [port=%d]",
+            (int)vcpu->exit->u.io.port);
+    }
+
+    return ret;
+}
+
+static int
+nvmm_handle_rdmsr(struct nvmm_machine *mach, CPUState *cpu,
+    struct nvmm_vcpu_exit *exit)
+{
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    struct nvmm_x64_state *state = vcpu->state;
+    uint64_t val;
+    int ret;
+
+    switch (exit->u.rdmsr.msr) {
+    case MSR_IA32_APICBASE:
+        val = cpu_get_apic_base(x86_cpu->apic_state);
+        break;
+    case MSR_MTRRcap:
+    case MSR_MTRRdefType:
+    case MSR_MCG_CAP:
+    case MSR_MCG_STATUS:
+        val = 0;
+        break;
+    default: /* More MSRs to add? */
+        val = 0;
+        error_report("NVMM: Unexpected RDMSR 0x%x, ignored",
+            exit->u.rdmsr.msr);
+        break;
+    }
+
+    ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_GPRS);
+    if (ret == -1) {
+        return -1;
+    }
+
+    state->gprs[NVMM_X64_GPR_RAX] = (val & 0xFFFFFFFF);
+    state->gprs[NVMM_X64_GPR_RDX] = (val >> 32);
+    state->gprs[NVMM_X64_GPR_RIP] = exit->u.rdmsr.npc;
+
+    ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_GPRS);
+    if (ret == -1) {
+        return -1;
+    }
+
+    return 0;
+}
+
+static int
+nvmm_handle_wrmsr(struct nvmm_machine *mach, CPUState *cpu,
+    struct nvmm_vcpu_exit *exit)
+{
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    struct nvmm_x64_state *state = vcpu->state;
+    uint64_t val;
+    int ret;
+
+    val = exit->u.wrmsr.val;
+
+    switch (exit->u.wrmsr.msr) {
+    case MSR_IA32_APICBASE:
+        cpu_set_apic_base(x86_cpu->apic_state, val);
+        break;
+    case MSR_MTRRdefType:
+    case MSR_MCG_STATUS:
+        break;
+    default: /* More MSRs to add? */
+        error_report("NVMM: Unexpected WRMSR 0x%x [val=0x%lx], ignored",
+            exit->u.wrmsr.msr, val);
+        break;
+    }
+
+    ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_GPRS);
+    if (ret == -1) {
+        return -1;
+    }
+
+    state->gprs[NVMM_X64_GPR_RIP] = exit->u.wrmsr.npc;
+
+    ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_GPRS);
+    if (ret == -1) {
+        return -1;
+    }
+
+    return 0;
+}
+
+static int
+nvmm_handle_halted(struct nvmm_machine *mach, CPUState *cpu,
+    struct nvmm_vcpu_exit *exit)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    int ret = 0;
+
+    qemu_mutex_lock_iothread();
+
+    if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+          (env->eflags & IF_MASK)) &&
+        !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        cpu->exception_index = EXCP_HLT;
+        cpu->halted = true;
+        ret = 1;
+    }
+
+    qemu_mutex_unlock_iothread();
+
+    return ret;
+}
+
+static int
+nvmm_inject_ud(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
+{
+    struct nvmm_vcpu_event *event = vcpu->event;
+
+    event->type = NVMM_VCPU_EVENT_EXCP;
+    event->vector = 6;
+    event->u.excp.error = 0;
+
+    return nvmm_vcpu_inject(mach, vcpu);
+}
+
+static int
+nvmm_vcpu_loop(CPUState *cpu)
+{
+    struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    struct nvmm_vcpu_exit *exit = vcpu->exit;
+    int ret;
+
+    /*
+     * Some asynchronous events must be handled outside of the inner
+     * VCPU loop. They are handled here.
+     */
+    if (cpu->interrupt_request & CPU_INTERRUPT_INIT) {
+        nvmm_cpu_synchronize_state(cpu);
+        do_cpu_init(x86_cpu);
+        /* set int/nmi windows back to the reset state */
+    }
+    if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
+        cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
+        apic_poll_irq(x86_cpu->apic_state);
+    }
+    if (((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
+         (env->eflags & IF_MASK)) ||
+        (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
+        cpu->halted = false;
+    }
+    if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
+        nvmm_cpu_synchronize_state(cpu);
+        do_cpu_sipi(x86_cpu);
+    }
+    if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
+        cpu->interrupt_request &= ~CPU_INTERRUPT_TPR;
+        nvmm_cpu_synchronize_state(cpu);
+        apic_handle_tpr_access_report(x86_cpu->apic_state, env->eip,
+            env->tpr_access_type);
+    }
+
+    if (cpu->halted) {
+        cpu->exception_index = EXCP_HLT;
+        qatomic_set(&cpu->exit_request, false);
+        return 0;
+    }
+
+    qemu_mutex_unlock_iothread();
+    cpu_exec_start(cpu);
+
+    /*
+     * Inner VCPU loop.
+     */
+    do {
+        if (cpu->vcpu_dirty) {
+            nvmm_set_registers(cpu);
+            cpu->vcpu_dirty = false;
+        }
+
+        if (qcpu->stop) {
+            cpu->exception_index = EXCP_INTERRUPT;
+            qcpu->stop = false;
+            ret = 1;
+            break;
+        }
+
+        nvmm_vcpu_pre_run(cpu);
+
+        if (qatomic_read(&cpu->exit_request)) {
+            nvmm_vcpu_stop(vcpu);
+        }
+
+        /* Read exit_request before the kernel reads the immediate exit flag */
+        smp_rmb();
+        ret = nvmm_vcpu_run(mach, vcpu);
+        if (ret == -1) {
+            error_report("NVMM: Failed to exec a virtual processor,"
+                " error=%d", errno);
+            break;
+        }
+
+        nvmm_vcpu_post_run(cpu, exit);
+
+        switch (exit->reason) {
+        case NVMM_VCPU_EXIT_NONE:
+            break;
+        case NVMM_VCPU_EXIT_STOPPED:
+            /*
+             * The kernel cleared the immediate exit flag; cpu->exit_request
+             * must be cleared after
+             */
+            smp_wmb();
+            qcpu->stop = true;
+            break;
+        case NVMM_VCPU_EXIT_MEMORY:
+            ret = nvmm_handle_mem(mach, vcpu);
+            break;
+        case NVMM_VCPU_EXIT_IO:
+            ret = nvmm_handle_io(mach, vcpu);
+            break;
+        case NVMM_VCPU_EXIT_INT_READY:
+        case NVMM_VCPU_EXIT_NMI_READY:
+        case NVMM_VCPU_EXIT_TPR_CHANGED:
+            break;
+        case NVMM_VCPU_EXIT_HALTED:
+            ret = nvmm_handle_halted(mach, cpu, exit);
+            break;
+        case NVMM_VCPU_EXIT_SHUTDOWN:
+            qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
+            cpu->exception_index = EXCP_INTERRUPT;
+            ret = 1;
+            break;
+        case NVMM_VCPU_EXIT_RDMSR:
+            ret = nvmm_handle_rdmsr(mach, cpu, exit);
+            break;
+        case NVMM_VCPU_EXIT_WRMSR:
+            ret = nvmm_handle_wrmsr(mach, cpu, exit);
+            break;
+        case NVMM_VCPU_EXIT_MONITOR:
+        case NVMM_VCPU_EXIT_MWAIT:
+            ret = nvmm_inject_ud(mach, vcpu);
+            break;
+        default:
+            error_report("NVMM: Unexpected VM exit code 0x%lx [hw=0x%lx]",
+                exit->reason, exit->u.inv.hwcode);
+            nvmm_get_registers(cpu);
+            qemu_mutex_lock_iothread();
+            qemu_system_guest_panicked(cpu_get_crash_info(cpu));
+            qemu_mutex_unlock_iothread();
+            ret = -1;
+            break;
+        }
+    } while (ret == 0);
+
+    cpu_exec_end(cpu);
+    qemu_mutex_lock_iothread();
+
+    qatomic_set(&cpu->exit_request, false);
+
+    return ret < 0;
+}
+
+/* -------------------------------------------------------------------------- */
+
+static void
+do_nvmm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
+{
+    nvmm_get_registers(cpu);
+    cpu->vcpu_dirty = true;
+}
+
+static void
+do_nvmm_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg)
+{
+    nvmm_set_registers(cpu);
+    cpu->vcpu_dirty = false;
+}
+
+static void
+do_nvmm_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg)
+{
+    nvmm_set_registers(cpu);
+    cpu->vcpu_dirty = false;
+}
+
+static void
+do_nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu, run_on_cpu_data arg)
+{
+    cpu->vcpu_dirty = true;
+}
+
+void nvmm_cpu_synchronize_state(CPUState *cpu)
+{
+    if (!cpu->vcpu_dirty) {
+        run_on_cpu(cpu, do_nvmm_cpu_synchronize_state, RUN_ON_CPU_NULL);
+    }
+}
+
+void nvmm_cpu_synchronize_post_reset(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_nvmm_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
+}
+
+void nvmm_cpu_synchronize_post_init(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_nvmm_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
+}
+
+void nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_nvmm_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
+}
+
+/* -------------------------------------------------------------------------- */
+
+static Error *nvmm_migration_blocker;
+
+/*
+ * The nvmm_vcpu_stop() mechanism breaks races between entering the VMM
+ * and another thread signaling the vCPU thread to exit.
+ */
+
+static void
+nvmm_ipi_signal(int sigcpu)
+{
+    if (current_cpu) {
+        struct qemu_vcpu *qcpu = get_qemu_vcpu(current_cpu);
+        struct nvmm_vcpu *vcpu = &qcpu->vcpu;
+        nvmm_vcpu_stop(vcpu);
+    }
+}
+
+static void
+nvmm_init_cpu_signals(void)
+{
+    struct sigaction sigact;
+    sigset_t set;
+
+    /* Install the IPI handler. */
+    memset(&sigact, 0, sizeof(sigact));
+    sigact.sa_handler = nvmm_ipi_signal;
+    sigaction(SIG_IPI, &sigact, NULL);
+
+    /* Allow IPIs on the current thread. */
+    sigprocmask(SIG_BLOCK, NULL, &set);
+    sigdelset(&set, SIG_IPI);
+    pthread_sigmask(SIG_SETMASK, &set, NULL);
+}
+
+int
+nvmm_init_vcpu(CPUState *cpu)
+{
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct nvmm_vcpu_conf_cpuid cpuid;
+    struct nvmm_vcpu_conf_tpr tpr;
+    Error *local_error = NULL;
+    struct qemu_vcpu *qcpu;
+    int ret, err;
+
+    nvmm_init_cpu_signals();
+
+    if (nvmm_migration_blocker == NULL) {
+        error_setg(&nvmm_migration_blocker,
+            "NVMM: Migration not supported");
+
+        (void)migrate_add_blocker(nvmm_migration_blocker, &local_error);
+        if (local_error) {
+            error_report_err(local_error);
+            migrate_del_blocker(nvmm_migration_blocker);
+            error_free(nvmm_migration_blocker);
+            return -EINVAL;
+        }
+    }
+
+    qcpu = g_malloc0(sizeof(*qcpu));
+    if (qcpu == NULL) {
+        error_report("NVMM: Failed to allocate VCPU context.");
+        return -ENOMEM;
+    }
+
+    ret = nvmm_vcpu_create(mach, cpu->cpu_index, &qcpu->vcpu);
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Failed to create a virtual processor,"
+            " error=%d", err);
+        g_free(qcpu);
+        return -err;
+    }
+
+    memset(&cpuid, 0, sizeof(cpuid));
+    cpuid.mask = 1;
+    cpuid.leaf = 0x00000001;
+    cpuid.u.mask.set.edx = CPUID_MCE | CPUID_MCA | CPUID_MTRR;
+    ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_CPUID,
+        &cpuid);
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Failed to configure a virtual processor,"
+            " error=%d", err);
+        g_free(qcpu);
+        return -err;
+    }
+
+    ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_CALLBACKS,
+        &nvmm_callbacks);
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Failed to configure a virtual processor,"
+            " error=%d", err);
+        g_free(qcpu);
+        return -err;
+    }
+
+    if (qemu_mach.cap.arch.vcpu_conf_support & NVMM_CAP_ARCH_VCPU_CONF_TPR) {
+        memset(&tpr, 0, sizeof(tpr));
+        tpr.exit_changed = 1;
+        ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_TPR, &tpr);
+        if (ret == -1) {
+            err = errno;
+            error_report("NVMM: Failed to configure a virtual processor,"
+                " error=%d", err);
+            g_free(qcpu);
+            return -err;
+        }
+    }
+
+    cpu->vcpu_dirty = true;
+    cpu->hax_vcpu = (struct hax_vcpu_state *)qcpu;
+
+    return 0;
+}
+
+int
+nvmm_vcpu_exec(CPUState *cpu)
+{
+    int ret, fatal;
+
+    while (1) {
+        if (cpu->exception_index >= EXCP_INTERRUPT) {
+            ret = cpu->exception_index;
+            cpu->exception_index = -1;
+            break;
+        }
+
+        fatal = nvmm_vcpu_loop(cpu);
+
+        if (fatal) {
+            error_report("NVMM: Failed to execute a VCPU.");
+            abort();
+        }
+    }
+
+    return ret;
+}
+
+void
+nvmm_destroy_vcpu(CPUState *cpu)
+{
+    struct nvmm_machine *mach = get_nvmm_mach();
+    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+
+    nvmm_vcpu_destroy(mach, &qcpu->vcpu);
+    g_free(cpu->hax_vcpu);
+}
+
+/* -------------------------------------------------------------------------- */
+
+static void
+nvmm_update_mapping(hwaddr start_pa, ram_addr_t size, uintptr_t hva,
+    bool add, bool rom, const char *name)
+{
+    struct nvmm_machine *mach = get_nvmm_mach();
+    int ret, prot;
+
+    if (add) {
+        prot = PROT_READ | PROT_EXEC;
+        if (!rom) {
+            prot |= PROT_WRITE;
+        }
+        ret = nvmm_gpa_map(mach, hva, start_pa, size, prot);
+    } else {
+        ret = nvmm_gpa_unmap(mach, hva, start_pa, size);
+    }
+
+    if (ret == -1) {
+        error_report("NVMM: Failed to %s GPA range '%s' PA:%p, "
+            "Size:%p bytes, HostVA:%p, error=%d",
+            (add ? "map" : "unmap"), name, (void *)(uintptr_t)start_pa,
+            (void *)size, (void *)hva, errno);
+    }
+}
+
+static void
+nvmm_process_section(MemoryRegionSection *section, int add)
+{
+    MemoryRegion *mr = section->mr;
+    hwaddr start_pa = section->offset_within_address_space;
+    ram_addr_t size = int128_get64(section->size);
+    unsigned int delta;
+    uintptr_t hva;
+
+    if (!memory_region_is_ram(mr)) {
+        return;
+    }
+
+    /* Adjust start_pa and size so that they are page-aligned. */
+    delta = qemu_real_host_page_size - (start_pa & ~qemu_real_host_page_mask);
+    delta &= ~qemu_real_host_page_mask;
+    if (delta > size) {
+        return;
+    }
+    start_pa += delta;
+    size -= delta;
+    size &= qemu_real_host_page_mask;
+    if (!size || (start_pa & ~qemu_real_host_page_mask)) {
+        return;
+    }
+
+    hva = (uintptr_t)memory_region_get_ram_ptr(mr) +
+        section->offset_within_region + delta;
+
+    nvmm_update_mapping(start_pa, size, hva, add,
+        memory_region_is_rom(mr), mr->name);
+}
+
+static void
+nvmm_region_add(MemoryListener *listener, MemoryRegionSection *section)
+{
+    memory_region_ref(section->mr);
+    nvmm_process_section(section, 1);
+}
+
+static void
+nvmm_region_del(MemoryListener *listener, MemoryRegionSection *section)
+{
+    nvmm_process_section(section, 0);
+    memory_region_unref(section->mr);
+}
+
+static void
+nvmm_transaction_begin(MemoryListener *listener)
+{
+    /* nothing */
+}
+
+static void
+nvmm_transaction_commit(MemoryListener *listener)
+{
+    /* nothing */
+}
+
+static void
+nvmm_log_sync(MemoryListener *listener, MemoryRegionSection *section)
+{
+    MemoryRegion *mr = section->mr;
+
+    if (!memory_region_is_ram(mr)) {
+        return;
+    }
+
+    memory_region_set_dirty(mr, 0, int128_get64(section->size));
+}
+
+static MemoryListener nvmm_memory_listener = {
+    .begin = nvmm_transaction_begin,
+    .commit = nvmm_transaction_commit,
+    .region_add = nvmm_region_add,
+    .region_del = nvmm_region_del,
+    .log_sync = nvmm_log_sync,
+    .priority = 10,
+};
+
+static void
+nvmm_ram_block_added(RAMBlockNotifier *n, void *host, size_t size)
+{
+    struct nvmm_machine *mach = get_nvmm_mach();
+    uintptr_t hva = (uintptr_t)host;
+    int ret;
+
+    ret = nvmm_hva_map(mach, hva, size);
+
+    if (ret == -1) {
+        error_report("NVMM: Failed to map HVA, HostVA:%p "
+            "Size:%p bytes, error=%d",
+            (void *)hva, (void *)size, errno);
+    }
+}
+
+static struct RAMBlockNotifier nvmm_ram_notifier = {
+    .ram_block_added = nvmm_ram_block_added
+};
+
+/* -------------------------------------------------------------------------- */
+
+static int
+nvmm_accel_init(MachineState *ms)
+{
+    int ret, err;
+
+    ret = nvmm_init();
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Initialization failed, error=%d", errno);
+        return -err;
+    }
+
+    ret = nvmm_capability(&qemu_mach.cap);
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Unable to fetch capability, error=%d", errno);
+        return -err;
+    }
+    if (qemu_mach.cap.version < NVMM_KERN_VERSION) {
+        error_report("NVMM: Unsupported version %u", qemu_mach.cap.version);
+        return -EPROGMISMATCH;
+    }
+    if (qemu_mach.cap.state_size != sizeof(struct nvmm_x64_state)) {
+        error_report("NVMM: Wrong state size %u", qemu_mach.cap.state_size);
+        return -EPROGMISMATCH;
+    }
+
+    ret = nvmm_machine_create(&qemu_mach.mach);
+    if (ret == -1) {
+        err = errno;
+        error_report("NVMM: Machine creation failed, error=%d", errno);
+        return -err;
+    }
+
+    memory_listener_register(&nvmm_memory_listener, &address_space_memory);
+    ram_block_notifier_add(&nvmm_ram_notifier);
+
+    printf("NetBSD Virtual Machine Monitor accelerator is operational\n");
+    return 0;
+}
+
+int
+nvmm_enabled(void)
+{
+    return nvmm_allowed;
+}
+
+static void
+nvmm_accel_class_init(ObjectClass *oc, void *data)
+{
+    AccelClass *ac = ACCEL_CLASS(oc);
+    ac->name = "NVMM";
+    ac->init_machine = nvmm_accel_init;
+    ac->allowed = &nvmm_allowed;
+}
+
+static const TypeInfo nvmm_accel_type = {
+    .name = ACCEL_CLASS_NAME("nvmm"),
+    .parent = TYPE_ACCEL,
+    .class_init = nvmm_accel_class_init,
+};
+
+static void
+nvmm_type_init(void)
+{
+    type_register_static(&nvmm_accel_type);
+}
+
+type_init(nvmm_type_init);

File Deleted: pkgsrc/emulators/qemu/patches/Attic/patch-accel_stubs_nvmm-stub.c

File Deleted: pkgsrc/emulators/qemu/patches/Attic/patch-target_i386_helper.c

cvs diff -r1.31 -r1.32 pkgsrc/emulators/qemu/patches/Attic/patch-configure

--- pkgsrc/emulators/qemu/patches/Attic/patch-configure 2021/03/06 11:19:34 1.31
+++ pkgsrc/emulators/qemu/patches/Attic/patch-configure 2021/05/24 14:22:08 1.32
@@ -1,43 +1,40 @@
-$NetBSD: patch-configure,v 1.31 2021/03/06 11:19:34 reinoud Exp $
+$NetBSD: patch-configure,v 1.32 2021/05/24 14:22:08 ryoon Exp $
 
-Add NVMM support.
-Fix jemalloc detection.
-
---- configure.orig 2020-12-08 16:59:44.000000000 +0000
+--- configure.orig 2021-04-29 17:18:59.000000000 +0000
 +++ configure
-@@ -334,6 +334,7 @@ vhost_user_fs=""
- kvm="auto"
+@@ -352,6 +352,7 @@ kvm="auto"
  hax="auto"
  hvf="auto"
-+nvmm="auto"
  whpx="auto"
- rdma=""
- pvrdma=""
-@@ -1102,6 +1103,10 @@ for opt do
++nvmm="auto"
+ rdma="$default_feature"
+ pvrdma="$default_feature"
+ gprof="no"
+@@ -1107,6 +1108,10 @@ for opt do
  ;;
  --enable-hvf) hvf="enabled"
  ;;
 + --disable-nvmm) nvmm="disabled"
 + ;;
 + --enable-nvmm) nvmm="enabled"
 + ;;
  --disable-whpx) whpx="disabled"
  ;;
  --enable-whpx) whpx="enabled"
-@@ -1783,6 +1788,7 @@ disabled with --disable-FEATURE, default
+@@ -1848,6 +1853,7 @@ disabled with --disable-FEATURE, default
  kvm KVM acceleration support
  hax HAX acceleration support
  hvf Hypervisor.framework acceleration support
 + nvmm NVMM acceleration support
  whpx Windows Hypervisor Platform acceleration support
  rdma Enable RDMA-based migration
  pvrdma Enable PVRDMA support
-@@ -7005,7 +7011,7 @@ NINJA=$ninja $meson setup \
- ${staticpic:+-Db_staticpic=$staticpic} \
+@@ -6410,7 +6416,7 @@ NINJA=$ninja $meson setup \
  -Db_coverage=$(if test "$gcov" = yes; then echo true; else echo false; fi) \
+ -Db_lto=$lto -Dcfi=$cfi -Dcfi_debug=$cfi_debug \
  -Dmalloc=$malloc -Dmalloc_trim=$malloc_trim -Dsparse=$sparse \
 - -Dkvm=$kvm -Dhax=$hax -Dwhpx=$whpx -Dhvf=$hvf \
 + -Dkvm=$kvm -Dhax=$hax -Dwhpx=$whpx -Dhvf=$hvf -Dnvmm=$nvmm \
  -Dxen=$xen -Dxen_pci_passthrough=$xen_pci_passthrough -Dtcg=$tcg \
- -Dcocoa=$cocoa -Dmpath=$mpath -Dsdl=$sdl -Dsdl_image=$sdl_image \
+ -Dcocoa=$cocoa -Dgtk=$gtk -Dmpath=$mpath -Dsdl=$sdl -Dsdl_image=$sdl_image \
  -Dvnc=$vnc -Dvnc_sasl=$vnc_sasl -Dvnc_jpeg=$vnc_jpeg -Dvnc_png=$vnc_png \

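The refreshed configure patch keeps the NVMM switches, so the accelerator can still be
requested explicitly when building QEMU 6.0.0 by hand on a NetBSD host. A minimal
sketch, assuming a manual build outside pkgsrc (pkgsrc normally drives this through
the package Makefile) and GNU make installed as gmake:

  $ ./configure --enable-nvmm    # or --disable-nvmm to opt out
  $ gmake
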
File Deleted: pkgsrc/emulators/qemu/patches/Attic/patch-contrib_ivshmem-client_ivshmem-client.c

File Deleted: pkgsrc/emulators/qemu/patches/Attic/patch-contrib_ivshmem-server_ivshmem-server.c

File Deleted: pkgsrc/emulators/qemu/patches/Attic/patch-include_sysemu_hw_accel.h

File Deleted: pkgsrc/emulators/qemu/patches/Attic/patch-target_i386_kvm-stub.c

File Deleted: pkgsrc/emulators/qemu/patches/Attic/patch-target_i386_nvmm_cpus.c

File Deleted: pkgsrc/emulators/qemu/patches/Attic/patch-target_i386_nvmm_cpus.h

cvs diff -r1.1 -r1.2 pkgsrc/emulators/qemu/patches/patch-hw_mips_meson.build

--- pkgsrc/emulators/qemu/patches/patch-hw_mips_meson.build 2021/02/20 22:59:29 1.1
+++ pkgsrc/emulators/qemu/patches/patch-hw_mips_meson.build 2021/05/24 14:22:08 1.2
@@ -1,13 +1,13 @@
-$NetBSD: patch-hw_mips_meson.build,v 1.1 2021/02/20 22:59:29 ryoon Exp $
+$NetBSD: patch-hw_mips_meson.build,v 1.2 2021/05/24 14:22:08 ryoon Exp $
 
---- hw/mips/meson.build.orig 2020-12-08 16:59:44.000000000 +0000
+--- hw/mips/meson.build.orig 2021-04-29 17:18:58.000000000 +0000
 +++ hw/mips/meson.build
-@@ -3,7 +3,7 @@ mips_ss.add(files('addr.c', 'mips_int.c'
- mips_ss.add(when: 'CONFIG_FULOONG', if_true: files('fuloong2e.c'))
+@@ -5,7 +5,7 @@ mips_ss.add(when: 'CONFIG_FULOONG', if_t
+ mips_ss.add(when: 'CONFIG_LOONGSON3V', if_true: files('loongson3_bootp.c', 'loongson3_virt.c'))
  mips_ss.add(when: 'CONFIG_JAZZ', if_true: files('jazz.c'))
  mips_ss.add(when: 'CONFIG_MALTA', if_true: files('gt64xxx_pci.c', 'malta.c'))
 -mips_ss.add(when: 'CONFIG_MIPSSIM', if_true: files('mipssim.c'))
 +mips_ss.add(when: 'CONFIG_MIPSSIM', if_true: files('mipssim.c', 'mipssim_virtio.c'))
  mips_ss.add(when: 'CONFIG_MIPS_BOSTON', if_true: [files('boston.c'), fdt])
  mips_ss.add(when: 'CONFIG_MIPS_CPS', if_true: files('cps.c'))
 

cvs diff -r1.1 -r1.2 pkgsrc/emulators/qemu/patches/Attic/patch-meson__options.txt

--- pkgsrc/emulators/qemu/patches/Attic/patch-meson__options.txt 2021/03/06 11:19:34 1.1
+++ pkgsrc/emulators/qemu/patches/Attic/patch-meson__options.txt 2021/05/24 14:22:08 1.2
@@ -1,13 +1,13 @@
-$NetBSD: patch-meson__options.txt,v 1.1 2021/03/06 11:19:34 reinoud Exp $
+$NetBSD: patch-meson__options.txt,v 1.2 2021/05/24 14:22:08 ryoon Exp $
 
---- meson_options.txt.orig 2020-12-08 16:59:44.000000000 +0000
+--- meson_options.txt.orig 2021-04-29 17:18:58.000000000 +0000
 +++ meson_options.txt
-@@ -29,6 +29,8 @@ option('whpx', type: 'feature', value: '
+@@ -33,6 +33,8 @@ option('whpx', type: 'feature', value: '
  description: 'WHPX acceleration support')
  option('hvf', type: 'feature', value: 'auto',
  description: 'HVF acceleration support')
 +option('nvmm', type: 'feature', value: 'auto',
 + description: 'NVMM acceleration support')
  option('xen', type: 'feature', value: 'auto',
  description: 'Xen backend support')
  option('xen_pci_passthrough', type: 'feature', value: 'auto',

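The new meson option is what the configure switch above maps to (configure passes
-Dnvmm=$nvmm through to meson). On an already-configured build tree the feature can
also be toggled directly; a hedged sketch, with the build directory name assumed:

  $ meson configure build -Dnvmm=enabled
  $ ninja -C build
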
cvs diff -r1.1 -r1.2 pkgsrc/emulators/qemu/patches/patch-target_i386_meson.build

--- pkgsrc/emulators/qemu/patches/patch-target_i386_meson.build 2021/03/06 11:19:34 1.1
+++ pkgsrc/emulators/qemu/patches/patch-target_i386_meson.build 2021/05/24 14:22:08 1.2
@@ -1,15 +1,12 @@
-$NetBSD: patch-target_i386_meson.build,v 1.1 2021/03/06 11:19:34 reinoud Exp $
+$NetBSD: patch-target_i386_meson.build,v 1.2 2021/05/24 14:22:08 ryoon Exp $
 
---- target/i386/meson.build.orig 2020-12-08 16:59:44.000000000 +0000
+--- target/i386/meson.build.orig 2021-04-29 17:18:58.000000000 +0000
 +++ target/i386/meson.build
-@@ -34,6 +34,10 @@ i386_softmmu_ss.add(when: 'CONFIG_WHPX',
- 'whpx-all.c',
- 'whpx-cpus.c',
- ))
-+i386_softmmu_ss.add(when: 'CONFIG_NVMM', if_true: files(
-+ 'nvmm-all.c',
-+ 'nvmm-cpus.c',
-+))
- i386_softmmu_ss.add(when: 'CONFIG_HAX', if_true: files(
- 'hax-all.c',
- 'hax-mem.c',
+@@ -19,6 +19,7 @@ i386_softmmu_ss.add(files(
+ subdir('kvm')
+ subdir('hax')
+ subdir('whpx')
++subdir('nvmm')
+ subdir('hvf')
+ subdir('tcg')
+

File Added: pkgsrc/emulators/qemu/patches/Attic/patch-include_sysemu_hw__accel.h
$NetBSD: patch-include_sysemu_hw__accel.h,v 1.4 2021/05/24 14:22:08 ryoon Exp $

--- include/sysemu/hw_accel.h.orig	2021-04-29 17:18:58.000000000 +0000
+++ include/sysemu/hw_accel.h
@@ -16,6 +16,7 @@
 #include "sysemu/kvm.h"
 #include "sysemu/hvf.h"
 #include "sysemu/whpx.h"
+#include "sysemu/nvmm.h"
 
 void cpu_synchronize_state(CPUState *cpu);
 void cpu_synchronize_post_reset(CPUState *cpu);

cvs diff -r1.3 -r1.4 pkgsrc/emulators/qemu/patches/Attic/patch-include_sysemu_nvmm.h

--- pkgsrc/emulators/qemu/patches/Attic/patch-include_sysemu_nvmm.h 2021/03/06 11:19:34 1.3
+++ pkgsrc/emulators/qemu/patches/Attic/patch-include_sysemu_nvmm.h 2021/05/24 14:22:08 1.4
@@ -1,16 +1,16 @@
-$NetBSD: patch-include_sysemu_nvmm.h,v 1.3 2021/03/06 11:19:34 reinoud Exp $
+$NetBSD: patch-include_sysemu_nvmm.h,v 1.4 2021/05/24 14:22:08 ryoon Exp $
 
---- include/sysemu/nvmm.h.orig 2021-03-05 22:29:22.991663471 +0000
+--- include/sysemu/nvmm.h.orig 2021-05-06 04:47:40.186492405 +0000
 +++ include/sysemu/nvmm.h
 @@ -0,0 +1,26 @@
 +/*
 + * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
 + *
 + * NetBSD Virtual Machine Monitor (NVMM) accelerator support.
 + *
 + * This work is licensed under the terms of the GNU GPL, version 2 or later.
 + * See the COPYING file in the top-level directory.
 + */
 +
 +#ifndef QEMU_NVMM_H
 +#define QEMU_NVMM_H

cvs diff -r1.5 -r1.6 pkgsrc/emulators/qemu/patches/patch-meson.build

--- pkgsrc/emulators/qemu/patches/patch-meson.build 2021/03/19 13:25:36 1.5
+++ pkgsrc/emulators/qemu/patches/patch-meson.build 2021/05/24 14:22:08 1.6
@@ -1,95 +1,76 @@
-$NetBSD: patch-meson.build,v 1.5 2021/03/19 13:25:36 reinoud Exp $
+$NetBSD: patch-meson.build,v 1.6 2021/05/24 14:22:08 ryoon Exp $
 
 * Add NetBSD support.
 * Detect iconv in libc properly for pkgsrc (pkgsrc removes -liconv)
  to fix qemu-system-aarch64 link.
 * Detect curses (non-ncurses{,w} too)
 
---- meson.build.orig 2020-12-08 16:59:44.000000000 +0000
+--- meson.build.orig 2021-04-29 17:18:58.000000000 +0000
 +++ meson.build
-@@ -84,6 +84,7 @@ if cpu in ['x86', 'x86_64']
+@@ -87,6 +87,7 @@ if cpu in ['x86', 'x86_64']
  accelerator_targets += {
  'CONFIG_HAX': ['i386-softmmu', 'x86_64-softmmu'],
  'CONFIG_HVF': ['x86_64-softmmu'],
 + 'CONFIG_NVMM': ['i386-softmmu', 'x86_64-softmmu'],
  'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
  }
  endif
-@@ -169,6 +170,7 @@ version_res = []
+@@ -170,6 +171,7 @@ version_res = []
  coref = []
  iokit = []
  emulator_link_args = []
-+nvmm = []
- cocoa = not_found
++nvmm = not_found
  hvf = not_found
  if targetos == 'windows'
-@@ -196,6 +198,12 @@ elif targetos == 'openbsd'
- # Disable OpenBSD W^X if available
- emulator_link_args = cc.get_supported_link_arguments('-Wl,-z,wxneeded')
- endif
-+elif targetos == 'netbsd'
-+ if not get_option('nvmm').disabled()
-+ if cc.has_header('nvmm.h')
-+ nvmm = cc.find_library('nvmm')
-+ endif
-+ endif
- endif
-
- accelerators = []
-@@ -228,6 +236,11 @@ if not get_option('hax').disabled()
+ socket = cc.find_library('ws2_32')
+@@ -227,6 +229,14 @@ if not get_option('hax').disabled()
  accelerators += 'CONFIG_HAX'
  endif
  endif
-+if not get_option('nvmm').disabled()
++if targetos == 'netbsd'
 + if cc.has_header('nvmm.h', required: get_option('nvmm'))
++ nvmm = cc.find_library('nvmm', required: get_option('nvmm'))
++ endif
++ if nvmm.found()
 + accelerators += 'CONFIG_NVMM'
 + endif
 +endif
+
+ tcg_arch = config_host['ARCH']
  if not get_option('tcg').disabled()
- if cpu not in supported_cpus
- if 'CONFIG_TCG_INTERPRETER' in config_host
-@@ -246,6 +259,9 @@ endif
+@@ -271,6 +281,9 @@ endif
  if 'CONFIG_HVF' not in accelerators and get_option('hvf').enabled()
  error('HVF not available on this platform')
  endif
 +if 'CONFIG_NVMM' not in accelerators and get_option('nvmm').enabled()
 + error('NVMM not available on this platform')
 +endif
  if 'CONFIG_WHPX' not in accelerators and get_option('whpx').enabled()
  error('WHPX not available on this platform')
  endif
-@@ -517,7 +533,7 @@ if have_system and not get_option('curse
+@@ -607,7 +620,7 @@ if have_system and not get_option('curse
  has_curses_h = cc.has_header('curses.h', args: curses_compile_args)
  endif
  if has_curses_h
 - curses_libname_list = (targetos == 'windows' ? ['pdcurses'] : ['ncursesw', 'cursesw'])
 + curses_libname_list = (targetos == 'windows' ? ['pdcurses'] : ['ncursesw', 'cursesw', 'curses'])
  foreach curses_libname : curses_libname_list
  libcurses = cc.find_library(curses_libname,
  required: false,
-@@ -535,7 +551,7 @@ if have_system and not get_option('curse
+@@ -625,7 +638,7 @@ if have_system and not get_option('curse
  endif
  endif
  if not get_option('iconv').disabled()
 - foreach link_args : [ ['-liconv'], [] ]
 + foreach link_args : [ [], ['-liconv'] ]
  # Programs will be linked with glib and this will bring in libiconv on FreeBSD.
  # We need to use libiconv if available because mixing libiconv's headers with
  # the system libc does not work.
-@@ -1815,7 +1831,7 @@ foreach target : target_dirs
- 'name': 'qemu-system-' + target_name,
- 'gui': false,
- 'sources': files('softmmu/main.c'),
-- 'dependencies': []
-+ 'dependencies': [nvmm]
- }]
- if targetos == 'windows' and (sdl.found() or gtk.found())
- execs += [{
-@@ -2106,6 +2122,7 @@ summary_info += {'Install blobs': ge
- summary_info += {'KVM support': config_all.has_key('CONFIG_KVM')}
- summary_info += {'HAX support': config_all.has_key('CONFIG_HAX')}
- summary_info += {'HVF support': config_all.has_key('CONFIG_HVF')}
-+summary_info += {'NVMM support': config_all.has_key('CONFIG_NVMM')}
- summary_info += {'WHPX support': config_all.has_key('CONFIG_WHPX')}
- summary_info += {'TCG support': config_all.has_key('CONFIG_TCG')}
- if config_all.has_key('CONFIG_TCG')
+@@ -2576,6 +2589,7 @@ if have_system
+ summary_info += {'HAX support': config_all.has_key('CONFIG_HAX')}
+ summary_info += {'HVF support': config_all.has_key('CONFIG_HVF')}
+ summary_info += {'WHPX support': config_all.has_key('CONFIG_WHPX')}
++ summary_info += {'NVMM support': config_all.has_key('CONFIG_NVMM')}
+ summary_info += {'Xen support': config_host.has_key('CONFIG_XEN_BACKEND')}
+ if config_host.has_key('CONFIG_XEN_BACKEND')
+ summary_info += {'xen ctrl version': config_host['CONFIG_XEN_CTRL_INTERFACE_VERSION']}

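With this revision the NVMM probe sits next to the other accelerator checks: on a
NetBSD target meson looks for the nvmm.h header and the nvmm library, only then adds
CONFIG_NVMM, and the configuration summary gains an 'NVMM support' entry. A rough
pre-flight check of the host prerequisites (paths are assumptions; meson does the
equivalent through cc.has_header() and cc.find_library()):

  $ ls /usr/include/nvmm.h /usr/lib/libnvmm.so
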
cvs diff -r1.4 -r1.5 pkgsrc/emulators/qemu/patches/Attic/patch-qemu-options.hx

--- pkgsrc/emulators/qemu/patches/Attic/patch-qemu-options.hx 2021/03/06 11:19:34 1.4
+++ pkgsrc/emulators/qemu/patches/Attic/patch-qemu-options.hx 2021/05/24 14:22:08 1.5
@@ -1,42 +1,40 @@
-$NetBSD: patch-qemu-options.hx,v 1.4 2021/03/06 11:19:34 reinoud Exp $
+$NetBSD: patch-qemu-options.hx,v 1.5 2021/05/24 14:22:08 ryoon Exp $
 
-Add NVMM support.
-
---- qemu-options.hx.orig 2020-04-28 16:49:25.000000000 +0000
+--- qemu-options.hx.orig 2021-04-29 17:18:59.000000000 +0000
 +++ qemu-options.hx
 @@ -26,7 +26,7 @@ DEF("machine", HAS_ARG, QEMU_OPTION_mach
  "-machine [type=]name[,prop[=value][,...]]\n"
  " selects emulated machine ('-machine help' for list)\n"
  " property accel=accel1[:accel2[:...]] selects accelerator\n"
 - " supported accelerators are kvm, xen, hax, hvf, whpx or tcg (default: tcg)\n"
 + " supported accelerators are kvm, xen, hax, hvf, nvmm, whpx or tcg (default: tcg)\n"
  " vmport=on|off|auto controls emulation of vmport (default: auto)\n"
  " dump-guest-core=on|off include guest memory in a core dump (default=on)\n"
  " mem-merge=on|off controls memory merge support (default: on)\n"
 @@ -58,7 +58,7 @@ SRST
 
  ``accel=accels1[:accels2[:...]]``
  This is used to enable an accelerator. Depending on the target
 - architecture, kvm, xen, hax, hvf, whpx or tcg can be available.
 + architecture, kvm, xen, hax, hvf, nvmm, whpx or tcg can be available.
  By default, tcg is used. If there is more than one accelerator
  specified, the next one is used if the previous one fails to
  initialize.
-@@ -119,7 +119,7 @@ ERST
+@@ -135,7 +135,7 @@ ERST
 
  DEF("accel", HAS_ARG, QEMU_OPTION_accel,
  "-accel [accel=]accelerator[,prop[=value][,...]]\n"
 - " select accelerator (kvm, xen, hax, hvf, whpx or tcg; use 'help' for a list)\n"
 + " select accelerator (kvm, xen, hax, hvf, nvmm, whpx or tcg; use 'help' for a list)\n"
  " igd-passthru=on|off (enable Xen integrated Intel graphics passthrough, default=off)\n"
  " kernel-irqchip=on|off|split controls accelerated irqchip support (default=on)\n"
  " kvm-shadow-mem=size of KVM shadow MMU in bytes\n"
-@@ -128,7 +128,7 @@ DEF("accel", HAS_ARG, QEMU_OPTION_accel,
+@@ -145,7 +145,7 @@ DEF("accel", HAS_ARG, QEMU_OPTION_accel,
  SRST
  ``-accel name[,prop=value[,...]]``
  This is used to enable an accelerator. Depending on the target
 - architecture, kvm, xen, hax, hvf, whpx or tcg can be available. By
 + architecture, kvm, xen, hax, hvf, nvmm, whpx or tcg can be available. By
  default, tcg is used. If there is more than one accelerator
  specified, the next one is used if the previous one fails to
  initialize.

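With the option documentation updated, nvmm is selected like any other accelerator.
A hedged usage sketch (memory size and disk image name are placeholders):

  $ qemu-system-x86_64 -accel help                         # lists nvmm when built in
  $ qemu-system-x86_64 -accel nvmm -m 1024 -hda netbsd.img
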
File Deleted: pkgsrc/emulators/qemu/patches/Attic/patch-target_i386_nvmm_all.c