Hypervisor

Build virtualization solutions on top of a lightweight hypervisor, without third-party kernel extensions, using the Hypervisor framework.
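
As a concrete starting point, the smallest useful smoke test of the framework is creating and tearing down an empty VM. A sketch for Apple silicon (this assumes the Apple silicon hv_vm_create signature, which differs on Intel Macs, and the binary must be signed with the com.apple.security.hypervisor entitlement):

/* hvcheck.c - create and destroy an empty VM to confirm the Hypervisor
 * framework is usable on this machine.
 * Build: clang -o hvcheck hvcheck.c -framework Hypervisor
 * Sign with the com.apple.security.hypervisor entitlement before running. */
#include <Hypervisor/Hypervisor.h>
#include <stdio.h>

int main(void) {
    /* NULL selects the default VM configuration (Apple silicon signature;
     * on Intel the argument is an hv_vm_options_t instead). */
    hv_return_t ret = hv_vm_create(NULL);
    if (ret != HV_SUCCESS) {
        fprintf(stderr, "hv_vm_create failed: 0x%x\n", (unsigned)ret);
        return 1;
    }
    puts("Hypervisor available: VM created");
    hv_vm_destroy();
    return 0;
}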

Posts under Hypervisor tag

22 Posts

Two requests for Rosetta: support BMI1/2 and F16C, and support AVX1/2 on Rosetta for Linux
Hi. Request 1: It seems Microsoft is ahead of Apple in x86-on-ARM emulation, at least in the features supported; see https://blogs.windows.com/windows-insider/2024/11/06/announcing-windows-11-insider-preview-build-27744-canary-channel/ : "x64 emulated applications through Prism will now have support for additional extensions to the x86 instruction set architecture. These extensions include AVX and AVX2, as well as BMI, FMA, F16C". BMI1/2 and F16C aren't yet supported by Rosetta and would be useful for games like Alan Wake 2, so I'm asking for Rosetta to match the Prism emulator's feature set. Request 2: There is currently no way to enable AVX1/2 on Rosetta for Linux. On macOS, export ROSETTA_ADVERTISE_AVX=1 does the trick, but not in Linux VMs. I tested setting it via /bin/launchctl setenv ROSETTA_ADVERTISE_AVX 1 on the Mac before launching the VM, and also inside the Linux VM, but AVX2 isn't exposed.
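
A quick way to verify what the emulator actually advertises is to query CPUID from inside the translated process. A minimal sketch using the GCC/Clang feature-test builtins (my own test harness, not an Apple tool); compile it as x86-64 inside the VM and run it under Rosetta:

/* avxcheck.c - print which x86 extensions the (emulated) CPU advertises.
 * Build for x86-64, e.g.: gcc -o avxcheck avxcheck.c */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();
    printf("AVX:  %s\n", __builtin_cpu_supports("avx")  ? "yes" : "no");
    printf("AVX2: %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
    printf("BMI:  %s\n", __builtin_cpu_supports("bmi")  ? "yes" : "no");
    printf("BMI2: %s\n", __builtin_cpu_supports("bmi2") ? "yes" : "no");
    printf("FMA:  %s\n", __builtin_cpu_supports("fma")  ? "yes" : "no");
    printf("F16C: %s\n", __builtin_cpu_supports("f16c") ? "yes" : "no");
    return 0;
}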
Replies: 0 · Boosts: 1 · Views: 89 · Activity: 19h
Developer account can't be added on a virtualised macOS using Virtualization framework
Dear Support Team, I consistently get a 401 Unauthorized error when I try to add my developer account in Xcode while running virtualised macOS via https://developer.apple.com/documentation/virtualization, so I can't use signing, entitlements, etc. when building inside a virtualised macOS. The error shown is: There was a failure decoding response: (HTTP 401, 60 bytes) The data couldn’t be read because it isn’t in the correct format. I've found what is probably the same issue here: https://developer.apple.com/forums/thread/759877. Unfortunately, I can't find any updates. Are you aware of this problem? Are there any planned fixes in upcoming macOS updates? macOS 15.0 (24A335), Xcode Version 16.1 (16B40), Apple M1 Pro, 16 GB RAM. Best regards, Evgenii
Replies: 0 · Boosts: 0 · Views: 95 · Activity: 1d
Rosetta for Linux fails with kernels >= 6.11
Hi, please see the detailed findings at https://github.com/utmapp/UTM/discussions/6799. Basically, apps run via Rosetta for Linux now fail on kernels >= 6.11, such as the one included in Ubuntu 24.10, with:

/media/rosetta/rosetta hello
assertion failed [hash_table != nullptr]: Failed to find vdso DT_HASH (Vdso.cpp:78 get_vdso_dynamic_data)
Trace/breakpoint trap

The issue seems to be caused by this commit: https://github.com/torvalds/linux/commit/48f6430505c0b0498ee9020ce3cf9558b1caaaeb
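
The assertion suggests Rosetta locates vDSO symbols through the legacy DT_HASH table, which the commit above stopped emitting (leaving only DT_GNU_HASH). A small Linux-only sketch to check what the running kernel's vDSO actually advertises:

/* vdsocheck.c - walk the vDSO's PT_DYNAMIC segment and report whether it
 * carries DT_HASH, DT_GNU_HASH, or both. On kernels >= 6.11 only
 * DT_GNU_HASH remains, which trips Rosetta's assertion.
 * Build: gcc -o vdsocheck vdsocheck.c */
#include <elf.h>
#include <link.h>       /* ElfW() macro */
#include <stdio.h>
#include <sys/auxv.h>

int main(void) {
    const ElfW(Ehdr) *ehdr = (const ElfW(Ehdr) *)getauxval(AT_SYSINFO_EHDR);
    if (!ehdr) {
        puts("no vDSO mapped");
        return 1;
    }
    const ElfW(Phdr) *phdr =
        (const ElfW(Phdr) *)((const char *)ehdr + ehdr->e_phoff);
    int have_hash = 0, have_gnu_hash = 0;
    for (int i = 0; i < ehdr->e_phnum; i++) {
        if (phdr[i].p_type != PT_DYNAMIC)
            continue;
        /* In the vDSO image, p_offset is relative to the load address. */
        const ElfW(Dyn) *dyn =
            (const ElfW(Dyn) *)((const char *)ehdr + phdr[i].p_offset);
        for (; dyn->d_tag != DT_NULL; dyn++) {
            if (dyn->d_tag == DT_HASH)     have_hash = 1;
            if (dyn->d_tag == DT_GNU_HASH) have_gnu_hash = 1;
        }
    }
    printf("DT_HASH:     %s\n", have_hash ? "present" : "missing");
    printf("DT_GNU_HASH: %s\n", have_gnu_hash ? "present" : "missing");
    return have_hash ? 0 : 1;
}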
Replies: 2 · Boosts: 0 · Views: 69 · Activity: 3d
Nested Hyper-V support for VMs
Hello! I am wondering about the status of nested Hyper-V support for VMs. This is specifically regarding an issue with Parallels Desktop, where Parallels claims the problem is on Apple's side. Parallels article: https://kb.parallels.com/en/116239. The article links to a previous Apple discussion post on this issue, which is no longer accessible (at least I cannot access it): https://discussions.apple.com/thread/255546412. Is this something that will be fixed and supported soon? Thank you! (If this should be posted somewhere else, please let me know where.)
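
For what it's worth, Hyper-V inside a guest requires nested virtualization, and macOS 15 added a Hypervisor framework query for whether the host supports it (reportedly only on M3 and later chips). A minimal check, assuming the documented hv_vm_config_get_el2_supported API:

/* el2check.c - ask the Hypervisor framework whether this Mac supports
 * nested virtualization (EL2 guests). Requires the macOS 15 SDK.
 * Build: clang -o el2check el2check.c -framework Hypervisor */
#include <Hypervisor/Hypervisor.h>
#include <stdio.h>

int main(void) {
    bool supported = false;
    hv_return_t ret = hv_vm_config_get_el2_supported(&supported);
    if (ret != HV_SUCCESS) {
        fprintf(stderr, "hv_vm_config_get_el2_supported failed: 0x%x\n",
                (unsigned)ret);
        return 1;
    }
    printf("Nested virtualization (EL2) supported: %s\n",
           supported ? "yes" : "no");
    return 0;
}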
Replies: 1 · Boosts: 0 · Views: 102 · Activity: 5d
M4 devices - VMs pre 13.4 fail to boot
Hi, It seems that on M4 devices any virtual machine running a macOS version older than 13.4 fails to boot; it gets stuck on a black screen. This happens regardless of the virtualization software used (UTM, VirtualBuddy, Viable, etc.). After talking to many people, everyone experiences the same thing. At least for me this is a massive limitation of the platform, and I really hope this is a bug that can be fixed. Thanks, Csaba
Replies: 6 · Boosts: 6 · Views: 521 · Activity: 3d
Metal passthrough on Intel VMs causes com.apple.screensharing.menuextra to crash and screen sharing to exit
https://feedbackassistant.apple.com/feedback/15645457 Metal passthrough on Intel VMs causes com.apple.screensharing.menuextra to crash and screen sharing to exit. Create a 15.1 VM with Metal passthrough on a 15.0.1 or 15.1 host, enable Screen Sharing, then try connecting with VNC after restarting the machine. I'm using Anka to create the VM. You'll see VNC work (open vnc://192.168.64.3:5900), then a few seconds in show "Reconnecting...", then work, then go to "Reconnecting..." for ~5 minutes until it eventually works consistently. You'll see launchd showing exits/failures (see screenshots), and diagnostic reports showing things like:

Thread 0 Crashed:: Dispatch queue: com.apple.RenderBox.Encoder
0   libsystem_kernel.dylib    0x7ff801da5b52 __pthread_kill + 10
1   libsystem_pthread.dylib   0x7ff801ddff85 pthread_kill + 262
2   libsystem_c.dylib         0x7ff801d00b19 abort + 126
3   libsystem_c.dylib         0x7ff801cffddc __assert_rtn + 314
4   Metal                     0x7ff80d045d72 MTLReportFailure.cold.1 + 41
5   Metal                     0x7ff80d01fa2a MTLReportFailure + 513
6   Metal                     0x7ff80cfb74e0 +[MTLLoader sliceIDForDevice:legacyDriverVersion:airntDriverVersion:] + 200
7   Metal                     0x7ff80cf265c9 +[_MTLBinaryArchive(MTLBinaryArchiveInternal) deserializeBinaryArchiveHeader:fileData:device:] + 89
8   Metal                     0x7ff80cf10f0c -[_MTLBinaryArchive loadFromURL:error:] + 537
9   Metal                     0x7ff80cf10288 -[_MTLBinaryArchive initWithOptions:device:url:error:] + 844
10  RenderBox                 0x7ff9041a15fd RB::(anonymous namespace)::load_library_archive(NSBundle*,
Replies: 1 · Boosts: 1 · Views: 162 · Activity: 2w
Xcode 16.1 can't load the account information in a VM
I have a Mac mini M2 running Sequoia 15.1. On this machine I am running a virtual machine, created with Virtualization.framework, with the same OS version, 15.1. Logging into my account in System Settings succeeds. Next, I need to add my account in Xcode 16.1. While the initial login is successful, Xcode immediately displays the following error: Decoding Error. There was a failure decoding response: (HTTP 401, 60 bytes) The data couldn’t be read because it isn’t in the correct format. As a result, I cannot see any account information, teams, etc. A very similar bug has been reported at https://developer.apple.com/forums/thread/759877, but there has been no progress or updates there. Is there any chance to fix this and get it working?
Replies: 6 · Boosts: 6 · Views: 405 · Activity: 2w
Access denied to Hypervisor redistributor register
Hi! I would like to try to boot the Linux kernel with the Hypervisor framework and see how far I get. So far the kernel runs up to the point where it tries to identify the redistributor of the Hypervisor's GICv3, but I get an exception when it reads the memory-mapped GICR_FIDR2 register. I tried the same read via hv_gic_get_redistributor_reg() and get HV_DENIED. What could be the reason for this exception? I believe I've initialized enough of the GIC for it to work; no interrupts yet, though. It is of course entirely possible that I forgot to set or clear some bits, but several redistributor registers are missing from the framework, so it's not possible to do the full initialization a hardware GICv3 implementation needs. I assume the Hypervisor's GIC abstraction takes care of several steps internally. What are the steps to initialize the HVF's GIC? Do you have a working example? I couldn't find anything on the internet; the popular virtualization software out there all seem to bring their own emulated interrupt controller. I'm using Sequoia 15.0.1. Thank you for any hints!
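
In case a reference point helps, the minimal bring-up sequence I'd expect looks roughly like the sketch below (based on the macOS 15 hv_gic_* API; the base addresses are illustrative placeholders that must match the guest's device tree, and error handling is reduced to asserts):

/* gicsketch.c - create the in-kernel vGICv3 after hv_vm_create() and
 * before creating any vCPUs.
 * Build: clang -o gicsketch gicsketch.c -framework Hypervisor
 * (sign with the com.apple.security.hypervisor entitlement) */
#include <Hypervisor/Hypervisor.h>
#include <assert.h>

#define GIC_DIST_BASE   0x10000000ULL   /* placeholder guest IPA */
#define GIC_REDIST_BASE 0x100a0000ULL   /* placeholder guest IPA */

int main(void) {
    assert(hv_vm_create(NULL) == HV_SUCCESS);

    hv_gic_config_t gic = hv_gic_config_create();
    assert(hv_gic_config_set_distributor_base(gic, GIC_DIST_BASE)
           == HV_SUCCESS);
    assert(hv_gic_config_set_redistributor_base(gic, GIC_REDIST_BASE)
           == HV_SUCCESS);
    /* Create the GIC before any hv_vcpu_create() call. */
    assert(hv_gic_create(gic) == HV_SUCCESS);

    /* ... map guest memory, create vCPUs, load the kernel, run ... */
    hv_vm_destroy();
    return 0;
}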
Replies: 3 · Boosts: 0 · Views: 239 · Activity: Oct ’24
hv_vcpu_run on M1 is supposed to return but never does
Hi, I'm building a virtual machine manager on top of the Hypervisor framework and have a problem where hv_vcpu_run somehow never returns. I tested the same code on Asahi Linux using KVM and everything works correctly. I really have no idea what I'm doing wrong here. How I set up a vCPU: https://github.com/obhq/obliteration/blob/main/gui/src/vmm/aarch64.rs How I use the Hypervisor framework: https://github.com/obhq/obliteration/blob/main/gui/src/vmm/hv/macos/cpu.rs#L402 Thanks in advance.
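
One KVM difference worth double-checking: hv_vcpu_run only returns on an actual VM exit, and the Hypervisor framework has no signal-based kick, so a VMM that needs the run loop back must call hv_vcpus_exit from another thread. A minimal sketch of that pattern (vCPU creation omitted; hv_vcpu_run must run on the same thread that called hv_vcpu_create):

/* kick.c - the hv_vcpus_exit pattern for forcing hv_vcpu_run to return.
 * exit_info is the pointer handed back by hv_vcpu_create(). */
#include <Hypervisor/Hypervisor.h>
#include <pthread.h>
#include <unistd.h>

static void *kicker(void *arg) {
    hv_vcpu_t *vcpu = arg;
    sleep(1);                     /* let the guest run for a while */
    hv_vcpus_exit(vcpu, 1);       /* forces hv_vcpu_run to return */
    return NULL;
}

void run_vcpu(hv_vcpu_t vcpu, hv_vcpu_exit_t *exit_info) {
    pthread_t t;
    pthread_create(&t, NULL, kicker, &vcpu);
    for (;;) {
        if (hv_vcpu_run(vcpu) != HV_SUCCESS)
            break;
        if (exit_info->reason == HV_EXIT_REASON_CANCELED) {
            /* Kicked by hv_vcpus_exit: handle pending VMM work here,
             * then decide whether to re-enter the guest. */
            break;
        }
        /* HV_EXIT_REASON_EXCEPTION etc.: decode exit_info->exception,
         * emulate the access, then loop to re-enter the guest. */
    }
    pthread_join(t, NULL);
}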
Replies: 1 · Boosts: 0 · Views: 192 · Activity: Oct ’24
Problems integrating Hypervisor.framework APIC and IOAPIC
Introduction

I'm trying to integrate support for the APIC implementation added to Hypervisor.framework back in macOS 12 into the open source QEMU VMM. QEMU contains a VMM-side software implementation of the APIC, but it shows up as a major performance constraint in profiling, so it'd be nice to use the in-kernel implementation. I've previously submitted DTS TSIs (case 3345863) for this and received some high-level pointers, but I'm told the forums are now the focus for DTS. I've got things working to what feels like 95%, but I'm still tripping up on a few things. FreeBSD and macOS guests successfully boot and run most of the time, but there are sporadic stalls which point towards undelivered interrupts. Linux fails early on. A number of key test cases are failing in the 'apic' and 'ioapic' test suites that are part of the open source 'kvm-unit-tests' project, and I've run out of ideas for workarounds.

Broadly, I'm doing this:

- When calling hv_vm_create, I pass the HV_VM_ACCEL_APIC flag.
- The VM uses the newer hv_vcpu_run_until() API. After VM exits, I query hv_vcpu_exit_info() in case there's anything else to do.
- Page fault VM exits in the APIC's MMIO range are forwarded to hv_vcpu_apic_write and hv_vcpu_apic_read respectively. (With an hv_vcpu_exit_info check and post-processing if no_side_effect returns true.)
- Writes to the APICBASE MSR do some sanity checks (throw an exception on invalid state transitions etc.) and update the MMIO range via hv_vmx_vcpu_set_apic_address() if necessary. HVF seems to do its own additional handling for the actual APIC state changes. (Moving the MMIO range and enabling the APIC at the same time fails: FB14021745)
- Various machinery and state handling around INIT and STARTUP IPIs for bringing up the other vCPUs. This was fiddly to get working, but I think I've got it now.
- MSIs from virtual devices are delivered via hv_vm_lapic_msi.
- Reads and writes for PIC and ELCR I/O ports are forwarded to the hv_vm_atpic_port_write/hv_vm_atpic_port_read APIs. (In theory, interrupt levels on the PIC are controlled via hv_vm_atpic_assert_irq/hv_vm_atpic_deassert_irq, but all modern OSes disable the PIC anyway.)
- Page faults for the IOAPIC's MMIO range are forwarded to hv_vm_ioapic_read/hv_vm_ioapic_write.
- Virtual devices deliver their interrupts using hv_vm_ioapic_assert_irq/hv_vm_ioapic_deassert_irq and hv_vm_ioapic_pulse_irq for level- and edge-triggered interrupts respectively.

Now for the parts where I'm stuck and am either doing something wrong, or there are bugs in HVF's implementation.

Issues I'm running into

IOAPIC:

1. Unmasking during raised interrupt level, test_ioapic_level_mask test case:
- The guest enables masking on a particular level-triggered interrupt line (MMIO write to the ioredtbl entry).
- The virtual device raises the interrupt level to 1. The VMM calls hv_vm_ioapic_assert_irq(). No interrupt, because masked; so far so good.
- The guest unmasks the interrupt via another write to the ioredtbl entry. At this point I would expect the interrupt to be delivered to the vCPU. This is not the case. Even another call to hv_vm_ioapic_assert_irq() after unmasking has no effect. Only if we deassert and reassert does the guest receive anything. (This is my current workaround, but it is rather ugly because I essentially need to maintain shadow state to detect the situation; see the sketch at the end of this post.)

2. Retriggering, test case test_ioapic_level_retrigger:
- The vCPU enters an interrupts-disabled section (cli instruction).
- The virtual device asserts a level-triggered interrupt. The VMM calls hv_vm_ioapic_assert_irq().
- The vCPU leaves the interrupts-disabled section (sti instruction) and starts executing other code (or halts, as in the test case).
- The interrupt is delivered to the vCPU, which runs the interrupt handler. The interrupt handler signals EOI. Note that the interrupt is still asserted.
- Outside the interrupt handler, the vCPU briefly disables interrupts again (cli).
- The vCPU once again re-enables interrupts (sti) and halts (hlt).
- Here we would expect the interrupt to be delivered again, but it is not. I don't currently have a workaround for this, because none of these steps causes hv_vcpu_run_until exits where the condition could be detected.

3. Coalescing, test_ioapic_level_coalesce:
- The virtual device asserts a level-triggered interrupt line.
- The vCPU enters the corresponding handler.
- The device de-asserts the interrupt level.
- The device re-asserts the interrupt.
- The device once again de-asserts the interrupt.
- The interrupt handler sets EOI and returns.
- We would expect the interrupt handler to run only once in this sequence of events, but as it turns out, it runs a second time! This is less critical than the previous two unexpected behaviours, because spurious interrupts are usually only slightly detrimental to performance, whereas undelivered interrupts can cause system hangs. However, it doesn't exactly instill confidence in the implementation.

I've submitted the above as FB14425412, as they look like bugs to me - either in the implementation or in the documentation.

APIC

To work around the HVF IOAPIC problems mentioned above, I tried to use the HVF APIC implementation in isolation, without the ATPIC and IOAPIC implementations. Instead, I provided VMM-side software implementations of these controllers. However, the software IOAPIC needs to receive end-of-interrupt notifications from the APIC. This is what I understood the HV_APIC_CTRL_IOAPIC_EOI flag to be responsible for, so I passed it to hv_vcpu_apic_ctrl() during vCPU initialisation. The software IOAPIC implementation receives all the MMIO writes, maintains IOAPIC state, and calls hv_vm_send_ioapic_intr() whenever interrupts should be delivered to the VM. However, I have found that hv_vcpu_exit_info() never returns HV_VM_EXITINFO_IOAPIC_EOI. When the HVF APIC is in xAPIC mode, I can detect writes to offset 0xb0 in the MMIO write handler and query hv_vcpu_exit_ioapic_eoi() for the vector whose handler has run. However, once the APIC is in x2APIC mode, there are no exits for the x2APIC MSR accesses, so I can't see how I might get those EOI notifications. Am I interpreting the purpose of HV_APIC_CTRL_IOAPIC_EOI correctly? Do I need to do anything other than hv_vcpu_apic_ctrl to make it work? How should I be receiving the EOI notifications? I was expecting vCPU run exits, but this does not appear to be the case. Again, either a crucial step is missing from the documentation, or there's a bug in the implementation. I've submitted this as FB14425590.

My questions:

- Has anyone got the HVF APIC/IOAPIC working for the general-purpose case, i.e. guest-OS agnostic, all edge cases handled?
- The issues I've run into - are these bugs in HVF? Do I need extra support code/workarounds to make the edge cases work?
- Is using the APIC without HVF's IOAPIC an intended, supported use case, or am I wasting my time on this "split" setup?
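
For anyone hitting the same unmask problem, the deassert/reassert workaround from issue 1 could look roughly like the sketch below. The hv_vm_ioapic_* prototypes are assumed from the API names above (taking a plain pin number); check the Hypervisor headers for the real signatures.

/* Shadow mask/level state per IOAPIC pin so the VMM can detect
 * "unmasked while the line is still asserted" and pulse the line,
 * since HVF does not deliver the pending interrupt on unmask by itself.
 * NOTE: the hv_vm_ioapic_* prototypes used here are assumptions based
 * on the API names, not copied from the headers. */
#include <Hypervisor/Hypervisor.h>
#include <stdbool.h>

#define IOAPIC_PINS 24

static struct {
    bool level;   /* current interrupt level from the virtual device */
    bool masked;  /* mask bit from the guest's last ioredtbl write */
} shadow[IOAPIC_PINS];

/* Called by the virtual device when it changes its interrupt level. */
void vdev_set_irq(int pin, bool level) {
    shadow[pin].level = level;
    if (level)
        hv_vm_ioapic_assert_irq(pin);
    else
        hv_vm_ioapic_deassert_irq(pin);
}

/* Called after forwarding a guest MMIO write to hv_vm_ioapic_write(),
 * with the mask bit decoded from the written ioredtbl entry. */
void ioredtbl_written(int pin, bool masked_now) {
    bool was_masked = shadow[pin].masked;
    shadow[pin].masked = masked_now;
    if (was_masked && !masked_now && shadow[pin].level) {
        /* Unmasked while the line is high: HVF misses this case, so
         * pulse the line to get the interrupt delivered. */
        hv_vm_ioapic_deassert_irq(pin);
        hv_vm_ioapic_assert_irq(pin);
    }
}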
Replies: 1 · Boosts: 1 · Views: 526 · Activity: Jul ’24
Please enable Hypervisor APIs on iPadOS
There is still no hypervisor support in iPadOS 17. A hypervisor is physically possible on any of the M-series chips included in the iPad Airs and iPad Pros, but it is locked away by iPadOS. Blocking the hypervisor on iOS seems reasonable to me, because it consumes power, is not friendly to battery life, and is not suitable for a mobile phone. But for iPadOS the limitation does not seem reasonable. First, Guideline 2.5.2 for iOS and iPadOS blocks code execution that loads dynamically; it may protect users, because apps could load malicious code after passing App Store review. But if we load code in a hypervisor, any malicious code can only run inside the VM, and the safety of the VM is not an issue: escaping from a VM is even harder than escaping from the sandbox of the Safari browser. Even if there are other concerns about loading arbitrary code into a hypervisor, it could be limited to loading only user-selected code, blocking apps from loading code from the internet without user intention. Running user-selected code in a hypervisor wouldn't threaten security at all, so there is no reason for Guideline 2.5.2 to apply to hypervisors. Second, the iPad is advertised as a laptop replacement. As a laptop, it can't execute any user-generated code; such code can only be interpreted. As a software developer, this means iPadOS is basically not usable for me: I can only run code on a remote server and use the iPad as a thin client. It can't be a standalone device, even though it has a powerful M2 chip. As for Xcode on iPad, if Apple is concerned that it breaks the security model, the compiled code could run in the hypervisor, which isolates the reviewed code from user-generated code. The iPad has a powerful M2 chip, but iPadOS limits its power.
Replies: 3 · Boosts: 14 · Views: 1.6k · Activity: Mar ’24
Rosetta fails on shared memory in Sonoma 14.3
I use UTM.app for virtualisation. I have a fully virtualised "Fedora 38-aarch64" in UTM.app with Rosetta enabled. After upgrading Sonoma to 14.3, shared memory is no longer virtualised properly. I have this test file:

#include <stdio.h>
#include <sys/shm.h>
#include <sys/stat.h>

int main(void)
{
    int segment_id;
    char *shared_memory;
    struct shmid_ds shmbuffer;
    int segment_size;
    const int shared_segment_size = 0x6400;

    /* Allocate a shared memory segment. */
    segment_id = shmget(IPC_PRIVATE, shared_segment_size,
                        IPC_CREAT | IPC_EXCL | S_IRUSR | S_IWUSR);
    /* Attach the shared memory segment. */
    shared_memory = (char *) shmat(segment_id, 0, 0);
    printf("shared memory attached at address %p\n", shared_memory);
    /* Determine the segment's size. */
    shmctl(segment_id, IPC_STAT, &shmbuffer);
    segment_size = shmbuffer.shm_segsz;
    printf("segment size: %d\n", segment_size);
    /* Write a string to the shared memory segment. */
    sprintf(shared_memory, "Hello, world.");
    /* Detach the shared memory segment. */
    shmdt(shared_memory);
    /* Reattach the shared memory segment, at a different address. */
    shared_memory = (char *) shmat(segment_id, (void *) 0x5000000, 0);
    printf("shared memory reattached at address %p\n", shared_memory);
    /* Print out the string from shared memory. */
    printf("%s\n", shared_memory);
    /* Detach the shared memory segment. */
    shmdt(shared_memory);
    /* Deallocate the shared memory segment. */
    shmctl(segment_id, IPC_RMID, 0);
    return 0;
}

The command to compile and run it is gcc -Wall a.c && ./a.out

When I compile it in the virtualised Fedora, it works properly and shows:

shared memory attached at address
segment size:
shared memory reattached at address
Hello, world.

When I compile it directly on the M1 Mac, it dies with:

shared memory attached at address
segment size:
shared memory reattached at address
Segmentation fault:

I also tried it in an x86 Docker container inside the virtualised Fedora, and it also shows an error. In the virtualised "Fedora 38-aarch64", run an x86 Docker container with "docker run -it --platform linux/amd64 oraclelinux:7.9 bash" and install gcc in the Docker shell with "yum install -y gcc". After compiling and running, it dies with:

shared memory attached at address
segment size:
shared memory reattached at address
Hello, world.
assertion failed [rem_idx != ]: Unable find existing allocation shared memory segment to unmap (VMAllocationTracker.cpp remove_shared_mem)
Trace/breakpoint (core dumped)

How can I fix it? On the previous version of Sonoma it worked properly. Thank you.
Replies: 5 · Boosts: 2 · Views: 2.1k · Activity: Feb ’24
Any virtual machine software on M1/arm?
I used to run VirtualBox on macOS to run Windows guests for various reasons. Recently I bought a new Mac mini M1, and now I have a problem: VirtualBox does not have a stable release for ARM (yet). What other options do I have? BTW, I came across this doc article: https://developer.apple.com/documentation/virtualization/running_macos_in_a_virtual_machine_on_apple_silicon. I read through it, but could not conclude whether it offers the same functionality as a full-blown VM suite; more specifically, I want to run Windows guests.
Replies: 2 · Boosts: 0 · Views: 1.6k · Activity: Jul ’24