Has anyone measured the brightness of the Vision Pro display?
It seems dimmer than I expected.
Also, is there any way to set the Vision Pro's brightness to maximum via a script?
Many thanks!
I’ve submitted the following feedback:
FB13820942 (List Outline View Not Using Accent Color on Disclosure Caret for visionOS)
I'd appreciate help on this to see whether I'm doing something wrong, or whether this is simply how visionOS currently works and the feedback stands as a suggestion.
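For reference, a minimal example of the kind of setup where I see this; Node is just a stand-in model type and the tint color is arbitrary:

import SwiftUI

struct Node: Identifiable {
    let id = UUID()
    let name: String
    var children: [Node]? = nil
}

struct OutlineDemo: View {
    private let tree = [
        Node(name: "Folder", children: [
            Node(name: "Leaf A"),
            Node(name: "Leaf B"),
        ])
    ]

    var body: some View {
        // The row content picks up the accent/tint color as expected,
        // but the disclosure caret does not appear to on visionOS.
        List(tree, children: \.children) { node in
            Text(node.name)
        }
        .tint(.orange)
    }
}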
If I have windows that occupy the shared space and are located in various spatial locations for the user, say various rooms in a home, how do I have those windows reappear/open in the same spatial location when the user returns to the shared space after being in a full immersive space? I assume it has to do with somehow setting the state of the windows before heading into the immersive view, but I’m not sure how to begin thinking about this or the type of code I would need for that. Any help for this question would be appreciated.
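The only direction I can think of is to remember which window IDs are open before entering the immersive space and reopen them on return, hoping the system restores their previous placement. A rough sketch with made-up window and space IDs:

import SwiftUI

struct ControlPanel: View {
    @Environment(\.openWindow) private var openWindow
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    // Hypothetical: the IDs of the room windows the user had open in the shared space.
    @State private var openRoomWindowIDs: Set<String> = ["kitchen", "livingRoom"]

    var body: some View {
        VStack {
            Button("Enter Immersive Space") {
                Task { _ = await openImmersiveSpace(id: "fullSpace") }
            }
            Button("Back to Shared Space") {
                Task {
                    await dismissImmersiveSpace()
                    // Reopen the remembered windows; my assumption is that the
                    // system places each reopened scene near its last location.
                    for id in openRoomWindowIDs {
                        openWindow(id: id)
                    }
                }
            }
        }
    }
}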
Inputs
Updates to inputs on Apple Vision Pro let you decide if you want the user’s hands to appear in front of or behind the digital content.
Trying to understand why this is being introduced. Why would one corrupt the spatial experience by forcing your hands to appear in front of or behind digital content? Won't this be confusing to users? It should be a natural mixed-reality experience where occlusion occurs as needed: if your physical hand is in front of a virtual object, it remains visible, and likewise, if you move it behind, it disappears (not a semi-transparent view of your hand through the model).
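For reference, my understanding is that the relevant control is the upperLimbVisibility scene modifier, and that the natural mixed-reality behavior presumably corresponds to .automatic, so this looks like an opt-in, per-scene preference rather than something forced on users. A sketch (scene ID made up):

import SwiftUI
import RealityKit

struct HandsDemoApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "canvas") {
            RealityView { content in
                // Add virtual content here.
            }
        }
        // .automatic leaves occlusion to the system, .visible always draws
        // hands over content, .hidden lets virtual content cover the hands.
        .upperLimbVisibility(.automatic)
    }
}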
Has anyone who's installed the beta happened to test whether you need the macOS beta installed to use Mac Virtual Display? Also, I know the hype is about the widescreen mode, but were there any clips or notes I missed suggesting we'll get a vertical display this time around as well?
Here is a sample asset demoing the problem:
https://cloudzeta.com/zeta/public-demo/session/lx9tkmenrcj4o5ad/quicklook.usdz
Assets like this used to work well in visionOS 1, but now it's missing textures for some reason:
https://www.youtube.com/watch?v=2TOMnkGvi8I
Hi all, I've been working with visionOS for a bit and am trying to develop a feature that allows users to SharePlay and interact with a 3D model pulled from the cloud (iCloud in this case, but possibly a regular backend service in the future). Ideally, I want to be able to click a custom button on a regular window that starts the group activity/SharePlay with another person in the FaceTime call, opens the volumetric window with the model, and can switch to an immersive space freely. TL;DR/questions at the very end for reference.
I was able to get this working with only a single window group (i.e., a volumetric WindowGroup scene plus an immersive space scene). However, I am running into trouble getting SharePlay to associate with the desired scene (or any scene at all) once I have multiple WindowGroup scenes defined.
I have been referencing the following documentation in my attempts to implement this:
https://developer.apple.com/documentation/groupactivities/sceneassociationbehavior
https://developer.apple.com/documentation/groupactivities/adding-spatial-persona-support-to-an-activity
https://developer.apple.com/documentation/groupactivities/defining-your-apps-shareplay-activities
https://developer.apple.com/documentation/groupactivities/joining-and-managing-a-shared-activity
No luck so far, however. Here is a quick breakdown of what I've done so far to attempt the implementation:
Define a group activity that contains static var activityIdentifier: String and var metadata: GroupActivityMetadata, and conforms to GroupActivity (see the sketch after this breakdown).
Provide a startShareplay() method that instantiates the above group activity and switches over await activity.prepareForActivation(), activating the activity in the .activationPreferred case. I have also provided a separate group activity registration method to start SharePlay via AirDrop, as mentioned in the Building spatial SharePlay experiences developer video (timestamped); this does expose a group activity in the share context menu/ornament, but nothing is indicated as shared afterwards.
On app start, trigger a method that configures group sessions and attaches listeners (subscribers for active participants, session state, messages of the corresponding state type, which in my case is ModelState.self, journal attachments for providing models the other user may not have since we fetch models from the cloud/backend, local participant states, etc.). At the very end, call groupSession.join().
Add external activation handlers to the corresponding scenes in the scene declaration (per this documentation on SceneAssociationBehavior), using the handlesExternalEvents(matching:) scene modifier to open the scene when SharePlay starts. I have also tried the handlesExternalEvents(preferring:allowing:) view modifier on views, with no luck. Both are used with the corresponding activityIdentifier from the group activity, and I've also tried passing a specific identifier with the .content(_) scene association behavior, but no luck there either.
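For concreteness, a stripped-down sketch of how I've wired these pieces; ModelActivity and the "modelVolume" window ID are my own placeholder names, and this reflects my reading of the docs rather than a known-good setup:

import SwiftUI
import GroupActivities

// The group activity.
struct ModelActivity: GroupActivity {
    static let activityIdentifier = "com.example.myapp.model-viewing"

    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "View Model Together"
        metadata.type = .generic
        return metadata
    }
}

// Starting SharePlay from the custom button.
func startShareplay() async {
    let activity = ModelActivity()
    if case .activationPreferred = await activity.prepareForActivation() {
        _ = try? await activity.activate()
    }
}

// Joining sessions as they arrive.
func observeSessions() async {
    for await session in ModelActivity.sessions() {
        // Configure messengers/journal subscribers and any UI here, then join.
        session.join()
    }
}

// Associating the volumetric scene with the activity.
struct ModelApp: App {
    var body: some Scene {
        WindowGroup(id: "modelVolume") {
            Text("Model goes here")
        }
        .windowStyle(.volumetric)
        .handlesExternalEvents(matching: [ModelActivity.activityIdentifier])
    }
}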
I have noted that in this answer regarding SharePlay in visionOS, the engineer says that when the app receives the session, it should set up any necessary UI and then join the session. But even if the other participant's UI isn't being set up via their session, I would expect the person who started SharePlay to see the sharing ornament turn green on the corresponding window, and that doesn't happen. In fact, none of the open windows get the green sharing ornament (they just keep showing "Not shared").
TL;DR: I added external event handling and the standard group activity setup to a multi-window app. When using SharePlay, no windows are indicated as being shared.
My questions thus are:
Am I misusing the scene/view modifiers for handlesExternalEvents to open and associate a specific WindowGroup/scene with the group activity?
Regarding opening a specific window when the group activity is activated: how do we pass any values the window group requires, e.g. something like WindowGroup(id: ..., for: URL.self) { url in ... }?
Do I still need to perform UI setup in the session listener (for await session in MyActivity.sessions())? Is this just a simple openWindow call?
Beyond the SharePlay initialization above, what are the best practices for sharing 3D models that not all users in the session might have? Should they be added as attachments to the GroupSessionJournal, or should I pass the remote URL to everyone so each person downloads the model locally?
Thanks for any help and apologies for the long post. Please let me know if there's any additional information I can provide to help resolve this.
My visionOS app was rejected for not supporting gaze-and-pinch interaction, but as far as I can tell there is no way to track the user's eye movements in visionOS. (The intended means of interaction for my app is an extended gamepad.) I am able to detect a pinch but can't find any way to detect where the user is looking. ARFaceAnchor and lookAtPoint appear not to be available in visionOS. So how do we go about doing this? For context, I am porting a Metal game to visionOS from iOS.
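For context, what I've been experimenting with is handling the pinch as a spatial event; my understanding is that raw gaze is never exposed for privacy reasons, and the gaze-plus-pinch combination only arrives as an event at the moment of the pinch. A rough sketch (I'm not sure this is what review expects):

import SwiftUI

struct PinchOverlay: View {
    var body: some View {
        Color.clear
            .contentShape(Rectangle())
            .gesture(
                SpatialEventGesture()
                    .onChanged { events in
                        for event in events where event.phase == .active {
                            // The event carries the pinch location, but never
                            // continuous eye-tracking data.
                            print("pinch at", event.location3D)
                        }
                    }
            )
    }
}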
The new Mac virtual display feature on visionOS 2 offers a curved/panoramic window. I was wondering if this is simply a property that can be applied to a window, or if it involves an immersive mode or SceneKit/RealityKit?
I can get fully immersive rendering working with Metal and Compositor Services, but at WWDC24 rendering Metal with passthrough was announced: https://developer.apple.com/wwdc24/10092. I watched the video and downloaded the sample project. I noticed that passthrough showed up in the demo project but not in my Metal project. After debugging, I found it came down to the Preferred Default Scene Session Role key in my Info.plist: it was set to Compositor Services Immersive Space Application Session Role (as the video said), but it needed to be set to Window Application Session Role for the passthrough to come in.
Is this a bug?
Hello,
I've been creating my own stereoscopic images on my laptop and airdropping them to the Vision Pro to view them in 3D.
My custom images consist of a left_eye.png and a right_eye.png that have been combined into one HEIF image (as is done natively on the headset).
In the visionOS 1.x Photos app, I was able to see my custom images in 3D, but in visionOS 2 the device no longer recognizes that my images should be shown stereoscopically and instead shows them in 2D.
I see that it gives me the option to use the AI tool to convert 2D into 3D, but the original file that I AirDropped to myself (Mac --> AVP Photos album) already has a left and right image pair.
Is this something that can be fixed?
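For context, this is roughly how I combine the pair into a stereo HEIC on the Mac. The ImageIO group keys below are written from memory, so treat this as a sketch rather than a verified recipe:

import Foundation
import ImageIO
import UniformTypeIdentifiers

// Combine left/right CGImages into a single stereo-pair HEIC.
// NOTE: the kCGImagePropertyGroup* keys are as I remember them;
// please double-check them against the ImageIO headers.
func writeStereoHEIC(left: CGImage, right: CGImage, to url: URL) {
    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, UTType.heic.identifier as CFString, 2, nil
    ) else { return }

    let leftProperties: [CFString: Any] = [
        kCGImagePropertyGroups: [
            kCGImagePropertyGroupIndex: 0,
            kCGImagePropertyGroupType: kCGImagePropertyGroupTypeStereoPair,
            kCGImagePropertyGroupImageIsLeftImage: true,
        ]
    ]
    let rightProperties: [CFString: Any] = [
        kCGImagePropertyGroups: [
            kCGImagePropertyGroupIndex: 0,
            kCGImagePropertyGroupType: kCGImagePropertyGroupTypeStereoPair,
            kCGImagePropertyGroupImageIsRightImage: true,
        ]
    ]

    CGImageDestinationAddImage(destination, left, leftProperties as CFDictionary)
    CGImageDestinationAddImage(destination, right, rightProperties as CFDictionary)
    CGImageDestinationFinalize(destination)
}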
Hello friends!
I am looking into a use case where I want to add animated avatars to a RealityView. I am looking to use a third-party package but have not found any with good iOS or visionOS support. Has anyone come across a package for this that I could look into?
I am getting the error "Initializing hosting entity without a context" in the console when I build and run my game in Xcode 16.0 beta, targeting visionOS 2.0 (22N5252n).
Not sure where the error is originating.
I have an input texture in a ShaderGraphMaterial. I use .replace(withDrawables:) to replace the texture with a drawable queue. When I present drawables to this queue, nothing happens in visionOS 2.
The drawables are not presented, and I can't get any more via nextDrawable() because the unpresented ones are holding things up.
This is with both bgra8Unorm_srgb and rgba16float formats.
I have confirmed the material applied to my object has the modified texture resources on them.
It was working in visionOS 1.2. What changed in visionOS 2 to cause this?
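For reference, the flow I'm using looks roughly like this; the size, format, and where the TextureResource comes from are simplified here:

import RealityKit
import Metal

// Route a material's texture input through a drawable queue.
func makeQueue(replacing texture: TextureResource) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm_srgb,
        width: 1024,
        height: 1024,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none
    )
    let queue = try TextureResource.DrawableQueue(descriptor)
    texture.replace(withDrawables: queue)
    return queue
}

// Called per frame to push new content.
func pushFrame(to queue: TextureResource.DrawableQueue) {
    // On visionOS 2, presented drawables never show up, and once the pool of
    // unpresented drawables is exhausted, nextDrawable() stops returning.
    guard let drawable = try? queue.nextDrawable() else { return }
    // ... encode a render or compute pass into drawable.texture here ...
    drawable.present()
}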
Hello everyone,
Super exciting stuff released this year!
I was playing around with the Metal passthrough sample code
(see: https://developer.apple.com/documentation/compositorservices/interacting_with_virtual_content_blended_with_passthrough)
... and noticed that the upperLimbVisibility set to .automatic does not seem to work and my hand is always on top.
How to reproduce:
Draw something
Position your hand behind the brush stroke
Notice that your hands are always rendered on top
Taking a GPU frame capture reveals that the depth is correctly written.
Xcode: Version 16.0 beta (16A5171c)
visionOS: 2.0 (22N5252n)
Is there a way to provide CLHeading in visionOS so I can more accurately understand the direction in which a user is facing?
Thank you again for pushing the web forward in visionOS 2, super exciting!
The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR; however, there was no mention of passthrough AR experiences.
Samples such as this one are not supported:
https://immersive-web.github.io/webxr-samples/immersive-ar-session.html
In Settings > Safari, there is a feature flag for the AR WebXR module, but enabling it did not seem to change anything.
Is this the expected behavior at this time? Any developer preview(s) we could try?
My project has an integrated SwiftUI panel. I noticed that when I open the SwiftUI interface, it always appears in front of me at a fixed distance. I want it to spawn on my left side instead, with some rotation. How can I set its position in code to achieve this?
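In case it matters, the setup I have in mind is a RealityView attachment (as far as I know, a regular window's placement can't be set programmatically); the attachment ID and offsets below are made up:

import SwiftUI
import RealityKit

struct ImmersiveContent: View {
    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "panel") {
                // Roughly 0.6 m to the left, head height, 1 m forward,
                // rotated 30 degrees around Y so it angles toward the user.
                panel.position = SIMD3<Float>(-0.6, 1.3, -1.0)
                panel.orientation = simd_quatf(angle: .pi / 6, axis: SIMD3<Float>(0, 1, 0))
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "panel") {
                Text("My SwiftUI panel")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}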
Posting here as I did not see a section for the developer documentation portal.
Using the search box in the documentation portal, I searched for "frustum" hoping to find any APIs that gave me control over frustum culling.
https://developer.apple.com/search/?q=frustum&type=Documentation
The search came up empty for hits in RealityKit.
Hours later I found the boundsMargin API, which explains how it affects frustum culling.
I went back and tried the search again to verify that the documentation search results were incomplete.
site:developer.apple.com/documentation/realitykit frustum
on Google worked fine.
Fixing this can save everyone time and stress.
I was wondering if anyone had guidance on how to "livestream" MV-HEVC content. More specifically, I have a left and a right eye view for stereoscopic content (perhaps, for example, the views were taken from a stereoscopic video being passed through an AVPlayer). I know, based on sample code, that I can convert the stereoscopic video into an MV-HEVC file using AVAssetWriter. However, how would I take the stereoscopic video and encode it, in real time, to a stream that could then leverage HLS Tools to deliver to clients? Is AVFoundation capable of this directly? Or is there an API within VideoToolbox that can help with this?
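For reference, the direction I've started sketching is a VTCompressionSession configured for MV-HEVC. The MV-HEVC property keys below are written from memory, and the real-time packaging for HLS is exactly the part I'm unsure about, so treat this purely as a sketch:

import VideoToolbox
import CoreMedia

// Sketch: an HEVC compression session configured for two MV-HEVC layers
// (left and right eye). Verify the MV-HEVC keys against the VideoToolbox headers.
func makeStereoCompressionSession(width: Int32, height: Int32) -> VTCompressionSession? {
    var session: VTCompressionSession?
    VTCompressionSessionCreate(
        allocator: nil,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_HEVC,
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,
        refcon: nil,
        compressionSessionOut: &session
    )
    guard let session else { return nil }

    // Assumption: two layers, left view ID 0 and right view ID 1.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCVideoLayerIDs,
                         value: [0, 1] as CFArray)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCViewIDs,
                         value: [0, 1] as CFArray)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs,
                         value: [0, 1] as CFArray)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime,
                         value: kCFBooleanTrue)
    return session
}

The compressed sample buffers would then still need to be packaged into fMP4 segments for HLS, which is the part I haven't figured out.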