Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Post · Replies · Boosts · Views · Activity
ARKit confidence level and precision 3D model (BIM)
Hi, I want to develop an AR app for construction sites in which I need to verify the calibration quality of the 3D model against a plane. I have already retrieved information such as the TrackingState, the point cloud, and the confidence map. I would like to know whether ConfidenceLevel, which appears to be an enumeration, is available, or whether I need to analyze the point cloud to compute my own confidence level. I would also appreciate any information on how to determine the real-world precision of the 3D map.
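For reference, ARKit does expose a built-in per-pixel confidence value (ARConfidenceLevel: .low, .medium, .high) through ARFrame.sceneDepth on LiDAR-equipped devices, so you may not need to derive your own from the point cloud. A minimal sketch, assuming a world-tracking configuration run with the .sceneDepth frame semantic:

import ARKit

// Sketch: count high-confidence depth pixels in the current frame.
// Assumes a LiDAR device and configuration.frameSemantics.insert(.sceneDepth)
// before running the session.
func inspectConfidence(of frame: ARFrame) {
    guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return }

    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(confidenceMap)
    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return }

    // Each pixel is a single UInt8 whose raw value maps to ARConfidenceLevel.
    var highCount = 0
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width where ARConfidenceLevel(rawValue: Int(row[x])) == .high {
            highCount += 1
        }
    }
    print("High-confidence depth pixels: \(highCount) of \(width * height)")
}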
0
0
284
Feb ’24
Multiple active AR Sessions in RoomPlan application, who creates them?
I am running a modified RoomPlan app in my test environment and I get two active ARSessions, sometimes more. It appears that the first one is created by SceneKit because it is related to ARSCNView. Who controls that session, and what gets processed through it? I notice a lot of session interruptions from sensor failure while doing world tracking, and the first one happens almost immediately. Once the room-capture delegates fire up, I start receiving images via a second session that is collecting images. How do I tell which session is the SceneKit session and which is the RoomCapture session on the fly when a callback comes through the delegate? Is there a difference in the object descriptor I can use as a differentiator? Relying on the address of the ARSession buffer being different is okay if you get your timing right. It wasn't clear from any of the documentation that there would be two or more ARSessions delivering data through the delegates. The books on ARKit are not much help in determining the partition of responsibilities between the origins, and the fragmented developer documentation does not clearly delineate which data is delivered through which delegate. Can someone give me some guidance here? Are there sources for clear documentation of what is delivered via which delegate for the various interfaces?
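Not an authoritative answer, but one way to tell the callbacks apart at the delegate is by session identity rather than by buffer address. A rough sketch, assuming you own the ARSCNView and RoomCaptureView instances and that RoomCaptureSession exposes its underlying arSession (added alongside custom-ARSession support in iOS 17; verify against your SDK):

import ARKit
import RoomPlan
import SceneKit

// Rough sketch: route delegate callbacks by comparing session identity.
// `arSCNView` and `roomCaptureView` stand in for the views your app already owns;
// RoomCaptureSession.arSession is an assumption to verify against current headers.
final class SessionRouter: NSObject, ARSessionDelegate {
    let arSCNView: ARSCNView
    let roomCaptureView: RoomCaptureView

    init(arSCNView: ARSCNView, roomCaptureView: RoomCaptureView) {
        self.arSCNView = arSCNView
        self.roomCaptureView = roomCaptureView
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if session === arSCNView.session {
            // Frames from the session ARSCNView created for rendering.
        } else if session === roomCaptureView.captureSession.arSession {
            // Frames from the session RoomPlan is using for capture.
        } else {
            print("Unrecognized ARSession: \(ObjectIdentifier(session))")
        }
    }
}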
2
0
728
Mar ’24
Understanding shader graphs in Reality Composer Pro
I am very new to shaders and have never used one of the large systems like Unity. However, I have started exploring visionOS programming, and that led me to create some effects for materials in Reality Composer Pro. I have been overwhelmed with the possibilities, but also kind of lost. I understand that RCP's shaders are based on MaterialX, so maybe there are tutorials on the web that cover how to create procedural effects (fire, wind, water, etc.)? I've stumbled through, but it's slow going. Are there any good resources that explain how to use the various nodes to create procedural effects? For example, it took me a while to figure out that the Time node lets me animate cool color changes, especially when combined with various math and remap nodes. I'm just looking for some basic resources, I think. Would Unity shader graph tutorials apply to using RCP? Are the node types similar enough?
0
0
844
Mar ’24
Grab frames in Vision Pro using ARFrame
I'd like to grab the current camera frame in visionOS. I have a Swift file (I'm new to Swift) that looks like this:

import ARKit
import SwiftUI

class ARSessionManager: NSObject, ObservableObject, ARSessionDelegate {
    var arSession: ARSession

    override init() {
        arSession = ARSession()
        super.init()
        arSession.delegate = self
    }

    func startSession() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        arSession.run(configuration)
    }

    // ARSessionDelegate method to capture frames
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Process the frame, e.g., capture image data
    }
}

I get errors including "Cannot find type 'ARSessionDelegate' in scope". Help? Is ARFrame called something different for Vision Pro?
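For context, the iOS-flavored ARSession / ARSessionDelegate / ARFrame types are not available on visionOS; tracking there goes through ARKitSession and data providers instead, and raw camera pixel buffers are not exposed the same way. A hedged sketch of the visionOS pattern that replaces the delegate-based code above:

import ARKit
import Foundation

// Hedged sketch for visionOS: ARKitSession + data providers replace
// ARSession/ARSessionDelegate/ARFrame. This only shows the session pattern;
// it does not provide camera frames, which this path does not expose.
final class WorldTrackingManager {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async {
        do {
            // Other providers (hand tracking, plane detection, scene
            // reconstruction) go into the same array.
            try await session.run([worldTracking])
        } catch {
            print("Failed to start ARKitSession: \(error)")
        }
    }

    // Returns the headset pose at the given timestamp, if tracking is available.
    func devicePose(at timestamp: TimeInterval) -> simd_float4x4? {
        worldTracking.queryDeviceAnchor(atTimestamp: timestamp)?.originFromAnchorTransform
    }
}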
1
0
746
Mar ’24
Vision Pro - let's join forces to improve the visionOS platform
Hi guys, if you've started using Vision Pro, I'm sure you've already found some limitations. Let's join forces and make feature requests. When filing Feedback, a request from one person may not get any attention from Apple, but if more of us make the same request, we might just push those ideas through. Feel free to add your ideas, and don't forget to file feedback:

- App windows can only be moved forward to a distance of about 20 ft / 6 m. I'm pretty sure some users would like to push a window as far as a few miles away and make it large enough to remain legible. This would be especially interesting when using Environments and the 360-degree view. I really want to put some apps up in the sky above the mountains and around me, even iOS apps that are merely compatible with Vision Pro.
- When capturing the screen, I always get the message "Video capture not possible due to insufficient lighting." Why? I have an Environment loaded and extended to 360 degrees with some apps open, so there should be no need for external lighting (at least I think it's not needed). I just want to capture what I see. Imagine creating tutorials, recording lessons for various subjects, and so on. Actual Vision Pro users might prefer loading their own environments and setting up apps in the spatial domain, but for those who don't have the device yet, or when creating videos for antique 2D computer screens, it would be useful to create 2D videos this way.
- 3D video recording is not very good; it's kind of shaky. Not when Vision Pro is static, but when walking and especially when turning the head left/right/up/down (even relatively slowly). I think the hardware should be able to capture nice, smooth video. It's possible that Apple just designed a simple Camera app and wants to give developers a chance to create a better one, but it would still be nice to have something better out of the box.
- I would like to be able to walk through Environments. I understand the safety purpose of the see-through effect, so users don't hit obstacles, but perhaps obstacles could be detected: when the user gets within 6 ft / 2 m of an obstacle, show a warning first (there is already "You are close to an object") and then make the surroundings visible. If there are no obstacles (the user may be in a large space and can place a tape or a thread around the safe area), I should be able to walk around and take a look inside that crater on the Moon.
- We need Environments, Environments, Environments, and yet more of them. I was hoping for hundreds, so we could even pick some of them and use them in our apps, like games where you want to set up a specific environment.

Well, that's just a beginning and I could go on and on, but tell me what you guys think. Regards and enjoy the new virtual adventure! Robert
5
0
1.1k
Mar ’24
Reality Composer Pro node previews?
I have been digging into learning shader graphs by watching Unity shader graph content, because lots of the same concepts apply. One thing I noticed was that in Unity, each node in the shader graph has a little preview. I don't think this exists in Reality Composer Pro, but is there any way to mimic it (like hooking up a node that allows me to debug the graph at that point)? If not, I'm happy to just file a feedback about it, but I thought I'd ask!
3
0
939
Mar ’24
Reality Composer Pro - animate per vertex with noise?
I am struggling to figure out how to make a shader that animates each vertex of a model separately using noise. I watched a video on how to do this in Unity, but I think something must be different with how Reality Composer Pro handles the noise nodes. For example, in this graph I just hooked up the noise node directly to the geometry modifier: in my output you can see the plane is adjusted per-vertex by the noise node. My goal is to animate this like waves, by moving the noise. So in this graph I use Time with Sine to adjust the UV of the noise. This seems to change the noise node to output a single value (I guess that makes sense: since I modify the UV, it results in a single value, at that UV in the noise map). I then take that as the Y value and feed it back into the geometry modifier. But now it doesn't work per-vertex; it moves the whole model up and down (based on the single value coming out of the noise map). How do I make this apply to each vertex of the model individually? This is an example of the output I want in Unity: the plane is adjusted per-vertex by a scrolling 2D noise node.
3
0
1.2k
Mar ’24
"Meet Reality Composer Pro" - Spatial Audio Problem
I'm following the Meet Reality Composer Pro walkthrough and ran into something that didn't work as expected. When I got to the step where I add five "Bird_With_Audio.usda" references to the scene, I found they did not play audio. After some trial and error, I found that Preview > Resource in each of their Spatial Audio items was set to "None." If I click the dropdown menu, I see several "Bird_Calls" groups to pick from. I checked the original Bird_With_Audio.usda that I had created, and the "Bird_Calls" audio group was correctly assigned and worked. I tried dragging a sixth Bird_With_Audio into the scene and confirmed that its Spatial Audio item suddenly empties, rendering the bird silent. I was able to go through each of the five birds and set their Spatial Audio resource to Bird_Calls, and the group worked like the video demonstrates. While this fixed the issue, as a beginner I'd like to know why it happened. It doesn't seem right that I would build an item and then have to re-attach its sounds when I place it in the main scene. So, where did I mess up?
0
0
578
Mar ’24
Non-convex collision?
Hi! I think this should be a pretty normal use of ARKit / RealityKit. I have a static mesh for my environment that I want to have static collision properties. My options for making this interact with dynamic bodies are ShapeResource.generateConvex(...), which overshoots my shape dramatically, and Entity.generateCollisionShapes(...), which also overshoots. I notice an additional API on ShapeResource: generateStaticMesh(positions:faceIndices:) seems to be exactly what I need. So far, I haven't been able to invoke it successfully to set my collision shape. Questions: Isn't this a completely normal thing for developers to want to do? Why is there no out-of-the-box support for it in RealityKit/ARKit? Everywhere I've read says that to support this in my app I need to parse the .obj of my terrain manually, find the triangulated faces, and pipe them into this function. That feels like a very standard process, and given that RealityKit already forces me to use .usdz, why shouldn't this be part of the SDK? Regardless, I triangulated my terrain mesh and have been working on parsing code to get the positions and faceIndices for this (as an extension on Entity). Is this the right approach? Am I missing something more obvious? Thanks, Justin
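For what it's worth, a minimal sketch of calling the static-mesh path once the triangulated data is in hand. The async/throws signature of generateStaticMesh(positions:faceIndices:) is assumed and should be checked against your RealityKit SDK; the positions/indices arrays are whatever your own parsing code produces:

import RealityKit

// Sketch: attach a concave, static-mesh collider to an entity.
// The async/throws form of generateStaticMesh(positions:faceIndices:) is assumed;
// verify the exact signature in your SDK.
extension Entity {
    func applyStaticMeshCollision(positions: [SIMD3<Float>],
                                  faceIndices: [UInt16]) async throws {
        let shape = try await ShapeResource.generateStaticMesh(positions: positions,
                                                               faceIndices: faceIndices)
        // Static bodies may use concave shapes; dynamic bodies generally cannot.
        components.set(CollisionComponent(shapes: [shape]))
        components.set(PhysicsBodyComponent(shapes: [shape], mass: 0, mode: .static))
    }
}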
6
0
866
Mar ’24
ARKit: mapping from screen coordinates to camera-frame coordinates (measurement application)
I'm working on the following problem: for a measurement application, I want to take a picture of something lying on the ground. Given that the floor plane is detected, I plan to raycast from the four corners of the screen; where the raycasts land on this plane, I want to use those coordinates to do a perspective transform (warp) of the camera image onto the new coordinates. This way I should be able to perform pixels-per-cm measurements. The problem I have is that the screen coordinates don't seem to correspond to the camera-frame coordinates, and I'm not sure how to convert from one to the other.
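One possible path (a sketch, not a verified solution): ARFrame.displayTransform(for:viewportSize:) maps normalized image coordinates to normalized view coordinates, so inverting it takes a screen point into camera-image pixel coordinates before the perspective warp. The orientation and view size below are assumptions you would supply from your own view:

import ARKit
import UIKit

// Sketch: convert a point in view (screen) coordinates into camera-image pixel
// coordinates via the inverse of ARFrame's display transform.
func cameraImagePoint(fromViewPoint viewPoint: CGPoint,
                      frame: ARFrame,
                      viewSize: CGSize,
                      orientation: UIInterfaceOrientation = .portrait) -> CGPoint {
    // Normalize the view point to [0, 1].
    let normalizedView = CGPoint(x: viewPoint.x / viewSize.width,
                                 y: viewPoint.y / viewSize.height)

    // displayTransform goes image -> view space, so invert it for view -> image.
    let viewToImage = frame.displayTransform(for: orientation,
                                             viewportSize: viewSize).inverted()
    let normalizedImage = normalizedView.applying(viewToImage)

    // Scale back up to pixel coordinates in the captured image.
    let imageWidth = CVPixelBufferGetWidth(frame.capturedImage)
    let imageHeight = CVPixelBufferGetHeight(frame.capturedImage)
    return CGPoint(x: normalizedImage.x * CGFloat(imageWidth),
                   y: normalizedImage.y * CGFloat(imageHeight))
}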
0
0
382
Mar ’24
Rotate an entity with the attachments
Hi, I create an entity and add a bunch of attachments (the code is based on the Diorama demo). I can rotate the entity with this:

.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            let orientation = Rotation3D(entity.orientation(relativeTo: nil))
            let newOrientation: Rotation3D
            if value.location.x >= lastGestureValue {
                newOrientation = orientation.rotated(by: .init(angle: .degrees(0.5), axis: .y))
            } else {
                newOrientation = orientation.rotated(by: .init(angle: .degrees(-0.5), axis: .y))
            }
            entity.setOrientation(.init(newOrientation), relativeTo: nil)
            lastGestureValue = value.location.x
        }
)

But the attachments stay still. How can I rotate the entity AND the attachments at the same time?
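Not a definitive fix, but the usual pattern is to parent each attachment entity to the entity you rotate, so the attachments inherit its transform. A sketch assuming a RealityView with a hypothetical attachment id of "label"; the drag gesture above would then target the parent entity:

import SwiftUI
import RealityKit

// Sketch: add attachments as children of the rotated entity so they rotate with it.
// "label" is a placeholder attachment id; replace with your own ids.
struct RotatingDioramaView: View {
    var body: some View {
        RealityView { content, attachments in
            let parent = Entity()
            content.add(parent)

            if let label = attachments.entity(for: "label") {
                label.position = [0, 0.2, 0]
                parent.addChild(label)   // children inherit the parent's orientation
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Hello").padding().glassBackgroundEffect()
            }
        }
        // Attach the drag-rotation gesture to `parent`; since the attachment is a
        // child, setOrientation on the parent moves both together.
    }
}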
2
0
797
Mar ’24
Custom material getting converted to PhysicallyBasedMaterial
I have a custom material in Reality Composer. When I attach it to a cube and try loading the scene in Xcode, the material cannot be cast to a ShaderGraphMaterial because it has been changed to a PhysicallyBasedMaterial. The material was always a custom material; I did not change the type in Reality Composer. Does anyone know how to fix this?
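A small diagnostic sketch that may help narrow this down: print the concrete material types that actually arrive at runtime before attempting the cast. The scene and entity names, and the realityKitContentBundle symbol from the visionOS app template, are placeholders here:

import RealityKit
import RealityKitContent

// Diagnostic sketch: inspect the concrete material types on the loaded cube.
// "Scene" and "Cube" are placeholder names; substitute your own.
func dumpMaterials() async throws {
    let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
    guard let model = scene.findEntity(named: "Cube")?.components[ModelComponent.self] else {
        return
    }
    for material in model.materials {
        print(type(of: material))   // e.g. ShaderGraphMaterial vs PhysicallyBasedMaterial
        if let shaderGraph = material as? ShaderGraphMaterial {
            print("Parameters:", shaderGraph.parameterNames)
        }
    }
}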
1
0
972
Mar ’24
Converting a Unity model / prefab to USDZ
We are porting an iOS Unity AR app to native visionOS. Ideally, we want to reuse our AR models in both applications. These AR models are rather simple, but converting them manually would still be time-consuming, especially when it comes to the shaders. Is anyone aware of any attempts to write conversion tools for this? Maybe in other ecosystems like Godot or Unreal, where folks also want to convert the proprietary Unity format to something else? I've seen there's an FBX converter, but that wouldn't handle shaders or particles. I am basically looking for something like PolySpatial's internal conversion tools, but without the heavy weight of the rest of Unity. Alternatively, is there a way to export a Unity project to visionOS and then just take the models out of the Xcode project?
0
0
553
Mar ’24
Adding 2D PNG to Reality Composer Pro
I've got a couple of 2D PNG assets that I want to add to a scene made of a couple of other .usdz files in RCP (picture adding a couple of 2D video game characters to a simple 3D diorama). When I try to drag the PNGs into the workspace or the file tree, nothing happens. I found a walkthrough on Medium (called "Importing and Exporting Personalized Objects for Augmented Reality: Reality Composer and SwiftUI", for those curious, as I can't link to Medium posts here) that makes it look like users could do this with a simple drag and drop. The Medium post is from June 2023, and in the screenshots RCP visually looks a lot more like Reality Composer on iPad, so I'm assuming it has changed a lot since then? Is there still a way to do this? I've tried adding the 2D elements to a scene with Blender's "import images as planes," but I'm getting weird halos around them and was hoping RCP could make the process a bit easier/cleaner.
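As a fallback while the drag-and-drop behavior is unclear, the sprite can also be built in RealityKit code: load the PNG as a texture and place it on an unlit plane. A hedged sketch; "character" is a placeholder asset name and the sizing is arbitrary:

import RealityKit

// Sketch: display a 2D PNG in a 3D scene as a textured, unlit plane.
// "character" is a placeholder for a PNG in the app bundle.
func makeSpriteEntity() throws -> ModelEntity {
    let texture = try TextureResource.load(named: "character")

    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    // Transparent blending so the PNG's alpha channel is respected.
    material.blending = .transparent(opacity: .init(floatLiteral: 1.0))

    // Keep the plane's aspect ratio in sync with the image (0.2 m tall here).
    let aspect = Float(texture.width) / Float(texture.height)
    let mesh = MeshResource.generatePlane(width: 0.2 * aspect, height: 0.2)
    return ModelEntity(mesh: mesh, materials: [material])
}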
1
0
628
Mar ’24