visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts
Memory for Apps under visionOS
I have a visionOS app with a memory requirement greater than 5120 MB. Is there any solution other than optimizing resources, dynamic loading, and splitting into sub-scenarios? My questions:
1. What is the maximum running memory for an app under visionOS?
2. Is it possible to raise an app's maximum running memory?
3. Does visionOS provide an interface for jumping between apps?
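For reference, a minimal sketch of checking at runtime how much memory the process may still allocate. This assumes os_proc_available_memory, which Apple documents for iOS-family platforms, behaves the same way on visionOS:

import os

// Hedged sketch: os_proc_available_memory reports the number of bytes the
// app may still allocate before reaching the system's per-process limit.
// Assumption: the call is available and meaningful on visionOS.
func logAvailableMemory() {
    let availableBytes = os_proc_available_memory()
    print("Approx. memory still available: \(availableBytes / 1_048_576) MB")
}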
Replies: 0 · Boosts: 0 · Views: 32 · Activity: 5h
[App synchronization] I have a question about synchronizing Vision Pro app contents.
Hi! I'm creating an app like this: it uses Image Tracking to set a world anchor in the real world first. The timeline in the Reality Composer Pro scene then needs to play at the same time for all the people using the app in the same place, so everyone sees the same content in the same position at the same time. I already got the Image Tracking feature working, but the big problem is synchronization. I found Group Activities and TabletopKit as candidates to solve the problem, but I don't know if these are the right frameworks for this project. How do I solve this problem technically? If you have ideas, please let me know. I really need help with this.
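For what it's worth, a minimal sketch of what a Group Activities declaration for this could look like; the identifier and metadata are hypothetical, and whether Group Activities fits a co-located, image-anchored session is exactly the open question here:

import GroupActivities

// Hedged sketch: a shared activity that every participant joins, so one
// device can broadcast "play the timeline now" to the others.
struct SharedTimelineActivity: GroupActivity {
    // Hypothetical identifier for illustration only.
    static let activityIdentifier = "com.example.shared-timeline"

    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Scene Playback"
        meta.type = .generic
        return meta
    }
}

// Once a GroupSession<SharedTimelineActivity> is active, a GroupSessionMessenger
// could carry a start signal, and each device would then play its local
// Reality Composer Pro timeline on receipt.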
Replies: 0 · Boosts: 0 · Views: 86 · Activity: 18h
Horizontal window/map on visionOS
Hi, would anyone be so kind as to guide me on which technologies, kits, APIs, and approaches are useful for creating a horizontal window with a map (preferably MapKit) on visionOS using SwiftUI? The scenario I'm hoping to achieve: the user can walk around and interact with a horizontal map window, and also interact with (3D) pins on the map. SAP did something similar in their "SAP Analytics Cloud" app (second image from top). Since I'm a complete beginner in this area, I'm looking for a clean, simple solution. I need to know whether AR/RealityKit is necessary, or whether this is achievable using native SwiftUI alone. I tried using just Map() with .rotation3DEffect(), which does make the map horizontal, but gestures on the map are then out of sync, and I really don't know whether this approach is valid or complete rubbish. Any feedback appreciated.
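For context, a minimal sketch of the .rotation3DEffect() attempt described above; it reproduces the setup rather than fixing it, since the hit-testing mismatch after rotation is the unresolved part:

import SwiftUI
import MapKit

// Hedged sketch: lay a SwiftUI Map flat by rotating the view -90 degrees
// around the X axis. Gestures may stay registered to the window's original
// vertical plane, which would explain the reported desynchronization.
struct HorizontalMapView: View {
    var body: some View {
        Map()
            .frame(width: 600, height: 600)
            .rotation3DEffect(.degrees(-90), axis: (x: 1, y: 0, z: 0))
    }
}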
Replies: 0 · Boosts: 0 · Views: 55 · Activity: 20h
Post processing in visionOS
WWDC21 had a cool demo project with fish, with a watery, misty look ("Dive into RealityKit"). It used post processing in RealityKit, but the ARView class isn't available on visionOS. Can CompositorLayer be used instead for post processing in full immersion?
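For reference, a rough sketch of how a Metal render loop is hosted with CompositorLayer in a fully immersive space; the app name and render-loop body are placeholders, and whether this is a practical substitute for ARView post processing is the open question:

import SwiftUI
import CompositorServices

// Hedged sketch, assuming the default layer configuration is acceptable.
// Any post processing would have to be written into your own Metal passes.
struct ImmersiveRendererApp: App { // hypothetical app for illustration
    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            CompositorLayer { layerRenderer in
                // Drive the Metal render loop on a dedicated thread.
                let thread = Thread { renderLoop(layerRenderer) }
                thread.name = "RenderThread"
                thread.start()
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}

// Placeholder render loop: a real one queries frames from the LayerRenderer
// and encodes Metal work, applying any post effects in its own shaders.
func renderLoop(_ layerRenderer: LayerRenderer) {
    // ...
}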
Replies: 0 · Boosts: 0 · Views: 47 · Activity: 1d
Maintain rotation direction while dragging
Hi, I have a problem with a visionOS app and I couldn't find a solution. I have a 3D carousel of cards; when I use the drag gesture and drag to the left, I want the carousel to rotate clockwise, and when I drag to the right, I want it to rotate counterclockwise. My problem: when I rotate my body more than 90 degrees to the left or right, the drag gesture's values flip and the carousel rotates in the opposite direction. How can I maintain the correct rotation direction, taking into account that the user can rotate their body? I've tried reading the user's orientation with device tracking and checking whether the rotation on the Y axis is greater than 90 degrees in either direction, but there is a small band, roughly between 70 and 110 degrees, where it still rotates in the opposite direction. I think that's because the device tracker doesn't update at the same rate as the drag gesture, or doesn't have the same accuracy.
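One direction worth exploring, sketched below under assumptions: convert the drag locations into scene space with the targeted-entity value's convert(_:from:to:), so the sign of the horizontal motion no longer depends on which way the user's body faces. A hedged sketch, not a verified fix:

import SwiftUI
import RealityKit

// Hedged sketch: rotate the carousel from the drag delta expressed in the
// scene's fixed coordinate space rather than the gesture's local space.
struct CarouselView: View {
    var carousel: Entity // the carousel root, supplied elsewhere

    var body: some View {
        RealityView { content in
            content.add(carousel)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Express both drag points in scene coordinates so the
                    // delta is independent of the user's body orientation.
                    let start = value.convert(value.startLocation3D, from: .local, to: .scene)
                    let current = value.convert(value.location3D, from: .local, to: .scene)
                    let delta = current - start
                    // Arbitrary sensitivity; derived from the total drag delta,
                    // so a real implementation would compose with the rotation
                    // captured at gesture start.
                    let angle = delta.x * 0.01
                    value.entity.transform.rotation = simd_quatf(angle: angle, axis: [0, 1, 0])
                }
        )
    }
}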
Replies: 0 · Boosts: 0 · Views: 85 · Activity: 1d
Hover Effect & Tap Problem
Hi, since I updated my device to visionOS 2.0 or later I have had some problems with my app. Sometimes when I move my eyes over buttons, or over the TabView (a SwiftUI component many apps use), the hover effect no longer triggers; I can tap the TabView icon, but it doesn't expand on hover to show the button's description. It's weird because it doesn't happen all the time, and on visionOS 1.0 it doesn't happen at all. My second issue: I have a navbar as an attachment, and this attachment has a draggable modifier that we created. Its buttons are not tappable until I drag the navbar a little. This also never happened on visionOS 1.0, but always happens on visionOS 2.0+; I've tested in the simulator with different versions. Could these problems be related to the hand-tracking service we use from ARKit? Sometimes, when we stop the trackers, the app works as intended.
Replies: 0 · Boosts: 0 · Views: 108 · Activity: 2d
Rotating ModelEntities (without Gestures) Help
I'm building a proof-of-concept application leveraging the PlaneDetectionProvider to generate UI and interactive elements on a horizontal plane the user is looking at. I'm able to create a cube at the centroid of the plane and change its location via position. However, I can't seem to rotate the cube programmatically, and from this forum post from September I'm not sure whether the modelEntity.move functionality is still bugged or the documentation is not up to date.

if let planeCentroid = planeEntity.centroid {
    // Create a cube at the centroid, with a side length of 0.1 meters.
    let cubeMesh = MeshResource.generateBox(size: 0.1)
    let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: false)
    let cubeEntity = ModelEntity(mesh: cubeMesh, materials: [cubeMaterial])
    cubeEntity.position = planeCentroid
    cubeEntity.position.y += 0.3048
    planeEntity.addChild(cubeEntity)

    let rotationY = simd_quatf(angle: Float(45.0 * .pi / 180.0), axis: SIMD3(x: 0, y: 1, z: 0))
    let cubeTransform = Transform(rotation: rotationY)
    cubeEntity.move(to: cubeTransform, relativeTo: planeEntity, duration: 5, timingFunction: .linear)
}

Ideally, I'd like the cube to start/stop rotating when the user pinches on the plane mesh, but I'd be happy just to see it rotate!
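As a point of comparison, here is a hedged sketch of a different way to get a visible rotation without move(to:): a custom RealityKit System that composes a small Y-axis rotation every frame. SpinComponent and its speed value are invented for illustration:

import RealityKit

// Hedged sketch: a marker component plus a System that spins any tagged
// entity around its local Y axis on every rendering update.
struct SpinComponent: Component {
    var radiansPerSecond: Float = .pi / 4
}

struct SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))
    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            let delta = simd_quatf(
                angle: spin.radiansPerSecond * Float(context.deltaTime),
                axis: [0, 1, 0]
            )
            entity.transform.rotation = delta * entity.transform.rotation
        }
    }
}

// Registration (once, e.g. at app launch), then tag the cube:
//   SpinComponent.registerComponent()
//   SpinSystem.registerSystem()
//   cubeEntity.components.set(SpinComponent())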
Replies: 2 · Boosts: 0 · Views: 133 · Activity: 2d
Info from CMSampleBuffer
Which information can I get from a CMSampleBuffer? Is there an option to block the camera's close-up accommodation (focus)? Is there a way for the Object Capture module to take a video instead of a series of pictures? It would be fantastic to get an answer to all of these questions so we can move forward on new implementations.
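On the first question, a minimal sketch of the kinds of data commonly pulled from a CMSampleBuffer (timing, pixel dimensions, format); what a particular capture pipeline actually attaches to each sample varies:

import CoreMedia
import CoreVideo

// Hedged sketch: read the usual fields out of a CMSampleBuffer.
func inspect(_ sampleBuffer: CMSampleBuffer) {
    // Presentation timestamp and duration of this sample.
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    let duration = CMSampleBufferGetDuration(sampleBuffer)
    print("pts: \(CMTimeGetSeconds(pts)) s, duration: \(CMTimeGetSeconds(duration)) s")

    // For video samples, the payload is usually a CVPixelBuffer.
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let format = CVPixelBufferGetPixelFormatType(pixelBuffer)
        print("pixel buffer: \(width)x\(height), format \(format)")
    }

    // The format description carries codec and dimension metadata.
    if let formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer) {
        print("media type: \(CMFormatDescriptionGetMediaType(formatDesc))")
    }
}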
Replies: 0 · Boosts: 0 · Views: 102 · Activity: 3d
Tracking Multiple Instances of the Same Object on the Apple Vision Pro
Hello everyone, I'm currently working through the example project Exploring object tracking with ARKit to learn how to use object tracking with visionOS. I was able to modify the example project to overlay a different model onto the detected object. However, with the example code, I'm only able to track one instance of the trained object at a time. I'm wondering if there is a way to track multiple instances of one object. For example, I have a USDZ model of a box and trained that box for object tracking; I'm able to overlay a model of a chair over that box once it's detected. But now I have multiple copies of that same box, and I want to arrange them so that, when I wear the Vision Pro, I can see the chairs arranged however I want. I'm still new to visionOS development, so I'm not sure if there's a way to accomplish that by training just one object and having copies of it. If it helps, this is my current modification to overlay a virtual object on top of a detected object:

func loadReferenceObjects() async -> [ReferenceObject] {
    // Get a list of all reference object files in the app's main bundle and attempt to load each.
    var referenceObjectFiles: [String] = []
    if let resourcesPath = Bundle.main.resourcePath {
        print("resource path: \(resourcesPath)")
        try? referenceObjectFiles = FileManager.default
            .contentsOfDirectory(atPath: resourcesPath)
            .filter { $0.hasSuffix(".referenceobject") }
    }

    await withTaskGroup(of: Void.self) { group in
        for file in referenceObjectFiles {
            let objectURL = Bundle.main.bundleURL.appending(path: file)
            group.addTask {
                // Get the file name without its extension.
                let fileNameWithoutExtension = (file as NSString).deletingPathExtension
                // Load each reference object as a task.
                await self.loadSingleReferenceObject(url: objectURL, fileName: fileNameWithoutExtension)
            }
        }
    }
    return self.referenceObjects
}

// Private helper to load a single reference object and assign an entity to it.
private func loadSingleReferenceObject(url: URL, fileName: String) async {
    var referenceObject: ReferenceObject
    do {
        print("Loading reference object from \(url)")
        // Load the file as a `ReferenceObject` - this can take a while for larger objects.
        try await referenceObject = ReferenceObject(from: url)
    } catch {
        fatalError("Failed to load reference object with error \(error)")
    }
    // Add the reference object to the reference objects array.
    self.referenceObjects.append(referenceObject)

    // Entity associated with each file.
    var model: Entity = Entity()

    // Pick the entity according to the file name.
    switch fileName {
    case "Box1":
        // Box1 reference object binds to Chair1.
        do {
            try await model = Entity(named: "chair1", in: realityKitContentBundle)
        } catch {
            print("Failed to load chair1")
        }
    case "Box2":
        // Box2 reference object binds to Chair2.
        do {
            try await model = Entity(named: "chair2", in: realityKitContentBundle)
        } catch {
            print("Failed to load chair2")
        }
    default:
        print("no model associated with this file name: \(fileName)")
    }

    // Map the entity to its reference object.
    usdzPerReferenceObject[referenceObject.id] = model
}

Any help or suggestions would be greatly appreciated. Thank you.
Replies: 2 · Boosts: 0 · Views: 154 · Activity: 2d
Compilation error in CoreFoundation - visionOS
When trying to build Wireshark I'm getting the following errors. Any idea how to solve it?

[ 13%] Building C object wsutil/CMakeFiles/wsutil.dir/os_version_info.c.o
In file included from wireshark/wsutil/os_version_info.c:23:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CoreFoundation.h:54:
/Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CFBase.h:676:195: error: expected ','
  676 | void *CFAllocatorAllocateTyped(CFAllocatorRef allocator, CFIndex size, CFAllocatorTypeID descriptor, CFOptionFlags hint) API_AVAILABLE(macos(15.0), ios(18.0), watchos(11.0), tvos(18.0), visionos(2.0));
      |                                                                                                                                                                                                  ^
/Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CFBase.h:679:211: error: expected ','
  679 | void *CFAllocatorReallocateTyped(CFAllocatorRef allocator, void *ptr, CFIndex newsize, CFAllocatorTypeID descriptor, CFOptionFlags hint) API_AVAILABLE(macos(15.0), ios(18.0), watchos(11.0), tvos(18.0), visionos(2.0));
      |                                                                                                                                                                                                                  ^
/Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CFBase.h:682:165: error: expected ','
  682 | void *CFAllocatorAllocateBytes(CFAllocatorRef allocator, CFIndex size, CFOptionFlags hint) API_AVAILABLE(macos(15.0), ios(18.0), watchos(11.0), tvos(18.0), visionos(2.0));
      |                                                                                                                                                                    ^
/Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CFBase.h:685:181: error: expected ','
  685 | void *CFAllocatorReallocateBytes(CFAllocatorRef allocator, void *ptr, CFIndex newsize, CFOptionFlags hint) API_AVAILABLE(macos(15.0), ios(18.0), watchos(11.0), tvos(18.0), visionos(2.0));
      |                                                                                                                                                                                    ^
In file included from wireshark/wsutil/os_version_info.c:23:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CoreFoundation.h:73:
/Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CFNumberFormatter.h:144:147: error: expected ','
  144 | CF_EXPORT const CFNumberFormatterKey kCFNumberFormatterMinGroupingDigits API_AVAILABLE(macos(15.0), ios(18.0), watchos(11.0), tvos(18.0), visionos(2.0)); // CFNumber
      |                                                                                                                                                  ^
5 errors generated.
make[2]: *** [wsutil/CMakeFiles/wsutil.dir/os_version_info.c.o] Error 1
make[1]: *** [wsutil/CMakeFiles/wsutil.dir/all] Error 2
make: *** [all] Error 2
Replies: 1 · Boosts: 0 · Views: 100 · Activity: 6d
How to roll a ball with physics in RealityKit
I decided to use a club to hit a ball and let it roll on the turf in RealityKit, but right now I can only make it slide, not roll. I added collision on the turf (static), the club (kinematic), and the ball (dynamic), and set some parameters: radius and mass. From these parameters I calculate linear damping and inertia; in addition, I use the time between frames and the club position to calculate speed. The code looks like this:

let radius: Float = 0.025
let mass: Float = 0.04593 // mass, in kg
var inertia = 2/5 * mass * pow(radius, 2)

let currentPosition = entity.position(relativeTo: nil)
let distance = distance(currentPosition, rgfc.lastPosition)
let deltaTime = Float(context.deltaTime)
let speed = distance / deltaTime

let C_d: Float = 0.47 // drag coefficient
// Linear damping (1.2 is the air density).
let linearDamping = 0.5 * 1.2 * pow(speed, 2) * .pi * pow(radius, 2) * C_d

entity.components[PhysicsBodyComponent.self]?.massProperties.inertia = SIMD3<Float>(inertia, inertia, inertia)
entity.components[PhysicsBodyComponent.self]?.linearDamping = linearDamping

// Force
let acceleration = speed / deltaTime
let forceDirection = normalize(currentPosition - rgfc.lastPosition)
let forceMultiplier: Float = 1.0
let appliedForce = forceDirection * mass * acceleration * forceMultiplier
entityCollidedWith.addForce(appliedForce, at: rgfc.hitPosition, relativeTo: nil)

I also tried applyImpulse instead of addForce, like:

let linearImpulse = forceDirection * speed * forceMultiplier * mass

No matter how I adjust the friction (static, dynamic) and restitution, whether using addForce or applyImpulse, the ball only slides. How can I solve this problem?
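For comparison, a hedged sketch of a ball configured so that contact friction can convert sliding into rolling. The friction and restitution values are guesses; the key physical point is that either sufficient surface friction or an impulse applied off-center (at the contact point) is what generates the torque that makes a sphere roll:

import RealityKit
import UIKit // UIColor for SimpleMaterial

// Hedged sketch: build a dynamic ball whose physics material has enough
// friction for the turf contact to induce spin instead of pure sliding.
func makeRollingBall(radius: Float, mass: Float) -> ModelEntity {
    let ball = ModelEntity(
        mesh: .generateSphere(radius: radius),
        materials: [SimpleMaterial(color: .white, isMetallic: false)]
    )
    // Guessed values: high friction grips the surface; modest restitution.
    let material = PhysicsMaterialResource.generate(
        staticFriction: 0.8,
        dynamicFriction: 0.6,
        restitution: 0.3
    )
    let shape = ShapeResource.generateSphere(radius: radius)
    ball.components.set(CollisionComponent(shapes: [shape]))
    ball.components.set(PhysicsBodyComponent(
        shapes: [shape],
        mass: mass,
        material: material,
        mode: .dynamic
    ))
    return ball
}

// An explicit spin can also be added directly; addTorque is available on
// physics-body entities:
//   ball.addTorque([0, 0, -0.001], relativeTo: nil)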
Replies: 0 · Boosts: 0 · Views: 180 · Activity: 1w
Creating a multiview video playback experience in visionOS. There is no back button on the player.
Feature introduction: https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/

When I use this feature, my video player has no back action. We also did not find the method "addChildViewControllerAndView(form)" that the system supposedly provides. Referencing this document did not work either: https://developer.apple.com/documentation/avkit/adopting-the-system-player-interface-in-visionos

As soon as you add these lines of code:

let playerController = AVPlayerViewController()
// Enable the multiview experience along with the default recommended set.
playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])

there is no back button, only full screen and zoom out.
Replies: 6 · Boosts: 0 · Views: 200 · Activity: 1d
Limited Window Manipulation in Mixed Immersive Style in visionOS
In visionOS 2.1 and 2.2, I'm encountering a significant limitation when using the .immersionStyle(selection: .constant(.mixed), in: .mixed) mode, i.e., the mixed immersive style. Here's a breakdown of the behavior: In full immersion (.immersionStyle(selection: .constant(.full), in: .full)), users can interact with and manipulate system windows while inside a 3D model, with typical interactions like moving windows, pinching, or activating UI switches. However, in mixed immersive mode, using the exact same layout "inside" a 3D model (which doesn't visually obstruct the window), users are unable to interact with window content or move the window. Basic interactions like pinching or toggling switches require users to physically touch these elements in AR space, which is inconsistent with the behavior in full immersion. From a usability perspective, this restriction seems unnecessary; the software should ideally allow similar interaction capabilities across both immersive styles. The expected behavior is to enable window manipulation within a 3D model in mixed mode, matching the functionality observed in full immersion. The scene in question is a house in which the user is placed during the immersion, which is why I refer to the user being "inside" the scene. Has anyone else experienced this or found a workaround?
Replies: 1 · Boosts: 0 · Views: 158 · Activity: 1w
Cannot Transcribe Audio During SharePlay in visionOS
I've encountered an issue when trying to transcribe audio during a SharePlay session in visionOS. Specifically, the AVAudioSession appears to fail when sharing audio, preventing successful transcription. The problem seems related to AVAudioSession.sharedInstance() and the .mixWithOthers option, which is supposed to let multiple audio sources coexist without interference. Here's the relevant code snippet that throws the error:

private static func prepareEngine() throws -> (AVAudioEngine, SFSpeechAudioBufferRecognitionRequest) {
    let audioEngine = AVAudioEngine()
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true

    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers, .allowBluetooth])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

    let inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        request.append(buffer)
    }

    audioEngine.prepare()
    try audioEngine.start()
    return (audioEngine, request)
}

The setup is designed to initialize an AVAudioEngine and an SFSpeechAudioBufferRecognitionRequest for real-time transcription, but it fails within the SharePlay context. Notably, while .mixWithOthers is intended to handle concurrent audio sessions, it doesn't appear to work as expected during SharePlay. The audioSession.setActive(true) line is where the setup typically fails, with no clear way to proceed. Has anyone else faced similar issues with AVAudioSession and SharePlay in visionOS? Any insights on how to manage audio sharing or transcription during a SharePlay session would be greatly appreciated! The specific error is: The operation couldn't be completed. (com.apple.coreaudio.avfaudio error 561145187.)
Replies: 0 · Boosts: 0 · Views: 93 · Activity: 1w
RealityKit Rotation Origin
I am trying to rotate topEntity around the origin point of shapeEntity, but I have not found a way to do so. topEntity is an entity group that also contains shapeEntity, so I cannot set topEntity as a child of shapeEntity. In Blender I set the correct origin of topEntity, but when I import the USD model into Reality Composer Pro it does not preserve the origin point, and there is no way to set the origin in Reality Composer Pro.

DragGesture()
    .targetedToEntity(where: .has(CustomComponent.self))
    .onChanged({ value in
        let rotation = -Float(value.translation.height)
        let clampedRotation = min(max(rotation, 0), 45)
        if value.entity.name == "grab" {
            if let topEntity = selectedEntity.findEntity(named: "top"),
               let shapeEntity = selectedEntity.findEntity(named: "Shape_1") {
                topEntity.transform.rotation = simd_quatf(
                    angle: clampedRotation * .pi / 180,
                    axis: SIMD3(x: 0, y: 0, z: 1)
                )
            }
        }
    })
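One general technique, sketched below under assumptions: rotate an entity around an arbitrary pivot point by composing the transform manually, so no parent-child relationship between the two entities is required. Both entities are assumed to be expressible in the same parent space:

import RealityKit

// Hedged sketch: rotate `entity` around `pivotEntity`'s origin by rotating
// both its position (about the pivot) and its orientation.
func rotate(_ entity: Entity, aroundOriginOf pivotEntity: Entity,
            angle: Float, axis: SIMD3<Float>) {
    let pivot = pivotEntity.position(relativeTo: entity.parent)
    let q = simd_quatf(angle: angle, axis: normalize(axis))
    // Move the entity along a circle centered on the pivot...
    entity.position = pivot + q.act(entity.position - pivot)
    // ...and spin it by the same amount so it stays rigidly attached.
    entity.orientation = q * entity.orientation
}

Because the drag above clamps to an absolute angle, each onChanged call would pass only the delta between the current and previous frame's angle into this helper.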
Replies: 1 · Boosts: 0 · Views: 86 · Activity: 1w
Prevent Window (or Volume) Mouse Focus
When using a trackpad (or a screen-shared Mac) with the Vision Pro, moving your attention to a new window or app immediately refocuses the mouse cursor, which in many circumstances is really useful. But when there is a viewer-only window, that window-jumping gets in the way. Imagine a 3D object editor of some sort, with a live viewer in a second window, maybe a browser. Manipulating the 3D object with the mouse in the editor gets continually interrupted when looking at the live viewer, because the cursor jumps to the viewer window. Is there any way to reject that focus?
Replies: 2 · Boosts: 0 · Views: 182 · Activity: 1w
Game controller issues in visionOS
Hi, I'm building a windowed game in visionOS which requires a gamepad, and I'm using a PS5 controller for this case. However, I found a few problems:
1. When looking at the window's close button and pressing X on the gamepad, the window is closed. This can happen accidentally while the user is playing.
2. When the PS button (the home button) is pressed, I want my app to handle it, e.g. take the user to the title screen. But visionOS always captures this input and opens the Home Screen (App Library).
How do I avoid these two issues? Or, in general, how do I make gamepad input available only to my app and not to the visionOS system?
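For reference, a minimal sketch of observing the controller's buttons in-app with the GameController framework (on the extended gamepad profile, buttonA maps to the DualSense cross/X button, and buttonHome is optional); whether visionOS lets the app see these inputs before the system acts on them is the unresolved part of the question:

import GameController

// Hedged sketch: register handlers when a controller connects. The system
// may still intercept the home button and the look-and-press interaction.
func observeControllers() {
    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect, object: nil, queue: .main
    ) { note in
        guard let controller = note.object as? GCController,
              let gamepad = controller.extendedGamepad else { return }

        gamepad.buttonHome?.pressedChangedHandler = { _, _, pressed in
            if pressed { print("PS button pressed") } // e.g. return to title screen
        }
        gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
            if pressed { print("X (cross) pressed") }
        }
    }
}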
Replies: 1 · Boosts: 0 · Views: 187 · Activity: 1w