I have a UIKit app and would like to provide a spatial experience when it runs on visionOS.
I added visionOS support, but I'm not sure how to present an immersive view. All the tutorials are in SwiftUI, but my app is in UIKit.
This is an example from a SwiftUI project, but how do I declare this ImmersiveView in UIKit?
struct VirtualApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}
And in UIKit, how do I make the call to open the ImmersiveView?
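As far as I can tell, an ImmersiveSpace still has to be declared in a SwiftUI Scene; a pure UIKit lifecycle can't open one directly on visionOS 1. A minimal sketch of the usual workaround, assuming your existing UIKit screen is a hypothetical MyViewController: keep a small SwiftUI App as the visionOS entry point (like the VirtualApp above), host the UIKit content through UIViewControllerRepresentable, and trigger the openImmersiveSpace environment action from the SwiftUI side.

import SwiftUI
import UIKit

// Hypothetical wrapper around the app's existing UIKit view controller.
struct MyViewControllerRepresentable: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> MyViewController {
        MyViewController() // your existing UIKit screen (placeholder name)
    }
    func updateUIViewController(_ uiViewController: MyViewController, context: Context) {}
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        MyViewControllerRepresentable()
            .onAppear {
                // Opens the scene declared as ImmersiveSpace(id: "ImmersiveSpace").
                Task { await openImmersiveSpace(id: "ImmersiveSpace") }
            }
    }
}

If the UIKit code itself needs to decide when to open the space, one option is to hand it a closure that calls the same action; the space declaration has to live in the SwiftUI Scene either way.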
Hi community,
I have a pair of stereo images, one for each eye. How should I render them on visionOS?
I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode, but I couldn't find any docs relating to 3D stereo images.
I guess my question can be put more generally: is there any way to render different content for each eye? This could also help someone who has sight in only one eye.
Hi,
I have an existing iPhone/iPad app that I'm considering bringing to Vision Pro. It's an informational app that shares info with the watch.
What would be better:
Add a visionOS target to the existing app, or
Make a separate app so I can use all of the Vision Pro's features?
What are YOU doing to support the Vision Pro?
Thanks,
Dan
Where can I find sample videos, sample encoding apps, viewing apps, etc.?
I see specs and high-level explanations, but I'm not finding any samples, command lines, or app documentation explaining how to make and view these files.
Thank you; looking forward to promoting a spatial-video-rich future.
Hello,
I am developing a visionOS-based application that uses various data providers such as Image Tracking, Plane Detection, and Scene Reconstruction, but these are not supported in the visionOS simulator. What is the workaround for this issue?
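As far as I know there is no way to run those providers in the simulator; a common workaround is to gate each provider behind its isSupported check and fall back to canned data when none are available. A rough sketch (the chosen providers are just examples):

import ARKit

// Each provider reports isSupported == false in the simulator,
// so the app can still launch there with reduced functionality.
func startSpatialTracking(with session: ARKitSession) async throws {
    var providers: [any DataProvider] = []

    if PlaneDetectionProvider.isSupported {
        providers.append(PlaneDetectionProvider(alignments: [.horizontal, .vertical]))
    }
    if SceneReconstructionProvider.isSupported {
        providers.append(SceneReconstructionProvider())
    }

    if providers.isEmpty {
        // Simulator path: drive the scene with hard-coded anchors or recorded data.
    } else {
        try await session.run(providers)
    }
}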
Hello! I have a question about using snapshots from the iOS 17 sample app on macOS 14. I exported the "Photos" and "Snapshots" folders captured on iOS and then wrote:
let checkpointDirectoryPath = "/path/to/the/Snapshots/"
let checkpointDirectoryURL = URL(fileURLWithPath: checkpointDirectoryPath, isDirectory: true)
if #available(macOS 14.0, *) {
    configuration.checkpointDirectory = checkpointDirectoryURL
} else {
    // Fallback on earlier versions
}
But I didn't notice any speed or performance improvement. It looks like the "Snapshots" folder was simply ignored. Please advise what I can do so that the "Snapshots" folder is actually used during reconstruction.
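One thing worth double-checking (a guess, since the rest of the setup isn't shown): the checkpointDirectory has to be part of the configuration that is passed in when the PhotogrammetrySession is created, and the snapshots have to come from the exact same image set, otherwise they are silently ignored. Roughly:

import RealityKit

// Paths are the ones from the post, copied over to the Mac.
var configuration = PhotogrammetrySession.Configuration()
if #available(macOS 14.0, *) {
    configuration.checkpointDirectory = URL(fileURLWithPath: "/path/to/the/Snapshots/",
                                            isDirectory: true)
}

// The configuration must be supplied at creation time; changing it afterwards has no effect.
let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/path/to/the/Photos/", isDirectory: true),
    configuration: configuration)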
I have several iPads that have been upgraded to 17.0.3, but I need to be able to restore them to version 16.6.1. We have apps that currently do not work on 17. I have downloaded the 16.6.1 .ipsw file, and every time I try to use it I get "OS cannot be restored on "iPad". Personalization failed." Is there any way to get an OS file that would work?
Hi,
In the newly released Object Capture API, for a PhotogrammetrySession, we can get the poses of the saved images, and the same images will be used to create the model.
But in the sample project, https://developer.apple.com/documentation/realitykit/guided-capture-sample
Only the generated 3D model is saved; for the others (pose, poses, bounds, point cloud, and model entity) a comment was added saying it is
// Not supported yet
When will this be available to developers? Can you at least give us a tentative date?
I'm running into an issue with the frame bounds of a Metal-based iOS app on the visionOS simulator. Here's a snapshot:
That's the result of downloading Apple's sample code and running it in the simulator (Apple Vision Pro (Designed for iPad)). Is it a bug in the simulator / iOS->visionOS emulation, or is that sample code doing something odd that isn't compatible with visionOS?
Thanks!
Eddy
Hello,
I want to use Apple's PhotogrammetrySession to scan a window. However, ObjectCaptureSession seems to be a monotasker and won't allow capture to occur with anything but a small object on a flat surface.
So, I need to manually feed data into PhotogrammetrySession. But when I do, it focuses way too much on the scene behind the window, sacrificing detail on the window itself.
Is there a way for me to either coax ObjectCaptureSession into capturing an area on the wall, or for me to restrict PhotogrammetrySession's target bounding box manually? How does ObjectCaptureSession communicate the limited bounding box to PhotogrammetrySession?
Thanks,
Sebastian
In Full immersion mode, I create a sphere with radius 10 and add a CollisionComponent and an InputTargetComponent to it. I then create a 0.2 cube and add the same two components. I also add an attachment. The code is as follows:
RealityView { content, attachments in
    let meshgenerate = MeshResource.generateSphere(radius: 10)
    let collisionShape = ShapeResource.generateSphere(radius: 10)
    let sp = ModelEntity(mesh: meshgenerate)
    sp.components.set(CollisionComponent(shapes: [collisionShape]))
    sp.components.set(InputTargetComponent())
    sp.transform.scale *= .init(-1, 1, 1)
    sp.name = "sp"
    content.add(sp)

    let ont = ModelEntity(mesh: MeshResource.generateBox(size: 0.2))
    ont.components.set(CollisionComponent(shapes: [ShapeResource.generateBox(size: .init(x: 0.2, y: 0.2, z: 0.2))]))
    ont.components.set(InputTargetComponent())
    ont.name = "ont"
    ont.position = .init(x: 0, y: 0, z: -2)
    content.add(ont)

    if let stack = attachments.entity(for: "aid") {
        stack.name = "sssssss"
        stack.setPosition(.init(x: 0, y: 1.5, z: -1), relativeTo: nil)
        // stack.generateCollisionShapes(recursive: false)
        // stack.components.set(InputTargetComponent())
        content.add(stack)
    }
} attachments: {
    let rotation = Rotation3D(angle: Angle2D(degrees: 30), axis: .x)
    Attachment(id: "aid") {
        Button {
            print("sss", "Button")
        } label: {
            Text("New Color")
                .font(.extraLargeTitle)
                .padding(40)
        }
        .background(.yellow)
    }
}
.gesture(TapGesture().targetedToAnyEntity().onEnded { value in
    print("sss", "TapGesture", value.entity.name)
    // openwind(id: "main")
})
Only the sphere can trigger the gesture; the other ModelEntity and the attachment cannot trigger it.
I know the problem is that the other entities are placed inside the sphere, which itself has an InputTargetComponent. Without removing the sphere's InputTargetComponent, how can I make the attachment trigger the gesture as well?
Is it possible to render a Safari-based webview in full immersive space, so an app can show web pages there?
Suppose I want to use the Vision Pro device in multiple rooms in my home.
I was wearing the device when I entered my home, checked some notifications on it, and closed the apps. With the device still on my head, I move to my bedroom. Now I want to open some other application without removing the headset and putting it on again. Is this possible?
In the .detecting state a box is displayed. Does anyone know how to get the dimensions of that box? I just want to detect the object, create a box around it, and get its dimensions.
I have an iPad app that works and is available on the visionOS App Store.
However, TestFlight shows the releases as
an iOS-only app, incompatible with this Apple Vision Pro.
How do I enable my iPadOS app for TestFlight on visionOS?
P.S. Native visionOS builds can appear there;
I don't have any approved or released builds yet for visionOS.
I also see the same "app not compatible" issue in TestFlight without a visionOS section present, while the same app is available in the App Store under visionOS/iPad apps.
Aloha,
I'm wondering where the documentation for the Vision Pro "Developer Strap" is located. I have the Vision Pro devices and the developer straps, but I'm not sure how to go about using the developer straps for visionOS development in Xcode and Unity.
I tried to show a spatial photo in my application using SwiftUI's Image, but it only shows the flat version, even on Vision Pro.
So, how can I show spatial photos to users?
Are there any options for this?
Currently my visionOS app presents its immersive space as a full 360-degree experience, and I'm looking for a way to adjust it so it behaves like Apple's standard (partial, adjustable) immersive mode.
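If the goal is the partially immersive, Digital Crown-adjustable presentation that Apple's own Environments use, the relevant SwiftUI API appears to be the immersionStyle(selection:in:) scene modifier with the .progressive style. A minimal sketch (app and view names are placeholders):

import SwiftUI

@main
struct MyImmersiveApp: App {
    // Start in progressive immersion; the Digital Crown then controls how much of the
    // passthrough is replaced, instead of showing a fixed 360-degree .full space.
    @State private var immersionStyle: ImmersionStyle = .progressive

    var body: some Scene {
        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView() // placeholder for the existing immersive content
        }
        .immersionStyle(selection: $immersionStyle, in: .progressive, .full)
    }
}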
Hi!
I think this should be a pretty normal usage of ARKit / RealityKit
I have a static mesh for my environment, that I want to have static collision properties.
My options for making this interact with dynamic bodies are:
ShapeResource.generateConvex(...) -- which overshoots my shape dramatically.
Entity.generateCollisionShapes(...) which also overshoots.
I notice additional APIs around ShapeResource -- ShapeResource.generateStaticMesh(positions:faceIndices) seems to be exactly what I need.
So far, I haven't been able to invoke this successfully to set my collision box.
Questions:
Is this not a completely normal thing for developers to want to do? Why is there no out-of-the-box support for it in RealityKit/ARKit?
To support this in my app, everything I've read says I need to parse the .obj of my terrain manually, find the triangulated faces, and pipe them into this function. That feels like a process that should be standardized; and given that RealityKit is already forcing me to use .usdz, why shouldn't this be part of the SDK?
Regardless, I triangulated my terrain mesh and have been working on parsing code to get the positions and faceIndices for this setup (as an extension on Entity).
Is this the right approach? Am I missing something more obvious?
Thanks,
Justin
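For what it's worth, a minimal sketch of how that call could be wired up once the parsing is done, assuming positions and faceIndices come out of your own mesh parsing (three UInt16 indices per triangle); the exact async/throws shape of generateStaticMesh is from memory, so treat it as an assumption:

import RealityKit

// Sketch: positions/faceIndices are assumed to come from custom mesh parsing.
func applyStaticMeshCollision(to entity: Entity,
                              positions: [SIMD3<Float>],
                              faceIndices: [UInt16]) async throws {
    let shape = try await ShapeResource.generateStaticMesh(positions: positions,
                                                           faceIndices: faceIndices)
    entity.components.set(CollisionComponent(shapes: [shape]))
    // Static physics body so dynamic bodies collide with the terrain without moving it.
    entity.components.set(PhysicsBodyComponent(shapes: [shape], mass: 0, mode: .static))
}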
Is there any way to detect whether an entity is being looked at in a RealityView? I know it is possible to add a HoverEffectComponent(), which highlights the entity a little when you gaze at it, but there doesn't seem to be any way to call a function from that. There is also no GazeGesture or anything similar.