I have a UIKit app and would like to provide a spatial experience when it runs on visionOS.
I added visionOS support, but I'm not sure how to present an immersive view. All the tutorials are in SwiftUI, but my app is in UIKit.
This is an example from a SwiftUI project, but how do I declare this ImmersiveView in UIKit?
struct VirtualApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}
And in UIKit, how do I make the call to open the ImmersiveView?
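From the SwiftUI tutorials, the closest approach I can piece together is to keep a thin SwiftUI entry point that declares the ImmersiveSpace, wrap my existing UIKit hierarchy in a UIViewControllerRepresentable, and call openImmersiveSpace from that SwiftUI layer. I'm not sure this is the intended way; here is a rough, untested sketch (MyUIKitRootViewController stands in for my existing root view controller, and the notification is just a placeholder bridge):

import SwiftUI
import UIKit
import Combine

// Hypothetical wrapper around the existing UIKit root view controller.
struct UIKitRootView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> MyUIKitRootViewController {
        MyUIKitRootViewController()
    }
    func updateUIViewController(_ uiViewController: MyUIKitRootViewController, context: Context) {}
}

// Placeholder bridge: the UIKit side posts this when it wants the space opened.
extension Notification.Name {
    static let openImmersive = Notification.Name("openImmersive")
}

struct ContentView: View {
    // SwiftUI environment action that opens a declared ImmersiveSpace by id.
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        UIKitRootView()
            .onReceive(NotificationCenter.default.publisher(for: .openImmersive)) { _ in
                Task { _ = await openImmersiveSpace(id: "ImmersiveSpace") }
            }
    }
}

@main
struct VirtualApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}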
Hi community,
I have a pair of stereo images, one for each eye. How should I render them on visionOS?
I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode, but I couldn't find any docs relating to 3D stereo images.
I guess my question can be put more generally: is there any way to render different content for each eye? This could also be helpful to someone who only has sight in one eye.
Where can I find sample videos, sample encoding apps, viewing apps, etc.?
I see specs, high-level explanations, and so on, but I'm not finding any samples, command lines, or app documentation that explain how to make and view these files.
Thank you; I'm looking forward to promoting a spatial-video-rich future.
Hello,
I am developing a visionOS-based application that uses data providers like Image Tracking, Plane Detection, and Scene Reconstruction, but these are not supported in the visionOS Simulator. What is the workaround for this issue?
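For what it's worth, the pattern I'm using in the meantime is to check each provider's isSupported flag at runtime and skip the unsupported ones, so the same build at least launches in the Simulator; the placeholder handling is my own stopgap, not an official workaround:

import ARKit

// Collect only the data providers the current platform supports
// (the Simulator reports isSupported == false for most of them).
func makeSupportedProviders() -> [any DataProvider] {
    var providers: [any DataProvider] = []

    if PlaneDetectionProvider.isSupported {
        providers.append(PlaneDetectionProvider(alignments: [.horizontal, .vertical]))
    }
    if SceneReconstructionProvider.isSupported {
        providers.append(SceneReconstructionProvider())
    }
    if ImageTrackingProvider.isSupported {
        // ImageTrackingProvider needs reference images; loading them is omitted here.
    }
    return providers
}

func runSession(_ session: ARKitSession) async {
    let providers = makeSupportedProviders()
    guard !providers.isEmpty else {
        print("No supported ARKit data providers (probably the Simulator); falling back to placeholder behavior.")
        return
    }
    do {
        try await session.run(providers)
    } catch {
        print("Failed to run ARKit session: \(error)")
    }
}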
Hello,
I want to use Apple's PhotogrammetrySession to scan a window. However, ObjectCaptureSession seems to be a monotasker and won't allow capture to occur with anything but a small object on a flat surface.
So, I need to manually feed data into PhotogrammetrySession. But when I do, it focuses way too much on the scene behind the window, sacrificing detail on the window itself.
Is there a way for me to either coax ObjectCaptureSession into capturing an area on the wall, or for me to restrict PhotogrammetrySession's target bounding box manually? How does ObjectCaptureSession communicate the limited bounding box to PhotogrammetrySession?
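For context, the manual-feeding path I mean looks roughly like this (an untested sketch; the image folder and output URL are placeholders, and I still don't see any bounding-box parameter on the session):

import Foundation
import RealityKit

// Minimal sketch: drive PhotogrammetrySession directly from a folder of images
// instead of going through ObjectCaptureSession.
func reconstruct(imagesFolder: URL, outputModel: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .high   // assumption: may help low-texture surfaces like glass

    let session = try PhotogrammetrySession(input: imagesFolder,
                                            configuration: configuration)

    try session.process(requests: [
        .modelFile(url: outputModel, detail: .medium)
    ])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Finished: \(outputModel.path)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}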
Thanks,
Sebastian
In Full immersion mode,
I create a sphere with radius 10 and add a CollisionComponent and an InputTargetComponent to it.
I then create a 0.2 cube and add the same two components to it.
I also add an attachment.
The code is as follows:
RealityView { content, attachments in
    let meshgenerate = MeshResource.generateSphere(radius: 10)
    let collisionShape = ShapeResource.generateSphere(radius: 10)

    let sp = ModelEntity(mesh: meshgenerate)
    sp.components.set(CollisionComponent(shapes: [collisionShape]))
    sp.components.set(InputTargetComponent())
    sp.transform.scale *= .init(-1, 1, 1)
    sp.name = "sp"
    content.add(sp)

    let ont = ModelEntity(mesh: MeshResource.generateBox(size: 0.2))
    ont.components.set(CollisionComponent(shapes: [ShapeResource.generateBox(size: .init(x: 0.2, y: 0.2, z: 0.2))]))
    ont.components.set(InputTargetComponent())
    ont.name = "ont"
    ont.position = .init(x: 0, y: 0, z: -2)
    content.add(ont)

    if let stack = attachments.entity(for: "aid") {
        stack.name = "sssssss"
        stack.setPosition(.init(x: 0, y: 1.5, z: -1), relativeTo: nil)
        // stack.generateCollisionShapes(recursive: false)
        // stack.components.set(InputTargetComponent())
        content.add(stack)
    }
} attachments: {
    let rostion = Rotation3D(angle: Angle2D(degrees: 30), axis: .x)
    Attachment(id: "aid") {
        Button {
            print("sss", "Button")
        } label: {
            Text("New Color")
                .font(.extraLargeTitle)
                .padding(40)
        }
        .background(.yellow)
    }
}
.gesture(TapGesture().targetedToAnyEntity().onEnded({ value in
    print("sss", "TapGesture", value.entity.name)
    // openwind(id: "main")
}))
Only the sphere can trigger the gesture; the other ModelEntity and the attachment cannot trigger the gesture at all.
I know the problem is that the other entities are placed inside the sphere, which itself has an InputTargetComponent. Without removing the InputTargetComponent from the sphere, how can I make the attachment trigger the gesture as well?
Is it possible to render a Safari-based webview in full immersive space, so an app can show web pages there?
Suppose I want to use the Vision Pro device in multiple rooms in my home.
I wore the device when I entered my home, checked some notifications on it, and closed the apps. With the device still on my head, I moved to my bedroom. Now I want to open some other application without removing the headset and putting it on again. Is this possible?
In the .detecting state a box is displayed. Does anyone know how to get the dimensions of that box? I just want to detect the object, create a box around it, and get the dimensions.
I have an iPad app that works and is available on the visionOS App Store.
However, TestFlight releases show it as an iOS-only app and mark it as incompatible with this Apple Vision Pro.
How do I enable my iPadOS app for TestFlight on visionOS?
P.S. Native visionOS builds can appear there, but I don't have any approved or released builds for visionOS yet.
I also see the same "app not compatible" issue in TestFlight when no visionOS section is present, even though the same app is available on the App Store under visionOS/iPad apps.
Aloha,
I'm wondering where the documentation for the Vision Pro "Developer Strap" is located. I have the Vision Pro devices and the developer straps, but I'm not sure how to go about using the developer straps for visionOS development in Xcode & Unity.
I tried to show a spatial photo in my application using SwiftUI's Image, but it just shows the flat version of it, even on Vision Pro.
So how can I show spatial photos to users?
Are there any options for this?
Currently, my visionOS app customizes immersive mode as a full 360-degree experience, and I'm looking for a way to adjust it to behave like Apple's built-in immersive mode.
Hi!
I think this should be a pretty normal usage of ARKit / RealityKit
I have a static mesh for my environment that I want to give static collision properties.
My options for making this interact with dynamic bodies are:
ShapeResource.generateConvex(...) -- which overshoots my shape dramatically.
Entity.generateCollisionShapes(...) which also overshoots.
I notice additional APIs around ShapeResource -- ShapeResource.generateStaticMesh(positions:faceIndices:) seems to be exactly what I need.
So far, I haven't been able to invoke this successfully to set my collision box.
Questions:
Is this not a completely normal thing for developers to want to do? Why is there no support for this out of the box in RealityKit/ARKit?
To support this in my app, everywhere I've read has said I need to parse the .obj of my terrain manually, and find triangulated faces and pipe them into this function. That feels like a very standardized process -- and given that RealityKit is already forcing me to use .usdz, why should this not be a part of the SDK?
Regardless, I triangulated my terrain mesh and have been working on parsing code to get the positions and faceIndices for this setup (as an extension on Entity; rough sketch below).
Is this the right approach? Am I missing something more obvious?
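For reference, here is the rough shape of the extension I'm attempting. Very much an untested sketch: it assumes MeshResource.contents exposes per-part positions and triangleIndices the way I think it does, and that the mesh fits in a UInt16 index range:

import RealityKit

extension Entity {
    // Sketch: build a static-mesh collision shape from this entity's ModelComponent
    // via ShapeResource.generateStaticMesh(positions:faceIndices:).
    func setStaticMeshCollision() async throws {
        guard let model = components[ModelComponent.self] else { return }

        var positions: [SIMD3<Float>] = []
        var faceIndices: [UInt16] = []

        // Walk every part of every model in the mesh and concatenate its
        // vertices and triangle indices into flat arrays.
        for submodel in model.mesh.contents.models {
            for part in submodel.parts {
                let base = UInt16(positions.count)   // assumes fewer than 65536 vertices total
                positions.append(contentsOf: part.positions.elements)
                if let indices = part.triangleIndices?.elements {
                    faceIndices.append(contentsOf: indices.map { base + UInt16($0) })
                }
            }
        }

        let shape = try await ShapeResource.generateStaticMesh(positions: positions,
                                                               faceIndices: faceIndices)
        components.set(CollisionComponent(shapes: [shape]))
        components.set(PhysicsBodyComponent(mode: .static))
    }
}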
Thanks,
Justin
Is there any way to detect whether an entity is being looked at in a RealityView? I know it is possible to add a HoverEffectComponent, which will highlight the entity a little when you gaze at it, but there doesn't seem to be any way to call a function from this. There is also no GazeGesture or anything similar.
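For reference, this is as far as I've gotten: with collision, input target, and hover components the entity does highlight under gaze, but there's nowhere to hook a callback (sketch with placeholder geometry):

import RealityKit
import UIKit

// The entity highlights when gazed at, but no function is ever called.
func makeGazeHighlightedEntity() -> ModelEntity {
    let entity = ModelEntity(mesh: .generateSphere(radius: 0.1),
                             materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    entity.generateCollisionShapes(recursive: false)   // hover needs a collision shape
    entity.components.set(InputTargetComponent())      // and an input target
    entity.components.set(HoverEffectComponent())      // highlight-on-gaze only, no event
    return entity
}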
Hi!
I'm currently showing the Apple Vision Pro to my clients. Sharing the screen is challenging with Guest Mode on the AVP. You can share the screen, but you need to tether the AVP and MBP to an iPhone, and then you can AirPlay. A lot of big corporate Wi-Fi networks may not allow AirPlay, so tethering is the better option.
Does anyone know if Apple is making their AVP demo app available to developers?
This would be super helpful for showing off the AVP's capabilities.
Thanks!
JB
Hi,
We are using the "Scanning objects using Object Capture" app provided by Apple. It was working fine, but it suddenly started crashing while scanning an object with bounding box settings.
Scanning app crash log:
We are getting the device log, but it shows different reasons for the app crash.
Device: iPad Pro / iPhone 15 Pro
iOS: 17.4
Attaching the device log for the crash; waiting for your response.
-------------------------------------
Translated Report (Full Report Below)
-------------------------------------
Incident Identifier: 0797ADAF-D653-4C92-8AA0-300AA167002B
CrashReporter Key: 48724a3b30ef15e069f513afff7d1aa2e935a520
Hardware Model: iPhone16,1
Process: GuidedCapture [918]
Path: /private/var/containers/Bundle/Application/ACCE6C58-98F0-4DD7-AA6E-732190E0FD30/GuidedCapture.app/GuidedCapture
Identifier: com.example.apple-samplecode.GuidedCaptureH28X75MLUY
Version: 1.0 (1)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd [1]
Coalition: com.example.apple-samplecode.GuidedCaptureH28X75MLUY [743]
Date/Time: 2024-03-26 18:23:07.8269 +0530
Launch Time: 2024-03-26 18:20:02.5964 +0530
OS Version: iPhone OS 17.4.1 (21E236)
Release Type: User
Baseband Version: 1.55.04
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000022e6577a4
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [918]
Triggered by Thread: 33
Thread 33 name: Dispatch queue: com.apple.coreoc.queues.serial.session
Thread 33 Crashed:
0 CoreOC 0x22e6577a4 0x22e657400 + 932
1 CoreOC 0x22e657588 0x22e657401 + 391
2 CoreOC 0x22e697628 0x22e697221 + 1031
3 CoreOC 0x22e684864 0x22e684431 + 1075
4 CoreOC 0x22e6831ec 0x22e6828d5 + 2327
5 CoreOC 0x22e6c1fb4 0x22e6c1f5d + 87
6 CoreOC 0x22e5ef128 0x22e5ef105 + 35
7 libdispatch.dylib 0x19291113c _dispatch_call_block_and_release + 31
8 libdispatch.dylib 0x192912dd4 _dispatch_client_callout + 19
9 libdispatch.dylib 0x19291a400 _dispatch_lane_serial_drain + 747
10 libdispatch.dylib 0x19291af30 _dispatch_lane_invoke + 379
11 libdispatch.dylib 0x192925cb4 _dispatch_root_queue_drain_deferred_wlh + 287
12 libdispatch.dylib 0x192925528 _dispatch_workloop_worker_thread + 403
13 libsystem_pthread.dylib 0x1e69f8f20 _pthread_wqthread + 287
14 libsystem_pthread.dylib 0x1e69f8fc0 start_wqthread + 7
Is there a method for finding APIs that are compatible with both iOS and visionOS (e.g., hoverStyle)?
I'm encountering difficulties in developing for visionOS, although I can successfully build for "Apple Vision (Designed for iPad)". Are there any methods for discovering APIs that support both platforms? I'm looking to enhance my application and would appreciate any guidance on where to find such APIs.
Additionally, I'm interested in changing the background to a glass style. However, it seems that this feature may not be supported by the available APIs, particularly those designed for visionOS. Any suggestions or insights would be greatly appreciated.
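In case it clarifies what I'm after, right now I'm falling back to conditional compilation, roughly like the sketch below (glassBackgroundEffect where the platform has it, a material background elsewhere). I'm not sure this is the intended approach, hence the question:

import SwiftUI

struct CardView: View {
    var body: some View {
        VStack {
            Text("Hello")
            Button("Tap me") {}
                .hoverEffect()   // available on both iOS and visionOS
        }
        .padding()
        .modifier(GlassIfAvailable())
    }
}

// Use the visionOS glass background where it exists,
// and fall back to a thin material background on iOS.
struct GlassIfAvailable: ViewModifier {
    func body(content: Content) -> some View {
        #if os(visionOS)
        content.glassBackgroundEffect()
        #else
        content.background(.thinMaterial, in: RoundedRectangle(cornerRadius: 16))
        #endif
    }
}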
I am attempting to integrate visionOS support into my existing iOS app, which uses SwiftUI and CocoaPods. However, after adding visionOS as a supported platform and attempting to run the app, I encounter two errors:
"'jot/jot.h' file not found" at "/Users/xxxxxx/Desktop/IOS_DEVELOPMENT/iOS/xxxxxxxxx/xxxxxxxxx-Bridging-Header.h:17:9".
"Failed to emit precompiled header" at "/Users/xxxxxx/Library/Developer/Xcode/DerivedData/xxxxxxxxx-bnhvaxypgfhmvqgklzjdnxxbrdhu/Build/Intermediates.noindex/PrecompiledHeaders/xxxxxxxxx-Bridging-Header-swift_6TTOG1OAZB5F-clang_21TRHDW14EDOZ.pch" for bridging header "/Users/xxxxxxxx/Desktop/IOS_DEVELOPMENT/iOS/xxxxxxx/xxxxxxxx-Bridging-Header.h".
I'm seeking assistance with resolving these errors. Below is my Podfile configuration:
source 'https://github.com/CocoaPods/Specs.git'

platform :ios, '15.0'

target 'xxxxxxxxxx' do
  use_frameworks!

  pod 'RealmSwift'
  pod 'JGProgressHUD'
  pod 'BadgeLabel'
  pod 'jot'
  pod 'MaterialComponents/Chips'
  pod 'GoogleMaps'
  pod 'Firebase/Crashlytics'
  pod 'Firebase/Analytics' # Firebase pod for Google Analytics
  # Add pods for any other desired Firebase products
  # https://firebase.google.com/docs/ios/setup#available-pods
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '15.0'
    end
  end
end
Any assistance in resolving these errors would be greatly appreciated.