I am attempting to integrate visionOS support into my existing iOS app, which uses SwiftUI and CocoaPods. However, after adding visionOS as a supported platform and attempting to run the app, I encounter two errors:
"'jot/jot.h' file not found" at "/Users/xxxxxx/Desktop/IOS_DEVELOPMENT/iOS/xxxxxxxxx/xxxxxxxxx-Bridging-Header.h:17:9".
"Failed to emit precompiled header" at "/Users/xxxxxx/Library/Developer/Xcode/DerivedData/xxxxxxxxx-bnhvaxypgfhmvqgklzjdnxxbrdhu/Build/Intermediates.noindex/PrecompiledHeaders/xxxxxxxxx-Bridging-Header-swift_6TTOG1OAZB5F-clang_21TRHDW14EDOZ.pch" for bridging header "/Users/xxxxxxxx/Desktop/IOS_DEVELOPMENT/iOS/xxxxxxx/xxxxxxxx-Bridging-Header.h".
I'm seeking assistance with resolving these errors. Below is my Podfile configuration:
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '15.0'

target 'xxxxxxxxxx' do
  use_frameworks!

  pod 'RealmSwift'
  pod 'JGProgressHUD'
  pod 'BadgeLabel'
  pod 'jot'
  pod 'MaterialComponents/Chips'
  pod 'GoogleMaps'
  pod 'Firebase/Crashlytics'
  pod 'Firebase/Analytics' # Firebase pod for Google Analytics
  # Add pods for any other desired Firebase products
  # https://firebase.google.com/docs/ios/setup#available-pods
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '15.0'
    end
  end
end
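For reference, here is a rough sketch (untested, and purely my own guess at a workaround) of how the jot call sites could be fenced off if the pod simply has no visionOS support; the bridging-header import itself would presumably need a matching #if !TARGET_OS_VISION guard:

import UIKit

// Untested sketch: only reference the jot pod's API on platforms where the pod
// actually builds (assumption: jot provides no visionOS slice).
enum DrawingScreenFactory {
    static func makeDrawingController() -> UIViewController {
        #if os(visionOS)
        // Fallback while jot is excluded from the visionOS build.
        return UIViewController()
        #else
        return JotViewController() // drawing controller from the 'jot' pod (class name assumed)
        #endif
    }
}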
Any assistance in resolving these errors would be greatly appreciated.
General
Discuss Spatial Computing on Apple Platforms.
Spatial video is OK, but can we play spatial photos as well? Thank you :)
Rendering the scene onto a RenderTarget with twice the resolution of the Drawable, and then downsampling to the Drawable, causes the image to appear distorted.
Modifications were made on top of the Xcode visionOS template.
Foveation should be enabled by default:
struct ContentStageConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities, configuration: inout LayerRenderer.Configuration) {
        configuration.depthFormat = .depth32Float
        configuration.colorFormat = .bgra8Unorm_srgb

        let foveationEnabled = capabilities.supportsFoveation
        configuration.isFoveationEnabled = foveationEnabled

        let options: LayerRenderer.Capabilities.SupportedLayoutsOptions = foveationEnabled ? [.foveationEnabled] : []
        let supportedLayouts = capabilities.supportedLayouts(options: options)
        configuration.layout = supportedLayouts.contains(.layered) ? .layered : .dedicated
    }
}
To avoid errors, rasterizationRateMap is not set.
var renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = self.renderTarget.currentFrameColor
renderPassDescriptor.renderTargetWidth = self.renderTarget.currentFrameColor.width
renderPassDescriptor.renderTargetHeight = self.renderTarget.currentFrameColor.height
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)
renderPassDescriptor.depthAttachment.texture = self.renderTarget.currentFrameDepth
renderPassDescriptor.depthAttachment.loadAction = .clear
renderPassDescriptor.depthAttachment.storeAction = .store
renderPassDescriptor.depthAttachment.clearDepth = 0.0
//renderPassDescriptor.rasterizationRateMap = drawable.rasterizationRateMaps.first
if layerRenderer.configuration.layout == .layered {
    renderPassDescriptor.renderTargetArrayLength = drawable.views.count
}
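For context, the downsampling step is conceptually along these lines (an illustrative sketch only, not the project's actual code; it also assumes a plain 2D color texture rather than the layered texture-array case):

import Metal
import MetalPerformanceShaders

// Illustrative sketch: scale the double-resolution offscreen color texture down
// into the drawable's color texture with a bilinear MPS scale.
func downsample(offscreenColor: MTLTexture,
                into drawableColor: MTLTexture,
                using commandBuffer: MTLCommandBuffer,
                device: MTLDevice) {
    let scaler = MPSImageBilinearScale(device: device)
    scaler.encode(commandBuffer: commandBuffer,
                  sourceTexture: offscreenColor,
                  destinationTexture: drawableColor)
}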
The rendering process is as follows:
Hello. I am trying to load my own Image Based Lighting file in a visionOS RealityView. I used the code you get when creating a new project from scratch and setting the Immersive Space to Full in the project options. With the sample file Apple provides, it works. But when I put my image, in PNG, HEIC or EXR format, in the same location the example file was in, it doesn't load, and the error states:
Failed to find resource with name "SkyboxUpscaled2" in bundle
In this image you can see the file "ImageBasedLight", which is the one that comes with the project, and the file "SkyboxUpscaled2", which is my own in .exr format.
if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)

    do {
        let resource = try await EnvironmentResource(named: "SkyboxUpscaled2")
        let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
        immersiveContentEntity.components.set(iblComponent)
        immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))
    } catch {
        print(error.localizedDescription)
    }
}
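As a side note, one quick way to check whether the file is even being copied into the app bundle is to dump the bundled resources (a throwaway debug helper, my own sketch):

import Foundation

// Debug sketch: list image-like resources that actually made it into the main
// app bundle, to confirm whether SkyboxUpscaled2.exr is being copied at all.
func dumpBundledImageResources() {
    for ext in ["exr", "png", "heic"] {
        let urls = Bundle.main.urls(forResourcesWithExtension: ext, subdirectory: nil) ?? []
        print("\(ext) resources in main bundle:", urls.map(\.lastPathComponent))
    }
}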
Does anyone have an idea why the file is not found? Thanks in advance!
Every time I dismiss an ImmersiveSpace with the progressive ImmersionStyle and open another one, I get only about 30-40% immersion. Can immersion be set to 100% by default in the progressive style?
Hello!
Is it possible to turn on hand pass-through (hand cutout, real hands from the camera; I'm not sure of the correct name) in WebXR when in immersive-vr with the hand-tracking feature enabled? I only see my hands when I disable the hand-tracking feature in WebXR, but then I don't get the joint positions and orientations.
Hi Team
Is there a way to extract a colorized scan as well using the RoomPlan SDK? If yes, can you point me to the right reference link?
Does the RoomPlan SDK provide the dimensions of the room?
Our app needs to scan QR codes (or a similar mechanism) to populate it with content the user wants to see.
Is there any update on QR code scanning availability on this platform? I asked this before, but never got any feedback.
I know that there is no way to access the camera (which is an issue in itself), but at least the system could provide an API to scan codes.
(It would also be cool if we were able to use the same codes Vision Pro uses for detecting the Zeiss glasses, as long as we could create these via server-side JavaScript code.)
I'm wondering whether it's possible to implement object tracking on Vision Pro using Apple's Vision framework. I see that the Vision documentation offers a variety of computer-vision classes tagged "visionOS", but all the example code in the documentation is only for iOS, iPadOS, or macOS. Can those classes also be used for developing Vision Pro apps? If so, how do they get a data feed from the camera of Vision Pro?
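For reference, the kind of usage I have in mind looks roughly like this (a generic Vision request rather than object tracking specifically, and purely my own sketch; it assumes the input image comes from the user, since there is no raw camera feed available to apps):

import Vision
import UIKit

// Sketch: run a Vision request on an image the app already has (for example a
// user-picked photo), since visionOS apps get no direct camera feed.
func detectRectangles(in image: UIImage) throws -> [VNRectangleObservation] {
    guard let cgImage = image.cgImage else { return [] }
    let request = VNDetectRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results ?? []
}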
I am developing an iPhone app, but I've been targeting the AVP as well. In fact, since I got the AVP, I've mainly been building and running my app on it. This morning, I upgraded to Xcode 15.4 (15F31d). Ever since, I have not been able to see my AVP as a run destination.
It does show up in the device list, although there are no provisioning files on it for some reason. But I can't target it for building. I've tried unpairing and turning developer mode off and on.
Has anyone else seen this problem after upgrading Xcode? Any help is appreciated.
I'm writing a Vision Pro app that's fully immersive and rendered using Metal. Occasionally, some users of this app would benefit from being able to use a physical keyboard (or another accessory like a game controller). It seems very straightforward to capture and handle spatial gesture events, but I cannot find an interface that allows the detection, capture, or handling of keyboard events in any of the objects associated with fully immersive Metal rendering: CompositorServices, LayerRenderer, and its associated .frame, .drawable, and .drawable.view don't seem to have any accessory awareness. Can you help me handle a keyboard event?
We are hoping to port our app which requires printing to visionOS. Can I check if visionOS supports AirPrint/printing? If anyone has already done it, anything we should watch out for?
We followed this documentation https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal to display a fully immersive map using our Metal rendering engine, which worked great. But this part of the article, https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal#4193614, explains how to use the onSpatialEvent callback to receive gesture events. We are receiving the gesture events, but the location property of the event (https://developer.apple.com/documentation/swiftui/spatialeventcollection/event/location) always comes back as (x: 0, y: 0), which is not helpful. We are unable to get a single valid location for any gesture and therefore cannot hook these gestures up. We tried this both in the simulator and on a Vision Pro device.
Hello,
I'm trying to download a native spatial video for a software program I'm putting together, where people can upload spatial videos from the web and deploy them inside a native visionOS app showing a breadth of different file formats.
I have a bug I'm trying to resolve that came up in an App Store review.
The basic flow is this:
The user presses a button and enters a fully immersive space.
While in the fully immersive space, the user presses the Digital Crown button to exit fully immersive mode and return to the Shared Space. (Note: this is pressing the button, not rotating the Digital Crown to control the immersion level.)
At this point I need an event or onChange (or similar) to know when the user is no longer in immersive mode, so I can reset a flag I've been manually setting to track whether or not the user is currently viewing an immersive space.
I have an onChange watching scenePhase and printing the old/new values to the console, but it is never triggered.
This seems like it might be an edge case, but I'm curious whether there's another way to detect whether a user is in an immersive scene.
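A possible fallback I'm considering (a sketch only; I haven't confirmed it fires for the crown-press dismissal) is resetting the flag from the immersive content's onDisappear:

import SwiftUI

// Sketch: reset a manually-tracked flag when the immersive content disappears,
// which should also happen when the system dismisses the space via the Digital
// Crown press. All type and property names here are placeholders.
final class ImmersionState: ObservableObject {
    @Published var isImmersed = false
}

struct ImmersiveRootView: View {
    @EnvironmentObject var state: ImmersionState

    var body: some View {
        // Placeholder for the actual immersive content.
        Color.clear
            .onAppear { state.isImmersed = true }
            .onDisappear { state.isImmersed = false }
    }
}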
I have followed every step of all the instructions. Nothing happens.
I did a factory reset of both my MacBook Pro and Vision Pro, with the same Apple ID.
Still, the Vision Pro doesn't appear.
Hi,
We are currently trying to implement a very simple test application using the Vision Pro.
We display a virtual copy of an object (based on CAD data) and then we try to align the real object with the virtual one.
It seems to be impossible!
You can align them to a certain degree, but if you walk around the object to check the alignment, it seems that reality warps and wobbles by almost 2 cm.
Is there any way to fix this?
I'd like to enter a fully immersive scene without the grab bar and close icon.
The full-immersion template app that comes with Xcode doesn't exit the immersive state when the "x" is hit, but the grab bar and related UI disappear - if only I could do that programmatically!
I've tried conditionally removing the View that launches the ImmersiveSpace, but the WindowGroup seems to be the thing that puts out the UI I'm trying to hide...
WindowGroup {
    if gameState.immersiveSpaceOpened {
        ContentView()
            .environmentObject(gameState)
    }
}
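One direction that might be worth trying (an assumption on my part; I haven't verified that it affects the grab bar and close button on visionOS) is hiding the window's persistent system overlays:

// Sketch: ask the system to hide its persistent overlays for this window's
// content (assumption: on visionOS this covers the grab bar / close button).
// gameState and ContentView are the same ones from the snippet above.
WindowGroup {
    if gameState.immersiveSpaceOpened {
        ContentView()
            .environmentObject(gameState)
            .persistentSystemOverlays(.hidden)
    }
}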
In the code example provided there is a bool in the Video object to set a video as 3D:
/// A Boolean value that indicates whether the video contains 3D content.
let is3D: Bool
I have a hosted spatial video that I know works correctly in the AVP player. When I point the Videos.json file to this URL and set is3D=true, my 3D video doesn't show up and I get the following error:
iPVC/1-0 Received playback error: [Error Domain=AVFoundationErrorDomain Code=-11850 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The server is not correctly configured., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x30227c510 {Error Domain=CoreMediaErrorDomain Code=-12939 "byte range length mismatch - should be length 2 is length 2434" UserInfo={NSDescription=byte range length mismatch - should be length 2 is length 2434, NSURL=https: <omitted for post> }}}]
Can anyone tell me what might be going on? The error says my server is not configured correctly. For context, I'm using Google Drive to deliver dynamic images/videos using:
https://drive.google.com/uc?export=download&id= <file ID>
The above works great for my images and 2D videos. Is there something I need to do specifically when delivering MV-HEVC videos?
Hello, is it possible to change the immersion style at runtime?