Hello everyone!
I'm planning to buy an Apple Vision Pro (to replace a Varjo XR-3).
I want to use it for a professional project, and I want to know whether it can fit our needs.
I want to develop a program on the Vision Pro that plays live streaming video from our local network cameras (using RTSP).
Is it possible to receive and play more than one live video stream?
One of those streams comes from a stereo camera, streaming a side-by-side 3D stereo video.
Is it possible to have a classic 2D video on one ultra-wide virtual screen and, simultaneously, another virtual screen displaying a 3D video with depth?
Thank you everyone in advance.
Regards.
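For the two-screen part, here is a minimal sketch of a multi-window SwiftUI setup (view names are placeholders); the RTSP decoding itself would need a third-party library, since AVPlayer doesn't handle RTSP, and rendering the side-by-side stereo pair per eye would need custom work (e.g., RealityKit or a conversion to MV-HEVC):

import SwiftUI

@main
struct StreamViewerApp: App {   // hypothetical app name
    var body: some Scene {
        // Window 1: the flat, ultra-wide 2D stream.
        WindowGroup(id: "flat-stream") {
            Text("2D RTSP player view goes here")   // placeholder for the decoder's output view
        }

        // Window 2: the side-by-side stereo stream (splitting the
        // left/right halves per eye needs custom rendering).
        WindowGroup(id: "stereo-stream") {
            Text("Stereo (SBS) player view goes here")
        }
    }
}

// Opening the second window from the first:
// @Environment(\.openWindow) private var openWindow
// ...
// openWindow(id: "stereo-stream")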
Does anyone have code for realistic shaders for the Sun and Earth for a cosmology app in visionOS? Thanks
I am having problems getting button input from an Xbox game controller.
I have the visionOS 2 beta on my Apple Vision Pro, and I am trying to use an Xbox game controller with a RealityView following the instructions from the WWDC session Explore game input in visionOS.
The game-controller connection notification fires and finds GCInputButtonA, and I am setting closures for touchedChangedHandler, pressedChangedHandler, and valueChangedHandler that just print an os_log statement.
buttonA.valueChangedHandler = { button, value, pressed in
os_log("Got valueChangedHandler")
}
At the end of RealityView, I have the modifier
RealityView { content in
// stuff
}
.handlesGameControllerEvents(matching: .gamepad)
But I am never seeing the log message appear in the console when I press the 'A' button (or any other button).
Any ideas what I might be doing wrong?
The Xbox controller is pretty old; Settings reports it as version 9.0.3.
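For comparison, a minimal sketch of the plain GameController flow (assuming the controller exposes an extendedGamepad profile); if the handlers fire here but not in the RealityView setup, the .handlesGameControllerEvents side is the more likely suspect:

import GameController

func startWatchingControllers() {
    // Controllers that connect after launch.
    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect, object: nil, queue: .main
    ) { note in
        if let controller = note.object as? GCController {
            configureHandlers(for: controller)
        }
    }
    // Controllers that were already connected before this call.
    GCController.controllers().forEach { configureHandlers(for: $0) }
}

func configureHandlers(for controller: GCController) {
    guard let gamepad = controller.extendedGamepad else { return }
    gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
        print("Button A pressed: \(pressed)")
    }
    gamepad.buttonA.valueChangedHandler = { _, value, pressed in
        print("Button A value \(value), pressed \(pressed)")
    }
}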
I would like to know if I can simply show a spatial image in the app without having to open a "preview".
The photos would be spatial photos (HEIC files) converted using Vision Pro.
Hello. I've just updated my Vision Pro to the newest and greatest 2.0, and I see that the way to call up Control Center has been changed to a hand gesture, which I'm assuming is powered by computer vision using the cameras. For me there is a use case of watching Apple TV shows at night, where my girlfriend would like to have the lights turned off, and that is when it fails. The new method is great and the responsiveness is crazily good, but I would like this to be a toggle so that we can select our own method: for example, during the day, when there is sufficient light, we use the new gesture-recognition way, and in low light we can switch back to looking up. Or, as a fellow programmer who is just learning, I think it would be possible to make it an automatic toggle whenever the lighting conditions keep the hand gesture from working. Hope to see that fixed :) Cheers
I'm having an issue loading a 1.2GB USDZ file on visionOS.
Here are the details:
the file is downloaded via a backend API
the file is downloaded to the document directory
FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
when loading the asset, it takes almost 30s
Loaded usd((extension in RealityFoundation):RealityKit.Entity.LoadStatistics.USDLoader.rio) in 29.24642503261566 seconds
Loading asset code:
let model = try await Entity(contentsOf: assetUrl)
USDZ file is exported from RealityComposerPro
Did I make a mistake in the flow, or is there another approach to decrease the loading time?
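Not a fix for the parse time itself, but a sketch that at least keeps the 30-second cost to the first load, assuming the same asset is shown more than once: load it once, keep the parsed entity, and clone it afterwards. Reducing texture sizes when exporting from Reality Composer Pro is likely the bigger lever for the load time itself.

import RealityKit

// Load the USDZ once, keep the parsed scene, and hand out clones afterwards.
final class ModelCache {
    private var cache: [URL: Entity] = [:]

    func entity(for assetUrl: URL) async throws -> Entity {
        if let cached = cache[assetUrl] {
            return cached.clone(recursive: true)   // cheap compared to re-parsing
        }
        let model = try await Entity(contentsOf: assetUrl)
        cache[assetUrl] = model
        return model.clone(recursive: true)
    }
}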
For context, we have a fully immersive application running on visionOS. The application starts with a standard view/menu, and when the user clicks an option, it takes you into fully immersive mode with a small floating toolbar window that you can move and interact with as you move around the virtual space. When the user clicks the small X button below the window, we intercept the scenePhase .background event and handle exiting immersive mode and displaying the main menu. This all works fine.
The problem happens if the user turns around and doesn't look at the floating window for a couple of minutes. The system decides that the window should go into the background, and the same scenePhase background event is called - causing the system to exit immersive mode without warning. There seems to be no way of preventing this, and no distinction between this and the user clicking the close button.
Is there a reliable way to detect whether the user intentionally clicked the close button vs. the window going into the background through lack of use? onDisappear doesn't trigger.
thanks in advance
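For concreteness, a minimal sketch of the handling described above (view names are placeholders); both the close tap and the system-initiated backgrounding arrive through the same scenePhase transition, which is exactly the ambiguity in question:

import SwiftUI

struct FloatingToolbarWindow: View {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        ToolbarControls()   // placeholder for the real toolbar content
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase == .background {
                    // Fires both when the user taps the window's close button
                    // and when the system backgrounds the unused window;
                    // nothing here distinguishes the two causes.
                    Task { await dismissImmersiveSpace() }
                }
            }
    }
}

struct ToolbarControls: View {
    var body: some View { Text("Toolbar") }
}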
Hello!
I encountered a problem while developing a visionOS project. I placed a model file of 294.8 MB in the project and tried to load it using FileManager.default.contentsOfDirectory(atPath: resourcesPath).filter { $0.hasSuffix(".usdz") }. However, when the app loaded the model, it was terminated by the system for exceeding its memory limit. How can I solve this problem?
Hello!
I have developed an application using Unity that runs on the Vision Pro.
I have built it correctly, installed it on the device, and it works. The spatial computing application continues to work over several days (which is normal: I launch the app and it works; it doesn't use any external services).
After several weeks, a month or so, I launch the same app again and it no longer works. The only way I have to make it work again is to rebuild and reinstall it.
What am I missing here? Why does an application built and installed a few weeks ago suddenly stop working on the Vision Pro?
In Progressive mode, you can turn the Digital Crown to reveal your environment by limiting/expanding the field of view of your immersive scene.
I'm trying to create a different sort of behavior where your immersive scene remains in 360° mode, but adjusting a dial (it doesn't have to be the Crown; it could be an in-app dial/slider) adjusts the transparency of the scene.
My users aren't quite satisfied with the native features that help ensure you aren't about to run into a wall or furniture and want a way of quickly adjusting the transparency on the fly.
Is that possible?
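A minimal sketch of one way to approach it, assuming the space runs with a mixed or progressive immersion style so passthrough is visible behind faded content; OpacityComponent fades the whole hierarchy it is attached to, and the binding can be driven by any in-app slider:

import SwiftUI
import RealityKit

struct FadeableImmersiveView: View {
    @Binding var sceneOpacity: Double   // drive this from a Slider in a control window

    @State private var sceneRoot = Entity()

    var body: some View {
        RealityView { content in
            // Build or load the 360° content under one root entity.
            content.add(sceneRoot)
        } update: { _ in
            // Fades every descendant of sceneRoot in one place.
            sceneRoot.components.set(OpacityComponent(opacity: Float(sceneOpacity)))
        }
    }
}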
I've got an immersive scene that I want to be able to bring additional users into via SharePlay, where each user would be able to see (and hopefully interact with) the immersive scene. How does one implement that?
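A rough sketch of the usual shape of this, assuming a GroupActivities session plus the visionOS system coordinator opting into the shared group immersive space; the activity name and title are placeholders:

import GroupActivities

// Hypothetical activity describing the shared immersive experience.
struct ImmersiveTourActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Immersive Tour"
        meta.type = .generic
        return meta
    }
}

// Join sessions (e.g., while a FaceTime call is active).
func configureSharePlay() {
    Task {
        for await session in ImmersiveTourActivity.sessions() {
            // Opt into the shared immersive space so participants
            // are placed together in the scene.
            if let coordinator = await session.systemCoordinator {
                var config = SystemCoordinator.Configuration()
                config.supportsGroupImmersiveSpace = true
                coordinator.configuration = config
            }
            session.join()
            // Use GroupSessionMessenger / GroupSessionJournal here to
            // sync app state between participants.
        }
    }
}

// Elsewhere, to start sharing: try await ImmersiveTourActivity().activate()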
Hi, I read online that to downgrade from the visionOS 2 developer beta back to visionOS 1, we need the developer strap. My request for the strap is still under review; is there a way to downgrade without the strap if the need arises?
Is there a way to access the coordinates of where the camera is while scanning the room with RoomPlan?
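A small sketch of one way to read the pose, assuming RoomCaptureSession's arSession property is available; the camera's world-space position sits in the last column of its transform:

import ARKit
import RoomPlan

// Returns the camera's current world-space position during a scan.
func cameraPosition(in captureSession: RoomCaptureSession) -> SIMD3<Float>? {
    guard let frame = captureSession.arSession.currentFrame else { return nil }
    let transform = frame.camera.transform   // camera-to-world 4x4 transform
    return SIMD3<Float>(transform.columns.3.x,
                        transform.columns.3.y,
                        transform.columns.3.z)
}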
Hi all,
Currently I'm working on a SharePlay feature where users pull data from a remote source and can share it in a volumetric window with others on the FaceTime call. However, I am running into an issue where the group activity/session seems to throw an error on the recipient of the journal's attachment, with the description notSupported.
As I understand it, we use GroupSessionJournal for larger pieces of data like images (like in the Drawing Together example) and in my case 3d models.
The current flow goes as follows:
User will launch the app and fetch a model from remote.
User can start a shareplay instance in which the system captures the volumetric window for users to join and see.
At this point, only the original user can see the model. The user can press a button to share this model with the other participants using
/// modelData is serialized `Data`
try await journal.add(modelData)
In the group session configuration, I already have a task listening for
for await attachments in journal.attachments {
for attachment in attachments { ... }
}
This task attempts to load data via the following code:
let modelData = try await attachment.load(Data.self) /// this is where the error is thrown: `notSupported`
I expect the attachment.load(Data.self) call to properly deliver the model data, but instead I am receiving this error.
I have also attempted to wrap the model data within an enclosing struct that has a name and data property and conform the enclosing struct to Transferable but that continued to throw the notSupported error.
Is there something I'm doing wrong or is this simply a bug in the GroupSessionJournal? Please let me know if more information is required for debugging and resolution.
Thanks!
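For comparison, a sketch of the Transferable wrapper shape usually shown for journal attachments; the extra step that is easy to miss is declaring the custom identifier as an exported type in Info.plist. The type and identifier names here are placeholders, and this is an assumption rather than a confirmed fix for the notSupported error.

import CoreTransferable
import UniformTypeIdentifiers

extension UTType {
    // Must match an exported type declaration in Info.plist.
    static let sharedModel = UTType(exportedAs: "com.example.shared-model")
}

struct SharedModel: Codable, Transferable {
    var name: String
    var data: Data   // serialized 3D model bytes

    static var transferRepresentation: some TransferRepresentation {
        CodableRepresentation(contentType: .sharedModel)
    }
}

// Sending:   try await journal.add(SharedModel(name: "rocket", data: modelData))
// Receiving: let model = try await attachment.load(SharedModel.self)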
Hello,
I recently got the entitlement for the Enterprise API this week. Although I added the license and the entitlement to the project, I couldn't get any frames from cameraFrameUpdates. Here are the logs of the authorization status and the cameraFrameUpdates stream:
[cameraAccess: allowed]
CameraFrameUpdates(stream: Swift.AsyncStream<ARKit.CameraFrame>(context: Swift.AsyncStream<ARKit.CameraFrame>._Context))
Could anyone point out what I'm doing wrong in the process?
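For reference, a sketch of the main-camera flow as documented for the Enterprise APIs (CameraFrameProvider); one detail that reportedly matters is that frames are only delivered while the app has an ImmersiveSpace open, so a plain window alone can leave the stream silent:

import ARKit

func streamMainCameraFrames() async {
    let session = ARKitSession()
    let provider = CameraFrameProvider()

    // Pick a format for the left main camera.
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main,
                                                          cameraPositions: [.left])
    guard let format = formats.first else { return }

    do {
        try await session.run([provider])
    } catch {
        print("ARKitSession.run failed: \(error)")
        return
    }

    guard let updates = provider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        if let sample = frame.sample(for: .left) {
            _ = sample.pixelBuffer   // CVPixelBuffer for this frame
        }
    }
}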
I'm developing an app where a user can bring a video or content from a WKWebView into an immersive space using SwiftUI attachments on a RealityView.
This works just fine, but I'm having some trouble configuring how the audio from the web content should sound in an immersive space.
When in windowed mode, playing content sounds just fine and very natural. The spatial audio effect with head tracking is pronounced and adds depth to content with multichannel or Dolby Atmos audio.
When I move the same web view into an immersive space, however, the audio becomes excessively echoey, as if a large amount of reverb had been applied. The spatial audio effect is also diminished and, while still there, is nowhere near as immersive.
I've tried the following:
Setting all entities in my space to use channel audio, including the web view attachment.
for entity in content.entities {
entity.channelAudio = ChannelAudioComponent()
entity.ambientAudio = nil
entity.spatialAudio = nil
}
Changing the AVAudioSessionSpatialExperience:
let experience = AVAudioSessionSpatialExperience.headTracked(
soundStageSize: .large,
anchoringStrategy: .automatic
)
try? AVAudioSession.sharedInstance().setIntendedSpatialExperience(experience)
I've also tried every sound stage size and anchoring strategy; large works the best, but it doesn't remove that reverb.
I'm also aware of ReverbComponent in visionOS 2 (which I haven't updated to just yet), but ideally I need a way to configure this for visionOS 1 users too.
Am I missing something? Surely there's a way for developers to stop the system from messing with the audio and applying these effects? A few of my users have complained that the audio sounds considerably worse in my cinema immersive space than in a window.
Hello everyone,
It seems that the Vision Pro supports 6DoF tracking, but is it possible to switch to 3DoF tracking? The reason for my question is that I would like to use it while riding in a car, but 6DoF tracking does not seem to work well in this situation. I was wondering if switching to 3DoF tracking might solve the issue.
Note: I am using the travel mode.
Hi all,
I tried the "isSpatialVideoCaptureEnabled" with AVCaptureMovieFileOutput mentioned in WWDC24: Build compelling spatial photo and video experiences, and it works.
But there are some issues and questions:
In the code below, change.newValue is always nil, so the code doesn't seem to work:
let observation = videoDevice.observe(\.spatialCaptureDiscomfortReasons) { (device, change) in
guard let newValue = change.newValue else { return }
if newValue.contains(.subjectTooClose) {
// Guide user to move back
}
if newValue.contains(.notEnoughLight) {
// Guide user to find a brighter environment
}
}
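One likely cause, in case it helps: the block-based observe call only populates change.newValue when KVO options are passed; without .new, the change object arrives with newValue == nil, which matches the behavior above. Something like:

let observation = videoDevice.observe(\.spatialCaptureDiscomfortReasons,
                                      options: [.initial, .new]) { device, change in
    // .new fills in change.newValue; .initial also fires once with the current value.
    guard let reasons = change.newValue else { return }
    if reasons.contains(.subjectTooClose) {
        // Guide user to move back
    }
    if reasons.contains(.notEnoughLight) {
        // Guide user to find a brighter environment
    }
}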
AVCaptureMovieFileOutput supports spatial video capturing.
May I ask whether AVCaptureVideoDataOutput will also support spatial video capturing?
The 3D object capture feature doesn't seem to work on my iPhone 12 Pro. The circle that is supposed to show up when you begin to move around the object doesn't appear, so object capture doesn't even begin. It says "more light" or "move closer", but this doesn't happen on my iPhone 14 Pro; it works perfectly fine there, even with the same lighting. How can this be fixed?
So in the WWDC23 video on the RoomPlan enhancements, it says that it is now possible to set a custom ARSession for the RoomCaptureSession. But how do you actually set the config for the custom ARSession?
init() {
let arConfig = ARWorldTrackingConfiguration()
arConfig.worldAlignment = .gravityAndHeading
arSession = ARSession()
roomCaptureView = RoomCaptureView(frame: CGRect(x: 0, y: 0, width: 42, height: 42), arSession: arSession)
sessionConfig = RoomCaptureSession.Configuration()
roomCaptureView.captureSession.delegate = self
roomCaptureView.delegate = self
}
However, I keep getting an issue that self is being used in the property access before being initialised.
What can I do to fix it?
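For what it's worth, a sketch of an ordering that avoids the compiler error, assuming the class is an NSObject subclass: assign all stored properties (using locals so nothing reads self), call super.init(), and only then hand self out as a delegate. Running the ARWorldTrackingConfiguration on the session passed to RoomCaptureView is an assumption about how the custom config takes effect, not something confirmed by the session video.

import ARKit
import RoomPlan
import UIKit

final class RoomScanController: NSObject, RoomCaptureSessionDelegate {
    let arSession: ARSession
    let roomCaptureView: RoomCaptureView
    let sessionConfig: RoomCaptureSession.Configuration

    override init() {
        // 1. Set up locals first, then assign the stored properties,
        //    so nothing reads `self` before it is fully initialized.
        let arConfig = ARWorldTrackingConfiguration()
        arConfig.worldAlignment = .gravityAndHeading

        let session = ARSession()
        session.run(arConfig)   // assumption: run the custom config on the session handed to RoomPlan

        arSession = session
        roomCaptureView = RoomCaptureView(frame: CGRect(x: 0, y: 0, width: 42, height: 42),
                                          arSession: session)
        sessionConfig = RoomCaptureSession.Configuration()

        // 2. Call super.init() so `self` becomes available.
        super.init()

        // 3. Only now hand `self` out as a delegate
        //    (roomCaptureView.delegate = self goes here too, once the class
        //    also conforms to RoomCaptureViewDelegate).
        roomCaptureView.captureSession.delegate = self
    }

    // Start scanning with the RoomPlan configuration.
    func startScan() {
        roomCaptureView.captureSession.run(configuration: sessionConfig)
    }

    // Implement whichever RoomCaptureSessionDelegate callbacks you need, e.g.:
    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        // react to live updates while scanning
    }
}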