Discuss Spatial Computing on Apple Platforms.


Presenting immersive content in UIKit app
I have a UIKit app and would like to provide a spatial experience when it runs on visionOS. I added visionOS support, but I'm not sure how to present an immersive view. All the tutorials are in SwiftUI, but my app is in UIKit. This is an example from a SwiftUI project — how do I declare this ImmersiveView in UIKit?

```swift
struct VirtualApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }.windowStyle(.volumetric)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}
```

And in UIKit, how do I make the call to open the ImmersiveView?
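A minimal sketch of one workaround, not a confirmed recipe: ImmersiveSpace is a SwiftUI-only scene type, so keep a thin SwiftUI `App` as the entry point, host the existing UIKit content through `UIViewControllerRepresentable`, and bridge the open call with a notification. `MyExistingRootViewController` and the notification name are illustrative stand-ins for your own code.

```swift
import SwiftUI
import UIKit
import Combine

extension Notification.Name {
    // Illustrative name for bridging UIKit -> SwiftUI.
    static let openImmersive = Notification.Name("openImmersive")
}

// Wraps your existing UIKit root view controller (name assumed).
struct UIKitRootView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> UIViewController {
        MyExistingRootViewController()
    }
    func updateUIViewController(_ uiViewController: UIViewController, context: Context) {}
}

// Listens for the notification and opens the immersive space.
struct RootContainer: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        UIKitRootView()
            .onReceive(NotificationCenter.default.publisher(for: .openImmersive)) { _ in
                Task { await openImmersiveSpace(id: "ImmersiveSpace") }
            }
    }
}

@main
struct VirtualApp: App {
    var body: some Scene {
        WindowGroup {
            RootContainer()
        }
        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()   // your existing RealityKit view
        }
    }
}

// From anywhere in UIKit code:
// NotificationCenter.default.post(name: .openImmersive, object: nil)
```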
Replies: 5 · Boosts: 1 · Views: 1.6k · Jul ’23
How to control the position of windows and volumes in immersive space
My app has a window and a volume. I am trying to display the volume on the right side of the window. I know `.defaultWindowPlacement` can achieve that, but I want more control over the exact position of my volume relative to my window. I need the volume to move as I move the window so that it always stays in the same position relative to the window. I think I need a way to track the positions of both the window and the volume. If this can be achieved without an immersive space, that would be great. If not, how do I do it in an immersive space? Current code:

```swift
import SwiftUI

@main
struct tiktokForSpacialModelingApp: App {
    @State private var appModel: AppModel = AppModel()

    var body: some Scene {
        WindowGroup(id: appModel.launchWindowID) {
            LaunchWindow()
                .environment(appModel)
        }
        .windowResizability(.contentSize)

        WindowGroup(id: appModel.mainViewWindowID) {
            MainView()
                .frame(minWidth: 500, maxWidth: 600, minHeight: 1200, maxHeight: 1440)
                .environment(appModel)
        }
        .windowResizability(.contentSize)

        WindowGroup(id: appModel.postVolumeID) {
            let initialSize = Size3D(width: 900, height: 500, depth: 900)
            PostVolume()
                .frame(minWidth: initialSize.width, maxWidth: initialSize.width * 4,
                       minHeight: initialSize.height, maxHeight: initialSize.height * 4)
                .frame(minDepth: initialSize.depth, maxDepth: initialSize.depth * 4)
        }
        .windowStyle(.volumetric)
        .windowResizability(.contentSize)
        .defaultWindowPlacement { content, context in
            // Get WindowProxy from context based on id
            if let mainViewWindow = context.windows.first(where: { $0.id == appModel.mainViewWindowID }) {
                return WindowPlacement(.trailing(mainViewWindow))
            } else {
                return WindowPlacement()
            }
        }

        ImmersiveSpace(id: appModel.immersiveSpaceID) {
            ImmersiveView()
                .onAppear { appModel.immersiveSpaceState = .open }
                .onDisappear { appModel.immersiveSpaceState = .closed }
        }
        .immersionStyle(selection: .constant(.progressive), in: .progressive)
    }
}
```
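As far as I know, visionOS doesn't expose window positions to the app, so continuously tracking one window from another isn't directly possible; `.defaultWindowPlacement` only affects initial placement. One workaround sketch, inside an immersive space: parent both pieces of content to a common entity, so the relative offset is just a child transform (entity names and the 0.7 m offset are illustrative).

```swift
import SwiftUI
import RealityKit

struct RelativePlacementView: View {
    var body: some View {
        RealityView { content in
            let mainPanel = Entity()           // stand-in for the window's content
            let postVolume = Entity()          // stand-in for the volume's content
            postVolume.position = [0.7, 0, 0]  // fixed offset: 0.7 m to the right
            mainPanel.addChild(postVolume)     // the child follows its parent
            content.add(mainPanel)
        }
    }
}
```

Moving `mainPanel` (for example with a drag gesture) then carries the volume content along automatically, because child transforms are expressed in the parent's space.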
Replies: 1 · Boosts: 0 · Views: 282 · Jul ’24
RealityKit Subdivide
In the "Discover RealityKit APIs for iOS, macOS, and visionOS" presentation, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have further details on how this works in RealityKit?
Replies: 3 · Boosts: 1 · Views: 433 · Jul ’24
Understanding the Topology of LowLevelMesh in RealityKitDrawingApp Sample Code
I have recently developed an interest in the shader effects commonly found in Apple's UI and have been studying them. Since I own a Vision Pro, I also have a strong desire to understand LowLevelMesh, and I am currently analyzing the sample code after watching the related session. The part where I am completely stuck is the initializer of CurveExtruder:

```swift
/// Initializes the `CurveExtruder` with the shape to sweep along the curve.
///
/// - Parameters:
///   - shape: The 2D shape to sweep along the curve.
init(shape: [SIMD2<Float>]) {
    self.shape = shape

    // Compute topology
    //
    // Triangle fan lists each vertex in `shape` once for each ring, except for vertex `0` of `shape` which
    // is listed twice. Plus one extra index for the end-index (0xFFFFFFFF).
    let indexCountPerFan = 2 * (shape.count + 1) + 1

    var topology: [UInt32] = []
    topology.reserveCapacity(indexCountPerFan)

    // Build triangle fan.
    for vertexIndex in shape.indices.reversed() {
        topology.append(UInt32(vertexIndex))
        topology.append(UInt32(shape.count + vertexIndex))
    }

    // Wrap around to the first vertex.
    topology.append(UInt32(shape.count - 1))
    topology.append(UInt32(2 * shape.count - 1))

    // Add end-index.
    topology.append(UInt32.max)

    assert(topology.count == indexCountPerFan)
}
```

I have tried to understand why the capacity reserved for the topology array is `2 * (shape.count + 1) + 1`, but I am struggling to figure it out. I also do not understand the principle behind the order in which `vertexIndex` is added to the topology. The confusion is even greater because, while the comment mentions a triangle fan, the actual creation of the `LowLevelMesh.Part` object uses the `topology: .triangleStrip` argument. (Did I misunderstand? I know that the topology options include `triangle`, but that uses duplicated vertices.) I feel very stuck; it's hard to find answers even through search engines or LLMs. Maybe this requires specialized knowledge of computer graphics, which makes me feel embarrassed to ask. However, I have tried various directions without external help and still cannot find a clear path, so I am desperately seeking assistance!

P.S. As Korean is my primary language, I apologize in advance if there are any awkward or rude expressions.
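For what it's worth, the arithmetic works out if you trace the loop with N = `shape.count`, and the indices do describe a triangle strip — the "fan" in the comments appears to be informal naming. The loop alternates between ring k (indices `vertexIndex`) and ring k+1 (indices `shape.count + vertexIndex`), which is the classic strip lacing for one tube segment, and `0xFFFFFFFF` is Metal's primitive-restart sentinel separating one segment's strip from the next. A sketch of the count:

```swift
// Tracing the index count for one ring pair, with N = shape.count.
let N = 8                 // e.g. an octagonal cross-section

var count = 0
count += 2 * N            // the loop appends one (ring k, ring k+1) pair per shape vertex
count += 2                // the wrap-around pair that re-emits the first pair, closing the loop
count += 1                // the 0xFFFFFFFF restart index terminating this strip

assert(count == 2 * (N + 1) + 1)   // matches indexCountPerFan
```

Since the loop runs over `shape.indices.reversed()`, the last pair emitted is `(0, N)`; the wrap-around then appends `(N - 1, 2N - 1)` — the first pair again — so the band of 2N triangles closes around the cross-section.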
Replies: 0 · Boosts: 0 · Views: 322 · Jul ’24
PreviewApplication open Spatial Video issue
Hi, I have a spatial video that I am trying to load in a visionOS app with the PreviewApplication API:

```swift
let url = URL(string: "https://mauiman.azureedge.net/videos/SpatialJourney/watermelon_cat.MOV")
let item = PreviewItem(url: url!)
_ = PreviewApplication.open(items: [item])
```

When I run the application, I get the following error. Did I miss anything?

```
QLUbiquitousItemFetcher: <QLUbiquitousItemFetcher: 0x6000022edfe0> could not create sandbox wrapper. Error: Error Domain=NSPOSIXErrorDomain Code=2 "couldn't issue sandbox extension com.apple.quicklook.readonly for '/videos/SpatialJourney/watermelon_cat.MOV': No such file or directory" UserInfo={NSDescription=couldn't issue sandbox extension com.apple.quicklook.readonly for '/videos/SpatialJourney/watermelon_cat.MOV': No such file or directory}
```

Putting the spatial video locally:

```swift
let url = URL(fileURLWithPath: "watermelon_cat.MOV")
let item = PreviewItem(url: url)
_ = PreviewApplication.open(items: [item])
```

I get the following error:

```
Error getting the size of file(watermelon_cat.MOV -- file:///) with error (Error Domain=NSCocoaErrorDomain Code=260 "The file "watermelon_cat.MOV" couldn't be opened because there is no such file." UserInfo={NSURL=watermelon_cat.MOV -- file:///, NSFilePath=/watermelon_cat.MOV, NSUnderlyingError=0x600000ea1650 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}})
```

Any help is greatly appreciated. Thank you in advance.
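The second error hints at the likely cause: `URL(fileURLWithPath: "watermelon_cat.MOV")` resolves the relative path against `/` (note `NSFilePath=/watermelon_cat.MOV` in the log), not against the app bundle. A sketch of a fix, assuming the movie has been added to the app target's resources:

```swift
import QuickLook

// Resolve the file inside the app bundle instead of using a bare relative path.
if let url = Bundle.main.url(forResource: "watermelon_cat", withExtension: "MOV") {
    let item = PreviewItem(url: url)
    _ = PreviewApplication.open(items: [item])
}
```

For the remote case, the sandbox-extension error suggests Quick Look is treating the URL's path as a local file, so downloading the movie to a local URL first (e.g. with `URLSession.download(from:)`) and passing that file URL may be necessary — that part is a guess, not documented behavior I can confirm.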
Replies: 3 · Boosts: 1 · Views: 461 · Jul ’24
Unable to set maximum window size for views under different tabs
On TikTok on Vision Pro, the home page has different minimum and maximum window heights and widths compared to the search page. I am able to set a minimum window size for different tab views, but the maximum size doesn't seem to work. Code:

```swift
// WindowSizeModel.swift
import Foundation
import SwiftUI

enum TabType {
    case home
    case search
    case profile
}

@Observable
class WindowSizeModel {
    var minWidth: CGFloat = 400
    var maxWidth: CGFloat = 500
    var minHeight: CGFloat = 400
    var maxHeight: CGFloat = 500

    func setWindowSize(for tab: TabType) {
        switch tab {
        case .home:
            configureWindowSize(minWidth: 400, maxWidth: 500, minHeight: 400, maxHeight: 500)
        case .search:
            configureWindowSize(minWidth: 300, maxWidth: 800, minHeight: 300, maxHeight: 800)
        case .profile:
            configureWindowSize(minWidth: 800, maxWidth: 1000, minHeight: 800, maxHeight: 1000)
        }
    }

    private func configureWindowSize(minWidth: CGFloat, maxWidth: CGFloat, minHeight: CGFloat, maxHeight: CGFloat) {
        self.minWidth = minWidth
        self.maxWidth = maxWidth
        self.minHeight = minHeight
        self.maxHeight = maxHeight
    }
}
```

```swift
// tiktokForSpacialModelingApp.swift
import SwiftUI

@main
struct tiktokForSpacialModelingApp: App {
    @State private var windowSizeModel: WindowSizeModel = WindowSizeModel()

    var body: some Scene {
        WindowGroup {
            MainView()
                .frame(
                    minWidth: windowSizeModel.minWidth, maxWidth: windowSizeModel.maxWidth,
                    minHeight: windowSizeModel.minHeight, maxHeight: windowSizeModel.maxHeight)
                .environment(windowSizeModel)
        }
        .windowResizability(.contentSize)
    }
}
```

```swift
// MainView.swift
import SwiftUI
import RealityKit

struct MainView: View {
    @State private var selectedTab: TabType = TabType.home
    @Environment(WindowSizeModel.self) var windowSizeModel

    var body: some View {
        TabView(selection: $selectedTab) {
            Tab("Home", systemImage: "play.house", value: TabType.home) {
                HomeView()
            }
            Tab("Search", systemImage: "magnifyingglass", value: TabType.search) {
                SearchView()
            }
            Tab("Profile", systemImage: "person.crop.circle", value: TabType.profile) {
                ProfileView()
            }
        }
        .onAppear {
            windowSizeModel.setWindowSize(for: TabType.home)
        }
        .onChange(of: selectedTab) { oldTab, newTab in
            if oldTab == newTab { return }
            windowSizeModel.setWindowSize(for: newTab)
        }
    }
}
```
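As far as I can tell, `.windowResizability(.contentSize)` derives the window's limits from the content's frame constraints, but raising or lowering the maximums after the window exists doesn't appear to shrink a window the user has already resized, which seems consistent with what you're seeing. One thing worth trying on visionOS 2, offered as a hedged sketch (UIKit's `requestGeometryUpdate` exists; the `GeometryPreferences.Vision` variant is my assumption of the right one for this platform), is asking the system to resize the hosting scene when the tab changes:

```swift
import UIKit

// Hedged sketch: request a new size for the scene hosting the window.
func requestResize(of scene: UIWindowScene, to size: CGSize) {
    let preferences = UIWindowScene.GeometryPreferences.Vision(size: size)
    scene.requestGeometryUpdate(preferences) { error in
        print("Resize request failed: \(error)")
    }
}
```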
Replies: 1 · Boosts: 0 · Views: 228 · Jul ’24
What is the difference between an entity Action and Animation
What's the difference between an action and an animation, e.g. FromToByAnimation vs. FromToByAction? The documentation on them is pretty similar and I'm not understanding the differences exactly… :S

FromToByAnimation → https://developer.apple.com/documentation/realitykit/fromtobyanimation?changes=__2_2
FromToByAction → https://developer.apple.com/documentation/realitykit/fromtobyaction?changes=__2_2

As developers, when should we reach for an animation vs. an action? 🤔
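Not an authoritative answer, but for comparison, here is the animation side as I understand it: a FromToByAnimation interpolates a bindable property on an entity over a duration and plays via an AnimationResource (the rotation values below are illustrative).

```swift
import RealityKit

// Spin an entity 180° around Y over two seconds by animating its transform.
func spin(_ entity: Entity) throws {
    let spin = FromToByAnimation(
        from: Transform(rotation: simd_quatf(angle: 0, axis: [0, 1, 0])),
        to: Transform(rotation: simd_quatf(angle: .pi, axis: [0, 1, 0])),
        duration: 2.0,
        bindTarget: .transform
    )
    let resource = try AnimationResource.generate(with: spin)
    entity.playAnimation(resource)
}
```

My loose understanding is that actions belong to the newer entity-action system (designed for sequencing and for subscribing to action events), while animations are the long-standing property-interpolation system — so animations fit pure visual tweens, actions fit event-driven behavior. Treat that as a starting point rather than a definitive rule.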
Replies: 0 · Boosts: 1 · Views: 226 · Jul ’24
How Does Update Closure Work in RealityView
I have looked here: RealityView documentation. I also found this thread: RealityView Update Closure Thread. I am not able to find documentation on how the update closure works. I am loading attachments using RealityView's attachment feature (really helpful), and I want to remove them programmatically from another file. I found that @State variables can be used, but I am not able to modify them from outside the ImmersiveView Swift file. The second problem I faced: even when I update them inside the file, my debugging statements don't execute. So exactly when does the update closure run? I know it gets executed at the start (twice, for some reason). It also gets executed when I add a window using:

```swift
openWindow?(id: "ButtonView")
```

I need to use the update closure because I am also not able to get a reference to RealityViewAttachment outside the RealityView struct. My code (only the necessary parts shown):

```swift
@State private var pleaseRefresh = ""
@StateObject var model = HandTrackingViewModel()

var body: some View {
    RealityView { content, attachments in
        if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
            content.add(immersiveContentEntity)
        }
        content.add(model.setupContentEntity())
        content.add(entityDummy)
        print("View Loaded")
    } update: { content, attachments in
        print("Update Closure Executed")
        if model.editWindowAdded {
            print("WINDOW ADDED")
            let theattachment = attachments.entity(for: "sample")!
            entityDummy.addChild(theattachment)
            // more code here
        }
    } attachments: {
        Attachment(id: "sample") {
            Button(action: {
                model.canEditPos = true
                model.canRotate = false
                pleaseRefresh = "changed"
            }) {
                HStack {
                    Image(systemName: "pencil.and.outline")
                        .resizable()
                        .scaledToFit()
                        .frame(width: 32, height: 32)
                    Text("Edit Placement")
                        .font(.caption)
                }
                .padding(4)
            }
            .frame(width: 160, height: 60)
        }
    }
}
```

How can I make the update closure (or the code inside it) run when I want it to? I am new to Swift; I apologize if my question seems naive.
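A sketch of one way to make this work, based on my understanding that `update:` re-runs whenever observable state the view depends on changes: move the flag into an `@Observable` model injected through the environment, so code in other files can flip it. Type and property names here are illustrative.

```swift
import SwiftUI
import RealityKit

@Observable
final class SceneFlags {
    var editWindowAdded = false   // flip this from any file that holds the model
}

struct ImmersiveView: View {
    @Environment(SceneFlags.self) private var flags
    let entityDummy = Entity()

    var body: some View {
        RealityView { content, attachments in
            content.add(entityDummy)
        } update: { content, attachments in
            // Reading flags.editWindowAdded registers a dependency, so any
            // mutation of it re-evaluates the view and re-runs this closure.
            if flags.editWindowAdded,
               let attachment = attachments.entity(for: "sample") {
                entityDummy.addChild(attachment)
            }
        } attachments: {
            Attachment(id: "sample") {
                Text("Sample")
            }
        }
    }
}
```

Inject one shared instance with `.environment(SceneFlags())` at the scene level; a mutation like `flags.editWindowAdded = true` from another file should then trigger the update closure. (An `ObservableObject` with `@Published` properties behaves similarly if you're not on the Observation framework.)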
Replies: 1 · Boosts: 0 · Views: 448 · Jul ’24
VisionOS GroupActivities WatchTogether
I have an application that is meant to be a "watch together" GroupActivity using SharePlay, coordinating video playback with AVPlayerPlaybackCoordinator. In the current implementation, the activity begins before opening the AVPlayer; however, when clicking the back button within the AVPlayer view, the user is prompted to "End Activity for Everyone" or "End Activity for just me". There is no option to continue the group activity. My goal is to retain the same GroupSession even if a user exits the AVPlayer view. Is there a way to avoid ending the session when coordinating playback using the AVPlayerPlaybackCoordinator?

```swift
private func startObservingSessions() async {
    sessionInfo = .init()
    // Await new sessions to watch video together.
    for await session in MyActivity.sessions() {
        // Clean up the old session, if it exists.
        cleanUpSession(groupSession)

        #if os(visionOS)
        // Retrieve the new session's system coordinator object to update its configuration.
        guard let systemCoordinator = await session.systemCoordinator else { continue }

        // Create a new configuration that enables all participants to share the same immersive space.
        var configuration = SystemCoordinator.Configuration()
        // Set up the spatial Persona configuration.
        configuration.spatialTemplatePreference = .sideBySide
        configuration.supportsGroupImmersiveSpace = true
        // Update the coordinator's configuration.
        systemCoordinator.configuration = configuration
        #endif

        // Set the app's active group session before joining.
        groupSession = session
        // Store the session for use in sending messages.
        sessionInfo?.session = session

        let stateListener = Task {
            await self.handleStateChanges(groupSession: session)
        }
        subscriptions.insert(.init { stateListener.cancel() })

        // Observe when the local user or a remote participant changes the activity on the GroupSession.
        let activityListener = Task {
            await self.handleActivityChanges(groupSession: session)
        }
        subscriptions.insert(.init { activityListener.cancel() })

        // Join the session to participate in playback coordination.
        session.join()
    }
}

/// An implementation of `AVPlayerPlaybackCoordinatorDelegate` that determines how
/// the playback coordinator identifies local and remote media.
private class CoordinatorDelegate: NSObject, AVPlayerPlaybackCoordinatorDelegate {
    var video: Video?

    // Adopting this delegate method is required when playing local media,
    // or any time you need a custom strategy for identifying media. Without
    // implementing this method, coordinated playback won't function correctly.
    func playbackCoordinator(_ coordinator: AVPlayerPlaybackCoordinator,
                             identifierFor playerItem: AVPlayerItem) -> String {
        // Return the video id as the player item identifier.
        "\(video?.id ?? -1)"
    }
}

/// Initializes the playback coordinator for synchronizing video playback.
func initPlaybackCoordinator(playbackCoordinator: AVPlayerPlaybackCoordinator) async {
    self.playbackCoordinator = playbackCoordinator
    if let coordinator = self.playbackCoordinator {
        coordinator.delegate = coordinatorDelegate
    }
    if let activeSession = groupSession {
        // Set the group session on the AVPlayer instance's playback coordinator
        // so it can synchronize playback with other devices.
        playbackCoordinator.coordinateWithSession(activeSession)
    }
}

/// A coordinator that acts as the player view controller's delegate object.
final class PlayerViewControllerDelegate: NSObject, AVPlayerViewControllerDelegate {
    let player: PlayerModel

    init(player: PlayerModel) {
        self.player = player
    }

    #if os(visionOS)
    // The app adopts this method to reset the state of the player model when a user
    // taps the back button in the visionOS player UI.
    func playerViewController(_ playerViewController: AVPlayerViewController,
                              willEndFullScreenPresentationWithAnimationCoordinator coordinator: UIViewControllerTransitionCoordinator) {
        Task { @MainActor in
            // Calling reset dismisses the full-window player.
            player.reset()
        }
    }
    #endif
}
```
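One thing to check, offered as a sketch rather than a confirmed fix: if `player.reset()` tears down the player while the coordinator is still attached, the system may treat the dismissal as leaving the activity. Detaching the playback coordinator on dismissal while keeping the GroupSession joined might preserve the session — my reading of the API is that passing `nil` to `coordinateWithSession` disconnects coordination without ending the session.

```swift
import AVFoundation
import GroupActivities

// Sketch: stop coordinating playback but keep the GroupSession alive.
// Call this from the dismissal path instead of ending or leaving the session.
func detachPlayerKeepingSession(_ coordinator: AVPlayerPlaybackCoordinator) {
    // The explicit cast lets Swift infer the generic parameter when passing nil;
    // MyActivity is the app's GroupActivity type from the code above.
    coordinator.coordinateWithSession(nil as GroupSession<MyActivity>?)
}
```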
Replies: 0 · Boosts: 0 · Views: 300 · Jul ’24
Can't Get OrbitAnimation() to work on my project
DESCRIPTION OF PROBLEM

I have an Apple Vision Pro App Store app called Starship SE Corps. I'm trying to add an animation so that the starship entity orbits the Earth entity. I'm trying to use OrbitAnimation, as discussed in the WWDC23 session "Build Spatial Experiences with RealityKit" (https://developer.apple.com/wwdc23/10080). However, I can't get the animation to work.

STEPS TO REPRODUCE

I created a sample test app called "SampleOrbitAnimationApp" to focus on the code I'm having trouble with. When I build and run it, the app runs on both the visionOS 1.2 simulator and on my real Apple Vision Pro device running visionOS 1.2. However, my starship entity is static and does not animate/orbit around my Earth entity.

I tried putting my OrbitAnimation code in the RealityView update: closure. Doing that, however, causes scope errors, because the entity I refer to in the OrbitAnimation code is created in the RealityView code block, so the update: closure can't see the entity property. Making the entity reference more global at the top of the ImmersiveView (so the update: closure sees it) causes other parameter issues in the .app file's call to the ImmersiveView and in the #Preview call. Maybe that is expected and I need to work around it, but I couldn't find a sensible way to do so. If this is the right approach, I need help resolving it across the project files.

I did find some example code online where a developer put the OrbitAnimation code directly in the RealityView code block, without an update: or attachments: closure at all. I tried that approach but couldn't get it to work either.

The test sample app targets the OrbitAnimation and ImmersiveView code I'm struggling with (i.e., I can't get the starship to move and orbit around the Earth). It uses the same production app package for the Starship and Earth entities, built in Reality Composer Pro. Those entities work fine in my latest production App Store release, so I think they are fine; the issue is the OrbitAnimation code for those entities. I realize new capabilities are coming in visionOS 2, but I would like to make OrbitAnimation work now in my visionOS 1.2 app.
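A minimal sketch of the WWDC23 pattern, creating and animating the entities inside the same make closure so the scope problem never arises. Entity names and the 0.5 m orbit radius are illustrative; the session's sample likewise passes the orbiting entity's own start transform.

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            guard let earth = try? await Entity(named: "Earth", in: realityKitContentBundle),
                  let ship = try? await Entity(named: "Starship", in: realityKitContentBundle)
            else { return }

            content.add(earth)
            earth.addChild(ship)
            ship.position = [0.5, 0, 0]   // start 0.5 m from Earth's center

            // The orbit is expressed relative to the ship's parent (the Earth entity).
            let orbit = OrbitAnimation(
                name: "Orbit",
                duration: 30,
                axis: [0, 1, 0],
                startTransform: ship.transform,
                bindTarget: .transform,
                repeatMode: .repeat
            )
            if let animation = try? AnimationResource.generate(with: orbit) {
                ship.playAnimation(animation)
            }
        }
    }
}
```

Two details that commonly bite here: the ship must be a child of the entity it should orbit (the animation revolves around the parent's origin), and its starting position must be off-center, otherwise the orbit radius is zero and the entity appears static.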
Replies: 9 · Boosts: 0 · Views: 669 · Jul ’24
Building for 'iOS', but linking in object file built for 'visionOS'
I have an application made with Flutter, which can run on visionOS as a "Designed for iPad" app, and I would like it to be possible to enter mixed reality from inside this application somehow. What I have tried so far was to embed the visionOS project I have inside the Swift application that Flutter generates, but with this attempt I got an error from Xcode telling me that this is not possible (building for 'iOS' but linking in an object file built for 'visionOS'). I wonder if there is another way I could achieve my goal?
Replies: 2 · Boosts: 0 · Views: 413 · Jul ’24
Unable to display contextMenu
This is a visionOS app. I added a contextMenu to a combined view, but when I long-press the view, there is no response. I tried using this contextMenu on other views, where it works normally, so I think there is something wrong with this combined view, but I don't know what the problem is. I hope you can point it out. Thank you!

The view with the problem:

```swift
struct NAMEView: View {
    @StateObject private var placeStore = PlaceStore()

    var body: some View {
        ZStack {
            Group {
                HStack(spacing: 2) {
                    Image(systemName: "mappin.circle.fill")
                        .font(.system(size: 50))
                        .symbolRenderingMode(.multicolor)
                        .accessibilityLabel("your location")
                        .accessibilityAddTraits([.isHeader])
                        .padding(.leading, 5.5)
                    VStack {
                        Text("\(placeStore.locationName)")
                            .font(.title3)
                            .accessibilityLabel(placeStore.locationName)
                        Text("You are here in App")
                            .font(.system(size: 13))
                            .foregroundColor(.secondary)
                            .accessibilityLabel("You are here in App")
                    }
                    .hoverEffect { effect, isActive, _ in
                        effect.opacity(isActive ? 1 : 0)
                    }
                    .padding()
                }
            }
            .onAppear {
                placeStore.updateLocationName()
            }
            .glassBackgroundEffect()
            .hoverEffect { effect, isActive, proxy in
                effect.clipShape(.capsule.size(
                    width: isActive ? proxy.size.width : proxy.size.height,
                    height: proxy.size.height,
                    anchor: .leading
                ))
                .scaleEffect(isActive ? 1.05 : 1.0)
            }
        }
    }
}
```
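A guess, offered as a sketch rather than a diagnosis: custom hover effects and glass backgrounds can change where the system thinks the interactive region is on visionOS, so the long press may not be landing on the view at all. Giving the container an explicit hit-test shape and attaching the menu at the outermost level is worth trying (the menu item is illustrative):

```swift
// Sketch: make the whole container hit-testable and hang the menu on it.
Group {
    // … the existing HStack …
}
.contentShape(.rect)            // explicit hit-test region for gestures
.contextMenu {
    Button("Copy location") { } // illustrative menu item
}
```

If that works, you can then reintroduce the hover effects one at a time to find which modifier was swallowing the gesture.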
Replies: 1 · Boosts: 0 · Views: 355 · Jul ’24
Receiving main camera stream
Hello, I recently got the entitlement for the Enterprise API this week. Although I added the license and the entitlement to the project, I couldn't get any frames from cameraFrameUpdates. Here are the logs of the authorization and of cameraFrameUpdates:

```
cameraAccess: allowed
CameraFrameUpdates(stream: Swift.AsyncStream<ARKit.CameraFrame>(context: Swift.AsyncStream<ARKit.CameraFrame>._Context))
```

Could anyone point out what I'm doing wrong in the process?
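For comparison, a sketch of the expected flow as I understand the enterprise camera API (the frame processing is illustrative): the provider must be run on an ARKitSession after authorization, and frames are pulled for a specific supported format.

```swift
import ARKit

// Sketch: request camera access, run the provider, then consume frames.
func streamMainCamera() async throws {
    let session = ARKitSession()
    let provider = CameraFrameProvider()

    // Requires the Enterprise API license file in the app bundle and the
    // com.apple.developer.arkit.main-camera-access.allow entitlement.
    _ = await session.requestAuthorization(for: [.cameraAccess])
    try await session.run([provider])

    // Pick a supported format for the left main camera.
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    guard let format = formats.first,
          let updates = provider.cameraFrameUpdates(for: format) else { return }

    for await frame in updates {
        if let sample = frame.sample(for: .left) {
            _ = sample.pixelBuffer   // process the CVPixelBuffer here
        }
    }
}
```

Two gotchas worth checking, as I understand them: the `for await` loop must be kept alive in a long-lived Task, and main-camera frames only arrive while the app has an immersive space open — if you're reading the stream from a plain window, it can sit silent even though authorization succeeded.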
Replies: 1 · Boosts: 0 · Views: 548 · Jun ’24