Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

All subtopics

Post · Replies · Boosts · Views · Activity

Draw a 3D Helix using lines on the Vision Pro
I'm rebuilding a Unity app in Swift because Unity's PolySpatial library doesn't support LineRenderers yet, and that's like 90% of my app. So far I can draw 2D lines in the visionOS "Hello World" project using paths and CGPoints in the body View of the Globe.swift file. I don't really know what I'm doing; I just got some example lines from ChatGPT that work for a line. I can't make these 3D, though. I haven't been able to find anything on drawing lines for the Vision Pro, and not just 2D lines: I need to draw helixes (helices?). Am I missing something? Thanks, Adam
0 replies · 0 boosts · 475 views · Feb ’24
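A possible starting point for the question above (not part of the thread): RealityKit has no LineRenderer equivalent, so one workaround is to approximate the helix with a chain of thin box segments. This is a minimal sketch; the radius, pitch, and thickness parameters are made-up defaults.

```swift
import Foundation
import RealityKit
import simd

/// Approximates a helix with thin box segments, since RealityKit has no line primitive.
@MainActor
func makeHelixEntity(radius: Float = 0.1,
                     pitch: Float = 0.05,
                     turns: Float = 5,
                     segmentsPerTurn: Int = 64,
                     thickness: Float = 0.002) -> Entity {
    let parent = Entity()
    let total = Int(turns * Float(segmentsPerTurn))
    // Sample points along the helix: x/z trace a circle while y rises with the angle.
    let points: [SIMD3<Float>] = (0...total).map { i in
        let angle = Float(i) / Float(segmentsPerTurn) * 2 * .pi
        return SIMD3<Float>(radius * cos(angle),
                            pitch * angle / (2 * .pi),
                            radius * sin(angle))
    }
    let material = UnlitMaterial()
    for i in 0..<points.count - 1 {
        let a = points[i], b = points[i + 1]
        let length = simd_length(b - a)
        // A thin box stretched along its local z axis stands in for a line segment.
        let segment = ModelEntity(mesh: .generateBox(size: [thickness, thickness, length]),
                                  materials: [material])
        segment.position = (a + b) / 2
        // Rotate the box's +z axis onto the segment direction.
        segment.orientation = simd_quatf(from: [0, 0, 1], to: simd_normalize(b - a))
        parent.addChild(segment)
    }
    return parent
}
```

The returned entity can be added to a RealityView's content; a custom MeshResource built from the same points would be more efficient, but the segment approach is the simplest way to get something on screen.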
AGPT not working properly.
I am on a MacBook Pro 2023, 16-inch, 16 GB RAM, 1 TB drive. I am on the latest macOS (Sonoma 14.3.1) and using the steps from the Apple Gaming Wiki to download the Game Porting Toolkit. I have met all the requirements. I tried using the automated installer; it didn't work. I tried Homebrew, and it worked up until the step where I had to run the command to add the Apple tap:

```
brew tap apple/apple http://github.com/apple/homebrew-apple
```

I get this error:

```
Logs:
/Users/MYNAME/Library/Logs/Homebrew/game-porting-toolkit/00.options.out
/Users/MYNAME/Library/Logs/Homebrew/game-porting-toolkit/01.configure
/Users/MYNAME/Library/Logs/Homebrew/game-porting-toolkit/01.configure.cc
/Users/MYNAME/Library/Logs/Homebrew/game-porting-toolkit/02.make
/Users/MYNAME/Library/Logs/Homebrew/game-porting-toolkit/wine64-build

If reporting this issue please do so to (not Homebrew/brew or Homebrew/homebrew-core):
  apple/apple
```
2 replies · 0 boosts · 720 views · Feb ’24
Loading SwiftData @Model image as Texture for RealityKit modelEntity: How can I convert the type 'Data' to the expected argument type 'URL'?
I can't figure this one out. I've been able to load image textures from a struct model, but not from a class model, for my modelEntity. This, for example, works for me; it's what I have been using up to now, without SwiftData, using a struct to hold my model:

```swift
if let imageURL = model.imageURL {
    let picInBox2 = ModelEntity(mesh: .generateBox(size: simd_make_float3(0.6, 0.5, 0.075), cornerRadius: 0.01))
    picInBox2.position = simd_make_float3(0, 0, -0.8)
    if let imageURL = model.imageURL {
        if let texture = try? TextureResource.load(contentsOf: imageURL) {
            var unlitMaterial = UnlitMaterial()
            var imageMaterial = UnlitMaterial()
            unlitMaterial.baseColor = MaterialColorParameter.texture(texture)
            picInBox2.model?.materials = [imageMaterial]
        }
    }
}
```

However, when I try to use my SwiftData model it doesn't work. I need to convert Data to a URL, and I am not able to do this. This is what I would like to use for my image texture, from my SwiftData model:

```swift
@Attribute(.externalStorage) var image: Data?
```

If I try to substitute `if let imageURL = item.image {` for the old `if let imageURL = model.imageURL {` in:

```swift
if let imageURL = model.imageURL {
    if let texture = try? TextureResource.load(contentsOf: imageURL) {
        var unlitMaterial = UnlitMaterial()
        var imageMaterial = UnlitMaterial()
        unlitMaterial.baseColor = MaterialColorParameter.texture(texture)
        picInBox2.model?.materials = [imageMaterial]
    }
}
```

it doesn't work. I get the error: `Cannot convert value of type 'Data' to expected argument type 'URL'`. How can I convert the type 'Data' to the expected argument type 'URL'? The original imageURL I am using here comes from the struct Model, where it's saved as a variable:

```swift
var imageURL: URL? = Bundle.main.url(forResource: "cat", withExtension: "png")
```

I am at my wit's end. Thank you for any pointers!
1 reply · 0 boosts · 918 views · Feb ’24
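For the question above, a minimal sketch (not from the thread) of one way around the Data-to-URL mismatch: skip the URL entirely and generate the texture from a CGImage decoded from the SwiftData bytes. `imageData` here stands in for the model's `image: Data?` attribute.

```swift
import RealityKit
import UIKit

// Hypothetical helper: `imageData` is the SwiftData model's image bytes.
func makeTexture(from imageData: Data) throws -> TextureResource {
    guard let cgImage = UIImage(data: imageData)?.cgImage else {
        throw NSError(domain: "TextureLoading", code: -1)
    }
    // Build the texture directly from the CGImage; no file URL is needed.
    return try TextureResource.generate(from: cgImage,
                                        options: .init(semantic: .color))
}
```

The returned TextureResource can then feed the same material setup the post already uses.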
Several Mapping SDKs from Unity throwing same error
I'm testing all of the existing mapping SDKs from Unity via the PolySpatial workflow to see if any of them work on the Vision Pro. The ArcGIS and Bing SDKs both play successfully in the Editor and build successfully from Unity, but they both hit the same errors when building in Xcode (captured in the attached screenshot). Is this a common error in Xcode? I can't find much on it. Thanks!
0 replies · 0 boosts · 479 views · Feb ’24
Several instances of 3D model with skinner, and duplication of weights/indices information
I have a human-like rigged 3D model in a DAE file. I want to programmatically build a scene with several instances of this model in different poses. I can extract the SCNSkinner and skeleton chain from the DAE file without problem. I have discovered that to have different poses, I need to clone the skeleton chain and clone the SCNSkinner as well, then modify the skeletons' positions. This works fine, and is done this way:

```swift
// Read the skinner from the DAE file
let skinnerNode = daeScene.rootNode.childNode(withName: "toto-base", recursively: true)! // skinner
let skeletonNode1 = skinnerNode.skinner!.skeleton!

// Adding the skinner node as a child of the skeleton node makes it easier to
// 1) clone the whole thing
// 2) add the whole thing to a scene
skeletonNode1.addChildNode(skinnerNode)

// Clone the first instance to have a second instance
var skeletonNode2 = skeletonNode1.clone()

// Position and move the first instance
skeletonNode1.position.x = -3
let skeletonNode1_rightLeg = skeletonNode1.childNode(withName: "RightLeg", recursively: true)!
skeletonNode1_rightLeg.eulerAngles.x = 0.6
scene.rootNode.addChildNode(skeletonNode1)

// Position and move the second instance
skeletonNode2.position.x = 3
let skeletonNode2_leftLeg = skeletonNode2.childNode(withName: "LeftLeg", recursively: true)!
skeletonNode2_leftLeg.eulerAngles.z = 1.3
scene.rootNode.addChildNode(skeletonNode2)
```

It seems the boneWeights and boneIndices sources are duplicated for each skinner, so if I have, let's say, 100 instances, I eat a huge amount of memory for something that is constant. Is there any way to avoid the duplication of the boneWeights and boneIndices?
0 replies · 1 boost · 614 views · Feb ’24
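One direction worth exploring for the question above (an untested sketch, not from the thread): SCNSkinner has an initializer that takes the bone-weight and bone-index geometry sources explicitly, so a clone's skinner can be rebuilt around the original sources instead of letting clone() duplicate them.

```swift
import SceneKit

// Untested sketch: build a clone's skinner by hand so every instance shares the same
// boneWeights/boneIndices sources. `original` is the skinner read from the DAE file;
// `clonedBones` is the cloned skeleton chain, in the same order as original.bones.
func sharedSkinner(from original: SCNSkinner, clonedBones: [SCNNode]) -> SCNSkinner {
    SCNSkinner(baseGeometry: original.baseGeometry,
               bones: clonedBones,
               boneInverseBindTransforms: original.boneInverseBindTransforms,
               boneWeights: original.boneWeights,   // reused, not copied
               boneIndices: original.boneIndices)   // reused, not copied
}

// e.g. after cloning:
// clonedSkinnedNode.skinner = sharedSkinner(from: skinnerNode.skinner!, clonedBones: bones2)
```

Whether SceneKit actually shares the underlying buffers at render time would still need to be verified with a memory report.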
Metal API on visionOS?
Is it possible to use the Metal API on Vision Pro? I noticed that MTKView is not recognized in my visionOS app, and I also noticed other forum posts from months ago saying that MTKView is not yet supported. If it is still not an option, when (if ever) will it be supported? I am also wondering about metal-cpp support, since my app involves integrating an existing C++ library with visionOS (see here: https://github.com/MinVR/MinVR). Is this possible?
3 replies · 1 boost · 1.9k views · Feb ’24
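For the question above: on visionOS, Metal rendering for immersive content goes through CompositorServices rather than MTKView. A rough skeleton follows (a sketch only; the render loop itself is omitted, and `startRenderLoop` is a hypothetical helper):

```swift
import SwiftUI
import CompositorServices

@main
struct MetalImmersiveApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "MetalSpace") {
            // CompositorLayer hands you a LayerRenderer; you drive your own
            // Metal command queue against the drawables it vends per frame.
            CompositorLayer { layerRenderer in
                startRenderLoop(layerRenderer)
            }
        }
    }
}

func startRenderLoop(_ layerRenderer: LayerRenderer) {
    // Spawn a render thread here and encode Metal work for each frame;
    // the frame/drawable cycle is covered in Apple's visionOS Metal documentation.
}
```

metal-cpp is a C++ wrapper over the same Metal framework, so the practical question becomes whether the surrounding C++ library can be bridged into a render loop like this.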
[visionOS] Can't import RealityKit/RealityKit.h for custom shader materials
Version details: Xcode Version 15.3 beta (15E5178i), visionOS 1.0 (21N301) SDK + visionOS 1.0 (21N305) Simulator (installed). I'm trying to make a ModelEntity with a CustomMaterial.GeometryModifier, for which I also created a Metal shader file. The shader file is extremely simple at this time:

```metal
#include <metal_stdlib>
#include <RealityKit/RealityKit.h>

using namespace metal;

[[visible]]
void ExpandGeometryModifier(realitykit::geometry_parameters params)
{
    // Nothing.
}
```

When trying to compile my project, I get the following error: `'RealityKit/RealityKit.h' file not found`. Is this not supported on visionOS?
2 replies · 0 boosts · 774 views · Feb ’24
RealityKit visualize the virtual depth texture from post-process callback
I am using RealityKit and the ARView PostProcessContext to get the sourceDepthTexture of the current virtual scene in RealityKit, using the .nonAR camera mode. My experience with Metal is limited to RealityKit GeometryModifier and SurfaceShader for CustomMaterial, but I am excited to learn more! Having studied the Underwater sample code, I have a general idea of how I want to explore the capabilities of a proper post-processing pipeline in my RealityKit project, but right now I just want to visualize this MTLTexture to see what the virtual depth of the scene looks like. Here's my current approach, trying to create a depth UIImage from the context sourceDepthTexture:

```swift
func postProcess(context: ARView.PostProcessContext) {
    let depthTexture = context.sourceDepthTexture
    var uiImage: UIImage? // or cg/ci

    if processPost {
        print("#P Process: Post Processs BLIT")
        // UIImage from MTLTexture
        uiImage = try createDepthUIImage(from: depthTexture)
        let blitEncoder = context.commandBuffer.makeBlitCommandEncoder()
        blitEncoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blitEncoder?.endEncoding()
        getPostProcessed()
    } else {
        print("#P No Process: Pass-Through")
        let blitEncoder = context.commandBuffer.makeBlitCommandEncoder()
        blitEncoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blitEncoder?.endEncoding()
    }
}

func createUIImage(from metalTexture: MTLTexture) throws -> UIImage {
    guard let device = MTLCreateSystemDefaultDevice() else {
        throw CIMError.noDefaultDevice
    }
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .depth32Float_stencil8,
        width: metalTexture.width,
        height: metalTexture.height,
        mipmapped: false)
    descriptor.usage = [.shaderWrite, .shaderRead]
    guard let texture = device.makeTexture(descriptor: descriptor) else {
        throw NSError(domain: "Failed to create Metal texture", code: -1, userInfo: nil)
    }

    // Blit!
    let commandQueue = device.makeCommandQueue()
    let commandBuffer = commandQueue?.makeCommandBuffer()
    let blitEncorder = commandBuffer?.makeBlitCommandEncoder()
    blitEncorder?.copy(from: metalTexture, to: texture)
    blitEncorder?.endEncoding()
    commandBuffer?.commit()

    // Raw pixel bytes
    let bytesPerRow = 4 * texture.width
    let dataSize = texture.height * bytesPerRow
    var bytes = [UInt8](repeating: 0, count: dataSize)
    //var depthData = [Float](repeating: 0, count: dataSize)
    bytes.withUnsafeMutableBytes { bytesPtr in
        texture.getBytes(
            bytesPtr.baseAddress!,
            bytesPerRow: bytesPerRow,
            from: .init(origin: .init(), size: .init(width: texture.width, height: texture.height, depth: 1)),
            mipmapLevel: 0
        )
    }

    // CGDataProvider from the raw bytes
    let dataProvider = CGDataProvider(data: Data(bytes: bytes, count: bytes.count) as CFData)

    // CGImage from the data provider
    let cgImage = CGImage(width: texture.width,
                          height: texture.height,
                          bitsPerComponent: 8,
                          bitsPerPixel: 32,
                          bytesPerRow: bytesPerRow,
                          space: CGColorSpaceCreateDeviceRGB(),
                          bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                          provider: dataProvider!,
                          decode: nil,
                          shouldInterpolate: true,
                          intent: .defaultIntent)

    // Return as UIImage
    return UIImage(cgImage: cgImage!)
}
```

I have hacked together the createUIImage function with generative aid and online research to provide some visual feedback, but it looks like I am converting the depth values incorrectly, or somehow tapping into the stencil component of the pixels in the texture. Either way I am out of my depth, and would love some help.
Ideally, I would like to produce a grayscale depth image, but really any guidance on how I can visualize the depth would be greatly appreciated. As you can see from the magnified view on the right, there are some artifacts or pixels that are processed differently from the core stencil. The empty background is transparent in the image as expected.
0 replies · 0 boosts · 559 views · Feb ’24
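Not from the thread, but a sketch of the normalization step the post is after, assuming the depth values have already been pulled out of the texture into a `[Float]` buffer (for a combined depth/stencil format that usually means extracting the depth plane into a plain `.r32Float` texture first, which this sketch does not show):

```swift
import CoreGraphics
import Foundation

// Turns a row-major buffer of depth values (one Float per pixel) into an
// 8-bit grayscale CGImage by normalizing into 0...255.
func grayscaleImage(from depths: [Float], width: Int, height: Int) -> CGImage? {
    guard let minD = depths.min(), let maxD = depths.max(), maxD > minD else { return nil }
    // Depending on whether the depth buffer is reversed, near/far may need flipping.
    let pixels: [UInt8] = depths.map { UInt8(255 * (($0 - minD) / (maxD - minD))) }
    guard let provider = CGDataProvider(data: Data(pixels) as CFData) else { return nil }
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: 8,
                   bitsPerPixel: 8,
                   bytesPerRow: width,
                   space: CGColorSpaceCreateDeviceGray(),
                   bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: .defaultIntent)
}
```

Reading 4 bytes per pixel of a depth/stencil texture as premultiplied RGBA, as the posted code does, reinterprets the raw depth bytes as color, which would explain the odd output described.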
Children of a dragged entity get left behind when moving slowly
Hey friends, I'm using a drag gesture to rotate a parent object that contains several child colliders. When I drag slowly, sometimes the child colliders don't rotate along with the parent. Any help would be appreciated, thanks!

```swift
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            let startLocation = value.convert(value.startLocation3D, from: .local, to: .scene)
            let currentLocation = value.convert(value.location3D, from: .local, to: .scene)
            let delta = currentLocation - startLocation
            let spinX = Double(delta.y)
            let spinY = Double(delta.x)
            let pitch = Transform(pitch: Float(spinX * -1)).matrix
            let roll = Transform(roll: Float(spinY * -1)).matrix
            value.entity.transform.matrix = roll * pitch
        })
```
1 reply · 0 boosts · 484 views · Feb ’24
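Not a confirmed fix for the lag described above, but a commonly used variant (a sketch; `dragStartOrientation` is a hypothetical @State you would add to the view) sets only the orientation, accumulated from where the entity was when the drag began, instead of overwriting the whole transform matrix:

```swift
// Assumes: @State private var dragStartOrientation: simd_quatf? = nil
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            if dragStartOrientation == nil {
                dragStartOrientation = value.entity.orientation(relativeTo: nil)
            }
            let start = value.convert(value.startLocation3D, from: .local, to: .scene)
            let current = value.convert(value.location3D, from: .local, to: .scene)
            let delta = current - start
            let pitch = simd_quatf(angle: -delta.y, axis: [1, 0, 0])
            let roll = simd_quatf(angle: -delta.x, axis: [0, 0, 1])
            // Orientation only: position/scale stay untouched, and children
            // inherit the rotation through the hierarchy.
            value.entity.setOrientation(roll * pitch * dragStartOrientation!, relativeTo: nil)
        }
        .onEnded { _ in dragStartOrientation = nil }
)
```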
How to integrate UIDevice rotation and creating a new UIBezierPath after rotation?
How to integrate UIDevice rotation and creating a new UIBezierPath after rotation? My challenge here is to successfully integrate UIDevice rotation and create a new UIBezierPath every time the UIDevice is rotated. (Please accept my apologies for this post's length, but I can't seem to avoid it.)

As a preamble, I have bounced back and forth between

```swift
NotificationCenter.default.addObserver(self,
    selector: #selector(rotated),
    name: UIDevice.orientationDidChangeNotification,
    object: nil)
```

called within my viewDidLoad() together with

```swift
@objc func rotated() {
}
```

and

```swift
override func viewWillLayoutSubviews() {
    // please see code below
}
```

My success was much better when I implemented viewWillLayoutSubviews(), versus rotated(), so let me provide detailed code just for viewWillLayoutSubviews(). I have concluded that every time I rotate the UIDevice, a new UIBezierPath needs to be generated because the positions and sizes of my various SKSpriteNodes change. I am definitely not saying that I have to create a new UIBezierPath with every rotation, just saying I think I have to.

Start of code:

```swift
// declared at the top of my `GameViewController`:
var myTrain: SKSpriteNode!
var savedTrainPosition: CGPoint?
var trackOffset = 60.0
var trackRect: CGRect!
var trainPath: UIBezierPath!
```

My UIBezierPath creation and SKAction.follow code is as follows:

```swift
// called with my setTrackPaths() – see way below
func createTrainPath() {
    // savedTrainPosition initially set within setTrackPaths()
    // and later reset when stopping + resuming moving myTrain
    // via stopFollowTrainPath()
    trackRect = CGRect(x: savedTrainPosition!.x,
                       y: savedTrainPosition!.y,
                       width: tracksWidth,
                       height: tracksHeight)
    trainPath = UIBezierPath(ovalIn: trackRect)
    trainPath = trainPath.reversing() // makes myTrain move CW
} // createTrainPath

func startFollowTrainPath() {
    let theSpeed = Double(5*thisSpeed)
    var trainAction = SKAction.follow(
        trainPath.cgPath,
        asOffset: false,
        orientToPath: true,
        speed: theSpeed)
    trainAction = SKAction.repeatForever(trainAction)
    createPivotNodeFor(myTrain)
    myTrain.run(trainAction, withKey: runTrainKey)
} // startFollowTrainPath

func stopFollowTrainPath() {
    guard myTrain == nil else {
        myTrain.removeAction(forKey: runTrainKey)
        savedTrainPosition = myTrain.position
        return
    }
} // stopFollowTrainPath
```

Here is the detailed viewWillLayoutSubviews I promised earlier:

```swift
override func viewWillLayoutSubviews() {
    super.viewWillLayoutSubviews()

    if (thisSceneName == "GameScene") {
        // code to pause moving game pieces
        setGamePieceParms()   // for GamePieces, e.g., trainWidth
        setTrackPaths()       // for trainPath
        reSizeAndPositionNodes()
        // code to resume moving game pieces
    } // if (thisSceneName == "GameScene")
} // viewWillLayoutSubviews

func setGamePieceParms() {
    if (thisSceneName == "GameScene") {
        roomScale = 1.0
        let roomRect = UIScreen.main.bounds
        roomWidth = roomRect.width
        roomHeight = roomRect.height
        roomPosX = 0.0
        roomPosY = 0.0

        tracksScale = 1.0
        tracksWidth = roomWidth - 4*trackOffset // inset from screen edge
        #if os(iOS)
        if UIDevice.current.orientation.isLandscape {
            tracksHeight = 0.30*roomHeight
        } else {
            tracksHeight = 0.38*roomHeight
        }
        #endif
        // center horizontally
        tracksPosX = roomPosX
        // flush with bottom of UIScreen
        let temp = roomPosY - roomHeight/2
        tracksPosY = temp + trackOffset + tracksHeight/2

        trainScale = 2.8
        trainWidth = 96.0*trainScale // original size = 96 x 110
        trainHeight = 110.0*trainScale
        trainPosX = roomPosX
        #if os(iOS)
        if UIDevice.current.orientation.isLandscape {
            trainPosY = temp + trackOffset + tracksHeight + 0.30*trainHeight
        } else {
            trainPosY = temp + trackOffset + tracksHeight + 0.20*trainHeight
        }
        #endif
    } // if (thisSceneName == "GameScene")
} // setGamePieceParms

// a work in progress
func setTrackPaths() {
    if (thisSceneName == "GameScene") {
        if (savedTrainPosition == nil) {
            savedTrainPosition = CGPoint(x: tracksPosX - tracksWidth/2, y: tracksPosY)
        } else {
            savedTrainPosition = CGPoint(x: tracksPosX - tracksWidth/2, y: tracksPosY)
        }
        createTrainPath()
    } // if (thisSceneName == "GameScene")
} // setTrackPaths

func reSizeAndPositionNodes() {
    myTracks.size = CGSize(width: tracksWidth, height: tracksHeight)
    myTracks.position = CGPoint(x: tracksPosX, y: tracksPosY)
    // more Nodes here ..
}
```

End of code.

My theory says that when I call setTrackPaths() with every UIDevice rotation, createTrainPath() is called. Nothing of visual significance happens as far as the UIBezierPath is concerned until I call startFollowTrainPath().

Bottom line: it is then that I see for sure that a new UIBezierPath has not been created, as it should have been when I called createTrainPath() after rotating the UIDevice. The "new" UIBezierPath is not new, but the old one. If you've made it this far through my long code, the question is: what do I need to do to make a new UIBezierPath that fits the resized and repositioned SKSpriteNode?
8 replies · 0 boosts · 1.1k views · Feb ’24
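One thing worth checking for the question above (a sketch, not a confirmed fix): SKAction.follow captures the CGPath at the moment the action is created, so a train still running the old action keeps following the old oval even after trainPath is reassigned. Recreating the path is not enough; the follow action has to be removed and re-run so it picks up the new path, for example:

```swift
// Hypothetical ordering inside viewWillLayoutSubviews(), reusing the post's own functions:
if (thisSceneName == "GameScene") {
    stopFollowTrainPath()      // removes the action keyed by runTrainKey
    setGamePieceParms()
    setTrackPaths()            // calls createTrainPath() -> new trainPath
    reSizeAndPositionNodes()
    startFollowTrainPath()     // builds a new SKAction.follow from the new cgPath
}
```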
Skyboxes for progressive views in Apple Vision Pro
I've added the Starfield image from Apple's World sample code to the Progressive immersive project template, and I've experimented with a few other images I had around. I have a few questions:
(1) Lighter shots look fairly pixelated. Does Apple recommend any minimum/maximum resolutions for images used for the giant sphere? (I noticed Starfield is 4096x4096.)
(2) I just put the other images in the 2x well for the image set. Should I put other images in their own 2x well no matter the DPI of the image?
(3) Apple's Starfield image is square, but skybox images I've used before tend to be much wider (with the top and bottom areas distorted). Is there a particular aspect ratio I should be using?
(4) In at least one case, I think the center of the image was rotated to the right by about 20 degrees. Is this expected? Could it have been an artifact of the image's size or aspect ratio?
1 reply · 1 boost · 646 views · Feb ’24
Unsupported method: -[MTLComputeCommandEncoder encodeStartWhile:offset:comparison:referenceValue:]
It appears that the Metal Debugging interface does not support this method; at least, the function hashing algorithm does not have a pattern for it in the symbol dictionary as presented. Where do we get updated C libraries and functions that sync with the things presented in the demo kits and samples that Apple puts in the user domain? Why does this stuff get out into the wild insufficiently tested? It seems that the demo kits made available to users should be included in the test domain used to verify new code releases. I came from a development environment where the 6-month release cycle involved automated execution of the test suite before anything went beta or anywhere else.
1 reply · 0 boosts · 712 views · Feb ’24
RealityKit Target Framerate
I'm porting a SceneKit app to RealityKit, eventually offering an AR experience there. I noticed that when I run it on my iPhone 15 Pro and iPad Pro with the 120Hz screen, the framerate seems to be limited to 60fps. Is there a way to increase the target framerate to 120 like I can with SceneKit? I'm setting up my arView like so:

```swift
@IBOutlet private var arView: ARView! {
    didSet {
        arView.cameraMode = .nonAR
        arView.debugOptions = [.showStatistics]
    }
}
```
0 replies · 0 boosts · 757 views · Feb ’24
Is it possible to change usdz objects inside a scene programmatically?
Let's say I've created a scene with 3 models inside, side by side. Now, upon user interaction, I'd like to change these models to another model (that is also in the same Reality Composer Pro project). Is that possible? How can one do that? One way I can think of is to just load all the individual models in RealityView and then toggle the opacity to show/hide the models, but this doesn't seem like the right way for performance/memory reasons. How do you swap usdz models in and out?
1 reply · 0 boosts · 693 views · Feb ’24
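For the question above, a sketch of the load-on-demand pattern (not from the thread; `realityKitContentBundle` is the bundle the default Reality Composer Pro package template exposes, and the entity/parameter names are assumptions):

```swift
import RealityKit
import RealityKitContent // Reality Composer Pro package; name is the template default

// Remove the model currently shown in a slot and load another entity from the
// same Reality Composer Pro project by name, keeping the slot's placement.
@MainActor
func swapModel(_ oldModel: Entity, forEntityNamed newName: String, under parent: Entity) async {
    let transform = oldModel.transform
    oldModel.removeFromParent()
    if let newModel = try? await Entity(named: newName, in: realityKitContentBundle) {
        newModel.transform = transform
        parent.addChild(newModel)
    }
}
```

Loading on demand like this avoids keeping every variant resident just to toggle its opacity.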
RealityView Attachments normal
I have a view attachment attached to a hand anchor. When the attachment is facing away, I don't want it to render. I might be missing something obvious, but I've made a System that runs on every render loop. In the update call I'm getting a reference to the attachment using components, and this is as far as I've gotten. I can't figure out how to get the normal of an entity I receive in the update function. My plan was to take the head anchor normal and compare it to the entity normal; if they are facing each other, I render the view attachment, otherwise not. Is there a simpler way? And if not, how do I get the normal of an entity?
0 replies · 0 boosts · 442 views · Feb ’24
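For the question above, a sketch of the comparison step (the names are assumptions, and whether -Z or +Z counts as "forward" depends on how the attachment is authored): rotate the entity's local facing axis by its world-space orientation and dot it with the direction to the head position.

```swift
import RealityKit
import simd

// Returns true when `attachment` roughly faces `headPosition` (both in world space).
// An entity has no single "normal"; this treats its local -Z axis as the facing direction.
func isFacingHead(_ attachment: Entity, headPosition: SIMD3<Float>) -> Bool {
    let worldOrientation = attachment.orientation(relativeTo: nil)
    let forward = worldOrientation.act(SIMD3<Float>(0, 0, -1))   // assumed facing axis
    let toHead = simd_normalize(headPosition - attachment.position(relativeTo: nil))
    return simd_dot(forward, toHead) > 0                         // > 0: pointed at the viewer
}
```

Inside the system's update, the result could then drive the attachment entity's isEnabled flag.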
jax-metal error jax.numpy.linalg.inv
Hi, I have an issue with jax.numpy.linalg.inv(a).

```python
import jax.numpy as jnp
import jax.numpy.linalg as jnpl

B = jnp.identity(2)
jnpl.inv(B)
```

This throws the following error:

```
XlaRuntimeError: UNKNOWN: /var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: error: failed to legalize operation 'mhlo.triangular_solve'
/var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: called from
/var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: see current operation: %120 = "mhlo.triangular_solve"(%42#4, %119) {left_side = true, lower = true, transpose_a = #mhlo<transpose NO_TRANSPOSE>, unit_diagonal = true} : (tensor<2x2xf32>, tensor<2x2xf32>) -> tensor<2x2xf32>
```

Any ideas what could be the issue or how to solve it?
2 replies · 0 boosts · 876 views · Feb ’24
Location of demo "World App"
Where's the Xcode project for the "World App" referenced in "Build spatial experiences with RealityKit"? At 3 minutes in, the World app is shown with a 2D window and seems to be the expected starting place for the 3-module series. I see the code snippets below the video, which seem to intend adjustments to the original project. I've searched a.. I found it by searching GitHub; maybe I'm missing an obvious link on the page. It is available here: https://developer.apple.com/documentation/visionos/world under the documentation page. Hope this helps someone.
0 replies · 0 boosts · 371 views · Feb ’24
FxPlug resolution change w/o scaling - FCPX AI/ML Upscale Effect via Motion?
Namaste! I'm putting together an FCPX effect that is supposed to increase the resolution with AI upscaling, but the only way to add resolution is by scaling. The problem is that scaling causes the video to clip. I want to be able to give a 480 video this "Resolution Upscale" effect and have it output a 720 or 1080 AI-upscaled video; however, neither FxPlug nor Motion effects allow such a thing. The FxPlug is always getting 640x480 input (correct) but only 640x480 output. What is the FxPlug code or Motion configuration/concept for upscaling the resolution without affecting the scale? Is there a way to do this in Motion/FxPlug? Scaling up with the FxPlug effect and then scaling down in a parent Motion group doesn't do anything. Setting the group's 2D Fixed Resolution doesn't output different dimensions; the debug output from the FxPlug continues saying the input and output are 640x480, even when the group is set to a fixed resolution of 1920x1080. Building a hierarchy of groups with different settings for 2D Fixed Resolution and 3D Flatten does not work either; in these instances, the debug output continues saying 640x480 for both input and output, so the plug-in isn't aware of the Fixed Resolution change. Does there need to be a new FxPlug property, via [properties:...], like "kFxPropertyKey_ResolutionChange", and an API for changing the dest image resolution (without changing the dest rect size)? How do we do this?
0 replies · 0 boosts · 591 views · Feb ’24