Hi,
I'm trying to use the ECS System API, and I'm noticing on hardware that update() is not called every frame, as it is in the Simulator and as described in the documentation.
To reproduce, simply create a System like so:
class MySystem: System {
    var count: Int = 0

    required init(scene: Scene) { }

    func update(context: SceneUpdateContext) {
        count += 1
        print("Update \(count)")
    }
}
Then, inside the RealityView, register the System with:
MySystem.registerSystem()
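For context, here is a minimal sketch of how I register it inside the RealityView (the root entity is just a placeholder):

RealityView { content in
    // Register the system before adding content (sketch of my setup).
    MySystem.registerSystem()

    let root = Entity()
    content.add(root)
}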
Notice that while update() is reliably called every frame in the Simulator, on hardware it runs for a few seconds and then stops. After that it is only called when I indirectly trigger a refresh, for example by moving a window or performing other visionOS actions analogous to the ones that would "invalidate" and redraw a window on other operating systems.
Thanks in advance,
-Rob.
Hi everyone,
I'm trying to implement matchmaking on visionOS using GameKit and Game Center.
I'm following the example project that was shared, but I get an error.
Error: The requested operation could not be completed because you are not signed in to iCloud..
I get this error as a result of matchmaking. I'm already signed in to iCloud in the Vision Pro Simulator. I've tried toggling every related setting off and on, but it didn't help.
I'm using the latest Xcode developer beta and visionOS beta 6.
Would you mind sharing any workaround?
Regards,
Melih
I tried to update my current game controller code to the new iOS 17.0 GCControllerLiveInput, but the touchpads of the PS4 and PS5 controllers are not listed as elements of the GCControllerLiveInput.
Were they moved somewhere else? They are not listed as a GCMouse, and I didn't find anything about this in the documentation or header files either...
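For reference, this is roughly how I'm enumerating the elements (a sketch; I just print whatever the live input reports):

import GameController

// Sketch: dump every element the live input exposes, to see whether the
// touchpad shows up under any name.
if let input = GCController.current?.input {
    for element in input.elements {
        print(element)   // hoping to see the touchpad listed here
    }
}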
I'm developing a 3D scanner app that runs on iPad.
I'm using AVCapturePhoto and PhotogrammetrySession.
My photo capture delegate looks like this:
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [.auxiliaryDepth: true, .properties: photo.metadata])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [.avDepthData: depthData])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
}
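For completeness, the capture is configured with depth delivery enabled, roughly like this (a sketch; photoOutput and photoCaptureDelegate are placeholders for my session objects):

// Sketch: depth data delivery is enabled on the output and on the per-capture settings.
if photoOutput.isDepthDataDeliverySupported {
    photoOutput.isDepthDataDeliveryEnabled = true
}

let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
settings.embedsDepthDataInPhoto = true
photoOutput.capturePhoto(with: settings, delegate: photoCaptureDelegate)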
But the PhotogrammetrySession emits warning messages:
Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
Sample 2 missing LiDAR point cloud!
... (and the same for every remaining sample up to 10)
The session creates a USDZ 3D model, but the scale is not correct.
I think the point cloud could help the PhotogrammetrySession find the right scale, but I don't know how to attach the point cloud.
I have been using MTKView to display CVPixelBuffers from the camera. I use many options to configure the color space of the MTKView/CAMetalLayer that may be needed to tone map content for the display (CAEDRMetadata, for instance). If I use AVSampleBufferDisplayLayer instead, however, there are not many configuration options for color matching. I believe AVSampleBufferDisplayLayer uses pixel buffer attachments to determine the native color space of the input image and does the tone mapping automatically. Does AVSampleBufferDisplayLayer have any limitations compared to MTKView, or can both be used without any compromise in functionality?
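For context, I tag the pixel buffers with their color information via attachments, which is what I understand AVSampleBufferDisplayLayer keys off of. A sketch of what I mean (the HDR10/PQ values are just an example):

import CoreVideo

// Sketch: tag a CVPixelBuffer so the display layer knows its native color space.
func tagAsHDR10(_ pixelBuffer: CVPixelBuffer) {
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey,
                          kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey,
                          kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ, .shouldPropagate)
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey,
                          kCVImageBufferYCbCrMatrix_ITU_R_2020, .shouldPropagate)
}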
https://developer.apple.com/documentation/realitykit/photogrammetrysession/request/modelfile(url:detail:geometry:)
If the path given is a file path with a .usdz extension, the model should be saved as .usdz; otherwise, if we provide a folder, it should be saved as OBJ. I tried it, but with no luck: right before saving it shows the folder it will save to, but after I click Done and check the folder, it's always empty.
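For reference, this is roughly how I'm making the request (a sketch; the input folder and output URL are placeholders):

// Sketch: requesting a .usdz directly, with the images folder as input.
let session = try PhotogrammetrySession(input: imagesFolderURL)
try session.process(requests: [.modelFile(url: outputModelURL, detail: .medium)])

Task {
    for try await output in session.outputs {
        if case .processingComplete = output {
            print("Finished; model should be at \(outputModelURL.path)")
        }
    }
}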
Hi! I'm having an issue creating a PortalComponent on visionOS.
I'm trying to anchor a portal to a wall or floor anchor, but the portal always appears with the opposite orientation to the anchor:
If I use a vertical anchor (wall) the portal appears horizontal on the scene
If I use a horizontal anchor (floor) the portal appears vertical on the scene
I've tested on Xcode
15.1.0 beta 3
15.1.0 beta 2
15.0 beta 8
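For reference, this is roughly how I'm building the portal (a sketch; wallAnchor stands in for my AnchorEntity, and the manual rotation at the end is my current workaround attempt):

// Sketch: the generated plane lies in the X-Z plane (horizontal), so for a
// wall anchor I rotate it up by 90 degrees around X myself.
let world = Entity()
world.components.set(WorldComponent())
// ... add the content that should live inside the portal to `world` ...

let portal = Entity()
portal.components.set(ModelComponent(mesh: .generatePlane(width: 1, depth: 1),
                                     materials: [PortalMaterial()]))
portal.components.set(PortalComponent(target: world))
portal.orientation = simd_quatf(angle: .pi / 2, axis: [1, 0, 0])

wallAnchor.addChild(portal)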
Any ideas? Thank you so much!
For better memory usage when working with MTLTextures (editing and displaying in render passes, compute shaders, etc.), is it possible to save the texture to the app's Documents folder and then use an UnsafeMutablePointer to access/modify the contents of the texture before displaying it in a render pass? And would this be performant (i.e., 60 fps)? That way the texture wouldn't be resident in memory all the time, but its contents could still be edited and displayed when needed.
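The direction I was imagining is memory-mapping the file and wrapping it in a buffer-backed texture, something like this sketch (device, textureFileURL, byteCount, width, height, and bytesPerRow are placeholders, and the alignment handling is untested):

import Foundation
import Metal

// Sketch: map a file in Documents and expose the same bytes to Metal without copying.
// makeBuffer(bytesNoCopy:) wants page-aligned storage, which mmap provides.
let fd = open(textureFileURL.path, O_RDWR)
let pageSize = Int(getpagesize())
let mappedLength = ((byteCount + pageSize - 1) / pageSize) * pageSize
let pointer = mmap(nil, mappedLength, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0)!

let buffer = device.makeBuffer(bytesNoCopy: pointer,
                               length: mappedLength,
                               options: .storageModeShared,
                               deallocator: { ptr, len in _ = munmap(ptr, len) })!

let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                          width: width,
                                                          height: height,
                                                          mipmapped: false)
descriptor.storageMode = .shared
descriptor.usage = [.shaderRead, .shaderWrite]
// bytesPerRow has to satisfy device.minimumLinearTextureAlignment(for: .rgba8Unorm).
let texture = buffer.makeTexture(descriptor: descriptor, offset: 0, bytesPerRow: bytesPerRow)!

// The CPU can still read/write the same bytes through the mapped pointer.
let pixels = pointer.bindMemory(to: UInt8.self, capacity: mappedLength)
pixels[0] = 255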
I have a metal compute kernel that crashes MTLCompiler when I try to load the .metallib on an iPhone SE running iOS 14.4. I get the following error:
MTLCompiler: Compilation failed with XPC_ERROR_CONNECTION_INTERRUPTED on 3 try
Unable to init metal: Compiler encountered an internal error
By selectively commenting out code I narrowed it down to the use of SIMD permute functions. These functions are supported by the iOS/Metal version running on the device, but not by the actual hardware (according to the metal features set table). I have 2 variants of this code, one that uses the SIMD permute functions and one that doesn't.
My thought was to check the GPU family in Swift, then set a function constant supportsSIMDPermute and use this to ensure the correct variation runs on the device, like this:
constant bool supportsSIMDPermute [[ function_constant(1) ]];

// kernel code
if (supportsSIMDPermute) {
    // variant of the code that uses SIMD permute
} else {
    // variant that uses slightly slower code that does not use SIMD permute
}
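For reference, this is roughly how I specialize the function on the Swift side (a sketch; the kernel name and metallib URL are placeholders, and the GPU-family check is my guess at how to gate it):

// Sketch: pick the variant by setting the function constant at pipeline build time.
var supportsSIMDPermute = device.supportsFamily(.apple6)   // family is a guess; check the feature set table
let constants = MTLFunctionConstantValues()
constants.setConstantValue(&supportsSIMDPermute, type: .bool, index: 1)

let library = try device.makeLibrary(URL: metallibURL)
let kernel = try library.makeFunction(name: "myKernel", constantValues: constants)
let pipeline = try device.makeComputePipelineState(function: kernel)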
The library compiles without complaints in Xcode, but then crashes the on-device compiler when loading the library. Is this expected behaviour?
Is there a way to debug Metal ray tracing intersection functions?
How is that done? I cannot see the intersection functions in the Xcode shader debugger.
In this WWDC session video, Apple suggests that any object that appears to emit light should cast its color onto nearby objects.
But while constructing a cinema immersive space with a VideoMaterial and some other entities, I found that the VideoMaterial is not emissive, and the nearby entities don't receive any light from the screen entity that my VideoMaterial is attached to.
What is the right way to achieve an effect similar to the TV app shown in the video?
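One thing I've considered is faking it with image-based lighting, something like this sketch (environment would be an EnvironmentResource loaded separately; screenEntity and nearbyEntity are placeholders):

// Sketch: attach an image-based light to the screen entity and make the
// nearby entity a receiver, instead of relying on VideoMaterial to emit light.
screenEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
nearbyEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: screenEntity))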
Hi everyone
When I try to load a model in binary .ply format, MDLAsset can't read the data as binary
(I tried debugging and checking the buffer, but I didn't see any data).
But when I convert the model to ASCII format, the model displays normally.
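For reference, this is roughly how I load and inspect the asset (a sketch; modelURL is a placeholder):

import ModelIO

// Sketch: load the .ply and look at the first mesh's vertex buffer.
let asset = MDLAsset(url: modelURL)
if let mesh = asset.object(at: 0) as? MDLMesh {
    print("vertex count:", mesh.vertexCount)
    print("buffer length:", mesh.vertexBuffers[0].length)   // comes back empty for the binary .ply
}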
If you know the reason, please tell me how to fix it. Thank you
I need to associate sound with the movement of a sprite. Movement can be the result of physics, not of an SKAction. While the object is sliding there should be a sliding sound for the whole time it slides, then a different sound when it bumps into a rock and goes up in the air. While the object is airborne there is no sound, until it falls again (a falling sound) and then slides down with a sliding sound. The sounds associated with the collisions (rock, ground, and so on) are straightforward and work fine, but I am having difficulty associating sound with movement.
The closest result I have is to check the velocity of the sprite's physics body every update cycle and play or stop the sound based on whether the velocity is greater than zero. I tried SKAction.playSoundFileNamed first; the sound kept going even when the object was not moving. I then tried adding an SKAudioNode with play and stop, with no better result. I finally tried using AVAudioPlayer with play and pause, which yielded the best results, but the sliding sound still played past the end of the slide.
What is the best way to do this?
My code for playing the sound is as follows:
var blockSliding = false
for block in gameBlocks {
    // Consider the block to be sliding if its speed exceeds a small threshold.
    let velocity = block.physicsBody?.velocity ?? .zero
    if abs(velocity.dx) + abs(velocity.dy) > 0.05 {
        blockSliding = true
        break
    }
}

if slideSound.isPlaying {
    if !blockSliding {
        slideSound.pause()
    }
} else {
    if blockSliding {
        slideSound.play()
    }
}
I set up slideSound earlier by loading the appropriate sound file into an AVAudioPlayer.
Hello,
I'm working on a game application, and I've been wondering how I can separate the GameKit leaderboards I use for testing purposes from the ones that would be "live" (i.e., only for users).
It was pointed out to me that I can attach leaderboards to an app version, but that makes me wonder about compatibility.
I'm working on the application in version 1.1.
My users are using the application in version 1.0 (with their leaderboards).
Then I'm releasing a version 1.2.
That would mean I need to attach all the leaderboards from version 1.0 to version 1.2, which to me is very tedious.
What would be the best practice with using leaderboards on multiple environments?
Thanks
Hello,
I was wondering if we could get the metersPerUnit metadata from a USDZ loaded as an Entity (or even as an MDLAsset) using RealityKit?
Thanks
We plan to release a new app version with Apple Game Center login and are wondering whether that will require any additional review or verification by Apple upon app submission.
In addition, I can add that we already added Game Center support to the provisioning profile and activated Game Center in App Store Connect a few app versions ago.
I am trying to control the orientation of a box in Scene Kit (iOS) using gestures. I am using the translation in x and y to update the x and y rotation of the SCNNode.
After a long search I have realised that x and y rotation will always lead to z rotation, thanks to this excellent post: https://gamedev.stackexchange.com/questions/136174/im-rotating-an-object-on-two-axes-so-why-does-it-keep-twisting-around-the-thir?newreg=130c66c673f848a7be2873bf675573a9
So I am trying to determine the z rotation this causes and then remove it from my object by applying the inverse quaternion. However, when I rotate the object 90 degrees around X and then 90 degrees around Y, it behaves very strangely.
It is almost behaving as if it is in gimbal lock, but I did not think that using quaternions in the way that I am would cause gimbal lock like this.
I am sure it is something I am missing, or perhaps I am not able to remove the z rotation in this way.
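In case it helps frame the question, this is the kind of world-axis accumulation I understood should avoid the twist (a sketch; translation comes from my pan gesture and boxNode is the SCNNode):

import SceneKit

// Sketch: accumulate each gesture delta as world-space rotations about X and Y,
// pre-multiplying so no roll (Z twist) is introduced by the composition.
let dx = Float(translation.x) * 0.01
let dy = Float(translation.y) * 0.01

let yaw   = simd_quatf(angle: dx, axis: SIMD3<Float>(0, 1, 0))
let pitch = simd_quatf(angle: dy, axis: SIMD3<Float>(1, 0, 0))

boxNode.simdOrientation = simd_normalize(yaw * pitch * boxNode.simdOrientation)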
Thanks!
I have added a video of the strange behaviour here: https://github.com/marcusraty/RotationExample/blob/main/Example.MP4
And the code example is here: https://github.com/marcusraty/RotationExample
My environment: Tensorflow: 2.14, tf-metal: 1.1, M3 Max
I am working on a GAN full of residual sums and concatenations. It trains correctly when using the CPU only. However, if I enable the GPU, it causes:
oc("mps_slice_1"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/d615290d-668b-11ee-9734-0697ca55970a/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":359:0)): error: 'mps.slice' op failed: length value 32 does not fit within the dimension size (33) with start value (32)
/AppleInternal/Library/BuildRoots/d615290d-668b-11ee-9734-0697ca55970a/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:2133: failed assertion `Error: MLIR pass manager failed'
Some customizations that I guess might be related to the error:
tf.bitwise.bitwise_xor, tf.concat, tf.pad in custom layers
numpy.random in train steps.
Another debugging hint I found is that the "32" is the number of channels of my model's conv layer, and it changes as I change the number of channels.
Does anyone know what is wrong? Thank you so much.
I did not know that you can also get GPTK when you're working on an Apple TV app. I'm getting back into game programming, so I booted up an app I installed on my Apple TV a few months ago, and the display showed GPTK, so I was able to see performance.
Drew
I know that CustomMaterial in RealityKit can update its texture by using a DrawableQueue, but in the new visionOS, CustomMaterial no longer works. How can I do the same thing? Can ShaderGraphMaterial do it? I can't find an example of how to do that. Looking forward to your reply, thank you!
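What I'm currently trying, in case it's the intended direction, is to keep using DrawableQueue but feed it through a TextureResource that a ShaderGraphMaterial samples (a sketch; the material path, parameter name, bundle, and the descriptor/textureResource/modelEntity variables are placeholders from my project):

// Sketch: back a TextureResource with a DrawableQueue and hand that texture
// to a ShaderGraphMaterial parameter authored in Reality Composer Pro.
let queue = try TextureResource.DrawableQueue(descriptor)
textureResource.replace(withDrawables: queue)

var material = try await ShaderGraphMaterial(named: "/Root/ScreenMaterial",
                                             from: "Scene.usda",
                                             in: realityKitContentBundle)
try material.setParameter(name: "ScreenTexture", value: .textureResource(textureResource))
modelEntity.model?.materials = [material]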