Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Remove iphone-performance-gaming-tier
Hello, I've been experimenting with the iphone-performance-gaming-tier Device Capability using TestFlight. I've decided I don't want the restriction, so I've just submitted a new build (with a new version number) without the value, but the build metadata still includes it. How do I remove it? Cheers
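For reference, a sketch of where that value typically lives, assuming it was declared via UIRequiredDeviceCapabilities in Info.plist (the usual home for device-capability restrictions); removing the entry, bumping the build number, and re-uploading is the expected path:

```xml
<!-- Info.plist (sketch; the arm64 entry is just an example of what may also be present) -->
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>arm64</string>
    <!-- remove this line so new builds stop declaring the capability: -->
    <string>iphone-performance-gaming-tier</string>
</array>
```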
1 reply · 0 boosts · 281 views · Oct ’24
SceneKit Animations Transition Abruptly on iOS 18 Device, but Smooth in Simulator
Hi Friends! I’m facing an issue with SceneKit. I’m developing a 3D mobile game. I have a character 3D model and several skeletal animations (CAAnimation). I import both the model and the animations from Maya in *.dae format. The character’s animations play continuously one after the other, with each new animation triggered randomly. The transitions between animations are made smooth by setting the fadeInDuration and fadeOutDuration properties. Here’s an example of the code:

```swift
import UIKit
import QuartzCore
import SceneKit

class TestAnimationController: UIViewController {

    var bodyNode: SCNNode?

    override func viewDidLoad() {
        super.viewDidLoad()

        let scnView = SCNView(frame: self.view.bounds)
        scnView.backgroundColor = .black // Set your desired background color
        scnView.autoresizingMask = [.flexibleWidth, .flexibleHeight]

        let scene = SCNScene(named: "art.scnassets/scene/Base_room/ROOM5.scn")!
        bodyNode = collada2SCNNode(filepath: "art.scnassets/female/girl_body_races.dae")!
        bodyNode?.renderingOrder = 10
        scene.rootNode.addChildNode(bodyNode!)

        playIdleAnimation()

        scnView.scene = scene         // Assign the scene to the SCNView
        self.view.addSubview(scnView) // Add the SCNView to the main view
    }

    // Loads a Collada (.dae) file and returns its first child node.
    func collada2SCNNode(filepath: String) -> SCNNode? {
        if let scene = SCNScene(named: filepath) {
            return scene.rootNode.childNodes[0]
        } else {
            return nil
        }
    }

    func playIdleAnimation() {
        let array = [
            "art.scnassets/female/animations/idle/girl_idle_4.dae",
            "art.scnassets/female/animations/idle/girl_idle1.dae",
            "art.scnassets/female/animations/idle/girl_idle2.dae",
            "art.scnassets/female/animations/idle/Girl_idle3.dae",
        ]
        // animationWithSceneNamed is a helper (defined elsewhere) that loads
        // the CAAnimation from a scene file.
        let animation = CAAnimation.animationWithSceneNamed(array.randomElement() ?? "")!
        self.setAnimationAdd(
            fadeInDuration: 1.0,
            fadeOutDuration: 1.0,
            keyTime: 0.99,
            animation,
            isLooped: false
        ) { [weak self] in
            guard let self = self else { return }
            self.playBoringAnimations()
        }
    }

    func playBoringAnimations() {
        let array = [
            "art.scnassets/female/animations/boring/girl_boring1.dae",
            "art.scnassets/female/animations/boring/girl_boring2.dae",
            "art.scnassets/female/animations/boring/girl_boring3.dae",
            "art.scnassets/female/animations/boring/girl_boring4.dae",
            "art.scnassets/female/animations/boring/girl_boring5.dae",
            "art.scnassets/female/animations/boring/girl_boring6.dae",
            "art.scnassets/female/animations/boring/girl_boring8.dae"
        ]
        let animation = CAAnimation.animationWithSceneNamed(array.randomElement() ?? "")!
        self.setAnimationAdd(
            fadeInDuration: 1.0,
            fadeOutDuration: 1.0,
            keyTime: 0.99,
            animation,
            isLooped: false
        ) { [weak self] in
            guard let self = self else { return }
            self.playIdleAnimation()
        }
    }

    func setAnimationAdd(fadeInDuration: CGFloat,
                         fadeOutDuration: CGFloat,
                         keyTime: CGFloat,
                         _ animation: CAAnimation,
                         isLooped: Bool,
                         completion: (() -> Void)?) {
        animation.fadeInDuration = fadeInDuration
        animation.fadeOutDuration = fadeOutDuration
        animation.repeatCount = isLooped ? .greatestFiniteMagnitude : 1
        // Fire the completion shortly before the animation ends so the next
        // one can blend in.
        animation.animationEvents = [
            SCNAnimationEvent(keyTime: keyTime) { _, _, _ in
                completion?()
            }
        ]
        bodyNode?.addAnimation(animation, forKey: "avatarAnimation")
    }
}
```

Everything worked perfectly until I updated to iOS 18. On a physical device, the animations now transition abruptly, without the smooth blending that was present in earlier iOS versions. The switch between them is very noticeable, as if the fadeInDuration and fadeOutDuration parameters are being ignored. However, in the iOS 18 simulator, the animations still transition smoothly as before.

Here are two example videos, iOS 17.5 and iOS 18:
https://youtube.com/shorts/jzoMRF4skAQ - iOS 17.5, smooth
https://youtube.com/shorts/VJXrZzO9wl0 - iOS 18, not smooth

I tried this code on iOS 17.5 and everything works fine. Does anyone have any ideas on how to fix this issue?
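One workaround worth trying, as a hedged sketch only (it assumes the iOS 18 regression is limited to the legacy CAAnimation fade path): wrap the loaded CAAnimation in an SCNAnimationPlayer and drive blending through blendInDuration/blendOutDuration instead of fadeInDuration/fadeOutDuration:

```swift
import SceneKit

// Sketch: play an animation through SCNAnimationPlayer, SceneKit's newer
// animation API, with explicit blend durations. Not a confirmed fix for the
// iOS 18 behavior, just an alternative code path to test.
func playBlended(_ caAnimation: CAAnimation, on node: SCNNode) {
    let animation = SCNAnimation(caAnimation: caAnimation)
    animation.blendInDuration = 1.0
    animation.blendOutDuration = 1.0
    let player = SCNAnimationPlayer(animation: animation)
    node.addAnimationPlayer(player, forKey: "avatarAnimation")
    player.play()
}
```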
0 replies · 0 boosts · 256 views · Oct ’24
0.5 zoom on RealityKit back camera
My app uses RealityKit with an ARView with world tracking and a scene-reconstruction mesh, and it is working well. As I understand it, the default camera selected by ARKit is the ultra-wide camera. However, in the Camera app there is a 0.5x option that increases the field of view further. Is there any way to enable this 0.5x option in code? Or is there any control over FOV when using RealityKit and a mesh? Thanks!
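A hedged sketch of the closest code-level control I know of (not a confirmed 0.5x equivalent): ARKit lets you pick among its supported video formats, and on some devices one of them is backed by the ultra-wide camera:

```swift
import ARKit

// Pick an ultra-wide-backed video format for world tracking, if one exists.
// Whether this matches the Camera app's 0.5x field of view is not guaranteed.
let configuration = ARWorldTrackingConfiguration()
if let ultraWide = ARWorldTrackingConfiguration.supportedVideoFormats.first(
    where: { $0.captureDeviceType == .builtInUltraWideCamera }) {
    configuration.videoFormat = ultraWide
}
arView.session.run(configuration) // "arView" is the ARView from the post
```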
2 replies · 0 boosts · 369 views · Oct ’24
Can SceneKit be used with Swift 6 concurrency?
I am trying to port SceneKit projects to Swift 6, and I just can't figure out how that's possible. I'm even starting to think SceneKit and Swift 6 concurrency just don't match, and SceneKit projects should - hopefully for the time being only - stick to Swift 5. The SCNSceneRendererDelegate methods are called on the SceneKit thread. If the delegate is a view controller:

```swift
class GameViewController: UIViewController {
    let aNode = SCNNode()

    func renderer(_ renderer: any SCNSceneRenderer, updateAtTime time: TimeInterval) {
        aNode.position.x = 10
    }
}
```

then the compiler generates the error "Main actor-isolated instance method 'renderer(_:updateAtTime:)' cannot be used to satisfy nonisolated protocol requirement", which is fully understandable. The compiler even tells you those methods can't be used for protocol conformance, unless:

the conformance is declared as @preconcurrency SCNSceneRendererDelegate, like this:

```swift
class GameViewController: UIViewController, @preconcurrency SCNSceneRendererDelegate {
```

But that just delays the check to runtime, and therefore the crash on the SceneKit thread happens at runtime... again, fully understandable.

or the delegate method is declared nonisolated, like this:

```swift
nonisolated func renderer(_ renderer: any SCNSceneRenderer, updateAtTime time: TimeInterval) {
    aNode.position.x = 10
}
```

which generates the compiler error "Main actor-isolated property 'position' can not be mutated from a nonisolated context". Again, fully understandable.

If the delegate is not a view controller but a nonisolated class, we also have the problem that SCNNode can't be used. Nearly 100% of the SCNSceneRendererDelegate implementations I've seen do use SCNNode or similar MainActor-bound types, because they are meant for that. So, where am I wrong? What is the solution for using SceneKit's SCNSceneRendererDelegate methods with full Swift 6 compilation? Is that even possible for now?
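For what it's worth, a hedged sketch of one pattern that compiles under Swift 6 (not an official answer, and it has a real cost: the mutation leaves the render loop and can land a frame late):

```swift
import SceneKit

// The delegate stays nonisolated, satisfying the protocol without
// @preconcurrency, and forwards node mutations to the main actor.
// SCNNode is main-actor isolated (per the compiler errors above), hence
// implicitly Sendable, so storing and capturing the reference is legal.
final class RendererDelegate: NSObject, SCNSceneRendererDelegate {
    let aNode: SCNNode // hypothetical: injected on the main actor at setup

    init(aNode: SCNNode) { self.aNode = aNode }

    nonisolated func renderer(_ renderer: any SCNSceneRenderer, updateAtTime time: TimeInterval) {
        Task { @MainActor [node = self.aNode] in
            node.position.x = 10
        }
    }
}
```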
3 replies · 0 boosts · 349 views · Oct ’24
Mac crashing
Hi, every time I try to run Minecraft Launcher, a bunch of code comes up. It ran fine about a month ago, but now I can't even open the app. It won't let me copy all of the code, but it does say this: "PC register does not match crashing frame (0x0 vs 0x10B338024)". My Mac is fully up to date, and I have deleted and reinstalled Minecraft numerous times, but it continues to display a large amount of code with nothing to fix it. Could someone help me fix this or recommend anything I can do?
1 reply · 0 boosts · 295 views · Oct ’24
Change zFar in RealityView
I am working on a RealityView on iOS 18 that needs to render objects farther away than 1,000 meters. My app is used outside in open areas. I am using RealityView with content.camera = .spatialTracking, and I have turned off occlusion, collisions, and plane detection with a simple scene understanding, like this:

```swift
let configuration = SpatialTrackingSession.Configuration(
    tracking: [.camera],
    sceneUnderstanding: [], // We don't want occlusions, collisions, etc.
    camera: .back)
let session = SpatialTrackingSession()
if let unavailable = await session.run(configuration) {
    print("unavailable \(unavailable)")
}
```

Is this possible with spatialTracking with RealityView, or with ARView? I have my RealityView working on visionOS inside an ImmersiveSpace. On visionOS I don't have the camera as a passthrough; it is a virtual scene with world tracking set up via the WorldTrackingProvider, and I can render objects farther away than 1,000 meters. I would like to do the same thing on iOS. I don't need to have the camera pass through, but I do need world tracking. I see that PerspectiveCameraComponent lets me set the near and far clipping planes, but I don't see how I can use that camera with world tracking.
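For reference, a hedged sketch of the PerspectiveCameraComponent mentioned at the end of the post; whether RealityKit honors this camera while .spatialTracking is driving the view is exactly the open question here:

```swift
import RealityKit

// A camera entity with a 10 km far plane. In a virtual-camera setup this
// controls clipping; under spatial tracking the AR camera may take over.
let cameraEntity = Entity()
cameraEntity.components.set(
    PerspectiveCameraComponent(near: 0.1, far: 10_000, fieldOfViewInDegrees: 60)
)
// content.add(cameraEntity) // inside the RealityView make closure
```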
0 replies · 0 boosts · 311 views · Oct ’24
Error "No Apple native plug-in libraries found." when importing freshly built plugins into Unity
I just built Core and GameKit without errors. The command line I used was the following:

```
python3 build.py -p Core GameKit -m macOS -c "Apple Development ID (redacted)"
```

When importing these packages into Unity, as soon as the editor refreshes I get these errors:

```
DllNotFoundException: GameKitWrapper assembly:<unknown assembly> type:<unknown type> member:(null)
DllNotFoundException: AppleCoreNativeMac assembly:<unknown assembly> type:<unknown type> member:(null)
[Apple Unity Plug-ins] No Apple native plug-in libraries found. Were the libraries built with the build script (build.py), xcodebuild, or Xcode? Was any output generated from the build script/tool? Were there any errors or issues? Please check to ensure that libraries (.a, .framework, or .bundle files) were created and saved to each plug-in's "NativeLibraries~" folder.
```

I made sure that the libraries were saved to this NativeLibraries~ folder. I made sure to update Xcode, Python, npm, and Unity to the latest versions, and got the most recent commit from GitHub (Sept. 17th), but the errors persist. The errors also occur right as the game starts in a built player. This also occurs in a blank Unity project, where I tested both 2022.3.17f and 2022.3.48f. The versions I was trying to import were Core 3.1.4 and GameKit 3.0.0; same issue with Core 3.1.3 and GameKit 2.2.2. The last versions I tried that work were Core 1.05 and GameKit 1.04. I haven't seen anyone else encounter this on these forums, so I'm guessing that I'm personally doing something wrong rather than there being an issue with the plug-ins themselves, but what am I doing wrong?
4 replies · 0 boosts · 505 views · Sep ’24
Particle burst with exact amount of particles
Hello dear forum, I need to emit an exact number of particles from an SCNParticleSystem in a single burst (this is for a UI effect). This worked perfectly for me in the scene editor by setting the birth rate to the desired amount and the emission duration to 0. Sadly, when I either load such a particle system from a scene or create it in code, the number of emitted particles is sometimes one less or one more. The first time I run it in the simulator it seems to work fine, but then the number of particles varies as described. Video: https://youtube.com/shorts/MRzqWBy2ypA?feature=share Does anybody know how to make this predictable? Thanks so much in advance, Seb
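For reference, a minimal sketch of the same setup in code, using the SCNParticleSystem properties the post describes (the node and the count of 12 are placeholders):

```swift
import SceneKit

// Burst setup as described above: birth rate = exact particle count,
// emission duration = 0, no looping. This reproduces the configuration;
// it does not by itself fix the off-by-one behavior reported here.
let burst = SCNParticleSystem()
burst.birthRate = 12          // the exact number of particles wanted
burst.emissionDuration = 0    // emit them all in a single burst
burst.loops = false
node.addParticleSystem(burst) // "node" is a placeholder SCNNode
```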
1 reply · 0 boosts · 419 views · Sep ’24
Having problems with the game porting toolkit
So I'm installing the toolkit, but during the process the terminal just gives me this:

```
Last 15 lines from /Users/quiqu/Library/Logs/Homebrew/game-porting-toolkit/01.configure:
--enable-win64
--with-gnutls
--with-freetype
--with-gstreamer
CC=/usr/local/opt/game-porting-toolkit-compiler/bin/clang
CXX=/usr/local/opt/game-porting-toolkit-compiler/bin/clang++
checking build system type... x86_64-apple-darwin23.5.0
checking host system type... x86_64-apple-darwin23.5.0
checking whether make sets $(MAKE)... yes
checking for gcc... /usr/local/opt/game-porting-toolkit-compiler/bin/clang
checking whether the C compiler works... no
configure: error: in `/private/tmp/game-porting-toolkit-20240928-16869-7g5o5t/wine64-build':
configure: error: C compiler cannot create executables
See `config.log' for more details
If reporting this issue please do so to (not Homebrew/brew or Homebrew/homebrew-core): apple/apple
quiqu@Quiques-Laptop ~ %
```

Please, if someone could help: I have a tournament this weekend and I need this to work.
2 replies · 3 boosts · 589 views · Sep ’24
Options to have MSAA in Tile-Based Deferred Renderer
Hi folks, I'm working on a tile-based deferred renderer, similar to this Apple example. I'm wondering how to add MSAA to the renderer, and I see two choices:

1. Copy the single-sampled texture at the end of the GBuffer/lighting render pass to a multisampled texture and resolve from that.
2. Make all render targets (GBuffer) multisampled and deal with sampling/resolving all intermediate textures as well as the final, combined texture.

Which is the proper approach, and are there any examples of how to implement it? Thanks!
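A hedged sketch of the render-pass plumbing behind option 2, assuming a sample count of 4; msaaColorTexture and drawable are placeholders for objects created elsewhere with matching sample counts:

```swift
import Metal

// Multisampled color attachment that resolves into the single-sampled
// drawable at the end of the pass. The pipeline state's rasterSampleCount
// must match the texture's sampleCount (4 here).
let pass = MTLRenderPassDescriptor()
pass.colorAttachments[0].texture = msaaColorTexture        // sampleCount == 4
pass.colorAttachments[0].resolveTexture = drawable.texture // single-sampled
pass.colorAttachments[0].loadAction = .clear
pass.colorAttachments[0].storeAction = .multisampleResolve // resolve on store
```

On Apple GPUs the intermediate GBuffer attachments can, as far as I know, stay memoryless even when multisampled, which keeps option 2 from ballooning bandwidth.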
0 replies · 0 boosts · 265 views · Sep ’24
Can't link metal-cpp to Modern Rendering With Metal sample
There is a sample project from Apple here. It has a scene of a city at night that you can move around in. It basically has two parts:

- application code written in what looks like Objective-C (I am more familiar with C++), which inherits from things like NSObject, MTKView, NSViewController, and so on; it processes input and all app-related and window-related stuff.
- rendering code that also looks like Objective-C.

By the way, both parts are mostly in .mm files (Objective-C++, AFAIK). The application part directly uses only one class from the rendering part: AAPLRenderer. I want to move the rendering part to C++ using metal-cpp. For that I need to link metal-cpp to the project. I did this successfully with blank projects several times before, using this tutorial. But with this sample project, Xcode can't find Foundation/Foundation.hpp (and the other metal-cpp headers). The error says:

```
Did not find header 'Foundation.hpp' in framework 'Foundation' (loaded from '/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/System/Library/Frameworks')
```

Please help.
1 reply · 0 boosts · 321 views · Sep ’24
Why is the speed of the Metal shading kernel so slow?
Hi, I have recently been writing Metal Shading Language (MSL) to parallelize algorithms and accelerate their speed. I created a simple example to show the acceleration result. Since Rust is used in our algorithm, I used metal-rs as the wrapper to execute the MSL kernels from the Rust side. In this example, I am computing the element-wise sum of two arrays, and the kernel looks like:

```metal
kernel void two_array_addition_2(
    constant uint* a [[buffer(0)]],
    constant uint* b [[buffer(1)]],
    device   uint* c [[buffer(2)]],
    uint idx [[thread_position_in_grid]]
) {
    c[idx] = a[idx] + b[idx];
}
```

In main.rs, you can see a function called execute_kernel(); this function has all it needs to execute the kernel in MSL (such as the command encoder, pipeline state, etc.):

```rust
use core::mem;
use metal::{Buffer, MTLSize};
use objc::rc::autoreleasepool;
use std::time::Instant;
use two_array_addition::abstractions::state::MetalState;

fn execute_kernel(
    name: &str,
    state: &MetalState,
    input_a: &Buffer,
    input_b: &Buffer,
    output_c: &Buffer,
) -> Vec<u32> {
    // assert!(input_a.len() == input_b.len() && input_a.len() == output_c.len());
    // let len = input_a.len() as u64;
    let len = input_a.length() as u64 / mem::size_of::<u32>() as u64;

    // 1. Init the MetalState
    //    - we inited it
    // 2. Set up the pipeline state
    let pipeline = state.setup_pipeline(name).unwrap();

    // 3. Allocate the buffers for A, B, and C
    //    - we allocated them outside of this function
    let mut result: &[u32] = &[];

    autoreleasepool(|| {
        // 4. Create the command buffer & command encoder
        let (command_buffer, command_encoder) = state.setup_command(
            &pipeline,
            Some(&[(0, input_a), (1, input_b), (2, output_c)]),
        );

        // 5. Dispatch the threadgroup count and threads per threadgroup
        let threadgroup_count = MTLSize::new((len + 256 - 1) / 256, 1, 1);
        let thread_per_threadgroup = MTLSize::new(256, 1, 1);
        // let grid_size = MTLSize::new(len, 1, 1);
        // let threadgroup_count = MTLSize::new(pipeline.max_total_threads_per_threadgroup(), 1, 1);

        command_encoder.dispatch_thread_groups(threadgroup_count, thread_per_threadgroup);
        command_encoder.end_encoding();
        command_buffer.commit();
        command_buffer.wait_until_completed();

        // 6. Copy the result back to the host
        let start = Instant::now();
        result = MetalState::retrieve_contents::<u32>(output_c);
        let duration = start.elapsed();
        println!("Duration for copying result back to host: {:?}", duration);
    });

    result.to_vec()
}
```

The performance result is kind of interesting to me. This is the output:

```
$ cargo run -r
This is expected to run for a while... please wait...
Generating input arrays...
Generating input arrays...
Generating output array...
Generating expected output...
Duration for allocating buffers: 2.015258s
Executing 1st kernel (1)...
Duration for copying result back to host: 5.75µs
Executing 1st kernel (2)...
Duration for copying result back to host: 542ns
Executing 2nd kernel (1)...
Duration for copying result back to host: 1µs
Executing 2nd kernel (2)...
Duration for copying result back to host: 458ns
Duration expected: 183.406167ms
Duration for 1st kernel (1): 1.894994875s
Duration for 1st kernel (2): 537.318208ms
Duration for 2nd kernel (1): 501.33275ms
Duration for 2nd kernel (2): 497.339916ms
You have successfully run the kernels!
```

Execution in the MSL kernel is slower than the expected (CPU) duration, even though the dataset is quite big ($2^{29}$ elements), and the first kernel execution takes much more time to launch. Is there any way to optimize the MSL in this case?
And in general, when you design an algorithm for parallelism, what are the main concerns? The machine I am using is an M1 Pro with a 14-core GPU and 16 GB of memory. Does anyone have an idea / explanation for why this happens? Thank you
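One measurement note, as a hedged sketch: the durations above include queueing and (on first launch) pipeline warm-up. Metal exposes GPU-side timestamps on the command buffer; the Swift form is shown below, and metal-rs wraps the same underlying API, assuming the bindings expose these properties:

```swift
import Metal

// Measure time spent on the GPU itself, separate from encoding, commit,
// and first-launch pipeline compilation overhead. "commandBuffer" is a
// placeholder for an MTLCommandBuffer created elsewhere.
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
let gpuSeconds = commandBuffer.gpuEndTime - commandBuffer.gpuStartTime
print("GPU time: \(gpuSeconds * 1_000) ms")
```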
1 reply · 0 boosts · 329 views · Sep ’24
GCControllerDidConnect notification not received in visionOS 2.0
I am unable to get visionOS 2.0 (simulator) to receive the GCControllerDidConnect notification, and thus am unable to set up support for a gamepad. However, it works in visionOS 1.2. For visionOS 2.0 I've tried adding:

- the .handlesGameControllerEvents(matching: .gamepad) attribute to the view
- Supports Controller User Interaction to Info.plist
- Supported game controller types -> Extended Gamepad to Info.plist

...but the notification still doesn't fire. It does when the code is run on the visionOS 1.2 simulator; both simulators have the Send Game Controller To Device option enabled. Here is the example code. It's based on the Xcode project template; the only files updated were ImmersiveView.swift and Info.plist, as detailed above:

```swift
import SwiftUI
import GameController
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }
            NotificationCenter.default.addObserver(
                forName: NSNotification.Name.GCControllerDidConnect,
                object: nil,
                queue: nil) { _ in
                print("Handling GCControllerDidConnect notification")
            }
        }
        .modify {
            if #available(visionOS 2.0, *) {
                $0.handlesGameControllerEvents(matching: .gamepad)
            } else {
                $0
            }
        }
    }
}

extension View {
    func modify<T: View>(@ViewBuilder _ modifier: (Self) -> T) -> some View {
        return modifier(self)
    }
}
```
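One hedged thing to check (not a confirmed fix): GCControllerDidConnect only fires for connections made after the observer is registered, so a controller the simulator attached earlier never triggers it. Polling the already-connected list alongside the observer costs little:

```swift
import GameController

// Controllers that were already connected before the observer was added
// never post GCControllerDidConnect; enumerate them explicitly.
for controller in GCController.controllers() {
    print("Already connected: \(controller.vendorName ?? "unknown controller")")
}
```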
1 reply · 0 boosts · 339 views · Sep ’24
MTKTextureLoader loading texture error on visionOS 2.0
Hello everyone. I got a texture loading error on visionOS 2.0:

```
Can't create texture (Error Domain=MTKTextureLoaderErrorDomain Code=0 "Pixel format(MTLPixelFormatInvalid) is not valid on this device" UserInfo={NSLocalizedDescription=Pixel format(MTLPixelFormatInvalid) is not valid on this device, MTKTextureLoaderErrorKey=Pixel format(MTLPixelFormatInvalid) is not valid on this device})
```

But this texture loads correctly on visionOS 1.3. I don't know what happened between visionOS 1.3 and visionOS 2.0. The texture is a KTX file that stores a cubemap encoded as ASTC 6x6 HDR, and the KTX texture has a glInternalFormat of GL_COMPRESSED_RGBA_ASTC_6x6. I wonder if visionOS 2.0 no longer supports the ASTC 6x6 HDR cubemap format, or if there is something wrong with my assets.
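A hedged way to narrow this down (assumption: ASTC HDR support is tied to the Apple6-or-newer GPU family, per Metal's feature tables): query the device at runtime before relying on the format:

```swift
import Metal

// If this prints false, the device/OS combination rejects ASTC HDR formats,
// which would explain MTLPixelFormatInvalid for an ASTC 6x6 HDR cubemap.
if let device = MTLCreateSystemDefaultDevice() {
    print("Apple6+ GPU family (ASTC HDR capable): \(device.supportsFamily(.apple6))")
}
```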
1 reply · 0 boosts · 270 views · Sep ’24
What triggers Game Mode?
What are the specific characteristics that trigger Game Mode in an iOS game? I have several casual SpriteKit games in the App Store, but only one of them triggers Game Mode. What does GCSupportsGameMode do when set to true? Will it trigger Game Mode, or will the OS still decide by itself?
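For reference, a sketch of the Info.plist entry in question (the key name comes from the post itself; whether it forces Game Mode rather than merely declaring support for it is exactly what is being asked):

```xml
<!-- Info.plist: declare Game Mode support -->
<key>GCSupportsGameMode</key>
<true/>
```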
2 replies · 0 boosts · 453 views · Sep ’24
USDZ in Keynote
Apple's own USDZ files from their website do not display properly in OPReview or in Keynote when imported. They do display properly in Reality Converter, but with this error:

```
Invalid USD shader node in USD file
Shader nodes must have “id” as the implementationSource, with id values that begin with “Usd”. Also, shader inputs with connections must each have a single, valid connection source.
```

I tried importing other models from external sources and they work without any issue at all. Is there any potential fix or workaround for this? Thanks in advance.
1 reply · 0 boosts · 303 views · Sep ’24