Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

All subtopics

What happened to WWDC 2016 session 608?
I know I watched this session, but it is nowhere to be found on Apple's site or in the Developer app, nor does the Wayback Machine have it. http://developer.apple.com/wwdc16/608
Graphics and Games #WWDC16: What's New in GameplayKit, Session 608. Bruno Sommer, Sri Nair, and Michael Brennan, Game Technologies Engineers.
1 reply · 1 boost · 594 views · Feb ’24
Frame not rendered, too many frames in flight.
On startup I'm getting a "We reached more than 3 frames in flight. That's too many. Did you forget to call cp_frame_end_submission()?" error despite cp_frame_end_submission() being called when needed. Nothing is rendered in the 1 frame that does go through. Is there something I'm missing that would cause cp_frame_end_submission to not register?
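For reference, the documented frame lifecycle (shown here with the Swift LayerRenderer API, which wraps the same cp_frame_* calls) pairs each submission like this; this is a simplified sketch, not my actual renderer:

```swift
// Simplified sketch of the Compositor Services frame loop; the real
// per-frame update and encoding work is omitted.
func renderLoop(_ layerRenderer: LayerRenderer) {
    while layerRenderer.state == .running {
        guard let frame = layerRenderer.queryNextFrame() else { continue }

        frame.startUpdate()
        // ... per-frame app state updates ...
        frame.endUpdate()

        guard let timing = frame.predictTiming() else { continue }
        LayerRenderer.Clock().wait(until: timing.optimalInputTime)

        frame.startSubmission()
        // ... encode with the frame's drawable and commit ...
        frame.endSubmission() // must balance every startSubmission()
    }
}
```

One thing I'm checking on my side: if any code path returns between startSubmission() and endSubmission() (for example when queryDrawable() returns nil), that frame would stay "in flight" and could accumulate toward this error.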
0 replies · 1 boost · 356 views · Feb ’24
Help me understand the crash report
Help me understand this crash report; it started happening only after the last update.

```
Translated Report (Full Report Below)
Process:             dota2 [7353]
Path:                /Users/USER/Library/Application Support/Steam/*/dota2.app/Contents/MacOS/dota2
Identifier:          com.valvesoftware.dota2
Version:             1.0.0
Code Type:           X86-64 (Translated)
Parent Process:      launchd [1]
User ID:             501
Date/Time:           2024-02-18 18:00:45.9766 -0500
OS Version:          macOS 14.3.1 (23D60)
Report Version:      12
Anonymous UUID:      0F5E4D0D-9839-DF78-5C28-93F6D26A5763
Sleep/Wake UUID:     52D18CB1-ADD8-4A75-B6A1-C0CF4CF2A306
Time Awake Since Boot: 85000 seconds
Time Since Wake:     1722 seconds
System Integrity Protection: enabled
Notes:               PC register does not match crashing frame (0x0 vs 0x1032D1C08)
Crashed Thread:      0  MainThrd  Dispatch queue: com.apple.main-thread
Exception Type:      EXC_BAD_ACCESS (SIGSEGV)
Exception Codes:     KERN_INVALID_ADDRESS at 0x0000441f0f660002
Exception Codes:     0x0000000000000001, 0x0000441f0f660002
Termination Reason:  Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [7353]

VM Region Info: 0x441f0f660002 is not in any region.
    Bytes after previous region: 48357375344643
    Bytes before following region: 65536781844478
      REGION TYPE     START - END                 [ VSIZE] PRT/MAX SHRMOD  REGION DETAIL
      Memory Tag 255  1823fb340000-1823fb380000   [  256K] rw-/rwx SM=PRV
--->  GAP OF 0x67960cc80000 BYTES
      MALLOC_MEDIUM   7fba08000000-7fba10000000   [128.0M] rw-/rwx SM=PRV

Error Formulating Crash Report: PC register does not match crashing frame (0x0 vs 0x1032D1C08)
Kernel Triage: VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
```
1 reply · 0 boosts · 1.4k views · Feb ’24
VisionOS VideoMaterial on 3D Mesh
I'm trying to get a video material to work on an imported 3D asset, a USDC file. There's actually an example of this in a WWDC video from Apple: you can see it running on the flag of an airplane at 10:34 in that video. But there's no sample code for it, and there are no other examples on the internet. Does anybody know how to do this? https://developer.apple.com/documentation/realitykit/videomaterial
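This is the general approach I'd expect to work, sketched below; "Flag" is a hypothetical placeholder for the named mesh part in your asset's hierarchy:

```swift
import AVFoundation
import RealityKit

// Sketch: apply a VideoMaterial to one named mesh part of an imported USDC
// entity. The entity/part names here are hypothetical placeholders.
func applyVideoMaterial(to root: Entity, videoURL: URL) {
    let player = AVPlayer(url: videoURL)
    let material = VideoMaterial(avPlayer: player)

    if let flag = root.findEntity(named: "Flag"),
       var model = flag.components[ModelComponent.self] {
        model.materials = [material] // replace that part's materials
        flag.components.set(model)
    }
    player.play()
}
```

But I haven't confirmed this is how the WWDC demo does it, so corrections are welcome.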
1 reply · 0 boosts · 425 views · Feb ’24
Portals and ImmersiveSpace?
I've added a simple visionOS portal to an app's initial WindowGroup (a window with an attached portal is all that is displayed), but I've had trouble adding a portal to an ImmersiveSpace. For example, using the boilerplate code that Xcode creates for a mixed spatial experience, I'd like to toggle on and off an ImmersiveSpace that has a portal in it. So far, the portal isn't showing up. Is it possible to add a portal to an ImmersiveSpace? Are there any restrictions on where portals can be added?
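For context, this is the portal setup that works in my WindowGroup, condensed into a sketch; the question is whether the same entity tree renders when added to the ImmersiveSpace's RealityView content:

```swift
import RealityKit

// Condensed sketch of a standard portal: a WorldComponent entity holding the
// portal's contents, plus a plane carrying PortalMaterial and PortalComponent.
func makePortal() -> Entity {
    let world = Entity()
    world.components.set(WorldComponent())
    // ... add the content to show inside the portal as children of `world` ...

    let portalPlane = ModelEntity(mesh: .generatePlane(width: 1, height: 1),
                                  materials: [PortalMaterial()])
    portalPlane.components.set(PortalComponent(target: world))

    let root = Entity()
    root.addChild(world)
    root.addChild(portalPlane)
    return root
}
```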
1 reply · 0 boosts · 552 views · Feb ’24
Occlusion material and progressive ImmersiveSpace
In a progressive ImmersiveSpace, I created an object (a cylinder) and applied an OcclusionMaterial to it. It does hide my virtual content behind it, but it does not show my room; the cylinder just appears black. In a progressive (or full?) ImmersiveSpace, is it possible to apply an occlusion material (or something else) so I can see the room behind the virtual content? Basically, I want to punch a hole through the virtual content and see the room behind it. As a practical example: imagine being in a progressive ImmersiveSpace but having a plane with an occlusion mesh above your Apple Magic Keyboard, so you can still see your keyboard. Is this possible?
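For reference, the setup is essentially this (condensed sketch of what I described above):

```swift
import RealityKit

// Condensed sketch: a cylinder whose only material is an OcclusionMaterial,
// added to the progressive ImmersiveSpace's RealityView content.
let cylinder = ModelEntity(
    mesh: .generateCylinder(height: 0.3, radius: 0.1),
    materials: [OcclusionMaterial()]
)
```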
0 replies · 0 boosts · 481 views · Feb ’24
Is it possible to have a custom light blend with environment light in RealityKit?
I was able to add a spotlight effect to my entities using ImageBasedLightComponent and the sample code. However, I noticed that whenever you set an ImageBasedLightComponent, the environment lighting is completely turned off. Is it possible to merge them somehow? Imagine you have a toy in the real world and you shine a flashlight on it; the environment light should still have an effect, right?
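The setup in question is roughly this sketch, where "Spotlight" stands in for my actual light resource in the app bundle:

```swift
import RealityKit

// Sketch of the spotlight IBL setup from the sample code. "Spotlight" is a
// placeholder EnvironmentResource name; once this component is set, the
// system environment lighting on the entity appears to switch off entirely.
let ibl = try await EnvironmentResource(named: "Spotlight")
entity.components.set(ImageBasedLightComponent(source: .single(ibl)))
entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
```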
0 replies · 0 boosts · 492 views · Feb ’24
Game Center authenticateHandler on visionOS
When authenticating a player using authenticateHandler:

```swift
GKLocalPlayer.local.authenticateHandler = { viewController, error in ... }
```

the completion handler is only called if the player is already logged in. If the player is not logged in, the authentication window appears, but the completion handler is never called. If I have content in a volumetric window that obscures the login window (which appears at a slight Z increase from the parent window), what can I do? If the completion handler were being called I could adjust my view, but it never gets called if the user is not already logged in. https://developer.apple.com/documentation/gamekit/authenticating_a_player Thanks.
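For completeness, this is the documented pattern I'm following, sketched out; `presentingViewController` is a hypothetical stand-in for whatever view controller is active:

```swift
import GameKit

// Sketch of the documented authentication flow. `presentingViewController`
// is a hypothetical reference, not a real GameKit API.
GKLocalPlayer.local.authenticateHandler = { viewController, error in
    if let viewController {
        // Present the system sign-in UI; on visionOS it appears as a window
        // slightly in front of the presenting scene.
        presentingViewController?.present(viewController, animated: true)
        return
    }
    if let error {
        print("Game Center authentication failed: \(error.localizedDescription)")
        return
    }
    // Signed in: this is where I would reposition the volumetric content,
    // but on visionOS this point is never reached for logged-out players.
}
```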
1 reply · 1 boost · 562 views · Feb ’24
How can Picker or ColorPicker be used in volumetric scenes in visionOS?
Hi, my app has a volumetric window displaying some 3D content for the user. I would like the user to be able to control the color of the material using a color picker displayed below the model in the same window, but unfortunately neither ColorPicker nor Picker is functional in volumetric scenes. Attempting to use them crashes the app with NSInternalInconsistencyException: "Presentations are not permitted within volumetric window scenes." This seems rather limiting. Is there a way either of these components can be utilized? I could build a separate "control panel" window, but it would not be attached to the model window, and it would get confusing if the user has multiple 3D windows open. Thank you.
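The best workaround I've come up with so far is replacing ColorPicker with preset swatches, which avoids triggering any presentation (hypothetical sketch below), but I'd much prefer the real picker:

```swift
import SwiftUI

// Hypothetical workaround sketch: a row of preset swatch buttons instead of
// ColorPicker, so nothing is presented inside the volumetric scene.
struct SwatchPicker: View {
    @Binding var selection: Color
    private let palette: [Color] = [.red, .orange, .yellow, .green, .blue, .purple]

    var body: some View {
        HStack {
            ForEach(palette, id: \.self) { color in
                Button { selection = color } label: {
                    Circle().fill(color).frame(width: 32, height: 32)
                }
                .buttonStyle(.plain)
            }
        }
    }
}
```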
3 replies · 2 boosts · 1k views · Feb ’24
Using video chat instead of voice chat in an online game?
Hello, I am building an app that requires players to join an online game session and have a video chat, with both of their front cameras turned on. I know that GameKit supports voice chat, but I could not find any source for video chat. Is it possible to set up a video chat in an online game session? What libraries with support for implementing video chat can I use (AVFoundation)? Your assistance is greatly appreciated.
2 replies · 0 boosts · 896 views · Feb ’24
PhotogrammetrySession chops off feet
We have a custom photo booth for taking photos of people for use with photogrammetry: the usual vertical cylinder of cameras with the human subject standing in the middle. We've found that the subject's lower legs are often missing, particularly if the subject is wearing dark pants. The API for PhotogrammetrySession is really very limited, but we've tried all the combinations of detail, sensitivity, and object masking we can think of; nothing results in a reliable scan. Personally I think this is related to the automatic isolation of the subject rather than the photogrammetry itself. Often we get just the person, perfectly modelled. Occasionally we get everything the cameras can see, including the booth itself and the room it's in! But sometimes we get this footless result. Is there anything we can try to improve the situation?
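For reference, the configuration space we've been exploring looks like this; `imagesFolderURL` is a placeholder for our capture folder:

```swift
import RealityKit

// Sketch of the PhotogrammetrySession configuration we've been varying.
// Disabling object masking keeps the surroundings (we can crop afterwards),
// which is one hedge against the subject-isolation step dropping the legs.
var configuration = PhotogrammetrySession.Configuration()
configuration.isObjectMaskingEnabled = false
configuration.featureSensitivity = .high   // for low-contrast dark pants
configuration.sampleOrdering = .unordered
let session = try PhotogrammetrySession(input: imagesFolderURL,
                                        configuration: configuration)
```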
0 replies · 0 boosts · 292 views · Feb ’24
How to use the video playing in an AVPlayerViewController as a light source
I want to implement an immersive environment similar to Apple TV's Cinema environment for the video that plays in my app. Currently I want to use an AVPlayerViewController so that I don't have to build a control view or deal with aspect ratios (which I would have to do if I used VideoMaterial). To do this, it looks like I'll need to use the imagery from the video stream itself as the image for an ImageBasedLightComponent, but the API for that class seems to restrict its input to an EnvironmentResource, which appears to be meant for an equirectangular still image that has to be part of the app bundle. Does anyone know how to achieve this effect, where the "light" from the video being played in an AVPlayerViewController can be cast on 3D objects in the RealityKit scene? Is Apple TV doing something wild like combining an AVPlayerViewController with a VideoMaterial that is layered onto the objects in the scene to simulate a light source? Thanks in advance!
0 replies · 0 boosts · 589 views · Feb ’24
SCNView and SKView showing different colors
An SCNNode is created and used in either an SCNView or an SKView, with SceneKit and SpriteKit at their default values. The SCNView has an SCNScene whose rootNode is the SCNNode. The SKView has an SKScene containing an SK3DNode, whose SCNScene's rootNode is the same SCNNode. There is no other code changing or adding values. Why are the colors in the SCNView less vibrant than the colors in the SKView? Is there a default I can change to make them equivalent, or another value to add? I have tried changing the default SCNMaterial but only succeeded in making the image black or dark. Any help is appreciated.
0 replies · 0 boosts · 658 views · Feb ’24
Poor precision with fract in MSL fast-math mode
Here is an example of fragment shader code (rendering a cube with texCoord in [0, 1]):

```metal
colorSample.x = in.texCoord.x;
```

which produces one result (screenshot omitted). However, if I make a small change to the code like this:

```metal
colorSample.x = fract(ceil(0.1 + in.texCoord.x * 0.8) * 1000000) + in.texCoord.x;
```

then it produces a visibly different result (screenshot omitted). If I disable fast-math in the second case, it produces the same image as the first. It seems that in fast-math mode, a large argument to fract() affects the precision of the other operands in the same expression. Is this a bug in fast-math mode? How should I work around this problem?
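The only workaround I've found so far is disabling fast-math for the whole library: either via the MTL_FAST_MATH build setting for precompiled shaders, or, for runtime compilation, roughly like this sketch:

```swift
import Metal

// Sketch: compile the shader source without fast-math so fract() keeps full
// precision. `shaderSource` is a placeholder for the actual MSL source.
let shaderSource = "/* MSL fragment shader source here */"
let device = MTLCreateSystemDefaultDevice()!
let options = MTLCompileOptions()
options.fastMathEnabled = false
let library = try device.makeLibrary(source: shaderSource, options: options)
```

But that gives up fast-math everywhere, which I'd rather avoid for one expression.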
2 replies · 0 boosts · 364 views · Feb ’24
How to use drag gestures on objects with inverted normals?
I want to build a panorama sphere around the user. The idea is that users can interact with this panorama, i.e. pan it around and select markers placed on it, like on a map. So I set up a sphere that works like a skybox and inverted its normals, which makes the material face inward, using this code I found online:

```swift
import Combine
import Foundation
import RealityKit
import SwiftUI

extension Entity {
    func addSkybox(for skybox: Skybox) {
        let subscription = TextureResource
            .loadAsync(named: skybox.imageName)
            .sink(receiveCompletion: { completion in
                switch completion {
                case .finished:
                    break
                case let .failure(error):
                    assertionFailure("\(error)")
                }
            }, receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                let sphere = ModelComponent(mesh: .generateSphere(radius: 5),
                                            materials: [material])
                self.components.set(sphere)
                /// flip sphere inside out so the texture is inside
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, 1.0, 0.0)
            })
        components.set(Entity.SubscriptionComponent(subscription: subscription))
    }

    struct SubscriptionComponent: Component {
        var subscription: AnyCancellable
    }
}
```

This works fine and looks awesome. However, I can't get a gesture to work on it. If the sphere is "normally" oriented, i.e. the user drags it "from the outside", I can do it like this:

```swift
import RealityKit
import SwiftUI

struct ImmersiveMap: View {
    @State private var rotationAngle: Float = 0.0

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            rootEntity.addSkybox(for: .worldmap)
            rootEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
            rootEntity.generateCollisionShapes(recursive: true)
            rootEntity.components.set(InputTargetComponent())
            content.add(rootEntity)
        }
        .gesture(DragGesture().targetedToAnyEntity().onChanged({ _ in
            log("drag gesture")
        }))
    }
}
```

But if the user drags it from the inside (i.e. with the negative x scale in place), I get no drag events. Is there a way to achieve this?
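One idea I've been considering (an untested sketch): keep the negative scale on the visual sphere, but hang the collision and input components on a separate child entity with positive scale, so hit-testing sees a regular sphere:

```swift
// Untested sketch: a positively-scaled proxy entity carries collision and
// input, while the visual sphere keeps its inside-out negative scale.
let hitProxy = Entity()
hitProxy.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
hitProxy.components.set(InputTargetComponent())
rootEntity.addChild(hitProxy)
```

I don't know whether hit-testing from inside a collision sphere works either, so I'd welcome confirmation before building on this.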
1 reply · 0 boosts · 422 views · Feb ’24
PortalComponent – allow world content to peek out
Hello, I've been tinkering with PortalComponent on visionOS a bit but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow contents of the portal to peek out – similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?). I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property). But so far I haven't been able to achieve a perceptible visual difference with it. If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this. Thanks for any hints!
4 replies · 0 boosts · 1k views · Feb ’24
Unable to draw textures on SCNGeometry created from ARKit ARFaceAnchor points
In the code below I extract face mesh vertices from ARKit face anchors and create a custom face mesh using SceneKit's SCNGeometry. This lets me stretch face mesh vertices as required. The problem I'm now facing is this: I am trying to apply a lipstick texture, which is an SCNMaterial. Although ARSCNFaceGeometry lets me apply different textures through SCNMaterial and SCNNode, I am not able to do the same with my CustomFaceGeometry. When I apply the lipstick texture (image attached in the original post), the full face gets colored. I want only the part of the face where the texture's alpha is greater than 0 to be modified, and the rest of the face left untouched. Can you give me a detailed solution using code?

```swift
// ViewController.swift
import UIKit
import ARKit
import SceneKit
import simd

class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    @IBOutlet weak var sceneView: ARSCNView!
    let vertexIndicesOfInterest = [250]
    var customFaceGeometry: CustomFaceGeometry!
    var scnFaceGeometry: SCNGeometry!
    private var faceUvGenerator: FaceTextureGenerator!
    var faceGeometry: ARSCNFaceGeometry!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARFaceTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}

extension ViewController {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        customFaceGeometry = CustomFaceGeometry(fromFaceAnchor: faceAnchor)
        let customGeometryNode = SCNNode(geometry: customFaceGeometry.geometry)
        customFaceGeometry.geometry.firstMaterial?.fillMode = .lines
        customFaceGeometry.geometry.firstMaterial?.transparency = 0.0
        customFaceGeometry.geometry.firstMaterial?.isDoubleSided = true
        node.addChildNode(customGeometryNode)
    }

    func renderer(_ renderer: SCNSceneRenderer, willUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceMeshNode = node.childNodes.first else { return }
        DispatchQueue.main.async {
            self.customFaceGeometry.update(withFaceAnchor: faceAnchor, node: faceMeshNode)
        }
    }
}

class CustomFaceGeometry {
    var geometry: SCNGeometry
    let lipImage = UIImage(named: "Face.scnassets/lip_arks_y7.png")

    init(fromFaceAnchor faceAnchor: ARFaceAnchor) {
        self.geometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor)!
    }

    static func createCustomFaceGeometry(fromVertices vertices_o: [SCNVector3]) -> SCNGeometry {
        var vertices = vertices_o
        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData,
                                             semantic: .vertex,
                                             vectorCount: vertices.count,
                                             usesFloatComponents: true,
                                             componentsPerVector: 3,
                                             bytesPerComponent: MemoryLayout<Float>.size,
                                             dataOffset: 0,
                                             dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [Int32] = Array(0..<Int32(vertices.count))
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData,
                                         primitiveType: .point,
                                         primitiveCount: vertices.count,
                                         bytesPerIndex: MemoryLayout<Int32>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }

    static func createGeometry(fromFaceAnchor faceAnchor: ARFaceAnchor) -> SCNGeometry {
        let vertices = faceAnchor.geometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        return CustomFaceGeometry.createCustomFaceGeometry(fromVertices: vertices)
    }

    func update(withFaceAnchor faceAnchor: ARFaceAnchor, node: SCNNode) {
        if let newGeometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor) {
            node.geometry = newGeometry
            let lipstickNode = SCNNode(geometry: newGeometry)
            let lipstickTextureMaterial = SCNMaterial()
            lipstickTextureMaterial.diffuse.contents = lipImage
            lipstickTextureMaterial.transparency = 1.0
            lipstickNode.geometry?.firstMaterial = lipstickTextureMaterial
            node.geometry?.firstMaterial?.fillMode = .lines
            node.geometry?.firstMaterial?.transparency = 0.5
        }
    }

    static func createCustomSCNGeometry(from faceAnchor: ARFaceAnchor) -> SCNGeometry? {
        let faceGeometry = faceAnchor.geometry
        var vertices: [SCNVector3] = faceGeometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        print(vertices[250])

        // Pull selected lower-lip vertices down by a fixed ratio.
        let ll_ratio_y = Float(0.969999)
        for i in [290, 274, 265, 700, 730, 25, 709, 725, 710] {
            vertices[i] = SCNVector3(x: vertices[i].x, y: vertices[i].y * ll_ratio_y, z: vertices[i].z)
        }

        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData,
                                             semantic: .vertex,
                                             vectorCount: vertices.count,
                                             usesFloatComponents: true,
                                             componentsPerVector: 3,
                                             bytesPerComponent: MemoryLayout<Float>.size,
                                             dataOffset: 0,
                                             dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [UInt16] = faceGeometry.triangleIndices.map(UInt16.init)
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<UInt16>.size)
        let element = SCNGeometryElement(data: indexData,
                                         primitiveType: .triangles,
                                         primitiveCount: indices.count / 3,
                                         bytesPerIndex: MemoryLayout<UInt16>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }
}
```
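My current suspicion is that the custom geometry has no texture-coordinate source, so the lipstick image has no UVs to map against and ends up tinting the whole mesh. Something like this sketch is what I've been trying to add alongside the vertex source:

```swift
// Sketch: build a .texcoord source from ARFaceGeometry's own UVs so a
// lipstick texture (transparent alpha outside the lips) maps correctly.
let uvs: [SIMD2<Float>] = faceGeometry.textureCoordinates
let uvData = Data(bytes: uvs, count: uvs.count * MemoryLayout<SIMD2<Float>>.stride)
let uvSource = SCNGeometrySource(data: uvData,
                                 semantic: .texcoord,
                                 vectorCount: uvs.count,
                                 usesFloatComponents: true,
                                 componentsPerVector: 2,
                                 bytesPerComponent: MemoryLayout<Float>.size,
                                 dataOffset: 0,
                                 dataStride: MemoryLayout<SIMD2<Float>>.stride)
// ...then: SCNGeometry(sources: [vertexSource, uvSource], elements: [element])
```

But I'm not sure this alone explains the behavior, so a complete solution would still be appreciated.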
2 replies · 0 boosts · 717 views · Feb ’24
How can I simultaneously apply the drag gesture to multiple entities?
I wanted to drag EntityA while also dragging EntityB independently. I've tried to separate them by entity, but it only recognizes the latest drag gesture:

```swift
RealityView { content, attachments in
    // ...
}
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in /* ... */ }
)
.gesture(
    DragGesture()
        .targetedToEntity(EntityB)
        .onChanged { value in /* ... */ }
)
```

I also tried using simultaneously, but that didn't work either; maybe I'm missing something:

```swift
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in /* ... */ }
        .simultaneously(with:
            DragGesture()
                .targetedToEntity(EntityB)
                .onChanged { value in /* ... */ }
        )
)
```
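I've also wondered whether a single gesture that branches on the hit entity would sidestep the problem; something like this sketch, where the entity names are placeholders:

```swift
// Sketch: one DragGesture targeted to any entity, branching on which entity
// was actually hit, so neither .gesture modifier overrides the other.
RealityView { content in
    // ... add EntityA and EntityB with collision + input target components ...
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            switch value.entity.name {
            case "EntityA": break // move EntityA here
            case "EntityB": break // move EntityB here
            default: break
            }
        }
)
```

Though I suspect even this only tracks one drag at a time, so truly simultaneous two-handed drags may need something else.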
0 replies · 1 boost · 444 views · Feb ’24