In visionOS 1.1, when using com.apple.unityplugin.core-3.1.0 and com.apple.unityplugin.gamekit-2.2.0 to sign in to Apple Game Center:
var player = await GKLocalPlayer.Authenticate();
Debug.Log($"GKLocalPlayer Player: {player.DisplayName}");
Debug.Log($"GKLocalPlayer Player Alias: {player.Alias}");
it returns
GKLocalPlayer Player:
GKLocalPlayer Player Alias: Unknown
All other properties are fine, but DisplayName is blank and Alias returns "Unknown".
However, it works fine on iOS.
As per the new App Review Guidelines, are HTML5 games provided within apps required to be embedded in the binary?
I'm asking because section 4.7 of the App Review Guidelines has been updated.
I want to draw a pixel buffer directly on the screen with the Metal API.
In OpenGL I could use glDrawPixels.
How can I do this in Metal?
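For reference, a minimal sketch (my addition, under assumptions): one way to get a CPU pixel buffer on screen with Metal is to upload it into an MTLTexture and blit it into the CAMetalLayer's drawable. This assumes a BGRA8 buffer the same size as the drawable, and that metalLayer.framebufferOnly has been set to false so the drawable can be a blit destination.
import Metal
import QuartzCore

func drawPixelBuffer(_ pixels: [UInt8], width: Int, height: Int,
                     device: MTLDevice, commandQueue: MTLCommandQueue,
                     metalLayer: CAMetalLayer) {
    // Create a texture matching the buffer's size and format.
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: width, height: height,
                                                        mipmapped: false)
    guard let texture = device.makeTexture(descriptor: desc) else { return }

    // Copy the CPU pixels into the texture (the role glDrawPixels' data pointer played).
    pixels.withUnsafeBytes { raw in
        texture.replace(region: MTLRegionMake2D(0, 0, width, height),
                        mipmapLevel: 0,
                        withBytes: raw.baseAddress!,
                        bytesPerRow: width * 4)
    }

    // Blit the texture into the current drawable and present it. If the sizes or formats
    // differ, draw a textured full-screen quad instead of blitting.
    guard let drawable = metalLayer.nextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.copy(from: texture, to: drawable.texture)
    blit.endEncoding()
    commandBuffer.present(drawable)
    commandBuffer.commit()
}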
Hello,
I've been trying to render these models in a visionOS app using RealityKit's Model3D API. The heart seems to appear dark all the time. Any thoughts on why this would happen?
Color.clear
    .overlay {
        Model3D(named: modelName, bundle: realityKitContentBundle) { model in
            model.resizable()
                .scaledToFit()
                .rotation3DEffect(
                    Rotation3D(
                        eulerAngles: .init(angles: orientation, order: .xyz)
                    )
                )
                .frame(depth: modelDepth)
                .offset(z: -modelDepth / 2)
                .accessibilitySortPriority(1)
        } placeholder: {
            ProgressView()
                .offset(z: -modelDepth * 0.75)
        }
    }
    .dragRotation(yawLimit: .degrees(120), pitchLimit: .degrees(20))
    .offset(z: modelDepth)
I want to render a dense point cloud in a Mixed Reality view using RealityKit. How could I achieve this, if it is possible at all? RealityKit seems to only support rendering mesh geometries with triangle faces.
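For what it's worth, here is a minimal sketch of one workaround (my own assumption, not an official point-cloud API): since MeshDescriptor only accepts triangle-based primitives, each point can be expanded into a tiny triangle and the result rendered as a single unlit ModelEntity. The helper name makePointCloudEntity and the point size are hypothetical.
import RealityKit
import simd

func makePointCloudEntity(points: [SIMD3<Float>], pointSize: Float = 0.002) throws -> ModelEntity {
    var positions: [SIMD3<Float>] = []
    var indices: [UInt32] = []
    positions.reserveCapacity(points.count * 3)

    for (i, p) in points.enumerated() {
        // Expand each point into a small triangle around its position.
        positions.append(p + SIMD3(-pointSize, -pointSize, 0))
        positions.append(p + SIMD3( pointSize, -pointSize, 0))
        positions.append(p + SIMD3( 0,          pointSize, 0))
        let base = UInt32(i * 3)
        indices.append(contentsOf: [base, base + 1, base + 2])
    }

    var descriptor = MeshDescriptor(name: "pointCloud")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    let material = UnlitMaterial(color: .white)
    return ModelEntity(mesh: mesh, materials: [material])
}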
I'm using DrawableQueue to create textures that I apply to my ShaderGraphMaterial texture. My Metal renderer uses a range of alpha values as a test.
My objects displayed with the DrawableQueue texture are working as expected, but the alpha component is not working.
Is this an issue with my DrawableQueue descriptor? My ShaderGraphMaterial? A missing setting on my scene objects? Or some limitation in visionOS?
DrawableQueue descriptor
let descriptor = await TextureResource.DrawableQueue.Descriptor(
    pixelFormat: .rgba8Unorm,
    width: textureResource!.width,
    height: textureResource!.height,
    usage: [.renderTarget, .shaderRead, .shaderWrite], // Usage should match the requirements for how the texture will be used
    //usage: [.renderTarget], // Usage should match the requirements for how the texture will be used
    mipmapsMode: .none // Assuming no mipmaps are needed for the text texture
)
let queue = try await TextureResource.DrawableQueue(descriptor)
queue.allowsNextDrawableTimeout = true
await textureResource!.replace(withDrawables: queue)
Draw frame:
guard
    let drawable = try? drawableQueue!.nextDrawable(),
    let commandBuffer = commandQueue?.makeCommandBuffer()//,
    //let renderPipelineState = renderPipelineState
else {
    return
}
let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = drawable.texture
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.colorAttachments[0].clearColor = clearColor
/*renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(
    red: clearColor.red,
    green: clearColor.green,
    blue: clearColor.blue,
    alpha: 0.5 )*/
renderPassDescriptor.renderTargetHeight = drawable.texture.height
renderPassDescriptor.renderTargetWidth = drawable.texture.width
guard let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else {
    return
}
renderEncoder.pushDebugGroup("DrawNextFrameWithColor")
//renderEncoder.setRenderPipelineState(renderPipelineState)
// No need to create a render command encoder with shaders, as we are only clearing the drawable.
// Since we are just clearing the drawable to a solid color, no need to draw primitives.
renderEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
drawable.present()
}
On macOS 14.2.1 Sonoma, when our app is displayed in full screen, operations performed on the local machine are not reflected at all on the remote connection destination.
This issue does not occur on macOS 13 Ventura.
If we stop displaying full screen and reduce the window size, the remote machine can be operated to some extent.
Were there any specification or technical changes around screen handling between Ventura and Sonoma?
Setup: Mac at hand* - relay server - Windows machine to connect to
*This does not occur on Windows, Android, or iOS.
I am running the RoomPlan demo app and keep getting the above error, and when I try to find somewhere to get the archive in the Metal libraries, my searches come up blank. There are no files in a search that contain such identifiers. A number of messages about "deprecated" interfaces are also displayed. Is it normal to ship demo apps that are hobbled in this way?
With the advent of the third dimension, I wanted to know whether it's currently possible to display flat SwiftUI views with some thickness in xrOS.
While .frame(depth: CGFloat?) does the job for views in general, I am eager for a more granular level of control at the pixel-specific level.
I was hoping there are lower-level APIs to achieve this, and I've looked into the fairly new layerEffect shader API, yet it seems incapable of setting the depths of individual pixels...
Hi, I'm displaying linear gray with a CAMetalLayer using the shader below.
fragment float4 fragmentShader(VertexOut in [[stage_in]],
                               texture2d<float, access::sample> BGRATexture [[ texture(0) ]])
{
    float color = in.texCoordinates.x;
    return float4(float3(color), 1.0);
}
And my CAMetalLayer has been set to linearSRGB.
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.linearSRGB)
metalLayer.pixelFormat = .bgra8Unorm
Why does the display seem to add gamma? Apparently the middle gray measures 187, not 128.
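As a side note (my own quick check, not part of the original post): 187 is roughly what you get when linear 0.5 is encoded with the sRGB transfer function, which may be why the gradient looks like it has gamma applied when the layer is tagged as linear.
import Foundation

// Encode linear 0.5 with the sRGB transfer function.
func sRGBEncode(_ linear: Double) -> Double {
    linear <= 0.0031308 ? 12.92 * linear : 1.055 * pow(linear, 1.0 / 2.4) - 0.055
}
print(Int((sRGBEncode(0.5) * 255).rounded()))   // ~188, close to the observed 187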
Hello,
I have been following the excellent/informative "Metal for Machine Learning" from WWDC19 to learn how to do on device training (I have a specific use case for this) and it is all working really well using the MPSNNGraph.
However, I would like to call my own Metal compute/render function/pipeline to transform the inference result before calculating the loss. Does anyone know if this is possible, and what it would look like in code?
Please see my current code below; at the comment I need to call an intermediate compute/render function to transform the inference result image before passing it to the MPSNNForwardLossNode.
let rgbImageNode = MPSNNImageNode(handle: nil)
let inferGraph = makeInferenceGraph()
let reshape = MPSNNReshapeNode(source: inferGraph.resultImage, resultWidth: 64, resultHeight: 64, resultFeatureChannels: 4)
// Need to call a render or compute pipeline here to post-process the inference result image.
let rgbLoss = MPSNNForwardLossNode(source: reshape.resultImage, labels: rgbImageNode, lossDescriptor: lossDescriptor)
let initGrad = MPSNNInitialGradientNode(source: rgbLoss.resultImage)
let gradNodes = initGrad.trainingGraph(withSourceGradient: nil, nodeHandler: nil)
guard let trainGraph = MPSNNGraph(device: device, resultImage: gradNodes![0].resultImage, resultImageIsNeeded: true) else {
    fatalError("Unable to get training graph.")
}
Thanks
Detecting a touch on an SKSpriteNode within a touchesBegan event?
My experience to date has focused on using game controllers with apps, not a touch-activated iOS app.
Here are some short code snippets:
Note: the error I am trying to correct is noted in the very first snippet, in the touchesBegan comment marked <== shows "horse".
Yes, there is a "horse", but it is nowhere near the "creditsInfo" SKSpriteNode within my .sks file.
Please note that this "creditsInfo" SKSpriteNode is programmatically generated by my addCreditsButton(..) and is placed very near the top-left of my GameScene.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let ourScene = GameScene(fileNamed: "GameScene") {
        if let touch: UITouch = touches.first {
            let location = touch.location(in: view)
            let node: SKNode = ourScene.atPoint(location)
            print("node.name = \(node.name!)") // <== shows "horse"
            if (node.name == "creditsInfo") {
                showCredits()
            }
        }
    } // if let ourScene
} // touchesBegan
The above touchesBegan function is in an extension of GameViewController, which according to the docs is okay, since touchesBegan is a UIView method as well as a UIViewController method.
Within my primary showScene() function, I have:
if let ourScene = GameScene(fileNamed: "GameScene") {
    #if os(iOS)
    addCreditsButton(toScene: ourScene)
    #endif
}
with:
func addCreditsButton(toScene: SKScene) {
    if thisSceneName == "GameScene" {
        itsCreditsNode.name = "creditsInfo"
        itsCreditsNode.anchorPoint = CGPoint(x: 0.5, y: 0.5)
        itsCreditsNode.size = CGSize(width: 2*creditsCircleRadius,
                                     height: 2*creditsCircleRadius)
        itsCreditsNode.zPosition = 3
        creditsCirclePosY = roomHeight/2 - creditsCircleRadius - creditsCircleOffsetY
        creditsCirclePosX = -roomWidth/2 + creditsCircleRadius + creditsCircleOffsetX
        itsCreditsNode.position = CGPoint(x: creditsCirclePosX,
                                          y: creditsCirclePosY)
        toScene.addChild(itsCreditsNode)
    } // if thisSceneName
} // addCreditsButton
To finish, I repeat what I stated at the very top:
The error I am trying to correct is noted in the very first snippet, in the touchesBegan comment marked <== shows "horse".
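For reference, a minimal sketch of one possible fix (my addition, not the original code), assuming the GameScene is already presented in the view controller's SKView and showCredits() exists: query the scene that is actually on screen instead of creating a new GameScene, and convert the touch location from view coordinates into that scene's coordinate space.
import SpriteKit
import UIKit

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let skView = view as? SKView,
          let scene = skView.scene,            // the scene being presented, not a fresh copy
          let touch = touches.first else { return }

    let viewLocation = touch.location(in: skView)
    let sceneLocation = scene.convertPoint(fromView: viewLocation)
    let node = scene.atPoint(sceneLocation)

    if node.name == "creditsInfo" {
        showCredits()
    }
}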
Hello,
I've been tinkering with PortalComponent on visionOS a bit but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow contents of the portal to peek out – similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?).
I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property). But so far I haven't been able to achieve a perceptible visual difference with it.
If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this.
Thanks for any hints!
In the code below I have extracted face mesh vertices from ARKit face anchors and created a custom face mesh using SceneKit's SCNGeometry. This enabled me to stretch face mesh vertices as needed.
The problem I am now facing is as follows. I am trying to apply a lipstick texture material, which is of type SCNMaterial. Although ARSCNFaceGeometry lets me apply different textures through SCNMaterial and SCNNode, I am not able to do the same using my CustomFaceGeometry. When I apply a lipstick texture that looks like the image attached below, the full face gets colored or modified; I want only the part of the face where the texture's alpha is greater than 0 to be affected, and I don't want other parts of the face to be modified.
Can you give me a detailed solution using code?
// ViewController.swift
import UIKit
import ARKit
import SceneKit
import simd

class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    @IBOutlet weak var sceneView: ARSCNView!
    let vertexIndicesOfInterest = [250]
    var customFaceGeometry: CustomFaceGeometry!
    var scnFaceGeometry: SCNGeometry!
    private var faceUvGenerator: FaceTextureGenerator!
    var faceGeometry: ARSCNFaceGeometry!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARFaceTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}

extension ViewController {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        customFaceGeometry = CustomFaceGeometry(fromFaceAnchor: faceAnchor)
        let customGeometryNode = SCNNode(geometry: customFaceGeometry.geometry)
        customFaceGeometry.geometry.firstMaterial?.fillMode = .lines
        customFaceGeometry.geometry.firstMaterial?.transparency = 0.0
        customFaceGeometry.geometry.firstMaterial?.isDoubleSided = true
        node.addChildNode(customGeometryNode)
    }

    func renderer(_ renderer: SCNSceneRenderer, willUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceMeshNode = node.childNodes.first else { return }
        DispatchQueue.main.async {
            self.customFaceGeometry.update(withFaceAnchor: faceAnchor, node: faceMeshNode)
        }
    }
}
class CustomFaceGeometry {
    var geometry: SCNGeometry
    let lipImage = UIImage(named: "Face.scnassets/lip_arks_y7.png")

    init(fromFaceAnchor faceAnchor: ARFaceAnchor) {
        self.geometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor)!
    }

    static func createCustomFaceGeometry(fromVertices vertices_o: [SCNVector3]) -> SCNGeometry {
        var vertices = vertices_o
        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData,
                                             semantic: .vertex,
                                             vectorCount: vertices.count,
                                             usesFloatComponents: true,
                                             componentsPerVector: 3,
                                             bytesPerComponent: MemoryLayout<Float>.size,
                                             dataOffset: 0,
                                             dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [Int32] = Array(0..<Int32(vertices.count))
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData, primitiveType: .point, primitiveCount: vertices.count, bytesPerIndex: MemoryLayout<Int32>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }
    static func createGeometry(fromFaceAnchor faceAnchor: ARFaceAnchor) -> SCNGeometry {
        let vertices = faceAnchor.geometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        return CustomFaceGeometry.createCustomFaceGeometry(fromVertices: vertices)
    }
    func update(withFaceAnchor faceAnchor: ARFaceAnchor, node: SCNNode) {
        if let newGeometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor) {
            node.geometry = newGeometry
            let lipstickNode = SCNNode(geometry: newGeometry)
            let lipstickTextureMaterial = SCNMaterial()
            lipstickTextureMaterial.diffuse.contents = lipImage
            lipstickTextureMaterial.transparency = 1.0
            lipstickNode.geometry?.firstMaterial = lipstickTextureMaterial
            node.geometry?.firstMaterial?.fillMode = .lines
            node.geometry?.firstMaterial?.transparency = 0.5
        }
    }
    static func createCustomSCNGeometry(from faceAnchor: ARFaceAnchor) -> SCNGeometry? {
        let faceGeometry = faceAnchor.geometry
        var vertices: [SCNVector3] = faceGeometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        print(vertices[250])
        let ll_ratio_y = Float(0.969999)
        vertices[290] = SCNVector3(x: vertices[290].x, y: vertices[290].y*ll_ratio_y, z: vertices[290].z)
        vertices[274] = SCNVector3(x: vertices[274].x, y: vertices[274].y*ll_ratio_y, z: vertices[274].z)
        vertices[265] = SCNVector3(x: vertices[265].x, y: vertices[265].y*ll_ratio_y, z: vertices[265].z)
        vertices[700] = SCNVector3(x: vertices[700].x, y: vertices[700].y*ll_ratio_y, z: vertices[700].z)
        vertices[730] = SCNVector3(x: vertices[730].x, y: vertices[730].y*ll_ratio_y, z: vertices[730].z)
        vertices[25] = SCNVector3(x: vertices[25].x, y: vertices[25].y*ll_ratio_y, z: vertices[25].z)
        vertices[709] = SCNVector3(x: vertices[709].x, y: vertices[709].y*ll_ratio_y, z: vertices[709].z)
        vertices[725] = SCNVector3(x: vertices[725].x, y: vertices[725].y*ll_ratio_y, z: vertices[725].z)
        vertices[710] = SCNVector3(x: vertices[710].x, y: vertices[710].y*ll_ratio_y, z: vertices[710].z)
        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData, semantic: .vertex, vectorCount: vertices.count, usesFloatComponents: true, componentsPerVector: 3, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [UInt16] = faceGeometry.triangleIndices.map(UInt16.init)
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<UInt16>.size)
        let element = SCNGeometryElement(data: indexData, primitiveType: .triangles, primitiveCount: indices.count / 3, bytesPerIndex: MemoryLayout<UInt16>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }
}
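For reference, a minimal sketch of one possible direction (an assumption on my part, not a confirmed fix): the custom geometry above has no texture-coordinate source, so a diffuse texture has no way to map onto specific regions of the face. Reusing ARKit's per-vertex UVs and enabling alpha blending should let a mostly transparent lip texture affect only the lip area. The function name createTexturedFaceGeometry is hypothetical.
import ARKit
import SceneKit
import UIKit

func createTexturedFaceGeometry(from faceAnchor: ARFaceAnchor, lipImage: UIImage?) -> SCNGeometry {
    let faceGeometry = faceAnchor.geometry
    let vertices = faceGeometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
    let vertexSource = SCNGeometrySource(vertices: vertices)

    // ARKit supplies texture coordinates that match its canonical face topology.
    let uvs = faceGeometry.textureCoordinates.map { CGPoint(x: CGFloat($0.x), y: CGFloat($0.y)) }
    let uvSource = SCNGeometrySource(textureCoordinates: uvs)

    let element = SCNGeometryElement(indices: faceGeometry.triangleIndices, primitiveType: .triangles)
    let geometry = SCNGeometry(sources: [vertexSource, uvSource], elements: [element])

    // Transparent texels leave the underlying face untouched; opaque texels tint the lips.
    let material = SCNMaterial()
    material.diffuse.contents = lipImage
    material.blendMode = .alpha
    material.isDoubleSided = true
    geometry.firstMaterial = material
    return geometry
}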
Why is the indoor skybox displayed so large and far away in the field of view in visionOS?
func addSkybox(for destination: Destination) {
    let subscription = TextureResource.loadAsync(named: destination.imageName).sink(
        receiveCompletion: {
            switch $0 {
            case .finished: break
            case .failure(let error): assertionFailure("\(error)")
            }
        },
        receiveValue: { [weak self] texture in
            guard let self = self else { return }
            var material = UnlitMaterial()
            material.color = .init(texture: .init(texture))
            self.components.set(ModelComponent(
                mesh: .generateSphere(radius: 1E3),
                materials: [material]
            ))
            // We flip the sphere inside out so the texture is shown inside.
            self.scale *= .init(x: -1, y: 1, z: 1)
            self.transform.translation += SIMD3<Float>(0.0, 1.0, 0.0)
            // Rotate the sphere to show the best initial view of the space.
            updateRotation(for: destination)
        }
    )
    components.set(Entity.SubscriptionComponent(subscription: subscription))
}
From https://developer.apple.com/documentation/visionos/destination-video
When developing JAX code locally, I use jax's debug_callback. The Metal backend does not implement it:
NotImplementedError: MLIR translation rule for primitive 'debug_callback' not found for platform METAL
I'm experiencing an issue with SceneKit that is driving me crazy ;(
I have severe hangs when I disable Metal API Validation (which is the default when you don't run from Xcode). So is there any way to force-enable Metal API Validation for an App Store binary? (i.e., run with MTL_DEBUG_LAYER=1 for TestFlight or App Store builds.)
Hangs happen on Catalyst, but also on iOS if I use lightingEnvironment...
I'm working on a project in which RealityKit for iOS will be used to display 3D files (USDZ) in a real-world environment. The model also needs to animate differently depending on which button is pressed. When using models downloaded from various websites or via Apple Quick Look, the code works well: I can place the model and tap a button to play its animation.
Unfortunately, although the model my team provided (exported through Blender) animates in SceneKit, it does not play at all when placed in the real world, not even when a button is pressed.
I checked it with a RealityKit USDZ validation tool and found the USDZ file is not valid, but could not figure out what's wrong.
Could you please help me figure out what's wrong with my USDZ file?
Working USDZ: https://developer.apple.com/augmented-reality/quick-look/models/drummertoy/toy_drummer_idle.usdz
My file: https://drive.google.com/file/d/1UibIKBy2fx4q0XxSNodOwQZMLgktKiKF/view?usp=sharing
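For reference, a minimal sketch (my addition, not the poster's code) that loads a USDZ with RealityKit and plays whatever animations it exposes; it's a quick way to check whether RealityKit sees any animation in the exported file at all. The function name and parameters are assumptions.
import RealityKit

func loadAndAnimate(named name: String, attachedTo anchor: AnchorEntity) {
    guard let entity = try? Entity.load(named: name) else {
        print("failed to load \(name)")
        return
    }
    anchor.addChild(entity)
    // If this prints 0, RealityKit found no animations in the USDZ.
    print("available animations:", entity.availableAnimations.count)
    for animation in entity.availableAnimations {
        entity.playAnimation(animation.repeat(duration: .infinity))
    }
}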
Hi, I would like to render parallax frames into the left and right frame buffers. Are there any documents or examples I can refer to?
Hi,
Re: WWDC2023-10089, I have a question about creating texture maps during pipeline setup. In traditional MTKView setups, it's easy to query the view size to know what the dimensions of the texture maps should be. But, after digging through all the documentation on the classes, I don't see any way to find this information.
There's the drawable, and querying it, and then maybe getting the info from the default render texture maps – but I'm trying to set these textures up when I set up the pipelines, so I don't think that's going to work (because the render loop won't have started yet).
Secondly, I'm wondering whether foveation means there is even more to consider when creating these types of auxiliary render passes.
Basically, for example's sake, imagine you have a working visionOS Metal pipeline, but now you want to add a special render pass to do some effects. Typically you'd create a texture map to store that pass's output, calculate the work in a fragment shader, etc., and then use another pipeline state to mix that with the default rendering pipeline.
Any help appreciated, thanks!
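For reference, a minimal sketch of one common workaround (my assumption, not a CompositorServices-specific API): defer allocation of the auxiliary render target until the first frame, size it from the drawable's color texture, and reallocate if the size ever changes.
import Metal

final class OffscreenPass {
    private(set) var texture: MTLTexture?

    func texture(matching drawableTexture: MTLTexture, device: MTLDevice) -> MTLTexture? {
        // Reuse the existing target while the drawable size is unchanged.
        if let texture,
           texture.width == drawableTexture.width,
           texture.height == drawableTexture.height {
            return texture
        }
        // (Re)create the auxiliary target at the drawable's current size.
        let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                            width: drawableTexture.width,
                                                            height: drawableTexture.height,
                                                            mipmapped: false)
        desc.usage = [.renderTarget, .shaderRead]
        desc.storageMode = .private
        texture = device.makeTexture(descriptor: desc)
        return texture
    }
}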