In the project template for using ARKit with Metal, there's a definition for the memory alignment of the buffer that holds the SharedUniforms structure. It is defined like this:
// The 16 byte aligned size of our uniform structures
let kAlignedSharedUniformsSize: Int = (MemoryLayout<SharedUniforms>.size & ~0xFF) + 0x100
If I understood correctly, this line of code does this:
1. Calculates the size of the SharedUniforms structure in bytes
2. Clears out the last 8 bits of that size
3. Adds 256 bytes (0x100) to the result
So if I'm not mistaken, this rounds the size of the SharedUniforms structure up to the next multiple of 256 bytes, not 16 bytes as the comment suggests.
Is there something I've overlooked? I can't wrap my head around how this would align the size to 16 bytes.
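For what it's worth, a quick standalone check (my own sketch, not from the template, with made-up sizes standing in for MemoryLayout<SharedUniforms>.size) confirms the 256-byte rounding, and shows the expression always adds a full 0x100 even when the size is already aligned:

// Hypothetical struct sizes standing in for MemoryLayout<SharedUniforms>.size:
let sizes = [4, 16, 250, 256, 260]
for size in sizes {
    let aligned = (size & ~0xFF) + 0x100 // the template's expression
    print("size \(size) -> \(aligned)")
    // Prints: 4 -> 256, 16 -> 256, 250 -> 256, 256 -> 512, 260 -> 512
}

A true 16-byte round-up would instead be (size + 0xF) & ~0xF, so the comment and the constant really do disagree; the 256-byte value is presumably deliberate, since constant buffer offsets must be 256-byte aligned on some GPUs.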
Basically the title.
Is it all done in Reality Composer Pro, or does it need some coding in Swift?
Hi,
Re: WWDC2023-10089, I have a question about creating texture maps during pipeline setup. In traditional MTKView setups, it's easy to query the view size to know what the dimensions of a texture map should be. But after digging through all the documentation on the classes, I don't see any way to find this information.
There's the drawable, which I could query, and then maybe get the info from the default render textures. But I'm trying to set these textures up when I set up the pipelines, so I don't think that will work, because the render loop won't have started yet.
Secondly, I'm wondering whether foveation adds even more to consider when creating these kinds of auxiliary render passes.
Basically, for example's sake, imagine you have a working visionOS Metal pipeline. But now you want to add a special render pass to do some effects. Typically you'd create a texture map to store that pass, calculate the work in a fragment shader, etc., and then use another pipeline state to mix that with the default rendering pipeline.
Any help appreciated, thanks!
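For discussion's sake, one workaround I can imagine (a sketch under the assumption that deferring allocation to the first frame is acceptable; EffectPass and its names are made up here) is to create the auxiliary texture lazily, sized from the first drawable's color texture:

import Metal

final class EffectPass {
    private var effectTexture: MTLTexture?

    // Lazily (re)create the offscreen texture to match the drawable's size,
    // so no size needs to be known at pipeline-setup time.
    func texture(matching drawableTexture: MTLTexture, device: MTLDevice) -> MTLTexture {
        if let tex = effectTexture,
           tex.width == drawableTexture.width,
           tex.height == drawableTexture.height {
            return tex
        }
        let desc = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: drawableTexture.pixelFormat,
            width: drawableTexture.width,
            height: drawableTexture.height,
            mipmapped: false)
        desc.usage = [.renderTarget, .shaderRead]
        desc.storageMode = .private
        let tex = device.makeTexture(descriptor: desc)!
        effectTexture = tex
        return tex
    }
}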
Hi, I would like to render parallax frames into the left and right frame buffers. Are there any documents or examples I can refer to?
I'm working on a project wherein RealityKit for iOS will be used to display 3D files (USDZ) in a real-world environment. The model will also need to animate differently depending on which button is pressed. When using models downloaded from various websites or via Apple Quick Look, the code functions well: I can place the model and click a button to play its animation.
Unfortunately, although the model my team provided (made in Blender) animates in SceneKit, it does not play at all when placed in the real world, not even when a button is pressed.
I checked the file with the RealityKit USDZ tools; they report that the file is not valid, but they don't indicate what is wrong.
Could you please help me figure out what's wrong with my USDZ file?
Working USDZ: https://developer.apple.com/augmented-reality/quick-look/models/drummertoy/toy_drummer_idle.usdz
My file: https://drive.google.com/file/d/1UibIKBy2fx4q0XxSNodOwQZMLgktKiKF/view?usp=sharing
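For comparison, this is the pattern I would expect to work on a valid file (a minimal sketch; entity stands for the loaded USDZ entity):

import RealityKit

// Play the first animation baked into the USDZ, looping it.
func playFirstAnimation(on entity: Entity) {
    guard let animation = entity.availableAnimations.first else {
        print("No animations found in this USDZ")
        return
    }
    entity.playAnimation(animation.repeat(), transitionDuration: 0.3, startsPaused: false)
}

If availableAnimations comes back empty for the Blender export but not for the working drummer file, that would point at how the animation is authored in the USDZ rather than at the playback code.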
I'm experiencing an issue with SceneKit that is driving me crazy ;(
I have severe hangs when I disable Metal API Validation (which is the default when you don't run from Xcode). So is there any way to force-enable Metal API Validation for an App Store binary, i.e. the equivalent of running with MTL_DEBUG_LAYER=1 for a TestFlight or App Store build?
The hangs happen on Catalyst, but also on iOS if I use lightingEnvironment...
When developing JAX code locally, I use jax's debug_callback. Metal does not implement it:
NotImplementedError: MLIR translation rule for primitive 'debug_callback' not found for platform METAL
Why is the indoor skybox displayed so large and far away in the field of view on visionOS?
func addSkybox(for destination: Destination) {
    let subscription = TextureResource.loadAsync(named: destination.imageName).sink(
        receiveCompletion: {
            switch $0 {
            case .finished: break
            case .failure(let error): assertionFailure("\(error)")
            }
        },
        receiveValue: { [weak self] texture in
            guard let self = self else { return }
            var material = UnlitMaterial()
            material.color = .init(texture: .init(texture))
            // Note: radius 1E3 is a 1,000 m sphere, which is why the skybox
            // appears so large and distant.
            self.components.set(ModelComponent(
                mesh: .generateSphere(radius: 1E3),
                materials: [material]
            ))
            // We flip the sphere inside out so the texture is shown inside.
            self.scale *= .init(x: -1, y: 1, z: 1)
            self.transform.translation += SIMD3<Float>(0.0, 1.0, 0.0)
            // Rotate the sphere to show the best initial view of the space.
            updateRotation(for: destination)
        }
    )
    components.set(Entity.SubscriptionComponent(subscription: subscription))
}
From the Destination Video sample code: https://developer.apple.com/documentation/visionos/destination-video
Detecting a touch on an SKSpriteNode within a touchesBegan event?
My experience to date has focused on using GamepadControllers with Apps, not a touch-activated iOS App.
Here are some short code snippets:
Note: the error I am trying to correct is marked in the very first snippet (touchesBegan) by the comment <== shows "horse".
Yes, there is a "horse", but it is nowhere near the "creditsInfo" SKSpriteNode within my .sks file.
Please note that this "creditsInfo" SKSpriteNode is programmatically generated by my addCreditsButton(..) and will be placed very near the top-left of my GameScene.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let ourScene = GameScene(fileNamed: "GameScene") {
        if let touch: UITouch = touches.first {
            let location = touch.location(in: view)
            let node: SKNode = ourScene.atPoint(location)
            print("node.name = \(node.name!)") // <== shows "horse"
            if (node.name == "creditsInfo") {
                showCredits()
            }
        }
    } // if let ourScene
} // touchesBegan
The above touchesBegan function is in an extension of GameViewController, which according to the docs is okay: touchesBegan is a UIResponder method, so it is available on UIView as well as UIViewController.
Within my primary showScene() function, I have:
if let ourScene = GameScene(fileNamed: "GameScene") {
    #if os(iOS)
    addCreditsButton(toScene: ourScene)
    #endif
}
with:
func addCreditsButton(toScene: SKScene) {
    if thisSceneName == "GameScene" {
        itsCreditsNode.name = "creditsInfo"
        itsCreditsNode.anchorPoint = CGPoint(x: 0.5, y: 0.5)
        itsCreditsNode.size = CGSize(width: 2*creditsCircleRadius,
                                     height: 2*creditsCircleRadius)
        itsCreditsNode.zPosition = 3
        creditsCirclePosY = roomHeight/2 - creditsCircleRadius - creditsCircleOffsetY
        creditsCirclePosX = -roomWidth/2 + creditsCircleRadius + creditsCircleOffsetX
        itsCreditsNode.position = CGPoint(x: creditsCirclePosX,
                                          y: creditsCirclePosY)
        toScene.addChild(itsCreditsNode)
    } // if thisSceneName
} // addCreditsButton
To finish, I repeat what I stated at the very top:
The error I am trying to correct is marked in the very first snippet (touchesBegan) by the comment <== shows "horse".
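For reference, the pattern in SpriteKit samples (a sketch, not a verified fix for this project) handles the touch inside the currently presented scene and converts the point with touch.location(in: self). Note that the snippet above creates a fresh GameScene instance and uses view coordinates, either of which could account for hitting the wrong node:

import SpriteKit

class GameScene: SKScene {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let location = touch.location(in: self) // scene coordinates, not view coordinates
        let node = atPoint(location)            // hit-test the scene that is actually on screen
        if node.name == "creditsInfo" {
            // call showCredits(), the poster's existing function
        }
    }
}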
Hello,
I have been following the excellent, informative "Metal for Machine Learning" session from WWDC19 to learn how to do on-device training (I have a specific use case for this), and it is all working really well using the MPSNNGraph.
However, I would like to call my own Metal compute/render function/pipeline to transform the inference result before calculating the loss. Does anyone know if this is possible, and what it would look like in code?
Please see my current code below; at the comment, I need to call an intermediate compute/render function to transform the inference result image before passing it to the MPSNNForwardLossNode.
let rgbImageNode = MPSNNImageNode(handle: nil)
let inferGraph = makeInferenceGraph()
let reshape = MPSNNReshapeNode(source: inferGraph.resultImage, resultWidth: 64, resultHeight: 64, resultFeatureChannels: 4)
//Need to call render or compute pipeline to post process in the inference result image
let rgbLoss = MPSNNForwardLossNode(source:reshape.resultImage, labels:rgbImageNode, lossDescriptor:lossDescriptor)
let initGrad = MPSNNInitialGradientNode(source:rgbLoss.resultImage)
let gradNodes = initGrad.trainingGraph(withSourceGradient:nil, nodeHandler:nil)
guard let trainGraph = MPSNNGraph(device: device, resultImage: gradNodes![0].resultImage, resultImageIsNeeded: true) else {
    fatalError("Unable to get training graph.")
}
Thanks
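For reference, one arrangement I can imagine (a sketch, not necessarily the canonical MPS answer; inferenceGraph, lossGraph, inputImage, labelImage, rgbDescriptor, and postProcessPipeline are all placeholders) is to split the work into two graphs and encode a custom compute pass between them:

// Hypothetical split: encode the inference graph first, run a custom compute
// kernel on its output, then feed the transformed image to a separate
// loss/training graph.
let inferenceResult = inferenceGraph.encode(to: commandBuffer, sourceImages: [inputImage])!

let transformed = MPSImage(device: device, imageDescriptor: rgbDescriptor)
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(postProcessPipeline) // your own compute kernel
encoder.setTexture(inferenceResult.texture, index: 0)
encoder.setTexture(transformed.texture, index: 1)
encoder.dispatchThreads(MTLSize(width: 64, height: 64, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 8, height: 8, depth: 1))
encoder.endEncoding()

let lossResult = lossGraph.encode(to: commandBuffer, sourceImages: [transformed, labelImage])

The trade-off is that the result no longer sits inside a single MPSNNGraph, so gradients will not flow back through the custom kernel automatically.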
With the advent of the third dimension, I wanted to know whether it's currently possible to display flat SwiftUI views with some thickness in xrOS.
While .frame(depth: CGFloat?) does the job for views in general, I am eager for a more granular level of control at the pixel level.
I was hoping there are lower-level APIs to achieve this, and I've looked into the fairly new layerEffect shader API, yet it seems incapable of setting the depth of individual pixels...
On macOS 14.2.1 Sonoma, when our app is displayed in full screen, there is a problem where operations performed on the local machine are not reflected at all on the remote machine.
This issue does not occur on macOS 13 Ventura.
If we exit full-screen mode and reduce the window size, the remote machine can be operated to some extent.
Were there any specification or technical changes around screen handling between Ventura and Sonoma?
Local Mac* - relay server - remote Windows machine
*The issue does not occur when the local client is Windows, Android, or iOS.
I want to render a dense point cloud in Mixed Reality view using RealityKit. How could I achieve this, if this is possible? It seems to only support rendering mesh geometries with triangle faces.
In the WWDC talk "Enhance your spatial computing app with RealityKit." we see how to create a portal effect with RealityKit. In the "Encounter Dinosaurs" experience on Vision Pro there is a similar portal, except this portal allows entities to stick out of the portal. Using the provided example code, I have been unable to replicate this effect. With the example code, anything that sticks out of the portal gets clipped.
How do I get entities to stick out of the portal in a way similar to the "Encounter Dinosaurs" experience?
I am familiar with the old way of using OcclusionMaterial to create portals, but if the camera gets between the OcclusionMaterial and the entity (such as walking behind the portal), this can break the effect, and I was unable to break the effect in the "Encounter Dinosaurs" experience.
If it helps at all: I have noticed that if you look very closely from the edge of the portal, the rocks do not stick out the way the dinosaurs do; the rocks get clipped. Therefore, the dinosaurs are somehow being rendered differently.
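For context, this is the basic portal setup from that session as I understand it (a sketch; the entity names are made up here). With only this, content is clipped at the portal plane, which matches the behavior being asked about:

import RealityKit

func makePortal(content dinosaur: Entity) -> (world: Entity, portal: Entity) {
    // Content in this world is rendered only when seen through the portal.
    let world = Entity()
    world.components.set(WorldComponent())
    world.addChild(dinosaur)

    // The portal surface that looks into that world.
    let portal = Entity()
    portal.components.set(ModelComponent(
        mesh: .generatePlane(width: 1, height: 1),
        materials: [PortalMaterial()]
    ))
    portal.components.set(PortalComponent(target: world))
    return (world, portal)
}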
My MacBook Air M1 has macOS Sonoma 14.3.1 installed, and I tried to install game-porting-toolkit tonight. After the step that requires me to enter the command "brew -v install apple/apple/game-porting-toolkit", Terminal ran for minutes, but at the end this error appeared: Error: apple/apple/game-porting-toolkit 1.1 did not build.
I don't know anything about coding and software. Could someone please tell me what causes this error and how to fix it? I will appreciate your help!
Hello,
I've been trying to render these models in a visionOS app using RealityKit's Model3D API. The heart seems to appear dark all the time. Any thoughts on why this would happen?
Color.clear
    .overlay {
        Model3D(named: modelName, bundle: realityKitContentBundle) { model in
            model.resizable()
                .scaledToFit()
                .rotation3DEffect(
                    Rotation3D(
                        eulerAngles: .init(angles: orientation, order: .xyz)
                    )
                )
                .frame(depth: modelDepth)
                .offset(z: -modelDepth / 2)
                .accessibilitySortPriority(1)
        } placeholder: {
            ProgressView()
                .offset(z: -modelDepth * 0.75)
        }
    }
    .dragRotation(yawLimit: .degrees(120), pitchLimit: .degrees(20))
    .offset(z: modelDepth)
I want to draw a pixel buffer directly to the screen with the Metal API.
In OpenGL I can use glDrawPixels.
How do I do this in Metal?
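For reference, a rough equivalent can be sketched like this (assuming an MTKView named view with framebufferOnly set to false, and a BGRA byte buffer named pixels; names are illustrative): upload the bytes into a texture, then blit that texture to the drawable.

import MetalKit

func drawPixels(_ pixels: [UInt8], width: Int, height: Int,
                device: MTLDevice, commandQueue: MTLCommandQueue, view: MTKView) {
    // Upload the CPU pixel buffer into a texture.
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
    let texture = device.makeTexture(descriptor: desc)!
    pixels.withUnsafeBytes { buf in
        texture.replace(region: MTLRegionMake2D(0, 0, width, height),
                        mipmapLevel: 0,
                        withBytes: buf.baseAddress!,
                        bytesPerRow: width * 4)
    }

    // Blit it to the screen. Requires view.framebufferOnly = false and a
    // drawable with matching size and pixel format.
    guard let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.copy(from: texture, to: drawable.texture)
    blit.endEncoding()
    commandBuffer.present(drawable)
    commandBuffer.commit()
}

If the buffer and drawable sizes differ, a textured full-screen quad (or MPSImageBilinearScale) would be needed instead of a direct blit.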
I am working on an application where we are planning to use Metal for directly rendering custom content. When the user looks at something on the rendered image, I want to get the position or ray of the cursor (the point the user is currently looking at) so I can render something else, like a crosshair. Is it possible to get this cursor position information on visionOS? How can I know if something is being hovered over by the eyes?
As per the new App Review Guidelines, are HTML5 games provided within apps required to be embedded in the binary?
I ask given that section 4.7 of the App Review Guidelines has been updated.
In visionOS 1.1, when using com.apple.unityplugin.core-3.1.0 and com.apple.unityplugin.gamekit-2.2.0 to sign in to Apple Game Center:
var player = await GKLocalPlayer.Authenticate();
Debug.Log($"GKLocalPlayer Player: {player.DisplayName}");
Debug.Log($"GKLocalPlayer Player Alias: {player.Alias}");
it returns
GKLocalPlayer Player:
GKLocalPlayer Player Alias: Unknown
All other properties are fine; only DisplayName is blank, and Alias returns "Unknown".
However, it works fine on iOS.