Basically the title.
Is it all done in Reality Composer Pro, or does it need some coding in Swift?
How to integrate UIDevice rotation and creating a new UIBezierPath after rotation?
My challenge here is to successfully integrate UIDevice rotation and create a new UIBezierPath every time the UIDevice is rotated.
(Please accept my apologies for this Post’s length .. but I can’t seem to avoid it)
As a preamble, I have bounced back and forth between
NotificationCenter.default.addObserver(self,
                                       selector: #selector(rotated),
                                       name: UIDevice.orientationDidChangeNotification,
                                       object: nil)
called within my viewDidLoad() together with
@objc func rotated() {
}
and
override func viewWillLayoutSubviews() {
    // please see code below
}
My success was much better when I implemented viewWillLayoutSubviews() rather than rotated(), so let me provide detailed code just for viewWillLayoutSubviews().
I have concluded that every time I rotate the UIDevice, a new UIBezierPath needs to be generated, because the positions and sizes of my various SKSpriteNodes change.
I can't prove that I have to create a new UIBezierPath with every rotation .. I just believe I do.
Start of Code
// declared at the top of my `GameViewController`:
var myTrain: SKSpriteNode!
var savedTrainPosition: CGPoint?
var trackOffset = 60.0
var trackRect: CGRect!
var trainPath: UIBezierPath!
My UIBezierPath creation and SKAction.follow code is as follows:
// called within my setTrackPaths() – see way below
func createTrainPath() {
    // savedTrainPosition initially set within setTrackPaths()
    // and later reset when stopping + resuming moving myTrain
    // via stopFollowTrainPath()
    trackRect = CGRect(x: savedTrainPosition!.x,
                       y: savedTrainPosition!.y,
                       width: tracksWidth,
                       height: tracksHeight)
    trainPath = UIBezierPath(ovalIn: trackRect)
    trainPath = trainPath.reversing() // makes myTrain move CW
} // createTrainPath
func startFollowTrainPath() {
    let theSpeed = Double(5*thisSpeed)
    var trainAction = SKAction.follow(
        trainPath.cgPath,
        asOffset: false,
        orientToPath: true,
        speed: theSpeed)
    trainAction = SKAction.repeatForever(trainAction)
    createPivotNodeFor(myTrain)
    myTrain.run(trainAction, withKey: runTrainKey)
} // startFollowTrainPath
func stopFollowTrainPath() {
    guard myTrain != nil else { return }
    myTrain.removeAction(forKey: runTrainKey)
    savedTrainPosition = myTrain.position
} // stopFollowTrainPath
Here is the detailed viewWillLayoutSubviews I promised earlier:
override func viewWillLayoutSubviews() {
    super.viewWillLayoutSubviews()
    if (thisSceneName == "GameScene") {
        // code to pause moving game pieces
        setGamePieceParms()  // for GamePieces, e.g., trainWidth
        setTrackPaths()      // for trainPath
        reSizeAndPositionNodes()
        // code to resume moving game pieces
    } // if (thisSceneName == "GameScene")
} // viewWillLayoutSubviews
func setGamePieceParms() {
    if (thisSceneName == "GameScene") {
        roomScale = 1.0
        let roomRect = UIScreen.main.bounds
        roomWidth = roomRect.width
        roomHeight = roomRect.height
        roomPosX = 0.0
        roomPosY = 0.0

        tracksScale = 1.0
        tracksWidth = roomWidth - 4*trackOffset // inset from screen edge
        #if os(iOS)
        if UIDevice.current.orientation.isLandscape {
            tracksHeight = 0.30*roomHeight
        }
        else {
            tracksHeight = 0.38*roomHeight
        }
        #endif
        // center horizontally
        tracksPosX = roomPosX
        // flush with bottom of UIScreen
        let temp = roomPosY - roomHeight/2
        tracksPosY = temp + trackOffset + tracksHeight/2

        trainScale = 2.8
        trainWidth = 96.0*trainScale // original size = 96 x 110
        trainHeight = 110.0*trainScale
        trainPosX = roomPosX
        #if os(iOS)
        if UIDevice.current.orientation.isLandscape {
            trainPosY = temp + trackOffset + tracksHeight + 0.30*trainHeight
        }
        else {
            trainPosY = temp + trackOffset + tracksHeight + 0.20*trainHeight
        }
        #endif
    } // if (thisSceneName == "GameScene")
} // setGamePieceParms
// a work in progress
func setTrackPaths() {
    if (thisSceneName == "GameScene") {
        // both the first-time and the subsequent-rotation cases
        // currently set the same origin
        savedTrainPosition = CGPoint(x: tracksPosX - tracksWidth/2, y: tracksPosY)
        createTrainPath()
    } // if (thisSceneName == "GameScene")
} // setTrackPaths
func reSizeAndPositionNodes() {
    myTracks.size = CGSize(width: tracksWidth, height: tracksHeight)
    myTracks.position = CGPoint(x: tracksPosX, y: tracksPosY)
    // more Nodes here ..
}
End of Code
My theory is that every UIDevice rotation calls setTrackPaths(), which in turn calls createTrainPath().
Nothing of visual significance happens with the UIBezierPath until I call startFollowTrainPath().
Bottom Line
It is then that I see for sure that a new UIBezierPath was not created when createTrainPath() ran after I rotated the UIDevice: the "new" UIBezierPath is not new at all, but the old one.
If you've made it this far through my long code, the question is: what do I need to do to create a new UIBezierPath that fits the resized and repositioned SKSpriteNodes?
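One more data point: my understanding is that SKAction.follow(_:asOffset:orientToPath:speed:) captures the CGPath when the action is created, so an action started before the rotation keeps following the old path until it is removed and re-created. Here is a condensed sketch of the rotation flow I am aiming for, using the functions above (the pause/resume of the other game pieces is elided):

// Condensed sketch of the intended rotation flow, using the functions above.
// Key assumption: startFollowTrainPath() must run again after the rotation,
// because the SKAction created earlier keeps a copy of the old cgPath.
override func viewWillLayoutSubviews() {
    super.viewWillLayoutSubviews()
    if thisSceneName == "GameScene" {
        stopFollowTrainPath()    // remove the action built on the old path
        setGamePieceParms()      // recompute sizes/positions for the new bounds
        setTrackPaths()          // rebuilds trainPath via createTrainPath()
        reSizeAndPositionNodes()
        startFollowTrainPath()   // re-run SKAction.follow with the NEW cgPath
    }
}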
In the project template for using ARKit with Metal, there's a definition for the memory alignment of the buffer that holds the SharedUniforms structure. It is defined like this:
// The 16 byte aligned size of our uniform structures
let kAlignedSharedUniformsSize: Int = (MemoryLayout<SharedUniforms>.size & ~0xFF) + 0x100
If I understood correctly, this line of code does this:
Calculates the size of the SharedUniforms structure in bytes
Clears out the last 8 bits of the size representation
Adds 256 bytes to the size
So if I'm not mistaken, this will round the size of the SharedUniforms structure up to a multiple of 256 bytes, not 16 bytes as the comment suggests.
Is there something I've overlooked, since I can't wrap my head around how this would align the size to 16 bytes?
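For what it's worth, here is a small playground-style sketch (my own, not from the template) contrasting what the template's expression computes with the canonical round-up idiom. The 256 may well be deliberate, since some GPUs require 256-byte alignment for constant-buffer offsets, in which case only the comment is stale:

// What the template computes: clears the low 8 bits, then adds 0x100,
// so it always rounds up to the NEXT multiple of 256 (it even adds a
// full extra 256 bytes when size is already a multiple of 256).
func alignedTemplateStyle(_ size: Int) -> Int {
    (size & ~0xFF) + 0x100
}

// The canonical round-up idiom for any power-of-two alignment.
func roundUp(_ size: Int, toAlignment alignment: Int) -> Int {
    (size + alignment - 1) & ~(alignment - 1)
}

print(alignedTemplateStyle(56))      // 256 (256-byte granularity)
print(roundUp(56, toAlignment: 16))  // 64  (a true 16-byte round-up)
print(roundUp(56, toAlignment: 256)) // 256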
Here is some example fragment shader code (rendering a cube with texCoord in [0, 1]):
colorSample.x = in.texCoord.x;
Which produces this result:
However, if I make a small change to the code like this:
colorSample.x = fract(ceil(0.1 + in.texCoord.x * 0.8) * 1000000) + in.texCoord.x;
Then it will produce this result:
If I disable fast math in the second case, it produces the same image as in the first case. It seems that in fast-math mode, a large argument to fract() affects the precision of the other operand in the same expression.
Is this a bug in fast-math mode? How should I circumvent this problem?
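As a workaround (not an answer to whether it's a bug): if the shader is compiled from source at runtime, fast math can be disabled for just the library containing this shader via MTLCompileOptions, leaving the rest of the project unaffected. A minimal sketch, assuming shaderSource holds the affected .metal source text:

import Metal

// shaderSource: the .metal source for the affected shader (assumed).
let device = MTLCreateSystemDefaultDevice()!
let options = MTLCompileOptions()
options.fastMathEnabled = false // newer SDKs expose this as the mathMode option
let library = try device.makeLibrary(source: shaderSource, options: options)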
Help me understand this crash report.
It started happening only after the last update.
Translated Report (Full Report Below)
Process: dota2 [7353]
Path: /Users/USER/Library/Application Support/Steam/*/dota2.app/Contents/MacOS/dota2
Identifier: com.valvesoftware.dota2
Version: 1.0.0
Code Type: X86-64 (Translated)
Parent Process: launchd [1]
User ID: 501
Date/Time: 2024-02-18 18:00:45.9766 -0500
OS Version: macOS 14.3.1 (23D60)
Report Version: 12
Anonymous UUID: 0F5E4D0D-9839-DF78-5C28-93F6D26A5763
Sleep/Wake UUID: 52D18CB1-ADD8-4A75-B6A1-C0CF4CF2A306
Time Awake Since Boot: 85000 seconds
Time Since Wake: 1722 seconds
System Integrity Protection: enabled
Notes:
PC register does not match crashing frame (0x0 vs 0x1032D1C08)
Crashed Thread: 0 MainThrd Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000441f0f660002
Exception Codes: 0x0000000000000001, 0x0000441f0f660002
Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [7353]
VM Region Info: 0x441f0f660002 is not in any region. Bytes after previous region: 48357375344643 Bytes before following region: 65536781844478
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
Memory Tag 255 1823fb340000-1823fb380000 [ 256K] rw-/rwx SM=PRV
---> GAP OF 0x67960cc80000 BYTES
MALLOC_MEDIUM 7fba08000000-7fba10000000 [128.0M] rw-/rwx SM=PRV
Error Formulating Crash Report:
PC register does not match crashing frame (0x0 vs 0x1032D1C08)
Kernel Triage:
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
On startup I'm getting a "We reached more than 3 frames in flight. That's too many. Did you forget to call cp_frame_end_submission()?" error, despite cp_frame_end_submission() being called when needed.
Nothing is rendered in the one frame that does go through. Is there something I'm missing that would cause cp_frame_end_submission() to not register?
Hi, I have an issue with jax.numpy.linalg.inv(a).
import jax.numpy as jnp
import jax.numpy.linalg as jnpl

B = jnp.identity(2)
jnpl.inv(B)
Throws the following error:
XlaRuntimeError: UNKNOWN: /var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: error: failed to legalize operation 'mhlo.triangular_solve'
/var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: called from
/var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: see current operation: %120 = \"mhlo.triangular_solve\"(%42#4, %119) {left_side = true, lower = true, transpose_a = #mhlo<transpose NO_TRANSPOSE>, unit_diagonal = true} : (tensor<2x2xf32>, tensor<2x2xf32>) -> tensor<2x2xf32>
Any ideas what could be the issue or how to solve it?
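A workaround sketch, on the assumption that the accelerated (e.g. Metal) backend simply cannot legalize mhlo.triangular_solve yet: pin the computation to the CPU backend with jax.default_device:

import jax
import jax.numpy as jnp

# Run the inverse on the CPU backend, where triangular_solve is supported.
with jax.default_device(jax.devices("cpu")[0]):
    B = jnp.identity(2)
    print(jnp.linalg.inv(B))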
I'm trying to resize NSImages on macOS. I'm doing so with an extension like this.
extension NSImage {
    // MARK: Resizing
    /// Resize the image to the given size.
    ///
    /// - Parameter size: The size to resize the image to.
    /// - Returns: The resized image.
    func resized(toSize targetSize: NSSize) -> NSImage? {
        let frame = NSRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height)
        guard let representation = self.bestRepresentation(for: frame, context: nil, hints: nil) else {
            return nil
        }
        let image = NSImage(size: targetSize, flipped: false, drawingHandler: { (_) -> Bool in
            return representation.draw(in: frame)
        })
        return image
    }
}
The problem is that, as far as I can tell, the image that comes out of the drawing handler has lost the color profile of the original image rep. I'm testing it with a wide color gamut image, attached.
The color becomes pure red when examining the resulting image.
If this were UIKit, I guess I'd use UIGraphicsImageRenderer and select the right UIGraphicsImageRendererFormat.Range. So I suspect I need to use NSGraphicsContext here to do the rendering, but I can't see what I would set on it to make it use wide color, or how I'd use it.
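For comparison, here is a sketch of one possible approach (unverified, and Display P3 is just one choice of target color space; the source rep's own colorSpace may be the better one): bypass the drawingHandler and draw through a CGContext whose color space is explicitly wide gamut:

import AppKit

extension NSImage {
    /// Sketch: resize by drawing through an explicitly wide-gamut CGContext.
    func resizedP3(to targetSize: NSSize) -> NSImage? {
        guard let cgImage = cgImage(forProposedRect: nil, context: nil, hints: nil),
              let space = CGColorSpace(name: CGColorSpace.displayP3),
              let context = CGContext(data: nil,
                                      width: Int(targetSize.width),
                                      height: Int(targetSize.height),
                                      bitsPerComponent: 8,
                                      bytesPerRow: 0,
                                      space: space,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
        context.interpolationQuality = .high
        context.draw(cgImage, in: CGRect(origin: .zero, size: targetSize))
        guard let scaled = context.makeImage() else { return nil }
        return NSImage(cgImage: scaled, size: targetSize)
    }
}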
Is there a way for an FXPlug to access the source audio?
Or do we need to make an AU plugin, apply it to an audio source (on either a video or an audio track), and feed the info to an FXPlug via shared memory?
Is there an AU plugin that lets external processes "listen" to the audio?
I know I watched this, but it is nowhere to be found on Apple's site or in the Developer app. Nor does the Wayback Machine have it.
http://developer.apple.com/wwdc16/608
Graphics and Games #WWDC16
What’s New in GameplayKit
Session 608
Bruno Sommer Game Technologies Engineer
Sri Nair Game Technologies Engineer
Michael Brennan Game Technologies Engineer
Namaste!
I'm putting together an FCPX effect that is supposed to increase resolution with AI upscaling, but the only way to add resolution is by scaling. The problem is that scaling causes the video to clip.
I want to be able to give a 480p video this "Resolution Upscale" effect and have it output a 720p or 1080p AI-upscaled video; however, neither FxPlug nor Motion effects allow such a thing.
The FxPlug always gets 640x480 input (correct) but also only 640x480 output.
What is the FxPlug code, or the Motion configuration/concept, for upscaling the resolution without affecting the scale? Is there a way to do this in Motion/FxPlug?
Scaling up via the FxPlug effect, but then scaling down in a parent Motion group, doesn't do anything.
Setting the group's 2D Fixed Resolution doesn't output different dimensions; the debug output from the FxPlug continues to say the input and output are 640x480, even when the group is set to a fixed resolution of 1920x1080.
Building a hierarchy of groups with different settings for 2D Fixed Resolution and 3D Flatten does not work either. In these cases, the debug output still says 640x480 for both input and output, so the plug-in isn't aware of the Fixed Resolution change.
Does there need to be a new FxPlug property, via [properties:...], like "kFxPropertyKey_ResolutionChange", and an API for changing the dest image resolution (without changing the dest rect size)?
How do we do this?
Reality Composer Pro has a triplanar projection node that works from provided images. Is there a way to make the triplanar projection take a dynamic material as its input?
Let's say I've created a scene with 3 models inside, side by side. Now, upon user interaction, I'd like to change these models to another model (that is also in the same Reality Composer Pro project). Is that possible? How can one do that?
One way I can think of is to just load all the individual models in RealityView and then toggle their opacity to show/hide them. But this doesn't seem like the right way, for performance/memory reasons.
How do you swap in and out usdz models?
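In case it helps frame the question, here is a minimal sketch of the swap I have in mind on visionOS (realityKitContentBundle comes from the Reality Composer Pro Swift package; the entity name passed in is hypothetical):

import RealityKit
import RealityKitContent

// Replace one loaded model with another from the same
// Reality Composer Pro project, keeping its placement.
func swapModel(_ current: Entity, withModelNamed name: String,
               in content: RealityViewContent) async {
    guard let replacement = try? await Entity(named: name, in: realityKitContentBundle) else {
        return
    }
    replacement.transform = current.transform // preserve position/rotation/scale
    current.removeFromParent()
    content.add(replacement)
}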
Is it possible to use the Metal API on Vision Pro? I noticed that MTKView in my visionOS app is not recognized, and I also noticed other forum posts from months ago saying that MTKView is not yet supported. If it is still not an option, if and when will it be supported?
Also wondering about metal-cpp support as well, since my app involves integrating an existing C++ library with visionOS (see here: https://github.com/MinVR/MinVR).
Is this possible?
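From what I've gathered so far (so treat this as a sketch, not a confirmed answer): on visionOS, Metal rendering goes through Compositor Services rather than MTKView. A CompositorLayer inside an ImmersiveSpace hands you a LayerRenderer to drive your own render loop, which also looks like the natural seam for hooking in a C++ engine via metal-cpp:

import SwiftUI
import CompositorServices

@main
struct MetalImmersiveApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "MetalSpace") {
            CompositorLayer { layerRenderer in
                startRenderLoop(layerRenderer) // hand off to the engine
            }
        }
    }
}

// Hypothetical engine entry point: spin up a render thread, query frames
// from the LayerRenderer, and encode Metal commands for each one.
func startRenderLoop(_ layerRenderer: LayerRenderer) {
    // ...
}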
I'm porting a SceneKit app to RealityKit, eventually offering an AR experience there. I noticed that when I run it on my iPhone 15 Pro and iPad Pro with the 120Hz screen, the frame rate seems to be limited to 60fps. Is there a way to increase the target frame rate to 120, as I can with SceneKit?
I'm setting up my arView like so:
@IBOutlet private var arView: ARView! {
    didSet {
        arView.cameraMode = .nonAR
        arView.debugOptions = [.showStatistics]
    }
}
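One thing worth double-checking in the meantime (this is the general ProMotion opt-in documented for CADisplayLink-driven rendering on iPhone, so I'm not certain it applies to ARView specifically): frame rates above 60Hz on iPhone require this Info.plist key, whereas iPad does not need it:

<key>CADisableMinimumFrameDurationOnPhone</key>
<true/>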
It appears that the Metal debugging interface does not support this method; at least, the function hashing algorithm does not have a pattern for it in the symbol dictionary as presented. Where do we get updated C libraries and functions that sync with what is presented in the demo kits and samples that Apple puts in the user domain? Why does this stuff get out into the wild insufficiently tested? It seems that the demo kits made available to users should be included in the test domain used to verify new code releases. I came from a development environment where the six-month release cycle involved automated execution of the test suite before anything went beta or anywhere else.
I am using RealityKit and the ARView PostProcessContext to get the sourceDepthTexture of the current virtual scene in RealityKit, using the .nonAR camera mode.
My experience with Metal is limited to RealityKit GeometryModifier and SurfaceShader for CustomMaterial, but I am excited to learn more! Having studied the Underwater sample code I have a general idea of how I want to explore the capabilities of a proper post processing pipeline in my RealityKit project, but right now I just want to visualize this MTLTexture to see what the virtual depth of the scene looks like.
Here’s my current approach, trying to create a depth UIImage from the context sourceDepthTexture:
func postProcess(context: ARView.PostProcessContext) {
    let depthTexture = context.sourceDepthTexture
    var uiImage: UIImage? // or CGImage/CIImage
    if processPost {
        print("#P Process: Post Process BLIT")
        // UIImage from MTLTexture
        uiImage = try? createUIImage(from: depthTexture)
        let blitEncoder = context.commandBuffer.makeBlitCommandEncoder()
        blitEncoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blitEncoder?.endEncoding()
        getPostProcessed()
    }
    else {
        print("#P No Process: Pass-Through")
        let blitEncoder = context.commandBuffer.makeBlitCommandEncoder()
        blitEncoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blitEncoder?.endEncoding()
    }
}
func createUIImage(from metalTexture: MTLTexture) throws -> UIImage {
    guard let device = MTLCreateSystemDefaultDevice() else { throw CIMError.noDefaultDevice }
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .depth32Float_stencil8,
        width: metalTexture.width,
        height: metalTexture.height,
        mipmapped: false)
    descriptor.usage = [.shaderWrite, .shaderRead]
    guard let texture = device.makeTexture(descriptor: descriptor) else {
        throw NSError(domain: "Failed to create Metal texture", code: -1, userInfo: nil)
    }
    // Blit!
    let commandQueue = device.makeCommandQueue()
    let commandBuffer = commandQueue?.makeCommandBuffer()
    let blitEncoder = commandBuffer?.makeBlitCommandEncoder()
    blitEncoder?.copy(from: metalTexture, to: texture)
    blitEncoder?.endEncoding()
    commandBuffer?.commit()
    commandBuffer?.waitUntilCompleted() // make sure the blit is done before reading bytes
    // Raw pixel bytes
    let bytesPerRow = 4 * texture.width
    let dataSize = texture.height * bytesPerRow
    var bytes = [UInt8](repeating: 0, count: dataSize)
    //var depthData = [Float](repeating: 0, count: dataSize)
    bytes.withUnsafeMutableBytes { bytesPtr in
        texture.getBytes(
            bytesPtr.baseAddress!,
            bytesPerRow: bytesPerRow,
            from: .init(origin: .init(), size: .init(width: texture.width, height: texture.height, depth: 1)),
            mipmapLevel: 0
        )
    }
    // CGDataProvider from the raw bytes
    let dataProvider = CGDataProvider(data: Data(bytes: bytes, count: bytes.count) as CFData)
    // CGImage from the data provider
    let cgImage = CGImage(width: texture.width,
                          height: texture.height,
                          bitsPerComponent: 8,
                          bitsPerPixel: 32,
                          bytesPerRow: bytesPerRow,
                          space: CGColorSpaceCreateDeviceRGB(),
                          bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                          provider: dataProvider!,
                          decode: nil,
                          shouldInterpolate: true,
                          intent: .defaultIntent)
    // Return as UIImage
    return UIImage(cgImage: cgImage!)
}
I have hacked together the ‘createUIImage’ function with generative aid and online research to provide some visual feedback, but it looks like I am converting the depth values incorrectly — or somehow tapping into the stencil component of the pixels in the texture.
Either way I am out of my depth, and would love some help.
Ideally, I would like to produce a grayscale depth image, but really any guidance on how I can visualize the depth would be greatly appreciated.
As you can see from the magnified view on the right, there are some artifacts or pixels that are processed differently than the core stencil. The empty background is transparent in the image as expected.
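For anyone comparing notes, here is the direction I'm exploring next: reading the texture's contents as 32-bit floats instead of 8-bit RGBA and mapping them to a grayscale CGImage. This sketch assumes the depth data is readable as .depth32Float and that the texture's bytes are CPU-accessible (it may first need a blit into a shared-storage texture); the per-frame normalization is a guess:

import UIKit
import Metal

// Sketch: visualize a .depth32Float texture as an 8-bit grayscale UIImage.
func depthImage(from depthTexture: MTLTexture) -> UIImage? {
    let width = depthTexture.width
    let height = depthTexture.height
    var depths = [Float](repeating: 0, count: width * height)
    depths.withUnsafeMutableBytes { ptr in
        depthTexture.getBytes(ptr.baseAddress!,
                              bytesPerRow: width * MemoryLayout<Float>.stride,
                              from: MTLRegionMake2D(0, 0, width, height),
                              mipmapLevel: 0)
    }
    // Normalize to 0...255 across this frame. Depending on the projection,
    // depth may be reverse-Z (nearer = larger); flip if near should be dark.
    let minD = depths.min() ?? 0
    let maxD = depths.max() ?? 1
    let range = max(maxD - minD, .ulpOfOne)
    let gray = depths.map { UInt8(255 * ($0 - minD) / range) }
    guard let provider = CGDataProvider(data: Data(gray) as CFData),
          let cgImage = CGImage(width: width, height: height,
                                bitsPerComponent: 8, bitsPerPixel: 8,
                                bytesPerRow: width,
                                space: CGColorSpaceCreateDeviceGray(),
                                bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                                provider: provider, decode: nil,
                                shouldInterpolate: false, intent: .defaultIntent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}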
I would like to know if the applications/games targeting the Metal 3 API will be fully compatible with the M1 Pro GPU. Thanks.
I'm testing all of the existing mapping SDKs from Unity via the PolySpatial workflow to see if any of them work on the Vision Pro. The ArcGIS and Bing SDKs both play successfully in the Editor and build successfully from Unity, but they both hit the same errors when building in Xcode (captured in the attached screenshot). Is this a common error in Xcode? I can't find much on it. Thanks!
Hello, I have a crash in the Metal framework under Sonoma 14.4 public beta on a Mac Mini M1 2020:
Thread 1 crashed with ARM Thread State (64-bit):
x0: 0x0000000000000000 x1: 0x0000000000000000 x2: 0x0000000000000000 x3: 0x0000000000000000
x4: 0x0000000000000000 x5: 0x0000000000000000 x6: 0x0000000000000000 x7: 0x0000000000000000
x8: 0x17c2770b7ca20001 x9: 0x17c2770b7ca20001 x10: 0x0000000000000025 x11: 0x0000000000000001
x12: 0x000000016bb555b2 x13: 0x0000000000000000 x14: 0x0000000104acc7e9 x15: 0x0000000207c5c5b0
x16: 0xfffffffffffffff4 x17: 0x0000000211f42c48 x18: 0x0000000000000000 x19: 0x000000016bb55898
x20: 0x0000600002901180 x21: 0x0000600003cd0e20 x22: 0x0000000000000003 x23: 0x0000000277b7e040
x24: 0x00000000000002ec x25: 0x0000000000000001 x26: 0x0000000000000000 x27: 0x0000000000000000
x28: 0x0000000207c96b50 fp: 0x000000016bb55880 lr: 0x2d648001a439d394
sp: 0x000000016bb557b0 pc: 0x00000001a439d394 cpsr: 0x60001000
far: 0x0000000000000000 esr: 0xf2000001 (Breakpoint) brk 1
Binary Images:
0x139c00000 - 0x139c6bfff com.apple.AppleMetalOpenGLRenderer (1.0) <8b69c871-19c2-3d46-b8de-8dbc62e532cd> /System/Library/Extensions/AppleMetalOpenGLRenderer.bundle/Contents/MacOS/AppleMetalOpenGLRenderer
0x109b74000 - 0x109baffff libjogl_mobile.dylib () <9c3ef505-8828-36ab-a776-5ffdb9d4cd79> /Applications/scilab-2024.0.0.app/Contents/lib/thirdparty/libjogl_mobile.dylib
0x13b494000 - 0x13b50ffff libjogl_desktop.dylib () <543b42ae-90a4-325c-8850-84951b1fa6ee> /Applications/scilab-2024.0.0.app/Contents/lib/thirdparty/libjogl_desktop.dylib
0x108588000 - 0x10858ffff libnativewindow_macosx.dylib (*) <2c256988-735b-38b7-9712-0bfc58c3ff90> /Applications/scilab-2024.0.0.app/Contents/lib/thirdparty/libnativewindow_macosx.dylib
How can I get rid of it?
S.