Hello Apple community,
I hope this message finds you well. I'm writing to report an issue that I've encountered after upgrading my iPad to iPadOS 17. The problem seems to be related to the Quick Look AR application, which I use extensively for 3D modeling and visualization.
Prior to the upgrade, everything was working perfectly fine. I create 3D models in Reality Composer and export them as USDZ files for use with Quick Look AR. However, after the upgrade to iPadOS 17, I've noticed a rather troubling issue.
Problem Description:
When I view my 3D models using Quick Look AR on iPadOS 17, some of the 3D models exhibit a peculiar problem. Instead of displaying the correct textures, they show a bright pink texture in their place. This issue occurs only when I have subsequent scenes added to the initial scene. Strangely, the very first scene in the sequence displays the textures correctly.
Steps to Reproduce:
Create a 3D model in Reality Composer.
Export the model as a USDZ file.
Open the USDZ file using Quick Look AR.
Observe that the textures appear correctly on the initial scene.
Add additional scenes to the model.
Navigate to the subsequent scenes.
Notice that some of the 3D models display a pink texture instead of the correct textures (see picture).
Expected Behavior:
The 3D models should consistently display their textures, even when multiple scenes are added to the scene sequence.
Workaround:
As of now, there doesn't seem to be a viable workaround for this issue, which is quite problematic for my work in 3D modeling and visualization.
I would greatly appreciate any insights, solutions, or workarounds that the community might have for this problem. Additionally, I would like to know if others are experiencing the same issue after upgrading to iPadOS 17. This information could be helpful for both users and Apple in addressing this problem.
Thank you for your attention to this matter, and I look forward to hearing from the community and hopefully finding a resolution to this Quick Look AR issue.
Best regards
Hello - I have been struggling to find a solution online and I hope you can help me in a timely manner. I have installed the latest tensorflow and tensorflow-metal; I even went as far as installing tensorflow-nightly. My app generates the following as a result of my fit function on a CNN model with 8 layers.
2023-09-29 22:21:06.115768: I metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M1 Pro
2023-09-29 22:21:06.115846: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 16.00 GB
2023-09-29 22:21:06.116048: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 5.33 GB
2023-09-29 22:21:06.116264: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2023-09-29 22:21:06.116483: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
Most importantly, the learning process is very slow, and I'd like to take advantage of all the new features of the latest versions. What can I do?
I only get this error when using the JAX Metal device (CPU is fine). It seems to be a problem whenever I want to modify values of an array in-place using at and set.
note: see current operation:
%2903 = "mhlo.scatter"(%arg3, %2902, %2893) ({
^bb0(%arg4: tensor<f32>, %arg5: tensor<f32>):
"mhlo.return"(%arg5) : (tensor<f32>) -> ()
}) {indices_are_sorted = true, scatter_dimension_numbers = #mhlo.scatter<update_window_dims = [0, 1], inserted_window_dims = [1], scatter_dims_to_operand_dims = [1]>, unique_indices = true} : (tensor<10x100x4xf32>, tensor<1xsi32>, tensor<10x4xf32>) -> tensor<10x100x4xf32>
blocks = blocks.at[i].set(
...
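For anyone reasoning about what that scatter actually does: under my reading of the dimension numbers in the `mhlo.scatter` above (an assumption on my part), the op is a dense slice assignment along axis 1 of a (10, 100, 4) array. A pure-Python sketch of that functional update (`scatter_set` is a hypothetical helper, not a JAX API):

```python
# Pure-Python sketch of the update the scatter performs: given an
# operand of shape [B][N][C], one index i into axis 1, and an update of
# shape [B][C], produce a NEW array with operand[:, i, :] replaced.
# The original is left untouched, matching JAX's functional
# .at[...].set(...) semantics.

def scatter_set(operand, i, update):
    return [
        [list(update[b]) if n == i else list(row)
         for n, row in enumerate(operand[b])]
        for b in range(len(operand))
    ]

# Tiny example with B=2, N=3, C=2 instead of 10x100x4.
blocks = [[[0.0, 0.0] for _ in range(3)] for _ in range(2)]
update = [[1.0, 2.0], [3.0, 4.0]]
blocks2 = scatter_set(blocks, 1, update)
print(blocks2[0][1])  # [1.0, 2.0]
print(blocks[0][1])   # still [0.0, 0.0]
```

The indexing code itself looks fine; the failure appears to be in how the Metal device lowers the resulting scatter, so a possible workaround is to restructure the computation to avoid the scatter (e.g. building the array with a mask or concatenation) until the plugin supports it.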
Warning: apple/apple/game-porting-toolkit 1.0.4 is already installed and up-to-date.
To reinstall 1.0.4, run:
brew reinstall game-porting-toolkit
dmitrxx@MacBook-Pro-Dima ~ % WINEPREFIX=~/Win10 brew --prefix game-porting-toolkit/bin/wine64 winecfg
Error: undefined method __prefix' for Homebrew:Module
Please report this issue: https://docs.brew.sh/Troubleshooting
/usr/local/Homebrew/Library/Homebrew/brew.rb:86:in '
zsh: no such file or directory: /bin/wine64
dmitrxx@MacBook-Pro-Dima ~ %
We transferred our application using the guide here:
https://developer.apple.com/help/app-store-connect/transfer-an-app/overview-of-app-transfer/
We use Game Center to identify users via playerID: https://developer.apple.com/documentation/gamekit/gkplayer/1521127-playerid
After transferring the application, the playerID for our current users changed, so they are unable to log in.
How can we restore the playerID for our users?
For "Sign in with Apple" there is a migration process, so there are no issues there. Is there something like that for Game Center?
I am setting up a profile on my PS5 controller to remap the buttons to play Roblox, but it either doesn't change anything or, worse, makes the controller malfunction.
How can I obtain the coordinates of the default camera in a RealityView scene? Is it possible to manipulate the transform of the default camera, or to replace it and manipulate another PerspectiveCamera() instead?
How can I draw a CapturedRoom.Surface.Curve in SceneKit? Is there a way to do it
using a UIBezierPath, or by splitting it up into segments?
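On the splitting-into-segments idea: one approach is to flatten the curve into a polyline and build SceneKit geometry from the segments. A minimal Python sketch of the sampling step, assuming (hypothetically) the curve can be evaluated as a cubic Bezier from four 2D control points; the math ports directly to Swift:

```python
# Sample a cubic Bezier into a polyline. Each consecutive pair of
# points in the result can become one line segment of SceneKit geometry.

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(
        u*u*u*a + 3*u*u*t*b + 3*u*t*t*c + t*t*t*d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def flatten(p0, p1, p2, p3, segments=16):
    """Sample the curve into segments+1 polyline points."""
    return [cubic_bezier(p0, p1, p2, p3, i / segments)
            for i in range(segments + 1)]

pts = flatten((0, 0), (0, 1), (1, 1), (1, 0), segments=4)
print(pts[0])   # (0.0, 0.0) -- starts at p0
print(pts[-1])  # (1.0, 0.0) -- ends at p3
```

More segments give a smoother curve at the cost of more geometry; something in the range of 16 to 32 is often visually sufficient.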
The error I get with visionOS simulator:
cannot migrate AudioUnit assets for current process
code:
guard let resource = try? AudioFileGroupResource.load(
    named: "/Root/AudioGroupDropStone",
    from: "Scene.usda",
    in: realityKitContentBundle
)
Any ideas how to debug this?
The audio files seem to work fine in Reality Composer Pro.
The release notes for Xcode 14 mention a new AppleTextureConverter library.
https://developer.apple.com/documentation/xcode-release-notes/xcode-14-release-notes
TextureConverter 2.0 adds support for decompressing textures, advanced texture error metrics, and support for reading and writing KTX2 files.
The new AppleTextureConverter library makes TextureConverter available for integration into third-party engines and tools. (82244472)
Does anyone know how to include this library into a project and use it at runtime?
Hi,
I've been working on a spatial image design, guided by this Apple developer video:
https://developer.apple.com/videos/play/wwdc2023/10081?time=792.
I've hit a snag: I'm trying to position a label to the left of the portal. Although I've used an attachment for the label within the content, pinpointing the exact starting position of the portal to align the label is proving difficult.
Any insights or suggestions would be appreciated.
Below is the URL of the image used:
https://cdn.polyhaven.com/asset_img/primary/rural_asphalt_road.png?height=780
struct PortalView: View {
    let radius = Float(0.3)
    var world = Entity()
    var portal = Entity()

    init() {
        world = makeWorld()
        portal = makePortal(world: world)
    }

    var body: some View {
        RealityView { content, attachments in
            content.add(world)
            content.add(portal)
            if let attachment = attachments.entity(for: 0) {
                portal.addChild(attachment)
                attachment.position.x = -radius/2.0
                attachment.position.y = radius/2.0
            }
        } attachments: {
            Attachment(id: 0) {
                Text("Title")
                    .background(Color.red)
            }
        }
    }

    func makeWorld() -> Entity {
        let world = Entity()
        world.components[WorldComponent.self] = .init()

        let imageEntity = Entity()
        var material = UnlitMaterial()
        let texture = try! TextureResource.load(named: "road")
        material.color = .init(texture: .init(texture))
        imageEntity.components.set(
            ModelComponent(mesh: .generateSphere(radius: radius), materials: [material])
        )
        imageEntity.position = .zero
        imageEntity.scale = .init(x: -1, y: 1, z: 1)
        world.addChild(imageEntity)
        return world
    }

    func makePortal(world: Entity) -> Entity {
        let portal = Entity()
        let portalMaterial = PortalMaterial()
        let planeMesh = MeshResource.generatePlane(width: radius, height: radius, cornerRadius: 0)
        portal.components[ModelComponent.self] = .init(mesh: planeMesh, materials: [portalMaterial])
        portal.components[PortalComponent.self] = .init(
            target: world
        )
        return portal
    }
}

#Preview {
    PortalView()
}
I'm developing a drawing app. I use MTKView to render the canvas. But for some reason and for only a few users, the pixels are not rendered correctly (pixels have different sizes), the majority of users have no problem with this. Here is my setup:
Each pixel is rendered as 2 triangles
MTKView's frame dimensions are always multiple of the canvas size (a 100x100 canvas will have the frame size of 100x100, 200x200, and so on)
There is a grid to indicate pixels (it's a SwiftUI Path) which displays correctly, and we can see that it doesn't align with the rendered pixels.
There is also a checkerboard pattern in the background, rendered using another MTKView, which lines up with the pixels but not the grid.
Previously, I had a similar issue when my view's frame was not a multiple of the canvas size, but I fixed that with the setup above.
The issue worsens when the number of points representing a pixel of the canvas becomes smaller. E.g. a 100x100 canvas on a 100x100 view is worse than a 100x100 canvas on a 500x500 view
The vertices have accurate coordinates, this is a rendering issue. As you can see in the picture, some pixels are bigger than others.
I tried changing the contentScaleFactor to 1, 2, and 3 but none seems to solve the problem.
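A plausible way to see how uneven pixel sizes arise (a sketch, not a diagnosis): if the drawable size in device pixels is not an exact integer multiple of the canvas size, for example because the frame is a multiple in points but contentScaleFactor is fractional, then each canvas pixel's edges snap to different device-pixel boundaries and the rounded widths alternate between two values:

```python
# Each canvas pixel i covers [i*S/N, (i+1)*S/N) in device pixels.
# When S/N is not an integer, snapping the cell boundaries to the
# device-pixel grid produces two different widths.

def pixel_widths(canvas_size, drawable_size):
    """Width in device pixels of each canvas pixel after snapping
    cell boundaries to the device-pixel grid."""
    edges = [round(i * drawable_size / canvas_size)
             for i in range(canvas_size + 1)]
    return [edges[i + 1] - edges[i] for i in range(canvas_size)]

print(set(pixel_widths(100, 200)))  # exact multiple: all widths equal
print(set(pixel_widths(100, 250)))  # non-multiple: widths alternate 2 and 3
```

Comparing the MTKView's drawableSize (in device pixels) against the canvas size, rather than the frame in points, would confirm or rule this out on the affected users' devices.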
My MTKView setup:
clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
delegate = renderer
renderer.setup()
isOpaque = false
layer.magnificationFilter = .nearest
layer.minificationFilter = .nearest
Renderer's setup:
let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.vertexFunction = vertexFunction
pipelineDescriptor.fragmentFunction = fragmentFunction
pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineState = try? device.makeRenderPipelineState(descriptor: pipelineDescriptor)
Draw method of renderer:
commandEncoder.setRenderPipelineState(pipelineState)
commandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
commandEncoder.setVertexBuffer(colorBuffer, offset: 0, index: 1)
commandEncoder.drawIndexedPrimitives(
    type: .triangle,
    indexCount: indexCount,
    indexType: .uint32,
    indexBuffer: indexBuffer,
    indexBufferOffset: 0
)
commandEncoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()
Metal file:
struct VertexOut {
    float4 position [[ position ]];
    half4 color;
};

vertex VertexOut frame_vertex(constant const float2* vertices [[ buffer(0) ]],
                              constant const half4* colors [[ buffer(1) ]],
                              uint v_id [[ vertex_id ]]) {
    VertexOut out;
    out.position = float4(vertices[v_id], 0, 1);
    out.color = colors[v_id / 4];
    return out;
}

fragment half4 frame_fragment(VertexOut v [[ stage_in ]]) {
    half alpha = v.color.a;
    return half4(v.color.r * alpha, v.color.g * alpha, v.color.b * alpha, v.color.a);
}
I've been writing object/mesh shaders using an MTLRenderPipelineState built from an MTLMeshRenderPipelineDescriptor with visible function tables. However, it seems that some functionality present on the other MTL*RenderPipelineDescriptor types is missing. Namely, it lacks max<Stage>CallStackDepth() and setSupportAdding<Stage>BinaryFunctions().
The latter isn't too bad: I can always rebuild the pipeline states from scratch if I want to add new linked functions.
However, not being able to set the max call stack depth is limiting. I assume that means I only get a depth of 1, as that is the default value for the other descriptor types. In practice it seems that I can go up to 2 with the functions I'm using before I start getting kIOGPUCommandBufferCallbackErrorSubmissionsIgnored errors due to "prior/excessive GPU errors".
I'm curious whether the lack of this functionality on MTLMeshRenderPipelineDescriptor is a simple oversight. In my case I am only using VFTs and linked functions in the fragment stage. I suspect it should be possible, since the other render pipeline descriptor types expose max call depths and adding binary functions for the fragment stage.
FWIW I'm also using Metal-CPP (which is fantastic!) but I don't see this functionality in the Swift/Obj-C docs either.
Hi, I am developing a Metal renderer, and the Metal debugger gives me an error when I try to debug a fragment shader. This has been happening since I updated to Sonoma and Xcode 15; before that, everything was working fine.
I also want to mention that I have tried Apple's DeferredLighting demo project and it gives the same error, so it's not my project's fault.
Device: MacBook Pro 16" 2019, 5300M
macOS: 14.0
Xcode: 15.0
Error:
Unable to create shader debug session
Thread data is corrupt
GPUDebugger error - 15A240c - DYPShaderDebuggerDataErrorDomain (2): Thread data is corrupt
=== GPUDebugger Item ===
API Call: 16 [drawIndexedPrimitives:Triangle indexCount:26652 indexType:UInt32 indexBuffer:MDL_OBJ-Indices indexBufferOffset:0]
Resource: fragment_main
See attached for console output after launch. Search for "err".
Once you get to the point where Origin can launch successfully, the game will attempt to launch for a second or two and then close.
M1 MacBook Air running 14.0 (23A344), using Whisky to handle the creation and linking of bottles.
0754: thread_get_state failed on Apple Silicon - faking zero debug registers
0750:err:d3d:wined3d_check_gl_call >>>>>>> GL_INVALID_FRAMEBUFFER_OPERATION (0x506) from glClear @ /private/tmp/game-porting-toolkit-20231007-39251-eze8n5/wine/dlls/wined3d/context_gl.c / 2330.
06a0:fixme:d3d:wined3d_check_device_format_conversion output 0079CCB0, device_type WINED3D_DEVICE_TYPE_HAL, src_format WINED3DFMT_B8G8R8X8_UNORM, dst_format WINED3DFMT_B8G8R8X8_UNORM stub!
0758: thread_get_state failed on Apple Silicon - faking zero debug registers
0758:err:d3d:wined3d_check_gl_call >>>>>>> GL_INVALID_FRAMEBUFFER_OPERATION (0x506) from glClear @ /private/tmp/game-porting-toolkit-20231007-39251-eze8n5/wine/dlls/wined3d/context_gl.c / 2330.
0760: thread_get_state failed on Apple Silicon - faking zero debug registers
0764: thread_get_state failed on Apple Silicon - faking zero debug registers
0768: thread_get_state failed on Apple Silicon - faking zero debug registers
076c: thread_get_state failed on Apple Silicon - faking zero debug registers
0770: thread_get_state failed on Apple Silicon - faking zero debug registers
0688:fixme:kernelbase:AppPolicyGetProcessTerminationMethod FFFFFFFA, 0012FEB8
wine: Unhandled page fault on read access to 0000000000000000 at address 0000000000000000 (thread 06fc), starting debugger...
06a0:fixme:kernelbase:AppPolicyGetProcessTerminationMethod FFFFFFFA, 0012FEB8
wine: Unhandled page fault on read access to 0000000000000000 at address 0000000140003035 (thread 070c), starting debugger...
067c:fixme:file:ReplaceFileW Ignoring flags 2
0798: thread_get_state failed on Apple Silicon - faking zero debug registers
0784:fixme:imm:ImeSetActiveContext (0x36b920, 1): stub
0784:fixme:imm:ImmReleaseContext (0000000000030276, 000000000036B920): stub
0794:fixme:imm:ImeSetActiveContext (0x36b920, 1): stub
0794:fixme:imm:ImmReleaseContext (00000000000202DA, 000000000036B920): stub
0640:fixme:cryptnet:check_ocsp_response_info check responder id
07a0: thread_get_state failed on Apple Silicon - faking zero debug registers
0640:fixme:cryptnet:check_ocsp_response_info check responder id
0778:fixme:imm:ImeSetActiveContext (0x36ed90, 1): stub
0778:fixme:imm:ImmReleaseContext (000000000001030E, 000000000036ED90): stub
07a8: thread_get_state failed on Apple Silicon - faking zero debug registers
07ac: thread_get_state failed on Apple Silicon - faking zero debug registers
078c:fixme:imm:ImeSetActiveContext (0x36ed90, 1): stub
078c:fixme:imm:ImmReleaseContext (0000000000010330, 000000000036ED90): stub
06f0:fixme:d3d:wined3d_guess_card_vendor Received unrecognized GL_VENDOR "Apple". Returning HW_VENDOR_NVIDIA.
06f0:fixme:ntdll:NtQuerySystemInformation info_class SYSTEM_PERFORMANCE_INFORMATION
06f0:fixme:d3d:wined3d_check_device_format_conversion output 000000000027DC30, device_type WINED3D_DEVICE_TYPE_HAL, src_format WINED3DFMT_B8G8R8X8_UNORM, dst_format WINED3DFMT_B8G8R8X8_UNORM stub!
07b0:err:d3d:wined3d_check_gl_call >>>>>>> GL_INVALID_FRAMEBUFFER_OPERATION (0x506) from glClear @ /private/tmp/game-porting-toolkit-20231007-39251-eze8n5/wine/dlls/wined3d/context_gl.c / 2330.
steam-origin-launch.txt
I have a Blender project, for simplicity a black hole. The way it is modeled is a sphere on top of a round plane, with a bunch of effects on top of that.
I have tried multiple ways:
convert to USD from the file menu
convert to obj and then import
But all of them have resulted in just the body, without any of the effects.
Does anybody know how to do this properly? My only remaining idea is to go through Reality Converter Pro (which I planned on using already), but that would mean modeling it there.
Surface screen position
Does it return the model's vertices' XYZ positions, normalized?
The node graph needs more tutorials and explanations; I've made zero progress.
Hi all
So I'm quite new to game dev and am struggling a bit with the tilemap.
All my elements are 64x64. As you can see in my screenshot, there is a gap between the street and the water. It might be simple, but what's the best way to fix that gap? I could increase the width of the left and right edge PNGs, but then sooner or later I'll run into other problems, as they would no longer fit with the rest.
Thanks for your help
Cheers from Switzerland
The Apple documentation seems to say RealityKit should obey the autoplay metadata, but it doesn't seem to work. Is the problem with my (hand coded) USDA files, the Swift, or something else? Thanks in advance.
I can make the animations run with an explicit call to run, but what have I done wrong to get the one cube to autoplay?
https://github.com/carlynorama/ExploreVisionPro_AnimationTests
import SwiftUI
import RealityKit
import RealityKitContent
struct ContentView: View {
    @State var enlarge = false

    var body: some View {
        VStack {
            // A ModelEntity, not expected to autoplay
            Model3D(named: "cube_purple_autoplay", bundle: realityKitContentBundle)

            // An Entity, actually expected this to autoplay
            RealityView { content in
                if let cube = try? await Entity(named: "cube_purple_autoplay", in: realityKitContentBundle) {
                    print(cube.components)
                    content.add(cube)
                }
            }

            // Scene has one cube that should auto play, one that should not.
            // Neither do, but both will start (as expected) with click.
            RealityView { content in
                // Add the initial RealityKit content
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    content.add(scene)
                }
            } update: { content in
                // Update the RealityKit content when SwiftUI state changes
                if let scene = content.entities.first {
                    if enlarge {
                        for animation in scene.availableAnimations {
                            scene.playAnimation(animation.repeat())
                        }
                    } else {
                        scene.stopAllAnimations()
                    }
                    let uniformScale: Float = enlarge ? 1.4 : 1.0
                    scene.transform.scale = [uniformScale, uniformScale, uniformScale]
                }
            }
            .gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
                enlarge.toggle()
            })

            VStack {
                Toggle("Enlarge RealityView Content", isOn: $enlarge)
                    .toggleStyle(.button)
            }.padding().glassBackgroundEffect()
        }
    }
}
No autoplay metadata
#usda 1.0
(
    defaultPrim = "transformAnimation"
    endTimeCode = 89
    startTimeCode = 0
    timeCodesPerSecond = 24
    upAxis = "Y"
)

def Xform "transformAnimation" ()
{
    def Scope "Geom"
    {
        def Xform "xform1"
        {
            float xformOp:rotateY.timeSamples = {
                ...
            }
            double3 xformOp:translate = (0, 0, 0)
            uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateY"]

            over "cube_1" (
                prepend references = @./cube_base_with_purple_linked.usd@
            )
            {
                double3 xformOp:translate = (0, 0, 0)
                uniform token[] xformOpOrder = ["xformOp:translate"]
            }
        }
    }
}
With autoplay metadata
#usda 1.0
(
    defaultPrim = "autoAnimation"
    endTimeCode = 89
    startTimeCode = 0
    timeCodesPerSecond = 24
    autoPlay = true
    playbackMode = "loop"
    upAxis = "Y"
)

def Xform "autoAnimation"
{
    def Scope "Geom"
    {
        def Xform "xform1"
        {
            float xformOp:rotateY.timeSamples = {
                ...
            }
            double3 xformOp:translate = (0, 0, 0)
            uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateY"]

            over "cube_1" (
                prepend references = @./cube_base_with_purple_linked.usd@
            )
            {
                quatf xformOp:orient = (1, 0, 0, 0)
                float3 xformOp:scale = (2, 2, 2)
                double3 xformOp:translate = (0, 0, 0)
                uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:orient", "xformOp:scale"]
            }
        }
    }
}
How can I take the contents (i.e. the stroke and fill) of a CAShapeLayer and draw it into an MTLTexture, which can then be displayed with a normal vertex/fragment shader?