Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and share resources for game developers.
When I distribute an app that includes the Apple Unity plug-in, I get an asset validation error:
The bundle 'Payload/{app name}/Frameworks/AppleCoreNative.framework' is missing plist key. The Info.plist file is missing the required key: CFBundleShortVersionString.
Does anyone get this?
Is it possible to set header include paths when compiling a Metal library from source at runtime?
I'm dynamically generating kernel source that I compile at runtime. I want to include some functions I have defined in a header. When compiling offline, I can specify include paths with -I /path/to/include on the command line:
xcrun metal -c -I /path/to/include example.metal -o example.air
However, when compiling online at runtime, it doesn't appear possible to define a header search path:
MTLCompileOptions *options = [MTLCompileOptions new];
options.fastMathEnabled = NO;
library = [device newLibraryWithSource:kernel_source
options:options
error:&error];
Is there a way to specify a header search path using newLibraryWithSource?
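As far as I can tell, MTLCompileOptions has no header search path option, so one common workaround is to read the header contents at runtime and prepend them to the generated source before compiling. A minimal Swift sketch of that idea; the header path and kernel string below are placeholders, not part of the original post:

import Metal

// Hypothetical header path and generated kernel source, for illustration only.
let headerURL = URL(fileURLWithPath: "/path/to/include/kernels_common.h")
let kernelSource = """
kernel void example(device float *buf [[buffer(0)]],
                    uint id [[thread_position_in_grid]]) { buf[id] *= 2.0f; }
"""

func makeLibrary(device: MTLDevice) throws -> MTLLibrary {
    // Read the header and splice it in where the #include would have been.
    let headerSource = try String(contentsOf: headerURL, encoding: .utf8)
    let fullSource = headerSource + "\n" + kernelSource

    let options = MTLCompileOptions()
    options.fastMathEnabled = false
    return try device.makeLibrary(source: fullSource, options: options)
}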
Where do I start with this error? I am using the Metal Debugger and have a bunch of stuck command buffers. How do I look at the command buffers to see the errors? My suspicion is that the cause is some sort of memory leak. Not having access to the source for Metal leaves me stuck. The following message shows up in the logging pane during execution.
Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
Type: Error | Timestamp: 2024-04-11 14:16:13.464336-05:00 | Library: Metal | Subsystem: Metal | Category: Default | TID: 0x2a0b8c
I just need some guidance
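One way to get more detail before reaching for the debugger is to opt into extended command buffer error information and inspect the error in a completion handler. A minimal sketch, assuming you control where the command buffers are created:

import Metal

func makeCommandBufferWithDiagnostics(queue: MTLCommandQueue) -> MTLCommandBuffer? {
    // Ask Metal to track per-encoder execution status (adds a small runtime cost).
    let descriptor = MTLCommandBufferDescriptor()
    descriptor.errorOptions = .encoderExecutionStatus

    guard let commandBuffer = queue.makeCommandBuffer(descriptor: descriptor) else { return nil }

    commandBuffer.addCompletedHandler { buffer in
        if let error = buffer.error as NSError? {
            print("Command buffer failed: \(error)")
            // Each encoder info reports which encoder faulted, completed, or was aborted.
            if let infos = error.userInfo[MTLCommandBufferEncoderInfoErrorKey]
                as? [MTLCommandBufferEncoderInfo] {
                for info in infos {
                    print("\(info.label): \(info.errorState)")
                }
            }
        }
    }
    return commandBuffer
}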
I'm brand new to Metal. I've googled, but can't get the right answer to come up. (Thanks, unhelpful ChatGPT generated answers polluting everything, but I digress...)
Ultimately, I'm trying to figure out how to use Metal to render 3D DICOM data on iOS specifically. If you're not familiar with DICOM, let's just say I've got a whole stack of CT image slices. Or to get really simple, I've got a cube of voxel values with differing values at each voxel coordinate.
Where do I even start in Metal to render something like this?
(I was trying to get the VTK toolkit compiled for iOS, which uses OpenGL, but that appears to be a dead end. And besides, Metal is supposed to be so much better.)
Thanks for any tips/leads/suggestions/general pointers.
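One possible starting point: volume data like a CT stack is typically uploaded as a 3D texture and then ray-marched in a fragment or compute shader. A minimal sketch of just the upload step, assuming 8-bit voxel values; the function and parameter names are illustrative only:

import Metal

// Upload a width x height x depth block of 8-bit voxel values as a 3D texture.
func makeVolumeTexture(device: MTLDevice,
                       voxels: [UInt8],
                       width: Int, height: Int, depth: Int) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor()
    descriptor.textureType = .type3D
    descriptor.pixelFormat = .r8Unorm      // one 8-bit channel per voxel (assumption)
    descriptor.width = width
    descriptor.height = height
    descriptor.depth = depth
    descriptor.usage = .shaderRead

    guard let texture = device.makeTexture(descriptor: descriptor) else { return nil }

    voxels.withUnsafeBytes { bytes in
        texture.replace(region: MTLRegionMake3D(0, 0, 0, width, height, depth),
                        mipmapLevel: 0,
                        slice: 0,
                        withBytes: bytes.baseAddress!,
                        bytesPerRow: width,
                        bytesPerImage: width * height)
    }
    return texture
}

From there, a shader can sample the texture along rays through the volume (ray casting) or composite slices; which approach fits best depends on the transfer function and performance needs.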
I wanted to show the progress of a certain part of the game using an entity that looks like a pie chart: basically a cylinder with a cut-out. As the progress changes (0-100), the entity fills in.
Is there a way to create this kind of model entity? I know there are ways to animate entities and warp them between meshes, but I was wondering if somebody knows how to achieve this in the simplest way possible.
Maybe some kind of custom shader that would just change how the material is rendered? I don't need its physics body, just to show it.
I know how to do this in UIKit and the classic 2D Apple UI frameworks, but working with model entities it gets a bit tricky for me.
Here is an example of how it would look; the examples are in 2D, but you can imagine them being 3D cylinders with a cut-out.
Thank you!
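One simple approach that avoids custom shaders is to regenerate the entity's mesh whenever the progress changes, building a pie segment as a triangle fan with RealityKit's MeshDescriptor. A minimal, flat sketch of the idea (extruding it into a cylinder is left out); all names and sizes are illustrative:

import Foundation
import RealityKit

// Build a flat pie-slice mesh covering `progress` (0...1) of a full circle.
func pieSliceMesh(progress: Float, radius: Float = 0.1, segments: Int = 64) throws -> MeshResource {
    let sweep = progress * 2 * .pi
    let steps = max(1, Int(Float(segments) * progress))

    var positions: [SIMD3<Float>] = [.zero]                  // center vertex
    for i in 0...steps {
        let angle = sweep * Float(i) / Float(steps)
        positions.append([radius * cos(angle), 0, radius * sin(angle)])
    }

    // Triangle fan: center, rim i, rim i + 1.
    var indices: [UInt32] = []
    for i in 1...steps {
        indices.append(contentsOf: [0, UInt32(i), UInt32(i + 1)])
    }

    var descriptor = MeshDescriptor(name: "pieSlice")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

// Usage sketch: swap the mesh on a ModelEntity as progress changes.
// modelEntity.model?.mesh = try pieSliceMesh(progress: 0.42)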
The Drawing fully immersive content using Metal guide describes how to use Metal for visionOS immersive experiences, but it seemingly requires Swift to bring up the required CompositorLayer.
@main
struct MyApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "MyContent") {
            CompositorLayer { layerRenderer in
                let renderThread = Thread {
                    let engine = myEngineCreate(layerRenderer)
                    myEngineRenderLoop(engine)
                }
                renderThread.name = "Render Thread"
                renderThread.start()
            }
        }
    }
}
The ImmersiveSpace scene can presumably be replaced with a call to
[UIApplication.sharedApplication activateSceneSessionForRequest:[UISceneSessionActivationRequest requestWithRole:UISceneSessionRoleImmersiveSpaceApplication] errorHandler:nil]
But is there a replacement for CompositorLayer? Or some other way to produce a cp_layer_renderer?
Perhaps it would be possible to write a small Swift helper for this, but given the Swift interface for CompositorLayer, how would it be tied to an existing UIScene created as above?
@available(visionOS 1.0, *)
public struct CompositorLayer : SwiftUI.ImmersiveSpaceContent {
    public init(configuration: any _CompositorServices_SwiftUI.CompositorLayerConfiguration = .default, renderer: @escaping (CompositorServices.LayerRenderer) -> Swift.Void)
    public var body: Swift.Never {
        get
    }
    public typealias Body = Swift.Never
}
I am trying to implement a way to rotate a 3D model around its y axis, but this doesn't seem to work. What am I missing?
The scene only contains one model entity.
@State private var rotateBy: Double = 0.0

RealityView { content in
    do {
        let entity = try await Entity.init(named: "VinylScene", in: realityKitContentBundle)
        entity.scale = SIMD3<Float>(repeating: 0.6)
        content.add(entity)
    } catch {
        ProgressView()
    }
}
.gesture(
    DragGesture(minimumDistance: 0.0)
        .targetedToAnyEntity()
        .onChanged { value in
            let location3d = value.convert(value.location3D, from: .local, to: .scene)
            let startLocation = value.convert(value.startLocation3D, from: .local, to: .scene)
            let delta = location3d - startLocation
            rotateBy = Double(atan(delta.x * 200))
        }
)
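One thing worth checking: the snippet computes rotateBy but never applies it to the entity, so the drag has no visible effect. A minimal sketch of applying the angle, assuming you keep a reference to the loaded entity (vinylEntity is a hypothetical name):

import RealityKit
import simd

// Hypothetical: `vinylEntity` is the entity loaded in the RealityView closure.
// Call this from .onChanged after computing the new angle.
func apply(yRotation angle: Double, to vinylEntity: Entity) {
    vinylEntity.transform.rotation = simd_quatf(angle: Float(angle),
                                                axis: SIMD3<Float>(0, 1, 0))
}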
Hello. I have a model of a CD record and box, and I would like to change its artwork via an external image URL. My 3D knowledge is limited, but what I can say is that the RealityView contains the USDZ of the record, which in turn contains multiple materials: ArtBack, ArtFront, PlasticBox, CD.
How do I target an artwork material and change it to another image? Here is the code so far.
RealityView { content in
    do {
        let entity = try await Entity.init(named: "VinylScene", in: realityKitContentBundle)
        entity.scale = SIMD3<Float>(repeating: 0.6)
        content.add(entity)
    } catch {
        ProgressView()
    }
}
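A possible direction, with several assumptions spelled out: the artwork lives on a child entity named "ArtFront" at material index 0 (adjust both to your asset), and the remote image has already been downloaded to a local file URL, since TextureResource.load works with file URLs:

import Foundation
import RealityKit

// Hypothetical: `artworkFileURL` is a *local* file URL (download the remote image
// first, e.g. with URLSession). Entity name and material index are assumptions.
func applyArtwork(from artworkFileURL: URL, to root: Entity) throws {
    guard let artEntity = root.findEntity(named: "ArtFront"),
          var model = artEntity.components[ModelComponent.self] else { return }

    let texture = try TextureResource.load(contentsOf: artworkFileURL)

    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(texture: .init(texture))

    model.materials[0] = material            // assumed index of the artwork material
    artEntity.components.set(model)
}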
App Store Connect says that the name for my app is taken, but when I search on the App Store, there aren't any apps with that name. Is this a bug? Is there a way to contact Apple so that we can claim the name we'd like? Thank you!
Hi,
I'm struggling a little with the Shader Graph in Reality Composer Pro. Doing something simple like linearly interpolating between two colors has me searching through nodes without any luck. Is there a node comparable to the lerp node in Unity? And if so, what is it called?
I have an FBO generated by the last frame, which contains a color texture and a depth texture. I want to blit that FBO to the current MTKView, but it fails. Thanks a lot for any suggestions.
Here is the code:
last frame:
id <MTLTexture> src_color;
id <MTLTexture> src_depth;
drawTo(src_color, src_depth);
current frame:
first, init a depth texture if null:
id<MTLTexture> depth_texture_;
if (!depth_texture_) {
    depth_texture_ = [device_ newTextureWithDescriptor:desc];
}
second,
create desc for encoder:
current_desc = metal_view_.currentRenderPassDescriptor;
current_desc.depthAttachment.texture = depth_texture_;
current_desc.depthAttachment.loadAction = MTLLoadActionClear;
current_desc.depthAttachment.clearDepth = 1.0;
current_desc.stencilAttachment.texture = depth_texture_;
current_desc.stencilAttachment.loadAction = MTLLoadActionClear;
current_desc.stencilAttachment.clearStencil = 0;
current_desc.colorAttachments[0].loadAction = MTLLoadActionClear;
current_desc.colorAttachments[0].clearColor = MTLClearColorMake(1.0, 1, 1, 1.0);
third,
blit to current MTKView:
auto dst_color = current_desc.colorAttachments[0].texture;
auto dst_depth = current_desc.depthAttachment.texture;
id<MTLCommandBuffer> commandBuffer = [command_queue_ commandBuffer];
commandBuffer.label = @"blit";
id <MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
[blitEncoder copyFromTexture:src_color
sourceSlice:0
sourceLevel:0
sourceOrigin:MTLOriginMake(srcRect.x, srcRect.y, 0)
sourceSize:MTLSizeMake(srcRect.width, srcRect.height, 1)
toTexture:dst_color
destinationSlice:0
destinationLevel:0
destinationOrigin:MTLOriginMake(dstRect.x, dstRect.y, 0)];
[blitEncoder copyFromTexture:src_depth
sourceSlice:0
sourceLevel:0
sourceOrigin:MTLOriginMake(srcRect.x, srcRect.y, 0)
sourceSize:MTLSizeMake(srcRect.width, srcRect.height, 1)
toTexture:dst_depth
destinationSlice:0
destinationLevel:0
destinationOrigin:MTLOriginMake(dstRect.x, dstRect.y, 0)];
[blitEncoder endEncoding];
[commandBuffer commit];
[commandBuffer waitUntilCompleted];
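A common reason a blit like this shows nothing is that the drawable is never presented; writing into the MTKView's drawable texture with a blit also requires the view's framebufferOnly to be false and matching pixel formats and sizes. A minimal Swift sketch of the copy-and-present flow, with all names assumed:

import MetalKit

// Assumed: `view.framebufferOnly` was set to false so its drawable texture can be
// the destination of a blit, and `sourceColor` matches the drawable's format and size.
func blitAndPresent(view: MTKView, queue: MTLCommandQueue, sourceColor: MTLTexture) {
    guard let drawable = view.currentDrawable,
          let commandBuffer = queue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }

    blit.copy(from: sourceColor,
              sourceSlice: 0, sourceLevel: 0,
              sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
              sourceSize: MTLSize(width: sourceColor.width, height: sourceColor.height, depth: 1),
              to: drawable.texture,
              destinationSlice: 0, destinationLevel: 0,
              destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blit.endEncoding()

    commandBuffer.present(drawable)   // without this, nothing reaches the screen
    commandBuffer.commit()
}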
I'm trying to follow the metal-cpp tutorials I've found at https://developer.apple.com/metal/sample-code/?q=learn
The program seems to be launching correctly (I can see the menu bar and interact with it), but nothing is rendered inside the window. I suppose the culprit is somewhere in the following function (I see it binds the device, the view, and the window to the object in charge of drawing into the view):
void core::Application::applicationDidFinishLaunching(NS::Notification *pNotification)
{
    CGRect frame = (CGRect){{100.0, 100.0}, {512.0, 512.0}};
    m_Window->init(frame, NS::WindowStyleMaskClosable | NS::WindowStyleMaskTitled, NS::BackingStoreBuffered, false);
    m_Device = MTL::CreateSystemDefaultDevice();
    m_View = MTK::View::alloc()->init(frame, m_Device);
    m_View->setColorPixelFormat(MTL::PixelFormat::PixelFormatBGRA8Unorm);
    m_View->setClearColor(MTL::ClearColor::Make(1.0, 0.0, 0.0, 1.0));
    m_ViewDelegate = new graphics::ViewDelegate(m_Device);
    m_View->setDelegate(m_ViewDelegate);
    m_Window->setContentView(m_View);
    m_Window->setTitle(NS::String::string("Template 1", NS::StringEncoding::UTF8StringEncoding));
    m_Window->makeKeyAndOrderFront(nullptr);
    NS::Application* nsApp = reinterpret_cast<NS::Application*>(pNotification->object());
    nsApp->activateIgnoringOtherApps(true);
}
But, as you can infer from the fact that I'm failing at the very first tutorial of the bunch, I'm quite lost. I've tried debugging the app with the Xcode debugger and saw that it never enters this function:
void ViewDelegate::drawInMTKView(MTK::View *pView)
{
    m_Renderer->Draw(pView);
}
Could this be a symptom of a missing call in my code?
Thank you in advance for your help
Hi,
I have a CUDA program that I want to convert to Metal compute so that we can support Apple hardware.
When I wrote the CUDA version, I was able to write efficient code because I first learned about the CUDA core architecture. How the cores access memory, for instance, is very important information for writing code that accesses memory efficiently.
Now I want to do the same for the Metal compute version, but I cannot find any information about the low-level architecture, especially the things you should know to write efficient code.
Am I missing something?
Is there a guide giving hints on the most efficient way to access memory, for instance?
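There doesn't appear to be a single Metal document quite like the CUDA hardware guide; the closest public references are the Metal feature set tables, the Metal Shading Language specification's address-space sections (device vs. threadgroup memory play roughly the role of global vs. shared memory), and the WWDC sessions on Metal compute and Apple GPU architecture. A small sketch of the per-pipeline limits worth querying when choosing threadgroup sizes; the sizing heuristic at the end is just one common choice, not a rule:

import Metal

// Query the per-pipeline execution characteristics that matter for occupancy,
// roughly the Metal counterparts of CUDA's warp size and max block size.
func reportDispatchHints(device: MTLDevice, pipeline: MTLComputePipelineState) {
    let simdWidth = pipeline.threadExecutionWidth              // like warpSize
    let maxThreads = pipeline.maxTotalThreadsPerThreadgroup    // like maxThreadsPerBlock
    let threadgroupMemory = device.maxThreadgroupMemoryLength  // like shared memory per block

    print("SIMD width: \(simdWidth), max threads/threadgroup: \(maxThreads), threadgroup memory: \(threadgroupMemory) bytes")

    // A common choice: a threadgroup width that is a multiple of the SIMD width.
    let threadsPerThreadgroup = MTLSize(width: simdWidth,
                                        height: maxThreads / simdWidth,
                                        depth: 1)
    _ = threadsPerThreadgroup
}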
I have a plane that is stereoscopic, so it presents depth to the user that extends beyond the plane itself.
I would like the option either to render depth-buffer values for those pixels or to not write any depth information for the plane at all.
I cannot see any option in a Shader Graph material to affect the depth buffer during rendering, and I also cannot see any way in RealityKit to stop an entity from writing to the depth buffer.
I'm open to any suggestions.
Platform 'METAL' is experimental and not all JAX functionality may be correctly supported!
2024-03-23 22:04:38.947506: W pjrt_plugin/src/mps_client.cc:563] WARNING: JAX Apple GPU support is experimental and not all JAX functionality is correctly supported!
Metal device set to: Apple M1 Pro
systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
loc("-":0:0): error: current mps dialect version is 1.0.0, can't parse version 1.1.0
/AppleInternal/Library/BuildRoots/495c257e-668e-11ee-93ce-926038f30c31/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:1097: failed assertion `Error importing MLIR bytecode.
'
zsh: abort python -c 'import jax; print(jax.numpy.arange(10))'
I tried to render to two layers using vertex amplification in my mesh-shader program, but on Vision Pro only the left eye shows content, and that one view contains both eyes' images. When I change mapping0.renderTargetArrayIndexOffset in the encoder, the image does not move to the right eye. Can vertex amplification be used to render both eyes?
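For reference, a sketch of the encoder-side setup for two amplified views, assuming a layered (texture array) color attachment with the left eye in slice 0 and the right eye in slice 1; names and slice assignments are illustrative:

import Metal

// Configure two amplified views that land in slices 0 and 1 of a layered render target.
// As I understand it, the vertex/mesh stage also needs to output
// [[render_target_array_index]] (it can be 0); Metal then adds each view's offset.
func encodeStereo(encoder: MTLRenderCommandEncoder) {
    var mappings = [
        MTLVertexAmplificationViewMapping(viewportArrayIndexOffset: 0,
                                          renderTargetArrayIndexOffset: 0), // left eye
        MTLVertexAmplificationViewMapping(viewportArrayIndexOffset: 0,
                                          renderTargetArrayIndexOffset: 1)  // right eye
    ]
    encoder.setVertexAmplificationCount(2, viewMappings: &mappings)
    // ... set pipeline, buffers, and draw as usual ...
}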
I am in Unity 2022.3.21f1 using the Apple plugins for Unity with the following versions:
Apple.Core - 3.1.0
Apple.Accessibility - 1.1.0
Apple.GameController - 1.2.0
Apple.GameKit - 2.2.0
I am on macOS 14.4 (Apple Silicon) and Xcode 15.3.
I started working in a file that I haven't worked on in a while, and found that I was getting errors in Unity with the Apple.Accessibility plugin, so I updated the Apple plugins and stopped getting the errors. However, when I went to build my project (which is just for iOS), I now get the following error for each of the four plugins I have installed:
Please ensure that the build invocation (build.py, xcodebuild, or Xcode) compiled cleanly and that the build was configured to support Release on iOS.
UnityEngine.Debug:LogError (object)
Apple.Core.AppleNativeLibraryUtility:ProcessWrapperLibrary (string,UnityEditor.BuildTarget,string,UnityEditor.iOS.Xcode.PBXProject) (at ./Library/PackageCache/com.apple.unityplugin.core@ba71bdbec187/Editor/ApplePlugInEnvironment.cs:604)
Apple.GameController.Editor.AppleGameControllerBuildStep:OnProcessFrameworks (Apple.Core.AppleBuildProfile,UnityEditor.BuildTarget,string,UnityEditor.iOS.Xcode.PBXProject) (at ./Library/PackageCache/com.apple.unityplugin.gamecontroller@4ec66225948e/Editor/AppleGameControllerBuildStep.cs:61)
Apple.Core.AppleBuild:OnPostProcessBuild (UnityEditor.BuildTarget,string) (at ./Library/PackageCache/com.apple.unityplugin.core@ba71bdbec187/Editor/AppleBuild.cs:195)
UnityEditor.EditorApplication:Internal_CallGlobalEventHandler () (at /Users/bokken/build/output/unity/unity/Editor/Mono/EditorApplication.cs:493)
I downloaded these plugins from GitHub and built them with the build.py script, with no errors in doing so. I've tried rebuilding multiple times and even specifying the platform as Release (although the default is all, so it should have built Release anyway). I've tried rolling back to previous versions of the plugins as well with no luck so far. I don't remember which exact versions I used to be on, but I have had no luck with the approximate ones.
Does anyone know how I can point Unity to the NativeRelease folders? I've checked that the frameworks for my libraries are there (i.e. at ../Library/PackageCache/com.apple.unityplugin.core@287366a1eaa5/NativeLibraries~/Release/iOS/AppleCoreNative.framework)
Hello,
The documentation says CGDisplayCreateImage() is deprecated.
Is there an equivalent that can be used instead of CGDisplayCreateImage() (any function that implements the same functionality)?
Thank you for the help,
Pavel
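On recent macOS versions the usual replacement is ScreenCaptureKit. A minimal sketch (macOS 14 or later, and the app needs Screen Recording permission), assuming a one-off capture of the first available display; the error handling and configuration choices are placeholders:

import ScreenCaptureKit

// Capture a single CGImage of the first available display.
func captureMainDisplay() async throws -> CGImage {
    let content = try await SCShareableContent.excludingDesktopWindows(false,
                                                                       onScreenWindowsOnly: true)
    guard let display = content.displays.first else {
        throw NSError(domain: "Capture", code: -1)
    }

    let filter = SCContentFilter(display: display, excludingWindows: [])
    let configuration = SCStreamConfiguration()
    configuration.width = display.width
    configuration.height = display.height

    return try await SCScreenshotManager.captureImage(contentFilter: filter,
                                                      configuration: configuration)
}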
We have built the game on Unreal Engine 4 and optimized it to run on tvOS devices newer than 2017 (i.e., Apple TV 4K and above). We could not bring it down to support the Apple TV HD (2015) due to its visual and memory requirements. Is there a way to exclude the Apple TV HD from the supported-device list? We couldn't find a required device capability to add to Info.plist (e.g., iphone-ipad-minimum-performance-a12; we tried it, but it does not work for a tvOS build).