Hello, I am very new here in the forum (and in iOS dev as well). I am trying to build an app that uses 3D face filters, and I want to use Reality Composer. I knew Xcode 15 did not include it, so I downloaded the beta 8 version (as suggested in another post). This one actually has Reality Composer Pro (Xcode -> Developer Tools -> Reality Composer Pro), but the Experience.rcproject still does not appear. Is there a way to create one? When I use Reality Composer Pro it seems only able to create standalone projects, and it does not seem to be bundled in any way with Xcode. Thanks for your time, people!
I am trying to create a custom CGColorSpace in Swift on macOS but am not sure I really understand the concepts.
I want to use a custom spot color space called Spot1. If I extract the color space from a PDF, I get the following:
"ColorSpace<Dictionary>" = {
"Cs2<Array>" = (
Separation,
Spot1,
DeviceCMYK,
{
"BitsPerSample<Integer>" = 8;
"Domain<Array>" = (
0,
1
);
"Filter<Name>" = FlateDecode;
"FunctionType<Integer>" = 0;
"Length<Integer>" = 526;
"Range<Array>" = (
0,
1,
0,
1,
0,
1,
0,
1
);
"Size<Array>" = (
1024
);
}
);
};
How can I create this same color space using the CGColorSpace(propertyListPlist:) API? This is what I tried:
func createSpot1() -> CGColorSpace? {
    let dict0: NSDictionary = [
        "BitsPerSample": 8,
        "Domain": [0, 1],
        "Filter": "FlateDecode",
        "FunctionType": 0,
        "Length": 526,
        "Range": [0, 1, 0, 1, 0, 1, 0, 1],
        "Size": [1024]
    ]
    let dict: NSDictionary = [
        "Cs2": ["Separation", "Spot1", "DeviceCMYK", dict0]
    ]
    let space = CGColorSpace(propertyListPlist: dict as CFPropertyList)
    if space == nil {
        DebugLog("Spot1 color space is nil!")
    }
    return space
}
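In case it helps others debug the same thing, my next step is to round-trip a color space that was actually extracted from the PDF through CGColorSpaceCopyPropertyList (exposed in Swift as copyPropertyList(), if I read the docs right) and compare the result against my hand-built dictionary; I suspect mine is missing the sampled function data that the FlateDecode stream (Length 526) carries. A minimal sketch, untested:
import CoreGraphics
import Foundation

func roundTrip(_ source: CGColorSpace) -> CGColorSpace? {
    // The canonical plist form that CGColorSpace(propertyListPlist:) expects.
    guard let plist = source.copyPropertyList() else { return nil }
    print(plist) // compare against the hand-built dictionary above
    return CGColorSpace(propertyListPlist: plist)
}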
Hi,
my MoltenVK app suffers from slow frame rates and stutter. The Metal Profiler's Display tab lists several "Cannot go direct to display due to display rejected" warnings. Does anyone have deeper insights?
Why does videoPlayerComponent.playerScreenSize not return the correct size of the video? Printing it shows x and y as [0, 0]. Is this a bug?
Hi,
In a Metal shader I have a user-defined struct with a square-brackets operator.
This is a simplified version of it:
struct MyData
{
    float data[12];

    float operator[](int i) const
    {
        return data[i];
    }
};
I pass a device buffer of that type to a compute kernel function:
device const MyData* myDataBuffer
Using the operator with a thread-space object works fine:
MyData data_object = myDataBuffer[0];
float x = data_object[0]; // ok
...but trying to use it with the device-space buffer fails:
float x = myDataBuffer[0][0]; // compilation error
No viable overloaded operator[] for type 'const device MyData'
Candidate function not viable: address space mismatch in 'this' argument ('const device MyData'), parameter type must be 'const MyData'
For other operators I could define the function outside the struct and pass a reference to a device-memory object, but the square-brackets operator can only be defined as a member function.
Am I missing something that could make the above statement compile?
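The only idea I have left (a minimal sketch, untested, and assuming the MSL version in use supports address-space qualifiers on member functions) is to add a second overload of the operator whose implicit this parameter is qualified for the device address space:
struct MyData
{
    float data[12];

    // Overload for objects in the thread address space (the original one).
    float operator[](int i) const thread
    {
        return data[i];
    }

    // Overload for objects in the device address space, so that
    // myDataBuffer[0][0] can resolve.
    float operator[](int i) const device
    {
        return data[i];
    }
};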
I'm trying to display the native Game Center interface from Unity (I've already tried Bounded and Unbounded Volume Mode, that is, with and without Full Immersive Mode), but I can't display this interface.
If I use Unity's Social API (https://docs.unity3d.com/ScriptReference/Social.ShowLeaderboardUI.html), nothing is displayed, and I get the following message in Xcode (simulator):
[u 9C225095-F55E-42CC-8136-957279631DF3:m (null)] [com.apple.GameCenterUI.GameCenterDashboardExtension(1.0)] Connection to plugin interrupted while in use.
setViewControllers:animated: called on <GKGameCenterViewController 0x106825600> while an existing transition or presentation is occurring; the navigation stack will not be updated.
Type: Notice | Timestamp: 2023-12-08 12:13:50.585973+01:00 | Process: leaderboard-test | Library: UIKitCore | TID: 0x3e57bd
viewServiceDidTerminateWithError:: Error Domain=_UIViewServiceInterfaceErrorDomain Code=3 "(null)" UserInfo={Message=Service Connection Interrupted}
Type: Notice | Timestamp: 2023-12-08 12:13:50.586997+01:00 | Process: leaderboard-test | Library: UIKitCore | TID: 0x3e57bd
[u 9C225095-F55E-42CC-8136-957279631DF3:m (null)] [com.apple.GameCenterUI.GameCenterDashboardExtension(1.0)] Connection to plugin invalidated while in use.
Type: Error | Timestamp: 2023-12-08 12:13:50.588393+01:00 | Process: leaderboard-test | Library: PlugInKit | Subsystem: com.apple.PlugInKit | Category: lifecycle | TID: 0x3e5a18
If I try to do it using Apple's plugin for Game Center (GameKitWrapper) adapted to visionOS, the application crashes with the following error:
"Presentations are not permitted within volumetric window scenes."
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Presentations are not permitted within volumetric window scenes.'
*** First throw call stack:
(
0 CoreFoundation 0x00000001804a510c __exceptionPreprocess + 172
1 libobjc.A.dylib 0x0000000180082f50 objc_exception_throw + 56
2 UIKitCore 0x0000000184bc2798 -[UIViewController _presentViewController:withAnimationController:completion:] + 1136
3 UIKitCore 0x0000000184bc4000 __63-[UIViewController _presentViewController:animated:completion:]_block_invoke + 88
4 UIKitCore 0x0000000184bc42d0 -[UIViewController _performCoordinatedPresentOrDismiss:animated:] + 484
5 UIKitCore 0x0000000184bc3f6c -[UIViewController _presentViewController:animated:completion:] + 160
6 UIKitCore 0x0000000184bc4374 -[UIViewController presentViewController:animated:completion:] + 140
7 GameKitWrapper 0x00000001057aa864 $s14GameKitWrapper34GKGameCenterViewController_Present7pointer6taskId9onSuccessySv_s5Int64VyAGXCtF ...
)
The code that produces this crash (trying to display the Game Center UI) is the following; the crash happens at the present(_:animated:) call:
@_cdecl("GKGameCenterViewController_Present")
public func GKGameCenterViewController_Present(
    pointer: UnsafeMutableRawPointer,
    taskId: Int64,
    onSuccess: @escaping SuccessTaskCallback
) {
    let target = Unmanaged<GKGameCenterViewController>.fromOpaque(pointer).takeUnretainedValue()
    _currentPresentingGameCenterDelegate = GameKitUIDelegateHandler(taskId: taskId, onSuccess: onSuccess)
    target.gameCenterDelegate = _currentPresentingGameCenterDelegate

    #if os(iOS) || os(tvOS) || os(visionOS)
    let viewController = UIApplication.shared.windows.first!.rootViewController
    viewController?.present(target, animated: true) // crashes here in a volumetric scene
    #endif
}
Is there a way to present the GameCenter UI overlaid on the Unity app?
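One idea I am considering (a minimal sketch, untested on visionOS): instead of calling present(_:animated:) on the volumetric scene's root view controller, wrap GKGameCenterViewController in a regular SwiftUI window via UIViewControllerRepresentable and open that window when Unity asks for the dashboard:
import SwiftUI
import GameKit

struct GameCenterView: UIViewControllerRepresentable {
    let state: GKGameCenterViewControllerState

    func makeUIViewController(context: Context) -> GKGameCenterViewController {
        let controller = GKGameCenterViewController(state: state)
        controller.gameCenterDelegate = context.coordinator
        return controller
    }

    func updateUIViewController(_ uiViewController: GKGameCenterViewController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator() }

    final class Coordinator: NSObject, GKGameCenterControllerDelegate {
        func gameCenterViewControllerDidFinish(_ gameCenterViewController: GKGameCenterViewController) {
            gameCenterViewController.dismiss(animated: true)
        }
    }
}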
Hey guys
How can I fit RealityView content inside a volumetric window?
I have below simple example:
WindowGroup(id: "preview") {
    RealityView { content in
        if let entity = try? await Entity(named: "name") {
            content.add(entity)
            entity.setPosition(.zero, relativeTo: entity.parent)
        }
    }
}
.defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
.windowStyle(.volumetric)
I understand that we can resize a Model3D view automatically using .resizable() and .scaledToFit() after the model loads.
Can we achieve the same result using a RealityView?
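For reference, this is the direction I have been experimenting with (a minimal sketch, untested, assuming the 0.6 m volume above): scale the entity so its visual bounds fit the volume, which is roughly what .resizable().scaledToFit() does for a Model3D:
RealityView { content in
    if let entity = try? await Entity(named: "name") {
        let bounds = entity.visualBounds(relativeTo: nil)
        let maxExtent = max(bounds.extents.x, bounds.extents.y, bounds.extents.z)
        if maxExtent > 0 {
            // Fit the largest dimension into the 0.6 m default size.
            entity.scale *= SIMD3<Float>(repeating: 0.6 / maxExtent)
        }
        content.add(entity)
        entity.setPosition(.zero, relativeTo: entity.parent)
    }
}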
Cheers
On the new visionOS platform, CustomMaterial in RealityKit cannot be used; ShaderGraphMaterial should be used instead, but I can't find a way to change the culling mode. The old CustomMaterial had a faceCulling property.
Is there a way to change the culling mode with the new ShaderGraphMaterial?
I am trying to use my animated model in Xcode with SceneKit. I exported my model from Maya with animation data in .usd format, then converted it to .usdz with Reality Converter. When I open it in the Xcode viewer it is animated and everything is fine. However, when I try to use it in my app it doesn't animate. On the other hand, when I try the robot_walk_idle model from Apple's example models, it is animated. Maybe I am missing an option in the export settings. Thanks for any help.
import SwiftUI
import SceneKit

struct ModelView: View {
    var body: some View {
        VStack {
            SceneView(scene: SCNScene(named: "robot_walk_idle.usdz"))
        }
    }
}
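One thing I plan to try (a minimal sketch, untested): usdz animations import into SceneKit as SCNAnimationPlayer objects attached somewhere in the node hierarchy, so finding and playing them manually might help when a scene does not autoplay outside Xcode's viewer:
import SceneKit

func playAllAnimations(in scene: SCNScene) {
    scene.rootNode.enumerateHierarchy { node, _ in
        // Each animation imported from the usdz is keyed on its node.
        for key in node.animationKeys {
            node.animationPlayer(forKey: key)?.play()
        }
    }
}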
I have a Unity project that I have moved to URP to work with the Vision Pro's PolySpatial framework. I have it to the point that everything in my scene loads into the Simulator except for LineRenderers, even with URP/Lit shaders. Is there something special I need to do to make these visible?
Hi,
I have a usdz asset of a torus / hoop shape that I would like to pass another RealityKit entity (a cube-like object) through (without touching the torus) in visionOS, similar to how a basketball goes through a hoop.
Whenever I pass the cube through, I am getting a collision notification, even if the objects are not actually colliding. I want to be able to detect when the objects are actually colliding, vs when the cube passes cleanly through the opening in the torus.
I am using entity.generateCollisionShapes(recursive: true) to generate the collision shapes. I believe the issue is that the collision shape of the torus is a rectangular box, not the actual shape of the torus. I know the collision shape is a rectangular box because I can see it in the visionOS simulator by enabling "Collision Shapes".
Does anyone know how to programmatically create a torus collision shape in SwiftUI / RealityKit for visionOS? As a follow-up, can I create a torus directly in RealityKit, so I don't even have to use a .usdz asset?
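For reference, the workaround I have been sketching (hypothetical sizes, untested) approximates the torus collider as a ring of small box shapes, since generateCollisionShapes(recursive:) only produces a convex fit with no hole in the middle:
import Foundation
import RealityKit

func torusCollisionShapes(majorRadius: Float,
                          minorRadius: Float,
                          segments: Int = 16) -> [ShapeResource] {
    var shapes: [ShapeResource] = []
    let segmentLength = 2 * Float.pi * majorRadius / Float(segments)
    for i in 0..<segments {
        let angle = Float(i) / Float(segments) * 2 * Float.pi
        // A box whose long (x) axis runs tangent to the ring.
        let box = ShapeResource.generateBox(width: segmentLength,
                                            height: minorRadius * 2,
                                            depth: minorRadius * 2)
        let rotation = simd_quatf(angle: -(angle + .pi / 2), axis: [0, 1, 0])
        let translation = SIMD3<Float>(cos(angle), 0, sin(angle)) * majorRadius
        shapes.append(box.offsetBy(rotation: rotation, translation: translation))
    }
    return shapes
}

// Usage: torus.components.set(CollisionComponent(shapes: torusCollisionShapes(majorRadius: 0.3, minorRadius: 0.05)))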
I want to place a ModelEntity at an AnchorEntity's location, but not as a child of the AnchorEntity (I want to be able to raycast to it and have collisions work).
I've placed an AnchorEntity in my scene like so:
AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
In my RealityView update closure, I print out this entity's position relative to "nil" like so:
wallAnchor.position(relativeTo: nil)
Unfortunately, this position doesn't make sense. It's very close to zero, even though it appears several meters away.
I believe this is because AnchorEntities have their own self-contained coordinate spaces that are independent of the scene's coordinate space, and it is reporting its position relative to its own coordinate space.
How can I bridge the gap between these two?
WorldAnchor has an originFromAnchorTransform property that helps with this, but I'm not seeing something similar for AnchorEntity.
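The only workaround I can think of (a minimal sketch, untested, and I am not certain reparenting actually resolves the anchor's private coordinate space) is to parent a placeholder entity to the AnchorEntity so it picks up the anchor's pose, then move it to the scene root while preserving its world transform:
import RealityKit

func worldPlacedMarker(for wallAnchor: AnchorEntity, sceneRoot: Entity) -> Entity {
    let marker = Entity()
    wallAnchor.addChild(marker) // inherits the anchor's pose
    // Re-parent into the scene's space, keeping the world transform.
    marker.setParent(sceneRoot, preservingWorldTransform: true)
    return marker
}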
Thank you
In visionOS, once an immersive space is opened, the background color is solid black. How do I change this? I just need to set a static color without any shading, but I can't find any documentation or examples on how to do this, and the template that comes with Xcode 15.1 beta 3 doesn't change the background color.
I've searched around for information, but all I can find points back to MTKView.clearColor, which I can't use when drawing into an immersive space, since immersive spaces on visionOS use Compositor Services and not MTKView or CAMetalLayer for drawing 3D content.
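From what I can tell so far, the background of an immersive Metal space is just the clear color of the render pass that draws into the frame's drawable. A minimal sketch of what I mean (untested, assuming a Compositor Services render loop that hands me a LayerRenderer.Drawable):
import CompositorServices
import Metal

func makeRenderPassDescriptor(for drawable: LayerRenderer.Drawable) -> MTLRenderPassDescriptor {
    let descriptor = MTLRenderPassDescriptor()
    descriptor.colorAttachments[0].texture = drawable.colorTextures[0]
    descriptor.colorAttachments[0].loadAction = .clear
    descriptor.colorAttachments[0].storeAction = .store
    // A static dark blue instead of the default black.
    descriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.05, green: 0.05, blue: 0.2, alpha: 1.0)
    return descriptor
}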
The touch-input stutter issue that has existed since iOS 16 on devices with ProMotion displays has still not been fixed. I filed a bug report in July, but there has been no progress in months.
I see the problem in all the games I have tried. My game is fast-paced, so the stutters are quite obvious, and I receive a lot of complaining emails.
My game ran smoothly on ProMotion devices with iOS 15. Is there a known workaround? I see other developers having the same issue, but I can't find any solutions.
Other threads about this issue:
IPhone 14 Pro stuttering in most games when using touch controls
FPS drops when tapping the screen on iPhone 13 Pro Max
In my project I want to use the new ShaderGraphMaterial to do stereoscopic rendering. I noticed there is a node called Camera Index Switch that can do this, but when I tried it I found that:
1. It can only output an Integer value; when I change it to Float, it changes back again. I don't know if this is a bug.
2. When I test this node with an If node, its output is weird. Below, zero should be output, so it should be black, but when I route it through the If node it is grey, neither 0 nor 1 (my If node outputs 1 for TRUE and 0 for FALSE).
I want to ask whether this is a bug, and whether this is the correct way to do stereoscopic rendering.
How do I get the current player's score and position on a leaderboard? Thanks!
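For context, here is what I have pieced together so far (a minimal sketch, untested, assuming the local player is authenticated and a hypothetical leaderboard ID "my.leaderboard"):
import GameKit

func loadLocalPlayerScoreAndRank() {
    GKLeaderboard.loadLeaderboards(IDs: ["my.leaderboard"]) { leaderboards, _ in
        guard let leaderboard = leaderboards?.first else { return }
        leaderboard.loadEntries(for: .global, timeScope: .allTime,
                                range: NSRange(location: 1, length: 1)) { localEntry, _, _, _ in
            // The local player's entry carries both score and rank (position).
            if let localEntry {
                print("score: \(localEntry.score), rank: \(localEntry.rank)")
            }
        }
    }
}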
In pretty much every Metal tutorial out there, people use MTLVertexDescriptor like this: they create a struct like
struct Vertex {
    var position: float3
    var color: float3
}
then a vertex array and buffer:
let vertices: [Vertex] = ...
guard let vertexBuffer = device.makeBuffer(bytes: vertices,
                                           length: MemoryLayout<Vertex>.stride * vertices.count,
                                           options: []) else { ... }
This is all good, we have a buffer with interleaved position and color data. The problem is, when creating a vertex descriptor, they use MemoryLayout<float3>.stride as the offset for the second attribute:
let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[0].bufferIndex = 0
vertexDescriptor.attributes[1].format = .float3
vertexDescriptor.attributes[1].offset = MemoryLayout<float3>.stride // <-- here!
vertexDescriptor.attributes[1].bufferIndex = 0
vertexDescriptor.layouts[0].stride = MemoryLayout<Vertex>.stride
This does not look correct to me. The code happens to work only because the stride of SIMD3<Float> (a.k.a. float3) matches the alignment of the fields in this particular struct.
But if we have something like this:
struct Vertex {
    var attr0: Float
    var attr1: Float
    var attr2: SIMD3<Float>
}
then the naive approach of using stride won't work. Because of padding, attr2 does not start right after the two floats at offset 2 * MemoryLayout<Float>.stride, but at offset 16.
So it seems to me that the only correct and robust way to set the vertex descriptor's offset is to use offset(of:), like this:
vertexDescriptor.attributes[2].offset = MemoryLayout<Vertex>.offset(of: \.attr2)!
Yet, I'm not able to find a single code example that does this. Am I missing something, or is everybody else just being careless with their offsets?
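For completeness, here is how I would build the descriptor for the padded struct above, with offset(of:) used for every attribute (a sketch; the formats are the obvious ones for the field types):
import Metal

struct PaddedVertex {
    var attr0: Float
    var attr1: Float
    var attr2: SIMD3<Float>
}

let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float
vertexDescriptor.attributes[0].offset = MemoryLayout<PaddedVertex>.offset(of: \.attr0)! // 0
vertexDescriptor.attributes[0].bufferIndex = 0
vertexDescriptor.attributes[1].format = .float
vertexDescriptor.attributes[1].offset = MemoryLayout<PaddedVertex>.offset(of: \.attr1)! // 4
vertexDescriptor.attributes[1].bufferIndex = 0
vertexDescriptor.attributes[2].format = .float3
vertexDescriptor.attributes[2].offset = MemoryLayout<PaddedVertex>.offset(of: \.attr2)! // 16, not 8
vertexDescriptor.attributes[2].bufferIndex = 0
vertexDescriptor.layouts[0].stride = MemoryLayout<PaddedVertex>.stride // 32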
Hello fellow developers,
I am currently exploring the functionality of the UsdPrimvarReader node in the Shader Graph editor and would appreciate some clarification on how it operates. Despite my efforts to understand it, I find myself in need of some guidance.
Specifically, I would appreciate insights into how the UsdPrimvarReader node should ideally operate, what data should be specified in its Varname field, and which primvars can be extracted from a USD file. I am also curious about the correct representation of a primvar in a USD file, to ensure it can be read successfully.
If anyone could share their expertise or point me in the right direction to relevant documentation, I would be immensely grateful.
Thank you in advance for your time and consideration. I look forward to any insights or recommendations you may have.
I have an immersive environment with a skybox that uses a PNG image inside a sphere. I added an IBL, but I am not sure what the best format / prep method is for the IBL image.
I have tried several different images for my IBL, and all are very different vibes from what I have in Blender.
My question is: how can I create an IBL that is closest to Blender's Cycles rendering engine?
However, that is a rather difficult question to answer, so I want to ask some smaller questions first.
1. Does the IBL image need to be black and white (BW), or will colour work?
From my tests: colour works just as well. But why does Apple only show BW ones being used? Should we use BW?
2. What is the best file format for an IBL? Any pros/cons, or should we just test each format and check visually?
From my tests: PNG, OpenEXR (.exr), and Radiance HDR (.hdr) all work. But which format is recommended?
3. Will IBL on visionOS create shadows for us? In Blender, an HDRI gives shadows.
From my tests: no, IBL does not produce shadows on your loaded environment/meshes. Is "shadow baking" the only solution for the time being?
4. Looking at a scene in Blender that uses an HDRI as global lighting, how can we best "prep" the IBL image so that it gives light closest to Blender's Cycles rendering engine?
From my tests, I tried the following (as shown below):
A) make a render of just the Blender HDRI (without meshes) via 360-degree panoramic camera.
→ Usage as IBL makes everything too bright.
B) make a render of the entire Blender scene via 360-degree panoramic camera.
→ Usage as IBL makes everything washed out and yellowish.
C) Use the Sunlight.png from the sample project.
→ With this IBL the scene is too dark.
D) Use the SystemIBL.exr from another sample project.
→ With this IBL the scene looks very flat and not realistic at all.
Here I show each IBL described above (A–D) with sample screenshots from the simulator, followed by the atmosphere I'm aiming for as per Blender's Cycles rendering engine:
[screenshots for A–D and the Cycles reference render were attached here]
Can anyone help me with questions 1–4 above? It would give me some insight into how to create immersive environments with realistic lighting & shadows. : )
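For reference, this is how I am attaching the IBL in code, based on the sample projects (a minimal sketch; "MyIBL" is a placeholder for the image resource name):
import RealityKit

func applyIBL(to root: Entity) async throws {
    // Load the IBL image added to the RealityKit content bundle.
    let resource = try await EnvironmentResource(named: "MyIBL")
    let ibl = ImageBasedLightComponent(source: .single(resource), intensityExponent: 1.0)
    root.components.set(ibl)
    // Entities under `root` receive the light via the receiver component.
    root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: root))
}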
Much appreciated!
— Luca
Hi there -
Where would a dev go these days to get an initial understanding of SceneKit?
The WWDC videos linked in various places seem to be gone?!
For example, the SceneKit page at developer.apple.com features a session-videos link that comes up without any results: https://developer.apple.com/scenekit/
Any advice..?
Cheers,
Jan