Hi,
Since RealityKit 4 now supports blend shapes, I was wondering whether there are any workflow or tooling recommendations for baking/exporting them into a USDZ.
Are Blender or Cinema 4D capable of doing that out of the box? Or should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)?
So far this topic seems very sparsely documented, and I would appreciate any hints. Thank you!
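In case it helps frame answers, this is roughly what we want to do on the RealityKit side once the USDZ contains the shapes. It is only a minimal sketch: the component and initializer names here are recalled from the RealityKit 4 documentation and should be treated as assumptions, not tested code.

import RealityKit

// Hypothetical sketch: attach blend shape weights to a model loaded from the
// exported USDZ so the weights can be driven at runtime. Type and initializer
// names (BlendShapeWeightsMapping, BlendShapeWeightsComponent) are assumptions.
func enableBlendShapes(on entity: ModelEntity) {
    guard let mesh = entity.model?.mesh else { return }
    let mapping = BlendShapeWeightsMapping(meshResource: mesh)
    entity.components.set(BlendShapeWeightsComponent(weightsMapping: mapping))
}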
3D Graphics
Discuss integrating three-dimensional graphics into your app.
Posts under the 3D Graphics tag
We have a native iOS app that supports the upload and display of USDZ files. It has been working great since beta (late 2022) and live launch (late 2023) until now. But recently we have had reports from some users, at least on Max-model phones (14 and 15 Pro Max). When they tap and launch a 3D file, the Quick Look player is triggered. So far so good. But for affected users the controls along the top of the player - X (close), the AR | Object toggle, and the share button - are moving too high up the phone screen and getting stuck (untappable) behind the phone's top status bar (time, camera bug, connection, battery).
This means that when they open a USDZ file in AR or 3D view they have to hard-close the app to get out of it again. This doesn't happen when they open a USDZ file from Files, Dropbox etc on their phone (which also uses the Quick Look player). The controls only move up and get stuck when launching a USDZ from within our app.
I'm at a loss to figure out what might be causing this on some phones and not others, and why only when opening a USDZ file from our app! So far we have replicated this issue on a single iPhone 14 Pro Max and a 15 Pro Max, both running iOS 18+.
We have tested on other 15 Pro Maxes on the same OS, as well as Pros, regular iPhones, and Minis, and are not experiencing the issue. You would think that a USDZ file is a USDZ file and that your iPhone knows what to do with it and opens it in the Quick Look player regardless of where you open the file from. Why would the navigation items move if you open the USDZ file from within our app, and why only for some select users?
We will continue to troubleshoot and test, but I wanted to throw this out to the community in case anyone has experienced this or has any theories that would expedite our testing. Your thoughts are most appreciated!
Here is a video showing the expected (correct) behaviour: https://www.dropbox.com/scl/fi/0sp8s4opaf2m4gukkcbrk/How-opening-a-USDZ-should-behave_correct-behaviour.MP4?rlkey=tzzau9x91mwox66gsgguryhep&st=qiykmne9&dl=0 and a screenshot attached below of what is happening on one of the affected users' iPhone 15 Pro Max.
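For anyone theorizing about the presentation path, here is a minimal sketch of one way an app can hand a USDZ to QLPreviewController. Treat it as an assumption about our setup rather than our exact code; names are placeholders.

import UIKit
import QuickLook

// Simplified sketch of presenting a USDZ from inside an app via Quick Look.
final class USDZPreviewPresenter: NSObject, QLPreviewControllerDataSource {
    private let fileURL: URL

    init(fileURL: URL) {
        self.fileURL = fileURL
    }

    func present(from presenter: UIViewController) {
        let controller = QLPreviewController()
        controller.dataSource = self
        presenter.present(controller, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
        fileURL as NSURL // NSURL conforms to QLPreviewItem
    }
}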
I'm using the Apple RoomPlan SDK to generate a .usdz file, which works fine and gives me a 3D scan of my room.
But when I try to use Model I/O's MDLAsset to convert that output into an .obj file, it comes out as a completely flat model shaped like a rectangle. Here is my Swift code:
let destinationURL = destinationFolderURL.appending(path: "Room.usdz")
do {
    try FileManager.default.createDirectory(at: destinationFolderURL, withIntermediateDirectories: true)
    // Export the RoomPlan results as USDZ first.
    try finalResults?.export(to: destinationURL, exportOptions: .model)
    // Re-open the USDZ with Model I/O and convert it to OBJ.
    let asset = MDLAsset(url: destinationURL)
    let objURL = destinationFolderURL.appending(path: "Room.obj")
    try asset.export(to: objURL)
} catch {
    print("Export failed: \(error)")
}
Not sure what's wrong here. According to the MDLAsset documentation, .obj is a supported format, and exporting from .usdz to other formats like .stl and .ply works fine and retains the original 3D shape.
Some things I've tried:
changing "exportOptions" to parametric, mesh, or model.
simply changing the file extension of "destinationURL" (throws error)
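One more diagnostic I plan to try, sketched below: confirm that Model I/O reports OBJ as exportable and inspect what it actually reads from the USDZ before exporting. "usdzURL" stands in for the exported Room.usdz.

import ModelIO

// Sketch: inspect what Model I/O loads from the USDZ before converting it.
// "usdzURL" is a placeholder for the exported Room.usdz.
let asset = MDLAsset(url: usdzURL)
print("Can export .obj:", MDLAsset.canExportFileExtension("obj"))

for case let mesh as MDLMesh in asset.childObjects(of: MDLMesh.self) {
    print(mesh.name, "vertices:", mesh.vertexCount, "bounds:", mesh.boundingBox)
}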
I am creating a 3D model from multiple images using a photogrammetry session. Now, when the session generates an OBJ file and I measure the distance between two points, the distance is displayed inconsistently in different units: sometimes meters, then centimeters, or another unit altogether. How can I tell the photogrammetry session to always create the model in millimeters?
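As far as I can tell, the session itself has no unit setting: a .usdz output records its unit (meters by convention), while .obj carries no unit metadata at all, so the measuring tool has to guess. A minimal sketch of the request for context; the folder URLs are placeholders, and the unit question seems to come down to the output format rather than the session.

import RealityKit

// Sketch (Object Capture): there is no unit parameter on the request;
// a .usdz output is in meters, while .obj carries no unit metadata.
func reconstruct(images: URL, output: URL) async throws {
    let session = try PhotogrammetrySession(input: images)
    try session.process(requests: [.modelFile(url: output, detail: .medium)])
    for try await result in session.outputs {
        if case .processingComplete = result {
            print("Reconstruction finished")
        }
    }
}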
Hello Dev team,
For three weeks I've been looking for a way to export static Cinema 4D objects WITH TEXTURES to Reality Composer Pro!
I can export them directly in USDA format, and the 3D model works well in Reality Composer Pro, BUT I can't get the textures onto my model. My model is simply not colored!
Of course I expect the textures to be applied in the right places, with the same appearance I have in Cinema 4D.
Could you give me a process for doing that, please?
I'm using Cinema 4D R25 and the latest Xcode and Reality Composer Pro beta versions.
Big, big thanks to anyone who can help me with this. It will unblock many things for me!
Cheers
Mathis
Working on a visionOS app. I've noticed that even when castsShadow is false, performance goes down the drain when there are more than a few dozen entities with a GroundingShadowComponent. I managed to hard-crash the Vision Pro with about 200 or so entities that each had two ModelEntities with a GroundingShadowComponent attached but set to castsShadow = false.
My solution is to add and remove the GroundingShadowComponent from entities as needed, but I thought someone at Apple might want to look into this. I don't expect great performance with that many entities casting shadows, but I'd think turning the shadow off would effectively disable the component and not incur a performance penalty.
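For anyone hitting the same thing, this is roughly the add/remove workaround mentioned above, as a minimal sketch:

import RealityKit

// Only attach GroundingShadowComponent while the entity actually needs a
// shadow; removing it entirely avoids paying for a disabled component.
func setGroundingShadow(_ wantsShadow: Bool, on entity: Entity) {
    if wantsShadow {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    } else {
        entity.components.remove(GroundingShadowComponent.self)
    }
}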
Hello!
I need to display a .scnz 3D model in an iOS app. I tried converting the file to a .scn file so I could use it with SCNScene, but the file became corrupted.
I also tried to instantiate an SCNScene with the .scnz file directly, but that didn't work either (it crashes on instantiation).
After all this, what would be the best way to use this file, given that converting or exporting it to a .scn file with scntool hasn't worked?
Thank you!
I am fairly new to 3D model rendering and do not know where to start.
Ideally using ARKit and RealityKit or SceneKit, I am trying to do a scan of an environment. This includes:
Applying realistic textures to the model.
Being able to save it as a .usdz file (to be able to open it within the app itself).
Once it is saved, doing post-processing measurements within the model.
I would prefer to accomplish this using a mesh instead of the point cloud used in Apple's sample project. Would this be doable using Apple's APIs on a mobile device, or would it be necessary to use a third-party program?
I have managed to create a USDZ file using SceneKit's scene.write(to:options:delegate:progressHandler:) method. However, the saved file is a single object, and it is not possible to use raycasting to do post-processing measurements in the model.
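For the measuring part, this is the kind of thing I have in mind, assuming the saved model is displayed in an SCNView. It is only a sketch; the unit assumption and the class name are mine.

import SceneKit

// Sketch: measure between two tapped points by hit-testing the displayed model.
// Assumes an SCNView on iOS (SCNVector3 components are Float there) and that
// the scene uses meters, which is the USDZ convention.
final class TapMeasurer {
    private var firstPoint: SCNVector3?

    func handleTap(at location: CGPoint, in sceneView: SCNView) {
        guard let hit = sceneView.hitTest(location, options: nil).first else { return }
        let point = hit.worldCoordinates
        if let start = firstPoint {
            let dx = point.x - start.x
            let dy = point.y - start.y
            let dz = point.z - start.z
            let distance = (dx * dx + dy * dy + dz * dz).squareRoot()
            print("Distance: \(distance) m")
            firstPoint = nil
        } else {
            firstPoint = point
        }
    }
}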
I have a glb model that loads absolutely fine, repeatedly, in Safari or Chrome.
There is only one texture, which is 8192x8192.
It never has a problem when loading in the browser.
When we embed the URL in an app, the model loads the first few times (exiting the model, going back to the main menu, and then reloading the model), but after a few attempts the texture fails to load. The model and all its data are visible, but the texture itself is black.
Why could this be happening? Is there something in the iOS code that is breaking it? Is the iOS code trying to automatically cache the texture and running out of memory?
Anyone who can provide the help and support that we require will be much appreciated.
Thank you in advance.
Greetings, I'm a new developer and would like to understand exactly how Xcode, SwiftUI, RealityKit, ARKit, Reality Composer Pro and Unity work together to create a 3D cosmology app.
I have created a working solar system using JavaScript, HTML and WebGL for the 3D parts. I would now like to carry that over to the Apple Vision Pro. Can someone tell me which software frameworks and APIs in the Apple ecosystem I can use to code that?
Many thanks
With the advent of the third dimension, I wanted to know whether it's currently possible to display flat SwiftUI views with some thickness in xrOS.
While .frame(depth: CGFloat?) does the job for views in general, I am eager for a more granular level of control at the pixel-specific level.
I was hoping there are lower-level APIs to achieve this, and I've looked into the fairly new layerEffect shader API, yet it seems incapable of setting the depth of pixels...
Hello,
I've been tinkering with PortalComponent on visionOS a bit, but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow the contents of the portal to peek out, similar to the Encounter Dinosaurs experience on the Vision Pro (I assume it also uses PortalComponent?).
I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property), but so far I haven't been able to achieve a perceptible visual difference with it.
If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this.
Thanks for any hints!
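For reference, this is the direction I've been experimenting in, based on my possibly imperfect reading of the visionOS 2 portal-crossing additions. The API names here (PortalCrossingComponent, crossingMode) are assumptions recalled from the docs/WWDC session rather than tested code, and the entities are placeholders.

import RealityKit

// Sketch: let portal content cross the portal plane. Names are recalled from
// the visionOS 2 API and may need adjusting; "world" and "dinosaur" are
// placeholder entities.
func makeCrossingPortal(world: Entity, dinosaur: Entity) -> Entity {
    let portal = Entity()
    portal.components.set(ModelComponent(
        mesh: .generatePlane(width: 1, height: 1),
        materials: [PortalMaterial()]
    ))
    portal.components.set(PortalComponent(
        target: world,
        clippingMode: .plane(.positiveZ),
        crossingMode: .plane(.positiveZ)
    ))
    // Content that should be allowed to poke out of the portal opts in explicitly.
    dinosaur.components.set(PortalCrossingComponent())
    return portal
}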
I want to bring my creations over from UE/Unity to the Apple Vision Pro environment. Is that possible right now? How can I do this?
(Beg your pardon, but I have no programming skills and I've never touched an Apple developer kit. I'm just a 3D artist.)
We scan the room using the RoomPlan API, and after the scan we obtain objects in a white color along with shadows and shading. However, upon updating the color of these objects, we lose the shadows and shading.
Screenshot: RoomPlan scan
Screenshot: after the color update
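One thing we still want to rule out is whether the recolor replaces the original material with an unlit one, and whether tinting a lit material instead would keep the shading. A sketch of that idea (names are placeholders, not our actual code):

import RealityKit
import UIKit

// Sketch: tint a physically based (lit) material instead of swapping in an
// unlit one, so scene lighting still produces shading. "model" is a
// placeholder for one of the RoomPlan object entities.
func recolor(_ model: ModelEntity, with color: UIColor) {
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: color)
    model.model?.materials = [material]
}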
Hi there -
Where would a dev go these days to get an initial understanding of SceneKit?
The WWDC videos linked in various places seem to be gone?!
For example, the SceneKit page at developer.apple.com features a link to session videos that comes up without any results: https://developer.apple.com/scenekit/
Any advice..?
Cheers,
Jan
I am trying to control the orientation of a box in SceneKit (iOS) using gestures. I am using the translation in x and y to update the x and y rotation of the SCNNode.
After a long search I have realised that x and y rotation will always lead to z rotation, thanks to this excellent post: https://gamedev.stackexchange.com/questions/136174/im-rotating-an-object-on-two-axes-so-why-does-it-keep-twisting-around-the-thir?newreg=130c66c673f848a7be2873bf675573a9
So I am trying to get the z rotation it causes and then remove it from my object by applying the inverse quaternion. However, when I rotate the object 90 degrees around X and then 90 degrees around Y, it behaves VERY weirdly.
It is almost behaving as it is in gimbal lock, but I did not think that using quaternion in the way that I am would cause gimbal lock in this way.
I am sure it is something I am missing, or perhaps I am not able to remove the z rotation in this way.
Thanks!
I have added a video of the strange behaviour here: https://github.com/marcusraty/RotationExample/blob/main/Example.MP4
And the code example is here: https://github.com/marcusraty/RotationExample
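For completeness, this is the direction I'm experimenting with now: applying each pan delta as an incremental rotation about the world X/Y axes by pre-multiplying the orientation quaternion, rather than going through Euler angles. A sketch only; the gesture wiring and the boxNode name are placeholders.

import SceneKit
import UIKit

// Sketch: rotate the box with a pan gesture by pre-multiplying incremental
// world-axis quaternions, so X/Y drags don't accumulate into a Z twist.
// "boxNode" is a placeholder for the node being rotated.
func handlePan(_ gesture: UIPanGestureRecognizer, boxNode: SCNNode, in view: SCNView) {
    let translation = gesture.translation(in: view)
    let yaw = simd_quatf(angle: Float(translation.x) * 0.01, axis: SIMD3<Float>(0, 1, 0))
    let pitch = simd_quatf(angle: Float(translation.y) * 0.01, axis: SIMD3<Float>(1, 0, 0))
    // Pre-multiplying applies the deltas about fixed world axes.
    boxNode.simdWorldOrientation = simd_normalize(yaw * pitch * boxNode.simdWorldOrientation)
    gesture.setTranslation(.zero, in: view)
}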
Hi, I'm trying to use metal-cpp, but I'm getting compile errors:
ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162):
template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
if (m_pObject)
{
m_pObject->release();
}
}
Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149):
template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
return (__bridge _Dst)pObj;
#else
return (_Dst)pObj;
#endif // __OBJC__
}
The Xcode project was generated using CMake:
target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME}
PRIVATE
"-Wgnu-anonymous-struct"
"-Wold-style-cast"
"-Wdtor-name"
"-Wpedantic"
"-Wno-gnu"
)
Maybe I need to set some CMake flags for the C++ compiler?