Hello everyone,
I'm developing an app using RoomPlan where users can capture several rooms individually and assign custom names to each room. When I merge these captured rooms into a single CapturedStructure using the StructureBuilder, I want the merged structure to retain the original sections with their custom labels.
However, I'm encountering an issue:
Problem: After merging, the CapturedStructure's sections have different identifiers (UUIDs) compared to the original sections in the individual CapturedRooms. This means there's no straightforward way to map the custom labels from the original rooms to the sections in the merged structure.
What I've Tried:
Mapping by Identifiers: Attempted to map custom labels using the sections' identifiers, but the identifiers change during the merge, making this ineffective.
Mapping by Positions: Tried matching sections based on their center positions (simd_float3), but these also change during the merge (see the sketch after this list).
Modifying Section Labels: Considered changing the sections' label property before merging, but label is read-only and cannot be modified directly; moreover, the merge appears to use the CoreModel data rather than the attributes that can be modified via JSON.
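For reference, the position-based attempt looked roughly like the following (the helper is hypothetical, and it fails precisely because the merge moves the centers):

import RoomPlan
import simd

// Sketch: keep each original section's center alongside its user-given name,
// then assign every merged section the name of the nearest original center.
func remapLabels(customNames: [(center: simd_float3, name: String)],
                 mergedSections: [CapturedRoom.Section]) -> [UUID: String] {
    var names: [UUID: String] = [:]
    for section in mergedSections {
        // Nearest original center wins; this breaks down when the merge
        // translates the rooms, which is exactly the problem described above.
        let nearest = customNames.min {
            simd_distance($0.center, section.center) < simd_distance($1.center, section.center)
        }
        if let nearest {
            names[section.identifier] = nearest.name
        }
    }
    return names
}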
Question:
Is there a recommended way to preserve or map custom section labels when merging multiple CapturedRooms into a single CapturedStructure using RoomPlan? How can I ensure that the custom labels assigned to rooms before the merge are correctly associated with the corresponding sections after the merge?
Any guidance or suggestions would be greatly appreciated!
Thank you!
I stumbled across the function setWorldOrigin(relativeTransform:) on ARSession, which is documented here:
https://developer.apple.com/documentation/arkit/arsession/2942278-setworldorigin
I made a custom ARSession subclass in which I override this function to print and modify the relativeTransform parameter. The print shows that the function is called with updated relativeTransform values, but it seems to have no effect on anything, e.g. the world origin when starting or continuing a scan, the tiny puppet house in RoomPlan, or any tracking position I get from ARKit.
Does anybody have experience with this method, or know which parts of the system are influenced by setWorldOrigin()?
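For context, this is the shape of the subclass I'm describing (the class name is just for illustration):

import ARKit

// Custom ARSession that logs and forwards world-origin updates.
class InspectingARSession: ARSession {
    override func setWorldOrigin(relativeTransform: simd_float4x4) {
        print("setWorldOrigin(relativeTransform:) called with:\n\(relativeTransform)")
        super.setWorldOrigin(relativeTransform: relativeTransform)
    }
}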
I'm using the Apple RoomPlan sdk to generate a .usdz file, which works fine, and gives me a 3D scan of my room.
But when I try to use Model I/O's MDLAsset to convert that output into an .obj file, it comes out as a completely flat model shaped like a rectangle. Here is my Swift code:
let destinationURL = destinationFolderURL.appending(path: "Room.usdz")
do {
    try FileManager.default.createDirectory(at: destinationFolderURL, withIntermediateDirectories: true)
    try finalResults?.export(to: destinationURL, exportOptions: .model)

    // Load the exported USDZ and re-export it as OBJ via Model I/O.
    let asset = MDLAsset(url: destinationURL)
    let objURL = destinationFolderURL.appending(path: "Room.obj")
    try asset.export(to: objURL)
} catch {
    print("Export failed: \(error)")
}
Not sure what's wrong here. According to MDLAsset documentation, .obj is a supported format and exporting from .usdz to the other formats like .stl and .ply works fine and retains the original 3D shape.
Some things I've tried:
changing "exportOptions" to parametric, mesh, or model.
simply changing the file extension of "destinationURL" (throws error)
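Edit: one more avenue that might be worth a try (untested): round-tripping through SceneKit before the Model I/O export, inside the same do/catch as above:

import SceneKit
import SceneKit.ModelIO

// Load the USDZ into SceneKit first, then hand the scene to Model I/O for the OBJ export.
let scene = try SCNScene(url: destinationURL, options: nil)
let assetFromScene = MDLAsset(scnScene: scene)
try assetFromScene.export(to: destinationFolderURL.appending(path: "Room.obj"))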
Hi, in the 2023 WWDC video on RoomPlan it is mentioned that it should be possible to integrate photo/video capture with RoomPlan: https://developer.apple.com/videos/play/wwdc2023/10192/ (at ~2:30)
However, when I attempt to use AVFoundation and AVCaptureSession alongside RoomPlan, I simply get the error "Cannot Record".
So I'm not sure if there is something wrong with my setup/code, or if these two frameworks are actually incompatible. Are there any guides for doing things like this? Am I going in the right direction, or should I try a different approach? Happy to share code if necessary. Thanks!
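Edit, one thought offered tentatively: ARKit generally takes exclusive control of the camera while a session is running, so a separate AVCaptureSession may fail with exactly this kind of error. A sketch of pulling imagery from the ARSession itself instead (delegate wiring assumed):

import ARKit

// ARSessionDelegate callback: every ARFrame carries the captured camera image.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let pixelBuffer: CVPixelBuffer = frame.capturedImage
    // Convert the pixel buffer to a UIImage/JPEG here and store it as a "photo".
}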
Hello everyone, is it possible to embed a USDZ file (RoomPlan output) in an HTML5 website and view it in 3D? Or do I need to convert the USDZ file into a glTF file?
In lots of houses there are different levels that still belong to the same story. What I mean is things like a few steps at the entrance that effectively count as part of the same story.
RoomPlan already does a nice job recognizing them during scanning, but after the StructureBuilder or the optimization step the result is not really satisfying.
Has anyone managed to handle those cases? Or do you have to scan in a specific way to capture such small height differences within a level?
Hello
We are exploring the iOS 17 RoomPlan updates that allow for a custom ARSession to be passed into the RoomCaptureSession via the new initializer.
let roomCaptureSession = RoomCaptureSession(arSession: myARSession)
Currently we use our ARSession to extract sceneDepth from the ARFrames via the delegate callback. This works prior to activation of the RoomCaptureSession via session.run(configuration).
However, when we do call run on the RoomCaptureSession, sceneDepth is no longer present on the incoming ARFrames.
Are these mutually exclusive? Should we expect ARFrame depth data to be present when a RoomCaptureSession is running with the shared ARSession?
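For reference, a minimal sketch of the check we're doing on the shared session's frames (delegate wiring omitted):

import ARKit

// ARSessionDelegate on the shared session: log whether depth survives
// once RoomCaptureSession takes over the configuration.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if let sceneDepth = frame.sceneDepth {
        print("sceneDepth present: \(sceneDepth.depthMap)")
    } else {
        print("sceneDepth missing on this frame")
    }
}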
The RoomPlan API makes it possible to serialize and de-serialize CapturedRoom objects. This opens up the possibility to modify a CapturedRoom (e.g. deleting surfaces/objects) in its de-serialized state and serialize it as a new CapturedRoom. All modified attributes are loaded accordingly; so far, so good.
My problem starts with the StructureBuilder and its merge function capturedStructure(from:).
This function ignores any modifications to attributes of a CapturedRoom. The only data that is considered is encoded in the CoreModel attribute (which is not mentioned in the official documentation).
If someone has more information or a working approach for modifying CapturedRooms, please let me know.
Additionally, if there is documentation of the CoreModel attribute anywhere, please post a link here.
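For anyone searching later, the round trip described above is plain Codable; a minimal sketch (capturedRoom and structureBuilder assumed to exist):

import Foundation
import RoomPlan

// CapturedRoom is Codable, so it survives a JSON round trip.
let json = try JSONEncoder().encode(capturedRoom)
// ... modify the JSON here, e.g. delete surfaces or objects ...
let modifiedRoom = try JSONDecoder().decode(CapturedRoom.self, from: json)

// The merge, however, appears to consider only the undocumented CoreModel data,
// so modifications made this way do not survive into the merged structure.
let structure = try await structureBuilder.capturedStructure(from: [modifiedRoom])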
Hello!
We're having an issue in our app, which implements multi-room scanning via RoomPlan: the ARSession world origin is shifted to wherever the RoomCaptureSession is run again (e.g. in the next room).
To clarify a few points:
We are using the RoomCaptureView, starting a new room using roomCaptureView.captureSession.run(configuration: captureSessionConfig) and stopping the room scan via roomCaptureView.captureSession.stop(pauseARSession: false)
We are re-using the same ARSession, which is passed into the RoomCaptureView like so:
arSession = ARSession()
roomCaptureView = RoomCaptureView(frame: .zero, arSession: arSession)
Any clue why the AR world origin is reset? I need it to be consistent for storing per-frame camera positions.
Thanks!
After scanning the room, I use the .export method, passing a ModelProvider. Then I import the USDZ into an SCNView and continue processing the scene. I would like to apply a texture to the walls and floor, but I can't, because they don't contain texture coordinates, and creating them from the USDZ file is not easy. I tried combining the CapturedRoom data for just the walls and floor and adding the texture coordinates myself; I'm managing, but I'm struggling a lot. Isn't there an easier way to do this? Is a ModelProvider for surfaces planned for the future? If so, where can I access the RoomPlan beta documentation?
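One untested idea, in case it saves someone the manual work: Model I/O can generate texture coordinates for meshes that lack them, which might be easier than rebuilding surfaces from the CapturedRoom data:

import ModelIO

// Generate UVs for every mesh in the exported USDZ that has none.
let asset = MDLAsset(url: usdzURL) // usdzURL: the RoomPlan export
for case let mesh as MDLMesh in asset.childObjects(ofClass: MDLMesh.self) {
    if mesh.vertexAttributeData(forAttributeNamed: MDLVertexAttributeTextureCoordinate) == nil {
        // Computes an unwrap; quality varies, but SceneKit then has UVs to map textures onto.
        mesh.addUnwrappedTextureCoordinates(forAttributeNamed: MDLVertexAttributeTextureCoordinate)
    }
}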
The structure builder provides walls and floors for each captured story, but no ceiling. For my use case it is necessary that the scanned geometry is closed, to open up the possibility of placing objects on the ceiling, for example, so an estimated ceiling for the different rooms within a story is important.
Is there any indication that Apple has something like this on the roadmap? I think it could open up opportunities, especially for industrial applications of the API.
If somebody has more insight on this topic, please share. :)
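Edit: until something official exists, a stopgap I've been considering is to fake the ceiling myself. A sketch only, assuming the exported scene is post-processed in SceneKit and that floorNode/wallHeight come from your own traversal of the scene:

import SceneKit

// Approximate a ceiling by cloning a story's floor node and lifting it to the top of the walls.
func addEstimatedCeiling(above floorNode: SCNNode, wallHeight: Float, in scene: SCNScene) {
    let ceiling = floorNode.clone()
    ceiling.name = "EstimatedCeiling"
    ceiling.simdPosition.y += wallHeight
    scene.rootNode.addChildNode(ceiling)
}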
When I use RoomPlan, I notice performance issues in larger rooms or rooms with a lot of furniture. Is there a way to configure RoomPlan to focus only on detecting surfaces (windows, door openings, and walls) during scanning, possibly through an argument or setting? Filtering afterward is an option, but it doesn't address the slowdown during the scan.
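Edit, for what it's worth: the after-the-fact filtering is trivial; the open question is whether the scan itself can be restricted. A sketch over a finished CapturedRoom:

import RoomPlan

// Keep only the surface categories of interest from a completed scan.
let surfacesOfInterest = capturedRoom.walls
                       + capturedRoom.doors
                       + capturedRoom.windows
                       + capturedRoom.openings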
Hello everyone,
I am struggling to find a solution for the following problem, and I would be glad and thankful if anyone can help me.
My Use Case:
I am using RoomPlan to scan a room. While scanning, there is a function to take pictures. The positions from which the pictures are taken are saved (in my app they are called "points of interest" = POIs).
This works fine for a single room, but when adding a new room and combining the two of them using:
structureBuilder.capturedStructure(from: capturedRooms)
the first room is transformed, and thus moved around, to fit into the shared world space.
The points are not transformed with the rest of the room, since they are not part of the room's structure, which is fine. But how can I transform the POIs too, so that they end up at the correct positions where they were taken?
I used:
func captureSession(_ session: RoomCaptureSession, didEndWith data: CapturedRoomData, error: Error?)
to get the transform matrix from "arFrameReferenceOriginTransform" and apply it to the POIs, but it still seems that this is not enough.
I would be happy for any tips and help!
Thanks in advance!
My update function:
func updatePOIPositions(with originTransform: simd_float4x4) {
    for i in 0..<poisOldRooms.count {
        let poi = poisOldRooms[i]
        // Build a homogeneous (x, y, z, 1) position from the stored camera origin.
        let originalPosition = SIMD4<Float>(
            poi.data.cameraOriginX,
            poi.data.cameraOriginY,
            poi.data.cameraOriginZ,
            1.0
        )
        // Multiplying by the origin transform yields the position in the new space.
        let updatedPosition = originTransform * originalPosition
        poisOldRooms[i].data.cameraX = updatedPosition.x
        poisOldRooms[i].data.cameraY = updatedPosition.y
        poisOldRooms[i].data.cameraZ = updatedPosition.z
    }
}
For RoomAnchors there are different mesh classifications for mesh anchors, but only walls and floors are supported by the geometries() function.
Given this, how can I get information about the other mesh classifications?
I have two questions:
1. The walls of the exported USDZ model have no thickness.
2. Can the exported model format be set to USDZ or OBJ?
Thank you.
We have an issue with Apple RoomPlan: on a regular basis, the captured objects are not positioned correctly in the model, which happens in 50% of our cases and makes the feature almost useless. Is there any idea how to solve that problem?
In RealityKit on visionOS, I scan the room and use the resulting mesh to create occlusion and physical boundaries. That works well, and I can place cubes (with physics enabled) onto it.
However, I also want to update the mesh with versions from new scans, and that makes all my cubes jump.
Is there a way to prevent this? I get that the inaccuracies will produce a slightly different mesh, and I don't want to anchor the objects, so my guess is I need to somehow determine a fixed floor height and alter the scanned meshes so they adhere to that fixed height.
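To make that idea concrete, roughly what I have in mind (untested sketch; meshEntity stands for whatever entity wraps a new scan's mesh):

import RealityKit

var fixedFloorY: Float?

// Snap each newly scanned mesh so its lowest point matches the floor height
// remembered from the first scan.
func pinToFixedFloor(_ meshEntity: Entity) {
    let minY = meshEntity.visualBounds(relativeTo: nil).min.y
    if fixedFloorY == nil { fixedFloorY = minY } // the first scan defines the floor
    meshEntity.position.y += fixedFloorY! - minY
}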
Any thoughts or ideas appreciated
/Andreas
Is it possible to create a RoomPlan capture that includes the texture of the room's walls, or is there some way to combine ObjectCapture results with RoomPlan results?
So in the WWDC23 video on the RoomPlan enhancements, it says that it is now possible to set a custom ARSession for the RoomCaptureSession. But how do you actually set the configuration for the custom ARSession?
init() {
    let arConfig = ARWorldTrackingConfiguration()
    arConfig.worldAlignment = .gravityAndHeading
    arSession = ARSession()
    roomCaptureView = RoomCaptureView(frame: CGRect(x: 0, y: 0, width: 42, height: 42), arSession: arSession)
    sessionConfig = RoomCaptureSession.Configuration()
    roomCaptureView.captureSession.delegate = self
    roomCaptureView.delegate = self
}
However, I keep getting an error that self is being used in a property access before all stored properties are initialised.
What can I do to fix it?
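Edit: one way around that error, for anyone who finds this later, is to finish initializing every stored property (and call super.init() if there is a superclass) before touching self, and only then set the delegates; the custom ARKit configuration can be run on the session afterwards. A sketch under those assumptions, not verified against RoomPlan's internal behavior:

import ARKit
import RoomPlan

init() {
    // 1. Initialize all stored properties before any use of self.
    arSession = ARSession()
    roomCaptureView = RoomCaptureView(frame: CGRect(x: 0, y: 0, width: 42, height: 42), arSession: arSession)
    sessionConfig = RoomCaptureSession.Configuration()
    super.init() // assuming a superclass such as NSObject; drop if there is none

    // 2. self is now fully formed, so delegates can be assigned.
    roomCaptureView.captureSession.delegate = self
    roomCaptureView.delegate = self

    // 3. Run the custom configuration on the session you own. Whether
    //    RoomCaptureSession later overrides parts of it is not documented.
    let arConfig = ARWorldTrackingConfiguration()
    arConfig.worldAlignment = .gravityAndHeading
    arSession.run(arConfig)
}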
Is there a way to access the coordinates of the camera while scanning the room with RoomPlan?
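A sketch of one approach, assuming you can observe the underlying ARSession (e.g. by passing in your own on iOS 17): each ARFrame carries the camera pose.

import ARKit

// ARSessionDelegate: the camera's world-space pose is the frame's transform.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let transform = frame.camera.transform                // 4x4 pose matrix
    let position = simd_make_float3(transform.columns.3)  // translation column
    print("camera at \(position)")
}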