I’m working on real-time object detection using YOLOv8, but I only need to detect objects in approximately 40% of the screen area. Is it possible to limit the captureOutput(_:didOutput:from:) delegate method so it captures only that specific region of the screen?
If this isn’t feasible, I’m considering an approach where the full-screen pixel buffer is captured and then cropped to the target area before running detection. However, I’m concerned about how this might affect real-time performance.
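For reference, here’s roughly the crop step I have in mind (a minimal Core Image sketch; the FrameCropper type and the BGRA output format are just my placeholders):

```swift
import CoreImage
import CoreVideo

/// Sketch: crop a full-frame pixel buffer to a region of interest before
/// running detection. `roi` is in the buffer's pixel coordinates, using the
/// lower-left origin that Core Image expects.
final class FrameCropper {
    private let ciContext = CIContext()

    func crop(_ pixelBuffer: CVPixelBuffer, to roi: CGRect) -> CVPixelBuffer? {
        let cropped = CIImage(cvPixelBuffer: pixelBuffer).cropped(to: roi)

        // Render into a fresh BGRA buffer sized to the region.
        var output: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(roi.width),
                                         Int(roi.height),
                                         kCVPixelFormatType_32BGRA,
                                         nil,
                                         &output)
        guard status == kCVReturnSuccess, let output else { return nil }

        // Shift the cropped region so it lands at the new buffer's origin.
        let shifted = cropped.transformed(by: CGAffineTransform(translationX: -roi.minX,
                                                                y: -roi.minY))
        ciContext.render(shifted, to: output)
        return output
    }
}
```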
I’d appreciate any insights on how to maintain real-time performance or suggestions for better alternatives. Thank you!
Hello @keisan1231,
If you are using Vision's VNCoreMLRequest to handle prediction with your model, set its regionOfInterest to the normalized region you are interested in; Vision will then only process that portion of each frame.
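Here is a minimal sketch of that setup (YOLOv8Detector is a placeholder for whatever class Xcode generates from your model, and the left-40% region is just an example):

```swift
import Vision
import CoreML

// Build the request once and reuse it for every frame.
// `YOLOv8Detector` is a placeholder for your generated model class.
func makeDetectionRequest() throws -> VNCoreMLRequest {
    let coreMLModel = try YOLOv8Detector(configuration: MLModelConfiguration()).model
    let request = VNCoreMLRequest(model: try VNCoreMLModel(for: coreMLModel))

    // Normalized coordinates with a lower-left origin; this example keeps
    // the left 40% of the frame. Adjust to match your target area.
    request.regionOfInterest = CGRect(x: 0, y: 0, width: 0.4, height: 1.0)
    return request
}

// Call this from captureOutput(_:didOutput:from:) with each frame's buffer.
func detect(in pixelBuffer: CVPixelBuffer, with request: VNCoreMLRequest) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])

    // Note: bounding boxes come back normalized to the region of interest,
    // so remap them if you need full-frame coordinates.
    if let observations = request.results as? [VNRecognizedObjectObservation] {
        for observation in observations {
            print(observation.labels.first?.identifier ?? "unknown",
                  observation.boundingBox)
        }
    }
}
```

Because Vision handles the cropping internally, you shouldn't need a manual crop-and-copy step, which should help with your real-time performance concern.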
Best regards,
Greg