Incredible code! I've been checking out your work because we need to build something similar at work. I've been studying Swift + Xcode + ARKit for more than a week, and you've done an excellent job. That said, I still have no idea how I'd achieve something similar, hahaha. Thanks for the inspiration! P.S.: I need to build something like this, but without plane detection and on vertical rather than horizontal surfaces. I have to identify a rectangular object with known dimensions and replace it with a node.
This code is not giving proper results on the iPhone XR. The bounding box of the VNRectangleObservation is not being drawn completely; its width is less than the actual object's. On all other devices it works.
We discussed this in a GitHub issue on the project, but no one has made a PR to include an update. @yeldarby says they were able to figure it out, so you might want to get in touch with them. If you have a solution and want to submit a PR, please do.
Depending on what you're trying to detect, possibly. The Vision API can detect hand-drawn text (see www.appcoda.com/vision-framework-introduction/). It could also detect hand-drawn rectangles if they look rectangular enough, though you'll likely have to play with some of the settings the API provides. I'm not familiar enough with CoreML's API, but if there's an API available to detect what you're looking for in a static image, it can be applied to AR. You just need to translate the coordinates inside the detected image into points you can perform a hit-test on in AR.
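To make that concrete, here's a rough sketch of the approach using `VNDetectRectanglesRequest` plus a feature-point hit-test (so no plane detection is needed). The function name, tuning values, and the decision to hit-test only the rectangle's center are my own assumptions, not part of the original project:

```swift
import ARKit
import Vision

// Hypothetical helper: given the current ARFrame and the scene view,
// detect a rectangle in the camera image and hit-test its center
// against AR feature points to place content in world space.
func detectAndAnchorRectangle(in frame: ARFrame, sceneView: ARSCNView) {
    let request = VNDetectRectanglesRequest { request, _ in
        guard let observation = request.results?.first as? VNRectangleObservation else { return }

        // Vision returns normalized coordinates with the origin at the
        // bottom-left; convert to view coordinates (origin top-left).
        let center = CGPoint(
            x: observation.boundingBox.midX * sceneView.bounds.width,
            y: (1 - observation.boundingBox.midY) * sceneView.bounds.height
        )

        DispatchQueue.main.async {
            // Hit-test against feature points, so no detected plane is required.
            if let result = sceneView.hitTest(center, types: .featurePoint).first {
                let position = result.worldTransform.columns.3
                let node = SCNNode(geometry: SCNSphere(radius: 0.01))
                node.position = SCNVector3(position.x, position.y, position.z)
                sceneView.scene.rootNode.addChildNode(node)
            }
        }
    }

    // Loosen the detector for imperfect (e.g. hand-drawn) rectangles.
    // These values are guesses; tune them for your use case.
    request.minimumConfidence = 0.5
    request.quadratureTolerance = 20 // allowed corner-angle deviation, in degrees

    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .right)
    try? handler.perform([request])
}
```

You'd call this from `session(_:didUpdate:)` (ideally throttled, since Vision requests are expensive per frame). For the vertical-surface case, `.featurePoint` hit-tests work on walls too, which is why no plane detection is needed here.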