This video shows an example use of the HoloLens in a plant environment for a typical maintenance task. To help keep the user's hands free for the work, we implemented voice commands. See communities.be...
WOW! Awesome technology. I like the way your app gives step-by-step instructions, with verification that each step has been completed. Maintenance on high-pressure fluid/gas lines could have a horrific result if the person doing the maintenance was forgetful, or just tired, and tried to remove the pressure valve while skipping a crucial step such as closing the supply line or evacuating the back pressure.
Hi. Should we use spatial mapping to render the full environment? Can you please explain the process for doing this? This is the solution I am looking for, and I am very new to Unity3D and HoloLens. Please help.
You could definitely change reality as you wished without necessarily understanding what you were doing, at least with the assistance of a machine designed to help maintenance staff carry out their duties.
Starting to think this holo info for workers will be used for when robots take over these jobs. All he did was turn some valves and replace a part, something I can see robots doing in seconds.
Interesting. This video shows a typical maintenance task: such tasks are known and well described, and could indeed eventually be performed by an automated system. Augmented reality assistance will likely prove much more valuable when assisting users who need to fix a problem for which no solution is currently known (see our other video ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-lYptlaO9rKw.html ), and for which an automated robot would be useless.
I think you are right. AR is currently used to aid the human user while location and mapping technology evolves. Once the spatial context is conquered, robots can take over. And it does not matter that circumstances may not be pre-determined; machine learning will deal with that.
Robots are pricey and can't handle unexpected situations. For $3k you can turn a human into a slow robot that handles its own maintenance and transit, does the work, and accepts payment weeks after the task is completed.
With this technology, we have to start somewhere to understand the challenges we'll face in the real world. It can sometimes look basic, but even a basic activity can reveal what is necessary to improve the solution.
Very good question. The answer is no, we are not using markers. The HoloLens is very good at calculating its own position in real time, so no marker is needed for tracking. But we still need to know where to display the augmentations with respect to the pipes. To achieve that, we start with a 3D model (or 3D mesh) of the pipe setup, and the first phase of the augmentation session consists of aligning the 3D model with the physical pipe setup. That quick setup step only needs to be done once, since the HoloLens can remember rooms it has been in. Subsequent augmentation sessions can therefore automatically reuse the alignment done the first time and still display the augmentations in the right place.
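The reply above doesn't say how the one-time alignment is actually computed, so as a hedged illustration only (not the app's actual code): one common way to align a 3D model with a physical setup is to have the user indicate a few known reference points on the real pipes, then solve for the rigid transform (rotation R, translation t) that best maps the model's corresponding points onto them, e.g. with the Kabsch algorithm:

```python
import numpy as np

def rigid_align(model_pts, world_pts):
    """Find rotation R and translation t such that R @ p + t maps
    model points onto world points (least-squares, Kabsch algorithm).

    model_pts, world_pts: (N, 3) arrays of corresponding points,
    e.g. valve centers the user tapped in the physical scene.
    """
    model = np.asarray(model_pts, dtype=float)
    world = np.asarray(world_pts, dtype=float)
    mc, wc = model.mean(axis=0), world.mean(axis=0)   # centroids
    H = (model - mc).T @ (world - wc)                 # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = wc - R @ mc
    return R, t

# Example: a model rotated 90 degrees about Z and shifted.
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
world = model @ Rz.T + np.array([2.0, 3.0, 0.5])

R, t = rigid_align(model, world)
aligned = model @ R.T + t
print(np.allclose(aligned, world))  # True
```

Once R and t are known, every augmentation defined in model coordinates can be rendered at the corresponding physical location; persisting that transform per room would explain why later sessions need no re-alignment.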
Actually, I have not seen this precision in other apps yet. The most precise I have seen so far is Lowe's kitchen app, and they also used markers for the first use. It is quite brilliant that you have achieved it.
If I understand correctly, I have to load the pipe-and-valve setup and align it with the one I see through my HoloLens before starting the maintenance. If I manage a number of installations/setups in my facility, I need to store them somewhere so I can load the right one when I am in front of it. Do you use a file system, or something like a QR code, to load the right one before a maintenance task?
We did not use any other SDKs for tracking or augmentation: only Unity for the display, and the HoloLens itself for tracking, voice recognition, and hand interaction.
But how does the manual calibration work? Just dragging the virtual model on top of the real one without any recognition? That seems insufficiently accurate...