Pedro Lopes keynote at VR Summit 2024 "VR’s Ultimate Display? It is the integration with your body" 

HCintegration
Pedro Lopes is a Professor of Computer Science at the University of Chicago (lab.plopes.org).
Since Ivan Sutherland’s 1960s vision for virtual and augmented reality (AR/VR), the Ultimate Display, many research and industrial efforts have focused on advancing the technology needed for users to be fully immersed in digital experiences, where they can simulate entire realities (e.g., adaptive and safe environments for practicing a new skill, like sports, painting, or cooking, or engaging gaming experiences).
Each decade of computing brought revolutions that allowed the infrastructure required for AR/VR to decrease in cost and, more importantly, increase in convenience. In Sutherland’s 1960s, the Ultimate Display was a headset tethered to a moving pole, requiring a specialized room to run it. In the 1970s, advances in computing and optics enabled headsets to display more than simple vector graphics, rendering scenes with a sense of depth that began to resemble everyday experiences. Later, in the 1980s, hardware prototypes started to track not only the user’s head movements but also their hands (e.g., the DataGlove), bringing more of the user’s body into VR. By the early 1990s, powered by advances from the first decade of desktop-sized computers, the infrastructure needed to run a VR experience was shrinking to the size of a personal computer, rather than an entire specialized workstation. But it was the next revolution in computing, mobile devices (e.g., smartphones), that changed VR the most, enabling VR headsets to shrink dramatically in size while still packing the hardware for more realistic graphics and the sensors to track the user’s movements; even the hands can now be tracked without cumbersome gloves.
By the 2020s, fueled by the explosive increase in computing speed predicted by Moore’s law, shouldn’t the average AR/VR headset be our best contender for the title of the Ultimate Display?
No, because this is not what Sutherland envisioned, nor what we all desire, for an Ultimate Display: “it should serve as many senses as possible (…) smell, or taste. (…) kinesthetic display”.
This begs the question: why are we not there? How have 60 years of computing advances not led us to the techniques needed to create a headset that can produce sensations of touch, force, smell, or taste?
When it comes to adding these missing senses, existing approaches clash with the convenience offered by today’s VR: headsets are portable, untethered, and do not encumber the user’s hands. Conversely, to create a realistic sense of force when a user touches a virtual wall, one must push back against the user’s hand with motorized actuators, such as exoskeletons. Similarly, to recreate the surface of the virtual wall, we must attach vibration motors to the user’s fingerpads, blocking them from feeling any real surface around the user. Convenience decreases as we add more senses.
This technical roadblock exists because the devices needed to physically create sensations are themselves mechanical: because they have moving parts, they do not scale down well (Moore’s law applies to devices made from semiconductors, not to moving parts).
I posit there is a way to move past this roadblock by integrating interactive devices with the user’s body. Instead of using large mechanical devices to create physical sensations, we directly stimulate the nerves responsible for feeling the desired sensations using much smaller electronic devices.
The first key advantage of this body-device integration is that it puts forward a new generation of miniaturized devices, allowing us to circumvent traditional physical constraints. For instance, our devices based on electrical muscle stimulation create realistic haptic feedback (e.g., forces in VR/AR) while circumventing the constraints imposed by robotic exoskeletons. A second key advantage is that integrating devices with the user’s body allows new interactions to emerge without encumbering the user’s hands. Using our approach, we demonstrated how to create tactile sensations in the user’s fingerpads without putting any hardware on the fingerpads; instead, we intercept the fingerpad nerves from the back of the user’s hand.
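To make the force-feedback idea concrete, here is a minimal sketch of the kind of control loop electrical muscle stimulation implies: when the tracked hand penetrates a virtual wall, the system delivers a short, per-user-calibrated stimulation pulse that actuates the opposing arm muscle, which the user perceives as the wall pushing back. The Stimulator class, its parameters, and the hand-tracking callback below are hypothetical stand-ins for illustration, not the lab’s actual hardware or software.

import time

class Stimulator:
    """Hypothetical driver for a medically compliant EMS device (assumption)."""
    def pulse(self, channel: int, intensity_ma: float, duration_ms: int) -> None:
        # A real driver would send this command to the hardware over USB/serial.
        print(f"EMS ch{channel}: {intensity_ma:.1f} mA for {duration_ms} ms")

def penetration_depth(hand_x: float, wall_x: float) -> float:
    """Meters by which the tracked hand has passed the virtual wall plane."""
    return max(0.0, hand_x - wall_x)

def haptic_loop(stim: Stimulator, get_hand_x, wall_x: float = 0.5) -> None:
    """~100 Hz loop that converts wall penetration into calibrated EMS pulses."""
    CALIBRATED_MAX_MA = 8.0  # per-user ceiling, set below the pain threshold
    while True:
        depth = penetration_depth(get_hand_x(), wall_x)
        if depth > 0.0:
            # Deeper penetration -> stronger pulse (clamped), so the perceived
            # counter-force grows as the user pushes into the virtual wall.
            intensity = min(CALIBRATED_MAX_MA, 40.0 * depth)
            stim.pulse(channel=0, intensity_ma=intensity, duration_ms=50)
        time.sleep(0.01)

The contrast with an exoskeleton is the point of the sketch: the counter-force comes from the user’s own muscles, so the only worn hardware is a pair of electrodes and a stimulator small enough to disappear into a wearable.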
So, what benefits can we reap from an Ultimate Display that can create temperatures, smells, touch, or even forces? I argue that this unlocks physical modes of reasoning with AR/VR, going beyond purely symbolic thinking; “an involvement with this physical world” was indeed an aim that Sutherland’s vision also sought. For example, we have engineered a set of devices that control the user’s muscles to provide tacit information, such as for learning new skills (e.g., playing the piano or sign language).
Finally, I posit that these bodily integrated devices are the natural successors to wearable interfaces, allowing us to investigate how interfaces will connect to our bodies in a more direct and physical way.

Published: 6 Oct 2024
