I saw this as a student in 2015 and now I work professionally as a VR designer and I still think about this, like, monthly at least. So much good thinking here and it still holds up, so thank you, Mike!
This is a truly impressive overview of VR UX guidance. I love the references, the presentation pace and editing, and all the visual examples broken down into simple overviews without patronizing the viewer. I'm so pleased to see this has led to your job at a pioneer in the VR/AR space. I'll definitely be using this video, and your others, to help stir interest and understanding of how VR can be properly used at my company. I hope you continue to produce content on your own, rather than letting such a large company envelop you.
Excellent work. Looking forward to learning more. I wish there was a site dedicated to Virtual, Augmented, and Mixed reality design patterns, research, and best practices built in the clear, concise manner you've presented here. One can dream...
+Xin Chung Hey Xin, long time no see. It's so great to see someone like Mike Alger get a position at Google... Look forward to seeing the future designs, Mike, and thanks for such an engaging summary of your thoughts and design work.
This has got to be one of the most impressive master's theses ever written. Make that 3D OS of yours, and you've changed the world, making you a billionaire in the process. Keep it up!
6:14 "... maybe it needs to be useful for an airplane seat." Holy shit, I did not think about that. VR would basically allow anyone an office environment in transit. I've been pro AR and really not too enthusiastic about the whole virtual world thing for some time, but the ability to completely shut out all external input is actually extremely useful.
Sometimes people separate AR and VR and say one is better than the other. When we're talking about head mounted displays, they're the same thing. Covering your view in AR makes it VR. Putting cameras on VR makes it AR. You're choosing between having black vs transparency. That's part of what I was getting at @14:00.
Wow! This is awesome work. I love the visualizations. You gave a high quality presentation with tons of showmanship. I thought your button design took the cake, and I'm a huge fan of your thought process for determining the color palette and especially loved the 'push-through' effect past the barrier. Seems close to the springiness of iOS scrolling, like when trying to scroll upward into the top of a list or webpage. Great way of making years of computer interface progress easy to understand and digest quickly. I would say the final office environment, while showing a large variety of interactions, did not gel together quite as well as your simple cinema player interactions set. You're pursuing a pretty ambitious problem, sort of translating all of iOS into VR, so congrats on the success so far.
This is still so relevant and should have a lot more views! Great information on user ergonomics, expectations, and baseline experience considerations.
The HoloLens's additive-blending limitation could be resolved with a calculator-style black LCD on the front lens that silhouettes virtual objects, masking background light from the surroundings so that the virtual scene reflected on the inside would be the only light reaching the eye. It likely wouldn't fit or align perfectly without being adjusted to your eyes and head shape/position.
Also, a prototype VR OS could easily be built as a replacement Windows shell, where you tell Windows to use something besides explorer.exe at startup. It was easier pre-Windows 10, but it can still be done without much added difficulty.
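For reference, the usual mechanism for that is the Shell value under the Winlogon registry key; something like the fragment below (hedged: the vr_shell.exe path is a made-up placeholder, and per-user shell overrides can depend on Windows edition and policy):

```
Windows Registry Editor Version 5.00

; Replace the per-user shell (normally explorer.exe) with a custom one.
; "C:\VR\vr_shell.exe" is a hypothetical placeholder executable.
[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
"Shell"="C:\\VR\\vr_shell.exe"
```

Deleting the Shell value restores the default explorer.exe shell at the next logon.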
I've long been averse to the confines of 2D space. I am an amateur graphic designer and have always looked at the web's design as an ancient relic. I was first turned on to tag-based systems as a replacement for folder hierarchies. That got me thinking about how the "desktop" was an emulation of exactly that: the top of a desk. I don't need to tell you how dated that is. Having information exist in a web of metadata instead of a system of nested folders opened my mind to how we could redesign our operating systems.

I recently tried VR on the Vive and knew immediately this was a step towards what I had been flirting with. Since that first experience, I've had an epiphany. With screen real estate no longer a constraint on design, applications no longer need their entire capacity displayed at once; in fact, I'd argue the less displayed, the better. To effectively manipulate and interact with the interface, the intention behind its use must be predetermined, so that only the relevant controls are presented. This simple idea seems dramatically pivotal: minimize the signal loss from intention to execution and you open up a whole world of possibility. I don't see a need to distinguish between desktop and browser.

These thoughts led me to this video, to see if the world was onto it. This is important. Please consider connecting with me at richrusset@gmail.com. I need to pursue all avenues in relation to the above ideas.
It's refreshing to see someone actually take this medium, which for a long time has seemed, to me, like a gimmick lost to the depths of video games (not that I'm complaining about that, as a gamer), and look at how it can be used in a productivity sense. I especially liked how you broke everything down simply and eloquently. It gives me real hope for the more practical side of how to use VR in business and home productivity. As I live alone in a small flat, I'd like to have a wall with a 50-inch TV for watching movies, or space for enough PC monitors to hold all of my tasks, but I've always thought VR offers, in many ways, a better experience in this regard, and you have shown just how that kind of thing could work. In layman's terms: thanks :) and the colour scheme was good xD
Mr Mike, this is the third video of yours I've watched. YOU ARE AMAZING. It's 2021, and everything you said has become real: VR and AR are advancing rapidly, ways and methods of interacting are friendlier now, and many apps and programs are competing for attention. Thank you a TON. Bless you.
Great job in communicating these ideas succinctly and visually! Honestly one of the most polished and informative video talks I have ever seen. Wish there were more of them coming. Good luck at Google!
I really enjoyed the study of zones you summarized. The button and other UI components in the OS workflow demo seemed really intuitive. Thanks for sharing your study.
Great video. In future projects I suggest letting even more people test your prototypes of VR OS-like applications. All mad scientists need test subjects. And let VRgins try too; their input is usually the most valuable.
Hey dude, I hope you ended up getting hired somewhere good. You're really intelligent and great at abstracting all of these amazing concepts and making them relatable for laymen like me without skimping on the detail. Keep it up, you're helping change the world.
This is brilliant. Years ahead of its time in terms of virtual trends (in application, implementation, even ideation). Thank you for the video, @Mike Alger
Your work is very inspiring! I hope there are more details on how you made those mockups... I'm trying to make similar mockups in After Effects but there are many things that I can't figure out :) It'd be nice if you could write a post about it!
A very nice and pleasant video to watch. I like the little bits of info you put in here and there, which really matter but come across lightly as you go on. Congrats, and help bring Google VR forward :)
I discovered you through your cross-segmented 3d model of the human anatomy while procrastinating my Biology homework. Now, I just want to do what you do :(
Wow, I'm very, very impressed. This was such an important topic for me, and you just explained the whole idea of VR interface design. You might be a genius, and a humble person too. I appreciate this!!
Congratulations on the video! It all makes sense to me; the content zones are really important, and you demonstrated them really well. For designers who are leaving 2D and entering 3D space, it helps a lot. :D
5:00 to 9:05 It would be neat to have an "active" mode where things are intentionally farther away than usual for people who would like to stand at work or turn around, and generally avoid the whole Wall-E phenomenon. One of the biggest benefits I see for VR working is that I can choose NOT to sit down! I agree that for mass marketing and work applications, using it sitting is crucial, but I would hate for that to make applications too delicate for the wobbly arm of someone standing up to reliably hit a button.
I'm excited, too. When it comes to VR, Google does more than Cardboard, though. Remember the other projects going on, like the Tilt Brush and Earth demos on the Vive, resources like Jump or Tango, and the myriad existing Google things like WebVR in Chrome or 360 videos on YouTube, for a few examples.
Yeah, there were some I didn't know, but most are related to CB. Anyway, I like the way Google is pushing VR to the masses, you know, democratizing technology :)
5:43 Well, now the Oculus Rift and HTC Vive have an FOV of about 110 degrees, so that should be useful. Also, the "VR-OS.V0.3 Pre-Alpha" (I don't know why I gave it a name) looks awesome; I would use it, and I'm probably going to make a mock-up of it in Unity.
Hand gestures are for immersive apps, not for a desktop environment. The desktop will be controlled with our eyes and a touchpad that captures finger movements on a virtual keyboard and turns eye tracking and AR/VR overlays on and off. Hand movements aren't practical because we are more productive if we can do real-world tasks with them, and in some environments you just don't want to move your hands: while you're cooking, or while you're outside.
If you work in a call center, get a new customer every 5 minutes, and have to do all kinds of kung fu to operate the CRM software, you'd be less productive than with a mouse and keyboard.
You could design a chair that helps with the arm movements, but do we really need the classical chair anymore? The input devices were created with the desktop and chair in mind, but VR/AR allows us to rethink the position we work in.
This is for sure great foundational work, but from a user's point of view I'd argue like this: I'm not interested in this because it gives me a workflow bodily similar to what I had before, just with a roomier feeling around it. I'm interested in it because it is, or can be, real space, without the limitation of being glued to customary ways of working. Work, even administrative work, can for the first time in history be adapted to physically more natural, healthier, and of course highly personal forms. As a designer, I would stop thinking in categories like middleground and background, or at least put some kind of "doors" there. Better yet, neighbouring territories of friends or co-workers.
Mike Alger's video delves into pre-visualization methods for VR interface design, covering content zones and interaction types. He envisions the possibility of a full virtual reality operating system with adequate resources. Check out this insightful exploration of VR interface design.
Great video! At 43 years of age, I saw most of this coming at the time. Allow a side comment on laziness: combined with findings from workplace ergonomics, the massive amount of sensor data might help prevent back problems, etc. :-) You could analyse the sensor data and help the user correct his posture during the day, or do workouts now and then :-) The worries of an older guy... :-)
Hmm, that's an interesting one. For the first consumer devices specifically, it seems like we'll have the head's and hands' rotation and position, but extrapolating the elbows, shoulders, and spine curve wouldn't be exact. Individual applications for gamified, assisted physical therapy are the starting point for this right now, though, and exercise is an option (although a little bit of a foggy, sweaty one, atm). This may be more viable if people are SO accepting of VR that full-body-tracked solutions become commercially popular, but it looks like we'll need to start with what they'll have first.
That was great, Mike, thank you. You've given me a different perspective. I'm a game designer and developer, and believe me, I will use this incredibly useful information to create my gamers' experience.
This is brilliant. It makes me really interested in developing for VR, but it seems to be a ton of programming and level/game design. How high is the barrier to entry for learning to create interfaces like this? I feel like picking it up in my free time, but I'm gauging whether it's worth doing all of this if people are already insane interface creators and my time would be wasted. A virtual desktop interface with custom screens would be a huge game changer for the desktop environment, since it goes beyond buying more monitors: you can create your own. I wonder if there will be an OS specifically designed for VR in the future? Hopefully a great one comes out soon; here's hoping. Amazing video mate, keep it up.
+Anjeleo Villarruz The barrier to entry is much lower than you're imagining. It seems like it would be complicated or crazy because of those movies, but it's just the same as any other design. You just get to feel cool because it's 3D. If you're worried about competition, there is none. Basically nobody doing volumetric interface stuff now was doing it their whole career. There are no experts. And the ones who ARE doing it are the ones who would be helping you and reading/watching any findings you have as well. As far as learning curve, most people start out by learning Unity and C#, which are thankfully free things with plenty of online help and tutorials. But as you saw in this video, you can prototype concepts in other ways like animations.
Nice video. An idea I had watching it: could you test a system where the content slowly moves back to the center of the comfort zone the longer we look at it? Suppose the lady noticed the alarm for her appointment and started to deal with it, then got caught up in meeting planning. It would be great if the calendar widget were automatically moved (at low speed) to the center of the content zone. That would let the user's head return to its resting position without interruption, and all the other content would be moved along the sphere in the same direction (pushed). Once the user finishes with the calendar, the whole spherical interface slides back to the default configuration. In a way, it would be similar to the sliding of applications on the Mac, but more fluid.
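A rough sketch of how that dwell-then-recentre behaviour could work (entirely my own illustration, not anything from the video; the dwell trigger and ease rate are made-up parameters):

```python
import math

DWELL_TRIGGER_S = 2.0   # assumed: seconds of gaze before recentring kicks in
EASE_RATE = 0.5         # assumed: decay constant of the remaining offset, per second

def recentre_step(offset_deg, gaze_time_s, dt):
    """Return the content's new angular offset from centre after dt seconds."""
    if gaze_time_s < DWELL_TRIGGER_S:
        return offset_deg              # user only glanced at it; leave it put
    # exponential ease: a slow, continuous drift toward 0 degrees off-centre
    return offset_deg * math.exp(-EASE_RATE * dt)

# A calendar widget 40 degrees off-centre, gazed at for 3 seconds,
# simulated for one second at 60 fps:
offset = 40.0
for _ in range(60):
    offset = recentre_step(offset, 3.0, 1 / 60)
# offset has eased to roughly 40 * e^(-0.5), i.e. about 24 degrees
```

The exponential ease means the motion is fastest when the content is far off-centre and imperceptibly slow near the end, which should help keep the drift from being distracting.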
I really wish you the best of luck with your job hunt! However, I think you could achieve amazing things if you managed to get an investor to support you and a small team in creating that OS you talk about; I can understand why you are wary of trying Kickstarter. I have a feeling that in a few years the name Mike Alger could be synonymous with the OS of VR if you did try. If it helps, both Microsoft and Apple started in garages, if I remember correctly!
Is the Samsung study referenced focused on 3D TV, or arbitrary 3D perception? While there's a limit to how far something can be before you stop perceiving depth on a 3D display, that limit is significantly further on a head tracked display, due to subtle occlusion changes as you re-position your head.
+Brad Davis It was actually specific to head mounted displays, particularly when they were working on the Gear VR in the beginning. The way he measured it was to move spheres further from the user until their change in distance was imperceptible. I'd say watch that video because it's really great: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-XjnHr_6WSqo.html However, I tried coming at it from a different perspective to estimate it mathematically, too. What's interesting is that you get about the same 20 meter distance for the specs of DK2, CV1, Vive, Gear VR, etc. All distances from about 20 meters to infinity rely on the perception of anti-aliasing/interpolation of a single pixel. You can perceive further distances with this, but my purpose was for interface design and using depth meaningfully for user understanding. mikealgermovingimage.tumblr.com/post/127113260256/hmd-resolution-and-maximum-depth-perception
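For anyone who wants to play with that estimate, here's a quick numerical sketch of the sub-pixel disparity bound as I understand it (my own reconstruction, not the linked post's exact derivation; the DK2-like FOV, horizontal pixel count, and IPD below are assumptions, not official specs):

```python
import math

def max_stereo_depth_mm(ipd_mm, fov_deg, h_pixels):
    # One pixel subtends fov_deg / h_pixels degrees; the distance at which
    # the half-IPD triangle shrinks to that angle is roughly where binocular
    # disparity drops below one pixel.
    pixel_angle = math.radians(fov_deg / h_pixels)
    return (ipd_mm / 2) / math.tan(pixel_angle)

# Assumed DK2-like numbers: ~94 degree FOV, 1080 horizontal pixels, 64 mm IPD
d = max_stereo_depth_mm(64, 94, 1080)
print(round(d / 1000))  # prints 21, in line with the ~20 m figure above
```

Plugging in higher resolutions pushes the bound out proportionally, which matches the point about modern headsets perceiving depth much further.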
+Mike Alger I'll check it out. However, it's worth noting that, again, the Gear VR doesn't have positional tracking, though I believe it does include a head-and-neck model, so you get a slight positional modification as you tilt your head off the XZ plane. Even if something 20 meters away resolves to the same pixel given the IPD, humans can still pick up a significant amount of depth information from the fact that, as they lean their upper body, the side of an object occludes the next pixel of the background in one eye slightly before or after it becomes occluded in the other. It's probably academic, since I imagine most developers would include a significant safety zone between the content they render in stereo and the content they might push to a skybox. I'm just suggesting that the depth perception curve is really 3D rather than 2D, with the third axis being the positional range of the viewer, which could probably be bucketed into 'fixed', 'head and neck model', 'upper body translation from a fixed seated position', and probably several categories of 'free roam', perhaps corresponding to common VR environments like CAVE or the SteamVR room-size presets. In general I think the video's brilliant. I hope to apply some of the concepts to my own work on UI at High Fidelity. Thanks.
+Brad Davis Yes, you can perceive depth past 20 meters and parallax contributes to that greatly, it's just not ideal for UI design in my opinion since I'll want a user to be able to rely on stereoscopy without movement. I mean things like using depth for file hierarchy or object relationships.
Hmm, but how do you design VR interfaces that don't require a device that tracks hand movement, and can instead use a swipe/tap like the Samsung Gear VR or Google Cardboard? I see a lot of card/web design trends spilling over into VR, like the Oculus menu...
Hey Mike, loved the video and the "historical" aspect of VR-technology :P are you concerned about the prospect of information overload with VR-centered interfaces or do you think that having a "physical" environment to set up would actually decrease the risk of the user being overwhelmed by the interface?
+snevesnis I think we do this natural thing of finding a balance between clutter/stress and cleanliness/minimalism. When you have too many tabs, you close tabs. When an app is annoying you, you disable its notifications. Your desk gets too messy so you clean it. There's this concept of cognitive load and that you can only have so many things on your mind at a time. The thought is that maybe all the little environmental distractions around you take up some of your RAM. And that maybe each time you have to think about how to do something, you have a little less brain power available at that time for actually doing it. So, the thought is that a well designed experience allows you to do things without thinking about them - it's not distracting and it's so intuitive that it seems natural. As a result, you perform the task better because your mind was freed up to think about what you were doing and not how to do it. The overload you're talking about is when there's too much going on and you can't think about what all the things are you have to do and how to do them at the same time. Yes, you can set this up for yourself in VR, too. The idea here is that maybe the way we have to think about these flat screens adds some cognitive load. And maybe we can give some of that brain power back to the user by giving them depth and spatial cognition, which is how we more naturally understand objects.
+Mike Alger that is an interesting angle and a way of deploying virtual interfaces I am very much interested in learning more about! Do you have some papers you would recommend in this regard? I am currently working on a thesis for increasing UX and Universal Design for multimodal 3D-interfaces on touch-based platforms and would love to have a broader selection to choose from as far as alternatives to screen based interaction is concerned. Again, fantastic video!
The cognitive load stuff originates from this paper: www.realtechsupport.org/UB/I2C/Sweller_CognitiveLoadTheory_1994.pdf but there's further stuff on Wikipedia and such. This post has more information on how the concept may apply to volumetric digital interfaces: blog.leapmotion.com/truly-3d-operating-system-look-like/ and a video from the same people blog.leapmotion.com/designing-vr-tools-good-bad-ugly/
07:52 I tried to use this mathematical formula, but there were a lot of problems. So I did it backwards with those numbers you mentioned in your video: the FOV of the DK2 is 94 degrees, R is 1080px, the IPD is more or less 64mm, and the final d comes out to be around 9900mm. This doesn't make any sense; can anyone tell me what's wrong?
Yeah, 9900mm is close to 10 meters. So, what that formula is saying is after a virtual point is 10 meters away from the user, if you keep pushing it back, it’s essentially going to still be using the same pixels on the right and left eyes - or the difference will be sub-pixel, anyway. So, on the DK2, which was relatively low resolution, you could perceive the stereo depth of things closer than 10 meters much more easily than things past it. Truthfully in practice, anti-aliasing does make it possible to keep seeing things, but in my opinion, past 20m wasn’t worth adding detail to. 6dof leaning also added parallax that was helpful for seeing further depth. These days, resolutions are getting a lot higher and you can meaningfully perceive depth much further, but the formula should still give a rough 3dof near-bound for lower LOD assets, at the very least.
@@MikeAlger I thought that D was 20000mm, my bad. Thank you so much for the detailed explanation! One last question: is the starting point of that D the midpoint of the IPD, or the focal point where the FOV boundaries extend from? I'm trying to figure out what the formula means. Thank you and have a nice day.
Brilliant video. It gave me a lot of food for thought, and it's an excellent synthesis of VR all around as well. I'd love to chat with you about all this if you have a Twitter, FB, or any other social account! Thanks again for this video.