This is the channel for CSE160 - Introduction to Computer Graphics at the University of California, Santa Cruz (UCSC). Here we have videos of our lectures, lab sections, and short tutorials to guide students through the programming assignments.
Wish my university had classes like this. It's really cool that someone teaches this material. The only things they taught us in computer graphics were Photoshop and basic 3ds Max, and almost nobody wanted to keep studying because the modeling workflow was so agonizing. By the way, have you heard of SmallUPBP or CPPM? They're probably the best methods so far.
Did you make up your own version of reality and believe it as you were writing this comment? Photon mapping has been around for 10+ years, and so has ray tracing.
My understanding is that they use the rasterized image as the source for ray tracing in the game engine, trace only a minimal number of samples into a buffer (with the 2000 series this minimum can be fairly high), and then approximate, probably with the help of a neural network, what the image would look like if it had been ray traced with a higher sample count. It feels like an enhanced version of the rasterized image to me rather than a truly ray-traced one. Having everything baked in the game engine also helps, I suppose.
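The pipeline described above (few noisy samples, then a learned reconstruction pass) can be sketched as a toy in NumPy. Everything here is an assumption for illustration: a flat "ground truth" image stands in for the converged render, Gaussian noise stands in for Monte Carlo variance, and a plain box blur stands in for the neural denoiser.

```python
import numpy as np

def denoise_box(img, k=3):
    """Toy denoiser: a k x k box blur standing in for a neural denoiser."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hybrid_frame(ground_truth, n_samples, rng):
    """Average a few noisy 1-sample estimates (the 'minimal sample count'),
    then denoise the result to approximate a high-sample-count image."""
    noisy = ground_truth + rng.normal(0.0, 0.5, (n_samples,) + ground_truth.shape)
    low_sample = noisy.mean(axis=0)   # few-sample Monte Carlo estimate
    return denoise_box(low_sample)    # reconstruction pass
```

With a fixed seed, the denoised 4-sample frame ends up much closer to the ground truth than any single noisy sample, which is the whole point of the approach.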
This was exactly what I was looking for! A use case for non-symmetric left/right and top/bottom values: say your face is not centered on your monitor, or you have multiple monitors (where the outer ones are not pointed toward you), or you are walking around your monitor. In other words, whenever your monitor (the image plane) is at an angle to you. Taking head position (and therefore the relative vanishing point) into account, non-symmetric values would create a properly skewed perspective on your monitor, which, when viewed at that angle, would be the correct perspective to your eyes. This relative transformation could be considered keystone correction.

Games with portals inadvertently do this to correctly project the other side of the portal onto the portal's image plane, as far as I understand. Currently no software takes head position into account in this way. There are some head-tracking implementations (TrackIR), but as far as I understand they just change the camera orientation. I propose that if head tracking were taken into account this way, monitors could be nearly as holographically awesome as VR. This is one of the things I'm currently playing with.
Not sure what you're on about. This is pretty much normal here in Germany as well (where education is free), hence the lack of reaction from the lecturer. You have to consider this could be a lecture attended by a few hundred people, so lots of psychological effects can be going on. For instance, you've got introverts who know the answer right away but don't want to stand in the spotlight presenting it, and you've got people who have already said a lot during the lecture and want others to have a learning experience as well.

On top of that, this is an online lecture (likely Corona-lockdown related), where it's always hard to perceive the general mood of the room. Do people raise their hands? Do they look like they know? Ever since the Corona semesters, this has been a huge problem for me when lecturers don't use meeting features like emotes to ask for answers or to ask whether anyone knows. People hesitate a lot more to activate their mic and speak than they would to raise a hand in person. This has nothing to do with being rich or spoiled in the slightest, imho.
In Windows 10 PowerShell, I was at first trying to run commands of the form `python3 <cmd>`, following the instructions at @8:00, but it did not work until I dropped the 3 and ran `python <cmd>` instead.