NVIDIA’s New Tech: Next Level Ray Tracing! 

Two Minute Papers
1.6M subscribers
103K views

❤️ Check out Microsoft Azure AI and try it out for free:
azure.microsoft.com/en-us/sol...
📝 The "Amortizing Samples in Physics-Based Inverse Rendering using ReSTIR" is available here:
shuangz.com/projects/psdr-res...
Erratum: at 5:12, I should have said "has 100x lower relative error". Apologies! Removed that part of the video so you won't hear it anymore.
Andrew Price's Blender tutorials:
• Blender Tutorial for C...
📝 My paper on simulations that look almost like reality is available for free here:
rdcu.be/cWPfD
Or here is the original Nature Physics link with clickable citations:
www.nature.com/articles/s4156...
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: / twominutepapers
Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
Károly Zsolnai-Fehér's research works: cg.tuwien.ac.at/~zsolnai/
Twitter: / twominutepapers
#nvidia

Science

Published: 9 Jun 2024

Comments: 209
@TwoMinutePapers 17 days ago
Erratum: at 5:12, I should have said "has 100x lower relative error". Apologies and thanks for the catch @thomasgoodwin2648! 🙏 Update: Removed that part of the video so you won't hear it anymore.
@Tom-cq2ui 17 days ago
I was just wondering about that. Thanks for the correction! Amazing topic too!
@etmax1 17 days ago
100x lower error is not what the text said; it was 20x lower error. The 100x was the speed. 🙂
@321mumm 16 days ago
@@etmax1 The only time-related words I can see are "At equal frame time", which means the same time in my book. So there are no statements about being slower or faster, as far as I could read.
@etmax1 15 days ago
@@321mumm Two Minute Papers acknowledged what I was saying and attributed it to thomasgoodwin2648 (presumably because they posted it first), so I suggest you look more closely at what I wrote, or look at thomasgoodwin2648's post.
@Nulley0 18 days ago
Take a shot every time Nvidia releases "Next Level Ray Tracing".
@doodidood 18 days ago
Alcohol poisoning imminent
@Shy--Tsunami 18 days ago
Consistent buzz
@dertythegrower 18 days ago
It came out around 2019... I have built gaming PCs since 1999... ray tracing came out for all gamers with the 2060-series cards. I got a 2060 Super as soon as I saw it, before the crash for parts.
@AzumiRM 18 days ago
Take a shot every time Apple says they "innovated" something that already exists.
@viktorianas 18 days ago
"So how did you end up at Alcoholics Anonymous?" Well... here's my story...
@thomasgoodwin2648 18 days ago
@5:07 Actually, it reads "Up to 100x lower RELATIVE ERROR than baseline methods", not 100x faster. Still awesome though. 🖖🙂👍
@colinbrown7947 18 days ago
Right before that, it also says "in the same timeframe", so Two Minute Papers assumed a linear relationship between error and time and guessed that if you wanted the same quality it would be 100x faster. Although I think that's unlikely.
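For context on the error-versus-time point: if the baseline were an ordinary Monte Carlo estimator, its error falls off as the inverse square root of the sample count, so the relationship between error and render time is not linear. A rough back-of-the-envelope under that assumption (a generic convergence argument, not a claim about this particular paper):

```latex
% Generic Monte Carlo convergence, assumed for the sake of argument:
% RMSE(N) is proportional to 1/sqrt(N), so matching a 100x lower error at
% equal time would take roughly 100^2 = 10,000x more samples from the baseline.
\[
  \mathrm{RMSE}(N) \propto \frac{1}{\sqrt{N}},
  \qquad
  \frac{\mathrm{RMSE}_{\text{base}}}{\mathrm{RMSE}_{\text{new}}} = 100
  \;\Longrightarrow\;
  N_{\text{base}} \approx 100^{2}\, N_{\text{new}} = 10\,000\, N_{\text{new}}.
\]
```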
@TwoMinutePapers 17 days ago
I should have been a little more accurate there, good catch, thank you! Upvoted for visibility and added a note about it in the video description. Update: Removed that part of the video so you won't hear it anymore.
@Kuroo9 18 days ago
Two more papers down the line would be next week, right?
@thomasgoodwin2648 18 days ago
You really think it'll take that long?
@test-uy4vc 18 days ago
What a ray time to be traced alive!
@mr.critic 18 days ago
⚡😂
@locinolacolino1302 16 days ago
Life has always been ray traced throughout history, yet it is only now that we must come to terms with the fact that the ray is inescapable.
@MTheoOA 18 days ago
This is a dream to me. I'm creating a world with more than 150 characters and more than 1000 buildings drawn by hand, and this... this is what I wanted. I can model; I'm an architecture student using Rhino, Grasshopper, etc., but what NVIDIA is doing is just absolutely crazy. I hope this comes soon.
@dertythegrower 18 days ago
It was awesome when it came out for us on the 2060 series
@ValidatingUsername 18 days ago
Try to reduce the number of objects rendered to save energy if you can.
@timhaldane7588 18 days ago
Rhino? Now that's a software name I've not heard in a long time.
@WallyMahar 18 days ago
And please, stop the jump cuts the YouTube algorithm supposedly needs, cutting to videos that have nothing to do with what you're talking about. That was kind of disappointing.
@westingtyler2 17 days ago
Ooh, what is your game? Mine is called #NotSSgame, and I have a similar number of characters and buildings. I have some update videos about it. I hope to see more about your project soon!
@Shy--Tsunami 18 days ago
My mouth opened at the dragon modelling
@Nulley0 18 days ago
That was "two papers up the line" (previous paper)
@Mad3011 18 days ago
What a time to be a ray!
@sash1ell 15 days ago
Love getting traced.
@thezoidmaster 18 days ago
What a time to be alive!
@itskittyme 18 days ago
Where is the previous video about how to control ChatGPT? The video was removed before I had a chance to watch it 😞
@samuel.f.koehler 17 days ago
Me too!
@TwoMinutePapers 17 days ago
I apologize, as the quality of the video wasn't really what you would expect from us, and thus we removed it.
@itskittyme 17 days ago
@@TwoMinutePapers Aaww, okay 😞 Thank you for letting us know.
@anywallsocket 18 days ago
You know what would make all these scenes even more realistic? Adding a bit of dirt everywhere 😂
@rallicat69 18 days ago
I'M REVERSE HOLDING ONTO MY PAPERS REALLY HARD, SIR ✋
@issacdhan 18 days ago
Here I lost my job. 😒
@damianzieba5133 18 days ago
Don't worry, someone still has to clean the city... That's our future; we artists are finally going to go do "real work" someday. But you can still create in your free time.
@AgroFro 18 days ago
@@damianzieba5133 STEM. Someone has to create the AI, and they're getting paid.
@MrGTAmodsgerman 18 days ago
@@damianzieba5133 If you do "real work", you don't have enough free time and energy for the actual creative stuff.
@heatflash184 17 days ago
Skill issue
@locinolacolino1302 16 days ago
All your Vimeo work appears to be simulations; if you're still doing this, I can assure you you're 110% fine for a job. FX with millions of particles, small pieces breaking/colliding, and overlapping elements are THE single biggest torture test you could put an algorithm like this through, and these models currently don't even operate on video and are unlikely to, since adding motion to the equation would create even more unknown variables than there already are. Furthermore, the value of a human performing simulations isn't in creating generic effects, but artistically driven effects which look far more visually compelling. Becoming something like a senior Houdini specialist is a surefire way to ensure job stability, as that level of intuition about the simulations, tools, and nodes available for meeting a client's needs is something an AI would not possess.
@Yamagatabr 16 days ago
Oh man, I love doing computer graphics. That's what I've dreamed of since I was a kid. It's really a bummer to see the artistic process being erased like this.
@AlessandroRodriguez 18 days ago
6:13 What a revelation! I would never have thought you like papers.
@scruffy3121 17 days ago
At 5:09 the marked text says 100x lower error rate, not 100x faster. Am I missing something?
@DesignDebtClub 15 days ago
I feel like thinking about this as a way to take existing 3D renders back to 3D meshes is impressive but an odd and narrow use case. It seems to me that this is heading toward the ability to reconstruct scenes from photographs, even things off camera, based on shadows and reflections. It's heading toward a tool for Blade Runner-type detective work.
@Z_Inspector 18 days ago
The Gigachad, Way2 Dank and Copege drinks in the first few scenes are hilarious
@ShadowRam242 18 days ago
Inverse rendering? Screw video games. Do you know what that would do for SLAM and robotic navigation?
@ScriptureFirst 17 days ago
2:52 oooh! 😱 scary maths! 😅
@haydenveenstra1941 16 days ago
What a time to be alive! I can't wait to see if this will be used for forensic science, where shadows of objects are reverse-engineered to expand a video or image in greater detail and help solve cases!
@Atimo133 18 days ago
Fun and games for researchers; a death knell for artists, creative pipelines, tech bros, etc. ...
@Jacen777 18 days ago
I'm not sure that's completely true. For example, I always wanted to be a fiction writer, but I suffer from dyslexia. With the help of AI, I'm now well on my way to completing my first novel. It's important to note that AI is a tool that allows me to bring my thoughts and ideas into the world, but it doesn't simply spit out the work. I still spend many hours planning, developing, guiding, tweaking and editing the entire process. These tools give me the ability to create in ways I could never dream of before. So I think it's possible that AI will be used to empower new artists who previously faced some physical or mental disability that prevented them from creating in that space. I believe this technology will create numerous new artists.
@uponeric36 18 days ago
The level of paranoia from artists and such is actually precedented and expected. Since the beginning of time, doodlers have seen what's coming next and shaken their paintbrushes at the sky because technology dared to advance without asking their permission. We're going to be entering the next great era of art with this tech, luddites be damned. Personally, I can't wait for the downfall of all these useless art corporations and the resulting freedom for their artists, since the level of resources needed to make today's projects will drop dramatically. Assassin's Creed or Call of Duty sized games could become the typical indie project, if they bother to do something that bland and outdated with such powerful tech.
@Atimo133 17 days ago
@@uponeric36 If the tools are used correctly, they are nothing but tools, and more than necessary for progress. But as soon as the tool becomes the maker itself, and with the current trend of everything going down the gutter quality- and meaning-wise and attention spans getting shorter every day, I don't see this advance in a positive light so far. What I've seen so far is creatives getting replaced and AI trained on the works of actual human beings; AI is taking away CREATIVE jobs, not the stuff we don't want to do. So yeah, I am not very fond of how the technology is going to be used. There is a small shiver of hope, in the sense that it might just raise the bar, but I just don't see it with how things have been going the past 10 years.
@Atimo133 17 days ago
@@Jacen777 This, for example, is a great case where I hope all the needed funds and research will go into enabling everybody to create and be creative :)
@skyrade508 16 days ago
@@Atimo133 Regarding AI trained on the works of humans: although it is quite a problem that current AI systems need to scrape the entire internet, and hence are not as data-efficient as humans taking inspiration, which gives the impression of copyright infringement, it will get solved a few years from now. Secondly, you should not assume that current AI systems, with their limited capabilities and major flaws such as creating artworks of ridiculous quality, will continue on that trajectory. Instead we should assume that we really don't know what kind of powerful self-improving AI systems the top AI labs are building in secret, so as to not scare the public with their superior-than-human capabilities. Thirdly, regarding AI replacing jobs being a bad thing: I disagree. You only say this because you were born in the 21st century, where the economic system is completely designed by "human minds", which limits your foresight and thinking. No one has said anything about AIs, or anyone, forcefully preventing any human in a post-AGI world from doing any activity they desire, i.e. if you want to create games or movies with the traditional human touch, or want to continue working in any STEM field, you are free to do so. It just means that the activities humans undertake in such a situation will not be labelled as "jobs" but as part of a new concept of "leisure", because you won't be paid money, or have to pay money to anyone, for any service or product they make.
Part 2: You might think, if that's the case, how will people survive without money? Well, the answer is that money was and is only a theoretical social construct we imagined into existence to facilitate the trade of goods and services; it never had any value in itself. The solution would be to build a currency-less economic system where the distribution of goods and services is based on concepts similar to the ones found in really creative video games, like level and point systems. I honestly don't know what such a system would actually look like, but I'm pretty sure that if we have an AI that can do the job of economists end-to-end with superhuman creativity, we can just ask it to devise one.
@sethart22 16 days ago
Is your voice generated by AI? Because it sure sounds like it.
@eduardodubois4994 18 days ago
Wow, just incredible.
@alejobrcn6515 11 days ago
This voice, the video editing and the script are synthetic; that's magic! 😮
@Kenjineering 18 days ago
"Enhance 15 to 23. Give me a hard copy right there"
@wadeheying7117 17 days ago
Thanks for including the legendary Andrew Price.
@abowden556 17 days ago
I like this idea. I have been collecting images of beautiful or interesting places with the goal of someday using technology like this on them.
@KevinLarsson42 18 days ago
I can imagine a ton of use cases for this.
@derekgamer1978 18 days ago
I'm so curious now. What if you put non-linear geometry in the photo, like an illusion?
@user-xv4gm2zc6x 17 days ago
0:53 Jam a man of fortune and J must seek my fortune - xQc
@geekswithfeet9137 18 days ago
This sounds like an absolute winner for computed tomography.
@coloryvr 18 days ago
Yes! Amazing!
@MoMoGammerOfficial 17 days ago
Whoa! This is a game changer!
@Drokkstar_ 18 days ago
Seeing real-world simulations become more accurate as well as getting faster makes me wonder if P really does equal NP.
@yumri4 18 days ago
It is much closer to that than the research paper from two years ago that went through how to generate a 3D object from a 2D image. She kind of got it, kind of didn't; the part it wasn't able to get correct was a dip in the top of the 3D object with nothing going through the object. Based on that, I think that example will still be impossible.
@ReportJungle 17 days ago
What a wonderful time to be alive.
@astr010 18 days ago
Imagine doing text to image to 3D scene.
@KP-bi6px 18 days ago
Coming soon: create a movie/video game from text 😂
@dertythegrower 18 days ago
That already existed in 2023, for commercial use. It was shown on video a year or so ago... iykyk
@dertythegrower 18 days ago
@@KP-bi6px That exists too, like I said above: 2023. It does exist, in its first stages, and was shown on video if you dig through this kind of nonsense pushed to the top.
@dertythegrower 18 days ago
All of the above is on video and exists... this guy shows the mainstream.
@KP-bi6px 18 days ago
@@dertythegrower ah
@Entropy67 17 days ago
That would be amazing. It's a dream to make something with my hands and have it modelled into a game... with just a picture I can do that. Wow.
@Troph2 14 days ago
Picture-to-3D modeling is huge for 3D printing.
@georgepaschalis365 18 days ago
What's the 'Revolt'-looking game? Would love to play that!
@Jacen777 18 days ago
This is a really good thing. I always wanted to be a fiction writer, but I suffer from dyslexia. With the help of AI, I'm now well on my way to completing my first novel. It's important to note that AI is a tool that allows me to bring my thoughts and ideas into the world, but it doesn't simply spit out the work. I still spend many hours planning, developing, guiding, tweaking and editing the entire process. These tools give me the ability to create in ways I could never dream of before. So I think it's possible that AI will be used to empower new artists who previously faced some physical or mental disability that prevented them from creating in that space. I believe this technology will create numerous new artists.
@MikevomMars 17 days ago
This is similar to what the Quest 3 VR headset does when scanning your room and creating a 3D mesh from it.
@goldenheartOh 17 days ago
I'm picturing our grandkids using this thing to casually create games as easily as we doodle, and them being in awe that we were ever smart enough to write the code for games ourselves from scratch.
@ClintochX 18 days ago
What's the ray tracing in this?
@_John_P 18 days ago
At 04:05, an example is given where the shadow is the input and the method reconstructs the object from it. The shadow even moves, making the object move accordingly, hence it's some sort of "reverse ray tracing" effect.
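To make that loop a bit more concrete, here is a deliberately tiny, self-contained sketch of inverse rendering by gradient descent: a toy forward model "renders" a shadow from a scene parameter, and the parameter is recovered from the observed shadow. The pole-and-shadow setup, the finite-difference gradient and all names are illustrative assumptions; the paper's actual method uses a far more sophisticated ReSTIR-based gradient estimator over full images.

```python
# Toy inverse rendering: recover an unknown scene parameter (a pole's height)
# from its observed shadow by minimizing the render-vs-observation error.
# This is a didactic sketch only, not the paper's ReSTIR-based method.
import numpy as np

def shadow_length(height, light_elevation_rad):
    """Forward model: length of the shadow cast by a vertical pole."""
    return height / np.tan(light_elevation_rad)

def loss(height, observed_shadow, light_elevation_rad):
    """Squared error between the rendered and the observed shadow length."""
    return (shadow_length(height, light_elevation_rad) - observed_shadow) ** 2

def recover_height(observed_shadow, light_elevation_rad, lr=0.05, steps=200, eps=1e-4):
    """Gradient descent on the loss, using a finite-difference gradient."""
    h = 1.0  # initial guess for the unknown height
    for _ in range(steps):
        grad = (loss(h + eps, observed_shadow, light_elevation_rad)
                - loss(h - eps, observed_shadow, light_elevation_rad)) / (2 * eps)
        h -= lr * grad
    return h

if __name__ == "__main__":
    light = np.deg2rad(35.0)
    observed = shadow_length(2.3, light)      # the "photograph" of the shadow
    print(recover_height(observed, light))    # converges to roughly 2.3
```

The real systems differ mainly in scale: the forward model is a full physically based renderer, the parameters are meshes and materials, and the gradients come from a differentiable-rendering estimator rather than finite differences.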
@sammidgirl94 15 days ago
How do magnets work?
@Qimchiy 18 days ago
Kinda like a morphing geometric version of Gaussian Splatting?
@bug5654 16 days ago
0:11 Blender donut man sighted, community engagement in progress... (note: making fun of chat, not accusing 2minpapers)
@handycap7625 18 days ago
What happened to your "Can AI be controlled?" video?
@The_Questionaut 15 days ago
This is unrelated, but I was thinking about making posing and animating characters easy and intuitive for text-to-video. You could use something like ControlNet with keyframes, so you can control the poses and movements of a character in the text-to-video generation. I don't think anyone has done this; I would love to see it done. Imagine posing your character just by dragging a stick man around and then hitting the generate button. Easy posing, if someone pulls this off.
@Verrisin 17 days ago
Is prism tracing possible? Follow not a single "line" but a genuine triangle (3 rays) mapped to the screen, and find what areas it intersects, each contributing that percentage of the color. It would split into many sub-prisms instead of running a ray all the way from the start each time, knowing here whether the contribution is large or tiny. Then "reflections" are of the whole triangle-surface intersection, creating a new, wider prism that cares less about detail (possibly multiple). One would batch not the whole "path" of a ray but each "straight prism segment", then SORT all remaining segments by contribution (area of intersection with the screen), and repeat until time runs out, then somehow cheaply "guess" the rest.
@Verrisin 17 days ago
It might need a different way to represent objects in the scene, but if it's possible, I really like this conceptually.
@pxrposewithnopurpose5801 18 days ago
THAT'S CRAZY
@user-pq7lk3mq4o 14 days ago
2:26 What paper is that?
@LydianMelody 17 days ago
Donut 5.0's gonna be a real short video
@noisetide 18 days ago
I'm getting raybumps...
@Graeme_Lastname 18 days ago
Nice to see you m8. 🙂
@parthasarathyvenkatadri 16 days ago
Wouldn't it be better to have a video of the place, and it renders a 3D scene?
@pxrposewithnopurpose5801 18 days ago
Lumen needs that
@user-dj3uf9qt4x 18 days ago
Do stimpy next pls!
@TaylorCks03 17 days ago
This is amazing 😮
@pxrposewithnopurpose5801 18 days ago
THAT'S FKING CRAZY
@Mranshumansinghr 18 days ago
Soon I will not be texturing my models; Nvidia will do it for me. What a time to be alive!
@korinogaro 18 days ago
Cool, but it's hard (especially in the context of game making the OP mentions at the beginning) to come up with ideas for creative uses of this. I get the situation where you've had a hardware disaster and lost a lot of data (3D models and materials included), but some backup with screenshots of the models survived, so: automatic re-making of the model. But how would you use it in a constructive rather than reconstructive way?
@somdudewillson 18 days ago
It would accelerate the process of going from concept art -> usable game asset by potentially providing a good starting point.
@korinogaro 18 days ago
@@somdudewillson Yeah, I thought about that a couple of seconds after posting. Designers make character/object designs and give them to 3D artists, who use something like this to make 3D models fast and then just touch them up here and there.
@GCAGATGAGTTAGCAAGA 18 days ago
@@korinogaro Yes, but IN THE FUTURE there will be no concept artists, because we won't need them anymore! 🤓 More than that, we will not need any humans anymore! One AI generates the prompt, the next AI generates art from the prompt, the next one makes models from that art, and so on! Wow, what a time to be alive! 😎
@SeanMcCann70 18 days ago
Thank you so much for the donut reference
@P-G-77 18 days ago
Awesome...
@AAvfx 17 days ago
😮😮😮😮😮 Best show ever 😁
@dexgaming6394 17 days ago
I'd really like to see a machine learning model completely replace the rendering process. Imagine if you gave the model the textures, materials, and geometry information, and it could generate an image that looks like it was rendered with a slow path tracer. That could make ALL path tracers obsolete, and also make gaming far better if it could run in real time.
@raymond_luxury_yacht 17 days ago
Is the future diffusion? Once consistency is achieved, you just diffuse the frames.
@dexgaming6394 17 days ago
@@raymond_luxury_yacht No, not really. I think diffusion would be too slow to run in real time. I was thinking of something like Neural Control Variates, which was also covered on this channel: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-yl1jkmF7Xug.html
@dexgaming6394 16 days ago
@@raymond_luxury_yacht No, because I think that process would be too slow to run in real time. There's something called Neural Control Variates, which has been covered on this same channel: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-yl1jkmF7Xug.html The only problem is that this AI model is not available to the public.
@zrakonthekrakon494 17 days ago
It's time we start asking the real questions: what will ray tracing look like three papers down the line... exactly the same? Probably.
@coisinho47 18 days ago
So this is what they are going to make exclusive to RTX 5000-series cards...
@razorsyntax 18 days ago
So if you take an animated hypercube, which is the 3D projection of a 4D hypercube, would this be able to reconstruct the 4D cube in some way? Getting us closer to visualizing higher dimensions is a worthwhile effort.
@noob19087 18 days ago
Nope. We know exactly what 4D hypercubes look like mathematically. It's just that they can't be projected in reality, because that requires information that doesn't exist in our universe, i.e. a fourth spatial dimension. No AI will ever fix that.
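For what it's worth, the usual way a hypercube is "visualized" is exactly a projection, the same trick a camera uses to flatten 3D onto 2D. A small sketch of the standard textbook construction (unrelated to the paper; the rotation plane and viewer distance are arbitrary choices):

```python
# Standard visualization of a 4D hypercube: generate its 16 vertices,
# rotate them in one of the 4D rotation planes, then perspective-project to 3D.
import itertools
import numpy as np

def tesseract_vertices():
    """All 16 vertices of a tesseract centered at the origin."""
    return np.array(list(itertools.product([-1.0, 1.0], repeat=4)))

def rotate_xw(points, angle):
    """Rotate the point set in the x-w plane."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.eye(4)
    rot[0, 0], rot[0, 3] = c, -s
    rot[3, 0], rot[3, 3] = s, c
    return points @ rot.T

def project_to_3d(points, viewer_w=3.0):
    """Perspective projection along the w axis: scale x, y, z by distance in w."""
    scale = viewer_w / (viewer_w - points[:, 3])
    return points[:, :3] * scale[:, None]

if __name__ == "__main__":
    projected = project_to_3d(rotate_xw(tesseract_vertices(), np.deg2rad(30.0)))
    print(projected.shape)  # (16, 3): a 3D "shadow" of the 4D object
```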
@carlosrivadulla8903 18 days ago
Ok, I'm ending my Blender subscription now
@goldenheartOh 17 days ago
Blender has subscriptions??? I haven't kept up with it. Last I knew, it was free. Edit: part of me wonders if that was a joke to make fun of other apps.
@ChaineYTXF 17 days ago
@@goldenheartOh I'm surprised as well. Perhaps for fast rendering on distant servers?
@halko1 18 days ago
How? 🤯
@deniskhafizov6827 17 days ago
4:18 Human beings who specialize in X-ray crystallography (which is basically reconstructing molecules from their shadows) would doubt that. For example, Dorothy Hodgkin established the structure of vitamin B12 solely by hand, without using computers (and got a Nobel Prize for it).
@mindful_minipods 18 days ago
I'm going outside with my robot.
@jakobhetland4083 12 days ago
Nvidia has some next-level stuff every other day, it feels like.
@mercerwing1458 17 days ago
2:25 Holy balls, how can I access this?
@MrSongib 18 days ago
In my ideal future, we'd only need a 360° video of an object that we want to "scan" to make a 3D model with all its material properties, and after that, just one image to get the same results. That would be cool and EZ Clap.
@somdudewillson 18 days ago
If you have a 360° video of an object, you can already reconstruct geometry and materials. Photogrammetry can do that, and it's been around for a while.
@MrSongib 18 days ago
@@somdudewillson Not that thing, xd. Something like a simple turnaround video without a bunch of cameras. I should choose my words more clearly next time, noted.
@WallyMahar 18 days ago
I really thought you were going to do reverse ray tracing: trace all the rays from a photograph by looking at the materials, the glass, the reflections, and reversing them. It looked like you were going to show that, but then... Oh well, next paper.
@Sintrania 17 days ago
More realistic, more performance-hungry, more sales of 4090-class cards. I hope they make use of these techniques to help lower the performance requirements for RT workloads.
@zachb1706 17 days ago
The 5080 will probably beat the performance of the 4090, and the 6070 will probably beat the 5080. So in 10 years a 6070 will be more than capable of full ray tracing.
@jamiespartz1316 15 days ago
All this great AI and you can't get a smooth-talking bot for this video?
@davekite5690 18 days ago
Just imagine, with enough compute... this being run on Google Maps and all the photos people have shared... and all the CCTV cameras and all the autonomous vehicle cameras... a live 3D model of the world... coming 'soon'.
@zhonkvision 18 days ago
My wish comes true
@ChaineYTXF 17 days ago
The point in time when humans reach 100% obsolescence draws nigh 😢
@b.7944 18 days ago
It is theoretically impossible to model the invisible side of an object from a single image.
@OGPatriot03 18 days ago
What makes you say that? A human can do it by understanding the context behind the image (if you've seen what a desk lamp looks like, then you can figure out what the back side of it likely looks like, with a high degree of accuracy). AI works in a similar fashion.
@b.7944 18 days ago
@@OGPatriot03 You will never be sure about an invisible side; you can only guess. How do you know the desk lamp doesn't look different this time? There isn't even an argument about this; it is just impossible. If a guess is sufficient for you, then that is fine. Or you can use more images.
@imakelists 17 days ago
hi, i make lists here are some notable AND's 0:06 0:23 0:39 0:49 0:51 1:08 1:18 2:00 2:37 2:40 2:56 2:58 3:43 3:52 4:58 5:04 5:18 5:42 6:03 6:16 6:33 as a bonus because this is my first comment: here are some notable BUT's 1:38 2:46 3:17 3:44 4:50 5:26 5:47 6:13
@unadventurer_ 18 days ago
I'll eat my shoe if this guy can finish a sentence without awkwardly pausing every 3 words.
@zrakonthekrakon494 17 days ago
It's the spice of life
@Scimblo 5 days ago
Let me get ray tracing I can run, please 🙏!!! That would be next level.
@czargs 18 days ago
8 GB VRAM
@voltagetoe 18 days ago
Glad I left the industry a couple of years ago; there's just nothing challenging/rewarding left anymore.
@RIPxBlackHawk 14 days ago
Let's not forget: if people can make their own games with a prompt, AAA games will not have the audience to back the effort. Everyone will be busy playing mediocre games.
@caseycbenn 17 days ago
16 minutes to recreate a bush from a shadow you could create in Houdini in 30 seconds. Great... four bushes per hour; I am sure someone needs that somewhere. It's not the miracle cure it's being hyped as. Also, we didn't see the back side of the dragon. Is it amazing that you could reconstruct a scene from a photo? Sure, that sounds amazing. OK, fine: provide many photo references of all sides and bam, model, texture, etc. Still, it just changes the creative craft of sculpting and modeling into photo taking and finding source material. And then what? You'll find the same images most people do on Google and end up with the same models everyone uses? I suppose the saving grace is that concept artists will become more in demand, since they can truly create original ideas from many angles from their minds, which could be fed to a machine to reconstruct in 3D. I am betting that aspect will actually be beneficial to the job market. Otherwise you'll give up the craft of modeling in favor of picture-taking or image-searching time. BUT... yes, it is amazing that reconstruction is possible this way, but it isn't an end-all cure or a replacement for design or creative direction.
@t1460bb 17 days ago
My goodness.
@AhaAha-gq6zg 17 days ago
My friends, the Man who speaks on this site, saying good things of hope and love, is not a bad man. Listen to which is good, and forget the bad. We only call that which we do not understand, Evil. But the myriad ways and methods of the Creator, are full of mystery. So go forth, Love, Empathise, and Do good. Do only Good, not Evil. Do not hate yourselves, or the others around you, even if they seem weird, or strange. These are the Words of the Creator, and if you abide by them, you shall be saved. I am only blessed enough, and lucky enough, to be the Creator's messenger.
@drewmandan 16 days ago
I'm calling bullshit at 4:37. The information about the height of the octagonal prism is not contained in that shadow. It's not possible to match the height like that. There's some fuckery going on here.
@BigeppyFR 18 days ago
Let's goooo
@andreas3904 16 days ago
Is his voice AI-generated or what? Why does it sound so weird? Why does he stop all the time for half a second?
@egretfx 13 days ago
Watch his video from 5 years ago and tell me if it's AI... idiot.
@geniferteal4178 16 days ago
Oh no, I've seen your face! 😮😅 Nothing scary, just weird to see a face you've heard so many times but have no idea what they look like. It never matches what you expect. Nice to meet you! lol 😊
@priyeshpv 16 days ago
The guy in the lower right? That's not him, it's Andrew Price.
@geniferteal4178 16 days ago
@@priyeshpv Thanks! Watching again, I can see different words being said, though some of the basic movement is kind of in line with his speaking.
@mattbrandon9157 17 days ago
Hearing this narrator was worse than nails scraping a chalkboard. 20 seconds and I'm gone.
@rexbk09 17 days ago
@Drumaier 11 days ago
As a 3D generalist, I'm not loving any of this.