
NVidia is launching a NEW type of Accelerator... and it could end AMD and Intel 

Coreteks
135K subscribers
41K views

Urcdkeys.Com 25% code: C25 【Mid-Year super sale】
Win11 pro key($21):biitt.ly/f3ojw
Win10 pro key($15):biitt.ly/pP7RN
Win10 home key($14):biitt.ly/nOmyP
office2019 pro key($50):biitt.ly/7lzGn
office2021 pro key($84):biitt.ly/DToFr
MS SQL Server 2019 Standard 2 Core CD Key Global($93):biitt.ly/oUjiR
Support me on Patreon: / coreteks
Buy a mug: teespring.com/stores/coreteks
My channel on Odysee: odysee.com/@coreteks
I now stream at: / coreteks_youtube
Follow me on Twitter: / coreteks
And Instagram: / hellocoreteks
Footage from various sources, including official YouTube channels from AMD, Intel, NVidia, Samsung, etc., as well as other creators, is used for educational purposes in a transformative manner. If you'd like to be credited please contact me
#nvidia #accelerator #rubin

Science

Published: 1 Jun 2024

Comments: 361
@CrashBashL 15 days ago
No one will end anyone.
@Koozwad 15 days ago
AI will end the world, in time
@pf100andahalf 14 days ago
Some may end themselves.
@modrribaz1691 14 days ago
Based and truthpilled. This faked-up competition has to continue for as long as possible; it's a 24/7 publicity stunt for the entire market, with Nvidia basically playing the big bully.
@PuppetMasterdaath144 14 days ago
I will end this conversation.
@CrashBashL 14 days ago
@@PuppetMasterdaath144 No, you won't, you Puppet.
@Siranoxz 15 days ago
We are in dire need of a diverse GPU market.
@Koozwad 15 days ago
yeah what happened to "diversity is strength" 😂
@mryellow6918 15 days ago
We have a diverse market. They aren't a monopoly because they're the only ones; they're a monopoly simply because they're the best.
@oussama123654789 15 days ago
sadly China still needs at least 5 years for a product worth buying
@Siranoxz 15 days ago
@@Koozwad I have no idea what you're trying to convey; it's just about more companies building GPUs, nothing else.
@Siranoxz 15 days ago
@@mryellow6918 Sure, that is one factor, or these invisible GPU manufacturers don't promote their GPUs and optimized game support the way NVIDIA and AMD do. But being the best comes with a nifty price, huh?
@noanyobiseniss7462 14 days ago
So Nvidia wants to patent order of operations. Kinda reminds me of Apple trying to patent the rectangle.
@brodriguez11000 14 days ago
Math can't be patented.
@GrimK77 14 days ago
@@brodriguez11000 there is a loophole for it, unfortunately, that should never have been granted
@EnochGitongaKimathi 15 days ago
Intel, AMD and now Qualcomm will be just fine.
@fred-ts9pb 14 days ago
amd is a bottom feeder.
@waynewhite2314 14 days ago
@@fred-ts9pb What tech company are you running? Oh yes, Trolltech!
@christophorus9235 10 days ago
@@fred-ts9pb Lol tell us more about how you know nothing about the industry...
@misterstudentloan2615 15 days ago
Just costs 30 pikachus to do that operation....
@dreamonion6558 15 days ago
thats alot of pikachus!
@MyrKnof 14 days ago
@@dreamonion6558 You want as few pikachus per op as possible! also, "a lot"
@alexstraz 13 days ago
How many joules are in a pikachu?
@handlemonium 11 days ago
And one Magikarp
@PaulSpades 15 days ago
It's funny how we now need fixed-function accelerators for matrices after 15 years of turning GPUs (fixed-function FP accelerators) into programmable devices. Also, we went from 12/20/36-bit-word computers, to 4-bit and 8-bit micros, to 16-bit RISC processors and FP engines, to 32-bit and now 64-bit. Only to discover we now need much less precision: FP32 and FP16, now 8-bit and 4-bit. We could probably go down to ternary for large model nodes, or 2-bit.
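(For context on the precision trend described in the comment above, here is a minimal sketch of symmetric int8 weight quantization, assuming only NumPy. Production inference stacks use calibrated, often per-channel, scales; this is illustration only.)

```python
import numpy as np

# Minimal symmetric int8 quantization sketch: store weights as 8-bit
# integers plus one float scale per tensor, then dequantize on use.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

scale = np.abs(w).max() / 127.0  # map [-max, +max] onto [-127, 127]
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = w_q.astype(np.float32) * scale  # dequantized approximation

print(f"max abs error: {np.abs(w - w_hat).max():.5f}")
print(f"storage: {w.nbytes} B (fp32) -> {w_q.nbytes} B (int8)")
```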
@BHBalast 15 days ago
AI workloads != classical computing; no one will go back to 8 bits on a consumer device :p
@maou5025 14 days ago
It is still 64. Floating point is kinda different.
@PaulSpades 14 days ago
@@BHBalast Well, yes. But. Do you need Photoshop if the AI-box-thing can generate the file you asked for and upload it? I'm not saying we won't need general computing like we do now, but most people won't. Because most people don't need programmable computers; they need media generation and media consumption devices. Most tasks are filling forms, reading and writing.
@jonragnarsson 14 days ago
Hate them or love their corporation, NVidia really has some brilliant engineers.
@awesomewav2419 15 days ago
jesus, Nvidia gives the competition no rest.
@visitante-pc5zc 15 days ago
And consumers' pockets
@maloxi1472 15 days ago
@@visitante-pc5zc As long as it aligns...
@fred-ts9pb 14 days ago
It's over for AMD before it started. Keep throwing $50M a year at a failed AMD CEO.
@Acetyl53 14 days ago
What a weird comment. Probably a bot or shill. Creep. Weirdo.
@hupekyser 14 days ago
at some point, 3D and AI need to fork into dedicated architectures instead of having a general do-it-all GPU
@aladdin8623 10 days ago
Both Intel and AMD have FPGA IP to achieve close-to-bare-metal performance. And in comparison to Nvidia's ASIC plans here, an FPGA is flexible: its logic gates can be 'rewired', while Nvidia's ASICs force you to buy more hardware again and again.
@Johnmoe_ 15 days ago
Sounds cool, but all I want is more VRAM under $10k
@pierrebroccoli.9396 15 days ago
Darn - hate to be held hostage to leather jacket man, but it is a good way to go: local AI processing instead of relying on large corporates for AI services on one's data.
@Zorro33313 15 days ago
absolutely the same shit as an encrypted channel between you and a processing datacenter. processing is not local anyway, it seems. AI locally only fractures data into some bullshit tokens (just packets, as usual) and sends them to the datacenter to get the processed response back. this sounds just like BS channel encryption using AI, cuz Nvidia can't do anything else but AI.
@BHBalast 15 days ago
​@@Zorro33313wtf?
@awindowskrill2060 15 days ago
@@Zorro33313 what meth are you smoking mate
@FlorinArjocu 15 days ago
We'd still need cloud AI services, as the same methods will reach their datacenters and enable much more advanced things online. But simpler things will go local, indeed.
@avejst 15 days ago
5:12-5:44: is there a reason for the blackout in the video? Interesting video as always
@Slavolko 13 days ago
Probably a video editing or rendering error.
@Erribell 14 days ago
I would bet my life coreteks owns nvidia stock
@selohcin 13 days ago
I assume he owns Nvidia, AMD, Intel, and several other tech companies.
@SalvatorePellitteri 14 days ago
This time you are wrong! Inference is where AMD and Intel play on a level field with NVIDIA. NPUs have much simpler APIs, so the vertical stack is thin, almost irrelevant; applications are going to support NPUs from Intel, AMD and Nvidia very easily, and AMD and Intel already have NPU-enabled processors and PCIe cards in the wild
@sacamentobob 14 days ago
He has been wrong plenty of times.
@NatrajChaturvedi 15 days ago
Would be interesting to hear your take on Qualcomm's "PC Reborn" pitch too.
@K9PT 14 days ago
The CEO of Nvidia only said IA... IA... IA a dozen times, only once gaming... SAD TIMES
@Jdparachoniak 14 days ago
to me he said money, money, money lol
@fabianhwnd6265 14 days ago
Nvidia has outgrown the gaming market; they could abandon it and wouldn't notice the losses
@LukeLane1984 13 days ago
What's IA?
@K9PT 13 days ago
@@LukeLane1984 lol
@pazize 14 days ago
Great work! Thank you for sharing! If this materializes it'll be very exciting
@Raphy_Afk 15 days ago
This is extremely interesting, I hope they will release discrete accelerators for desktop users
@user-et4qo9yy3z 15 days ago
It's extremely boring actually.
@gamingtemplar9893 15 days ago
@@user-et4qo9yy3z Actually, you are boring. Stop spamming your stupidity and go watch cat videos.
@visitante-pc5zc 15 days ago
@user-et4qo9yy3z yes
@Raphy_Afk 15 days ago
@@user-et4qo9yy3z For those who only use their PC for gaming.
@maloxi1472 14 days ago
@@user-et4qo9yy3z Oh look, mommy! The "akshually" guy is real!
@YourSkyliner 15 days ago
4:44 oh no, they went from 30 Pikachus to only 1.5 Pikachus 😮 where did all the Pikachus go then??
@LtdJorge 15 days ago
They didn't go from 30 to 1.5; 30 is how much energy it takes to load the values, and 1.5 is how much it takes to compute one value once loaded (with FMA). With the HMMA instruction, it takes 110 pJ to compute an entire matrix of values, so the loading overhead becomes negligible, while with scalar operations like FMA, the loading part dominates the power consumption.
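(To make the arithmetic above concrete, here is a minimal sketch of the energy accounting. The picojoule figures are the ones quoted; the tile size of 4x4x4, i.e. 64 multiply-accumulates per matrix instruction, is an assumption for illustration.)

```python
# Energy-per-operation sketch using the figures quoted above.
LOAD_PJ = 30.0   # pJ to fetch operands for one scalar op (quoted)
FMA_PJ = 1.5     # pJ for one scalar fused multiply-add (quoted)
HMMA_PJ = 110.0  # pJ for one whole matrix instruction (quoted)
MACS_PER_TILE = 4 * 4 * 4  # assumed: 64 MACs per matrix instruction

scalar_pj_per_mac = LOAD_PJ + FMA_PJ                     # every MAC pays the load
matrix_pj_per_mac = (LOAD_PJ + HMMA_PJ) / MACS_PER_TILE  # load amortized over tile

print(f"scalar: {scalar_pj_per_mac:.2f} pJ/MAC")  # 31.50
print(f"matrix: {matrix_pj_per_mac:.2f} pJ/MAC")  # ~2.19
print(f"ratio:  {scalar_pj_per_mac / matrix_pj_per_mac:.1f}x")
```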
@GIANNHSPEIRAIAS 15 days ago
how is that new, and how will this end AMD or Intel? like what's stopping AMD from getting their Xilinx accelerators to do the same job?
@AshT8524 15 days ago
Haha, the title reminded me of the 30-series launch rumor. I really wanted it to happen, but all we got was an upgrade in prices lol
@bass-dc9175 15 days ago
I never got why people want any company to destroy its competition. Because if Nvidia had eliminated AMD with the 30 series, we would not just have the current increased GPU prices. No, it would be 10 times worse, with Nvidia as a monopoly.
@Tential1 15 days ago
I wonder how long before you figure out you can benefit from Nvidia raising prices.
@FJaypewpew 15 days ago
Dude gobbles nvidia hard
@Vorexia 15 days ago
30-series would’ve been a pretty solid gen if it weren’t for the scalper pandemic.
@AshT8524 15 days ago
@@bass-dc9175 I don't want the competition to die, I just want better and more affordable products from both companies, especially compared to previous generations.
@johndinsdale1707 15 days ago
I think the NPU accelerator space is very much an open market. Both Apple and Qualcomm are embedding NPU accelerators into their Arm v9 SoCs. Also, Groq has an alternative approach to inference which is much more power efficient?
@AbolishTheInternet 14 days ago
10:50 Yes, I'd like to use AI to turn my cat photo into a protein.
@Firetim01 15 days ago
Very informative, ty
@denvera1g1 14 days ago
28nm to 4nm is only a 3x density increase? But isn't the 4nm 8700G like 2.5x more transistors than the 7nm 5700G (both around 180mm²)? I mean, I guess clock speeds make a difference, but weren't clock speeds lower on 28nm?
@B4nan0n 14 days ago
Isn't this accelerator the same thing you said the 40 series was going to have?
@francisquebachmann7375 12 days ago
I just realized that Pikachu is a pun on picojoules.
@Sam_Saraguy 14 days ago
Super interesting. Will be following developments.
@chuuni6924 14 days ago
So it's an NPU? Or did I miss something?
@ageofdoge 14 days ago
Do you think Tesla will jump into this market at some point? Would FSD HW4 be competitive? I don't know if this is a market they are interested in, but it seems like they have already done a lot of the work as far as low-power inference goes.
@jaenaldjavier188 14 days ago
I feel like the next big step to properly implement the coming technologies in AI and acceleration would be to integrate such architectures directly into the motherboard, especially in light of how large Nvidia's highest-end cards are becoming, how much more space-efficient mobos have gotten over the last half decade, and the power and efficiency of the APUs and NPUs coming out this year. Physically offloading those calculations onto a dedicated spot on the motherboard could provide an upper hand in computer hardware. This also doesn't seem all too far-fetched when you consider that the industry is planning to implement an "NPU standard" across mobile devices and various OSes, and that mobo manufacturers are already reconfiguring things like RAM from DIMM slots to CAMM2 on desktops. Combine all of this with the fact that the technology could potentially be tied closer to the CPU on the north bridge, and it feels like a no-brainer to work with mobo manufacturers to further push the limits of computing power.
@abowden556 15 days ago
What about 1.58-bit architectures? Same accuracy, way lower memory and transistor footprint. It will be interesting to see how that works out.
@kimmono 13 days ago
Why would you even think the accuracy would be the same? Everything said in this video regarding no loss of accuracy is also wrong. Llama 3 has massive degradation, because it is a dense model.
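(Where the "1.58 bit" figure in this exchange comes from: log2(3) ≈ 1.585 bits, the information content of a ternary weight in {-1, 0, +1}, as popularized by the BitNet b1.58 work. Below is a minimal post-hoc ternarization sketch assuming only NumPy; the threshold is an arbitrary illustration, and real 1.58-bit models are trained with quantization in the loop.)

```python
import numpy as np

# One ternary weight carries log2(3) bits of information.
print(f"bits per ternary weight: {np.log2(3):.3f}")  # ~1.585

# Naive post-hoc ternarization: zero out small weights, keep signs.
rng = np.random.default_rng(0)
w = rng.standard_normal(10_000).astype(np.float32)
t = 0.7 * np.abs(w).mean()             # assumed threshold, illustration only
w_tern = np.sign(w) * (np.abs(w) > t)  # values in {-1.0, 0.0, +1.0}
print(f"zeroed weights: {(w_tern == 0).mean():.1%}")
```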
@cjjuszczak 14 days ago
Can't wait to buy a PhysX, I mean, Nvidia-AI PCIe card :)
@sirab3ee198 14 days ago
lol, also the 3D Nvidia glasses, the dedicated G-Sync module in monitors, etc...
@christophermoriarty7847 10 days ago
From what I'm gathering, this accelerator is a type of cache for the GPU, which means it won't be a dedicated card in consumer products; it will probably be part of the video card itself.
@wrongthinker843 13 days ago
Yeah, given the performance of their last 2 gens, I'm sure AMD is shaking in their boots.
@hdz77 15 days ago
I might actually end Nvidia if they keep up their ridiculous pricing.
@gamingtemplar9893 15 days ago
Prices are set by the consumers, the market, not the company. If anything, prices would only go down with competition. Value is SUBJECTIVE; there is no intrinsic value in anything. You will pay what the market wants; if the pricing were "ridiculous" as you say, then Nvidia would be losing money. It is not, so it is not ridiculous. Learn economics before saying communist shit.
@__-fi6xg 15 days ago
their customers, other billion-dollar companies, can afford it with ease, no need to worry.
@SlyNine 15 days ago
Unfortunately the prices are high because that's what people are paying.
@Alpine_flo92002 15 days ago
The pricing isn't bad when you actually look at what their products provide.
@dagnisnierlins188 15 days ago
@@Alpine_flo92002 For business and prosumers, and the 4090 in gaming; everything else is overpriced.
@ShaneMcGrath. 12 days ago
The more they push all this A.I., the more likely I am to end up switching off and going back outside.
@--waffle- 14 days ago
When do you think NVIDIA will release their consumer desktop PC GH200 Grace Hopper Superchip-style product? I'd love an Nvidia-ARM all-in-one Linux beast.
@--waffle- 14 days ago
When is Part 2!?!? (Also, a new Nvidia Shield tablet!!... yes please)
@kazedcat 15 days ago
Fetch and decode do not move data; they only process instructions, not data. Also, instructions are tiny, 16-32 bits, vs. 512-1024-bit SIMD vector data
@RudyJi158 14 days ago
Thanks. Great video
@rodfer5406 15 days ago
Video error - blacks out
@sanatmondal7093 12 days ago
Some wise man said that Jensen wears a leather jacket even on the hottest day of summer
@Neonmirrorblack 15 days ago
18:39 Truth bombs being dropped.
@lamhkak47 14 days ago
Also, on the local AI space: Apple has accidentally (or strategically) made their Mac Studio a rather economical choice for local inferencing. That, and Nvidia's profit margin (nearly twice Apple's) making Tim Apple gush, shows how dominant Nvidia is in the market right now.
@wakannnai1 15 days ago
All of this hardware is pretty equivalent. AMD showcased similar jumps, and you'll see pretty similar jumps next year from Blackwell and next-gen MI. The question is how much you want to pay for it.
@mryellow6918 15 days ago
AMD hasn't shown similar jumps in performance at all.
@arenzricodexd4409 15 days ago
Raw performance is one thing. But how well and how easily that raw performance can be tapped is another thing.
@v3xx3r 14 days ago
Let me guess: a traversal coprocessor?
@Ronny999x 14 days ago
I think it will be just as successful as the traversal coprocessor 😈
@edgeldine3499 15 days ago
When did we change "times" to "x"? I've been hearing it more and more lately; Gamers Nexus said it earlier (technically yesterday), and I remember hearing it a few months ago. Maybe it's just been standing out more and more to me, but I think it's now a pet peeve. "Ten times as much" sounds like the proper way to say it, rather than "ten x as much".
@pedromallol6498 12 days ago
Have any of Coreteks' predictions ever come true just as described?
@user-wt7pq5qc2q 13 days ago
Well done again. I want to be able to add several cards to my PC and cluster them; might need a three-phase plug.
@hikenone 14 days ago
the audio is kinda weird
@Altirix_ 15 days ago
5:20 black screen?
@technicallyme 15 days ago
Didn't Google use tensor cores before Nvidia?
@jackinthebox301 14 days ago
Tensor is just a mathematical term. This may be pedantic, verging on semantics, but Nvidia owns the 'Tensor Core' architecture specifically used in their products. So technically no, Google didn't use 'Tensor Cores' before Nvidia. They may have had something they referred to as 'Tensor', but again, that's not Nvidia's architecture.
@Eskoxo 14 days ago
Mostly for the server side; I do not think consumers have as much interest in AI as these corpos make it seem
@noobgamer4709 15 days ago
AMD is also entering a tick-tock cadence for the MIx00 series. Ex: MI300X (CDNA3+HBM3), with the MI325X (CDNA3+HBM3E) after that, then the MI350X (CDNA4+HBM3E/HBM4)
@user-lp5wb2rb3v 14 days ago
That's fine, it's better this way, because you can't expect ground-up technologies every year
@jimgolab536 14 days ago
I think much will depend on how aggressively NVIDIA builds and defends its (leading-edge) patent portfolio. First is best. :)
@cem_kaya 14 days ago
I think CXL and a GPU would solve the inference problem with the right software.
@JoeRichardRules 14 days ago
I like the Pokemon reference you're making
@FlyingPhilUK 15 days ago
It's interesting how nVidia is still locked out of the desktop & laptop CPU market, with AMD, Intel and now Qualcomm pushing Copilot PCs & laptops. I know Qualcomm had an exclusive on Windows-on-Arm CPU development, but that ends this year (?), so obviously nVidia should be making SoCs for this market
@Sanguen666 15 days ago
excellent and professional video, ty for ur work!
@user-et4qo9yy3z 15 days ago
Get your tongue out of his ass.
@TheDaswilhelm 14 days ago
When did this become a meme page?
@davidtindell950 15 days ago
Do you think that there will be a competitor or alternative to NVidia within the next 18 months?
@PaulSpades 15 days ago
There were dozens of startups developing exactly this kind of fixed-function accelerator for inference. Some have already run out of money; some have been poached by the bigger players like Apple, Google, Amazon... Some are developing in-memory computing and analogue logic, which will probably never see the light of day this decade. Unless you can get TPUs from Google, there's not much actual commercial hardware you can get that's more efficient than Nvidia's, if you need horsepower and memory. If you want to run basic local inference, any 8-gig GPU will do, or any of the new laptop processors that can do around 40 TOPS.
@nick_g 15 days ago
Nope. Even if a competitor started now and copied the ideas in the video, it would take about 18 months to design, validate, and produce them, AND that would be version 1. NVDA is pushing ahead faster than anyone can keep up with
@omnymisa 15 days ago
For Nvidia's spot I don't think so, but anyone can enter the market, show what they can bring, and try kicking AMD and Intel. Nvidia looks very well positioned and secure as the leader, but sure, we would be very glad if there were some other strong competitors around, because it feels like a monopoly, and that's not good.
@ps3301 15 days ago
These startups can try to sell, but once they get any traction, Nvidia will buy them with one quarter's profit
@004307ec 15 days ago
😅 Huawei Ascend, I guess? Though the software side is kind of bad.
@elmariachi5133 14 days ago
I expect Nvidia to soon produce Decelerators and have these slow down any computer immensely unless the owner pays horrible subscription fees...
@Starfishtroopers 15 days ago
could... nah
@charlesballiet7074 15 days ago
it is kinda nuts how much Nvidia has gotten out of that CUDA core
@mryellow6918 15 days ago
That's what happens when you've been the industry's best for 15+ years. People end up developing technology to perform better on your hardware.
@user-lp5wb2rb3v 14 days ago
@@mryellow6918 Fermi was bad, and so was Kepler compared to GCN. The issue with AMD was software and lack of money. Not to mention they were crippled by GF failing on 14nm and delaying 28nm.
@wasd-moves-me 15 days ago
I'm already so tired of AI this, AI that, AI underwear, AI toothbrush, AI AI AI AI
@brodriguez11000 14 days ago
AI condom.
@FaeTheo 15 days ago
you can also get Win 11 for free and activate it with cmd.
@ragingmonk6080 15 days ago
This is nothing more than a joke! "Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco and Broadcom have announced the formation of the catchily titled "Ultra Accelerator Link Promoter Group", with the goal of creating a new interconnect standard for AI accelerator chips." People are tired of Nvidia gimmicks and they will shut them out.
@120420 15 days ago
Fan boys have entered the building!
@ragingmonk6080 15 days ago
@@120420 I quote tech news and you call me a fanboy because you are a fanboy who didn't like the news. Way to go, champ. Mom must be proud.
@gamingtemplar9893 15 days ago
People are not tired; some people are, and they don't have any clue what they are talking about. The same goes for people who defended Nvidia back in the day and still do, like Gamers Nexus defending the cable debacle to protect Nvidia. You guys are all fanboys of one side or the other who don't understand how things really work.
@ragingmonk6080 15 days ago
@@gamingtemplar9893 We understand how things work. Nvidia used to use the "black box" called GameWorks to add triangles to meshes that were not needed, to increase compute demand. Then they would program their drivers to ignore a certain number of triangles to give themselves a performance edge. Wouldn't give devs access to the black box either. G-Sync was a rip-off to make money because adaptive sync was free. Limit which Nvidia cards can use DLSS so you have to upgrade. Then limit which Nvidia GPUs can use frame generation so you have to upgrade again. We know what is going on and how the Nvidia cult drinks the Kool-Aid.
@ragingmonk6080 15 days ago
@@gamingtemplar9893 We know Nvidia's gimmicks too well. I cut them off with the GTX 1070.
@Zorro33313 15 days ago
if you still need to send data to a data center to be processed there and then get the result back, how is it different from any cloud-based service? this sounds like normal cloud-based computing, absurdly overcomplicated with unnecessary AI shit instead of encryption.
@Six_Gorillion 14 days ago
That's called marketing wank. Slap some AI on that sucker and fanboys will take another 20% price increase up their ends with a smile on their face.
@New-Tech-gamer 15 days ago
isn't that "accelerator" the NPU everybody is talking about nowadays? Specialized in local low-power inference. Nvidia may have the best prototype, but Qualcomm and AMD are already starting to ship CPUs with NPUs doing 40-50 TOPS, all backed by Microsoft within W11. So even if Nvidia comes to market in 2025, it may be too late.
@oraz. 15 days ago
I don't care about LLM AI assistants. If Gaussian splatting takes over rendering, then OK.
@edgeldine3499 15 days ago
"1000x performance uplift" - I'm annoyed by people saying "ex" instead of "times"; been hearing it more lately, I guess. I know it's been a thing for a while, but maybe I'm getting old?
@Prasanna_Shinde 15 days ago
I think Nvidia/Jensen are banking so much on AI because they want to generate all the possible money/margin to fund CPU R&D, so they will have a complete solution/stack 🧐
@donutwindy 14 days ago
NVidia making $30,000 AI chips that consumers are not going to purchase should not affect AMD/Intel, who aren't currently competing in AI. To "end" AMD and Intel, the NVidia chips would have to be under $500, as that is the limit most consumers will spend on a graphics card or CPU. Somehow, I don't see that happening anytime soon.
@chryoko 14 days ago
From memory, yesterday Jensen was speaking about 1000 GWh of consumption (1 TWh) for GPT-4. I find that HUGE. FYI, a battery gigafactory will consume about 1 GWh of electricity per year to produce about 40 GWh of Li-ion cells, enough for 50 kWh packs for 800,000 small cars each year. Or maybe I misunderstood him... Maybe a subject for a video: how crazy the energy needed to run such large LLMs on servers is, compared with what our little brains consume. (BTW, I do not need to solve plenty of matrices full of sin, cos, tan to move my shoulder, arm, hand and fingers every time I want to scratch my head 😉)
@drewwilson8756 14 days ago
No mention of games until 20:09
@jackinthebox301 14 days ago
Nvidia doesn't care about gaming GPUs anymore, dude. AI has 10x the revenue at wildly better margins. The only part of gaming Nvidia cares about is ego-driven: having the fastest card, regardless of price.
@gstormcz 14 days ago
AI acceleration sounds as good as 8-channel sound to my pair of ears. The presentation looks more eye-catching than RGB, maybe because I like spreadsheets and informed narrative. (Just saying I viewed it with the will to absorb as much as my brain accepts 🤷🏼‍♀️; when AI makes it to desktop PCs and games, I will understand 2x more.) You know better how groundbreaking Nvidia's acceleration could be, but I am sure I will watch it from a distance with my slim wallet 😂 GG, pretty news on this topic, as usual by Core-news-tech 👍 Patents legally last only a limited time, right? AMD and all the others will develop their own acceleration at the law bureau soon.
@metatronblack 14 days ago
But isn't Batman the same as Batman Batman
@MichalCanecky 15 days ago
I like the part where he talks about Pikachu
@jackskalski3699 14 days ago
Either I'm bad at listening or I just didn't understand, but inference - running neural models locally - is all the rage with NPUs and TOPS in current SoCs, isn't it? Apple with M3/M4, AMD Strix with 50 TOPS, Snapdragon X Elite, and Windows 12 with Copilot are exactly that use case: running models locally. So why not just cram these NPUs or new accelerators into your CPUs or discrete GPUs and call it a day? What's so revolutionary about this new type of accelerator from NV that the chips hitting the market TODAY don't have?

It's my understanding that optimisations happen on all fronts all the time: transistor level, instruction level, compiler level and software level. When I look at open job positions in IT, it strikes me how many compiler and kernel optimisation roles are opening for drivers, CUDA and ROCm... Don't get me wrong, I love your videos, but I just don't see the NV surprise when everyone is releasing AI accelerators today vs. NV promising them in maybe a year. NV was focused on the server market, while AMD was actually present in both server and client. Also notice that NV was already using neural accelerators for their ray-tracing workloads, which significantly lowered the required budget of rays to be cast, as they could reconstruct the proper signal quite believably with neural networks.

We'd need to assume that the TOPS/W metric is only understood by NV and that everyone else will sit idle and be blind to it. I doubt that, judging by what is happening right now. Also, we assume that models will keep growing, at least in training cost. There are diminishing returns somewhere, so I expect models to also shrink and be optimised, as opposed to only growing in size. As more people/companies release more models, they really need to think about how to protect the IP, which is the weights in the neurons of these networks, because transfer learning is a "biatch" for them :) With progress happening so fast, yesterday's models become commodities. As they become commodities, they are likely to be open-sourced. As such, you can expect a lot of transfer-learning activity, which will act as a force leading to the democratization of older, still very good, models. So this is a headwind for server HW, as I can cheaply transfer-learn locally...

For me, local models are mostly important in two areas of my life: coding aid and photography processing. I really follow what fylm.ai does with color extraction and color matching. As NPUs proliferate, more and more cloud-based features can be run locally (for example Lightroom, Photoshop, fylm.ai, or Copilot-like models that aid programmers).
@jackskalski3699 14 days ago
I was thinking a bit more, and there is another aspect missing from the analysis: data distance. If you are running a hybrid workload and you really care about perf/W, you are actually going to host NPUs on the GPU and also separately as a standalone accelerator. So when you are running a chatbot or some generative local model, you will use the standalone accelerator and throttle down your GPU. That's the dark-silicon concept to conserve energy. If you are running a latency-sensitive workload like 3D graphics aided by neural networks - the ray-tracing/path-tracing workloads - then you are going to use the low-latency on-GPU NPUs because you need the results ASAP, and you might throttle down the standalone NPU accelerator. There is a catch: if your game uses these rumoured "AI" NPCs, then that workload will run on the discrete NPU accelerator, and you're going to be forced to keep it running alongside the GPU.

Now the Lightroom use case is interesting. Intelligent masking or image segmentation can be done on the discrete accelerator, especially if it means the same results at lower wattage (in Puget benchmarks). However, there might also be hybrid algorithms that use GPU compute along with an NPU neural network, in which case it might be more beneficial to run that on the GPU (with NPUs onboard). To prove I'm not talking gibberish, Intel is doing exactly that with Lunar Lake :) There is a discrete NPU with 60+ TOPS, and the GPU hosts its own local NPUs with ~40 TOPS. Thus Intel can also claim 100+ "platform" TOPS, although that naming is misleading, as you are unlikely to see a workload that uses both to run your Copilot. A game, on the other hand, might be different.

Lastly, I remember years ago AMD's tile-based design was marketed as exactly that: a platform that not only helps with yields (from a certain chip size onwards) but also allows you to host additional optimised accelerators like DSPs, GPUs, CPUs and now NPUs on a single chip. So you could argue AMD laid the foundations for that years ago...
@TotalMegaCool 14 days ago
If Nvidia is planning to sell another type of card, an "AI Accelerator", it would explain the rumors of the RTX 50xx GPUs being dual-slot. If you own a tinfoil hat, you might think the RTX 40xx GPUs were larger than they needed to be, to prime users by encouraging them into buying a bigger PSU and case.
@patrikmiskovic791 8 days ago
Because of price I will always buy a GPU and CPU
@XfStef 14 days ago
So the industry is having YET ANOTHER go at thin-client BS. I hope for them, again, to fail miserably.
@jeffmofo5013 14 days ago
Idk, I should be as enthusiastic as you. This may be an investment opportunity. Still waiting for NVidia stock to pull back; technical analysis says it's at its cycle top. But I'm also a machine learning expert. While inference is important, it's currently the fastest thing compared to training. The problem is phones, not laptops. Laptops can more than handle inference; a phone, on the other hand, struggles. So Samsung, with its focus on an AI chip, is more important in this arena. Unless NVidia is going to start making phones, I don't see this, as an implementer, as that impactful. And for this type of work, memory matters more than CPU on phones.

On a side note, I don't even use GPUs for my AI work. I did a comparison, and a GPU only gave me a 20% increase in performance while costing twice as much. So at scale I can buy more CPUs than GPUs, and one more CPU is a 100% increase in performance, compared to gaining 20% at twice the cost. So I don't see the NVidia AI hype:
1 CPU + 1 GPU: 120% performance at twice the cost.
2 CPUs: 200% performance for the same price as 1 CPU + 1 GPU.
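(The commenter's arithmetic, spelled out as a runnable sketch; the 20% speedup and 2x cost figures are their own numbers, not benchmarks.)

```python
# Assumed figures from the comment above: a GPU adds +20% throughput
# but doubles the total cost (i.e. the GPU costs as much as the CPU).
cpu_cost, cpu_perf = 1.0, 1.0
gpu_cost, gpu_speedup = 1.0, 1.2

budget = cpu_cost + gpu_cost                    # same budget either way
perf_cpu_gpu = cpu_perf * gpu_speedup           # 1.2x for one CPU + one GPU
perf_two_cpus = (budget / cpu_cost) * cpu_perf  # 2.0x for two CPUs

print(f"CPU+GPU: {perf_cpu_gpu:.1f}x | two CPUs: {perf_two_cpus:.1f}x")
```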
@newjustice1 15 days ago
What does it look like, brainiac
@Thor_Asgard_ 14 days ago
Let's be clear: I wouldn't buy Nvidia even if they were the last one; I'd rather quit gaming. Therefore no, they ended nothing. They can keep their greedy shit to themselves.
@aladdin8623 15 days ago
At this point of development, Nvidia might as well build a new computer concept with the GPU as the central processing unit. The CPU is becoming more and more of a marginal gimmick. AI shifts things, and Intel and AMD have to react if they want to survive. Arm seems to understand, and Jim Keller with Tenstorrent understood it as well.
@LtdJorge 14 days ago
You don’t know what you’re talking about.
@aladdin8623 14 days ago
@@LtdJorge I don't care about your lack of understanding of basic IT principles, to be honest, or of the impact of current developments on the future. Maybe you want to go play some Fortnite and troll some teenies there instead?
@LtdJorge 10 days ago
@@aladdin8623 GPU as central processing unit is the dumbest thing I've ever heard. It's clear you've never programmed in CUDA, OpenCL, etc.
@aladdin8623 10 days ago
@@LtdJorge The only thing you ever did or understood about computers is how to play Fortnite.
@Jackpkmn 13 days ago
Ah, so it's more AI fluff that will amount to more hardware in landfills after the AI bubble bursts.
@muchkoniabouttown6997 13 days ago
I wanna geek out about this, but I'm convinced that all the AI advancement and inference refinement is just snake oil for over 90% of buyers. So I hate it. I'm down for new approaches, but fr, can any non-salesman or non-fanboi explain how this will benefit more than 1-3 companies??
@boynextdoor931 15 days ago
For the sake of the market, please step up, other companies.
@genericusername5909 15 days ago
But who actually needs this to the point of paying for it?
@angellestat2730 14 days ago
Last year you were saying that Nvidia was at a dead end and that their stock price should plunge; instead, their price has risen 200%. Luckily for me, I did not listen to you at the time, and I bought, taking into account how important AI was going to be, making a lot of profit.
@El.Duder-ino 1 day ago
Nvidia has plenty of cash to continue being aggressive in its goal to disrupt and lead as many markets as possible. It's pretty much confirmed their next goal will be edge and consumer AI, which their unsuccessful attempted acquisition of Arm basically foreshadowed. It will be very interesting to see how the edge/consumer Arm SoC they're working on will compete with the rest of the players. Thx for the vid and for shedding more light on their future accelerator 👍
@supremeboy 15 days ago
They focus more on AI and look at gamers as peasants because less and less money comes from the gaming segment each year compared to AI and acceleration. Enjoy the more expensive GeForce cards coming each gen. Imagine if there were no AMD at all for dedicated GPUs, or Intel. You go to the store, look at the prices, and that's it; you either have to buy or find another hobby :D
@mryellow6918 15 days ago
Then they wouldn't make them. What's your point?
@jackinthebox301 14 days ago
Even if Nvidia makes it through the AI bubble on top, they will likely never exit the consumer GPU market, considering it's where they started. But OP's general idea is correct: Nvidia has no motivation to create anything but expensive high-end GPUs, because silicon spent on low-margin GPUs is silicon taken away from high-margin AI workloads. Consumer GPUs are more about Jensen's ego and Nvidia branding than growing the company. With that said, gaming revenue is doing just fine if you only compare it to itself; it's fluctuating within its product cycle quite normally. It's just being overshadowed by the massive AI bubble.
@subz424 15 days ago
If you're going to talk about LLM/AI inference speeds, why aren't you looking at other hardware like Groq's LPUs? ~20x cheaper (on average) than GPT-4o per million tokens, and far faster inference. Basically, efficient. Would be nice to see a video from you on other companies like Groq.
@axl1002 15 days ago
You need like 8 Groq chips to run Llama 3 8B.
@lemoncake8377 14 days ago
That's a lot of pikachus!!
@Matlockization 15 days ago
Accelerators have been around in CPUs for decades now; I don't get all the Nvidia fanfare.
@IcecalGamer 14 days ago
15:05 "Nvidia has the verticality" - already established for drop-in/plug-and-play. It took Intel 3 years just to make their first-gen PC GPUs somewhat work, drivers and software. AMD can't even make first-party heatsinks for their PC GPUs; check how "good" their vapor-chamber design was. Even IF price/power(W)/performance were a tie - amd=nvidia=intel - the other two are jokes when it comes to ease of adoption and reliability. The only thing AMD has in the GPU market is openness/hackability, but that is a double-edged sword, since it requires time and effort to make it do what you want; you can do it though 👍 unlike Nvidia or Intel, which are closed ecosystems. As for Intel... it's so sketchy that even now there are doubts that Battlemage will ever come out, or they could drop out of the GPU/accelerator market altogether (like they did with so many of their prosumer or enterprise products; Optane, for example).
@Greez1337 14 days ago
Return to the Emotion Engine. The AI revolution will help game developers maximise the man jaws and diversity of every movie-slop game.
@atatopatato 14 days ago
coreteksilikiaaaa
@ejtaylor73 13 days ago
This won't end AMD for one reason: PRICING!!! Not everyone is related to Bill Gates or wants to take out a loan to be able to buy this.