
RYZEN AI - AMD's bet on Artificial Intelligence 

High Yield · 57K subscribers · 121K views

Published: Oct 1, 2024

Comments: 334
@siliconalleyelectronics187
@siliconalleyelectronics187 Год назад
The close business relationships between AMD, Microsoft, and OpenAI are starting to make a lot of sense.
@HighYield
@HighYield Год назад
Fully agree!
@rkalla
@rkalla Год назад
I didn't realize AMD and MSFT were so close - I missed that development. Do you have any events or partnerships I could look up to get context?
@StoutProper
@StoutProper Год назад
More like a cabal
@thesolidsnek8096
@thesolidsnek8096 Год назад
@@rkalla Does CES ring a bell? They publicly announced their love relationship just a few weeks ago.
@diamondlion47
@diamondlion47 Год назад
@@rkalla AMD is in Azure and has been in the Xbox for a few gens now.
@Akveet
@Akveet Год назад
I hope there will be a common instruction set for matrix operations (what's in all of these AI-branded coprocessors) so that developers can just use it without specializing for a specific hardware implementation.
@HighYield
@HighYield Год назад
That's super important, otherwise it won't take off. We don't need closed-source shenanigans.
@004307ec
@004307ec Год назад
Microsoft's take is DirectCompute.
@juliusfucik4011
@juliusfucik4011 Год назад
I don't think these instructions are needed, as matrix addition and multiplication are fairly generic. It suffices to have good libraries such as BLAS and IPP that make optimal use of the existing instruction set. Online training takes only a little computational power; it is the initial training that is expensive, and for that we have GPUs. The AI cores are only meant for running the network forward for inference. This means no feedback, gradient calculation, or weight adaptation is needed. Fun fact: if you quantize a typical neural network from floating point to integer, you can get 30+ fps on a single core of a Raspberry Pi 4. Inference just isn't that expensive.
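As a rough illustration of the float-to-integer quantization mentioned above - a minimal sketch, assuming PyTorch's dynamic-quantization API and a toy model rather than any specific network:

```python
import torch
import torch.nn as nn

# Toy float model; in practice this would be a real, trained network.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

# Post-training dynamic quantization: weights stored as int8, activations
# quantized on the fly. No gradients or retraining involved - inference only.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
with torch.no_grad():
    print(qmodel(x).shape)  # near-identical outputs, smaller and faster weights
```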
@ttb1513
@ttb1513 Год назад
A library or SW layer is where matrix operations belong. And it needs to be optimized for the specific hardware implementation, including compute cores, cache sizes, DRAM sizes and bandwidth, and much more. Take a look at how very large matrix multiplies are done. They are not done in the simple way that would take N^3 multiplies and ignore the HUGE differences between each level of the memory hierarchy. Standardization is helpful, but not at so low an abstraction level that it prevents optimizations.
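For illustration, a cache-blocked (tiled) matrix multiply - a toy NumPy sketch of the idea, not how an optimized BLAS actually implements it:

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Tile the classic triple loop so each working set of A, B and C
    stays small enough to be cache-friendly."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m), dtype=A.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                C[i:i+block, j:j+block] += A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
    return C

A = np.random.rand(256, 256).astype(np.float32)
B = np.random.rand(256, 256).astype(np.float32)
assert np.allclose(blocked_matmul(A, B), A @ B, atol=1e-3)
```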
@mrrolandlawrence
@mrrolandlawrence Год назад
Let's hope the Scalable Matrix Extension for ARM delivers. Coming to Armv9-A. Something they should have added some years ago IMO.
@alirobe
@alirobe Год назад
Potentially easier analogy: a CPU core is like 4 math professors; a GPU core is like 1,000 promotional pocket calculators.
@RealLifeTech187
@RealLifeTech187 Год назад
I was definitely considering a Phoenix APU before knowing about Ryzen AI, and my excitement only increased hearing this news. AI upscaling for video content is the thing I'm most excited about, because there are so many low-bitrate, low-resolution videos out there. The potential for conferencing is also huge, since webcams probably won't get any better (if the covid home-office years didn't get OEMs to improve their webcams, nothing will).
@juliusfucik4011
@juliusfucik4011 Год назад
But any video card that is less than 5 years old can already do this... why want it in the CPU as well?
@polystree
@polystree Год назад
@@juliusfucik4011 Because most ultrabooks & office computers don't have any dGPUs? Also, running an "AI Assistant" or any other AI task on a GPU is for sure not the most efficient way to do it on laptops. I think this product is part of the AMD and Microsoft cooperation: Microsoft wants to try AI-powered Windows on mobile devices (Surface lineup), and AMD wants to try their AIE in real-life workloads before launching it on other segments with little to no use.
@MostlyPennyCat
@MostlyPennyCat Год назад
My best idea for AI in games is AI vision and hearing systems for NPCs.

At the moment in gaming - let's take a stealth game as an example - the enemies have vision and hearing cones: dumb, pure-distance mechanics triggering a behaviour branch if the player is close enough or loud enough, usually augmented with simplistic rules based around crouching, movement speed limits, baked shadow regions and 'special grass'.

Replace that with a quick and dirty low-resolution rendering of what the NPC is looking at, done on the GPU. Now run that image through a trained neural network. Suddenly this opens up the possibility of real effects from movement, lighting and camouflage. Literal camouflage: you're trying to fool the pattern-matching algorithm in the machine in exactly the same way we try to fool the pattern-matching algorithm situated between every human's ears 👉🧠

Same with audio: you render the sound at where the NPC is, run it through another NN, and see if it meets a threshold to trigger the NPC AI (too many things named AI) behaviour branch.

The game design trick is feeding back to the player the level of danger they are in without hokey constructs like the 'interest/danger' markers in games like Far Cry.
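A minimal sketch of that idea - all names here are hypothetical, and a random, untrained network stands in for a model that would have to be trained on labelled renders:

```python
import torch
import torch.nn as nn

class VisionNet(nn.Module):
    """Tiny CNN that scores 'player visible' from a low-res render
    of the NPC's viewpoint."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def npc_sees_player(npc_view, model, threshold=0.5):
    # npc_view: (3, H, W) tensor from a quick low-res render of the NPC's FOV
    with torch.no_grad():
        return model(npc_view.unsqueeze(0)).item() > threshold

model = VisionNet().eval()
fake_render = torch.rand(3, 64, 64)  # stand-in for the cheap GPU render
print(npc_sees_player(fake_render, model))
```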
@kirby0louise
@kirby0louise Год назад
You've absolutely nailed it on the need for strong software support. I looked into it, and apparently it has its own special API/SDK required to utilize it. This is a big disappointment; they should allow it to plug into DirectML (this is how AI acceleration works on Xboxes, and it's great). By integrating it into existing APIs, AMD would have a large amount of support out of the gate and avoid further fracturing the programming ecosystem.
@RainbowDollyPng
@RainbowDollyPng Год назад
I mean, every specialized hardware implementation needs its own SDK, handling the specifics. That alone doesn't prevent it from plugging into DirectML.
@RainbowDollyPng
@RainbowDollyPng Год назад
​@@leeloodog DirectML is a pretty high-level abstraction, and one that's Windows-exclusive at that. You don't build hardware directly to that standard. There is always going to be a low-level SDK that handles the hardware access. Now of course, it could be handled differently - DirectML could be supported from the get-go - and it's a shame they didn't do that, I agree.
@zhafranrama
@zhafranrama Год назад
One reason I can think of why DirectML is not a focus for AMD is that it's not cross-platform and doesn't work on Linux. Why is this important? AI computation in enterprises is usually done on Linux, and enterprises are one of the biggest consumers of AI compute.
@dennisp8520
@dennisp8520 Год назад
There is irony in this too; I think AMD is making some of the same mistakes that Intel made whenever they got large and powerful. For now this stuff isn't gonna be very useful until all chip makers get on board working on a standard.
@RainbowDollyPng
@RainbowDollyPng Год назад
@@dennisp8520 Although from what I can tell, PyTorch, TensorFlow and ONNX are all supported by the Xilinx AI framework as frontends. So really, there is no huge need to support DirectML as middleware between the frontend frameworks and the hardware backend.
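For context, this is roughly what that frontend-to-NPU path can look like from Python when going through ONNX Runtime - a sketch that assumes AMD's Ryzen AI / Vitis AI execution provider is installed (it falls back to the CPU provider otherwise):

```python
import numpy as np
import onnxruntime as ort

# Assumption: model.onnx exists; VitisAIExecutionProvider is only available
# when AMD's Ryzen AI / Vitis AI software stack is installed.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"])

x = np.random.rand(1, 3, 224, 224).astype(np.float32)
out = sess.run(None, {sess.get_inputs()[0].name: x})
print(out[0].shape)
```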
@RM-el3gw
@RM-el3gw Год назад
ah yes, the APU that I've been waiting for. Not yet out there but looking very promising. Any idea of when it will be out? Also, are those AI cores also supposed to be used for something like FSR, such as in the way that Nvidia uses AI cores in its GPUs to sharpen and upscale stuff? Thanks and cheers.
@teapouter6109
@teapouter6109 Год назад
If AI cores are to be used for FSR, then FSR will not work on the vast number of GPUs that it currently works on. I do not think AMD would go in that direction for the time being.
@HighYield
@HighYield Год назад
The RDNA3 cores come with their own smaller AI cores which are used for FSR, and FSR in general doesn't even need AI acceleration iirc, that's why it also runs on older GPUs. Phoenix should be out in late Q1, but thinking back to Rembrandt last year, it might take AMD longer. Let's hope the rollout will happen faster this time!
@fleurdewin7958
@fleurdewin7958 Год назад
AMD's APU for notebooks, Phoenix Point, will arrive in March 2023. It was announced by AMD at CES 2023.
@zdenkakoren6660
@zdenkakoren6660 Год назад
AI will just learn the best and fastest way to make use of the GPU or CPU, and it doesn't even have to send data to AMD or Nvidia. It is a baby learning machine. It may work or not - like GCN 5 had primitive shaders that never got used... Radeon had tessellation way back and it was not used (ATI Radeon 8500 in 2001)... Nvidia PhysX was short-lived... the 4870X2 had two GPUs with a PLX chip between them that was never really used... Intel had AVX-512 in its CPUs and now it's removed, while AMD only has it now in the 7000 series xD... The Nvidia RTX 2000 series had AI and it learned how to better use DLSS and optimize drivers, but AMD has stronger hardware, so this will help the driver team a lot. AI will need something like 3-4 years to make proper use of it, IF it works like people think.
@Bubu567
@Bubu567 Год назад
Basically, AI needs to multiply the weights set by a model across the whole network to figure out the best-fit output, but it doesn't need high precision, since it only needs to determine a rough estimate of its certainty.
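A tiny NumPy demo of why low precision is usually good enough - crude symmetric int8 quantization of a weight vector barely moves the result of the dot product (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)  # "weights"
x = rng.standard_normal(1024).astype(np.float32)  # "activations"

# Crude symmetric int8 quantization of the weights.
scale = np.abs(w).max() / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

exact  = float(w @ x)
approx = float((w_q.astype(np.float32) * scale) @ x)
print(exact, approx, abs(exact - approx))  # small error despite 8-bit weights
```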
@axe863
@axe863 Год назад
Everything can be approximated with randomized varying-depth ReLUs with proper regularization... standard sparse linear learners, no complicated solver needed. Algorithmic complexity is far, far more important than hardware power.
@leorickpccenter
@leorickpccenter 11 месяцев назад
Microsoft will be requiring this and needs at least 40-50 TOPS of performance for a smooth AI experience with Windows 11, presumably with the upcoming Copilot.
@claymorexl
@claymorexl Год назад
I feel like, outside of notebooks and mobile computing, by the time specialized hardware is preferable for handling AI tasks, discrete accelerator cards will be the market standard. Either that, or GPUs will market AI accelerators on their boards and make use of the insane bandwidth PCIe 5 gives them. Integrated AI cores will be more or less like integrated graphics in future x86/PC applications.
@redsnow846
@redsnow846 Год назад
Nvidia already sells GPUs that do this.
@thebritishindian1
@thebritishindian1 Год назад
Great video, very informative. These are the kinds of videos I like to educate myself on the future of computing. Coreteks is a great channel, but his niche is mainly for the future of gaming and graphics, which is less relevant to what I need to know about.
@nicholassabai7284
@nicholassabai7284 Год назад
Manual rotoscoping in video editors would take from a few minutes to hours depending on the complexity of the scenes, and I was surprised to see an AI engine pull that off in seconds.
@electrodacus
@electrodacus Год назад
This is what I'm waiting for. Hope it will be available in some mini PC form. Also hope there will be an API available for XDNA in Linux.
@HighYield
@HighYield Год назад
With how well AMD is doing with their GPU drivers on Linux, I think there's a good chance.
@VideogamesAsArt
@VideogamesAsArt 8 месяцев назад
sadly a year later, Linux support is still missing. Did you get a mini PC though? I have a Framework 13 with Phoenix myself, although not for the AI engine but more for the battery life and incredible efficiency
@electrodacus
@electrodacus 8 месяцев назад
@@VideogamesAsArt I did not get one, as I was busy with other things, and since there is no support for XDNA I will probably wait for XDNA2. There has not been as much progress as I would have hoped. I still use an i7-3770, so over a decade old.
@fakshen1973
@fakshen1973 Год назад
GPU-style parallel processors are very nice-to-haves for digital artists such as musicians, video editors, and animators.
@azurehydra
@azurehydra Год назад
Ai is the future. Even now in its infancy it helps me a bunch. If it became 100x better in assisting me. DAMN! It'll do all my work for me.
@first-thoughtgiver-of-will2456
What AMD needs to do is innovate further on their cache chiplet design and SoC Infinity Fabric IP to form a VRAM-like cache for these DSPs. This is just another AVX extension or Snapdragon DSP equivalent (still awesome to see), but AMD is positioned to fix the real problem with machine learning models, which is the memory hierarchy. CPUs are surprisingly powerful compared to GPUs; it's the memory locality that really makes GPUs outperform CPUs by so much, due to cache misses in parameter space. Throw an L4 equivalent on the outside of the CCX chiplet and extend the ISA for AVX (also throw bfloat16 in there, please).
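On the bfloat16 wish at the end - a small sketch of what bfloat16 actually is (fp32 with the low 16 mantissa bits dropped), emulated here by truncation in NumPy:

```python
import numpy as np

def to_bfloat16(x):
    """Emulate bfloat16 by zeroing the low 16 bits of each fp32 value:
    same 8-bit exponent range as fp32, only ~7 mantissa bits left.
    (Real hardware typically rounds instead of truncating.)"""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([3.14159265, 1e-5, 123456.789], dtype=np.float32)
print(to_bfloat16(x))  # same dynamic range, visibly less precision
```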
@popytkpisatel
@popytkpisatel Год назад
AI is pretty much hype. It will be like anything else: there is potential, and there will be very real and awesome implementations. On the other side of the coin, there will also be a lot of generic implementations and overinflated expectations and trust in AI. As with ChatGPT and the copy and content writers who boast about "how they can now sip their coffee and automate their writing work" - ChatGPT (and other AI models) will just regenerate sameness from the existing dataset. The aspects of "creativity", "newness" or "challenging the status quo" will not exist anymore, while everyone accepts the bland repetition with a "but it's AI, it knows better" or "it is too complicated, even AI can't figure it out" attitude. I believe it will do great things (like many tedious, repetitive grunt jobs, both in manual and intellectual labour), and there will be some creative implementations where people make money. On the other hand, it will suck the life out of other things and make them very boring and bland.
@cameronquick1157
@cameronquick1157 Год назад
Mindblowing stuff. Definitely convinces me to continue waiting for 7040 availability in thin and lights. Aside from all the potential applications, battery life improvements should also be significant.
@christheswiss390
@christheswiss390 Год назад
A highly interesting and insightful video! Thank you.
@MostlyPennyCat
@MostlyPennyCat Год назад
I wonder if AMD's HSA (Heterogeneous System Architecture) can rise from the grave now. Seems like the perfect fit for adding AI inference to your code?
@matthewstewart7077
@matthewstewart7077 10 месяцев назад
Thank you and a great overview. I'm getting into AI machine learning and am hoping to utilize this new feature for training models. Do you have any resources on how to utilize Ryzen AI for machine learning model training?
@bev8200
@bev8200 Год назад
Just looking at AI art, this makes me super optimistic about the gaming industry. The environments that AI will create will be incredible.
@MWcrazyhorse
@MWcrazyhorse Год назад
Or they will be hellish. ah fuck it what could go wrong? Let's gooooo!!!
@erlienfrommars
@erlienfrommars Год назад
Windows 11 getting its own software-based AI engine to complement these AI hardware accelerators - one that can improve audio, video and telecommunications - would be amazing, and about time, as Apple has been doing this for years since they moved to M-series Macs.
@HighYield
@HighYield Год назад
I'm sure Microsoft is already hard at work.
@craneology
@craneology Год назад
The more to spy on you with.
@kekkodance
@kekkodance Год назад
​@@craneology Same
@NaumRusomarov
@NaumRusomarov Год назад
I wonder how this is going to be exposed to the OS and software. I'd like them to make this configurable through the compilers so that devs could use the AI cores if available.
@HighYield
@HighYield Год назад
I also hope they will provide open APIs.
@NaumRusomarov
@NaumRusomarov Год назад
@@HighYield that would be spectacular! :-)
@SundaraRamanR
@SundaraRamanR Год назад
@@HighYield it's AMD, so they probably will
@e.l809
@e.l809 Год назад
AMD should work to make it compatible with the ONNX format (by Microsoft); it's open source and supports a lot of hardware. It's the beginning of a "standard" for this industry.
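As a sketch of what that looks like from the developer side - exporting a toy PyTorch model to the vendor-neutral ONNX format, which any ONNX-capable runtime or NPU stack can then load:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
dummy = torch.randn(1, 128)

# Serialize to ONNX; the resulting model.onnx is no longer tied to PyTorch,
# CUDA, or any particular accelerator.
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["logits"])
```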
@gstormcz
@gstormcz Год назад
There is always some hype, and it's not always bad. I think it depends on both demand and chip design possibilities. I just don't know whether these AI cores aren't simply what would otherwise have been listed in the chip specs as AVX, MMX and other CPU extensions. AMD calling on software devs to make good use of those AI areas of the chip really looks like another attempt to make use of ray tracing or other special cores. It's AMD's business to sell what they make. I remember the Intel Pentium MMX having extensions that claimed better gaming performance, but when you moved to an AMD chip one gen later, you usually found those extensions there too, sometimes with better pricing or simply higher raw performance than Intel.

"I just hope my computer won't watch my every single action one day, giving me advice to live better, faster, more efficiently and do more things at once, asking only to be plugged into the grid with a cooler slapped on my head. Turn me off if I become idle but consume too much." (xd)

When I played chess against the computer, it was tough at medium difficulty on PCs from the 8086 up to the 486. 3D shooter bots in Quake 1 or 3 were beatable a bit above that, being quite fast, dextrous and accurate. Playing vs bots in World of Warships seems quite easy most of the time; they sometimes get stuck on islands (no complaint at the devs) and are mostly not that devastating as gunners, but they do already change course, speed, etc. It is still programmed behaviour - having bots that perform on par with humans is not the goal of the co-op mode there (IMHO), and many players prefer a more relaxed, easier game than PvP. But I can imagine AI making a fair bot enemy, either matching or surpassing player skill, and teaching you what to improve either passively or with guidance/AI tips.

Finally, AI could just teach car drivers how to drive well without full automation. I personally don't want AI driving my life; some Google results on a casual search are enough.
@HighYield
@HighYield Год назад
I think it's important to make the AI engine accessible, and with time, real use cases will appear.
@pedro.alcatra
@pedro.alcatra Год назад
That's not a big deal for 90% of home users, for sure. But wait till they make a partnership with Unreal and we start seeing it in NPCs or something like that lol
@icweener1636
@icweener1636 Год назад
I want an AI that will help me get rid of Noisy neighbors
@quinton1630
@quinton1630 Год назад
7:39-7:41 audio blip from the audio editing on “transistor”
@HighYield
@HighYield Год назад
I'm pretty sure I just had a horrible microphone pop at this point and tried to remove it; the result is a few missing frames and the audio blip. Why are you paying so much attention? Can't even make my mistakes in peace ;) Good catch tho!
@quinton1630
@quinton1630 Год назад
@@HighYield I’m a professor who pre-records some lessons, so I’m all too familiar with replaying 1 second of audio a dozen times to fix pops, blips and doots :P Great video by the way, kudos on being informative and entertaining!
@dragonsyph2557
@dragonsyph2557 Год назад
AMDoa
@EthelbertCoyote
@EthelbertCoyote Год назад
An AI engine coupled with a small on-chip FPGA could cover a lot of inefficient tasks that would otherwise burden a GPU's or CPU's main task set, correct?
@HighYield
@HighYield Год назад
Yes, I’d say so too
@em0jr
@em0jr Год назад
The ol' math co-processor is back!
@procedupixel213
@procedupixel213 Год назад
Yeah, AI hardware is definitely mass producing fast food. Or even junk food, when you consider that precision can get as low as just two bits per coefficient.
@HighYield
@HighYield Год назад
I honestly think my analogy isn't that far off :D
@MK-xc9to
@MK-xc9to Год назад
It seems Meteor Lake is delayed again and may even be scrapped (at least for desktop) due to the lack of high CPU frequency; instead there may be another Raptor Lake refresh. Raptor Lake itself wasn't planned and is only a refresh of Alder Lake. Maybe we will see Meteor Lake on mobile in 2023, but that depends on "Intel 4", which still has some issues but may be good enough for mobile.
@HighYield
@HighYield Год назад
Yes, Meteor Lake is hanging on the ropes right now, but I still think we might see a mobile version this year.
@soupborsh6002
@soupborsh6002 Год назад
I use GNU/Linux!!! We need open source!
@oliviertremois1500
@oliviertremois1500 Год назад
Very good video. Most of the information on XDNA is accurate - I mean, not overestimated!!! All the animations of the AI Engine are really nice, compared to my poor PPTs!
@kaemmili4590
@kaemmili4590 Год назад
Quality content, thank you so much. Would love to see more from you, especially on new chip paradigms, on the research side of things.
@Chalisque
@Chalisque Год назад
If it only runs on AMD's APUs, then it will only run on a fraction of PCs, making for a small target market for software developers. It makes sense to add it to their desktop Ryzens too, and possibly their discrete GPUs, possibly even separate PCIe cards with just XDNA on them (market dominance will require the tech to be available to PC users with Intel CPUs and Nvidia GPUs). But I can't see how the market will embrace this technology if it is only available on AMD's APUs.
@danimatzevogelheim6913
@danimatzevogelheim6913 Год назад
Wieder top Video! Again, a high quality video!
@BrandonMeyer1641
@BrandonMeyer1641 Год назад
Man, I wish I had something like this when I was taking a class on AI last year. Some code would take several minutes to run; this probably would have cut that down a bit. If compilers can take advantage of such features on the silicon automatically, it will have huge implications for students. Additionally, once AI cores are common in most laptop chips, universities can adjust curricula to teach CS students how to leverage them before they graduate.
@Yusufyusuf-lh3dw
@Yusufyusuf-lh3dw Год назад
AI engines and dedicated AI capabilities are already available on Apple and Intel CPUs. Apple has dedicated IP for AI offloading, and Intel has TMUL instructions in Alder Lake for AI operations. It's just a matter of which one has more application support and which one is more effective in terms of performance and power consumption. Secondly, as you said, Meteor Lake has a dedicated AI engine on the CPU and Raptor Lake has onboard AI IP.
@MWcrazyhorse
@MWcrazyhorse Год назад
Nooooooo don't make an AI chip!!!! The Robots will eat our brains!!!!!! Didn't you watch Terminator???!!!!
@HenryTheOunce
@HenryTheOunce Год назад
RAIZEN
@user-qr4jf4tv2x
@user-qr4jf4tv2x Год назад
can't wait for my cpu to have existential crisis
@HighYield
@HighYield Год назад
So you are saying it can run crysis?!
@HazzyDevil
@HazzyDevil Год назад
Always happy to see a new upload. Thanks for covering this! As a gamer, I'm excited to see how good AI upscaling becomes. DLSS 2/3 has already shown a lot of promise; now just waiting for AMD to release their version.
@adamrak7560
@adamrak7560 Год назад
I am more excited about neural rendering (neural radiance fields), it is not real-time on current hardware, but with the right dedicated hardware it will be soon.
@LyubomirIko
@LyubomirIko Год назад
Finally! Skynet is coming! Win 14 will be able to finally not only spy on you, but teach you how to be obedient to your overlords and how to think with the help of the new AI processor! How anticipated! And Win 16 will be your one way ticket to get that needed plasma soup for the day, just log in and report who of the humans talked against the AI, the system will take good care of them, while keeping you well fed! Win-win.
@D9ID9I
@D9ID9I Год назад
Meh, 12 TOPS is really not much, especially when divided across 4 streams. Jetson Orin brings 275 TOPS, for comparison. So this is like adding 3x Google Coral modules. Not bad, but not overwhelming so far. You can always just plug a $40 dual Google Coral M.2 Key E module into an older system and get 8 TOPS right now.
@wilmarkjohnatty4924
@wilmarkjohnatty4924 Год назад
I'd like to understand how the strategies deployed by AMD will compare to NVDA, and maybe Broadcom with RISC/ARM - and is this why NVDA tried to buy ARM? There is a hell of a lot of hype about NVDA; are they likely to live up to it? What will AI do to the already seemingly dying INTC?
@zajlord2930
@zajlord2930 Год назад
Sorry if you already mentioned this, but are the AI cores only for AMD to use for FSR or something, or are they something users can use for machine learning or whatever? And how do these compare to a GPU - can I do as much with this as on a GPU, or how much better or worse is it? Again, if you already mentioned this I'm really sorry, but I'm too tired to rewatch it today.
@HighYield
@HighYield Год назад
It’s not meant for FSR, those cores are inside the GPU. In theory you should be able to use it for machine learning code.
@RM-el3gw
@RM-el3gw Год назад
@@HighYield ah crap just asked this question haha.
@kirby0louise
@kirby0louise Год назад
CPUs, GPUs and the AI Engine are all Turing complete, so technically they can all execute the same tasks, provided they are programmed for the respective processor. What differs is the speed at which they can do certain tasks. Linear, logic-heavy code will perform best on CPUs. General-purpose parallel number crunching will be best on GPUs. Specialized parallel matrix math will perform best on the AI Engine. Comparing it to the rest of the Phoenix APU (Ryzen 7 variant), the integrated 780M provides up to 8.9 TFLOPS of FP32 / 17.8 TFLOPS FP16 (possibly 35.6 TOPS Int8 / 71.3 TOPS Int4? The ISA manual states support for Int8/Int4 matrix math but not packed acceleration of it. I would assume this is carried over from the Xboxes, but I can't be totally sure). The AI Engine hits 12 TOPS (unspecified, assuming Int4). While it might sound like this makes the AI Engine pointless, the real story is in the perf/watt. The AI Engine, according to AMD, has power usage measured in milliwatts, while the 780M could easily pull 20W+. Thus, the AI Engine is great for ultrabooks that cannot afford to be blasting the GPU like that.
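A back-of-the-envelope check on the 8.9 TFLOPS figure quoted above, assuming a 780M-like configuration (12 CUs × 64 shaders, RDNA3 dual-issue FP32, 2 ops per FMA); the ~2.9 GHz clock is an assumption:

```python
shaders = 12 * 64          # assumed 780M shader count
clock_ghz = 2.9            # assumed boost clock
ops_per_clock = 2 * 2      # FMA counts as 2 ops, dual-issue FP32 doubles it

tflops_fp32 = shaders * clock_ghz * ops_per_clock / 1000
print(round(tflops_fp32, 1))  # ~8.9, matching the figure in the comment
```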
@simplemechanics246
@simplemechanics246 Год назад
AMD can develop, Intel cannot... Luckily Intel is on its way out of the CPU business. They were once the leader, but that ended in 2014. They can't make anything new, because they were so focused on raising prices, not on quality or newer tech. RIP Intel
@petershaw1048
@petershaw1048 Год назад
This week I built an AMD system based on the X670E Pro chipset (PCIe 5) with an 8-core processor. When they come out, I will drop in a CPU with Ryzen AI...
@theminer49erz
@theminer49erz Год назад
I'm actually very happy to see this. I have been expecting something similar for a while, although I was envisioning it being more like those "physics" cards in the early 2000s.

It seems that ever since the Xbox 360/PS3 era began, in-game AI (NPCs, enemies, wildlife, etc.) has basically been an afterthought, IF that. I believe it's because by the time those consoles were released they were already rather outdated compared to PC capabilities, and they have been trying to keep up (and failing) ever since. That means when studios make games, even if they will be predominantly for PC, they can't get too "fancy", or else the difference between the PC version and the console versions would be too great and would point out how bad the system is. I doubt they would get a license to sell such a bad port, and without console licenses they will not get budget. So in order to maintain the illusion of graphical improvements over time, things like AI and view distance were left on the cutting room floor.

Think about A-Life in the STALKER games, which allows for AI-based enemy tactics, wildlife, and NPC interactions. It makes for a much more realistic, immersion-heavy game experience that is almost always different, since even when you are not around or on the map, the NPCs do their own thing. Also think about the first FEAR game: the enemy tactics were amazing and felt real, but graphical quality was compromised to do it (and it was worth it).

Anyway, my point is that I hope this is used for such things moving forward. I know publishers would almost never approve lowering graphical quality just for better AI, since their market research says "graphics are the most important part of a game" (aka asking random people who have no idea what makes a game good, "what makes a game good?"). However, if this hardware becomes more common, they wouldn't have to make that trade-off.

Lastly, I believe that MS and Sony only have maybe one more "next-gen" console in them before the price/performance of a PC surpasses what they can make and sell. They already take a bigger and bigger loss on system sales each cycle and rely on licensing to make up for it. However, since they use next-gen APUs now, if they can get, say, an AMD APU with RDNA3/4 and "AI cores", games may start making use of in-game AI again, since it won't have to be a trade-off and it can be applied to console titles. They could also allow things like DLSS-type AI upscaling to be taken off the GPU and given to the CPU, perhaps. I see APUs being the main go-to in the future. AMD has the head start, and chiplet stacking/3D cache can make them extremely powerful. I also see dedicated APU motherboards with both system RAM and VRAM slots, which would allow for more upgrade paths and less waste. Yes, such a mobo would cost $300+, but you wouldn't need the whole GPU PCB, and there could even be some performance gains from having all of that on the mobo instead of going through PCIe slots.

Anyway, this is good news I think! There is also really promising potential for other things, but that is a secret, as I am currently working on something that would benefit greatly from such a thing. It would also be nice to have offline home assistant/automation computing more readily available to more people, instead of having everything that happens in their home sent to an Amazon server to be analyzed and archived just so it can play a song when you ask it to. This is possible now if you have a server with a GPU (like me), but it's not supported very well as far as software choices go. If it weren't such a weird thing to set up, I'm sure there would be many more options.

I will conclude here, thanks for the video!! I'm looking forward to hearing more about the new "Zen 4D" and how RDNA3 is evolving. I'm not keeping up with most of the main outlets because they are getting annoying, so I'm counting on you to keep me up to date! :-D Have a great weekend!!
@HighYield
@HighYield Год назад
I remember "PhysX" very well. At some point Nvidia thought everyone would have a dedicated physics card in the future. But unlike Nvidias proprietary API, I'm sure AI engines will make their way into most computer chips eventually.
@Raja995mh33
@Raja995mh33 Год назад
It's just annoying that this stuff is called "AI". NOTHING about this stuff is artificial intelligence; it's machine learning and fancy algorithms. Also kinda funny to call it "bleeding edge technology" while Apple has been using this stuff for 3 years in Macs and more than 5 years in phones (and so have many Android phones).
@Sanguen666
@Sanguen666 Год назад
Still, AMD has no answer for matrix multiplication and CUDA. Sad. I like AMD's open-source nature and Linux support. I really want them to succeed. I don't want to be forced to buy an NVIDIA workstation if I have a choice... :C
@dannyynnad-u4p
@dannyynnad-u4p Год назад
Yea yea yea. I still don't understand why consumer processors need AI chips. Almost nobody trains AI models for daily or even work use; the only real consumer use right now is maybe deepfakes. But muh ChatGPT? You aren't even training the AI model, you are only using a model that was trained in 2021. People keep saying "ohhh, we're gonna train our own models in the future"... like heck a tiny AI chip can train anything useful, like an entire bespoke ChatGPT model for you. Maybe these so-called factories of simple calculations would be useful to supplement a normal CPU, but it's already rare to see GPU acceleration implemented, and now you are adding a third, arguably weaker option? I am not saying AI for consumers is pointless; it's just pointless to integrate a half-butt chip with the CPU. If AI model training is actually going to be something everyone needs to do often and well, it has to be its own processing unit, or even a card like GPUs. And all the bull about voice recognition and stuff - maybe fix the voice assistants themselves first... except, oh, if you can't sell crap with Siri, you won't even implement useful features into it. Go watch Linus rant about voice assistants: it has been a decade and these assistants are still as crap as before; some have even regressed.
@curumo_curunir
@curumo_curunir 11 месяцев назад
AMD is a disappointment when it comes to software. I am not hopeful. AMD does not care about its software side; they do not understand (or care about) the importance of software and drivers. NVIDIA is far ahead of its competitors. I want alternatives because they help consumers, but Nvidia is the only real-life proven producer in this field.
@chadjones4255
@chadjones4255 Год назад
It's not exactly the singularity -- but it will be an important milestone when gaming NPCs become more interesting and thoughtful people than the mass of real human NPCs who actually run the world. We're already getting very close...
@fy7589
@fy7589 Год назад
Why did it take so long for x86 to catch up to ARM-based devices on AI? Because Apple has more money than anyone else; they can do things much faster. Nvidia makes a whole lot more money than AMD, so they're probably not much different from Apple. Intel has been trying with their Xeon Phis and such - they've probably been obsolete for a decade now, but Intel has been trying. AMD only recently bought a big player in the industry, and they weren't making any money until they came up with Ryzen. And let's not forget they sold a lot of Polaris cards to miners - there were even Mining Edition branded Polaris cards. Besides, ARM makes way more sense when you consider that AI apparently parallelizes well, and yields are much worse on large dies. ARM chips are usually small and power efficient. And to be honest, most people probably won't even use the AI engine on the CPU, so why waste it?
@TrueThanny
@TrueThanny Год назад
AI is definitely over-hyped. But the bigger problem is that it's not treated with sufficient skepticism. Nothing put out by AI should be relied on to be true in any sense. Image enhancement, facial recognition, speech transcription and translation, etc. All these things should be, at most, the starting point for any kind of action. But that's not how things are actually working. Neural networks are black boxes. You cannot debug them when they give the wrong answer. You can only retrain them to create a new black box which no longer gives that wrong answer (but might give another wrong answer that it previously wouldn't). What's really needed are laws that specifically exclude any kind of evidence that has been touched by AI processes from any and all legal proceedings. That includes something as simple as AI-based upscaling. No facial recognition should ever be considered evidence for an arrest warrant, for example. It should never be anything but a lead that needs to be followed up by a human. We've already seen police departments violate this obvious bit of common sense on multiple occasions. So before everyone gets invested in this so-called artificial intelligence, we need to put structures in place that makes it clear to everyone just what its limitations are, so that AI can be prevented from causing more harm than it already has.
@DeadCatX2
@DeadCatX2 10 месяцев назад
As an FPGA engineer I've used DSP cores to accelerate certain algorithms in hardware, and upon hearing that MAC is the basis of AI I pictured the Leonardo DiCaprio pointing meme, with DSP cores pointing at AI cores.
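The multiply-accumulate the comment refers to is literally this loop - the primitive that DSP slices and "AI cores" alike are built to repeat billions of times (a dense layer is just many of these dot products):

```python
def mac_dot(weights, inputs):
    acc = 0.0
    for w, x in zip(weights, inputs):
        acc += w * x  # one multiply-accumulate (MAC) per iteration
    return acc

print(mac_dot([0.5, -1.0, 2.0], [1.0, 2.0, 3.0]))  # 0.5 - 2.0 + 6.0 = 4.5
```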
@rolyantrauts2304
@rolyantrauts2304 Год назад
It's a matter of cost: if the 7040 undercuts what is likely to be the M3, then there is a huge new arena of local voice-activated LLMs, where a single server can service a reasonable number of zones for most homes. The home HAL is on the way - and that's the 2001 kind, not the abstraction layer.
@johankuster1377
@johankuster1377 Год назад
Training an AI model is not the same as using it. When you use a model like ChatGPT you are just running software that sits somewhere in the cloud on a server; the model is way too big. So, what is the use case of this AI engine? Will users generate their own models locally, without their knowledge, to be used by their OS or any other software they use? Could be.
@Z0o0L
@Z0o0L Год назад
Maybe they can use this AI for price-finding that isn't ridiculous for the 7000-series CPUs...
@randomsam83
@randomsam83 Год назад
Dude your accent is perfect for explaining technical stuff. Consider using a German word once in a while to make it perfect. Great work !
@6XCcustom
@6XCcustom 10 месяцев назад
The extremely rapid AI development, in the form of both software and hardware, implies that the hardware must now be replaced much faster.
@thomasfischer9259
@thomasfischer9259 Год назад
XDNA will be a cheat code. I promise you that Nvidia is scared shitless right now.
@SirMo
@SirMo Год назад
@MirroredVoid AMD literally invented HBM, also check MI300.
@Amonny
@Amonny Год назад
Let's talk again in about 10 years, when T-800s will be roaming the Earth xD
@Silent1Majority
@Silent1Majority Год назад
This was GREAT!!
@ryanmundell3504
@ryanmundell3504 Год назад
It's more like using a fighter jet to deliver a package, while a GPU is like a cargo ship.
@pichan8841
@pichan8841 Год назад
Is that a grandfather clock running off camera?
@HighYield
@HighYield Год назад
You hear a ticking sound?
@pichan8841
@pichan8841 Год назад
@@HighYield Actually, I do. Not constantly, though... Am I hearing things? E.g. 2:40-2:49, 3:03-3:13 or 3:43-4:11... Grandfather clock!
@evdb9255
@evdb9255 Год назад
I have a new WhatsApp business idea. You know voice messages? What if you could hear the voice message in real time and speak back at the same time? Take that, AI.
@iLLya_
@iLLya_ Год назад
So let me get this straight: my laptop is gonna have an AI optimizing storage, an AI generating frames in the GPU, another AI in the processor, and all of them operating under AI software which interfaces with me through an AI chatbot that I use to browse AI-generated art recommended to me by an AI-controlled algorithm?
@SmartK8
@SmartK8 Год назад
I want my CPU, GPU, APU, and QPU (Quantum Processing Unit).
@Xantosh82
@Xantosh82 8 месяцев назад
My question is, do you think they will allow you to disable this engine, for those in the tinfoil hat gang that see AI as this big bad Skynet thing?
@TerraWare
@TerraWare Год назад
We need ai shader compilation to get rid of stutters.
@SirMo
@SirMo Год назад
That's a developer issue. Not really something you can fix in hardware. Some games do it correctly.
@6SoulHunter9
@6SoulHunter9 Год назад
I am about to have lunch and that sandwich looked so tasty that it distracted me from the topic of the video xD
@Sychonut
@Sychonut Год назад
"AI Engine" in Silicon Valley lingo is pure horse shit. In the vast majority of cases we are literally talking about a piece of fixed function hardware that accelerates matrix multiplications as that is what convolutions and linear layers in a neutral network boil down to, and they account for the majority of the compute load in inference (and training as well.) Now companies could say we have a piece of hardware that accelerates matrix multiplies but that wouldn't have been approved by marketing nor would it have sounded as sexy so they twist the words to make it sound much more advanced than it really is.
@ZeZeBatata69
@ZeZeBatata69 Год назад
Can it run CUDA? If not, it would be an impossible task. NVIDIA really cornered the AI market by spreading .cuda() calls throughout most existing AI code.
@HighYield
@HighYield Год назад
No, of course not, since CUDA is proprietary Nvidia software. But Nvidia's domination with CUDA is slowly eroding; PyTorch 2.0 and Triton from OpenAI are good examples.
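One way code avoids that .cuda() lock-in today is by selecting the device at runtime - a minimal PyTorch sketch (ROCm builds also expose their devices through the same "cuda" backend):

```python
import torch

# Pick whatever accelerator backend this PyTorch build can see, else CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(8, 16, device=device)
print(model(x).device)  # the same script runs on CUDA, ROCm, or plain CPU
```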
@metalhead2550
@metalhead2550 Год назад
As normal, take instances of AI and replace with ML.
@MWcrazyhorse
@MWcrazyhorse Год назад
I think you meant to say leading edge technology. Unless bleeding edge is a word. Maybe we should make it a word. It sounds dark. Bleeding edge.
@HighYield
@HighYield Год назад
I think it already is ;) en.wikipedia.org/wiki/Bleeding_Edge
@tikz.-3738
@tikz.-3738 Год назад
Nvidia's tensor cores are in essence already a fixed-function AI accelerator, and so is Intel with their upcoming stuff. AMD is just playing catch-up, nothing else. The major problem for AI accelerators is not compute but bandwidth, which is plentiful in a GPU vs a CPU. So it isn't that groundbreaking by any means.
@SirMo
@SirMo Год назад
Nvidia has nothing like this. This is about AI at the edge. And about speeding up inference. Where Nvidia excels is training.
@critical_always
@critical_always Год назад
Might as well be the cap of her tampon box. It's a rectangle! Woohoo
@jmtradbr
@jmtradbr Год назад
Nvidia has had this for several years already. AMD needed it.
@autarchprinceps
@autarchprinceps Год назад
How is this special? Apple has had NPUs in their processors for years, as do most smartphones. Seems more like someone catching up to the trend, than an innovation. Like when Intel introduced big little.
@HighYield
@HighYield Год назад
I specifically talk about that question :)
@hellraserfleshlight
@hellraserfleshlight Год назад
The question is, with local AI processing, will applications stop sending PII to the cloud to be processed and cataloged, improving user privacy, or will it just save Google, Facebook, Amazon, Microsoft, and others money on processing data, letting them harvest more "polished" PII?
@klh8689
@klh8689 Год назад
I need instructions within the instructions because I am that dumb sometimes.
@mapp0v0
@mapp0v0 Год назад
What are your thoughts on BrainChip's Akida?
@Stopinvadingmyhardware
@Stopinvadingmyhardware Год назад
So a return of the FPU, math coprocessor.
@stevenwest1494
@stevenwest1494 Год назад
Interesting, but it'll need to be sold to the average PC user as something they need. That will require it to be built into the Windows 11 scheduler, which for some reason is having problems with Zen 4 cores across 2 CCDs and with Intel's big.LITTLE design - understandably so with 2 different core designs for Intel, but CCDs have been an AMD standard for generations now. Also, Zen 5 will use a big Zen 4+ core and a smaller Zen 5 little core. It'll be another generation - Zen 6 at the earliest - before we see AI in desktop CPUs from AMD.
@briancase6180
@briancase6180 Год назад
Sheesh, it's about time. Apple and Google have had "neural engines" for years. Apple's new M-series SoCs also have good AI accelerator blocks.
@DUKE_of_RAMBLE
@DUKE_of_RAMBLE Год назад
Mmmm... 🤤 That notion of game enemies leveraging the AI Engine is nifty! I don't know exactly how much improved it would be over current methods, which have already had "learning" abilities, albeit minimal and session-based. If the new one could store complex info and re-use it on the next game load, that'd be great. (Although this probably falls under "machine learning", not "general AI" 😕)
@mark970lost8
@mark970lost8 Год назад
ok hold on now... skynet is becoming a real thing now?
@shaunhall960
@shaunhall960 Год назад
Wait until AI decides we are no longer important.
@Alauz
@Alauz Год назад
Would be nice to have A.I. accelerated graphics replacing traditional raster tech in the near future. Maybe full Path-Traced graphics with A.I. accelerators can make huge GPUs unnecessary and we can simply use APUs and re-shrink the ultimate gaming machines to the size of watches.
@ytsks
@ytsks Год назад
"AI calculations don't need high percision" please don't ever make videos on topics you have no idea about.
@HighYield
@HighYield Год назад
There’s a reason almost no one uses FP32, it’s because the performance and power trade off is worth it for AI computation. Why is that? Because they are very tolerant of low precision. Hope that clears it up for you :)
@larsb4572
@larsb4572 Год назад
A big use will be night-to-light: perfect sunny days even in pitch black. What is black to the human eye is just nuances of dark to the AI with a decent optic. As such, you could implement it in the windshield and side windows of your car, so at night you get 240 fps of AI sunshine at 1 am while driving in pitch black. On the phone too... just hold up your phone, or put it in a headset, to see around you while underground or outdoors in the dark.
@janivainola
@janivainola Год назад
Would be very cool with RPG games where the plot is set up, but AI follows the actions and style of the player to update the story during play, making each playthrough unique...
@woolfel
@woolfel Год назад
I spent the weekend benchmarking the Apple M2 Max and the newer ANE. For DenseNet121, it can do over 700 FPS versus 100 FPS on the GPU. It's taken AMD far too long to add tensor processors.
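For reference, ANE-vs-GPU comparisons like this are typically done by flipping Core ML's compute-unit setting - a sketch assuming a previously converted densenet121.mlpackage with an input named "input" (both assumptions):

```python
import time
import numpy as np
import coremltools as ct

def fps(compute_units, runs=100):
    # compute_units picks which blocks Core ML may schedule the model on.
    model = ct.models.MLModel("densenet121.mlpackage", compute_units=compute_units)
    x = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
    start = time.time()
    for _ in range(runs):
        model.predict(x)
    return runs / (time.time() - start)

print("CPU+ANE:", fps(ct.ComputeUnit.CPU_AND_NE))
print("CPU+GPU:", fps(ct.ComputeUnit.CPU_AND_GPU))
```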
@superslug8093
@superslug8093 Год назад
Tesla is going to love AMD for these cheap AI chips.
@youtubesucksdicks9474
@youtubesucksdicks9474 Год назад
"Neural net processor; a learning computer." =)