
How Digital Audio Works - Computerphile 

Computerphile
2.4M subscribers · 264K views

Learn how to add narration to your Kindle eBooks. Visit www.amazon.com/...
How does digital audio work? Programmer, Producer and Professional Musician David Domminney Fowler takes us through the basics.
Science of Drumming - Sixty Symbols: • The Science of Drummin...
Captain Buzz: Smartphone Pilot: • Captain Buzz: Smartpho...
Factory of Ideas (Bell Labs): • The Factory of Ideas: ...
Could we Ban Encryption?: • Could We Ban Encryptio...
Computer that Changed Everything: • Computer That Changed ...
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscom...
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Published: 2 Oct 2024
Comments: 493
@TheHoaxHotel 9 years ago
That's why I always record at 88kHz, so my dog can appreciate the fine notes of the super piccolo.
@pracheerdeka6737 4 years ago
hahaahha
@pracheerdeka6737 4 years ago
HE IS NOT SPEAKING FOR DOGS LOL HAHAHAH
@sonnenhafen5499 4 years ago
@donald trump why should a speaker not be able to output 20kHz+? besides that it isn't particularly flat in that band because it's not designed for that. DAC makes sense, do you know why in detail?
@juniorcastiel952 3 years ago
i guess im asking the wrong place but does someone know a tool to get back into an Instagram account? I stupidly forgot the password. I love any tips you can offer me
@samdustin6733 3 years ago
@Junior Castiel Instablaster :)
@OneBigBug 9 years ago
Obligatory correction: the reason 44.1 kHz was chosen is not because anyone measured the threshold of human hearing at precisely 22.05 kHz; it's because the threshold is "about 20 kHz" and a bit more was needed on the high end as a transition band, which you might think of as room for error in how the electronics are designed. The reason it's 44,100 and not 44,000 or 45,000 (or 48,000) is related to how it was stored on old video recording systems. Today, if we didn't care about keeping to standards developed ages ago, it would really just need to be "40,000 Hz plus a bit more than that".

The "humans can hear 20 kHz" thing is just a general guideline, not a hard and fast rule. Biology is not precise enough to say "22.05 kHz is exactly the right amount"; people vary too much for that.

edit: Also, sampling a signal at twice its highest frequency is called the "Nyquist rate" and will give you alias-free sampling, which is why we use it. It's not arbitrarily "yeah, that seems like enough"; it's a mathematical rule. That's not really super important to know, but it's a good term to Google if you want to know more.
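OneBigBug's correction can be sanity-checked numerically: any tone above half the sampling rate folds back ("aliases") onto a lower in-band frequency, which is why the transition band and filter exist at all. A minimal Python sketch (my own illustration, not from the video or the comment):

```python
def alias_frequency(tone_hz, rate_hz):
    """Apparent frequency of a pure tone after sampling at rate_hz.

    Tones above the Nyquist frequency (rate_hz / 2) fold back into
    the 0..rate_hz/2 band instead of being captured faithfully."""
    folded = tone_hz % rate_hz
    return min(folded, rate_hz - folded)

# 18 kHz is below Nyquist for CD audio, so it survives intact:
print(alias_frequency(18_000, 44_100))   # 18000
# 25 kHz is above Nyquist and masquerades as 19.1 kHz:
print(alias_frequency(25_000, 44_100))   # 19100
```

This is why the anti-aliasing filter must run before the A/D converter: once a 25 kHz component has been sampled, it is indistinguishable from a real 19.1 kHz tone.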
@Timmsstuff 9 years ago
+OneBigBug Exactly, there needs to be room for the antialiasing filter
@xasdrubalex 9 years ago
+OneBigBug +1 for Nyquist (aka Shannon-Nyquist sampling theorem), i was a little disappointed because this great fact was missing from the video
@FireBarrels 9 years ago
+Joe Mills It sounds more like distortion from the speaker trying to play that loudly than a 24Khz tone though.
@dvamateur 9 years ago
+OneBigBug 48kHz was used on DAT tape to prevent straight digital copying from CD which are 44.1kHz. Yes, 48kHz was used to combat piracy.
@anothergol 9 years ago
+Joe Mills most people won't even hear a 17khz tone (which isn't a problem, there's nothing interesting for us above that), and if you think you can, double-check your system. And this whole video, while not saying huge BS, is technically rather vague.
@FASTFASTmusic 9 years ago
The very first statement was wrong. The air doesn't move across a room until it hits your eardrums, that would imply that the air in the room moves at the speed of sound across a room. The air on average stays in the same place, but as it is an elastic medium the information it carries moves across the room at the speed of sound, so not so much like billiard balls passing on energy, but rather billiard balls attached by springs that return to where they were initially.
@FASTFASTmusic 9 years ago
and at 7:20, the square wave myth!!! This has been disproven countless times. The dots on a waveform, although they look like square steps on the screen, are just values of where the real waveform is. When it goes through your DAC, a perfect sine wave will ALWAYS be reproduced, even for a 2-bit 22 kHz tone. Furthermore, a square wave is physically impossible in nature (check out Fourier expansion and adding sine waves to represent square waves).
@monkeyxx 7 years ago
That was the first "gotcha" I spotted in this video as well.
@francescodipalma9785 7 years ago
Glad to see that someone pointed this out. It's incredible how almost everybody got this wrong, even pros.
@GuyWithAnAmazingHat 9 years ago
I'd love to see more videos on digital audio, sound recording and editing.
@schitlipz 9 years ago
[not a reply to your comment, seems I can't post a new comment but only "reply"] Brainy guys: 1) What about the information theory (or whatever it's called) where it takes like 3 samples to reconstruct a sine wave? Two samples of a 20 kHz wave at 44 kHz doesn't sound like enough. I dunno. 2) With regards to depth: why not crowd up the samples at lower volumes, in a logarithmic manner, the way we perceive sound? That way high-frequency, low-volume sounds would have better reproduction. Again, I dunno.
@Wasthere73 9 years ago
+schitlipz For the most part, many people are unable to hear up to 20 kHz, and to be honest it's not a huge loss; my own ears cut off around 18.5 kHz. Unless you're going to design audio software, you won't have to worry about whether 44.1 is enough. For the most part, it is; just do your own hearing tests. If you are super curious, look up the Nyquist theorem. As for bit depth, that one is a lot more complicated, and the guy in the video wasn't entirely right when he explained it. I can't explain it in simplified terms; just look it up on YouTube.
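schitlipz's second question (crowding the samples at low volumes, logarithmically) describes a real technique: companding, which telephone audio uses in the form of the μ-law curve. A rough Python sketch of the standard μ-law encoding formula (parameter values are the conventional ones; this is an illustration, not part of the thread):

```python
import math

def mu_law(x, mu=255):
    """Compand a sample in [-1, 1]: quiet values get a larger share
    of the output range than loud ones (logarithmic spacing)."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

# A quiet sample is expanded before quantisation...
print(round(mu_law(0.01), 3))   # ≈ 0.228
# ...while a loud one is barely changed:
print(round(mu_law(0.9), 3))    # ≈ 0.981
```

Quantising the companded value, then inverting the curve on playback, gives finer effective resolution at low levels, exactly the trade schitlipz is asking about.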
@pandaofdoom7684 8 years ago
Correction (at 7:10): The explanation of the noise ("grain") of very low volume signals is wrong. The computer doesn't "connect the samples with lines". Instead, whenever the computer measures the signal, it has to be rounded to the next associated integer value. This causes the so called quantization error - basically the rounding error. So the signal you hear upon playback is the original one plus the quantization error, which causes distortion (unless dithering is used, but that's another story). No square waves here.
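pandaofdoom7684's point can be demonstrated directly: quantising a sine wave to a few bits yields the original signal plus a bounded rounding error, not a square wave. A small Python check (my own illustration):

```python
import math

BITS = 4
STEP = 2.0 / (2 ** BITS)          # spacing of the 16 quantisation levels

def quantize(x):
    """Round x (in -1..1) to the nearest quantisation level."""
    return round(x / STEP) * STEP

signal = [math.sin(2 * math.pi * n / 1000) for n in range(1000)]
error = [abs(s - quantize(s)) for s in signal]

# Quantisation adds bounded rounding noise (at most half a step);
# playback gives sine + noise, not a square wave.
print(max(error) <= STEP / 2)   # True
```

The worst-case error is half a quantisation step, which is exactly the quantization error the comment describes; with more bits, the step shrinks and the noise floor drops.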
@sumit_2111 1 year ago
Exactly. As an analogy with images: say we have only 2 bits to store each pixel's intensity value, so effectively 4 levels between the highest and lowest intensity. The lower values all get mapped to the lowest level and the medium-to-high values to the upper ones, which creates very bad-looking photos because the fine detail is gone.
@tylerwmbass 1 year ago
Dither is a topic that could get its own video too
@MovingThePicture 9 years ago
3bit are more than enough for the average loudness war song.
@lotrbuilders5041 6 years ago
MovingThePicture true, but 4-bit is much easier to process
@brianpacheco1927 6 years ago
Working in IT and being a huge music lover, this gives me a huge level of appreciation for artists and their studio engineers on how much work goes into putting together an album with lots of tiny microdetails never really ever heard. Definitely makes me want to go out and buy some higher end audio rather than all the streaming MP3's we do nowadays.
@patk2225 6 years ago
Brian Pacheco yeah my sound system is pixel perfect and my sound system has microdetails that no other sound system has
@brianpacheco1927 6 years ago
When I had the HD800s they were great for micro-detail but I realized that it just gets fatiguing after a while and doesn't sound natural.
@patk2225 6 years ago
my tweeters and subwoofer has 21 watts but my mid range woofer has 190 watts why does my mid range woofer need so many watts ?
@malenkytolchock 9 years ago
This is a real shame, Computerphile. I find your videos informative and entertaining, but this one is inaccurate and misleading. I hope your other videos, in areas I'm not knowledgeable in, aren't as incorrect as this is.
@gummansgubbe6225 9 years ago
+malenkytolchock Yeah! I worked with ADCs (not music) 20 years ago and this is just a pain to watch. Ever heard of oversampling? Noise shaping? Digital noise? And the reason you're having trouble with your cymbals? Metallica fan, are you? Google "loudness wars".
@yanwo2359 9 years ago
+malenkytolchock What was incorrect? Considering this was admittedly a simplification.
@DoctorLuk 9 years ago
+malenkytolchock In all honesty it all made sense to me. What's wrong?
@DrMcCoy 9 years ago
+malenkytolchock Yeah. I think Programmer, Producer and Professional Musician David Domminney Fowler needs a refresher on Nyquist-Shannon...
@domminney 9 years ago
It's hard to get this kind of subject into 10 mins so unfortunately this is a simplification.
@TheGregaM 9 years ago
Drawing lines between points? Sorry, but that's just not true. That would produce a lot of aliasing and distortion (like the square waves he mentioned), and that's certainly not what happens: the DA converter draws a smooth curve with a reconstruction filter. The only thing you lose by lowering bit depth is data lost to quantisation, which raises the noise floor. I think you should correct that statement.
@domminney 9 years ago
Apologies, but for the sake of a short introduction video I simplified the whole subject somewhat.
@TCWordz 9 years ago
+David Domminney Fowler Please don't do that, and if you feel it is necessary to oversimplify to such an extent, please specify in the video "this is something of an oversimplification", because this video is downright incorrect in some cases.
@Bencarelle 9 years ago
+David Domminney Fowler, I agree with +Tommy59375 completely. I watch the *phile videos precisely because of the way that masters of their craft are able to explain deeply complicated concepts without distortion or oversimplification. These aren't buzzfeed videos.
@omgimgfut 9 years ago
A more in-depth explanation can be found by searching for "Digital Show and Tell" by Monty Montgomery on YouTube.
@GoldSrc_ 9 years ago
+omgimgfut Oh yes, that video explains it better.
@planetweed 9 years ago
+omgimgfut Was about to comment the exact same thing but you beat me to it :)
@elephantgrass631 7 years ago
For some reason, I can only thumbs up your comment once. I was trying to thumbs up it a billion times.
@francescodipalma9785 7 years ago
It's not just more in depth. It's also the correct explanation.
@Hermiel 5 years ago
Ha, I just posted the same thing.
@FrankJavCee 9 years ago
(My speaker properties after watching this video) 24-BIT 192,000hz
@FASTFASTmusic 9 years ago
+FrankJavCee YouTube changes it back to 16-bit, 126 kbps MP3, so have fun
@LovSven2011 9 years ago
+JamesMulvale Well, they mostly use AAC (Advanced Audio Coding). HD quality (720p+) is usually accompanied by 256 kbps audio and non-HD (480p and lower) by 128 kbps audio. Now, I heard that because AAC is a better compression codec than MP3, and given the quality of YouTube's encoders, you can get the same result with 96 kbps AAC as with 128 kbps MP3. That 96 kbps bitrate is used in the MP4 360p video+audio stream on YouTube. Also, a 320 kbps uploaded audio track is allegedly of the same fidelity once YouTube encodes it at 256 kbps AAC. At this point you get into *psychoacoustic* measurement of the *subjective quality* of sound, which is a whole other discussion. You might want to look up the "Fraunhofer Institute for Integrated Circuits IIS" for more information.
@LovSven2011 9 years ago
+JamesMulvale So, *in short*, if you don't turn on HD, you get about 128kbps MP3 equivalent quality.
@FASTFASTmusic 9 years ago
The point is you will never get 192 kHz / 24-bit audio on YouTube; it's a waste of data. HD maxes out around 192 kbps (which is NOT 192 kHz; bitrate comes from sample rate and bit depth combined). I can't be bothered arguing any more. Bye.
@LovSven2011 9 years ago
I agree that setting soundcard to 192kHz is useless, and I know the difference between bitrate (bps) and sample rate (Hz) or depth (bit). I was trying to add to your information. I didn't speak about sample rate because I don't understand what sample means in widely used lossy compression codecs vs. in WAVE (PCM) codec. Sorry to bother you. But original poster seemed to be ignorant so I replied to you. I'm, however, *not* a sound engineer, but this much I know. (No need to reply.) Bye
@HandyAndyTechTips 9 years ago
Great video. Just thought I'd add to the chorus and comment on an irony in digital audio. Back when CDs were introduced, everyone was trumpeting how amazing their 90+ dB of dynamic range was. Now we're lucky to see discs released by major record labels that use more than 10 dB :-) And in fact, most albums I've bought recently go one step worse and use heaps of heavy digital clipping on all of the drum hits. A bit sad, I suppose...
@JonnyInfinite 9 years ago
+HandyAndy Tech Tips that's a consequence of the Loudness War...
@johnberkley6942 9 years ago
Clipping on the drum tracks has a long history. Back in the day Motown got a great drum sound that way. But analog and digital clipping are different beasts. What sounds great on tape sounds really shitty in the digital domain.
@ClearInstructionsOnly 9 years ago
Instruction Clear. Successfully picked up acoustic waves through my red cat. Thank you.
@pawepalczynski5621 9 years ago
The hearing range goes up to 16-20 kHz. The reason it's 44.1 kHz is that it allows capturing frequencies up to 22.05 kHz. The gap between 20 kHz and 22.05 kHz exists because before the A/D converter there is a filter that cuts off anything above 22.05 kHz to avoid aliasing. That filter starts cutting at 20 kHz and reaches -60 dB at 22.05 kHz, so that nothing audible is lost. If the filter cut off at exactly 20 kHz (or much closer to it) it would introduce a lot of frequency and phase distortion.
@black_platypus 9 years ago
+Paweł Palczyński That is some cool extra info, thanks for sharing!
@VickyBro 9 years ago
+Paweł Palczyński Something like a bandgap?
@FernieCanto 9 years ago
+Paweł Palczyński Well, the actual reason why 44.1 KHz was chosen is because, originally, digital audio for CD production was recorded in Sony U-Matic tapes (yes, the format for analogue video), and the technical specifications of the tape made 44.1KHz the most obvious choice. Having a roll-off filter at the top of the spectrum is useful to avoid distortion, but the exact frequency you choose is not really very important; it's not like a lot of people can hear anything above 18 KHz.
@pawepalczynski5621 9 years ago
+Joe Mills If you use bones (skull, jaw) to carry the sound instead of the eardrum, you can go much higher still. There is an upper cutoff frequency for air-transmitted sound, however, because at some point the sound would need to be painfully loud for you to hear it. The highest frequency you could hear is one that is not yet painful but that you can still hear as loud as a 1 kHz tone at 10^(-12) W m^(-2) intensity.
@marcinsosinski766 9 years ago
+Paweł Palczyński Actually 44.1 has nothing to do with any type of filter; you can always calculate and implement a slightly different one. There is one simple reason 44.1 or 48 kHz was chosen: there were no hard disks big enough at the time to hold an all-digital master. Remember, it's the 1970s. The only possible way to make a master, from which you could press a CD matrix, was to record the digital signal on video tape. This was pre-Beta, so the only viable option was U-Matic (the standard video recorder in TV production of that period). I can't remember the exact resolution in today's digital terms, but it worked out that you could record (in monochrome) 16-bit/48 kHz or 16-bit/44.1 kHz on tape running 30 frames per second (NTSC framerate). 44.1 was probably chosen for CD only because you could fit about 8% extra running time on the disc that way. As for the argument that it is inferior to 48, that doesn't matter: the first CD players from Philips and Sony were equipped with 14-bit DACs, and to this day people value those players for their pleasant sound.
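The video-tape story above has concrete arithmetic behind it: PCM adaptors stored 3 samples per usable video line, and the commonly reported line counts for both TV standards land on exactly 44,100 samples per second. A quick check (figures as commonly reported, worth verifying against primary sources):

```python
# NTSC: 60 fields per second, 245 usable lines per field, 3 samples per line
ntsc_rate = 60 * 245 * 3
# PAL: 50 fields per second, 294 usable lines per field, 3 samples per line
pal_rate = 50 * 294 * 3

print(ntsc_rate)   # 44100
print(pal_rate)    # 44100
```

The same rate falling out of both standards is what made 44.1 kHz a practical choice for equipment that had to work worldwide.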
@MisterSofty 9 years ago
I love the square wave! Stop hating on the square wave! :D
@vuurniacsquarewave5091 9 years ago
+Mister Softy Pulse waves with different duty cycles... they're good for everything!
@PeterWalkerHP16c 9 years ago
+Mister Softy LOL, I'll have one of everything thanks Mr Fourier.
@J2897Tutorials 9 years ago
+Mister Softy Square waves can be dangerous depending on the size of your ship.
@RubixB0y 8 years ago
After "Bitshift Variations in C minor" I have a special place in my heart for sawtooth waves.
@Scarabola 4 years ago
8 bit video games have some bomb music.
@dude157 9 years ago
Hi Computerphile, great video. I'll just add a minor correction for you. Humans can hear up to 20kHz , not 22kHz. In fact, by the time people reach adulthood, the top end of hearing is closer to 16kHz. The reason a sampling frequency of 44.1kHz was chosen as a standard was not because it is twice that of 22050Hz. It's to do with a problem called aliasing. Any frequency content contained within a signal, which is above half of the sampling frequency will introduce low frequency alias signals (See Nyquist Theorem). This is the exact same reason helicopter blades appear to spin backwards or slowly in video. For audio we would like to capture frequency information up to 20kHz, thus determining the sample frequency to be 40kHz. The only problem is, any high frequency information above 20kHz will ruin the audio due to aliasing. So in addition we add a low pass filter, called an anti-aliasing filter. Filters can't have a really steep cut off without causing all sorts of distortion, so we need to leave a little bit of space in the frequency spectrum to fit one in. Hence we oversample at 44.1kHz to allow for that.
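The helicopter-blade analogy in the comment above can be made exact: a cosine at f and one at fs − f produce identical sample values, so once sampled they cannot be told apart, which is precisely why the anti-aliasing filter has to run before the A/D stage. A quick Python check (my own illustration):

```python
import math

fs = 44_100                    # sampling rate, Hz
f = 5_000                      # in-band tone
g = fs - f                     # 39.1 kHz: above Nyquist, aliases onto f

a = [math.cos(2 * math.pi * f * n / fs) for n in range(200)]
b = [math.cos(2 * math.pi * g * n / fs) for n in range(200)]

# Sample by sample, the two tones are indistinguishable:
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))   # True
```

The identity cos(2π(fs − f)n/fs) = cos(2πfn/fs) holds for every integer sample index n, so no amount of post-processing can separate the two after sampling.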
@EgoShredder 9 years ago
+Sam Smith I would add that 16KHz is IF you have looked after your hearing, e.g. wearing ear plugs at concerts and not blowing your ear drums out with awful dance music. I've done a few sample audio tests and found I can hear to around 17KHz and I am 44 years old. Certain sounds at specific frequencies cause me a significant amount of physical pain, however this had no effect on my mum who could not hear anything. So I wonder if the rate of decline in hearing is steady or suddenly drops off the cliff at a certain age?
@LinucNerd 9 months ago
Eight years later, and I see this wonderful comment... Even tho I still can't quite make sense of aliasing.
@__dm__ 9 years ago
egregiously wrong in pretty much every topic he described. sampling frequency explanation was botched, quantization noise explanation was spoken like an audio snob rather than an audio engineer (the computer doesn't "draw lines" between samples, and the effect of quantization isn't like what you described--it mostly sets the dynamic range)
@thefauvel7558 8 years ago
Could you explain to me in detail. I'm seriously intrigued by this. Every video I see about 'Digital Audio' seems to think that the wave from a digital source is square, when it's clearly not.
@LiViro1 7 years ago
I decided to learn something about digital audio, and ended up here (among other places). Great explanation, you should be a teacher the way you communicate.
@robin888official 9 years ago
I like the fact that 44100 = 2^2 * 3^2 * 5^2 * 7^2 (First four primes squared.) I can't imagine that's a coincidence.
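The factorization robin888official notes checks out; whether it was a design goal is another matter (the video-tape arithmetic discussed elsewhere in this thread accounts for the number), but the arithmetic itself is easy to verify:

```python
from math import prod

# 44,100 = 2^2 * 3^2 * 5^2 * 7^2 = (2*3*5*7)^2 = 210^2
print(prod(p * p for p in (2, 3, 5, 7)))   # 44100
print(210 ** 2)                            # 44100
```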
@whiteeyedshadow8423 5 years ago
you're being a math nerd (and I like it)
@djsunil6333 4 years ago
Every number is some product of primes
@deus_ex_machina_ 4 years ago
Illuminati confirmed?
@recelkauz1813 9 years ago
I like the topic *digital audio*, but this guy is not the best person to explain it (he has some knowledge gaps himself).
@yanwo2359 9 years ago
Excellent succinct explanation! Ironic that the audio had a hum throughout.
@MacoveiVlad 9 years ago
Yes, it is some kind of fan. But if the video editor had heard it he would have removed it, which is why I suppose he did not hear it.
@seanski44 9 years ago
Dave's PC was by his knee, the fan was running at different speeds throughout the video so not easy to remove.
@MacoveiVlad 9 years ago
So it was Murphy's law :)
@Matiburon04 9 years ago
+Yan Wo It was on purpose, obviously
@EgoShredder 9 years ago
+Yan Wo They had a recording session of meditating monks in doing a new album. ;)
@johnberkley6942 9 years ago
Love it! More! For instance a video about the geeky side of compression. It would be nice to understand what I'm doing when I twiddle them knobs...
@darrylwatkins2335 4 years ago
Best explanation of how sound waves are converted to digital. Thank you much!
@mrnarason 9 years ago
makes me want to be a sound engineer
@DirtyRobot 9 years ago
+Victor P. Do it, I did.
@morgogs 9 years ago
+Dirty Robot What does that involve?
@KutAnimus 9 years ago
+Dirty Robot I wouldn't do that, considering the sad economic state the audio engineering industry is in right now. Lots of really good sound engineers go for months on end without gigs these days.
@thelol1759 9 years ago
+morgogs Math.
@DirtyRobot 8 years ago
+morgogs Depends if you want to go study or you want to dive in. When I made my choice there were no courses you could do but I had a fair bit of experience so I contacted all the recording studios in my area and took a low paid position then worked my way up.
@colinsmith6480 9 years ago
Really nice explanation, thank you so much, been looking for a decent explanation for ages
@kilésengati 9 years ago
Hmm, I want more videos like this. Music is so interesting where it intersects with electronics and informatics.
@vuurniacsquarewave5091 9 years ago
I'd love to see a video about different audio formats, not in the .wav or .mp3 file type sense, but rather the encoding methods, like PCM, ADPCM, DPCM, PWM, etc.
@Le-Samourai 9 years ago
Argggh this video made me cringe so many times. He confuses "quality" and "bandwidth" and the purpose of higher sampling frequencies and nyquist frequency and bandwidth. Ahhhhhhh
@geonerd 9 years ago
+samurai1200 Agree. I appreciate the effort, but this fellow gets a fair bit flat-out wrong.
@Mr1Samurai1 9 years ago
+samurai1200 What? He explains that you need a sample frequency that is double the rate of the highest frequency you want to record. Just because he didn't label it as the Nyquist frequency (which to most of the audience is a meaningless label) doesn't mean he didn't explain the concept. Also, calling it bandwidth over quality would just confuse the audience unless he was willing to go into a long explanation of what bandwidth is which deserves its own video. His explanation was perfectly fine.
@jcdenton1111 6 years ago
It's astonishing that Computerphile gives its name to such garbage. Computerfool would be a more fitting channel for this video...
@vyli1 9 years ago
excellent video... I hope there's more coming from this guy; Amazing, thanks for the upload.
@wesleyjin8071 3 years ago
This is great info for a producer trying to approach it from an engineering side.
@tsjoencinema 9 years ago
Awesome video. More like this, please.
@victortrejo5725 8 years ago
Completely skipped over the Nyquist theorem!
@h7opolo 3 years ago
1:46 with this information, i was able to deduce the solution to my staticky PC audio; I had the sample rate for audio output set to max setting of "24 bit, 192,000 Hz (Studio quality)," and once I lowered the sample rate to "24 bits, 96000 Hz (Studio quality)", all the static noise magically vanished! YAY, thank you, this had been harming me for years.
@vonantero9458 9 years ago
Make a video about sound, have annoying buzz on the background. Great video though, thank you! :)
@FirefoxisredExplorerisblueGoog
+Von Antero Just a CPU fan.
@magnum3.14 9 years ago
+Von Antero At first I thought it was an easter egg in the video.
@vonantero9458 9 years ago
+danielcw Yeah, I thought it could be a hidden message, but it sounds too "boring" to have anything in it. Not that I know much about that sort of things
@teekanne15 9 years ago
maybe one on synths? Different waveforms, envelopes, filters, overdrive and such?
@johannes914 9 years ago
Please explain when dynamic compression comes in the process ...
@jacobh1995 9 years ago
+johannes914 It comes into the process, way overused, once a record company is involved.
@jacobh1995 9 years ago
rhoyt15 Yeah, I'm stereotyping. But 9 times outta 10, the record companies don't use it correctly.
@DirtyRobot 9 years ago
+johannes914 Dynamic compression comes into the process after recording. It is technically a post-production tool but can be used in other places, like live performance. You would choose to use it when a sound source is very dynamic, meaning the volume levels change a lot and the rate at which they change is not predictable. Think of it basically as an AI fader that analyses the input and quickly decides how much volume braking or boosting is required.
@FASTFASTmusic 9 years ago
+johannes914 In simple terms, compression makes the loudest parts of the signal quieter, which then means you can turn the whole thing back up without distortion.
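That two-step description (squash the peaks, then turn everything back up) maps onto a toy feed-forward compressor. A hypothetical Python sketch; the function name, threshold, ratio and makeup-gain values are all invented for illustration:

```python
def compress(sample, threshold=0.5, ratio=4.0, makeup=1.5):
    """Attenuate the part of |sample| above threshold by `ratio`,
    then apply makeup gain. Samples are floats in -1..1."""
    sign = 1.0 if sample >= 0 else -1.0
    level = abs(sample)
    if level > threshold:
        # only the overshoot above the threshold is reduced
        level = threshold + (level - threshold) / ratio
    return sign * level * makeup

# Loud peaks are squashed, quiet material is simply lifted:
print(compress(1.0))             # 0.9375 (peak reduced, then boosted)
print(round(compress(0.2), 3))   # 0.3    (below threshold: makeup gain only)
```

A real compressor also smooths the gain change over time (attack and release); this instantaneous version only shows the level-mapping idea.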
@DirtyRobot 9 years ago
***** If someone gives too much mic, enough to damage the recording or performance then you just lost your job.
@oisiaa 9 years ago
Fascinating and well explained.
@YingwuUsagiri 9 years ago
Awesome to know how the technical part of my audio work works. Working with orchestral pieces, with a lot of lows, a lot of highs and those lingering cymbal notes, I did work at the higher settings like 24-bit. Good to know why exactly I have to do so on a technical level.
@teharbitur7377 9 years ago
How come you didn't mention the Nyquist-Shannon sampling theorem? I know you are trying to make it less 'technical' but sometimes it's nice to mention some of the more technical things.
@SkitchAle 8 years ago
+Teh Arbitur I was wondering the same thing
@dlarge6502 5 years ago
44.1 kHz was chosen because it captured all desired frequencies and also happened to be the perfect rate for storing the samples digitally on both PAL and NTSC videotape, which was used as the storage method for transporting the audio between locations. Later, when we no longer needed to store it on videotape, we switched to 48 kHz, which makes filtering much simpler. Also, you don't get square waves; that's impossible with a band-limited signal. You can only get the original signal, smooth and non-square. 16 bits determines where the noise floor is, and for playback 16 bits is more than enough to capture the dying cymbal. You use more bits, like 32-bit float, when editing, before finally exporting to 16 bits. Editing with 24 bits or 32-bit floats helps because it gives you so much headroom to apply filters etc. without adding noise that will be noticeable.
@heavymaskinen 8 years ago
Only partially correct about bit depth: it really only determines the noise floor (noise made from quantisation). Very important for recording; for consumer playback, not so much. The human hearing threshold is generally regarded as 20 kHz, but sensitivity drops well before that, and both the sensitivity and the threshold lower with age.
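The noise-floor point has a convenient rule of thumb: each bit adds about 6.02 dB of dynamic range (20·log10(2) per bit), so 16-bit gives roughly 96 dB between full scale and one quantisation step. A quick Python check (my own illustration):

```python
import math

def dynamic_range_db(bits):
    """Ratio between full scale and one quantisation step, in dB."""
    return 20 * math.log10(2 ** bits)

for bits in (8, 16, 24):
    # 8 -> ~48.2 dB, 16 -> ~96.3 dB, 24 -> ~144.5 dB
    print(bits, round(dynamic_range_db(bits), 1))
```

This is why 16-bit is ample for playback (96 dB exceeds the usable range of any listening room) while extra bits mainly buy recording and editing headroom.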
@paulanderson79 8 years ago
Yes. There's a lot of confusion about bit depth and resolution. For consumer playback there is no benefit as you say.
@dlarge6502 5 years ago
I can hear up to about 15 kHz at 38. And a 15 kHz tone really isn't very interesting!
@nigelnigel9773 9 years ago
Whilst I agree this is a simplification, this is pretty much what A Level music technology teaches and it's explained well.
@GoldSrc_ 9 years ago
Yeah, I'll have to call BS on the part about the sound card outputting a square wave if the bit depth is too low. All the bit depth affects is the noise floor.
@RWoody1995 8 years ago
+Gordon Freeman He wasn't wrong; he was talking about an audio-editing situation. If you add two 24-bit tracks together without halving the volume of each beforehand, you will get clipping: you will likely have parts of the audio at, say, 40,000/65,536 in each; add those together and you get 80,000, which is out of range, so anything above 65,536 gets cut off, and if this is bad enough it approaches a square wave. The reason you wouldn't just halve each track beforehand, even though that might seem simpler, is that you would throw away some of the data. It's better to add them together at a higher bit depth and then convert back down to 24-bit afterwards, or to 16-bit for the final output. In terms of the final output file you are right, but that's not what he was explaining.
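The summing scenario described above is easy to sketch with 16-bit integer limits (matching the 65,536-level figures in the comment; the `mix` helper is hypothetical, invented for illustration):

```python
I16_MIN, I16_MAX = -32768, 32767   # range of a signed 16-bit sample

def mix(a, b):
    """Sum two 16-bit samples with hard clipping at the format limits."""
    return max(I16_MIN, min(I16_MAX, a + b))

print(mix(10_000, -4_000))   # 6000: the sum fits, mixed cleanly
print(mix(30_000, 30_000))   # 32767: the true peak (60000) is clipped flat
```

Repeated over a whole waveform, that flattening of the peaks is the "square wave" sound of digital clipping; summing in a wider format (e.g. 32-bit float) and scaling down afterwards avoids it.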
@GoldSrc_ 8 years ago
megaspeed2v2 Fair enough, but your average joe will never get into contact with 24bit audio.
@RWoody1995 8 years ago
Gordon Freeman this isnt about your average joe though, this applies to the audio engineers producing the audio in the first place before it gets compressed down to 16bit and put on a CD DVD or game
@RWoody1995 8 years ago
Gordon Freeman Yes, it's for the average Joe: he was just explaining that anything higher than 16-bit is only needed for audio that is going to be edited. Not really rocket science.
@GoldSrc_ 8 years ago
megaspeed2v2 Well, he said that the computer draws lines to connect the samples; that is not what happens, and any audio engineer would know that's not how you would explain it to the average Joe. He also could have simplified it by saying that bit depth affects the quietest sounds, which don't end up sounding like square waves. He got quite a few things wrong; he could have done better.
@tylerwatt12 9 years ago
Now I know why higher sample rates improve treble clarity: in a waveform, high frequencies spike up and down very quickly, and a slow sample rate misses those spikes. Great video!
@oresteszoupanos 9 years ago
+Tyler Watthanaphand Yup, that's part of the sampling theorem. There's a rule that says "if you want to capture perfect audio up to X kHz, then you need to sample at a minimum of 2X kHz". Search for Nyquist frequency for more info... :-)
@Aikanarooo
@Aikanarooo 9 years ago
+Orestes Zoupanos Better results are achieved when you double the sampling frequency (according to the maximum frequency of the signal) and add a few more sampling cycles. Hence the 44.1 kHz, which is 22 kHz * 2 + 100 Hz. The extra margin helps keep aliasing artifacts out of the signal.
@WingmanSR
@WingmanSR 9 years ago
+Hugo Neves You're right, but he did basically say that. That's what he meant by _sample *at a minimum* of 2X kHz_. Granted, that doesn't really emphasise the benefit of the fudge factor. I guess it depends on how you think of it: it's either _Nyquist >= 2X_ or _2X + 100 = Nyquist_.
@RyanRenteria
@RyanRenteria 9 years ago
+Hugo Neves They picked 44.1 because it was compatible with both PAL and NTSC video equipment. Early digital audio was stored on video cassettes. They needed a minimum of 40 kHz, plus extra room for the anti-aliasing filter, and it had to fit evenly into both the PAL and NTSC standards.
@GoldSrc_
@GoldSrc_ 9 years ago
+Tyler Watthanaphand Sample rates above 44.1 kHz do not improve audio quality at all. It doesn't matter if you go from 20 Hz to 20 kHz in an instant; at 44.1 it will all be perfect.
@fuppetti
@fuppetti 9 years ago
Heh, thought it was Alan Davies in the thumbnail for a second.
@domminney
@domminney 9 years ago
Ha ha :)
@black_platypus
@black_platypus 9 years ago
+Deltaexio Heh, you're right! :D
@bmurph24
@bmurph24 9 years ago
+David Domminney Fowler Thanks for making this! Super cool video; it deals with two of my favorite things, music and computing!
@TheBluMeeny
@TheBluMeeny 9 years ago
Ha! I remember asking for a video on this topic ages ago! It's awesome that we finally got it!
@maxgrrr2244
@maxgrrr2244 9 years ago
Rather inaccurate video imho. 1. It's not 44.1 kHz because the limit of human hearing is 22.05 kHz. It has to do with the upper limit of human hearing and the choice of anti-aliasing filter (transition band width). The assumed human hearing range is roughly 20 Hz to 20 kHz. The Nyquist sampling theorem tells us that the sampling rate should be at least twice the maximum frequency of the signal (so 40 kHz). The problem now is that our original signal contains frequencies above 20 kHz; if we try to sample at 40 kHz, aliasing will occur (frequencies above 20 kHz fold into the hearing range). The signal must be low-pass filtered (anti-aliasing filter). We can't perfectly cut frequencies right at 20 kHz; in practice a transition band is necessary. For practical and economic reasons, a 2.05 kHz transition band was chosen. Now our signal contains frequencies from 20 Hz to 20 + 2.05 = 22.05 kHz. Back to Nyquist: we need to sample at 44.1 kHz. 2. You DON'T end up with a square wave. You could use 4 bits per sample and you still would not get a square wave. You'd get a ton of quantization error (rounding error) and the signal would be drowned in noise (low SNR). Moreover, the computer doesn't draw lines between points. It finds a continuous signal from the sequence of samples, and there is a unique solution.
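Point 2 above is easy to demonstrate with a toy sketch in pure Python (the 4-bit depth and 64-sample cycle are illustrative): quantizing a sine coarsely doesn't square it off, it adds bounded rounding error that behaves like noise.

```python
import math

def quantize(x, bits):
    """Round a sample in [-1.0, 1.0] to the nearest of 2**(bits-1) signed steps."""
    levels = 2 ** (bits - 1)  # e.g. 8 steps per polarity at 4 bits
    return round(x * (levels - 1)) / (levels - 1)

# One cycle of a sine, 64 samples, quantized to 4 bits.
n = 64
clean = [math.sin(2 * math.pi * i / n) for i in range(n)]
q4 = [quantize(s, 4) for s in clean]

# The error is bounded by half a quantization step -- it is noise,
# and the waveform stays recognisably a sine, not a square wave.
max_err = max(abs(c - q) for c, q in zip(clean, q4))
print(max_err <= 0.5 / 7 + 1e-12)  # True: error within half a step
```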
@crappymeal
@crappymeal 9 years ago
Nice bit of info for when, if ever, I use my microphone.
@realraven2000
@realraven2000 9 years ago
You also get "squares" at the top end when you compress too hard, as the peaks of the waves can be cut off. Any sharp edges are heard as a form of distortion.
@woosix7735
@woosix7735 3 years ago
Quiet signals don't sound grainy because they are square waves; it's because of quantization noise.
@PaulIstoan
@PaulIstoan 9 years ago
Great show!
@OnixRose
@OnixRose 9 years ago
My DAW has settings up to 192,000 Hz. Are there any benefits or downsides to using a sample rate this high? Considering the "industry standard" is significantly lower, what applications make use of this sample rate?
@RyanRenteria
@RyanRenteria 9 years ago
+OnixRose Almost none. 192 will give you a huge file size and slow your sessions down. On top of that, there is scientific evidence that ultra-high sample rates can actually sound worse (intermodulation distortion). Plugins and DAWs like higher sample rates because it gives them more information to process; as a result, most plugins upsample to 88.2 or 96k when your session is running at 44.1/48k. I wouldn't bother using higher than 96k. 96 would be a good sample rate to run at if you think you're going to be doing a lot of time stretching or similar processing. Other than that, you can't really go wrong with 48k.
@NoahBarr85
@NoahBarr85 9 years ago
Square waves don't sound nice? Must've hated listening to NES music.
@hgdggdffc6333
@hgdggdffc6333 2 years ago
Computer engineering or electrical engineering? Which is better for audio?
@mckennacisler01
@mckennacisler01 9 years ago
What is the reason microphones have an upper frequency limit? Is it just because they can't oscillate at a rate above a certain frequency?
@domminney
@domminney 9 years ago
That's a whole other subject that isn't really suited to a computerphile video, maybe I'll do one myself!
@monkeyking822
@monkeyking822 9 years ago
+Mckenna Cisler Yeah, or they can, but not well. There's distortion.
@antivanti
@antivanti 9 years ago
Will this be a series of videos? Will you go into audio compression and things like that like you did with pictures and JPEG compression?
@DJBremen
@DJBremen 9 years ago
Very helpful.
@snickers10m
@snickers10m 9 years ago
I'd love to hear more from this guy!
@MichaelW.1980
@MichaelW.1980 1 year ago
The human ear talked about in this video is the ear of a teenager of 17 years or younger, whose hearing hasn't been damaged or weakened yet. In your 20s you might be down to 17-18 kHz, in your 30s down to 16 kHz. These are averages; the exact numbers vary individually, of course, with the point being that only a small percentage of humans can hear 20 kHz.
@dipi71
@dipi71 9 years ago
Nice illustrations, but this was just the tip of the tip of the iceberg. It would be nice for further episodes about the topic of digital audio to mention Shannon, the logarithmic nature of the decibel scale, real-life annoyances like the noise floor and other concepts, e.g. bit rate, data compression (in FLAC, for example) vs data reduction (i.e. data loss, and that in more than one respect, like in MP3). It's a very interesting field of topics.
@satisfiction
@satisfiction 9 years ago
Thank you for a great vid... but you guys need to do a follow-up about the Shannon-Nyquist theorem. It wasn't just "some guys picked it" xD.
@NemGames
@NemGames 9 years ago
I actually followed this whole thing. Great explanation!
@Sandromatic
@Sandromatic 9 years ago
Could you do sound with a logarithmic scale, such that the smaller waves get more detail than the larger ones?
@deluksic
@deluksic 9 years ago
Sandra Nicole Yeah, more like a float rather than integer :)
@WarrenGarabrandt
@WarrenGarabrandt 9 years ago
+Sandra Nicole Interestingly, this is sort of what happens with audio compression techniques. Many audio codecs chop up the continuous sound signal into short sample frames (a few milliseconds long each) and convert the audio signal into discrete frequencies (with a Fourier transform of some kind). Then a psychoacoustic model is applied to the frequency spectrum to eliminate details our ears care less about, and to emphasise those that remain. This is sometimes done with a mel-frequency analysis, though every codec has its own method. Lastly, the remaining frequencies are packed into a compressed data stream, such as a Huffman tree, or LZW, or whatever the codec calls for. Decoding and reconstituting the original sound perfectly is of course impossible at this point, so these kinds of codecs are called "lossy", since they throw out a lot of information to fit the most important parts of the sound into the least amount of data possible.
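The frame-transform-discard pipeline described above can be caricatured in a few lines of Python. This is a toy, not any real codec: the "psychoacoustic model" here is simply "keep the strongest bins", and the frame size and signal are made up for illustration.

```python
import cmath
import math

def dft(frame):
    """Naive discrete Fourier transform (O(n^2), fine for a toy frame)."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def toy_lossy_encode(frame, keep):
    """Keep only the `keep` strongest frequency bins -- a cartoon of the
    'throw away what the ear misses' step in a lossy codec."""
    spectrum = dft(frame)
    ranked = sorted(range(len(spectrum)), key=lambda k: -abs(spectrum[k]))
    kept = sorted(ranked[:keep])
    return [(k, spectrum[k]) for k in kept]

# A pure tone at 3 cycles per 32-sample frame...
frame = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]
print(len(toy_lossy_encode(frame, 2)))  # 2 -- only the bin-3/bin-29 pair survives
```

A real codec adds a perceptual model, windowing, and entropy coding on top, but the shape of the pipeline is the same.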
@anothergol
@anothergol 9 years ago
+Sandra Nicole As written, it does exist, but for 8-bit audio. For 16-bit it's simply not needed. And in fact, even 14-bit is already enough for our perception, unless you like to crank up the volume a lot on quiet parts of a song.
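The 8-bit logarithmic encoding alluded to above is companding, classically μ-law (used in telephony). A minimal sketch of the encode curve, assuming samples normalized to [-1.0, 1.0]:

```python
import math

def mu_law_encode(x, mu=255):
    """mu-law companding: a logarithmic mapping that spends more of a small
    code space on quiet samples. x is a sample in [-1.0, 1.0]."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

# Quiet samples get far more resolution than loud ones:
print(round(mu_law_encode(0.01), 3))  # ~0.228 -- a 1% signal uses ~23% of the range
print(round(mu_law_encode(0.5), 3))   # ~0.876
```

After this mapping the value is quantized to 8 bits; the decoder applies the inverse curve. Floating-point sample formats achieve a similar "more detail near zero" effect, as the reply above notes.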
@otakuribo
@otakuribo 9 years ago
I'm loving the audio stuff, between this and Sixty Symbols, I've learned a lot.
@edsonbrusque
@edsonbrusque 6 years ago
When you get an engineer to explain things to users, you get a video that few would understand. When you get a user to talk about an engineering topic, you get a video that people will understand, but with lots of errors and misconceptions. Tough decision...
@random30303
@random30303 9 years ago
Sorry, Computerphile and David Domminney, this video is over-simplified and has a lot of errors and misconceptions throughout. If one is going to use analogies and simplifications to help make a short explanation, then at least point those out at the same time, please! - musician, producer, engineer and Computerphile fan!
@NavJack27gaming
@NavJack27gaming 7 years ago
With fully electronic music, especially noise music, record at the highest rate your software lets you use, 192 kHz 32-bit, and then mix down to 96 kHz 24-bit. Not enough music is done like this.
@reuben8856
@reuben8856 9 years ago
The dented speaker is annoying me.
@BOBOUDA
@BOBOUDA 9 years ago
So sample frequency is the "fps" of sound?
@heheheheheeho
@heheheheheeho 9 years ago
+BOBOUDA More like Vsync on a monitor. At least when converting from analog to digital =)
@bennyuoppd33
@bennyuoppd33 9 years ago
+Patrick The Buried Then what happens when your cpu stalls for a bit?
@stoppi89
@stoppi89 9 years ago
+BOBOUDA Pretty much, yes (analog/live audio has infinite sample frequency, just like real life). Bit depth is like the contrast of a monitor.
@3snoW_
@3snoW_ 9 years ago
+Benny Kolesnikov That usually doesn't happen, because the CPU speed is WAY higher than the sampling frequency, and between 2 samples the CPU has more than enough time to do whatever it needs. However, I've seen it happen; it sounded like slow-mo with stuttering, it was really weird. And just before my PC crashed, lol.
@user-kl4oh2co2y
@user-kl4oh2co2y 7 years ago
I followed a course on signal analysis, and this video is not that great; incomplete information.
@SproutyPottedPlant
@SproutyPottedPlant 9 years ago
That guy is amazing!!!!!
@RealCadde
@RealCadde 9 years ago
I think this could easily be compared to and illustrated through digital imaging and sequencing. An 8-bit image will look grainy and lack colour space, and an animation running at 10 frames per second will look choppy. For comparison's sake, then, 16-bit and 24-bit audio can be likened to 16-bit and 24-bit images respectively. One can see some hard edges in colour tone on 16-bit images with colour gradients (which can be likened to the wave's peaks and troughs), and one can perceive choppiness at 24 frames per second vs 30 frames per second. The latter is the reason SOME audiophiles use 96 kHz, because just like with their audio, I can tell the difference between 30 FPS and 60 FPS, and even 60 FPS vs 90 or 120 FPS. It's the reason monitors come with syncing technology like G-Sync and FreeSync these days that support 144 Hz and even more. Some people are just more sensitive to different stimuli, be it a master chef, a competitive FPS gamer, a musical prodigy or a tightrope walker with a very keen sense of balance. Some of us get seasickness or motion sickness while others have no idea what the other person is experiencing. Obviously for me, visuals are my frame of reference, hence why I draw the comparison between audio fidelity and visual quality.
@RealCadde
@RealCadde 8 years ago
Çerastes And to you, apparently, there's no difference between a MIDI file and a FLAC file... Go home, you're drunk.
@RealCadde
@RealCadde 8 years ago
Çerastes Why so ignorant?
@SuperHugoTron
@SuperHugoTron 6 years ago
aaaaamazing video thank you thank you thank you!!!
@supersonicdickhead374
@supersonicdickhead374 8 years ago
The air doesn't move; it's a pressure wave moving through the air.
@4pThorpy
@4pThorpy 9 years ago
The sound in the background is really really distracting
@adamhlj
@adamhlj 9 years ago
This is exactly what I have been wondering about lately. I just got a Zoom H5 and was confused about all the recording settings! This explained it.
@MrSparker95
@MrSparker95 9 years ago
If you sample a signal with a frequency higher than half the sampling frequency, the signal will not be 'cut off'; it will turn up as a signal with another frequency.
@black_platypus
@black_platypus 9 years ago
+Sparker Yes, nobody said otherwise... they were only talking about cutting off when adding waves past the bit depth, weren't they?
@MrSparker95
@MrSparker95 9 years ago
Benjamin Philipp No, I'm talking about the 'cutting off' at 2:24.
@black_platypus
@black_platypus 9 years ago
+Sparker Oh, pardon, I "overlooked" that (a shame you can't use the word "overheard" like that in the English language). Yeah, I guess that can only ever be taken as an abstract concept where you "cut off" at an information threshold :/
@kathalave6678
@kathalave6678 9 years ago
This helps me understand because I have a digital audio subject; I am an MT (multimedia technology) student.
@GoldSrc_
@GoldSrc_ 9 years ago
+Kath Alave Go watch "D/A and A/D | Digital Show and Tell" (Monty Montgomery @ xiph.org) here on YouTube; it gives you a better explanation than this video.
@illuminator4633
@illuminator4633 8 years ago
Good audio should always be sampled at Planck time.
@illuminator4633
@illuminator4633 7 years ago
halfasemitone Nonsense.
@elephantgrass631
@elephantgrass631 7 years ago
Ok fine. Quadruple Planck time.
@illuminator4633
@illuminator4633 7 years ago
halfasemitone Never!
@BEP0
@BEP0 9 years ago
Nice.
@unvergebeneid
@unvergebeneid 9 years ago
I know he was simplifying, but to be sure: if you sample frequencies higher than half the sample rate, you're not simply "cutting them off". It's actually much worse: you're introducing "phantom frequencies" that weren't in the signal but turn up in your digital signal. Those are mirrored at the highest frequency you can represent, so the further you go into "you didn't filter properly" territory, the lower the aliased frequency gets (until it can't any more, then it gets higher again), thus being more off and usually more noticeable, too.
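This mirroring can be demonstrated numerically (the 1 kHz sample rate and 900 Hz tone are illustrative): sampled below its Nyquist rate, a high tone produces exactly the same samples as a lower, folded-down tone.

```python
import math

fs = 1000           # sample rate in Hz, chosen for illustration
f_in = 900          # input tone ABOVE Nyquist (fs/2 = 500 Hz)
alias = fs - f_in   # it folds down to 100 Hz

t = [i / fs for i in range(16)]
high = [math.cos(2 * math.pi * f_in * ti) for ti in t]
low = [math.cos(2 * math.pi * alias * ti) for ti in t]

# Sampled at 1 kHz, a 900 Hz cosine is indistinguishable from a 100 Hz one.
print(all(abs(h - l) < 1e-9 for h, l in zip(high, low)))  # True
```

This is why the anti-aliasing filter has to remove those frequencies before sampling; once the samples are taken, the phantom tone is baked in.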
@SKRUBL0RD
@SKRUBL0RD 9 years ago
It would be nice to parallel this with analog recordings, such as tape and vinyl. Vinyl, especially, is very interesting because the headroom is 'technically' unlimited and really only limited by volume. The higher the volume you cut onto the vinyl, the wider the grooves become, and you end up with less time available on the record, which is why vinyl is always cut at a much lower volume than today's digital music; the idea is that at the end of your vinyl player you have a sound system with a volume control to amplify it. I want to see the digital audio industry cut out the nonsense 'loudness war' and instead educate everyone on using an amplifier to turn the volume up. There are even headphone amplifiers; there should be no excuse for the quality of music to suffer.
@rafagd
@rafagd 9 years ago
10:00 There is another reason for software to use 32 bits. Most computers can only address memory in units of 8, 16, 32 and 64 bits, so 24 bits is awkward to use. So the software may use the larger 32 bits because it is easier and faster, and just convert to 24-bit when you ask it to save/export the file.
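The awkwardness is visible if you try to handle 24-bit samples directly: there is no native 3-byte integer type, so code typically widens each sample to a normal integer. A sketch for packed little-endian signed 24-bit data (byte values here are illustrative):

```python
def unpack_s24le(data):
    """Widen packed little-endian signed 24-bit samples to plain ints,
    the way a DAW might promote them into a 32-bit working format."""
    out = []
    for i in range(0, len(data), 3):
        v = data[i] | (data[i + 1] << 8) | (data[i + 2] << 16)
        if v & 0x800000:        # sign bit of the 24-bit value set?
            v -= 1 << 24        # sign-extend into a negative int
        out.append(v)
    return out

print(unpack_s24le(bytes([0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x01])))
# [-1, 65536]
```

Every sample costs a shift-and-or plus a sign-extension test, which is exactly why software prefers to work in 32 bits internally and only pack down to 24 on export.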
@anzer789
@anzer789 6 years ago
I saw on another channel that the 44.1 kHz sampling rate came from taking the human limit of hearing as 20 kHz and adding an extra 2.05 kHz, because low-pass filters weren't accurate enough to cut off exactly above 20 kHz, so a bit of leeway was added. Double that and you end up with 44.1 kHz.
@esekion1
@esekion1 5 years ago
And it's true!
@dlarge6502
@dlarge6502 5 years ago
It was also the perfect rate for the data storage methods used at the time.
@PhattyMo
@PhattyMo 8 years ago
The sampling rate is (at least) 2x the highest frequency you plan to sample... because, aliasing, Nyquist, something, something.
@MrTeushi
@MrTeushi 9 years ago
The video's thumbnail made me think: "Wow, Alan Davies is on Computerphile! Great!" I probably watch way too much QI.
@nsipid
@nsipid 6 years ago
Thank you for the information
@realraven2000
@realraven2000 9 years ago
Hey, you've got the same speakers as me. Gotta love the Alesis!
@daveandlouise123
@daveandlouise123 9 years ago
So that's why you tidied up, Dave.
@gothxx
@gothxx 9 years ago
Interesting. It would be nice to have a video on how the data is stored/compressed in files.
@ljevans572
@ljevans572 8 years ago
Just not what I was looking for; I wanted a way to listen to the text (have it read) for the book.
@kartoffelbrei8090
@kartoffelbrei8090 1 year ago
Wow the discrepancy between the title and what is actually discussed is immense
@BeastOfTraal
@BeastOfTraal 9 years ago
Now can you explain mp3 compression?
@danwalker77
@danwalker77 9 years ago
Very nice video, guys!
@WhyFi59
@WhyFi59 9 years ago
Wow! This is just what I was waiting for! Thank you!
@MuztabaHasanat
@MuztabaHasanat 9 years ago
If the wave does not fit in the bit depth, why don't we just increase the bit depth to something like 32-bit or 48-bit? Another question: why is the bit depth 24-bit rather than 16-bit or 32-bit?
@fuckwadify
@fuckwadify 9 years ago
+Muztaba Hasanat The reason you can't keep increasing the bit depth is that the file size gets bigger; remember, one CD is 700 MB, and that's only 16-bit.
@MuztabaHasanat
@MuztabaHasanat 9 years ago
Thanks to all :)
@Aikanarooo
@Aikanarooo 9 years ago
+Alex Lee Most humans can't detect the difference between an analog audio signal and a digital audio signal with 16 bits per sample. Also, when you use higher and higher bit depths, you need better digitization equipment; the least significant bits tend to be mostly noise if you don't have good equipment.
@hydrox24
@hydrox24 9 years ago
+Muztaba Hasanat Every wave *could* fit in 16-bit; it's just that mixing and recording require overhead. This is because every time you manipulate a wave a little bit, the volume of every sample needs to be described using a number between -32,768 and +32,767 (for 16-bit). If you change the volume of a sample and the maths ends up saying 'give this sample a volume of 5,256.6', you can't store fractional values, so it rounds up to 5,257. You can't hear this change in volume, but over many thousands of changes it begins to create a fairly significant difference: a combination of lots of small rounding errors from what the wave should be. Done to many samples in a song, this creates a quiet noise. Larger bit depths make this noise much quieter, which is why 24-bit (or 32-bit) is generally used in mixing. The final master doesn't need this, because no more changes will be made and the noise doesn't have a chance to get loud. 16-bit is more than enough for even hi-fi listening: it gives enough resolution that the quantization noise sits around -96 dB below full scale, quieter than anything you'd notice in practice.
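The accumulation described above can be sketched with made-up numbers: apply many small, nearly-cancelling gain changes, rounding back to an integer sample after each one (as if every edit were bounced to a fixed bit depth), and compare against doing the maths in floats throughout.

```python
def apply_gains_rounded(sample, gains):
    """Round back to an integer sample after every gain change."""
    s = sample
    for g in gains:
        s = round(s * g)
    return s

def apply_gains_float(sample, gains):
    """Carry full float precision through every gain change."""
    s = float(sample)
    for g in gains:
        s *= g
    return s

gains = [1.01, 0.99] * 500   # 1000 tiny, nearly-cancelling edits
rounded = apply_gains_rounded(1000, gains)
exact = apply_gains_float(1000, gains)
print(rounded != round(exact))  # True -- rounding at each step drifts away
```

This is exactly why DAWs mix at a higher precision internally and only quantize once, at the final export.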
@stephenkamenar
@stephenkamenar 9 years ago
5:25 Why use negative volumes? If you didn't, you'd get double the bit depth, and you wouldn't have to deal with phase cancellation. I'm sure there's a good reason why, so tell me.
@MaverickJeyKidding
@MaverickJeyKidding 7 years ago
Am I the only one hearing some kind of fuzzy background noise throughout the whole video? Like a distorted sine wave, but very quiet.
@ajinzrathod
@ajinzrathod 1 year ago
When we say audio is recorded at 44.1 kHz, it means 44,100 sample values are recorded each second. But bass typically ranges from about 20 Hz to 250 Hz or so, and bass can be recorded for more than 300 seconds, right? 300 seconds would be 13,230,000 samples (300 × 44,100). So how is it only between 20 and 250 Hz? Correct me if I understood something wrong.