
Why Higher Bit Depth and Sample Rates Matter in Music Production 

Audio University
456K subscribers
200K views

What is the benefit of using higher bit depth and sample rate in a DAW session for recording music? Should you use 16-bit, 24-bit, or 32-bit floating point? Is it worth recording music in 96kHz or 192kHz, or is 48kHz sample rate good enough?
Watch Part 1 here: • Debunking the Digital ...
Watch this video to learn more about sample rates in music production (Dan Worrall and Fabfilter): • Samplerates: the highe...
Dan Worrall RU-vid Channel: / @danworrall
Fabfilter RU-vid Channel: / @fabfilter
This video includes excerpts from "Digital Show & Tell", a video that was originally created by Christopher "Monty" Montgomery and xiph.org. The video has been adapted to make the concepts more accessible to viewers by providing context and commentary throughout the video.
"Digital Show & Tell" is distributed under a Creative Commons Attribution-ShareAlike (BY-SA) license. Learn more here: creativecommons.org/licenses/...
Watch the full video here: • Digital Show & Tell ("...
Original Video: xiph.org/video/vid2.shtml
Learn More: people.xiph.org/~xiphmont/dem...
Book a one to one call:
audiouniversityonline.com/one...
Website: audiouniversityonline.com/
Facebook: / audiouniversityonline
Twitter: / audiouniversity
Instagram: / audiouniversity
Patreon: / audiouniversity
#AudioUniversity
Disclaimer: This description contains affiliate links, which means that if you click them, I will receive a small commission at no cost to you.

Published: Jun 7, 2023

Comments: 402
@Texasbluesalley
@Texasbluesalley Год назад
Fantastic breakdown. I went through a 192Khz phase about 15 years ago and suffice it to say.... lack of hard drive space and the computing power of the day cured me of that pretty quickly. 🤣
@snowwsquire
@snowwsquire Год назад
@MF Nickster I don’t know about other Daws, but reaper lets you move audio clips on a sub sample level
@Emily_M81
@Emily_M81 Год назад
hah. I have an old MOTU UltraLite mk3 kicking around that advertised 192kHz, and back when it was the new hotness I was just like O.O at ever wanting to record at that rate
@MrJamesGeary
@MrJamesGeary Год назад
Holy moly you’re brave. About a year ago I started mostly working at 88.2/96. I’ve been blown away by how far computing power has come as performance is super smooth these days and the results are quality but man a decade+ later I still feel you on that storage battle. Can’t even imagine what you went through. Whenever I break and print my hardware inserts in the session it’s basically a “lol -12gb” button for my C drive
@andyboxish4436
@andyboxish4436 7 месяцев назад
96khz is all you need even if you are a believer in this stuff. Not a fan of 192
@popsarocker
@popsarocker 5 месяцев назад
It's interesting. Shot a concert recently with ≈ 10 cameras and multicammed it in the NLE. We used Sony Venice across the board and shot X-OCN ST. That's about 5GB per minute. We recorded ≈ 100 channels of 24/48 on the audio side. If we had recorded @192Khz we would have still only wound up with around 45% less data than the video. From a purely data storage and bitrate standpoint PCM audio is not really all that terrible. Sony X-OCN is pretty efficient actually. ProRes is a real hog on the other hand but notably unlike PCM neither X-OCN nor ProRes are "uncompressed". Undoubtedly the resource hog for audio is plugins because unlike video where VFX and color correction tend to be tertiary operations (i.e. entirely non-real time and done by someone else), audio engineers are typically focusing on editing and finishing within the same software environment and in real time.
@pirojfmifhghek566
@pirojfmifhghek566 Год назад
That video you mentioned last time absolutely blew my mind. I didn't have a clue that the aliasing around the Nyquist frequency issue was a thing at all. I had the feeling that higher sample rates were better for basic audio clarity, in the same way that a higher bit-depth helps with dynamic range. I just had no idea how or why.
@colin.chaffers
@colin.chaffers Год назад
Love it, I worked for Sony Broadcast including professional audio products, my team worked in Abbey road and the like, this take me back to those days when the analogue and digital battle lines were being drawn, I've always maintained digital offers a better sustainable quality, for the reasons you outline. Keep it up
@AudioUniversity
@AudioUniversity Год назад
Thanks, Colin! Sounds like you’ve worked on some awesome projects!
@frankfarago2825
@frankfarago2825 Год назад
There is no "battle" going on, Dude. BTW, did you work on the analog or digital side of (Abbey) Road?
@colin.chaffers
@colin.chaffers Год назад
@@frankfarago2825 I said battle lines, I did not say there was a battle. I worked for Sony Broadcast at the time when digital recording equipment like the PCM 3324 was being introduced, and I remember conversations with engineers who preferred analogue recorders because they could get a better sound by altering things like bias levels, which to me always felt like distorting the original recordings. I ran a team of engineers who installed, maintained and supported these products (sometimes during recording sessions, sometimes overnight) at a time when the industry was starting to embrace the technology.
@InsideOfMyOwnMind
@InsideOfMyOwnMind Год назад
I remember a time when digital audio wasn't quite in the hands of the consumer yet, and a guy whose name escapes me from Sony's "Digital Audio Division", as he put it, brought a digital reel-to-reel deck into the studio of a radio station in San Francisco and played the theme from Star Trek: The Motion Picture / The Wrath of Khan. It was awesome, but the station was not properly set up for it and there was heavy audible clipping. They stopped and came back to it later, and while the clipping was gone, the solution just sucked all the life out of the recording. I wish I remembered the guy's name. I think it was Richard something.
@christopherdunn317
@christopherdunn317 Год назад
But how many albums out there have been recorded to tape? Most all of them! How many digital albums have I heard? Squat, and if I have, it was early ADAT!
@lohikarhu734
@lohikarhu734 11 месяцев назад
Yep, I think that it's quite analogous to photography, where 8-bit colour channels work "pretty well" for printed output, but really fall apart for original scene capture, and just get worse when any kind of DSP is applied to the "signal", where 'banding' shows up in stretched tones, and softened edges can get banding or artifacts introduced during processing... Great discussion.
@oh515
@oh515 Год назад
I find a higher sample rate most useful when stretching audio tracks is necessary. Especially on drums to avoid “stretch marks.” But it's enough to bounce up from 48 (my standard) to e.g. 96 and bounce back when done.
@simongunkel7457
@simongunkel7457 Год назад
Here's the simpler way to get the same effect. Check the settings for your time stretch algorithm. The default is usually the highest pitch accuracy. What increasing the project sample rate does is decrease pitch accuracy in favor of time accuracy. The alternative way of doing this is to set the time stretching algorithm to a lower pitch accuracy.
@PippPriss
@PippPriss Год назад
​@@simongunkel7457 Are you using REAPER? There is this setting to preserve lowest formants - is that what you mean?
@simongunkel7457
@simongunkel7457 Год назад
@@PippPriss Sorry for the late reply, didn't get a notification for some reason. REAPER has multiple time stretch engines and for this particular application switching from Elastique Pro to Elastique Efficient is the way to go. You can more directly change the window size on "simple windowed", though Reaper actually goes with a time based setting, rather than a number of samples. Also note that stretch markers can be set to pitch-optimized, transient-optimized and balanced..
@customjohnny
@customjohnny Год назад
Haha, “Stretch Marks” never heard that before. I’m going to say that instead of ‘artifacting’ from now on
@DrBuffaloBalls
@DrBuffaloBalls 3 месяца назад
How exactly does upsampling from 48k make the stretching more transparent? Since it's not adding any new data, I'd imagine it would do nothing.
@RealHomeRecording
@RealHomeRecording Год назад
I like high sample rates and I cannot lie. You other engineers cannot deny....
@ericwarrington6650
@ericwarrington6650 Год назад
Lol...itty bitty waist....round thing...face..😂🤘🎶
@Mix3dbyMark
@Mix3dbyMark Год назад
When the engineer walks in with some RME and a Large Nyquist in your face, you get sprung!
@maxuno8524
@maxuno8524 Год назад
​@@Mix3dbyMark😂😂😂
@DeltaWhiskeyBravo13579
@DeltaWhiskeyBravo13579 Год назад
FWIW I'm running 32 bit float and 48k on my DAW. That's my max bit depth with Studio One 6.1 Artist, it goes to 64 bit float in Pro. As for sample rates, it looks like S1 goes up to 768K. Good enough?
@jordanrodrigues1279
@jordanrodrigues1279 Год назад
...if a mix walks in with that crunchy ali'sin and cramped bells boostin air I just can't
@alanpassmore2574
@alanpassmore2574 Год назад
For me 24 bit, 48khz digital recorder with analog desk and outboard gives all I need. You get the balance of pushing the levels through the analog to create the excitement and keeping lower digital levels to capture it with plenty of headroom.
@jmsiener
@jmsiener 3 месяца назад
It’s all you need. Your DAW might process audio as 32-bit float, but your ADC is more than likely capturing 24-bit. 48k gives a touch more high-frequency room before the Nyquist limit kicks in, without essentially doubling the file size.
@Call-me-James
@Call-me-James Год назад
A fun fact: The exact same reasoning is used in professional video cameras. The Arri Alexa 35 - a camera often used in movie making - has a whopping 17 stops of dynamic range. So even if a scene is way under exposed or over exposed, the problems can be corrected in post-processing.
@jackroutledge352
@jackroutledge352 Год назад
That's really interesting. So why is everything on my TV so dark and hard to see!!!!!!
@blakestone75
@blakestone75 Год назад
​@@jackroutledge352 Maybe modern filmmakers think underexposed means "gritty" and "realistic". Lol.
@Breakstuff5050
@Breakstuff5050 Год назад
​@jackroutledge352 perhaps your TV doesn't have a good dynamic range.
@RealHomeRecording
@RealHomeRecording Год назад
​@@jackroutledge352yeah that sounds like an issue with your TV quality. Or maybe your settings are not optimized? I have a 4K OLED Sony TV and it has HDR. Looks gorgeous with the right material.
@Magnus_Loov
@Magnus_Loov 11 месяцев назад
@@RealHomeRecording It's a well-known problem/phenomenon. Lot's of people are complaining about the darker TV/movie-productions these days. It is much darker now. I also have a 4k Oled TV (LG) but I can also see that some scenes are very dark in production.
@mastersingleton
@mastersingleton Год назад
Thank you for showing that 24-bit is not necessary for audio playback. For audio production, however, it makes a big difference in terms of the amount of safety margin between clipping and not clipping the input audio being recorded.
@simonmedia7
@simonmedia7 Год назад
I always thought about the sample rate problem being that if you wanted to slow down a piece of audio with the highest frequencies being 20kHz, you'd lose information proportional to the amount you slow it down. So you need the extra magical inaudible information beyond 20kHz for the slowed down audio to still fill the upper end of the audible spectrum. That is something every producer will have probably experienced.
@albusking2966
@albusking2966 6 месяцев назад
Yeah, if it's essential for your workflow to slow some audio down, then by all means. Otherwise I'm happy with 48 or 44.1 because it sounds good. I like to export any files before mastering as 32-bit files though; it saves you issues from downsampling (as most DAWs run a 32-bit master fader now).
@Sycraft
@Sycraft 11 месяцев назад
Something to add about bit depth and floating point for audio processing is the phenomenon of rounding/truncation and accumulated error. If you are processing with 16- or 24-bit integers, then every time you do a math operation you are truncating to that length. That doesn't sound bad on the surface, particularly for 24-bit: what would the data below -144 dB matter? The problem is that the error in the result accumulates with repeated operations. So while just the least significant bit might be wrong at first, it can creep up and up as more and more math is done, and could potentially become audible. It is a kind of quantization error.

The solution is to use floating-point math, since it maintains a fixed precision over a large range. Thus errors are much slower to accumulate and the results are more accurate. So it ends up being important not only for things like avoiding clipping, but also for avoiding potentially audible quantization errors if lots of processing is happening. In theory, with enough operations you could still run into quantization error with 32-bit floating point, since it only has 24 bits of precision, though I'm not aware of it being an issue in practice. However, plenty of modern DAWs and plugins like to operate in 64-bit floating point, which has such a ridiculous amount of precision (from an audio standpoint) that you would never have any error wind up in the final product, even at 24-bit.
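To make the accumulation effect concrete, here is a minimal numpy sketch (the 1 kHz tone, the 1000-step gain chain, and the helper names are invented purely for illustration): the same signal runs through many tiny gain changes, once staying in float the whole time and once re-quantized to 24 bits after every step, and the fixed-point path drifts away from the float reference.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)            # 1 kHz test tone at -6 dBFS

def to_int24(v):
    """Quantize float samples in [-1, 1] to 24-bit integer values."""
    return np.round(np.clip(v, -1, 1) * (2**23 - 1))

def from_int24(q):
    return q / (2**23 - 1)

gains = np.full(1000, 1.0001)                     # 1000 tiny gain nudges, about +0.87 dB total
y_float = x.astype(np.float64)
y_fixed = x.copy()
for g in gains:
    y_float = y_float * g                         # stays in float the whole time
    y_fixed = from_int24(to_int24(y_fixed * g))   # re-quantized after every operation

err = y_fixed - y_float
print("peak error after 1000 ops: %.1f dBFS" % (20 * np.log10(np.max(np.abs(err)))))
# A single 24-bit quantization step peaks near -144 dBFS; the accumulated error
# after many operations lands well above that, which is the effect described above.
```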
@user-bu4wg1ok5n
@user-bu4wg1ok5n 28 дней назад
Rounding and truncation should not be a problem as long as the levels are not wildly out of range, either way too high (digital clipping) or way too low (down in the noise floor). The latter would be almost impossible, since recording at 144 dB below digital full scale would be obviously ridiculous; even room sound and microphone preamplifier noise should be quite a ways above this level.

However, there is one thing that needs to be watched, and that is proper dithering. Vanderkooy and Lipshitz did pioneering work on dither, and they recommend that triangular probability density dither should always be applied at 1/2 least significant bit whenever audio is sampled or resampled (sample-rate converted, or gain shifted down, where the dither might be reduced below the current bit depth). Vanderkooy and Lipshitz did say that dither might not be necessary when working with more than 24 bits, especially if the master is going to be converted to 16 bits for distribution. It can be dithered for 16 bits when transcribing to CD Audio or whatever. The dither provides a digital noise floor that spreads the quantization error power spectrum across the entire audio band, effectively making the resolution greater than the dynamic range implied by the actual bit depth. This is the white paper, from AES, not free: aes2.org/publications/elibrary-page/?id=5482
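For readers who want to see dither in code, here is a rough numpy sketch of the textbook idea (a ±1 LSB triangular-PDF dither added before rounding to 16 bits; the exact level and implementation recommended in the cited paper may differ): the dither decorrelates the quantization error, turning gritty distortion on very quiet signals into a smooth noise floor.

```python
import numpy as np

def quantize_16bit(x, dither, rng):
    """Reduce float samples in [-1, 1] to 16-bit integers, optionally with TPDF dither."""
    lsb = 1.0 / 32768.0                           # one 16-bit quantization step
    if dither:
        # Sum of two independent uniform(-0.5, 0.5) LSB sources: triangular PDF, ±1 LSB peak.
        x = x + (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.clip(np.round(x / lsb), -32768, 32767).astype(np.int16)

# A very quiet 1 kHz tone (-90 dBFS, roughly one LSB of amplitude): without dither it
# quantizes to a crude stair-step full of harmonics; with dither it stays a tone plus noise.
fs = 48000
t = np.arange(fs) / fs
x = 10 ** (-90 / 20) * np.sin(2 * np.pi * 1000 * t)
rng = np.random.default_rng(0)
plain = quantize_16bit(x, dither=False, rng=rng)
dithered = quantize_16bit(x, dither=True, rng=rng)
```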
@DDRMR
@DDRMR Год назад
I've slowly been learning the benefits of oversampling over the last few years, and before final mix export I'll spend an hour or so applying oversampling on every plugin that offers the option, on every mixer track. This video really solidified my knowledge and affirmed that spending that extra time has always been worth it! The final mixes and masters do sound fucking cleaner by the end of it all, because I use a lot of compression and saturation on most things.
@benjoe999
@benjoe999 Год назад
Would be cool to see the importance of audio resolution in resampling!
@Mix3dbyMark
@Mix3dbyMark Год назад
Yes please
@BertFlanders
@BertFlanders Год назад
Thanks for clarifying these things! Really useful for deciding on project bitrates and sample rates. Cheers!
@shorerocks
@shorerocks Год назад
Thumbs up for the Dan Worrall link. His, and Audio Universities videos are the top vids on YT to watch.
@AudioUniversity
@AudioUniversity Год назад
I love Dan’s videos! Thank you for the kind words, Sven!
@maxheadrom3088
@maxheadrom3088 5 месяцев назад
Both this and the previous video are great! Thanks for the great work!
@macronencer
@macronencer Год назад
This is an excellent and very clear explanation. Thank you so much! I've seen Dan Worrall's videos on this topic, and I agree they are also brilliant.
@Fix_It_Again_Tony
@Fix_It_Again_Tony Год назад
Awesome stuff. Keep 'em coming.
@eitantal726
@eitantal726 Год назад
Same reason why graphic designers need high res and high bit depth. A 1080p jpg image is great for viewing, but will look terrible once you zoom or change the brightness. If your final image is composed of other images, they better be at a good resolution, or they'll look pixelated
@lolaa2200
@lolaa2200 Год назад
It's not about resolution, it's about compression. Not going into the details, but how much you compress your JPEG and the trade-off between image quality and file size is exactly what is discussed here: a matter of sampling rate.
@gblargg
@gblargg Год назад
As an amateur Photo(GIMP)-shopper, I figured this out a few years ago. Always start with a higher resolution than you think you'll need. It's easy to reduce the final product but a pain to go back and redo it with higher resolution.
@MatthijsvanDuin
@MatthijsvanDuin Год назад
@@lolaa2200 Ehh no, audio sampling rate is directly analogous to image resolution. We're not talking about image compression nor audio compression here.
@lolaa2200
@lolaa2200 Год назад
@@MatthijsvanDuin I reacted to a message talking about JPEG. The principle of JPEG compression is precisely to give a different sample rate to different parts of the image, so yes, it totally relates. JPEG compression IS a sampling matter.
@mandolinic
@mandolinic Год назад
This stuff is pure gold. Thank you so much.
@rowanshole
@rowanshole 7 месяцев назад
It's like Ansel Adams 'zone system' for audio. Adams would prefog his film with light, then over expose the film in camera, while under exposing the film in chemistry, so as to get rid of the noise floor (full blacks) and get rid of the digital clipping (full whites), both of which he maintained "contained no information". This resulted in very pleasing photographs.
@dmillionaire7
@dmillionaire7 3 месяца назад
How would I do this process in Photoshop?
@emiel333
@emiel333 Год назад
Great video, Kyle.
@magica2z
@magica2z 11 месяцев назад
Thank you for all your great videos and subscribed.
@TarzanHedgepeth
@TarzanHedgepeth Год назад
Good stuff to know. Thank you.
@breernancy
@breernancy Год назад
!All points on point!
@ZadakLeader
@ZadakLeader Год назад
I guess having a high sample rate for when you need to e.g. slow recordings down is also useful because you still have that data. And to me that's pretty much the only reason to have things above 44kHz sampling rate
@AudioUniversity
@AudioUniversity Год назад
Great point, Zadar Leader!
@simongunkel7457
@simongunkel7457 Год назад
Not something I think is necessary, unless you specifically want ultrasonic content and want to make bat sounds audible. Now, if you think time stretching sounds better with a higher sample rate, you might be right, but you are using the most computationally expensive hack I can think of. Time stretching and pitch shifting algorithms use windows of a particular size (e.g. 2048 samples), but their effect depends on how long those windows are in time, so a higher sample rate would just decrease the time window. All of these effects make a trade-off though: the longer the window, the more accurate they get in the frequency domain, but the shorter the window, the more accurate they get in the time domain. Most of them default to maximal window size and thus maximal accuracy in the frequency domain, but the errors in the time domain lead to some artefacts. So instead of increasing your project sample rate, which will make all processing more costly in computation, you could just opt for the second-highest frequency-domain setting for your pitch shifting or time stretching algorithm. That means the window size is decreased, which actually reduces computational load for pitch shifting or time stretching.
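A quick back-of-the-envelope check of the window-size point above (the 2048-sample figure is just the example used in the comment):

```python
# Time span covered by a fixed-size analysis window at different sample rates.
window = 2048                                     # samples, as in the example above
for fs in (44100, 48000, 96000):
    print(f"{fs} Hz: {1000 * window / fs:.1f} ms per window")
# 44100 Hz: 46.4 ms, 48000 Hz: 42.7 ms, 96000 Hz: 21.3 ms. Doubling the sample rate
# halves the window's duration, trading frequency resolution for time resolution.
```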
@5ilver42
@5ilver42 Год назад
@@simongunkel7457 I think he means the simpler resampling version where things get pitched down when playing slower, then the higher sample rate still has content to fill in the top of the spectrum when pitched down.
@gblargg
@gblargg Год назад
@@5ilver42 That's the ultrasonic content he referred to, wanting it to be captured so when you lower the rate it drops into the audible range.
@MatthijsvanDuin
@MatthijsvanDuin Год назад
@@5ilver42 It depends on the application, but if you're just slowing down for effect you actually want the top of the spectrum to remain vacant rather than shifting (previously inaudible) ultrasonic sounds into the audible part of the spectrum. Obviously if you want to record bat sounds you need to use an appropriate sample rate for that application, regardless of how you intend to subsequently process that record.
@zyonbaxter
@zyonbaxter Год назад
I'm surprised he didn't mention how higher sample rates decrease latency when live monitoring. PS. I would love to see videos about the future of DANTE AV and Midi 2.0.
@MikeDS49
@MikeDS49 Год назад
I guess because the digital buffers fill up sooner?
@AudioUniversity
@AudioUniversity Год назад
Good point, Zyon Baxter! It’s a balance in practice though, as it’s more processor intensive so using a higher sample rate might lead to needing a larger buffer size. If anyone reading this is interested in learning more about this, check out this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-zzM4yk3I8tc.html
@lolaa2200
@lolaa2200 Год назад
Actually, with a given computing power, and assuming you make full use of it, a higher sample rate means higher latency.
@andytwgss
@andytwgss Год назад
@@lolaa2200 lower latency, even within the ADC/DAC, feedback loop is reduced
@DanWorrall
@DanWorrall Год назад
I think this is kind of a myth in all honesty. In every other way, doubling samplerate means doubling buffer sizes. You have a delay effect? You'll need twice as many samples in the buffer at 96k. Same for the playback buffer: if you double the samplerate, but keep the same number of samples in the buffer, you've actually halved the buffer size.
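The arithmetic behind this exchange, as a small sketch (the 256-sample buffer is an arbitrary example): one-way buffer latency is just samples divided by sample rate, so a higher rate only helps if the buffer is held at the same number of samples.

```python
# One-way buffer latency = samples in the buffer / sample rate.
def buffer_latency_ms(buffer_samples, sample_rate):
    return 1000.0 * buffer_samples / sample_rate

for fs in (48000, 96000):
    print(f"{fs} Hz, 256-sample buffer: {buffer_latency_ms(256, fs):.2f} ms")
# 48000 Hz: 5.33 ms; 96000 Hz: 2.67 ms. Keep the same amount of *time* in the buffer
# instead (512 samples at 96 kHz) and nothing is gained, which is the point above.
```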
@theonly5001
@theonly5001 Год назад
More samples are a great thing for denoising as well. Temporal denoising is quite a resource-intensive task, but it works wonders in recordings of any type, especially if you want to get rid of higher-frequency noise.
@cassettedisco6954
@cassettedisco6954 Год назад
Gracias amigo, saludos desde México 🇲🇽❤
@AdielaMedia
@AdielaMedia 7 месяцев назад
great video!
@aaronmathias6739
@aaronmathias6739 Год назад
It is good to see you back in action with your awesome videos!
@-IE_it_yourself
@-IE_it_yourself Год назад
you have done it again. i would love to see a video on square waves
@oaooaoipip2238
@oaooaoipip2238 Год назад
Don't ignore clipping. Or it will sound like Golden hour by JVKE.
@michelvondenhoff9673
@michelvondenhoff9673 Год назад
Compression you apply before any gain or od/ds is brought into the signalpath. It only might be applied again when mastering for different formats.
@mattbarachko5298
@mattbarachko5298 Год назад
Got so excited when I saw you were using reaper
@danielsfarris
@danielsfarris Год назад
WRT noise floor and compression, when working with analog tape, it was (and I presume still is) much more common to compress and EQ on the way in to avoid adding noise by doing it later.
@MadMaxMiller64
@MadMaxMiller64 4 месяца назад
Modern converters work as 1bit sigma-delta anyway and convert the data stream after the fact, using digital filters with a dithering noise beyond the audible range.
@delhibill
@delhibill Год назад
Clear explanations
@DeltaWhiskeyBravo13579
@DeltaWhiskeyBravo13579 Год назад
Excellent video Kyle. Sometimes I miss the analog tape days, till it comes to signal to noise. At least tape saturation sounds much better than digital clipping, which I'm sure nobody goes that hot. 🙂
@M1ster77
@M1ster77 Месяц назад
Here are two other reasons to go with 96k: The ad/da latency of your system will be much smaller, and (if for some reason) you record to a file played back in the wrong sample rate you will notice it right away 😁
@3L3V3NDRUMS
@3L3V3NDRUMS Год назад
That was really great man! I didn't know this before! I was just using standard because I didn't know what would it change. But now I understand it! 🤘
@AudioUniversity
@AudioUniversity Год назад
Glad to help, 3L3V3N DRUMS! I still use 48kHz most of the time because the processing power and storage I save outweigh the tiny bit of aliasing that might occur. (In my opinion)
@3L3V3NDRUMS
@3L3V3NDRUMS Год назад
@@AudioUniversity Great to know. That's actual the standard in Ardour while I'm recording my drums. So I'll let it like that!
@bulletsforteeth5029
@bulletsforteeth5029 Год назад
It will require 50% more storage capacity, so be sure to factor that in on your projects.
@simongunkel7457
@simongunkel7457 Год назад
@@AudioUniversity Where would it occur though? Your converter on the hardware side always uses the maximum sample rate it can support, because that makes the analog filter design much easier. Then, if you record at lower sampling rates, it will apply a digital filter and then downsample; both are hardware-accelerated DSP that get controlled via the driver. If you set it to record at 48k, your converters don't switch to a different filter design and a physically different sample rate; they just perform filtering and downsampling before sending the digital signal to the box.
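A rough numpy sketch of the filter-then-downsample step described in that comment, with a simple windowed-sinc FIR standing in for the converter's hardware decimation filter (this shows the principle, not any vendor's actual implementation):

```python
import numpy as np

def decimate_by_2(x, taps=101):
    """Halve the sample rate: low-pass at the new Nyquist, then keep every 2nd sample."""
    n = np.arange(taps) - (taps - 1) / 2
    h = 0.5 * np.sinc(0.5 * n)                    # ideal low-pass with cutoff at fs/4
    h *= np.hanning(taps)                         # window to make the FIR practical
    h /= h.sum()                                  # unity gain at DC
    return np.convolve(x, h, mode="same")[::2]

fs = 96000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 30000 * t)
y = decimate_by_2(x)                              # now 48 kHz: the 30 kHz tone has been
                                                  # removed instead of aliasing to 18 kHz
```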
@simongunkel7457
@simongunkel7457 Год назад
@MF Nickster I agree.
@Zelectrocutica
@Zelectrocutica Год назад
I already said this, but I'll say it again: most plugins work better at high sample rates, since most plugins don't have internal oversampling, so it's good to work at a reasonably high sample rate like 96kHz or 192kHz. Though I say this, I'm still working at 44.1-48kHz 😂
@weschilton
@weschilton Год назад
Actually almost all plugins these days have internal oversampling.
@simongunkel7457
@simongunkel7457 Год назад
My DAW (Reaper) has external oversampling per plugin or plugin chain, which means it takes care of the upsampling and after processing the filtering and downsampling. To the plugin it looks like the project runs at a higher sample rate, while for plugins where aliasing isn't an issue it can still run at the lower sample rate.
@mb2776
@mb2776 Год назад
Most plugins already had oversampling built into them, like 10 years ago.
@plfreeman111
@plfreeman111 Год назад
"...for any properly mastered recording." I long for properly mastered recordings. A thing of myth and beauty. Like a unicorn.
@BeforeAndAfterScience
@BeforeAndAfterScience Год назад
Succinctly, while human hearing has an upper frequency bound, targeting that limit when converting from analog to digital can (and usually does) result in literally corrupted digital representation because the contribution of the higher analog frequencies to the waveform don't just disappear, they get aliased into the lower frequencies.
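The fold-down arithmetic behind that comment, as a tiny sketch (first-order folding only; the example frequencies are arbitrary):

```python
def alias_frequency(f, fs):
    """Frequency an unfiltered tone lands on after sampling at fs (first-order folding)."""
    f = f % fs
    return fs - f if f > fs / 2 else f

for f in (25000, 30000, 44100):
    print(f"{f} Hz sampled at 48 kHz shows up at {alias_frequency(f, 48000)} Hz")
# 25 kHz -> 23 kHz, 30 kHz -> 18 kHz, 44.1 kHz -> 3.9 kHz: ultrasonic content that is not
# filtered out before conversion reappears inside the audible band.
```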
@nine96six
@nine96six 11 месяцев назад
Rather than for musical purposes, I think it is valuable as data for profiling or natural phenomenon analysis in the future.
@SigururGubrandsson
@SigururGubrandsson Год назад
Really nice stuff. But I disagree with the Nyquist alignment. You can defend it if you know the input, but if it's misshapen like music, then you can't defend Nyquist. Misshapen frequency, volume and variance need to be taken into account, and you need more than 2x the frequency for that, as well as the bit depth. Not to mention intentional saturation. Keep it up, I'm eager to watch the next vid!
@MegaBeatOfficial
@MegaBeatOfficial 7 месяцев назад
AFAIK this method is built in to every audio interface nowadays. So obviously a sampling resolution higher than 44.1 is useful, but in principle you shouldn't record audio files at 96kHz or higher, because they just take up a lot of hard disk space and need more cpu power to play them, especially if you have a lot of tracks... but you don't gain quality.
@lolaa2200
@lolaa2200 Год назад
Almost nailed it, although what you said about the pressure on the anti-aliasing analog filter is only true for very basic converter topologies. If you've only attended mixed-signal electronics 101, that's what you will have seen. However, we haven't used that topology for audio for several decades now, mostly for this exact reason. The "true" sampling frequency (i.e. in terms of what is seen by the analog side) is several MHz.
@gblargg
@gblargg Год назад
The way modern converters work just amplifies his point: the higher the sampling rate, the easier the filtering is.
@deaffatalbruno
@deaffatalbruno Год назад
Well, the comments around noise floor are a bit misleading. A 24-bit signal doesn't have a 144 dB noise floor (that would be nice), as this depends on the noise floor of the conversion. 144 dB (6 dB per bit) is theory only.
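For reference, the 6-dB-per-bit figure is just the ratio between full scale and one quantization step; a two-line check of the theoretical ceiling the comment refers to:

```python
import math
for bits in (16, 24):
    print(f"{bits}-bit: {20 * math.log10(2 ** bits):.1f} dB theoretical range")
# 16-bit: 96.3 dB, 24-bit: 144.5 dB (a further ~1.76 dB appears if you compare a
# full-scale sine to the RMS of the quantization noise). Real converters land lower.
```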
@shueibdahir
@shueibdahir 3 месяца назад
The demonstration about analog audio gain and noise floor is exactly how cameras work as well. I'm actually shocked by how similar they are. Capturing images with a camera is a constant battle between distortion (clipping the highs) and the image being too dark (blending in with the noise floor), and bringing it up in post then causes the noise to come up as well.
@camgere
@camgere Год назад
I'm a bit rusty on this, but there is an issue with the Nyquist frequency. Going from analog to digital, you want to "brick wall" low-pass the signal at half the sampling frequency. A brick wall is a perfect low-pass filter, which doesn't exist; there are only very good low-pass filters. Going from digital to analog, you again want to brick-wall filter the signal to recover the analog signal from the sampled signal. Even more confusing, there are digital low-pass filters, but they have to obey Nyquist as well.
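To illustrate the trade-off in that comment, here is a hedged numpy sketch of a windowed-sinc low-pass (a stand-in, not how any particular converter implements its filtering): the more taps you spend, the closer the response gets to the impossible brick wall between 20 kHz and Nyquist.

```python
import numpy as np

def lowpass_taps(cutoff_hz, fs, taps):
    """Windowed-sinc low-pass FIR (Hann window), normalized to unity gain at DC."""
    n = np.arange(taps) - (taps - 1) / 2
    h = (2 * cutoff_hz / fs) * np.sinc(2 * cutoff_hz / fs * n) * np.hanning(taps)
    return h / h.sum()

def gain_db(h, f, fs):
    n = np.arange(len(h))
    return 20 * np.log10(abs(np.sum(h * np.exp(-2j * np.pi * f * n / fs))))

fs = 44100
for taps in (31, 255, 2047):
    h = lowpass_taps(20000, fs, taps)
    print(f"{taps:5d} taps: {gain_db(h, 19000, fs):6.1f} dB at 19 kHz, "
          f"{gain_db(h, 22050, fs):7.1f} dB at Nyquist")
# Longer filters (more taps, more latency) leave 19 kHz essentially untouched while pushing
# the response at Nyquist further down; a true brick wall would need infinitely many taps.
```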
@bassyey
@bassyey Год назад
I record 24bit because of the noise floor. But I record on 96KHz because of the round trip time! My system actually has a lower latency when it's set to 24bit 96KHz.
@user-ch8gs2ks6v
@user-ch8gs2ks6v Год назад
Waiting this stuff...
@TonyAndersonMusic
@TonyAndersonMusic Год назад
That was super clear. You’re a great instructor. Is it useless to record in 96k and then bounce stems down to 48k to give my logic session a break?
@AudioUniversity
@AudioUniversity Год назад
No. It’s not useless. You can even bounce out the multitrack instead of combining sections into stems.
@ProjectOverseer
@ProjectOverseer Год назад
I use 192kHz multi tracking then master to DSD for amazing replay via a decent DAC
@professoromusic
@professoromusic Год назад
Love this, always great content. Where have you studied?
@AudioUniversity
@AudioUniversity Год назад
Thanks, Professor O. I studied audio production at Webster University. I’ve also learned a lot from mentors, of course!
@lucianocastillo694
@lucianocastillo694 7 месяцев назад
I wish there was a higher sample rate option for high-mid to higher frequencies, one that keeps a 48kHz sample rate on the low-mid and low frequencies but targets a higher sample rate for the rest.
@ukaszpruski3528
@ukaszpruski3528 Год назад
Perhaps an Idea to consider (and make a video) that compares DSD to PCM and the differences between PURE DSD recording mastering output and the ones that use PCM in between ... Nevertheless, DSD128 or DSD256. PCM 24/96 vs DSD128 ... Is it really that close ? Or is there some "hidden difference" ;-) ...
@kyleo2113
@kyleo2113 Год назад
Is there any advantage to upsampling when applying parametric EQ, crossfeed, filters, volume leveling etc? Also do some DACs work better with higher sample rates if you are able to offload the conversion in the digital domain in a pc? I am a roon user and curious your take on this.
@Mix3dbyMark
@Mix3dbyMark Год назад
🔥🔥🔥
@rts100x5
@rts100x5 Год назад
say whatever you want to ...believe whatever you want to .... the difference between DSD recordings and Lossless wav or flac on my Fiio DAP is NIGHT vs DAY Its really about the recording just as much as the file type / resolution
@-IE_it_yourself
@-IE_it_yourself Год назад
5:55 that is cool
@Tryggvasson
@Tryggvasson Год назад
Sample rate does more than help with anti-aliasing. Rupert Neve was convinced that capturing and processing the ultrasonic signal that came along with the audible actually contributed to the perceived pleasantness of the sound and the emotional state it communicates. So even if you can't hear it per se, it counts in the overall timbre and feel. You can easily argue that, in the analog domain, ultrasonic signal (for instance harmonics) actually changed the behavior of compressors, to say the least, and that multiplied by x number of tracks. So higher sample rates also allow for a wider bandwidth into the ultrasonics, which seems to matter for the quality of the signal. The downside is the processing power and storage space.
@FallenStarFeatures
@FallenStarFeatures Год назад
It's risky to record frequencies above 20KHz, even when the original sample rate is above 88.2khz. Ultrasonic frequencies in this band are susceptible to being digitally folded down into the audio range, producing extremely unnatural-sounding aliasing distortion. While this hazard can be carefully avoided within a pure 96Khz+ digital processing chain, any side trip to an external digital processor may involve resampling that can run afoul of ultrasonic frequencies. Why take such risks when the speculative benefits have never been shown to be audible?
@Jungle_Riddims
@Jungle_Riddims Год назад
Word of the day: Nyquist 😉💥
@isotoxin
@isotoxin Год назад
Finally I have strong arguments to argue with audiophiles! 😅
@garymiles484
@garymiles484 Год назад
Sadly, most are like flat earthers. They just won't listen.
@nunnukanunnukalailailai1767
Weird how limiting was not discussed. It's one of the applications of high sample rates that actually makes sense in practice most of the time, at least in a mastering context. The sample peak level has a higher chance of matching the intersample peak level (true peak) when higher sample rates are used, even if high sample rates have no effect on the D/A. That's the main working principle behind true-peak limiters.
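A small numpy illustration of the sample-peak versus intersample-peak gap mentioned above, using 4x FFT upsampling as a stand-in for a true-peak detector (real limiters typically use polyphase filters, so treat this purely as a sketch):

```python
import numpy as np

fs = 44100
t = np.arange(64) / fs
# A near-Nyquist tone whose true crests fall between the sample instants.
x = 0.99 * np.sin(2 * np.pi * 11025 * t + np.pi / 4)

def upsample_fft(x, factor):
    """Band-limited upsampling by zero-padding the spectrum."""
    X = np.fft.rfft(x)
    X_pad = np.zeros(len(x) * factor // 2 + 1, dtype=complex)
    X_pad[:len(X)] = X
    return np.fft.irfft(X_pad, n=len(x) * factor) * factor

sample_peak = np.max(np.abs(x))
true_peak = np.max(np.abs(upsample_fft(x, 4)))
print(f"sample peak {20 * np.log10(sample_peak):.2f} dBFS, "
      f"approx. true peak {20 * np.log10(true_peak):.2f} dBFS")
# The reconstructed waveform peaks roughly 3 dB higher than any stored sample here,
# which is why true-peak limiters oversample before measuring.
```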
@Skandish
@Skandish Год назад
Yep, frequency and volume range is enough. But what about resolution? A 16-bit, 48 kHz signal is just so many data points, which will wipe out any difference between very close but slightly different signals. For example, digitizing short enough 15 kHz and 15.001 kHz sine signals would result in the same binary file. Moreover, the DAC is not looking at the whole file, only at a short part of it, meaning that we will likely have frequencies changing over time. Compare this to image sensors or displays: having a 1-inch HDR sensor gives enough size and depth, but we still want it to be 4K or 8K.
@___David___Savian
@___David___Savian 3 месяца назад
Here is the right level to render to for audio uploaded to RU-vid: the ideal volume limit level is -5.9 dB (RU-vid automatically normalizes volume to that level). All instruments should be below this level, with the peak spikes reaching -5.9 dB. Just put all instruments at around -18 dB and then increase accordingly between -18 and -5.9 dB.
@WaddleQwacker
@WaddleQwacker 11 месяцев назад
It's sort of the same with visual production, with pictures and video files. The average Joe posts JPEGs in 8 bits, and maybe a PNG with an alpha channel every Sunday. But in production we use 32-bit EXRs everywhere, because you can play with high dynamic range in comp, it's fast, and it can store layers, metadata you haven't even heard about, deep data, and ...
@XRaym
@XRaym Год назад
02:02 It's worth noticing that recording 24-bit audio doesn't mean you get 24 bits of dynamic range: hardware has its own noise level as well. But sure, digital interfaces are still way lower in noise than analog.
@AgentSmith911
@AgentSmith911 2 месяца назад
I heard Spotify is rumored to offer higher-fidelity audio, probably with less compression or lossless audio using codecs like FLAC instead of MP3. My audio equipment probably isn't good enough to hear the difference though, but maybe it will be good for music producers.
@paullevine9598
@paullevine9598 2 месяца назад
Could you do a video on DSD: what it is, pros, cons, etc.?
@elanfrenkel8058
@elanfrenkel8058 2 месяца назад
Another reason to use higher sample rates is it decreases latency
@stephenbaldassarre2289
@stephenbaldassarre2289 10 месяцев назад
One thing often overlooked in the sample rate argument is digital mixers. The converters are often run in low-latency (high speed) mode in order to keep the round trip through the console low enough that it doesn't affect people's performances. This is done by simplifying the digital anti-aliasing filters to reduce processing time. This is not trivial stuff, I'm talking on the order of 40dB attenuation at .6fs vs 100dB. In other words, if your console runs at 48KHz, an input of 28.8K at full scale will come out the other side of your console as 19.2K at -40dB. That's enough to cause some issues, especially since a lot of manufacturers trying to meet a price point completely leave out the analogue anti-aliasing filters (Sony suggests 5-pole analogue filters in front of 48K ADCs). Running a digital console at 96KHz effectively means around 90dB stop-band attenuation even with the ADCs in low-latency mode. Of course, you also reduce aliasing caused by internal processing as you say.
@stephenbaldassarre2289
@stephenbaldassarre2289 9 месяцев назад
@mfnickster The issue isn't processing power so much as that ADCs MUST have group delay in order to have linear-phase anti-aliasing. DACs must also have group delay for the reconstruction filters. The processing power within the console's DSP is fast, but nothing is instantaneous, so every place one can reduce the latency must be considered. Oversampling also requires group delay, so pick your poison. In a computer environment, the plug-in can report its internal latency so the DAW can compensate by pre-reading the track; not so in a mixer.
@LuizCarlosJesusdosSantos
@LuizCarlosJesusdosSantos Год назад
👏🏽👏🏽👏🏽 👏🏽
@TheChromaticz355
@TheChromaticz355 Год назад
Dan Worrall entered the chat!
@ats-3693
@ats-3693 Год назад
Aliasing definitely isn't a problem unique to digital audio recording. I'm a geophysicist and a geophysical data processor; aliasing is also an issue in geophysical data in exactly the same way, except it ends up being a visual issue.
@VendendoNaInternetAgora
@VendendoNaInternetAgora Месяц назад
I'm watching all the videos on the channel, thank you for sharing your knowledge with us. One question: what is the setup of the sound equipment installed in your car? Is it a High-End system? I'm curious to know what system (equipment) you use in your car...
@AudioUniversity
@AudioUniversity Месяц назад
I just use the stock system, but I’d love to upgrade someday! Thanks.
@calumgrant1
@calumgrant1 3 месяца назад
Real music is not single static sine waves but a whole spectrum that varies with time. I would like to see this mathematical argument extended to spectra, because the error on each frequency component would surely accumulate? Real music is very very processed, being encoded and decoded multiple times from various streaming services and codecs, so I think adding a bit of headroom in terms of frequency and bit depth is quite sensible to keep the artefacts down.
@ferrograph
@ferrograph 11 месяцев назад
Nice to see that this stuff is understood properly by the younger engineers who didn't live through the evolution of analogue and digital recording. So much nonsense is spoken about high bit rates. Well done.
@rodrigotobiaslorenzoni5707
@rodrigotobiaslorenzoni5707 Год назад
Excellent video!!!! One question I have is whether higher sample rates would help to draw a more complex wave, like a mastered song with many instruments playing at the same time, more accurately. I'm supposing that a "complex" wave may be a combination of various frequencies, both sine and not-perfectly-sine waves. May be worth testing.
@AudioUniversity
@AudioUniversity Год назад
The only frequencies that won’t be accurately sampled exceed the Nyquist frequency, and therefore the audible frequency range. Check out this video to see this demonstrated: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-UqiBJbREUgU.html
@RobertFisher1969
@RobertFisher1969 Год назад
Any complex wave form can be decomposed into a set of sine waves. If none of those component sine waves are above the Nyquist frequency, then the DAC can perfectly reproduce the complex wave. Which is why such high frequencies need to be filtered out of the complex wave to prevent aliasing.
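A numpy sketch of that claim using Whittaker-Shannon (sinc) interpolation on a toy three-partial "complex" wave (the partial frequencies are arbitrary, chosen only to sit below the 24 kHz Nyquist limit):

```python
import numpy as np

fs = 48000
n = np.arange(int(0.02 * fs))                     # a 20 ms snippet
t = n / fs

def complex_wave(t):
    """Toy 'complex' waveform: three partials, all below the 24 kHz Nyquist limit."""
    return (np.sin(2 * np.pi * 440 * t)
            + 0.5 * np.sin(2 * np.pi * 3500 * t + 1.0)
            + 0.2 * np.sin(2 * np.pi * 15000 * t + 2.0))

x = complex_wave(t)                               # the stored samples
t_half = (n[:-1] + 0.5) / fs                      # instants halfway between samples
rebuilt = np.array([np.sum(x * np.sinc(fs * th - n)) for th in t_half])

err = np.abs(rebuilt - complex_wave(t_half))
print("max interior reconstruction error:", np.max(err[200:-200]))
# The interior error is small and comes only from truncating the (formally infinite)
# sinc sum at the edges of the snippet; for an unbounded band-limited signal, every
# component below Nyquist is recovered exactly, which is the point of the comment.
```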
@macronencer
@macronencer Год назад
I understand about oversampling and why it's used internally. However, sometimes I think of potential reasons to *record* at higher sample rates - but I'm no expert and I wonder whether this is ever justified. Two such reasons I can think of right now: 1. Field recordings that you might want to slow down later on to half or quarter speed. 2. Recordings made in adverse conditions that might need noise reduction processing (I've heard some people say that higher sample rates can help with NR quality). Do you have any comments on either of these? I'd be interested to hear your advice. Thank you!
@RealHomeRecording
@RealHomeRecording Год назад
The two reasons you listed are indeed valid points. Pitch correction or pitch manipulation would be another.
@macronencer
@macronencer Год назад
@@RealHomeRecording Many thanks, that's helpful!
@marcbenitez3227
@marcbenitez3227 8 месяцев назад
96 is the sweet spot, think of sample rates as the display quality on your monitor, 1080p is going to look worse than 4k because it has less pixels, it’s the same thing in music, more samples equals more detail.
@DGTelevsionNetwork
@DGTelevsionNetwork Год назад
This is why the USMC Sony and others need to make DSD more available and not guard it so much. It's a lot easier to work with when the editing program supports it. Almost never have to worry about noise floor and you can do almost all processing on a core duo with ease.
@wavemechanic4280
@wavemechanic4280 11 месяцев назад
You using Pyramix for this? If not, then what?
@brucerain2106
@brucerain2106 5 месяцев назад
Could have just linked the fabfilter video from the start
@TWEAKER01
@TWEAKER01 11 месяцев назад
So, wrt sample rates (5:01), when a more gentle low pass filter to Nyquist of 48k is surely within the realms of DSP and CPU now, it begs the question: why do anti-aliasing filters remain at 24kHz for processing at 96k, or when over-sampled 2x-4x?
@fasti8993
@fasti8993 Год назад
Great video. In audio production, another beneficial effect of using higher sample rates, apart from getting rid of aliasing, is that doubling the sample rate cuts latency in half...
@gblargg
@gblargg Год назад
It's odd that so many people bring this up. It tells me that many systems are poorly designed and don't adjust the sample buffer size to the sample rate, e.g. they are a fixed number of samples rather than a fixed amount of time. Or people just don't know how to adjust the buffer size to reduce latency (at a cost of higher chance of dropouts).
@scarface44243213
@scarface44243213 3 месяца назад
Hey, what microphone are you using in this video? It's really nice
@crapmalls
@crapmalls Год назад
Higher sample rates reproduce higher frequencies. There is no more info in the audible range. The clue is in the file size: double the frequency, double the size. More bits means a lower noise floor, which most DACs can't reproduce out of the audio port. And yet it sometimes sounds better to me 🤷‍♂️
@paulhamacher773
@paulhamacher773 Год назад
did you test it in an ABX-setting? Otherwise your perception just might have fooled you! 😀 #beenthere
@crapmalls
@crapmalls Год назад
@@paulhamacher773 a lot of the time its difficult to find a higher res version of the same mastering
@mb2776
@mb2776 Год назад
@@crapmalls ...then just record your own stuff at different settings and let somebody else play it for you without telling you which is which. Also, use more than just a few examples. You will see, you got fooled. There isn't more info; you can't hear above 20kHz.
@crapmalls
@crapmalls Год назад
@@mb2776 yeah thats what i mean. I know theres literally no difference because the higher sample rate just goes into higher frequencies. The file size is the giveaway. Apparently it can help with timing in the dac but thats an oversampling issue and a dac issue IF the dac is even good enough for it to matter
@kristopapazo1229
@kristopapazo1229 Год назад
Dolby S was introduced in 1989.
@moskitoh2651
@moskitoh2651 Год назад
If your signal to noise ratio is below 96dB (including not only mic and preamp but also room), recording with 24 Bits only makes sense for the manufacturer. ;-) Unless you like to record 8 Bits of noise...
@AwinohLewis
@AwinohLewis Год назад
🙏👏
@baronofgreymatter14
@baronofgreymatter14 5 месяцев назад
So in purely playback scenarios, is it recommended to oversample above 44.1....for example my streamer allows me to oversample thru its USB output to my DAC. Does it make sense to oversample to 88.2 or higher in order to get the smoother roll off above nyquist?
@tillda2
@tillda2 5 месяцев назад
Question: Is the 20bit HDCD mastering any good for playback? Is it recognizable (compared to normal CD), given a good enough audio system? Thanks for answering.
@jamesgrant3343
@jamesgrant3343 Год назад
Bit depth matters, assuming you are going to change the dynamic range of what is in the file by a lot. Sample rate does not (assuming you only care about audible frequencies!). If you stretch a 20 kHz sine wave and make it a 19 kHz sine wave, the application doing this resampling is not taking the original samples and moving them; it is interpolating a position between the original samples and synthesising a new sample. It will be as good as the algorithm the software uses: the sample rate of the source (44.1/48/96 etc.) is irrelevant. If the software is good, it will do a good job; if the software is poor, it will do a poor job. Luckily for us in 2023, this is a very solved problem, and things like Reaper, which still has a resampling mode on export, default to very good implementations for sample rate conversion, whereas in the olden days, when the original Pentium processor was crazy expensive, it would take forever to export while resampling.

Any resampling algorithm used today does not simply draw a straight line between two samples and put a new dot the appropriate proportion along the line. The waveform represented by the original samples is effectively reconstructed, and the sample that is synthesised is placed on the reconstructed waveform, which is mathematically very precise relative to the original samples. This accuracy does not get better at high sample rates; these samples are temporal, not amplitudinal (i.e. the inaccuracy is in the bit depth, not jitter in when the sample was made, unless the ADC was bad, in which case it's bad at any rate!).

For those thinking about aliasing: again, the quality of the software you are using is far more important than the sample rate you select. For example, a good piece of software may put a low-resonance, brick-wall filter at about 21 kHz to filter away higher frequencies so they don't cause aliasing. If your software does this, and many do, there is a good chance that the developer has thought things through carefully. If you are depending on sample rate to minimise aliasing, then there is a good chance that your software of choice has problems in many areas!
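A rough numpy comparison of the two approaches the comment contrasts, naive straight-line interpolation versus band-limited (truncated windowed-sinc) interpolation, evaluated half a sample off the original grid (the tone frequency and kernel length are arbitrary illustration choices):

```python
import numpy as np

fs = 48000
n = np.arange(4096)
x = np.sin(2 * np.pi * 15000 * n / fs)            # a high tone makes the difference obvious

idx = n[64:-64]                                   # stay away from the edges
exact = np.sin(2 * np.pi * 15000 * (idx + 0.5) / fs)

# Naive: straight line between neighbouring samples.
linear = 0.5 * (x[idx] + x[idx + 1])

# Band-limited: truncated, windowed sinc kernel evaluated at the half-sample offset.
m = np.arange(-32, 33)
kernel = np.sinc(m - 0.5) * np.hanning(len(m))
sinc_interp = np.array([np.dot(x[i - 32:i + 33], kernel) for i in idx])

for name, y in (("linear", linear), ("windowed sinc", sinc_interp)):
    print(f"{name:14s} worst-case error: {20 * np.log10(np.max(np.abs(y - exact))):6.1f} dBFS")
# Linear interpolation visibly mangles the 15 kHz tone; the band-limited interpolator is
# far closer, which is the comment's point about how modern resamplers reconstruct the
# waveform rather than drawing straight lines between samples.
```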
@GLENNKEARNEY1
@GLENNKEARNEY1 Год назад
So what's your Opinion On Ableton Live 11
@kwcnasa
@kwcnasa Год назад
Can we talk about whether EBU R128 and LUFS should apply on the RU-vid platform?
@Paulkatz123
@Paulkatz123 Год назад
1:44 you are missing the compander part here.
@loui7196
@loui7196 Год назад
For audio 24 bit 88.2 kHz is optimal. For video 96 kHz.
@antoniofigueiredo3812
@antoniofigueiredo3812 10 месяцев назад
Naive question: listening to a band play live, don't the different instruments produce frequencies above the audible range of 20 kHz, which we cannot hear, but that interfere with similar frequencies from other instruments thus generating beat frequencies that we do hear? Not saying that happens, just asking. Now, if that would be the case, wouldn't we lose those beating sounds by recording instruments in separate tracks at let's say 44 kHz?