iZotope makes innovative products that inspire and enable people to be creative. With over a decade developing award-winning products and audio technologies for professionals and hobbyists alike, and used by millions of people in over 50 countries, iZotope products are a core component of GRAMMY-winning music studios, Oscar and Emmy-winning film and TV post production studios, and prominent radio studios, as well as basement and bedroom studios across the globe.
iZotope is part of Native Instruments, whose mission is to make music and audio creation a more joyful and inspiring experience for creators everywhere. Stemming from the union between NI, iZotope, Plugin Alliance and Brainworx, together we inspire and empower creators to express themselves and reimagine the future of sound.
WOW! I'm at the 18:30 mark and I wasn't even in the market for a "balance control" plugin, but Tonal Balance Control looks AMAZING. It offers so much, it's actually insane. Also, everything up to this point has been insanely informative and easily explained. I'm not sure what this guy gets paid, but he needs a raise immediately!
I followed one of your pages and was led here. Most people think that sample rate and bit depth fully capture the audible frequency range, but the truth is that the sample rate is only a spectrum container: the capture device also needs to be able to pick up the higher frequencies and pass them into that container, or you get no benefit. Most audio interfaces are rated 20 Hz–20 kHz and roll off above that, even if they advertise a 24-bit/192 kHz max sample rate. I miss my Roland UA-101 (no Win 11 driver).

Why would "professionals" want higher than human hearing? Usually to create ambient tracks and creature voices: record the highest frequency range you can, then stretch the audio out so it stays smooth instead of missing frequencies and sounding bit-crushed.

Another big factor many may not realize is that missing frequencies may be detrimental to your health; all physical matter is affected and shaped by sound (look up cymatics). Sound stimulates cells, whereas deprivation kills them, just like a puppy without its mother or father's heartbeat to hear: if there is none, it dies or develops neurological problems.
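The Nyquist point in the comment above can be made concrete. A minimal sketch, with hypothetical numbers (the 60 kHz figure and both function names are mine, not from the video), of the frequency ceiling a sample rate sets and where ultrasonic content lands after slowing a recording down:

```python
def nyquist_hz(sample_rate_hz):
    """Highest frequency a given sample rate can represent (the Nyquist limit)."""
    return sample_rate_hz / 2

def shifted_top_hz(top_hz, stretch_factor):
    """Where the top of a recording's captured band ends up after playing it
    back stretch_factor times slower (pitch drops by the same factor)."""
    return top_hz / stretch_factor

print(nyquist_hz(192_000))       # 96000.0, what a 192 kHz file can hold
print(shifted_top_hz(60_000, 4)) # 15000.0, 60 kHz content lands at 15 kHz
```

If the microphone and interface roll off at 20 kHz, the container above 20 kHz stays empty no matter the sample rate, which is exactly the commenter's point.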
Yeeah. 👎 What a dumbed-down version of what was a decent channel strip plugin. WTH are you guys thinking with all these NI "improvements"? You'd think the fanbase thumbs-downing Trash would have been enough.
I also wanted to add that I am a big Billie Eilish fan, and in my opinion, her latest album is mastered fantastically! It's great to now know who and what is behind it! And I am very grateful that this useful tutorial is also provided with subtitles, which makes it more accessible to an audience that does not speak English!
Thank you very much for introducing the great new feature! I was also struggling with mastering nature sounds, and it really is important to be able to equalize the whole recording without any loss, so you don't end up with a horse neighing loudly in one place while the rest of the recording, where birds chirp much more quietly, gets dulled because of that one neigh. And no thank you, I don't want to compress the rest. At some point I considered using automation, but I find this new feature quite good. It's even a bit addictive, and I think it shouldn't be used everywhere. But then again, why not!? 🙂
An incredible tool… weird how the streaming industry told DIYers to "master to -14 LUFS or get turned down anyway," while the last example in this video was measuring -6.99 LUFS.
"Select the one of your choice," i.e., you cannot account for all streaming platforms. And it's largely irrelevant, as it's utterly dependent on the playback normalization settings, which in turn depend on the platform, whether it's a free or paid account, playback settings, desktop or mobile app, and are of course subject to change at any time. And with Bluetooth plus Spotify you immediately get bonus *compound* lossy encoding! Just make it sound great for the music, lossless, on a detailed (flat) system. The listener always has the final say, as does the multitude of radio processing.
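For context on the -14 LUFS exchange above, here is a tiny sketch of what platform loudness normalization amounts to, assuming a simple downward-only target; real platforms differ per account, device, and settings, exactly as this comment points out, and the target value and function name are my own:

```python
def playback_gain_db(track_lufs, target_lufs=-14.0, allow_boost=False):
    """Gain (dB) a streaming service would apply so a track plays back at
    the target loudness. Downward-only by default: quieter tracks are left
    alone rather than limited upward."""
    gain = target_lufs - track_lufs
    if not allow_boost:
        gain = min(gain, 0.0)  # never turn up, only down
    return gain

# The -6.99 LUFS master from the video gets turned down about 7 dB,
# while a -20 LUFS master is left untouched:
print(round(playback_gain_db(-6.99), 2))
print(playback_gain_db(-20.0))
```

This is why mastering hotter than the target only costs dynamic range: the playback gain simply cancels the extra loudness.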
I recently got VEA and it does not want to install on my M1 MacBook Pro. I am running macOS Monterey 12.1. I would appreciate assistance in this matter, as there is no solution for this issue on the iZotope website.
I recently got Trash lite and it does not want to install on my M1 MacBook Pro. I am running macOS Monterey 12.1. I would appreciate assistance in this matter, as there is no solution for this issue on the iZotope website.
Sorry to hear you had problems installing. Please reach out to our support team directly here: bit.ly/3ScGEJa We'll get back to you by email as soon as we can.
I recently got Neutron 4 Elements and it does not want to install on my M1 MacBook Pro. I am running macOS Monterey 12.1. I would appreciate assistance in this matter, as there is no solution for this issue on the iZotope website.
@iZotopeOfficial What I mean is that almost all your AI enhancements are for music with vocals or for podcast recordings; your full demo video talks about these for almost its entire duration, and not much has really been done for instrumental music since the last updates. I can clean instrumental audio recordings with the same results in RX 6 and in RX 10, so I'm not sure RX 11 will provide a huge improvement in tools like Spectral De-noise, repair, or the attenuation tools. I was hoping for better handling of unwanted noises and for AI-based noise vs. music separation.
I may be wrong, but this seems partly wrong, @izotopeofficial: the -10 LU gate should be relative to the current LUFS reading. That's how it's explained in the BS.1770-3 paper, and that's also how the NUGEN VisLM meter works. Why is the beginning of the song in this video under the gate line, considering nothing louder happens before it?
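Since the relative gate is exactly the point of confusion here, a minimal sketch of BS.1770-3's two-stage gating, assuming the K-weighted loudness of each 400 ms block has already been measured upstream (function and variable names are mine, not iZotope's):

```python
import math

def _to_power(lufs):
    """Convert block loudness (LUFS) back to mean-square power."""
    return 10 ** ((lufs + 0.691) / 10)

def _to_lufs(power):
    """Convert mean-square power to loudness per BS.1770."""
    return -0.691 + 10 * math.log10(power)

def gated_loudness(block_lufs):
    """Integrated loudness per ITU-R BS.1770-3 two-stage gating.
    block_lufs: K-weighted loudness of each 400 ms block (75% overlap
    assumed handled upstream). Returns integrated loudness in LUFS."""
    # Stage 1: absolute gate at -70 LUFS drops silence and noise-floor blocks.
    stage1 = [l for l in block_lufs if l > -70.0]
    if not stage1:
        return float("-inf")
    ungated = _to_lufs(sum(_to_power(l) for l in stage1) / len(stage1))

    # Stage 2: relative gate 10 LU below the stage-1 (ungated) loudness.
    # The threshold moves with the programme; it is not a fixed line.
    threshold = ungated - 10.0
    stage2 = [l for l in stage1 if l > threshold]
    return _to_lufs(sum(_to_power(l) for l in stage2) / len(stage2))
```

Because the threshold tracks the loudness of everything measured so far, whether a quiet intro sits under the gate line depends on the whole programme, which may explain the meter behaviour questioned in this comment.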
For anyone who uses Maschine 2, you probably know the "convert to clip/s" function. Now imagine how much more we could do if there were a function to convert clips into patterns: "convert to pattern/s", which would then be automatically applied to the scenes beneath it. I wish so often that I had that..
I am so glad this showed up early enough for me to get, phew.. I just now mastered my song with it and at first it wasn't really going all that great but, yea, this is awesome! ♥
Thank you so much. I don't understand why all new songs have this flat waveform in the final stage instead of the waveform they had before. They sound overcompressed that way. They also sound kind of lo-fi, like there aren't many highs, but I think that's an EQ issue.
This is the sort of video people like me need to see more of. A comprehensive and clear communication of what the feature does and why it's important. Really excellent; thank you.
And all that really needs to happen is for streaming services to *stop re-processing and lossy encoding* music, aside from some volume normalising (downwards, not upwards with limiting) as a playback option.
The RX Advanced upgrade from 10 to 11 is way too expensive for the features. It would be worth the current upgrade price if they had included AI-based partial speech reconstruction/synthesis. Adobe is miles ahead here.
So pretty much, the example at the 12:00 mark is like having parallel reversed compression. You don't need to own RX 11 to do that, though of course you need to understand the plugin choice and the phase-related issues.
For me, I discovered it's more useful on speech than on music. Music needs its dynamic range increased before using this tool, but even then, with Ozone 11's upward compressor it's quite unnecessary, as it always leads to "Loudness measurement is already optimized".