The T-Pain effect on "Fly Me to the Moon" changes the premise of the song. It changes from romantic to an actual robot literally trying to travel to Mars.
@@dondamon4669 lol... is this even a quote by either of them? it really sounds like bullshit facebook LiVe LaUgH LoVe tripe falsely attributed to someone famous to get shares
@@michaelwerkov3438 no, it’s Kurt, who also said “what I have in my heart and soul must find a way out, that’s the reason for music, and to play without passion is inexcusable.” Kurt and Beethoven were brethren
For those of you wondering how I casually mention decapitation.....One of my principal voice teachers was pre-med in college before she turned to voice. During her pre-med stint, she observed cadavers as part of her study, and witnessed one of her professors demonstrate this as part of a larger context. She was the one who connected the card-on-bicycle-spokes sound through the headless body with the pharynx being responsible for timbre, color and vowel of the voice, back in the 1960s. This was before she had studied vocal pedagogy and voice science, which confirm this in less, er, graphic ways.
Thank you for explaining this! hahaha, it really was a surprise how suddenly we were talking about air being released through the throat-flaps of headless cadavers... ohhh too funny, you epiglott- me... epi-got-me, oh well i tried :/
Kinda whack that you of all people don't understand that splitting the stems creates terrible artifacts and is in no way ever going to result in a good mix when re-rendered after editing. When you do that, things like reverb and echo get lost or tuned too, and it's always gonna sound "off." This would only really work to prove your points (with which I admittedly agree) if you did it with official stems, or with a singer directly. This is NOT how to showcase the loss of the inexplicable artistry you keep harping on. You are already damaging the vocal when you force it into a stem with an algorithm. The second you touch any of those stems, the entire mix suffers. I'd love to see a reiteration of these key points with actual stems or raw recordings from a vocalist who's part of the video.
As soon as I heard "an app separating vocals from any stereo recording" I scrolled the comments just to find this comment lol Yeah you can't do that without damaging the audio.
@@jaxnitsua1200 I agree with him completely, I just think it's not the ideal way to make these points. You should try pitch-correcting a stripped stem sometime; the level of jank is at least 10 times what it otherwise would be with a pure raw recording, especially if you're doing it to vocals extracted from music that old. I just found it an odd premise for making the points, but it was certainly more entertaining this way, I'll give Adam that!
how did you get 6k likes on this comment on a video from a year ago? every other comment on this vid is like, 300-600 likes. and you only have 2 replies. fishy ass comment.
This reminded me of a classic moment from MASH, where Charles, a man of means and culture, explains to an injured pianist that he always wanted to play the piano, saying "I have hands that can make a scalpel sing, but above all I wanted to play. I can play the notes, but I cannot *make the music*." It's such a succinct way to express the imperfections or uniqueness of musical performance.
That's why Chopin composed so much on the black keys. His pianos sounded off in those keys (pre equal temperament). Nowadays classical musicians are very busy forgetting it was the case. ;-)
@@kohaponx well when you get to this level of ‘f***ing around’ it doesn’t matter if you call it Autotune, Melodyne, or even Tune Real-Time whatever lmao
@@Guinneissik It does matter, because it's a different technique and sound. Especially when making jokes about "sounding like T-Pain" when using a different tool and purpose.
With Sinatra, his breathiness actually gives a bassy thickness to his vocals, whereas the fixed one sounds anemic. It's insane how something so small can make a big difference. Wow.
@@leaveitorsinkit242 modifying anything about the voice will change how it's perceived, it's impossible for a program to simulate what the singer would have sung a couple cents higher or lower
I think that's an artifact of the AI stem separation software, not Melodyne. I would guess the lower frequencies of the human voice are more difficult to distinguish for the software, just as they are for us
Losing Aretha's pitch slide on the word "for" in "for a little respect" was the most jarring uncanny valley moment for me. It needs that slide to work!
I liked seeing the phrase "what you need" on the grid, you can see the way she slid up to the third 3 times and each iteration was slightly flatter; "need" was almost a semitone below "what" (at around 8:20)
I didn't even consciously notice a clear difference on these direct and short comparisons, but if listening to longer pieces, I'd definitely feel either enriched or dulled depending on whether it has life in it or is sterilized.
It would be cool to double some seconds in there, man. Especially at the ends of the phrases. Even using that harmonizer pedal, I found the harmony really interesting.
Last week I got really stoned and listened to "Whole Lotta Love" on repeat on my new vinyl record player set up... and came to the conclusion it was the greatest rock song ever recorded, so to hear someone is "fixing" it is like nails down the chalk board of my soul. 🤣
@@Youngapollo47 I think the fuss is all about the "tune" part, not whether you do it manually or algorithmically. At least to listeners who are not producers. I mean, most people consider production itself as "computers doing it", and not as actual hard work, talent and artistic decisions that are involved.
@@Medytacjusz i dont think that’s relevant to the original comment or my reply, right as you may be lol i was just pointing out that it’s funny he’s using melodyne but calls it auto tune
I'm a former music major, and I hear nothing different except pitch center, like he chose a slightly higher approach, which as a classical singer, is exactly what you want to do. I think this is more a recording mix video, where people who work sound boards are more conditioned to hear those differences in headphones. On stage I can hear really subtle things, but in a recording mix, those subtleties just aren't as present, they're mixed out, and so hearing the post mixed sound takes a certain kind of practice.
So "Ain't No Sunshine" is really noticeable, but that's a very stripped-down recording, so it makes sense. Pink Floyd as well, but I never liked that drifting Tom Petty vocal sound, so I'm ignoring it. With "Ain't No Sunshine", smoothing his pitch actually changes the rhythm of the song, because he uses the pitch inflection to set up the attack of each word. By smoothing it out, the music loses its 'swung' quality and it sounds more like he holds a note too long and has to play catch-up. That's when losing expression actually fundamentally changes the song in my mind.
Everything’s fine until those blues thirds start falling away; it’s kinda the thing that distinguishes vocals from other instruments. So the phrases sound similar in isolation, but if one listened to the whole track “perfected,” you’d never hear the vocals do “their thing” in the pitch landscape. Kinda like how the way the guitar is tuned shapes the sort of phrases we hear from it, like “this is a guitar melody, I can tell by the notes,” even when it comes out of a piano.
Exactly. If you think this "fixes" Led Zeppelin, then you don't know ANYTHING about rock music. Next, let's get rid of all of those pesky bends that Jimmy Page is playing.
This was so interesting to me - I especially noticed with the Aretha clip how much her subtle bending of the notes added emotion and “humanity” to her singing. Really fascinating, thanks!
quantizing larger tempo shifts also really throws the listener for a loop. I heard a version of SOAD's "Know" that had been corrected to a constant tempo across all sections for the purposes of being mashup-able, and it's absolutely bizarre-sounding to not have the choruses slow down
This reminds me of what I learned about Indian singing traditions. Often how you get to the note matters more than the note itself because the little bends along the way can convey a lot.
If you look at any instrument, being 100% in tune is not desirable. That’s what vibrato is. It’s a rhythmic variation in the pitch. When you 100% correct this it will remove all the emotion.
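To make the "vibrato is rhythmic variation in the pitch" point concrete, here is a small sketch that generates a pitch contour with sinusoidal vibrato around a base note. The 5.5 Hz rate and ±50-cent depth are typical ballpark values for a sung vibrato, not measurements from any recording in the video:

```python
import math

def vibrato_contour(base_hz, rate_hz=5.5, depth_cents=50, seconds=1.0, steps=100):
    """Pitch contour in Hz: base_hz modulated by sinusoidal vibrato."""
    contour = []
    for i in range(steps):
        t = i * seconds / steps
        # Instantaneous deviation in cents, oscillating at rate_hz
        dev_cents = depth_cents * math.sin(2 * math.pi * rate_hz * t)
        # Convert cents deviation back to a frequency (1200 cents per octave)
        contour.append(base_hz * 2 ** (dev_cents / 1200))
    return contour

contour = vibrato_contour(440.0)  # A4 with vibrato
```

Flattening this contour to a constant 440 Hz is exactly what "100% pitch correction" does: the note center is unchanged, but the motion around it disappears.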
@@TheMAU5SoundsLikThis What gets me is that the common Western equal temperament scale isn't 100% in tune itself, so "correcting" vocals to it makes even less sense. Sometimes the in-between notes count.
@@garydiamondguitarist Never mind how pianos are tuned. As you go toward either end of the keyboard, every note starts being "stretched" (this is called octave stretching).
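To put a number on the "equal temperament isn't 100% in tune" point: the equal-tempered major third comes out roughly 14 cents sharper than the pure 5:4 third that voices naturally gravitate toward. A quick sketch of that arithmetic, assuming only the standard 1200-cents-per-octave convention:

```python
import math

def cents(ratio):
    # Convert a frequency ratio to cents (1200 cents per octave)
    return 1200 * math.log2(ratio)

# Just-intonation major third (frequency ratio 5:4)
just_third = cents(5 / 4)   # about 386.3 cents

# Equal-tempered major third: 4 semitones of exactly 100 cents
et_third = 400.0

gap = et_third - just_third  # about 13.7 cents sharp
```

So snapping a sung third to the equal-tempered grid can actually pull it *away* from the acoustically pure interval.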
More Mama Neely!!! I run a recording studio in Western Kentucky and I had a young vocalist (~13-14) who was recording her first album. She would add these little diatonic embellishments at the front of phrases. Instead of a scoop into the phrase it was an actual pitch. The performance struck me as strange and it wasn’t until about tune 5 that I realized she was mimicking auto tune. It got really obvious on her more modern cover tunes. Great topic to go deep on.
Damn, I just tested that on myself (I'm Gen Z): I tried humming the melody of a song I know is definitely autotuned and then the Zeppelin song, and I can totally hear the effect the autotuned song had on my voice. Every note came in as on-tune as I could make it (which wasn't very on-tune, since I'm not much of a singer). It almost felt like I was trying to play my voice like it was a piano, while with the Zeppelin song it felt more like I was just singing normally... Really interesting effect.
I wonder if part of that piano feel was simply trying to match pitch. Try to match pitch with Plant and you may feel the same way. Don't "feel" it, just match his pitch. Curious how that feels to you.
Hey Adam, I wonder what would happen if you went the opposite direction: take a song that has already been pitch-corrected to death and use melodyne to bluesify it. It would be interesting to see if this makes the vocal more soulful or just renders it even more uncanny. Of course, this would be *way* more work, as you'd have to choose which notes to bend and how far instead of just jacking all the sliders to 11, but it would be super interesting to find out what happens.
I think an easier approach to finding out what a heavily pitch corrected song would sound like without pitch correction would be to see if the artist ever sang the song live without an autotune microphone and maybe had a few pitches off. You can also see if anyone did a cover of that song without pitch correction. Though, I think it would be hard to find heavily pitch corrected songs that would sound better without pitch correction because I would assume that music artists and producers kinda know what they are doing and they probably wouldn’t pitch correct a song where it really sounds worse with it. Unless the song is kinda bluesy or has a semi-conversational style of singing, it probably would sound either the same with pitch correction or better. I guess musical theater numbers and operas would probably sound awful with pitch correction too though. But, I think pop music uses pitch correction for a good reason.
Yeah, that’s not really possible, but I appreciate the sentiment. Like someone else said, all you can really do is try to find a live recording without pitch correction. Unfortunately, a lot of pop artists are forced to use it even in their live shows, so that might be difficult to find.
I was about to comment the same thing. I was wondering if the uncanniness is because the effect mangles the original tone/timbre of the voice. This experiment would reveal whether it is the corrected tuning or the distortion caused by the effect that is making the singer sound "off" or "robotic".
Pitch correction is a very useful tool when you have an epic performance filled with emotion and mojo but just missed those one or two notes that were off enough to mar an otherwise excellent track. A little snip snip fix in the mix and you save all the juicy goodness. Where it goes too far is snapping the entire vocal track to grid and losing all of the human element. Those scoops and slides are the secret sauce that separate an epic and memorable performance that touches our soul from a cold robotic one.
For me, it's those imperfections that create perfection. It's also why I usually prefer live performance. I'm not keen on everything being adjusted, artificially, towards some record industry 'standard' sound.
This is so true for most music but what if it's an integral part of the genre? I've heard a lot of heavily emotional performances where they purposefully use portamento with autotune to get that snappy "dodODODO" sound between notes and it's awesome in my opinion (and a lot of others). For example Fri3ndzone - streetview. It's really blowing up with the youngsters lately lol
@@abasement666 Using it deliberately as an effect tool can have good results. But those who use it like that are good singers in the first place; they don't use it as a crutch. As an example, there is someone here on YouTube who analyses Cher's 'Believe', which was one of the first commercial uses of Auto-Tune. And maybe to everybody's surprise, it isn't used throughout the whole piece but actually as an accenting tool.
It's kind of annoying that he put on that energy on I guess because he expects the audience to expect it? I'm not sure. He's obviously proud of his mother and he invited her on the show, so why not give her the respect and just treat her like a regular guest and not like she came to his school with his gym shorts because he forgot them at home.
Hahahhah ... I immediately thought of Lou Reed ... the chap wasn't a "singer" for sure, but he sang his lyrics as he felt on the day (and never the same in two performances). One of the best story-teller poets there ever was. Have a feeling he would have refused to use autotune ... he consistently eschewed "embellishments" of any form unless he felt the song needed them.
“If you were to cut off your head above the vocal cords and pass air through, it would sound like a baseball card in bicycle spokes”… uh… Thanks, Adam Neely’s Mom.
I was joking - last year Adam made a big issue over our focus on "classical music" (and related theory) sidelining other music traditions and being inappropriate to analyse such musics.
I found Adam's mom's blog after this, and really enjoyed reading it. She had several posts about working with post-menopausal vocalists and the voice-changes they can experience. As a post-menopausal vocalist myself, I found it very encouraging ;) Also read her description of her father (= Adam's grandfather!), a brilliant organist. So interesting to read examples of how music can be passed down and transformed within a family.
yeah, music is a family thing)) my grandpa played the accordion and the piano (without any formal education!), my grandma sang in a pop band, my mom is a singer and choir conductor, and I...um... I played some drums in high school ._.
@@doitnowvideosyeah5841 Everyone's grandma played piano at parties. There was no TV, radio, or recorded music back in the day. Just live music, or, if you had thirty grand, a player piano, the cutting-edge technology of the time.
One thing Adam isn't considering is that the notes actually ARE perfect. They just don't align to this perfectly spaced grid that we 'say' is perfect. This...degree of separation that someone said is 'perfection'. Humans are perfectly imperfect. And those human imperfections are what Neely would describe as "mojo".
All singers are worsened by pitch correction. It's not their voice, so by definition it's not a genuine performance. Might as well have an AI singer at that point.
@@LocrianDorian Nah, some people are so clearly out of tune (the more distorted the voice is, the more the pitch had to be corrected), that I'd rather listen to a robotic voice than their original mess.
As someone who got into music late and was born after 1985, I've found it took me a really long time to appreciate singers like Bill Withers, whom I found "pitchy". Then, when I really started listening to older music in general, I realized almost everyone was "pitchy" and pitch was an appropriate sacrifice to emotion and soul. Now, when I listen to modern music, I find it more devoid of character. I appreciate this video because it definitely affects how I'll correct my own voice and trust my ear more when something sounds right and isn't perfectly on the pitch grid. Thanks Adam!
I've noticed this with a lot of people who predominantly listen to contemporary pop. They perceive natural singing as pitchy. But of course perfection is rare in actual musical performance and it's also not aimed for by performers because it's boring and unnatural. Slight deviations from the perfect pitch are the norm with singers, violinists, cellists, etc. It's even used consciously for an emotional effect. The same thing is true for slight rhythmic variations. A lot of people have become used to music that has been dehumanized in many different ways.
A violinist friend of mine once asked me "What is the difference between a good musician and a great one?" I shrugged and he replied "The great musician knows when to get the notes wrong and by how much. The note has to serve the music not the other way around". I think that this is what your video is laying bare. Great work, by the way. Keep it up
That had to be said. I love Adam Neely, but if I had no idea about Melodyne and had never used it before, I would probably come away with a worse impression of the tool. It is made to preserve imperfections better than other pitch-correction software, and it's mostly about how you choose to use it.
I personally tend to just slightly automate the pitch for little corrections, but this is probably a lot more convenient and visually reassuring. Every way of changing pitch without changing the speed also has its artifacts when working with audio files, but that only seems to happen on more drastic pitch changes. I could see myself picking up this plugin for that as well, if it's got a unique and cool algorithm.
The drift on vocals is what really kills things, but usually vocals are things you can recut until you get it right. There are numerous technical problems (making a 3-saddle Telecaster stay in tune when doubling a piano melody) that used to require detuning a string to get around the intonation issues, then only playing part, doing another pass after retuning, that you can simply fix after the fact now. It's not "fix it in the mix," it's fixing something that is inherently flawed in the instrument without wasting time. Why is it that Adam didn't do something I know he is probably annoyed by on a regular basis — fix a trumpet that is slightly sharp on an otherwise great take that you didn't realize was off? You nudge it in a few seconds. Beats cutting and pasting the note from another bar. It's a powerful tool, and vocals are the worst example. You wouldn't do this to BB King, but Frank Zappa... that's where you see the usefulness. Blues singers have great control to begin with. A marimba and harp will fight each other — harps don't stay in tune. Melodyne makes that go away quickly with no artifacts. It's not the enemy. It's your best friend for technical issues that aren't laziness.
That’s a dumbass quote lmao. Style is just honed by realizing your mistakes overtime. Clearly that quote was the dude trying to sound smart and it fails miserably
So by your logic, polyphias whole style is mistakes they can’t help but make, but their whole style is very technically demanding and has almost NO mistakes. So please. Make it make sense for me.
@@NBrixH I’m not saying polyphia sounds good. I don’t like them either, but to say their technical prowess are mistakes is insulting. They are more technically skilled than you or I more than likely ever will. They deserve respect due to the work they put in. I can acknowledge how outrageously hard something is, and acknowledging the talent needed to play it, whilst still thinking it doesn’t sound great. But some people do think it sounds great, so to say it sounds terrible is just your opinion, and you’re stating it as though it’s fact.
I dunno if this sounds crazy, but it's like the soul is in the moments the voice changes from one note to the other. The storytelling is in HOW the voice goes from one note to the other. That's what colors the emotion into it. Just like not all human emotion will ever be pitch-perfect, autotune will never be able to place true emotion into the song. Awesome video, I was engaged from the second second on, thank you for making this.
this kinda reminds me of when people take old animation and "improve" it by using software to interpolate it to 60 fps, destroying all the nuance and detail of the animation
@@lolwutizit The original animation artists were constrained by the medium and lack of technology of the time. They implied motion. Computers, when trained correctly, can see that implied motion and fill in the detail. I won't argue that it's not a computer's "opinion", it certainly is. I will argue if you say it's objectively a bad thing. It's quite subjective and IMO it's a good thing for many people, myself included. I like it, and I like how the implied motion is automatically smoothed. "I feel like it adds detail and makes the nuance easier to appreciate." is my original statement and I stand by it. The interpolation is quite sophisticated, taking implied detail and realizing it. I appreciate that and believe it accentuates nuance.
As a ballet dancer, this is very similar to what a teacher bestowed upon me...as an artist, we establish that we know the rules and express through breaking them. That is the definition of art.
This is so wild. I was just listening to IV last night and noticed that in "Battle of Evermore" there's a moment where all the harmonies come together in unison and they are not at all on the same pitch...and that makes it infinitely better.
Side tangent: T-Pain went into a big depressive episode after Usher told him that he "ruined music". After some time away, T-Pain went on to win The Masked Singer, to prove to himself and the world that he can sing without the "crutch" of autotune.
Yeah we all saw that Netflix doc. And even if we didn't its pretty well known at this point. That along with the tiny desk concert is why everyone loves T-Pain now.
The most important part is what he has to say though, and what he has to say is always interesting and enlightening. The editing is merely at the service of the discourse.
How many times have you had to use a mobile rig to record a sax player you get for one day to overdub 10+ songs? Not every 'bap' is going to be perfectly in tune, and you won't catch every out-of-tune one because you're blazing through. Human beings get tired. 'Imperfections' are also 'mistakes' sometimes. Melodyne is subtle enough to fix pitch drift, or going slightly sharp on the occasional note, without artifacts, which was ignored in this video. You cannot catch every error, so Melodyne is a lifesaver when you get a great performance that has a few tiny flaws that could be fixed other ways, but far more easily with Melodyne. Imperfections that make you wince are not the ones you want to keep. If you suck at using Melodyne, then you make a sax into a tone generator — and that's user error. It requires skill as an engineer to use it properly.
@@CHHuey There is a level to this debate about what constitutes "proper" use. In most cases I would actually prefer to hear the wince. Ever since I started buying records (because I am old enough to have had to save up for music on vinyl) I have preferred live albums over studio ones. There is a magic that happens when a live band is absolutely on it and hearing a slightly bum note or a glitch is what proves that it is live and not re-engineered. So that leads in to the issue of who decides what to leave in and what to take out. There is an artistic discretion being exercised between the musician and the engineer/producer but that inevitably creates tension. The businessmen want something that fits between the lines because there is a risk that the audience, who are now so used to perfection that they expect nothing less, will be turned off by something that has a few warts so they require what I would consider to be excessive correction. But the warts could be deliberate. Frank Sinatra could clearly sing in tune but chose to sing flat for the effect. Unless the person applying the "corrections" sits with the artist and goes through every change they want to make to see if the musician meant to create the wart then there are going to be places where the software is obliterating a deliberate part of the performance. An engineer should never try to be a better guitarist than the guitarist. Or singer. Or drummer. Or just about any other instrument. It may be that you and I are in agreement on this but I suspect that for some people "proper" means polishing out all the imperfections and with them all of the character and for me that taints the whole process.
@@Birkguitars Frank Sinatra wasn't singing flat. He knew where the note was. It just wasn't a piano note. Blues and jazz have a long tradition of using microtonal methods of singing. Melodyne isn't built with that mindset, but it can help if the occasional problem sneaks in and you just didn't catch it. You can say you like the mistakes at gigs, but in all honesty, you don't hear the majority of them because the acoustics usually aren't good enough, which is why bands will still 'doctor' live albums, because the bassist couldn't hear himself in the monitors and hit the wrong note, which no one in the audience noticed. There is no magic when you have the same person recording 3 different saxophones while wearing headphones, playing to a pre-recorded track with 2 people staring at him for hours on end. That psychological barrier is difficult for people. Not everyone is comfortable, and it affects the performance. You're equivocating between the 'acceptable mistakes' and the kind of mistake that makes you wince and notice it. Anything acceptable doesn't need to be fixed. Your statements about engineers make me think you've never had to work with one. The software doesn't 'obliterate' anything. I have had to fix notes on a pedal harp, because a pedal harp is an instrument that goes out of tune in a very unpleasant way that no harpist likes. It's a problem with the instrument. Those discs make the strings go out of tune — design flaw. The engineer is working for the producer, who is making those decisions with the artist. That is the job of the producer: to listen to those mistakes and decide which ones to keep in and which ones to let go. I never said fix every problem, but the ones that make you wince are the ones you fix, because you just didn't catch them in time. You've totally restated my argument into something that fits your world view instead of analyzing it for its own merit.
When you get someone who is composer, producer, engineer and performer, and in general in charge, you get to see the big picture. I have. There are mistakes you keep in and those you throw out. I never said you purge it of humanity, but there is a threshold of mistakes that you can tolerate, and those you can't. Do you really think that something like Frank Zappa's 'Waka Jawaka' or 'Hot Rats' is better with the wrong mistakes left in? Because they weren't. He edited them out, but the guy edited so well, composed, produced and engineered, that he knew what was supposed to be there and what wasn't. That's producing. I do not think we agree, because I'm not sure you get that recording a live concert and a multi-track recording represent different things, and if you're doing it on a budget with limited time, this is not 'demeaning the music' or anything like that — it's bringing out the intent of what the person playing meant to play. It removes a barrier that lets you focus on performance and not on all the weird technical problems. Most saxophone players don't WANT to hit the wrong note. That they do is not proof of 'humanity'. You can easily correct the pitch while leaving the intonation as it should be — Adam severely underutilized that aspect of Melodyne, which is one of its best features. The person who decides to leave it in or take it out is the producer, musical director, or the band if they're working with an engineer who really sticks to that strictly. If you want to go with 'live album', fair enough, but once you abandon that, you have to go with 'this isn't reality'. That's where Eddie Kramer, George Martin, guys like that come in. There was a Melodyne before Melodyne — it was called session musicians who wouldn't screw up, and producers like Tom Wilson would often insist on it if the band wasn't cutting it. The person in charge of the production that the musicians hire makes the decision. That's why they exist.
Maybe it's one band member, maybe it's someone running a group like a jazz group instead of the Beatles, but at the end of the day there's no tyranny of Melodyne trying to purge humanity of its expression, just an ability to fix things that in the past would've taken forever, since an analog synth goes out of tune the longer you leave it on. Those PITA problems go away and the music sounds better. But of course a band like Black Flag isn't served by that the way that Frank Zappa was. But that's because someone MADE the decision, and that was Greg Ginn in Black Flag and Frank Zappa, the composers. The only reason I care to put all of this in writing is that someone who could do something extraordinary might not, because they got peer-pressured into 'auto-tune is bad'. I'm tired of that crap, just as much as I'm tired of bands that don't do pre-production demos and practice while they record. Studio time is for studio stuff, and this is a blessing for fixing stuff when you're limited by time and other resources. That is ignoring the creative uses of it, too. Laziness is laziness, and this video was only about the laziness.
@@CHHuey I have actually seen a documentary on Sinatra that mentioned singing flat but I can't track it down at the moment so unfortunately I can't quote authority but I will hold to it. You mention microtonal methods but I think that is actually what I am talking about - being a few cents under true pitch rather than being randomly out of key. That aside I don't actually disagree with any of your other observations but I think I have a different reference. Years ago I saw an interview with Pete Waterman who said explicitly that he did not want any of "his" acts (note the possessiveness of the attitude) to sing live because he and his team spent so long in the studio making sure the vocal performance sounded as good as they could make it anything done live would be second best. In HIS hands Melodyne would be a purely business tool not a musical one. I am old enough to remember newspaper headlines about how the Bay City Rollers didn't play their own instruments so I am aware of the rent a band concept with session musicians. I went through University to the soundtrack of Relax by Frankie Goes to Hollywood on which Norman Watt-Roy came up with and played the iconic bass line, not the band member. And I have heard Glen Fricker screaming about bands that turn up to a studio without having done the practice to know how difficult it can be to tease out a recordable performance. Ultimately the choice of which points to change is an aesthetic one and that is where the differences in preference come in. A while back I was in a band doing a cover version of Rocking in the Free World and in researching options to create our own interpretation I found a version that Pearl Jam did live. The audio is straight off the video so there is no editing or correcting. One of the guitars is clearly out of tune by a significant margin but I love it as a performance because it captures so much energy. I have no doubt that others would think it unmusical garbage. 
I concede that this is a personal preference, hence my understanding that Melodyne can have a place. My concern lies in the difference between what you describe, a tweak to correct something that is not on the button because of practical limitations, and the ability to take a voice that honks like a drunken goose and put it in tune and on time. I want to hear an authentic representation of what the performer is capable of and wants to project, but with Melodyne I cannot be sure that this is what I am getting, particularly when someone like Pete Waterman would be using it to create an electronic version of Milli Vanilli, hence my deep distrust of the outcomes.
You can watch the bonus video and the un-edited original cut over on Nebula, if you'd like. (curiositystream.com/adamneely) Also, yes, I use the term "Autotune" to mean many different things in this video, including simply "pitch-correction" and not the piece of software specifically. I used Melodyne in the video mainly since I don't own Antares Autotune, but they can do very similar things.
Great work fixing all those out-of-tune amateurs - but you missed a trick. I noticed that they are all off the beat. I mean really, it's like they just sing whatever they want whenever they want. Let's lock it down and get them strictly in time for the next video.
It's no surprise that hours and hours of work honing the skill of playing a piano (an instrument that is pitch-quantized, incidentally) can produce far more sophisticated and musical results than simply 'banging on the keys'. Why should Melodyne (an instrument in its own right, incidentally) be any different?
I was always taught that when singing thirds in a choral setting, you aim high or low depending on whether you're ascending or descending, or depending on the chord and other harmonies. I understood it to mean voices aren't limited by the tuning system that instruments are beholden to. AFAIK this is very important in barbershop too.
@@mikesmovingimages I believe you. But vocal and especially choral music is closer to my heart, and no one ever told me to change the colour of notes. It was always just about "right" intonation.
@@mikesmovingimages though the popularity of the technique of bending notes on guitar has been a more recent thing, you're technically right that fretless instruments such as violin and related instruments have been able to bend and/or play pitches outside of 12edo for a long time. Or if you meant "bending notes" as in using any tuning system other than 12edo, that's certainly true. 12edo has only had widespread adoption within the past couple hundred years, and before that, all sorts of other tunings have been used throughout history and in all different cultures.
@@bragtime1052 in addition to bending notes for melodic interest, the idea of being in tune is a subjective concept. In reality there are not 12 tones in a harmonic scale but 17; each pair of enharmonic notes is comprised of two distinct pitches. Eb and D# are not the same, for example, and which one to use depends on the context. In equal temperament, the ratio of a third or sixth to the fundamental tone is an irrational number. Orchestras and choruses never needed equal temperament, and as Adam's video demonstrates, "fixing" art by forcing it to a grid is a dehumanizing exercise. There is much beauty beyond the cold algorithms.
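A quick sketch of the gap this comment is pointing at (just a back-of-the-envelope calculation, not anything specific to Melodyne): converting frequency ratios to cents shows how far the pure (just-intonation) thirds sit from their 12-tone equal temperament grid positions.

```python
import math

def cents(ratio):
    """Convert a frequency ratio to cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

# Just-intonation thirds (simple whole-number ratios) vs. their
# 12-tone equal temperament counterparts (irrational ratios, powers of 2^(1/12)).
just_major_third = 5 / 4          # ~386.3 cents
et_major_third = 2 ** (4 / 12)    # exactly 400 cents
just_minor_third = 6 / 5          # ~315.6 cents
et_minor_third = 2 ** (3 / 12)    # exactly 300 cents

print(f"major third: just {cents(just_major_third):.1f}c vs ET {cents(et_major_third):.1f}c")
print(f"minor third: just {cents(just_minor_third):.1f}c vs ET {cents(et_minor_third):.1f}c")
```

The pure major third lands about 14 cents flat of the equal-tempered one, and the pure minor third about 16 cents sharp, which is roughly the kind of "out of tune" a choir or barbershop quartet deliberately sings, and exactly the kind of deviation a hard pitch grid flattens away.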
Another part of this is that pitch correction introduces audio artifacts. The software is altering the audio samples to produce a different pitch, which is in itself an imperfect process. The more the software alters the audio, the more artifacting there is. This further exacerbates the unnatural feeling of heavily-tuned vocals. Now, to your point, this can be desirable for an aesthetic purpose, but if you want to actually tune a vocal and still sound "natural", you have to apply it VERY delicately. In many cases, audio engineers are not strictly locking the audio to the desired pitch (as was done in this video), and are instead trying to get it "close enough" that it is read as "in-tune" (according to Western tuning systems) without introducing too much artifacting. The main goal is to eliminate distractions, fixing where a note is sour, rather than simply being bent for creative effect. It's a delicate dance to pitch correct a vocal without "ruining" it.
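To illustrate why altering pitch is inherently imperfect, here is a minimal toy sketch (nothing to do with how Melodyne actually works internally): the naive way to raise pitch is to resample, i.e. read the signal faster, but that shortens the audio by the same factor. Real pitch correctors therefore have to time-stretch as well, and that extra processing is where the artifacts creep in.

```python
import math

SR = 8000  # sample rate in Hz (arbitrary choice for the demo)

def sine(freq, seconds, sr=SR):
    """Generate a plain sine tone as a list of samples."""
    n = int(seconds * sr)
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def resample(samples, factor):
    """Naive pitch shift: read the signal 'factor' times faster.
    Pitch rises by 'factor', but duration shrinks by the same amount,
    which is why real pitch correction must also time-stretch."""
    n = int(len(samples) / factor)
    out = []
    for i in range(n):
        pos = i * factor
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)  # linear interpolation
    return out

def est_freq(samples, sr=SR):
    """Rough frequency estimate from positive-going zero crossings."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings / (len(samples) / sr)

tone = sine(440, 1.0)
shifted = resample(tone, 2 ** (1 / 12))  # raise by one semitone
print(round(est_freq(tone)), round(est_freq(shifted)))  # roughly 440 and 466
print(len(tone), len(shifted))  # the shifted version is shorter
```

Even this toy version already needs interpolation between samples, i.e. it is inventing values that were never recorded; the more aggressively you move the pitch, the more invented signal you are listening to.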
Also hot take: the best mainstream rock, blues, and RNB singers of the 70s are not representative of all the singers from the 70s. It kind of goes without saying that there are a lot of people who would have killed for this technology back in the day and these post hoc arguments are a product of validating nostalgia. Hell, as a producer, I work with plenty of people who I don’t think need any pitch tuning.
@@BradsGonnaPlay also, another hot take: moses is just not good enough for this. You need perfectly isolated vocals, before any room acoustics are applied, for this to work at all. Software-separated vocals with natural room reverb just ain't it.
A prime example of why I have loved your channel for years. This and the Coltrane Fractal are 2 of my all-time favorite musicology videos on here. Also, your mom is awesome!
I think after being hammered with perfect pop vocals for years now, I actually crave something a bit pitchy. The vulnerability of singing, the cracks and timbre of the voice, all of that stuff adds colour and richness to a song. We're here for the stuff that falls outside the lines!
lol it wasn't subtle; it sounds subtle, but it was too aggressive. You shouldn't push the settings all the way up, and once you tune, you need to fix the vibrato and reconnect the notes through the vibrato, which he won't do for a video. The vibrato is what makes a voice sound natural and alive, and if you don't connect the notes it sounds unnatural in most cases. Pop singers make it more subtle most of the time.
Yeah, you have to trim note beginnings and endings. Leave the vibratos and ornaments pretty much untouched. Just tune it a little bit and it will sound better. Also, 90% of the time leave pitch drift on 0%. Especially with such good performances. Melodyne is just a tool, it's for the engineer to know how to properly use it.
Well yeah, I don’t think Adam was trying to see if Melodyne would make it sound better. The point is that the imperfection of the pitch IS the point.
Without elaborating, it reminds me of Photoshop on models. It's done so well that it's beginning to look natural, and anything that doesn't quite match that ideal is seen as less attractive. It's setting a standard that we have artificially created for ourselves, and while I doubt it will be as damaging as Photoshop, it'll have an impact for sure.
Kinda, but no, like you said. In both cases the tech is being used to take a product that isn't quite what they want it to be and get it there. But physical appearance isn't in the same realm culturally/psychologically as music. People tend to reject music that feels artificial, even with no prior reference. People would rather have a naturally good singer, but commercial music prioritizes image (looks) over talent, and thus we get an inferior-sounding product. Interesting how the looks part is connected to this. Flawless personal appearance is something we seem to desire or aspire to, and not because of Photoshop. Makeup and other enhancements of personal appearance have been a staple of all cultures since before written history (interestingly, this holds across the different cultures' varying ideals of what is attractive). So in the Photoshop case, the market is supplying a demand (though I agree it becomes circular, because that ideal then in turn influences the cultural ideals).
13:11 this, for me, is the difference between Melodyne and natural. The "corrected" Bill Withers sounds good, but when you played the uncorrected version I immediately got goosebumps.
What Melodyne does is make a microtonal voice instrument "pitch perfect", but music is not about perfect pitch (that is all about production value and conformity). The microtones are there for a reason. It's like removing the bends from a guitar solo.
When Adam's mom started talking about what the vocal cords sound like without a head (at 10:45), I definitely thought "Some dude donated his body to science, and some other dude cut off his head and played with his vocal cords just to see what it sounded like."
I think being "off pitch" can sometimes sound better or more natural to us probably because it's closer to just intonation than equal temperament. Maybe?
got nothing to do with intonation or music theory, and everything to do with our organs and ears + what we are used to. it's just "how it is" and theorizing around it won't change that :P of course there's subjective taste as well
I think it sounds more natural because it just…IS more natural. Most people (and even a significant amount of instruments) don’t have absolutely perfect pitch and hearing something tuned so perfectly probably rings as unnatural not just because we have context for the original versions, but because we don’t usually hear natural examples of perfect tuning in that same vein.
@@paniccleo it depends on the instance. I don't agree with the person above who said that it definitively has nothing to do with intonation, but I do think it depends on the instance. I am sure there are times where this is the case. There are plenty of songs I know of that are intentionally out of tune and sound better that way than they would *in tune*.
There’s a certain inertia to the sounds of blues; like a heavy rock sliding on ice. You push it to change its direction but it wants to keep going straight. I think that sound gives a solo or melody much more authority over the other parts of the song.
This is like the reverse of listening to T-Pain sing live without vocal effects. It turns out he's a really excellent singer and only uses auto-tuning because he likes the effect
In contrast to a lot of reggaetón singers who autotune the heck out of their music, and then when you hear them sing live they can't even carry a tune. Every time I go to a live performance at a discoteca, I can't believe that these guys are so famous for singing. 😂 I can't be too judgmental though because I would need the autotune myself soooo much.
It might make it mechanically and acoustically more perfect. But it loses the human aspect, the "soul" of the music. Those little imperfections and variations.
@@joewalker3166 I listen to Bob and I don't like his style. Anyway, his voice adds atmosphere to his songs, which are unique and original and correspond well with the lyrics etc. And yes, he is well tuned. It's just about taste.
For me, Dylan had one of the best voices up to '78/'79; after that it got harder for me, but before that his voice was out of this world. I don't like proper singers anyway, I can't even stand Mercury.
@@joewalker3166 um, you are aware he was drunk as hell on stage for like 90% of the shows he performed, right? I feel you are cherry-picking with rose-tinted nostalgia, friend. Do I love the songs? Yes, but I can be hard on them with that love.
The crazy thing is that none of them really sound bad per se; there’s some artifacting but that’s to be expected; they just sound off. They just sound a little less human, a little less expressive. As Alejandro Jodorowsky once said, “Too much perfection is a mistake.”
Imagine autotuning some even more out of tune masterpieces, like John Wetton in King Crimson (always sliding into and out of notes, usually fighting not to be flat, love him nevertheless), or Bob Dylan singing anything off of Blonde On Blonde. Goodness sakes that would be terrifying.
"I love Bill Withers' voice so much. It's so pure but there's so much soul and so much feeling when he sings AND WE'RE 'BOUT READY TO CORRECT ALL OF THAT"
I'm practically tone deaf... I couldn't really hear a difference in Robert Plant's or Frank Sinatra's. But Aretha Franklin, David Gilmour and Bill Withers were ruined for me.
Maybe because I've been a full time recording engineer for 30 years or so now, AutoTune/Melodyne/etc. always gives me the heebie-jeebies. It just removes all the emotional payload for me. I find it as engaging as being serenaded by a Speak and Spell would be. Uncanny valley indeed.
Ever recorded a pedal harp? I have. I love Melodyne. The instrument itself will not stay in tune by design. Using it on vocals like shown is just laziness. You may have 30 years, but you haven't had Melodyne for 30 years so I'd really suggest pulling up any instrument that is prone to intonation problems, or analog synths that drift in tuning as they heat up, and seeing how useful it is and how it doesn't remove 'soul' but 'technical error' so you can get a better performance without doing all the workarounds you likely know about. I don't tempo map, don't fix bad vocals, but a jazz chord on a 3 saddle uncompensated Tele around the 8th fret? A nudge fixes that problem. You're missing out.
13:48 Today I learned something new (besides the autotune of vintage artists and the pitch correction): "change is not always good, change is not always bad, change is change; understanding change is the important thing." Great quote. Thanks Adam.
Except that he's not using Melodyne correctly. His edits are producing something that's far from perfection, so we're really not getting a fair comparison.
I don't think it enhances it in any way. I feel like it ruins it. The imperfections are the beauty of pre-digital music. John Bonham from this same band always played a little behind the beat which makes Led Zeppelin who they are. To fix his drum beats to a precise bpm would ruin it even worse than Melodyne ruins Plant's vocals.
9:10 When I heard the melodyne version I was like "yeah it still sounds decent" and then you played the original and I instantly got goosebumps. I could see her singing in my head as the original played, it instantly felt so vivid. It's crazy how you can get such a forceful emotional response from the original simply by hearing the doctored version first.
Lol You guys aren't taking into account nostalgia. I'm sure you guys would still get chills if the original vocals had pitch correction and it was the only version you knew.