
Nutrition Studies Are Just Terrible 

Healthcare Triage
447K subscribers
88K views

Nutrition studies are really, really bad a lot of the time. Sometimes researchers don't do careful work, and the systems in place don't always prevent weak research from being published. Add that to the fact that media outlets tend to overblow stories about food and cancer, and it's a recipe for research disaster.
Follow Tamar on Twitter @TamarHaspel
/ tamarhaspel
Related HCT episodes:
1. Low Fat, Low Carb, Whatever! • Sorry, but Low-Carb an...
Be sure to check out our podcast!
• Podcast
Other Healthcare Triage Links:
1. Support the channel on Patreon: vid.io/xqXr
2. Check out our Facebook page: goo.gl/LnOq5z
3. We still have merchandise available at www.hctmerch.com
4. Aaron's book "The Bad Food Bible: How and Why to Eat Sinfully" is available wherever books are sold, such as Amazon: amzn.to/2hGvhKw
Credits:
John Green -- Executive Producer
Stan Muller -- Director, Producer
Aaron Carroll -- Writer
Mark Olsen - Art Director
Meredith Danko - Social Media
images/video
Videoblocks/dapoopta
Videoblocks/kamilaC_
Videoblocks/VladyslavStarozhylov
iStock/MichaelJung
Videoblocks/HDVMaster
Videoblocks/Artfamily
Videoblocks/JamieBird
iStock/noipornpan
#healthcare #healthcaretriage #nutrition

Published: 5 Aug 2024

Comments: 228
@dosadoodle
@dosadoodle 5 лет назад
There's a statistically significant relationship between watching Healthcare Triage and recognizing BS research. It's also clinically significant.
@mitchumsport
@mitchumsport 5 лет назад
I find a significant association between watching health-related youtube channels and farting. It's science. You can't dispute it.
@NattosoupStudio
@NattosoupStudio 5 лет назад
If you enjoy the BS finding aspect, you may want to listen to Quackcast or Persiflagers Infectious Disease podcast.
@jfgreen1959
@jfgreen1959 4 года назад
Keep on Truckin' Nattosoup , I can’t keep on trucking, Pennsylvania keeps putting on weather restrictions every time there’s a threat of precipitation.
@Patrickpisawesome
@Patrickpisawesome 5 лет назад
I think a huge problem with not only nutrition science but most health-related science is that negative findings are rarely published. It forces scientists to paint with broader brushes in order to publish only positive results.
@ajwajw8152
@ajwajw8152 5 лет назад
Patrick yes, absolutely true! We get away with this in healthcare research by running the same study repeatedly until we find significant results, while the repeated null results go unpublished! It abuses the assumptions of the statistical models we use to support our findings!
@DampeS8N
@DampeS8N 5 лет назад
I popped in here thinking "What got P-hacked today?" They are nothing if not consistent.
@vineetbhagwat4256
@vineetbhagwat4256 5 лет назад
Pre-registration should force you to publish all the results you find for the outcomes you register
@thishandleistaken1011
@thishandleistaken1011 5 лет назад
It's not *really* statistically significant. They're tracking so many variables, that some are bound to randomly form an association.
@unappropadope
@unappropadope 5 лет назад
I just rewatched veritasium’s great video “is most published research wrong?” that expands on this. It’s great to have the regular contribution of statistically critical voices in my feed; it helps me ask the right questions to make actually informed decisions. I love these videos!
@rogerhinman5427
@rogerhinman5427 5 лет назад
Veritasium's channel is awesome.
@nixasteria
@nixasteria 4 года назад
I watched a video where a college professor talked about how most studies supporting taking supplements were observational studies. He went on to say that most of those observational studies couldn't pinpoint supplements as the cause of better health, since healthier and more health-conscious people were more likely to take them. He himself had been part of a randomized controlled study that couldn't find the same results. All the comments on the video were saying the guy was dumb because there were so many studies saying supplements were beneficial. I was surprised by the lack of understanding people had about studies and their accuracy.
@myjciskate4
@myjciskate4 5 лет назад
Nutrition is literally the most irritating field to study. Like literally, the only thing we have widespread consensus on is to eat lots of whole foods and limit processed foods.
@Khn90
@Khn90 5 лет назад
In my opinion, it's a tie between nutrition and mental health.
@Jason608
@Jason608 5 лет назад
Yeah, I don't know how many more of these I can... stomach!
@Croz89
@Croz89 5 лет назад
Even that consensus is hard because there's no single concrete definition on what "whole foods" or "processed foods" are. Is bread a processed food? How much processing is required to turn a "whole food" into a "processed food"? It's like obscene material, it's "I know it when I see it" scenario, and that's not good for scientific study.
@connorlynch3474
@connorlynch3474 5 лет назад
"Eat food, not too much, mostly plants" is Michael Pollan's summary, where food is in contrast to "food-like products", probably all we can definitively say at this point.
@GalenMatson
@GalenMatson 5 лет назад
Actually a pretty spot-on summary.
@AustinChiangMD
@AustinChiangMD 5 лет назад
So many confounders to any nutrition study that it’s virtually impossible to have solid study design IMO
@unappropadope
@unappropadope 5 лет назад
Austin Chiang MD MPH It certainly isn’t cheap
@vishalb84
@vishalb84 5 лет назад
Imagine doing that in the US with the water quality of Flint; now you have to adjust for that too.
@tophers3756
@tophers3756 5 лет назад
Unless you want to lock people inside a facility for a few years and monitor everything they eat. Even then things won't be accounted for.
@1DangerMouse1
@1DangerMouse1 3 года назад
Could this apply to public health studies too?
@MargaritaMagdalena
@MargaritaMagdalena 3 года назад
Why do you need nutrition studies? Eating healthy isn't that hard.
@Pirsqed
@Pirsqed 5 лет назад
A note just about the credit/end bit of the video. The sound levels have been off for a few videos now. The music is too loud and the voice too low. Otherwise, another great video. (Depressing as it is.)
@karm65
@karm65 5 лет назад
happens a lot in youtube videos. Apparently, sound equalization is not a thing for them.
@hata1499
@hata1499 5 лет назад
Ahh, weight and height. My favourite disease under the sun :)
@Travisharger
@Travisharger 5 лет назад
Thanks man, keep up the good work.
@jimmyshrimbe9361
@jimmyshrimbe9361 5 лет назад
This is absolutely awesome! Thank you!
@jcarey1983
@jcarey1983 5 лет назад
Just started the bad food Bible on audible yesterday. thank you so much, sir!!
@SocietyNeedsImprovement
@SocietyNeedsImprovement 5 лет назад
You should make a video on how to read/break down a study. I struggle to follow your explanation of a study and I'm sure plenty of other people are too. Teach a man to fish...
@Loathomar
@Loathomar 5 лет назад
One of the things I wish was covered here is what the data means and why collecting data the way this study did is SO VERY BAD. A "statistically significant" finding usually means there is at most a 1 in 20 chance of seeing data like this by random chance alone. So if you study only sugary drinks and cancer and your results show a link, that link is reasonably likely to be real. But here we have 3,300 foods compared against at least 20 possible outcomes, which gives 66,000 comparisons! Since "statistically significant" allows a 1-in-20 false alarm rate, we should expect around 3,300 statistically significant results from this study by pure random chance. This is expressed as the p-value: 1 in 20 is a p-value of 0.05, which is a reasonable standard. But even if this study, with its 66,000 comparisons, used a threshold of 1 in 1,000 (p = 0.001), which is extremely strict, it would still produce about 66 "extremely significant" results by chance alone.
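A minimal Python sketch of the arithmetic in the comment above. The 66,000 figure is the commenter's rough estimate of the number of comparisons, not a number taken from the paper itself:

```python
# How many false positives should we expect from a large number of comparisons
# if NO real associations exist? Expected count = comparisons * alpha.
# The 66,000 comparisons figure is the commenter's estimate, used for illustration.

def expected_false_positives(n_comparisons: int, alpha: float) -> float:
    """Expected number of 'significant' results under the null hypothesis."""
    return n_comparisons * alpha

for alpha in (0.05, 0.001):
    fp = expected_false_positives(66_000, alpha)
    print(f"alpha = {alpha}: expect ~{fp:.0f} spurious 'significant' findings")

# alpha = 0.05  -> ~3300 spurious findings
# alpha = 0.001 -> ~66 spurious findings
```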
@SeanBennett
@SeanBennett 5 лет назад
Put a link to the mentioned Twitter handle in the description, please!
@healthcaretriage
@healthcaretriage 5 лет назад
Done!
@eugenetswong
@eugenetswong 5 лет назад
Thank you, Doctor!
@ems7623
@ems7623 2 года назад
Yeah, I realized this years ago. It doesn't help that the media, fitness trainers, and self-styled "life coaches" cherry-pick and promote random studies, confusing the public even more.
@lukepascal6023
@lukepascal6023 5 лет назад
I just graduated with a degree in food science and I'm not quite clear on where the problem lies. Even if the scientists are cherry-picking positive results, does that really undermine the results? Reporting on 3,300 foods lets them eliminate other variables and increases the total data points for statistical analysis. Can someone explain?
@Emmasama240
@Emmasama240 5 лет назад
Not a scientist or a statistician, but my understanding is that by looking at so many different food and health factor combinations, there is a very high chance of one combo producing statistically significant results through pure random chance even if there's no real relationship of note. They reported the results of one combination, and nothing else. Would a scientist withhold exciting findings? No. That indicates that the combination of soda/juice and cancer was that one random pairing that returned significant results, whereas all the others weren't notable. Odds are good that an experiment looking only at sugary drinks and cancer would fail to find significance, and a reproduction of this giant crapshoot study would find random significance in some other food/condition combination instead
@lukepascal6023
@lukepascal6023 5 лет назад
@@Emmasama240 Thanks for the thoughtful response. I see what you're saying, especially in relation to the pre-registration. Reading your comment and watching Veritasium's "Is Most Published Research Wrong?" helped me understand. So basically, by making it so broad, the scientists can handpick the outcomes they want and enhance the likelihood of their paper getting published.
@MichiruEll
@MichiruEll 5 лет назад
A vague memory from statistics classes tells me that you're supposed to adjust your statistical test for the number of outcomes you tested for (for exactly the reason we see here). Did they do that adjustment?
@Roll587
@Roll587 5 лет назад
That's the Bonferroni adjustment, yes?
@ProfDamatu
@ProfDamatu 4 года назад
@@Roll587 Yeah, for ANOVA and the like. Cost me a few significant results in my dissertation research!
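A minimal sketch of the Bonferroni adjustment this thread is referring to: with m tests, compare each p-value to alpha/m instead of alpha. The p-values below are made up for illustration, not taken from the study:

```python
# Bonferroni adjustment: with m tests, the per-test threshold becomes alpha/m.
# The p-values here are invented purely to show the effect of the correction.

alpha = 0.05
p_values = [0.001, 0.012, 0.03, 0.049, 0.20]
m = len(p_values)
adjusted_alpha = alpha / m  # 0.01 for 5 tests

for p in p_values:
    naive = p < alpha
    corrected = p < adjusted_alpha
    print(f"p = {p:.3f}  naive: {naive}  Bonferroni-corrected: {corrected}")

# Three results that look 'significant' at 0.05 no longer pass once the
# threshold accounts for the number of comparisons.
```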
@Gaardofit
@Gaardofit 4 года назад
Great video my broskie
@hippiemuslim
@hippiemuslim 5 лет назад
Thank you for a nice dose of skepticism
@JoeyHumble
@JoeyHumble 5 лет назад
Did they control for multiple comparisons using something like the Bonferroni correction? If they did, I really don't understand what the problem is.
@KeeliaSilvis
@KeeliaSilvis 5 лет назад
Could you please link the journalist's Twitter in the video description and/or a pinned comment? Idk what is wrong with my spelling, but I can't find her.
@adsmith_
@adsmith_ 5 лет назад
Keelia Silvis Tamar not Tamara, that was my issue
@Ou8y2k2
@Ou8y2k2 5 лет назад
twitter.com/TamarHaspel
@KeeliaSilvis
@KeeliaSilvis 5 лет назад
@@Ou8y2k2 HaspEL, not Hasp!!! Thanks much!
@Ou8y2k2
@Ou8y2k2 5 лет назад
@@KeeliaSilvis np
@g3i0r
@g3i0r 5 лет назад
3:30 Wait a minute! They wanted to analyze the primary outcome in HEIGHT??
@cuckoophendula8211
@cuckoophendula8211 5 лет назад
I guess they were thinking about people who may start shrinking from things like osteoporosis? (shrugs)
@paielinelli
@paielinelli 5 лет назад
Height is most likely used with weight to calculate BMI.
@jasonsamaniego4886
@jasonsamaniego4886 4 года назад
@@paielinelli Thank you;
@RoRight
@RoRight 5 лет назад
To be fair, if researchers of the study adjusted for multiple comparisons using something like Bonferroni correction then their results are valid.
@thedoc5848
@thedoc5848 4 месяца назад
Not at all, you can only adjust for confounders you are aware of and accurately measured, that would require absolute knowledge
@SudaNIm103
@SudaNIm103 5 лет назад
It’s shit like this that is slowly eroding trust in experts.
@doctaflo
@doctaflo 5 лет назад
help me understand... i get the idea that if your search space is all possible associations among all foods and all diseases, it's intuitive that that you'd see some correlations due to statistical noise. but doesn't crossing the threshold of "statistical significance" address this concern? why does it matter if you were looking for a specific association before you did the study once a "statistically significant" association is uncovered?
@moribundmurdoch
@moribundmurdoch 5 лет назад
Would this video be under "641 Food & drink 641.1 Applied nutrition, 641.2 Beverages (Drinks), 641.3 Food, and 641.4 Food " in the Dewey decimal system.
@rociosilverroot2261
@rociosilverroot2261 5 лет назад
I love your gumball holder!
@ShakalDraconis
@ShakalDraconis 5 лет назад
I mean, this is the same journal that published a paper on Cello Scrotum disease. So, you know, might want to cross check a bit.
@CAl3vara
@CAl3vara 5 лет назад
Not that I'm doubting here, but could you explain further what the broadness of this registration does to invalidate their findings? This seemed to be a significant sample size, with solid considerations for other factors, and saw an increase in cancer in consumers of sugary foods vs those who did not. Now the "Sugar causes cancer" headline is clearly mistaking correlation with causation, but from a scientific perspective, what about this study negates that correlation?
@hungrymusicwolf
@hungrymusicwolf 5 лет назад
Not a scientist, but I have a bit of background knowledge (I might make some minor mistakes given how long ago I learned this stuff). Researchers test whether outcomes are significant using a measure of how likely it is that the result happened by random chance rather than an actual cause. That measure assumes you are testing one thing at a time; it is written roughly as "this correlation between x and y has a z% chance of being a coincidence." It can be gamed by measuring dozens or hundreds of things at once. If you test for hundreds of illnesses and then search for a cause among 3,300 foods, it's like flipping a coin 10,000 times, noticing that at some point you got tails 20 times in a row, and declaring "I flipped 20 times and got 20 tails, that can't be a coincidence" while ignoring the other 9,980 flips. This study is effectively flipping a coin for every illness and every food it measures, and after that many flips you are bound to get some weird random-chance streaks somewhere.
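A small simulation of the coin-flip analogy above: long runs of tails show up by chance alone if you flip enough times, just as spurious "hits" show up if you scan enough food/illness pairs. The flip counts are illustrative:

```python
# Flip a fair coin many times and record the longest run of tails.
# Long streaks appear by chance if the search space is large enough.
import random

random.seed(0)

def longest_tails_run(n_flips: int) -> int:
    longest = current = 0
    for _ in range(n_flips):
        if random.random() < 0.5:   # tails
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} flips -> longest tails streak: {longest_tails_run(n)}")

# The longest streak grows roughly like log2(n_flips): impressive-looking runs
# are exactly what chance produces when you look hard enough.
```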
@jakehunsaker8838
@jakehunsaker8838 Год назад
After watching Dr. Carroll dive deeper into this study on sugary drinks and cancer, I feel like trusting any research is difficult. It seems that anything can be made to appear statistically significant these days, despite pre-registration measures meant to prevent it. Yet researchers still find a way to cherry-pick data from their research and present it to the world. It seems relatively harmless with regard to this specific study, because this information would hopefully prompt people to decrease their intake of sugary drinks. But consider the ethical implications if these research methods were applied to a different topic, such as the efficacy of a new cancer treatment. Something could be shown to be statistically significant, but in reality have no clinical impact on the morbidity and mortality of the patients. Because of the prevalence of questionable research methods, I think it is so valuable to have channels like this where people with a far greater understanding of statistics and research methods can present information in a way I can understand.
@TheAlison1456
@TheAlison1456 3 года назад
4:12 thank God we have pre-registration now.
@adamcarheden9467
@adamcarheden9467 5 лет назад
To be fair, the public may be talking about such findings, but everyone knows we don't act on them as evidenced by the fact that Coca-Cola stock didn't dip on that news.
@cjdabes
@cjdabes 5 лет назад
This sounded a bit like P-hacking or data dredging.
@thishandleistaken1011
@thishandleistaken1011 5 лет назад
That's exactly what is happening.
@indraneilpaul1309
@indraneilpaul1309 5 лет назад
The video still doesn't clarify why this practice is deleterious. Just because you throw a ton of mud and filter for what sticks, it doesn't rule out the fact that it stuck. Any statistical reasons why inferences drawn by such means are any less valid?
@zdenek3010
@zdenek3010 Год назад
Someone should do a study of gold eating. The most inert metal can for sure be correlated to so many health benefits it's crazy.
@TheEulerID
@TheEulerID 4 года назад
How does something so obviously flawed ever get through peer review? It's shocking.
@christianlibertarian5488
@christianlibertarian5488 4 года назад
Well done, Dr. You should also mention, though, that there are probabilities associated with the findings; specifically, most medical studies accept a 5% rate of "unlikelihood" as being good enough proof. But, if you study 100 different things, as was done here, by the odds, something is going to meet the level of significance even if it is just random. In other words, the study you cited found nothing at all.
@Phlegethon
@Phlegethon 5 лет назад
Everyone should watch this
@BeCurieUs
@BeCurieUs 5 лет назад
Can we get the link to her twitter in the dobbly do?
@healthcaretriage
@healthcaretriage 5 лет назад
Added!
@vstexas09
@vstexas09 5 лет назад
My brain is too slow to follow and currently distracted. What’s the issue with pre registration?
@BadgerPride89
@BadgerPride89 5 лет назад
The issue is that these scientists didn't actually choose what they were studying. All they did was say, "We're going to study everything, and whatever gets us the best result, we're publishing." The whole point of an experiment is to have an idea and test specific variables, but they didn't really do that.
@km1dash6
@km1dash6 4 года назад
There are advanced statistical analyses out there that allow for testing multiple DVs without the risk of alpha inflation. Unless they said they were doing linear regression for each DV individually, without adjusting the alpha level, having multiple DVs isn't inherently a bad thing.
@vinnieramone4818
@vinnieramone4818 4 года назад
Phones might be close to being able to recognize a photo of food, including the quantity. People in a study could take a picture of what they're about to eat and send it in. This could also be randomized so people wouldn't have an incentive to lie.
@Drexistential
@Drexistential 5 лет назад
I am proud to see that pre-registration has now been adopted, although I am very disappointed to see that cherry-picking is still occurring.
@Deanzphx
@Deanzphx 5 лет назад
Too many variables
@hugo-garcia
@hugo-garcia 3 года назад
Another study based on questionnaires and not on science.
@robertofontiglia4148
@robertofontiglia4148 5 лет назад
This is my first visit to Healthcare Triage. Wow. You're like Alex Jones, but in a good way...
@justanotherhappyhumanist8832
@justanotherhappyhumanist8832 5 лет назад
He is nothing like Alex Jones, and there is no such thing as "Alex Jones, but in a good way". The very phrase is oxymoronic.
@carolzinha94
@carolzinha94 4 года назад
There's also an individual genetic component. Some foods are great for one person, or don't cause cancer in them, but do in someone else. A particular vegetable or meat can be very good for one individual and not for another.
@jeiaz
@jeiaz 5 лет назад
I understand how the methodology is flawed but don't get how this makes the results concerning cancer and beverages not reliable. Could someone explain?
@ProfDamatu
@ProfDamatu 4 года назад
It's been a while, but just in case anyone's still curious: basically, these researchers collected a huge data set, with information on many different dietary choices and many different health conditions, and looked for associations between foods consumed and negative health outcomes. That means they (in essence; this is a simplification) ran probably hundreds or thousands of individual analyses, checking whether consuming food A was associated with a higher risk of condition A, then condition B, then condition C, and so on.

The problem with that is as follows: if you run hundreds of tests, looking for statistical significance at the .05 level, then as the sheer number of tests mounts, so does the probability of getting a spurious statistically significant result just by chance. This is kind of "meta," but think of it like this: when you do a regression analysis, or ANOVA, or whatever, with a .05 standard for significance, a positive result means that if there were really no underlying association, a result that extreme would turn up only about 5% of the time; the sample you tested may just happen to be structured so that a spurious association appears. When you're only doing a couple of statistical analyses on your data set, .05-level results are usually (depending on the field) considered strong enough to at least support further investigation of the link between the variables. Sure, the result MIGHT be in error, but you ran a tight, well-constructed study, so let's continue that line of investigation.

BUT, if you're subjecting that data set to dozens of simultaneous analyses, basically going on a fishing expedition, those two significant results out of 50 comparisons are no longer necessarily impressive. In fact, if you run enough statistical tests on a data set, you expect to get some positive results just by chance (even if there are no associations AT ALL in the data), for the same reason that when running a single test at the .05 level, we expect an incorrect positive result about 5 times out of 100.

It's never a good idea to do what the people responsible for this study seem to have done; hypotheses and studies really should be much more focused than appears to have been the case here. Sometimes, though, a larger number of comparisons is unavoidable. In such cases, you run what are called post-hoc tests for multiple comparisons (Bonferroni is a commonly used one); in very simple terms, these corrections "devalue" the significance of your results according to how many simultaneous comparisons you're doing. If any of the statistically significant results survive that process, you're good to go, but often they don't. In this case, I'm wondering if those post-hoc tests were done.

In any event, the reason the results concerning cancer and beverages wouldn't be reliable *unless they still held after post-hoc corrections* is that these significant results may very well be due to chance alone; with this many comparisons, we'd expect a few to come up positive even if, in reality, there were no associations among any of the variables.
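A scaled-down simulation of the "fishing expedition" described above: test many food/outcome pairs on pure noise and count how many come up "significant" before and after a Bonferroni-style correction. This assumes numpy and scipy are available, and the sample sizes are illustrative, not taken from the study:

```python
# Generate pure-noise 'intake' and 'outcome' data, correlate every pair,
# and count spurious hits before and after a Bonferroni-style threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_foods, n_outcomes = 500, 100, 20

foods = rng.normal(size=(n_subjects, n_foods))        # pretend intake data
outcomes = rng.normal(size=(n_subjects, n_outcomes))  # pretend health outcomes

p_values = []
for i in range(n_foods):
    for j in range(n_outcomes):
        _, p = stats.pearsonr(foods[:, i], outcomes[:, j])
        p_values.append(p)

p_values = np.array(p_values)
m = p_values.size  # 2,000 comparisons here
print("naive 'hits' at p < 0.05:      ", int((p_values < 0.05).sum()))
print("hits after Bonferroni (0.05/m):", int((p_values < 0.05 / m).sum()))

# Expect roughly 100 naive hits (5% of 2,000) even though every association
# is pure noise; the corrected threshold wipes almost all of them out.
```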
@fredsimon
@fredsimon 5 лет назад
Agree with the ethics. But still biased against sugary beverages and most of processed food -
@danielcuevas5899
@danielcuevas5899 5 лет назад
The Japanese Healthcare System! Come on!
@bruceliu1657
@bruceliu1657 4 года назад
I feel like their threshold for reporting an association is too low. I would say only associations of at least 20% should be reported.
@pumpkinpumpkin8288
@pumpkinpumpkin8288 5 лет назад
It's a newer science than most so I'm not super surprised by this, however this behavior still should not be tolerated.
@Khn90
@Khn90 5 лет назад
It's p-hacking and cherry-picking on steroids. I can't understand how authors get away with it.
@kateajurors8640
@kateajurors8640 5 лет назад
The "do better" part falls on the publications, quite frankly. The entire system keeps pushing publication after publication, pressuring scientists to find something relevant even if it's minuscule. Real science doesn't always give us an answer, and it doesn't always turn out well, but that's the way our system is built now. There's no way you stay in business as a researcher, especially in a commercial or collaborative research effort, if you're only looking at a few hundred outcomes and only studying a few hundred things. You might get away with it a time or two, but you still need to pad out all the research papers required to keep you funded. They don't fund you for showing that something is a myth; even if you've proven it, they don't pay you for it. They want you to find something new, something relevant, something people will look at. It's not just nutrition studies, it's all scientific studies, around the world even. We've become so bent on being efficient and not wasteful that our governments waste thousands and thousands of dollars, and no government is excluded from this. Yet as scientists we can't make a mistake or slip up, or the entire institute can lose its funding. You wouldn't think that one person not publishing frequently enough could affect an entire institute, but it does, even if only a little in some cases.
@shericontrary2535
@shericontrary2535 5 лет назад
Please do a video about diabetes. I've been doing a lot of research because I was finally diagnosed with pre-diabetes after having hypoglycemia for decades. It angers me that doctors don't know much about all this and so YouTube is my doctor.
@SoulDelSol
@SoulDelSol 5 лет назад
Just get a specialist not internet jeez
@SoulDelSol
@SoulDelSol 5 лет назад
This video is about identifying bad research methodology, which doesn't mean that all research is bad. Anyway, it's about scientists, not doctors. Getting your info from YouTube is likely to get you more bad science and bad research. Doctors who study healthcare are knowledgeable about basic conditions like yours, and if you have a crap doctor just change (research reviews of doctors and ask for recommendations).
@MrJayPuff
@MrJayPuff 5 лет назад
But all researchers do the same thing. Weren't the results still significant, though?
@km1dash6
@km1dash6 4 года назад
Statistically significant results may be a result of a large sample size rather than a "real" underlying connection. For example, if you take a random sample of 100,000 people and ask them what ice cream they like, and find a small difference between men and women in flavor preferences, it could be that some genetic, psychological, or social factor causes that difference, or it could be that you collected such a large sample that minor deviations get picked up by the test and reported as significant.
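A minimal sketch of the point above, assuming scipy is available: with a huge sample, even a trivially small difference can clear p < 0.05. The group sizes and the 0.02-standard-deviation shift are made up for illustration:

```python
# Two groups that differ by only 0.02 SD: with n = 100,000 per group the
# t-test will usually call this 'significant' even though the effect is tiny.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000
group_a = rng.normal(loc=0.00, scale=1.0, size=n)   # e.g. flavor-preference score
group_b = rng.normal(loc=0.02, scale=1.0, size=n)   # shifted by 0.02 SD

t, p = stats.ttest_ind(group_a, group_b)
cohens_d = (group_b.mean() - group_a.mean()) / np.sqrt(
    (group_a.var(ddof=1) + group_b.var(ddof=1)) / 2
)
print(f"p-value: {p:.2e}   Cohen's d: {cohens_d:.3f}")

# Typically p lands far below 0.05 here while d stays around 0.02 --
# statistically 'significant' but practically meaningless.
```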
@TXRhody
@TXRhody 5 лет назад
Wait. ALL nutrition studies are terrible because ONE study is terrible?
@AFSamizdat
@AFSamizdat 5 лет назад
As a research scientist in nutrition, that was a high bait title.
@Skjetch
@Skjetch 5 лет назад
I understand your critique of the methods, but that does not necessarily mean that their findings are incorrect.
@johnw4016
@johnw4016 3 года назад
Couldn’t you make a similar criticism of many epidemiology studies?
@ClearerThanMud
@ClearerThanMud Год назад
Hmm. I don't see any problem at all with looking for a wide range of correlations -- as long as it is used to guide further research rather than being interpreted as evidence of a causal relation. So I'm not sure the problem is really the way the study was pre-registered so much as the way the results were reported.
@mirceadraculov6515
@mirceadraculov6515 3 года назад
Hey guys, did you know air conditioners cause heat strokes?
@vanessamilanesa
@vanessamilanesa 5 лет назад
Dude, this guy is the best doctor ever.
@TipoQueTocaelPiano
@TipoQueTocaelPiano 3 года назад
That's not bad research, those are exploratory analyses. There are thousands of nutrition studies per year; don't try to make it seem as if all nutritionists do is find correlations.
@milantopalov1157
@milantopalov1157 4 года назад
Ancel Keys is the king of this.
@jacobfoxfires9647
@jacobfoxfires9647 5 лет назад
For anyone here who might also have memories of this: I saw a lot of stuff on local TV news that made me realize how much bullshit there is in nutrition and food science. I remember a time when eggs were ALWAYS on the news for something. They were linked with cancer, then they weren't, then they were a superfood, and then they could be risky if you eat too many. Then all of a sudden pork was carcinogenic, and so was beef. And now they aren't. With pretty much anything regarding diets, just don't listen. We all have different bodies and we have to experiment and change our diet to make up for what we're missing. There isn't a blanket "avoid this food" or "always eat this food" that applies to everyone. Look into it yourself, ask a doctor if you're worried about your personal health, and make changes that are smart.
@keepgoing6296
@keepgoing6296 4 года назад
Focus on your blessings, not your misfortunes.
@ChetanSharma-tv6jb
@ChetanSharma-tv6jb Год назад
Nutrition is not always about weight gain or loss, calories, or risk reduction. Nutrition research is more about studying micronutrients and their effects on immunity, hormones, cell functioning, and reduction in atherosclerosis, as shown with niacin and vitamin C. So nutrition research is not terrible. Your topic choice is.
@richardjacobs7632
@richardjacobs7632 2 года назад
I’ve been doing Dr. Berg low carb healthy keto! I lost targeted weight and have maintained since!
@McCaffreyPickleball
@McCaffreyPickleball 6 месяцев назад
I don't think you understand how these cohorts work... They collect a lot of data... As much as they can (on inputs/variables and different health outcomes. Then they publish the data and make it available for researchers. Then researchers pick a particular thing they want to look at... You could do this. Maybe you want to look at whether leafy green consumption correlates with heart disease risk. You'd look at previous research in the field to guide what you should focus on/try to isolate. Then you'd isolate that part of the data, apply adjustments, split into quintiles. It's not like an RCT declaring primary outcomes and intervention. In epidemiology, the more data collected from the cohort, the better. Also... 18% increased risk isn't clinically significant? What in the heck are you even talking about? All that aside, I agree there was some weird stuff in the NutriNet Sante cohort data (E.g. Some of the dose response risk curves for non-nutritive sweeteners)
@paielinelli
@paielinelli 5 лет назад
Just as this one study cannot prove a broad conclusion, Healthcare Triage’s broad conclusion of “Nutrition studies are just terrible” is not proven. Where’s the meta-analysis? Please, look at every nutrition study and tell me they are all garbage rather than one bad study to shit on the rest. If they are not all garbage, give me a video that breaks down the good from the bad.
@googleuser1937
@googleuser1937 5 лет назад
This video is sponsored by Coca Cola
@RBuckminsterFuller
@RBuckminsterFuller 5 лет назад
God, this really bums me out. For shame.
@johnlynch4814
@johnlynch4814 5 лет назад
I feel like this title is a bit misleading.
@ucchi9829
@ucchi9829 5 лет назад
I'm really not sure what the main takeaway from this video was. For starters, this is one paper, so if the title is "Nutrition Studies Are Just Terrible" and all you're going to do is use one example, then I don't really see your point. Maybe the person you appealed to on Twitter wasn't sure what she was saying, or you may have misunderstood her point. Simply scroll down to the section called "Strengths and limitations of this study" and you'll see what limitations the authors admit to and what population they feel this study is relevant for. Just because a study is published does not mean we have to accept the conclusion... I think this quote from the study really tells you a lot here, and maybe why you're confused: "Finally, this is an observational study, thus causality of the observed associations cannot be established and residual confounding cannot be entirely ruled out." Here's the study btw: www.bmj.com/content/366/bmj.l2408
@alexplaysgames8774
@alexplaysgames8774 5 лет назад
Breaking news existing causes cancer
@MarkVanReeth
@MarkVanReeth 5 лет назад
Is it so bad to not have a preconceived hypothesis but just follow wherever the evidence leads? Of course, this would make it an exploratory study that would need follow-up studies to detect causality, but in and of itself this concept doesn't seem so awful to me.
@MathAndComputers
@MathAndComputers 5 лет назад
If you look for one association and find a 95% chance that it's real, that's one thing. If you look for 1000 associations among data with no true associations, and each one has a 95% chance of the result being correct, there will be on average 50 false positives that look like associations but aren't. That's effectively what happened in this case, so it's impossible to determine anything from their results.
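The arithmetic behind this reply, as a quick sketch: the expected number of false positives among m independent tests is m times alpha, and the chance of at least one false positive is 1 minus (1 - alpha) to the power m. The test counts are illustrative:

```python
# Family-wise error under the null: expected false positives and the
# probability of at least one, for a few numbers of independent tests.

alpha = 0.05
for m in (1, 20, 1_000):
    expected_fp = m * alpha
    p_at_least_one = 1 - (1 - alpha) ** m
    print(f"m = {m:>5}: expect {expected_fp:5.1f} false positives, "
          f"P(at least one) = {p_at_least_one:.3f}")

# m = 1000 gives ~50 expected false positives and a near-certain chance of
# at least one, which is the reply's point.
```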
@TheSilv3r
@TheSilv3r 5 лет назад
Yeah, this definitely feels like ground-level data-science exploratory analysis to me. Maybe that's the problem, that such things would even be published in a medical journal? I'd have to read the paper to truly judge (and I'm not doing that), but it feels like an indicator for further exploration, or something to include as a data point in a meta-analysis, rather than something that holds strong on its own.
@karm65
@karm65 5 лет назад
Preselecting an experiment's results would promote bias. I was taught that a good experiment is doing the experiment, then recording and processing the results to see what your findings are.
@karm65
@karm65 5 лет назад
@@cookicha So you think they should ignore any and all results but the ones they expect and can explain? If you know what the results will be, it's not an experiment, it's a procedure where you follow a formula. You get the expected results every time, and while that's very handy, you don't learn anything from it. How is that good science? Science gets results it wasn't expecting, or is unable to explain, all the time, and yes, researchers report that they saw X and don't know why, so further study is needed. If you only accept the results that were expected and can be explained, nothing is learned and the whole mess is a huge waste of time and money.
@opinionatedape5895
@opinionatedape5895 5 лет назад
Maintain a healthy weight, eat real food and get regular exercise and the rest is genetics.
@DrummerDucky
@DrummerDucky 5 лет назад
Meat is real food and it'll still overload you with saturated fat and cholesterol. There are far more nuances to consider if you plan to be in top shape past 40 years old.
@johnspartan12
@johnspartan12 4 года назад
@@DrummerDucky Meat is a great food, depending on the animal, the cut of meat, and the way you cook it. Please don't talk nonsense.
@DrummerDucky
@DrummerDucky 4 года назад
@@johnspartan12 Legumes are superior in every way - even price - to any form of meat, with a substantially lesser impact on the environment. Bonus : it also allows for a consistent ethical approach to animal welfare.
@johnspartan12
@johnspartan12 4 года назад
@@DrummerDucky Meat is superior in the bioavailability of its proteins, and also in B12 content, so I wouldn't say that legumes are superior in every way. You are right about the environmental impact and the ethical implications.
@jinchoung
@jinchoung 5 лет назад
I don't understand why the pre-registering is an issue. If they throw everything at the wall to see what sticks, that's a broad methodology, but doesn't the fact that those things "stuck" matter?
@MathAndComputers
@MathAndComputers 5 лет назад
If you look for one association and find a 95% chance that it's real, that's one thing. If you look for 1000 associations among data with no true associations, and each one has a 95% chance of the result being correct, there will be on average 50 false positives that look like associations but aren't. That's effectively what happened in this case, so it's impossible to determine anything from their results. Hopefully that helps clarify the issue.
@ajwajw8152
@ajwajw8152 5 лет назад
I love this channel... huge fan, but the title of this video is hyperbolic and overgeneralizes the problems of one study to an entire field. The criticisms of the study are absolutely true, but it doesn't mean the entire field of nutrition research is bunk, just this study. We get upset when one study overgeneralizes its results, but at the same time, this video does the same thing. All due respect though... keep up the good work. Love your channel!
@hungrymusicwolf
@hungrymusicwolf 5 лет назад
This study is merely an example. I tried to study up on nutritional science in order to better my diet but it is absolutely infested with studies like these. Not all of them are just as bad as this one but I found it very hard to find even a single good study and even then it is hard to come to any conclusions based on just a single study so generally the nutritional sciences are pretty bad and unreliable.
@bekkayya
@bekkayya 5 лет назад
If I had a twitter I would give her my sub, maybe she should start a youtube :#
@bekkayya
@bekkayya 5 лет назад
for real though I'm really glad ya'll are looking at these reports in detail. somebody has got to
@derrickng4017
@derrickng4017 5 лет назад
The YT channel, What I've Learned, does great video essays. Did one on nutrition research. Awesome channel everyone should check out
@optimisticallyskeptical1842
@optimisticallyskeptical1842 4 года назад
John Harvey Kellogg (a religious zealot) and Ancel Keys (an egocentric sycophant- "public servant")
@sterlgirlceline
@sterlgirlceline Год назад
⭐️⭐️⭐️⭐️⭐️⭐️
@Raikomon
@Raikomon 5 лет назад
What is so wrong about only publishing about the variables they found significant? Yes there is a lack of reporting of negative results but why are the results that are found to be significant still not valid?
@lovely-mk4rt
@lovely-mk4rt 5 лет назад
This is an ignorant site!
@xfrancescanicolex
@xfrancescanicolex 5 лет назад
I would love to see MictheVegan respond to this video. There's definitely bad research within nutrition that gives all of it a bad name. Same with psychology. Traditional scientists are so busy taking potshots at newbies that they forget these fields can and do really help people, and could help more if they had better resources (training, better peer reviews, funding).
@351cleavland
@351cleavland 5 лет назад
I am struggling to understand the point of the video: 1) measuring multiple variables and diseases leads to eventual correlations on at least a few occasions (a kitchen-sink approach); 2) the researchers aren't sticking to a specified hypothesis; 3) a few of the correlations may be statistically significant but not clinically so; 4) media then run with the "conclusions" drawn from the correlations. Isn't the point of conducting a study to potentially add to the overall narrative in research? In other words, "this is what we know now, but it's not complete and is ready for someone else to build upon." Wouldn't it then be up to the researchers to follow up on the previous work, or for others to take note and do follow-up studies to refine or debunk the findings? I get the media aspect of jumping to conclusions, and that's not a science problem as much as it is a media/consumer issue.
@nedewag1581
@nedewag1581 5 лет назад
Cohort studies are expensive, so using them to only investigate a single exposure and outcome relationship is a waste of money and time. Cohorts are also the only way to investigate associations with long term outcomes, like cancer. Maybe I missed the argument amongst all the yelling, but if nutritional epidemiology cannot inform us on diet-disease associations then what will? Or you don't think that diet is associated with disease at all? Or are you more of a treatment instead of a prevention kinda guy? So many questions
@idnwiw
@idnwiw 5 лет назад
Let me play devil's advocate here: casting a wide net of foods eaten plus negative health outcomes isn't itself a bad thing. This "big data approach" allows us to find new correlations and helps detect "blind spots," connections that food science hasn't suspected before. The main problem, imho, is that in the current scientific-journal landscape, the full results of such a study, stated as registered ("we looked at all those variables and only these few correlations are statistically relevant"), have little chance of being published in a high-impact journal and even less chance of making it into normal-people media.
@fredgotpub871
@fredgotpub871 5 лет назад
Strange: they followed all the rules but it's not good enough. It's not cherry-picking, because even the negative results are published.
@MathAndComputers
@MathAndComputers 5 лет назад
If you look for one association and find a 95% chance that it's real, that's one thing. If you look for 1000 associations among data with no true associations, and each one has a 95% chance of the result being correct, there will be on average 50 false positives that look like associations but aren't. That's effectively what happened in this case, so it's impossible to determine anything from their results.
@OceanPatriot777
@OceanPatriot777 5 лет назад
Anything that tastes good is BAD for you. Anything that's healthy is going to taste awful!!!!
@dotnothing5620
@dotnothing5620 5 лет назад
I really don't like the headline on this. But the content was fine.
@azipoor3468
@azipoor3468 5 лет назад
But I think nutrition is one of the best branches of biology
@andyp8464
@andyp8464 5 лет назад
It isn't scientific in any way, so, no.
@hungrymusicwolf
@hungrymusicwolf 5 лет назад
@@andyp8464 It is scientific but it's just shitty science.
@SoulControlla99
@SoulControlla99 5 лет назад
BMJ? Jesus, way to branch out from JAMA for once.
@sidecar7714
@sidecar7714 5 лет назад
The science is fine. The reporting is the problem.