Having published in a handful of scientific/academic journals, I have not once been asked for actual deidentified data. The expectation is that researchers conduct themselves with honesty and integrity, but obviously that isn't universally practiced.
@@PeteJudo1 I'm the "anyone asking," and thewolf's post makes my point. In general, the peer review system is dead. The world of "Doveryai, no proveryai" - trust but verify (Suzanne Massie) - has parted company with some sectors of the scientific community. And we wonder why there are people who live by another mantra: "There's a sucker born every minute." (P.T. Barnum) Not properly peer reviewing scientific claims has cost people their lives, and that deserves more than an "anyone asking how peer review didn't catch this" snark.
I want to say peer review should take care of ensuring honesty and integrity but we all know that's a joke. That's why the only people reading academic journals are academics.
@@PeteJudo1 After monitoring the academic caliber of Harvard's products/graduates (aka degree 'recipients', NOT earners) vs their ability to perform, I'm not in the least surprised!! Without excoriating all of them, I've long grown weary of encountering them directly, and recently of constantly watching their mendacious behavior headlining the news for epic failures. Harvard used to represent status, respectability and eminence. The past 30 years have certainly changed my perception of its exceptionalism. Narcissism and absolutism pervade their credentials, so I'm much less apt to blindly trust. Worse, they've selectively had an epiphany recently regarding their own ROOTS in slavery by their founders while endlessly castigating everyone else for lesser acts. I'd send the link and info on this, but YouTube chooses to block these topics. It's a sad fact that an Ivy degree doesn't certify anything. Indeed it may be something to avoid, like the monumental damage inflicted by Jeffrey Skilling, Rajat Gupta, Elizabeth Holmes, Alissa Heinerscheid, etc. Higher Education itself isn't impressive on the whole either. Look at all of the graduates collectively who have been DEMANDING their SLD be mystically ERASED. Their degree obviously isn't paying off; TOO many haven't found work and began boomeranging back home around 2008, which has become 'normalized'. It's a pathetic narrative that these grads have formulated and cleave to.
The fact that this was only caught because she was so terrible at faking the data really raises eyebrows. How much data is faked by competent fraudsters?
There are entire fields of "study" built on extremely dubious research. 10 years ago in my first Psych class the professor told us that most psychology studies cannot be reproduced and that a lot of data is faked. In a decade, nothing has changed. It's no wonder mental health is poor for so many people when the entire field of study is polluted with bad data and studies that aren't scientifically sound. ( Not to single out Psychology -- Quantum Physics is another field with a lot of people wasting their time )
@@Anankin12 She probably was really good at it. But with time she just didn't think about the consequences of cheating; she'd done it for years by then. So she just got sloppy.
Same with some therapists and psychologists. Some people take those positions to control people or make themselves feel better that they have better lives and manipulate others. But also remember, there are honest men and women that want and try to help. Just remember shitty people should not be treated the same as those really trying
I was doing a meta-analysis once for a bio-medical project. I came across a research paper that I was pulling data from. The paper's contents read well and showed a positive correlation in the outcomes of the intervention. I went to look at the tables to pull the hard data and I just couldn't make sense of what I was reading. It felt very off. I'm not great at understanding the deep depths of statistics, so I took that paper to a professor of statistics, who had to take a moment after looking at the tables. As it turned out, the tables had been carefully put together to obscure that there wasn't much of a correlation at all. The authors didn't blatantly lie in the paper, but the text implied one thing, while the tables obscured the truth of another. It literally took a PhD in statistics to see what they were doing.
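Pulling the raw numbers out of the tables and recomputing the statistic yourself is often the simplest defense against presentation like that. A minimal sketch in Python; the intervention/outcome pairs here are entirely hypothetical stand-ins for a paper's tables:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient computed directly
    from raw paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: outcome barely moves with the treatment dose,
# whatever the surrounding prose might imply.
treatment = [1.0, 2.0, 3.0, 4.0, 5.0]
outcome = [2.1, 1.8, 2.3, 1.9, 2.2]
print(round(pearson_r(treatment, outcome), 3))
```

If the text claims a strong effect but the recomputed r sits near zero, that mismatch is exactly the red flag the professor of statistics spotted.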
When I took Business Statistics we used a book, "How to Lie With Statistics." It was a great resource for learning how to spot things like this. Also, I can never watch commercials the same way again.
I generally hate meta-analyses, as they combine dissimilar data. But you have pointed out one good aspect of them: they at least do a review of other data.
One correction … this proves getting CAUGHT being a fraud is unacceptable in the profession, BUT being a fraud is how you rise to the top of the profession.
The researchers who challenged her findings aren't vigilantes, they're professors at universities. Challenging your colleagues' work is meant to be part of the job. It's kind of like calling the SIU vigilantes.
I used to work in a lab, and after putting my blood and my sweat into research, I generated excellent data. One day the professor showed up and claimed that he needed to get a patent for his company using my work, but without my name being acknowledged, and asked for my work. Immediately after he received it, he told me that he would give me another project to work on, as he wanted to show investors that he was the one who generated the data in order to secure more funding. When I protested his decision, he fired me and stole my notebook. As an international student, I had no way of getting compensated and was left in limbo. People like these should be removed from academia!
Academia is broken because of the behavior of journals. Decades ago, negative results were publishable as it helped fellow researchers not to follow the same path or learn what doesn't work, but nowadays they are thrown out by journals because journals demand only positive results. So, this practice of manipulation is becoming increasingly common.
The pressure in psychology to produce novel results, with no resources other than access to some undergraduate subjects, has been forcing researchers to p-hack for at least 60-70 years. It was a huge problem when I was getting my doctorate in social psychology 40 years ago. It isn't just the journals, it's the whole system. Only substantive changes in academia will fix this. Journals publishing negative results will help, but that's not enough; the pressure will still exist to generate novel findings from quick studies with zero funding. In psychology, if you can't get multiple quick publications of unusual results, you don't have a career. Some possible solutions are pre-registering hypotheses and methods before the work is initiated, with the publication of results assured. Another idea is making replication part of the standard graduate school requirements, like a second-year project, with journals specifically publishing replications, e.g. The Journal of Psychological Research Replication.
It's more complicated. Negative findings may result in spam. Here is a video on why particle physics has a similar problem: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-lu4mH3Hmw2o.html
Journals don't accept/reject papers. Scientists acting as peer reviewers do. We need to change the culture in science to once again accept interesting negative results.
There’s a famous saying: You get what you incent. And it often holds true. She’s an obvious fraud who has harmed the industry for perceived personal gain.
She wrote a book that is literally called: "Rebel Talent: Why it Pays to Break the Rules in Work and Life." I mean seriously, how much more transparent do you expect her to be? She literally advises people to break the rules to advance their career. Why would she not take her own advice?
Isn't really that bad a piece of advice. Following all the rules to the last letter is really bad, and can even harm the company (aka an Italian strike. Edit: also known as work-to-rule or a slowdown).
Have you read "Corruptible: Who Gets Power and How It Changes Us" by Brian Klaas? You should, because it's like a survey of the scientific research on power, and it's not unknown for ambition to trump professional ethics in pursuit of status and power. And the culture in academia can foster that urge, because it's "publish or perish." And no journal is interested in negative results. They're on the lookout for attention-grabbing articles that might jump the gap to the mainstream. And Gino obligingly supplied results that fit the bill. BTW, Klaas mentions very early on a case study of a former career financial criminal, who cites his ability to learn new ways to steal money throughout his career as the foundation of his former success. Klaas argues that one of the key characteristics of the powerful is that they are lifetime learners. And this criminal at his height was making 6-figure sums, when a million was a lot of money. So, the incentives are powerful, especially when your discipline is striving to align itself with government. For example, the former UK Prime Minister David Cameron installed a behavioural science unit in his government called the Nudge Unit, because the academics working in it promised to be able to use their expertise to shape policy and communication in order to persuade or nudge the general public to behave in ways that fulfilled policy goals... Creepy, isn't it? But just search here on YouTube and you will find behavioural scientists pushing that narrative. For myself, especially after watching Succession, and reading some old favourites like "The Dictator's Handbook: Why Bad Behavior Is Almost Always Good Politics" by Bruce Bueno de Mesquita and Alastair Smith (an updated 2nd edition was published last year), I'm inclined to be more sceptical and cautious of such endeavours. Especially after how his administration ended, and what followed.
Those who wield power - even the good ones - often have a complex relationship with ethics and morals. The problem is, whilst they may have talents in specific domains, and they acquire power because of that, they may be no more immune to folly than anyone else in other domains. And when the powerful succumb to their flaws, the result can be anything from significant to catastrophic. But read both these books, and I'd read Corruptible before The Dictator's Handbook, because the former is slightly more theoretical and broader in its scope than the latter, which focuses purely on the politics. Corruptible explains why, as a species, our relationship with power can tend to be challenging. Happy reading!
That’s a slap to the face to any student dreaming of entering academia. Basically confirms all my suspicions that it’s not about who is doing the best work or knows what’s happening, but the people who can trick others into making it look that way (especially investors/funders and school heads)
"Publish or Perish" didn't ruin academia; government-funded institutions measuring academic output exclusively through citation volume and forcing the creation of incestuous citation networks did. Publications now exist solely as checkbox busywork to justify continuing employment, not to convey and preserve knowledge.
@@dagreatpenguini “durrrrrr pAgiNg sEnaToR wARrEn sHe mEAn tO eLon” I knew the mouth breathers and incels would be in the comments of this video, even though they have absolutely no idea what the point was.
On the other hand there are people who have worked really hard on their research and have had to resubmit papers to one journal after another to try and get published because journals just don't care about negative results. It's unethical cheaters like these that are actually driving good honest research OUT of the top journals. No sympathy here.
I think it is the system that is the problem here. The cheaters are more of a symptom. Even if no one cheated, the bias towards surprising results over repeated studies would still result in most new research being wrong.
I think it's also related to the status of researchers being from "elite" universities. This happens on both sides. I imagine if the critics of Gino had not come from U. of Penn or Yale, would anyone have listened to them? This "elite" uni special treatment is perhaps what softens the editorial work on them. I cannot blame the reviewers too much; some reviewers work unpaid for hours, and perhaps Gino's reviewers were only able to review the logical sequence of the paper and the accuracy of representation (not the validity), and since the diagrams looked like they made sense, they did not pay enough attention to them.
In academia everyone knows that you need strong positive results to get published and you need to get published to not end up out of academia or in a job as an adjunct making minimum wage. The people who succeed then are either excessively lucky to keep getting great positive results, or they are cheaters. As someone specializing in dishonesty, this professor must have been well aware of the dominance of cheaters in her discipline, and she made a pretty easy choice to not buck the trend and have a successful career.
@@galois6569 Yes, but the fact that only studies with extreme/shocking results get published - a fact ubiquitous among researchers - IS the driving factor for lies and deception.
I ended up realizing my dream of working in academia and I cannot tell you how disappointed I was. The pursuit of knowledge and truth are rare now. It's all "publish or perish," so "researchers" make up stuff or they are totally biased and pick and choose data. It's a joke. Those of us who value knowledge and truth are swiftly bullied out of our positions.
More people should consider that finding out that your hypothesis didn't pan out is a good thing too. You still made progress by figuring out which route you should avoid taking.
You're right, their mistakes could've been easily avoided. Which makes you think about how much data out there is faked in a way which cannot be discovered; whole ideologies would be broken if that's the case.
In research, less than 10% of hypotheses lead somewhere. For genuine researchers there should be no shame in admitting inconclusive results. That is also a contribution to knowledge.
@@SigFigNewton This. Verifying the null hypothesis doesn't get you grant money, doesn't arouse the interest of prestigious journals, and certainly doesn't get you that sweet, sweet impact score.
Those genuine researchers don't get even the most ordinary tenure-track positions in US universities any more. Most successful research professors produce anywhere between 10-30 publications per year. It is IMPOSSIBLE for any human being to be aware of the details of every publication, even at the lower end of that range.
Yes, it keeps other labs from wasting resources on dead ends, and especially time. My PI told me he had read well over 1200 articles about one specific subgroup of proteins, and committed to reading any new material produced. Knowing what all these others had contributed saved him a lot of time.
@@boncoderz1430 it shouldn’t be. We should incentivize scientists to have strict standards, not incentivize them to just try and get a certain number. This incentivization has led to so much damage
Or rather, prevent publishing from being such a burdensome constriction that you have to publish even if you have nothing to add. Publishing isn't really the only way a scholar can be useful to the community. A most important way, for sure, but not the only one.
@@anttiasikainen3124 Why is this scary? Everybody knows that you shouldn't take this kind of research seriously. For example, if you want to know how potential customers react to a new feature in your product, then you do a study yourself, because you know that if you consult the research papers you will not get any truth. You will get what suited the researchers best. If you ask a company to do it for you, then the company will look at your behavior to estimate the results that best suit you, and they will make a study up to confirm what you want to hear, because otherwise you will not hire them again in the future.
How ironic, liars doing a study on lying and then lying about their results. (Update: the irony lies in the statement many would have us believe: "Trust the Science!")
There they were at the crossroads.... a fork in the road... One path led to the city of Truth, the other to Falsehoods. They saw a person there but did not know which path led to which city, for the signs had fallen. The paths were both well worn, and there was no other indication. Legend had it that once you go to the city of Truth, however, you only tell the truth, and likewise for the city of Falsehoods with only telling lies. But which was this person from? Due to time constraints, they only had one chance.... one moment for a question... and they asked "Which path goes to the city of Truth?" The person lied.... and the wrong path was followed. If they'd only asked... "If I asked you what a person in the city you're from would say was the city of Truth, which would they say?" A Truth teller would tell you what a truth teller would tell you... and lead you to the city of Truth. A liar would tell you a lie about what the liar would say... the person from that city would obviously point you to the wrong city, the city of Falsehoods, so the liar would point you to the city of Truth. And you would follow that path instead.... It's a shame they did not ask the right question.
It's generally called 'projection'. You also see it a lot in cheating spouses, dishonest employers, and any religious leaders, which includes academics.
Wow, I'm a data analyst and not even a good one and looking at this data is hilarious and looks like something a clueless intern would do. Harvard professors should be able to fake data better than this lmao.
Well, in the future, they'll be able to use software generators to produce plausible data that passes the obvious tests such as Benford's law. Or maybe that's happening already and it's only those who are really sloppy at fabrication that get caught.
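For what it's worth, a Benford-style screen of the leading digits is only a few lines. A minimal Python sketch; the `benford_deviation` helper and the thresholds you'd apply to it are illustrative assumptions, not a standard forensic tool:

```python
import math
from collections import Counter

def leading_digit(x):
    """First significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_deviation(values):
    """Total absolute gap between the observed leading-digit
    frequencies and the Benford's-law expectation log10(1 + 1/d)."""
    counts = Counter(leading_digit(v) for v in values if v != 0)
    n = sum(counts.values())
    return sum(
        abs(counts.get(d, 0) / n - math.log10(1 + 1 / d))
        for d in range(1, 10)
    )
```

Naturally occurring multiplicative data (e.g. powers of 2) scores low; a column of fabricated numbers all starting with the same digit scores high. A careful fraudster could of course generate data that passes exactly this test, which is the commenter's point.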
I doubt they even collect their own data, which is why they don't know how to fudge it and get caught so badly. They couldn't even be bothered to ask the poor grad student/assistant who most likely collected the data about its structure before they went and fudged it.
It requires a degree of sophistication to fake data convincingly. Those professors in my experience are math ignoramuses most of the time, they just learn certain statistical techniques by rote. The rest is just the art of pulling wool which is far less sophisticated.
At least there is data to manipulate. I have seen many examples of papers using no data at all, only references to sources that don't use data themselves (the source only uses its own hypothesis/opinion and has no source data).
@@Duke00x Even a meta-study on real studies dies under publication bias. Studies that show a correlation are published. Null-result studies remain unpublished and their raw data is lost forever.
The whole point of "peer reviewed" articles and studies is specifically to prevent this. Whatever organization agreed to publish her work failed at their job imo.
Yeah, that's exactly what I was thinking about. It seems that the peer reviewing is broken in the field. Doesn't mean all academia is broken, but maybe they haven't set a high enough standard in this field for checking datasets.
Academia and media share one important trait: some sources are considered "trustworthy" and are therefore approached with much less scrutiny than they should be, if any at all.
Lol, this is not new. Scientific American did a big study of their own publications and concluded that a great percentage don't even meet the criteria of being scientific.
I am at the end of my academic career -- I published my first academic paper in the 70s. When I got my first professor job - at a major research-oriented university, the expectation was that you would publish 2 (or 3 max) papers per year. If you published that few now, you would quickly lose your job (definitely not get tenure, definitely not get grants - which is what drives your research). But, you really cannot publish more than 2 or 3 and have them be high quality, so profs are forced to publish more, and therefore lower quality, papers. At one point my chair told me to publish more papers -- and said explicitly that the university did not care about the quality of the papers, just that there be 5-6 papers published per year. This is all coming from the administrators "managing" professors and finding simple metrics for gauging scholarly "productivity". Fraud is just the most egregious outcome of the system -- the worst part is just having to wade through all the dreck that is being published. So some universities give more weight to papers published in high prestige journals, which are always looking for papers with surprising results -- which leads directly to this kind of fraud. It is a systemic problem, not a problem of a few bad apples.
Why do administrators want more papers published per year at the cost of their quality? Surely they understand that is not going to make the university more prestigious, nobody measures how good a university is by "number of papers produced".
The same reason you have stupid metrics in every job: they need a metric to show the higher-ups / to determine who is working, and half the time they don't know or care about the metric, just that the people under them are getting good results so they can say they're managing the office/university the right way.
"Production for the sake of production - the obsession with the rate of growth, whether in the capitalist market or in planned economies - leads to monstrous absurdities. The only acceptable finality of human activity is the production of a subjectivity that is auto-enriching its relation to the world in a continuous fashion." - Félix Guattari
@@freshrockpapa-e7799 Universities compete for prestige. Administrators want to advance the university and come up with often stupid ways to do so, like cooking up dumb metrics like number of papers published. There are lots of junk journals that will publish pretty much anything for a fee. Publication at its lowest level just means the paper is formatted correctly and written in passable English.
@@freshrockpapa-e7799 Admins are the shining example of bureaucracy and its many idiotic flaws, which only a bureaucrat stuffing their pockets would want.
With her most recent book titled “Rebel Talent: Why it Pays to Break the Rules in Work and Life,” she certainly practiced what she preached, breaking the rules to profit.
Reminds me of that person who wrote a book about murdering someone and then actually killed someone. I think it was last year they were found guilty of murdering their spouse.
Before we get into whether she was fudging the results or not, the fact that she was trying to find a causal link between test result honesty and cleansing products tells you all you need to know about her and the institution. To that end, I'd like to apply for her now-vacant role with funding for the following hypotheses: 1. Can I attract better looking birds driving a Ferrari or a beat-up Chevy? 2. Do Coldplay play better gigs when I have a front-seat, VIP ticket, compared to when I don't? 3. The statistical probability of a coin toss outcome conducted in the Bronx compared to a four-star hotel in Bali. If the Provost can forward the application papers I'll be right over.
Well, ethical scientists don't get employment in academia anymore since the change to publish or perish. You are at a disadvantage from the beginning: during your PhD you can publish tens of bullshit papers or 1-2 genuinely good ones. The problem is that even if you publish good ones, they won't have enough citations by the time you graduate to let you get a tenure-track position at a good institution.
@@futurethinking I like being in math, where lying about your results to your journal is essentially impossible to do. The proof is the paper itself, and requires no trust on the part of the editors that you didn’t lie.
Sadly, since we diverged from small communities where you knew the competence of each individual pretty well, society nowadays rewards you for appearance rather than merit. You are much more rewarded for being an incompetent con artist who knows how to lie convincingly than a competent and productive honest person.
My undergrad degree is in Psychology. When I took the senior research class, the instructor (she was grad student adjunct, not a full professor), told us that if one statistical test did not bear out our hypothesis, we should try another, and another, etc., until we got the results we wanted. I was shocked. When I questioned whether this was academic dishonesty, she shrugged and said, "This is how it's done." The entire field is a sham. I went to grad school for something else entirely.
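That "try another test until one works" procedure has a quantifiable cost. Even under the generous assumption that the tests are independent (reanalyses of the same data are not, but independence gives the cleanest lower-bound intuition), the chance of a spurious "significant" result balloons. A back-of-the-envelope sketch:

```python
# Probability of at least one "significant" (p < alpha) result
# when running k independent tests on pure noise.
def family_wise_error(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 3, 5, 10, 14):
    print(f"{k:2d} tests -> {family_wise_error(k):.0%} chance of a false positive")
```

By around 14 independent tests you are more likely than not to "find" an effect in random noise, which is why corrections like Bonferroni exist and why the instructor's advice was textbook p-hacking.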
It's stuff like this that gives social sciences like psychology a bad reputation, and they only have themselves to blame for not setting up/adhering to standards. Fudging the numbers to fit a particular narrative is not what real scientists do. This adjunct professor should try politics; she'll fit right in there.
@@SorbusAucubaria Not necessarily; medicine in general has a much higher threshold of standards to meet, because that type of research has to go through an IRB (a type of review board that authorizes conducting clinical studies on real patients).
And not just professors. Imagine how much expertise large corporations have - ones with lots of money to make, which have captured their regulatory agencies. It's unlikely we'd ever find out.
I can't help but think of the 5000 Chinese students that were more qualified to get that position but didn't because they really wanted to hire a woman.
My wife is a researcher in the medical field, and she was horrified by the system implemented in some departments at Harvard: they would put several teams on the same subject and let go of those... Underperforming. Think about bad incentives. This scandal at Harvard is just a small speck of ice, sitting on the upper tip of the iceberg.
Academic fraud should be prosecuted imo. Often universities just fire the person and that's the end of it, but there is a lot of public money involved, and it endangers the credibility of science as a whole.
In the Netherlands, a Dutch researcher was caught and sentenced to 120 hours of community service, with the addition of a settlement of 1.5 years of salary to avoid further criminal prosecution. en.wikipedia.org/wiki/Diederik_Stapel
I was struck by the sheer silliness of the subjects of the articles. Also by the fact that in the first paper the subjects being tested were lied to - told that their answers were destroyed when in fact they were presented. No good person would tell such fibs!
I noticed that too. And the smug asshole who revealed this unethical subterfuge basked in the laughter of his audience, who obviously thought lying to study participants is so so amusing, doncha know.
So-called "science" is completely corrupted!! Absolutely everything. Sad but true. People who say it isn't are too dumb or in on it. These are all muppets with "degrees" paid for by industry and others. So yeah, the system is broken. Since the very beginning.
Capitalism is fundamentally fucked. When everything's done in the pursuit of profit, things like "facts", "compassion for other humans" or "morals" are just hurdles.
@@paulunga when the CDC is nothing more than a pass through for corporate interests, then we have people like Fauci taking “royalties”. Laws have been broken, and if we aren’t a nation of laws where these people are prosecuted then I agree -we are f’d.
But the system IS what's broken. It's "publish or perish," so if you're not constantly putting something out there you won't make it. Universities need your papers to get money, so they don't care what you put out there. People with ethics have a really hard time in such a system and many of us get bullied out of it.
Sometimes good science means null results. We had an academic scandal at my university and it was painful. We'd spent 4 years pursuing hypotheses formulated from fabricated results. We couldn't get their antibodies or their transgenic mice to work, and yet 'we' were viewed as the problem. We were made to repeat our results over and over on the assumption that the original paper was pristine, until the news broke regarding the main investigator, and his graduate students were taken in by other PIs. I think the pressure to publish, on top of the pressure to find 'surprising results', can lead to academic dishonesty, but there should be a greater push from academia and industry to publish non-surprising and null results. It might lessen the amount of duplicative bench work and bring greater integrity to the fields. Pre-registering studies and public data repositories are starting to gain traction, but I wish the process were faster. This sort of open sharing can certainly shed greater light on fraudulent data, as revealed by Uri, Joe, & Leif.
This is exactly the problem: our economy is so warped it's better to cheat and get paid than to do actual science and make a living off it. Null results are good; double-checking and peer review are worth doing and spending money on.
Getting published in a journal is like getting a trophy just for saying you've won. Move the standard to peer review, and make people sign a waiver that'll ruin them financially if they're caught lying to the journal.
That is my main takeaway: she got caught mainly because 1) nobody bothered to actually scrutinize her work, and 2) the data manipulation was so badly done. And yet it took many years for someone to finally notice the anomalies; now imagine how many competent grifters are out there with nobody suspecting anything...
@@ii795 It's not *just* about that. But all the "research" by these crooked blue-blooded universities can be and must be questioned. These "Ivy Leagues" have one sole purpose: manufacturing believable intellectual crap that the TPTB can use to sell their propaganda. But they no longer even care about the intellectual part. E.g., Judith Butler's book is literal diarrhea on paper without any intellectually honest research sticking to any actual academia. Yet the TPTB use it to push the gender madness.
@@ii795 Given how many industries, companies and even governments rely on so-called "greenhouse-emitting processes", if climate change were a hoax they would've debunked it a long time ago, given the strong incentive. That is not the case, so I think that one is pretty much cemented as a real thing.
@@ii795 Except climate change isn't solely written about by a single rogue researcher; it's a topic agreed upon by thousands of independent researchers who study things like that for their profession.
They're called actual scientists; all they did was their job in verifying others' work before accepting it as true. Something that has been deliberately stopped and replaced with peer review, to facilitate this exact problem.
I bet it occurs more in some fields than in others, depending on the nature of the discipline (if indeed the field qualifies to be considered as a discipline.) The disinterested search for truth does not drive a field which is structured around political positions.
She ought to get sued. Other researchers have built their research on top of her data. Harvard has seen its reputation muddied. Her coworkers could be collateral damage.
@@michaelgoldsmith9359 Mind you, if you're going to attack a comment with ad hominem first thing, at least proofread your own to make some sense. How did you end up with "coauthors" when he's talking about people basing their work on hers? (That's referencing, in case you didn't know, not collaborating.)
As a person who has authored papers with others, it would be a pretty huge betrayal for one of my coauthors to lie and spoof data. If I have to check every piece of their work with a fine-toothed comb and recreate it myself, why even have a coauthor in the first place? In any sort of big-data field it'd be essentially impossible. The fact that it would tarnish my name is another reason why it's unacceptable.
I worked for the world’s biggest research journal as a copy editor from 1992 to 2005; fraud is rampant and covered up. Millions of dollars are at stake.
Also ironic that being such a massive cheater and full of it, she was incredibly non-creative when it came to hiding her cheating. Kind of further disproves her own hypothesis
A major thing broken about academia is the peer-review system. Do you think that can be blamed in this instance? It seems that quite a lot of work was needed to spot the falsification.
I personally have not participated in getting a paper published or reviewed. Still, I imagine there's a fair bit of rubber-stamping going on, like there is in code reviews (I'm a software developer). We don't typically depend on code reviews for catching bugs/cheats; we have separate quality controls to handle those circumstances. I imagine experiment repeatability is something that should be done more often, though it's probably hard to find funding for such endeavors. AI may have a role to play here. Additionally, we may want to start requiring that the raw data be stored: not just Excel sheets, but scans of any documents/forms used.
Peer review is only as good as the peers who do the reviewing. Most times the reviewers are not that familiar with the subject matter. You have no idea how long it takes to do this exhaustively, either. Most scientists don't have the time and only do it since it is sort of an obligation. As a result the reviews are usually fairly superficial, and generally done by 2 or 3 people. There is not really any other practical way of doing it.

The real peer review happens AFTER something is published, when those working in the field will either accept it or dismiss it as weak or otherwise flawed work. This is also part of the scientific process. Just because something is published does not mean it is accepted or written in stone. Many papers are simply wrong for a variety of reasons, fraud being just one of them. More usually it is because of poor experiment design (typically lack of adequate controls), poor statistics, or the researchers themselves only having a superficial understanding of the subject matter. All of these other factors are much more common than fraud.

Most people who commit fraud DON'T want to be noticed, they just want publications, so it is rarely found in significant papers. Fraud tends to be in more routine stuff where no one will notice because it is just like similar work, except it involves made-up numbers rather than actual data.
@berenddeliagrebohl1981 Nope. While you can sometimes suggest names in your field to the editor, you usually have no idea who the manuscript will be sent to, and in fact it is often sent to people who are clearly unfamiliar with the field, which can be a problem sometimes. If you get a manuscript from someone you know personally and like, you will probably be more kind with the wording of the review, but other than that they get the same critique as anyone else (they will not know that you were the reviewer). If it is someone you know and don't like, they will get extra vigorous attention 😀
Please explain. I understood from the investigation's conclusion that, contrary to the paper's conclusion, there was no statistically significant difference between placing the honesty pledge at the top or the bottom of the application.
I did my master's in Software Engineering and started my PhD in the same college. After a few months, I got so disgusted by the massive egos of everyone in Academia I decided to quit before I lost my soul in that mire. I've been working in the private industry for nearly a decade now. I've met with CEOs who are worth hundreds of millions. I'm yet to see anyone with an ego as big as random teaching assistants in my old university. Honestly, I'm amazed anyone is able to achieve anything in the world of academia.
I've worked on both sides of the aisle: I worked as a research assistant for a decade after getting my BSc in Physics, and I've also worked in the oil and gas industry after getting an MSc in Geophysics. What I've noticed is that the egos are just as large in the business world, but they're found only at the highest levels, and in most corporations they are not allowed to interfere with the operation of the business. There is much more accountability for your actions and behaviour at all levels of a company than you would find in an academic research setting.

The world of academic politics is much more intense, more personal and more vicious than in the corporate sector, precisely because there is little bottom line to damage. Corporate shareholders and boards of directors take a very dim view of infighting which gets to the point of impacting the image and profitability of a company. Whereas two professors and their respective research teams who are feuding don't get reined in; the university normally doesn't involve itself in such matters as long as the feud isn't too public and damaging to donations and reputation.

You're right about how disgustingly low academics can stoop; in 10 years of working in a lab I've seen tantrums which would put toddlers to shame. I've also seen darker things happening which are best left unsaid.
I can only imagine. I've only studied to bachelor level, first in the humanities and then later in STEM, and have been put off by academia just from outside observation, enough to know that it's not something I want to be part of. I think there's something to the idea that the egotism and ideological leanings typical in academia, particularly the humanities, can be attributed to the fact that professors, etc. may have authority in their field but zero direct influence in society (in contrast to business/military/political/religious leaders), leading to an institutional case of inferiority complex. Anyway, good on you for having the awareness and self-respect to remove yourself from that mire, as you aptly put it.
@@glenyoung1809 You put in a long time! I'm getting a second doctorate and I have 5 years in academia (for work). I'm amazed at the tenuous ties to the hospital and department. It feels like no one (dean, finance, payroll, supply chain) knows what we do even on a multi-million dollar project. It isn't that we need more oversight on my studies, but allies to help get our work done. No one believes me when I say it is cut throat; I'm glad you said it. Eye-ing industry....
An important review, thank you, and I appreciated the tender messaging to Francesca Gino at the end. Just one thing, but important: "vigilantes" act outside the law, whereas the academics who tested Gino et al.'s data and found it wanting were conducting research well within scientific rules. They were testing it properly. Real-life vigilantes take the law into their own hands, without objectivity. I know re-recording this so many times would be a lot of work, but perhaps some comment about this is deserved for the brave three writers who took the issue on? Thanks again for the main message of this video, very helpful!
There's a lot of projection going around these days...Once you are aware that folks/groups do it, it becomes very easy to assess what they are really up to.
AI research is the worst, probably the most corrupted sector of science at the moment. Money rules over everything, of course; even physics is affected by political and ideological propaganda. Just look at Einstein's work: nothing of what he suggested has been proven yet.
The thing about research is that it's not hard to get the answer you want: that's a simple matter of switching statistical operations until you get the answer you want. The trick to research is asking the right questions; developing the right tools. A better question is why people are dishonest to begin with.
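The point above about switching statistical operations until the answer comes out "right" is the classic multiple-comparisons trap. A minimal simulation (my own illustration, not anything from the video or the studies discussed) shows how pure noise yields "significant" p-values if you just test enough things:

```python
import math
import random

random.seed(42)

def two_sided_p(successes: int, n: int, p: float = 0.5) -> float:
    """Two-sided z-test p-value for a binomial proportion (normal approx.)."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    z = abs(successes - mean) / sd
    return math.erfc(z / math.sqrt(2))

# Twenty "measures" of pure noise: 100 fair coin flips each.
# Test enough of them and some will look "significant" by chance alone.
significant = sum(
    two_sided_p(sum(random.random() < 0.5 for _ in range(100)), 100) < 0.05
    for _ in range(20)
)
print(f"{significant} of 20 noise-only measures reached p < 0.05")
```

On average about one in twenty noise-only tests clears the 0.05 bar, which is exactly why reporting only the tests that "worked" is so misleading.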
The only way something like this could happen anyway is in an experiment where the subject is strongly encouraged to lie. And unless you're specifically testing for that... well, you get the point. What I find interesting is that she weighted the study towards lying but kept fudging the data until she got the results she wanted. Wouldn't it have made more sense to try to find the breaking point, when the incentive to cheat becomes statistically significant?
@@jeffjones6951 Of course, just as everyone would expect. If he did otherwise, observers would suspect they were being manipulated by him. There's a whole different psychological game going on with Trump, or any Trump-like character.
I agree. The very least any journal publishing quantitative research should do is check for a properly 'cleaned' dataset. All three examples here would have failed that review. Most journals are particular about the software used for the studies submitted to them for publication, so I don't see much extra effort needed to get people to look at the datasets that support the research. Model or modeling errors can be ferreted out by peer review or the readers themselves... Perhaps we need to grade the publishers on such a minimal compliance basis as this.
@@SaintRubicon yea, once again, 3 unpaid professors after like a year or more of researching it. Maybe if there was an incentive or something, given we live in a capitalist hellscape. But something absolutely does need to be done; I just don't want people to turn this into one giant "academia bad" thing because of one professor. There's still thousands of hard-working researchers out there doing real work.
Remember when scientists discovered that Viceroy butterflies taste bad too (re Monarch mimicry)? That was "proof" that science doesn't work, because scientists found their theory was wrong and changed it. My point: trust in scientists is fragile in a post-factual society. Gino makes it so much worse.
In case nobody else tells you, I super appreciate the rundown on your video at the beginning. I'm usually listening and don't get the chance to search a description for a summary. That being said, you're wasting your time. I always manage to finish them:P
One positive about this story is that thanks to PubPeer, Retraction Watch and a growing number of people actively calling out fraud (e.g. Elisabeth Bik), these kinds of people are increasingly being caught, whereas 20 years ago they may not have been.
It's explained by character. There is an absolute lack of character in America today. That probably has everything to do with the psychological operation, as in psyop, that the military/government entity has been presenting unknowingly to this population of liars and cheaters. The lack of character today explains exactly everything we're seeing in what the population allows.
Recently 3 students duped several academic journals. They reworked Mein Kampf as a feminist paper and it was readily accepted. Not only was it accepted, they invited the authors to be reviewers.
Agreed. But do we think she is the only one who has been lying for the last 20 years. How much of the academy's foundation is built upon junk-research dressed up to look fascinating, incredible, impactful? How much has been used to create cultural and political movements whose activists are so believing in their truth, yet their entire outlook is predicated upon lies?
Falsifying scientific data should be a crime. Research like this influences some very important areas of society. Laws, government, business, society as a whole. "Scientific" studies and papers directly influence so many things. On top of that, these people are defrauding and lying to people for their own economic benefit. She should receive a prison sentence.
unfortunately those that are in a position to put forward and enforce such laws are also the ones often with a vested interest in pre-determined outcomes...
no, that directly goes against free speech. You are allowed to publish whatever you like. What is going wrong is the scientific method, namely reproducibility and peer review. Eventually, as in this case, someone will review things in greater detail. But sometimes it might take decades before someone does (and for things people don't really care that much about, it may never get reviewed). Accountability comes about when caught out, you tend to lose your job and are very unlikely to get hired in the field again. It is up to each institution to protect its reputation by holding its researchers to a high standard.
It would be funny if she came out and said she did all this on purpose to see just how long it would take people to actually check the evidence and not believe what others are saying so easily and THAT was part of her behavioral analysis
The sweetness of that ending caught me so off guard it nearly made me tear up. I just realised how that kind of sentiment even towards people who could be deemed 'the enemy' is so valuable.
Academia needs to find a way to incentivize researchers to publish the trials that didn't lead to the expected results, so others learn from them as well and avoid that path.
I guess I will present them three hypotheses about predicting the future to see if they fund me. I bet the results will be surprising.
* Tomorrow will be another day
* After the storm, calm will come
* In only 2 days, tomorrow will be yesterday. 😂
The truth is it's a lot of work. Preprint archives like arXiv are perfect for that role, yet a lot of work goes unpublished. It's seen as a waste of time writing up results that don't advance your career or field; nobody got a doctorate with inconclusive data.
You would think that’s desirable. But it’s something that is looked down upon by someone like my PhD advisor. He would simply imply that it’s a skill issue (meaning you executed your experiments poorly) for being unable to produce positive data. That’s why academia is so toxic and many PhDs are not remaining in it (myself included).
The journals are also part of the problem: they often do not want to publish research that yields uninteresting results, or replication studies that try to verify or refute another paper, simply because new and interesting gives them better PR and means to earn money than uninteresting and old.

The second big part is how the peer review is handled. The journals basically go to random researchers in the same field and ask them to check a paper/study. The researchers get paid pennies for this and often don't have a lot of time to thoroughly check everything. I've read stories where the professors simply relegated this to some student or a postgraduate. And if the paper doesn't look completely bogus/illogical, it gets okayed.

Another big part is how the research budget is allotted. Once again, everybody only wants to finance studies that yield big, new, interesting results, leading researchers to go for fringe topics and issues, with studies specifically designed to get the result they want. It's a host of issues that lead to the very unfortunate situation that academia finds itself in.
Not sympathetic to your sympathy for Gino's cheating. All cheaters cheat for some benefit to themselves at the expense of others. Gino has more security as a tenured professor than nearly any other professional. Any "pressure" she felt was nothing more than greed for prestige and fame. Academics aren't automatically morally superior to others.
There's nothing virtuous in bending over backwards to defend the indefensible. Neither is it compassionate to make up excuses and rationalizations for others.
I think he just doesn't want to be responsible for pushing her further in the direction of doing something self-destructive. Cut him some slack. If he published this video and she took drastic steps, he would feel terrible and people would blame him.
I agree. Academic cheating is not just harmless cheating to get papers published. It should be a crime. The cheaters effectively steal money from the bodies that fund their research, the universities who pay their salaries, the people who pay their speaking fees, etc. Furthermore, their career promotions, and the promotions of the people who coauthored with them, robbed truly honest researchers of opportunities to pursue their ambitions. The cheaters don't just damage their fields; they damage the lives and livelihoods of real people too.
At the start of my PhD I remember that I tried to replicate results from two separate publications and for the life of me simply could not replicate them and blamed myself. I’ve heard anecdotes from other academics that apparently as much as 85% of all published data cannot be replicated. It legit chills me to the bone to think of all the hours I’ve spent reading research papers that are (most likely unintentionally) completely inaccurate and how much of it I’ve included in my own work. Edit: Original comment over-exaggerated academic dishonesty and I felt bad 😔
Same thing happened with me during my master's. I had the simple task of replicating the results from a paper, then applying a new idea on top of it. I don't know the integrity of that work, but there were so many missing details, considering that it is an engineering paper. We never replicated the work, and I got fed up with this project and moved to another. I have no idea how that paper got published, simply because the information provided in the paper is not enough to replicate it.
The reproducibility issue is more to do with insufficient research design rather than fraudulent conduct, as far as I'm aware, at least in terms of biology lab-based work.
@@somedude1666 basically, I am saying that I don't think some journals review papers well enough sometimes, and thus might be vulnerable to fraudulent behavior
@@darkpistol96 I didn't even see your comment tbf lol, I was replying to OP. Yeah, it's pretty bad out there though. It's difficult to trust the data available when data from different labs can be so highly variable for the same experiments.
That's the advantage of the rise of scientism: the undying trust of the flock, who will blindly believe you based on your scientific reputation and credits on paper (and the success of your career), the reputation of your employer (in this case Harvard), and the praise received from fellow scammers in similar positions who are part of the same cliques of con artists posing as scientists making supposedly valuable contributions to the sciences.
@@curious_one1156 "how are you going to find and prove who did the murder? You need to spend money to do that." Isn't that just what investigations of crimes are about? Spending the money to gather enough evidence to prosecute the person who caused harm. Seems like the guys who've published these articles were able to do whatever digital forensics needed to confidently allege deliberate editing of the data.
It's not a legal document, you can't charge them with perjury; the first amendment protects the right to lie in most situations, this one included. The solution isn't legal consequences but increased skepticism of academic studies that haven't been reproduced by multiple independent researchers. Repeatability is the cornerstone of science, if an experiment hasn't been repeated multiple times using the same methodology, the results should always be suspect.
I was shocked to learn that when a paper is submitted for peer review, the reviewers usually do not have complete access to all of the original data. This makes it impossible for reviewers to properly check the paper's analysis and conclusions.
You usually read it and give a few comments, suggest some citations or some changes to the paper, but you never go through any tables or check any data.
If the paper is very closely related to your research, you might assign a grad student to try to replicate the results. In that case you might request the original data. But otherwise you're not really equipped to evaluate the data, so there's no point in having it. When reviewing, you're mostly checking the methodology, ensuring clarity, and so on. You assume that the data was compiled correctly and honestly. Somebody will check the results in some future experiment; that's not the reviewer's role.
@@glenm99 I would think that a peer ought to be "really equipped to evaluate the data". You're his peer, after all. Also, the reviewer should check the methodology by reviewing how the data was analyzed. In the famous Michael Mann paper about climate change with the hockey stick graph, the data was massaged in a way that any input values would show an increasing trend. Without adequate review, any paper could contain misleading graphs, misleading statistics, and unwarranted conclusions. Even a completely honest author could have inappropriate statistical analysis.
When I was an undergrad in the 80's I had a visiting professor from Harvard who taught a class in Psychology. He was working on a pet project of his at the time and talked about it all the time, so of course we decided to suck up to him to get a better grade by working on some related research for our term paper. My little team did our research and found zero effect, absolutely nothing that he was hypothesizing was right, which wasn't what we had expected. We knew that he wasn't going to be happy with that result, so we worked as hard as we could to explain this or that away to make it look like there was ANYTHING there. We managed to eke out a "maybe if this had been different the effect would be this" kind of paper, and he was fine with it, but we knew that his theory was totally wrong. A few years later, I saw that he had published a paper about it that had a completely opposite result, everything that he was theorizing was proven beyond a shadow of a doubt, and it was basically accepted as brilliant scientific fact. I've always thought that he must have been very selective in his research subjects, or that there was some manipulation of data by somebody to prove his point. To this day I'm suspicious of everything that comes out of "research" of any kind, I want to read the actual papers before I'll believe what people say the study says. Half the time the data is weak, and the other half it's being misinterpreted by whoever wants to make a point.
This is also true of news and the media that delivers it. Don’t just believe the reports that the news media disseminates. Investigate the sources and draw your own conclusions.
You know what else is a problem? The people who know the names of these fraudulent 'researchers' and don't publicly expose them. If you want to make a difference, post his name and your research proving him wrong.
That dishonesty/creativity paper is extremely dangerous for society, especially coming from Harvard. We already have corporations ruined because of what their leaders learn at top business schools like Harvard, Penn, Yale, et al. (money above all else, even if you have to fake numbers and cheat and do things nearly illegally), but having an actual paper from Harvard that basically rewards dishonesty really is scary. Btw: I know someone who went to Harvard. She said the hardest part of Harvard is getting in. Yes, the education itself was good, but she felt it wasn't anything "special" beyond the pedigree, and she said she likely could have gotten just as good an education elsewhere.
The scariest part of this is that all her mistakes that got her caught were really stupid, and could have easily been prevented with more careful cheating. Which makes me wonder if falsifying data in psychology research is far more ubiquitous than it looks.
People who will do this are arrogant. They always think they are the smartest one in the room and it will be easy to dupe others because they are stupid.
The craziest part of this data manipulation is she wouldn’t have even been caught if she wasn’t so messy and lazy with the fake data. Seriously. If she had just sorted it and labeled it identically to the other answers… we would never know. And that’s not cool.
It always strikes me how lazy these guys (Schoen... etc) are when they are fabricating their data. But, honestly, these guys can't be all that dumb! Now this would lead to an unpleasant conclusion ...
The scary part about this is: she (or whoever altered the data) was only able to be caught like this because she was sloppy. She could have easily re-sorted the sheets, and then it would have been much harder to figure out where the manipulation was. How many people have faked data without making such elementary mistakes?
Yes and no. Maybe the manipulation wouldn't have been caught, but as shown in the video, a number of researchers found the results suspicious which means even if they couldn't find problems with her methodology, they would still make a strong point for independent replication with good transparency and then the lack of successful replication would cast doubt on the original studies. Independent replication is a cornerstone of building established scientific knowledge which is why I'm always frustrated with how many people hype up novel scientific findings before enough good research has been done.
@@yiorgosh4739 Yeah, that's a great system. By the time the fake is called, after all the later research, she has had a prestigious career and retired rich.
@@johnpublic6582I don't know why the sarcasm. Science is a process for better understanding the laws of nature, despite our human biases and failings. There have been many egotistical and shady scientists and doctors throughout history. That's not a hit against the process. To the contrary, the success of science over time speaks to how powerful the process is despite our human bull****. If you want to feel better about someone getting their comeuppance, look elsewhere.
Good point. So, what makes us so willing to accept that someone so good at her craft was that sloppy? How do those two thoughts coexist? Isn't it more likely that she did not alter the data?
@@codyspendlove8986 Do we know for absolute certain that she (or, again, whoever it actually was; I don't actually care who) did alter it? Well, no. However, it's at least a plausible explanation for the evidence. What would be your explanation for why spreadsheets that are otherwise sorted in particular ways just happen to have particular rows out of the sorted order, _and_ that those rows just happen to be among the ones contributing the most to the claimed effect, often having most of the extreme values, _and_ that when plausible real values can be hypothesized, substituting those makes the effect disappear completely? And why the weirdly consistent bad answer to the "year in school" question? _And_ this apparently happened at least three times, maybe four (I don't know about the fourth instance alluded to, but presumably there's similar evidence in that case). How likely is this to happen by chance? _How_ does it even happen without tampering? (Seriously, I don't have ideas about alternative explanations; if I did, maybe I could bring them up. If you do, then maybe we can discuss them, because I realize there is a possible cognitive bias here.)

Also, just what craft is it she is supposed to be good at? Doing science? Writing academic papers? Sure, those make sense. Altering computer files without leaving forensic evidence? Not necessarily. Not to mention, she may not have thought anyone would bother to look at the raw data, although admittedly I'm not sure why, seeing as the results seemed too good to be true. It is indeed a little baffling to me why someone seemingly so smart would be so sloppy in this one way. (I would like to ask: do the researchers at Harvard, or at many universities, routinely make files with their raw data accessible as a matter of course or even as an institutional rule, or was it just good fortune that hers could be found? If she really thought no one would ever look at the files, or that they _couldn't_ look at them, then it makes more sense why she wouldn't bother to cover up the anomalies.)

Then again, claiming such small p-values in a field like psychology seems suspicious in itself. If the data really was tampered with, that in itself wasn't so smart. Much better to be modest about it, if possible (obviously, p-values aren't necessarily easy to manipulate arbitrarily, but still...). Not, of course, evidence for or against it being fraud, but maybe it helps to answer the question about seemingly contradictory thoughts coexisting.
Another sign that some fields of academia are broken is the over-reliance on finding a p-value < 0.05 to show your research is important:
1) 0.05 is a pretty arbitrary threshold, and a pretty low one at that
2) Rejecting the null hypothesis doesn't necessarily support your alternative hypothesis. Besides, p-values are probabilities about the *data*, not how likely your hypotheses are to be true!
3) Something could be statistically significant, but have a meagre effect size
4) Your sample could violate the assumptions of your statistical test (usually an assumption of normality and independent samples), thereby making any p-value you produce straight up meaningless
P-values can be useful, but only in the appropriate context, under the appropriate assumptions, and with an appropriate interpretation. Getting a significant p-value is a decent *first step* to then conduct follow-up studies. At least, ideally.
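The "significant but meagre effect" point is easy to demonstrate: with a large enough sample, even a practically meaningless difference clears p < 0.05. A quick simulated sketch (sample size and effect are arbitrary values I picked for illustration):

```python
import math
import random

random.seed(0)

n = 1_000_000
# Two simulated groups whose true means differ by a trivial 0.01 SD
a = [random.gauss(0.00, 1.0) for _ in range(n)]
b = [random.gauss(0.01, 1.0) for _ in range(n)]

diff = sum(b) / n - sum(a) / n                 # observed effect, in SD units
se = math.sqrt(2.0 / n)                        # standard error (unit variance known)
p = math.erfc(abs(diff / se) / math.sqrt(2))   # two-sided z-test p-value

print(f"effect ~ {diff:.4f} SD, p ~ {p:.1g}")
```

The p-value comes out vanishingly small even though a 0.01 SD shift is far too small to matter in practice, which is why effect sizes deserve as much attention as significance.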
It is good to bring up the topic of the broken 'publish or perish' system at universities. You must publish papers to stay employed, the only papers that get published are ones where you find a result, find something new. As you have no power over finding a result, that is essentially random, you are instead incentivized by your employer to influence what you can control, which is creating the appearance of finding a result. So long as the system pivots around a factor that professors cannot control, they will be encouraged to cheat as cheating or luck are the only two things that will keep them employed.
Same problem exists in industry in a different way. You get handed projects to work on, and maybe that project works out and makes the company a lot of money, or maybe it fails. Whether or not it succeeds has very little to do with the people running the project, usually. For example one analytical development project I had in the pharmaceutical industry -- product failed in formulation and was likely impossible from the get-go under the constraints the company had to operate by. But it was still 18 months of effort from me on the analytical side that yielded me zero that I could report as an accomplishment in performance reviews, or for promotion, or on my resume for other jobs.
I'm going to wait for her to release a new paper about how all the cheating she did on the previous papers was actually just part of a larger study on how much cheating you can get away with in academia. People like her always double down.
And that would be ten orders of magnitude more stupid than what she has done. Even if that's true, it would be a terrible experiment setup with like zero research value. Three academic frauds from one person? That's a joke of a sample, not even fit for a primary-school project for 10-year-olds.
I'm reminded of the apocryphal saying that most psychologists go to school to figure out what's wrong with themselves. The fact she studies honesty isn't ironic, it's telling.
It's horrible. But it's also a product of the pay system: she keeps her job only if her research papers are sensational. She gets even more money selling articles on the studies. It's a society problem, because people cheat and cut corners all the time even in regular, non-academic jobs. We're taught to never admit mistakes, and this is the result: more lying to ourselves and others.
That would actually make her a genuinely good and honest pure soul. I don't believe this. People who are true to themselves are true to the world as well.
The suppression of valid, peer-reviewed studies showing that IVM and hydroxychloroquine both worked against the virus---while at the same time, the Lancet published a giant FAKE study (later retracted due to lack of actual valid data) showing HCQ didn't work against the virus--tells me everything I need to know about today's "studies". Pfizer itself doctored or removed its own trial data to make its failed experimental product look like it worked.
I remember one day in a physics class I screwed up the math or something, IDK; the (already known) viscosity calculation of a fluid was coming out way too high. Funnily enough, it was spot-on when I redid the calculation with π=3. Now that I think about it, the issue was probably that the cold temperature increased the viscosity.
It's a fundamental flaw of "publish or perish" professorial/grad student evaluation models. Negative results? No publication. Unsurprising results? No publication. Replication of a previous study, to confirm/refute the results? No publication. Data actively contradicts your hypothesis? Still no publication. Combined, that means study authors have a huge incentive to fudge the numbers to make it look like their new, interesting hypotheses were correct every single time.

...which is why these "vigilantes" had to do what they did the way they did. "We were skeptical about the results of this top-of-the-field researcher, but couldn't get the same results" ticks all the boxes, but the wrong ones. Replication study? Check. Negative results? Check. Without something juicy, something like "faked data!", their "hey, this can't be replicated" findings might well have been ignored.

I first started to question some of this back in 2006, when I realized that the grad student I was working with designed the study saying "if A, then our hypothesis is right," and then, when the results produced the _opposite_ of A, said that _that_ proved... the same hypothesis. And if I hadn't known about the previous position, the spun explanation would have sounded good.
That data is honestly so strange. Like, it’s not what I would call “blatantly faked”, and it’s definitely not well hidden. It’s just… weird. The “Harvard” school year in particular is just bizarre. It’s almost like a third party who didn’t even see or understand the study was told to change the data to create the desired result
Agreed. Starting the video, I expected that the data had failed Benford's law or similar statistical tests for fabrication, but the manipulation is beyond "lazy". Why even keep the files at that point? Why not take the 10 seconds to re-sort and strip the metadata? It's like making counterfeit nickels on a Xerox and then trying to sell them to the Feds.
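Since Benford's law keeps coming up in this thread: here is a minimal sketch of what a first-digit screen looks like. The function name and the deviation measure are made up for illustration; real forensic work uses proper chi-square or similar tests, and Benford only applies to data spanning several orders of magnitude.

```python
# Illustrative first-digit Benford's-law screen (not a forensic tool).
import math
from collections import Counter

def benford_deviation(values):
    """Compare observed first-digit frequencies against Benford's
    expected P(d) = log10(1 + 1/d), for digits d = 1..9.
    Returns the maximum absolute deviation across the nine digits."""
    # Extract the first significant digit of each nonzero value.
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    max_dev = 0.0
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)
        observed = counts.get(d, 0) / n
        max_dev = max(max_dev, abs(observed - expected))
    return max_dev
```

Naturally occurring multiplicative data (e.g. powers of 2) hugs the Benford curve, while uniformly generated numbers deviate badly, which is why fabricated figures often fail the screen.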
@@jamesdavis3851 I'm not even angry at her for faking the data. I'm angry at her faking it in such a simplistic manner. A 10-year-old with basic Excel knowledge could do better than she did.
How about professors in tier-two universities who cannot get funding to support their team because their results are unimpressive compared to these falsified results?
That's because they're paid to check your work. No one is really paid to check published works like that; it would in fact be a detriment to both the journals and the universities.
@UnusuallyLargeCrab - I went back and checked your original post before you commented, and it read ‘would have *_choked fake cola_* this blatant.’ Hmmm. p< 0.05 Eh ?!
Right?! That's by far the biggest surprise to me. Reduce the significance a bit and sanitize the file, and I bet there would be zero proof, but they couldn't even be bothered to do that.
Peer reviewers don't check data and calculations. Peer review is much more about checking descriptive language, methodology, equations, and conclusions. The details are checked by others post-publication. Being retracted is an academic's worst fear, and that fear is what keeps most academics honest.
Honestly, I'm not shocked. When I wrote my bachelor's thesis in psychology, my professor basically demanded that we tamper with the dataset in order to produce an effect, since the full dataset wasn't showing anything.
Same. This is a huge issue with data and statistics: they can be manipulated, and often there is an incentive for researchers to do so. This is very sad, as it undermines public trust in academic research in behavioural studies.
As a humble bush accountant, I'm stunned that if someone 'adjusts' a data set in a spreadsheet they do not even have the foresight to re-sort it. Wow. Not that this would have totally hidden the manipulation but it would have made it at least look legit.
As a former “financial analyst” (accountant), I suspect that the data was auto-sorted originally on a spreadsheet. Then it was pasted into some other source unformatted, removing any links and formulas so they don’t break - standard practice. Then when data was added in, the data wasn’t sorted as expected. Not that I falsified any data, just that this kind of thing is fairly common when editing raw data that has been used/linked elsewhere
Just trust the science. Yeah, right! Figures don't lie, but liars can figure. To people who believe that the ends justify the means, this dishonesty is "ok". Such people are despicable.
This has been a problem for decades. Alfred Kinsey's data was well known to be of poor quality, and he used bad reasoning to extrapolate from prison data, misrepresenting it as representative of the general population.
He also used data of nonces touching kids to prove that kids had a sexuality. Kinsey was such a scandal, it's interesting to see him celebrated in modern culture
Whoa there. That's the God of transgenderism you're talking about. You can't do that. Everybody knows that gender ideology is above criticism and scrutiny.
That's like John Money, who lied about being able to change your identity. It was all a lie while he went around lecturing on the subject using falsified data, and now look what we have.
Then there's the Lindsay, Pluckrose (and some third researcher) scandal, where they purposefully submitted bad research just to show how stupidly easy it is to push bad data into journals despite all the supposed "peer review" that should be weeding it out.
Getting that high in the academic food chain is more or less the equivalent of becoming a governor, I'd say, so you're not wrong to suggest what you suggest...
People are always shocked when I tell them that the quality of the people at Harvard is like an inverted normal distribution with most people at the two extremes. I had this experience before and after experiencing people at Harvard. It seems likely to me that Harvard is home to one of the highest ratios of fake vs. actual experts in the world. In fact, in certain academic settings in the larger Boston area, I've heard "Harvard expert" as a term used for someone who doesn't realize their expertise is lower than they present it, but also for someone who disingenuously presents themselves as an expert.
This beautifully demonstrates that "scientists" and researchers are no different from everybody else. The myth that science is unique in that it is unbiased and honest and trustworthy is rightly turned upside down with this report. Researchers are NO different from everyone else: flawed, prone to lying and deception, and motivated by making a name for themselves. Science is not pure because humans are involved. And I say this as one who has a PhD in a field of science and did my own publishing in research journals in my younger years. Thank you for this video! Everyone should watch this!
As a data scientist, I'd find it interesting to check which authors tried to pull their datasets from public access after this revelation. That would be a good indicator for checking them out as well.
The main trouble isn't even the scientific fraud; that mostly gets uncovered over time. As Wakefield proved, once fraudulent data has spread, it doesn't really matter if it's later shown to be fraudulent. Universities and journals both need to go much further: all titles get stripped, and the paper gets marked as fraud but is left up for anyone to see. And yes, they may even have to introduce some basic algorithmic checking (data irregularities, logical fallacies, a summary that doesn't match the actual data) instead of only peer review. The process is simply not designed to detect fraud. (There was the funny case where a bunch of informatics students submitted a completely nonsensical paper generated by an AI. It passed peer review, since basically all the reviewers couldn't make sense of it and marked it as "not really my area of expertise, can't comment.")
@@tom9380 The whole system is corrupt. The only data or science you can trust is what you gather and analyze yourself. There is too much money and too much tribalism and it corrupts everything.
@@tom9380 I'll try to elaborate in their stead, as I think I share the same concern. When a scientific paper is published in a peer-reviewed journal, the journal is responsible for reviewing it (usually by sending it to other researchers from the same field who volunteer to review), which is supposed to ensure the quality of the articles. Given how badly these fake data were forged (assuming they are fake, which they seem to be), no serious peer-reviewed journal should have accepted those articles. So it raises two hypotheses: either Gino and others paid, or exerted similar influence on, a journal in order to be published, or the journal is simply very bad at its job. This is quite the problem, since articles in journals are often behind paid access, meaning journals make money by publishing articles. Even though this practice is debatable (science is often funded by public money, as are the volunteer reviewers, so research results should arguably be free to access), it is usually justified by the quality of the articles. If either of the two hypotheses above is true, it would mean that even this justification for paid access to scientific knowledge isn't solid, and it raises concerns about the quality of all articles published in the same journals as Gino's.
Honestly, every interaction I've had with people from Harvard tells me that we should expect this kind of behaviour from the school and their graduates.
I once talked with some Harvard students in my home city of Copenhagen. They were so pretentious that I pretended I had never heard of Harvard, which made them so angry and frustrated.