Lex Fridman I love these clips (amuse-gueules). They're great teasers for the full conversations which, let's be honest, are a bit dense but totally worthwhile. Yum yum.
I'd love to see you have Ben Goertzel on your podcast; both of you have been on the Joe Rogan podcast. Even if there is a lot of technical detail in the discussion, it would be inspiring to watch.
Human social/economic pressure crushes the incentive to do the creative work that AI is supposed to liberate us to do. Did we find the answer to 42 already?
The underlying problem here is the pressure to publish "improvements" and positive results. This is endemic to every field of scientific research and drags them all down. Null results are frowned upon, but they shouldn't be, because knowing what NOT to do is just as valuable as knowing what to do.
Unfortunately this is true. You might even discover some new method that didn't work for your application but will be useful in the future, but that doesn't really matter. All that matters is that at the end of your paper there is a graph that is strictly above every competing approach.
We have to be strong! Writing a paper is just like trying not to eat another piece of pizza! You must control yourself and focus on what makes a difference instead of doing what everyone expects of you!!
It's reality. At my university, which is actually great, some people admitted that they write papers just to write papers. They know the papers have no scientific value, but they just want to be published.
Most research has no immediate application, but that doesn't mean it's a total waste of time. In fact, it is precisely the blind pursuit of immediate application that hinders scientific advance. Funding for basic research has been dwindling, and scientists have been forced to stay within the circle of their most immediate expertise to get a job. To solve our society's problems you need to allow people some freedom to explore seemingly irrelevant things, just like exploration in reinforcement learning.
I didn't get the impression that his message was that *everyone* should focus on immediately functional results. I think he's merely pointing out that there are just too few researchers doing it. (Granted, I don't know if that claim is even accurate.)
His claim is quite accurate. In most fields people jump on the more-researched areas, since the groundwork is already there and you can build on it. Take something like capsule networks: it's completely different and new, and you don't see 10 papers on it the next year. Same with deep RL: there are so many problems and so few people addressing them.
It's saddening but true that most research has no immediate application. However, every piece of research should have a vision: what sphere of life could it influence in the long term? Not a vague vision, but an end-use vision.
Exactly. Academics focus on things that are interesting, not on commercial or practical value. For instance, trying to figure out, theoretically speaking, why neural nets work so well.
He is talking about researchers publishing papers just for the sake of getting citations, in highly researched sub-fields, doing work that they know from the start will have little to no significance.
One advantage of doing simple, "useless" research is that we familiarize ourselves with the tools and practices of our field. This is especially true for PhD students.
@@hashkenhabib I'm not familiar with how you read arxiv. Do you browse it directly, or do you just read whatever gets linked there? I see no problem in using arxiv in the same manner as github.
@@sassort Sure, but there is no review process. The review process as it stands is questionable too, no doubt. However, because of the lack of review, and because anyone with $5k can do AI research, there is a flood of papers that are utterly useless (+0.1% on MNIST) and would at best make a blog post. Because of that, you need a lot of time to sift through the jungle of papers to find something good. What ends up happening is that you default to a few big names you know you can trust, which entirely defeats the purpose of arxiv in the first place.
@@hashkenhabib I would say that we should treat arxiv as just that: a platform/forum for discussion. It might be prudent to inform readers that unless a paper has been published by a verified journal, and you can read that on its front cover, you shouldn't assume it has been rigorously conducted. That's how I've treated it so far: many relevant names have published first versions on arxiv for discussion before sending them to actual journals or conferences. Again I'd ask: why do you say you have to sift through the jungle of papers? Are you actually going directly to arxiv and reading the latest papers there? I've only ever followed links to arxiv myself. Of course, when googling a specific topic I do end up at arxiv too, if that's what you mean. But for me it has worked to just check the names and whether the paper was published in a good enough venue. Furthermore, I think it would be pretty hard to moderate what gets published on a platform like that unless a journal assigned a group of moderators specifically to the task, and then it wouldn't be cost-efficient to run without a user fee.
I did my PhD on stuff most people weren't interested in, because I can't stand following trends. I was able to grow a great deal because of that, but man, it sucks in terms of gaining what is actually valued in academia.
That is a problem I'm having at the moment. I'm trying to do something practical, but I'm advised to follow a certain method that would yield a scientific contribution (and at this point I'm really not sure what that means anymore). When I ask why I cannot publish something the way I want (why my method needs to be a certain way), the answer is: "the editors will not understand the dozen new things you propose, so you won't get published, and consequently you won't get a PhD." I understand where they are coming from, but really? I'm trying to find something that will work, yet I'm advised (by almost everyone) to focus on why it works and explain it. Yeah, I will do that, once I have found something that works in practice with real and messy data.
Being allowed to do "useless" stuff for my undergrad thesis was such a luxury. Honestly, I did it out of pure passion and aced my degree thanks to it, securing a PhD scholarship.
@@a_name_a Unfortunately that is the best solution we currently have. Nobody has come up with a better system yet. Company/private research labs face basically the same issues, but nobody can do much about it. The thing is, no one can tell whether a research direction, or the researchers themselves, are wasting their time, since no one knows which direction is correct. Only when the results are out can we partially judge that. This has been the case for thousands of years. Nobody foresaw that Descartes' mechanism would overthrow the Aristotelian worldview, or that Planck's constant would create the field of quantum physics. No one can tell what is correct, or whether "correct" is even well defined. What is "correct" by definition anyway...
@@a_name_a I'd like to start by sharing my disgust at someone using the term "academic" in a derogatory fashion. This is further evidence of the degradation of this society: looking down on academic institutions and their researchers. It's further sickening that people viewing this channel would ever upvote your comment, as this is the channel of an MIT lecturer. MIT carries the proud torch of the academic tradition of research for the sake of research. Surely the primality of certain natural numbers was a useless academic curiosity, yet many years after their discovery a society far removed in time uses them extensively for encryption. Science is a process of discovery. We will find ourselves only temporarily served by research guided by necessity. A broad inquiry across disciplines, irrespective of application and profit, is necessary. "It is a profound and necessary truth that the deep things in science are not found because they are useful: they are found because it was possible to find them." - J. Robert Oppenheimer (technical director of the Manhattan Project) "Although perhaps of no practical importance, the question is of theoretical interest, and it is hoped that a satisfactory solution of this problem will act as a wedge in attacking other problems of a similar nature and of greater significance." - Claude Shannon ("father" of information theory)
The same could be said about all papers produced in pure mathematics and philosophy. Still, a great way to get cheap recognition as a grad student or new PhD!
I think another reason for this problem is the feedback loop caused by people doing pure research needing to secure funding and looking at what other programs got funded. The majority of modern computing advancements can trace their roots to Bell Labs in the 60s and 70s and its policy of hiring smart people and paying them to work on whatever interested them.
It seems like the more things are discovered, the more things there are to explore. Most papers or "research" are worthless, but you never know whether something could lead to something more worthwhile. Science continues to branch out into sub-disciplines and cross-disciplines. Most research is worthless, but you have to fund everything if you want to discover something.
@@musicalfringe Hmm, I think it's clear that there are way too many branches in any field. The point is that the research is not done for the sake of research and innovation, but rather to get some "citations" or for some other selfish reason.
@@FirstNameLastName-rh6zc You can't always work on what you want. The lab has a leader, and you can only work on what they agree to. They are the ones who bring in the money for research; they have the name that attracts funding, so you can only go as far as they agree, since you will be spending their lab's resources: money and time. Another problem is that income also depends on how often you write papers, and the measure of performance varies from institution to institution. Students have a deadline to finish their research. You can't spend 5 years trying to make something outstanding; you have to finish papers that get accepted by respected journals or conferences. If you see many papers being accepted using one mathematical model, you can surf that wave to get a paper accepted with your name on it. You can't be a new Isaac Newton who spends his time trying to get something out of alchemy when you have to put out something new every year or two. The closest you get is several "non-important" papers on the way to a bigger goal, and that only happens when you do have a good new idea. Most people in the world, including researchers with PhDs, can't come up with new ideas that will result in something significant. Some people come up with only one good idea in their whole lives. And there's only one spot for whoever is first: if someone else has the same idea as you and finishes before you, you are not the one who made the breakthrough. I guess most people who make a breakthrough discovery are lucky that no one else had the idea before them. A good amount of a researcher's time is taken up by reading what other people have done, precisely because of all this. Many researchers aim for a tiny variation of the same problem that no one else has tried yet. We can transfer this discussion to other fields. Take the music industry: why do most songs sound pretty much like one another? Why can't all musicians spend their time making songs that are completely original?
In my experience in Deep Learning research, it's been 10% coding and 90% waiting for code to finish running. But then again, I have no publications so I'm probably doing it wrong lol
That's been my experience too, in fact you're lucky if it's as much as 10%. I think this is an area of software where performance actually matters early on because it affects the rate at which you can experiment.
Stop blaming or laughing at the incremental research done by junior people; they need to minimize risk and get publications. It is the big shots' and famous researchers' responsibility to guide the field in the right, or riskiest, directions!
@@yzhang9198 Uh, no. Working on hard, risky projects is a privilege that needs to be earned. First, show that you can understand the research. Second, show that you can make an incremental improvement. And then maybe you can get paid to work on groundbreaking stuff.
It's not about blaming people, it's about critically examining the system that pressures people into wasting time on nearly useless research just to finance their careers.
Okay, I bet you're the same type of person who wonders why PhDs are "underpaid". It is precisely because of the issue Jeremy is describing: they work on practically useless projects that don't benefit society.
A friend of mine just beat the SOA massively in translating sound in an emergency scenario, by implementing old embedding algorithms, likely because no one else had cared to try...
@@madhureshminoshi4272 SOA is state of the art. So far people have used one algorithm to create the embeddings, but my friend used two algorithms to create embeddings and put a CNN on top of them to get an improved score. Though I dislike the way they test, as it is cross-fold validation...
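Not the commenter's actual setup, but a minimal hypothetical sketch of the general idea in PyTorch: compute two different embeddings of the same input, stack them, and let a small CNN learn from the combined representation. All names, shapes, and sizes here are made up for illustration.

```python
import torch
import torch.nn as nn

class DualEmbeddingCNN(nn.Module):
    """Toy model: two embedding tables for the same token ids,
    concatenated and fed to a 1D CNN classifier."""
    def __init__(self, vocab_size=1000, emb_dim=64, num_classes=10):
        super().__init__()
        # Stand-ins for the two "old embedding algorithms" mentioned above.
        self.emb_a = nn.Embedding(vocab_size, emb_dim)
        self.emb_b = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Sequential(
            nn.Conv1d(2 * emb_dim, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.fc = nn.Linear(128, num_classes)

    def forward(self, tokens):           # tokens: (batch, seq_len)
        a = self.emb_a(tokens)           # (batch, seq_len, emb_dim)
        b = self.emb_b(tokens)
        x = torch.cat([a, b], dim=-1)    # combine the two embeddings
        x = x.transpose(1, 2)            # Conv1d wants (batch, channels, seq)
        x = self.conv(x).squeeze(-1)     # (batch, 128)
        return self.fc(x)

model = DualEmbeddingCNN()
logits = model(torch.randint(0, 1000, (8, 32)))  # dummy batch
print(logits.shape)  # torch.Size([8, 10])
```

The point is just that stacking complementary representations gives the CNN more signal to work with than either embedding alone.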
He has a point here. In addition, many researchers stay in their corner sticking to one special topic for which they get a reputation and build a list of publications, and this in turn becomes important for getting funded. And so the cycle goes on for years, with researchers doing the same thing.
Sometimes, people forget about other scientific fields. I use deep learning in cancer genetics to create genetic signatures that help with prognosis. Since my dataset has almost 27,000 columns, it helps me a great deal to help medics help people. If we didn't have basic research, even the apparently useless kind, science just wouldn't exist.
I think a major reason why lots of research is a "total waste of time" is that popularity and interest play a role both in what an engineer/researcher wants to put their time into and in what the public response will be. People want to see AI beat professional video game players; they don't want to read a technical report bloated with jargon and lots of material that is hard to understand and stay focused on.
I think there is something important that is missed here: neural networks were 'useless stuff' for 20 years, but some researchers kept doing academic research on them. Academics continue to explore many directions, and most of them end up being useless, but the whole point is that we don't know what will end up having an impact in the real world. That is why it's important to keep an ecosystem that explores basic science with a long-term horizon, while companies focus on short-term immediate gains.
So many people commenting have completely misrepresented his position to the point that I don't think they're even listening to the words coming out of his mouth. He's not saying basic science is pointless. He's criticizing the fact that the vast majority of the machine learning community is hyper-fixated on a very narrow set of problems. As a result, the majority of that research is "useless" as the field is bloated with inconsequential optimizations by people simply looking to get citations. In fact, he's advocating for MORE basic science in under-explored fields that could still make huge breakthroughs and radically improve the state of machine learning. It's frustrating that people don't even respond to the video but rather to the imaginary argument they constructed in their minds when they saw the video title.
It's things like this that make me wonder why our systems make it so difficult for an interested individual to work on stuff like this. There's no room in our society for people to do research on interesting problems outside frameworks like academic research or specialised companies. There's so much untapped potential out there if we could just find a way to make it practical for people to detach themselves from wage slavery for a year at a time.
Agreed. The more you have to work within the system, the more you recognize how much better it could be without the overriding financial incentive ruling everything.
Do what you love, play with the ideas that you enjoy. You don't always have to have a metric. Nice to have long-term goals, and maybe a sense of direction that will take more shape over time, but don't let that stop you from enjoying it now. You are always producing something for your efforts, and sometimes, that something is just your happiness, and that's awesome.
Here is a story. JC Bose in India was working on microwaves, saying they could be a future mode of communication, but everyone laughed, including the Royal Society of London. So much so that the invention of radio was attributed to Marconi, who didn't actually invent the radio; he used the coherer built by JC Bose to simply transmit a radio wave. 50 years later, people realised that FM and AM are not the best way to communicate, and that microwave transmission was going to be the game changer. So, research never goes to waste. In the future it might also tell you that you shouldn't waste your time on some xyz because it's of no use and has already been tested. Just as Edison went through hundreds of failures creating a filament, only to realise that tungsten was the one material suited for the purpose.
Jeremy's claim is that not enough people work on transfer or active learning. At the same time, he only ever sees published work, and most likely not all of it, but only the trending stuff. So why is he not considering the possibility that people actually do care about these topics, and it's just that they are genuinely hard, so tiny movements in those directions never become popular enough to reach his attention?
My perspective is that nowadays papers tend to be pushed out just to publish and not perish. "Hey, we got this new dataset, let's apply DL to it and make it crazy... like do 1000 layers! Wow, great paper idea!" Yet the foundational problems are addressed only sporadically. Just look at how many GAN papers came out where the scientific contribution was minimal at best, and how it took a few years before people actually did something about the distance metric, the architecture for high quality, etc.
Is it? I mean, let's say 1 out of 10 research projects yields a great solution. The fact is that if the other 9 were not continuously trying to get better results, they would never become that number one; it's by "wasting" time that you eventually become the 1 out of 10. What I mean is that if no one wasted time on this, no one would get good results either. It's all part of a system where trying and failing gets us all to succeed.
I think the point is that the bulk of machine learning research is on very specific improvements in narrow fields that have already been researched to death, and very little research time and effort goes toward the broad range of other research areas. If the field of deep learning is like a metaphorical car, then rather than trying to continually improve the car's design across the board, the vast majority of research is going toward engine efficiency and tyre technology, while perhaps neglecting improvements in safety, chassis weight, brakes, usability, etc. So in 10 years maybe the engine's efficiency improves 5%, which is great; but if the research were split more equally, perhaps the engine's efficiency would improve by only 3%, while the chassis's weight might also drop by 3%, reducing fuel demand and creating an overall greater efficiency gain. Not the greatest analogy, maybe, but I think you get my point. :)
@@WeAreSoPredictable Very valid point, my friend. Let's hope at some point engine solutions oversaturate the market so we can start research on the brakes as well, haha.
One of the main topics in the video was that active learning is not being worked on. Keep in mind that active learning involves having a researcher working on the technical details, as well as a non-technical person (re)evaluating the decisions made in the process. As a non-PhD who has had some contact with the PhD world: most of the project proposals I've seen at a university can afford some €30k/year, which is barely enough to pay a full-time researcher. Maybe I am biased, in the sense that the proposals I have seen or been involved with might not have been ambitious enough, but I do believe that research is guided by money, i.e. by what the most cost-effective project will be. This seems to align with the trending topic of the moment. A related example is start-ups, which are usually aligned with trending topics; for instance, this year there are many start-ups based on large language models.
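For anyone who hasn't met the term: here is a minimal sketch of one common flavour of active learning (uncertainty sampling), using scikit-learn. The dataset, pool sizes, and query budget are made-up illustration values; in a real system the queried labels would come from a human annotator rather than from y.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A large unlabeled pool plus a tiny labeled seed set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = list(range(20))                       # indices we have labels for
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(10):
    model.fit(X[labeled], y[labeled])
    # Query the points the model is least sure about
    # (predicted probability closest to 0.5).
    probs = model.predict_proba(X[pool])[:, 1]
    query = [pool[i] for i in np.argsort(np.abs(probs - 0.5))[:10]]
    labeled.extend(query)                       # "annotate" the queried points
    pool = [i for i in pool if i not in query]

print(f"labeled {len(labeled)} of {len(X)} points, "
      f"accuracy on the rest: {model.score(X[pool], y[pool]):.3f}")
```

The appeal is exactly what the comment above describes: you trade annotator time for label efficiency, which is why it needs both a technical and a non-technical person in the loop.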
Thank you for making these clips. I am glad you finally got on the clips bandwagon. With so much to learn in AI by itself, it sometimes becomes hard to follow an hour-long podcast. I would not mind an hour-long podcast if it were the only podcast out there.
I wouldn’t say a complete waste of time. My interpretation of his statement is that most research doesn’t solve an immediate problem, but that’s not the way scientific research works - it is incremental. Every paper is intended to be just one piece of a very complex puzzle within a given field
The problem with most research in ML is that it's not about "one piece" of a puzzle, as in more classical research where you investigate one component. In general, the output is not really accessible or transferable. Most papers test, say, one customised NN, or apply a known one to a specific application; but without transfer learning, all of that output stays "stuck".
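As a concrete (and hedged) illustration of output that does transfer: the standard fine-tuning recipe, here sketched with torchvision's ResNet-18 purely as a convenient example backbone. Only the small new head is trained; everything the backbone learned on its original task carries over.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet and freeze its weights.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Swap in a fresh final layer for the new task (say, 5 classes);
# this is the only part that will be trained.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One dummy training step on a stand-in batch of images.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 2, 3])
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```

If papers shipped reusable backbones like this instead of one-off results, the "stuck" problem described above would be much smaller.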
This reminds me of when Elon Musk said that "most academic papers are useless". Opinions like these need to be more specific on this point, because over the next few years some research and publications will have the chance to become applicable, rather than staying in the theory trap.
Well, that turned out to be totally wrong. To say "research is a waste of time" is silly at best. I assume you realize that there is a significant difference between theoretical research, analytical research, applied research, correlational research, action research, and other types and branches of research. You sound a lot like Noam Chomsky, who said exactly that about deep learning for decades, asserting it was a waste of time and money and that people should just apply basic machine learning in AI and stop wasting time and money on deep learning research.
It's fair to say some more practical problems might be overlooked in the scientific and technical communities, but it's not fair to say the trending topics familiar within those communities have no practical meaning.
The number of researchers should perhaps be limited, but IT IS TOTALLY NECESSARY to keep investing in talented people so we have new technology that humans can later apply in our modern society.
I think the major problem is current publishing methods. Journals publish only "new" and "interesting" research, and researchers get evaluated by the number of papers they can get published. Veritasium made a great video about this issue titled "Is Most Published Research Wrong?"; it nicely explains why things cannot get better unless we introduce a better measurement of researcher quality.
The core difference to me between AI and humans is that humans combine multiple deep-learning-style neural networks (in our brains) to classify everything we see in the world around us. With AI there are all these disparate systems, but nobody seems to have tried to combine them into one holistic classification/action system. Put some robots with cameras in a room or house, have them run around, classify things, and interact with them. Give them power plugs and things they need to charge up, and see how they behave. Give them problems to solve and see if they can develop new neural nets to solve them.
Well, I'm glad that he doesn't need papers or citations to go ahead in his career; the rest of us junior researchers can't say the same. We need to follow the funding, and we have little influence on where the funding goes.
Being an academic sounds like a horrible existence. Always concerned about what the others in your little clique are going to say about your work behind your back. Can't control who's going to show up in your clique and start upsetting the applecart. Gotta publish something anyway, and make it sound good enough to keep the grants coming. Always too timid to ever do anything of real value.
Interesting! As a newcomer in data science, I wanted to focus my study on deep learning. This is the first time I've heard the term "active learning"; I think I should look into it.
“Whoever, in the pursuit of science, seeks after immediate practical utility, may generally rest assured that he will seek in vain.” - Hermann von Helmholtz (one of the greatest scientists this planet has ever witnessed)
Academic research isn't useless. That's just a crappy populist slogan for people who don't understand how progress works. You need a large community of people thinking about a problem to make progress. This guy may think he knows where the next progress will be made, but he doesn't really. Nobody does; it's a highly speculative field. In the business world, data management is probably a bigger problem than the lack of scientific progress in AI. There is also a major lack of understanding in the business world of how AI can benefit them. So this criticism of academia is largely academic xD
Creativity is a highly sought-after commodity, and the world is in short supply of it. Coming up with something that isn't derivative is really hard; only a few people are able to achieve it.
This is really exciting for someone who’s interested in the ML/DL/AI space but isn’t a student at an institution. I’m curious about other important topics, besides active learning, that aren’t receiving attention. Anyone out there with insight to offer?
Well, here's one: a demonstrably correct and accurate definition of 'information'. Even a slightly more rigorous, less confirmation-bias-reinforcing re-examination of the existential status (ontology) of 'information' quickly shows that our present definition is woefully incorrect. In a word, it's not digits, no matter how many of them one has at one's disposal, how cleverly they are arranged, or how large, powerful, fast, and globally interconnected the machines and devices operating on them are. Although I am not going to share 'information's' correct ontological identity here in a YouTube comment, once its new and improved identification is recognised, no great difficulty attends the tasks of understanding computation of any kind using digits, or of understanding 'real thinking', which uses 'real information' (not mere digits) and which is an entirely different and distinct, not to mention galactically more important, phenomenon than mere computation with mere digits. Visual images, sound, touch, taste, odour, temperature, pain, pleasure, sexiness, the need to evacuate one's bowels, sensations of breathing, coughing, choking, sneezing, etc., are all units of (real) information that our own internal 'real thinking machine' uses in all of its real thinking operations. Our internal thinking machine is not performing mere computational operations on the units of info our senses gather and shunt up to it. No. It's performing real thinking, and it uses real, sensory-gathered information to do so. 'Thought', 'mind', 'intelligence', 'knowledge', 'knowing', 'understanding', 'memory', 'learning', 'cognition', and 'consciousness', definitions of which are greatly sought after, are all 'information-related' phenomena, and as such become quite easily understood and defined once 'information's' correct identity is properly recognised. As George Gilder points out in 'Life After Google', digit-using machines and devices are nothing more than ABACUSES: low-level bean-counting machines, albeit massively miniaturised, vastly accelerated, electronically automated, user-friendly-interfaced devices, but for all that nothing more than abacuses. (Even so, Gilder doesn't know information's correct identity as I do.) And as simple bean-counters, digit-using devices are no more capable of really thinking than are vast compendiums of grade-school multiplication tables, slide rules, and hundred-year almanacs. So, James, for whatever it's worth, here's my new insight: information is not digits, and no digit-using machine or device will ever really think, nor spontaneously perform operations such as 'transfer learning' or 'active learning'. Just saying...
Is the logistic regression algorithm properly designed or not? It doesn't fit the practical scenario, as a qualitative variable is totally distinct from a quantitative variable and the two should never be interchanged. I expect you'll disagree, but that is what prevented me from going further into logistic regression!!
They should start with an original idea of how to approach transfer learning or active learning, because I guess if they go in without any clue how to achieve it, they may spend months or years without results, which could damage their careers. But all the points he made are correct, in my opinion. Maybe this should be more of a decision for the people who pay the bills than for the researchers.
It may seem impractical but that is only in the short term. Everyone wants to publish the next big thing but in reality real innovation happens through the practical application of multiple discoveries. Cell phones weren't invented in a day. Through multiple iterations of communications technologies and general electronic computing, we arrived where we are now. We humans think in such short timescales because of our life expectancy. Do you consider the efforts of early electronic pioneers useless or impractical? Hell no. Without them taking shots in the dark, we never would have found the light.
Is it possibly more beneficial to try to push DL forward by way of a startup? In that case, all you need is to become profitable; then any work you're doing can advance as far as the market needs it.
Hey, I need a bit of help. I'm a chemical engineering graduate now thinking of studying further in machine learning and data science, but I know almost nothing about machine learning, and I have to submit a research proposal in machine learning or data science. Can someone help me choose a topic, or tell me how to find one? Thanks.
Imagine if we could collate all human knowledge into a coherent dataset that would allow us to have a better perspective of what we do not know, which is more than we do.
I highly disagree. You're making the surreptitious assumptions that: 1) creating such a "coherent" dataset isn't equivalent to (if not harder than) solving the problem you purport to have built the dataset for; 2) the "set" of all human knowledge as a whole trivially extends (since by definition you can only use what's inside said "set") into the space of what there is to know. The truth is that most of the interesting "delta" in human knowledge comes from theory-building, not just deduction/induction. For that reason, even a "coherent" (whatever that means), all-encompassing dataset wouldn't help much.
@@maloxi1472 Thank you for your view; it has allowed me to think deeper and more broadly. I will try to revise my view. Do you study information theory, or computer science?
Talking about improvements: innovations in any field also originate from active research on something that was already created. I condemn work done just to publish a paper, where you make a correction here and there and then create a paper out of it; but small corrections like these are the basis of much of the useful technology we use today. Take a simple technology like face detection. If only a single paper on face detection had been published and everyone had moved on to creating some other innovation, do you think it would ever have been implemented in our mobile phones? "Research" means doing something with a thing that has already been created; this is the basis of modern technological advancement.
I am revisiting this video after reading about Decision Transformers for sequence modelling of offline RL, which need a dataset to learn from. Why would I train a model again if I have already cloned n agents to generate that dataset?
Lol, ain't it a little late to point this out? Multiple conferences have already turned into a shit show because of this. I've seen machine learning "researchers" presenting papers who can't explain the difference between "accuracy" and "precision".
He isn't someone who is actively involved in research, nor has he ever produced a paper worth any serious attention. He is just a good practitioner of technology, and completely unqualified to comment on research.
And that's why people like Kurzweil are really, I'm sorry to say it, kind of full of it. The singularity won't happen for decades or maybe centuries, precisely because of this: the important research areas aren't covered enough.
Ugh, the system of citations and the need to publish ever more papers is ridiculous. We need a new way; it's the same theme over and over again. Imagine if 1,000 people were studying the color red and only one was studying blue. You can continue from here.
Most scientific and engineering advances take place outside of academia anyway. The academics typically catch up later and lay claim to the discovery. But typically it already happened at some obscure lab somewhere. Read NN Taleb to learn more about this.
@@ConsciousnessExplored Rashid, my friend, I can assure you that I received the finest education at the real world risk institute. It was affordable too! Only 2k/day to have the privilege of being educated by the best minds the Universe has ever produced. Skin in the game! Antifragility! Fat tails! Now I will be rich just like Nassim.
The guy in this video is Jeremy Howard, one of the leading experts in machine learning and deep learning, and he says that most research in deep learning is a waste of time. Meanwhile Elon Musk says the progress in AI is exponential. And Musk is not an expert in AI, but he likes to think that he is.
Musk is no expert in AI. To be an expert in AI takes years of study. If you put Elon Musk in front of a computer and asked him to program a deep learning algorithm, he wouldn't be able to, because Musk is a businessman. And as for Jeremy Howard, when he said that most research in deep learning is a waste of time, he meant that the progress in deep learning is very slow, which is the opposite of what Musk says.
The point is that whenever Musk talks about the AI singularity, or Mars colonization, or traveling to other star systems, the first thing that comes to his mind is sci-fi movies. He talks about the three laws of robotics, merging humans with machines, etc. All these things come straight from science fiction novels. He is an adult who believes in science fiction.
First of all, I don't dislike him. He hasn't done anything to me; I just think he is grossly overrated. Second, you said Musk did things that were thought impossible. Like what?
Most research on ANYTHING is a total waste of time, only useful for satisfying the publication requirements of "institutions". Most meaningful/useful research seems to be going on in industry.
People in academia don't need to care about usefulness. They're only interested in doing things that academics find interesting, which often doesn't intersect with industry interests. For instance, a biochemist at Harvard doesn't spend his time trying to find a new, better Viagra. Your opinion is trash.
99% of what you learn in school is also wasteful. Almost 50% of your life is also wasteful. Humans can only use 20% of their brain capacity. 90% of startups end up liquidated. I can go on and on. Would you stop? No! Because you only need the 1%, the 10%, the 20% of your attempts to produce some meaningful results. You can only call something a waste of time if it truly leads nowhere in terms of helping others and yourself. I agree that 99% of research is useless, but it's not a waste of time.
"99% of what you learn from school is also wasteful." This statement is pretty ridiculous, or you went to a horrendous school. "Human can only use 20% of the brain capacity." This one is outright false. We know for a fact that we use 100% of our brains. "Almost 50% of your life is also wasteful." This is entirely opinion (depends on the individual and what one would consider 'wasteful'). There is no need to go on and on if you have nothing but this kind of nonsense to spew. Don't conflate advancing knowledge by finding a wrong way the same as wasting time. If something is "useless", it is a waste of time, and there is no excuse for wasting time.
@@SgtSupaman If you don't know the difference between "useless" and "waste of time", that's okay. A piece of research can provide no results yet still help the researcher gain experience. The first car ever made might have been useless, but it wasn't a waste of time, since people improved it through iterations. If you're doing something wrong and consider it a waste of time, do you just stop doing it altogether instead of trying to fix the problem?