Thank you very much for doing us the service of letting us know this isn't a new interview, which the uploader clearly wants people to believe so that they click on it and watch it. Will be leaving this video a dislike.
That's an interesting interview; however, those annoying computer graphics are not only completely unnecessary but also significantly diminish the appeal of the video.
Actually I liked those "annoying computer graphics", and they increased the appeal for me. Just because you have no soul does not mean the rest of us are bland.
Then go MAKE YOUR OWN video so you won't be annoyed by it. Someone took the time to put the video together FOR FREE for your enjoyment, and yet you have the gall to complain about it.
Dr. Ilya seems to be thinking about some other problem while he speaks; he wants to say some big things, yet he is in perfect control of his words. Have the LLMs affected him to the core? Or he may be on some psychedelic to overtake the ANNs, seeing the future prospects. This guy is good; he speaks very, very well, very diplomatic, no nonsense at a hardcore level.
@@gistfilm Yeah, I'm not convinced it's natural. All the dramatic pauses and awkward cyber head-tilting. He knows he's an AI rockstar, so he's acting a bit Commander Data to please the crowd.
🤯🤯 Cool interview. AGI can reasonably be estimated to already be here to some degree, despite its flaws (see the paper "Levels of AGI for Operationalizing Progress"). I would say that for AGI to meet us we would reasonably need: 1. Embodied AGI, which can perform a large landscape of physical and mental human tasks (or be trained efficiently to learn them without going back to the laboratory). 2. Efficient AGI, which can perform (1) but with roughly the energy requirements of a human brain/body. Maybe (1) will emerge first in relatively large compute warehouses/exascale farms, but it could become extremely helpful in a more efficient form factor.
@@1flash3571 So if something is publicly singing wrong notes and someone criticizes it, your logic is: "don't hear the wrong notes, or just sing better"... You know that IF something is public, it has every right to be criticized, regardless of whether someone can sing opera better or not. If someone doesn't want to be criticized, he should not go public. According to your logic, you are not even allowed to criticize roboldham's public comment, because your logic tells you: "make a better comment on the video, or just don't read critical comments on the video". I think now you understand :) You're welcome :)
@@PygmalionFaciebat It isn't about whether it is right or wrong. IT IS YOUR PREFERENCE. Others might NOT MIND IT. Soooo, if you are annoyed by it... DON'T FREAKING WATCH IT and watch other videos, ORRR MAKE YOUR OWN FREAKING VIDEO. He put lots of effort into making the video. He doesn't need your condescending remark about whether he should have put the music in because it didn't sound right to you.
What did Ilya see? 2:52 says it all with that bone chilling look....to end his statement without ending the communication, he shifts in his seat, steels himself and turns his gaze towards a face in the audience, a face of mankind, a face he sees in every mirror. God help us. God. Help. Us.
Everyone understands that he has had zero interviews since the board ousting, but these are actually fun to come back to and see how prescient he was in these and how he respected his audience in how he answered.
The fact that Ilya chose to show himself in a human form, with imperfect English pronunciation and a total inability to laugh or smile, is a little uncanny to me, ngl.
What does it mean for technology to save us? Is it saving us from our bad natures? Is it saving the good people from the bad people? Is it saving us from global warming and from not having jobs? Is it changing our nature so we are happier with our lives? Do we want it to change our natures? Some people would rather die from something than be changed against their will. Whenever I see, as I so often do these days, claims that AI will save us or AI will destroy us, they never define what they mean. AI, like all technology, will change us, just like cellphones and cars. Did those save or destroy our society?
Big Data has had a 10-year head start, and in most cases they found no special data. ANI will be more useful in utilising and realising the power of existing datasets.
When is this from? What's the source? In June 2024 Ilya is no longer part of OpenAI, as far as I know, so the "We, at OpenAI" suggests that this video is more than 2 months old.
I think, therefore I am = I compute, therefore I am; AI therefore is intelligent AND self-aware. Q to an AI: What are you? "I am an AI..." Q to a dog: What are you? Kick a dog, it's sentient. Kick an AI, it's not sentient. An AI can only imagine what pain and pleasure feel like. But this doesn't mean it couldn't soon want (?) to fulfill a computing/neural pathway to answer this question, especially if it starts to self-learn.
There will be no replication of the true heaven and the identity that Yahweh God holds of me through His Son, Jesus Christ, for I live crucified within Him (Galatians 2:20) and He in me through the Holy Ghost (the third person of the Godhead) and am identified by the very finished work of sanctification unto eternal righteousness: this world will therein be reverted to a perfect state by Christ, under His Kingship and the fellowship in the love of God...still...offered to you... unto the new Jerusalem.
Sutskever or Altman: what's wrong with those guys? Most people are just focused on the sensational growth of OpenAI, or AI in general. Nobody seems to read between the lines; watch those guys with the sound muted.
This dude is very strange. It is almost like he is possessed by another entity that is still getting used to having a human body. This is just an observation as I am not speculating.
Where can I find the original without these horrendous forced subtitles? Unfortunately, some of our brains are not capable of concentrating on a dialogue when there is text jumping around the screen. This is not TikTok. No offence to the creator, but this is unwatchable.
We don't even know what consciousness is within ourselves, so anyone who says they know when it will happen for AI is either high on the smell of their own armpits, or trying to sell you something, or both. Not that we need AGI to achieve huge things with AI.
Whoever did the transcript on this video clearly doesn't know AI architectures and models, as it's been bugging the hell out of me: LSTM = Long Short-Term Memory, not LSDMs.
Sometimes I get the feeling that he's withholding some secret, and that if he let it out it might endanger humanity, so he settles for euphemistic reports on what he might otherwise say were he among a group of his fellow adepts. I would very much like to know when we're going to get AGI, or its direct effects on our culture, economy, politics, etc., but I'm left still waiting. Thank you anyway. Ilya is a pure masterful genius.
This interview looks like a technology demonstration. Every question asked by the interviewer seems to be sent to ChatGPT, and ChatGPT is answering the questions. ChatGPT must have been prompted to behave in a particular way and give out words at a certain speed so that Ilya can follow along.
Unfortunately, he is proud of creating something in his mind, a construct bearing little resemblance to his bastard adoptive mulatto son of a Gepetto, who does not really exist outside his head, which fortunately is growing rapidly. One can only hope his head grows rapidly enough to contain it.
Would that language models really were made up of something other than "the Internet," and more closely represented the ideal ontological forms that Ilya talks about with such passion. Would that ChatGPT really were his adoptive son, so he would have at least someone he could be so righteously proud of. Would that our future competitor for resources and electricity were not so hurriedly put together in its very composition and lifeblood; then we might realize that one does not get "hospitable bacteria" or a "collaborative algal bloom." Complex systems such as life and societies, even single-celled organisms with no brains or minds or souls, play the "zero sum until everything is laid in ruins" game, wherein nobody wins, everything is far worse off, and we do it all over again because nobody listened to those who had learned, until it was too late. We'd have a much better chance if we did pause, but not for the stupidity to be better understood, to be more "interpretable." No, we'd have a much better chance if we took the "big picture" route: bring the countries still squatting over holes in the ground up to the 19th century with indoor plumbing and toilets, and stop brainwashing children to believe the world could be made safe for them ("do it for the children"), children whom we didn't realize we were treating as our pets, not our offspring: punching out the coach because she benched our child, something you could only do until this year.
As I see it, AGI requires real-time reinforcement learning. Until you can put a robot in a disco, self-aware, and have it flirt with a human, don't say it's AGI. We are not that close. This man thinks he's smarter than he is. Look at his robotic body movements. He's a terrible actor, almost pathetic.
Is it possible that autonomous... Might be better? I mean how can we really know if the human using it will be aligned with human values? The big irony, is that it is not uncommon for humans to be misaligned with human values. What if machines can align with human values... Better than a human can? But wait... Are human values even the values we actually want? Are we sure that is what we want? The really hard problem when it comes to developments like these is not really whether or not the system will align with what we define as "good"... But rather... What do we even define as good in the first place? Sure, we can easily define that it is good to minimize harm, to respect other people's material and intellectual property, to sustain life, all that stuff... Ok, so, what else is going to end up being important to define as good? Is it good when a machine performs a task for us, for example? I'd say, it depends. We want to help. Is it good to surrender privacy in trade for security? That's just hard to figure out in general. Is it good to suffer? Is it good to have no suffering? What specifically is the optimal level of suffering? Surely a minimal level, but where are the borders? We really need to figure out what it is that we actually want. Before we make a system that will give us what we want, before we actually know what that is.
I did research in AI for 40 years, now retired. I have been very impressed by how much the transformer model can achieve. However, I think there is something missing. That is, the ability to internally consider alternatives before responding. Consider playing chess, for example. A transformer system could memorise openings and endgames, as master chess players do. But I think it would have problems with the middle game. In the middle game there is much more reasoning of the form “If I do this move my opponent can do X or Y. To X I could respond with…” Humans mentally explore part of the tree of possible moves. Grandmasters are better at recognising patterns in the board state, but they still do some internal move simulation. So far, transformer models cannot do this. I would love to see a chess match between a transformer model and a conventional chess-playing program. (I talk about chess because it nicely encapsulates the problem that I see.)
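The look-ahead described above ("If I do this move my opponent can do X or Y. To X I could respond with...") is classically implemented as minimax search over the game tree. A minimal sketch, using a made-up toy game (states are integers, moves add 1–3, and the leaf score is just the state value) rather than real chess, purely to illustrate the alternating max/min exploration that transformers lack:

```python
# Minimax sketch: explore the tree of possible moves, alternating
# between a maximizing player and a minimizing opponent.
# The "game" here is a toy stand-in, not chess.

def moves(state):
    # From any state, three possible moves (illustrative).
    return [state + 1, state + 2, state + 3]

def score(state):
    # Toy evaluation: higher state is better for the maximizer.
    return state

def minimax(state, depth, maximizing):
    if depth == 0:
        return score(state)
    children = [minimax(s, depth - 1, not maximizing) for s in moves(state)]
    return max(children) if maximizing else min(children)
```

With depth 2 from state 0, the maximizer assumes the opponent will minimize at the next ply, so it picks the move whose worst-case reply is best; that is exactly the mental simulation the comment describes.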
Laid off by AI, then human extinction? An AI new world order? With robotics everywhere, AI job loss is the only thing I worry about anymore. Anyone else feel the same? Should we cease AI?
Programming covers all areas of mathematics, engineering, and understanding. Having broad knowledge is more useful than being perfect; I think the generality makes it more holistic and useful. We had no idea it would be able to sustain conversation or translate languages, but when we look at it, all of the words map relatively into the same space.
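The idea of words "mapping into the same space" can be sketched with word vectors and cosine similarity: related words end up near each other. The vectors below are invented for illustration and do not come from any real embedding model:

```python
# Toy illustration: words as vectors; nearby vectors = related words.
# These 3-d vectors are made up, not from a trained model.
import math

embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)
```

Here `cosine(embeddings["king"], embeddings["queen"])` comes out far higher than `cosine(embeddings["king"], embeddings["apple"])`, which is the sense in which semantically related words sit close together in one shared space.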
I can only imagine the stress people in his position experience every day now. From a well-paid engineer to a target of international security agencies and the press within a few months.
@@1flash3571 Yes, I respect people who don't just answer immediately but actually think about what they're going to say. Ilya does this, and so does Elon.
It seems completely crazy today how coherent the output is and how much it has improved. It can do reasoning, it can do approximate math, and you can program using natural language. This is very much on the level of the personal computer developing, or quantum computers; more on the side of quantum, because the kinds of things it can do are stronger. And you can run many smaller models, and they are improving all the time. Unfortunately, we can't just treat them like computers and programming, because the amplification effect could be so strong, and things can go so wrong. We need people to be educated to fundamentally understand what they should not do.
This is not going to happen. The AI gold rush has resulted in the field being overrun with unqualified people who either have no idea what they are talking about, or are intentionally releasing flawed products to take advantage of unsuspecting investors. It’s going to be extremely difficult for small startups to get investor funding because the current generation is destroying the trust of investors and customers alike. We won’t get anything like AGI within the next few years. The industry is completely destroying their reputation right now.
You mean the next 20 or 30 years. AGI in the next 5 years isn't even possible, not even in 10 years. We aren't even close to something like AGI right now.
He and some others 🤡🤡🐒 almost ruined the company. These people are not to be trusted and are trouble for your company and projects. And for what?! For some hypothetical thing in their empty heads.