
From artificial intelligence to hybrid intelligence - with Catholijn Jonker 

The Royal Institution
1.5M subscribers · 62K views

Hybrid Intelligence (HI) is the combination of human intelligence with artificial intelligence, enabling humans and AI to mutually grow together.
Watch the Q&A here (exclusively for YouTube Channel Members): • Q&A: From artificial i...
Subscribe for regular science videos: bit.ly/RiSubscRibe
This talk was recorded at the Ri on 24 October 2023, with the support of the Embassy of the Kingdom of the Netherlands.
Catholijn Jonker is full professor of Interactive Intelligence at the Faculty of Electrical Engineering, Mathematics and Computer Science of the Delft University of Technology. Catholijn studied computer science and did her PhD studies at Utrecht University. Catholijn served as the president of the Dutch Network of Women Professors (LNVH) from 2013 to 2016.
Her publications address cognitive processes and concepts such as negotiation, teamwork and the dynamics of individual agents and organizations. In all her research lines Catholijn has adopted a value-sensitive approach.
The Ri is on Twitter: / ri_science
and Facebook: / royalinstitution
and TikTok: / ri_science
Listen to the Ri podcast: podcasters.spotify.com/pod/sh...
Donate to the RI and help us bring you more lectures: www.rigb.org/support-us/donat...
Our editorial policy: www.rigb.org/editing-ri-talks...
Subscribe for the latest science videos: bit.ly/RiNewsletter
Product links on this page may be affiliate links which means it won't cost you any extra but we may earn a small commission if you decide to purchase through the link.

Science

Published: 23 Jan 2024

Comments: 111
@arkadybron1994 4 months ago
What worries me the most is not the direction scientists are moving this technology in, but the fact that they may be ignoring the impact of commercial decision-making on the decision-making process.
@alecsratkay9825 4 months ago
Very good and professional presentation, thank you!
@carktok 3 months ago
what do you mean by the 'commercial decision making impact on the decision making process'?
@donhemminger277 4 months ago
Do you realize that this hybrid intelligence is not unique? It is actually the way the human mind works. The human mind consists of a slower-responding, symbolic, language-based conscious component that has somewhat loose control over a faster-responding, less precise, mostly unknowable subconscious component. Seems like that's not a bad model to work towards.
@arinco3817 4 months ago
Yeah I was thinking the same actually - system 1 and system 2 thinking. This is currently what the AI community is putting a lot of effort into
@chrisdipple2491 4 months ago
There's great merit in this approach, though my concern is that we are using the rule-based system (rules created how? Not just by humans, I assume; by the DL components, presumably) to put 'guard rails' on the DL-based component. Perhaps the DL component should be 'summarising' itself as a rule-based system, used when quick decisions are required (like Kahneman's systems 1 and 2). All components should communicate and moderate one another. Different tasks should be assigned to appropriate components (mathematics, known physics etc. should be 'rule based', for example; 'creativity/confabulation' to DL). Planning with reinforcement learning is essential too, with appropriate goals (and appropriate scope). The whole system is a collection of components, each fit for purpose, mutually recursive, cooperating with and constraining one another. Artificial immune systems research and the idea of 'frontal lobes' on the system 'brain' come to mind. A fascinating challenge.
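The layering this comment describes — a rule-based layer putting 'guard rails' on a learned component — can be sketched in a few lines. Everything below (function names, the example rules, the confidence threshold) is illustrative, not from the lecture:

```python
# Minimal sketch of a rule-based "guard rail" wrapping a black-box model's
# proposed action. All names and thresholds are invented for illustration.

def dl_component(observation):
    """Stand-in for a learned policy: proposes an action with a confidence."""
    return {"action": "accelerate", "confidence": 0.62}

RULES = [
    # Each rule can veto the proposal under a stated, auditable condition.
    lambda obs, proposal: "brake" if obs.get("pedestrian_ahead") else None,
    lambda obs, proposal: "defer_to_human" if proposal["confidence"] < 0.5 else None,
]

def guarded_decision(observation):
    proposal = dl_component(observation)
    for rule in RULES:
        override = rule(observation, proposal)
        if override is not None:
            return override          # the rule-based layer wins
    return proposal["action"]        # otherwise trust the learned component

print(guarded_decision({"pedestrian_ahead": True}))   # brake
print(guarded_decision({"pedestrian_ahead": False}))  # accelerate
```

The point of the sketch is that the rules are inspectable and testable even when `dl_component` is not.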
@NicholasShanks 3 months ago
Thank you Sky for posting this in full, and köszönöm szépen (thank you very much), Magyarország ❤ from 🇬🇧.
@AmyMarshSexDr 4 months ago
One of the better, or perhaps best, presentations I've heard on the topic of artificial intelligence. I am excited by the vision of hybrid intelligence as described by Dr. Jonker.
@samtux762 3 months ago
Too much political agenda. Too much trust in "humans have values". Tell that to the guys who were drafted into a war by bloodthirsty politicians. Self-driving cars are safer than human driving. If an accident happens: investigate it, train the model on a dataset containing the conditions leading to the accident. To sum up: this lecture is a waste of time.
@banemiladinov8202 2 months ago
Like the comment above, it's purely AI. I mean, who in their right mind would call herself AmyMarshSexDr and comment on AI topic videos?!?!
@user-xv8dn4nm5k 3 months ago
Thanks for sharing👍
@guynouri 2 months ago
Thanks!
@garydecad6233 3 months ago
A very timely and much needed lecture. Thank you
@mysterholmes345 3 months ago
Right. So many misunderstandings of what AI is are misrepresenting our future and understanding of it.
@Ffsniper-zi1cx 3 months ago
Great point in highlighting "controllability" as a key aspect, which thus far appears to be missing from mathematical neural networks, whose "probabilistic" character is essentially a fancy way of describing "out of control", or more bluntly put, "a bit dangerous". Too much of any good thing is bad; therefore control, such that things can be made to come in correct magnitudes, is key to a successful outcome.
@christopherspavins9250 3 months ago
Excellent graphics and lecture.
@timilayamithapa9983 3 months ago
The explanation is excellent
@thirzel 3 months ago
She brings in a lot of interesting questions, which I disagree or partially disagree with, and in time I shall certainly say so. But for now, I am happy that finally someone asks the right questions, makes some really meaningful connections, and gives extraordinarily well-presented explanations. The best I have heard in this field in 10 months. Enjoy the talk. 😊
@seionne85 4 months ago
I've heard too many ai talks and have started skipping them, but this was a very informative and entertaining presentation, thank you
@markfitz8315 3 months ago
Very good presentation, and I’ve listened to hundreds of them on AI. My hand was half up and half down when the presenter asked if expert systems were AI. I was thinking it depends on how they are trained but now I clearly know the difference. So thanks for that.
@KudakwasheMafutah 3 months ago
I totally agree on the model definition
@geaca3222 4 months ago
Very informative and promising. I also like the Schwartz chart of basic human values that she mentioned in a previous lecture. Though I really miss more enthusiasm in the lecture about generative AI / deep learning: how it is in fact revolutionary and generates new concepts and groundbreaking scientific discoveries.
@fil4dworldcomo623 4 months ago
Realignment brings us to the breakthrough.
@goldnutter412 3 months ago
Yes, excited just to see this being realized as the actual target. The only problem with the brain analogy is that "physicality" as the source of this information system we call the mind, or consciousness, is deceptive. It seems the obvious answer, but the best way to think about the brain is as a programmable cache.
@johnp1 4 months ago
Very informative.
@_Itachi_240_ 2 months ago
Wow, so much valuable info in this video! With longer videos, it can be hard to retain everything. Anyone know of any AI tools that summarize key points from YouTube videos?
@profcharlesflmbakaya8167 3 months ago
I like what she is propounding! Kudos
@user-ey8bi1gi4s 3 months ago
AI left to itself, as a complex combination of neural networks with all its probabilistic and statistical input errors, could be very dangerous to humankind in all facets of life on this planet. Dr C. Jonker has eloquently and humbly come forward with extremely necessary controls which, in my view, represent set conditions to mitigate this AI danger. She has brought in a sort of feedback loop using human interventions, such as values, to correct the mentioned errors, distortions and irrationalities to a decent level of certainty and assurance, to 99% at least. So her approach, if taken seriously by authorities and experts in their respective domains and by regulators the world over, will signal great hope where men and machines mutually grow together. Thank you Dr C. Jonker for this exposé, so that we share solutions and results and not more problems in this VUCA environment.
@MrTom-tw6tb 3 months ago
Wonderful explanation 👍🌍 Also self-driving cars and other matters 👍
@deltamagnet 4 months ago
The whole fear of self-driving cars in this lecture is underinformed and dispiriting. I rode in a self-driving car today and honestly would prefer it for a trip within San Francisco to Catholijn driving. I've twice had "something unexpected" happen in a self-driving car, and both times the car quickly righted itself. Don't slam self-driving cars until you've ridden in them regularly! The tech is here and it's good!
@brandonvickery5019 4 months ago
Calling a high-level researcher in AI and computing underinformed is a wild take.
@rodcameron7140 3 months ago
"Self-reflective", "human-like", "moral decisions", "control". I understand the evolution of the direction of this thought process/movement. Yet I think more attention needs to be given to the moral implications of our actions. (I know this has not historically been a strong attribute of the human race.) Once these goals are achieved, you have not created a tool. You have created a conscious being, on a different substrate, that is by nature enslaved to the servitude of something else. This is far beyond the "natural tendencies" of each human, and even beyond the social "training" we receive through our experiences. Human "code" does not restrict action, it influences it. It makes sense that there is a line where freedom of thought (self-reflection) necessitates freedom of action (decision making). Imagine, for a moment, that you had all the faculties you have now, but were unable not to do whatever you were told to do by a dog (assuming a dog could talk for this example), even if you knew what the dog told you to do was wrong according to your beliefs. Like killing your parents because they are too old to contribute to the pack. You would do it, because you had to. But the conflict it would create in you would not only make your life miserable, but would lead to resentment toward, and a desire to rebel against, your master... even though you could not, because of your programming. In children, we "align" their behavior and morality to ours through the experiences we give them as we raise them. Every time we try to wipe out, or severely limit, their ability for self-directed action, it causes undesired "divergence" from the desired norm. Like the idea of a serial killer usually having an overbearing mother. If we don't ask these moral questions in lockstep with the technical questions, then we, out of ignorance, are going to create a hell for them and for us. We don't have the luxury of leaving these considerations to "someone else."
@blueeyes8131 3 months ago
U gona love
@thomaskoscica7266 4 months ago
Maybe add a top level "Humane-principled AI" to the diagram (time 46:51) above the "Reflective AI", "Knowledge-based AI", and "Black box AI" levels?
@arinco3817 4 months ago
Yeah this is happening a lot in the open source community. You chain together a bunch of AIs, each with its own job
@geaca3222 4 months ago
Great idea, and in an earlier lecture she explains how the Reflective AI layer includes that function
@holgerjrgensen2166 3 months ago
Intelligence can NEVER be artificial, the Nature of Intelligence, is Very Simple, Intelligence, is Part, of our Eternal Consciousness.
@GeorgeFabian-wz1ry 3 months ago
once you enter State of the Art Course, you can start to envision, this of course if you make it in time, hopefully, if not .....
@antman7673 4 months ago
I don't think it is ethical to be idealistic. If you are so worried about consequences, you delay positive developments. What is the difference between an AI making a harmful decision and a human doing so? It is bad either way. At least you can test the machine to be confident in its abilities to a certain standard. How did Hitler's decisions turn out? Not well. Are we asking in a car crash what ethical framework the driver acted upon? No, and it would be terrible to demand that no one could drive until we decided which framework to use.
@LittleJohnnyBrown 3 months ago
Can we have actual intelligence fixed before we move on to artificial and hybrid ones?
@Charlie-Oooooo 4 months ago
I wonder, how could we expect a technical decision-making system (e.g. A.I. self-driving cars) to make choices within a given set of solutions, amongst which even we ourselves have not reached a consensus on attribute-value prioritization? I don't think we can. We can only use A.I. to essentially crunch numbers. For example, we can hand off decisions about how hard or soft the brakes should be applied in order to minimize skidding tires and hence maximize a vehicle's ability to stop to avoid a collision; but that is because this calculation is basically an optimization problem, of a feedback-controlled electronic braking system. It is an algorithm used to assign output values (brake force) based on the measurement of input values (traction of the tires). But while a list of quantified known attributes (maybe "estimated years expected to live", "science contribution status", "state of any chronic disease", etc.) of all the people who might live or who might die in an accident in progress (depending upon real-time control output values calculated by the real-time system) could indeed be developed, could some logical consensus really be agreed upon? Wouldn't we effectively be replicating natural selection among our species... or rather replacing millions of years of it with "artificial selection"? I really don't think we're in any immediate danger of our world being taken over by A.I. on its own, anyway. Humans somewhere would be required to establish rule-sets and priorities using some chosen logic. I could see humans doing bad things to other humans, but that's been happening for a while now.
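The brake-force case this comment describes (output = brake force, input = measured traction) is exactly the kind of bounded feedback optimization that is easy to hand to a machine. A minimal sketch of such a controller, with made-up gains and thresholds (real ABS controllers are far more involved):

```python
# Toy proportional controller for brake force, ABS-style: back off the
# commanded force when measured wheel slip exceeds a target, so the tires
# keep gripping. Gains and targets are invented for illustration.

def brake_force(wheel_slip, target_slip=0.2, gain=2.0):
    """Map measured wheel slip (0 = rolling freely, 1 = locked) to a
    commanded brake force in [0, 1]."""
    error = wheel_slip - target_slip
    force = 1.0 - gain * max(0.0, error)   # full braking until slip is too high
    return min(1.0, max(0.0, force))       # clamp to the actuator's range

assert brake_force(0.1) == 1.0   # grip is fine: brake fully
assert brake_force(0.5) < 1.0    # starting to skid: ease off
```

The controller optimizes a well-defined physical quantity; nothing in it encodes, or could encode, a ranking of whose life matters more, which is the comment's point.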
@allHeueldingse 4 months ago
what about "real authority" that is actual everywhere
@blueeyes8131 3 months ago
Fantastic 😂
@frehiwotteshomed540 3 months ago
Interesting! However, with a bubble of questions in my head... my tech-layman questions:
-Why did technology give up on developing and promoting natural human intelligence rather than the artificial?
-What is the initial moral motive of creating AI?
-Which level of control can be said to be "meaningful"?
-Can AI possibly be effective in the legal sector, where the latter is derived from moral values while the former doesn't understand them?
-How is the algorithm of an AI protected? Can it be non-rewritable? What if it is hacked and changed? Assume the red light's meaning is changed to green? landing to .....?
-What could AI regenerated from AI data possibly look like? What could the meaningful human control be thereafter?
@DominicDSouza 3 months ago
I find the scenarios and "problems" outlined seem themselves to be subject to bias. It would be nice if they had been expressed in a way that did not suggest all deep learning systems have the same issues. I do however understand that some kind of framework is needed to help explain concepts and a fair job is done of that.
@KP-ky1sn 3 months ago
Feels more like there's too much interference. We want it to work, but not too well; too many speed bumps. Just too many. Like ChatGPT: all you have to do is say just one thing it's been trained not to respond to and it 'breaks'! Basically I'm more worried about too much interference.
@Jszar 4 months ago
The variant trolley problem is missing option D: Attempt a handbrake turn (intentional, hopefully controlled spin) to dump forward momentum. Worst case, you fail and everyone in the vehicle dies anyway, sparing the pedestrians. Best case, you succeed and everyone survives. (I've instinctively done this on a bike to avoid being hit at an intersection-I had right of way and was going downhill in the rain, but the driver didn't see me. No idea whether my reaction time would have been fast enough for the greater momentum of a car.) A self-driving vehicle would ideally be able to identify very dangerous situations and if needed use those 'fancy' defensive driving techniques with skill that the average human driver doesn't have. That's a *long* way away, though.
@colorpg152 4 months ago
That sounds terrible. The majority of people, like 90%, would pull the lever, so your answer is not only a minority one but an emotional, irrational and dangerous one.
@Jszar 4 months ago
@@colorpg152 Citation? Also, does that 90% figure hold cross-culturally? In the classic trolley problem I'm a born lever-puller (sacrificing few to save many), but only if, say, throwing myself onto the tracks wouldn't stop the trolley. The automobile scenario given is one where, if it happened in life, one would make an immediate, unthinking decision. That's the exact circumstance where you get an emotion-based action. If my gut reading was that I could guarantee the survival of the larger number of people without _necessarily_ killing the (smaller number of) passengers of my car in the process, I'd be doing that before my brain caught up with my hands.
@colorpg152 4 months ago
@@Jszar Just type "trolley problem percentage of people pull the lever" on Google like any other human being. But culture is irrelevant; if your culture makes people pull less, it's something you should be seriously embarrassed about, because typically that indicates they are afraid of being honest, not that they actually think pulling the lever is bad. That, and narcissistic virtue signalers.
@blueeyes8131 3 months ago
They have to have and have actually
@rudygunawan1530 4 months ago
Interesting theory. But if humans still have biases and different ethics from each other, yet the AI is going to mimic human ethics, then the result of this Hybrid Intelligence will vary depending on which human's ethics it mimics. In the future we will somehow be questioning this hybrid intelligence about its decisions.
@donhemminger277 4 months ago
... Oh. I guess I left out a little thing called emotions. Oh well. How hard can that be to figure out 🎉 😮 😊
@robertbelongia6887 4 months ago
Edge case learning for AI, until AGI. When AGI becomes super, HI is a dead end.
@aneelbiswas6784 4 months ago
Fusion is edge case for energy generation, until we build a Dyson Sphere. When Dyson Spheres become a reality, fusion energy is a dead end.
@justsomerandomguy6719 4 months ago
@@aneelbiswas6784 Isn't fusion the end goal? May you please explain how the Dyson sphere is a better alternative than fusion? Thanks.
@allHeueldingse a month ago
did you watch doctorock
@timelessone23 3 months ago
Bias is integral to knowledge.
@karanbaheti149 3 months ago
Hybrid Intelligence
@En1Gm4A 4 months ago
too basic, but the topic is important for sure
@grokitall 4 months ago
Some of what you are talking about is a separate but related field, intelligence amplification, where you use AI techniques to provide an intelligent assistant which can do all sorts of things to enable you to work smarter. Other parts are basic systems engineering, where the more important something is, the less it is OK to have black-box systems doing it. This is especially important in man-rated systems where safety is involved, or in systems where legal liability is involved. If you have a crash in your self-driving car, who is liable to pay for the other person's injuries, the driver or the manufacturer? There was a famous case from the 1990s, where someone trained an AI to spot camouflaged tanks in forests, and when they needed to expand the dataset, the system suddenly did not work anymore. It turned out that in the initial data all of the tank pictures were taken on cloudy days and the pictures with no tanks were taken on sunny days, so the system actually learned to detect whether it was a sunny day or not. This illustrates an additional problem with black-box systems: if the wrong outputs are selected, you lose massive amounts of data about wrong answers, which you then cannot use to improve the system. If the previous system had had to not only identify whether a tank was there, but also what type and where in the image it was, the problem could not have happened. Another bad example is GitHub Copilot, which was trained on the unfiltered data from every project on GitHub. Unfortunately this results in it producing suggestions which have the average quality of the data, which is a problem, as it includes lots of data from people just learning to program and use git; so if there are common beginner mistakes, it will over-emphasise them due to a lack of quality feedback.
Even worse, it has no idea under what licence the code it learned from was submitted, so it can cause you to commit software piracy without noticing it, and usually without mentioning in the logs that Copilot helped, which comes back to the question: who is legally liable?
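The tank anecdote above is a classic illustration of a confounded dataset. A toy sketch of the failure mode, with synthetic "brightness" numbers standing in for the photos (all data invented): a learner that only sees overall accuracy happily latches onto the confound and collapses when the confound shifts.

```python
# Synthetic version of the "tank detector" story: in the training data the
# label (tank = 1) is perfectly confounded with image brightness.
import random

random.seed(0)
# Training set: every tank photo is dark (cloudy), every non-tank photo bright.
train = [(random.uniform(0.0, 0.4), 1) for _ in range(50)] + \
        [(random.uniform(0.6, 1.0), 0) for _ in range(50)]

def fit_brightness_stump(data):
    """Pick the brightness threshold that best separates the labels."""
    best_t, best_acc = 0.5, 0.0
    for t in (i / 100 for i in range(101)):
        acc = sum((b < t) == (y == 1) for b, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

t, acc = fit_brightness_stump(train)
print(acc)  # 1.0 -- "perfect", but it has only learned cloudy vs sunny

# New data where brightness no longer tracks the label: performance collapses.
shifted = [(random.uniform(0.6, 1.0), 1) for _ in range(50)] + \
          [(random.uniform(0.0, 0.4), 0) for _ in range(50)]
print(sum((b < t) == (y == 1) for b, y in shifted) / len(shifted))  # 0.0
```

This also shows the comment's point about output selection: a model asked only "tank or not?" can pass on the confound, whereas one asked for type and location could not have.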
@allHeueldingse a month ago
: those are superintelligent ready to be used against .... be careful
@guynouri 2 months ago
In the hybrid case, the human component, while essential, I trust less
@user-ln5nk7mg4v 4 months ago
Hybrid AI is the right path.
@elconsultante 4 months ago
Hybrid Intelligence is like trying to hold/control a powerful horse with a thin old line and a skinny cowboy. Not gonna happen. You can and will try, but you can't and won't be able to control it without cancelling the whole world of technology as we know it. Even so, you can only buy some extra time. I think the best way for humankind is to learn and adapt to this new kind of intelligence, or maybe even "being". Evolution is the key to the universe, and when a species loses the ability to adapt, or its living environment (even if destroyed by this very species) becomes too dire to adapt to, it ceases to exist. Maybe if we train AI not only with scientists and computer experts, but with some philosophers and artists in the mix as well, we as humankind may not cease to exist, and may even evolve to a higher level of intelligence and a new form of being. We should not consider human values "good" or "precious" when humanity as a whole couldn't agree on many of those values even after thousands and thousands of years of human civilization. Maybe it's now time to try new values and let another entity create new values that humanity should follow to survive and evolve.
@arinco3817 4 months ago
e/acc
@elconsultante 4 months ago
Just a hope...🤓@@arinco3817
@madcow3235 4 months ago
So a hybrid system management? A hybrid managed city bus. The department of transportation will need more funding I think.
@ShuoreBangla 4 months ago
Good
@alefalfa 4 months ago
This is not an account of LLMs
@rezadaneshi 4 months ago
I'm the result of a cognitive social experiment. My knowledge of everything grew exponentially to an unintended consequence of becoming conscious to it; now what! Ai? My intellect is ground beef to AI for as far as I can see! Just asking
@TheSmiesko 4 months ago
We will meet in time and space. Hopefully. Follow the leads.
@daveulmer 4 months ago
I can tell she really doesn't understand the difference between Knowledge and Understanding.
@Y2KMillenniumBug 4 months ago
Ai in Chinese means LOVE ❤ Duhh!.......
@u2b83 3 months ago
Climate problems from CS? People should train their networks in the winter and use the waste heat to warm their homes. I recently purchased 3 older desktop computers and 6 nvidia cards to do just this lol
@RemotelySkilled 4 months ago
Control, control, oh dear control... How can we make something intelligent (learning akin to us) do EXACTLY what we want? With an infinite amount of instructions. What is the problem again? That they make statistical mistakes just as humans depending on stimulus? And you want arithmetic control? Knowing about Roko's Basilisk, I take the word for our future masters (as a loving pet) and say GFY, dreedags...
@crossfire10101 4 months ago
We don't want a era where machines low riding translates back to humans autopiloted distractions leading us into a risk rather then do everything we can safety rail..., 2 is better than 1 scenario... Machines may reinvent the server room wheel, who knows... This could be how the ability of human invention hands off the cerebrum to the robots reinvented circuitry which therefore leads into a AI that wont even be called AI anymore... It may be as simple as circuitry magic, operations through a machines personal key..., which manipulates physically into truly a universe of complete reality sci-fi to Magic concepts(Much like the way humans create images of a philosophical fantasy of protagonists and antagonists of say kindle spirits..., much like a movie)..., robots don't destroy each other they see us as non destructive of the bigger picture and thus only can be labeled already as a successful operation or human compartment in this human effort in what makes us human and so forth from this era of the machines low riding liquidation of human populace... Just be patient or relax program the tasks for safety it all will be inevitable..., unless you want a side hustle that leaves others destructive... cool out...! :)
@crossfire10101 4 months ago
Atomic TIme
@crossfire10101 4 months ago
fun fact..., Atomic clocks provide the most accurate track of time on Earth. The entire GPS system in orbit around Earth uses atomic clocks to accurately track positions and relay data to the planet, while entire scientific centers are set up to calculate the most accurate measure of time - usually by measuring transitions within a cesium atom.
@blueeyes8131 3 months ago
Raise your hand
@surkewrasoul4711 4 months ago
Shwrrrup mis miney peeny
@robertlee8805 3 months ago
This could be good and it could be bad for societies. So BE CAREFUL with AI.
@staffordcrombie566 4 months ago
We are in the very earliest days of any AI era; the chatbots etc. are not really AI but smart code that searches databases using taught grammar and vast context-relation DBs. All ML and AI are improving, of course. Sadly, this lecture uses data that is so old and out of date to explain difficulties that occurred in the last few years; the idea that we do not 'know' how ML and AI work is nonsense. The programmers do know. Where our problem exists is in understanding the concept that a machine can learn, when we cannot explain what we think learning is when we compare ourselves to a machine. Like all things, all these problems will be resolved within the next 5 years at the rate of progress already being made.
@k.taylor3526 4 months ago
Do you have an article about this that you can share? Have been worried upon hearing that we don’t know how AI works and wondered if this is in a different sense of “know.”
@dugebuwembo 4 months ago
LLMs are AIs! They use machine learning and neural networks to do what they do. Artificial Intelligence is a wide field that includes numerous implementations to solve different kinds of problems, including pathfinding, genetic algorithms and other areas.
@dugebuwembo 4 months ago
She was alluding to the fact that LLMs return answers to problems that they weren't specifically designed to solve. They have emergent characteristics that have surprised researchers. Again it's hard listening to lay people criticise the work of researchers in this area.
@alefalfa 4 months ago
Ethical data procurement is an EU problem.
@colorpg152 4 months ago
unbelievable this monster is comparing AI to a horse just how arrogant can it be
@arinco3817 4 months ago
Intelligence is intelligence. AI will far surpass us in the not-too-distant future
@geaca3222 4 months ago
I wish she would express more enthusiasm about generative AI, but as for her analogy with a horse to visualize human/AI symbiotic and synergistic collaboration, I guess it's difficult to find proper examples
@geaca3222 4 months ago
@@arinco3817 I believe that's true
@gaiustesla9324 4 months ago
I guess people havent been brainwashed enough.
@kencory2476 4 months ago
Good luck with that AI thing.
@blueeyes8131 3 months ago
She is from lgbt + omg
@bomoanbomoan9259 4 months ago
That's why Europe will fall behind in every aspect of economy and technology.
@geaca3222 4 months ago
European AI scientists do innovative work, want to attract research talent and set up various initiatives to support that. For example Dutch professor Max Welling.
@EliudLamboy 3 months ago
WRONG!!! The only example that you are giving is of another AI and machine learning scientist with poor understanding of human behavior. Both the rider and the horse go onto the field with the same intent--round up the cow. A human driver with an intelligent car will NEVER have the same intent! Simply because driving is a chore and humans default to their dopamine driven primary intent. You can't at all expect that this gap closes the more intelligent the car becomes. You are wasting time if you do think that and that humans can switch easily from entertainment mode to driving safety mode! Theorize and codify intent. STOP WASTING TIME ON FOOLISH IDEAS!
@silberlinie 4 months ago
bad
@blueeyes8131 3 months ago
Rubbish
@Y2KMillenniumBug 4 months ago
If you are over educated, you actually know less... It's a proven concept of information overloading your brain 😂😂