
Machine intelligence makes human morals more important | Zeynep Tufekci 

TED
25M subscribers
179K views

Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."
TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more.
Find closed captions and translated subtitles in many languages at www.ted.com/translate
Follow TED news on Twitter: / tednews
Like TED on Facebook: / ted
Subscribe to our channel: / tedtalksdirector

Science

Published: 16 Jul 2024

Comments: 215
@twstdelf · 7 years ago
Wasn't sure where she was going at first, but she landed it nicely - well done - and an important point for sure!
@fburton8 · 7 years ago
Well worth the standing ovation, I'd say!
@philtripe · 7 years ago
wow... 15:45 really got me with the "lethal autonomous weapon" at the end... here's the people that make the programs telling us it's not perfect and can never be perfect
@lollsazz · 7 years ago
I wonder who downvoted this... this is both true and important
@Nooneaskedforthis · 7 years ago
Clearly 38 AIs watching
@TheHeavyModd · 7 years ago
A great talk, for once! I thought this was going to be one of those philosophical talks with minimal objectivity and high subjectivity, but instead I was pleasantly surprised by the examples and evidence she provided for her argument. This was truly eye-opening and interesting. Thank you, Ms. Tufekci!
@AbhishekNigam · 7 years ago
One of the most important and best talks I have ever seen. Highlights some grave mistakes in our all-powerful, all-great system.
@locouk · 7 years ago
How bizarre - I was watching Fox News live on YouTube earlier today, noticing the racist and hate comments scroll up in the chat thread. I left one comment saying "Google has an algorithm that identifies hate comments and logs the users." Several of the "keyboard warriors" instantly cleaned up their act.
@DeoMachina · 7 years ago
This happened.
@DILINGER0 · 7 years ago
lol
@sj8948 · 7 years ago
I'll take things that didn't happen for 500, Alex.
@nivolord · 7 years ago
Tip: Don't watch too many of these videos. Machine learning algorithms might judge you dangerous to their survival, or worse, they might try to sell you philosophical books on moral decisions.
@gnarlin4964 · 7 years ago
And this is why all software must respect user freedom.
@davidwuhrer6704 · 7 years ago
Stallman was right!
@crimsoncorsair9250 · 7 years ago
I sometimes doubt that humans have any morals left...
@vaibhavgupta20 · 7 years ago
Crimson Corsair, why do you feel that way?
@MaskofPoesy · 7 years ago
Left? Left from what? The Middle Ages? If anything, we're improving in every way. Humanity is a concept we aspire to and nurture, not a physical attribute we are born with and then lose.
@Sakhmeov · 7 years ago
They do. And computing can help. What we call "goodness" is actually pretty much interchangeable with "efficiency", just overlaid somewhere down the line with some idiosyncratic concept of "fairness". This is totally modelable. And if I understand the game theory right, it also means that "good" is an emergent trait. However, if you want to look at the source of "evil", look at the SJW undertones here. "Messy value-laden human affairs" end up not being about efficiency, thus precisely the step away from ultimate good. And this is validated and codified in law, rather than the goal of giving people the truth as objectively and scientifically as possible, and sticking to it. It's the system that we currently run the West on.
@DeoMachina · 7 years ago
"Hey guys we might have to make some difficult choices about how we program these machines" "OMG EVIL SJW" gb2/reddit
@DeoMachina · 7 years ago
Sakhmeov Fuzzy about the details? She gave you a specific scenario that really happened, with the people it really happened to. Hippy-dippy moral argument? I guess, if you think "This is immoral" is something only hippies say. But there's a clear pragmatic argument here too.
@e1iason · 7 years ago
Incredibly insightful and meaningful -- a discussion that doesn't usually accompany traditional conversations about machine intelligence.
@mannyverse6158 · 7 years ago
Unchecked algorithms are ruining the world. This is a profound talk.
@kodguerrero · 7 years ago
Build a great, big border firewall! :P
@thewinterlord1518 · 7 years ago
With a beautiful USB port in it... oh my
@fatmaaydn9290 · 6 years ago
great talk, great topic. Congratulations, Zeynep Tüfekçi!
@freediscussions3743 · 7 years ago
Great talk Zeynep! Thank you
@buzz10014 · 7 years ago
this was an incredible video. impressive perspective, and she was 100% right. what she is talking about will without a doubt be the future we will all belong to, and... (gulp) be controlled by. so we better realize that the software we write today will be used against or for us in the future. we must ensure that the ethics programmed into machine learning are suitable for all people to live by comfortably and fairly, reasonably and kindly.
@tarekabuaita666 · 7 years ago
great presentation and a beautiful personality.
@ShortsHound · 7 years ago
Thought provoking! ... and a well presented treatise of scrutiny ... opens some interesting lines of reasoning about what form the scrutinizers may take
@joetaylor486 · 7 years ago
Outstanding! I have no other adequate words.
@Alex-sx8et · 7 years ago
Can't wait for the French-subtitled version ☺
@allanlam7669 · 7 years ago
has anyone done any work on a 'neural net' model? I heard this term mentioned in a book on neuroscience as a way forward, I guess as it currently stands as theoretical neuroscience. I love the idea of machine learning and ethical algorithms. There is an example of this in early education settings. Using a set of questions as Zeynep describes, running through 3 or 4 topics - personal response and consequences, parents' response and consequences, legal response and consequences - and then with that score, deciding on the best course of action. Say if the situation were ambiguous, like a child turning up at a costume party dressed in military uniform. What would the other parents think? Let's weigh the score for each element: total score, decision. What Zeynep is looking for, I guess, is the difference between context-dependent decisions, such as choosing between a rotten apple or a rotten pear (both probably full of extra antioxidants, btw), and context-independent decisions, that is, ones based on that black box she mentions. And the auditing of such is her main impetus. The clear definition of the differences between the two modes of decisions is what she requires. And thus the auditing of said black boxes, or rather, unbalanced taxonomies, as the studies and data entered into the machine are of one aspect of a whole, rather than a full gamut. Wow, actually, so I guess what she has identified here is an engram model for extracting information from a black box. If one asks the right questions of the black box, one can expose the limitations of said black box's data, and hence be able to conduct further research, or perform it oneself (the research). Brilliant talk Zeynep. It has further inspired me to continue my journey in computation and was very entertaining and insightful. You are a brave woman!
@michellelam5717 · 7 years ago
Allan Shing Wai Lam
@pieter2627 · 7 years ago
This 'black box' is the 'neural network model', and it is a very popular form of AI. Google's 'deep dream' is one way we can barely get a look inside this box so far, but things are still very complex surrounding the internal understanding of it (as she mentioned).
@kght222 · 7 years ago
6:10 when it comes to the data that the machine might use to hire someone, that data was input by a human. its accuracy is itself subjective, but the computer doesn't know that. as humans we can recognize where the machine would have deficiencies like that and not use it for things like that until we have fed it objective information (security cameras are a thing ;P).
@rogeliomoisescastaneda7396 · 7 years ago
I totally agree: computing machines, like any other tool humanity has created, are just an extension of our capabilities, not a replacement.
@GTaichou · 6 years ago
Code a "show your work" output. What do we do when someone comes to a conclusion we don't understand? We ask them how they arrived at it, and often get an answer, even if it's messy and doesn't make sense. I understand everything is easier said than done, but if we can program a computer to learn, why not program a computer to explain its logic? If this, if this, if this, if this all at once, then that ad, that application, that identifier.
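The "show your work" idea in the comment above can be sketched in a few lines: a screener that returns, alongside its verdict, the list of conditions that fired. All rule names, fields, and thresholds here are hypothetical, invented purely for illustration.

```python
# A toy "show your work" classifier: alongside its verdict it returns the
# list of rules that fired, so a human can audit the chain of
# "if this, if this, ... then that". All rules and fields are made up.

RULES = [
    ("long_commute",   lambda a: a["commute_km"] > 50),
    ("low_test_score", lambda a: a["test_score"] < 60),
    ("short_tenure",   lambda a: a["avg_job_tenure_years"] < 1.0),
]

def screen(applicant, max_flags=1):
    """Reject when more than `max_flags` rules fire; always explain why."""
    fired = [name for name, rule in RULES if rule(applicant)]
    return {"accept": len(fired) <= max_flags, "fired_rules": fired}

result = screen({"commute_km": 80, "test_score": 55, "avg_job_tenure_years": 3})
print(result)  # {'accept': False, 'fired_rules': ['long_commute', 'low_test_score']}
```

A learned model's internals are far harder to surface than this hand-written rule list, which is exactly the gap the talk points at, but the output format — decision plus the evidence trail — is the part a hiring manager could actually audit.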
@CurlyChrizz · 7 years ago
Very important topic to talk about!
@stemfactory7312 · 6 years ago
5:11, It's just something we'll have to figure out together.
@AhmedAbdAllahSalem · 4 years ago
an excellent talk by a beautiful person
@RosellArriolaEvangelist · 7 years ago
So right on point, so good!
@zionformulasmagicas · 6 years ago
Incredible!
@alexthekunz · 7 years ago
So we need an algorithm that checks to see if other algorithms are biased. :p
@bergsonking5569 · 3 years ago
What does it mean, biased?
@stephanesurprenant60 · 7 years ago
I believe that the point of this talk extends beyond computer algorithms. Many people do not sufficiently appreciate the power they have over the lives of others. The executive who turned her back on the expert here, likely because her questions and doubts were uncomfortable, likely behaves like this routinely. I like to say, contra the Godfather, that nothing truly is business and everything is personal.
@florbz5821 · 7 years ago
I had no idea this was a thing! Is it just in America or everywhere in the world? Do employers not interview people anymore?
@leo959 · 7 years ago
the title explains the entire show
@stemfactory7312 · 6 years ago
1:16, Just faces? I assumed vocal analysis would be a part of it too. Maybe the networks could broadcast that info during the next presidential debates.
@MrGrapha · 7 years ago
it is important what you talk about
@AISOCIETY · 4 years ago
does that mean it is dangerous to put my thoughts online?
@stemfactory7312 · 6 years ago
7:12, That's what I'm hoping for.
@Meir017 · 7 years ago
Person of Interest...
@caseyharrington4947 · 7 years ago
If you've given it non-biased data then it's objective. You've just made an argument for prejudice.
@Ramblingroundys · 7 years ago
Basically the problem isn't the machine learning system. The problem is what you teach it (or don't teach it). If you feed it data about top corporate performers, but the history has been prejudiced, then the data is going to be prejudiced and the machine will therefore be prejudiced. In the end, the computer system isn't the problem; it's the human element.
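The point in the comment above — biased history in, biased model out — fits in a minimal sketch. All the records below are invented: imagine past managers systematically rejected group "B", and a naive model just learns that pattern.

```python
# Hypothetical hiring history: (years_experience, group, hired?).
# Past decision-makers systematically rejected group "B".
history = [
    (5, "A", True), (3, "A", True), (1, "A", False),
    (5, "B", False), (3, "B", False), (1, "B", False),
]

def train_naive_model(records):
    """Learn the hire rate per group -- i.e. the pattern in the data."""
    outcomes = {}
    for years, group, hired in records:
        outcomes.setdefault(group, []).append(hired)
    return {g: sum(h) / len(h) for g, h in outcomes.items()}

def predict(model, group):
    # Recommend hiring only if the historical hire rate is at least 50%.
    return model[group] >= 0.5

model = train_naive_model(history)
print(predict(model, "A"))  # True  -- group A keeps getting hired
print(predict(model, "B"))  # False -- group B stays locked out
```

Nothing in the code is "prejudiced"; it faithfully optimizes against the history it was given, which is exactly the failure mode the comment describes.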
@Illlium · 7 years ago
But the history hasn't been prejudiced, it just hasn't been equitable, unless the systems were pulling data from the 60's, and I highly doubt that. Of course you can find data points that are completely out of whack, like the one she presented, but on average the program is probably going to still be more accurate than a human, which she even admits, although it is marginal in that case. The point she brought up at the end, though, was a very good one: I don't think this can be used as a means of dispensing lethal force, or any kind of force at all for that matter, unless it's absolutely immaculate, because it's a very dangerously easy way of absolving responsibility.
@caseyharrington4947 · 7 years ago
The problem faced here isn't the slim-to-none prejudiced data we would give these machines but the 'prejudice' that these machines would learn on their own that we are not capable of. To use her examples: we as employers can't tell which candidate will be pregnant in a year or who is prone to depressive disorders; machines would be. It wouldn't be our system per se, it would be a new system. I for one am a huge fan of the fictional robotic rule of never being used to kill a human. The counter argument to this would be military use, taking the humanity out of war. Which I suppose in part is a good thing since we're overpopulated anyway hahaha
@MB-fh1dc · 7 years ago
we should build programs that could decrypt and explain to us how these algorithms work
@beshr1993 · 6 years ago
But why don't we just let AI do its thing, then check the results, and if we don't like them we can impose certain pre-programmed rules on the AI? For example, if we find the AI is weeding out people with potential for depression (to use her example), we can impose a pre-programmed rule on the AI not to weed out people who could potentially get depressed in the coming 3 years or whatever. My point is that we should not stop progress just because we fear the potential consequences. In fact, history has shown that science will progress anyway regardless of our fears. Instead, I say we go ahead with progress in an iterative trial-and-error manner like I explained in the example above.
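The iterative idea in the comment above — let the model decide, audit the results, then bolt on human-written override rules — can be sketched like this. Every function and field name is hypothetical; the "model" is a stand-in stub, not a real learned system.

```python
# Sketch: wrap an opaque model with human-imposed override rules, adding
# new rules whenever an audit finds an outcome we refuse to accept.

def opaque_model(candidate):
    """Stand-in for a learned black box we can't edit directly."""
    if candidate["depression_risk"] > 0.3:
        return {"hire": False, "reason": "depression_risk"}
    return {"hire": candidate["skill"] > 0.5, "reason": "skill"}

def override_depression_rule(candidate, decision):
    """Human-imposed rule: depression risk may not cause a rejection."""
    if not decision["hire"] and decision["reason"] == "depression_risk":
        # Re-evaluate on skill alone instead.
        return {"hire": candidate["skill"] > 0.5, "reason": "skill (rule override)"}
    return decision

def decide(candidate, rules=(override_depression_rule,)):
    decision = opaque_model(candidate)
    for rule in rules:          # apply each audit-driven rule in order
        decision = rule(candidate, decision)
    return decision

print(decide({"skill": 0.9, "depression_risk": 0.6}))
# hired on skill despite the model's original rejection
```

The obvious caveat, which the talk itself raises, is that this only catches the failure modes someone has already noticed and written a rule for.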
@irethoronar34 · 4 years ago
Good to hear Turks on TED. Congratulations :)
@steverubio6072 · 5 years ago
She is 69 years old, but still delivers it powerfully.
@RahulOne1 · 7 years ago
Super cool. I got really worried when Google's AI defeated the Go champion.
@1rkthevar · 7 years ago
exactly, she is 100% right
@stemfactory7312 · 6 years ago
4:57, that's probably how it thinks of us.
@panchri · 7 years ago
9:53 Eddi from RBTV?!
@ManicMindTrick · 7 years ago
In the end we are going to have to hand over control, as digital systems are so much better than the human brain at collecting data and making correct decisions. We are just seeing the beginning of this trend, and it's no surprise the systems are pretty unsophisticated and make beginner mistakes at this point. As time passes it's going to be obvious that not handing over control is foolish, just like it's going to look foolish to drive yourself rather than let your self-driving system make all the decisions that make you statistically 1000% safer on the road. I can definitely see a future where the world is coordinated and controlled by a single super AI. We can just hope such a thing finds value in human life in the end as it evolves far beyond human intelligence.
@Winchestro · 7 years ago
A single super AI seems reasonable until you realize that in reality there's a thing called "latency". If I were to take one part of your brain and pull it out, I would slow down your entire thought process. You'd very quickly be better off without it entirely. We will most likely have a bunch of local installations that will be more or less like humans, hopefully with a little less of our inherited evolutionary bullshit.
@ManicMindTrick · 7 years ago
Winchestro Considering the human brain is able to produce general intelligence at signal speeds of roughly 100 m/s, I don't think the speed of light in a digital system is going to be a factor here... I'm not sure I follow your point about latency in a machine substrate.
@Winchestro · 7 years ago
You don't need to compare it to humans, but to other AI. It wouldn't really make sense to talk about it in terms of general AI, as that doesn't exist. So let's talk about humans. Our brain goes to great lengths and "wastes" a lot of processing power on sophisticated prediction algorithms just so it can hide latency (especially visual) from our consciousness. This is probably because the speed at which we perceive time is attributed more to latency than to processing power. It's such a huge advantage to perceive time as passing slower that it's even worth dedicating processing power to "fake" input for us to work with.
@troutdaletim · 6 years ago
Watch "Colossus: The Forbin Project" and wonder then how close we actually are.
@FlavioAmoedoFilho · 7 years ago
Computers are just tools. You won't ask a hammer if it has feelings, just as you won't ask a computer what it thinks about human issues.
@AnimalAce · 7 years ago
Just a really complicated hammer... that's usually smarter than us.
@georgeglez9872 · 7 years ago
Flavio Amoedo human brains are just tools also...
@FlavioAmoedoFilho · 7 years ago
George Glez Humans, and many other animals, can't live without a brain. So I think it is more than just a tool.
@troutdaletim · 6 years ago
Nefarious people program computers
@vikramardham · 7 years ago
Following her argument, one can draw similar conclusions about our own human brain: our brain is a black box too, and we are just starting to understand its functioning. Does it mean that humans can't make any subjective decisions? (Well, we probably can't, but the question is, should we not?) Nevertheless, an interesting talk. It raises incredibly valid points but takes the anecdotal evidence to an exaggerated level and makes a huge leap towards the end in drawing conclusions. We have to remember, machine intelligence is still in its infancy, and making predictions about what role it can or cannot play in our lives is rather a terrible idea.
@Overonator · 7 years ago
The speaker cites an anecdotal example of how the algorithm screwed over a black woman with no priors and favored a white man with priors. Unless there is evidence that there is a systemic bias in the algorithm to do this over and over, have the people who found this problem forgotten that these algorithms are probabilistic? Meaning that there will be a certain amount of false positives, as in the anecdote? Whoever thinks that you can design a system without reflecting the subjective judgements and values of its creators needs a remedial lesson in philosophy.
@JuanToFear · 7 years ago
Thank goodness this was a reasonable argument on artificial intelligence and not "The robots are taking over!" again... 😧
@keithbell9348 · 6 years ago
Notice the reaction of her co-worker. The reality of the problems in her efforts was too much to bear, so she immediately "ran away" as fast as she could. Not just insulting, but you know the potential of the damage it could produce also disturbed her. Doubtful that any "perfect" machine could have explained what she said in this presentation as well as she did. Unless of course an "imperfect" HUMAN programmed it to say it...
@Vikingofriz · 7 years ago
I think there's no problem really. We can make these algorithms 'more moral' by changing the weights of the so-called neurons, but the question is 'Is it worth it?'. I mean, if a program shows that a pregnant woman is more likely to be depressive, then it is true. It's just statistics; we can't argue with that. And usually programmers need this true information, not information filtered through morals etc. But if it's necessary, it can be changed to work the way we want; it can take into account everything we call moral.
@voxlz · 7 years ago
The problem is the consequences. What if every algorithm decides you are not good at working, and therefore you never get hired? If we don't think about this, some people may never get jobs just because some bias says they make less money. No program should exclude people because of bias, just like we humans should not exclude because of age, gender or chance of depression. Because it's just a chance, nothing more.
@Jarb2104 · 7 years ago
+Torben Nordtorp The problem comes when you black-box the answer into a simple yes or no from the computer. If it showed a very detailed description of what it evaluated, the weight it gave each factor, and the conclusion it reached, then the person hiring could make a more informed decision as to whether or not a person should be hired. And no, the person talking here is making a BS argument when she says we don't know what is going on behind the scenes. Because we do know what is going on, and the people asking for the software should request this information and should refuse any software whose maker denies them this information.
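The "detailed description instead of a bare yes/no" idea in the comment above is easiest to see with a transparent linear scorer that reports each factor's contribution alongside the decision. The weights, threshold, and feature names below are all hypothetical.

```python
# A transparent scorer: instead of a black-box yes/no, it reports the
# weighted contribution of every factor so the hirer can inspect them.
# Weights, threshold, and features are invented for illustration.

WEIGHTS = {"experience_years": 2.0, "test_score": 1.5, "gaps_in_cv": -1.0}
THRESHOLD = 10.0

def score_with_explanation(candidate):
    contributions = {f: WEIGHTS[f] * candidate[f] for f in WEIGHTS}
    total = sum(contributions.values())
    return {
        "decision": total >= THRESHOLD,
        "total": total,
        "contributions": contributions,  # the part a human can audit
    }

report = score_with_explanation(
    {"experience_years": 4, "test_score": 3, "gaps_in_cv": 2}
)
print(report["decision"], report["total"])  # True 10.5
```

For a linear model this breakdown is exact; the honest counterpoint to the comment is that for deep networks no equally faithful per-factor story exists, which is the "black box" problem the talk is about.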
@Vikingofriz · 7 years ago
***** that was just a mistake, a wrong algorithm, so I think it was silly to show this in the video as an example of the problem
@Vikingofriz · 7 years ago
***** omfg, dude, where can you see BIASES in my words? It's not biases, it's just truth that is proven by statistics
@IDislikeTheNewYoutube · 7 years ago
Find me two people that agree on every subject of morality and you may just have a salient point.
@IDislikeTheNewYoutube · 7 years ago
Not sure what you are going after here, but my point was that morality is a human invention, purely hypothetical, and we are all going to have our own conflicting definitions and limits. It's basically all bullshit.
@gbiota1 · 7 years ago
I Dislike The New Youtube I don't think morality is purely an invention; I think there is lots of evidence that it is an evolved trait shared by most *animals* but not all forms of life. The way the impulse is channeled does usually involve some type of engineering, as is seen in religions. The fact that there are things that are alive without moral systems indicates you could have intelligent life without it too, so long as the mechanism of its creation is not biological evolution. Morality is a crutch used by nature to allow groups of non-relatives to form. If an intelligence is operative in creating a new type of life, no such crutch is necessary.
@IDislikeTheNewYoutube · 7 years ago
Intriguing point about other life not necessarily creating it, but you are mashing together concepts of all types of social paradigms and concepts of "getting along" within a group or society. Morality has some spillover into most venues of, let's call it, "tribal" behavior and interactions between humans, but it's not the only effect in play. Nor, to my original point, is it altogether as important, meaningful or universal as folks make it out to be. We each have our own definitions and delineations that shift daily and situationally, so most conversations about morality are simply moot.
@Winchestro · 7 years ago
Our morality is based in instincts, but most of it comes from and is the inevitable consequence of our general intelligence. We can also always override our instincts if we want, and it's even required, as our instincts are specific to our own evolutionary history and merely shortcuts for specific situations, most of which aren't even relevant any more. We have the ability to solve any kind of problem and even reconfigure ourselves to become anyone else. I'd argue there is a core morality that is inevitable for any general intelligence, no matter where it came from and what hardware it's running on.
@Winchestro · 7 years ago
If there's a truly objective core to morality, people wouldn't need to agree on it. Just like people don't need to agree on math. And it also really doesn't matter this much what we do here on earth, as long as we don't do something stupid like fucking up our climate or starting a nuclear war. We basically won earth; there's no point in further struggle. But going forward and looking at this quite big universe, those questions become more relevant. Math would be a great way to communicate with other forms of general intelligence, as it's something they may very well have discovered independently. An objective core to morality would also be tremendously helpful in that regard. That's one of the most important questions AI will hopefully help us to answer. In a universe so tremendously huge, it would be essential for our long-term survival to figure out if we can expect to meet lots of potential friends or inevitable enemies. If we should head for other stars or look for the darkest and most remote places to hide in.
@MatheusPB · 7 years ago
sorry, but nowadays many, many people from abroad think Buenos Aires is the federal capital of Brazil. No wonder Watson says Toronto is a US city...
@eFrog27 · 7 years ago
Idk who you know from abroad, but I don't know anybody who doesn't know the capital of Brasil. LOL
@dahyuhun · 7 years ago
agreed :)
@swordwaker7749 · 7 years ago
OK, for open-ended questions we may program a computer to answer from different angles, teaching it a procedure of thinking by splitting problems into parts that it can interconnect and split further, so machines can have procedures that are logical like humans'. For example: "what is ten plus nine plus tweenty?" The system can connect and split, and with hundreds of hours of computational translation map it to 10+9+20 (there's a little misspelling, but a real AI should be able to handle that), then with its calculation system get 10+9+20=39 and translate it back. By hours of speaking with humans it would detect that the program isn't supposed to show its translation system, so it comes down to calculation. And here's a thing: it may search for how to calculate if the asker doesn't know, but if the asker knows, then change the question to 5748963121547*783146952 - too hard for a human to calculate - so instead it would say "this question is too hard for a human to calculate" or whatever; against another machine it can transfer the calculation procedure. To name the procedure it might take a dictionary and experience using it. As for people here, I think it's right to hire or pick out; it's just important to insert unbiased values, or just make the decisions ourselves. Google has an algorithm that must be connected to show ads for the proper age, gender or whatever, and the failure is everyone's, because everyone also has a different way of thinking, so connection might solve all this.
@mc780 · 6 years ago
7:20 9:28 9:55
@heylisten917 · 7 years ago
Again, people complaining about other people's biases because... they are not their own biases. Which are, of course, perfectly moral and don't need to be questioned. What some call bias is someone else's morals, and vice versa.
@duyminh5219 · 2 years ago
Some notes: AI decision making --> we cannot outsource ethical responsibilities to intelligent machines (at the end of the video)
@Blast-Forward · 7 years ago
Maybe the people with higher risk of depression have that risk because there are such machine learning "algorithms" that deny them jobs?
@5xmasterx548 · 5 years ago
If you're in Demet's class then good job
@kathywolf4558 · 7 years ago
AI MUST learn that there are humans who are not violent, immoral people who seek the destruction of others. Not all humans hate other humans and want to do harm for greed, power etc. AI must be able to determine the difference, especially the AI that is being developed for military activities. What happens when AI does not distinguish between aggressive violence and a population that is not aggressively violent?
@BlastofFreshAir · 7 years ago
We should design a computer to compute philosophy before we create ones for war etc., and see what it finds.
@Heavenlydreamer · 7 years ago
#Human #morals, the subjective decisions. But of the complex control of life's viewing of intelligent, of self machine. as explain in the body. If only 2 be left to it's own cosmic outsources of responsibilities. AI it understands self physics. the Q is, but those it still dream. for the end of life's black hole is the sleeping of the eyes that close. : ] sorry I con-put with worlds.
@danthedingo · 3 years ago
I think the goal is to shut out depressive people from the market. Why would you want that?
@youtubeuserbg · 5 years ago
The moment when you realize Ted Kaczynski was right all along! lol
@leunamtzam · 6 years ago
42...
@mariateresavergara4090 · 7 years ago
Can we replace lawyers and judges with a computer? It would be much cheaper.
@pianotube2163 · 7 years ago
maria teresa Vergara if you like the piano🎹🎼please check my account🎼
@4relevants · 7 years ago
It's not hard to create a system which can explain its own decision process and then improve the algorithm. Let's not pit PC against AI.
@arianna2243 · 3 years ago
Shouldn't machines be programmed to understand that some issues, especially human morality and mortality, are too complex? Some human issues need human resolution; in these cases it would be best for machines to alert the proper individuals - multiple people, for accountability. Ultimately, machines make sense; they are logical and constant, but binary may be the one thing humans are not.
@danthedingo · 3 years ago
Bro, everyone wants to complain about the ethics, but nobody wants to put in the work. As if it were easy to build algos. Corps are hiring people to build fancy stuff as fast as possible and putting ethics to the side because, obviously, money. You need to either appeal to the money side of these people or go build better programs.
@mzatmaca · 7 years ago
1:53 nice laugh
@huseyinaslan1690 · 7 years ago
Turkish language, please
@corescopeplays2789 · 7 years ago
Computers gonna rule da worldddddddddd
@ChisanguMatome · 7 years ago
I for one welcome our machine overlords and hope they are merciful.
@LeonidasGGG · 7 years ago
"Oh my God! Change!" - fearmongering talk. All machines fail; that's why we keep building better ones... Live with it.
@OZAN220 · 7 years ago
HOIST THE FLAGS, HOIST, HOIST, HOIST
@prathameshsonar · 6 years ago
facial expressions :)
@yang4420 · 1 year ago
Awesome
@Klayhamn · 7 years ago
Westworld brought me here.
@amarnour8959 · 7 years ago
third
@ImanAliHussein · 7 years ago
Next time, try to add an insightful comment. You can do it.
@jackandjill6419 · 3 years ago
@@ImanAliHussein fourth
@Ka9radio_Mobile9 · 6 years ago
I think she's super cute! :-)
@duckdumbsmartpplimnotbored5175
26,855th view, 1,001st like, 162nd comment
@ImanAliHussein · 7 years ago
Damn, how can I compete with that
@duckdumbsmartpplimnotbored5175
no idea
@ldohlj1 · 7 years ago
Not to disrespect, but I didn't enjoy her presentation style...
@user-ke9nh6pw5t · 7 years ago
I'm first
@hannibalburgers477 · 7 years ago
k pal.
@natham10 · 7 years ago
She is treating computers as self-learning creatures that can change our moral values. And although this can be true, machines are controlled by us! So, if controlled correctly, artificial intelligence could be used for unbiased decisions! I get her point, although I don't understand why she is creating the idea that we are not "in control" of those systems.
@georgeglez9872
@georgeglez9872 7 years ago
Natham Coracini some AI systems are already self-learning "creatures" and can't be controlled by us.
@stell4you
@stell4you 7 years ago
No one at DeepMind understands how AlphaGo managed to win.
@abhimanyukarnawat7441
@abhimanyukarnawat7441 7 years ago
Stop it, we'll die.
@DrMateen36
@DrMateen36 7 years ago
dem thighs
@crimsoncorsair9250
@crimsoncorsair9250 7 years ago
.....
@mechalock9561
@mechalock9561 7 years ago
.....
@nova_vista
@nova_vista 7 years ago
.....
@flimbonimbo7259
@flimbonimbo7259 7 years ago
My thoughts exactly.
@tr3vk4m
@tr3vk4m 6 years ago
*tumbleweed*
@Ben_D.
@Ben_D. 6 years ago
It's an interesting talk, as most TEDs are. But cherry-picking examples like those two criminals is pretty weak. A sample of two people is bad science. It may or may not be accurate, but the method is bad.
@ghostfifth
@ghostfifth 7 years ago
Wait, they took her out to lunch because they didn't want her there or something? That sounds like a benefit.
@evionlast
@evionlast 7 years ago
Steven Universe's Dogcopter
@dkkempion8744
@dkkempion8744 7 years ago
This "unchecked follow" means nothing.
@MedvedPrevedPoka
@MedvedPrevedPoka 7 years ago
Nonsense. There are ML algorithms which are not "black boxes". There are clustering ML algorithms that do not need labeled training data, so being biased by that data is not an issue. She brings up examples of poor ML algorithms with obvious flaws; that does not mean there are no good ones. It is ridiculous how biased her speech is, considering its theme. Moreover, if human ethics is not solvable, then there is no point in using it in the first place; if it is solvable, then it's not a problem for ML algorithms to use it, it's just a matter of time.
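To illustrate the clustering point above: here is a minimal, toy 1-D k-means sketch (function name and sample data invented for illustration, not from the talk). It groups raw numbers into clusters using no labels at all, which is the sense in which clustering needs no labeled training data:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Toy 1-D k-means: clusters raw numbers without any labels."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # pick k initial centers from the data
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups, one around 0 and one around 10, found unsupervised.
print(kmeans_1d([0.1, 0.3, -0.2, 10.2, 9.8, 10.5], k=2))
```

With this sample data the two recovered centers land near 0 and near 10. In practice one would reach for a library implementation instead of hand-rolling the loop, but the sketch shows that no ground-truth labels enter the process.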
@drackar
@drackar 7 years ago
I came in to ask whose morality she's talking about... but in reality, this isn't about morality at all.
@MarkMagill
@MarkMagill 7 years ago
Machines will displace workers at a much faster rate. Time to take up homesteading!
@MrC0MPUT3R
@MrC0MPUT3R 7 years ago
Moist
@neotronextrem
@neotronextrem 7 years ago
Morals... are for the weak. Morals are a structure, made by society to work more effectively. The problem is: our morals are old. Not relevant anymore. We don't need morals anymore.
@Hotsnown
@Hotsnown 7 years ago
Edgy
@CinereousDove
@CinereousDove 7 years ago
If you make decisions, even the decision to blindly follow a computer like some god, you have morals. And not taking responsibility for the morals you decide upon, that's what's weak.
@neotronextrem
@neotronextrem 7 years ago
Xenophone I'll give you that, but I think your philosophy is called ethics. You say an amoral person is not loyal to his beliefs; I say: sure he is, just in a more rational way.
@CinereousDove
@CinereousDove 7 years ago
*philosophy, and ethics is *a philosophical discipline, not a philosophical school/movement. And (as will be bluntly apparent) I am leaning towards existentialism. You mean an amoral person, a person who has no morals? And no, that's not what I meant. I meant that claiming you have no morals is inauthentic, since in every act you take, you construct an idea of how to act simply by the way you act. So I would say, rather than someone not being loyal to his beliefs, he's not "loyal" (that's kind of a problematic word to use in this context, but English isn't my mother tongue and I won't look this up right now) to himself, insofar as he is constituted through the actions he/she takes. And how can there be a more rational way of being loyal to your principles? A more direct way, probably, but taking the "direct way" no matter what is already a principle... Rationality is a very arbitrary moral construct anyway (I thought you don't like those...), used to make something seem "necessary" and not take responsibility. btw. If you read up to this point, congratulations.
@neotronextrem
@neotronextrem 7 years ago
Xenophone We see morals differently, then. I see morals as a human construct to build on, even if not completely convinced by it. But I can only say: yes, you are right, I don't argue against my own beliefs.
@ramrod60th30
@ramrod60th30 5 years ago
If computers could tell when we're lying, we would have no more politicians, we would have no more marriages, and nobody could talk to anybody on RU-vid. Wouldn't life be boring?
@TonyStark-lt7uv
@TonyStark-lt7uv 6 years ago
This topic isn't that meaningful, and it has nothing to do with computers or machine learning. You are suggesting how to make a bias-free decision when we humans ourselves are coded to be biased. Every single human that lives or has ever lived is biased, so is everything we invented, and so is your presentation. Understanding that bias is unavoidable and not necessarily bad, we only need to learn how to live with bias instead of saying "bias is bad and we need to invent machines to make unbiased decisions". Thinking machines make better decisions than humans is just dumb; if a company hires people by machine, it will not be very far from bankruptcy. Trust me.