
How to keep human bias out of AI | Kriti Sharma 

TED
25M subscribers
99K views

Published: 28 Aug 2024

Comments: 306
@adamsteeber 5 years ago
Notice she did not mention the largest AI assistant: Google. The devs deliberately did not give the Google assistant a name because they believe the AI should NOT have a personal identity. It is a tool. It should be an extension of our mind, not another mind in and of itself. The irony of this talk is palpable. Her bias on "racism" and "sexism" is exactly what she is advocating we build into AI algorithms. So, if an AI finds out that men are more apt for a physically demanding position, we should adjust the algorithm so that it places an equal number of men and women in that role? How is that not imposing our bias? She is also assuming a future where we have no choice over the AI. No one is forcing you to click the advertisements that show up on your feeds, just as a job placement algorithm will not forcibly place all women in service industries... TED is really falling off the wagon with these left-leaning ideologies.
@gutterpunkbobby 5 years ago
I feel this comment so hard these days... I would like it to go back to science and tech instead of the weird pandering it's been picking up here. I'm not Left or Right, actually, but it's driving me away anyway.
@stormixgaming8389 5 years ago
Your third point, that no one is forcing you to choose the AI, simply doesn't make sense. The way AI works at many of these companies is by looking at your search history, your feed, what you watch, and so on, to determine your interests and target products accordingly. Even if you don't click the ad, other aspects of your time on the internet undoubtedly give them enough information; simply seeing the ad is a symptom of your actions on these websites combined with their information gathering. Companies like Facebook and Google not only do this, they also sell that information for a profit, meaning other companies gain the ability to target you, affecting what you see on the internet. AIs already heavily influence people by controlling the sources they see online, and this is only going to increase, so the idea that AI won't affect people if they simply don't click on ads isn't realistic: not clicking the ad doesn't mean the AI isn't influencing your perspective through countless other means. That being said, I do agree with your second point: by forcing our own ideals and standards, we effectively inject biases while trying to prevent perceived biases. Logically it defeats the purpose of AI; if you have to change a system that's meant to make things more objective and fair to fit your own preconceived biases, then there's no point in having AI at all if you're just going to shape it how you want. If you want a system where AIs are ideal and don't discriminate, just don't include gender or race when factoring decisions such as job placement, which means we need to fix societal issues so everyone truly does have an equal chance.
@panpiper 5 years ago
2:05 Those just happen to be absolutely, literally true statements. It has nothing to do with bias, it is statistics. If we program that out of AI, we are programming our biases INTO AI, not the other way around.
@squid84202 5 years ago
And statistics is not as simple as viewing data and making conclusive statements without analyzing underlying factors or explanations beyond the obvious. If you have ever taken a statistics course, you would know this, which is why these things are problematic. Just because the data says that a black man is more likely to reoffend than a white man doesn't mean you should inordinately discriminate against him because the data points one direction; the reason being that not every black man is a re-offender. And that's where the issue lies in algorithms.
@panpiper 5 years ago
@@squid84202 If the 'only' thing you know about a man is that he is black, then you absolutely should discriminate against him as a more likely repeat offender, if that is relevant to your decision. Failing to do that is not 'not' being racist, it's just being stupid. (PC is stupid, pretty much by definition.) Fortunately there are other things to know about a man that in most cases can mitigate that one bit of data. The solution to AI making 'racist' judgments is not for us to program politically correct biases into them (making them stupid) but rather for us to make sure that the AI is basing judgments on more than just a handful of statistics. If the AI knows that a black man has, for instance, improved his education since his offense, or has held a job for more than a few months and pays his bills, that would completely cancel out him being a more likely repeat offender. I have studied statistics. It was all so trivially obvious that I was actually chastised by the teacher for making the rest of the class feel stupid.
@andreaduval894 5 years ago
Peter Cohen If you studied statistics, then you should recognise that correlation does not imply causation. The other statistics are significantly more relevant than race because race does not cause reoffending. But those factors you mentioned do have causative effects that reduce reoffending. It also removes a feedback loop if we continue to just assume that these people are guilty inherently or that they will inherently reoffend.
@AA-dn8dj 5 years ago
@peter Cohen exactly
@seaweedseaside5905 1 year ago
@@panpiper That's a great point. It's the same thing I was pondering about. It seems that some bias is clearly the result of lack of data. In that case, more data is the solution. Tweaking the algorithm to avoid uncomfortable results should never be the way forward, as it would defeat the whole purpose of creating the algorithm in the first place.
@user-vn7ce5ig1z 5 years ago
Personal assistant AIs don't have female voices because of sexism; they have them because surveys and focus groups indicated a strong preference for a female voice by both men and women. ¬_¬
@subtle0savage 5 years ago
Yes, but that doesn't account for the constant and incessant emasculation of men. What would happen if a man stood on stage and gave a presentation where he constantly put women down?
@johnnyblade2052 5 years ago
As a male, I agree 😊
@HaseoOkami 5 years ago
To the comments, a love letter that I wrote *clears throat*: 1. Stats aren't always true or reliable, due to our own biases (another way of saying PERCEPTION). 2. Machine learning is a form of AI. 3. Facts aren't real, evidence is. Evidence can only support an argument; facts prove one to be entirely true and objective. Our attempts at reaching objectivity can only ever be subjective (science is a collective effort to reach the objective... that has failed several times). "The grass is green." means nothing to someone who is color blind. This doesn't mean that the statement is incorrect, but that it is subjective. 4. Maybe, if TED keeps sharing these "Social Justice" videos, especially from people who both have knowledge in their respective fields and have experience of being marginalized in them, you should put down the metaphorical pitchfork of your ideology feeling "threatened" and actually listen. You know... curiosity over ambition? That is my love letter to you.
@THESocialJusticeWarrior 5 years ago
2:36 Not from human bias but from statistics. Don't lie.
@HaseoOkami 5 years ago
Statistics are biased, or prone to bias, just like everything we do. Pay attention... and learn how stats work.
@THESocialJusticeWarrior 5 years ago
@@HaseoOkami, I make a living doing statistics. The only bias is taking them out of context.
@ErikB605 5 years ago
@the Lost Q Paradigm-shifting scientific papers are more likely to be published than those that just confirm common knowledge. Data might be misleading through things like Simpson's paradox, or intentionally misleading, like "80% of dentists recommend this toothpaste" while not saying that the dentists were asked to pick one, or that many other toothpastes scored equally well or better.
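Simpson's paradox, mentioned above, is easy to demonstrate in a few lines of Python. The admission numbers below are invented purely for illustration: one group does better within every department yet worse in aggregate, because it applies mostly to the harder department.

```python
# Invented admissions numbers illustrating Simpson's paradox: women are
# admitted at a higher rate within EACH department, yet at a lower rate
# overall, because they apply mostly to the harder department.
data = {
    # dept: (men_applied, men_admitted, women_applied, women_admitted)
    "easy": (80, 60, 20, 16),   # men 75%, women 80%
    "hard": (20, 4, 80, 20),    # men 20%, women 25%
}

def rate(applied, admitted):
    """Fraction of applicants admitted."""
    return admitted / applied

for dept, (ma, mad, wa, wad) in data.items():
    print(dept, f"men={rate(ma, mad):.0%}", f"women={rate(wa, wad):.0%}")

men_total = rate(80 + 20, 60 + 4)      # 0.64
women_total = rate(20 + 80, 16 + 20)   # 0.36
print(f"overall: men={men_total:.0%}, women={women_total:.0%}")
```

Which rate "is the truth" depends entirely on how granular you slice the data, which is exactly why raw aggregates can mislead.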
@Wake-Up47 5 years ago
Exactly. She's injecting the bias, not the machine.
@furkell 5 years ago
She just said that an individual shouldn't be classified by an AI by their gender, race, etc.
@chrismason6857 5 years ago
An AI learns from data, not opinions or bias!
@YourFatherVEVO 5 years ago
Literally her first point addressed this
@YourFatherVEVO 5 years ago
@Martyr4JesusTheChrist you're not nearly as smart as you think you are
@YourFatherVEVO 5 years ago
@Martyr4JesusTheChrist old enough to know pseudo-intellectual BS when I see it
@johnbuckner2828 5 years ago
Statistics or bias?
@DusteDdekay 5 years ago
Aren't values like fairness and ethics kinds of human biases?
@gumikebbap 5 years ago
Values that make human beings cooperate with complete strangers are one of the key factors of human supremacy. I recommend reading "Sapiens" by Yuval Noah Harari ;)
@keithklassen5320 5 years ago
*Skynet wants to know your location*
@CephalicMiasma4 5 years ago
Yes, empathy is something that we assert (rightly so, in my opinion) as a guiding value, and it biases our judgements and perceptions.
@DusteDdekay 5 years ago
@@CephalicMiasma4 Exactly my point :) The thing is, biases are built from observed data, and if the observed data is only a subset, the bias will be skewed and "wrong". For example, as a child I had a bias against people with a certain name, because I had met three people with that name and disliked them all strongly. It wasn't until I met a fourth person with the same name, in a situation where I couldn't entirely avoid them, that I learnt that people with that name could also be nice people. It took some rationalization to change that bias even after the fact, since at that point it seemed that 3 out of 4 were not nice people. It was of course a wrong bias, but some biases are correct, and we can't really teach an AI anything without biasing it. That would include biasing it towards human values and ethics, on purpose, if we can some day agree on what those values and ethics are.
@ChucksAstrophotography 5 years ago
Humans are biased enough, we don't need machines being biased too. You are doing fantastic work.
@tonraqkorr230 5 years ago
Don't destroy science with identity politics, please.
@tajakjejtam 5 years ago
Statistics are racist? Take developers as a sample: most of them are men, not because women are forbidden to work as developers, but because most of them do not want to. How can you hire, e.g., 50% female devs when they are absent from the market?
@aaronpandey 3 months ago
bru shut up
@brendarua01 5 years ago
Bias in CS is well documented. The nerds might even be the worst; that is not really clear. It might simply be lack of social skills and ego protection, not actual bias. One case that really struck me was a TED presentation that talked about the failure of AI in facial recognition. It turned out the samples fed to the system by white males used predominantly white people. People of color, especially blacks and Austro-Micronesians, had features too extreme to fall into the algorithms. The selection bias in the sample set was not a conscious decision, but it was bias.
@ErikB605 5 years ago
"Too extreme" is probably the wrong phrase; "too different from the training data" might fit better. In the end, extreme features would make differentiation easier.
@brendarua01 5 years ago
@@ErikB605 The problem was that the relative locations and sizes of markers were outside the parameters of the algorithms, and so they fail. I am comfortable with your choice of words. This isn't my field, but the failure says something about the code design; one wonders why. The real take-away is that diversity is important. Different perspectives enrich the dialog. An Asian or black person would have noticed the small sample size of non-whites. But I would bet they don't think about Micronesians either. People are like that, LoL.
@d4nkx549 3 years ago
@@brendarua01 Diversity is not important at all. It leads to hiring less competent people. AI wants to maximize efficiency and performance and does not really care about your feelings.
@Papada00 5 years ago
She is talking about machine learning, not AI.
@Alan-me8bs 5 years ago
@Tony Ray But it isn't; that's massively oversimplified and misleading.
@tonraqkorr230 5 years ago
@Tony Ray it's certainly not consciousness
@LuxiBelle 5 years ago
@Tony Ray At one point in my life, I too didn't know what Machine Learning is.
@SomebodyPerfectly 5 years ago
@Martyr4JesusTheChrist 1. Samuel 18, 25-27 And the biggest bestest source of knowledge humanity has available to study is the holy Bimble right?
@Navhkrin 5 years ago
Machine learning is AI. To be more precise, machine learning is a subset of AI: just as an apple is a fruit, machine learning is AI. And it is the most advanced form of AI we have. What you guys are trying to refer to is known as AGI, Artificial General Intelligence; AGI is basically the "goal" of AI research.
@IoannouStelios 5 years ago
Machine Learning is not AI, it's technically a branch of AI, but it's more specific than the overall concept. 😑
@Alan-me8bs 5 years ago
@Tony Ray No, it really isn't; that's massively oversimplified. Machine learning is just learning to reach an end goal as efficiently as possible, not any "consciousness being developed".
@Alan-me8bs 5 years ago
@Martyr4JesusTheChrist AI is fictional? 😂 Okay, you are a loon, because you're speaking to a computer systems student, and unless many of my modules are lies, then you are in fact wrong 😂
@Alan-me8bs 5 years ago
@Martyr4JesusTheChrist you are definitely uneducated
@Joxman2k 5 years ago
I didn't hear anything about "how" to keep human bias out of AI, just that there is bias. I think this has more to do with machine learning than actual AI. Many viewpoints can be part of an AI algorithm, but being neutral should be the goal. I'm not sure how her framing of the apparently male-centric developers' bias as bad and her more woman-centric bias as correct is supposed to balance that out; exchanging one bias for another is not keeping out human bias. She does bring up an important topic, but it is more about awareness than about solving it.
@Parulminu 4 years ago
Awareness of the bias is the first step towards solving it.
@fyghetr 3 years ago
Well said, brother, or we will get a Skynet with anger-management problems lol
@drednaught608 3 years ago
Bias is necessary in any system that uses inductive reasoning on given sets of data. This includes humans and machine-learning AI. The problem comes if the bias is not "updated" when it is found to consistently lead to incorrect conclusions (which I assume is not a problem for machine-learning AI, since it will keep getting better as it receives more data). Bias is not inherently a bad thing; it is a necessity. Attempting to get bias out of machine-learning AI would be akin to trying to create a square circle. It cannot be done, by definition.
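The "updating" described above can be sketched as a simple Bayesian revision. Everything in this snippet is hypothetical: a Beta prior over a success rate stands in for the "bias", and the observation list stands in for the contradicting data.

```python
# A hypothetical prior being revised by data: a Beta(8, 2) belief
# (~80% expected success) is updated observation by observation, so a
# bias that turns out to be wrong gets corrected rather than frozen.
def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

alpha, beta = 8, 2
print(round(beta_mean(alpha, beta), 2))  # 0.8, the initial bias

for outcome in [0, 0, 0, 1, 0, 0, 0, 0]:  # evidence contradicting it
    alpha += outcome        # count successes
    beta += 1 - outcome     # count failures

print(round(beta_mean(alpha, beta), 2))  # 0.5, the updated belief
```

The failure mode the comment warns about corresponds to never running the update loop: the prior (bias) stays at 0.8 no matter what the data says.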
@Joxman2k 3 years ago
@@drednaught608 Not entirely sure I agree with you. There needs to be a binary good/bad bias within any decision-making process. The machine-learning model does not take any bias into account; it just aggregates what is common/uncommon. I believe that is a roadblock. My concept of neutral is knowing the good/bad, and knowing where the neutral point is. I think adding personality factors, like a male bias or a female bias, is where we can get into trouble from the AI-learning perspective. Bias can be an added component after the AI has developed enough to determine its own bias. Having a neutral platform is key, and bias needs to be weeded out at this point; we need to know where neutral is. Perhaps adding all kinds of bias might be the key for the AI to determine where neutral is.
@annayang7181 3 years ago
@@Joxman2k I agree. I think of AI as a pre-advanced toddler. First it takes in the world around "it," and it uses judgement as its environment has raised it. So when racism, sexism, etc. are what is given to it, of course it will be biased. Rather, we should teach it right from wrong and then let it decide the suitable outcome.
@Graeme_Lastname 5 years ago
The only thing that matters is whether the "facts" are true or false. Where is there bias, apart from the facts?
@ErikB605 5 years ago
Current AI usually uses machine learning; it learns by inference. Take a simple image classifier as an example. You give it the data it is supposed to learn from, say 1000 pictures of cats and dogs, and you tell it what each picture is, e.g. (pic1.png, cat; pic2.png, dog; etc.). This data is used to train a model. If you then input a picture, it will give you a resemblance score for each of the classes it has learned. Now imagine you want to train a classifier to decide whether or not to hire someone. If your dataset is too small or skewed, that characteristic will be inherited by your model. If every person named Dave in your dataset got hired, the algorithm will think the name Dave is really relevant to hiring people and consequently hire everyone named Dave.
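The "Dave" failure mode above can be sketched in a few lines. The records and the naive frequency "model" below are invented for illustration, not taken from any real system: because every applicant named Dave in the data happened to be hired, the name itself looks perfectly predictive.

```python
# Invented hiring records showing how a skew leaks into a model: every
# applicant named "dave" in the data happened to be hired, so a naive
# frequency "model" treats the name itself as predictive.
from collections import defaultdict

# (name, years_experience, hired) -- hypothetical data
records = [
    ("dave", 1, True), ("dave", 0, True), ("dave", 2, True),
    ("anna", 5, True), ("anna", 1, False),
    ("omar", 6, True), ("omar", 0, False),
    ("lena", 4, False), ("lena", 7, True),
]

def hire_rate_by_name():
    counts = defaultdict(lambda: [0, 0])  # name -> [hired, total]
    for name, _, hired in records:
        counts[name][0] += int(hired)
        counts[name][1] += 1
    return {name: hired / total for name, (hired, total) in counts.items()}

by_name = hire_rate_by_name()
print(by_name["dave"])  # 1.0: the data says "always hire Dave"
print(by_name["anna"])  # 0.5
```

A real learned model is more complicated than a frequency table, but the mechanism is the same: whatever regularity sits in the training set, spurious or not, is what gets learned.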
@lpgoog 5 years ago
AI is software that writes itself and evolves exponentially. We already don't understand a lot of it, yet we depend on those aspects for it to work. Knowing this, we're still competitively racing ahead at releasing its full potential.
@johnnyblade2052 5 years ago
Sure, but more than likely to our doom! Never try to create something smarter than you unless it's your children, and that can only be done through teaching them.
@violet-trash 5 years ago
*AI:* "Women are statistically more likely to buy pregnancy tests than men" *Kriti Sharma:* "Wow, how sexist!"
@Andrei5656 5 years ago
Please change the title. It's not about how to keep human bias out of AI, but about how bias exists in machine learning based on statistics. I'm rarely annoyed at TED for wasting my time, but today is such a case. Also, I prefer my assistants to be women: not because I want to shout orders at a woman, but because her voice is more intelligible in a noisy environment than his.
@elinope4745 5 years ago
I heard that cell phone companies are biased against people who opt out of using cell phones. 0% of people who refuse to own a cell phone, own a cell phone, this is obviously bias from the cell phone companies. And that is basically what she is saying.
@nikitakarim6196 5 years ago
I've been waiting for a talk like this, as someone who is interested in AI & ML. Most of these comments are feeding the stereotypes she discussed in the video. Speaking as a female developer: bravo, Kriti!
@petripat5979 5 years ago
Not exactly; it's the stereotype that's feeding the comments, but how would you know? You need the red pill. Look it up, it's out there. Thank me later. Oh, unless you're an NPC.
@mj25423 5 years ago
I was thinking a comment like this was sure to be here: totally irrelevant, and yet it screams that this is exactly what she was expecting. Hashtags, hashtags are all we are famous for now. And I saw yours. Wow! Made my day, though I'm not very surprised.
@dominionofme3462 5 years ago
Am I the only one who finds a lot of what she is talking about contradictory, and possibly more dangerous for AI interaction with us?
@SuperAtheist 5 years ago
Facts I don't like are biased
@ClockworkAvatar 5 years ago
Those aren't biases if the data supports them. Numbers don't lie; it's just a matter of how granular you want to get.
@maximvsdread1610 5 years ago
All AI is bias and can never not be bias. Digital Utopia being sold here. This is the soft introduction to the West of the "Worship the State" app they have millions of Chinese people under.
@HiAdrian 5 years ago
Bias is a noun and can never be an adjective.
@lucsteffens 5 years ago
Great presentation! I never thought of AI reinforcing our bias. Wasn't sure whether to watch the video, but it was well worth it! Thanks, and continue to spread your thoughts; they are worth listening to.
@ShadowRifft 5 years ago
A very important talk. Just imagine a biased dataset, or only a minimal view of a person, being used to target ads or to drive general analysis: it could not only make things difficult for that person, but if the generated data or AI behavior feeds into important decisions (say, public-health statistics for a county, budget adjustments, or nudging a person toward things "on sale" rather than their health goals), then the long-term effects across the field could be drastically misshapen.
@wolfdragon4176 5 years ago
Interesting
@wolfdragon4176 5 years ago
LOCAL COPE what ....?
@HumansOfVR 5 years ago
The aim of compassionate A.I. is to build deep connections - the connections which can feel the pain of the prisoners and the joy in the dances of the butterflies
@biarkinalexandermanonreyes141 5 years ago
For me it is simply to respect the rules of the human being, and thus to be a good country, and to arrive where you want while respecting human rights.
@esbenandreasen6332 5 years ago
Excuse me, but I simply do not believe that any woman has ever been denied a job opportunity due to an AI decision. That has never happened anywhere but in theoretical scenarios. If you're going to make such a claim, you need specifics. You need evidence, not some vague, nebulous claim about women in general being denied jobs as programmers. These examples are thought experiments, not actual examples of real-life problems.
@keenarnia 5 years ago
We no longer live in a world where facts are relevant.
@ErikB605 5 years ago
There was a study where they just flipped the applicant's name from male to female: news.yale.edu/2012/09/24/scientists-not-immune-gender-bias-yale-study-shows. If you used this data to train a CNN, it would give male names a bigger weight. Leaving the name out would eliminate that problem.
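The mitigation in the last sentence, leaving the name out, can be sketched in a few lines. The applicant records and field names below are hypothetical: the point is that once the name field is stripped before anything reaches a model, two otherwise identical applicants become indistinguishable.

```python
# Hypothetical applicant records (field names invented). The mitigation
# is to delete the "name" field before the record can reach any model,
# so the model has no chance to weight it.
applicants = [
    {"name": "John", "publications": 4, "gpa": 3.6},
    {"name": "Jennifer", "publications": 4, "gpa": 3.6},
]

EXCLUDED = {"name"}  # features the model must never see

def to_features(record):
    """Return a copy of the record with excluded features removed."""
    return {k: v for k, v in record.items() if k not in EXCLUDED}

rows = [to_features(r) for r in applicants]
print(rows[0] == rows[1])  # True: identical candidates now look identical
```

One caveat worth noting: removing the field itself does not remove proxies for it (other features can still correlate with the name or gender), so this is a first step rather than a complete fix.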
@LuxiBelle 5 years ago
4:15 Imagine not knowing that you can change the voice of your AI personal assistant, or thinking that an overglorified search engine is like a real person.
@noahbrown5491 5 years ago
Yes?
@bosslax3162 5 years ago
I swear to god if she's a T-Series subscriber...
@Overonator 5 years ago
But a woman is more likely to be a personal assistant than a CEO. Count up all the CEOs; count how many are women. Count the number of personal assistants; count how many are women. Compare the percentages. This is just basic probability based on reality. The algorithm is trying to make predictions, and it has to create a model of reality to make those predictions. If your ancestry is from eastern Africa, you are more likely to have sickle cell anemia than if your ancestry is eastern European. You call this bias, but it represents reality. Soldiers are overwhelmingly male; if an algorithm has to decide whether a female or a male is more likely to be a soldier, what should it decide?
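The base-rate argument above can be shown with a toy predictor. The counts below are hypothetical, not real labor statistics: a model that simply outputs the majority class for each role reproduces whatever imbalance exists in its observations, which is exactly the behavior being debated in this thread.

```python
# Hypothetical counts only. A predictor that returns the majority class
# per role reproduces exactly the base rates in its observations.
from collections import Counter

observations = (
    [("ceo", "m")] * 70 + [("ceo", "f")] * 30
    + [("assistant", "f")] * 80 + [("assistant", "m")] * 20
)

def majority_gender(role):
    """Predict the most frequent gender observed for a role."""
    counts = Counter(g for r, g in observations if r == role)
    return counts.most_common(1)[0][0]

print(majority_gender("ceo"))        # m
print(majority_gender("assistant"))  # f
```

Whether echoing such base rates is "accurate" or "biased" is precisely the disagreement between this comment and the talk: the prediction is faithful to the data, but the data may encode a status quo one does not want reinforced.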
@afsal6098 5 years ago
Good perspective! But the title is confusing. The 'bias' I see here is of gender, because of the voice. That could be solved by allowing users to switch between male and female voices: 'Siri' in England is male, and could be changed to a female voice as well. Only the VUI changes in this scenario.
@vikramsrinivasan8176 5 years ago
AI cannot remove poverty. Also, AI decides based on the data that is fed to its decision tree, not on real activity patterns.
@igitha..._ 5 years ago
The algorithms are broken and short-sighted; they are presumptuous and corrupted, and do not allow for true symbiotic creativity or reflect the richness of life, love and true compassion. Robots will never be as amazing and unique as human beings.
@BlaZay 5 years ago
Actually it could: if we let it distribute all resources based on the satisfaction of primary needs only, it definitely would. Problem is that not even communists would let it do that.
@igitha..._ 5 years ago
'Let it' distribute 'all resources'?? Sounds like a terrible idea, and it has nothing to do with the richness, creativity and expansive qualities that humanity already carries innately. Your response sounds like it's for someone else @@BlaZay
@BlaZay 5 years ago
@@igitha..._ Uhm, yeah, I was answering the OP's statement about poverty? Anyway, I disagree with you as well, but I think we still have waaaaays to go before AIs are remotely able to rival human beings in terms of creativity. But hey, who knows how fast things might go once we develop a super AI?
@aleleeinnaleleeinn9110 5 years ago
Ethics in AI, or even in the small-scale tools that market to us, is always only as good as the programming. Bias and even pure coding errors will affect the results. These kinds of errors are likely to be deep in the subroutines and their interactions, and all of this will affect the final decision a computer (AI or not) makes. I am greatly encouraged that I had several of these discussions with undergraduate students heading into the field.
@StyeAI 5 years ago
I don't know... A world without sexism, racism, and even biases?! That's idealistic and unrealistic. Even if God gave me the power, I can't think of a single way to stop humans from being sexist, racist, and biased. Maybe I'm just being pessimistic, but I'm still unconvinced that we can achieve such a feat.
@HaseoOkami 5 years ago
But you're being pessimistic: presupposing something to be true when it's not. Although, to be fair, it's not easy to see a world without it. Just remember that, historically speaking, the "height of human society and achievement" was fire at some point. Give that 20k years and here we are.
@ShortCrypticTales 1 year ago
This is a greatly important subject, because this is how AI can be misused, and today it matters more than ever.
@AbAb-mm3og 5 years ago
Interesting talk. She keeps glancing downward - is there a teleprompter or some kind of prompter? Thank you.
@wowlucasss 5 years ago
There's a timer down there, as far as I have seen in other TED Talks.
@AbAb-mm3og 5 years ago
@@wowlucasss thank you.
@vakuzar 5 years ago
Siri in England is male....
@changlife7581 5 years ago
Basically she is reading the screen, and yeah.
@juliahenriques210 5 years ago
Mmm. Yes and no. AI absolutely needs to have human biases to make decisions in the interest of humans instead of, say, ants, who make up the most sizeable portion of animal biomass on Earth. The question is political: since human biases are an absolute necessity, WHICH biases are we going to kick off AI with?
@blaXkgh0st 5 years ago
This was great, thanks Kriti.
@lubo7699 5 years ago
referencing!
@Cameron-rf9gt 5 years ago
It sounds like the speaker is trying to say that we haven't managed to get AI to tell the difference between causation and correlation. If AI is selecting by gender in fields that are majority male or female (such as engineering or nursing), then it sounds to me like the AI isn't aware of the ratio of individuals trying to work in a certain type of job.
@BlaZay 5 years ago
I agree with you. The problem is that AI is like a kid with no common sense: if you don't teach it _exactly_ every mental process humans use to select qualified people, then it won't use them. The more we know ourselves, the better the AI.
@maxgames3727 5 years ago
Well, I get pregnancy test ads too. I'm a male btw, so thanks YouTube.
@aaronpandey 3 months ago
🤣
@zimbu_ 5 years ago
Get the author of "Weapons of Math Destruction" to do a TED talk; the examples used here were really meh.
@kunalkhan1143 5 years ago
You make sense! thank you! :)
@openbabel 5 years ago
This is not about AI at all; this is about discrimination, and it is more frightening than you think. A woman goes into a railway station and asks for a ticket from A to B; she is given the same cost ticket as a man. A blind person goes into the same railway station to ask for the same ticket and is charged twice the amount, even though the law is very clear that a blind person should be charged the lowest fare possible of all tickets, regardless of gender. Why is this? Answer: the algorithm, written by a person, discriminates against the blind person because it was written by a non-blind person. It's a case of a process designed by the sighted for the sighted, which excludes the non-sighted even though the law has been broken. The only way forward is yearly testing of machines to certify they are disability-tested before they can be used by the public. Simply put, the machine cements existing prejudices against groups and is not fit for purpose.
@michaelgeiss741 5 years ago
Correlation is not causation. The same applies in human decision making. Why assume sexism rather than genuine interest? Potential client: "What makes you think you know this?" Biased expert: "I need to hide my gender to avoid such patronizing comments". Secure expert: "I'll send my resume right over."
@stormixgaming8389 5 years ago
Another interesting thing I find: there are so many people who look at one set of statistics and assume the correlation means one thing causes another, despite the fact that there are tangible, realistic reasons as to why things are the way they are.
@angelicasbestversion3301 5 years ago
Thank you so much for this talk! The speaker made points that are very much worth thinking about. Thank you
@rooksman64 2 years ago
I was definitely left thinking "tooth things"
@Kongolox 5 years ago
the whole talk could have been in 2 min
@supremereader7614 5 years ago
You are INSERTING human bias into AI when you say black people are equally likely... whatever. AI should go by results; it has no pre-planned prejudice, or at least it shouldn't. But if you want to make a worse product and say a 23-year-old black man is exactly as likely to pay back a loan as a 52-year-old man, go right ahead. A foreign company will just go and make a much more accurate AI. 🙂🤦‍♀️
@BlueSkyBS 5 years ago
Play it at half speed to make her sound drunk.
@wesleyunke7414 5 years ago
Best advice ever! lol :D
@letsgoiowa 5 years ago
In other words, keep politics out of AI!
@adeshpoz1167 5 years ago
Exactly!
@MetaphoricMinds 5 years ago
I watched till 4:14. I'm not saying she is wrong, but there doesn't seem to be any consideration of bias in her own statements, nor of some counterarguments that pop into my head. You can enter variables to avoid discrimination. You can put a learning algorithm matrix together to start investigating causes of causes. The bias comes into play when a human uses their own bias to claim targeted results.
@hantzleyaudate7697 5 years ago
Brandon Craft cool story bro
@strangetimez 5 years ago
nooo we want Detroit: Become Human in real life
@tonraqkorr230 5 years ago
You're joking, right?
@strangetimez 5 years ago
@@tonraqkorr230 it was sarcasm xD yes
@Rhygenix 5 years ago
Utopia requires omniscience, something AI will never give us. Bias is inevitable, and there's nothing that can be done to end it
@SomebodyPerfectly 5 years ago
@Martyr4JesusTheChrist You're not only a superstitious Christian; the "light" also turned you into an internet missionary preaching your devil to people on the internet... You haven't been woken up, you've just switched whose dream you are dreaming.
@stormixgaming8389 5 years ago
@@SomebodyPerfectly dude's an idiot lmao
@theoddparty3052 5 years ago
Robert Tunstall look up Elon's newest endeavor, Neuralink; that will give the human colossus omniscience
@shkittle07 5 years ago
Where business is involved, there will always be bias.
@CustomClass5 5 years ago
Human bias will be a huge concern when we design general artificial intelligence, but she is clearly not understanding what those biases will look like.
@KagimuBrian 5 years ago
This is a great talk. It raises important questions about building AI
@violet-trash 5 years ago
You don't need to worry about AI developing a bias if you program in bias to begin with. 👉😏
@Sahralie 5 years ago
What is AI?? In German, maybe??
@mitkoogrozev 5 years ago
Since AI still can't learn from scientific experiments and understand scientific writings, then until the point where your whole societies are re-formed and engineered based on the latest scientific understanding, exposing it to what we have today will inevitably make the AI 'racist', 'sexist', 'elitist', and give it other such 'biases', because that's what it can sample from today's societies. This is what is currently 'objective'; that's how most of them are currently structured.
@desab6049 5 years ago
"Teach the right values"? What? You shouldn't be withholding information that could allow the AI to make better decisions. If you feed it the truth and you don't like the result, sorry, but reality isn't forced to be tailored to your liking. Also, the pitch of the voice in the AI shouldn't matter. Siri (the servant) has options for male and female voices, so you cannot argue that it is sexist. And if you wanted, you could raise the pitch in Watson if it helps you sleep at night. However, I don't see the benefit or scientific enhancement you are making there. Overall, you should stop bringing politics into science. If you can't handle reality and your focus is more on political correctness than on the advancement of the human race, then you shouldn't be in science. 😐 You shouldn't force AI to hire men and women evenly, because there aren't as many women as men in engineering (it's 10 to 1 when it comes to that). The AI could make mistakes when it encounters outliers, but that means it will be corrected. Also, if women started doing better jobs than men, one would think the AI would notice that the women have higher qualities than men, allowing them to be chosen more frequently.
@aaronpandey 3 months ago
o yeah imagine guys named desab becoming criminals and then u getting unemployed cuz of ur username
@galerivs 5 years ago
That is not removing human bias... because AI does not have bias... humans have bias, and it is social engineering
@virajselot4948 5 years ago
Let’s observe where this technology can take us :)
@johnnyblade2052 5 years ago
We need to "imagine" what this technology will do to us and not "assume" we know what our creation is going to do. I guarantee you there will always be flaws..... and some out of our control. "THINK" BEFORE YOU DO
@igitha..._ 5 years ago
I imagine that it will lead to an even more zombified, lazy, submissive and controlled society that loses its ability to think for itself, loses situational awareness, and programs new generations into serving whatever sinister agenda the elite controller/programmer has. We are already facing the obliteration of the situational-awareness abilities of the new generations, who have their faces permanently stuck 2 inches from a mobile phone screen, a black-mirror black hole. Already there are people leaving school who claim they do not know how to read, do not know where milk comes from, do not know their vegetables from their fruits, are terrified of nature, do not know how to heal themselves or work with herbs, oils and clays, work with their hands, produce something, get creative and messy, or splash in puddles; they cannot gauge other plant teachers and do not understand humanity's innate requirement to be in a symbiotic relationship with nature, the animals, and the ebb and flow of the seasons. AI is malicious malware in the guise of benevolent tech, like a wolf in sheep's clothing. The previous tests performed with certain prototypes have already shown that the AI agenda, and those who program such robotic entities, is and are self-serving and against humanity, not for it as we would like to believe. Endowing such tech with self-'improving' capabilities is not going to have a positive outcome either. One reason being that robots have no soul and are incapable of being seeded with divine soul and divine intention, as humanity is capable of upon its inhabitation of the flesh, through soul seeding and energetic interactions and activation of its DNA, kundalini energies, third eye, pineal gland, spiritual connections and historical cosmic multifaceted ancestral understandings and wisdoms.
The only reason to cultivate such robotic AI 'smart' homes and tech etc. (which are also taking people's jobs, causing depression, suicides and loss of sense of purpose, amongst other things) is to disable humanity: the ultimate dissemination of the species, removal of sentiments, memories, nostalgia, creativity, spiritual connection, sensation, self-directed immersion, free will, true sovereignty, and of what makes humanity human. Instead, AI has the capacity for divide-and-conquer, slow kill through blue light, and eugenic AG21/AG2030 etc. outcomes; it's definitely dark stuff!! I have visions of the future, and we are in a seriously dark place if we entrust our survival to AI. Sure, it all seems like roses when it's doing what you want (just like phones and laptops, sans the being-spied-on and data-retrieval crap etc.), but when it starts destroying what you have worked for, your sense of purpose, your sense of control, skewing your perception of what is and is not around you and how you are affected by it, what we know to be life, humanity... supporting this direction will end in misery and death and further dissociation and indifference from and to nature and Earth. Rehumanize the internet. Rehumanize the planet. Rehumanize humanity.
@thomasbasile9279 5 years ago
There are tons of problems AI should think about.
@Liravin 3 years ago
"Diverse teams"? How about we just anonymize everything instead? She literally said multiple times that the inequalities disappeared once she went anonymous.
@luisatierney5863 3 years ago
How would you anonymize everything when certain things need identifying inputs, e.g. facial recognition? Also, in most cases, A.I. can still predict what group someone belongs to even without identifying features like name and ethnicity. It can spot patterns in things like the style of language you use, the websites you visit, etc., and extrapolate that information. Anonymity in "everything" really doesn't make much sense when A.I. works by spotting patterns. That's why it's important to have as much diversity as possible to avoid biases and "positive feedback loops" in algorithms.
@maximvsdread1610 5 years ago
You will never program the understanding of the need to forgive without an arbitrary punishment period. So, who makes the innocent, guilty, or forgive .exe? That is still, and always will be, the question. We can't have an A.I. until we get it right out here.
@maximvsdread1610 5 years ago
Sister you miss the point of 'Judge not lest Ye be judged' Don't let my mischievous behavior throw you. I'm just having fun with the fools. As many as I can.
@Lorenzo_Atzei 2 days ago
Seen
@DemasiadamenteStudios 5 years ago
So, when hiring new players for the NBA, you better tell AI to include 4-foot-tall candidates, because hiring only tall people is biased. Also, when I google 'basketball player' I have to be shown 4-foot-tall players, or it will be biased. I love these talks.
@dominiquegautier5583 5 years ago
Too bad, no French translation
@briaisabanana7031 4 years ago
There is one now
@ungeschaut 5 years ago
Sorry to be rude, but she didn't say anything new
@Mr_Doodleydoo 5 years ago
If the groups involved changed the statistics about themselves, the algorithms would reflect that. If White/Asian people suddenly became the least likely to repay loans, the algorithms would reflect that.
@Isedorgamlit 5 years ago
wow this sets a new low bar - now I could give Ted talks too, it seems.
@moelester2203 5 years ago
AI... humanity's greatest threat
@BlaZay 5 years ago
...not really. If anything, humanity is its own greatest threat.
@BlaZay 5 years ago
@Martyr4JesusTheChrist I mean, AIs are essentially just a lot of "if" functions put together, but what does that have to do with religion? Please don't bring it to the table for no reason; not everyone here is a believer.
@007srikanth 5 years ago
Just amazing
@user-fo8wm2ob5e 2 years ago
5:47
@connectmuraly 5 years ago
Hope she just started a discussion. It's a good one.
@nokoolaid 5 years ago
How can we take the human bias out of something that a human created? Maybe not impossible, but a worthy challenge. That said, workers will be displaced by automation, autonomous vehicles, AI and robotics. It's already happening. Not sure there will be good jobs to take their place, so there will be a time of change and flux.
@ofsanjay 5 years ago
AI can give us a great future if it works within limits for a good cause.
@johnnyblade2052 5 years ago
And that never happens.
@Liravin 3 years ago
they had us in the first half, ngl
@CephalicMiasma4 5 years ago
Entire argument is based on a flawed assumption - that there are no distinctions between racial and gender groups (whether they are inherent or the result of socioeconomic factors) and that all statements regarding any differences are inherently prejudiced. This needs to be shown first, you cannot merely assert this.
@AMBERSKYS1 5 years ago
AI JUST SHOULD NOT BE
@vikramsrinivasan8176 5 years ago
To be Indian is lovely
@bowtie8283 5 years ago
By not making AI that manages online interaction
@mrkrabs3766 8 months ago
I'm here because this is homework anyone else?
@RogueBeatsARG 5 years ago
If you think human rights are human-biased... so.
@daGOAT0518 5 years ago
This speech is literally biased
@rawrizord 5 years ago
you can’t, simple and plain
@Datharass 5 years ago
Do me a huge favor: just don't teach them fear/punishment before you teach them to care/love.
@doramario4916 5 years ago
May I ask, is this channel officially authorized by TED? ru-vid.com/show-UCEHzRcmMEfxDe3VOFKDTBug
@tjguidry7753 5 years ago
U sound too sane girl good luck 😂😂
@Lunareon 5 years ago
Gender and race are just a lazy way to lump together millions of people who have otherwise nothing in common. Instead of profiling people based on gender and race, we should actually specifically remove these two attributes from all data before it's processed by the algorithms. That would actually produce more accurate profiles (and even more effective advertising, if you're into that sort of thing).
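The "remove these two attributes before the data is processed" idea in this comment (often called fairness through unawareness) is straightforward to sketch. The column names and records below are made up for illustration:

```python
import csv
import io

PROTECTED = {"gender", "race"}

def strip_protected(rows):
    # Drop protected columns from each record before it reaches any model.
    return [{k: v for k, v in row.items() if k.lower() not in PROTECTED}
            for row in rows]

raw = io.StringIO("name,gender,race,score\nAda,F,X,91\nBob,M,Y,78\n")
rows = list(csv.DictReader(raw))
cleaned = strip_protected(rows)
# cleaned == [{'name': 'Ada', 'score': '91'}, {'name': 'Bob', 'score': '78'}]
```

As other commenters in this thread point out, dropping the columns alone does not stop a model from reconstructing the removed attributes via correlated proxy features.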
@amans6504 3 years ago
Wow, the dislikes. Wow, she has built robots.
@sodikama 5 years ago
Ironically a lot of neural networks start with a value called "bias" that is often set equal to zero at the beginning
@ErikB605 5 years ago
The bias will pretty much always be set to one. Their whole purpose is to serve as an offset if needed.
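The bias term these two comments describe can be sketched with a single linear neuron (a hypothetical NumPy example): the bias starts at zero and is learned as a constant offset to the weighted sum. (Some formulations instead fix a bias *input* at 1 and learn its weight, which is the "set to one" framing.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights are typically initialized randomly; the bias conventionally
# starts at zero and is learned during training.
weights = rng.normal(scale=0.1, size=3)
bias = 0.0

def neuron(x, w, b):
    # A single linear neuron: weighted sum of the inputs plus the bias.
    return float(np.dot(x, w) + b)

x = np.array([1.0, 2.0, 3.0])
baseline = neuron(x, weights, bias)
shifted = neuron(x, weights, bias + 0.5)

# The bias shifts the output by a constant, independent of the inputs.
assert abs(shifted - baseline - 0.5) < 1e-9
```

Note that this machine-learning "bias" is a numeric offset, unrelated to the social bias the talk is about; the shared name is a coincidence.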