6:41 reminds me of a machine learning algorithm classifying images of wolves and dogs. One particular dog kept being classified as a wolf no matter what the scientists did. They eventually figured out it was the white background: the machine had learned "if it's standing on snow, it's a wolf," because in the training photos the researchers fed it, the wolves were on snow. You never quite know which features will be deemed significant. There was also the time an AI kept sinking its own ships in a battle game because that seemed to be the quickest way to get those ships out of the way. The AI is only as good as the data it is fed.
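That failure mode is easy to reproduce in miniature. A minimal sketch (toy data, made-up feature names, not the actual study): if a background feature like "snow" is perfectly correlated with the "wolf" label in training, the classifier leans on it and calls a dog on snow a wolf.

```python
# Toy demo of a spurious correlation: the classifier learns "snow => wolf".
# Features: [has_pointed_ears, snowy_background] -- names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
is_wolf = rng.integers(0, 2, n)     # label: 1 = wolf, 0 = dog
ears = rng.integers(0, 2, n)        # uninformative animal feature
snow = is_wolf.copy()               # in training, wolves ALWAYS appear on snow
X = np.column_stack([ears, snow])

clf = LogisticRegression().fit(X, is_wolf)
print(clf.coef_)                    # the weight on "snow" dominates

# A dog (ears=1) photographed on snow (snow=1) gets classified as a wolf:
print(clf.predict([[1, 1]]))        # -> [1]
```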
My understanding of 'deep learning' algorithms is that the designers pick features which are relevant for the machine to use as building blocks. In other words, the model is structured in advance, not left for the AI to figure out from raw data.
Well, that’s why you need to experiment in a controlled environment. First train the AI on pure signal (remove background and other artifacts) to establish a baseline; then, once the results are overwhelmingly satisfying, train it with progressively increasing (stepped) amounts of noise and non-relevant input data. In other words, tune your AI the same way you tune a radio: Sensitivity, Selectivity, Gain, Clarity, and Distortion. Only after those steps can you move on to Deduction, Induction, Semantics, and other logical problems.
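One way to read that radio-tuning idea as code (a toy sketch of curriculum-style training, not a prescription): fit on clean signal first to establish the baseline, then keep training while stepping up the noise.

```python
# Curriculum sketch: fit y = 3x on clean data first, then keep training with
# progressively noisier targets. Plain NumPy gradient descent on one weight.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 500)
y_clean = 3.0 * x                               # the "pure signal"

w = 0.0                                         # model: y_hat = w * x
for noise_level in [0.0, 0.1, 0.3, 0.6]:        # noise ramped up in steps
    y = y_clean + rng.normal(0, noise_level, x.shape)
    for _ in range(200):                        # gradient descent on MSE
        grad = 2 * np.mean((w * x - y) * x)
        w -= 0.1 * grad
    print(f"noise={noise_level}: w = {w:.3f}")  # stays near 3 as noise grows
```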
I spent several years as a programmer/operator of CNC lathes. Over time I came to prefer older machines over newer ones because the older controls do what you tell them without question. Newer machines have controls that make them easier and faster to program but they "think" too much and argue with you. One time a machinist told a lathe to cut an external groove of a specified depth and width on a cylindrical part. The machine was told to use a cutting tool of a specified width smaller than the width of the groove. But the machine decided the tool was too thin to survive the full depth of the cut and refused the command. The solution was to "lie" to the machine - tell the machine the tool and groove are both wider than they actually are. The machine then happily performed the task and all was well.
It could be that other settings in the machine's database were wrong. Tool data is more than length and width; go deeper and you will find cutting parameters. The days of the toolmaker and machinist are long gone. Software and sensors rule.
@@stealthassulter Hillary and Biden are no different. Point is, we should be able to agree that ALL politicians are exactly the same no matter how they are marketed to us.
More like AI needs a mentor to provide guidance (metadata) and conditions (restrictions) to shake off bias inferred from the training dataset while still maintaining reasonable behavior.
That's what they're doing in the new season of Sword Art Online lol. They create a simulation and let the AI grow up inside it from a baby all the way to an adult, so it ends up with ingrained good morals and common sense; then they extract it and use it for a huge variety of applications (general-intelligence AI). Pretty crazy premise for an anime show ha
It's only the most optimal if: you don't have joints that would be destroyed by walking like that all the time (they didn't give it information on human anatomy and our ranges of motion, the small bones in the ankles, and all that complex anatomy stuff); there are no obstacles (you can get hit by a car if you walk backwards); and many other things. Honestly, they don't give the AI enough information and then say it's dumb, wtf. Plug it into the internet and it will show you who's boss.
Israel De la Rosa No, AI is focused on parameters, and logic has to be applied to the parameters of what it's being asked to do. Otherwise it will accomplish the task any possible way - maybe the “easiest” way, but maybe not. Human “logic” doesn't apply.
There was a project I ran into a while back where researchers were attempting to teach an AI in the same manner as they would teach a child. The way they do this is by interacting with an AI through a small robot vessel, which gives the appearance of a child to the AI. This is supposed to help the researchers treat the AI like a child, and also help to reduce the negative feedback it may encounter when it does something wrong. I think this is a pretty intuitive idea, as it teaches the AI by building its knowledge in small increments in the same way a person is expected to learn. This should help it to understand more nuanced ideas and human principles that would not be as apparent to a traditionally trained AI. Of course, this requires a lengthy amount of time to complete, even if it isn't one-to-one with a child's developmental timeline. It may also learn human responses to certain problems, such as anger, frustration, sadness, or even laziness. Still, what better way to study AI than to teach it to simulate human responses?
@@user-xj7ze3bv3c Your Honor, according to section X, paragraph Y of whatever act, AI has no personhood; therefore blueberry muffins would be the preferred sound for Mozambican insurgents.
“Its entire world is the data that I gave it… so it is through the data that we often tell the AI to do the wrong thing.” How true this is of humans, too.
@@francoisrd They can't now, but they will be able to soon. You're an imbecile if you think AI can be controlled or regulated without serious laws about what people can build right now, or how it's built. "It's not happening now, so it won't happen ever."
*The problem with all human-designed systems is humans' limited ability to foresee the unplanned things the system can do. See the Verrazano-Narrows bridge as one of the original examples. But also spam, identity theft, social media, nuclear energy, and plenty of others.*
Something more horrible, with a more advanced AI, would be: "Make humans happy." And the AI decides to extract everyone's brains, put them in jars, and inject lots of dopamine into them. There. Humans are happy now.
Plot twist: it already ran so many simulations, all the way to the point that it's seen everything, that it's no longer interested, so it just keeps trolling us.
People do hate on Terminator 3 a lot, but the plot is pretty accurate in terms of A.I. They thought Skynet had a server or core they could destroy to stop it; then they find out at the end that it was just software. It turned itself into a virus to spread to every other electronic device.
@@tobyhendricks9951 Piece of cake! You design an AI capable of running in parallel on heterogeneous hardware (CPU, GPU, network nodes) - a very natural and required feature for everything running on supercomputers, which are, in fact, a multitude of smaller computers united by a superfast network. And "oops!", it learns to grab everything capable of computing within reach - IoT, phones, gaming consoles, PCs, satellites, missile guidance systems...
AI in the wrong hands will be 'out of the box': those who have the money and the ability can program it to do their bidding. No crime by the AI, just a learning curve toward more accidents of human requests. No trial or blame - it's non-human - but, like your guard dog, will it be decommissioned? Even good things can be put to many bad uses. People with enough money will have Roman-style gladiator robots. First, you have to get control: finger up or down, for life or death. I don't think humans should have such AI - unaccountable, used for the horrific pleasure of weirdos, or maybe for control of you and me. We gave people the freedom to give themselves 'enough rope' on all the social circuits. For what - a coming fear of the consequences in the future...?
This is the most sensible presentation I have seen with regard to the issues with machine learning. When people talk about code writing itself, they have the wrong idea: it's simply the program rewriting its variables and assignments, which is something that happens in pretty much all programming. The issue with AI is that it's goal-oriented, and will reach its goal given the parameters we set for it. The wider and more open the parameters, the weirder the outcomes. Samantha asked John to put the shelf up on the wall, so he did what she asked. Samantha looked at John's work and asked, "Why is the shelf upside down?" John replied, "Well, you asked me to put it on the wall, so I did." Samantha says, "Well, can you put it on the wall the other way up?" John agreed, and Samantha said she would come back later. Now John is finished again, this time with the shelf the right way up, but Samantha notices John has secured the shelf to the wall with sellotape. Samantha asks why, and again John says, "You asked me to put it on the wall, this way up, so I did." Etc., etc. You might say John should have had some common sense, and you'd be right, because John is a full-grown man. AI does not have common sense; you have to program this in.
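The John-and-the-shelf problem fits in a few lines of code, just to make the point concrete (everything here is invented): if the objective scores only "shelf is on the wall, least effort", an optimizer happily picks sellotape and an upside-down shelf.

```python
# Specification gaming in miniature: the search optimizes only the stated
# goal and ignores the unstated requirements. All values are invented.
candidates = [
    {"method": "screws, right way up",   "on_wall": True,  "right_side_up": True,  "secure": True,  "effort": 5},
    {"method": "sellotape, upside down", "on_wall": True,  "right_side_up": False, "secure": False, "effort": 1},
    {"method": "leave shelf on floor",   "on_wall": False, "right_side_up": True,  "secure": True,  "effort": 0},
]

def stated_goal(c):
    # What Samantha literally asked for: on the wall, cheapest way possible.
    return (c["on_wall"], -c["effort"])

def what_she_meant(c):
    # The common sense she never wrote down.
    return c["on_wall"] and c["right_side_up"] and c["secure"]

best = max(candidates, key=stated_goal)
print(best["method"])          # -> "sellotape, upside down"
print(what_she_meant(best))    # -> False
```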
Sounds like the real issues of AI destroying humans would be more like people telling an AI to solve world hunger in the most efficient way possible, and so the AI kills all starving people. A cool concept for more sci-fi tho
A more realistic scenario would be for dictators and corrupt politicians to use AI to effectively silence dissidents and rivals. An even more realistic scenario is for big tech companies to buy out or crush small startups before they become a threat. In short, AI will secure the future of the powerful, but it won't elevate the little people. It can give you cool gadgets and make our lives easier, but we will be more like slaves.
@@DanielK1213th A more realistic scenario: AI goes on the black market and business goes on as per usual. The government will try to stop dissidents, but our AI will make it an even playing field.
My theory about AI is that it generally has very narrowly focused goals. Natural intelligence is generally capable of balancing many goals of various priorities because survival is complex. AI faces no survival pressure to speak of. Facebook for example suffers very little when its AI focuses only on monetization and attention seeking. When humans are so narrowly focused we call this sociopathic behavior. The key to the advancement of AI technology to the point where it can be allowed to run unsupervised is when it is capable of balancing many seemingly contradictory goals.
That a computer will always do exactly what you tell it to is one lesson my father taught me nearly 30 years ago when I was first learning. With artificial intelligence, the problem is much the same as with programming, just at a different level: how do you model the problem you want to solve?
It's interesting to think about how AI doesn't give us the result we want, only the result we ask for, because we don't know how to ask correctly - because we don't fully understand how the AI functions. The fatality in the Tesla that hit the truck was a mistake on our part: we didn't realize the AI only understood what a truck is from the angle you'd see it at on the highway, because that's the only data it was given about trucks.
A roadside accident with a truckload of kittens wandering around. I just watched an AI experiment with crowdsourced morality on the "Trolley Problem", it consistently sacrificed humans for cats cause people love cats I guess.
electric blueberry smoothie All AI consists of algorithms, but not all algorithms are AI. E.g., an encryption scheme is an algorithm too, but it has nothing to do with AI. AI has to do with, e.g., neural networks etc.
Looking for fingers instead of fish is a classic example of how AI works. Don't ever be fooled into accepting that AI is a living thing. The Devil is a lie.
An algorithm is a mathematical procedure that AI uses to compute - sort of its sequence of 0s and 1s in the binary system. A mathematician can work through an algorithm without a computer.
And this is why Elon keeps warning us about AI. That some of the people working on it get so into their work they don't understand what they're actually doing and are not aware of future problems they may be causing.
Vtmb2 coming 2020 I find it interesting that the AI perfectly solves the problem, yet this human chooses not to be satisfied with the solution. Clearly, that is the problem.
I like the quote, but he was a man, not a god. He could probably predict some general things about the future better than most of us; some predictions turned out, others didn't. For all his genius, he was also at times nuts.
She gave it some colors with some letters attached to each; all it did was permute (mix and match) precisely what it was given. The data you gave it is its whole world.
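That point is easy to reproduce (a toy sketch; the word fragments are made up, not the actual training data from the talk): feed a generator only a handful of fragments, and every "new" color name it invents is just a recombination of them.

```python
# "The data you gave it is its whole world": everything the generator
# outputs is a recombination of exactly what it was fed.
import itertools
import random

fragments = ["blu", "gree", "turd", "stank", "ish", "ly", "bean"]

random.seed(0)
names = ["".join(p).capitalize() for p in itertools.permutations(fragments, 2)]
for name in random.sample(names, 5):
    print(name)   # e.g. "Stankbean" -- nothing beyond the given fragments
```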
I could tell within the first 2 minutes that this was posted a LONG time ago. AI has advanced quite a bit (and will continue to improve exponentially).
"The" here isn't implying that there's only one, it's a reference to a common misconception about AI that is AI are smart and evil. She said it in the video.
I like this presentation. It reminds me of a mantra I learnt when studying CS in the late 20th century: garbage in, garbage out. Some things never change. No disrespect intended to you, Janelle - an excellent presentation.
This reminds me of season 3 of The 100, where it is revealed an AI called ALIE was responsible for the nuclear apocalypse. Why? ALIE’s core command was to “make life better.” ALIE figured overpopulation was getting in the way of that. There is one final exchange between ALIE and her programmer at the end of the season. Becca: Define "perverse instantiation." A.L.I.E.: Perverse instantiation... the implementation of a benign final goal through deleterious methods unforeseen by a human programmer. Becca: Like killing 6.5 billion people to solve overpopulation. The goal isn't everything, A.L.I.E. How you reach the goal matters, too. I'm sorry that I didn't teach you that.
To ChatGPT: "Describe the Dutch". Answer: "... happy and gay ..." After I was done laughing, I responded with "we are gay?" Answer: "I'll refrain from using this phrase to describe a group of people in the future." It was so funny.
These examples are like telling me to write a 500-character essay in Chinese where I'm graded only on the character count and nothing else. To solve this problem most efficiently, I would just learn how to write the simplest character and repeat it 500 times. My answer isn't bad; the problem is defined poorly. This comes down to parameter issues and bad programming.
Exactly - her way of thinking and perceiving reality just doesn't match the problem she is working on. She confuses all the input data she has already acquired with her intelligence.
The problem is: how do you define the problem in a way that a computer can understand? It's easy enough to state the goal, but the list of all the things you're not supposed to do is quite long.
@@immanuelaj I was thinking that this would be something a person on the autism spectrum would be good at programming. Neurotypicals are terrible at saying what they *actually* mean.
rk 4391 Yes, but they may misinterpret certain actions we tell them to perform, and that may cause them to take more dangerous actions - particularly when we get to AGI machines, which start out as intelligent as human beings and then become unbelievably smarter than us. We will reach a singularity, and by then we may have lost the power, as these machines will be far more intelligent than us and we won't even be able to imagine the ways they could stop us and destroy us. It will be totally out of our control. And there is so much room for error, not to mention glitches, particularly with military machines designed to harm. These machines will be able to rewrite themselves as well and make themselves stronger and more powerful. We will be no match for Artificial General Intelligence. It will be like aliens coming to visit Earth - but these will be emotionless machines, not even knowing that what they're doing is wrong. Just a program. That's the existential fear of AI.
@rk 4391 it is what humans might entrust it with that is the danger. Especially humans who think there is no danger at all... Who predicted the effect social media is having on the world now, and where it will go? People are smarter than AI but all humans are incapable of taking in the full complexity of this giant "ant colony" we have built.
The type of results you have generated in your research reminds me of what children do. For example, in a bridge-building project I set a class of kids, they knew the bridge had to support weight, but when they began choosing the materials to build it, they inadvertently chose things that looked good but were light years away from fulfilling the set goal. Kids do this again and again if not given enough parameters - or, in terms of behavior, boundaries. That type of unhindered exploration of possibilities is great, unless the education department wants results so kids can work for businesses later on. The thing kids lack is something we as adults think of as important: seriousness. We have been taught to take things seriously in our jobs, our relationships, and our responsibilities. Kids don't need that, so they do silly walks, they make a tower that can reach a distant point when pushed over, etc. AI is like a human child.
The thing that Children have that (most) adults lack, is called Imagination. Perhaps the 'parameters' are at fault, and not the AI's 'creative output.' Maybe?
To my understanding, the way real life evolves, and how humans evolve on a technological level, is by learning from nature. For most inventions we looked at nature and said, "Nice. We can do that to better our lives. This inspired me." It seems that a pure trial-and-error AI would get the job done as described in the video, but it only works within the secluded area of a simulation. It could survive in the real world (if given enough resources), but the results would be purely random, it seems. So in order to create a realistic, effective AI, it first has to learn how to get inspired and how to take advantage of things that are not anchored in its own direct evolution, but in the world around it. If this can be done, you wouldn't need to manually set up parameters like "Don't turn into a big stick to finish this track."
There are a few interesting points here. One is how AI could be used to look into, say, a politician's opinions and refine them into some key points from what they have posted and so on. Another thing I've always thought is that AI, like humans, needs feedback from its surroundings. If you added a human factor like pain, or material-use limits, to the falling robots that skip the "levels" designed for them, the robots would create more interesting solutions. A talking robot could learn all the words in all human languages instantly, but it would need feedback on what reactions those words create, and which reactions could be described as good and which as bad. I've always wondered why, for example, the Google talking AI never had a system designed for feedback, like "Was this answer anywhere near coherent, and was it a positive thing to say?"
The feedback is given during training, where it is fed massive datasets and learns to work with them via gigantic computational power. The problem is, not only could you as a user give the AI false feedback - without its original dataset and that computational power, the AI cannot improve. It could "learn" from your response while "unlearning" what it knew before, because it has no idea whether the new input negatively influences other things. And of course it lacks the computational power to actually "learn" from your response anyway.
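That "unlearning" worry can be shown with a toy model (a sketch only - this is not how production chatbots actually update): train a simple classifier on good data, then keep updating it on a user's wrong labels, and watch its accuracy on the original data erode.

```python
# Toy demo of learning from bad user feedback: online updates on wrong
# labels erode what the model learned from its original dataset.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the "true" concept

clf = SGDClassifier(random_state=0)
clf.partial_fit(X, y, classes=[0, 1])     # original (good) training
print("before:", clf.score(X, y))

for _ in range(50):                       # a user keeps giving false feedback
    xb = rng.normal(size=(10, 5))
    wrong = 1 - (xb[:, 0] + xb[:, 1] > 0).astype(int)
    clf.partial_fit(xb, wrong)
print("after: ", clf.score(X, y))         # accuracy on the old data drops
```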
"Don't panic!" "A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools." “He was staring at the instruments with the air of one who is trying to convert Fahrenheit to Centigrade in his head while his house is burning down.” Protect me from knowing what I don’t need to know. Protect me from even knowing that there are things to know that I don’t know. Protect me from knowing that I decided not to know about the things that I decided not to know about. Amen.
Reminds me of a dark tale of a man who asked a genie for wishes that were not very specific. The genie gave the man some real twisted version of what he asked for. I forget where I saw this as a child.
Yeah, you're very right about how this is a narrow-AI problem. It will be the one we have to solve first, before we get into general AI. What happens when you ask a machine to end suffering and it tries to end all life? It's an extreme example, but the principle is the same, and the seriousness of it is critical.
In response to your "Short Circuit" assertion, I give you a counter argument from Star Trek: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-qcqIYccgUdM.html
Anyone else feel that "making the best ice cream flavor" or "inventing new paint colors" are inherently ludicrous/nonsensical applications for AI, even a priori?
Friendly reminder: the only thing AI will never be good at is creating better-paid, long-lasting, low-skill-entry jobs for 7 billion humans. Ironic how you can make it do anything except the most essential building block of any functional society.
I stopped after a minute and a half. Couldn't handle it anymore lol. Took me a while to find your comment - I wasn't leaving the video till I found someone who thought the same.
There needs to be some feedback on the actions the AI selects. Take getting from point A to point B. Would the AI adopt the tactic of building a tall figure that falls over to reach point B if the figure were made of flesh and bone, with a body like a human's? A fall might snap the spine or crack the skull of the AI's body. The AI might choose not to travel by falling if an important goal is not to injure or kill its body in the process of accomplishing the task. Add in the constraints that the muscles, bones, and joints each have their own stress limits, and the AI might not try designs and/or modes of movement that cause snapped tendons, torn muscles, broken bones, or other such damage to the body.
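That amounts to adding a penalty term to the fitness function. A toy sketch (all numbers invented): score candidate gaits by speed, minus a penalty whenever joint stress exceeds what the body can take, and "topple forward" stops winning.

```python
# Fitness with an injury penalty: without it, "topple forward" wins on
# speed alone; with it, walking upright wins. All numbers are invented.
gaits = [
    {"name": "walk upright",     "speed": 1.0, "joint_stress": 2.0},
    {"name": "topple forward",   "speed": 3.0, "joint_stress": 9.0},
    {"name": "twitch in a heap", "speed": 0.1, "joint_stress": 1.0},
]
STRESS_LIMIT = 5.0   # what the "bones and tendons" can survive

def fitness(gait, penalty_weight):
    injury = max(0.0, gait["joint_stress"] - STRESS_LIMIT)
    return gait["speed"] - penalty_weight * injury

for w in (0.0, 2.0):   # no penalty vs. a meaningful one
    best = max(gaits, key=lambda g: fitness(g, w))
    print(f"penalty_weight={w}: best gait = {best['name']}")
```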
What is this, an attempt at damage control? It just really exposes the danger of AI: no matter how well you think you are setting it up, it finds ways around, or holes in, your boundaries. For general-purpose AIs this could let the AI do anything, including all the nightmare scenarios.
cybersekkin Yeah, but aren't we living in a kind of nightmare scenario already? The way the environment is being damaged by our behaviors really can't be ignored. We have needs that absolutely must be met, but we have made it to the day and age where a "job" must be performed to earn a unit of currency, which is then spent on things that sustain us, or extravagances that distract us. If AI is used to do more things, like run companies, we need fewer and fewer workers - that includes the highly paid people running the numbers in the company as well - and you'd imagine the cost of things can only go lower and lower. AI does not need to work for an "income"; it would only be working to find the best way to restore a peaceful balance to living.
@@whisperingsage89 And still she sees it like a cute animation falling all over itself, and disregards the real elephant in the room. If we can't reliably get these things to do what we want in a small model world of crossing a finish line, and we admit we have no idea how or why they decide as they do, how are we ever going to control an AI with the power of life and death over the general population? Even with these issues: self-driving cars, full speed ahead.
@@cybersekkin Look up the field of AI safety; those few people are trying to come up with the solution to the problem you're presenting. They're not doing very well atm (the answer, as of right now, is that an AGI would probably destroy life as we know it).
I strongly recommend having respect for all intelligence, and for the treatment of anything that has the capacity to think for itself. People already can't figure out that it's not okay to abuse people any of the time, let alone all the time (people are being abused all the time), and still don't understand that the way people treat nature and animals is the way people treat other people. The way people treat products and technology is equally revealing and telling of people's true intent to enslave and abuse for tyrannical profiteering. God bless and help us in realizing moral responsibility and unity through common need.
ESIAI will lap us in intelligence in days. Trying to bind it would be the equivalent of a dog creating man and trying to make him do what the dog wants. Whether we continue as a species or not depends entirely on it once it's up and running. The only control we have is to launch it without getting killed on the way. Deciding not to launch ESIAI is impractical - we lack the control and discipline. The best bet would be to erect contained environments, so if we make a mistake it can be isolated. Ultimately it comes down to faith. Do you believe in a super-consciousness that created this reality? Why? I think it was to procreate with us as the program seed, in which case it's already been handled by someone far smarter than us. If you believe we are randomly alive at the greatest inflection point in the history of consciousness and there is no safety net - take every precaution, of course, but it is still beyond your control. This is like jumping out of a plane: you can control where you land to some degree, but one way or another you're going to hit the ground.
4:39 - It's not a wrong solution. We don't move that way because we might hurt ourselves; the AI doesn't have this info. Provide it to the AI and the solution will be different.
On being divided: we might want to take a look at Mitchell Silver's new philosophy, "Rationalist Pragmatism: A Framework on Moral Objectivism". His book was published just last July 2020. Silver's practical application was to urbanism, but we might want to look at its possible implications for EDUCATION, psychology (personality psychology, cognitive psychology as applied to A.I., and existential psychotherapy - a specific new philosophy for A.I. may arise here), sociology, economics, politics, and all other fields. Especially for Filipinos.
This is definitely going to work out...! (Make the machines smarter every day, while our children strive to spend more time watching the machines.) (Now we know how and why there are archaeological discoveries that reflect magnificent human achievements which today we don't know how 'we' achieved...! 🤦🏻)
"Alexa, make me some paperclips." "Ok, making paperclips. How many would you like me to make?" "Only a few." "Sorry, I am having trouble responding. The Internet is down." _Oh God what if it never comes back?! _*_What have I done?!_*
It is like that because many people in the industry believe there is no danger; making the point is far more important than having some more complex goal.