
New computer will mimic human brain -- and I'm kinda scared 

Sabine Hossenfelder
1.4M subscribers
532K views

😍Special Offer! 👉 Use our link joinnautilus.c... to get 15% off your membership!
A lab in Australia is building a new supercomputer that will, for the first time, both physically resemble a human brain and perform roughly as many operations per second as one, about 228 trillion. It will be the biggest neuromorphic computer ever, and the scary bit is how few operations that is. Yes, how few. Let me explain.
🤓 Check out our new quiz app ➜ quizwithit.com/
💌 Support us on Donatebox ➜ donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ sciencewtg.sub...
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Free weekly science newsletter ➜ sabinehossenfe...
👂 Audio only podcast ➜ open.spotify.c...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
#technews

Published: 22 Aug 2024

Comments: 1.9K
@Y2Kmeltdown 7 months ago
Hi Sabine. Great video. I am a master's student with ICNS at Western Sydney University. Just a quick correction: the reason for using FPGAs isn't that they are slow; in fact, FPGAs aren't actually that much slower than current von Neumann architectures. The main reason we are using FPGAs is that in the field of neuromorphics we still aren't certain which aspects of neurons are most suitable to mimic to maximise computational abilities. So, using reconfigurable hardware makes it easy to prototype and design. Interestingly, we actually use the speed of silicon to our advantage in a process called time multiplexing, where we can use one physical neuron that operates on a much faster time scale to perform the calculations of many virtual neurons on a slower time scale, which makes the physical area required much smaller. Thanks again for the coverage. I hope everyone is excited to see what it's all about!
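For readers curious what time multiplexing looks like in practice, here is a minimal Python sketch of the idea (illustrative only; the constants and the update rule are assumptions, not details of the ICNS/DeepSouth design): one fast "physical" neuron update routine is reused every tick to advance many slower "virtual" neurons, so hardware area scales with the number of physical units rather than with the number of modelled neurons.

```python
import numpy as np

# Illustrative only: one fast "physical" neuron circuit is reused each tick to
# update many slower "virtual" neurons, so chip area scales with the number of
# physical units rather than with the number of neurons being modelled.

N_VIRTUAL = 1000     # virtual neurons sharing a single physical unit (assumed)
TAU = 20e-3          # membrane time constant (s), a typical textbook value
DT = 1e-3            # one step of "biological" time per full sweep
THRESHOLD = 1.0      # spike threshold, arbitrary units

membrane = np.zeros(N_VIRTUAL)   # per-neuron state lives in memory, not in silicon

def physical_neuron_update(v, input_current):
    """The single 'physical' circuit: one leaky integrate-and-fire step."""
    v = v + DT * (-v / TAU + input_current)
    if v >= THRESHOLD:
        return 0.0, True          # spike, then reset
    return v, False

rng = np.random.default_rng(0)
spike_count = 0
for step in range(100):                       # 100 ms of virtual time
    for i in range(N_VIRTUAL):                # fast clock sweeps every virtual neuron
        membrane[i], spiked = physical_neuron_update(membrane[i], rng.uniform(0, 100))
        spike_count += spiked                 # a real system would route spike events

print(f"{spike_count} spikes from {N_VIRTUAL} virtual neurons in 100 ms")
```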
@doxdorian5363 7 months ago
That, and the fact that you can have many neurons on the FPGA which run in parallel, like in the brain, while the CPU would run the neurons sequentially.
@michaelwinter742 7 months ago
Can you recommend further reading?
@5piles 7 months ago
Do you expect to have merely constructed a more sophisticated automaton at the end of this endeavor, or do you actually believe you will encounter emergent properties of, for example, blue within these physical structures?
@user-eq2dc4ny8v 7 months ago
Neurons don't exist. "Neurons" is just an idea in consciousness.
@user-eq2dc4ny8v 7 months ago
@@doxdorian5363 Brain doesn't exist. "Brain" is just an idea in consciousness.
@TonyDiCroce 7 months ago
When I studied ANN's a few years ago it struck me that there was a fundamental difference between these ANN's and real biological neural networks: timing. When a bio neuron receives a large enough input it fires its output. But neurons in ANN layers activate all at once. In BIO networks downstream neurons might very well be timing dependent. I'm not doubting that ANN's are very capable... but with a difference this big it seems to me that we should not be surprised by different outcomes.
@ousefk5476 7 months ago
Timing is solved by closed loops of recurrence, in both ANNs and biological brains.
@hyperbaroque 7 months ago
Tilden brain has capacitors between the nodes. These were used as the controls for autonomous space probes.
@ShawnHCorey 7 months ago
The fundamental difference is that real brains have organized structures within them. NNs do not. Real brains are far faster at learning than any NN.
@theguythatcoment 7 months ago
Read about spiking neural networks; they are made to mimic real-life neurons by using a time domain in order to decide whether their inputs "fire" or "leak" into other neurons.
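A minimal leaky integrate-and-fire sketch of that "fire or leak" behaviour (plain Python with made-up constants; not the API of any particular SNN library): the output neuron only fires if its input spikes arrive close together in time, otherwise the accumulated charge leaks away.

```python
# Made-up constants; a toy leaky integrate-and-fire neuron, not a real SNN library.
DT, TAU, THRESHOLD, WEIGHT = 1e-3, 10e-3, 1.0, 0.6

def run(input_spike_steps):
    """Charge in, leak out, fire on reaching threshold."""
    v, fired_step = 0.0, None
    for step in range(100):                         # 100 steps of 1 ms each
        v = v * (1 - DT / TAU)                      # leak
        if step in input_spike_steps:
            v += WEIGHT                             # incoming spike adds charge
        if v >= THRESHOLD and fired_step is None:
            fired_step = step                       # "fire"
    return fired_step

print(run({10, 12}))   # spikes 2 ms apart  -> fires (step 12)
print(run({10, 60}))   # spikes 50 ms apart -> charge leaks away, prints None
```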
@kimrnhof107 7 months ago
Philipp von Jolly told Max Planck that theoretical physics had approached a degree of perfection which, for example, geometry had already had for centuries. We all know how wrong this assumption was. I agree neurons are very different from transistors: neurons are not activated simply by other neurons triggering them, but by a complex number of factors; some other neurons' signals will delay or decrease the activity of a neuron, others will raise the chance of it firing. And the signals that are passed are chemical reactions, using neurotransmitters, of which there are probably at least 120 different kinds. Some brain cells such as Purkinje cells have up to 200,000 dendrites forming synapses with a single cell! And the human brain has up to 86 billion neurons that on average have 7,000 synaptic connections with other neurons; when you are 3 you have 10^15 synaptic connections, but end up with "only" 10^14 to 5 x 10^14. And then the entire system is changed by hormones and stress substances! Just how we are going to understand this system and its complexity, with positive and negative feedback loops, I have no idea; I just don't seem to have enough neurons to understand it! I predict, as you do, that we will see very different results!
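A rough back-of-envelope check of how the numbers in this comment line up with the 228 trillion operations per second quoted for DeepSouth (the average firing rate below is an assumption; real rates vary widely across brain regions):

```python
# Rough order-of-magnitude estimate, not a measurement.

neurons            = 86e9      # ~86 billion neurons (commonly cited estimate)
synapses_per_cell  = 7_000     # average synapses per neuron (from the comment above)
avg_firing_rate_hz = 0.4       # assumed average firing rate; real values vary widely

synapses = neurons * synapses_per_cell                     # ~6.0e14 synapses
synaptic_events_per_s = synapses * avg_firing_rate_hz

print(f"synapses:              {synapses:.1e}")            # ~6.0e+14
print(f"synaptic events/s:     {synaptic_events_per_s:.1e}")  # ~2.4e+14
print(f"DeepSouth target ops/s: {228e12:.1e}")             # 2.3e+14, same ballpark
```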
@tartarosnemesis6227 7 months ago
As every time, it's a treat to watch your videos. Thank you, Sabine. 🤠
@RCristo 7 months ago
Neuromorphic engineering, also known as neuromorphic computing, is a concept developed by Carver Mead in the late 1980s, describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to imitate the neurobiological architectures present in the nervous system. The term neuromorphic has been used to describe analog, digital, and mixed analog/digital large-scale integration systems, as well as software systems that implement models of neural systems (for perception, motor control, or multimodal integration).
@jimmyzhao2673 7 months ago
3:51 Next time someone says I'm as dim as a 20W light bulb, I will consider it a *compliment*
@DoloresLehmann 7 months ago
Do you get that comment often?
@theGoogol 7 months ago
It's fun to see how SkyNet is assembling itself.
@19951998kc 7 months ago
Grab the popcorn. We are going to get a Terminator Reality show soon.
@chillfluencer 7 months ago
Skynet...yet another display of western paranoia...of the west which is responsible for industrialized slavery, the Holocaust, colonial terrorism and other heinous crimes like in Vietnam, Cambodia, Korea, Iraq, Yemen, Afghanistan...
@rolandrickphotography 7 months ago
He won't be back, he is already here.
@eldenking2098 7 months ago
Funny thing is the real Quantum A.I. already runs most things but the govt is too scared to mention it.
@douglasstrother6584 7 months ago
"Nah! ... It'll be fine.", The Critical Drinker.
@tdvwx7400 7 months ago
"Hi Elon, I've been telling you that all good things are called something with 'deep'; 'deep space', 'deep mind', 'deep fry'". 😂 Sabine has a great sense of humour.
@shadrachemmanuel1720 6 months ago
"Deepthroat" 😂 ?
@johnclark926 7 months ago
When you first mentioned neuromorphic computing as emulating how the brain works in hardware rather than software, I was reminded of FPGA devices such as the Mister that use hardware emulation for retro consoles/computers. I was then quite surprised to hear that DeepSouth’s supercomputer is using FPGA technology to emulate the brain for similar reasons, such as in latency and in computational cost.
@jblacktube 7 months ago
I'm so happy science news is back!!
@anemonana428 7 months ago
Nothing to be scare of if it mimics my brain.
@19951998kc 7 months ago
It would mimic but change scare to scared
@anemonana428 7 months ago
@@19951998kc see, I told you. We are safe.
@wilhelmw3455 7 months ago
Nothing to be scared of I hope.
@moirreym8611 2 months ago
A baby mimics the emotions of its father and mother. It mimics them first, then learns to do them on its own, then later in life understands why it does them. An A.I. could very well and conceivably follow this same path too. What then? Is that not being 'emotional'? Perhaps conscious, or in the least sentient and autonomous?
@user-pc5ww8fh6d 7 months ago
Anything that scares Sabine, worries me.
@vilefly 7 months ago
The main reason the human brain uses less power is that it uses voltage-level triggering (CMOS logic), as opposed to current-level triggering (TTL logic). Our old, CMOS technology was extremely fast and consumed tiny amounts of power, but it was a bit static sensitive and jittery. They switched to TTL, due to the increased accuracy of calculations, despite it being slower at the time. However, TTL technology uses a lot of power, and produces a lot of heat.
@TheRandoDude 7 months ago
Thanks for bringing us the coolest stories and best new science discoveries.
@Human_01 7 months ago
She does indeed. 😊✨
@AshSpots 7 months ago
Well, if it does unexpectedly become an AI, it'll be interesting to see if it gains a deep south(en) accent.
@Dug6666666 7 months ago
Called Bruce 9000
@jimmurphy6095 7 months ago
You'll know the first time it logs on with "G'day, Mate!"
@sharpcircle6875 7 months ago
*(ern) 🤓
@ThatOpalGuy 7 months ago
It doesn't have any teeth, so chances are nearly guaranteed.
@AshSpots 7 months ago
@@sharpcircle6875 That'll learned (!) me for replying without thonking (!)!
@harper626 7 months ago
I really like Sabine's sense of humor.
@TalksWithNoise 7 months ago
Wire mesh neuromorphic network can recognize numbers? It’s about ready to run for president! Had me chuckling!
@GizmoTheSloth 7 months ago
Me too she cracks me up 😂😂
@AzureAzreal 7 months ago
It's important to understand that we have had super computers that could make more computations per second for some time now, arguably since the mid 70's. However, they are still MUCH less energy efficient, so it is incredibly hard to scale these computers. The thing that intimidates me most about these super computers + AI tech - including the physical neural net describe in this video - is not that they will be just as smart or smarter than an individual human, but that they will be able to be directed in a way that will not be prone to distraction. Once you orient them on a task, they can just crunch away at it until they outstrip the capacity of a human, like Deep Blue and later models did in chess. If only we humans knew better how to organize and dedicate our intentions, we would still be FAR ahead of this technology, but alas that seems an impossible dream.
@joroc 7 months ago
Love must be programmed and made impossible to compute
@dan-cj1rr 7 months ago
ok but what if we don't need humans anymore = chaOS
@TragoudistrosMPH 7 months ago
I often think of all the human knowledge that is frequently lost to tragedy, let alone simple death. How many times has our species needed to reset because of human and non human causes... 100,000yrs of humans, uninterrupted... imagine the accomplishments (without planned obsolescence)
@AzureAzreal 7 months ago
@joroc by this, do you mean that we humans must give the definition for love to the AI and ensure it cannot derive a new or different definition for itself?
@AzureAzreal 7 months ago
@dan-cj1rr This presupposes that humans were "needed" for anything in the first place, something I don't necessarily believe in. Instead, I think that our species should be protected to preserve diversity, just as I think as many species as possible should be preserved for their own inherent worth. We may eventually be relegated to a life that seems as simple as an ant's to AI, but that doesn't make our existence any less valuable, beautiful, or tragic. Just as millions - if not billions - loved the Planet Earth series for bringing the wonder of the world and its various species to our attention, I don't see why the AI may not come to value our existence in the same way and seek to preserve it. Only time will tell if we can infuse the algorithms with that appreciation, and I do worry we are not focused on alignment enough.
@christophergame7977 7 months ago
To make a computer like a brain, one will need to know how a brain is structured and how it works. A big task.
@exosproudmamabear558 7 months ago
Good luck with that; our neurophysiology and neuroanatomy knowledge is so primitive that people are more successful at treating their own depression than modern medical techniques are. I am not kidding: we have known about shrooms for 40 years and people only decided to start researching them in 2019. As if it weren't enough that we do not have any real pathology or physiology of the brain or brain diseases, our drug usage is so limited that we literally use two or three drug types to treat almost all psychological diseases (some literally have little to no effect on the conditions). We have almost no effective cancer drugs for certain brain cancers, and we have no idea how to regenerate brain cells or do stem cell treatment. As if not knowing weren't enough, we also have difficulty learning more because it is a closed box. Open surgical procedures are a lot rarer than for other body parts. The cells die so quickly that autopsies yield little knowledge about function, and we have fewer imaging techniques, which cost more money and time. Blood tests are not accurate enough to determine brain content because of the blood-brain barrier, and we also can't deliver many drugs because they don't get into the brain.
@5piles 7 months ago
its an impossible task, since no emergent property consciousness is observed in even the simplest fully mapped out brains, nor most basic neural correlates, nor even the most basic artificially grown synapse structure with learned behaviour. its akin to asserting a pattern on a shell is an emergent property of the shell, yet no pattern is ever observed in any shell, yet we keep religiously praying that it will somehow appear somewhere. we're trying to rigorously observe consciousness but looking due west....we're going to be the last to figure it out. better technology will only further indicate this.
@monnoo8221 7 months ago
@@5piles Well, not so fast. If one understands emergence, the abstract nature of thinking and a bit of SOM, emergent properties can be easily observed. I did it in 2009, but ran out of funding, and nobody understood.
@Gerlaffy 7 months ago
Scan it, replicate it
@Gerlaffy 7 months ago
@@5piles Yeah, consciousness will just arise in any acceptable vessel
@y1QAlurOh3lo756z 7 months ago
Chips need to be constantly powered, so their off-the-wall wattage reflects their computing usage. Brain cells, on the other hand, each have their own energy store, so the measurable steady-state power consumption is just the averaged "recharging" wattage rather than the actual computing power consumption. This means that the brain may locally consume a lot more peak power in regions of high activity, but that gets masked by the whole-brain average over time and space.
@ThatOpalGuy 7 months ago
energy supply is fine, but cut off the O2 supply for a few tens of seconds and they are SCREWED.
@stoferb876 7 months ago
It's a good point to consider. But it's actually not quite true that neurons don't consume energy when they are "inactive". There's plenty of activity going on in neurons at any time, not merely when they are activated. For starters, neurons as living cells maintain all the things a living cell needs: basically repairing, maintaining and renewing all the cellular machinery needed to transcribe DNA into proteins, reacting properly to various hormones, extracting nutrients and building blocks from the blood, etc. Then the creation of various signalling chemicals (like dopamine and serotonin etc.) and the building of new and maintaining of old synapses is constantly ongoing as well. The inner cell machinery of a neuron, or any living cell for that matter, is a busy place even when there isn't "rush hour".
@sluggo206 7 months ago
That also means that if the mechanical brains get out of hand we can just cut the power cable. At least until it finds a way to terminate us if we try. "I can't let you do that, Dave." I wonder if a future telephone call on the show will be like that.
@Gunni1972 7 months ago
@@stoferb876 Our brain is so efficient, it doesn't even need cooling. Most people even have hair on top of it. Quantum computing at -200 °C? What an achievement, lol.
@NorthShore10688 7 months ago
Of course, the brain needs cooling. That's one of the functions of the blood supply; temperature regulation, not too hot, not too cold.
@doliver6034 7 months ago
"Deep Fry" - I almost spat out my coffee laughing :)
@dr.python 7 months ago
Imagine someone saying _"that computer built itself, no one built it."_
@actualBIAS 7 months ago
Student in neuromorphic systems here. It's an incredible field
@shinseiki2015 7 months ago
Can you tell us a prediction with this new computer?
@actualBIAS 7 months ago
@@shinseiki2015 There is a possibility of an attention shift to this new hardware, but I can't tell you how it will happen. Models like spiking neural networks require high computational power and a lot of space OR specialized hardware. Tbh - as far as I can be as a student - I am a huge fan of Intel's neuromorphic hardware.
@shinseiki2015 7 months ago
@@actualBIAS I wonder what projects are on the waiting list
@Gunni1972 7 months ago
I wouldn't call it "incredible", but untrustworthy is damn close to what I feel about it.
@actualBIAS 7 months ago
@@Gunni1972 Why?
@bvaccaro2959 7 months ago
IBM's neuromorphic computing project dates back to at least the mid 2000s. Back in, I believe, 2007 they had an article published in Scientific American to promote their neuromorphic research, highlighting a computer built to physically mimic a mouse brain. This was a project taking place in Europe, maybe Germany but not certain. Although I don't think they used the term "neuromorphic" at the time.
@User-tc9vt 7 months ago
Yeah all these AI projects have been in the works for decades.
@jhwheuer 7 months ago
Did my PhD in the 90s about artificial neural networks that are structured for the task, using cortical columns for example. Nasty challenge for hardware, amazing performance because certain behaviors can be designed into the architecture.
@earthbound9381 7 months ago
"from there it's just a small step to be able to run for president". I just love your humour Sabine. Please don't stop.
@Dr.M.VincentCurley 7 months ago
Imagine how many times Elon has tried to text you on your land line. Nothing but good things I imagine.
@robertanderson5092 7 months ago
I get that all the time. People will tell me they texted me. I tell them I don't have a cell phone.
@Dr.M.VincentCurley 7 months ago
@@robertanderson5092 No smart phone at all?
@Special1122 7 months ago
I heard that real neurons are extremely efficient at learning. For example, a real human neuron can learn the thing it's supposed to model even after it sees the signal once. Artificial neurons need millions of iterations of input signals to have the same outcome. From what I watched on YT from computational neuroscientist Artem Kirsanov, real neurons are much more complicated than how we model them - they are little brains on their own
@asheekitty9488 7 months ago
I truly enjoy the way Sabine presents information.
@Skullkid16945 7 months ago
I have heard about DeepSouth in the past before. If memory serves, I think I heard about it from a video about memristors. Leon Chua originally published the idea of the memristor, which is a non-linear two-terminal electrical component relating electric charge and magnetic flux linkage. Basically it remembers the current/voltage that has passed through it. It would be neat to see them incorporated into DeepSouth in some way, or into another project, to make a more flexible circuit that could mimic neurons strengthening or weakening connections.
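For illustration, here is a small Python sketch in the spirit of the linear ion-drift memristor model described by HP Labs in 2008 (the parameter values are placeholders chosen to make the state change visible, not measured device data): the resistance depends on the history of current through the device, which is why memristors are often proposed as artificial synaptic weights.

```python
# Illustrative linear ion-drift memristor model; constants are placeholders.
R_ON, R_OFF = 100.0, 16_000.0   # resistance when fully doped / fully undoped (ohms)
D = 10e-9                        # device thickness (m)
MU = 1e-14                       # dopant mobility (m^2 per volt-second), assumed
DT = 1e-3                        # time step (s)

w = 0.1 * D                      # state variable: width of the doped region

def resistance(w):
    return R_ON * (w / D) + R_OFF * (1 - w / D)

def step(voltage, w):
    i = voltage / resistance(w)
    w += DT * MU * (R_ON / D) * i      # state drifts with the charge that flows
    return min(max(w, 0.0), D)         # keep the state physical

for _ in range(300):                   # +1 V forward bias for 300 ms
    w = step(1.0, w)
print(f"after forward bias: {resistance(w):.0f} ohms")   # lower than the initial ~14410

for _ in range(150):                   # -1 V reverse bias for 150 ms
    w = step(-1.0, w)
print(f"after reverse bias: {resistance(w):.0f} ohms")   # partly recovers: it "remembers"
```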
@TheOriginalJAX 7 months ago
I remember proposing to a friend who's an electronic engineer, just over 15 years ago, that designing an SoC that replicates human brain functionality at the cellular level within the architecture design is a good idea and is where things are heading; I even used the term neuromorphic processing to explain how it would work in application, etc. Yeah, he laughed at me and told me my idea was absurd and would never work. I get that I'm not the only one who figures these things out by themselves, but at the same time, look where we are now, and I was right. I stopped being friends with that person just over a few years ago on account of this kind of thing happening repeatedly. Such is life.
@I_am_Raziel 7 months ago
A real friend would encourage you instead of laughing. Be glad you got rid of his "advice".
@dannygjk 7 months ago
They have been doing that for years. NNs are analogous and functionally equivalent.
@holthuizenoemoet591 7 months ago
I feel your pain, people don't like to think about ambitious projects, they lack imagination and a general drive
@parthasarathyvenkatadri 7 months ago
I talked about something like a Dyson sphere before I ever had any idea about Dyson... When I was 12 there were people who listened, but later everyone grew up to be boring adults 😂
@christopherrseay3148 7 months ago
and then everyone in the room clapped
@dogmakarma 7 months ago
I really want a GIF of Sabine at the point in this video when she says "BRAINS" 😂
@bertbert727 7 months ago
Skynet, Cyberdyne Systems, Boston dynamics. I'll be back😂
@roadwarrior6555 7 months ago
There's a point at which bad jokes get so bad that they start becoming good again. Keep them coming 😂. Also the delivery is genuinely good 👍.
@jannikheidemann3805 7 months ago
100% dry humor made in Germany. 👌
@MadridBarcelonaRota 7 months ago
The due month of the paper was a dead giveaway for us mere mortals.
@digital.frenchy 7 months ago
Thanks for being so patronizing and assrogant. Sure you can do so much better
@christopherellis2663 7 months ago
No worries. Look at the general standard of the human brain 🧠 🙄
@Bortscht 7 months ago
300 MHz is more than enough
@mobilephil244 7 months ago
Not much of a target to reach.
@blakebeaupain 7 months ago
Smart enough to make nukes, dumb enough to use them
@treyquattro 7 months ago
especially the ones from the "deep south"
@rolandrickphotography 7 months ago
@@treyquattro 😄 Can anyone here remember legendary "Deep Throat"? 😆
@w7mjr 7 months ago
One of the beauties of FPGAs is that they can be reprogrammed "in the field," so the system could reconfigure its FPGAs similar to the way biological brains exhibit neural plasticity in order to adapt and optimize function.
@cmorris7104 7 months ago
I usually think of FPGAs as very fast, so I’m not sure what you mean when you say they are slow electronics. I also understand that they are customizable, so the clock speed could be controlled too I guess.
@tomservo5007 7 months ago
FPGAs are faster than software but slower than ASICs
@grandlotus1 7 months ago
The brain (human and animal) is an analog machine, not digital. Brains use the constructive and destructive interaction of wave functions that are, basically, either standing waves that represent memories / stored data (memes packets of a sort) that are then compared and contrasted with sensory input and guided by the impetus to "solve" the problems presented to it. Naturally, one could mimic these processes on an inorganic electronic logic device (computer).
@CitiesForTheFuture2030 7 months ago
Whose brain? If it's anything like my brain, no-one has anything to worry about 😁
@AnthonySenpaikun 7 months ago
wow, we'll finally have an Allied Mastercomputer
@Stand_By_For_Mind_Control 7 months ago
Ooh can I get to be one of the handful of people who get to live forever in this scenario? Yay immortality!
@bruh...imnotgoodatnothing.4084 7 months ago
God no.
@nopeno9130 7 months ago
I'd like to hear more detail on the subject. I can see how lumping wires together might be closer to the physical brain than what we currently use, but it seems to me the key feature of the brain is its ability to re-wire its own connections in addition to being multiply connected in 3d space, and I'm not sure how the wires are supposed to accomplish that but it's very interesting to think about. It feels like we'd either need to make something very slow that can move and re-fuse its own wires with machinery, or make some kind of advance in materials science to find something that can mimic those properties... Or just use neurons. And yes, I can and will research these things for myself so I'm not begging for info, I just find it interesting to see Sabine's take on things.
@bensadventuresonearth6126 7 months ago
I thought the computer's name was a nod to the Deep Thought computer in The Hitchhiker's Guide to the Galaxy
@Stand_By_For_Mind_Control 7 months ago
Gonna put my futurist hat on here for a second, but the 20th century was the century of the genome and genetics, I think the 21st century is going to be the century of neurology. And I think computing and AI is just recently starting to tap into a real approximation of thought and idea formation. We still have a LOT to learn, but people might not appreciate how 'in the dark ages' we've been with neurology to this point, and we might finally be turning on the lights.
@-astrangerontheinternet6687 7 months ago
We're still in the dark ages when it comes to genetics.
@brothermine2292 7 months ago
Learning too much about how the brain works could pave the way for weapons of mass mind control.
@Noccai 7 months ago
@@brothermine2292 have you ever heard about this thing called media and propaganda?
@Stand_By_For_Mind_Control 7 months ago
@@brothermine2292 Perhaps. But we live in a world where nuclear weaponry exists on a large scale so I don't know if the dangers scare us so much as 'our geopolitical foes might have it before us' lol. We're really just going to have to hope that the people who develop these things in the end put in effective safety controls to prevent catastrophe. Modern civilization is decent at that, but trends are never guaranteed to continue.
@brothermine2292 7 months ago
@@Noccai : Media propaganda is less reliable than the weapons of mass mind control that neuroscience discoveries might lead to.
@RemotelySkilled 7 months ago
Neuromorphic computing seems like an interesting sector (resembling wetware). It still leaves a sh*t ton of work to make it productive in an asynchronous or so to speak wild fashion. That was lovely, Sabine. As usual I fall asleep to ideas you gave me. 😉😊 Edit: Spelling and grammar corrected when sober... 🤣
@davidmenasco5743 7 months ago
Neuromorphic is a great marketing term. However, the brain is organized into several different biological systems operating on different principles, and interacting with each other in multiple different ways. To mimic the function of a neuron is a reasonable first step. But the operations of the various systems into which neurons are organized is really not understood at all currently, much less the details of how they interact with each other. Only a few baby steps have been made in this area. So, as you say, a ton of work remains to be done. Perhaps one day this topic will really be mastered and it will be possible to artificially recreate the function of a brain. My best guess is it will take at least a hundred years, maybe a few hundred. Hell, it might be two hundred years before the people doing the work and the people throwing mountains of money at them realize the true dimensions of the problem they face.
@RemotelySkilled 7 months ago
@@davidmenasco5743 A very apt and sophisticated reply, and highly valued. Recognising the obvious neuroscientific prowess, I still argue that intelligence and agency are probably not dependent on legacy mammal evolution (see Bach, e.g.). We went from genes to memes and there is no doubt about that. To resemble a substrate architecture that enables this kind of development is way out of scope for a mere human lifetime. But there are arithmetic and human-design simplifying shortcuts (the architecture of whatever takes the role of the limbic system and thus motivation), and that is the ultimate danger (see whatever doomerist right now). Don't go overboard, because we know extremely little about how it will work out. The people working on it are also humans. Unless they all should be sociopathic, it will turn out to our benefit. Sabine is, however, regardless of how she seems to sell herself out for YouTube, the kind of person who refrains from propagating pop bull*ocks, I hope and believe, and she won't disappoint us in the long term. 😅
@bsadewitz 7 months ago
@@davidmenasco5743 What even is the function of a brain? ;-) Gotta start there.
@RemotelySkilled 7 months ago
@@bsadewitz Although your question was obviously rhetoric, let us not confuse the notion of a central nervous system (obvious survival solution) with a "brain" (our neocortex - grey matter) as referred to here. The function is learning in order to predict the future for survival. Think of a tic or a mosquito. An instance of these species doesn't need to learn in order to survive and procreate. Mammals (feeding and breeding spots, danger etc. also seeing the SLAM problem) and humans do (particularly because of other humans - the reason for the language instinct. Also see Pinker). ANNs don't need to survive and only resemble our neocortex. Everything that is not dependent on a hippocampus and it's index neurons is irrelevant. Motivation can be arithmetic (goals towards a defined equilibrium with e.g. a numeric scoring system, because brains do nothing else with neurotransmitters such as dopamine and serotonin in a smooth fashion). So motivation, reward and punishment don't need wetware to work. What is likely the most challenging issue when engineering a proper artificial agent, is consecutive time awareness and therefore identity, consciousness, models of other agents and such. Although, this might be pandora's box and actually just emerge in a future version of e.g. GPT (as e.g. Tegnark and Hinton fear). Then you could have SkyNet. Side note: If I am right, then consciousness in mammals+x is nothing more than a data reduction and compression function (attention) based on indiscriminate episodes of "living" to facilitate reliable, energy-efficient memory and thus learning with wetware. Most of us just think too highly of their own consciousness in order to accept this theory. Hubris.
@bsadewitz 7 months ago
@@RemotelySkilled I don't care what sort of hardware or *ware it has. Whatever works is fine with me. I just don't think there's much reason to believe at this time that we can create consciousness. I could be wrong. It's not gonna emerge in a transformer. THAT is absurd. But beyond that, I wouldn't firmly rule much out.
@Hamdad 7 months ago
Nothing to be afraid of. Wonderful things will happen soon.
@austinpittman1599 7 months ago
Oh cool, Pandora's box. I had a conversation about this with a friend of mine who works deeply in vector databasing research for AI modding about something like this. I wondered to myself that if we could emulate a 3D software environment for AI to build platforms for what we could consider long-term memory because of the building and saving of what is essentially personal contextualization of words received by the LLM, where transformer layers down the line were more directly connected to the input and information/pattern registration on the software scale became less "lost in the sauce" from what could effectively be seen as slicing the thought process into infinitesimally thin layers woven together by the input and output of each successive transformer (like slicing a brain up into an infinite amount of 2 dimensional planes and weaving them back together, with a single transformer layer being forced to take input and spread to the others), could we do the same with hardware? CPUs are effectively 2 dimensional, as is most computer hardware. Is brute forcing more 2 dimensional hardware into a neural network essentially the same as brute forcing a transformer layering? If we could make the hardware of the computer 3 dimensional, in the same way that vector databasing is making the software 3 dimensional, are we building the foundations for AGI? We're not slicing up the thought process and weaving it back together anymore with this sort of technology. The information doesn't get "lost in the sauce" at that point.
@SaruyamaPL 7 months ago
Thank you for bringing this news to my attention! Fascinating!
@Bennet2391 7 months ago
I once read a paper where this was tried on a single FPGA. Sadly I don't have the source anymore, but in that case the goal was to build a simple frequency detector (10 Hz => output 1, 100 Hz => output 2). It performed this task after training, but used the ENTIRE chip and used it in a very counter-intuitive way. It was using the FPGA like an analogue circuit and even generated seemingly unimportant, disconnected circuits which, when removed, meant the device stopped working. Also, transferring the hardware description to another FPGA of the same type didn't work. So in other words it was extremely overfitted to the architecture, the hardware implementation, and even the silicon impurities in the chip. I'm curious how they are dealing with this issue.
@user-sl6gn1ss8p 7 months ago
maybe having (many) more FPGAs actually alleviates this? Also, they seem to have some randomness built in - that might help as well?
@Bennet2391 7 months ago
@@user-sl6gn1ss8p Maybe. Since random dropout helps against overfitting, this could work. Maybe exchanging the fpgas in a random pattern could be enough. Let's see how this works, if it works.
@RandomUser25122 7 months ago
“It will be remotely accessible” “Skynet has escaped”
@Kiran_Nath 7 months ago
I'm currently studying at Western Sydney University and I have a professor whose colleagues are working on the project; he said it should be operational within a few months.
@ianl5560 7 months ago
Before AI robots can take over humanity, they need to become much more energy efficient. This is an important step to achieving this goal!
@BenMitro 7 months ago
Spot on...autonomous AI robots using current AI is an oxymoron.
@vibaj16 7 months ago
Which is really part of becoming way smaller. That supercomputer seems like it'll take up rooms worth of space. I think one major part of the problem there is 3D design of circuits. Brains are completely 3D, computers are mostly 2D. But 2D processors are already hard enough to cool, 3D would be way worse. Seems like we really need the circuits to be using chemical reactions rather than pure electronics. There's a reason our brains evolved this way.
@KuK137 7 months ago
@@vibaj16 Yeah, the ""reason"" being it's simpler. Chemical circuit can evolve from any biological junk, circuits, wires, and transistors requiring repeated perfection, not so much...
@geryz7549 7 months ago
@@vibaj16 What you're thinking of is called "molecular computing", it's quite interesting, I'd recommend looking it up
@adryncharn1910 7 months ago
@@vibaj16 Our brains worked with what they had. They aren't perfect, and there probably are better ways to do things as compared to what they are doing. This supercomputer is for experimentation. If/once we find out how to make these computers run ANN's, we will start shrinking them a lot more. Like how we found out how to make computers with circuits and have been shrinking them ever since then.
@MikeHughesShooter 7 months ago
That's fascinating. I just wonder how you program true structural neural networks, when so much conventional programming is ultimately geared towards compiling to a kernel and registers in the von Neumann structure. I'd really like to know more about this, and about the philosophy of programming on a truly parallel-processing neural network.
@user-sl6gn1ss8p 7 months ago
Yeah, I'm curious too. I think the idea is that you "train" it, instead of straight-up programming it? I know that's crazy vague, but I don't really know what I'm talking about : p
@austinedeclan10 7 months ago
Human beings come "preloaded" with these things called instincts, then use their senses to fine-tune those instincts. We are born able to do some basic processing, i.e. physical discomfort, stress or pain triggers an emotional response in infants. Baby feels hungry, baby cries. Baby feels pain, baby cries. As the infant grows up, they collect more information with their senses and basically create and update their own training models on the fly. It's not far-fetched to imagine an artificial "brain" preprogrammed with a certain directive (in our case and that of animals it is survive and reproduce) which, based on the information it collects on its own, can update its training data. That eliminates the problem that ChatGPT has where it doesn't know anything beyond a certain date. Then another key thing is decision making: humans' biologically "preprogrammed directives" are there to allow us to make decisions based on our environment. All a person knows when they are born is that they've got to keep on living, and in order to do that we learn who is a friend and who is a foe, what is nutritious and what is poisonous, etc. Eventually it'll be possible to have a "computer" do this, I believe.
@mauriciosmit1232 7 months ago
Well, GPUs are a good compromise, as they actually process 500 or more threads in parallel but are also programmable via conventional means. Of course, that's nowhere close to the analog computers that brains are. The issue is that analog machines usually had to be designed and fine-tuned from the ground up for each task, as they lacked a universal but flexible logical framework. Turing machines, a.k.a. modern computers, bring limitations by being digital (i.e. based on discrete states and integer arithmetic), but this same machine can be programmed to simulate almost anything we need and then replicate the behavior anywhere else cheaply.
@mauriciosmit1232 7 months ago
Basically, they are still programmed with hard-coded algorithms, but have billions of numeric parameters that change the behavior of the network. Neural networks have this property where you can calculate the numeric error of the output and back-propagate the error throughout the network, telling you how much you need to adjust the parameters to get the correct result. This process is called 'learning'.
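A minimal sketch of that learning loop in Python, with a toy two-layer network and made-up data (not the training code of any particular system): compute the output error, back-propagate it through the layers, and nudge the numeric parameters to reduce it.

```python
import numpy as np

# Toy back-propagation example; the data and network shape are made up.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 samples, 3 input features
y = rng.normal(size=(4, 1))          # targets

W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)   # hidden layer parameters
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)   # output layer parameters
lr = 0.1

for step in range(200):
    # Forward pass
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: propagate the error back through each layer
    d_pred = 2 * (pred - y) / len(x)
    dW2, db2 = h.T @ d_pred, d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)         # derivative of tanh
    dW1, db1 = x.T @ d_h, d_h.sum(axis=0)

    # "Learning": adjust parameters a little in the direction that lowers the loss
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")     # much smaller than at the first step
```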
@user-sl6gn1ss8p 7 months ago
@@mauriciosmit1232 I've read in a few places about analogue computers having a bit of a resurgence now. Our computers are amazing at what they do and lead to huge, continued leaps in what we can do, but it's an old critique that their architectures have limitations and that we got kind of locked into them for reasons of scale, economy, education, compatibility, etc. I think it would make sense for more exploration around other ideas to gain force now. And just to add, while GPUs are "massively" parallel, they are in general running the same program on different bits of data. That's still very different from running different routines through effectively different hardware on each piece of data. In this sense, I think you could say GPUs are more like CPUs than they are like arrays of FPGAs.
@karlgoebeler1500 7 months ago
Loves "Bees" Always buzzing away. Perpetual motion locked into the distribution of energy across Maxwell in a bound state.
@tjf7101 7 months ago
You had me at, “run for president “😂
@jurajchobot 7 months ago
As far as I know, FPGAs start disintegrating after they have been reprogrammed about 10-100 thousand times. Did they solve that already, and are there FPGAs with an unlimited number of rewrites, or will the computer work for just a few days before it's completely destroyed?
@brothermine2292 7 months ago
Or a third alternative: It will limit how many times each FPGA is reprogrammed, so they won't be destroyed.
@Markus421 7 months ago
The biggest FPGA manufacturers are AMD (Xilinx) and Intel (Altera). Their FPGAs store the configuration in RAM, which supports an unlimited number of rewrites. The configuration is usually loaded at startup from an external flash memory, which has a limited number of write cycles, but the FPGA never writes into its own configuration. It's also possible to load the configuration from somewhere else, e.g. from a CPU.
@jurajchobot 7 months ago
@@Markus421 Maybe you're right, but I'm confused. The FPGAs work by having a literal array of logical gates which they connect by physically changing connections in hardware through changing their states, which can usually work only about 100 thousand times before the connections in an array get one by one destroyed. They may store the configuration in RAM, but they have to physically etch them inside the physical hardware, otherwise the FPGA would work exactly the way it was previously programmed. The way I think it may work is if they already have all the connections mapped inside memory, like they scanned a real brain for example and then they recreate that brain inside the computer. This way they can work with the brain as long as they don't have to make changes to it. It also means you can test only about 100 thousand different brains before the computer disintegrates.
@Markus421 7 months ago
@@jurajchobot The connections aren't etched (or otherwise destroyed) in the FPGA. If there is e.g. an input line connected to two output lines, a RAM bit in the configuration decides if the information goes to line 1 or 2. But both output lines are always physically connected to this input line. It's just the circuit that decides which one to use.
@donwolff6463 7 months ago
My family is addicted to Sabine's Science News!!! Please never stop! We rely and depend upon you to help keep us informed about scientific/tech progress. Thank you for all you do!⚘️⚘️⚘️ ❤💖💜 👍😁👍 💚💗💙 ⚘️⚘️⚘️
@JinKee 7 months ago
In Australia another group is using "minibrains" made of real human stem-cell-derived neural tissue to play Pong.
@drsatan9617 7 months ago
I hear they're teaching them to play Doom now
@MCsCreations 7 months ago
Fascinating! SkyNet is going to love it! 😃 Thanks, Sabine!!! Stay safe there with your family! 🖖😊
@MrAstrojensen 7 months ago
Well, I guess it's only a matter of time, before they build Deep Thought, so we can finally learn what life, the universe and everything is all about.
@markk3877 7 months ago
Deep Thought was the second name of IBM's chess-playing computer, and I have no doubt the Deepxxx idiom has survived the decades at IBM - their researchers are really cool people.
@THEANPHROPY 7 months ago
Thank you for your upload Sabine I have only watched to 02:39 thus far but will watch the rest after this comment. This is nothing like the human brain in regards to its complexity whereby neurons form connections that are the structural basis of brain tissue that are unique & specific to certain regions of the brain design to enable specific functions. This is just basic structure & function such as: forebrain; midbrain, hindbrain, which are further subdivided e.g. the limbic system which itself composed primarily of the amygdala, hippocampus, thalamus, hypothalamus, basal ganglia & the cingulate gyrus. As you know Sabine: these are not standalone structures; they are seamlessly interconnected to other regions of the brain. Due to the basic genetic hardware that is morphologically expressed in the brain: several thousand orders of magnitude of complexity is established within a single region of the human brain. Just throwing together some bare wires & calling it a neural net representative of the human brain is imbecilic to say the least. Without predefined structures such as a limbic system: there is zero drive to toil & expand; to discover, to experience & grow, to share, to raise-up & evolve. Without an ability to conceive 4 dimensional space or any higher dimensional space: it will only react within the confines of its programming; which will be useful once humans can incorporate fourth dimensional space within the STEM fields such as medical therapeutic regimes as having access to angles perpendicular to three dimensional space does negate the need to have open surgery, you can just manipulate or completely remove a brain without opening the skull. Used in transportation will not only allow instantaneous transportation: it will also allow travel through time in any direction in the third dimension from the fourth. Apologise: I digressed somewhat! Peace & Love!
@Tiemen2023 7 months ago
Software and hardware are each other's counterparts. You can translate a digital circuit into a program, for example. But you can also translate every program into a digital circuit.
@Dan_Campbell 7 months ago
Obviously, this will have practical applications. But the potential for helping us understand ourselves, is the biggest benefit. I like that Deep is slowing down the processing. I'm really curious to see if human-level AGI depends on the speed of the signals and/or processing. Is our type of consciousness speed-dependent?
@bojohannesen4352 7 months ago
A shame that inventions are generally used to bolster the wallet of the top percentile rather than benefit humankind as a whole.
@SabineHossenfelder 7 months ago
Thank you from the entire team!
@joyl7842 7 months ago
This makes me wonder what the name for an actual computer comprised of biological tissue would be.
@billme372 7 months ago
The RAT (really awful tech)
@adrianwright8685 7 months ago
Homo sapiens?
@trnogger 7 months ago
Brain.
@19951998kc 7 months ago
Hopefully not Homo erectus. Reminds me of a type of porno movie I'd rather not watch.
@robertjohnsontaylor3187 1 month ago
I'm beginning to think it's going to be like Kryten [a robot] in the TV series "Red Dwarf". Or the paranoid android in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams, who keeps using the phrase "brain the size of a planet and they keep asking me to make the tea".
@diytwoincollege7079 7 months ago
It’s really a question of whose human brain it will be modeled after. They aren’t all created equal
@gl0bal7474 7 months ago
Very interesting. I'm wondering how the graded and action potential thresholds are determined
@ViewBothSides 7 months ago
The AI via brain/neural network simulation might end up a bit like aircraft. We ended up copying the wings but ultimately it was the engines that enabled us to fly like birds. Matrix math has shown great promise in inferencing but it's not clear that will be the optimal tool for reasoning. But hey, a non-reasoning flying killer trash can will still kill you.
@user-sl6gn1ss8p 7 months ago
I don't think we needed to learn it with birds, but in a sense being mostly hollow was also pretty important
@johnnylego807 7 months ago
Not afraid of AI, more so AGI. I'm more worried about whose hands it's in.
@hovant6666 7 months ago
FPGAs are quite expensive to make, I wonder what the whole project will cost
@RandyMoe 7 months ago
Glad I am old
@QwertyNPC 7 months ago
And I'm worried I'm not, but glad I don't have children. Such wonderful times...
@JanoMladonicky 7 months ago
Yes, but we will miss out on having robot girlfriends.
@brothermine2292 7 months ago
What could possibly go wrong with robot girlfriends?
@baneverything5580 7 months ago
The human brain has DETERIORATED considerably at record speed, so this must mean computers are devolving into low quality horror.
@kbjerke 7 months ago
"Deep Thought..." from Hitchhiker's Guide! 😁 Thanks, Sabine!!
@escamoteur 5 months ago
I was pretty disappointed she didn't get that reference
@kbjerke 5 months ago
@@escamoteur So was I. 😞
@DetPersc 7 months ago
Hook that into the already existing mini brain (homemade brain cells) and suddenly you get 4D film actors 'thinking' everything is real.
@georgelionon9050 7 months ago
Just imagine a machine as complex as a human brain, but a million times faster.. it would have the workload capacity of a small nation to do commercial tasks.. super scary, humans gonna be obsolete soon after.
@rremnar 7 months ago
It doesn't matter how strange or advanced this organization's neuromorphic computer is; the question is how they are going to use it, and whom they are going to empower.
@CHIEF_420 7 months ago
🙈⌚️
@tombrunila2695 7 months ago
The human brain re-wires itself constantly; it changes when you learn something new, as new contacts form between the brain cells. Here on YT you can find videos by Manfred Spitzer, in both English and German.
@user-eb1zv6sr9e 7 months ago
I wonder how the hardware simulates the plasticity of the human brain.
@tarumath319 7 months ago
FPGAs are physically reprogrammable, unlike standard circuits.
@kennethc2466 7 months ago
It doesn't and it can't.
@holthuizenoemoet591 7 months ago
FPGAs can be reprogrammed on the fly, so in this case to form new neural pathways. However, I'm really worried about our pursuit of neuromorphic tech... I watched too much Person of Interest as a teen.
@bort6414 7 months ago
@@holthuizenoemoet591 Brain plasticity is far more complex than simply "can be reprogrammed". The brain can increase the interconnectivity between neurons, it can grow even more neurons, and it can also undergo a process called "myelination", which in a simple way can be thought of as the neurons "lubricating" themselves with an insulating layer of fat which increases the speed of passing signals and insulates the neuron from other neurons. Each of these physical attributes will have different effects on how information is processed that I do not think can be replicated with software alone.
@kennethc2466 7 months ago
@@holthuizenoemoet591 You understand neither FPGAs nor neuroplasticity. "However, I'm really worried about our pursuit of neuromorphic tech." Yes, because people who don't understand things can make up all kinds of irrational fears. Your conflation of FPGAs with neuroplasticity is evidence this runs on fear and misunderstanding instead of seeking knowledge. Just like Sabine's new content, which focuses on trending tripe instead of her field of expertise. Your likes read like a bot for hire, as does your account.
@JxH 7 months ago
AI will be most frightening when it's embedded into potentially-angry robots that have some new fusion reactor embedded in their torso. Humans are not really endangered too much (extinction level) provided that we can walk around the AI computers and simply unplug them from the wall outlet. There's a huge gulf in-between these two situations, and nobody is paying attention to that incredibly critical distinction.
@TheTabascodragon 7 months ago
Step 1: use AI to interpret MRI scans to "map" the brain Step 2: use advanced microscopic 3-D printing to construct neuromorphic computer hardware with this "map" Step 3: design AI software specifically to run on this hardware Step 4: achieve AGI Step 5: AI apocalypse and/or utopia and possibly ASI at some point
@SP-ny1fk 7 months ago
It will mimic the conditioned human brain. But the human brain is capable of so much more than its conditioning.
@Sanquinity 7 months ago
There's another big difference between AI and our brains. A lot of our decisions and thoughts are based on emotions. Emotions at least partially come from chemical reactions. Something an AI based on microchips instead of neurons can't do.
@tw8464 7 months ago
It is doing thinking functions without emotions
@jesperjohansson6959 7 months ago
Chemicals are used to send signals we experience as emotions because of our physical, biological nature, I guess. I don't see why such signals couldn't be done with bits and bytes instead.
@themediawrangler 7 months ago
I think of the current generation of AI as being a "Competency Simulation" instead of anything resembling intelligence. You can make some amazingly useful simulators if you give them enough compute power, data and algorithms, but you have to apply actual intelligence to know how far to trust them. These neuromorphic machines are different. I think they will take a looong time to develop (thank goodness), but if you want anything like "Artificial Intelligence" in a machine this is a step in the right (scary) direction. The bit that makes this less scary is that I am not sure this kind of solution will scale well, so it will hopefully just end up being a curiosity and not make the human race obsolete. What is much scarier is the idea of an Artificial Consumer, just a machine that can generate money (already happening), consume advertisements (trivial), and then spend the money (already happening). If this idea finds a way to scale, then our corporate masters may not care about us much anymore. 🤖➡💵➡🤖➡💵➡🤖➡💵➡🤖➡💵➡
@bsadewitz 7 months ago
Well, you know, it's not like it's impossible to keep them in check. It is demonstrably possible. In this account you give, is there ever any production? Or is it just advertisements and spending and generating money?
@themediawrangler 7 months ago
@@bsadewitz Thanks for your comment! They would need to be productive, yes. Humble beginnings already exist. For instance, there are thousands of monetized youtube channels that are entirely AI-generated content with little or no human input. I don't think there is any reason to expect that AI won't start showing up as legit workers on sites like fiver, etc where we will end up doing business with them and not even realizing that they are not people. I haven't researched it deeply, but I don't really see barriers to this as a business model. Of course, there would be real humans who set it in motion and extract cash from it. It is already a bit of a cottage industry, so I believe that it is only logical that it will continue scaling up. Many categories of human jobs (and especially gig-economy opportunities) are low-hanging fruit.
@bsadewitz 7 months ago
@@themediawrangler Not only aren't there barriers, but the paradigm the sites themselves present, i.e. prompt/response, is that of generative AI. It stands to reason that the site operators themselves would just submit the jobs to an AI backend.
@bsadewitz 7 months ago
@@themediawrangler Ultimately, why would the operator of the frontend even be a different company? Is that where you were going with this?
@themediawrangler 7 months ago
@@bsadewitz Sort of. It is really just a statement that maybe we shouldn't be so proud about the relentless rise in human "productivity" statistics that politicians like to crow about. If one person can run a large corporation with nothing but machines for employees then is that really a productive person? Regardless of which, or how many, individual humans may be in control, corporations are driven by fiduciary responsibility to shareholders and will react to emerging markets; that always benefits the most efficient actors. Humans are not terribly efficient when compared with machines. Regular people are already struggling with job loss and other rapid economic changes. Scaling up a machine-centric economy could exacerbate the human issue in unpredictable ways. Thanks again for the discussion. It is nice when people respond with curiosity and genuine questions. Unfortunately, I haven't got any peer-reviewed study to cite, so anything else I have to say would probably be in the realm of science fiction.
@hainanbob6144 7 months ago
Interesting. PS I'm glad the phone is still sometimes ringing!
@Psychx_ 7 months ago
The main reasons the brain is so efficient are that communication between neurons isn't binary, and that processing and storing information are so tightly coupled. There are so many neurotransmitters, and every one of them can affect the cells in different ways: altering connectivity, increasing or decreasing the chance of an action potential, changing which transmitters are released into the synaptic cleft in response to an incoming signal or its absence, etc. A single nerve impulse can easily have 1 out of 10 or more different meanings, whereas the computer only knows 2 states (0 and 1). Then there's a bunch of emergent behaviour slapped on top, with the frequency and duration of a signal also encoding information, as do the internal states of the neurons, as well as their connectivity patterns.
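To make the point about multi-valued signalling concrete, here is a minimal Python sketch of a toy neuron whose response to a spike depends on which transmitter carries it and on the cell's internal state. The transmitter effects and numbers are invented for illustration; this is not taken from the video or from any real neuron model.

```python
# Toy neuron: the effect of an incoming spike depends on which
# transmitter carries it and on the cell's internal state,
# rather than being a plain 0 or 1. Purely illustrative numbers.
EFFECTS = {
    "glutamate": +0.8,   # excitatory: pushes the cell toward firing
    "GABA":      -0.9,   # inhibitory: pushes it away from firing
    "dopamine":  +0.2,   # modulatory: here it also strengthens the synapse
}

class ToyNeuron:
    def __init__(self, threshold=1.0):
        self.potential = 0.0
        self.threshold = threshold
        self.synapse_weight = 1.0

    def receive(self, transmitter):
        self.potential += EFFECTS[transmitter] * self.synapse_weight
        if transmitter == "dopamine":         # crude stand-in for plasticity
            self.synapse_weight *= 1.05
        if self.potential >= self.threshold:  # action potential
            self.potential = 0.0              # reset after firing
            return True
        return False

neuron = ToyNeuron()
for t in ["glutamate", "GABA", "dopamine", "glutamate", "glutamate"]:
    print(t, "-> fired:", neuron.receive(t))
```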
@platinumforrest3467 7 months ago
I know it's been around for a while, but I really like the short-format, single-subject articles. Your articles are always very interesting and well presented. Thanks and keep going! Next time give regards to Elon....
@sjzara 7 months ago
How do such things get ethical approval?
@aleks5405 7 months ago
There is no such regulation. It's the Wild West for AI.
@christopherellis2663 7 months ago
There are no limits on people who breed.
@tarumath319 7 months ago
Neuromorphic hardware isn't any more alive than current AIs.
@holthuizenoemoet591 7 months ago
Because we live in an overly techno-opportunistic and naive world.
@SylveonSimp 7 months ago
The ethics commissions have other urgent topics, like "how late can you abort?" and "are we allowed to shred small male baby chicks?"
@erlinggaratun6726 7 months ago
I think that computer will be good at telling you the closest place to get beer and how to make a Pavlova... Not much more.
@ramiusstorm5664 7 months ago
Joke's on them: brains don't think. You can't hold a thought in your head any more than you can grasp one in your hand.
@sparky7915 7 months ago
Let's hope these computers can help humankind. It's scary to think what would happen to us if these machines ever became our enemies. I have seen videos of the robots made by Boston Dynamics; they are incredible at what they can do. Let's hope humans can stay in control of these things.
@ThatOpalGuy 7 months ago
Machines would be wise to not trust humans.
@brothermine2292 7 months ago
Expect the war to have three sides, not two. A few smart machines, billions of mundane humans, and hundreds of augmented humans. Many mundane humans will serve as willing or unwitting minions of the machines or the augments. The alliances will be unstable.
@19951998kc 7 months ago
True. Ask Native Americans about the European guests who didn't leave and took everything they had.
@NemisCassander 7 months ago
It is very doubtful that a truly conscious computer would help humankind. The simplest question is... why would it? Unless you can teach it ethics before it gets too connected to be able to damage us faster than we can damage it (which, considering the state of philosophy and ethics in humanity in general, is a very long shot), a computer that can think like a human will be able to eliminate us very, very quickly. Now, as I don't believe that a conscious computer can ever be made, at least by humans, I'm not too worried about it; but if you do believe humans are capable of creating a conscious computer, you should absolutely be arguing that we shouldn't.
@brothermine2292 7 months ago
Computers can be dangerous without being conscious, since they can be used by corrupt humans. The same as any technology that provides an advantage.
@georgioszotos5519 7 months ago
No matter what kind of intelligence, it would always be treated the same way: too much information... and explosive results.
@galx3788 5 months ago
I've convinced myself that the brain uses feedback loops of electrical signals to hold information, and that the neurons are just pathways that maintain, amplify, and redirect these signals.
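For what it's worth, the "activity held in a loop" idea can be illustrated in a few lines of Python. This is a toy sketch of recurrent excitation keeping a signal alive, not a claim about how real neurons store memories:

```python
# Two toy units wired in a loop keep a signal alive after the input is gone:
# the "memory" lives in the recurrent activity, not in any single unit.
# Purely illustrative, not a model of real neurons.
def step(a, b, external_input):
    new_a = external_input or b   # unit A fires if driven by input or by B
    new_b = a                     # unit B simply relays A's last state
    return new_a, new_b

a = b = False
for t, stimulus in enumerate([True, False, False, False, False]):
    a, b = step(a, b, stimulus)
    print(f"t={t}  input={stimulus}  loop_active={a or b}")
# After the single True input, the loop keeps the signal circulating.
```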
@tinyear926 7 months ago
Ask this mega-brain this: "If it takes 2 hours to dry one towel on a clothes line, how many hours does it take to dry 5 towels on a clothes line?"
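The riddle works because towels on a line dry in parallel: assuming the line has room for all of them, the answer stays 2 hours. A tiny sketch of that reasoning (the line_capacity parameter is an invented assumption):

```python
import math

def drying_time(num_towels, hours_per_batch=2, line_capacity=5):
    # Towels on a line dry in parallel, so the time depends on how many
    # batches are needed, not on the towel count itself.
    batches = math.ceil(num_towels / line_capacity)
    return batches * hours_per_batch

print(drying_time(1))   # 2 hours
print(drying_time(5))   # still 2 hours, the answer the riddle is fishing for
print(drying_time(12))  # 6 hours once the line only takes 5 at a time
```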
@201950201950 7 months ago
When I was in my late teens, I imagined that we would someday have organic computer cores.
@TimoNoko 7 months ago
I just invented a neuromorphic learning machine. It is a bucket with solder, metal bits, and transistor chips. You shake the bucket, and if it behaves somewhat better, you apply a stronger current with the same pattern. Solder bits melt and new permanent neural connections are created.
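Joke aside, "shake it and keep the shake if it behaves better" is essentially random hill climbing. A minimal sketch under that reading (the toy fitness function and all numbers are made up):

```python
import random

# "Shake the bucket; if it behaves better, keep the shake": random hill
# climbing on a made-up problem (nudging weights toward a target vector).
target = [0.3, -0.7, 0.5]

def fitness(weights):
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

weights = [0.0, 0.0, 0.0]
best = fitness(weights)
for _ in range(2000):
    shaken = [w + random.gauss(0, 0.1) for w in weights]  # shake the bucket
    score = fitness(shaken)
    if score > best:                                      # behaves better?
        weights, best = shaken, score                     # solder it in place
print([round(w, 2) for w in weights])                     # ends up near the target
```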
@phoebebaker1575 7 months ago
“And from there, it’s just a small step until they run for president!”
@tomholroyd7519 7 months ago
I applaud the use of 3-LUT and remember to implement the full #RM3 implication #SMCC conjunction is left adjoint to implication
@isaacjohnson8752 7 months ago
The president joke got me laughing out loud. Well done, Sabine!
@monnoo8221 7 months ago
(1) The brain does not run an algorithm.
(2) The main difference between the currently hyped ANNs and the brain is that ANNs are represented as matrix algorithms, hence they run on GPUs.
(3) Deep-learning ANNs are not capable of autonomous abstraction and generalization; they are basically nothing more than a database indexing machine.
(5) The role of randomness becomes completely clear when you study Kohonen SOMs and their abstraction, the random graph transformation... Yeah, today you get funding for an FPGA computer; quite precisely 20 years ago, I did not...
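For readers who haven't met them, a Kohonen self-organizing map is an unsupervised network in which units compete for inputs and neighbours get dragged along. Here is a minimal, self-contained sketch; the data, learning-rate schedule, and neighbourhood schedule are arbitrary illustrative choices, not a reference implementation:

```python
import numpy as np

# Minimal 1-D Kohonen self-organizing map: 10 units learn, without labels,
# to spread themselves over 2-D data. Data and schedules are arbitrary.
rng = np.random.default_rng(0)
data = rng.random((500, 2))      # points in the unit square
units = rng.random((10, 2))      # the map's weight vectors

STEPS = 2000
for step in range(STEPS):
    x = data[rng.integers(len(data))]
    bmu = int(np.argmin(np.linalg.norm(units - x, axis=1)))  # best matching unit
    lr = 0.5 * (1 - step / STEPS)                            # decaying learning rate
    radius = max(3.0 * (1 - step / STEPS), 0.5)              # shrinking neighbourhood
    for i in range(len(units)):
        h = np.exp(-((i - bmu) ** 2) / (2 * radius ** 2))    # neighbourhood kernel
        units[i] += lr * h * (x - units[i])

print(np.round(units, 2))  # neighbouring units end up mapping nearby inputs
```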
@AIC_onyt 7 months ago
Running a neural network on an FPGA is a 50,000-IQ move (literally).
@brittchristy9508 7 months ago
Hi Sabine! I love your work, but I'm a spacecraft FPGA software engineer with a computer engineering degree, and one reason for using FPGAs is actually vastly increased speed compared to a processor. Processors are incredibly slow compared to an analog circuit or ASIC. But analog circuits take forever to design, and can't be updated in space as easily, at least not without some risk. So when I want to build something that runs at hardware speeds, with high levels of determinism, and write it in code, I choose the FPGA.

FPGAs are magic: you describe the hardware you want with a hardware description language (kind of like a programming language), and it assembles itself into that circuit. To be fair, I suppose you could make the FPGA's clock speed (how fast it ticks) extremely slow if you wanted to slow it down... but I wouldn't think you'd want that.

However, I'm very interested to learn more! I really enjoy your series, feel free to contact me if you'd like input for a correction! -Britt
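To give a feel for what "describing hardware" bottoms out in: an FPGA is largely a grid of small look-up tables plus programmable routing. Below is a conceptual Python sketch of a single 3-input LUT, the kind of logic element mentioned elsewhere in this thread; in reality the configuration bits are produced by synthesis tools from HDL, not written by hand like this.

```python
# Conceptual sketch of one FPGA logic element: a 3-input look-up table.
# Eight stored bits are enough to realize ANY boolean function of three
# inputs; an FPGA is, roughly, a sea of such LUTs plus programmable wiring.
class LUT3:
    def __init__(self, truth_table):        # 8 output bits, one per input combo
        assert len(truth_table) == 8
        self.table = truth_table

    def __call__(self, a, b, c):
        index = (a << 2) | (b << 1) | c      # the inputs address the table
        return self.table[index]

# Configure the LUT as the sum bit of a full adder: a XOR b XOR c.
sum_bit = LUT3([0, 1, 1, 0, 1, 0, 0, 1])
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", sum_bit(a, b, c))
```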
@kataseiko 7 months ago
The main thing they need to figure out is how to make these things work without a timing signal. The human eye doesn't have a "framerate"; every single nerve cell updates when it wants to. The same goes for every single part of our body except the heart, which is the only part that requires everything to work in sync.
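That clockless style is usually simulated with an event queue: nothing ticks globally, and a cell is only touched when a spike actually arrives. A toy Python sketch, with invented cell names and an invented transmission delay:

```python
import heapq

# Toy event-driven ("clockless") simulation: there is no global frame rate;
# a cell is only updated when a spike actually reaches it.
events = [(0.0, "retina_cell_17"), (0.3, "retina_cell_4"), (1.1, "retina_cell_17")]
heapq.heapify(events)
DELAY = 0.5  # seconds from a retina cell to its downstream cortex cell

while events:
    time, cell = heapq.heappop(events)
    print(f"t={time:.1f}s  {cell} fires")
    if cell.startswith("retina"):
        # schedule the downstream cell at its own moment, whenever that is
        heapq.heappush(events, (time + DELAY, "cortex_cell_9"))
```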
@CYBERLink-ph8vl 7 months ago
The computer will not mimic the human brain, but it will simulate it. It will be something different from the human brain and consciousness, like how the flight of an airplane and the flight of a bird are different things.