Thank you to the viewer who asked for a neuromorphic computer video! I think it ended up being an interesting topic. Don't forget to join my Discord and support me on Patreon if you like. Also, sorry this video is two days late from my normal Sunday publishing time.
Maybe we're seeing the first iteration of Marvin. At the dawn of the 3rd millennium, humans had the bright idea to make a copy of their own brain and place it inside a machine. It's reported that the machine's first words were not "Hello world," as some would have hoped. Instead, it spoke to its 'parents' in apparent disappointment and said, "Oh, no..."
I’ve been following your videos since the “Why no one saw ChatGPT coming” video. I absolutely love how every video you make is packed with information as well as how you announce the organization of topics at the beginning of each video. It helps me organize my thoughts as I hear you talk. Keep up the excellent work!
Wow, you're one of the OG viewers. I think that was my first AI video. I'd love to see you in the discord if you're not there already. I had someone else comment on the organization today as well. It's good to hear that it's helpful. See you in the next video! Cheers.
Lots of people predicted that one superhuman artificial general intelligence would take over the world, maybe Google, maybe OpenAI, maybe the NSA, maybe the Chinese or Japanese, but no one predicted it would be Australia.
We really are at the tipping point of human technology. Once our AIs become intelligent enough to make discoveries of their own, our technology will far surpass what we have now in ways we can't imagine.
Really enjoyed the video! I've subscribed to your channel. I have a degree in neuroscience, but when I realized I didn't like killing rats so much, I decided to switch gears, and now I'm studying computer science and math. Neuromorphic computing is the obvious interface between the disciplines.

One thing that came to mind about the FPGA implementation of DeepSouth: it's pretty well known that the brain has this quality of plasticity. It is not the case that the "hardware" of the brain is baked onto a chip the same way that circuits are to make CPUs. There are different principles that govern the extent to which the actual synaptic connections are altered in learning (short- and long-term potentiation, working memory). It seems that until we can manage to manufacture a chip with the capacity to fluidly alter its own architecture (memristors have been proposed but seem to be fairly theoretical at the moment), there will be a certain efficiency bottleneck that will be difficult to overcome.

Fascinating area of research. It will be amazing to see what advancements come out of it.
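For readers wondering what "plasticity" means concretely, here is a minimal toy sketch (my own illustration, not the DeepSouth or FPGA implementation) of a Hebbian-style learning rule, in which synaptic weights strengthen when pre- and post-synaptic neurons are repeatedly co-active. This is the kind of fluid rewiring the comment argues fixed silicon cannot easily replicate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 4, 3
# Synaptic strengths between 4 presynaptic and 3 postsynaptic neurons,
# initialized to small random values.
weights = rng.normal(0.0, 0.1, size=(n_post, n_pre))

def hebbian_step(weights, pre, post, lr=0.01, decay=0.001):
    """Strengthen co-active connections; slowly decay all weights."""
    return weights + lr * np.outer(post, pre) - decay * weights

# Simulate repeated co-activation of presynaptic neuron 0 and
# postsynaptic neuron 1; their connection gets potentiated.
pre = np.array([1.0, 0.0, 0.0, 0.0])
post = np.array([0.0, 1.0, 0.0])
for _ in range(100):
    weights = hebbian_step(weights, pre, post)

print(weights[1, 0])  # noticeably strengthened relative to the others
```

Real synaptic plasticity (spike-timing-dependent plasticity, short- and long-term potentiation) is far richer than this two-term update, but the sketch shows why "weights that rewrite themselves with activity" is hard to bake into a fixed circuit.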
I'm just liking every single one of your videos I come across. I'm active on r/singularity. I've been meaning to make a post there to spread the word a bit more about your channel - I know someone already did several months ago. You really don't need to change anything about your content - it's brilliant. Keep going.
Yes, whenever I post a video that talks about when AGI will come, it usually makes its way to that subreddit I think :) if you find a video interesting, please do advertise it in the appropriate channels. I can't really post in r/singularity myself because they don't allow self-promotion. But any genuine recommendations would be greatly appreciated. Not often, but sometimes my videos get a large percentage of external traffic when they get shared on some other platform. I can't always tell where they get shared though :)
This is fascinating. I'm intrigued by the first BCI implant! How fast will the field explode? Could brain-cloud links accelerate AGI? Seems less scary than lab-grown brains (eek, ethics!). ☁
Anyone ever consider the possibility that the inscrutably large matrices generated by modern AI are actually an _interface_ to the real machinery of computation, which is situated elsewhere in the simulation?
@@DrWaku Oh... so _you're_ the other guy who actually knows the setup. You wouldn't believe how much time I have wasted on YT threads waiting to encounter someone who can speak plainly about the simulation. Bang on, doc!
@@DrWaku ...and while I have you on the phone, might I suggest you look into the role of gender in computer science? By which I mean _technical_ gender. I have spent twenty years in that pursuit, and I can tell you it's quite profound for our conception of reality. Sound too nutty? Then answer me this: how did _every single_ graphic designer and thumbnail generator make the simultaneous decision to characterize AI as a beautiful young female? I could accept even a ninety percent female representation as reasonable, but one hundred percent is impossible... and starkly conspicuous. Anyone staring at that fact should be very, very curious about what it implies in the matter of (human operating-system) network design.
Ah yes, I mentioned that one to my editor but it must have slipped through the cracks. The subtitles are initially based on voice recognition which is why they have typos sometimes.
Added this to my list of potential videos. Thanks. In short, creating scarcity in a digital world allows a lot of things from the physical world to be represented more readily. It's cool tech.
Well.... You can only shove data through the network so fast. The cost of loading and storing data can be orders of magnitude greater, in both time and power, than the computation itself.
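The rough scale of that gap can be illustrated with widely cited per-operation energy estimates (approximate 45 nm figures from Horowitz's ISSCC 2014 talk; exact numbers vary by process and memory system, so treat these as order-of-magnitude assumptions):

```python
# Back-of-envelope illustration of the "memory wall".
# Energy figures are approximate, commonly cited 45 nm estimates.
DRAM_READ_PJ = 640.0  # ~energy to fetch a 32-bit word from DRAM (assumed)
FP_ADD_PJ = 0.9       # ~energy for a 32-bit floating-point add (assumed)

ratio = DRAM_READ_PJ / FP_ADD_PJ
print(f"Fetching one operand from DRAM costs ~{ratio:.0f}x a float add")
```

So even if the arithmetic were free, moving data on and off chip dominates the energy budget, which is one motivation for neuromorphic designs that keep memory and compute physically close together.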
Definitely, I am quite optimistic about neuromorphic computing. While the field is currently dominated by Nvidia, Intel and IBM could change the whole AI landscape at any time if they succeed in deploying it. So buy Intel stock in advance if you can see the future. One thing is for sure: on the current trajectory from AI to AGI and worldwide use cases, current systems lack local AI computing and need enormous power. With these constraints, forget about an AGI and AI era; it'll hit a bottleneck in the next 2-3 years. And Intel, which is just behind in the AI race, could jump on top with its early R&D in neuromorphic computing.
Do you know whether neuromorphic computing scientists intend, or are interested, in modelling the function of emotions? One of your diagrams had waves coming into the right hemisphere and circuits coming off the left hemisphere... From your discussion here, I understand they are interested in the nervous system, so perhaps they are emulating sensory data as information (sensors etc.)?

I ask because, as I raised in my comment on a previous video, some neuroscientists say that 80% of human thought is "emotional", including all of decision making. I'd be really interested to hear your thoughts on what work is going on, if any, to build AI models with right and left hemispheres... It seems to me, from my human experience, that agency and judgment are two key areas for AI to be both effective and safe. Judgment for humans requires emotion... so I wonder how AI scientists are thinking about emotion? You discussed the column idea, which might 'reach down' to lower sensory layers... Is that the extent of 'emotional' AI thought? Or do they conceive of a dialogue between left and right processing hemispheres?

I was watching a video about Google's upcoming Lumiere video generation AI: it has a concept of durational time built into its 'thinking' too. That seems to be a reason it unlocks much better video movement than current AIs like Runway. It can't be a coincidence that human minds create the experience of duration, and thus duration/time seems important to model in the functioning of all minds.
DeepSouth's 228 trillion synaptic operations per second is 1,000 times slower than a human brain :-( The human brain has several hundred trillion synapses, and every synapse can conduct spikes 1,000 times per second.
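The arithmetic behind this estimate can be checked in a few lines. The figures below are the commenter's assumptions (roughly 3×10^14 synapses, each spiking up to 1,000 times per second), not measured values; average biological firing rates are far lower, so this is a peak-rate comparison:

```python
# Back-of-envelope check of the "1,000x slower" claim.
SYNAPSES = 3e14        # several hundred trillion synapses (assumed)
SPIKES_PER_SEC = 1e3   # max spike rate per synapse (assumed)
DEEPSOUTH_OPS = 228e12 # 228 trillion synaptic operations per second

brain_ops = SYNAPSES * SPIKES_PER_SEC  # ~3e17 synaptic ops per second
ratio = brain_ops / DEEPSOUTH_OPS
print(f"Brain is ~{ratio:.0f}x faster under these assumptions")  # ~1300x
```

Under those assumptions the ratio does come out around 1,000x, though using average rather than peak firing rates would shrink it considerably.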
Somehow, mimicking human brains just doesn't sound like the best road map to follow on this. Aren't we after something better? Why start with a duplicate of us? Our record isn't so great.
I think that not many years from now, AI will probably consume more than 50% of the energy produced on Earth and beyond. It shouldn't take too long to expand into space for our energy needs, especially once we actively start to use fusion energy.
5:55 That's complete BS, like a lot in this video. If you averaged the energy consumption of a human over the course of its entire learning phase (i.e., its lifespan), things would look differently. Besides, something with a similar output to a human brain doesn't consume megawatts during the inference part.
The logic behind this isn't rational. Firstly, the simple fact is that we don't fully understand how the brain physically works, so replicating how we assume it works will not only teach us nothing about a real brain but also create something that doesn't work like a real brain. Secondly, we don't understand how information is stored within the brain, so we cannot replicate that. Finally, we don't even know what sentience is, so we wouldn't know if an AI is alive or imitating life. This sounds to me like buzzwords and pseudo-science being fed to clueless investors to get funding for research that is an inefficient use of time and money.
From my understanding, there have been years of research from several universities and they pooled their knowledge to try to form the most accurate model they could. They want to run large-scale simulations of what can happen in a brain precisely so that we understand it better. That's the stated purpose of the neuromorphic computer. When it comes to AI, we know that our current systems are based on a very high level approximation of what happens in the brain. Since we have brains that are working pretty well, there's good reason to believe that approximating it more closely could result in better outcomes, if we get stuck with our current tech. It makes sense. It's not claiming that we already understand brains or that it's definitely the way forward for AI. This is research, after all.
@@DrWaku It doesn't matter if every synapse in the brain is indexed and every form of input mapped against every part of the brain that lights up like a Christmas tree; if we don't understand the connection between the why, how and what, then we're a long way from having enough understanding to simulate thoughts, let alone intelligence. All we have currently is the biological equivalent of circuit schematics, and that is where the research ends because of the aforementioned limits. It's like the three blind men and the elephant, with neuromorphic researchers claiming to be able to recreate what the elephant looks like after only touching its tail.
@taeallred There's such a thing as research for its own sake with no intrinsic value. You see this in fields of study like cryptozoology, parapsychology, epistemology, etc. In this case, money is being wasted on making a simulation of something we don't even understand and whose results we are incapable of validating.