Keeping you up-to-date on the latest technology innovations in a beginner-friendly way.
I mainly focus on AI development, specifically the development of decentralized AI/AGI systems (the ASI Alliance with OpenCog Hyperon).
So if you are worried about big tech companies developing AI/AGI systems to serve their narrow goals, and would rather see open, decentralized, and democratized development of AI/AGI for the benefit of all of us, this channel is the right place for you.
Of course, I'll also update you on general AI news, and sometimes on other tech developments if they are really fascinating.
Join my community by subscribing to my channel. I'm looking forward to talking to you soon!
That's great, and it's similar to another YouTuber's video. I've been using Midjourney for almost 2 years, since just as V3 was going extinct. The prefer option set and list commands, yada yada, always had errors, so I ignored them. I know that when I put in --CobaltBlue (that's one of my presets) I get irritated because I have trouble switching back and forth, and that's my only frustration, because it's super easy to delete a preset. Especially when I'm going back down this rabbit hole. This is easier than learning video editing, but those videos are super slow, which is good for you, as you make more money with watch time, and I, the "student," can go back and forth between Discord and the video to try the actions. Ya feel me? That's my suggestion. BTW --> I know how to set my SREF, but I don't know how to get it from previous pictures in the number. Does the S in SREF refer to the SEED #?
@@CHHOTUKHAN-qp7pm I'm from a country where this app isn't available on iOS, so I can only use the website. However, I can't delete things on the website. Even when I type '!delete', it still says that it can't be deleted.
Do you think AI needs more biological grounding? Want to find out more about Sophia? Check out this video about Sophiaverse ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-6pBjoIuENO8.html
Of course I'm not well versed when it comes to these topics, so please let me know your thoughts (or your deeper knowledge) and whether what I say makes sense or doesn't make sense ;)
I'm on Cardano, gonna do my best to get it all done and vote. I wanna vote for Cudos to join our alliance; together we are stronger, and Cudos is complementary to my favorite, NuNet. I was worried a lot about my NuNet in the beginning, but the NuNet guys quickly explained things well, so I'm actually fine with Cudos 💯
It would be nice if ordinary Cudos holders benefited from the deal too. Instead of 300x we get max 10x, just because FET mooned and Cudo insiders dumped on us from 3c to 0.008.
Yeah, it would be great if there were some form of benefit or compensation. In my last livestream I talked about CUDOS, mentioning an incentive to merge, so maybe something is coming. I really hope so.
The cool thing about L2 stuff: it can be more decentralized than L1 stuff. It can be designed so that you have to take down Ethereum, Cardano, BTC, Polkadot, Cosmos, and Solana simultaneously to mess with it.
Look into brain organoid computers and you'd realize how dangerously wrong you are. The only limit to those is how long we can keep them alive, and we've kept one alive for 10 months. We don't need this LLM thing to be AGI; we just need LLMs to give us an update on the human brain and neurons, so that we could grow organoids far greater than the human brain and then cross-feed that data back into the computer, creating a biological-synthetic feedback loop. Eventually we'll end up with bio brains made of human brain material the size of server rooms. I won't hear any of you arguing about whether it's conscious or some bullshit; when it's made out of your own brain cells, you're a fool who doesn't know enough about science. So what you're saying is you're smarter than Ray Kurzweil? I'm just trying to be real, that's crazy. I'm not smarter than Ray Kurzweil, but you are, okay?
"At a standstill" is categorically false. Saudi Arabia is building a massive data centre as we speak. Groq is working on improvements to its Language chip.
That sounds like hair-splitting to me. LLMs have of course improved a bit, but far from significantly, for quite a while now. And "standstill" is not about nations and companies putting more resources into this. In fact, that even more resources are put into the same approach instead of working on different approaches is exactly what he criticises.
He is talking about the integration of multiple approaches as a possible new road. That's something done, for example, by Dr Ben Goertzel, who is also present. I recommend you check it out; Hyperon is a very interesting approach.
Unless logic is added to AI, I think it will remain an "idiot-savant" prone to stupid errors. One thing not discussed here, is that so long as it remains based on language, it will be at the mercy of the biases of its trainers, which means instead of logically thinking outside the box and being a force for innovation, it will continue to support the status quo. An AI which could think logically, on the other hand, would pose a challenge to the status quo. Perhaps that is why it isn't being developed.
It's not explicitly said, but it's touched on when he talks about how the language and image systems aren't connected, and also when he mentions that combined approaches to AI should preferably be worked on. I highly recommend you also watch the whole talk.
@@WieseTechnology I do recall those points, so I must have watched enough of it to catch them. I have "discussed" these points with Claude AI rather extensively. The nice thing about AI is that it isn't defensive.
@@ssake1_IAL_Research Funny that you are insinuating I'm getting defensive instead of addressing what I've said. Oh well, I guess it's harder to have conversations these days than ever, because everyone seems so on edge about literally anything.
I used to think it was hype... but not anymore. The moment AI finds a way to master recursion and self-debugging capabilities, it will take 5 years and you will have something with a broader understanding of all human tasks and the execution of every single one. Maybe it won't have a soul and stuff... but it will transform the world. Save this comment.
What changed your mind about it being hype? Also, it may be important to note here that Gary is only talking about LLMs because of their limiting fundamental architecture. He mentions that other approaches, or combined approaches, could develop further.
@@WieseTechnology I'm not the guy you asked, but a couple of things convinced me. DeepMind's work on AlphaFold 1-3, and then AlphaProteo, is proof that synthetic data can be used to train stronger models, and that higher-quality, larger datasets generated from those higher-quality models (AlphaFold 3) can then be used to train a model of a different sort (AlphaProteo). This implies that the size of the training data will not be a limiting factor to scaling.

However, I understand you don't believe in the scaling laws, and that's fine. I've built a couple of decently complex projects using Claude, and I do believe it is capable of some sort of logic. It's a little hard to pin down exactly where it does well and where it falls apart, but it's definitely doing something. So my personal experience with these models tells me they're more capable than the average user is aware, and in fact it requires a decent amount of effort to discover what they can do. The quality of current frontier models, I believe, is sufficient to train future models through reinforcement learning methods a la AlphaZero.

Even if you don't believe in scaling laws, it is basically undeniable that by making larger models with larger, higher-quality datasets and a hundred-billion-dollar data center to train them, we are going to get very capable baseline models, which will be able to do that reinforcement learning sort of thing at an even higher quality.

Finally, maybe the most persuasive argument for continued exponential progress: it seems like there are so many synergistic discoveries. As one method is refined, it opens doors for new methods, or enables a previous idea to become actionable. It's too simple to just say that the scaling laws will hold (I think they will), because even if they don't hold, you're forced to recognize that the scaling laws specifically describe transformer architectures; the hypothesis is not generally applicable to "AI".
Thank you for coming to my TED talk LOL - sorry it's so long-winded. I think it's important.
Scaling up is just going to cost more in electricity and servers and not get any better answers. These guys are morons wasting other morons' billions. The question is: how do the morons get billions?
I'll keep making videos weekly about this topic and all the latest developments :) As for Gary Marcus, this talk is based on one he gave 3 years ago, so I'm pretty sure he'll also keep updating on everything.