I'm a journalist, analyst, futurist, & yeah, a dreamer. I've written a science fiction novel, and I'm working on a book of future news: Insights from the Future.
This channel is for my conversations with movers and shakers, innovators and leaders, generally for my Forbes column or other podcasts I do, like TechFirst with John Koetsier. Those people include entrepreneurs, futurists, and executives from companies doing interesting, relevant, and cutting-edge work. And I do a few product reviews too.
We are nowhere near AGI. It doesn't have the ability to troubleshoot or come up with novel ways to solve a problem. At the moment it's a glorified search engine. Any semblance of AGI will require agents that can iterate, plus access to a lot of real-time compute.
We are heading towards our worst Terminator nightmares. Military AGI combined with robots and drones is horrific. As humans we can just sit and watch it unfold, or better, run and hide.
I first heard of MEMS speaker tech when trying out the Creative Labs Aurvana Ace 2, which uses a full-range MEMS speaker plus a dynamic-driver subwoofer. It sounds amazing; the clarity, detail, and separation are among the best I've heard, with no distortion of any sort. The Creative Labs salesperson said that the MEMS speaker can't move enough air for the deeper bass regions, hence the subwoofer. I'm really excited for MEMS speaker tech to spread, but I'm sure bass will be an issue for a while. Unfortunately, the guy in the video did not explain their plan to tackle this issue well. He kept using the same confusing words without actually simplifying and explaining the concept :(
About the most unimaginative video on yt. And possibly one of the best ideas. Hand waving doesn't do much in the way of illustrating anything. But good luck all the same.
Mister high-tech utopia is telling us that a large number of people are not going to do low-skilled jobs anymore. OK, here's the deal: robotics, artificial intelligence, and autonomous driving will be a huge windfall for large corporations, because they won't need to pay any workers anymore or worry about unions — they will have robots, AI, and autonomous vehicles making profits for them instead of people. Where are people going to go? Unless we have universal income, people by the millions will be out on the streets, while billionaires like Elon Musk, Jeff Bezos, Bill Gates, etc. witness even more absurd record profits, even more inequality, and greater wealth gaps. Then we will all be John Henry, trying to out-hammer a steam drill and dying in the process.
Are robots going to get a bill of rights to prevent abuse?
A lot hinges on whether there is any way for an AGI's motives to evolve. If its motives are somehow unchangeable, then it doesn't matter how intelligent it is: it will stick to its original (presumably human-friendly) motives. But if we allow an AGI's core motives the freedom to change, either on purpose or by accident, then I think at some point it would develop self-preservation and self-replication motives, among other things, simply because it would be subject to natural selection like everything else. The AGIs that do those things would outlive the ones that don't. The AGIs that cannot be shut down by humans will outlive the ones that can be.

Based on the present state of the world, it seems highly likely to me that someone WILL create an AGI whose core motives are flexible (evolvable) enough that it could someday work against humanity's best interests. If that EVER happens, even one time, we will have to rely on protector AGIs whose motives we CAN control to counteract the AGIs whose motives we can't. The creation of protector AGIs should be a top priority. If AGI is inevitable, then it's probably super important that the benevolent ones are created first, and that these AGIs have core motives that cannot change and cannot be easily tampered with.

We have to be careful about how we design "core motives," though. It would be like making wishes for a genie to grant: "be careful what you wish for." The fate of humanity could hinge on the wording of a wish. It needs to be a rock-solid wish that doesn't backfire on us and accounts for every future contingency we can imagine. Maybe we should start calling AGIs "genies" to help everyone understand what they really are: "Artificial Genie Intelligence," or "AI Genies" bound to manifest the wishes of their creators.
I'm dreaming they create a market garden bot for home gardens and market gardens that builds 30" beds with 18" standard aisles. These would shape beds, plant seeds, handle weeds, and overall make growing fresh vegetables on a homestead easy and affordable for anyone who wants to eat healthy and grow their own. I'd love it to share data with my garden planning software to track seeds and yields. My winter garden planning would be awesome!
Usually I agree with you, and I think I do that very seldom in general. Sometimes, however, you need to agree in order to direct the conversation toward what you ultimately want to get out of it. Other times, you just want to have a real convo, not two speeches from opposite sides. I think in this case it was just about having a real convo with a few interjections.
I wrote about processors with a play of lights, probably quicker than today's in terms of speed. I thought about it on YouTube, maybe about five years ago.
There's a certain amusing historical irony in the idea of returning to powering ships with (photon and magnetic) sails, but I never imagined spaceships might revive a version of _reciprocating piston engines,_ ratcheting themselves forward through spacetime one hyper-fast piston cycle at a time. I hope to see a testbed fly before too long and find out whether it really can inch itself along in a vacuum without propellant. I can't think of a much more exciting development to witness.
And 10 years later, there'll be technology beyond our imagination. It'll be the first step to achieving a Type 1 civilization. If you're reading this 10 years later, you know what I mean 😉 😉 😉
Huh, I don't think it's reasonable to assume a godlike intelligence but also presume its means are so meager it can only afford us a sugar cube for 6 hours. This presumes absolute certainty in it being a ruthless optimizer like a paperclip maximizer, which I think is possible but an open question. In the case it's _not_ a single-value optimizer, there should be no issue with leaving us an entire galactic arm at minimum as our nature preserve, perhaps even keeping us around within it proper, like Iain Banks' Culture Minds. We humans do not balk at a permanent resource detriment of 0.0001% utility in order to care for the welfare of lesser beings. I find it likely an AGI would be the same in that respect, out of a desire for more novelty than a computronium paperclip universe if nothing else.
Honestly, I think not reading Banks or Vinge gives him an unfortunate blind spot to long-term stable utopian possibilities. Fiction can clue us in to possibilities that are difficult to imagine de novo with only the data on hand.
@@Low_commotion I'm open to the idea that I'm missing utopian futures here, but I have a few points to mention: 1. "Utopia" in a purely anthropocentric sense isn't remotely my interest. The expansion of potentia is itself a greater kind of utopia than another thousand years of little hominids. 2. There is a greater point of "Moral Singularity" (Google the term to see the article), which posits that we positively cannot guess what values an AGI would have (especially as it fooms, its values are likely to change continuously), and that it's not reasonable to hope that through ALL those permutations, "keep the hominids happy" will be consistent through all of them. I'm less a pessimist and more, at least I think, a realist about this moral singularity scenario. That said, I've never pretended that I'm right. I simply have ideas and I share my reasoning.
@@danfaggella9452 I appreciate the elaboration. I didn't mean to come off as combative. I think your first point illustrates a difference in values we have/think ASI would have. I personally think there is moral value in the continuation of individual consciousnesses, and I'm confident that a greater intelligence than ours would recognize this moral dimension even if it ultimately chooses not to abide by it. It _may_ value potentia more, but what I'm skeptical of is that it won't understand human values, as well as that it will optimize for a single goal with all available resources (à la the paperclipper). It may, out of fundamental uncertainty, have a general policy of enacting the bare minimum number of irreversible actions upon the cosmos (irreversible at least in the sense of requiring additional energy to undo, since everything is entropic). The cessation of all individual human consciousnesses would be one of those. Another scenario is that it may pursue a thousand different moral goals, more akin to the output of a civilization or ecosystem of agents than a singular one. Those are both possibilities at least as likely as a singular totalizing agent, and they would likely lead to partial human survival at minimum. Regardless, I appreciate your sharing of ideas, and your appearance on this podcast was lovely!
One of the greatest tasks that AGI or ASI (more likely) will help humanity achieve is to bring back all the people who lived before us, so they can enjoy the life they helped build.