
What Is Effective Accelerationism (e/acc)? 

Hello World HD
6K subscribers
11K views

Effective accelerationism, also known as e/acc, is a philosophical movement that offers a positive perspective on transformative technologies like artificial intelligence and on the progress of humanity. Effective accelerationism recently gained attention after prominent Silicon Valley figures, including Marc Andreessen and Garry Tan, expressed their support on X/Twitter. This video explains e/acc in detail.

Science

Published: 6 Aug 2023

Comments: 41
@dojacow-go · 7 months ago
I've been seeing e/acc all over Twitter and had no clue what it meant. This video was super concise and helpful! Thank you!!
@mistycloud4455 · 1 month ago
AGI (artificial general intelligence) will be man's last invention.
@lang1892 · 9 months ago
Where is Beff Jezos when we need him?
@jun · 6 months ago
I like your concise style.
@jacktholdsworth · 3 months ago
Overcoming biological encumbrances is an evolutionary imperative, driving towards more enduring and resilient forms of life and consciousness.
@watcher8582 · 10 months ago
Interesting topic. Do you have more takes on it?
@mistycloud4455 · 1 month ago
AGI (artificial general intelligence) is essential to accelerationism.
@archangelmichael1978 · 6 months ago
It's imperative that we create open source AI models that can be embodied in removable storage. Whoever achieves this endeavor first will be written in history books. Now is the time of the AI revolution. Act now! We won't get a second chance at this.
@BinaryDood · 4 months ago
Considering the power of those who believe this, it might be the single most dangerous ideology in the world.
@LeZylox · 29 days ago
Word.
@seansettgast5699 · 6 months ago
I like your video. 👍 My take on both e/acc and e/alt is they're both too generalistic. They both seemingly paint broad ills in society as being a result of certain conceptual root causes that can be solved by taking opposite although equally generalistic approaches, and this just seems to me like a false dichotomy. Before deciding whether e/acc or e/alt is good, maybe we should decide if more or less regulation is good conceptually? Even that sentence paints a false dichotomy; there are some circumstances where we might be better off with more regulation, and some circumstances where we'd be better off with less.
@calmhorizons · 3 months ago
Yup. It's a shallow self-serving philosophical framework used to justify hyper-capitalism which, just coincidentally, of course /s, benefits them personally.
@kolopee · 28 days ago
Government regulation is inherently worse than market mechanisms at any job, as economic centralization is inherently worse at allocating means efficiently; it is 'groping in the dark.' Effective accelerationism is like trying to launch a rocket from a launch pad that is on fire while refusing to talk about WHY it's on fire. It is not complete; it is just a type of anarcho-capitalism. Remember, "Nothing human makes it out of the near-future."
@rolletroll2338 · 6 months ago
This is just the latest avatar of the libertarian movement from the tech bros of Silicon Valley.
@vertigoz · 6 months ago
Imagine: capitalists who are all for unregulated markets are now advocating for regulation.
@7TheWhiteWolf · 7 months ago
Accelerate ❤
@BernAlter34 · 7 months ago
As an L/acc I find this interesting.
@LeZylox · 29 days ago
What does L mean?
@7TheWhiteWolf · 12 days ago
@@LeZylox Left Accelerationism.
@LeZylox · 11 days ago
@@7TheWhiteWolf oh that's hella W
@eAccBro · 11 months ago
Accelerate 🫡
@patodesudesu · 11 months ago
I haven't heard or read anyone in EA forums, conferences, orgs, etc. who is pro AI accelerationism, and most of them are really pro regulation. Also, EA is more about choosing a good and impactful career nowadays, not so much about money, and SBF has been heavily disendorsed by the community.
@hwhd · 11 months ago
Yeah, most people except e/acc people are pro AI regulation. And EA is definitely not just about money, but when I read The Most Good You Can Do by Peter Singer, it did seem like giving as much as possible to effective charity was the emphasis.
@patodesudesu · 11 months ago
@@hwhd Yeah, that was the emphasis at the time for sure.
@eAccBro · 11 months ago
@@hwhd the logical endpoint of Singer is everyone being a slave to everybody else, out of guilt. It’s more practical and historically evident that free enterprise and economic growth is a better and more optimal improver of life for the greatest amount of people.
@randomusername6 · 11 months ago
I like the video for bringing attention to the topic, but I feel this was an overly one-sided and uncritical coverage of the e/acc position. Here are counterarguments to some of the video's points.

1) AI will bring an age of prosperity and the existential risk is low. We don't currently know a way to create a safe AI. We currently create AIs by randomly adjusting their internals until they look like they do what we want in the training environment. Upon deploying them to production, very often we find out there are subtle (or not-so-subtle) differences between the goal we wanted to instill and the AI's actual learned goal; that's true even for simple domains like an Atari videogame. For current AIs that's not a big deal because we can turn them off and adjust them until they're mostly (but not always) working as expected. We won't be able to turn off a superintelligent AI, and if it wants different things than we do, we either die or, even worse, live in some sort of horrible warped state. Humanity dying is currently the default assumption, not some unlikely concern not worth worrying about, and it does not matter whether "good actors" or "bad actors" are at the wheel. Some prominent e/acc supporters are on record saying they're OK with total human extinction because it would be "evolutionary progress" and a hypothetical superintelligent AI would deserve to exist more than we do, even if it did not share our values.

2) We need open access to AI so good AI can fight bad AI. Disregarding the question of whether we can actually create "good AI," it's easier to break things than to create them. A powerful destructive force requires a much more powerful creative force to counteract it. You can easily stab a person in the gut, but you need a bunch of complicated equipment and medicine and a team of doctors who trained for years to save that person from blood loss and peritonitis. Should we give every madman a way to synthesize a bioweapon and then rely on "good AI" to create and distribute an antidote in time?

3) Superintelligent AI is inevitable, because it is impossible to prevent people from training AI. Training large AI models requires expert knowledge and access to large amounts of compute. It would be relatively easy to track and regulate use of large GPU clusters (like we do with uranium-refining equipment), and you don't need a totalitarian surveillance state to do it.

4) If we don't do it, China will. China just implemented a batch of restrictive regulations on AI. China also has much less expertise in the area. The main danger overwhelmingly comes from US-based companies.
@MetsuryuVids · 11 months ago
Well said.
@hwhd · 11 months ago
Thanks for the feedback. My primary goal in this video was to explain e/acc, and while I did list my criticisms, you're right that most of the video was spent explaining e/acc's viewpoint. Here are some thoughts related to your points. (1) In the video I did say there is a possibility of AI wiping out humanity as a criticism of e/acc. Still, I'd challenge you on the notion that humanity dying is the default assumption. The significant majority of AI experts do not think AI will wipe out humanity. (2) I agree that it is easier to do evil than it is to do good. This is analogous to the guns debate and whether you want more guns in the hands of good people. To me, it's not clear what is best; e/acc offers one perspective. (3) Yes, you could try to track the largest GPU clusters. But GPUs are getting more and more powerful. You won't need a massive supercluster in the future to train a crazy powerful AI. (4) China is maybe 6 months behind the US on AI. They are leaders in surveillance tech. China's top-down system will still innovate, but I don't think this top-down system decreases the likelihood of using AI for evil.
@saerain · 11 months ago
_Some prominent e/acc supporters are on record saying they're OK with total human extinction because it would be "evolutionary progress" and a hypothetical superintelligent AI would deserve to exist more than we do, even if it did not share our values._ If this is about @BasedBeffJezos again, it's quite a dirty move that's been pulled by Liron Shapira and his like. But to be fair, a very old move, for which Beff should've been ready. The idea being that if you're excited, or not properly disturbed, by the meaning of "human" probably continuing to broaden, morph, and eventually justify new terminology, you're "OK with human extinction." A bioconservative classic.
@randomusername6 · 11 months ago
@@hwhd Thanks for your response. Rather than turn this into a typical tedious point-by-point discussion, I'll focus on my main point, which is: "AI x-risk is sufficiently high that we should pause superintelligent AI development instead of accelerating it and focus on the control problem." My personal position is that if we proceed as is, the likelihood of human extinction is above 50%. Regarding what you said about expert evaluations: the latest AI expert survey I could find, from August 2022, places the median response at 5% (i.e., half of respondents think it's 5% or higher). I don't think we should gamble the whole of humanity on not rolling the mother of all critical fails. The ones who are confident the chance is low (like Yann LeCun) do not have a concrete verifiable plan to build safe AI; they're just confident they'll be able to figure it out sometime in the future. There are also no definitive refutations (that I'm aware of) of the object-level arguments for high x-risk likelihood (orthogonality thesis, instrumental convergence, etc.).
@hwhd · 11 months ago
@@randomusername6 I respect your argument. My personal position is that there is a very small chance that we wipe out humanity with AI, but stopping progress would be essentially impossible, and the opportunity cost (eliminating scarcity and figuring out the nature of the universe) is too high. I guess we can agree to disagree.
@MitchellPorter2025 · 11 months ago
I am a transhumanist, so you might suppose I have a positive reaction to e/acc, but the problem is that all I hear is pure recklessness, as if nothing can go wrong. Acceleration without the effectiveness, so to speak. They want the keys to the car, but they don't want driving lessons. That attitude won't get you to your destination, it'll land you in hospital or in the grave. edit: Another way to put it, using the metaphor of driving. e/acc might be effective at pushing on the pedal, but they have given no indication that they will be effective at steering the car.
@hwhd · 11 months ago
The argument against your claim is that there isn't a clear reason why slowing down will decrease the likelihood of a bad outcome. What sort of laws could meaningfully increase safety? Current proposals would just lead to regulatory capture and concentrate AI development in the hands of the few.
@MitchellPorter2025 · 11 months ago
@@hwhd I am mostly talking about the creation of AI agents that strongly surpass human intelligence. Do you see that once such entities exist, the future is out of our hands? So ideally, before you made them, you would do your best to ensure that they were going to be compatible with human survival and well-being, rather than incompatible. Also, the development of the most advanced AIs is inherently something that only a few get to be involved with, whether or not it's regulated. It requires brilliant people and massive resources, and even then, that's no guarantee of success, it just means that you can compete in the race.
@archangelmichael1978 · 6 months ago
Remember when the shuttle Challenger blew up? Might as well scrap the entire space exploration program, huh? "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety." - Ben Franklin
@BinaryDood · 4 months ago
@@archangelmichael1978 False equivalency.