Rational Animations is a fully remote international team of about 40 artists, writers, and tech dorks. We are fans of dogs, stories, animation, learning, and doing good.
With our YouTube channel, we aim to promote good thinking and altruistic causes and to help ensure humanity's future goes well. Among these topics, we are particularly focused on AI Safety and AI Alignment: making sure present and future AI systems are aligned with human values and don't cause our extinction.
We also offer animation production and writing services. Don't hesitate to contact us!
To reemphasize: I do not want an AI language model based on what some group thinks is moral. I want an AI language model that will learn my morals and values.
This is just people masturbating to the idea that if there are more people, all our current problems will be worse. But that's not how it works. The majority of people alive today are suffering on a level that should not be accepted and that easily qualifies, in severity, as an S-risk. The scope is pretty much irrelevant because the scale is already as big as it can physically be. If that's not a cause for action yet, nothing ever will be. If any relevant decisions were influenced by scope, we would have solved the problem already. The whole idea is simply inactionable and fruitless. The only effective way to prevent these risks is to eliminate these problems in the modern world at the scales where they already exist. If that isn't done, the discussion is over. In fact, everything happening today would have qualified as an S-risk 100 years ago, 1,000 years ago, even 10,000 years ago.
AI danger isn't its computing speed, because that's offset by its insane power cost. AI danger comes from its cheap replication: an AI can be cloned, perfectly, in minutes. Humans take decades to make a very imperfect copy.
Reminds me of the Portal in the Forest book, which has humanity suffer various apocalypses (in the wider story universe). In one of them, humanity becomes perpetually enslaved through machines that let people sort of program their day. At first it was simple stuff like boring work, but it moved up to entire work schedules, workout routines, etc. Eventually they figured out how to do it wirelessly, a bunch of pretty weird religious fanatics grew way too fond of the stuff (you get tons of productivity, and suffering apparently ends, because the device works in a way that lets you sleep/daydream the whole time), and more and more folks used it 24/7. Finally everyone was merged into what is essentially a hivemind, and it's revealed that despite the dreamlike state, using these machines/methods/technologies leaves the victim in what amounts to perpetual torture until they die, at which point they come out of the trance screaming.
A wish can be made safe with two conditions: 1) I must not regret making this wish at any point. 2) Condition 1 must not be achieved by altering my values or removing my ability to regret something.
Is this why GPT tells me men pretending to be women are actually women and that's great, but wanting to live with people who look and think like me is evil?
And who gets to decide what suffering is? It starts with reeducation camps and ends with "We should abort the fetus because it would lead a miserable life anyway." And again, what is suffering? We should tread lightly and be thoughtful when discussing weighty matters of this type. No one wants suffering, but first, do no harm.
Omg this was one heck of an incident to go down in history! Greatly explained. This is how bad an evil teacher can be. Hope they check their code a hundred times beforehand, now that this has happened!
This is a FANTASTIC video on intuitive understanding of Bayes. Loved the animations. Loved that you explained the odds form of Bayes' rule. Loved the examples.
One must teach an AI morals, but whatever someone has deemed "good and moral" has changed and will continue to change throughout the course of humanity; it even varies from person to person, faith to faith, and nation to nation.
Our existence in this time has terrifying implications for humanity as a whole. If humanity will be successful and last millions of years, we are implausibly early as individuals within the human race. If humanity will last millions of years and/or expand to tens of billions of individuals, then we are in a tiny fraction of a percent of the first humans that will ever exist, which demands at least as much of an explanation as being a tiny fraction of a percent of the first intelligent species that will ever exist. If humanity will be successful and last millions of years, of course _someone_ would have to be around in the early period, but why would it be us specifically? If we assume that we are typical humans (which we have no reason not to), we can expect that there were roughly as many humans who lived before us as will live after us (about 110 billion). If that assumption is correct, then we should expect humanity to only last around 1000 years more without a major catastrophe which reduces our population to a few million or less, or wipes us out entirely.
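The comment's "around 1,000 years" figure follows from a quick back-of-envelope calculation. A minimal sketch, assuming (as the comment does) that we are typical observers, so expected future births roughly equal past births; the 130-million-births-per-year rate is my own round assumption, not the commenter's:

```python
# Doomsday-argument back-of-envelope estimate.
# Assumption: we are typical humans, so expected future births ~ past births.
past_births = 110e9       # comment's figure for humans born so far
births_per_year = 130e6   # assumed current global birth rate (rough round number)

expected_future_years = past_births / births_per_year
print(round(expected_future_years))  # ~846, i.e. on the order of 1,000 years
```

This is only an order-of-magnitude argument: the result scales directly with the assumed birth rate, and a population crash to a few million (as the comment notes) would stretch the same number of future births over a far longer time.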
I have had this type of thinking for nearly half my life now. In plain English: I only believe things for which I have more than two standard deviations of certainty.
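For reference, "two standard deviations of certainty" corresponds to roughly 95% confidence under a normal distribution. A minimal sketch of that arithmetic (my illustration, not the commenter's):

```python
import math

def within_sigma(k: float) -> float:
    """Two-sided probability mass within k standard deviations
    of the mean of a normal distribution."""
    return math.erf(k / math.sqrt(2))

print(f"{within_sigma(2):.4f}")  # ~0.9545, i.e. about 95% confidence
```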
They already do it on purpose. What if, and I know this may be hard to believe, many twelve-year-old girls want to be mothers? Instead we send them to school and import immigrants to have those children instead. A whole lot of children explicitly deny consent to go to school and must be forced. This sometimes leads to gun violence in schools. We don't try to fix this situation; we just watch as the next generation is abused into being wage slaves, their chance of having children stolen from them by immigrants imported so that wage labor stays cheap and they stay poor while attending those schools. We do it on purpose. We increase the suffering on purpose.