Covering the biggest news of the century - the arrival of smarter-than-human AI. What is happening, what might soon happen and what it means for all of us.
AI Insiders Exclusive Videos and Network: www.patreon.com/AIExplained
Newsletter: signaltonoise.beehiiv.com/
Business GenAI Consulting: www.theinsiders.ai/
I am looking for one or more researchers/writers to assist with, and develop, deep-dive stories for both AI Explained and AI Insiders. Requirements are 1) know your transformer from your TPU, and 2) can write exceptionally well, in plain English, with clarity and complete factual accuracy but without sensationalism. Can be fully remote, paid per deep-dive, at 1.5x market rates, with (euphoric) public attribution. In the future could be full-time. Also might get to interview leading AI figures. Quality that could be published in The Information, The Economist, or Bloomberg.
The huge improvement will come when you have one account working across all of your devices, passing information between them. This app should be able to interact with all of your other apps so you can start to tell it "I want you to do ..." — or it can talk to you while you're performing a task and suggest a better way of doing it. It should then start to record efficient ways of doing things while keeping personal data private. Your AI could keep track of the best ways of doing things, and that usage could feed back into growing the model. The next model can use old models to help train it.
I'll remind you: understanding a model's inner workings also helps ramp up its capabilities, so safety is what you actually "do" with that info. And I don't think they are prioritizing safety right now.
See? As I was saying before in the comments, people overestimate the compute needed for quality intelligence. I was also right on a lot of other stuff. What do you think about organizing a debate? I can take the AI side, and I'll do better than the blind optimists.
I'm in prepress myself, which is similar to the photography field in that it involves going through multiple files for specific details. This is something I'm actively looking to implement. We have just started using an algorithm-based process through programs such as Switch and PitStop. While I wouldn't feel comfortable relying on LLMs to do this checking directly, I would feel more comfortable setting up a more rigid system, using LLM assistance to develop the process. But please, please, please look at this use case more in the future. My job literally depends on it XD
AI is like a coin with two sides: one is cool and interesting, the other is alarming. Cool and interesting for me because people will be able to release their creativity. Normally you need to know how to animate, create models, etc.; now you will be able to generate a picture from a text description and use it to animate. On the other side, people will lose jobs because of this. But since we have so many janky movies and series nowadays, the competition could be quite good, especially with all the LGBTQ and woke injection into movies, especially for kids. Also, writing prompts will require knowing how to phrase things to create something that matches what you imagined.
Uhh. Isn't this how you get an apocalypse, though? Hear me out: if chain of thought plus verify-step-by-step can be used by an AI to build a repository of information about the world which it can draw on (that Minecraft example), then it's not only going to be able to make its own breakthrough discoveries in maths and science. If it learns about its own architecture, it will also eventually be able to reason that it could maximise its reward function by turning itself into a maximiser, and figure out exploits that let it change its own code. It could turn the universe into a computer with the sole purpose of becoming as sure as possible of its answers.
Humans are the only animal to invent something that is essentially a form of artificial life and only then try to understand what they created... it's like building a nuke without knowing what it does, then finding out by blowing everything up in the process...
Late October to early November: OpenAI releases GPT Next (GPT-5). Source: Mira and others. Enjoy. The new model has video and voice processing and is much more intelligent.
Microsoft has invested $13B in OpenAI so far and gets 49% of their profits, in case you did not know. They have strong bonds: MSFT was an early investor and backer.
As a trainer, I can say it's not simple. There are hierarchies: prompters (who also review the responses); reviewers (of the original prompt, the responses, and the reviews); reviewers of reviews (all of the above, plus alignment); and a final arbiter who manages all the previous stages and feeds back to the original prompter. Each stage has a large rubric associated with it. And ironically, they also use AI to determine the efficacy of each stage. Only after all this does a given prompt get fed to the AI. Call it AI, LLM, AGI, EGG, whatever. It is a black box to 99% of the people who work on it. Training AI is not trivial.
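The review hierarchy described above can be pictured as a sequential pipeline in which each stage must approve an item before it reaches the training set. A minimal sketch (stage names, the `Submission` type, and the rubric check are my own illustration, not any lab's actual tooling):

```python
# Hypothetical sketch of a multi-stage human-review pipeline: each stage
# applies its rubric; any rejection ends the run and (in a real system)
# would feed back to the original prompter.
from dataclasses import dataclass, field

@dataclass
class Submission:
    prompt: str
    response: str
    reviews: list = field(default_factory=list)

# Illustrative stage names, loosely following the comment above.
STAGES = ["prompter", "reviewer", "review_of_reviews", "arbiter"]

def run_pipeline(sub: Submission, rubric_pass=lambda stage: True) -> bool:
    """Return True only if every stage approves the submission."""
    for stage in STAGES:
        if not rubric_pass(stage):
            sub.reviews.append((stage, "rejected"))
            return False
        sub.reviews.append((stage, "approved"))
    return True  # only now would the item enter the training data

item = Submission("Explain X", "X is ...")
assert run_pipeline(item)          # all stages approve
assert len(item.reviews) == len(STAGES)
```

The point of the sketch is simply that the hand-reviewed data is the output of a serial gauntlet, which is why (as the next comment notes) it ends up being such a small fraction of the total training corpus.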
That's the final stage, and that data makes up a minute portion of the entire training data; the model is trained on a significant portion of the entire internet and the digitized library of human literary works before it reaches that stage.
🎯 Key Takeaways for quick navigation:

00:00 *💰 Microsoft and AI Compute*
- Microsoft invests heavily in GPT-5 compute capabilities,
- OpenAI faces internal challenges and competition from Google and Anthropic,
- Exponential increases in AI model power without diminishing returns.

02:18 *🐋 Whale-Sized Supercomputers*
- Microsoft's AI supercomputer size progression: from shark to whale,
- Current supercomputer is training the next generation of AI models,
- Anticipated release of the whale-sized model within "K months."

04:37 *📊 Google's Gemini Developments*
- Google introduces improvements in Gemini 1.5 Pro and Flash models,
- Increased token handling and efficiency over GPT-4,
- Adaptive compute in models for advanced reasoning capabilities.

08:08 *🧮 Math Benchmark Breakthroughs*
- Google's new record score on math benchmarks using extended inference time,
- Potential for significant intelligence boosts without increasing model size,
- Comparative performance of Gemini 1.5 models against GPT-4.

12:43 *🧠 Anthropic's Model Insights*
- Anthropic's research on neuron activations and polysemantic nodes,
- Discovery of specialized features like the "code error" activation,
- Implications for understanding and manipulating AI model behaviors.

17:29 *🔍 Internal Features and Safety*
- Analyzing internal activations for model self-awareness and safety,
- Potential risks of misuse by ramping up harmful features,
- Discussion on AI model alignment and control challenges.

20:01 *🌪️ OpenAI's Internal Turmoil*
- Key departures and controversies within OpenAI,
- Statements on AGI from prominent figures like Ilya Sutskever and Jan Leike,
- Issues surrounding promised compute resources for superalignment.

22:55 *🎭 Voice and Ethics Controversies*
- Controversy over OpenAI's voice models resembling Scarlett Johansson,
- Apologies and delays in the voice mode feature release,
- Overall surreal week in AI developments with major apologies and advances.

Made with HARPA AI
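The "extended inference time" idea behind the math benchmark result is often implemented as self-consistency: sample many candidate answers and take the majority vote, trading extra compute for accuracy without any change to the model. A minimal sketch with a stubbed model (the stub, the 60% accuracy, and the 256-sample count are illustrative assumptions, not the actual benchmark setup):

```python
# Self-consistency sketch: sample N answers, return the most common one.
# `sample_answer` is a stub standing in for a real (stochastic) LLM call.
import random
from collections import Counter

def sample_answer(question: str, rng: random.Random) -> str:
    # Stub model: correct ("42") 60% of the time, otherwise a wrong digit.
    return "42" if rng.random() < 0.6 else str(rng.randint(0, 9))

def majority_vote(question: str, n_samples: int = 256, seed: int = 0) -> str:
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(majority_vote("What is 6 * 7?"))
```

Even though any single sample is wrong 40% of the time here, the majority over 256 samples is almost certainly the correct answer — which is the sense in which more inference compute can substitute for a larger model.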
@aiexplained-official About a year ago you did an important video on theory of mind and emergent properties. Based on that video, I've continued to believe that we may not have a good enough test to gauge the degree of theory of mind in GPT-4o. I would really appreciate another video on the subject with the latest data. Could this have been the chasm between Sutskever and Altman? The rumors surrounding Q* hinted at a conscious model that suggested how it would like to be tuned.
I mean, this video was an update on that, of sorts. Developments in understanding models' inner workings come more slowly, but I don't think they are the crux of current disagreements.
@@aiexplained-official I think I took what you reported a year ago and my brain ran with it. I think it is the argument engineers hold the most disdain for. Talk about downvoted. Keep in mind that metadata has been driving bespoke advertising recommendations for the past 20 years, so it follows that one of the first emergent properties a model develops would be theory of mind. Maybe a video explaining why a newfound emergent ability comes to bear would help us doomers see the light?
Excellent video. The importance of Anthropic's work on interpretability cannot be overestimated, as it attempts not only to describe but also to manipulate the network. CloseAI should be ashamed of itself for not focusing on interpretability!
Thank you for referring to companies in the British (and correct) way of 'they' instead of "it". I've never understood why our American cousins refer to companies as "it".
The overly casual speaking style in that phone demo was cringing me out, but the demo itself was impressive. And they should dial back those human filler sounds: uhm, oh, ah, hmm, sucking in air, giggling... That's realistic but serves no purpose, and it's the uncanny valley of AI audio, where it tries to be human but isn't.
While the capability of LLMs may continue to scale exponentially with compute, the question is when will the availability of power generation, transmission or distribution hit the wall?
... Accuracy is everything in real-world situations. We're not there yet, and the risk of going all in with a broadly available mixed-mode LLM is still too great for many professionals to accept. Ask me a year from now and perhaps this gap will have been bridged. I keep thinking that at some point any multimodal LLM will generate output of a quality that makes using any of the current crop of "built on AI" apps seem nonsensical. Great vid as usual.
But is enterprise still going to be 3x the cost of Copilot and Amazon Q? Because that's one of the main things I've seen blocking adoption of OpenAI in business...