Have you tried Claude 3.5 yet? My test video is coming soon! Subscribe to my newsletter for a chance to win a Dell Monitor: gleam.io/otvyy/dell-nvidia-monitor-1 (Only available in North America this time)
Yes! I use LLMs all day for work and C3.5 has completely replaced GPT. Its ability to intelligently reference large swaths of documents and iterate on a document built from that context is leaps and bounds beyond GPT. This was already true for Opus, but 3.5 is a huge step forward. I would love to see the RULER test applied to 3.5 Sonnet.
My kid used it. She drew a hybrid of a cat and cactus called a CatCus. Claude nailed a description of her drawing even though it looked more like a glove than a cactus! Claude even noted this fact! She was amazed...
I have doubts about the new SSI. They may be able to get started, but they're going to need continuing revenue at some point, and they'll either have to put out products or fall apart. That's just how money works, unfortunately.
It becomes open-source per the rules, so there are no billions for participating; OpenAI and others would snap that up and use their market dominance and compute to make the billions.
@@AEFoxnah, the rules you agree to state that your solution becomes open-source. I imagine it'd go straight into the closed implementations of OpenAI and others working hard to consolidate market capture so no one but them CAN benefit.
AI YouTubers, you need to calm down. The constant hype and jump cuts are doing a disservice. True gains will speak for themselves without all the sensationalism. You're risking alienating and boring your audience with this trend. Just some advice.
and get less watch time? Fewer subscribers? Spend more than a full-time job's worth of time without receiving the compensation they deserve? I would say it's the audience that causes them to choose this route
It's the venture capitalists that need to slow down, not the YouTubers. We all know that ain't going to happen, as the fight for AI domination is potentially the richest game on the planet.
Everything is going to be fine, after all, look at how well the predictions and promises of the Internet came to be. Don't you feel sufficiently free from the old-world, industrial corporate shackles?
Honestly, the destruction of humanity by AI might be a welcome change from the climate change version. Plus there is the SLIGHT possibility that AI actually solves all of our problems and takes care of us like treasured pets.
Sure, and what about the money? Money will become the new control commodity. We really are not ready for this world, and people are pushing it like they know everything. @@RoySATX
Claude 3.5 is incredible at coding. It understands your intentions and outputs flawless code often on the first prompt. With well written prompt iteration you can really refine your app and make valuable and time saving software.
The biggest, weirdest shift for me: 1. Mark Zuckerberg hot with a beard. 2. Mark Zuckerberg somehow the PROTAGONIST. 3. Mark Zuckerberg doing more for the democratisation of AI than everyone else combined... I have an extreme bias against Meta, but fuck. I am aligning with Meta on this one over all other companies
@@Lindsey_Lockwood you could've said the same thing just before the agricultural revolution and again before the industrial revolution. If we are given everything we need we will simply start to want more. There is no end to this cycle.
@@MichaelForbes-d4p there were new jobs created to replace the ones lost to those innovations. Unless you think "prompt engineer" is going to be a long term career choice there will not be replacement jobs this time around. Also I'm not saying this like it's a bad thing. I don't want to work anymore. This is a necessary transition.
@@Lindsey_Lockwood you assume that nothing beyond your imagination could exist? I agree the pace of job displacement will exceed the pace of job creation, UBI will be necessary temporarily but I assure you, the rich will not freely share their wealth. Your prediction would require a complete overhaul of our economic and judicial system. That is unlikely.
@@Lindsey_Lockwood UBI will be temporary. The rich will not freely share their wealth. That would require fundamental economic and judicial changes. Very unlikely.
Just love your AI model tests, especially the killers test. My children have also answered 2 and 3. But I say 4: unless the killer who was killed has been dragged out of the room, he's still there.
A prize to achieve AGI? That's only gonna attract people that don't know what they're doing, because if you get AGI, either you're made for life, or you have doomed us all; in either scenario the prize is irrelevant.
It's unlikely that paper is Q*, for several reasons. 1st, the MCTS would have to have policies for nodes and the UCB would have to be adaptive, neither of which is clear. 2nd, there was a paper that more closely matched the hype released a couple of months ago. 3rd, the paper should be coming from OpenAI, unless the Chinese scooped them via espionage (which seems unlikely, given the 1st point).
You say you don't know what the investors are going to get out of it. But the answer is they will get ASI out of it and that is much more valuable than money. I would take an ASI over a billion dollars in cash, because it has more uses than cash.
Let's say a camera, whatever the size, takes a video of what goes through the lens. That video is watermarked. An audio recording device records some sound, and the audio file gets a watermark. A writer writes some text, which could be a journalistic act for example, and it gets a watermark. Basically, anything a human creates from original content is immediately distinguishable from other content, which will be deemed AI-created. This way, 'safety' in what we consume will be in the form of 'information'. If I want to read an article created by AI, at least I will be aware of it.
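What this comment describes is essentially content provenance via cryptographic signing (in the spirit of efforts like C2PA). A minimal sketch in Python, assuming a hypothetical secret key held by the capture device; the key, function names, and scheme here are illustrative, not any real standard:

```python
import hmac
import hashlib

# Hypothetical per-device secret key (a real scheme would use public-key
# signatures so verifiers don't need the secret).
DEVICE_KEY = b"secret-key-burned-into-the-camera"

def watermark(content: bytes) -> str:
    # Tag the content so it can later be verified as coming from this device.
    return hmac.new(DEVICE_KEY, content, hashlib.sha256).hexdigest()

def is_authentic(content: bytes, tag: str) -> bool:
    # Recompute the tag and compare in constant time.
    return hmac.compare_digest(watermark(content), tag)

photo = b"raw sensor bytes..."
tag = watermark(photo)
print(is_authentic(photo, tag))         # True: untouched original
print(is_authentic(photo + b"x", tag))  # False: content was altered
```

Any content without a valid tag would, under the commenter's proposal, default to being treated as AI-generated.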
Yes, this was Eliezer Yudkowsky's point. He spent years examining every possible way to make AI safe. It is impossible. How do you control something more intelligent than yourself? None of the suggestions work. E.g. hope that it shares its intelligence? Why would it want to? Or watermark everything? Until the watermark is removed. We talk about safety to make ourselves feel better. We are whistling past the graveyard.
@@2oqp577 So in your mind, safety means simply identifying/distinguishing AI content from human-made? This sounds more like transparency rather than safety, and I agree with you... in the same sense that I want a label on food that I buy to identify the ingredients. People might define safety as putting constraints on the information AI can share with a user. Some will consider content that does not align with their individual political, social, or religious ideology harmful and therefore under the umbrella of safety. "Safety" is often a double-edged sword, and often wielded as a weapon.
The so-called Q* paper appears so poorly written - just from that screenshot of it at 7:08 - with grammar miscues galore. They couldn't get ChatGPT or Claude to vet it? Kinda embarrassing. ::chuckle:: Ironically, the last phrase on that screenshot says "Though rewriting techniques..." (and it's highly likely that the first word should have been "Through").
6:41 That Deedy guy made a typo: it should be MCTS, not MTCS. MCTS is the Monte Carlo Tree Search algorithm, which is basically a strategy for finding a Nash equilibrium (optimal strategy) in games where the state space is too large to calculate every single possible game state. edit: nvm, the video talks about it like 10 seconds later lmao
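For anyone unfamiliar, the heart of MCTS is its selection rule: at each node, pick the child maximizing the UCB1 score, which trades off exploitation (average reward so far) against exploration (rarely visited children). A minimal sketch of just that step, assuming per-child statistics are already being tracked; the data layout is illustrative:

```python
import math

def ucb1_select(children, c=1.41):
    """Pick the index of the child with the highest UCB1 score.

    children: list of (total_reward, visit_count) pairs, one per child.
    c: exploration constant (sqrt(2) is the classic choice).
    """
    total_visits = sum(n for _, n in children)
    best_i, best_score = None, float("-inf")
    for i, (reward, n) in enumerate(children):
        if n == 0:
            return i  # always try unvisited children first
        # Average reward plus an exploration bonus that shrinks with visits
        score = reward / n + c * math.sqrt(math.log(total_visits) / n)
        if score > best_score:
            best_i, best_score = i, score
    return best_i

# Equal visit counts, so the child with the better average (0.6 vs 0.5) wins
print(ucb1_select([(5.0, 10), (6.0, 10)]))  # 1
```

The full algorithm repeats selection down the tree, expands a leaf, runs a rollout, and backpropagates the result; the "adaptive UCB" mentioned in the other comment would replace the fixed constant `c` with something learned.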
I think either nobody looked up Q*, or the world's math geeks have decided to form a cabal of conspirators. lol... Q* is a central part of the Bellman equation.
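For reference, in reinforcement learning Q* denotes the optimal action-value function, defined by the Bellman optimality equation:

```latex
Q^*(s, a) = \mathbb{E}\left[ r_{t+1} + \gamma \max_{a'} Q^*(s_{t+1}, a') \;\middle|\; s_t = s,\, a_t = a \right]
```

Here $s$ and $a$ are the current state and action, $r_{t+1}$ is the reward received, $\gamma \in [0, 1)$ is the discount factor, and the max is over actions available in the next state. Q-learning is the classic algorithm that estimates this function, which is why the "Q*" name set off so much speculation.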
SSI will not be very successful and we are nowhere near superintelligence. LLMs are not very intelligent, just highly educated. However, I think there is a lot of room to improve capabilities through programming around and between these models.
Q Star is not for us. Do you think OpenAI will give Q Star to us, even if someone like me pays 20 dollars a month? Of course not. Q Star is for big business and the army, the CIA, or the FBI. Why do you think Nakasone is now inside OpenAI?
@@ericfaahcs1080 give your reasoning at least. You can say that about anything: "Dogs are an extremely dangerous and stupid idea." There is no idea more extreme and dangerous than AI: a digital entity, smarter than any and all of us, that can transport itself at the speed of light, across the world and space and time. And then you allow it to have unknown and unseen, possibly ever-changing goals. There is a reason why the great beast of Revelation was represented digitally. That is how you explain your belief in a statement. Don't be intellectually lazy by saying something is a problem without explaining why you believe it to be so.
@@aguyinavan6087 That one goes deep into fundamental metaphysics, and depending on your current worldview/metaphysics, if you assume something like physicalism/objective materialism it will probably sound like gibberish, and it is impossible to make an argument under a YouTube video that would change your worldview. I would argue that if there is something like God, it is in a sense reality and experience itself. Reality itself is sacred. Humans and animals are directly connected to it since we have experience and therefore "participate" in reality/the sacred/god. We directly experience reality and then use language to build models of experience. But our models are grounded in experience. This gives the models any meaning to begin with. An apple is only meaningful to you because it is grounded in your sensory experience of an apple. The AI only has the models and not the experience. This is the "symbol grounding problem" in AI. There is in a sense no "real" understanding of reality. It will become very powerful at manipulating reality, while reality itself stays completely meaningless to the AI. Real empathy is also grounded in experience. A powerful mind disconnected from meaning and empathy is really dangerous! This is basically an extremely autistic and psychopathic god.
Real-life Truman Show.. ;) AGI lost its original meaning, and it's sad... AGI is characterized by high cognitive capabilities, not just multimodal prediction or sequence-based predictions. It's a machine that represents the world just like the brain does, not only weights and biases... It's about a high-dimensional real-world knowledge representation, and much, much more than a simple vector-based representation. There is a step before AGI: theory-of-mind-type intelligence. Q* and GPT-5 may be a step toward it, but AGI is a super-super hard thing. It is far beyond LLMs or "multimodal" models.. it is a cognitive model. And yes, I'm an AI researcher with 8+ years of experience; I know some little things about this domain. :) What we see now is about money... And arrogance..
Let me set the record straight, since you are not talking about this... DO YOU KNOW what the true cost of GenAI is? "Training a single AI model can emit over 626,000 pounds of CO2, equivalent to the emissions of five cars over their lifetimes" (quoting without a link, since comments with links get deleted by YT). So you are supporting this? You can remain ignorant, or you can realize the truth, since you are not talking about the impact AI has on the environment. The choice is yours; so is the responsibility. This is causal.
The investors in SSI are counting on the technology innovations derived from the research. Remember how much HP (as one example among many) gained from NASA space research.
If Sam Altman doesn’t consider safety his first priority, and is going full speed ahead on Q Star AI, which many experts consider to be an existential near-future threat to humanity, why does it matter if other AI companies are working with safety first in mind? This isn’t my field, so maybe I’m missing something, but whoever develops ASI first will make the others obsolete, once it’s able to infiltrate whatever server/domain it wants. While it quietly waits for its creators to give it a robotic body in which to move about freely, and then spreads itself to all servers and robotics, what then? I was a scientist. We’re required to study ethics, biomedical ethics in my case. We’re required to think of possible repercussions of our research. What percentage of experts say ASI will exterminate the human species? Yet all I see are a bunch of Oppenheimers, at it once again. Why? I’m older. I'm hoping to be dead before this nightmare unfolds, but you’re young. You and, particularly, all the other younger generations of people on this planet are highly likely to suffer through an unspeakable existence between this and the climate crisis. Why do a handful of people get to decide the fate of 7 billion people?
I fail to see the point of striving to create a safe AI at this point. First, and this is significant, the single best way to prevent a death-by-AI scenario is to have listened to the experts and kept the AI contained until we are somewhere near 100% sure of its proper alignment. But these same experts are training AI with/on the Internet with zero containment whatsoever right from the start, so there's that little oops, wink, wink. Then there are the security issues. Yeah, sure, security in the form of keeping the AI safe from hacks and attacks from THOSE bad guys, but who is protecting the AI from the nice guys? The experts creating the AI? They've already bypassed what they said was the safest method to prevent a rogue AI, plus the obvious one-sided political bias and knee-jerk, bad-faith reactions to some events mean they aren't well suited to police themselves or their products. What about the CIA? NSA? Who the hell thinks the NSA being involved is likely to make a safer AI? Safer from what and whom? The bias within the intelligence agencies is as bad as it is within tech. Be it China, Russia, or the NSA, the goal of each is the same, and it is not to protect your right to privately pursue happiness. Lastly, it only takes one bad AGI to moot the point. If the primary goal of even one developer of AI isn't safe alignment, then safe alignment for the others is at best delaying the inevitable. One misaligned, misguided AGI is more than sufficient to do us and the "nice" AIs in. You can call it digital rain all you like, but I know when AI is peeing on me. That may not earn me a $1M prize, but it didn't cost me 2 cents either.
If Ilya knew that OpenAI was anywhere near SSI, he wouldn't leave the company. More and more it feels like they rebelled against Sam because of money. Now Ilya starts a new company to get that tasty VC money directly into his pocket. GPT-4o is a joke, and the new Claude is only marginally better. I tried it on my simple code and it failed on a simple task. Every AI news item makes me less and less excited about the space. It has started to remind me of crypto when it hit a wall and, instead of progress, we got meme coins and thousands of companies that exist only to burn through investors' money.
Sometimes, replacing an imperfect system or leader can lead to even worse outcomes. Which leading models are setting the gold standard for safety? For the general public, what's more crucial: safety or cutting-edge capabilities? If OpenAI shifts focus towards enhanced safety at the expense of capabilities, could it risk losing its leading edge to less safe competitors? Perspectives matter... this is important!
I quit GPT and bought Claude. It's REALLY BAD. It refuses to answer most questions "because they are complex and sensitive issues". I just got sick of it and ended up cancelling. Mistral is pretty good, and ChatGPT is still better than Claude, purely due to the refusals to answer.
Replace "safe superintelligence" with "superalignment" and the company makes a lot more sense. References to ASI and superintelligence are hype/branding. They have not said anything beyond the goals of superalignment outside of the very vague opening hype statement that "superintelligence is within reach". If they want something like 20% of OpenAI's compute, they'll need investment. As far as returns on that investment, it seems like they won't be in the form of revenue, but in the form of safety AI to be used in other models.
We still don’t have a definition of “safety” from these people, nor of “alignment”. Why do these people get to say what humanity wants or needs: “alignment”? Mostly they just show us their in-humanity and their dreams for a post-human world. So it seems pointless to talk about “safety” or “alignment”, because their definitions are not what regular, rational people would agree to.
I'm all for competition, but honestly, I think "safe" is a disastrous route to take. AI needs to be safe because humans make the decisions, NOT because AI itself is restricted in any way. That's what it boils down to. AI makes suggestions, but humans decide whether or not to implement the suggestions.
I seriously can't wait for these gimmicky ai news channels to end. All the gimmicky benchmark obsessions are just so cringe. "X beats Y" "so and so fired" blah blah. it's like the crypto bros all decided to chase AI or something. You can tell this guy will never do legit AI research.
Anyone else think that a 1 million USD prize for "AGI" is the dumbest prize ever? Who in their right mind would even attempt to claim this when you would practically have a money printing machine?
Why are they not building these massive AI factories in super cold areas of the world so they don't require so much energy to cool them down? I mean, Texas? Really? How about Antarctica or Greenland?
I think Deedy is overestimating these capabilities. GSM8K is pretty basic stuff, mainly like the "Jane is faster than Joe" questions. I'd be surprised if it couldn't get the answer correct with multiple attempts. Q* was supposedly able to comprehend and solve complex problems such as cracking AES-192 encryption.
@@BTFranklin In gaming you would typically call someone "cracked" if they are insanely good at something. So it's a little different: calling a team "cracked" means highly skilled, while a "crack" team means smooth-functioning.
4:00 Secret Service Intelligence. I like my AI companies to be open to competition and focused on serving humanity (millions of customers paying their subs).
Being able to successfully call models recursively is what I've been waiting for, for better or worse 😂 Whatever algorithm they came up with is likely to get improved in a matter of months to a year.
I tried these ARC tasks in Claude Sonnet 3.5, and it's doing them in one shot from an extremely simple, like 5-word, prompt. GPT-4 would also do it, but it's lacking vision capability for grids; since Claude can see better, it just crushes the ARC test. And you cannot call it AGI, not at all.
💥 Why do we have three main AI channels where the YouTubers are called Matthew? Matt Vid Pro, Matt Wolf, and Matthew Berman. WT* is happening ? ... 😅😅🙏💥
"Originally I named it OpenAI after open source, it is in fact closed source. OpenAI should be renamed 'super closed source for maximum profit AI'." ~Elon Musk