Since Italian isn't so widely spoken around the world, I usually think people would go for other languages. I've been thinking about learning some Mandarin myself too... Good luck with your studies! ;)
@AnastasiInTech do you know if anyone is doing research in AI fusion? I.e., like sensor fusion: fusing the data from different AI datasets into one, such as fusing audio, video and others, perhaps also using it for self-identification/awareness.
Amazing presentation as usual - however, I cannot get my simple, non-tech mind around "oscillation" - meaning, the constant presentation, graphically or by devices, of a "signal" shown as a wave with a frequency - why not, e.g., that particle going in a straight line!? Meaning, to avoid the obvious, what makes any particle "go up and down"? Surely that particle being emitted is among millions, or even billions - while contained in a wire or a fibre-optic connection it is understandable that it oscillates against the 'barrier', but when, e.g., a match is lit, does light - a photon? - oscillate "up and down"? Or does it act in a spiral manner as it proceeds out? Or, at a subatomic level, just spin on an axis when 'excited' to pass its 'spin' to the next particle, etc.? An explanation would be too long to convey here of course, but I do have sleepless nights over it.....:-) Switching the bedside light bulb on..... given the near-spherical shape, where is the "oscillation"!!!??? :-)
@@JK-xx5ns Yes, we need systems that can think and reflect about their data to generate new, better data. It's more about the quality of data. Waiting for the AlphaGo reinforcement-learning effect. We have 70B models now that act as well as vanilla GPT-4 from mid-2022 with its over 1 trillion parameters. That means there is a lot of unused potential in the bigger models if we can compress the abilities so well. And that means the training data is currently likely of too low quality ;)
@@JK-xx5ns It makes it much cheaper, much less energy and storage space and hence a lot more accessible. The integration phase is when it actually starts to change society unless you're a giant corporation.
Also, context length went from 4K tokens to 35K, then to 130K, 1M, 2M and possibly 10M. In only 2 years. There was no image, video or audio generation in 2021. DALL-E 2 came out in April 2022. ChatGPT in November 2022. That's 2 years this November, and they just say that AI is dead. I want to see these people's monthly or yearly work - what they achieved in 1 year.
I appreciate how you not only highlight the excitement around AI but also provide a realistic view of the challenges ahead. Your deep understanding of both the technology and the market dynamics makes this one of the best analyses I’ve seen on the topic.
Calling everything AI "hype" is becoming an obvious projection of one's fears at this point. You should learn to interface with the planet for food soon.
@@GoodBaleadaMusic I'm guessing you're not a Gartner customer. Gartner puts out charts like that for most new tech. Hype to them doesn't mean it's bad, just that there's a mismatch between perception and capability. They've been at it for decades and are very useful if you want to plan IT infrastructure. Not sure if your last sentence was a threat or not, but I'll be just fine.
@@aldarrin OK, as long as you're arguing from that niche and not as the window lickers watching the world leave them behind. That's like 90% of the thumbs-up on your comment.
Ok Anastasi, I absolutely love you and FINALLY subscribed after being a devoted fan for a few years now. You work so hard, with all of the time put into your research, and provide such informative videos that are really valuable when analyzing and comparing the current state of technology to innovations in achieving breakthroughs. Nobody provides keen insight like you do, so I am grateful for your presence. You have a unique perspective, thinking differently in a world filled with so much hype. It's your natural curiosity that allows you to compare what you learn to what you know. You then strip away the biases to get to the meat of things: understanding how these emerging technologies can truly impact the lives of so many people in ways that can change them forever. You have a good heart, and so this all shines from a wholesome place within you. It is refreshing and is how you got me. You really care about the betterment of mankind and get really excited when things are truly groundbreaking. Plus, you have never said, not even once, the most annoying phrase people use in making announcements today... "I am...so...excited...to be...sharing...this...news...with...you." Yeah, I can't finish a video after hearing that insincere and highly annoying sentence. Hopefully it will be a trend in our tech industry presentations that QUICKLY fades away, since stealing from Apple is no longer innovative or the standard. That is totally not your style at all, as you come across with genuine intention and explain things in a way where we all can understand them. Plus, you are going for your MBA in Italy - I am so impressed! I was raised in the wine industry in Texas, still at it so many years later because I love science and psychology; how science can affect our senses and change the psychology of how we think.
How we can survive impossible odds, treacherous environments with patience and ultimately innovation, bringing us through struggle to the breakthrough on the other side and bringing people together along the way. I loved my travels through Italy, the wine, the people and always, even more, the science! My inner nerd keeps me curious much like yourself. You are beautiful and amazing. I truly value you and what you do. Thank you! I look forward to getting my first notification! Your friend...James :-)
The nature of AI is that it develops rapidly. The nature of business is that generational releases are slow. AI may get burdened with a repetitive hype cycle where there's seemingly very little development for extended periods, during which people get bored and say it was just a fad, followed by a massive spike in capabilities which causes a huge rush of interest, only for another year or two to pass with relatively little happening, and people start saying it's dead again. Rinse, repeat.
It is important to separate the AI hype cycle from the ability to monetize AI. Palantir is a good example of a structure that uses an ontology to manage multiple AI agents (K-LLM) while producing value for their clients. Also, imitation AI will be the norm while AGI works its way through over the coming years. It's an exciting time to see it unfold in front of our eyes.
Hi, Anastasia! Thank you for your effort in learning Italian, as well as for popularizing technology topics. What a delight to hear you speak the language of Dante - beautiful and intelligent woman of action!...
I think the last quarter of this year will prove whether the hype is real or not. But there is no doubt that this tech will revolutionize the future; the question is how big of a leap and how fast.
Thanks Anastasi, I wonder if we are missing the iceberg below the water. AI is still being addressed as software rather than as an intelligent interface between entities, knowledge, and, in many cases, the "machines" that manipulate the environment. For efficiency, AI will eventually interface with many machines at the "hardware" level (the equivalent of machine code). In doing so, AI will transform the machines and the code that runs them. For the majority of the population, that transformation of machines may become the most visible product of the evolution of AI. After all, for most of the population machines are black boxes, and AI and the processes that control both AI and machines are "magic". Written in my Model Y. It just drove me to 3 destinations without human intervention and now will drive me home.
Awesome video. Important to point out, as you said, that this "hype" cycle is opinion-based, and although it makes sense in a lot of cases, the graph is not necessarily to scale. By that I mean the difference between the hype peak and the valley of disappointment can be less steep than in that image; it can also have a shorter duration than for many other technologies, and so on. Also important to point out that some human technologies never went through this hype cycle, like electricity, the internet and penicillin. It's hard to predict if AI is THAT disruptive, although it seems we as humanity are not even ready for its benefits, so probably not.
Good points. Also, unlike a lot of other technologies plotted on that hype cycle chart, AI has a lot of linked technologies that only recently got attention / serious funding, such as custom hardware. Those other technologies could potentially amplify the progress of AI on top of whatever progress is made within AI itself.
Loved the content, thank you! I think the "wow" factor would return if it was used to improve people's lives in some way. I believe that's just a matter of time.
@@MeowtualRealityGamecat Absolutely, AR/VR is high on my list too. That will be improving a lot in the next 5 years, both with making the hardware smaller and enhancing the software with AI implementation.
AI and medicine is one combination I think will be outstanding too. Can you imagine the market for a pill that would change your hair color? And that is just a very minor aspect of the combination.
As an electronics engineer working in industrial robotics, I don't think it's the "next" big thing. It is absolutely going to be a huge thing, but I think the hardware will severely limit its applications for a bit. When it occurs I think it will be world-changing, so depending on how you define "big" it may qualify. I think the next big thing will be AI applications in materials science. Materials limit us in so many applications, and often one type of material fuels civilization for decades. Think of how first steel, then plastics, and now carbon fiber all evolved from one material with nice properties to many variations for actual applications. Our current AI models are well equipped to be modified into tools that tackle the complexities and huge number of possibilities that exist in materials science. AI sorts through the junk possibilities very well, leaving mostly quality results with a few hallucinations to be weeded out. I think we are close to a point where Company X needs a material that maximizes properties A, B, C. Hop into MatsGPT, and boom! You've got a custom proprietary material for your application. Proprietary materials affect many applications that directly drive advancement in many current industries, which I think is the quality most missing from current AI applications. Although robots have that quality, which certainly makes them a contender, I just think there are some practical roadblocks that will slow them down significantly. (Motors and power density being two.)
I am so excited that I came across your channel, especially this video. You backed up what I have been saying for the past few months to people I know... Everyone is freaking out about AI taking over the planet, and that soon Cyberdyne Corp will produce the Terminators and convergence will happen and blah blah blah... but as I watched LLMs taper a bit over the last quarter, I began to wonder whether they CAN'T be the way, and whether hardware MUST evolve again - but not through ASICs, maybe CISC, but definitely a tech that works partly with current AI and something like the "FOLDING" tech projects for more systematic learning in conjunction with the hardware.... Anyway, thank you for this video. I shared it on my FB page; it was a great lesson. Good luck with your Italian lessons.
AI improvements will mostly come from building better agents and larger context windows. We need to handle context assembly much better. If our agents can do that, and we capture how people learn to use them, machines can create synthetic training data from in-context learning.
That one graph mentioned increased compute since GPT-4, but I don't think any of the models since then were actually larger. This is what makes people think it's plateaued, but nobody seems to have yet dared to spend >1B on a single training run. THAT is when we see whether we have plateaued in any meaningful way.
Absolutely correct. This is why this analysis of hers is flawed. There isn't anything to compare GPT-4 to yet, because none of the AI companies have come out with their next-gen models yet. OpenAI specifically said they were going to wait a while to start work on 5 after releasing 4; they only recently stated that they are working on 5 now. All of the other companies have come out with new models, but they were playing catch-up to GPT-4 and only very recently (weeks, in some cases) caught up to OpenAI. So we haven't seen a single GPT-5-class model yet trained using all that newly purchased expensive hardware. Until we see that, no one can say with any authority whether progress has plateaued or looks exponential.
No one builds an order-of-magnitude larger model because there is no hardware to train them - at least no hardware that is economically practical. The plateau is caused by hardware limits.
@@kazedcat Close, but not entirely correct. Getting enough hardware for significantly larger models has been a problem. But that is where all that investment cash has gone: they have been building new data centers to train the next-gen models. For example, xAI's new training data center just came online. It takes time to build new data centers, and then it takes time to put them to use training the new models. It is in progress. Both OpenAI and xAI have stated recently that they are now working on training next-gen models.
@@Me__Myself__and__I It is not enough if they want an LLM with tree search. To train that kind of LLM they need 1000x more compute. Yes, they are spending a lot of money on new hardware, but doubling your compute only gives you an AI with 10% lower perplexity, and without the planning abilities that need a tree-search architecture.
@@kazedcat The tree search you mention would primarily be an inference-time technique. It would not increase the amount of training data needed. If they implement the techniques outlined in "Exponentially Faster Language Modeling", selective attention, and use ASICs, they get a 100-300x speedup, O(n log n) context complexity, and a 20x speedup, respectively.
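The diminishing-returns claim earlier in this thread (doubling compute for roughly 10% lower perplexity) can be sanity-checked with a quick back-of-the-envelope power law. The functional form and the numbers below are my assumption for illustration, not something stated by the commenters:

```python
import math

# If doubling compute C lowers perplexity by 10%, that is consistent
# with a power law: perplexity ~ C**(-alpha), where one doubling
# multiplies perplexity by 0.9.
alpha = -math.log(0.9) / math.log(2.0)
print(round(alpha, 3))  # 0.152

# Under that same power law, the compute multiplier needed to HALVE
# perplexity (solve 0.9**k = 0.5 for k doublings, then take 2**k):
compute_factor = 2.0 ** (math.log(0.5) / math.log(0.9))
print(round(compute_factor, 1))  # roughly 95.6
```

If the 10%-per-doubling figure holds, halving perplexity costs nearly two orders of magnitude more compute, which lines up with the hardware-limits argument made in the replies above.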
For myself, I continue to be amazed. Looking just at image generation, models like Flux, and video creation, are now consistently nearly photorealistic. It's mind-blowing; I am more excited by the day, waiting for the next iterations to drop.
One area where AI really benefits humanity is in speech production, by deciphering electrical patterns on the surface of the patient's brain. When I first saw this on television, it was simply amazing.
There are some pretty strong points to argue against your thesis. The truth is no one really knows yet. The next frontier models in 2025 will clarify everything
I don't know how they made these estimations, but they are surely wrong. Even if we discard any future improvements, the state-of-the-art LLMs will massively disrupt the industry. I use them every day and I am maybe ~3x more productive (as a software engineer). I also do graphic design; the improvements in productivity there are even bigger. The AI only needs to get marginally better to replace most jobs. This improvement will probably be achieved through new algorithmic improvements - no extra data or compute needed.
As someone working for an IT service provider, I've seen near-zero impact. They aren't nearly good enough (or trustworthy enough) yet to take over helpdesk functionality, and they fail to adequately solve the problems that see me struggling in 2nd/3rd-level support. Answers to complex problems tend to be a hallucinogenic mix of correct information and various levels of wrong.
@@Steamrick The average adoption cycle is 7 years, also current AI is obviously not a human level, but it doesn't have to be. The thing is it's not good enough until it is, it's like a crossing a line. It will one day suddenly be good enough and it doesn't have to get that much better. I wasn't using LLMs at all just few months ago, because it wasn't good enough for me, until it was.
@@drednac I've got mixed feelings. ChatGPT does help me a lot with programming, solving problems, or explaining various IT terms and concepts. But AI is still nowhere near creating a fully working program without a human having extensive programming knowledge.
Ms. Anastasi, this is the episode that caused me to click "subscribe", because you are recognizing things that are outside the general rush to judgment in this field, and touched on several important issues in ways I think accurately reflect reality. Evolutionary processes took millions of years to experimentally configure many kinds of biological intelligences. The custom structure of each, with slight variations in each individual within "species" is highly complex with 100s, possibly millions of substructures to allow those organisms, even including humans with our broad capabilities, to exercise what we call "general intelligence". You can't just stack 2,4,8...or even thousands of brains together to get a "super" intelligence. No more than putting 10,000 scientists in a room gives exponentially faster results compared to 100. Successful methods will involve painstaking development which won't be exponential. That said, there are capabilities that can grow exponentially with compute power -- if indeed Moore's law is even continuing to accurately model computing trends. Destructive uses of AI can often expand exponentially because they don't have the constraints of living beings that they need to succeed and reproduce. This is especially true when fed with exponentially growing personal data pools. But AI that people actually desire will have limits as above. Constructing useful AI models will be a hard fought process, dependent very much on human thought. And that will depend upon a huge number of different internal structures to implement different aspects matching or exceeding human capabilities. The models suggested here recognize the limitations on growth of AI, and the human psychological response when we see this reality.
Thanks, Anastasi. Very interesting. There are areas of additive manufacturing that have been through this, and some are deep in despair. Our company works on what we hope are some enlightened uses for productive ends in one or two areas.
When I started using AI/LLMs, I almost did not sleep for a week until I understood how they worked... now it's been more than a year since I stopped using them, and I don't miss them!
I think scaling should be logarithmic: it's like adding one more abstraction layer to understanding - you need x times more data, as each higher layer needs more variety.
*THE PROBLEM FOR AI IS THIS EQUATION* ∆AiP --> 0 as $ --> ∞. The change in AI performance tends towards zero as the cost tends towards infinity - the exact opposite of what the tech bros claim - and it's Cambridge University saying this, not me.
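The comment's shorthand can be set in standard limit notation (the symbols are the commenter's; P_AI for "AI performance" is my relabeling):

```latex
% Diminishing returns: the marginal gain in AI performance
% vanishes as spending grows without bound.
\lim_{\$ \to \infty} \frac{\Delta P_{\mathrm{AI}}}{\Delta \$} = 0
```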
You apparently don't understand what that function says. It applies to anything you put money into. The change in progress as you dump money in will always go to zero - you are chasing perfection. How much faster does a car get when you put the first 10k in? The second 10k? The third 10k? At what point are you pushing money toward infinity for zero growth in car performance? Now you understand. Understand things before ya quote 'em, friend.
You are right: when it comes to generating texts, images or films, only the quality needs to be improved, e.g. fewer hallucinations. But there is one exception that only a few people appreciate: Tesla. There, AI is used to control robots: camera images in, steering commands out. The AI learns from video clips - first with robots on 4 wheels, but they are also working on building this technology into humanoid robots called Optimus. They have already come a long way on 4 wheels (search for FSD, Full Self-Driving, on YouTube) and be amazed at what they can already do. They have reached the level of a driving-school exam candidate. Transferring this technology to humanoid robots will still be a lot of work (letting the AI watch many other videos), but we already know from their cars that it will definitely work. So here there is still a considerable development trajectory, with major important results to be expected. Only with regard to Tesla do I disagree with your video.
The reason Apple dumped Intel chips in favor of creating their own CPUs is that Intel wanted to create a more generalized CPU for the Windows customer base, and Apple needed more specialized operations. So Apple said no thank you, we'll make our own. As a result, Apple's chips are ten times more efficient and much more powerful than the Intel CPU solution. I can see why chip makers want to create specialized chips to handle specific algorithms. This could potentially reduce the number of servers compared to a generalized GPU farm such as Nvidia's H100 servers. Great show, by the way - I always enjoy hearing your observations.
LLMs are, unsurprisingly, fluent in most computer languages (computer languages have a vocabulary of about 32 keywords and fixed semantics, so they are rather easy to encode). I use LLMs all the time to generate code snippets, much like I used to search for snippets. LLMs are a much more efficient way to "search", since they can basically do text-to-code (based on the enormous corpus of source code available); it's as if they've already searched for everything, so you can just ask.
Pro tip: you can train a voice AI with your voice, run it in your preferred language like Italian, and compare it with your own accent in that language. It can help increase proficiency 🤓
My programming productivity has increased by at least 20%. The progress is going very fast if we measure by the number of errors, which keeps dropping by 20 percent every 6 months. We need sprite-based animations, Cubase-style composing, AI synth architectures and other tools, audio enhancers for 1920s and 1950s recordings, free video upscalers, psychologist and doctor AIs, baby AIs learning with 5 senses, agriculture AI, chemistry AI - and they will come along.
About the AI scaling law, I totally agree with your viewpoint. More data means we need to invest in more advanced hardware to process it, which in turn consumes more energy. It's like a chain reaction through all the supply-chain industries.
Thanks Anastasi for your insights. I think we are quickly approaching the nexus between computer capability and functionality. We need to enter the application phase more. It might be like finally getting the best chef's stove ever made, the one you always wanted - however, you might only know how to make an omelet. Your application skills need to pick up speed to fully understand and appreciate the new stove. I have heard that some businesses are augmenting their management staff with AI. Good for them.
Ty for your content. I liken AI development to the electronic switch. First was the vacuum tube. Then we developed germanium and, shortly after, silicon transistors. And finally the microprocessor. The first stage's growth was limited by the physical limitations of the tubes - namely, their failure rate. Once there were enough tubes, you could always expect a faulty tube at almost any given moment, so scaling became impossible for the early vacuum-tube computers. There was a hard limit. Microprocessors suffer from an upper limit too - not one of failure, but of power consumption and the high heat dissipation required. A major breakthrough in electronic switch design is needed, one that is scalable enough to be compacted into chips and not overheat. Possibly quantum, but that has serious stability issues, not to mention the cost. Photonic switches are promising, if a bit heat-problematic when compacted. Single-electron transistors might be the future, if they can get past a single lab model. I personally think that variable switches might be the future - i.e., not just binary - but power ratios are not really there yet. But maybe... Anastasi is always here to bring us the exciting new developments in this regard.
Great, sober presentation! I really look forward to the unexpected uses of the tech behind all this, when the hype settles - when we get enough distance from the hype to pinpoint reliable utilization of the tech in combination with actual knowledge. It has happened many times before. It will happen again. But how? I don't believe in just feeding large semi-generic models more data. Quality and context are key... and yes, directed effort to augment actual knowledge, as with core scientific research and solving well-defined problems...
The tech revolutions I have lived through include [transistors, PCs, networks, the internet, mobile phones, smartphones, big data, social networks, digital music/photos, EVs, LLMs and soon AGI] - 12 total. In every case, people underestimated the magnitude of long-term change and underestimated long-term net benefits. The same is true today. DK curves are just noise. What matters are the tech cost curves and the tech advancement curves, which are inevitably similar for every new revolution.
I believe that the development of new peptides will be faster with AlphaFold. Semaglutide is already a blockbuster, and there is plenty of space to explore with the help of AI.
General vs. specific: you can upgrade until capacity potential is reached, vs. once built, it's all you will ever get. A combination of both, like always. LLMs are just the libraries.
GPUs are ASICs - application-specific integrated circuits - since they are built to run 3D acceleration and, later, shaders on those 3D scenes. An example of "not an ASIC" is an FPGA, which is built to be reconfigurable on the fly.
There's a little more to this hype cycle graph, since the peak is when investment gets triggered, but then the engineers and designers need a few months to catch up and deliver their products. So I'd argue it's almost like mach diamonds on a rocket engine, with a standing wave that continues on a positive-slope line of some kind.
AT THIS POINT ... if I see a page I follow posting AI Uncanny Valley Slop... I unfollow them immediately... It was interesting for 2 months last year, but these gross things are taking the internet... Enough is enough 😵💫
The main issue is that we are barely scratching the surface of AI while also lagging far behind on memory tech. Real AI will need more than 1 TB of HBM-level memory, and currently that doesn't seem feasible in the near future. There are other hardware and algorithmic solutions, but they don't address the huge energy requirements you'll still have for edge AI.
I would be interested in your opinion on the current developments of Anima Anandkumar from Caltech! Wouldn't it be conceivable that her AI that understands physics could be one of the breakthroughs mentioned?
I hate how 'markets' control progress. They don't always select the most useful and beneficial ideas, but those that will make money. These are often not the best ideas.
In my opinion, generative AI proved to be more disappointing because, while the hype and promise of what it would bring seemed exciting, it turns out people would rather be the creative influences themselves. And so AI at best becomes a 'tool of creativity' that only adds to society at a minimum, mainly because a lot of artists and creators would rather use their own ideas or, at best, generative AI as a tool. It has also brought to light the ugly truth about the beginning of the AI revolution: deception on a very personal level becomes dangerously easy to produce and use to manipulate the masses, in the form of deepfakes and other AI hoax creations. I absolutely love your show - awesome channel, keep up the good work!
The Groq company is interesting with their LPUs (Language Processing Units), which run at very low power: once a GPU trains a model, you switch to an LPU to run it. That's the future. And yes, the Gartner hype cycle is usually true, but AI will change everything permanently - the cycle shows it.
I agree, the ratio of calculations per second to number of transistors has dropped dramatically in the last ten years. The reason for this is the superscalar principle, which does not scale well. In the first years of microcomputers, the speed increase was driven by rising clock speeds: doubling the clock doubles the calculations per second. Additionally, the operand width increased - from 4-bit to 8-bit to 16-bit to 32-bit to 64-bit - which also speeds up at least calculations with bigger operands. This was quite comfortable, because the design made it possible to run software unchanged, but faster, on new chips. But for about 15 years the clock speed has been stuck around 2 GHz. In the next phase, the SIMD ideas of Seymour Cray were added to CPUs: MMX, AVX, AVX2, AVX512... you may interpret a Cray-1 as an "AVX4096" equivalent. But this is inside-out parallel computing, which means only the most basic routines profit from vector calculation, for example an FFT routine. The problem here is Amdahl's law. The GPU, in contrast, uses outside-in parallel computing: a huge sea of very small processors, one for every spot of the screen, calculating a 3D model... or something else with OpenCL. To also improve old programs, designers try to improve the instructions-per-clock ratio, which has created a lot of security bugs and requires an exorbitant number of transistors that all consume energy. As a result, hybrid CPUs arose, with E-cores and P-cores: slow execution of small, non-time-critical code fragments at reduced power consumption by avoiding the superscalar overhead. It is a pity that the parallelism of Control Data's Cyber 205 did not come down to and evolve in microcontrollers. Well, it was not possible with the storage of those days. The idea was to fetch data streams via DMA from storage, process them in a pipe, and store them via DMA back to storage.
I think future processors will have code which can intrinsically execute in parallel, so that any time something can be calculated in parallel, the vectors can be longer at the most basic software level and the machine works more efficiently. Here superscalar becomes simple, because you only need to configure a longer pipe of instructions with multiple evaluation steps. The reachable IPS depends only linearly on the amount of resources within the central gate array. You may ask: how do you parallelize code like, for example, f(x) = x² if x < 0 else 0? Well, it is easy: every if statement splits the stream into sub-streams that are processed one after the other, and the results are reunified by the same bit vector in a special join statement. This kind of engine would be a real universal computer, substituting for CPU, GPU, NPU... and it could come very fast, because first versions may be implemented in software and run on CPU and OpenCL (GPU) engines. Only a dramatic speed enhancement in CPU clocks (diamond wafers) might be another temporary exit from the need for a new execution scheme.
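The Amdahl's-law bottleneck mentioned above can be sketched in a few lines of Python (a minimal illustration of the law itself, not code from the comment):

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Overall speedup when only a fraction of the work parallelizes
    (Amdahl's law): 1 / (serial + parallel / n)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# A program that is 90% parallelizable, on 8 cores:
print(round(amdahl_speedup(0.9, 8), 2))  # 4.71

# Even with a million cores, the 10% serial part caps speedup near 10x:
print(round(amdahl_speedup(0.9, 1_000_000), 2))  # 10.0
```

This is why, as the earlier comment notes, only highly vectorizable routines (like an FFT) see the full benefit of SIMD units: the serial remainder dominates everything else.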
What we actually need is an FPGA that has extremely advanced numerical-processing primitives built in, so we can adapt the tooling. This would probably sacrifice some speed for future nimbleness. Oh, and power: FPGAs, IIRC, are quite a bit more power hungry, right?
"it's very hard to make predictions, especially about the future" - lol, I like that citation ! (it's atributed to Niels Bohr). Whenever someone predicts that a specific technology is reaching a plateau, I think of a mosquito. A mosquito has a very small brain, but it's able to fly, find food, keep you awake (...), avoid your hands that are trying to kill it, find a partner to reproduce with, and it's all autonomous. We're a long way from reaching Mother Nature's level of miniaturization and capability.
I am now, and will continue to use my iPhone. Not so much because of the wow, but in spite of the WOW. It does what I need. When AI does something you need, it will be the only option. Soon AI and bots will do EVERYTHING we need and then, hype or not, we need to figure out what’s next?
We should focus more on the application of technology rather than just the hype and empty promises. AI, for example, could be so helpful in areas yet to be seen!
Can you please share your views on the future of electronics and communication engineering? I'm joining this course soon. Pls pls pls - I'm very worried.
Pretty sure that when a model goes off script and teaches itself an unsolicited language on its own, it most likely has achieved some level of AGI... it chose Persian, if I recall correctly. Curious what you think about that, Anastasi? Thank you for what you do - I have learned so much from you, love.
You have to be precise with the interpretation of diagrams. In scores from 0 to 100, going from 80 to 90 percent halves the error rate - effectively doubling the performance of the model!
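The arithmetic behind the point above, spelled out (my worked example, not the commenter's):

```python
def error_rate(score_pct: float) -> float:
    """Fraction of items a model gets wrong, given a 0-100 benchmark score."""
    return (100.0 - score_pct) / 100.0

# 80 -> 90 looks like a modest score gain, but the errors are halved:
print(error_rate(80.0))                     # 0.2
print(error_rate(90.0))                     # 0.1
print(error_rate(80.0) / error_rate(90.0))  # 2.0
```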
Don't worry. There will be another major breakthrough and paradigm shift. One way or another, we will absolutely have an incredibly intelligent / super intelligent reasoning system in under 5 years.
6:03 I am Spanish, and on behalf of my country I want to claim back our flag - we are not Italians yet, despite them thinking olive oil is their thing and that Mediterranean cuisine means Italian food.