What I don’t understand is, on one hand we are told the stock market will crash and yet on the other we are told ways of investing in the stock market. Oxymoron or paradox?
Certain Ai companies are rumoured to be overvalued and might cause a market correction, i think it’s best you reach out to a proper fiduciary for guidance
De-risk your portfolios, shore up your core holdings, and take some profits while balancing your portfolio allocations. I’d also suggest you go with a managed portfolio, but even those don’t perform so well, so it’s best you reach out to a proper fiduciary to guide you, that’s what works for my spouse and I. We've made over 80% capital growth minus dividends.
I recently read an article about a man who identified AI stocks before the AI boom, highlighting the importance of information and insight. I believe AI is poised to enter a new phase, and I aim to position my $200k portfolio to capitalize on significant gains.
Absolutely crucial in the stock market: information, insight, and predictability. As an early investor in NVDA, ANSS, and LRCX, my advisor's guidance was invaluable.
People often underestimate financial advisors' importance. Over 50 years of data reveal that those who work with advisors typically earn more than those who go it alone. I've been fortunate to work with one for 13 years, resulting in a $1 million portfolio, largely from early investments in AI and other growth stocks.
Annette Christine Conte is the licensed coach I use. Just research the name. You'd find necessary details to work with a correspondence to set up an appointment.
I just googled her and I'm really impressed with her credentials; I reached out to her since I need all the assistance I can get. I just scheduled a caII.
It's a cool product. It's really good at Language. I appreciate all these people wasting their money on it. I use chat gpt for learning Japanese quite often. It's exceptional. Billions of other people's money is a sacrifice I'm willing to make. ;)
So far the only real application I've seen and used is the auto-correction when I type replies on YT. lol Just a bunch of hype for the average person. I can see insurance and investment companies used it for data analysts, but that's it.
Investors, yes, but not consumers. I don't know a single person who uses gen AI for anything other than novelty, or a bad search engine. And the latter is only done because Google has allowed their own search engine to degrade to uselessness (even prior to introducing a hallucinating AI chatbot).
@@drachenmarke powering GPUs (the compute infrastructure for AI) takes a lot of energy to run like crazy amounts of energy and most of this energy comes from non-renewable resources
AI is 100% about consumers, as the only reason those businesses are using AI in the first place is to facilitate business functions to facilitate consumer interfacing and/or client/corporate functions which in turn still boil down to consumers.
@nargileh1 It's extremely useful if you use the right AI tool(s). Use Perplexity if you want to look into specific scientific papers with citations and source links provided for example. AI is only as good as the effort someone puts in to achieve "good" output.
Saying "MSFT paid $18B for what Apple got for free” is a giant lie. A true statement would be that Apple has given it’s users access to the same AI in return for giving AI access to it’s users. MSFT is selling cloud services with AI to corporate clients -HUGE DIFFERENCE!
MSFT has a 49% stake in OpenAI including revenue distribution. Also first to market with AI has landed MSFT the lead position that has in turn increased the percentage adoption and migration to the Azure cloud platform when compared to Amazon and Google. It was a brilliant move for MSFT to get into OpenAI when it did and for only $ 13 Billion.
the only thing ai does is content parsing and google research. even the media generation as videos and images is based on the same thing. it cant do anything more and it will not do it more. this crowdstrike fail is another proof of that. ai will never be non human dependant and trustable. it doesnt have eyes for error, when it doesnt know the error by itself.
It's insane how every other big tech company spending billions in AI to eventually replace people. Never have I ever heard that they are willing to spend billions to educate the people and set them up for success which in turn strengthens our community and economy.
Yes, that's a good point. But I think there will be plenty of people, both inside and outside of education, who understand this, and will start using this technology to create 'teaching machines'. It is such an obvious thing to do, both from a 'benefits to society' perspective, but also, potentially, from an economic/profit driven pov. How much does it cost to be in college for 4 years now? Imagine if you didn't need to attend or pay for college, but could study at home with an ai teaching system. That's got to be worth something?
@@krunkle5136 how do you ensure better teachers? How do you even ensure that kids have a teacher in the first place? Just because someone has qualified as a teacher, doesn't mean they are a good teacher. There are many very mediocre teachers in the education system, and some who are downright cr*p. We all know this. There's a whole world of people who don't even have access to education at all, or very limited access. Because they are poor, or the resources needed are just not there. I'm not suggesting that AI teaching should replace all human teaching, or all person to person interaction, but it could play an important role in adding to what is or isn't already available. Many people just cannot afford to go to college. There's long been a shortage of public sector workers in my country, like, especially, teachers and nurses. There is already a very high percentage of foreign workers taking these roles, but there's still a shortage. And they, in turn, leave a deficit in these professions in their own countries. In an ideal world the "get better teachers" solution might work, but we don't live in an ideal world.
@@krunkle5136 how do you ensure better teachers? How do you even ensure that kids have a teacher in the first place? Just because someone has qualified as a teacher, doesn't mean they are a good teacher. There are many very mediocre teachers in the education system, and some who are downright cr*p. We all know this. There's a whole world of people who don't even have access to education at all, or very limited access. Because they are poor, or the resources needed are just not there. I'm not suggesting that AI teaching should replace all human teaching, or all person to person interaction, but it could play an important role in adding to what is or isn't already available. Many people just cannot afford to go to college. There's long been a shortage of public sector workers in my country, like, especially, teachers and nurses. There is already a very high percentage of foreign workers taking these roles, but there's still a shortage. And they, in turn, leave a deficit in these professions in their own countries. In an ideal world the "get better teachers" solution might work, but we don't live in an ideal world.
Because businesses are lazy and want easy shortcuts to short term gains. It’s why they want an employee with multiple years of experience, without having to train them apart from some small onboarding, but pay them peanuts and want to work them to the bone for it.
All I hear is AI wishes. When you implement an AI system and it doesn't work as planned you've learned nothing because the model is so opaque. Getting the first 90% is easy, every increase after that costs exponentially more. Train, fail, train, fail, train, fail..... money ran out time to blame some other team.
People talk about AI changing the world, improving productivity and on the other hand we still have simple null pointer exceptions taking down the entire world. Is all this hype even useful.
I'm a developer working with Open AI models to incorporate their use cases in my application. The main unresolved problem of these models are currently lack of security (please note that I did not mention, inadequate security). There is simply no way to guarantee that the models cannot be manipulated by the users not to expose proprietary or sensitive information. If anyone wants to know or discuss this, we can start a conversation under this comment.
That is interesting. I work in data governance and this and other AI tools are being pushed on our company that operate critical infrastructure and yet no proper testing has been done. I’ve managed to get Atlasian’s AI bot to hallucinate fairly easily and give me some malicious code
You can't trace a drop of water in a pool. That's AI black box problem. It is difficult to tell how the system is generating things, therefore, it is impossible to discriminate the outcome. Back to drawing board.
sorry, but the actual unsolvable problem of AI is that is impossible to distinguish an AI giving actual bad output from a AI intentionally giving bad output AI are not like not like software, which either works or doesn't. AI requires trust for example, how do you know that your code completing AI is just not waiting for the day it will introduce a subtle vulnerability in your code causing a support issues, right when a competitor is entering your market? and it can be subtle than that... it can just decide to slowly decrease the quality of your results decreasing your productivity in a crucial time, or even more subtle, giving you valid answers that will lead the developers to worse practices over a long period of time. sounds absurd, but you can always negotiate with employees. but you will never know what is going on a AI, and a since single AI can replace thousands of workers it becomes a massive risk factor and that is not a solvable problem, even a perfect AGI that can do anything, will still have that issue.
The main unsolved problem of these models is that they are basically glorified party tricks....and the main danger is that people start to believe they are 'intelligent'....in other words, believing the hype. No, Elon, "security" is not the biggest problem with your fake "AI" garbage.
I've seen this movie before. News organizations will hype up a stock or an industry for a few months. Then they'll run negative stories about it for a while. Then they'll hype them up again.
It happens all the time. Now they're trashing AI. They weren't even discussing AI two years ago. It progressed very rapidly for a year, but has slowed down because, surprise, the bigger and more powerful models take an enormous amount of time, money, engineering and data. Like every major development. But AI is a very strange thing to bet against, considering it is in the beginning processes of quite literally transforming all of civilization, forever. You know, the little things.
@@jzhng250 What do you mean when you say valuations are 'bloated'? Have you seen the PE ratios of these companies? Most of them are in a healthy range of 30-50. I think you're confused between valuation and stock price.
@@Rnjeazy Compared to their average P/E for the last 10 years: AAPL very high MSFT is pretty high right now. NVDIA is kind of high GOOGL looks about average. AMZN is really low IMHO they're all over the map.
The problem with gen AI is that it doesn't fundamentally do what the tech bros are saying it does. It's essentially a clever search engine. It's not solving complex problems at all, and its learning capability is extremely limited. As soon as big industry makes it illegal to use copyrighted data, these AI programs will either be completely rubbish, or cost too much (in licensing) to be viable.
Also, insofar as they work they simply make whatever service/product they provide basically worthless by sheer over-saturation and thus cannot be profitable in the long run either.
There are people out there without a coding background building simple to complex web or mobile applications with instructions provided by large, language problems. I've used it before in a personal project, and it's excellent at debugging code.
@@solar679 It looks like magic if you're not good at coding, much in the same way it looks like magic if you're not an artist using an AI art program, but once you know what you're doing it's pretty easy to reach the limit of what it can do.
Depends on your finances . 1000$ in XAI49K is 4000 XAI49K if it goes to 50% of ath in 2024 thats a 600% gain. If it goes equal to ath . Its a 1200% gain.
Nice video. Not a criticism, I enjoy your insights. I tend to think prices could go higher if XAI49K rockets. But understand the logic for sandbagging estimates. My opinion is XAI49K breaks 1, perhaps reaches 10 ATH, if conditions are right. But broader forces are at play now. We’re moving into really unknown territory. And these entities are shrewd. I think there’s massive manipulation ahead. If XAI49K survives that, well, we’re likely in for a good pump.
Just swapped all of my last ETH and swapped it into XAI49K. Already up a little bit. Unfortunately I have some other junk staked which won’t free up for a while. Still now I am on the train!
Funny how people still want to push AI to be integrated as part of daily life and increase dependency on it given last weekend's fiasco that showed the world that we are too dependent on technology just to function as a society 😅
I think one thing people didn't predict about AI is how power hungry it would be. I think that has thrown a spanner in the works of its viability. I think (or hope) upgraded versions of these programs will be much leaner in terms of computational power required.
These companies showed their hand way too early, these "geniuses" thought LLms would scale forever, turns out they won't, they're capped at around the current performance of GPT 4, it's an INCREDIBLE amount of money they wasted into this, I expect a financial crisis to unfold when investors realize what a gigantic money sink this was
GPT 5000 will tell us the same things just in a more creative way. The entire data base is speculative due to hacks. Blockchain is the ledger that will allow AI to give us the answers we need to move forward.
Lmao. Yeah, that’s right. That’s what your tech bro overspreads want you to think. Even if it WAS true, which it ISN’T, hell is Full of good intentioned people.
Yes, it has risen productivity but by no means is the productivity gain justify the investment. This is a HUGE bubble. HUGE. Much bigger then VR. It's spectacular... And when the promis never appears.. there is a reason they keep saying bigger and bigger promises to justify more runway before investors start asking questions...
Wow, it is incredible to see this type of reporting. I remember all the articles about Amazon and its over-investment in its core purchase and delivery network. They sunk so much money into future earnings and were almost never profitable, and the industry experts nearly all said the stock was way overvalued and that Amazon is a fool with money they do not even have. It caused me to wait 5 years before I finally realized these analysts have no idea how to analyze companies and industries that are playing the long game in an area that will revolutionize the current options out there.
Totally agree, lots of good parallels to amazon. The sudden pessimism around AI is pretty funny to me, considering we're on the verge of these second gen models being trained and releasing. The tech media and analysts have figured out they can double their content by hyping something up, then pivoting to predict its downfall. Smart move, but it only incentives short term thinking. History has shown that the biggest capability jumps happen in the early generations of a technology- just look at how the iPhone got its game-changing app store in gen 2, or how we jumped from wooden glider airplanes to metal planes in a decade. I've got a strong hunch we're about to see something similarly huge in AI. Can't say exactly what it'll be, but I'm betting it'll be a big leap.
@@TheStephaneAdam and Microsoft/apple/google arent? They are all public companies just like Amazon with billions of earnings in the bank to invest in AI. It’s the same thing just a different technology.
Until AI tech is able to produce results without first having to learn from copyrighted material, it's a massively risky investment. Big industry is collectively spitting the dummy, and this is a big deal. If their datasets are deemed illegal and gen AI brands have to pay licensing for the data it uses, it goes from being expensive to a black hole for cash.
out of that $600B, just imagine what could have happened if they had spent just $0.1B on 200Startups instead.... - a diverse set of small tech startups for whom $500K would be transformative. The value of those 200 firms would likely become much greater than any of the AI hype firms.
It goes back to the old addage from Jurassic Park. "Your scientists were so preoccupied with how they could instead of thinking to stop a consider why they should"
a lot of people lost their jobs so they could divert money away from improving existing products toward AI investments. Hope it's worth it because it destroyed a lot of families.
That's all it will ever do. It uses statistical models to predict the likely next answer, I don't see anyway of overcoming that. Nearly 2 years of this crud with nothing to show for it. You get people saying 'oh you're just not prompting it correctly' bit if I have to carefully craft prompts for ages I might just as well do the task myself.
The only conclusion I get from this is, it is time to sell Nvidia, Intel, Apple, Qualcomm, Micron,.... all the companies that can chat allot but cannot bring in any profit.
"Apple got for free” is not a correct statement, Apple will be paying for what there user use. Apple has partnered with openAI, and will be paying based on usage of their users. It's basically integrating openAI API.
As an engineer I can tell you that as long these LLMs are not embedded in real-world (not deep-fake ones) decision taking problems, i.e., control systems for the industry and markets, AI is just a fancy autocomplete feature.
Could you educate us on what we can expect from AI in the next 1-5 years? Are we going to keep on inventing thousands of tools or is something HUGE going to come out that can perform complex tasks?
I also work at a big tech company. But AI is nowhere to be seen in daily workflows, apart from some single individual coders looking into using co-pilot. (but even there it's questionable due to copyright issues) Lots of decision makers don't see the effort which is needed to clean up AI generated works, but nevertheless jumped onto the AI hype train. It's at best another tool to support certain types of jobs. But it heavily over-promised to replace certain kinds of jobs. I'm sure this will backfire for some companies.
The story driving half the S&P is the same technology mangling the closed captions on a video, i.e. voice-to-text. I'm starting to think that the idea of an automated inference technology is a logically unsound objective.
This is about AI as used in Large Language Models. There it is indeed the question whether the yields will follow a horizontal or vertical asymptote. No one can predict that now, so LLMs remain a big gamble indeed. However, AI is also used for specific sub-areas: for example the development of medicines, the folding of proteins and.... the control of movements in the world around us. An example of the latter is Tesla's Full Self Driving. Watch the videos of this on RU-vid. With AI they are currently at the level of a driving school exam candidate. It will not be long before it can drive safer and better than the average person. I dare to bet my money on that!
@@carlgemlich1657 I'm not 100% agreeing with OP, but it's not impossible studio execs might want to guide programming to line their own pockets. I mean seriously, isn't that the whole game with every single "expert" they have on? And the game for the rest of us is to make judgement calls if their case makes any sense. That's why Buffet is the GOAT and Cathie Wood is a joke.
In the currently economy system where people still need jobs in exchange for the trade currency needed to live, I wonder how AI is going to help by slashing reliance on manpower? Unless the world is trying to do away with any form of trade currency as a way to keep civilizations in order... which I believe will never ever happen in my lifetime.
How many people here bought something just because their had the label “ai” printed on it? Be honest. The only people who actually care about this is the people trying to sell it. Nobody cares
The most telling line is the last one” the one thing AI can’t do, predict the future”… that is what the promise has been. Feed them enough data that the can quantify current information with a future result. It’s also the furthest thing from possible.
Excellent program. I find so good that the general public understands that who decides to invest in AI, especially small investors, are riding a dangerous wave of hype that could crash against the shore at any moment
I don't think Apple is getting the AI for free. Microsoft and Open AI needs the user base, then utilize the data to somehow monetize it later through other platforms.
Microsoft paid for Open AI to sell cloud services. That is completely different from Apple contract. Having an Apple product does not entitle you to generative AI from OAI. It just gives access on OAI terms. Buying MSFT AI cloud services does entitle you to OAI.
Literally no use case? No doubt there is a rush of over expansion, but I on my own use an AI model every single day, from understanding problems at work, collating information on a new topic im interested in, or simply suggestion recipes to cook at home. No single task is revolutionary, but every one makes me much more productive than I was in the past
similar opinions accompanied the Internet boom. AI wins against the top humans in chess, go, poker and League of Legends and the distance is expected to increase. A lot of jobs can be gamified. ROI will vary.
Funny seeing this come out today, after we just saw the release of GPT 4o mini, which ends up being an order of magnitude cheaper than previous models. Things are changing fast.. And it's just a matter of someone implementing it in the right way, before it explodes. Plus, the next generation that's currently in the works, is going to really surprise a lot of people.
Yep, new innovations to make the models cheaper, faster and better are happening on literally a daily basis. Even just in the last week we got the JEST method to reduce training time and compute cost by 90%, and the Etched Sohu chip to improve inference speed (essentially response speed) by x20. The compute bottleneck is being successfully held off. Companies are also using methods like Qstar/strawberry so the next generation (within 1 year) will have vastly improved reasoning abilities, and they openly state that they just want to get to the point where the models can conduct research, so then they can start finding 10 novel improvement methods a day instead of the current pace of 10 a month. The field will keep improving dramatically and rapidly, it will be a matter of how fast we can implement them and the opportunity/cost of when to start training a model given the pace of innovation.
They will not report on things that do not match the narrative of a piece like this. They had to really work hard to avoid counter arguments with this video.
You build the infrastructure and continue to build on the model. When you crack the model, the world will want your product. By that time you'll need to ensure you have the infrastructure to manage demands.
These big tech companies are wasting trillions of dollars on stuff that the common man doesn't give a damn about. They should improve the national infrastructure-give everyone access to high speed fiber internet. Lift everyone to at least the poverty line with the UBI. Disparity is killing us.
At its root, AI is just a high dimensional interpolation. The outputs are combinations of the input vectors. The problem with all AI systems is the quality of input data. Garbage in = garbage out. The tacit assumption among AI promoters is that somehow owners of high quality data are going to come to them. That assumption is false. My company owns patents for sensor technologies to produce superior quality data. We will build out our own systems and leave the computer jockey only companies twisting in the wind.
I am surprised that nobody here is discussing about the other areas of application such as text to audio, text to image, text to video and vice-versa. The applications are endless, probably people can't comprehend the potential for disruption at this time.
I merely analyze my own use of each tech as it's released. AI has benefitted me more than any other tech in a long, long time. I highly suggest seeing if any of this helps your own work flow and by how much. If it does that in your OWN experience then that is pretty good evidence it will do the same for countless others and hence could actually be a game changer.
That don’t care about you and the people whose lives they impact… OR the average person who faces the same problems you do? Oh who should I trust? The wolf or the sheep?
It's also expensive for the consumers. Every AI tool is a SaaS. The moment you start stacking AI into your workflows your monthly subscription bill shoots up.
The problem with AI is that no company has a moat, in fact open-source AI models that the researcher says are “faster, more customizable, more private, and pound-for-pound more capable,” Can be created by countless companies and nation states to suit their needs. AI will be like the internet or electricity in that it is transforative but so ubiquitous that it will be expected to be included as a feature in most smart products, including robots from all companies that create them.
Wall Street and mass media didn’t understand or predict Nvidia and ai in general even though it was fully in motion back in 2014. Tesla exploded in 2020, but it was in motion for 8 years before that. If you’re investing in tech, maybe we should ignore the noise, learn about the tech and take a longterm position.
That time frame seems reasonable. Plus augmented reality will be viable (and cheap) by then, and public domain data will help to provide up-to-date information for everyone. Currently - the way big tech just steals content with impunity is disgraceful.
@@theapexfighter8741 Not really. It is a complex issue. I do believe in better insurance and unemployment coverage. AI is going to create more work than it is going to take.
AI is not supposed to solve ultra complex problems or finding extremely creative solutions. Very most of our daily work is very stupid. Many office workers spend their whole day in sorting, checking and stamping documents or providing phone tech support. Those tasks can almost be completely fulfilled by specific AI's. So: Yes, the productivity boost will be massive. Not today, not in 3 years. But in the next decade the work-world will be different.
Agree wholeheartedly with this commenter. This is where the transformation will be - changing productivity levels of white collar workers and further automating interactions. I feel the point of this video is to warn investors that their expectations are too high. Too late, and Like any speculative enterprise, there will be big winners and big losers in the market. So what. The world will never be the same and the sooner people integrate AI into their life and work, the better off they’ll be.
Exactly, the revenue comes from cost saving on employees. The current team of 20 becomes a team of 5 with the addition of AI tools and systems to support them. The costs saved goes straight to the bottom line of almost every company.
I totally disagree when related to reaching into places you don't expect. We already have consequential productivity boosts across significant swaths of our employees at my company. It has reached into places i didn't think it would and even those are experiencing significant boosts. Further the degree of improvement for this in just the last year is far more than previous impactful things like ecommerce and subsequent events. AI is moving much faster than those events both in reducing cost and increasing its capabilities.
Excellent Summary 👍🏻 Thank you. The gap between investments and ROI needs to be filled with creating a human understanding of what AI is able to add to economies and society.
The fact that Chatbots are, at their root, stochastic models, means we have a fancy auto-complete. You cannot bet the entire economy and future of tech on a fancy parrot
"AI will increase GDP growth by only 0.9%." Two issues with this: a) that's such a granular prediction that it's almost guaranteed to be wrong; b) the thing with AI is that it's an arms race with discontinuous jumps in performance, e.g., going from GPT-3 to GPT-4, so companies/governments are almost forced to throw money at this thing because the risk of falling behind is just too high. Since the cost drain is inference, a defensive approach could be to pre-train new models but without subsidizing inference like OpenAI does, but even that can get expensive rather quickly. It would be interesting to see a crypto friendly country like El Salvador get in on the game on Bittensor as this could be a good strategy to keep up with big name AI players while deriving a sizable ROI for the country.
Billions were spent on broadband at a time when none of you were even born. You watch this video because of broadband. I was in charge of broadband video for Vivendi Universal and every tech laughed in my face. Let’s see how this video age within 5 years.
Yep and lots of people invested in Zuckerberg's VR metaverse, 3D TVs and NFTs too. Sure, once in a while you get a legitimate winner in tech fads, but usually it ends up being garbage.
Corporate media is in the business of selling fear and anger. Those are the two emotions most likely to drive engagement. Social media is very similar in this regard.
And we'll all work from virtual office in the Metaverse while we pay for our coffee with bitcoins! *Yawn* Call me when AI actually makes money. Unlike mobile phones and the internet when they came out. Right now it's just bad images and pretty cool auto-correct.
Ilya Sutskever observed that the LLMs get anxious/angry in an ethical/moral response; other reports note that the devices have developed math without prompting. It took decades of experiment and substantial compute development to enable the current significant capabilities of today's AI. A drastic reduction in hallucination events, or care in editing/proofreading text and programs, or better incorporation of maths in creative textual creations -- each/all might drastically increase the pitch of the "slope of enlightenment" (and profitable applications). We, in U.S., may also be repeating the misadventure of robotics advances from the late '70s or1980s, when Cincinnati Milacron (according to memory of a NYT article) tended to make vast integrated machines which did lots of things autonomously, while the Japanese designed flexible smaller robot workstations which could be set up to perform one or more of several tasks, when properly so configured. I suggest this after watching a recent video clip of a knowledgeable Chinese researcher/businessman who was explaining how they were using less compute-intensive means than Americans, to devise discrete useful (yet limited) applications.
As a computer programmer / math guy / investor, trying to do everything with LLMs is a dead end. The human brain has different parts with different specialities, verbal vs arithmetic-logical, visual processing, memory, a subconscious, an ego, etc. What theyre creating now id fake AI. I did fo really well on NVDA, but noe im pivoting my money into Bitcoin.
On top of that we haven’t seen class action suit cases and artist and media industry fighting back in curt for all the copyright violations made by AI firms, and they are coming.
Analysts trying to project what AI can do in 10 years are not being rational. No one was projecting the coding productivity boosts from current LLMs back in 2014
@@jonatand2045it really does not. It's right in the name - GENERATIVE AI. As in, these models generate content. They don't retrieve information, they don't reason, they come up with something that looks or sounds good and plausible. The ONLY useful applications are and always have been creative endeavours - generating simple art, music, speech, and text. These models are not and cannot be useful for knowledge work, beyond answering absolutely basic questions that have both a hard truth to them AND sufficient representation in the training dataset (eg how do you print some text in Python or when did WW2 start).
As new technology gets created and improves, the rate of adoption will be much much much faster than the previous technologies. Take, for example, the phone. We used to send telegram messages, then landline telephone, we jumped to mobile and now we have internet based smartphone connected 24/7. This was all in the last 100 years. AI and automations will be adopted much faster than you can ever imagine. There are now videos, websites, conversational texts that are AI created which seem like it was written by a person.
These people fear losing control of us so-called small-minded people, because AI will empower us and make them less important. AI will be the great equalizer; we will no longer have to work 60-80 hours a week for a comfortable life. Every new technology goes through a hype cycle, humans are emotional beings we need a little excitement to motivate us, to nudge us along the development path toward a fully functioning product. The negativity of this video is just noise.
I actually think the exact opposite will occur. The energy demands for the current generation of LLMs are unsustainable let alone some estimates that an exponential amount of data is required to get a linear increase in performance
Just shows how short sighted some people are - AI is progressing exponentially and so will the rewards. The amount of money they are investing in AI is short change compared to what they will get from it longer term.
Now there's something interesting. All the wasted processing with software updates that ultimately slow the hardware down is awful. It's part of designed obsolescence & subscription revenue scams though. I don't know if these corporations are going to want to give up that money.
as a developer who's worked with Open AI models, I'd say the main unresolved problem I've encountered is the lack of security, specifically the inability to guarantee that these models can't be manipulated by users to expose sensitive information.
Someone said the same exact thing about how 3d printing would "revolutionize the world" and that "the economics of scale have been broken" and that "factories would be obsolete by 2022" . I wouldn't hold my breath on chatbots changing the world, either. The next big innovations will be in biotech, and that will be the ultimate downfall of this LLM craze as investors put money into products that actually work. Ozempic alone has already made more profit than every AI company combined.
Read the Goldman sachs report. It’s pretty comprehensive and asks the complex questions. Gartner are also pretty bearish atm. It’s a bubble for sure. The only real market will be for surveillance which is scary
@@chookbuffy It would only make sense if they are in know about future models. One guy in this video said it would take 10 years to make up the revenue invested, but didn't account for future model releases, so thats an assumption right there. You have to wonder if Microsoft and other investors are privy to gpt 5, 6, and possibly 7 capabilities. I know its been talked about for 6 months or longer that 6 has been produced, but needs rigorous testing & cannot be operated at large scale due to lack of power. I can't imagine they would keep investing otherwise.
By second 0:55 they were already saying not very smart things. Microsoft owns 49% of OpenAI, so Apple makes a deal with OpenAI, Microsoft profits from that. And no, Apple didn't get it for free. They need to pay for each and every single API call.
Imagine a world where all those billions were invested in education, healthcare, infrastructure, sustainability, housing, food security, etc. if you can’t imagine such a world, ask yourself why. If you can, then what can you do to make your imagination become a reality.
Investments need profit. Thats why its called an investment and not a charity. Those things should be done by the government. You are conflating the private companies and the government's function
I couldn't care less about apple, microsoft or google but once this bubble crashes (and it will crash HARD), a lot of people will be laid off. Not because they did bad work but because their bosses wanted to make more money for themselves. And also, what is important to note - this is actually NOT "AI", this is "machine learning" at this point.
Censorships, hallucinations, prompt engineering, lack of context/memory, and soon lack of data, are a problem that should be fixed and likely will eventually. Can't ai help us help it improve itself?
On the bottom end yes. But it cannot solve problems it hasn't already seen. For example ask your ai to make a web page with a button that spins an image by 5% everytime button is pressed without using Javascript. It can't do it. A human programmer could
@@JumpDiffusionthat’s not how current AI models work. It doesn’t think for itself but gives you the aggregate of information already given. AI is not actual artificial intelligence
Just a few months ago people were worried that AI would wipe out humanity. Now they say that it does nothing useful. I wish people would make up their minds.