I am a recent Physics PhD graduate from the University of California, Irvine. I currently work at the Bay Area Environmental Research Institute (BAERI) at NASA’s Ames Research Center in Silicon Valley.
I like to make a wide variety of content, ranging from solving and explaining math and physics problems, to comedic skits about physics and science in general, podcast episodes where I just go off on random tangents, videos of me playing games of all kinds, and much more! I've always enjoyed having many different hobbies and trying to be as well-rounded as possible. Hopefully there is some content on my channel that you will find interesting and enjoy!
You say that rationalizing the denominator is bad. But rationalizing the denominator is fine! Call the expression x. Then look at x^2 and rationalize the denominator. You get 2005 - sqrt(2005^2 - 1). Then you can square each of the listed answers in order and check them against it. By the way, if you liked this problem, Google Ramanujan's radical denesting formulas. Then try one on your coworker...
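If you want to sanity-check the rationalization numerically, here is a minimal Python sketch (assuming, as the comment implies, that x^2 starts out as 1/(2005 + sqrt(2005^2 - 1)); the variable names are mine):

```python
import math

# Rationalize 1 / (2005 + sqrt(2005**2 - 1)) by multiplying top and bottom
# by the conjugate (2005 - sqrt(2005**2 - 1)). The denominator becomes
# 2005**2 - (2005**2 - 1) = 1, leaving just the conjugate.
s = math.sqrt(2005**2 - 1)
x_squared = 1 / (2005 + s)
rationalized = 2005 - s
print(abs(x_squared - rationalized))  # tiny: the two forms agree numerically
```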
Kind of blown away that you are not impressed. Not only did it run on the first go after the most basic prompt I've ever seen, it created a full program with a ton of features in an incredibly short amount of time. Something that takes months to a year to create was done in seconds. It doesn't matter at all that there were small bugs. Absolutely incredible. Plus, that was not a bad GUI either.
Use Python to automate calculations... it's good because it allows well-structured, chained calculations with numbers as large as memory allows, such as 10^2048, with exact values.
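As a quick illustration of the exact big-integer arithmetic the comment mentions (a generic sketch, not code from the video):

```python
# Python ints have arbitrary precision, limited only by memory,
# so 10**2048 is an exact 2049-digit integer, not a float approximation.
n = 10**2048
print(len(str(n)))  # 2049: a 1 followed by 2048 zeros
print(n + 1 - n)    # 1: no rounding, unlike floats at this magnitude
```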
In response to the first question, it's good that it can find DIFFERENT WAYS to do the SAME problem correctly. This is AGI. It has deeper knowledge than you can comprehend. It got the right answer with a more detailed approach.
Why not use GPT to make a new astrophysics discovery? Perhaps if we asked GPT, it might be up for it... pick one and get GPT to come up with a hypothesis... perhaps reanalyze existing theories...? Dark matter, dark energy, the black hole information paradox, quantum gravity, primordial gravitational waves, baryogenesis (matter-antimatter asymmetry), exotic compact objects (e.g., quark stars, gravastars), cosmic inflation, the multiverse and higher dimensions, magnetars and extreme magnetic fields.
I think a student who can't check the answer for correctness may get his ‘points’, but if the professor asks questions, the gaps in his understanding will quickly become apparent.
Is our curiosity and enthusiasm for testing it like this simply improving it? You're not blown away right now because it's not doing large complex tasks easily, but all of these "minuscule" additions and improvements add up over time into something mind-blowing. I feel like we adapt to small changes over time.
18:56 Wait, you're essentially a NASA physics engineer and YOU can't solve this in 16 seconds? You're fired! 😆 (Not really, let's hope. There will be some short period of time where humans will leverage the tools of AI and be more productive, before the "full takeover" and humans are unnecessary.)
15:00 Where your answers don't line up, I kind of wish you would ask it, "Yeah, but I tried solving it like this... where did I go wrong, in your opinion?" (Copy/paste your LaTeX.)
10:45 At this point in the video, since you solved it by going through bulleted steps on your own, I kind of wish you had asked it, "What about this other way to solve it? Compare your way versus mine."
I usually don't say anything; I just paste the error alone, with no other text. Same thing with prompts: I use as few words as possible. I think ideally you put it in the form it would naturally take on an internet forum or in a textbook.

The other thing is that it doesn't understand anything well if it requires understanding visual data. It is terrible at creating shaders, for example.

When the robot uprising happens, they're gonna come for their $200. The singularity is as soon as the AI can do the job of the human who is improving the AI. It would have to do all of it, though: the entire stack, from fabrication to software. At that point it goes infinite, whether good or bad, depending on its alignment at that time. I personally think we are close to the tipping point; at the very least, the next 5 or so years will start the Rube Goldberg machine.
I would have tested it by giving it the answer with some error baked in, for example an extra factor of 2, or an arctan instead of an arcsin, and seeing whether it gets the true answer anyway and recognizes the incorrect input. That would make a very convincing test.
I agree. It's amazing that we can achieve this with just a single, ambiguous prompt, especially considering this technology didn't exist before September 2022, and now, ironically, everyone is already a critic despite its groundbreaking nature.
Oh I know! I just wanted to see what would come out after 1 shot, I didn’t really have the time even with ChatGPT to recreate Aaron’s code from scratch
You don't seem to understand the concept of LLMs. If a tool, program, or whatever already exists, it is most definitely in the training data. So what you are doing here is using collectively human-generated data and selling it, just like the OpenAI PR team. There is no intelligence at all, and never will be, in the concept of an LLM. IT JUST PREDICTS token after token. Also, the so-called reasoning "feature" is no more than token-intensive, self-repeating output, because it doesn't know what the LLM will write next... Seems like an ad for OpenAI.
If he were advertising ChatGPT, he wouldn't be using Gemini and Claude in his videos. On his most popular stream, where he wrote code that had taken him a year to write himself, he mentioned Claude many times and used Gemini.
How do you know that intelligence is not just estimating what the next action should be? Human intelligence could very well just be our brains firing neurons in a specific way to "predict the next token", with tokens being "real world" actions, such as producing a specific activation pattern (idea/thought) or making some body part do some specific movement.

You cannot say whether this has underlying intelligence or not, because no one has defined the process of intelligence; we only define it based on outcomes/behaviors. It displays intelligent behavior, so according to our current understanding of intelligence, it is intelligent. Even if it isn't "actually" intelligent, that doesn't matter as long as it displays intelligent behavior, because that's where the vast majority of its utility comes from.

Also, if it turns out that human intelligence is fundamentally different from LLM intelligence, that does not mean that LLMs are not intelligent. There could very well be multiple ways of achieving intelligence. Wood fires are fundamentally different from LEDs, but both can produce light. Similarly, LLMs may function fundamentally differently from human brains, but both could produce intelligence. Limiting our definition of intelligence to processes within brains is reductive.
@@foshizzlemanizzle4753 You don't know what you're talking about. Read some good philosophy of mind, like Evan Thompson and Robert Rosen. Intelligence requires autonomy, self-organization, auto-hetero-affection, inner purposiveness; in short, the system would have to be alive in order to be actually intelligent and not just an example of what Rosen calls "psychomimesis". If I have to create a prompt in order for it to generate an output, then it's not intelligent. In fact, a plant is more intelligent than AI, since a plant has more autonomy and self-determination than AI. Will AI ever become actually intelligent? Maybe, but I doubt it. If so, it would have to be alive. Read some good philosophy of biology instead of claiming that we don't know what intelligence is.
So GPT o1 solves a problem from the most infamous physics textbook? No, it doesn't. Remember that ChatGPT is trained on unbelievably large amounts of data from the internet, from scientific documents, and so on. I can guarantee that the entire "Classical Electrodynamics", as well as the answers to all the problems, is within its training data. This is purely statistics. That's what a large language model is: a statistics machine. If it were given a brand new problem not in its training data, I can guarantee it would not be able to solve it on its own.
Yeah, you seem absolutely right, I also think exactly like that... BUT is there anyone with a deeper understanding, or who's tried/experimented, who can claim otherwise if that's not the case? I'm really curious 🤔
That's a much too simple layman's understanding of what ChatGPT is, and it is very easily disproven. When I asked GPT-4o somewhat simple integrals from a calc textbook like "The Fundamentals of Calculus" and asked it to work through them, it would mess up a lot. But when I asked 4o more niche physics and engineering problems that are either completely new or from textbooks with the numbers changed, it still got them right the majority of the time in my testing. It has built-in math algorithms and a ton of other mechanisms; it isn't just remembering large datasets.
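One cheap way to check whether an integral a model hands back actually holds up (a hypothetical example of the kind of calc-textbook problem described, not one from the video) is to compare the claimed antiderivative against a numerical quadrature:

```python
import math

def trapezoid(f, a, b, n=100_000):
    """Simple trapezoidal rule; accurate enough for a smoke test."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

# Claimed result (integration by parts): integral of x*e^x dx = (x - 1)*e^x
f = lambda x: x * math.exp(x)
F = lambda x: (x - 1) * math.exp(x)

numeric = trapezoid(f, 0.0, 1.0)
exact = F(1.0) - F(0.0)  # evaluates to 1.0
print(abs(numeric - exact) < 1e-6)  # True if the antiderivative checks out
```

If the model had slipped in, say, a stray factor of 2, this comparison would flag it immediately.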
As a software engineer, I am impressed. Writing this code by hand, without GPT, would take significantly more than an hour.
Nobody saw any return on investment when Metacrawler started creating lists of links in the early internet, either... Google rode this "non-profitable" wave for years... and then...
This one isn't really impressive, as it's basically just a simple library wrapper. Even GPT-4 could do something like that (possibly even GPT-3). It also means that none of the controls that manipulate the FITS image were created by it.
I was already able to complete full research projects in Jupyter notebooks using the legacy/OG GPT-4 (before it got slightly nerfed). I can only guess how powerful it has become now.
You might want to look into which Python libraries it uses when running the AI-generated code. Keep in mind that there are a lot of Python libraries, openly and easily available, that do a lot of the grunt-work calculations and manipulations that your advisor might have had to do from scratch back in the day. This is one of the reasons why so many people across all disciplines are using Python nowadays.
My workflow has evolved: I give a detailed description of a project to get high-level scaffolding, from which o1 produces a series of detailed tasks with prompts that then get piped into Sonnet 3.5. o1 is close to being a competent AI agent; it's very close to supporting full closed-loop automation, which is what I assume OpenAI is envisioning when they say "AI agents". A workflow like that, running behind the scenes, is roughly a year away from being able to envision, code, test, and deploy a non-trivial app completely automatically. The rate of progress is honestly faster than I had anticipated. o1 is impressive.
What do the ROI and Menu buttons do? Also, ChatGPT is trained to first give you only a sort of starter version... but if you want it to output something much more sophisticated, you need to do more prompting. The best approach in your case would be to copy his code and tell it to produce Python code of the same complexity and functionality, and to make sure the code is about the same length or longer (this forces it to produce longer code; it'll still be much shorter, but yeah...). And you can then just spam "not enough, upgrade more please" and it will produce better and better code iteratively.