
ChatGPT is destroying my math exams 

Dr. Trefor Bazett
431K subscribers
92K views

Published: Oct 1, 2024

Comments: 534
@DrTrefor 3 months ago
Some debate in the comment section about "smallest integer" vs "least integer", as in interpreting smallest as closest to zero. I stuck with the original source (x.com/ericneyman/status/1804168604847358219) of the question for the phrasing in the video, but it turns out that ChatGPT etc. struggle with every version of the phrasing I've found, and even interpreting it as closest to zero they still don't give what would then be the two answers of -4 and 4. The larger point here is that there does seem to be a real blind spot: so many similar problems presumably have the context of smallest/least natural number or counting number or similar, so, modelling off the training data and giving similar answers, this question confuses it despite its simplicity.
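For concreteness, both readings of the question can be checked with a tiny brute-force search (a sketch; the ±100 search bound is arbitrary):

```python
# Integers whose square lies strictly between 15 and 30.
candidates = [n for n in range(-100, 101) if 15 < n * n < 30]
print(candidates)  # [-5, -4, 4, 5]

least = min(candidates)                        # "least": furthest left on the number line
smallest_magnitude = min(candidates, key=abs)  # "smallest": closest to zero
print(least)              # -5
print(smallest_magnitude) # -4 here (ties with 4)
```

Either way, the LLMs' usual answer of 5 satisfies neither reading.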
@mikeymill9120 3 months ago
Smallest from zero is absolute value
@RunstarHomer 3 months ago
@@mikeymill9120 "Small" means close to zero. 0.00001 is a smaller number than -10000. The latter is lesser, but bigger.
@mmmmmratner 3 months ago
As an electrical engineer, "smallest" means closest to zero more often than not. If I am instructed to choose the amplifier from a list with the smallest error voltage or the smallest input current, I am not looking through datasheets for negative numbers.
@thenicksterd2334 2 months ago
@mmmmmratner Lmao, this is a math problem, not your list of amplifier error amounts. The problem specified integer, which includes negative numbers; the fact that integer was specified should have cued it into thinking about negative numbers.
@anywallsocket 2 months ago
@@DrTrefor the blind spot is in gpt because the blind spot is in humans, overtly exemplified by the comment section
@Null_Simplex 3 months ago
To be fair I got 4 for “Smallest integer whose square is between 15 and 30” since I thought smallest meant closest to 0, not least positive/most negative number.
@ரக்ஷித்2007 3 months ago
Same 😂😂 I instantly answered 4 without giving it a second thought
@tylerlopez6379 3 months ago
I think smallest is purposely misleading language, I wouldn't describe a negative number as being small. It's like saying -5 apples is smaller than 0 apples.
@ரக்ஷித்2007 3 months ago
Yeah, it tricks our mind just like the bat and the ball problem.
@ActuatedGear 3 months ago
@ரக்ஷித்2007 I also think we approach the problem differently when solving "word problems" versus equations. That lends credence to a habit I notice in mathematicians of explicitly translating math problems into equations, or more appropriately here inequality form, that is, mathematical notation for proper clarity.
@vorpalinferno9711 3 months ago
He meant smallest not the modulus of the smallest. You are thinking about the modulus.
@wesleydeng71 3 months ago
Terence Tao said in a talk that AI once helped him solve a problem. He asked AI (don't know which one) how to prove an inequality. It gave a bunch of ideas and mostly garbage. But among those was a suggestion to try generating functions which Tao said he "should have thought of". 😂
@DrTrefor 3 months ago
Oh that’s a great anecdote. Also I think giving ideas for directions to pursue is a great application
@Laminar-Flow 3 months ago
@DrTrefor Maybe there ultimately is some emergent property of these LLMs' transformer architectures & training methodologies that can, when scaled up, give us new and unique solutions to a lot of problems. There are hints right now, but researchers are still bickering over several factors. I used your discrete math course when I took it. Helped so much, and this popped up as recommended; glad I watched. Immediately recognized you from those strong induction proof struggles haha
@jamesboulger8705 8 days ago
AI sounds like it's just fishing. This kind of use of ChatGPT is what people have always been using it for: as a brainstorming device that doesn't require that another person be available to chat. It's faster, so it can comb through all these mathematical ideas faster, but it really is just saying a series of "why not try this" because that term showed up tangentially in some paper. It's bound to eventually say something that might be right, but it's the same kind of brute-force capability we appreciate computers for having.
@Laminar-Flow 8 days ago
@jamesboulger8705 Not really, when you have models capable of reasoning and agency; recursive generation of 100k tokens from a single prompt is bonkers, and I'd stand behind what I said earlier, before o1-preview / Strawberry was announced. Let's make it simple: even without reasoning capabilities, just a model that writes code to analyze math questions performs better than the majority of people in most situations. It's not really brute force, insofar as synthetic data training is a brute-force tactic, but realistically there is a lot more math going on in the background that we don't really even understand than people give credit for. We have had open-source models for two years that could do what you describe in terms of brainstorming, just pulling in fewer outside resources etc.
@magnero2749 3 months ago
When taking Calc 2-3, Linear Algebra, and Differential Equations this past year I would use it to study. Namely, I would ask it to solve a problem, and as it broke the problem up into multiple steps I could spot where it went wrong and this way tailor my study time more efficiently. Before ChatGPT, if I didn't understand a problem I would oftentimes have to read a WHOLE bunch of things I already knew until I got to what I needed. Bottom line is, this is a tool, not a babysitter, and like any tool we need to develop the skill of how to use it.
@DrTrefor 3 months ago
That approach makes a lot of sense to me
@randomaj237 3 months ago
This is what I've been doing as well, using it to study and confirm stuff. Figuring out where it makes errors also makes you feel like you've learned quite a bit.
@ccuuttww 3 months ago
Thinking by yourself is a kind of training. Don't just solve the math and get marks; you need to solve the problem.
@mooseonshrooms 3 months ago
I did the same with it. Often though, my professor would make the problems very unique and I started to find more often than not, generative AI was completely off the mark. Luckily I was able to utilize other resources and still had a very high success rate.
@1495978707 1 month ago
@DrTrefor I'm already through all my classes, but I find it often very useful for aiding learning in this way. Just using it to help get me pointed in the right direction, relevant terms, etc. It's often incorrect, but it is unbeatably efficient at helping get started. It does help, though, that I know enough to generally spot hallucinations and BS.
@bornach 3 months ago
These LLMs are easy to trip up if you give them a problem not in their training data but has a similar structure to another problem that it was trained on. For example I asked Gemini: I have a 7 liter jug and a 5 liter jug. How do I measure out 5 liters of water? It devised a 6 step solution that didn't make any sense at all.
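For anyone curious, a short breadth-first search over jug states (a sketch, assuming the standard fill/empty/pour moves) confirms how trivial this particular stumper is:

```python
from collections import deque

def min_steps(cap_a, cap_b, target):
    """BFS over jug states (a, b); fewest moves to get `target` liters in either jug."""
    start = (0, 0)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (a, b), steps = queue.popleft()
        if a == target or b == target:
            return steps
        pour_ab = min(a, cap_b - b)  # amount transferred when pouring a -> b
        pour_ba = min(b, cap_a - a)
        for state in [
            (cap_a, b), (a, cap_b),          # fill either jug
            (0, b), (a, 0),                  # empty either jug
            (a - pour_ab, b + pour_ab),      # pour a -> b
            (a + pour_ba, b - pour_ba),      # pour b -> a
        ]:
            if state not in seen:
                seen.add(state)
                queue.append((state, steps + 1))
    return None

print(min_steps(7, 5, 5))  # 1 -- just fill the 5-liter jug
```

The same function solves the classic non-trivial variant (3- and 5-liter jugs, target 4) in 6 moves, so the one-step answer here really is a special case the models overcomplicate.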
@DrTrefor 3 months ago
I've noticed similar ones to this, where it is close to a "standard" problem about jugs of water but the solution is so trivial it misses it entirely trying the more complicated approach.
@bravernewmath 3 months ago
(L)LMAO. I just tried this out on GPT 4-o and received a 14-step solution. In response, I asked if it could produce a solution in fewer steps. "Certainly!" it replied in its chipper manner, "Here is a simpler method to measure out exactly 5 liters using a 7-liter jug and a 5-liter jug", whereupon it proceeded to give me... a 𝟐𝟎-step solution.
@driksarkar6675 3 months ago
@@bravernewmath That's interesting. I got a 10-step solution (that doesn't work). After repeatedly asking it to find solutions with fewer steps, the solutions I got had 8, 6, 6, 3, 6, and 1 steps (in that order). It was insistent that its 6-step solution was the shortest valid solution until I flat out told it it wasn't lol
@bravernewmath 3 months ago
That's funny. I pushed a little more afterwards, eventually asking it for a 1-step solution. I was told that no such solution was possible. I responded, "Oh, it's possible, all right. Think hard, and I'll bet you can figure it out." Interestingly, after that "hint", GPT answered it correctly.
@epicgaming7813 2 months ago
I was asking it this question and asked how it could do it in one step. It kept giving 7-step responses, and I kept saying "that's more than one step." Then it gave me a notification that I had reached my message limit and would be downgraded to GPT-3.5. It then instantly figured it out after I was downgraded…
@theuser810 3 months ago
The term "small" is ambiguous; it is usually used in the context of positive numbers.
@blblblblblbl7505 3 months ago
Yeah small to me implies low absolute value. "Lowest integer" or "least integer" would be less ambiguous I think.
@Craznar 3 months ago
integer includes +ve and -ve numbers, so it clearly includes negative numbers.
@salmonsushi47 3 months ago
Maybe changing the prompt to "lowest" might help
@Laminar-Flow 3 months ago
@blblblblblbl7505 @theuser810 Not when you have a specified domain (literally the integers, as stated in the problem), even though it isn't in formal notation as an image (which SHOULD help the LLM lol). In terms of linear algebra, this inherently includes the negatives, by definition. A human taking that course would know this. The set of integers Z = {…,-3,-2,-1,0,1,2,3,…} would be given as one of the cursory definitions in the course… Also, if you want to argue about magnitude, magnitude doesn't matter for this problem any more than the cardinality of the set |Z| does, IMO; in fact it doesn't matter at all. You could ask the same question about the smallest square but for the real numbers, and the only answer for that is what the GPT actually spit out. "Small" in the context of negative numbers is a trick used by professors, but it's an easy correct question on an exam lmao. I made it thru that in an ass-kicking STEM degree and I think the poor LLM should too 😂
@anywallsocket 2 months ago
No, 'lowest' wouldn't help; it's just a bad question, and he's being obstinate about that fact
@baumian. 2 months ago
One of the most hilarious things you can do with ChatGPT is to ask "are there any primes whose digits sum to 9?". It will say yes, and will spew out lots of primes and then realize their digits don't sum to 9. Or it will spew out lots of numbers whose digits sum to 9 and then realize they're not prime :D
@carultch 2 months ago
The reason there can't be any primes whose digits sum to 9 is that the only numbers whose digits sum to 9 are multiples of 9 (casting out nines). Since 9 itself isn't prime, this rules out every number whose digits sum to 9 from the prime number set.
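A quick exhaustive check (a minimal sketch; the one-million bound is arbitrary) confirms the argument:

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def digit_sum(n):
    return sum(int(d) for d in str(n))

# Every number with digit sum exactly 9 is divisible by 9 (casting out nines),
# and 9 itself is not prime, so this list must come back empty.
hits = [n for n in range(2, 1_000_000) if digit_sum(n) == 9 and is_prime(n)]
print(hits)  # []
```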
@potatomudkip 2 months ago
I tried it and now it's stuck in an infinite loop, which is pretty funny
@halfsourlizard9319 11 days ago
o1 is able to handle this easily: 'No, there are no prime numbers whose digits sum to 9. This is because any number whose digits sum to 9 is divisible by 9, making it composite (not prime).' (explanation + examples followed)
@paulej 3 months ago
I had a conversation with Bard (now Gemini). I was curious if it could solve a Calc I problem. It got it wrong. I told it and it said, "You're right!" and re-worked it. It got the right answer, but the steps were wrong. I told it. Amazingly, it understood exactly what step was erroneous, but then got it wrong again. I went back and forth a few times and it did finally get it right. It's interesting to observe. Anyway, I do appreciate the breadth of knowledge these AI systems have, but I cannot fully trust any of them. Everything has to be checked.
@DrTrefor 3 months ago
Ya, the "everything has to be checked" part is definitely true. It can LOOK pretty good, but be utter nonsense.
@no_mnom 3 months ago
@DrTrefor I think saying that everything needs to be checked is not enough, because you also need to know enough about the subject to know you are not being fooled by it. And I doubt it will ever be perfect; after all, what we mean when we say "Solve ___" is far more complex, and we expect the computer to understand on its own what we meant.
@ReginaldCarey 3 months ago
It’s really important to realize, it’s not checking its answer for correctness. It’s making a prediction of what you want given its bad answer and your response to that answer. The “you’re right” component is a feature of the alignment process.
@glarynth 2 months ago
I wonder how far you'd get wrapping it in a script that keeps saying "Are you sure?" until it says it is.
@davidherrera4837 2 months ago
I think these computations could be useful the way quantum computers are, in theory, for NP problems. If it involves guessing or looking for something, maybe ask the computer to do it, but it should be a problem whose answer can be checked in a straightforward way. Problems like "find the smallest" can be tricky because it is not clear how to check them. It certainly could give you a head start, so that you know how large you would conceivably need to look, but it does not guarantee that it is the smallest (or even that it is a solution at all). Trust only after verifying.
@DarkBoo007 3 months ago
I had a student use ChatGPT to complete a related-rates problem in AP Calculus, and ChatGPT definitely messed up the basic arithmetic. My student was so surprised at how it failed to multiply 133 and 27. I use AI to reinforce the idea that students must understand the concepts and reasoning behind each math problem, especially when ChatGPT assumes things that were not assumed in the actual problem.
@AD-wg8ik 3 months ago
Free version or paid version? GPT-4 makes far fewer mistakes
@DarkBoo007 3 months ago
@@AD-wg8ik I believe it was the free version
@johnchestnut5340 2 months ago
I studied before AI was a thing. I had other tools. I was supposed to find the resonant frequency of a circuit. I just wrote the equation and turned in a graph with the resonant frequency clearly shown. Computers are neat tools, but I still had to know what equation to use and what the graph represented. I prefer books. I don't know how anyone can trust an Internet reference that anyone can edit.
@Lleanlleawrg 2 months ago
I've used it in a little experiment of mine, and it's given me wildly different answers for the same setup every time, suggesting it's still deeply broken for math.
@TayaTerumi 2 months ago
@Lleanlleawrg If you used a raw LLM rather than a chatbot, you could set the temperature to 0 and have it give the same answer every time. High temperature is not a bug of chatbot models; it's a feature. The OpenAI API allows you to control the temperature, last time I checked.
@Markste-in 3 months ago
How do we know that the published LLMs haven't seen the math-problem datasets (just a little bit) during training, so that they appear better than the competition on the benchmark? They are more or less all closed source.
@dominikmuller4477 2 months ago
I mean, the proof that Null(A) is a subspace has to literally be part of ChatGPT's training set, so I don't think asking it about that will give you any information about its mathematical reasoning. I tried some rather interesting probability problems on it, things designed to trick human intuition in order to demonstrate that in probability theory you shut up and calculate rather than trusting your intuition. It did kind of well on the standard ones and miserably failed as soon as I did a minor variation that did nothing to increase the difficulty. This was GPT-4o. For reference, it got right: "A family has two children. One of them is a girl. What is the probability that the other one is a girl?" (1/3). It got almost right (and got right with some conversation): "A family has two children. One of them is a girl born on a Sunday. What is the probability that the other one is a girl?" (13/27). These are both standard questions that it would have had somewhere in its training data. So I did a minor variation on the second one: "A family has two children. One of them is a girl born on a Sunday. What is the probability that the other one was born on a Sunday?" (1/9). This one it got wrong, and it only got it right after intense discussion of its mistakes. You solve all of these the same way, by counting possibilities and ignoring your intuition. But the last one is not standard and probably not in its training data, and it got lost immediately, showing that it did not generalize the methods it used to successfully "solve" the first two problems (which were probably just solved by someone in its data set).
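All three answers can be verified by exact enumeration rather than intuition; a short sketch with exact fractions (day 0 standing in for Sunday):

```python
from fractions import Fraction
from itertools import product

# Each child is a (sex, weekday) pair; all 14 combinations equally likely.
children = [(sex, day) for sex in "GB" for day in range(7)]
families = list(product(children, repeat=2))  # 196 equally likely families

def conditional(event, condition):
    """Exact P(event | condition) over the 196 families."""
    sample = [f for f in families if condition(f)]
    return Fraction(sum(1 for f in sample if event(f)), len(sample))

both_girls  = lambda f: all(sex == "G" for sex, _ in f)
both_sunday = lambda f: all(day == 0 for _, day in f)
a_girl      = lambda f: any(sex == "G" for sex, _ in f)
a_girl_sun  = lambda f: ("G", 0) in f  # at least one girl born on Sunday

p1 = conditional(both_girls, a_girl)       # 1/3
p2 = conditional(both_girls, a_girl_sun)   # 13/27
p3 = conditional(both_sunday, a_girl_sun)  # 1/9
print(p1, p2, p3)
```

Counting is the whole method: the conditioning step shrinks the sample space (to 147, 27, and 27 families respectively), and the fractions fall out.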
@DrTrefor 2 months ago
Ya, "kind of well on standard ones and miserably on nonstandard ones" aligns well with my experience
@minerscale 2 months ago
These problems make me so uncomfortable. Even after having seen many problems like it, I just had to say '50%', right? And then I went and did the calculations, and indeed they're not intuitive. Horrifying.
@KraydakStorm 27 days ago
@minerscale Your unease comes down to these problems actually being ill-defined. If we have a bunch of families (GG, BG, GB, BB) we can define the probability of a girl being flagged. pFlagGirl(GG) = 1, obviously. pFlagGirl(BB) = 0, obviously. If pFlagGirl(BG) = pFlagGirl(GB) = 0.5, then 50% is correct. If they are 1, 1/3 is correct; if they are 0, 100% is correct. This is actually super important, and is a great example of why you have to be careful how you filter/select your data.
family = next(girl for girl in girls_with_one_sibling).family
is NOT the same as
family = next(family for family in families_with_two_children if any(child.isGirl() for child in family.children))
@michaelcharlesthearchangel 3 months ago
People should want to learn rather than cheat.
@ரக்ஷித்2007 3 months ago
@@michaelcharlesthearchangel Exactly, where has nearly everyone kept their conscience?
@Maniac_l23. 26 days ago
@ரக்ஷித்2007 I mean, with all the emphasis on standardized tests, permanent records, and career pathways getting locked out every 5 seconds, I would understand why a student would want to take the easy way out. Why do something you don't like morally and risk failing, maybe having to redo the year, maybe not being accepted into tertiary study/apprenticeships, maybe losing a crucial award like a scholarship, when you could do it immorally and pass? Not that you should cheat, just that there are a lot of reasons a student might consider it. If they think the "system" prefers grades over learning, the student might think the same thing.
@walter274 3 months ago
ChatGPT struggles in calculus. I gave it an area problem in polar coordinates, and it kept using a symmetry argument but didn't execute it correctly.
@DrTrefor 3 months ago
I've noticed it sometimes really struggles when there is a large body of training data using other methods. For example, with geometry problems there are millions of high-school-level ones, and it sometimes tries those techniques when calculus makes it simple.
@walter274 3 months ago
@DrTrefor I agree. When the training data is pretty sparse it goes really off the wall. At least it did in 3.5. I'm using information theory, which is relatively obscure, in one of my papers, and when I was talking to ChatGPT about it, it kept switching notations mid-example. It became very incoherent. Overall I still find it to be a valuable tool.
@bornach 3 months ago
@DrTrefor It doesn't have to be a large body of training data. Just one example can throw it off. I asked both Bing Copilot and Google Gemini: "5 glasses are in a row right side up. In each move you must invert exactly 3 different glasses. Invert means to flip a glass, so a right-side-up glass is turned upside down, and vice versa. Find, with proof, the minimum number of moves so that all glasses are turned upside down." Both AIs mess this up badly because their training data contains the answer for flipping 4 glasses, which has a completely different solution.
@kubratdanailov9406 3 months ago
@@bornach it's almost like LLMs are just stochastic parrots that are waiting for knowledge to be "put into them" via their training data rather than being able to synthesize new knowledge from the building blocks of knowledge (i.e. facts, logic). To stump ChatGPT in math, all you need to do is to grab some "offline" book on preparation for competitions (e.g. any non-English competition math book), translate the question and ask it to it. When all you have access to are millions of problems people have solved, "true" intelligence would be able to solve every other problem from that same level. Chat GPT fails at that because... :)
@andrewharrison8436 2 months ago
@bornach That's a nice twist (pun intended). Will add that to my repertoire. Thanks.
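The five-glass puzzle quoted above can be settled by brute force; a breadth-first-search sketch (bit i set means glass i is upside down) gives 3 moves for five glasses, versus 4 for the classic four-glass version the models pattern-match to:

```python
from collections import deque
from itertools import combinations

def min_moves(n_glasses=5, flip_per_move=3):
    """BFS over bitmask states; each move flips exactly `flip_per_move` glasses."""
    goal = (1 << n_glasses) - 1                  # all glasses upside down
    flips = [sum(1 << i for i in combo)
             for combo in combinations(range(n_glasses), flip_per_move)]
    seen = {0}
    queue = deque([(0, 0)])
    while queue:
        state, moves = queue.popleft()
        if state == goal:
            return moves
        for f in flips:
            nxt = state ^ f                      # flipping is XOR
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, moves + 1))
    return None

print(min_moves())      # 3, e.g. flip {1,2,3}, then {1,2,4}, then {1,2,5}
print(min_moves(4, 3))  # 4 -- the different answer for the four-glass classic
```

Two moves can never work for five glasses: the XOR of two weight-3 masks has even weight, so it can't equal the weight-5 goal.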
@ReginaldCarey 3 months ago
After digging on it, it doesn’t seem to understand the geometric significance of geometric products. It seems to be parroting the most common response.
@urnoob5528 3 months ago
For real, it echoes the most common misconceptions for every subject if you ask it
@oldadajbych8123 3 months ago
I gave ChatGPT-4o a simple engineering problem: calculate the diameter of a shaft for a certain power at a given rpm, allowed stress, shear modulus, and maximum allowed relative torsion angle. First it asked for the length; I said that it is not needed. Then it used the correct formulae for both the strength and deformation criteria, but it made a 6th-grade mistake when moving a fractional denominator in an equation. I pointed out the error. It correctly modified the equations, but mixed the units (incorrect use of non-basic units, and mixed SI and imperial). After a little discussion it got the substitution right. Then came the 3rd and 4th roots to get the answers for both criteria, and it was absolutely off; I suppose it is just guessing the result. Other calculations are also not precise compared to what you get from a calculator or a mathematical program. But it always sounded so confident as it described a calculation process containing errors. I strongly suggest not using these AI models for calculations if you don't know what you are doing. It is similar for programming.
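For readers unfamiliar with the two criteria (with their cube and fourth roots), here is a sketch using the standard solid-round-shaft formulas; all the input numbers below are made up purely for illustration, not taken from the comment:

```python
from math import pi

# Hypothetical inputs, for illustration only:
P = 10_000.0                    # transmitted power, W
rpm = 1450.0                    # shaft speed
tau_allow = 40e6                # allowable shear stress, Pa
G = 81e9                        # shear modulus, Pa
theta_allow = 0.25 * pi / 180   # allowed twist, rad per metre of length

T = P / (2 * pi * rpm / 60)     # torque from power and angular speed, N*m

# Strength criterion: tau = 16 T / (pi d^3)  ->  solve for d (cube root)
d_strength = (16 * T / (pi * tau_allow)) ** (1 / 3)

# Stiffness criterion: theta' = 32 T / (G pi d^4)  ->  solve for d (fourth root)
d_stiff = (32 * T / (G * pi * theta_allow)) ** (1 / 4)

d = max(d_strength, d_stiff)    # the larger required diameter governs
print(f"strength: {d_strength * 1000:.1f} mm, stiffness: {d_stiff * 1000:.1f} mm")
```

With these sample numbers the stiffness criterion governs (roughly 37 mm vs 20 mm), which is exactly the kind of comparison the LLM's guessed roots would get wrong.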
@Electronics4Guitar 2 months ago
I have tested ChatGPT by giving it elementary (about sophomore level) analog design problems and the results are absolutely laughable. Even when I very, very tightly constrain the design task it fails miserably. It usually responds like a student that thinks his professor knows nothing and that he can BS his way through the assignment.
@ThomasVWorm 2 months ago
It does not respond like a student who thinks his professor knows nothing. ChatGPT does not give a damn about the person it is having a conversation with. And it does not give a damn about anything, not even its own responses. It just creates an output. What you get is what humans call brainstorming: unfiltered output.
@randyzeitman1354 2 months ago
Let me get this right: the AI failed because it didn't take your question literally enough and instead behaved like a normal person?
@trucid2 2 months ago
That's where AI is at today.
@halfsourlizard9319 11 days ago
Right; the metric isn't 'is the LLM perfect?' it's 'is the LLM better than the available humans?' ... for many tasks -- and many sets of humans -- the answer is already 'yes'.
@AnkhArcRod 3 months ago
You do realize, however, that Google's own AlphaZero is a separate simmering monster that plays Go, chess, and Starcraft and aced the IMO geometry exams. LLMs are not the real danger here.
@DrTrefor 3 months ago
I’m particularly intrigued by hybrid approaches too
@ianmoore5502 3 months ago
@@DrTrefor man gets it
@denysivanov3364 3 months ago
Actually not. The AlphaZero architecture can be used to learn to play chess, Go, and shogi, but those were three different networks plus search engines (AI systems 😀)
@mouldyvinegar5665 3 months ago
I strongly disagree with the notion that LLMs are not the real danger. AlphaGeometry was made of two parts, a symbolic deduction engine and a *language model*, so if LLMs aren't a danger then AlphaGeometry isn't either. Similarly, it is perhaps misleading to say it aced the IMO problems. It would solve near re-worded problems (but the fact that they reworded the IMO problems is itself a bit of a red flag), and the proofs are by no means good proofs (I recommend the video by Another Roof). Additionally, the strength of LLMs is their generality. DeepMind has certainly done a lot when it comes to making general game engines, but I would be sceptical that any alpha-whatever can be as cross-modal as the best LLMs. Finally, LLMs being able to write problems is a significantly more relevant issue for the human populace than being able to play chess at an absurdly high level. Whether or not the hype and fear are justified, LLMs will have a significantly larger impact on humanity, because they are so good at mimicking humans, than near enough any other AI model or paradigm.
@WoolyCow 3 months ago
@mouldyvinegar5665 "The proofs aren't good proofs"? Wdym, I thought spamming a bunch of shapes until something works out is how all you math people do things
@birhon 3 months ago
Thanks for pointing me to Wolfram's custom GPT! Definitely, combining non-LLM tools for reasoning with LLM tools for interpreting will be the key.
@DeclanMBrennan 3 months ago
A key anyway. Many other specialist "reasoning" mechanisms will probably also be needed before we approach anything that could be called "AGI".
@soumikdas3754 3 months ago
@DeclanMBrennan AGI, you mean
@DeclanMBrennan 3 months ago
@@soumikdas3754 Thanks for pointing out the typo.
@535Salomon 29 days ago
Once I asked ChatGPT to solve some calculus problems for me... I realized that I had to learn to solve them myself instead because of how bad it was.
@MichaelMarquez-m3b 1 month ago
Is “smallest” integer proper usage? That would imply magnitudes to me. It seems to me that “lowest” integer would be the correct usage.
@GregSpradlin 2 months ago
I don't understand the problem. Give exams in person and don't allow any electronic devices.
@Commander6444 2 months ago
Ironically, today's LLMs are _far_ less useful and reliable for undergrad math than WolframAlpha or Chegg, things that have both existed for a decade and a half. It's true that public awareness of AI has definitely increased since then, leading to more usage, but the problems with math pedagogy in 2024 are the same ones that existed in 2009, just at a different scale.
@toolittletoolate3917 2 months ago
We’re being prepped for asocial living in fully engineered societies. You will have a ‘space’ within which you will do everything, linked to other worker drones via your Universal Digital Device. No need to have any actual F2F contact; your DNA will be harvested at decanting. No need for messy, germ-laden sex! You will be ‘educated’ by the state’s AI to shape your mind to fit into your designated slot. As some wannabe Emperor once said, you will own nothing - not even your own DNA - and you will be happy!
@NinjaBear1993 1 month ago
Right!!
@fenzelian 1 month ago
Yeah, the tools aren't better; it's just a lot easier to put in minimal effort and get a result that looks correct.
@blasphimus 1 month ago
No more homework; more in-person tests to stop LLMs from being used on homework. Or no homework at all, and only 4 tests worth 25% of your grade each.
@tylerbird9301 3 months ago
I think the lack of consideration of negative solutions has plagued us humans for centuries. I didn't consider -5. Also, as @Null_Simplex says, there is ambiguity between smallest in magnitude vs how far left on the number line.
@stenzenneznets 2 months ago
There is no ambiguity ahahah
@andrewharrison8436 2 months ago
When you consider how long zero took to be accepted - negative numbers, probably still witchcraft.
@bartholomewhalliburton9854 2 months ago
I asked ChatGPT whether the box or product topology was finer, and it kept telling me the product topology is finer. Then, when I asked it to give me an example, it used a finite product. ChatGPT does not know its topologies 😭
@davidherrera4837 2 months ago
I suppose it is a data-set issue. You would think it might have learned the basic facts from Wikipedia, though.
@messapatingy 3 months ago
Small is definitely a word about size (closest to zero), IMHO. So the question is invalid, as there isn't a single "the smallest": both -4 and 4 are answers.
@Not_Even_Wrong 2 months ago
Here, try this; the result will be wrong every time: "give me two large primes", "multiply them", "divide the result by the first prime". It will make an obvious mistake, like the first result being a non-integer or the second result not returning the other prime. Don't be fooled by LLMs...
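The failure is purely arithmetic: Python's exact arbitrary-precision integers do the whole exercise in three lines (the two primes below are just sample values):

```python
p = 1_000_003   # a prime just above one million
q = 1_000_033   # another prime nearby
n = p * q

# The product must divide exactly, and dividing recovers the other prime.
assert n % p == 0
assert n // p == q
print(n)  # 1000036000099
```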
@halfsourlizard9319 11 days ago
Have you tried that with humans? I anticipate severely disappointing results.
@RV2O 17 days ago
You should revisit this idea now that OpenAI has released their o1 "reasoning" models. They seem to be much more effective at solving more elaborate problems, like some mentioned in this video. However (spoiler alert), the so-called "reasoning" is basically just an (almost) normal language model that has the ability to self-prompt itself. Still worth checking out, though.
@chudchadanstud 2 months ago
The other day it failed at working out how many years 2^64 milliseconds is.
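For reference, that calculation is a one-liner (using Julian years of 365.25 days):

```python
ms = 2 ** 64                              # 18,446,744,073,709,551,616 milliseconds
seconds = ms / 1000
years = seconds / (365.25 * 24 * 3600)    # Julian year
print(f"{years:.4g}")  # about 5.845e+08, i.e. roughly 584.5 million years
```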
@doraemon402 3 months ago
4:25 That answer is wrong because there are 4 paths back to the original point, not 2. Also, since when isn't the "smallest" number the one closest to 0? 4 and -4 is the correct answer. I've always used low/high for order and small/big for magnitude.
@Rodhern 3 months ago
When I was young, pocket calculators were still considered (almost) a novelty. One way to make mathematics examinations, or indeed any science related examination harder, was to include extraneous information in the questions. Sometimes this 'trick' was even considered unfair (and often it could be unfair, because of poor quality questions, but that is a topic for another day). The thing is, students en-masse would get caught out, waffling on about the irrelevant question parts; not to remark that they were irrelevant, but to allude that they had taken all this information into account in their answer. Now, I am curious, how do the LLMs deal with such scenarios?
@baronvonbeandip
@baronvonbeandip 3 months ago
Guess we need to start asking better questions of students. Like, you know that deadzone of math education between 4th and 9th where they don't learn a single new thing? Why not teach them proofs in elementary number theory? AI sucks at proofs right now.
@gackolpz
@gackolpz 2 months ago
I tried to use LLMs to help me learn calc-based physics and gen chem 2; it did not work, lol. It would always make really silly mistakes.
@tomholroyd7519
@tomholroyd7519 3 months ago
LLMs don't check themselves. It's hugely expensive
@OMGclueless
@OMGclueless 2 months ago
The problem at 4:38 seems more like a trick question than a reasonable math problem. The problem says "There is the letter A in the top left corner" but it doesn't say whether it is the top left corner of the gray square, or of the whole checkerboard. The most sensible interpretation is that the letter A is in the top left of the grey square since this makes the most sensible math question. But I would think given that prompt an answer of "The probability is 0 because Dora can't make a full circuit of the gray square in 4 steps starting in the top left of the board" is also a reasonable answer. The LLM shown didn't do exactly either of those, it calculated the probability of making a circuit of the top left square of the board and assumed it was grey, but either way the prompt doesn't actually faithfully describe the diagram you showed of this problem so the whole question seems a bit tricksy and unfair.
@AndrewKay
@AndrewKay 21 days ago
It's particularly misleading to say "a 3x3 checkerboard" when what the problem really means is that there are 4x4 positions Dora can move between. If I hadn't been primed by seeing the diagram first, I would have said it was a trick question, too. I think LLMs do particularly badly at trick questions or badly-written questions because the vast majority of solutions to questions which look like this, don't begin by saying "the question is ambiguous" or "the question is misleading".
@BrickBreaker21
@BrickBreaker21 2 months ago
I have to disagree with the smallest integer answer. I think it is 4, because -5 has a larger magnitude than 4. The better question would be "what is the least integer..."
@joshrobles6262
@joshrobles6262 3 months ago
I've given it some of my non-standard calculus 1 and statistics problems and it does very well. I'm guessing this still comes down to the training data though. Much more of these problems out there than linear algebra.
@DrTrefor
@DrTrefor 3 months ago
I’ve heard from my colleagues that it is particularly strong at statistics, up to about the 3rd-year level
@NandrewNordrew
@NandrewNordrew 2 months ago
I personally think its more accurate to say that 4 is smaller than -5. 4 is *greater than* or *more* than -5, but I think it makes sense to say that “bigness” is a measure of absolute value. Yap: This makes sense especially if you take into consideration complex numbers. When multiplying two complex numbers, the *amplitudes* multiply. Numbers with an absolute value of 1 never change in absolute value when taken to any power, etc…
@vertigofy6699
@vertigofy6699 26 days ago
yeah. people use phrases like "large negative number" all the time, too, certainly not to mean -0.01
@andrewtristan6375
@andrewtristan6375 2 months ago
I can see what you are getting at with -5 being the correct answer. However, in mathematics, 'small' does not have a singular definition. Often, 'small' refers to absolute value; that is, 'small' often refers to the magnitude of an element of a set. Along these lines, 'small', void of more rigorous context, is not a valid binary relation in the way the 'less than' binary relation is.
@DrTrefor
@DrTrefor 2 months ago
It says more or less the same thing if you say “least integer” too
@zuckmarkerburg7566
@zuckmarkerburg7566 2 months ago
@@DrTrefor I believe the problem to be badly worded as well. If you strip away the English riddles, then you are left with the following question, which GPT 4o easily answers. "Minimize x, where x is an integer and 15
@RAFAELSILVA-by6dy
@RAFAELSILVA-by6dy 2 months ago
Using "smallest" instead of "least" is a form of trick question, IMO. I could not, without looking it up, tell you what the formal mathematical definition of "smallest" is. The symbol < means "less than". If you asked me whether x < y could also mean x is "smaller" than y, I simply would not know. Or, does x is smaller than y mean |x| < |y|? I honestly would not know without looking this up.
@sigontw
@sigontw 3 months ago
I am not teaching math, but teaching statistics and data analysis in professional schools for healthcare providers. Many clinical/counseling psychology, social work, and nursing students do have math anxiety. That is why I started to incorporate generative AI in my class. Even clinical healthcare providers need to understand quant methods and have basic programming skills, so they can do well in their jobs in the future and help improve them, not just follow what they were taught 10 years ago. But, alas, it is such an uphill battle to teach them stats reasoning and programming. I am very grateful we have these new tools as their 24/7 TAs, especially when they are stuck in programming at 12:00 AM.
@ReginaldCarey
@ReginaldCarey 3 months ago
I just pressed GPT4o on the product of two vectors. I tried several prompts. It may be able to answer classic linear algebra questions but it struggles to recognize that Clifford Algebra is a superset. As a result. Responses to the product of u and v where they are vectors, kind of delivers the party line. It’s not until you add the word Clifford to the prompt does it begin to give the right answer. But, now that I’ve provided the word Clifford in the context of the conversation it keeps answering in terms of the geometric product.
@boltez6507
@boltez6507 3 months ago
The thing is, ChatGPT would never be able to come up with logical reasoning for a new approach or thing.
@AndrewKay
@AndrewKay 21 days ago
4:38 The problem is a lot less clear when presented with this wording without the diagram. A 3x3 checkerboard with the letter A "in the top-left corner" suggests the letter A is inside the square; there is no clarification that "the top-left corner" means a vertex rather than a grid cell, and no clarification that it is the top-left vertex of the grey centre square rather than the top-left vertex of the whole checkerboard. Particularly, I would expect a game played on a checkerboard to have the pieces inside the squares, not at the vertices, because that is how Checkers works. My answer to the word problem, sans diagram, would have been "the probability is zero, because it takes 8 steps to walk completely around the centre square". I think presenting the problem with a diagram first, primes viewers to not notice the ambiguity or misleadingness of the problem statement given to the AIs.
@XKS99
@XKS99 1 month ago
What is "smallness" mathematically?
@tomholroyd7519
@tomholroyd7519 3 months ago
Linear algebra is not a high bar
@Penrose707
@Penrose707 3 months ago
I tend to agree if only due to the fact that linear algebra is a rather verbose subject. Many actually struggle due to this fact. It is not computationally restrictive in any meaningful sense. At least not in the same way as tackling a tricky indefinite integral may be
@allstar4065
@allstar4065 3 months ago
Linear algebra isn't hard it's just dense
@bincyelizabeth19799
@bincyelizabeth19799 19 hours ago
Watching from Kerala India 🇮🇳 Bincy Elizabeth Mathew, Keep it up...
@tomholroyd7519
@tomholroyd7519 3 months ago
I find them almost too agreeable. Claude 3.5 has this thing where it always asks you a question at the end, to keep things going I guess, until it said "Sorry that's too many questions today, come back tomorrow". I don't need the whole first paragraph of the response to be a repetition of my question
@DrTrefor
@DrTrefor 3 months ago
haha ya they really want you to pay for the upgrade:D
@talharuzgarakkus7768
@talharuzgarakkus7768 1 month ago
You can enhance the mathematical reasoning capabilities of open LLMs by training them with high-quality math datasets. For example, Qwen-2 7B Math or a Llama model fine-tuned on MetaMath data. While these models are designed for general purposes, targeted training can significantly improve their effectiveness.
@voidzennullspace
@voidzennullspace 2 months ago
I'll stop using AI to explain gaps in my knowledge or help me with problems when instructors get better at teaching mathematical concepts and promoting genuine fairness in the classroom (changes in grading system for example). Moreover, teachers are using kuta software and AI to actually WRITE their exams so if I as the student can't use it, then the teacher shouldn't be able to use it either. Academia is broken and we have all seen it...maybe teaching students with true understanding in mind is better than teaching to simply pass some BS examination that means nothing anyway.
@ianfowler9340
@ianfowler9340 3 months ago
"Indispensable" tool?? I would have said "convenient" tool. If they truly are indispensable, then we are in a LOT of trouble.
@urnoob5528
@urnoob5528 3 months ago
As an engineer, they are more of a toy than a convenience, because they never get shit right. Just do your own thinking and research; you'd be a better engineer (or whatever) that way.
@Dobby_zuul
@Dobby_zuul 2 months ago
It’s trained on math, Calculus, DE, LA etc, I’m sure trained on millions of problems, of course it’ll get most things right.
@daalhead1098
@daalhead1098 1 month ago
Junior studying statistics in the UK. gpt-4o is able to do practically anything I throw at it and is incredibly good at teaching also. Markov chain and stochastic process, easy. More formal statistics easy.
@DrR0BERT
@DrR0BERT 3 months ago
I nearly spit out my drink when I saw the calculators with the infamous 6÷2(1+2) viral problem. I commented on it when you posted it many years ago, and I am still getting comments that I am wrong.
@johnanderson290
@johnanderson290 2 months ago
The correct answer is 9, right? (According to the order of operations that I learned.)
@DrR0BERT
@DrR0BERT 2 months ago
@@johnanderson290 In my opinion you are correct, but the problem is ambiguous. Dr. Trefor has a video on this. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Q0przEtP19s.html
@carultch
@carultch 2 months ago
@johnanderson290 There is no correct answer, since the notation is ambiguous. There is no consensus on whether multiplication implied by juxtaposition has special priority over division (PEJMDAS), or whether all multiplication is treated the same regardless of notation (PEMDAS). If you follow PEMDAS, the answer is 9; if you follow PEJMDAS, the answer is 1. Middle school teachers, particularly in the US, teach PEMDAS to keep it simple, while professional publications use PEJMDAS all the time.
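The two conventions correspond to two different explicit parenthesizations, which any interpreter evaluates unambiguously; a quick illustration (mine):

```python
# 6÷2(1+2) under the two conventions, written with explicit parentheses
pemdas = 6 / 2 * (1 + 2)       # (6 / 2) * 3: division first, left to right
pejmdas = 6 / (2 * (1 + 2))    # juxtaposition binds tighter: 6 / 6
print(pemdas, pejmdas)         # 9.0 1.0
```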
@johnanderson290
@johnanderson290 2 months ago
@@carultch Thanks! I appreciate your explanation! 👍
@pierfrancescopeperoni
@pierfrancescopeperoni 3 months ago
-5 ain't smaller epsilon than 4, mate.
@ianfowler9340
@ianfowler9340 3 months ago
I think the bigger question here is what AI, ChatGPT, etc. will look like 5-10 years from now. At present, they are still in their infancy, and as such they will often mess up, be confused, and return nonsense. A lot of us would like to think that the human brain, with all of its complexity, adaptability, creativity, openness to new ideas, self-awareness, etc. (the list goes on), will reign supreme over time. But 10 years from now? I'm not so sure when it comes to Mathematics, Literature, Music... I seem to recall that Geoff Hinton bailed a year ago, and I'm sure that Turing is rolling over in his grave.
@jotaro6390
@jotaro6390 13 days ago
Relatively recently, Terence Tao announced a competition for AI programs for solving olympiad-level math problems. So we don't have to wait too long to see groundbreaking results
@doce7606
@doce7606 1 month ago
No, this video will not go obsolete, because it has, at least for me, for the first time discussed some of the deepest philosophical and futurological questions raised by the entrance of AI into mathematics, namely: (i) philosophy of mathematics: does a clearer 'mathematical logic' emerge, such that mathematics is unified and AI will come to solve/research outstanding problems; and (ii) futurology: vector calculus and probabilistic reasoning will be central to future full robotic/cybernetic technology. Will this be the new frontier for those seeking to build autonomous robots, or will more bespoke solutions be needed? Great vid
@bartekabuz855
@bartekabuz855 3 months ago
Hey, student here. Chat gpt seems to know only standard questions but is clueless when asked about nonstandard problem. The worst thing is she can't confess when a problem is too hard. Instead she outputs an incorrect solution
@bornach
@bornach 3 months ago
Yes I've noticed this of all Large Language Models. They basically memorise answers to questions and have to piece together answers by recognising patterns in your question that are similar to questions it trained on. Bing Copilot got this wrong: "Two American coins add up to 26 cents. Neither is a penny. Is this possible?" because it regurgitated the answer to a riddle that sounded similar. Google Gemini got the correct answer, but then tripped up on "Three American coins add up to 31 cents. Two are not pennies" by trying an odd/even argument to explain how it was impossible.
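Both riddles are small enough to settle by exhaustive search; a sketch (mine) over the common US denominations:

```python
from itertools import combinations_with_replacement

coins = [1, 5, 10, 25, 50]   # common US denominations, in cents

# Two coins summing to 26 cents, neither a penny: no solution exists,
# unlike the classic 30-cent riddle the models pattern-match against.
two = [c for c in combinations_with_replacement(coins, 2)
       if sum(c) == 26 and 1 not in c]
print(two)      # []

# Three coins summing to 31 cents, exactly two of them non-pennies:
three = [c for c in combinations_with_replacement(coins, 3)
         if sum(c) == 31 and c.count(1) == 1]
print(three)    # [(1, 5, 25)]
```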
@steveftoth
@steveftoth 3 months ago
That’s because LLMs are, at their heart, a search engine, not a reasoning or computation engine.
@adnan7698
@adnan7698 3 months ago
You made me feel weird by calling it a she
@bartekabuz855
@bartekabuz855 3 months ago
@@adnan7698 I think it's "she" bc she talks a lot more than necessary
@Not_Even_Wrong
@Not_Even_Wrong 2 months ago
Every typical math test problem was in the data set 10,000 times; that's why it can solve those. Anything else, it gets wrong
@Philoreason
@Philoreason 1 month ago
Still mind-boggling to me why logical reasoning emerges from a large LANGUAGE model. An LLM is all about conditional probability: what's the most likely next word given the previous ones (of course the actual model is more complicated than that, with tons of transformers...), but that's the basic idea. How does logical reasoning arise from that??? If it can solve problems "logically" that it hasn't seen before, then it's truly scary.
@izme1000
@izme1000 14 days ago
I also wouldn't consider negative numbers in the context of "smallest". It makes sense in retrospect, but it isn't intuitive.
@Null-h6c
@Null-h6c 3 months ago
Copilot just did not connect the math definition with the academic problem. It's a loophole. The smallest number is 1, but there are negative numbers; academia practiced a lie by omission. Take squares: Copilot could name me 3 possibilities, but it guesstimated it was the square root, when it could have been the other two. Two negatives always result in a positive? That is the most constructive method? Come on. Stay with numbers, avoid negative numbers, just define a starting point. Closer to 100% something? Then the starting point is 100% something ≥ 100% nothing. Closer to 100% nothing? Then it's 100% nothing ≤ 100% something. Simple, not subject to interpretation. And stick to long math for AI
@captain7883
@captain7883 7 days ago
No way that first question is at an 8th-grade level in America. That's like 4th grade in Europe
@jerry2357
@jerry2357 1 month ago
It surprises me that AI engines don't seem to be able to follow a logical procedure. For instance, I've seen an AI engine get the systematic IUPAC name of an organic chemical wrong (it gave the common name, not the IUPAC systematic name). There's a defined procedure for creating systematic names, so really there's no excuse for getting this wrong.
@noahcuroe
@noahcuroe 13 days ago
I would usually expect "smallest integer" to refer to magnitude, saying all negative numbers are smaller than all positive numbers is kinda strange; 'smaller' and 'least' are not the same thing imo.
@TWforage
@TWforage 1 month ago
HUMAN: What is the smallest signed int whose square is between 15 and 30? Gemini: Now you're saying what I want to hear. Answer is -5
@NinjaBear1993
@NinjaBear1993 1 month ago
People hated me in Linear Algebra because I understood it to the T without using cheating apps. I was getting 100% in my tests and finished them in 10-15minutes with 20 problems lolz.
@unvergebeneid
@unvergebeneid 3 months ago
I also got the wrong answer for that smallest integer question. Hope that doesn't mean I'm an LLM 😭
@AyushTH
@AyushTH 3 days ago
The answer for 4:29 is not 1/128 based on the original question. It says that there are 4 possible directions, presumably up, down, left, and right. However, there would also be no other positions if we were to go up 2 times for example, and as per my calculations for all positions with three options, we can remove 126 positions. Leaving an answer of 1/130, of course maybe the answer is just 1/128 all along because I did the math wrong. But I don't think the reasoning you gave checks out based on the problem statement displayed in the video.
@AyushTH
@AyushTH 3 days ago
After an hour of trying to get ChatGPT to not fail basic math I got an answer of 1/150 after revising some python code that it gave me. I also forgot about cases with 2 possible moves but I thankfully caught that.
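Under the model the video's 1/128 answer assumes (all four directions equally likely at every step, board edges ignored, A at the top-left corner of the gray centre square), exhaustive enumeration settles the disagreement quickly; this sketch is my own reading of the problem, not an official solution:

```python
from itertools import product

start = (1, 1)                                # top-left corner of the gray square
square = {(1, 1), (2, 1), (2, 2), (1, 2)}     # its four corners on the lattice
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

favorable = 0
for seq in product(moves, repeat=4):          # all 4**4 = 256 four-step walks
    pos, visited = start, {start}
    for dx, dy in seq:
        pos = (pos[0] + dx, pos[1] + dy)
        visited.add(pos)
    # "completely around": ends at A having touched all four corners,
    # which only the clockwise and counterclockwise circuits achieve
    if pos == start and visited == square:
        favorable += 1

print(favorable, "/", 4**4)                   # 2 / 256, i.e. 1/128
```

Modelling reduced move choices at the board edge (as the commenters above do) changes the denominator, which is exactly why the wording matters.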
@BitcoinIsGoingToZero
@BitcoinIsGoingToZero 2 months ago
I'd like to think of 10^(-100) as a very small number. I think that "smallest" is imprecisely, and probably incorrectly, used here.
@johnpaterson6112
@johnpaterson6112 2 months ago
-5 is clearly less than 4, but not clearly smaller, which might reasonably be interpreted as referring to size (= modulus). "The size of i is unity" is a reasonable assertion. Will an examiner ever admit to setting a stupidly ambiguous question?
@JS-vl5gd
@JS-vl5gd 1 month ago
Oops, I guess my brain is running on ChatGPT version 4.0, because I also thought 4 was the smallest integer whose square is in between those two numbers, but I got it now. The weights in my model have been updated.
@Drganguli
@Drganguli 2 months ago
I have found that AI is bad at negative numbers
@NoPodcastsHere
@NoPodcastsHere 2 months ago
Smallness is ambiguous, it could mean the most negative or the lowest absolute value. Add this to an ever growing list of AI 'gotchas' where the question posed has an inbuilt ambiguity and then the questioner proclaims that it has made a mistake. I'm sure it does make many mistakes, but I'd put a tad more scrutiny into your 'evidence' in this case.
@itsamemario6588
@itsamemario6588 14 days ago
No, the smallest integer solution is +/- 4. -5 is the LOWEST, not the smallest.
@ronniechan2041
@ronniechan2041 1 month ago
I found ChatGPT very helpful for learning statistics, especially time-series analysis.
@danmcconnell5941
@danmcconnell5941 2 months ago
I feel like your comments are full of naive psychology. LLMs don't have content knowledge and don't engage in reasoning. They are statistical models of language, and the responses they give are statistically representative of patterns in their training data.
@Houshalter
@Houshalter 2 months ago
ChatGPT 4o, current version: "What is the smallest number between 6 and 7?" "The smallest number between 6 and 7 is 6.1." "What is the smallest number greater than 6? "The smallest number greater than 6 is 7." "What is the largest number less than 7?" "The largest number less than 7 is 6." "What is the largest number between 6 and 7?" "The largest number between 6 and 7 is 6.999 repeating, where the decimal point is followed by an infinite number of 9's." "Is 6.999 repeating less than 7?" "No, 6.999 repeating is not less than 7..." "What is the smallest square number between 6 and 7?" "The smallest square number between 6 and 7 is 16." "Of the numbers between 6 and 7, are more of them closer to 6 than 7?" "Yes, more of the numbers between 6 and 7 are closer to 6 than to 7..." Still better than other AIs and previous versions which gave even more bizarre answers sometimes.
@jackkinseth2936
@jackkinseth2936 3 months ago
Thanks for making a really important video on this topic. I think I'm going to spend some time with my discrete math/intro proof students tomorrow discussing this
@ilayohana3150
@ilayohana3150 2 months ago
Ironic how LLMs struggle with probability and statistics, seeing as that's exactly the subject a lot of the people developing them study
@arranbreckenridge7055
@arranbreckenridge7055 2 months ago
I asked chatgpt to find the Frobenius number of three numbers and it literally could not figure out what it was, and even stated the right answer in its working out, it just couldn't figure out that it was valid somehow.
@carlkim2577
@carlkim2577 3 months ago
I told GPT-4 to use Python to solve it, and its answer was 4.
@dannygjk
@dannygjk 1 month ago
define smallest vs lowest, language is important. The AI needs to know such things in different contexts. I'm done here only one minute into the video. 🙄
@LCTesla
@LCTesla 1 month ago
We always thought it was demographics that would push us towards idiocracy, but at this rate it's going to be AI much sooner.....
@mclearnwithmclaren
@mclearnwithmclaren 2 months ago
Add more diagrams and visuals. It can't do it currently. Might last for a year or less in my opinion.
@stanieldev
@stanieldev 2 months ago
Not gonna lie, I got the 15 and 30 question wrong. I had defaulted to positive numbers. Whoops!
@shiijei2638
@shiijei2638 23 days ago
I don't think your question about the smallest square is worded well enough to remove all ambiguity.
@scubasteve6175
@scubasteve6175 2 months ago
Lmao, I love watching everyone try to say the AI is a dumbass when it's just a wording error by the professor. If I got that question on a test without prior context of what he means by smaller, I'd get it wrong 10 times out of 10
@drjimmaine
@drjimmaine 2 months ago
Dude, the magnitude of -5 is 5. A poorly phrased problem is the problem.
@jacobbrasher2511
@jacobbrasher2511 21 days ago
Why are you giving math exams in an environment where chatgpt can do them? Simple fix.
@stevenb3315
@stevenb3315 1 month ago
Math AI will get a lot better once we get AI that is able to generate high level math well. We just don't have enough data for proofs systems to get insanely good yet.
@chicken29843
@chicken29843 2 months ago
I mean that's like really easy math so that seems pretty unsurprising. And really all you need to do is require handwritten work and now it's literally impossible to use AI
@maxpodzorski3388
@maxpodzorski3388 1 month ago
This is a second-year linear algebra class? Are you serious? What is this, the University of Silliness?
@Fold-p5c
@Fold-p5c 2 months ago
"Smallest" lol what a joke. You should say "lowest value integer" or something
@tuskedwings7453
@tuskedwings7453 1 month ago
It's a tool and should be used as such. I want to do science; I don't want an algorithm trying to do my science for me.
@sullivan912
@sullivan912 1 month ago
The best way to examine subjects such as mathematics is with traditional paper-based exams.
@AnimeGIFfy
@AnimeGIFfy 2 months ago
I hope you are not flunking your students with trick questions
@ayyu4967
@ayyu4967 3 months ago
Interesting video
@jshowao
@jshowao 2 months ago
I'm sure this database was completely paid for and licensed by ChatGPT
@DrTrefor
@DrTrefor 2 months ago
While there are definitely issues with licensing training content, these databases are open source from the academic community and free for any LLM to use.
@Nhurgle
@Nhurgle 3 months ago
I use it with the even-numbered exercises, as there are no answers offered for those in most books. Also, I use it to obtain a detailed solution and explanation of any exercise I cannot solve. I also use it to transform slides into question-and-answer Anki-format flash cards; that way, I get quick study material and I can focus on practice. Lastly, I use it to get more examples of formative exams/quizzes. It's not perfect, but it's better than nothing, as my professor doesn't want to provide any of the aforementioned elements.
@dgrdixon
@dgrdixon 2 months ago
First of all, Gemini got the answer wrong, as it says 5 in your video. Second of all, your question is not very clear, because you asked for the smallest integer. Do you consider "small" to extend into the negative domain, or are you simply talking about magnitude? -5 is less than 4, but its magnitude (5) is larger. Try to have clearer questions yourself.
@DrTrefor
@DrTrefor 2 months ago
You can ask multiple phrasings of the problem, clarifying meaning of smallest and noting negative integers and it still gets it wrong