I played with this a bit. Very interesting. It can tell you how to multiply, but it can't follow its own instructions. I tried telling it not to give me an answer without first double-checking the result by dividing the product back to recover the original number, and to show its work. It proceeded to walk me step by step to the wrong answer, and then through the "check" step, also done incorrectly but magically arriving at the original multiplicand as "proof" of correctness. Oddly enough, this is the hallmark behavior of an undergrad who doesn't want to learn -- these things are getting more and more humanlike by the day. 😅
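For the record, the check it was asked to do is mechanically trivial. A two-line sketch (the numbers here are just made-up examples):

```python
# Verify a multiplication by dividing the product back by one factor
# and confirming the other factor comes out exactly.
a, b = 1234, 5678
product = a * b                                  # 7006652
assert product % b == 0 and product // b == a    # 7006652 / 5678 == 1234
print(product, "checks out")
```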
It also ingests text as tokens. Since patterns like "123" can be single tokens, this creates a lot of confusion when it tries to process data with arbitrary numbers in it. It's much harder to train it to handle digits properly when the tokenization introduces random variation like that.
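You can see the chunking directly. Here's a quick sketch using OpenAI's tiktoken library (assuming it's installed; the exact token boundaries vary between tokenizer versions):

```python
# Inspect how a real tokenizer chops up number strings.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era tokenizer

for s in ["123", "1234", "31415926"]:
    ids = enc.encode(s)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{s!r} -> {len(ids)} token(s): {pieces}")
```

Two numbers that look almost identical to us can be split into completely different token sequences, which is exactly the "random variation" problem.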
In theory, given enough data, a language model can learn to do arithmetic accurately in general. For example, the researcher Neel Nanda trained a small transformer to do modular addition, and amazingly it learned a genuine algorithm that works in every case.
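To give a sense of the setup: every pair (a, b) gets labeled with (a + b) mod p, and a fraction is held out. The modulus p = 113 matches the published experiments, but the snippet below is my own illustration of the data generation, not the actual research code:

```python
# Minimal sketch of a modular-addition training set, in the style of
# the grokking experiments (Nanda et al.).
import random

P = 113  # prime modulus used in the original experiments

pairs = [(a, b, (a + b) % P) for a in range(P) for b in range(P)]
random.seed(0)
random.shuffle(pairs)

split = int(0.3 * len(pairs))        # train on a fraction, test on the rest
train, test = pairs[:split], pairs[split:]
print(len(train), "training examples,", len(test), "held-out examples")
# A small transformer trained on `train` eventually generalizes ("groks")
# to all of `test`, because it learns an actual algorithm rather than a
# lookup table.
```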
@@izzyonyti I suppose we should stop using the inaccurate term "AI" then as well, but nobody will do that. The workings of LLMs have much more to do with autocomplete than with however human brains produce general intelligence. *facepalm* Facepalm at someone making an autocomplete joke all you want, but then you should be facepalming every "AI" comment as well.
@@jmarvins AGI might be impossible, but if it is possible, don't you think it would be made in a similar manner to GPT4 (training and fine-tuning an LLM to maximize general intelligence)? You're overestimating human intelligence: GPT4 is already better than 90-99% of humans at many tasks. If you disagree, is it because you think the only way to achieve AGI is some other approach that's more similar to how the human brain functions?
Neural networks don't understand multiplication; getting the correct result every time would mean training on enough samples to "remember" every solution. But I don't think it would be very difficult to train a neural network to recognize an arithmetic problem and hard-code the behavior "put the numbers and operator into a calculator instead of answering directly, then return the calculator's result" -- something like the sketch below.
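Here's a hedged sketch of that routing idea: detect a plain arithmetic expression and compute it exactly instead of letting the model guess. Real systems do this with function calling / tool use; the regex and mini-parser here are simplified stand-ins, not anyone's production code:

```python
# Route arithmetic to an exact evaluator instead of a language model.
import ast
import operator
import re

ARITH = re.compile(r"^[\d\s+\-*/().]+$")          # crude arithmetic detector
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.USub: operator.neg}

def safe_eval(expr):
    """Evaluate a pure arithmetic expression via the AST (no eval())."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def answer(query):
    if ARITH.match(query):
        return safe_eval(query)    # exact, every time
    return "(hand off to the language model)"

print(answer("123456789 * 987654321"))  # 121932631112635269, computed exactly
```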
1:00 AI can't do simple arithmetic, so mathematicians aren't going out of business anytime soon... ***my math major friends asking me if 7 * 8 is 48***
What exactly do you mean by "graphs closer to disproving the conjecture"? I feel like that wouldn't translate into most things. Like, Parker squares are almost perfect magic squares, but they don't really do anything toward proving or disproving whether perfect magic squares exist. And it's not inconceivable that a neural network could "believe" it does and keep tweaking its dataset, training down a useless path.
The idea is that you have some way of measuring how close a graph is to disproving the conjecture. It's particularly well suited to the situation where you have a function that takes in a graph and returns a real number, and you conjecture that the function is bounded below by some constant. "Being closer to disproving it" just means being closer to that constant. It's not a technique that will work on every problem, and it will certainly sometimes go down useless paths, but it was shown to be useful on a couple of problems. If you're interested in more details, I recommend reading the paper.
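In code, the scoring idea looks something like this. The invariant f and the bound C below are toy stand-ins, not the paper's actual invariants, and the paper uses reinforcement learning rather than this placeholder random search:

```python
# Sketch of "score = how close a graph is to violating the conjectured
# bound f(G) >= C". A positive reward is an outright counterexample.
import numpy as np

N = 8      # number of vertices
C = 1.0    # hypothetical conjectured lower bound: f(G) >= C

def f(adj):
    """Toy graph invariant: algebraic connectivity (second-smallest
    Laplacian eigenvalue). Any real-valued invariant works here."""
    lap = np.diag(adj.sum(axis=1)) - adj
    eigs = np.sort(np.linalg.eigvalsh(lap))
    return eigs[1]

def reward(adj):
    # The closer f(G) drops toward C, the higher the reward.
    return C - f(adj)

def random_graph(rng):
    upper = rng.integers(0, 2, size=(N, N))
    adj = np.triu(upper, 1)
    return adj + adj.T       # symmetric 0/1 adjacency matrix, no loops

rng = np.random.default_rng(0)
best = max((random_graph(rng) for _ in range(1000)), key=reward)
print("best reward:", reward(best))  # > 0 would disprove the toy conjecture
```

Graphs with reward near zero are "close to disproving" in exactly the sense above, so the search has a gradient to follow instead of a blind pass/fail signal.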