I think the reason it sucks at spelling so much is that it doesn't actually operate on text. It understands language through a tokenised format - essentially a shorthand that only it knows: inputs are translated into it before being handed to the model, and its outputs are translated back into human-readable text afterward. This means all of its actual knowledge of spelling has to come from text *about spelling* in its training data, not from the words themselves.
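To make that concrete, here's a minimal sketch of how subword tokenization hides letters from the model. This is a toy greedy longest-match tokenizer over a made-up vocabulary, not OpenAI's actual tokenizer, but the effect is the same: the model receives chunks, not characters.

```python
# Toy vocabulary (hypothetical, for illustration only)
VOCAB = {"straw", "berry", "ber", "ry", "s", "t", "r", "a", "w", "b", "e", "y"}

def tokenize(word, vocab=VOCAB):
    """Split `word` into the longest matching vocabulary pieces, left to right."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest remaining substring first (greedy longest match)
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i]!r}")
    return tokens

print(tokenize("strawberry"))  # ['straw', 'berry']
```

The model sees two opaque token IDs here, so a question like "how many r's are in strawberry?" has to be answered from memorised facts about the word, not by looking at its letters.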
ChatGPT is uniquely bad at this sort of test. I think it's down to how it works on the back end. Most LLMs use a token system where tokens represent whole words or word fragments. A side effect is that they're hot garbage at anything relating to the length of words or the placement of letters within them.
I asked it about the 2 numbers that are 2 letters long and it said: "The five numbers that are only two letters long are: Ten (10), Eleven (11), Twelve (12), Twenty (20), Eighty (80). These numbers are written with only two letters in English." lol, it just randomly jumps to 80. And these are two DIGITS long, not two letters long...
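For what it's worth, a few lines of code settle the actual question. Using a small hand-written list of common English number names (so this only covers the names listed, not every number), none of them are two letters long:

```python
# Common English number names (hand-written list, not exhaustive)
NAMES = ["one", "two", "three", "four", "five", "six", "seven", "eight",
         "nine", "ten", "eleven", "twelve", "twenty", "eighty"]

two_letter = [n for n in NAMES if len(n) == 2]
print(two_letter)            # [] - none of these names is two letters long
print(min(len(n) for n in NAMES))  # 3 - the shortest ("one", "two", "six", "ten")
```

So the honest answer to the question is "there aren't any" - the shortest number names in English are three letters.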
Wait until ChatGPT finds out about eleven trillion, eleven billion, eleven million, eleven thousand, eleven (11,011,011,011,011), which is an actual number with eleven ‘L’s.
I spent 3 hours trying to get GPT to make me weekly lesson plans based on the book content, covering 1 semester. Then I spent another 3 hours fixing its mistakes again and again, and it still wasn't done. Idk man how exactly it's going to conquer the world one day. Right now it's lazy asf.