There is a lot of room for improvement. Assuming it could identify objects fast enough, the most reliable approach would be to use a machine learning algorithm to categorise the objects on screen. Then we'd want an algorithm that tries to score combos, some delays to make the cursor less jittery, and logic to avoid trajectories that overlap with a bomb.
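The bomb-avoidance idea above could be sketched roughly like this. This is a minimal example, not the repo's code: it assumes bombs are detected as a center point plus a radius, and the names (`crosses_bomb`, the `margin` parameter) are made up for illustration:

```python
import math

def crosses_bomb(p1, p2, bomb_center, bomb_radius, margin=20):
    """Return True if the straight slice from p1 to p2 passes within
    `margin` pixels of a detected bomb (treated as a circle)."""
    (x1, y1), (x2, y2), (bx, by) = p1, p2, bomb_center
    dx, dy = x2 - x1, y2 - y1
    length_sq = dx * dx + dy * dy
    if length_sq == 0:
        t = 0.0  # degenerate slice: p1 == p2, just check that point
    else:
        # Project the bomb center onto the segment, clamped to [0, 1]
        t = max(0.0, min(1.0, ((bx - x1) * dx + (by - y1) * dy) / length_sq))
    # Closest point on the slice segment to the bomb center
    cx, cy = x1 + t * dx, y1 + t * dy
    return math.hypot(bx - cx, by - cy) < bomb_radius + margin
```

The bot would then just discard any candidate slice for which this returns True for at least one detected bomb.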
If this game were made in 2024, I'm sure the company would've called it an AI game 🤦♂️ Anything remotely smart these days gets called AI. Removing pimples from a face is an "AI object remover" in 2024. Copy-pasting images on a mobile phone is now "AI" in 2024 🤦♂️🤦♂️
Very great project! I was trying to run it locally by cloning the repo and running the program. I have Fruit Ninja downloaded on my laptop, and when I run the program it moves to the fruit but doesn't slice it. I have changed properties.json since my game is running in 1920x1080 mode. What could be the issue? Could anyone help me?
Next time, train a YOLO model on a few hundred labeled images; it's a LOT easier and will run much faster. Expect 30-120+ frames per second processed, depending on your GPU.
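For context, with Ultralytics YOLO the dataset description is a small YAML file along these lines. All paths and class names here are placeholders, adjust them to your own labeled data:

```yaml
# dataset.yaml -- example layout for a two-class fruit/bomb detector
path: fruit_ninja_dataset   # dataset root directory
train: images/train         # a few hundred labeled frames
val: images/val             # held-out frames for validation
names:
  0: fruit
  1: bomb
```

Training would then be a one-liner like `yolo detect train data=dataset.yaml model=yolov8n.pt epochs=50` (assuming the Ultralytics CLI is installed).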
Your code would be much more efficient in time and memory if you didn't define lambda functions inside your function bodies. If you're doing that to avoid a big global namespace, or to capture values you only know at runtime (image size, etc.), you can instead put your handler in a class: the runtime values become attributes and all your functions become methods.
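A minimal sketch of that class approach, with made-up names since I haven't seen which lambdas the repo actually defines. The point is that the methods are defined once at class-creation time, instead of new lambda objects being built on every call:

```python
class FrameHandler:
    """Holds runtime config once; the former lambdas become methods."""

    def __init__(self, width, height):
        # Values only known at runtime (e.g. screen resolution)
        self.width = width
        self.height = height

    def to_screen(self, x, y):
        """Convert normalized [0, 1] coordinates to pixel coordinates."""
        return int(x * self.width), int(y * self.height)

    def in_bounds(self, x, y):
        """Check that a pixel coordinate lies inside the frame."""
        return 0 <= x < self.width and 0 <= y < self.height

handler = FrameHandler(1920, 1080)
print(handler.to_screen(0.5, 0.5))  # (960, 540)
```

This also keeps the module's global namespace clean, since everything lives on the class.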
PLEASE, I'M BEGGING YOU, I NEED TO KNOW WHAT THE MUSIC AT 2:45 IS. I REMEMBER IT FROM MY CHILDHOOD BUT CAN'T PUT MY FINGER ON IT, AND THE MUSIC SECTION DOESN'T SAY ANYTHING!
Some levels are built wrong, not because there are no coins, but because the layout just doesn't match the actual level (for example, the second jump in Dry Out is half-spaced down).
I don't understand what makes this project all that difficult; there isn't anything here beyond Programming Languages 1 during junior year of uni. A couple of things I saw in the code that can be improved:

- The tokenize function can be optimized by avoiding std::string::erase and std::string::insert, as they can be expensive operations. Work with indices or iterators instead.
- Implement a state machine for your lexer. That can make the code more readable and efficient by clearly defining the transitions between the lexer's states.
- Use enum class instead of plain enum for TokenType and Error::Location, for better type safety and namespace scoping.
- Reserve space for your tokens vector if you can estimate the number of tokens, to avoid multiple reallocations.
- Consider using std::string_view instead of std::string for operations that do not modify the string, to avoid unnecessary copies.

Here's a refactored snippet:

```cpp
std::vector<Token> tokenize(const std::string& sourceCode) {
    std::vector<Token> tokens;
    // Reserve an estimated size to avoid reallocations
    tokens.reserve(sourceCode.size() / 4);  // rough guess at token count

    // Use string_view for non-modifying operations
    std::string_view remainingSource(sourceCode);

    // ... rest of the setup ...

    // Process tokens without modifying the original string;
    // extractNextIdentifier takes remainingSource by reference and
    // advances it past whatever it extracted
    while (!remainingSource.empty()) {
        auto nextIdentifier = extractNextIdentifier(remainingSource);
        if (nextIdentifier.empty()) continue;

        Token nextToken = determineTokenType(nextIdentifier);
        if (nextToken.type == TokenType::Invalid) {
            // line/column would be tracked by the lexer's state
            throw LexerError("Invalid identifier found", line, column);
        }
        tokens.push_back(nextToken);
    }

    // Add EOL and EOF tokens
    if (!tokens.empty() && tokens.back().type != TokenType::EndOfLine) {
        tokens.emplace_back(TokenType::EndOfLine, "EOL");
    }
    tokens.emplace_back(TokenType::EndOfFile, "EOF");
    return tokens;
}
```

This is just a small example of what can be changed to make the code more readable and follow modern C++ coding standards.