Some example C++ code showing how to read text from images. Demonstrates both Tesseract and Darknet/YOLO, and which types of images work with each method. - github.com/stephanecharette/t...
Nice project! I ran into the same issue when working with OCR: Tesseract only helps on plain black-and-white text. For images, I did the same as you and trained the NN on each letter, which worked pretty well. I then had to reconstruct the text, since I was reading vehicle license plates; the test video is on my channel. Cheers
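For readers wanting to try the plain-text case mentioned above, here is a minimal sketch of Tesseract's C++ API on a clean black-and-white image. This assumes the Tesseract and Leptonica development libraries plus the English traineddata are installed, and `plate.png` is a placeholder filename, not a file from the repo above.

```cpp
#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>
#include <iostream>

int main()
{
    tesseract::TessBaseAPI api;

    // "eng" assumes the English language data is installed
    // (e.g. /usr/share/tesseract-ocr/.../eng.traineddata).
    if (api.Init(nullptr, "eng") != 0)
    {
        std::cerr << "could not initialise Tesseract" << std::endl;
        return 1;
    }

    // Placeholder image name.  As noted in the comments, Tesseract
    // works best on high-contrast, plain black-and-white text.
    Pix * image = pixRead("plate.png");
    if (image == nullptr)
    {
        std::cerr << "could not read image" << std::endl;
        return 1;
    }

    api.SetImage(image);

    // GetUTF8Text() runs the OCR and returns a heap-allocated string.
    char * text = api.GetUTF8Text();
    std::cout << text << std::endl;

    delete [] text;
    pixDestroy(&image);
    api.End();
    return 0;
}
```

Build with something like `g++ ocr.cpp -ltesseract -llept` (library names can vary by distro). For noisy or in-the-wild images such as license plates, this approach degrades quickly, which is where the per-letter detection network described above comes in.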
Great video! Can we do the same thing on Windows (since you did it on Linux)? Can you explain how you built your model to recognize the different things (signs, street names, etc.)? Thank you