Using edges is a mistake. Converting everything to black and white and averaging saturation when sampling would be more stable. Random sampling is also unnecessary: you can "cut out" the square you want to test, blur it heavily until it's basically one solid color, and sample the middle once. Finding a threshold for that is easier because there's zero randomness.
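A minimal sketch of that idea (the cell data and threshold here are hypothetical). Blurring a patch until it is one solid color is equivalent to taking its mean, so we can just average every pixel and compare once:

```python
def cell_is_filled(cell, threshold=128):
    """`cell` is a 2D list of grayscale values (0 = ink, 255 = paper).
    A heavy blur collapses the patch to one color -- its mean -- so we
    average all pixels and do a single deterministic threshold test."""
    total = sum(sum(row) for row in cell)
    count = sum(len(row) for row in cell)
    return total / count < threshold  # darker than threshold => colored in

filled = [[20, 30], [25, 15]]       # mostly ink
empty = [[240, 250], [230, 245]]    # mostly paper
```

Since the result depends only on the pixel values, the same image always gives the same answer, which is what makes the threshold stable.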
Using edge detection is counterproductive. When you colored in the squares on paper, the point was to fill them in, not to make as many and as pronounced edges as possible. So finding edges is the wrong approach here. Better image processing would simplify all the other steps: no random sampling would be needed, and the activation threshold would be stable.
This reminds me of a DEF CON talk where a pair of guys decapped some ROM chips and used software to read their contents from images of the die, in a similar way to what you're doing. Very neat. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-7Q82FkthDx8.html
@Tsoding, have you tried the Intel C compiler? I believe it's available for both Windows and Linux. Considering your mastery of the base subject, I would love to know your thoughts on the compiler itself! As always, Большое спасибо (thank you very much)!
I love that you did a really nice one-liner of linear algebra and didn't even know why it worked. 😂 You subtracted the direction vector, scaled by the distance from the mouse to the anchor, from the position vector of the target image.
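Spelled out, that one-liner might look like this (all names here are hypothetical, not from the stream). The trick is that the unit direction scaled by the mouse-to-anchor distance is just the raw delta vector, so no explicit normalization is needed:

```python
def drag(target, mouse, anchor):
    """Subtract the direction vector, scaled by the mouse-to-anchor
    distance, from the target's position vector.
    Since dir/|d| * |d| == d, this reduces to subtracting the raw delta."""
    dx, dy = mouse[0] - anchor[0], mouse[1] - anchor[1]
    return (target[0] - dx, target[1] - dy)
```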
50:50 Under a projective transformation, the center of the square maps to the intersection of its midlines, so I think it would be more precise to use the intersection of two lines instead of a lerp along a single line.
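In code, a projective-invariant way to recover the image of a square's center, assuming the four projected corner coordinates are known, is the crossing point of the quad's diagonals: projective maps preserve incidence, and unlike midlines, the diagonals are directly computable from the corners. A minimal sketch:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4 (assumes non-parallel)."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1*y2 - y1*x2) * (x3 - x4) - (x1 - x2) * (x3*y4 - y3*x4)) / denom
    py = ((x1*y2 - y1*x2) * (y3 - y4) - (y1 - y2) * (x3*y4 - y3*x4)) / denom
    return (px, py)

def quad_center(a, b, c, d):
    """The square's center (where its diagonals cross) maps to the
    crossing of the projected quad's diagonals -- not to the point a
    naive lerp of the corners would give."""
    return line_intersection(a, c, b, d)
```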
Brainfart: how about doing an FFT on the raw (normalised) cells and discarding (summing away) the high frequencies (the ink width)? Just like scanning barcodes...
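A toy illustration of that idea on a single scanline (this O(n²) DFT is just for the sketch; a real implementation would use an FFT library). The DC bin (k = 0) is the mean brightness of the line, while the high-k bins carry the thin ink-stroke detail the comment suggests discarding:

```python
import cmath

def dft(xs):
    """Naive discrete Fourier transform, normalized so bin 0 is the mean."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(xs)) / n
            for k in range(n)]

# Hypothetical scanline across a cell: keeping only the low-frequency
# bins amounts to smoothing away the stroke-width detail.
row = [10, 20, 30, 40]
spectrum = dft(row)
mean_brightness = abs(spectrum[0])  # DC term = average intensity
```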
Instead of black and white, couldn't you choose a pivot color using the mouse and measure the distance from that color to determine whether the cell is on or off? Maybe even pick both pivots (on and off) and choose whichever is at the shortest distance.
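A minimal sketch of the two-pivot variant (the pivot colors here are made up; in practice they'd be sampled with the mouse). Squared Euclidean distance in RGB is enough, since only the comparison matters:

```python
def color_dist2(c1, c2):
    """Squared Euclidean distance between two RGB triples (no sqrt needed
    when we only compare distances)."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def classify(pixel, on_pivot, off_pivot):
    """Cell is 'on' if the sampled pixel is closer to the on-pivot color."""
    return color_dist2(pixel, on_pivot) < color_dist2(pixel, off_pivot)

# Hypothetical pivots picked with the mouse: blue ink vs. paper white.
ON, OFF = (40, 60, 160), (235, 230, 220)
```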
The correct way is to compute a homography; otherwise deviations from the fronto-parallel position will keep causing failures (because projective transformations are not linear). In the 2D case, computing a homography from 4 points can be done with some elementary-school math.
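A self-contained sketch of the standard 4-point construction (not from the stream). With h8 fixed to 1, the eight unknowns of the homography follow from two linear equations per point correspondence, solvable with plain Gaussian elimination:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Solve for h0..h7 (with h8 = 1) from 4 point correspondences:
    u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), similarly for v."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve(A, b)

def project(h, x, y):
    """Apply the homography: the division by w is the non-linear part."""
    w = h[6] * x + h[7] * y + 1
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)
```

Mapping the unit square's corners onto the four detected corners of the paper grid then lets you sample any cell center through `project`, instead of lerping.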
@56:00 it looks like wide-angle lens distortion, which the program can't account for, is causing the misalignment, and/or the notebook was captured at a slight angle from perpendicular to the viewing axis of the lens.
If the camera is not parallel to the paper, then the cells are not all the same size. The step size along x needs to respect the change in size along y, and vice versa. It's not a linear transformation, because we don't know the z component, I think.