This is beautiful: the presentation, mesmerising machining, minimal and quirky narration, and the level of effort to create something totally and wonderfully useless.
I've worked on optimizing serial displays, and I think I see your problem: you're sending a lot of overhead with each write. I2C works by sending an address, a command, and the data for that command, so if you only write a single byte at a time, two-thirds of the transmission is completely unrelated to what you actually want to display. Instead of only changing the bits that change per frame, keep a frame buffer on the computer's side and write the entire display at once. That way only 2/1027 bytes are overhead, which should give a decent speed increase.
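To show what I mean, here's a rough sketch of the bulk-write idea in plain Python. The 128x64 geometry, the page layout, and the 0x40 "data follows" control byte are assumptions based on the common SSD1306 datasheet, not on what's actually in the video:

```python
# Sketch: pack a 128x64 1-bit framebuffer into SSD1306-style page layout
# (8 pages of 128 vertical bytes) so the whole frame goes out as one
# I2C transaction instead of thousands of tiny ones.
# Geometry and the 0x40 control byte are assumptions from the usual
# SSD1306 datasheet, not mitxela's actual code.

WIDTH, HEIGHT = 128, 64

def pack_framebuffer(pixels):
    """pixels[y][x] is 0 or 1; returns WIDTH*HEIGHT/8 bytes in page order."""
    buf = bytearray(WIDTH * HEIGHT // 8)
    for page in range(HEIGHT // 8):
        for x in range(WIDTH):
            byte = 0
            for bit in range(8):
                if pixels[page * 8 + bit][x]:
                    byte |= 1 << bit
            buf[page * WIDTH + x] = byte
    return bytes(buf)

# One bulk transfer: one control byte, then all 1024 data bytes.
# Compare with per-byte writes, where every single data byte pays the
# address + control overhead again.
frame = [[(x ^ y) & 1 for x in range(WIDTH)] for y in range(HEIGHT)]
payload = bytes([0x40]) + pack_framebuffer(frame)
```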
@@gorgpflug6087 I'm not talking about changing the I2C clock speed, I'm talking about increasing the goodput of the bus (the fraction of transmitted bytes that is actual display data rather than overhead).
@@andreameparishvili5468 the I2C frequency isn't just an arbitrary limit; it's the maximum that mitxela's computer can output through the HDMI connection.
Not gonna lie, I actually got emotional hearing and seeing Bad Apple show up here. It's such a cultural touchstone for so many people, and Touhou music became such a big part of my life for so many years. Overall it just fills me with joy seeing it used for demonstration purposes. I'm sure the music being a music box rendition didn't help with the emotions I was feeling either; it felt like the perfect allegory for plunging deep into memory.
Try a static blue-noise threshold comparison instead of Floyd-Steinberg for the dither. You could also try 3D blue noise or NVIDIA's precomputed spatiotemporal blue noise, but given the low frame rate, I think static would work better. The problem is that the movement in the dither is obscuring the movement in the picture. With a static dither, you may also have more success with partial screen updates.
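The partial-update payoff could be sketched like this in Python: with a static (deterministic) dither, unchanged regions of the source dither to identical pixels, so you can diff consecutive frames and resend only the bands that changed. The 8-row "page" granularity here is an assumption based on how SSD1306-class controllers address memory:

```python
# Sketch: diff two dithered 1-bpp frames and report which 8-row pages
# actually changed, so only those need to be resent over the bus.
# Page granularity is an assumption from SSD1306-style addressing.

def changed_pages(prev, curr, height=64):
    """prev/curr are frames as lists of rows; return dirty page indices."""
    dirty = []
    for page in range(height // 8):
        rows = range(page * 8, page * 8 + 8)
        if any(prev[y] != curr[y] for y in rows):
            dirty.append(page)
    return dirty

a = [[0] * 128 for _ in range(64)]
b = [row[:] for row in a]
b[10][5] = 1                 # one pixel changes, in rows 8-15
dirty = changed_pages(a, b)  # only page 1 needs resending
```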
this seems like a good idea, but it would probably need something more to be worth making a new video about. Given that this started out as basically an April Fools video, and then he made a cool case for it, what now? Fix the dither and... what, use another of those screens to make a micro model computer with a functional screen?
@@jan_harald Oh, I do believe mitxela could figure this out! You're right that it doesn't look easy to build a video around, at a glance. However, I didn't expect this one, either =D What I would personally do is a "here's how the code was massively improved" video - there are definitely things that could be added, including partial refreshes (look up the micropython SH1106 driver for a simple page-based example) to increase FPS. On top of that... 128x128 could be done with two displays, I suppose! But, AFAIU, mitxela's videos aren't typically software-focused, so I'd understand if this were left as an exercise for the reader.
@@AryaFairywren didn't he specifically talk about how he made it only send the updates to the screen? Both at one point in the video, and then when demoing it at the end, showing that if only the mouse moved, it worked just fine? And rather than a bigger resolution, I'd take a lower resolution over a bigger screen size, to make it more legible, lol. Right now it's basically just tiny noise, even at a terminal prompt.
Between this and the MIDI synth plug, he might wind up with the smallest fully functional recording studio in the world. Sometimes I forget that he's a musician.
I'm neither a musician nor an engineer, but I could convince you I was both given a few minutes of time. My skill set is veeeeeeeeeeery diverse and I am the guy people call to repair THEIR setups
Also if you need it for any future projects feel free to use my video edit of bad apple specifically designed for these displays: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-7XijCqWb5TI.html
I don't know if you can change that, but if the dithering is slowing down the performance, maybe switching to indexed dithering would help. Floyd-Steinberg dithering accumulates error iteratively, while indexed (lookup) dithering is done per pixel and is much faster. As a bonus, you'd get a nice retro pixel-art vibe
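For anyone curious, a minimal sketch of the lookup approach with the classic 4x4 Bayer matrix (the 0..255 grayscale range is an assumption; any threshold tile, including a precomputed blue-noise texture, slots in the same way):

```python
# Sketch: ordered (Bayer) dithering. Each pixel is thresholded against a
# tiled lookup matrix -- no error accumulation, so every pixel is
# independent and the output is stable for static regions of the image.

BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither(gray, width, height):
    """gray[y][x] in 0..255; returns 0/1 pixels."""
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # scale the 0..15 matrix entry up to the 0..255 range
            threshold = (BAYER4[y % 4][x % 4] * 255) // 16
            out[y][x] = 1 if gray[y][x] > threshold else 0
    return out

# a flat mid-gray patch dithers to a stable checker-ish pattern
flat = [[128] * 8 for _ in range(8)]
result = ordered_dither(flat, 8, 8)
```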
🛠 The knurled brass HDMI connector looks super sleek. I never thought about customizing connectors like this, but you make it look both fun and doable. Also, the tech details you shared were really insightful. It's amazing how you manage to blend craftsmanship with tech so seamlessly. Can't wait to see what you come up with next!
I have not seen a music box mech like that before. With the rotating strikers it looks as though you can actually adjust the sustain of each note by varying the rotation speed. Wicked!
The proper way to use that style of knurling tool is to pinch your part between the rollers, using the screw to apply pressure. Doing so puts almost no load on the lathe itself and therefore won't damage it. A lathe isn't really meant to take heavy loads in the direction that a pressed knurling tool generates.
Uhhh, what? That's a 20+ year old Touhou track, "Bad Apple!!". It's been a meme for these low resolution displays for almost a decade now? Just like Doom. :P
Sir, I just discovered your channel a few months ago, and I have to say that your channel is very special. Thank you so much, and I wish you all the best
What would make this even cooler is if the screen was amber, like a few old school monitors are, but I'm not sure if anyone manufactures small screens in this color...
For the next generation, you can just do what nVidia did, add frame interpolation to the driver, claim it now runs at "twice the FPS", and double the price. Or get one step ahead of them and do what I'm sure they will for the _next_ generation: add _three_ interpolated frames between real ones and claim it runs at four times the FPS.
One dither you could try is simple error propagation. You start a line with a target of something like 50%. If the first pixel is brighter than that 50%, you set white (100), otherwise black (0). Then subtract what you output from what you were aiming for, and carry that difference into the next comparison. Continue this across the line. This makes text illegible, but grayscale pictures translate quite well, and by removing randomness you keep the same output for (some) static parts of the screen. I found that randomness combined with error propagation gave the most "accurate" representation of the original image, but plain error propagation gave a "cleaner" looking image at the loss of a little fidelity. (This was for printing images on a black-and-white thermal printer.)
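In case it helps anyone, here's a rough Python sketch of that per-line scheme as I described it, using a 0..100 brightness scale to match; the >= 50 threshold is my own choice of tie-break:

```python
# Sketch of simple per-line error propagation: compare each pixel
# against a running value, output full white (100) or black (0),
# and carry the leftover difference into the next pixel.

def dither_line(pixels):
    out = []
    error = 0  # brightness we still "owe" from earlier pixels
    for p in pixels:
        value = p + error
        chosen = 100 if value >= 50 else 0
        out.append(chosen)
        error = value - chosen
    return out

# a flat 50% line alternates white/black, averaging to the right level
line = dither_line([50] * 6)
```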