Programming and engineering tutorials on a variety of topics. Seeking to understand core concepts allows you to be successful on any system and in any programming language!
For people running into errors while trying to install gtkwave on WSL:
1. You don't need to download the compressed tar.gz file.
2. In PowerShell/cmd run: wsl --update
3. In the Linux terminal run: sudo apt-get update && sudo apt-get -y install gtkwave
4. You'll now be able to see the GTKWave icon in the Start menu. (You'll obviously have to create a .vcd file to be able to see any waveforms.)
Thank you very much for this. TI has made some demo Python parser scripts that come packaged with the mmWave SDK. Mine are located in this directory: C:\ti\mmwave_sdk_03_06_02_00-LTS\packages\ti\demo\parser_scripts
There's a reason why FPGA video output typically pushes to VGA first... it's a crazy simple system to implement, literally a "you can make a rock-solid implementation in an afternoon" type of setup. I'll be tackling HDMI myself in an upcoming project. Let's see how this goes :D
Are FPGAs actually used for compression in the industry? Wouldn't there typically be some dedicated silicon chip used for this task? Even the cheapest FPGAs that could perform video compression are still pretty expensive. I don't know a ton about hardware video compression, but I do know that when we were looking at the possibility of compressing video on our Zynq device, it became apparent that H.264 would use about 90% of its resources, leaving practically no room for anything else. So we found UltraScale chips that had hard silicon-based H.264/H.265 compression. I'd honestly love to hear more about this.
Thank you for the great insight! I can't speak to current industry trends, and I'll have to agree with you that using an FPGA for this may not be business viable. I'd like to emphasize that the purpose of these shorts is to drum up interest from new grads, developers, and engineers who have never worked with an FPGA. My hope is that they will find their way to my Verilog tutorials from here.
Wholesale compression by streaming platforms like YouTube usually involves proprietary IP, and AMD/Xilinx is definitely a vendor in that space with the Alveo line of accelerator cards, which are mostly FPGAs. They probably have some hard IP to do the compression, but the remaining bits are usually configurable. It just makes more sense.
If you're going to use Azure to deploy resources, learn some Terraform. You'll be able to keep everything in the command line and do it much, much faster. Otherwise, awesome video!
Given the relative complexity of developing for an FPGA and the high price of the hardware, I really wonder if they'll stay competitive as more and more CPUs are getting Matrix coprocessors
But how do you get the finished FPGA onto your own circuit board? Sure, all of that is a necessary preliminary step with a prototype board, but it's not a finished product yet. We need a follow-up video on this.
Theoretically, the next step is to build it out of TTL chips (if you're crazy enough), run software on it, then ship the logic design to a chip fab (I hear Google is doing larger 100/50 nm runs relatively cheap, as long as you make it open source).
@@rya3190 But I thought that the purpose of using an FPGA was so that you wouldn't have to have all those individual logic chips in the first place, but rather you can simply program it into a single FPGA chip. It's what happens after that that I'm asking about. And rather than having a custom chip fabricated, can you keep using an FPGA by ordering a board that has your custom circuitry on it plus the FPGA chip?
@@SDJ41 You should be able to... they probably sell them in chip form. The TTL was more of a joke; I'm pretty sure an FPGA would be faster and not as mind-numbing. It's either you keep using the FPGA or get your chip made. If you're just tinkering, it's probably better to keep the FPGA(s) and make a good circuit board for it/them, but if you're going to sell it, it's probably better to get the dedicated chip. It takes up less room, is cheaper over time, and can't be messed up by customers (more or less...)
@@rya3190 Yes, that's it. I simply wanted to know if PCB order houses were capable of doing this. But I can see it depends on the seriousness of the project and what you're selling. Thanks.
So, here's my recommendation for this. Start very small. Pick up a low-spec Lattice FPGA development board, for something like an iCE40. From there, make a novel FPGA design you think is fun/cool/educational. It doesn't have to be super complex. Learn KiCad for PCB design through Chris Gammell's (Contextual Electronics) tutorials. Now, the FPGA you choose may require multiple voltages and you might have to sequence those, so you'll need to learn how to do that with power-sequencing ICs or do it discretely yourself. There's a lot to keep in mind when designing a PCB for an FPGA, even a low-spec one, but it's great fun to learn and you'll feel very accomplished at the end. I hope this is enough to point you in the right direction. Try to do some intermediate circuit projects as well before you tackle a whole custom FPGA solution. But definitely reach for the stars.
The C program writes x black pixels and then x white pixels, both starting from index zero. This was to test a VGA subsystem I created on the FPGA from scratch (without using Intel IP video cores). I never could fix an issue where it was tearing the image over and over at different offsets, so that's why the video feed looks all crazy.
@@metaphysicscomputing There is something called a front porch and a back porch when displaying on monitors. This is usually around 200-1000 us depending on many things, but basically, after the horizontal sync pulse you have to wait a little bit before anything gets displayed on that row, and this goes for every single row. Some people fix this by tricking the code into thinking the monitor is wider than it actually is, which is usually easier than creating a buffer before displaying. It looks like the black that is displayed on your screen should actually be in the blanking zone (front and back porch). You'd have to figure out how many extra pixels / how much buffer time solves your specific situation, though.
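For reference, the porch idea described above can be sketched with the common 640x480@60 Hz mode. This is a minimal Python sketch using the standard timing constants for that mode, not numbers taken from the poster's actual design; your monitor/mode may differ.

```python
# Standard 640x480@60 Hz horizontal timing (in pixel clocks at ~25.175 MHz).
# These are the usual textbook values, assumed here for illustration.
H_VISIBLE = 640   # active (drawable) pixels per line
H_FRONT   = 16    # front porch: after active region, before sync
H_SYNC    = 96    # horizontal sync pulse width
H_BACK    = 48    # back porch: after sync, before the next active region
H_TOTAL   = H_VISIBLE + H_FRONT + H_SYNC + H_BACK  # clocks per full line

def h_active(count):
    """True while the horizontal counter is in the drawable region."""
    return 0 <= count < H_VISIBLE

# Pixels emitted during the porches are never shown, which is why
# drawing from counter 0 without accounting for blanking can make the
# image appear shifted or "torn" at an offset.
print(H_TOTAL)        # 800
print(h_active(639))  # True  (last visible pixel)
print(h_active(700))  # False (inside the blanking interval)
```

The vertical timing works the same way, just counted in whole lines instead of pixel clocks.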
Nice tutorial. It's been a while since I've done these myself, so it was a good refresher, thanks! Since it's B OR ... and not B XOR ..., you could safely remove the B' from the 2nd condition and simplify down to B + AC, removing the need for the inverter and changing the AND3 to an AND2. Groups can overlap, so ABC and AB'C could be grouped (bottom row, right 2 cells), which yields the AC term. If ABC returned 0, it would need to be an XOR and keep the inverter and AND3 gates.
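The simplification claimed above (B + AB'C reduces to B + AC because the groups may overlap) is easy to verify by brute force over the truth table; a quick Python check:

```python
from itertools import product

# Truth-table check of the K-map simplification discussed above:
#   B OR (A AND NOT B AND C)  ==  B OR (A AND C)
# A, B, C follow the variable names used in the comment.
def original(a, b, c):
    return b or (a and (not b) and c)

def simplified(a, b, c):
    return b or (a and c)

assert all(original(a, b, c) == simplified(a, b, c)
           for a, b, c in product([False, True], repeat=3))
print("B + AB'C == B + AC for all 8 input combinations")
```

Intuitively: when B is 1 both sides are 1, and when B is 0 the B' factor is redundant, so the AB'C term collapses to AC.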
Thanks again for the videos! And thanks for sharing your GitHub repo. It certainly makes onboarding and making quick progress easier. I'm taking the breadth over depth approach to get started.
Wow, GTK is a massive PITA! I can't figure out how to get it to work. I'm a Linux newbie, so I'm stumbling through. Giving up on GTK for now. I guess I can install it on the Windows side if I can't figure out how to install it on WSL.
Hi, I need help with my project (vital sign detection) using the IWR6843ISK. Would this module alone suffice, or would I need the MMWAVEICBOOST as well? Basically, I want to measure vital signs with the sensor and transfer the numeric data to a connected Raspberry Pi so I can upload it to the cloud. Can you please help me with this, as I am very new to it? I have tried contacting TI technical support and went over the vital signs lab, but it is very confusing. Can you please list the hardware and other requirements I would need for my use case?
I have just started creating the tutorial! I am planning to make a short guide for getting this type of setup working, and then I plan to do lots of tutorials based on it.
Great tutorial, but Device Manager assigned the new device as "serial USB converter" when I plugged the lead into J13; no "Unknown device" entry appeared. When I tried to update the driver with the Quartus USB-Blaster II driver, Device Manager reported that the "best drivers for your device are already installed". Can anyone please advise?
I am currently struggling with the same thing. I am using the HDMI reference design provided on the DE10-Nano CD, and I am also using this GitHub repo guide as a reference: github.com/zangman/de10-nano/blob/master/docs/FPGA-SDRAM-Communication_-Introduction.md I am still debugging the code I wrote to scale up the design from GitHub to write RGB pixel data for a 1080p picture.
@@metaphysicscomputing I can now use the mmap function to read or write data to the HPS DDR on the HPS side. Next, I will study how to send video data into the HPS DDR through the mSGDMA on the FPGA side; my video data is generated on the FPGA side.
@@jajjajajj Nice! 😁 I also got the SDRAM controller to work for my own Avalon MM Master. I’m now debugging my FPGA implementation to display the frame stored in the DDR3.
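The mmap approach mentioned in the thread above typically means mapping a reserved physical region of HPS DDR through /dev/mem. Here is a hedged Python sketch: PHYS_ADDR and SPAN are made-up example values (use whatever region you actually reserved for the FPGA-written frame buffer), opening /dev/mem needs root on a real board, and only the page-alignment math is shown running.

```python
import mmap
import os

# Hypothetical reserved frame-buffer region in HPS DDR (example values,
# NOT from the comments above -- substitute your board's reservation).
PHYS_ADDR = 0x20000000        # physical start of the FPGA-written buffer
SPAN      = 1920 * 1080 * 4   # one 1080p frame, 4 bytes per pixel

# mmap requires a page-aligned file offset, so split the physical
# address into an aligned base plus an in-mapping adjustment.
PAGE   = mmap.PAGESIZE
base   = PHYS_ADDR & ~(PAGE - 1)   # page-aligned offset for mmap
adjust = PHYS_ADDR - base          # where the buffer starts inside the map

def open_frame_buffer():
    """Map the buffer via /dev/mem; needs root and only works on-target."""
    fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
    mem = mmap.mmap(fd, SPAN + adjust, offset=base)
    # Pixel data lives at mem[adjust : adjust + SPAN].
    return mem, adjust

# The alignment math is checkable without hardware:
print(hex(base), adjust)
```

The same split applies in C with open()/mmap(); the key point is that the offset passed to mmap must be page-aligned even if your reserved buffer address is not.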