Hi! 6502, 8088, DIP-40 ICs, assembly programming - sound good? You’ll love this channel! Expect videos about electronics, projects, and more.
Indeed, and there are no CMOS 6507s out there - that’s still “pretty fast” though. I guess the myth started because it’s not immediately obvious to everyone that this means you can halt it indefinitely and move it forward with the push of a button. And it’s also in all the manuals :)
Another mildly irritating feature of the original 6502 is that it ignores RDY on write cycles, so if you need to slow writes you have to resort to clock stretching. I guess this made sense at the time, since RAM was faster than the CPU anyway and you normally only had to add waits for ROM reads.
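To picture the clock-stretching workaround mentioned above, here is a toy Python model (purely illustrative, not real 6502 silicon timing): since pulling RDY low won't pause an NMOS write cycle, the wait has to come from holding the clock phase itself, so the CPU simply sees fewer clock edges.

```python
# Toy model of clock stretching (hypothetical, not real 6502 timing):
# instead of pulling RDY low (ignored on NMOS write cycles), the clock
# phase is held so the CPU sees no edge and stays frozen mid-cycle.
def stretched_clock(total_cycles, slow_write_cycles, stretch=2):
    """Yield one flag per master-clock period: True means the CPU gets
    its clock edge and advances, False means the phase is being held."""
    for cycle in range(total_cycles):
        if cycle in slow_write_cycles:
            # Hold the phase for (stretch - 1) extra master periods.
            for _ in range(stretch - 1):
                yield False
        yield True  # normal edge: CPU advances one cycle


# Cycle 1 is a write to a slow device: it takes 3 master periods.
timeline = list(stretched_clock(3, {1}, stretch=3))
print(timeline)  # [True, False, False, True, True]
```

The point of the sketch is just that the CPU still completes the write, only later - from its own perspective nothing paused at all.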
I think it’s more about the fact that you don’t really need it to pause as you can just latch whatever the 6502 wants to offload to a register pretty easily on the ~W signal if you want to inspect what’s coming out. But that’s basically what you said :)
@@AndersNielsenAA My first computer was a KIM-1. I had to make my own power supply. My second computer was Apple II serial number 000745. I was 15 years old and I worked at The Byte Shop in Englewood, Colorado. 8-)
Now try this on a Z80. It uses capacitors as temporary bit-storage elements. If you wait too long, the capacitors discharge. That's why it had a minimum clock frequency. EDIT: I should've watched the first minute before commenting. Of course the Z80 can also be put in a wait state, so there might be a way around it (although I don't think it has a 'sync' pin).
@@AndersNielsenAA It does have an equivalent to SYNC: M1 together with MREQ indicates an opcode fetch cycle. There seems to be no minimum clock frequency. From the manual: "When the clock input to the Z80 CPU is stopped at either a High or Low level, the Z80 CPU stops its operation and maintains all registers and control signals." So it looks like you can single-cycle a Z80 by either method -- stop the clock, or use WAIT.
Well, neither of those actually applies. On the Asian markets it’s super easy to get both - but the CMOS versions are a bit more expensive, much more if going for a new one. Interfacing should be the same, except the 90s versions run faster. Then there’s also the whole CMOS software incompatibility thing between manufacturers.. and HW incompatibility for the WDC version too.
Guess what this is a step towards: making a computer similar in function and form factor to an Altair 8800, but using a nice 6502 chip instead of an 8080.
What shocking news - I had a 6502 on my bike and could never work out why I had to keep peddling. The shop that sold it to me said it would certainly cycle. I took it back and he said I must have installed it wrong. It never worked properly, and now I know why, so after showing the shop your video I asked for my money back. He said that since I bought it over 10 years ago it had probably failed anyway. So if you can make your findings known to all cycle outlets, it will help a lot of people.
I never realised that people thought you could not do it. Several computers from the 80s used the RDY line to stall the CPU when the graphics chip needed cycles (even when the RAM ran at twice the speed of the CPU and the graphics chip, with each getting access alternately). Even the Atari 2600's 6507 has a RDY pin, but single-stepping it will cause the screen to fall apart.
The issue is that the registers need a clock to keep refreshing their state. The way you would typically stop a CPU and make it single-step is to stop the clock signal - that's how it was handled later, once the registers were static. The board here doesn't stop the clock signal; it instead latches the signals which allow the CPU to fetch instructions. It's still getting clocked and still refreshing the register states, but it is waiting for the bus to be free to read - hence a single step.
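The gating idea described above can be sketched in a few lines of Python (a toy model of the concept only, not the actual board's schematic): the clock ticks every cycle, and a latch re-asserts RDY low at each opcode fetch (SYNC high) until the next step request arrives.

```python
# Toy single-step latch (conceptual sketch, not the real circuit):
# the CPU is always clocked; only RDY is gated.
class StepLatch:
    def __init__(self):
        self.rdy = False           # start halted
        self.step_requested = False

    def press_step(self):
        self.step_requested = True

    def clock(self, sync):
        """Called every clock cycle with the CPU's SYNC output."""
        if self.step_requested:
            self.rdy = True        # release the CPU to run
            self.step_requested = False
        elif sync:
            self.rdy = False       # next opcode fetch: halt again
        return self.rdy


latch = StepLatch()
latch.press_step()
print(latch.clock(sync=False))  # True: running
print(latch.clock(sync=True))   # False: halted at the next fetch
```

Note the key point from the video: the clock input never stops, so the dynamic registers keep refreshing while RDY does the halting.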
It does not do single cycle by itself - what you achieved is with the help of external logic. By itself, the non-CMOS version cannot do this. It's the same as achieving bus arbitration by electrically disconnecting the CPU: the 6502 cannot do that without external logic, whereas the 6510 can.
That’s incorrect. All the 6502s put the data bus into high-Z for half the cycle - the 6502 and 6510 are the same in that regard. And this actually does let us halt after each CPU cycle.
@@AndersNielsenAA The 6502 has no tri-state capability. This is proved by the fact that the C64 engineers employed a 6510 CPU, which is a 6502 with integrated bus tri-state capability (the AEC signal) plus one memory-mapped HW port.
@@gasparinizuzzurro6306 There's plenty of tri-stating to single-cycle it and feed it instructions - you don’t need to tri-state the address bus for that. The 6502 won’t do anything at all without support circuitry; it’s a CPU. Anything except the most basic address decoding requires more circuitry. Doing the same thing on a 65C02 would also require a minimum of one IC to stop the clock in the right state - so you might as well use the same circuit and leave the clock running. And the 6510 is NMOS just like the 6502, so it needs a running clock too.
Thanks for the great video! It consists of only two D-FFs, so maybe this is the minimum configuration. In my case, I configured it with two JK-FFs and NAND gates. Also, in my case, when I do a CPU RESET with the CPU stopped, in rare cases the RESET sequence is not performed correctly, so I added a circuit to keep the CPU in the RUN state during the RESET period.
Thank you, and thank you for the insight! The way the circuit is set up relies heavily on the button’s DPDT nature and makes it harder to control digitally - or just with SPST buttons. A few more logic ICs like you used would certainly make it more versatile and robust. The 6502s can differ quite a bit when it comes to reset - I’ve heard some of the NMOS ones can overheat if held in reset for too long, so a short reset pulse is very important too. Luckily I’ve never experienced that with a 6507. I appreciate having access to your Perseus schematics very much so I can sanity check a few things while moving forward, thank you 🙏 :)
@@AndersNielsenAA Thank you very much. To explain my case in detail: when I powered on the CPU board with the RDY signal at an “L” level and in a stop state, reset the CPU, and then proceeded with the step operation, it occurred with a certain probability that the execution address did not progress correctly. In this case, I believe the start vector setting within the CPU was not correct. So I think this kind of problem does not occur in a configuration where the first state is always the RUN state via power-on reset. I had to solve this problem with my PERSEUS-7, because it is assumed to often be operated in step operation only, never in the RUN state from power-on. I have made four PERSEUS-7 units and verified the operation with multiple CPUs as well, and confirmed that it is not a defect in only one unit by chance. The defect occurred on a Rockwell NMOS R6502A and not on a CMOS R65C02. The R65C02 has a different number of clocks before the first instruction execution and, I believe, an improved reset sequence. The additional circuit that solved the defect is R54 in the PERSEUS-7 and R4 in the PERSEUS-8.
@@mitsuruyamada Thank you for the details. Since SYNC stays low during reset, I wonder if it would be sufficient to ensure “single instruction mode” during reset. I guess that would do the same as keeping it in the RUN state - but of course that means you can’t read the reset vector the CPU reads either way.
@@AndersNielsenAA Thank you very much. You are right. According to the datasheet, it is sufficient to maintain “single instruction mode” even during RESET. This phenomenon and the countermeasure is based on my personal experiment. I have not been able to reproduce the defect within the conditions and number of times I have re-tested now. Since this circuit allows the same number of dummy steps until the first program instruction for the R6502A and the R65C02, I am going to continue evaluation with this circuit. Thank you again.
I tried to debunk so many myths like that on various Commodore forums and was pelted with insults. I was even expelled from Lemon64. Making videos like yours requires a lot of effort, and I didn't have the energy or time.
If you were banned from Lemon64, it wasn't just because you were trying to debunk myths. If you had shown evidence and stayed civil, you wouldn't have been banned.
@@AureliusR I can give you a copy of no fewer than 500 posts Groepaz left there directed at me that were all personal attacks, including questioning my mental health. Everyone has their limits. What you say is patently false. He was an acquaintance of TNT, and that is the only reason it ended the way it did. In any case, it resulted in me selling more than $200k worth of merchandise that, according to him, should not have worked and should have caused damage to C64s. All false. Added edit: The document titled 'The C64 PLA Dissected' by Thomas 'Skoe' Giesel proved everything I said was true, and all the disparagement from Groepaz was lies.
It requires quite a bit more effort than most believe - and a lot of love for the subject and patience with externalities :) Constructive dialogue is great - if people aren’t nice it’s easy to disregard :)
@@AndersNielsenAA Groepaz became very offensive with me around 2008 when I indicated ST's M27C512-90B6 PROM chip (only $1 US at Mouser back then) had a low slew rate on its outputs and therefore did not generate the glitches other 27(C)512s would cause when used as a PLA replacement for the C64. He was involved with Individual Computers' SuperPLA V3, which sold for about 30 Euros, so this would have been a significant loss of market. I had proof from logic analyzers at the university where I was studying but couldn't take the results out. Eventually the genuine Super Zaxxon cartridge became the litmus test. The ST chips were discontinued in 2011 and would have saved huge sums for C64 users. See the video from MindFlareRetro: #10 'The PLAin Truth About the Commodore 64 PLA' at World of Commodore 2017.
It would be a bit more complex, but I'd think it should be possible to adapt that concept to make a breakpoint debugger as well. E.g. feed all the relevant pins into a RAM chip's address-in pins and use one of the data-out pins as a break signal. (Though at that point, just using another CPU to track things might be easier and cheaper.)
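As a hypothetical sketch of that RAM-as-comparator idea in Python (all names invented): the RAM becomes a 1-bit-wide lookup table indexed by the address bus, preloaded with a 1 wherever a breakpoint should fire, and the data-out bit is the break signal.

```python
# Hypothetical sketch: a RAM chip as a breakpoint comparator.
# Address bus -> RAM address pins; one data-out bit -> break signal.
ADDR_SPACE = 0x10000                 # 16-bit address bus = 64K 1-bit entries

break_table = bytearray(ADDR_SPACE)  # all zeros: no breakpoints set

def set_breakpoint(addr):
    """Preload a 1 at the breakpoint address (i.e. program the RAM)."""
    break_table[addr & 0xFFFF] = 1

def break_signal(addr_bus):
    """The data-out bit: would feed the halt/RDY logic when it goes high."""
    return bool(break_table[addr_bus & 0xFFFF])


set_breakpoint(0xC000)
print(break_signal(0xC000))  # True: halt when this address appears
print(break_signal(0xC001))  # False
```

The appeal of the scheme is that the comparison cost stays constant no matter how many breakpoints are set - you just write more 1s into the table.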
I'm sure there's better options, but I'd be itching to see how much can be done without specialized tools. ("We did it with the wrong tools, as a joke")
But what happens if the clock speed gets down to zero hertz (regardless of the circuit shown), especially with the older versions of the processor? I think part of the myth may still be true - not being able to run if single clock cycles are fed to it. In the video the 6502 still receives the clock at full speed, but execution is "gated" by the RDY pin.
Slightly confused here; didn't both the Ataris and the C=64 have special pins to let the gfx chip halt the CPU? Did that work differently from what you described here?
Absolutely - they used the RDY pin to slow down execution just like the description in the 6500 datasheets. My point here is the 65C02 is often credited as the only chip you can use on a breadboard and step one instruction forward at a time - but actually the NMOS 6502 has the RDY pin for exactly that purpose and has no limits as to how long you can pause between instructions if you keep the clock running. ...and then I show how to do it :)
ohhhh cool! I heard this information about the registers losing data with no clock refresh, and was a bit disappointed. I am interested in running a real 6502 as a hardware unit test and source of truth against an emulator I want to make, so this information should be pretty useful.
Absolutely! It's been a few years since I wrote that and I wouldn't make the blunder today - but I can't believe I didn't catch it myself while making the video XD
I thought static for a CPU means that it does not rely on race conditions between edges. So you can slowly change the voltage on the clock pin, and as long as your noise amplitude is below the range between high and low, it wouldn’t accidentally go backwards or double-step.

A lot of flip-flops need edges which transition faster than the speed of some logic gates? They store information on the gate capacitance? 10 kHz is really slow and more like dynamic RAM - again, data is stored in a capacitor. Intel had a refresh circuit for this. In the 6502 only temporary registers are dynamic. Are we sure that this isn’t really the pre-charge of the bus? It leaks away, and then when a device on the bus fires, no voltage changes.

So you say that the VIC-II in the C64 should have used an input buffer to accept early data if the current instruction does not take 7 cycles? I still don’t get how the Atari 8-bit stalled the CPU for individual cycles. Surely they did not manipulate the core, so they could only keep the clock high for longer. Is the difference that Commodore has a clean rhythm with the clock - the 64 always half as slow, the Plus/4 faster in the border (separate enable bits for side and bottom borders) - while Atari comes up with the signal on the fly?
I’m not completely sure, and you may be right that I stretched the comparison between DRAM and SRAM over to registers a bit too far. Woz actually ran into the issue with the Apple II - he thought he could keep the clock high while refreshing RAM because it was faster than the spec for a cycle, but he missed that keeping the clock high for a whole cycle at 10 kHz is essentially 5 kHz.. and he started losing registers. He kept swapping in new 6502s because brand-new ICs have greater register capacitance than slightly used ones - it worked for a little while.. I bet he was real happy to switch to CMOS :)
@@AndersNielsenAA So we really are talking about the non-temporary registers? So even for NOP the 6502 goes through A, X, Y, SP on instruction load and refreshes a capacitor? I don’t see the metal lines for this. Intel had a register file and a refresh counter just below, and for each cycle (or instruction) this counter would advance. Ah, so on clock low, registers live forever?

To check if real registers are lost, we need to keep the clock high for long and STA something with an addressing mode with z or t or SP. Then record the address bus. Then execute the same instruction, but with a short high. If Y survived, we now record the correct address. This tests AH and AL. Or is it some trick with the instruction pointer? Or the ALU input: LDA, CMP, PHA. Slow and fast. Test TAX…

In a way the 6502 is dynamic: it uses a line of inverters to delay the clock to create 4 phases. The clock edge needs to be sharp, not jitter. I don’t think there is a Schmitt trigger. I know that Intel and others need two clock pins with non-overlapping phases. I feel like this is mandatory for static operation. So does the 65C02 have two clock pins?

In the end CMOS is mostly great for battery life, because at that feature size leakage was zero, as was energy consumption while holding a phase. I would like to see a joule thief where the 65C02 runs on solar power and reduces its clock at dawn until it stops. Just wanted to add: static CMOS must not have an overlap where the PMOS and NMOS transistors both conduct. I guess that 1.1 V core voltage 5 GHz CPUs don’t really care that much. I also wonder how old discrete CMOS avoided this at 12 V max on the rails. Scrap that slow clock - even a CMOS circuit on solar power needs a clean step clock edge and enough energy in an accompanying capacitor to run a full physical cycle (but not a whole instruction).
@@ArneChristianRosenfeldt Neither - in those 1/10,000ths of a second you need to pass both phases. I’ve seen NOP running continuously slower than 10 kHz without the program counter losing count - but something is out of spec for sure. Not completely sure what’s going on at the silicon level - would love a detailed description - but I am guessing it is in some way caps leaking.
@@AndersNielsenAA I found that in the NES, the sprite memory in the PPU is DRAM, but its refresh is the same as its evaluation. Nintendo is as explicit about it as Intel - weird that MOS is not. I should check Visual 6502 to see if the bits use 6 transistors or 1. DRAM needs sense amps and an SRAM buffer. AH and AL only serve as buffers. I read that the MOS designers were speed freaks who wanted to squeeze the most MHz out of their ancient fab / large feature size (for yield). So with 8 transistors an SRAM bit could be put to the middle between low and high (turn off power), then connected to the bus (phase change), and then the feedback powered up (delayed phase). I babbled on in my last comment.
@@ArneChristianRosenfeldt You’re certainly digging into a deep subject I’ve only scratched the surface of - I guess I should go deep with Visual 6502 some time :) I appreciate the thoughts, thanks! 😊
Thanks - I think the Z80 has static registers and doesn't mind stopping the clock. I guess technically you could use this circuit just for debouncing and hook it up to the clock - but less would do :)
@@AndersNielsenAA Thanks for that. I'd be looking to use this for stepping through old Z80 and 6502 arcade PCBs to see what makes them tick. Before the likes of custom chips were a thing.
@@phils_arcade In that case you would probably need a bit more work for the Z80 in circuit since you would have to manually do all the steps. But maybe you get some inspiration on how to make that happen on a breadboard from this :)
Thanks for watching! You can get the kits here: www.imania.dk/index.php?currency=EUR&cPath=204&sort=5a&language=en Come hang out in the Retrocomputing Hackerspace Clubhouse on Discord! discord.com/invite/kmhbxAjQc3
When I disassembled mine a couple of years ago I think I managed to break one of the stabilizers on the spacebar, as those are also made of brittle plastic. I think it would be good to mention that. Thanks for a good video!
Thank you. Yes, it's not a very durable design. Same for most of the other larger keys - they really should've put in a metal bracket to hold them, but I guess it's another cost-cutting measure.
Good question. It has a very distinct spring mechanism providing the switching.. but on the other hand it’s still a membrane keyboard. Both and neither, I guess.
@@AndersNielsenAA Model Ms used a membrane too and folks would usually consider that mechanical (due to the buckling spring). I'd consider it mechanical.
@@colinstu Usually the distinction is made between individual switches vs a membrane but it doesn’t really make much sense as a measure of “quality”. You can find plenty of horrible mechanical switches and some great rubber dome keyboards too - I can’t handle 20 minutes of typing on a C64 keyboard no matter how “mechanical” it is ⌨️ 😆
I love mechanical keyboards and I'm a big fan of the old Model M. I was surprised to find out they had membranes under the buckling springs. I feel like a Coca-Cola fanatic who finds out they've been drinking Pepsi all this time without realising. All I can say is they _feel_ mechanical. I guess that's the main thing, right?
@@AndersNielsenAA I saw the GitHub but I'm just kinda overwhelmed by it. I think I need to use an Arduino and then just build some stuff together, but I'm not sure where to start. I was given a 5150 project and while it works it needs a keyboard 😀
Love the safety considerations for the UV radiation. Also the USB-PD usage. My bodged-together setup uses a UV-C LED and a 555 timer driving a boost converter to take 5V up to 9-ish to overcome the forward voltage of the diode. This is far more elegant and less hacky :)
Yes - I haven't tried though, so no software support. And of course it doesn't need 12V+ so it can be done on a breadboard easily - but the programmer would also be more practical.
@@AndersNielsenAA Ooh, I have just seen that, that's great. I am quite new to the concept of programming EEPROMs. The 28C256 has a different pinout to the 27C* - is this something I can remap using software, or is it a hardware limitation? For example the A14 pin.
Thank you. Kind of. The code needs to be changed to use another whole port, like PORTK, and it has to be wired onto it using jumper wires.. so kind of.
Hi. If you talk about something cool like that fancy ZIF socket, you could at least say the model name, and maybe a link in the description. Also, *brilliant project!*
I had an idea on how to make the button multifunction WITH visual feedback. Have a small annunciator appear in the corner of the display that tells you what the button will do as you let it go. So it could be Enter, ESC, quit, depending on how long you have held it and after 2 to 5 options it would return to the start. Similar to the way you enter a digit by having it cycle through when setting a cheap digital clock and stopping at the chosen digit. Great project and I will get one soon enough. Leaving room for a through hole trim pot would be nice as the tiny SMD ones appear very fragile and hard to use for older eyes.