I never had anyone stop and explain CPUs at this granularity... this is something I have been looking for to help me better understand buffer overflows, assembly, and more. You are a good instructor!
Great videos, but there is a slight error in terminology at 46:06. I believe what he is talking about at this point is called "Direct Memory Access." Memory-mapped I/O is when you assign a device a particular memory location, and write control signals to this location using regular MOV instructions (or equivalent) in order to control the device. The device listens to the bus for any references to its assigned address, then reads and executes the command. At least, this is what I've seen in textbooks, perhaps the terminology is loose.
Actually, DMA is used along with slow peripheral devices like disks, and it works together with memory-mapped IO. The DMA controller is a dedicated piece of hardware that masters the bus (e.g., PCI) on the CPU's behalf. Basically, whenever the CPU would otherwise have to sit and read from a very slow device like a disk, it delegates the transfer to the DMA controller, which reads from the device and writes the data to a specific memory-mapped address. The CPU is now able to go back to executing processes while the DMA controller does all the work. When it finishes, the DMA controller raises an interrupt to notify the CPU that the data is ready to be read. The region of RAM it fills is called a buffer. Great video!
Right. It is certainly true the DMA controller may use memory-mapped IO. I was just pointing out that when he says, "The idea here is that, rather than the CPU essentially going to the IO device, fetching the data, and then bringing it to main memory, we're going to tell the IO controller to go talk to main memory directly, without actually involving the CPU," this is misleading, since the slide up at the time is "Memory Mapped IO," when in fact he is describing DMA.

The textbook I'm using for reference is Structured Computer Organization by Tanenbaum. In it, he describes DMA essentially as follows (very similar to what you said, only restating so it's clear to anyone reading). Instead of the CPU interacting directly with the IO device's registers, it will:

1) Initialize a buffer in RAM
2) Tell the DMA controller where this buffer is, how long it is, and whether we're reading or writing
3) Tell the DMA controller which device to talk to (e.g., device 6)

(The DMA controller itself has registers, which is how the CPU "tells" it these things.) The DMA controller will then read/write memory, out of/into the specified device's control registers, and signal an interrupt to the CPU when the IO is complete. This frees the CPU up to do other things, provided it's not trying to access the bus while the DMA controller is working, since the DMA controller gets priority (see "cycle stealing").

As you said, the DMA controller may in fact use memory-mapped IO to write to the device's control registers. For instance, when you tell the DMA controller which device to interact with, it may translate this number to a full-blown memory address, which is either physically wired to the IO device port or configured in some way to end up there (i.e., memory-mapped IO). It's interesting that you say DMA is used only for slow peripheral devices, but I suppose it makes sense that for faster devices it would be more efficient to simply interact with the device registers directly.
I feel your explanation of memory mapped I/O is misleading. MMIO is simply the mapping of the I/O device into the system's memory address space. Your explanation seems to mix or confuse this with DMA (Direct Memory Access). Not all MMIO devices use DMA.
Not at all. System calls are implemented with interrupts, which is what ensures they work correctly: a user program enters the kernel by issuing a software interrupt (a trap). And if a program misbehaves, for example by accessing memory it's not supposed to access, the hardware raises an exception. On UNIX systems, both are dispatched through the Interrupt Vector Table to handler procedures. Remember, system calls are the way users interact with the kernel, and the system must have a way of handling exceptions.