For many years I was in the Department of Computer Science at Portland State University. I have an Sc.B. from Brown University and a Ph.D. from the Oregon Graduate Institute.
When not trying to figure out how my computer actually works, I like to ski, hike, travel, and spend time with my sons building things.
Why use a dedicated page for KSTACK when there's plenty of free space in the trapframe page? Could we use one page for both the kernel stack and saving the user registers? Xv6 seems rather inefficient for small synthesizable (FPGA) systems with limited RAM.
When I was earning my master's degree, I heard a lot about finite state machines (FSMs), but it was all theory: like clouds in the sky, there's a lot of water, but you can't drink it. I toiled for three months after graduating until I implemented my first FSM in code in 1981. Now there is a programming methodology based on this concept, v-agent oriented programming (VAOP), with many examples of its implementation. It's best to start learning about VAOP with this article on Medium: "Bagels and Muffins of Programming or How Easy It Is to Convert a Bagel into a Black Hole". With VAOP, you can implement FSMs in any programming language.
This assembler syntax reminds me of AT&T's DSP16xx series of DSPs, something I haven't seen adopted anywhere else. The thoughtful syntax makes assembler programming so much more understandable, with no loss in functionality. As an example of DSP16 assembler (this executes in one cycle, optionally inside a zero-overhead loop): a0=p p=x*y x=*pt++ y=*r0++
Just ran across this... In a very high-level sense, this sounds similar to something like an 8080. Not much particularly useful to comment here (and not much immediate use case for doing an 8-bit core).

I had designed and implemented a CPU core in Verilog (with a custom ISA), but mine ended up very different (64-bit LIW/VLIW with 64 GPRs and no dedicated FPRs), and it generally requires a bigger FPGA (a mostly feature-complete version fits into an XC7A100T). Previously, I had experimented with smaller ISAs, but the smaller you go, the harder it is to make a case for "why not just use RISC-V?..." On a bigger FPGA it is possible to get better performance per clock, by around 30% it seems, but this is harder to pull off within the limits of a smaller FPGA.

Technically, my current core can also run RV64G (or, more correctly, RV64IMFD; the A and Zicsr extensions are incomplete, but also not emitted by GCC). There are differences at the ISA level, but the pipeline was designed so that most of them can be glossed over in the instruction decoder. System-level features differ a fair bit, though (somewhat different interrupt handling and MMU design).

One drawback is that I am running my own OS (of sorts) on it, and debugging is a pain; much more time is spent debugging than adding new features. The most recent work was things like an ELF loader to allow running RV64 binaries in user mode, and debugging the virtual memory system (to try to reduce the amount of crashing).

For my own ISA, I have my own C compiler, and I am using a modified version of PE/COFF for the binaries. Nothing intended for serious use; it is still mostly a hobby project at this stage.
🎯 Key points for quick navigation:
- Turing machines lack a built-in mechanism to detect the left end of the tape, but this limitation can be overcome by shifting the input and introducing a special symbol, such as the dollar sign, to mark the left end.
- Programming Turing machines involves progressively detailing the implementation, akin to moving from high-level programming languages to machine code in traditional computing.
- Turing machines can utilize subroutines, allowing one machine to perform a task that another machine incorporates into a larger computation.
- Techniques like symbol marking enable complex tasks such as string comparison without altering the original data, expanding the versatility of Turing machines in problem-solving.
Made with HARPA AI
Thank you for all your efforts. A quick question regarding __sync_synchronize(): what compiler optimisation does it guard against? The developer's expectation from main.c is that the processor runs everything in exactly program order.
I'd never heard of a half-word. In Linux, 16-bit integers are shorts, and Windows calls them words. Windows still denotes 32-bit integers as DWORDs, as is evident in the registry, and C++ and C# on Windows likewise treat a DWORD as a 32-bit integer.