Discover the future of software from the people making it happen.
Listen to some of the smartest developers we know talk about what they're working on, how they're trying to move the industry forward, and what you can learn from it. You might find the solution to your next architectural headache, pick up a new programming language, or just hear some good war stories from the frontline of technology.
Join your host Kris Jenkins as we try to figure out what tomorrow's computing will look like the best way we know how - by listening directly to the developers' voices.
Hayleigh is a rockstar in the Gleam community! She's so bright, and I love the way she's so intentional with her words. I've been playing with Lustre and it's very good!
I too had seen Elm and was weirded out by the commas coming before each item in lists. After exploring some OCaml and Haskell I came back to Elm. Now I love Elm. I think of it as Haskell-Light, with everything you need and nothing you don't.
Hayleigh thinks deeply about architecture and it's fascinating to listen to her reason through the trade-offs between different models. I'm loving Gleam and Lustre, so thanks for the software as well as the podcast!
Thank you for this interview! As a junior front-end developer, it was very interesting to watch it and learn about Gleam and Lustre. I tried Elm, and one thing I liked about it was its very helpful and readable error messages, but sadly there are zero jobs in my location that use it. I really like your interviews, you're doing a really good job!
Whatever you do Loris, please, please I beg you :) do NOT reinvent Scala's SBT by chasing the dragon's tail of build-system-self-bootstrapping #declarativeToDeathAndBeyond
I'm currently learning Odin, and it seems like every time I delve into a topic, I'm amazed at how cleverly things are done in Odin. The more I use Odin, the more I agree with his maxim "Odin is the alternative to C for the joy of programming". I invite any coder to test it for a few weeks. It's an amazing language for coding.
There is the Rye build system, written in Rust, developed by the author of Flask (the web micro-framework for Python). Currently it's maintained under Astral.
What baffles me about Hare is that multithreading is not supported at all, to the point where they explicitly state that you should not do it, that you're on your own, and that the stdlib could possibly break if you try (since it's not thread-safe). Did they even look at how computer hardware evolved over the last -10- 5 years? Or at where hardware manufacturers say things are going in the next decade? If this weren't a systems programming language, fine; a higher-level language can get away without support for it (look at Python). But a systems programming language?
I did iOS development, making musical instruments. I wrote my own engine, but I tinkered with ChucK, Faust, SuperCollider, and Csound. When I stopped doing iOS music app development, Julius Orion Smith III got the IP for my app, and they made Geo Shred with their own company. Having FFTs, filters, and vectorized acceleration built in is a good idea for a language, similar to how a vectorizable language for AI is a good idea (Mojo would be the closest contender). Audio is basically a hard real-time app: you get loud pops when you miss an audio deadline, so usually it will get used in a C, Rust, etc. runtime.
For those who'd like to scratch more surfaces, you might want to look into CSP (e.g. Transputer, XMOS, Occam), Clash and Futhark, or some DSPs like the Epiphany processors (with their dedicated loop modes). C is amazingly awkward for functional pipelines.
I have a feeling that aliasing was also mentioned and talked about in a broader context; lots of viewers may have no idea what it is in this application or why it happens, and it would be nice to explain it in at least a few words.
Starting at around 44:20, Romain explains pipelining on an FPGA. What he describes (AFAICT) is re-using a single hardware resource multiple times to process data. I imagine a case where each channel of multi-channel audio is processed by the same block of hardware, instead of dedicated hardware for each channel. I think "multiplexing" would be a better term for this. However, my understanding of FPGA lingo is nonexistent; I'm reasoning from what I know about CPUs. With that background, pipelining on an FPGA would involve keeping all hardware resources busy by decomposing computations into small, successive steps that can be executed in parallel for a stream of inputs. Can anyone clarify this?
You're right, pipelining is time-domain multiplexing at the function-block level. If you have some fairly complex function, it takes time to complete as it propagates through multiple layers of logic. If we add registers spread out in that deep logic, the depth between registers is lower so we can raise the clock frequency, but the new registers must then be fed with more data. The stages of the pipeline are like work stations along a conveyor belt.

It's the same in CPUs; a pipelined CPU has multiple instructions at varying stages of completion. A barrel CPU, such as the XMOS XS1, runs instructions from multiple threads to ensure they're independent (the generic name is SMT; hyperthreading is one example). MIPS instead restricts the effects, such that the instruction after a branch (in the delay slot) doesn't need to be cancelled. DSPs and GPUs specialize in this sort of thing, and might e.g. use one decoded instruction to drive 16 ALU lanes for 4 cycles (described as a wavefront or warp).
This discussion seems to conflate "real-time" with "fast". Programming in C, C++ or Rust doesn't magically get you "real-time", nor is it required. Real-time is more about being deterministic, which requires help from the operating system itself as well. There's no such thing as a "real-time" programming language that runs on Windows, Linux, or any other non-RTOS.
Thanks for the comment. I didn't know much about real-time operating systems (RTOS), but it seems Linux only recently got support in the mainline kernel. EDIT: Seems none of the links made it through, but there is an article on ZDNET called "20-years-later-real-time-linux-makes-it-to-the-kernel-really".
In the audio context, that 'help' from the OS is that your audio processor is called on a special thread, which has 'more real-time guarantees' than a normal thread. You're right that this isn't a true real-time context like an RTOS, but it's usually pretty good (or your computer's audio would glitch constantly). It's an audio programmer's job not to mess it up by causing priority inversions with lower-priority threads, or by making system calls that lack worst-case latency guarantees. This is usually only possible in C, C++ or Rust.
Linux is an RTOS, though many don't use that functionality (e.g. core reservation, memory locking, scheduler replacement). And there are real time programming languages, e.g. the Copilot Realtime Programming Language and Runtime Verification Framework. It's common to relegate hard realtime tasks of limited complexity to coprocessors like PRUs in BeagleBone, PIOs in Raspberry Pi MCUs, or separate microcontrollers. An example of such a task is dynamic voltage and frequency scaling in mainline CPUs.
A fantastic interview as always! I’ve been a hobbyist digital synth designer for ~6 years, and a functional(ish) language for audio that transpiles to C and JS is literally a wish come true. The additional possibility of targeting FPGAs was beyond what I thought prudent to wish for. I will be checking this out immediately.
I’ve been doing signal processing for software defined radio in Pony-lang and my framework is concurrent in the way that Kris asks about at 27:58. Every “block” in my framework is a Pony Actor and the Pony runtime schedules them across the available cores. And inside each block/actor, the processing almost always takes advantage of vector operations. I haven’t measured the upper limit of performance but I’ve been processing streams with hundreds of thousands of samples per second - much higher than a typical audio data rate like 48k.
I've never been much of a podcast "listener", especially with how flooded the market is with mediocre Q&A sessions masquerading as "informational podcasts". THIS is what I was looking for: an informative and actually interesting session I can watch for entertainment! Awesome (or idk, maybe I am just a nerd). Keep it up man, you have some really dedicated folks rooting for ya!
@@CjqNslXUcM Yeah. Podcasts like "Diary of a CEO" and many others I don't care to mention seem to have really accepted their identity as slop that people hear in the background and don't really care about. Same generic cookie-cutter questions, same circlejerk, either trying to appease the guest or shill their product (looking at you, Huberman).
omg this podcast is really exploring all kinds of interesting topics! I was a music nerd playing around with these DSP languages before I turned into a language nerd lol, so it is really interesting to see Faust here! For anyone who finds this field interesting, be sure to also check out SuperCollider; it is basically a powerful audio engine that comes with an OOP language called sclang.
Loris's static site generator, Zine, is awesome! I would love to see him back on to talk about that. It even ships with an HTML LSP that actually reports errors; the VS Code extension is written in Zig and built to WASM!
This was enthralling, and I'm surprised at how few comments there are and at how many of them are off topic or about some other language he should have used. Personally, this conversation bloomed my fledgling interest in Lisps. I'm downloading a compiler now to get started learning. I have no delusions that I would ever make something as important with it, but his interest in a forever language, one that can be adapted to the problem rather than the other way around, speaks deeply to me. Great conversation; thank you to both of you.
Great interview! During my undergrad, I worked on fluid simulation for my final-year project. Later, I transitioned into programming at a bank. Listening to her talk about Maple software, CUDA programming, and SPH techniques brought back so many memories; I could understand it all. Definitely a nostalgic moment!
This language sounds worse to me than rust and zig and a bunch of other C or C++ wannabes. Use only once seems like a bunch of unnecessary overhead for no good reason.
I totally agree on preferring Rust over C++, but it isn't a magical silver bullet. What you want isn't just CUDA in Rust, but the best pieces of Halide, Chapel and Futhark. Chapel has a strong concept of domain subdivisions and distributed computing, Halide has algorithm rearrangements, Futhark has a less noisy language with some strong library concepts like commutative reductions and tooling that can autotune for your data. You'd also want a reasonably integrated proof system, as in Idris 2. The core thing that Chapel and Halide bring is the ability to separate your operational algorithm from your machine optimizations. E.g. if you chunk something for optimization, the overall operation is still the same. Futhark does some of that too, but only profile guided. Some fields approach this by separately writing formal proofs that two implementations are equivalent instead, but it's a much smoother process if you can maintain that as you write, like Idris attempts.
I think Zig is the proper evolutionary step. I waited almost 15 years for something like Rust, never having thought about the ramifications of a borrow checker, or even the difficulties of bringing OOP concepts down to a functional and procedural level. We simply accepted C for years, and still do. I'm excited about Zig. I want to find out if I can get Adam Dunkels' protothreads working, or if the code trick fails. I think Zig will be perfect for embedded. When I read that MicroZig can utilize both M0+ cores, I was like ya... this... evolve.
Again, pushing a fantasy of Google using this to serve you ads without them knowing your info is wild. Google has you uniquely IDed, they have your psychological profile, and they have what you're looking at. None of this is hidden from them. They need to know it's you, what you're looking at, what ad was served to you, and whether you interact with it. Can you imagine Google not knowing which ads were successfully shown because it was all anonymous? Stop floating these nonsensical fantasies. Google's disinformation and misdirection engines are strong enough; they don't need your help.