Hello! I'm an Associate Professor of Music Composition-Theory and Director of the Experimental Music Studios at The University of Illinois Urbana-Champaign. I use SuperCollider for most of my music and research. This channel features selections of my work, SuperCollider tutorials, and other musical experiments.
Highly recommend looking into Open Stage Control as an open-source alternative to TouchOSC. I have tried both, and I was extremely frustrated and unimpressed with how TouchOSC works. With Open Stage Control, you simply view your UI in a browser; it's mind-boggling that TouchOSC decided to build bespoke apps for every device rather than using a browser-based approach that works anywhere. Not to mention that Open Stage Control is open source. TouchOSC also has many frustrating and poorly thought-out design decisions and limitations, both in its UX and on the technical side.
Great videos. If you want to change the number of partials via a parameter, such as for additive synthesis, I figured it would be easier to just set a maximum number of partials, like 64, and then have an argument that simply controls which partials are silent via multiplication by 0. This avoids having to update SynthDefs or create an array of Synths, and it means you can have a knob or something that dynamically changes the number of partials over time, which could be cool.
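Here's a quick sketch of that idea (my addition, not from the original comment; the maximum of 16 partials and all parameter values are arbitrary):

(
SynthDef(\addSwitch, {
	arg freq = 110, numPartials = 8, amp = 0.1;
	var sig;
	// fixed bank of 16 partials; those with index >= numPartials
	// are silenced via multiplication by 0
	sig = Array.fill(16, { |i|
		var on = (numPartials > i).lag(0.02); // lag avoids clicks when numPartials changes
		SinOsc.ar(freq * (i + 1), mul: 1 / (i + 1)) * on;
	}).sum;
	Out.ar(0, sig * amp ! 2);
}).add;
)

x = Synth(\addSwitch);
x.set(\numPartials, 3);  // only the first three partials sound
x.set(\numPartials, 16); // all sixteen partials sound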
Awesome lecture, Eli! I've learned so much from your work! I have a question: is there a way to use the UGen Amplitude to trigger buffers? I've been working on this, but I haven't found a way to send the Amplitude values back to 0. I want to use a microphone to trigger sounds. Thank you!!
Thanks! There is. The trick is to pick a threshold and compare the amplitude against it. A conditional check on the server outputs 1 when true and 0 when false, and this is the signal you should use to trigger PlayBuf, EnvGen, etc. Here's an example. SetResetFF makes sense here because it's basically a switch that can be turned on with one conditional check and turned off with another. The sample (sig) is triggered whenever the source signal (src) rises above -26 dB, and the sample can be retriggered after the amplitude falls below -50 dB. Keep in mind you'll have to fine-tune the threshold and reset parameters to get the precise behavior you want, based on the actual signals you're using.

s.boot;

b = Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01.wav");

(
SynthDef(\a, {
	var src, amp, trig, sig;
	src = BPF.ar(
		in: WhiteNoise.ar,
		freq: 1000,
		rq: 0.01,
		mul: LFNoise2.kr(5).exprange(0.04, 4)
	);
	amp = Amplitude.kr(src, 0.01, 0.3).ampdb.poll(label: 'dB level');
	trig = SetResetFF.kr(amp > \thresh.kr(-26), amp < \reset.kr(-50));
	sig = PlayBuf.ar(1, b, BufRateScale.ir(b), trig, startPos: 17000);
	sig = sig * Env.perc(0.01, 1).kr(0, trig);
	Out.ar(0, src ! 2); // source signal
	Out.ar(0, sig ! 2); // triggered sample
}).play;
)
Hi Eli, thanks a lot for your tutorials, they are very helpful. Is there a way to use the Fourier transform to generate waveforms and store them as Wavetables? I'd like to do additive synthesis and change the amplitude (and maybe phase) of each partial dynamically (while reading the wavetable)... do you think it is also possible to change the maximum number of partials dynamically with this theoretical technique? Thanks
Thanks, glad to hear! Regarding your question: maybe, but it sounds like you're describing a complicated way of achieving something that can be done much more simply. If you want to dynamically modulate the amplitudes and phases of sinusoidal partials in an additive synthesis context, why not just modulate those sine waves themselves? A primary application of the FFT is to analyze some complex wave and obtain information about its component partials. But if you're doing additive synthesis, then you already _have_ those partials, so why bother with the FFT at all?
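For example, here's a minimal sketch of that direct approach (my addition, with arbitrary values), giving each partial its own slowly drifting amplitude:

(
{
	var sig;
	sig = Array.fill(12, { |i|
		// each partial's amplitude drifts independently over time
		var partialAmp = LFNoise1.kr(0.5).exprange(0.005, 1 / (i + 1));
		SinOsc.ar(80 * (i + 1)) * partialAmp;
	}).sum;
	sig * 0.2 ! 2;
}.play;
)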
@@elifieldsteel Thanks for your response. I would like to experiment with trainlet synthesis (from Roads's Microsound) in SuperCollider. I experimented with the usual ways of creating an additive synthesizer (using Klang, Blip, Array.sineFill, multiple SinOscs, etc.) but didn't find a satisfying way of implementing the chroma parameter (harmonic balance), or satisfying results (mostly regarding CPU efficiency and ease of use with patterns), in my trainlet synthesizer prototype. But I'm mostly curious about how to do it and whether it is possible in SuperCollider. :)
Nice! At 3:12, the grittiness is because Amplitude has lousy default settings: attack and release are both set to 0.01. If you keep the attack at 0.01 and set the release to 0.1 in Amplitude, you don't have to use the lag.
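For reference, that looks like this (my addition; in is a placeholder for whatever signal is being tracked):

amp = Amplitude.kr(in, attackTime: 0.01, releaseTime: 0.1); // longer release smooths the follower without an extra lag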
Thanks. Some specialized knowledge was necessary. I built the original prototype by myself, but this final version was assisted by two electrical engineering students, and it has several improvements over the prototype. So I wasn’t totally on my own.
Hello, Eli! I'd be glad to hear your opinion on which books/sources would be best for understanding the math needed for further learning of DSP and sound-related/audio programming in general (for a person with high school/university-entry math). Thank you in advance!!
I’ve learned a lot from The Computer Music Tutorial (Roads) over the years, and a 2nd edition was recently published. It is a robust and comprehensive text that spares no detail.
Single, isolated code expressions can be successfully evaluated regardless of whether they end with a semicolon, using shift-enter. In this case, the interpreter determines the end of the code expression to be wherever the first return character is. If you want to evaluate multiple code statements at once, however, semicolons must be used to separate them.
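For example (my addition, using arbitrary values):

// a single expression evaluates fine with or without a trailing semicolon:
5.squared

// multiple statements evaluated together as a block must be separated by semicolons:
(
x = 5.squared;
x.postln; // posts 25
)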
Could someone help me with a question? In the first example, we change noiseHz (associated with the freq parameter) to change the speed of note changes. But in the last example, we use ampHz (which is associated with the amplitude parameter) to do the same. Why is there a difference? As always, great tutorial, Eli! Thank you. :)
By changing noiseHz, we change the speed at which LFNoise0 selects values. These values are used to control the frequency of the sound we hear. By changing ampHz, we alter the frequency of two LFPulse UGens (named amp1 and amp2), causing them to pulse up and down at a different speed. These signals control the amplitude of the sound we hear. It's fair to say that the results of these two changes can sound similar, but they are not the same thing. Here's the second example, but with noiseHz reincorporated into the SynthDef, replacing what was previously a constant value of 4. The set messages below attempt to demonstrate how these two parameters sound different from each other.

(
SynthDef.new(\pulseTest, {
	arg ampHz=4, noiseHz=4, fund=40, maxPartial=12, width=0.5;
	var amp1, amp2, sig1, sig2, freq1, freq2;
	amp1 = LFPulse.kr(ampHz, 0, 0.35) * 0.75;
	amp2 = LFPulse.kr(ampHz, 0.5, 0.35) * 0.75;
	freq1 = LFNoise0.kr(noiseHz).exprange(fund, fund * maxPartial).round(fund);
	freq2 = LFNoise0.kr(noiseHz).exprange(fund, fund * maxPartial).round(fund);
	freq1 = freq1 * (LFPulse.kr(8) + 1);
	freq2 = freq2 * (LFPulse.kr(6) + 1);
	sig1 = Pulse.ar(freq1, width, amp1);
	sig2 = Pulse.ar(freq2, width, amp2);
	sig1 = FreeVerb.ar(sig1, 0.7, 0.8, 0.25);
	sig2 = FreeVerb.ar(sig2, 0.7, 0.8, 0.25);
	Out.ar(0, sig1);
	Out.ar(1, sig2);
}).add;
)

x = Synth(\pulseTest);

// ampHz change
x.set(\ampHz, 8); // faster
x.set(\ampHz, 0.3); // slower

// noiseHz change
x.set(\noiseHz, 24); // faster
x.set(\noiseHz, 2); // slower
I don’t think there’s a singular “correct” way to add reverb to an Ambisonic signal. Correctness depends on context. There are different techniques you can experiment with to figure out which one you think sounds best.

There is an AmbiVerbSC UGen. It’s a quark, built with the ATK in mind, but not included in the ATK quark distribution, so it needs to be downloaded and installed separately. I’ve briefly looked at it, but never actually used it. github.com/JamesWenlock/AmbiVerbSC

Alternatively, you could pass your signal through a stereo reverb effect and encode it to B-format separately from the dry signal. You could then apply different soundfield transformations to the dry and wet signals, which might be interesting. You could also decode them differently. Or you could pass a B-format signal through a bank of monophonic reverberators, each with slightly different parameters, which could also have interesting results. There are probably lots of other options.
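For what it's worth, here's a rough sketch of that last idea (my addition, not from the ATK, with arbitrary parameter values). It assumes bf is a 4-channel B-format array inside a SynthDef:

~verbBank = { |bf|
	var wet;
	// each channel gets its own FreeVerb with slightly different settings
	wet = bf.collect({ |chan, i|
		FreeVerb.ar(chan, mix: 1, room: 0.8 + (i * 0.03), damp: 0.4 + (i * 0.05));
	});
	bf + (wet * 0.3); // sum of dry signal and scaled wet bank
};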
Thank you so much for the response. I'll do all of that. But let me take the opportunity to ask you a question. I'm following the tutorial step by step, and around minute 27, when FoaRotate is added, I get an error and I'm stuck there. The error says exactly: exception in GraphDef_Recv: UGen 'FoaRotate' not installed. *** ERROR: SynthDef ambi not found FAILURE IN SERVER /s_new SynthDef not found. I've been at this for a couple of hours; I even uninstalled SuperCollider completely, deleting the AppData folder (I'm a Windows user), then installing sc3-plugins with Quarks and atk-sc3 from the zip. I can't solve it, and I can't find anything online about the issue. I don't know what else to do, and I can't move forward. It doesn't look like an installation error to me. It simply tells me that FoaRotate is not installed.
@@sebastianvillalba3260 It's difficult to say with certainty from this information alone, but my guess is that you installed the ATK quark correctly, but not the sc3-plugins package. You will need to install both to use the ATK. Maybe you copied the plugins to the wrong location? Or forgot to recompile the class library? I found this thread, which seems to describe the same problem you're experiencing, although with a different UGen from a different quark: scsynth.org/t/installed-quark-plugin-not-found/5527. You could also post your question on scsynth.org, and I'm sure someone will quickly point you in the right direction.
@@elifieldsteel Just now, after three days, I managed to do it. And I don't know exactly how I did it... which is frustrating. But I'll take it. I'll also take the opportunity to tell you that I watch all your videos. They are essential for reference and inspiration. Thank you for everything, and a hug from Argentina!
Wow, what an interesting piece! I had ideas about writing a piece for cello and live electronics, and stumbled upon your channel while researching technical solutions. Do you maybe have a resource covering your approach to interaction with the performer and the programming of different actions for the foot pedal in a piece like this?
Thanks! I don't think I have a video that directly addresses this topic, but I wrote chapters 9, 10 and 11 of my book hoping they'd provide insight in situations like yours.
I listen to a lot of youtube channels that have hours-long ambient/techno soundscapes, and at first glance at the background image and title of this, I honestly thought I had clicked on one of those by accident, lmao. Glad I stumbled on this!!!
Hello! Thank you for uploading these videos; they have been helpful for me. Would it be too much to ask for some sort of access to the homework assignments? I would really like to train myself by solving problems. If not, could you point me to any sources that pose problems to solve in SuperCollider? Any help is appreciated. Thank you :)
Just added a homework link to the video descriptions in this playlist. There were five assignments this semester, assigned at two-week intervals. Also, if you don't mind going back a couple years, my 2021 playlist has links to problem sets in the video descriptions. The course content is nearly the same. ru-vid.com/group/PLPYzvS8A_rTbIgN0NTMBPXjmdyNvlD0cf
You should be able to map Phasor values from a linear range to an exponential one using .linexp or .lincurve. If you provide a timestamp, I could probably be more specific.
@@elifieldsteel Well, I'm actually trying this with GrainBuf, with the goal of the playhead going from 0 to 1 and slowing down as it approaches the end. Exponential decay, I guess. The opposite of XLine. I can't for the life of me figure out how to do this. I would like to stay with GrainBuf rather than use Warp1. Thanks
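A minimal sketch of one way to do that (my addition, with arbitrary values): Phasor sweeps the normalized position linearly, and lincurve with a negative curve bends the sweep so it decelerates toward the end.

b = Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01.wav");

(
{
	var pos, sig;
	// linear sweep from 0 to 1 over 10 seconds
	pos = Phasor.ar(0, 1 / (10 * SampleRate.ir), 0, 1);
	// negative curve: fast at first, slowing as it approaches 1
	pos = pos.lincurve(0, 1, 0, 1, -4);
	sig = GrainBuf.ar(2, Impulse.ar(40), 0.1, b, 1, pos);
	sig * 0.5;
}.play;
)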
This is amazing stuff -- particularly the music at the end!! Thanks so much for the great tutorials. (Just ordered your new book, can't wait to dig into it!)
Excellent as always. I bought a copy of your SuperCollider book, and I'm planning a deep dive this year. On another note, have you released any of your SuperCollider music? I was hoping I could get a full-length recording on Bandcamp, but I don't see one. I did find the pieces you contributed to SEAMUS. Very interesting work.
Thanks! No, I'm not on Bandcamp. Maybe I should be. The best place to hear my music is probably my Performances playlist, though it isn't specifically a "SuperCollider music" playlist; it's more general than that. ru-vid.com/group/PLPYzvS8A_rTZiDjxBmz4ghSWPyQmsTQuR
For this semester, I assigned my own book, which was complete but unpublished at the time. The book is now available: global.oup.com/academic/product/supercollider-for-the-creative-musician-9780197617007?cc=us&lang=en&
Yes, it should. Generally, when using these types of safety tools, it's still possible to produce sound that is loud and startling, with the potential to damage your hearing, but the risk is lower.
Thanks for all your work in general, Eli. I bought your book and am now reading it through, accompanied by the matching video lessons you've given over the years. It's a very helpful way to learn SC fast, applying the examples you provide to one's own material and ideas. I appreciate your clean-cut, hands-on approach to teaching.
Hi, thanks for ALL of your fantastic videos and SC learning materials. Quick question: I noticed you quit the app a couple of times to clear everything out. Is there a difference between doing that and rebooting the interpreter?
I really appreciate these videos; they helped me a lot! I still have an issue with using my MIDI controller with SuperCollider: the latency is too high, and I can't find any way to make it reasonable. Has anyone else encountered this? (I'm on Windows.)
This question would probably be better answered by someone who's actually had this problem and fixed it; my experience is mostly limited to macOS. I suspect this is a hardware/driver issue, unrelated to your controller or SC, and I would guess a dedicated audio/MIDI interface might be a solution. Some questions: If you simply print (postln) the incoming MIDI data in SC, is the latency still present? Or does the latency only happen when using MIDI to generate sound? Does this latency happen in other audio/MIDI software, like a DAW? Or only in SC? Doing some investigating and answering questions like these can sometimes help isolate the source of the problem.
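For the first of those checks, something like this (my addition) posts incoming note-on data without producing any sound:

MIDIClient.init;
MIDIIn.connectAll;
(
MIDIdef.noteOn(\latencyTest, { |vel, num, chan, src|
	[num, vel].postln; // if even this feels delayed, the problem is upstream of synthesis
});
)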
Thank you very much, Eli. Your videos are the best on SC... I suspect that SC is underestimated in the field of live coding. JitLib is amazing and not well known...