Eli Fieldsteel
Hello! I'm an Associate Professor of Music Composition-Theory and Director of the Experimental Music Studios at The University of Illinois Urbana-Champaign. I use SuperCollider for most of my music and research. This channel features selections of my work, SuperCollider tutorials, and other musical experiments.
SuperCollider Tutorial: 31. Ambisonics
1:13:45
1 month ago
SuperCollider Tutorial: 30. Live Coding
53:04
3 months ago
SuperCollider Tutorial: 29. Patterns, Part II
57:04
9 months ago
Comments
@poguri27 11 days ago
Highly recommend looking into Open Stage Control as an open-source alternative to TouchOSC. I have tried both, and I was extremely frustrated and unimpressed with how TouchOSC works. With Open Stage Control you simply view your UI in a browser; it's mind-boggling that TouchOSC decided to build bespoke apps for every device rather than simply using a browser-based approach that works anywhere. Not to mention it's open source. TouchOSC also has many frustrating and poorly thought-out design decisions and limitations, both in terms of UX and technically.
@poguri27 11 days ago
Great videos. If you want to change the number of partials via a parameter, such as for additive synthesis, I figured it would be easier to just set a max number of partials, like 64, and then have an argument which simply controls which partials are silent via multiplication by 0. This avoids having to update SynthDefs or create an array of Synths. That way you can have a knob or something that dynamically changes the number of partials over time, which could be cool.
@elifieldsteel 11 days ago
True, this is definitely a viable way to do additive synthesis!
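For anyone reading along, here is a minimal sketch of the idea described in the comment above (the SynthDef name \addGate and the numPartials argument are hypothetical, not from the videos): a fixed bank of 64 partials, with an argument that silences the upper ones by multiplying them by 0.

(
SynthDef(\addGate, {
    arg freq=110, amp=0.2, numPartials=8;
    var sig;
    sig = Array.fill(64, { |i|
        var harm = i + 1;
        // (harm <= numPartials) evaluates to 1 or 0 on the server, so unwanted partials are multiplied by 0
        SinOsc.ar(freq * harm) * (1 / harm) * (harm <= numPartials);
    }).sum;
    Out.ar(0, (sig * amp * 0.1) ! 2);
}).add;
)

x = Synth(\addGate, [\numPartials, 4]);
x.set(\numPartials, 32); // more partials appear without rebuilding the SynthDef
x.free;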
@user-gz6zc3rl6v 11 days ago
Thank you very much, love from China.
@borisyellnikoff8407 13 days ago
Awesome lecture, Eli! I've learned so much from your work! I have a question: is there a way to use the UGen Amplitude to trigger buffers? I've been working on it, but I haven't found a way to send the Amplitude values back to 0. I want to use a microphone to trigger sounds. Thank you!!
@elifieldsteel 13 days ago
Thanks! There is. The trick is to pick a threshold and compare the amplitude against it. A conditional check on the server is 1 when true and 0 when false. This is the signal you should use to trigger PlayBuf, EnvGen, etc. Here's an example. SetResetFF makes sense because it's basically a switch that can be turned on with one conditional check and turned off with another. The sample (sig) is triggered whenever the source signal (src) is above -26 dB, and the sample can be retriggered after the amplitude falls below -50 dB. Keep in mind you'll have to fine-tune the threshold and reset parameters to get the precise behavior you want, based on the actual signals you're using.

s.boot;

b = Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01.wav");

(
SynthDef(\a, {
    var src, amp, trig, sig;
    src = BPF.ar(
        in: WhiteNoise.ar,
        freq: 1000,
        rq: 0.01,
        mul: LFNoise2.kr(5).exprange(0.04, 4)
    );
    amp = Amplitude.kr(src, 0.01, 0.3).ampdb.poll(label: 'dB level');
    trig = SetResetFF.kr(amp > \thresh.kr(-26), amp < \reset.kr(-50));
    sig = PlayBuf.ar(1, b, BufRateScale.ir(b), trig, startPos: 17000);
    sig = sig * Env.perc(0.01, 1).kr(0, trig);
    Out.ar(0, src ! 2); // control signal
    Out.ar(0, sig ! 2); // triggered sample
}).play;
)
@borisyellnikoff8407 12 days ago
@@elifieldsteel OMG! Thanks, Eli!! I'm going to work on this right away!!
@anaisdominguez7818 15 days ago
I love you, watching your videos inspires me <3
@moulinexish 16 days ago
Hi Eli, thanks a lot for your tutorials, they are very helpful. Is there a way to use the Fourier transform to generate waveforms and store them as Wavetables? I'd like to do additive synthesis and dynamically change the amplitude (and maybe phase) of each partial while reading the wavetable... do you think it's also possible to change the maximum number of partials dynamically with this theoretical technique? Thanks
@elifieldsteel 15 days ago
Thanks, glad to hear it! Regarding your question: maybe, but it sounds like you're describing a complicated way of achieving something that can be done much more simply. If you want to dynamically modulate the amplitudes and phases of sinusoidal partials in an additive synthesis context, why not just modulate those sine waves themselves? A primary application of the FFT is to analyze some complex wave and obtain information about its component partials. But if you're doing additive synthesis, then you already _have_ those partials, so why bother with the FFT at all?
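As a minimal illustration of "just modulate those sine waves themselves" (the specific numbers and modulators here are arbitrary, not from the video), each partial's amplitude can be driven by its own low-frequency noise source:

(
{
    var sig;
    sig = Array.fill(16, { |i|
        var harm = i + 1;
        var ampMod = LFNoise1.kr(0.5).exprange(0.002, 1 / harm); // per-partial amplitude modulation
        SinOsc.ar(200 * harm) * ampMod;
    }).sum;
    (sig * 0.1) ! 2;
}.play;
)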
@moulinexish 15 days ago
@@elifieldsteel Thanks for your response. I would like to experiment with trainlet synthesis (from Roads's Microsound) in SuperCollider. I tried the usual ways of creating an additive synthesizer (using Klang, Blip, Array.sineFill, multiple SinOscs, etc.), but in my trainlet synthesizer prototype I didn't find a satisfying way of implementing the chroma parameter (harmonic balance), or satisfying results (mostly regarding CPU use and ease of working with patterns). Mostly I'm curious how to do it, and whether it's possible in SuperCollider. :)
@SuperDuckQ 16 days ago
That array cheat sheet sounds like it would be a handy document!
@PerplexedMuse 26 days ago
🤍🤍🤍
@cardboardmusic 26 days ago
Great stuff as always, and Eli, you should mention your book next time. 😉
@tatrix 26 days ago
Great as always! Thank you Eli!
@123g0bln 26 days ago
awesome
@marcelomellado1969 26 days ago
I was about to go to sleep, but one of Eli's videos popped up. Straight to booting the server!
@blmzndr 26 days ago
Incredible tutorial. Sounds so nice! Big up 🙌
@synth_def 26 days ago
Nice! At 3:12, the grittiness is because Amplitude has lousy default settings, attack and release both set to 0.01. If you set the attack to 0.01 and release to 0.1 in Amplitude you don't have to use the lag.
@elifieldsteel 26 days ago
An excellent tip! Using fewer UGens is definitely more efficient.
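In other words, a one-line sketch of the tip above (where "in" stands for whatever signal is being tracked):

amp = Amplitude.kr(in, attackTime: 0.01, releaseTime: 0.1); // smoother tracking than the 0.01/0.01 defaults, no extra Lag UGen needed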
@laika6340 26 days ago
awesome as always
@elifieldsteel 26 days ago
thanks!
@00Jannis00 29 days ago
Damn, this is really cool. I guess you need a lot of knowledge to build something like this by yourself?
@elifieldsteel 28 days ago
Thanks. Some specialized knowledge was necessary. I built the original prototype by myself, but this final version was assisted by two electrical engineering students, and it has several improvements over the prototype. So I wasn’t totally on my own.
@00Jannis00 9 days ago
@@elifieldsteel Wow, OK, that sounds very special and difficult. So it won't be that easy to build this by yourself?
@aleksdizhe 1 month ago
Hello, Eli! I would be glad to hear your opinion on which books/sources would be best to start from to understand the maths needed for further study of DSP and sound-related/audio programming (for a person with high school/university-entry level maths). Thank you in advance!!
@elifieldsteel 1 month ago
I’ve learned a lot from The Computer Music Tutorial (Roads) over the years, and a 2nd edition was recently published. It is a robust and comprehensive text that spares no detail.
@aleksdizhe 1 month ago
@@elifieldsteel thank you for your response!
@williansuarez9522 1 month ago
At 4:14, line 26 seems to work without a semicolon. What does this mean?
@elifieldsteel 1 month ago
Single, isolated code expressions can be successfully evaluated regardless of whether they end with a semicolon, using shift-enter. In this case, the interpreter determines the end of the code expression to be wherever the first return character is. If you want to evaluate multiple code statements at once, however, semicolons must be used to separate them.
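A quick illustration of this (arbitrary example code, not from the video):

// a single expression evaluates with shift-enter, with or without a semicolon
3.cubed

// multiple statements evaluated together (e.g. inside a parenthesized region) must be separated by semicolons
(
x = 3.cubed;
x.postln;
)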
@williansuarez9522 1 month ago
How did you evaluate lines 5-11 at 9:18 with the cursor only on line 11?
@elifieldsteel 1 month ago
On macOS, command-enter. On Windows/Linux, control-enter.
@swapnilchand338 1 month ago
Starting in 2024! Tried Sonic Pi, didn't work. Tried Overtone and got tired of setting up Clojure. I hope this works!!! Super hyped
@elifieldsteel 1 month ago
Good luck! Happy to answer questions/provide extra help.
@byt4794 1 month ago
Could someone help me with a doubt? In the first example, we change the noiseHz (associated with the freq parameter) to change the speed of note change. But in the last example, we use ampHz (which is associated with the amplitude parameter) to do the same. Why is there a difference? As always, great tutorial Eli! Thank you. :)
@elifieldsteel 1 month ago
By changing noiseHz, we change the speed at which LFNoise0 selects values. These values are used to control the frequency of the sound we hear. By changing ampHz, we alter the frequency of two LFPulse UGens (named amp1 and amp2), causing them to pulse up and down at a different speed. These signals control the amplitude of the sound we hear. It's fair to say that the results of these two changes can sound similar, but they are not the same thing. Here's the second example, but with noiseHz reincorporated into the SynthDef, replacing what was previously a constant value of 4. The set messages below attempt to demonstrate how these two parameters sound different from each other.

(
SynthDef.new(\pulseTest, {
    arg ampHz=4, noiseHz=4, fund=40, maxPartial=12, width=0.5;
    var amp1, amp2, sig1, sig2, freq1, freq2;
    amp1 = LFPulse.kr(ampHz, 0, 0.35) * 0.75;
    amp2 = LFPulse.kr(ampHz, 0.5, 0.35) * 0.75;
    freq1 = LFNoise0.kr(noiseHz).exprange(fund, fund * maxPartial).round(fund);
    freq2 = LFNoise0.kr(noiseHz).exprange(fund, fund * maxPartial).round(fund);
    freq1 = freq1 * (LFPulse.kr(8) + 1);
    freq2 = freq2 * (LFPulse.kr(6) + 1);
    sig1 = Pulse.ar(freq1, width, amp1);
    sig2 = Pulse.ar(freq2, width, amp2);
    sig1 = FreeVerb.ar(sig1, 0.7, 0.8, 0.25);
    sig2 = FreeVerb.ar(sig2, 0.7, 0.8, 0.25);
    Out.ar(0, sig1);
    Out.ar(1, sig2);
}).add;
)

x = Synth(\pulseTest);

// ampHz changes
x.set(\ampHz, 8); // faster
x.set(\ampHz, 0.3); // slower

// noiseHz changes
x.set(\noiseHz, 24); // faster
x.set(\noiseHz, 2); // slower
@byt4794 1 month ago
@@elifieldsteel Thank you for taking the time to explain in detail! You're a great teacher and person. :)
@sebastianvillalba3260 1 month ago
Very good! What would be the correct way to add reverb?
@elifieldsteel 1 month ago
I don’t think there’s a singular “correct” way to add reverb to an Ambisonic signal. Correctness depends on context. There are different techniques you can experiment with, and figure out which one you think sounds best. There is an AmbiVerbSC UGen. It’s a quark, built with the ATK in mind, but not included in the ATK quark distribution, so it needs to be downloaded and installed separately. I’ve briefly looked at it, but never actually used it. github.com/JamesWenlock/AmbiVerbSC Alternatively, you could pass your signal through a stereo reverb effect, and encode it to B-format separately from the dry signal. You could then apply different soundfield transformations to the dry and wet signals, which might be interesting. You could also decode them differently. You could also pass a B-format signal through a bank of monophonic reverberators, each with slightly different parameters, which could also have interesting results. There are probably lots of other options.
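To make the last of those options a little more concrete, here is a rough sketch (assuming the ATK quark and sc3-plugins are installed; the source material and parameter values are arbitrary, not from the video): a mono source is encoded to first-order B-format, and each of the four B-format channels passes through its own FreeVerb with slightly different settings.

(
{
    var src, foa, wet;
    src = PinkNoise.ar(0.2) * LFPulse.kr(2, width: 0.05);  // arbitrary percussive test source
    foa = FoaPanB.ar(src, 0, 0);                           // encode to B-format, front and center
    wet = foa.collect({ |chan, i|                          // one mono reverb per B-format channel
        FreeVerb.ar(chan, mix: 1, room: 0.8 + (i * 0.02), damp: 0.3 + (i * 0.05));
    });
    FoaDecode.ar(foa + (wet * 0.4), FoaDecoderMatrix.newStereo); // decode dry + wet to stereo for monitoring
}.play;
)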
@sebastianvillalba3260 1 month ago
Thank you so much for the reply. I'll try all of that. But while I'm at it, let me ask you something. I'm following the tutorial step by step, and around minute 27, when FoaRotate is added, I get an error and I'm stuck there. The error says exactly: exception in GraphDef_Recv: UGen 'FoaRotate' not installed. *** ERROR: SynthDef ambi not found FAILURE IN SERVER /s_new SynthDef not found. I've been at this for a couple of hours; I even uninstalled SuperCollider completely, deleting the AppData folder (I'm a Windows user), installing sc3-plugins with Quarks, and atk-sc3 from the zip. I can't fix it, and I can't find anything about it online. I don't know what else to do, and I can't move forward. It doesn't look like an installation error to me. It simply tells me that FoaRotate is not installed.
@elifieldsteel 1 month ago
@@sebastianvillalba3260 It's difficult to say with accuracy using only this information, but my guess is that you installed the ATK Quark correctly, but not the sc3-plugins package. You will need to install both to use the ATK. Maybe you copied the plugins to the wrong location? Or forgot to recompile the library? I found this thread, which seems to be the same problem you're experiencing, although with a different UGen from a different quark scsynth.org/t/installed-quark-plugin-not-found/5527. You could also post your question on scsynth.org, and I'm sure someone will quickly point you in the right direction.
@sebastianvillalba3260 1 month ago
@@elifieldsteel Just now, after 3 days, I managed to do it. And I don't know exactly how I did it... which is frustrating. But I'll take it. I also want to say that I watch all of your videos. They are essential for reference and inspiration. Thank you for everything, and a hug from Argentina!
@ivanfominmusic 1 month ago
Wow, what an interesting piece! I had ideas about writing a piece for cello and live electronics, and stumbled upon your channel while researching technical solutions. Do you maybe have a resource covering your approach to interaction with the performer, and the programming of different actions for the foot pedal, in a piece like this?
@elifieldsteel 1 month ago
Thanks! I don't think I have a video that directly addresses this topic, but I wrote chapters 9, 10 and 11 of my book hoping they'd provide insight in situations like yours.
@mattshu 1 month ago
I listen to a lot of YouTube channels with hours-long ambient/techno soundscapes, and at first glance of the background image and title I honestly thought I'd clicked on one of those by accident lmao. Glad I stumbled on this!!!
@ElieNaulleau 1 month ago
Very clear, thank you very much.
@laika6340 1 month ago
i’m here for it
@PerplexedMuse 1 month ago
🤍🤍🤍
@agapizita 1 month ago
thank you!
@scartabellomusic 1 month ago
My SC keeps reverting back to 32 bit and AIFF even after I change those settings. Any idea why this might be happening?
@elifieldsteel 1 month ago
Can you be more specific about how you're configuring the settings, what you're doing afterwards, and how you know the settings are reverting?
@billow0612 1 month ago
Thank you! Still useful after 10 years!
@elifieldsteel 1 month ago
Always great to hear!
@SoundEngraver 1 month ago
Thanks, Eli! This lecture helped me with a few performance ideas. All the best to you!
@elifieldsteel 1 month ago
Glad to help!
@Athena2000CE 1 month ago
Hello! Thank you for uploading these videos; they have been helpful for me. Would it be too much to ask for some sort of access to the homework assignments? I would really like to train myself by solving problems. If not, could you point me to any sources that might pose some problems for me to solve in SuperCollider? Any help is appreciated. Thank you :)
@elifieldsteel 1 month ago
Just added a homework link to the video descriptions in this playlist. There were five assignments this semester, assigned at two-week intervals. Also, if you don't mind going back a couple years, my 2021 playlist has links to problem sets in the video descriptions. The course content is nearly the same. ru-vid.com/group/PLPYzvS8A_rTbIgN0NTMBPXjmdyNvlD0cf
@Athena2000CE 1 month ago
@@elifieldsteel Fantastic. Thank you! I really appreciate your work :)
@yuggothrecords 1 month ago
Is there a way to exponentially read through a buffer? I'm stuck on this with a piece I'm working on.
@elifieldsteel 1 month ago
You should be able to map Phasor values from a linear range to an exponential one using .linexp or .lincurve. If you provide a timestamp, I could probably be more specific.
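Here is a minimal sketch of that mapping idea with GrainBuf (the buffer, grain settings, and curve value are placeholders, not from the thread): a linear 0-to-1 Phasor is warped with .lincurve so the read position advances quickly at first and slows toward the end.

b = Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01.wav");

(
{
    var pos, sig;
    pos = Phasor.ar(0, BufDur.kr(b).reciprocal / SampleRate.ir, 0, 1); // linear 0..1 over the buffer's duration
    pos = pos.lincurve(0, 1, 0, 1, -4);                                // negative curve: fast at first, slowing toward the end
    sig = GrainBuf.ar(2, Impulse.kr(40), 0.1, b, 1, pos);
    sig * 0.5;
}.play;
)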
@yuggothrecords 1 month ago
@@elifieldsteel Well, I'm actually trying this with GrainBuf, with the goal of the playhead going from 0 to 1 and slowing down more and more toward the end. Exponential decay, I guess. The opposite of XLine. Can't for the life of me figure out how to do this. I would like to stay with GrainBuf rather than use Warp1. Thanks
@yuggothrecords 1 month ago
@@elifieldsteel Figured it out: posenv = EnvGen.ar(Env.new([0, 1], [bufdur], curve: -3), doneAction: 2); I had to specify the curve.
@no_talking 1 month ago
many thanks - no links to homework this time?
@christiancasey4080 1 month ago
This is amazing stuff -- particularly the music at the end!! Thanks so much for the great tutorials. (Just ordered your new book, can't wait to dig into it!)
@elifieldsteel 1 month ago
Awesome, thanks!
@marktaylor7034 1 month ago
Excellent as always. I bought a copy of your SuperCollider book; I'm intending a deep dive this year. On another note, have you released any of your SuperCollider music? I was hoping I could get a full-length recording from Bandcamp, but I don't see one. I did find the pieces you contributed to SEAMUS. Very interesting work.
@elifieldsteel 1 month ago
Thanks! No, I'm not on Bandcamp. Maybe I should be. The best place to hear my music is probably my Performances playlist. Though, this isn't specifically a "SuperCollider music" playlist, it's more general than that. ru-vid.com/group/PLPYzvS8A_rTZiDjxBmz4ghSWPyQmsTQuR
@atetraxx 1 month ago
Anyone know what textbook was assigned for this class?
@elifieldsteel 1 month ago
For this semester, I assigned my own book, which was complete but unpublished at the time. The book is now available: global.oup.com/academic/product/supercollider-for-the-creative-musician-9780197617007?cc=us&lang=en&
@MegaPippopluto 2 months ago
mamma mia, how boring
@atetraxx 2 months ago
I'm using a safety-clip quark plugin. That should clip the potential dangers you showed in this video, right?
@elifieldsteel 1 month ago
Yes, it should. Generally, when using these types of safety tools, it's still possible to produce sound that is loud and startling, with the potential to damage your hearing, but the risk is lower.
@aleksdizhe 2 months ago
Wow, new video in the series!
@isoljator 2 months ago
Thanks for all your work in general, Eli. Bought your book and am now reading it through, accompanied by the matching video lessons you've given over the years. It's a very helpful way to learn SC fast, applying the examples you provide to one's own material and ideas. I appreciate your clean-cut, hands-on approach to teaching.
@samoberholtzer4962 2 months ago
Hi, thanks for ALL of your fantastic videos and SC learning materials. Quick question: I noticed you quit the app a couple of times to clear everything out; is there a difference between doing that and rebooting the interpreter?
@elifieldsteel 2 months ago
Could you timestamp that for me?
@samoberholtzer4962 2 months ago
@@elifieldsteel You mean when you quit and restarted the app?
@samoberholtzer4962 2 months ago
49:57 that might have been it. I watched it a couple of times
@natan77777 2 months ago
I really appreciate these videos, they helped me a lot! I still have an issue using my MIDI controller with SuperCollider: the latency is too high and I can't find any way to make it reasonable. Did anyone ever encounter this? (I'm using Windows)
@elifieldsteel 2 months ago
This question would probably be better answered by someone who’s actually had this problem and fixed it. My experience is mostly limited to macOS. I suspect this is a hardware/driver issue, unrelated to your controller or SC. I would guess using a dedicated audio/MIDI interface might be a solution. Some questions: If you simply print (postln) the incoming MIDI data in SC, is the latency still present? Or does the latency only happen when using MIDI to generate sound? Does this latency happen in other audio/MIDI software, like a DAW? Or only SC? Doing some investigating and answering questions like this can sometimes help isolate the source of the problem.
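For the first of those questions, a minimal test sketch (assuming a controller is connected and sends note-on messages; the MIDIdef key is arbitrary) might look like this. If the posted values appear instantly when you press a key, the latency is likely happening further downstream.

MIDIClient.init;
MIDIIn.connectAll;
MIDIdef.noteOn(\latencyTest, { |vel, num, chan, src| [num, vel].postln; }); // prints incoming note-ons immediately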
@markpond6682 2 months ago
Absolutely wonderful
@elifieldsteel 2 months ago
Thanks so much Mark!
@alferrow 2 months ago
Thanks Eli!
@rishikumartiwari7473 2 months ago
Hi Eli, really helpful videos for learning SuperCollider 😊😊😊 Thank you!
@udomatthiasdrums5322 2 months ago
love your work!! udo
@newenglandcoastershots 2 months ago
👀👋
@annamariemignosa5478 2 months ago
Eli I have been looking for you!
@mesjetiu 2 months ago
Thank you very much, Eli. Your videos are the best on SC... I suspect that SC is underestimated in the field of live coding. JitLib is amazing and not well known...