Well, 0:20 contradicts your explanation of the valve analogy from the previous video, in which you clarified that the valve behaves like a variable resistance, not a current control. But here you switch back to the valve-controls-the-current analogy (by saying it allows "more or less current to flow"), while the current was constant in the previous video - hence the name "constant current source".
The constant current source holds the base at a constant voltage, which - through the emitter resistor - sets a constant emitter current. What's actually going on is a feedback loop: if the output current drops, the voltage across the emitter resistor drops, which increases the base-emitter voltage (and the base current), opening the valve and allowing more current to flow.
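To put rough numbers on that loop (the 2 V base voltage, 135 Ω emitter resistor, and V_BE figures below are illustrative assumptions, not values from the video):

```python
# Back-of-the-envelope look at the emitter-degeneration feedback in a BJT
# constant-current source. The 2 V base voltage, 135 ohm emitter resistor,
# and the V_BE figures are assumptions for illustration, not from the video.
V_BASE = 2.0   # base held at a fixed voltage by a divider or zener (V)
R_E = 135.0    # emitter resistor (ohms)

def emitter_current(v_be):
    """Equilibrium emitter current for a given base-emitter drop."""
    return (V_BASE - v_be) / R_E

# Nominal operating point:
print(f"V_BE = 0.65 V -> I_E = {emitter_current(0.65)*1e3:5.2f} mA")
# If the current tries to sag, V_E sags too, V_BE grows, and the transistor turns
# further on - and because most of V_BASE is dropped across R_E, even large V_BE
# wobbles barely move the current at all:
print(f"V_BE = 0.60 V -> I_E = {emitter_current(0.60)*1e3:5.2f} mA")
print(f"V_BE = 0.70 V -> I_E = {emitter_current(0.70)*1e3:5.2f} mA")
```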
The valve analogy for a transistor actually confuses me. A valve is something that controls flow (which corresponds to current in the water-electricity analogy). Here, however, the transistor is controlling its C-E resistance, as you pointed out at 3:02. But the mental image of the valve limiting the flow beats the verbal explanation.
If the water analogy doesn't help, don't pursue it. But if it does help, you can think of it as that we use a valve to regulate different things. Sometimes we want to regulate the pressure drop across the valve (analogous to voltage), sometimes we want to regulate the flow through it (analogous to current). We do both by having a pump (our power supply) pressurize the pipe, and then adjusting the resistance to the flow with the valve.
The unexpected phase - and, to a lesser extent, the gain curve - is due to the 100k anti-windup resistor. Making it larger, or removing it entirely, will yield more expected results. Of course, without the resistor, DC offsets/bias will cause the circuit to saturate.
@carlosanvito yup. You need some sort of anti-windup, so you get into phase shifts. The gain and phase curves, of course, are perfectly explained by the fact that you're rolling off the low-frequency gain with negative feedback at DC. But I'm trying to avoid getting very far into filter theory at this stage. That'll come later. TL;DR: I know what's going on with the phase shift. I don't expect the viewer necessarily to know what's going on, and you don't have to in order to get the main point of the argument. PS: the feedback resistor is as small as it is because the offset voltage on the cheap op-amps that I'm using is as big as it is. I could get a better demo with a better op-amp, but if possible, I want viewers to be able to get my results with jellybean parts.
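To put some numbers on that roll-off (Rf = 100k as above; the Rin and C values here are stand-ins, not necessarily the ones in the video): with the feedback resistor in place the stage behaves as H(jω) = -(Rf/Rin)/(1 + jωRfC), so the phase only settles near the ideal integrator's ±90° well above the corner that Rf and C set.

```python
import numpy as np

# Sketch of the gain/phase of a 'leaky' integrator: H(jw) = -(Rf/Rin) / (1 + jw*Rf*C).
# Rin and C are assumed values for illustration; Rf = 100k is the anti-windup resistor.
Rf, Rin, C = 100e3, 10e3, 100e-9
fc = 1.0 / (2 * np.pi * Rf * C)          # corner frequency set by Rf and C
print(f"corner frequency ~ {fc:.1f} Hz")

for f in (1, 10, 100, 1000):
    w = 2 * np.pi * f
    H = -(Rf / Rin) / (1 + 1j * w * Rf * C)
    print(f"{f:5d} Hz: gain {abs(H):6.2f}, phase {np.degrees(np.angle(H)):7.1f} deg")
# Well above fc the phase sits near +/-90 deg (plus the inversion), like an ideal
# integrator; near and below fc it heads toward 180 deg, which is the 'unexpected'
# phase in the measured Bode plot.
```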
@@KludgesFromKevinsCave a differentiator is a high-pass filter with a filter frequency of infinity. Look at how the corner is shifted. Basically an integrator attenuates frequencies above wc (like a low-pass filter), but amplifies those below wc. A differentiator does the opposite. That's why you need an amplifier to implement them (filters can be passive).
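A quick numerical illustration of that corner-shift (the corner frequency is arbitrary here):

```python
import numpy as np

# Compare an ideal integrator/differentiator to first-order low-/high-pass filters
# with the same corner wc. The corner value is arbitrary for this illustration.
wc = 2 * np.pi * 100.0   # corner at 100 Hz

for f in (1, 10, 100, 1000, 10000):
    w = 2 * np.pi * f
    integ = wc / w                          # ideal integrator, |H| = wc/w
    lp = 1 / np.hypot(1, w / wc)            # first-order low-pass
    diff = w / wc                           # ideal differentiator, |H| = w/wc
    hp = (w / wc) / np.hypot(1, w / wc)     # first-order high-pass
    print(f"{f:6d} Hz  integ {integ:8.3f}  LP {lp:6.3f}   diff {diff:8.3f}  HP {hp:6.3f}")
# Above wc the integrator tracks the low-pass (both fall off at -20 dB/decade), but
# below wc the integrator keeps gaining (|H| > 1) where the passive low-pass flattens
# at 1. The differentiator mirrors this against the high-pass - which is why both
# need an active stage: a passive filter can't have gain above unity.
```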
Could you explain to me what you mean when you say R3 (Rf) and R4 (Rin) provide a 0 DC bias? I am not sure I understand what that means. When I wrote out the transfer function for the integrator circuit I got gain = -Rf/(sCRfRin + Rin) which seems to make sense - at s = 0 the cap is open, the topology is an inverting amplifier, and we have gain = -Rf/Rin. So is it correct to say that it provides DC gain to the circuit? Thanks for posting this :)
If we left out the feedback resistor, when you try to solve for DC gain using your equation, you divide by zero. (You'll notice that I haven't introduced the Laplace transform yet, but your transfer function is correct!) You're right that the DC gain is -Rf/Rin. That's why we make Rf>>Rin - we want the integrator actually to integrate as close to DC as possible. The purpose of the feedback resistor isn't to provide DC gain, it's to limit it. The 'zero bias voltage' refers to the condition of zero signal input - the quiescent state. In that state, the feedback loop _should_ hold the output at zero. We can then think of the signal as being superimposed on that DC voltage. In practice, it isn't zero: the op-amp's input offset voltage appears at the output multiplied by (1 + Rf/Rin) (plus a correction term for the offset current that I can't be bothered to figure out). Without the feedback resistor it would be the open-loop gain times the offset, and the output would just drift into a rail. For the relatively slow signals that I was considering here and the cheap op-amp I was using, that was significant. If I were using a low-offset op-amp, the feedback resistor might be 10M rather than 100K. If I were using a chopper-stabilized op-amp, it might even be 1G. But (as I said to another commenter), I'm trying to make my examples work with jellybean parts.
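To make the trade-off concrete, here is a rough sketch with made-up values (Rin, C, and the offset-voltage figures are assumptions, not the parts in the video):

```python
from math import pi

# Rough numbers behind the Rf-vs-offset tradeoff. Rin, C, and the offset-voltage
# figures are assumptions for illustration, not the values from the video.
Rin = 10e3
C = 100e-9

def integrator_budget(Rf, Vos):
    dc_gain = Rf / Rin                  # magnitude of the DC gain, |-Rf/Rin|
    fc = 1 / (2 * pi * Rf * C)          # below this corner it stops integrating
    v_quiescent = (1 + Rf / Rin) * Vos  # offset voltage times the DC noise gain
    return dc_gain, fc, v_quiescent

for name, Rf, Vos in (("jellybean,  Rf=100k", 100e3, 5e-3),
                      ("low-offset, Rf=10M ", 10e6, 50e-6),
                      ("chopper,    Rf=1G  ", 1e9, 1e-6)):
    gain, fc, vq = integrator_budget(Rf, Vos)
    print(f"{name}: DC gain {gain:8.0f}, corner {fc:8.4f} Hz, quiescent output {vq*1e3:6.1f} mV")
# A bigger Rf pushes the corner closer to DC (a better integrator) but multiplies
# the offset along with it, so how far you can push Rf is set by the op-amp's Vos.
```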
I found Python packages that do the trick. There was a package for the Rigol DS1054z 'scope at pypi.org/project/ds1054z/ that Just Worked. For the Feeltech AWG, I used 'fygen' github.com/mattwach/fygen - and found that I had to monkey patch the frequency-setting, because the wire protocol on my FY6900 wants the decimal point when sending the frequency to it, and the package as written just sends 14 digits without the decimal point. (I didn't find a reference for the protocol, just guessed that was the problem and tried it.) The code that I used to drive the 'scope and AWG and produce the Bode plots is on the project GitHub: github.com/kennykb/kevins-cave-kludges/tree/main/Synth/Ep007a-Integrator-vs-LPF
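For anyone rolling their own, the sweep loop has roughly this shape. This is only a sketch, not the code in the repo: the scope address, the serial port, the FYGen.set/DS1054Z.query calls, and the SCPI measurement items are taken from the packages' docs and the DS1000Z programming guide, so check them against the versions you actually have.

```python
# Rough sketch of a gain/phase sweep - NOT the code from the project GitHub.
# Method names and SCPI items are assumptions to verify against the ds1054z and
# fygen docs and the Rigol DS1000Z programming guide.
import math
import time

from ds1054z import DS1054Z
import fygen

scope = DS1054Z('192.168.1.50')       # scope address is a placeholder
awg = fygen.FYGen('/dev/ttyUSB0')     # AWG serial port is a placeholder

freqs = [10 * 10 ** (i / 10) for i in range(31)]   # 10 Hz .. 10 kHz, 10 pts/decade
for f in freqs:
    # Drive the circuit with a 1 V sine; 'enable' keeps the channel output on.
    awg.set(0, wave='sin', freq_hz=f, volts=1.0, enable=True)
    time.sleep(0.5)                   # let the scope's measurements settle
    vin = float(scope.query(':MEASure:ITEM? VPP,CHANnel1'))
    vout = float(scope.query(':MEASure:ITEM? VPP,CHANnel2'))
    phase = float(scope.query(':MEASure:ITEM? RPHase,CHANnel1,CHANnel2'))
    print(f"{f:10.1f} Hz  gain {20 * math.log10(vout / vin):6.1f} dB  phase {phase:7.1f} deg")
```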
I deeply appreciate someone sharing their knowledge and design perspective so freely. This was great, and you are a great communicator of practical information. Thanks!
It's great to have people like you who teach people like me more and more about electronics. I live on an island and there's nowhere to study electronics here, so I had to just search the internet to learn about it - these videos are gold for people like me, who either cannot afford to study these subjects or simply don't have them available where they live. The more I learn every day, the more I can do for my community when it comes to repairing or making things to meet people's needs. Thank you.
Terrific! That's why I love teaching! Maybe you learned the Sziklai by its other name, the "complementary Darlington"? As I said in the video, you used to see it a lot in audio power amps, back in the days when PNP power transistors were garbage. You'd have a push-pull output stage with a Sziklai pushing and a Darlington pulling.
I am 66, and for the first time in my life I am learning electronics in real earnest. I really love your lessons. And BTW, what is the SPICE software that you are using, Sir?
I'm using CircuitJS (www.falstad.com/circuit/circuitjs.html) to do the simulations. It's free and runs in the browser. Where I have circuit simulations in the video, I try to include the CircuitJS links in the description and on the project GitHub, so that viewers can follow along with the exact models I used.
@@KludgesFromKevinsCave Yes, although I graduated from a technical school in electronics (though with a more digital focus), your educational skills and clear practical presentations are outstanding.
I hadn't, but I've seen that sort of rookie mistake in a lot of YouTube videos, making me say to myself, "has this person actually _built_ the thing?" Of course, the correct polarity depends on the biasing of the previous stage. You haven't lived until you've set off the smoke alarm with an exploding capacitor!
@@KludgesFromKevinsCave Malvino's is one such book. But even Millman & Halkias have reverse-biased input caps (I had to verify with a simulation, I mean... Millman!) I wonder whether these reversals were the authors' own fault, or were introduced by the artist who drew the circuits for publication.
Interesting... always good to see how others organize their electronics workplace. It would be interesting to see the tools and other equipment you use. Greetings from Germany 😊
I'll most likely get around to that, particularly if I'm doing something odd with them. There's a three-part episode on measuring the current-versus-voltage curve of a diode or transistor coming soon, where I'll be doing some software control of the 'scope.
Thank you very much for this explanation of bootstrapping. Could you also explain the audio-amplifier version of bootstrapping? I keep seeing it in push-pull output stages but cannot understand how it works.
I have an ever-growing list of 'ideas for future videos.' Your question just got added to it. ;-) The short version: It works just the same as what I showed here. The push-pull stage is a pair of emitter followers (you can kind of ignore the ballast resistors between the two emitters, and the feedback resistor). The resistor that the bootstrap wants to make disappear is the collector resistor of the driver stage. It gets split with the output fed back into the middle of it by a bootstrap capacitor. (Hard to explain without a chalkboard...)
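To put rough numbers on the "make the resistor disappear" part (the values below are invented for illustration): the lower half of the split collector resistor has the driver's node on one end and the amplifier's output on the other, and since the output follows that node with a gain A just shy of 1, almost no signal current flows in it - so for AC it looks like R/(1 - A).

```python
# Sketch of the bootstrap effect on the split collector resistor.
# R2 is the half of the resistor between the driver node and the output;
# A is the voltage gain from that node to the output (an emitter follower,
# so just under 1). Values are made up for illustration.
def bootstrapped_resistance(R2, A):
    """Apparent AC resistance of R2 when its far end is driven with gain A."""
    # Signal current through R2 is (V - A*V)/R2, so the node sees R2/(1 - A).
    return R2 / (1 - A)

R2 = 2.2e3
for A in (0.0, 0.9, 0.98, 0.995):
    print(f"follower gain {A:5.3f}: R2 looks like {bootstrapped_resistance(R2, A)/1e3:8.1f} k")
# With A ~ 0.995 the 2.2k resistor behaves like ~440k for AC, so the driver stage
# sees a much higher load impedance (and hence much more voltage gain) than the
# DC bias network would suggest.
```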
Several errors here. The statement about keeping the inputs equal is not an ideal opamp property: it is a _consequence_ of those properties _when NFB is applied._ Converting volts to volts/sec is differentiation, not integration, and it’s not what this circuit does. It produces output volts as the integral of volts/sec at the input. There are other ways of dealing with offset.
I'd argue that what you are calling 'errors' are at worst 'oversimplifications.'

First: Observe empirically that the circuit, taken as a whole, does produce an output voltage proportional to the integral of the input voltage over time. The demonstration at the bench shows this with several input functions that can be readily verified. I gave it a square wave at the input; the output was a triangle wave, not a series of delta functions. In fact, your phrase 'integral of volts/second at the input' is puzzling. 'Volts/second' is the derivative of the input voltage, and the integral of a derivative is the function itself - and clearly this circuit is not just functioning as a linear amplifier!

You seem to be stuck on the idea that 'a capacitor differentiates its input voltage.' It does indeed: I = C dV/dt. But what the simple analysis presented here does is to turn that relation on its head and say that a capacitor integrates its input _current_ - which is an equally valid way of looking at the problem. The charge on the capacitor is Q = ∫ I dt, and the voltage across it is V = Q/C = (1/C) ∫ I dt. In circuit analysis, it's important to be able to think of things either way; sometimes it's more convenient to look at currents (and consider Norton equivalents of networks), other times voltages (and think of Thévenin equivalents).

I concede that I left off the key words 'in the presence of stable negative feedback.' With that out of the way:

1. The feedback loop functions to maintain the (-) input of the op-amp at virtual ground.
2. The input resistor therefore functions to inject a current into that node proportional to the input voltage (in fact, I_in = V_in/R).
3. An ideal op-amp's inputs draw no current, therefore the feedback current must balance the input current (I_c = -I_in).
4. The voltage across the capacitor will be the integral of the current moving through it - and said current, we just said, is proportional to the input voltage: V_c = (1/C) ∫ I_c dt = -(1/C) ∫ I_in dt = -(1/RC) ∫ V_in dt.
5. Since one end of the capacitor is at virtual ground, the other end will be at the stated voltage.

The constant of integration is determined by the charge on the capacitor at the start of the interval where you start measuring. The function of the RESET signal is to establish a time where the integration begins.

Yes, there are other ways to deal with op-amp offset that I haven't discussed ... yet. (Some of them are coming!) One important one is to use a commutating-auto-zero (that is, chopper-stabilized) amplifier, which mitigates the offset by monitoring it on board the IC and injecting an equal and opposite correction to cancel it. Or, as I said in the video but did not demonstrate, sometimes your circuit topology cancels the problem for you - an example that I plan to get to soon is a triangle-wave oscillator that does just that. Input offset causes a negligible asymmetry in the rising and falling slopes; the frequency remains constant regardless of the offset (within reason). But the two most important techniques by far are integrating over a short enough time that it's not a problem, and rolling off the integrator's DC response so that it centers itself.

At some point, I'll most likely have an example that needs an LTC1150 or MCP6V51 (or some such). Auto-zero is in the category of "I'll cross that bridge when I come to it." It's got its problems - it tends to be an expensive solution (have you checked out the price of an LTC1150 lately?), and you usually start to struggle with clock feedthrough when you use auto-zero amps.
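For anyone who wants to see the current-integration view with numbers, here is a minimal sketch (R, C, and the square-wave drive are arbitrary values, not the ones from the video): applying V_out = -(1/RC) ∫ V_in dt to a square wave gives the triangle wave seen at the bench.

```python
import numpy as np

# Numerically apply v_out = -(1/RC) * integral(v_in dt) to a square wave.
# R, C, and the 100 Hz / 1 V square wave are arbitrary illustration values.
R, C = 10e3, 100e-9
fs = 1_000_000                        # sample rate for the numerical integration
t = np.arange(0, 0.02, 1 / fs)        # two periods of a 100 Hz input
v_in = np.where((t * 100) % 1.0 < 0.5, 1.0, -1.0)   # +/-1 V square wave

v_out = -np.cumsum(v_in) / fs / (R * C)   # running integral, scaled by -1/RC

# The output ramps linearly while the input is constant, reversing slope at each
# edge: a triangle wave whose swing is (1 V * 5 ms)/(R*C) = 5 V for these values.
print(f"peak-to-peak output: {v_out.max() - v_out.min():.2f} V")
```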
@@KludgesFromKevinsCave I don’t see how you can possibly conclude that I am ‘fixated’ on something I didn’t even mention. The fact remains that describing integration using units appropriate to differentiation isn’t a mere ‘over-simplification’; and you’ve acknowledged your other error.
@TomLeg I like it so far. Construction is good (no glass fuses like some of the cheap meters!) and its readings match an expensive Agilent meter in the lab at work.
Does bootstrapping sometimes reduce the stability of the amplifier by providing positive feedback? Bootstrapping is used sometimes when the fT of a transistor is barely adequate, but it has to be weighed against the potential reduced stability.
Bootstrapping in this configuration can indeed reduce stability, but it's done pretty routinely in class-B audio amps. In this configuration with cheap transistors, you have plenty of phase margin. I don't think I've ever had a bootstrapped power amp start oscillating on me.
The statement “This circuit knows calculus!” is backwards. The correct statement would be: calculus can simulate the behavior of this op-amp circuit. The circuit is the real world, and calculus is an abstraction that predicts its behavior.
Calculus can model the behaviour of the op-amp. Alternatively, given a calculus problem from elsewhere in the real world, the op-amp's behaviour can model it fairly closely. (Recall how analog computers were used to compute things like the firing tables for guns or the rise and fall of the tides, before there were digital ones.) Is the real world an instance of some sort of Platonic reality in which mathematics reigns, or is the mathematics a mere abstraction of a portion of reality? I think you'll agree that the way I wrote it makes for a better headline. 😁
@@KludgesFromKevinsCave - I’m actually among the last of the dying breed that actually used an analog computer in the workplace. I don’t disagree with anything you’re saying.
Well, have you ever soldered a 40 pin IC backwards?? And this was before having a nice desolder station. Finally, after a lot of time and lifting several traces, I installed a socket, just in case, reinstalled the IC and all was fine. However, had I not used a socket, I am convinced the IC would have been dead. I'll not confess to whether or not I powered the board on with the IC installed vastly incorrectly.
I'm not sure I ever installed a 40 pin DIP backwards, but I've certainly done it with smaller ones - and once with a 15,000 µF electrolytic that set off the smoke alarm when it exploded.
Thank you for this wonderful presentation! Your wisdom, experience and patience really come through. I subscribed to your channel after I got over the flabbergast! Sincere thank you.
Sometimes I wish I could just blink and suddenly know all this stuff. I've been learning algebra/calculus on my own for the last few weeks, but integrals/derivatives just won't click with me. I guess I'm stuck with the good ol' trial-and-error way of making a circuit :P