I've always wondered why we don't just use the sequential definition as the primary one. Sequences are such an effective workhorse for establishing fundamental results in real analysis that you might as well use them here as well. And it is more intuitive to say "no matter how you approach a, f approaches L" than the epsilon-delta cha-cha. I guess one reason for the usual definition is that we can define uniform convergence in a very similar way.
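For reference, here are the two definitions being compared, in standard notation (my phrasing, not from the comment above):

```latex
% Epsilon-delta definition of the limit of f : A -> R at a:
\lim_{x \to a} f(x) = L
\iff
\forall \varepsilon > 0\ \exists \delta > 0\ \forall x \in A:\
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.

% Sequential definition:
\lim_{x \to a} f(x) = L
\iff
\text{for every sequence } (x_n) \text{ in } A \text{ with } x_n \neq a:\
x_n \to a \implies f(x_n) \to L.
```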
@@Nickesponja I've found that the epsilon-N definition of a sequence limit is a bit more obvious, while the epsilon-delta definition of continuity is a bit more opaque. It's also easier to say it plainly, e.g. "lim {x_k} = L if every open interval around L contains all but a finite number of sequence elements."
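Spelling out that comparison (the "open interval" phrasing and the epsilon-N definition are equivalent; my formalization):

```latex
% Epsilon-N definition of a sequence limit:
\lim_{k \to \infty} x_k = L
\iff
\forall \varepsilon > 0\ \exists N \in \mathbb{N}\ \forall k > N:\ |x_k - L| < \varepsilon.

% Equivalent "open interval" phrasing: for every \varepsilon > 0, the interval
% (L - \varepsilon, L + \varepsilon) contains all but finitely many terms x_k,
% namely all terms with index k > N.
```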
@@stabbysmurf I've found that there are functions that are just easier with epsilon-delta. I think the sequential definition is really good for most functions for limits. But occasionally it just drives you into the woods for no good reason.
The last problem was pretty intuitive: the limit as x goes to zero of sin(1/x) means the argument of sine tends to infinity as x goes to zero, but the limit as x goes to infinity of sin(x) obviously doesn't exist, since sine just oscillates. I.e., if you picked L to be any real number, you could easily show using epsilon-delta that it isn't the limit.
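A concrete way to see the nonexistence with the sequential criterion is to pick two sequences tending to 0 along which sin(1/x) has different limits (my choice of sequences):

```latex
x_n = \frac{1}{2\pi n} \to 0,
\qquad \sin(1/x_n) = \sin(2\pi n) = 0 \to 0,
\\[4pt]
y_n = \frac{1}{2\pi n + \pi/2} \to 0,
\qquad \sin(1/y_n) = \sin(2\pi n + \tfrac{\pi}{2}) = 1 \to 1.
% Two sequences tending to 0 give different limits of sin(1/x),
% so \lim_{x \to 0} \sin(1/x) does not exist.
```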
Just a slight subtlety: you had to show x_n ≠ a for every n. But that comes from the definition of the limit, where you require |x_n − a| > 0, so x_n ≠ a for every n (and also because x_n is chosen from A and a is a limit point of A).
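The restriction x_n ≠ a matters: without it, the sequential criterion would fail for functions whose value at a disagrees with their limit at a. A standard example (mine, not from the video):

```latex
f(x) =
\begin{cases}
0 & x \neq 0, \\
1 & x = 0.
\end{cases}
% Here \lim_{x \to 0} f(x) = 0, but the constant sequence x_n = 0 converges to 0
% while f(x_n) = 1 \to 1 \neq 0. Restricting to sequences with x_n \neq 0
% rules out this counterexample.
```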
Since we have proved that the sequential definition of the limit of a function is equivalent to the original definition, we can use this fact to prove the extreme value theorem.
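A sketch of how the sequential characterization feeds into the extreme value theorem (the standard argument, not spelled out in the comment):

```latex
% Let f be continuous on [a, b] and M = \sup_{x \in [a,b]} f(x) (possibly +\infty).
% Choose x_n \in [a, b] with f(x_n) > M - 1/n (or f(x_n) > n if M = +\infty).
% By Bolzano-Weierstrass, some subsequence x_{n_k} \to c \in [a, b].
% Sequential continuity gives f(x_{n_k}) \to f(c), so f(c) = M.
% In particular M < \infty and the supremum is attained at c.
```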
I wonder whether you could help me solve question 23 from 2020 AMC 10B. The question is as follows: Square ABCD in the coordinate plane has vertices at the points A(1,1), B(-1,1), C(-1,-1), and D(1,-1). Consider the following four transformations:
- L, a rotation of 90 degrees counterclockwise around the origin;
- R, a rotation of 90 degrees clockwise around the origin;
- H, a reflection across the x-axis;
- V, a reflection across the y-axis.
Each of these transformations maps the square onto itself, but the positions of the labeled vertices will change. For example, applying R and then V would send vertex A at (1,1) to (-1,-1) and would send vertex B at (-1,1) to itself. How many sequences of 20 transformations chosen from {L, R, H, V} will send all of the vertices back to their original positions? (a) 2^37 (b) 3·2^36 (c) 2^38 (d) 2^39
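A sketch of the standard counting argument, assuming I'm reading the problem correctly (group-theoretic phrasing is mine):

```latex
% The eight symmetries of the square form the dihedral group D_4.
% K = \{e,\ r^2,\ \text{the two diagonal reflections}\} is a subgroup of index 2,
% and S = \{L, R, H, V\} is its other coset.
% A product of an odd number of elements of S again lies in S (coset parity),
% so after 19 moves the net symmetry g is in S. Since S is closed under inverses
% (L^{-1} = R,\ H^{-1} = H,\ V^{-1} = V), exactly one 20th move undoes g.
% Count: 4^{19} \cdot 1 = 2^{38}, i.e. answer (c).
```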
Would like to show the contrapositive of ⇒: (∃{a_n}: a_n→a ∧ f(a_n)↛L) ⇒ lim{f(x), x→a} ≠ L.
PF: unpack the definitions of each statement in the "and" operation (∧) above:
∃{a_n}: ∀δ>0 ∃N(δ)∈ℕ ∀n∈ℕ, n>N(δ) ⇒ |a_n−a|<δ.
f(a_n)↛L: ∃ε>0 ∀M∈ℕ ∃n∈ℕ, n>M ∧ |f(a_n)−L|≥ε.
Set M = N(δ) and find n > N(δ) s.t. |a_n−a|<δ ∧ |f(a_n)−L|≥ε.
Hence ∃ε>0 ∀δ>0 ∃x∈ℝ (x = a_n for some n > N(δ)) with |x−a|<δ ∧ |f(x)−L|≥ε, which is exactly the negation of lim{f(x), x→a} = L.