Thursday, 10 August 2017

Understanding the definition of limits of sequences

The definition of the limit of a sequence (in the real numbers) is well known:

Let $(u_n)_{n \in \mathbb{N}}$ be a sequence of real numbers. Then:

$$\lim_{n \to \infty} u_n = L \iff \forall \epsilon >0: \exists N \in \mathbb{N}: \forall n > N: |u_n - L| < \epsilon$$

and we say that the sequence converges to its limit $L$.

While this definition may seem abstract, there is a good reason it looks the way it does. When we speak of convergence, we mean that, far out in the sequence (for very large indices $n$), the terms come arbitrarily close to the limit. In other words, the distance between the limit $L$ and the value of the sequence at such a large $n$ can be made as small as we like.
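As a concrete illustration (this example sequence is not from the original text, but is a standard one), consider $u_n = \frac{1}{n}$, which converges to $L = 0$. For any $\epsilon > 0$ we have

$$\left|\frac{1}{n} - 0\right| = \frac{1}{n} < \epsilon \iff n > \frac{1}{\epsilon},$$

so taking $N = \lceil 1/\epsilon \rceil$ guarantees that $|u_n - 0| < \epsilon$ whenever $n > N$. For instance, with $\epsilon = 0.01$, the choice $N = 100$ works.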

So for any (small) $\epsilon > 0$, we have $|u_n - L| < \epsilon$ provided we look far enough, which translates to saying that we can always find a positive integer $N$ such that the condition $|u_n - L| < \epsilon$ is met whenever $n > N$.
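The definition can be checked numerically. The sketch below assumes the example sequence $u_n = 1/n$ with limit $L = 0$ (chosen here for illustration; it does not appear in the text above): given an $\epsilon$, it produces an $N$ that witnesses the definition and then verifies the inequality for a range of indices beyond $N$.

```python
import math

def find_N(epsilon):
    """For the example sequence u_n = 1/n with limit L = 0:
    |1/n - 0| < epsilon holds exactly when n > 1/epsilon,
    so N = ceil(1/epsilon) is a valid witness."""
    return math.ceil(1 / epsilon)

def check_definition(epsilon, how_far=10_000):
    """Verify |u_n - L| < epsilon for a finite range of n > N.
    (A numerical sanity check, not a proof.)"""
    N = find_N(epsilon)
    L = 0.0
    return all(abs(1 / n - L) < epsilon for n in range(N + 1, N + how_far))
```

Note that `find_N` returns one valid $N$, not the smallest possible one; the definition only asks that *some* $N$ exist for each $\epsilon$.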