Thursday, 10 August 2017

Understanding the definition of limits of sequences

The definition of limit of a sequence (in the real numbers) is well known:

Let $(u_n)_{n \in \mathbb{N}}$ be a sequence of real numbers. Then:

$$\lim_{n \to \infty} u_n = L \iff \forall \epsilon >0: \exists N \in \mathbb{N}: \forall n > N: |u_n - L| < \epsilon$$

and we say that the sequence converges to its limit $L$.

While this definition may seem abstract, there is a good reason it looks like this. When we speak about convergence, we mean that when we look far out, at very large indices $n$, the terms of the sequence come arbitrarily close to the limit. So the distance between the limit value and the value of the sequence at such a large $n$ can be made as small as we please.

So for a certain (small) $\epsilon > 0$, we have that $|u_n - L| < \epsilon$ if we look far enough, which translates into saying that we can always find a positive integer $N$ such that the condition $|u_n - L| < \epsilon$ is met whenever $n > N$.
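We can also check the definition numerically. Here is an illustrative Python sketch for the sequence $u_n = 1/n$ with limit $L = 0$; the helper name `find_N` and the choice $N = \lceil 1/\epsilon \rceil$ are mine, picked just for this example:

```python
import math

def find_N(epsilon):
    """For u_n = 1/n with limit L = 0, return an N such that
    |u_n - 0| < epsilon whenever n > N.  Since 1/n < epsilon
    iff n > 1/epsilon, the choice N = ceil(1/epsilon) works."""
    return math.ceil(1 / epsilon)

# Check the condition |u_n - L| < epsilon for all sampled n > N
for epsilon in (0.1, 0.01, 0.001):
    N = find_N(epsilon)
    assert all(abs(1 / n - 0) < epsilon for n in range(N + 1, N + 5000))
    print(f"epsilon={epsilon}: N={N} works")
```

Of course a finite check like this is no proof; it only illustrates how the pair $(\epsilon, N)$ in the definition behaves.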

Sunday, 16 April 2017

$(p-1)! \equiv -1 \bmod p$

A group theoretic proof:

Obviously, for $p = 2$, the statement is true.
For the rest of the proof, let's assume $p \neq 2$.

Now, if we have a finite abelian group $G = \{x_1, \dots , x_n\}$, we know that $x_1 * x_2 * \dots * x_n = \prod\limits_{ord(x_i)=2}x_i$, because every element that is not self-inverse cancels with its (distinct) inverse, and the only self-inverse elements are $e$ and the elements of order $2$ (if $ord(x) = 2$, then $x^2 = e$, so $x = x^{-1}$).

Now, let's apply this to the group $(\mathbb{Z}/p\mathbb{Z} \setminus \{0\}, \cdot)$:

(Denote the elements of $\mathbb{Z}/p\mathbb{Z} \setminus \{0\}$ as $\{ [1], \dots, [p-1]\}$.)

We find $$[1][2][3]\dots[p-1] = [(p-1)!] =\prod\limits_{ord(x_i)=2}x_i$$

So, we only have to find the elements of order 2 in the group:

Hence, let's solve the equation $[a]^2 = [1]$:

$[a]^2 = [1] \iff [a^2] = [1] \iff a^2 \equiv 1 \bmod p \iff p|(a^2 - 1) \iff p|(a-1)(a+1)$

and because $p$ is prime (that's important here), it follows that either $p|(a-1)$ or $p|(a+1)$. Hence, $a \equiv 1 \bmod p$ or $a \equiv -1 \bmod p$

and thus:

$[a] = [1]$ or $[a] = [-1]$

Because $ord([1]) = 1$ and $[-1] \neq [1]$ (here we use $p \neq 2$), it follows that $[-1]$ is the only element of order $2$ in the group. Hence:

$[(p-1)!] = [-1]$

and thus:

$(p - 1)! \equiv -1 \bmod p \quad \triangle$
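As a sanity check, Wilson's theorem is easy to verify computationally for small primes. A quick Python sketch; `is_prime` is a naive trial-division helper written only for this check:

```python
from math import factorial

def is_prime(n):
    """Naive trial division, fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Wilson: (p-1)! ≡ -1 (mod p) for every prime p.
# Note that -1 mod p is represented by p - 1.
for p in range(2, 100):
    if is_prime(p):
        assert factorial(p - 1) % p == p - 1
```

For composite $n > 4$ the congruence fails (in fact $(n-1)! \equiv 0 \bmod n$ there), so the theorem even characterizes primes.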

Tuesday, 13 September 2016

If a function $f: A \subset\mathbb{R}\rightarrow\mathbb{R}$ is strictly increasing, then $f$ is injective.

Theorem: Consider a function $f: A \subset\mathbb{R}\rightarrow\mathbb{R}$ that is strictly increasing. Then $f$ is an injective function.

Proof: We know that $f$ is strictly increasing on its domain $A$: for all $x_1, x_2 \in A$ with $x_1 < x_2$, we have $f(x_1) < f(x_2)$. Now take $x_1 \neq x_2$ in $A$; without loss of generality $x_1 < x_2$. Then $f(x_1) < f(x_2)$, so in particular $f(x_1) \neq f(x_2)$. Thus $f$ is injective.
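We can illustrate this on a finite sample of points. An illustrative Python sketch; the helper `is_injective_on`, the functions, and the sample grid are all choices of mine, not part of the theorem:

```python
def is_injective_on(f, points):
    """Check that f takes distinct values at distinct points
    (injectivity restricted to a finite sample)."""
    values = [f(x) for x in points]
    return len(set(values)) == len(values)

points = [x / 10 for x in range(-50, 51)]

# x**3 is strictly increasing on R, hence injective
assert is_injective_on(lambda x: x ** 3, points)

# x**2 is not strictly increasing on R, and indeed f(-1) = f(1)
assert not is_injective_on(lambda x: x ** 2, points)
```

Note the converse of the theorem fails: an injective function need not be strictly increasing (it need not even be monotone).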

Monday, 12 September 2016

If $A$ is a symmetric and regular matrix, then $A^{-1}$ is symmetric.

Theorem: Suppose $A$ is a symmetric and regular matrix, then $A^{-1}$ is symmetric too.

Proof: $A^{-1}$ exists because $A$ is a regular matrix.

So, we have: $AA^{-1} = I_n$ and we have to show that ${(A^{-1})}^{T} = A^{-1}$

Now, transposing both sides and using ${I_n}^{T} = I_n$ and $(AB)^{T} = B^TA^T$, we obtain:

$(A^{-1})^TA^T = I_n$, but we also have:  $A^{-1}A = I_n$

So: $(A^{-1})^TA^T = A^{-1}A$

But $A$ is symmetric, thus $A^T = A$

So: $(A^{-1})^TA = A^{-1}A$

And by the theorem: $AB = CB$ and $B$ regular $\Rightarrow A = C$, we get:

$(A^{-1})^T = A^{-1}$

And this is exactly the definition of a symmetric matrix.
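For a concrete check, here is the statement verified on a $2 \times 2$ example in plain Python. The matrix $A$ and the adjugate-formula helper are illustrative choices of mine, not part of the proof:

```python
def inverse_2x2(A):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "matrix must be regular (invertible)"
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(A):
    return [list(row) for row in zip(*A)]

# A symmetric, regular matrix ...
A = [[2.0, 1.0], [1.0, 3.0]]
Ainv = inverse_2x2(A)

# ... whose inverse is again symmetric, as the theorem predicts
assert transpose(Ainv) == Ainv
```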

Sunday, 11 September 2016

Functions: definition

Definition (function): Given two sets $X$ and $Y$. A function $f: X \rightarrow Y$ is a set of ordered pairs $(x,y)$ such that:

1) $(x,y) \in X \times Y$. This means: $x \in X$ and $y \in Y$. (*)
2) Every $x \in X$ occurs exactly once as the first component of an ordered pair $(x,y)$. (**)

We call $X$ the domain of $f$ and $Y$ the codomain of $f$. (***)


1) (*) In fact, this means that a function $f: X \rightarrow Y$ is actually defined as a subset of $X \times Y$. So, we can write that $f = \{(x,f(x))|x \in X\}$
2) (**) Simply stated, this means that we can only have one $y$ for every $x$ in $(x,y)$.
3) (***) When you compute $f(x)$ for all $x$ in the domain of $f$, you get a set of values. This set is called the range of $f$. Now, it's very important to notice that the range of $f$ is not necessarily equal to the codomain of $f$! Define, for example, a function $g$ in the following way.

$g: \mathbb{R}\rightarrow \mathbb{R}: x \mapsto g(x) = x^2$.

Then, the codomain of $g$ is the set of the real numbers $\mathbb{R}$, while the range of $g$ is the set of nonnegative real numbers $\mathbb{R^{+}} = [0, +\infty)$.
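One way to see this concretely: solving $g(x) = y$ for a given $y$ in the codomain succeeds exactly when $y \geq 0$. An illustrative Python sketch (the helper name `preimages` is mine):

```python
import math

def preimages(y):
    """Solve g(x) = x**2 = y over the reals.  Real solutions exist
    iff y >= 0, so the range of g is R+ even though its codomain is R."""
    if y < 0:
        return []                 # y is in the codomain but not in the range
    r = math.sqrt(y)
    return [-r, r] if r else [0.0]

assert preimages(-1) == []        # no real x with x**2 == -1
assert preimages(0) == [0.0]
assert preimages(4) == [-2.0, 2.0]
```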


1) Consider the function f:

$f: \mathbb{R}\rightarrow \mathbb{R}: x \mapsto x^2 - x -2$.
- The domain of f is $\mathbb{R}$
- The codomain of f is $\mathbb{R}$
- The range of f is $[\frac{-9}{4},\infty)$

2) The Sign function is defined in the following way:

$$\mathrm{Sign}(x) :=
\begin{cases}
-1 & \quad x < 0 \\
0 & \quad x = 0 \\
1 & \quad x > 0
\end{cases}$$

Theorem: Let $f: A \to B$ be a function. Then the following statements are equivalent:

(1) $f$ bijective
(2) There is a function $g: B \to A$ such that $g \circ f =  1_A$ and $f \circ g = 1_B$

Proof: $\boxed{(1) \Rightarrow (2)}$

Since $f$ is bijective, for every $b \in B$, there exists a unique $a \in A$ such that $f(a) = b$. Then, define $g: B \to A$ by letting $g(b) := a$. Because $f$ is bijective, $g$ is well defined.

Now, we have that $g \circ f (a) = g(f(a)) =  g(b) = a$ and $f \circ g(b) = f(g(b)) = f(a) = b$
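The construction of $g$ in this direction can be illustrated on finite sets. A Python sketch; the sets and the map `f` below are made-up examples:

```python
# A bijection between two finite sets, and its inverse built exactly
# as in the proof: g(b) is the unique a with f(a) = b.
A = {1, 2, 3}
B = {"x", "y", "z"}
f = {1: "x", 2: "y", 3: "z"}       # bijective: each b is hit exactly once
g = {b: a for a, b in f.items()}   # well defined because f is bijective

assert all(g[f[a]] == a for a in A)   # g ∘ f = 1_A
assert all(f[g[b]] == b for b in B)   # f ∘ g = 1_B
```

If `f` were not injective, the dictionary comprehension would silently overwrite entries, and if it were not surjective, some `b` would be missing a preimage; bijectivity is exactly what makes `g` well defined.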

$\boxed{(1) \Leftarrow (2)}$