Recall that in the previous lecture, we wanted to find solutions to the following congruence

$$ \begin{align*} f(x) \equiv 0 \pmod{p^n} \end{align*} $$

Using Newton’s method, if we have a solution \(x_1\) such that

$$ \begin{align*} f(x_1) \equiv 0 \pmod{p^n} \end{align*} $$

and if

$$ \begin{align*} f'(x_1) \not\equiv 0 \pmod{p} \end{align*} $$

then Newton’s update

$$ \begin{align*} x_2 = x_1 - \frac{f(x_1)}{f'(x_1)} \end{align*} $$

is a solution modulo \(p^{2n}\)

$$ \begin{align*} f(x_2) \equiv 0 \pmod{p^{2n}} \end{align*} $$
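As a quick sanity check, here is a minimal Python sketch of this step on a toy example of my own choosing (\(f(x) = x^2 - 7\), \(p = 3\), \(n = 2\)); it is only meant to illustrate the statement above.

```python
# A quick numerical check of the classical Newton step (toy example,
# not from the lecture): f(x) = x^2 - 7 with p = 3 and n = 2.
p, n = 3, 2
f  = lambda x: x*x - 7
df = lambda x: 2*x
x1 = 4                                           # f(4) = 9 ≡ 0 (mod 3^2)
# Newton update, computed modulo p^(2n) since f'(x1) is invertible mod p
x2 = (x1 - f(x1) * pow(df(x1), -1, p**(2*n))) % p**(2*n)
assert f(x2) % p**(2*n) == 0                     # x2 solves f ≡ 0 (mod 3^4)
```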

In this lecture, we ask what happens if \(f'(x_1) \equiv 0 \pmod{p}\). For example, consider

$$ \begin{align*} x^2 \equiv 17 \pmod{2^n} \end{align*} $$

We can see here that \(f'(x) = 2x\) and that \(f'(x) \equiv 0 \pmod{2}\) for any \(x\). So it looks like we won’t be able to use Newton’s method or Hensel’s Lemma to solve this. However, we can if we are a little more careful. Suppose we have a solution \(x\) such that

$$ \begin{align*} f(x) \equiv 0 \pmod{p^n} \end{align*} $$

and suppose

$$ \begin{align*} f'(x) \equiv 0 \pmod{p} \end{align*} $$

This implies that \(f'(x)\) is divisible by \(p\). Now, let \(d\) be the largest exponent such that \(f'(x)\) is divisible by \(p^d\) (so it is not divisible by \(p^{d+1}\)). Recall that the improved solution is \(x - \frac{f(x)}{f'(x)}\). By Taylor’s Theorem,

$$ \begin{align*} f\left(x - \frac{f(x)}{f'(x)}\right) &= f(x) + f'(x) \cdot \left(- \frac{f(x)}{f'(x)} \right) + \frac{f''(x)}{2!} \cdot \left( - \frac{f(x)}{f'(x)} \right)^2 + \cdots \end{align*} $$

Since \(f\) is a polynomial, this expansion is exact and has finitely many terms. The first two terms cancel, because multiplying out the second term gives \(f(x) - f(x) = 0\). Now, consider the next term in the series:

$$ \begin{align*} \frac{f''(x)}{2!} \cdot \left( - \frac{f(x)}{f'(x)} \right)^2 \end{align*} $$

From last lecture, the first factor \(\frac{f''(x)}{2!}\) is an integer. The second factor is divisible by \(\frac{p^{2n}}{p^{2d}} = p^{2(n-d)}\). Why? We know that \(f(x)\) is divisible by \(p^n\) by assumption, and \(f'(x)\) is divisible by \(p^d\) but not by \(p^{d+1}\). Therefore,

$$ \begin{align*} \left(\frac{f(x)}{f'(x)}\right)^2 = (p^{n-d} \cdot C)^2 \end{align*} $$

where \(C\) is some number whose denominator is not divisible by \(p\). So this term is divisible by \(p^{2(n-d)}\), and the later terms of the expansion, which involve even higher powers of \(\frac{f(x)}{f'(x)}\), are divisible by even higher powers of \(p\). Now, we want the improved solution \(x_2 = x - \frac{f(x)}{f'(x)}\) to satisfy

$$ \begin{align*} f(x_2) \equiv 0 \pmod{p^{n+1}} \end{align*} $$

For this we want \(p^{2(n-d)}\) to be divisible by \(p^{n+1}\), that is, \(2n - 2d \geq n + 1\). In other words,

$$ \begin{align*} n &\geq 2d + 1 \end{align*} $$

To summarize what we did:

Suppose that \(f(x_1) \equiv 0 \pmod{p^n}\), that \(f'(x_1)\) is divisible by \(p^d\) but not by \(p^{d+1}\), and that \(n \geq 2d + 1\). Then $$ \begin{align*} f(x_2) \equiv 0 \pmod{p^{n+1}} \end{align*} $$ where \(x_2 = x_1 - \frac{f(x_1)}{f'(x_1)}\).

Observe here that when \(d = 0\), we recover Hensel’s Lemma.
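Here is a minimal Python sketch of this refined lifting step. The function name `lift_solution` and the worked example (\(x^2 - 17\) modulo powers of \(2\)) are illustrative choices, not from the lecture. Since \(\frac{f(x)}{f'(x)}\) need not be an integer, the sketch divides the exact power \(p^d\) out of \(f'(x)\) and inverts the remaining unit, which plays the role of \(\frac{f(x)}{f'(x)}\) modulo \(p^{n+1}\):

```python
def lift_solution(f, df, x, p, n):
    """One refined lifting step, assuming f(x) ≡ 0 (mod p^n),
    p^d exactly divides f'(x), and n >= 2d + 1."""
    fx, dfx = f(x), df(x)
    d = 0
    while dfx % p**(d + 1) == 0:       # d = exact power of p dividing f'(x)
        d += 1
    assert fx % p**n == 0 and n >= 2*d + 1
    u = dfx // p**d                    # unit part of f'(x), invertible mod p
    t = (fx // p**d) * pow(u, -1, p**(n + 1))   # plays the role of f(x)/f'(x)
    return (x - t) % p**(n + 1)        # solution modulo p^(n+1)

# Example: lift a square root of 17 from modulo 2^5 to modulo 2^6.
f, df = (lambda x: x*x - 17), (lambda x: 2*x)
x2 = lift_solution(f, df, 9, 2, 5)     # 9^2 - 17 = 64 ≡ 0 (mod 2^5)
assert f(x2) % 2**6 == 0
```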


Example

When can we solve $$ \begin{align*} x^2 \equiv a \pmod{2^n} \end{align*} $$ for large \(n\)?

Let \(f(x) = x^2 - a\). Then \(f'(x) = 2x\). Suppose we have a solution \(x_1\) such that

$$ \begin{align*} f(x_1) \equiv 0 \pmod{2^n} \end{align*} $$

Then (assuming \(a\) is odd) \(x_1\) must be odd. But if \(x_1\) is odd, then \(f'(x_1) = 2x_1\) is divisible by \(2^1\) but not by \(2^2\), so \(d = 1\). The condition becomes

$$ \begin{align*} n \geq 2d + 1 = 3 \end{align*} $$

So if we have a solution of \(x^2 \equiv a \pmod{2^3}\), we can lift it step by step to a solution of \(x^2 \equiv a \pmod{2^4}\), then \(\pmod{2^5}\), and so on.

So when can we solve \(x^2 \equiv a \pmod{2^3}\)? We can simply check all the cases. We know that \(x\) is odd, so the candidates are

$$ \begin{align*} x \equiv 1,3,5,7 \pmod{2^3} \end{align*} $$

Squaring each of these gives

$$ \begin{align*} x^2 \equiv 1,1,1,1 \pmod{8} \end{align*} $$

This means that we can only solve \(x^2 \equiv a \pmod{2^3}\) when

$$ \begin{align*} a \equiv 1 \pmod{8} \end{align*} $$

This illustrates how we reduced the problem from a congruence modulo a large power to a condition modulo a small prime power. We started by wanting to solve \(x^2 \equiv a \pmod{2^n}\) where \(n\) is large, and we saw that \(n\) needs to be at least \(3\) for the lifting to higher powers to work. So if we can solve the congruence for \(n = 3\), we can lift that solution to any large \(n\), and that is exactly when \(a \equiv 1 \pmod 8\).
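As a sanity check on this conclusion, here is a small brute-force sketch confirming that for odd \(a\) and \(n \geq 3\), the congruence \(x^2 \equiv a \pmod{2^n}\) is solvable exactly when \(a \equiv 1 \pmod 8\):

```python
# Brute force over small n: for odd a, x^2 ≡ a (mod 2^n) is solvable
# exactly when a ≡ 1 (mod 8).
for n in range(3, 12):
    mod = 2**n
    squares = {x * x % mod for x in range(mod)}
    for a in range(1, mod, 2):
        assert (a in squares) == (a % 8 == 1)
```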


p-adic Numbers

Recall the example we did in the previous lecture when we solved

$$ \begin{align*} x^2 \equiv 7 \pmod{3^{100}} \end{align*} $$

We first solved this modulo \(3\). We got

$$ \begin{align*} x &\equiv 1 \pmod{3} \end{align*} $$

This implied that \(x = 3k+1\). Then we plugged this \(x\) into the next congruence modulo \(3^2\)

$$ \begin{align*} (3k + 1)^2 &\equiv 7 \pmod{3^2} \\ 9k^2 + 6k + 1 &\equiv 7 \pmod{3^2} \\ 6k &\equiv 6 \pmod{3^2} \\ 2k &\equiv 2 \pmod{3} \\ 2 \cdot 2k &\equiv 2\cdot 2 \pmod{3} \\ k &\equiv 1 \pmod{3} \end{align*} $$

So \(x \equiv 3 \cdot 1 + 1 = 4 \pmod{3^2}\). If we repeat the process for the next few powers, we pin down one more base-3 digit of \(x\) at each step:

$$ \begin{align*} x &\equiv 1 \pmod{3} \\ x &\equiv 1 \cdot 3 + 1 \pmod{3^2} \\ x &\equiv 1 \cdot 3^2 + 1 \cdot 3 + 1 \pmod{3^3} \\ &\vdots \end{align*} $$

Continuing forever produces an expansion

$$ \begin{align*} x = a_0 + a_1 \cdot 3 + a_2 \cdot 3^2 + a_3 \cdot 3^3 + \cdots, \qquad a_i \in \{0, 1, 2\} \end{align*} $$
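Here is a minimal Python sketch that produces these digits by lifting one power of \(3\) at a time (plain Hensel’s Lemma applies here, since \(f'(x) = 2x\) is invertible modulo \(3\)); the choice of ten digits is arbitrary, just for illustration:

```python
# Compute the first base-3 digits of a square root of 7 by repeated lifting.
p, a = 3, 7
x = 1                                      # solution modulo 3: 1^2 ≡ 7 (mod 3)
for n in range(1, 10):
    # try the three possible lifts x, x + 3^n, x + 2*3^n modulo 3^(n+1)
    for k in range(p):
        if ((x + k * p**n)**2 - a) % p**(n + 1) == 0:
            x += k * p**n
            break
digits = []
y = x
while y:
    digits.append(y % p)                   # digits a_0, a_1, a_2, ...
    y //= p
print(digits)
```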

Such an expansion is called a 3-adic number. The partial sums get larger and larger, so the series diverges in \(\mathbb{R}\), but in \(\mathbb{Z}_3\) it actually converges.

As a concrete example, take the expansion in which every digit equals \(1\). Its partial sums are

$$ \begin{align*} x_n &= \sum_{i=0}^{n-1} 3^i \\ &= 1 \cdot 3^0 + 1 \cdot 3^1 + 1 \cdot 3^2 + \cdots + 1 \cdot 3^{n-1} \\ &= \frac{3^n - 1}{2} \end{align*} $$

As \(n\) goes to infinity, this sum obviously diverges in \(\mathbb{R}\). But in the 3-adic world, a number is small if it is divisible by a large power of \(3\): \(3^1\) is small, \(3^{50}\) is a lot smaller, and so on. Therefore,

$$ \begin{align*} \lim_{n \rightarrow \infty} x_n = \frac{1}{1 - 3} = - \frac{1}{2} \end{align*} $$
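As a quick check of this claim, note that \(2 x_n + 1 = 3^n\), so the partial sums agree with \(-\frac{1}{2}\) modulo ever higher powers of \(3\). A short sketch:

```python
# The partial sums (3^n - 1)/2 are ≡ -1/2 modulo 3^n, in the sense that
# 2 * x_n + 1 is divisible by 3^n.
for n in range(1, 15):
    x_n = (3**n - 1) // 2
    assert x_n == sum(3**i for i in range(n))
    assert (2 * x_n + 1) % 3**n == 0
```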

We can add, subtract, and multiply \(p\)-adic numbers. We can also divide, provided the element we divide by is invertible (i.e., not divisible by \(p\)). Moreover, for just about anything we can do with the real numbers, there is an analog in the \(p\)-adics.


Some Intuition for p-adic Numbers

We’ve encountered infinite series before like

$$ \begin{align*} 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \end{align*} $$

To find the sum, we can use the geometric series formula \(\frac{a}{1-r}\), where \(a\) is the first term and \(r\) is the common ratio, so

$$ \begin{align*} \sum \frac{1}{2^n} = \frac{1}{1 - \frac{1}{2}} = 2 \end{align*} $$

We’ve also encountered the following series

$$ \begin{align*} \sum 2^n = 1 + 2 + 4 + 8 + \cdots \end{align*} $$

This series clearly doesn’t converge and if we try to apply the geometric series formula we will get

$$ \begin{align*} \sum 2^n = \frac{1}{1 - 2} = -1 \end{align*} $$

Over the reals this isn’t true; in fact, the formula requires \(|r| < 1\). However, in the 2-adic world this sum really is \(-1\) (see the sketch after the table below). In the \(p\)-adic world, higher powers of \(p\) mean smaller and smaller numbers. Why? Take the following example:

$$ \begin{align*} 10^0 &= 1 \\ 10^1 &= 10 \\ 10^2 &= 100 \\ 10^3 &= 1000 \\ 10^4 &= 10000 \\ 10^5 &= 100000 \end{align*} $$

As the powers of \(10\) increase, the digit \(1\) is shifted further to the left, with zeros filling in behind it. In the usual decimal picture, the leftmost digits are the most significant. In the \(p\)-adic picture we flip this: the digits attached to the lowest powers matter most, and a number divisible by a high power of the base is considered small. In this sense, \(10^5\) is a lot smaller than \(10^1\).
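Here is a minimal sketch making both points concrete: the partial sums of \(1 + 2 + 4 + \cdots\) agree with \(-1\) modulo ever higher powers of \(2\), and a number divisible by a higher power of the base is smaller in this sense (the helper `valuation` is just an illustrative name):

```python
# 2-adically, the partial sums 1 + 2 + ... + 2^(n-1) = 2^n - 1 are
# congruent to -1 modulo 2^n, so they get ever closer to -1.
for n in range(1, 16):
    partial = sum(2**i for i in range(n))
    assert (partial + 1) % 2**n == 0

# "Size" via the valuation: the exact power of the base dividing a number.
# A larger valuation means the number is smaller in this sense.
def valuation(x, base):
    v = 0
    while x % base == 0:
        x //= base
        v += 1
    return v

assert valuation(10**5, 10) > valuation(10**1, 10)   # 10^5 is the smaller one
```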

