It is very common to encounter series for which it is difficult, or even virtually impossible, to determine the sum exactly. Often you try to evaluate the sum approximately by truncating it, i.e. having the index run only up to some finite \(N\text{,}\) rather than infinity. But there is no point in doing so if the series diverges. So you would like to at least know if the series converges or diverges. Furthermore you would also like to know what error is introduced when you approximate \(\sum_{n=1}^\infty a_n\) by the “truncated series” \(\sum_{n=1}^N a_n\text{.}\) That is called the truncation error. There are a number of “convergence tests” to help you with this.
This tells us that, if we already know that a given series \(\sum a_n\) is convergent, then the \(n^{\rm th}\) term of the series, \(a_n\text{,}\) must converge to \(0\) as \(n\) tends to infinity. In this form, the test is not so useful. However the contrapositive of the statement is a useful test for divergence.
If the sequence \(\big\{a_n\big\}_{n=1}^\infty\) fails to converge to zero as \(n\rightarrow\infty\text{,}\) then the series \(\sum_{n=1}^\infty a_n\) diverges.
So the series \(\sum_{n=1}^\infty \frac{n}{n+1}\) diverges.
The divergence test is a “one way test”. It tells us that if \(\lim_{n\rightarrow\infty}a_n\) is nonzero, or fails to exist, then the series \(\sum_{n=1}^\infty a_n\) diverges. But it tells us absolutely nothing when \(\lim_{n\rightarrow\infty}a_n=0\text{.}\) In particular, it is perfectly possible for a series to diverge even though its terms converge to zero. An example is the harmonic series \(\sum_{n=1}^\infty\frac{1}{n}\text{,}\) whose divergence we will see shortly.
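The one-way nature of the test is easy to see numerically with the harmonic series: its terms tend to zero, yet its partial sums grow without bound. A quick sketch in Python (illustrative, not part of the text):

```python
# The divergence test is one-way: a_n -> 0 does NOT imply convergence.
# The harmonic series has a_n = 1/n -> 0, yet its partial sums grow without bound.

def harmonic(N):
    """N-th partial sum of the harmonic series, sum_{n=1}^N 1/n."""
    return sum(1.0 / n for n in range(1, N + 1))

# The terms shrink to zero ...
print(1.0 / 10**6)

# ... but the partial sums climb past any fixed value.
for N in (10, 1000, 100000):
    print(N, harmonic(N))
```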
Now while convergence or divergence of series like \(\sum_{n=1}^\infty \frac{1}{n}\) can be determined using some clever tricks — see the optional §3.3.9 — it would be much better to have methods that are more systematic and rely less on being sneaky. Over the next subsections we will discuss several methods for testing series for convergence.
Note that while these tests will tell us whether or not a series converges, they do not (except in rare cases) tell us what the series adds up to. For example, the test we will see in the next subsection tells us quite immediately that the series \(\sum_{n=1}^\infty\frac{1}{n^3}\)
converges. However it does not tell us its value.
In the integral test, we think of a series \(\sum_{n=1}^\infty a_n\text{,}\) that we cannot evaluate explicitly, as the area of a union of rectangles, with \(a_n\) representing the area of a rectangle of width one and height \(a_n\text{.}\) Then we compare that area with the area represented by an integral, that we can evaluate explicitly, much as we did in Theorem 1.12.17, the comparison test for improper integrals. We'll start with a simple example, to illustrate the idea. Then we'll move on to a formulation of the test in general.
Visualise the terms of the harmonic series \(\sum_{n=1}^\infty\frac{1}{n}\) as a bar graph — each term is a rectangle of height \(\frac{1}{n}\) and width \(1\text{.}\) The limit of the series is then the limiting area of this union of rectangles. Consider the sketch on the left below.
It shows that the area of the shaded columns, \(\sum_{n=1}^4\frac{1}{n}\text{,}\) is bigger than the area under the curve \(y=\frac{1}{x}\) with \(1\le x\le 5\text{.}\) That is
If we were to continue drawing the columns all the way out to infinity, then we would have
\begin{align*} \sum_{n=1}^\infty \frac{1}{n} & \ge \int_1^\infty \frac{1}{x}\, dx \end{align*}
We are able to compute this improper integral exactly:
\begin{align*} \int_1^\infty \frac{1}{x} \, dx &= \lim_{R\rightarrow\infty} \Big[ \log|x| \Big]_1^R = +\infty \end{align*}
That is, the area under the curve diverges to \(+\infty\) and so the area represented by the columns must also diverge to \(+\infty\text{.}\)
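The column-versus-curve inequality is easy to check numerically: the \(N^{\rm th}\) partial sum of the harmonic series dominates \(\int_1^{N+1}\frac{dx}{x}=\log(N+1)\text{.}\) A quick check (illustrative, not part of the text):

```python
import math

# Each shaded column of the bar graph sits above the curve y = 1/x, so the
# N-th partial sum of sum 1/n dominates the area under 1/x from 1 to N+1:
#     sum_{n=1}^N 1/n  >=  log(N + 1)

def harmonic(N):
    return sum(1.0 / n for n in range(1, N + 1))

for N in (4, 100, 10000):
    print(N, harmonic(N), math.log(N + 1))
```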
It should be clear that the above argument can be quite easily generalised. For example the same argument holds mutatis mutandis for the series \(\sum_{n=1}^\infty \frac{1}{n^2}\text{.}\)
Indeed we see from the sketch on the right above that
\begin{align*} \sum_{n=2}^\infty \frac{1}{n^2} &\leq \int_1^\infty \frac{1}{x^2}\, dx \end{align*}
This last improper integral is easy to evaluate:
\begin{align*} \int_1^\infty \frac{1}{x^2}\, dx &= \lim_{R\rightarrow\infty} \Big[ -\frac{1}{x} \Big]_1^R = 1 \end{align*}
Thus we know that
\begin{align*} \sum_{n=1}^\infty \frac{1}{n^2} &= 1+\sum_{n=2}^\infty \frac{1}{n^2} \le 2 \end{align*}
and so the series must converge.
The above arguments are formalised in the following theorem.
Let \(N_0\) be any natural number. If \(f(x)\) is a function which is defined and continuous for all \(x\ge N_0\text{,}\) which obeys \(f(x)\ge 0\text{,}\) which decreases as \(x\) increases, and which has \(f(n)=a_n\) for all \(n\ge N_0\text{,}\) then
\[ \sum_{n=1}^\infty a_n \text{ converges} \iff \int_{N_0}^\infty f(x)\ dx \text{ converges} \nonumber \]
Furthermore, when the series converges, the truncation error
\[ \bigg|\sum_{n=1}^\infty a_n-\sum_{n=1}^N a_n\bigg|\le \int_N^\infty f(x)\ dx\qquad\text{for all } N\ge N_0 \nonumber \]
Let \(I\) be any fixed integer with \(I \gt N_0\text{.}\)
Look at the figure above. The shaded area in the figure is \(\sum_{n=I}^\infty a_n\) because the \(n^{\rm th}\) shaded rectangle has width \(1\text{,}\) height \(a_n\) and hence area \(a_n\text{.}\)
This shaded area is smaller than the area under the curve \(y=f(x)\) for \(I-1\le x \lt \infty\text{.}\) So
\[ \sum_{n=I}^\infty a_n \le \int_{I-1}^\infty f(x)\ dx \nonumber \]
and, if the integral is finite, the sum \(\sum_{n=I}^\infty a_n\) is finite too. Furthermore, the desired bound on the truncation error is just the special case of this inequality with \(I=N+1\text{:}\)
\begin{align*} \sum_{n=1}^\infty a_n - \sum_{n=1}^N a_n =\sum_{n=N+1}^\infty a_n \le \int_N^\infty f(x)\ dx \end{align*}
For the “divergence case” look at the figure above. The (new) shaded area in the figure is again \(\sum_{n=I}^\infty a_n\) because the \(n^{\rm th}\) shaded rectangle has width \(1\text{,}\) height \(a_n\) and hence area \(a_n\text{.}\)
This time the shaded area is larger than the area under the curve \(y=f(x)\) for \(I\le x \lt \infty\text{.}\) So
\[ \sum_{n=I}^\infty a_n \ge \int_I^\infty f(x)\ dx \nonumber \]
and, if the integral is infinite, the sum \(\sum_{n=I}^\infty a_n\) is infinite too.
Now that we have the integral test, it is straightforward to determine for which values of \(p\) the series \(\sum_{n=1}^\infty\frac{1}{n^p}\) converges.
Let \(p \gt 0\text{.}\) We'll now use the integral test to determine whether or not the series \(\sum_{n=1}^\infty\frac{1}{n^p}\) (which is sometimes called the \(p\)-series) converges.
So we conclude that \(\sum_{n=1}^\infty\frac{1}{n^p}\) converges if and only if \(p \gt 1\text{.}\) This is sometimes called the \(p\)-test.
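The truncation error bound of the integral test is easy to check numerically for \(p=2\text{,}\) using the fact (discussed later in this section, with the Basel problem) that \(\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}\text{.}\) A quick sketch:

```python
import math

# Integral-test truncation error for a_n = 1/n^2 (so f(x) = 1/x^2):
#     0 < sum_{n=1}^infty 1/n^2 - sum_{n=1}^N 1/n^2 <= int_N^infty dx/x^2 = 1/N
# The exact value pi^2/6 is the Basel problem, discussed later in this section.

exact = math.pi**2 / 6

def S(N):
    return sum(1.0 / n**2 for n in range(1, N + 1))

for N in (10, 100, 1000):
    print(N, exact - S(N), 1.0 / N)
```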
We now know that the dividing line between convergence and divergence of \(\sum_{n=1}^\infty\frac{1}{n^p}\) occurs at \(p=1\text{.}\) We can dig a little deeper and ask ourselves how much more quickly than \(\frac{1}{n}\) the \(n^{\rm th}\) term needs to shrink in order for the series to converge. We know that for large \(x\text{,}\) the function \(\log x\) is smaller than \(x^a\) for any positive \(a\) — you can convince yourself of this with a quick application of L'Hôpital's rule. So it is not unreasonable to ask whether the series \(\sum_{n=2}^\infty\frac{1}{n\log n}\)
converges. Notice that we sum from \(n=2\) because when \(n=1\text{,}\) \(n\log n=0\text{.}\) And we don't need to stop there. We can analyse the convergence of this sum with any power of \(\log n\text{.}\)
Let \(p \gt 0\text{.}\) We'll now use the integral test to determine whether or not the series \(\sum\limits_{n=2}^\infty\frac{1}{n(\log n)^p}\) converges.
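The improper integral for this example can be evaluated with the substitution \(u=\log x\text{,}\) \(du=\frac{dx}{x}\text{.}\) A sketch of the computation, taking the integral to start at \(x=2\) to match the series \(\sum_{n=2}^\infty\frac{1}{n(\log n)^p}\text{:}\)

```latex
\begin{align*}
\int_2^\infty \frac{dx}{x(\log x)^p}
  &= \lim_{R\rightarrow\infty}\int_2^R \frac{dx}{x(\log x)^p}
   = \lim_{R\rightarrow\infty}\int_{\log 2}^{\log R} \frac{du}{u^p}
     \qquad\text{with } u=\log x,\ du=\tfrac{dx}{x}\\
  &= \begin{cases}
       \lim\limits_{R\rightarrow\infty}
         \Big[\tfrac{u^{1-p}}{1-p}\Big]_{\log 2}^{\log R}
         = \tfrac{(\log 2)^{1-p}}{p-1} & \text{if } p \gt 1\\[2mm]
       \lim\limits_{R\rightarrow\infty}
         \big[\log u\big]_{\log 2}^{\log R} = +\infty & \text{if } p = 1\\[2mm]
       +\infty & \text{if } p \lt 1
     \end{cases}
\end{align*}
```

So the integral, and hence the series, converges exactly when \(p \gt 1\text{.}\)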
So we conclude that \(\sum\limits_{n=2}^\infty\frac{1}{n(\log n)^p}\) converges if and only if \(p \gt 1\text{.}\)
Our next convergence test is the comparison test. It is much like the comparison test for improper integrals (see Theorem 1.12.17) and is true for much the same reasons. The rough idea is quite simple. A sum of larger terms must be bigger than a sum of smaller terms. So if we know the big sum converges, then the small sum must converge too. On the other hand, if we know the small sum diverges, then the big sum must also diverge. Formalising this idea gives the following theorem.
Let \(N_0\) be a natural number and let \(K \gt 0\text{.}\)

(a) If \(|a_n|\le K c_n\) for all \(n\ge N_0\) and \(\sum\limits_{n=1}^\infty c_n\) converges, then \(\sum\limits_{n=1}^\infty a_n\) converges.

(b) If \(a_n\ge K d_n\ge 0\) for all \(n\ge N_0\) and \(\sum\limits_{n=1}^\infty d_n\) diverges, then \(\sum\limits_{n=1}^\infty a_n\) diverges.
We will not prove this theorem here. We'll just observe that it is very reasonable. That's why there are quotation marks around “Proof”. For an actual proof see the optional section 3.3.10.
The comparison test for series is also used in much the same way as is the comparison test for improper integrals. Of course, one needs a good series to compare against, and often the series \(\sum n^{-p}\) (from Example 3.3.6), for some \(p \gt 0\text{,}\) turns out to be just what is needed.
We could determine whether or not the series \(\sum_{n=1}^\infty\frac{1}{n^2+2n+3}\) converges by applying the integral test. But it is not worth the effort. Whether or not any series converges is determined by the behaviour of the summand for very large \(n\text{.}\) So the first step in tackling such a problem is to develop some intuition about the behaviour of \(a_n\) when \(n\) is very large.
Of course the previous example was “rigged” to give an easy application of the comparison test. It is often relatively easy, using arguments like those in Example 3.3.9, to find a “simple” series \(\sum_{n=1}^\infty b_n\) with \(b_n\) almost the same as \(a_n\) when \(n\) is large. However it is pretty rare that \(a_n\le b_n\) for all \(n\text{.}\) It is much more common that \(a_n\le K b_n\) for some constant \(K\text{.}\) This is enough to allow application of the comparison test. Here is an example.
As in the previous example, the first step is to develop some intuition about the behaviour of \(a_n\) when \(n\) is very large.
So when \(n\) is very large
\[ a_n = \frac{n+\cos n}{n^3-\frac{1}{3}} \approx \frac{n}{n^3}=\frac{1}{n^2} \nonumber \]
We already know from Example 3.3.6, with \(p=2\text{,}\) that \(\sum_{n=1}^\infty\frac{1}{n^2}\) converges, so we would expect that \(\sum_{n=1}^\infty\frac{n+\cos n}{n^3-\frac13}\) converges too. To apply the comparison test we need to find a constant \(K\) with \(a_n\le \frac{K}{n^2}\text{.}\) Factoring the dominant power of \(n\) out of the numerator and out of the denominator,
\[ a_n = \frac{1}{n^2}\ \frac{1+(\cos n)\frac{1}{n}}{1-\frac{1}{3n^3}} \nonumber \]
So the numerator \(1+(\cos n)\frac{1}{n}\) is always smaller than \(1+(1)\frac{1}{1}=2\text{,}\) and the denominator \(1-\frac{1}{3n^3}\) is always at least \(1-\frac{1}{3}=\frac{2}{3}\text{.}\) We now know that
\[ a_n \le \frac{1}{n^2}\cdot\frac{2}{2/3} = \frac{3}{n^2} \nonumber \]
and, since we know \(\sum_{n=1}^\infty n^{-2}\) converges, the comparison test tells us that \(\sum_{n=1}^\infty\frac{n+\cos n}{n^3-\frac13}\) converges.
The last example was actually a relatively simple application of the comparison theorem — finding a suitable constant \(K\) can be really tedious. Fortunately, there is a variant of the comparison test that completely eliminates the need to explicitly find \(K\text{.}\)
The idea behind this isn't too complicated. We have already seen that the convergence or divergence of a series depends not on its first few terms, but just on what happens when \(n\) is really large. Consequently, if we can work out how the series terms behave for really big \(n\) then we can work out if the series converges. So instead of comparing the terms of our series for all \(n\text{,}\) just compare them when \(n\) is big.
Let \(\sum_{n=1}^\infty a_n\) and \(\sum_{n=1}^\infty b_n\) be two series with \(b_n \gt 0\) for all \(n\text{.}\) Assume that
\[ \lim_{n\rightarrow\infty}\frac{a_n}{b_n} = L \nonumber \]
exists.

(a) If \(\sum_{n=1}^\infty b_n\) converges, then \(\sum_{n=1}^\infty a_n\) converges too.

(b) If \(L\ne 0\) and \(\sum_{n=1}^\infty b_n\) diverges, then \(\sum_{n=1}^\infty a_n\) diverges too.

In particular, if \(L\ne 0\text{,}\) then \(\sum_{n=1}^\infty a_n\) converges if and only if \(\sum_{n=1}^\infty b_n\) converges.
(a) Because we are told that \(\lim_{n\rightarrow\infty}\frac{a_n}{b_n}=L\text{,}\) there must be some natural number \(N\) so that \(\big|\frac{a_n}{b_n}-L\big|\le 1\) for all \(n\ge N\text{.}\) So \(|a_n|\le (|L|+1)\,b_n\) for all \(n\ge N\text{,}\) and \(\sum_{n=1}^\infty a_n\) converges by the comparison test, with \(K=|L|+1\text{.}\)
(b) Let's suppose that \(L \gt 0\text{.}\) (If \(L \lt 0\text{,}\) just replace \(a_n\) with \(-a_n\text{.}\)) Because we are told that \(\lim_{n\rightarrow\infty}\frac{a_n}{b_n}=L\text{,}\) there must be some natural number \(N\) so that \(\frac{a_n}{b_n}\ge \frac{L}{2}\) for all \(n\ge N\text{.}\) So \(a_n\ge \frac{L}{2}\,b_n\) for all \(n\ge N\text{,}\) and \(\sum_{n=1}^\infty a_n\) diverges by the comparison test, with \(K=\frac{L}{2}\text{.}\)
The next two examples illustrate how much of an improvement the above theorem is over the straight comparison test (though of course, we needed the comparison test to develop the limit comparison test).
Set \(a_n= \frac{\sqrt{n+1}}{n^2-2n+3}\text{.}\) We first try to develop some intuition about the behaviour of \(a_n\) for large \(n\) and then we confirm that our intuition was correct. For very large \(n\text{,}\) \(a_n\approx \frac{\sqrt{n}}{n^2}=\frac{1}{n^{3/2}}\text{,}\) so it is natural to take \(b_n=\frac{1}{n^{3/2}}\text{.}\)
We can also try to deal with the series of Example 3.3.12, using the comparison test directly. But that requires us to find a constant \(K\) so that \(a_n \le \frac{K}{n^{3/2}}\) for all \(n\text{.}\)
We might do this by examining the numerator and denominator separately:
Putting the numerator and denominator back together we have
and the comparison test then tells us that our series converges. It is pretty clear that the approach of Example 3.3.12 was much more straightforward.
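As a quick numerical illustration of why the limit comparison test is convenient, here is a made-up series (illustrative, not from the text): \(a_n=\frac{n+3}{n^3+5}\) tracks \(b_n=\frac{1}{n^2}\text{,}\) and the ratio settles to a nonzero limit without any hunt for a constant \(K\text{.}\)

```python
# Limit comparison in action on an illustrative series (not one from the text):
# a_n = (n + 3)/(n^3 + 5) behaves like b_n = 1/n^2 for large n.

def a(n): return (n + 3) / (n**3 + 5)
def b(n): return 1 / n**2

# The ratio a_n / b_n tends to L = 1, a nonzero limit, so sum a_n and
# sum b_n = sum 1/n^2 converge or diverge together; the latter converges.
for n in (10, 1000, 10**6):
    print(n, a(n) / b(n))
```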
When the signs of successive terms in a series alternate between \(+\) and \(-\text{,}\) like for example in \(\ 1-\frac{1}{2} +\frac{1}{3}-\frac{1}{4}+ \cdots\ \text{,}\) the series is called an alternating series. More generally, the series
\[ A_1-A_2+A_3-A_4+\cdots =\sum_{n=1}^\infty (-1)^{n-1} A_n \nonumber \]
is alternating if every \(A_n\ge 0\text{.}\) Often (but not always) the terms in alternating series get successively smaller. That is, \(A_1\ge A_2 \ge A_3 \ge \cdots\text{.}\) In this case, the first partial sum is \(S_1=A_1\text{.}\) The second, \(S_2=A_1-A_2\text{,}\) is a step down from \(S_1\text{.}\) The third, \(S_3=S_2+A_3\text{,}\) is a step back up, but because \(A_3\le A_2\) it stops short of \(S_1\text{.}\) And so on.
So the successive partial sums oscillate, but with ever decreasing amplitude. If, in addition, \(A_n\) tends to \(0\) as \(n\) tends to \(\infty\text{,}\) the amplitude of oscillation tends to zero and the sequence \(S_1\text{,}\) \(S_2\text{,}\) \(S_3\text{,}\) \(\cdots\) converges to some limit \(S\text{.}\)
This is illustrated in the figure
Here is a convergence test for alternating series that exploits this structure, and that is really easy to apply.
Let \(\big\{A_n\big\}_{n=1}^\infty\) be a sequence of real numbers that obeys

(a) \(A_n\ge 0\) for all \(n\ge 1\) and

(b) \(A_{n+1}\le A_n\) for all \(n\ge 1\) (i.e. the sequence is monotone decreasing) and

(c) \(\lim\limits_{n\rightarrow\infty} A_n=0\text{.}\)

Then
\[ A_1-A_2+A_3-A_4+\cdots=\sum\limits_{n=1}^\infty (-1)^{n-1} A_n =S \nonumber \]
converges and, for each natural number \(N\text{,}\) \(S-S_N\) is between \(0\) and (the first dropped term) \((-1)^N A_{N+1}\text{.}\) Here \(S_N\) is, as previously, the \(N^{\rm th}\) partial sum \(\sum\limits_{n=1}^N (-1)^{n-1} A_n\text{.}\)
We shall only give part of the proof here. For the rest of the proof see the optional section 3.3.10. We shall fix any natural number \(N\) and concentrate on the last statement, which gives a bound on the truncation error (which is the error introduced when you approximate the full series by the partial sum \(S_N\))
\[ S-S_N = \sum_{n=N+1}^\infty (-1)^{n-1} A_n \nonumber \]
This is of course another series. We're going to study the partial sums
\[ S_\ell - S_N = \sum_{n=N+1}^\ell (-1)^{n-1} A_n \nonumber \]
for that series.
So we now know that \(S_\ell - S_N\) lies between its first term, \((-1)^N A_{N+1}\text{,}\) and \(0\) for all \(\ell \gt N+1\text{.}\) While we are not going to prove it here (see the optional section 3.3.10), this implies that, since \(A_{N+1}\rightarrow 0\) as \(N\rightarrow\infty\text{,}\) the series converges and that
\[ S-S_N = \lim_{\ell\rightarrow\infty}\big(S_\ell - S_N\big) \nonumber \]
lies between \((-1)^N A_{N+1}\) and \(0\text{.}\)
We have already seen, in Example 3.3.6, that the harmonic series \(\sum_{n=1}^\infty\frac{1}{n}\) diverges. On the other hand, the series \(\sum_{n=1}^\infty(-1)^{n-1}\frac{1}{n}\) converges by the alternating series test with \(A_n=\frac{1}{n}\text{.}\) Note that \(A_n=\frac{1}{n}\ge 0\text{,}\) that \(A_{n+1}=\frac{1}{n+1}\le\frac{1}{n}=A_n\text{,}\) and that \(A_n\rightarrow 0\) as \(n\rightarrow\infty\text{,}\)
so that all of the hypotheses of the alternating series test, i.e. of Theorem 3.3.14, are satisfied. We shall see, in Example 3.5.20, that \(\sum_{n=1}^\infty(-1)^{n-1}\frac{1}{n}=\log 2\text{.}\)
You may already know that \(e^x=\sum_{n=0}^\infty\frac{x^n}{n!} \text{.}\) In any event, we shall prove this in Example 3.6.5, below. In particular
\[ \frac{1}{e} = e^{-1} = \sum_{n=0}^\infty\frac{(-1)^n}{n!} = 1-\frac{1}{1!}+\frac{1}{2!}-\frac{1}{3!}+\cdots \nonumber \]
is an alternating series and satisfies all of the conditions of the alternating series test, Theorem 3.3.14: the terms \(A_n=\frac{1}{n!}\) are nonnegative and decreasing, and \(A_n\rightarrow 0\) as \(n\rightarrow\infty\text{.}\)
So the alternating series test guarantees that, if we approximate, for example,
\[ \frac{1}{e} \approx 1-\frac{1}{1!}+\frac{1}{2!}-\frac{1}{3!}+\cdots+\frac{1}{8!}-\frac{1}{9!} \nonumber \]
then the error in this approximation lies between \(0\) and the next term in the series, which is \(\frac{1}{10!}\text{.}\) That is
which, to seven decimal places says
\begin{gather*} 2.7182816 \le e\le 2.7182837 \end{gather*}
(To seven decimal places, \(e=2.7182818\text{.}\))
The alternating series test tells us that, for any natural number \(N\text{,}\) the error that we make when we approximate \(\frac{1}{e}\) by the partial sum \(S_N= \sum_{n=0}^N\frac{(-1)^n}{n!}\) has magnitude no larger than \(\frac{1}{(N+1)!}\text{.}\) This tends to zero spectacularly quickly as \(N\) increases, simply because \((N+1)!\) increases spectacularly quickly as \(N\) increases. For example \(20!\approx 2.4\times 10^{18}\text{.}\)
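This error bound is easy to verify numerically. A sketch, with \(S_N\) the partial sum of \(\sum_{n=0}^\infty\frac{(-1)^n}{n!}\) and `math.exp(-1)` as the reference value:

```python
import math

# Alternating series error bound for 1/e = sum_{n=0}^infty (-1)^n / n!:
# truncating after the n = N term leaves an error of at most 1/(N+1)!.

def S(N):
    return sum((-1)**n / math.factorial(n) for n in range(N + 1))

for N in (5, 10, 15):
    print(N, abs(math.exp(-1) - S(N)), 1 / math.factorial(N + 1))
```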
We will shortly see, in Example 3.5.20, that if \(-1 \lt x\le 1\text{,}\) then
\[ \log(1+x) = x-\frac{x^2}{2}+\frac{x^3}{3}-\cdots = \sum_{n=1}^\infty (-1)^{n-1}\frac{x^n}{n} \nonumber \]
Suppose that we have to compute \(\log\frac{11}{10}\) to within an accuracy of \(10^{-12}\text{.}\) Since \(\frac{11}{10}=1+\frac{1}{10}\text{,}\) we can get \(\log\frac{11}{10}\) by evaluating \(\log(1+x)\) at \(x=\frac{1}{10}\text{,}\) so that
\[ \log\frac{11}{10} = \sum_{n=1}^\infty (-1)^{n-1}\frac{1}{n\,10^n} \nonumber \]
By the alternating series test, this series converges. Also by the alternating series test, approximating \(\log\frac{11}{10}\) by throwing away all but the first \(N\) terms
\[ \log\frac{11}{10} \approx \sum_{n=1}^N (-1)^{n-1}\frac{1}{n\,10^n} \nonumber \]
introduces an error whose magnitude is no more than the magnitude of the first term that we threw away, \(\frac{1}{(N+1)\,10^{N+1}}\text{.}\)
To achieve an error that is no more than \(10^{-12}\text{,}\) we have to choose \(N\) so that
\[ \frac{1}{(N+1)\,10^{N+1}} \le 10^{-12} \nonumber \]
The best way to do so is simply to guess — we are not going to be able to manipulate the inequality \(\frac{1}{(N+1)\,10^{N+1}} \le \frac{1}{10^{12}}\) into the form \(N\le \cdots\text{,}\) and even if we could, it would not be worth the effort. We need to choose \(N\) so that the denominator \((N+1)\times 10^{N+1}\) is at least \(10^{12}\text{.}\) That is easy, because the denominator contains the factor \(10^{N+1}\) which is at least \(10^{12}\) whenever \(N+1\ge 12\text{,}\) i.e. whenever \(N\ge 11\text{.}\) So we will achieve an error of less than \(10^{-12}\) if we choose \(N=11\text{.}\)
This is not the smallest possible choice of \(N\text{,}\) but in practice that just doesn't matter — your computer is not going to care whether or not you ask it to compute a few extra terms. If you really need the smallest \(N\) that obeys \(\frac{1}{(N+1)\,10^{N+1}} \le \frac{1}{10^{12}}\text{,}\) you can next just try \(N=10\text{,}\) then \(N=9\text{,}\) and so on.
So in this problem, the smallest acceptable \(N=10\text{.}\)
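A quick numerical confirmation, using the series \(\log\frac{11}{10}=\sum_{n=1}^\infty(-1)^{n-1}\frac{1}{n\,10^n}\) with `math.log` as the reference value:

```python
import math

# log(11/10) = sum_{n=1}^infty (-1)^(n-1) / (n * 10^n).  With N = 10 terms,
# the alternating series test bounds the error by the first dropped term,
# 1/(11 * 10^11), which is below 10^(-12).

N = 10
approx = sum((-1)**(n - 1) / (n * 10**n) for n in range(1, N + 1))
print(approx, math.log(11 / 10))
```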
The idea behind the ratio test comes from a reexamination of the geometric series. Recall that the geometric series
\begin{gather*} \sum_{n=0}^\infty a_n = \sum_{n=0}^\infty a r^n \end{gather*}
converges when \(|r| \lt 1\) and diverges otherwise. So the convergence of this series is completely determined by the number \(r\text{.}\) This number is just the ratio of successive terms — that is \(r = a_{n+1}/a_n\text{.}\)
In general the ratio of successive terms of a series, \(\frac{a_{n+1}}{a_n}\text{,}\) is not constant, but depends on \(n\text{.}\) However, as we have noted above, the convergence of a series \(\sum a_n\) is determined by the behaviour of its terms when \(n\) is large. In this way, the behaviour of this ratio when \(n\) is small tells us nothing about the convergence of the series, but the limit of the ratio as \(n\to\infty\) does. This is the basis of the ratio test.
Let \(N\) be any positive integer and assume that \(a_n\ne 0\) for all \(n\ge N\text{.}\)

(a) If \(\lim\limits_{n\rightarrow\infty}\Big|\frac{a_{n+1}}{a_n}\Big|=L \lt 1\text{,}\) then \(\sum\limits_{n=1}^\infty a_n\) converges.

(b) If \(\lim\limits_{n\rightarrow\infty}\Big|\frac{a_{n+1}}{a_n}\Big|=L \gt 1\text{,}\) or if \(\lim\limits_{n\rightarrow\infty}\Big|\frac{a_{n+1}}{a_n}\Big|=+\infty\text{,}\) then \(\sum\limits_{n=1}^\infty a_n\) diverges.
Beware that the ratio test provides absolutely no conclusion about the convergence or divergence of the series \(\sum\limits_{n=1}^\infty a_n\) if \(\lim\limits_{n\rightarrow\infty}\Big|\frac{a_{n+1}}{a_n}\Big|=1\) or if the limit does not exist.
(a) Pick any number \(R\) obeying \(L \lt R \lt 1\text{.}\) We are assuming that \(\Big|\frac{a_{n+1}}{a_n}\Big|\) approaches \(L\) as \(n\rightarrow\infty\text{.}\) In particular there must be some natural number \(M\) so that \(\Big|\frac{a_{n+1}}{a_n}\Big|\le R\) for all \(n\ge M\text{.}\) So \(|a_{n+1}|\le R|a_n|\) for all \(n\ge M\text{.}\) In particular
\begin{align*} |a_{M+1}| & \ \le\ R\,|a_M|\\ |a_{M+2}| & \ \le\ R\,|a_{M+1}| & \le\ R^2 \,|a_M|\\ |a_{M+3}| & \ \le\ R\,|a_{M+2}| & \le\ R^3 \,|a_M|\\ &\ \ \vdots\\ |a_{M+\ell}| & \ \le\ R^\ell \,|a_M| \end{align*}
for all \(\ell\ge 0\text{.}\) The series \(\sum_{\ell=0}^\infty R^\ell \,|a_M|\) is a geometric series with ratio \(R\) smaller than one in magnitude and so converges. Consequently, by the comparison test with \(a_n\) replaced by \(A_\ell = a_{M+\ell}\) and \(c_n\) replaced by \(C_\ell= R^\ell \, |a_M|\text{,}\) the series \(\sum\limits_{\ell=0}^\infty a_{M+\ell} =\sum\limits_{n=M}^\infty a_n\) converges. So the series \(\sum\limits_{n=1}^\infty a_n\) converges too.
(b) We are assuming that \(\Big|\frac{a_{n+1}}{a_n}\Big|\) approaches \(L \gt 1\) as \(n\rightarrow\infty\text{.}\) In particular there must be some natural number \(M \gt N\) so that \(\Big|\frac{a_{n+1}}{a_n}\Big|\ge 1\) for all \(n\ge M\text{.}\) So \(|a_{n+1}|\ge |a_n|\) for all \(n\ge M\text{.}\) That is, \(|a_n|\) increases as \(n\) increases as long as \(n\ge M\text{.}\) So \(|a_n|\ge |a_M| \gt 0\) for all \(n\ge M\) and \(a_n\) cannot converge to zero as \(n\rightarrow\infty\text{.}\) So the series diverges by the divergence test.
Fix any two nonzero real numbers \(a\) and \(x\text<.>\) We have already seen in Example 3.2.4 and Lemma 3.2.5 — we have just renamed \(r\) to \(x\) — that the geometric series \(\sum_^\infty a x^n\) converges when \(|x| \lt 1\) and diverges when \(|x|\ge 1\text<.>\) We are now going to consider a new series, constructed by differentiating 14 each term in the geometric series \(\sum_^\infty a x^n\text<.>\) This new series is
\[ \sum_^\infty a_n\qquad\text\quad a_n = a\, n\, x^ \nonumber \]
Let's apply the ratio test.
The ratio test now tells us that the series \(\sum_{n=1}^\infty a\, n\, x^{n-1}\) converges if \(|x| \lt 1\) and diverges if \(|x| \gt 1\text{.}\) It says nothing about the cases \(x=\pm 1\text{.}\) But in both of those cases \(a_n=a\,n\,(\pm 1)^{n-1}\) does not converge to zero as \(n\rightarrow\infty\) and the series diverges by the divergence test.
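A quick numerical sanity check of this example, taking \(a=1\) for simplicity (the closed form \(\sum_{n=1}^\infty n x^{n-1} = \frac{1}{(1-x)^2}\) for \(|x| \lt 1\text{,}\) the derivative of the geometric sum, is a standard fact, not derived in the text):

```python
# Ratio test on the differentiated geometric series with a = 1:
# a_n = n x^(n-1), and |a_{n+1}/a_n| = (n+1)|x|/n -> |x|.
# (For |x| < 1 the sum is 1/(1-x)^2, the derivative of 1/(1-x).)

x = 0.5
ratios = [((n + 1) * abs(x)**n) / (n * abs(x)**(n - 1)) for n in (10, 100, 1000)]
print(ratios)                      # approaches |x| = 0.5

S = sum(n * x**(n - 1) for n in range(1, 200))
print(S)                           # close to 1/(1 - 0.5)^2 = 4
```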
Notice that in the above example, we had to apply another convergence test in addition to the ratio test. This will be commonplace when we reach power series and Taylor series — the ratio test will tell us something like
The series converges for \(|x| \lt R\) and diverges for \(|x| \gt R\text<.>\)
Of course, we will still have to determine what happens when \(x=+R, -R\text{.}\) To determine convergence or divergence in those cases we will need to use one of the other tests we have seen.
Once again, fix any two nonzero real numbers \(a\) and \(X\text{.}\) We again start with the geometric series \(\sum_{n=0}^\infty a x^n\) but this time we construct a new series by integrating each term, \(a x^n\text{,}\) from \(x=0\) to \(x=X\) giving \(\frac{a}{n+1} X^{n+1}\text{.}\) The resulting new series is
\[ \sum_{n=0}^\infty a_n \qquad\text{with}\quad a_n = \frac{a}{n+1}\, X^{n+1} \nonumber \]
To apply the ratio test we need to compute
\begin{align*} \bigg|\frac{a_{n+1}}{a_n}\bigg| = \bigg|\frac{\frac{a}{n+2}\, X^{n+2}}{\frac{a}{n+1}\, X^{n+1}}\bigg| = \frac{n+1}{n+2}\,|X| \rightarrow |X| \quad\text{as } n\rightarrow\infty \end{align*}
So the ratio test tells us that the series converges when \(|X| \lt 1\) and diverges when \(|X| \gt 1\text{.}\)
If \(X=1\text{,}\) the series reduces to
\[ \sum_{n=0}^\infty \frac{a}{n+1} = a\Big(1+\frac{1}{2}+\frac{1}{3}+\cdots\Big) \nonumber \]
which is just \(a\) times the harmonic series, which we know diverges, by Example 3.3.6.
If \(X=-1\text{,}\) the series reduces to
\[ \sum_{n=0}^\infty \frac{(-1)^{n+1}a}{n+1} \nonumber \]
which converges by the alternating series test. See Example 3.3.15.
The ratio test is often quite easy to apply, but one must always be careful when the limit of the ratio is \(1\text<.>\) The next example illustrates this.
In this example, we are going to see three different series that all have \(\lim\limits_{n\rightarrow\infty}\Big|\frac{a_{n+1}}{a_n}\Big|=1\text{.}\) Nonetheless, one of them diverges while the other two converge — so when the limit of the ratios equals \(1\text{,}\) the ratio test alone cannot decide.
Let's do a somewhat artificial example that forces us to combine a few of the techniques we have seen.
Again, the convergence of this series will depend on \(x\text<.>\)
So in summary the series converges when \(-\frac{1}{3} \lt x \leq \frac{1}{3}\) and diverges otherwise.
We now have half a dozen convergence tests:
Imagine that you are about to stack a bunch of identical books on a table. But you don't want to just stack them exactly vertically. You want to build a “leaning tower of books” that overhangs the edge of the table as much as possible.
How big an overhang can you get? The answer to that question, which we'll now derive, uses a series!
Thus book #1 does not topple off of the table provided
In conclusion, our two-book tower survives if
\begin{gather*} x_2\le x_1+\frac{L}{2}\qquad\text{and}\qquad x_1+x_2\le L \end{gather*}
In particular we may choose \(x_1\) and \(x_2\) to satisfy \(x_2 = x_1+\frac{L}{2}\) and \(x_1+x_2 = L\text{.}\) Then, substituting \(x_2 = x_1+\frac{L}{2}\) into \(x_1+x_2 = L\) gives
\[ x_1 + \Big(x_1+\frac{L}{2}\Big) = L \iff 2x_1 = \frac{L}{2} \iff x_1 = \frac{L}{2}\Big(\frac{1}{2}\Big),\quad x_2 = \frac{L}{2}\Big(1+\frac{1}{2}\Big) \nonumber \]
In conclusion, our three-book tower survives if
\begin{gather*} x_3\le x_2+\frac{L}{2}\qquad\text{and}\qquad x_2+x_3\le 2x_1 + L \qquad\text{and}\qquad x_1+ x_2+x_3\le \frac{3L}{2} \end{gather*}
In particular, we may choose \(x_1\text{,}\) \(x_2\) and \(x_3\) to satisfy
\begin{align*} x_1+ x_2+x_3&= \frac{3L}{2}\qquad\text{and}\\ x_2+x_3&= 2x_1 + L \qquad\text{and}\\ x_3 &= \frac{L}{2} + x_2 \end{align*}
Substituting the second equation into the first gives
\begin{gather*} 3x_1 +L = \frac{3L}{2} \implies x_1 = \frac{L}{2}\Big(\frac{1}{3}\Big) \end{gather*}
Next substituting the third equation into the second, and then using the formula above for \(x_1\text\) gives
\begin{gather*} 2x_2 +\frac{L}{2} = 2x_1+L = \frac{L}{3} + L \implies x_2 = \frac{L}{2}\Big(\frac{1}{3}+\frac{1}{2}\Big) \end{gather*}
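Continuing the pattern suggested by the two- and three-book computations, the total overhang of an \(n\)-book tower works out to \(\frac{L}{2}\big(1+\frac{1}{2}+\cdots+\frac{1}{n}\big)\) (this extrapolation is the classical result; it is not re-derived here). Since the harmonic series diverges, any overhang is achievable — though the growth is painfully slow:

```python
# Total overhang of an n-book leaning tower (book length L):
#     overhang(n) = (L/2) * (1 + 1/2 + ... + 1/n)
# The harmonic series diverges, so any overhang is achievable -- but the
# growth is only logarithmic in the number of books.

def overhang(n, L=1.0):
    return (L / 2) * sum(1.0 / k for k in range(1, n + 1))

# With 4 books the tower already overhangs by slightly more than one
# full book length:
print(overhang(4))        # (1/2)(1 + 1/2 + 1/3 + 1/4) = 25/24
```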
There is another test that is very similar in spirit to the ratio test. It also comes from a reexamination of the geometric series
\begin{gather*} \sum_{n=0}^\infty a_n = \sum_{n=0}^\infty a r^n \end{gather*}
The ratio test was based on the observation that \(r\text{,}\) which largely determines whether or not the series converges, could be found by computing the ratio \(r = a_{n+1}/a_n\text{.}\) The root test is based on the observation that \(|r|\) can also be determined by looking at the \(n^{\rm th}\) root of the \(n^{\rm th}\) term with \(n\) very large:
\[ \sqrt[n]{\big|a\,r^n\big|} = |r|\,\sqrt[n]{|a|} \rightarrow |r| \qquad\text{as } n\rightarrow\infty \nonumber \]
Of course, in general, the \(n^{\rm th}\) term is not exactly \(ar^n\text{.}\) However, if for very large \(n\text{,}\) the \(n^{\rm th}\) term is approximately proportional to \(r^n\text{,}\) with \(|r|\) given by the above limit, we would expect the series to converge when \(|r| \lt 1\) and diverge when \(|r| \gt 1\text{.}\) That is indeed the case.
Assume that
\[ L = \lim_{n\rightarrow\infty} \sqrt[n]{|a_n|} \nonumber \]
exists or is \(+\infty\text{.}\)

(a) If \(L \lt 1\text{,}\) then \(\sum\limits_{n=1}^\infty a_n\) converges.

(b) If \(L \gt 1\text{,}\) then \(\sum\limits_{n=1}^\infty a_n\) diverges.
Beware that the root test provides absolutely no conclusion about the convergence or divergence of the series \(\sum\limits_{n=1}^\infty a_n\) if \(\lim\limits_{n\rightarrow\infty}\sqrt[n]{|a_n|}=1\text{.}\)
(a) Pick any number \(R\) obeying \(L \lt R \lt 1\text{.}\) We are assuming that \(\sqrt[n]{|a_n|}\) approaches \(L\) as \(n\rightarrow\infty\text{.}\) In particular there must be some natural number \(M\) so that \(\sqrt[n]{|a_n|}\le R\) for all \(n\ge M\text{.}\) So \(|a_n|\le R^n\) for all \(n\ge M\) and the series \(\sum\limits_{n=1}^\infty a_n\) converges by comparison to the geometric series \(\sum\limits_{n=1}^\infty R^n\text{.}\)
(b) We are assuming that \(\sqrt[n]{|a_n|}\) approaches \(L \gt 1\) (or grows unboundedly) as \(n\rightarrow\infty\text{.}\) In particular there must be some natural number \(M\) so that \(\sqrt[n]{|a_n|}\ge 1\) for all \(n\ge M\text{.}\) So \(|a_n|\ge 1\) for all \(n\ge M\) and the series diverges by the divergence test.
We have already used the ratio test, in Example 3.3.23, to show that this series converges when \(|x| \lt \frac{1}{3}\) and diverges when \(|x| \gt \frac{1}{3}\text{.}\) We'll now use the root test to draw the same conclusions.
and the root test also tells us that if \(3|x| \gt 1\) the series diverges, while when \(3|x| \lt 1\) the series converges.
We have done the last example once, in Example 3.3.23, using the ratio test and once, in Example 3.3.26, using the root test. It was clearly much easier to use the ratio test. Here is an example that is most easily handled by the root test.
Now we take the limit,
\begin{align*} \lim_{n\rightarrow\infty} \sqrt[n]{|a_n|} &= \lim_{n\rightarrow\infty} \Big(\frac{n}{n+1}\Big)^{n} = \lim_{n\rightarrow\infty} \Big(1+\frac{1}{n}\Big)^{-n} = e^{-1} \end{align*}
by Example 3.7.20 in the CLP-1 text with \(a=-1\text{.}\) As the limit is strictly smaller than \(1\text{,}\) the series \(\sum_{n=1}^\infty \big(\frac{n}{n+1}\big)^{n^2}\) converges.
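A quick numerical check of this root-test limit, taking \(a_n=\big(\frac{n}{n+1}\big)^{n^2}\) as in the computation above:

```python
import math

# Root test for a_n = (n/(n+1))^(n^2):
#     |a_n|^(1/n) = (n/(n+1))^n = (1 + 1/n)^(-n)  -->  1/e < 1,
# so the series converges.

def nth_root(n):
    return (n / (n + 1))**n

for n in (10, 100, 10000):
    print(n, nth_root(n), math.exp(-1))
```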
To draw the same conclusion using the ratio test, one would have to show that the limit of
\[ \bigg|\frac{a_{n+1}}{a_n}\bigg| = \frac{\big(\frac{n+1}{n+2}\big)^{(n+1)^2}}{\big(\frac{n}{n+1}\big)^{n^2}} \nonumber \]
as \(n\rightarrow\infty\) is strictly smaller than 1. It's clearly better to stick with the root test.
The series \(\sum_{n=1}^\infty\frac{1}{n}\text{,}\) which appeared in Warning 3.3.3, is called the Harmonic series, and its partial sums
\[ H_N = \sum_{n=1}^N \frac{1}{n} \nonumber \]
are called the Harmonic numbers. Though these numbers have been studied at least as far back as Pythagoras, the divergence of the series was first proved in around 1350 by Nicholas Oresme (c. 1320–1382), though the proof was lost for many years and rediscovered by Mengoli (1626–1686) and the Bernoulli brothers (Johann 1667–1748 and Jacob 1655–1705).
Oresme's proof is beautiful and all the more remarkable in that it was produced more than 300 years before calculus was developed by Newton and Leibniz. It starts by grouping the terms of the harmonic series carefully:
\begin{align*} \sum_{n=1}^\infty \frac{1}{n} &= 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \cdots\\ &= 1 + \frac{1}{2} + \left( \frac{1}{3} + \frac{1}{4} \right) + \left( \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} \right) + \left( \frac{1}{9} + \frac{1}{10} + \cdots + \frac{1}{15} + \frac{1}{16} \right) + \cdots\\ & \gt 1 + \frac{1}{2} + \left( \frac{1}{4} + \frac{1}{4} \right) + \left( \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} \right) + \left( \frac{1}{16} + \frac{1}{16} + \cdots + \frac{1}{16} + \frac{1}{16} \right) + \cdots\\ &= 1 + \frac{1}{2} + \left( \frac{1}{2} \right) + \left( \frac{1}{2} \right) + \left( \frac{1}{2} \right) + \cdots \end{align*}
So one can see that this is \(1 + \frac{1}{2} +\frac{1}{2}+\frac{1}{2} +\frac{1}{2} +\cdots\) and so must diverge.
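Oresme's grouping can also be checked numerically: the partial sum up to \(n=2^k\) is at least \(1+\frac{k}{2}\text{.}\) A quick sketch:

```python
# Oresme's grouping: the blocks (1/3 + 1/4), (1/5 + ... + 1/8), ... each add
# up to at least 1/2, so the partial sum up to n = 2^k is at least 1 + k/2.

def harmonic(N):
    return sum(1.0 / n for n in range(1, N + 1))

for k in (1, 5, 10, 14):
    print(k, harmonic(2**k), 1 + k / 2)
```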
There are many variations on Oresme's proof — for example, using groups of two or three. A rather different proof relies on the inequality
\begin{gather*} e^x \gt 1 + x \qquad \text{for } x \gt 0 \end{gather*}
which follows immediately from the Taylor series for \(e^x\) given in Theorem 3.6.7. From this we can bound the exponential of the Harmonic numbers:
\begin{align*} e^{H_n} &= e^{1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}}\\ &= e^1 \cdot e^{1/2} \cdot e^{1/3} \cdot e^{1/4} \cdots e^{1/n}\\ & \gt (1+1)\cdot(1+1/2)\cdot(1+1/3)\cdot(1+1/4)\cdots(1+1/n)\\ &= \frac{2}{1} \cdot \frac{3}{2} \cdot \frac{4}{3} \cdot \frac{5}{4} \cdots \frac{n+1}{n}\\ &= n+1 \end{align*}
Since \(e^{H_n}\) grows unboundedly with \(n\text{,}\) the harmonic series diverges.
The problem of determining the exact value of the sum of the series
\[ \sum_{n=1}^\infty \frac{1}{n^2} \nonumber \]
is called the Basel problem. The problem is named after the home town of Leonhard Euler, who solved it. One can use telescoping series to show that this series must converge. Notice that
\[ \frac{1}{n^2} \le \frac{1}{n(n-1)} = \frac{1}{n-1}-\frac{1}{n} \qquad\text{for } n\ge 2 \nonumber \]
Hence we can bound the partial sum:
\begin{align*} S_k = \sum_{n=1}^k \frac{1}{n^2} &\le 1 + \sum_{n=2}^k \Big(\frac{1}{n-1}-\frac{1}{n}\Big) = 1 + 1 - \frac{1}{k} \lt 2 \end{align*}
Thus, as \(k\) increases, the partial sum \(S_k\) increases (the series is a sum of positive terms), but is always smaller than \(2\text{.}\) So the sequence of partial sums converges.
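A quick numerical check of this telescoping bound:

```python
# Telescoping bound for the Basel series: for n >= 2,
#     1/n^2 <= 1/(n(n-1)) = 1/(n-1) - 1/n,
# so the k-th partial sum obeys S_k <= 1 + (1 - 1/k) = 2 - 1/k < 2.

def S(k):
    return sum(1.0 / n**2 for n in range(1, k + 1))

for k in (10, 100, 1000):
    print(k, S(k), 2 - 1.0 / k)
```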
Mengoli posed the problem of evaluating the series exactly in 1644 and it was solved — not entirely rigorously — by Euler in 1734. A rigorous proof had to wait another 7 years. Euler used some extremely cunning observations and manipulations of the sine function to show that
\[ \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6} \nonumber \]
He used the Maclaurin series
\begin{align*} \sin x &= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots \end{align*}
and a product formula for sine
\begin{align} \sin x &= x \cdot \left(1 -\frac{x}{\pi}\right) \cdot \left(1 + \frac{x}{\pi}\right) \cdot \left(1 -\frac{x}{2\pi}\right) \cdot \left(1 + \frac{x}{2\pi}\right) \cdot \left(1 -\frac{x}{3\pi}\right) \cdot \left(1 + \frac{x}{3\pi}\right) \cdots\nonumber\\ &= x \cdot \left(1 -\frac{x^2}{\pi^2}\right) \cdot \left(1 - \frac{x^2}{4\pi^2}\right) \cdot \left(1 -\frac{x^2}{9\pi^2}\right) \cdots \tag{$\star$} \end{align}
Extracting the coefficient of \(x^3\) from both expansions gives the desired result. The proof of the product formula is well beyond the scope of this course. But notice that at least the values of \(x\) which make the left hand side of (\(\star\)) zero, namely \(x=n\pi\) with \(n\) integer, are exactly the same as the values of \(x\) which make the right hand side of (\(\star\)) zero.
This approach can also be used to compute \(\sum_{n=1}^\infty n^{-2p}\) for \(p=1,2,3,\cdots\) and show that they are rational multiples of \(\pi^{2p}\text{.}\) The corresponding series of odd powers are significantly nastier and getting closed form expressions for them remains a famous open problem.
In this optional section we provide proofs of two convergence tests. We shall repeatedly use the fact that any sequence \(a_1\text{,}\) \(a_2\text{,}\) \(a_3\text{,}\) \(\cdots\text{,}\) of real numbers which is increasing (i.e. \(a_{n+1}\ge a_n\) for all \(n\)) and bounded (i.e. there is a constant \(M\) such that \(a_n\le M\) for all \(n\)) converges. We shall not prove this fact.
We start with the comparison test, and then move on to the alternating series test.
Let \(N_0\) be a natural number and let \(K \gt 0\text{.}\)

(a) If \(|a_n|\le K c_n\) for all \(n\ge N_0\) and \(\sum\limits_{n=1}^\infty c_n\) converges, then \(\sum\limits_{n=1}^\infty a_n\) converges.

(b) If \(a_n\ge K d_n\ge 0\) for all \(n\ge N_0\) and \(\sum\limits_{n=1}^\infty d_n\) diverges, then \(\sum\limits_{n=1}^\infty a_n\) diverges.
(a) By hypothesis \(\sum_{n=1}^\infty c_n\) converges. So it suffices to prove that \(\sum_{n=1}^\infty [Kc_n-a_n]\) converges, because then, by our Arithmetic of series Theorem 3.2.9,
\[ \sum_{n=1}^\infty a_n = \sum_{n=1}^\infty K c_n -\sum_{n=1}^\infty [Kc_n-a_n] \nonumber \]
will converge too. But for all \(n\ge N_0\text{,}\) \(Kc_n-a_n\ge 0\) so that, for all \(N\ge N_0\text{,}\) the partial sums
\[ S_N = \sum_{n=1}^N [Kc_n-a_n] \nonumber \]
increase with \(N\text{,}\) but never get bigger than the finite number \(\sum\limits_{n=1}^{N_0-1} [Kc_n-a_n] + K \sum\limits_{n=N_0}^\infty c_n\text{.}\) So the partial sums \(S_N\) converge as \(N\rightarrow\infty\text{.}\)
(b) For all \(N \gt N_0\text{,}\) the partial sum
\[ S_N = \sum_{n=1}^N a_n \ge \sum_{n=1}^{N_0-1} a_n + K\sum_{n=N_0}^N d_n \nonumber \]
By hypothesis, \(\sum_{n=N_0}^N d_n\text{,}\) and hence \(S_N\text{,}\) grows without bound as \(N\rightarrow\infty\text{.}\) So \(S_N\rightarrow\infty\) as \(N\rightarrow\infty\text{.}\)
Let \(\big\{a_n\big\}_{n=1}^\infty\) be a sequence of real numbers that obeys

(a) \(a_n\ge 0\) for all \(n\ge 1\) and

(b) \(a_{n+1}\le a_n\) for all \(n\ge 1\) (i.e. the sequence is monotone decreasing) and

(c) \(\lim\limits_{n\rightarrow\infty} a_n=0\text{.}\)

Then
\[ a_1-a_2+a_3-a_4+\cdots=\sum\limits_{n=1}^\infty (-1)^{n-1} a_n =S \nonumber \]
converges and, for each natural number \(N\text{,}\) \(S-S_N\) is between \(0\) and (the first dropped term) \((-1)^N a_{N+1}\text{.}\) Here \(S_N\) is, as previously, the \(N^{\rm th}\) partial sum \(\sum\limits_{n=1}^N (-1)^{n-1} a_n\text{.}\)
Let \(2n\) be an even natural number. Then the \(2n^{\rm th}\) partial sum obeys
\begin{align*} S_{2n} &= (a_1-a_2) + (a_3-a_4) + \cdots + (a_{2n-1}-a_{2n})\\ &= a_1 - (a_2-a_3) - (a_4-a_5) - \cdots - (a_{2n-2}-a_{2n-1}) - a_{2n} \le a_1 \end{align*}
Each bracket is at least zero, so the first line shows that \(S_{2n}\) increases with \(n\) and the second line shows that \(S_{2n}\le a_1\text{.}\)
So the sequence \(S_2\text{,}\) \(S_4\text{,}\) \(S_6\text{,}\) \(\cdots\) of even partial sums is a bounded, increasing sequence and hence converges to some real number \(S\text{.}\) Since \(S_{2n+1} = S_{2n} +a_{2n+1}\) and \(a_{2n+1}\) converges to zero as \(n\rightarrow\infty\text{,}\) the odd partial sums \(S_{2n+1}\) also converge to \(S\text{.}\) That \(S-S_N\) is between \(0\) and (the first dropped term) \((-1)^N a_{N+1}\) was already proved in §3.3.4.
Select the series below that diverge by the divergence test.
(A) \(\displaystyle\sum_^\infty \frac\)
(C) \(\displaystyle\sum_{n=1}^\infty \sin n\)
(D) \(\displaystyle\sum_{n=1}^\infty \sin (\pi n)\)
Select the series below whose terms satisfy the conditions to apply the integral test.
(A) \(\displaystyle\sum_^\infty \frac\)
(C) \(\displaystyle\sum_{n=1}^\infty \sin n\)
Suppose there is some threshold after which a person is considered old, and before which they are young.
Let Olaf be an old person, and let Yuan be a young person.
Below are graphs of two sequences with positive terms. Assume the sequences continue as shown. Fill in the table with conclusions that can be made from the direct comparison test, if any.
if \(\sum a_n\) converges | if \(\sum a_n\) diverges | |
and if \(\{a_n\}\) is the red series | then \(\sum b_n\) \(\Rule\) | then \(\sum b_n\) \(\Rule\) |
and if \(\{a_n\}\) is the blue series | then \(\sum b_n\) \(\Rule\) | then \(\sum b_n\) \(\Rule\) |
For each pair of series below, decide whether the second series is a valid comparison series to determine the convergence of the first series, using the direct comparison test and/or the limit comparison test.
Suppose \(a_n\) is a sequence with \(\displaystyle\lim_{n\rightarrow\infty}a_n = \frac\text{.}\) Does \(\displaystyle\sum_{n=1}^\infty a_n\) converge or diverge, or is it not possible to determine this from the information given? Why?
What flaw renders the following reasoning invalid?
What flaw renders the following reasoning invalid?
Q: Determine whether \(\displaystyle\sum_{n=1}^\infty \left(\sin(\pi n)+2\right)\) converges or diverges.
A: We use the integral test. Let \(f(x)=\sin(\pi x)+2\text{.}\) Note \(f(x)\) is always positive, since \(\sin(\pi x)+2 \geq -1+2 =1\text{.}\) Also, \(f(x)\) is continuous.
\begin{align*} \int_1^\infty [\sin(\pi x)+2] \, dx &= \lim_{b\rightarrow\infty}\int_1^b [\sin(\pi x)+2 ] \, dx\\ &=\lim_{b\rightarrow\infty} \left[\left.-\frac{1}{\pi}\cos(\pi x)+2x \right|_1^b\right]\\ &=\lim_{b\rightarrow\infty}\left[ -\frac{1}{\pi}\cos(\pi b)+2b +\frac{1}{\pi}(-1)-2\right]\\ &=\infty \end{align*}
By the integral test, since the integral diverges, also \(\displaystyle\sum_{n=1}^\infty\left( \sin(\pi n)+2\right)\) diverges.
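A numerical sketch (Python; purely illustrative) makes the behaviour of the terms easy to inspect: at every integer \(n\text{,}\) \(\sin(\pi n)=0\text{,}\) so each term of the series equals \(2\text{.}\)

```python
# At integer arguments, sin(pi*n) = 0, so every term of the series
# sum_{n>=1} (sin(pi*n) + 2) equals 2 (up to floating-point roundoff).
# The terms therefore do not tend to 0, and f(x) = sin(pi*x) + 2 is not
# a decreasing function, which is relevant to the integral test's hypotheses.
import math

terms = [math.sin(math.pi * n) + 2 for n in range(1, 1001)]
assert all(abs(t - 2) < 1e-9 for t in terms)
```

(The small tolerance `1e-9` only accounts for the fact that `math.pi` is a finite-precision approximation of \(\pi\text{.}\))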
What flaw renders the following reasoning invalid?
Which of the series below are alternating?
(A) \(\displaystyle\sum_{n=1}^\infty \sin n\)
Give an example of a convergent series for which the ratio test is inconclusive.
Imagine you're taking an exam, and you momentarily forget exactly how the inequality in the ratio test works. You remember there's a ratio, but you don't remember which term goes on top; you remember there's something about the limit being greater than or less than one, but you don't remember which way implies convergence.
should mean that the sum \(\sum\limits_{n=1}^\infty a_n\) diverges (rather than converges).
Give an example of a series \(\displaystyle\sum_{n=1}^\infty a_n\text{,}\) with a function \(f(x)\) such that \(f(n)=a_n\) for all whole numbers \(n\text{,}\) such that:
Suppose that you want to use the Limit Comparison Test on the series \(\displaystyle \sum_{n=1}^{\infty} a_n\) where \(\displaystyle a_n = \frac\text{.}\)
Write down a sequence \(\{b_n\}\) such that \(\displaystyle \lim\limits_{n\rightarrow\infty}\frac{a_n}{b_n}\) exists and is nonzero.
Decide whether each of the following statements is true or false. If false, provide a counterexample. If true provide a brief justification.
Does the series \(\displaystyle \sum_{n=1}^\infty \frac\) converge?
Determine, with explanation, whether the series \(\displaystyle \sum_{n=1}^\infty \frac\) converges or diverges.
Determine whether the series \(\displaystyle\sum_{n=1}^\infty\frac\) converges or diverges.
Does the following series converge or diverge? \(\displaystyle\sum_{n=1}^\infty\frac\sqrt{}\)
Evaluate the following series, or show that it diverges: \(\displaystyle\sum_{k=1}^\infty 3(1.001)^k\text{.}\)
Evaluate the following series, or show that it diverges: \(\displaystyle\sum_{n=1}^\infty \left(\frac\right)^n\text{.}\)
Does the following series converge or diverge? \(\displaystyle\sum_{n=1}^\infty \sin(\pi n)\)
Does the following series converge or diverge? \(\displaystyle\sum_{n=1}^\infty\cos(\pi n)\)
Does the following series converge or diverge? \(\displaystyle\sum_{n=1}^\infty \frac\text{.}\)
Evaluate the following series, or show that it diverges: \(\displaystyle\sum_{n=1}^\infty\frac\text{.}\)
Does the following series converge or diverge? \(\displaystyle\sum_{n=1}^\infty\frac\text{.}\)
Does the following series converge or diverge? \(\displaystyle\sum_{n=1}^\infty\frac\text{.}\)
Show that the series \(\displaystyle\sum_{n=1}^\infty \frac\) converges.
Find the values of \(p\) for which the series \(\displaystyle\sum_{n=2}^\infty \frac\) converges.
Use the comparison test (not the limit comparison test) to show whether the series
converges or diverges.
Determine whether the series \(\displaystyle\sum_{n=1}^\infty\frac{\root\of{}}{}\) converges.
Does \(\displaystyle\sum_{n=1}^\infty\frac\) converge or diverge?
Determine, with explanation, whether each of the following series converges or diverges.
Determine whether the series
converges or diverges.
Determine whether each of the following series converges or diverges.
Evaluate the following series, or show that it diverges: \(\displaystyle\sum_{n=1}^\infty \frac\text{.}\)
Determine whether the series \(\displaystyle\sum_{n=1}^\infty\frac\) is convergent or divergent. If it is convergent, find its value.
Determine, with explanation, whether each of the following series converges or diverges.
Determine, with explanation, whether each of the following series converges or diverges.
Determine whether the series \(\displaystyle\sum_{n=1}^\infty\frac\) is convergent or divergent.
What is the smallest value of \(N\) such that the partial sum \(\displaystyle\sum_{n=1}^N\frac\) approximates \(\displaystyle\sum_{n=1}^\infty\frac\) within an accuracy of \(10^{-6}\text{?}\)
It is known that \(\displaystyle \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}\) (you don't have to show this). Find \(N\) so that \(S_N\text{,}\) the \(N^{\text{th}}\) partial sum of the series, satisfies \(\big| \frac{\pi^2}{6} - S_N \big| \le 10^{-6}\text{.}\) Be sure to say why your method can be applied to this particular series.
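One systematic way to bound the error for \(\sum_{n=1}^\infty \frac{1}{n^2}\) is the integral-test tail estimate: since \(\frac{1}{x^2}\) is positive and decreasing, \(\sum_{n=N+1}^\infty \frac{1}{n^2} \le \int_N^\infty \frac{dx}{x^2} = \frac{1}{N}\text{.}\) The Python sketch below checks this bound numerically; the tolerance \(10^{-3}\) (so \(N=1000\)) is chosen only to keep the loop short and is not the tolerance from the exercise.

```python
# Sketch of the integral-test tail bound for sum 1/n^2 = pi^2/6:
# the tail sum_{n>N} 1/n^2 is squeezed between 0 and 1/N, so choosing
# N >= 1/epsilon guarantees accuracy epsilon.
import math

def S(N):
    """N-th partial sum of sum 1/n^2."""
    return sum(1.0 / n ** 2 for n in range(1, N + 1))

N = 1000  # then the tail is at most 1/N = 1e-3
tail = math.pi ** 2 / 6 - S(N)
assert 0 < tail <= 1.0 / N
```

The same reasoning with the exercise's actual tolerance just replaces `1e-3` by that tolerance when choosing \(N\text{.}\)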
The series \(\displaystyle \sum_{n=1}^\infty \frac\) converges to some number \(S\) (you don't have to prove this). According to the Alternating Series Estimation Theorem, what is the smallest value of \(N\) for which the \(N^{\text{th}}\) partial sum of the series is at most \(\frac{1}{100}\) away from \(S\text{?}\) For this value of \(N\text{,}\) write out the \(N^{\text{th}}\) partial sum of the series.
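The search for the smallest such \(N\) can be sketched in Python. The series and tolerance below (the alternating harmonic series with tolerance \(\frac{1}{100}\)) are illustrative stand-ins, not necessarily the ones in the exercise; the point is the mechanism: by the Alternating Series Estimation Theorem, \(|S-S_N|\le a_{N+1}\text{,}\) so the first \(N\) with \(a_{N+1}\) at most the tolerance works.

```python
# Sketch: smallest N with a_(N+1) <= tol guarantees |S - S_N| <= tol,
# by the Alternating Series Estimation Theorem.  Illustrated on the
# alternating harmonic series sum (-1)^(n-1)/n = log(2).
import math

def a(n):
    return 1.0 / n  # terms (in absolute value) of the illustrative series

tol = 1.0 / 100
N = 1
while a(N + 1) > tol:
    N += 1
# a(N+1) = 1/(N+1) <= 1/100 first happens at N = 99
assert N == 99

S_N = sum((-1) ** (n - 1) * a(n) for n in range(1, N + 1))
assert abs(math.log(2) - S_N) <= tol  # the guarantee holds
```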
A number of phenomena roughly follow a distribution called Zipf's law. We discuss some of these in Questions 52 and 53.
Determine, with explanation, whether the following series converge or diverge.
(a) Prove that \(\displaystyle \int_2^\infty\frac \, dx\) diverges.
(b) Explain why you cannot conclude that \(\displaystyle\sum\limits_{n=2}^\infty \frac\) diverges from part (a) and the Integral Test.
(c) Determine, with explanation, whether \(\displaystyle\sum\limits_{n=2}^\infty \frac\) converges or diverges.
Show that \(\displaystyle\sum\limits_{n=1}^\infty\frac\) converges and find an interval of length \(0.05\) or less that contains its exact value.
Suppose that the series \(\displaystyle\sum\limits_{n=1}^\infty a_n\) converges and that \(1 \gt a_n\ge 0\) for all \(n\text{.}\) Prove that the series \(\displaystyle\sum\limits_{n=1}^\infty \frac{a_n}{1-a_n}\) also converges.
Suppose that the series \(\sum\limits_{n=0}^{\infty}(1-a_n)\) converges, where \(a_n \gt 0\) for \(n=0,1,2,3,\cdots\text{.}\) Determine whether the series \(\sum\limits_{n=0}^\infty 2^n a_n\) converges or diverges.
Assume that the series \(\displaystyle\sum_{n=1}^\infty\frac\) converges, where \(a_n \gt 0\) for \(n = 1, 2, \cdots\text{.}\) Is the following series
\[ -\log a_1 + \sum\limits_{n=1}^\infty \log\Big(\frac{a_n}{a_{n+1}}\Big) \nonumber \]
convergent? If your answer is NO, justify your answer. If your answer is YES, evaluate the sum of the series \(-\log a_1 + \sum\limits_{n=1}^\infty \log\big(\frac{a_n}{a_{n+1}}\big)\text{.}\)
Prove that if \(a_n\ge 0\) for all \(n\) and if the series \(\displaystyle\sum_{n=1}^\infty a_n\) converges, then the series \(\displaystyle\sum_{n=1}^\infty a^2_n\) also converges.
Suppose the frequency of word use in a language has the following pattern:
The \(n\)-th most frequently used word accounts for \(\dfrac{\alpha}{n}\) percent of the total words used.
So, in a text of 100 words, we expect the most frequently used word to appear \(\alpha\) times, while the second-most-frequently used word should appear about \(\frac{\alpha}{2}\) times, and so on.
If books written in this language use \(20,000\) distinct words, then the most commonly used word accounts for roughly what percentage of total words used?
Suppose the sizes of cities in a country adhere to the following pattern: if the largest city has population \(\alpha\text{,}\) then the \(n\)-th largest city has population \(\frac{\alpha}{n}\text{.}\)
If the largest city in this country has 2 million people and the smallest city has 1 person, then the population of the entire country is \(\sum_{n=1}^{2,000,000}\frac{2,000,000}{n}\text{.}\) (For many \(n\)'s in this sum \(\frac{2,000,000}{n}\) is not an integer. Ignore that.) Evaluate this sum approximately, with an error of no more than 1 million people.
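For sums of this shape one can avoid adding millions of terms by using the growth of the harmonic numbers, \(\sum_{n=1}^N \frac{1}{n} \approx \log N + \gamma\) where \(\gamma\approx 0.5772\) is the Euler–Mascheroni constant. The Python sketch below (one possible approach, not the only one) compares the direct sum with this approximation.

```python
# Estimating a large harmonic sum alpha * sum_{n=1}^{N} 1/n with
# alpha = N = 2,000,000, both directly and via the asymptotic
# formula H_N ~ log(N) + gamma (Euler-Mascheroni constant).
import math

alpha = N = 2_000_000
direct = alpha * sum(1.0 / n for n in range(1, N + 1))
gamma = 0.5772156649
approx = alpha * (math.log(N) + gamma)
# The two estimates differ by roughly alpha/(2N) = 0.5, far below the
# 1-million-person tolerance asked for.
assert abs(direct - approx) < 1.0
print(round(direct / 1e6, 1), "million people")
```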
This page titled 3.3: Convergence Tests is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Joel Feldman, Andrew Rechnitzer and Elyse Yeager via source content that was edited to the style and standards of the LibreTexts platform.