Atoms in Mathematics and Science, Part 2: Infinite-Dimensional Spaces

In a previous post we began to discuss the idea of a basis in mathematics. The examples given in that post were finite-dimensional vector spaces, and in this post we are going to generalize by giving some examples of infinite-dimensional vector spaces.

But before we do this, let’s play with some motivating examples that do not involve vector spaces, but that nevertheless fit in with the general idea of understanding something complicated by expressing it in terms of simpler constituents (an idea that goes by the name of reductionism).

For example, there is a theorem in number theory (called the fundamental theorem of arithmetic) that states that every natural number greater than 1 is either a prime number or can be expressed uniquely (ignoring changes in the order of the factors) as a product of prime numbers. In this case, the prime numbers are the basic building blocks, and if we can understand them, we have an inroad to understanding all natural numbers. (Note that $1$ is excluded from the prime numbers by convention.)
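To make the theorem concrete, here is a small Python sketch (not from the original post; the function name `prime_factors` and the trial-division approach are my own illustrative choices) that decomposes a number into its prime building blocks:

```python
def prime_factors(n):
    """Return the prime factorization of n > 1 as a list of primes, with repetition."""
    factors = []
    d = 2
    while d * d <= n:
        # Divide out each prime factor d as many times as it appears.
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(360))  # → [2, 2, 2, 3, 3, 5], i.e. 360 = 2^3 · 3^2 · 5
```

The uniqueness part of the theorem says that, apart from reordering, this list of primes is the only one whose product is the given number.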

However, there are infinitely many primes, which is different from the examples we looked at in the previous post, where there were only a finite number of basis elements. This fact was proved by Euclid, and if you haven’t seen the delightful and elegant proof, you should definitely check it out, here for example.

I should emphasize that neither the set of whole numbers, nor the set of prime numbers, form a vector space, so the example discussed in the previous few paragraphs does NOT constitute an example of an infinite-dimensional vector space. But nevertheless, I hope it serves as a relatively elementary example of a situation where we have “things” (whole numbers, in this case) that can be constructed by using a “smaller” set of building blocks, but yet the set of building blocks is infinite.

The following is not an example of a vector space either, but it serves the same purpose of showing that a familiar situation nevertheless involves something (perhaps) strangely infinite. Consider a number whose decimal representation is endless. A simple example is

$0.444444…$

where every digit is $4$. We can consider the basic building blocks of such numbers as

$0.1, 0.01, 0.001, 0.0001,$ etc.

of which there are an infinite number. Then every number between $0$ and $1$ can be written as a “linear combination” of these building blocks, where the coefficients are the digits $0$ through $9$. For example,

$0.444444… = 4(0.1) + 4(0.01) + 4(0.001) + …$

and the decimal part of $\pi$  (that is, $0.14159…$) can be written as

$1(0.1) + 4(0.01) + 1(0.001) + 5(0.0001) + 9(0.00001) + …$
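This “linear combination” picture can be checked numerically. The short Python sketch below (my own illustration, not from the original post; the helper name `partial_sum` is invented) adds up finitely many terms $d_k \cdot 10^{-k}$ and shows the partial sums approaching the familiar decimals:

```python
def partial_sum(digits):
    """Sum d_k * 10^(-k) for a finite digit sequence d_1, d_2, ...,
    i.e. a finite 'linear combination' of the building blocks 0.1, 0.01, ..."""
    return sum(d * 10 ** -(k + 1) for k, d in enumerate(digits))

print(partial_sum([4] * 6))          # approximately 0.444444
print(partial_sum([1, 4, 1, 5, 9]))  # approximately 0.14159, the start of pi's decimal part
```

Taking more and more digits gives better and better approximations; the endless decimal is the limit of these finite combinations.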

Power series

By analogy with the expression of any decimal number between $0$ and $1$ as a “linear combination” of the infinite number of basic building blocks listed above, let’s ask the following question:

Can every function be expressed as a “linear combination” of the basic building blocks $1, x, x^2, x^3, x^4, \ldots $? This amounts to asking whether every function can be expressed as a sort of “infinite polynomial;” that is, not a real polynomial, but a sort of infinite-dimensional analogue of a polynomial.

We’ll answer this question in the next post in this series, but why would anyone wish to do this anyway? Well, polynomials are among the simplest types of functions. A series of powers, even an infinite series of powers, might be easier to deal with than the original function that we are attempting to represent as a power series. Could this kind of reductionism be of value in the realm of functions?

Yes, it often is useful. For example, if you run into a differential equation that you don’t know how to solve, one useful technique is to propose a trial solution function as a power series with unknown coefficients, and then run the trial solution through the differential equation (and use the initial conditions) in an attempt to determine the coefficients. It often works, and the result is a solution, at least in the form of a power series. And the reason this method works is that dealing with (i.e., differentiating, adding, multiplying) power functions (the basic building blocks) is so simple.

The idea of representing a function as a power series is also useful in proving various theorems. We’ll continue the power series story in the next post in this series; click here to read Atoms in Mathematics and Science, Part 3: Power Series.

(This post first appeared at my other (now deleted) blog, and was transferred to this blog on 22 January 2021.)