An Operator Method For Solving Second Order Differential Equations, Part 3: Wild Speculation

In the two previous posts in this series we explored a method for solving second order linear differential equations with constant coefficients that is different from the standard textbook methods taught nowadays. I found the method in a 1941 book by the Sokolnikoffs.

The key point of the method, as we learned, is the identification of the action of the inverse of the differential operator $D + a$, that is,

$\dfrac{1}{D + a} \, f(x)$

with the action of the integral operator

$e^{-ax} \int e^{ax} f(x) \, {\rm d} x$
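
If you want to convince yourself of this identification, a quick symbolic check is easy to set up; here is a sketch using Python’s sympy library, where the test function $f(x) = \sin x$ and the restriction to $a > 0$ are just convenient choices of mine for illustration.

```python
# Sketch: verify that applying (D + a) to  e^(-ax) * Integral(e^(ax) f(x) dx)
# recovers f(x), i.e. that the integral operator really does invert D + a.
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)   # a > 0 assumed only for convenience
f = sp.sin(x)                        # arbitrary test function

candidate = sp.exp(-a*x) * sp.integrate(sp.exp(a*x) * f, x)

# Apply (D + a) to the candidate and compare with f.
residual = sp.simplify(sp.diff(candidate, x) + a*candidate - f)
print(residual)   # prints 0
```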

The previous two posts described solid, well-established mathematics. But now let’s go out on a limb.

Time for some wild speculation

When I see the form of the operator

$\dfrac{1}{D + a}$

which can also be written as

$\dfrac{1}{a} \cdot \dfrac{1}{1 - (-D/a)}$

I can’t help but think of the formula for the sum of an infinite geometric series:

$1 + x + x^2 + x^3 + \cdots = \dfrac{1}{1 - x}$

You might recall that the formula is valid if and only if $-1 < x < 1$. For example, if $x = 1/2$, then the formula gives

$1 + \dfrac{1}{2} + \dfrac{1}{4} + \dfrac{1}{8} + \cdots = \dfrac{1}{1 - 1/2} = 2$

which is reasonable if you think in terms of a journey on a number line that starts at $0$, moves to the right by $1$ unit, then moves to the right by an additional $1/2$ unit, etc. However, if you try $x = 1$, you get nonsense on the right side of the formula, because division by $0$ makes no sense, and nonsense on the left side of the formula, because adding $1$s indefinitely does not lead to any real number as a result:

$1 + 1 + 1 + 1 + \cdots = \dfrac{1}{0}$ (MAKES NO SENSE!!)

Similarly, if you try $x = -1$, you get nonsense on the left side of the formula, because partial sums alternate between $1$ and $0$, and so it seems entirely unreasonable to attribute any sum to the infinite series, even though the formula gives $1/2$:

$1 + (-1) + 1 + (-1) + \cdots = \dfrac{1}{2}$ (MAKES NO SENSE!!)

Back to the differential operator above. What happens if, without thinking much, we just apply the formula for the sum of an infinite geometric series to the differential operator

$\dfrac{1}{a} \cdot \dfrac{1}{1 - (-D/a)}$

Let’s see:

$\dfrac{1}{a} \cdot \dfrac{1}{1 - (-D/a)} = \dfrac{1}{a} \left ( 1 + \left [ -\dfrac{D}{a} \right ] + \left [ -\dfrac{D}{a} \right ]^2 + \left [ -\dfrac{D}{a} \right ]^3 + \left [ -\dfrac{D}{a} \right ]^4 + \cdots \right )$

$ = \dfrac{1}{a} \left ( 1 - \dfrac{D}{a} + \dfrac{D^2}{a^2} - \dfrac{D^3}{a^3} + \dfrac{D^4}{a^4} - \cdots \right )$

So here is the wild speculation: Could it be that the infinite series of operators in the previous line is “the same” as the integral operator given earlier? That is, could the following be true:

$e^{-ax} \int e^{ax} f(x) \, {\rm d} x = \dfrac{1}{a} \left ( 1 - \dfrac{D}{a} + \dfrac{D^2}{a^2} - \dfrac{D^3}{a^3} + \dfrac{D^4}{a^4} - \cdots \right ) \, f(x)$  (*)

This is probably of no practical use even if it is true, but it’s fun to explore, and if true it would make for a nice connection. So let’s test it out: all you have to do is select various functions $f(x)$, work out both sides of the previous equation (which I’ll call Equation *), and compare them.
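
If you’d rather let a computer do the bookkeeping, here is one way to set up the comparison using sympy; truncating the series at $N$ terms is my own shortcut (harmless for polynomials, where the series terminates), and I’ve again taken $a > 0$ just for convenience.

```python
# Sketch: compare both sides of Equation (*) for a chosen test function.
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)   # a > 0 assumed for convenience

def integral_side(f):
    """Left side of Equation (*): e^(-ax) * Integral(e^(ax) f dx)."""
    return sp.exp(-a*x) * sp.integrate(sp.exp(a*x) * f, x)

def series_side(f, N=12):
    """Right side of Equation (*), truncated after N terms."""
    return sum((-1/a)**n * sp.diff(f, x, n) for n in range(N)) / a

f = x  # test function
print(sp.simplify(integral_side(f) - series_side(f)))   # prints 0
```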

For $f(x) = x$, all is well, and both sides of Equation * work out to the same expression,

$\dfrac{x}{a} - \dfrac{1}{a^2}$
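
To spell out this first case: on the right side of Equation *, only the first two terms survive, because $D^2 x = 0$, which gives $\dfrac{1}{a} \left ( x - \dfrac{1}{a} \right )$; on the left side, a single integration by parts gives

$e^{-ax} \int e^{ax} x \, {\rm d} x = e^{-ax} \left ( \dfrac{x e^{ax}}{a} - \dfrac{e^{ax}}{a^2} \right ) = \dfrac{x}{a} - \dfrac{1}{a^2}$

and the two sides indeed agree.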

For $f(x) = x^2$, both sides of Equation * also work out to the same expression,

$\dfrac{x^2}{a} - \dfrac{2x}{a^2} + \dfrac{2}{a^3}$

It seems that this ought to work for all powers, and therefore all polynomials. Can you prove this?
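
I haven’t written out a proof here, but a quick sympy check at least confirms the pattern for the first several powers (when $f(x) = x^n$ the series on the right terminates after $n + 1$ terms, so truncating there loses nothing):

```python
# Check Equation (*) for f = x**n, n = 0, 1, ..., 6.
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)

for n in range(7):
    f = x**n
    left = sp.exp(-a*x) * sp.integrate(sp.exp(a*x) * f, x)
    right = sum((-1/a)**k * sp.diff(f, x, k) for k in range(n + 1)) / a
    assert sp.simplify(left - right) == 0

print("Equation (*) checks out for x**0 through x**6")
```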

Let’s try some different types of function. For $f(x) = \sin x$, again all is well (at least when $|a| > 1$, so that the series on the right side converges), and both sides of Equation * work out to

$\dfrac{1}{a^2 + 1} \left ( a\sin x - \cos x \right )$

For $f(x) = \cos x$, both sides of Equation * work out to

$\dfrac{1}{a^2 + 1} \left ( a\cos x + \sin x \right )$
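
To sanity-check the series side numerically for the sine case, here is a quick experiment in plain Python; the values $a = 2$ (chosen so that $|a| > 1$ and the series converges), $x = 0.7$, and the sixty-term cutoff are all arbitrary choices.

```python
# Numerically sum the series side of Equation (*) for f(x) = sin x and
# compare with the closed form (a*sin x - cos x)/(a^2 + 1).
import math

a, x0 = 2.0, 0.7              # arbitrary choices, with a > 1 for convergence
closed_form = (a * math.sin(x0) - math.cos(x0)) / (a**2 + 1)

# The n-th derivative of sin x cycles through sin, cos, -sin, -cos.
cycle = [math.sin(x0), math.cos(x0), -math.sin(x0), -math.cos(x0)]
series = sum((-1/a)**n * cycle[n % 4] for n in range(60)) / a

print(abs(series - closed_form))   # a tiny number, roughly 1e-16
```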

OK, how about an exponential function, such as $f(x) = e^{bx}$? Then, provided that $|b| < |a|$, both sides of Equation * yield

$\dfrac{e^{bx}}{a + b}$
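
The series side is particularly transparent here: since $D^n e^{bx} = b^n e^{bx}$, the right side of Equation * is just a geometric series,

$\dfrac{e^{bx}}{a} \left ( 1 - \dfrac{b}{a} + \dfrac{b^2}{a^2} - \dfrac{b^3}{a^3} + \cdots \right ) = \dfrac{e^{bx}}{a} \cdot \dfrac{1}{1 + b/a} = \dfrac{e^{bx}}{a + b}$

which converges exactly when $|b| < |a|$; the left side is simply $e^{-ax} \int e^{(a + b)x} \, {\rm d} x = \dfrac{e^{bx}}{a + b}$, valid whenever $b \neq -a$.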

However, for $b = -a$, the left side of Equation * works out to $xe^{-ax}$, which is the correct solution of the corresponding differential equation. But since $\left [ -\dfrac{D}{a} \right ]^n e^{-ax} = e^{-ax}$ for every $n$, the right side yields the nonsensical

$\dfrac{e^{-ax}}{a} \left ( 1 + 1 + 1 + \cdots \right )$

Similarly, for $b = a$, the left side of Equation * works out to $\dfrac{1}{2a} e^{ax}$, which is correct. The right side, however, yields the nonsensical

$\dfrac{e^{ax}}{a} \left ( 1 - 1 + 1 - 1 + \cdots \right )$

And if $|b| > |a|$, then the infinite series on the right side of Equation * makes no sense (“does not converge”).

So, the operator equivalence in Equation * is sometimes true, sometimes not. Of course, we have only tried a few functions, and have made no effort to be systematic. It would be interesting to explore further and clarify the conditions under which the proposed equivalence is valid. Give it a try if you’re interested, but it does not look like an easy problem!

(This post first appeared at my other (now deleted) blog, and was transferred to this blog on 22 January 2021.)