Suppose you have a differential equation that you want to solve numerically. An approach you might take is collocation, which is based on the idea that an unknown function \(u(z)\) can be approximated by a sum of \(N+1\) basis functions \(\phi_n(z)\):

\[ u(z) \approx \sum_{n=0}^{N} a_n \phi_n(z) \]

This approach has the advantage over finite differences that we can guarantee a smooth solution. (Of course, it has the disadvantage that we can't use a non-uniform grid to get more detail in only some areas: since the accuracy of our approximation depends on the number of basis functions we use, we can only add more detail to the entire domain.)

The function \(u(z)\) is the solution to some differential equation \(Lu = f(z)\). Substituting the approximation into this equation defines a residual function:

\[ R(z; a_0, a_1, \ldots, a_N) \equiv L\left[\sum_{n=0}^{N} a_n \phi_n(z)\right] - f(z) \]

The residual function is zero for the exact solution. Spectral and pseudospectral methods try to choose the coefficients \(a_n\) to minimize this residual. Collocation chooses these coefficients by requiring the residual function to be zero at selected points.

Chebyshev Collocation uses Chebyshev Polynomials as the basis functions (hence the name):

\[ u(z) \approx \sum_{n=0}^{N} a_n T_n(z) \]

where \(z = \cos(\theta)\) and \(T_n(z) \equiv \cos(n\theta)\).
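
As a quick sanity check on this definition, the identity \(T_n(\cos\theta) = \cos(n\theta)\) can be verified numerically. Here is a minimal sketch using NumPy's Chebyshev module (the variable names are mine):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 5
theta = np.linspace(0.0, np.pi, 9)

# Coefficient vector that selects the single polynomial T_n
e = np.zeros(n + 1)
e[n] = 1.0

# T_n(cos(theta)) should agree with cos(n*theta) everywhere
assert np.allclose(C.chebval(np.cos(theta), e), np.cos(n * theta))
```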

We collocate (i.e. set the residual to zero) at the Gauss-Lobatto points:

\[ z_j = \cos\left(\frac{j\pi}{N}\right), \qquad j = 0, 1, \ldots, N \]

These are the extrema of the \(N\)-th Chebyshev polynomial. By requiring the residual function to be zero at these points, we can bound the truncation error of our solution.
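
Numerically, both properties are easy to check; a small NumPy sketch (the choice of `N` is arbitrary):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 8
j = np.arange(N + 1)
x = np.cos(j * np.pi / N)   # Gauss-Lobatto points cos(j*pi/N)

eN = np.zeros(N + 1)
eN[N] = 1.0                 # coefficient vector for T_N

# T_N alternates between +1 and -1 at the Gauss-Lobatto points ...
assert np.allclose(C.chebval(x, eN), (-1.0) ** j)
# ... and its derivative vanishes at the interior points, so they are extrema
assert np.allclose(C.chebval(x[1:-1], C.chebder(eN)), 0.0)
```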

# Boundary Conditions

Unlike using a Fourier series to approximate a solution with periodic boundary conditions, Chebyshev polynomials do not automatically satisfy the appropriate boundary conditions. However, explicit constraints can be added:

\[ u(1) \approx \sum_{n=0}^{N} a_n T_n(1) = \sum_{n=0}^{N} a_n = \alpha \]

Inserting this into the algebraic equations produced by our choice of minimization technique for \(R(x; a_0, a_1, ..., a_N)\) causes \(u(1)=\alpha\) to be satisfied by the approximate solution.

For collocation, we require the differential equation to be satisfied at each of the \(N+1\) gridpoints. The equations for the boundary points can then be replaced with boundary constraints instead. We still have \(N+1\) equations for the \(N+1\) unknown coefficients: the differential equation satisfied at the \(N-1\) interior points, plus two boundary constraints.

# The Problem

If we are solving a second-order differential equation with two boundary conditions, we can easily apply Chebyshev collocation as described above. But suppose that, instead, we consider the equivalent first-order system.

Analytically, the first-order system is equivalent to the second-order differential equation, but when we try to apply Chebyshev collocation, we run into a problem. The number of unknown coefficients and the number of equations at the interior points have now doubled, but we still have only two boundary conditions: essentially "half" a boundary condition at each boundary. So we now have \(2N+2\) unknowns, but only \(2N\) equations.

# Example: Wave Equation

Let's look at an example to demonstrate this issue. (Note that this is only a toy example: there is no reason you would want to do this with the wave equation, but there are more complicated second-order equations that are substantially simpler in first-order system form.)

Consider the differential equation

\[ \frac{d^2 u}{dx^2} = -k^2 u \]

with boundary conditions \(u(-1) = 0 = u(1)\). This can be solved analytically for eigenvalues \(k\):

\[ k = \frac{m\pi}{2}, \qquad u(x) = \sin\left(\frac{m\pi(x+1)}{2}\right), \qquad m = 1, 2, \ldots \]

We can also write this as a first-order system:

\[ \frac{du}{dx} = -k v, \qquad \frac{dv}{dx} = k u \]

Or, in vector form,

\[ \frac{d\mathbf{w}}{dx} = k C \mathbf{w}, \]

where

\[ \mathbf{w} = \begin{pmatrix} u \\ v \end{pmatrix}, \qquad C = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. \]

## Second-Order Equation

Let's apply Chebyshev Collocation to our second-order equation. We approximate \(u\) with a series of \(N+1\) Chebyshev polynomials:

\[ u(x) \approx \sum_{n=0}^{N} a_n T_n(x) \]

Our differential equation must be satisfied at the Gauss-Lobatto points \(x_j = \cos\left(\frac{j\pi}{N}\right)\):

\[ \sum_{n=0}^{N} a_n T_n''(x_j) = -k^2 \sum_{n=0}^{N} a_n T_n(x_j), \qquad j = 0, 1, \ldots, N \]

This can be written in matrix form, \(Ay = k^2 By\), where \(y\) is the vector of coefficients \(a_n\). Each row in matrices \(A\) and \(B\) corresponds to one of the gridpoints \(x_j\).

To satisfy the boundary conditions, the rows corresponding to \(x_0\) and \(x_N\) can be replaced by the boundary conditions \(u(\pm 1) = 0\); that is, since \(T_n(1) = 1\) and \(T_n(-1) = (-1)^n\),

\[ \sum_{n=0}^{N} a_n = 0, \qquad \sum_{n=0}^{N} (-1)^n a_n = 0. \]

In matrix form, this is

\[ \begin{pmatrix} 1 & 1 & \cdots & 1 \end{pmatrix} y = 0, \qquad \begin{pmatrix} 1 & -1 & \cdots & (-1)^N \end{pmatrix} y = 0, \]

which can easily be substituted into the appropriate rows of \(A\) and \(B\).

The resulting matrix eigenvalue problem can be solved with a standard general eigenvalue algorithm. The numerical results match the analytical solution.
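
As a concrete illustration, here is a minimal sketch of that computation in Python. The matrix layout follows the description above, but the variable names and the use of `scipy.linalg.eig` for the generalized eigenvalue problem are my own choices:

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.linalg import eig

N = 16
j = np.arange(N + 1)
x = np.cos(j * np.pi / N)                    # Gauss-Lobatto points

# Build A y = k^2 B y with A[j, n] = -T_n''(x_j) and B[j, n] = T_n(x_j),
# so each row enforces -u''(x_j) = k^2 u(x_j).
A = np.zeros((N + 1, N + 1))
B = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    e = np.zeros(N + 1)
    e[n] = 1.0                               # coefficients of T_n
    A[:, n] = -C.chebval(x, C.chebder(e, 2)) # -T_n'' at the gridpoints
    B[:, n] = C.chebval(x, e)                # T_n at the gridpoints

# Replace the boundary rows with u(1) = 0 and u(-1) = 0,
# using T_n(1) = 1 and T_n(-1) = (-1)^n.
A[0, :] = 1.0
A[N, :] = (-1.0) ** np.arange(N + 1)
B[0, :] = 0.0
B[N, :] = 0.0

# The zeroed rows of B produce infinite eigenvalues, which we discard.
lam = eig(A, B)[0]
lam = np.sort(lam[np.isfinite(lam)].real)
k = np.sqrt(lam[lam > 0])
print(k[:4] / (np.pi / 2))                   # should approach 1, 2, 3, 4
```

The low-order eigenvalues converge spectrally fast, so even modest \(N\) reproduces \(k = m\pi/2\) essentially to machine precision.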

## First-Order System

Like for the second-order equation, we approximate \(\mathbf{w}\) with a series of \(N+1\) Chebyshev polynomials:

\[ \mathbf{w}(x) \approx \sum_{n=0}^{N} \begin{pmatrix} a_n \\ b_n \end{pmatrix} T_n(x) \]

We now have a vector of coefficients for each element in the series, so we have \(2(N+1)\) unknowns, rather than the \(N+1\) unknowns we had in the previous case.

Again, our differential equation must be satisfied at the Gauss-Lobatto points, \(x_j = \cos\left(\frac{j\pi}{N}\right)\):

\[ \sum_{n=0}^{N} a_n T_n'(x_j) = -k \sum_{n=0}^{N} b_n T_n(x_j), \qquad \sum_{n=0}^{N} b_n T_n'(x_j) = k \sum_{n=0}^{N} a_n T_n(x_j), \]

where \(a_n\) and \(b_n\) are the Chebyshev coefficients of \(u\) and \(v\), respectively.

Again, we can replace two rows by the boundary conditions \(u(\pm 1) = 0\):

\[ \sum_{n=0}^{N} a_n = 0, \qquad \sum_{n=0}^{N} (-1)^n a_n = 0. \]

However, in this case, we have two matrix rows corresponding to each of \(x_0\) and \(x_N\), but we only have "half" a boundary condition on each boundary, since it is a boundary condition on \(u\) rather than our vector variable \(w\).

If we require the differential equation to be satisfied at the interior Gauss-Lobatto points, and add the two "half" boundary conditions, this gives us \(2N\) equations; however, we have \(2N + 2\) unknowns (\(a_n\) and \(b_n\)).
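
The counting argument can be spelled out explicitly (a trivial sketch; the choice of `N` is arbitrary):

```python
N = 16                                 # highest Chebyshev degree
unknowns = 2 * (N + 1)                 # coefficients a_0..a_N and b_0..b_N
interior = N - 1                       # Gauss-Lobatto points excluding x_0, x_N
equations = 2 * interior + 2           # two ODE rows per interior point + two BCs

assert unknowns == 2 * N + 2
assert equations == 2 * N
assert unknowns - equations == 2       # two equations short, whatever N is
```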

But clearly our system isn't actually underdetermined, since it's equivalent to the second-order case above. So what can we try in order to add the two missing equations?

- Require the differential equation to be satisfied at one of the boundary points.
- Overspecify the boundary conditions. As \(v'(x) = ku\), add the additional boundary condition \(v'(\pm 1) = 0\).
- Consider the two equations separately, rather than as a system. Then, require the equation in \(u'(x)\) to be satisfied at the interior collocation points, and the equation in \(v'(x)\) to be satisfied at all collocation points. The boundary conditions on \(u\) provide the remaining two equations.
- Homogenize the boundary conditions. Rather than representing both \(u\) and \(v\) as a series of Chebyshev polynomials, define \(u\) in terms of \(\phi_{2n}(x) = T_{2n}(x) - 1\) and \(\phi_{2n+1}(x) = T_{2n+1}(x) - x\) for \(n = 1, 2, ...\). Using these basis functions means that the boundary condition \(u(\pm 1) = 0\) is automatically satisfied, and thus we can simply impose the differential equation at all the Gauss-Lobatto points.
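
For the last option, it is easy to confirm numerically that the modified basis really does vanish at both endpoints (a small NumPy check; the loop bounds are arbitrary):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# phi_{2n}(x) = T_{2n}(x) - 1 and phi_{2n+1}(x) = T_{2n+1}(x) - x
# should both vanish at x = -1 and x = +1.
for n in range(1, 6):
    e_even = np.zeros(2 * n + 1)
    e_even[2 * n] = 1.0                  # coefficients of T_{2n}
    e_odd = np.zeros(2 * n + 2)
    e_odd[2 * n + 1] = 1.0               # coefficients of T_{2n+1}
    for x in (-1.0, 1.0):
        assert abs(C.chebval(x, e_even) - 1.0) < 1e-12   # phi_{2n}(x) = 0
        assert abs(C.chebval(x, e_odd) - x) < 1e-12      # phi_{2n+1}(x) = 0
```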

All four methods give approximately the same values for the wavenumber \(k\). Unfortunately, the values are nowhere near the analytical solution, nor those from solving the second-order equation.

You might be wondering if this is due to some problem with the first-order system formulation in the first place. Apparently not, since solving the first-order system using a finite-difference method results in the correct solution.

This suggests that Chebyshev Collocation is just not a suitable technique for solving a first-order system of differential equations numerically. So if you find yourself with such a problem, you'll either need to work with the messier second-order equation, or use some other numerical method.