Define the inner product of two functions $f$ and $g$ on some domain $\Omega$ as

$$\langle f, g \rangle = \int_\Omega f(x)\, g(x)\, \mathrm{d}x.$$

If we consider vectors in space, the inner product is essentially the dot product of the two vectors. Vectors that are orthogonal to each other have a dot product of 0. The concept of orthogonality can be abstracted and extended to vectors in *any* inner product space: two elements are orthogonal if their inner product is zero.

Some arbitrary function on the domain may not permit an analytical representation and so must be approximated in some way. An easy choice is a polynomial approximation of order $M$. A natural choice for the polynomial basis is the monomials, $\phi_k(x) = x^k$ for $k = 0, 1, \dots, M$. These are satisfactory for low orders, but we encounter a problem at higher orders.

If we normalize the monomial basis with respect to the $L^2$ norm on $[0, 1]$ we arrive at $\hat\phi_k(x) = \sqrt{2k+1}\, x^k$, since $\int_0^1 x^{2k}\, \mathrm{d}x = 1/(2k+1)$. If the inner product of two basis functions is 0, they are orthogonal and therefore linearly independent. If many of our basis functions end up nearly linearly dependent, it is difficult to reconstruct the solution approximation.

Consider the inner product of two adjacent basis functions in our normalized monomial basis:

$$\langle \hat\phi_m, \hat\phi_{m+1} \rangle = \sqrt{(2m+1)(2m+3)} \int_0^1 x^{2m+1}\, \mathrm{d}x = \frac{\sqrt{(2m+1)(2m+3)}}{2m+2}.$$

It is easy to see that for even moderate values of $m$ this inner product is close to 1 (for $m = 10$ it is already $\approx 0.999$), indicating near linear dependence between the basis functions.
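As a quick numerical illustration of this near-dependence (the function name and the choice of orders to print are mine), the closed-form inner product above can be evaluated directly, and the conditioning problem shows up in the Gram matrix of the normalized monomial basis:

```python
# Sketch: adjacent L2-normalized monomials on [0, 1] become nearly parallel.
import numpy as np

def normalized_monomial_inner(m):
    # <phi_hat_m, phi_hat_{m+1}> = sqrt((2m+1)(2m+3)) / (2m+2)
    return np.sqrt((2 * m + 1) * (2 * m + 3)) / (2 * m + 2)

for m in [1, 5, 10, 20]:
    print(m, normalized_monomial_inner(m))

# The Gram matrix G_ij = <phi_hat_i, phi_hat_j> (a scaled Hilbert matrix)
# is severely ill-conditioned even at modest order:
M = 10
i, j = np.meshgrid(np.arange(M + 1), np.arange(M + 1), indexing="ij")
gram = np.sqrt((2 * i + 1) * (2 * j + 1)) / (i + j + 1)
print("condition number:", np.linalg.cond(gram))
```

The condition number of the order-10 Gram matrix is astronomically large, which is exactly the ill-conditioning the text warns about.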

It is therefore important to choose a polynomial basis of (ideally) orthogonal functions so that the solution approximation is well conditioned. One good choice is the Legendre polynomials $P_n(x)$, which are orthogonal on the interval $[-1, 1]$ (these are defined as the solutions to the Sturm–Liouville equation for a weighting function $w(x) = 1$). They can be defined using the recurrence formula

$$(n+1)\, P_{n+1}(x) = (2n+1)\, x\, P_n(x) - n\, P_{n-1}(x), \qquad P_0(x) = 1, \quad P_1(x) = x,$$

or the explicit Rodrigues' formula

$$P_n(x) = \frac{1}{2^n n!} \frac{\mathrm{d}^n}{\mathrm{d}x^n} \left( x^2 - 1 \right)^n.$$
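The three-term recurrence $(n+1)P_{n+1}(x) = (2n+1)\,x\,P_n(x) - n\,P_{n-1}(x)$ translates directly into a short, numerically stable evaluation routine (the function name is mine):

```python
# Sketch: evaluate the Legendre polynomial P_n(x) via the recurrence.
def legendre(n, x):
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x        # P_0 and P_1
    for k in range(1, n):
        # (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

A quick sanity check: $P_2(x) = (3x^2 - 1)/2$, so `legendre(2, 0.5)` gives $-0.125$, and `legendre(n, 1.0)` is always exactly 1.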

Orthogonality means that the inner product of two Legendre polynomials is zero if they are not of the same order; this can be expressed using the Kronecker delta as

$$\int_{-1}^{1} P_i(x)\, P_j(x)\, \mathrm{d}x = \frac{2}{2i+1}\, \delta_{ij}.$$

Note that we could use a similar orthonormal version of the basis such that every inner product is simply the Kronecker delta; however, one nice property of the current form is that $P_n(1) = 1$ and $P_n(-1) = (-1)^n$. In the orthonormal case we would need to explicitly evaluate the basis functions at the endpoints for every $n$ (it is important to be clear about what one means by normalization in this case: orthonormal with respect to the $L^2$ norm, or normalized with respect to the maximal value of $|P_n|$ on $[-1, 1]$).

The Legendre polynomials are orthogonal on the domain $[-1, 1]$; however, we would like to use them on the domain of interest $[x_l, x_r]$. We can define a mapping that will transform the domain of interest to that of the Legendre polynomials,

$$\xi(x) = \frac{2 (x - x_l)}{x_r - x_l} - 1,$$

to transform $[x_l, x_r]$ to $[-1, 1]$. Using this we can define a variant of the shifted Legendre polynomials such that $\tilde P_n(x) = P_n(\xi(x))$. The newly defined shifted Legendre polynomials are now orthogonal over $[x_l, x_r]$ and the inner product becomes

$$\int_{x_l}^{x_r} \tilde P_i(x)\, \tilde P_j(x)\, \mathrm{d}x = \frac{\Delta x}{2i+1}\, \delta_{ij}, \qquad \Delta x = x_r - x_l.$$

Using our orthogonal Legendre polynomial basis we can now construct an approximating solution of arbitrary order $M$ using basis functions $\tilde P_k$ and basis weights $\hat u_k$:

$$u(x) \approx u_h(x) = \sum_{k=0}^{M} \hat u_k\, \tilde P_k(x).$$

This approximation is well conditioned for arbitrary orders and has the additional benefit that it is easy to evaluate integrals and derivatives of it.
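As a sketch of building such an approximation (the interval $[0, 2]$ and the choice of $f(x) = e^x$ are mine), the weights follow from the diagonal inner product: $\hat u_k = \frac{2k+1}{\Delta x} \int_{x_l}^{x_r} f\, \tilde P_k\, \mathrm{d}x$, which can be evaluated with Gauss quadrature on the reference interval:

```python
# Sketch: L2 projection of f(x) = exp(x) onto the shifted Legendre basis.
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

xl, xr = 0.0, 2.0
dx = xr - xl
M = 8                                      # polynomial order

nodes, weights = leggauss(M + 4)           # Gauss quadrature on [-1, 1]
x = xl + 0.5 * dx * (nodes + 1.0)          # quadrature points mapped to [xl, xr]
f = np.exp(x)

# u_hat_k = (2k+1)/dx * int f P~_k dx = (2k+1)/2 * int_{-1}^{1} f(x(xi)) P_k(xi) dxi
u_hat = np.empty(M + 1)
for k in range(M + 1):
    Pk = legval(nodes, [0.0] * k + [1.0])  # P_k at the quadrature nodes
    u_hat[k] = 0.5 * (2 * k + 1) * np.sum(weights * f * Pk)

# Evaluate the expansion at the nodes and compare against exp(x)
approx = legval(nodes, u_hat)
print("max error:", np.max(np.abs(approx - f)))
```

Because $e^x$ is smooth, the order-8 projection already matches the function to many digits; the first weight $\hat u_0$ is the mean value $(e^2 - 1)/2$, as expected.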

Where the orthogonal basis really shines, however, is when one needs to find the integral of the product of the approximation (or its derivative) with some other function (or *its* derivative) using the same orthogonal basis for the approximation. Usually the integral of the product of the two approximations can be calculated analytically and takes a very succinct closed form.
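For instance, with $u_h = \sum_k \hat u_k \tilde P_k$ and $v_h = \sum_k \hat v_k \tilde P_k$, orthogonality collapses the product integral to a single diagonal sum, $\int u_h v_h\, \mathrm{d}x = \sum_k \frac{\Delta x}{2k+1} \hat u_k \hat v_k$ (the element width, order, and random coefficients below are my choices):

```python
# Sketch: integral of a product of two expansions reduces to a diagonal sum.
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

rng = np.random.default_rng(0)
dx, M = 0.5, 6
u_hat = rng.standard_normal(M + 1)
v_hat = rng.standard_normal(M + 1)

# Closed form: sum_k dx/(2k+1) * u_hat_k * v_hat_k
closed_form = np.sum(dx / (2 * np.arange(M + 1) + 1) * u_hat * v_hat)

# Cross-check with Gauss quadrature on the reference element
# (M+1 points integrate the degree-2M product exactly)
nodes, weights = leggauss(M + 1)
quad = 0.5 * dx * np.sum(weights * legval(nodes, u_hat) * legval(nodes, v_hat))
print(closed_form, quad)
```

The two values agree to machine precision, with no quadrature needed in practice for the closed form.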

For example, the 1D scalar conservation law PDE

$$\frac{\partial u}{\partial t} + \frac{\partial f(u)}{\partial x} = 0$$

could be solved using an $M$th-order spatial modal Discontinuous Galerkin method. Multiplying by a test function $v$, integrating over an element $[x_l, x_r]$, and integrating the flux term by parts, the weak form becomes

$$\int_{x_l}^{x_r} \frac{\partial u}{\partial t}\, v \, \mathrm{d}x - \int_{x_l}^{x_r} f(u)\, \frac{\partial v}{\partial x}\, \mathrm{d}x + \big[ f^*\, v \big]_{x_l}^{x_r} = 0,$$

where $f^*$ is a numerical flux at the element interfaces.

Using an orthogonal Legendre basis approximation for both the solution and the test function (a Galerkin approach) simplifies the expression greatly to

$$\frac{\Delta x}{2i+1} \frac{\mathrm{d} \hat u_i}{\mathrm{d} t} = \int_{x_l}^{x_r} f(u_h)\, \frac{\mathrm{d} \tilde P_i}{\mathrm{d} x}\, \mathrm{d}x - f^* \big|_{x_r} + (-1)^i f^* \big|_{x_l}, \qquad i = 0, \dots, M.$$

This is then readily solved using an explicit method-of-lines approach for the time discretization. Without an orthogonal basis approximation the integrals would all need to be calculated using quadrature and the routine would be far more expensive to perform.
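A minimal sketch of this scheme, specialized (my choices, not the author's) to linear advection $f(u) = a u$ with an upwind flux on a periodic grid $[0, 1]$: the mass matrix is diagonal, the volume integral $\int P_k P_i'\, \mathrm{d}\xi$ equals 2 when $k < i$ with $i + k$ odd (else 0), and the endpoint values $P_i(\pm 1) = (\pm 1)^i$ give the flux terms for free.

```python
# Sketch: modal DG for u_t + a u_x = 0, upwind flux, periodic, SSP-RK3 in time.
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

a = 1.0                    # advection speed (> 0, so upwind = left state)
N, M = 32, 2               # number of elements and polynomial order
dx = 1.0 / N

# L2-project the initial condition sin(2 pi x) onto each element's basis
nodes, weights = leggauss(M + 3)
x = (np.arange(N)[:, None] + 0.5 * (nodes + 1.0)) * dx
u_hat = np.zeros((N, M + 1))
f0 = np.sin(2 * np.pi * x)
for k in range(M + 1):
    Pk = legval(nodes, [0.0] * k + [1.0])
    u_hat[:, k] = 0.5 * (2 * k + 1) * (f0 * weights * Pk).sum(axis=1)
u0 = u_hat.copy()

def rhs(u_hat):
    # d u_hat_i / dt = (2i+1)/dx * a * (volume term - interface fluxes)
    u_right = u_hat.sum(axis=1)        # u_h at the right face, since P_k(1) = 1
    flux_in = np.roll(u_right, 1)      # upwind state from the left neighbor
    out = np.zeros_like(u_hat)
    for i in range(M + 1):
        # volume integral of P_k P_i' is 2 for k = i-1, i-3, ...
        vol = 2.0 * u_hat[:, i - 1::-2].sum(axis=1) if i > 0 else 0.0
        out[:, i] = (2 * i + 1) / dx * a * (vol - u_right + (-1) ** i * flux_in)
    return out

def step(u, dt):
    # Three-stage SSP Runge-Kutta (explicit method of lines)
    k1 = u + dt * rhs(u)
    k2 = 0.75 * u + 0.25 * (k1 + dt * rhs(k1))
    return u / 3.0 + 2.0 / 3.0 * (k2 + dt * rhs(k2))

t, T = 0.0, 1.0
dt = 0.3 * dx / (a * (2 * M + 1))
while t < T - 1e-12:
    h = min(dt, T - t)
    u_hat = step(u_hat, h)
    t += h

# After one full period the solution should return to its initial projection
err = np.max(np.abs(u_hat - u0))
print("max coefficient error after one period:", err)
```

Note there is no quadrature anywhere in the time loop: every integral in the semi-discrete form has been replaced by a closed-form sum over modal coefficients.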
