# Orthogonal Functions

After consulting numerous sources, I finally found something that clearly and satisfactorily explains the details of orthogonal functions. The concept is the easy part: two vectors are orthogonal if their dot product is zero.

The details are the tough part: the idea generalizes to functions, with an inner product on a vector space taking the place of the dot product. How that inner product is defined determines the orthogonality conditions, and it depends on the vector space in question. This is where the useful source comes in:

I have also attached the PowerPoint file should it be lost to the sands of time.
What made it click for me is him drawing direct analogues between the various pieces of the dot product and the inner product. Take Legendre polynomials as an example: instead of the Cartesian 3-space directions (i, j, k), your basis is the monomial powers (1, x, x²) in polynomial “space”. In general, for real functions over the domain [a, b], the inner product is:

$$\langle f, g \rangle = \int_a^b f(x)\,g(x)\,dx$$

Restricting the domain to [-1, 1] and orthogonalizing the monomials under this inner product yields the Legendre polynomials.
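As a concrete check of that claim, here is a minimal sketch of Gram–Schmidt applied to 1, x, x² under the inner product above on [-1, 1]. The helper names (`inner`, `sub`) and the coefficient-list representation are my own; exact arithmetic uses the standard library's `fractions.Fraction`, and the only fact used is $\int_{-1}^{1} x^n\,dx = 2/(n+1)$ for even $n$ and $0$ for odd $n$.

```python
# Sketch: Gram-Schmidt on the monomials 1, x, x^2 with <f,g> = ∫_{-1}^{1} f g dx.
# A polynomial is a coefficient list: p[k] is the coefficient of x^k.
from fractions import Fraction

def inner(p, q):
    # <p, q> on [-1, 1]: ∫ x^n dx is 2/(n+1) for even n, 0 for odd n
    s = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            n = i + j
            if n % 2 == 0:
                s += Fraction(a) * Fraction(b) * Fraction(2, n + 1)
    return s

def sub(p, q, c):
    # return p - c*q, padding the shorter coefficient list with zeros
    m = max(len(p), len(q))
    p = list(p) + [Fraction(0)] * (m - len(p))
    return [p[k] - c * (q[k] if k < len(q) else Fraction(0)) for k in range(m)]

basis = []
for mono in ([1], [0, 1], [0, 0, 1]):          # the monomials 1, x, x^2
    v = [Fraction(c) for c in mono]
    for b in basis:
        v = sub(v, b, inner(v, b) / inner(b, b))  # remove the component along b
    basis.append(v)

print(basis)  # 1, x, and x^2 - 1/3 (proportional to the Legendre P0, P1, P2)
```

The output basis is 1, x, x² − 1/3; up to normalization these are exactly the first three Legendre polynomials ($P_2 = \tfrac{1}{2}(3x^2 - 1)$ is a scalar multiple of x² − 1/3).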
These ruminations stem from attempting to solve problem 7.42 in Schaum’s Advanced Mathematics.
`Given the functions $a_0$, $a_1+a_2x$, $a_3+a_4x+a_5x^2$, where $a_0,\ldots,a_5$ are constants, determine the constants so that these functions are mutually orthogonal in (-1,1), and thus obtain the functions.`
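Working the orthogonality conditions by hand: $\langle a_0,\, a_1+a_2x\rangle = 2a_0a_1 = 0$ forces $a_1 = 0$; $\langle a_0,\, a_3+a_4x+a_5x^2\rangle = 2a_0a_3 + \tfrac{2}{3}a_0a_5 = 0$ forces $a_3 = -a_5/3$; and with $a_1 = 0$, $\langle a_2x,\, a_3+a_4x+a_5x^2\rangle = \tfrac{2}{3}a_2a_4 = 0$ forces $a_4 = 0$. So the functions are, up to scale, 1, x, and x² − 1/3: the Legendre polynomials again. The sketch below checks this numerically with a simple composite Simpson rule; the `simpson` helper is my own, not from the book.

```python
# Numerical check that 1, x, x^2 - 1/3 are mutually orthogonal on (-1, 1)
def simpson(f, a, b, n=1000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

fs = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0 / 3.0]
for i in range(3):
    for j in range(i + 1, 3):
        val = simpson(lambda x: fs[i](x) * fs[j](x), -1.0, 1.0)
        assert abs(val) < 1e-9   # each pairwise inner product vanishes
```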

Attached: Orthogonal