Fast Evaluation of Singular Integrals

A natural need arose during my master’s work: accurately and efficiently evaluating singular integrals with high-order convergence on unstructured meshes. A large part of my time at UIUC focused on precisely this task. Accomplishing it requires several pieces of machinery.

First is the actual integration scheme. It needs to be accurate, converge with high order, and use only local corrections. By local corrections I mean the following: the integration scheme will necessarily depend on the mesh, and since the mesh is unstructured the modified integration coefficients will vary over the domain. If those coefficients depended on the whole mesh at each point, the cost scaling would be hopeless. In practice we find that the corrections depend strongly on neighboring elements, but this dependence drops off rapidly with radius.
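This rapid fall-off can be seen even in a toy 1D model (an assumption for illustration, not the thesis kernel): for a logarithmically singular integrand on a uniform grid, the per-cell difference between the exact integral and a plain trapezoid estimate is exactly the correction a modified rule must supply, and it decays like one over the squared distance from the singularity:

```python
import math

def cell_correction(a, h):
    """Exact integral of log(x) over [a, a+h] minus its trapezoid estimate.

    This difference is the local correction a modified rule must add back
    near the singular point x = 0 (hypothetical 1D model problem)."""
    b = a + h
    exact = (b * math.log(b) - b) - (a * math.log(a) - a)
    trapezoid = 0.5 * h * (math.log(a) + math.log(b))
    return exact - trapezoid

h = 0.01
# Corrections fall off roughly like 1/distance^2 from the singularity,
# so only a handful of neighboring cells need modified weights.
for j in (1, 2, 4, 8, 16, 32):
    print(j, cell_correction(j * h, h))
```

Halving-distance ratios of about four in the printed corrections reflect the expected quadratic decay; this is the sense in which the corrections are "local".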

Figure: Comparison of the error (in Fourier space) between the exact Green’s function and the Taylor expansion of the high-order de-singularization at the singularity. Examining approximation errors in Fourier space provides a convenient window into the behavior of the error of the computed integral in real space.
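The Fourier-space viewpoint can be sketched with a deliberately simple regularization (my own assumption for illustration; the actual de-singularization in this work is higher order): the exact 3D Laplace Green’s function has Fourier symbol 1/k², a Gaussian-regularized version has symbol exp(-(σk)²/2)/k², and their pointwise difference in Fourier space bounds the real-space error of the computed integral:

```python
import math

sigma = 0.1  # regularization length, a free parameter in this sketch

def symbol_error(k):
    """Fourier-space error of a Gaussian-regularized 3D Laplace kernel
    relative to the exact symbol 1/k^2 (illustrative model only)."""
    exact = 1.0 / k**2
    regularized = math.exp(-0.5 * (sigma * k) ** 2) / k**2
    return exact - regularized

# The error is uniformly bounded by sigma**2 / 2 and decays at high
# wavenumber, which is immediate in Fourier space but far less obvious
# from the real-space kernels themselves.
for k in (1.0, 5.0, 10.0, 50.0):
    print(k, symbol_error(k))
```

For this model the bound σ²/2 exhibits second-order accuracy in the regularization length; a high-order de-singularization flattens the low-wavenumber error further.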

The second piece required is the “fast” part of the algorithm. Even with accurate high-order integration in hand, the system has “n-body” character: the computational cost naively scales as O(n^2). Quadratic scaling is insufficient for a practical algorithm, so the modified integration can be embedded in a Fast Multipole Method (FMM) to achieve linear scaling. To accomplish this, the two different regions that the FMM and the modified integration scheme consider “local” must be reconciled so that no source point is counted twice or left out.
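One way to do that bookkeeping (a hedged sketch with hypothetical names, not the thesis implementation) is to let the all-pairs evaluation stand in for the FMM, then swap the plain near-field contributions for their corrected versions, so each source is counted exactly once:

```python
import math

def log_kernel(x, y):
    return math.log(abs(x - y))

def corrected_potential(x, sources, plain_w, corr_w, near_radius):
    """Reconcile the FMM's and the corrected rule's notions of "local".

    A direct all-pairs sum stands in for the FMM far-field evaluation here.
    Plain near-field terms are subtracted and replaced by corrected ones,
    so no source point is double-counted or dropped."""
    near = [j for j, s in enumerate(sources) if 0 < abs(x - s) <= near_radius]
    # "FMM" result: all pairs with plain weights (direct sum as a stand-in).
    total = sum(plain_w[j] * log_kernel(x, s)
                for j, s in enumerate(sources) if s != x)
    # Remove the plain near-field terms, then add the corrected ones back.
    total -= sum(plain_w[j] * log_kernel(x, sources[j]) for j in near)
    total += sum(corr_w[j] * log_kernel(x, sources[j]) for j in near)
    return total
```

A useful sanity check on the bookkeeping: with corrected weights equal to the plain weights, the result must reduce exactly to the uncorrected all-pairs sum.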

The third piece is an analysis-based understanding of error bounds, together with scaling estimates that allow error control over the algorithm’s free parameters.
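As one familiar example of this kind of a-priori control (an illustration of the idea, not a result from this work): a standard Greengard–Rokhlin-style bound says the multipole truncation error after p terms decays like c^(p+1), where c < 1 is the separation ratio of the interaction, and inverting the bound selects the expansion order for a target accuracy:

```python
import math

def required_order(eps, c=0.5):
    """Smallest expansion order p with c**(p+1) <= eps, inverting the
    standard multipole truncation bound (illustrative parameter choice)."""
    return max(0, math.ceil(math.log(eps) / math.log(c)) - 1)

# Each three extra digits of accuracy costs roughly ten more terms
# when c = 1/2, the classic well-separated case.
for eps in (1e-3, 1e-6, 1e-9):
    print(eps, required_order(eps))
```

The same pattern applies to the other free parameters of the algorithm: an analytic bound is inverted to set the parameter from a requested tolerance, rather than tuning by hand.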

Considerable progress has been made on the first two pieces, with working prototypes that demonstrate the capabilities. I am continuing to work on this algorithm (as of August 2019) and hope that in a year or so it will be mature enough to examine its performance and accuracy on some validation test cases.

The most recent summary of the work done on the algorithm can be found in a presentation I gave in Spring 2018.