# Series Approximation

The basic idea of series approximation is to express the delta quantity $\Delta_n$ as a power series in $\delta$ that can be truncated after a limited number of initial terms to approximate $\Delta_n$ quickly. If the approximation turns out to be sufficiently accurate for large $n$, a considerable speed-up can be achieved, because it is then no longer necessary to compute $\Delta_n$ by iteration.

In the example given by K. I. Martin, the series is composed of three initial terms and a residual term. Writing $\delta$ for $\Delta_0$, Martin's equation looks like this:
\begin{align*}
\Delta_n = A_n \delta + B_n \delta^2 + C_n \delta^3 + \text{o}(\delta^4)
\end{align*}
In generalized form, this series can be written as:
\begin{align*}
\Delta_n = a_{1,n} \delta + a_{2,n} \delta^2 + a_{3,n} \delta^3 + \ldots + a_{k,n} \delta^k + \text{o}(\delta^{k+1}) \tag{1}
\end{align*}
If the residual term $\text{o}(\delta^{k+1})$ is sufficiently small, this formula can be utilized to approximate $\Delta_n$:
\begin{align*}
\Delta_n \approx a_{1,n} \delta + a_{2,n} \delta^2 + a_{3,n} \delta^3 + \ldots + a_{k,n} \delta^k \tag{2}
\end{align*}
In practice, this approach works very well in many cases: The residual term often remains small for a substantial number of iterations before it exceeds a given error threshold. In such a case, we can speed up the computation of a delta orbit by skipping a large initial segment. This means that instead of computing
\begin{align*}
\Delta_0, \Delta_1, \Delta_2, \ldots
\end{align*}
the sequence
\begin{align*}
\Delta_m, \Delta_{m+1}, \Delta_{m+2}, \ldots \tag{3}
\end{align*}
is computed, where $m$ denotes the largest value of $n$ for which $(2)$ is still sufficiently accurate. The value $\Delta_m$ itself is obtained from $(2)$, and iteration continues from there. In many cases, the skipped initial segment is so large that only a small fraction of the iterations that would have been necessary without series approximation remain to be performed.

To apply series approximation in practice, we need to determine up to which iteration equation $(2)$ is sufficiently accurate. The common solution is to use a certain set of *probe points*. For each of these points, the delta orbit is generated by iteration and compared to the values obtained by series approximation. If the relative error exceeds a certain threshold, the calculation is aborted. The probe point that exceeds the error threshold first defines the value of $m$ in equation $(3)$, as sketched in the code below.
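The probe-point test might be implemented along the following lines. This is a minimal C++ sketch rather than a reference implementation: the function name `findSkipCount`, the coefficient table `coeff[n]` (holding $a_{1,n}, \ldots, a_{k,n}$ for each iteration; how these coefficients are computed is derived below), the probe deltas `probes`, and the relative error threshold `tol` are all hypothetical names introduced here for illustration.

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Determine the skip count m from a set of probe points. For each probe delta,
// the exact orbit Delta_n is iterated directly and compared with the value
// predicted by the truncated series (2). 'x' is the reference orbit and
// 'coeff[n]' is assumed to hold the series coefficients a_{1,n} ... a_{k,n}.
std::size_t findSkipCount(const std::vector<std::complex<double>> &x,
                          const std::vector<std::vector<std::complex<double>>> &coeff,
                          const std::vector<std::complex<double>> &probes,
                          double tol)
{
    std::vector<std::complex<double>> delta = probes;   // Delta_0 = delta

    for (std::size_t n = 0; n + 1 < x.size(); n++) {
        for (std::size_t p = 0; p < probes.size(); p++) {

            // Iterate the probe's delta orbit:
            // Delta_{n+1} = 2 x_n Delta_n + Delta_n^2 + delta
            delta[p] = 2.0 * x[n] * delta[p] + delta[p] * delta[p] + probes[p];

            // Evaluate the truncated series (2) at iteration n + 1 (Horner's scheme)
            std::complex<double> approx = 0.0;
            for (std::size_t j = coeff[n + 1].size(); j >= 1; j--) {
                approx = approx * probes[p] + coeff[n + 1][j - 1];
            }
            approx *= probes[p];

            // Abort as soon as one probe exceeds the relative error threshold
            if (std::abs(approx - delta[p]) > tol * std::abs(delta[p])) {
                return n;   // last iteration that was still within tolerance
            }
        }
    }
    return x.empty() ? 0 : x.size() - 1;
}
```

The returned value serves as $m$ in equation $(3)$: all pixels can start their delta orbits at iteration $m$ with $\Delta_m$ taken from $(2)$.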
We are missing one last piece of the puzzle. So far, we have dealt with $(1)$ as an abstract formula, assuming that the coefficients $a_{1,n}, a_{2,n}, a_{3,n}, \ldots$ have already been calculated. We still need to find a way to calculate these coefficients.

$n = 0$ is the easy case. Substituting $n = 0$ into $(1)$ and comparing with $\Delta_0 = \delta$ reveals:
\begin{align*}
a_{1,0} &= 1 \nonumber \\
a_{j,0} &= 0 \;\;\text{for}\;\; j \geq 2 \nonumber
\end{align*}
For $n \geq 1$ we find the coefficients by applying the delta iteration scheme to
\begin{align*}
\Delta_n = \sum_{j = 1}^{k} a_{j,n} \delta^j
\end{align*}
This results in:
\begin{align*}
\sum_{j = 1}^{k} a_{j,n+1} \delta^j &= \Delta_{n+1} \\
&= 2 x_n \Delta_n + \Delta_n^2 + \Delta_0 \\
&= 2 x_n \Bigl(\; \sum_{j = 1}^{k} a_{j,n} \delta^j\; \Bigr) + \Bigl(\; \sum_{j = 1}^{k} a_{j,n} \delta^j \; \Bigr)^2 + \delta \\
&= \sum_{j = 1}^{k} 2 x_n a_{j,n} \delta^j + \sum_{i = 1}^{k}\sum_{j = 1}^{k} a_{i,n} a_{j,n} \delta^{i+j} + \delta
\end{align*}
Terms of order higher than $\delta^k$ are dropped, as they belong to the residual. Comparing coefficients reveals the relationships we are looking for:
\begin{align*}
a_{1,n+1} &= 2 x_n a_{1,n} + 1 \nonumber \\
a_{j,n+1} &= 2 x_n a_{j,n} + \sum_{i = 1}^{j-1} a_{i,n} a_{j-i,n} \;\;\text{for}\;\; j \geq 2 \nonumber
\end{align*}
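These relationships translate directly into code. The following is a minimal C++ sketch under the same assumptions as above; `advanceCoefficients` and `evalSeries` are hypothetical helpers that advance the coefficients $a_{1,n}, \ldots, a_{k,n}$ by one iteration and evaluate the truncated series $(2)$ for a given $\delta$.

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// One step of the coefficient recurrence: 'a[j-1]' holds a_{j,n}, 'xn' is the
// reference orbit value x_n. The coefficients a_{j,n+1} are computed in place;
// terms of order higher than delta^k are dropped (they belong to the residual).
void advanceCoefficients(std::vector<std::complex<double>> &a,
                         std::complex<double> xn)
{
    const std::size_t k = a.size();
    std::vector<std::complex<double>> next(k);

    // a_{1,n+1} = 2 x_n a_{1,n} + 1
    next[0] = 2.0 * xn * a[0] + 1.0;

    // a_{j,n+1} = 2 x_n a_{j,n} + sum_{i=1}^{j-1} a_{i,n} a_{j-i,n}   for j >= 2
    for (std::size_t j = 2; j <= k; j++) {
        std::complex<double> sum = 0.0;
        for (std::size_t i = 1; i <= j - 1; i++) {
            sum += a[i - 1] * a[j - i - 1];
        }
        next[j - 1] = 2.0 * xn * a[j - 1] + sum;
    }
    a = next;
}

// Evaluate the truncated series (2) for a given delta using Horner's scheme.
std::complex<double> evalSeries(const std::vector<std::complex<double>> &a,
                                std::complex<double> delta)
{
    std::complex<double> result = 0.0;
    for (std::size_t j = a.size(); j >= 1; j--) {
        result = result * delta + a[j - 1];
    }
    return result * delta;   // the series has no constant term
}
```

Starting from $a_{1,0} = 1$ and $a_{j,0} = 0$ for $j \geq 2$, calling `advanceCoefficients` once per reference iteration produces the per-iteration coefficients used by the probe-point test above, and `evalSeries` then yields $\Delta_m$ for every pixel's $\delta$.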