Discrete Derivatives

Figure: two points on a continuous curve separated by h, illustrating the difference quotient of a continuous function.

In calculus, the focus is on continuous functions. The derivative f' of a continuous function f is defined by an infinitesimal difference quotient. Continuous functions are nice to work with because they can be zoomed into forever without losing detail.

f'(x)=\lim_{h \to 0} \frac{f(x+h)-f(x)}{h}
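
To see the limit at work, here's a quick Python sketch (the function x^2 and the sample point 3 are just illustrative choices) that shrinks h and watches the difference quotient settle toward the true derivative:

def f(x):
    return x**2  # illustrative function; its true derivative is 2x

x = 3.0
for h in [1.0, 0.1, 0.01, 0.001]:
    quotient = (f(x + h) - f(x)) / h
    print(h, quotient)  # tends toward f'(3) = 6 as h shrinks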

However, a discrete series doesn't have the luxury of being continuous. When h becomes smaller than the spacing between entries, there are no terms left to fill the gap. A clearly defined value has to be used instead, usually the minimum spacing between two entries of the series. Let's call this step size \Delta x.

Figure: three points of a discrete series separated by \Delta x.

Also, because \Delta x won't be going to zero, it's necessary to keep the difference “balanced” about f(x) to give a better approximation. Taking the previous and next values means the difference spans 2\Delta x, which becomes the denominator.

f'(x) =\frac{f(x+\Delta x)-f(x-\Delta x)}{2\Delta x}
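
As a rough sketch of the central difference in practice (the sine samples and the spacing are just illustrative), the derivative can be estimated at every interior point of a sampled series:

import math

dx = 0.1
xs = [i * dx for i in range(100)]
series = [math.sin(x) for x in xs]

# Central difference at each interior index; both endpoints are lost.
central = [(series[i + 1] - series[i - 1]) / (2 * dx) for i in range(1, len(series) - 1)]

print(central[50], math.cos(xs[51]))  # the estimate should be close to the true derivative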

Unfortunately, this does mean losing some information at the ends of a finite discrete series, which isn't acceptable in every situation. For a discrete series arriving in real time, it's better to take a “backward difference”. This does introduce some bias: the backward difference at a local maximum will slant positive even though the true derivative there is zero, and vice versa at a local minimum. The backward difference has “inertia” while the central difference does not.

f'(x)=\frac{f(x)-f(x-\Delta x)}{\Delta x} \tag{1}
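
To make that bias concrete, here's a small sketch (the series values are made up, with an implicit step size of 1) comparing the two differences at a peak:

series = [0.0, 1.0, 4.0, 1.0, 0.0]  # made-up series with a peak at index 2
i = 2

backward = series[i] - series[i - 1]             # 3.0, slants positive at the peak
central = (series[i + 1] - series[i - 1]) / 2.0  # 0.0, matches the true slope

print(backward, central)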

Equation 1 defines the first derivative using a backward difference. This way, information about the latest entry in the series isn’t lost. Now, what about the second derivative? It’s the derivative of the first derivative, so replace f(x) with f'(x) in Equation 1.

\begin{aligned}
f''(x) &= \frac{f'(x)-f'(x-\Delta x)}{\Delta x} \\
&= \frac{(\frac{f(x)-f(x-\Delta x)}{\Delta x})-(\frac{f(x-\Delta x)-f(x-\Delta x-\Delta x)}{\Delta x})}{\Delta x} \\
&= \frac{f(x)-f(x-\Delta x)-f(x-\Delta x)+f(x-\Delta x-\Delta x)}{(\Delta x)^2} \\
&= \frac{f(x)-2f(x-\Delta x)+f(x-2\Delta x)}{(\Delta x)^2}
\end{aligned} \tag{2}
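
As a quick check of Equation 2, take f(x) = x^2 (an illustrative choice whose second derivative is exactly 2 everywhere) and evaluate the backward difference directly:

dx = 0.5
f = lambda x: x**2

x = 3.0
second = (f(x) - 2 * f(x - dx) + f(x - 2 * dx)) / dx**2
print(second)  # 2.0, exact for a parabola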

Plugging the derivative back into Equation 1 repeatedly reveals a pattern: the signs keep alternating and the coefficients are rows of Pascal's Triangle. Higher-order derivatives look farther backward in the series. The nth-order derivative is given by

f^{(n)}(x)=\frac{\sum_{i=0}^{n}(-1)^i\binom{n}{i}f(x-i\Delta x)}{(\Delta x)^n} \tag{3}

The (-1)^i factor in the summand flips the sign of each term in the expansion. \binom{n}{i} is the binomial coefficient, defined as \frac{n!}{i!(n-i)!}. The third factor, f(x-i\Delta x), reaches further and further back in the series as i grows. This general form will be handy for exploring any derivative of our hearts’ desire.
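
For example, expanding Equation 3 for n = 3 produces the 1, 3, 3, 1 row of Pascal's Triangle with alternating signs:

f'''(x)=\frac{f(x)-3f(x-\Delta x)+3f(x-2\Delta x)-f(x-3\Delta x)}{(\Delta x)^3}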

Two caveats: there must be enough old entries, and they must be temporally consistent. For example, fetching the latest entry in a real-time series and doing something with it can take long enough that a new entry gets added in the meantime; fetching the penultimate entry afterwards would actually fetch what used to be the latest entry. This is an easy trap to fall into with real-time series.

Ignoring the temporal consistency problem for now, here’s a Python implementation that uses a frozen-in-time copy of a stack and an implicit step size of 1.

import math

def nth_derivative(n, series):
    # series is a frozen-in-time copy of the stack, with the newest entry on top.
    # The step size is implicitly 1, so dividing by (delta x)^n is unnecessary.
    result = 0

    for i in range(0, n + 1):
        sign = (-1)**i                 # alternating sign
        coefficient = math.comb(n, i)  # binomial coefficient from Pascal's Triangle
        entry = series.pop()           # entry i steps back from the latest
        result += sign * coefficient * entry

    return result
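
A hedged usage sketch, assuming the series is a plain Python list with the newest entry last (the readings here are made up); passing a copy keeps the original intact, since nth_derivative pops entries off whatever it's given:

readings = [1.0, 2.0, 4.0, 7.0, 11.0]  # made-up series, newest entry last

velocity = nth_derivative(1, list(readings))      # 11.0 - 7.0 = 4.0
acceleration = nth_derivative(2, list(readings))  # 11.0 - 2*7.0 + 4.0 = 1.0

print(velocity, acceleration)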