The cheby package provides easy-to-use Chebychev interpolation of an arbitrary R function.
Chebychev interpolation approximates an arbitrary function on a closed interval using the Chebychev polynomials as a basis. In general, polynomial approximation is helpful when a function is costly to evaluate, so that a cheap polynomial surrogate can stand in for it in subsequent calculations.
Where polynomial approximation falls short is in guaranteeing the shape of the result.
If we want the resulting approximation to have particular properties (such as monotonicity or concavity), then we will need to use other methods. This package provides functionality to fit one-dimensional functions both via conventional Chebychev approximation, and by imposing constraints on the shape of the approximation.
The algorithms in this package draw heavily on Judd (1998).
Algorithm 6.2 of Judd (1998) outlines the steps required to produce the one-dimensional Chebychev interpolation of a given order on a closed interval.
Theorem 6.7.3 of Judd (1998) shows that this approximation is pointwise-convergent for any sufficiently smooth function as the order of the approximation increases.
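To fix ideas, the construction can be sketched in a few lines of base R. This is an illustrative re-implementation in the spirit of Algorithm 6.2, not the cheby internals; the name cheb_interp and its arguments are made up for this sketch. It computes the Chebychev nodes, evaluates the target there, and recovers the coefficients from the discrete orthogonality of the Chebychev polynomials:

```r
# Illustrative sketch of one-dimensional Chebychev interpolation (NOT cheby's code):
# approximate f on [a, b] with a degree-n Chebychev polynomial through m >= n+1 nodes.
cheb_interp <- function( f, a, b, n, m = n + 1 ) {
  k  <- 1:m
  z  <- -cos( (2 * k - 1) * pi / (2 * m) )    # Chebychev nodes on [-1, 1]
  x  <- (z + 1) * (b - a) / 2 + a             # nodes mapped to [a, b]
  y  <- sapply( x, f )                        # target evaluated at the nodes
  Tz <- cos( outer( acos(z), 0:n ) )          # T_j(z_k), using T_j(z) = cos(j * acos(z))
  cf <- colSums( Tz * y ) / colSums( Tz^2 )   # coefficients via discrete orthogonality
  function( xx ) {                            # return the approximation as a function
    zz <- 2 * (xx - a) / (b - a) - 1
    drop( cos( outer( acos(zz), 0:n ) ) %*% cf )
  }
}
apx <- cheb_interp( exp, 0, 1, 8 )
max( abs( apx( seq(0, 1, .01) ) - exp( seq(0, 1, .01) ) ) )   # very small
```

With a smooth target such as exp, even a modest order gives near machine-precision accuracy, which is the convergence behaviour the theorem describes.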
The basic command for Chebychev interpolation is d1.poly. For example, to compute the approximation of the natural logarithm:
base <- d1.poly( fn=log, range=c(.01,4), iOrder=6, iPts=7 )
This computes a 6th-order approximation to log on the interval [0.01, 4] (we cannot include zero, as log is not continuous at zero) using 7 grid points (the minimum possible). The output is a function that we can use straight away. We can check the approximation at a few points:
c( base(1), log(1) )
## [1] 0.009866295 0.000000000
c( base(2), log(2) )
## [1] 0.6928425 0.6931472
So the approximation is good to 2 decimal places for these values of x. We can also compare the two functions over the whole range:
plot( log, xlim=c(.01, 4), type='l', lty=2 )
plot( base, xlim=c(.01, 4), col='red', add=TRUE, lwd=2 )
So the approximation is very good for most of the approximation range, although given the discontinuity at zero it struggles for small values of x. We can also check how long the approximation takes to construct:
library(microbenchmark)
microbenchmark(base <- d1.poly( fn=log, range=c(.01,4), iOrder=6, iPts=7 ))
## Unit: milliseconds
## expr
## base <- d1.poly(fn = log, range = c(0.01, 4), iOrder = 6, iPts = 7)
## min lq median uq max neval
## 10.71071 10.85061 11.45069 11.95571 15.90621 100
However, the approximation is not strictly concave: it oscillates slightly around the approximated function. Note how the dashed log function shows up alternately above and below the approximation. In some numerical calculations, preserving shape properties of the function can be important. Later we will discuss how value function iteration will often fail if the continuation value function is not concave, as then there is no unique local interior maximum. Nor is this a problem which can be eliminated by simply increasing the order of the approximation; in fact, that makes the problem worse. A higher-order approximation improves the fit for small values of x, but the oscillation around the target function remains.
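The oscillation is easy to verify numerically. The base-R check below uses an ordinary least-squares polynomial fit rather than cheby's Chebychev interpolant, but it exhibits the same phenomenon: the residuals of a degree-6 polynomial fit to log alternate in sign across the range, so the fit repeatedly crosses the target.

```r
# Base-R illustration of the oscillation (does not use cheby): the residuals of a
# degree-6 least-squares polynomial fit to log change sign many times over the range.
x   <- seq( .01, 4, length.out = 201 )
fit <- lm( log(x) ~ poly( x, 6 ) )    # degree-6 polynomial least-squares fit
r   <- residuals( fit )
sum( diff( sign( r ) ) != 0 )         # number of sign changes: at least 7
```

At least 7 sign changes are guaranteed here, because least-squares residuals are orthogonal to every polynomial of degree at most 6.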
To extract more information about the approximation, set the details flag to TRUE.
base.deets <- d1.poly( fn=log, range=c(.01,4), iOrder=10, iPts=11, details=TRUE )
This returns a five-element list with elements:

fn: The polynomial approximation function; base.deets$fn can be called in exactly the same way as base above.

poly: The polynomial representation of the Chebychev approximation. This element is a polynomial from the polynom package, so it can be manipulated easily (see the documentation of that package for more details). Note that it is defined on [-1, 1], the canonical Chebychev interval; the rescaling to the approximation range is applied only in the fn element.

fn.deriv: The exact derivative of the polynomial approximation function, computed by differentiating the polynomial representation and translating to the approximation interval. This is very useful for optimization problems where derivatives are required. Plotting this function (not shown here) reveals regions where the derivative is increasing, confirming that the approximation is not globally concave. We can also compare this calculation to a finite-difference derivative (the values differ here because base, as last constructed in the benchmarking step, is a 6th-order approximation, while base.deets is 10th-order):

1e06 * ( base(1+1e-06) - base(1) )
## [1] 0.76435
base.deets$fn.deriv(1)
## [1] 1.112838

fn.deriv.poly: The polynomial representation of the derivative; the analogue of poly, but for fn.deriv.

residuals: The residuals of the approximation. All zero if iPts is one more than iOrder.

The "standard" usage of d1.poly above integrates function calculation and approximation. This is convenient: it simply allows one to pass a function to the approximation routine and get a function back. But it is not very flexible, as it requires a self-contained function that is evaluated separately at each of the points in the grid. In many applications this is not efficient. For example, if the target function contains a computationally demanding common component, it may be quicker to calculate that component once and then supply the resulting function values directly.
Here is an example of how to use this syntax:
grid.pts <- d1.grid( c( .01, 4), 11 )   # The grid of Chebychev nodes
log.vals <- sapply( grid.pts, log )     # The target function evaluated at the approximation nodes
base.grid <- d1.poly( fn.vals=log.vals, grid=grid.pts, range=c(.01, 4), iOrder=10, iPts=11, details=TRUE )
The inclusion of the argument grid here is superfluous: without it, d1.poly assumes that the function values are evaluated on a grid of Chebychev nodes, which is exactly what d1.grid produces here. If you wish to evaluate the function on a different grid of nodes, use this argument to specify that grid.
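For reference, the Chebychev nodes on an interval [a, b] follow a standard closed-form recipe, sketched below in base R. The helper name cheb_nodes is made up for this illustration, and the ordering or orientation d1.grid uses may differ:

```r
# Hand-rolled Chebychev nodes on [a, b] (for comparison with d1.grid; illustrative only).
cheb_nodes <- function( a, b, m ) {
  z <- -cos( (2 * (1:m) - 1) * pi / (2 * m) )   # roots of the order-m Chebychev polynomial on [-1, 1]
  (z + 1) * (b - a) / 2 + a                     # linear map from [-1, 1] to [a, b]
}
cheb_nodes( .01, 4, 11 )
```

The nodes cluster near the endpoints of the interval, which is what controls the oscillation of polynomial interpolation relative to an evenly spaced grid.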
One can pass function parameters using the argument fn.opts. However, this requires that the target function have all its parameters enter through a single argument named opts (using a list to pass multiple parameters where required). For example, to compute fifth-order approximations to the function A * k^alpha under two different parameterizations:
target <- function( k, opts ) return( opts$A * k ^ opts$alpha )
# The target function
apx.1 <- d1.poly( target, c(.001,2), 5, 6, fn.opts=list(A=1, alpha=.7) )
apx.2 <- d1.poly( target, c(.001,2), 5, 6, fn.opts=list(A=1.1, alpha=.2) )
# The approximations
plot( function(x) target(x, opts=list(A=1, alpha=.7) ), xlim=c(.001,2), lty=2, ylab='' )
plot( function(x) target(x, opts=list(A=1.1, alpha=.2) ), xlim=c(.001,2), lty=2, add=TRUE )
plot( apx.1, xlim=c(.001,2), col='red', lwd=2, add=TRUE )
plot( apx.2, xlim=c(.001,2), col='red', lwd=2, add=TRUE )
Functions of several variables can be approximated using dn.poly. This works much the same as d1.poly, but currently only supports functions of two variables.
For details of the theory, see section 6.11 of Judd (1998). The basic idea of shape-preserving Chebychev approximation is to minimize the (squared) errors of a polynomial fit on an approximation grid, subject to constraints on the shape of the fit. This is therefore a constrained optimization problem: the objective is the approximation error, the controls are the polynomial coefficients, and the constraints restrict the shape of the polynomial on a secondary grid (typically expressed as restrictions on sums of the polynomial coefficients).
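This constrained least-squares formulation can be sketched directly in base R with constrOptim. The sketch below is not sp1.poly's implementation: it fits an ordinary degree-4 polynomial (not a Chebychev basis), and the degree and grid sizes are arbitrary choices for illustration. The shape constraints (here, concavity: second derivative non-positive at each point of a secondary grid) are linear in the coefficients, so they fit constrOptim's ui %*% theta - ci >= 0 form:

```r
# Minimal sketch of shape-constrained polynomial fitting (NOT sp1.poly's code):
# minimize squared fitting error subject to concavity on a secondary "shape" grid.
a <- .01; b <- 4; deg <- 4
xf <- seq( a, b, length.out = 21 )    # fitting grid
ys <- log( xf )                       # target values on the fitting grid
X  <- outer( xf, 0:deg, `^` )         # Vandermonde matrix: column j is xf^j
xs <- seq( a, b, length.out = 15 )    # secondary grid for the shape constraints
# Row j of ui gives -P''(xs[j]) as a linear function of the coefficients,
# so ui %*% cf >= 0 enforces P'' <= 0 (concavity) at every shape-grid point.
ui <- -outer( xs, 0:deg, function( x, k ) k * (k - 1) * x^pmax( k - 2, 0 ) )
obj <- function( cf ) sum( (X %*% cf - ys)^2 )          # squared fitting error
gr  <- function( cf ) drop( 2 * t(X) %*% (X %*% cf - ys) )
c0  <- c( 0, 0, -.01, rep( 0, deg - 2 ) )               # strictly concave start point
sol <- constrOptim( c0, obj, gr, ui = ui, ci = rep( 0, length(xs) ) )
min( ui %*% sol$par )   # non-negative: the fit is (weakly) concave on the shape grid
```

Note that concavity is only enforced at the shape-grid points, not everywhere on the interval; a denser shape grid tightens the restriction at the cost of a harder optimization problem.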
To approximate a univariate function using a shape-preserving polynomial, we can use sp1.poly. This function takes the same inputs as d1.poly, but also requires details of the shape restrictions to impose on the polynomial approximation. These details are provided through the arguments n.shape and sign.deriv. The former is a vector giving the number of Chebychev nodes at which each shape restriction should be imposed. The latter is a vector of +1/-1 values determining the sign of the corresponding derivative restriction. For example, to restrict the approximation to be increasing (first derivative positive) at 3 points and concave (second derivative negative) at 7, one would set n.shape=c(3,7) and sign.deriv=c(1,-1).
Here’s a complete example:
base.sp <- sp1.poly( fn=log, range=c(.01,4), iOrder=10, iPts=11, n.shape=c(3,21), sign.deriv=c(1,-1) )
## Sucess: nlopt terminated after 63 iterations
## NLOPT_XTOL_REACHED: Optimization stopped because xtol_rel or xtol_abs (above) was reached.
plot( log, xlim=c(.01,4), lty=2 )
plot( base.sp, xlim=c(.01,4), add=TRUE, lwd=2, col='red' )
plot( base, xlim=c(.01,4), add=TRUE, lwd=2, col='blue' )
Here, we impose concavity at 21 points in the approximation interval and monotonicity at only 3 (as the standard Chebychev approximation delivers near-monotonicity already). The plot shows that this generates concavity for small values of x, where the unconstrained approximation oscillates.
Shape-preserving polynomial approximation is particularly useful when the true function has known properties that influence subsequent calculations. For example, imagine that the approximated function is a value function in a dynamic programming problem: if concavity is not preserved, the maximization step may have no unique interior solution.
Judd (1998), Numerical Methods in Economics, MIT Press