Principal differential analysis: Estimating coefficients and linearly independent solutions of a linear differential operator with covariates for functional data
Functional data, often referred to as data curves, arise in many fields. It is often believed that the data curves correspond to a physical process described well by a differential equation; that is, each data curve is the sum of a random error term and a smooth curve in the null space of a linear differential operator. Ramsay (1996) first proposed the method of regularized principal differential analysis for fitting a differential equation to a collection of noisy data curves. A smooth estimate of the coefficients of the linear differential operator is obtained by minimizing a penalized sum of the squared norms of the residuals. In this context, the residual is the part of the data curve that is not annihilated by the linear differential operator. Once the coefficients of the linear differential operator are estimated, a basis for the null space is computed using iterative methods from numerical analysis for solving linear differential equations. Finally, a smooth low-dimensional approximation to the data curves is obtained by regressing the data curves on the null-space basis. This thesis extends principal differential analysis to allow the coefficients in the linear differential equation to depend smoothly on covariates. The penalty of Eilers and Marx (1996) is used to impose smoothness. The estimating equations for the coefficients in principal differential analysis with covariates are derived; these are implemented in S-PLUS and illustrated on crash-test data.
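The pipeline the abstract describes (estimate the operator's coefficients by least squares on the residuals, numerically solve for a null-space basis, then regress each curve on that basis) can be sketched in Python. This is a minimal, idealized illustration, not the thesis's S-PLUS implementation: it uses noiseless curves, a constant-coefficient second-order operator L x = x'' + b1 x' + b0 x, finite differences in place of smoothing, and omits the roughness penalty and covariate dependence that the thesis adds. All variable names are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 400, 2 * np.pi
t = np.linspace(0, T, n)
h = t[1] - t[0]

# Idealized noiseless data curves lying in the null space of
# L x = x'' + x  (true coefficients: b0 = 1, b1 = 0).
curves = [a * np.sin(t) + b * np.cos(t)
          for a, b in rng.normal(size=(5, 2))]

def derivs(x):
    """Central finite differences; interior points only."""
    d1 = (x[2:] - x[:-2]) / (2 * h)
    d2 = (x[2:] - 2 * x[1:-1] + x[:-2]) / h ** 2
    return x[1:-1], d1, d2

# Step 1: estimate (b0, b1) by pooled least squares, minimizing
# the sum over curves of ||x'' + b1 x' + b0 x||^2 (no penalty here).
X, y = [], []
for c in curves:
    x0, x1, x2 = derivs(c)
    X.append(np.column_stack([x0, x1]))
    y.append(-x2)
b0, b1 = np.linalg.lstsq(np.vstack(X), np.concatenate(y), rcond=None)[0]

# Step 2: null-space basis by RK4 integration of u'' = -b1 u' - b0 u
# from two linearly independent initial conditions.
def solve(u0, v0):
    f = lambda s: np.array([s[1], -b1 * s[1] - b0 * s[0]])
    u = np.empty(n)
    s = np.array([u0, v0])
    u[0] = u0
    for i in range(n - 1):
        k1 = f(s)
        k2 = f(s + h / 2 * k1)
        k3 = f(s + h / 2 * k2)
        k4 = f(s + h * k3)
        s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        u[i + 1] = s[0]
    return u

B = np.column_stack([solve(1.0, 0.0), solve(0.0, 1.0)])

# Step 3: low-dimensional approximation by regressing a data curve
# on the two basis solutions.
coef = np.linalg.lstsq(B, curves[0], rcond=None)[0]
fit = B @ coef
```

With noisy curves one would smooth (or penalize, as in the thesis) before differentiating, since finite differences amplify noise; and with covariates, b0 and b1 would become smooth functions of the covariate rather than scalars.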
Jin, Seoweon, "Principal differential analysis: Estimating coefficients and linearly independent solutions of a linear differential operator with covariates for functional data" (2006). ETD Collection for University of Texas at El Paso. AAI1439488.