State space models
Consider a VAR(1) process for a vector of variables $z_t$:

$$z_t = A z_{t-1} + \eta_t, \quad \eta_t \sim N(0, Q)$$

However, some variables of $z_t$ may not be observed. We can define a matrix $C$ that selects the observed variables:

$$y_t = C z_t$$

Example: $z_t = (x_t', y_t')'$, with $x_t$ - unobserved, $y_t$ - observed.

The case where the system is linear and the errors are Gaussian is the Gaussian linear state space model, which, in turn, is a special case of the class of (non-linear) state space models.

Simplest example: AR(1) model with measurement error:

$$x_t = \rho x_{t-1} + \eta_t, \qquad y_t = x_t + \varepsilon_t$$
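A minimal Python sketch of the AR(1)-plus-measurement-error model (the function name and the parameter values are illustrative, not from the notes):

```python
import numpy as np

def simulate_ar1_with_noise(rho=0.8, sigma_eta=1.0, sigma_eps=0.5, T=200, seed=0):
    """Simulate x_t = rho*x_{t-1} + eta_t (unobserved state) and
    y_t = x_t + eps_t (noisy observation)."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    # draw x_0 from the stationary distribution N(0, sigma_eta^2 / (1 - rho^2))
    prev = rng.normal(0.0, sigma_eta / np.sqrt(1 - rho**2))
    for t in range(T):
        prev = rho * prev + rng.normal(0.0, sigma_eta)
        x[t] = prev
    y = x + rng.normal(0.0, sigma_eps, size=T)
    return x, y

x, y = simulate_ar1_with_noise()
```

Only $y_t$ would be available to the econometrician; the simulated $x_t$ is kept here so the filtered and smoothed estimates below can be compared to the truth.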
Gaussian linear state space model

$$x_t = A x_{t-1} + \eta_t, \quad \eta_t \sim \text{iid } N(0, Q) \quad \text{(state equation)}$$
$$y_t = C x_t + \varepsilon_t, \quad \varepsilon_t \sim \text{iid } N(0, R) \quad \text{(observation equation)}$$

with $x_0 \sim N(x_{0|0}, P_{0|0})$, and $\{\eta_t\}$, $\{\varepsilon_t\}$, $x_0$ mutually independent.

Note 1: This is a time-invariant model. This can be relaxed by letting some or all of the matrices $A$, $C$, $Q$, $R$ vary with $t$.

Note 2: We can add an intercept in one or both of the state and observation equations.
Note 3:
Note 4:
Note 5:
Autocovariances of $x_t$: for a stationary state process, $\Gamma_x(0)$ solves $\Gamma_x(0) = A \Gamma_x(0) A' + Q$, and $\Gamma_x(h) = A^h \Gamma_x(0)$ for $h \geq 0$.

Note: We get the autocovariances of $y_t$ from $\Gamma_y(h) = C \Gamma_x(h) C' + \mathbf{1}(h = 0) R$.

Stationarity of $x_t$ requires all eigenvalues of $A$ to lie strictly inside the unit circle.
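The stationary variance can be computed with the vec identity $\mathrm{vec}(A \Gamma A') = (A \otimes A)\,\mathrm{vec}(\Gamma)$, so $\mathrm{vec}(\Gamma_x(0)) = (I - A \otimes A)^{-1} \mathrm{vec}(Q)$. A sketch (the function name is mine):

```python
import numpy as np

def stationary_state_cov(A, Q):
    """Solve Gamma = A Gamma A' + Q via vec(Gamma) = (I - A kron A)^{-1} vec(Q).
    Requires all eigenvalues of A strictly inside the unit circle."""
    n = A.shape[0]
    assert np.max(np.abs(np.linalg.eigvals(A))) < 1, "state process not stationary"
    vec_g = np.linalg.solve(np.eye(n**2) - np.kron(A, A), Q.flatten(order="F"))
    return vec_g.reshape(n, n, order="F")

# scalar AR(1) check: Gamma should equal sigma^2 / (1 - rho^2)
rho, sig2 = 0.8, 1.0
G = stationary_state_cov(np.array([[rho]]), np.array([[sig2]]))
```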
Marginal distribution of $y_{1:T}$
Note: see HW4 part 1 for an alternative way to write the system. Check "Efficient simulation and integrated likelihood estimation in state space models" for applications of that approach.
Applications:

- likelihood function: distribution of $y_1, \ldots, y_T$
- forecasting: distribution of $y_{T+h}$, given $y_1, \ldots, y_T$
Joint distribution of $x_{0:T}$ and $y_{1:T}$
Moments of the conditional distribution of $x_{0:T}$ given $y_{1:T}$

Note: The conditional variance of $x_{0:T}$ given $y_{1:T}$ does not depend on the realized values of $y_{1:T}$ - a general property of jointly Gaussian vectors.
Applications:

- filtering: distribution of $x_t$, given $y_{1:t}$
- state prediction: distribution of $x_{t+h}$, given $y_{1:t}$
- smoothing: distribution of $x_t$, given $y_{1:T}$
Kalman filter

Let $x_{t|s} = \mathrm{E}(x_t \mid y_{1:s})$ and $P_{t|s} = \mathrm{var}(x_t \mid y_{1:s})$. For $t = 1, \ldots, T$:

- optimal one-step-ahead forecast of the state: $x_{t|t-1} = A x_{t-1|t-1}$, with variance $P_{t|t-1} = A P_{t-1|t-1} A' + Q$
- optimal one-step-ahead forecast of the observation: $y_{t|t-1} = C x_{t|t-1}$, with variance $F_t = C P_{t|t-1} C' + R$
- optimal update of the forecast of the state: $x_{t|t} = x_{t|t-1} + K_t (y_t - y_{t|t-1})$, with variance $P_{t|t} = (I - K_t C) P_{t|t-1}$

where $K_t = P_{t|t-1} C' F_t^{-1}$ is the Kalman gain.
Derivation for $t = 1$:

- step 1: Compute the joint distribution of $(x_1, y_1)$ using the state and observation equations together with the distribution of $x_0$. Note that $(x_1, y_1)$ is jointly Gaussian, so its distribution is fully characterized by its first two moments.
- step 2: Compute the marginal distributions of $x_1$ and $y_1$.
- step 3: Compute the conditional distribution of $x_1$ given $y_1$.
Derivation for any $t$:

- write $x_t$ and $y_t$ in terms of $x_{t-1}$, $\eta_t$, and $\varepsilon_t$
- using the assumed conditional distribution of $x_{t-1}$ given $y_{1:t-1}$, compute the joint and marginal conditional distributions of $x_t$ and $y_t$ given $y_{1:t-1}$
- from the joint (conditional) distribution, compute the conditional distribution of $x_t$ given $y_t$ and $y_{1:t-1}$. This will give you the conditional distribution of $x_t$ given $y_{1:t}$
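The recursions can be sketched in Python, assuming the standard form $x_t = A x_{t-1} + \eta_t$ (variance $Q$), $y_t = C x_t + \varepsilon_t$ (variance $R$); the function and variable names are illustrative:

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Kalman filter for x_t = A x_{t-1} + eta_t, y_t = C x_t + eps_t.
    y: (T, k) array of observations; x0, P0: moments of x_{0|0}.
    Returns filtered moments x_{t|t}, P_{t|t} and predictions x_{t|t-1}, P_{t|t-1}."""
    T, n = y.shape[0], A.shape[0]
    x_filt = np.zeros((T, n)); P_filt = np.zeros((T, n, n))
    x_pred = np.zeros((T, n)); P_pred = np.zeros((T, n, n))
    x, P = x0, P0
    for t in range(T):
        xp = A @ x                         # x_{t|t-1}
        Pp = A @ P @ A.T + Q               # P_{t|t-1}
        F = C @ Pp @ C.T + R               # variance of the forecast of y_t
        v = y[t] - C @ xp                  # forecast error
        K = Pp @ C.T @ np.linalg.inv(F)    # Kalman gain
        x = xp + K @ v                     # x_{t|t}
        P = (np.eye(n) - K @ C) @ Pp       # P_{t|t}
        x_pred[t], P_pred[t] = xp, Pp
        x_filt[t], P_filt[t] = x, P
    return x_filt, P_filt, x_pred, P_pred
```

Note that the update necessarily (weakly) reduces the state variance: $P_{t|t} \preceq P_{t|t-1}$.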
Likelihood function with the Kalman filter

The joint density of $y_{1:T}$ factors into one-step-ahead predictive densities:

$$p(y_{1:T}) = \prod_{t=1}^{T} p(y_t \mid y_{1:t-1})$$

where each term is Gaussian: $y_t \mid y_{1:t-1} \sim N(y_{t|t-1}, F_t)$, with $y_{t|t-1}$ and $F_t$ produced by the Kalman filter. Therefore

$$\log L = -\frac{1}{2} \sum_{t=1}^{T} \left( k \log 2\pi + \log |F_t| + (y_t - y_{t|t-1})' F_t^{-1} (y_t - y_{t|t-1}) \right)$$

where $k$ is the dimension of $y_t$.

Note: equivalent to, but much more efficient than, computing the joint distribution of $y_{1:T}$ as one large multivariate normal.
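A sketch of the prediction-error decomposition of the log-likelihood, under the same assumed notation ($A$, $C$, $Q$, $R$; function name is mine):

```python
import numpy as np

def kalman_loglik(y, A, C, Q, R, x0, P0):
    """Gaussian log-likelihood via the prediction-error decomposition:
    log L = sum_t log N(y_t; C x_{t|t-1}, F_t)."""
    T, k = y.shape
    x, P = x0, P0
    ll = 0.0
    for t in range(T):
        xp = A @ x; Pp = A @ P @ A.T + Q
        F = C @ Pp @ C.T + R
        v = y[t] - C @ xp
        ll += -0.5 * (k * np.log(2 * np.pi) + np.linalg.slogdet(F)[1]
                      + v @ np.linalg.solve(F, v))
        K = Pp @ C.T @ np.linalg.inv(F)
        x = xp + K @ v
        P = (np.eye(A.shape[0]) - K @ C) @ Pp
    return ll
```

For $T = 1$ this reduces to the density of $y_1 \sim N(C A x_{0|0},\, C (A P_{0|0} A' + Q) C' + R)$, which gives a simple hand check.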
Kalman smoother

Running backward from $t = T - 1$ to $t = 0$:

$$x_{t|T} = x_{t|t} + J_t (x_{t+1|T} - x_{t+1|t}), \qquad P_{t|T} = P_{t|t} + J_t (P_{t+1|T} - P_{t+1|t}) J_t'$$

where $J_t = P_{t|t} A' P_{t+1|t}^{-1}$.

Note 1: for $t = T$, the smoothed moments coincide with the filtered ones, which initializes the backward recursion.

Note 2: equivalent to, but much more efficient than, computing the block diagonal of the conditional distribution of $x_{0:T}$ given $y_{1:T}$ directly.
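A sketch of the backward (Rauch-Tung-Striebel) pass, taking a forward filter's output as input; all names are illustrative, and the demo at the bottom runs a small inline filter to produce those inputs:

```python
import numpy as np

def kalman_smoother(x_filt, P_filt, x_pred, P_pred, A):
    """Backward recursion: J_t = P_{t|t} A' P_{t+1|t}^{-1},
    x_{t|T} = x_{t|t} + J_t (x_{t+1|T} - x_{t+1|t}),
    P_{t|T} = P_{t|t} + J_t (P_{t+1|T} - P_{t+1|t}) J_t'.
    x_pred[t], P_pred[t] hold the one-step predictions x_{t|t-1}, P_{t|t-1}."""
    T = x_filt.shape[0]
    x_sm, P_sm = x_filt.copy(), P_filt.copy()   # at t = T-1: smoothed = filtered
    for t in range(T - 2, -1, -1):
        J = P_filt[t] @ A.T @ np.linalg.inv(P_pred[t + 1])
        x_sm[t] = x_filt[t] + J @ (x_sm[t + 1] - x_pred[t + 1])
        P_sm[t] = P_filt[t] + J @ (P_sm[t + 1] - P_pred[t + 1]) @ J.T
    return x_sm, P_sm

# demo: scalar AR(1)-plus-noise model, forward filter first
A = np.array([[0.8]]); C = np.array([[1.0]]); Q = np.array([[1.0]]); R = np.array([[0.5]])
y = np.array([[1.0], [0.5], [-0.2]])
x, P = np.zeros(1), np.eye(1)
xf, Pf, xp, Pp = [], [], [], []
for t in range(len(y)):
    a, Pa = A @ x, A @ P @ A.T + Q
    F = C @ Pa @ C.T + R
    K = Pa @ C.T @ np.linalg.inv(F)
    x, P = a + K @ (y[t] - C @ a), (np.eye(1) - K @ C) @ Pa
    xp.append(a); Pp.append(Pa); xf.append(x); Pf.append(P)
x_sm, P_sm = kalman_smoother(np.array(xf), np.array(Pf), np.array(xp), np.array(Pp), A)
```

Smoothing uses the full sample, so the smoothed variances are weakly smaller than the filtered ones.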
Estimation

What are we estimating? Collect the unknown parameters of $A$, $C$, $Q$, $R$ (and, if needed, of the initial conditions) in a vector $\theta$.

MLE: $\hat{\theta} = \arg\max_{\theta} \log L(\theta; y_{1:T})$, where the log-likelihood is evaluated with the Kalman filter and maximized numerically.
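A sketch of MLE for the AR(1)-plus-noise example using `scipy.optimize.minimize`; the parameter transformations ($\tanh$ for $\rho$, logs for the standard deviations) keep the numerical search unconstrained. All names and values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(theta, y):
    """Negative log-likelihood of x_t = rho x_{t-1} + eta_t, y_t = x_t + eps_t,
    computed with a scalar Kalman filter.
    theta = (arctanh(rho), log sd_eta, log sd_eps)."""
    rho = np.tanh(theta[0])
    q, r = np.exp(2 * theta[1]), np.exp(2 * theta[2])
    x, P = 0.0, q / (1 - rho**2)              # start from the stationary distribution
    ll = 0.0
    for yt in y:
        xp, Pp = rho * x, rho**2 * P + q      # predict
        F = Pp + r                            # forecast variance of y_t
        v = yt - xp                           # forecast error
        ll += -0.5 * (np.log(2 * np.pi) + np.log(F) + v**2 / F)
        K = Pp / F                            # Kalman gain
        x, P = xp + K * v, (1 - K) * Pp       # update
    return -ll

# simulate data with rho = 0.8, sd_eta = 1, sd_eps = 0.5
rng = np.random.default_rng(1)
T, rho0 = 400, 0.8
xs = np.empty(T)
prev = rng.normal(0.0, 1.0 / np.sqrt(1 - rho0**2))
for t in range(T):
    prev = rho0 * prev + rng.normal()
    xs[t] = prev
y = xs + 0.5 * rng.normal(size=T)

start = np.array([np.arctanh(0.5), 0.0, np.log(0.5)])
res = minimize(negloglik, start, args=(y,), method="Nelder-Mead")
rho_hat = np.tanh(res.x[0])
```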
Identification

If we replace $x_t$ with $M x_t$ (for any invertible matrix $M$), $\eta_t$ with $M \eta_t$, $A$ with $M A M^{-1}$, $C$ with $C M^{-1}$, and $Q$ with $M Q M'$, the process for $y_t$ remains unchanged, and the likelihood function remains the same.

Therefore, unless there are (sufficient) restrictions on $A$, $C$, $Q$, and $R$, their parameters cannot be identified - multiple values of $\theta$ imply the same value of the likelihood.

A simple way to check for local identification at a given value of $\theta$ is to compute the Jacobian matrix of the model-implied reduced-form moments w.r.t. $\theta$ and check that it has full column rank.
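The observational equivalence under a similarity transform can be verified numerically: the implied autocovariances of $y_t$ (which determine the Gaussian likelihood) are identical for the original and transformed parameter matrices. A sketch, with illustrative names and values:

```python
import numpy as np

def obs_autocovs(A, C, Q, R, max_h=5):
    """Autocovariances of y_t implied by a stationary model
    x_t = A x_{t-1} + eta_t (var Q), y_t = C x_t + eps_t (var R):
    Gamma_y(h) = C A^h Gamma_x(0) C' + 1(h = 0) R."""
    n = A.shape[0]
    vec_g = np.linalg.solve(np.eye(n**2) - np.kron(A, A), Q.flatten(order="F"))
    Gx = vec_g.reshape(n, n, order="F")
    return [C @ np.linalg.matrix_power(A, h) @ Gx @ C.T + (h == 0) * R
            for h in range(max_h + 1)]

A = np.array([[0.7, 0.2], [0.0, 0.5]])
C = np.array([[1.0, 0.5]])
Q = np.eye(2)
R = np.array([[0.3]])
M = np.array([[2.0, 1.0], [0.0, 1.0]])      # any invertible matrix
Mi = np.linalg.inv(M)
orig = obs_autocovs(A, C, Q, R)
transformed = obs_autocovs(M @ A @ Mi, C @ Mi, M @ Q @ M.T, R)
```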
Forecasting

Optimal forecast given information at time $T$: $x_{T+h|T} = A^h x_{T|T}$ and $y_{T+h|T} = C x_{T+h|T}$.

Computing the variance of the forecast errors: using that

$$x_{T+h} = A^h x_T + \sum_{j=0}^{h-1} A^j \eta_{T+h-j}$$

and

$$y_{T+h} = C x_{T+h} + \varepsilon_{T+h}$$

Therefore, the forecast error is

$$y_{T+h} - y_{T+h|T} = C \left( A^h (x_T - x_{T|T}) + \sum_{j=0}^{h-1} A^j \eta_{T+h-j} \right) + \varepsilon_{T+h}$$

The MSE of the state forecast is

$$P_{T+h|T} = A^h P_{T|T} (A^h)' + \sum_{j=0}^{h-1} A^j Q (A^j)'$$

and the MSE of the observation forecast is $C P_{T+h|T} C' + R$.
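These formulas translate directly into code; a sketch under the same assumed notation (function name is mine):

```python
import numpy as np

def forecast(A, C, Q, R, x_TT, P_TT, h):
    """h-step-ahead forecast of y and its MSE, from the filtered moments
    x_{T|T}, P_{T|T}:
    y_{T+h|T} = C A^h x_{T|T}
    P_{T+h|T} = A^h P_{T|T} (A^h)' + sum_{j=0}^{h-1} A^j Q (A^j)'
    MSE(y)    = C P_{T+h|T} C' + R."""
    Ah = np.linalg.matrix_power(A, h)
    P = Ah @ P_TT @ Ah.T
    for j in range(h):
        Aj = np.linalg.matrix_power(A, j)
        P += Aj @ Q @ Aj.T
    y_hat = C @ Ah @ x_TT
    mse_y = C @ P @ C.T + R
    return y_hat, mse_y
```

For the scalar AR(1)-plus-noise model the sums can be checked by hand, which the test below does for $h = 2$.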
Accounting for parameter uncertainty

So far, forecast errors were computed assuming that the parameters are known. In practice they are estimated, so parameter uncertainty is also present (it disappears only asymptotically).

total uncertainty $\approx$ average forecast MSE over the distribution of $\hat{\theta}$ $+$ variance of the point forecast over the distribution of $\hat{\theta}$

- Evaluate the first term by generating draws from the asymptotic distribution of $\hat{\theta}$ and computing the average value of $\mathrm{MSE}(\theta)$ over the draws.
- Evaluate the second term as the sample variance of the point forecast $y_{T+h|T}(\theta)$ across the draws from the asymptotic distribution of $\hat{\theta}$.
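A Monte Carlo sketch of this decomposition; the function name, the toy `mse_fn`/`point_fn`, and all numbers are hypothetical illustrations, not from the notes:

```python
import numpy as np

def total_forecast_uncertainty(theta_hat, V_hat, mse_fn, point_fn,
                               n_draws=1000, seed=0):
    """Draw theta from N(theta_hat, V_hat) (asymptotic distribution of the MLE),
    average MSE(theta) over the draws, and add the variance of the point
    forecast across the draws (the parameter-uncertainty contribution)."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(theta_hat, V_hat, size=n_draws)
    avg_mse = np.mean([mse_fn(th) for th in draws])
    param_var = np.var([point_fn(th) for th in draws])
    return avg_mse + param_var

# toy example: theta = (rho,), one-step forecast of y from a known state x_T = 1,
# so the point forecast is rho * x_T; MSE(theta) is held fixed at Q + R = 1.5
tot = total_forecast_uncertainty(np.array([0.8]), np.array([[0.01]]),
                                 mse_fn=lambda th: 1.5,
                                 point_fn=lambda th: th[0] * 1.0)
```

Here the total exceeds the fixed-parameter MSE of 1.5 by roughly the variance of the $\rho$ draws.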
Linear state space model

- reduced-form (statistical) models: the parameters are the unrestricted elements of $A$, $C$, $Q$, $R$; they are of little (or no) interest on their own
- structural (theoretical) models: the reduced-form parameters are functions of a (often much) smaller number of structural parameters, which have economic meaning and are (or could be) of interest on their own; estimation is usually harder (the likelihood is a non-linear function of the structural parameters)