GHKFilter
Copyright 2015 Roger R Labbe Jr.
FilterPy library. http://github.com/rlabbe/filterpy
Documentation at: https://filterpy.readthedocs.org
Supporting book at: https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
This is licensed under an MIT license. See the readme.MD file for more information.
class filterpy.gh.GHKFilter(x, dx, ddx, dt, g, h, k)

Implements the g-h-k filter.
Parameters:
- x : 1D np.array or scalar
Initial value for the filter state. Each value can be a scalar or a np.array. You can use a scalar for x0. If order > 0, then 0.0 is assumed for the higher order terms.
x[0] is the value being tracked
x[1] is the first derivative (for order 1 and 2 filters)
x[2] is the second derivative (for order 2 filters)
- dx : 1D np.array or scalar
Initial value for the derivative of the filter state.
- ddx : 1D np.array or scalar
Initial value for the second derivative of the filter state.
- dt : scalar
time step
- g : float
filter g gain parameter.
- h : float
filter h gain parameter.
- k : float
filter k gain parameter.
References
Brookner, “Tracking and Kalman Filters Made Easy”. John Wiley and Sons, 1998.
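The predict/update cycle behind this class can be sketched in plain Python using the standard g-h-k equations from Brookner. This is a minimal standalone sketch, not the library's implementation; the helper name `ghk_step` and the numeric values are illustrative only:

```python
def ghk_step(x, dx, ddx, z, dt, g, h, k):
    """One predict/update cycle of a g-h-k filter (Brookner's equations)."""
    # Predict the state forward one time step, assuming constant acceleration.
    x_pred = x + dx * dt + 0.5 * ddx * dt**2
    dx_pred = dx + ddx * dt
    ddx_pred = ddx

    # Residual: difference between the measurement and the prediction.
    y = z - x_pred

    # Correct each state component with its gain.
    x_new = x_pred + g * y
    dx_new = dx_pred + h * y / dt
    ddx_new = ddx_pred + 2.0 * k * y / dt**2
    return x_new, dx_new, ddx_new

# Track a signal starting at rest; one noiseless measurement at z = 1.0.
x, dx, ddx = ghk_step(0.0, 0.0, 0.0, z=1.0, dt=1.0, g=0.5, h=0.3, k=0.1)
# x = 0.5, dx = 0.3, ddx = 0.2
```

The gains g, h, and k control how much of the residual is fed back into the position, velocity, and acceleration estimates respectively.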
Attributes:
- x : 1D np.array or scalar
filter state
- dx : 1D np.array or scalar
derivative of the filter state.
- ddx : 1D np.array or scalar
second derivative of the filter state.
- x_prediction : 1D np.array or scalar
predicted filter state
- dx_prediction : 1D np.array or scalar
predicted derivative of the filter state.
- ddx_prediction : 1D np.array or scalar
predicted second derivative of the filter state.
- dt : scalar
time step
- g : float
filter g gain parameter.
- h : float
filter h gain parameter.
- k : float
filter k gain parameter.
- y : np.array, or scalar
residual (difference between measurement and prior)
- z : np.array, or scalar
measurement passed into update()
__init__(x, dx, ddx, dt, g, h, k)
update(z, g=None, h=None, k=None)

Performs the g-h-k filter predict and update step on the measurement z.
On return, self.x, self.dx, self.ddx, self.y, and self.x_prediction will have been updated with the results of the computation. For convenience, self.x and self.dx are returned in a tuple.
Parameters:
- z : scalar
the measurement
- g : scalar (optional)
Override the fixed self.g value for this update
- h : scalar (optional)
Override the fixed self.h value for this update
- k : scalar (optional)
Override the fixed self.k value for this update
Returns:
- x : filter output for x
- dx : filter output for dx (derivative of x)
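The optional per-call gains simply replace the stored gains for that one step, leaving the stored values untouched. A toy sketch of that semantics (the class `MiniGHK` is illustrative, not the library's internals):

```python
class MiniGHK:
    """Toy g-h-k filter, only to illustrate the per-call gain override."""
    def __init__(self, x, dx, ddx, dt, g, h, k):
        self.x, self.dx, self.ddx = x, dx, ddx
        self.dt, self.g, self.h, self.k = dt, g, h, k

    def update(self, z, g=None, h=None, k=None):
        # An override, when given, replaces the stored gain for this call only.
        g = self.g if g is None else g
        h = self.h if h is None else h
        k = self.k if k is None else k
        dt = self.dt
        x_pred = self.x + self.dx * dt + 0.5 * self.ddx * dt**2
        dx_pred = self.dx + self.ddx * dt
        self.y = z - x_pred                      # residual
        self.x = x_pred + g * self.y
        self.dx = dx_pred + h * self.y / dt
        self.ddx = self.ddx + 2.0 * k * self.y / dt**2
        return self.x, self.dx

a = MiniGHK(x=0.0, dx=0.0, ddx=0.0, dt=1.0, g=0.5, h=0.3, k=0.1)
b = MiniGHK(x=0.0, dx=0.0, ddx=0.0, dt=1.0, g=0.5, h=0.3, k=0.1)
xa, _ = a.update(1.0)            # stored g = 0.5 -> x moves halfway to z
xb, _ = b.update(1.0, g=0.9)     # override g = 0.9 -> x moves most of the way
```

A larger g for a single step lets you trust one particularly good measurement more without permanently changing the filter's tuning.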
batch_filter(data, save_predictions=False)

Performs the g-h-k filter with fixed g, h, and k gains.
Uses self.x and self.dx to initialize the filter, but DOES NOT alter self.x and self.dx during execution, allowing you to use this class multiple times without resetting self.x and self.dx. More exactly, none of the class member variables are modified by this function.
Parameters:
- data : list_like
contains the data to be filtered.
- save_predictions : boolean
The predictions will be saved and returned if this is true
Returns:
- results : np.array shape (n+1, 2), where n=len(data)
contains the results of the filter, where results[i,0] is x, and results[i,1] is dx (derivative of x). The first entry is the initial values of x and dx as set by __init__.
- predictions : np.array shape (n,), or None
the predictions for each step in the filter. Only returned if save_predictions == True.
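The batch behavior, including the (n+1, 2) result shape with the initial state in row 0, can be mimicked with a simple loop. This sketch re-implements the recursion rather than calling the library; `ghk_batch` is an illustrative name:

```python
import numpy as np

def ghk_batch(x, dx, ddx, dt, g, h, k, data):
    """Filter a whole sequence; row 0 holds the initial (x, dx)."""
    results = np.zeros((len(data) + 1, 2))
    results[0] = (x, dx)
    for i, z in enumerate(data):
        x_pred = x + dx * dt + 0.5 * ddx * dt**2
        dx_pred = dx + ddx * dt
        y = z - x_pred                       # residual
        x = x_pred + g * y
        dx = dx_pred + h * y / dt
        ddx = ddx + 2.0 * k * y / dt**2
        results[i + 1] = (x, dx)
    return results

res = ghk_batch(0.0, 0.0, 0.0, dt=1.0, g=0.5, h=0.3, k=0.1,
                data=[1.0, 2.0, 3.0])
# res.shape == (4, 2); res[0] is the initial x and dx
```

Because the loop works on local copies of x, dx, and ddx, the caller's state is never modified, mirroring the documented behavior of batch_filter.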
VRF_prediction()

Returns the Variance Reduction Factor for x of the prediction step of the filter.
This implements the equation

\[VRF(\hat{x}_{n+1,n}) = \frac{VAR(\hat{x}_{n+1,n})}{\sigma^2_x}\]

References
Asquith and Woods, "Total Error Minimization in First and Second Order Prediction Filters", Report No RE-TR-70-17, U.S. Army Missile Command, Redstone Arsenal, AL, November 24, 1970.
bias_error(dddx)

Returns the bias error given the specified constant jerk (dddx).
Parameters:
- dddx : type(self.x)
3rd derivative (jerk) of the state variable x.
References
Asquith and Woods, "Total Error Minimization in First and Second Order Prediction Filters", Report No RE-TR-70-17, U.S. Army Missile Command, Redstone Arsenal, AL, November 24, 1970.
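The bias error can be seen empirically: a g-h-k filter tracks constant-acceleration motion with zero steady-state error, but under a constant jerk its tracking error settles to a constant, nonzero offset. A sketch of that effect, with noise-free measurements; the gains (Brookner's critically damped form with theta = 0.8) and the jerk value are arbitrary choices for the demonstration:

```python
def ghk_step(x, dx, ddx, z, dt, g, h, k):
    """One g-h-k predict/update cycle (standard Brookner equations)."""
    x_pred = x + dx * dt + 0.5 * ddx * dt**2
    dx_pred = dx + ddx * dt
    y = z - x_pred
    return x_pred + g * y, dx_pred + h * y / dt, ddx + 2.0 * k * y / dt**2

dt, jerk = 1.0, 0.01             # constant third derivative of the truth
g, h, k = 0.488, 0.108, 0.004    # critically damped gains for theta = 0.8
x = dx = ddx = 0.0
errors = []
for n in range(1, 301):
    t = n * dt
    truth = jerk * t**3 / 6.0    # trajectory with constant jerk
    x, dx, ddx = ghk_step(x, dx, ddx, truth, dt, g, h, k)
    errors.append(truth - x)

# The error converges to a constant, nonzero bias rather than to zero;
# bias_error(dddx) reports this steady-state lag analytically.
```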
VRF()

Returns the Variance Reduction Factor (VRF) of the state variable of the filter (x) and its derivatives (dx, ddx). The VRF is the normalized variance for the filter, as given in the equations below.

\[VRF(\hat{x}_{n,n}) = \frac{VAR(\hat{x}_{n,n})}{\sigma^2_x}\]
\[VRF(\hat{\dot{x}}_{n,n}) = \frac{VAR(\hat{\dot{x}}_{n,n})}{\sigma^2_x}\]
\[VRF(\hat{\ddot{x}}_{n,n}) = \frac{VAR(\hat{\ddot{x}}_{n,n})}{\sigma^2_x}\]

Returns:
- vrf_x : type(x)
VRF of x state variable
- vrf_dx : type(x)
VRF of the dx state variable (derivative of x)
- vrf_ddx : type(x)
VRF of the ddx state variable (second derivative of x)
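The defining ratio above can be checked by Monte Carlo: drive the filter with zero-mean, unit-variance measurement noise, so the steady-state variance of the filtered x is, by definition, its VRF. A sketch with arbitrary (critically damped) gains and a fixed seed; the exact value depends on g, h, and k:

```python
import random

def ghk_step(x, dx, ddx, z, dt, g, h, k):
    """One g-h-k predict/update cycle (standard Brookner equations)."""
    x_pred = x + dx * dt + 0.5 * ddx * dt**2
    dx_pred = dx + ddx * dt
    y = z - x_pred
    return x_pred + g * y, dx_pred + h * y / dt, ddx + 2.0 * k * y / dt**2

random.seed(1)
dt = 1.0
g, h, k = 0.488, 0.108, 0.004    # critically damped gains for theta = 0.8
x = dx = ddx = 0.0
samples = []
for n in range(60_000):
    z = random.gauss(0.0, 1.0)   # sigma_x = 1, so VAR(x_hat) is the VRF
    x, dx, ddx = ghk_step(x, dx, ddx, z, dt, g, h, k)
    if n >= 10_000:              # discard the transient
        samples.append(x)

mean = sum(samples) / len(samples)
vrf_x = sum((s - mean)**2 for s in samples) / len(samples)
# vrf_x comes out well below 1: the filter reduces the measurement variance
```

The same experiment applied to dx and ddx estimates the other two ratios that VRF() computes analytically.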