
"Optimal State Estimation: Kalman, H∞ and Nonlinear Filtering", Chapter 10: Additional Topics in Kalman Filtering

Posted: 2022-04-28 00:50:32


Contents:
Preface
1. MATLAB simulation: Example 10.1
2. MATLAB simulation: Example 10.2
3. MATLAB simulation: Example 10.3
4. MATLAB simulation: Example 10.4
5. Summary

Preface

"Optimal State Estimation: Kalman, H∞ and Nonlinear Filtering" is a professional book on state estimation introduced from abroad and formally published in translation. Its author is Professor Dan Simon of the Department of Electrical and Computer Engineering at Cleveland State University. The material is mainly applied to motion estimation and control, and a certain amount of background knowledge is needed to follow along.

The book is divided into four parts and gives a comprehensive treatment of the theory and methods of optimal state estimation. Part 1 covers the fundamentals: it reviews linear systems, probability theory, and stochastic processes, and introduces least squares estimation, the Wiener filter, and how the statistics of the state propagate over time. Part 2 presents the Kalman filter and its equivalent forms in detail, along with its extensions, including Kalman filtering with correlated and colored noise, steady-state filtering, fading-memory filtering, and constrained Kalman filtering (once you have mastered the Kalman filter, you have essentially mastered half the book). Part 3 covers H∞ filtering, including time-domain and frequency-domain H∞ filtering, mixed Kalman/H∞ filtering, and constrained H∞ filtering. Part 4 covers filtering methods for nonlinear systems, including the extended Kalman filter, the unscented Kalman filter, and the particle filter. The book is suitable as a textbook for senior undergraduate or graduate courses on optimal state estimation, and as a reference for researchers working in related areas.

My main research direction in graduate school was not actually motion control, but in my junior year as an undergraduate I took part in a smart car competition, where we used PID control for the car's motion; relying on a half-understood, self-taught grasp of the subject, I was lucky enough to win an award. During graduate school, my lab happened to have a self-balancing vehicle project. Although that was not my own direction, the lab held regular group seminars, so I gained a basic understanding of how motion control is applied in real projects. After starting work, I needed to do research on motion estimation and control, which is how I came across this book.

Picking motion control back up, I hope to consolidate and relearn this material by simulating the example programs from the book and briefly documenting the process. The focus is on the example simulations in each chapter; this post covers the four simulation examples of Chapter 10 (simulation platform: 32-bit MATLAB). Without further ado, let's begin!

1. MATLAB simulation: Example 10.1

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Purpose: example simulation for "Optimal State Estimation: Kalman, H-infinity and Nonlinear Filtering"
% Example 10.1: Multiple.m
% Environment: Win7, MATLAB
% Modified by: C.S
% Date: -05-02
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function Multiple
% Multiple model Kalman filtering.
% Second order system.
% Corrected March 17, due to typos in Equations 10.35 and 10.37.

zeta = 0.1; % damping ratio
wn = [sqrt(4); sqrt(4.4); sqrt(4.8)]; % possible wn values
N = size(wn, 1); % number of parameter sets
pr = [0.1; 0.6; 0.3]; % a priori probabilities
% Compute the initial estimate of wn
wnhat = 0;
for i = 1 : N
    wnhat = wnhat + wn(i) * pr(i);
end
T = 0.1; % sample period
Qc = 1000; % continuous time process noise variance
R = diag([10 10]); % discrete time measurement noise covariance
H = eye(2); % measurement matrix
q = size(H, 1); % number of measurements
x = [0; 0]; % initial state
% Compute the alternative Lambda and phi matrices.
for i = 1 : N
    Ai = [0 1; -wn(i)^2 -2*zeta*wn(i)];
    Bi = [0; wn(i)^2];
    Fi = expm(Ai*T);
    F(:,:,i) = Fi;
    %Lambda(:,:,i) = (phii - eye(size(phii))) * inv(Fi) * Li;
    Pplus(:,:,i) = zeros(size(Fi));
    xhat(:,i) = x;
end
B = [0; wn(1)^2];
Q = B * Qc * B' * T; % discrete time process noise covariance
tf = 60; % Length of simulation
% Create arrays for later plotting
wnhatArray = [wnhat];
prArray = [pr];
for t = T : T : tf
    % Simulate the system.
    % The first parameter set is the true parameter set.
    w = sqrt(Q) * randn(2, 1);
    x = F(:,:,1) * x + w;
    y = H * x + sqrt(R) * randn(2, 1);
    % Run a separate Kalman filter for each parameter set.
    for i = 1 : N
        Pminus(:,:,i) = F(:,:,i) * Pplus(:,:,i) * F(:,:,i)';
        Pminus(:,:,i) = Pminus(:,:,i) + Q;
        K = Pminus(:,:,i) * H' * inv(H * Pminus(:,:,i) * H' + R);
        xhat(:,i) = F(:,:,i) * xhat(:,i);
        r = y - H * xhat(:,i); % measurement residual
        S = H * Pminus(:,:,i) * H' + R; % covariance of measurement residual
        pdf(i) = exp(-r'*inv(S)*r/2) / ((2*pi)^(q/2)) / sqrt(det(S));
        xhat(:,i) = xhat(:,i) + K * (y - H * xhat(:,i));
        Pplus(:,:,i) = (eye(2) - K * H) * Pminus(:,:,i) * (eye(2) - K * H)' + K * R * K';
    end
    % Compute the sum that appears in the denominator of the probability expression.
    Prsum = 0;
    for i = 1 : N
        Prsum = Prsum + pdf(i) * pr(i);
    end
    % Update the probability of each parameter set.
    for i = 1 : N
        pr(i) = pdf(i) * pr(i) / Prsum;
    end
    % Compute the best state estimate and the best parameter estimate.
    xhatbest = 0;
    wnhat = 0;
    for i = 1 : N
        xhatbest = xhatbest + pr(i) * xhat(:,i);
        wnhat = wnhat + pr(i) * wn(i);
    end
    % Save data for plotting.
    wnhatArray = [wnhatArray wnhat];
    prArray = [prArray pr];
end
close all;
t = 0 : T : tf;
figure;
plot(t, wnhatArray.^2);
title('Estimate of square of natural frequency', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Seconds');
figure;
plot(t, prArray(1,:), 'b-', t, prArray(2,:), 'k--', t, prArray(3,:), 'r:');
title('Probabilities of square of natural frequency', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Seconds');
legend('Probability that \omega_n^2 = 4', 'Probability that \omega_n^2 = 4.4', 'Probability that \omega_n^2 = 4.8');
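For reference, the probability update that the simulation loop above implements can be written compactly as follows. This is only a summary of what the code computes, with notation taken from the code variables (r_i and S_i are the measurement residual and its covariance for the i-th filter, q is the number of measurements):

\begin{align*}
\mathrm{pdf}_i &= \frac{\exp\left(-\tfrac{1}{2}\, r_i^{\mathsf T} S_i^{-1} r_i\right)}{(2\pi)^{q/2}\sqrt{\det S_i}},
\qquad r_i = y - H\hat{x}_i, \quad S_i = H P_i^- H^{\mathsf T} + R, \\
p_i &\leftarrow \frac{\mathrm{pdf}_i \, p_i}{\sum_{j=1}^{N} \mathrm{pdf}_j \, p_j}, \\
\hat{x} &= \sum_{i=1}^{N} p_i \, \hat{x}_i, \qquad
\hat{\omega}_n = \sum_{i=1}^{N} p_i \, \omega_{n,i}.
\end{align*}

Each candidate natural frequency runs its own Kalman filter; the residual likelihoods re-weight the model probabilities at every step, and the plotted estimates are the probability-weighted blends.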

2. MATLAB simulation: Example 10.2

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Purpose: example simulation for "Optimal State Estimation: Kalman, H-infinity and Nonlinear Filtering"
% Example 10.2: Reduced.m
% Environment: Win7, MATLAB
% Modified by: C.S
% Date: -05-02
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function Reduced
% Reduced order Kalman filter simulation.
% Estimate the first state of a two-state system.

F = [.9 .1; .2 .7]; % System matrix
%F = [1.1 -.1; .2 .7]; % System matrix
Lambda1 = 1;
Lambda2 = 0;
Q = 0.1; % Process noise covariance
H1 = 0;
H2 = 1;
R = 1; % Measurement noise covariance
Lambda = [Lambda1; Lambda2]; % Noise input matrix
H = [H1 H2]; % Measurement matrix
Ktol = 0.00001; % tolerance for convergence of gain to steady state
NumSteps = 50; % number of simulation steps
close all; % close all figures
% Iteratively compute the steady state reduced filter Kalman gain
P = 0;
PIt = 0;
PItOld = 0;
PItt = 0;
PIttOld = 0;
Ptt = 0;
Pt = 0;
Sigma = 0;
K = 0;
x = [0; 0]; % Initial state vector
xhat = [0; 0]; % Initial state estimate
xhatReduced = 0; % Initial reduced order state estimate
Pplus = [0 0; 0 0]; % Initial Kalman filter estimation error covariance
I = eye(2); % Identity matrix
ErrArray = [x - xhat]; % Array of Kalman filter estimation errors
ErrReducedArray = [x(1) - xhatReduced]; % Array of reduced order filter estimation errors
x1Array = [x(1)];
xhatReducedArray = [x(1)];
KArray = [];
PttArray = [];
PtArray = [];
PArray = [];
SigmaArray = [];
PfullArray = [];
randn('state', sum(100*clock)); % initialize the random number generator
% Try to find a steady state reduced order gain.
temp1 = H1 * F(1,2) + H2 * F(2,2);
temp2 = H1 * Lambda1 + H2 * Lambda2;
for k = 1 : 200
    A = H1 * F(1,1) * P * F(1,1)' * H1';
    A = A + H1 * F(1,1) * Sigma * temp1';
    A = A + temp1 * Sigma' * F(1,1) * H1';
    A = A - H1 * F(1,1) * PItt * temp1';
    A = A - temp1 * PItt' * F(1,1) * H1';
    A = A + H1 * F(1,1) * Pt * F(2,1) * H2';
    A = A + H2 * F(2,1) * Pt * F(1,1) * H1';
    A = A - H1 * F(1,1) * PIt * F(2,1) * H2';
    A = A - H2 * F(2,1) * PIt * F(1,1) * H1';
    A = A + temp1 * Ptt * temp1';
    A = A + temp1 * Sigma' * F(2,1) * H2';
    A = A + H2 * F(2,1) * Sigma * temp1';
    A = A + H2 * F(2,1) * Pt * F(2,1) * H2';
    A = A + R;
    A = A + temp2 * Q * temp2';
    B = F(1,1) * P * F(1,1) * H1';
    B = B + F(1,2) * Sigma' * F(1,1) * H1';
    B = B + F(1,1) * Sigma * temp1';
    B = B - F(1,2) * PItt' * F(1,1) * H1';
    B = B - F(1,1) * PItt * temp1';
    B = B + F(1,1) * Pt * F(2,1) * H2';
    B = B - F(1,1) * PIt * F(2,1) * H2';
    B = B + F(1,2) * Ptt * temp1';
    B = B + F(1,2) * Sigma' * F(2,1) * H2';
    B = B + Lambda1 * Q * temp2';
    KOld = K;
    K = B * inv(A);
    KArray = [KArray K];
    if (k > 3) & (abs((K - KOld) / K) < Ktol)
        break;
    end
    PIttOld = PItt;
    PItt = (1 - K * H1) * F(1,1) * (PIt * F(2,1) + PItt * F(2,2));
    PItt = PItt + K * (H1 * F(1,1) + H2 * F(2,1)) * (Pt * F(2,1) + Sigma * F(2,2));
    PItt = PItt + K * temp1 * (Sigma' * F(2,1) + Ptt * F(2,2));
    PItt = PItt + K * temp2 * Q * Lambda2';
    PItOld = PIt;
    PIt = (1 - K * H1) * F(1,1) * (PIt * F(1,1) + PIttOld * F(1,2));
    PIt = PIt + K * (H1 * F(1,1) + H2 * F(2,1)) * (Pt * F(1,1) + Sigma * F(1,2));
    PIt = PIt + K * temp1 * (Sigma' * F(1,1) + Ptt * F(1,2));
    PIt = PIt + K * temp2 * Q * Lambda1';
    temp3 = F(1,2) - K * H1 * F(1,2) - K * H2 * F(2,2);
    P = (1 - K * H1)^2 * F(1,1)^2 * P;
    P = P + 2 * (1 - K * H1) * F(1,1) * Sigma * temp3;
    P = P - 2 * (1 - K * H1) * F(1,1) * PIttOld * temp3;
    P = P - 2 * (1 - K * H1) * F(1,1) * Pt * F(2,1) * H2' * K';
    P = P + 2 * (1 - K * H1) * F(1,1) * PItOld * F(2,1) * H2' * K';
    P = P + temp3^2 * Ptt;
    P = P - 2 * temp3 * Sigma' * F(2,1) * H2' * K;
    P = P + K^2 * H2^2 * F(2,1)^2 * Pt;
    P = P + K^2 * R;
    P = P + (Lambda1 - K * H1 * Lambda1 - K * H2 * Lambda2)^2 * Q;
    PttOld = Ptt;
    PtOld = Pt;
    SigmaOld = Sigma;
    Ptt = F(2,1)^2 * Pt + 2 * F(2,1) * Sigma * F(2,2) + F(2,2)^2 * Ptt + Lambda2^2 * Q;
    Pt = F(1,1)^2 * Pt + 2 * F(1,1) * Sigma * F(1,2) + F(1,2)^2 * PttOld + Lambda1^2 * Q;
    Sigma = F(1,1) * PtOld * F(2,1) + F(1,1) * SigmaOld * F(2,2);
    Sigma = Sigma + F(1,2) * SigmaOld * F(2,1) + F(1,2) * PttOld * F(2,2);
    Sigma = Sigma + Lambda1 * Q * Lambda2;
    PttArray = [PttArray Ptt];
    PtArray = [PtArray Pt];
    PArray = [PArray P];
    SigmaArray = [SigmaArray Sigma];
end
figure; plot(KArray); set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Iteration Number'); ylabel('Reduced Order Gain');
if abs((K - KOld) / K) > Ktol
    disp('Reduced order Kalman gain did not converge to a steady state value');
    return;
end
for k = 1 : NumSteps
    % System simulation
    x = F * x + Lambda * sqrt(Q) * randn;
    z = H * x + sqrt(R) * randn;
    % Full order Kalman filter simulation (time varying)
    Pminus = F * Pplus * F' + Lambda * Q * Lambda';
    KStd = Pminus * H' * inv(H * Pminus * H' + R);
    xhat = F * xhat;
    xhat = xhat + KStd * (z - H * xhat);
    Pplus = (I - KStd * H) * Pminus * (I - KStd * H)' + KStd * R * KStd';
    % Reduced order Kalman filter simulation (steady state)
    xhatReduced = F(1,1) * xhatReduced + K * (z - H1 * F(1,1) * xhatReduced);
    % Save data for plotting
    x1Array = [x1Array x(1)];
    xhatReducedArray = [xhatReducedArray xhatReduced];
    ErrArray = [ErrArray x-xhat];
    ErrReducedArray = [ErrReducedArray x(1)-xhatReduced];
    PfullArray = [PfullArray Pplus(1,1)];
end
% Compute estimation errors
Err = sqrt(norm(ErrArray(1,:))^2 / size(ErrArray,2));
disp(['Full order estimation std dev (analytical and experimental) = ', num2str(sqrt(Pplus(1,1))), ', ', num2str(Err)]);
ErrReduced = sqrt(norm(ErrReducedArray(1,:))^2 / size(ErrReducedArray,2));
disp(['Reduced order estimation std dev (analytical and experimental) = ', num2str(sqrt(P)), ', ', num2str(ErrReduced)]);
% Plot results
k = 1 : NumSteps;
figure; plot(PttArray); title('Ptt', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Time Step');
figure; plot(PtArray); title('Pt', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Time Step');
figure; plot(k, PArray(1:max(k)), 'r:'); hold on; plot(k, PfullArray(1:max(k)), 'b');
title('P', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Time Step'); ylabel('Estimation Error Variance');
legend('Reduced order variance', 'Full order variance');
figure; plot(SigmaArray); title('Sigma', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Time Step');
figure; plot(k, ErrArray(1,1:max(k)), 'r:');
hold on;
plot(k, ErrReducedArray(1:max(k)), 'b');
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Time Step'); ylabel('Estimation Error');
legend('Std Kalman filter error', 'Reduced filter error');

>> Reduced
Full order estimation std dev (analytical and experimental) = 0.69709, 0.77517
Reduced order estimation std dev (analytical and experimental) = 0.72606, 0.73999
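The core of the reduced-order filter is the scalar update of the first state inside the simulation loop, using the converged steady-state gain K from the iteration above. As a sketch, with F_{11} = F(1,1) and H_1 the first entry of the measurement matrix:

\[
\hat{x}_1(k+1) = F_{11}\,\hat{x}_1(k) + K\left[\, z(k+1) - H_1 F_{11}\,\hat{x}_1(k) \,\right].
\]

In this run H_1 = 0 and H_2 = 1, so the measurement observes only the second state, and the steady-state gain K determines how strongly that measurement corrects the estimate of the first state. The printed output shows the expected trade-off: the reduced-order filter accepts a slightly larger analytical standard deviation than the full-order filter in exchange for a much simpler scalar implementation.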

3. MATLAB simulation: Example 10.3

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Purpose: example simulation for "Optimal State Estimation: Kalman, H-infinity and Nonlinear Filtering"
% Example 10.3: Schmidt.m
% Environment: Win7, MATLAB
% Modified by: C.S
% Date: -05-02
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function Schmidt
% Reduced order Kalman filter using consider states.

phix = 1; phiy = 1;
phi = [phix 0; 0 phiy];
Qx = 1; Qy = 0;
Q = [Qx 0; 0 Qy];
Hx = 1; Hy = 1;
H = [Hx Hy];
R = 1;
Pxminus = 1.6; Pxyminus = 0; Pyxminus = Pxyminus'; Pyminus = 1;
Pminus = [Pxminus Pxyminus; Pyxminus Pyminus];
x = [0; 0];
xhatminus = [0; 0];
xhatminusSchmidt = [0];
xhatErr = [];
xhatErrSchmidt = [];
tf = 20;
for t = 0 : tf
    % Simulate the measurement
    z = H * x + sqrt(R) * randn;
    % Simulate the full order filter
    K = Pminus * H' * inv(H * Pminus * H' + R);
    xhatplus = xhatminus + K * (z - H * xhatminus);
    Pplus = (eye(2) - K * H) * Pminus * (eye(2) - K * H)' + K * R * K';
    xhatminus = phi * xhatplus;
    Pminus = phi * Pplus * phi' + Q;
    % Simulate the Kalman-Schmidt filter
    alpha = Hx * Pxminus * Hx' + Hx * Pxyminus * Hy' + Hy * Pxyminus * Hx' + Hy * Pyminus * Hy' + R;
    Kx = (Pxminus * Hx' + Pxyminus * Hy') * inv(alpha);
    xhatplusSchmidt = xhatminusSchmidt + Kx * (z - Hx * xhatminusSchmidt);
    Pxplus = (eye(1) - Kx * Hx) * Pxminus - Kx * Hy * Pyxminus;
    Pxyplus = (eye(1) - Kx * Hx) * Pxyminus - Kx * Hy * Pyminus;
    Pyxplus = Pxyplus';
    Pyplus = Pyminus;
    xhatminusSchmidt = phix * xhatplusSchmidt;
    Pxminus = phix * Pxplus * phix' + Qx;
    Pxyminus = phix * Pxyplus * phiy';
    Pyxminus = Pxyminus';
    Pyminus = phiy * Pyplus * phiy' + Qy;
    % Save data for later
    xhatErr = [xhatErr; x(1) - xhatplus(1)];
    xhatErrSchmidt = [xhatErrSchmidt; x(1) - xhatplusSchmidt];
    % Simulate the state dynamics
    x = phi * x + [Qx * randn; Qy * randn];
end
t = 0 : tf;
close all;
plot(t, xhatErr(1:21), 'r-', t, xhatErrSchmidt(1:21), 'b--');
set(gca,'FontSize',12); set(gcf,'Color','White');
legend('Full Order Filter', 'Reduced Order Filter');
xlabel('Time Step'); ylabel('Estimation Error');
xhatErr = std(xhatErr);
xhatErrSchmidt = std(xhatErrSchmidt);
disp(['RMS Error = ', num2str(xhatErr), ' (full order filter), ', num2str(xhatErrSchmidt), ' (Schmidt filter)']);

>> Schmidt
RMS Error = 0.92247 (full order filter), 0.91047 (Schmidt filter)
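The Kalman-Schmidt (consider) measurement update coded above can be summarized as follows, using the same partitioned covariance blocks as the code (P_x for the estimated state x, P_y for the consider state y, and the cross term P_{xy}); this is a restatement of what the code computes, not the book's numbered equations:

\begin{align*}
\alpha &= H_x P_x^- H_x^{\mathsf T} + H_x P_{xy}^- H_y^{\mathsf T} + H_y P_{yx}^- H_x^{\mathsf T} + H_y P_y^- H_y^{\mathsf T} + R, \\
K_x &= \left( P_x^- H_x^{\mathsf T} + P_{xy}^- H_y^{\mathsf T} \right)\alpha^{-1}, \\
\hat{x}^+ &= \hat{x}^- + K_x\left( z - H_x \hat{x}^- \right), \\
P_x^+ &= (I - K_x H_x)\,P_x^- - K_x H_y P_{yx}^-, \qquad
P_{xy}^+ = (I - K_x H_x)\,P_{xy}^- - K_x H_y P_y^-, \qquad
P_y^+ = P_y^-.
\end{align*}

The last equality is the defining feature of the Schmidt filter: the consider state affects the gain and the covariance bookkeeping, but it is never estimated itself.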

4. MATLAB simulation: Example 10.4

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Purpose: example simulation for "Optimal State Estimation: Kalman, H-infinity and Nonlinear Filtering"
% Example 10.4: Robust.m
% Environment: Win7, MATLAB
% Modified by: C.S
% Date: -05-02
% For calling only (not meant to be run without arguments)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [x1_rms_robu, x1_rms_opt, x2_rms_robu, x2_rms_opt] = Robust(alpha, beta)
% Simulate a robust Kalman filter.
% INPUTS:
%   alpha = relative change in Q from nominal
%   beta = relative change in R from nominal

K_optimal = [0.02; 0.002]; % optimal Kalman gain for this problem
duration = 100; % simulation length
dt = 0.1; % step size
a = [1 dt; 0 1]; % system matrix
b = [dt^2; dt]; % input matrix
B_w = eye(2); % noise input matrix
c = [1 0]; % measurement matrix
rho1 = 0.5; % relative weight on nominal Kalman filter performance
rho2 = 0.5; % relative weight on robustness
randn('state', sum(100*clock));
measnoise = 10; % std dev of measurement noise
accelnoise = 0.2; % std dev of process noise
R = measnoise^2; % measurement noise covariance
Q = accelnoise^2*[dt^4/4 dt^3/2; dt^3/2 dt^2]; % process noise covariance
x = [0; 0]; % initial state
xhat = x; % initial state estimate
xhat_new = x; % initial robust state estimate
x1hatnew = 0; x2hatnew = 0;
x1 = 0; x2 = 0;
x1hat = 0; x2hat = 0;
epsilon = 0.01;
zero_temp = eps*eye(2,2);
JArray = [];
K = K_optimal;
J = inf;
% Minimize the robust Kalman filter cost function
while (1 == 1)
    X_1 = DARE((a-K*c*a)', zeros(2,2), (B_w-K*c*B_w)*Q*(B_w-K*c*B_w)', zero_temp, zeros(2,2), eye(2));
    X_2 = DARE((a-K*c*a)', zeros(2,2), (K*R*K'), zero_temp, zeros(2,2), eye(2));
    J_old = J;
    Jnominal = trace(X_1)+trace(X_2);
    Jrobust = trace(X_1)^2+trace(X_2)^2;
    J = rho1*Jnominal + rho2*Jrobust;
    disp(['Jnominal = ', num2str(Jnominal), ', Jrobust = ', num2str(Jrobust), ', J = ', num2str(J)]);
    JArray = [JArray J];
    if (J > 0.999*J_old)
        break; % convergence
    end;
    % Partial of J with respect to X1 and X2
    par_J_X1 = rho1*eye(2)+2*rho2*trace(X_1)*eye(2);
    par_J_X2 = rho1*eye(2)+2*rho2*trace(X_2)*eye(2);
    % Change K so that the partial of X1 and X2 w/r to K can be computed
    % numerically.
    D_K_1 = [K(1,1)*(1 + epsilon); K(2,1)];
    D_K_2 = [K(1,1); K(2,1)*(1 + epsilon)];
    % Partial of X_1 w/r to K
    X_1_new_K1 = DARE((a-D_K_1*c*a)', zeros(2,2), (B_w-D_K_1*c*B_w)*Q*(B_w-D_K_1*c*B_w)', zero_temp, zeros(2,2), eye(2));
    X_1_delta_K1 = X_1 - X_1_new_K1;
    deltaX_1_K1 = [X_1_delta_K1(1,1) X_1_delta_K1(1,2) X_1_delta_K1(2,1) X_1_delta_K1(2,2)];
    X_1_new_K2 = DARE((a-D_K_2*c*a)', zeros(2,2), (B_w-D_K_2*c*B_w)*Q*(B_w-D_K_2*c*B_w)', zero_temp, zeros(2,2), eye(2));
    X_1_delta_K2 = X_1 - X_1_new_K2;
    deltaX_1_K2 = [X_1_delta_K2(1,1) X_1_delta_K2(1,2) X_1_delta_K2(2,1) X_1_delta_K2(2,2)];
    DeltaX_1_K = [deltaX_1_K1; deltaX_1_K2];
    % Partial of X_2 w/r to K
    X_2_new_K1 = DARE((a-D_K_1*c*a)', zeros(2,2), (D_K_1*R*D_K_1'), zero_temp, zeros(2,2), eye(2));
    X_2_delta_K1 = X_2 - X_2_new_K1;
    deltaX_2_K1 = [X_2_delta_K1(1,1) X_2_delta_K1(1,2) X_2_delta_K1(2,1) X_2_delta_K1(2,2)];
    X_2_new_K2 = DARE((a-D_K_2*c*a)', zeros(2,2), (D_K_2*R*D_K_2'), zero_temp, zeros(2,2), eye(2));
    X_2_delta_K2 = X_2 - X_2_new_K2;
    deltaX_2_K2 = [X_2_delta_K2(1,1) X_2_delta_K2(1,2) X_2_delta_K2(2,1) X_2_delta_K2(2,2)];
    DeltaX_2_K = [deltaX_2_K1; deltaX_2_K2];
    % Partial of J w/r to K
    temp1 = par_J_X1(1,1)*DeltaX_1_K(1,1)+par_J_X1(1,2)*DeltaX_1_K(1,2)+par_J_X1(2,1)*DeltaX_1_K(1,3)+...
        par_J_X1(2,2)*DeltaX_1_K(1,4);
    temp2 = par_J_X1(1,1)*DeltaX_1_K(2,1)+par_J_X1(1,2)*DeltaX_1_K(2,2)+par_J_X1(2,1)*DeltaX_1_K(2,3)+...
        par_J_X1(2,2)*DeltaX_1_K(2,4);
    temp3 = par_J_X2(1,1)*DeltaX_2_K(1,1)+par_J_X2(1,2)*DeltaX_2_K(1,2)+par_J_X2(2,1)*DeltaX_2_K(1,3)+...
        par_J_X2(2,2)*DeltaX_2_K(1,4);
    temp4 = par_J_X2(1,1)*DeltaX_2_K(2,1)+par_J_X2(1,2)*DeltaX_2_K(2,2)+par_J_X2(2,1)*DeltaX_2_K(2,3)+...
        par_J_X2(2,2)*DeltaX_2_K(2,4);
    Delta_J_K = [temp1; temp2] + [temp3; temp4];
    % Use gradient descent to compute a new K
    K_new = K + epsilon*Delta_J_K;
    K = K_new;
end
for t = 0 : dt : duration
    u = 1; % control input
    ProcessNoise = (1+alpha)*accelnoise*[(dt^2/2)*randn; dt*randn];
    x = a*x + b*u + ProcessNoise;
    x1 = [x1 x(1)];
    x2 = [x2 x(2)];
    MeasNoise = (1+beta)*measnoise*randn;
    y = c*x + MeasNoise;
    xhat = a*xhat + b*u;
    xhat_new = a*xhat_new + b*u;
    Inn1 = y - c*xhat;
    Inn2 = y - c*xhat_new;
    % Standard Kalman filter estimate
    xhat = xhat + K_optimal*Inn1;
    x1hat = [x1hat xhat(1)];
    x2hat = [x2hat xhat(2)];
    % Robust Kalman filter estimate
    xhat_new = xhat_new + K_new*Inn2;
    x1hatnew = [x1hatnew xhat_new(1)];
    x2hatnew = [x2hatnew xhat_new(2)];
end
x1_sqr = (x1-x1hatnew).^2;
x1_rms_robu = (mean(x1_sqr))^0.5;
x1_sqr_op = (x1-x1hat).^2;
x1_rms_opt = (mean(x1_sqr_op))^0.5;
disp(['RMS x1 error = ', num2str(x1_rms_robu), ' (robust), ', num2str(x1_rms_opt), ' (optimal)']);
x2_sqr = (x2-x2hatnew).^2;
x2_rms_robu = (mean(x2_sqr))^0.5;
x2_sqr_op = (x2-x2hat).^2;
x2_rms_opt = (mean(x2_sqr_op))^0.5;
disp(['RMS x2 error = ', num2str(x2_rms_robu), ' (robust), ', num2str(x2_rms_opt), ' (optimal)']);
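Since Robust.m takes input arguments rather than running standalone, here is a minimal usage sketch. The argument values are illustrative only (they are not specified in the book), and the function calls a six-argument DARE routine, so a compatible DARE solver (for example one supplied with the book's example code) must be on the MATLAB path:

% Minimal usage sketch; per the function header, alpha and beta are the relative
% changes of Q and R from their nominal values.
% The values 0.5 below are illustrative, not taken from the book.
[x1_rms_robu, x1_rms_opt, x2_rms_robu, x2_rms_opt] = Robust(0.5, 0.5); % 50% mismatch in Q and R
[x1_rms_robu0, x1_rms_opt0, x2_rms_robu0, x2_rms_opt0] = Robust(0, 0); % nominal noise, for comparison
% Compare the returned RMS errors of the robust filter against those of the optimal-gain filter.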

5. Summary

Motion control has very broad real-world applications in modern life: the automated operation of all kinds of equipment in smart factories, the self-driving technology that has become so prominent in recent years, and the aerospace field all depend on it. A solid command of state estimation theory is therefore also very helpful for future employment, and it is essential to build a good foundation in matrix theory and probability theory first. If you are interested in this chapter or want to study it in depth, I recommend working through Chapter 10 of the book itself and, if possible, consolidating the material further through the exercises. Later I plan to add discussion of some of these topics based on my own understanding; everyone is welcome to learn and exchange ideas together.

Original book: Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches
