1. Introduction
Particle Filter (PF) methodology deals with the estimation of the latent variables of stochastic processes from noisy observations generated by those latent variables [1]. The technique consists mainly of Monte Carlo (MC) simulation [2] of the hidden variables and of the assignment of weights to the realizations of the random trials produced during simulation, the particles. This procedure is repeated sequentially at every time step of the stochastic process. The sequential MC simulation involved in the method comes at a heavy computational cost; however, the nature of MC simulation makes PF estimation suitable for a wide variety of state-space models, including non-linear models with non-Gaussian noise. The weights are defined according to the observations received at every time step, so the weight assignment step constitutes an evaluation of the particles created at the simulation step.
As weight assignment according to an observation dataset is a substantial part of PF, missing observations hinder the operation of the filter. Wang et al. [3] reviewed PF for target tracking and discussed cases of missing data and measurement uncertainty in multi-target tracking, as well as methods that deal with this problem (see, e.g., [4]). Techniques that address missing data focus mainly on the substitution of the missing values. In recent decades, the Expectation-Maximization (EM) algorithm [5] and Markov Chain Monte Carlo methods [6] have become popular for handling missing data problems; these algorithms were developed independently of PF. Gopaluni [7] proposed a combination of the EM algorithm with PF for parameter estimation with missing data. Housfater et al. [8] devised the Multiple Imputations Particle Filter (MIPF), wherein missing data are substituted by multiple imputations from a proposal distribution, and these imputations are evaluated through an additional weight assignment according to their proposal distribution. Xu et al. [9] incorporated uncertainty about data availability into the observations in the form of additional random variables in the state-space model under study. All the aforementioned approaches are powerful, although computationally costly.
This paper focuses on state-space models with linear observation equations and provides an estimation of the errors of missing observations (in cases of missing data), aiming at the approximation of the weights, under a Missing At Random (MAR) assumption [10]. Linearity of the observation equation permits sequential replacement of missing values by quantities with known distributions. Although this method is applicable to a smaller class of models than the former approaches, it is much faster, as it leads to a single imputation process. A simulation example is provided to compare the suggested method with existing techniques and to highlight the advantages of the proposed algorithm. The contribution of the a priori estimation step to the study of impoverishment phenomena is also exhibited through the Markov System (MS) framework (see, e.g., [11]). The substitution of future weights renders the estimation of the future distribution of the particles in the state domain feasible. The significance of this initiative lies in the possibility of estimating the condition of the sample with respect to impoverishment in future steps, based on the suggested theory. Such a practice permits the coordinated application of stochastic control [12] instead of the mostly empirical approaches that have been proposed so far [13].
The present article is based on the work of Lykou and Tsaklidis [14]. Further mathematical propositions are developed from the brief remarks made in [14], and the procedure for incorporating MS theory into the study of the particle distribution is explained in detail. The presentation of the initial results of the simulation example is reformulated so that the example is easier to follow, and a new application of MS theory for the quantitative prior estimation of the particle distribution one time step ahead is added to the initial example. In Section 2, the PF algorithm is presented analytically. In Section 3, the new weight estimation step is introduced and its connection with the study of degeneracy and impoverishment is explained. In Section 4, a simulation example is provided, where the results of the current method are compared with those of MIPF and with the results of the basic PF algorithm in the case when all data are available. An example of the estimation of the particle distribution one step ahead of the current one is also presented in the direction of impoverishment tracking and prediction. In Section 5, the discussion and concluding remarks are presented.
2. Particle Filter Algorithm
Let a stochastic process be described by m-dimensional latent state vectors, and let the corresponding process of noisy observations consist of k-dimensional vectors. The states and observations are inter-related according to the state-space model,
(1)
(2)
In the system of Equations (1) and (2), f is a known deterministic function of the previous state, the first noise term stands for the process noise, and the second symbolizes the observation noise. Each of the noise sequences consists of independent and identically distributed (iid) random vectors, while the model also involves a constant vector and a constant matrix.
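For concreteness, one parameterization consistent with the above description is the following sketch, in which the symbols x_t (latent state), z_t (observation), v_t and w_t (process and observation noise), A (constant matrix), and c (constant vector) are names assumed here, with the two noise sequences additionally assumed mutually independent:

x_t = f(x_{t-1}) + v_t,
z_t = A x_t + c + w_t.

The sketches that follow refer to this assumed notation whenever explicit symbols are needed.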
PF methodology employs Bayesian inference for state estimation. The Bayesian approach aims at the construction of the posterior probability distribution function , where , resorting to the recursive equations,
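namely, in the assumed notation, the standard prediction (Chapman–Kolmogorov) and update (Bayes) steps,

p(x_t \mid z_{1:t-1}) = \int p(x_t \mid x_{t-1}) \, p(x_{t-1} \mid z_{1:t-1}) \, dx_{t-1},

p(x_t \mid z_{1:t}) = \frac{p(z_t \mid x_t) \, p(x_t \mid z_{1:t-1})}{p(z_t \mid z_{1:t-1})},

where z_{1:t} denotes the observations up to time t.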
These equations are analytically solvable for linear state-space models with Gaussian noises. However, for more general models, analytical solutions are usually infeasible. For this reason, PF is applied by utilizing MC simulation and integration to represent the state posterior probability distribution function with the help of a set of particles and corresponding weights. The posterior can then be approximated by the discrete probability mass distribution of the weighted particles as
where the Dirac delta function is involved and the weights are normalized so that they sum to one. As the posterior distribution is usually unknown, this MC simulation is based on importance sampling: a known probability (importance) density is chosen from which the set of particles is produced. The state posterior distribution is then approximated by the weighted particles, while
(3)
are the normalized particle weights. As PF is applied successively over several time steps, increasing weights are assigned to the most probable particles, while the weights of the other particles progressively become negligible. Thus, only a very small proportion of particles is finally used for the state estimation. This phenomenon is known as PF degeneracy. To face this problem, a resampling-with-replacement step according to the calculated weights has been incorporated into the initial algorithm, resulting in the Sampling Importance Resampling (SIR) algorithm. Nevertheless, sequential resampling leads the particles to take values from a very small domain and to exclude many other, less probable values. This problem is called impoverishment. A criterion based on the weight variability has been introduced so that a decision can be made at every time step on whether the existing particles should be resampled or not, reaching a middle ground between degeneracy and impoverishment. This criterion involves the Effective Sample Size measure of degeneracy (see, e.g., [15], pp. 35–36). As this quantity cannot be calculated directly, it is estimated from the normalized weights, and the decision on resampling is positive whenever the estimate falls below a fixed threshold. Establishing a criterion for resampling slows down sample impoverishment but does not prevent it.
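As a small illustration of this criterion, consider the commonly used estimator \widehat{N}_{\mathrm{eff}} = 1 / \sum_{i=1}^{N} (w_t^i)^2 of the Effective Sample Size and, purely for this illustration, a threshold of N/2. For N = 4 particles with normalized weights (0.7, 0.1, 0.1, 0.1),

\widehat{N}_{\mathrm{eff}} = \frac{1}{0.7^2 + 3 \times 0.1^2} = \frac{1}{0.52} \approx 1.92 < 2,

so resampling would be triggered at this step, whereas equal weights would give \widehat{N}_{\mathrm{eff}} = 4 and no resampling.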
Algorithm 1 summarizes PF steps. The sampling part corresponds to the prior (prediction) step of Bayesian inference, while weight assignment and possible resampling constitute the posterior (update) step.
Algorithm 1 SIR Particle Filter
Require: N, q, , T
Initialize
for do
  1. Importance Sampling
  Sample
  Set
  Calculate importance weights
end for
for do
  Normalize weights
  2. Resampling
  if then
    for do
    end for
  else
    for do
      Sample with replacement index according to the discrete weight distribution
      Set and
    end for
  end if
end for
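A minimal, self-contained sketch of Algorithm 1 is given below for a scalar model, written in R (the language used later for the simulations). It assumes, purely for illustration, a Gaussian autoregressive state equation, a linear Gaussian observation equation, and the transition prior as the importance density q (so that the weight update reduces to the observation likelihood); all names and parameter values are illustrative rather than taken from the paper.

```r
set.seed(1)

# Illustrative scalar model: x_t = 0.9 * x_{t-1} + v_t,  z_t = x_t + w_t
f   <- function(x) 0.9 * x
s_v <- 0.5   # sd of process noise
s_w <- 0.3   # sd of observation noise

sir_pf <- function(z, N = 100) {
  T_len <- length(z)
  x_est <- numeric(T_len)
  x <- rnorm(N, 0, 1)            # initialize particles
  w <- rep(1 / N, N)             # initial weights
  for (t in 1:T_len) {
    # 1. Importance sampling: propagate particles through the state equation
    x <- f(x) + rnorm(N, 0, s_v)
    # Weight update with the transition prior as importance density
    w <- w * dnorm(z[t], mean = x, sd = s_w)
    w <- w / sum(w)              # normalize weights
    x_est[t] <- sum(w * x)       # weighted-mean state estimate
    # 2. Resampling when the estimated effective sample size is small
    n_eff <- 1 / sum(w^2)
    if (n_eff < N / 2) {
      idx <- sample.int(N, size = N, replace = TRUE, prob = w)
      x <- x[idx]
      w <- rep(1 / N, N)
    }
  }
  x_est
}

# Simulate data from the same model and run the filter
T_len <- 50
x_true <- numeric(T_len); z <- numeric(T_len)
x_prev <- 0
for (t in 1:T_len) {
  x_true[t] <- f(x_prev) + rnorm(1, 0, s_v)
  z[t]      <- x_true[t] + rnorm(1, 0, s_w)
  x_prev    <- x_true[t]
}
est <- sir_pf(z, N = 500)
sqrt(mean((est - x_true)^2))     # RMSE of the particle filter estimates
```

The resampling branch mirrors the criterion discussed above: the weights are reset to 1/N only when the estimated effective sample size drops below the threshold.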
3. The Missing Data Case—Estimation of Weights
We now proceed to the addition of a new step to Algorithm 1 for the case of missing data. For that purpose, some new definitions are needed. As the missing data mechanism is usually unknown, its possible dependence on the missing data themselves could introduce bias into the statistical inference. For this reason, a Missing At Random (MAR) assumption is adopted: let a random indicator variable indicate whether the jth component of the tth observation is available or not. That is,
Additionally, sets and are defined as the collections of missing and available components of observations , respectively, for every time step .
According to the MAR assumption, the missing data mechanism does not depend on the missing observations, given the available ones:
Let a set of particles be produced for the posterior estimation of the latent vector while the whole observation is missing. In addition, let the observational errors of the corresponding candidate particles be defined according to (2). Then, the conditional distribution of every observational error, given the particle set, is approximated as
(4)
where a point estimation of the previous hidden state and the process noise used for the generation of the corresponding particle are involved. Let a particle be used for the posterior estimation of the hidden state of the state-space model (1)–(2). Then, according to Algorithm 1 and Equation (1), the ith prior estimation of the hidden state is produced by the equation
(5)
According to Equation (2), the observational error of the particle is calculated as
(6)
If the (whole) observation is unavailable, sequential replacements of and from Equations (6) and (2), respectively, contribute to the creation of the formula,
As observation is considered missing, particles cannot be evaluated. Thus, both and are replaced according to Equations (1) and (5),
The hidden state is unknown, but its posterior distribution is available, so that a point estimation of it, , can be calculated. Then,
(7)
Therefore, since the quantity is a constant at time t, the distribution of can be approximated as
(8)
□
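Under the parameterization assumed after Equations (1) and (2) (z_t = A x_t + c + w_t, x_t = f(x_{t-1}) + v_t), the chain of substitutions described above can be written out as a sketch, with \tilde{x}_t^i the ith prior particle, x_{t-1}^i its parent, v_t^i the process-noise draw that generated it, and \hat{x}_{t-1} a point estimate of x_{t-1} (all names assumed):

\varepsilon_t^i = z_t - A \tilde{x}_t^i - c = A (x_t - \tilde{x}_t^i) + w_t = A\big(f(x_{t-1}) - f(x_{t-1}^i)\big) + A(v_t - v_t^i) + w_t \approx A\big(f(\hat{x}_{t-1}) - f(x_{t-1}^i)\big) + A(v_t - v_t^i) + w_t,

so that, conditionally on the particle set, \varepsilon_t^i is distributed as a constant (known at time t) plus the random sum A(v_t - v_t^i) + w_t.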
Given that the distributions of the random noise vectors are known, the distribution of this sum is also known, as it is the convolution of linear functions of its components. The calculation of such convolutions is not always an easy task, as analytical solutions may not be feasible, leading to numerical approximation options [16]. However, given that each noise process consists of iid vectors and the matrix A is constant, the distribution of this sum needs to be calculated only once.
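In the common special case where both noises are zero-mean Gaussian, say v_t \sim N(0, \Sigma_v) and w_t \sim N(0, \Sigma_w) (an assumption made here only for illustration), this convolution is available in closed form, since v_t, v_t^i, and w_t are mutually independent:

A(v_t - v_t^i) + w_t \sim N\big(0, \; 2 A \Sigma_v A^{\top} + \Sigma_w\big),

so the approximating distribution of the observational error is Gaussian with this covariance, centred at the constant term of the previous sketch, and it indeed needs to be computed only once.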
The replacement of can be avoided, if MC simulation has been implemented at this time point.
The weight assigned to depends on , according to Equations (3) and (6), because
Then, as the two variables (the observational error and the weight) are closely associated, knowledge of the distribution of the former leads to the derivation of the distribution of the latter. Even if the weight distribution cannot be calculated exactly, in cases where the weights are complicated functions of the errors, knowledge of the error distribution will suffice to provide information on the weight distribution. Thereby, the calculation suggested in Remark 1 is of interest for the concomitant estimation of the weights.
If the conditions of Proposition 1 hold, while is partially observed, the conditional distribution of every observational error on particle set and collection of available components of is approximated as
Estimation of the weight implies the estimation of the observational error, according to Equation (6). If an observation is partially available, its available components can be placed into the above equations. Thus, some components of the observational error will also be available, while the rest of the components, possibly dependent on the available ones, are estimated following the same rationale. In this case, (4) takes the form
□
In any case, missing (parts of) observational errors, along with their weights, can be substituted by single values, such as expected values or modes. Consequently, the initial PF algorithm undergoes only a slight change, as presented in Algorithm 2. Further to Remark 2, the substitution of observational errors is implemented after the Importance Sampling step in Algorithm 2 for the sake of simplicity.
Algorithm 2 SIR Particle Filter for missing data with observational error estimation
Require: N, q, , T
Initialize
for do
  1. Importance Sampling
  Sample
  Set
  Produce observational error estimations for the missing components and calculate importance weights
end for
for do
  Normalize weights
  2. Resampling
  if then
    for do
    end for
  else
    for do
      Sample with replacement index according to the discrete weight distribution
      Set and
    end for
  end if
end for
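To indicate how the additional step of Algorithm 2 can look in code, the sketch below modifies the weight computation of the earlier scalar R example. It implements one possible reading of the substitution idea (replace a missing observational error by its expected value given the previous point estimate and evaluate it under the approximate error distribution) and is not necessarily the authors' exact implementation; all names are illustrative.

```r
# One possible realization of the extra step of Algorithm 2 for the scalar
# example above (x_t = 0.9 * x_{t-1} + v_t, z_t = x_t + w_t); a sketch of
# the substitution idea, not necessarily the authors' exact implementation.
weight_step <- function(x_new, x_prev, w, z_t, x_hat_prev,
                        s_v = 0.5, s_w = 0.3) {
  if (is.na(z_t)) {
    # Observation missing: substitute the observational error by its expected
    # value given the previous point estimate, and evaluate it under the
    # approximate error distribution: N(0, 2 * s_v^2 + s_w^2) here (A = 1).
    eps_hat <- 0.9 * x_hat_prev - 0.9 * x_prev
    w * dnorm(eps_hat, mean = 0, sd = sqrt(2 * s_v^2 + s_w^2))
  } else {
    # Observation available: usual likelihood weight
    w * dnorm(z_t - x_new, mean = 0, sd = s_w)
  }
}
```

In the full filter loop, x_prev would be the particle set before propagation, x_new the propagated particles, and x_hat_prev the weighted mean at the previous time step; normalization and resampling then proceed exactly as in Algorithm 1.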
3.1. Connection to Markov Systems and Contribution to the Study over Impoverishment
Impoverishment of the particle samples can be studied in connection with certain Markov models, the Homogeneous or Non-homogeneous Markov Systems (denoted HMSs and NHMSs, respectively), which have their roots in [17]. With the consideration of a grid of cells over the state domain, problems of impoverishment reduce to the problem of deriving the distribution of the particle population over the grid cells. The term “grid” denotes here a single partition of the whole state domain, and the cells of this grid represent the states of the MS. At every time step, a particle moves from cell i to cell j with a (time-dependent) transition probability in the general case. However, the MS consideration is based on the hypothesis that population members situated in the same state move to any cell at the next time step according to a common transition probability. Thus, for the introduction of MS theory into the study of the particle distribution over the grid, the individual transition probabilities are approximated by single quantities for all particles in cell i at time point t. The fact that PF is applied to dynamical systems entails that different areas of the state domain become of particular interest at different time steps. Therefore, it is preferable for the grid lines not to remain constant over time. A simple time-varying grid is constructed within the simulation example of Section 4, while a more complex structure is provided in [18].

In the simple case where the PF algorithm comprises constant parameters and excludes the resampling step, the corresponding MS can be considered homogeneous, as the particles move only according to a state equation with constant approximating transition probability values. Resampling causes the redistribution of particles over the grid. The probability vectors for this redistribution are defined by the observational errors at every time step. Thus, steps with changing probability vectors are introduced into the MS, rendering it non-homogeneous. Moreover, the combined result over the grid of the production of weighted particles on the basis of system (1)–(2) and of resampling, at the end of every time step, is that of sums of multinomial trials with varying probability vectors (see also [19], p. 28); this procedure corresponds to the transitions of population members between the states of an MS. In general, sums of multinomial trials can be considered to follow the generalized multinomial distribution [20] or, more generally, the Poisson Binomial distribution (which has its roots in [21], §14). As the number of particles remains constant, according to Algorithm 1, the MS is considered closed. The difficulty in making predictions on the MS lies in the fact that the observational errors are not acquired a priori during the filtering procedure.
In this study, observational errors are substituted by single values for one time step, so that the weights can be estimated one step ahead. The set of weights configures the probability vectors of the resampling step. Thus, the distribution of the particles over the grid cells can be approximated for the upcoming step of resampling and new Importance Sampling; that is, the distribution of the particle population can be estimated for the next step on the basis of the estimated weights, so that the dispersion of the future particles over the grid can be assessed and impoverishment phenomena can be predicted. Such a practice paves the way for the involvement of stochastic control theory [12] (leading to control of asymptotic variability [22]) in the avoidance of impoverishment.
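To make the grid construction concrete, the sketch below computes, for a two-dimensional Gaussian process noise with independent components (an illustrative assumption), the probabilities with which a given point lands in each cell of a 3 × 3 grid after the noise is added; a common vector of this kind, computed at a representative point of a cell, is what is assigned to all particles of that cell in the MS approximation above.

```r
# Probability that a point p, perturbed by two independent N(0, s_v^2) noise
# components, lands in each cell of a 3 x 3 grid centred at point m with
# square cells of side h (outer cells extend to infinity). Illustrative
# helper, not taken from the paper's code.
cell_probs <- function(p, m, h, s_v) {
  # Breaks of the grid in each dimension (central cell is m +/- h/2)
  brk <- function(centre) c(-Inf, centre - h / 2, centre + h / 2, Inf)
  # Probability of each of the three bands in one dimension
  band <- function(x, centre) diff(pnorm(brk(centre), mean = x, sd = s_v))
  # Independence of the two components => outer product of band probabilities
  outer(band(p[1], m[1]), band(p[2], m[2]))
}

# Example: point at the centre of the grid, cell side equal to one
# noise standard deviation (values here are purely illustrative)
round(cell_probs(p = c(0, 0), m = c(0, 0), h = 1, s_v = 1), 3)
```

With such a common transition vector per cell, the movement of a cell's population over one step can be treated as a multinomial trial, and the whole particle population over the grid as a closed (non-)homogeneous Markov System.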
4. Simulating Example
4.1. Contribution to the Missing Data Case
A simulation example is presented in this section to emphasize the benefits of the proposed algorithm. The proposed method is compared with the typical PF algorithm, when the complete dataset is available, and with the multiple imputation particle filter (MIPF) for imputations [8]. The reduction step proposed in [23] is incorporated into the initial MIPF algorithm so that the best possible results are achieved. The data simulation is based on the state-space model of Equations (1) and (2), with two-dimensional vectors
(9)
where N symbolizes the Gaussian distribution and the model parameters are considered known. Next, concerning missing data, we let the data be missing completely at random. The same number of particles has been used for every filter. The distributions of the noises are also considered known. The weighted mean is used as a point estimator of the hidden state, and missing observational errors are substituted by their expected values. All the filters have been run 100 times, and their performance concerning precision and consumed time has been recorded. (The code was written in the R project [24]. Packages mvnorm [25], with its corresponding reference book [26], ggplot [27], and ggforce [28] were also used. Simulations were performed on an AMD A8-7600 3.10 GHz processor with 8 GB of RAM.) The results of the three methods are shown in Table 1. In the first two columns, the means over the simulations of the Root-Mean-Square Errors (RMSE) of the estimators (weighted means) for each component of the hidden states are presented. The mean of the two aforementioned columns is also calculated, as well as the mean time consumed by each approach. The table shows that the weight estimation of the suggested method outperforms MIPF concerning both precision and time elapsed. The precision of the suggested method slightly supersedes that of MIPF, while the mean required computational time is considerably lower (roughly half, per Table 1) than the corresponding mean time required for MIPF. The proposed method is also compared with the results of the standard PF algorithm, for which all observations are available, and it appears that, even though the precision is inevitably reduced in the case of missing data, the computational time remains nearly the same. The small difference in mean elapsed time is probably connected with the resampling decision. That is, in this example, the precision of the suggested method slightly supersedes that of its competitor, while its computational cost is much lower, reaching the levels of the basic filter (which is practically infeasible in the missing data case). In Figures 1 and 2, the performance of the proposed method and of MIPF is depicted for the two components of the state process, respectively, for one iteration of each filter. The estimators (weighted means) of the two approaches are close to each other, tracking the hidden vector satisfactorily. Therefore, in this example, the suggested method appears to provide the best option among the available ones in the missing data case.
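The bookkeeping of this comparison can be sketched generically as follows; simulate_data() and the individual filter functions are placeholders standing in for model (9) and the three methods compared, so the snippet only shows the repetition and RMSE-averaging structure rather than the actual experiment.

```r
# Generic harness for a Table 1 style comparison: repeat the simulation,
# run each filter, and average the RMSE of every state component over the
# repetitions. simulate_data() and the filters are illustrative placeholders.
compare_filters <- function(filters, simulate_data, n_rep = 100) {
  sapply(filters, function(flt) {
    rmse <- replicate(n_rep, {
      d   <- simulate_data()            # list(x = T x m states, z = observations)
      est <- flt(d$z)                   # T x m matrix of point estimates
      sqrt(colMeans((est - d$x)^2))     # RMSE per state component
    })
    rowMeans(rmse)                      # mean RMSE over repetitions
  })
}
```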
4.2. Contribution to Impoverishment Prediction
As far as the estimation of the particle distribution one step ahead is concerned, an application for the transition of the particles from time point t = 9 to time point t = 10 is presented during one implementation of the suggested PF with single imputation for the missing values of the available dataset. In the time interval (0,10], only the first component of the observation is unavailable. At the end of time step t = 9, resampling is implemented, and the histograms of the particle sets are exhibited in Figure 3. The sample mean of the particles and the standard deviations of the corresponding components are recorded. According to Equation (4) and the given parameters of the problem, the random factor that needs to be estimated for every particle at the next time step is
where the distribution of this factor follows from the noise distributions. Thus, in both dimensions, partitions are considered so that a grid of nine cells is configured over the two dimensions. The frequency table (Table 2) exhibits the particle distribution over the grid. The selected time period was chosen because a considerable number of preceding steps permits a relatively good adaptation of the particle samples to the hidden states, while the samples have not yet collapsed to a tiny neighborhood around a single point (utter impoverishment). This is evident in Figure 3c and Table 2, where the distribution of the particles is presented in connection with the hidden state and the suggested grid. The produced particles are both close to the hidden state, as most of them lie less than one standard deviation from it, and sparse enough for particles to exist outside the central cell of the grid. Thus, the condition of the sample at these time points constitutes a typical example of filter implementation before its collapse. Such time points may be suitable starting points for the introduction of control (which exceeds the scope of this study), as the subject sample is in good condition concerning both impoverishment and the accuracy of the hidden variable estimation.
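A frequency table such as Table 2 can be produced by cutting each component of the particle sample at the grid lines around the chosen centre; the helper below is an illustrative sketch (names and the commented example values are assumptions, not the paper's data).

```r
# Tabulate a two-dimensional particle sample over a 3 x 3 grid centred at a
# point m with square cells of side h, as in Table 2 (names illustrative).
grid_table <- function(particles, m, h) {
  cut_dim <- function(x, centre)
    cut(x, breaks = c(-Inf, centre - h / 2, centre + h / 2, Inf),
        labels = c("low", "central", "high"))
  table(first  = cut_dim(particles[, 1], m[1]),
        second = cut_dim(particles[, 2], m[2]))
}

# Illustrative call: 100 particles around their sample mean, cell side h = 1
# prt <- cbind(rnorm(100, 0, 0.7), rnorm(100, 0, 0.7))
# grid_table(prt, m = colMeans(prt), h = 1)
```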
For the next time step, a prior estimation of the hidden vector is implemented. For the formation of the new grid, the existing particles are moved according to the deterministic part of Equation (9), and the mean of the new particle set is obtained. This quantity constitutes a prior point estimator of the hidden state. Thus, the previous grid is shifted accordingly to a new grid, as shown in Table 3, the central cell of which is
The movement of all the particles according to the deterministic part of Equation (9) results in the frequency table of Table 3, which shows that all the new particles belong to the central cell. Even though the particles are identically distributed, once the process noise is added the probabilities of moving from the central cell to the other cells differ from particle to particle, as the particles initially lie at different distances from the grid lines. This is in contrast to the theoretical background of MSs, according to which the population members move during a time step according to a common transition probability matrix P. For this reason, the probabilities of the particles moving to each cell after the addition of the random noise are approximated by the probabilities of the point estimate moving to each cell after the addition of noise. These probabilities (rounded values) are provided in Table 4. Thus, the expected numbers of particles over the grid cells are
and the expected distribution of the particles over the grid is presented in Table 5. Concerning the expected posterior distribution of the particles, the expected observational errors are zero, so that the particle weights are expected to remain the same. Thus, no further change is expected in their distribution over the cells, even if resampling is decided upon, as all weights are equal after the resampling of the previous time step. Although the expected values of the observational errors are zero, the prior estimation of their distribution according to relation (4) and the model parameters, which determine the variances of the errors, evinces the increased uncertainty associated with them.
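The expected counts of Table 5 follow directly from the transition probabilities of Table 4 multiplied by the number of particles (100, as in Table 3); for instance, for the central cell,

100 \times 0.335 = 33.5,

and similarly 100 \times 0.122 = 12.2 for the edge cells and 100 \times 0.044 = 4.4 for the corner cells.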
The results of the implementation of PF at the next time step are also exhibited. Resampling has taken place at this time step as well. The joint histogram of the posterior sample over both dimensions (Figure 4) indicates that the majority of the particles do not belong to the central cell. This is reasonable, as the side length of the central cell equals only one standard deviation, so that the prior probabilities of the particles being placed outside the central cell at this time point are considerably large according to the Empirical Rule (68-95-99.7) for the normal distribution. To relate these results to this rule, the orange squares are drawn in Figure 4 to delimit the corresponding areas of the rule for each separate dimension, while the orange circles are the corresponding standard deviation circles (and not ellipses, in general, as the two components have the same variance) of the whole vector. Thus, questions about the suitability of the proposed grid structure are raised for future study. Nevertheless, it should be mentioned that a grid with a central cell of double side length would have classified all particles into the central cell during this time step, rendering further study on the issue meaningless. Additionally, a new grid of nine cells is also constructed around the mean of this posterior particle set, the central cell of which has the same side length. The distribution of the particles over the new grid is quoted in Table 6. In comparison with Table 2, the number of particles in the central cell is increased in Table 6.
In the present example, the transitions of the particles according to the deterministic function led all particles to a single cell (Table 3), so that the result of the addition of process noise was handled as a result of a multinomial trial. In the case that the deterministic function leads the particles to more than one cell, then it is suggested that different means be found for each cell as well as corresponding transition probabilities, so that the final result can be considered the sum of results of multinomial trials for the transitions to every cell.
5. Discussion and Conclusions
In this study, single substitution (in contrast to MIPF) of observational errors is proposed for missing data cases when PF is implemented and the MAR assumption is adopted. This method is a single imputation procedure. Acuña et al. [23] argued against single imputation, as it is rather simplistic and cannot attribute to a single value the distributional characteristics that can be approached and described by a sample of multiple imputations. Nevertheless, the primary target of the proposed technique is the minimization of the computational cost added to the initial PF algorithm in the case of missing data; for this purpose, the interventions in the PF algorithm are slight. Moreover, in the provided simulation, the suggested method outperforms the multiple imputation approach even for a considerable number of imputations, whereas Acuña et al. [23] noticed that MIPF with imputations produces very satisfactory results, according to the approximation of the multiple imputation estimator of efficiency provided by Rubin [29]. As a result, in this example, the estimation of observational errors performs better with respect to both the computational time it requires and the precision it achieves. Moreover, knowledge of the distribution of the observational errors makes it possible to quantify the uncertainty of the point estimations. Thus, the suggested method takes advantage of the low computational cost of the single imputation option, while more general distributional characteristics of the observational noise can be taken into account at the same time (see Remark 3).
The contribution of such a method to coping with impoverishment problems is also worth mentioning. The method permits the estimation of the observational errors and their corresponding weights one time step forward. To the best of the authors' knowledge, the evolution of the weight distribution has not yet been estimated a priori for multiple time steps, but this is feasible at least one step ahead. As the weights of the next step can be estimated, the probabilities that a particle will be chosen at the resampling step can also be estimated. As explained in Section 3.1, the assessment of the weight distribution for the forthcoming time steps could be very interesting insofar as it is connected with impoverishment issues. Concerning future perspectives on this issue, the study of impoverishment problems can draw on input control [12], in order for the impoverishment to be controlled; on laws of large numbers [30], as the MC approximation employs large samples; on state capacity restrictions [31], for the existence of a population limit at every grid cell; on the literature on the evolution of attainable structures [32]; and on the evolution of the distribution of particles [33] or of the corresponding moments [34] in the direction of HMSs and NHMSs, for the estimation of the future behaviour of the sample, possibly reaching continuous-time models [35]. Research on automatic optimal control [36] could be combined with the suggested methodology, possibly leading to interesting joint applications of PF [37] along with artificial intelligence [38]. The performance of the method could also be tested when data are missing for longer time periods [39], while more sophisticated grid structures could also be examined [18]. Correspondingly, in a broader sense, the main idea of the proposed method could be implemented in errors-in-variables signal processing for missing data cases [40], or it could be involved in more complex models that require MC simulation for the prior estimation of variables [41].
Author Contributions
Conceptualization, R.L. and G.T.; methodology, R.L. and G.T.; software, R.L.; validation, R.L. and G.T.; formal analysis, R.L. and G.T.; writing—original draft preparation, R.L.; writing—review and editing, G.T.; supervision, G.T.; and funding acquisition, R.L. and G.T. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by A.G. Leventis foundation (Grant for Doctoral Studies).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The authors would like to express their gratitude to Panagiotis-Christos Vassiliou and Andreas C. Georgiou for the opportunity to submit to this Special Issue for possible publication. The constructive comments and the valuable suggestions of the three anonymous reviewers as well as the editorial assistance of Bella Chen are highly appreciated.
Conflicts of Interest
The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
PF | Particle Filter |
MC | Monte Carlo |
MIPF | Multiple Imputation Particle Filter |
MAR | Missing At Random |
MS | Markov System |
iid | independent and identically distributed |
SIR | Sampling Importance Resampling |
HMSs | Homogeneous Markov Systems |
NHMSs | Non-homogeneous Markov Systems |
RMSE | Root Mean Square Error |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figures and Tables
Figure 1. Time-series of the hidden values, the observations, and the corresponding point estimations of the proposed method and MIPF imputations for the first component x1,t of the state process.
Figure 2. Time-series of the hidden values, the observations, and the corresponding point estimations of the proposed method and MIPF imputations for the second component x2,t of the state process.
Figure 3. Histograms of particle samples for the posterior estimation of hidden variable x9.
Figure 4. Joint histogram of the particle sample (i=1,⋯,N) for the posterior estimation of both components of the whole hidden vector x10. Red lines delimit the suggested grid cells of Table 3. The red diamond stands for the hidden state. The red star stands for the observation at this time point. The sides of the two squares are correspondingly one and two standard deviations σz from the center of the diagram. The circles are inscribed in the corresponding squares.
Comparison of the results over three methods: the basic PF algorithm, when all observations are available; the weight estimation method, which is proposed in this study; and MIPF for imputations. The methods are compared through the mean of RMSE and the time consumed over the 100 repeated implementations.
Method | Mean RMSE for x1,t | Mean RMSE for x2,t | Overall Mean Precision | Mean Time Elapsed (s) |
---|---|---|---|---|
Basic PF | 0.1610253 | 0.1566881 | 0.1588567 | 2.5570 |
Weight est. | 0.2065578 | 0.2102287 | 0.2083933 | 2.5491 |
MIPF | 0.2267527 | 0.2173670 | 0.2220598 | 4.9137 |
Frequency table of the particle distribution over the suggested grid at the end of the current time step.
2nd Component | | | |
---|---|---|---|---|
1st Component | ||||
2 | 6 | 3 | ||
5 | 77 | 1 | ||
1 | 4 | 1 |
Frequency table of the particle distribution over the suggested grid at the next time step, when the particles move only according to the deterministic part of Equation (9).
2nd Component | | | |
---|---|---|---|---|
1st Component | ||||
0 | 0 | 0 | ||
0 | 100 | 0 | ||
0 | 0 | 0 |
Transition probability table for the point estimate to move to each cell with the addition of the process noise.
2nd Component | | | |
---|---|---|---|---|
1st Component | ||||
0.044 | 0.122 | 0.044 | ||
0.122 | 0.335 | 0.122 | ||
0.044 | 0.122 | 0.044 |
Frequency table for the expected numbers of the particles over the grid cells after the addition of the process noise realizations at the next time step.
2nd Component | | | |
---|---|---|---|---|
1st Component | ||||
4.4 | 12.2 | 4.4 | ||
12.2 | 33.5 | 12.2 | ||
4.4 | 12.2 | 4.4 |
Frequency table of the particle distribution over the grid around the mean of the sample at the end of the next time step.
2nd Component | | | |
---|---|---|---|---|
1st Component | ||||
1 | 3 | 0 | ||
6 | 90 | 0 | ||
0 | 0 | 0 |
© 2021 by the authors.
Abstract
Observational errors of Particle Filtering are studied for the case of a state-space model with a linear observation equation. In this study, the observational errors are estimated prior to the upcoming observations, and this action is added to the basic algorithm of the filter as a new step for the acquisition of the state estimations. This intervention is useful mainly in the presence of missing data problems, as well as for sample tracking regarding impoverishment issues. It applies the theory of Homogeneous and Non-Homogeneous closed Markov Systems to the study of the particle distribution over the state domain and, thus, lays the foundations for the employment of stochastic control against impoverishment. A simulation example is provided to demonstrate the effectiveness of the proposed method in comparison with existing ones, showing that the proposed method combines satisfactory precision with a low computational cost, and to provide an example of impoverishment prediction and tracking.