Technical Report: UTEP-CS-15-89


How can we estimate parameters from observations subject to errors and uncertainty? Very often, the measurement errors are random quantities that can be adequately described by probability theory. When we know that the measurement errors are normally distributed with zero mean, the (asymptotically optimal) Maximum Likelihood Method leads to the popular least squares estimates. In many situations, however, we do not know the shape of the error distribution; we only know that the measurement errors lie within a certain interval. In this case, the maximum entropy approach leads to a uniform distribution on this interval, and the Maximum Likelihood Method results in the so-called minimax estimates. We analyse the specifics and drawbacks of minimax estimation under essential interval uncertainty in the data and discuss possible ways to overcome these difficulties. Finally, we show that, for a linear functional dependency, the minimax estimates motivated by the Maximum Likelihood Method coincide with those produced by the Maximum Consistency Method, which originates from interval analysis.
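The contrast described in the abstract can be sketched numerically: for a linear model y = a·x + b, the least squares fit minimizes the sum of squared residuals (optimal for normal errors), while the minimax fit minimizes the maximum absolute residual (what Maximum Likelihood yields for errors uniform on an interval). The sketch below is illustrative only and is not from the report; the data are made up, and the crude grid search stands in for the linear program one would solve in practice.

```python
def least_squares_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def minimax_line(xs, ys, steps=200):
    """Minimize the maximum absolute residual by a crude grid
    search around the least-squares fit (a real implementation
    would solve a small linear program instead)."""
    a0, b0 = least_squares_line(xs, ys)
    best = (float("inf"), a0, b0)
    for i in range(steps + 1):
        a = a0 - 1.0 + 2.0 * i / steps
        for j in range(steps + 1):
            b = b0 - 1.0 + 2.0 * j / steps
            r = max(abs(y - (a * x + b)) for x, y in zip(xs, ys))
            if r < best[0]:
                best = (r, a, b)
    return best[1], best[2]

# Hypothetical data: roughly y = x with bounded (interval) noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.1, 1.9, 3.2, 3.9]
a_ls, b_ls = least_squares_line(xs, ys)
a_mm, b_mm = minimax_line(xs, ys)
print("least squares:", a_ls, b_ls)
print("minimax:      ", a_mm, b_mm)
```

By construction, the minimax fit has a maximum absolute residual no larger than that of the least-squares fit; the two generally differ, which is the tension the report analyses.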
