#### Publication Date

July 2019

#### Abstract

In many practical situations, we only have partial information about the probabilities; this means that there are several different probability distributions consistent with our knowledge. In such cases, if we want to select one of these distributions, it makes sense not to pretend that our uncertainty is small -- and thus, to select the distribution with the largest possible uncertainty. A natural measure of the uncertainty of a probability distribution is its entropy. So, out of all probability distributions consistent with our knowledge, we select the one whose entropy is the largest. In many cases, this works well, but in some cases, this Maximum Entropy approach leads to counterintuitive results. For example, if all we know is that the variable lies in a given interval, then the Maximum Entropy approach selects the uniform distribution on this interval. In this distribution, the probability density ρ(x) changes abruptly at the interval's endpoints, while intuitively, we expect it to change smoothly with x. To reconcile the Maximum Entropy approach with our intuition, we propose to limit consideration to distributions for which the probability density's rate of change is bounded by some a priori value -- and to search for the distribution with the largest entropy only among such distributions. We show that this natural restriction indeed reconciles the Maximum Entropy approach with our intuition.
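The restriction described above can be illustrated with a small numerical sketch. The trapezoidal shape, the interval [0, 1], the slope bound `k`, and the requirement that the density vanish at the endpoints are all assumptions made here for illustration; they are not taken from the paper itself.

```python
import math

# Illustrative sketch (assumptions, not the paper's construction): on [0, 1],
# suppose the density must vanish at the endpoints and its slope is bounded
# by an a priori value k.  A natural candidate satisfying the bound is a
# trapezoid: rise at the maximal rate k, stay flat at height h, descend at
# rate -k.  Normalization (area = 1) then fixes h via  h - h**2 / k = 1.

def trapezoid_density(k):
    """Return (rho, h): a trapezoidal density on [0, 1] obeying the slope
    bound k (requires k >= 4), and its plateau height h."""
    # Plateau of length 1 - 2h/k plus two triangular ramps of width h/k
    # each gives total area h - h^2/k; setting it to 1 yields:
    h = (k - math.sqrt(k * k - 4 * k)) / 2
    def rho(x):
        return max(0.0, min(k * x, h, k * (1 - x)))
    return rho, h

def entropy(rho, n=200_000):
    """Differential entropy -∫ rho ln rho dx over [0, 1], midpoint rule."""
    dx = 1.0 / n
    total = 0.0
    for i in range(n):
        p = rho((i + 0.5) * dx)
        if p > 0:
            total -= p * math.log(p) * dx
    return total

rho, h = trapezoid_density(k=6.0)
area = sum(rho((i + 0.5) / 200_000) for i in range(200_000)) / 200_000
H = entropy(rho)
# The uniform density on [0, 1] has differential entropy 0; any density
# obeying the slope bound (and vanishing outside the interval) has strictly
# smaller entropy, so the constrained maximum changes smoothly with x.
```

Running this with `k = 6` gives an area of 1 (up to discretization error) and a negative entropy, consistent with the uniform distribution (entropy 0 on [0, 1]) being excluded by the smoothness restriction.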

## Comments

Technical Report: UTEP-CS-19-77

To appear in V. Kreinovich (ed.), *Statistical and Fuzzy Approaches to Data Processing, with Applications to Econometrics and Other Areas*, Springer, Cham, Switzerland.