Technical Report: UTEP-CS-10-06a

Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2010, Toronto, Canada, July 12-14, 2010, pp. 367-373.


In many practical situations, we have only partial information about the probabilities. In some cases, we have {\em crisp} (interval) bounds on the probabilities and/or on the related statistical characteristics. In other situations, we have {\em fuzzy} bounds, i.e., different interval bounds with different degrees of certainty.
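One common way to represent such a fuzzy bound is as a family of nested intervals indexed by a degree of certainty (alpha-cuts): the higher the required degree, the narrower the interval. The following sketch illustrates this representation; the variable names and the specific numeric intervals are illustrative assumptions, not taken from the paper.

```python
# A fuzzy bound on a probability, represented by its alpha-cuts:
# for each degree of certainty alpha, an interval of possible values.
# Higher alpha => a narrower, nested interval (the "core" of the fuzzy set).
# (Illustrative values; not from the paper.)
fuzzy_bound = {
    0.1: (0.30, 0.70),  # wide interval, close to the support
    0.5: (0.40, 0.60),
    1.0: (0.45, 0.55),  # narrowest interval: values we are most certain about
}

# Consistency check: alpha-cuts must be nested --
# a cut at a higher alpha lies inside every cut at a lower alpha.
levels = sorted(fuzzy_bound)  # ascending degrees of certainty
for a_low, a_high in zip(levels, levels[1:]):
    lo1, up1 = fuzzy_bound[a_low]
    lo2, up2 = fuzzy_bound[a_high]
    assert lo1 <= lo2 and up2 <= up1, "alpha-cuts must be nested"

print("alpha-cuts are properly nested")
```

In this representation, a crisp interval bound is just the special case where all alpha-cuts coincide.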

In a situation with uncertainty, we do not know the exact value of the desired characteristic. In such situations, it is desirable to find its worst possible value, its best possible value, and its "typical" value -- the one corresponding to the "most probable" probability distribution. Usually, as such a "typical" distribution, we select the one with the largest value of the entropy. This works perfectly well in the usual cases, when the information about the distribution consists of the values of moments and other characteristics. For example, if we only know the first and the second moments, then the distribution with the largest entropy is the normal (Gaussian) one.
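This moments-to-Gaussian fact can be checked numerically: among zero-mean distributions with the same variance, the Gaussian has the largest differential entropy. The sketch below compares the closed-form entropies of three unit-variance distributions; it is an illustration of the maximum entropy principle stated above, not part of the paper's construction.

```python
import math

sigma2 = 1.0  # the fixed (known) variance; mean is fixed at 0

# Gaussian: h = (1/2) * ln(2 * pi * e * sigma^2)
h_gauss = 0.5 * math.log(2 * math.pi * math.e * sigma2)

# Uniform on [-w/2, w/2]: variance w^2/12 = sigma2  =>  w = sqrt(12*sigma2);
# its differential entropy is ln(w)
w = math.sqrt(12 * sigma2)
h_uniform = math.log(w)

# Laplace with scale b: variance 2*b^2 = sigma2  =>  b = sqrt(sigma2/2);
# its differential entropy is 1 + ln(2*b)
b = math.sqrt(sigma2 / 2)
h_laplace = 1 + math.log(2 * b)

print(f"Gaussian: {h_gauss:.4f}")   # ~ 1.4189
print(f"Laplace:  {h_laplace:.4f}") # ~ 1.3466
print(f"Uniform:  {h_uniform:.4f}") # ~ 1.2425

# The Gaussian attains the maximum, as the principle predicts.
assert h_gauss > h_laplace > h_uniform
```

The same ordering holds for any fixed variance: the Gaussian's entropy exceeds that of every other distribution with the same first two moments.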

However, in some situations, we know the entropy (= amount of information) of the distribution. In this case, the maximum entropy approach does not work, since all the distributions which are consistent with our knowledge have the exact same entropy value. In this paper, we show how the main ideas of the maximum entropy approach can be extended to this case.
