Publication Date

4-2019

Comments

Technical Report: UTEP-CS-19-37

Abstract

One of the important steps in deep learning is softmax, in which we select one of the alternatives with a probability that depends on its expected gain. A similar formula describes human decision making: somewhat surprisingly, when presented with several choices with different expected equivalent monetary gains, we do not simply select the alternative with the largest gain; instead, we make a random choice, with the probability of selecting each alternative increasing with its gain, so that it is possible that we will select the second highest or even the third highest value. Both formulas assume that we know the exact value of the expected gain for each alternative. In practice, we usually know this gain only with some uncertainty. For example, often we only know the lower bound L and the upper bound U on the expected gain, i.e., we only know that the actual gain g is somewhere in the interval [L, U]. In this paper, we show how to extend the softmax and discrete choice formulas to such cases of interval uncertainty.
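For reference, below is a minimal Python sketch of the standard (point-valued) softmax choice rule mentioned above, in which alternative i is selected with probability proportional to exp(g_i / T). The temperature parameter T and the function name are illustrative assumptions, not taken from the paper, and the interval-uncertainty extension derived in the paper is not reproduced here.

import numpy as np

def softmax_choice(gains, temperature=1.0, rng=None):
    # Standard softmax (logit / Luce) choice rule: p_i proportional to exp(g_i / T).
    # The temperature parameter is an illustrative assumption, not from the paper.
    rng = np.random.default_rng() if rng is None else rng
    g = np.asarray(gains, dtype=float) / temperature
    g -= g.max()            # subtract the maximum for numerical stability
    p = np.exp(g)
    p /= p.sum()            # normalize so the probabilities add up to 1
    return rng.choice(len(p), p=p), p

# Example: three alternatives with expected gains 3, 2, and 1.
choice, probs = softmax_choice([3.0, 2.0, 1.0])
print(choice, probs)        # the highest gain is most likely, but not certain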
