Publication Date

3-2013

Comments

Technical Report: UTEP-CS-13-22

To appear in Proceedings of the Joint World Congress of the International Fuzzy Systems Association and Annual Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS'2013, Edmonton, Canada, June 24-28, 2013.

Abstract

In many practical situations, given the values of observed quantities x1, ..., xn, we need to predict the value of a desired quantity y. To estimate the accuracy of a prediction algorithm f(x1, ..., xn), we need to compare the algorithm's predictions with the actually observed values.

The value y usually depends not only on the values x1, ..., xn, but also on the values of other quantities that we do not measure. As a result, even for the exact same values of the quantities x1, ..., xn, we may get somewhat different values of y. It is often reasonable to assume that, for each combination of values xi, the possible values of y are normally distributed, with some mean E and standard deviation s. Ideally, we should predict both E and s, but in many practical situations, we only predict a single value Y. How can we gauge the accuracy of this prediction based on the observations?
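To make this setup concrete, here is a minimal sketch (in Python) of how E and s could be estimated from repeated observations of y taken at one fixed combination of x1, ..., xn; the observation values and variable names below are purely illustrative assumptions, not data from the paper.

    import statistics

    # Hypothetical repeated observations of y for one fixed combination of x1, ..., xn.
    y_observed = [11.2, 9.8, 10.5, 10.1, 9.4, 10.9]

    # Under the normality assumption, the sample mean and sample standard deviation
    # estimate the mean E and standard deviation s of the distribution of y.
    E_hat = statistics.mean(y_observed)
    s_hat = statistics.stdev(y_observed)

    print("estimated E:", E_hat)
    print("estimated s:", s_hat)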

A seemingly reasonable idea is to use a crisp evaluation of prediction accuracy: a method is accurate if Y belongs to the k-sigma interval [E - k * s, E + k * s] for some pre-selected value k (e.g., 2, 3, or 6). However, under this criterion, the value Y = E + k * s is considered accurate, while a value E + (k + d) * s, which for small d > 0 is practically indistinguishable from Y, is considered inaccurate. To achieve a more adequate description of accuracy, we propose to define a degree to which a given estimate is accurate.
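The following sketch (in Python) contrasts the crisp k-sigma criterion with one possible graded degree of accuracy; the specific membership function used here is only an illustrative assumption, not the definition given in the paper.

    def crisp_accuracy(Y, E, s, k=2.0):
        # Crisp criterion: Y is accurate iff it lies in the k-sigma interval [E - k*s, E + k*s].
        return abs(Y - E) <= k * s

    def degree_of_accuracy(Y, E, s, k=2.0):
        # Illustrative graded criterion (an assumption, not the paper's definition):
        # the degree decreases smoothly with the deviation |Y - E| measured in units of s,
        # equals 1 when Y = E, and equals 0.5 exactly at the k-sigma boundary.
        d = abs(Y - E) / s
        return 1.0 / (1.0 + (d / k) ** 2)

    # Y = E + k*s and Y = E + (k + d)*s for small d > 0 get opposite crisp verdicts,
    # but nearly equal graded degrees of accuracy.
    E, s, k = 10.0, 1.5, 2.0
    print(crisp_accuracy(E + k * s, E, s, k), degree_of_accuracy(E + k * s, E, s, k))
    print(crisp_accuracy(E + (k + 0.01) * s, E, s, k), degree_of_accuracy(E + (k + 0.01) * s, E, s, k))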

As a case study, we consider the problem of predicting which students are at risk.
