Publication Date

7-2003

Comments

Technical Report: UTEP-CS-03-22

Published in: Vasco Brattka, Matthias Schroeder, Klaus Weihrauch, and Ning Zhong (eds.), Proceedings of the Conference on Computability and Complexity in Analysis CCA'2003, Cincinnati, Ohio, USA, August 28-30, 2003, pp. 19-54.

Abstract

In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. To estimate y, we find some easier-to-measure quantities x1,...,xn that are related to y by a known relation y=f(x1,...,xn). Measurements are never 100% accurate; hence, the measured values Xi differ from the actual values xi, and the resulting estimate Y=f(X1,...,Xn) differs from the desired value y=f(x1,...,xn). How different?
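As a minimal numeric illustration of this setup (the function f and all values below are hypothetical, not taken from the paper): estimating a resistance y = V/I from an easier-to-measure voltage and current.

```python
# Hypothetical illustration: estimate a resistance y = f(x1, x2) = x1 / x2
# indirectly from an easier-to-measure voltage x1 and current x2.

def f(v, i):
    """Known relation y = f(x1, x2) between the quantities."""
    return v / i

# Actual (in practice unknown) values and the measured values,
# which differ by the measurement errors dxi = Xi - xi.
v_actual, i_actual = 10.0, 2.0        # x1, x2
v_measured, i_measured = 10.1, 1.95   # X1, X2

y_actual = f(v_actual, i_actual)        # y = 5.0
y_estimate = f(v_measured, i_measured)  # Y, about 5.18

print(f"y = {y_actual}, Y = {y_estimate:.3f}, "
      f"error = {y_estimate - y_actual:.3f}")
```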

The traditional engineering approach to error estimation in data processing assumes that we know the probabilities of different measurement errors dxi=Xi-xi. In many practical situations, we only know the upper bound Di on this error; hence, after the measurement, the only information that we have about xi is that it belongs to the interval [xi]=[Xi-Di,Xi+Di]. In this case, it is important to find the range [y] of all possible values of y=f(x1,...,xn) when each xi is in [xi]. We start the paper with a brief overview of the computational complexity of the corresponding interval computation problems.
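A minimal sketch of such an interval computation (the function f, measured values Xi, and bounds Di below are hypothetical): naive interval arithmetic produces a guaranteed enclosure of the range [y], although for general f it can overestimate the true range.

```python
# Each input xi is only known to lie in [Xi - Di, Xi + Di]; we enclose
# the range [y] of f(x1, x2) = x1*x2 + x1 over the box [x1] x [x2].

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    # The range of a product is attained at endpoint products.
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

# Measured values Xi and error bounds Di (hypothetical numbers).
X = [1.0, 2.0]
D = [0.1, 0.2]
x1, x2 = [(Xi - Di, Xi + Di) for Xi, Di in zip(X, D)]

# Naive interval evaluation: a guaranteed enclosure of [y]; for general
# f with repeated variables it may be wider than the exact range.
y = interval_add(interval_mul(x1, x2), x1)
print(y)  # approximately (2.52, 3.52)
```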

We then discuss how this computational complexity changes when, in addition to the upper bounds Di, we have some partial information about the probabilities of different values of dxi.
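One standard example from the interval-statistics literature illustrates this contrast (the data below are hypothetical): when each data point xi is only known to lie in an interval, the range of the sample mean is computable in linear time, while computing the exact upper bound on the sample variance is, in general, NP-hard.

```python
# Sketch: statistics under interval uncertainty (hypothetical data).
# Each xi is only known to lie in an interval [lo_i, hi_i].
from itertools import product

intervals = [(0.9, 1.1), (1.8, 2.2), (2.9, 3.3)]
n = len(intervals)

# Range of the sample mean: the mean is monotone in each xi, so its
# extreme values are attained at the interval endpoints -- linear time.
mean_lo = sum(lo for lo, _ in intervals) / n
mean_hi = sum(hi for _, hi in intervals) / n
print(f"mean range: [{mean_lo:.3f}, {mean_hi:.3f}]")

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Upper bound on the sample variance: the maximum is attained at some
# combination of endpoints (the variance is convex in each xi), but the
# problem is NP-hard in general; brute force over all 2^n endpoint
# choices works only for tiny n.
var_hi = max(variance(choice) for choice in product(*intervals))
print(f"variance upper bound: {var_hi:.3f}")
```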

We also show how the use of quantum computing can speed up some computations related to interval and probabilistic uncertainty.

Most of the related problems turn out to be, in general, at least NP-hard. We end the paper with speculations on whether (and how) hypothetical physical devices could solve NP-hard problems faster than in exponential time.
