In this paper, we provide an asymptotically optimal algorithm for checking whether a given vector is a solution to a given interval-quantifier linear system. For a system of m equations with n unknowns, this algorithm takes time O(m * n).
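To illustrate why time O(m * n) suffices in the simplest special case, here is a hedged Python sketch for a plain interval linear system with no quantifier structure: a vector x is accepted for the i-th equation if the interval of possible values of the left-hand side intersects the interval for the right-hand side. The function name and interface are our own illustration; the paper's algorithm handles the more general quantifier case.

```python
def solves_interval_system(A_lo, A_hi, b_lo, b_hi, x):
    """Check whether x can satisfy A x = b for some choice of
    coefficients a_ij in [A_lo[i][j], A_hi[i][j]] and right-hand
    sides b_i in [b_lo[i], b_hi[i]].  Runs in O(m * n) time.

    Illustrative sketch for the simplest (purely existential) case.
    """
    m, n = len(A_lo), len(x)
    for i in range(m):
        lo = hi = 0.0
        for j in range(n):
            # interval [A_lo, A_hi] times the (possibly negative) number x[j]
            p = A_lo[i][j] * x[j]
            q = A_hi[i][j] * x[j]
            lo += min(p, q)
            hi += max(p, q)
        # the i-th equation can hold iff the interval [lo, hi] of possible
        # left-hand sides intersects the interval [b_lo[i], b_hi[i]]
        if hi < b_lo[i] or lo > b_hi[i]:
            return False
    return True
```

Each of the m equations is processed in O(n) steps, which gives the overall O(m * n) bound.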

This result shows the potential of fuzzy data processing. To maximally utilize this potential, we need to provide an operational meaning of the corresponding fuzzy degrees. We show that such a meaning can be extracted from the above justification of fuzzy techniques. It turns out that, in contrast to probabilistic uncertainty, the natural operational meaning of fuzzy degrees is indirect -- similarly to the operational meaning of geometry and physics in General Relativity.

In some practical situations, dependence is more general than partial order: e.g., to understand Chapter 3, we may need to understand either Chapter 1 or Chapter 2, but it is not necessary to understand both. We show that such a general dependence can be naturally described by a known generalization of topology: the notion of an interior (or, equivalently, closure) structure (provided, of course, that this structure satisfies a natural analog of T_{0}-separability).

The traditional approach to decision making is based on the assumption that we know all possible consequences of each alternative, and we know the probability of each such consequence. Under this assumption, we can describe a rational decision-making process: to each possible consequence, we assign a numerical value called its utility, and we select the alternative for which the expected value of the utility is the largest.

An important advantage of this approach is that it can be performed in *real time:* if, after we have made a decision, a new alternative appears, we do not need to repeat the whole analysis: all we need to do is compare the new alternative with the previously selected one.
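The two steps above -- expected-utility selection and its real-time update -- can be sketched in Python. The function names are our own illustration, not the paper's notation:

```python
def expected_utility(probs, utils):
    """Expected value of utility: sum over consequences of p * u."""
    return sum(p * u for p, u in zip(probs, utils))

def best_alternative(alternatives):
    """alternatives: list of (probs, utils) pairs, one per alternative.
    Returns the index of the alternative with the largest expected utility."""
    return max(range(len(alternatives)),
               key=lambda i: expected_utility(*alternatives[i]))

def update_best(best_value, new_probs, new_utils):
    """Real-time property: when a new alternative appears, we do not redo
    the whole analysis -- one comparison with the current best suffices."""
    return max(best_value, expected_utility(new_probs, new_utils))
```

For example, an alternative with consequences of utility 10 and 0, each with probability 0.5, has expected utility 5; a sure outcome of utility 6 is therefore preferred, and a later-arriving alternative is handled by a single call to `update_best`.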

In the past, when we used the same procedures year after year, we accumulated a lot of data about the consequences of different decisions -- based on which we could estimate the desired probabilities. Nowadays, with new technologies and new materials constantly emerging, we do not have such detailed information about the consequences of these new technologies. As a result, we often only have partial information about the corresponding probabilities. Different possible probability values result in different values of expected utility. Hence, for each alternative, instead of a single value of expected utility, we have a range (interval) of possible values. We need to make a decision under such interval uncertainty.

In this paper, we describe when we can make decisions under interval uncertainty in linear time and in real time -- and when we cannot.
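To make the interval of possible expected utilities concrete, here is a hedged Python sketch: given bounds p_i in [p_lo[i], p_hi[i]] on each probability (with the probabilities summing to 1), the extreme values of the expected utility are attained by greedily shifting probability mass toward the highest-utility (for the maximum) or lowest-utility (for the minimum) consequences. This greedy scheme is a standard construction; with sorting it takes O(n log n) time, and the names below are our own:

```python
def expected_utility_range(p_lo, p_hi, utils):
    """Interval of possible values of sum p_i * utils[i] over all
    probability vectors with p_lo[i] <= p_i <= p_hi[i] and sum p_i = 1.

    Greedy sketch: start every p_i at its lower bound, then assign the
    remaining probability mass to the consequences with the largest
    (resp. smallest) utility first."""
    def extreme(maximize):
        order = sorted(range(len(utils)), key=lambda i: utils[i],
                       reverse=maximize)
        p = list(p_lo)
        mass = 1.0 - sum(p)   # probability mass still to distribute
        for i in order:
            add = min(mass, p_hi[i] - p_lo[i])
            p[i] += add
            mass -= add
        return sum(pi * ui for pi, ui in zip(p, utils))
    return extreme(False), extreme(True)
```

For two consequences with utilities 0 and 8 and each probability in [0.25, 0.75], the expected utility ranges over [2, 6].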

In practice, we often only know these degrees p_{i} and μ_{i} with uncertainty. Usually, we know the upper bound and the lower bound on each of these values. In other words, instead of the exact value of each degree, we only know the *interval* of its possible values, so we need to process such interval-valued degrees.

Before we start processing, it is important to find out which values from these intervals are actually possible. For example, if we only have two alternatives, and the probability of the first one is 0.5, then -- even if the original interval for the second probability is wide -- the only possible value of the second probability is 0.5. Once the intervals are narrowed down to possible values, we need to compute the range of possible values of the corresponding characteristics (mean, variance, conditional probabilities and possibilities, etc.). For each such characteristic, first, we need to come up with an algorithm for computing its range.
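The narrowing step for interval-valued probabilities can be sketched as follows. A value p_i from [lo_i, hi_i] is actually possible only if the remaining probabilities can be chosen within their intervals so that the total is 1; this gives the narrowed bounds max(lo_i, 1 - sum of the other upper bounds) and min(hi_i, 1 - sum of the other lower bounds). A hedged linear-time Python sketch (the function name is ours):

```python
def narrow_probabilities(lo, hi):
    """Narrow interval probabilities [lo[i], hi[i]] to the values that
    can actually occur in some probability vector with sum = 1.
    Precomputing the two total sums makes this linear time."""
    S_lo, S_hi = sum(lo), sum(hi)
    # S_hi - h is the sum of the OTHER upper bounds; S_lo - l likewise
    new_lo = [max(l, 1.0 - (S_hi - h)) for l, h in zip(lo, hi)]
    new_hi = [min(h, 1.0 - (S_lo - l)) for l, h in zip(lo, hi)]
    return new_lo, new_hi
```

This reproduces the two-alternative example above: if the first probability is exactly 0.5, a wide interval for the second probability narrows to the single value 0.5.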

In many engineering applications, we have a large amount of data to process, and many relevant decisions need to be made in real time. Because of this, it is important to make sure that the algorithms for computing the desired ranges are as fast as possible.

We present expressions for narrowing interval-valued probabilities and possibilities and for computing characteristics such as mean, conditional probabilities, and conditional possibilities. A straightforward computation of these expressions would take time which is quadratic in the number of inputs n. We show that in many cases, linear-time algorithms are possible -- and that no algorithm for computing these expressions can be faster than linear-time.

The very fact that immediate feedback is helpful is not surprising: it helps the student clear up misconceptions and avoid wrong paths. However, the fact that different techniques involving feedback lead to practically the same learning speed-up is intriguing. To explain this speed-up, we provide a simplified first-order description of a learning process in simple geometric terms. We show that already in this first approximation, the geometric description leads to the observed two-fold speed-up in learning.

In this paper, we show that for each dimension d, a feasible algorithm is possible if we fix some integer n and look for *n-scissors-congruence* in R^{d} -- i.e., for the possibility of representing P and P' as a union of n (or fewer) *simplexes*.