For example, a brief study was performed on one of these applications, a matrix-multiply kernel. Improving the vectorization of this kernel reduced the variation from an average of 25% to an average of 10%, but the root cause of the remaining variation was not identified. Because the execution times of the other applications exhibit similar levels of variation, we hypothesize that the variation is caused either by the hardware or by performance issues in how OpenMP is used.

Traditionally, practitioners start a statistical analysis of a given sample *x*_{1}, … , *x*_{n} by computing the sample mean *E* and the sample variance *V*. The sample values *x*_{i} usually come from measurements. Measurements are never absolutely accurate, and often the only information that we have about the corresponding measurement errors is the upper bounds Δ_{i} on these errors. In such situations, after obtaining the measurement result *x̃*_{i}, the only information that we have about the actual (unknown) value *x*_{i} of the *i*th quantity is that *x*_{i} belongs to the interval **x**_{i} = [*x̃*_{i} − Δ_{i}, *x̃*_{i} + Δ_{i}]. Different values *x*_{i} from the corresponding intervals lead, in general, to different values of the sample mean and the sample variance. It is therefore desirable to find the range of possible values of these characteristics when *x*_{i} ∈ **x**_{i}.
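The setting above can be illustrated with a small brute-force sketch (the measurement results and error bounds below are made-up numbers, not data from this paper). Since *E* is linear in each *x*_{i}, its range over the box of intervals is attained at the interval endpoints; since *V* is a convex function of the *x*_{i}, its maximum over the box is also attained at a vertex (its minimum, in general, may lie inside the box and is not computed here).

```python
from itertools import product

def sample_mean(xs):
    # E = (1/n) * sum of x_i
    return sum(xs) / len(xs)

def sample_variance(xs):
    # V = (1/n) * sum of (x_i - E)^2  (population-style sample variance)
    e = sample_mean(xs)
    return sum((x - e) ** 2 for x in xs) / len(xs)

# Hypothetical measurement results x~_i and error bounds Delta_i.
x_tilde = [1.0, 1.2, 0.9]
delta   = [0.1, 0.05, 0.2]

# The intervals x_i = [x~_i - Delta_i, x~_i + Delta_i].
boxes = [(xt - d, xt + d) for xt, d in zip(x_tilde, delta)]

# E is monotone in each x_i, so its exact range comes from the endpoints.
E_min = sample_mean([lo for lo, hi in boxes])
E_max = sample_mean([hi for lo, hi in boxes])

# V is convex, so its maximum over the box is at one of the 2^n vertices;
# enumerating vertices is feasible only for small n.
V_max = max(sample_variance(v) for v in product(*boxes))

print(E_min, E_max, V_max)
```

For larger samples, exhaustive vertex enumeration is exponential; the point of the sketch is only that different tuples from the intervals genuinely yield different values of *E* and *V*.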

Often, we know that the values *x*_{i} cannot differ too much from each other, i.e., we know the upper bound *V*_{0} on the sample variance *V*: *V* ⩽ *V*_{0}. It is therefore desirable to find the range of *E* under this constraint. This is the main problem that we solve in this paper.
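A crude way to see what the constrained range looks like is a grid search over the box, keeping only tuples that satisfy *V* ⩽ *V*_{0} (the intervals and the bound *V*_{0} below are again made-up numbers; this is a numerical illustration, not the algorithm developed in this paper). Note that, unlike the unconstrained case, the extreme values of *E* need not occur at interval endpoints once the variance constraint is active.

```python
from itertools import product

def sample_mean(xs):
    return sum(xs) / len(xs)

def sample_variance(xs):
    e = sample_mean(xs)
    return sum((x - e) ** 2 for x in xs) / len(xs)

def linspace(lo, hi, k):
    # k evenly spaced grid points from lo to hi inclusive
    return [lo + (hi - lo) * i / (k - 1) for i in range(k)]

# Hypothetical intervals x_i and variance bound V_0.
boxes = [(0.9, 1.1), (1.15, 1.25), (0.7, 1.1)]
V0 = 0.01

# Enumerate a coarse grid of tuples and keep the feasible ones (V <= V0).
grids = [linspace(lo, hi, 21) for lo, hi in boxes]
feasible_E = [sample_mean(p) for p in product(*grids)
              if sample_variance(p) <= V0]

# Approximate range of E under the constraint V <= V0.
E_lo, E_hi = min(feasible_E), max(feasible_E)
print(E_lo, E_hi)
```

The grid search is exponential in *n* and only approximates the true range; it serves here to make the problem statement concrete, while the paper's goal is an efficient exact characterization.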