As a consequence of this result, we prove that the centroid defuzzification of a *general* type-2 fuzzy set can be reduced to the easier-to-compute case when for each x, the corresponding fuzzy degree of membership is *convex*.
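To make the underlying operation concrete, here is a minimal sketch of centroid defuzzification for a discrete type-1 fuzzy set, the basic computation that the reduction described above makes tractable for general type-2 sets. All values are made up for illustration.

```python
# Centroid defuzzification of a discrete (type-1) fuzzy set:
# the weighted average of the domain points, weighted by membership degree.
# The set below is a made-up symmetric triangular-shaped example.

xs  = [0.0, 1.0, 2.0, 3.0, 4.0]   # domain points
mus = [0.1, 0.5, 1.0, 0.5, 0.1]   # membership degrees

centroid = sum(x * m for x, m in zip(xs, mus)) / sum(mus)
# the set is symmetric about x = 2, so the centroid is 2.0
```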

This report describes an investigation of the performance and energy consumption of a single scientific code on five high-performance and mobile systems, with the objective of identifying the performance and energy efficiency implications of a variety of architectural features. The results of this pilot study suggest that the instruction set architecture (ISA) is less significant than other specific aspects of system architecture in achieving high performance at high efficiency. The strategy employed in this study may be extended to other scientific applications with a variety of memory access, computation, and communication properties.

Surprisingly, we often also observe a peak at the half-frequency f/2.

In this paper, we explain this empirical phenomenon: we show that it is a particular case of spontaneous symmetry violation, and that the general physical theory of such symmetry violations explains why it is precisely half-frequency signals that are often observed.
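The phenomenon can be illustrated numerically: a signal driven at frequency f whose response also contains a subharmonic component at f/2 shows two dominant spectral peaks, one at f and one at the half-frequency. The signal model and all parameters below are illustrative, not taken from the paper; the spectrum is probed with a naive single-frequency DFT correlation.

```python
import math

N = 1000          # number of samples
fs = 100.0        # sampling rate, Hz (illustrative)
f = 10.0          # driving frequency, Hz (illustrative)
t = [n / fs for n in range(N)]

# made-up response: the driving frequency plus a subharmonic at f/2
x = [math.sin(2 * math.pi * f * ti) + 0.8 * math.sin(2 * math.pi * (f / 2) * ti)
     for ti in t]

def dft_mag(x, freq, fs):
    """Normalized DFT magnitude of x at a single frequency (naive correlation)."""
    re = sum(xi * math.cos(2 * math.pi * freq * n / fs) for n, xi in enumerate(x))
    im = sum(xi * math.sin(2 * math.pi * freq * n / fs) for n, xi in enumerate(x))
    return math.hypot(re, im) / len(x)

# magnitudes at the two peak candidates and at two off-peak frequencies
peaks = {freq: dft_mag(x, freq, fs) for freq in (f / 2, f, 3.0, 7.0)}
# the spectrum is large at f and at f/2, and near zero elsewhere
```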

There are, however, many ways to elicit these degrees. Different elicitations lead, in general, to different numerical values of these degrees -- although, ideally, tuples which have a higher degree of possibility in one scale should have a higher degree in other scales as well. It is therefore desirable to come up with a description of the dependence between fuzzy variables that does not depend on the corresponding procedure and, thus, has the same form in different scales. In this paper, by using an analogy with the notion of copulas in statistics, we come up with such a scaling-invariant description.

Our main idea is to use marginal membership functions μ_{i}(x_{i}) = max_{x_{1}, ..., x_{i − 1}, x_{i + 1}, ..., x_{n}} μ(x_{1}, ..., x_{i − 1}, x_{i}, x_{i + 1}, ..., x_{n}),

and then describe the relationship between the fuzzy variables x_{1}, ..., x_{n} by a function r_{i}(x_{1}, ..., x_{n}) for which, for all the tuples (x_{1}, ..., x_{n}), we have μ(x_{1}, ..., x_{n})=μ_{i}(r_{i}(x_{1}, ..., x_{n})).
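The marginalization step can be sketched directly on a discrete joint membership function: for each variable, the marginal degree of a value is the maximum of the joint degrees over all tuples containing that value. The two-variable grid below uses made-up degrees purely for illustration.

```python
# Hypothetical joint membership degrees mu(x1, x2) on a small 2x2 grid.
mu = {
    (0, 0): 0.2, (0, 1): 0.5,
    (1, 0): 0.7, (1, 1): 1.0,
}

def marginal(mu, i):
    """Marginal mu_i(xi): max of mu over all tuples whose i-th component is xi."""
    m = {}
    for tup, deg in mu.items():
        xi = tup[i]
        m[xi] = max(m.get(xi, 0.0), deg)
    return m

mu1 = marginal(mu, 0)   # {0: 0.5, 1: 1.0}
mu2 = marginal(mu, 1)   # {0: 0.7, 1: 1.0}

# sanity check: each joint degree is bounded by every marginal,
# so a function r_i with mu = mu_i(r_i(...)) is not ruled out
assert all(deg <= mu1[t[0]] and deg <= mu2[t[1]] for t, deg in mu.items())
```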

There exist many optimization algorithms; the problem is that these algorithms often end up in a local optimum. One of the promising methods to avoid local optima is the filled function method, in which we, in effect, first optimize a smoothed version of the objective function, and then use the resulting optimum to look for the optimum of the original function. It turns out that, empirically, the best smoothing functions to use in this method are the Gaussian and the Cauchy functions. In this paper, we show that from the viewpoint of computational complexity, these two smoothing functions are indeed the simplest.
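The smoothing idea can be sketched in a few lines: convolve a multimodal objective with a Gaussian kernel (here approximated by a Riemann sum on a grid), take the minimizer of the smoothed function, and then search the original function near that point. The objective, the smoothing width, and all other parameters below are illustrative choices, not the paper's construction.

```python
import math

def f(x):
    # made-up multimodal objective: a parabola plus a fast oscillation,
    # with its global minimum near x = -0.3 and many local minima around it
    return x**2 + 2.0 * math.sin(5.0 * x)

def gauss_smooth(f, x, sigma, half_width=4.0, n=201):
    """Approximate the Gaussian-smoothed value (f * Gaussian)(x) by a Riemann sum."""
    lo, hi = x - half_width * sigma, x + half_width * sigma
    h = (hi - lo) / (n - 1)
    total, norm = 0.0, 0.0
    for k in range(n):
        t = lo + k * h
        w = math.exp(-((t - x) ** 2) / (2.0 * sigma ** 2))
        total += w * f(t)
        norm += w
    return total / norm

# step 1: minimize the smoothed objective on a grid -- the oscillation is
# averaged out, leaving (approximately) the parabola, minimized near x = 0
grid = [i / 100.0 for i in range(-500, 501)]
x0 = min(grid, key=lambda x: gauss_smooth(f, x, sigma=1.0))

# step 2: minimize the original objective locally, near the smoothed minimizer
x_star = min(grid, key=lambda x: f(x) if abs(x - x0) < 1.0 else float("inf"))
```

A plain grid search on f alone could also succeed here; the point of the sketch is only the two-stage structure: smooth first, then refine locally.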

The Gaussian and Cauchy functions are not a panacea: in some cases, they still leave us with a local optimum. In this paper, we use computational complexity analysis to describe the next-simplest smoothing functions, which are worth trying in such situations.

Precursors have in common the fundamental petrologic processes of melt generation in the lithosphere and subsequent magma differentiation. Our ultimate objective is to apply state-of-the-art machine learning techniques to volcano eruption forecasting. In this paper, we apply machine learning techniques to precursor data, such as data from the 1999 eruption of Redoubt Volcano, Alaska, for which a comprehensive record of precursor activity exists in USGS public domain files and in global databases such as the Smithsonian Institution Global Volcanism Program and Aerocom (which is part of the HEMCO database). This analysis yields geophysically meaningful results.

In this paper, we analyze this problem from a decision-theoretic viewpoint. Our conclusion is that while some flexibility is acceptable, in general, it is beneficial to make a syllabus as precise as possible.
