Departmental Technical Reports (CS)Copyright (c) 2015 University of Texas at El Paso All rights reserved.
http://digitalcommons.utep.edu/cs_techrep
Recent documents in Departmental Technical Reports (CS)
Fri, 19 Jun 2015 01:43:56 PDT
How to Take Into Account a Student's Degree of Certainty When Evaluating the Test Results
http://digitalcommons.utep.edu/cs_techrep/940
Wed, 17 Jun 2015 09:57:45 PDT
To more adequately gauge a student's knowledge, it is desirable to take into account not only whether the student's answers on the test are correct or not, but also how confident the student is in these answers. For example, a situation in which a student gives a wrong answer but understands his/her lack of knowledge on this topic is not as harmful as a situation in which the student is absolutely confident in his/her wrong answer. In this paper, we use general decision making theory to describe the best way to take the student's degree of certainty into account when evaluating test results.
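A standard decision-theoretic tool for this setting is a proper scoring rule, which rewards students for honestly reporting their confidence. The following is a minimal sketch using a quadratic (Brier-type) score; the function name, input convention, and [0, 1] scaling are our illustration, not necessarily the exact scheme derived in the paper:

```python
def brier_credit(confidence: float, correct: bool) -> float:
    """Quadratic (Brier-type) proper score, rescaled to [0, 1].

    confidence: the student's reported probability that the chosen
    answer is right (a hypothetical input, for illustration only).
    """
    # probability the student assigned to the true state of affairs
    p = confidence if correct else 1.0 - confidence
    # quadratic penalty for the gap between that probability and 1
    return 1.0 - (1.0 - p) ** 2
```

Under such a score, a confidently wrong answer (confidence 1.0) earns 0 credit, while an honestly unsure wrong answer (confidence 0.5) still earns 0.75, matching the intuition in the abstract.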
Joe Lorkowski et al.
A Simplified Explanation of What It Means to Assign a Finite Value to an Infinite Sum
http://digitalcommons.utep.edu/cs_techrep/939
Wed, 17 Jun 2015 09:57:42 PDT
Recently, a video made the rounds explaining that it often makes sense to assign finite values to infinite sums. For example, it makes sense to claim that the sum of all natural numbers is equal to -1/12. This claim has attracted interest in the media. However, judging by the viewers' and readers' comments, neither the video nor the corresponding articles explain the meaning of the above equality clearly enough for many viewers and readers. One of the main stumbling blocks is the fact that the infinite sum is clearly divergent, so the natural value of the infinite sum is infinity. What, then, is the meaning of assigning a finite value to this (clearly infinite) sum? While a rigorous justification of the above equality is difficult to describe in simple terms, the main idea behind this equality can, in our opinion, be explained rather naturally, and this is what we do in this paper.
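One standard way to make the equality precise (among several; the paper develops a more elementary explanation) is analytic continuation of the Riemann zeta function:

```latex
% For Re(s) > 1, the series converges and defines the zeta function:
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \operatorname{Re}(s) > 1.
% This function extends uniquely (analytically) to s = -1, where
\zeta(-1) = -\frac{1}{12};
% in this regularized sense one writes
1 + 2 + 3 + \cdots = -\frac{1}{12}.
```

The divergent series itself is never summed term by term; the finite value is assigned to the analytic continuation evaluated where the series would formally sit.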
Olga Kosheleva et al.
Optimizing Cloud Use under Interval Uncertainty
http://digitalcommons.utep.edu/cs_techrep/938
Wed, 17 Jun 2015 09:57:38 PDT
One of the main advantages of cloud computing is that it helps users save money: instead of buying enough computers to cover all their computations, a user can rent computation time on the cloud to cover rare peaks in computing demand. From this viewpoint, it is important to find the optimal division between in-house and in-the-cloud computations. In this paper, we solve this optimization problem, both in the idealized case when we have complete information about the costs and the user's needs, and in a more realistic situation, when we only know interval bounds on the corresponding quantities.
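The trade-off in the idealized (complete-information) case can be sketched as a simple cost minimization. The linear cost model and parameter names below are our illustration, not the paper's exact formulation, and the interval-uncertainty case is not shown:

```python
def total_cost(capacity, demands, own_cost, cloud_cost):
    """Average cost of owning `capacity` units of in-house computing.

    own_cost: cost per unit of in-house capacity (paid whether used or not).
    cloud_cost: cost per unit of demand that overflows to the cloud.
    demands: observed per-period computing demands.
    """
    avg_overflow = sum(max(d - capacity, 0) for d in demands) / len(demands)
    return own_cost * capacity + cloud_cost * avg_overflow

def best_capacity(demands, own_cost, cloud_cost):
    # brute-force search over a small illustrative grid of capacities
    candidates = range(0, max(demands) + 1)
    return min(candidates,
               key=lambda k: total_cost(k, demands, own_cost, cloud_cost))
```

With demands [1, 1, 1, 10], it is cheaper to own only 1 unit of capacity and rent cloud time for the rare spike to 10 than to buy capacity for the peak.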
Vladik Kreinovich et al.
How To Take Into Account Model Inaccuracy When Estimating the Uncertainty Of the Result of Data Processing
http://digitalcommons.utep.edu/cs_techrep/937
Wed, 17 Jun 2015 09:57:34 PDT
In engineering design, it is important to guarantee that the values of certain quantities such as stress level, noise level, vibration level, etc., stay below a certain threshold in all possible situations, i.e., for all possible combinations of the corresponding internal and external parameters. Usually, the number of possible combinations is so large that it is not possible to physically test the system for all these combinations. Instead, we form a computer model of the system, and test this model. In this testing, we need to take into account that the computer models are usually approximate. In this paper, we show that the existing techniques for taking model uncertainty into account overestimate the uncertainty of the results. We also show how we can get more accurate estimates.
Vladik Kreinovich et al.
We Live in the Best of Possible Worlds: Leibniz's Insight Helps to Derive Equations of Modern Physics
http://digitalcommons.utep.edu/cs_techrep/936
Wed, 17 Jun 2015 09:57:31 PDT
To reconcile the notion of a benevolent and powerful God with actual human suffering, Leibniz proposed the idea that while our world is not perfect, it is the best of possible worlds. This idea inspired important developments in physics: namely, it turned out that the equations of motion and the equations which describe the dynamics of physical fields can be deduced from the condition that the (appropriately defined) action functional is optimal. In practice, this idea is not always very helpful in physics applications: to fully utilize this fact, we need to know the action functional, and there are many possible action functionals. Our idea is to apply Leibniz's insight once again and to assume that (similarly) on the set of all action functionals, there is an optimality criterion, and the actual action functional is optimal with respect to this criterion. This idea enables us to derive the standard equations of General Relativity, Quantum Mechanics, Electrodynamics, etc., from the single fact that the corresponding expressions for the action are optimal. Thus, the physical equations describing our world are indeed the best possible.
Vladik Kreinovich et al.
Why Copulas?
http://digitalcommons.utep.edu/cs_techrep/935
Wed, 17 Jun 2015 09:57:27 PDT
A natural way to represent a 1-D probability distribution is to store its cumulative distribution function (cdf) F(x) = Prob(X ≤ x). When several random variables X_{1}, ..., X_{n} are independent, the corresponding cdfs F_{1}(x_{1}), ..., F_{n}(x_{n}) provide a complete description of their joint distribution. In practice, there is usually some dependence between the variables, so, in addition to the marginals F_{i}(x_{i}), we also need to provide additional information about the joint distribution of the given variables. It is possible to represent this joint distribution by a multi-D cdf F(x_{1}, ..., x_{n}) = Prob(X_{1} ≤ x_{1} & ... & X_{n} ≤ x_{n}), but this leads to duplication -- since the marginals can be reconstructed from the joint cdf -- and duplication is a waste of computer space. It is therefore desirable to come up with a duplication-free representation which would still allow us to easily reconstruct F(x_{1}, ..., x_{n}). In this paper, we prove that the only such representation is one in which the marginals are supplemented with a copula. This result explains why copulas have been successfully used in many applications of statistics.
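The reconstruction the abstract refers to is Sklar's theorem: the joint cdf factors through a copula C applied to the marginals, F(x_1, ..., x_n) = C(F_1(x_1), ..., F_n(x_n)). A minimal sketch, using the independence (product) copula as the simplest example:

```python
def product_copula(us):
    """Independence copula: C(u1, ..., un) = u1 * ... * un."""
    result = 1.0
    for u in us:
        result *= u
    return result

def joint_cdf(copula, marginals, xs):
    """Sklar's theorem: F(x1, ..., xn) = C(F1(x1), ..., Fn(xn)).

    marginals: list of 1-D cdf functions F_i; copula: a function on [0,1]^n.
    """
    return copula([F(x) for F, x in zip(marginals, xs)])
```

For two independent uniform variables on [0, 1], the joint cdf at (0.5, 0.5) is 0.25, as expected; dependence is modeled by swapping in a non-product copula while the marginals stay untouched.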
Vladik Kreinovich et al.
Why ARMAX-GARCH Linear Models Successfully Describe Complex Nonlinear Phenomena: A Possible Explanation
http://digitalcommons.utep.edu/cs_techrep/934
Wed, 17 Jun 2015 09:57:23 PDT
Economic and financial processes are complex and highly nonlinear. However, somewhat surprisingly, linear models like ARMAX-GARCH often describe these processes reasonably well. In this paper, we provide a possible explanation for the empirical success of these models.
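For background (this is the textbook GARCH(1,1) recursion, not the paper's argument), the conditional-variance update shows the sense in which such models are linear: the next variance is a linear combination of the previous squared residual and the previous variance.

```python
def garch11_variances(residuals, omega, alpha, beta, sigma2_init):
    """Conditional variances under GARCH(1,1):
    sigma^2_t = omega + alpha * eps^2_{t-1} + beta * sigma^2_{t-1}.
    Parameter names follow common usage; values are illustrative.
    """
    variances = [sigma2_init]
    for eps in residuals[:-1]:
        variances.append(omega + alpha * eps ** 2 + beta * variances[-1])
    return variances
```

Even though returns themselves behave nonlinearly, this variance recursion is linear in (eps^2, sigma^2), which keeps estimation and forecasting tractable.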
Hung T. Nguyen et al.
Standing on the Shoulders of the Giants: Why Constructive Mathematics, Probability Theory, Interval Mathematics, and Fuzzy Mathematics Are Important
http://digitalcommons.utep.edu/cs_techrep/933
Wed, 17 Jun 2015 09:57:19 PDT
The recent death of Ray Moore, one of the fathers of interval mathematics, inspired these thoughts on why interval computations -- and several other related areas of study -- are important, and on what we can learn from the successes of these areas' founders and promoters.
Vladik Kreinovich
Why Is Linear Quantile Regression Empirically Successful: A Possible Explanation
http://digitalcommons.utep.edu/cs_techrep/932
Wed, 17 Jun 2015 09:57:15 PDT
Many quantities describing the physical world are related to each other. As a result, often, when we know the values of certain quantities x_{1}, ..., x_{n}, we can reasonably well predict the value of some other quantity y. In many applications, in addition to the resulting estimate for y, it is also desirable to predict how accurate this estimate is, i.e., what the probability distribution of possible values of y is. It turns out that in many cases, the quantiles of this distribution depend linearly on the values x_{1}, ..., x_{n}. In this paper, we provide a possible theoretical explanation for this somewhat surprising empirical success of linear quantile regression.
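Quantile regression fits a linear model for a chosen quantile tau by minimizing the "pinball" loss, whose expected value is minimized exactly at the tau-quantile. A minimal sketch (function names are ours):

```python
def pinball_loss(tau, y_true, y_pred):
    """Pinball (quantile) loss: its expectation over y_true is minimized
    when y_pred equals the tau-quantile of y_true's distribution."""
    diff = y_true - y_pred
    return tau * diff if diff >= 0 else (tau - 1.0) * diff

def linear_quantile(intercept, coeffs, xs):
    """A linear model for the tau-quantile: q(x) = b0 + b1*x1 + ... + bn*xn."""
    return intercept + sum(c * x for c, x in zip(coeffs, xs))
```

At tau = 0.9, under-prediction is penalized 9 times as heavily as over-prediction, which pushes the fitted line up toward the 0.9-quantile of y given x.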
Hung T. Nguyen et al.
What is the Right Context for an Engineering Problem: Finding Such a Context is NP-Hard
http://digitalcommons.utep.edu/cs_techrep/931
Wed, 17 Jun 2015 09:54:45 PDT
In the general case, most computational engineering problems are NP-hard. So, to make a problem feasible, it is important to restrict it. Ideally, we should use the most general context in which the problem is still feasible. In this paper, we prove that finding the most general such context is itself an NP-hard problem. Since it is not possible to find the appropriate context by following an algorithm, it is necessary to be creative -- i.e., to use computational intelligence techniques. Using three examples, we show how such techniques can help us come up with the appropriate context. Our analysis explains why it is beneficial to take knowledge about causality into account when processing data, why long-term predictions are sometimes easier than short-term ones, and why, for small deviations, a straightforward application of a seemingly optimal control often only makes the situation worse.
Martine Ceberio et al.
In Engineering Classes, How to Assign Partial Credit: From Current Subjective Practice to Exact Formulas (Based on Computational Intelligence Ideas)
http://digitalcommons.utep.edu/cs_techrep/930
Wed, 17 Jun 2015 09:54:41 PDT
When a student performs only some of the steps needed to solve a problem, this student gets partial credit. This partial credit is usually proportional to the number of stages that the student performed. This may sound reasonable, but in engineering education, it leads to undesired consequences: for example, a student who did not solve any of the 10 problems on a test, but who successfully performed 9 out of the 10 stages needed to solve each problem, will still get the grade of A ("excellent"). This may be a good evaluation of the student's intellectual ability, but for an engineering company that hires this A-level student, this will be an unexpected disaster. In this paper, we analyze this problem from the viewpoint of the potential loss to a company, and we show how to assign partial credit based on such loss estimates. Our conclusion is that this loss (and thus, the resulting grade) depends on the size of the engineering company. Thus, to better understand the student's strengths, it is desirable, instead of a single overall grade, to provide several grades corresponding to different company sizes.
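A back-of-the-envelope version of the abstract's example, under an independence assumption that is ours (the paper's loss model also depends on company size, which we do not model here): a student who masters 9 out of 10 stages completes a whole 10-stage problem only about a third of the time.

```python
def whole_problem_success(stage_mastery, stages_per_problem):
    """Probability of getting an entire problem right when each of its
    stages independently succeeds with probability stage_mastery.

    (Hypothetical independence model, for illustration only.)
    """
    return stage_mastery ** stages_per_problem
```

Here whole_problem_success(0.9, 10) is roughly 0.35: excellent stage-level performance, yet far from the end-to-end reliability an engineering employer needs, which is why proportional partial credit overstates job-readiness.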
Joe Lorkowski et al.
Why Fuzzy Cognitive Maps Are Efficient
http://digitalcommons.utep.edu/cs_techrep/929
Wed, 17 Jun 2015 09:54:36 PDT
In many practical situations, the relation between experts' degrees of confidence in different related statements is well described by Fuzzy Cognitive Maps (FCM). This empirical success is somewhat puzzling, since from the mathematical viewpoint, each FCM relation corresponds to a simplified one-neuron neural network, and it is well known that to adequately describe relations, we need multiple neurons. In this paper, we show that the empirical success of FCM can be explained if we take into account that humans' subjective opinions follow Miller's "seven plus or minus two" law.
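The "one-neuron" structure mentioned in the abstract can be made concrete: in a Fuzzy Cognitive Map, each concept's next value is a single sigmoidal neuron applied to a weighted sum of the current concept values. A minimal sketch (weight convention is ours):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fcm_step(concepts, weights):
    """One FCM update: a_i <- f(sum_j w[i][j] * a_j).

    Each new concept value is produced by a single one-neuron 'network':
    a weighted sum followed by one sigmoid activation.
    """
    n = len(concepts)
    return [sigmoid(sum(weights[i][j] * concepts[j] for j in range(n)))
            for i in range(n)]
```

Iterating fcm_step propagates influence along the map's causal edges until the concept values settle, which is how FCMs are used in practice.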
Vladik Kreinovich et al.
Across-the-Board Spending Cuts Are Very Inefficient: A Proof
http://digitalcommons.utep.edu/cs_techrep/928
Wed, 17 Jun 2015 09:54:31 PDT
In many real-life situations, when there is a need for a spending cut, the cut is performed in an across-the-board way, so that each budget item is decreased by the same percentage. Such cuts are ubiquitous: they happen on all levels, from the US budget to university budget cuts on the college and departmental levels. The main reason for the ubiquity of such cuts is that they are perceived as fair and, at the same time, economically reasonable. In this paper, we perform a quantitative analysis of this problem and show that, contrary to the widespread positive opinion about across-the-board cuts, these cuts are, on average, very inefficient.
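The inefficiency is easy to see in a toy model (linear per-item benefits; entirely our illustration, not the paper's proof): an across-the-board cut wastes part of the cut on high-value items, while taking the whole cut from the lowest-value item loses much less total benefit.

```python
def total_benefit(alloc, marginal_values):
    # total benefit under (assumed) linear per-item benefits
    return sum(v * a for v, a in zip(marginal_values, alloc))

def across_the_board_cut(alloc, fraction):
    # every budget item is reduced by the same percentage
    return [a * (1.0 - fraction) for a in alloc]

def targeted_cut(alloc, marginal_values, amount):
    # take the entire cut from the item with the smallest marginal value
    i = min(range(len(alloc)), key=lambda j: marginal_values[j])
    new_alloc = list(alloc)
    new_alloc[i] -= amount
    return new_alloc
```

With allocations [1, 1] and marginal values [3, 1], cutting the total budget by 1 unit across the board leaves benefit 2, while the same-size targeted cut leaves benefit 3.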
Vladik Kreinovich et al.
Why Awe Makes People More Generous: Utility Theory Can Explain Recent Experiments
http://digitalcommons.utep.edu/cs_techrep/927
Wed, 17 Jun 2015 09:54:26 PDT
Recent psychological experiments show that the feeling of awe increases people's generosity. In this paper, we show that a usual utility-based approach to decision making explains this increase.
Joe Lorkowski et al.
What Is Computable? What Is Feasibly Computable? A Physicist's Viewpoint
http://digitalcommons.utep.edu/cs_techrep/926
Wed, 17 Jun 2015 09:54:22 PDT
In this paper, we show how the questions of what is computable and what is feasibly computable can be viewed from the viewpoint of physics: what is computable within the current physics? what is computable if we assume -- as many physicists do -- that no final physical theory is possible? what is computable if we consider data processing, i.e., computations based on physical inputs? Our physics-based analysis of these questions leads to some unexpected answers, both positive and negative. For example, we show that under the no-physical-theory-is-perfect assumption, almost all problems are feasibly solvable -- but not all of them.
Vladik Kreinovich et al.
Why We Need Extra Physical Dimensions: A Simple Geometric Explanation
http://digitalcommons.utep.edu/cs_techrep/925
Wed, 17 Jun 2015 09:54:17 PDT
It is known that a consistent description of point-wise particles requires that we add extra physical dimensions to the usual four dimensions of space-time. The need for such dimensions is based on not-very-intuitive complex mathematics. It is therefore desirable to try to come up with a simpler geometric explanation for this phenomenon. In this paper, we provide a simple geometric explanation of why extra physical dimensions are needed.
Olga Kosheleva et al.
Towards A Physics-Motivated Small-Velocities Approximation to General Relativity
http://digitalcommons.utep.edu/cs_techrep/924
Wed, 17 Jun 2015 09:54:13 PDT
In the general case, the complex non-linear partial differential equations of General Relativity are very hard to solve. Thus, to solve the corresponding physical problems, appropriate approximations are usually used. The first approximation to General Relativity is, of course, Newton's theory of gravitation. Newton's theory is applicable when the gravitational field is weak and when all velocities are much smaller than the speed of light. Most existing approximations allow higher velocities, but still limit us to weak gravitational fields. In this paper, we consider the possibility of a different approximation, in which strong fields are allowed but velocities are required to be small. We derive the corresponding equations and speculate on their possible physical consequences.
Vladik Kreinovich et al.
Modeling Extremal Events Is Not Easy: Why the Extreme Value Theorem Cannot Be As General As the Central Limit Theorem
http://digitalcommons.utep.edu/cs_techrep/923
Wed, 17 Jun 2015 09:19:20 PDT
In many real-life situations, a random quantity is the joint result of several independent factors, i.e., a {\em sum} of many independent random variables. The description of such sums is facilitated by the Central Limit Theorem, according to which, under reasonable conditions, the distribution of such a sum tends to normal. In several other situations, a random quantity is a {\em maximum} of several independent random variables. For such situations, there is also a limit theorem -- the Extreme Value Theorem. However, the Extreme Value Theorem is only valid under the assumption that all the components are identically distributed -- while no such assumption is needed for the Central Limit Theorem. Since in practice the component distributions may differ, a natural question is: can we generalize the Extreme Value Theorem to the similarly general case of possibly different component distributions? In this paper, we use simple symmetries to prove that such a generalization is not possible. In other words, the task of modeling extremal events is provably more difficult than the task of modeling the joint effects of many factors.
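In the identically distributed case, where the Extreme Value Theorem does apply, the limit can be checked numerically: for n iid Exp(1) variables, the cdf of (max - ln n) converges to the Gumbel cdf exp(-exp(-x)). A small sketch of this one classical case (not the paper's impossibility argument):

```python
import math

def shifted_max_cdf_exp(x, n):
    """Exact cdf of (max of n iid Exp(1) variables) - ln(n) at x:
    P(max <= x + ln n) = (1 - e^{-(x + ln n)})^n."""
    t = x + math.log(n)
    if t < 0:
        return 0.0
    return (1.0 - math.exp(-t)) ** n

def gumbel_cdf(x):
    """The limiting (Gumbel) distribution of the Extreme Value Theorem."""
    return math.exp(-math.exp(-x))
```

Already for n = 10^6 the two cdfs agree to within 1e-6 at x = 1, illustrating the convergence that, as the paper shows, has no analogue once the components are allowed to have different distributions.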
Vladik Kreinovich et al.