
    Divergent mathematical treatments in utility theory

    In this paper I study how divergent mathematical treatments affect mathematical modelling, with a special focus on utility theory. In particular, I examine recent work on the ranking of information states and the discounting of future utilities, in order to show how, by replacing the standard analytical treatment of the models involved with one based on the framework of Nonstandard Analysis, diametrically opposite results are obtained. In both cases, the choice between the standard and nonstandard treatment amounts to a selection of set-theoretical parameters that cannot be made on purely empirical grounds. The analysis of this phenomenon gives rise to a simple logical account of the relativity of impossibility theorems in economic theory, which concludes the paper.

    The bearable lightness of being

    How are philosophical questions about what kinds of things there are to be understood, and how are they to be answered? This paper defends broadly Fregean answers to these questions. Ontological categories, such as object, property, and relation, are explained in terms of a prior logical categorization of expressions as singular terms, predicates of varying degree and level, etc. Questions about what kinds of object, property, etc. there are reduce, on this approach, to questions about truth and logical form: for example, the question whether there are numbers is the question whether there are true atomic statements in which expressions function as singular terms which, if they have reference at all, stand for numbers, and the question whether there are properties of a given type is a question about whether there are meaningful predicates of an appropriate degree and level. This approach is defended against the objection that it must be wrong because it makes what there is depend on us or our language. Some problems confronting the Fregean approach, including Frege's notorious paradox of the concept horse, are addressed. It is argued that the approach results in a modest and sober deflationary understanding of ontological commitments.

    Rate-of-return comparisons under pay-as-you-go and funded pension systems: concepts, empirical results, social-policy consequences

    Demographic change has triggered a more fundamental debate about old-age security schemes, namely over the choice of an efficient method of financing retirement provision. At the centre of this debate stands, time and again, the comparison of rates of return between the pay-as-you-go and the funded system; that comparison is the subject of this paper. It is by no means as simple as is often suggested, since insurance and risk aspects, and above all the transition problem, have to be taken into account. This paper sets out the economic-theoretical background with the most important relevant concepts and presents empirical estimates of the current, and simulation results for the future, development of the relevant rates of return. We conclude with the social-policy consequences for a reformed old-age provision system.
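
    As a rough illustration of the kind of return comparison at issue (a toy sketch with hypothetical parameter values, not estimates from the paper), the implicit return of a mature pay-as-you-go scheme is often approximated by the growth rate of the covered wage bill, while a funded scheme earns the capital-market rate. The snippet below compares the two under those assumptions.

        # Illustrative only: toy comparison of pay-as-you-go (PAYG) vs. funded returns.
        # Assumed (hypothetical) parameters, not estimates from the paper.
        wage_growth = 0.015         # real wage growth per year
        employment_growth = -0.005  # growth of the contributor base per year
        market_return = 0.04        # real return on funded capital per year

        # In a mature PAYG scheme, the implicit return is roughly the growth
        # of the wage bill: (1 + wage growth) * (1 + employment growth) - 1.
        payg_return = (1 + wage_growth) * (1 + employment_growth) - 1

        print(f"implicit PAYG return: {payg_return:.3%}")
        print(f"funded return:        {market_return:.3%}")
        print(f"gap (funded - PAYG):  {market_return - payg_return:.3%}")

    The raw gap overstates the case for funding, since, as the abstract stresses, insurance and risk aspects and above all the cost of the transition are not captured by such a naive comparison.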

    Priority for the Worse Off and the Social Cost of Carbon

    The social cost of carbon (SCC) is a monetary measure of the harms from carbon emission. Specifically, it is the reduction in current consumption that produces a loss in social welfare equivalent to that caused by the emission of a ton of CO2. The standard approach is to calculate the SCC using a discounted-utilitarian social welfare function (SWF)—one that simply adds up the well-being numbers (utilities) of individuals, as discounted by a weighting factor that decreases with time. The discounted-utilitarian SWF has been criticized both for ignoring the distribution of well-being, and for including an arbitrary preference for earlier generations. Here, we use a prioritarian SWF, with no time-discount factor, to calculate the SCC in the integrated assessment model RICE. Prioritarianism is a well-developed concept in ethics and theoretical welfare economics, but has been, thus far, little used in climate scholarship. The core idea is to give greater weight to well-being changes affecting worse off individuals. We find substantial differences between the discounted-utilitarian and non-discounted prioritarian SCC.
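
    To make the contrast between the two welfare functions concrete (a minimal sketch with made-up well-being numbers, not the RICE calibration), a discounted-utilitarian SWF sums utilities with a time-discount factor, while a prioritarian SWF applies a strictly concave transform to each individual's well-being and uses no discounting, so that changes affecting the worse off count for more.

        import math

        # Toy example: two generations, two individuals each.
        # Well-being numbers are made up for illustration; this is not the RICE model.
        wellbeing = [
            [1.0, 4.0],   # generation 0 (present)
            [1.5, 6.0],   # generation 1 (future)
        ]

        rho = 0.02          # pure rate of time preference (discounted utilitarianism)
        years_per_gen = 25

        def discounted_utilitarian(w):
            # Sum of utilities, discounted by generation.
            return sum(sum(gen) / (1 + rho) ** (t * years_per_gen)
                       for t, gen in enumerate(w))

        def prioritarian(w, gamma=1.0):
            # Sum of concavely transformed utilities, no time discounting.
            # gamma > 0 controls the degree of priority for the worse off;
            # gamma = 1 corresponds to a logarithmic transform.
            g = math.log if gamma == 1.0 else (lambda u: (u ** (1 - gamma) - 1) / (1 - gamma))
            return sum(g(u) for gen in w for u in gen)

        print("discounted utilitarian:", round(discounted_utilitarian(wellbeing), 3))
        print("prioritarian (gamma=1):", round(prioritarian(wellbeing), 3))

    Because the transform is strictly concave, a given well-being loss to the person at 1.0 lowers the prioritarian total by more than the same loss to the person at 6.0; that asymmetry, rather than a preference for earlier generations, is what the prioritarian SWF builds in.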

    Complexity of token swapping and its variants

    In the Token Swapping problem we are given a graph with a token placed on each vertex. Each token has exactly one destination vertex, and we try to move all the tokens to their destinations using the minimum number of swaps, i.e., operations of exchanging the tokens on two adjacent vertices. As the main result of this paper, we show that Token Swapping is W[1]-hard parameterized by the length k of a shortest sequence of swaps. In fact, we prove that, for any computable function f, it cannot be solved in time f(k) · n^{o(k / log k)}, where n is the number of vertices of the input graph, unless the ETH fails. This lower bound almost matches the trivial n^{O(k)}-time algorithm. We also consider two generalizations of Token Swapping, namely Colored Token Swapping (where the tokens have colors and tokens of the same color are indistinguishable) and Subset Token Swapping (where each token has a set of possible destinations). To complement the hardness result, we prove that even the most general variant, Subset Token Swapping, is FPT in nowhere-dense graph classes. Finally, we consider the complexities of all three problems in very restricted classes of graphs: graphs of bounded treewidth and diameter, stars, cliques, and paths, trying to identify the borderlines between polynomial and NP-hard cases.
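
    For concreteness (an illustrative brute-force sketch of the problem definition, not any algorithm from the paper), Token Swapping can be viewed as a shortest-path search over token placements, where one move exchanges the tokens on the two endpoints of an edge.

        from collections import deque

        def min_token_swaps(edges, start):
            """Minimum number of swaps along edges to bring every token i to vertex i.

            `start` is a tuple where start[v] is the token currently on vertex v.
            Exhaustive BFS over placements: fine for tiny instances only.
            """
            n = len(start)
            target = tuple(range(n))
            seen = {tuple(start)}
            queue = deque([(tuple(start), 0)])
            while queue:
                placement, dist = queue.popleft()
                if placement == target:
                    return dist
                for u, v in edges:
                    nxt = list(placement)
                    nxt[u], nxt[v] = nxt[v], nxt[u]   # swap tokens on an edge
                    nxt = tuple(nxt)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, dist + 1))
            return None  # unreachable only if the graph is disconnected

        # Path on 4 vertices with tokens initially reversed: needs 6 swaps.
        print(min_token_swaps([(0, 1), (1, 2), (2, 3)], (3, 2, 1, 0)))

    Truncating this search at depth k explores at most |E|^k, i.e. n^{O(k)}, placements, which recovers the trivial n^{O(k)}-time algorithm that the abstract's lower bound almost matches.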

    Reciprocity as a foundation of financial economics

    This paper argues that the ethical concept of ‘reciprocity’ underpins the fundamental theorem of contemporary financial mathematics. The argument is based on identifying an equivalence between the contemporary, and ostensibly ‘value neutral’, Fundamental Theorem of Asset Pricing and theories of mathematical probability that emerged in the seventeenth century in the context of the ethical assessment of commercial contracts in a framework of Aristotelian ethics. This observation, the main claim of the paper, is justified on the basis of results from the Ultimatum Game and is analysed within a framework of Pragmatic philosophy. The analysis leads to the explanatory hypothesis that markets are centres of communicative action with reciprocity as a rule of discourse. The purpose of the paper is to reorientate financial economics to emphasise the objectives of cooperation and social cohesion, and to this end we offer specific policy advice.

    Linear Ramsey Numbers


    First-Order and Second-Order Ambiguity Aversion


    Degrees of belief, expected and actual

    A framework of degrees of belief, or credences, is often advocated to model our uncertainty about how things are or will turn out. It has also been employed in relation to the kind of uncertainty or indefiniteness that arises due to vagueness, such as when we consider “a is F” in a case where a is borderline F. How should we understand degrees of belief when we take into account both these phenomena? Can the right kind of theory of the semantics of vagueness help us answer this? Nicholas J.J. Smith defends a unified account, according to which “degree of belief is expected truth-value”; this builds on his Degree Theory of vagueness that offers an account of the semantics and logic of vagueness in terms of degrees of truth. I argue that his account fails. Degree theories of vagueness do not help us understand degrees of belief and, I argue, we shouldn’t expect a theory of vagueness to yield a detailed uniform story about this. The route from the semantics to psychological states needn’t be straightforward or uniform even before we attempt to combine vagueness with probabilistic uncertainty.
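
    To illustrate the slogan under discussion (a toy calculation of my own, not an example from Smith or from the paper): on the expected-truth-value account, an agent's credence in “a is F” is the probability-weighted average of the degrees of truth the sentence takes across the epistemically possible situations.

        # Toy illustration of "degree of belief is expected truth-value".
        # Hypothetical numbers: three epistemically possible situations for a
        # borderline predication "a is F", each with a probability and a degree of truth.
        situations = [
            {"prob": 0.5, "truth_value": 1.0},   # a is clearly F
            {"prob": 0.3, "truth_value": 0.4},   # a is borderline F
            {"prob": 0.2, "truth_value": 0.0},   # a is clearly not F
        ]

        credence = sum(s["prob"] * s["truth_value"] for s in situations)
        print(credence)  # 0.62: the expected truth-value of "a is F"

    A single number such as 0.62 leaves open, for instance, whether the agent is confident that the truth-value is middling or uncertain between the clear cases; whether and how an account should separate these is part of what is at issue in the paper.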

    Logical inference for inverse problems

    Estimating a single deterministic value for the model parameters when reconstructing the system response has limited meaning if one considers that the model used to predict the system's behaviour is just an idealization of reality and, furthermore, that measurement errors exist. To provide a reliable answer, probabilistic rather than deterministic values should be given, carrying information about the degree of uncertainty or plausibility of the model parameters given one or more observations of the system response. This is widely known as the Bayesian inverse problem, which has been covered in the literature from different perspectives, depending on the interpretation or meaning assigned to probability. In this paper, we review two main approaches: one that uses probability as logic, and an alternative that interprets it as information content. The contribution of this paper is to provide a unifying formulation from which both approaches stem as interpretations, and which is more general in the sense of requiring fewer axioms, while at the same time the formulation and computation are simplified by dropping some constants. An extension to the problem of model class selection is derived, which is particularly simple under the proposed framework. A numerical example is finally given to illustrate the utility and effectiveness of the method.
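
    As a minimal illustration of what a probabilistic rather than deterministic answer looks like (a generic grid-based Bayesian sketch, not the unifying formulation proposed in the paper), consider inferring one model parameter from noisy observations of a system response.

        import numpy as np

        # Toy Bayesian inverse problem (illustrative only, not the paper's formulation):
        # the "system response" is modelled as y = theta * x, observed with Gaussian noise.
        rng = np.random.default_rng(0)
        true_theta, sigma = 2.0, 0.3
        x = np.linspace(0.0, 1.0, 20)
        y_obs = true_theta * x + rng.normal(0.0, sigma, x.size)

        # Grid of candidate parameter values with a flat prior on [0, 4].
        thetas = np.linspace(0.0, 4.0, 401)

        # Gaussian log-likelihood of the observations for each candidate theta;
        # additive constants are dropped, since normalisation restores them.
        residuals = y_obs[None, :] - thetas[:, None] * x[None, :]
        log_post = -0.5 * np.sum((residuals / sigma) ** 2, axis=1)

        post = np.exp(log_post - log_post.max())
        post /= post.sum()                      # discrete posterior over the grid

        mean = np.sum(thetas * post)
        std = np.sqrt(np.sum((thetas - mean) ** 2 * post))
        print(f"posterior mean = {mean:.2f}, posterior std = {std:.2f}")

    The posterior standard deviation is the point of the exercise: the answer is a distribution over parameter values rather than a single number, and dropping additive constants in the log-posterior does not change it.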