- Vagueness, Presupposition and Truth-Value Judgments Jeremy Zehr (Quote from the author): "The day I taught my first course in semantics, I presented a definition of meaning along the lines of Heim & Kratzer's (1998), the one I had been presented with as an undergraduate student in linguistics: to know what a sentence means is to know in what situations it is true. And very soon I showed that, as I had also come to realize five years before, this definition was unable to capture our intuitions about presuppositional sentences: these are sentences we perfectly understand, but that we are sometimes as reluctant to judge true as to judge false, even while possessing all potentially relevant information. But by the time I became an instructor, I had become well acquainted with another phenomenon that similarly threatens this truth-conditional definition of meaning: the phenomenon of vagueness. So I added the class of vague sentences to the discussion. That both vague and presuppositional sentences threaten this fundamental definition shows the importance of their study for the domain of semantics. Under the supervision of Orin Percus, I therefore decided to approach the two phenomena jointly in my M.A. dissertation. By applying the tools developed for analyzing presupposition in truth-conditional semantics to the study of vagueness, I showed that it was possible to give a novel, sensible account of the sorites paradox that has puzzled philosophers since Eubulides first stated it more than 2000 years ago. This result illustrates how the joint study of two phenomena that were previously approached separately can bring new insights to long-discussed problems. This thesis aims at pursuing the joint investigation of the two phenomena, focusing on the specific truth-value judgments that they trigger. In particular, the theoretical literature of the last century rehabilitated the study of non-bivalent logical systems that were already prefigured in Antiquity and that have non-trivial consequences for truth-conditional semantics. In parallel, an experimental literature has been growing steadily since the beginning of the new century, collecting subjects' truth-value judgments on a variety of topics. The work presented here features both aspects: it investigates theoretical systems that jointly address issues raised by vagueness and presupposition, and it presents experimental methods that test the predictions of these systems with regard to truth-value judgments. The next two sections of this chapter are devoted to the presentation of my objects of study, namely vagueness and presupposition; and the last section of this chapter sets out the motivations that underlie my project of jointly approaching the two phenomena from a truth-functional perspective. Because the notion of a truth-value judgment is at the core of the dissertation, I have to make clear what I mean by bivalent and non-bivalent truth-value judgments. When I say that a sentence triggers bivalent truth-value judgments, I mean that in any situation, a sufficiently informed and competent speaker would confidently judge the sentence either “True” or “False”. When I say that a sentence triggers non-bivalent truth-value judgments, I mean that there are situations where a competent speaker, even perfectly informed, would prefer to judge the sentence with a label different from “True” and “False”. 
In this chapter, I will remain agnostic as to what labels are actually preferred for each phenomenon, but the next chapters are mostly devoted to this question."
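To make the bivalent/non-bivalent contrast concrete, here is a minimal sketch (not taken from the dissertation) of a three-valued judgment scheme in the spirit of strong Kleene logic, which trivalent treatments of presupposition and vagueness commonly build on; the `TV` enum and the king-of-France example are purely illustrative.

```python
# Minimal sketch of a non-bivalent (three-valued) judgment scheme, in the
# spirit of strong Kleene logic; illustrative only, not from the dissertation.
from enum import Enum

class TV(Enum):
    TRUE = 1
    NEITHER = 0   # the "third" judgment a presuppositional or vague sentence may receive
    FALSE = -1

def neg(a: TV) -> TV:
    return TV(-a.value)

def conj(a: TV, b: TV) -> TV:
    # strong Kleene conjunction: the minimum on the order FALSE < NEITHER < TRUE
    return TV(min(a.value, b.value))

def disj(a: TV, b: TV) -> TV:
    # strong Kleene disjunction: the maximum
    return TV(max(a.value, b.value))

if __name__ == "__main__":
    # "The king of France is bald" in a situation with no king of France: NEITHER
    king_bald = TV.NEITHER
    # A plainly false conjunct still drives the conjunction to FALSE...
    print(conj(king_bald, TV.FALSE))   # TV.FALSE
    # ...while conjoining with a true sentence leaves the gap in place.
    print(conj(king_bald, TV.TRUE))    # TV.NEITHER
```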
- A Hierarchy of Bounds on Accessible Information and Informational Power Michele Dall’Arno: Quantum theory imposes fundamental limitations on the amount of information that can be carried by any quantum system. On the one hand, the Holevo bound rules out the possibility of encoding more information in a quantum system than in its classical counterpart, composed of perfectly distinguishable states. On the other hand, when states are uniformly distributed in the state space, the so-called subentropy lower bound is saturated. How uniform quantum systems are can be naturally quantified by characterizing them as t-designs, with t = ∞ corresponding to the uniform distribution. Here the existence of a trade-off between the uniformity of a quantum system and the amount of information it can carry is shown. To this aim, the author derives a hierarchy of informational bounds as a function of t and proves their tightness for qubits and qutrits. By deriving asymptotic formulae for large dimensions, the author also shows that the statistics generated by any t-design with t > 1 contain no more than a single bit of information, and that this amount decreases with t. The Holevo and subentropy bounds are recovered as particular cases for t = 1 and t = ∞, respectively.
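As a small aside, the Holevo quantity χ = S(Σᵢ pᵢρᵢ) − Σᵢ pᵢS(ρᵢ), which underlies the t = 1 case, can be evaluated numerically in a few lines. The sketch below is not from the paper; the qubit ensemble is made up for illustration, showing that orthogonal states reach one bit while non-orthogonal ones carry less.

```python
# Numerical sketch (not from the paper): the Holevo quantity
# chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i),
# which upper-bounds the information accessible from an ensemble.
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits, ignoring numerically-zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def holevo_quantity(probs, states):
    avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(avg) - sum(
        p * von_neumann_entropy(rho) for p, rho in zip(probs, states)
    )

def qubit_state(theta, phi):
    """Pure qubit state |psi><psi| with Bloch angles (theta, phi)."""
    psi = np.array([np.cos(theta / 2),
                    np.exp(1j * phi) * np.sin(theta / 2)])
    return np.outer(psi, psi.conj())

if __name__ == "__main__":
    # Two perfectly distinguishable states saturate the bound at 1 bit...
    print(holevo_quantity([0.5, 0.5], [qubit_state(0, 0), qubit_state(np.pi, 0)]))
    # ...while non-orthogonal states carry strictly less.
    print(holevo_quantity([0.5, 0.5], [qubit_state(0, 0), qubit_state(np.pi / 2, 0)]))
```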
- Kantian Space, Supersubstantivalism, and the Spirit of Spinoza James Messina: In the first edition of Concerning the Doctrine of Spinoza in Letters to Mendelssohn, Jacobi claims that Kant’s account of space is “wholly in the spirit of Spinoza”. In the first part of the paper, the author argues that Jacobi is correct: Spinoza and Kant have surprisingly similar views regarding the unity of space and the metaphysics of spatial properties and laws. Perhaps even more surprisingly, they both are committed to a form of parallelism. In the second part of the paper, James draws on the results of the first part to explain Kant’s oft-repeated claim that if space were transcendentally real, Spinozism would follow, along with Kant’s reasons for thinking transcendental idealism avoids this nefarious result. In the final part of the paper, James sketches a Spinozistic interpretation of Kant’s account of the relation between the empirical world of bodies and (what one might call) the transcendental world consisting of the transcendental subject’s representations of the empirical world and its parts.
- Bayesianism, Infinite Decisions, and Binding Frank Arntzenius, Adam Elga, John Hawthorne: When decision situations involve infinities, vexing puzzles arise. The authors describe six such puzzles below. (None of the puzzles has a universally accepted solution, and they are aware of no suggested solutions that apply to all of the puzzles.) The authors will use the puzzles to motivate two theses concerning infinite decisions. In addition to providing a unified resolution of the puzzles, the theses have important consequences for decision theory wherever infinities arise. By showing that Dutch book arguments have no force in infinite cases, the theses are evidence that reasonable utility functions may be unbounded, and that reasonable credence functions need not be either countably additive or conglomerable (a term to be explained in section 3). The theses show that when infinitely many decisions are involved, the difference between making the decisions simultaneously and making them sequentially can be the difference between riches and ruin. And the authors reveal a new way in which the ability to make binding commitments can save perfectly rational agents from sure losses.
- The Solvability of Probabilistic Regresses. A Reply to Frederik Herzberg David Atkinson and Jeanne Peijnenburg: The authors have earlier shown by construction that a proposition can have a well-defined nonzero probability, even if it is justified by an infinite probabilistic regress. They took this to be an adequate rebuttal of foundationalist claims that probabilistic regresses must lead either to an indeterminate, or to a determinate but zero, probability. In a comment, Frederik Herzberg has argued that their counterexamples are of a special kind, being what he calls ‘solvable’. In the present reply the authors investigate what Herzberg means by solvability. They discuss the advantages and disadvantages of making solvability a sine qua non, and they air their misgivings about Herzberg’s suggestion that the notion of solvability might help the foundationalist. They further show that the canonical series arising from an infinite chain of conditional probabilities always converges, and that its sum equals the required unconditional probability if a certain infinite product of conditional probabilities vanishes.
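For readers unfamiliar with the kind of series at issue, here is a minimal numerical sketch assuming, purely for illustration, constant conditional probabilities α = P(Eₙ | Eₙ₊₁) and β = P(Eₙ | ¬Eₙ₊₁); the rule of total probability then yields a geometric series whose sum is the unconditional probability, in line with the convergence claim above. This is not the authors' construction, just the simplest instance of it.

```python
# Numerical sketch (illustrative, not the authors' construction): an infinite
# probabilistic regress with constant conditional probabilities
#   alpha = P(E_n | E_{n+1}),  beta = P(E_n | not E_{n+1}).
# The rule of total probability gives P(E_n) = beta + (alpha - beta) * P(E_{n+1}),
# so the canonical series is  P(E_0) = beta * sum_{k>=0} (alpha - beta)^k,
# which converges because the product of (alpha - beta) factors vanishes.

def partial_sum(alpha, beta, n_terms):
    gamma = alpha - beta
    return beta * sum(gamma ** k for k in range(n_terms))

if __name__ == "__main__":
    alpha, beta = 0.9, 0.05
    for n in (1, 5, 10, 50):
        print(n, partial_sum(alpha, beta, n))
    # closed form of the limit: beta / (1 - (alpha - beta))
    print("limit:", beta / (1 - (alpha - beta)))
```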
- On Econometric Inference and Multiple Use Of The Same Data Benjamin Holcblat, Steffen Gronneberg: In fields that are mainly nonexperimental, such as economics and finance, it is often unavoidable to compute test statistics and confidence regions that are not probabilistically independent of previously examined data. The Bayesian and Neyman-Pearson inference theories are known to be inadequate for such a practice. The authors show that these inadequacies also hold m.a.e. (modulo approximation error). They develop a general econometric theory, called the neoclassical inference theory, that is immune to this inadequacy m.a.e. The neoclassical inference theory appears to nest model calibration and most econometric practices, whether they are labelled Bayesian or à la Neyman-Pearson. The authors then derive a general but simple adjustment to make standard errors account for the approximation error.
- How to Confirm the Disconfirmed - On conjunction fallacies and robust confirmation David Atkinson, Jeanne Peijnenburg and Theo Kuipers: Can some evidence confirm a conjunction of two hypotheses more than it confirms either of the hypotheses separately? The authors show that it can, and moreover under conditions that are the same for nine different measures of confirmation. Further, they demonstrate that it is even possible for the conjunction of two disconfirmed hypotheses to be confirmed by the same evidence.
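A toy illustration (not the authors' example) using the simple difference measure d(H, E) = P(H|E) − P(H): drawing a card from a standard deck, with E = "the card is the ace of spades", H1 = "the card is an ace" and H2 = "the card is a spade", the conjunction H1 ∧ H2 gains more confirmation than either conjunct.

```python
# Toy illustration (not the authors' example) of evidence confirming a
# conjunction more than either conjunct, using the difference measure
#   d(H, E) = P(H | E) - P(H).
# Deck of 52 cards; E = "the drawn card is the ace of spades".
from fractions import Fraction

P_H1 = Fraction(4, 52)    # H1: the card is an ace
P_H2 = Fraction(13, 52)   # H2: the card is a spade
P_H1H2 = Fraction(1, 52)  # H1 & H2: the card is the ace of spades

# Conditional on E, all three hypotheses are certain, so d(H, E) = 1 - P(H).
d_H1 = 1 - P_H1           # 12/13 ~ 0.923
d_H2 = 1 - P_H2           # 3/4   = 0.750
d_H1H2 = 1 - P_H1H2       # 51/52 ~ 0.981

print(float(d_H1), float(d_H2), float(d_H1H2))
assert d_H1H2 > max(d_H1, d_H2)  # the conjunction gains the most
```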
- Probability Density Functions from the Fisher Information Metric T. Clingman, Jeff Murugan, Jonathan P. Shock: The authors show a general relation between the spatially disjoint product of probability density functions and the sum of their Fisher information metric tensors. They then utilise this result to give a method for constructing the probability density functions for an arbitrary Riemannian Fisher information metric tensor. They note further that this construction is extremely unconstrained, depending only on certain continuity properties of the probability density functions and a select symmetry of their domains.
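The additivity behind such a relation can be seen in the following standard fact (background only, not the paper's general theorem): for a product density over disjoint domains, the Fisher information metric is the sum of the factors' metrics, because the cross terms vanish.

```latex
% Standard additivity fact (not the paper's theorem): for a product density
% p(x,y \mid \theta) = p_1(x \mid \theta)\, p_2(y \mid \theta) on disjoint
% domains, the Fisher information metrics add.
\begin{aligned}
g_{ij}(\theta)
  &= \mathbb{E}\!\left[\partial_i \log p \,\partial_j \log p\right]
   = \mathbb{E}\!\left[(\partial_i \log p_1 + \partial_i \log p_2)
                       (\partial_j \log p_1 + \partial_j \log p_2)\right] \\
  &= g^{(1)}_{ij}(\theta) + g^{(2)}_{ij}(\theta)
   + \mathbb{E}[\partial_i \log p_1]\,\mathbb{E}[\partial_j \log p_2]
   + \mathbb{E}[\partial_i \log p_2]\,\mathbb{E}[\partial_j \log p_1] \\
  &= g^{(1)}_{ij}(\theta) + g^{(2)}_{ij}(\theta),
   \qquad\text{since } \mathbb{E}[\partial_i \log p_k] = \partial_i\!\int p_k = 0 .
\end{aligned}
```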
- Having Science in View: General Philosophy of Science and its Significance Stathis Psillos: General philosophy of science (GPoS) is the part of conceptual space where philosophy and science meet and interact. More specifically, it is the space in which the scientific image of the world is synthesised and in which the general and abstract structure of science becomes the object of theoretical investigation. Yet, there is some scepticism in the profession concerning the prospects of GPoS. In a seminal piece, Philip Kitcher (2013) noted that the task of GPoS, as conceived by Carl Hempel and many who followed him, was to offer explications of major metascientific concepts such as confirmation, theory, explanation, simplicity etc. These explications were supposed “to provide general accounts of them by specifying the necessary conditions for their application across the entire range of possible cases” (2013, 187). Yet, Kitcher notes, “Sixty years on, it should be clear that the program has failed. We have no general accounts of confirmation, theory, explanation, law, reduction, or causation that will apply across the diversity of scientific fields or across different periods of time” (2013, 188). There are two chief reasons for this alleged failure. The first relates to the diversity of scientific practice: the methods employed by the various fields of natural science are very diverse and field-specific. As Kitcher notes, “Perhaps there is a ‘thin’ general conception that picks out what is common to the diversity of fields, but that turns out to be too attenuated to be of any great use”. The second reason relates to the historical record of the sciences: the ‘mechanics’ of major scientific changes in different fields of inquiry is diverse and involves factors that cannot be readily accommodated by a general explication of the major metascientific concepts (cf. 2013, 189). Though Kitcher does not make this suggestion explicitly, the trend seems to be to move from GPoS to the philosophies of the individual sciences and to relocate whatever content GPoS is supposed to have to the philosophies of the sciences. I think scepticism or pessimism about the prospects of GPoS is unwarranted. And I also think that there can be no philosophies of the various sciences without GPoS.
- Reducing Computational Complexity of Quantum Correlations Titas Chanda, Tamoghna Das, Debasis Sadhukhan, Amit Kumar Pal, Aditi Sen(De), and Ujjwal Sen: The authors address the issue of reducing the resources required to compute information-theoretic quantum correlation measures like quantum discord and quantum work deficit in two-qubit and higher-dimensional systems. They provide a mathematical description of determining the quantum correlation measure using a restricted set of local measurements. They also show that the computational error caused by restricting the complete set of local measurements decreases quickly as the size of the restricted set increases. They further perform a quantitative analysis to investigate how the error scales with the system size, taking into account a set of plausible constructions of the constrained set. Carrying out a comparative study, they show that the resources required to optimize quantum work deficit are usually higher than those required for quantum discord. They also demonstrate that minimization of quantum discord and quantum work deficit is easier in the case of two-qubit mixed states of fixed ranks and with positive partial transpose than for the corresponding states having non-positive partial transpose. For bound entangled states, the authors show that the error remains small when the measurements correspond to the spin observables along the three Cartesian coordinates.
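As an illustration of what optimisation over a restricted measurement set looks like in practice, the sketch below (not the authors' code) estimates the quantum discord of a two-qubit state by minimising the conditional entropy over a coarse grid of projective measurement directions on qubit A. The Werner-like test state and the grid sizes are arbitrary choices; a finer grid plays the role of a larger restricted set.

```python
# Sketch (not the authors' code): estimating quantum discord of a two-qubit
# state while optimising only over a *restricted* set of local projective
# measurements on qubit A -- a coarse grid of Bloch-sphere directions.
import numpy as np

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def partial_trace_A(rho):      # trace out qubit A of a 4x4 state
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)

def partial_trace_B(rho):      # trace out qubit B
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

def conditional_entropy(rho, n_grid):
    """Minimise sum_k p_k S(rho_{B|k}) over an n_grid x n_grid grid of
    projective measurement directions on qubit A (the restricted set)."""
    best = np.inf
    for theta in np.linspace(0.0, np.pi, n_grid):
        for phi in np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False):
            nx, ny, nz = (np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta))
            proj = (I2 + nx * SX + ny * SY + nz * SZ) / 2
            s_cond = 0.0
            for P in (proj, I2 - proj):
                M = np.kron(P, I2)
                p = np.real(np.trace(M @ rho @ M))
                if p > 1e-12:
                    rho_B = partial_trace_A(M @ rho @ M) / p
                    s_cond += p * entropy(rho_B)
            best = min(best, s_cond)
    return best

def discord_restricted(rho, n_grid):
    # D = S(rho_A) - S(rho_AB) + min_Pi sum_k p_k S(rho_{B|k}),
    # with the minimum taken only over the restricted grid of measurements.
    rho_A = partial_trace_B(rho)
    return entropy(rho_A) - entropy(rho) + conditional_entropy(rho, n_grid)

if __name__ == "__main__":
    psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # Bell state
    bell = np.outer(psi, psi.conj())
    werner = 0.7 * bell + 0.3 * np.eye(4) / 4
    # a finer grid (a larger restricted set) shrinks the estimation error
    for n_grid in (3, 6, 12):
        print(n_grid, discord_restricted(werner, n_grid))
```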
- Nonparametric Nearest Neighbor Random Process Clustering Michael Tschannen and Helmut Bolcskei: The authors consider the problem of clustering noisy finite-length observations of stationary ergodic random processes according to their nonparametric generative models, without prior knowledge of the model statistics or the number of generative models. Two algorithms, both using the L1 distance between estimated power spectral densities (PSDs) as a measure of dissimilarity, are analyzed. The first algorithm, termed nearest neighbor process clustering (NNPC), is, to the best of the authors' knowledge, new and relies on partitioning the nearest neighbor graph of the observations via spectral clustering. The second algorithm, simply referred to as k-means (KM), consists of a single k-means iteration with farthest point initialization and was considered before in the literature, albeit with a different measure of dissimilarity and with asymptotic performance results only. The authors show that both NNPC and KM succeed with high probability under noise and even when the generative process PSDs overlap significantly, provided that the observation length is sufficiently large. Their results quantify the tradeoff between the overlap of the generative process PSDs, the noise variance, and the observation length. Finally, they present numerical performance results for synthetic and real data.
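A rough sketch of the NNPC pipeline under simplifying assumptions (Welch PSD estimates, a plain k-nearest-neighbour graph, AR(1) processes standing in for the generative models; none of these choices is claimed to match the paper's exact setup):

```python
# Rough sketch of the NNPC idea (not the authors' implementation): cluster
# noisy observations of stationary processes by (i) estimating PSDs,
# (ii) computing pairwise L1 distances between normalised PSDs,
# (iii) spectral clustering of the resulting k-nearest-neighbour graph.
import numpy as np
from scipy.signal import welch
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

def ar1(coeff, length, noise_std=0.1):
    """Synthetic AR(1) observation plus additive white noise."""
    x = np.zeros(length)
    for t in range(1, length):
        x[t] = coeff * x[t - 1] + rng.standard_normal()
    return x + noise_std * rng.standard_normal(length)

# two generative models (AR coefficients), 20 observations each
observations = [ar1(c, 2048) for c in [0.9] * 20 + [-0.5] * 20]
labels_true = np.array([0] * 20 + [1] * 20)

# (i) PSD estimates, normalised to unit mass
psds = np.array([welch(x, nperseg=256)[1] for x in observations])
psds /= psds.sum(axis=1, keepdims=True)

# (ii) pairwise L1 distances between estimated PSDs
D = np.abs(psds[:, None, :] - psds[None, :, :]).sum(axis=2)

# (iii) k-nearest-neighbour affinity matrix + spectral clustering
k = 5
A = np.zeros_like(D)
for i, row in enumerate(D):
    nearest = np.argsort(row)[1:k + 1]        # skip self
    A[i, nearest] = 1.0
A = np.maximum(A, A.T)                         # symmetrise the NN graph

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
# agreement up to label permutation
agreement = max(np.mean(labels == labels_true), np.mean(labels != labels_true))
print("clustering agreement:", agreement)
```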
- Economic inequality and mobility in kinetic models for social sciences Maria Letizia Bertotti, Giovanni Modanese: Statistical evaluations of the economic mobility of a society are more difficult than measurements of the income distribution, because they require following the evolution of individuals’ incomes for at least one or two generations. In micro-to-macro theoretical models of economic exchanges based on kinetic equations, the income distribution depends only on the asymptotic equilibrium solutions, while mobility estimates also involve the detailed structure of the transition probabilities of the model, and are thus an important tool for assessing its validity. Empirical data show a remarkably general negative correlation between economic inequality and mobility, whose explanation is still unclear. It is therefore particularly interesting to study this correlation in analytical models. In previous work the authors investigated the behavior of the Gini inequality index in kinetic models as a function of several parameters which define the binary interactions and the taxation and redistribution processes: saving propensity, taxation rates gap, tax evasion rate, welfare means-testing, etc. Here, they check the correlation of mobility with inequality by analyzing how mobility depends on the same parameters. According to several numerical solutions, the correlation is confirmed to be negative.
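For intuition, here is a minimal agent-based sketch of a kinetic exchange economy with a saving propensity, together with a Gini index computation. It uses a Chakraborti–Chakrabarti-type exchange rule, which is far simpler than the authors' kinetic-equation framework and is meant only to illustrate how an inequality measure responds to such a parameter.

```python
# Minimal agent-based sketch of a kinetic exchange economy with saving
# propensity (a Chakraborti-Chakrabarti-type rule; far simpler than the
# authors' kinetic-equation framework, for illustration only).
import numpy as np

rng = np.random.default_rng(1)

def gini(w):
    """Gini index of a non-negative wealth vector."""
    w = np.sort(w)
    n = len(w)
    cum = np.cumsum(w)
    return float((n + 1 - 2 * np.sum(cum) / cum[-1]) / n)

def simulate(n_agents=1000, n_steps=200_000, saving=0.5):
    money = np.ones(n_agents)
    for _ in range(n_steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue
        # each agent keeps a fraction `saving`; the rest is randomly re-split
        pool = (1 - saving) * (money[i] + money[j])
        eps = rng.random()
        money[i] = saving * money[i] + eps * pool
        money[j] = saving * money[j] + (1 - eps) * pool
    return money

if __name__ == "__main__":
    # higher saving propensity -> narrower wealth distribution -> lower Gini
    for lam in (0.0, 0.5, 0.9):
        print(lam, round(gini(simulate(saving=lam)), 3))
```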
- Science and Informed, Counterfactual, Democratic Consent Arnon Keren: On many science-related policy questions, the public is unable to make informed decisions, because of its inability to make use of knowledge and information obtained by scientists. Philip Kitcher and James Fishkin have both suggested therefore that on certain science-related issues, public policy should not be decided upon by actual democratic vote, but should instead conform to the public's Counterfactual Informed Democratic Decision (CIDD). Indeed, this suggestion underlies Kitcher's specification of an ideal of a well-ordered science. The paper argues that this suggestion misconstrues the normative significance of CIDDs. At most, CIDDs might have epistemic significance, but no authority or legitimizing force.
- Proving the Herman-Protocol Conjecture Maria Bruna, Radu Grigore, Stefan Kiefer, Joel Ouaknine, and James Worrell: Herman’s self-stabilisation algorithm, introduced 25 years ago, is a well-studied synchronous randomised protocol for enabling a ring of N processes collectively holding any odd number of tokens to reach a stable state in which a single token remains. Determining the worst-case expected time to stabilisation is the central outstanding open problem about this protocol. It is known that there is a constant h such that any initial configuration has expected stabilisation time at most hN². Ten years ago, McIver and Morgan established a lower bound of 4/27 ≈ 0.148 for h, achieved with three equally-spaced tokens, and conjectured this to be the optimal value of h. A series of papers over the last decade gradually reduced the upper bound on h, with the present record (achieved last year) standing at approximately 0.156. In this paper, the authors prove McIver and Morgan’s conjecture and establish that h = 4/27 is indeed optimal.
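A quick Monte Carlo sketch of the token-level abstraction of Herman's protocol (an illustration of the quantity being bounded, not the paper's proof technique): starting from three equally spaced tokens, the empirical mean stabilisation time comes out close to 4N²/27. The ring size and trial count are arbitrary choices.

```python
# Monte Carlo sketch of Herman's protocol in its standard token-level
# abstraction (illustration only, not the paper's proof technique): in each
# synchronous round every token independently stays put or moves one step
# clockwise with probability 1/2, and tokens meeting at the same process
# annihilate in pairs. We estimate the expected stabilisation time for three
# equally spaced tokens and compare it with (4/27) * N^2.
import random

def stabilisation_time(n, tokens):
    tokens = set(tokens)
    rounds = 0
    while len(tokens) > 1:
        new_tokens = {}
        for pos in tokens:
            nxt = (pos + 1) % n if random.random() < 0.5 else pos
            new_tokens[nxt] = new_tokens.get(nxt, 0) + 1
        # tokens colliding at the same process annihilate in pairs
        tokens = {pos for pos, count in new_tokens.items() if count % 2 == 1}
        rounds += 1
    return rounds

if __name__ == "__main__":
    N, trials = 27, 2000                 # N divisible by 3 for equal spacing
    start = [0, N // 3, 2 * N // 3]
    mean = sum(stabilisation_time(N, start) for _ in range(trials)) / trials
    print("empirical mean:", mean, " bound 4N^2/27:", 4 * N * N / 27)
```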