MCMP – Philosophy of Science

Mathematical Philosophy - the application of logical and mathematical methods in philosophy - is about to experience a tremendous boom in various areas of philosophy. At the new Munich Center for Mathematical Philosophy, which is funded mostly by the German Alexander von Humboldt Foundation, philosophical research will be carried out mathematically, that is, by means of methods that are very close to those used by scientists.
The purpose of doing philosophy in this way is not to reduce philosophy to mathematics or to natural science in any sense; rather mathematics is applied in order to derive philosophical conclusions from philosophical assumptions, just as in physics mathematical methods are used to derive physical predictions from physical laws.
Nor is the idea of mathematical philosophy to dismiss any of the ancient questions of philosophy as irrelevant or senseless: although modern mathematical philosophy owes a lot to the heritage of the Vienna and Berlin Circles of Logical Empiricism, unlike the Logical Empiricists most mathematical philosophers today are driven by the same traditional questions about truth, knowledge, rationality, the nature of objects, morality, and the like, which drove the classical philosophers, and no area of traditional philosophy is taken to be intrinsically misguided or confused anymore. It is just that some of the traditional questions of philosophy can be made much clearer and much more precise in logical-mathematical terms; for some of these questions, answers can be given by means of mathematical proofs or models; and on this basis new and more concrete philosophical questions emerge. This may then lead to philosophical progress, and ultimately that is the goal of the Center.
Philosophy, Logic, Science, Language, Mathematics, Hannes Leitgeb, Stephan Hartmann, Alexander von Humboldt Foundation, Munich Center for Mathematical Philosophy, MCMP, LMU
https://cast.itunes.uni-muenchen.de/vod/playlists/FIAHt46Lax.html
Thu, 14 Mar 2013 14:12:56 +0000 | MCMP Team | Ludwig-Maximilians-Universität München | itunes@lmu.de

Context, Conversation, and Fragmentation
https://cast.itunes.uni-muenchen.de/clips/t1Q2JBI3IP/vod/high_quality.mp4
Dirk Kindermann (Graz) gives a talk at the MCMP Colloquium (25 June, 2015) titled "Context, Conversation, and Fragmentation". Abstract: What is a conversational context? One influential account (Lewis, Stalnaker, Roberts) says that it is a shared body of information: the information conveyed and/or presupposed by all interlocutors. Conversation, on this account, proceeds by variously influencing, and being influenced by, this body of information. In this talk, I argue that standard idealising assumptions, according to which this body of information is consistent and closed under entailment, put the account at risk of being inapplicable to ordinary speakers: rational agents with limited cognitive resources. I argue that to mitigate the problem, we should think of context as a fragmented body of information. I explain what fragmentation amounts to and develop a simple model of a fragmented common ground. I close by presenting some advantages of a fragmentation strategy in explaining some otherwise puzzling conversational phenomena.

Wed, 08 Jul 2015 02:00:00 +0000 | Dirk Kindermann (Graz) | Colloquium Mathematical Philosophy | 00:47:26

Fifteen Dimensions of Evaluating Theories of Causation. A Case Study of the Structural Model and the Ranking Theoretic Approach to Causation
https://cast.itunes.uni-muenchen.de/clips/df3MnZKdrW/vod/high_quality.mp4
Wolfgang Spohn (Konstanz) gives a talk at the Workshop on Causal and Probabilistic Reasoning (18-20 June, 2015) titled "Fifteen Dimensions of Evaluating Theories of Causation. A Case Study of the Structural Model and the Ranking Theoretic Approach to Causation". Abstract: The point of the talk is not to defend any exciting thesis. It is rather to remind you of all the dimensions theories of causation must take account of. It explains 15 such dimensions, not just in the abstract, but as exemplified by the structural model and the ranking theoretic approach to causation, which, surprisingly, differ on all 15 dimensions. Of course, the subcutaneous message is that the ranking theoretic approach might be preferable. However, the main moral is: keep all these dimensions in mind, and don't think that any one of these dimensions is settled! Even when working on a specific issue, you are never on secure ground.

Fri, 10 Jul 2015 03:00:00 +0000 | Wolfgang Spohn (Konstanz) | Workshop on Causal and Probabilistic Reasoning | 00:57:36

On the Role of the Light Postulate in Relativity
https://cast.itunes.uni-muenchen.de/clips/WjukueSIu3/vod/high_quality.mp4
R. A. Rynasiewicz (Johns Hopkins University) gives a talk at the MCMP Colloquium (10 June, 2015) titled "On the Role of the Light Postulate in Relativity". Abstract: As presented by Einstein in 1905, the theory of special relativity follows from two postulates: first, what he called the principle of relativity, and second, an empirical fact about the relation of the propagation of light relative to its source that has come to be called the light postulate. In 1910 Waldemar von Ignatowsky claimed to be able to derive the Lorentz transformations, and hence special relativity, without the light postulate, using only the principle of relativity and assumptions that Einstein seems to have implicitly made, such as linearity and the isotropy and homogeneity of space. In his authoritative Relativitätstheorie of 1921, Pauli dismissed Ignatowsky's result without explanation as void of physical significance. More recently, respected physicists and foundationalists, such as David Mermin (1984), have defended Ignatowsky and claimed that special relativity presupposes nothing about electromagnetism. In the first part of this talk, I discuss just what the light postulate asserts (both in special and in general relativity). In the second, I hope to shed light on the debate, if not definitively settle it. (To say on which side would spoil the suspense.) I will also discuss related attempts to dismiss the conventionality of simultaneity.

Tue, 30 Jun 2015 06:00:00 +0000 | R. A. Rynasiewicz (Johns Hopkins University) | Colloquium Mathematical Philosophy | 00:57:43

Explaining Macroscopic Systems from Microscopic Principles
https://cast.itunes.uni-muenchen.de/clips/jfqQujfkoO/vod/high_quality.mp4
Peter Pickl (LMU) gives a talk at the MCMP Colloquium (10 June, 2015) titled "Explaining Macroscopic Systems from Microscopic Principles". Abstract: The revolutionary idea of the late 19th century that the physics of gases can be explained by the dynamics of small, point-like particles had a great influence on physics as well as on mathematics and philosophy. This idea has significantly changed our understanding of the physics of macroscopic systems, as well as the way we see our universe as a whole. The question of how the connection between the microscopic and the macroscopic world can be explained also arises in other fields, for example the life sciences. Answering this question might have a similar impact on the research in these fields. In the talk I will present recent techniques and results of our research group in deriving macroscopic evolution equations from microscopic principles for certain classical, quantum mechanical and biological systems.

Tue, 30 Jun 2015 05:00:00 +0000 | Peter Pickl (LMU) | Colloquium Mathematical Philosophy | 00:42:45

Convergence of Iterated Belief Updates
https://cast.itunes.uni-muenchen.de/clips/OeVwCSRzjy/vod/high_quality.mp4
Berna Kilinç (Boğaziçi University) gives a talk at the MCMP Colloquium (3 June, 2015) titled "Convergence of Iterated Belief Updates". Abstract: One desideratum on belief upgrade operations is that their iteration is truth-tropic, either on finite or infinite streams of reliable information. Under special circumstances repeated Bayesian updating satisfies this desideratum, as shown for instance by the Gaifman and Snir theorem. There are a few analogous results in recent research within dynamic epistemic logic: Baltag et al. establish the decidability of propositions for some but not all upgrade operations on finite epistemic spaces. In this talk further convergence results will be established for qualitative stable belief.

Tue, 30 Jun 2015 04:00:00 +0000 | Berna Kilinç (Boğaziçi University) | Colloquium Mathematical Philosophy | 00:54:53

The Causal Nature of Modeling in Data-Intensive Science
https://cast.itunes.uni-muenchen.de/clips/oDZp57XlHU/vod/high_quality.mp4
Wolfgang Pietsch (MCTS/TU Munich) gives a talk at the MCMP Colloquium (3 June, 2015) titled "The Causal Nature of Modeling in Data-Intensive Science". Abstract: I argue for the causal character of modeling in data-intensive science, contrary to widespread claims that big data is only concerned with the search for correlations. After introducing and discussing the concept of data-intensive science, several algorithms are examined with respect to their ability to identify causal relationships. To this purpose, a difference-making account of causation is proposed that broadly stands in the tradition of David Lewis's counterfactual approach, but better fits the type of evidence used in data-intensive science. The account is inspired by causal inferences of the Mill's methods type. I situate data-intensive modeling within a broader framework of a Duhemian or Cartwrightian scientific epistemology, drawing an analogy to exploratory experimentation.

Tue, 30 Jun 2015 03:00:00 +0000 | Wolfgang Pietsch (MCTS/TU Munich) | Colloquium Mathematical Philosophy | 01:01:39

Against Grue Mysteries
https://cast.itunes.uni-muenchen.de/clips/3SN5APngVF/vod/high_quality.mp4
Alexandra Zinke (Konstanz) gives a talk at the MCMP Colloquium (28 May, 2015) titled "Against Grue Mysteries". Abstract: In a recent paper, Freitag (2015) reduces Goodman's new riddle of induction to the problem of doxastic dependence. We are not justified in projecting grue because our grue-evidence is doxastically dependent on defeated evidence. I try to implement this solution into an inductive extension of AGM belief revision theory. It turns out that the grue-example is nothing but an inductive version of well-known examples by Hansson (1992), which he uses to argue for base revision: if revision takes place on belief-bases, rather than on logically closed belief sets, we can easily account for the doxastic dependence relations between our beliefs. To handle the grue-case, I introduce the notion of an inductively closed belief-base. If we update on this inductively closed belief-base, the grue problem dissolves.

Tue, 30 Jun 2015 02:00:00 +0000 | Alexandra Zinke (Konstanz) | Colloquium Mathematical Philosophy | 00:45:35

On Einstein's Reality Criterion
https://cast.itunes.uni-muenchen.de/clips/b6jcVnrK8w/vod/high_quality.mp4
Gábor Hofer-Szabó (Hungarian Academy of Sciences) gives a talk at the MCMP Colloquium (28 May, 2015) titled "On Einstein's Reality Criterion". Abstract: In the talk we characterize the different interpretations of QM in an operationalist-frequentist framework and show what entities the different interpretations posit. We define completeness and correctness of an interpretation in terms of how this posited ontology relates to the "real world ontology" posited by principles independent of the interpretations. We argue that the Reality Criterion is just such a principle. We also argue that the EPR argument, making use of the Reality Criterion, is devised to show that certain interpretations of QM are incomplete, whereas Einstein's later arguments, making no use of the Reality Criterion, are devised to show that the Copenhagen interpretation is simply wrong. Next, investigating the nature of prediction, an essential part of the Reality Criterion, we formulate two hypotheses: (i) the Reality Criterion is a special case of Reichenbach's Common Cause Principle; (ii) it is a special case of Bell's Local Causality Principle.

Tue, 30 Jun 2015 01:00:00 +0000 | Gábor Hofer-Szabó (Hungarian Academy of Sciences) | Colloquium Mathematical Philosophy | 00:42:01

Predicting Outcomes in Five Person Spatial Games: An Aspiration Model Approach
https://cast.itunes.uni-muenchen.de/clips/Xa3OVOqTRf/vod/high_quality.mp4
Bernard Grofman (Irvine) gives a talk at the MCMP Colloquium (13 May, 2015) titled "Predicting Outcomes in Five Person Spatial Games: An Aspiration Model Approach". Abstract: There are many situations where voters must choose a single alternative and where both the voters and the alternatives can be characterized as points in a one-, two- or higher-dimensional policy space. In committees and legislatures, choice among these alternatives will often be made via a decision agenda in which alternatives are eliminated until a choice is made, sometimes requiring a final vote against the status quo. A common form for such an agenda is what Black (1958) called standard amendment procedure, a "king of the hill" procedure in which an initial alternative is paired against another alternative, with the winner of that pairwise contest going on to face the next alternative, and the process continuing until either the set of feasible alternatives is exhausted or there is a successful motion for cloture. Beginning with a seminal experiment on five person voting games conducted by Plott and Fiorina (1978), there have been a number of experiments on committee voting games with a potentially infinite set of alternatives embedded in a two-dimensional policy space. In games where there is a core, i.e., an alternative which, for an odd number of voters, can defeat each and every other alternative in paired comparison, outcomes at or near the core are chosen, but there is also considerable clustering of outcomes even in games without a core. A major concern of the literature has been to develop models to explain the pattern of that clustering in non-core situations. Here, after reviewing the present state of the art, we offer a new family of models based on the Siegel-Simon aspiration approach, in which voters satisfice by choosing "acceptable" alternatives, and the set of outcomes that are considered acceptable by each voter changes as the game continues.

Thu, 28 May 2015 11:00:00 +0000 | Bernard Grofman (Irvine) | Colloquium Mathematical Philosophy | 01:20:17

Modeling Cognitive Representations with Evolutionary Game Theory
https://cast.itunes.uni-muenchen.de/clips/6PpZVDl2ix/vod/high_quality.mp4
Marc Artiga (MCMP) gives a talk at the MCMP Colloquium (7 May, 2015) titled "Modeling Cognitive Representations with Evolutionary Game Theory". Abstract: Cognitive science has been developed on the idea that cognitive systems are representational. Recently, however, some people have challenged this idea. The goal of this talk is to provide some mathematical tools for resolving this question. More precisely, I will defend two claims. First, I will argue that Evolutionary Game Theory can help us establish which states are representations. Secondly, I will argue that this strategy can be used to show that perceptual states (among others) are indeed representational.

Tue, 12 May 2015 04:20:11 +0000 | Marc Artiga (MCMP) | Colloquium Mathematical Philosophy | 00:35:30

Structures, Mechanisms and Dynamics in Theoretical Neuroscience
https://cast.itunes.uni-muenchen.de/clips/0w8OBagsRH/vod/high_quality.mp4
Holger Lyre (Magdeburg) gives a talk at the MCMP Colloquium (6 May, 2015) titled "Structures, Mechanisms and Dynamics in Theoretical Neuroscience". Abstract: Proponents of mechanistic explanations have recently proclaimed that all explanations in the neurosciences appeal to mechanisms – including computational and dynamical explanations. The purpose of the talk is to critically assess these statements. I shall defend an understanding of both dynamical and computational explanations according to which they focus on the explanatorily relevant spatiotemporal-cum-causal structures in the target domain. This has an impact on at least three important issues: reductionism, multi-realizability, and explanatory relevance. A variety of examples from the theoretical neurosciences will be used to show that very often the explanatory relevance, burden, and advantage in view of law-like generalizability lie in picking out the relevant structure rather than in characterizing mechanisms in all their details.

Tue, 12 May 2015 02:14:21 +0000 | Holger Lyre (Magdeburg) | Colloquium Mathematical Philosophy | 00:50:01

The Mathematical Route to Causal Understanding
https://cast.itunes.uni-muenchen.de/clips/6PHEHB5dXP/vod/high_quality.mp4
Michael Strevens (NYU) gives a talk at the MCMP Colloquium (30 April, 2015) titled "The Mathematical Route to Causal Understanding". Abstract: Causal explanation is a matter of isolating the elements of the causal web that make a difference to the explanandum event or regularity (so I and others have argued). Causal understanding is a matter of grasping a causal explanation (so says what I have elsewhere called the "simple theory" of understanding). It follows that causal understanding is a matter of grasping the facts about difference-making, and in particular grasping the reasons why some properties of the web are difference-makers and some are not. Mathematical reasoning frequently plays a role in our coming to grasp these reasons, and in some causal explanations, deep mathematical theorems may do almost all the work. In these cases - such as the explanation why a person cannot complete a traverse of the bridges of Königsberg without crossing at least one bridge twice - our understanding seems to hinge more on our appreciation of mathematical than of physical facts. We have the sense that mathematics gives us physical understanding. But this is quite compatible with the explanation in question being causal in exactly the same sense as more unremarkable causal explanations.Michael Strevens (NYU) gives a talk at the MCMP Colloquium (30 April, 2015) titled "The Mathematical Route to Causal Understanding". Abstract: Causal explanation is a matter of isolating the elements of the causal web that make a difference to the explanandum event or regularity (so I and others have argued). Causal understanding is a matter of grasping a causal explanation (so says what I have elsewhere called the "simple theory" of understanding). It follows that causal understanding is a matter of grasping the facts about difference-making, and in particular grasping the reasons why some properties of the web are difference-makers and some are not. 
Mon, 11 May 2015 06:06:32 +0000 | Michael Strevens (NYU) | Colloquium Mathematical Philosophy | 00:47:27

Philosophy of Statistical Mechanics from an Emergentist Viewpoint
https://cast.itunes.uni-muenchen.de/clips/Ffm3Eir80Q/vod/high_quality.mp4
David Wallace (Balliol College) gives a talk at the MCMP Colloquium (15 April, 2015) titled "Philosophy of Statistical Mechanics from an Emergentist Viewpoint". Abstract: I sketch a view of the philosophy of statistical mechanics as (a) concerned primarily with the interrelations between different dynamical systems describing more or less coarse-grained degrees of freedom of a system, and only secondarily with thermodynamic notions like equilibrium and entropy, and (b) informed by developments in contemporary mainstream physics. I develop, as concrete examples, (i) the projection-based approach to kinetic equations developed in the 1970s by Balescu, Prigogine, Zwanzig et al., and (ii) the relevance of quantum mechanics to nominally “classical” systems like the ideal gas.

Tue, 12 May 2015 01:07:55 +0000 | David Wallace (Balliol College) | Colloquium Mathematical Philosophy | 01:03:26

Occam's Razor in Algorithmic Information Theory
https://cast.itunes.uni-muenchen.de/clips/IdrQKXzV4u/vod/high_quality.mp4
Tom Sterkenburg (Amsterdam/Groningen) gives a talk at the MCMP Colloquium (15 January, 2015) titled "Occam's Razor in Algorithmic Information Theory". Abstract: Algorithmic information theory, also known as Kolmogorov complexity, is sometimes believed to offer us a general and objective measure of simplicity. The first variant of this simplicity measure to appear in the literature was in fact part of a theory of prediction: the central achievement of its originator, R.J. Solomonoff, was the definition of an idealized method of prediction that is taken to implement Occam's razor in giving greater probability to simpler hypotheses about the future. Moreover, in many writings on the subject an argument of the following sort takes shape. From (1) the definition of the Solomonoff predictor, which has a precise preference for simplicity, and (2) a formal proof that this predictor will generally lead us to the truth, it follows that (Occam's razor) a preference for simplicity will generally lead us to the truth. Thus, sensationally, this is an argument to justify Occam's razor. In this talk, I show why the argument fails. The key to its dissolution is a representation theorem that links Kolmogorov complexity to Bayesian prediction.

Fri, 20 Feb 2015 01:48:56 +0000 | Tom Sterkenburg (Amsterdam/Groningen) | Colloquium Mathematical Philosophy | 00:36:53

Vindicating Methodological Triangulation
https://cast.itunes.uni-muenchen.de/clips/sbNxsv8Pdw/vod/high_quality.mp4
Remco Heesen (Carnegie Mellon) gives a talk at the MCMP Colloquium (8 January, 2015) titled "Vindicating Methodological Triangulation". Abstract: Social scientists use many different methods, and there are often substantial disagreements about which method is appropriate for a given research question. A proponent of methodological triangulation believes that if multiple methods yield the same answer, that answer is confirmed more strongly than it could have been by any single method. Methodological purists, on the other hand, believe that one should choose a single appropriate method and stick with it. Using formal tools from voting theory, we show that triangulation is more likely to lead to the correct answer than purism, assuming the scientist is subject to some degree of diffidence about the relative merits of the various methods. This is true even when in fact only one of the methods is appropriate for the given research question.
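The voting-theoretic flavor of the claim can be illustrated with a Condorcet-style toy calculation (my own sketch, not Heesen's actual model): three independent methods, each only modestly reliable, already outperform any single one when their verdicts are pooled by majority.

```python
from itertools import product

def majority_accuracy(ps):
    """Probability that a strict majority of independent methods,
    with individual accuracies ps, returns the correct answer."""
    total = 0.0
    for outcome in product([True, False], repeat=len(ps)):
        if sum(outcome) > len(ps) / 2:  # majority of methods correct
            prob = 1.0
            for correct, p in zip(outcome, ps):
                prob *= p if correct else (1 - p)
            total += prob
    return total

single = 0.6  # accuracy of any one method alone (illustrative value)
combined = majority_accuracy([0.6, 0.6, 0.6])
print(round(combined, 3), combined > single)  # 0.648 True
```

With three methods at 60% accuracy each, the majority verdict is right with probability 0.6³ + 3·0.6²·0.4 = 0.648, so pooling beats purism under the (strong) independence assumption.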
Fri, 16 Jan 2015 02:10:00 +0000 | Remco Heesen (CMU) | Colloquium Mathematical Philosophy | 00:19:51

Mathematical Explanations of Non-Mathematical Facts?
https://cast.itunes.uni-muenchen.de/clips/Qu70RyHDRr/vod/high_quality.mp4
Gabriel Tarziu (Romanian Academy) gives a talk at the MCMP Colloquium (7 January, 2015) titled "Mathematical explanations of non-mathematical facts?". Abstract: Are there mathematized explanations of physical phenomena? A fairly uncontroversial answer will look like this: of course there are – a glimpse at what happens in science is enough to find plenty of examples. Indeed, if one’s science of choice is physics, it would be hard to find explanations of the phenomena that do not involve mathematics. This fact raises a host of interesting philosophical problems: why can mathematics be used in such a context? What are the explanatory benefits that such a use brings with it, if there are any? What is the role played by the mathematical part in these explanations? Can it be taken, at least sometimes, as explanatory in its own right? This last problem has received a great deal of attention lately, mainly because it is of central importance in the quarrel between realists and nominalists. That debate is set aside here. This talk will explore the prospect of finding an account of scientific explanation that tells us how the mathematical part can be genuinely explanatory in such a context. I will argue that the main models of explanation fail for various reasons to accommodate mathematical explanations of physical phenomena, and I will present some reasons for being skeptical that a model that accommodates such explanations can be found.

Fri, 16 Jan 2015 00:58:54 +0000 | Gabriel Tarziu (Romanian Academy) | Colloquium Mathematical Philosophy | 00:44:54

Science, Metaphysics, and Understanding
https://cast.itunes.uni-muenchen.de/clips/01rxIuYjSx/vod/high_quality.mp4
Henk W. de Regt (Amsterdam) gives a talk at the MCMP Colloquium (17 December, 2014) titled "Science, Metaphysics, and Understanding". Abstract: My talk will address the question of whether there are limits to scientific understanding. To answer this question we first need to know what exactly scientific understanding is, and how it is achieved. This issue has long been neglected by philosophers of science because of the misguided assumption that understanding is purely subjective. I will offer an analysis of the nature of scientific understanding that accords with scientific practice and accommodates the historical diversity of conceptions of understanding. Its core idea is a general criterion for the intelligibility of scientific theories that is essentially contextual: which theories conform to this criterion depends on contextual factors, and can change in the course of time. To illustrate my account I will discuss a well-known episode in the history of physics: the debates about the intelligibility of gravitation in the seventeenth and eighteenth centuries. These debates were concerned with the relation between science and metaphysics, and an analysis of them sheds new light on the question of limits of scientific understanding.

Wed, 31 Dec 2014 11:05:44 +0000 | Henk W. de Regt (Amsterdam) | Colloquium Mathematical Philosophy | 00:49:56

Navigating the Twilight of Uncertainty: Decisions from Experience
https://cast.itunes.uni-muenchen.de/clips/Z7KMpzs7f9/vod/high_quality.mp4
Ralph Hertwig (Max Planck Institute for Human Development) gives a talk at the Workshop on Causal and Probabilistic Reasoning (18-20 June, 2015) titled "Navigating the Twilight of Uncertainty: Decisions from Experience". Abstract: In many of our decisions we cannot consult explicit statistics telling us about the relative risks involved in our actions. In lieu of explicit statistics, we can search either externally or internally for information, thus making decisions from experience (as opposed to decisions from descriptions). Recently, researchers have begun to investigate choice in settings in which people learn about options by experiential sampling over time. Converging findings show that when people make decisions based on experience, choices differ systematically from description-based choice. Furthermore, this research on decisions from experience has turned to new theories of decision making under uncertainty (ambiguity), “rediscovered” the importance of learning, and suggested important implications for risk and precautionary behavior. I will review these issues.

Wed, 08 Jul 2015 03:00:00 +0000 | Ralph Hertwig (Max Planck Institute for Human Development) | Workshop on Causal and Probabilistic Reasoning | 00:54:55

Use-novelty and double-counting: new insights from model selection theory
https://cast.itunes.uni-muenchen.de/clips/sRoh6MpoNP/vod/high_quality.mp4
Charlotte Werndl (Salzburg) gives a talk at the MCMP Colloquium (27 November, 2014) titled "Use-novelty and double-counting: new insights from model selection theory". Abstract: A widely debated issue on confirmation is the requirement of use-novelty (i.e. that data can only confirm models if they have not already been used before, e.g. for calibrating parameters). This paper investigates the issue of use-novelty in the context of the mathematical methods provided by model selection theory. I will show that the picture model selection theory presents us with about use-novelty is more subtle and nuanced than the commonly endorsed positions by climate scientists and philosophers. More specifically, I will argue that there are two main cases in model selection theory. On the one hand, there are the methods such as cross-validation where the data are required to be use-novel. On the other hand, there are the methods such as the Akaike Information Criterion (AIC) for which the data cannot be use-novel. Still, for some of these methods (like AIC) certain intuitions behind the use-novelty approach are preserved: there is a penalty term in the expression for the degree of confirmation by the data because the data have already been used for calibration. Finally, this picture presented by model selection theory will be compared to the conclusions drawn about use-novelty by Bayesians and proponents of the use-novelty approach.
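For reference, the penalty term mentioned in the abstract has a standard form: AIC = 2k − 2 ln L̂, where k is the number of calibrated parameters and L̂ the maximized likelihood. A minimal sketch with invented log-likelihood values (purely illustrative, not figures from the talk):

```python
def aic(k, log_likelihood):
    """Akaike Information Criterion: 2k - 2 ln(L-hat).
    The 2k term penalizes a model for parameters that were
    already fitted to the data; lower AIC is better."""
    return 2 * k - 2 * log_likelihood

# Hypothetical comparison: a richer model fits slightly better
# but pays a larger penalty for its extra calibrated parameters.
simple = aic(k=2, log_likelihood=-100.0)    # 2*2 + 200   = 204.0
complex_ = aic(k=5, log_likelihood=-98.5)   # 2*5 + 197   = 207.0
print(simple < complex_)  # True: the simpler model is preferred
```

The same data that calibrated the parameters also score the model, which is why, as the abstract notes, AIC's data "cannot be use-novel" yet the penalty preserves part of the use-novelty intuition.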
Thu, 18 Dec 2014 11:32:17 +0000 | Charlotte Werndl (Salzburg) | Colloquium Mathematical Philosophy | 00:55:45

On de Finetti's Instrumentalist Philosophy of Probability
https://cast.itunes.uni-muenchen.de/clips/uXGOLuIMeX/vod/high_quality.mp4
Joseph Berkovitz (Toronto) gives a talk at the MCMP Colloquium (20 November, 2014) titled "On de Finetti's Instrumentalist Philosophy of Probability". Abstract: De Finetti is one of the founding fathers of the modern theory of subjective probability, where probabilities are coherent degrees of belief. De Finetti rejected the idea that subjective probabilities are supposed to be guesses, predictions or hypotheses about the corresponding objective probabilities. He argued that probabilities are inherently subjective, and that none of the objective interpretations of probability makes sense. While de Finetti’s theory has been influential in science and in philosophy, it has encountered various objections. In particular, it has been argued that de Finetti’s concept of probability is too permissive, licensing degrees of belief that we would normally call ‘crazy’. Further, de Finetti is commonly conceived as giving an operational, behaviorist definition of degrees of belief and accordingly of probability. The claim is that degrees of belief are defined in terms of behaviors and behavioral dispositions in (hypothetical and actual) betting circumstances. Thus, the theory is said to inherit the difficulties embodied in operationalism and behaviorism. I argue that these and some other objections are unfounded, as they overlook various central aspects of de Finetti’s philosophy of probability. I then propose a new interpretation of de Finetti’s theory that highlights these central aspects and explains how they are an integral part of de Finetti’s instrumentalist philosophy of probability.

Thu, 18 Dec 2014 09:59:33 +0000 | Joseph Berkovitz (Toronto) | Colloquium Mathematical Philosophy | 00:57:52

Indigenous and Scientific Knowledge. A Model of Knowledge Integration and its Limitations.
https://cast.itunes.uni-muenchen.de/clips/6xZOwUnxli/vod/high_quality.mp4
David Ludwig (VU University Amsterdam) gives a talk at the MCMP Colloquium (17 June, 2015) titled "Indigenous and Scientific Knowledge. A Model of Knowledge Integration and its Limitations". Abstract: Philosophical debates about indigenous knowledge often focus on the issue of relativism: given a diversity of local knowledge systems, how can certain types of (e.g. scientific or metaphysical) knowledge claim to transcend their historical and cultural contexts? In contrast with philosophical worries about differences between knowledge systems, research in ethnobiology and conservation biology is often motivated by the practical need to integrate indigenous and modern scientific knowledge in the co-management of local environments. Instead of focusing on alleged incommensurability, real-life conservation efforts often require the incorporation of knowledge from different sources. Based on ethnobiological case studies, I propose a model of knowledge integration that reflects shared reference to property clusters and their inferential potentials. Furthermore, the proposed model not only explains the integration of indigenous and modern scientific knowledge but also predicts limitations of knowledge integration. I argue that the model therefore not only helps us understand current practices of ethnobiology but also provides a nuanced picture of the ontological and epistemological relations between different knowledge systems.

Wed, 08 Jul 2015 05:00:00 +0000 | David Ludwig (VU University Amsterdam) | Colloquium Mathematical Philosophy | 00:43:12

The Varieties of Explanations in the Higgs Sector
https://cast.itunes.uni-muenchen.de/clips/eIjN3wASGs/vod/high_quality.mp4
Michael Stöltzner (South Carolina) gives a talk at the MCMP Colloquium (19 November, 2014) titled "The Varieties of Explanations in the Higgs Sector". Abstract: I argue that there is no single universal conception of scientific explanation that is consistently employed throughout Higgs physics – ranging from the successful search for a standard model (SM) Higgs particle and the hitherto unsuccessful searches beyond it, to phenomenological model builders in the Higgs sector and theoretical physicists interested in the Higgs mechanism. But the coexistence of deductive-statistical, unificationist, model-based, and statistical-relevance explanations does not amount to a fragmentation of the discipline; rather, it allows elementary particle physicists to simultaneously pursue a plurality of research strategies and keep the field together through joint convictions about the SM and shared explanatory ideals. Most importantly, the SM both constitutes a successful explanation and contains aspects in need of further explanation. Such explanatory ideals typically appear as stories or narratives motivating the different models and linking them to the whole of the discipline.

Thu, 18 Dec 2014 08:47:48 +0000 | Michael Stöltzner (South Carolina) | Colloquium Mathematical Philosophy | 00:57:30

Propensities, Chance Distributions, and Experimental Statistics
https://cast.itunes.uni-muenchen.de/clips/Qk18zou19e/vod/high_quality.mp4
Mauricio Suarez (London, Madrid) gives a talk at the MCMP Colloquium (12 November, 2014) titled "Propensities, Chance Distributions, and Experimental Statistics". Abstract: Probabilistic or statistical modelling may be described as the attempt to characterise (finite) experimental data in terms of models formally involving probabilities. I argue that a coherent understanding of much of the practice of probabilistic modelling calls for a distinction between three notions that are often conflated in the philosophy of probability literature. A probability model is often implicitly or explicitly embedded in a theoretical framework that provides explanatory – not merely descriptive – strategies and heuristics. Such frameworks often appeal to genuine properties of objects, systems or configurations, with putatively some explanatory function. The literature provides examples of formally precise rules for introducing such properties at the individual or token level in the description of statistically relevant populations (Dawid 2007, and forthcoming). Thus, I claim, it becomes useful to distinguish probabilistic dispositions (or single-case propensities), chance distributions (or probabilities), and experimental statistics (or frequencies). I illustrate the distinction with some elementary examples of games of chance, and go on to claim that it is readily applicable to more complex probabilistic phenomena, notably quantum phenomena.

Thu, 18 Apr 2019 23:05:17 +0000 | Mauricio Suarez (London, Madrid) | Colloquium Mathematical Philosophy | 00:55:41

Unifying Causal and Non-Causal Knowledge
https://cast.itunes.uni-muenchen.de/clips/jWPObb0ZNb/vod/high_quality.mp4
Michael Strevens (NYU) meets Roland Poellinger (MCMP/LMU) in a joint session on "Unifying Causal and Non-Causal Knowledge" at the MCMP workshop "Bridges 2014" (2 and 3 Sept, 2014, German House, New York City). The 2-day trans-continental meeting in mathematical philosophy focused on inter-theoretical relations, thereby connecting form and content of this philosophical exchange. Idea and motivation: We use theories to explain, to predict and to instruct, to talk about our world and order the objects therein. Different theories deliberately emphasize different aspects of an object, purposefully utilize different formal methods, and necessarily confine their attention to a distinct field of interest. The desire to enlarge knowledge by combining two theories presents a research community with the task of building bridges between the structures and theoretical entities on both sides. Especially if no background theory is available as yet, this becomes a question of principle and of philosophical groundwork: If there are any, what are inter-theoretical relations to look like? Will a unified theory possibly adjudicate between monist and dualist positions? Under what circumstances will partial translations suffice? Can the ontological status of inter-theoretical relations inform us about inter-object relations in the world? Find more about the meeting at www.lmu.de/bridges2014.
On the Distinction between Internal and External Symmetries
https://cast.itunes.uni-muenchen.de/clips/itr7kzqG5j/vod/high_quality.mp4
Radin Dardashti (MCMP/LMU) gives a talk at the MCMP Colloquium (2 July, 2014) titled "On the Distinction between Internal and External Symmetries". Abstract: There is no doubt that symmetries play an important role in fundamental physics, but there is no agreement among physicists on what this role exactly is. So it is not surprising that it has caught the interest of philosophers in recent years leading to a lively discussion on the epistemological and ontological significance of symmetries. Especially in this context it becomes relevant whether common distinctions made between different kinds of symmetries are purely conventional or have a deeper mathematical and/or physical justification. It is the aim of this talk to discuss the distinction between internal and external (or spacetime) symmetries and its possible justification. First, I will discuss attempts at combining internal and external symmetries, which lead to several no-go theorems. A naive interpretation of these results leads to the conclusion that the distinction is physically/mathematically justified. Second, the strong dependence of the no-go results on physical and mathematical assumptions is discussed and it is shown how the distinction becomes blurred once mathematical assumptions are weakened. This is exactly what happens in supersymmetric extensions of the standard model of particle physics. So, in the final part, I will argue that under a certain (philosophical) assumption the question about the status of the distinction can be made into an experimental question.
Model Tuning and Predictivism
https://cast.itunes.uni-muenchen.de/clips/6HHaCHKoTg/vod/high_quality.mp4
Mathias Frisch (Maryland) gives a talk at the MCMP Colloquium (26 June, 2014) titled "Model Tuning and Predictivism". Abstract: Many climate scientists maintain that evidence used in tuning or calibrating a climate model cannot be used to evaluate the model. By contrast, the philosophers Katie Steele and Charlotte Werndl have argued, appealing to Bayesian confirmation theory, that tuning is simply an instance of hypothesis testing. In this paper I argue against both views and for a weak predictivism: there are cases, model-tuning among them, in which predictive successes are more highly confirmatory than accommodation. I propose a Bayesian formulation of the predictivist thesis.

Rational Routines
https://cast.itunes.uni-muenchen.de/clips/9AqWG5SJzT/vod/high_quality.mp4
Martin Peterson (Eindhoven) gives a talk at the MCMP Colloquium (18 June, 2014) titled "Rational Routines". Abstract: Recent research in evolutionary economics suggests that firms and other organizations are governed by routines. What distinguishes successful firms and organizations from less successful ones is that the former are better at developing, using and modifying routines that fit with the circumstances faced by the organization. Individual agents also rely on routines: many people do not actively choose what to eat for breakfast, or how to travel to work, or how to organize their daily activities in the office. In this talk I explore the hypothesis that routines, rather than preferences over uncertain prospects, should be used as the fundamental building block in theories that aim to analyze normative aspects of real-life decision making. I focus on a single, structural property of routines: I show that as long as routines are weakly monotonic (in a sense defined in the talk) the decision maker is rationally permitted to apply all routines available in a given time period, in any order, and there is no requirement to apply any routine more than once. Furthermore, there is no other way in which the same set of routines could be applied that would produce an operational state that is strictly better. I finally compare my results with some quite different claims about routines made by Krister Segerberg in the 1980s.
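A toy instance of the order-independence claim (my construction, not Peterson's formal definition of weak monotonicity): routines of the form "raise the operational state at least to a threshold" never make the state worse and gain nothing from repetition, and for them every order of application yields the same final state.

```python
from itertools import permutations

def make_routine(threshold):
    """A routine that lifts the state to at least `threshold` (toy model)."""
    return lambda state: max(state, threshold)

routines = [make_routine(t) for t in (3, 7, 5)]

def apply_all(state, ordering):
    for routine in ordering:
        state = routine(state)
    return state

# Applying all routines in every possible order yields one and the same state.
results = {apply_all(0, order) for order in permutations(routines)}
print(results)  # a single final state, regardless of order
```

This is only a special case chosen to make order-independence transparent; the talk's result concerns a more general class of weakly monotonic routines.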
Agent-based simulations in empirical sociological research
https://cast.itunes.uni-muenchen.de/clips/653ZB5QuQu/vod/high_quality.mp4
Isabelle Drouet (Paris-Sorbonne) gives a talk at the MCMP Colloquium (4 June, 2014) titled "Agent-based simulations in empirical sociological research". Abstract: Agent-based models and simulations are more and more widely used in the empirical sciences. In sociology, they have been put at the core of a research project: analytical sociology, as theorized and practiced in, e.g., Hedström’s Dissecting the Social (2005). Analytical sociologists conceive of ABMs as tools for causal analysis. More precisely, they see agent-based simulations as the one method enabling the social sciences to produce genuine explanations of macro empirical phenomena by micro (or possibly meso) ones, and the purported explanations clearly are causal ones. My talk aims at clarifying in which sense exactly, and under which conditions, agent-based models and simulations as they are used in analytical sociology can indeed causally explain, or contribute to causally explaining, social facts.
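As a classic illustration of the micro-to-macro pattern at issue (my example, not Drouet's): in a Schelling-style model, individually mild preferences about neighbours generate pronounced macro-level segregation, which is the kind of emergent social fact analytical sociologists invoke ABMs to explain.

```python
import random

random.seed(1)
SIZE, EMPTY, TOLERANCE = 20, 0.2, 0.34  # agents want >= 1/3 like neighbours

def neighbours(grid, x, y):
    """The eight surrounding cells, on a toroidal grid."""
    return [grid[(x + dx) % SIZE][(y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def unhappy(grid, x, y):
    me = grid[x][y]
    if me is None:
        return False
    occupied = [n for n in neighbours(grid, x, y) if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < TOLERANCE

def segregation(grid):
    """Mean share of like neighbours, averaged over occupied cells."""
    shares = []
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is None:
                continue
            occ = [n for n in neighbours(grid, x, y) if n is not None]
            if occ:
                shares.append(sum(n == grid[x][y] for n in occ) / len(occ))
    return sum(shares) / len(shares)

grid = [[None if random.random() < EMPTY else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]
before = segregation(grid)
for _ in range(50_000):  # unhappy agents relocate to random empty cells
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    if unhappy(grid, x, y):
        ex, ey = random.randrange(SIZE), random.randrange(SIZE)
        if grid[ex][ey] is None:
            grid[ex][ey], grid[x][y] = grid[x][y], None
print(before, segregation(grid))  # clustering typically well above the mild preference
```

The model is deliberately minimal; whether such a simulation genuinely explains, causally, is precisely the question the talk addresses.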
Persistence of the lifeworld? On the relation of lifeworld and science
https://cast.itunes.uni-muenchen.de/clips/FvRGlZE5c5/vod/high_quality.mp4
Gregor Schiemann (Wuppertal) gives a talk at the MCMP Colloquium (28 May, 2014) titled "Persistence of the lifeworld? On the relation of lifeworld and science". Abstract: In contrast to the concept of science, the concept of the lifeworld describes an experience characterised by familiar social relations, actions performed as a matter of course, and a lack of professionalism. Divergent relations between science and the lifeworld are possible, as I will demonstrate in the first part by considering exemplary processes of scientification of the lifeworld and their opposing tendencies. I judge the conflicting evaluations as an expression of a cultural change in which the existence of the lifeworld as a non-scientific form of experience is at stake. To evaluate the situation, in the second part I develop a concept of the lifeworld as an attitude towards the world and a set of performances of action, both of which are historically contingent and whose abolition can, today, already be imagined. The concluding third part demonstrates that we are, however, still somewhat far removed from a conceivable end of the lifeworld.
An Analogical Inductive Logic for Partially Exchangeable Families of Attributes
https://cast.itunes.uni-muenchen.de/clips/Cgf4QZtLo5/vod/high_quality.mp4
Simon Huttegger (UC Irvine) gives a talk at the MCMP Colloquium (22 May, 2014) titled "An Analogical Inductive Logic for Partially Exchangeable Families of Attributes". Abstract: Since Carnap started his epic program of developing an inductive logic, there have been various attempts to include analogical reasoning into systems of inductive logic. I will present a new system based on de Finetti's concept of partial exchangeability. Together with a set of plausible axioms, partial exchangeability allows one to derive a family of inductive learning rules with enumerative analogical effects.

Computational Model as Generic Mechanisms
https://cast.itunes.uni-muenchen.de/clips/wrGXkHcMVq/vod/high_quality.mp4
Catherine Stinson (Ryerson University) gives a talk at the MCMP Colloquium (21 May, 2014) titled "Computational Model as Generic Mechanisms". Abstract: The role of computational models in science is a bit of a puzzle. They seem to be very unlike experiments in terms of their access to empirical facts about their target systems, yet scientists make liberal use of computational models to experiment and make discoveries. I connect this problem to one concerning mechanistic explanation. There a puzzle arises as to how schematic or abstract mechanisms can be explanatory, which they often seem to be, if one is committed to thinking of explanation as intimately connected to causation. Abstractions aren’t the sorts of things that have causal powers. A solution to both problems is to think of computational models not as abstractions, but as bare instantiations of abstract types, which I’ll call generics. Generics are the sorts of things that have causal powers. Computational models can then be considered experiments on generics, which gives them access to empirical facts about those generics. I argue that many common types of experiment can be better understood as experiments on generics, and suggest a shift in how we think of the inferences made in interpreting and applying experimental results.
On the Justification of Deduction and Induction
https://cast.itunes.uni-muenchen.de/clips/iQUrpD61cj/vod/high_quality.mp4
Franz Huber (Toronto) gives a talk at the MCMP Colloquium (7 May, 2014) titled "On the Justification of Deduction and Induction". Abstract: In this talk I will first present my preferred variant of Hume's (1739; 1748) argument for the thesis that we cannot justify the principle of induction. Then I will criticize the responses that the resulting problem of induction has received from Carnap (1963; 1968) and from Goodman (1954), and briefly praise Reichenbach's (1938; 1940) approach. Some of these authors compare induction to deduction. Haack (1976) compares deduction to induction. Next, I will critically discuss her argument for the thesis that it is impossible to justify the principles of deduction. In concluding I will defend the thesis that we can justify induction by deduction, and deduction by induction. Along the way I will show how we can understand deduction and induction as normative theories, and I will argue that there are only hypothetical, but no categorical, imperatives.
On Bell's local causality in local classical and quantum theory
https://cast.itunes.uni-muenchen.de/clips/9ySl5790ZU/vod/high_quality.mp4
Gábor Hofer-Szabó (Budapest) gives a talk at the MCMP Colloquium (9 April, 2014) titled "On Bell's local causality in local classical and quantum theory". Abstract: This paper aims to give a clear-cut definition of Bell's notion of local causality. Having provided a framework, called local physical theory, which integrates probabilistic and spatiotemporal concepts, we formulate the notion of local causality and relate it to other locality and causality concepts. Then we compare Bell's local causality with Reichenbach's Common Cause Principle and relate both to the Bell inequalities. We find a nice parallelism: both local causality and the Common Cause Principle are more general notions than what is captured by the Bell inequalities. Namely, the Bell inequalities can be derived neither from local causality nor from a common cause unless the local physical theory is classical or the common cause is commuting, respectively.
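For a concrete point of contact (my illustration, not part of the talk), the CHSH form of the Bell inequality can be checked numerically: every deterministic locally causal model is bounded by 2, while quantum correlations on the singlet state, with suitable measurement angles, reach 2√2.

```python
import math
from itertools import product

def E(a, b):
    """Quantum correlation -cos(a - b) for spin measurements on the singlet state."""
    return -math.cos(a - b)

# Standard CHSH angle choices maximizing the quantum value.
a, a_, b, b_ = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S_quantum = abs(E(a, b) - E(a, b_) + E(a_, b) + E(a_, b_))

# Exhaustive check of the classical bound: every deterministic assignment of
# outcomes +/-1 to the two settings on each side gives S <= 2.
S_classical = max(abs(A0 * B0 - A0 * B1 + A1 * B0 + A1 * B1)
                  for A0, A1, B0, B1 in product((-1, 1), repeat=4))
print(S_classical, S_quantum)  # 2 versus 2*sqrt(2) ~ 2.828
```

The gap between the two values is the sense in which the Bell inequalities constrain locally causal classical theories but not quantum theory.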
On Mathematical Explanation of Physical Facts
https://cast.itunes.uni-muenchen.de/clips/6fU0gjliDf/vod/high_quality.mp4
Joseph Berkovitz (Toronto) gives a talk at the MCMP Colloquium (13 February, 2014) titled "On Mathematical Explanation of Physical Facts". Abstract: Modern physics is highly mathematical, and this may suggest that mathematics is bound to play some role in explaining the physical reality. Yet, there is an ongoing controversy about the prospects of mathematical explanations of physical facts and the nature of such explanations. A popular view has it that mathematics provides a rich and indispensable language for describing the physical reality but could not play any role in explaining physical facts. Even more prevalent is the view that physical facts are to be sharply distinguished from mathematical facts. Indeed, both sides of the debate seem to hold this view. Accordingly, the idea that mathematical facts could explain physical facts seems particularly puzzling: how could facts about abstract, non-physical entities possibly explain physical facts? In this paper, I challenge these common views. I argue that (1) in addition to its descriptive role, mathematics plays a constitutive role in modern physics: some general, fundamental features of the physical reality, as reflected by modern physics, are essentially mathematical; and that (2) this constitutive role is the source of mathematical explanations of physical facts. On the basis of this argument, I suggest an account of mathematical explanation of physical facts. I conclude by comparing this account to other existing accounts of mathematical explanations of physical facts.
The epistemic division of labour revisited
https://cast.itunes.uni-muenchen.de/clips/X0UNvn16hb/vod/high_quality.mp4
Johanna Thoma (Toronto) gives a talk at the MCMP Colloquium (6 February, 2014) titled "The epistemic division of labour revisited". Abstract: Scientists differ in the ways they approach their work. Some are happy to follow in the footsteps of others, and continue with work that has proven fruitful in the past. Others like to explore novel approaches. It is tempting to think that herein lies an epistemic division of labour conducive to overall scientific progress: The latter, explorer-type scientists, point the way to fruitful areas of research, and the former, extractor-type scientists, more fully explore those areas. And indeed, it has now long been acknowledged that the social structure of science can play an important epistemic role. Still, philosophers of science have so far failed to produce a model that demonstrates the epistemic benefits of such division of labour. In particular, Weisberg and Muldoon’s (2009) attempt, while introducing an important new type of model, suggests that it would be best if all scientists were explorer-types. I argue that this is due to implausible modeling choices, and present an alternative agent-based ‘epistemic landscape’ model which succeeds at showing the alleged epistemic rewards from division of labour, with one restriction. Division of labour is only beneficial when scientists are not too inflexible in their choice of new research topic, and too ignorant of work that is different from their own. In fact, my model suggests that the more flexible and informed scientists are, the more beneficial is division of labour.
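The general shape of an epistemic-landscape model can be sketched in a few lines (a deliberately minimal toy of my own, not Thoma's or Weisberg and Muldoon's actual models): patches carry epistemic significance, explorer-types jump to novel patches, extractor-types work near the best patch found so far, and the measured quantity is how much total significance the community uncovers.

```python
import random

random.seed(2)
N = 1000  # patches, i.e. possible research approaches
landscape = [random.random() for _ in range(N)]  # epistemic significance per patch

def run(n_explorers, n_extractors, steps=50):
    """Total significance uncovered by a community of the given composition."""
    visited = {random.randrange(N)}
    for _ in range(steps):
        for _ in range(n_explorers):      # explorers: try novel random patches
            visited.add(random.randrange(N))
        best = max(visited, key=lambda i: landscape[i])
        for _ in range(n_extractors):     # extractors: refine near the best patch
            visited.add(min(N - 1, max(0, best + random.choice((-1, 1)))))
    return sum(landscape[i] for i in visited)

mixed = run(n_explorers=5, n_extractors=5)
extractors_only = run(n_explorers=0, n_extractors=10)
print(mixed, extractors_only)  # the mixed community uncovers more in this toy
```

In this crude setting the benefit of mixing comes entirely from coverage; the talk's point is that in richer models the benefit depends on how flexible and informed the agents are.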
Theory convergence in approaches to quantum gravity?
https://cast.itunes.uni-muenchen.de/clips/aqJCDyOZEk/vod/high_quality.mp4
Johannes Thürigen (Max-Planck Institute for Gravitational Physics) gives a talk at the MCMP Colloquium (15 January, 2014) titled "Theory convergence in approaches to quantum gravity?". Abstract: Theories in (empirical) science can be considered epistemically justified not only by their empirical content but also by their systematization power and uniformity. In the light of these concepts we present an analysis of the basic structure and intertheoretic relations of some approaches to quantum gravity, each starting from quite different assumptions: loop quantum gravity, spin foams, causal dynamical triangulations, Regge calculus and group field theory. The aim of this analysis is to critically discuss an argument of physicists working on quantum gravity, stating that there is some kind of convergence of the mentioned approaches which (at least partially) justifies them. Such an argument would be of high relevance, since neither the precise relation to the established theories (and thus to the phenomena described by those) nor the derivation of original phenomena might be achievable in the foreseeable future, leaving uniformity as the only epistemological criterion in their favor. We find that intertheoretic relations can be found mainly at the level of the conceptual framework of the theories, rather than in the actual dynamical laws. Therefore a weaker notion of theory relation is needed. The recent concept of theory crystallization is a good candidate, and we analyze to what extent the approaches to quantum gravity meet its conditions.

Duration: 01:02:18

Chaos Beyond the Butterfly Effect: The Poison Pill of Structural Model Error
https://cast.itunes.uni-muenchen.de/clips/ZMmK2p6r9z/vod/high_quality.mp4
Roman Frigg (LSE) gives a talk at the CAS Research Focus Series „Reduction and Emergence“ (13 November, 2013) titled "Chaos Beyond the Butterfly Effect: The Poison Pill of Structural Model Error" (host: Stephan Hartmann (MCMP/LMU)). Abstract: The sensitive dependence on initial conditions associated with chaotic models, the so-called "Butterfly Effect", imposes limitations on the models’ predictive power. These limitations have been widely recognized and extensively discussed. In this lecture, Roman Frigg will draw attention to an additional, so far under-appreciated problem, namely structural model error (SME). If a nonlinear model has even the slightest SME, then its ability to generate useful predictions is lost. This puts us in a worse epistemic situation: while we can mitigate the butterfly effect by making probabilistic predictions, this route is foreclosed in the case of SME. Roman Frigg will discuss in what way this problem affects actual modeling projects, in particular in the context of making predictions about the local effects of climate change.
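The phenomenon the abstract points to can be demonstrated with a toy dynamical system. This is only an illustrative sketch, not the models from the talk: the logistic map, the perturbation term, and the tolerance are invented for the example. Unlike the butterfly effect, the two trajectories here start from the *same* initial condition; it is the small structural difference between model and truth that chaos amplifies until the forecast is useless.

```python
def true_dynamics(x):
    # "Truth": the logistic map at r = 4, a standard chaotic system.
    return 4.0 * x * (1.0 - x)

def flawed_model(x, eps=0.001):
    # A model with structural error: a slightly different functional form,
    # not merely an uncertain initial condition or parameter.
    # (The extra factor and epsilon are illustrative, not from the talk.)
    return 4.0 * x * (1.0 - x) * (1.0 - eps * x)

def divergence_time(x0=0.3, tol=0.1, max_steps=500):
    # Steps until model and truth, started from the SAME initial condition,
    # disagree by more than tol.
    a = b = x0
    for t in range(max_steps):
        if abs(a - b) > tol:
            return t
        a, b = true_dynamics(a), flawed_model(b)
    return max_steps
```

With these illustrative numbers the two trajectories separate after a few dozen iterations at most, even though the structural error is of order 0.1%; and since every run of the flawed model carries the same error, moving to an ensemble of runs does not repair it, which is the sense in which probabilistic forecasting does not help against SME.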
Duration: 00:55:43

Inductive logic for rich languages
https://cast.itunes.uni-muenchen.de/clips/TvBwOCkadT/vod/high_quality.mp4
Jan-Willem Romeijn (Groningen) gives a talk at the MCMP Colloquium (19 December, 2013) titled "Inductive logic for rich languages". Abstract: I present an extension of the language of inductive logic with a formally precise notion of statistical hypothesis. I argue that this enhances the expressivity of inductive logic and provides an intuitive understanding of several Carnapian systems, e.g., systems that accommodate analogical predictions. The paper is set up as a guided tour passing a number of important sites: Carnapian inductive logic, de Finetti's representation theorem, Gaifman and Snir's often overlooked notion of a rich language, von Mises' Kollektivs, Bayesian statistical inference, the convergence theorems for it, and analogical predictions. All of these will be integrated into a single coherent story.

Duration: 01:00:04

String Theory and the Scientific Method
https://cast.itunes.uni-muenchen.de/clips/WDKQPQSP9e/vod/high_quality.mp4
Richard Dawid (Vienna) gives a talk at the CAS Research Focus Series „Reduction and Emergence“ (13 November, 2013) titled "String Theory and the Scientific Method" (host: Stephan Hartmann (MCMP/LMU)). Abstract: For the last thirty years, string theory has played a highly influential role in fundamental physics without having found empirical confirmation. The presentation will analyze the reasons for the high degree of trust many physicists have developed in a theory that, by classical standards of theory assessment, would have to be called an unconfirmed speculation. It will be argued that the case of string theory suggests a new perspective on our understanding of theory confirmation in general. In the last part of the talk, some implications for the scientific realism debate and the question of reduction in science will also be addressed.

Duration: 00:54:35

Cross-Level Linkages in Neurobiology
https://cast.itunes.uni-muenchen.de/clips/XgRjiAz4O0/vod/high_quality.mp4
Patricia S. Churchland (San Diego) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "Cross-Level Linkages in Neurobiology". Abstract: In neurobiology, an important strategy is to link high-level properties, such as spatial knowledge or impulse control, to specific macro structures and to signature neuronal activity in parts of those structures. At a deeper level, the goal is to understand that neuronal signature in the context of regional microanatomy, neuropharmacology, and basic neuronal physiology. Where possible, the companion goal is to link the structural changes during learning to gene expression. In some conditions, such as Williams syndrome, the deficits can be linked to deletions of specific genes that alter brain development. Thus techniques ranging from genetics to cellular recording to imaging to behavioral analysis are used to converge on a function, with a view to explaining its neurobiological basis.

Duration: 00:49:25

Emergence and Explanation
https://cast.itunes.uni-muenchen.de/clips/XTn5mNH60U/vod/high_quality.mp4
Peter Wyss (Oxford) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "Emergence and Explanation". Abstract: The notion of emergence has acquired explanatory power in relation to the mind–body problem. It seems that we explain something when we say that consciousness emerges from brain function, or that mind emerges from matter. I criticise this (puzzling) intuition, and hence 'epistemic emergence'. I argue that 'explanations' in terms of emergence are incoherent at worst, and at best merely make explicit our ignorance. Some explanatory punch can be rescued only if emergence is ontologised. I sketch such an approach. Although more substantial than many of the current discussions, it gets an explanatory grip on two central features of emergence, viz. novelty and irreducibility.

Duration: 00:28:50

Scarecrow’s Brain and Homunculi: Neurobiological Reductionism as Ensoulment-Objectification Process Seen Through Anthropological Lenses
https://cast.itunes.uni-muenchen.de/clips/aG9kOzc5d1/vod/high_quality.mp4
Marko Zivkovic (Alberta) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "Scarecrow’s Brain and Homunculi: Neurobiological Reductionism as Ensoulment-Objectification Process Seen Through Anthropological Lenses". Abstract: Using Scarecrow’s brain to think through folk psychology’s encounter with neurobiological reductionism, I will try to reframe the debates among Dennett, Bennett & Hacker, Searle and the Churchlands in terms of the anthropological understandings of “folk psychology” developed by Alfred Gell, Gregory Schrempp’s mythological analysis of homunculism, and Michael Polanyi’s conceptualization of the from-to nature of all knowing. My premise is that all the positions taken on the issue of neurobiological reduction are variants of the ensoulment-objectification processes systematically examined in Gell’s distributed personality theory.

Duration: 00:33:08

The Completion of Logical Empiricism: Hempel's Pragmatic Turn
https://cast.itunes.uni-muenchen.de/clips/UZDaMNEyXI/vod/high_quality.mp4
Gereon Wolters (Konstanz) gives a talk at the MCMP Colloquium (22 October, 2014) titled "The Completion of Logical Empiricism: Hempel's Pragmatic Turn". Abstract: For most of his life Carl Gustav Hempel (1905-1997) subscribed to the Carnapian variant of logical empiricism. According to Rudolf Carnap (1891-1970), philosophy of science is the "rational reconstruction" (syntactic and/or semantic) of basic methodological concepts like probability, explanation, confirmation, and so on. Practically unnoticed by the philosophical community, Hempel later gave up this approach and developed an "explanatory-normative methodology" (E-N methodology). Decisive for this change was the work of Thomas S. Kuhn (1922-1996), whom Hempel had first met at Stanford in 1963/64. Hempel interpreted this pragmatic turn as a return to the variant of logical empiricism associated with Otto Neurath (1882-1945).

Duration: 00:36:38

Heterogeneity and Emergence in the Social Sciences
https://cast.itunes.uni-muenchen.de/clips/ehgUVn5yIi/vod/high_quality.mp4
Frederik Willemarck (Birkbeck) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "Heterogeneity and Emergence in the Social Sciences". Abstract: In my paper, I focus on the relationship between collective properties (pertaining to collectives) and individualistic properties (pertaining to individual persons) in the social sciences. I argue that there is a sub-class of collective properties—I call them heterogeneous (collective) properties—which fail to reduce to, and to supervene on, individualistic properties. In addition, I show that heterogeneous properties have real causal power, at least in the sense that they are capable of influencing events at the individualistic level. I thus conclude that heterogeneous properties should be interpreted as emergent properties in the social context.

Duration: 00:30:28

How Can One and the Same Thing be Subject to Different Theories? On the Proper Logic for Non-Reductive Monism
https://cast.itunes.uni-muenchen.de/clips/p5MMw1MPTY/vod/high_quality.mp4
Thomas Müller (LMU) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "How Can One and the Same Thing be Subject to Different Theories? On the Proper Logic for Non-Reductive Monism". Abstract: The aim of this paper is to shed light on a neglected issue in the field of intertheoretic relations: how is it that properties belonging to different theories apply to one and the same thing? What does that teach us about the notion of being one and the same thing, and what could an adequate formal representation of sameness look like? What about the controversial thesis of constitution as identity that seems to be required for a monistic (e.g., physicalistic) metaphysics? By discussing a simple example—physical and biological properties applying to a cat—we argue that the standard logical resources of predicate or quantified modal logic are inadequate for the task. We finally describe case-intensional first-order logic, which provides an adequate formal framework for non-reductive monism.

Duration: 00:39:52

Technical Aspects of Reduction and Multiple Realizability
https://cast.itunes.uni-muenchen.de/clips/A7CaTS2p0v/vod/high_quality.mp4
Sebastian Lutz (MCMP/LMU) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "Technical Aspects of Reduction and Multiple Realizability". Abstract: In my talk, I will suggest conceptualizations of reducibility, supervenience and non-reductive physicalism within type theory and model theory. The conception of reduction is Nagelian in spirit but logically significantly different, and one of the suggested conceptions of non-reductive physicalism captures Fodor’s claims about disjunctive properties without relying on his problematic assumptions about natural kinds.

Duration: 00:30:52

"Reversed Reduction" in Gibbsian Statistical Mechanics
https://cast.itunes.uni-muenchen.de/clips/SQctkhBrUM/vod/high_quality.mp4
Ronnie Hermens (Groningen) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "'Reversed Reduction' in Gibbsian Statistical Mechanics". Abstract: Statistical mechanics is the branch of physics which uses probability theory to describe and explain the macroscopic behavior of many-particle systems in terms of the mechanical behavior of their constituents. Part of this macroscopic behavior is independently captured by thermodynamics. In this talk I will lay out some difficulties with the purported reduction of thermodynamics to Gibbsian statistical mechanics. The explanatory power of the use of probabilities will be evaluated with respect to the interpretation of probability adopted. I will argue that a consistent reading of statistical mechanics requires that probabilities are motivated by thermodynamics rather than providing an explanation for it.

Duration: 00:32:21

Holography and the Emergence of Gravity
https://cast.itunes.uni-muenchen.de/clips/2b7PRj6XAF/vod/high_quality.mp4
Dennis Dieks (Utrecht), Jeroen van Dongen (Amsterdam) and Sebastian de Haro (Amsterdam) give a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "Holography and the Emergence of Gravity". Abstract: ‘Holographic’ relations between theories have become a main theme in quantum gravity research: a theory without gravity is in some way equivalent to a gravitational theory with an extra dimension. ‘t Hooft first proposed holography for evaporating black holes in 1993; “AdS/CFT” duality is a more recent holographic topic of study. Very recently, Verlinde has proposed that even Newton’s law of gravitation can be related holographically to a thermodynamics of information on screens. We discuss theory reduction and spacetime emergence in these scenarios: in what sense are these theories equivalent to, or reducible to, each other, and when is spacetime emergent?

Duration: 00:40:09

Novelty and autonomy as bases for, or alternatives to, a conception of emergence in physics
https://cast.itunes.uni-muenchen.de/clips/plO73l1qiQ/vod/high_quality.mp4
Karen Crowther (Sydney) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "Novelty and autonomy as bases for, or alternatives to, a conception of emergence in physics". Abstract: An effective theory in physics is one that is supposed to apply only at a given length (or energy) scale; the framework of effective field theory (EFT) describes a ‘tower’ of theories, each applying at a different length scale, where each ‘level’ up is a shorter-scale theory. A subtlety regarding the use and necessity of EFTs means that any conception of emergence tied to reduction is irrelevant here, failing to capture the relations of interest in these real physical cases. A positive conception of emergence, based on the novelty and autonomy of the ‘levels’, is developed by considering other physical examples, including critical phenomena, symmetry breaking and hydrodynamics, in addition to EFT more generally.

Duration: 00:26:12

Theory Reduction in Physics: A Model-Based, Dynamical Systems Approach
https://cast.itunes.uni-muenchen.de/clips/s5MOr9os29/vod/high_quality.mp4
Joshua Rosaler (Pittsburgh) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "Theory Reduction in Physics: A Model-Based, Dynamical Systems Approach". Abstract: I elaborate an approach to reduction in physics that is distinct from the Nagelian and limit-based approaches that have been discussed most widely in the philosophical literature. This approach, which I call ‘Dynamical Systems (DS) Reduction’, is intended to apply to the reduction of theories whose models can be formulated as dynamical systems, which is the case for most physical theories, including classical mechanics, classical field theory, quantum mechanics and quantum field theory. After setting out the basic elements of DS reduction, I compare this approach with the limit-based and Nagelian approaches, arguing in each case that the DS approach does better. In particular, I highlight a number of significant parallels between the DS and Nagelian approaches, specifically relating to their use of special correspondences between theories (most commonly called ‘bridge laws’ in Nagelian approaches) to identify those elements of the low-level theory that emulate the behavior of certain elements of the high-level theory. Despite these similarities, I argue that DS reduction, in its use of such correspondences (which I call ‘bridge maps’), does not face the ambiguities or difficulties often associated with Nagelian bridge laws: in particular, it avoids ambiguities as to whether these correspondences are to be regarded as conventions or as empirically substantive claims, and it addresses concerns about multiple realisability.

Duration: 00:33:14

The Topology of Intertheoretic Reduction
https://cast.itunes.uni-muenchen.de/clips/JCjJm7CFjq/vod/high_quality.mp4
Samuel C. Fletcher (Irvine) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "The Topology of Intertheoretic Reduction". Abstract: The standard accounts of reductive limiting relations between theories tend to focus on the limits of particular laws instead of limits of models. To bring limiting relations in line with the modern semantical view of theories, I propose topologizing the space of models of a potential reductive theory pair. I stress that justifying why a particular topology is appropriate for a given reduction relation is crucial, as it may perform much of the work in demonstrating a particular reduction’s success or failure. To illustrate, I consider the reduction of general relativity to (geometrized) Newtonian gravitation.
Date: Tue, 28 Jan 2014 07:31:42 +0000 · Duration: 00:28:31
Inter-theoretic relations: The Brønsted-Lowry theory of acids and microphysics
https://cast.itunes.uni-muenchen.de/clips/gNOQRXVBSe/vod/high_quality.mp4
Alexandru Manafu (Paris) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "Inter-theoretic relations: The Brønsted-Lowry theory of acids and microphysics". Abstract: This paper examines the relations between the Brønsted-Lowry theory of acids and the underlying microphysical theories. It argues that these relations are complex and somewhat messy. They do not lend wholesale support to any of the philosophical theories of reduction or emergence. However, they do support particular aspects from both sides. The paper fleshes out what these aspects are, and draws some general philosophical lessons. One of these is that acidity is a sui generis chemical property, which is made possible by processes at the lower level, but which can be adequately defined only at a higher level.
Date: Tue, 28 Jan 2014 06:31:11 +0000 · Duration: 00:32:26
An Explication of Emergence
https://cast.itunes.uni-muenchen.de/clips/2EracOj4sM/vod/high_quality.mp4
Elanor Taylor (Iowa) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "An Explication of Emergence". Abstract: There is a consensus in contemporary philosophy that debates about emergence are confused and messy. In this paper I offer a unified explication of emergence, and argue that this explication can cut through the confusion evident in discussions of emergence. I argue that the best way to understand the concept of emergence is as the unavailability of a certain kind of scientific explanation for an observer or observers. According to this perspectival account of emergence, emergence (the unavailability of a certain kind of scientific explanation for an observer or observers) can track a range of different metaphysical and epistemic relations.
Date: Tue, 28 Jan 2014 05:24:22 +0000 · Duration: 00:27:47
Reduction in Economics: Causality and Intentionality in the Microfoundations of Macroeconomics
https://cast.itunes.uni-muenchen.de/clips/O2gbDYRis2/vod/high_quality.mp4
Kevin D. Hoover (Duke University) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "Reduction in Economics: Causality and Intentionality in the Microfoundations of Macroeconomics". Abstract: In many sciences—physical, but also biology, neuroscience, and other life sciences—one object of reductionism is to purge intentionality from the fundamental basis of both explanations and the explanatory target. The scientifically relevant level—ontologically and epistemologically—is thought to lie deeper than the level of ordinary human interactions. In the material and living world, the more familiar is the less fundamental. In contrast, the economic world of day-to-day life—the world of market interactions—appears to be the relevant level. Macroeconomics is thought to provide an account that is above, not below or behind, ordinary economic decision-making. An advantage of a macroeconomic account is that it is possible to employ causal analysis of the economy as a whole, analogous to the causal analysis of physical systems. The fear of many economists is that such analyses are untethered to ordinary economic decision-making. The object of reductionism in economics—the so-called microfoundations of macroeconomics—is to adequately ground or replace higher-level causal analysis with an analysis of the day-to-day interactions of people. The object is not to purge intentionality, but to reclaim it. The paper will attempt to understand the key issues surrounding the microfoundations of macroeconomics from a perspectival-realist standpoint that elucidates the relationship between economists’ methodological preference for microfoundations and their need for macroeconomic analysis—that is, between economists’ respect for the intentional nature of economic life and the need for a causal analysis of the economy. The paper favors metaphysical humility and methodological pragmatism.
Date: Tue, 28 Jan 2014 04:23:48 +0000 · Duration: 00:57:57
Agent-based models as mixed-level: lessons from E. coli
https://cast.itunes.uni-muenchen.de/clips/Z1Y1oLNiu1/vod/high_quality.mp4
Bert Baumgaertner (Idaho) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "Agent-based models as mixed-level: lessons from E. coli". Abstract: We argue that agent-based models (ABMs) are better conceived of as multi- or intra-level models. Consequently, they are neither reductionistic nor emergent in the explanations they provide. We argue for this by first contrasting them with analytical models, which tend to focus on macro-level entities or properties. We then use an example of an ABM of group selection from the biological sciences which is thought to be purely individualistic. However, we argue that the model is misconceived. The macro-level properties required to make the model work are not straightforwardly given by the composition of the individuals in the group.
Date: Tue, 28 Jan 2014 03:04:22 +0000 · Duration: 00:27:19
From Dressed Electrons to Quasiparticles: The Emergence of Emergent Entities in Quantum Field Theory
https://cast.itunes.uni-muenchen.de/clips/HjIz3i2mxN/vod/high_quality.mp4
Alexander Blum (Berlin) and Christian Joas (LMU) give a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "From Dressed Electrons to Quasiparticles: The Emergence of Emergent Entities in Quantum Field Theory". Abstract: The development of renormalization group techniques and effective field theories in the 1970s has led to a major reinterpretation of the renormalization program originally formulated in the late 1940s within quantum electrodynamics (QED). A more gradual shift in its interpretation, however, occurred already in the early-to-mid-1950s when renormalization techniques were transferred to solid-state and nuclear physics and gave rise to the notion of effective or quasi-particles, emergent entities that are not to be found in the original, microscopic description of the theory. We study how the methods of QED, when applied in different contexts, gave rise to this ontological reinterpretation.
Date: Tue, 28 Jan 2014 02:03:25 +0000 · Duration: 00:32:54
The Physics of Ontological Emergence
https://cast.itunes.uni-muenchen.de/clips/v7j1RGFhkH/vod/high_quality.mp4
Margaret Morrison (Toronto) gives a talk at the MCMP conference "Reduction and Emergence in the Sciences" (14-16 November, 2013) titled "The Physics of Ontological Emergence". Abstract: Emergent phenomena are typically described as those that cannot be reduced, explained, or predicted from their microphysical base. However, this characterization can be fully satisfied on purely epistemological grounds, leaving open the possibility that emergence may simply point to a gap in our knowledge of these phenomena. By contrast, Anderson’s (1972) claim that the whole is not only greater than but very “different from” its parts implies a strong ontological dimension to emergence, one that requires us to explain how macro properties characteristic of emergence can be ontologically distinct from the micro properties from which they emerge. This is partly explained by using RG methods to show how the ‘universal’ characteristics of emergent phenomena are insensitive to the Hamiltonian(s) governing the microphysics. But this is not wholly sufficient since it is possible to claim that the independence simply reflects the fact that different ‘levels’ are appropriate when explaining physical behavior, e.g. we needn’t appeal to micro properties in explaining the macro behavior of fluids. The paper attempts a resolution to the problem of ontological independence by showing how a closer examination of RG methods can provide a way of explicating the relation between micro and macro properties, a relation that satisfies the requirements for ontological emergence in physics.
Date: Tue, 28 Jan 2014 01:02:29 +0000 · Duration: 00:51:01
Reduction and Emergence in Physics
https://cast.itunes.uni-muenchen.de/clips/DDxNzVLCte/vod/high_quality.mp4
Sebastian Lutz (MCMP/LMU) and Karim Thébault (MCMP/LMU) give a talk at the CAS Research Focus Series „Reduction and Emergence“ (13 November, 2013) titled "Reduction and Emergence in Physics" (host: Stephan Hartmann (MCMP/LMU)). Abstract: Matter is composed of small elementary particles whose behavior is predicted very accurately by modern physics. This has led to the suggestion that the fundamental theories of physics are ‘theories of everything’, since in principle they describe the evolution of all matter in the universe. But does the behavior of molecules, magnets and proteins really reduce to that of quarks, gluons and electrons? It often rather seems that the phenomena that occur at larger scales and with more complex systems are genuinely novel and hence emergent. We will examine these philosophical questions and draw philosophical morals with regard to the nature of modeling from the debate.
Date: Fri, 21 Feb 2014 01:51:59 +0000 · Duration: 00:41:37
To map or not to map: or, how to represent auditory space
https://cast.itunes.uni-muenchen.de/clips/d8XuqdPdzK/vod/high_quality.mp4
Benedikt Grothe (LMU) gives a talk at the MCMP Colloquium (30 October, 2013) titled "To map or not to map: or, how to represent auditory space". Abstract: More than 20 years ago, Walter Heiligenberg stated in a famous review that “Wherever we find behavioral responses guided by continual modulations of a certain stimulus variable, we seem to find an ordered representation of this variable within neuronal maps.” (Annu Rev Neurosci, 1991). While this concept may seem trivial for example in the visual system, where space is mapped directly onto the retina, auditory space is not represented at the receptor surface, and it is thought to be represented instead by a computed map from binaural disparity cues. Indeed, the ordered representation of auditory space computed from binaural disparities via a labeled-line code as described beautifully by Konishi, Knudsen and colleagues in the barn owl’s tectum served as perhaps the ultimate test case for Cartesian maps as a general solution for sensory coding of relevant cues. However, for mammals only crude “maps” representing large spatial areas at the single neuron level are found in its analog, the superior colliculus, and thus present a conceptual dilemma. Recent evidence indicates a dynamic population code for binaural disparities in mammals, which is incompatible with the concept of a fixed map of auditory space. Moreover, our most recent results show that self-regulated adaptation in the binaural detector neurons leads to drastic misjudgments of spatial position of sounds under predictable circumstances (Stange et al., 2013, Nat Neurosci) indicating relative rather than absolute spatial coding. Hence, computed maps seem to be only one possible solution for representing sensory information in the brain.
Date: Wed, 05 Mar 2014 01:53:24 +0000 · Duration: 00:50:54
Completeness, Categoricity, and Dismissal
https://cast.itunes.uni-muenchen.de/clips/xWlMaKf3Za/vod/high_quality.mp4
Michael Stöltzner (South Carolina) gives a talk at the MCMP Colloquium (18 July, 2013) titled "Completeness, Categoricity, and Dismissal". Abstract: In 1939, von Neumann panned Carnap's “totally naïve, simplistic views on the issue of ‘completeness’ of the axiomatics of mathematics (‘categoricity’)” and expressed his surprise that philosophers were attracted by them. Taking up recent scholarship on Carnap’s work on axiomatics around 1930 and on von Neumann’s axiomatization of quantum physics, I argue that although von Neumann continued to cherish completeness and categoricity as regulative principles, he became increasingly aware how difficult they were to achieve in quantum logic. Moreover, in von Neumann’s (and Hilbert’s) use, the axiomatic method was never committed to the logical universalism of Carnap’s approach. This episode carries, to my mind, some lessons for the role of mathematics and logic in the context of the axiomatic method, especially if one considers the latter as the core vehicle of mathematical philosophy.
Date: Sun, 03 Nov 2013 13:07:21 +0000 · Duration: 01:02:58
Evolving Perceptual Categories
https://cast.itunes.uni-muenchen.de/clips/gXw8BnSh6N/vod/high_quality.mp4
Cailin O'Connor (UCI) gives a talk at the MCMP Colloquium (15 July, 2013) titled "Evolving Perceptual Categories". Abstract: Do perceptual categories--green, cool, sweet--accurately track features of the real world? If not, are there systematic ways in which perceptual categories fail to latch onto real world structure? Attempts to answer these questions have persistently led to a further question. Given that human beings can only observe the world through the lens of our perceptual systems, how is it possible to know whether and in what ways perceptual categories are veridical? In this talk, I use tools from evolutionary game theory to attempt to gain traction on this problem. In particular, I employ signaling games to model perceptual signaling and elucidate how and why perceptual categories may or may not track real world structure. Jager (2007) introduced sim-max games, a variation of the standard signaling game where the states of the world are modeled as bearing similarity relationships to one another. This added structure is manifested in payoffs that reward approximate coordination between the sender and receiver as well as perfect coordination. This altered payoff structure appropriately models many situations in which perceptual signals evolve. Jager (2007) showed that in the long run actors in sim-max games evolve strategies that categorize similar states of the world together and dissimilar states of the world separately. When applied to perception, these results would seem to indicate that perceptual categories are natural or veridical in that similar real world objects should be expected to evolve to be part of the same perceptual category. However, this conclusion is not merited when one takes into account how similarity is built into these models. Similarity is manifested in the payoff structure alone. What this means, as I will argue, is that one should expect real world states where the same actions are successful to be categorized together perceptually, and real world states where they are not to be categorized separately. In other words, perceptual categories should be expected to track real world structure inasmuch as payoff to organisms tracks real world structure. As I will argue, this conclusion should not lead to a strong anti-realist stance with regard to perceptual categories. Whenever organism payoff is systematically related to the natural structure of the world, perceptual categories should be systematically related to this structure as well. What this means is that the relationship between perceptual categories and real world structure may be subtle and complex.
Date: Sun, 03 Nov 2013 13:06:39 +0000 · Duration: 00:44:24
Reason-based rationalization
https://cast.itunes.uni-muenchen.de/clips/1SNfeAxbIU/vod/high_quality.mp4
Franz Dietrich (Paris) gives a talk at the MCMP Colloquium (26 June, 2013) titled "Reason-based rationalization". Abstract: In economics and other social sciences, choice behaviour is usually explained (or rationalized) by means of fixed preferences which the agent seeks to satisfy. This model of choice not only conflicts with observed behaviour, but also fails to account for important aspects such as the relevance of the choice context and the possibility of preference change. These problems stem from a failure to (i) address the reasons underlying choices and preferences, and (ii) distinguish between the modeller's and the agent's perception of the decision problem. We introduce a new way to rationalize an agent's choice behaviour. A 'reason-based' rationalization explains behaviour in terms of the motivationally salient properties of the options and/or the context. Such rationalizations allow us to explain several non-standard choice patterns, model the agent's subjective perception of alternatives, and predict choices in novel, not yet observed contexts. We show how behaviour can reveal which properties are motivationally salient. We determine the behavioural implications of two kinds of context-dependence: context-regarding motivation (i.e., concern for the context) and context-variant motivation (i.e., psychological instability).
A Model-Based Epistemology of Measurement
https://cast.itunes.uni-muenchen.de/clips/Tcw7QB7x7r/vod/high_quality.mp4
Eran Tal (Bielefeld) gives a talk at the MCMP Colloquium (12 June, 2013) titled "A Model-Based Epistemology of Measurement". Abstract: The epistemology of measurement is an interdisciplinary area of research concerned with the conditions under which measurement and standardization methods produce knowledge, the nature, scope, and limits of this knowledge and the sources of its reliability. A primary goal of such studies is to better understand the ways in which theoretical and statistical assumptions about a measurement process influence the content and quality of its outcomes. Such assumptions often involve idealizations, that is, intentional distortions of aspects of the measuring instrument, measured object and environment that appear to threaten the accuracy and objectivity of measurement. Here I argue that the opposite is the case: idealization is a necessary precondition for obtaining accurate and objective measurement outcomes. A measurement outcome, I submit, is a value range assigned to a parameter in a model in a way that allows the model to coherently predict the final states (‘indications’) of a process. Idealizations are necessary for identifying the measured parameter with a particular object, for distinguishing genuine effects from errors and for comparing measurement outcomes to each other. These claims are exemplified with a study of the contemporary evaluation and comparison of atomic clocks across national metrological laboratories. Building on these insights, I conclude by highlighting the promise held by model-based approaches for further research in the epistemology of measurement.
Separating Truth from Its Idealization
https://cast.itunes.uni-muenchen.de/clips/RymFYKgaEY/vod/high_quality.mp4
Paul Teller (UC Davis) gives a talk at the MCMP Colloquium (18 April, 2013) titled "Separating Truth from Its Idealization". Abstract: Science never succeeds in providing representations that are both perfectly precise and completely accurate. Instead science constructs models that are always in some ways inexact – imprecise, not perfectly accurate, or both. If this goes for the results of science, how much more should we expect it to hold for human knowledge generally! I explore this expectation for the project of modeling what it is for a statement to be true. The familiar model of characterizing truth in terms of predicating a precisely delimited property of a precisely delimited referent succeeds famously in characterizing semantic structure, but falters with questions about application to the world because we rarely, if ever, succeed in perfectly determinately picking out properties and referents. I sketch an alternative model-building approach that takes advantage of the ubiquitous occurrence of imprecision: For an imprecise statement to be true is for its precise “semantic alter-ego”, though false, to function as a truth.
From Shannon's Axiomatic Approach to a New Sense of Biological Information
https://cast.itunes.uni-muenchen.de/clips/xatAptx9iF/vod/high_quality.mp4
Omri Tal (CPNSS/LSE) gives a talk at the MCMP Colloquium (17 April, 2013) titled "From Shannon's Axiomatic Approach to a New Sense of Biological Information". Abstract: Shannon famously remarked that a single concept of information could not satisfactorily account for the numerous possible applications of the general field of communication theory. Recent interest in assessing the ‘population signal’ from genetic samples has mainly focused on empirical results. I employ some basic principles from Shannon’s work on information theory (Shannon 1948) to develop a measure of information for extracting ‘population structure’ from genetic data. This sense of information is somewhat less abstract than entropy or Kolmogorov Complexity and is utility-oriented. Specifically, given a collection of genotypes sampled from known multiple populations I would like to quantify the potential for correct classification of genotypes of unknown origin. Motivated by Shannon's axiomatic approach in deriving a unique information measure for communication, I first identify a set of intuitively justifiable criteria that any such quantitative information measure should satisfy. I will show that standard information-theoretic concepts such as mutual information or relative entropy cannot satisfactorily account for this sense of information, necessitating a decision-theoretic approach.
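How mutual information can come apart from classification-oriented information is easy to illustrate (the joint distribution and the Bayes-accuracy contrast below are my own construction, not the measure proposed in the talk):

```python
import math

# Hypothetical joint distribution: joint[p][g] = P(population = p, genotype = g).
joint = {"P1": {"g1": 0.4, "g2": 0.1},
         "P2": {"g1": 0.1, "g2": 0.4}}

pops = list(joint)
genos = list(joint["P1"])
p_pop = {p: sum(joint[p].values()) for p in pops}
p_gen = {g: sum(joint[p][g] for p in pops) for g in genos}

# Mutual information I(Pop; Gen) in bits.
mi = sum(joint[p][g] * math.log2(joint[p][g] / (p_pop[p] * p_gen[g]))
         for p in pops for g in genos if joint[p][g] > 0)

# Bayes accuracy: assign each genotype to its most probable source population.
accuracy = sum(max(joint[p][g] for p in pops) for g in genos)
```

Here mi is about 0.28 bits while the Bayes classifier is right 80% of the time; the two quantities constrain each other only loosely (Fano-style bounds), which is one reason a classification-oriented, decision-theoretic measure may be needed.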
Making sense of multiple climate models' projections
https://cast.itunes.uni-muenchen.de/clips/S6SWEEDjYp/vod/high_quality.mp4
Claudia Tebaldi (Climate Central & NCAR) gives a talk at the 6th Munich-Sydney-Tilburg Conference on "Models and Decisions" (10-12 April, 2013) titled "Making sense of multiple climate models' projections". Abstract: In the last decade or so the climate change research community has adopted multi-model ensemble projections as the standard paradigm for the characterization of future climate changes. Why multiple models, and how we reconcile and synthesize -- or fail to -- their different projections, even under the same scenarios of future greenhouse gas emissions, will be the themes of my talk. Multi-model ensembles are fundamental to exploring an important source of uncertainty, that of model structural assumptions. Different models have different strengths and weaknesses, and how we use observations to diagnose those strengths and weaknesses, and then how we translate model performance into a measure of model reliability, are currently open research questions. The inter-dependencies among models, the existence of common errors and biases, are also a challenge to the interpretation of statistics from multi-model output. All this constitutes an interesting research field in the abstract, whose most current directions I will try to sketch in my talk, but it is also critical to understand in the course of utilizing model output for practical purposes, to inform policy and decision making for adaptation and mitigation.
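One elementary way to turn model performance into a synthesis of projections, given here purely as an illustration of the reliability-weighting idea (the numbers, model names, and the Gaussian weighting rule are hypothetical, not Tebaldi's method):

```python
import math

# Hypothetical projections (warming in K) and historical errors per model.
projections = {"modelA": 2.1, "modelB": 3.0, "modelC": 2.6}
hist_error = {"modelA": 0.2, "modelB": 0.8, "modelC": 0.4}

# Weight decays with squared error against observations (assumed rule).
raw = {m: math.exp(-hist_error[m] ** 2) for m in projections}
total = sum(raw.values())
weights = {m: raw[m] / total for m in raw}

# Skill-weighted ensemble mean versus the plain multi-model mean.
weighted_mean = sum(weights[m] * projections[m] for m in projections)
plain_mean = sum(projections.values()) / len(projections)
```

Weighting pulls the ensemble mean toward the historically better models; inter-model dependence and shared biases, which the abstract flags, are exactly what such simple weighting ignores.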
Rationality and the Bayesian Paradigm
https://cast.itunes.uni-muenchen.de/clips/v56n4oluOb/vod/high_quality.mp4
Itzhak Gilboa (HEC) gives a talk at the 6th Munich-Sydney-Tilburg Conference on "Models and Decisions" (10-12 April, 2013) titled "Rationality and the Bayesian Paradigm". Abstract: It is claimed that rationality does not imply Bayesianism. We first define what is meant by the two terms, so that the statement is not tautologically false. Two notions of rationality are discussed, and related to two main approaches to statistical inference. This is followed by a brief survey of the arguments against the definition of rationality by Savage's axioms, as well as some alternative approaches to decision making.
Cooperation and (structural) Rationality
https://cast.itunes.uni-muenchen.de/clips/0G7aPYHj4d/vod/high_quality.mp4
Julian Nida-Rümelin (LMU) gives a talk at the 6th Munich-Sydney-Tilburg Conference on "Models and Decisions" (10-12 April, 2013) titled "Cooperation and (structural) Rationality". Abstract: Cooperation remains a challenge for the theory of rationality: rational agents should not cooperate in one-shot prisoner's dilemmas. But they do, it seems. There is a reason why mainstream rational choice theory is at odds with cooperative agency: rational action is thought to be consequentialist, but this is wrong. If we give up consequentialism and adopt a structural account of rationality, the problem resolves, as will be shown. In the second part of my lecture I shall show that structural rationality can be combined with Bayesianism, contrary to what one may expect. And finally I shall discuss some philosophical implications of structural rationality.
Idealization, Prediction, Difference-Making
https://cast.itunes.uni-muenchen.de/clips/T2WZHFRbfE/vod/high_quality.mp4
Michael Strevens (NYU) gives a talk at the 6th Munich-Sydney-Tilburg Conference on "Models and Decisions" (10-12 April, 2013) titled "Idealization, Prediction, Difference-Making". Abstract: Every model leaves out or distorts some factors that are causally connected to its target phenomena – the phenomena that it seeks to predict or explain. If we want to make predictions, and we want to base decisions on those predictions, what is it safe to omit or to simplify, and what ought a causal model to capture fully and correctly? A schematic answer: the factors that matter are those that make a difference to the target phenomena. There are several ways to understand the notion of difference-making. Which are the most useful to the forecaster, to the decision-maker? This paper advances a view.
Evaluating Risky Prospects: The Distribution View
https://cast.itunes.uni-muenchen.de/clips/F3VSUJSQae/vod/high_quality.mp4
Luc Bovens (LSE) gives a talk at the 6th Munich-Sydney-Tilburg Conference on "Models and Decisions" (10-12 April, 2013) titled "Evaluating Risky Prospects: The Distribution View". Abstract: Policy analysts need to rank policies with risky outcomes. Such policies can be thought of as prospects. A prospect is a matrix of utilities. On the rows we list the people who are affected by the policy. In the columns we list alternative states of the world and specify a probability distribution over the states. I provide a taxonomy of various ex ante and ex post distributional concerns that enter into such policy evaluations and construct a general method that reflects these concerns, integrates the ex ante and ex post calculus, and generates orderings over policies. I show that Parfit’s Priority View is a special case of the Distribution View.
Theoretical Terms, Ramsey Sentences and Structural Realism
https://cast.itunes.uni-muenchen.de/clips/kICbZH1Ghk/vod/high_quality.mp4
John Worrall (LSE) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "Theoretical Terms, Ramsey Sentences and Structural Realism".
The Criteria for the Empirical Significance of Terms
https://cast.itunes.uni-muenchen.de/clips/OzGG91yHWH/vod/high_quality.mp4
Sebastian Lutz (LMU/MCMP) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "The Criteria for the Empirical Significance of Terms".
Typicality in Statistical Physics and Dynamical Systems Theory
https://cast.itunes.uni-muenchen.de/clips/LclrPbdnJy/vod/high_quality.mp4
Charlotte Werndl (LSE) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "Typicality in Statistical Physics and Dynamical Systems Theory".
Causal-descriptivism Revisited
https://cast.itunes.uni-muenchen.de/clips/4g0GZjFyub/vod/high_quality.mp4
Stathis Psillos (Athens) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "Causal-descriptivism Revisited".
Leibniz Equivalence
https://cast.itunes.uni-muenchen.de/clips/9Urb3KpuWW/vod/high_quality.mp4
Jeffrey Ketland (Oxford) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "Leibniz Equivalence".
Theoretical Terms and Induction
https://cast.itunes.uni-muenchen.de/clips/M8XWsfnAH4/vod/high_quality.mp4
Hannes Leitgeb (LMU/MCMP) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "Theoretical Terms and Induction".
The epsilon-reconstruction of theories and scientific structuralism
https://cast.itunes.uni-muenchen.de/clips/oSQdZg7Ohn/vod/high_quality.mp4
Georg Schiemer (LMU/MCMP) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "The epsilon-reconstruction of theories and scientific structuralism".
Definition, elimination and introduction of theoretical terms
https://cast.itunes.uni-muenchen.de/clips/W7SLnAjJPK/vod/high_quality.mp4
Gauvain Leconte (Paris) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "Definition, elimination and introduction of theoretical terms".
Implicitly defining mathematical terms
https://cast.itunes.uni-muenchen.de/clips/jGDbtEQ5W3/vod/high_quality.mp4
Demetra Christopoulou (Patras) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "Implicitly defining mathematical terms".
Causality and Theoretical Terms in Physics
https://cast.itunes.uni-muenchen.de/clips/uyLPjfuK2B/vod/high_quality.mp4
C. Ulises Moulines (LMU) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "Causality and Theoretical Terms in Physics".
Theoretical Terms, Ideal Objects and Zalta's Abstract Objects Theory
https://cast.itunes.uni-muenchen.de/clips/1sP6Caoehl/vod/high_quality.mp4
Xavier de Donato (USC) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "Theoretical Terms, Ideal Objects and Zalta's Abstract Objects Theory".
Avoiding Reification
https://cast.itunes.uni-muenchen.de/clips/hX2SvyZLSI/vod/high_quality.mp4
Michele Ginammi (Pisa) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "Avoiding Reification".
Descriptivism about Theoretical Concepts Implies Ramsification or (Poincarean) Conventionalism
https://cast.itunes.uni-muenchen.de/clips/gSkebASMHR/vod/high_quality.mp4
Holger Andreas (MCMP/LMU) gives a talk at the conference on "The Analysis of Theoretical Terms" (3-5 April, 2013) titled "Descriptivism about Theoretical Concepts Implies Ramsification or (Poincarean) Conventionalism".
How Almost Everything in Space-time Theory Is Illuminated by Simple Particle Physics: The Neglected Case of Massive Scalar Gravity
https://cast.itunes.uni-muenchen.de/clips/uZcbg1d1rR/vod/high_quality.mp4
J. Brian Pitts (Cambridge) gives a talk at the MCMP Colloquium (6 February, 2013) titled "How Almost Everything in Space-time Theory Is Illuminated by Simple Particle Physics: The Neglected Case of Massive Scalar Gravity". Abstract: Both particle physics from the 1920s-30s and the 1890s Seeliger-Neumann modification of Newtonian gravity suggest considering a "mass term," an additional algebraic term in the gravitational potential. The "graviton mass" gives gravity a finite range. The smooth massless limit implies underdetermination. In 1914 Nordström generalized Newtonian gravity to fit Special Relativity. Why not do to Nordström what Seeliger and Neumann did to Newton? Einstein started down this path in setting up a (faulty!) analogy for his cosmological constant Λ. Scalar gravities, though not empirically viable since the 1919 bending-of-light observations, provide a useful test bed for tensor theories like General Relativity. Massive scalar gravity, though not completed in a timely way, sheds philosophical light on most issues in contemporary and 20th-century space-time theory. A mass term shrinks the symmetry group to that of Special Relativity and violates Einstein's principles (general covariance, general relativity, equivalence and Mach) in empirically small but conceptually large ways. Geometry is a poor guide to massive scalar gravities in comparison with detailed study of the field equation or Lagrangian. Matter sees a conformally flat metric, because gravity distorts volumes while leaving the speed of light alone, but gravity sees the whole flat metric due to the mass term. Largely with Poincaré (pace Eddington), one can contemplate a "true" flat geometry differing from what material rods and clocks disclose. But questions about "true" geometry need no answer and tend to block inquiry. Presumptively one should expect analogous results for the tensor (massive spin-2) case modifying Einstein's equations.
A case to the contrary was made only in 1970-72: an apparently fatal dilemma involving either instability or empirical falsification appeared. But dark-energy measurements since 1999 cast some doubt on General Relativity (massless spin-2) at long distances. Recent calculations (2000s, some from 2010) show that instability can be avoided and that empirical falsification likely can be as well, making massive spin-2 gravity a serious rival to GR. Particle physics can let philosophers proportion belief to evidence over time, rather than suffering from unconceived alternatives.
Published: Thu, 18 Apr 2019 23:57:40 +0000 · Speaker: J. Brian Pitts (Cambridge) · Series: Colloquium Mathematical Philosophy · Duration: 00:59:05
On the Conception of Fundamentality of Time-Asymmetries in Physics
https://cast.itunes.uni-muenchen.de/clips/MwXPpVX12A/vod/high_quality.mp4
Daniel Wohlfarth (Bonn) gives a talk at the MCMP Colloquium (30 January, 2013) titled "On the Conception of Fundamentality of Time-Asymmetries in Physics". Abstract: The goal of my talk is to argue for two connected proposals. First, I shall show that a new conceptual understanding of the term 'fundamentality' - in the context of time-asymmetries - is applicable to cosmology, and in fact shows that classical and semi-classical cosmology should be understood as time-asymmetric theories.
Second, I will show that the proposed conceptual understanding of 'fundamentality', applied to cosmological models with a hyperbolically curved spacetime structure, provides a new understanding of the origin of the (quantum) thermodynamic time-asymmetry. On the proposed understanding, a 'quantum version' of the second law can be formulated. This version is explicitly time-asymmetric (entropy decreasing with decreasing time coordinates and vice versa). Moreover, the physical effectiveness of the time-asymmetry will be based on the Einstein equations and additional calculations in QFT. Therefore, the physical effectiveness of the time-asymmetry is independent of an ontic interpretation of 'entropy' itself. The whole account is located in the setting of semi-classical quantum cosmology (without an attempt to quantize gravity) and depends on the definability of cosmic time coordinates.
Published: Thu, 18 Apr 2019 23:54:57 +0000 · Speaker: Daniel Wohlfarth (Bonn) · Series: Colloquium Mathematical Philosophy · Duration: 00:50:05
Simplicity and Measurability in Science
https://cast.itunes.uni-muenchen.de/clips/d0d9X3S0Vi/vod/high_quality.mp4
Luigi Scorzato (Roskilde) gives a talk at the MCMP Colloquium (16 January, 2013) titled "Simplicity and Measurability in Science". Abstract: Simple assumptions represent a decisive reason to prefer one theory to another in everyday scientific praxis.
But this praxis has little philosophical justification, since there exist many notions of simplicity, and those that can be defined precisely depend strongly on the language in which the theory is formulated. Moreover, according to a common general argument, the simplicity of a theory is always trivial in a suitably chosen language. However, this "trivialization argument" is always either applied to toy models of scientific theories or applied with little regard for the empirical content of the theory. In this paper I show that the trivialization argument fails when one considers realistic theories and requires their empirical content to be preserved. In fact, the concepts that enable a very simple formulation are not, in general, necessarily measurable. Moreover, the inspection of a theory describing a chaotic billiard shows that precisely those concepts that naturally make the theory extremely simple are provably not measurable. This suggests that, whenever a theory possesses sufficiently complex consequences, the constraint of measurability prevents overly simple formulations in any language. In this paper I propose a way to introduce the constraint of measurability into the formulation of a scientific theory in such a way that the notion of simplicity acquires a general and sufficiently precise meaning. I argue that this explains why scientists often regard their assessments of simplicity as largely unambiguous.
Published: Thu, 18 Apr 2019 23:52:29 +0000 · Speaker: Luigi Scorzato (Roskilde) · Series: Colloquium Mathematical Philosophy · Duration: 00:42:02