Abstracts of Contributed Papers

3rd Congress of the Association for the Philosophy of Mathematical Practice (APMP)

 

Book of Abstracts of Contributed Papers

 


 

Moreno Andreatta (IRCAM-CNRS-UPMC France)

Exploring the “mathemusical” dynamics: some theoretical and philosophical aspects of a musically driven mathematical practice

Despite the increasing number of articles and books devoted to the relationship between music and mathematics, few attempts have been made to grasp the conceptual novelty of “mathemusical” research from the perspective of mathematical practice. Far from reducing to a simple application of mathematics to music, the “mathemusical” approach relies on a fruitful feedback loop between the two disciplines, which may be described as follows: the point of departure is a musical problem which is first formalized and then generalized, leading to new mathematical theorems (or to new versions of already known mathematical results). These theorems are then applied to music, providing new and powerful tools for music theorists, composers and analysts – going well beyond the initial musical problem.

I will illustrate this feedback using several examples of such “mathemusical” problems. These include the construction of rhythmic tiling canons, which are related to the open Fuglede (or spectral) conjecture (Andreatta, 2011; Caure, PhD thesis); the classification of musical structures having the same intervallic content and their link to crystallography-based homometric structures (Caure, PhD thesis; Mandereau et al., 2011); and recent advances in formalizing transformational music theory in a categorical framework (Andreatta, 2013; Popoff et al., 2015, in press).
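
As a minimal illustration of the first example (the rhythm and entry points below are a standard toy case, not one drawn from the cited works), a rhythmic tiling canon of period n is a pair of subsets A, B of Z/n such that the translates of A along B cover every beat of the cycle exactly once:

# Rhythmic tiling canon: the inner rhythm A, shifted by each entry point in B,
# covers each beat of the n-beat cycle exactly once (A "+" B = Z/n).
n = 12
A = {0, 1, 2}        # inner rhythm (one voice's onsets)
B = {0, 3, 6, 9}     # outer rhythm (entry points of the voices)

coverage = [(a + b) % n for a in A for b in B]
assert sorted(coverage) == list(range(n))  # every beat is hit exactly once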

I will then discuss some philosophical aspects of such a “mathemusical” dynamics, by focusing on the duality between musical objects and transformations between and upon them. This not only provides an instantiation of Gilles-Gaston Granger’s articulation between the “objectal” and the “operational” dimensions (Granger, 1994) in the musical domain. It also confirms the relevance of the interplay between algebra (as representing the temporal dimension) and geometry (as conceptualizing the spatial component) in musical thinking, as initially suggested by Alain Connes (Boulez et al., 2011).

Combining reflections of mathematicians and philosophers on the phenomenological account of contemporary mathematics (Patras, 2006; Benoist, 2007) with some epistemological orientations on the geometrical character of cognition (Longo, 2006), I will show how to overcome certain limitations of current philosophical assumptions about the relationship between mathematics and music. In this sense, I believe that “mathemusical” research constitutes an interesting case study (Acotto et al., 2011) which may raise new challenges and open new perspectives in the philosophy of mathematical practice (Andreatta, 2010; Andreatta et al. (eds) 2012).

 

Sorin Bangu (Univ. of Bergen)

Mathematical Knowledge - From a Psychological Point of View

Is mathematical knowledge innate? What evidence do we have for this? Can it develop in the absence of a symbolic language? Are there brain structures especially devoted to mathematical cognition?

These questions are versions of one of the most venerable philosophical concerns – how is mathematical knowledge possible? Its roots are in Plato’s dialogues, and virtually all philosophers afterwards (from Descartes and Kant to Quine and Wittgenstein) were fascinated by it. The question remains alive today and, in light of the wealth of evidence collected by the cognitive sciences so far – and unavailable to these illustrious predecessors – invites new, interdisciplinary approaches. Thus, the general goal of my paper is to tackle the above philosophical queries from the perspective of modern early-childhood experimental psychology. I would like to contribute to consolidating a new, emerging direction in the philosophy of mathematics, which, while keeping in sight the traditional concerns of this sub-discipline, aims to engage with them in a scientifically informed manner.

More specifically, the experiments I will focus on have been performed by Gelman, Starkey, Spelke, and Wynn, among others, and are now classics in cognitive psychology. As I will show, they are immediately relevant to venerable philosophical problems – e.g., the origin and nature of mathematical knowledge – and yet largely neglected in the philosophical literature. I discuss these experiments in a fair amount of detail, examining the extent to which they fulfill their main goal: to provide evidence that very young infants possess innate arithmetical abilities, as well as evidence favoring the idea that they have internalized fundamental physical principles such as object persistence and cross-modal perceptual correspondence. I argue that although each type of experiment is persuasive when considered by itself, taken together they seem to be in conflict with each other. I propose an interpretation of these findings which removes this tension.

 

Méven Cadet (Paris 1 Panthéon-Sorbonne)

The Logocentric Predicament: What about a Semantic Theory? A Study of Frege’s Case

In the Grundgesetze der Arithmetik, Frege introduced his so-called ‘logical system’ and used it in an attempt to reduce a considerable part of mathematics to logic. Elsewhere, he detailed the semantic conceptions underlying this task. Frege located logic among other sciences, with its own concepts and its own content. As an important step towards his goal, he called for the elucidation of the link that binds an expression to its content. Only such an elucidation could explain why the derivations carried out between expressions amount to actual proofs of the contents of those expressions, most notably in the case of the fundamental laws of arithmetic and of real analysis.

According to Frege, logical sentences have content: they express the most general thoughts. This feature was once called Frege’s ‘universalism’. Many scholars draw a peculiar consequence from this point, namely that logic holds us captive. This is the logocentric predicament. If logic is inescapable, then it is impossible to reason theoretically about logic. This involves a rejection both of any kind of semantic theory and of any demonstration of meta-theorems. Accordingly, all of Frege’s semantic considerations would simply be elucidatory.

Many philosophers, inspired by the profound interpretation of Dummett, stand opposed to this reading. They hold that nothing, in Frege’s ideas, prevents the theoretical use of semantical notions. Further, they claim that Frege provides us with an informal semantic theory in the Grundgesetze, based on his own semantic conceptions. Of course many years were necessary to reach a formalization of such a theory, and of course the modern version is not exactly the one sketched by Frege – set theory and semantic paradoxes stood in the way – but, according to them, Frege nonetheless showed the right direction.

We therefore observe an important disagreement concerning the history of logic and mathematics. One group holds that Frege’s ideas about logic left no room for a semantic theory, and hence for a large part of what led to modern mathematical logic; while the other maintains that such a theory was already embedded in Frege’s work and that it paved the way to Tarskian semantics.

Various arguments have been made in both directions, involving Frege’s conceptions of judgement, generality and elucidation [Erläuterung]. During this presentation, we shall first give a clear introduction to the different features of the issue. As a second step, we shall focus on the connection between Frege’s elucidations and four important notions: those of function, object, truth and reference. This perspective, we argue, sheds new light on the problem.

 

Davide Crippa (Université Paris Diderot, France)


From irreducible to reducible impossibility theorems: A shift in mathematical practice

The question whether modal notions have a role to play in our understanding of the nature of mathematical objects and mathematical knowledge has become an urgent issue for several positions in the philosophy of mathematics. In this talk, I propose to tackle this problem from the angle of the study of mathematical practice(s). In other words, what I am really looking for is a rational reconstruction of the practising mathematician's modal talk, obtained by considering case studies within well-defined fragments of mathematics.

My starting point in this talk is a thesis advanced by Wilfrid Hodges in two unpublished studies (see bibliographic references). In these works, Hodges contends that from the viewpoint of the working mathematician, modal expressions add nothing to the content of the assertions they modify. Since the mixture of modality and mathematics is paradoxical, the occurrence of modal expressions in mathematical contexts demands an explanation. In his studies, Hodges analyzes a few modern textbooks of algebra and topology in order to parse out occurrences of modals like “must”, “can”, “may”, and so on, in “non-conversational” contexts, namely definitions, axioms, theorems, lemmas, corollaries and exercises.

In my exposition, I shall present ongoing research on theorems about the non-constructibility of geometric problems in Euclid's geometry – I refer, in particular, to the trisection of the angle, the duplication of the cube and the quadrature of the circle. These are both examples of modal statements and targets of well-known proofs. Moreover, they seem particularly prone to Hodges' skepticism: insofar as neither classical nor present-day mathematics apparently possesses the formal resources to handle modal inference rules and prove modal statements, we can legitimately wonder what mathematicians really prove when they claim to prove impossibility theorems.

The first result that I want to stress, which appears to be shared by diverse practices in the history of mathematics, is that a proof of an impossibility statement is usually obtained by eliminating its modal content through a suitable paraphrase into an equivalent, non-modal statement. Subsequently, the non-modal counterpart is proved by relying on (non-modal) logical inferences.
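
A standard modern instance of such a paraphrase (offered here only for orientation, not as part of the talk itself): the modal claim “the 60° angle cannot be trisected with ruler and compass” is replaced by the non-modal, field-theoretic statement that cos 20° has degree 3 over the rationals, since

\[
4\cos^{3}\theta - 3\cos\theta = \cos 3\theta
\quad\Longrightarrow\quad
8c^{3} - 6c - 1 = 0 \ \ \text{for } c = \cos 20^{\circ},
\]

and this cubic is irreducible over \(\mathbb{Q}\), whereas every number constructible by ruler and compass has degree a power of 2 over \(\mathbb{Q}\).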

This hypothesis suggests a further conceptual distinction between reducible and irreducible impossibility statements, which depends on the conceptual and mathematical resources that are available within a given fragment of mathematics.
On the basis of these premisses, I shall advance the thesis that a shift occurred, in 17th-century geometry, in the logical status of statements about the non-constructibility of some elementary geometric problems. In order to ground my claim, I shall discuss certain investigations conducted by John Wallis, James Gregory and G. W. Leibniz on the quadrature of the circle. These case studies will point to the existence of different strategies, developed by early modern mathematicians in order to reduce modal to non-modal statements in geometry, which would eventually lead to the formulation of impossibility theorems about the non-quadrability of figures by prescribed means.

 

Walter Dean (University of Warwick, UK)

Degrees and difficulty

This paper seeks to explore the concept of a mathematical problem, as well as the attendant notions of the reducibility of one problem to another and the degree of difficulty of a problem, in light of the practice of contemporary computability and complexity theory. I will begin by tracing the discussion of these notions to Kolmogorov's (1931) so-called problem interpretation of intuitionistic logic. For simplicity I will concentrate on only one of the notions of problem he discusses – i.e. that of a decision problem understood as a set of natural numbers (possibly as the result of an effective encoding).

On this understanding, Kolmogorov’s notion of a reduction of a problem X to a problem Y is traditionally understood as that of a uniform procedure or construction which allows us to transform any solution to X into a solution for Y. As many of his examples illustrate, this notion appears to enjoy a pre-theoretical status similar to that of an effectively computable function itself. The question thus naturally arises whether it is possible to provide a mathematical analysis of the reducibility of one problem to another which adequately analyzes our understanding of this relation, in the same way in which Church’s thesis is often said to analyze our understanding of effectivity in terms of recursiveness. I will focus on the extent to which various definitions of reducibility offered by Turing (1937), Post (1944), Cook (1971), and Karp (1972) can be seen as playing such a role.
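
As a minimal illustration of one such definition (a textbook-style many-one reduction; the toy problems and function names below are chosen only for brevity), a reduction of X to Y is a total computable f with n in X if and only if f(n) in Y, so that any decision procedure for Y yields one for X:

# Toy many-one reduction: X = {n : n is even}, Y = {n : n is odd}, f(n) = n + 1.

def f(n: int) -> int:
    # the uniform transformation of X-instances into Y-instances
    return n + 1

def decide_Y(n: int) -> bool:
    # an assumed decision procedure ("solution") for Y
    return n % 2 == 1

def decide_X(n: int) -> bool:
    # a decision procedure for X obtained by composing f with the one for Y
    return decide_Y(f(n))

assert all(decide_X(n) == (n % 2 == 0) for n in range(100))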

I will discuss three points as possible first steps towards answering this question:

    1) Although the definitions just mentioned can all be understood as analyzing pre-theoretically salient notions of problem reduction, this notion is itself underspecified along at least two dimensions: (a) what sort of uniformity is required of a method f(x) for transforming solutions of X into solutions of Y; (b) what does it mean for such a method to have access to a solution for X? The definitions of the different technical notions of reduction studied in computability and complexity theory – e.g. Turing, 1-1, many-1, truth-table, Cook, and Karp reducibility – all arise from different ways of answering these questions.


    2) Suppose that ≤R is the ordering on P(N) induced by a reducibility notion R of this sort. Since ≤R will typically be reflexive and transitive, we can define an equivalence relation on P(N) by setting X ≡R Y if and only if X ≤R Y and Y ≤R X. It is then customary to speak of the equivalence class [X]R of a given problem X as its degree of difficulty, and of Y as being more difficult than X if X ≤R Y but not conversely. I will argue that the use of such terminology is indicative of the fact that the methods of computability and complexity theory are instances of measurement in the traditional sense of Krantz et al. (1971). The question thus arises as to whether the comparison of problem difficulties using this degree-theoretic apparatus tracks any pre-theoretically recognized sense of comparative or absolute mathematical difficulty.

3) I will finally highlight several senses in which the use of degree-theoretic apparatus in complexity theory is of greater significance to mathematical practice outside of logic and theoretical computer science than is its use in computability theory. I will illustrate this point with respect both to the practical exigencies of using computational methods in mathematics and to the naturalness of the problems compared in the two subjects. I will then argue that such considerations lead us naturally to ask whether it is possible to prove representation and uniqueness theorems (again in the sense of Krantz et al. 1971) which might be taken to account for the extent to which different degree notions provide an accurate characterization of our intuitions about problem difficulty and identity. Time permitting, I will also survey the prospects here with respect to our current knowledge of the properties of various degree structures in computability and complexity theory.

 

José Ferreirós (Universidad de Sevilla)

Practice and Ideology behind E. Wigner’s “Unreasonable Effectiveness”

The purpose of this talk is to examine and contextualize the arguments presented by Nobel laureate Eugene Wigner in his famous 1960 paper. The analysis will show that there are important and interesting elements of practice behind his ideas, reflecting the novel experiences of theoretical and mathematical physicists from about 1930 – the role of new mathematical tools and techniques such as group theory and symmetry considerations (among others, Wigner 1931), and of abstract structures such as Hilbert spaces, involving also the relative abandonment of the traditional mathematical tool-kit of physicists. In relation to this new mathematical physics, and to the predictive success of some physical theories, Wigner’s paper has interesting arguments to offer – arguments that I would classify as belonging to the philosophy of physics.

But we shall also see that Wigner’s way of framing his questions is heavily influenced by a “modernist” way of conceiving mathematics. Here, the 1920s Berlin background of Wigner’s education turns out to play a key role (see Szanton 1992), as does his longstanding relation with Janos von Neumann. We shall enter into this aspect of the question in some detail, discussing e.g. the views of Walter Dubislav (1932), one of the few references cited by Wigner in (1960) and an early defender of strict formalism. It is also interesting to compare Wigner’s way of framing his question with earlier and later perspectives, among others the views of Poincaré (1905, chap. V: L'Analyse et la Physique). Such comparisons serve to underscore the ideological component in Wigner’s approach, which fits very well with the discussions of mathematical “modernity” in the 1920s and 30s (Mehrtens 1990, Gray 2008).

 

Karen Francois (Vrije Universiteit Brussel, Belgium) & Brendan Larvor (University of Hertfordshire, UK)

The concept of culture in the study of mathematical practice

In this presentation, we will argue that there is no commonly agreed, unproblematic conception of culture for researchers and students of mathematical practices to use. Rather, there are many imperfect candidates. One reason for this diversity is a tension between the material and ideal aspects of culture that different conceptions manage in different ways.

We will analyse the concept of culture by distinguishing normative conceptions of culture from descriptive or scientific conceptions. Having suggested that this distinction is in general unstable, we will argue that a properly philosophical conception of culture would include the normative/descriptive and material/ideal dyads as dialectical moments.

Based on our analysis of the concept of culture, we then consider the special case of mathematics. From the research field of philosophy of mathematics, we first take an overview of the study of mathematical cultures, and suggest that it is less well developed than the number of books and conferences with the word ‘culture’ in their titles might suggest. Secondly, we turn to the research field of mathematics education, to explore the ‘cultural’ turn in this field and to analyse how the concept of culture is used in this context.

Finally, we will suggest which of the approaches to the concept of culture is most promising for the philosophy of mathematical practices and for mathematics education.

 

Joachim Frans & Bart Van Kerkhove
(Vrije Universiteit Brussel, Belgium)

A contextual approach to the explanatory value of mathematical visualisations

There is a wide variety of visualisations and visual thinking in mathematics. The ubiquity of visualisations in mathematics has been well documented, but it raises several philosophical questions. One might investigate what role this specific kind of argumentation can play in mathematical practice. A possible answer to this question could be that (some of) these visualisations (sometimes) exhibit explanatory power.

Mathematical explanation is a hot topic in philosophy of mathematics. The idea is that mathematical activity is not merely driven by justificatory aims, such as the collection of mathematical truths. For example, in many cases mathematicians will search for alternative proofs of known results in order to find a (better) explanation of the theorem. Only a few authors have tried to explicate the nature of mathematical explanation. The most well-known are Steiner (1978), introducing the notion of characterizing property, and Kitcher (1981), arguing that his unificationist account of explanation captures mathematical explanation as well. The literature remains silent, however, on whether there is a relation between visualisations and explanations, and on what this relation could be. If we strictly follow philosophical treatments of mathematical explanation such as those by Steiner and Kitcher, the presence or absence of visual representations does not have any effect on the explanatory value of a mathematical argument.

It does not seem very controversial to claim that visualisations can and do play a role in evaluating the explanatory value of an argument. We argue that this becomes clearer when we shift our attention to the topic of understanding. Our view will be based upon the work of De Regt and Dieks (2005), who suggest a contextual approach to understanding in physics. They claim that different standards of understanding exist, but that they all share the common idea that understanding requires the ability to draw conclusions about hypothetical changes in the explanatory model without making exact calculations. Their view is contextual, in the sense that it involves both the theoretical qualities of a theory and the skills of a certain scientist. Some philosophers have argued that understanding is a philosophically irrelevant notion, based on the view that the related notions may vary from person to person.

We will argue, nevertheless, that it is crucial to keep the contextual approach since, contrary to objectivist approaches, it succeeds in providing a relation between visualisation and explanation. When dealing with several mathematical theories, the use of visualisation is a very effective way to achieve understanding. The use of visualisations is, however, not indispensable, since there are other ways to reach the same goal. To defend our thesis we will present three kinds of visual reasoning in mathematics, namely the use of diagrams in Euclidean geometry, visual proofs in number theory and commutative diagrams in category theory. We will argue that these visualisations can be seen as tools to achieve understanding. In line with De Regt and Dieks's view, the success of such visualisations is linked both with theoretical aspects of the theory and the visualisation used, and with the specific contextual skills of a (group of) mathematician(s).

 

Elías Fuentes Guillén (University of Salamanca,Spain)

"Bienvenue, Monsieur Brejnev". Weierstrass or the dilemma of revolutionaries

In a letter to Georg Cantor of April 1, 1870, Schwarz remarks on the theorem which nowadays states that an infinite, bounded set of real numbers has a limit point, and which, albeit in different terminology, was employed by Weierstrass in his Berlin lectures ("Suppose that, within the domain of a real magnitude x, one has defined in a certain way another magnitude x', but in such a way that it can take infinitely many values, all of which fall within two definite limits; then, one can prove that within the domain of x there is at least one place a, such that in any arbitrarily small neighborhood of a there are infinitely many values of x'"). Schwarz writes that the theorem was developed further by Weierstrass on the basis of Bolzano's principles, as Hettner's redaction of those lectures would confirm. This paper aims to defend:

    a) that “Weierstrassian” practices do constitute part of what can be called 'purely analytic' investigations of the real continuum, since they belong to a static conception of analysis that would reign from then on.


    b) that the "Bolzano’s principles" found in his first mathematical works (up to 1817) deviate heavily from “Weierstrassian” practices and actually belong neither to a static nor to a dynamic conception of analysis, but to a transitional conception according to which the notion of a variable is not yet syntactic, as it is for Weierstrass and his disciples, but semantic-ontological.


   c) that the static conception of analysis was in a sense the "logical step" after those taken by Euler, the Germanic combinatorial school, Lagrange, Bolzano, Cauchy and others.


Considering the letters of Schwarz and Cantor as evidence of their affiliation with those Weierstrassian practices, one could exclaim, paraphrasing Deleuze and Guattari: "'Bienvenue, Monsieur Weierstrass'. Est-ce encore des révolutionnaires qui parlent à un révolutionnaire, ou un village qui réclame la venue d’un nouveau préfet?" ("'Welcome, Mr Weierstrass.' Is it still revolutionaries speaking to a revolutionary, or a village calling for the arrival of a new prefect?") Both are correct.

 

Juan Luis Gastaldi (IREPH, Paris Ouest Nanterre)

Boole’s Arithmetic of 0 and 1: The mathematical conditions of logic. 

Although explicitly introduced in the early 1920s by Wittgenstein and Post, the contemporary idea behind truth tables as the formal core of propositional logic is commonly attributed to the English mathematician George Boole. Not only are truth tables often called Boolean tables, describing Boolean functions associated with Boolean algebras, but logicians, historians and philosophers, among them Post himself, also refer directly to Boole’s work as the source of this basic formal structure underlying the propositional calculus (cf. Post, 1921; Kneale & Kneale, 1962; Shosky, 1997; Rahman, 2000). However, if the few tabular arrangements of 0 and 1 laid out by Boole might appear to our contemporary eyes as an early form of truth-table calculus in need of further “correction” and “simplification”, they have, when attention is paid to Boole’s actual mathematical constructions, a completely different significance.

Indeed, the developments of what Boole called “Dual Algebra” or “Arithmetic of 0 and 1” in the manuscripts following the publication of The Laws of Thought reveal not only that 0 and 1 are never the formal symbols of truth values but, what is more, that they are actual mathematical objects resulting from an arithmetical interpretation of the symbolical equations of his Algebra of Logic. Furthermore, as the classic work of Hailperin (1981; 1986) has shown, Boole’s Dual Algebra or Arithmetic is a very singular structure that can be understood as a commutative ring with unit having no additive or multiplicative non-zero nilpotents, which is by no means reducible to what would later become known as Boolean Algebra.

Through an analysis of Boole’s own formulations – belonging mainly to his correspondence and unpublished papers – I’ll show that Boole’s divergence from what would become the Boolean mainstream is conscious and deliberate, and endowed with its own coherence. A reconstruction of Boole’s method of evaluation of logical equations in terms of his Arithmetic of 0 and 1 will moreover reveal that, positively considered, Boole’s singular mathematical setting implies an original conception of the relation between logic and mathematics, in which the latter functions as a practical, yet formal condition for the former (by means of the definition of its conditions of interpretability). This conception openly contrasts with the one emerging from later “rectifications” of Boole’s system, in which logic is conceived as dealing with the truth conditions of propositions, upon which mathematics could then be built – a logicist position that starts already with Stanley Jevons.
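
A minimal arithmetical illustration of the point (a modern toy sketch, not a reconstruction of Boole's own Dual Algebra or of his method of evaluation): over the two numbers 0 and 1, ordinary arithmetic already expresses the elective operations, and the idempotence law x² = x, equivalently x(1 − x) = 0, holds identically.

values = (0, 1)

def AND(x, y):
    return x * y            # conjunction as multiplication

def NOT(x):
    return 1 - x            # negation as subtraction from 1

def OR(x, y):
    return x + y - x * y    # inclusive disjunction, kept within {0, 1}

for x in values:
    assert x * x == x           # idempotence: x**2 == x
    assert x * (1 - x) == 0     # equivalently, x(1 - x) = 0
    for y in values:
        assert AND(x, y) in values and OR(x, y) in values and NOT(x) in values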

 

Michele Ginammi (Scuola Normale Superiore, Italy)


Avoiding reification: The heuristic effectiveness of mathematics and the discovery of the omega minus particle

In two recent works (Bangu 2008, Bangu 2012), Sorin Bangu reconstructs and critically examines the reasoning that led to the famous prediction of the Ω− particle by M. Gell-Mann and Y. Ne’eman, in 1962, on the basis of a symmetry classification scheme. According to Bangu, this inference is particularly atypical, since its conclusion is apparently based on the employment of a highly controversial methodological principle — what he calls the “reification principle”.

What is highly controversial in this principle is the fact that it seems to allow us to deduce the existence of a physical entity only on the basis of purely mathematical considerations. As a consequence, the heuristic effectiveness of mathematics in this case seems to confirm Steiner’s (1998) conclusions about the role of mathematics in contemporary physics, i.e., that contemporary physicists draw important consequences about the physical world by relying on purely formal mathematical analogies. In this sense, the applicability of mathematics turns out to be magic, or ‘miraculous’, as Wigner (1960) would have said.

In the paper I am going to present at the Third International Meeting of the Association for the Philosophy of Mathematical Practice, I will offer a different reconstruction of the reasoning which led to the prediction of the Ω− particle. This alternative reconstruction is based on a new account of mathematical representative effectiveness offered in (Ginammi 2015), and it carefully avoids resorting to any reification principle. In particular, the existential prediction of the Ω− particle is shown to be deducible from considerations about the mathematical structure employed to represent the class of spin-3/2 baryons (to which the Ω− belongs), plus some hypotheses about the relation between this mathematical structure and the physical target represented by it. These hypotheses need to be tested, of course; but as such they link the mathematical structure to the empirical world (the reification principle bypasses this link, and this is why it is so problematic), and they offer a justification for the logical step from the mathematical level to empirical existence.

This alternative reconstruction shows that, at least in the case just discussed, we do not need any ‘Pythagorean’ or extra-naturalistic assumption about mathematics in order to clarify its heuristic role. We can therefore bypass Steiner’s difficulties and show that the applicability of mathematics to physics can be accounted for without necessarily resorting to any ‘miraculous’ methodological principle.

 

Eduardo N. Giovannini (CONICET, Argentina)

Hilbert on Geometry, Continuity and Purity

The importance that Hilbert bestowed on ‘purity’ in his early axiomatic investigations is well known, and it has recently been analyzed in two important works (Arana and Mancosu 2012, Hallett 2008). In this context, ‘purity’ is tied to the requirement of the ‘purity of methods of proof’, according to which theorems must be proved, if possible, using means that are suggested by their content (cf. Hallett and Majer 2004, pp. 315–316). A prominent example of this kind of purity inquiry is the (im)possibility of finding a purely projective proof of Desargues’ theorem in the plane, avoiding any kind of spatial assumptions.

Now, it can be argued that purity demands were also operating more generally in Hilbert’s axiomatic construction of Euclidean geometry. To be more precise, a central concern that motivated Hilbert’s axiomatic investigations from very early on was the aim of providing an independent basis for geometry. By proving that one is not required to resort to any kind of numerical assumptions in the construction of a major part of elementary geometry, Hilbert was pursuing the central epistemological goal of showing that geometry should be considered, regarding its foundations, a self-sufficient or autonomous science. Thus, a main goal of Hilbert’s axiomatization was not only to show that geometry should be considered a pure mathematical theory, once it was presented as a formal axiomatic system; he also aimed at showing that in the construction of such an axiomatic system one could proceed purely geometrically, avoiding concepts borrowed from other mathematical disciplines such as arithmetic or analysis.

The aim of this presentation is to analyze the relationship between these purity demands and the way continuity conditions are handled in Hilbert’s axiomatic investigations of the foundations of Euclidean geometry. On the one hand, I will argue that Hilbert’s notable technical breakthrough in his reconstruction of the Euclidean theory of proportions and area, in which no continuity principle – especially the Archimedean axiom – is assumed, is closely tied to purity requirements. On the other hand, I will claim that, at least in part, purity concerns are also behind Hilbert’s postulation of his famous axiom of completeness, as a means to guarantee the full continuity of space and, consequently, the one-to-one correspondence between the elements of his geometrical system and analytic geometry over the real numbers. Finally, I will conclude with a more general discussion of the role played by ‘purity’, as a methodological and epistemological guiding principle, in Hilbert’s axiomatic construction of Euclidean geometry.

 

Emily R. Grosholz (Pennsylvania State University)

Report on ICSU Workshop on Cultures of Mathematical Research Training

On June 14 and 15, 2015, a dozen philosophers of mathematics, mathematicians, mathematics administrators and mathematics educators assembled in Hamburg, Germany to attend a workshop on how to study different cultures of mathematical research and their effect on the training of the next generation of mathematical researchers, with the intention of making mathematical research more effective and fruitful. The workshop was supported by the International Council for Science, the International Union for History and Philosophy of Science, and the International Mathematical Union. We followed the methods of William J. Sutherland et al. (2011), “Methods for Collaboratively Identifying Research Priorities and Emerging Issues in Science and Policy.” Thus, each of us separately drew up a list of ten or so questions we thought raised important issues about the structure of mathematical research and the way students are trained; we asked for feedback from colleagues; we sent our questions in; and the questions were compiled and sorted. We were then asked to evaluate all the collected questions on a scale from one to ten; this resulted in a list of about forty questions. We spent most of the workshop discussing the questions in detail (what do these terms mean? what does it mean for mathematics to be effective? what encourages mathematical creativity? who is invited to do mathematical research? how can these questions be investigated empirically? what effects would these suggestions have on how we think about mathematical invention?). In the group were people concerned with mathematics education in Europe, the United States, Canada, the continent of Africa, India and Brazil. So we brought many perspectives to the discussion, and there was lively debate about the relation between applied and pure mathematics, and about the differences in mathematical cultures around the globe. I would like to present our final list of 17 research questions and my experience of the process, as well as Brendan Larvor’s summary of the debate, in the hope of attracting the interest of other members of APMP in this ongoing project.

 

Rochelle Gutiérrez
(University of Illinois at Urbana-Champaign)

What is Mathematics? Pre-service Secondary Mathematics Teachers’ Perceptions


For decades, secondary mathematics teachers have been provided with opportunities to develop their knowledge of mathematics. Researchers recognize that through an apprenticeship of observation (Lortie, 1975), most teachers teach as they have been taught. As such, teachers need opportunities to “relearn” mathematics so they may teach it differently (Ball, 1988). This push towards developing teachers’ mathematical knowledge for teaching remains prominent today (Ball, Thames, & Phelps, 2008; Davis, 2013), with little regard for a philosophy of mathematical practice. If teachers are required to develop a philosophy, the focus tends to be upon a philosophy of teaching. Yet, without first developing a philosophy of mathematics, individuals are likely to perpetuate a version of mathematics that has been handed to them by their high school teachers or college level mathematics professors (Gold, 2013; Gutiérrez, 2013). In fact, researchers in the philosophy of mathematical practice think deeply about the nature of mathematics and its impact on education (Gowers, 2013; Lockhart, 2009; Restivo, 2007; Skovsmose & Yasukawa, 2004; Walkerdine, 1994; Walshaw, 2004). For example, Bonnie Gold (2013) suggests: “Whatever your attitude toward the philosophy of mathematics, when you teach mathematics, you do in fact take, and teach your students, positions on philosophical issues concerning mathematics” (p. 147). So, what might it look like for high school teachers to grapple with such questions as: What is mathematics? What isn’t mathematics? And, who decides? Is mathematics invented or discovered? How do we know we are doing mathematics? Who is mathematics by and for?

I report on a longitudinal study I conducted with pre-service high school mathematics teachers, wherein they were asked to develop a working definition of mathematics and then to continue to reflect on whether or not they were enacting that version of mathematics in their teaching. All 19 of the teacher candidates were undergraduate students obtaining degrees in mathematics and working towards a minor in education. Three 3-hour seminars and accompanying homework assignments were dedicated to the topic of “what is mathematics?” Rather than develop clear and stable definitions of mathematics, the participants in this study tended to raise additional questions about mathematics: what it is, who is capable of it, and who gets to decide. Participants suggested these questions were critical in helping them address equity in teaching. I address implications for the professional development of public school teachers.

 

Yacin Hamami (Vrije Universiteit Brussel, Belgium)


The Granularity of Proof: McKay’s Proof of Cauchy’s Group Theorem as a Case-Study

Mathematical proofs in mathematical practice come at various levels of detail or, to use a terminology proposed by Thomas Hales (2007), at various levels of granularity. Accordingly, to verify or to understand a mathematical proof often requires one to ‘fill in’ the details, a process taken to its extreme when it comes to the formal verification of mathematical proofs. In this talk, I propose to examine this process by comparing a proof of Cauchy’s group theorem due to McKay at two different levels of granularity. More specifically, I will compare the original proof provided by McKay (1959) – a particularly compact proof which amounts to only 10 lines – with the exact same proof, but this time in a much more detailed version provided by Herstein (1996) in his classic algebra textbook. The question I shall address is: what does the process of ‘filling in’ the details which allows one to pass from McKay’s proof to Herstein’s version consist in? I propose to see the elementary steps of deduction in McKay’s proof as mathematical problems – the problems consisting specifically in providing intermediate steps of deduction. I will then identify two interconnected processes at work in solving these problems, which I shall refer to as problem decomposition and information retrieval. The process of problem decomposition consists in decomposing the initial problems into ‘smaller’ or ‘easier’ ones. We shall see that this requires a form of knowing-how on the part of the agent: the agent should know the relevant proof strategies, techniques or methods (Avigad, 2006). The process of information retrieval consists in retrieving information from background knowledge. We shall see that this requires a form of knowing-that on the part of the agent: the agent should know the relevant definitions, lemmas and theorems. The outcome of this case study is that the process of ‘filling in’ the details is a complex one that requires at least two types of knowledge which, as I shall argue, could be qualified as local or domain-specific. This connects, unsurprisingly, to similar conclusions coming from the philosophical analysis of the formal verification of mathematical proofs (Avigad, 2008).
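
For orientation, the counting argument at the core of McKay's proof can be compressed into a few lines (a paraphrase for readers unfamiliar with it, not McKay's or Herstein's own wording). Let G be a finite group and p a prime dividing |G|, and consider

\[
S = \{(x_1,\dots,x_p) \in G^{p} : x_1 x_2 \cdots x_p = e\}, \qquad |S| = |G|^{\,p-1}.
\]

Cyclic permutation of the coordinates maps S to itself, and its orbits have size 1 or p; the one-element orbits are exactly the constant tuples (x, …, x) with x^p = e. Since p divides |S|, the number of such tuples is divisible by p, and since (e, …, e) is one of them there are at least p of them; hence some x ≠ e satisfies x^p = e, which is Cauchy's theorem.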

 

Ansten Klev (Czech Academy of Sciences, Czech Republic)


The algebraic notion of function

In algebra one distinguishes polynomials (where ’polynomial’ is a noun) from polynomial functions (where ’polynomial’ is an adjective). Polynomials as well as polynomial functions are presented by expressions akin to a_n x^n + · · · + a_1 x + a_0, but they are associated with different criteria of identity: the polynomial p(x) is equal to the polynomial q(x) when they have the same degree and the corresponding coefficients are equal; the polynomial function p(x) is equal to the polynomial function q(x) when they have the same domain and they agree on each element of this domain. With these criteria of identity one sees that, for instance, x^p and x are equal polynomial functions, but different polynomials, over Z/p.
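
A minimal computational check of that example (illustrative only; the prime chosen is arbitrary):

# Over Z/p the polynomials x^p and x are distinct (degrees p and 1),
# yet they determine the same polynomial function, since n^p = n (mod p)
# for every n, by Fermat's little theorem.
p = 7  # any prime

values_of_x_to_the_p = [pow(n, p, p) for n in range(p)]  # n -> n^p mod p
values_of_x          = [n % p for n in range(p)]         # n -> n   mod p

assert values_of_x_to_the_p == values_of_x  # equal as polynomial functions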

In my talk I wish to deal with this notion of polynomial and its relation to the notion of polynomial function, both historically and systematically. The main thesis of the historical part will be that certain uses of the word ‘function’ (or its equivalent in the orthography of other languages), at least since its use in Gauss’s definition of the notion of forms of the second degree, can best be regarded as aiming at polynomials rather than functions, say, in the sense of the calculus. Likewise, when ‘function’ is used in the early papers on what is now known as Galois theory, what the authors have in mind is best regarded as polynomials.

When such great mathematicians could call functions what are in effect not functions, or at least not functions in the sense of the calculus, it is only because the notion of polynomial is so closely related to the notion of polynomial function. In the systematical part of my talk I wish to get a better grip on the relation between these two notions. I must then first distinguish various notions of function. The notion of a function as a set of ordered pairs satisfying functionality is not of interest to us. Of more interest is the notion of function as a “dependent quantity” (cf. Euler’s 1755 definition). Such a quantity is presented by an “analytical expression” involving constant and variable quantities (cf. Euler’s 1748 definition). And here, I think, lies the key to understanding the relation between functions and polynomials. Instead of regarding an analytical expression as expressing a dependent quantity, one may regard it as a mathematical object in itself, as a “formal object,” to use an apt term of Haskell Curry’s. This process of passing from regarding an expression as we ordinarily do, namely as expressing certain things, to regarding it as an object in itself will be recognizable to logicians, who do something very similar when they pass from the notion of proposition to the notion of well-formed formula. The fact that it is applicable to any analytical expression, and not only to those expressing polynomial functions, means that we have obtained a notion that is somewhat broader in its extent than that of polynomial function. We may perhaps call it the algebraic notion of function.

 

Kevin Kuhl (University of Toronto, Canada)


Structuralism and (Meta)Mathematical Disagreement

Structuralism, in the philosophy of mathematics, purports to be an account of the subject matter of mathematics as a developed science. Realist structuralists claim that structures are a sort of abstract entity, and that the language we use to talk about such structures is broadly treated referentially. That is, the structuralist holds that they have a “face-value” treatment of the semantics of mathematical language. In his paper “Structuralism and Isomorphism”, Charles McCarty alleges that the structuralist’s metaphysical and semantic theories entail contradictions when we consider the structure(s) of second-order classical and intuitionistic arithmetic. Due to Dedekind’s isomorphism theorem we know that any “...models of the three second-order arithmetic axioms — the two for successor and the one about induction — whether asserted by classical mathematicians or intuitionists are isomorphic: their component positions stand in arithmetic preserving one-to-one correspondence.” (4) However, classical and intuitionistic arithmetic differ in at least one instance of excluded middle. Given that the two structures are isomorphic, they satisfy the same formulae; thus structuralism is embroiled in a contradiction.
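
In modern notation, the three axioms in question (stated here in their usual textbook form, not as a quotation from McCarty) are

\[
\forall x\, \big(S(x) \neq 0\big), \qquad
\forall x\,\forall y\, \big(S(x) = S(y) \rightarrow x = y\big), \qquad
\forall P\, \Big[\big(P(0) \wedge \forall x\,(P(x) \rightarrow P(S(x)))\big) \rightarrow \forall x\, P(x)\Big],
\]

and Dedekind's theorem states that any two structures satisfying them are isomorphic.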

I provide a formal reconstruction of McCarty’s argument which suggests that it rests on the illicit assumption that the classical and intuitionistic mathematicians rely on a shared notion of logical entailment. This suggests that the structuralist should salvage their position by challenging this assumption. I contend that the most profitable way of challenging it is to maintain that the disagreement between the classical and the intuitionistic mathematician should be construed not as a disagreement about the nature of truth, nor as the claim that the classical mathematician and the intuitionist work in different mathematical universes, but as a disagreement about the entailment relation. As such, I reject McCarty’s further claim that the intuitionist and the classical mathematician do not have different meanings in mind when putting forward their theories. McCarty’s primary rationale here is hermeneutic: we are capable of understanding one another’s meanings, despite potential philosophical disagreements about the nature of logic. However, this conflates natural-language understanding with agreement about logical truth; the two do not coincide.

I conclude by discussing two structuralist strategies for theorizing about this form of mathematical disagreement: an ameliorative strategy and a resolute strategy. The ameliorative strategy is the form of logical relativism devised by Stewart Shapiro (2012). Under this strategy, logical truths are relative to a model theoretic structure. As the classical and intuitionistic mathematicians adopt differing axioms regarding the background structures, they employ differing entailment relations, thus accommodating the differing perspectives. The resolute strategy abandons the relativist approach, and maintains that there is a unique entailment relation that plays a role in determining structure. I argue that both are suitable for providing a structuralist account of this form of metamathematical disagreement, but present a potential worry for the ameliorative strategy: if the ameliorative strategy relies on a model-theoretic background which is predominantly classical, then it appears to slide into the resolute strategy, thus failing to provide any distinct accommodation for the intuitionist mathematician.

 

Javier Legris (IIEP-BAIRES (UBA-CONICET))

Symbolic Knowledge and the Constitution of Objects in Mathematical Practice

G. W. Leibniz introduced around 1684 the idea of symbolic knowledge (cogitatio caeca or cogitatio symbolica) in order to draw a fundamental distinction between forms of cognitive representation. It can be described as knowledge obtained by means of a semiotic structure of some kind. This notion has recently been analyzed (see Krämer 1992 and Esquisabel 2012). One of its features consists in introducing calculi with symbols that lack an intended reference to ‘imaginable’ objects. Although the reference of such symbols can be understood as ‘fictitious’ entities, they play an essential epistemological role in calculi: it is through them that genuinely new knowledge is obtained. A ‘constitutive’ aspect of symbolic knowledge has thus been asserted (see Krämer 1992). An example of this aspect in Leibniz himself could be the nature of infinitesimals as fictions bien fondées (well-founded fictions) introduced in order to solve mathematical problems.

In this paper I discuss this ‘constitutive’ aspect of symbolic knowledge, placing it in the richer context of the theory of signs of C. S. Peirce. According to Peirce’s mature thought, mathematical procedures are mainly iconic. This means that mathematicians, in general, get their results by creating and manipulating diagrams or icons and ‘experimenting with them’. Icons are to be understood as structured complex signs having also a visual and topological component (see inter alia Peirce NEM IV 316, where this idea is applied to deduction in a broad sense). Hence, mathematical procedures presuppose semiosis, that is, the process of formation of signs involving the relation between a sign-vehicle, an object and an interpretant. Furthermore, semiosis should be considered as a social process, where the interpretants consist of the ‘community of inquirers’, as Peirce called it. In this respect, some kind of collective recognition of the mathematical procedures should obtain in the community. On this basis, I will suggest a feasible interpretation of the idea that the objects of mathematical knowledge are semiotically constituted, an interpretation that is expected to be consistent with mathematical practice as well. The ontological consequences of this interpretation will also be discussed.

 

Pietro Milici (University of Palermo, Italy)

A finitist differential extension of Descartes' balance between machines, algebra and geometry

Descartes proposed a balance between geometric constructions and symbolic manipulation with the introduction of suitable ideal machines. In particular, the Cartesian tools were polynomial algebra (analysis) and a class of diagrammatic constructions (synthesis): in this setting only algebraic curves were considered "purely geometrical." This limit was overcome with a general method by Newton and Leibniz, who introduced infinitesimals in the analytical part, while the synthetic perspective gradually lost its importance with respect to the analytic one (geometry became a means of visualization, no longer of construction). Descartes' foundational approach (analysis without infinitary objects and synthesis with diagrammatic constructions) has however been extended beyond the algebraic limits, even though in two different periods. In the late 17th century the synthetic part was extended by "tractional motion" (the construction of transcendental curves with idealized machines), and in the first half of the 20th century the analytic part was extended by "differential algebra" (now a branch of computer algebra).

I claim that it is possible to obtain a new balance between these synthetic and analytic extensions of the Cartesian tools for a class of transcendental problems. In other words, a new convergence of machines, algebra and geometry is possible, one that allows a foundation of (part of) the infinitesimal calculus without the conceptual need of infinity.

The peculiarity of this approach is its attention to the constructive role of geometry, as an idealization of machines, for foundational purposes. This perspective, after the "de-geometrization" of mathematics, is very far from the mainstream discussions of mathematics, especially about foundations. However, even though it has nowadays fallen into oblivion, the problem of defining appropriate canons of construction was very important in the early modern period, and had much influence on the definition of mathematical objects and methods. According to Bos' definition, these are "exactness problems" for geometry.

Such problems about exactness involve philosophical and psychological interpretations, so they are usually considered external to mathematics. However, even without any final answer, I propose a very primitive algorithmic approach to this problem, which I hope to deepen in the future.
From a cognitive perspective, this approach to calculus does not require infinity and, thanks to idealized machines, can be set out with suitable "grounding metaphors" (according to the terminology of Lakoff-Núñez). That concreteness can have useful spin-offs in math education, thanks to the use of both physical and digital artifacts.

 

Madeline Muntersbjorn (University of Toledo, USA)

Geometric Rectification & Algebraic Representation

In his "On the Algebra of Logic: A Contribution to the Philosophy of Notation," Charles Peirce distinguishes between icons, tokens and indices. He notes that geometric diagrams are exemplars of icons insofar as these signs resemble what they represent. Yet there are also algebraic "icons par excellence" which contain "indices of tokens" that resemble patterns of reasoning found at the intersection of logic and mathematics. Peirce’s algebraic icons help us better understand how rectification methods in the 17th C. improved as more mathematicians found better ways to put algebra to work in the solution of geometrical problems. Peirce reminds us that, in the history of mathematics, relations that appear obvious in retrospect would have been impossible to perceive before the cultivation of suitable methods of representation:

As for algebra, the very idea of the art is that it presents formulae which can be manipulated, and that by observing the effects of such manipulation we find properties not to be otherwise discerned. In such manipulation, we are guided by previous discoveries which are embodied in general formulae. These are patterns which we have the right to imitate in our procedure, and are the icons par excellence of algebra. ... no application could be made of such an abstract statement without translating it into a sensible image (CP 3.363).

The emergence of analytic geometry in the first half of the 17th C. and the calculus in the second is no coincidence. Mathematicians who translated classical problems into more modern languages made unifying relationships between problems evident and demonstrated more general solutions. Several European mathematicians at this time believed algebraic representations would help solve a wider variety of these problems but had yet to work out just how. What invisible features of geometric problems become visible when algebraic symbols are used? Do the unifying relationships between geometric problems result from the increased use of algebra or do they exist prior to their expression in algebraic terms? How do algebraic equalities change how we see geometrical entities?

These questions are limiting cases of more general questions: What makes algebra an appropriate language for the natural sciences and formal logics? What is mathematical intuition, and what do we see with our mind's eyes? Are mathematical relationships invented or discovered, or does this distinction contain a false dilemma? We cannot wait until all the historical evidence is in before we consider these general questions, insofar as historical questions pose methodological challenges that require prior commitments to what mathematics is and how it grows. The present historiographical strategy is to consider several solutions to a single problem, namely, the rectification of parabolic segments by Fermat, van Heuraet, Neil(e), Brouncker, Wallis, and Newton. Contrasting their divergent solutions to the same problem allows us to see how algebraic symbolism displays unifying relationships between kinds of problems in statu nascendi and transforms the problem-solving space in ways that made the calculus possible.
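
For orientation, the rectification problem can be restated in present-day notation (a modern gloss, not the seventeenth-century derivations themselves): the arc length of a curve y = f(x) over [0, a] is

\[
s \;=\; \int_0^a \sqrt{1 + \big(f'(x)\big)^2}\; dx .
\]

For the semicubical parabola y^2 = x^3, i.e. f(x) = x^{3/2}, the integrand becomes algebraic and the integral closes in finite terms,

\[
s \;=\; \int_0^a \sqrt{1 + \tfrac{9}{4}x}\; dx \;=\; \frac{8}{27}\Big[\big(1 + \tfrac{9}{4}a\big)^{3/2} - 1\Big],
\]

whereas for the ordinary parabola f(x) = x^2 the same integral produces a logarithmic term, reducing the problem to the quadrature of the hyperbola.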

 

Markus Pantsar (University of Helsinki, Finland)

Metaphors and mathematical concepts

In the philosophy of mathematical practice, the role of metaphorical thinking is a highly interesting question. Many of the uses of metaphors deal with sophisticated mathematical thinking and seem to have mostly heuristic value. But metaphorical thinking can also be seen at a more basic level of mathematics, and as such it appears to be an important element in the development of mathematical understanding. In this talk I propose that metaphorical thinking plays a key role in acquiring some of the most fundamental mathematical concepts. In particular, I study the hypothesis of the “Process -> Object metaphor” (POM), according to which we understand many objects of mathematics, such as sequences and functions, in terms of processes. This is particularly interesting in the case of infinite objects, whose epistemological and ontological status have traditionally been among the most difficult problems in the philosophy of mathematics. I will argue that we reach such objects by thinking about endless processes metaphorically as objects. In order to show the feasibility of the hypothesis, a wide selection of instances of the use of POM will be presented from mathematical practice. The Process -> Object metaphor will also be compared to a well-known alternative metaphorical understanding of infinity, the Basic Metaphor of Infinity (BMI) of Lakoff and Núñez. Compared to BMI, POM appears to have many important strengths. It does not, for example, treat infinity as a special case requiring its own metaphor. Rather, infinity is taken as only one case of applying POM, which is a metaphor ubiquitous in mathematical practice.

 

J. Brian Pitts
(Faculty of Philosophy, University of Cambridge)

Theory Testing, Theory Construction, and Ontology for Space-time: What Theory(s), Logic(s), and Geometry(s) Do We Need?

Proving theorems about the best theory in one’s scientific field with today’s mathematics is a worthy task. But is it broad enough to exhibit the growth of scientific knowledge about space-time? I suggest that it is too narrow in ways relating to theory testing, theory construction, and ontology. Sometimes one needs theories other than the best current one, reasoning looser than deduction, and a few lessons from yesterday’s differential geometry.

Deduction is too narrow for confirmation, as Hempel found. Striving to make all tacit premises explicit can be stultifying scientifically. Bayesianism is a consequence of generalizing logic to shades of gray (Cox’s theorem); it leaves conclusions defeasible.

Bayesianism implies that theory testing generically is comparative. Hence the logic of confirmation forces one to consider rival theories, not just the best current one. If there is no good rival now, one should be sought.

Theory construction in fundamental physics begins with a variational principle with an action (quasi-)invariant under coordinate transformations. To build a variety of theories, one needs a variety of ingredients. An invariant action requires a Lagrangian density that is a scalar density of weight 1, or equivalently a twisted top-form. Whereas scalar densities were generally known, twisted forms are somewhat exotic; thus one now finds the mistaken claim that Lagrangian densities should be top-forms (not twisted), entities that do not exist on non-orientable manifolds. In contrast to modern austerity, classical differential geometry defined a great many less famous entities (pseudoscalars, axial vectors, densities of non-integral weight, etc.), some of which are sometimes important (e.g., in supergravity). Modern geometry could recover that habit to facilitate theory construction.
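
For orientation, and not as a claim specific to the talk, the standard transformation law behind the phrase "scalar density of weight 1" can be displayed explicitly: under a coordinate change $x \to x'$ the Lagrangian density picks up one factor of the Jacobian determinant,

\[
  \tilde{\mathcal{L}}(x') \;=\; \Bigl|\det\Bigl(\tfrac{\partial x^{\mu}}{\partial x'^{\nu}}\Bigr)\Bigr|\,
  \mathcal{L}\bigl(x(x')\bigr),
  \qquad\text{so that}\qquad
  \int \tilde{\mathcal{L}}\,d^{n}x' \;=\; \int \mathcal{L}\,d^{n}x ,
\]

which is exactly what makes the action coordinate-invariant without any appeal to an orientation of the manifold.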

Interpreting a scientific theory involves ontology. Physics distinguishes real existence from mathematical existence, and so places more emphasis on Ockham’s razor for ontology and less emphasis on economy of definition or elegance of proof than does mathematics. Conformal invariance illustrates the issue: many writers follow Weyl in treating conformal invariance as conformal covariance (different volume elements make no difference or a simple difference), instead of T. Y. Thomas, who discussed what can be done without any volume element at all (invariance without surplus structure). These two pictures lead to different definitions of conformal Killing vector fields, the former metric-based, the latter strictly within conformal geometry with a unimodular conformal metric density. Point individuation also provides an illustration: modern geometry introduces primitive point identities and revives Einstein’s hole argument, whereas classical geometry (still used in much of the General Relativity literature) naturally accommodates Einstein’s point-coincidence argument for relational point individuation. Hence a space-time theorist aiming to exhibit the growth of scientific knowledge should consider rival theories rather than just the best, should reason more broadly than deduction, and should recover some classical geometric virtues.

 

Colin Jakob Rittberg (University of Hertfordshire, UK)

Mathematical Pull

In this talk, I show a case in which mathematicians can obtain a new philosophical argument by doing more mathematics. I will draw on arguments that combine two issues in contemporary set theory: undecidability and the pluralism/non-pluralism debate.

Undecidability: there are some questions which some set theorists hold to be fundamental and which are formally unsolvable from our contemporary axiom system for set theory. The Continuum Hypothesis is a well-known example. Hugh Woodin has called this the 'spectre of undecidability'. In recent work, Woodin has argued for a variety of axiom candidates each of which could, when added to our current set-theoretic axioms, 'banish the spectre of undecidability'. The method Woodin proposes to choose between these axiom candidates is to rely on future results in formal set theory. The pluralism/non-pluralism debate is a contemporary debate within set theory about metaphysics. The question is whether there is a unique correct background concept of set or if there are many legitimate set concepts. Peter Koellner connects Woodin's method of choosing between the different axiom candidates to the pluralism/non-pluralism debate. If, as Koellner argues, for one of Woodin's proposed axioms the future results in formal set theory converge in a given sense, then this counts as an argument in favour of non-pluralism. Divergence counts as an argument for pluralism.

I argue that Koellner connects mathematics to the pluralism/non-pluralism debate in such a way that mathematicians can obtain a new philosophical argument by doing more mathematics. In this sense, mathematicians can successfully engage in metaphysical debates by mathematical means; mathematics pulls the metaphysical debate.

 

Davide Rizza (University of East Anglia, UK)

Mathematisation: An elementary analysis

Increasing attention is being paid within the philosophy of mathematics to the role played by mathematical concepts and theories in applications. So far, the investigation of this issue has been restricted by (i) the adoption of a general picture of application as the embedding of an empirical structure into another (e.g. in Bueno and Colyvan 2011 and Pincock 2012) and (ii) an almost exclusive attention to the alleged explanatory role of mathematics in empirical enquiry. In this talk I propose an elementary articulation of a different approach to the application of mathematics, one which must be presupposed by an adequate investigation into the problem of its explanatory role and which, moreover, transcends the reduction of application to a structure-preserving transition.

On my account, the focus of the analysis is on the process of mathematisation, whereby mathematical resources are brought to bear on the articulation and resolution of a family of interrelated empirical problems. I distinguish two successive stages in mathematisation: in the first stage a manifold (I use this term in a loosely Kantian sense here) of empirical features is assigned a formal environment, which makes its features discernible as configurations within a suitable space and allows the refinement of empirical concepts available before mathematisation or the introduction of new, environment-specific concepts not otherwise available. The second stage of mathematisation is triggered by the fact that some of the new concepts pose further problems that may not be solved by an appeal to the initially given formal environment alone. In order to tackle such problems, further mathematical resources must be connected to the original environment: there are several ways of effecting transitions to further mathematical resources, only some of which rely on structure-preservation. I illustrate this general account by looking at the mathematisation of simple majority decisions as it has unfolded over the last forty years. The formal environment assigned to known empirical features of simple majority outcomes is a particular type of directed graph, known as a tournament. In the presence of a tournament, certain general features of majority decisions can be assigned configurations of directed edges (e.g. cycles and paths) that sustain general forms of reasoning and allow one to deploy a methodology for their general study and to prove general results about simple majority elections. Furthermore, it is possible to articulate, in graph-theoretical terms, novel solution concepts (i.e. definitions of a winning set when the majority winner is not unique) which in turn raise problems that cannot be tackled by an appeal to the theory of tournaments alone. I look at the way additional mathematical resources (from logic, complexity theory and probability) are attracted to intervene upon tournaments, and focus on the several ways in which mathematical information is mobilised to solve the newly raised problems. I emphasise, in this connection, how this can be done without relying on structure-preserving mappings. I also suggest how one might identify an explanatory function for mathematics as a special episode within the mathematisation process.
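
As a toy illustration of the tournament environment (my own sketch, with hypothetical names, not an example from the talk), the Python fragment below encodes majority outcomes over four alternatives as a set of directed edges and computes two standard solution concepts: the Condorcet winner, when it exists, and the top cycle, i.e. the set of alternatives from which every other alternative can be reached along majority edges.

    # A tournament: the pair (x, y) means "x beats y by majority".

    def condorcet_winner(alternatives, beats):
        """Return the alternative that beats every other one, or None."""
        for x in alternatives:
            if all((x, y) in beats for y in alternatives if y != x):
                return x
        return None

    def top_cycle(alternatives, beats):
        """Alternatives from which every other alternative is reachable along
        directed majority edges (the top strongly connected component)."""
        def reaches_all(x):
            seen, stack = {x}, [x]
            while stack:
                u = stack.pop()
                for v in alternatives:
                    if v not in seen and (u, v) in beats:
                        seen.add(v)
                        stack.append(v)
            return seen == set(alternatives)
        return {x for x in alternatives if reaches_all(x)}

    # A majority cycle a -> b -> c -> a, with d beaten by everyone.
    alternatives = ["a", "b", "c", "d"]
    beats = {("a", "b"), ("b", "c"), ("c", "a"),
             ("a", "d"), ("b", "d"), ("c", "d")}
    print(condorcet_winner(alternatives, beats))   # None: the cycle blocks a unique winner
    print(sorted(top_cycle(alternatives, beats)))  # ['a', 'b', 'c']

The point of the sketch is only that cycles and reachability – configurations of directed edges – are what the novel solution concepts are defined over.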

 

Luca San Mauro (Scuola Normale Superiore, Italy)

Taking Foundational Programs as Practices themselves: Informal Proofs in Computability

Historically, advocates of a practical turn in the philosophy of mathematics have had roughly two different attitudes towards the classical foundational programs. On the one hand, in line with Lakatos, the so-called “maverick” tradition has defended a strong anti-foundationalism. On the other hand, contemporary philosophers of mathematical practice mostly present their work in a less polemic way, as extending, rather than substituting, the foundationalist tradition. Yet, even under this latter attitude, one should notice that the “working mathematician” – of whom the philosophy of mathematical practice aims to give an account – is rarely considered to be a mathematical logician, and case studies are typically picked from non-foundational areas of mathematics.

Thus, in this talk, I propose to put a long-standing foundational program – namely, the one embedded in Computability Theory – under the lens of a practice-oriented philosophical inquiry. In particular, I aim to make use of this theory in order to tackle a much debated problem, that of determining how informal proofs – i.e., the ones occurring in standard mathematical prose – differ from their formal counterpart (see, for instance, Larvor, 2012). Formulated in the context of Computability, this latter problem is deeply intertwined with the Church-Turing thesis (CTT). Indeed, if accepted, CTT entails that informal descriptions of algorithms (provided, for instance, in plain English with additional mathematical symbols) do correspond to their formal implementations in a given model of computation.
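
A minimal illustration of this correspondence (my own toy example, not drawn from the talk): the plain-English specification "given n, return the least prime number strictly greater than n" is an informal description of an algorithm; under CTT it is taken to determine a formal implementation in any chosen model of computation, for instance the following Python rendering.

    def is_prime(m):
        """True exactly when m is greater than 1 and has no divisor between 2 and m - 1."""
        return m > 1 and all(m % d for d in range(2, m))

    def least_prime_above(n):
        """The least prime number strictly greater than n."""
        m = n + 1
        while not is_prime(m):
            m += 1
        return m

    print(least_prime_above(13))  # 17

The informal description fixes which function is computed while remaining silent about any particular formal model – precisely the kind of reliance on CTT discussed below.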

Furthermore, among researchers in the field, there is a large – and yet philosophically unanalysed – consensus concerning the claim that relying on informal methods (which is the norm in Computability) is just a matter of convenience, since informal definitions point towards formal ones, and that we could theoretically substitute the former with the latter without any significant loss of information. If wholly accepted, such a perspective would assign to Computability a very peculiar characteristic among mathematical theories: that of being a context (and arguably the only one) in which the informal side of the theory is fully reducible to its formal side, and no practical aspect is really philosophically significant.

Nonetheless, I argue that such a view is untenable. In doing so, I first show that CTT has been practically employed, since Post’s seminal work (see Post, 1944), as a device to dismiss a certain amount of formality in the exposition of proofs. Then, by carefully analysing some of the informal methods that arise in Computability, and by focusing in particular on the construction of a simple set, I show that typical informal constructions have to be “structurally conceived”, in the sense that the kinds of objects that are constructed are: 1) not extensionally fixed; 2) independent of any specific formal model. In order to clarify these two latter aspects, I borrow the notion of formalism-freeness from Kennedy (2013), and highlight how this feature, in the case of Computability, has its roots in Felix Klein’s Erlangen Program.

 

Georg Schiemer (University of Vienna, Austria)

Duality and transfer principles:
The rise of model-theoretic methods in projective geometry.

Modern axiomatic geometry based on Hilbert’s Grundlagen der Geometrie (1899) is usually described as model-theoretic in character: theories are understood as theory schemata that implicitly define a number of primitive terms and that can be interpreted in different models or structures. Moreover, starting with Hilbert’s work, metatheoretic results concerning the consistency of axiom systems and the independence of particular axioms have come into the focus of geometric research. These results are also established in a model-theoretic way in work after Hilbert, i.e. by the construction of structures with the relevant geometrical properties.

The present talk investigates the conceptual roots of this model-theoretic approach by looking at several methodological developments in projective geometry between 1810 and 1900. More specifically, the focus here will be on two topics under discussion during that time: (i) the theoretical explanation of the “principle of duality” (discussed in the works of Gergonne, Poncelet, Chasles, and Pasch among others), i.e. the fact that theorems in projective geometry can be dualized; (ii) the use of so-called transfer principles (“Übertragungsprinzipien”), first introduced in Hesse’s work on analytic projective geometry and then applied more generally in Klein’s Erlanger Program (Klein 1872).
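
To fix ideas about (i) – as a standard modern gloss, not a claim about the historical texts themselves – plane projective duality can be read off the symmetry of the incidence relation: a point with homogeneous coordinates $(x_1 : x_2 : x_3)$ lies on the line with coordinates $(u_1 : u_2 : u_3)$ exactly when

\[
  u_1 x_1 + u_2 x_2 + u_3 x_3 \;=\; 0 ,
\]

a condition unchanged by interchanging the roles of point and line coordinates. Hence every incidence theorem yields a dual theorem; for example, "two distinct points lie on exactly one line" dualizes to "two distinct lines meet in exactly one point".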

We will survey these developments in the talk and discuss their respective impact on the gradual implementation of model-theoretic techniques in modern geometry. The aim here will be twofold: First, to assess whether the early contributions to duality and transfer principles considered here can already be described as model-theoretic in character in the modern sense of the term. In the case of duality, our discussion of this issue will be based on a closer examination of two existing explanations of the general principle, namely a transformation-based account and an account based on the axiomatic presentation of projective geometry. The second aim will be to see whether Hilbert’s own approach in Grundlagen was motivated or at least influenced by the previous uses of duality and transfer principles in the practice of projective geometry.

 

Dirk Schlimm (Department of Philosophy, McGill University, Montreal, Canada)

Towards a cognitive and pragmatic account of notations for propositional logic

This paper is part of a larger project that aims at a systematic study of cognitive and pragmatic aspects of mathematical notations. To be able to focus on these aspects without being distracted by semantic issues, it is useful to compare notations that purport to represent the same content. In the present case study, I therefore restrict myself to different symbolic notations for propositional logic (whose syntactic, or proof-theoretic, properties are well known). In particular, I will consider Frege’s Begriffsschrift (1879), the dot-notation introduced by Peano and employed by Whitehead and Russell in Principia Mathematica (1910), the so-called Polish notation of Łukasiewicz (1929), and the now common notation, as it can be found, e.g., for the most part in Hilbert and Ackermann’s Grundzüge der theoretischen Logik (1928) and in contemporary logic textbooks. Notable differences between these notations include the use of a two-dimensional representation, the methods for grouping sub-expressions (parentheses vs. dots vs. Leibniz’s vinculum), and the order in which the connectives and their arguments are written. For the systematic comparison, I shall introduce abstract syntax trees (sometimes also called parsing trees) as canonical representations of propositional formulas and show how to translate between them and each of the other notations. I will argue that the amount of effort with which these translations can be effected gives us some information about the complexity of parsing the various notations, which impacts the cognitive effort needed for understanding them. Moreover, various advantages and disadvantages of the notations, in relation to certain particular aims, will be discussed, and some historical reflections on the trade-offs regarding the use of the notations will be presented.
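
To make the comparison concrete (a sketch of my own, not reproduced from the paper), one can represent a propositional formula as an abstract syntax tree and read off two of the notations discussed here – Łukasiewicz's parenthesis-free Polish notation and the now common infix notation – as two different linearizations of the same tree:

    class Node:
        def __init__(self, symbol, *children):
            self.symbol = symbol        # '->', '&', '~', or a propositional variable
            self.children = children

    POLISH = {"->": "C", "&": "K", "~": "N"}  # Łukasiewicz's operator letters

    def to_polish(node):
        """Prefix rendering: operators precede their arguments, so no grouping signs are needed."""
        if not node.children:
            return node.symbol
        return POLISH[node.symbol] + "".join(to_polish(c) for c in node.children)

    def to_infix(node):
        """Common notation: binary connectives between their arguments, grouped by parentheses."""
        if not node.children:
            return node.symbol
        if len(node.children) == 1:
            return node.symbol + to_infix(node.children[0])
        left, right = node.children
        return "(" + to_infix(left) + " " + node.symbol + " " + to_infix(right) + ")"

    # (p & q) -> p
    formula = Node("->", Node("&", Node("p"), Node("q")), Node("p"))
    print(to_polish(formula))  # CKpqp
    print(to_infix(formula))   # ((p & q) -> p)

The difference between the two renderers – one needs no grouping signs at all, the other must manage parentheses – is a small instance of the parsing effort the paper proposes to measure.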

 

Karl-Heinz Schlote (University of Hildesheim)

Carl Neumann’s view of mathematical physics

Mathematical physics formed a main field of Carl Neumann’s research. He developed a special view of the role of mathematics in physics in the 1860s and 1870s. He formulated it in his inaugural lectures held at the University of Tuebingen in 1865 and at the University of Leipzig in 1869. Starting from a few basic principles that allow one to explain most of the phenomena of a domain of physics, he strove for a mathematical representation of those principles and for a construction of the theory in a mathematically correct way. That view of mathematical physics is analysed in the first part of the lecture. In forming his view of mathematical physics he drew some stimuli from Jacobi’s lectures on analytical mechanics as well as from his education in the mathematical-physical school at the University of Koenigsberg. In the second part it will be shown how this view influenced his research activities, on the one hand, and the development of mathematics and physics at the University of Leipzig, on the other. Neumann demonstrated his ideas about the structure of a theory in mathematical physics for various parts of physics. As an important instance he formed a theory of electrodynamics, which he established in several basic approaches. This gave rise to intensive discussions with H. Helmholtz and other physicists about the right basic principles of the theory. Finally, Neumann’s efforts to improve the necessary mathematical means will be mentioned.

 

Gisele Secco (Univ. Federal do Rio Grande do Sul, Brasil)


On the relevance of the Four-Colour Theorem Proof for thinking about computers in mathematical practices

Philosophical questions concerning the general theme of the use of computers in mathematical proofs received noteworthy attention with the advent of the Four-Colour Theorem proof, presented in a pair of papers by Appel and Haken in 1977. The main reason why this proof provoked a certain commotion in the mathematical community is the indispensable participation of computers in its construction. The philosophical citizenship of the proof was acquired through the publication of a seminal paper by T. Tymoczko (1979), in which the use of computational machinery is conceived as a clear-cut case of the introduction of experimentation into the traditionally a priori domain of mathematical practices. According to Tymoczko’s argument, the philosophical significance of the Four-Colour Theorem proof lies in the revision it forces on the use of some crucial concepts such as theorem, proof and mathematical knowledge. This paper’s strategy is twofold: on the one hand, I shall present an overview of the debates motivated by Tymoczko’s paper, as suggested by D. Prawitz (2008) in the context of his discussion of the relations between proofs and computer programs. On the other hand, I will present an alternative reconstruction of the disputes concerning the proof in question, bringing to the scene some generally neglected voices such as those of G. Kreisel (1977), E. R. Swart (1980) and H. Wang (1981). The aim of this approach is to delimit some questions concerning the ways in which the Four-Colour Theorem proof is still appealed to in the discussions about computers in mathematical practices.

 

Yaroslav D. Sergeyev
(University of Calabria, Italy and Lobachevsky State University of Nizhni Novgorod, Russia)

Methodological and philosophical aspects of a new approach allowing one to work with infinities and infinitesimals numerically

The talk presents a recent methodology allowing one to execute numerical computations with finite, infinite, and infinitesimal numbers on a new type of computer – the Infinity Computer – patented in the EU, USA, and Russia (see [20]). The new approach is based on the principle ‘The whole is greater than the part’ (Euclid’s Common Notion 5), which is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers with a finite number of symbols, as particular cases of a unique framework different from that of non-standard analysis. The new methodology evolves the ideas of Cantor and Levi-Civita in a more applied way and, among other things, introduces new infinite integers that possess both cardinal and ordinal properties, as usual finite numbers do. Its relations to traditional approaches are discussed.

It is emphasized that the philosophical triad – researcher, object of investigation, and tools used to observe the object – existing in such natural sciences as Physics and Chemistry, exists in Mathematics, too. In the natural sciences, the instrument used to observe the object influences the results of observations. The same happens in Mathematics, where numeral systems used to express numbers are among the instruments of observation used by mathematicians. The usage of powerful numeral systems makes it possible to obtain more precise results in Mathematics, in the same way as the usage of a good microscope makes it possible to obtain more precise results in Physics.

A numeral system using a new numeral called grossone is described. It allows one to express infinities and infinitesimals easily, offering rich capabilities for describing mathematical objects, mathematical modelling, and practical computations. The concept of the accuracy of numeral systems is introduced. The accuracy of the new numeral system is compared with that of traditional numeral systems used to work with infinity. Numerous examples are given. The Infinity Calculator, using the Infinity Computer technology, is presented during the talk.
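
As a rough indication of what such numerals look like (my paraphrase of Sergeyev's published notation, not a quotation from the talk), a number in the grossone-based positional system is a finite sum of finite "grossdigits" multiplied by powers of the new infinite unit ①, written with decreasing "grosspowers":

\[
  C \;=\; c_{p_m}\,①^{\,p_m} + \dots + c_{p_1}\,①^{\,p_1} + c_{p_0}\,①^{\,p_0},
  \qquad p_m > \dots > p_1 > p_0 .
\]

A single numeral such as $2.3\,①^{2} + 7\,①^{0} - 4.1\,①^{-1}$ thus records an infinite part, a finite part (the $①^{0}$ term), and an infinitesimal part at once.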

 

Fenner Tanswell (St Andrews, Scotland)

Rigor, proof and mathematical virtues

In “The Raft and the Pyramid” (1980), Sosa rejects both the foundationalist and coherentist approaches to knowledge in favour of a third way: that of virtue epistemology. In this paper I will take the same broad approach to explaining the nature of informal rigorous proof. That is, I will reject explanations both in terms of formalisability and in terms of systemic cohesion, in favour of taking a virtue approach to mathematical proofs. I argue that rigour in our proving practices and the correctness of the proofs themselves are best seen as the products of the mathematicians’ mathematical virtues. I argue further that this provides the best account of various difficult-to-explain aspects of proofs as they are found in practice, particularly the apparent incongruity between the impact of contextual factors and the special epistemic status we assign to mathematical knowledge gained from proofs. Finally, I will consider some cases from mathematics that are difficult for the virtue theorist, and how they should respond.

    

James Tappenden (University of Michigan, USA)

Styles of Mathematical Explanation. Why do Elliptic Functions Have Two Periods?

Recent years have seen sustained attention to the topic of explanation as a phenomenon within mathematics. There appear to be both differences and similarities in the patterns characteristic of mathematical explanations of mathematical events and causal explanations of physical events, but more study is needed to ascertain precisely what the differences are. This talk will sketch and discuss a historical case study illustrating that, among other things, mathematical explanations can exhibit the same interest-relativity and context-dependence that are often found in explanations of physical events. The example is the explanation of the fact that elliptic functions are doubly periodic. (This way of describing the case involves seeing the elliptic functions in a nineteenth-century way, via inverting elliptic integrals; today double periodicity is part of the definition of “elliptic function”.) Two ways to address the fact – one using techniques characteristic of Riemann (develop the Riemann surface, then integrate on a torus) and another in the style of Weierstrass (represent via the Weierstrass P-function and its derivative) – reveal strikingly different mathematical virtues. The explanations are both “good ones”, but for very different reasons that are not entirely commensurable. They draw on different fundamental intuitions and styles of reasoning: structural/geometric versus broadly computational. They both draw on broader techniques that are very fruitful, but for non-overlapping classes of problems. This paper will consider the advantages of each to illustrate that one
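
For readers who want the Weierstrass side of the contrast in front of them (a standard formula, not taken from the talk): with respect to a lattice $\Lambda = \mathbb{Z}\omega_1 + \mathbb{Z}\omega_2$ the P-function is defined by

\[
  \wp(z) \;=\; \frac{1}{z^{2}} \;+\; \sum_{\omega \in \Lambda\setminus\{0\}}
  \Bigl(\frac{1}{(z-\omega)^{2}} - \frac{1}{\omega^{2}}\Bigr),
\]

and satisfies $\wp(z+\omega_1) = \wp(z+\omega_2) = \wp(z)$: the two periods are built into the representing series from the start, whereas on the Riemann-surface picture they emerge as integrals over the two independent cycles of the torus.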

 

Eric Vandendriessche (Universite Paris Diderot, France)

String figures as mathematics? A research in ethnomathematics.

This presentation aims to give an overview of the outcomes of research in ethnomathematics regarding the mathematical rationality contained in the making of string figures (Vandendriessche, 2015). The research design uses interdisciplinary methods borrowed from anthropology, mathematics, and the history and philosophy of mathematics. The practice of string figure-making has long been carried out in many societies, and particularly in those of oral tradition. It involves applying a succession of operations to a string (knotted into a loop), mostly using the fingers and sometimes the feet, the wrists or the mouth. This succession of operations is intended to generate a final figure. We explore different modes of conceptualization of the practice of string figure-making and analyse various source materials through these conceptual tools: research by mathematicians, ethnographical publications, and personal fieldwork findings in the Chaco, Paraguay, and in the Trobriand Islands, Papua New Guinea, all of which give evidence of the rationality that underlies this activity. We will conclude that the creation of string figures may be seen as the result of intellectual processes, involving the elaboration of algorithms, and concepts such as operation, sub-procedure, iteration, and transformation.

 

Giorgio Venturi (University of Campinas, Brasil)

Mathematical explanation and its practice

In this talk we analyze the notion of explanation within mathematics itself. Although the subject of mathematical explanation in the physical sciences is fairly well codified in the literature, its purely mathematical counterpart has seen few and sporadic contributions. Explicit treatments can be found in the work of Steiner (1978), indirectly as an extension to mathematics of Kitcher’s theory of scientific unification as explanation (1989), and more recently in Frans and Weber (2014).

We start by considering the pros and cons of the most standard methodologies of inquiry into this subject in particular, and into the philosophy of mathematical practice more generally: top-down and bottom-up approaches. We will classify Steiner’s and Kitcher’s accounts as top-down, and Frans and Weber’s as bottom-up. We will then describe the limits of these authors’ use of the two approaches, identifying what we will call the problem of causality and the problem of reference. The former pertains mainly to the top-down approach and consists in the difficulty of identifying a metaphysical relation between explanans and explanandum able to justify the explanation in terms of a given theoretical virtue; the latter pertains mainly to the bottom-up approach and consists in the difficulty of giving convincing reasons for the theoretical value of a particular example of mathematical explanation.

In view of both problems we propose a view of mathematical explanation as a contextual process. The search for the right context for the proof of a proposition expressed by a sentence S will then bring us to ask whether axioms, alone, are able to explain anything. If the explanation we seek is more than the knowledge that S is a theorem of a given theory, we maintain that axioms do not explain, because they carry too much information. Consequently, arguing that explanation comes in degrees, we will find prototypical examples of explanatory proofs in equivalence theorems. Moreover, we will maintain that although the statements of an equivalence theorem are logically equivalent, it is also possible in this case to discriminate between explanans and explanandum, thanks to mathematical practice. We then present two examples of such theorems: the equivalence between the Axiom of Choice (AC) and the Well-Ordering Theorem, and another example from analysis, where AC plays a fundamental role.

In the end we will discuss the role of practice, and the relevant informal components of mathematical activity, in the context of mathematical explanation. In conclusion we will draw attention to a difficult aspect of any philosophy of mathematical practice: the justification of the objective character of mathematics. We will then discuss our view in connection with this problem. Without proposing a general solution, we hope to contribute to a philosophically informed discussion of a theoretical notion that plays a fundamental role in the philosophy of mathematical practice.

 

Roy Wagner (Tel Aviv University, Israel)

A constraints based philosophy of mathematics

It seems that the revived debate between realism and nominalism in the philosophy of mathematics is reaching a dead end. As the philosophical conversation unfolds, we learn that mathematical theorems and objects do behave in some ways like scientific truths and real scientific entities, but in other ways they do not. If we follow P. Maddy's (2011) Defending the Axioms, for example, we end up with an almost arbitrary choice of whether to extend the predicates "true" and "real" from natural scientific claims and entities to mathematical theorems and objects.

The proposed talk will suggest that we should replace ontological reality by real worldly constraints. We will briefly sketch some of the very real functional, rational, semiotic and institutional constraints that make mathematics a highly non-arbitrary realm of knowledge that depends, at the same time, on the contingencies of human forms of life. For example, mathematics is constrained by the natural facts it tries to reflect; by our means of observation and inscription; by the capacity of most humans to follow some kinds of formal rules with a high degree of consensus, while repeatedly failing when trying to follow others; by the material means and leisure given to people for doing mathematics, which, in turn, is constrained by public attitudes to mathematical knowledge; by patterns of communication and representation that successfully generate shared fields of problems, toolboxes and informal consensus among experts; by formal and logical ideals... This list can and should be continued. Different constraints have different purport, but none has an absolute force. This is an easy truism in everyday life (e.g., if I step off a ledge, I am bound to fall, unless I manage to grasp onto something; I am constrained by my moral convictions, unless I choose to pay the price of betraying them). I claim that constraints on the status of mathematical truth can also be similarly mitigated, leaving us with no absolute core constraints to hold on to, yet never free to ignore all constraints at once. The philosophical question is therefore not to isolate the “ideal” or “independent” or “physical” or “socially constructed” core constraints. The question is, rather, how different kinds of constraints interact, compete and balance to form various historical mathematical cultures.

In the suggested talk, I will first sketch this general framework of a constraints-based ontology and epistemology of mathematics. Then I will attempt to validate it by engaging with the analyses of mathematical truth in the works of Wittgenstein and Putnam, while drawing on continental-hermeneutic inspirations.

 

Pauline van Wierst
(Scuola Normale Superiore di Pisa)

Grounding in Bolzano’s Purely Analytic Proof

In A Purely Analytic Proof (Rein analytischer Beweis, 1817), the Czech philosopher and mathematician Bernard Bolzano presents a new proof of what we nowadays call the Intermediate Value Theorem (IVT). Distinguished mathematicians such as Gauss and Lagrange had already given proofs of this theorem that made it perfectly clear that it is true, but Bolzano felt the need to provide a proof that shows why the IVT is true. In other words, in Bolzano’s view the IVT was in need of a grounding proof.
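
In modern formulation (my gloss, not Bolzano's own wording), the theorem in question, in its sign-change form, states:

\[
  f \ \text{continuous on}\ [a,b],\quad f(a) < 0 < f(b)
  \;\Longrightarrow\; \text{there exists } c \in (a,b) \text{ with } f(c) = 0 .
\]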

It is commonly agreed that the proof that Bolzano presents in A Purely Analytic Proof is remarkably good for its time (e.g. Rusnock, 2000). This is all the more remarkable because it is also commonly agreed that there are serious problems with Bolzano’s theory of grounding (Betti, 2010b; Roski, 2014; Rusnock, 2000; Tatzel, 2002). As Bolzano himself admits, he is unable to do much more than give some characteristics and examples of truths that in his view stand in the grounding relation, and it is unclear how this alleged theory could work in mathematical practice (Bolzano, 1837).

Was it indeed, as Bolzano himself argues in the preface to A Purely Analytic Proof, his beliefs about grounding that enabled him to give such a good proof? And what does this proof tell us about Bolzano’s notion of grounding? These will be the main questions addressed in this talk.

Since Bolzano does not explain why, in his view, the proof that he gives in A Purely Analytic Proof is a grounding proof, and barely explains why those of other mathematicians are not, we will start this talk by identifying exactly which aspects make Bolzano’s proof, according to his conception, a grounding proof. For example, we will consider to what extent Bolzano’s proof relates truths in their objective order, that is, in the order in which truths “really” stand, as opposed to how we come to know them. Further, we will consider how we can understand Bolzano’s claim that proofs of this theorem that appeal to geometric concepts are circular. On this basis we will gain a better understanding of the value of Bolzano’s conception of grounding for mathematical practice.
