Triple your donations to effective charities thanks to the tax reduction

Tripling your donations

Donations to organisations of general interest (organismes d'intérêt général) qualify for a tax reduction at a rate of 66%, up to a limit of 20% of taxable income. This reduction therefore lets you roughly triple your donations. Indeed, if I am willing to spend 1,000€ on giving, I can donate 3,000€, and the tax authorities give me back 3,000 × 0.66 ≈ 2,000€ as a tax reduction. This tax regime for donations is extremely generous: by way of comparison, the British "Gift Aid" legislation lets you increase your donations by 25%, as against 200% for France!
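
As a worked illustration of this arithmetic, here is a minimal sketch in Python (the function name and the simplifications are mine; in particular it ignores the possibility of carrying excess donations forward to later years):

    RATE = 0.66      # reduction rate for gifts to organismes d'intérêt général
    CEILING = 0.20   # gifts are eligible only up to 20% of taxable income

    def net_cost(donation, taxable_income):
        """Out-of-pocket cost of a donation after the 66% tax reduction."""
        eligible = min(donation, CEILING * taxable_income)
        return donation - RATE * eligible

    # On a taxable income of 50,000€, a 3,000€ donation ultimately costs:
    print(net_cost(3_000, 50_000))  # 1020.0, roughly a third of the gift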

Persche 2009

Since the Persche ruling of 2009, no European state may refuse a tax deduction on the grounds that the recipient of the donation is not established in that state. The Court held that the tax deductibility of cross-border donations falls under the free movement of capital guaranteed by Community law.

However, it took France several years to put this ruling into practice.

The old tax return (up to the 2013 edition)

In form 2042, "Déclaration des revenus 2012", only the box "Dons à des organismes établis en France" (donations to organisations established in France) appears. There is no simple way to claim the tax reduction nonetheless guaranteed by Persche 2009.

Yet the version of article 200 of the Code général des impôts in force from 7 May 2012 to 1 January 2014 is consistent with Persche 2009. Paragraph 4 bis states:

Donations and payments made to organisations approved under the conditions laid down in article 1649 nonies, whose registered office is located in a Member State of the European Union or in another State party to the Agreement on the European Economic Area that has concluded with France an administrative assistance agreement to combat tax fraud and evasion, also qualify for the tax reduction. Approval is granted when the organisation pursues objectives and presents characteristics similar to those of organisations whose registered office is located in France and which meet the conditions laid down by this article.

When donations and payments have been made to a non-approved organisation whose registered office is located in a Member State of the European Union or in another State party to the Agreement on the European Economic Area that has concluded with France an administrative assistance agreement to combat tax fraud and evasion, the tax reduction obtained is clawed back, unless the taxpayer has produced, within the filing deadline, supporting documents attesting that this organisation pursues objectives and presents characteristics similar to those of organisations whose registered office is located in France and which meet the conditions laid down by this article.

The approval application, governed by the order NOR EFIE1100179A of 28 February 2011, was a very burdensome procedure for organisations. Despite my attempts in the summer of 2015, none of the NGOs recommended by GiveWell or Giving What We Can were willing to undertake it.

If the organisation is not approved, it is the taxpayer who must attach to their return "the supporting documents attesting that this organisation pursues objectives and presents characteristics similar" to those of French organisations. Beyond the discrepancy between form 2042 and article 200, it is, moreover, not specified which supporting documents must be provided.

The waiver of supporting documents

From 1 January 2014, taxpayers no longer have to attach supporting documents for their donations; they only have to produce them in the event of a tax audit. From the version of 1 January 2014 onwards, paragraph 4 bis specifies:

When donations and payments have been made to a non-approved organisation whose registered office is located in a Member State of the European Union or in another State party to the Agreement on the European Economic Area that has concluded with France an administrative assistance agreement to combat tax fraud and evasion, the tax reduction obtained is clawed back, unless the taxpayer produces, at the request of the tax authorities, supporting documents attesting that this organisation pursues objectives and presents characteristics similar to those of organisations whose registered office is located in France and which meet the conditions laid down by this article.

Box 7VC

Moreover, starting with the 2014 edition of form 2042 C, there is a new box, 7VC, for donations to organisations of general interest established in a European state. (Since 2017, box 7VC is found in form 2042 RICI instead of 2042 C, but is otherwise unchanged.)

Conclusion

The waiver of supporting documents is a change of little consequence for donations to French organisations, since these documents (tax receipts) are generally issued automatically by the organisation. But in the case of a non-approved European organisation, the change matters more: the documentation attesting that the organisation is similar to French organisations of general interest is not only more costly to produce, its required form is also not precisely specified.

It is therefore only in the rare event of a tax audit that one would have to demonstrate this similarity between the European organisation and French organisations. In case of an audit, I assume it would fall to the tax authorities to specify which documents suffice to establish the similarity. This should not be difficult, since the European organisations favoured by effective altruism are clearly of general interest. If in doubt, you can contact me to discuss it.

The creation of box 7VC is also reassuring, as it brings the tax form into line with article 200.

Transnational Giving Europe

It is possible to donate to the Against Malaria Foundation via the Transnational Giving Europe network; one then automatically receives a tax receipt. However, the fees amount to 5% of the amount given, for donations under 100,000€.
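
For example (my arithmetic, assuming the 5% fee is deducted from the gift): of a 3,000€ donation, 150€ goes to fees and 2,850€ reaches the Against Malaria Foundation, while the 3,000€ tax receipt still yields a 1,980€ reduction, so the net cost to the donor remains about 1,020€.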

December 16, 2017

Philosophy success story II: the analysis of computability

a computing machine is really a logic machine. Its circuits embody the distilled insights of a remarkable collection of logicians, developed over centuries.

— Martin Davis (2000)

This is part of my series on success stories in philosophy. See this page for an explanation of the project and links to other items in the series. I also have a related discussion of conceptual analysis here.

Contents

  1. The intuitive notion of effective calculability
  2. The Church-Turing analysis of computability
  3. Computability theory applied
    1. The halting problem
    2. Further applications in mathematics
    3. The modern computer
  4. Epilogue: the long reach of the algorithm

The analysis of computability is one of the few examples in history of a non-trivial conceptual analysis that has consensus support. Some might want to quibble that computer science or mathematics, rather than philosophy, deserves the credit for it. I'm not interested in which artificial faction of the academy should lay claim to the spoils of war. The search for, proposal of, and eventual vindication of a formalisation of an everyday concept is philosophical in method; that is my point. If you wish to conclude from this that computer scientists produce better philosophy than philosophers, so be it.

To us living today, the formal idea of an algorithm is so commonsensical that it can be hard to imagine a worldview lacking this precise concept. Yet less than a century ago, that was exactly the world in which Turing, Gödel, Russell, Hilbert, and everybody else were living.

The notion of an algorithm, of course, is so general that people have been using algorithms for thousands of years. The use of tally marks to count sheep is a form of algorithm. The sieve of Eratosthenes, an algorithm for finding all prime numbers up to a given limit, was developed in ancient Greece. The success story is therefore not the development of algorithms, but the understanding and formalisation of the concept itself. This improved understanding helped dissolve some confusions.

The intuitive notion of effective calculability

Soare 1996:

In 1642 the French mathematician and scientist, Blaise Pascal, invented an adding machine which may be the first digital calculator.

Wikipedia:

In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. […] It could:

  • add or subtract an 8-digit number to / from a 16-digit number
  • multiply two 8-digit numbers to get a 16-digit result
  • divide a 16-digit number by an 8-digit divisor

These primitive calculating devices show that people in the 17th century had to have some intuitive notion of "that which can be calculated by a machine", or by a "mindless process", or "without leaps of insight". They were, at the very least, an existence proof, showing that addition and subtraction could be performed by such a process.

Wikipedia also tells us:

Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods.

And:

In 1935 Turing and everyone else used the term “computer” for an idealized human calculating with extra material such as pencil and paper, or a desk calculator, a meaning very different from the use of the word today.

The Church-Turing analysis of computability

Stanford has a nice and concise explanation:

In the 1930s, well before there were computers, various mathematicians from around the world invented precise, independent definitions of what it means to be computable. Alonzo Church defined the Lambda calculus, Kurt Gödel defined Recursive functions, Stephen Kleene defined Formal systems, Markov defined what became known as Markov algorithms, Emil Post and Alan Turing defined abstract machines now known as Post machines and Turing machines.

Surprisingly, all of these models are exactly equivalent: anything computable in the lambda calculus is computable by a Turing machine and similarly for any other pairs of the above computational systems. After this was proved, Church expressed the belief that the intuitive notion of “computable in principle” is identical to the above precise notions. This belief, now called the “Church-Turing Thesis”, is uniformly accepted by mathematicians.

Computability theory applied

I take Turing's (and his contemporaries') philosophical contribution to be the conceptual analysis of "computable" as "computable by a Turing machine", i.e. the assertion of the Church-Turing Thesis. As we will often see in this series, once we have a formalism, we can go to town and start proving things left and right about the formal object. What was once a topic of speculation becomes amenable to mathematics. (For much more on this topic, see my other post on why good philosophy often looks like mathematics.) Here are some examples.

The halting problem

Given Pascal's and Leibniz's machines, one might have thought it natural that any function (a set $F$ of ordered pairs such that if $\langle a,b \rangle \in F$ and $\langle a,c \rangle \in F$ then $b=c$) which can be precisely specified can be computed in the intuitive sense. But Turing showed that this is not true. For example, the halting problem is not computable; and the Entscheidungsproblem (Turing's original motivation for developing his formalism) cannot be solved.
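
To give the flavour of Turing's argument, here is a minimal sketch of the diagonalisation in Python (my own illustration; the names are mine, and halts is the hypothetical decider whose existence the argument refutes):

    # Suppose, for contradiction, that we had a total, correct decider:
    def halts(program, data):
        """Hypothetical: returns True iff program(data) eventually halts."""
        ...  # not implementable, as the argument below shows

    def diagonal(program):
        # Do the opposite of whatever `halts` predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:  # then loop forever
                pass
        else:
            return  # otherwise halt immediately

    # Now consider diagonal(diagonal). If halts(diagonal, diagonal) is True,
    # diagonal(diagonal) loops forever; if False, it halts. Either way,
    # `halts` answers incorrectly, so no such `halts` can exist.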

Further applications in mathematics

Many lists of examples of non-computable functions can be found elsewhere.

There is an analogy here, by the way, to the previous success story: many people thought it natural that any continuous function must be differentiable; the discovery of a function that is everywhere continuous and nowhere differentiable seemed problematic; and the formalisation of the concept of continuity resolved the problem.

The modern computer

The greatest practical impact of Turing’s discoveries was to lay the conceptual ground for the development of modern computers. (Wikipedia has a good summary of the history of computing.)

In his 1936 paper On Computable Numbers, with an Application to the Entscheidungsproblem, once armed with his new formalism, Turing immediately proves an interesting result: the general notion of "computable by some Turing machine" can itself be expressed in terms of Turing machines. In particular, a Universal Turing Machine is a Turing machine that can simulate an arbitrary Turing machine on arbitrary input.1

This was the first proof that there could be a “universal” programmable machine, capable of computing anything that we know how to compute, when given the recipe. Sometimes in history, as in the case of heavier-than-air flying machines, infamously pronounced impossible by Lord Kelvin, the proof is in the pudding. With the computer, the proof preceded the pudding by several decades.
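
To make universality concrete, here is a minimal sketch in Python (my own encoding and names, not Turing's construction): the machine to be run is passed in as plain data, a transition table, just as a universal machine reads a machine description from its tape.

    def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
        """Interpret a Turing machine that is given as plain data.

        `rules` maps (state, symbol) -> (new_state, new_symbol, move),
        where move is -1 (left), +1 (right) or 0; state 'halt' stops the run.
        """
        cells = dict(enumerate(tape))  # sparse, unbounded tape
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            state, cells[head], move = rules[(state, symbol)]
            head += move
        return "".join(cells[i] for i in sorted(cells))

    # An example machine, expressed as data: flip bits until the first blank.
    flipper = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", 0),
    }
    print(run_tm(flipper, "0110"))  # prints 1001_

The interpreter run_tm is one fixed program, yet it can run any machine we describe to it; that machines can be treated as data in this way is the conceptual seed of the stored-program computer.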

Jack Copeland (The Essential Turing, 2004, p.21f) writes:

In the years immediately following the Second World War, the Hungarian-American logician and mathematician John von Neumann—one of the most important and influential figures of twentieth-century mathematics—made the concept of the stored-programme digital computer widely known, through his writings and his charismatic public addresses […] It was during Turing’s time at Princeton that von Neumann became familiar with the ideas in ‘On Computable Numbers’. He was to become intrigued with Turing’s concept of a universal computing machine. […] The Los Alamos physicist Stanley Frankel […] has recorded von Neumann’s view of the importance of ‘On Computable Numbers’:

I know that in or about 1943 or ’44 von Neumann was well aware of the fundamental importance of Turing’s paper of 1936 ‘On computable numbers . . .’, which describes in principle the ‘Universal Computer’ of which every modern computer […] is a realization. […] Many people have acclaimed von Neumann as the ‘father of the computer’ (in a modern sense of the term) but I am sure that he would never have made that mistake himself. He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing—insofar as not anticipated by Babbage, Lovelace, and others. In my view von Neumann’s essential role was in making the world aware of these fundamental concepts introduced by Turing […].

Epilogue: the long reach of the algorithm

The following is an example of progress in philosophy which, while quite clear-cut in my view, hasn’t achieved consensus in the discipline, so I wouldn’t count it as a success story quite yet. It also has more to do with the development of advanced computers and subsequent philosophical work than with the conceptual analysis of computability. But Turing, as the father of the algorithm, does deserve a nod of acknowledgement for it, so I included it here.

Peter Millican gives an excellent, concise summary of the point (Turing Lectures, HT16, University of Oxford):

Information processing, and informationally sensitive processes, can be understood in terms of symbolic inputs and outputs, governed by explicit and automatic processes. So information processing need not presuppose an "understanding" mind, and it therefore becomes possible in principle to have processes that involve sophisticated information processing without conscious purpose, in much the same way as Darwin brought us sophisticated adaptation without intentional design.

On the idea of natural selection as an algorithm, see Dennett.

  1. The universal machine achieves this by reading both the description of the machine to be simulated and its input from its own tape. Extremely basic sketch: if $M'$ simulates $M$, $M'$ will print out, in sequence, the complete configurations that $M$ would produce. It will keep a record of the latest complete configuration at the right of the tape, and a record of $M$'s rules at the left of the tape. It will shuttle back and forth, reading the latest configuration from the right, then finding the rule that it matches at the left, then moving back to build the next configuration accordingly on the right. (Peter Millican, Turing Lectures, HT16, University of Oxford)

December 3, 2017

Philosophy success story I: predicate logic

This is part of my series on success stories in philosophy. See this page for an explanation of the project and links to other items in the series.

Contents

  1. Background
  2. The problem of multiple generality
    1. How people were confused (a foray into the strange world of suppositio)
    2. How predicate logic dissolved the confusion
  3. Definite descriptions
    1. How people were confused
    2. How predicate logic dissolved the confusion
  4. The epsilon-delta definition of a limit
    1. How people were confused
    2. How predicate logic dissolved the confusion
  5. On the connection between the analysis of definite descriptions and that of limit
  6. "That depends what the meaning of 'is' is"

Background

Frege “dedicated himself to the idea of eliminating appeals to intuition in the proofs of the basic propositions of arithmetic”. For example:

A Kantian might very well simply draw a graph of a continuous function which takes values above and below the origin, and thereby ‘demonstrate’ that such a function must cross the origin. But both Bolzano and Frege saw such appeals to intuition as potentially introducing logical gaps into proofs.

In 1872, Weierstrass described a real-valued function that is continuous everywhere but differentiable nowhere. All the mathematics Weierstrass was building on had been established by using "obvious" intuitions. But now the intuitive system so built up had led to a highly counter-intuitive result. This showed that intuitions can be an unreliable guide: by the lights of intuition, Weierstrass's result introduced a contradiction into the system. So, Frege reasoned, we should ban intuitive proof-steps in favour of a purely formal system of proof. This formal system would (hopefully) allow us to derive the basic propositions of arithmetic. Armed with such a system, we could then simply check whether Weierstrass's result, and others like it, hold or not.

So Frege developed predicate logic. In what follows I’ll assume familiarity with this system.

While originally developed for this mathematical purpose, predicate logic turned out to be applicable to a number of philosophical issues; this process is widely considered among the greatest success stories of modern philosophy.

The problem of multiple generality

How people were confused (a foray into the strange world of suppositio)

Dummett 1973:

Aristotle and the Stoics had investigated only those inferences involving essentially sentences containing not more than one expression of generality.

Aristotle’s system, which permits only four logical forms, seems comically limited1 by today’s standards, yet Kant “famously claimed, in Logic (1800), that logic was the one completed science, and that Aristotelian logic more or less included everything about logic there was to know.” (Wikipedia).

Some medieval logicians attempted to go beyond Aristotle and grappled with the problem of multiple generality. As Dummett writes (my emphasis),

Scholastic logic had wrestled with the problems posed by inferences depending on sentences involving multiple generality – the occurrence of more than one expression of generality. In order to handle such inferences, they developed ever more complex theories of different types of ‘suppositio’ (different manners in which an expression could stand for or apply to an object): but these theories, while subtle and complex, never succeeded in giving a universally applicable account.
It is necessary, if Frege is to be understood, to grasp the magnitude of the discovery of the quantifier-variable notation, as thus resolving an age-old problem the failure to solve which had blocked the progress of logic for centuries. […] for this resolved a deep problem, on the resolution of which a vast area of further progress depended, and definitively, so that today we are no longer conscious of the problem of which it was the solution as a philosophical problem at all.

Medieval philosophers got themselves into terrible depths of confusion when trying to deal with these sentences having more than one quantifier. For example, from "for each magnitude, there is a smaller magnitude", we want to validate "every magnitude has at least one magnitude smaller than it" but not "there is a magnitude smaller than every magnitude". The medievals analysed this in terms of context-dependence of the meanings of quantified terms:

The general phenomenon of a term’s having different references in different contexts was called suppositio (substitution) by medieval logicians. It describes how one has to substitute a term in a sentence based on its meaning—that is, based on the term’s referent. (Wikipedia)

The scholastics specified many different types of substitution, and which operations were legitimate for each, but never progressed beyond a set of ham-fisted, ad hoc rules.

To show examples, I had to go to modern commentaries on the scholastics, since the actual texts are simply impenetrable. One such commentary is Swiniarski 1970, Ockham's Theory of Personal Supposition; another is Broadie 1993, Oxford University Press's introduction to medieval logic:

a term covered immediately by a sign of universality, for example, by ‘all’ or ‘every’, has distributive supposition, and one covered mediately by a sign of affirmative universality has merely confused supposition. A term is mediately covered by a given sign if the term comes at the predicate end of a proposition whose subject is immediately covered by the sign. Thirdly, a term covered, whether immediately or mediately, by a sign of negation has confused distributive supposition (hereinafter just ‘distributive supposition’). Thus in the universal negative proposition ‘No man is immortal’, both the subject and the predicate have distributive supposition, and in the particular negative proposition ‘Some man is not a logician’, the predicate has distributive supposition and the subject has determinate supposition. […]

Given the syntactic rules presented earlier for determining the kind of supposition possessed by a given term, it follows that changing the position of a term in a proposition can have an effect on the truth value of that proposition. In:

(10) Every teacher has a pupil

‘pupil’ has merely confused supposition, and consequently the proposition says that this teacher has some pupil or other and that teacher has some pupil or other, and so on for every teacher. But in:

(11) A pupil every teacher has

‘pupil’ has determinate supposition, and since ‘teacher’ has distributive supposition descent must be made first under ‘pupil’ and then under ‘teacher’. Assuming there to be just two teachers and two pupils, the first stage of descent takes us to:

(12) Pupil1 every teacher has or pupil2 every teacher has.

The next stage takes us to:

(13) Pupil1 teacher1 has and pupil1 teacher2 has, or pupil2 teacher1 has and pupil2 teacher2 has.

(13) implies that some one pupil is shared by all the teachers, and that is plainly not implied by (10), though it does imply (10).

In all this talk of supposition, we can discern a flailing attempt to deal with ambiguities of quantifier scope, but these solutions are, of course, hopelessly ad hoc. Not to mention that the rules of supposition were in flux, and their precise content is still a matter of debate among specialists of Scholasticism2.

And now just for fun, a representative passage from Ockham:

And therefore the following rule can be given concerning negations of this kind: Negation which mobilizes what is immobile, immobilizes what is mobile, that is, when such a negation precedes a term which supposits determinately it causes the term to supposit in a distributively confused manner, and when it precedes a term suppositing in a distributively confused manner it causes the term to supposit determinately.

According to one commentator (Swiniarski 1970), in this passage “Ockham formulates a De Morgan-like rule concerning the influence of negations which negate an entire proposition and not just one of the terms.” I’ll let you be the judge. For at this point it is I who supposit in a distributively confused manner.

How predicate logic dissolved the confusion

The solution is now familiar to anyone who has studied logic. Wikipedia gives a simple example:

Using modern predicate calculus, we quickly discover that the statement is ambiguous. “Some cat is feared by every mouse” could mean

  • For every mouse m, there exists a cat c, such that c is feared by m, i.e. $\forall m\,(M(m) \rightarrow \exists c\,(C(c) \land F(m,c)))$

But it could also mean

  • there exists one cat c, such that for every mouse m, c is feared by m, i.e. $\exists c\,(C(c) \land \forall m\,(M(m) \rightarrow F(m,c)))$.

Of course, this is only the simplest case. Predicate logic allows arbitrarily deep nesting of quantifiers, helping us understand sentences which the scholastics could not even have made intuitive sense of, let alone provide a formal semantics for.
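
Applied to the earlier magnitude example (my formalisation, writing $S(y,x)$ for "$y$ is smaller than $x$"), the valid and invalid readings come apart mechanically:

$\forall x\, \exists y\, S(y,x)$ (every magnitude has some magnitude smaller than it)

$\exists y\, \forall x\, S(y,x)$ (some single magnitude is smaller than every magnitude)

The first formula does not entail the second; no theory of suppositio is required, only attention to quantifier order.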

Definite descriptions

How people were confused

The problem here is with sentences like “Unicorns have horns” which appear to refer to non-existent objects. People were quite confused about them:

Meinong, an Austrian philosopher active at the turn of the 20th century, believed that since non-existent things could apparently be referred to, they must have some sort of being, which he termed sosein (“being so”). A unicorn and a pegasus are both non-being; yet it’s true that unicorns have horns and pegasi have wings. Thus non-existent things like unicorns, square circles, and golden mountains can have different properties, and must have a ‘being such-and-such’ even though they lack ‘being’ proper. The strangeness of such entities led to this ontological realm being referred to as “Meinong’s jungle”. (Wikipedia)

The delightfully detailed Stanford page on Meinong provides further illustration:

Meinong tries to give a rational account of the seemingly paradoxical sentence “There are objects of which it is true that there are no such objects” by referring to two closely related principles: (1) the “principle of the independence of so-being from being” [“Prinzip der Unabhängigkeit des Soseins vom Sein”], and (2) the “principle of the indifference of the pure object to being” (“principle of the outside-being of the pure object” [“Satz vom Außersein des reinen Gegenstandes”]) (1904b, §3–4). […]

Meinong repeatedly ponders the question of whether outside-being is a further mode of being or just a lack of being (1904b, §4; 1910, §12; 1917, §2; 1978, 153–4, 261, 358–9, 377). He finally interprets outside-being as a borderline case of a kind of being. Every object is prior to its apprehension, i.e., objects are pre-given [vorgegeben] to the mind, and this pre-givenness is due to the (ontological) status of outside-being. If so, the most general determination of so-being is being an object, and the most general determination of being is outside-being. The concept of an object cannot be defined in terms of a qualified genus and differentia. It does not have a negative counterpart, and correlatively outside-being does not seem to have a negation either (1921, Section 2 B, 102–7).

In fact, as John P. Burgess writes:

as Scott Soames reveals, in his Philosophical Analysis in the Twentieth Century, volume I: The Dawn of Analysis, Russell himself had briefly held a similar view [to Meinong’s]. It was through the development of his theory of descriptions that Russell was able to free himself from anything like commitment to Meinongian “objects.”

How predicate logic dissolved the confusion

Russell’s On denoting, as the most famous case of a solved philosophical problem, needs no introduction. (Wikipedia gives a good treatment, and so does Sider’s Logic for Philosophy, section 5.3.3.)

Russell's analysis of definite descriptions could have stood on its own as a success story. The tools of predicate logic were not, strictly speaking, necessary to discover the two possible interpretations of empty definite descriptions. In fact it may seem surprising that no one made this discovery earlier. But it can be hard for us, literate people of the 21st century, to imagine the intellectual poverty of a world without predicate logic. So we must not be too haughty. The most likely conclusion, it seems to me, is that Russell's insight was, in fact, very difficult to achieve without the precision afforded by Frege's logic.

The epsilon-delta definition of a limit

How people were confused

As Wikipedia writes:

The need for the concept of a limit came into force in the 17th century when Pierre de Fermat attempted to find the slope of the tangent line at a point $x$ of a function such as $f(x)=x^2$. Using a non-zero, but almost zero quantity, $E$, Fermat performed the following calculation:

\begin{aligned} \text{slope} &= \frac{f(x+E)-f(x)}{E} \\ &= \frac{x^2 + 2xE + E^2 - x^2}{E} \\ &= 2x + E \\ &= 2x \end{aligned}

The key to the above calculation is that since $E$ is non-zero one can divide $f(x+E)-f(x)$ by $E$, but since $E$ is close to $0$, $2x+E$ is essentially $2x$. Quantities such as $E$ are called infinitesimals. The problem with this calculation is that mathematicians of the era were unable to rigorously define a quantity with the properties of $E$, although it was common practice to 'neglect' higher-power infinitesimals and this seemed to yield correct results.

SEP states:

Infinitesimals, differentials, evanescent quantities and the like coursed through the veins of the calculus throughout the 18th century. Although nebulous—even logically suspect—these concepts provided, faute de mieux, the tools for deriving the great wealth of results the calculus had made possible. And while, with the notable exception of Euler, many 18th century mathematicians were ill-at-ease with the infinitesimal, they would not risk killing the goose laying such a wealth of golden mathematical eggs. Accordingly they refrained, in the main, from destructive criticism of the ideas underlying the calculus. Philosophers, however, were not fettered by such constraints. […]

Berkeley’s arguments are directed chiefly against the Newtonian fluxional calculus. Typical of his objections is that in attempting to avoid infinitesimals by the employment of such devices as evanescent quantities and prime and ultimate ratios Newton has in fact violated the law of noncontradiction by first subjecting a quantity to an increment and then setting the increment to 0, that is, denying that an increment had ever been present. As for fluxions and evanescent increments themselves, Berkeley has this to say:

And what are these fluxions? The velocities of evanescent increments? And what are these same evanescent increments? They are neither finite quantities nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?

Kline 1972 also tells us:

Up to about 1650 no one believed that the length of a curve could equal exactly the length of a line. In fact, in the second book of La Geometrie, Descartes says the relation between curved lines and straight lines is not nor ever can be known.

How predicate logic dissolved the confusion

Simply let $f$ be a real-valued function defined on $\mathbb{R}$. Let $c$ and $L$ be real numbers. We can rigorously define the limit as:

\lim_{x \rightarrow c} f(x) = L \leftrightarrow (\forall \varepsilon > 0,\ \exists \delta > 0,\ \forall x \in \mathbb{R},\ 0 < |x-c| < \delta \rightarrow |f(x)-L| < \varepsilon)

From this it’s easy to define the slope as the limit of a rate of increase, to define continuity, and so on.
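
Concretely, Fermat's slope calculation from earlier can now be restated as a limit in which $E$ is never set equal to zero (a worked example, using the definition above):

f'(x) = \lim_{E \rightarrow 0} \frac{f(x+E)-f(x)}{E} = \lim_{E \rightarrow 0}\, (2x+E) = 2x \qquad \text{for } f(x)=x^2

Since the $\varepsilon$–$\delta$ clause only ever considers $0 < |E|$, the quantity $E$ is never treated as both zero and non-zero, which answers Berkeley's objection.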

Note that there are three nested quantifiers here, and an implication sign. When we remind ourselves how much confusion just one nested quantifier caused ante-Frege, it's not surprising that this new definition was not discovered prior to the advent of predicate logic.

On the connection between the analysis of definite descriptions and that of limit

John P. Burgess, in The Princeton Companion to Mathematics, elaborates on the conceptual link between these two success stories:

[Definite descriptions] illustrate in miniature two lessons: first, that the logical form of a statement may differ significantly from its grammatical form, and that recognition of this difference may be the key to solving or dissolving a philosophical problem; second, that the correct logical analysis of a word or phrase may involve an explanation not of what that word or phrase taken by itself means, but rather of what whole sentences containing the word or phrase mean. Such an explanation is what is meant by a contextual definition: a definition that does not provide an analysis of the word or phrase standing alone, but rather provides an analysis of contexts in which it appears.

In the course of the nineteenth-century rigorization, the infinitesimals were banished: what was provided was not a direct explanation of the meaning of $df(x)$ or $dx$, taken separately, but rather an explanation of the meaning of contexts containing such expressions, taken as wholes. The apparent form of $df(x)/dx$ as a quotient of infinitesimals $df(x)$ and $dx$ was explained away, the true form being $(d/dx)f(x)$, indicating the application of an operation of differentiation $d/dx$ applied to a function $f(x)$.

“That depends what the meaning of ‘is’ is”

Bill Clinton's quote has become infamous, but he's got a point. There are at least four meanings of 'is'. They can be clearly distinguished using predicate logic.

Hintikka 2004:

Perhaps the most conspicuous feature that distinguishes our contemporary « modern » logic created by Frege, Peirce, Russell and Hilbert from its predecessors is the assumption that verbs for being are ambiguous between the is of predication (the copula), the is of existence, the is of identity, and the is of subsumption. This assumption will be called the Frege-Russell ambiguity thesis. This ambiguity thesis is built into the usual notation of first-order logic and more generally into the usual notation for quantifiers of any order, in that the allegedly different meanings of verbs like « is » are expressed differently in it. The is of existence is expressed by the existential quantifier $(\exists x)$, the is of predication by juxtaposition (or, more accurately speaking, by a singular term's filling the argument slot of a predicative expression), the is of identity by $=$, and the is of subsumption by a general conditional.
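
To illustrate the thesis (the example sentences and symbols are my own), the four readings formalise as:

  • predication: "Socrates is wise" becomes $W(s)$
  • existence: "There is a God" becomes $\exists x\, G(x)$
  • identity: "Hesperus is Phosphorus" becomes $h = p$
  • subsumption: "A whale is a mammal" becomes $\forall x\,(W(x) \rightarrow M(x))$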

  1. Not to mention arbitrary in its limitations. 

  2. For instance, Parsons (1997), writes: “On the usual interpretation, there was an account of quantifiers in the early medieval period which was obscure; it was “cleaned up” by fourteenth century theorists by being defined in terms of ascent and descent. I am suggesting that the cleaning up resulted in a totally new theory. But this is not compelling if the obscurity of the earlier view prevents us from making any sense of it at all. In the Appendix, I clarify how I am reading the earlier accounts. They are obscure, but I think they can be read so as to make good sense. These same issues arise in interpreting the infamous nineteenth century doctrine of distribution; I touch briefly on this.” 

December 3, 2017