Showing posts with label Paradox Logical. Show all posts

Tuesday, December 23, 2008

Lottery Paradox

Henry E. Kyburg, Jr.'s Lottery Paradox (1961, p. 197) arises from considering a fair 1000-ticket lottery that has exactly one winning ticket. If this much is known about the execution of the lottery, it is rational to accept that some ticket will win. Suppose that an event is very likely if the probability of its occurring is greater than 0.99. On these grounds it is presumed rational to accept the proposition that ticket 1 of the lottery will not win. Since the lottery is fair, it is rational to accept that ticket 2 won't win either; indeed, it is rational to accept, for any individual ticket i of the lottery, that ticket i will not win. However, accepting that ticket 1 won't win, accepting that ticket 2 won't win, ..., and accepting that ticket 1000 won't win entails that it is rational to accept that no ticket will win, which in turn entails that it is rational to accept the contradictory proposition that one ticket wins and no ticket wins.
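The numbers behind the paradox are easy to check. The sketch below (a hypothetical illustration; the ticket count and acceptance threshold come from the paragraph above) shows that every individual "ticket i loses" claim clears the threshold while their conjunction is certainly false:

```python
# A fair 1000-ticket lottery with an acceptance threshold of 0.99,
# as described in the paragraph above.
TICKETS = 1000
THRESHOLD = 0.99

# Probability that any particular ticket i loses.
p_ticket_loses = 1 - 1 / TICKETS            # 0.999

# First principle: each "ticket i will not win" clears the threshold,
# so each individual proposition is accepted.
accept_each = p_ticket_loses > THRESHOLD
print(accept_each)                          # True

# Third principle (aggregation): accepting all of them commits us to the
# conjunction "no ticket wins", which has probability 0 in a fair lottery,
# contradicting the accepted proposition that some ticket will win.
p_no_ticket_wins = 0.0
print(p_no_ticket_wins > THRESHOLD)         # False
```

Restricting aggregation, as Kyburg does, blocks the last step: individual acceptances are never combined into the fatal conjunction.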

The lottery paradox was designed to demonstrate that three attractive principles governing rational acceptance, namely that

* It is rational to accept a proposition that is very likely true,
* It is not rational to accept a proposition that is known to be inconsistent, and
* If it is rational to accept a proposition A and it is rational to accept another proposition A', then it is rational to accept A & A',

are jointly inconsistent.

The paradox remains of continuing interest because it raises several issues at the foundations of knowledge representation and uncertain reasoning: the relationships between fallibility, corrigible belief and logical consequence; the roles that consistency, statistical evidence and probability play in belief fixation; the precise normative force that logical and probabilistic consistency have on rational belief.

Contents:
1. History
2. A Short Guide to the Literature
3. Selected References
4. External links

1. History

Although the first published statement of the lottery paradox appears in Kyburg's 1961 Probability and the Logic of Rational Belief, the first formulation of the paradox appears in his "Probability and Randomness," a paper delivered at the 1959 meeting of the Association for Symbolic Logic, and the 1960 International Congress for the History and Philosophy of Science, but published in the journal Theoria in 1963. This paper is reprinted in Kyburg (1983).

2. A Short Guide to the Literature

The lottery paradox has become a central topic within epistemology, and the enormous literature surrounding this puzzle threatens to obscure its original purpose. Kyburg proposed the thought experiment to get across a feature of his innovative ideas on probability (Kyburg 1961, Kyburg and Teng 2001), which are built around taking the first two principles above seriously and rejecting the last. For Kyburg, the lottery paradox isn't really a paradox: his solution is to restrict aggregation.

Even so, for orthodox probabilists the second and third principles are primary, so the first principle is rejected. Here too you'll see claims that there is really no paradox but an error: the solution is to reject the first principle, and with it the idea of rational acceptance. On this view, the rational attitude toward a very likely event is simply that it is very likely, not that it is certainly true.

Most of the literature in epistemology approaches the puzzle from the orthodox point of view and grapples with the particular consequences faced by doing so, which is why the lottery is associated with discussions of skepticism (e.g., Klein 1981), and conditions for asserting knowledge claims (e.g., J. P. Hawthorne 2004). It is common to also find proposed resolutions to the puzzle that turn on particular features of the lottery thought experiment (e.g., Pollock 1986), which then invites comparisons of the lottery to other epistemic paradoxes, such as David Makinson's preface paradox, and to "lotteries" having a different structure. This strategy is addressed in (Kyburg 1997) and also in (Wheeler 2007). An extensive bibliography is included in (Wheeler 2007).

Philosophical logicians and AI researchers have tended to be interested in reconciling weakened versions of the three principles, and there are many ways to do this, including Jim Hawthorne and Luc Bovens's (1999) logic of belief, Gregory Wheeler's (2006) use of 1-monotone capacities, Bryson Brown's (1999) application of preservationist paraconsistent logics, Igor Douven and Timothy Williamson's (2006) appeal to cumulative non-monotonic logics, Horacio Arlo-Costa's (2005) use of minimal model (classical) modal logics, and Joe Halpern's (2003) use of first-order probability.

Finally, philosophers of science, decision scientists, and statisticians are inclined to see the lottery paradox as an early example of the complications one faces in constructing principled methods for aggregating uncertain information, which is now a thriving discipline of its own, with a dedicated journal, Information Fusion, in addition to continuous contributions to general area journals.

3. Selected References

* Arlo-Costa, H. (2005). "Non-Adjunctive Inference and Classical Modalities", Journal of Philosophical Logic, 34, 581-605.
* Brown, B. (1999). "Adjunction and Aggregation", Nous, 33(2), 273-283.
* Douven, I. and Williamson, T. (2006). "Generalizing the Lottery Paradox", The British Journal for the Philosophy of Science, 57(4), pp. 755-779.
* Halpern, J. (2003). Reasoning about Uncertainty, Cambridge, MA: MIT Press.
* Hawthorne, J. and Bovens, L. (1999). "The Preface, the Lottery, and the Logic of Belief", Mind, 108: 241-264.
* Hawthorne, J.P. (2004). Knowledge and Lotteries, New York: Oxford University Press.
* Klein, P. (1981). Certainty: a Refutation of Scepticism, Minneapolis, MN: University of Minnesota Press.
* Kyburg, H.E. (1961). Probability and the Logic of Rational Belief, Middletown, CT: Wesleyan University Press.
* Kyburg, H. E. (1983). Epistemology and Inference, Minneapolis, MN: University of Minnesota Press.
* Kyburg, H. E. (1997). "The Rule of Adjunction and Reasonable Inference", Journal of Philosophy, 94(3), 109-125.
* Kyburg, H. E., and Teng, C-M. (2001). Uncertain Inference, Cambridge: Cambridge University Press.
* Lewis, D. (1996). "Elusive Knowledge", Australasian Journal of Philosophy, 74, pp. 549-67.
* Makinson, D. (1965). "The Paradox of the Preface", Analysis, 25: 205-207.
* Pollock, J. (1986). "The Paradox of the Preface", Philosophy of Science, 53, pp. 246-258.
* Wheeler, G. (2006). "Rational Acceptance and Conjunctive/Disjunctive Absorption", Journal of Logic, Language, and Information, 15(1-2): 49-53.
* Wheeler, G. (2007). "A Review of the Lottery Paradox", in William Harper and Gregory Wheeler (eds.) Probability and Inference: Essays in Honour of Henry E. Kyburg, Jr., King's College Publications, pp. 1-31.

4. External links

* Links to Jim Hawthorne's papers on the logic of nonmonotonic conditionals (and Lottery Logic)

Carroll's Paradox

"What the Tortoise Said to Achilles" is a brief dialogue by Lewis Carroll which playfully problematises the foundations of logic. The title alludes to one of Zeno's paradoxes of motion, in which Achilles could never overtake the tortoise in a race. In Carroll's dialogue, the tortoise challenges Achilles to use the force of logic to make him accept the conclusion of a simple deductive argument. Ultimately, Achilles fails, because the clever tortoise leads him into an infinite regress.

Contents:
1. Summary of the dialogue
2. Discussion
3. See also
4. Where to find the article
5. References

1. Summary of the dialogue

The discussion begins by considering the following logical argument:

* A: "Things that are equal to the same are equal to each other" (transitive property)
* B: "The two sides of this triangle are things that are equal to the same"
* Therefore Z: "The two sides of this triangle are equal to each other"

The Tortoise asks Achilles whether the conclusion logically follows from the premises, and Achilles grants that it obviously does. The Tortoise then asks Achilles whether there might be a reader of Euclid who grants that the argument is logically valid, as a sequence, while denying that A and B are true. Achilles accepts that such a reader might exist, and that he would hold that if A and B are true, then Z must be true, while not yet accepting that A and B are true.

The Tortoise then asks Achilles whether a second kind of reader might exist, who accepts that A and B are true, but who does not yet accept the principle that if A and B are both true, then Z must be true. Achilles grants the Tortoise that this second kind of reader might also exist. The Tortoise, then, asks Achilles to treat him as a reader of this second kind, and then to logically compel him to accept that Z must be true.

After writing down A, B and Z in his notebook, Achilles asks the Tortoise to accept the hypothetical:

* C: "If A and B are true, Z must be true"

The Tortoise agrees to accept C, if Achilles will write down what he has to accept in his note-book, making the new argument:

* A: "Things that are equal to the same are equal to each other"
* B: "The two sides of this triangle are things that are equal to the same"
* C: "If A and B are true, Z must be true"
* Therefore Z: "The two sides of this triangle are equal to each other"

But even though the Tortoise now accepts premise C, he still refuses to accept the expanded argument. When Achilles demands that "If you accept A and B and C, you must accept Z," the Tortoise remarks that this is another hypothetical proposition, and suggests that even if he accepts C, he could still fail to conclude Z if he did not see the truth of:

* D: "If A and B and C are true, Z must be true"

The Tortoise continues to accept each hypothetical premise once Achilles writes it down, but denies that the conclusion necessarily follows, since each time he denies the hypothetical that if all the premises written down so far are true, Z must be true:

"And at last we've got to the end of this ideal race-course! Now that you accept A and B and C and D, of course you accept Z."
"Do I?" said the Tortoise innocently. "Let's make that quite clear. I accept A and B and C and D. Suppose I still refused to accept Z?"
"Then Logic would take you by the throat, and force you to do it!" Achilles triumphantly replied. "Logic would tell you, 'You can't help yourself. Now that you've accepted A and B and C and D, you must accept Z!' So you've no choice, you see."
"Whatever Logic is good enough to tell me is worth writing down," said the Tortoise. "So enter it in your note-book, please. We will call it

(E) If A and B and C and D are true, Z must be true.

Until I've granted that, of course I needn't grant Z. So it's quite a necessary step, you see?"
"I see," said Achilles; and there was a touch of sadness in his tone.

Thus, the list of premises continues to grow without end, leaving the argument always in the form:

* (1): "Things that are equal to the same are equal to each other"
* (2): "The two sides of this triangle are things that are equal to the same"
* (3): (1) and (2) ⇒ (Z)
* (4): (1) and (2) and (3) ⇒ (Z)
* …
* (n): (1) and (2) and (3) and (4) and ... and (n − 1) ⇒ (Z)
* Therefore (Z): "The two sides of this triangle are equal to each other"

At each step, the Tortoise argues that even though he accepts all the premises that have been written down, there is some further premise (that if all of (1)-(n) are true, then (Z) must be true) that he still needs to accept before he is compelled to accept that (Z) is true.
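The regress has a simple mechanical flavor: each hypothetical Achilles writes down becomes one more premise the Tortoise may question. A toy sketch (invented labels, purely illustrative):

```python
# Each round, Achilles adds the hypothetical "if everything granted so far
# is true, Z must be true"; that conditional itself becomes a new premise.
premises = {
    "A": "Things that are equal to the same are equal to each other",
    "B": "The two sides of this triangle are things that are equal to the same",
}

def add_hypothetical(premises):
    """Add the next conditional premise (C, D, E, ...) to the list."""
    label = chr(ord("A") + len(premises))
    body = " and ".join(sorted(premises))
    premises[label] = f"If {body} are true, Z must be true"

for _ in range(3):          # three rounds of the dialogue
    add_hypothetical(premises)

print(premises["C"])        # If A and B are true, Z must be true
print(premises["E"])        # If A and B and C and D are true, Z must be true
```

The loop never reaches Z: after any number of rounds the newest premise is still an unaccepted conditional, which is exactly the Tortoise's point.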

2. Discussion

Several philosophers have tried to resolve the Carroll paradox. Bertrand Russell discussed the paradox briefly in § 38 of The Principles of Mathematics (1903), distinguishing between implication (associated with the form "if p, then q"), which he held to be a relation between unasserted propositions, and inference (associated with the form "p, therefore q"), which he held to be a relation between asserted propositions; having made this distinction, Russell could deny that the Tortoise's attempt to treat inferring Z from A and B as equivalent to, or dependent on, agreeing to the hypothetical "If A and B are true, then Z is true".

The Wittgensteinian philosopher Peter Winch discussed the paradox in The Idea of a Social Science and its Relation to Philosophy (1958), where he argued that the paradox showed that "the actual process of drawing an inference, which is after all at the heart of logic, is something which cannot be represented as a logical formula … Learning to infer is not just a matter of being taught about explicit logical relations between propositions; it is learning to do something" (p.57). Winch goes on to suggest that the moral of the dialogue is a particular case of a general lesson, to the effect that the proper application of rules governing a form of human activity cannot itself be summed up with a set of further rules, and so that "a form of human activity can never be summed up in a set of explicit precepts" (p.53).

Isashiki Takahiro (1999) summarizes past attempts and concludes they all fail before beginning yet another.

3. See also

* Deduction theorem
* Münchhausen Trilemma
* Paradox

4. Where to find the article

* Carroll, Lewis. "What the Tortoise Said to Achilles". Mind, n.s., 4 (1895), pp. 278-80.
* Hofstadter, Douglas. Gödel, Escher, Bach: an Eternal Golden Braid. See the second dialogue, entitled "Two-Part Invention." Dr. Hofstadter appropriated the characters of Achilles and the Tortoise for other, original, dialogues in the book which alternate contrapuntally with prose chapters.
* A number of websites, including [1], [2], and [3]

5. References

* Isashiki Takahiro (1999). What Can We Learn from Lewis Carroll's Paradox?. In Memoirs of the Faculty of Education, Miyazaki University: Humanities, no. 86, pp. 79-98. The paper is in Japanese only, except for the abstract. A slightly extended version of the English-language abstract is available from [4].

Another author provides a more extended summary at [5] (currently down, and unavailable from archive.org).

Drinker Paradox

The drinker paradox is a theorem of classical predicate logic that can be stated: there is someone in the pub such that, if he or she is drinking, then everyone in the pub is drinking. Writing D(x) for "x is drinking", the actual theorem is

∃x [D(x) → ∀y D(y)].

The paradox was popularised by the mathematical logician Raymond Smullyan, who called it the "drinking principle" in his book What Is the Name of this Book? [1]

Contents:
1. Proof of the paradox
2. Discussion
3. References
4. External links

1. Proof of the paradox

The proof begins by recognizing it is true that either everyone in the pub is drinking (in this particular round of drinks), or at least one person in the pub isn't drinking.

On the one hand, suppose everyone is drinking. For any particular person, it can't be wrong to say that if that particular person is drinking, then everyone in the pub is drinking — because everyone is drinking.

Suppose, on the other hand, at least one person isn't drinking. For that particular person, it still can't be wrong to say that if that particular person is drinking, then everyone in the pub is drinking — because that person is, in fact, not drinking.

Either way, there is someone in the pub such that, if he or she is drinking, then everyone in the pub is drinking. Hence the paradox.
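Because the proof is a case split over finitely many patrons, the theorem can be verified exhaustively for small pubs. A sketch (function names invented for illustration):

```python
from itertools import product

def drinker_holds(drinking):
    """True iff some patron x satisfies: if x drinks, everyone drinks."""
    everyone = all(drinking)
    # The material conditional "x drinks -> everyone drinks" is true
    # when x is not drinking, or when everyone is drinking.
    return any((not d) or everyone for d in drinking)

# Check every possible drinking pattern for pubs of 1 to 4 patrons.
ok = all(
    drinker_holds(pattern)
    for n in range(1, 5)
    for pattern in product([False, True], repeat=n)
)
print(ok)                    # True
print(drinker_holds(()))     # False: the theorem needs a non-empty pub
```

The empty-tuple case anticipates the non-empty domain discussion in section 2.1 below.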

2. Discussion

This proof illustrates several properties of classical predicate logic which do not always agree with ordinary language.

2. 1. Non-empty domain

First, we didn't need to assume there was anyone in the pub. The assumption that the domain is non-empty is built into the inference rules of classical predicate logic. We can deduce D(t) from ∀x D(x), but of course if the domain were empty (in this case, if there were nobody in the pub) then the proposition D(t) is not well-formed for any closed expression t.

Nevertheless, free logic, which allows for empty domains, still has something like the drinker paradox in the form of the theorem:

∃y (y = y) → ∃x [D(x) → ∀y D(y)]

Or in words:

If there is anyone in the pub at all, then there is someone such that, if he or she is drinking, then everyone in the pub is drinking.

2. 2. Excluded middle

The above proof begins by saying that either everyone is drinking, or someone is not drinking. This uses the validity of excluded middle for the statement "everyone is drinking", which is always available in classical logic. If the logic does not admit arbitrary excluded middle (for example, if the logic is intuitionistic), then the truth of ∀x D(x) ∨ ¬∀x D(x) must first be established, i.e., ∀x D(x) must be shown to be decidable.

As a simple example of one such decision procedure, if there are finitely many customers in the pub, one can simply check that everyone in the pub drinks, or find one person who doesn't drink. But if D is given no semantics, then there is no proof of the drinker paradox in intuitionistic logic. Indeed, assuming the drinking principle over infinite domains leads to various classically valid but intuitionistically unacceptable conclusions.

For instance, it would allow for a simple solution of Goldbach's conjecture, which is one of the oldest unsolved problems in mathematics. It asks whether all even numbers greater than two can be expressed as the sum of two prime numbers. Applying the drinking principle, it would follow that there exists an even number greater than two, such that, if it is the sum of two primes, then all even numbers greater than two are the sum of two primes. However, the fundamental property of a constructive proof is that when a number with a certain property is proven to exist, the actual number is specified. It would then suffice to check whether that particular number is the sum of two primes, which has a finite decision process. If it were not, then obviously it would be a refutation of the conjecture. But if it were, then all of them would be, and the conjecture would be proven.
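The "finite decision process" for one particular number is easy to spell out. The sketch below (helper names invented) searches for a two-prime decomposition of a given even number:

```python
def is_prime(n):
    """Trial-division primality test; adequate for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_witness(n):
    """Return primes (p, q) with p + q == n, or None if none exist."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

print(goldbach_witness(28))  # (5, 23)
```

Constructively, the hard part is exhibiting such a witness for every even number; the drinking principle would conjure a single decisive number without saying which one it is.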

Nevertheless, intuitionistic (free) logic still has something like the drinker paradox in the form of the theorem:

¬∃x [∃y φ(y) → φ(x)] → ¬∃y (y = y)

If we take φ(x) to be ¬D(x), that is, x is not drinking, then in words this reads:

If there isn't someone in the pub such that, if anyone in the pub isn't drinking, then he or she isn't drinking either, then there is nobody in the pub at all.

In classical logic this would be equivalent to the previous statement, from which it can be derived by two transpositions.

2. 3. Material versus indicative conditional

Most important to the paradox is that the conditional in classical (and intuitionistic) logic is the material conditional. It has the property that A → B is true if B is true or if A is false (in classical logic, but not intuitionistic logic, this is also a necessary condition).

So as it was applied here, the statement "if he or she is drinking, then everyone is drinking" was taken to be correct in one case, if everyone was drinking, and in the other case, if he or she was not drinking — even though his or her drinking may not have had anything to do with anyone else's drinking.

In natural language, on the other hand, typically "if...then" is used as an indicative conditional.
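The material conditional underlying both cases of the proof can be tabulated directly (a minimal sketch):

```python
def implies(a, b):
    """Material conditional: A -> B is false only when A is true and B is false."""
    return (not a) or b

for a in (True, False):
    for b in (True, False):
        print(a, b, implies(a, b))
# True True True
# True False False
# False True True
# False False True
```

The two rows with a false antecedent are what make the non-drinker a correct witness, even though his or her drinking "may not have had anything to do with anyone else's drinking".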

3. References

1. Raymond Smullyan (1990). What Is the Name of this Book?, Penguin Books Ltd., chapter 14. ISBN 0-14-013511-1.

4. External links

* Formal Proof of Drinker Paradox @ filomatia.net

Unexpected hanging paradox

The unexpected hanging paradox is an alleged paradox about a prisoner's response to an unusual death sentence. It is alternatively known as the hangman paradox, the fire drill paradox, or the unexpected exam (or pop quiz) paradox.

Despite significant academic interest, no consensus on its correct resolution has yet been established [1]. One approach, offered by the logical school, suggests that the problem arises from a self-contradictory, self-referencing statement at the heart of the judge's sentence. Another approach, offered by the epistemological school, suggests that the unexpected hanging paradox is an example of an epistemic paradox because it turns on our concept of knowledge [2]. Even though it is apparently simple, the paradox's underlying complexities have even led to its being called a "significant problem" for philosophy [3].

Contents:
1. Formalizing the paradox
2. The logical school
3. The epistemological school
4. The common-sense school
5. See also
6. References
7. External links

1. Formalizing the paradox

The paradox runs as follows:

A judge tells a condemned prisoner that he will be hanged at noon on one weekday in the following week but that the execution will be a surprise to the prisoner. He will not know the day of the hanging until the executioner knocks on his cell door at noon that day. Having reflected on his sentence, the prisoner draws the conclusion that he will escape from the hanging. His reasoning is in several parts. He begins by concluding that if the hanging were on Friday then it would not be a surprise, since he would know by Thursday night that he was to be hanged the following day, as it would be the only day left (in that week). Since the judge's sentence stipulated that the hanging would be a surprise to him, he concludes it cannot occur on Friday. He then reasons that the hanging cannot be on Thursday either, because that day would also not be a surprise. On Wednesday night he would know that, with two days left (one of which he already knows cannot be execution day), the hanging should be expected on the following day. By similar reasoning he concludes that the hanging can also not occur on Wednesday, Tuesday or Monday. Joyfully he retires to his cell confident that the hanging will not occur at all. The next week, the executioner knocks on the prisoner's door at noon on Wednesday — an utter surprise to him. Everything the judge said has come true. [4]
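The prisoner's elimination argument is a backward induction, and its mechanical skeleton is short (a toy sketch of the reasoning, not an endorsement of it):

```python
# Backward induction: repeatedly strike the latest remaining day, on the
# grounds that a hanging there "would not be a surprise".
days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

possible = list(days)
while possible:
    day = possible.pop()    # the latest day still in play
    print(f"{day} ruled out: it would be the only candidate left, no surprise")

print(possible)             # []: the prisoner concludes he cannot be hanged
```

Where, if anywhere, this tidy loop misrepresents the judge's sentence is what the rest of the discussion is about.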

Other versions of the paradox replace the death sentence with a surprise fire drill, examination, or lion behind a door.

The informal nature of everyday language allows for multiple interpretations of the paradox. In the extreme case, a prisoner who is paranoid might feel certain in his knowledge that the executioner will arrive at noon on Monday, then certain that he will come on Tuesday and so forth, thus ensuring that every day really is a "surprise" to him. But even without adding this element to the story, the vagueness of the account prohibits one from being objectively clear about which formalization truly captures its essence. There has been considerable debate between the logical school, which uses mathematical language, and the epistemological school, which employs concepts such as knowledge, belief and memory, over which formulation is correct.

2. The logical school

Formulation of the judge's announcement into formal logic is made difficult by the vague meaning of the word "surprise". A first stab at formulation might be:

* The prisoner will be hanged next week and its date will not be deducible from the assumption that the hanging will occur sometime during the week (A)

Given this announcement the prisoner can deduce that the hanging will not occur on the last day of the week. However, in order to reproduce the next stage of the argument, which eliminates the penultimate day of the week, the prisoner must argue that his ability to deduce, from statement (A), that the hanging will not occur on the last day implies that a penultimate-day hanging would not be surprising. But since the meaning of "surprising" has been restricted to "not deducible from the assumption that the hanging will occur during the week" rather than "not deducible from statement (A)", the argument is blocked.

This suggests that a better formulation would in fact be:

* The prisoner will be hanged next week and its date will not be deducible in advance using this statement as an axiom (B)

Some authors have claimed that the self-referential nature of this statement is the source of the paradox. Fitch [5] has shown that this statement can still be expressed in formal logic. Using an equivalent form of the paradox which reduces the length of the week to just two days, he proved that although self-reference is not illegitimate in all circumstances, it is in this case because the statement is self-contradictory.

2. 1. Objections

The first objection often raised to the logical school's approach is that it fails to explain how the judge's announcement appears to be vindicated after the fact. If the judge's statement is self-contradictory, how does he manage to be right all along? This objection rests on an understanding of the conclusion to be that the judge's statement is self-contradictory and therefore the source of the paradox. However, the conclusion is more precisely that in order for the prisoner to carry out his argument that the judge's sentence cannot be fulfilled, he must interpret the judge's announcement as (B). A reasonable assumption would be that the judge did not intend (B) but that the prisoner misinterprets his words to reach his paradoxical conclusion. The judge's sentence appears to be vindicated afterwards but the statement which is actually shown to be true is that "the prisoner will be psychologically surprised by the hanging". This statement in formal logic would not allow the prisoner's argument to be carried out.

A related objection is that the paradox only occurs because the judge tells the prisoner his sentence (rather than keeping it secret) — which suggests that the act of declaring the sentence is important. Some have argued that since this action is missing from the logical school's approach, it must be an incomplete analysis. But the action is included implicitly. The public utterance of the sentence and its context changes the judge's meaning to something like "there will be a surprise hanging despite my having told you that there will be a surprise hanging". The logical school's approach does implicitly take this into account.

2. 2. Leaky Inductive Argument

The argument that first excludes Friday, and then in turn each latest remaining day of the week, is an inductive one. The prisoner assumes that by Thursday he will know the hanging is due on Friday, but he does not know that before Thursday. By trying to carry an inductive argument backward in time based on a fact known only by Thursday, the prisoner may be making an error. The conditional statement "If I reach Thursday afternoon alive, then Thursday will be the latest possible date for the hanging" does little to reassure the condemned man. The prisoner's argument in any case carries the seeds of its own destruction: if he is right, then he is wrong, and he can be hanged any day, including Friday.

The counter-argument to this is that in order to claim that a statement will not be a surprise, it is not necessary to predict the truth or falsity of the statement at the time the claim is made, but only to show that such a prediction will become possible in the interim period. It is indeed true that the prisoner does not know on Monday that he will be hanged on Friday, nor that he will still be alive on Thursday. However, he does know on Monday that if the hangman, as it turns out, knocks on his door on Friday, he will already have expected that (and been alive to do so) since Thursday night; thus, if the hanging occurs on Friday then it will certainly have ceased to be a surprise at some point in the interim period between Monday and Friday. The fact that it has not yet ceased to be a surprise at the moment the claim is made is not relevant. This works for the inductive case too. When the prisoner wakes up on any given day on which the last possible hanging day is tomorrow, he will indeed not know for certain that he will survive to see tomorrow. However, he does know that if he does survive today, he will then know for certain that he must be hanged tomorrow, and thus by the time he is actually hanged tomorrow it will have ceased to be a surprise. This removes the leak from the argument.

2. 3. Additivity of surprise

A further objection raised by some commentators is that the property of being a surprise may not be preserved when events are combined by disjunction. For example, the event of "a person's house burning down" would probably be a surprise to them, but the event of "a person's house either burning down or not burning down" would certainly not be a surprise, as one of these must always happen, and thus it is absolutely predictable that the combined event will happen. Which particular one of the combined events actually happens can still be a surprise. By this argument, the prisoner's arguments that each day cannot be a surprise do not follow the regular pattern of induction, because adding extra "non-surprise" days only dilutes the argument rather than strengthening it. By the end, all he has proven is that he will not be surprised to be hanged sometime during the week; but he would not have been anyway, as the judge already told him this in statement (A).

3. The epistemological school

Various epistemological formulations have been proposed which show that the prisoner's tacit assumptions about what he will know in the future, together with several plausible assumptions about knowledge, are inconsistent.

Chow (1998) provides a detailed analysis of a version of the paradox in which a surprise examination is to take place on one of two days. Applying Chow's analysis to the case of the unexpected hanging (again with the week shortened to two days for simplicity), we start with the observation that the judge's announcement seems to affirm three things:

* S1: The hanging will occur on Monday or Tuesday.

* S2: If the hanging occurs on Monday, then the prisoner will not know on Sunday evening that it will occur on Monday.

* S3: If the hanging occurs on Tuesday, then the prisoner will not know on Monday evening that it will occur on Tuesday.

As a first step, the prisoner reasons that a scenario in which the hanging occurs on Tuesday is impossible because it leads to a contradiction: on the one hand, by S3, the prisoner would not be able to predict the Tuesday hanging on Monday evening; but on the other hand, by S1 and process of elimination, the prisoner would be able to predict the Tuesday hanging on Monday evening.

Chow's analysis points to a subtle flaw in the prisoner's reasoning. What is impossible is not a Tuesday hanging. Rather, what is impossible is a situation in which the hanging occurs on Tuesday despite the prisoner knowing on Monday evening that the judge's assertions S1, S2, and S3 are all true.

The prisoner's reasoning, which gives rise to the paradox, is able to get off the ground because the prisoner tacitly assumes that on Monday evening, he will (if he is still alive) know S1, S2, and S3 to be true. This assumption seems unwarranted on several different grounds. It may be argued that the judge's pronouncement that something is true can never be sufficient grounds for the prisoner knowing that it is true. Further, even if the prisoner knows something to be true in the present moment, unknown psychological factors may erase this knowledge in the future. Finally, Chow suggests that because the statement which the prisoner is supposed to "know" to be true is a statement about his inability to "know" certain things, there is reason to believe that the unexpected hanging paradox is simply a more intricate version of Moore's paradox. A suitable analogy can be reached by reducing the length of the week to just one day. Then the judge's sentence becomes: You will be hanged tomorrow, but you do not know that.
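Chow's point can be caricatured in a two-day model (an invented encoding, not Chow's formalism): the prisoner can predict a Tuesday hanging on Monday evening only if he then counts the judge's announcement itself as knowledge.

```python
def predicts_tuesday(hanging_day, knows_announcement):
    """Does the prisoner foresee the hanging on Monday evening?"""
    survived_monday = hanging_day == "Tuesday"
    # Elimination needs S1 ("Monday or Tuesday") to be known, not merely true.
    return survived_monday and knows_announcement

# Treating S1-S3 as known makes a Tuesday hanging self-defeating (S3 fails):
print(predicts_tuesday("Tuesday", True))    # True
# Without that assumption, a Tuesday hanging is a consistent surprise:
print(predicts_tuesday("Tuesday", False))   # False
```

What is ruled out is not the Tuesday hanging itself but the combination of a Tuesday hanging with the prisoner's knowledge of S1, S2, and S3, which is exactly where Chow locates the flaw.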

4. The common-sense school

First note that the prisoner assumes that he will be hanged (i.e. it is a premise of his argument). He assumes also that he is going to be surprised, and also that the last day he can be hanged is Friday. From these premises, he is able to conclude that he cannot be hanged on Friday (since he would know that he must be hanged that day and therefore could not be surprised in such an instance). In order not, at this point, to contradict the premises of his argument, he still assumes that a day does exist upon which he can be hanged and surprised. In the end however, the conclusion of his argument is that there is no such day, i.e. a contradiction of the premises from which he started his argument. In such a case, the rules of logic dictate that the only valid conclusion is that either the argument is incorrect, the premises contain a contradiction, or both. The prisoner cannot automatically, from the deduction of a contradiction, conclude that either he will not be hanged or he will not be surprised. That his argument is free from mistakes is something he did not demonstrate.

Common sense tells us that the prisoner can indeed be both hanged and surprised (and on a day before Friday). If such is the case, the premises of the prisoner's argument would seem to be beyond question. That leaves us with his argument.

The prisoner argues that he cannot be surprised on Thursday, since he already knows he cannot be hanged on Friday (if he is to be surprised). But if Thursday is reached without a hanging, he must be hanged either that day or the next. If he is certain that the hanging must fall on Thursday, he cannot expect to be surprised when it happens that day. But if he is not, in the end, going to be surprised, what reason does he have to suppose that Friday will not be the day of his hanging? He has none. Therefore, if he is certain that he will be hanged on Thursday, he cannot be certain that he will not be hanged on Friday - an absurdity. Because of the contradiction it implies, it is not logically possible for the prisoner to be certain that he will be hanged on Thursday. And if he cannot be certain, he can still be surprised.

And that reveals the flaw in the prisoner's argument.

5. See also

* Centipede game, whose Nash equilibrium is established by a similar backward-induction argument.
* Interesting number paradox

6. References


1. T. Y. Chow, "The surprise examination or unexpected hanging paradox," The American Mathematical Monthly, Jan 1998 [1]
2. Stanford Encyclopedia of Philosophy discussion of the hanging paradox together with other epistemic paradoxes
3. R. A. Sorensen, Blindspots, Clarendon Press, Oxford (1988)
4. "Unexpected Hanging Paradox". Wolfram MathWorld.
5. F. Fitch, "A Goedelized formulation of the prediction paradox," American Philosophical Quarterly 1 (1964), pp. 161-164

* D. J. O'Connor, "Pragmatic Paradoxes", Mind 1948, Vol. 57, pp. 358-9. The first appearance of the paradox in print. The author claims that certain contingent future tense statements cannot come true.
* M. Scriven, "Paradoxical Announcements", Mind 1951, vol. 60, pp. 403-7. The author critiques O'Connor and discovers the paradox as we know it today.
* R. Shaw, "The Unexpected Examination" Mind 1958, vol. 67, pp. 382-4. The author claims that the prisoner's premises are self-referring.
* C. Wright and A. Sudbury, "The Paradox of the Unexpected Examination," Australasian Journal of Philosophy, 1977, vol. 55, pp. 41-58. The first complete formalization of the paradox, and a proposed solution to it.
* A. Margalit and M. Bar-Hillel, "Expecting the Unexpected", Philosophia 1983, vol. 13, pp. 337-44. A history and bibliography of writings on the paradox up to 1983.
* C. S. Chihara, "Olin, Quine, and the Surprise Examination" Philosophical Studies 1985, vol. 47, pp. 19-26. The author claims that the prisoner assumes, falsely, that if he knows some proposition, then he also knows that he knows it.
* R. Kirkham, "On Paradoxes and a Surprise Exam," Philosophia 1991, vol. 21, pp. 31-51. The author defends and extends Wright and Sudbury's solution. He also updates the history and bibliography of Margalit and Bar-Hillel up to 1991.
* T. Y. Chow, "The surprise examination or unexpected hanging paradox," The American Mathematical Monthly Jan 1998 [2]
* P. Franceschi, "Une analyse dichotomique du paradoxe de l'examen surprise", Philosophiques, 2005, vol. 32-2, 399-421, English translation.
* M. Gardner, "The Paradox of the Unexpected Hanging", The Unexpected Hanging and Other Mathematical Diversions 1969. Completely analyzes the paradox and introduces other situations with similar logic.
* W.V.O. Quine, "On a So-called Paradox", Mind 1953, vol. 62, pp. 65-66.
* R. A. Sorensen, "Recalcitrant versions of the prediction paradox", Australasian Journal of Philosophy 1982, vol. 69, pp. 355-362.

7. External links

* Unexpected Hanging: explained using dramatization

Monday, December 22, 2008

Horse Paradox

The horse paradox is a falsidical paradox that arises from flawed demonstrations, purporting to use mathematical induction, of the statement All horses are the same colour. There is no genuine paradox: the arguments share a crucial flaw that makes them incorrect. George Pólya used this example to illustrate the subtle errors that can occur in attempts to prove statements by induction.

Contents:
1. The argument
2. Explanation
3. References
4. See also

1. The argument

The flawed argument claims to be based on mathematical induction, and proceeds as follows:

Suppose that we have a set of five horses. We wish to prove that they are all the same colour. Suppose that we had a proof that all sets of four horses were the same colour. If that were true, we could prove that all five horses are the same colour by removing a horse to leave a group of four horses. Do this in two ways, and we have two different groups of four horses. By our supposed existing proof, since these are groups of four, all horses in each of them must be the same colour. For example, the first, second, third and fourth horses constitute a group of four, and thus must all be the same colour; and the second, third, fourth and fifth horses also constitute a group of four and thus must also all be the same colour. Since the two groups share the second, third and fourth horses, all five horses in the group of five must be the same colour.

But how are we to get a proof that all sets of four horses are the same colour? We apply the same logic again. By the same process, a group of four horses could be broken down into groups of three, and then a group of three horses could be broken down into groups of two, and so on. Eventually we will reach a group size of one, and it is obvious that all horses in a group of one horse must be the same colour.

By the same logic we can also increase the group size. A group of five horses can be increased to a group of six, and so on upwards, so that all finite sized groups of horses must be the same colour.

2. Explanation

The argument above makes the implicit assumption that the two subsets of horses to which the induction assumption is applied have a common element. This is not true when n = 1, that is, when the original set only contains 2 horses.

Indeed, let the two horses be horse A and horse B. When horse A is removed, it is true that the remaining horses in the set are the same colour (only horse B remains). If horse B is removed instead, this leaves a different set containing only horse A, which may or may not be the same colour as horse B.

The problem in the argument is the assumption that because each of these two sets contains only one colour of horse, the original set also contained only one colour of horse. Because there are no common elements (horses) in the two sets, it is unknown whether the two horses share the same colour. The proof forms a falsidical paradox; it seems to show something manifestly false by valid reasoning, but in fact the reasoning is flawed. The horse paradox exposes the pitfalls arising from failure to consider special cases for which a general statement may be false.
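The missing overlap can be made concrete with a short sketch (illustrative code, not part of the original argument): the induction step needs the two (n-1)-horse groups to share at least one horse, which holds for every n >= 3 but fails at n = 2.

```python
# Illustrative sketch: the induction step "n-1 implies n" relies on the two
# (n-1)-sized subsets overlapping. Check when that overlap actually exists.

def subsets_overlap(n):
    """For a group of n horses, remove the last horse, then the first.
    Return True if the two resulting (n-1)-groups share a horse."""
    horses = list(range(n))
    without_last = set(horses[:-1])
    without_first = set(horses[1:])
    return len(without_last & without_first) > 0

for n in range(2, 6):
    print(n, subsets_overlap(n))
# The overlap exists for every n >= 3, but fails for n = 2 -- so the
# induction chain breaks at the very first step, from 1 horse to 2.
```

Since the step from 1 to 2 fails, the base case of one horse never propagates, even though every later step is valid.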

3. References

* Counting: The Art of Enumerative Combinatorics by George E. Martin, ISBN 0-387-95225-X

4. See also

* Pólya's proof that there is no "horse of a different color"

Raven Paradox

The Raven paradox, also known as Hempel's paradox or Hempel's ravens, is a paradox proposed by the German logician Carl Gustav Hempel in the 1940s to illustrate a problem where inductive logic violates intuition. It reveals the problem of induction.

Contents:
1. The paradox
2. Proposed Resolutions
3. The Role of Background Knowledge
4. Rejecting Nicod's Criterion
5. Proposed Resolutions which Reject the Equivalence Condition
6. Notes

1. The paradox

Hempel describes the paradox in terms of the hypothesis [1][2]:

(1) All ravens are black.

In strict logical terms, via contraposition, this statement is equivalent to:

(2) Everything that is not black is not a raven.

It should be clear that in all circumstances where (2) is true, (1) is also true; and likewise, in all circumstances where (2) is false (i.e. if we imagine a world in which something that was not black, yet was a raven, existed), (1) is also false. This establishes logical equivalence.
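The equivalence can be checked mechanically; a minimal sketch (the helper `implies` is ours, introduced for illustration):

```python
# Illustrative check of the contraposition step: per object, "raven -> black"
# and "not black -> not raven" have identical truth tables.
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

for is_raven, is_black in product([False, True], repeat=2):
    s1 = implies(is_raven, is_black)          # (1) all ravens are black (per object)
    s2 = implies(not is_black, not is_raven)  # (2) non-black things are non-ravens
    assert s1 == s2
print("statements (1) and (2) agree in every case")
```

The two statements agree on all four combinations of "raven" and "black", which is exactly what logical equivalence means here.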

Given a general statement such as all ravens are black, we would generally consider a form of the same statement that refers to a specific observable instance of the general class to constitute evidence for that general statement. For example,

(3) Nevermore, my pet raven, is black.

is clearly evidence supporting the hypothesis that all ravens are black.

The paradox arises when this same process is applied to statement (2). On sighting a green apple, we can observe:

(4) This green (and thus not black) thing is an apple (and thus not a raven).

By the same reasoning, this statement is evidence that (2) everything that is not black is not a raven. But since (as above) this statement is logically equivalent to (1) all ravens are black, it follows that the sight of a green apple offers evidence that all ravens are black.

2. Proposed Resolutions

Two apparently reasonable premises:

The Equivalence Condition (EC): If a proposition, X, provides evidence in favor of another proposition Y, then X also provides evidence in favor of any proposition which is logically equivalent to Y.

and

Nicod's Criterion (NC): A proposition of the form "All P are Q" is supported by the observation of a particular P which is Q.

can be combined to reach the seemingly paradoxical conclusion:

(PC): The observation of a green apple provides evidence that all ravens are black.

A resolution to the paradox must therefore accept (PC), reject (EC), or reject (NC). A satisfactory resolution should also explain why there naively appears to be a paradox. Solutions which accept the paradoxical conclusion can do this by presenting a proposition which we intuitively know to be false but which is easily confused with (PC), while solutions which reject (EC) or (NC) should present a proposition which we intuitively know to be true but which is easily confused with (EC) or (NC) [3].

2. 1. Approaches which Accept the Paradoxical Conclusion

2. 1. 1. Hempel's Resolution

Hempel himself accepted the paradoxical conclusion, arguing that the reason the result appears paradoxical is because we possess prior information without which the observation of a non-black non-raven would indeed provide evidence that all ravens are black.

He illustrates this with the example of the generalization "All sodium salts burn yellow", and asks us to consider the observation which occurs when somebody holds a piece of pure ice in a colorless flame which does not turn yellow.

This result would confirm the assertion, "Whatever does not burn yellow is no sodium salt", and consequently, by virtue of the equivalence condition, it would confirm the original formulation. Why does this impress us as paradoxical? The reason becomes clear when we compare the previous situation with the case of an experiment where an object whose chemical constitution is as yet unknown to us is held into a flame and fails to turn it yellow, and where subsequent analysis reveals it to contain no sodium salt. This outcome, we should no doubt agree, is what was to be expected on the basis of the hypothesis ... thus the data here obtained constitute confirming evidence for the hypothesis.

In the seemingly paradoxical cases of confirmation, we are often not actually judging the relation of the given evidence, E alone to the hypothesis H ... we tacitly introduce a comparison of H with a body of evidence which consists of E in conjunction with an additional amount of information which we happen to have at our disposal; in our illustration, this information includes the knowledge (1) that the substance used in the experiment is ice, and (2) that ice contains no sodium salt. If we assume this additional information as given, then, of course, the outcome of the experiment can add no strength to the hypothesis under consideration. But if we are careful to avoid this tacit reference to additional knowledge ... the paradoxes vanish. [4]

2. 1. 2. The Standard Bayesian Solution

One of the most popular proposed resolutions is to accept the conclusion that the observation of a green apple provides evidence that all ravens are black but to argue that the amount of confirmation provided is very small, due to the large discrepancy between the number of ravens and the number of non-black objects. According to this resolution, the conclusion appears paradoxical because we intuitively estimate the amount of evidence provided by the observation of a green apple to be zero, when it is in fact non-zero but very small.

I. J. Good's presentation of this argument in 1960 [5] is perhaps the best known, and variations of the argument have been popular ever since [6], although it had been presented in 1958 [7] and early forms of the argument appeared as early as 1940 [8].

Good's argument involves calculating the weight of evidence provided by the observation of a black raven or a white shoe in favor of the hypothesis that all the ravens in a collection of objects are black. The weight of evidence is the logarithm of the Bayes factor, which in this case is simply the factor by which the odds of the hypothesis changes when the observation is made. The argument goes as follows:

... suppose that there are N objects that might be seen at any moment, of which r are ravens and b are black, and that the N objects each have probability 1/N of being seen. Let H_i be the hypothesis that there are i non-black ravens, and suppose that the hypotheses H_i (i = 0, 1, ..., r) are initially equiprobable. Then, if we happen to see a black raven, the Bayes factor in favour of H_0 is

  P(black raven seen | H_0) / average_i P(black raven seen | H_i) = (r/N) / [(1/r) Σ_{i=1..r} (r−i)/N] = 2r/(r−1)

i.e. about 2 if the number of ravens in existence is known to be large. But the factor if we see a white shoe is only

  P(white shoe seen | H_0) / average_i P(white shoe seen | H_i) = ((N−b)/N) / [(1/r) Σ_{i=1..r} (N−b−i)/N] = (N−b) / (N−b−(r+1)/2)

and this exceeds unity by only about r/(2N−2b) if N−b is large compared to r. Thus the weight of evidence provided by the sight of a white shoe is positive, but is small if the number of ravens is known to be small compared to the number of non-black objects. [9]
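Good's two factors can be reproduced numerically. The sketch below assumes the model described in the quotation: N objects seen uniformly at random, r ravens, b black objects, and equiprobable hypotheses H_0, ..., H_r, where H_i says exactly i ravens are non-black; the concrete numbers are illustrative.

```python
# Sketch of Good's calculation (model assumed as in the quotation).

def bayes_factor(r, likelihood):
    """Bayes factor in favour of H_0, given an observation whose
    probability under H_i is likelihood(i), with H_1..H_r equiprobable."""
    p_H0 = likelihood(0)
    p_not_H0 = sum(likelihood(i) for i in range(1, r + 1)) / r  # average over H_1..H_r
    return p_H0 / p_not_H0

N, r, b = 1_000_000, 100, 500_000

# Seeing a black raven: under H_i there are r - i black ravens.
f_raven = bayes_factor(r, lambda i: (r - i) / N)

# Seeing a white shoe (a non-black non-raven): under H_i there are
# N - b - i of them (b black objects are fixed; i ravens are non-black).
f_shoe = bayes_factor(r, lambda i: (N - b - i) / N)

print(f_raven)  # 2r/(r-1): about 2 for large r
print(f_shoe)   # about 1 + r/(2N - 2b): barely above 1
```

With these illustrative numbers the black raven multiplies the odds of H_0 by roughly 2, while the white shoe multiplies them by only about 1.0001, matching Good's point that the shoe's evidential weight is positive but tiny.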

Many of the proponents of this resolution and variants of it have been advocates of Bayesian probability, and it is now commonly called the Bayesian Solution, although, as Chihara [10] observes, "there is no such thing as the Bayesian solution. There are many different `solutions' that Bayesians have put forward using Bayesian techniques." Noteworthy approaches using Bayesian techniques include Earman [11], Eells [12], Gibson [13], Hosiasson-Lindenbaum [14], Howson and Urbach [15], Mackie [16] and Hintikka [17], who claims that his approach is "more Bayesian than the so-called `Bayesian solution' of the same paradox." Bayesian approaches which make use of Carnap's theory of inductive inference include Humburg [18], Maher [19] and Fitelson et al. [20]. Vranas [21] introduced the term "Standard Bayesian Solution" to avoid confusion.

2. 2. The Carnapian Approach

Maher [22] accepts the paradoxical conclusion, and refines it:

A non-raven (of whatever color) confirms that all ravens are black because

(i) the information that this object is not a raven removes the possibility that this object is a counterexample to the generalization, and

(ii) it reduces the probability that unobserved objects are ravens, thereby reducing the probability that they are counterexamples to the generalization.

In order to reach (ii), he appeals to Carnap's theory of inductive probability, which is (from the Bayesian point of view) a way of assigning prior probabilities which naturally implements induction. According to Carnap's theory, the posterior probability, P(Fa | E), that an object a will have a predicate F, after the evidence E has been observed, is:

  P(Fa | E) = (n_F + λ p(Fa)) / (n + λ)

where p(Fa) is the initial probability that a has the predicate F; n is the number of objects which have been examined (according to the available evidence E); n_F is the number of examined objects which turned out to have the predicate F, and λ is a constant which measures resistance to generalization.

If λ is close to zero, P(Fa | E) will be very close to one after a single observation of an object which turned out to have the predicate F, while if λ is much larger than n, P(Fa | E) will be very close to p(Fa) regardless of the fraction of observed objects which had the predicate F.
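A minimal sketch of this behaviour, assuming Carnap's rule P(Fa | E) = (n_F + λ p) / (n + λ) (the function name is ours):

```python
# Sketch of Carnap's inductive rule: posterior that object a has predicate F
# after examining n objects, n_F of which had F, with prior p and
# "resistance to generalization" constant lam.

def carnap_posterior(n_F, n, p, lam):
    return (n_F + lam * p) / (n + lam)

p = 0.5
# lam close to zero: one confirming observation drives the posterior near 1.
print(carnap_posterior(1, 1, p, lam=0.01))    # close to 1

# lam much larger than n: the posterior stays near the prior p.
print(carnap_posterior(1, 1, p, lam=1000.0))  # close to 0.5
```

Small λ generalizes aggressively from a single instance; large λ barely moves from the prior, which is the trade-off the text describes.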

Using this Carnapian approach, Maher identifies a proposition which we intuitively (and correctly) know to be false, but which we easily confuse with the paradoxical conclusion. The proposition in question is the proposition that observing non-ravens tells us about the color of ravens. While this is intuitively false and is also false according to Carnap's theory of induction, observing non-ravens (according to that same theory) causes us to reduce our estimate of the total number of ravens, and thereby reduces the estimated number of possible counterexamples to the rule that all ravens are black.

Hence, from the Bayesian-Carnapian point of view, the observation of a non-raven does not tell us anything about the color of ravens, but it tells us about the prevalence of ravens, and supports "All ravens are black" by reducing our estimate of the number of ravens which might not be black.

3. The Role of Background Knowledge

Much of the discussion of the paradox in general and the Bayesian approach in particular has centred on the relevance of background knowledge. Surprisingly, Maher [23] shows that, for a large class of possible configurations of background knowledge, the observation of a non-black non-raven provides exactly the same amount of confirmation as the observation of a black raven. The configurations of background knowledge which he considers are those which are provided by a sample proposition, namely a proposition which is a conjunction of atomic propositions, each of which ascribes a single predicate to a single individual, with no two atomic propositions involving the same individual. Thus, a proposition of the form "A is a black raven and B is a white shoe" can be considered a sample proposition by taking "black raven" and "white shoe" to be predicates.

Maher's proof appears to contradict the result of the Bayesian argument, which was that the observation of a non-black non-raven provides much less evidence than the observation of a black raven. The reason is that the background knowledge which Good and others use can not be expressed in the form of a sample proposition - in particular, variants of the standard Bayesian approach often suppose (as Good did in the argument quoted above) that the total numbers of ravens, non-black objects and/or the total number of objects, are known quantities. Maher comments that, "The reason we think there are more non-black things than ravens is because that has been true of the things we have observed to date. Evidence of this kind can be represented by a sample proposition. But ... given any sample proposition as background evidence, a non-black non-raven confirms A just as strongly as a black raven does ... Thus my analysis suggests that this response to the paradox [i.e. the Standard Bayesian one] cannot be correct."

Fitelson et al. [24] examined the conditions under which the observation of a non-black non-raven provides less evidence than the observation of a black raven. They show that, if a is an object selected at random, Ba is the proposition that the object is black, and Ra is the proposition that the object is a raven, then a certain inequality relating the probabilities of these propositions and their negations is sufficient for the observation of a non-black non-raven to provide less evidence than the observation of a black raven.

This condition does not tell us how large the difference in the evidence provided is, but a later calculation in the same paper shows that the weight of evidence provided by a black raven exceeds that provided by a non-black non-raven by about -log q, where q = P(Ba | Ra ∧ ¬H) is the probability that a randomly selected raven is black given that the hypothesis H, that all ravens are black, is false. This is equal to the amount of additional information (in bits, if the base of the logarithm is 2) which is provided when a raven of unknown color is discovered to be black, given the hypothesis that not all ravens are black.

Fitelson et al. [25] explain that:

Under normal circumstances, q [= P(Ba | Ra ∧ ¬H)] may be somewhere around 0.9 or 0.95; so 1/q is somewhere around 1.11 or 1.05. Thus, it may appear that a single instance of a black raven does not yield much more support than would a non-black non-raven. However, under plausible conditions it can be shown that a sequence of n instances (i.e. of n black ravens, as compared with n non-black non-ravens) yields a ratio of likelihood ratios on the order of (1/q)^n, which blows up significantly for large n.

The authors point out that their analysis is completely consistent with the supposition that a non-black non-raven provides an extremely small amount of evidence although they do not attempt to prove it; they merely calculate the difference between the amount of evidence that a black raven provides and the amount of evidence that a non-black non-raven provides.

4. Rejecting Nicod's Criterion

4. 1. The Red Herring

Good [26] gives an example of background knowledge with respect to which the observation of a black raven decreases the probability that all ravens are black:

Suppose that we know we are in one or other of two worlds, and the hypothesis, H, under consideration is that all the ravens in our world are black. We know in advance that in one world there are a hundred black ravens, no non-black ravens, and a million other birds; and that in the other world there are a thousand black ravens, one white raven, and a million other birds. A bird is selected equiprobably at random from all the birds in our world. It turns out to be a black raven. This is strong evidence ... that we are in the second world, wherein not all ravens are black.

Good concludes that the white shoe is a "red herring": Sometimes even a black raven can constitute evidence against the hypothesis that all ravens are black, so the fact that the observation of a white shoe can support it is not surprising and not worth attention. Nicod's criterion is false, according to Good, and so the paradoxical conclusion does not follow.
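Good's example can be checked by direct computation; a short sketch using the bird counts from the quotation (variable names are ours):

```python
# Good's two-world example, computed directly. World 1: all ravens black
# (H holds). World 2: one white raven. Prior 1/2 each; a bird is drawn
# uniformly from all birds in our world.

from fractions import Fraction

birds_1 = 100 + 1_000_000        # 100 black ravens + a million other birds
birds_2 = 1000 + 1 + 1_000_000   # 1000 black ravens + 1 white raven + others

p_black_raven_1 = Fraction(100, birds_1)    # P(draw a black raven | world 1)
p_black_raven_2 = Fraction(1000, birds_2)   # P(draw a black raven | world 2)

# Posterior that we are in world 1 (where "all ravens are black" holds)
# after seeing a black raven:
posterior_1 = p_black_raven_1 / (p_black_raven_1 + p_black_raven_2)
print(float(posterior_1))  # well below the prior of 0.5
```

The posterior for the all-black world drops from 0.5 to under 0.1: seeing a black raven here is strong evidence that we are in the world containing a white raven, just as Good claims.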

Hempel rejected this as a solution to the paradox, insisting that the proposition 'c is a raven and is black' must be considered "by itself and without reference to any other information", and pointing out that it "... was emphasized in section 5.2(b) of my article in Mind ... that the very appearance of paradoxicality in cases like that of the white shoe results in part from a failure to observe this maxim." [27]

The question which then arises is whether the paradox is to be understood in the context of absolutely no background information (as Hempel suggests), or in the context of the background information which we actually possess regarding ravens and black objects, or with regard to all possible configurations of background information.

Good had shown that, for some configurations of background knowledge, Nicod's criterion is false (provided that we are willing to equate "inductively support" with "increase the probability of" - see below). The possibility remained that, with respect to our actual configuration of knowledge, which is very different from Good's example, Nicod's criterion might still be true and so we could still reach the paradoxical conclusion. Hempel, on the other hand, insists that it is our background knowledge itself which is the red herring, and that we should consider induction with respect to a condition of perfect ignorance.

4. 2. Good's Baby

In his proposed resolution, Maher implicitly made use of the fact that the proposition "All ravens are black" is highly probable when it is highly probable that there are no ravens. Good had used this fact before to respond to Hempel's insistence that Nicod's criterion was to be understood to hold in the absence of background information [28] :

...imagine an infinitely intelligent newborn baby having built-in neural circuits enabling him to deal with formal logic, English syntax, and subjective probability. He might now argue, after defining a raven in detail, that it is extremely unlikely that there are any ravens, and therefore it is extremely likely that all ravens are black, that is, that H is true. 'On the other hand', he goes on to argue, 'if there are ravens, then there is a reasonable chance that they are of a variety of colours. Therefore, if I were to discover that even a black raven exists I would consider H to be less probable than it was initially.'

This, according to Good, is as close as one can reasonably expect to get to a condition of perfect ignorance, and it appears that Nicod's condition is still false. Maher made Good's argument more precise by using Carnap's theory of induction to formalize the notion that if there is one raven, then it is likely that there are many [29] .

Maher's argument considers a universe of exactly two objects, each of which is very unlikely to be a raven (a one in a thousand chance) and reasonably unlikely to be black (a one in ten chance). Using Carnap's formula for induction, he finds that the probability that all ravens are black decreases from 0.9985 to 0.8995 when it is discovered that one of the two objects is a black raven.

Maher concludes that not only is the paradoxical conclusion true, but that Nicod's criterion is false in the absence of background knowledge (except for the knowledge that the number of objects in the universe is two and that ravens are less likely than black things).

4. 3. Distinguished Predicates

Quine [30] argued that the solution to the paradox lies in the recognition that certain predicates, which he called natural kinds, have a distinguished status with respect to induction. This can be illustrated with Nelson Goodman's example of the predicate grue. An object is grue if it is blue before (say) 2010 and green afterwards. Clearly, we expect objects which were blue before 2010 to remain blue afterwards, but we do not expect the objects which were found to be grue before 2010 to be grue afterwards. Quine's explanation is that "blue" is a natural kind; a privileged predicate which can be used for induction, while "grue" is not a natural kind and using induction with it leads to error.

This suggests a resolution to the paradox - Nicod's criterion is true for natural kinds, such as "blue" and "black", but is false for artificially contrived predicates, such as "grue" or "non-raven". The paradox arises, according to this resolution, because we implicitly interpret Nicod's criterion as applying to all predicates when in fact it only applies to natural kinds.

Another approach which favours specific predicates over others was taken by Hintikka [31] . Hintikka was motivated to find a Bayesian approach to the paradox which did not make use of knowledge about the relative frequencies of ravens and black things. Arguments concerning relative frequencies, he contends, cannot always account for the perceived irrelevance of evidence consisting of observations of objects of type A for the purposes of learning about objects of type not-A.

His argument can be illustrated by rephrasing the paradox using predicates other than "raven" and "black". For example, "All men are tall" is equivalent to "All short people are women", and so observing that a randomly selected person is a short woman should provide evidence that all men are tall. Despite the fact that we lack background knowledge to indicate that there are dramatically fewer men than short people, we still find ourselves inclined to reject the conclusion. Hintikka's example is: "... a generalization like 'no material bodies are infinitely divisible' seems to be completely unaffected by questions concerning immaterial entities, independently of what one thinks of the relative frequencies of material and immaterial entities in one's universe of discourse."

His solution is to introduce an order into the set of predicates. When the logical system is equipped with this order, it is possible to restrict the scope of a generalization such as "All ravens are black" so that it applies to ravens only and not to non-black things, since the order privileges ravens over non-black things. As he puts it:

If we are justified in assuming that the scope of the generalization 'All ravens are black' can be restricted to ravens, then this means that we have some outside information which we can rely on concerning the factual situation. The paradox arises from the fact that this information, which colors our spontaneous view of the situation, is not incorporated in the usual treatments of the inductive situation. [32]

5. Proposed Resolutions which Reject the Equivalence Condition

5. 1. Selective Confirmation

Scheffler and Goodman [33] took an approach to the paradox which incorporates Karl Popper's view that scientific hypotheses are never really confirmed, only falsified.

The approach begins by noting that the observation of a black raven does not prove that "All ravens are black" but it falsifies the contrary hypothesis, "No ravens are black". A non-black non-raven, on the other hand, is consistent with both "All ravens are black" and with "No ravens are black". As the authors put it:

... the statement that all ravens are black is not merely satisfied by evidence of a black raven but is favored by such evidence, since a black raven disconfirms the contrary statement that all ravens are not black, i.e. satisfies its denial. A black raven, in other words, satisfies the hypothesis that all ravens are black rather than not: it thus selectively confirms that all ravens are black.

Selective confirmation violates the equivalence condition since a black raven selectively confirms "All ravens are black" but not "All non-black things are non-ravens".

5. 1. 1. Probabilistic or Non-Probabilistic Induction

Scheffler and Goodman's concept of selective confirmation is an example of an interpretation of "provides evidence in favor of" which does not coincide with "increase the probability of". This must be a general feature of all resolutions which reject the equivalence condition, since logically equivalent propositions must always have the same probability.

It is impossible for the observation of a black raven to increase the probability of the proposition "All ravens are black" without causing exactly the same change to the probability that "All non-black things are non-ravens". If an observation inductively supports the former but not the latter, then "inductively support" must refer to something other than changes in the probabilities of propositions. A possible loophole is to interpret "All" as "Nearly all" - "Nearly all ravens are black" is not equivalent to "Nearly all non-black things are non-ravens", and these propositions can have very different probabilities [34] .

This raises the broader question of the relation of probability theory to inductive reasoning. Karl Popper argued that probability theory alone cannot account for induction. His argument involves splitting a hypothesis, , into a part which is deductively entailed by the evidence, , and another part. This can be done in two ways.

First, consider the splitting [35]:

  H = A ∧ B

where A, B and E satisfy the relevant probabilistic independence relations: P(A ∧ B) = P(A)P(B), and so on. The condition which is necessary for such a splitting of H and E to be possible is P(H | E) > P(H), that is, that H is probabilistically supported by E.

Popper's observation is that the part, B, of H which receives support from E actually follows deductively from E, while the part, A, of H which does not follow deductively from E receives no support at all from E - that is, P(A | E) = P(A).

Second, the splitting [36]:

  H = (H ∨ E) ∧ (H ∨ ¬E)

separates H into H ∨ E, which as Popper says, "is the logically strongest part of H (or of the content of H) that follows [deductively] from E," and H ∨ ¬E, which, he says, "contains all of H that goes beyond E." He continues:

Does E, in this case, provide any support for the factor H ∨ ¬E, which in the presence of H ∨ E is alone needed to obtain H? The answer is: No. It never does. Indeed, E countersupports H ∨ ¬E unless either P(H | E) = 1 or P(E) = 1 (which are possibilities of no interest). ...

This result is completely devastating to the inductive interpretation of the calculus of probability. All probabilistic support is purely deductive: that part of a hypothesis that is not deductively entailed by the evidence is always strongly countersupported by the evidence ... There is such a thing as probabilistic support; there might even be such a thing as inductive support (though we hardly think so). But the calculus of probability reveals that probabilistic support cannot be inductive support.
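The quoted result can be checked numerically. The sketch below (an illustration, not Popper and Miller's own derivation) draws random joint distributions over H and E and verifies the identity behind the claim: the support s(H ∨ ¬E, E) = P(H ∨ ¬E | E) − P(H ∨ ¬E) always equals −P(¬H|E) P(¬E), which is never positive.

```python
import random

def support_of_inductive_factor(p_he, p_hne, p_nhe, p_nhne):
    """Given the four cell probabilities P(H∧E), P(H∧¬E), P(¬H∧E), P(¬H∧¬E),
    return s(H∨¬E, E) = P(H∨¬E | E) - P(H∨¬E)."""
    p_e = p_he + p_nhe
    p_h_or_note = p_he + p_hne + p_nhne   # P(H∨¬E) = 1 - P(¬H∧E)
    return p_he / p_e - p_h_or_note       # P(H∨¬E | E) = P(H|E)

random.seed(0)
for _ in range(1000):
    # Random joint distribution over the four truth-value combinations.
    cells = [random.random() for _ in range(4)]
    total = sum(cells)
    p_he, p_hne, p_nhe, p_nhne = (c / total for c in cells)

    s = support_of_inductive_factor(p_he, p_hne, p_nhe, p_nhne)
    p_e = p_he + p_nhe
    p_nh_given_e = p_nhe / p_e

    # The identity s(H∨¬E, E) = -P(¬H|E)·P(¬E) holds, so s is never positive.
    assert abs(s - (-p_nh_given_e * (1 - p_e))) < 1e-12
    assert s <= 0
```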

5. 2. The Orthodox Approach

The orthodox Neyman-Pearson theory of hypothesis testing considers how to decide whether to accept or reject a hypothesis, rather than what probability to assign to the hypothesis. From this point of view, the hypothesis that "All ravens are black" is not accepted gradually, as its probability increases towards one when more and more observations are made, but is accepted in a single action as the result of evaluating the data which has already been collected. As Neyman and Pearson put it:

Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong. [37]

According to this approach, it is not necessary to assign any value to the probability of a hypothesis, although one must certainly take into account the probability of the data given the hypothesis, or given a competing hypothesis, when deciding whether to accept or to reject. The acceptance or rejection of a hypothesis carries with it the risk of error.

This contrasts with the Bayesian approach, which requires that the hypothesis be assigned a prior probability, which is revised in the light of the observed data to obtain the final probability of the hypothesis. Within the Bayesian framework there is no risk of error since hypotheses are not accepted or rejected; instead they are assigned probabilities.
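The contrast can be made concrete with a toy Bayesian calculation (a sketch only: the single rival hypothesis and its rate of 0.9 are assumptions made for this example). The hypothesis is given a prior probability, and each observed black raven pushes its posterior toward one; there is no act of acceptance to be in error about.

```python
def posterior_all_black(prior, alt_rate, n_black_ravens):
    """Posterior probability of H: 'All ravens are black' after observing
    n black ravens (and no non-black ones), when the only rival hypothesis
    says each observed raven is black with probability alt_rate."""
    likelihood_h = 1.0                       # H predicts every observed raven is black
    likelihood_alt = alt_rate ** n_black_ravens
    return (prior * likelihood_h) / (prior * likelihood_h
                                     + (1 - prior) * likelihood_alt)

# The hypothesis is never "accepted"; its probability simply climbs toward one
# as black ravens accumulate.
p = posterior_all_black(prior=0.5, alt_rate=0.9, n_black_ravens=50)
```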

An analysis of the paradox from the orthodox point of view has been performed, and leads to, among other insights, a rejection of the equivalence condition:

It seems obvious that one cannot both accept the hypothesis that all P's are Q and also reject the contrapositive, i.e. that all non-Q's are non-P. Yet it is easy to see that on the Neyman-Pearson theory of testing, a test of "All P's are Q" is not necessarily a test of "All non-Q's are non-P" or vice versa. A test of "All P's are Q" requires reference to some alternative statistical hypothesis of the form "a proportion r of all P's are Q", 0 < r < 1, whereas a test of "All non-Q's are non-P" requires reference to some statistical alternative of the form "a proportion r of all non-Q's are non-P", 0 < r < 1. But these two sets of possible alternatives are different ... Thus one could have a test of H without having a test of its contrapositive. [38]
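Giere's asymmetry can be illustrated with a minimal sketch of such a test (the sampling scheme is an assumption made for this example): sample n ravens and reject "All ravens are black" on seeing any non-black one. The test has power only against rivals of the form "a proportion r of ravens are black"; it says nothing about the rivals of the contrapositive, which concern the population of non-black things.

```python
def power_against(r, n):
    """Probability of rejecting 'All ravens are black' when in fact each
    sampled raven is independently black with probability r, under the rule
    'sample n ravens, reject on seeing any non-black raven'."""
    return 1 - r ** n

# Under the hypothesis itself (r = 1) the test never rejects, so its size is 0,
# while the rival r = 0.9 is detected almost surely with 50 sampled ravens.
```

With fifty sampled ravens the power against r = 0.9 is about 0.99, yet the same data bear not at all on statistical rivals of "All non-Q's are non-P", which would require sampling non-black things.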

5. 3. Rejecting Material Implication

The following propositions all imply one another: "Every object is either black or not a raven", "Every raven is black", and "Every non-black object is a non-raven." They are therefore, by definition, logically equivalent. However, the three propositions have different domains: the first proposition says something about "Every object", while the second says something about "Every raven".

The first proposition is the only one whose domain is unrestricted ("all objects"), so this is the only one which can be expressed in first order logic. It is logically equivalent to:

∀x (Bx ∨ ¬Rx)

and also to

∀x (Rx → Bx)

where → indicates the material conditional, according to which "If A then B" can be understood to mean "B, or not A".

It has been argued by several authors that material implication does not fully capture the meaning of "If A then B" (see the paradoxes of material implication). "For every object x, x is either black or not a raven" is true when there are no ravens. It is because of this that "All ravens are black" is regarded as true when there are no ravens. Furthermore, the arguments which Good and Maher used to criticize Nicod's criterion (see Good's Baby, above) relied on this fact - that "All ravens are black" is highly probable when it is highly probable that there are no ravens.

Some approaches to the paradox have sought to find other ways of interpreting "If A then B" and "All A's are B's" which would eliminate the perceived equivalence between "All ravens are black" and "All non-black things are non-ravens."

One such approach involves introducing a many-valued logic according to which "If A then B" has the truth-value I, meaning "Indeterminate" or "Inappropriate", when A is false [39]. In such a system, contraposition is not automatically allowed: "If A then B" is not equivalent to "If ¬B then ¬A". Consequently, "All ravens are black" is not equivalent to "All non-black things are non-ravens".
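A minimal sketch of such a three-valued conditional (the encoding of truth-values as 'T', 'F' and 'I' is an assumption made for this example) shows contraposition failing exactly where the paradox lives: at the non-black non-ravens.

```python
def conditional(a, b):
    """Three-valued 'If a then b': Indeterminate when the antecedent is
    false, otherwise classical."""
    if not a:
        return 'I'
    return 'T' if b else 'F'

def contrapositive(a, b):
    """'If not-b then not-a' evaluated with the same three-valued conditional."""
    return conditional(not b, not a)

# For a non-black non-raven (a = False: not a raven; b = False: not black),
# "If a then b" is Indeterminate, but its contrapositive comes out True - so
# the two conditionals are not equivalent in this logic.
assert conditional(False, False) == 'I'
assert contrapositive(False, False) == 'T'
```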

In this system, when contraposition occurs, the modality of the conditional involved changes from the indicative ("If that piece of butter has been heated to 32 °C then it has melted") to the counterfactual ("If that piece of butter had been heated to 32 °C then it would have melted"). According to this argument, this removes the alleged equivalence which is necessary to conclude that yellow cows can inform us about ravens:

In proper grammatical usage, a contrapositive argument ought not to be stated entirely in the indicative. Thus:

From the fact that if this match is scratched it will light, it follows that if it does not light it was not scratched.

is awkward. We should say:

From the fact that if this match is scratched it will light, it follows that if it were not to light it would not have been scratched. ...

One might wonder what effect this interpretation of the Law of Contraposition has on Hempel's paradox of confirmation. "If x is a raven then x is black" is equivalent to "If x were not black then x would not be a raven". Therefore whatever confirms the latter should also, by the Equivalence Condition, confirm the former. True, but yellow cows still cannot figure into the confirmation of "All ravens are black" because, in science, confirmation is accomplished by prediction, and predictions are properly stated in the indicative mood. It is senseless to ask what confirms a counterfactual. [40]

5. 4. Differing Results of Accepting the Hypotheses

Several commentators have observed that the propositions "All ravens are black" and "All non-black things are non-ravens" suggest different procedures for testing the hypotheses. For example, Good writes [41]:

As propositions the two statements are logically equivalent. But they have a different psychological effect on the experimenter. If he is asked to test whether all ravens are black he will look for a raven and then decide whether it is black. But if he is asked to test whether all non-black things are non-ravens he may look for a non-black object and then decide whether it is a raven.

More recently, it has been suggested that "All ravens are black" and "All non-black things are non-ravens" can have different effects when accepted [42]. The argument considers situations in which the total numbers or prevalences of ravens and black objects are unknown, but estimated. When the hypothesis "All ravens are black" is accepted, according to the argument, the estimated number of black objects increases, while the estimated number of ravens does not change.

It can be illustrated by considering the situation of two people who have identical information regarding ravens and black objects, and who have identical estimates of the numbers of ravens and black objects. For concreteness, suppose that there are 100 objects overall and that, according to the information available to the people involved, each object is just as likely to be a non-raven as it is to be a raven, and just as likely to be black as it is to be non-black:

P(Ra) = P(¬Ra) = P(Ba) = P(¬Ba) = 0.5

and the propositions Ra and Ba are independent of each other, and likewise for different objects a, b and so on. Then the estimated number of ravens is 50; the estimated number of black things is 50; the estimated number of black ravens is 25, and the estimated number of non-black ravens (counterexamples to the hypotheses) is 25.

One of the people performs a statistical test (e.g. a Neyman-Pearson test or the comparison of the accumulated weight of evidence to a threshold) of the hypothesis that "All ravens are black", while the other tests the hypothesis that "All non-black objects are non-ravens". For simplicity, suppose that the evidence used for the test has nothing to do with the collection of 100 objects dealt with here. If the first person accepts the hypothesis that "All ravens are black" then, according to the argument, about 50 objects whose colors were previously in doubt (the ravens) are now thought to be black, while nothing different is thought about the remaining objects (the non-ravens). Consequently, he should estimate the number of black ravens at 50, the number of black non-ravens at 25 and the number of non-black non-ravens at 25. By specifying these changes, this argument explicitly restricts the domain of "All ravens are black" to ravens.

On the other hand, if the second person accepts the hypothesis that "All non-black objects are non-ravens", then the approximately 50 non-black objects, about which it was uncertain whether each was a raven, will be thought to be non-ravens. At the same time, nothing different will be thought about the approximately 50 remaining objects (the black objects). Consequently, he should estimate the number of black ravens at 25, the number of black non-ravens at 25 and the number of non-black non-ravens at 50. According to this argument, since the two people disagree about their estimates after they have accepted the different hypotheses, accepting "All ravens are black" is not equivalent to accepting "All non-black things are non-ravens"; accepting the former means estimating more things to be black, while accepting the latter involves estimating more things to be non-ravens. Correspondingly, the argument goes, the former requires as evidence ravens which turn out to be black and the latter requires non-black things which turn out to be non-ravens [43].
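The bookkeeping in the two preceding paragraphs can be reproduced directly. In this sketch, "accepting" a hypothesis is modelled, as the argument describes, by moving the probability mass of its counterexamples (an interpretive assumption made for this example).

```python
# Toy model from the text: 100 objects, each independently a raven with
# probability 0.5 and black with probability 0.5.

def estimates(accept_ravens_black=False, accept_contrapositive=False):
    """Expected number of objects in each (kind, colour) cell out of 100."""
    cells = {('raven', 'black'): 0.25, ('raven', 'nonblack'): 0.25,
             ('nonraven', 'black'): 0.25, ('nonraven', 'nonblack'): 0.25}
    if accept_ravens_black:
        # Ravens of previously doubtful colour are now judged black.
        cells[('raven', 'black')] += cells[('raven', 'nonblack')]
        cells[('raven', 'nonblack')] = 0.0
    if accept_contrapositive:
        # Non-black objects of previously doubtful kind are now judged non-ravens.
        cells[('nonraven', 'nonblack')] += cells[('raven', 'nonblack')]
        cells[('raven', 'nonblack')] = 0.0
    return {cell: 100 * p for cell, p in cells.items()}

first = estimates(accept_ravens_black=True)      # estimates 50 black ravens
second = estimates(accept_contrapositive=True)   # estimates 50 non-black non-ravens
```

The two people end with different expected counts in every cell except the black non-ravens, matching the figures given above.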

5. 5. Existential Presuppositions

A number of authors have argued that propositions of the form "All P's are Q" presuppose that there are objects which are P [44]. This analysis has been applied to the raven paradox [45]:

... H1: "All ravens are black" and H2: "All nonblack things are nonravens" are not strictly equivalent ... due to their different existential presuppositions. Moreover, although H1 and H2 describe the same regularity - the nonexistence of nonblack ravens - they have different logical forms. The two hypotheses have different senses and incorporate different procedures for testing the regularity they describe.

A modified logic can take account of existential presuppositions using the presuppositional operator, `*'. For example,

∀x (*Rx → Bx)

can denote "All ravens are black" while indicating that it is ravens and not non-black objects which are presupposed to exist in this example.

... the logical form of each hypothesis distinguishes it with respect to its recommended type of supporting evidence: the possibly true substitution instances of each hypothesis relate to different types of objects. The fact that the two hypotheses incorporate different kinds of testing procedures is expressed in the formal language by prefixing the operator `*' to a different predicate. The presuppositional operator thus serves as a relevance operator as well. It is prefixed to the predicate `x is a raven' in H1 because the objects relevant to the testing procedure incorporated in "All ravens are black" include only ravens; it is prefixed to the predicate `x is nonblack' in H2 because the objects relevant to the testing procedure incorporated in "All nonblack things are nonravens" include only nonblack things. ... Using Fregean terms: whenever their presuppositions hold, the two hypotheses have the same referent (truth-value), but different senses; that is, they express two different ways to determine that truth-value. [46]

6. Notes

1. Hempel, CG (1945) Studies in the Logic of Confirmation I. Mind Vol 54, No. 213 p.1 JSTOR
2. Hempel, CG (1945) Studies in the Logic of Confirmation II. Mind Vol 54, No. 214 p.97 JSTOR
3. Maher, P (1999) Inductive Logic and the Ravens Paradox, Philosophy of Science, 66, p.50 JSTOR
4. Hempel, CG (1945) Studies in the Logic of Confirmation I. Mind Vol 54, No. 213 p.1 JSTOR
5. Good, IJ (1960) The Paradox of Confirmation, The British Journal for the Philosophy of Science, Vol. 11, No. 42, 145-149 JSTOR
6. Fitelson, B and Hawthorne, J (2006) How Bayesian Confirmation Theory Handles the Paradox of the Ravens, in Probability in Science, Chicago: Open Court Link
7. Alexander, HG (1958) The Paradoxes of Confirmation, The British Journal for the Philosophy of Science, Vol. 9, No. 35, P. 227 JSTOR
8. Hosiasson-Lindenbaum, J (1940) On Confirmation, The Journal of Symbolic Logic, Vol. 5, No. 4, p. 133 JSTOR
9. Note: Good used "crow" instead of "raven", but "raven" has been used here throughout for consistency.
10. Chihara (1981) Some Problems for Bayesian Confirmation Theory, British Journal for the Philosophy of Science, Vol. 38, No. 4 LINK
11. Earman, 1992 Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory, MIT Press, Cambridge, MA.
12. Eells, 1982 Rational Decision and Causality. New York: Cambridge University Press
13. Gibson, 1969 On Ravens and Relevance and a Likelihood Solution of the Paradox of Confirmation, LINK
14. Hosiasson-Lindenbaum 1940
15. Howson, Urbach, 1993 Scientific Reasoning: The Bayesian Approach, Open Court Publishing Company
16. Mackie, 1963 The Paradox of Confirmation, Brit. J. Phil. Sci. Vol. 13, No. 52, p. 265 LINK
17. Hintikka, 1969
18. Humburg 1986, The solution of Hempel's raven paradox in Rudolf Carnap's system of inductive logic, Erkenntnis, Vol. 24, No. 1, pp
19. Maher 1999
20. Fitelson 2006
21. Vranas (2002) Hempel's Raven Paradox: A Lacuna in the Standard Bayesian Solution LINK
22. Maher, 1999
23. Maher, 1999
24. Fitelson, 2006
25. Fitelson, 2006
26. Good 1967, The White Shoe is a Red Herring, British Journal for the Philosophy of Science, Vol. 17, No. 4, p. 322 JSTOR
27. Hempel 1967, The White Shoe - No Red Herring, The British Journal for the Philosophy of Science, Vol. 18, No. 3, p. 239 JSTOR
28. Good 1968, The White Shoe qua Red Herring is Pink, The British Journal for the Philosophy of Science, Vol. 19, No. 2, p. 156 JSTOR
29. Maher 2004, Probability Captures the Logic of Scientific Confirmation LINK
30. Quine, WV (1969) Natural Kinds, in Ontological Relativity and Other Essays. New York: Columbia University Press, p. 114
31. Hintikka, 1969
32. Hintikka J. 1969, Inductive Independence and the Paradoxes of Confirmation LINK
33. Scheffler I, Goodman NJ, Selective Confirmation and the Ravens, Journal of Philosophy, Vol. 69, No. 3, 1972 JSTOR
34. Gaifman, H (1979) Subjective Probability, Natural Predicates and Hempel's Ravens, Erkenntnis, Vol. 14, p. 105 Springer
35. Popper, K. Realism and the Aim of Science, Routlege, 1992, p. 325
36. Popper K, Miller D, (1983) A Proof of the Impossibility of Inductive Probability, Nature, Vol. 302, p. 687 Link
37. Neyman J, Pearson ES (1933) On the Problem of the Most Efficient Tests of Statistical Hypotheses, Phil. Transactions of the Royal Society of London, Series A, Vol. 231, p. 289 JSTOR
38. Giere, RN (1970) An Orthodox Statistical Resolution of the Paradox of Confirmation, Philosophy of Science, Vol. 37, No. 3, p.354 JSTOR
39. Farrell RJ (1979) Material Implication, Confirmation and Counterfactuals LINK
40. Farrell (1979)
41. Good (1960)
42. O'Flanagan (2008) Judgment LINK
43. O'Flanagan (2008)
44. Strawson PF (1952) Introduction to Logical Theory, Methuen & Co. London, John Wiley & Sons, New York
45. Cohen Y (1987) Ravens and Relevance, Erkenntnis LINK
46. Cohen (1987)

6. 1. References

* Franceschi, P. The Doomsday Argument and Hempel's Problem, English translation of a paper initially published in French in the Canadian Journal of Philosophy 29, 139-156, 1999, under the title Comment l'Urne de Carter et Leslie se Déverse dans celle de Hempel
* Hempel, C. G. A Purely Syntactical Definition of Confirmation. J. Symb. Logic 8, 122-143, 1943.
* Hempel, C. G. Studies in Logic and Confirmation. Mind 54, 1-26, 1945.
* Hempel, C. G. Studies in Logic and Confirmation. II. Mind 54, 97-121, 1945.
* Hempel, C. G. Studies in the Logic of Confirmation. In Marguerite H. Foster and Michael L. Martin, eds. Probability, Confirmation, and Simplicity. New York: Odyssey Press, 1966. 145-183.
* Whiteley, C. H. Hempel's Paradoxes of Confirmation. Mind 55, 156-158, 1945.

6. 2. External links

* PRIME Encyclopedia
* Hempel's Ravens, at Logical Paradoxes.Info

Paradox of entailment

The paradox of entailment is an apparent paradox derived from the principle of explosion, a law of classical logic stating that inconsistent premises always make an argument valid; that is, inconsistent premises imply any conclusion at all. This seems paradoxical, as it suggests that the following is a good argument:

It is raining
It is not raining

Therefore:

George Washington was a zombie.

Contents:
1. Understanding the paradox
2. Explaining the paradox
3. References
4. See also

1. Understanding the paradox

Validity is defined in classical logic as follows: An argument (consisting of premises and a conclusion) is valid if and only if there is no possible situation in which all the premises are true and the conclusion is false.

For example, an argument might run:

If it is raining, water exists (1st premise)
It is raining (2nd premise)
Water exists (Conclusion)

In this example there is no possible situation in which the premises are true while the conclusion is false. Since there is no counterexample, the argument is valid.

But one could construct an argument in which the premises are inconsistent. This would satisfy the test for a valid argument since there would be no possible situation in which all the premises are true and therefore no possible situation in which all the premises are true and the conclusion is false.

For example, an argument with inconsistent premises might run:

Matter has mass (1st premise)
Matter does not have mass (2nd premise)
All numbers are equal to 42 (Conclusion)

As there is no possible situation where both premises could be true, then there is certainly no possible situation in which the premises could be true while the conclusion was false. So the argument is valid whatever the conclusion is; inconsistent premises imply all conclusions.
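The reasoning above can be checked mechanically. A brute-force sketch of the classical validity test (the atom names are illustrative): an argument is valid exactly when no assignment of truth-values to its atoms makes every premise true and the conclusion false.

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Classical validity by exhaustive enumeration of truth assignments.
    Premises and conclusion are functions from an assignment dict to bool."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False   # found a counterexample: premises true, conclusion false
    return True

# Inconsistent premises: "It is raining" and "It is not raining".
premises = [lambda v: v['raining'], lambda v: not v['raining']]

# Any conclusion whatever follows - e.g. "George Washington was a zombie" -
# because no assignment makes both premises true.
assert valid(premises, lambda v: v['zombie'], ['raining', 'zombie'])
assert valid(premises, lambda v: not v['zombie'], ['raining', 'zombie'])
```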

2. Explaining the paradox

The strangeness of the paradox of entailment comes from the fact that the definition of validity in classical logic does not always agree with the use of the term in ordinary language. In everyday use validity suggests that the premises are consistent. In classical logic, the additional notion of soundness is introduced. A sound argument is a valid argument with all true premises. Hence a valid argument with an inconsistent set of premises can never be sound. Other suggested improvements to the notion of logical validity include strict implication and relevant implication.

3. References

4. See also

* Correlation does not imply causation
* False dilemma
