Formal Epistemology

Kenny Easwaran

J Philos Logic (2015) 44:651–662
DOI 10.1007/s10992-015-9350-1
Issue Title: JPL40: The Fortieth Anniversary Issue
Received: 13 January 2013 / Accepted: 15 June 2013 / Published online: 28 February 2015
© Springer Science+Business Media Dordrecht 2015

K. Easwaran, Department of Philosophy, Texas A&M University, College Station, TX 77840, USA
e-mail: [email protected]
1 Doxastic Theories
The application of formal tools to questions related to epistemology is of course not at all new. However, there has been a surge of interest in the field now known as formal epistemology over the past decade, with two annual conference series (the Formal Epistemology Workshops and the Formal Epistemology Festivals) and an annual summer school at Carnegie Mellon University, in addition to many one-off events devoted to the field. A glance at the programs of these series illustrates the wide-ranging set of topics that have been grouped under this name, ranging from rational choice theory and the foundations of statistics, to logics of knowledge and formal measures of coherence, with much more besides.
In this paper I will ignore most of these topics, and just trace some parts of the history of two ideas about belief whose current interaction may lead to future progress. One idea is the idea of belief, disbelief, and suspension of judgment as a meaningful tripartite distinction in our doxastic attitudes to various propositions. The other is the idea of a credence function, with some normative or descriptive connection to the mathematics of probability theory, as a summary of our doxastic attitudes to various propositions. I will give brief summaries of some interesting recent work (including some work in progress) on these two ideas and their interaction, but my discussion of these projects is far from complete, so I suggest that interested readers consult the original papers for a better understanding of them.
There are of course many other formal theories of belief that treat it in a more complicated way than the on-off state suggested by natural language terminology, but don't use the mathematics of standard probability theory. Many such theories are
discussed in Halpern [10], including various imprecise probability theories, ranking functions, Dempster-Shafer functions, and more. Many epistemologists argue that these other theories are more fruitful avenues of study than the theories that I will discuss here, but because of limitations of space I must narrow my focus to this small (but historically influential) subset of theories of belief.
1.1 Belief
The idea of belief, disbelief, and suspension is certainly quite old, as it appears to be the way that doxastic attitudes are most often described in natural language. However, the application of formal tools to this idea is somewhat more recent. Although there had certainly been thoughts of using formal logic to represent the inferences that an agent might draw from her beliefs, the idea of representing belief itself by an operator in the formal language may only go back to some work in modal logic in the 1950s and 60s. (For instance, see some of the references in Hendricks and Symons [12].)
Much of this early work treated belief as a normal modal operator, meaning that it is closed under conjunction and logical implication. Additionally, it was assumed to satisfy the axioms D, 4, and 5 of modal logic, meaning that beliefs are consistent, that if an agent believes p then she believes that she believes p, and that if an agent doesn't believe p, then she believes that she doesn't believe it. However, while these are quite fruitful formal assumptions for fitting belief in with traditional modal logics, and versions of them have continued to be accepted for logics of knowledge, they don't appear to give the correct logic of belief (Slaney [30]). Investigating the appropriate closure conditions for belief (or for justified belief) has been an important trend in formal approaches to epistemology in recent years, as I will mention later.
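In standard doxastic notation (writing Bp for "the agent believes that p"), these assumptions amount to the following schemas:

```latex
\begin{align*}
\text{(K)} &\quad B(p \rightarrow q) \rightarrow (Bp \rightarrow Bq) && \text{closure under logical implication}\\
\text{(D)} &\quad Bp \rightarrow \neg B\neg p && \text{consistency of belief}\\
\text{(4)} &\quad Bp \rightarrow BBp && \text{positive introspection}\\
\text{(5)} &\quad \neg Bp \rightarrow B\neg Bp && \text{negative introspection}
\end{align*}
```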
Another important topic in formal theories of belief has involved investigating how beliefs ought to change in light of new information. Harman [11] argued that although many philosophers had implicitly thought of the logical entailment relation as telling us about the appropriate way to update beliefs in light of new evidence (which had motivated some non-classical logics, like relevance logic [26]), this must be an over-simplification, because sometimes the appropriate thing to do in light of new evidence is to retract some of one's old beliefs.
The literature on belief revision has accepted Harman's criticism of the simple rule of drawing logical consequences of one's new evidence, but has sought to describe a formal theory of how to respond to evidence, generally rejecting his suggestion that there are always multiple rationally permissible ways to respond to it. In particular, there has been interest in characterizing situations in which one's update should involve rejecting earlier beliefs, and distinguishing them from situations in which one should just add the new evidence to this set. One historically influential characterization, with an important early statement in Alchourrón et al. [1], has come to be known as the AGM theory of belief revision. This tradition makes the basic assumption that one should retract old beliefs only when learning something incompatible with one's total set of beliefs, and then proceeds to investigate different methods of belief retraction that neither reject too much nor too little. However, these theories have always been much more complicated than the update rule of conditionalization that has been proposed for credence.
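To illustrate the basic contrast between simply adding new information and retracting old beliefs, here is a toy possible-worlds sketch in Python. It is not the AGM construction itself (the function name, the example worlds, and the simple "most plausible remaining worlds" fallback are illustrative assumptions), but it shows the two cases the literature distinguishes.

```python
def revise(belief_worlds, evidence_worlds, plausibility):
    """Toy revision over a finite set of possible worlds. If the evidence is
    consistent with what is already believed, just add it (expansion);
    otherwise retract, retreating to the most plausible worlds compatible
    with the evidence. The plausibility ranking is an illustrative stand-in
    for the extra structure that real revision theories posit."""
    overlap = belief_worlds & evidence_worlds
    if overlap:
        return overlap
    best = min(plausibility[w] for w in evidence_worlds)
    return {w for w in evidence_worlds if plausibility[w] == best}

# The agent believes the actual world is w1 or w2; learning something that
# holds only at w3 or w4 forces retraction rather than mere addition.
ranks = {"w1": 0, "w2": 1, "w3": 2, "w4": 3}
print(revise({"w1", "w2"}, {"w2", "w3"}, ranks))  # {'w2'}: consistent, so expansion
print(revise({"w1", "w2"}, {"w3", "w4"}, ranks))  # {'w3'}: inconsistent, so retraction
```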
1.2 Credence
The idea of credence as a sort of doxastic probability is certainly a much newer idea than the idea of belief as something that one has or lacks (the mathematical theory of probability itself only dates back a few centuries), but probability as a formal theory of credence is perhaps older than the explicit use of logics of belief. One of the earliest full statements of this view is in Ramsey [27], which also introduces several of the main contemporary arguments in favor of using the mathematics of probability theory. There are certainly earlier applications of probability to epistemic issues (ranging from the 17th century Port Royal Logic to Keynes [17]), but Ramsey is one of the first to make explicit the idea that the doxastic state itself can be represented by a probability function.
Ramsey begins by noting that in order to have a quantitative theory of credence, there must be some convention for representing credences numerically. He rejects a phenomenological characterization of credence, and suggests that the right way to conceive of credence is as a kind of disposition to action. Thus, the credence is represented by the ratio between the differences in values of various outcomes that would motivate one to act on it. On a simplified picture of the value of money, we could say that one's credence in p is the amount of guaranteed money one would forego in order to get $1 if p is true and nothing otherwise.
Ramsey argues in two different ways that on this conception of credence, a rational agent would have credences that satisfy the probability axioms. The first argument he gives is the more general one, where he imagines that an agent may value guaranteed outcomes (consisting of a specification of every fact in the history of the world) in any way, and then uses various assumptions about how the preferences of such an agent ought to extend to uncertain outcomes to construct numerical measures both of her utilities for outcomes and of her credences in various propositions. This representation theorem argument has since been extended by decision theorists such as Savage [29] and Jeffrey [14], who have greatly refined the rationality assumptions that go into it. Interestingly, on Jeffrey's version, the numerical representation turns out not to be unique, so that there is no fact of the matter about the numerical values of an agent's credences; instead, there is a range of such values, and the extent of this range is determined by the ratio between the greatest possible difference in utilities for the agent and the smallest possible difference in utilities.
Ramsey's second argument is made in an offhand remark on p. 182: "If anyone's mental condition violated these laws [the probability axioms], his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event." Rather than considering the complete pattern of preferences an agent has over all uncertain choices, and extracting numerical values from these, this argument assumes that an agent's credences directly correspond to prices for certain simple decisions under uncertainty (which we can think of as bets), and shows that the same uncertain gamble can be made in multiple different ways out of bets, which means that a rational agent must have prices that give rise to the same sum for these different combinations. This requirement turns out to exactly correspond to the axioms of probability theory. Versions of this argument are worked out in greater
detail by De Finetti [5], and have since become the most common presentation of Ramsey's argument in much of the philosophical literature.
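As a minimal numerical sketch of this betting argument (assuming, with Ramsey and de Finetti, that the agent will buy or sell a $1 bet on a proposition at a price equal to her credence in it), consider credences in p and in not-p that fail to sum to 1:

```python
# Credences in p and in not-p that violate additivity (they sum to 0.8, not 1).
credence_p, credence_not_p = 0.3, 0.5

# The agent regards her credence in a proposition as a fair price for a bet
# that pays $1 if that proposition is true, so she is willing to sell both
# bets at those prices. The bookie buys both.
received_now = credence_p + credence_not_p        # agent collects $0.80

for p_is_true in (True, False):
    payout = 1.0          # exactly one of the two bets pays off, either way
    print(f"p is {p_is_true}: agent's net gain = {received_now - payout:+.2f}")
# Both lines show -0.20: a sure loss, the "book" made against her.
```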
In recent years, many philosophers have started worrying about Ramsey's conceptual connection between credence and action, and have thought that it introduces too many pragmatic considerations about action into something that is supposed to be purely epistemic. One prominent non-Ramseyan proposal is to consider credence not as the measure of a disposition to action, but rather as an estimate of the truth value of a proposition, where we conventionally represent truth by the number 1 and falsehood by the number 0 (Joyce [16]). The idea is that the fundamental epistemic value is accuracy of one's estimates of truth values. Joyce then shows that if accuracy is measured in a way that satisfies several formal assumptions, then a set of estimates that fails to satisfy the probability axioms is necessarily dominated by some other set of estimates that satisfies them. That is, no matter what the world is actually like, the second set of estimates is guaranteed to be closer to the truth than the first set, which suggests that a rational agent ought not to use the first. Thus, even without Ramsey's connection between credence and action, we can still give an argument that credences satisfy the axioms of probability theory.
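A small numerical check conveys the flavor of this dominance result. The sketch below uses the Brier score (squared distance from the truth values) as one admissible measure of inaccuracy; Joyce's theorem covers a broad class of such measures, and the particular numbers are illustrative:

```python
def brier_inaccuracy(credences, truths):
    """Sum of squared distances between credences and the actual truth values."""
    return sum((c - t) ** 2 for c, t in zip(credences, truths))

incoherent = (0.4, 0.4)   # credences in p and in not-p; they do not sum to 1
coherent = (0.5, 0.5)     # a probabilistically coherent alternative

for truths in [(1, 0), (0, 1)]:          # p true, or p false
    print(truths,
          round(brier_inaccuracy(incoherent, truths), 2),   # 0.52 either way
          round(brier_inaccuracy(coherent, truths), 2))     # 0.50 either way
# The coherent credences are strictly more accurate however the world turns out.
```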
2 Interaction
Although the literature on formal theories of belief and formal theories of credence is often completely separate, there have been some discussions of how the two theories should relate. Since they are both proposed as theories of our doxastic nature, which we normally describe in natural language with words like belief and disbelief, they have often been taken to be competitors: if there are really credences, then there really aren't beliefs, and vice versa. Beliefs are sometimes taken to be too simple to be an adequate representation of our doxastic states, and credences are taken to have too much complexity to be psychologically realistic, especially in combination with the standard update rule of conditionalization.
Much of the work in each of these traditions has ignored a lot of what has been said in the other tradition. However, there have been some explicit statements attacking one tradition or the other. For instance, Richard Jeffrey regarded the theory of credence as having replaced the theory of belief: "our ordinary notion of belief is only vestigially present in the notion of degree of belief. I am inclined to think Ramsey sucked the marrow out of the ordinary notion, and used it to nourish a more adequate view" [15, p. 172]. However, in the same passage he also allowed that future theorizing might replace credence with yet a further notion, and that thinking about the ordinary notion of belief may inform the development of such theorizing.
Conversely, some philosophers have argued against the very idea of credence as probability. Kyburg [20] makes a forceful series of such arguments, concluding with the claim: "I conclude that the theory of subjective probability is psychologically false, decision-theoretically vacuous, and philosophically bankrupt: its account is overdrawn." He argues that all of the Ramseyan attempts to connect credence with action have problems, and thus that there is no reason to think that credence obeys the probability axioms. However, he doesn't appear to prefer an account on which belief
is the primary doxastic state. In his [22], he also argues that several formal alternatives have many of the same features that make probability problematic. Presumably he would want some further alternative to all of these theories.
However, a growing number of philosophers nowadays suggest that both belief and credence are real, and are part of any complete characterization of the doxastic states of certain types of rational agents (perhaps us). Though this tradition began fairly early, perhaps with [24], I will focus on contemporary theories. Again, there are several possibilities for how to understand the relation between these notions. It may be that beliefs are somehow constituted by credences, or that credences are somehow constituted by beliefs. Or it may be that both have independent existence (though there may yet be many connections between the two). The rest of this paper will discuss some recent projects considering each of these three possible relations between belief and credence. There is conceivably a fourth view, on which both belief and credence are themselves to be reduced to some further notion, but that would require a formal theory of this third notion, which would bring it beyond the scope of this paper.
2.1 Non-reduction
The view that neither belief nor credence reduces to the other raises further questions about their connection. It seems strange to think that one could believe that p while simultaneously having extremely low credence that p. But it is not obvious whether one should say that this situation is impossible, or merely one that tends not to happen, given how humans actually behave. On the former view, it would be useful to develop a fuller theory of the combinations of belief and credence that are in fact possible, since it is claimed that not all of them are, but also that in general, neither belief nor credence alone determines the other.
On either view, there is a further question of what normative connections there may be between credence and belief. For instance, on the view of Ross and Schroeder [28], to believe that p is to defeasibly treat p as true in one's reasoning, and one is justified in doing this iff this is the recommendation of the policy that best strikes a balance between expected complexity of reasoning and expected value of resulting action. Credence plays a role in calculating these expectations, and thus plays a role in determining whether or not beliefs are justified, but does not itself directly constitute belief.
However, I am more interested in considering views that pursue a reduction one way or the other. Some may end up being better re-interpreted as theories of the specific beliefs that are justified by a given set of credences (or vice versa) rather than the beliefs that are constituted by them, but at any rate, there is much of mathematical interest to be gained by considering such reductionist conceptions, whether or not they turn out to be correct.
2.2 Reducing Belief to Credence
Out of the views that attempt to reduce belief to credence, the most straightforward is the one called the Lockean thesis, which states that there is some threshold t such
that for all propositions, a subject believes the proposition iff her credence in that proposition is greater than t. This threshold may be the same for all agents, or perhaps some feature of the agent determines what the threshold is. However, this is subject to Kyburg's famous lottery paradox [19, p. 197]: if there is a fair lottery with more than 1/(1 − t) tickets in it, then an agent familiar with the details of the lottery
will have credence greater than t that each ticket will lose, and thus will believe of each ticket that it will lose, and yet will also believe that some ticket will win.
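A concrete instance, with illustrative numbers: take a Lockean threshold of t = 0.99 and a fair lottery of 1000 tickets (more than 1/(1 − t) = 100):

```python
t = 0.99                 # an illustrative Lockean threshold
tickets = 1000           # more than 1/(1 - t) = 100 tickets
credence_ticket_i_loses = 1 - 1 / tickets      # 0.999 for each particular ticket
print(credence_ticket_i_loses > t)             # True: each "ticket i loses" is believed
credence_all_tickets_lose = 0.0                # the lottery is certain to have a winner
print(credence_all_tickets_lose > t)           # False: the conjunction is not believed
# So on the simple Lockean thesis, belief cannot be closed under conjunction here.
```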
To maintain the Lockean thesis in light of this paradox, one must give up the idea that beliefs are closed under conjunction, as Kyburg himself goes on to do in that book (though Kyburg's views on probability and belief seem to have evolved at various points in his career). That is, one must maintain that the logic of belief is not a normal modal logic. This view has been defended more recently by Foley [9] (see in particular chapter 4). However, many epistemologists have taken it as a datum that belief is closed under conjunction: the idea is that unlike credence, belief is supposed to provide one unified picture of what the world is like, and therefore it can't have components that are separate from one another. On the Lockean view, one instead has many overlapping pictures of the world, which can't all be reconciled.
Many epistemologists have thus sought to modify the Lockean thesis to provide some sort of partial characterization of belief (or of justified belief) that accommodates closure under conjunction. However, an important response to this literature has been provided by Douven and Williamson [6]. They consider supplementations of the Lockean thesis of the sort, "an agent believes that p if the agent's credence in p is greater than t and condition D holds of p," where D is a purely formal condition. They show that if there is a sufficiently large, but finite, equal-probability partition of the agent's credences (just as in a large enough lottery), and if t < 1, and if belief is closed under conjunction, then either D is trivial (in the sense that no proposition with probability below 1 satisfies it), or the agent believes a contradiction. Since we clearly don't want it to be the case that agents believe contradictions, and allowing either t = 1 or D to be trivial gives up the basic threshold idea, this
means that no purely formal version of the Lockean theory can save conjunction closure for belief.
They present this result after showing that various prominent modified Lockean theories (generally proposed as theories of justified belief, but the arguments would apply to them as theories of belief as well) trivialize the concept of belief, and claim that it shows that any such modified Lockean theory must do so. However, I think the importance of this result is somewhat exaggerated. The result is limited to cases where the agents credences are distributed equally (or almost equally) over finitely many possible states. Additionally, the restriction to purely formal theories is a significant one.
The finitude of the set of states seems to be a serious restriction. Since the agents that we are considering have access to natural languages (or formal languages), which have infinitely many meaningful sentences, there will always be uncountably many descriptions of worlds that are consistent. The finitude of actual agents is often used to argue that they only ever consider finitely many of these descriptions, and so their credences must be distributed only over these finitely many descriptions. However, it seems plausible to me to think that credence is distributed not as an explicit mental
allocation of probability to things that one considers, but rather as an implicit distribution over possibilities that one has not yet ruled out. Thus, the finitude of actual agents will manifest in the fact that the information agents are able to gather can't cut things down too far. Finite agents, on this view, must always have credences that are distributed over infinite collections of possibilities.
Smith [31] generalizes the results of Douven and Williamson to probability functions that have non-trivial distributions over infinitely many states. However, he makes a further assumption about the characterization of the property that is adjoined to the Lockean thesis. In particular, he assumes that it must be preserved under coarse-graining. That is, if a proposition has the property, then it must retain the property even after a modification of the state space over which the probability function is defined, in which sets of states are replaced by single states while preserving the probabilities of all propositions that are still expressible. This is, of course, a serious restriction, and one that might be rejected by various context-sensitive responses to the lottery paradox.
My sympathies lie here, with a theory that identifies belief with some suitable property related to credence, where credences are distributed over an infinite set of states. However, I think there is still formal interest in seeing what sorts of theories could work with a finite set of states, and there don't yet appear to be useful proposals for reductionist theories involving infinitely many states (other than the simple Lockean thesis itself). Thus, in the rest of this section, I will discuss several theories that give what I take to be interesting responses to the results of Douven and Williamson, and in the next section I will discuss some of my own current work, which also essentially relies on the set of states being finite. I don't expect that any of these theories will turn out to be the correct way to understand the relation between belief and credence, but I think they all shed interesting light on the issues that are relevant.
Given this restriction to finite spaces, Douven and Williamson's assumption of equal distribution (or nearly equal distribution) of probability among finitely many possible states seems like a serious limitation. Chandler [4] argues that in the lottery situation there really should be no non-trivial beliefs. The only proposition believed is the one that says that some ticket will win. Thus, a supplemented Lockean thesis that gives no interesting beliefs in this case seems fine. As long as a theory of belief gives interesting beliefs in non-lottery cases, it may still be useful, especially if one can eventually provide an argument that the usual credal states that agents find themselves in are different from lottery situations in the relevant ways. (Korb [18, p. 232] argues that one can lotterize just about any inductive inference problem, so that issues of the lottery paradox generalize, but the given argument clearly leads to the conclusion that credences are distributed over infinitely many states, which puts it out of bounds while we are restricting consideration to finite spaces.)
One such theory involves the idea of a belief core, which was first proposed in van Fraassen [32], and has since been worked out in greater detail in several other papers, notably [2]. (It has an independent derivation in Leitgeb [23].) A belief core is a proposition that not only has probability above a given threshold, but also has conditional probability greater than this threshold when conditioning on any proposition that is consistent with it. If the threshold is r, then a proposition that has this feature is known as an r-core. An agent is then said to
believe any proposition that is a superset of an r-core, where r is some particular threshold for the agent. This definition is a purely formal modification of the Lockean thesis, which is closed under conjunction, and is thus subject to the theorem of Douven and Williamson. However, it gives interesting results nonetheless.
When the probability function representing the agent's credence is distributed over only finitely many states, there is a natural way to figure out which sets are r-cores. One can enumerate the states from highest to lowest probability and calculate the ratio of the probability of each state to the sum of the probabilities of all states that come after it in the ranking. Whenever this ratio is greater than r/(1 − r), the set
of all states up to this one is an r-core. From this characterization, it is clear that a non-trivial r-core does not exist unless some states are not only more probable than other states, but in fact more probable than a whole set of other states. Thus, for a probability function with a uniform distribution (like in the lottery paradox), the only r-core is the set of all states, and thus the agent has no non-trivial beliefs. However, many naturally described situations give rise to beliefs nonetheless.
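The ratio test just described is easy to carry out mechanically. The following sketch (with a hypothetical helper name and naive handling of ties) computes the non-trivial r-cores of a credence function over finitely many states:

```python
def r_cores(probs, r):
    """Compute the non-trivial r-cores of a credence function over finitely
    many states, using the ratio test described in the text: order states by
    decreasing probability, and whenever a state's probability exceeds
    r/(1 - r) times the combined probability of all later states, the states
    up to that point form an r-core. (A sketch: ties are handled naively, and
    the trivial core containing every state is omitted.)"""
    states = sorted(probs, key=probs.get, reverse=True)
    threshold = r / (1 - r)
    cores = []
    for i, s in enumerate(states):
        tail = sum(probs[t] for t in states[i + 1:])
        if tail > 0 and probs[s] > threshold * tail:
            cores.append(set(states[: i + 1]))
    return cores

# A skewed distribution yields non-trivial cores; a uniform one (as in the
# lottery paradox) yields none, so the agent has no non-trivial beliefs.
print(r_cores({"a": 0.7, "b": 0.2, "c": 0.1}, r=0.6))        # [{'a'}, {'a', 'b'}]
print(r_cores({"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}, r=0.6))  # []
```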
Another purely formal theory allowing for conjunction closure arises by considering the interaction of rules for belief revision with the relation between credence and belief. Lin and Kelly [25] look for a belief revision rule, and a rule relating credence and belief, such that updating credences by conditionalization and then considering the resulting beliefs gives the same results as taking the beliefs generated by one's current credences and then applying the belief revision rule. In addition, they ask that the belief revision rule satisfy a few basic conditions. Although the AGM theory requires that anything that is believed still be believed after learning anything consistent with one's full belief set, Lin and Kelly ask merely that it still be believed after learning anything entailed by one's full belief set. But they additionally require that if a proposition would be believed after learning p, and would also be believed after learning ¬p, then it must already be believed.
Space considerations prevent me from giving a full description of the acceptance and revision rules that they end up with. However, it turns out that the acceptance rule depends not on the ratio between the probability of a state and the probability of the set of all less probable states, but rather on the ratio between the probability of a state and the probability of each other state. Additionally, they show that the belief revision rule can't satisfy the AGM axioms mentioned above. The resulting belief revision rule does have the complication that it depends on more than just the set of propositions believed, but they show that it can be defined from similarly discrete data that is still much simpler to work with than a full credence function. Although they reject the idea of a belief core because of its poor interaction with belief revision rules, it is notable that the resulting theory of the relation between credence and belief still appeals to ratios between the probabilities of states. Perhaps further points of contact between the two theories can be developed.
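For a rough sense of how an acceptance rule keyed to ratios between the probabilities of individual states behaves, here is a simplified odds-threshold sketch; it is not Lin and Kelly's exact rule (their rule and its parameters differ in detail), and the function name and cutoff value are illustrative assumptions:

```python
def unrejected_states(probs, odds_cutoff):
    """A simplified odds-threshold acceptance rule (a sketch in the general
    spirit of a state-by-state ratio rule, not Lin and Kelly's exact rule):
    reject any state whose probability is less than odds_cutoff times the
    probability of the most probable state; the agent believes every
    proposition that is true at all of the remaining states."""
    top = max(probs.values())
    return {s for s, p in probs.items() if p >= odds_cutoff * top}

# With a skewed distribution, the improbable state c is rejected, so the agent
# believes "a or b"; with a uniform distribution, nothing is rejected.
print(unrejected_states({"a": 0.5, "b": 0.4, "c": 0.1}, odds_cutoff=0.5))        # {'a', 'b'}
print(unrejected_states({"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}, odds_cutoff=0.5))  # all three states
```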
One other way around the results of Douven and Williamson is to allow that the condition may not be purely formal. One such class of ideas considers that an agent believes that p if she will act as though p were true. A natural interpretation of this thought is that conditioning on p would not reverse any of the agent's preferences among actions. Now, given standard Bayesian decision theory, this can't hold for all conceivable actions: there is always a way to describe an action whose expected
utility depends sensitively on one's credence in p. But if we limit consideration to the actions that are in some sense live options for the agent, then this is possible. This theory is not fully formal, because it appeals to utilities (which are not part of the framework that Douven and Williamson prove their results in) as well as clearly informal notions like the concept of a live option.
Weatherson [33] develops a view of this sort, though there are antecedents, like Kyburg [21] (which seems to rely on a probability function that is not a credence). Weatherson considers not just actual preferences, but also conditional preferences. Interestingly, this gives a theory with some formal similarities to the idea of a belief core. Consider an agent with only finitely many practically relevant states of the world, for whom there are only finitely many utility values, and for whom every arrangement of these utilities over states of the world describes some option that is live for her. It turns out that the agent believes a proposition iff the proposition contains an r/(r+1)-core, where r is the ratio between the largest difference between
any two utilities and the smallest difference between any two utilities. (Interestingly, this is the same r that measures the spread of the probability functions that all equally well represent an agent's credences on the theory of [14]: the agent acts as though p is true iff her credence in p isn't necessarily represented as a number different from 1.) For agents that don't consider such a wide range of live actions, the theory of belief that emerges is somewhat different, and more complicated, but it is interesting to see that there is such a connection between a theory specified purely formally, and one specified in terms of practical considerations.
Further investigation of the connections between all these views seems worthwhile. Additionally, there may be prospects for alternative fully formal theories that give rise to non-trivial beliefs in cases other than lottery situations. Will all such theories involve the ratios between probabilities of individual states? If so, does that raise problems for extensions to credence functions over infinitely many states? At any rate, the results of Douven and Williamson will play an important role in guiding these theories, even if they don't as definitively rule them out as is often assumed.
3 Reducing Credence to Belief
There have been comparatively few attempts to reduce credence to belief. The natural proposals involve saying that an agent has a credence of x in p iff the agent believes that the chance of p is x. However, if p is a proposition that clearly doesn't have a non-trivial chance (perhaps it is a claim about mathematics, or the laws of nature), then an agent would not be able to have a non-trivial credence in it. Even if one allows that a credence represents an agent's belief in the evidential probability or epistemic probability or something else more objective, there would be problems. It would be impossible for agents to be unreflectively confident in something while separately judging that their evidence makes it unlikely.
However, I have been working on a new proposal that may be able to give a more holistic reduction of credence to belief. Rather than being determined by a particular belief, an agents credences are defined by the overall pattern of all her beliefs,
together with the values she puts on achieving true belief and avoiding false belief. This proposal is being worked out in Easwaran [7], and is related to ideas in Briggs et al. [3] and Easwaran and Fitelson [8].
The basic idea involves the conception of belief defended by James [13]. Rather than being justified by having good grounds, beliefs are justified by achieving their aims, which are to seek truth and avoid error. It is a consequentialist theory of doxastic normativity, rather than a deontological one. An agent is said to put some value R on being right in believing something that turns out to be true, and some value W on not being wrong in believing something false. We can then evaluate an agent's overall doxastic state by summing R times the number of correct beliefs that she has, and subtracting W times the number of incorrect beliefs that she has. There is no overall contribution made by propositions that she suspends judgment on. Following the idea of Joyce [16], we can think of this overall score as a measure of the accuracy of the agent's beliefs, but while Joyce relies on an understanding of credence as an estimate of truth value, this notion only depends on the idea that beliefs aim to seek truth and avoid error. We then say that a set of beliefs is coherent iff there is no other set of beliefs that is guaranteed to be more accurate, regardless of what the truth is.
There is much to say about the logic of such coherent sets of beliefs. Coherence turns out to be much weaker than consistency, and it does not require closure under conjunction. As with the Lockean thesis, this has serious implications for argument by reductio ad absurdum: showing that a set of propositions is inconsistent will show that they can't all be true, but doesn't appear to provide any pressure to give any of them up if the agent is still convinced that this set of beliefs contains a sufficiently high ratio of truth to error.
Interestingly, although the coherent sets of beliefs don't have to be consistent, they can't be too inconsistent. For instance, although a set of beliefs may be coherent even if every possible way the world could be makes at least one of them false, it can't be the case that it is logically necessary that a greater fraction than R/(R + W) of them
is false. In such a case, the agent would be guaranteed to get a negative score, and thus she could do better by suspending judgment on all propositions.
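The arithmetic behind this bound is worth spelling out. If the agent holds n beliefs and it is logically necessary that at least a fraction f of them are false, then in every possible world her overall score is at most

```latex
R(1 - f)\,n \;-\; W f\, n \;=\; n\bigl(R - f(R + W)\bigr),
```

which is negative exactly when f > R/(R + W); suspending judgment on everything would instead guarantee a score of 0.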
A sufficient condition for coherence can be calculated by using probability theory. Given a probability function P, we can see that the expected value of believing p is RP(p) − W(1 − P(p)) = (R + W)P(p) − W. Thus, if the probability of a
proposition is greater than W/(R + W), believing p has a higher expected value than
not believing, and if the probability is less than this threshold, then not believing has a higher expected value than believing. Since being guaranteed to get a better overall score ensures having a higher expected overall score on every probability function, this means that the existence of a probability function with respect to which the agent's beliefs satisfy the Lockean thesis (with threshold W/(R+W)) is sufficient
to guarantee that the agent's beliefs are coherent.
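The threshold calculation is simple enough to spell out numerically; the sketch below just evaluates the expected-value formula from the previous paragraph for some illustrative values of R and W:

```python
def expected_value_of_believing(prob, R, W):
    """Expected epistemic value of believing a proposition with probability
    prob, given value R for believing a truth and penalty W for believing a
    falsehood: R*prob - W*(1 - prob) = (R + W)*prob - W."""
    return R * prob - W * (1 - prob)

R, W = 1.0, 3.0            # illustrative values for the two epistemic weights
threshold = W / (R + W)    # here 0.75
for prob in (0.9, 0.75, 0.6):
    value = expected_value_of_believing(prob, R, W)
    verdict = "believe" if prob > threshold else "suspend"
    print(f"P = {prob}: expected value {value:+.2f} -> {verdict}")
# Believing beats suspending (which is worth 0) exactly when P > W/(R + W).
```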
Thus, we can use this characterization of a particular class of coherent belief sets to define a notion of credence out of a coherent agent's beliefs. We can say that the agent's credences are represented by any probability function that assigns probabilities above W/(R + W) to every proposition the agent believes, and below
this amount to every proposition she doesn't believe. The resulting characterization of credences is similar to that given by decision-theoretic representation theorems,
but it uses purely epistemic value, rather than constitutively connecting credence to pragmatic values. The resulting representation is not unique, but this is also true for the decision-theoretic representation given by Jeffrey [14]. For theorists that are inclined to take belief as the fundamental doxastic notion, and to separate epistemology from pragmatic decision theory, these considerations may make this view (or some modification of it) quite interesting.
The formalism of this view is not essentially connected to the interpretation on which belief is the fundamental attitude and credence is reduced to it. It can also be used when the two are considered to be metaphysically independent. In that case, if an agent ought to set her attitudes in accord with the expected epistemic utility, then the calculations from above will give the Lockean thesis straightforwardly. Even though there are some coherent belief sets that aren't represented by probability functions, there will always be a different belief set that has higher expected epistemic utility given an agent's actual credences. Interpreted this way, the formalism suggests the use of this notion of coherence in place of consistency, as a way of unifying considerations of truth (given through the accuracy characterization) and evidence (which in some way determines the agent's credences).
4 Conclusion
I have only briefly touched on some of what I consider to be the most interesting current formal work relating belief and credence. There is of course much more on this front, not to mention the many other areas of work in formal epistemology. But it should give the reader a taste of some of the fruitful interactions to come as the field continues to develop.
References
1. Alchourrón, C.E., Gärdenfors, P., & Makinson, D. (1985). On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic, 50, 510–530.
2. Arló-Costa, H., & Pedersen, A.P. (2012). Belief and probability: A general theory of probability cores. International Journal of Approximate Reasoning, 53(3), 293–315.
3. Briggs, R., Cariani, F., Easwaran, K., & Fitelson, B. (2014). Individual coherence and group coherence. In Lackey, J. (Ed.), Essays in Collective Epistemology. Oxford University Press.
4. Chandler, J. (2010). The lottery paradox generalized? The British Journal for the Philosophy of Science, 61(3), 667–679.
5. De Finetti, B. (1974). Theory of Probability, Vol. 1. Wiley.
6. Douven, I., & Williamson, T. (2006). Generalizing the lottery paradox. The British Journal for the Philosophy of Science, 57(4), 755–779.
7. Easwaran, K. (ms). Dr. Truthlove, or, how I learned to stop worrying and love Bayesian probability. Unpublished manuscript.
8. Easwaran, K., & Fitelson, B. (2014). Accuracy, coherence, and evidence. Oxford Studies in Epistemology, 5.
9. Foley, R. (1993). Working Without a Net: A Study of Egocentric Epistemology. Oxford University Press.
10. Halpern, J. (2003). Reasoning about Uncertainty. MIT Press.
11. Harman, G. (1986). Change in View: Principles of Reasoning. MIT Press.
12. Hendricks, V., & Symons, J. (2006). Epistemic logic. Stanford Encyclopedia of Philosophy.
13. James, W. (1896). The will to believe. The New World, 5, 327–347.
14. Jeffrey, R. (1965). The Logic of Decision. McGraw-Hill.
15. Jeffrey, R. (1970). Dracula meets wolfman: Acceptance vs. partial belief. In Swain, M. (Ed.), Induction, Acceptance, and Rational Belief. Dordrecht: Reidel.
16. Joyce, J. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science, 65(4), 575–603.
17. Keynes, J.M. (1921). A Treatise on Probability. Macmillan and Co.
18. Korb, K.B. (1992). The collapse of collective defeat: Lessons from the lottery paradox. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1, 230–236.
19. Kyburg, H. (1961). Probability and the Logic of Rational Belief. Wesleyan University Press.
20. Kyburg, H. (1978). Subjective probability: Criticisms, reflections, and problems. Journal of Philosophical Logic, 7, 157–180.
21. Kyburg, H. (1988). Full belief. Theory and Decision, 25, 137–162.
22. Kyburg, H. (1992). Getting fancy with probability. Synthese, 90, 189–203.
23. Leitgeb, H. (2014). The stability theory of belief. The Philosophical Review, 123(2), 131–171.
24. Levi, I. (1967). Gambling with Truth: An Essay on Induction and the Aims of Science. Alfred A. Knopf.
25. Lin, H., & Kelly, K. (2012). Propositional reasoning that tracks probabilistic reasoning. Journal of Philosophical Logic, 41(6), 957–981.
26. Meyer, R.K. (1971). Entailment. The Journal of Philosophy, 68(21), 808–818.
27. Ramsey, F.P. (1926). Truth and probability. In Braithwaite, R.B. (Ed.), The Foundations of Mathematics and Other Logical Essays (1931). Harcourt, Brace and Company.
28. Ross, J., & Schroeder, M. (2011). Belief, credence, and pragmatic encroachment. Philosophy and Phenomenological Research.
29. Savage, L.J. (1954). The Foundations of Statistics. Dover.
30. Slaney, J. (1996). KD45 is not a doxastic logic. Technical Report TR-SRS-3-96, Australian National University.
31. Smith, M. (2010). A generalised lottery paradox for infinite probability spaces. The British Journal for the Philosophy of Science, 61(4), 821–831.
32. van Fraassen, B. (1995). Fine-grained opinion, probability, and the logic of full belief. Journal of Philosophical Logic, 24, 349–377.
33. Weatherson, B. (2005). Can we do without pragmatic encroachment? Philosophical Perspectives, 19(1), 417–443.