We first describe the prospects or decision set-up and the resultant expected utility rule, before turning to the pertinent rationality constraints on preferences and the corresponding theorem. Decision theory is an interdisciplinary field concerned with the logic and methodology of making choices, particularly under conditions of uncertainty; it draws on probability theory and analytic philosophy, assigning probabilities to uncertain events and numerical utilities to outcomes. The theory is concerned with identifying optimal decisions, where optimality is defined in terms of the goals and preferences of the decision-maker.

Defenders of resolute choice may have in mind a different interpretation of sequential decision models, whereby future “choice points” are not really points at which an agent is free to choose according to her preferences at the time. If so, this would amount to a subtle shift in the question or problem of interest. In what follows, the standard interpretation of sequential decision models will be assumed, and accordingly, it will be assumed that rational agents pursue the sophisticated approach to choice (as per Levi 1991, Maher 1992, Seidenfeld 1994, amongst others).
One may well wonder whether EU theory, indeed decision theory more generally, is neutral with respect to normative ethics, or whether it is compatible only with ethical consequentialism, given that the ranking of an act is fully determined by the utility of its possible outcomes. Such a model seems at odds with nonconsequentialist ethical theories for which the choice-worthiness of acts purportedly depends on more than the moral value of their consequences. The model does not seem able to accommodate basic deontological notions like agent relativity, absolute prohibitions or permissible and yet suboptimal acts.

This disanalogy is due to the fact that there is no sense in which the \(p_i\)s that \(p\) is evaluated in terms of need to be ultimate outcomes; they can themselves be thought of as uncertain prospects that are evaluated in terms of their different possible realisations. In most ordinary choice situations, the objects of choice, over which we must have or form preferences, are not like this.
Those who are less inclined towards behaviourism might, however, not find this lack of uniqueness in Bolker’s theorem to be a problem. James Joyce (1999), for instance, thinks that Jeffrey’s theory gets things exactly right in this regard, since one should not expect that reasonable conditions imposed on a person’s preferences would suffice to determine a unique probability function representing the person’s beliefs.

Theorem 2 (von Neumann-Morgenstern). Let \(\bO\) be a finite set of outcomes, \(\bL\) a set of corresponding lotteries that is closed under probability mixture and \(\preceq\) a weak preference relation on \(\bL\). Then \(\preceq\) satisfies axioms 1–4 if and only if there exists a function \(u\), from \(\bO\) into the set of real numbers, that is unique up to positive linear transformation, and relative to which \(\preceq\) can be represented as maximising expected utility.

The last section provided an interval-valued utility representation of a person’s preferences over lotteries, on the assumption that lotteries are evaluated in terms of expected utility.
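To make the content of the theorem concrete, here is a minimal computational sketch (all outcome names and values are invented for illustration): lotteries are ranked by expected utility, and a positive linear transformation of \(u\) leaves that ranking unchanged, in line with the uniqueness clause.

```python
# Minimal sketch of the vNM representation, with invented utilities.
# A lottery is a map from outcomes to probabilities summing to one.

def expected_utility(lottery, u):
    """Expected utility of a lottery given a utility function u on outcomes."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

u = {"apple": 1.0, "banana": 3.0, "cherry": 5.0}
u_affine = {o: 2 * v + 7 for o, v in u.items()}  # positive linear transformation

lottery_a = {"apple": 0.4, "cherry": 0.6}
lottery_b = {"banana": 1.0}

# The ranking by expected utility is the same under u and under 2u + 7,
# illustrating uniqueness up to positive linear transformation.
assert (expected_utility(lottery_a, u) > expected_utility(lottery_b, u)) == \
       (expected_utility(lottery_a, u_affine) > expected_utility(lottery_b, u_affine))
```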
Furthermore, it permits explicit restrictions on what counts as a legitimate reason for preference, or in other words, what properties legitimately feature in an outcome description; such restrictions may help to clarify the normative commitments of EU theory.

The only information contained in an ordinal utility representation is how the agent whose preferences are being represented orders options, from least to most preferable. This means that if \(u\) is an ordinal utility function that represents the ordering \(\preceq\), then any utility function \(u’\) that is an ordinal transformation of \(u\)—that is, any transformation of \(u\) that also satisfies the biconditional in (1)—represents \(\preceq\) just as well as \(u\) does.
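The following sketch (invented options and numbers) illustrates this invariance: applying any strictly increasing transformation to an ordinal utility function, here the exponential, leaves the represented ordering untouched.

```python
import math

# Sketch: an ordinal utility function is invariant under any strictly
# increasing transformation, since such transformations preserve the
# biconditional u(A) <= u(B) iff A is weakly dispreferred to B.

options = ["walk", "cycle", "drive"]
u = {"walk": 1.0, "cycle": 2.0, "drive": 4.0}  # invented ordinal utilities

def ordering(utility):
    """Options sorted from least to most preferable."""
    return sorted(options, key=lambda o: utility[o])

u_prime = {o: math.exp(v) for o, v in u.items()}  # exp is strictly increasing

assert ordering(u) == ordering(u_prime)  # same ordering represented
```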
- Some of the required conditions on preference should be familiar by now and will not be discussed further.
- The above problems suggest there is a need for an alternative theory of choice under uncertainty.
- As noted in Section 4, criticisms of the EU requirement of a complete preference ordering are motivated by both epistemic and desire/value considerations.
- The idea is that seeking more evidence is an action that is choice-worthy just in case the expected utility of seeking further evidence before making one’s decision is greater than the expected utility of making the decision on the basis of existing evidence (see the sketch after this list).
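The point in the last bullet can be illustrated with a small calculation (the acts, states, and numbers below are all invented, and the evidence is assumed to be a cost-free, perfect signal of the state, to keep the sketch minimal):

```python
# Sketch of a "value of information" comparison, with made-up numbers.

p_state = {"rain": 0.3, "shine": 0.7}  # current degrees of belief
u = {"umbrella":    {"rain": 5, "shine": 3},
     "no_umbrella": {"rain": 0, "shine": 10}}

def eu(act, p):
    """Expected utility of an act relative to beliefs p over states."""
    return sum(p[s] * u[act][s] for s in p)

# Deciding now, on existing evidence:
eu_now = max(eu(act, p_state) for act in u)  # max(3.6, 7.0) = 7.0

# Observing the state first, then choosing the best act in each case,
# weighted by how likely each observation currently is:
eu_after = sum(p_state[s] * max(u[act][s] for act in u) for s in p_state)  # 8.5

value_of_information = eu_after - eu_now  # 1.5
# Seeking the evidence is choice-worthy just in case its cost is below 1.5.
```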
The orthodox normative decision theory, expected utility (EU) theory, essentially says that, in situations of uncertainty, one should prefer the option with greatest expected desirability or value. David Lewis (1988, 1996) famously employed EU theory to argue against anti-Humeanism, the position that we are sometimes moved entirely by our beliefs about what would be good, rather than by our desires as the Humean claims. For instance, Broome (1991c), Byrne and Hájek (1997) and Hájek and Pettit (2004) suggest formulations of anti-Humeanism that are immune to Lewis’ criticism, while Stefánsson (2014) and Bradley and Stefánsson (2016) argue that Lewis’ proof relies on a false assumption. Nevertheless, Lewis’ argument no doubt provoked an interesting debate about the sorts of connections between belief and desire that EU theory permits. There are, moreover, further questions of meta-ethical relevance that one might investigate regarding the role and structure of desire in EU theory. For instance, Jeffrey (1974) and Sen (1977) offer some preliminary investigations as to whether the theory can accommodate higher-order desires/preferences, and if so, how these relate to first-order desires/preferences.
Sequential decisions
Complex decisions
We could, for instance, imagine people who are instrumentally irrational, and as a result fail to prefer \(g\) to \(f\), even when the above conditions all hold and they find \(F\) more likely than \(E\). Moreover, this definition raises the question of how to define the comparative beliefs of those who are indifferent between all outcomes (Eriksson and Hájek 2007). Perhaps no such people exist (and Savage’s axiom P5 indeed makes clear that his result does not pertain to such people). Nevertheless, it seems a definition of comparative beliefs should not preclude that such people, if existent, have strict comparative beliefs. Savage suggests that this definition of comparative beliefs is plausible in light of his axiom P4, which will be stated below. In any case, it turns out that when a person’s preferences satisfy Savage’s axioms, we can read off her preferences a comparative belief relation that can be represented by a (unique) probability function.
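A toy illustration of this last claim (utilities and probabilities invented): under expected utility maximisation, with a prize strictly preferred to the status quo, preferring a gamble on \(E\) to a gamble on \(F\) reveals that the agent assigns \(E\) the higher probability.

```python
# Sketch: reading comparative beliefs off preferences, Savage-style.
# A "gamble on E" pays the prize if E obtains and nothing otherwise.

u_prize, u_nothing = 10.0, 0.0        # prize strictly preferred to nothing
p_E, p_F = 0.6, 0.4                   # the agent's degrees of belief (invented)

eu_on_E = p_E * u_prize + (1 - p_E) * u_nothing
eu_on_F = p_F * u_prize + (1 - p_F) * u_nothing

# Because u_prize > u_nothing, the preference between the gambles tracks
# the comparative belief: the gamble on E is preferred iff p_E > p_F.
assert (eu_on_E > eu_on_F) == (p_E > p_F)
```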
- Lara Buchak (2013) has recently developed a decision theory that can accommodate Allais’ preferences without re-describing the outcomes.
Further interpretive questions regarding preferences and prospects will be addressed later, as they arise.

By the late 20th century, scholars like Daniel Kahneman and Amos Tversky challenged the assumptions of rational decision-making. Their work in behavioral economics highlighted cognitive biases and heuristics that influence real-world decisions, leading to the development of prospect theory, which modified expected utility theory by accounting for psychological factors.

Notwithstanding these finer disputes, Bayesians agree that pragmatic considerations play a significant role in managing beliefs. One important way, at least, in which an agent can interrogate her degrees of belief is to reflect on their pragmatic implications. Furthermore, whether or not to seek more evidence is a pragmatic issue; it depends on the “value of information” one expects to gain with respect to the decision problem at hand.
2 On rational desire
But that is just taking a gamble that has a very small probability of being killed by a car but a much higher probability of gaining $10! More generally, although people rarely think of it this way, they constantly take gambles that have minuscule chances of leading to imminent death, and correspondingly very high chances of some modest reward.

Now, Savage’s theory is neutral about how to interpret the states in \(\bS\) and the outcomes in \(\bO\).
Nevertheless, the weather statistics differ from the lottery set-up in that they do not determine the probabilities of the possible outcomes of attempting versus not attempting the summit on a particular day. Not least, the mountaineer must consider how confident she is in the data-collection procedure, whether the statistics are applicable to the day in question, and so on, when assessing her options in light of the weather.

Start with the Completeness axiom, which says that an agent can compare, in terms of the weak preference relation, all pairs of options in \(S\). Whether or not Completeness is a plausible rationality constraint depends both on what sort of options are under consideration, and how we interpret preferences over these options.
In effect, Non-Atomicity implies that \(\bS\) contains events of arbitrarily small probability. It is not too difficult to imagine how that could be satisfied. For instance, any event \(F\) can be partitioned into two equiprobable sub-events according to whether some coin would come up heads or tails if it were tossed. Each sub-event could be similarly partitioned according to the outcome of the second toss of the same coin, and so on.

Is there any probability \(p\) such that you would be willing to accept a gamble that has that probability of you losing your life and probability \((1-p)\) of you gaining $10? Many people are inclined to answer no. However, the very same people would presumably cross the street to pick up a $10 bill they had dropped.
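Returning to the partitioning construction above, the following sketch (illustrative only) shows how successive fair coin tosses yield sub-events of arbitrarily small probability:

```python
# Sketch of the Non-Atomicity construction: after n specified outcomes of
# a fair coin, the corresponding sub-event of F has probability P(F)/2**n.

def subevent_probability(p_F, n_tosses):
    """Probability of the sub-event of F picked out by n toss outcomes."""
    return p_F / 2 ** n_tosses

p_F = 0.5
for n in (1, 5, 20):
    print(n, subevent_probability(p_F, n))  # 0.25, 0.015625, ~4.8e-07
```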
Richard Jeffrey’s theory, which will be discussed next, avoids all of the problems that have been discussed so far. But as we will see, Jeffrey’s theory has well-known problems of its own, albeit problems that are not insurmountable. A highly controversial issue is whether one can replace the use of probability in decision theory with something else.
Stefánsson and Bradley’s extension of Jeffrey’s theory to chance propositions is also motivated by the fact that standard decision theories do not distinguish between risk aversion with respect to some good and attitudes to quantities of that good (which is found problematic by, for instance, Hansson 1988, Rabin 2000, and Buchak 2013). It should moreover be evident, given the discussion of the Sure Thing Principle (STP) in Section 3.1, that Jeffrey’s theory does not have this axiom. Since states may be probabilistically dependent on acts, an agent can be represented as maximising the value of Jeffrey’s desirability function while violating the STP.
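To make the last point concrete, here is a sketch of Jeffrey-style evaluation with invented numbers, in which the probability of a state depends on the act performed; an agent can maximise desirability so computed without satisfying the STP.

```python
# Sketch of Jeffrey's desirability with act-dependent state probabilities.
# p_given[act][state] plays the role of P(state | act), and des[act][state]
# the desirability of the act-state conjunction. Numbers are invented.

p_given = {"smoke":   {"healthy": 0.6, "ill": 0.4},
           "abstain": {"healthy": 0.9, "ill": 0.1}}
des = {"smoke":   {"healthy": 10, "ill": -50},
       "abstain": {"healthy": 8,  "ill": -52}}

def desirability(act):
    """Sum over states of P(state | act) times des(act & state)."""
    return sum(p_given[act][s] * des[act][s] for s in p_given[act])

best = max(p_given, key=desirability)
# desirability("smoke")   = 0.6*10 + 0.4*(-50) = -14.0
# desirability("abstain") = 0.9*8  + 0.1*(-52) =   2.0, so "abstain" is best.
```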