
Can Faith Ever Be Rational?

Image credit: Boyer d'Agen/Getty Images

Some people have faith that their spouse won't cheat on them. Some have faith that things happen for a reason. Some have faith in God.

Is there a common commitment underlying these disparate declarations of faith? And can such faith ever be rational?

I was prompted to think about these questions after reading the comments to my 13.7 post last week, "Science Vs. Religion: A Heated Debate Fueled By Disrespect." There I argued that in debates about science, religion and (a)theism, all parties should seek "charitable ground":

"We should assume, as a default, that others hold their religious and scientific beliefs deeply, genuinely and reflectively."

Many readers responded that religious beliefs often aren't held "reflectively," and that scientific and religious beliefs are fundamentally different: The latter rest on faith as opposed to reason. The implication seemed to be that faith cannot be based on reason and, perhaps, that faith can never be rational. But is this right? Might faith be rational under some conditions? Could faith be warranted as an attitude one should (sometimes) adopt in guiding actions and beliefs?

To help me think about these weighty matters, I decided to read two recent papers (one already published, with a more accessible version forthcoming) by Berkeley philosophy professor Lara Buchak. As an expert in decision theory and a philosopher of religion, she's in a unique position to tackle these very questions.

In evaluating the rationality of faith, we first need to be clear about what faith is, and that's where Buchak begins her analysis.

People commonly treat faith as something stronger than belief, as something one maintains despite evidence to the contrary. Buchak argues that there is a sense in which faith goes "beyond the evidence," but that it doesn't require believing something more strongly than the evidence suggests. Instead, Buchak characterizes faith as a commitment to acting as if some claim is true without first needing to examine additional evidence that could potentially bear on the claim.

To get the intuition behind this idea, it helps to consider some examples from Buchak's forthcoming paper:

If a man has faith that his spouse isn't cheating, this seems to rule out his hiring a private investigator, opening her mail, or even striking up a conversation with her boss to check that she really was working late last night—that is, it rules out conducting an inquiry to verify that his spouse isn't cheating. If he does any of these things, then she can rightfully complain that he didn't have faith in her, even if she realizes that, given his evidence, he should not assign degree of belief 1 to her constancy [i.e., even if he couldn't be 100% sure that she's been faithful].

To use a religious example, when so-called 'doubting' Thomas asks to put his hand in Jesus' side to verify that he has been resurrected in the flesh, this is supposed to indicate that he lacks faith.

In brief, faith requires a willingness to act as if something is true while also refraining from gathering evidence for the purpose of checking whether it's true.

But is it ever rational to agree to such terms? Isn't it always better to have more evidence rather than less; to make one's decisions after examining the largest and most reliable body of evidence possible?

Working from decision theory, rational choice theory, and her own novel mathematical results, Buchak draws a surprising conclusion: faith can sometimes be rational.

For faith to be rational, argues Buchak, a basic precondition first needs to be met: it must be the case that one thinks a claim sufficiently likely, and the actions it supports sufficiently beneficial, that the "expected utility" of acting on the claim outweighs the "expected utility" of not doing so. So, for example, if one assigns higher utility to a marriage in which both partners are faithful than to one in which either partner cheats, and one has good reason to believe that one's partner is in fact faithful, then the "expected utility" of being faithful oneself can outweigh that of cheating.
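To see how this precondition works, here is a toy calculation with purely illustrative numbers (not drawn from Buchak's paper). Suppose one believes with probability 0.9 that one's spouse is faithful, values a mutually faithful marriage at 10 units of utility, values remaining faithful to a cheating spouse at 0, and values cheating at 2 regardless of what one's spouse does:

```latex
EU(\text{remain faithful}) = 0.9 \times 10 + 0.1 \times 0 = 9
EU(\text{cheat}) = 0.9 \times 2 + 0.1 \times 2 = 2
```

Since 9 > 2, acting as if the spouse is faithful has the higher expected utility, satisfying Buchak's basic precondition. The calculation is sensitive to both inputs: with a much lower probability of faithfulness, or a much smaller utility gap, the inequality could reverse.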

This precondition supports the rationality of acting in a way that's consistent with one's faith (e.g., of giving up an opportunity to cheat on one's spouse), but not necessarily of committing to doing so without recourse to further evidence (e.g., without investigating whether one's spouse is actually faithful). Buchak argues that this second condition for the rationality of faith will sometimes hold. The basic idea is that there are risks to making decisions about how to live your life – whether it's being faithful to your spouse or praying to God – in a way that's contingent on whether you've examined further evidence.

In the most straightforward cases, the costs associated with examining further evidence could outweigh the potential benefits. For example, the personal and financial costs that come with verifying the faithfulness of one's spouse could outweigh the differential benefits of cheating on a faithless spouse over remaining faithful to a true one. And if one's spouse is in fact faithful, then those costs will have been incurred for minimal benefits. One might feel slightly more secure in one's relationship, but the decision not to cheat won't have changed.

Another risk comes from the opportunities one loses in postponing a decision. If you hold off on trusting your spouse or on making a religious commitment, for example, you effectively give up the benefits of a particular kind of relationship (with your spouse, with God, with a religious community...) while you're hemming and hawing. You could also miss out on a great opportunity altogether (for true love, for a relationship with God...) if there's some possibility that the evidence you examine will be too inconclusive to support a decision or – worse yet – be misleading.

The details of Buchak's arguments get pretty technical pretty quickly, but the conclusions are easy to summarize. Roughly, faith will be more likely to be rational when one already has strong evidence for the claim in question (since further evidence will be less likely to drastically change one's mind), when the costs to examining further evidence are high (because those costs could outweigh differential benefits of acting differently), and when the evidence is likely to be inconclusive or misleading (because one could miss out on what was in fact a great opportunity).

While pretty abstract and theoretical, these conclusions turn out to support some intuitive guidelines for those hoping to be rational about their faith. In a conversation about her paper that we had by e-mail, Buchak offered the following succinct advice when it comes to trusting a partner or making a religious commitment:

Get evidence first! But don't postpone acting forever!

Of course, it doesn't follow from these arguments that religious faith – in general – is rational. Skeptics could argue that the condition of having strong evidence to begin with simply won't obtain when it comes to having faith in God, and New Atheists might argue that actions based on faith can themselves be costly to oneself and to others, challenging the idea that they'll ever yield greater expected utility.

Nonetheless, Buchak's paper suggests that under some conditions, faith can be rational, and sets the stage for a more sophisticated conversation about faith for theists and atheists alike. In our own conversation, Buchak shared the following reflections:

The way that religious faith is sometimes talked about in the larger cultural conversation can be harmful to everyone who is trying to find out the truth in religious matters and how they should live their lives. There's a naïve idea that faith requires believing against the evidence, or in the absence of evidence. When this idea is adopted by atheists, it can allow them to dismiss all religious faith as irrational by definition, without considering what the evidence is for particular religious claims. When this idea is adopted by religious people, it can allow them to think that believing against the evidence is a virtue, which is harmful to the pursuit of truth – it can also be psychologically harmful to try to believe something you think you don't have evidence for.

Sounds like an excellent basis for establishing more charitable ground!

You can keep up with more of what Tania Lombrozo is thinking on Twitter: @TaniaLombrozo

Copyright 2021 NPR.

Tania Lombrozo is a contributor to the NPR blog 13.7: Cosmos & Culture. She is a professor of psychology at the University of California, Berkeley, as well as an affiliate of the Department of Philosophy and a member of the Institute for Cognitive and Brain Sciences. Lombrozo directs the Concepts and Cognition Lab, where she and her students study aspects of human cognition at the intersection of philosophy and psychology, including the drive to explain and its relationship to understanding, various aspects of causal and moral reasoning and all kinds of learning.