How do we assign moral credit or blame to an action? Intuitively, it makes sense to say that people who do things that have good effects deserve credit, and people who do things that have bad effects deserve blame.

While there are many nits one could pick with this model, I want to focus on the question of moral luck. If a person does something that’s not inherently bad or dangerous, but it happens to have a bad effect, it doesn’t feel like that should deserve much blame. If I come in the door and take off my shoes, and then my friend comes in, slips on one of my shoes, and falls over, that deserves much less blame than if I had directly pushed my friend over.

Is it the intention that matters? I don’t think so. If someone is choking, and I think I know how to perform a tracheotomy, but actually I don’t, and I just make things worse, I still deserve lots of blame. Maybe a little less than if I had known I wasn’t competent, but the core of the issue is that my efforts were never going to succeed, regardless of my beliefs or intentions. Note that opinions differ on this question; these are just mine.

Instead, I think what really matters for assigning credit and blame is the expected effects of one’s actions, using the probabilistic sense of the word “expected”. Leaving one’s shoe out carries a low probability of harm, and hence deserves little blame, even if the harm comes to pass. An incompetently performed medical procedure will very likely cause harm, thus deserving significant blame.
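As a rough sketch (this is my own notation, not a standard formula), the idea is that blame scales with the expected harm of an action, summed over its possible outcomes:

$$\text{blame}(a) \;\propto\; \mathbb{E}[\text{harm} \mid a] \;=\; \sum_{o} P(o \mid a)\,\text{harm}(o)$$

The stray shoe assigns a small $P(o \mid a)$ to the bad outcome, so the sum is small; the botched tracheotomy assigns a large one, so the sum is large, whatever actually happens.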

Of course, using a stochastic model of blame requires a stochastic model of reality. In most cases, we use a stochastic model of reality to handle our own uncertainty in predicting the future, even if the future is in principle predictable. Maybe in the shoe-placing example, if one performed a cell-by-cell analysis of my friend, one could determine that at the moment I took off my shoes, it was essentially guaranteed that my friend would step on one and fall over. I don’t think that changes my blameworthiness.

But then are we back to a knowledge/belief/intent model of blame, since the stochastic model just encodes our lack of knowledge about the world? I don’t think so. We don’t need a dichotomy in which “predictable in principle via the laws of physics” and “the beliefs of the person taking the action” are the only two options. There’s a whole continuum in between.

I think an appropriate stochastic model is something along the lines of “what could a person have known or predicted about the effects of the action?” In the shoe scenario, one could have predicted that there was some chance of a slip, but not much more than that. In the tracheotomy scenario, one could have predicted that it wasn’t going to work. The person didn’t make that prediction, but the necessary information was available.
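One way to make that continuum precise (again, my own formalization, not anything standard) is to condition the expectation on an information set $I$:

$$\text{blame}(a) \;\propto\; \mathbb{E}[\text{harm} \mid a, I]$$

Here $I$ can range from the complete physical state of the world (under which outcomes may be nearly deterministic) down to the agent’s actual beliefs. The proposal above amounts to choosing $I$ to be what a person in that position could reasonably have known, somewhere in the middle of the continuum.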

One nice aspect of stochastic morality is that it can handle situations where negative probabilistic scenarios are avoided. Suppose a person has a 10% chance of being bitten by a disease-carrying mosquito over the next year and thereby developing a disease. However, a friend gives them a mosquito net to put over their bed and block the mosquitoes. How much credit does the friend deserve? Under the outcome and belief frameworks, we’ll never know whether the friend deserves credit, because we don’t know what would’ve happened without the net. In the stochastic model, it doesn’t matter: the friend prevented 10% of a disease, on average, and deserves a corresponding amount of moral credit.
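A toy calculation makes this concrete (the numbers are just the ones from the example, and measuring harm in “expected cases of disease” is my simplification):

```python
# Expected-harm calculation for the mosquito-net example.
# Harm is measured in expected cases of disease (a simplification).

p_disease_without_net = 0.10  # 10% chance of a disease-carrying bite
p_disease_with_net = 0.00     # assume the net blocks all bites

harm_of_disease = 1.0  # one full case of disease if bitten

expected_harm_without = p_disease_without_net * harm_of_disease
expected_harm_with = p_disease_with_net * harm_of_disease

credit = expected_harm_without - expected_harm_with
print(credit)  # 0.1 -> the friend prevented 0.1 diseases in expectation
```

The point is that the credit is well defined even though we never observe the counterfactual world without the net.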