Moral bounded rationality applied to computers

The idea of “bounded rationality” is that people are rational, meaning they make optimal decisions in pursuit of whatever their goals are, but only to the extent that doing so makes sense given their computational, time, and memory constraints.

Like, if you’re about to get hit by a car, the rational thing to do is NOT to think deeply about your next action; it’s to get out of the way (that’s a time constraint). Similarly, if you’re in a conversation and find yourself tracking back through nested layers of what they thought he thought she thought they think, that’s taking too much computational power and isn’t worth it. Spending that much energy is not boundedly rational. But if you’re in a situation like choosing your next job, it is rational to think the decision through very slowly and carefully.

So now let’s relate that to computers: artificial intelligence. When we’re programming decision-making into our computers, we presumably want them to be boundedly rational. We don’t want them to waste time and energy on things that will take too long to compute.

And here’s my question: things like moral decisions can also be construed as boundedly rational. If I have a split second to make a decision, I might make a different decision than if I have a while to think about it. If we ever program moral decision-making into our computers, what kind of boundedly rational rules are we going to teach them? Are we going to program in the split-second rules or the longer-thinking ones? What’s their computational balance compared to ours, and what if that changes over time?
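To make that question concrete, here’s a minimal sketch (in Python) of what a time-budgeted decision procedure might look like. Everything in it is hypothetical (the function names, the 0.1-second cutoff, the toy scoring), but it shows the core point: the same situation can get a different answer depending on how much time the system has to think.

```python
import time

def score(option, situation):
    """Placeholder 'moral goodness' estimate; figuring out what this
    should actually be is the hard (and unsolved) part."""
    return situation.get("scores", {}).get(option, 0.0)

def fast_heuristic(situation):
    """Split-second rule: a cheap, pre-baked judgment."""
    return situation.get("default_action")

def slow_deliberation(situation, deadline):
    """Longer thinking: keep comparing options until time runs out."""
    best = fast_heuristic(situation)            # start from the snap judgment
    for option in situation.get("options", []):
        if time.monotonic() >= deadline:        # respect the time budget
            break
        if score(option, situation) > score(best, situation):
            best = option
    return best

def decide(situation, time_budget_seconds):
    """Use the split-second rule when time is short, the slow one otherwise."""
    if time_budget_seconds < 0.1:               # the "about to get hit by a car" case
        return fast_heuristic(situation)
    deadline = time.monotonic() + time_budget_seconds
    return slow_deliberation(situation, deadline)

# The same situation, two different time budgets, two different answers.
situation = {
    "default_action": "swerve",
    "options": ["swerve", "brake", "honk"],
    "scores": {"swerve": 0.4, "brake": 0.9, "honk": 0.1},
}
print(decide(situation, 0.01))  # split second  -> "swerve"
print(decide(situation, 2.0))   # time to think -> "brake"
```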

My guess right now is… that computers are going to try to maximize the number of times they get a moral decision “right”, and that will usually require longer processing time. But a lot of the time even humans aren’t sure what a “right” moral decision is; there are plenty of hard edge cases, like abortion. And I don’t know how much we’ll weight having an AI get an answer “right” against how much time we’ll want the AI to “think” about it, even if we do figure out what “right” is.
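One way to picture that weighting (purely my own sketch, with made-up numbers, not anyone’s actual design): treat it as an explicit tradeoff, where the system keeps thinking only while the expected gain in getting the answer right outweighs the cost of the extra time.

```python
def worth_more_thinking(p_right_now, p_right_after, extra_seconds,
                        weight_on_right=1.0, weight_on_time=0.2):
    """Keep deliberating only if the expected gain in 'rightness'
    beats the cost of the extra thinking time. The weights are
    arbitrary knobs; choosing them IS the open question."""
    gain = weight_on_right * (p_right_after - p_right_now)
    cost = weight_on_time * extra_seconds
    return gain > cost

# If another second of thought only bumps confidence from 0.80 to 0.82,
# it's probably not worth it; from 0.50 to 0.85, it probably is.
print(worth_more_thinking(0.80, 0.82, extra_seconds=1.0))  # False
print(worth_more_thinking(0.50, 0.85, extra_seconds=1.0))  # True
```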

I wonder how this will play out in the future. I don’t know anything right now; I’m just posing questions. I’d love it if someone wanted to give me an informed opinion on this!
