Are there any utilitarians who argue against the degree to which Peter Singer thinks we should donate to charity?

Yes, that's what I'm asking.

Your action was morally justified, and morally praiseworthy, because it was probably going to result in good consequences. So one should always save people from drowning because they (usually) don't become Hitlers.

On the other hand, if you had known the consequences that would result, then you would have had a moral responsibility not to act, in order to avoid those foreseen consequences. Unfortunately, that is specific knowledge about the particular circumstances that you simply didn't have.

All in all, it can be considered a kind of honest mistake. It's like deciding to drive your car because it's usually faster than the bus - just because there turns out to be a lot of traffic on the freeway doesn't mean you made a poor decision; you were just unlucky. Still, of course, you may wish you had taken the bus.
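
To put some purely illustrative numbers on that (my own, not anything from the thread): suppose the car usually takes 20 minutes but takes 60 when the freeway is jammed, which happens about one trip in five, while the bus reliably takes 35. Then

$$\mathbb{E}[T_{\text{car}}] = 0.8 \times 20 + 0.2 \times 60 = 28 \text{ min}, \qquad \mathbb{E}[T_{\text{bus}}] = 35 \text{ min},$$

so taking the car is the right choice ex ante, even though on the unlucky day you sit in traffic for 60 minutes and wish you had taken the bus.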

Then should we have just let her die in the lake, and to anyone who came to save her, say, "No! Don't! If she lives, her future son will probably cause massive destruction!"?

Yes, we should have let her die in the lake. One murder is better than a large number of murders. To be honest, the converse idea - that it is better for two murders to happen than for a particular agent to commit one - seems quite paradoxical.

If that's the case, then what if Hitler himself really thought that what he was doing would make the world a better place?

He certainly didn't think he was going to maximize utility. Moral responsibility derives from the expected utility of actions - if he had acted in the way that, to the best of the information available to him, seemed to maximize utility, but that unfortunately resulted in WW2, then he would be in the same boat as the lifeguard rescuing the woman and the person who takes the car instead of the bus. But it's quite evident that his intentions were to serve the interests of a narrow national and ethnic group without regard for the suffering of others.
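
To make the "expected utility" talk a bit more explicit (my own rough formalization, not something from the thread): the agent evaluates each available action by

$$\mathrm{EU}(a) = \sum_{o} P(o \mid a)\, U(o),$$

where the probabilities $P(o \mid a)$ are the ones the agent could reasonably assign at the time. Blame attaches to choosing an action with clearly sub-maximal $\mathrm{EU}$ given that information, not to how much utility $U(o)$ the world actually ends up delivering.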

All in all, designating people as virtuous or bad kind of misses the point of consequentialism; it's actions that matter, and it's actions that can be fundamentally good or bad. Designating someone as virtuous or not is less clear, less useful, and more like an emotive statement.

the theory they are actually supporting (that, say, maximizing utility is objectively the right thing to do) is problematic in the sense that it either must ignore the consequences that actually come to be in reality

Actual bad consequences really are bad. Moral value, or disvalue, is identical to features of the natural world (pain, joy), regardless of the causal mechanisms that led there.

Not being able to perfectly optimize the world is a problem for a theory that says that optimizing is the goal.

That's not true at all - that's like saying that not being able to perfectly predict which route to drive is a problem for the idea that we should minimize our travel times. If someone says "I should get to work as fast as possible", you don't say that they might be wrong about that normative claim merely because the traffic maps are unclear.

our knowledge of probabilities tends to zero.

Not really - there's no reason to expect that actions with short- or medium-term benefits are likely to have counter-effects that specifically outweigh their ordinary effects. Actions might have unexpected benefits, or they might have unexpected harms; in the absence of information, those two possibilities are equally likely, so you focus on what you can actually assess and change. And in the real world, applying utilitarian principles often yields quite simple and undeniable frameworks of action and policy that can clearly be seen as consequentially superior to other ideas.
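
One way to sketch that point (again, my own illustration): split the expected value of an action into the effects you can foresee and the ones you can't,

$$\mathbb{E}[U(a)] = \mathbb{E}[U_{\text{foreseen}}(a)] + \mathbb{E}[U_{\text{unforeseen}}(a)],$$

and note that if, absent any specific evidence, the unforeseen effects are as likely to be good as bad, the second term is roughly zero for every action, so ranking actions by their foreseeable effects is the best available guide.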
