Pleasure Now!

Which false god should we worship? Two options are hedonic utilitarianism and preference utilitarianism, and Tim Cooijmans discusses the choice between them here. In response, I attempted to comment there, but the comment was eaten by the commenting system, and its loss was emotionally painful to me. Thus I became averse to continuing the attempt to post there, and I have posted here instead.

These false gods look like their pleasures and preferences are both satisfied.

Like many LessWrong-influenced folks, I find the arguments in favor of consequentialism in general, and sometimes utilitarianism specifically, to be compelling and morally stirring, at least compared to the alternatives. There’s no urgent need to find the One True Utilitarianism when it seems entirely feasible for society to cooperatively achieve a harmonious pluralist consequentialism, or possibly feasible for an AI singleton to bypass the matter entirely. Nevertheless, we wonder whether there is an OTU, because it’s fun to think about, and because its absence is sufficient reason for many people to reject utilitarianism.

We humans have a bunch of internal drives like fun, hunger, thirst, dyspnea, lust, affection, anger, fear, disgust, etc., for solid evolutionary reasons. Most commentary on hedonism seems to assume that pleasure consists in satisfying these drives. I think that’s wrong, or at least usually wrong. In many cases, these drives are only indirect causes of pain and pleasure, which are instead mediated by how the brain changes in response to the drives. Two quick examples are worth mentioning: the minor aches after exercise travel the same neurons that normally carry mere pain, yet these aches sometimes feel pleasurable; and post-ejaculatory stimulation of the penis changes the feeling from pleasurable to painful without changing the other aspects of how it physically feels.

The real direct causes of pleasure and pain, I suppose, are these: the (subjective) feeling of pleasure largely corresponds to the (objective) upregulation of a behavior by the brain, and the (subjective) feeling of pain largely corresponds to the (objective) downregulation of a behavior by the brain. This follows my general belief that there is one thing of which qualia are the subjective form and information processing in the brain is the objective form. To say from the subjective viewpoint that pleasure is intrinsically motivating and pain intrinsically demotivating would therefore be the same as saying from the objective viewpoint that a brain becomes more likely to perform upregulated behaviors and less likely to perform downregulated ones. The latter viewpoint is virtually a tautology, which I think is good: it helps bridge the explanatory gap regarding why the former viewpoint might also be true.

If it is true, then “pursuing pleasure” would mean “attempting to experience the upregulation of behaviors”, which, far from being a tautology, has many interesting consequences. First, as there is no practical limit to the possible behaviors someone might engage in, there is no practical limit to humanity’s pleasurable pastimes. Fun is open-ended! Second, the “higher pleasures” are behaviors reinforced by the same drives as the “lower pleasures”, simply reinforced more weakly because of their less direct effects on those drives. The drives that evolution has left us with for good reasons are sometimes felt more strongly and sometimes more weakly, so that sometimes we desperately shovel tacos into our mouths and other times we spend too long choosing pretty photographs for blogs that no one reads. We can only reliably ignore the drives when they are weak, which is when they are sated. This means that the “higher pleasures” can only be pursued by people whose biological drives are mostly met most of the time. Therefore, if you want to lift people’s minds to higher things, fill their bellies. Third, since biologically it’s easier to upregulate a neural pathway from a low-activation state than from an already high-activation state, the pleasure gained from repeating a behavior declines as the novelty wears off and it becomes entrenched habit. Fourth, when you’re unhappy, trying to jumpstart happiness by directly satisfying one of your strong drives via an already-strongly-reinforced behavior like eating a tub of ice cream, masturbating, or watching a favorite comedy will be sadly futile; instead you should try to satisfy a strong drive in a new way, like cooking something challenging, kink, or getting into a new artform. Fifth, wireheading would only be effective at reinforcing itself, so it would tend toward the dystopian version rather than the beautified version, and probably wouldn’t work at all long-term.
Sixth, instead of wireheading, we could add new senses, capabilities, and biological drives to the human-standard ones, and thereby enable totally new forms of pleasure. For example, we could refactor our general feeling of hunger into dozens of different hungers for the various nutrients we need and add taste buds attuned to them, making cravings real and distinct ways to generate gustatory pleasures. And if we keep our brain architecture essentially the same and upload into simulations like video games, then the number of behaviors within reach of cheap reinforcement would be virtually unlimited. (Pun intended. I’m not sorry for the pain!)

Even better to be a happy Pig Socrates.

Despite the above, I think hedonic utilitarianism is at most one-third correct. Pleasures and pains are the ups and downs in our wills, I think, and so are essential to an account of what makes life worth living. Yet it often happens that our habits, caused by certain past pleasures and pains, outlive the pleasures and pains that caused them, and keep on determining our behavior as free-floating preferences. Furthermore, someday we may gain the ability to change the architecture of our brains, and we might choose a new architecture that doesn’t choose our upcoming actions via habits grown out of past useful behaviors, but instead by design selects a course of action based directly on expected value in terms of achieving preferences. If we become those beings, pleasures and pains will no longer be relevant.

Pleasure now!

Preferences later!


3 thoughts on “Pleasure Now!”

  1. Hmm… You describe the role of pains and pleasures in shaping our behavior, and how this usually has good consequences. Furthermore, explicitly optimizing for hedons is likely to have good consequences. But, you say, we should not let our behavior be guided by pleasures and pains if we can just install a more intelligent software package that knows how to bring us where the pleasures and pains are trying to bring us. Then we won’t need pleasures and pains anymore.

    But this only makes sense from a preference perspective. I don’t care where my pains are trying to bring me; I just want to stop hurting!


    • > You describe the role of pains and pleasures in shaping our behavior, and how this usually has good consequences.

      Rather, I think it defines a large part of what “good consequences” means.

      > we should not let our behavior be guided by pleasures and pains if…

      I don’t say that we “should” change from pleasure to preference, but rather that circumstances might conspire to make us find it desirable to do so.

      > I don’t care where my pains are trying to bring me; I just want to stop hurting!

      Good point. But if you can successfully stop hurting, it might then be the case that you find there is more you want.


      • > if you can successfully stop hurting, it might then be the case that you find there is more you want.

        Yes! Pleasure! The objective is as unbounded as that of preference utilitarianism, so it’s not like we would be done at any point.

        More root-strikingly: desires hurt, and to the extent that they do there is value in eradicating them. Wanting something besides better experiences is a bad experience.

        The object of desire is not valuable in and of itself; its value is in the eye of the beholder. There are two ways to get the value: you can either obtain the object or change the eye. The latter is taboo in preference utilitarianism, whereas hedonistic utilitarianism proposes to take whichever route is cheapest.

        It could be that my view of preference utilitarianism is overly Less-Wrongian. People there are heavily into AI and tend to analyze themselves from the same perspective. Sure, if we want an AI to do our bidding, then we should take care not to let it cheat by taking control of its reward channel. But humans were programmed by a blind idiot god, whose bidding there is no point in doing.

