Solving the Incoherence of Ethical Calculation

The purpose of this article is to establish that ethics fundamentally cannot be based upon any non-arbitrary principles.  The values that we use to decide which actions we shall take are not derived from objective, universally reasonable principles; they are instead necessarily assumed, based on the individual empirical utility of assigning those values to different desirable (or undesirable) factors.  As a result, our ethical values are decided not by some kind of objectively perfect reason but rather by our own arbitrary assumptions, adopted either intentionally or as a result of outside influence.  Over time, those assumed values can then change as a result of our own reflection upon empirical problems and upon observations we make regarding issues with a value system we previously presumed to be correct.

In establishing these assertions, I will first attempt to prove that ethics is multi-variable and that the different variables are impossible to reduce into any kind of universal variable through which every value can be understood.  Building on that, I will assert that any rationalist explanation for comparisons between values cannot succeed in creating a clear reason for taking one action over another.  I will argue, however, that comparing different values to one another is necessary for action and that choosing not to make any kind of comparison creates the worst of all worlds.  I will argue that, therefore, reasons for action must be based not on perfect explanations (which are impossible) but rather must proceed through assumptions (which can change to become more accurate over time, but which are always at least partially arbitrary by nature).  I will then discuss those assumptions:  how they can be and are made, and on what grounds they can be and are changed.  I will then conclude by focusing on the implications of this inquiry for ethics as a whole.

But first, let’s discuss on what grounds we can take action.  Within this section, my purpose will be to prove that any action is based on a multi-variable system.  To do so, I will point out a few possible variables which could be a part of our decision-making.  I do not intend to exhaustively list all of the possible reasons for action or to claim that the variables which I discuss are the most important ones.  Rather, I simply want to show that, even with a very small number of variables, we still arrive at the same problems which would result from a longer or more accurate list.

One variable for decision-making whose existence would be difficult to deny is pleasure, in a chemical sense.  People generally enjoy raw pleasure, and they generally engage in activities for the sake of achieving that raw pleasure.  Indeed, people are often willing to forego time and effort which could be spent to acquire other possible values in order to achieve such raw pleasure.  As a rather extreme example, consider a heroin addict who injects themselves with heroin, knowing that it will likely have negative consequences, from taking time out of their day to shortening their life span.  In the future, the heroin addict is going to be less able to commit to and achieve ends other than taking heroin.  Thus, we can fairly assume that the heroin addict (in most cases) takes that heroin for the sake of the pleasurable traits of heroin itself, rather than because they think it will lead to some kind of non-heroin-related positive outcome.  The “end” of taking this heroin is almost certainly the dopamine (raw, chemical pleasure) which comes to users as a result of heroin usage, even if it makes other objectives much more difficult or impossible to achieve.  Thus, we can say that raw chemical pleasure is a form of an end in itself, something which people wish for, even if it leads to other ends becoming more difficult to achieve.  We shall call this variable the “pleasure” variable.

However, most people are not heroin addicts.  If you ask most people why they are not, you are liable to get a few responses, some of which include (but are not limited to) a general fear of addiction, the health problems which can come as a result of heroin usage, and a general distaste for the squalor in which someone who takes heroin will inevitably live.  We shall focus on these three reasons as possible counterbalances to the original example of pleasure, and as variables unrelated to it.

First, we shall focus on the addiction point.  For someone who is anti-addiction, the reasons are manifold.  One reason is the lack of control which addiction implies.  The addict, constantly needing more heroin and being almost incapable of basic action without the heroin, has put a burden upon themselves by taking heroin, one which over time has made not taking heroin, or devoting less time to heroin, nearly impossible.  If a problem with taking heroin comes up which they did not think of at the time, or if there is a better alternative which they did not think of at the time, or even if their moral beliefs about pleasure turn out to be false, heroin addiction changes the user such that either they cannot look at the problem objectively or, if they can, they cannot fix the problem, as they have already devoted their entire lives to acquiring more heroin.  Therefore, avoiding future commitments can be seen as an end in itself – to not place a burden upon oneself which makes it impossible to make future moral judgments with possibly better consequences.  We shall call this variable – the desire to have the ability to choose one’s own actions – the “freedom” variable.

There is another, further reason for the anti-addiction point of view.  It can be asserted that addiction is bad not only from the point of view of the hypothetical long-term consequences of addiction but because of addiction in and of itself.  Indeed, there seems to be a general, irreducible fear of taking an action if that action is a consequence of another’s will rather than one’s own.  Something outside of pleasure tells us that those actions which are done contrary to our own desire are lesser than those decisions which we make on our own (even if they create the same amount of pleasure), even when the individual action is considered in a vacuum.  This pure desire for distance from addiction, I would like to make clear, is completely separate from freedom, although usually proportional to freedom.  We shall call this variable the “autonomy” variable.

The next of our reasons for opposing heroin usage, health problems, is clearly the most difficult to measure.  What people mean by health problems, in its most extreme form, is the fear of death.  Heroin, for instance, decreases the user’s life expectancy by a drastic amount.  Therefore, someone might fear taking heroin because they think it will cause them to die at a young age.  This can be reduced partially to a desire for pleasure (dying tends to decrease pleasure) and partially to a desire for freedom (the dead cannot go back on their actions), but I would argue that there is something else, quite intangible, in the value of life.  It is only due to our status as a living person, after all, that the value of a given action can even be understood.  Therefore, understanding the value of life from the perspective of a living person’s logic (which is itself a mere part of their life) seems unreasonable.  A kind of irrationalism, in which staving off hypothetical death is valued completely arbitrarily, seems both just as reasonable as any reason-based evaluation of life and closer to how people actually tend to look at life.  After all, we do not know the consequences of death particularly well (whether we continue in existence and, if so, in what form), and we do not know how the source of all value can be appraised in relation to what is evaluated.  For this fear, we can use the term “life value.”

Finally, we focus on the issue of self-identification.  Self-identification is based not just on what kind of person someone is, but also on how they look at themselves and how often they think certain kinds of thoughts about themselves.  To go back to heroin, a significant number of people would argue that there is some kind of intangible desire not to take heroin, simply so as not to be like “those people who take heroin.”  Here, there is some kind of attribution of identity, in which the person says their identity is (or at least should be) separate from the identity of the heroin-takers, an identity which is a priori negative.  This intangible preference for believing you should have one identity as opposed to another is something which we should take quite seriously in ethics.  If someone is deciding between two options, the very thought of themselves as someone who embodies one option as opposed to the other (a drug addict as opposed to a non-drug addict, in this case) is something which can be very strongly desired.  This is incomparable to pleasure or anything else, for it does not say, “the consequences of an action will be good for me”; it rather says, “the consequences of an action will be good from my perspective.”  What is avoided in identity-based ethics is one’s intellectual position that one is a “bad person,” someone who goes against whatever they believe a person should do.  In doing so, we try to stave off negative self-appraisal (statements that we act differently from how a good person should act) for reasons entirely separate from pure pleasure.  For example, many people volunteer at charities, not because they gain a significant amount of dopamine from doing so, but rather because otherwise they would constantly be haunted by the thought that they are not doing enough to help others.  This fear of negative self-appraisal shall be termed the “identity” factor.

This challenge of identity can only be surmounted by changing one’s actions so that they are more similar to one’s ideal identity.  By doing so, the argument that there is some kind of dissonance between one’s ideal and one’s real identity is undermined (if someone doesn’t take heroin, they are less likely to view themselves as a heroin addict).  However, taking an action in accordance with one’s ideal identity is not the only thing which is needed to surmount issues arising from dissonant identity.  If the action taken is not due to the actor’s own decision-making – but rather due to another’s – then that action is completely useless in addressing one’s dissonant identity.  This is where the previously dormant concept of autonomy comes in.  Perceptions of autonomy have a major effect on how people view their identity in relation to their actions.  If someone knows themselves to be a heroin addict who cannot make it through the day if they do not take heroin, then they are much more likely to view their identity as a form of victimhood (and consequently not as a “bad actor”) than someone who has just started to take heroin.  At the same time, if someone who normally takes heroin quits it of their own accord, they are more likely to view themselves as a virtuous person than someone who only quit after being forced to go to rehab.  Thus, the effects of identity-based action are proportional to both the magnitude of the action and the importance of that action to how someone views themselves.

This concept of identity cannot be reduced to other values.  The continual negative thoughts which are the possible outcome of dissonant identity are not related to any kind of pure pleasure.  One does not receive a significant amount of dopamine when thinking intellectually about their value as a person.  Certainly, no one gets anywhere close to as much dopamine from such self-reflection as one does from heroin.  Yet consistent negative self-reflection, and differences between one’s ideal and real selves, are viewed as something to be feared and opposed, even if they have no effect on the amount of pleasure someone receives.  Very few would say that they would want constant pleasure under the condition that, between every injection of pleasure, their only thoughts would be about how terrible and undesirable their lives are.

These values which affect decision-making (which of course do not create a full picture of decision-making but rather are a useful oversimplification) cannot be compared to one another.  None can be reduced to any kind of measurable numerical value or unit, and certainly none of these values can be compared in importance to one another.  First, we shall focus on the failings of numerically measuring the effects of individual actions on individual values.  Second, we will focus on our incapability to compare any one of these values to the others in importance.  Third, we will focus on our inability to rationally decide on an action if we know neither of these things.

Quantification is usually assumed to be quite simple in philosophy.  A utilitarian, for instance, measures the amount of pleasure felt against the amount of pain felt, and assumes that units of the one are equivalent to units of the other.  I would like to disagree with that perspective.  How can freedom, for instance, be valued or measured?  Should it be measured in “number of possible future actions lost”?  Does the length of time between now and when that possible future action could have happened change the value of the freedom?  Does the fact that you might have actually acted on a freedom matter to whether you have lost it?  Or is the mere restriction of the consideration of an action equivalent?  If someone smokes cigarettes every day yet wishes to stop, is their freedom more or less restricted than that of someone who wishes to start yet will not for years?  What about someone who often thinks about smoking but never does it?  Is a restriction on thought equivalent to a restriction on action (from the point of view of freedom) and, if so, what is the numerical relationship?  What is a unit of freedom?  Certainly, “I feel unfree” is a coherent sentence, but “I feel like I have lost 10 freedoms since yesterday” is not.  We have a general feeling of freedom but not a precise one.

This is equally true of the other ethical guides.  Perhaps, if we could scan someone’s brain, we could see exactly how much pleasure they are feeling.  But when we cannot, we cannot really know how much pleasure they are feeling on a given day.  For instance, in the example of the heroin addict, they certainly know that they receive pleasure, but they cannot really quantify it.  If they also get pleasure from, say, hearing a funny joke, they can tell us that one is more pleasurable than the other, but they probably could not tell us how many jokes are equal to how much heroin.  No one is quite so precise about pleasure as that.  With respect to the example of identity, if someone is trying to be a good person, there is no metric they can use.  There is no scale which they can check at any time (as there is for, say, weight) to see how good a person they are.  Instead, the spectrum is self-assigned and entirely unquantifiable.  There can be general feelings (the heroin addict might recognize that they are an addict, for instance), but no one can state the exact amount of heroin they must take to regard themselves as an addict (or the exact number of actions they must perform to feel like a good person).  Instead, people can only generally gesture in the direction of what kinds of actions create what kinds of beliefs about themselves.  Finally, with respect to the possibility of death, there is never any knowledge which could allow for quantification.  There is not “some death”; if someone dies, they die.  In some situations, this might be the easiest value to quantify.  If you are having surgery, for instance, and there is a 20% chance it fails and you die and an 80% chance it succeeds and you live, then the answer is pretty simple: a 20% chance of death.  However, in most cases, the particular context of an action determines the likelihood of death (driving drunk on a busy New York street is very different from driving drunk in the middle of farmland).  Because of this, in day-to-day contexts, when possibly dangerous actions are taken, one’s thoughts about death are not, “There is a 2% chance I die,” but rather, “It’s unlikely for me to die.”

Secondly, we are completely incapable of comparing the different values against one another, even if we could quantify each.  Imagine, for instance, we know there is a certain amount, x, of identity which we would be losing by taking an action.  On the other hand, there is a certain amount of pleasure, y, which we would be gaining.  Unless we can somehow convert “identity” and “pleasure” to be understood through the perspective of the other, we cannot possibly answer the question of whether x identity or y pleasure is preferable.  The desire for pleasure and the desire for identity are both desires for things in themselves.  They allow us to be brought into a situation preferable to the current one and for our consciousness to be a more desirable place for us to exist in.  Neither depends on some kind of other desire in order for it to be desirable.  To create a common unit for the two, we would have to be able to figure out some kind of higher unit, something which is the “true end” and which can allow us to compare the two.  However, no such unit is available.  If identity and pleasure were equally important (of equal value) for this greater end, then we could very easily say that trading an infinite amount of pleasure for an infinite amount of identity is a completely neutral trade (or vice versa).  However, that is not at all clear, and most people, if asked whether they would want to feel eternally ashamed of themselves so long as they felt infinite pleasure or eternally proud of themselves so long as they felt infinite displeasure, would probably say that they prefer neither.  Even if they could answer, they could probably not justify the answer coherently (nothing such as, “well, infinite shame could lead me to have less of x value, but infinite pleasure would cause even more of x value than I would lose from infinite shame”).  Any reasonable response would require us to distinguish between different amounts of each value – yet, without the ability to measure our values, we can’t even do that.  This same problem exists with other comparisons.  (Probably the most commonly discussed one is the possibility of pleasure and life value being at odds.  A core controversial assumption of Hobbes was that man viewed any threat to his life as more important than an infinite amount of anything else.  Hobbes believed that man would do anything, if needed, “by force, or wiles, to master the persons of all men he can . . . till he see no other power great enough to endanger him” (Leviathan, part 1, chap. 13), even if it would lead to worse consequences for everyone, simply to stave off fear of death.)

A simple way to express what I have been saying is to imagine a mathematical equation deciding the value of an action (merely as a thought experiment).  Let v = value, where v > 0 indicates a positive action we should take and v < 0 a negative action we should not, and let a, b, c, and d act as quantifiers for the amount by which each preference is affected by the action, as measured by a hypothetical (yet impossible) measurement system for different values (i.e., if we could assume that freedom can be measured in units, this is the number of freedoms being granted or taken away by an action).  The expression might look something like this:

v = a * pleasure + b * freedom + c * life value + d * identity

Now imagine we don’t know any variable’s value in that equation.  That is effectively our position when considering ethics.  We don’t know what any of our quantifiers should be, we don’t know any hypothetical quantifier’s value in relation to anything else, and yet we must expect ourselves, in ethics, to find v.  If we cannot find v (or at least determine whether v is positive or negative), then we are at an ethical crossroads, completely purposeless and incapable of deciding whether to take an action.
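To make the deadlock concrete, here is a minimal sketch in Python of the thought experiment above.  On one reading of the equation, a through d give the amounts by which an action affects each value, and the named terms give each value’s relative importance.  Every name and number below is hypothetical by design: the unit systems, the amounts, and the weights are simply invented, which is precisely the point – nothing fills them in non-arbitrarily.

```python
# A sketch of the thought experiment above.  It assumes, purely for
# illustration, that each value could be measured in its own made-up units
# and that each value carries a numeric "importance."  Both assumptions are
# exactly what the article argues we cannot justify.

def action_value(amounts: dict, weights: dict) -> float:
    """v = sum over values of (amount affected) * (importance of that value)."""
    return sum(amounts[name] * weights[name] for name in amounts)

# a, b, c, d in the text: how much the action grants or takes away of each value.
amounts = {"pleasure": 3.0, "freedom": -1.0, "life value": 0.0, "identity": -0.5}

# The relative importance of each value.  These numbers are pulled out of thin
# air; the claim is that they can only ever be assumed, never derived.
weights = {"pleasure": 1.0, "freedom": 2.0, "life value": 5.0, "identity": 1.5}

v = action_value(amounts, weights)
print("take the action" if v > 0 else "do not take the action", v)
```

The code runs only because every quantity has been stipulated; remove the stipulated amounts and weights and the comparison collapses, which is the position the following sections take as given.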

 

Now, we must ask the question (which has been partially touched on by the previous sentence) of whether we even need to care about comparing different values.  The alternative would be to not take any action which trades a loss in one value for a gain in another.  However, the practical application of such a conclusion is significantly more complex than it might at first seem.  Let’s use the example of deciding whether or not you want to see a movie.  In deciding this, there is a definite gain of pleasure (if it is a good movie, at least) and a definite loss of freedom (while you are watching the movie, you will not be able to do something else, even if halfway through you think it would be better, unless you want to waste the money you used to purchase the ticket).  In deciding this, the response from the value nihilist might be to assert that they should not go, as they cannot possibly determine whether the loss of freedom is better or worse than the gain of pleasure.  They therefore do not do anything which can lead to a loss of any value.

However, this inability to act is a problem in itself.  Nearly every decision includes a loss of some kind of value, and thus, if someone is to decide how to act based on this nihilism, they must negate nearly every single action.  If any action requires some kind of commitment, then it is an infringement upon freedom.  If it does not even require a long-term commitment but instead requires the action of another person (even if that person is as tangentially related to the matter as a service worker), then that action still infringes upon your autonomy, as it places your actions in another’s hands; therefore, it infringes upon identity.  If it allows for even a minute chance of death (driving, for instance), then it cannot be supported.  If we veto all these actions, we find that we can do very, very little.  Indeed, if we can do so little, then we are placed in a position where we, on the macro level, have less of every single value than we might have if we actually did what seems, at the micro level, to cause us to lose a certain amount of a given value.  Committing both to something that takes away freedom and gives us pleasure and to something else which gives us freedom while taking away pleasure can, if the losses are smaller than the gains across both activities, lead to us having more freedom and more pleasure than if we did nothing to begin with.  Furthermore, the decision not to take an action in the first place is itself a statement about the relationship among our values.  It is simply the statement that any possible loss is more important than any possible gain; that is, that anything at all negative is weighted at negative infinity.  Such thinking can lead us to take actions which are very contrary to what would actually lead to the best possible world for us.  For instance, if we are given the opportunity to gain an infinite amount of freedom in exchange for losing a small amount of pleasure (and/or if we care a lot about freedom and very little about pleasure), under the nihilistic approach, we must forsake that possibility of getting an infinite amount of freedom because it would lead to a tiny loss in pleasure.  This is completely undesirable and unreasonable and must be rejected.

A reasonable alternative which responds to this criticism is an argument for optimal action – that is, action which leads to obtaining the greatest amount of our values.  This can be reduced to two versions, one which wishes for the greatest amount of all values in total and one which desires to create the most of each individual value.  The first version’s mistake is easily exposed.  The assertion that you can create the most of all the values put together, without creating the most of each individual value, falls into the exact problem which we have been trying to solve in the first place:  it requires a value for each of our preferences and a way of measuring those values.  The only difference is that the value assigned to each reason is one which we have already assumed before we even start the discussion; instead of deriving these values, the aggregate approach simply applies the rationalist perspective while skipping the derivation.  Moreover, if we care more about pleasure than identity, for instance, then this approach fails, as it requires us to focus on identity at the expense of pleasure, in excess of what our preferences would otherwise dictate.

The second alternative, which focuses on maximizing each value, is slightly more robust.  According to this assertion, which we shall call the Pareto Approach due to its focus on creating Pareto optimality, we should first assume that any action is good if it leads to all of our preferences increasing in value and none decreasing.  Building from that, if two actions (as in the earlier example of freedom and pleasure) lead, together, to a situation where both values are greater than they were previously, then that is a positive result.  Thus, we can take many, many actions so long as they all eventually balance out such that every preference ends up greater than its original value.  This, in theory, is a perfectly good argument and, for a robot programmed to act, is entirely rational.  However, for a person, we quickly confront serious pragmatic issues.  When a person takes two actions, they do not do both at once.  Rather, a person first takes one action and then another, and they must consider each action before they do it.  This aspect of practical action creates significant problems for the Pareto Approach.  Taking the first action may make sense.  The person doing it simply asserts, “I shall increase pleasure at the expense of freedom, and later, I shall increase freedom at the expense of pleasure, thereby ending with more of both pleasure and freedom than I started with.”  However, when the person arrives at the second action, the assertion starts to make less sense.  In combination with the previous action, this further action makes sense.  But, considered by itself, it does not.  If the actor decides to stop at this point, they will have more pleasure than if they took the further action.  Acting at this point requires asserting that creating a lot of freedom is more important than losing a little pleasure.  That alone is a problematic value statement.  Thus, the actor can only decide to act to create more pleasure and less freedom if they will compensate for that in the future.  Doing this repeatedly will eventually lead to a situation where some values are lower than they were originally (if the actor continually says they are going to increase freedom but never does, for instance).  At this point, the Pareto Approach merely reduces down into a group of value statements which hold some values to be intrinsically preferable to others.  The only difference, practically, between the Pareto Approach and a rationalistic approach is that the Pareto Approach bases its value judgements not on a rational basis but on an arbitrary one.  Some values can be more easily acquired in the short term, and some values, in conjunction with other future actions, allow us to get more of other values (such as freedom, for instance).  If we were following the Pareto Approach, we would be put in a situation where we were indefinitely required to give precedence to those values which allow us to achieve other values in the future – and yet our ethical system would also require us to never actually achieve any of those other values.
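A tiny numerical sketch may make the sequencing problem clearer.  The numbers and value names below are hypothetical and chosen only to exhibit the structure: the two actions are a Pareto improvement as a package, but the second step, judged against simply stopping, is a pure trade-off of exactly the kind the approach was meant to avoid.

```python
# A toy illustration of the Pareto Approach's sequencing problem.
# All quantities are invented; only the structure matters.

start = {"pleasure": 10, "freedom": 10}

action_1 = {"pleasure": +5, "freedom": -2}   # gain pleasure, lose freedom
action_2 = {"pleasure": -3, "freedom": +4}   # lose pleasure, gain freedom

def apply_action(state, action):
    return {k: state[k] + action[k] for k in state}

after_1 = apply_action(start, action_1)      # {'pleasure': 15, 'freedom': 8}
after_2 = apply_action(after_1, action_2)    # {'pleasure': 12, 'freedom': 12}

# Taken as a package, the plan is a Pareto improvement over the start:
assert all(after_2[k] > start[k] for k in start)

# But the second step, considered on its own, trades pleasure away for freedom,
# and approving it requires exactly the cross-value comparison the approach
# was supposed to let us avoid:
assert after_2["pleasure"] < after_1["pleasure"]
assert after_2["freedom"] > after_1["freedom"]
```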

So, we cannot evade evaluating our values in relation to one another.  At this point, having taken rationalism out of the equation, we have to ask ourselves what kind of basis we can even have for our belief in the relationships among our values.  First, I will argue against what seems to me the most obvious possibility, empiricism.  Then, I will argue for an inferentialist approach which uses assumed, knowingly imperfect values as its basis.  I will argue, firstly, that this is the only possible approach, and indeed that it is the only one actually used in real action.  Next, I will argue that while the inferentialist approach is not perfect and does not lead to perfect conclusions, it is a good starting point from which original values can be attributed to different reasons for action and then later re-appraised.

An empirical approach to evaluating the value of different actions would at first glance seem sensible.  According to this approach, the value of an action should be based on whether, empirically, it leads to desirable consequences.  Therefore, if we have, in the past, preferred x amount of pleasure to y amount of freedom, then we should continue to do so into the future, presuming that in all future cases a similar amount of pleasure should be preferable to a similar amount of freedom.  This can then be used to create general rules which can then be applied to individual cases (e.g., any amount of pleasure less than x is worth less than any amount of freedom greater than y).

However, this empirical argument is extremely flawed, for multiple reasons.  One reason is that it is simply impossible.  An empirical approach would require us to be able to quantify previous experiences in relation to one another.  We would have to say, “This amount of pleasure is equivalent to that amount of freedom.  Therefore, a similar proportion in another case will be just as good as that proportion.”  That requires us to be able to quantify pleasure, freedom, etc. in exactly the flawed manner to which we objected beforehand.  Otherwise, how could we come up with any kind of generalizable rules for the values of different reasons for action?  Without such generalizable rules, we would be left unable to discuss any case other than something we have already done, since the only reason we could use to choose among different actions would come from applying our previous experience.  From the decentralized empirical point of view, in which we only presume an action to be good based on case-by-case empirical experience, we would be unable to improve our actions in any coherent way.  Firstly, we could never really understand how we are wrong about something.  If we base our actions on case-by-case empirical rulemaking, we have no general rules about how to behave.  Therefore, if any of our judgements turns out wrong, we cannot easily diagnose exactly what is wrong with it.  If, for some actions, we act in a manner which emphasizes freedom too much, and for others, we emphasize freedom too little, the statement, “I emphasize freedom too little; I desire to be freer” would lead not to freeing oneself where needed but rather to freeing oneself in all actions, including those actions in which we already overemphasized freedom.  Secondly, the shifts in the mind which inherently occur over time would not be accounted for.  If we adopted the case-by-case empirical approach, we could only ever learn about an action from that action itself, rather than from other similar actions.  Thus, whenever there was any mental shift (which – as I discuss in “A Preliminary Philosophy of Morals,” paragraph 25 – happens very often), one would have to re-adjust their approach on a case-by-case basis.  That means that every single action would have to be repeatedly re-tested, as we could not apply general rules to anything.  That would involve nearly constant error, as the case-by-case approach does not allow us to learn general rules from our mistakes.

Based on our criticism of both the rationalist and the empirical approach, I think we can arrive at a few conclusions about action in general.  Firstly, rules and proportions between different values are needed.  Some way of discussing one value in relation to another is not just the best way to act but in fact the only way.  Without a comparative basis such as this, we are either left without the ability to choose among actions, or we are constantly acting in not just non-optimal but extremely negative ways, never having a handle on just how we should look at our different values and never moving or changing toward a more optimal system.  Secondly, we find that both the rationalist and the empirical bases for relational values either require some kind of prior assumptions or are entirely impossible to calculate – and either way both schools must be rejected.  To create an ethical system which solves the issues we have raised in this article, one first needs to proclaim rationalism incorrect by concluding that perfect values for our preferences are impossible.  Then, one has to proclaim empiricism incorrect by realizing that some kind of system for quantifying values is required before any empirical system can be used.  Therefore, our main problem is how we can assign any action more value than any other if not through rationalist or empirical methods.

My answer to this question is to adopt an inferential perspective.  What I mean by the inferential perspective is that we should simply infer the value of different preferences, based not on some kind of logical or experiential model but rather on general pointers which we can then modify based on later experiences.  That is to say, we should assume that life or pleasure or whatnot is each worth a certain value in relation to the others, rather than attempting to derive it.  Thus, our earlier equation, with its perfect unit by which all values can be measured, should simply be imagined out of thin air.  Then, in real-life cases, our imagined equation and imagined values of different reasons are applied without concern for whether they are an exactly true explanation of what we in fact individually care about.  A good model for this is, say, the Christian model, which grants identity a comparatively high value, pleasure and life value a comparatively low value, and freedom a value which tends to differ depending on the sect.  For this argument, it does not matter if Christianity is “right” or “wrong” about its exact values; rather, it matters that it is a vantage point from which one actually can consider real situations and at least try to understand how different actions can be compared to each other and how one can be deemed preferable to another.

The inferential perspective is not adopted because it is a good evaluation but rather because it is a bad one.  Because it is necessarily a group of imperfect assumptions, someone looking at it and not taking it all that seriously is willing to change their own actions and beliefs to fit with what is ultimately best for them.  As opposed to, say, the rationalist perspective (which has just as much truth to it), the inferentialist perspective grants the user a significantly easier and more malleable value system, one that can change with time, with new realizations, and with the opening up of new alternatives.  The inferentialist perspective grants the user a kind of freedom of ethics, which allows them to recognize their own system’s flaws.  It also allows what would normally be considered imprecise reasons for one’s values to be taken seriously (the particular imprecise reasons will be discussed later).  The importance of a value must, in a rationalist context, be established in an airtight way, based on real arguments and actual reasoning.  If we are arguing that freedom matters more than pleasure, then that must be based on some kind of reasonable judgement.  However, once perfect rationalism is taken to be impossible, a version of judgment based on faulty assumptions can nevertheless use reasons that are certainly preferable to purely arbitrary assumptions, even if not quite as correct as perfect rationality.  This allows us to act, if not perfectly rationally, still reasonably.

While this is not quite an argument for or against the inferential approach, I will quickly argue here that the inferentialist project is much broader than the earlier religion example suggests.  Indeed, it seems to me that nearly all action is based on inference.  We may have a general sense of how much a value means in relation to another value; at the same time, however, reducing that relationship to a few formalistic arguments seems to miss the point.  If someone is asked if they would commit a mass murder in return for a small amount of pleasure, they would likely say no.  They would not want to be a murderer, the reasoning goes, and not viewing oneself as a murderer is more important to them than some pleasure.  The amount that someone is or is not willing to sacrifice, from an identity point of view, for a certain amount of pleasure is often identifiable.  However, once the relationship between two competing values has been reduced to a particular determination, there is little more which can be given in terms of generalizable conclusions or underlying reasons.  Asking a self-hating serial killer why the amount of pleasure he gets from killing outranks the amount of self-hate he feels from killing does not make much sense as a question.  Similarly, to take a more mundane example involving the comparison between pleasure and life value, asking someone why they are willing to drive to an ice cream store to buy ice cream, even though they could hypothetically crash and die, cannot elicit a much more reasoned answer than a simple, unexplained response that the pleasure of eating ice cream outweighs the remote chance of crashing and dying.  Thus, we can see that we are all inferentialists and that the only conceivable logic of action is one based on inference.

The rest of this article will be devoted to analyzing and expanding on the inferential perspective, arguing that certain inferences, and certain relationships to those inferences, are preferable to others.  First, I will argue that tradition – that is, how the relationships between different values are defined to us – is the best way of establishing an original relative value for different reasons for action.  I will then argue that no such tradition is a perfect basis, and that each individual therefore needs to customize their own relative values to better reflect their actual preferences.  I shall then argue that such customization is best done over time, through the empirical recognition of shortfalls in one’s rules, which provides guidance for general shifts in our understanding of our values.

We can, of course, make our inferences randomly.  That would not be very prudent, though.  Our intention, even though our bases for judgment are imaginary, is still to produce a situation which we personally desire.  We should therefore attempt to make our relational values as close as possible to those which will lead to us living desirable lives.  Luckily, billions of people have attempted to live desirable lives before us.  They have all attempted to create situations in which their relational values generate outcomes which lead to the best possible lives.  As a result, these people have over time accumulated a great deal of learned information about how to optimize their relational values.  While we have only just attempted to assess the relative importance of pleasure in relation to freedom, those other people have already considered these things, and they have already attempted to assign relational values to each reason for action and to develop a calculus among them.  In their own pasts, they have likely believed that their individual preferences had many different relational values, many of them inaccurate.  To reach the relational values they now hold to be true, they must have realized, probably through empirical failure, that their previous inferences were incorrect, and they probably changed their actions based on that realization.  Although their beliefs are still probably imperfect, they are probably more accurate than they had been at some previous time, or than if they had simply guessed.  Consequently, we can at least assert that their inferences – modified over time – are at least somewhat accurate.  The practical meaning of this is that the traditions that we are taught about how to decide how to act – based as they are on relational values of incomparable preferences that our predecessors have developed for themselves over time – should be adopted as our original vantage point, from which we can then act or otherwise change to suit our own empirically observed personal preferences.

While the inferential approach we have just outlined seems at first glance to follow a general Hegelian line, I think it is important to reject that conclusion.  The Hegelian starts by assuming that there is some ideal kind of optimal behavior or optimal relational value of different preferences towards which we strive.  All people are constantly working towards discovering this optimal system.  Every generation is left better off than the previous one, as each generation receives the constantly greater and greater knowledge of every generation before them.  As each generation knows more than the previous one, each generation can act in a more educated way, with more accuracy in how they behave.  Thus, we are constantly striving towards a perfect system which we shall someday reach, where all people act perfectly.  Therefore, we should presume all society-wide changes of behavior (so long as they are created by a general increase in knowledge) are good and should be directly applied to oneself.

However, the Hegelian argument rests on a rather simple mistake, committed in its original assumption.  The concept that there is some kind of objective, universal, perfect relational value of different reasons for action is quite impossible, considering the gulf of difference between different people.  As discussed in my previous essay, “A Preliminary Philosophy of Morals” (paragraph 23), the neurological differences between individual people are quite substantial.  Indeed, while there may be many similarities in the concept of pleasure between two people, the differences between their individual experiences of that feeling are large enough that neither person could conceivably care about that feeling the exact same amount (unless every single other preference decreased in value by the same amount as pleasure).  Indeed, for two people to prefer the same level of every individual value in relation to every other value, they would have to have the same brain.  Thus, we see one important source of disruption – with each generation and with each new individual, the value system they inherit is necessarily at least somewhat inaccurate, even if it is preferable to mindless guessing.  For each new generation, therefore, there is a need to restart the process of finding new relationships among different values.  (A quick aside:  the prior analysis can also be interpreted as a good argument in favor of traditional parental relationships – that is, parental relationships between the conceivers of the child and the child themselves.  If there is a genetically based neurological similarity between a parent and child, then a biological parent’s personal value relations are likely to be more similar to the child’s than an unrelated person’s might be.  Therefore, it makes sense for a child to be taught by those who are most similar to them in genetic makeup, because that allows their initial inferences – based on what they are taught by their parents – to best approximate their true value relationships.)

A second problem with the Hegelian perspective is that one’s own relational values often change.  As described in my “A Preliminary Philosophy of Morals” (paragraph 25), the mind often changes from a neurological perspective.  Therefore, the relationship between two values can frequently change as the parts of the mind which register pleasure, for instance, develop to become more or less receptive to greater amounts of pleasure.  For someone who can now feel more pleasure than they could a few years ago, the importance of that pleasure is going to change in relation to other preferences.  Thus, since the mind changes over time, we cannot simply believe that there is one stable value relationship within our mind, but instead we must constantly remain vigilant for shifts that occur over time.  As a result of both of these problems, we have to conclude that recognizing and responding to consistent change is a significant part of acting as one ought to.  Thus, ethics seems a bit more like constantly chasing after a moving target than a simple march towards an inevitable ideal of perfect action.

So now, one must ask:  How should we proceed to better our personal ethics?  How can we adapt our out-of-date inferences to make them more accurate to our real preferences?  My answer is that we should do so by generalizing the problems we identify when we have made empirical mistakes.  When we do something which we later regret, we say something to the effect of, “I regret doing that.  It caused x or caused me to miss out on y.  I do not like x and do like y.”  Upon such a reflection, we must ask ourselves if losing y or getting x is at least as negative as the action’s positive effects are positive.  If our ethical system already says that getting x or losing y is more important than the action’s positive effects, then the mistake in that case was not our ethical system; it was rather a problem with ourselves, such as, say, weak will.  In such a case, we have no need to shift our relational values.  However, if we answer yes while our ethical system recommended the action, then our ethics clearly made a mistake.  We see that our situation is bad and that we wish we had done the other thing, yet our ethics are telling us that no, we are wrong, we do not in fact prefer the consequences of the other action.  If the discrepancy persists between our real preferences (as experienced) and our merely inferential ones (which previously informed our ethical decision-making), then we can simply replace our current assumptions, since those inferences are only as powerful as we grant them to be.  If we are in woe because we have given up freedom that we wish we had not in return for pleasure, then we should in the future decrease the value assigned to pleasure in relation to freedom.  We may possibly overcorrect, but in that case, we should be able to correct ourselves again.  Of course, we will never actually arrive at the perfect relational values.  That would be nearly impossible and, even if possible, extremely fleeting.  Rather than attempting to perfect our ethics (an impossible task), we should simply try to get closer and closer to perfection, so that we can live as desirable a life as possible.
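For concreteness, here is a minimal sketch of this regret-driven adjustment, under the purely hypothetical assumption that relational values are held as numeric weights.  The update rule and step size are illustrative inventions, not something the argument prescribes.

```python
# A sketch of adjusting assumed relational values after a regretted choice.
# The weights, the rule, and the step size are all hypothetical.

weights = {"pleasure": 2.0, "freedom": 1.0, "life value": 5.0, "identity": 1.5}

def adjust_after_regret(weights, overvalued, undervalued, step=0.1):
    """After regretting an action our ethics endorsed, shift weight away from
    the value we leaned on (overvalued) toward the value we wish we had
    honored (undervalued)."""
    new = dict(weights)
    new[overvalued] = max(0.0, new[overvalued] - step)
    new[undervalued] += step
    return new

# Example: we traded freedom away for pleasure, our ethics approved the trade,
# and we regret it, so pleasure now weighs slightly less relative to freedom.
weights = adjust_after_regret(weights, overvalued="pleasure", undervalued="freedom")
print(weights)
```

The repeated, small, revisable character of the update is the point: the weights never become correct, they only become less obviously wrong.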

While I have praised an empirical approach to improving our ethics, it behooves us to ask:  on what grounds is this empirical approach any different from the empirical approach which we discussed earlier and rejected?  The answer is that the flaws which we identified in our earlier analysis of the empirical approach do not exist once inference is added in.  First, we talked about an inherent contradiction in much of empirical decision-making, namely that it depended on previously created units which we could not appraise without a prior rationalist basis.  However, we have now figured out a way of appraising those values through inference rather than rationalism, which addresses that problem.  Before discussing an individual case, we have already imagined and assigned values to different amounts of freedom or pleasure (or anything else).  Therefore, all we need to do in the empirical case is to apply those already created measurements.  Second, as for our complaint that a single negative reaction to an empirical mistake is too particular to be applied to all similar cases, we have solved that as well.  Of course, if individual cases each have their own rules, then generality cannot teach us anything.  But if there are general rules deciding a relational value, then a disagreement or dissonant note indicates a direction in which we can change our rules.  We are not, in this instance, trying to compile a comprehensive list of empirical actions aligned with their ethical qualities; rather, we are trying to understand how accurate our rules are and how well they perform.  Thus, the overly precise nature of empirics in fact helps us to improve our relational values rather than (as it did earlier) hindering us.

Having discussed ethics proper, I would like to now turn my attention to the implications of our theory, particularly when it comes to the amount of import which we place on our ethical decisions.  First, I would like to focus on the implications of our constantly imperfect and imaginary ethics for ourselves and our own actions.  Then I would like to examine how the imperfection of our ethics applies to our relationships with others, particularly when it comes to decision-making in which some forms of value are determined to be preferable to others.

When it comes to the self, imperfection in ethics might seem to skew our preferences heavily towards freedom.  Since we cannot know what the best consequence is, estimation being used in place of a perfectly rational system, we might simply admit our faults and assume ourselves incapable of acting as we ought.  Better, then, to reserve these choices for the far future, when we might have more information about ethics which we can use to act in a preferable way, than to make them in the present.  However, with our inferential ethical paradigm, this is not quite true.  Rather than the future being some kind of objectively preferable environment for decision-making, the future is just as flawed as the present.  Indeed, it could be even more flawed, as, in the future, our preferences might have changed faster than the relational values which we have previously attributed to those preferences.  Therefore, we should assume short-term freedom is just as valuable as we thought it was prior to this discussion (at least, presuming our earlier discussion is a full list of human preferences).  When it comes to long-term freedom, however, our discussion has certainly given us a reason to value it more than we otherwise would.  Based on our earlier discussion, we can assume that people’s values change over time and that their actions should change with them.  A commitment which seems good now could easily turn out to be extremely negative in the far future, simply due to our preferences changing.  An example of this would be the person taking heroin.  If they have no clear path to a rehabilitation center, they have no clear way of not taking heroin in the future should they come to disagree with their current usage.  There is no possibility for them to terminate future commitments to acquiring new heroin.  Therefore, they are in a perpetually terrible situation.  Consequently, when we create long-term commitments for ourselves, we should always make sure that there is a possible way for us to get out of that commitment in case our preferences change.  To go back to the heroin example, imagine if the heroin addict had the ability to go to a rehabilitation center at any time they wanted.  In that case, they could easily get rid of future commitments, which means that the ability to opt out of heroin usage in the future makes heroin usage in the present much less undesirable.

Further, when it comes to the self, we should be able to recognize general trends in our own intellectual development.  If, for instance, pleasure is mattering more and more to us over time, we can fairly assume (within reason) that pleasure will matter even more to us in the future.  Therefore, we can plan for a hypothetical future in which we recognize a value in pleasure greater than we currently assume.  In doing so, we could lead ourselves into a short-term situation in which we seem to be planning something undesirable.  But, in the long term, this would in fact be prudent, as we would be thinking (often accurately) about the hypothetical future world in which likely trends have continued, even if they seem inaccurate based on current preferences.  Such a perspective is extremely important when it comes to identity, more than with respect to any other value which we have discussed.  Indeed, when it comes to identity, our future perspective is the very thing being valued.  Therefore, a hypothetical future in which our identity means very little to us should be considered.  But, equally if not more importantly, a hypothetical future in which our identity means a great deal to us should definitely be planned for.  Unlike pleasure or freedom, if an increased focus on identity comes about, then previous actions will have their importance retroactively increased.  Someone who cares about their identity, even if their action happened long before their care about identity increased, will still think about and regret a previous action as if it had been taken with that now greater care about identity.  Therefore, if anything, our inquiry here raises the importance of identity, as a factor whose increase in importance must be anticipated and which should be prioritized in cases where a conceivable identity-focused future self would prioritize it.

When it comes to extending our ethical framework to interactions with other people, the arguments we have been making are just as important.  Rather than assuming that our morality is correct, we have assumed just the opposite:  that our morality is incorrect, even for ourselves.  Furthermore, we have assumed that learning what is ethically right is something which someone must teach themselves, rather than learn from someone else (except as part of a child’s original pedagogy).  Thus, the question of the importance of one’s own ethical statements for someone else has suddenly lost relevance.  What is important is not necessarily pleasure in relation to freedom, for example, but rather allowing people to discover and pursue their own personal relational values.  To infringe upon someone else’s views by saying something equivalent to “well, that is not how someone should evaluate pleasure in relation to freedom” is to make a statement which cannot be verified except by, and with respect to, the person saying it.

At the same time, the attempt to create one’s own relational values through trial and error is extremely important.  Indeed, it is the only way to arrive at a situation in which someone can logically choose between two options.  Therefore, if we are attempting to help others, we should also attempt to give them the ability to act in ways which can help them arrive at their desired outcomes, that desired outcome being whatever they need to better understand their relational ethics.  Yet, due to the necessary imperfection of other people’s actions, as well as the future likelihood of their ethics changing in a direction which values things differently, it would seem impossible for us to proclaim that we should support all actions, even those which lead to a loss of future freedom.  Indeed, because those actions which cause a loss in future freedom presume a great deal (such as that, in the future, the actor will not be able to learn from their action in the way that they might in most other cases), we should attempt to dissuade people from those actions.  Over-indulgence in actions that sacrifice future freedom can, in the extreme, lead to a future in which the actor is unable to indulge in absolutely anything.  At that point, the actor cannot possibly find their new relational values, which will have changed since the time of their original action.  However, as we previously discussed, trends can indicate whether future freedom and future relational values are really as important as our current indulgence in present-day relational values.  We can certainly imagine a situation in which someone’s present ability to feel pleasure greatly outstrips any future ability to feel anything of value.  Therefore, while we can disagree with, or call for prudence when discussing, actions of great impact, we cannot know, as an onlooker, whether we are in fact correct when we assert that someone else’s ethics would be better served by a less extreme or self-restrictive action than the one chosen.

Finally, I would like to offer a short recapitulation.  Having discussed this entire ethical system in a rather academic and theoretical manner, I would now like to explain the practical application of everything we have discussed.  First, when thinking about whether to take an action which privileges one of our values over another, we should choose based on our current assumptions, as taught to us by those who raised us, in particular those from our families.  If we then feel bad after acting in accordance with our ethics, we should interpret that as a sign that our ethics are problematic and need to be changed.  We should then amend our ethics to grant more influence to those values which we feel have not received enough precedence.  We should always continue to question our ethics, even when they seem to have no major issues; after all, one could pop up, whether or not the ethics themselves were perfect at the time of their conception.  We should also always focus on how we view ourselves; making our real selves more similar to our ideal selves is constantly important, even if we don’t see why at a given moment.  It is equally important to look at and bet on what we think we might care about in the future, and, absent that capability, it is important to keep open as many avenues as possible for our future selves.  Finally, when we try to help other people, while we should dissuade them from acting in ways which may be rash or which may lead to the loss of their ability to choose between options in the future, we should not impose our wish for them to act (or not act) in a particular way and thereby become more capable – we should instead defer to their own opinion on whether they wish to become more capable people.

 

Only this way can we solve the ethical calculation debate.
