A Theory of Ethical Practice

The main focus of ethics has historically been to arrive at and argue for a group of ethical views which the ethicist believes people in general should hold.  In practice, however, holding ethical views and acting ethically are quite different things.  Holding an ethical position not just in theory but also in practice is nearly impossible.  While we can imagine (and even discuss) utilitarianism as one of many possible ethical theories, it is quite difficult to imagine a real utilitarian occupying the same world as we do.  Indeed, if you asked someone, even someone well versed in philosophy, why they acted a certain way, you would not be likely to hear, “Yes, well, according to the categorical imperative…” except perhaps as a poor joke.  It is not as if the average person is simply amoral; the complex ethics which any self-respecting ethicist would include in their theories are simply impossible to think through in day-to-day life.  The questions “What will be the total happiness caused by doing x?” and “Is doing x consistent with a group of pre-established rules?” both seem quite difficult to answer – how is anyone supposed to ask and then answer such questions millions of times per day?  Ethics beyond the lab is unworkable in nearly every real case.

 

Complete impracticability is an issue that inherently accompanies the complexity which any proper ethics must have.  Nearly any ethics is required to prescribe not only the right guidelines for action but also the right guidelines for judging action.  The latter, which allows for deliberation and considerably more complex judgement, can only be vaguely applied to the former.  Many actions are done after scant and imperfect consideration, which leads to undesirable actions.  Further, the same action that might seem right in an unbiased laboratory setting might not seem so in a real, day-to-day setting.  Thus, it is quite easy to imagine an undesirable action being done by someone even though they are using tools that are perfect for a theoretical setting.  In fact, imperfect consideration might lead to a worse outcome than no consideration at all.  And if an improperly considered ethics is less preferable than no ethics at all, it is quite reasonable for someone to choose to avoid acting in accordance with any particular ethics.

 

Avoiding the practical application of ethics is clearly an undesirable result.  And yet, the considerable attention that ethics has received has rarely focused on this particular issue.  In fact, the more consideration ethics has received, the more impossible it has become to put ethicists’ prescriptions into practice.  Complex ethics, which put their emphasis on precise rules that require much thought to understand, let alone apply, can never be utilized by a real human being in the real world.  At the same time, we mustn’t throw out the baby with the bathwater and create a new ethics merely out of necessity – the complexity of ethics was not created out of incompetence but because it is often required.  Thus, we should not only attempt to apply our ethics to the real world but also think about how we can make our ethics practicable.  Barring changes to the very nature of our ethics to make them more practicable (which would require us to fundamentally shift our views in a way that would force us to accept extremely suboptimal results), one way we could allow for the practice of our mostly unpracticable ethics is to create practices which make our more complex ethics practicable.  That is what we will focus on in this article.

 

Prior to our discussion of what particular practices can make ethics in general practicable, we should first consider what ethics we are talking about.  Many kinds of ethics are absolutely impossible to practice in any situation, so we must first rule out those unpracticable kinds before asking how we can apply hypothetically practicable ones.  Thus, for the sake of our discussion about practicability, we will focus exclusively on those forms of ethics which are practicable (i.e., ethics which can hypothetically be applied to discern between two possible actions within a particular real-world situation).  One requirement of a practicable ethics is that its ethical requirements be observable by the actor.  This can be achieved either by requiring the actor to act exclusively on clearly deducible statements (i.e., if someone is required to act in a way which seems clearly logical) or by delegating to the ethicist only those parts of ethics which can be calculated (i.e., if someone is required to follow a group of precepts whose source they cannot understand but whose execution they can).  Any other kind of ethics, which requires an actor to access the perspective of another or an objective perspective, should be viewed as irrelevant to any discussion focusing on the practical application of ethics (as opposed to a purely theoretical one).  While requiring ethics to be observable might seem like a very minor requirement (after all, who would require someone to do what they cannot possibly understand?), it is extremely helpful (both in itself and in its implications) for narrowing the scope of our discussion.

 

So, no ethics should require an actor to base their actions on information which cannot be gathered by that person prior to their actions.  This has two immediate implications.  The first is that a question which has never occurred to the actor cannot be the reason why something is right or wrong for them to do.  The ethical indictment of an action cannot be based on knowledge which cannot reasonably be gathered by the actor – and, if the thought never occurs to someone that something could be wrong, then its being wrong is a prototypical example of information that needn’t impact someone’s thought process on whether to do it.  (As an example, imagine that there were many people somewhere who were, for some reason, killed whenever people walked on sidewalks.  If someone had never considered that walking might hurt those people and if nobody had ever connected the two events, an actor who chooses to walk on the sidewalk cannot reasonably be condemned as unethical for killing those people.)  Given that, we can make a second judgement:  if something appears right (or wrong) to an actor, then that action is right (or wrong), at least insofar as an action’s being right or wrong affects the actor’s decision on whether to take it.  If an action is right or wrong because of information which has not occurred and could not reasonably have occurred to the actor, then that information should not be a part of either the actor’s decision or a later appraisal of the action’s rightness.  An ethics based on unknown information is absurd for a simple reason:  it would require the actor to base their decisions not on what they believe to be the best action but on information they cannot access, making even the action which seems most preferable susceptible to a sudden shift in desirability based on information which has not been and could not have been gathered.
An approach to ethics which requires unknown information to take center stage in action would lead to a constant need to gather more information, which would make action impossible.  A retrospective appraisal based on information which could not have been gathered by the actor being appraised is, likewise, an appraisal which only indicts itself.  The appraiser, being a person subject to many of the same problems as the actor who made the initial decision, would have to require of themselves that they neglect no possible information which could shed doubt on their appraisal.  Otherwise, the appraiser must put themselves on a pedestal from which they are somehow free to criticize without being subject to the same criticism.  Appraising based on then-unknown information thus falls into exactly the same trap as acting based on then-unknown information:  it leads to undesirable inaction or incoherence in action.

 

The intention here is to say that an ethics must base its precepts not on what is objectively the right thing to do but on what the actor thinks is the right thing for them to do.  As such, we must not take it as given that something which is certainly good from a logical perspective is necessarily good so far as practical action is concerned.  In fact, the ethicality of someone’s taking an action has little to do with the ethicality of the action in a vacuum; rather, it depends on the ethicality which the actor believes the act holds.  When we discuss ethics, we must not believe that an action’s being right makes it ethical, or that an action’s being wrong makes it unethical.  An action’s ethicality has its root not in objective morality but in the thoughts and beliefs of an individual person.  So, an action which is objectively right but which appears wrong to an actor is effectively wrong (and vice versa).  Seeing this as true, we have a quite clear and easy way of judging whether an action is ethical: not by relating the ethical system directly to actions but by relating our thoughts about an ethical system to our thoughts about our actions.  Avoiding the thought that our actions are not in line with our ethical system should thus be the main goal of ethical discussion.  Now that we have a clear idea of what we need to show, we can proceed to the main purpose of the article: diagnosing and fixing a major practical problem in ethics.

 

 

 

Let us imagine two people, John and Richard, who both behave ethically.  Both behave as per an impractical ethical system.  Richard believes in an ethical system which requires him to calculate the consequences of each action he takes, whereas John believes that he should behave as per a group of ethical maxims (do not steal, etc.), which he must not break.  Both believe in doing what they currently believe their ethical systems imply.  As such, if one of them is under the impression that it is a moral imperative to do something, he will try to the best of his ability to do it.  Both, we presume, will change how they behave whenever they are under the impression that the way they currently behave is an incorrect application of their morality.  As such, if one of them comes to believe, mid-action, that what he is currently doing is wrong, he will immediately stop and do something else.

 

The reason we have chosen these conditions is that they are implied by the above discussion of ethical systems.  An ethical system which demands that its followers believe in it and act exclusively on currently available information would also seem to inherently ask its followers to act in day-to-day life on currently available information.  As the information which is currently available changes whenever the actor learns anything, the actions which an ethical actor should take would also change whenever the actor learns anything new about the consequences of a given action.  This information sensitivity applies not only to factual epiphanies (e.g., knives are sharp) but also to moral epiphanies (e.g., stabbing this person would be immoral).  Any ethical system which both calls for people to act following certain rules and which leaves itself open to moral or factual epiphanies, therefore, would require people to change their actions whenever they have such an epiphany, even if it takes place in the middle of an action.

 

Let us now imagine several examples of actions committed under an information-sensitive ethics.  Let us imagine John has slipped off a cliff.  Richard manages to catch him before he falls.  However, Richard is not sure he wants to pull John onto solid land.  In this hypothetical, let us presume that saving John’s life is the preferable action for Richard, and that, given enough time to think about it, Richard would agree that saving John’s life is the right thing to do.  Let us also presume that Richard is not sure of this at the time.  Instead, Richard is uncertain whether he should save John’s life.  While he is predisposed towards pulling John onto solid ground, he is held back by several questions he has not answered (e.g., whether falling off the cliff is as damaging as it first seemed, whether John purposefully jumped, or whether trying to pull John onto solid land could backfire, with both Richard and John falling off).  For Richard to save John’s life, he must affirm that he should continue pulling John onto solid ground for long enough to complete the action.  For Richard to end John’s life, he only needs to believe that letting go of John is right for long enough to let go, killing him.  Richard’s thoughts are quivering between the two assertions (one that he should pull John onto solid land, the other that he should let John die), and Richard, doing whatever he currently believes is the right thing to do, will therefore quiver between the two actions.  Richard would thus decide which action to take not based on which is the better option (even if he is trying to figure out which is better), but based on which action (killing John or letting him live) he believes is right for long enough to go through with it.
In this case, where killing John would take a few seconds and pulling him up would take far longer, if Richard gives equal credence to both options, he would probably end up dropping John off the cliff.  In a case in which no option can be accomplished in the time it takes for Richard to change his views on the situation, he would choose neither option, instead waiting until his inability to choose between his options leads to a third outcome due to factors other than Richard’s choices (such as Richard eventually losing his grip and John falling off the cliff).

 

Through this, we can see a major problem with the practical application of many complex ethics.  Every ethics requires some assertion to be affirmed before someone can act on it.  Worse, that assertion is always difficult to settle and requires a difficult moral, semantic, or psychological judgement.  In the case of an ethics which focuses on consequences, the problems are quite clear: the consequences of an action must be appraised, not just individually but also in relation to one another.  If they are appraised via a psychological judgement (e.g., happiness), then an action would require a judgement based on data which is constantly subject to change, and therefore which action someone should take will be equally subject to change.  While an ethics which focuses on individual rules about how one should act (e.g., don’t kill) would seem to fix our problem, that is not necessarily so.  Rule-based behavior requires several assertions prior to an action:  first, that a general rule applies in the specific case; second, that there are no conflicting rules which might be affected; and third, if there are conflicting rules in a situation, a decision about which should be prioritized (e.g., between “try to save the most lives possible” and “don’t kill”).  All three of these are difficult to assert with complete certainty, and all three can easily cause someone to sway back and forth in a real-world situation on how they should act.

 

For an example of how this applies to rule-based ethics, let us again consider Richard and John.  Let us imagine Richard let John fall off the cliff, and yet John has somehow survived.  John thinks Richard is homicidal and wants to stop Richard from trying to kill anyone else.  He is uncertain whether to stab Richard, which would incapacitate Richard and make it impossible for him to kill anyone, at least temporarily.  On the one hand, one rule John might hold is “in no case should you let anyone kill another”; on the other, John might say, “you should not hurt people based on arbitrary psychological judgements about them.”  While it is perfectly reasonable to think we could reach a conclusion on what John should do and on which rule takes priority, such a quandary would take quite a bit of time and effort to resolve.  This time and effort might be enough that the original question becomes moot (if Richard is caught) or that the time spent wondering becomes an ethical problem in itself (if Richard kills again while John is deliberating).  Thus, if John stabs Richard, a completely unreasonable action might be taken – or, if John never stabs Richard (or stabs him later than he should have), then a completely reasonable action might never be taken.  In either case, John might act in a manner which goes directly against his ethics.

 

In both of the cases we have discussed, it is not the particular action taken that is the problem.  Rather, the problem is that the action was taken while the actor was quivering between different options, before the actor was certain about which action followed their moral guidelines.  Thus, looking back at their action, the actor will, no matter how righteous the individual action was, argue with themselves over whether it was the right action to take.  And, just as with people considering whether to do an action in the future, the actor will reflect upon their past actions with the same uncertainty.  Whether any individual action follows a group of ethical maxims is rarely a cut-and-dried case, and that is even less true when we consider actions in reality, understood only through our imperfect, imprecise memories.  How can we know whether an action was ethical if it is completely vague in our memory?  If an action and its consequences are left vague, and if it was previously uncertain to us whether our actions followed our ethical maxims, there are very few reasonable arguments that the previous uncertainty should not carry over into our understanding of our actions.  In that situation, we would be left in constant uncertainty about whether any action was right.  That uncertainty would translate into the belief (at least for certain stretches of time) that our actions didn’t follow our ethics (and thus that our actions were wrong) – precisely what we have been trying to avoid in the first place.

 

An easy response to these concerns would be to proclaim that we should think through the content and consequences of our actions before taking action in a difficult moral quandary.  This is a perfectly reasonable approach when we are discussing questions that can be answered over a long period of time.  For instance, the question “should I write this article?” does not require an answer within a few hours, so I can think about whether to write the article before doing it.  In such cases, the quandary can be completely resolved before the action is taken, and thus there is no uncertainty and no bad action.  However, there are several kinds of cases in which deferring action until thought is complete does not work.  The first involves cases in which there is a clear and large drawback to waiting.  For instance, let us go back to our example of John hanging off the cliff.  If Richard stands there and refuses to decide between saving John and letting him fall, Richard will eventually lose his grip and John will fall.  Given enough time and thought, we could easily imagine a calculation Richard could use to decide how long he should wait before acting on the idea his mind has settled on.  And yet, Richard cannot perform it.  Richard does not have months to arrive at a conclusion about whether this particular situation falls under his ethical rules – he must take action before John falls, and time spent on anything other than Richard’s particular predicament is time wasted.

 

The second problem involves cases in which the attempt to reach a conclusion is a larger problem for someone than choosing the wrong action.  Think, for example, of a case where John (the believer in rule-based ethics) is walking and sees a rock ahead of him.  He is uncertain whether he should kick the rock.  Near the rock there is an ant hill, and he thinks that if he kicks the rock, it might hit the ant hill and all of the ants might die.  John is uncertain how to assess the character of kicking the rock into the ants, as he is not an entomologist.  He does not know whether the ants are sentient beings, and thus whether kicking the rock would mean killing sentient beings.  On the other hand, he has heard his neighbors complaining about an ant infestation, and he thinks that killing these ants might stop more ants from infesting nearby homes.  Several important rules here are contrary to one another: “you should not kill living things” and “you should do the most possible to help other people.”  The resolution to this disagreement is very difficult to find.  It would likely take substantially more effort to determine the right answer than the decision of whether to kick the rock is worth.  The time spent thinking about whether kicking a rock is good could, in fact, have been spent on things that are more important and would lead to more good.  Therefore, it is perfectly reasonable for someone to argue with themselves over whether the amount of effort they spent figuring out the right answer to an ethical quandary was righteous, defeating the purpose of the original action as well as all the thought put into it.

 

While these are just two caveats, they cover most actions.  The first caveat – the real costs of waiting to think through and calculate the right decision – applies to nearly every problem, because most decisions have diminishing returns the longer one spends wondering whether to act.  Spending time thinking about whether to save someone from falling off a cliff, or whether to answer a question during an interview, obviously has immediate ramifications (in the first case, the person could fall; in the second, the interviewer might get an undesirable impression of the speaker), but so does spending time considering, say, whether to read a book.  All returns decrease with time, as there is less time in which those returns can be used (less time while someone is alive).  Thus, anything that leads to positive consequences, whether direct (for example, learning) or indirect (for example, altruism), involves cases in which the positive effects decrease in magnitude over time.

 

The second caveat – the costs of the time spent fully thinking through an action – not only covers the individual example we discussed but creates uncertainty about thinking (and thus, as thinking is an act, acting) in nearly any case.  We do not know, prior to thinking, how difficult a problem will be or how long it will take us to find peace with an answer.  Indeed, with many questions, we may never find peace with any answer (in spite of millennia of discussion, there has yet to be a clear resolution in ethics on nearly any point).  Thus, when considering whether to simply “think more” about a situation, we must first ask “how much do we need to think?” – an unanswerable question in most situations.  (The only instances in which we might be able to estimate the time needed to answer a difficult question are those in which we have had practice answering similar questions and have thus developed an empirical basis for our estimate, as with an arithmetic problem.)

 

As such, requiring complex thought as a prerequisite for action will often fail to produce desirable outcomes.  It can reduce the positive returns of an action to the point where they fail to compensate for the time lost, so that taking the action after deliberate thought becomes a significantly worse decision than taking it before thinking would have been.  Thinking complexly about every action can also simply be impossible (because no answer can be found) or have negative consequences more impactful than the action’s original positive consequences.  The solution to this problem given in common-sense philosophy is reasonable: an action which doesn’t lose many of its positive consequences over time and which has important consequences is usually considered something to think seriously about before deciding, whereas less important or more pressing concerns are not prioritized for thought.

 

And yet, we are left with the significant problem of those smaller or more pressing issues, still unsolved.  How should we conduct ourselves when an issue does not seem at face value so pressing, or when an issue is immediate?  Indeed, in these cases, reasons for acting are often not just crude or incomplete but actually nonexistent.  If you asked someone, for instance, why they kicked a rock while walking, an answer such as “no particular reason” would not strike anyone as particularly odd; in fact, it would probably be a quite common response.  (On the other hand, if you asked someone who committed a clearly premeditated murder, “Why did you kill that person?” and they said, “No particular reason,” they would seem either to be dodging the question or to be giving a completely incoherent answer.)  Further, most problems do not arise in a perfect context for decision-making; most arise under some kind of time pressure.  And time pressure compounds the problem:  how can someone create and apply new rules for situations in which they are facing time pressure when they are already not under perfect conditions?  Frankly, focusing on knowledge prior to taking action is near impossible under most imperfect conditions, and hence in most real-world situations.

 

As we can see, perfect conditions of thought are extremely desirable and fix our problem of time pressure.  In fact, nearly any ethical problem can be resolved given enough time for the actor to think.  Thus, in practical ethics, the focus of action should be to avoid time pressure and increase the amount of time the actor has to resolve an ethical problem.  If time pressure is removed altogether, then ethics becomes simply the application of certain maxims in thought – which will allow for significantly preferable actions.  Our main goal, therefore, is to put ourselves in a situation where we can consider the morality of the alternatives without time pressure – the perceived ethicality of the particular actions taken to get out of time pressure matters less than getting into the ideal conditions for thought.  Because imperfect actions arise under imperfect conditions, we should also diminish the magnitude of actions taken while still under imperfect conditions (i.e., we should take the fewest actions which we cannot easily reverse).  Admittedly, such behavior will lead to imperfect moral actions.  Take the rock example: our rule of reversibility would require us to avoid kicking the rock.  Yet kicking the rock might be the preferable action, and if we think about it later and conclude that we should have kicked it, kicking it then is suboptimal (kicking it immediately would have been optimal).  However, no matter how suboptimal our action is, it is still preferable to acting without taking reversibility into account.  Action under time pressure is action taken without the governance of ethics – action with not just suboptimal but nearly random moral character.  Thus, we can conclude that we should focus on taking the most reversible actions possible, even if that creates what seem to be suboptimal moral results.

 

However, we must be vigilant.  Rules that can only be used when we recognize imperfect situations require us to recognize imperfect situations in the first place.  If, however, we recognize in this discussion (which I presume is not under particularly threatening time pressure) that most situations are imperfect, then we can already assume that we should always act to create functionally perfect conditions before taking any sort of ethical action (e.g., in situations where I don’t know whether there is time pressure, I should still act as if there were).  What this means is that, before all else, we should get out of time pressure.  However large, small, or morally lopsided a situation might at first seem, that says nothing about how well founded any assertion about an action is (or whether one should take that action).  Even in cases in which no antithesis has yet formed in response to the belief that a particular action is ethical, we should not immediately act on what we think our ethics demand.  We do not, after all, know at that point, so we should allow ourselves enough time to create a real ethical discussion with ourselves.  Practically speaking, our first question should not be “how moral is x-ing?” but rather “how reversible is x-ing?”  Recalling our earlier example of the rock and the ant hill: we can always kick the rock later, but we cannot necessarily un-kick the rock.

 

Our reversibility-based inquiry is entirely different from traditional ethical rules and does not suffer the same failings as other ethics.  This is because the question being asked is not a complex one like, “understanding all of the consequences of x-ing, will x-ing lead to the most y?” (which is subject to constant new realizations about possible consequences), a semantic one like, “is x-ing a form of y-ing?” (which, due to the situational nature of most words, is not simply answered in most situations), or a moral assertion like, “does moral rule x come before moral rule y?” (the kind of assertion that takes someone years to develop and believe in, often after weighing many arguments on both sides).  Unlike these difficult and morally questionable assertions, the assertion of reversibility is a simple factual matter which is not subject to moral epiphanies and only very rarely subject to factual epiphanies.  “Will I likely be able to do x later if I do not do x now?” and “Will the consequences of x likely go away if I undo x later?” are questions that, in most situations, can be answered immediately and with obvious legitimacy.  If I am deliberating whether to let my friend fall off a cliff, it is not merely true that letting him fall is irreversible; it is an obvious and immediately apparent fact that someone’s fall cannot be willed away.

 

So, we have found our way out of one quandary (or at least its most extreme version).  However, suppose John has walked past the rock and the ant hill and is now wondering whether he should walk back and kick the rock.  We now face our second caveat:  the problem of overthinking.  To recap: if we spend the rest of the walk thinking about kicking the rock onto the ant hill, we will likely miss out on substantially more interesting and important questions that would affect far more people far more significantly.  Focusing so intently on one or two problems while ignoring every other problem is just as much an ethical mistake as making the wrong choice in an individual moral quandary.  If we ignore someone falling off a cliff and choose not to help him because we are so engrossed in our contemplation of whether to kick rocks, we have done something clearly unethical, something that will very likely haunt us in the future.  We cannot simply eliminate every occasion of overthinking that will ever affect us.  Each of us constantly has millions of possible actions we could be taking; if we are to think about any of them, we will almost certainly not, probabilistically speaking, think about only the best of them.  So, we should focus more on mitigating the effects of overthinking than on eliminating it entirely.

 

Now that we have explained our problem, let us go back to that rock example.  The two possible outcomes of kicking the rock are, first, that we could hurt sentient creatures (ants) and, second, that we could stop a few possible ant infestations.  We should first assume the most extreme possible scenario for each outcome.  Let us start by considering the pro-kicking side.  We can, on this side, assume that kicking the rock would end an ant infestation and kill no sentient beings.  On the other side, we can assume that kicking the rock will kill sentient beings and will not end the ant infestation.  We should look at both of these consequences, and, rather than focusing on which one is more ethically or factually sound, we should compare the magnitude of their cumulative possible impact against the cumulative possible impact of other possible actions and topics that we could think about.  We could, for instance, think about whether we should volunteer at a soup kitchen and look at the possible positive and negative sides of that decision.  We can then compare both possible actions' consequences and characters.  If the latter has consequences more important to us than the former, then we should spend our time thinking about the latter rather than the former.  Only once we have considered all of the more important possible actions should we refocus our efforts on the rock problem (however unlikely it is that our greatest concern will ever be whether to kick a rock).

 

Comparing the possible consequences of one action to the possible consequences of another, while not perfect, is a relatively good method for dealing with the problem of overthinking.  Before heaping praise upon this method of deciding how to act, I will acknowledge one problem.  There are many cases where a possible action has much larger possible consequences than it first seems to, and the inverse is also true:  because we consider only first impressions rather than reaching a comprehensive understanding of all the possible consequences of any given action, we will miss some important actions and dwell on some unimportant ones.  However, what our comparative approach loses in perfection (which no answer to any question will ever have), it gains in expediency.  Consideration of most minor problems will not even reach an argumentation stage, for there is no need to examine their possible consequences (or character) in a critical light until the exact action is considered.  Therefore, most minor dilemmas (and, indeed, most problems) can be quite easily brushed aside and disregarded, something very difficult to achieve without cutting a few corners.  It is also true that we will never really be affected negatively by most of those actions which we do not take, for we will never consider them.  Even if they were potentially extremely important, they will remain comparatively unimportant, and therefore we will never be able to recognize our "objective" moral mistake of not focusing on them.

 

In spite of all this in-depth discussion of how each part of our thinking would go, our system is, in practice, much simpler than it may first seem.  Addressing a moral problem requires only two steps.  First, when a problem is time-sensitive, we should try to convert it into a non-time-sensitive one by doing whatever reversible (or comparatively reversible) action is available.  Second, once we have done that, we should zoom out of the individual ethical quandary to examine how it compares to our other ethical quandaries, devote our thinking to the possible consequences of actions bearing on whichever problems have the highest possible impacts, and only then work toward answers to the most difficult quandaries using conventional ethical techniques.

 

Throughout this discussion, I have purposely left completely open a significant number of questions pertaining to ethical practice (what is time-sensitivity; how should we treat problems that are more time-sensitive in relation to problems that are less time-sensitive; when there are many time-sensitive problems which we cannot think about, which ones should we prioritize; etc.).  I have done so because this article is intended to provide a single demonstration of an answer to just one question about how we can bring ethics into practice.  By focusing on answering questions such as those addressed above and prescribing practices which actually allow for ethical action, the attempt is to change ethics from a hypothetical group of academic arguments into an actually practicable discipline.  I would like to emphasize here the difference between such a project and the project of "practical ethics."  In practical ethics, the attempt is to apply abstract ethics to day-to-day situations without allowing any changes that would make the ethics practical.  In other words, that attempt involves looking at the real world while keeping our ideas of ethical action unchanged from what they were in the laboratory, where ethics was not part of anyone's actual decision making.  By contrast, my attempt here is to completely flip that approach on its head:  instead of leaving ethical prescriptions unchanged, this article has argued for an actor to take actions quite different from those that impractical ethicists call for, quite different from the ethical orthodoxy which practical ethics endorses.  Only that way, by taking actions directly contrary to our impractical ethics, can we create a situation where we can think and make actual decisions about how to act ethically.

 

In other words, this is not an argument about practical ethics, but about ethical practice.
