Signs of the Times

Introduction:

The ongoing social change in many liberal democracies today is rather unusual.  Most early political observers of liberal democracy, such as Alexis de Tocqueville, viewed democracy as inherently culturally conservative and averse to significant change.  And yet, modern liberal democracies around the world are facing significant, unusual social change.  Indeed, not only are we facing significant social change but, as we will argue, many of the current changes, such as the increase in economic inequality and the decline in the correlation between test-taking merit and practical success, have been unusual reversals of the social change that preceded them.  Trying to understand why current social change is occurring therefore poses a difficult theoretical question (that is, “Why does social change occur in liberal democracies writ large?”) and a difficult particular question (that is, “Why should current social change in liberal democracies be so different from the social change underway several decades ago?”).  In this article, we will seek to answer these questions.

 

What we will do in this article is what philosopher Hans Sluga has called “diagnostic political philosophy.”  In this form of philosophizing, the “concern is with assessments and verdicts, not with general norms and principles”; that is, it theorizes with the intention of figuring out certain things about the way the polity actually works.  In focusing upon getting a better understanding of the current political situation of a particular nation, diagnostic political philosophizing follows certain rules, which Sluga sets out as follows:

 

(1) It directs itself at the present and rejects a timeless perspective.

(2) It thinks in genealogical terms and shuns an anti- or ahistorical view.

(3) It believes in the “unity of theory and practice,” not in pure theorizing, and may reject altogether the possibility of comprehensive theorizing.

(4) It concerns itself with concrete choices rather than universal norms.

(5) It is historically self-conscious rather than oblivious of its own historical situation. 

(Sluga 2014, p. 25)

 

These rules set up a sort of political philosophy that emphasizes using one’s particular historical position as a tool, perhaps even the most important tool, for understanding one’s political situation.  It views one’s political obligations as created by one’s particular historicity and placement within the history of a given nation.  We will follow these general rules throughout this article, with our theory arguing that modern social change can occur only in our particular social environment, with its mass media advertising and modern social norms.

 

Even more importantly, Sluga outlines a group of “steps” involved in any diagnostic practice, political or otherwise:

 

1. Initial, general observation.

2. Examination, probing.

3. Selection and identification of localized cues.

4. Description, comparison, classification.

5. Determination of genetic and causal factors.

6. Prognosis.

7. Diagnostic opinion, prescription.

8. Intervention, therapy.  (Sluga 2014, p. 33)

 

We will try to follow these general rules of practice throughout this article.  The first three steps, which Sluga describes as “diagnostic phenomenology,” are the finding of particular indications of the broader ongoing trend.  At the end of this phase, we should be able to discover particular “cues” which, together, indicate a particular source of the situation we are describing.  The fourth and fifth steps, which Sluga describes as “diagnostic genealogy,” are the theorizing of the particular reasons why those “cues” exist, as well as of when and how, historically, they started to develop.  The sixth step is where one diagnoses exactly what is ongoing in the patient (in this case, the nation or society) and how that patient is likely to develop in the future.  The final two steps are largely self-explanatory, with the seventh step being our figuring out what the patient should do in relation to the illness and the final step being the application of theory to the real world (Sluga 2014, pp. 33-36).  In this article, we will try to provide all but the final step (which necessarily must be taken off the page).  So much for our intentions in this article.

 

In spite of our generally diagnostic intentions, Part One of this article is going to be mostly theoretical.  Within the first section, we will seek to answer, on a broad level, the question of how social change can occur in a modern, liberal democratic society.  Our answer will be that social change happens when advertisers and mass media producers portray a society as having different values, or behaving differently, than it does in fact.  That portrayal creates a discrepancy between what people believe about others and how those others actually behave (or what they actually believe).  This discrepancy then causes people to change their own views or actions to better match what they believe to be the norms of the society, or makes them less likely to suppress beliefs or actions that previously seemed to them abnormal.  That change then causes the actual norms of society to conform to what were previously merely perceived norms.  We will also argue that advertisers and mass media producers change their advertising strategies, and therefore social attitudes, based upon whichever strategies work in practice, whether or not they actually know why they work, and that those strategies are developed by trial and error, usually during times of economic recession.  Whichever strategies work for one firm are usually then copied by other firms, creating industry-wide similarity in which views advertisers and mass media producers push.  Most of our argument is a demonstration of how social change can be caused by advertisers and mass media producers; in making this argument, we will start by analyzing why the individual people who change their opinions, habits, or actions might be influenced to do so, and only afterwards will we discuss what processes might affect advertisers’ favoring of certain views over others and how that can change over time.
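To make this discrepancy-closing mechanism concrete, here is a minimal, purely illustrative simulation sketch in Python (the function name and every parameter value are our own hypothetical choices, not drawn from any study or from the argument below).  It models individuals who, when they conform, conform to the prevalence of a view as it is portrayed to them rather than to its actual prevalence:

import random

def simulate_norm_shift(n_people=1000, portrayed_support=0.7,
                        initial_support=0.2, conformity_rate=0.1, steps=50):
    # Each person either holds the view (True) or does not (False).
    people = [random.random() < initial_support for _ in range(n_people)]
    for _ in range(steps):
        for i in range(n_people):
            # Occasionally a person "checks" the norm; what they see is the
            # portrayed norm, not the actual one, and they conform to it.
            if random.random() < conformity_rate:
                people[i] = random.random() < portrayed_support
    return sum(people) / n_people

print(simulate_norm_shift())  # drifts from 0.2 toward 0.7, the portrayed norm

Under any such assumptions, the actual norm eventually converges on the portrayed norm, which is the process of perceived norms becoming actual norms described above.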

 

Part Two of this article will be significantly more diagnostic in methodology.  In that part, we will try to understand the currently ongoing social change in the modern United States.  We will notice that, unusually, in spite of the historic correlation between intellectual capability and economic success, America has seen an increase in intellectual egalitarianism amidst a simultaneous increase in economic inequality.  We will argue that this change has occurred due to an increasing percentage of the American population being affected by personal neuroticism and personal aversion towards work.  We will argue that these social changes have been caused by advertisers producing a large number of intelligent but neurotic people who, instead of working, are used as testing subjects to refine advertisements for wealthier consumers.  To ensure that this population is neurotic and does not work, these advertisers have, in our view, sought to undermine the traditional cultural, social, and religious milieu of America and to replace it with one more amenable to advertiser interests.  We will argue that this shift is likely to continue until everything about traditional American society is replaced.  We will close by recognizing that, in spite of our quite negative prognosis, American society has still not been completely overhauled by advertisers and mass media producers, and that most of America therefore remains tied to its cultural, social, and religious traditions.  We will make this argument largely evidentially: we will base it on a large body of compiled evidence, none of which absolutely proves the source or future of ongoing social change, but which altogether makes our judgment about the course of American social change highly probable.

 

Part Three of this article will be a mostly moral argument about the obligations created by negative social change (such as the changes currently occurring).  We will argue that, even though ongoing social change is determined by the conditions under which it occurs rather than by individuals’ particular decisions, people still have a moral obligation to defy undesirable social change, even though following through on this obligation requires positive action.  We will argue that this obligation arises from the commitments one forms upon understanding the ongoing negative effects of social change.  The best and only effective way to follow through on this obligation, we will argue, is to change the way we weigh our remembered opinions and habits: from favoring those we held more recently to favoring those we held longer ago.  We will argue that, in spite of several practical difficulties this behavior faces and the onerous personal demands it makes of us, it is still morally demanded due to the extent of the negative social change underway.  This part is argued almost entirely conceptually and abstractly, first by demonstrating that we do have an obligation to defy negative social change and then by applying philosophical theories of defiance, particularly medieval philosophers’ theories of sin, to the social change we discussed in Parts One and Two of this article.

 

In the fourth and final part of this article, we will apply our concept of moral defiance of social change to political action.  We will argue that we are morally required, within the political sphere, to support political action that decreases the influence of negative social change.  We will identify at least three sorts of behavior capable of doing so; the ones on which we will focus are the direct regulation of particular behaviors that accelerate negative social change, support for returning the educational system to the state it was in prior to the social change we oppose, and the usage of political rhetoric (and possibly legal action) to emphasize the contradiction between social change and the cultural heritage of the people affected by it.  We will base these arguments on exemplar governments that have had success resisting social change in their respective nations by adopting these policies.  (With respect to regulating particular behaviors, we will use the example of American regulation of nicotine and cannabis, while in the case of our policies on education and political rhetoric, we will use Maurice Duplessis’ government in 1940s and ’50s Quebec as our example.)

 

Finally, we will conclude by briefly recapitulating how we have enacted the eight diagnostic “steps” Sluga described (or at least, the seven we can enact in a written article).

 

Before proceeding with the article, I would like to quickly apologize for several failings that were impossible to avoid in writing it.  For one, we have focused almost exclusively on America in our diagnosis of present social change.  It seems highly likely to me that the social change we are describing is present not only in America but also in much of Latin America, Europe, East Asia, and Oceania at the very least, and possibly throughout the entire world.  In spite of this, I have been unable to effectively research whether the currently ongoing social change is solely an American issue.  This is for two reasons.  For one, America is a very big country with a massive number of social scientists researching the answers to many different questions.  As a result, it is much easier to answer any questions one might have about what social change is underway in America than to answer those same questions about a smaller or poorer country that has fewer working social scientists and therefore less thorough social scientific research.  Further, much of that research will be published in languages other than English; as someone who is monolingual, I am not able to read or understand much of anything written in other languages, let alone complicated social scientific papers.  Second, I am myself an American and therefore have far more interest in the issues facing America than those facing any other single country.  It seems highly likely to me that the social change we are describing is also occurring in other countries; however, without the ability to state this with any level of certainty, this article can only really claim to be about American social change.

 

Second, the argument we are making here, by virtue of our position in time and history, is extremely imperfect.  None of us are capable of prophecy, and as a result, we are really only capable of understanding what social change would look like if current trends were to continue completely unchanged in perpetuity.  Economist Joseph Schumpeter makes this point as follows:

 

Analysis, whether economic or other, never yields more than a statement about the tendencies present in an observable pattern. And these never tell us what will happen to the pattern but only what would happen if they continued to act as they have been acting in the time interval covered by our observation and if no other factors intruded. “Inevitability” or “necessity” can never mean more than this.  (Schumpeter 2003, p. 61)

 

Even further complicating our analysis, our political understanding of the world is innately colored by our position within it.  The data that we get are inherently schematic, tentative, and vague, and therefore extremely uncertain and easily affected by our own biases.  As Sluga argues:

 

…the political diagnostician is characteristically located inside the situation he seeks to diagnose, whereas medical diagnosis is carried out from outside the patient’s body. The philosophical diagnostician will thus have to be content with data that are limited, colored, and possibly tainted by the location, the place and the time, in which they are gathered. Philosophical diagnosis operates for that reason under conditions that make for a particularly high degree of uncertainty. The outcome of a diagnostic form of political analysis has to be accordingly tentative and problematic.  (Sluga 2014, p. 39)

 

As a result, the analysis we will offer here should not be taken as gospel or as absolutely certain.  It is flawed and could very well be wrong.  The trends in our society could completely change in a couple of years; they could be leading somewhere entirely different from where they seem to be leading; there could very easily be evidence I am overlooking; and so on.  That much being said, political analysis is still important.  We cannot simply avoid all knowledge because what we can know is inherently uncertain.  Nor can we avoid recognizing our moral obligations because we cannot be sure whether everything in the world around us is different than it seems.  At the end of the day, uncertainty is not a good reason for inaction.  So, in spite of our inherently uncertain analysis, the obligations we describe still exist, and the problems we see still probably do as well.

 

Finally, we try to avoid any especially contentious moral issues throughout this article, in particular the question of why we should view the social change we will describe as negative.  That is, we do not seek here to make any specific or sufficient moral argument about why the social change currently underway in America is wrong; we simply assume that it is.  The reason for this is that the ongoing social change in America is so abhorrent according to most people’s morality that trying to make an argument specifically explaining why is, if anything, more likely to divide those in our audience (who could have a variety of different general moral views) than to unite them.  It is very possible, as we will point out in more detail towards the end of Part Two of this article, that the reader might hold a moral view according to which the ongoing social change is good.  Sadly, convincing that particular reader is not within the scope of this article.  The first two parts of this article might still interest such readers, as those parts attempt to look at ongoing social change on a purely factual level, although Parts Three and Four, both being about the moral obligation to oppose this social change, will probably not be of as much interest.

 

This much being said, there is still, in my eyes, a great deal that can and should be said to most people about American social change.  We will seek to at least contribute to that broader conversation.

Part 1:

In a free society, the existence, let alone the preponderance, of broad social change is inherently counterintuitive.  Society in a free nation is pluralistic and therefore should seemingly operate only in factions; if social change occurs, it should only affect individual sub-cultures within the broader nation while leaving the other sub-cultures unaffected.  Indeed, at its extreme, a truly “free” society ought to experience no “social” change at all because there should not be any broad “society” to change; the nation should change only inasmuch as the average of disparate individuals changes, with no broad, general change underpinning those individual changes.

 

Of course, this understanding of social change in a free society is overly simplistic and clearly false; there are quite obviously large cultural changes that have occurred, to say nothing of the obvious social changes still underway, throughout liberal democracies around the world.  While we will discuss more discomforting social shifts later in this article, nearly everyone would agree that several of the most significant social shifts have been positive.  In the United States in 1958, for instance, only 4% of Americans approved of “marriage between white and colored people” compared to 94% who disapproved.  By 2021 (when asked the slightly different question of whether they approved of “marriage between Black and White people”), those numbers were exactly flipped (McCarthy 2022).  In 1937, 61% of Americans described themselves as supporting the “birth control movement” (with 26% saying they opposed that movement) (Roper Center for Public Opinion Research 2015).  Comparatively, 89% of Americans in 2021 described using birth control as “morally acceptable” (compared to 8% who described it as “morally unacceptable”) (Newport 2021).  Perhaps more controversial (but still displaying clear social change) are attitudes towards homosexuality.  In the early 1970s, several polls found that about 70% of Americans viewed homosexual relations between adults as “always wrong,” compared to only 42% who thought the same thing in 2021 (De Boer 1978; Newport 2021).  In all these cases, there has been a large change in moral attitudes which has led to the replacement of one consensus with another, not simply within particular communities, but within society as a whole.  Further, these are changes that affect views held very passionately by their believers; in the 1950s, many opponents of interracial marriage viewed it as not just morally questionable but a moral outrage, whereas by 2021, the opposite view (disapproval of interracial marriage) would probably be viewed as indicative of a negative moral character.

 

It is highly unlikely that such massive social shifts occurred simply via individual people or singular communities, unaffected by the broad consensus, all simultaneously deciding to change their views in a particular direction.  Indeed, it is questionable whether these views are even based on reasoned opinions.  One’s view of whether interracial marriage is “morally acceptable” seems to be based more on impulse than anything else (an impulse which has changed significantly among most people over recent decades).  It seems that such social shifts display not so much the plurality of people or societies in the modern nation but rather their common openness to social change, which seems to create, replace, or disestablish consensuses agreed to by everyone, no matter the “communities” they come from or the “individuals” they are.

 

Such befuddling conformism has confounded scholars of democratic, free nations for a very long time.  Alexis de Tocqueville observed conformism in America as follows:

 

Men equal in rights, in education, in fortune, and to say it all in a word, of similar condition, necessarily have needs, habits and tastes barely unalike. As they perceive objects under the same aspect, their minds naturally incline toward analogous ideas, and although each one of them can diverge from his contemporaries and make his own beliefs for himself, in the end, without knowing it and without wishing it, all meet each other in a certain number of common opinions.  (Tocqueville 2000, p. 612)

 

A more recent description of the problem was given by theologian Colin Gunton:

 

The much vaunted pluralism of modern secular cultures conceals an underlying tendency to deny plurality and individuality. Modern individualism breeds homogeneity. Support for such a thesis is to be found in a number of places, but an early reading of the signs of the times was offered by Kierkegaard, who protested in the name of individuality against what he saw to be the levelling tendencies of the modern age:

For levelling really to take place, a phantom must first be raised, the spirit of levelling, a monstrous abstraction, an all-encompassing something that is nothing, a mirage — and this phantom is the public... Only when there is no strong communal life to give substance to the concretion will the press create this abstraction 'the public', made up of unsubstantial individuals who are never united or never can be united in the simultaneity of any situation or organization and yet are claimed to be a whole.

Kierkegaard's analysis reveals, I believe, the logic of the matter, which is that in the absence of an adequate way of accounting for and realizing socially the relations of the many to each other - almost a definition of individualism - what I would call a false universal (the public, or 'the people' - or history or the market) rushes in to fill the vacuum.  (Gunton 2005, pp. 30-31)

 

Or, take philosopher Robert Pippin’s paraphrase of Jonathan Swift’s view:

 

…modernity promised us a culture of unintimidated, curious, rational, self-reliant individuals, and it produced, so went the later charge, a herd society, a race of anxious, timid, conformist "sheep," and a culture of utter banality (Pippin 1999, pp. 21-22).

 

Yet, many of these identifications of conformity fail to properly explain its source.  Take de Tocqueville’s, for instance.  De Tocqueville locates the basis of this conformity in people’s fundamental equality in a free democracy, which leads them to be suspicious of others’ contrary opinions.  Therefore, he states, people in a democracy are fundamentally conservative:

 

The more I consider attentively the effects of equality on the intellect, the more I persuade myself that the intellectual anarchy to which we are witnesses is not, as some suppose, the natural state of democratic peoples. I believe that one must consider it rather as a particular accident of their youth, and that it is shown only in that period of passage when men have already broken the old bonds that attached them to one another, and still differ tremendously in origin, education and mores; so that, having preserved very diverse ideas, instincts, and tastes, nothing any longer prevents them from producing them. The principal opinions of men become alike as conditions become alike. Such seems to me to be the general and permanent fact; the rest is fortuitous and fleeting.

I believe it will rarely happen within a democratic society that a man comes to conceive in a single stroke a system of ideas very far removed from that which his contemporaries have adopted; and if such an innovator presented himself, I imagine that he would in the first place have great trouble in making himself heard and more still in making himself believed.

When conditions are almost alike, one man does not easily allow himself to be persuaded by another. As all see each other from very close, as they have together learned the same things and lead the same life, they are not naturally disposed to take one among them for a guide and to follow him blindly: one scarcely believes one’s like or one’s equal at his word.

It is not only confidence in the enlightenment of certain individuals that is weakened in democratic nations; as I have said elsewhere, the general idea of the intellectual superiority that any man whatsoever can acquire over all the others is not slow to be obscured.

As men resemble each other more, the dogma of the equality of intellect insinuates itself little by little into their beliefs, and it becomes more difficult for an innovator, whoever he may be, to acquire and exercise great power over the mind of a people. In such societies sudden intellectual revolutions are therefore rare; for if one casts a glance over the history of the world, one sees that it is much less the force of reasoning than the authority of a name that has produced great and rapid mutations of human opinions.  (Tocqueville 2000, pp. 612-613)

 

Such an argument could make sense when de Tocqueville was writing, a time when democracy was new and many unfree nations were affected by significantly greater social changes (the many revolutions during the early 19th century in Europe and Latin America sparked by the American and French Revolutions) than were the already-democratic nations.  Today, however, it appears rather antiquated.  First, there have been massive social changes, often in democratic and free societies, directly caused by the sort of persuasion de Tocqueville denies is likely to occur in democratic society.  Take the rise of Evangelical, Pentecostal, and usually Charismatic Protestantism in Honduras, for instance.  Honduras was a historically Catholic nation; in 1970, indeed, it was 94% Catholic (Pew Research Center 2014).  However, by 2016, one study showed only 43.6% of the population being Catholic, with 42.1% being Evangelical Protestant; another showed 48% of the population being Evangelical Protestant and a mere 41% being Catholic (U.S. Department of State 2016).  Such a significant shift occurred in a country that spent most of the last half century (excepting periods of dictatorship during 1970-1971, 1972-1981, and 2009-2010) basically free and democratic, and which was, for all of that time, a nominal democracy.  This change was brought about by exactly the process de Tocqueville claims to be improbable in free and democratic countries: the direct convincing of people by charismatic individuals with unusual, marginal ideas, claiming themselves to be a “guide” to “follow…blindly.”  But this is not the only or even the most important objection to de Tocqueville’s perspective.  As we will discuss later, de Tocqueville’s whole assumption that people’s views are shaped solely by being “persuaded” by others is quite obviously false in an era in which mass media and professional advertising act as the main conduit through which people learn which interpretations of the world may even be considered.

 

Gunton asserts a similarly inaccurate basis for social conformity:

           

When God is displaced as the focus of the unity of things, the function he performs does not disappear, but is exercised by some other source of unity - some other universal. The universal is false because it does not encompass the realities of human relations and of our placing in the world, and so operates deceptively or oppressively.  (Gunton 2005, p. 31)

 

This idea that the absence of religion is the source of conformity is nearly impossible to maintain.  Social change has been no less significant, after all, in religious than in irreligious communities.  The shift in attitudes towards contraception, for instance, has been no less pronounced among Catholics, 80% of whom now say they “approve” of the usage of contraception, than in the rest of America, despite Catholic doctrine (Newport 2021).  Further, the whole argument seems confused.  The conformism of America has been observed for hundreds of years; a significant decline in American religiosity and church attendance has only occurred in the last 30 years (Jones Nov. 2021).  It seems that if the two events are connected, religious decline must be the consequent rather than the antecedent event.

 

Both of these arguments, however, face a more fundamental problem.  That is to say, neither seems to properly understand conformist social change as universal.  Both are theories that view social change as being caused by large numbers of people being convinced by a logical argument to change their opinions.  However, a logical argument simultaneously convincing whole masses of people to change their opinions seems unlikely to be the cause of social change.  Usually, arguments affect people’s actions and views based on what they already believe, on some broader group of views within which the qualities of an argument might be especially salient.  So, for instance, a theological argument might be salient for a religious Christian, and an economic argument might be salient to someone predisposed towards economics (which, as an aside, is in fact a personality type (Frank 1988, pp. 227-228)).  However, the salience of most logical arguments, whether that be a need for a replacement for God or one’s trust in other people’s opinions, is based on fundamentally individual beliefs.  An argument with a particular salience to one person based on their ideological commitments inherently does not have the same salience to someone else without those ideological commitments.  And, in a pluralistic society, everyone (or at least, many groups of people) should have different ideological commitments, and therefore different arguments should seem salient to them.  But the whole problem we are facing is that the beliefs at hand are beliefs that change rapidly across all groups.  The only way for a logical argument to be convincing enough to effect such massive change would be if its quality depended solely on universal beliefs.  Even then, though, many people do not arrive at many of their beliefs through intensive thought or debate and therefore do not have good reason to hold them.  To put it simply, in a free, pluralistic society, many different people will find many different parts of life and different questions important to them, and will therefore spend less time thinking about other facets of life.  So, if the source of the consensus is some kind of complicated argument, then that consensus should not include those with little interest in a particular ethical question.  Considering that a very large percentage of people (those not engaging in it) never have to ask themselves whether, for instance, homosexual relations are wrong, the social change on that issue should therefore be based not on a complicated argument but rather on a simple one.  So, then, the conclusion should be that in many of these fundamental social changes, attitudes have changed due to simple, universally appealing arguments.

 

Yet, in this case, we have to ask ourselves why people did not think of these arguments before.  If the arguments are simple and universally salient, then they should probably have been adopted as people’s original position on a given issue.  Further, for this argument to work, there must often be such salient arguments on both sides of a given issue, to allow people to shift between viewing an action (interracial marriage, for instance) extremely unfavorably and then extremely favorably.  In this case, people are asked to pick and choose between different, salient arguments.  Yet, for such arguments to be chosen in the exact same way repeatedly, the comparative value of the different arguments must itself be both simple and universally salient.  But here we arrive at a clear falsehood.  Shifts in how one chooses between different arguments can occur either because one learns some argument that somehow pre-empts or disproves the arguments justifying prior beliefs, or because one learns an argument that is simply more important than previously understood arguments.  We will explore the faults in both of these possibilities: the first because it requires an implausibly complicated argument to have motivated social change, and the second because that sort of argument requires those who have changed their minds to hold views quite different from those they in fact hold.

 

The first sort of process that could prompt social change, one in which a large number of people simultaneously learn a view that directly contradicts or pre-empts their older beliefs, is usually based on new information, often scientific.  For instance, one could say that the change in American views on interracial marriage occurred because research showed there was not a large innate difference between the races and that children were raised just as well by interracial couples as by single-race couples.  However, this is an innately complicated argument that depends on people paying very close attention to what research or general empirical data show, to such an extent that they are willing to deny views they previously believed in.  There is no evidence that Americans are willing to do this.  In 2014, a poll found that a quarter of Americans still believed in the geocentric model of the Solar System, only 48% of Americans believed in evolution, and only 72% knew what the theory of evolution stated (National Science Foundation 2014).  The idea that much less publicized psychological studies would be less complicated and more widely known by the average person than the heavily publicized concepts of evolution and heliocentrism seems false, as does the idea that this research is more certain than, again, the theories of evolution or heliocentrism.  Clearly, the sorts of arguments that might pre-empt or directly contradict prior beliefs are either unknown or innately uncompelling to most people, no matter how much factual evidence is provided.

 

The second sort of process prompting social change, one in which a large number of people simultaneously learn a new, relatively important argument contrary to their previous beliefs, is simply not what we are referring to when discussing social change.  If one holds that there is some kind of unresolvable moral quandary, wherein there are moral reasons to take two actions or hold two positions which cannot both be done or held at once, then even if one chooses one of the two claims over the other, that does not mean one does not recognize the claim of both positions.  For instance, imagine someone’s friend asked for help to do something morally wrong (hide a body, to use an extreme example).  In this case, there are moral claims both telling them to do that morally wrong action (to be a good friend) and to not do it (to not break general moral rules).  No matter which position a given person takes, they would still recognize the opposite position’s moral claim over them, even if they would say it was overruled by a more important claim.  In philosopher Bernard Williams’ words:

 

…it is surely falsifying of moral thought to represent its logic as demanding that in a conflict situation one of the conflicting 'ought's must be totally rejected. One must, certainly, be rejected in the sense that not both can be acted upon; and this gives a (fairly weak) sense to saying that they are incompatible. But this does not mean they do not both (actually) apply to the situation; or that I was in some way mistaken in thinking that these were both things that I ought to do. I may continue to think this retrospectively, and hence have regrets; and I may even do this when I have found some moral reason for acting on one in preference to the other. For while there are some cases in which finding a moral reason for preference does cancel one of the 'ought's, this is not always so. I may use some emergency provision, of a utilitarian kind for example, which deals with the conflict of choice, and gives me a way of "acting for the best”; but this is not the same as to revise or reconsider the reasons for the original 'ought's, nor does it provide me with the thought "If that had occurred to me in the first place, there need have been no conflict''. It seems to me impossible, then, to rest content with a logical picture which makes it a necessary consequence of conflict that one 'ought' must be totally rejected in the sense that one becomes convinced that it did not actually apply.  (Williams 1988, p. 56)

 

Yet, this is clearly not true for those who have changed their opinion on these major social issues.  When people discuss birth control or homosexual sex or interracial marriage, they do not say that there are reasons both for its being moral and for its being immoral and that they happen to believe the positives outweigh the negatives; rather, they tend to deny the existence of any negatives whatsoever.  Indeed, most people would likely be somewhat outraged by an argument about interracial marriage that recognized both negatives and positives to the action and found that they happened to just barely balance out on the side of the positives.  Further, many of these positions are entrenched to such an extent that people would not recognize them as moral quandaries.  Few people ask themselves whether many of these actions, particularly interracial marriage and the usage of birth control, are “right” or “wrong”; the answer is simply assumed.  So, then, the moral quandary is not resolved; rather, it is completely dissolved.

 

While this argument gives us good reason to reject particularly salient or valuable ideas as the source of social change, it does not give us any positive replacement.  It seems somewhat outrageous to say that individual people’s opinions, let alone an entire society’s, have changed as a result of anything other than their logical or empirical beliefs.  And besides, what, other than logic, affects one’s beliefs?

 

For this, we can look to Thomas Aquinas.  Aquinas faced a similar problem in explaining behavior.  In our case, we are trying to explain how people can, in spite of their freedoms, arrive at basically the same answers altogether, as if dependent on one another.  In Aquinas’ case, he was forced to resolve the apparent contradiction between the existence of God’s grace and the inherent free will of man.  Both cases attempt to understand the behavior of people who are free to make decisions yet seem subject to some force causing them always to make the same decisions as one another: in our case, the decisions demanded by conformity; in his, the decisions demanded by God.  Both recognize that logic alone cannot properly explain decision-making in this case, he because of doctrinal constraints (he was not a Pelagian), we because of our arguments above.

 

For Aquinas, the mind (or will) was innately stable.  Its changing required some kind of force to act upon it.  Without some force acting upon it, like a physical object, it would remain in its previous position, holding its previous beliefs and tending towards its previous actions.  In the words of Thomist scholar Bernard Lonergan:

 

A change of will is a motion in the will; that motion must have its proportionate cause; and the ticking of the clock is not a proportionate cause.  (Lonergan 2005, p. 355)

 

This state of what Lonergan calls “psychological continuity” is the state that de Tocqueville (in particular) and Gunton (to a lesser extent) see in our society.  For de Tocqueville, shifts should not be able to happen in most cases because of our inability to be convinced by other people, who are all of the same stature as ourselves.  Rather, what change does occur comes as a result of forces from outside the state of democracy, in particular the remaining effects of the period of “passage,” when, due to the “youth” of the democracy, people are only then able to produce a variety of opinions; that ability is now creating massive social change as people’s opinions come to homogenize.  That is to say, the force acting upon people’s opinions, and thereby societal opinions, is merely the remnant of a force created at the beginning of democracy, which will, as time passes, drive the further homogenization of people’s opinions into some kind of stasis.  Gunton’s view, while slightly less developed, asserts basically the same position: that with the decline of the importance of Christianity and religion generally in the Western world, a force now acts upon society, pushing it in one of two different directions, “either the many become an aggregate of ones, each attempting to dominate the world, the outcome being those regimes now labelled fascist, in which the strongest survives and dominates; or the many become homogenized, contrary to their true being, into the mass” (Gunton 2005, p. 33).  This too is the result of an outside force, to Gunton an academic one.  In Gunton’s words, “the root of the problem is theological,” and its “solution” must be a “renewed theological vision” (Gunton 2005, pp. 129-130).  That is to say, to Gunton, academic argument, in this case theology, acts as an outside force which “pushes” people, or in a mass sense society, towards certain opinions contrary to those society held prior, hence creating social change.

 

Both de Tocqueville’s and Gunton’s explanations, though, are insufficient.  De Tocqueville’s position holds that society should see a declining amount of social change, with all contending positions already existent when democracy comes about and positions over time subsumed into the broader conformity.  Yet, many modern, contending viewpoints have been founded only within democracy.  Communism (at least in its modern Marxist form) was invented following the advent of democracy, in democratic nations (Marx wrote most of his works in France and Britain), and so was Fascism (Mussolini and Hitler created their ideologies in democratic Italy and Weimar Germany respectively).  Indeed, the social changes we have referred to as examples are all changes that developed in America in the last seventy years, in a nation that was free and democratic for that entire period.  It seems that, contrary to de Tocqueville, there have been one or more new causes of motion in liberal democracies following their transition into liberal democracies.

 

Gunton’s vision, on the other hand, does not fully explain the source of the motion.  If Gunton is indeed correct that academic theological inaccuracies are the basis of modern social change, then we have to ask what has caused these inaccuracies.  Of course, individuals (and even academia, which is a group of very similar individuals) can come to an incorrect point of view, but if this problem is not just one of academia but of society writ large, then everyone coming to the same incorrect conclusion seems like an odd societal event.  This common theological conclusion itself seems like a form of mass social change.  We would therefore have to understand the original motion behind this social change (a change in theological attitudes) to understand broad social change, effectively requiring us to answer much the same question (how do consensuses change, en masse, in society?).

 

So, then, we should go back to Aquinas to help us understand the different possible “causes” of ideological “motion.”  According to Lonergan, Aquinas identifies several:

 

[Changes in the will’s] causes are assigned: they are either intrinsic or extrinsic; if they are intrinsic then they are either a change in one's knowledge, or a change in one's passions, or a change in one's habits; if the cause is extrinsic, then it is God operating in the will.  (Lonergan 2005, p. 357)

 

We will put the intrinsic changes on hold for a moment and instead focus on the extrinsic cause: God acting upon the will through grace.  We will do this for two reasons: first, because grace is the cause whose operation Aquinas explained most precisely, and we can therefore learn the most about the operation of both extrinsic and intrinsic causes from analyzing it; and second, because the other, intrinsic causes are mentally internal and therefore (at least) seemingly subjective.

 

The grace of God, to Aquinas, is very complicated, and we will only be able to cover a few important distinctions and traits which Aquinas attributes to it.  The first that we will look at is the division of grace into two different sorts: operative grace, in which God alone is involved and which does not require human action to be effective, and cooperative grace, which, while partially enacted by God, does also require human action to be effective.  Lonergan summarizes Aquinas’ view on the two sorts of grace as follows:

 

The general scheme of the division [between operative and cooperative grace] is clear. The effects of divine favor are manifold: operative grace is the grace of justification, the infusion of habitual grace; cooperative grace is the divinum auxilium that supplements habitual grace because of the infirmity of fallen human nature. But habitual grace is essentially one thing: inasmuch then as it acts formally, it is operative grace; but inasmuch as it acts efficiently, it is cooperative.  (Lonergan 2005, p. 235)

 

This is rather dense, so we will spend some time explaining it before moving on.  The basic argument here is that operative grace is the first grace applied, that is, the grace with which God starts His process of influencing actions.  This grace is used to create the correct habits (or at least, as Lonergan will later describe habitual grace, inclinations) which tell us how to behave; that is to say, the idea of the good is something operative, something completely invented by God and given to us by Him and Him alone.  After such grace is given, however, we still tend towards the wrong because of our “infirmity.”  So, God is required to grant us a second grace, one which allows us to persevere in the good in spite of its difficulty and our impulse (due to our fallenness) to oppose grace.  However, that second grace only arrives at its end (our perseverance) with our own consent, that is, with respect to our free will.  Yet, while Aquinas views the latter as depending on free will to come to fruition (and therefore as cooperative), it is still fundamentally operative because the grace itself is granted to everyone, whether or not they are willing to persevere in the good; the will is informed whether or not it will cooperate.

 

This power both to intervene in bringing about an intention and to ensure the actor perseveres in bringing that intention to fruition is clearly quite great.  Indeed, to Aquinas, this power to influence all action is so great that, while not overriding the free will of people, it is enough to secure providential powers for God.  Again, according to Lonergan:

 

…when St Thomas affirms that God applies all agents to their activities, he is indeed thinking of God as the cause of all motion. But the significance of his affirmation goes far beyond that. God is the cause of each particular motion inasmuch as his mind plans and his will intends the endless premotions that make up the dynamic pattern of the universe and provide the real guarantee against entropy. It is not enough that things be kept moving by the moving heavens; the order of the universe has to be maintained, and that is due not to the heavens but to divine providence. Thus the basic significance of Deus omnia applicat is this:

But God is not the cause of something except as he is intelligent, for his substance is his understanding ... And everything acts through the mode of its substance. God therefore moves all things to their proper ends through his intellect. But this is to be provident.

and so this:

nothing occurs outside the order of a particular cause, unless some other cause impedes (its action), which (other) cause has to be reduced to the first universal cause ... Since God therefore is the first universal cause, not just of one genus, but universally of the totality of being, it is impossible for anything to happen outside the order of divine governance

Because God is an intellectual agent, he is not merely the first cause of all physical premotions; he is the cause of the premotions as intended premotions. An intended premotion is an application.  (Lonergan 2005, pp. 386-387)

 

What Lonergan is saying here is that, on Aquinas’ view, God’s ability to affect our actions through grace is effectively the same as God causing every single action to be as He intended it.  This gives God literally providential power to control everything that humans do, and makes Him the original cause of all of those actions.  Effectively, it is as if God acted in all these cases (although Aquinas makes an exception, as we will later note, when discussing sin).  This level of power establishes to what extent people’s actions are determined by these “causes” of intellectual “motion,” in this case, grace.  (Of course, neither Lonergan nor Aquinas is exclusively referring to grace in this passage, although both are referring to premotions, that is, actions taken by God prior to the actions taken in this world which influence those acted upon.  Grace, as we have discussed earlier, is included in that term as the main form of premotion which God enacts upon humanity.  Therefore, this passage describes God’s providential power over humanity as being implied by His grace.)

 

While Aquinas’ view that there is some kind of universal “cause” of the “motions” of our beliefs, that is, some kind of outward influence on our beliefs that causes them to change in a particular way, seems highly perceptive, that cause being grace seems unlikely.  To understand this better, we should understand what grace pushes people towards believing.  According to Thomist philosopher Eleonore Stump, grace for Aquinas necessarily changes the person acted upon so that “that person desires God’s goodness and hates his own sins” (Stump 2007, p. 389).  There are two highly instructive reasons why this concept of grace as the non-logical cause of mental motion fails: the non-universality of the pressure on our minds, and that pressure’s dependence on societal attitudes.

 

According to Aquinas, this desire for goodness should exist for all people at the very least; if it does not, then a universal grace becomes far less likely.  In fact, though, the existence of such a desire seems contingent on particular emotional traits: the existence of empathy and other emotions regulating our views of actions.  Without those traits, we seem unable even to understand the existence of “good” or “sinful” action.  According to philosopher Jesse J. Prinz:

 

Blair (1995) demonstrates that psychopaths fail to grasp the moral/conventional distinction. Using the methods of Turiel, Smetana, and Nucci, he asked criminals who had been diagnosed as psychopaths to consider various scenarios in which rules had been violated. Some of the rules were moral and some were conventional, but the psychopaths were not alerted to this fact. They were simply asked to rate the wrongness and seriousness of the violations, and to justify their answers. They were also asked whether the described behavior would have been wrong if an authority had allowed it. The results were striking. Psychopaths did not treat moral and conventional wrongs significantly differently. Unlike a control group of non-psychopathic criminals, they tended to ignore victim’s welfare when justifying their answers about moral wrongs.

There was an unexpected finding in Blair’s study. Psychopaths tended to treat both moral and conventional wrongs as if they were authority-independent. It’s wrong to speak out in class no matter what the teacher says. On the face of it, this seems to suggest that psychopaths are interpreting conventional as moral, rather than the other way around. But Blair offers another, more plausible explanation. He speculates that his subjects were trying to sound as if they were sensitive to moral rules. Inmates have great motivation to do that, because moral sensitivity can hasten release. Psychopaths tended to treat all rules as inviolable in an effort to convince the experimenter that they were mentally healthy. The plan backfired. Non-psychopathic criminals in the control group who were equally motivated to impress the experimenter answered in line with normal subjects. They treated conventional transgressions as authority-dependent. If psychopaths understood the difference between moral and conventional wrongs, they would have treated the transgression-types differently, in order to convey their moral health. The compensatory strategy of feigning moral rectitude actually revealed the profundity of their deficit. This interpretation finds confirmation in another study. Instead of looking at incarcerated adults, who have strong motivation to lie, Blair (1997) administered the moral/conventional test on children with psychopathic tendencies, and they found that these children tend to treat moral transgressions as if they were conventional. Blair concludes that psychopaths fail to grasp the difference between moral and conventional wrongs, and they tend to regard all wrongs as merely conventional. For them, morality is like etiquette, like conventions about which side of the road to drive on, or like chess: a group of more or less arbitrary conventions that place demands on us only because they have been adopted by a social group. Psychopaths can give lip service to morality, but their comprehension is superficial at best.  (Prinz 2007, pp. 43-44)

 

And, according to philosopher John Kekes, who uses the example of British criminal John Allen, some emotionally underdeveloped people not only do not understand morality but experience the exact opposite desire: to transgress moral rules for the excitement doing so gives them:

 

[The falsehood of the view that evil is irrational] emerges from understanding the thrill of evil. People who opt for a life of evil in preference to a life of boredom need not be unreasonable, although of course they are immoral. They have good reasons for seeking meaning and purpose in their lives, alleviating their boredom, integrating their fragmented attitude, acting in ways that express their nature rather than succumbing to the mind-numbing distractions of popular culture, and putting to use their intelligence and ingenuity. Evil, therefore, motivates them, and they do not mistake it for good. They do evil knowingly, precisely because it is evil. It thrills them, which it would not do if it were not evil. Their evil actions thus conform to reason. Therefore, the requirements of reason and morality need not coincide.  (Kekes 2005, p. 116)

 

So, then, it seems that some kind of “outward pressure” causing conformity, while it might exist for many people, is not quite universal.  And, considering its non-universality, it would seem that God’s grace would either have to be much weaker than Aquinas believes it to be or be something that God does not grant everyone.  It further seems that our impulse towards common views has something to do with our brain chemistry, and that psychopaths’ lack of those common views reflects something wrong with their brains, rather than God overlooking them for some reason.  So, our focus should probably be on something to do with our brain chemistry, and in particular our feelings about the world, rather than on a purely external, universal pressure.

 

Secondly, the pressure on people which causes them to take similar views to one another is not so much an objective, righteous pressure as a conformist one.  The same pressure which now causes massive percentages of the population to (in my opinion, correctly) believe that interracial marriage, birth control, and homosexuality are not immoral is the pressure which previously caused conformism in the opposite direction on all three issues.  Unless there was a significant change in morality sometime between 1950 and 2020, it seems unlikely that some kind of universal religious pressure would suddenly start to demand opposite attitudes.  Further, in much of the world, the feeling of general societal pressure runs in the exact opposite direction.  In Tunisia in 2019, for instance, only 9% of people believed that homosexuality should be “tolerated” in society (Pew Research Center June 2020).  We can probably infer that there is at least some societal pressure on people there not to accept homosexuality.  Is the grace of God which has caused our quite different consensus on this issue simply not present in Tunisia?  Or is what is morally right there different from what is morally right here?  At its extreme, the idea that the grace of God is the same as the pressures which cause social change in democracy reduces to a justification of whatever the current attitudes in society are, no matter their morality, with absurd results.

 

So, then, if the cause of our society-wide attitudinal motion is not the grace of God, we must ask ourselves what it is.  For this, we can refer back to Lonergan.  When discussing Aquinas’ view on the possible causes of attitudinal motion, Lonergan referred not only to the grace of God but also to several “intrinsic” causes: “a change in one's knowledge, or a change in one's passions, or a change in one's habits.”  The first seems unlikely for reasons we have discussed earlier; we will leave it without further comment.  The second, we will argue in the section following this one, is a good replacement for the idea of grace, a clear cause of our changing consensus beliefs and highly important to understanding social change.  The third, we will then argue, is also highly important; although it does not directly affect consensus views, it does have a clear effect on changes in people’s physical environments, and through those environments, it mediately affects people’s views and broader social change.

 

Social Change in Individuals

 

We will start with changes in passions.  “Passions” as used here seems to mean desires, that is, what one feels to be “good” or “bad” for oneself.  So, then, before we discuss those changes, we should better understand what desire is.  For this, we can defer to philosopher Mark Schroeder, who has defined desire as follows:

 

For X to have a desire whose object is P is for X to be in a psychological state grounding the following disposition: when for some action a and proposition r believed by X, given X’s beliefs r obviously helps to explain why X’s doing a promotes P, X finds r salient, and this tends to prompt X to do a, and X’s attention is directed toward considerations like r.  (Schroeder 2007, pp. 156-157)

 

What Schroeder is saying here is that we have a “desire” for something when two requirements are satisfied: first, that we have a fundamental desire P (for instance, to improve our health), and second, that a proposition r (for instance, “one is healthier when one exercises”) strikes us as “salient” to arriving at that fundamental desire.  This desire can therefore translate into a desire to do action a (for instance, to exercise), if proposition r tells us to do action a.  Here, Schroeder uses the term “salience,” a term we have not yet explained.  What, we must ask, causes something to be considered “salient”?  Schroeder explains:

 

The situation is this: when you desire to have some coffee, certain considerations strike you in a certain phenomenologically familiar way. They are what I’ve called ‘salient’. And these salience-strikings play a certain functional role: they tend to prompt motivation to act in a way that is non-alienating. And they are the objects of our directed attention.  (Schroeder 2007, p. 159)

 

That is to say, there is some “striking” of one with a feeling that seems almost externally sourced (although not necessarily so), a sudden feeling that comes over one and points to a solution to one’s problem, a way of arriving at one’s desired conclusion.  This feeling of salience demands that those who feel it act in a particular way, that they desire to act on proposition r so that they can arrive at end P:

 

Salience-strikings, I think, always come clumped [with the desire to act according to what seems salient]…It is very hard to imagine what could be going on with Jack, if the fact that there is water at the beach strikes him as salient, but no related consideration does so. And it is very hard to imagine what could be going on with Jack, if the fact that there is water at the beach strikes him as salient, and the fact that there is coke in the fridge strikes him as salient, but the fact that there is water at the top of the hill does not. I don’t mean to say that it is impossible for only one consideration to strike Jack as salient. But I certainly mean to be saying that that is not how things work in normal, well-functioning human psychologies. In normal psychologies, salience-strikings always come clumped, and in clumps that correspond to aims.

So I conclude that salience-strikings always occur as part of some desire.  (Schroeder 2007, p. 161, p. 162)

 

This gives us a simple understanding of desire.  If a particular action (or belief, or item, etc.) appears salient as a way of arriving at a particular end, then the person who finds that action (or belief, or item, etc.) salient has a desire to do it (or believe it, or own it, etc.).  So, to prove that a particular desire is in any respect universal, we are required to prove, first, that there are some fundamental desires which are universal, and second, that there are some events that can cause most, if not all, people to find the same propositions “salient” to arriving at those fundamental ends – that is, for some reason, that many people have the same experience of “salience-striking.”  And, if we prove that a particular desire is universal in a particular situation, then, if there are no compelling objections to the idea that this desire might cause such an action, we can say that the reason why the consensus engages in this action is that desire.  And, if that desire is determined by something else (for instance, if an advertising campaign causes a large number of people to believe they need a particular product), then we can say that the consensus is created by that something else.
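
To make this schema concrete, the following minimal sketch (in Python; the class and attribute names are our own illustrative inventions, not Schroeder’s) models an agent whose believed propositions r, when tied to a fundamental end P, prompt the corresponding actions a:

from dataclasses import dataclass, field

@dataclass
class Agent:
    # A toy rendering of Schroeder's schema: an agent counts as desiring
    # end P when believed propositions r that (by its lights) promote P
    # strike it as salient and prompt the corresponding action a.
    beliefs: set = field(default_factory=set)
    # fundamental ends P, each mapping propositions r to suggested actions a
    ends: dict = field(default_factory=dict)

    def salient(self, end):
        # only believed propositions tied to an end can strike one as salient
        return [r for r in self.ends.get(end, {}) if r in self.beliefs]

    def prompted_actions(self, end):
        # salience-strikings "come clumped" and tend to prompt action
        return [self.ends[end][r] for r in self.salient(end)]

agent = Agent(
    beliefs={"one is healthier when one exercises"},
    ends={"improve health": {"one is healthier when one exercises": "exercise"}},
)
print(agent.prompted_actions("improve health"))  # ['exercise']

On this toy picture, changing which propositions are believed (or which strike the agent as salient) changes the prompted actions without changing the fundamental end, which is exactly the structure the argument below will exploit.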

 

The first part of our discussion about universal salience-striking is whether people have common ends.  This seems like a self-evident truth.  We all desire food, water, housing, certain psychological conditions, and many other obviously universal goods.  These all clearly cause us to psychologically tend towards a desire for water, a desire for food, etc.  And they cause us to, as a society, value these things.  If we were to ask people whether drinking water was immoral, we would probably get a negative answer.  People are usually unlikely to condemn absolutely everyone for desires that seem universal and innate.  Further, those universal, fundamental desires do not seem to be caused by social change.  The extent to which people desire water, for instance, has not changed much over time; there was no individual moment when water suddenly became desirable.  Most of our universal desires are quite like this, universal among humans since we evolved.  Thus, we can find that many universally salient desires have little to nothing to do with social change.  For a desire to have something to do with social change, not only must it be universally (or at least widely) salient; the demands it makes of the desirer must also have varied at least somewhat over time.  So, we should try to find which desires we have that are on the one hand universal and fundamental, yet at the same time variable in their demands.

 

To identify these desires, we can imagine universal human goods as a kind of “homeostatic property cluster.”  What is meant by that is explained by philosopher Richard N. Boyd, with the important points as follows:

 

1. There is a family F of properties which are 'contingently clustered' in nature in the sense that they co-occur in an important number of cases.

2. Their co-occurrence is not, at least typically, a statistical artifact, but rather the result of what may be metaphorically (sometimes literally) described as a sort of homeostasis. Either the presence of some of the properties in F tends (under appropriate conditions) to favor the presence of the others, or there are underlying mechanisms or processes which tend to maintain the presence of the properties in F, or both.

3. The homeostatic clustering of the properties in F is causally important: there are (theoretically or practically) important effects which are produced by a conjoint occurrence of (many of) the properties in F together with (some or all of) the underlying mechanisms in question.

4. There is a kind term t which is applied to things in which the homeostatic clustering of most of the properties in F occurs.

5. This t has no analytic definition; rather all or part of the homeostatic cluster F together with some or all of the mechanisms which underlie it provides the natural definition of t. The question of just which properties and mechanisms belong in the definition of t is an a posteriori question, often a difficult theoretical one.

6. Imperfect homeostasis is nomologically possible or actual: some thing may display some but not all of the properties in F; some but not all of the relevant underlying homeostatic mechanisms may be present.

7. In such cases, the relative importance of the various properties in F and of the various mechanisms in determining whether the thing falls under t (if it can be determined at all) is a theoretical rather than a conceptual issue.

11. No refinement of usage which replaces t by a significantly less extensionally vague term will preserve the naturalness of the kind referred to. Any such refinement would either require that we treat as important distinctions which are irrelevant to causal explanation or to induction or that we ignore similarities which are important in just these ways.  (Boyd 1988, pp. 197-198)

 

What Boyd is arguing here is that for something to be a homeostatic property cluster, it must meet several requirements.  First, the properties that make up the cluster should often occur together for a reason, either because they cause one another or because they are all caused by the same thing.  Second, the homeostatic property cluster as a whole has some value or effect on the real world that each individual trait does not have.  Third, whether enough of the properties are present for the cluster to exist can only be explained by listing the properties present.  Fourth, not all of the properties that are part of the cluster are necessary for the cluster to exist, merely a large number of them.  Fifth, and finally, the importance of each property in determining whether the cluster is present can only be understood on a case-by-case basis.
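
As a rough illustration of the fourth and fifth requirements, here is a minimal sketch of our own (the traits, weights, and threshold are hypothetical; Boyd of course offers no such formula) showing how membership in a cluster kind can depend on having enough of the properties rather than on any one of them:

def falls_under_kind(traits, weights, threshold):
    # A thing falls under the kind term t when enough of the clustered
    # properties co-occur; no single property is necessary, and each
    # property's importance is given case-by-case by its weight.
    score = sum(w for trait, w in weights.items() if traits.get(trait, False))
    return score >= threshold

# "Health" as an example: no one trait is necessary, but enough must co-occur.
weights = {"no chronic illness": 2.0, "no acute infection": 1.0, "good fitness": 1.0}
print(falls_under_kind({"no chronic illness": True, "good fitness": True}, weights, 2.5))  # True
print(falls_under_kind({"no acute infection": True}, weights, 2.5))  # False

The point of the sketch is only structural: the predicate has no analytic definition in terms of any single property, yet its application is still principled.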

 

So, to use an example, think of health.  For someone to remain broadly healthy when, for instance, they do not have any underlying health conditions (e.g., diabetes) is far easier than for someone to remain broadly healthy when they are already suffering from underlying health conditions, even if those underlying conditions are not alone enough to make that person putatively unhealthy.  This is because health in some respects affects health in others.  These traits altogether, or at least enough of them, create a “healthy” person, an important condition which has effects on their body and general well-being.  However, to be healthy, one need not have perfect health, merely good enough health, and whether one’s health is good enough depends on the number of traits indicative of health which one has at once.  And some traits which may individually exist in a generally healthy person can, in conjunction with other similar traits, lead that person to be unhealthy.  For instance, someone with the flu can be generally healthy, and someone with diabetes can be generally healthy, but it is difficult to imagine how someone with both the flu and diabetes could be described as generally healthy.  Yet, in spite of being able to understand whether different groups of traits make someone healthy or unhealthy, we cannot state a simple theory which explains why we refer to some people as healthy and others as unhealthy; we simply all agree to refer to some groups of traits in some ways and other groups in others.

 

Having a “good” life, that is, what traits we fundamentally desire in our lives, can be understood similarly.  As Boyd explains:

 

There are a number of important human goods, things which satisfy important human needs. Some of these needs are physical or medical. Others are psychological or social; these (probably) include the need for love and friendship, the need to engage in cooperative efforts, the need to exercise control over one's own life, the need for intellectual and artistic appreciation and expression, the need for physical recreation, etc. The question of just which important human needs there are is a potentially difficult and complex empirical question.

these human goods…are homeostatically clustered. In part they are clustered because these goods themselves are, when present in balance or moderation, mutually supporting.  (Boyd 1988, p. 203)

 

This is to say that those traits which tend to make one’s life “good” tend to contribute to each other’s existence.  One’s being healthy can help one’s ability to engage in fulfilling actions (which are quite a bit more difficult when one is, for instance, sickly).  Further, having some of these traits, though not necessarily all of them, is all that is necessary to qualify as having a good life.  We can imagine someone who has quite poor health, a morbidly obese man, for instance, who can still have a good life because he holds other important traits (good friendships, a fulfilling career, etc.).  At the same time, we can understand that those negative traits do contribute to the quality of one’s life, even if they cannot alone cause that life to be bad.  We can understand, even with the prior example, that someone else, someone perhaps less fulfilled in their career or with weaker friendships, could be described as having a bad life in part because of their morbid obesity.  So, then, those traits which are part of this homeostatic property cluster, those which altogether allow us to have a “good life,” even if not individually necessary for a “good life,” are fundamental ends (what we earlier described as the P’s) which we aim for.  And since all of those “good” properties are individually desirable, even if not absolutely necessary for us to live a “good” life, we all aim towards having each individual “good” property.  So, to prove an objective desire for a particular end, all we need to prove is that it is a part of the homeostatic property cluster of a “good life.”  And, to prove a trait is part of a particular homeostatic property cluster, one must prove (1) that there is some natural or common association which at least putatively exists between the trait and the broader cluster, (2) that having other traits within the cluster promotes having this particular trait, and (3) that having this particular trait promotes having other traits within the cluster.

 

The fundamental end we will focus on here is having a positive view of oneself.  This is a concept that has been described with many different terms, some being “pride,” “self-esteem,” “self-respect,” and “self-love,” to name a few; we will use the term “self-esteem” here.  What we mean by this term is a view that one is a good, generally successful person; that is, not just the situation of having a “good” life but also the ability to recognize the “goodness” of one’s life.  The association of this self-esteem with a broader good life is a view with a long history.  For instance, Aristotle defined “pride” (a kind of moderate amount of self-esteem) as a virtue:

 

With regard to honour and dishonour the mean is proper pride, the excess is known as a sort of ‘empty vanity’, and the deficiency is undue humility; and as we said liberality was related to magnificence, differing from it by dealing with small sums, so there is a state similarly related to proper pride, being concerned with small honours while that is concerned with great.  (Aristotle 2009, p. 32)

 

Thomas Hobbes, as explained by philosopher Michael Oakeshott, while seeing excessive pride (“vainglory”) as a vice, saw moderate pride as a virtue:

 

Hobbes sometimes used the word “pride” in a derogatory sense to indicate one of the three passions preeminent in causing strife; but he also identified it with generosity, courage, nobleness, magnanimity, and an endeavour for glory, and he distinguished it from “vainglory,” which is always a vice because it entails illusion and strife without the possibility of felicity.  In short, Hobbes…recognized the twofold meaning which the word has always carried. Pride, in that tradition, was the passion to be Godlike. But it was recognized that this may be either the endeavour to put oneself in the place of God, or the endeavour to imitate God. The first is a delusive insolence in which a Satanic self-love, believing itself to be omnipotent, is not only the last analysis of every passion but the only operative motive, and conduct becomes the imposition of oneself upon the world of men and of things. This Hobbes, like every other moralist, recognized as a vice and an absolute bar to a peaceful condition of human circumstance: it is the pride which provokes a destroying nemesis, the pride which Heraclitus said should be put out even more than a fire. But, as Duns Scotus said, there is no vice but it is the shadow of a virtue; and in the second manner of being Godlike, self-love appears as self-knowledge and self-respect, the delusion of power over others is replaced by the reality of self-control, and the glory of the invulnerability which comes from courage generates magnanimity, peace. This is the virtue of pride whose lineage is to be traced back to the nymph Hybris, the reputed mother of Pan by Zeus; the pride which is reflected in the megalopsychos of Aristotle and at a lower level in the wise man of the Stoics; the sancta superbia which had its place in medieval moral theology; and which was recognized by Hobbes as an alternative manner to that suggested by fear and reason of preserving one’s own nature and emancipating oneself from the fear of shameful death and from the strife which this fear generates.  (Oakeshott 1975, pp. 130-131)

 

To David Hume, going much further than Aristotle or Hobbes, self-esteem (any positive view of the self, to Hume) is a part of all pleasurable or agreeable experiences:

 

Upon my consulting experience, in order to resolve this difficulty, I immediately find a hundred different causes, that produce pride; and upon examining these causes, I suppose, what at first I perceive to be probable, that all of them concur in two circumstances; which are, that of themselves they produce an impression, ally’d to the passion, and are plac’d on a subject, ally’d to the object of the passion. When I consider after this the nature of relation, and its effects both on the passions and ideas, I can no longer doubt, upon these suppositions, that ’tis the very principle, which gives rise to pride, and bestows motion on those organs, which being naturally dispos’d to produce that affection, require only a first impulse or beginning to their action. Any thing, that gives a pleasant sensation, and is related to self, excites the passion of pride, which is also agreeable, and has self for its object.  (Hume 2011, p. 189)

 

So, self-esteem does seem to be, at least putatively, an important part of having a good life.  (This is, of course, not to say that, on a sliding scale, a greater self-esteem is always better, or even that any one of these particular visions of what self-esteem entails is the correct one.  It is simply to say that there is a commonly viewed connection between having a good life and viewing oneself as having a good life; to what extent one must view one’s life as good and the magnitude of that view’s importance are both questions we are not asking here.)

 

The second requirement of a member of our homeostatic property cluster is that other traits within the cluster promote one’s having this trait.  This seems like an easy position to defend.  If one has positive traits in their life – if their career is fulfilling, if their relationships are successful, if they have well-developed interests and talents – then they are probably more likely to have a positive view of their life.  Hume makes a similar argument in relation to our bodies’ “beauty,” as well as their strength and usefulness:

 

Whether we consider the body as a part of ourselves, or assent to those philosophers, who regard it as something external, it must still be allow’d to be near enough connected with us to form one of these double relations, which I have asserted to be necessary to the causes of pride and humility. Wherever, therefore, we can find the other relation of impressions to join to this of ideas, we may expect with assurance either of these passions, according as the impression is pleasant or uneasy. But beauty of all kinds gives us a peculiar delight and satisfaction; as deformity produces pain, upon whatever subject it may be plac’d, and whether survey’d in an animate or inanimate object. If the beauty or deformity, therefore, be plac’d upon our own bodies, this pleasure or uneasiness must be converted into pride or humility, as having in this case all the circumstances requisite to produce a perfect transition of impressions and ideas. These opposite sensations are related to the opposite passions. The beauty or deformity is closely related to self, the object of both these passions. No wonder, then, our own beauty becomes an object of pride, and deformity of humility. ’Tis not the beauty of the body alone that produces pride, but also its strength and force. Strength is a kind of power; and therefore the desire to excel in strength is to be consider’d as an inferior species of ambition. For this reason the present phænomenon will be sufficiently accounted for, in explaining that passion.

Concerning all other bodily accomplishments we may observe in general, that whatever in ourselves is either useful, beautiful, or surprizing, is an object of pride.  (Hume 2011, pp. 195-196)

 

Both of these two traits, that is, the beauty and usefulness of one’s body, are probably also parts of the cluster of the “good life.”  Certainly, the former has some relationship with health (common standards of beauty tend to encourage traits such as thinness and strength that are, at least in moderation, healthy as well as “beautiful”), and the usefulness of one’s body is certainly helpful in arriving at our other ends.  And, both of those things, the ability to arrive at one’s ends and health, are pretty clearly desirable.  Hume’s argument is quite strong in this area.

 

But, if one is somewhat uncertain about this argument, there are even simpler logical constraints which demand that proper self-esteem be preceded by other positive traits.  For someone to have self-esteem, they must be worthy of that esteem.  If they are not, then they do not really have self-esteem, because they are wrong either about themselves or about their traits.  They are esteeming something which is not themselves.  And, if this much is true, then to have self-esteem, a person must previously have other good traits, as only those good traits can justify their self-esteem.  In Aristotle’s words:

 

Now the proud man, since he deserves most, must be good in the highest degree; for the better man always deserves more, and the best man most. Therefore the truly proud man must be good. And greatness in every virtue would seem to be characteristic of a proud man.

And it would be most unbecoming for a proud man to fly from danger, swinging his arms by his sides, or to wrong another; for to what end should he do disgraceful acts, he to whom nothing is great? If we consider him point by point we shall see the utter absurdity of a proud man who is not good.  (Aristotle 2009, p. 68)

 

This much is to say that for someone to be deserving of self-esteem, they must also be good and have other good traits.  Those traits, which comprise virtue, are probably part of the cluster (if we believe in the Aristotelian concept of the good life, at least).  Such a requirement, as Aristotle notes, can be derived from two things: first, from the fact that the only way someone is deserving of esteem is if they have virtue, and therefore, someone’s self-esteem can only be truly descriptive of them (rather than deluded and descriptive of a merely imaginary version of themselves) if they truly are virtuous.  Secondly, we can determine that virtue is required for self-esteem from the fact that the actions of those who have both self-esteem and virtue are quite different in character from the actions of those who have self-esteem but not virtue.  For, when the latter engage in prideful action, they inherently seem to us “disgraceful” and “absurd,” whereas if the former engage in the same sort of action, they seem to “deserve” the pride they have.  Even divorced from the particular reasons why this might be the case, the mere fact that it is the case shows us that self-esteem divorced from virtue is very different from self-esteem in conjunction with virtue (and, seemingly, a negative rather than positive trait).  Furthermore, this requirement does not seem arbitrary or merely statistical – it seems necessary on a theoretical level for the concept of self-esteem to make sense.

 

Having proven that self-esteem meets the first two requirements of a part of a homeostatic property cluster ((1) that it is putatively thought to be a part of that property cluster and (2) that other traits in that property cluster help to promote this trait), all we now have to establish in order to prove that self-esteem is a part of this property cluster is the third requirement, that having this particular trait somehow aids in the promotion of other traits in the homeostatic property cluster.  The groundwork for this argument has already been laid by our earlier quotation of Oakeshott discussing Hobbes.  To requote the argument, “self-love appears as self-knowledge and self-respect, the delusion of power over others is replaced by the reality of self-control, and the glory of the invulnerability which comes from courage generates magnanimity, peace.”  This is to say that the existence of self-esteem allows us to better understand ourselves and, by understanding ourselves, control ourselves, allowing us to create peace rather than the innate violence that otherwise dominates our lives.  Such peace (quite obviously) is virtuous, both in itself (the capricious violence and lonesomeness which Hobbes diagnoses in humans without peace is not the sort which creates a good life) and in its effects on our lives (in a constant state of war, we are left without very much good; in Hobbes’ words, such a situation creates “no place for industry, because the fruit thereof is uncertain: and consequently no culture of the earth; no navigation, nor use of the commodities that may be imported by sea; no commodious building; no instruments of moving and removing such things as require much force; no knowledge of the face of the earth; no account of time; no arts; no letters; no society,” that is, little to nothing in the homeostatic property cluster of the good life (Hobbes 1985, p. 186)).  Now, Hobbes meant this argument solely in relation to his idea of the state of nature, a rather unreasonable thought experiment that today seems antiquated.  That being said, it can equally be applied to situations outside of the state of nature per se.  In day-to-day life, we are often faced with situations in which we can act more peaceably or more pugnaciously, in which we can fight to receive a marginally greater share than if we act more peaceably, and in which we can abuse others to achieve what we want.  Such conflicts, constantly present, would very often be better resolved for everyone if we all chose more peaceable solutions.  And those solutions would have to be created through much the same processes by which Hobbes’ imaginary state of nature is resolved: through a general agreement for those involved to understand themselves, understand what would truly be better for them and others, and control themselves based on that self-knowledge.  As a result, even if Hobbes’ state of nature argument is not in itself accepted, Hobbes’ general argument that self-esteem creates our peaceable nature by allowing us to better understand ourselves, if true, is enough to prove that self-esteem meets the three requirements of being part of the homeostatic property cluster of the good life.

 

An obvious objection to Hobbes’ argument thus far, however, is that, while some amount of self-understanding is necessary for the creation of peace, that self-understanding need not be self-esteem.  For instance, one could understand oneself as well as others quite negatively, even incorrectly so, and that attitude could help create peace nevertheless: in viewing those around them as evil, such a person would frightfully ask for peace from others and aggressively restrict their own behavior out of fear of a non-peaceable conclusion created either by the others or by themselves.  While this, of course, does not seem like a very “good” life, it could still suggest that the effects of pride on other traits in the homeostatic property cluster of good living are more moderate than Hobbes would argue, affecting only certain areas of life (one’s openness to interpersonal relationships, for instance) rather than all areas.  Such a criticism, however, is incorrect, for reasons Hobbes addressed: namely, that the positive effects of pride are caused not just by one’s fear of death but by one’s fear of shame.  In Oakeshott’s words:

 

Fear of shameful death, provoking reason to suggest the convenient articles of peace and the manner in which they may become the pattern of human life, generates the morality of the tame man, the man who has settled for safety and has no need of nobility, generosity, magnanimity, or an endeavour for glory in order to move him to behave justly.  (Oakeshott 1975, p. 132)

 

What we can see from this argument is that one’s desire for peace is created not merely by the possibility of failure (in this extreme case of the state of nature, death) but rather by one’s fear of a failure that is shameful, that is, not befitting the person who fails.  For instance, if someone in Hobbes’ extreme metaphor loses a fight with a competitor, they are punished not only insofar as they are killed but also insofar as they are killed in a manner they do not believe befits them.  That latter concern, which is a major part of why people prefer peace, is something that only exists because of people’s self-esteem, their view of themselves as not “befitting” certain failures.  This belief about what befits one, of course, extends beyond death alone, to the many other interpersonal interactions which make up our lives.  Self-esteem assures that in those interpersonal interactions, we prefer pacific conclusions to disagreements and differing motives so as to avoid possible “shameful” failures.  This thereby creates a more pacific world, a world far more amenable to industry, the development of interpersonal relationships, the arts, and the many other parts of the good life; that is, it results in the securing of the good life.

 

Hume, similarly to Hobbes, argues that self-esteem creates attitudes which are parts of the good life.  In the eyes of Hume, self-esteem allows the person who esteems themselves to prioritize their virtue over otherwise compelling contrary values, such as their fear of death or hedonism.  In Hume’s words:

 

GLORY is the portion of virtue, the sweet reward of honourable toils, the triumphant crown, which covers the thoughtful head of the disinterested patriot, or the dusty brow of the victorious warrior. Elevated by so sublime a prize, the man of virtue looks down with contempt on all the allurements of pleasure, and all the menaces of danger. Death itself loses its terrors, when he considers, that its dominion extends only over a part of him, and that, in spite of death and time, the rage of the elements, and the endless vicissitude of human affairs, he is assured of an immortal fame among all the sons of men.  (Hume 1987, pp. 153-154)

 

This much is to say that the feeling of glory, that is, one’s esteem for oneself (as well as that esteem among others), creates the feeling that many otherwise alluring temptations, from hedonism to a fear of death, are not quite as important, as the feeling of “glory” (effectively, esteem) creates value in the person beyond purely their physical body or pleasurable feeling.  As a result, according to Hume, such a feeling of esteem causes us to focus more on what parts of ourselves deserve such esteem, that is, on our having other positive virtues.  Self-esteem thus makes one more likely to focus one’s life on acting according to the virtues.  Considering that at least a part of the good life is (probably) one’s being virtuous (a vast majority of people are not violent psychopaths and do not desire evil for its own sake, no matter how much pleasure or enjoyment being evil would provide them), we can therefore derive that self-esteem, by creating the desire to act according to virtue, thereby creates other parts of the homeostatic property cluster of the good life.  (As a quick aside, a seeming contradiction exists here between Hume’s and Hobbes’ arguments.  On the one hand, Hobbes sees self-esteem as resulting in greater fear and cautiousness around death, while Hume sees self-esteem as creating the opposite, a lack of fear of death.  This is actually not contradictory, because Hobbes is referring not to death generally but to death of a shameful sort, while Hume is referring to death in general.  We can therefore see that, while self-esteem creates greater fear of some kinds of death (particularly of the shameful sort), other kinds of death (death by disease, for instance) are less feared, and some deaths might be not just unaffected by self-esteem but actually embraced through it, so long as the death in question is honorable (for instance, military deaths or those of the firefighters on 9/11).)

 

This much is to say that people in general, desiring a good life, desire some level of self-esteem, which is part of a good life.  This is the general desire P which we are going to work with as an assumed general end.  From here, our main objective will be to see how a given proposition r could be seen as a way of arriving at this general end P.  Further, we have to see how this proposition r could seem salient to everyone all at once and why it might not have seemed salient prior to that point in time.  Our argument in this section will be that to arrive at self-esteem, we desire others’ esteem, as our concept of ourselves is partially derived from our concept of others.  Further, we believe that others form their opinions about us partially from what we own, how we behave, and the opinions we have.  To assure that other people think positively of us, and thus that we think positively of ourselves, we therefore desire to have the “right,” or generally agreed upon, opinions.  Social change occurs when, for one reason or another, the opinion which people believe to be generally held is different from the opinion that actually is generally held.
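
Before developing that argument in prose, a toy simulation may help fix the structure of the claim. The sketch below is entirely our own (the update rule and the numbers are illustrative assumptions, not any author’s model): agents voice a view only when it seems to be the generally held one, so the public consensus tracks the perceived norm rather than the sum of private opinions, and a change in the perceived norm can move the whole society at once:

import random

def run(perceived, n=1000, rounds=20, seed=0):
    # Toy dynamics: each round, agents express their private view only
    # if it seems socially safe to do so (perceived share > 0.5); the
    # perceived share then drifts toward what is actually expressed.
    random.seed(seed)
    private = [random.random() < 0.6 for _ in range(n)]  # ~60% privately approve
    for _ in range(rounds):
        expressed = sum(p and perceived > 0.5 for p in private) / n
        perceived = 0.5 * perceived + 0.5 * expressed
    return perceived

# The same private opinions yield opposite public "consensuses," depending
# only on what everyone believes everyone else believes:
print(round(run(perceived=0.4), 2))  # ~0.0: approval stays publicly invisible
print(round(run(perceived=0.6), 2))  # ~0.6: approval becomes the visible norm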

 

But, before we get to most of that argument, we should start merely by proving that people generally believe that others’ views of them, others’ esteem for them, affect their own self-esteem.  The basic structure of this logical extrapolation can be explained by economist Thorstein Veblen:

 

Those members of the community who fall short of this, somewhat indefinite, normal degree of prowess…suffer in the esteem of their fellow-men; and consequently they suffer also in their own esteem, since the usual basis of self-respect is the respect accorded by one’s neighbours.  (Veblen 2007, p. 25)

 

This argument, that people’s view of themselves is highly influenced by “the respect accorded by one’s neighbours,” that is, by the views and values of those around them, is, we will argue, basically correct.  Sadly, however, by not giving any specific examples or logical reasons to believe any part of it but simply stating it outright, Veblen leaves proving his point to us.

 

We will try to prove that people’s self-esteem is influenced by others by way of morality.  Our first observation, following our previous discussion of Hume’s and Aristotle’s views on the matter, is that one’s view of one’s own virtue has a significant effect on one’s view of one’s personal quality, that is, on one’s self-esteem.  Second, we can see that one’s view of virtue is at least somewhat created by the society in which one lives and the broader culture encouraging such behavior.  As economist Robert H. Frank has argued:

 

Many have argued that [culturally encouraged] behaviors would not exist at all were it not for the pressure of cultural forces. After all, the very definition of honesty differs widely from culture to culture, and across groups within cultures. The Mafia soldier follows a code of conduct vastly different from the Presbyterian deacon's. This tells us that there is nothing so simple as a biological urge to "be honest." If there is any sort of inherited instinct at work here, it must be extremely flexible, something like "pay attention to what the people around you teach, and try to follow that."  (Frank 1988, p. 40)

 

Such a view of culturally influenced morality is reinforced by social scientist James Q. Wilson when discussing the relationship between Western and Japanese child-rearing practices and the respective concepts of morality they produce:

 

We need not settle the argument between guilt and self-command as the source of a distinctive national character, much less the deeper issue…namely, the existence of a heritable tendency toward group affiliation. It is enough to suggest that wherever children learn mutual dependency and family obligation, they are learning to give highest priority to self-control and the sense of duty. When, as in the United States and other societies with a more individualistic and democratic family culture, children learn independence and self-expression, they are learning to give highest priority to sympathy and a sense of fair play. The need to take care of each other and to help raise children requires of a child that he or she subordinate a desire for fun to the requirements of helping; in short, they require self-control. When the family is the primary source of social approval and ridicule is used to control misconduct, the fear of being shamed leads to a strong sense of honor. When, by contrast, the children from an early age are encouraged to engage in play with others outside the family, to develop their own personalities, and to make their own decisions, they will acquire a strong stake in peer-defined and peer-enforced rules of fairness (else the play will quickly get out of hand) and will greatly value the good opinion of peers and so come to be sensitive to the moods of others.

If this argument is true, it has some rather large implications.  A willingness to extend to other people—especially to people who are very different from oneself—the same fundamental respect one extends to friends requires that one believe that people, as people, are entitled to understanding and fair play. This means that in the more individualistic child-rearing cultures, where sympathy and fairness are dominant concerns, one will find the greatest deference to universalistic standards of justice. These standards will be the enemies of familial and neighborhood control: people who believe that there are universal rules based on equal respect will also believe that they are entitled to be autonomous, independent individuals who have the right to do as they wish so long as it does not hurt anyone else.

On the other hand, a preference for preserving the honor of the family, avoiding shame, and meeting obligations is consistent with the view that people are not individuals but members of distinct social units—kinship systems, local communities, racial and ethnic groups—that are as inward-looking and group-serving as one’s own. In Japan, childrearing patterns and cultural patterns combine to limit personal loyalty to the immediate group. Not only obligation but even communication with persons outside the family or the work group is much more limited than what Americans are accustomed to. Karl Zinsmeister reports that elderly Japanese are only half as likely as their American counterparts to have daily conversations with their neighbors.  Within the core group—the family and the workplace—cooperation and etiquette are paramount; outside that group, the devil can take the hindmost.

Honor is not a universalistic standard but a particular one; people valuing it will value it even at the expense of others, with no sense that some rule has been violated. Honor and self-control can even be seen as a mark of superiority; those displaying it are civilized, those lacking it, barbarians.  (Wilson 1997, pp. 154-155)

 

This much is not to say that our concepts of morality are not at least somewhat innate or to argue that the concept of morality is not objective.  Nor is it to argue that specific principles like honesty or justice are not objective or innate.  It is simply to argue that our particular concepts of what actions allow us to qualify as holding particular virtues (for instance, honesty or justice) can be influenced by others’ views of how people can qualify as holding particular virtues.  That is to say, we are influenced not in whether we believe in honesty in principle but rather in what actions we say an honest person would take in a given circumstance.  To put it in the terms of Schroeder, our broader desire P is not necessarily affected by our social relationships, but whether we find a particular proposition r salient to arriving at P is.  In relation to morality, salience-striking, to put it simply, is often created by social norms.

 

This much is enough to prove that our concept of ourselves and of the goodness of our own lives is at least partially determined by the views of people generally.  However, we have not yet quite arrived at the important conclusion: that not only is part of our self-esteem created by generally agreed upon ideas of the good, but our self-esteem is also in large part created simply by others’ views alone.  There are plenty of possible examples of people who behave according to the social norms they respect yet are disdained by those around them.  For instance, we can imagine someone who only engages in socially encouraged actions and yet is viewed negatively by others due to a misunderstanding of what those actions constitute.  Or we can imagine someone moving to a town with very different morals (a colonial official, for instance), continuing to act according to the social norms they respect, and therefore being disdained for not following the local traditions.  It seems that in spite of following the social norms they personally respect, the people in those examples would still be emotionally affected by the rest of society’s condemnation, even if they have already formed robust moralities that affirm their own actions.  Indeed, this effect of general social condemnation seems to be closer to what we are looking at when we discuss social change than a mere change in cultural attitudes.  For a shift as extreme as has occurred in people’s attitudes towards interracial marriage or homosexuality to happen in as short a period as it did, people would not just have to be replaced by younger generations who grew up in some kind of new culture; many would have to actually change their minds.  And that change of mind would have to be based in the broad-based change in attitudes around them and their trust in that broad consensus.  How can we explain not just people’s moral foundations in others but also their trust in others’ opinions?

 

Our answer is that, in following social norms, we create habits of caring about those norms and of using them to appraise the goodness of actions and of other people, habits which we then turn upon ourselves.  This becomes common because social norms are something which we (most of us, anyway) regularly use to appraise others.  Social scientist Jon Elster gives several good examples of such common norms:

 

Some norms are unconditional: “Do X; do not do Y.” They include the norms not to eat human flesh, not to have sexual intercourse with a sibling, not to break into the queue, never to wear red clothes (as some mothers tell their daughters), to wear black clothes at a funeral, to begin with the outermost knife and fork and work inward toward the plate, to treat the sickest patient first. Other norms are conditional: “If you do X, then do Y,” or “If others do X, then do X.” In many groups, there is a norm that the person who first suggests that some action be taken is then charged with carrying it out; as a result, many good suggestions are never made. A childless couple may feel subject to a norm that whoever first suggests they have a child will have a larger share in raising it; as a result some couples who would like to have a child may remain childless. There may not be a norm telling me to send Christmas cards to my cousins, but once I begin there is a norm to continue and another norm telling my cousins to reciprocate. Yet although conditional, these norms are not conditional on any outcome to be realized by the action, as is the injunction to wear black in strong sunshine.  (Elster 2007, p. 354)

 

Not only do we willingly sanction people who break these social norms; we come to hold passionate views about those who break these socially determined rules.  Again to quote Elster:

 

A…parsimonious and adequate explanation of sanctioning relies on the spontaneous triggering of contempt and the associated action tendency. Anger, too, may be involved, because of the fluid distinction between social and moral norms. Also, flaunting one’s violation of social norms is likely to trigger anger rather than contempt because it tells other people that one does not care about their reactions. Although these spontaneous action tendencies may be kept in check by the costs and risks of sanctioning, they may be capable of overriding the latter. Ostracizing the nerd who could help his classmates with homework is costly, as was the refusal of aristocrats under the ancien regime to let their daughters marry wealthy commoners. When a “taste” for discrimination takes the form of refusing to employ or buy from members of despised minority groups or women, economic efficiency may suffer. Often, such behavior reflects the operation of social norms rather than of idiosyncratic individual preferences, as shown by phrases such as “Jew-lover” or “nigger-lover” used to condemn those who go against the norm.  (Elster 2007, p. 356)

 

These regular, passionate negative responses to others’ breaking of social rules teach us to trust social rules as a form of moral judgment, or at least passion-judgment.  That is to say, to consider whether a given desire (or a person who acts on such desires) is good, we regularly use generally agreed upon social norms to judge those actions or people.  In doing so, we regularly call upon social norms and mix them up with concepts of goodness, and we thereby create a habit, that is, a predisposition, of using social norms to derive moral goodness or badness.  Oakeshott (in a different work from the secondary essay on Hobbes we cited earlier) describes this process of “moral education,” that is, the way in which one learns moral beliefs, as such:

 

We acquire habits of conduct, not by constructing a way of living upon rules or precepts learned by heart and subsequently practised, but by living with people who habitually behave in a certain manner: we acquire habits of conduct in the same way as we acquire our native language. There is no point in a child's life at which he can be said to begin to learn the language which is habitually spoken in his hearing; and there is no point in his life at which he can be said to begin to learn habits of behaviour from the people constantly about him. No doubt, in both cases, what is learnt (or some of it) can be formulated in rules and precepts; but in neither case do we, in this kind of education, learn by learning rules and precepts. What we learn here is what may be learned without the formulation of its rules. And not only may a command of language and behaviour be achieved without our becoming aware of the rules, but also, if we have acquired a knowledge of the rules, this sort of command of language and behaviour is impossible until we have forgotten them as rules and are no longer tempted to turn speech and action into the applications of rules to a situation. Further, the education by means of which we acquire habits of affection and behaviour is not only coeval with conscious life, but it is carried on, in practice and observation, without pause in every moment of our waking life, and perhaps even in our dreams; what is begun as imitation continues as selective conformity to a rich variety of customary behaviour. This sort of education is not compulsory; it is inevitable.  (Oakeshott 1962, p. 62)

 

This much is to say that, upon continually practicing the rules of whatever game we are playing and continually seeing those around us play that game in a particular way, we learn that we ought to play the same game in the same way.  And thus, without intentionally holding a particular belief, we come to act in the way that those around us previously have.  This sort of action is what we describe as a habit.

 

However, this is not a full explanation of habit.  For one, it does not explain how we choose between habits.  If someone sees others repeatedly touch a hot object and express pain in a particular way, they might start to learn how to express pain in that same way, to gain a habit of engaging in that sort of expression when they feel pain.  However, they would likely not learn to repeat the behavior that led to the pain; that is, they would not habitually start to touch hot objects and cause themselves pain.  We have to answer why they would learn the former and not the latter.  Oakeshott himself gives a good answer to this question in a different essay, in which he discusses “practical knowledge,” a concept very similar to his concept of habit: knowledge that cannot be expressed in words and is not intentionally understood, and which can only be learned through repeatedly doing something or seeing other people do it.  Oakeshott describes this sort of knowledge as being a part of being “good” at “every human activity whatsoever,” that is, as being the only way in which one can properly act and therefore be considered a proper actor in any way, shape, or form, from religion to cookery (Oakeshott 1962, pp. 7-9).  So, then, we would only learn a habit if we had some indication that in doing so, we were properly engaging in some kind of activity we wanted to do properly.  (For instance, we might pay attention to and habitually learn a certain “way of cooking” if we actively wanted to become a “good cook,” or at least to cook properly.)  As such, for someone to have reason to create a habit, they must be around people doing something in a particular way and want to mimic that sort of behavior in order to become a sort of person or learn a sort of skill.

 

Secondly, Oakeshott’s argument does not fully explain the way in which we act according to habit once we have learned that habitual behavior.  For someone to act in a particular way without realizing it or consciously thinking about it is somewhat difficult to imagine.  This is especially true in relation to beliefs.  It seems like the very definition of a belief is something one thinks.  Yet, for this concept of habit to be true, there would have to be beliefs, or at least things that work effectively the same as beliefs in motivating our actions, that we do not consciously hold.  To better explain how we can come to act according to habit, we can look to philosopher Ludwig Wittgenstein’s discussion of our beliefs.  Using a metaphor, he discusses our beliefs as being similar to an imaginary river:

 

94. But I did not get my picture of the world by satisfying myself of its correctness: nor do I have it because I am satisfied of its correctness. No: it is the inherited background against which I distinguish between true and false.

95. The propositions describing this world-picture might be part of a kind of mythology. And their role is like that of rules of a game; and the game can be learned purely practically, without learning any explicit rules.

96. It might be imagined that some propositions, of the form of empirical propositions, were hardened and functioned as channels for such empirical propositions as were not hardened but fluid; and that this relation altered with time, in that fluid propositions hardened, and hard ones became fluid.

97. The mythology may change back into a state of flux, the river-bed of thoughts may shift. But I distinguish between the movement of the waters on the river-bed and the shift of the bed itself; though there is not a sharp division of the one from the other.  (Wittgenstein 1969, p. 15)

 

According to this concept of our knowledge, which works quite well with Oakeshott’s concept of habit, we can come to believe in kinds of “world-pictures,” or general paradigms.  Such paradigms are, when fully believed, almost “below” our normal thoughts: beliefs our normal thoughts depend on but which are not currently in question.  For example, imagine someone describing the Battle of Austerlitz.  To do so, they have to assume that the world existed in 1805.  Yet, it would be rather odd and completely unnecessary for them to suddenly interject into their explanation, “assuming the world existed in 1805.”  Indeed, if either the speaker or their listener were to so much as think, midway through the conversation, “we of course believe that the world existed in 1805,” that would also be rather odd and unnecessary.  This is because we simply assume the world existed at that time; that is, we have a habit of assuming the world’s prior existence, and as a result, we act as if we hold a belief without directly thinking of that belief.  This can be applied to our concept of habits.  At an extreme, we might not consciously know of our belief at all; we could simply take it to be assumed and always act as if it were true, never outright stating it while consistently using it to inform our actions.  So much for the idea that we must directly recognize our beliefs and habits before acting upon them.
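
The two-tier picture Wittgenstein sketches can be summarized in a small sketch of our own (the class and method names are illustrative assumptions, not Wittgenstein’s terms):

class BeliefSystem:
    # "Bedrock" propositions form the inherited background against which
    # ordinary beliefs are tested; they are constantly relied upon but
    # never themselves asserted or brought into question.
    def __init__(self, bedrock, fluid):
        self.bedrock = set(bedrock)  # hardened, habit-like assumptions
        self.fluid = set(fluid)      # beliefs still open to revision

    def entertain(self, proposition):
        if proposition in self.bedrock:
            return "assumed without being stated or questioned"
        return "weighed against the bedrock background"

    def harden(self, proposition):
        # the river-bed itself may shift: a fluid belief can harden
        if proposition in self.fluid:
            self.fluid.discard(proposition)
            self.bedrock.add(proposition)

beliefs = BeliefSystem(bedrock={"the world existed in 1805"},
                       fluid={"Napoleon outmaneuvered the Allies at Austerlitz"})
print(beliefs.entertain("the world existed in 1805"))

This is only a structural analogy, but it shows how something can function as a belief, informing everything we say, without ever appearing as an explicit thought.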

 

Having more fully explained the genesis and character of habits, we can now apply the concept to our earlier discussion of moral behavior.  For a particular action or attitude to become habitual, remember, requires that (1) a person desires to enact properly what they seek to enact (cooking, speaking, judging), (2) a person sees others consistently engaging in a particular habit, (3) as a person sees those around them behave in a certain way, they come to act in the same way, and (4) when a person comes to engage in the habit which they now believe in, they no longer necessarily recognize the way in which they act but rather act by simple reflex.  Considering that we are discussing general social change, things which occur across all of society, we must generalize each of these requirements; that is, make it clear why many people simultaneously change their habits in the same way as one another.  Applied to the “mixing up” of moral judgment and judgment of conformity to social norms, the argument basically looks as follows: (1) a person innately desires to have the correct moral judgments, a necessary part of acting morally (the latter of which is clearly an innate desire), (2) a person is often around others who make moral judgments along the lines of social norms, in particular, condemning others for breaking those norms, (3) people tend to have their moral judgments affected by seeing others around them morally judge people according to whether they have followed social norms, and finally (4) people draw no distinction between social-norm judgments and moral judgments; they act the same way towards their judgment whether they are making the first or the second sort.
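
To summarize the structure of this four-step model, here is a minimal sketch of our own (the names and the adoption threshold are hypothetical illustrations of the argument’s structure, not a psychological theory):

from collections import Counter

class Learner:
    # Toy version of the four steps: an agent who wants to judge properly
    # (1) watches others judge (2), adopts the prevailing pattern once it
    # is seen consistently enough (3), and then applies it by reflex,
    # without distinguishing norm-judgments from moral judgments (4).
    def __init__(self, wants_to_judge_properly=True):
        self.wants = wants_to_judge_properly   # step 1: the motive
        self.observations = Counter()          # step 2: what others do
        self.habit = None                      # step 3: the adopted pattern

    def observe(self, judgment):
        if self.wants:
            self.observations[judgment] += 1
            if self.observations[judgment] >= 5:
                self.habit = judgment

    def judge(self, conforms_to_norm):
        # step 4: the habit fires by reflex
        if self.habit == "condemn norm-breakers" and not conforms_to_norm:
            return "bad"
        return "acceptable"

learner = Learner()
for _ in range(5):
    learner.observe("condemn norm-breakers")
print(learner.judge(conforms_to_norm=False))  # 'bad'

Nothing turns on the details of the sketch; what matters is that the very same mechanism runs whether the observed judgments happen to track morality or mere convention.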

 

The first argument is the easiest to prove.  While I would argue that there very clearly are innate particular moral judgments (that is, particular ways in which we all, objectively, morally judge), that claim is not necessary for this particular argument.  All that is necessary is that there is a universal belief that we ought to act morally, that is, an innate tendency towards wanting to be a “good” person.  This point is pretty clearly true and can be made in a variety of different ways: here, we will make it linguistically and empirically.  First, we will look at Wittgenstein’s linguistic argument:

 

Supposing that I could play tennis and one of you saw me playing and said "Well, you play pretty badly" and suppose I answered "I know, I'm playing badly but I don't want to play any better," all the other man could say would be "Ah then that's all right." But suppose I had told one of you a preposterous lie and he came up to me and said "You're behaving like a beast" and then I were to say "I know I behave badly, but then I don't want to behave any better," could he then say "Ah, then that's all right"? Certainly not; he would say "Well, you ought to want to behave better."

[Ethics] is a document of a tendency in the human mind which I personally cannot help respecting deeply and I would not for my life ridicule it.  (Wittgenstein 1965, p. 5, p. 12)

 

What this is to say is that, simply linguistically, not even considering any possible social scientific data or empirical argument, we cannot deny our desire to be moral.  It is not like a skill which we can desire or not desire, or some kind of preference which some people have and others do not; disinterest in moral issues innately outrages us.  If someone were to show such disinterest, we would assume that something was wrong with them, that they were somehow untaught or mentally malformed.  And no matter how much we might wish to be rid of morality, we cannot simply excise the concept of goodness from ourselves or our language no matter how hard we try; we “cannot help respecting” the good in our day-to-day lives.  And, if someone is somehow morally malformed, we find that person outrageous.  To use an example from philosopher Jesse J. Prinz (cited earlier in this article on a different point):

 

In an experiment, Rozin et al. (1993) found that subjects were unwilling to try on a sweater that he said had belonged to Hitler. Their refusal continued even after monetary offers were made, and they expressed disgust.  (Prinz 2007, p. 71)

 

This argument alone displays that our desire for morality is somewhat innate, if nowhere else than in our language.  But we also have a pretty significant amount of empirical evidence indicating that most of us, at least, are motivated by some kind of morality.  For instance, as Wilson has explained, people tend to be motivated, beyond economic desires, by whether they think a particular situation is just:

 

In a famous set of studies done in the 1960s, various experimenters hired men to conduct interviews, paying them on a piecerate basis. During the hiring, the “employer” (an experimenter) made clear to some men that he thought them unqualified (and hence would be overpaid for the work to be done) while saying to others that they were fully qualified (and hence would be equitably paid). The men were then sent out to work. Those who were made to feel unqualified (that is, whose skills were not worth what they were going to get in wages) produced fewer but better interviews than did the men who were given to believe that they were being fairly paid. When some employees were made to feel that they were underpaid (that is, that their skills were worth more than they would earn), they produced far more interviews, but of lower quality, than did employees who believed they were fairly paid.

 

This is not what one would expect if people were only interested in maximizing their income. Both the “overpaid” and the “equitably paid” workers earned the same amount per interview completed. If getting the most money was all that mattered, both groups would try to complete as many interviews as possible and the earnings of each group would be identical. What their employer thought of them would be irrelevant. That the “overpaid” workers did less work (thereby sacrificing earnings) but did work of higher quality (thus sacrificing effort) can be explained in terms of their concern for equity. If they believed that their inputs (here, their skills) were not worth their outputs (their wages), they could eliminate the distress they felt in one of two ways—by reducing their output (earning less money) or by improving their input (doing a better job). They did both, in this and a number of other studies.  (Wilson 1997, pp. 61-62)

 

And a similar tendency, our innate honesty, has been observed by Frank:

 

…psychologist Harvey Hornstein and several of his colleagues were pleasantly surprised to discover what happens when people are given an opportunity to do a good deed. In their basic experiment, they placed hundreds of wallets, each with a small amount of cash (roughly $5 in today's purchasing power), in different locations on the crowded city sidewalks. Each contained a variety of membership cards and personal papers, and an identification card bearing the name, address, and telephone number of its ostensible owner, one Michael Erwin (whose name was chosen "to avoid any unequivocal ethnic or religious group identification").

 

For several months during the spring of 1968, Hornstein and his colleagues "lost" these wallets at the rate of roughly 40 per day. Of the many hundreds they lost, an astonishingly high 45 percent were returned completely intact! Needless to say, some inconvenience is involved in wrapping up a wallet and getting it to the post office. With such a small amount of cash involved, it would not have been reasonable for the finders to have expected a reward. (Indeed, many of them returned the wallets anonymously.) It is thus difficult to imagine a self-interested motive that might have driven so many people to return them.  (Frank 1988, p. 214)

 

It is quite clear, on this data alone, that people tend to desire morality, or if not morality itself (although that seems most likely in light of both Wittgenstein’s linguistic argument and the data), then at least its constituent parts.  So, we have proven (1), that the desire for morality is innate.

 

The other three parts of our argument follow simply from conclusions we have already reached.  The second necessary condition for habit formation, that people tend to be around others who judge morality by social norms, follows from our previous argument (including our citation from Elster) about the constant presence of social norms in our lives.  It seems likely that these social norms are pretty commonly used to appraise the others around each and every one of us, and that others have often used social norms to appraise us.  The actual formation of the habit also seems pretty closely tied to being around others who already have that habit, as displayed by our earlier citation of Wilson on the effects of cultural norms on the morality of those within them.  Once these habits have been formed, they are very clearly treated the same way in which normal moral attitudes are treated.  The odd emotional-moral character of social norms has already been described by both Elster and Wilson, and moral arguments based on these emotions and norms certainly do not seem different from normal moral statements.  Indeed, they seem rather natural.  For instance, as explained by Prinz, the emotional reaction to people breaking collectivist social norms, although certainly (as explained by Wilson) caused by social pressures, is not directly connected to social norms at all, but instead to a kind of immediate emotional response:

 

Shweder et al. (1997) also identify a third kind of ethical system, which is very central in some cultures. They call it an "ethics of community," and its rules govern personal relationships with social groups. Rules governing rank, respect for the family, and treatment of communal resources fall into this category. Collectivist societies tend to emphasize rules of this kind. They place special value on each individual’s place in society, and failure to play the roles associated with that place are morally prohibited. When people step out of line, they disrupt the social order. In such societies, disrespect is a cardinal transgression…  (Prinz 2007, p. 72)

 

This seems like a fully habituated process of connecting social norms to moral judgment.  We can pretty safely say, therefore, that viewing transgressions of social norms as moral transgressions is a nearly universal habit in modern society, one which is, in Oakeshott’s words, “inevitable.”

 

We can see pretty clearly from here how this habituated mixing up of social norms and moral judgment would lead to our self-esteem being affected by social norms.  If we are used to condemning people for not conforming, then we will come to condemn or praise based on people’s conformity.  And if we do that in relation to other people, we will likely also do so in relation to ourselves.  So, if we run contrary to social norms, we will likely condemn ourselves, and through that self-condemnation, lead ourselves to take different positions.  However, this yields a further question: why would the desire to follow social norms necessarily lead to holding conformist opinions?  We can certainly imagine a society in which there are norms about when to speak or how to act without there being norms about what to believe.

 

To understand this concept of conformity in opinions, we can again look back to Veblen, and in particular to his anthropology of our modern desire for property.  The exact anthropology of this argument is not so important; what is more important is to understand how this idea – that is, how some things can become sources of esteem – can occur and how cases of its occurring can be diagnosed.  In Veblen’s view, our desire for property (at least to some extent) comes from our desire for esteem.  To him, that desire comes about as traditional sources of esteem (success at securing food for the tribe, for instance) are replaced by industry:

 

Gradually, as industrial activity further displaces predatory activity in the community’s everyday life and in men’s habits of thought, accumulated property more and more replaces trophies of predatory exploit as the conventional exponent of prepotence and success. With the growth of settled industry, therefore, the possession of wealth gains in relative importance and effectiveness as a customary basis of repute and esteem. Not that esteem ceases to be awarded on the basis of other, more direct evidence of prowess; not that successful predatory aggression or warlike exploit ceases to call out the approval and admiration of the crowd, or to stir the envy of the less successful competitors; but the opportunities for gaining distinction by means of this direct manifestation of superior force grow less available both in scope and frequency. At the same time opportunities for industrial aggression, and for the accumulation of property by the quasi-peaceable methods of nomadic industry, increase in scope and availability. And it is even more to the point that property now becomes the most easily recognised evidence of a reputable degree of success as distinguished from heroic or signal achievement. It therefore becomes the conventional basis of esteem.  (Veblen 2007, p. 24)

 

Such a shift can cause things that are not even property to become markers of wealth, for instance, one’s leisure time.  Again from Veblen:

 

During the predatory culture labour comes to be associated in men’s habits of thought with weakness and subjection to a master. It is therefore a mark of inferiority, and therefore comes to be accounted unworthy of man in his best estate. By virtue of this tradition labour is felt to be debasing, and this tradition has never died out. On the contrary, with the advance of social differentiation it has acquired the axiomatic force due to ancient and unquestioned prescription.  (Veblen 2007, pp. 28-29)

 

From Veblen, we get not only particular conceptions of what counts as a mark of the elite – that is, of the conditions under which people are esteemed (conceptions which, in the modern day, are not always accurate, as we will later argue) – but also, more importantly, the engine which causes such shifts to occur.  Namely, as Veblen argues, if people with esteem tend to have a particular trait, then we come to associate that trait with that esteem; we develop a “habit” of mixing up that trait with their positive qualities and start to esteem that trait alone.  So, we can come to esteem one’s lack of work because, while that lack of work is not the cause of our esteem, we tend to associate it in our minds with those whom we esteem.  This game of association is the basis of indirect reasons for social norms.  If people who follow social norms tend to be the sort we esteem, then we will also come to esteem the traits they have; and if people we do not esteem tend to have their own particular traits, then we will tend to hold those traits in contempt.
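
Veblen’s “engine” is, at bottom, a simple associative update: a trait gains (or loses) standing in proportion to how often it co-occurs with people we already esteem.  The sketch below is a hypothetical illustration of that update rule alone; the learning rate, the esteem values, and the trait names are all invented for the example.

```python
# A hypothetical sketch of Veblen's association engine: traits that
# co-occur with esteemed people gradually acquire esteem themselves.

LEARNING_RATE = 0.1  # invented: how quickly associations form

def update_trait_esteem(trait_esteem, observations):
    """Each observation pairs a person's esteem with the traits they display."""
    for person_esteem, traits in observations:
        for trait in traits:
            current = trait_esteem.get(trait, 0.0)
            # Nudge the trait's standing toward the esteem of its bearer.
            trait_esteem[trait] = current + LEARNING_RATE * (person_esteem - current)
    return trait_esteem

# Esteemed people (esteem = 1.0) happen to display leisure and property;
# disesteemed people (esteem = -1.0) happen to display labor.
observations = [(1.0, ["leisure", "property"]), (-1.0, ["labor"])] * 50
print(update_trait_esteem({}, observations))
# "leisure" and "property" drift toward 1.0, "labor" toward -1.0, even
# though no trait was ever judged on its own merits.
```

Nothing in the update ever evaluates the trait itself, which is exactly Veblen’s point: the lack of work is esteemed only because of the company it keeps.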

 

In this context, we can see how certain opinions might become “charged” with status and become markers demarcating different traits.  If an opinion, for instance, is relatively ordinary among the poor and not the wealthy, we can imagine that someone expressing that opinion would cause others to assume they were poor rather than wealthy.  People might thereby also come to associate certain opinions with positive or negative traits, with intelligence or stupidity, and so on.  This latter category of trait could very easily lead a person to desire a change in their own opinions, towards those they view as indicative of the good and away from those they view as indicative of the bad.

 

And, indeed, people do view certain attitudes this way.  To take a less extreme example, think of politics.  This is an area which is split about 50-50; neither side can truly be considered the “consensus view” throughout America.  While within individual communities one of the two sides can be the consensus (rural, Northern Alabama voted 90% Republican in 2020, and Washington, DC voted 90% Democrat), these community-wide attitudes are certainly less extreme than the consensus that existed in the 1950s against interracial marriage, or that exists now as to the moral acceptability of birth control or interracial marriage.  Yet even in regard to this highly contested area of American life, political views are charged with particular character traits.  Somewhat ironically, 64% of Republicans and 75% of Democrats stated that they viewed members of the opposite party as more “closed-minded” than members of their own party.  Further, a majority (55%) of Republicans and a plurality (47%) of Democrats rated members of the opposite party as more “immoral” than members of their own party (Pew Research Center 2019).  Clearly, political views are morally charged for most Americans.  It seems completely reasonable (and probable) that views such as approval of interracial marriage or homosexuality were once similarly charged, and that their opposites (disapproval of interracial marriage or homosexuality) are viewed in a similar way today.  We can therefore understand how people might come to desire a shift in their opinions:  they come to associate certain traits with certain beliefs, those traits are negative, they want to make sure they do not hold those traits, and so they change their opinions away from those associated with negative traits.

 

And so, we arrive at what Schroeder would call the common salience-striking proposition r, one’s association of a particular view with either virtue or vice, which leads us to believe we can arrive at our common end P, our desire for self-esteem, and which motivates a particular action a, changing one’s opinions from those associated with vice to those associated with virtue.  Now, this would seem not to truly answer our original question.  While we have shown how there might be a common desire to change one’s beliefs to those one associates with positive traits, that leaves open the question of how a particular association can be so common among so many people.  And further, how is a universal change in what traits one associates with certain beliefs any simpler than a universal change in one’s beliefs about actions?  Indeed, the two seem very close in definition; changing one’s beliefs about whether homosexuality is moral seems quite similar to changing one’s beliefs about whether one’s beliefs about homosexuality are moral.  It seems, then, like we have not really discovered anything at all, but have merely moved from the initial question to a slightly more abstract version of it.  However, this view overlooks something fundamental that we have derived about opinions:  that we can not only believe things about opinions but also associate those opinions with particular traits, even non-moral traits.

 

Our argument, remember, is not so much about people shifting their views because of moral considerations but rather because of normative ones.  People might be convinced, not logically that holding this or that position is correct, but rather that holding this or that position is, by social norms, approbated or condemned by society.  The former question is, at least partially, logical; that is, it demands arguments to derive its truth or falsity.  The latter, on the other hand, is empirical and bases its truth in either our experiences or some kind of external source of factual truth (for instance, a study showing most people’s view of a particular norm).  It is further something we cannot directly experience – we cannot say what the entirety of society thinks about a particular action, only what the people around us believe.  And it is something we regularly guess at contrary to our own experience, and even contrary to the truth.  For instance, polls of college students have found their perception of the average student’s attitude towards school drinking culture to be significantly more positive than the actual average of students’ attitudes.  Nor was this misperception a minor, peripheral attitude for these students; its presence was correlated with an increase in drinking (that is, an active change in behavior) among them.  Where exactly this viewpoint originated is uncertain: it may have been partially cultural (from mass media about the collegiate experience which emphasizes alcohol consumption), caused by a loud minority with positive views of drinking culture being more visible than the quiet majority without them, or from some entirely different source.  Yet what this displays is that our common guesses about social norms are not one-to-one with the true norms.  This opens up the possibility that people might believe in norms which do not in fact exist.  Motivated by that belief, they would act as if those were the true social norms.  And, in acting according to that belief, they might come to support whatever beliefs the imagined norms told them were generally supported, so as to hold views charged with the desirable traits which those norms tell them are correlated with those beliefs.  Indeed, the very same students who at first reported a difference in opinion between their views and their peers’ came, over time, to hold more positive views of the drinking culture they had earlier spurned – over time, their views became more similar to the peers of their imagination than to their real peers (Prentice & Miller 1993, pp. 246, 249).
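
The Prentice and Miller finding suggests a simple dynamic: each person adjusts their own attitude not toward the true average of their peers but toward the average they perceive.  The following is a minimal, hypothetical sketch of that dynamic; the attitude scale, the size of the misperception gap, and the adjustment rate are invented for illustration and are not figures from the study.

```python
# A minimal, hypothetical sketch of pluralistic ignorance: agents drift
# toward the norm they *perceive*, not the norm that actually exists.

TRUE_MEAN = 0.3        # invented: actual average comfort with drinking culture
PERCEIVED_MEAN = 0.7   # invented: the (mistaken) perceived average
ADJUSTMENT = 0.2       # invented: how strongly agents conform per period

attitudes = [TRUE_MEAN] * 100  # everyone starts at the true average
for period in range(10):
    attitudes = [a + ADJUSTMENT * (PERCEIVED_MEAN - a) for a in attitudes]

print(f"actual mean after 10 periods: {sum(attitudes) / len(attitudes):.2f}")
# The real average migrates toward the imagined one: the students' views
# become more similar to the peers of their imagination than to their real peers.
```

The design point is that the misperception never has to be corrected for it to matter; so long as it is acted upon, the imagined norm makes itself real.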

 

Looking at social shifts as motivated by people’s appraisal of certain views as socially correlated with desirable traits, rather than as merely good, opens up plenty of doors as to how people’s views on certain social issues might change.  Thus far, we have not mentioned one sort of social change omnipresent in modern, pluralistic society and uniquely impressive in its size, scope, and speed: advertising.  With some money, some time, and often using methods taken from clinical psychology, advertisements are able to shift the entirety of society’s attitudes towards a particular brand or product to a massive extent.  While, sure, spreading the “idea” that your audience should smoke a cigarette is slightly less prestigious or significant than spreading political ideology, it is no less impressive.  It is the ability to create a new “proposition r” for your audience’s “desire P,” in Schroeder’s terms, a new salience-striking immediate desire serving your audience’s broader, more fundamental desire.  And although advertisements might sometimes be merely factional and targeted to a particular group, they are often not.  Think of smoking, for instance.  In 1955, amidst widespread advertising and the creation of a cultural norm in favor of smoking, 40% of Americans were daily smokers, and looking purely at men, 53.8% smoked cigarettes daily (United States Congress Division of Health Interview Statistics 1970, p. 5).  In 2020, on the other hand, a mere 12.5% smoked cigarettes daily, and only 7.4% of those between 18 and 24 years old did so (Cornelius 2022).  Accompanied by an aggressive government anti-tobacco advertising campaign, this change was not simply one of people’s habits but also of their attitudes.  People who previously found smoking cigarettes completely normal have, in significant numbers, been shown to change their opinions over time to now find smoking either disgusting or outright immoral (Rozin & Singh 1999, pp. 331-332).  These are not merely shifts in the opinions of small factions of the population; they are seismic shifts in attitudes and actions for very large segments of the population, segments that found their daily lives and their attitudes towards the world around them reshaped by an advertising campaign – in other words, massive social change was effected by these advertisements.

 

Further, this social change was created more by a change in people’s attitudes towards the sort of person who would smoke than by a change in people’s moral opinions.  Among those who did not smoke, the belief that smoking was disgusting was significantly more common than the belief that it was immoral.  Indeed, views that smoking was immoral tended to be mere entailments of views that it was disgusting, as shown by the fact that disgust towards smoking had a greater influence on whether someone thought smoking immoral than did that person’s views on whether smoking was healthy (Rozin & Singh 1999, pp. 326, 328).

 

Indeed, it seems that advertising campaigns generally have an unusual ability to affect people’s views of different objects and attitudes.  Just look at journalist Vance Packard’s description of a report on what consumers thought owning different cars implied about the owner’s personality and public image:

 

One of the most remarkable documents I came across in my investigation was a pamphlet called "Automobiles, What They Mean to Americans." It reports on a study made for The Chicago Tribune by Social Research, Inc. The major merchandising journals have discussed its findings in great detail. The study was conducted by a team of social scientists who used a variety of probing techniques on 352 car owners in the Chicago area.

 

The investigators found that only a minority of the population, mostly men in the lower class, have any real interest in the technical aspect of cars. And the major finding that stands out in the survey is that automobiles are heavily laden with social meanings and are highly esteemed because they "provide avenues for the expression ... of the character, temperament and self concept of the owner and driver.... The buying process is an interaction between the personality of the car and the personality of the individual."

 

The report indicated the personality of one sort of owner of various major makes of car by presenting a series of circles. Each circle contained words written in to indicate the dominant traits of this owner and their relative importance. Here are some of the owner profiles that were indicated:

 

Cadillac: "Proud ... flashy ... salesman ... middle-aged ... social mobility ... good income level ... responsible."

 

Ford: "Speed demon ... good income ... young man ... proud ... upper lower class ... drives to work ... practical"

 

Desoto: "Conservative ... responsible ... matron ... upper middle class ... good income ... proud."

 

Studebaker: "Neat look ... sophisticated ... intellectual ... mobile ... professional ... young man."

 

Pontiac: "Stable class outlook ... middle of road ... married woman ... mother ... sincere ... conventional ... busy."

 

Mercury: "Salesman ... assertive ... mobile ... modern ... substantial ... lower middle ... father ... quick."

 

The report stated that "people buy the cars they think are especially appropriate for them" and then made these points:

 

People who want to seem conservative, to tell the world they are very serious and responsible, tend to buy Plymouth, Dodge, DeSoto, Packard, four-door sedans, dark colors, minimum accessories and gadgets.

 

People who want to seem sociable and up-to-date but in a middle-of-the-road sort of way tend to favor Chevrolet, Pontiac, Buick, Chrysler, two-door coupes, light colors, moderate accessories and gadgets. People who want to express showiness, to assert their individualism and modernity, tend to buy Ford, Mercury, Oldsmobile, Lincoln, hardtops, two tones, bright shades and hues, a range of extras, gadgets, fads.

 

People who need to express unusual status or individual needs favor Cadillac (ostentation, high status), Studebaker, Hudson, Nash, Willys, convertibles (impulsiveness), very bright colors, red, yellow, white, latest gadgets and accessories.

 

One of the interesting variations, under the ways to fulfill "wish for attention" through car ownership, is what the investigators call "conspicuous reserve."  Those people want other people to know their status but at the same time want to express it modestly. Some may engage in deliberate downgrading. This is "a frequent technique of people who are secure in their high social position. They show their superiority by displaying indifference to status—by purposely buying less expensive cars than might be expected. They love beat-up station wagons and old cars." Others who wish attention may try to do it with car images showing a sophisticated flair: foreign cars, the Nash Rambler, the new Studebaker. Burleigh Gardner told of a crisis that occurred among a group of four doctors who shared a suite on Chicago's swank Michigan Avenue when one of the colleagues began parking his slightly radical, attention-getting car in front of the building. After conferring they told him the car didn't fit the image they were trying to build for themselves as carriage-trade medicos.  (Packard 2007, pp. 70-72)

 

Connecting this report to our concept of “charging” goods or ideas with particular stereotypes seems almost too obvious; indeed, the report itself alludes to Veblen’s idea of “conspicuous consumption” (referring instead to “conspicuous reserve”).  Perhaps the most interesting part of this report, however, is how arbitrarily the different cars are “charged.”  There is no good reason why a brand name should decide whether someone is a father, proud, or practical, other than that firms have effectively marketed their particular product in that particular way.  Our second major observation is that this marketing does not always create exactly the intended effect.  While some of the stereotypes are simply what was directly advertised, others are mere byproducts of that advertising.  For instance, it is doubtful that the concept of “conspicuous reserve” was intentionally created by firms advertising more expensive cars; it exists because an advertising market which promotes some cars but not others thereby causes the less advertised to become indicative of humility.  If firms are able to so effectively, so obviously arbitrarily, and so often unintentionally charge the general population’s attitudes towards products with particular traits, there is no reason to believe that they could not do the same thing with respect to beliefs and other traits.  If they repeatedly advertise in favor of a particular view, attitude, or sort of action, then that view, attitude, or action can increase in popularity in much the same way as a product, and the contrary is true if they advertise it as negatively charged.

 

This particular concept of advertisement as a source of people’s conception of social norms can be extended beyond the world of advertising itself to include everything that uses similar processes to spread a viewpoint.  That includes most sorts of media, that is, the places from which people derive the stories that make up their concepts of the world, particularly of those parts of the world they do not regularly experience.  On a mass level, which is the form much modern media takes, the favoring or disfavoring of particular views can be equally effective upon most Americans, whatever their particular backgrounds.  In cases in which mass media seeks to positively portray a particular attitude, that attitude can be created by the very portrayal.  An example of this sort of advertisement would be how the Ku Klux Klan revived in the 1910s-20s through the largest form of mass media at the time, the new artform of movies.  First through the wildly popular blockbuster “The Birth of a Nation,” which the re-founder of the Klan described as having “helped the Klan enormously,” and then through the less popular “The Face at Your Window,” which the new Klan used directly as propaganda in places where it sought to increase its membership, the revival of the Klan at this time, certainly organizationally and perhaps even ideologically, was created, at least in large part, through the contemporary mass media’s glorification (or at least alleged glorification) of the organization (Rice 2015, p. 16, pp. 32-34).

 

Another possible way in which the mass media can create social change is by destroying certain social norms, that is, by making certain traits appear more “normal” than they previously were.  An example of this sort of shift can be seen with regard to homosexuality (or at least general trends in LGBT identification).  If people view a trait extremely negatively, that will probably suppress the number of people who are open about having it.  On the other hand, if that trait is viewed less negatively, or at least as more normal, people will be more likely to act upon it or even to believe they have it.  (For example, considering the recent uptick in self-identification with a variety of groups, such as bisexuality and transgenderism, that entered the public cultural milieu only very recently, it is highly likely that many of those who now view themselves as members of those groups did not until recently know what those groups even were, let alone identify as members.)  In the case of homosexuality, the majority has consistently overestimated the percentage of the population which is in fact homosexual, which is to say, it has consistently overestimated how “normal,” or at least common, homosexuality is (the usual guess has been 23%-25%, whereas the real number has usually been 4%-7%) (McCarthy 2019; Morales 2011; Newport 2018; Jones Feb. 2022a).  This has accompanied a consistent increase in the percentage of Americans who identify as LGBT (from 3.5% in 2012 to 7.1% in 2021) (Gates & Newport 2012; Jones Feb. 2022a).  Indeed, those incorrect guesses have come, over time, to look more like the real percentage.  The consistently inaccurate guess of 23% now suddenly seems prescient, as the percentage of Generation Z identifying as LGBT has, quite shockingly, ticked up to 21% (or 22% of those who answered the poll) from 10% in 2017, a rather sudden increase (Jones Feb. 2022a; Newport 2018).  It seems that these two factors, the consistent overestimation of the normalcy of homosexuality and the consistent increase in the percentage of the population identifying as LGBT, particularly since the former guessed the very number the latter has approached over time, are at least somewhat correlated; that is, when people perceive a certain trait as more prevalent than it is, they become more likely to identify with that trait, as they feel it is less marginalized.  This then causes the percentage that actually has the trait to approach the percentage perceived to have it.  As a result, the mass media can likely cause a particular social norm to become less pronounced, and thereby cause a particular socially ostracized trait, belief, or action to lose its negative charge.  (This is true in particular of traits that people view negatively, as it is much easier to strongly condemn someone who does something both immoral and uncommon than someone who does something immoral yet common.  The former is much more likely to result in a condemnation of the individual, the latter in a condemnation of the society.  For instance, it is much more difficult to condemn someone for committing infanticide if they lived in a society like ancient Rome, where doing so was widely considered normal and morally inoffensive, than in a society where doing so is considered morally evil, such as modern America, even though one can condemn both.)

 

This latter example, that of the mass media getting rid of social norms by making traits appear more normal than they in fact are, is, we would hypothesize, also true the other way around.  That is, an underestimation of the normality of a particular attitude or trait probably also decreases the percentage of people willing to identify with it.  That being said, however, it seems that, in practice, it is far more difficult to create a new social norm than to destroy a previous one.  This asymmetry has been displayed in anti-cigarette advertisements.  Studies have found that when those advertisements seek to portray smoking as socially ostracized, they consistently tend to be ineffective, a result believed to stem from students’ general distrust of ads which portray negative social attitudes that do not in fact exist (Pechmann 2003).  This much is to say that it is significantly more difficult to create new social ostracization, and therefore new negative norms, via mass media promotion of those norms than the other way around.  This makes sense within our broader theory.  Remember, the basis of the mass media’s influence over social norms was not that they directly controlled those norms but rather that they controlled people’s perception of them.  And the reason they controlled that perception was that people did not have an empirical way of testing it and were therefore required to rely on what they were told.  However, claims that someone will be ostracized for certain actions, attitudes, or beliefs are directly falsifiable; that is, if they are not true, the untruth can very easily be seen by observers.  If someone states that smokers are socially ostracized, and we know smokers who are not socially ostracized, we immediately see the falsehood of the claim.  On the other hand, a claim which overstates the normality of a particular trait is less empirically testable (outside of unique circumstances where people with that trait are actively and aggressively ostracized into their own subgroup), and advertisements which directly try to create social norms by charging actions, attitudes, or beliefs avoid even the concept of empirical testing.  This is not to say that an advertisement understating the normality of something, or seeking to arbitrarily portray that thing as indicative of social ostracization, is necessarily impossible, simply that it is very difficult; at this moment, I cannot think of any case of it being successful, and I can think of many where the strategy failed.  For instance, think of political campaigns, such as the Labour campaign in the United Kingdom in 1992 or the Hillary Clinton campaign in the United States in 2016, which sought to portray the other side as unpopular and therefore undeserving of votes.  Neither succeeded, and in both cases, voters distrusted the idea that they were ostracized or otherwise abnormal for opinions they viewed as common.  This is absolutely not to say that campaigns cannot create new norms – they can, for instance, through the creation of a feeling of disgust for a particular viewpoint, attitude, or action.  It is rather to say that the only way to reliably create a new social norm is by charging a belief, attitude, or action, not by portraying it as merely unpopular.

 

This model of asymmetric mass media power to create or destroy social norms explains quite a bit about how relatively free and pluralistic modern society is.  When we look back at the past, at the 19th century, for instance, it seems unreasonably puritanical and authoritarian relative to the modern world.  From this perspective, our predecessors seem so unreasonably authoritarian to us today partially because it is far easier for advertisers and mass media producers to erode social norms over time, by portraying something as common, than it is for them to create a new norm.  Further, any change seeking to create a social norm requires constant re-adjudication of that norm; that is, the mass media must constantly explain why the particular trait is bad.  Think of the constantly felt need for more anti-smoking ads, or of the constant emphasis on creating new “anti-racist” forms of mass media or education (that is, those about the evils of racism, such as in American slavery or the Holocaust): the creation of negative social norms does not need simply an initial push, it needs the constant creation of disgust for a particular group.  On the other hand, destroying negative social norms, or even creating new positive ones, needs no constant re-adjudication once the norm is negated or established, only inertia.  Someone merely needs to say something is popular or normal to establish it as so; and once it is so, one barely needs even to do that.  If one were to bring up interracial marriage or birth control as moral issues to most normal people, they would be less likely to have strong opinions on the issues (although they might have strong opinions on the person bringing them up, due to opposite negative social norms against racists or religious extremists, for instance) and more likely to be surprised that anyone even viewed these as moral issues.

 

At the same time, we cannot (or at least, it is very difficult to) completely eradicate views that are contrary to the mainstream.  For, while one can create new norms to replace old ones and even charge the old ones with negative traits for most people, those old norms cannot, without that charging, simply become “obviously” history.  This is why, in spite of trends in favor of birth control or interracial marriage, there are still, and probably still will be in fifty years, small holdouts of 5-10% of the population in both cases who do not agree with the social norm.  For there are people who are not affected, either because of their removal from normal society into local communities (such as, in extreme cases, cults or abnormal religious communities) or simply because of the extremity of their previous disgust for the particular action (although neither case, of course, necessarily rules out one’s being affected by social change).

 

It is likely that this sort of attitude was the reason for the major forms of social change which we have used as examples:  birth control, racial integration, and the acceptance of homosexuality.  In all three cases, mass media was broadly on the side to which America generally changed.  In the first case, birth control (whether the drug or outright abortion) was viewed positively by mass media outlets, at least internally, as a way of avoiding uncomfortable controversies for their top stars.  And, indeed, this spilled over into increasing the popularity of birth control as well as abortion.  Perhaps one of the most significant causes of shifting attitudes on the latter was a (minor) television star supported by the mass media, Sherri Finkbine, whose own abortion shifted public views on the issue (Berry 2000, p. 159).  And, certainly, in the latter two cases, most mass media was highly favorable towards the eventually successful sides of the debate.  Both are somewhat famous, so we will keep our discussion (and citations) to a minimum, but to put it simply, it is hard to imagine the average American changing their views anywhere near the extent that they did on racial issues if not for highly successful mass media products such as “Guess Who’s Coming to Dinner” or “In the Heat of the Night” promoting such shifts in opinion with respect to interracial marriage, just as it is equally hard to imagine in the case of homosexuality without products such as “Brokeback Mountain” or “Milk.”  It has been a general trend that the attitudes of most mass media producers, such as those in Hollywood, precede the rest of the country in advocating most kinds of social change (indeed, California and New York, where most members of the opinion-making mass media live, are usually considered unusually liberal, particularly socially liberal).

 

We have thus far used positive examples (or at least examples that I view as positive) of modernity creating good social change.  However, this sort of social change is not necessarily positive.  It is rather the creation of whoever has the power to create mass media, whether that be the government, corporations, or NGOs.  It is at this point that our ability to systematize social change starts to break down, because to do so, we would be required to explain the desires of each group that might bring about social change, and often, these different groups’ interests are not just different but actively contrary:  the interest of a government seeking to decrease the percentage of the population that smokes is quite different from the intention of a cigarette company.  So, our further discussion of general social change is only going to be true of most social change, that is, social change created by most forms of advertising and mass media.  Most of that advertising and mass media is created by corporations, and as a result, it enacts corporate preferences, that is, those which will make the most money for the corporation creating the mass media or advertisement.  Of course, this sort of argument is far more complicated than it sounds, and it requires, most especially, an explanation of corporate decision-making and preferences which can explain how corporations decide which policies, and thereby which preferences and cultural values, they prefer.  That will be the last part of our broad explanation of social change prior to our discussion of the individual social changes occurring right now.

 

Social Change in Firms

 

Advertising and mass media, the sources of our social norms, are dominated completely by businesses.  In 2021, the advertising industry spent $284 billion, and American consumers spent $134 billion on entertainment industry products (including television streaming products like Netflix) (Adgate 2021; Adgate 2022).  Considering that the profit margin in the entertainment industry is about 28%, total entertainment industry spending, that is, spending either directly or indirectly on the creation of mass media, was probably somewhere around $100 billion (Lieberman 2014).  Comparatively, government and NGO spending is almost nonexistent:  between 2006 and 2015, the government spent about $1 billion per year on advertising, NGOs spent about $6 billion on advertising in 2021, and even political campaigns spent only a total of $8.5 billion on advertising in the 2020 campaign, making the three about 0.4%, 2.1%, and 3% of the advertising industry, respectively (Government Accountability Office 2016, p. 5; Clolery 2021; Homonoff 2021).  The only cases in which being outspent by that margin can still lead to success are cases similar to the cigarette-industry example – cases where the government uses its unique monopoly on violence both to ban advertising by the industry and to force the industry to effectively advertise against itself (in the cigarette case, through policies like requiring warning labels on cigarette packages), as the federal government did through the Public Health Cigarette Smoking Act of 1969 (Miles 2009).  Outside of these cases, though, non-business groups are usually underrepresented, relatively incompetent, and bad at selling their views.  So, in understanding why advertisements and mass media advocate the views they do, and therefore why social change occurs as it does, we should try to understand how businesses make the decisions they do.
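
Since several of these percentages are derived rather than quoted, it may help to lay out the arithmetic explicitly.  The sketch below simply reproduces the calculations in the paragraph above from its stated figures; it adds no data of its own.

```python
# Reproducing the arithmetic above from the paragraph's stated figures.

ad_industry = 284e9            # 2021 advertising industry spending
consumer_entertainment = 134e9  # 2021 consumer entertainment spending
profit_margin = 0.28

# Entertainment spending net of profit: roughly what was spent creating media.
media_creation = consumer_entertainment * (1 - profit_margin)
print(f"entertainment creation spending: ${media_creation / 1e9:.0f}B")
# ~$96B, i.e. "somewhere around $100 billion"

for name, spend in [("government", 1e9), ("NGOs", 6e9), ("campaigns", 8.5e9)]:
    print(f"{name}: {spend / ad_industry:.1%} of the advertising industry")
# government: 0.4%, NGOs: 2.1%, campaigns: 3.0%
```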

 

In this section, we will try to analyze business decision-making according to the “Evolutionary” or “Neo-Schumpeterian” school of economics.  We are choosing this rather heterodox school because, as our analysis will display, it simply has a more realistic and precise view of business decision-making, and because it focuses far more than more mainstream schools, such as the neoclassical or Keynesian, on how businesses interact with changing technological (and, as we will apply it, social) conditions.  Of course, this means that this part of our article is going to be more innately controversial than the rest of it thus far, because it directly endorses a viewpoint that, while not marginalized, is also not exactly mainstream.  Although our view here is possibly incorrect, most of its basic points are in line with, or even implied by, our previous argument about human action generally:  businesses tend to make decisions based on what seems to grant them more profit; they tend to make those decisions based on habits that are empirically “associated with” profits; they tend to change those habits when their previous habits are causing severe problems for the company (whether because of a new competitor using different, more effective habits, an old competitor changing its habits to be more effective, or some broad change in the market such as a change in technology); and those habits and their precise implications do not tend to be well understood by the business – they are simply followed.  To put it simply, the evolutionary school gives a similar explanation of business action to the one we have given of human action:  businesses all have the same end goal (what Schroeder called P), profit.  Salience-strikings are needed to make a particular optimization or change in habit appear reasonable to businesses, just as they are for us, and outside influences are needed to cause those salience-strikings.  However, for businesses, unlike for us, salience-strikings are created not by desire but by outside influences:  technological change, outside intervention from an entrepreneur, etc.  This explanation of business decision-making is highly important for understanding the basic timeline of our current social change, as well as why businesses make exactly the decisions they do.  However, even without our theory, simply conceding that businesses seek to make money and that they are imperfect but generally effective at doing so is compatible with most of the argument we will make in Part Two of this article.  So, a reader who does not find our precise explanation of business operations here accurate (or who already knows that they do not find evolutionary economics compelling) can simply skip this section and move on to Part Two.

 

To try to explain corporate decision-making, we should first try to understand the basic ends of corporations.  It is tempting to think of them simply as what economists Richard R. Nelson and Sidney G. Winter describe as “profit, or market value, and the more of it, the better,” that is, profit-maximization (Nelson & Winter 1982, p. 53).  However, the reality of corporations, which are imprecise and do not make decisions in a timeless vacuum, is slightly more complicated.  As Nelson and Winter describe:

 

…the firm operates at all times with a status quo policy, the profitability of which it inexactly compares, from time to time, with individual alternatives that present themselves by processes not entirely under its control – changing policies when the comparison favors the presented alternative over the current status quo.  (Nelson & Winter 1982, p. 31)

 

It would be more accurate to say that corporations act intending to move towards greater profit, engaging in what Nelson and Winter describe as “profit-motivated striving” (Nelson & Winter 1982, p. 31).  On this understanding of the ends of corporations, firms are presented with different possible strategies, choose between them depending on their (imperfect) view of how profitable each change in strategy would be, and thereby slowly change or discard strategies which entail significant relative losses.  As Nelson and Winter argue:

 

…in view of the nature of the deliberate approximation to the complex underlying reality, it is…natural to represent large-scale motivational forces as a kind of persistent pressure on decisions, a pressure to which the response is sluggish, halting, and sometimes inconsistent. And it may be noted that this is the same view of dominant motivational forces to which one is led if one regards them not as the result of an intellectual quest for perfect consistency, but as the outcome of an imprecise and unsubtle evolutionary purging of motives that diverge excessively from survival requirements.  (Nelson & Winter 1982, p. 58)
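
Nelson and Winter’s picture of the firm, a status-quo policy inexactly compared, from time to time, against alternatives that present themselves, is close enough to a procedure that a sketch may clarify it.  The following is a hypothetical toy model, not anything from Nelson and Winter’s own formalism: the invented profit landscape stands in for the “complex underlying reality,” the noise term for “inexact” comparison, and the random arrival of alternatives for processes “not entirely under its control.”

```python
import random

# A hypothetical toy model of "profit-motivated striving": the firm holds
# a status-quo policy and switches only when an occasional, noisily
# evaluated alternative appears better than what it already does.

def true_profit(policy):
    # Invented stand-in profit landscape; the firm never sees this exactly.
    return -(policy - 3.0) ** 2 + 10.0

def noisy_estimate(policy):
    return true_profit(policy) + random.gauss(0, 1.0)  # inexact comparison

status_quo = 0.0
for period in range(200):
    if random.random() < 0.2:  # alternatives arrive sporadically, unbidden
        alternative = status_quo + random.gauss(0, 1.0)
        # Change policy only when the comparison favors the alternative.
        if noisy_estimate(alternative) > noisy_estimate(status_quo):
            status_quo = alternative

print(f"final policy {status_quo:.2f}, true profit {true_profit(status_quo):.2f}")
# The firm typically drifts toward, but rarely settles exactly at, the
# profit peak at 3.0: "sluggish, halting, and sometimes inconsistent."
```

Run repeatedly, the model shows a firm that never maximizes anything in the neoclassical sense yet still moves, imprecisely, toward greater profit, which is all the evolutionary account requires.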

 

However, this explanation alone is not enough to understand why corporations make the decisions they do.  For, in preferring certain alternatives to their current situation or vice versa, there is no simple, omniscient “firm” making decisions but rather individual people.  We should therefore try to understand how those individual people can make those decisions: what knowledge they are working with, how different opportunities seem to them, etc.  To better understand this manner of decision-making, we should also understand how corporations work in the present, the current status quo against which they are comparing alternatives.  Here, again, Nelson and Winter explain:

 

The idea that organizations "remember" a routine largely by exercising it is much like the idea that an individual remembers skills by exercising them. The point that remembering is achieved largely through exercise, and could not be assured totally through written records or other formal filing devices, does not deny that firms keep formal memories and that these formal memories play an important role. But there must be much more to organizational memory than formal records. Further, cost considerations make "doing" the dominant mode of information storage even in many cases where formal records could in principle be kept.

 

To see how exercise of a routine serves as parsimonious organizational memory, consider an organization in fully routine operation and ask what really needs to be remembered, given that such a state has been achieved. Under such a regime, the situations of individual members and of the organization as a whole contain no significant novelties: the situations confronted replicate ones that were confronted the previous day (or week, month, or year) and are handled in the same way. The scope of the activity that actually takes place in such a static condition and the operational knowledge involved are extremely restricted. Members perform only a minute fraction of the routines they have in repertoire. The lathe operator and the lathe turn out a few specific parts; there is an indeterminately larger number that they could (after appropriate setup and learning) produce. The operator's skills as truck driver and short-order cook are never drawn upon, and perhaps are unknown to other organization members. Routine operation of the organization as a whole certainly does not require that the lathe operator maintain his skill in cooking bacon and eggs, or in the machining of parts for products that were discontinued three years previously; neither does it require that other members remember that the lathe operator possesses or once possessed these skills. If the same state of routine operation is expected to continue indefinitely, there is no economic benefit to be anticipated from holding this sort of information in the organization's memory.  (Nelson & Winter 1982, pp. 99-100)

 

In this status quo, the routine, having been established, can only be understood as a broad “working” system, as opposed to as a set of individual constituent parts:

 

To the extent that the description above is valid, skills, organization, and "technology" are intimately intertwined in a functioning routine, and it is difficult to say exactly where one aspect ends and another begins. This is another way of arguing that "blueprints" are only a small part of what needs to be in an organizational memory in order that production proceed effectively. Furthermore, once the set of routines is in memory by virtue of use, blueprints may not be necessary save, perhaps, as a checkpoint to assess what might be wrong when the routine breaks down.  (Nelson & Winter 1982, p. 104)

 

For those involved themselves, all that is necessary to understand is one’s particular obligations and “job” – that is, what one is “supposed” to do to avoid losses – rather than anything about the broader performance of the company:

 

What is required for the organization to continue in routine operation is simply that all members continue to "know their jobs" as those jobs are defined by the routine. This means, first of all, that they retain in their repertoires all routines actually invoked in the given state of routine operation of the organization.

 

There is, however, much more to "knowing one's job" in an organization than merely having the appropriate routines in repertoire. There is also the matter of knowing what routines to perform and when to perform them. For the individual member, this entails the ability to receive and interpret a stream of incoming messages from other members and from the environment. Having received and interpreted a message, the member uses the information contained therein in the selection and performance of an appropriate routine from his own repertoire.  (Nelson & Winter 1982, p. 100)

 

These observations can be applied to the firm’s planners as well.  Those people, just as much as the rest of the firm, need not understand precisely what the routine is or why it is as successful as it is; they merely need to understand the proper routine for planning – in what sorts of circumstances a firm should change its behavior.  And the knowledge they have, or at least the knowledge necessary for them, is simply that which their “routine” tells them to have.  So, we should try to understand which techniques become part of the routine, and how.  First, we should understand how these routines are generally created.  According to economist Constance E. Helfat:

 

Routines develop over time in part through learning by doing through trial and error and repeated performance of activities. This form of learning has been observed at the team level, as in the study of Cohen and Bacdayan (1994) in which two person teams learned to configure playing cards in a specified manner by repeatedly playing a card game. Numerous studies of learning curves and productivity improvements in manufacturing have also provided evidence of learning by doing at the organizational level (Argote, 1999).  (Helfat 2018, p. 92)

 

This much is to say that these routines are created empirically, by testing what is even vaguely effective.  That is, the routines firms cleave to are those that work at producing the most profit.  The same applies to firm planners – those who plan possible changes for the firm – who decide what to do along these same lines, cleaving to whichever tactics of changing firm-wide policy tend to lead to the most profit.  Those firm-wide planning tactics which work in general, and which therefore become routine, tend to be conservative; that is, they tend to avoid change.
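To make this trial-and-error picture concrete, the following is a minimal sketch – in Python, with entirely invented routine names and profit figures – of the satisficing search just described: a firm keeps whatever routine has been yielding acceptable profit and experiments with alternatives only when performance falls short.  Nothing here is drawn from Nelson and Winter's or Helfat's actual models; it is an illustration of the mechanism only.

```python
import random

random.seed(0)

def profit(routine):
    # Hypothetical, noisy payoffs: the firm never observes *why* a
    # routine works, only the profit it happens to yield this period.
    base = {"old process": 8.0, "variant A": 9.5, "variant B": 6.0}
    return base[routine] + random.gauss(0, 1.0)

current = "old process"
ASPIRATION = 7.0   # profit level the firm treats as "good enough"

for period in range(20):
    earned = profit(current)
    if earned >= ASPIRATION:
        continue                      # satisficing: no search while things work
    # A shortfall triggers local, trial-and-error search, not global planning:
    candidate = random.choice(["variant A", "variant B"])
    if profit(candidate) > earned:
        current = candidate           # cleave to whatever happened to pay better

print("routine cleaved to after 20 periods:", current)
```

The point of the sketch is the asymmetry it encodes: so long as the status quo clears the aspiration level, no alternative is ever even sampled – conservatism falls out of the selection procedure itself.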

 

The first reason for this innate conservatism among firms is that they tend to be made up of people who are all seeking to maximize what they can get out of the firm (wages, and wealth generally) while working as little as possible, which is true of everyone involved, from managers to employees to stockholders.  All these different groups tend to create a kind of status quo between themselves during which, at least for a time, they agree upon their particular duties and rights within the firm to avoid constant internal squabbling (which would be expensive for all parties) – a “truce” of sorts (Nelson & Winter 1982, pp. 107-111).  Part of what this “truce” does, though, is make all sides suspicious of any possible changes to their own or any other party’s rights or duties, since such changes might re-open internal conflict over different members’ rights and duties and thereby damage one party’s particular interests.  As Nelson and Winter explain:

 

Like a truce among nations, the truce among organization members tends to give rise to a peculiar symbolic culture shared by the parties. A renewal of overt hostilities would be costly and would also involve a sharp rise in uncertainty about the future positions of the parties. Accordingly, the state of truce is ordinarily considered valuable, and a breach of its terms is not to be undertaken lightly. But the terms of a truce can never be fully explicit, and in the case of the intraorganizational truce are often not explicit at all. The terms become increasingly defined by a shared tradition arising out of the specific contingencies confronted and the responses of the parties to those contingencies. In the interpretive context of such a tradition, actions by individual members have connotations related to the terms of the truce. In particular, a contemplated action otherwise sensible both for the organization and for the member taking it may have to be rejected if it is likely to be interpreted as "provocative" - that is, as signaling a lessened commitment to the preservation of the truce and a corresponding willingness to risk overt conflict for the sake of modifying the routine in a manner favored by the member who initiates the change. On the defensive side, each member strives to protect his interests by standing prepared to deliver a firm rebuff not only to actions by others that clearly threaten those interests, but also to actions that might be quite innocuous were it not for their possible interpretation as probes of his alertness or determination to defend his rights under the truce.

The apparent fragility of the prevailing truce and the implied need for caution in undertaking anything that looks like a new initiative is thus reinforced by the defensive alertness (or alert defensiveness) of organization members seeking to assure that their interests continue to be recognized and preserved. The result may be that the routines of the organization as a whole are confined to extremely narrow channels by the dikes of vested interest. Adaptations that appear “obvious” and “easy” to an external observer may be foreclosed because they involve a perceived threat to internal political equilibrium.  (Nelson & Winter 1982, p. 111)

 

Secondly, conservatism is implied by the human aspect of the firm.  For, as we discussed earlier, firms do not simply act according to blueprints but according to routines, and the latter can only be derived empirically.  Empirical knowledge can only be remembered by people who have experienced it.  While in some cases empirical knowledge can be learned second-hand, in others that is much less possible.  Many routines are heuristics that are difficult to put into words – many others are purely practical.  Think back to a term we earlier introduced in a different context, Oakeshott’s concept of “practical knowledge,” something which is purely habitual and cannot be fully put into words.  These sorts of purely practical forms of knowledge are quite common in our lives.  To use a couple of (admittedly non-business) examples from Oakeshott:

 

In a practical art, such as cookery, nobody supposes that the knowledge that belongs to the good cook is confined to what is or may be written down in the cookery book; technique and what I have called practical knowledge combine to make skill in cookery wherever it exists. And the same is true of the fine arts, of painting, of music, of poetry; a high degree of technical knowledge, even where it is both subtle and ready, is one thing; the ability to create a work of art, the ability to compose something with real musical qualities, the ability to write a great sonnet, is another, and requires, in addition to technique, this other sort of knowledge. Again, these two sorts of knowledge are involved in any genuinely scientific activity. The natural scientist will certainly make use of the rules of observation and verification that belong to his technique, but these rules remain only one of the components of his knowledge; advance in scientific discovery was never achieved merely by following the rules. The same situation may be observed also in religion. It would, I think, be excessively liberal to call a man a Christian who was wholly ignorant of the technical side of Christianity, who knew nothing of creed or formulary, but it would be even more absurd to maintain that even the readiest knowledge of creed and catechism ever constituted the whole of the knowledge that belongs to a Christian.  (Oakeshott 1962, pp. 8-9)

 

These sorts of knowledge, it is hard to deny, also exist for firm workers.  “Business experience” is usually considered a significant positive for precisely this reason – there are things that can be learned in most areas of life, including work, only by actually doing them repeatedly.  As a result, most firms will avoid firing people with unique, inarticulable knowledge, particularly if they are the only one at the firm (or even in the world) who knows a particular routine – a common situation, according to Nelson and Winter:

 

…in some cases the memory of a single organization member may be the sole storage point of knowledge that is both idiosyncratic and of great importance to the organization. The knowledge may be tacit - say, an intuitive grasp of the priority structure of the competing demands on the employee's time that are signaled by incoming messages. It may be articulable but not written down - the first names, marital status, and preferred recreations of the important customers in the region, or the action that is called for when a particular machine starts to vibrate too much.  (Nelson & Winter 1982, p. 115)

 

The departure of someone with irreplaceable knowledge is often something which substantially changes the firm.  The firm must assign the task of recovering the lost knowledge to someone new, who will either slowly re-learn exactly what the old employee knew or, failing that, develop a new, similar strategy which does not exactly replicate the predecessor’s:

 

The loss of an employee with such important idiosyncratic knowledge poses a major threat to the continuity of routine - indeed, if the departure is unanticipated, continuity is necessarily broken. The new person hired to fill the role may eventually restore a semblance of the old routine, but only by picking up the knowledge more or less from scratch, guided by whatever clues his predecessor left lying about and by the indications provided by those in adjacent roles, within or outside the organization. However, those in adjacent positions may be taking the opportunity to attempt to redefine his organizational role in their own interest, so their advice is not fully trustworthy. For this reason, and because the new role occupant may himself be different in significant and durable ways from his predecessor, and also as the result of other contingencies affecting the role-learning process, it is highly unlikely that a near replica of the predecessor's role performance will result. In short, the organizational routine will mutate.  (Nelson & Winter 1982, pp. 115-116)

 

Thus, because of this conservatism and this demand for continuity – the need to avoid upsetting the firm’s internal truce and destroying important bits of knowledge held only by particular members – firms will generally avoid not just changes to their broad strategies but nearly every other kind of change as well, for fear that those other changes might force a change in broad economic strategy.  As Veblen explains:

 

All economic change is a change in the economic community - a change in the community's methods of turning material things to account. The change is always in the last resort a change in habits of thought. This is true even of changes in the mechanical processes of industry. A given contrivance for effecting certain material ends becomes a circumstance which affects the further growth of habits of thought - habitual methods of procedure - and so becomes a point of departure for further development of the methods of compassing the ends sought and for the further variation of ends that are sought to be compassed.  (Veblen 1898, p. 391)

 

This much is to say that all changes change the perspective from which other changes are seen.  A change in which kinds of machines are used, for instance, changes the utility which a particular machine operator has to the firm.  This shift can cause a particular machine operator to lose their job because they are no longer viewed as useful, thereby creating even more reverberations across the firm.  The unpredictability of these reverberations of change can therefore cause individual members of the firm to want to avoid any future change whatsoever, even if such change is not directly negative for that particular member of the firm.  This is why we can see unions striking over automation of particular employees’ jobs – and employees who are not directly affected by that automation supporting those strikes (Hernandez 2018).  This conservatism, a necessary part of firms generally, is so extreme as to bring firms to what is basically a standstill.

 

This situation of extreme conservatism reminds us of the situation we diagnosed in individuals’ opinions, and just as in that case, it demands some kind of outside influence, some kind of grace, to disrupt its stasis.  And the existence of this outside influence is absolutely necessary for our theory to make any sense.  For, if firms were as perfectly conservative as we have thus far described them, we should be in constant stasis, with no discovery, technology, or change whatsoever.  Yet change, discovery, and the replacement of the old with the new are extremely present, particularly over the last couple of hundred years.  In economist Joseph Schumpeter’s words:

 

the history of the productive apparatus of a typical farm, from the beginnings of the rationalization of crop rotation, plowing and fattening to the mechanized thing of today—linking up with elevators and railroads—is a history of revolutions. So is the history of the productive apparatus of the iron and steel industry from the charcoal furnace to our own type of furnace, or the history of the apparatus of power production from the overshot water wheel to the modern power plant, or the history of transportation from the mailcoach to the airplane. The opening up of new markets, foreign or domestic, and the organizational development from the craft shop and factory to such concerns as U.S. Steel illustrate the same process of industrial mutation—if I may use that biological term—that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one.  (Schumpeter 2003, p. 83)

 

The firm’s “grace,” so to speak, the force which drives it to change, comes in the form of economic failure – that is, losses – which causes the firm to re-evaluate its strategy.  In such a case, it becomes literally impossible for the firm to continue with its current routine; it will be required to change its routine to stay in business, because it cannot (or simply will not) continue to operate at a loss for very long.  To use our earlier discussion of the firm’s intentions, the pressures on the firm to deviate from the status quo become strong enough to overwhelm the pressures not to deviate.  This is explained by Nelson and Winter:

 

One important asymmetry between replication and contraction is that while the former is typically an optional response to success, the latter is typically a mandatory response to failure. As usual, the situation is clearest in the case of business firms, though there are analogous problems in other sorts of organizations. If the revenues derived from the sale of the routine's outputs fail to cover the costs of the routine's inputs, then - barring governmental bail-outs, philanthropically inclined investors, and similarly unlikely contingencies - it will ultimately become impossible to acquire the inputs to continue the routine on the existing scale and something will have to happen.

Under this pressure, a business firm may be expected to initiate some sort of search for a new routine that would be viable in the prevailing environment…if the search is successful in the limited sense that the firm begins to attempt to carry out a new routine, then the old routine is no longer the target and has fallen victim to the condition of adversity. The firm itself may live on, at least temporarily.  (Nelson & Winter 1982, p. 122)

 

In effect, failure must completely overturn some routines and, in overturning those routines, in the long run create new ones.  This becomes particularly likely because financial failure is, in reality, usually merely short-term; in the long run, the firm will have to expand once more and, in expanding, will have to find more people who can carry some of the memories of the new segments of the firm, thereby creating a new practical “routine.”

 

The one part of firms’ decision-making which we have not quite covered is what causes these sudden economic failures – that is, why individual firms are made inferior, at least in the short run, to others, and thereby lose their economic success.  If a routine has previously worked, we must ask why that routine might suddenly stop working.  For this source of change, we can look to the existence of the entrepreneur.  For firms do not exist in a vacuum but rather in competition with other firms and, more importantly, with firms that could exist.  A conservative firm can exist over a long period of time, but eventually the innovations it does not make, the mistakes it consistently makes, or its general inefficiency will become significant – usually when there is a new invention, far more effective in production than anything previously available, which the dominant firms do not make use of.  In that case, a new entrepreneur can enter a field, amalgamating routines based not on long-established, sustainable practice but on new, more successful short-run methods.  In this case, the new firm will cause the old firm (or firms) to lose their economic position and thereby change their routines, ultimately with economy-wide effects.  As Schumpeter explains:

 

We have seen that the function of entrepreneurs is to reform or revolutionize the pattern of production by exploiting an invention or, more generally, an untried technological possibility for producing a new commodity or producing an old one in a new way, by opening up a new source of supply of materials or a new outlet for products, by reorganizing an industry and so on. Railroad construction in its earlier stages, electrical power production before the First World War, steam and steel, the motorcar, colonial ventures afford spectacular instances of a large genus which comprises innumerable humbler ones—down to such things as making a success of a particular kind of sausage or toothbrush. This kind of activity is primarily responsible for the recurrent “prosperities” that revolutionize the economic organism and the recurrent “recessions” that are due to the disequilibrating impact of the new products or methods. To undertake such new things is difficult and constitutes a distinct economic function, first, because they lie outside of the routine tasks which everybody understands and, secondly, because the environment resists in many ways that vary, according to social conditions, from simple refusal either to finance or to buy a new thing, to physical attack on the man who tries to produce it. To act with confidence beyond the range of familiar beacons and to overcome that resistance requires aptitudes that are present in only a small fraction of the population and that define the entrepreneurial type as well as the entrepreneurial function. This function does not essentially consist in either inventing anything or otherwise creating the conditions which the enterprise exploits. It consists in getting things done.  (Schumpeter 2003, p. 132)

 

This is to say that an individual shift can reverberate into larger, economy-wide shifts because one change in one area of a routine has ramifications for others.  This is because, as Veblen argues, small changes in routine change the “circumstance which affects the further growth of habits of thought - habitual methods of procedure,” that is, minor changes in routine change the perspective from which firms look at the other parts of their routine.  Therefore, if the price or quality of one sort of product is affected by an invention or innovation, then the effectiveness of other firms’ routines – particularly whether there is a new, more effective strategy – also changes, or (as Veblen said) one innovation “becomes a point of departure for further development of the methods of compassing the ends sought and for the further variation of ends that are sought to be compassed.”  Minor changes in what technology or strategy a firm uses, caused by entrepreneurs, can thus completely overhaul the routines that firms previously used and destabilize everything about the strategies they previously relied upon.

 

Furthermore, while routines are not logically explicable, they can be empirically repeated.  Given that, they can also be imitated – that is, a firm can use what little it knows or has seen of another firm’s routine and, by trial and error, try to slowly build a similar routine within its own enterprise.  Or, since we live in a free economy, it can even hire people away from another, more developed firm and copy the routine that way.  In Nelson and Winter’s words:

 

What distinguishes this situation from replication is the fact that the target routine is not in any substantial sense available as a template. When problems arise in the copy it is not possible to resolve them by closer scrutiny of the original. This implies that the copy is, at best, likely to constitute a substantial mutation of the original embodying different responses to a large number of the specific challenges posed by the overall production problem. However, the imitator is not directly concerned with creating a good likeness, but with achieving an economic success - preferably, an economic success at least equal to that of the original. Differences of detail that are economically of no great consequence are perfectly acceptable.

By this economically relevant criterion the prospects for successful imitation vary dramatically from one situation to another. At one extreme, the production in question may be a novel combination of highly standardized technological elements. If so, close scrutiny of the product itself - "reverse engineering" - may permit the identification of those elements and the nature of their combination, and this may suffice for an economically successful imitation. Indeed, even vague rumors about the nature of the product may suffice, perhaps permitting the copy to hit the market almost as soon as the original. At the other extreme, the target routine may involve so much idiosyncratic and "impacted" tacit knowledge that even successful replication is highly problematic, let alone imitation from a distance.

In the wide range of intermediate cases, the imitator's basic tactic is to follow the example of a replicator wherever possible (and not too expensive), and to fill in the remaining gaps by independent effort. One important application of this tactic is to try to hire away from the imitatee those employees that the imitatee would reasonably want to transfer to a new plant in an attempt to replicate the existing one. Another is to obtain, by whatever means may be available, indirect clues to the nature of the target routine.

An imitator working with an extremely sparse set of clues about the details of the imitatee's performance might as well adopt the more prestigious title of "innovator," since most of the problem is really being solved independently. However, the knowledge that a problem has a solution does provide an incentive for persistence in efforts that might otherwise be abandoned.  (Nelson & Winter 1982, pp. 123-124)

 

Therefore, since a firm can see other, more successful firms and will seek somehow to stem the losses its current routines are causing, firms will often end up imitating new, more successful firms.  In those cases, a successful firm’s particular routines and attitudes may end up extending across the entire industry – routines, it is important to note, which only need to have been helpful in the particular case in which they developed.  An industry, or in some extreme cases an economy, might thus imitate its way into conformity.  Further, in many cases, relative industrial success might continue for one or a few firms over a long period while the rest continue to fail.  So, even if a small number of firms might want to avoid modernization and keep using older technologies, they will eventually be required to modernize or shut down, thereby allowing the new firm, with its new routines, to become dominant.
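A toy simulation can make this failure-driven convergence vivid.  The sketch below (in Python, with wholly invented numbers) reduces each firm’s routine to a single effectiveness score; firms that fall far enough behind the industry leader lose money and are forced to imitate it, but only imperfectly – the “mutation” Nelson and Winter describe – while an occasional entrepreneurial entrant restarts the cascade.  This is an illustration of the mechanism argued for above, not a reconstruction of Nelson and Winter’s own model.

```python
import random

random.seed(1)

# Each firm's routine is summarized by one (hypothetical) effectiveness score.
firms = [random.uniform(0.4, 0.6) for _ in range(10)]

for year in range(30):
    if year % 10 == 5:
        # An entrepreneur enters with a superior, untried routine,
        # undercutting the incumbents and re-triggering imitation.
        firms.append(max(firms) + 0.15)
    best = max(firms)
    for i, score in enumerate(firms):
        if score < best - 0.10:
            # Falling far behind the leader means losses, and losses force
            # change: imitate the leader, imperfectly (the copy "mutates").
            firms[i] = best + random.gauss(0, 0.02)

print(f"spread of routines after 30 years: {max(firms) - min(firms):.3f}")
```

The endpoint is exactly the conformity described above: after each entrepreneurial shock, the whole industry clusters around a near-copy of the leading routine.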

 

By explaining this viewpoint, we now have a clearer understanding of exactly what causes firms, individually or industry-wide, to make particular decisions – simply what works.  They need not understand why a particular routine is successful – indeed, if they knew, they might even oppose that routine.  Yet firms still need to arrive at practical success from that routine.  This is particularly true of their routine for choosing between routines.  Routine “paradigm shifts” – that is, sudden shifts in what sorts of routines are followed industry-wide – can be explained by entrepreneur-influenced recessions, or at least entrepreneur-influenced firm failure, as a result of which underperforming firms copy the decisions of more successful firms.  Consequently, we can see that firms do not make these decisions based on teleological planning (or at least, the basis of that teleological planning is not itself teleological) but rather on trial and error and imitation.  The most important part of this theory is explained by Veblen:

 

The evolutionary point of view, therefore, leaves no place for a formulation of natural laws in terms of definitive normality, whether in economics or in any other branch of inquiry. Neither does it leave room for that other question of normality, What should be the end of the developmental process under discussion?  (Veblen 1898, p. 392)

 

As Veblen states, there is no logical end to our economy, no “proper routines” which we will all end up sharing if enough time passes.  Rather, just like biological evolution, economic decision-making is filled with reversals and arbitrariness, more influenced by caprice and superstition – the most consistently successful bases for firms’ actions – than anything else.  That caprice is at the center, or at least the beginning point, of our modern era of change.  For industries’ decisions involve not just what sort of material to use or how to pay laborers but also what to advertise, and industries include not only television manufacturing but also television programming – that is, the creation of the mass media.  Therefore, that fundamental mass media, that fundamental advertising, is itself created by this caprice, by these short-term routines, imitations, and innovations.

 

As we can see from this argument, at the center of what determines how our culture changes are rather capricious strategies and routines, determined by copying and trial and error rather than direct planning.  While this may seem to make social change inherently capricious, it makes it, if anything, predetermined.  The strategies that win out are always the ones that improve the economic well-being of a particular firm.  Therefore, which advertising strategies are favored is determined by whichever strategies most effectively make profits for the firms using them.  We can therefore understand social change as happening not because of individual people or firms but because of the inherent conditions under which it occurs.  If advertising would be more effective by preferring racial egalitarianism, for instance, then no matter whether individual advertising companies are led by prejudiced people or not, the advertising industry will eventually favor racial egalitarianism, as firms will be required by entrepreneurial pressure to copy the most effective strategies or go out of business.  As a result, we can see that social change is determined not by groups, individuals, or intentions, but by conditions – in particular, by which strategies will deliver the greatest profits to advertising and mass media corporations.

 

Conclusion

 

In the next part of this article, we will discuss how our massive modern cultural shift in particular has occurred and its peculiar characteristics.  Before we do that, however, we should recapitulate our complete theory of social change.  After all, we have largely reverse-engineered social change thus far, starting with its surface-level existence and slowly moving backwards to its first cause, its “unmoved mover” (or at least its “boringly moved mover”) of entrepreneurial discovery.  So, to put our theory into a basic sequence of steps:

 

First, an entrepreneur disrupts the economy with a new invention or innovation, causing previously dominant companies to lose money.
Second, those previously dominant companies, because they are now losing money and need to change their routines, either imitate or are replaced by the new company, creating a general common group of routines, including routines of how companies advertise, across the industry or economy.
Third, those advertisements help to create people’s perceptions of how normal or abnormal different behaviors, attitudes, or beliefs are.
Fourth, because people have a habit of judging others by what is socially acceptable and normal, those beliefs about what is normal influence people’s views of what they individually should do and believe.
Fifth, those people, believing they should act in a certain way or believe a certain thing, come to actually act in that way or believe that thing over time.
Sixth, that which was originally socially abnormal becomes the normal, or vice versa.

 

In this theory of social and cultural change, we have established the existence of a kind of “small-man history,” a view that what changes the world over time are seemingly unimportant individuals whose greatest accomplishment is creating particular habits in an imitable, successful firm in a single industry during an economic downturn, or simply economically outcompeting their contemporary firms.  Those individuals, in setting the norms for firm-wide advertising, affect the social norms of society, and through the social norms, the real norms – the actual actions of people across that society, from politics to the family.  For the correctly positioned entrepreneur, the world presents itself as a slightly modified version of Descartes’ cogito: “I think, therefore it is.”  That much being said, only the identity of the entrepreneur is variable.  The most effective strategies and technologies will, by virtue of economic law, always eventually be applied to each industry.  The particular entrepreneur is merely a tool of this economic law.

 

And therefore, through the influence of mass media and advertisements, whose decisions were determined by small-scale firm-wide decisions, our opinions, habits, and actions have become what they are.  Now, precisely what those are, we will discuss in Part Two of this article.

Part 2:

 

In the previous part of this article, we presented a broad theory of social change.  According to this theory, individual shifts in the market might cause failing firms to rather mindlessly imitate successful firms in many respects, including their marketing preferences.  Those marketing preferences then influence people’s views about what is normal and abnormal, socially affirmed and condemned.  That influence on the attitudes of advertisers and the mass media (or at least the attitudes implicitly or explicitly advocated or otherwise favored by them) thereby changes people’s personal preferences and views.  In this section, we will discuss how this shift has occurred in our modern world.  To do so, we will start by analyzing the shift in economic preferences, trying to find its source, both overall in changing firm preferences and specifically by analyzing the individual events which might have brought about these changes.  From there, we will analyze how those shifts might have caused changes in cultural, social, and religious preferences.  Finally, we will, at least briefly, try to morally appraise our changing society.  We will defer the description of what personal and political responses to that social change we endorse (or at least what intentions should motivate those responses) to Part Three of this article.

 

The fundamental question which motivates our analysis of modern social shifts is why economic equality has failed to keep pace with people’s increasing mental equality.  The differences between people’s mental abilities, both in the literal sense (people’s actual mental differences from one another) and in the practical sense (how important those mental differences are to people’s lives), have diminished, with people becoming increasingly equal.  And yet actual economic equality has failed to follow; if anything, there has been more economic separation between the wealthy and the poor.  This, of course, runs contrary to our traditional understanding of wealth, which holds that relative intelligence and ability are rewarded (that is, meritocracy).  We will argue that this putative diminishing of meritocracy is not what it seems but rather that, as intellectual equality is enhanced, it becomes economically rewarding for firms to take more interest in workers’ work ethic, that is, their willingness to work longer work weeks for relatively low marginal benefit.  At the same time, it also becomes gainful for firms to create, among at least a very large percentage of the population (within, of course, the wealthy nations affected by increasing intellectual equality), a lack of work ethic – a certain disinterest in, or even incapacity for, work, in spite of intellectual equality.  This is because doing so creates a population which spends much of its time looking at, and thus being affected by, different forms of advertisement, allowing advertising firms to improve the effectiveness of their advertising and therefore their importance to other firms.  In such a situation, that advertising is mainly being improved to convince the most successful individuals in the society to purchase products because they are advertised to be, and therefore seemingly are, valuable.  This situation, as we will discuss, has major cultural effects, reducing the importance of traditional forms of gaining information in favor of more manipulable media.  It also has social effects, destroying traditional forms of civil society and communal organizations in favor, again, of manipulable media.  Finally, it has religious effects, replacing older theistic religions with new forms of secular religion.  Altogether, it seems to have the effect of creating an entirely new society.

 

 

The basis of this massive shift, increasing intellectual egalitarianism, otherwise known as the Flynn Effect, is easily recognizable.  For the past century, each decade has seen increasing average IQ, with the average increasing (over the span of 1989-2002) by about 0.36 points per year (Flynn 2012, p. 37).  Most of this growth has been on the lower end of the bell curve (that is, it has been the result of the improvement in intelligence of the least intelligent) due to increased environmental egalitarianism – in particular, changes such as “improvements in health and nutrition, greater environmental stimulation arising from TV, educational toys and games, books, increased leisure among parents and reductions in family size,” access to which has become far more egalitarian in recent years, with far more people likely to have access to nutritious food (the percentage below the poverty line having declined from 22% in 1960 to 11% in 2020) as well as smaller families, the average household size having declined from 3.3 people in 1960 to about 2.5 in 2020 (Teasdale & Owen 1989, p. 261; Lynn & Hampson 1986, p. 31; Krogstad 2015; Shrider et al. 2021, p. 14; U.S. Census Bureau 1962; U.S. Census Bureau 2021).  Both of these numbers display a significant improvement in the quality of life of the worst off (poverty is exclusively a measurement of the worst off, and much of the decline in family size in the United States has been driven particularly by a decline in the size of poor families – indeed, the average size of a wealthy household has actually increased over the past several decades) (Kopf & Purtill 2017).  Both of these trends seem likely to continue into the near future as well; the decline in poverty is unlikely to end unless broad economic trends were to reverse, which has not happened in centuries (outside of short recessions which, at their longest, have lasted around a decade), and the trend in family sizes has, if anything, become even more egalitarian in recent years as wealthy families have more children and poor families have fewer.  It seems that the environmental determinants of intelligence are very rapidly becoming more egalitarian and that, as a result, intelligence is also becoming more egalitarian.
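Since we will be converting IQ scores across norming years several times below, it is worth making the arithmetic explicit.  The following is a minimal sketch assuming the roughly 0.36-points-per-year gain cited above holds constant across the whole period – a simplification, since the true rate varies by decade and by test:

```python
FLYNN_RATE = 0.36   # IQ points gained per year (Flynn's 1989-2002 estimate)

def modern_equivalent(score, norm_year, target_year=2020):
    """Re-express a score from an old test norm against newer norms.

    Each test is re-centered so the contemporary mean is 100, but the
    underlying population mean rises ~FLYNN_RATE points per year; a fixed
    score on old norms therefore sits ever lower against the current
    population.
    """
    return score - FLYNN_RATE * (target_year - norm_year)

# Example: the 1967 scientist threshold of IQ 120 discussed below
# lands near today's average.
print(round(modern_equivalent(120, 1967)))   # -> 101
```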

 

Further, broad societal intellectual advancement inherently leads to increased egalitarianism because, at a certain point, intelligence diminishes in value.  It has been repeatedly shown that, after reaching a certain threshold, intelligence stops having much of an effect on one’s career success.  In one study, psychometricians John Gibson and Phyllis Light, upon studying the IQs of a large number of university scientists, observed that “there may be a threshold level of IQ below which an individual is unlikely to become a scientist,” yet “above a certain threshold level (in the region of IQ 110) a conventional intelligence test measures qualities unrelated to the success of an individual in a scientific career.”  Where exactly that threshold lay depended on the job, with “biochemists, chemists, and mathematicians all ha[ving] IQs above 120, while some scientists in the other groups ha[d] IQs below 120” (Gibson & Light 1967, p. 443).  (In modern terms, an IQ of 120 in 1967 terms would be about equivalent to an IQ of 100 today, assuming an annual increase of about 0.4 IQ points per year.)  A similar phenomenon was observed by psychologist Donald W. MacKinnon, who looked at the average IQ of creative people (IQ observed through IQ tests and creativity observed “by the broad problem posed by the assessment situation itself, by problem-solving experiments, by tests designed to discover what a person does not know or is unable to reveal about himself, by tests and questionnaires that permit a person to manifest various aspects of his personality and to express his attitudes, interests, and values, by searching interviews”) (MacKinnon 1966, pp. 152-153).  What MacKinnon found was that, rather than being either unrelated to or directly correlated with IQ, significant creativity tended to require a certain threshold of IQ but, beyond that threshold, seemed unrelated to IQ.  In MacKinnon’s words:

 

Yet intelligence alone does not guarantee creativity. On a difficult, high-level test of the more general aspects of intelligence, creative persons score well above average, but their individual scores range widely, and in several of the creative groups the correlation of intelligence as measured by this test and creativity as rated by the experts is essentially zero.

Certainly this does not mean that over the whole range of creative endeavor there is no relation between general intelligence and creativity. No feebleminded persons appeared in any of the creative groups. Clearly a certain degree of intelligence, and in general a rather high degree, is required for creativity, but above that point the degree of intelligence does not seem to determine the level of one's creativeness. In some fields of endeavor, mathematics and theoretical physics for example, the requisite intelligence for highly creative achievement is obviously high. But it does not follow that…a person be outstanding in intelligence to be recognized as highly creative, at least as intelligence is measured by intelligence tests.  (MacKinnon 1966, p. 153)

 

MacKinnon’s findings about IQ’s relationship to creativity are much like Gibson and Light’s findings about IQ’s relationship to the career outcomes of university scientists – while a certain threshold must be reached, upon reaching that threshold, IQ stops mattering.  While these two examples cannot be assumed to hold true for all careers, both demonstrate that there are diminishing returns to increased intellectual ability – at a certain level, it seems that many of the most significant differences between the less and more intelligent (the ability to deal with complex problems, as shown by Gibson and Light, or the ability to conceive creative solutions to problems, as shown by MacKinnon) diminish in practical significance.
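One compact way to state the threshold pattern both studies report – a schematic formalization of our own, not a formula found in Gibson and Light or MacKinnon – is to let $q$ be measured IQ, $S$ a measure of career or creative success, and $\theta$ a field-dependent threshold:

$$
\mathbb{E}[S \mid q] \;=\;
\begin{cases}
g(q), & q < \theta, \quad g \text{ increasing},\\
g(\theta), & q \geq \theta \quad \text{(flat: additional IQ no longer matters)}.
\end{cases}
$$

On this picture, as the Flynn effect shifts the distribution of $q$ rightward, the share of the population above $\theta$ grows, and the population-wide correlation between IQ and success must mechanically fall – which is precisely the argument pursued below.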

 

This observation regarding IQ’s relationship to success, particularly in light of the very achievable numbers in Gibson and Light’s study (which place the threshold for becoming a scientist at, in modern terms, roughly the level of the average American), leads us to believe that IQ should not be anywhere near as strong a determinant of success as it once was.  After all, if success in most jobs is correlated with IQ only insofar as one reaches a certain real threshold, a tangible increase in IQ (particularly one as significant as the one America has experienced for over a century) should mean that obtaining such jobs becomes possible for a much larger percentage of the population.  At least one requirement of career success – the intellectual requirement – should have disappeared, or at least decreased in significance.  As a result, increased economic egalitarianism would seemingly follow from the increased intellectual egalitarianism which we have experienced.  After all, there should seemingly be a decrease in the degree of economic distinction given to the most successful, due to the increased percentage of the population which can be trusted with most jobs.  This argument becomes particularly potent in light of analysis by psychologist Richard Herrnstein and social scientist Charles Murray, who found that IQ was heavily correlated with poverty, unemployment, collegiate education, and the marriage rate (Herrnstein & Murray 1994, p. 135, p. 152, p. 159, p. 172).  Further, this argument was made using (now) quite old data from the 1970s-80s, about the same period as our other studies of intelligence.  If, indeed, intellectual ability was the basis of differences in relative economic success, then we should see a significant increase in the egalitarianism of the many measured correlates of economic success – educational success, unemployment, wealth, and marriage.

 

Yet, in spite of this seeming implication, we have seen no such increase in egalitarianism in many of these areas.  Instead, we have seen a massive increase in inequalities between the worse and better off.  Educationally, while we have seen increasing egalitarianism in high school graduation, that egalitarianism has failed to translate into collegiate egalitarianism.  The percentage of the most recent 4-year high school cohort that graduated increased from 79% to 86% between 2011 and 2019 (National Center for Education Statistics 2022).  During the same period, however, matriculation of high school students into colleges declined by about 10% (Hanson 2022).  Economically, the labor force participation rate has declined significantly since the 1990s, decreasing from 66.4% in 1992 to 61.6% in 2022.  Much of this decline has occurred among the young, with the labor force participation rate of those 16 to 24 years old declining from 66.1% in 1992 to 49.6% in 2022 (Toossi 2013).  Part of this, although not very much, can be explained by the increasing percentage of young people attending high school; yet, in spite of mostly stagnant college attendance since the 1990s (the percentage of high school graduates matriculating into college went from 78% of the graduating class in 1995 to 73% in 2020), there has been a similarly large decline in labor force participation among those 20 to 24, from 77.1% in 1992 to 67.3% in 2022 (Hanson 2022; Toossi 2013).  Wealth-wise, economic inequality has only increased over the past 40 years, with economist Thomas Piketty writing in 2013 that, over what was then the past 30 years, “the structural rate of growth of the largest fortunes seems…to be greater than the average growth of the average fortune (roughly at least twice as great).  If we look at the evolution of the shares of the various millionths of large fortunes in global wealth, we find increases by more than a factor of 3 in less than thirty years,” and this shift does not seem to have ended over the past ten years, with the 90/10 income ratio (the ratio of income at the 90th percentile to that at the 10th) increasing from 11.7 in 2010 to 12.6 in 2018 (Piketty 2014, p. 435; Horowitz et al. 2020).  The marriage rate has also, counterintuitively, declined, with the percentage of American adults (25-54 year olds) currently married declining from 67% to 53% between 1990 and 2019 (Fry & Parker 2021).  At the same time, divorce rates have also significantly declined, from about 20 divorces per thousand married couples in 1990 to only about 15 per thousand in 2019 (Clarke 1995; Wang 2020).  This implies that while marriage rates continue to decline, those who do marry tend to have more stable marriages than they used to – that is, there is the same rise in the inequality of family stability as there is in wealth, education, and employment.  That inequality across most metrics has risen amidst a decline in intellectual inequality is highly counterintuitive.

 

Simultaneously with this odd and conspicuous increase in economic inequality, there has been a decrease in the degree to which economic success correlates with intellectual ability.  Indeed, it seems that intellectual differences have an increasingly minor effect on one’s educational and economic outcomes.  Educationally, for instance, Herrnstein and Murray measured the average IQ of students who went to top Ivy League universities in 1990 as being somewhere around 2.7 standard deviations above the mean, or about 140 (Herrnstein & Murray 1994, p. 46).  By 2003 (following a significant revamping of the SAT, the main test used in college applications, in 1995), a study by psychologists Shelley Carson, Jordan Peterson, and Daniel Higgins of Harvard students showed a significantly lower average IQ of 128.  Moreover, because the study used an IQ test which was, by then, several decades old (the 1980 Wechsler test), the modern-equivalent average was likely several points lower still, closer to 120 than 125: as discussed earlier, average IQ increased by about 0.36 points per year over the similar period of 1989-2002, so the population mean in 2000, expressed in 1980 terms, was likely somewhere around 107, making a measured average of 128 equivalent to only about 120 on modern norms (Carson et al. 2003, p. 501).  (Of course, it is possible that Harvard simply has a significantly lower average IQ than the other Ivy League universities, but this seems unlikely; indeed, Herrnstein and Murray used Harvard as an example of the increasing average IQ of top universities over the 1930-1990 period (Herrnstein & Murray 1994, p. 38).)  There is no good reason to believe that this trend has reversed over the past several decades, as test submission has been made optional at many top schools and an increasingly significant percentage of the smartest have allegedly been kept out of many top schools due to their (Asian) race (Einhorn 2022; Gerson 2018).  Wealth-wise, we have seen a similar trend; whereas Herrnstein and Murray found significant correlations between lower IQ and poverty and between higher IQ and wealth, more recent research has stopped producing that clear result.  Consider social scientist Jay L. Zagorsky’s 2007 study of “young U.S. baby boomers” (about a generation or two younger than most of Herrnstein and Murray’s studied populations), which found that income was only slightly correlated with IQ: “the average income difference between a person with an IQ score in the normal range (100) and someone in the top 2% of society (130) is currently between $6,000 and $18,500 per year,” or somewhere between about one-eighth and one-third of the U.S. median household income at the time (Zagorsky 2007, p. 489, p. 499; Bernstein 2008).  Total wealth and IQ, on the other hand, had no correlation whatsoever, and, most directly contrary to Herrnstein and Murray, financial distress was not caused exclusively by below-average IQ (although that did increase the likelihood of financial distress) but also by IQ significantly above average, meaning that the person of average IQ was least likely to face significant financial distress (Zagorsky 2007, p. 500).  What this shows is that, over the past couple of decades, as differences between people’s intelligence have eroded, those minor differences have become less important in determining a given person’s income and not important at all in determining their wealth.

Further, in some ways (those related to financial distress), it is not people with above-average intelligence who are most successful but rather those with average, or marginally above-average, intelligence – that is, yesteryear’s significantly above-average.  Indeed, the only area which Herrnstein and Murray identified as correlating with increased intelligence that has continued to correlate in the same manner is marital stability: a 2006 study by psychologist Paul Holley et al. showed that divorce indeed continues to be correlated with lower intelligence (Dronkers 2015).  However, this correlation only further confuses the issue.  As Zagorsky has found, divorce decreases the wealth of families in the long run by an average of 77% (Zagorsky 2005, p. 421).  That makes our earlier study about wealth even more indicative of a shift – if the less intelligent are still as successful as the more intelligent in spite of their significantly higher divorce rates, then intelligence is even less of a direct cause of economic success, or at least of success at one’s job.
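To make the renorming arithmetic behind the Harvard figure explicit (again treating the roughly 0.36-points-per-year rate as constant, which is only an approximation):

```python
FLYNN_RATE = 0.36

# By 2000, the population mean expressed on the 1980 Wechsler norms had
# drifted upward to roughly:
mean_on_1980_norms = 100 + FLYNN_RATE * (2000 - 1980)     # ~107

# A Harvard average of 128 on those old norms therefore sits only about
# 21 points above the contemporary mean -- i.e., about 120 on modern
# norms, versus the ~140 Herrnstein and Murray reported for 1990.
harvard_modern = 100 + (128 - mean_on_1980_norms)
print(round(mean_on_1980_norms), round(harvard_modern))   # -> 107 121
```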

 

So, then, we can say with some certainty that neither educational nor vocational success is as closely correlated with intelligence as it was thirty or forty years ago.  The question then becomes what now stratifies people into their own classes – that is, what causes people to be educationally and economically successful or unsuccessful.  The first thing to note is how closely these two areas of success are correlated with one another.  A twenty-five to thirty-four year-old who failed to graduate high school made about $25,000 per year on average in 2015, significantly lower than the average income of someone in the same age group who completed high school ($30,500 a year), and exactly half the average income of someone who completed an undergraduate degree (McFarland et al. 2017).  Further, the schools whose graduates earn the highest average mid-career salaries (graduates of each make around $100,000 a year, double the salary of the average college graduate) are all highly selective, with none of the top ten having an acceptance rate greater than the national average of 70% (Eum 2016; Kowarski 2021).  What is true of one of the two areas, therefore, is also true of the other: what helps someone graduate from university is likely to also help that person in the job market, and, considering the relative economic success which graduates of selective universities (or universities generally) seem to experience later in life, what helps one succeed in an educational environment seems to help that person later succeed in an economic environment.  As we have discussed, neither is related particularly closely to intelligence.  Indeed, success at top universities seems to have very little correlation with one’s scores on a variety of intelligence tests, such as the SAT (for those, of course, above our earlier-mentioned threshold), as has been displayed by the recent experiment in test optionality.  Yale University, in fact, has concluded that SAT scores have little correlation with academic success at its own (highly selective) college, whether in predicting graduation or college GPA (grade point average); high school GPA was a much better predictor (Syverson 2018, pp. 46-48).  Therefore, we can fairly say that the non-intelligence factors correlating with high school GPA are highly correlated with collegiate success, as well as with economic success.

 

Those factors correlating with educational success at the high school level are less related to IQ than to personality.  In a study by economists Lex Borghans et al., personality traits such as “self-esteem and locus of control” had a much closer correlation to one’s success at the high school level than one’s particular IQ (Borghans 2016, pp. 2-3).  That correlation continues into adulthood: wages are more closely correlated with those same personality factors, as well as with measures of “antisocial behavior, and neuroticism,” which have significantly greater predictive power in relation to wages than a given adult’s particular IQ (Borghans 2016, pp. 3-4).  These traits are all directly related to one’s work ethic.  (Specifically, self-esteem supplies motivation through one’s belief in what one can accomplish; an internal locus of control is the belief that outcomes depend on one’s own effort rather than on external forces; and neuroticism and antisocial behavior are traits closely related to mental illness.)  These traits fundamentally converge on one’s ability to do a large amount of work effectively – that is, one’s work ethic.  Indeed, this trait is clearly much more present among successful individuals than it was until very recently.  Over the last forty years, among the wealthy, the average number of hours worked in a given week has, counterintuitively, climbed significantly.  Between 1979 and 2006, the percentage of the top quintile who worked over 50 hours a week increased by 12%, with the higher-paid making up the bulk of the increase in the average American work week over that period, and more recent (2012) data has shown that American high earners tend to work about 60-80 hours a week, an extraordinarily high number of weekly work hours (Kuhn & Lozano 2005, pp. 318-319; Koch 2013).  It seems that there has been a pivot in the determinants of educational and economic success from the metric of IQ and intelligence towards the metric of personality and therefore work.
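The underlying claim is about relative predictive power, which is easy to illustrate.  The sketch below uses synthetic, made-up data – the coefficients are our own, chosen only so that personality drives wages more than IQ does, mirroring the direction of Borghans et al.’s finding – to show how one would compare the variance in wages explained by each trait:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Synthetic population, constructed (by assumption) so that personality
# matters more for wages than IQ does.
iq          = rng.normal(100, 15, n)
personality = rng.normal(0, 1, n)        # e.g., a locus-of-control score
wage = (30_000
        + 200 * (iq - 100)               # weak IQ effect (invented)
        + 8_000 * personality            # strong personality effect (invented)
        + rng.normal(0, 10_000, n))      # everything else

def r_squared(x, y):
    """Share of variance in y explained by a one-variable linear fit on x."""
    return np.corrcoef(x, y)[0, 1] ** 2

print(f"R^2 of wages on IQ alone:          {r_squared(iq, wage):.2f}")
print(f"R^2 of wages on personality alone: {r_squared(personality, wage):.2f}")
```

In Borghans et al.’s actual data the gap is, of course, an empirical finding rather than a construction; the point of the sketch is only to show what “greater predictive power” means operationally.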

 

At the same time, there has been increasing inequality both in neuroticism and in willingness to work long hours, with a growing percentage of the population affected either by mental illness or by a lack of motivation to work.  The collapsing labor force participation rate which we earlier mentioned has been caused largely by a decline in the willingness of many on the lower end of the income spectrum to work; among men between the ages of 25 and 64, for instance, between 1979 and 2000, the percentage of college graduates employed declined about 2.5%, whereas the percentage of those with less than a high school degree employed declined 3.9%, and the percentage of those who graduated high school but did not go on to college declined 6.1% (Kuhn & Lozano 2005, p. 321).  From 2000 to 2015, the percentage of young men without a college degree who are employed declined further, by about 20% (Thompson 2016).  Since 2015, amidst massive (reportedly permanent) resignations among the young following the Covid-19 pandemic, it is probable that this decline has continued (Bruner 2021).  This shift goes beyond mere willingness to work at all: between 1979 and 2006, the percentage of the poor who worked 50 hours or more a week declined by 8%, such that it became more likely for someone in the top quintile to work over 50 hours a week than for someone in the lowest quintile (Kuhn & Lozano 2005, pp. 318-319).  At the same time, mental illness has surged in recent years.  For instance, only 45% of young people report “good or excellent” mental health, making them the only generation in which a majority does not rate its own mental health as good or excellent (The Annie E. Casey Foundation 2021).  This growing neuroticism is not only self-reported, either; the suicide rate among youths increased by 30% from 2000 to 2017, reaching its highest point over that period (Miron 2019, p. 2363).  Further, this trend of increasing mental illness has continued into recent years, as, amidst the Covid-19 pandemic, there has been a significant increase in youth neuroticism (although not necessarily youth suicidality) (Chadi 2022).

 

Having looked at these different statistics, we can see that they paint a very odd picture.  As we just noted, we are seeing a decline in the importance of intellectual inequalities as those inequalities themselves disappear.  At the same time, we are seeing an increase in the importance of inequalities in work ethic (or at least desire to work long hours) and in neuroticism; we therefore see systemic shifts incentivizing the latter traits over the former.  This much is not particularly odd.  The odd part is that, in both of the latter areas, the increasing inequalities have been created not by a rise in the abilities of the best off but by a decline in the abilities of the worst off – by growing mental illness and declining work ethic – in spite of the increasing economic incentives for improvement in work ethic and mental health.  This seems counterintuitive: if firms prefer workers who are more mentally healthy or have greater work ethic, and if those workers are rewarded more each year for those traits than they were the previous year, then there should be more of those workers every year.  The only way for this not to be the case is if there is some kind of counter-pressure causing these traits to decline in the population.  This counter-pressure, we will argue, exists.  Further, we will argue that this counter-pressure is a major form of social change currently underway in American society.

 

Social change, as we earlier defined it (a change in general, nationwide opinions and actions caused by advertiser (and entertainer) preferences), seems inextricably intertwined with the decline in work ethic and the increase in neuroticism in America.  Much of the time of those whose work ethic has declined the most in recent years – young, poorly educated men – is spent on products of the entertainment industry, in particular video games, the playing of which makes up 75% of those young, poor men’s free time (Thompson 2016).  These non-working young men are able to afford this lifestyle because access to entertainment has become far more universal as its cost has decreased significantly in recent years.  Indeed, televisions, computer software, and cell phone services have been among the few products which have not only not increased in price with inflation but have significantly declined in price over the past twenty years (Miller & Golle).  This decrease in the expense of entertainment cannot be separated from these young men’s unwillingness to work; they report significantly more happiness than their working peers, likely because of their access to cheap entertainment (Thompson 2016).

Further, the widespread mental illness among young people is closely tied to the ongoing forms of social change, as well as to the entertainment and advertisements young people regularly consume.  When it comes to social change, the growth in mental illness cannot be separated from the accompanying change in attitudes towards cannabis, an almost prototypical example of social change.  Over the last fifteen years, perceptions of the effects of cannabis have changed significantly, with the percentage who believe it causes significant negative effects declining from 49.5% in 2007 to 33.3% in 2014 (Compton et al. 2016, p. 956).  Amidst this shift in attitudes has come a concomitant shift in usage, with the percentage of 19-30 year-olds who had taken cannabis in the past 12 months increasing from 28% in 2006 to 42% in 2020 (Schulenberg et al. 2021, p. 104).  The effect of this shift in usage, informed by the shift in attitudes, on mental health is difficult to overstate:  regular use of high-potency cannabis increases the likelihood of developing psychosis fourfold, and irregular use or the use of lower-potency cannabis causes a more moderate but still significant increase in that likelihood (Di Forti et al. 2016, pp. 430-431).  Further, much of young people’s mental illness comes from stress caused by media to which they are exposed.  Three-quarters of young people state that school shootings (something by which they most likely have not been directly affected) are a significant cause of their stress, and 57% say the same of deportations of illegal immigrants (unlikely to affect them personally, considering only about 7% of young people are related to illegal immigrants (Passel & Cohn 2016)) – both significantly greater percentages than for issues that affect people directly, such as sexual harassment (53%), bullying (35%), personal debt (33%), personal housing instability (31%), or personal hunger (28%) (Bethune 2019).  Considering that most people’s interaction with the two most cited causes of stress is mediated through the news or other media, our understanding of mental illness cannot be separated from our understanding of media-created changes in behavior.  The rise in mental illness is, in short, clearly a form of social change.

 

As we have shown, both the rising mental illness and the declining motivation to work long hours are the results of social change.  It is important, before we discuss why this social change might have occurred, to note that those most affected by it tend to be the already-poor.  We have previously shown that the already-poor are the most affected by the recent society-wide decrease in workforce motivation (the greatest losses in workforce participation have largely been among those in the lower quintiles); the poor are also the most affected by the rise of mental illness, which has risen most significantly among the poor and affects the born-poor disproportionately from the time they are just 4 years old (Knifton & Inglis 2020, p. 193).  Further, the rise in mental illness and the decline in workforce motivation among the poor reinforce each other, as the two traits have a close relationship (work tends to cause men, as well as their significant others, to feel satisfaction with their lives on the one hand, and mental illness tends to damage people’s work prospects on the other) (Schröder 2020, p. 324, p. 327; Baldwin & Marcus 2007, pp. 490-493).  Therefore, we can fairly say that this social change mainly harms already-poor families, thereby improving the relative prospects of already-rich families.  This point is important, as it shows that inequalities in the new sense are sustainable in the same way that the old inequalities were, those being based on intelligence and relative education, traits which are highly heritable (although perhaps not genetic, considering the low rate of heritability in youth) (Plomin & Deary 2015, p. 99).  This sustainability of social class in the face of the social change we have discussed is why, in spite of significant changes in what is incentivized, no new class of nouveaux riches is replacing any old class of the rich, and old economic inequalities are not being overturned.  Instead, it seems that under the new system, rather than a complex matrix sorting different jobs to different people based on their relative intelligence, we are seeing the separation of people into the broadly rich and the broadly poor – which helps to explain why we have seen a decline in the percentage of the population in middle-class households.  We see instead the growth of both the poor (from 25% in 1971 to 29% in 2021) and the rich (from 14% in 1971 to 21% in 2021), implying that the middle-class share fell from about 61% to about 50% over those fifty years (Kochhar & Sechopoulos 2022).

 

It is also important to note that, if our concept of social change is right and advertisers and media producers do drive social change, those advertisers and media producers do not seem to be making a significant mistake.  There has been significant growth in the advertising industry, with spending increasing from $149 billion in 2007 to $284.3 billion in 2021 (outpacing inflation over that period by $89.6 billion), while the American television and movie industry has stagnated, with American consumer spending on entertainment products increasing from $320.4 billion in 2007 to $328.2 billion in 2021 (which, while under-pacing inflation by $75 billion, represents stagnation rather than collapse) (TNS 2008; Adgate 2021; U.S. Bureau of Labor Statistics 2010; MPA 2022).  In the video game industry, another increasingly major form of media in America, total consumer spending has increased from $17.9 billion in 2010 to $60.4 billion in 2021 (outpacing inflation by $34.8 billion) (Thorsen 2008; Byers 2022).  In all three cases, significant economic failure has not occurred, and in the advertising and video game industries, significant economic growth has been the main theme of the last fifteen years.  It therefore seems unlikely that firms in either field are causing the current social change merely by accident, or as a result of a random, unimportant trait from one firm being copied by others.  Rather, it seems likely that the strategies behind the recent shift in these industries, particularly advertising, are economically lucrative – that is, that the recent social change in people’s traits (away from work and towards mental illness) has some kind of positive effect, at least for advertisers, and potentially for media creators as well.  Given this point, as well as our previous observation about the stratification of two separate classes following recent social change, we can now formulate a hypothesis about the purpose, although (in our opinion) not the intention, of the firms promoting the recent social change.
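
To make the “outpacing inflation” comparisons above concrete, here is a minimal sketch of the arithmetic involved.  The CPI deflator of roughly 1.31 for 2007 to 2021 is our own assumption, not a figure from the cited sources; with it, the advertising comparison roughly reproduces the $89.6 billion gap cited above (the other comparisons evidently rest on slightly different deflators).

```python
# Sketch of the "outpacing inflation" arithmetic used above. The deflator is
# our own assumed CPI factor for converting 2007 dollars to 2021 dollars;
# the spending figures are those cited in the text.

CPI_FACTOR_2007_TO_2021 = 1.306  # assumed, approximate

ad_spend_2007 = 149.0   # $ billions, nominal, 2007
ad_spend_2021 = 284.3   # $ billions, nominal, 2021

# What 2007 spending would be in 2021 dollars, had it merely kept up with inflation:
inflation_matched = ad_spend_2007 * CPI_FACTOR_2007_TO_2021

real_gap = ad_spend_2021 - inflation_matched
print(round(real_gap, 1))  # ~89.7, close to the $89.6 billion cited
```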

 

Our argument about what is causing this shift starts with the fact that, even as the average intellectual capabilities of Americans have increased along with our average standard of living, the need for American labor in traditional, blue-collar jobs such as manufacturing (outside of limited industries such as semiconductors and other electronics) has declined (Houseman 2014, p. 7).  This has occurred for a variety of reasons, above all globalization:  American compensation expectations in many sectors seem unrealistic relative to those in much of the rest of the world (the minimum wage in China, for instance, is about $2 per hour, roughly a quarter of the American minimum wage of $7.25 per hour) (Norden & Elone 2021; Konish 2022).  America’s comparative disadvantage in manufacturing work means that, for most forms of low-skill work, American workers are not preferred over foreign workers.  This is not to say that there is a lack of workforce opportunities, however; reports of labor shortages point to a more indirect cause of declining labor force participation than a mere lack of available jobs (Segal 2022).  It is simply to say that a higher American labor force participation rate is not a vital interest for most companies, and most of the companies for whom American labor participation is vital are service industries (such as finance or law) which employ those in the top quintile.  The decreasing importance of American manufacturing to corporations means that decisions with concomitant negative effects on American labor force participation are less likely to dissuade firms from making those decisions.

 

That decline in the importance of American manufacturing to corporations, combined with our earlier description of the power of advertising over social change and with increasing intellectual egalitarianism, is extremely important in explaining our own odd social changes.  Although intellectual egalitarianism has likely made Americans no better on average at traditional blue-collar work such as manufacturing (since, according to our earlier analysis, people tend to plateau in their ability to excel at any given job at a certain point, and that point is probably reached far earlier in manufacturing than in physics), it has increased the value of low-skill Americans to corporations by making them more similar in entertainment preferences to rich Americans.  The correlation between IQ and entertainment preferences can be demonstrated, at least in music:  both people’s personal enjoyment of music and the way in which they think about music while listening to it are highly correlated with intelligence (Chamorro-Premuzic & Furnham 2007, pp. 181-182).  It seems highly probable that this correlation also extends to which forms of entertainment (and even advertising) appeal to people of different IQs.  (This is to say, the sort of advertisement or entertainment which someone finds compelling is probably correlated with their intelligence.)  If advertising firms work according to the evolutionary method we discussed in the first section, more experience (which, in advertising, translates to more metrics that can be used to better understand the relative success or failure of different advertisements or entertainment) makes each firm more effective at advertising in the future – firms can see which policies work and which do not, and can change policies based on that experience (repeated failure in particular helping the refinement of policy).  If, indeed, having more experience with a certain sort of person helps with advertising to that sort of person, then even if an individual person is not themselves generating revenue (or at least significant revenue) for the advertised product or the entertainment firm, merely by engaging with the advertisement at all, an individual watcher helps the firm improve its methods of advertisement.
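
As an illustration of this metrics-driven, trial-and-error refinement, consider the following toy sketch.  It is entirely our own invention (the ad variants, response rates, and exploration rate are all hypothetical), but it shows the basic logic by which every impression, converting or not, sharpens a firm’s estimates of what works:

```python
import random

# Toy sketch of the evolutionary refinement described above: a firm runs
# several ad variants, keeps per-variant success estimates, and gradually
# concentrates impressions on whatever works. All numbers are invented.

TRUE_APPEAL = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}  # hidden response rates

shows = {v: 0 for v in TRUE_APPEAL}      # impressions served per variant
successes = {v: 0 for v in TRUE_APPEAL}  # measured responses per variant

def estimated_rate(variant):
    return successes[variant] / shows[variant] if shows[variant] else 0.0

for impression in range(100_000):
    # Mostly exploit the best-looking variant, occasionally explore the others;
    # every single view feeds the estimates, whether or not it "converts".
    if random.random() < 0.1:
        variant = random.choice(list(TRUE_APPEAL))
    else:
        variant = max(TRUE_APPEAL, key=estimated_rate)
    shows[variant] += 1
    if random.random() < TRUE_APPEAL[variant]:
        successes[variant] += 1

for v in TRUE_APPEAL:
    print(v, shows[v], round(estimated_rate(v), 4))
```

Note that even impressions served to viewers who never respond are what make the denominators, and thus the estimates, reliable; this is the sense in which a non-purchasing watcher still improves the firm’s method.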

 

The value of those who consume advertising is further increased by the power of advertising to create cultural norms.  If an advertiser can make the desire for something a broad cultural preference, they can thereby make it a preference for individual members of a society.  We have, in the first section, described how advertisements can change people’s values on a general level, making them more or less likely to hold certain views about the world, including views about products, based on their perception of broader opinion about a given issue.  This shift in individual attitudes can thereby also shift broad societal attitudes in fact.  A shift in societal attitudes in fact can, given people’s preference for imitating societal preferences (as set out in Section 1), cause the wealthy, even if they have not individually seen certain advertisements or entertainment, to change their views to fall in line with those advertisements and entertainment.  That cultural perception of a product as having value might be, in many cases, the very purpose of the product’s advertisement, rather than directly convincing individuals to purchase it.

 

Thus, we can conclude that advertising to a larger population will usually grant advertisers and entertainers more of an effect on the general attitudes of a culture, through more avenues, than an advertisement targeted at a smaller population.  It is therefore good for advertisers to be able to advertise to the largest number of people, so long as doing so does not in some other way diminish the size of a firm’s market (for example, if advertising a product for the wealthy somehow caused that population to become less wealthy).  Historically speaking, this had to be done indirectly.  While the non-wealthy population was large, it was, as we have previously discussed, without proper education and thus not similar in intellectual ability to the wealthy.  Therefore, advertising directly to the non-rich while expecting the rich to purchase the product was a poor strategy.  Instead, the way that advertisers and mass media marketers created social change was by creating a certain ethic among the people (or at least by turning that ethic to their purposes):  the ethic of middle-class Capitalism.  This ethic declared the equivalence of wealth and virtue.  In sociologist Max Weber’s words,

 

In fact, the summum bonum of this ethic, the earning of more and more money, combined with the strict avoidance of all spontaneous enjoyment of life, is above all completely devoid of any eudæmonistic, not to say hedonistic, admixture. It is thought of so purely as an end in itself, that from the point of view of the happiness of, or utility to, the single individual, it appears entirely transcendental and absolutely irrational. Man is dominated by the making of money, by acquisition as the ultimate purpose of his life. Economic acquisition is no longer subordinated to man as the means for the satisfaction of his material needs. This reversal of what we should call the natural relationship, so irrational from a naïve point of view, is evidently as definitely a leading principle of capitalism as it is foreign to all peoples not under capitalistic influence. At the same time it expresses a type of feeling which is closely connected with certain religious ideas. If we thus ask, why should “money be made out of men”, Benjamin Franklin himself, although he was a colourless deist, answers in his autobiography with a quotation from the Bible, which his strict Calvinistic father drummed into him again and again in his youth: “Seest thou a man diligent in his business? He shall stand before kings” (Prov. xxii. 29). The earning of money within the modern economic order is, so long as it is done legally, the result and the expression of virtue and proficiency in a calling; and this virtue and proficiency are, as it is now not difficult to see, the real Alpha and Omega of Franklin’s ethic, as expressed in the passages we have quoted, as well as in all his works without exception.

And in truth this peculiar idea, so familiar to us to-day, but in reality so little a matter of course, of one’s duty in a calling, is what is most characteristic of the social ethic of capitalistic culture, and is in a sense the fundamental basis of it. It is an obligation which the individual is supposed to feel and does feel towards the content of his professional activity, no matter in what it consists, in particular no matter whether it appears on the surface as a utilization of his personal powers, or only of his material possessions (as capital).  (Weber 2001, pp. 18-19)

 

This ethic declares that wealth, and therefore expressions of wealth, are in themselves expressions of virtue.  The belief that wealth correlates with virtue in turn leads people to want lives more similar to those of the rich, whom they believe to possess virtue, and that desire to imitate the lives of the wealthy causes the non-rich to hope for a greater level of wealth.  The ethic extends beyond wealth itself to the lives of the wealthy – the poor do not simply want greater wealth but also want to live lives more similar to those of the wealthy, whom they view as the paragons of proper living.  In de Tocqueville’s words, observing the attitude of non-wealthy 19th century Americans,

 

I did not encounter a citizen in America so poor that he did not cast a glance of hope and longing on the enjoyments of the rich and whose imagination was not seized in advance by the goods that fate was obstinately refusing him.  (Tocqueville 2000, p. 507)

 

In practice, this preference for the lives of the rich manifests itself in the poorer attempting to copy what they view as the traits and attitudes of the wealthier.  Veblen observed this mimicking of the attitudes of the wealthy in Capitalist society:

 

The leisure class stands at the head of the social structure in point of reputability; and its manner of life and its standards of worth therefore afford the norm of reputability for the community. The observance of these standards, in some degree of approximation, becomes incumbent upon all classes lower in the scale. In modern civilised communities the lines of demarcation between social classes have grown vague and transient, and wherever this happens the norm of reputability imposed by the upper class extends its coercive influence with but slight hindrance down through the social structure to the lowest strata. The result is that the members of each stratum accept as their ideal of decency the scheme of life in vogue in the next higher stratum, and bend their energies to live up to that ideal. On pain of forfeiting their good name and their self-respect in case of failure, they must conform to the accepted code, at least in appearance.  (Veblen 2007, p. 59)

 

Of course, the purpose of this ethic from the perspective of an advertiser (or at least, what advertisers have to gain from the creation of this ethic) is the ability to control not only what the masses consider the desires of the wealthy but also the actual desires of the wealthy.  Indeed, in a traditional Capitalist country, the poorer are not the only people who believe in Weber’s Capitalist ethic; so do the rich.  As de Tocqueville describes:

 

On the other hand, I never perceived that high-minded disdain for material wellbeing among the rich of the United States that is sometimes shown even within the most opulent and most dissolute aristocracies.

Most of these rich have been poor; they have felt the sting of need; they have long combated adverse fortune, and, now that victory is gained, the passions that accompanied the struggle survive it; they stand as if intoxicated in the midst of the little enjoyments that they have pursued for forty years.

It is not that in the United States as elsewhere one does not encounter a great enough number of the rich who, holding their goods by inheritance, possess effortlessly an opulence that they have not acquired. But even they do not show themselves less attached to the enjoyments of material life. Love of well-being has become the national and dominant taste; the great current of human passions bears from this direction; it carries everything along in its course.  (Tocqueville 2000, pp. 507-508)

 

The wealthy, believing in the Capitalist ethic, also seek to copy their concept of the “good life” which their wealth supposedly affords them.  Indeed, the wealthy, according to this theory, seek to copy what the broader culture tells them the preferences of the wealthy are.  (While de Tocqueville does not outright make this last claim, instead tracing wealthy Americans’ preference for opulence to their status as nouveaux riches, he then immediately states that the inheriting rich have the exact same preference for opulence, contradicting himself.  Our theory does not run into this problem.)  As a result, advertisers are able to advertise to a larger group of people (the new middle class) in order to indirectly access a smaller, more valuable group of people (the wealthy).

 

An example of this advertising strategy was the “Lucky Strike” campaign in the 1920s cigarette industry.  The strategy worked by using testimonials from wealthy people – “opera stars and athletes, businessmen and socialites” – to display that smoking was a marker of class and wealth (Brandt 2009, p. 72).  This sort of strategy was extremely successful:  in the cigarette industry alone, sales increased by 400 percent from 1900 to 1925, in large part as a result of this sort of advertisement (Brandt 2009, p. 75).  And while, at the height of cigarette advertisement (the 1940s and 50s), the percentages of the poor and the rich who smoked were very similar (those with a college education being 6 percent less likely to smoke than high school dropouts), since the decline of cigarette advertising that difference has grown to about 15% (Cheng & Kenkel 2010, p. 4).  The popularity which cigarettes achieved among the wealthy should be considered highly impressive and was probably a result of cigarette companies’ culture-based advertising strategy.

 

The indirect strategy, however, has lost its appeal over time.  No longer is advertising torn between its contrary desires to advertise to the wealthiest customers (who have the most to spend) and to the largest groups of people (who are the easiest to advertise to).  Instead, as intellectual egalitarianism has increased, we have become more similar in which advertising strategies appeal to us:  the strategy that directly appeals to the poor now also directly appeals to the rich.  Therefore, anyone looking at any advertisement, so long as their reaction is measurable, helps an advertiser or mass media producer understand what a wealthy consumer might want from an advertisement or piece of media.  This window into the preferences of rich Americans is something only poor Americans can provide:  the two groups (wealthy Americans and poor Americans) are, by virtue of their shared culture, more similar in their preferences than, say, wealthy Americans and wealthy individuals from China.  The American poor have an absolute advantage (due to cultural as well as intellectual similarity) over any other group of people in displaying the preferences of wealthy Americans, along with an absolute disadvantage relative to many poor foreigners (those from China, for instance) in many forms of work such as manufacturing; they therefore have a comparative advantage over the foreign poor (and probably the foreign rich as well) in showing producers which advertisements and mass media will induce wealthier Americans to buy certain goods.  As techniques such as machine learning allow producers to measure in great detail how compelling an advertisement is to every single person who sees it, every view of an advertisement helps companies narrow down which strategies are most compelling.  And, as a result, every view indirectly makes advertising companies (as well as the companies for which they advertise) money.
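
To give a concrete (and, again, entirely hypothetical) sense of what “measuring every view” can mean in practice, the sketch below shows one standard technique:  an online logistic model of viewer response, nudged by every impression whether or not the viewer responds.  Nothing here comes from the article’s sources; it simply illustrates the kind of per-impression learning described above.

```python
import math

# Minimal sketch: an online logistic model predicting whether a viewer
# responds to an ad, updated on every single impression. Features, data,
# and learning rate are all invented for illustration.

weights = [0.0, 0.0, 0.0]  # one weight per (hypothetical) ad feature
bias = 0.0
LEARNING_RATE = 0.01

def predict(features):
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # predicted response probability

def update(features, responded):
    # Every impression adjusts the model, whether or not it led to a response.
    global bias
    error = (1.0 if responded else 0.0) - predict(features)
    bias += LEARNING_RATE * error
    for i, x in enumerate(features):
        weights[i] += LEARNING_RATE * error * x

# Hypothetical impressions: (ad features, did the viewer respond?)
impressions = [([1.0, 0.0, 1.0], False), ([0.0, 1.0, 0.0], True), ([1.0, 1.0, 0.0], False)]
for features, responded in impressions:
    update(features, responded)
print(weights, bias)
```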

 

Thus, for at least one group of Americans, watching entertainment and thereby seeing advertisements has become lucrative for corporations; which is to say, whether or not that segment of the population is employed, it still makes at least some money for the many corporations which advertise, and it certainly makes a significant amount of money for those who directly advertise and entertain.  It is lucrative, therefore, for advertising and entertainment corporations to spread within their works an ideology which praises (among a segment of the population) a lack of work, or at least the traits which lead people not to work (neuroticism, for instance).  There is, of course, no direct proof that this is underway.  We will look at examples of how public opinion has changed over the years to devalue many traits and attitudes which approve of hard work, as well as at the creation of new attitudes which damage the work ethic and bring about neuroticism.  However, these trends in public opinion can only be connected to advertisers via our theory of social change, so our lack of direct evidence is a matter of necessity.  Advertisement cannot usually be shown to have directly changed attitudes, except perhaps attitudes towards a specific product.  After all, advertisers appeal to the public; if a massive number of people change their opinion about a particular issue, advertisers will have to appeal to that changed opinion.  Because advertisers are always trying to make their content appeal to the average person, even if a particular instance of social change is not caused by advertisers whatsoever, they will almost immediately change their advertisements in favor of the social change in order to appeal to the average person.  Thus, one way or another, advertisements and mass entertainment will eventually change concomitantly with public attitudes.  (As an example, the decline of cigarette smoking was clearly not caused by advertisers or mass media producers; yet it would now seem very odd for a mass media production or advertisement to use cigarette smoking as a symbol of a positive virtue (masculinity, for instance) in a way that would not have seemed strange in the 1950s.)  Nor do we believe there is any kind of conspiracy or intentionality behind these shifts; they occur merely as a result of the corporate feedback loops we described in Part One.  The shift, as a result, does not occur in any theorized manner but rather through many minor shifts – a change in a single advertising firm’s hiring strategy (without any understanding of why such strategies work), or one firm copying another firm’s already successful strategies.  Neither accidental success nor uncomprehending imitation can be traced back to any kind of teleological source, and as a result, we cannot arrive at the reason for success by looking at the intentions of the successful.

 

In spite of our inability to absolutely prove the source of our current social change, we can provide strong evidence in favor of our hypothesis that it is explained by our general theory of social change.  For one, the persuasive power of our general theory extends to this individual case:  if we can agree that modern social change generally occurs due to mass media, it seems likely that our current social change is caused by mass media, and therefore by mass media preferences.  Secondly, it seems (as we will later argue) that many of the significant examples of social change followed the Great Recession of 2008, a moment when, according to our theory, poor financial numbers would likely have forced firms of all kinds (including advertising firms) to reconsider many of their economic strategies.  (For instance, as we have discussed earlier, cannabis usage and attitudes towards cannabis started changing significantly after 2008 – usage increasing 2.3% between 2007 and 2008, 4.2% between 2008 and 2009, and a whopping 5.7% between 2009 and 2010, the exact years of the Recession.  Before then, cannabis’ popularity was in some years not merely off its upward trend but on an outright downward one, decreasing by 0.2% between 2005 and 2006, although then increasing 2.4% between 2006 and 2007 (Compton et al. 2016, p. 956).)  In discussing the social shifts which have occurred in recent years, we will consistently find 2008 to be a kind of inflection point at which the trends we have previously discussed become far more significant.  Thirdly, the level of success which advertisers have achieved since 2008 (which we have previously discussed) is an indicator that the social change was not unexpected for advertisers.  For if social change unrelated to those advertisers had occurred (particularly as significantly and rapidly as social change has occurred over the past fifteen years), then one would expect advertisers to have had a significant amount of trouble in its aftermath, as the values of the public would have changed rapidly; we would at least expect significant shifts in the market as many old strategies rapidly lost effectiveness.  However, no such massive shifts occurred.  The same significant mass media and advertisement-producing companies have retained their importance over the past fifteen years.  On the other hand, if we assume that mass media producers had some part to play in this shift, then no such internal disruption would have been necessary; we would see advertising strategies shift prior to the shift in consumer preferences, disguised under the shroud of the crisis which initially produced the shift in strategy (namely, the negative effect the 2008 Recession had on all industries).  Finally, as we discuss recent social shifts in more detail, we will be able to track how nearly all of them seem to grant significant influence to advertisers and other mass media producers.  Again, this does not prove the influence of advertisers and mass media producers, but it is extremely conspicuous and gives us evidential reason to believe that they have been instrumental in creating those recent social shifts.  All of these reasons, in conjunction with one another, make our theory not just a possibility but the most compelling explanation of recent social change.

 

Thus far, we have looked mainly at one area, economic life, in our discussion of social change.  In this area, we have seen change tracking directly with our theory.  The labor force participation rate started declining significantly in 2008:  between December 2000 and December 2008, it declined from 67% to 66%, while between December 2008 and December 2016, it declined from 66% to 63% (oddly, falling below its Great Recession low point of 64.6% in spite of the economy leaving recession during that time) (U.S. Bureau of Labor Statistics Aug. 2022).  Further, as we have discussed, this recent economic shift seems to have helped advertisers significantly, both by creating a class of non-working people to be advertised to and by thereby improving the advertising capabilities of major corporations.

 

Throughout the rest of the article, we will describe in more detail how this social change has manifested itself in three more areas:  the cultural, the social, and the religious.  We will argue that in all of these areas, the public’s attitudes have trended in a direction which removes traditional bases of people’s knowledge of the world and replaces those bases with new sources that both reinforce economic trends and grant advertisers yet more power over the world.  We will argue that, through these media, advertisers have become capable of completely determining people’s understanding of the world around them and, as a consequence, of determining an increasingly significant percentage of people’s activities.  It therefore seems highly likely that, as this social trend continues, it will increase in speed and significance.  The three areas flow into one another, although we will try to keep our analysis of them mostly separate.

 

The Cultural Effects

 

What we mean by “culture” in this regard is a person’s interposition within a broader context of beliefs, a context which influences how their own beliefs, attitudes, and actions develop throughout their lives.  That cultural context validates or invalidates a person’s individual beliefs, attitudes, and actions, and by validating or invalidating them, also determines them:  it leads people to think and behave in particular ways, which in turn determines what other thoughts and actions they engage in.  This concept has been developed by philosopher Charles Taylor, who terms it the “social imaginary”:

 

Our social imaginary at any given time is complex. It incorporates a sense of the normal expectations that we have of each other; the kind of common understanding which enables us to carry out the collective practices which make up our social life. This incorporates some sense of how we all fit together in carrying out the common practice. This understanding is both factual and “normative”; that is, we have a sense of how things usually go, but this is interwoven with an idea of how they ought to go, of what mis-steps would invalidate the practice. Take our practice of choosing governments through general elections. Part of the background understanding which makes sense of our act of voting for each one of us is our awareness of the whole action, involving all citizens, choosing each individually, but from among the same alternatives, and the compounding of these micro-choices into one binding, collective decision. Essential to our understanding what is involved in this kind of macro-decision is our ability to identify what would constitute a foul: certain kinds of influence, buying votes, threats, and the like. This kind of macro-decision has, in other words, to meet certain norms, if it is to be what it is meant to be. If a minority could force all others to conform to their orders, it would cease to be a democratic decision, for instance.  (Taylor 2007, p. 172)

 

So the particular rules which we see repeatedly being followed cause us to expect those rules to be followed in the future, and we normatively condemn departures from them.

 

A slightly more extreme position would be that of philosopher Martin Heidegger, who believed that cultural values, which he termed our “background,” bounded all of our thoughts.  On this view, as Heidegger scholar Charles Guignon says, a “common background of intelligibility” is required for any of our statements to make sense (Guignon 1983, p. 111).  Only through this cultural background do our “regular practices, customs and institutions” gain any kind of “comprehensible structure”; that is to say, our thoughts only make sense within the world of thoughts which influence us and the cultural practices which we internalize (Guignon 1983, p. 113).  This latter argument is significantly more extreme than the former, but both leave us with the same fundamental takeaway:  our thoughts and beliefs are significantly affected by our placement within some broader cultural context.  That cultural context is learned through our placement within a broader history.  This broader, opinion-determining history is important to our perception of the world around us, helping us distinguish between right and wrong, normal and abnormal.

 

Once we have recognized culture as the group of beliefs about how to understand the world around us, we can interpret its source in much the same manner as we understood the source of all knowledge in Part One of this article:  as created by habit.  To recapitulate, these “habits” (or what we are in this case calling culture) are created by repeatedly seeing other people engage in certain sorts of activity and mimicking those actions.  By mimicking them, we eventually come to “learn” to engage on our own in the actions which we previously could only mimic.  What this means culturally is that, by seeing how others think about the world and repeatedly mimicking them, we eventually come to have a particular cultural background.  That cultural background, just like all other information, can be transmitted through a variety of educational sources – other people and media (the latter often being a simulation of other people) in particular.

 

Historically speaking, Western culture has remained broadly continuous for millennia.  In ancient Rome or the Middle Ages, for instance, with low literacy rates and little media to choose from even if one was literate, one’s social relationships were basically the sole basis of one’s understanding of the world.  This social source of culture can be seen in how thought itself was considered innately social.  One’s thoughts were not considered one’s own; instead, they were considered to be determined by the influence of spirits (angels and demons in particular), and the presence of those spirits was determined by other people’s actions – whether they engaged in witchcraft, or how well a community followed collective rituals (Taylor 2007, p. 32, pp. 42-43).  Following this period, as the concept of witchcraft declined and literacy increased (although this was a much slower process than we are making it out to be here), Western society, in spite of rejecting its medieval period as “dark,” retained continuity, infusing its medieval inheritance with increasingly available ancient thought and then rejecting the superiority of that ancient thought (Pippin 1999, pp. 19-20).  This continuity can be attributed to the fact that, so long as literary sources are the main non-social source of culture, there is no necessary preference for new literature over old.  If anything, the opposite is true:  unless one was regularly purchasing new (expensive to print and ship) books, one was more likely to own an older book, such as the Bible, than the newest scholarship.  Thus, the cultural values people learned from contemporary media (that is, literature) were just as unlikely to overturn cultural continuity as values learned socially.  This cultural continuity between (at least) the Middle Ages and the modern world is why we can identify most of our modern literary concepts, as well as many of the basic elements of popular concepts of love, as having developed by the Middle Ages (Curtius 2013, pp. 391-397; Rougemont 1995).

 

What we now see, however, is the rupturing of that continuity as a result of a change in the sources of culture.  What we mean by rupturing is not rejection; rejection does not by any means end social continuity, and there have been prior periods of social unrest and of the rejection of previous social attitudes.  One example would be the 17th century, when thinkers from Hobbes (who rejected the honor-based society that had caused religious wars in his own lifetime) to Descartes (who, in Pippin’s words, stated that we could reject “all that we had inherited” in philosophy and “start again”) to the neo-Stoics and the Reformation religious movements (who advocated the complete recreation of society and the “civilization” of man) rejected traditional, Medieval values in favor of what they viewed as newer, more sensible concepts (Midgley 2010, p. 118; Pippin 1999, p. 24; Taylor 2007, pp. 114-122).  This did not, of course, mean that Western culture experienced a rupture in the 17th century.  Rather, to reject a previous attitude was, if anything, an affirmation of society’s continuity with that attitude, a way of accepting previous bounds for discussion.  Indeed, much contemporary “new” popular social criticism (such as that of Richard Dawkins or Ayn Rand) is simply a new iteration of older social criticism going back to Nietzsche, as philosopher Mary Midgley has argued (Midgley 2010, pp. 40-41, 49, 127).  The rejection of a previous culture is not so much a rupturing of continuity as a change in the direction of society; it remains within the older context, simply moving in a different direction.  Instead, what we are arguing is that, for many people, as a result of new cultural influences, the old culture is increasingly unrelated to their new cultural values, neither as a direct influence nor as a rejected alternative.  Rather, their new culture is one increasingly determined solely by advertisers and by media developed in recent years by those advertisers, and such advertisement is severing the relationship which we have with our cultural ancestors.

 

The way this has happened is through a decrease in the influence of traditional sources of culture.  The two traditional sources of culture which have declined most obviously are the family and traditional forms of media.  Both were once highly important sources of culture, the family being one’s primary source of cultural learning (of course, the social world is significantly broader than the family, and we will discuss non-familial social relationships at a later stage of the article) and media being the primary non-social basis of cultural learning.  In our time, both have declined significantly as bases for young people’s understanding of the world.  We can see this in a variety of ways.

 

First, the family has declined as a basis of cultural attitudes.  This can be seen in two statistical shifts:  the workforce participation rate of parents and the very structure of families.  For one, the percentage of children for whom at least one caregiver does not work full-time (and for whom, therefore, at least one caregiver can devote themselves full-time to imparting cultural values to the child) has declined in recent years, from 35.3% in 1988 to 34% in 2021 (Hayghe 1990, p. 18; U.S. Bureau of Labor Statistics, April 2022).  This might seem very minor (a decline of just over one percentage point in thirty years), but, compared with the general labor force participation rate, which declined by about 5% over the same period, this rise in parental employment is quite significant:  it demonstrates a significant social trend in favor of parents working relative to the average person working.  Secondly, we have seen a shift away from two-parent households and towards no-parent households.  In 1990, 72.5% of children lived in two-parent households, 24.7% in one-parent households, and 2.7% in no-parent households (U.S. Bureau of the Census 1991, p. 5).  By 2020, 70.4% of children lived in two-parent households, 25.5% in one-parent households, and 4.1% in no-parent households (U.S. Census Bureau 2021).  Strikingly, the percentage of children raised without parents increased by more than half over this thirty-year period.  The shift becomes even more stark upon closer inspection.  Much of it is the result of an increase in the number of families in which a grandparent, but not a parent, is present, which rose from 1.3% in 1990 to 2.1% in 2020, an increase of about 61%, as well as of environments without any parents whatsoever, which rose from 0.54% in 1990 to 0.82% in 2020, an increase of 51% (U.S. Bureau of the Census 1991, p. 9, p. 5; U.S. Census Bureau 2021).  The latter figure includes both non-relative foster families and non-foster households in which children live with non-relatives.  Excluding non-relative foster families, the shift is even more significant.  The percentage of children in non-relative foster families has actually decreased since 1990:  from 0.42% in 1990 (calculated by applying 75% – the average percentage of foster children in non-relative homes when data started being collected in the mid-2000s – to the total number of foster children in 1990, 360,000) to 0.33% in 2020 (Williams & Sepulveda 2019; U.S. Department of Health and Human Services 1990; U.S. Census Bureau 2021).  The percentage of non-foster children living without family, however, has increased rapidly, from 0.12% of children in 1990 (found by subtracting the percentage in non-relative foster homes from the percentage living separately from family members) to 0.49% of children in 2020, an increase of 314% (U.S. Bureau of the Census 1991; U.S. Census Bureau 2021).
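
For readers who want to trace the 1990 derivation just described, here is the arithmetic as a short sketch.  The under-18 population figure (roughly 64 million in 1990) is our own assumption, used only to convert a headcount into a percentage:

```python
# A quick check (ours, not from the cited sources) of the 1990 foster-care
# derivation described above.

foster_children_1990 = 360_000   # total foster children, 1990
nonrelative_share = 0.75         # mid-2000s average share in non-relative homes
children_1990 = 64_000_000       # assumed U.S. under-18 population, 1990

nonrelative_foster_pct = foster_children_1990 * nonrelative_share / children_1990 * 100
print(round(nonrelative_foster_pct, 2))  # ~0.42, matching the figure in the text

# Children living apart from family but outside non-relative foster care:
total_nonfamily_pct_1990 = 0.54
print(round(total_nonfamily_pct_1990 - nonrelative_foster_pct, 2))  # ~0.12
```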

 

Both of the insurgent familial models – children raised exclusively by grandparents and children raised by non-family members – are relatively ineffective at passing down cultural values to children.  Of course, the more common model, children living with their grandparents, is not intuitively ineffective; grandparents would seem, if anything, more capable of connecting children to their culture than parents.  Yet this observation obscures several important influences on familial dynamics.  For one, custodial grandparents do not usually choose to take in the children, and as a result, they are more likely to feel resentment at being required to raise them; this is particularly true of younger grandparents (Solomon & Marx 1995, p. 393).  Custodial grandparents consequently tend to face significant emotional issues as a direct result of raising custodial grandchildren, thereby impairing their ability to raise the children.  For another, being raised by grandparents usually means that the parents refused to raise the child, the knowledge of which is often emotionally distressing for the child (Billing & Kortenkamp 2012, p. 5, p. 4).  As a result, in many cases, transmitting values to the grandchildren of custodial grandparents is quite difficult.  This is reflected in the behavioral issues which custodial grandchildren often exhibit relative to children raised by both biological parents (although it is important to note that these issues are less significant than those of children raised by single parents) (Solomon & Marx 1995, p. 390).  Among these children, who seem likely to be less culturally influenced by family members than those in traditional families, mental illness, and in particular depression, is far more common (Smith & Palmieri 2007, p. 1307).  Interestingly, while directly correlated with mental illness, living in a custodial grandparent home has very little effect on children’s academic ability, having a negative effect only on quantitative learning (Pittman & Boswell 2007, p. 31).  In spite of this lack of intellectual difficulty, custodial grandchildren seem to have extreme difficulty in school:  their odds of repeating a grade are 86% higher than those of children raised by two biological parents (and 2% higher than those raised by single parents), and their likelihood of being an above-average student is 32% lower than that of children raised by two biological parents (and only 6% higher than those raised by single parents) (Solomon & Marx 1995, p. 390).  It seems that being raised in a custodial grandparent home tends to give grandchildren the traits of an advertiser-friendly person – intellectually intact but neurotic, and performing poorly in job-producing avenues of life (such as school).

 

The second insurgent familial environment, living without one’s family at all, is slightly more difficult to measure.  One reason is that it is not a single environment – this segment of the population spans many different arrangements, from living on one’s own, to living with an unrelated adult, to living with friends.  Another is that there is simply less social-scientific research on this group, because its members are fewer in number and its prevalence is more recent.  Indeed, this group made up only 0.1% of young people thirty years ago – expecting the social sciences to measure the developmental outcomes of an insurgent group as it comes into being seems like expecting too much from anyone.  Yet we do have some markers of the nature of this group.  For one, 43% of children living in non-foster, non-familial homes are between the ages of 12 and 17, even though that age group makes up only 34% of the total under-18 population (U.S. Census Bureau 2021).  For a very large segment of this population, then, what seems to be underway is that children are moving away from their families before the age of 18 – that is, a kind of suspended childhood.  If this is indeed what is happening, then there is obviously less cultural influence upon these children from their parents, simply because those parents are not around to influence them.  Further, while we do not have any direct studies of these particular young people, we do have studies of people with similarly suspended childhoods – young runaways.  Here, studies have found that running away in itself, even controlling for previous mental and substance issues, tends to increase the probability of drug use in the long run, and therefore of neuroticism (as we have already discussed, drug use seems to play a role in causing neuroticism) (Tucker et al. 2010, p. 516).  So, while we do not have a full argument here, we do have the skeleton of one:  the rise in non-familial environments has been a rise in prematurely suspended familial environments; those environments by necessity create cultural ruptures; and those cultural ruptures seem to lead to the creation of advertiser-friendly people.  More research in this area would, of course, be very helpful for understanding this rapidly growing group of familial environments, but as it stands, we have good reason to believe that the cultural shift which has produced this new childhood environment is one which creates advertiser-friendly people and decreases the importance of family for their cultural development.

 

The second shift in the source of cultural development has been a shift in young people’s preferences in media.  The movement away from traditional forms such as book-reading has been significant.  In 1984, 9% of 9-year-old Americans, 8% of 13-year-olds, and 9% of 17-year-olds stated that they rarely or never read books for fun; in 2020, those figures were 16%, 29%, and 27%, respectively (Schaeffer 2021).  Meanwhile, many other forms of media have increased in popularity among the young, in particular video games and other non-social activities on the computer.  In 2021, 15-19 year-old Americans spent on average 1.44 hours a day either playing video games or engaging in non-social internet activities, 260% of the national average and a 35% increase since 2011 (when the average time 15-19 year-olds spent on such activities was 1.06 hours per day) (U.S. Bureau of Labor Statistics June 2022; U.S. Bureau of Labor Statistics 2012).  The important point to take from this shift in media consumption is the level of power it grants large media-producing corporations.  A book, while often published by a corporation, is almost always written by one person or a small number of people – people unlikely to have specific strategies in the way corporations do, and even more unlikely to arrive at those strategies through the same processes as firms.  Video games, on the other hand, are produced almost exclusively by large corporations:  of the best-selling games of 2021, all were produced by firms with over a thousand full-time employees, firms which likely operate according to the evolutionary, copy-heavy strategies we discussed in Part One of this article (Rousseau 2022).  As a result, video games are likely closely connected to the creation of advertiser-friendly people.  We have already found that advertiser-friendly people (that is, intellectually capable people with high neuroticism and low labor force motivation) seem very much to enjoy video games, although it is possible that causation runs the other way – that advertiser-friendly people enjoy playing video games, rather than video games making people advertiser-friendly (Thompson 2016).  There are two reasons to believe this is not the case.  The first is that, on a factual level, neuroticism and low feelings of motivation do not make people any more likely to begin engaging in compulsive video game usage (Peracchia et al. 2019, pp. 1016-1017).  If so, compulsive video game use would seem to be the cause of neuroticism and low work motivation (and therefore of advertiser-friendliness), rather than the other way around (or some third factor causing both).  Second, compulsive use of video games heavily correlates with neuroticism at a young age, as young as one’s teenage years (Wang et al. 2019, pp. 3-5).  Therefore, even if neuroticism does partly determine one’s usage of video games, the prominence of video games in the formative years of already-neurotic young people does seem to assign compulsive video game use some responsibility for the later development of their neuroticism.
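
The time-use comparison above can be checked with a line or two of arithmetic; the implied national average in the second line is our own inference from the 260% figure, not a number from the cited sources:

```python
# Quick arithmetic check (ours) of the time-use comparison above.
hours_2021_teens = 1.44
hours_2011_teens = 1.06
print(round((hours_2021_teens / hours_2011_teens - 1) * 100))  # ~36%, close to the ~35% rise cited

# If teens spend 260% of the national average, the implied national average is:
print(round(hours_2021_teens / 2.60, 2))  # ~0.55 hours/day (our inference)
```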

 

In both of these areas, then, the influence of traditional sources of culture on one’s social development has declined in recent years.  It further seems that being disconnected from those cultural influences is correlated with being more neurotic and having a lower work ethic.  To put it simply, being more influenced by corporate or advertiser-dominated media seems to lead to the traits which we describe as advertiser-friendly.  And there have been several recent social changes in exactly these directions – changes which have ruptured the connections between an increasingly significant segment of the population and their traditional cultures in favor of a culture dominated by advertisers and media companies.  All of this lends credence to our initial argument about the cause of recent social change:  examining another area, that of culture, we again see social shifts which directly benefit advertisers and media companies and in which advertisers and media companies seem to play a role.  It also enhances our understanding of what this phenomenon – the social shifts of our time creating advertiser-friendly people – entails for our cultural future.  The current trends are signs of an increasingly significant cultural rupture tending towards cultural detachment:  towards the creation of people connected to no past at all, who are therefore perfect targets for advertisement, which need not distinguish among people (and, more importantly, among people’s cultural influences) but can approach everyone as if they were the exact same person.  At the extreme of this rupturing – an extreme we have not reached but might reach sometime in the future – the only cultural ties remaining would be those created by advertisers to sell products, cultural attachments of exactly the kind most effective at making one amenable to advertising and media products.

 

So much for culture.  Long live advertising!

 

The Social Effects

 

The consistent decline in traditional forms of sociality and community in America is perhaps one of the starkest social shifts of recent years.  Sociologist Robert Putnam sounded the alarm in 2000 about, among other things, declining participation in community groups, religious communities, labor unions, and informal friendships, as well as declining interpersonal trust throughout American society (Putnam 2000, p. 60, p. 72, pp. 84-85, pp. 104-105, p. 146).  Since then, the pace of decline in these areas has increased rapidly.  When it comes to community group membership, Putnam observed that the percentage of Americans in at least one community group declined from 75% in 1976 to 70% in 1994, a rate of decline of about 0.28% per year (Putnam 2000, p. 60).  According to a Pew poll, by 2017 that number had declined to 57%, a decline since 1994 of about 0.56% per year, double the rate of the prior two decades.  Even more disconcerting, the percentage of 18-29 year-olds involved in community groups was only 54%, evidence that the current trend is likely to continue into the future (Sandstrom & Alper Feb. 2019).  When it comes to church membership, Putnam identified a decline of about 10-12% from 1975 to 2000, a rate of 0.4-0.48% per year (Putnam 2000, p. 72).  Displaying the same acceleration, from 2000 to 2020 church membership declined from 70% to 47%, a rate of 1.15% per year, clearly more than double the previous quarter century’s rate of decline; and, as with community group membership, young people are even less likely than the general population to be members of a church, with only 36% of millennials belonging to one (Jones Nov. 2021).  When it comes to labor unions, Putnam identified union membership as declining from 32.5% of workers in 1954 to 14.1% of workers in 1997, a rate of decline of about 0.43% per year (Putnam 2000, p. 85).  By 2018, that percentage had declined to 10.5% of workers, a rate of decline since 1997 of 0.17% per year (U.S. Bureau of Labor Statistics 2019).  While this rate of decline has slowed recently, that is largely because decline necessarily slows once membership has fallen as far as union membership has.
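
The per-year figures above are simple annualizations (percentage-point change divided by years elapsed); the following sketch merely re-derives them from the cited endpoints:

```python
# Our own quick check of the per-year decline rates computed above
# (percentage-point change divided by years elapsed).

def rate_per_year(start_pct, end_pct, start_year, end_year):
    return (start_pct - end_pct) / (end_year - start_year)

print(round(rate_per_year(75, 70, 1976, 1994), 2))      # community groups: ~0.28
print(round(rate_per_year(70, 57, 1994, 2017), 2))      # community groups: ~0.57, i.e. the ~0.56 cited
print(round(rate_per_year(70, 47, 2000, 2020), 2))      # church membership: ~1.15
print(round(rate_per_year(32.5, 14.1, 1954, 1997), 2))  # union membership: ~0.43
print(round(rate_per_year(14.1, 10.5, 1997, 2018), 2))  # union membership: ~0.17
```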

 

When it comes to informal friendships, the decline in interaction between friends that Putnam identified seems passé in comparison with the near-complete collapse of informal friendship over the past thirty years (Putnam 2000, pp. 104-105).  During that period, the percentage of Americans with no close friends has quadrupled from 3% to 12%, and the percentage with 10 or more close friends has decreased from 33% to 13% (Cox 2022).  The decline is even more significant among young people, with 22% of young people saying they have no friends and 27% saying they have no close friends (Ballard 2019).  As for interpersonal trust, the decline that Putnam observed from the 1970s to the 1990s has continued in a way that is not just striking but odd (Putnam 2000, p. 146).  In 1998, the percentage of Americans who said that most people could be trusted was about 40%, declining to about 33% in 2018 (Smith 2019).  This continued decline runs directly contrary to what Putnam believed the cause of declining social trust to be – the real fact that people had become less trustworthy, as displayed by, for instance, the crime rate (Putnam 2000, p. 151).  Yet, in spite of the crime rate declining significantly from 1998 to 2018 (from 566.4 reported violent crimes per hundred thousand Americans in 1998 to 368.9 in 2018, a decline of 35%), interpersonal trust has not recovered; it has diminished even further (United States Department of Justice 1999; United States Department of Justice 2019).

 

We will not try to explain this entire shift by advertiser and mass media producer preference, although the connection between modern mass media consumption and the decline in American sociality is clearly significant.  The connection between modern wealth, with its consumer society, and the decline of traditional social structures has been identified by many observers.  Taylor describes the causation as follows:

 

With post-war affluence, and the diffusion of what many had considered luxuries before, came a new concentration on private space, and the means to fill it, which began distending the relations of previously close-knit working-class or peasant communities, even of extended families. Older modes of mutual help dropped off, perhaps partly because of the receding of dire necessity. People concentrated more on their own lives, and that of their nuclear families. They moved to new towns or suburbs, lived more on their own, tried to make a life out of the ever-growing gamut of new goods and services on offer, from washing-machines to packaged holidays, and the freer individual life-styles they facilitated. The “pursuit of happiness” took on new, more immediate meaning, with a growing range of easily available means. And in this newly individuated space, the customer was encouraged more and more to express her taste, furnishing her space according to her own needs and affinities, as only the rich had been able to do in previous eras.  (Taylor 2007, p. 474)

 

Putnam described the phenomenon as such:

 

…news and entertainment have become increasingly individualized. No longer must we coordinate our tastes and timing with others in order to enjoy the rarest culture or the most esoteric information. In 1900 music lovers needed to sit with scores of other people at fixed times listening to fixed programs, and if they lived in small towns as most Americans did, the music was likely to be supplied by enthusiastic local amateurs.* In 2000, with my hi-fi Walkman CD, wherever I live I can listen to precisely what I want when I want and where I want. As late as 1975 Americans nationwide chose among a handful of television programs. Barely a quarter century later, cable, satellite, video, and the Internet provide an exploding array of individual choice.

 

Second, electronic technology allows us to consume this hand-tailored entertainment in private, even utterly alone. As late as the middle of the twentieth century, low-cost entertainment was available primarily in public settings, like the baseball park, the dance hall, the movie theater, and the amusement park, although by the 1930s radio was rapidly becoming an important alternative, the first of a series of electronic inventions that would transform American leisure. In the last half of the century television and its offspring moved leisure into the privacy of our homes. As the poet T. S. Eliot observed early in the television age, “It is a medium of entertainment which permits millions of people to listen to the same joke at the same time, and yet remain lonesome.” The artifice of canned laughter reflected both the enduring fact that mirth is enhanced by companionship and the novel fact that companionship could now be simulated electronically.  (Putnam 2000, p. 233)

 

Putnam and Taylor presented this individuation of entertainment and other personal ends in the era of mass consumption as if it were inherent in such an era (although it is important to point out that, in both cases, other, unrelated causes are also suggested to help explain the modern decline in social congeniality).  However, individuation does not seem to follow innately from the expansion of the means to be entertained.  Indeed, aside from religious organizations, hobby groups – organized around innately individual pursuits – are the most popular community organizations in America (Sandstrom & Alper 2019).  The decline of factional, hobby-based interest groups does not follow simply from the individuation of popular entertainment.  Further, many forms of popular “electronic” entertainment which represent the recent expansion of entertainment options in fact seem very communitarian, more so than traditional forms of entertainment such as the theater.  Take philosopher Noël Carroll’s observations on the communitarian preferences of many moviegoers:

 

…ghetto audiences and teenage audiences (in the United States at least) treat movies as opportunities for communal participation. They crack jokes loudly at the expense of the film, currying the approval of fellow viewers; they try to scare each other with premature screams; they shout out criticisms and offer advice to the characters. The audience with whom I watched Halloween kept warning Jamie Lee Curtis to stop dawdling and to get out of that house pronto. Also, middle-class adult audiences, as encountered in upper east side theaters in New York City, are quite communal, talking to each other very audibly throughout the film.  Indeed, in my experience communal interaction is far more common, in America, during movie screenings than during theatrical performances.  (Carroll 1988, p. 42)

 

This is to say that solitude is not inherent in recent technological developments in entertainment.  Nor is it inherent in the creation of new entertainment technologies generally.  There is no innate reason why new technologies cannot, in their own way, lend themselves to sociality and communitarian preferences.  Yet the consistent trend has indeed been towards solitude in entertainment.  For instance, public movie theaters languish as American moviegoing has collapsed over the past couple of decades, while private movie-and-television subscription services like Netflix have massively increased in popularity over the last few years (with the percentage of Americans subscribed to at least one entertainment subscription service increasing from 50% in 2016 to 70% in 2019) (Brenan 2022; Nayak 2016; Fitzgerald 2019).  Cost-effectiveness is part of this shift, but another part is most likely that people do not care whether their entertainment is communitarian.  The average person’s preference is, if anything, for less social (or at least less communitarian) forms of entertainment.  This recent shift is a shift in consumer preferences, exactly the sort of shift engineered by advertisers and other mass media producers.  It seems highly likely that it has occurred due to the latter’s preference for people generally not to prize sociality in their entertainment, that is, for people to prefer (or relatively prefer) solitude.  Indeed, this shift in preferences seems to be the source of much of young Americans’ antisociality.  When asked why they had few friends, most lonely Americans pointed almost exclusively to their own preferences (the most common reasons given were either shyness, that is, avoiding situations in which one could make friends, or a lack of interest in making friends) (Ballard 2019).  So, then, we must ask why advertisers or mass media producers might benefit from Americans being less socially congenial.

 

The most direct and obvious reason is simply an extension of our previous cultural argument.  Social life is one source of culture (which is to say, one source of one’s opinions on a wide variety of issues), as is evidenced by the very different social norms which underpin culture in different societies (Elster 2007, pp. 360-368).  Because social life is created by people, and is therefore not completely controlled by advertisers and mass media producers, it is simply in the interest of large companies to replace the social world with something more congenial.  A second argument would be that, as we discussed in Part One of this article, people’s opinions and actions are partially determined by what they perceive to be the common opinions and actions of people more broadly.  Congenial social groups cause people to believe that the “normal” opinions and actions are whatever is most common in the social groups of which they are members.  A non-congenial social society, on the other hand, causes people to look to sources such as mass media and advertising to find out the norms of their society, thereby giving advertisers and mass media producers the ability to control those people’s perception of society’s norms and thus their opinions and actions.  The decline of social society would therefore be an opportunity for advertisers and mass media producers to create a larger number of advertiser-friendly people.  However, all of this is very abstract.  One way of understanding the shift better than pure abstraction is to look at how it manifests itself in the recent replacement for traditional community organizations and sociality: social media.

 

The rise of social media in the place of traditional communities has been clear and rapid.  According to a poll by Uswitch, Americans spent on average about 3.6 hours per day on social media in 2021, and according to a poll by YPulse, younger Americans, who are less likely to be involved in traditional communities, spent about four hours per day on social media (Suciu 2021; YPulse 2021).  It further seems that social media use, the replacement for traditional community organizations, is heavily correlated with neuroticism and lower motivation, and particularly with depression (which causes both) (Pantic 2014, p. 653, p. 655).  This correlation does not necessarily mean that social media directly causes depression; it does mean, though, that replacing traditional community organizations with social media seems to be associated with making people more advertiser-friendly and, therefore, that advertising and mass media corporations benefit from a society in which social media replaces community organizations as the main source of young Americans’ social interaction. 

 

There are several additional advantages for advertisers and mass media corporations which seem to come from Americans using social media as their main source of social interaction.  For one, in spite of research showing that the mere use of social media does not seem to lower users’ self-esteem, those who use social media do have lower self-esteem (Pantic 2014, pp. 653-655).  This seems to imply that it is the totality of the replacement of traditional social situations with social media (rather than purely the use of social media) that lowers self-esteem among users.  In Part One of this article, we argued that people tend to change their habits and opinions out of a desire for self-esteem, which they lose by not being in tune with what they view as the general consensus opinions and habits of society.  If the totality of the recent social shift causes those affected to more often view themselves as out of tune with the public consciousness, and therefore as less deserving of self-esteem, then the recent social shift likely also makes people feel more in need of changing their habits and opinions, likely in the direction which advertisers and mass media producers prefer (more on this in the next paragraph).  Second, social media tends to be highly addictive, meaning that people who regularly use social media are more likely to prefer the sating of their addiction over other alternatives (Pantic 2014, pp. 653-655).  This innately promotes the cultural shift underway as well as making people less interested in working (that is, lowering their work ethic).  Finally, and most obviously, social media is a field dominated by advertisement.  Indeed, much of the recent growth in advertising spending has been based on growth in digital advertising, which grew to 57% of total corporate spending on advertising in 2021 (Adgate 2021).  It seems that, for one reason or another, regular social media users are very good targets of advertising; they are advertiser-friendly people.

 

It further seems that advertiser-friendliness is (to some extent) innately promoted by social media.  Most forms of social media promote popular content, that is, content regularly viewed or shared.  Someone who is advertiser-friendly, that is, someone who spends much of their time viewing advertisements and mass media, will therefore have an outsized effect on which posts are seen by more people.  For instance, one study showed that employed people are more likely to use social media to access individualized content, either personal blogs or things pertaining to their particular geographic region, whereas unemployed people tend to view and like posts pertaining to entertainment and news (Urbinati 2020, p. 390).  Personal blogs and regional content are less universal than entertainment and news content; the latter are therefore likely to be more popular than the former.  Since more popular things are more aggressively pushed by most social media algorithms, the categories of content which the unemployed prefer are more likely to be favored by social media algorithms than those which the employed prefer.  Secondly, the correlation between neuroticism and social media use, particularly compulsive social media use, means that neurotics will have a disproportionate effect on what content is viewed and shared on, and therefore spread by, social media platforms.  As a result, social media will tend to spread the preferences of those who are more advertiser-friendly rather than those who are less.  With the former overrepresented, the average person will come to believe that the former’s opinions, habits, and actions are more representative of society’s norms than they in fact are, and will come to mimic the opinions, habits, and actions of the advertiser-friendly group.  Thus, we can see that, by creating a platform which promotes advertiser friendliness, advertisers and mass media producers can promote advertiser friendliness in broader society.
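
 

The mechanism described here – heavy users’ engagement outweighing light users’ in a popularity-ranked feed – can be illustrated with a minimal sketch.  The numbers and categories below are hypothetical, chosen only to show the arithmetic of the skew, not drawn from any cited study:

from collections import Counter

# Hypothetical users: (preferred content category, interactions per day).
# Three light users prefer individuated, local content; two heavy
# (compulsive) users prefer universal entertainment/news content.
users = [
    ("local", 5), ("local", 5), ("local", 5),
    ("entertainment", 40), ("entertainment", 40),
]

# A popularity-driven feed counts raw engagement per category.
engagement = Counter()
for category, interactions_per_day in users:
    engagement[category] += interactions_per_day

print(engagement.most_common())
# [('entertainment', 80), ('local', 15)] -- a minority of heavy users
# dominates the ranking, so the feed shown to everyone reflects their
# preferences rather than those of the majority.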

 

There is, however, one area in which social media, in promoting advertiser friendliness, also happens to promote a certain sociality in its users: politics.  Social media has rapidly become an important source of news for much of America.  In 2021, 48% of Americans said they “often” or “sometimes” use social media as a source of news, and in the most recent Presidential election year, 2020, that number was 53% (Walker & Matsa 2021).  This usage of social media to engage with political topics seems to promote advertiser friendliness in users, likely because already advertiser-friendly people control the discussion of highly charged news topics, as we discussed earlier.  The topics which receive the most focus are thus likely to be those which confirm those users’ own world view, tinged with their own neuroticism, and what is shown to other people is likely to be the topics which are most worrying and outrageous.  This process then makes users more anxious and generally more neurotic, as studies have confirmed: increased political activism on social media tends to cause increased anxiousness and general neuroticism (Hisam 2017, p. 1466).  This offers a major explanation of why political issues such as global warming and school shootings are the main source of neuroticism among young people, as we discussed earlier.  Because young people use social media so often, the conditioning effects of the platform’s news discussions, which innately cause users to become more neurotic, affect young people and make them individually more neurotic.  This is why we see such oddly extreme and exaggerated fears of relatively minor dangers such as school shootings.  What is being expressed in that fear is the effect of social media’s conditioning, rather than a completely rational fear.

 

The interesting thing about this process is how it has led to regrowth in some areas of social life.  Putnam identified political life as one of the main collapsing forms of community in America, with a declining number of people engaging politically in any way since the early 1980s (Putnam 2000, pp. 39-40).  Yet people’s involvement in political processes has had a resurgence in recent years, with the percentage of people who had attended a political rally in the past year increasing from slightly below 6% in the early 1990s to 11% in 2018, the percentage who worked for a political party increasing from slightly below 3% to 5%, and the percentage who had contacted a political representative more than doubling from about 10% to 23% (Putnam 2000, p. 40; Pew Research Center 2018, p. 11; Putnam 2000, p. 43).  Further, young people (those 18-29 years old) seem to be even more politically active in some ways than older people, being more likely than average to have attended a political rally in the past year and just as likely to have worked for a political campaign (Pew Research Center 2018, p. 11).  Yet, at the same time, more individuated politics, such as local politics, has continued to decline, with the percentage of people who had attended a public local political meeting declining over that period from around 12% to 10% (lower than the percentage who had attended a political rally!), and with younger people the least likely to do so (only 8% of people 18-29 attended a local government meeting in the past year) (Putnam 2000, p. 42; Pew Research Center 2018, p. 11).  This shows that much of the recent upsurge in political activity is in fact an upsurge in social media-influenced political activity, that is, political activity which is learned about through social media.  Indeed, publicly endorsing a candidate on social media was more popular than any other political activity in 2018, and among young people it was one and a half times as popular as the next most popular political action (Pew Research Center 2018, p. 11).  In this political example, we see the future of the social world in America: one framed by social media, and one in which people can spread advertiser-friendly attitudes and opinions without even involving themselves with any advertisement or mass media, simply by taking action in the real world, alongside like-minded individuals, to legitimize, spread, and confirm the actions, habits, and opinions imparted upon them by advertising and mass media.

 

From this discussion of recent shifts in American social life, we can learn two important lessons.  The first is an increased level of certainty about the direction and basis of recent social changes.  The changes that we have seen in social life, as well as in cultural and economic life, have benefited advertisers and mass media producers and have themselves operationally resulted from changes in consumer preference, the area where mass media and advertisers seem to have the most direct power.  This observation grants yet more credence to the conclusions of our model regarding both the power and the (however unintentional) goals of major advertisers and mass media producers.  Further, the precise nature of these changes displays how the rate of change (as well as the priorities of advertisers and mass media producers) has increased over time, with only gradual change from the 1970s to the 1990s followed by more rapid change over the past twenty (and particularly the past fifteen) years.  This grants further credence to our contention that shifts in corporate decision-making following the Great Recession bear a strong relationship to how social change occurs.

 

Secondly, and perhaps more importantly, this discussion gives us a much better understanding of how the world will look once much of the current social change comes to fruition.  Based on our discussion, we can see that much of the traditional community organization that underpinned American life in the second half of the 20th century is set to continue to decline for the foreseeable future, while social media is, for more and more Americans, likely to overtake it as their basis of socialization.  As such a shift occurs, it seems unlikely that all social organizations will disappear; indeed, we might see an apparent revival of American social life in the near future.  However, social life will be increasingly dependent on and bound to social media-influenced concepts of the world and social media-influenced interests, thereby assuring that real-life social action spreads and upholds the opinions, habits, and actions that are also upheld on social media.  These sorts of social behavior will look similar to modern political action:  they will place the focus of action not on the individual interests of local communities but rather on broader, more universal interests, issues on which attitudes and speech can be influenced at a broader level than individual, local issues can.  Not being individuated to local opinions, habits, or actions, these forms of social interaction will be increasingly homogeneous, as they will have an identical basis (or at least an identical source).  In this world, which forms of social society are upheld will be increasingly based on which forms of social life are most effective at spreading the opinions, habits, and actions of advertiser friendliness through, for instance, the spread of neuroticism or a decrease in work ethic.  (All of this is, of course, speculative, although this sort of behavior has reached nearly its full fruition when it comes to, as we discussed earlier, the world of modern political action.  There is no reason to believe that what has occurred in relation to political action could not just as easily be transmitted into organizational social groups or informal social relationships.)

 

So much for society.  Long live advertising!

 

The Religious Effects

 

The decline of American religiosity over the past several decades has been rapid and continuous.  Since 1990, several polling agencies have found a rapid decline in American religiosity, with the percentage identifying as Christian declining from about 90% in the early 1990s to 63% in 2020 (a rate of decline of about 0.9% per year).  The decline has only sped up since the Great Recession, with the percentage of Americans identifying as Christian declining from 78% in 2007 to 63% in 2020 (a rate of decline of about 1.15% per year).  While part of this shift has been due to an increase in the number of adherents of non-Christian faiths (largely as a result of immigration), more significant has been the rise in the percentage of the population identifying as unaffiliated, increasing from 9% in 1993 to 29% in 2020 (a rate of increase of about 0.74% per year) (Pew Research Center 2022, p. 2).  Among teenagers, religious affiliation is even less common, with 32% of teenagers identifying as religiously unaffiliated (although the percentage that is Christian is the same as for the older population).  This decline is especially significant in light of the fact that teens’ parents are significantly more Christian (72% among parents as opposed to 63% among the general population) and less religiously unaffiliated (24% among parents as opposed to 29% among the general population) than the general population (Pew Research Center Sept. 2020).  The decline of American religiosity has come not only in its extent but also in its intensity among believers.  The percentage of Americans stating religion to be “very important” in their lives declined from 59% in 2000 to 48% in 2020 (a rate of decline of 0.55% per year), and from 56% in 2007 to 48% in 2020 (a rate of decline of about 0.62% per year) (Brenan 2021).  And, of course, we have already discussed the decline in America’s church attendance.
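
 

The same annualized calculation from our earlier sketch applies here; repeating the (hypothetical) helper for self-containment:

def annual_decline(start_pct: float, end_pct: float,
                   start_year: int, end_year: int) -> float:
    """Average decline in percentage points per year over the period."""
    return (start_pct - end_pct) / (end_year - start_year)

print(round(annual_decline(90, 63, 1990, 2020), 2))  # Christian identification: 0.9
print(round(annual_decline(78, 63, 2007, 2020), 2))  # since the Great Recession: ~1.15
print(round(annual_decline(59, 48, 2000, 2020), 2))  # religion "very important": 0.55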

 

The reason for this rapid and massive decline in American religiosity appears deceptively simple.  Religion is a traditional basis of both culture and social congeniality.  Both of those, as we have discussed, are in need of fundamental change to align with advertiser and mass media corporations’ values.  Further, religiosity in particular is correlated with a conservative attitude towards significant social change, particularly violent social change.  For instance, during the Civil Rights Movement, religiosity was correlated very closely with an aversion towards political militancy (26% of the “very religious” vs 70% of the “not at all religious” supported political militancy) (Reed 1986, p. 58).  If advertisers and mass media corporations are trying to create neuroticism and radicalism about politics, as well as increase Americans’ acceptance of rapid social change, American religiosity would be a barrier to doing so.  Given these drawbacks of religiosity from the perspective of advertisers and mass media corporations, it makes sense for religious views to decline rapidly as those corporations gain more power over American culture and social society.  Q.E.D. 

 

While we are not going to contest this argument’s accuracy in relation to what is being replaced (traditional religions), we will say that it is overly simplistic about the future of religion.  What we have seen is not the rise of a confident, ideological negation of religion as a concept but rather the negation of all existent religions.  Shifts in religiosity have been, in large part, a movement towards being “unaffiliated” rather than towards particular negative views of religiosity, such as agnosticism or atheism (even as both of those numbers have grown significantly).  Indeed, among young people, only 10% identify as either agnostic or atheist, less than a third of the religiously unaffiliated population (Pew Research Center Sept. 2020).  As Charles Taylor has argued, in our time, while people have rejected traditional religions, they have not rejected the concept of religiosity; rather, they have continued to search for a new spirituality:

 

There has to be more to life than our current definitions of social and individual success define for us…[The search for religion] is a personal search, and can easily be coded in the language of authenticity: I am trying to find my path, or find myself.

 

Moreover the seekers in this case are the heirs of the expressive revolution, with its roots in the reactions of the Romantic period against the disciplined, instrumental self connected to the modern moral order. This means not only that they resonate with the “Peggy Lee” response, but also that they are seeking a kind of unity and wholeness of the self, a reclaiming of the place of feeling, against the one-sided pre-eminence of reason, and a reclaiming of the body and its pleasures from the inferior and often guilt-ridden place it has been allowed in the disciplined, instrumental identity. The stress is on unity, integrity, holism, individuality; their language often invokes “harmony, balance, flow, integrations, being at one, centred”.

 

Because of this, the search for spiritual wholeness is often closely related to the search for health. We seem to have something akin to the medicalization of sin and vice in the nineteenth century that I described in a previous chapter. A link is created here between spiritual and physical health, but its basis is entirely different. Mainstream medicine objectifies the body and its processes, and what I called medicalization extends this objectification to vice. But contemporary links between health and spirituality usually take off from alternative kinds of medicine. Far from seeing the body just as an object of natural science, they see it as the site of spiritual currents and flows. Recovering health requires that one put oneself right with these, and this can only be done by opening oneself to them, the very opposite stance from objectification.  (Taylor 2007, p. 507)

 

Taylor argued that this search was likely to cause people to accept what social scientist Mikhail Epstein has described as a “minimal religion,” a religious attitude common in post-Soviet Europe.  Taylor describes “minimal religion” as such:

 

“Minimal religion” is a spirituality lived in one’s immediate circle, with family and friends, rather than in churches, one especially aware of the particular, both in individual human beings, and in the places and things which surround us. In response to the universalist concern for the “distant one” stressed in Marxist communism, it seeks to honour the “image and likeness of God” in the particular people who share our lives.  (Taylor 2007, p. 534)

 

In Taylor’s argument, we can see the direction of recent shifts in religious attitude: away from doctrinal or institutional restraints and towards a religion which focuses more on the individual and their individual relationships.  This, again, fits naturally into our theory (by getting rid of institutional sources of culture and sociality while still retaining religious attitudes).  However, we are still presented with a precarious religious situation, one which seems to have little long-term stability.  Indeed, Taylor’s development of the theme largely repeats what we already know from the basic facts of the recent religious shift:  the change that has occurred has left us without any particular religion, yet it has not caused people to renounce religion as a concept.  It seems that this situation requires some new attitude to complete it, to lead our society out of a vague understanding of the world and towards a clearer one, whether that be atheism or some kind of religion. 

 

Further, the fact that people refuse to outright renounce religiosity and declare themselves atheist or agnostic does not necessarily mean that they therefore believe in God.  Indeed, polling results have shown a rapid and significant movement away from theism in recent years, with the percentage of Americans who believe in God decreasing from 92% in 2011 to 81% in 2022 (a rate of decline of about 1% per year).  Further, young people are even less likely to believe in God, with only 68% of young people believing in God in 2022, and their rate of decline is about the same as that of the rest of the population (a decline of ten points since 2011) (Jones June 2022).  What this shows is that roughly twice as many Americans decline to state that they believe in God as say they are atheist or agnostic, and among young people, roughly three times as many.  This suggests a further step in “minimalizing” religion – a movement away from even the belief in God without the outright renunciation of religious or spiritual attitudes.

 

It seems unlikely that such a social shift would end with us believing in a religion or religious attitude that already exists.  First, the attitudes we have described are not well represented by any currently significant religion.  Some kind of institutional power and doctrinal belief is required for most, and belief in God is required for nearly all; it is hard to imagine what any currently existent religion would even look like without these elements.  Second, it seems that people are outright rejecting all currently popular positions in very large numbers.  Perhaps they are simply not in possession of some esoteric knowledge that would convince them to follow some religious worldview, but this seems unlikely on such a massive scale.  Rather, it seems that people are knowingly refusing all current religious options, including the current religions, the rejection of all current religions, and uncertainty about current religions.  Finally, it seems that the needs and wants of those who reject both religion and the variants of irreligion do not cleanly “fit” into any single existing religious attitude.  This can be seen simply in the fact that different aspects of those people’s needs have been argued by reasonable authors to “fit” into everything from Catholicism to Eastern Orthodoxy to Judaism to Islam (Taylor 1999, pp. 35-37; Shmemann 1998, p. 134; Buber 2016, p. 112; Murad 2020, pp. 46-49).  What these arguments more precisely betray is not the attitudes of the disillusioned population but rather the attitudes and religious needs of those authors themselves, who are respectively Catholic, Orthodox, Jewish, and Muslim.  What the disillusioned population desires is uncertain, but what it absolutely does not desire is clear:  the old religious options whose rejection is the basis of this population’s distinctiveness.

 

The new religion which will become dominant in the future is, as a result, something about which we can only speculate.  The shift towards a particular new attitude will not be a continuation of previous social trends but rather something fundamentally new, the foundation of a new religious attitude.  This requires the creation of new social trends, something we cannot analyze in the way that we can analyze current social trends.  As a result, the rest of this section is going to be far more speculative, and therefore more uncertain, than what we have said thus far.  This is not to say that the argument we will make for the rest of this section is unwarranted; rather, the warrant we will use is different from the warrant we have previously used.  Thus far, we have made arguments which generalize from past events and assume that, if a certain rule applied to many events in the past, it will continue to apply to many events in the future.  When speculating on a future social trend, that style of argument is impossible.  Instead, we will estimate the coming social change by looking at past religious shifts and asking which modern institutions or groups share a significant number of traits with the sources of those previous religious changes.  From that, we can conclude that there is good reason to believe that the coming religious change will have a source very similar to the source of the previous changes.  The basic structure we will be using for this argument was put logically by philosopher Noël Carroll as such:

 

1. Item A has properties p1 . . . px-1 (e.g., A is a 1964 Saab).
2. Item B has properties p1 . . . px-1.
3. Item B also has property px (e.g., B can go 55 mph).
4. Therefore, (probably) Item A also has property px.  (Carroll 1988, p. 15)
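
 

Rendered as a schema in modern notation (our own gloss, not Carroll’s), the inference reads:

\[
\left(\bigwedge_{i=1}^{x-1} p_i(A)\right) \wedge \left(\bigwedge_{i=1}^{x-1} p_i(B)\right) \wedge p_x(B) \;\Longrightarrow\; \text{(probably) } p_x(A)
\]

The inference is inductive rather than deductive: its strength grows with the number and relevance of the shared properties p1 . . . px-1.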

 

As a result of this unorthodox style of argument, it would not be surprising if our argument here were wrong, as it is an evidential argument for which there is necessarily not much evidence.  That being said, the argument still suggests that what we will say is true, even if it cannot confirm it with certainty.  As with the economic section of Part One of this article, readers who would rather not entertain a theory which is merely probable rather than certain can skip this section.

 

We will use as our historical example the founding and subsequent spread of Christianity, the last significant religious discontinuity in the West.  The reason we are using this shift as our example is quite simple:  the creation and spread of Christianity is a case in which a new religion eliminated many of the fundamental parts of the previous religious attitude, such as polytheism, and replaced them with something new, monotheism.  If, as we have argued, the future religion is unlikely to retain a God, then that shift (from monotheism to non-theism) seems very similar to the shift from many gods to only one.  We will not be giving a full history of the period or of the spread of Christianity; rather, we will focus on a few important traits of early Christianity which we can apply to the present time:  the people who founded it (Jews in Judea), the way in which its founding separated it from the religion that preceded it among the founders (Judaism), and the course of its subsequent spread.

 

A very important part of the foundation of Christianity is the context in which it was founded: Roman Judea.  Judea was, in many ways, a Jewish state, one where the norms and the people were, for the most part, Jewish (Schäfer 2003, pp. 89-90).  At the same time, Judea was a state where Jewish traditions, particularly in relation to Jewish law, were directly undermined and nearly powerless, contrary to the situation for Jews in Judea prior to Roman rule (Schäfer 2003, pp. 91-92).  That being said, Judea was still relatively open to Jews compared to the rest of Rome, with Jews being exempted from military service, for instance (Schäfer 2003, p. 108).  This precarious situation of complete separation from “normal” Roman society, together with the Jews’ status as a tolerated but (relatively) oppressed group, created an increasingly zealous and violent Jewish population.  This population was distrustful of foreigners, with a large amount of distrust even of half-Jews (including one of their rulers, Herod) for being insufficiently Jewish (Rogers 2021, p. 38).  Jews further found themselves aggressively opposed to traditional Roman rules, particularly the tax system as well as the Roman system of slavery (Schäfer 2003, pp. 113-114).  This placed zealous Jews on a collision course with their Roman leadership, with the first wide-scale rebellion occurring in 3 BC and rebellions occurring consistently from that point on, backed by violent religious leaders and growing cults which supported such violence (Rogers 2021, pp. 53-57, pp. 115-117, pp. 113-114).  In all of these cases, the Jewish zealots lost.  This repeated failure, as well as rising enmity between Roman political elites and local Jews, caused increasing religious upheaval among Jews, giving rise to, among other movements, Christianity, an oddity due to its focus on love rather than violence (Schäfer 2003, p. 106).  However, this new religion failed to grow, remaining a tiny contingent for the next half century or so (as evidenced by the Romans’ lack of interest in wiping out the Christians during the Great Jewish Revolt) (Rogers 2021, p. 267).  While there was some spread due to Paul the Apostle’s missionary work, it was fundamentally marginal, and none of the missionary groups that would spread Christianity across Rome in the second and third centuries had their direct source in these missionary activities (Freeman 2009, pp. 63-64).  It is also important to note that early Christianity was preached mostly to Jews and was led by people who proclaimed themselves to be Jewish (Freeman 2009, p. 52).  Basically, during this period, Christianity was mainly a small Jewish cult with very little popularity among either the Jews or the Romans.

 

A fundamental shift in both Jewish and Christian history was the Great Jewish Revolt of 66-73 AD.  For Jews, it was in many ways the culmination of the ethnic and religious tension that had grown over the previous century as a subject people under often corrupt and incompetent Roman rule (Rogers 2021, pp. 115-117).  More important was the massive and brutal failure that the revolt became.  Roman soldiers subjected Judea to a brutal defeat, committing atrocities (which would today be considered war crimes) against civilians, starving the citizens of Jerusalem to the point of cannibalism, burning down entire neighborhoods, selling massive numbers of civilians into slavery, and burning down the Second Temple, the center of the Jewish religion (Rogers 2021, pp. 247-248, pp. 349-350, pp. 364-365, p. 366, pp. 357-361).  The long-term effects of the Zealots’ loss were the collapse of Jewish economic life, with most rural Jews impoverished to the point of serfdom; the need for the Jewish religion to reorient itself after its centerpiece (the Temple) had been destroyed; and the disappearance of the remaining power of the religious elite in the Synhedrion (Schäfer 2003, pp. 135-137).  To put it simply, the war was a national tragedy that left the Jewish people in need of a new religious option.

 

At the same time, the Christians began experiencing their most prosperous period after the Great Jewish Revolt.  The Christians viewed the failure of the revolt, as well as the destruction of the Second Temple, as a confirmation of their own religion’s truth (Rogers 2021, p. 7).  It was in the twenty or thirty years after the Great Jewish Revolt that the Gospels, today the centerpiece of the Christian religion, were first written (Freeman 2009, pp. 72-73).  Several decades later, the now (relatively) codified religion of Christianity started to grow rapidly, first creating stable communities within the Jewish community by the 100s (about 30 years after the Great Jewish Revolt), and then growing beyond Judea to Greece by the early 110s (about 40 years after the Great Jewish Revolt) (Freeman 2009, p. 114, p. 125).  The rapid spread of Christianity following the Great Jewish Revolt is at first difficult to explain, particularly in light of the scapegoating Christians faced from Jews immediately after the revolt (Freeman 2009, p. 72).  Yet, by looking at the views of early Christians, we can see how Christianity might have become appealing to, at least, those in Judea.  First, it was a significantly more peaceable religion than the other cults of the time.  Not only did Jesus famously approve of paying the very taxes which the Zealots had viewed as their casus belli (“Render unto Caesar what is Caesar’s”), but the Christians also tended to side with the Romans in cases of rebellion, opposing, for instance, the Bar Kochba Rebellion (a Messianic rebellion by Jews in the 130s), and in the immediate aftermath of that rebellion Christians attempted to distinguish themselves from Jews (Schäfer 2003, p. 154; Freeman 2009, p. 133).  Second, the Christians were religiously universalist.  Christians at the time attempted to convert a large number of people of all kinds (both Jews and Gentiles), and anyone was allowed to join the religion, provided they went through the proper procedures (Freeman 2009, p. 128, p. 131).  Further, early Christianity incentivized conversion rather strongly, with most early Christians viewing belief in the basic tenets of Christianity alone as enough to secure one’s entry into Heaven, no matter how a given Christian behaved in life (Von Balthasar 2014, p. 42).  Finally, early Christianity recommended celibacy and asceticism among its members.  Even before Christianity was clearly codified, Paul advocated (in the First Letter to the Corinthians) for currently unmarried Christians to remain unmarried, and in the second and third centuries CE, when Christianity started to spread at an ever-quickening pace, ascetic principles became more and more influential within the Church (Heid 2000, pp. 33-40, pp. 61-63).  Further, the latter two traits – universalism and asceticism – seem to have reinforced one another in proportion to their extremity.  For instance, Marcion, a Christian figure of the 130s later derided as a heretic by the Church, took both asceticism and universalism to extremes, advocating the avoidance of marriage as well as believing in a completely universal God distinct from the factional God of the Old Testament (Freeman 2009, p. 134, pp. 135-136).  It is also notable that Marcion’s group, although designated as heretics by the Christians, seemed to snowball in popularity just as the Christians did, showing that Marcion’s views also coincided with what much of the Jewish and Roman populace desired (Freeman 2009, p. 136).

 

From all of these observations, we can form a hypothesis about why Christianity spread as it did.  To do so, we need to understand Heidegger’s concept of “dasein.”  Heidegger defined dasein as a particular attitude towards the world, or, to use philosopher John Haugeland’s definition, dasein is:

 

(i) a distinctively human way of living that
(ii) embodies an understanding of being and for which
(iii) individual people (“cases of dasein”) can take responsibility  (Haugeland 2013, p. 182)

 

So, then, a dasein can be something like the “Jewish Dasein” or the “Roman Dasein,” each of which involves a different understanding of the surrounding world and of how one should behave within that world.  What is important for our purposes is Heidegger’s view of how a dasein can die.  As Haugeland explains:

 

Etruscan, for instance, is a dead language, and alchemy is an equally dead science. How do languages and sciences die? Clearly, they die when (and because) they can no longer be spoken or practiced—death is the possibility of no more possibilities.  (Haugeland 2013, p. 183)

 

So, then, the death of a particular dasein occurs when it becomes impossible to practice that dasein (that is, to hold and understand the world according to its particular set of cultural values) in the future, either because all of the people who knew how to practice it have died or because nobody views it as useful any longer.  Further, it seems that, at least for a large percentage of people, the prospect of the death of their personal dasein is worrying.  Haugeland describes the variety of ways people can react to their dasein’s death as such:

 

Taking responsibility resolutely means living in a way that explicitly has everything at stake. Heidegger’s way of saying this is as follows: dasein is the entity for which, in its being, that very being is an issue (SZ 12, 191, 240, 263). This is the most basic definition of dasein; all the others follow from it. It follows, for instance, that dasein’s basic makeup is being-in-the-world because its being could not be an issue for it if its life were not essentially at risk, and this risk presupposes the potential intransigence of intraworldly entities. And it follows that dasein is the entity that discloses entities in their being since entities could never be intransigent if they were not projected onto possibilities or if impossibilities were not unacceptable. Existence, in Heidegger’s proprietary sense, is the being of dasein; hence, it means all of these.

 

Fallen, everyday dasein runs away and hides from the issue of its being (though it can never escape it); resolute dasein faces up to it by taking over responsibility for itself—that is, by resolving to repeat itself while holding itself free for taking it back. “Refusing to accept” intransigent impossibilities has a double meaning. One way of refusing to accept is bullheadedly refusing even to see—blinding oneself.  (Haugeland 2013, p. 216)

 

What Haugeland is saying here is that, faced with the prospect of death, people can take one of two positions – either to accept the death of their dasein or to fight against it.  Heidegger personally views accepting the death of one’s dasein as the significantly more responsible and better course of action, but that is not the point of this discussion.  The point is that, for many, the reaction to the prospective death of their dasein will be to take whatever action they can to keep that dasein in existence.  This argument can help us understand how Christianity spread so rapidly following the Great Jewish Revolt.

 

For a population, the continued existence of their societal dasein requires some kind of cultural continuity.  This can come from being part of a family (and therefore in a privileged position to teach one’s children one’s values) or part of a society (and therefore in a privileged position to teach one’s fellow members of society one’s values).  For the Jews, for instance, that continuity was the ability to have a community in which young people were inducted into Judaism by virtue of the fact that they lived in a Jewish land and listened to their Jewish parents and Jewish religious elites.  However, there can be social tragedies which destroy communities.  In this case, that social tragedy was the failure of the Great Jewish Revolt, which destroyed much of the Jewish community in Judea.  For people in such contexts, their dasein is under attack – and worryingly so.  This is particularly the case among groups which view themselves as somehow contrary to the mainstream of the broader society in which they live (even if they are in fact the majority of the smaller society in which they live), for such minorities are constantly (and usually reasonably) worried about being assimilated into the mainstream of the broader society, which is very clearly the government’s preferred group of people.

 

This is where Christians changed the picture for many Jews.  As a celibate group (or at least a group which praised celibacy), Christians had little chance to spread their values via familial relationships.  Even the societal spreading of religious values was unlikely, due to their marginality.  That marginality required early Christians to spread their dasein using methods other than the classic social or familial ones:  they were required to spread their dasein by convincing and conversion.  They therefore created a religion perfect for that purpose – a religion which avoids conflict (and therefore their community’s destruction) and which advocates the conversion of as many people as possible (via universalism and a missionary spirit).  To those affected by a recent societal tragedy, such an attitude will be attractive, as it offers their dasein more security than their current attitude, which stands in danger of destruction.  It is important to note that the new attitude must be ideologically close to the attitude which people are giving up – as Christianity was quite similar to Judaism – to make people feel that they are not giving up their dasein but rather preserving it.  As such an ideology spreads, it will eventually grow to include enough people for the religion to expand outside its natural constituency.  This shift in how the religion proliferates happens for two reasons:  first, the many people attempting to proselytize the religion will slowly improve their methods of spreading it, eventually becoming highly effective; and second, more people engaged in missionary work will innately spread a religion further than fewer people would.  From there, so long as the people in a society are looking for a new religious attitude, a new religion will be able to become popular rather quickly across that society, as Christianity did across the Roman Empire.

 

Our final observation is that any religion which spreads in these conditions will innately support and reaffirm the current social trends.  The first reason is that, to avoid conflict, a religion must necessarily support people acting amicably towards institutions; that is, after all, the best way of assuring that one is not unduly oppressed by them.  The second reason is that, for a religion to appear universally attractive, it must somehow appeal to the “normal” person; therefore, it must appeal to that person’s non-religious morality and thus to whatever the mainstream moral views of society are.  Finally – and this reason does not apply to Rome but does apply to the modern world – if we accept that modern social change occurs as a result of advertisers’ and mass media producers’ preferences, then we must also accept that religious change, which is a sort of social change (a change in the attitudes of people), should occur in the modern world as a result, at least in part, of those same advertisers and mass media producers.  So much for our observations of religious change generally.

 

To make our argument work, we must find a particular case in the modern world in which this group of traits also exists.  From there, we can conclude that, in that case, we are seeing the beginnings of a religion, if not an already formed one.  The group that we will identify as the source of the coming religious shift is the LGBT community.  This is, again, not to say that to be a member of this group is to be a member of a religious group, or even to say that any of its members currently belong to such a religious group.  It is rather to say that, at some time in the future, a religious grouping will develop out of this group, and that such a shift will occur via the operative method which, as we have described, occurred in relation to Christianity.

 

To establish how the LGBT community could develop into a religious group, we must first identify how the history of the LGBT community lines up with the history of Judaism prior to the spread of Christianity.  For this, we will use the attitudes of the 1970s gay liberation writer Carl Wittman.  Wittman was significantly more extreme than much of the gay community, even in his adoptive community of San Francisco, but his writings are useful for understanding the attitudes and complaints circulating within the milieu of 1970s gay culture.  The first notable thing is the contempt which Wittman has for heterosexuality:

 

Exclusive heterosexuality is fucked up.  It reflects a fear of people of the same sex, it’s anti-homosexual, and it is fraught with frustration.  Heterosexual sex is fucked up, too; ask women’s liberation about what straight guys are like in bed.  Sex is aggression for the male chauvinist; sex is obligation for traditional woman.  And among the young, the modern, the hip, it’s only a subtle version of the same.  For us to become heterosexual in the sense that our straight brothers and sisters are is not a cure, it is a disease.  (Wittman 1970, p. 3)

 

Further, in spite of Wittman’s arguing that bisexuality is the natural position of man, there is a great deal of skepticism towards current bisexuals:

 

We continue to call ourselves homosexual, not bisexual, even if we do make it with the opposite sex also, because saying “Oh, I’m Bi” is a copout for a gay.  We get told it’s OK to sleep with guys as long as we sleep with women, too, and that’s still putting homosexuality down.  We’ll be gay until everyone has forgotten that it’s an issue.  (Wittman 1970, p. 3)

 

This argument, although certainly very odd, does fit the Jewish example – contempt towards even the mixed group, and a feeling of superiority by virtue of holding an esoteric attitude dominant in the community in which it was written.

 

A second observation is the similarity of the San Francisco gay community’s situation to the Jewish situation in the Roman context.  Wittman describes gays as being separated from “normal” society into a small community (San Francisco) in which the minority group is accepted, although still oppressed and controlled by a stronger majority group.  In Wittman’s words:

 

San Francisco is a refugee camp for homosexuals.  We have fled here from every part of the nation, and like refugees elsewhere, we came not because it is so great here, but because it was so bad there.  By the tens of thousands, we fled small towns where to be ourselves would endanger our jobs and any hope of a decent life; we have fled from blackmailing cops, from families who disowned or ‘tolerated’ us; we have been drummed out of the armed services, thrown out of schools, fired from jobs, beaten by punks and policemen.

 

And we have formed a ghetto, out of self-protection.  It is a ghetto rather than a free territory because it is still theirs.  Straight cops patrol us, straight legislators govern us, straight employers keep us in line, straight money exploits us.  (Wittman 1970, p. 3)

 

The third important trait present in Wittman's presentation is disdain for the traditional institutions of society, in this case, marriage.  Wittman states his opposition as follows:

 

Traditional marriage is a rotten, oppressive institution.  Those of us who have been in heterosexual marriages too often blamed our gayness on the breakup of the marriage.  No.  They broke up because marriage is a contract which smothers people, denies needs, and places impossible demands on both people.  And we had the strength, again, to refuse to capitulate to the roles which were demanded of us.  (Wittman 1970, p. 4)

 

Instead, Wittman supported a new kind of relationship institution, one which did away with "exclusiveness," "promises about the future," and "inflexible roles" (Wittman 1970, p. 4).  This is to say, Wittman supported a society of open relationships.  Finally, while Wittman does say that homosexuality (or at least bisexuality) is innately universal, he does not argue for a missionary spirit in any real way (Wittman 1970, p. 3).  Instead, his strategy to "free the homosexual in anyone" is to "be gentle, and keep talking & acting free," that is, to live separately and individually and to expect others to "convert" by example, rather than by active missionary work (Wittman 1970, p. 7).  In Wittman, we see all of the traits of the Jewish attitudes toward Rome:  contempt for those outside the group, contempt for "normal" institutions, an avoidance of missionary work, and a separation from mainstream society.

 

While many of these traits (such as contempt) are impossible to quantify, those that are testable do seem to accurately describe the homosexual community at the time.  For one, homosexual relationships in 1970s San Francisco do seem to have rejected traditional marriage models in favor of more "open" and "nonexclusive" models.  In 1978, the mean number of new partners for a given San Francisco homosexual during an average four-month period was 29.3, and the median was 16 (Hethcote 1992, p. 59).  For another, San Francisco does seem to have acted as a "ghettoized" area for many homosexuals in the 1970s.  Somewhere around 20% of San Francisco's population was estimated to be homosexual in 1977 – nowhere near a majority, yet sizable enough for homosexuals to form their own communities and for the city to be identified as having a relatively large homosexual population (Gold 1977).  Therefore, there seems to be good reason to believe that Wittman's other, non-provable observations about homosexual attitudes in the 1970s have at least some accuracy.

 

So, then, we have established that the situation of American homosexuals was similar in many important ways to the situation of Jews in the first century CE.  Similarly, both faced a massive national tragedy which destroyed large parts of their respective communities.  For Roman Jews, that calamity was a war.  For American homosexuals, that calamity was a disease.  AIDS was, much like the Great Jewish Revolt, a culmination of many of the contrasts which American homosexuals had built up with mainstream society.  In particular, the opposition to traditional marriage and the insistence on open sexuality led to the spread of AIDS to America via sex tourism, as well as fueling the extent of its spread within San Francisco (Pepin 2011, pp. 191-192, p. 234).  By 1981, 44% of gay men had HIV, an infection that, at the time, meant rapid death (Pepin 2011, p. 233).  The homosexual community, to put it simply, suddenly faced a calamity that threatened the continued existence of its culture in San Francisco or anywhere else – a threat to the continuation of its Dasein.  And, of course, we need not emphasize how much more significant this fear was for homosexuals, being unable to reproduce and certainly unable to reproduce their homosexuality, than it was for Jews.  It is in this context that the rhetoric and attitudes of the homosexual community shifted, just as the rhetoric of the Jewish community had shifted following the Great Jewish Revolt.

 

The shift away from conflict with traditional society has been very clear.  This shift is exemplified by political commentator Andrew Sullivan's 1989 advocacy for gay marriage.  Sullivan argued for gay marriage from a distinctly "social conservative" perspective, stating that it would be an "extension" of marriage for heterosexuals – a far cry from Wittman's rhetoric on this topic.  Sullivan takes this argument a step further by outright opposing promiscuity among homosexuals:

 

[Gay marriage] provides role models for young gay people who, after the exhilaration of coming out, can easily lapse into short-term relationships and insecurity with no tangible goal in sight.  (Sullivan 1989)

 

Finally, Sullivan takes direct aim at the traditional homosexual political elites, who uniquely opposed gay marriage because of their opposition to assimilation into mainstream, heterosexual society:

 

The gay movement has ducked this issue primarily out of fear of division. Much of the gay leadership clings to notions of gay life as essentially outsider, anti-bourgeois, radical. Marriage, for them, is co-optation into straight society. For the Stonewall generation, it is hard to see how this vision of conflict will ever fundamentally change. But for many other gays—my guess, a majority—while they don't deny the importance of rebellion 20 years ago and are grateful for what was done, there's now the sense of a new opportunity. A need to rebel has quietly ceded to a desire to belong. To be gay and to be bourgeois no longer seems such an absurd proposition. Certainly since AIDS, to be gay and to be responsible has become a necessity.  (Sullivan 1989)

 

Yet what is more interesting about this quote than its extension of Sullivan's opposition to the separatist line espoused by Wittman and others is its identification of the movement towards assimilation as being spurred by the AIDS epidemic among those most affected by it.  Elsewhere, Sullivan ties gay marriage even more closely to AIDS, describing it as a "public health measure" (Sullivan 1989).  This (if true) seems to confirm our argument that AIDS spurred a movement towards a more assimilationist view of the mainstream, particularly if Sullivan is correct that young homosexuals had a desire to be "bourgeois."  That is an empirical question difficult to answer.  On the one hand, organizations representing homosexuals' interests have, over the past thirty years, done as Sullivan argued and supported gay marriage quite aggressively, often viewing it as the culmination of a rights campaign.  On the other hand, the actual percentage of gays who enter gay marriages remains quite small, with only about 21% of LGBT people married, relative to 48.3% of the total population (Jones Feb. 2021b).  Yet we can agree with Sullivan that there has been at least a minor shift (to the tune of 20%) towards traditional family units, or at least a shift away from disgust for those family units, in line with our theory.  Further, those non-homosexual LGBT people whom Wittman opposed (people he specifically identified as trying to assimilate into normal society) have become the norm within the LGBT community, with a majority (56.8%) of LGBT Americans identifying as bisexual (Jones Feb. 2022).

 

Universalism is slightly less easy to establish, yet it is still clear in some significant recent works by LGBT authors, particularly Judith Butler.  (As a quick warning, Butler tends to make points very slowly and in deliberately difficult language.  We will summarize Butler's important points in the text; if the reader does not wish to read the quotes from Butler, little will be lost.)  Butler argues that the purpose of what they describe as "feminism" is to subvert gender, which they view as inherently oppressive, through repeated performative acts:

 

Gender ought not to be construed as a stable identity or locus of agency from which various acts follow; rather, gender is an identity tenuously constituted in time, instituted in an exterior space through a stylized repetition of acts. The effect of gender is produced through the stylization of the body and, hence, must be understood as the mundane way in which bodily gestures, movements, and styles of various kinds constitute the illusion of an abiding gendered self. This formulation moves the conception of gender off the ground of a substantial model of identity to one that requires a conception of gender as a constituted social temporality. Significantly, if gender is instituted through acts which are internally discontinuous, then the appearance of substance is precisely that, a constructed identity, a performative accomplishment which the mundane social audience, including the actors themselves, come to believe and to perform in the mode of belief. Gender is also a norm that can never be fully internalized; "the internal" is a surface signification, and gender norms are finally phantasmatic, impossible to embody. If the ground of gender identity is the stylized repetition of acts through time and not a seemingly seamless identity, then the spatial metaphor of a "ground" will be displaced and revealed as a stylized configuration, indeed, a gendered corporealization of time. The abiding gendered self will then be shown to be structured by repeated acts that seek to approximate the ideal of a substantial ground of identity, but which, in their occasional discontinuity, reveal the temporal and contingent groundlessness of this "ground."  The possibilities of gender transformation are to be found precisely in the arbitrary relation between such acts, in the possibility of a failure to repeat, a de-formity, or a parodic repetition that exposes the phantasmatic effect of abiding identity as a politically tenuous construction.

If gender attributes, however, are not expressive but performative, then these attributes effectively constitute the identity they are said to express or reveal. The distinction between expression and performativeness is crucial. If gender attributes and acts, the various ways in which a body shows or produces its cultural signification, are performative, then there is no preexisting identity by which an act or attribute might be measured; there would be no true or false, real or distorted acts of gender, and the postulation of a true gender identity would be revealed as a regulatory fiction.  That gender reality is created through sustained social performances means that the very notions of an essential sex and a true or abiding masculinity or femininity are also constituted as part of the strategy that conceals gender's performative character and the performative possibilities for proliferating gender configurations outside the restricting frames of masculinist domination and compulsory heterosexuality.

Genders can be neither true nor false, neither real nor apparent, neither original nor derived. As credible bearers of those attributes, however, genders can also be rendered thoroughly and radically incredible.  (Butler 1999, pp. 179-180)

 

What is interesting about Butler's perspective is that the shift which they are trying to create is normative and universal.  First, as a normative claim, Butler is arguing that life will be better for those who realize that gender is a fiction.  Second, as a universal claim, Butler is not simply arguing that some individual should not respect gender but rather that the idea of gender ought to be discredited on the scale of our entire society.  While this has some relationship to Wittman's own belief that all people are innately bisexual, what distinguishes it is the precision of Butler's plan to destroy gender:

 

The notion of an original or primary gender identity is often parodied within the cultural practices of drag, cross-dressing, and the sexual stylization of butch/femme identities. Within feminist theory, such parodic identities have been understood to be either degrading to women, in the case of drag and cross-dressing, or an uncritical appropriation of sex-role stereotyping from within the practice of heterosexuality, especially in the case of butch/femme lesbian identities. But the relation between the "imitation" and the "original" is, I think, more complicated than that critique generally allows. Moreover, it gives us a clue to the way in which the relationship between primary identification—that is, the original meanings accorded to gender—and subsequent gender experience might be reframed.  The performance of drag plays upon the distinction between the anatomy of the performer and the gender that is being performed. But we are actually in the presence of three contingent dimensions of significant corporeality: anatomical sex, gender identity, and gender performance. If the anatomy of the performer is already distinct from the gender of the performer, and both of those are distinct from the gender of the performance, then the performance suggests a dissonance not only between sex and performance, but sex and gender, and gender and performance. As much as drag creates a unified picture of "woman" (what its critics often oppose), it also reveals the distinctness of those aspects of gendered experience which are falsely naturalized as a unity through the regulatory fiction of heterosexual coherence. In imitating gender, drag implicitly reveals the imitative structure of gender itself—as well as its contingency. Indeed, part of the pleasure, the giddiness of the performance is in the recognition of a radical contingency in the relation between sex and gender in the face of cultural configurations of causal unities that are regularly assumed to be natural and necessary. In the place of the law of heterosexual coherence, we see sex and gender denaturalized by means of a performance which avows their distinctness and dramatizes the cultural mechanism of their fabricated unity.  (Butler 1999, pp. 174-175)

 

Here, Butler argues that drag is the best tactic to destroy gender.  The argument is not an especially credible one, and the tactic does not seem very likely to succeed, but our point does not concern Butler's tactical brilliance.  Our point is that Butler argues that, tactically, people ought to advance their anti-gender position by actively thrusting it upon others – by deliberately dressing up in drag in the belief that people, seeing it, will no longer take the concept of gender seriously.  If anything is present in Butler, it is a missionary spirit.

 

A rather obvious objection at this point is that, while universalism and assimilationism are both preferences of post-AIDS LGBT argumentation, they seem to be completely separate, with Sullivan endorsing assimilationism but not universalism and Butler endorsing universalism but not assimilationism.  This is true of Sullivan and can be blamed on the earliness of his work – originally published in 1989, only shortly after the peak of the AIDS epidemic.  In Butler's case, though, in spite of their rhetoric, the performative tactic ends up being rather assimilationist and anti-political in practice.  Philosopher Martha Nussbaum has argued this very well, using Butler's opposition to pornography law as an example:

 

Butler not only eschews such a hope [of positive political action], she takes pleasure in its impossibility. She finds it exciting to contemplate the alleged immovability of power, and to envisage the ritual subversions of the slave who is convinced that she must remain such. She tells us--this is the central thesis of The Psychic Life of Power--that we all eroticize the power structures that oppress us, and can thus find sexual pleasure only within their confines. It seems to be for that reason that she prefers the sexy acts of parodic subversion to any lasting material or institutional change. Real change would so uproot our psyches that it would make sexual satisfaction impossible. Our libidos are the creation of the bad enslaving forces, and thus necessarily sadomasochistic in structure. But Butler's argument has implications well beyond the cases of hate speech and pornography. It would appear to support not just quietism in these areas, but a much more general legal quietism--or, indeed, a radical libertarianism. It goes like this: let us do away with everything from building codes to non-discrimination laws to rape laws, because they close the space within which the injured tenants, the victims of discrimination, the raped women, can perform their resistance. Now, this is not the same argument radical libertarians use to oppose building codes and anti-discrimination laws; even they draw the line at rape. But the conclusions converge.  (Nussbaum 1999)

 

Indeed, in Nussbaum's view, Butler's whole theory is, in many ways, a method of depoliticizing feminism and making it effectively supportive of the entire established order:

 

The great tragedy in the new feminist theory in America is the loss of a sense of public commitment. In this sense, Butler's self-involved feminism is extremely American, and it is not surprising that it has caught on here, where successful middle-class people prefer to focus on cultivating the self rather than thinking in a way that helps the material condition of others. Even in America, however, it is possible for theorists to be dedicated to the public good and to achieve something through that effort.

Many feminists in America are still theorizing in a way that supports material change and responds to the situation of the most oppressed. Increasingly, however, the academic and cultural trend is toward the pessimistic flirtatiousness represented by the theorizing of Butler and her followers. Butlerian feminism is in many ways easier than the old feminism. It tells scores of talented young women that they need not work on changing the law, or feeding the hungry, or assailing power through theory harnessed to material politics. They can do politics in the safety of their campuses, remaining on the symbolic level, making subversive gestures at power through speech and gesture. This, the theory says, is pretty much all that is available to us anyway, by way of political action, and isn't it exciting and sexy?  (Nussbaum 1999)

 

As we can see from Nussbaum, Butler's focus on "performance" means that any law is fine so long as it allows space for a kind of ritual criticism.  Fundamentally, this argument boils down to a variation on Jesus' dictum: "Render unto Caesar the things that are Caesar's, and unto performance the things that are performance's."  That is, one should accept the law as it is and should separately "perform" so as to "subvert" the system without actually confronting it.  There could be no better analogue for early Christianity's view of Rome.

 

What is further interesting is that, like Paul proclaiming himself a Jew, both Sullivan and Butler avoid identifying themselves as activists for LGBT rights or, in Butler's case, for the attitudes of the LGBT community, while very much holding views that derive from their membership in the LGBT movement.  First, Sullivan describes his argument as "conservative," although it is an argument that he recognizes goes directly contrary to, and beyond, what real social conservatives believe (Sullivan 1989).  Second, while declaring themselves a feminist, Butler takes, by their own admission, the opposite position of most contemporary feminists on drag, and the opposite position of feminists like Catharine MacKinnon on pornography (Nussbaum 1999).  Yet more interestingly, Butler states they formulated their opinion while directly engaging in an activity they could only have engaged in by virtue of their homosexuality:

 

Why drag? Well, there are biographical reasons, and you might as well know that in the United States the only way to describe me in my younger years was as a bar dyke who spent her days reading Hegel and her evenings, well, at the gay bar, which occasionally became a drag bar. And I had some relatives who were, as it were, in the life, and there was some important identification with those "boys." So I was there, undergoing a cultural moment in the midst of a social and political struggle. But I also experienced in that moment a certain implicit theorization of gender: it quickly dawned on me that some of these so-called men could do femininity much better than I ever could, ever wanted to, ever would.  (Butler 2004, p. 213)

 

Indeed, the passage seems to be more about Butler's identification with and interest in fellow members of the LGBT community, particularly drag queens, than about Butler's identification with or interest in women.  From this, the point we are making is that the activism of both Sullivan and Butler identifies as being part of one ideology and attitude (conservatism and feminism respectively) while in fact advocating for another (the LGBT community), just as Paul identified as being part of one worldview (Judaism) while in fact being an advocate of another (Christianity).  This is not to say the two identities are mutually exclusive, but simply that one is necessarily more prominent than the other.  In all these cases, the more prominent identity is their status as members of the yet-to-be-founded religion.

 

Further, we have seen an increase in identification with the LGBT community along a timespan similar to that which occurred among Christians.  The shift towards Christianity began around thirty years after the Great Jewish Revolt, during the 100s.  For the LGBT community, a massive increase in identification has occurred over the past ten years, with the percentage of Americans identifying as LGBT increasing from 3.5% in 2012 (about 30 years after 1981, when HIV grew at its fastest rate in San Francisco (Pepin 2011, p. 233)) to 7.1% in 2021 – a growth rate of 0.4 percentage points per year.  Much of that growth has occurred only since 2017, with the LGBT community growing by 0.65 percentage points per year since then.  This shift has been especially significant among the young (Generation Z), who have increased from 10.5% LGBT in 2017 to 20.8% LGBT in 2021, a rate of increase of about 2.6 percentage points per year (Jones Feb. 2022).  Such a shift in popularity looks very similar to the delayed growth of Christianity in Judea, caused by the requirement for Christians to slowly define their post-crisis principles and spread them within the Jewish world before spreading them to the broader population.  Further, it seems that becoming LGBT is most common among those without another religion.  This assertion can be demonstrated in multiple ways.  The first is the fact that the young, who are most likely to be LGBT, are also most likely to be religiously unaffiliated, as we showed earlier.  Second, polls have shown that LGBT adults are significantly less religious than the rest of the population: 41% of LGBT adults identify as atheist, agnostic, or unaffiliated; only 16% of gay adults and 19% of bisexual adults attend church weekly; only 34% of LGBT adults state religion is very important in their lives; and only 77% of LGBT adults believe in God.  Yet LGBT individuals are nearly as likely as other adults to feel "a deep sense of spiritual peace and well-being" (49% of bisexuals and 54% of homosexuals, as opposed to 59% of heterosexual adults) and are more likely to "think about the meaning and purpose of life" (56% of both bisexual and homosexual adults, as opposed to 55% of heterosexual adults) (Schwadel & Sandstrom 2019).  It seems that LGBT people are likely both to reject all other religions and to have a deep interest in spirituality, making them fertile ground for the coming new religious attitude.  And, of course, the more rapid spread of the LGBT population in America than of the Christian population in Rome can be pinned on technological progress in areas such as mass media and transportation, which makes ideas much easier to spread much more quickly.
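(For transparency, the arithmetic behind these rates can be checked directly.  The following minimal sketch, in Python, recomputes them from the figures cited above; note that the 4.5% baseline for 2017 is inferred here from the cited 0.65-point rate rather than stated in the text.)

def annual_rate(start_pct, end_pct, start_year, end_year):
    # Average change in percentage points (pp) per year between two survey waves.
    return (end_pct - start_pct) / (end_year - start_year)

# All U.S. adults identifying as LGBT, 2012 -> 2021
print(round(annual_rate(3.5, 7.1, 2012, 2021), 2))    # 0.4 pp/year
# Since 2017 (baseline inferred from the cited rate: 7.1 - 4 * 0.65 = 4.5)
print(round(annual_rate(4.5, 7.1, 2017, 2021), 2))    # 0.65 pp/year
# Generation Z, 2017 -> 2021
print(round(annual_rate(10.5, 20.8, 2017, 2021), 2))  # about 2.57, rounded to 2.6 above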

 

The simplest objection to this argument is that being LGBT is significantly different from being a member of a religion, and certainly from being a member of an insurgent religion, since being LGBT is innate as well as extremely fundamental to the quality of LGBT people's lives (a gay or lesbian person is unable to marry someone they love without recognizing their homosexuality).  Of course, we will not deny that recognizing one's homosexuality is extremely important for a significant percentage of the LGBT population.  However, over the past decade, growth has mostly been among people for whom recognizing their membership in the LGBT community is not quite as important.  We can see this from several metrics.  The first is that members of the LGBT community are significantly more likely to be in heterosexual marriages today than ten years ago.  In 2012, only 1.3% of members of the LGBT community were married (most, we will assume, to opposite-sex spouses, considering that gay marriage was still not recognized in most states) (Gates & Newport 2012).  By 2020, the percentage of members of the LGBT community married to an opposite-sex spouse was 11.4%, about 2 percentage points greater than the percentage married to a same-sex spouse.  And, among members of the LGBT community living with a domestic partner, the percentage living with an opposite-sex partner was over 2 percentage points greater than the percentage living with a same-sex partner (Jones Feb. 2021b).  A second way of arriving at this finding is to look at the identity of those who have started identifying as LGBT in recent years.  While statistics on the type of sexuality of those identifying as LGBT are sparser than statistics on whether people identify as LGBT at all, we do have some statistical evidence that most of the recent growth has been among bisexuals rather than homosexuals.  Among American adults, the LGBT population increased from 5.6% to 7.1% between 2020 and 2021.  During that same period, the percentage of American adults who identified as gay or lesbian increased by 0.4 percentage points, from 2.1% to 2.5%, whereas the percentage who identified as bisexual increased by 0.9 percentage points, from 3.1% to 4%.  Among LGBT adults, the percentage that was either gay or lesbian declined during that period from 36.2% to 34.6%, and the percentage that was bisexual increased from 54.6% to 56.8%.  Another way of reaching this point is to look at how younger members of the LGBT community differ from their older counterparts.  After all, most of the recent growth in the LGBT population has been driven by young people, who are significantly more LGBT than the general population (20.8% versus 7.1%), coming of age.  For these young people, homosexuality is only a little more common than in the general population, with 4.5% of Generation Z identifying as either gay or lesbian, as opposed to 2.5% of the general population.  On the other hand, bisexuality is extraordinarily common among young people, with a full 15% of Generation Z identifying as bisexual, compared to only 4% of the general population.  Further, the percentage of the LGBT community which is bisexual is 72.1% among the young versus 56.8% in the general LGBT population, and the percentage which is gay or lesbian is 21.6% among the young versus 34.6% in the general LGBT population (Jones Feb. 2021a; Jones Feb. 2022).
What we are saying with these last two points is not that it is unimportant for bisexuals to identify as bisexual; it is simply that such identification is not quite as integral for them as it is for homosexuals.  For these people, identifying as bisexual might be preferable but is not necessary; a preference for identifying as such, rather than a need to do so, is what seems to be driving them to identify as LGBT.  From this, we can see that the engine of increased LGBT identification is a preference for that identity; and, if this is the case, then it fits well into the model of religion, an area where one identifies as a Christian, for instance, based on a preference for identifying as a member of that religion (no matter how integral to one's identity).
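(Again for transparency, the following sketch merely tabulates the Gallup figures cited in this and the preceding paragraph; "pp" abbreviates percentage points, and no figure here is new or independently derived.)

adults = {
    2020: {"lgbt": 5.6, "gay_lesbian": 2.1, "bisexual": 3.1},
    2021: {"lgbt": 7.1, "gay_lesbian": 2.5, "bisexual": 4.0},
}
for key in ("gay_lesbian", "bisexual"):
    delta = adults[2021][key] - adults[2020][key]
    print(f"{key}: +{delta:.1f} pp of all adults, 2020 -> 2021")
# Output: gay_lesbian +0.4 pp; bisexual +0.9 pp -- most of the growth is bisexual.

# Composition of the LGBT population itself (shares cited directly in the text):
share_bisexual    = {"all LGBT adults": 56.8, "Generation Z LGBT": 72.1}
share_gay_lesbian = {"all LGBT adults": 34.6, "Generation Z LGBT": 21.6}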

 

Finally, membership in the LGBT community does seem to be concomitant with advertiser-friendliness.  Studies have shown higher levels of depression and suicidality among adults identifying as LGBT both abroad and in America, and higher levels of depression and suicidality among at least bisexual youth (Janković 2020, p. 6; Krueger 2018, p. 246; Taliaferro 2017, p. 24).  There are significantly higher rates of cannabis usage (another marker of advertiser-friendliness, as per our earlier discussion) among adults identifying as LGBT than among the population generally (Medley et al. 2016).  Adults identifying as LGBT tend to report lower levels of job satisfaction (Drydakis 2019, p. 4).  The effect of identification as LGBT on labor force participation is more complicated, with homosexual females being more likely to work, and to work longer hours, than heterosexual females, and heterosexual males being more likely to work, and to work longer hours, than homosexual males.  That being said, heterosexual males are more likely to work, and significantly more likely to work longer hours, than homosexual females (Dilmaghani 2018, p. 305).  It seems likely that all these differences are caused by the effects of traditional gender roles among heterosexuals, with having a child significantly increasing labor force participation among heterosexual men and decreasing it among heterosexual women, while having no discernible effect on homosexual men or women (Dilmaghani 2018, p. 311).  Further, preliminary research has suggested that the puberty blockers often prescribed for youth identifying as transgender cause brain damage in both the short and long term (at least in sheep, although the authors suggest this research could apply to humans as well), mainly with respect to the part of the brain used for practical reasoning (operational memory), not the parts used for most other kinds of reasoning (Schneider 2017, p. 7, p. 8; Hough 2017, p. 8).  This impact, if it is indeed inherent in the effects of puberty blockers, would make users the very embodiment of advertiser-friendliness – unable to work with the same effectiveness as other people but able to think similarly.  Overall, on most, if not all, metrics, identification as LGBT seems to make people more advertiser-friendly:  they tend to have lower work motivation and significantly greater neuroticism.  (We are not, for the sake of this article, going to involve ourselves in the debate over causes – indeed, many of these effects may be short-term and may arise from a current lack of acceptance by the broader society.  No matter the cause, however, the effect from the perspective of advertisers, which is what matters here, is the same: a higher percentage of the population identifying as LGBT yields a greater number of advertiser-friendly people and therefore more profit for advertising and mass media corporations.)

 

Thus, we have demonstrated that the "rising new religion" model that we instantiated with the example of early Christianity seems to fit the modern rising LGBT population.  Everything is there – from the original oppressed, ghettoized subculture contemptuous of the mainstream; to the crisis in which opposition to the mainstream causes mass death in the population; to the rise of a new belief system which opposes the old one by supporting a pacific, assimilationist attitude towards mainstream culture, as well as a universalist ideology demanding a missionary spirit from its members; to the massive, though delayed, growth among those who have previously rejected all old religious attitudes; even to the pseudo-celibacy of the founders of the new religious attitude (as homosexuals, at least, are unable to have children except through relatively rare processes such as surrogacy and, slightly less rarely, adoption).  Further, many of the traits which we believed must necessarily be present in a religious attitude under modern conditions, such as advertiser-friendliness, are also clearly present in members of the LGBT community.

 

What we mean to say by all this, of course, is not that being LGBT is a religion.  That would be ridiculous.  What we mean is that among modern members of the LGBT community, there is a distinct similarity to the early stages of the creation of a religion, and that it is likely that, in the future, some clear religion will come out of this community and become the new religion of America.  On this model, the increase in the percentage of the population which identifies as LGBT allows those advocating the abolition of traditional concepts of gender (like, say, Butler) to slowly refine the methods of their missionary attempts.  As these strategies are refined, they become more and more effective.  The end point of that effectiveness, if the history of Christianity is anything to go by, is a religious attitude which promises an answer to religious questions ("What is the meaning of life?" for instance).  Thus, our point is not that LGBT people are members of a religion but that the way in which views in favor of the LGBT community, such as the abolition of gender, are advocated is likely to morph into a religion sometime in the future.  More importantly, the religion created in that metamorphosis will, in our religiously starved and confused time, replace current religion in the same way Christianity replaced the old mythological religions of the Roman Empire.

 

Our argument here, while much more effective if one agrees with our model of religious growth as well as our application of it to the modern LGBT community, is not necessarily dependent on accepting that the growth of the LGBT community has a distinctly religious nature.  Indeed, one could read our argument as two different arguments, one about the decline of religion and the other about the growth of the LGBT community, which individually reinforce our central point about advertiser-friendliness.  On this reading, religion's decline has made Americans more advertiser-friendly, as per the argument near the beginning of this section about religion as a source of culture, and the rise of the LGBT community has made Americans more advertiser-friendly, as per our empirical evidence about the advertiser-friendliness of the LGBT community.  Both facts would support our central point about advertiser strategy being the center of social change.  And considering that our argument about religion in this section is a merely probabilistic one – one which need not be necessarily true but merely probably is – this fact is a great consolation to our broader argument.

 

We have shown with the religious example yet another area of life where social change benefits advertisers, as well as being very closely intertwined with advertiser decision-making.  Such a shift, like the similar social and cultural shifts we described before it, lends even greater credence to our argument about the purpose and end of social change in our time.  It seems extremely odd that in cultural life, social life, and religious life, all the most significant social change just happens to benefit advertiser interests.  This seems especially odd because this social change – the growth of neuroticism and the decline in work ethic – runs in the exact opposite direction of what would benefit most other powerful people, whether economic or political.  It further seems odd that this social change should all be directly intertwined with areas of life in which mass media and advertisers have clearly played a role, whether that be the growth of new forms of entertainment controlled by mass media (video games), the growth of new forms of sociality controlled by mass media (social media), or changes in social attitudes which we used in Part 1 as the prototypical example of areas in which mass media had a role (the increase in the acceptance of members of the LGBT community).  If all these contemporaneous events, all seemingly caused by and benefitting the same people, seem like just a coincidence, it is hard to know what standard an evidential argument could ever meet to make anything not seem like a coincidence.

 

Just as importantly, we have shown an important part of how our advertiser-determined future will look.  As religion declines, it is easy to fall into the trap of believing either that the future will look exactly like the past, or that it will look exactly like the past with a single discrete element – religion – simply removed.  However, there is no case of religious change in history in which either of those two situations was the result.  Further, people in modern America do not seem to support either option, the return to a historic religious attitude or the negation of all religion.  By analyzing a case of equally significant religious change (the rise of Christianity), we have managed to avoid this trap, instead showing how, and from where, a new religion could emerge.  This new religion, by virtue of its genesis in the time of powerful advertisers, will be amenable to advertiser interests, making people yet more advertiser-friendly.  It will replace the old religions, which were created by and for people of another time, with a different ethic and society, and which are therefore contrary to the interests of our time.  The new religion, rather, will be a religion of our time, for our time, and supportive of our time.

 

So much for religion.  Long live advertising!

 

Conclusion to Part 2

 

With all of this analysis, we are left with the simple question of where it leaves us.  Thus far, we have been rather pessimistic about the future, focusing on what is disappearing rather than what is appearing, using terms like "rupturing" to describe social change.  This has created a very negative impression of the future.  However, it is important to point out that the coming social change is not, in all ways, terrible.  Certainly, parts of it, like the increasing tolerance of the LGBT community, are probably positive, and from certain perspectives, the whole concept of advertiser-friendliness is itself positive.  After all, what is advertiser-friendliness but one's amenability to being placed in a modern pleasure dome, constantly supplied with dopamine and kept occupied, though left fundamentally incapable of nearly anything?  Such a world would look like philosopher Robert Nozick's example of the Experience Machine, where people have what feel like experiences and accomplishments but in fact experience and accomplish nothing.  Nozick explained the situation as follows:

 

Suppose there were an experience machine that would give you any experience you desired. Superduper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life's experiences? If you are worried about missing out on desirable experiences, we can suppose that business enterprises have researched thoroughly the lives of many others. You can pick and choose from their large library or smorgasbord of such experiences, selecting your life's experiences for, say, the next two years. After two years have passed, you will have ten minutes or ten hours out of the tank, to select the experiences of your next two years. Of course, while in the tank you won't know that you're there; you'll think it's all actually happening. Others can also plug in to have the experiences they want, so there's no need to stay unplugged to serve them. (Ignore problems such as who will service the machines if everyone plugs in.) Would you plug in?  (Nozick 1999, pp. 42-43)

 

Nozick originally meant this example as a rebuttal of utilitarianism, a way of showing that people would not want to live like this.  He gave three reasons (which we will only lightly touch on for the purposes of this article): that we want to "do certain things, and not just have the experience of doing them"; that we want to "be a certain way, to be a certain sort of person"; and that being in a "man-made reality" prevents us from contacting any "deeper reality than that which people can construct" (Nozick 1999, p. 43).  All of these preferences could be true, and they could give us reason to oppose the direction our society is going in, but a perfect utilitarian might say they are not important.  An example of this theoretical utilitarian might be (a highly caricatured version of) philosopher Derek Parfit, who argued that we should treat ourselves as a group of mental states and that, therefore, we should merely try to improve those mental states (Parfit 1984, Chapter 11, Chapter 15).  This is a rational and reasonable view, whether or not we agree with it.  There is no simple, non-moral argument about the empirical facts of our society that can give us reason to condemn or support it.

 

Fundamentally, our support for or opposition to social change is always going to depend on our personal moral beliefs.  This is for the simple reason that our support for or opposition to any society is based in our personal moral and cultural values.  Joseph Schumpeter argued as much when discussing the Socialist argument:

 

First, it has been pointed out…with reference to socialism, that neither for us, the observers, nor for the people that are to put their trust in socialism, is the economic aspect the only or even the most important one. Price, in modern theory, is defined as a mere coefficient of transformation. Cost, in the sense of opportunity cost, is a general logical category. We shall however soon return to this. In defining as I did, I did not intend to deny that. And in fairness to all the civilized socialists whom I have ever met or read, it should be stated that the same holds true for them: that in stressing the economic element because of the causative importance their creed attributes to it, they do not mean to suggest that nothing is worth struggling for except beefsteaks and radios. There are indeed insufferable stick-in-the-muds who mean precisely that.  And many who are not stick-in-the-muds will nevertheless, in the hunt for votes, emphasize the economic promise because of its immediate appeal. In doing so they distort and degrade their creed. We will not do the same. Instead we will keep in mind that socialism aims at higher goals than full bellies, exactly as Christianity means more than the somewhat hedonistic values of heaven and hell. First and foremost, socialism means a new cultural world. For the sake of it, one might conceivably be a fervent socialist even though believing that the socialist arrangement is likely to be inferior as to economic performance.  Hence no merely economic argument for or against can ever be decisive, however successful in itself.  (Schumpeter 2003, pp. 169-170)

 

This much is to say that fundamentally cultural positions are what matter most in deciding whether one can support a given system, whether Socialism, Capitalism, or the consumer society.  To treat an argument about something as fundamental as the economic, cultural, social, and religious character of society as one in which our norms can be determined by fact-gathering alone would be ridiculous at best.

 

Of course, the fact that our opinions about social change are fundamentally based in our personal moral and cultural attitudes makes the question fundamentally factional.  Not everyone can hold similar moral and cultural backgrounds or opinions, and our differences on these issues cannot be resolved by simple, easy debate.  On issues such as this, our opinions cannot be homogenous.  Yet, on issues as important as the social change of our time, it would be ridiculous for us to simply agree to disagree.  Instead, we are normatively required to simply and dogmatically represent our own position, without the ability to understand the opposite position beyond an academic sense of "understanding."  As Wittgenstein said, "Where two principles really do meet which cannot be reconciled with one another, then each man declares the other a fool and a heretic" (Wittgenstein 1969, p. 81).  If one person looks at the present social change and declares it an altogether positive event, and someone else views it as an altogether negative one, it seems likely that the two will be unable to understand each other's positions and must resign themselves to uncompromising opposition to an opinion they necessarily view as nonsensical or wrongheaded.

 

This is a long way of expressing my sincerest apologies to those reading this who might, at this point, agree with me about what social change has occurred yet view this social change as constituting a positive epoch.  For those people, the rest of the article, which will focus on what one is morally required to do in the context of negative social change, will not apply.  For them, viewing positive social change as inevitable, the best course is to attempt to speed that social change up – to personally support and affirm any and all significant social change which occurs, since all social change can be trusted to arrive at the same fundamentally good end.  For the rest of my readers, however (whom I can assume to be a majority, as, sadly, few people read anything this long about things they believe are good about the world), the final third of this article will be a welcome change in tone.  Rather than talking about all the things going wrong with this world, we will mostly discuss the things one can do to improve it; we will create a moral framework for action, even in a situation where social change is inevitably going in the wrong direction.

 

To start this shift in tone, we will finish with a discussion of America as it is.  In spite of our discussions of the many areas in which America is in decline, in all these areas the majority of the population, or at least a large minority, has yet to be assimilated into the new society.  Most of the statistics we have cited, while indicating a worse future, nevertheless indicate a great present.  The percentage of Americans who have little relationship with their historic culture via reading might be significantly higher than in the past – yet still three-quarters of young Americans regularly read for fun.  The decline of the family has been precipitous, yet still 70% of American children grow up in two-parent households.  While social society has declined, still 57% of Americans and 54% of young people are members of community organizations.  While informal relationships have declined, still three-quarters of young people say they have at least one close friend.  While religiosity has declined, still 71% of Americans say they are affiliated with a religion.  While it is likely that a new advertiser-friendly religion will replace our current religions, that remains a purely theoretical claim for now.  And, indeed, while advertiser-friendliness is rising rapidly, it still does not encompass a majority of the population.  A majority of Americans still report their mental health as "good or excellent," with 45% of young people reporting the same.  Still, the labor force participation rate is 62%, significantly over half the adult population.  Even though, in all these areas, every year seems statistically worse than the last, the coming consumer society is not yet upon us.  It is in the process of being instituted; it has not yet been instituted.  As it is, we still live in a Western society, in a social society, and in a theist society.

 

And for that, at least, we should rejoice.

Part 3:

Thus far, we have deliberately avoided normative claims in this article.  In the first section, we discussed the basis of social change, and in the second, we applied that analysis to present social change.  Throughout this discussion, we have tried to remain neutral as to the moral value of that change (except in the immediately preceding conclusion to Part Two).  In this section, we will focus on those normative claims we have previously avoided.  In particular, we will discuss the possibility of defiance of seemingly determined social change – social change akin to that which we described broadly in Part One and more specifically in Part Two.  For, as we will discuss in this section, if such social change is indeed determined, then it is not simply determined in relation to theoretical people but in fact in relation to each of us individually.  This would seem to leave us without the ability to approve or condemn people's actions in relation to social change with any moral weight whatsoever.  However, as we will demonstrate, we absolutely can ascribe moral value to one's involvement in historically "inevitable" events.  Therefore, we will argue that, in spite of social change being seemingly inevitable, if we oppose social change, there are still moral demands to act against it.

 

Once we have established that there must be some kind of defiance of social change in which we are morally required to engage, we will examine several possible ways in which social change could be resistible.  We will first focus on a conception of resisting social change which asserts that we can understand our actions against social change as being motivated by some kind of contrary desire or desires.  We will find this argument unreasonable.  We will then alter this conception to argue instead that, while social change endorses a particular attitude, action, or viewpoint, it cannot completely override our own endorsement of an alternative viewpoint which we have ourselves historically believed; it can merely present another viewpoint as more salient.  Assuming we oppose currently occurring social change, we can therefore refuse to admit the salience of the views endorsed by that social change.  However, we can only do this by altering how we remember our own opinions and habits so as to favor older memories.  This much being said, we cannot effectively endorse any attitude which predates us, nor effectively reject any social change by which we were not ourselves affected (for example, social change which altered a situation that predated us).  Therefore, in defying any social change, we must be prepared to hold a position which rejects only a small segment of that social change – the segment which has affected our own personal positions.

 

Social Determinism and Moral Blameworthiness

 

What this first subsection seeks to do is to apply philosophical work on the effects of accepting determinism – that is, the view that all of our actions are somehow caused by something outside of the decisions of the person themselves – to our concept of social change.  To do so, we will first explain what we mean by "social determinism" and how our analysis leads to that doctrine.  Next, we will argue, based on philosopher P.F. Strawson's work, that the doctrine of determinism need not make condemnation or approbation of people's decisions along moral lines impossible, and that we indeed ought to continue to believe in that concept of condemnation or approbation.  We will then apply this argument about general determinism to our more specific argument about social determinism, finding that Strawson's theory seems to imply that moral appraisal is inapplicable to those whose actions are socially determined.  Yet, we will argue, we do appraise those who act under social determinism in a way directly parallel to Strawson's concept of how we appraise actions in spite of a more generalized determinism.  Therefore, we offer a possible middle-ground position according to which we may continue to hold Strawson's view without abdicating our ability to appraise socially determined actions.  This middle ground, we will see, allows us to believe that active defiance of negative social change, rather than mere non-action, should be morally expected.

 

But, first, we must discuss social determinism.  Social determinism, as we use the term here, views at least some of our actions as determined by social conditions.  What we mean by this is not that social determinism includes any view that says we can predict events based on general rules of nature (for instance, "when water is cooled to below zero degrees Celsius, it will freeze") or even that we can predict future events based on historical events (for instance, "when a nation loses a war, its government will lose political legitimacy").  Rather, what we mean is a view which states that, on a statistical level, under certain conditions, changes to individual people's actions, habits, or opinions are impossible to avoid (for instance, "inevitably, autocracies embitter the population").  The distinction we are making here is between arguments which assert that the consequences of certain events or actions are determined (as in, "a nation losing a war (event) causes it to lose legitimacy (consequence)") and those espousing the idea that actions are themselves determined by certain conditions (as in, "inevitably, autocracies (conditions) embitter the population (consequence)").  According to the former, the basis of broad social shifts is individual decisions made by individual people, which are, as a result, not statistically predictable based purely on social, political, or economic conditions.  According to the latter, on the other hand, at least in the aggregate, the actions which determine social change are themselves determined by conditions which make their aggregate impact inevitable; that is, the social change that occurs is determined ahead of time.

 

It is quite clear that our previously elaborated concept of social change implies that most people's actions are socially determined.  People are led to change their habits, actions, and opinions because advertisers and mass media producers manipulate their understanding of the world.  The advertisers and mass media producers who advocate certain viewpoints are influenced by firm-wide strategies decided upon by other people.  Advertisers adopt those strategies to avoid losing market share to other firms already using them, and the initial innovative firm adopted the strategy through trial and error.  Any given tried strategy can only be adopted as a firm's strategy if it improves that firm's efficiency.  Therefore, what we find is that the reason for any given social change is that the change causes an improvement in an industry's efficiency; that is, the condition of the market being inefficient determines that social change will occur.  (This is to say: if (1) the market is in situation x, (2) situation y is the same as situation x except that society has experienced social change z, and (3) businesses in situation y are more efficient than in situation x, then (4) businesses will enact social change z.  Seeing as (1), (2), and (3) are all statements about conditions rather than actions, and that (4) is an action, we can conclude that the conditions we are under will statistically determine the actions taken by both businesses and people.  This is an exemplar case of social determinism.)
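(To make the parenthetical schema concrete, the following is a toy simulation of our own construction; the firm count, efficiency values, and imitation rule are illustrative assumptions, not empirical claims.  It shows the sense in which the outcome is "determined by conditions": however the innovation first arises and however imitation happens to unfold, condition (3) alone fixes the end state.)

import random

def steps_until_universal_adoption(n_firms=100, seed=0):
    # Firms imitate more efficient peers; returns how many pairwise
    # comparisons occur before every firm has adopted strategy z.
    rng = random.Random(seed)
    efficiency = {True: 1.1, False: 1.0}    # condition (3): z is more efficient
    uses_z = [False] * n_firms
    uses_z[rng.randrange(n_firms)] = True   # one firm stumbles onto z by trial and error
    steps = 0
    while not all(uses_z):
        a, b = rng.randrange(n_firms), rng.randrange(n_firms)
        if efficiency[uses_z[b]] > efficiency[uses_z[a]]:
            uses_z[a] = uses_z[b]           # the less efficient firm copies the better strategy
        steps += 1
    return steps

# The path length varies with the seed, but the destination never does:
print(steps_until_universal_adoption(seed=1), steps_until_universal_adoption(seed=2))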

 

This social determinism also seems to eliminate our ability to morally appraise people’s actions.  After all, the actions which are determined by social conditions cannot be blamed on anyone individually but rather are a result of those conditions.  The individual did not choose to change their opinions or habits because they thought that would bring about desirable social change, but because advertising has influenced them to view the decisions that bring about social change as the most reasonable.  The firm does not intentionally choose to bring about social change; rather, unbeknownst to the firm, the conditions the firm was under made the policies that brought about social change the most profitable.  Throughout the process of bringing about social change, that social change is never the goal of those who bring it about; indeed, it is likely unintentional.  Consequently, we can say that no individual’s particular actions are the basis, or even partially the basis, of the social change, since it would occur in almost exactly the same way without any given person’s individual contribution.  Thus, this argument seems to make nobody blameworthy for social change, as their decisions are merely the result of their conditions.  Therefore, we have no reason to view defiance of social change as morally required whatsoever; the social change one contributes to, and the extent to which one contributes to it, seem to be merely an implication of one’s conditions, not a reflection of one’s morality.

 

However, this final argument – that social determinism implies that we cannot be blamed or approbated for our decisions in relation to socially determined change – is what we will object to here.  The basis of our objection derives from Strawson’s analysis of our moral sentiments, in particular, his analysis of resentment.  As Strawson states, one feels resentment towards another when “one person is offended or injured by the action of another and … – in the absence of special considerations – the offended person might naturally or normally be expected to feel resentment” (Strawson 2008, p. 7).  Strawson then lists several conditions under which resentment is unwarranted and where, therefore, we do not often feel resentment in spite of being wronged.  The exception which matters to our analysis here is that those who are “warped or deranged or compulsive in behaviour or peculiarly unfortunate in his formative circumstances” (that is, those in unique conditions) are not usually subject to resentment.  For those people, instead of viewing them with our “normal participant reactive attitudes” (for instance, resentment), we instead see them through “objective attitudes,” that is, “in terms of treatment or training” (Strawson 2008, p. 10).  (This is not to say that we cannot feel objectively towards those who wrong us in normal conditions, simply that to do so in normal conditions without also factoring in our emotional response seems somehow incomplete (Strawson 2008, p. 10).)  Therefore, if someone else’s unusual conditions determine their behavior, we are, to Strawson, not capable of feeling resentment towards them (or, at least, that feeling is muted).  However, this argument creates a problem once determinism enters the picture.  For, if all actions are determined by outside influences, then it becomes difficult to single out some but not all actions as deserving of resentment.  Instead, it seems that, were determinism true, no actions would deserve any reactive feelings of resentment whatsoever.

 

Yet, Strawson argues, resentment need not be based in any metaphysical belief system.  Rather, it may be based on a kind of general “expectation of, and demand for, the manifestation of a certain degree of goodwill or regard on the part of other human beings towards ourselves,” based on the fact that “we demand of others for others, as well as of ourselves for others, something of the regard which we demand of others for ourselves” (Strawson 2008, p. 15, pp. 16-17).  This much is to say that we feel resentment towards those who wrong us because we have a general expectation that people will not wrong us.  To wrong someone is to behave unusually, to break some kind of generally followed rule.  On the other hand, if someone is affected by some unusual malady or condition that causes them to wrong us, then they have failed to follow our generalized expectations at an earlier stage:  they have failed to even have a “moral sense” (Strawson 2008, p. 18).  For those without a moral sense, indeed, their breaking certain rules might very well be our expectation.  Therefore, our moral resentment towards those who wrong us, influenced more by the shock of someone’s decision to break generally agreed upon moral rules than by the conditions that influenced them to behave how they did, does not apply to those who have no moral sense, since their decision to wrong us is not a shocking one.  (Now, it may very well be shocking when someone without a moral sense wrongs us, but that wrong is more likely to create resentment towards whoever is responsible for that person being the way they are or, perhaps even more likely, for that person having had the opportunity to wrong us.  At the very least, their not having a moral sense limits the extremity of our emotional reaction towards them individually.)

 

Before we move on to discuss how this applies to social determinism, we will first discuss why we should even care about this argument.  After all, our generally agreed-upon moral views could be wrong.  If determinism (or social determinism) seems to make blaming people individually for their actions nonsensical (since it is not their fault, but rather their condition’s, or society’s, etc.), then why should we cling to our emotional assumptions about life?  Why should we not be willing to take up a completely unusual position, ascribing no moral blame at all, in contradiction to all of society?

 

The first reason is that such a stance would be practically untenable.  Consider a thought experiment from philosopher Pamela Hieronymi:

 

Suppose you somehow come to live among people the vast majority of whom are, as you see it, regularly violating basic demands of interpersonal respect through, for example, a strict system of caste, or racism, or sexism, or other social stratification.  Given your conviction that each person deserves equal respect and equal treatment, it seems you should be regularly, frequently indignant, even outraged, and, if you fall into one of the disfavored categories, resentful.  But those emotions are costly, both personally and interpersonally.  It will therefore be difficult to avoid either regularly using your resource, to cope with “the strains of involvement,” or else coming to see those you live with as somehow morally immature or diseased.  However, if you do either, you will have, in effect, exempted those around you from the standards you believe they are violating.  By doing so consistently, you will shift your standards of regard.  Although you may remain convinced, intellectually, that those you live around are being seriously disrespectful, neither your actions nor your reactions will embody that conviction.  It may become difficult to maintain, publicly, your claim that the violations are serious ones—that they really matter.  Although you may continue to believe that a serious moral requirement is being violated, you no longer include that requirement among the interpersonal expectations or demands to which you hold others responsible.  Your moral convictions may seem to be so only in name.

Alternatively, you might, instead, continue to suffer the strains of involvement, you might continue to respond to others as responsible adults violating important standards of regard—that is, you may continue to relate to others with the resentment or indignation called for by your convictions.  If so, you will be holding others to standards that they, with the crowd, neither recognize nor regard as reasonable.  They will then resent you.  As time goes on, it is likely they will either turn against you (the list of martyred moral reformers is long) or else begin to use their resource to respond to you more objectively: you will become a problem, an issue, or perhaps a kind of curiosity or museum piece.  You will then be left outside the scope of ordinary interpersonal relationships.  If you are one and they are many, this will not be so difficult for them.  (Hieronymi 2020, pp. 84-85)

 

Hieronymi lays out the dilemma quite well, but the basic upshot is that, if one attempts to hold a position completely contrary to the rest of society, one must either hold several different concepts of disapproval – the only one of which one takes seriously being the one considered “normal” in broader society – or constantly tax oneself and others with sanctimonious and putatively unreasonable demands, thereby reducing one’s own level of practical influence.

 

A second, more theoretical basis for not rejecting normalcy wholesale is given by philosopher Linda Zagzebski:

 

Almost everyone knows about some of the paradigms – Jesus Christ, Socrates, the Buddha – but there are also paradigms who are known only to a small circle of acquaintances.  My position is that we may be mistaken about some of them, but we cannot be mistaken about very many.  If a theory has the consequence that neither Jesus Christ, nor Socrates, nor the Buddha is a good person, we should question whether the theory is a theory about what we call a good person.  There is a breakdown at the conceptual level, not simply a disagreement about cases.  The issue of the multiplicity and variability among good persons is a matter to which we will return, but at the outset I want to insist that even if cultural variability poses a theoretical problem, it is a problem that is unavoidable because the identity of exemplars of goodness is not merely contingently related to our general moral judgments and the construction of a moral theory.  The particular judgments to which a moral theory must conform include judgments about the identity of paradigmatically good persons.

This is an application in ethical discourse of a point made by Thomas Kuhn about exemplars in scientific discourse.  Kuhn says that terms such as “element,” “mixture,” “planet,” and “unconstrained motion” are learned from direct application to exemplars.  When problems arise in the application of these terms, the problem is not one that can be resolved by applying to the dispute criteria accepted in a neutral language used by both sides and adequate to the statement of both theories.  If the concept of an element arises from a primitive grouping of exemplars without an answer to the question, “What makes each one an element?,” it is likely that the concept of good person arises from exemplars in a similar way.  We do not have criteria for goodness in advance of identifying the exemplars of goodness.  (Zagzebski 2004, p. 41)

 

Zagzebski’s argument is a probabilistic one, based on the general rule, true in both science and morality, that particular judgments are more likely to be correct than broad theories.  As a result, broad theoretical judgments about morality (in this instance, that our blaming people is irrational on deterministic grounds) are innately less likely to be correct than particular judgments about individual moral decisions (in this instance, our feeling indignant towards someone who wronged us, or not indignant towards a lunatic who wronged us).

 

Taken together, these arguments set the bounds within which a determined action can be blameworthy.  For one thing, the action must be an abnormality, that is, it must not be the sort of action we would expect from others.  For another, the blamed agent must not be intrinsically abnormal in such a way that the action is no longer abnormal for them; for instance, if they are a lunatic.  Finally, our blaming others cannot itself be an abnormality; that is to say, our perspective as judges of others must be understandable to those others.
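
 

These three bounds can also be compressed into a single schematic necessary condition.  Again, this is only our own shorthand, not Strawson’s:  write Abn(a) for “action a is abnormal,” NoSense(p) for “person p lacks a moral sense, so that a is not abnormal for them,” and AbnJudge(j) for “judge j’s perspective is itself abnormal to those judged.”  Then, in LaTeX notation:

% Necessary conditions for blaming person p for determined action a,
% from the perspective of judge j (our own illustrative shorthand).
\[
\mathrm{Blameworthy}(p, a) \;\Longrightarrow\; \mathrm{Abn}(a) \;\wedge\; \neg\,\mathrm{NoSense}(p) \;\wedge\; \neg\,\mathrm{AbnJudge}(j)
\]

Each conjunct can fail independently, which is why the next paragraph tests all three against the case of social change.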

 

These rules would seem to exclude those who support social change from being condemned or approbated for their decisions.  First, one’s support of social change is necessarily the normal behavior – social change as we have defined it is the making of a sort of belief, habit, or action normal or abnormal.  To go against social change and to claim that a different norm is preferable to the one favored by social change would be to hold the abnormal position.  Second, supporting social change is almost always the norm within society – only a very small minority tend to oppose most change as it occurs, and even fewer oppose change that has already occurred.  Therefore, not only is opposing a particular social change abnormal; taking the general moral position that social change is bad is similarly abnormal.  Finally, social determinism seems to create a new class of people whose being affected by particular advertising at a particular time (that is, their living conditions) separates them from other agents.  As a result, even if one may blame people in some cases for supporting social change, blameworthiness does not seem to extend to those affected by advertising that changes the way in which they think.

 

Yet, in spite of these arguments, as a matter of fact, we often do blame people, in more than an objective context, for supporting certain social change.  Even more interestingly, we often blame them whether or not they were instrumental in causing negative social change to occur.  Take, as an extreme example, the Holocaust.  Most people find an emotional condemnation of the Nazis extremely intuitive.  Further, that emotional condemnation often takes the form which Strawson categorizes as the sentiment of resentment.  For instance, see Hannah Arendt’s condemnation of Adolf Eichmann:

 

And just as you supported and carried out a policy of not wanting to share the earth with the Jewish people and the people of a number of other nations—as though you and your superiors had any right to determine who should and who should not inhabit the world—we find that no one, that is, no member of the human race, can be expected to want to share the earth with you.  (Arendt 2006, p. 279)

 

Arendt’s condemnation of Eichmann has much in common with Strawson’s description of the emotional sentiments which underpin our condemnation or resentment of others.  The basis of Arendt’s condemnation is that Eichmann has broken some fundamental group of rules which most people historically have followed.  Further, rather than taking an objective point of view, Arendt takes the point of view of a fellow person, individually offended at even sharing “the earth” with Eichmann.  It is from sharing existence with Eichmann that she comes to desire Eichmann’s punishment.

 

Indeed, it seems that an objective condemnation is insufficient when we are discussing the Holocaust.  As philosophers Herlinde Pauer-Studer and J. David Velleman argue:

 

Suppose someone were to say, “The gassing of six million Jews was a violation of their rights.”  Surely, though, a violation of rights is not the half of it: the language of rights and wrongs is too legalistic.  Something much more profound, more profoundly immoral, was underway at Auschwitz-Birkenau.  (Pauer-Studer & Velleman 2015, p. 123)

 

If, indeed, Pauer-Studer and Velleman are right here, their sense that the objective point of view is insufficient parallels Strawson’s observation that, while we are capable of taking the objective point of view towards those who wrong us in normal conditions, a purely objective explanation seems incomplete.

 

Just as interestingly, Arendt seems to view Eichmann’s personal support for the Holocaust as unimportant to determining Eichmann’s guilt:

 

Let us assume, for the sake of argument, that it was nothing more than misfortune that made you a willing instrument in the organization of mass murder; there still remains the fact that you have carried out, and therefore actively supported, a policy of mass murder.  For politics is not like the nursery; in politics obedience and support are the same.  (Arendt 2006, p. 279)

 

Indeed, this view is not simply true of Eichmann but also of other people complicit in but not necessarily supportive of the Holocaust.  Take, for instance, John Kekes’ moral condemnation of extermination camp commandant Franz Stangl:

 

His superiors manipulated him into becoming an extermination camp Kommandant by a sinister combination of implied rewards and threats.  But he could be manipulated only because ambition and lack of moral strength led him to crave the rewards and succumb to the threats.  He knew what he was doing and felt badly about it, but he systematically falsified his knowledge and feelings and thus prevented them from affecting his actions.  If he had not been ambitious, he could not have been manipulated into performing so well at increasingly evil tasks.  If his circumstances had not been those of Nazified Austria and Germany, his ambition would have taken a different form.  It was the fatal fusion of his character and circumstances that made him a mass murderer.

It is understandable why he did what he did, but to understand is not to excuse or to forgive.  It is to see evil for what it is and condemn evildoers for what they are.  (Kekes 2005, pp. 63-64)

 

This view of Stangl as having been evil due to his complicity in the Holocaust, in spite of his personal opposition to it and lack of personal ill will toward Jews, is especially contrary to our earlier principles because of the normalcy of complicity in the Holocaust for those in Stangl’s particular cultural milieu.  After all, one could very easily argue that, in 1940s Germany, Stangl was no more complicit than a very large percentage, even a majority, of other Germans.  Indeed, Stangl seems, in some ways, to have been less supportive of the Holocaust than the norm at the time.  Further, Stangl was certainly not himself responsible for the Holocaust; had he not been part of the process, someone else would have been Kommandant and just as many innocents would have died.  Yet, Stangl is still blameworthy.

 

If these observations are as they seem, then our earlier objections to morally appraising those who engage in socially determined actions in the same way we appraise other actions are all wrong.  First, it seems that those who engage in actions which are considered not wrong in their respective society, at least in the historical context of judging great wrongs, can clearly be blameworthy, in spite of engaging in normal actions.  For, in spite of the Holocaust not being considered obviously immoral in Nazi Germany, the people who engaged in it are still blameworthy.  Second, those who engage in actions similar to those of the vast majority are also clearly blameworthy.  Stangl did not seem any more complicit than a large percentage of Germans, perhaps even a majority, yet he is still clearly blameworthy.  Indeed, the percentage of Germans who were more or less blameworthy than Stangl does not seem to have anything whatsoever to do with Stangl’s relative moral blameworthiness (Stangl does not become more guilty depending on whether 30% or 60% of contemporary Germans were similarly complicit).  Finally, it does not seem that immoral social change being socially determined makes those who involve themselves in that change morally blameless.  Without Stangl, the Holocaust would likely have killed a similar number of people, and there being an administrator of the Sobibor and Treblinka concentration camps was determined no matter what Stangl decided to do.  Yet, he is still blameworthy.  Further, it seems almost unrelated to the question of whether Eichmann committed a grave crime to say that the Holocaust would have gone along much the same with or without his involvement.  Rather, the very fact that he contributed towards it, regardless of his importance to its occurrence, seems like the basis of his moral guilt.  Even the most extreme social determinism does not excuse even the most minor contributor to the Holocaust.  And, it seems, if Arendt’s condemnation is anything to go by, that the blameworthiness we are talking about here is based on the same emotional sentiment (shock at someone breaking the fundamental rules of social behavior) which Strawson describes as the basis of normal, day-to-day resentment, rather than on mere objective appraisals.

 

Here, we seem to be left at an impasse.  On the one hand, we have intuitions about day-to-day life which tell us that social determinism should make those affected by it blameless.  On the other hand, we have intuitions about grave crimes against humanity which tell us that social determinism should take away little to no blame.  It is possible that one of these two contrary groups of intuitions is simply wrong, yet that seems unlikely.  These two groups of intuitions – both our day-to-day concept of emotionality and our ability to condemn perpetrators of the absolute worst forms of injustice and genocide imaginable – seem so emotionally fundamental that a more promising strategy is to find some middle ground which can co-exist with both.  The middle ground we establish here is merely preliminary; later in this article, when we arrive at a clearer concept of how defiance of social change is possible, we will be better able to explore how blameworthiness is also possible in spite of social determinism.

 

The main observation that allows us to construct a middle ground between these two sets of intuitions is that Nazi Germany did not exist in a vacuum.  Indeed, the world around Germany included many nations which condemned and did not contribute (at least intentionally) towards the Holocaust.  Further, there was a long history of Germany prior to the Holocaust, during most of which even the concept of something as awful as the Holocaust would have been viewed as outrageous.  Here we can see a possible criterion (which we will develop more later) for how an action, though considered normal within a particular society, can still be blameworthy:  if that society’s norms are not considered normal throughout the world, or if that society’s norms are not considered normal throughout the history of the nation more broadly.  This is not to say that a view being held by another nation or being considered normal at some time in a nation’s history necessarily makes it considered normal today.  (Certainly, someone can find views favorable towards totalitarianism outrageous in spite of its preponderance globally, or slavery outrageous in spite of its being considered normal historically.)  It is simply to say that the degree to which other cultures, or one’s own historic culture, approves of something makes opinions otherwise considered absurd more conceivable, and consequently makes disapproval of certain actions more likely to be reasonable.

 

Such an observation would make sense within Strawson’s own classification of normal versus abnormal situations.  According to Strawson’s account of moral blameworthiness, the classification of particular people within the broader group of “people without moral sense” was the reason why we could not be surprised by their immoral actions: we know not to expect moral behavior from them.  On the other hand, we categorize normal people as people from whom we should expect normal behavior.  It is for this reason that their acting immorally is shocking.  That is, it is as a result of people’s behavior being shocking that they are morally blameworthy for that behavior.  For instance, the sudden shift towards totalitarianism in Germany in the 1930s and 40s was a shocking shift for much of the world, and in spite of Hitler’s rather open anti-Semitism, pogroms as late as 1938 aroused “horror and indignation” from much of the world (Bullock 1964, pp. 473-474).  Perhaps with another several centuries of similar pogroms, we would have a more difficult time morally condemning the Germans and an easier time entering into a more objective perspective on the causes of their atrocities.  Our addition to Strawson’s account in relation to social determinism also partially explains why people are far more likely to have strong emotional views on atrocities undertaken in the 20th century, following the advent of the modern concept of human rights, than on those of an earlier time.  Certainly, the human rights abuses which the Romans undertook in quelling the Great Jewish Revolt (which we described in Part Two of this article) do not create the same outrage in the average person as the Holocaust.  Indeed, our general outlook toward people who believe that “crimes of the remote past had to be judged as severely as those committed today” can be understood in the way poet Czesław Miłosz viewed philosopher Simone Weil, who thought exactly this way:  as a symptom of “strange,” “extreme,” and “uncompromising” views, perhaps to be admired but not to be seriously believed (Miłosz 2001, pp. 251-253).  So, we can see, recognizing social communities as being categorizable in the same way as individuals allows us to retain both of our important sets of intuitions.

 

Thus, we arrive at the ability to appraise people’s behavior in relation to social change, conditional on that social change being a truly unusual shift, that is, a significant deviation from the principles considered mainstream prior to its occurrence.  So, then, if social change is entirely morally unquestionable according to the principles which preceded it, or if it occurs merely as the further extension of a long-term change in social attitudes, then it is beyond appraisal, for it is not separable from the norms of the society in question.  However, if social change is clearly a break with the history of the society in question, then it is absolutely possible to appraise the behaviors of people in relation to it.  This argument extends not only to outright action in favor of social change but also to complicity, that is, merely letting that social change occur, just as we can blame Stangl for his complicity in the social change of his own time.  Of course, the degree of moral approbation or condemnation which we apply to different cases of social change will differ; social change which brings about the freeing of slaves would certainly bring much more approbation than social change which marginally increases wages, and the same could be said in relation to condemnation as well.  The social change presently occurring (if we are to assume it is negative) is incomparably smaller in degree than the Holocaust; therefore, one’s complicity in or support of the present social change is nowhere near as significant a moral wrong as Stangl’s.

 

The important upshot of this whole discussion, though, is that people’s actions in relation to the social change currently occurring can be morally appraised.  The social change currently occurring, as we have argued in Part Two, is a clear break from the social change occurring until extremely recently, often not just as a matter of how it occurs or its end goal, but also as a matter of what occurs.  Certainly, changes such as (but not solely) the decrease in working hours among the majority and the rise in work hours among the minority, the increase in neurosis throughout society, the rise of drug use, the rise of cultural media that do not connect one with one’s cultural history, the remaking of sociality via social media, and the replacement of traditional religion are all significant shifts that would probably not be viewed positively by the average person in 1980, nor would they be directly traceable to changes occurring at that time.  So, then, we can see that people’s decision to be supportive of, complicit in, or opposed to modern social change (or anything in between) is morally charged.

 

With this much established, we shall now focus on what opposing social change would actually mean.  After all, while we have a pretty clear concept of what supporting social change entails (supporting all new cultural fads and so on) and what being complicit in social change entails (not doing anything), as we will discuss, opposing social change is not quite as simple as it might seem.

 

Opposing Social Change: Preliminaries

 

Opposing negative social change is an innately difficult thing to conceptualize.  Our understanding of social change in Part One seems to make such opposition nearly impossible.  After all, social change is the manipulation of people’s desires based on very minor changes in their opinions about the facts of the world, most of which occur subconsciously.  Such a manipulation occurs without our knowledge and often in ways that are not clear to us.  Earlier, we compared such a manipulation to the grace of God because it simultaneously does not determine our actions and yet has so great an influence that, statistically speaking, it almost always leads us to conform.  It is difficult to imagine how one might reject such a strong and subtle influence upon our actions.

 

Luckily for us, many of the same people who conceptualized the grace of God were also forced by doctrinal constraints to imagine a possible way for people to defy that grace.  This allowance for the defiance of grace is due to the fact that in nearly all Christian conceptualizations of grace, there is a possibility of sin, that is, of someone acting contrary to grace.  Of course, for those theologians, acting contrary to grace was a great evil, whereas for us here, it is a great good.  Yet, their descriptions of how people can come to be sinful – a necessary explanation in a world with people who do not always act according to morality – are still useful for understanding how we might go about defying the influence of advertisers and mass media producers.

 

There are two kinds of defiance of social change that we will look at here, both based on medieval philosophers’ differing concepts of sin.  The first is philosopher Duns Scotus’ idea of sin: that sin is caused by a desire similar in kind to grace but innately pushing us to behave contrary to the grace of God.  We will find that this attitude, although attractive, is incorrect both in understanding the basis of our opposition to social change and in diagnosing what defiance is possible against social change.  Next, we will look at Aquinas’ concept of sin as the outright decision to take an action that is neither directly opposed to grace nor preferred by it.  This other explanation, we will find, is, with some modification, both possible and morally demanded by our other views.  Therefore, we will opt for our modified version of Aquinas’ concept of sin as the basis of our own concept of defiance of social change.

 

The Wrong Way to Oppose Social Change

 

But, first, let us discuss Scotus’ concept of sin.  To Scotus, while the grace of God influences our actions, we are at the same time influenced by a contrary, worldly desire for our own happiness.  This concept of our desire, based in part on Aquinas’ model of grace, holds that there are two contrary and incommensurate desires which influence our will: our desire for happiness and our desire for justice.  Therefore, there can exist something approximating freedom of our will from God’s, in that our own will is decided by which of these two desires is stronger in a given situation.  According to Duns Scotus scholar Richard Alan Cross:

 

…Aquinas, following Aristotle, gives an account of human actions as directed to our natural self-fulfillment in happiness.

Scotus agrees with this sort of teleological account as a general account of the created order.  But he does not believe that free agents are like this.  He holds that, in addition to its natural inclination to self-fulfillment, a free power (i.e., the will) has a further inclination: an inclination to justice.  Scotus, following Anselm, calls these two inclinations, respectively, the affectio commodi (affection for the beneficial or advantageous) and the affectio iustitiae (the affection for justice).

The idea is that these inclinations explain the fact that the will has two different modes of operation: one in which it seeks self-fulfillment in happiness, and one in which it seeks justice.  (Cross 1999, pp. 86-87)

 

Applying Scotus’ view of two contrary desires (what he describes as “inclinations”) for either justice or our own happiness to social change would be as simple as swapping one of the two inclinations, that for justice, for our inclination to go along with social change.  According to this view, the rejection of social change would be simple – all we would have to do is empower the opposite inclination, that favoring our own happiness (or some other end), rather than the view advocated by social change.

 

To apply Scotus’ attitude towards sin to the defiance of social change, we can use Václav Havel’s concept of dissidence against (his concept of) post-totalitarianism.  In Havel’s view, post-totalitarianism is a system of governance which relies on cultural means to create conformism and obedience in the general population.  It does this by making people engage in a variety of rituals in which they swear their loyalty to a central government, thereby pressuring others to also swear their loyalty to the government and behave as the central government desires.  Havel calls this system, in which people’s desires are manipulated and people are forced to remain obedient to the government contrary to their desires, “living within a lie” (Havel 2015, pp. 31-37).  Havel argues that the way to avoid “living within a lie” is simply “living within the truth,” that is, simply not recognizing the claim that post-totalitarian culture makes over one’s actions and engaging as one might without post-totalitarian influence.  In Havel’s words:

 

Living the truth is thus woven directly into the texture of living a lie.  It is the repressed alternative, the authentic aim to which living a lie is an inauthentic response.  Only against this background does living a lie make any sense: it exists because of that background.  In its excusatory, chimerical rootedness in the human order, it is a response to nothing other than the human predisposition to truth.  Under the orderly surface of the life of lies, therefore, there slumbers the hidden sphere of life in its real aims, of its hidden openness to truth.

The singular, explosive, incalculable political power of living within the truth resides in the fact that living openly within the truth has an ally, invisible to be sure, but omnipresent: this hidden sphere.  It is from this sphere that life lived openly in the truth grows; it is to this sphere that it speaks, and in it that it finds understanding.  This is where the potential for communication exists.  But this place is hidden and therefore, from the perspective of power, very dangerous.  The complex ferment that takes place within it goes on in semi-darkness, and by the time it finally surfaces into the light of day as an assortment of shocking surprises to the system, it is usually too late to cover them up in the usual fashion.  Thus they create a situation in which the regime is confounded, invariably causing panic and driving it to react in inappropriate ways.  (Havel 2015, p. 41)

 

This much is to say that when not affected directly by the post-totalitarian influence, that is, in the part of our lives that the post-totalitarian system is unable to observe, we are able to live within “the truth.”  By “living within the truth,” we can come to oppose the post-totalitarian system for the simple reason that post-totalitarianism is directly contrary to living truly.  Thus, all we need to do to defy the post-totalitarian system is to reject its views and instead live “authentically.”  On a mass level, “living within the truth” will result in creating our own separate “parallel structures,” that is, institutions entirely different from mainstream institutions, where “life that is in harmony with its own aims” is lived directly contrary to mainstream culture (Havel 2015, pp. 78-80).

 

Applied to social change, this view would hold that we should simply not recognize social change.  Instead, we should live according to whichever of our desires are directly contrary to social change.  In so doing, we would come to hold exactly the opposite of the views which advertiser- and mass media producer-caused social change has sought to impose upon us, thereby approximating the views we would have held had there never been any influence from advertisers or mass media producers.  To avoid social change affecting us, we would create our own separate society and institutions, made up exclusively of those who also oppose social change.  To put it simply, we would try to set back the clock to the world prior to modern social change.

 

There are two significant problems with this theory.  The first is that it is itself inane.  Our new “authentic” morality would fail for multiple reasons:  it would force us to go entirely against common moral sense, and it would force us to reject a great deal of good social change, so that our new moral sense would not be clearly better than simply an acceptance of even the most outrageous social change.  Second, this view does not properly understand what modern social change, or objection to that change, even consists of.  Modern social change is not the imposition of a group of hostile desires into one’s decision-making, but rather the manipulation of what all of our desires even are, and rejection of modern social change is not based on some more fundamental group of desires but rather on the holding of an alternative viewpoint, entirely detached from “inclination” or “desire.”  Therefore, not only does this concept of defiance fail to give us a way out of social change, it fails even to understand the basic building blocks of that theoretical way out.

 

Our first objection to this concept of authenticity as the basis of defiance of social change is that it is absurdly unintuitive.  To be truly “authentic,” according to this concept of “authenticity,” is to reject equally everything favored by the advertisers and mass media producers who influence social change.  Yet, such a view would require us to create our own standards for moral concepts, exactly contrary to the normal standards.  These views would require us to fall into the trap Hieronymi described earlier – either to act as if we hold different moral views than we in fact do, or simply to seem insane to the average person.

 

However, this view is not simply unintuitive for practical reasons; it is also theoretically unworkable.  For us to reject all social change would be for us to argue that ordinary society, having been directly affected by a great deal of social change, is clearly morally compromised, and, therefore, that we are morally required to reject mainstream society completely.  Yet, this is incoherent.  We cannot simply reject broad, normal social interaction, for reasons Strawson understood.  As Hieronymi paraphrases Strawson:

 

Given our natural commitment to interpersonal relating, engaging in such relationships, where possible, does not require justification; participation in ordinary interpersonal relations is not the sort of thing for which we have reasons.  The question of whether we should participate is thus idle.  If we go on to ask whether, given the truth of determinism, it would nonetheless be rational to abandon human society, we would have “utterly failed to grasp the purport of the preceding answer, the fact of our natural human commitment to ordinary interpersonal attitudes.”  (Hieronymi 2020, p. 63)

 

As Strawson argues, our involving ourselves in ordinary social interaction is a fact of life.  To demand that we simply abandon ordinary interpersonal interaction is to make morality a demand for inhumanity.  It is not, to put it lightly, a reasonable demand.  Even more importantly, if we morally demand that people stop engaging in ordinary behavior for the sake of “authenticity,” that would be a clear overstepping of morality’s bounds.  As Bernard Williams argues:

 

…somewhere…one reaches the necessity that such things as deep attachments to other persons will express themselves in the world in ways which cannot at the same time embody the impartial view, and that they also run the risk of offending against it.

They run that risk if they exist at all; yet unless such things exist, there will not be enough substance or conviction in a man's life to compel his allegiance to life itself.  Life has to have substance if anything is to have sense, including adherence to the impartial system; but if it has substance, then it cannot grant supreme importance to the impartial system, and that system's hold on it will be, at the limit, insecure.  (Williams 2002, p. 18)

 

As Williams argues, if we are to sincerely demand that people completely reject engaging in ordinary society in exchange for being properly “moral,” we run the risk of forcing people to choose between morality and the things that give their lives meaning.  At that point, we cannot reasonably expect people to prefer the moral view.  This argument therefore simply reduces to a different version of the argument that social determinism makes us unable to fault people for going along with social change:  after all, if rejecting social change is synonymous with rejecting normal, day-to-day life, then it is simply unreasonable and need not be followed.  If this is our concept of defiance of social change, then we remain incapable of understanding why we can fault people for their involvement in morally wrong social change.  Thus, the view that “authenticity” is the best way to defy social change does not advance our understanding of defiance of social change whatsoever.

 

Just as importantly, the “authenticity” argument has difficulty specifying what opinions, habits, or actions could be considered “authentic.”  To avoid social change on Scotus’ theory is at least partially to be able to distinguish between, on the one hand, our natural desires and, on the other, those created by God.  Applied to social change, this theory demands we be able to distinguish between those desires created by social change and those that are truly “authentic,” and to favor the latter.  Yet this creates a problem.  Our present social change is not occurring in a vacuum; it was preceded by other social change.  Indeed, social change is the replacement of one imposed cultural attitude with another.  This fact demands that the “authentic” person distinguish between good and bad social change.  One of two strategies could be used to deal with this problem.  The first is to directly oppose all social change caused by the agents that we diagnosed in the first part of this article.  The second is to oppose only the social change which seems least “authentic” and therefore (supposedly) most impacted by advertisers and mass media producers.

 

Let us start with the extreme reaction.  According to this response, all social change that has occurred since the advent of mass media must be equally opposed.  Ignoring for the moment how unintuitive and impossible this would be, it would still be completely unreasonable.  After all, while a great deal of social change has been bad, a great deal has also been good.  Indeed, much of the negative impact of social change that we listed in the second part of this article was the degeneration of institutions and cultural attitudes that have only existed for a century or two (or sometimes, as in the case of American social life, even less than a century).  To universally oppose all these developments could have many unpredictable effects.  Indeed, it could leave us more impoverished than the alternative.  As philosopher and politician Edmund Burke put it pithily:

 

We are but too apt to consider things in the state in which we find them, without sufficiently adverting to the causes by which they have been produced, and possibly may be upheld.  (Burke 2003, p. 67)

 

There is no certainty at all that the “authentic” person will not fall into exactly this error, that they are not liable to destroy just as much as, if not more than, they revitalize.

 

Not only will opposing all social change destroy some good social change, it is also not certain to get rid of bad social change.  For, in the creation of a new, anti-social-change, “authentic” ethic, we must create new views, habits, and actions.  Those views, habits, and actions, however, will be influenced and manipulated by the exact same forces as those of the vast majority.  After all, this will be an ethic formed by people whose understanding of the world has itself been manipulated by advertisers and mass media producers.  For them to create a new “authentic” ethic would require them to make a claim about what is truly “authentic” about life, yet the nature of this claim will itself be determined by their manipulated understanding of normal life – it seems very unlikely that someone who developed in a society in which (for instance) neurosis and poor work ethic are being rapidly normalized will view those things as being as abnormal as they are.  Indeed, some of the worst social change can be supported based upon the view that supporting it will defeat exactly that viewpoint.  Consider, for instance, Martin Heidegger’s support for Nazism.  As philosopher Richard Velkley describes it:

 

Writing soon after the end of the Second World War on his Rectoral Address of 1933, Heidegger notes that in 1932 he found in Ernst Jünger’s book The Worker “an essential understanding of Nietzsche’s metaphysics, insofar as in the horizon of this metaphysics the history and the present of the West is seen and foreseen.”  Jünger exposed “the universal mastery of the will to power within the planetary scope of history.”  Heidegger then grasped that what Nietzsche meant by the “death of God” was that this actuality of the will to power follows the collapse of the “effective power in history of the supersensible world, especially the world of the Christian God.”  Thereupon he saw the need for “a reflection on the overcoming of the metaphysics of the will to power and a dialogue with the Western tradition from its beginning.”  In this concise account, without evident irony, Heidegger claims that his rallying the university to support the National Socialist regime was for the sake of overcoming the very doctrine, the will to power, that the new regime, usurping Nietzsche’s authority, used as a slogan.  (Velkley 2011, p. 36)

 

This much is to say that Heidegger claimed that his support of the Nazis was the best way to combat exactly the viewpoint that the Nazis espoused (the will to power).  There is no reason to believe that we would have any way of avoiding this exact issue ourselves.  The problem with our new, “authentic” ethics would look very similar to the obstacle Ludwig Wittgenstein found to welcoming the atom bomb:

 

The hysterical fear of the atom bomb the public now has, or at least expresses, is almost a sign that here for once a really salutary discovery has been made.  At least the fear gives the impression of being fear in the face of a really effective bitter medicine.  I cannot rid myself of the thought: if there were not something good here, the philistines would not be making an outcry.  But perhaps this too is a childish idea.  For all I can mean really is that the bomb creates the prospect of the end, the destruction of a ghastly evil, of disgusting soapy water science and certainly that is not an unpleasant thought; but who is to say what would come after such a destruction?  The people now making speeches against the production of the bomb are undoubtedly the dregs of the intelligentsia, but even that does not prove beyond question that what they abominate is to be welcomed.  (Wittgenstein 1980, pp. 48-49)

 

Just as that question, “but who is to say what would come after such a destruction,” applies to nuclear war, it also applies to a rejection of all social change.  What comes after the destruction of something we oppose need not be superior to what was destroyed; indeed, in some cases, the cure can be significantly worse than the disease.

 

The second possible solution, distinguishing between desires based on their seeming “authenticity,” is even worse.  The social change we have described in Parts One and Two is notable not for its overriding our emotions but rather for manipulating those emotions.  Therefore, social change is going to seem “authentic” in proportion to how well it manages to manipulate our emotions.  Of course, this implies that social change’s “authenticity” is proportionate to the effectiveness of the advertising or mass media which imposed the view, habit, or action.  Considering (as we discussed in Part One) the slow improvement of corporations’ strategies over time, what this practically means is that the most recent social change, being advertised the most effectively and therefore seeming the most authentic, will be favored over social change that occurred longer ago.  Indeed, this strategy seems like a perfect way of unintentionally supporting, like Heidegger in Germany, the exact negative social change that we seek to oppose.

 

Beyond the theoretical and practical problems we have found in trying to apply Scotus’ view to social change, Scotus’ interpretation of sin as being caused by a desire contrary to God’s grace cannot be applied to our model of social change at all.  To Scotus, one’s opposition to God’s grace is a product of one’s natural desires.  However, the defiance of social change which we have described is necessarily not based on our desires but rather opposed to them (at least in their manipulated form).  Instead, our opposition to social change has been based on an ideological view that the social change underway is bad, entirely divorced from our personal desires.  Certainly, we would prefer that the social change underway were good; that is, we would desire that we not have to defy social change.  And yet, we are still committed to believing that the social change underway is wrong by virtue of some opinion.  And that opinion is not based on any desires we may have (except in a very unusual construal of the term) but rather on conclusions we have reached through analyzing matters of fact.  In this case, the demand to oppose social change is not created by any desire to oppose social change but rather by our opinion that certain fact-statements are true.

 

Now, this description of where our commitment to opposing social change comes from is difficult to conceptualize.  After all, we generally understand our decisions as having certain ends and those ends as being based upon our preference for certain states of affairs, that is, our desires.  However, as philosopher John R. Searle has argued, there is also another major way that our actions can be justified:  based on our commitments.  Searle defines those “commitments” as follows:

 

…a commitment is the adoption of a course of action or policy (or other intentional content; one can, for example, be committed to beliefs and desires) where the nature of the adoption gives one a reason for pursuing the course.  Thus, for example, I am committed to the practice of philosophy.  And this commitment gives me a reason to pursue it even on hard days when things are not going well.  Similarly one may be committed to the Catholic faith or to the Democratic Party.  When Sally says that Jimmy is unwilling to “commit” she means he is unwilling to adopt a policy that will give him a reason for continuing in certain behavior and attitudes.  Such reasons are desire-independent, though this is disguised from us by the fact that the sorts of commitments I have described are commitments to do things one may want to do anyhow.  (Searle 2003, p. 175)

 

This much is to say that our holding the position, our accepting the commitment, itself inherently requires us to act upon that commitment.  The basis of our acting a certain way is that we have previously committed to doing so.  This concept of commitment inherently brings with it two questions:  first, how commitments come about in the first place and, second, why we follow through on them.  How our commitments come about is explained by Searle using the mundane example of one’s beliefs about the weather:

 

Suppose a speaker utters a sentence, for example, “It is raining,” and suppose he intends to make the assertion that it is raining…But if he is not just uttering the sentence, but actually saying that it is raining, if he actually means that it is raining, then he must intend that the utterance satisfy truth conditions…that it is raining.  That is, his…intention is to impose conditions of satisfaction (i.e., truth conditions) on conditions of satisfaction (the utterance).  His utterance now has a status function, it represents, truly or falsely, the state of the weather.  And he is not neutral vis-a-vis truth or falsity, because his claim is a claim to truth.  That imposition of that sort of status function, of conditions of satisfaction on conditions of satisfaction, is already a commitment.  Why?  Because the assertion was a free, intentional action of the speaker.  He undertook to claim that it is raining and thus he is now committed to the truth of the asserted proposition.  When he intentionally imposes conditions of satisfaction on conditions of satisfaction, in the manner of an assertion, he takes responsibility for those conditions being satisfied.  And that commitment is already a desire-independent reason for action.  For example, the speaker has now created a reason for accepting the logical consequences of his assertion, for not denying what he has said, for being able to provide evidence or justification for what he has said, and for speaking sincerely when he says it.  All of these are the result of the constitutive rules for making assertions, and the speaker invokes those rules when he imposes conditions of satisfaction on conditions of satisfaction.  The creation of the commitments creates desire-independent reasons for action, and the commitment is already built into the structure of the speech act.  In making an assertion the speaker…creates a commitment…His assertion that it is raining will be true or false depending on whether it really is raining.  But the commitment he makes will be satisfied only if the world really is the way he says it is, only if it is raining.  (Searle 2003, pp. 173-174)

 

What Searle is saying here is that, when we make a statement about the world, by the very nature of what a declarative sentence is, we commit ourselves to the truth of that statement.  After all, if in fact our statement were false and we knew it was false, then (unless we intend to lie) declaring the fact about the world does not make much sense.  Further, we are required to continue to believe what we have asserted about the world (unless given contrary evidence) if we are to “take responsibility” for what we said, and we are required (or at least have some reason) to “take responsibility” for what we say by the very nature of our making a statement; it is a rule of declarative sentences that we should “take responsibility” for them if we say them.  Not only does this apply to declarative statements, it also applies to beliefs.  As Searle argues:

 

Not all commitments are created by way of performing a speech act.  For example, one may commit oneself to a policy just by adopting a firm intention to continue with that policy…We can create such a commitment for ourselves by imposing conditions of satisfaction on some other entity…in making the truth claim we are also imposing commitments on ourselves.  In making an assertion we take responsibility for truth, sincerity, and evidence…These responsibilities are met only if the world is such that the utterance is true, the speaker is sincere, and the speaker has evidence for the assertion.  (Searle 2003, pp. 175-176)

 

This much is to say that even the holding of an opinion can create a commitment to act on that opinion, not because we necessarily have a desire to act on it, but because the very nature of viewing an opinion as true requires us to commit ourselves to it and to act as if it were true.  The reason why we are required to follow through on the statements we make and the opinions we hold, Searle argues, is much the same as our reason for having taken them to be true in the first place:

 

Because the speaker stands in a special relation to his own assertions, in that he has created them as his own commitments. He has freely and intentionally bound himself by undertaking his commitments. He can be indifferent to the truth of someone else's assertion, because he has not committed himself. He cannot be indifferent to the truth of his own assertions, precisely because they are his commitments. (Searle 2003, p. 176)

 

This is to say that to commit oneself to a certain opinion or action is to require oneself to follow through on that commitment, by the very rules of what a commitment is.  We certainly cannot say that, because we would not like to believe something, we shall not believe it; and in some cases, we can even say that while we would not like to take a particular action, we shall do so anyway because of our commitments.  Hence, we can see that our opinions, by creating commitments, can in fact lead us to act contrary to our desires.

 

This model of commitment-creating opinions maps well onto our own commitment to oppose certain social change.  Our opposition to social change is caused by quite theoretical opinions that we hold about that social change, not by our desires.  Those opinions are created not by a desire to hold them but by analysis concluding that they are true for purely theoretical reasons.  Those theoretical truth-claims are then translated into a demand that we act on them, not because we desire to do so (in fact, the manipulation of our desires inherent in social change likely creates stronger desires pushing us away from defying it) but because we believe we are committed to defiance by virtue of our belief that social change should be defied.  In this entire description of our decision to defy social change, never once does “desire” play a significant positive role.  If we were to apply Scotus’ account, we would instead be required to insert desire as a factor somewhere along the line (for instance, “we have a desire to act on our commitments”).  Yet adding “desire” here would only overcomplicate our analysis with a factor we do not ordinarily recognize as a factor at all.  Therefore, it is ultimately implausible to view our opposition to social change as being created by some desire.

 

If indeed, as we have argued, defiance of social change has nothing to do with some desire to oppose social change, then Scotus’ theory of sin is not applicable to defying social change.  After all, without some significant desire contrary to those pushed by advertisers and mass media producers, our desires will always favor social change.  If this is true, and if, as Scotus says (in relation to grace rather than to social change), the only possible source of our opposition to social change is our desires, then opposition to social change in any given case is extremely unlikely, and consistent opposition to social change is completely impossible.  Clearly, there must be something other than desire that can motivate our defiance of social change.

 

While our analysis of applying Scotus’ theory of sinning to defiance of social change has been extremely negative, that does not mean it has not been productive.  Indeed, from this section, we have learned a great deal.  We have learned that any concept of defiance of social change, if it is to be effective, must not be simply counterintuitive; that it must have a clear concept not only of what social change is to be rejected but also of what social change is to be accepted; that it must not require the creation of a new ethic; that the more contrived a view or attitude actually is, the more authentic it tends to appear; and that it must not view defiance as rooted in desire but rather in our reasoned beliefs and purposeful decision to defy.  All of these conclusions will be exceptionally helpful as we explore a more fruitful concept of sin, that of Thomas Aquinas.

 

The Right Way to Oppose Social Change

 

In this final section of this chapter, we will argue that Thomas Aquinas’ concept of sin, as applied to social change, allows us to pursue a consistent and reasonable concept of defiance of social change.  To do so, we will first examine Aquinas’ concept of sinning.  Then, we will apply that concept to social change in a manner similar to how we applied Scotus’ concept.  We will find that the most obvious conclusion we could arrive at – one which views absurdity and abstraction as the best way to contradict social change – is in fact a poor substitute that ends up supporting the very social change it attempts to oppose.  Rather, the best way to oppose social change is through a process of undoing one’s changed views about the world.  This process occurs through our asserting and committing ourselves to positions that we previously held but have since moved away from, starting with those held most recently, which therefore seem the least absurd.  We argue that such a process allows us to undo the effects of social change upon ourselves and to avoid being affected by further negative social change, without forcing us to oppose all the positive social change that has preceded us.

 

Thomas Aquinas conceptualizes sin differently than Duns Scotus.  Whereas Scotus views sin as being the triumph of our earthly desires over the desires created by God’s grace, Aquinas believes that the former cannot triumph over the latter.  For, to Aquinas, God’s grace is involved in all human decision-making, and it is superior in influence to all other desires.  So, God’s grace cannot possibly fail to influence human decision-making by somehow being less impactful than earthly desires.  Therefore, Aquinas rejects Scotus’ position.  As a result, Aquinas believes that God must actively will the possibility of sin.  However, Aquinas also believes that God cannot will sin, or else he would in effect will evil, which would go against God’s extreme goodness.  To solve this dilemma, Aquinas comes up with an ingenious conceptual scheme.  In Bernard Lonergan’s words, Aquinas views possible actions as being on a “three-lane highway,” where we choose “between what God wills to happen, what he wills not to happen, and what he permits to happen.”  Or, as Lonergan quotes from Aquinas:

 

God neither wills that evils occur, nor wills that evils not occur, but he wills to permit evils to occur. (Lonergan 2005, pp. 111-112)

 

Along this highway, Aquinas views sin as a rebellion against God’s specific, but not his general, ends.  This much is to say that to sin is to act against God’s judgment of which actions will achieve the good, but not to deny that the good is good:

 

Thus, after maintaining in Pars prima, that nothing can occur praeter ordinem divinae gubernationis (apart from the order of divine government), St Thomas at once proceeds to ask whether anything can revolt contra ordinem divinae gubernationis (the order of the divine government). The answer to this is a distinction between general and specific ends, between universal governance and its execution by particular causes. In the response St Thomas is content to deny revolt in the former sense…His argument is that the sinner does not withdraw totally from divine governance, for the sinner intends some good; and the implication is that in some partial manner the sinner does withdraw and therefore is rightly punished. (Lonergan 2005, p. 114)

 

Lonergan further argues that sinning is, from the perspective of God, completely unintelligible “rationally,” that is, that it is akin to a “surd” in mathematics, outside of the rules of the world:

 

What objective falsity excludes is understanding, the explanatory science that follows an empirical science when the object of the empirical knowledge is objective truth. For, obviously, the possibility of our understanding anything is ultimately due to the object's commensurability to the divine intellect; and in absolute objective falsity it is precisely this commensurability that is lacking. We can know sin as a fact; we cannot place it in intelligible correlation with other things except per accidens; that is, one sin can be correlated with another, for deficient antecedents have defective consequents; but the metaphysical surd of sin cannot be related explanatorily or causally with the integers that are objective truth; for sin is really irrational, a departure at once from the ordinance of the divine mind and from the dictate of right reason. (Lonergan 2005, p. 115)

 

Since sinning is a “surd,” that is, something that goes against the rules of the world, and since it is not willed whatsoever by God, it must be willed completely and freely by the sinner and not at all by God:

 

Because sin is a surd, an irrational, an objective falsity, it cannot have as antecedent either cause or non-cause, where by non-cause is meant a policy of inaction that makes sin inevitable; for both cause and non-cause are instances of intelligible correlation, and the irrational cannot be so correlated. Thus, while reprobation precedes in virtue of divine omnipotence and omniscience, still this precedence is a mere empirical, and in no way an intelligible, antecedence. It does not cause, or lead to, or result in, the sin. And so it leaves sin to be a first in its own order, to be due to the sinner alone, and to be a ground for punishment in a way in which merit is not a ground for glory. (Lonergan 2005, p. 116)

 

This description of sin, if it can be applied to social change, tracks nearly perfectly onto all of our descriptions of what defiance of social change would entail.  First, this understanding of sin does not require us to approve of something that ordinary morality holds to be evil.  Indeed, it would specifically prohibit the general rejection of social change, or the holding of positions actively rejected in a post-social change world, in favor of holding positions that were allowed – but not favored – by social change.  Second, it allows defiance of social change to be a rational decision based on our commitments.  By contending that sinning is based on an outright decision to partially reject God, Aquinas portrays sinning as rational and teleological, just as defying social change based on one’s opinion that the change underway is negative is rational and teleological.  Finally, Aquinas views the decision to sin as an outright decision of the sinner, something that can be “ground for punishment,” or, in our case, for approbation.

 

However, we are getting ahead of ourselves in praising the application of Aquinas’ theory before actually applying it.  After all, Aquinas’ concept of sin is not as straightforwardly applied as Scotus’.  We cannot simply say that there are some things within us (or, as Havel argued, desires) contrary to social change.  Instead, defiance of social change is a sort of action.  This action is further complicated because, according to Aquinas, it cannot be explained “explanatorily or causally,” at least from the perspective of “objective truth.”  The most obvious application of Aquinas’ argument – given his rejection of any “general” rebellion against God and his view that sinning is surd-like from the perspective of “objective truth” – would be to explain defiance of social change as a kind of absurd particularism, that is, a view that favors our repeatedly making absurd decisions and rejecting individual parts of social change without any concept of a broader rejection or any basis for that rejection.  Take, for instance, postmodernist artwork.  Postmodernist artwork seeks to criticize broader artistic concepts by using them in a context that makes them meaningless, or at least not meaningful in their intended way.  As such, it is a very good example of something which is absurd and simultaneously criticizes the particulars of a certain cultural milieu.  For instance, art critic Hal Foster describes the work of artist Laurie Anderson as using:

           

…the art-historical or pop-cultural cliché against itself, in order to decenter the masculine subject of such representation, to pluralize the social self, to render cultural meanings ambiguous, indeterminate. (Foster 1987, p. 132)

 

While the criticism being described targets very different social trends than those to which we object here, the strategic argument might seem applicable:  one can oppose certain cultural attitudes simply by criticizing particular parts of the cultural milieu as absurd.  However, this sort of criticism is very easily assimilated into the exact cultural milieu to which it objects.  The reason is the same as the reason why attempts to defy social change that rely on “authenticity” fail: critical abstraction has no concept of what should replace negative social change.  Indeed, there is little to it beyond criticism.  As a result, it can very easily be made amenable to, and even a part of, the negative social change it seeks to reject.  Indeed, this has already occurred in the case of abstract art, in a process philosopher David Carrier describes:

 

In 1985, when he was a journalist, [Hal] Foster praised “works that recall a repressed source or marginal sign-system in such a way as to disturb or displace the given institutional history of an art or discipline.” More recently, he has insisted that “critical theory is immanent to innovative art.” But now, when Foster occupies an endowed chair at Princeton University, he is not an outsider. Like other leftists, Foster often cites [Karl] Marx, [Charles] Baudelaire, [Walter] Benjamin, and [Theodor] Adorno as role models. All of those men wrote in genuine opposition, at some level, to their own cultures. Marx, Baudelaire, and Benjamin were marginal intellectuals, and Adorno a political refugee, but nowadays, critical theory and leftist art history are perfectly respectable academic subjects.

“A public audience in the ideal modern sense,” [Thomas] Crow writes, “articulate, independent, potentially resistant—was fundamentally at odds with the rigorously corporate character of Old-Regime high culture.” In our society, this vision of an independent public audience is still a utopian fantasy, for museums and galleries are the corporate cultures of high art. In the last days of the old regime, Crow plausibly indicates, critics writing about the Salons could only act in opposition to the government because they had no place within the governing elite. But the French Revolution destroyed this autocratic structure, and in our bourgeois culture leftist academics are as much a part of an unruly, democratic system as anyone else.

Nostalgia for the situation of French intellectuals, circa 1765, or Adorno and Benjamin in the 1930s, prevents Foster and other leftists from understanding the present. Neither our leftist critics nor the artists they champion are outsiders. This does not necessarily mean that leftists will triumph, or even be taken seriously, but it does show that they are not impotent outsiders…Political artists and advocates of critical theory face an inescapable dilemma. When they achieve success, they become part of the system they would criticize. In that way, these artists are like 1960s pop musicians praising revolution who became very successful members of the establishment. (Carrier 2003, pp. 169-170)

 

This example shows that making absurd and particular objections to social change is an innately weak basis for defiance of an institutional state of affairs.  With time, its very success is manipulated to turn its criticism into a reinforcement of social change, simply through different methods.

 

Yet, if absurd particularism fails, then it becomes more difficult to apply the Thomist concept of sin directly to social change.  After all, absurd particularism is a nearly one-to-one application of Thomism to social change.  And our problem with absurd particularism – the fact that it offers no specific alternative to social change and eventually folds into support for whatever cultural “establishment” it seeks to reject – seems applicable to all forms of Thomistic sin.  After all, according to Aquinas, we are incapable of direct, general opposition to God, and we are incapable of creating an intelligible basis for defiance of God.  If we apply these same constraints to social change, then our defiance cannot be general, and it cannot be understandable.  It would therefore seem to follow that our defiance cannot be anything other than absurd particularism.

 

However, that analysis does not recognize that there are important differences between the Thomist God and the social change currently underway.  After all, the Thomist God is beyond time; he is, in Lonergan’s words, “not simultaneous with anything else,” and God has “neither past nor future” (Lonergan 2005, p. 323).  Or, as theologian Paul J. Griffiths puts it, “there is no succession in [God’s] life, no before and after. Every event in that life is simultaneous with every other” (Griffiths 2014, p. 75).  Therefore, there can be no change or nuance in the ends of God.  Grace must favor exactly the same opinions, habits, and actions for as long as we are alive.  It is a constant force.  Social change, on the other hand, is not.  As we discussed earlier, social change is created through advertisers’ and mass media producers’ constant refinement of their industrial strategies for changing people’s behavior.  It is therefore a very dynamic force.  For one, it is constantly becoming more effective, by virtue of the natural effects of industry-wide trial and error and of each firm’s copying other firms’ strategies.  For another, depending on changes in the market, it can also change what its ends even are.  In the second part of this article, we described one possible example: the decreasing need for low-skill American labor meant that advertisers no longer needed American workers, allowing them to push neurosis, something they had previously opposed, without hurting their own economic situation.  This example shows how advertisers and mass media producers can, over time, come to reject ends which they themselves previously supported.

 

Another difference between God and social change is that God’s grace influences not just our particular ends but also our general ends.  God’s grace impacts our intentions just as much as our particular decisions, and we are capable of understanding these two influences separately.  To Aquinas, in fact, God’s influence upon our general ends is more powerful than his influence upon our specific ends; indeed, central to his theory is that we are capable of rejecting the latter but not the former.  Advertisers, on the other hand, have little to no influence upon our general ends.  The way in which they influence us, if the operation we diagnosed in the first part of this article is correct, is by taking our already existent ends (in particular, our desire to be esteemed by others) and manipulating our understanding of the facts of the world so as to change our particular opinions, habits, and actions.  Therefore, while Aquinas views us as never being able to shake off the influence of God, owing to grace’s influence on our general ends, we have no reason to believe social change has any similar influence on our general ends.  We therefore do not need to reject our general ends to any extent in order to defy social change.

 

Considering these two facts, there is a great deal in the world that is entirely understandable to us yet inexplicable, in either a positive or a negative sense, from the perspective of social change.  For one, the reasons we take our general ends (our desire for virtue, perhaps, or our desire for pleasure) to be desirable are adopted through methods very different from the way in which the views manipulated by advertisers and mass media producers are adopted, namely, through our recognizing that certain opinions, habits, or actions allow us to satisfy certain more fundamental ends.  The advertisers and mass media producers are not able to understand, however, how those general ends, which are necessarily prior to social change, are arrived at.  Our general desires are therefore inexplicable from the perspective of social change; they are treated simply as a “given” upon which social change then acts.  As a result, we can conclude that we are capable of retaining our broad beliefs about, for instance, our desire for “the good” without precluding our ability to defy social change.

 

Of course, this conclusion does not help us to defy social change very much either.  After all, general views about the good are not necessarily opposed to social change.  The second category of inexplicable belief, however, helps us more.  This category comprises those beliefs of ours which precede the influence of the advertisers and mass media producers.  To influence our actions, the advertisers and mass media producers must manipulate previously existent views about the world to favor particular views, habits, or actions.  Those previously existent views, however, are, or at least once were, real.  Yet, for a particular advertiser or mass media producer, the only thing necessary to understand is how a particular method of social change will change a person’s views.  There is no need to understand what the prior views were.  (In fact, considering the pluralism of what those views might be, there might be good reason not to know, so that one’s advertisement will appeal to everyone instead of just a small demographic.)  Therefore, there is no need to understand the reason for a previous view.  This makes people’s original views inexplicable from the perspective of the advertisers and mass media producers, and it makes one’s continued belief in one’s pre-advertisement views, in spite of the advertisement, seem absurd from their perspective.  Here, we see a case not only of inexplicable views but of particular inexplicable views, habits, and actions contrary to what advertisers and mass media producers desire.  We see defiance of social change.

 

However, this steadfast holding of pre-social change views requires explanation.  After all, if our opinions, habits, and actions are affected by social change, it seems unreasonable simply to re-assert one’s previous views in spite of their having been overturned.  Further, there must be some method for us to retrieve those views and to hold to them consistently against the influence of social change.  The first of our demands, some way for us to re-assert our pre-social change views, can be supplied by philosopher Gareth Evans.  Evans argues that statements in which we discuss ourselves using the term ‘I’ create a unique grammatical situation in which we have a “tenseless predicate,” one which allows us to apply previously true facts to the present (as well as currently true facts to the past):

 

…if a subject has, at time t, a belief which he might manifest in judging ‘I am F’…then the tendency of belief to persist means that there is a nonnegligible probability…of his still having, at a later time t’, the disposition to judge ‘I am F’. If we now introduce the consideration of tenses (etc.), we introduce other aspects of cognitive dynamics: combined with the aspect we have hitherto isolated, this yields (for instance): if a subject has at t a belief which he might then manifest in judging ‘I am now F’, then there is a non-negligible probability of his having, at a later time t’, a disposition to judge ‘I was previously F’. The later manifestation of the belief still employs ‘I’…and, so far as the ‘I’-Idea is concerned, the later dispositions to judge flow out of the earlier dispositions to judge, without the need for any skill or care…on the part of the subject.

A possessor of an ‘I’-Idea has a capacity to ascribe past-tense properties to himself on a special basis: namely the memory of the basis appropriate for an earlier present-tense judgement…But we must be careful not to lose sight of the fact that memory is not a way of gaining knowledge. (Evans 2002, p. 237, p. 238)

 

What is important about Evans’ analysis is its conclusion that remembering that we held a particular opinion is not a way of finding out that we hold that opinion but rather a display of our holding it.  Therefore, to believe a certain statement to be true, we require no more proof than that we have believed it; our belief in a particular idea is not liable to counterargument (by some contradicting proof, for instance) because the original remembering was never an argument in the first place.  Thus, we find that holding a particular opinion is a bit like having a commitment to believe something; holding the opinion is, in itself, reason to act on it.  From this perspective, our opinions are created not by some group of arguments but rather by whichever previously held opinion seems most salient to us upon remembering it.  And this means that the only thing required of those who seek to reject social change is to favor the views they held prior to social change over the ones they held afterwards.
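
To make the structure of Evans’ “cognitive dynamics” explicit, we might schematize it as follows (this is our own illustrative formalization, not Evans’ notation, with $\mathrm{Bel}_t$ standing for the subject’s disposition to judge at time $t$):

$$\mathrm{Bel}_{t}(\text{``I am } F \text{ now''}) \;\Longrightarrow\; \Pr\big[\mathrm{Bel}_{t'}(\text{``I was } F \text{ at } t\text{''})\big] \text{ is non-negligible}, \quad t' > t$$

On this schema, the later judgment inherits its warrant from the earlier disposition itself rather than from any new evidence gathered at t’; memory here transmits, but does not generate, knowledge.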

 

Thus, we could deliberately favor those opinions which we believed longer ago.  This would allow us to hold opinions contrary to those currently being introduced by social change.  At the same time, such a process would not necessarily lead us into uncertainty or into the creation of a new, easily assimilable ethic.  After all, our new worldview would simply be the ethic we held prior to recent social change, and, as we argued earlier, advertisers and mass media producers cannot possibly understand, let alone manipulate, views held prior to social change’s influence.  Further, our new method of behaving would be unusual; indeed, as we will discuss later, it would be a deliberate and quite large shift in one of the most normal parts of our lives, our remembering what we believe.  Therefore, by favoring more distant over more recent opinions, we would become completely “surd”-like to advertisers and mass media producers, impossible to explain via the normal methods they use to manipulate our opinions, habits, and actions.

 

Now, of course, such a deliberate favoring of opinions held longer ago would require changing at least part of our normal practice of remembering our previous opinions.  After all, as Evans states, the value of one’s memory of a previous opinion is, at least in part, a function of “the gap between t and t’,” that is, the amount of time that has passed since one originally made the judgment (Evans 2002, p. 237).  To favor one’s more distant memories would therefore be an exact reversal of our normal practice of remembering.  This would seem to make our recommended practice, particularly if it is to change which actions we take, exactly resemble the sort of absurd or overly unusual behavior based on general moral reasoning that we have been trying to avoid.  After all, we are demanding that people completely change how they carry out a normal part of their lives, in this case how they remember things about themselves, for the sake of defying social change.  And further, we are saying that not engaging in such behavior makes them immoral.  Such a charge, no matter how we justify it, seems on its face unsupportable.
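
The exactness of this reversal can be put schematically (again, our own hedged rendering, not a formula Evans provides).  If ordinary remembering weights a past judgment formed at $t$ and recalled at $t'$ by some function of the gap,

$$w(t' - t) \ \text{decreasing in } (t' - t) \quad\longmapsto\quad w^{*}(t' - t) \ \text{increasing in } (t' - t),$$

then the practice recommended here replaces the decreasing weighting $w$ with an increasing weighting $w^{*}$, so that, among retrievable opinions, the oldest are preferred.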

 

However, in this case, our demand is appropriate.  The reason is that the demanded change is not the reversal of a behavior in which we engage intentionally but rather the addition of a restriction to a behavior of which we are mostly not cognizant while engaging in it.  As Evans argues, ‘I’ statements that apply truths about the past to the present require no particular “skill or care,” which is to say that we do not deliberate about our method of remembering things.  While applying such a rule would be an imposition, therefore, that is all it would be.  Further, this imposition is a moral requirement not because it is itself moral or because failing to engage in it is itself immoral, but because we have previously accepted that we have a moral duty to defy social change.  That moral commitment then translates into a requirement to change our process of remembering.  Therefore, the moral issue at hand is not whether we should change our process of remembering but rather whether we have a commitment to defy social change (we have argued we do), whether changing our process of remembering is an effective way of opposing social change (we have argued it is), and whether a commitment to something translates into a moral requirement to engage in the most effective, morally acceptable way of following through on that commitment (aided by Searle, we have argued it does).

 

(One could counter at this point that Searle’s argument about commitments does not necessarily require us to engage in the most effective method of following through on commitments if that method is considered abnormal.  This argument is clearly wrong, though.  Imagine, for instance, that we were given a gift (thereby creating a moral commitment to show gratitude to the gift-giver) by someone from a culture with a different way of showing gratitude from ours.  If we knew how to show gratitude in that other person’s culture and deliberately avoided doing so because that method was abnormal in our culture, we would clearly not be acting morally.  While it is possible that, if that culture’s method of showing gratitude seemed morally outrageous to us, it would not be morally objectionable to avoid showing gratitude in that fashion, that exception clearly does not apply to a method of showing gratitude that is morally neutral but simply abnormal.  Therefore, unless there is some outright moral objection to whatever method is most effective at following through on our moral commitments, the most effective strategy is morally required.  Considering that there are no clear moral objections to favoring our more distant memories over our more recent ones in determining what our beliefs are, we need not worry about whether, if favoring more distant over more recent memories is indeed effective, we “ought” to engage in it.)

 

Of course, our examination of this method is not quite complete.  While it is certainly possible to change how we remember our opinions some of the time, at other times, this kind of strategy is clearly absurd.  First, it is difficult to believe that we can consistently change the way we employ basic strategies of remembering; while we might be able to do so in a couple of individual cases, it is hard to believe that we can always act on a self-consciously contrived remembering strategy.  Second, we cannot continually refuse to learn from errors when we make them as a stratagem to fight against social change; we need to be able to experience personal change and learning without social change.  These two problems can only be solved by distinguishing between two different kinds of knowledge.

 

To Michael Oakeshott, all action requires knowledge of some kind.  That knowledge can be separated into two groups: technical and practical.  Technical knowledge is:

 

…susceptible of precise formulation, although special skill and insight may be required to give it that formulation…The technique (or part of it) of driving a motor car on English roads is to be found in the Highway Code, the technique of cookery is contained in the cookery book, and the technique of discovery in natural science or in history is in their rules of research, of observation and verification. (Oakeshott 1962, pp. 7-8)

 

This is the kind of knowledge that can be taught to us directly; because it is susceptible of precise formulation, it can be learned from books and rules rather than only through practice.  It is different from “practical knowledge” (which we discussed in the first part of this article).  Practical knowledge cannot be explained through recitation or formulation but rather “exists only in practice, and the only way to acquire it is by apprenticeship to a master – not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practising it.”  Examples of people who gain this practical knowledge include:

 

…a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as a knowledge of the moves, and a scientist acquires (among other things) the sort of judgement which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore. (Oakeshott 1962, p. 9)

 

With this distinction in mind, our first problem evaporates quickly.  While we certainly cannot constantly, by deliberate decision, favor our older memories over our newer ones, we can certainly inculcate a habit of doing so.  By creating this habit, our understanding of how we should remember our own opinions will translate from a piece of technical knowledge into a sort of practical knowledge, something that just seems obvious.  It will in fact seem increasingly obvious over time, so that favoring less recent memories will no longer require intentional decision-making, just as the chess player slowly, over time, no longer has to think deliberately about the manner in which he learned to think about the game.

 

Our defying social change will become easier over time for another reason as well.  As we continue to defy social change, we will become more and more entrenched in our own opinions, habits, and actions, which will be increasingly different from those pushed by advertisers and mass media producers.  At the same time, the opinions, habits, and actions pushed by social change will become increasingly normal.  As a result, those who defy social change will appear more and more unusual to advertisers.  Since advertisers and mass media producers prefer advertising to larger demographics over smaller ones, the person defying social change, being an increasingly unusual demographic, should expect advertisements and mass media productions to seem increasingly alien as well and therefore less likely to influence their view of the world.  The process by which this occurs has been described by Griffiths in relation to sin:

 

Sin…mov[es] [sinners] toward what they seek, which is the nothing from which they came. Repeated sin, as it approaches the necrophiliac purity of invidia or acedia, may bring sinners to the point where they lack the energy—the very being—to turn their gaze toward the LORD who is the only possibility of re-creation, of setting sinners on a path of remaking, away from the nihil and toward the LORD. The speculative position taken here is that the reductive self-damage that is sin may, in some cases, find its proper terminus, which is the bringing of the sinner to nothing; that this annihilation may in some cases be permanent, which means that once sinners have gone out of existence they will not be reconstituted… (Griffiths 2014, p. 202)

 

While this is not a one-to-one analogue (after all, there is something prior to the social change currently underway), if we view “nothing” as merely the world without the influence of social change and “something” as that influence, then we can see how this description tracks onto social change as well.  As we repeatedly reject social change, we will slowly come to seem more distant from the target demographic of advertisers and mass media producers.  As this occurs, we will have to resist social change less and less, and at some point, the target demographic of advertisements and mass media productions will be so different from ourselves that it will be impossible for us to understand the cultural attitudes of the socially changed world any more than we could understand an alien culture.  Therefore, the original difficulty of opposing social change should disappear over time.

 

The second difficulty, that we might find it difficult to experience personal change if we are suspicious of social change, is less easily fixed.  The problem is not as significant as it seems, though, for two reasons.  For one, practical knowledge can only be gained through repeated practice.  Therefore, our practical knowledge of the world, at least while current advertisement and mass media production remain imperfect, cannot be manipulated in the same way as our technical knowledge.  Indeed, in the first section of this article, we argued that the advertisers and mass media producers manipulate our understanding of the world by changing our sense of how normal or abnormal certain beliefs, habits, or actions are on a broad scale.  Further, in the second part of this article, all of the social change we described was caused by people’s changing understanding of the normalcy or significance of different sorts of behavior.  Based on this analysis, there is reason to believe that what we have learned through the practical application of traditional means of learning and acting does not need to be rejected.  Social change has been caused not through traditional media or sociality or religion but rather by moving people away from those traditional sorts of behavior.  Therefore, we can expect that social change (barring significant technological developments in entertainment) will not be able to manipulate practical knowledge in the near future.  Indeed, the only memories we are encouraged to remember differently are those which reflect our broad habits and opinions, memories which make up only a very small percentage of our total memories and impact only a very small percentage of our total actions.  We need not, on the other hand, change how we remember facts about the world, our interpersonal relationships, or the vast majority of other important parts of our lives.

 

Second, due to the natural erosion of the influence of the advertisers and mass media producers that (as we just described) results from our adoption of this resistance, our incapacity for personal change will only be temporary.  As we become less and less influenced by advertisers and mass media producers, we will be able to accept more and more personal change.  We will increasingly not have to favor more distant memories.  How this will occur is very speculative and probably differs from person to person, but, to put it simply, at our desired end state, wherein advertisers and mass media producers hold no appeal whatsoever, those who defy social change should be able to live with quite normal personal change.

 

This much being said, the problem diagnosed here is a real one; it will require us to live in a way that is extremely conformist.  Our knowledge will be a sort of “dasein,” as we defined it in the second part of this article; it will come, as John Haugeland said, “not so much by hard work and insight as by comfortable acquiescence and consensus” (Haugeland 2013, p. 135).  This knowledge will leave us somewhat weakened in creativity, at least for a time; it will leave us “not having to reinvent the wheel, coupled with a smug intolerance of anything but the same old wheels” (Haugeland 2013, p. 136).  This is an imperfect situation, admittedly.  It makes defiance of social change unappealing, as we are required to hold an often conformist, uncreative position.  That being said, it is unsurprising that the best way to respond to social change is clearly imperfect.  Negative social change is negative.  It should not shock us that it will have a negative effect on everyone in a given society, whether or not they individually reject it.  Still, in spite of the real demands that our concept of defiance of social change makes of us, if our moral commitments have any worth whatsoever, we are committed to follow through on them by defying social change, whether or not we would like to.

 

The most obvious worry we face here is that our whole theory seems to make opposition to social change inherently dependent upon one’s background.  For someone who was born earlier, the rejection of social change looks very different than it does for someone born later.  After all, the older person’s memory goes back much further; they are capable of opposing far more negative social change than a younger person.  Further, if someone is born after a negative social change has already occurred, then it is impossible, under our theory, for them even to have the capability to oppose that change.  However, this is more a problem with the world than with our theory.  As we described in the second part of this article, there are some ways of living, some “daseins,” that are simply impossible to resurrect.  None of us can, for instance, pretend that we are Aztecs without, frankly, playing pretend.  Our ability to oppose social change exists only within the context in which that social change seems emergent to us.  Thus, we absolutely cannot oppose social change in a reactionary way; we can oppose only that social change which has occurred within our lifetimes.  This fact makes defying social change all the more important: one is defending values that, if one fails to defend them, future generations will be incapable of defending.  Hieronymi discusses this sort of dynamic in describing hypothetical protest against inequality:

 

Though it is edifying to say, for example, that we are all moral equals, that each is endowed with no more but also no less moral standing or worth than any other, in many circumstances there is a palpable sense in which this is not yet so. Some are more equal than others. The interests of some, and the voices of some, dominate the standards of regard, the actually existing moral and interpersonal framework of a society. And so the edifying claim is, in one way, not true. It might be more accurate to say that each person ought to have equal moral status—though, of course, if this is so (if each ought to have equal moral status) then, in another way, each one does, already, have a kind of equal moral status, insofar as one appeals to the ideal. (Thus the double vision.) The protester of injustice asks for this more rarified, ideal status to be made actual. The urgency of protesting injustice comes, in part, from the fact that something like the here-and-now reality of one’s moral standing is on the line and, perhaps, in part from a concern that one’s vision of justice might be lost to history. (Hieronymi 2020, p. 91)

 

Indeed, if we were in a society in which our “vision of justice” had been “lost to history,” we would have no way of opposing the negative social change that had occurred.  This is what makes the fact that our current culture is still Western, social, and theist, as we described in the second part of this article, a consolation to our view.  If this were not the case, we would have neither reason nor ability to defend a Western, social, and theist society.  The fact that precisely what we cling to will differ depending on the facts of our lives is not a problem with our theory, simply a necessary consequence of the inherent diversity of normal life.

 

An obvious objection to this argument is a simple matter of fact: there do exist people who hold reactionary opinions, hoping to bring about social change that undoes changes which have already occurred.  There are many people who purportedly support Communism or Nazism in spite of those ideologies’ long-past downfall, or who desire to bring back some historical Caliphate or other theocracy.  Therefore, it can be said that support for long-dead concepts of justice seems perfectly possible.  The problem is that, as a matter of fact, these ideologies are not really forms of defiance.  Indeed, they are correlated with being, if anything, more advertiser-friendly, that is, more affected by current social change, as can be seen from the fact that political extremists tend to have higher rates of mental illness and young people tend to be more politically extreme (on both extremes) (Bhui 2019, pp. 549-552; Mitrea 2020, p. 798).  As a matter of fact, politically extremist or reactionary views seem to be part of the post-social change political order, something which makes politics into a source of mental illness more than anything else (by making political issues nearly impossible to fix, fixable only through “fundamental” national change rather than through change in one’s local community).  Therefore, while reactionary views are certainly possible, it is not possible for such views to constitute sincere opposition to modern social change.

 

In spite of these problems, and they are problems, our theory of defiance of social change meets all of our main demands.  It allows us to hold a position that is not clearly morally absurd or wrong but rather within the norm of society.  It tells us to defy social change by making non-absurd moral claims.  It allows us to cleave to a view of the world which is not easily assimilable into social change, because it is not understandable to advertisers and mass media producers.  It allows us to take a reasonable position as to what social change can be opposed, namely, the social change which has occurred during our particular lifetimes.  And it allows us to oppose social change based upon our opinion that the social change is bad.  In short, it fits all of the requirements that we set out for a theory of defiance of social change.

 

Not only does this argument meet those requirements, it also clears up some of the lack of clarity in our earlier views.  Specifically, when we described how it was possible for people to oppose some social change but not all social change, we noted that an important requirement for social change to be opposable was that it be unusual.  We noted that its unusualness depended upon context: for instance, the reason why the Germans are clearly condemnable for supporting the Nazis, while it is difficult to strongly condemn similar Roman aggression, is that the Germans had a recent history that was not genocidal and lived in a time when Nazi-like policy was antiquated.  This reasoning is backed up by our theory of defiance of social change.  After all, the Germans were capable of remembering, within their recent life history, a period when they did not support committing genocide against Jews, when issues of human rights or pluralism might have mattered to them, or at least when favoring genocide was extremely unusual.  The Romans, on the other hand, had no common memory of any modern concept of “human rights” or “pluralism.”  Therefore, the Romans are not faultable in the same way the Germans are, because the Romans had less capability to defy the negative actions in which their leaders engaged.  Hence, one’s memory of a society without a particularly evil sort of social change, or of one’s own one-time opposition to that sort of social change, creates a commitment to oppose that social change in the future.  (This might also be why most people view traitors to a good cause as so evil.)

 

Before we conclude, we would like to answer one more possible question: what would this look like in practice?  After all, we have remained very theoretical and have discussed purely the manner of thought required to oppose social change.  We have not discussed what people would actually do if they came to believe this sort of thing.  To put it simply, it would require different things of different people.  For the average person, it would require living with the ordinary attitudes held as long ago as possible; what this means would depend on one’s age, the culture one comes from, and other influences.  This could be moderated if one’s prior moral opinions were truly outrageous (if one were, for instance, a segregationist in one’s youth), in which case one could take up the most distant opinions one views as reasonable.  For a firm, it would require adopting strategies, particularly in marketing, quite unusual today but which more closely resemble those of the firm at the most distant point retrievable.  It is reasonable that here, too, there can be moderation for the sake of financial solvency; one can hardly expect firms to deliberately bankrupt themselves.  These moral requirements will be very particular to the person or firm to which they apply, although the parts of the process which we have not considered should, in light of what we have said here, be matters of fact rather than matters of morality.  As for the matters of morality, we hope that we have cleared those up in this article.

 

Conclusion to Part 3:

 

Our argument thus far inherently forces us to ask a simple question: “Yes, but can it work on a mass scale?”  That is, can this sort of behavior really become popular, so that it is not just a moral requirement but also something which ends the social change currently occurring?  It seems, plainly, unlikely.  The demands made here are extremely specific and require one to deliberately create difficulty in one’s own life to slow social change.  Even if they could be adhered to by one generation, it is extremely difficult to believe that the optimization of marketing practices will never be reached or that people will simultaneously and universally choose to oppose the present social change forever.  Further, individual morality is very rarely a particularly strong source of opposition to societal problems.  This is in part because it is extremely rare for a massive majority of people to agree on any moral opinion, and in part because unjust societies, as philosopher Samuel Scheffler has observed, “tend to obscure the character of those very choices, and to encourage the development of moral sensibilities that can make it genuinely difficult for well-intentioned and well-integrated people accurately to perceive the moral conditions in which they live” (Scheffler 1994, p. 144).  Of course, this is not necessarily a prophecy.  Things can always change, and technologies can be discovered which could alter what sorts of social conditions are good for businesses.  As we have previously argued (aided by Joseph Schumpeter), our prophecy can only hope to capture what would happen if the particular trends noticeable at a particular time were to continue affecting a society in exactly the way they previously had.  This much having been said, however, it does not follow that the process of social change is somehow predetermined.  Social change can occur quickly or slowly and through a variety of methods, and individuals’ actions can affect this.  Necessity does not imply a determination of every aspect of social change.

 

Seeing that people other than ourselves are likely also to find objectionable the social change currently underway, with its favoring of neuroticism and declining work ethic and its destruction of traditional sources of culture, social relations, and religious attitudes, there seems to be a moral demand not only for personal opposition to social change but also for interpersonal opposition.  This, if we are to believe that the upholding of the good is itself good (as consequentialists, who believe that, in philosopher Thomas Hurka’s words, we “identify the right act by how much good it produces,” would say (Hurka 1996, p. 57)), creates a moral demand to support others’ defiance of social change.  Considering that what sorts of actions this entails depends very much on one’s particular moral beliefs and personal situation, we will avoid any further discussion of interpersonal action.  For now, it will suffice to say that some action is perhaps demanded.

 

That being said, we will shortly discuss, in the fourth part of this article, the general framework of a political philosophy, for two main reasons.  The first is that we promised to do so in the introduction (we said we would “prescribe” a possible political solution), thus creating a commitment which we must follow through on (if you will).  The second is that we would like to conclude on a slightly more positive note.  Thus far, in spite of our promise in the second part of this article, our ethical argument has been rather pessimistic.  Our views on social change have required of us inherently taxing personal decisions for the sake of pure morality.  They have, indeed, offered little hope of any actual success, preferring instead to focus on the precise character of failure and on our consequence-independent moral duties.  The political world, being a place in which society-scale decisions (rather than just individual-scale decisions) can be made to combat society-scale cultural trends, inherently allows us at least to theorize about much more significant decision-making than we can pursue on a purely moral level.

 

Nevertheless, we should emphasize that this purely individual and moral level, no matter how relatively pessimistic, is more important.  Our ability to personally defy social change is positive on both a moral and a personal level in a way that political action cannot possibly be.  Our ability, as we have outlined here, to turn back our internal clock in a time of negative social change to the most distant opinions possible, so that we are not affected by that social change, is not valuable only for its direct effect, namely, that it allows us to take the morally right stance of defying negative social change.  It is also valuable because it allows us to escape the negative influence of that social change.  The personal degradation we have discussed, the degeneration into neurosis and poor work ethic, the cultural rupture with one’s past, the social rupture with one’s communities, and the moral degradation that comes of an increasingly unequal and unjust society are all things that, if we fail to defy social change, will affect us and make us worse people.  Our avoidance of these personal effects of negative social change cannot possibly be achieved by anything other than the personal decision to defy that social change.  No political decision-making, no heavy-handed state-led influence, can possibly preserve anyone’s personal and moral state.  Therefore, when we discuss political defiance of social change, people’s personal moral decisions must come first.  No matter what we do to oppose social change, we must always recognize that defiance can only truly be achieved through an individual’s decision.  There is no other way.  It is either personal defiance or a complete acceptance of social change, which, as we argued in Part Two of this article, means a cultural rupture and the complete remaking of our concepts of our culture, our sociality, and our religion in the image of advertisers and mass media producers.  It is either defiance of change and continuity with our historical view of the world, or acceptance of social change and complete rupture with that historical view.

 

Having established these two exactly contrary options, we will conclude this section with a short passage from E.R. Curtius that displays just how important defending our cultural, social, and religious continuity is.  As Curtius argues, it is only in light of our heritage and continuity that the beautiful can possibly be preserved:

 

If, nevertheless, we trace [cultural continuity’s] course, we become aware that we cannot reckon by generations, and hardly by centuries. Very long periods of time are necessary to overcome epochs of enervation and barbarization. That is the teaching of history, but also its consolation and its promise. Even in times of educational atrophy and of anarchy the heritage of the European mind, which is bound up with language and literature, can be fostered…Nothing can replace that heritage. Not philosophies, techniques, political or economic systems. All these can produce what is good – not what is beautiful. The Lacedemonians were pleasing to the gods because they prayed to be given the beautiful with the good…  (Curtius 2013, p. 394)

 

Cultural continuity is that important to creating beauty, Curtius explains, because beauty can only be created through interaction with our fundamentally human identity.  This human identity can only be accessed by seeing which of our traits are not merely products of our society’s current views but instead remain impervious to changing social, political, or economic conditions over time.  Again from Curtius:

 

[Cultural continuity is an aid] to memory. Upon memory rests the individual's consciousness of his identity above all change. The literary tradition is the medium by which the European mind preserves its identity through the millenniums … Culture, says Vyacheslav Ivanov, is memory of ancestral initiations: "In this sense, culture is not only monumental, but is also initiatory in the mind. For memory, its supreme mistress, permits its true servants to share in the ancestral initiations and, by renewing these within them, gives them the strength for new beginnings, new departures.”  (Curtius 2013, p. 395)

 

While Curtius here discusses European culture in particular, his argument could be applied to any culture; all are equally created by people who have attempted, by different methods, to display their fundamental humanity.  It seems likely that all cultural heritages, not just the Western, have therefore managed to create the groundwork for the creation of beauty.  Yet a culture (such as the one currently being created by advertisers and mass media producers) which is designed to completely rupture our connection to people from the past is, for that very reason, incapable of creating beauty.  We can see from this perspective that in opposing the social change currently underway, we are defending the concept of the beautiful, even the concept of heritage itself, from a new society. 

 

This new society would be similar to what philosopher Alexandre Kojève described as “the definitive annihilation of man,” that is, a society created to avoid any complexity, plurality, or change.  Simplicity is preferable to advertisers because a simple culture allows them to easily manipulate our views without any complex strategizing.  A unitary culture in which we all desire the same things is preferable to advertisers because it allows them to target everyone with the same advertisements.  A stagnant culture, which will be reached once our culture has been optimized from the advertisers’ perspective, would allow them never again to have to change their advertisements and therefore never again to go through the financially difficult process of changing their economic strategy.  This annihilated society would be one in which many things, some good (like art) and some bad (like war), would likely disappear.  As Kojève describes it, there would be:

 

…the disappearance of wars and bloody revolutions. And also the disappearance of Philosophy; for since Man himself no longer changes essentially, there is no longer any reason to change the (true) principles which are at the basis of his understanding of the World and of himself. But all the rest can be preserved indefinitely; art, love, play, etc., etc.; in short, everything that makes him happy…in which men (mutually recognizing one another without reservation) no longer fight, and work as little as possible.  (Kojève 1980, p. 159)

 

Overall, this “annihilation of man” amounts to our regression into “animality,” into a creature incapable of any emotion or feeling other than “contentedness” (Kojève 1980, pp. 159-160).  It is only in respect to that frightful endpoint of ongoing social change, the complete destruction of our unique value as a species, that our defiance to social change can seem at all reasonable, even in spite of the fact that doing so inherently forces one to take the losing side of history (at least in the short term). 

 

What we have found is that only if we recognize the importance of what we are attempting to conserve from demise can the actions required for that conservation make any sense.  That is, only in respect to the importance of what we are protecting is the excess to which we are willing to go to protect it justifiable.  This moral conclusion seems entirely within our ordinary moral sense.

Part 4:

The last part of this article is going to be relatively short and schematic.  We will seek here to apply to the political sphere our broader position on what, morally, defiance to social change entails.  We are not seeking to argue about which particular policies should be pursued, or even that we are obligated to engage in such political activity at all.  We are simply going to put forth a couple of concepts for the sorts of political policies that might be effective at countering significant social change.  The main undercurrent in our general argument is that the political is an area in which advertisement can be used to counter the undesirable effects of non-political advertisement.  The purpose of this argument is not to say that we are capable of a broad-based counter-social change advertisement campaign but rather to argue that we are capable of two more specific and limited political undertakings to oppose social change.  First, we will argue that we are capable of opposing particular sorts of social change, which might decrease the speed of social change overall.  Second, we will argue that we are capable of reminding people of their historical beliefs, thereby reinforcing the division between a group’s historical understanding of itself and its current understanding of itself, forcing them to choose between the two views, which might in some cases lead people to personally oppose social change.  In both of these cases, we will emphasize first and foremost the importance of bringing about personal opposition to social change, the only true counter to it.

 

Before we discuss what political action we think is entailed by negative social change, we will first dispel a seemingly obvious political implication of the third part of our argument.  This implication would view our moral obligation in relation to political action as being to engage in the same sort of action as we would in personal or non-political interpersonal behavior, that is, preferring the attitudes which we held more distantly over those we held more recently.  Applied to the government, this would mean attempting to undo any political change that has occurred historically, thereby causing the government to operate as if it were in a previous time.  Of course, this attitude would leave us impotent in the face of actual social change; that is to say, it would leave us incapable of changing the consequences of that social change whatsoever.  After all, if we do not update our political views with contemporaneous social change, we are effectively requiring ourselves not to recognize that social change.  And, if we do not recognize that social change, then we have no duty to do anything about it.  Considering that the social change underway is not in any way dependent on governmental activity, our avoidance of taking action would be equivalent to allowing social change to occur uninterrupted.

 

Of course, this argument does not necessarily mean that we should not continue to appraise political action based on the same considerations as those governing normal interpersonal or personal action.  It would simply mean that we would be required to hold a political attitude which views the consequences of our political actions as unimportant.  Paul J. Griffiths has defined this view, which he describes as “political quietism,” as follows:

 

What I have in mind here is a quietism—a Bartlebyesque “I would prefer not to”—of consequentialist interest in the consequences of political advocacy, a cultivation of a sancta indifferentia to such consequences of such dazzling obscurity that nothing is visible in the act of political advocacy other than exactly the act of such advocacy.  (Griffiths 2014, p. 339)

 

There are two main problems with asserting political quietism in relation to our view that we are morally required to defy social change.  For one, the government is not a person.  It is a group of institutions whose behaviors are determined by people.  There is therefore no reason to believe that the government can have its desires manipulated independently of the individuals involved.  So long as the individuals involved engage in the defiance which we have described (or even, in a representational system, if their behavior is taken to represent other people who engage in defiance to social change), there is no reason to believe that, in the translation from those individuals’ views to governmental action (or to a representative’s action), the particular view will suddenly stop being a form of defiance.  It therefore seems likely that to avoid becoming party to social change, all a government needs to do is to have its principles or actions formulated by people with the intention of defying social change; there is no reason why the institution itself should need to completely undo all change that has affected it. 

 

Our second reason for opposing political quietism is that it is extremely unintuitive.  Griffiths himself recognizes this fact:

 

It is strange in such polities [as ours], wilfully and almost incomprehensibly so, to attempt the systematic replacement of such consequentialist advocacy with an advocacy that justifies itself in the face of opposition (there is always opposition) by appeal to something other than consequence—to the proposal’s convenientia, its appropriate beauty, say; or to its rooting in long-lived local tradition; or to its concordance with or entailment by the LORD’s explicit will; or to its principled rightness, and, hence, to the corrupt wrongness of those competitors with which it is incompatible. Such justifications of political proposals do occur in our post-Fordist ghetto-laced cornucopias; but when they do, they tend to appear as illiberal outcroppings in a sea of free public appeal to rationally assessable consequence, and it is difficult to imagine the replacement of that sea by a continent of political advocacy that is quietist with respect to interest in the outcome of what is advocated. Such a continent would seem a true desert to most of us, should we find ourselves abroad in it; and we should scarcely be able to recognize those at home in it as political agents. They would seem to us like cooks without interest in the taste of what they cook, or gardeners without interest in the bouquets of visual and olfactory beauty their work might bring into being.  (Griffiths 2014, p. 340)

 

Indeed, Griffiths argues that engaging in such quietism requires “ascetical effort” that is “not natural,” an effort to “refuse an appetite or a disposition proper to all human beings” (Griffiths 2014, p. 347).  For us to take seriously so significant a personal sacrifice would require some good reason to engage in it: some moral commitment or other personal gain which would come from the sacrifice.  Indeed, Griffiths, aided by the theologian Pascal, argues that such self-sacrifice is required by the moral commitments to which being Christian commits one (Griffiths 2014, pp. 341-343).  Whether Griffiths’ argument about Christianity is true is beside the point here; our point is that an extremely significant argument would be required for us to take political quietism seriously.  Absent some argument to this effect, we have good reason to reject any form of political quietism, including the view that we ought to ignore social change and instead reverse governmental changes in policy with the ultimate goal of returning to some prior governmental order.  Instead, we should recognize that the purpose of most political action is to create real consequences for society by and large.  Therefore, we should seek to bring about consequences contrary to negative social change.

 

When we discuss political action contrary to social change, we are inherently talking about political action that intends to affect what actions people take.  Perhaps even more importantly (particularly if our earlier argument, in Part Three of this article, about the importance of intending to defy social change is true), we are discussing political action that intends to affect the beliefs one has about the actions we take.  Luckily, there are many different sorts of policies that have a long history of affecting exactly these factors.  For one, as we discussed in Part One of this article, the campaign against nicotine undertaken by the United States government starting in the late 1960s was extremely effective.  The percentage of Americans who regularly smoked cigarettes decreased from 53.8% in 1955 to 12.5% in 2020 (and, among those aged 18-24, to only 7.4%).  This undoubtedly demonstrates that, even contrary to industry preferences, governments can push against certain cultural norms extremely effectively.  This success was created from a mixture of direct and indirect bans on positive nicotine advertising (or at least on certain avenues of advertising), anti-nicotine advertising campaigns paid for by the government, the creation of laws (the banning of indoor smoking being a significant one) that made smoking more difficult, and nicotine becoming more expensive due to the imposition of excise taxes (Warner et al. 2014, p. 85).  The capacity of government policy to affect private decision-making is clearly not limited to nicotine: when it comes to cannabis, as well, government policies of legally restricting access, together with anti-cannabis public advertising campaigns, significantly decreased the rate of cannabis usage, and their reversal significantly increased the rate of usage (Yu et al. 2020, pp. 7-8). 
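
To make the scale of this success concrete, consider the relative decline implied by the prevalence figures just cited (the arithmetic here is ours, offered purely as an illustration, not a figure taken from Warner et al.):

$$\frac{53.8\% - 12.5\%}{53.8\%} \approx 76.8\%$$

That is, regular cigarette smoking fell by roughly three-quarters in relative terms over the period, with the youngest cohort (7.4%) sitting even further below the 1955 baseline.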

 

So, then, the government has an established record of success when it comes to decreasing the rate at which people engage in particular sorts of behavior.  Doing so requires a consistent policy of decreasing access through either taxes or legal bans, making usage more difficult, restricting positive advertising, and engaging in aggressive negative advertising.  Therefore, restricting particular products or practices should not be at all difficult.  As we established in Part Two of this article, the sources of much of recent negative social change can be located in the widespread usage of particular products such as cannabis and social media.  There is no reason why the government cannot oppose the usage, or at least the regular usage, of products which contribute towards speeding up social change, with the method of opposition depending on the specific character of whatever social change-causing item is being regulated.  We would, for instance, prefer to avoid restricting people’s ability to exercise their private right to speech by banning social media, and would rather pursue other methods to decrease the rate of social media usage.  Such a policy would make it less likely for people to be directly affected by social change.  Just as importantly, by decreasing the speed at which social change occurs, the difficulty for a particular person of opposing social change (that is, how foreign the views they are required to take as true to defy social change seem to them under the influence of advertisers and mass media producers) will decrease, making it significantly easier for people to personally oppose social change.

 

This is, of course, only a partial description of how we can regulate against significant social change.  It would be extremely difficult to oppose society-wide social change with piecemeal policy addressing only the individual causes of that change.  When we look at social change on a broader scale, the very change to the character and the attitude of a society, we likely need to consider policies more fundamental to the character of our society.  For the sort of policy which can forestall this kind of change, we can look to Quebec in the 1940s and 50s under Maurice Duplessis as an example.  At the time, Canada by and large was undergoing large-scale social change toward integration into a more Americanized culture, caused by an economy increasingly dependent on its connection to the United States.  As philosopher George Parkin Grant describes:

 

Since 1960, Canada has developed into a northern extension of the continental economy… Our traditional role – as an exporter of raw materials (particularly to Europe) with highly protected industry in central Canada – gradually lost its importance in relation to our role as a branch-plant of American capitalism. Our ruling class is composed of the same groups as that of the United States, with the signal difference that the Canadian ruling class looks across the border for its final authority in both politics and culture.

From 1940 to 1957, the ruling class of this country was radically reshaped. In 1939, the United Kingdom still seemed a powerful force, and the men who ruled Canada were a part of the old Atlantic triangle. They turned almost as much to Great Britain as to the United States, economically, culturally, and politically. After 1940, the ruling class found its centre of gravity in the United States.  (Grant 2005, pp. 9-10)

 

The cultural shift underway was towards what Grant described as a sort of “technologized” culture, one which favors the destruction of “local cultures” and attitudes and seeks to replace them with “progress” (Grant 2005, p. 53).  In practice, this understanding of “progress” favors the destruction of all “brakes” on the “chariot of technology,” thereby creating a society which views the most recent technological developments as the centerpiece of its culture (Grant 1998, pp. 192-193).  Writing contemporaneously, Grant saw Quebec as unusually underdeveloped in relation to American economic and cultural influence (in his words, the Quebecois “came late into American technological expansion”) (Grant 1998, p. 106).  Quebec, we will argue, managed to delay the social change that affected the rest of Canada after World War II in two ways:  first, by keeping an antiquated education system which opposed the coming social change and, second, by sustaining, under the Duplessis government, a political discourse that consistently sought to create friction between Quebec’s historic culture and the coming social change.

 

First and most importantly, Quebec kept its traditional Catholic school education rather than reforming its education system along secular lines.  Grant notes this as one reason why Quebec was “left behind” (Grant 1998, pp. 125-126).  This Catholic schooling taught a traditional religious value system at every level.  As historian Herbert F. Quinn describes:

 

The school, the other basic institution of this society, was to supplement and extend the work of the family and the parish in the formation and development of the Christian personality. To this end all levels of education had to be permeated by the value system of Catholicism, for the Church insisted that no system of instruction could really be called education unless it was based on religious principles. It had to be oriented towards the supernatural, and put its greatest emphasis not on the pursuit of material aims but on the saving of one’s soul through the practice of the Christian virtues. A Quebec educationist writes: “Education must be based upon man’s spiritual nature and eternal destiny. It is not confined to man’s mere physical, ethical and intellectual development. It should fit him for eternity as well as for time…If it does not bring him nearer to God it is a failure.”  (Quinn 1963, p. 13)

 

Such an ideological Catholic viewpoint was directly contrary to the sort of objective viewpoint popular in the rest of Canada.  As Grant argued, this other, objective viewpoint favored the more technologically developed culture that came to dominate the rest of Canada at the time:

 

The 'value-free' social sciences not only provide the means of control, but also provide a large percentage of the preachers who proclaim the dogmas which legitimize modern liberalism within the university. At first sight it might be thought that practitioners of 'value-free' science would not make good preachers. In looking more closely, however, it will be seen that the fact-value distinction is not self-evident, as is often claimed. It assumes a particular account of moral judgment, and a particular account of objectivity. To use the language of value about moral judgment is to assume that what man is doing when he is moral is choosing in his freedom to make the world according to his own values which are not derived from knowledge of the cosmos. To confine the language of objectivity to what is open to quantifiable experiment is to limit purpose to our own subjectivity. As these metaphysical roots of the fact-value distinction are not often evident to those who affirm the method, they are generally inculcated in what can be best described as a religious way; that is, as a doctrine beyond question.  (Grant 1998, p. 193)

 

This argument implies that, when it comes to education, our wariness towards the idea that undoing recent political developments is sufficient to avoid negative social change can be relaxed.  Changes in the educational system caused by, or even concomitant with, social change seem (at least in the case of Canada) to favor that social and cultural change, whereas the education system predating that social change inherently seems to oppose it.  The reason for this is quite simple: education, unlike most political institutions, is not simply tinged with cultural implications.  Instead, it is the very forming of children into, as philosopher Elizabeth Anderson describes it, “the kinds of citizens the next generation will become” and what “ideals” they will hold (Anderson 1995, p. 163).  As a result, at least in most cases, the values with which a particular education system was created will also be the values which are favored and taught by that system, whether they be the Catholic values of the Quebecois or the objective values favoring technology and progress of the non-Quebecois Canadians.  Therefore, if we seek to oppose social change in relation to education, we merely need to return the educational system’s manner of teaching to the cultural milieu we prefer (that is, the one which we favor based upon our system of preferring the most distant remembered belief).

 

Second and just as importantly, the Quebecois government in the 1940s and 50s managed to use political discourse to push a certain viewpoint on what Quebec was, thereby changing the culture.  Such a viewpoint used traditional concepts of independence and Catholicism to put Quebec directly at odds with the national government and culture, thereby creating opposition to the value system which the national government represented.  The Quebecois government did this in two ways.  One was by emphasizing its Catholicism in opposition to the alleged Communism of the central Canadian government.  This political strategy came about because of the widespread fear of Communism among the Quebecois, Catholic nations such as Poland and Hungary recently having been taken over by Communists (79% of the Quebecois said Communism was a significant threat in the mid-1940s).  Thus, any action taken against Quebec’s political system, or against the connection between that political system and the Catholic Church, that is, its traditional cultural attitude, was branded as “anti-clerical” Communism (Quinn 1963, p. 125, pp. 127-128).  This strategy thereby placed one of the most central aspects of Quebecois historical culture, its Catholicism, in direct contrast with any development sought by the central Canadian government (which was, as Grant said, at the time changing Canadian culture to be more technologized and akin to American culture), effectively making anything contrary to the tradition of Quebecois society seem like an existential threat to the society itself.  This strategy therefore forced the Quebecois population to make a commitment either to progress or to their culture, with the one in direct contradiction to the other.  When this demand was made, with much of the social change that would overturn traditional Quebecois culture not yet underway, most contemporary Quebecois had a stronger commitment to their traditional culture than to the coming progress.  As a result, most people committed themselves to the traditional culture over social progress, and thus Quebec was less affected by social change than the rest of Canada.

 

Further, Duplessis’ government portrayed every policy that the central Canadian government favored as an attack on the Quebecois people’s independence as a cultural group.  Duplessis’ government argued that the most important issue was the attack on provincial rights by the central Canadian government.  It took the position that the national government was intent on overturning the culture of Quebec through centralization and immigration, rhetoric that specifically played to historic fears among the Quebecois that English-speaking Canadians sought to overturn their culture.  As a result, in Quinn’s words, Duplessis created among the Quebecois “reason to believe that its interests as a distinct cultural group [we]re threatened,” with the consequence that “the struggle to defend and protect those interests tends to become the dominant issue in politics and encourages the growth of strong nationalistic sentiments” (Quinn 1963, p. 115, pp. 118-119, p. 192).  In doing so, Duplessis spread the idea that the Quebecois ought to directly resist the social change underway in Canadian society as a direct threat to their culture.  Since most Quebecois wanted to keep their culture, many took up this defiance, slowing down social change in Quebec.

 

The importance of the actual issues, or whether Duplessis’ government was right in its accusation that the Canadian government wanted to centralize power and create Communism in Canada, is not at issue here.  Neither is whether Duplessis’ government actually defended Quebec from centralization or Communism.  Indeed, one of the most important provincial rights according to Duplessis’ government (at least publicly) was the right to create certain kinds of taxes, which Duplessis argued were necessary for provincial social services.  Yet Duplessis never actually imposed the taxes he supported nor created the social services that he said the taxes would fund (Quinn 1963, pp. 116-117).  The point is rather that Duplessis’ government used the political field to focus the attention of the Quebecois people on the threat which ongoing social change posed to their traditional way of life, publicly arguing that the most fundamental parts of the Quebecois people’s culture, their Catholicism and their distinctness from English Canadians, were under attack by social change.  By doing so, he made those people more likely to personally defy social change, thereby slowing down social change in Quebec.  Duplessis thus showed that the political world is capable, entirely separate from actual policy, of slowing down social change.  After all, the political world is one in which mass advertising to an inherently (somewhat) interested public allows the spreading of certain attitudes, opinions, and habits.  While opposing social change directly is perhaps a bit unproductive, as advertisers and mass media producers can outspend political figures or governments to a great degree (as we discussed in Part One of this article), the Quebecois example shows that if the government seeks not to directly oppose social change but rather to emphasize that its citizens are forced to choose between the dichotomy of their culture and the coming social change, then this might in fact have a real effect on how quickly social change occurs.  Further, we can imagine (although it is difficult to think of examples, or even of what an example might look like that was not excessively authoritarian) some kind of policy that directly required people to choose between their cultural heritage and the coming social change in their personal lives, particularly if taken while the advocated social change in question is still marginal in society, which would likely have an even more positive effect. 

 

(Of course, we are not saying that social change could be completely stopped by this strategy.  Advertisers and mass media producers have a massive amount of money and will always figure out, with practice, what advertisements can best change people’s opinions, habits, and actions.  Over time, they will inevitably develop their methods in cases like these to argue that the traditional heritage of a people can exist in tandem with social change.  An example of this would be the slogan of Quebecois politician Robert Bourassa, “American technology – French culture.”  Although, as Grant believed, this view is completely inane, it still has rhetorical effect and may eventually convince people, as it did in Quebec (Grant 1998, p. 106).  After all, Bourassa was not just a politician – he was the Premier of Quebec for a combined 15 years.  Our point is rather that, by requiring advertisers to add several extra steps to their advertisements – first arguing that traditional culture and social change are compatible, and only then actually changing the culture once much of the social change has already happened and people are riper for it – a government can slow down social change and thereby, among at least a certain portion of society, allow some to come to defy that social change.)

 

We are not arguing that the Quebecois culture was good or deserved defending from the culture then developing in Quebec.  As we argued in Part Three of this article, we are not even capable of being proper reactionaries, and today, much of the widespread corruption and anti-Semitism of mid-20th century Quebec is clearly repulsive, while the Canadian government seems relatively reasonable, with its centralized schooling and basic social safety net (Quinn 1963, pp. 133-140; Black 1977, p. 273).  What we are instead arguing is that Duplessis’ government offers a model of how we can oppose the social change of our own time.  Several of his strategies, chief among them an opposition to changing educational norms and support for bringing the opposition between historical heritage and ongoing social change to the fore of political discussion, are certainly important contributions.

 

From this analysis, we can draw a few clear conclusions about which political strategies can work to slow social change.  First, we can oppose particular practices or products that we know promote negative social change (such as, to use an obvious example, the production and use of cannabis) through legal means (either taxation or criminalization), as well as by taking action against advertisement in favor of those practices or products or by creating and spreading negative advertisements against them.  Second, we can seek to undo, by force of law if necessary, changes in what and how the education system of a given country teaches children, thereby allowing us to avoid making the schools more amenable to recent social change.  Finally, we can use the rhetorical environment of politics, and perhaps even legal or other institutional power, to publicly emphasize the distinction between the historical heritage of a people and the social change currently underway, hoping to lead many of them to personally defy that social change.  Of course, none of these are specific policies.  Nor could they be.  As we outlined in the introduction, any reasonable diagnostic political philosophy must recognize, in Hans Sluga’s words, that we are not capable of using “pure theorizing” to understand exactly what political action we should support.  What political action is necessary depends upon the particular political situation, not simply broad theorizing.  The extent and extremity of the policy we should adopt depends on a variety of case-by-case issues, including but not limited to:

 

(1)   The speed at which social change is underway in a given area.  A large media market like, say, New York City is likely to face much faster social change than a small media market like rural Alabama.  The former, therefore, certainly requires faster and more extreme policy than the latter if social change is to be effectively stopped or even significantly slowed.

(2)  The rights and political obligations which a given government recognizes beyond its obligation to fight against negative social change.  Negative social change, while an important factor, is merely one of many possible consequences of taking or not taking political action.  One need not be a political quietist to recognize that consequences are not absolutely everything in political action (there may well be some unjust actions, as philosopher Elizabeth Anscombe has argued, such as trying a man for something of which he is not guilty, that cannot be made right no matter what their consequences are (Anscombe 1981, pp. 39-40)), and stopping social change is not the only consequence of political action.  The extent of the rights and obligations recognized will depend on the society.  At one extreme, some societies recognize any regulation of private behavior (the usage of cannabis, for instance) as abridging fundamental rights.  At the other, some totalitarian societies view all forms of life, from one’s sexual life to one’s speech, as being within the government’s scope of political influence.  Most societies lie somewhere between the two extremes.  Where a given society lies on this spectrum will obviously affect the policies that it views as justifiable.

(3)  The sort of society which is being changed.  If a society is, for instance, dominated by some historically oppressed minority culture, such as the Quebecois, emphasizing social change as a new version of that old oppression is clearly an effective strategy.  If not, then it clearly would not be as effective.  Or, if a society has a long history of distrust of whatever the government tells them, government-funded advertising campaigns are unlikely to affect that society’s views on how they should behave.  And so on.

(4)  The level to which a society has already been affected by the current ongoing negative social change.  If a society has strong social, cultural, and political connections independent of the influence of advertisers and mass media producers, it is likely that those strategies which focus on dichotomizing the world between common cultural heritage on the one hand and social change on the other will be more effective than if that connection has been severed.  And if a particular behavior or product is more normalized and more common in a society, then trying to combat it, whether through legal bans or public campaigns, will inherently be far more difficult than if it is common or normalized only in a marginal subculture.

 

This much being said, we can and have accomplished quite a bit through pure political theorizing.  We have a clear concept of what our main political priorities should be to deal with the ongoing social change in America.  Although a great deal of social scientific research and thought would have to go into applying any one of these strategies to the specific issues of specific areas in modern America, this theorizing gives us a (hopefully) effective schematic upon which we can build actual policy.

Conclusion:

In this conclusion, we will seek to very briefly answer the questions posed by each step of the diagnostic practice that we described in our introduction, as per the analysis of the rest of the article:

 

(1)  Initial, general observation: We see a massive increase in economic inequality amidst a decline in intellectual inequality, contrary to what one might expect.

(2)  Examination, probing: This increase in economic inequality seems to be a result of an increasingly broad separation between rich and poor in the number of hours worked, caused by the growth of a personal preference to work very few hours among some people and a desire to work long hours among others, combined with the explosive rise of widespread neuroticism.

(3)  Selection and identification of localized cues: Within the cultural, social, and religious spheres, views that cause an increase in neuroticism as well as a desire to work fewer hours have rapidly started to replace the traditional cultural, social, and religious attitudes of America.

(4)  Description, comparison, classification: The current era is a reversal of the movement towards meritocracy which preceded it, with people’s culture increasingly and aggressively created along class lines.  Those cultural attitudes, determined by advertisers, are creating a division between one population that works an unusually high amount and therefore has a massive amount of disposable income and another that is conditioned into building its life around providing advertisers and mass media producers with knowledge of how to appeal to the former.

(5)  Determination of genetic and causal factors: Advertisers and mass media producers are unintentionally pushing neurosis and low work ethic upon the vast majority of people due to that strategy being the most empirically effective at making profits.  They have increasingly accelerated their cultural push, with their campaign increasing in intensity following each economic recession.

(6)  Prognosis: The cultural change currently underway is likely to continue until the culture, society, and religion are destroyed and remade entirely in the image of advertisers and mass media producers, with little if any relationship to the culture which preceded them.  Such a course is likely to completely sever the ties America has to traditional Western culture, traditional forms of community, and traditional religions and to replace these with a new culture, new forms of community, and a new religion.

(7)  Diagnostic opinion, prescription: To stop or at least slow the remaking of America, what is necessary is individuals defying social change by favoring the most distant views they can individually sympathize with and have previously held.  To bring this about, the political sector of society ought to do three things.  First, it ought to ban, regulate, and use government funds to campaign against particular activities which promote ongoing negative social change.  Second, it ought to try to reverse all significant changes in the education system, returning it to the earliest form within the living memory of the political actors.  Third, it ought to use political discourse to push the idea that social change is an absolute threat to the heritage of those in the society, focusing on whatever sort of heritage is most important to that society, and possibly adopting policies that directly force people to choose between the two (cultural heritage or social change), if justifiably possible.

(8)  Intervention, therapy: This part is up to the reader.  Hopefully, they will answer our call.

Bibliography:

Adgate, Brad. “Agencies Agree; 2021 Was a Record Year for Ad Spending, with More Growth Expected in 2022.” Forbes, 10 December 2021, https://www.forbes.com/sites/bradadgate/2021/12/08/agencies-agree-2021-was-a-record-year-for-ad-spending-with-more-growth-expected-in-2022/?sh=b0a44b27bc69.

Adgate, Brad. “Total Entertainment Spending in U.S. Increased 14% in 2021 to $37 Billion.” Forbes, 22 March 2022, https://www.forbes.com/sites/bradadgate/2022/03/17/overview-of-the-entertainment-market-in-2021-coming-out-of-covid-19/?sh=6b303181519c.

Anderson, Elizabeth. Value in Ethics and Economics. Harvard University Press, 1995.

Anscombe, G.E.M. Ethics, Religion and Politics. Blackwell, 1981.

Arendt, Hannah. Eichmann in Jerusalem. Penguin Books, 2006.

Aristotle. The Nicomachean Ethics. Translated by David Ross, Oxford University Press, 2009.

Baldwin, Marjorie L., and Steven C. Marcus. “Labor Market Outcomes of Persons with Mental Disorders.” Industrial Relations, vol. 46, no. 3, 15 June 2007, pp. 481-510, https://doi.org/10.1111/j.1468-232x.2007.00478.x.

Ballard, Jamie. “Millennials Are the Loneliest Generation.” YouGov, 30 July 2019, https://today.yougov.com/topics/society/articles-reports/2019/07/30/loneliness-friendship-new-friends-poll-survey.  

Bernstein, Jared. “Median Income Rose as Did Poverty in 2007; 2000s Have Been Extremely Weak for Living Standards of Most Households.” Economic Policy Institute, 26 August 2008, https://www.epi.org/publication/webfeatures_econindicators_income_20080826/#:~:text=Numbers%20released%20by%20the%20U.S.,from%2012.3%25%20to%2012.5%25.

Berry, Mary Frances. The Pig Farmer's Daughter and Other Tales of American Justice: Episodes of Racism and Sexism in the Courts from 1865 to the Present. Knopf, 1999.

Bethune, Sophie. “Gen Z More Likely to Report Mental Health Concerns.” Monitor on Psychology, American Psychological Association, January 2019, https://www.apa.org/monitor/2019/01/gen-z.

Bhui, Kamaldeep, et al. “Extremism and Common Mental Illness: Cross-Sectional Community Survey of White British and Pakistani Men and Women Living in England.” The British Journal of Psychiatry, vol. 217, no. 4, 15 March 2019, pp. 547-554, https://doi.org/10.1192/bjp.2019.14.

Billing, A., et al. “Children Cared for by Relatives: What Do We Know about Their Well-Being?” Series B, no. B-46, The Urban Institute, 2012.

Black, Conrad. Duplessis. McClelland & Stewart, 1977.

Borghans, Lex, et al. “What Grades and Achievement Tests Measure.” SSRN Electronic Journal, 2016, https://doi.org/10.2139/ssrn.2872613.

Boyd, Richard N. “How to Be a Moral Realist.” Essays on Moral Realism, edited by Geoffrey Sayre-MacCord, Cornell Univ. Press, Ithaca, NY, 1988.

Brandt, Allan M. The Cigarette Century: The Rise, Fall, and Deadly Persistence of the Product that Defined America. Basic Books, 2007.

Brenan, Megan. “Movie Theater Attendance Far Below Historical Norms.” Gallup.com, 23 June 2022, https://news.gallup.com/poll/388538/movie-theater-attendance-far-below-historical-norms.aspx.

Brenan, Megan. “Religiosity Largely Unaffected by Events of 2020 in U.S.” Gallup.com, Gallup, 20 November 2021, https://news.gallup.com/poll/341957/religiosity-largely-unaffected-events-2020.aspx.

Bruner, Raisa. “Why Young People Are Quitting Jobs – and Not Going Back.” Time, 29 October 2021, https://time.com/6111245/young-workers-quitting/.

Buber, Martin. Eclipse of God: Studies in the Relation between Religion and Philosophy. Princeton University Press, 2016.

Bullock, Alan. Hitler: A Study in Tyranny. Harper & Row, 1964.

Burke, Edmund. Reflections on the Revolution in France. Edited by Frank M. Turner, Yale Univ. Press, 2003.

Butler, Judith. Gender Trouble. Routledge, 1999.

Butler, Judith. Undoing Gender. Routledge, 2004.

Byers, Justin. “U.S. Game Spending Hit $60.4B in 2021.” Front Office Sports, 20 January 2022, https://frontofficesports.com/us-game-spending-hit-60-4b-in-2021/.

Carrier, David. Writing About Visual Art. Allworth Press, 2003.

Carroll, Noël. Mystifying Movies: Fads & Fallacies in Contemporary Film Theory. Columbia University Press, 1988.

Carson, Shelley H., et al. “Decreased Latent Inhibition Is Associated with Increased Creative Achievement in High-Functioning Individuals.” Journal of Personality and Social Psychology, vol. 85, no. 3, 2003, pp. 499-506, https://doi.org/10.1037/0022-3514.85.3.499.

Chadi, Nicholas, et al. “Covid-19 and the Impacts on Youth Mental Health: Emerging Evidence from Longitudinal Studies.” Canadian Journal of Public Health, vol. 113, no. 1, 28 January 2022, pp. 44-52, https://doi.org/10.17269/s41997-021-00567-8.

Chamorro-Premuzic, Tomas, and Adrian Furnham. “Personality and Music: Can Traits Explain How People Use Music in Everyday Life?” British Journal of Psychology, vol. 98, no. 2, 2007, pp. 175-185, https://doi.org/10.1348/000712606x111177.

Cheng, Kai-Wen, and Don S. Kenkel. “U.S. Cigarette Demand: 1944-2004.” The B.E. Journal of Economic Policy and Research, vol. 10, 2010, pp. 1-18, https://doi.org/10.2202/1935-1682.2438.

Clarke, S.C. “Advance Report of Final Divorce Statistics, 1989 and 1990.” Monthly Vital Statistics Report, vol. 43, no. 8, suppl., National Center for Health Statistics, 1995.

Clolery, Paul. “Study: Nonprofits Spending Billions on Ads, Promotion.” The NonProfit Times, 3 November 2021, https://www.thenonprofittimes.com/npt_articles/study-nonprofits-spending-billions-on-ads-promotion/#:~:text=Nonprofits%20spent%20%245.8%20billion%20in,Whole%20Whale%20and%20Cause%20IQ.

Compton, Wilson M., et al. “Marijuana Use and Use Disorders in Adults in the USA, 2002-14: Analysis of Annual Cross-Sectional Surveys.” The Lancet Psychiatry, vol. 3, no. 10, 2016, pp. 954-964, https://doi.org/10.1016/s2215-0366(16)30208-5.

Cornelius, Monica E., et al. “Tobacco Product Use among Adults — United States, 2020.” MMWR. Morbidity and Mortality Weekly Report, vol. 71, no. 11, 2022, pp. 397-405, https://doi.org/10.15585/mmwr.mm7111a1.

Cox, Daniel A. “The State of American Friendship: Change, Challenges, and Loss.” The Survey Center on American Life, 7 April 2022, https://www.americansurveycenter.org/research/the-state-of-american-friendship-change-challenges-and-loss/.

Cross, Richard Alan. Duns Scotus. Oxford University Press, 1999.

Curtius, Ernst Robert. European Literature and the Latin Middle Ages. Translated by Colin Burrow, Princeton University Press, 2013.

De Boer, Connie. “The Polls: Attitudes Toward Homosexuality.” The Public Opinion Quarterly, vol. 42, no. 2, 1978, pp. 265-276. JSTOR, http://www.jstor.org/stable/2748395. Accessed 27 May 2022.

De Tocqueville, Alexis. Democracy in America. Translated by Harvey C. Mansfield and Delba Winthrop, University of Chicago Press, 2000.

Di Forti, Marta, et al. “The Contribution of Cannabis Use to Variation in the Incidence of Psychotic Disorder across Europe (EU-GEI): A Multicentre Case-Control Study.” The Lancet Psychiatry, vol. 6, no. 5, 2019, pp. 427-436, https://doi.org/10.1016/S2215-0366(19)30048-3.

Dilmaghani, Maryam. “Sexual Orientation, Labour Supply and Occupational Sorting in Canada.” Industrial Relations Journal, vol. 49, no. 4, 2018, pp. 298-318, https://doi.org/10.1111/irj.12223.

Dronkers, Jaap. “The Changing Impact of Education on Divorce and Break-up Risk.” Institute for Family Studies, 20 October 2015, https://ifstudies.org/blog/the-changing-impact-of-education-on-divorce-and-break-up-risk.

Drydakis, Nick. “Sexual Orientation and Labor Market Outcomes.” IZA World of Labor, 2019, https://doi.org/10.15185/izawol.111.v2.

Einhorn, Erin. “'Really Illuminating': After Making the SAT Optional, Colleges Study the Results.” NBCNews.com, NBCUniversal News Group, 10 April 2022, https://www.nbcnews.com/news/us-news/college-admissions-test-sat-act-rcna23574.

Elster, Jon. Explaining Social Behavior: More Nuts and Bolts for the Social Sciences. Cambridge University Press, 2007.

Eum, Jennifer. “Colleges with Highest Salaries after Graduation.” Forbes, 2 August 2016, https://www.forbes.com/sites/jennifereum/2016/03/29/colleges-with-highest-salaries-after-graduation/?sh=116a31b52e95.

Evans, Gareth. The Varieties of Reference. Edited by John McDowell, Clarendon Press, 2002.

Fitzgerald, Toni. “How Many Streaming Video Services Does the Average Person Subscribe to?” Forbes, 29 March 2019, https://www.forbes.com/sites/tonifitzgerald/2019/03/29/how-many-streaming-video-services-does-the-average-person-subscribe-to/?sh=35da96136301. 

Flynn, James R. Are We Getting Smarter?: Rising IQ in the Twenty-First Century. Cambridge University Press, 2012.

Foster, Hal. Recodings: Art, Spectacle, Cultural Politics. Bay Press, 1987.

Frank, Robert H. Passions within Reason: The Strategic Role of the Emotions. Norton, 1988.

Freeman, Charles. New History of Early Christianity. Yale University Press, 2009.

Fry, Richard, and Kim Parker. “Rising Share of U.S. Adults Are Living without a Spouse or Partner.” Pew Research Center, 15 October 2021, https://www.pewresearch.org/social-trends/2021/10/05/rising-share-of-u-s-adults-are-living-without-a-spouse-or-partner/.

Gates, Gary J., and Frank Newport. “Special Report: 3.4% of U.S. Adults Identify as LGBT.” Gallup.com, 18 October 2012, https://news.gallup.com/poll/158066/special-report-adults-identify-lgbt.aspx.

Gersen, Jeannie Suk. “At Trial, Harvard's Asian Problem and a Preference for White Students from ‘Sparse Country.’” The New Yorker, 23 October 2018, https://www.newyorker.com/news/our-columnists/at-trial-harvards-asian-problem-and-a-preference-for-white-students-from-sparse-country.

Gibson, John, and Phyllis Light. “Intelligence among University Scientists.” Nature, vol. 213, no. 5075, 1967, pp. 441-443, https://doi.org/10.1038/213441a0.

Gold, Herbert. “A Walk on San Francisco's Gay Side.” The New York Times, 6 November 1977, https://www.nytimes.com/1977/11/06/archives/awalkon-san-franciscos-gay-side-with-one-out-of-three-voters.html.

Government Accountability Office. Public Relations Spending: Reported Data on Related Federal Activities. GAO-16-877, U.S. Government Printing Office, 2016.

Grant, George Parkin. Lament for a Nation: The Defeat of Canadian Nationalism. McGill-Queen's University Press, 2005.

Grant, George Parkin. The George Grant Reader. Edited by William Christian and Sheila Grant, University of Toronto Press, 1998.

Griffiths, Paul. Decreation: The Last Things of All Creatures. Baylor University Press, 2014.

Guignon, Charles B. Heidegger and the Problem of Knowledge. Hackett, 1983.

Gunton, Colin E. The One, the Three and the Many. Cambridge University Press, 2005.

Hanson, Melanie. “College Enrollment & Student Demographic Statistics.” EducationData.org, April 22, 2022, https://educationdata.org/college-enrollment-statistics.

Haugeland, John. Dasein Disclosed: John Haugeland’s Heidegger. Edited by Joseph Rouse, Harvard University Press, 2013.

Havel, Václav. “The Power of the Powerless.” The Power of the Powerless: Citizens against the State in Central-Eastern Europe, edited by John Keane, Routledge, London, 2015, pp. 23-96.

Hayghe, Howard V. “Family Members in the Work Force.” Monthly Labor Review, vol. 113, no. 3, March 1990, pp. 14-19. Gale Academic OneFile, link.gale.com/apps/doc/A8873287/AONE?u=anon~e8530bb9&sid=googleScholar&xid=51394605. Accessed 23 Aug. 2022.

Heid, Stefan. Celibacy in the Early Church: The Beginnings of a Discipline of Obligatory Continence for Clerics in East and West. Ignatius Press, 2000.

Helfat, Constance E. “The Behavior and Capabilities of Firms.” Modern Evolutionary Economics an Overview, Cambridge University Press, 2018, pp. 85-103.

Hernandez, Dan. “Las Vegas Casino Workers Prep for Strike over Automation: 'Robots Can't Beat Us'.” The Guardian, 3 June 2018, https://www.theguardian.com/us-news/2018/jun/02/las-vegas-workers-strike-automation-casinos.

Herrnstein, Richard, and Charles Murray. The Bell Curve. Free Press, 1994.

Hethcote, Herbert Wayne, and James Walter Van Ark. Modeling HIV Transmission and AIDS in the United States. Springer-Verlag, 1992.

Hieronymi, Pamela. Freedom, Resentment, and the Metaphysics of Morals. Princeton University Press, 2020.

Hisam, Aliya, et al. “Is Political Activism on Social Media an Initiator of Psychological Stress?” Pakistan Journal of Medical Sciences, vol. 33, no. 6, 2017, pp. 1463-1467, https://doi.org/10.12669/pjms.336.12863.

Hobbes, Thomas. Leviathan. Penguin, 1985.

Homonoff, Howard. “2020 Political Ad Spending Exploded: Did It Work?” Forbes, 10 December 2021, https://www.forbes.com/sites/howardhomonoff/2020/12/08/2020-political-ad-spending-exploded-did-it-work/?sh=546196c33ce0.

Horowitz, Juliana Menasce, et al. “Most Americans Say There Is Too Much Economic Inequality in the U.S., but Fewer than Half Call It a Top Priority.” Pew Research Center, 9 January 2020, https://www.pewresearch.org/social-trends/2020/01/09/most-americans-say-there-is-too-much-economic-inequality-in-the-u-s-but-fewer-than-half-call-it-a-top-priority/.

Hough, D., et al. “A Reduction in Long-Term Spatial Memory Persists after Discontinuation of Peripubertal GnRH Agonist Treatment in Sheep.” Psychoneuroendocrinology, vol. 77, 2017, https://doi.org/10.1016/j.psyneuen.2016.11.029.

Houseman, Susan N., et al. “Measuring Manufacturing: How the Computer and Semiconductor Industries Affect the Numbers and Perceptions.” W.E. Upjohn Institute for Employment Research, 26 February 2014, https://doi.org/10.17848/wp14-209.

Hume, David. A Treatise of Human Nature. Edited by David Fate Norton and Mary J. Norton, Clarendon Press, 2011.

Hume, David. Essays: Moral, Political and Literary. Edited by Eugene F. Miller, Liberty Fund, 1987.

Hurka, Thomas. Perfectionism. Oxford University Press, 1996.

Janković, Janko, et al. “Depression and Suicidal Behavior in LGB and Heterosexual Populations in Serbia and Their Differences: Cross-Sectional Study.” PLOS ONE, vol. 15, no. 6, 2020, https://doi.org/10.1371/journal.pone.0234188.

Jones, Jeffrey M. “Belief in God in U.S. Dips to 81%, a New Low.” Gallup.com, 15 June 2022, https://news.gallup.com/poll/393737/belief-god-dips-new-low.aspx.

Jones, Jeffrey M. “LGBT Identification in U.S. Ticks Up to 7.1%.” Gallup.com, 17 February 2022, https://news.gallup.com/poll/234863/estimate-lgbt-population-rises.aspx.

Jones, Jeffrey M. “LGBT Identification Rises to 5.6% in Latest U.S. Estimate.” Gallup.com, 24 February 2021, https://news.gallup.com/poll/329708/lgbt-identification-rises-latest-estimate.aspx.

Jones, Jeffrey M. “One in 10 LGBT Americans Married to Same-Sex Spouse.” Gallup.com, 24 February 2021, https://news.gallup.com/poll/329975/one-lgbt-americans-married-sex-spouse.aspx.

Jones, Jeffrey M. “U.S. Church Membership Falls below Majority for First Time.” Gallup.com, 20 November 2021, https://news.gallup.com/poll/341963/church-membership-falls-below-majority-first-time.aspx. 

Kekes, John. The Roots of Evil. Cornell University Press, 2005.

Knifton, Lee, and Greig Inglis. “Poverty and Mental Health: Policy, Practice and Research Implications.” BJPsych Bulletin, vol. 44, no. 5, Oct. 2020, pp. 193-196, https://doi.org/10.1192/bjb.2020.78.

Koch, Richard. “America’s Highest Earners Work at Least 60 Hours a Week—More than Anyone Else in the World.” Qz.com, 13 October 2013, https://qz.com/134064/the-industrial-revolution-destroyed-the-link-between-hours-worked-and-wealth-so-why-are-we-still-working-so-much/#:~:text=And%20whatever%20the%20average%20number,need%20a%20longer%20term%20perspective.

Kochhar, Rakesh, and Stella Sechopoulos. “How the American Middle Class Has Changed in the Past Five Decades.” Pew Research Center, 20 April 2022, https://www.pewresearch.org/fact-tank/2022/04/20/how-the-american-middle-class-has-changed-in-the-past-five-decades/#:~:text=The%20middle%20class%2C%20once%20the,Center%20analysis%20of%20government%20data.

Kojève, Alexandre. Introduction to the Reading of Hegel. Edited by Allan Bloom. Translated by James H. Nichols, Cornell University Press, 1980.

Konish, Lorie. “As States and Cities Raise Minimum Wage Rates, Some Experts Say a Higher Federal Rate Is Long Overdue.” CNBC, 7 July 2022, https://www.cnbc.com/2022/07/07/why-experts-say-a-higher-federal-minimum-wage-rate-is-long-overdue.html.

Kopf, Dan, and Corinne Purtill. “The Reason the Richest Women in the US Are the Ones Having the Most Kids.” Quartz, 11 November 2017, https://qz.com/1125805/the-reason-the-richest-women-in-the-us-are-the-ones-having-the-most-kids/.

Krogstad, Jens Manuel. “How the Geography of U.S. Poverty Has Shifted since 1960.” Pew Research Center, 10 September 2015, https://www.pewresearch.org/fact-tank/2015/09/10/how-the-geography-of-u-s-poverty-has-shifted-since-1960/.

Krueger, Evan A., et al. “Sexual Orientation Group Differences in Perceived Stress and Depressive Symptoms among Young Adults in the United States.” LGBT Health, vol. 5, no. 4, 2018, pp. 242-249, https://doi.org/10.1089/lgbt.2017.0228.

Kuhn, Peter, and Fernando Lozano. “The Expanding Workweek? Understanding Trends in Long Work Hours among U.S. Men, 1979-2004.” Working Paper 11895, National Bureau of Economic Research, 2005, https://doi.org/10.3386/w11895.

Lieberman, David. “Cable Operators Continue to Lead Industry in Profitability: Report.” Deadline, 15 September 2014, https://deadline.com/2014/09/cable-company-profitability-ey-report-833747/.

Lonergan, Bernard. Grace and Freedom: Operative Grace in the Thought of St Thomas Aquinas. Edited by Frederick E. Crowe and Robert M. Doran, University of Toronto Press, 2005.

Lynn, Richard, and Susan Hampson. “The Rise of National Intelligence: Evidence from Britain, Japan and the U.S.A.” Personality and Individual Differences, vol. 7, no. 1, 1986, pp. 23-32, https://doi.org/10.1016/0191-8869(86)90104-2.

MacKinnon, Donald W. “What Makes a Person Creative?” Theory Into Practice, vol. 5, no. 4, 1966, pp. 151-156, https://doi.org/10.1080/00405846609542017.

McCarthy, Justin. “Americans Still Greatly Overestimate U.S. Gay Population.” Gallup.com, 7 June 2019, https://news.gallup.com/poll/259571/americans-greatly-overestimate-gay-population.aspx.

McCarthy, Justin. “U.S. Approval of Interracial Marriage at New High of 94%.” Gallup.com, 21 February 2022, https://news.gallup.com/poll/354638/approval-interracial-marriage-new-high.aspx.

McFarland, J., et al. “The Condition of Education 2017 (NCES 2017- 144).” U.S. Department of Education. Washington, D.C.: National Center for Education Statistics (2017). Retrieved July 24, 2022 from https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2017144.

Medley, G., et al. “Sexual Orientation and Estimates of Adult Substance Use and Mental Health: Results from the 2015 National Survey on Drug Use and Health.” NSDUH Data Review, October 2016, https://www.samhsa.gov/data/sites/default/files/NSDUH-SexualOrientation-2015/NSDUH-SexualOrientation-2015/NSDUH-SexualOrientation-2015.htm.

Midgley, Mary. Darwin and the Solitary Self. Acumen, 2010.

Miles, Dennis. “Public Health Cigarette Smoking Act of 1969.” The First Amendment Encyclopedia, Middle Tennessee State University, 2009, https://www.mtsu.edu/first-amendment/article/1089/public-health-cigarette-smoking-act-of-1969.

Miller, Rich, and Vince Golle. “Chart of Century Gives Powell Gloomy Glimpse of Trade-War World.” Bloomberg.com, 10 July 2018, https://www.bloomberg.com/news/articles/2018-07-10/chart-of-century-gives-powell-gloomy-glimpse-of-trade-war-world.

Miłosz, Czesław. To Begin Where I Am: Selected Essays. Edited by Bogdana Carpenter and Madeline G. Levine, Farrar, Straus and Giroux, 2001.

Miron, Oren, et al. “Suicide Rates among Adolescents and Young Adults in the United States, 2000-2017.” JAMA, vol. 321, no. 23, 2019, pp. 2,362-2,363, https://doi.org/10.1001/jama.2019.5054.

Mitrea, Elena Cristina, et al. “Extreme Pessimists? Expected Socioeconomic Downward Mobility and the Political Attitudes of Young Adults.” Political Behavior, vol. 43, no. 2, 18 January 2020, pp. 785-811, https://doi.org/10.1007/s11109-020-09593-7.

“Modeling the Future of Religion in America.” Pew Research Center, 13 September 2022, https://www.pewresearch.org/religion/2022/09/13/modeling-the-future-of-religion-in-america/.

Morales, Lymari. “U.S. Adults Estimate That 25% of Americans Are Gay or Lesbian.” Gallup.com, 27 May 2011, https://news.gallup.com/poll/147824/adults-estimate-americans-gay-lesbian.aspx.

MPA. “Global Theatrical, Home Entertainment, and Pay TV Market Rebounds to $328.2 Billion, New MPA Report Shows.” Motion Picture Association, 14 March 2022, https://www.motionpictures.org/press/global-theatrical-home-entertainment-and-pay-tv-market-rebounds-to-328-2-billion-new-mpa-report-shows/.

Murad, Abdal Hakim. Travelling Home: Essays on Islam in Europe. The Quilliam Press, 2020.

National Center for Education Statistics. (2022). “Public High School Graduation Rates.” Condition of Education. U.S. Department of Education, Institute of Education Sciences. Retrieved July 15, 2022, from https://nces.ed.gov/programs/coe/indicator/coi.

National Science Foundation. (2014). “Science and Engineering Indicators 2014.” https://www.nsf.gov/statistics/seind14/index.cfm/overview.

Nayak, Malathi. “More Viewers Subscribing to Multiple Streaming Services: Survey.” Reuters, 12 October 2016, https://www.reuters.com/article/us-television-streamingvideo/more-viewers-subscribing-to-multiple-streaming-services-survey-idUSKCN12C2J2. 

Nelson, Richard R., and Sidney G. Winter. An Evolutionary Theory of Economic Change. The Belknap Press of Harvard University Press, 1982, p. 53.

Newport, Frank. “Americans, Including Catholics, Say Birth Control Is Morally OK.” Gallup.com, 7 May 2021, https://news.gallup.com/poll/154799/americans-including-catholics-say-birth-control-morally.aspx.

Newport, Frank. “In U.S., Estimate of LGBT Population Rises to 4.5%.” Gallup.com, 22 May 2018, https://news.gallup.com/poll/234863/estimate-lgbt-population-rises.aspx.

Norden, William, and Emmanuel Elone. “China Provinces Increase Minimum Wages.” Bloomberg Tax, 10 September 2021, https://news.bloombergtax.com/payroll/chinese-provinces-increase-minimum-wages.

Nozick, Robert. Anarchy, State, and Utopia. Blackwell, 1999.

Nussbaum, Martha C. “The Professor of Parody.” The New Republic, 22 February 1999, https://newrepublic.com/article/150687/professor-parody.

Oakeshott, Michael. Hobbes on Civil Association. Liberty Fund, 1975.

Oakeshott, Michael. Rationalism in Politics and Other Essays. Methuen, 1962.

Packard, Vance Oakley. The Hidden Persuaders. Ig Publishing, 2007.

Pantic, Igor. “Online Social Networking and Mental Health.” Cyberpsychology, Behavior, and Social Networking, vol. 17, no. 10, 2014, pp. 652-657, https://doi.org/10.1089/cyber.2014.0070.

Parfit, Derek. Reasons and Persons. Clarendon, 1984.

“Partisan Antipathy: More Intense, More Personal.” Pew Research Center, 10 October 2019, https://www.pewresearch.org/politics/2019/10/10/partisan-antipathy-more-intense-more-personal/.

Passel, Jeffrey S., and D’Vera Cohn. “Children of Unauthorized Immigrants Represent Rising Share of K-12 Students.” Pew Research Center, 17 Nov. 2016, https://www.pewresearch.org/fact-tank/2016/11/17/children-of-unauthorized-immigrants-represent-rising-share-of-k-12-students/.

Pauer-Studer, Herlinde, and J. David Velleman. Konrad Morgen: The Conscience of a Nazi Judge. Palgrave Macmillan, 2015.

Pechmann, Cornelia, et al. “What to Convey in Antismoking Advertisements for Adolescents: The Use of Protection Motivation Theory to Identify Effective Message Themes.” Journal of Marketing, vol. 67, no. 2, 2003, pp. 1-18, https://doi.org/10.1509/jmkg.67.2.1.18607.

Pepin, Jacques. The Origin of AIDS. Cambridge University Press, 2011.

Peracchia, Sara, et al. “Pathologic Use of Video Games and Motivation: Can the Gaming Motivation Scale (GAMS) Predict Depression and Trait Anxiety?” International Journal of Environmental Research and Public Health, vol. 16, no. 6, 20 March 2019, pp. 1,008-1,020, https://doi.org/10.3390/ijerph16061008.

Piketty, Thomas. Capital in the Twenty-First Century. Translated by Arthur Goldhammer, The Belknap Press of Harvard University Press, 2014.

Pippin, Robert B. Modernism as a Philosophical Problem: On the Dissatisfactions of European High Culture. Blackwell, 1999.

Pittman, Laura D., and Michelle K. Boswell. “The Role of Grandmothers in the Lives of Preschoolers Growing up in Urban Poverty.” Applied Developmental Science, vol. 11, no. 1, 2007, pp. 20-42, https://doi.org/10.1080/10888690709336721.

Plomin, Robert, and Ian John Deary. “Genetics and Intelligence Differences: Five Special Findings.” Molecular Psychiatry, vol. 20, no. 1, 20 February 2015, pp. 98-108, https://doi.org/10.1038/mp.2014.105.

Prentice, Deborah A., and Dale T. Miller. “Pluralistic Ignorance and Alcohol Use on Campus: Some Consequences of Misperceiving the Social Norm.” Journal of Personality and Social Psychology, vol. 64, no. 2, 1993, pp. 243-256, https://doi.org/10.1037/0022-3514.64.2.243.

Prinz, Jesse J. The Emotional Construction of Morals. Oxford University Press, 2007.

“Public Attitudes about Birth Control.” Roper Center for Public Opinion Research, 2015, https://ropercenter.cornell.edu/public-attitudes-about-birth-control.

Putnam, Robert. Bowling Alone: The Collapse and Revival of American Community. Simon & Schuster, 2000.

Quinn, Herbert F. The Union Nationale: A Study in Quebec Nationalism. University of Toronto Press, 1963.

Reed, Adolph L. The Jesse Jackson Phenomenon: The Crisis of Purpose in Afro-American Politics. Yale University Press, 1986.

“Religion in Latin America.” Pew Research Center, 13 November 2014, https://www.pewresearch.org/religion/2014/11/13/religion-in-latin-america/.

Rice, Tom. White Robes, Silver Screens: Movies and the Making of the Ku Klux Klan. Indiana University Press, 2015.

Rogers, Guy MacLean. For the Freedom of Zion: The Great Revolt of Jews against Romans, 66-74 CE. Yale University Press, 2021.

Rougemont, Denis de. Love in the Western World. Princeton University Press, 1983.

Rousseau, Jeffrey. “2021 Sets Record for US Games Spending at $60.4bn: US Annual Report.” GamesIndustry.biz, 18 January 2022, https://www.gamesindustry.biz/2021-sets-record-for-us-games-spending-at-usd60-4bn-us-annual-report.

Rozin, Paul, and Leher Singh. “The Moralization of Cigarette Smoking in the United States.” Journal of Consumer Psychology, vol. 8, no. 3, 1999, pp. 321-337, https://doi.org/10.1207/s15327663jcp0803_07.

Sandstrom, Aleksandra, and Becka A. Alper. “Americans with Higher Education and Income Are More Likely to Be Involved in Community Groups.” Pew Research Center, 22 February 2019, https://www.pewresearch.org/fact-tank/2019/02/22/americans-with-higher-education-and-income-are-more-likely-to-be-involved-in-community-groups/.

Schäfer, Peter. The History of the Jews in the Greco-Roman World: The Jews of Palestine from Alexander the Great to the Arab Conquest. Routledge, 2003.

Schaeffer, Katherine. “Among Many U.S. Children, Reading for Fun Has Become Less Common, Federal Data Shows.” Pew Research Center, 12 November 2021, https://www.pewresearch.org/fact-tank/2021/11/12/among-many-u-s-children-reading-for-fun-has-become-less-common-federal-data-shows.

Scheffler, Samuel. Human Morality. Oxford University Press, 1994.

Schmemann, Alexander. For the Life of the World: Sacraments and Orthodoxy. St. Vladimir's Seminary Press, 1998.

Schneider, Maiko A., et al. “Brain Maturation, Cognition and Voice Pattern in a Gender Dysphoria Case under Pubertal Suppression.” Frontiers in Human Neuroscience, vol. 11, 14 November 2017, https://doi.org/10.3389/fnhum.2017.00528.

Schröder, Martin. “Men Lose Life Satisfaction with Fewer Hours in Employment: Mothers Do Not Profit from Longer Employment—Evidence from Eight Panels.” Social Indicators Research, vol. 152, no. 1, 16 July 2020, pp. 317-334, https://doi.org/10.1007/s11205-020-02433-5.

Schroeder, Mark. Slaves of the Passions. Oxford University Press, 2007.

Schulenberg, J.E., et al. (2021). “Monitoring the future national survey results on drug use, 1975-2020: Volume II, College students and adults ages 19-60.” Ann Arbor: Institute for Social Research, The University of Michigan. Available at http://monitoringthefuture.org/pubs.html#monographs.

Schumpeter, Joseph A. Capitalism, Socialism, and Democracy. Routledge, 2003.

Schwadel, Philip, and Aleksandra Sandstrom. “Lesbian, Gay and Bisexual Americans Are Less Religious than Straight Adults by Traditional Measures.” Pew Research Center, 24 May 2019, https://www.pewresearch.org/fact-tank/2019/05/24/lesbian-gay-and-bisexual-americans-are-less-religious-than-straight-adults-by-traditional-measures/.

Searle, John R. Rationality in Action. MIT, 2003.

Segal, Edward. “How Employers Are Responding to Challenges Created by Labor Shortage: New Survey.” Forbes, Forbes Magazine, 3 August 2022, https://www.forbes.com/sites/edwardsegal/2022/08/03/how-employers-are-responding-to-challenges-created-by-labor-shortage-new-survey/.

Shrider, Emily A., Melissa Kollar, Frances Chen, and Jessica Semega, U.S. Census Bureau, Current Population Reports, P60-273, Income and Poverty in the United States: 2020, U.S. Government Publishing Office, Washington, DC, September 2021.

Sluga, Hans. Politics and the Search for the Common Good. Cambridge University Press, 2014.

Smith, Gregory C., and Patrick A. Palmieri. “Risk of Psychological Difficulties among Children Raised by Custodial Grandparents.” Psychiatric Services, vol. 58, no. 10, 2007, pp. 1,303-1,310, https://doi.org/10.1176/ps.2007.58.10.1303.

Smith, Tom W., et al. General Social Surveys, 1972-2018: Cumulative Codebook Principal Investigator, Tom W. Smith; Co-Principal Investigators, Michael Davern, Jeremy Freese and Stephen L. Morgan. – Chicago: NORC, 2019. – (National Data Program for the Social Sciences Series, no. 25).

Solomon, Jennifer Crew, and Jonathan Marx. “‘To Grandmother's House We Go’: Health and School Adjustment of Children Raised Solely by Grandparents.” The Gerontologist, vol. 35, no. 3, 1995, pp. 386-394, https://doi.org/10.1093/geront/35.3.386.

Strawson, P. F. “Freedom and Resentment.” Freedom and Resentment and Other Essays, Routledge, London, 2008, pp. 1-28.

Stump, Eleonore. Aquinas. Routledge, 2007.

Suciu, Peter. “Americans Spent on Average More than 1,300 Hours on Social Media Last Year.” Forbes, 24 June 2021, https://www.forbes.com/sites/petersuciu/2021/06/24/americans-spent-more-than-1300-hours-on-social-media/.

Sullivan, Andrew. “Here Comes the Groom.” The New Republic, 27 August 1989, https://newrepublic.com/article/79054/here-comes-the-groom.

Taliaferro, Lindsay A., et al. “Depression and Suicidality among Bisexual Youth: A Nationally Representative Sample.” Journal of LGBT Youth, vol. 15, no. 1, 2017, pp. 16-31, https://doi.org/10.1080/19361653.2017.1395306.

Taylor, Charles. “A Catholic Modernity?” A Catholic Modernity? Charles Taylor’s Marianist Award Lecture, Oxford University Press, New York, NY, 1999, pp. 13-37.

Taylor, Charles. A Secular Age. The Belknap Press of Harvard University Press, 2007.

Teasdale, T.W., and David R. Owen. “Continuing Secular Increases in Intelligence and a Stable Prevalence of High Intelligence Levels.” Intelligence, vol. 13, no. 3, 1989, pp. 255-262, https://doi.org/10.1016/0160-2896(89)90021-4.

The Annie E. Casey Foundation. “Generation Z’s Mental Health Issues.” The Annie E. Casey Foundation, 4 March 2021, https://www.aecf.org/blog/generation-z-and-mental-health.

“The Global Divide on Homosexuality Persists.” Pew Research Center, 25 June 2020, https://www.pewresearch.org/global/2020/06/25/global-divide-on-homosexuality-persists/.

“The Public, the Political System, and American Democracy.” Pew Research Center, 26 April 2018, https://www.pewresearch.org/politics/2018/04/26/the-public-the-political-system-and-american-democracy/.

Thompson, Derek. “The Free-Time Paradox in America.” The Atlantic, 21 September 2016, https://www.theatlantic.com/business/archive/2016/09/the-free-time-paradox-in-america/499826/.

Thorsen, Tor. “NPD: 2007 Game Earnings Nearly $18 Billion, Halo 3 Sells 4.82 Million.” GameSpot, 2008, https://www.gamespot.com/articles/npd-2007-game-earnings-nearly-18-billion-halo-3-sells-482-million/1100-6184847/.

TNS. “TNS Media Intelligence Reports U.S. Advertising Expenditures Grew 0.2 Percent in 2007.” 27 March 2008, http://marketresearchworld.net/content/view/1958/2/.

Toossi, Mitra, “Labor force projections to 2022: the labor force participation rate continues to fall,” Monthly Labor Review, U.S. Bureau of Labor Statistics, December 2013, https://doi.org/10.21916/mlr.2013.40.

Tucker, Joan S., et al. “Running Away from Home: A Longitudinal Study of Adolescent Risk Factors and Young Adult Outcomes.” Journal of Youth and Adolescence, vol. 40, no. 5, 2010, pp. 507-518, https://doi.org/10.1007/s10964-010-9571-0.

U.S. Bureau of Labor Statistics. (2010, March). “Consumer Expenditures in 2008.” Retrieved July 25, 2022, from https://www.bls.gov/cex/csxann08.pdf.

U.S. Bureau of Labor Statistics. (2012, June 22). “American Time Use Survey – 2011 Results” [Press Release]. https://www.bls.gov/news.release/archives/atus_06222012.pdf.

U.S. Bureau of Labor Statistics. (2022, April 20). “Employment Characteristics of Families – 2021” [Press Release]. https://www.bls.gov/news.release/pdf/famee.pdf.

U.S. Bureau of Labor Statistics. (2022, June 23). “American Time Use Survey News Release” [Press Release]. https://www.bls.gov/news.release/atus.htm.

U.S. Bureau of Labor Statistics, “Labor Force Participation Rate [CIVPART],” retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/CIVPART, August 15, 2022.

U.S. Bureau of Labor Statistics, The Economics Daily, “Union membership rate 10.5 percent in 2018, down from 20.1 percent in 1983,” 25 January, 2019; https://www.bls.gov/opub/ted/2019/union-membership-rate-10-point-5-percent-in-2018-down-from-20-point-1-percent-in-1983.htm.

U.S. Bureau of the Census. Current Population Reports, Series P-20, No. 450, “Marital Status and Living Arrangements: March 1990,” U.S. Government Printing Office, Washington, DC, 1991.

U.S. Census Bureau. (1962). “1960 Census of Population: Supplementary Reports: Households, Married Couples, and Families in the United States: 1960.” Retrieved from https://www.census.gov/library/publications/1962/dec/pc-s1-23.html.

U.S. Census Bureau. (2021). “America’s Families and Living Arrangements: 2021.” Retrieved from https://www.census.gov/data/tables/2021/demo/families/cps-2021.html.

U.S. Department of Health and Human Services. (1990, December 31). “Foster Care Summary: 1991.” https://aspe.hhs.gov/reports/foster-care-summary-1991-0.

U.S. Department of State. (2016). “2016 Report on International Religious Freedom: Honduras.” https://www.refworld.org/docid/59b7d89e13.html.

“U.S. Teens Take After Their Parents Religiously, Attend Services Together, and Enjoy Family Rituals.” Pew Research Center, 10 September 2020, https://www.pewresearch.org/religion/2020/09/10/u-s-teens-take-after-their-parents-religiously-attend-services-together-and-enjoy-family-rituals/.

United States, Division of Health Interview Statistics, et al. “Changes in Cigarette Smoking Habits between 1955 and 1966.” U.S. Dept. of Health, Education, and Welfare, Public Health Service, Health Services and Mental Health Administration, 1970.

United States Department of Justice, Federal Bureau of Investigation.  (October 1999).  “Crime in the United States, 1998.”  Retrieved September 3, 2022, from https://ucr.fbi.gov/crime-in-the-u.s/1998.

United States Department of Justice, Federal Bureau of Investigation.  (September 2019).  “Crime in the United States, 2018.”  Retrieved September 3, 2022, from https://ucr.fbi.gov/crime-in-the-u.s/2018/crime-in-the-u.s.-2018/topic-pages/violent-crime.

Urbinati, Alessandra, et al. “Young Adult Unemployment through the Lens of Social Media: Italy as a Case Study.” Lecture Notes in Computer Science, 2020, pp. 380-396, https://doi.org/10.1007/978-3-030-60975-7_28.

Veblen, Thorstein. The Theory of the Leisure Class. Oxford University Press, 2007.

Veblen, Thorstein. “Why Is Economics Not an Evolutionary Science?” The Quarterly Journal of Economics, vol. 12, no. 4, July 1898, pp. 373-397, https://doi.org/10.2307/1882952.

Velkley, Richard L. Heidegger, Strauss, and the Premises of Philosophy: On Original Forgetting. University of Chicago Press, 2011.

Von Balthasar, Hans Urs. Dare We Hope "That All Men Be Saved"? Ignatius Press, 2014.

Walker, Mason, and Katerina Eva Matsa. “News Consumption across Social Media in 2021.” Pew Research Center, 20 September 2021, https://www.pewresearch.org/journalism/2021/09/20/news-consumption-across-social-media-in-2021.

Wang, Jin-Liang, et al. “The Association between Mobile Game Addiction and Depression, Social Anxiety, and Loneliness.” Frontiers in Public Health, vol. 7, 6 September 2019, pp. 1-6, https://doi.org/10.3389/fpubh.2019.00247.

Wang, Wendy. “The U.S. Divorce Rate Has Hit a 50-Year Low.” Institute for Family Studies, 10 November 2020, https://ifstudies.org/blog/the-us-divorce-rate-has-hit-a-50-year-low.

Warner, Kenneth E., et al. “Impact of Tobacco Control on Adult per Capita Cigarette Consumption in the United States.” American Journal of Public Health, vol. 104, no. 1, January 2014, pp. 83-89, https://doi.org/10.2105/ajph.2013.301591.

Weber, Max. The Protestant Ethic and the Spirit of Capitalism. Routledge, 2001.

Williams, Bernard. “Ethical Consistency.” Essays on Moral Realism, edited by Geoffrey Sayre-McCord, Cornell University Press, Ithaca, NY, 1988.

Williams, Bernard. “Persons, Character, and Morality.” Moral Luck: Philosophical Papers, 1973-1980, Cambridge University Press, Cambridge, 2002, pp. 1-19.

Williams, Sarah Catherine, and Kristin Sepulveda. “The Share of Children in Foster Care Living with Relatives Is Growing.” Child Trends, 21 May 2019, https://www.childtrends.org/blog/the-share-of-children-in-foster-care-living-with-relatives-is-growing.

Wilson, James Q. The Moral Sense. Free Press Paperback, 1997.

Wittgenstein, Ludwig. Culture and Value. Edited by G.H. von Wright and Heikki Nyman. Translated by Peter Winch, University of Chicago Press, 1980.

Wittgenstein, Ludwig. On Certainty. Edited by G.E.M. Anscombe and G.H. von Wright, Basil Blackwell, 1969.

Wittgenstein, Ludwig. “Wittgenstein's Lecture on Ethics.” The Philosophical Review, vol. 74, no. 1, 1965, pp. 3-12, https://doi.org/10.2307/2183526.

Wittman, Carl. A Gay Manifesto. Red Butterfly, 1970.

YPulse. “This Is How Gen Z & Millennials’ Social Media Use Has Changed in 2021.” YPulse, 21 December 2021, https://www.ypulse.com/article/2021/12/21/this-is-how-gen-z-millennials-social-media-use-has-changed-in-2021/.  

Yu, Bin, et al. “Marijuana Legalization and Historical Trends in Marijuana Use among US Residents Aged 12-25: Results from the 1979-2016 National Survey on Drug Use and Health.” BMC Public Health, vol. 20, no. 1, 4 February 2020, https://doi.org/10.1186/s12889-020-8253-4.

Zagorsky, Jay L. “Do You Have to Be Smart to Be Rich? The Impact of IQ on Wealth, Income and Financial Distress.” Intelligence, vol. 35, no. 5, 2007, pp. 489-501, https://doi.org/10.1016/j.intell.2007.02.003.

Zagzebski, Linda Trinkaus. Divine Motivation Theory. Cambridge University Press, 2004.
