Supreme Ego

In June this year the Supreme Court overturned Roe v. Wade with an extreme, repetitive, and oddly argued opinion that will, despite assurances to the contrary, threaten a constellation of basic rights built up over the past six decades on the concept of an individual right to privacy and liberty. The majority opinion makes a strong effort to deny the possibility of wider impacts beyond abortion, but that argument is belied by two factors: (1) the very wording of the decision’s justifications, which reject the 14th Amendment “due process” protections that underlie so many other Supreme Court decisions, and (2) the concurring opinion written by Justice Clarence Thomas, which actively promotes a broader application of those same justifications.

In the meantime, the same political activists and institutional groups that have brought us to this egregious moment in our nation’s history are following Thomas’s lead, mobilizing not only to enact a nationwide abortion ban through federal legislation but also to reverse the Supreme Court precedents that legalized contraception (Griswold, 1965, and Eisenstadt, 1972), gay sex (Lawrence, 2003), and gay marriage (Obergefell, 2015). Neither the majority opinion nor the Thomas concurrence mentions it, but the same legal logic they used to overturn Roe could also be applied to once again allow bans on interracial marriage (Loving, 1967), despite the fact that the written dissent refers to that case several times. The dissenting justices noted that the 1992 decision in Planned Parenthood v. Casey reaffirmed Roe by insisting on the “right of the individual” to make the most basic decisions about whether to bear a child, and tied that right not only to abortion but to contraception and to marriage. The previous Supreme Court rulings about those issues, and the rights they recognized, are inherently connected.

In this essay, however, I want to pull back from those potential future reversals of significant rights cases that might result from the June decision in Dobbs v. Jackson Women’s Health. This does not mean that I don’t recognize the very real danger posed by that opinion’s logic. But we should also concern ourselves with a tendency revealed by the current Supreme Court supermajority in three cases, including Dobbs, decided in the same nine-month term. And no, I am not referring here to such questionable rulings as those that blocked the EPA from regulating CO2 emissions, blocked OSHA from requiring large employers to mandate vaccination in the middle of a pandemic, or struck down New York’s requirement that applicants show special need for a license to carry deadly weapons in public. Those rulings were bad enough, but they are not the whole story.

There were three cases this session in which the Court majority demonstrated a strong bias in favor of their own personal religious beliefs and, in the process, created serious fissures in the wall of separation between religion and government. Those three cases are the following:

(1) Dobbs, in which the majority opinion not only rejected the right to privacy but in the process gave primacy to the minority religious view that life begins at conception. Justice Alito’s majority opinion made it clear that the only reason the majority considered Roe v. Wade to be different from the other privacy cases, and why they chose to overturn it, is that it involved “protection of fetal life.” But the idea that fetal life needs to be protected is not a universally accepted concept. The original rulings in Roe and Casey had worked to balance concern about the fetus against concerns about the health and life of the mother. As the dissent in Dobbs noted about the new ruling, “To the majority ‘balance’ is a dirty word, as moderation is a foreign concept. The majority would allow States to ban abortion from conception onward because it does not think forced childbirth at all implicates a woman’s rights to equality and freedom.” For the majority, the supposed rights of a fetus, and even an embryo, eclipse those of the adult mother and her family.

(2) Carson v. Makin, in which the Court majority required the State of Maine to extend its tuition assistance program to include religious schools. In that tuition legislation, Maine had required recipient schools to be “nonsectarian” in deference to the establishment clause of the First Amendment. By siding with the two sets of plaintiffs who wanted tuition support, the majority rejected both the many past interpretations of the First Amendment and the interests of taxpayers who would prefer not to support religions other than their own. The individual religious preferences of these six justices, expressed in the Carson ruling, will have repercussions in every state that wants to support private schools without funding specific religious doctrines.

(3) Kennedy v. Bremerton, in which the majority opinion clearly distorted the case record in order to rule in favor of a football coach who had lost his job for encouraging his team and other students to join him in vocal public prayer on the fifty-yard line after games. The majority chose to elevate the “freedom of religion” right of this coach to primacy. In effect they ignored the comparable rights of the students and parents who had initiated the action by complaining that they felt pressured to join in a religious activity at odds with their beliefs. The ruling overturned the 50-year precedent set in Lemon v. Kurtzman (1971), which had held that the government (and its representatives) cannot advocate for any one religion.

All of these cases pitted minority religious demands against the previously recognized rights of the larger public to be free of religious doctrines that differ from their own. In such cases the meaning of the First Amendment is clear, or it was until this Supreme Court session. The Fourteenth Amendment, through its due process clause, makes clear that the restriction applies to the state governments as well: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.” The question, then, is why the majority of this Court has so blatantly and repeatedly rejected the long judicial history that has supported the separation of church and state.

The six majority members on the Supreme Court owe their presence there to membership in, and support from, the Federalist Society, an influential judicial clearinghouse that advocates for an originalist, textualist interpretation of the Constitution. They cling to this dogma despite many statements from the original authors supporting flexibility in interpretation, especially where individual rights are concerned. As the Ninth Amendment states, “The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.” Clearly, at the very beginning, the men who wrote and signed the Constitution did not want it interpreted in an unchanging “originalist” manner, or to have a human right rejected because it wasn’t mentioned in the words of the document. But that particular set of original opinions doesn’t seem to matter, neither to the Federalist Society nor to the author of the Dobbs decision.

Conveniently, the general philosophy of Federalist members tends to align originalism with current conservative views on abortion, LGBTQ rights, campaign finance, free markets and deregulation, and low taxes. To advance this broad agenda, the Society provides networking and scholarship support for conservative and libertarian students, lawyers, and judges—and helps Supreme Court candidates with their Senate confirmations. The current Supreme Court majority was vetted and promoted not only by the Federalist Society but also by a variety of “pro-life” and Christian Nationalist organizations.

The result is that we now have a Supreme Court majority that has demonstrated its willingness to force its members’ personal Christian doctrines on the entire country, endorsing the unconstitutional, in fact anti-constitutional, actions of activist Christian plaintiffs while ignoring the religious freedom rights of the other citizens involved, among them pregnant women, ordinary taxpayers, and members of other religions. But that willingness is exactly why these justices were nominated and confirmed.


No Such Thing

One simple economic principle that is all too often ignored is the one represented by the acronym TANSTAAFL. Never mind that, in order to create an easily pronounced phonetic entity, the short declaration behind TANSTAAFL is usually rendered as a nonstandard colloquial sentence containing a word that virtually no professional economist would ordinarily use: “There Ain’t No Such Thing As A Free Lunch.” Perhaps someone thought the statement would be more memorable and effective in a pseudo-folksy formulation. In any case, the statement is true; there ain’t no such animal. That is, no product or service is ever provided without some cost. But that fact hasn’t stopped a massive number of entrepreneurs and their advertisers from pretending that they are providing something for nothing. They’re even spending large sums of money to convince people that that’s exactly what they’re doing. Admittedly, “free” attracts attention. At the time of this writing, according to ubiquitous advertising, consumers can get the following services without paying anything for them: delivery of packages, personalized information about retirement homes or Medicare plans, scheduling of maintenance and repair services, help with filing income taxes, dinners with retirement advice, bank checking accounts, and analyses of the condition of their car or home HVAC system or teeth (free x-rays!).

You may have noted that there are examples I’ve left out of the list above. I’ll start with one recent example. In the past decade there has been a rapid expansion of “free” stock brokerages. The best known of these is one attractively named after the anti-establishment hero Robin of Locksley, but there are many others. It would appear that there’s no shortage of entrepreneurs with money and stock market connections who are ready to give some of it away to help ordinary people invest in the stock market without charging them any money for the opportunity. How generous! Magnanimous, even! It also, unfortunately, seems that there is no shortage of ordinary people willing to believe that such investment opportunities are truly free of cost. However, TANSTAAFL.

Before I go further, I’ll note that when I talk about investment cost I am referring to brokerage fees, not to what millions of day-traders and crypto investors and commodities traders have discovered: the reality that one of the most significant potential costs of investing in volatile markets is … risk. When you buy a share in something that can vary widely in value, you want its value to increase, but you also take the chance that it will drop rapidly. Too many small investors are attracted to speculative markets by the myth of endless and inevitable growth, and they are devastated when their life savings decline suddenly, when the markets suffer the inevitable downturn, as happened dramatically last month with cryptocurrencies and this month with stocks. Such personal losses can be significant, and it’s no consolation to note that when they purchased the collapsing token they weren’t directly charged a fee or commission. Risk involves a very different myth.

In general, whenever someone is offered a free service there is one important question they should ask: who, exactly, is actually paying for the activities they are supposedly getting for free, and how, and why? Remember that the company that provides any service, of any kind, is paying people to do that work. In some cases the reasons for doing so are obvious. A vehicle repair garage might provide free brake checkups because it knows that a certain percentage of the cars it checks will need maintenance, which it can then provide, for a fee. A beauty salon that offers free makeup demonstrations knows that many of the customers will buy the recommended products, and that some will continue on as regular paying clients. The time-share operation that offers a free informational dinner knows that an adequate number of its guests will accept the sales pitch and sign up for the endless string of monthly payments that keeps the business afloat. These days, in almost all of these cases, even those people who show up for the free service but don’t accept further involvement can be monetized by selling their contact information to mass-marketing lists. It is in fact possible to get something for free if you ignore such ancillary costs as a couple of wasted hours, the irritation of aggressive salespeople, and the loss of a certain amount of privacy through future exposure to promotional contacts. But any company that offers a free service always has an ulterior motive. Sometimes the goal is simply to encourage people to purchase things they don’t really need and wouldn’t otherwise buy, but even when the advice provided is legitimate, it is always motivated by the desire to turn a “free” service into an activity that will pay the company’s bills. That intent is not always made obvious, but it’s always there.

In recent decades the internet has made possible another category of intermediary services that offer to provide information for free but that are subsidized by third-party providers who fold the cost of their efforts into what they charge. Examples of this are booking services that provide people with lists of other businesses, such as retirement homes or construction or maintenance services. If a customer selects one of the businesses on the list, that business generally returns what might be considered a finder’s fee to the booking service. The customer is rarely notified about that fee, but it is inevitably there and inevitably comes out of the amount that the customer is eventually charged. The same is true when someone uses a travel agency or an internet travel site to book a hotel or a flight or a rental car. In all cases, the company that provides the actual service, the hotel or airline or car rental company, will provide a kickback to the agency or the booking site. The end provider may spread the cost of such payments across its entire customer base as a generalized business expense, raising the amount everyone is charged. In some cases a hotel or airline will charge less, or provide more benefits, for customers who book directly through its own web site, which means it’s always worth comparing the booking-site results against the web site of the actual service provider. Even then, customers often never know how much they actually paid for their own digital convenience, or for the convenience of others, but they should never doubt that the cost will be paid and that TANSTAAFL will reign. Companies that provide “free” services have to remain in business, after all, and they do prefer profitable ventures.

I return now to online stock brokerages that allow clients to purchase equities “for free,” that is, without any brokerage commission or fees. It is true, on the surface, that an ordinary investor can go to these sites and purchase equities, and that no fees will be charged directly. But those brokers, somehow, are still managing to make money. In fact, most of them have been quite profitable in recent years. That’s possible thanks in large part to an alternative method of extracting income from investors, involving the use of “off-exchange” wholesalers. In ordinary brokerage transactions an individual investor pays a broker for the stocks they want, paying a small commission for the access, and the broker buys those stocks from a registered public stock exchange. With the new no-commission brokerages, a fourth player is often added to this process. The investor pays the broker and the broker buys the equities from a wholesaler, at a raised purchase price. The wholesaler keeps the difference between that amount and the current exchange price and returns part of it as a kickback, called a Payment for Order Flow (PFOF), to the broker.

Anyone who has traveled overseas is probably familiar with a similar arrangement, in which someone who purchases a foreign currency pays a slightly inflated price and receives a lower return if they sell some of that money back. In most currency exchanges customers are provided with a list noting, for example, that they will pay $1.14 for each Euro they buy and will receive only $1.08 when they sell them back. Often the list will also note that $1.11 is the actual current exchange rate. Customers recognize that the extra $0.03 they pay for each Euro is what they pay for brokerage access. Such currency exchanges and the stock market PFOF system are quite similar, except for one thing: The currency transaction is completely transparent and PFOF is completely opaque. The individual PFOF investor is never told that they actually do pay a monetary cost in their “free” non-commission trade, a cost hidden in the inflated purchase price and the reduced amount when those stocks are resold.
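Whether at a currency kiosk or in a “commission-free” brokerage account, the hidden cost is the same round-trip spread arithmetic. Here is a minimal Python sketch of that arithmetic, using the currency figures quoted above; the one-cent-per-share markup in the second example is purely hypothetical, an illustration rather than a measured PFOF figure.

```python
# A minimal sketch of the round-trip spread arithmetic described above.
# The currency figures come from the example in the text; the per-share
# markup in the stock example is a hypothetical illustration only.

def round_trip_cost(buy_price: float, sell_price: float, mid_price: float, units: float) -> dict:
    """Hidden cost of buying `units` at buy_price and selling at sell_price,
    measured against the quoted mid-market price."""
    markup_on_buy = (buy_price - mid_price) * units       # paid above mid when buying
    markdown_on_sell = (mid_price - sell_price) * units   # received below mid when selling
    return {
        "markup_on_buy": round(markup_on_buy, 2),
        "markdown_on_sell": round(markdown_on_sell, 2),
        "total_hidden_cost": round(markup_on_buy + markdown_on_sell, 2),
    }

# Transparent case: the currency kiosk ($1.14 buy, $1.08 sell, $1.11 mid) for 1,000 euros.
print(round_trip_cost(1.14, 1.08, 1.11, units=1_000))
# -> {'markup_on_buy': 30.0, 'markdown_on_sell': 30.0, 'total_hidden_cost': 60.0}

# Opaque case: a "commission-free" trade of 200 shares with a hypothetical
# one-cent markup over the exchange price on each side; never itemized.
print(round_trip_cost(50.01, 49.99, 50.00, units=200))
# -> {'markup_on_buy': 2.0, 'markdown_on_sell': 2.0, 'total_hidden_cost': 4.0}
```

The only difference between the two cases is that the kiosk posts its spread on the wall, while the “free” trade confirmation never itemizes it.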

The point of all this is that everyone should be suspicious whenever they are offered a free service and should give some thought to how it could be that any business making any free offer (if it is truly “free”) would manage to stay in business. In other words, we should all be aware of economic reality. We should remember the truth of TANSTAAFL and think about how we will, in reality, eventually, end up paying for any “free” offer we accept. Because, after all, in virtually all cases, we will. There’s another common phrase that should come to mind in reference to such attractive arrangements: Let the Buyer Beware.


Big Boy Pants?

In recent decades we, as a society, have made progress in removing many of the vestiges of sexist behavior and thought. For many of us it’s not enough progress, to be sure, just as, for those groups that decry “political correctness” and “cancel culture,” we’ve gone too far. Yet it is progress. We have increasingly made it clear that many forms of verbal and physical violence that once were ignored, and too often approved of, will no longer be automatically tolerated. Those who use their position or status to intimidate or belittle or assault others will not always be exonerated in the ways they were in the past. The amount of social privilege held and misused by individuals based on their group membership is also being challenged and, in many cases, reduced. Again, this does not happen always, or nearly often enough, and we should continue to create change. Yet we have seen progress.

There is forward movement on the level of sociocultural imagery as well, for example in the scenes portrayed in advertising and in our assumptions about gender roles. We have commercials on television in which husbands do the dishes and change diapers and otherwise act like responsible parents. The work balance at home still needs to be improved, and there have recently been pandemic-related setbacks in which far more women than men left their jobs to provide childcare. Access to women’s health care has also been significantly curtailed, but our cultural reality has not yet been turned back entirely to what it was before the 1970s. Despite significant backlash, we have seen progress.

There is, however, one element of language in which we don’t seem to have progressed at all. It may seem a small item, but it looms large against the progress we have made. It is one set of phrases that seems to be as popular as it has ever been, a retrograde sexist usage that nobody seems concerned about, one that may be every bit as common among woke young progressives as it is among patriarchal conservatives. As one example, this month I was listening to a discussion on a National Public Radio program in which a female commentator was interviewing Scotland’s First Minister, a woman named Nicola Sturgeon. The question came up, as it always seems to in interviews of successful female legislators, of whether Sturgeon had attained her position by mimicking the actions and attitudes of her male colleagues. That is, did she behave like a typical alpha male? We can briefly note, under the rubric of lack of progress, the question of why only women are asked such things; more on that, perhaps, in another post. In this interview Sturgeon replied that, at first, “Unconsciously and unknowingly I started to behave in a way that was about conforming and fitting in with the people that I was surrounded by.” That is, she continued, she acted in ways that were “adversarial and aggressive.” As a result, she was considered “bossy and strident” and given the nickname “nippy sweetie,” a phrase that implied that her behavior was anti-social and that simultaneously belittled her as a person.

Such reactions from her male colleagues could have been expected. Sexism is still the norm. However, what surprised me in this interview was the way the NPR representative worded the question. She asked, directly, if Sturgeon had acted “ballsy.” The First Minister replied that her behavior had not often been referred to as ballsy, but instead had received the Scottish label “nippy,” which is somewhat similar, but negative in tone (i.e., adversarial and aggressive). Now, NPR is not exactly the most progressive or woke outlet in the media universe, but they have made efforts to limit sexist content, and their overall careful, socially centrist approach has earned them the label “liberal media” even though they just as often skew conservative in their programming. Somehow the term “ballsy” had slipped through into a discussion between two women on this relatively aware radio outlet. Are they maybe not aware enough?

The lesson is that even mindful individuals, even those who are generally regarded as feminist or feminist-leaning, including liberal comedians and politicians and other public figures, often use such cliches as “he lacks the balls to do that” or, ignoring the obvious physical contradictions, “she really showed some balls this time.” In some ways this is similar to a common rebuke used to encourage an indecisive or hesitant individual, advising them to “put on their big boy pants.” Like “ballsy,” this odd phrase is also often incongruously used in reference to women. I presume it means to act like an adult. The implication, though, is that only “big boys” are true adults. Perhaps big boy pants come with a pair of balls? That would be a logical interpretation.

Of course, in literal terms only men can have balls. The use of phrases such as “he lacks the balls,” however, goes beyond any literal reality. It implies that even though all men literally have balls, only “real men” really have them. Other men, along with women and anyone else deemed inferior, are advised to “grow a pair.” It is, therefore, the ultimate in gender stereotyping, the direct identification of specific positive behaviors with male anatomy. The only way that someone can satisfy such behavioral demands is, literally, to become a male. And these are behaviors that are regarded as valuable and desirable. So what about the female body? Men who fail to display ballsy behavior, those who are not forceful and domineering enough, have often been referred to as “pussies.” That demeaning reference has somewhat fallen out of favor in recent decades, thanks in large part to the spread of feminist thought, but it is still in use. Which means, again, we have made progress, but it is still incomplete.

Now, to answer one obvious objection, I fully recognize that the whole “balls” thing is a metaphor. It is, in that sense, indirect sexism. It is possible that those who use the related cliches aren’t even thinking about the literal, physical reference. Even so, this is a metaphor that celebrates traditional masculine roles and behaviors, the overconfident, pushy, domineering, mansplaining type of masculinity. In many ways it celebrates what has come to be correctly identified as toxic masculinity; it is “ballsy” to take charge and to move forward without subtlety, to disregard the feelings and opinions of others. It is even “ballsy” to do stupid and destructive things, and somehow the use of this terminology manages to put a positive spin on those activities, too, even if the consequences are, and could have been predicted to be, mostly negative and anti-social. That positive spin is why some truck owners hang an oversized metal bull scrotum from their trailer hitch. Examples of crazy “ballsy” behavior are ubiquitous in clickbait videos on social media. If an activity demonstrates that the person “has balls,” the implication is that that is a good thing, no matter how dangerous or anti-social the behavior is.

This may sound to some like a minor problem. This is, after all, only one word among thousands, representing one set of sexist assumptions among the many that we’ve been changing. But in its ubiquity and casual acceptance, “balls” is a clear demonstration of social intersectionality and the ways in which sexism manifests itself in sometimes unrecognized subtle interactions. Not that the use of “balls” itself is all that subtle, but it represents a much broader body of social dynamics, the corrosive hidden assumptions under the visible surface. It stands as an icon of how far we still have to go in countering the sexism within everyday life. We need to identify such items and replace them with more positive cliches to express the types of behavior we want people to emulate … and to apply them with more care.


Imperial Failure

The 2022 invasion of Ukraine should not have been a surprise. Not to anyone. It was, in large part, a result of the imperial imperative: the desire of an egotistical leader with dictatorial powers to reconstruct the global prestige and territorial influence of the pre-1989 Soviet Union, and thus to raise himself to the status of the great leaders of the past. It thus had much in common, predictably so, with the military actions of many failing and former imperial regimes. Another primary motive was the desire to reverse the supposed threat posed to the Russian regime by regional governments that support reforms such as democratic processes, freedom of speech, and anti-corruption movements, and that are increasingly aligned with NATO and western Europe. Statements by President Putin and his supporters and his media have demonstrated the importance of political control within their “sphere of influence” as they promoted the imperial myth of Ukraine as an integral part of historical Russia.

The propagandistic preface to the 2022 attack on Ukraine thus mirrors the justifications that preceded the 2014 takeover of the Crimean peninsula and the 2008 Russian invasion of Georgia which resulted in the occupation of the provinces of Abkhazia and South Ossetia. The timing of the Georgia military action followed Putin’s 2000 election and the 2003 changes that brought an increasingly pro-western government to power in Georgia. There is an unfortunate trend here, one that puts Putin’s actions during his two-decade reign solidly within the global pattern displayed by imperial powers faced with the dissolution of their territorial influence, a violent history that does not bode well for the future. The almost inevitable consequence of imperial decline is war, military attempts to retain or regain powers that were at one time viewed by the empire as normal and appropriate.

War also has its common patterns. One almost ubiquitous, and likely necessary, element in justifying an attack on any foreign population is demonization of the people and their leadership. In reference to Ukraine, of course, the Russian propagandists have had a problem in that the people of Ukraine are largely indistinguishable, in physical and cultural terms, from much of the population of Russia. They do not have darker skins or distinctive facial features, the types of characteristics that have so often served to justify colonial subjugation of southern hemisphere regions by northern hemisphere countries. Ukrainians are mostly Christians, not largely Muslim like the populations of Chechnya or Afghanistan. Even their clothing and customs are similar to those of the region around them, encompassing countries like Belarus and Moldova as well as large southwestern segments of Russia. Many Ukrainians even speak Russian and have relatives living in Russia. Therefore, to justify military action, Russian propagandists had to invent other differences and amplify them.

The Kremlin government responded to this need in two ways. First, they characterized the Ukrainian leadership as a pro-Nazi regime maintained by a fascist military. This is an effective message because of the strong regional memory of the trauma of World War II, so that in much of Russia the necessity of defeating and excising Nazis is understood at a visceral level. Second, they have severely controlled the media, making sure that any unapproved information about Ukraine or about their operations there, especially news about the destruction of cities and attacks on civilians, is strictly blocked. The war is not a war, it is a “special military operation.” Under a new law anyone in Russia can be imprisoned for as many as 15 years if they spread information other than the official message or if they use the words “war” or “invasion.” Independent media sources and popular social media platforms have been virtually silenced.

The Russian propaganda efforts at home seem to have been successful. Polls report that public support for Vladimir Putin has risen to near 80 percent. Many Russian nationals living in Ukraine have noted that their relatives in Russia refuse to believe any of their first-hand accounts about the actions of the Russian Army. There are indications that the indiscriminate violence used by Russian troops against civilians in Ukraine has been motivated by beliefs that the vast majority of the Ukrainian population supports Nazi philosophy and Nazi activities, that even Ukrainian civilians are, in effect, the enemy, and are deserving of harsh punishment. In short, Ukrainians have been broadly and effectively demonized, and this development has affected the conduct of the war.

What was unexpected in the progress of this war was the failure of the Russian military to sweep across Ukraine in a dominant blitzkrieg-style attack. The model for this, of course, is the effective movement of Hitler’s forces in the previous major European land war, the attacks of the late 1930s and early 1940s. It was assumed that Putin’s forces had levels of hardware superiority and troop strength similar to those of World War II Germany, characteristics that gave Hitler’s army control of both the air and the land back in the early 1940s. Everyone, even those who opposed the war, seemed to believe that Putin’s forces would capture most of Ukraine, including the capital of Kyiv, within a month.

There were problems with this assumption. Perhaps the primary one is that warfare has changed significantly since World War II. Modern anti-tank weapons are still carried and fired by individual soldiers, similar in that way to the WWII bazooka, but the new guided rockets are designed to home in on and penetrate the weakest location in a tank’s armor, the top of the turret. Unlike the scatter-shot anti-aircraft guns of WWII, modern hand-carried anti-aircraft weapons use heat-seeking rockets that are much more likely to hit their target and much more likely to destroy it. Ukraine’s open, flat terrain is also ideal for long-distance snipers using modern rifles, telescopic sights, and night-vision tools. Ukrainian troops have used all of these tools very effectively in small-unit guerrilla tactics. As a result, Russian soldiers and armor, relying on tanks and personnel carriers, have suffered significant casualties. They have failed to move rapidly on the ground, and Russian aircraft have failed to gain control of the sky.

There is also some evidence that Russian military leadership was inadequate if not incompetent, and arrogant as well, failing to anticipate the effects of modern weapons and to provide necessary supplies for anything more than a brief attack, so that their intended blitzkrieg was stalled by unexpected casualties, by shortages of fuel and food, and by clogged roads and damaged bridges. Russian troop communications were insecure, allowing Russian-speaking Ukrainians to monitor their plans, greatly improving the effects of Ukrainian guerrilla and sniper actions. The leadership didn’t even anticipate the danger posed by Ukrainian cruise missiles, a failure that resulted in the loss of the flagship Moskva.

All of the above has created confusion, anxiety, and low morale among Russian troops, effects that may simultaneously have made many resist orders and, unfortunately, caused many others to take out their frustrations on civilians: thus the stories about widespread murder, rape, and looting. A focus on troop morale should also remind us of another characteristic of imperialistic military action. When an occupying force attempts to maintain or regain control of territory, the advantage of raw power often lies with the occupiers, the country that has the stronger economic and military position. However, the motivational advantage is generally more intense on the other side, among those who believe that they are defending their own land, their own country. Whatever narratives are offered for a takeover, troops who find themselves in a foreign land will always be less motivated, having much lower intrinsic morale than their enemies who are fighting for what they view as their “home.”

In short, there are three facts about the 2022 attack on Ukraine that should not have surprised anyone. One, as I noted at the beginning, is the attack itself. Putin’s ego and recent history made that inevitable. Another is that the Russian efforts have, as I write this, been notably, dramatically, unsuccessful. Russia’s corruption in weapons contracting and in military leadership had made that, too, inevitable. Finally, the devoted resistance of the Ukrainian population could have been predicted. We can only hope that the eventual outcome is disastrous enough for Putin and Russia that it will, finally, put an end to his attempts at territorial expansion by providing him with the lesson that he should have learned long ago. In history it seems that only a manifest failure has been effective in halting imperial ambitions such as his.


Deniability

I’ve always been a company man. It always made perfect sense to me. Starting with the fact that I was, and would in all likelihood remain, a mid-level functionary in a large business establishment, it seemed that it would be best for me, in terms of survival and stress reduction and gratitude to those to whom I owed my employment, to avoid questioning anything. I have followed that essential blueprint successfully for 26 years. Unfortunately, now, my long-term plan may have finally backfired, and, on top of it all, only four years before I would have qualified for retirement. I can’t be sure, of course. But it may be that my quiet avoidance of conflict, and possibly my longevity in an organization in which long stable careers are rare, has now been mistaken for blind loyalty, for an indicator or source of inside knowledge, or, laughably, I think, for wisdom. In any case, when it became clear that an organizational realignment was imminent and unavoidable, when management began making noises about poking around inside the firm, and someone “up there” decided that someone else was needed to investigate “the problem,” I was chosen. It is better, I suppose, to be the investigator than the scapegoat; although it has occurred to me that I may have been selected to serve in both capacities.

The upper administration sent memos around announcing my new role and encouraging everyone to cooperate with my investigation in any way they could. I’m still not certain if that advance notice actually helped or hurt my efforts. I’ve considered that the current reputation of the top management levels may mean that their imprimatur could prove to be more of a liability than an asset. It’s also quite possible that the advance notice of this investigation might have had the effect of giving everyone the time they needed to develop new rationalizations and to get rid of any unwanted evidence or to create new supportive documents. It’s also quite possible that such an effect was intentional.

I’ve been at it for about three months now. In that time I’ve found it exceedingly difficult to gather information. This doesn’t surprise me, of course, given the concerns listed above as well as the prevailing business practices of the past two decades. At the heart of it all, I think, is deniability. Upper management had long made it clear that they didn’t want to know any details from lower management, much less from those who actually do most of the work, just in case. That was it—just in case. They never went any further with any of their statements about information flow, neither to clarify their intent nor to answer the obvious question—just in case of what?—nor to provide any other elucidation of their own. And when they decided to provide instructions or, more accurately, when they offered us a general outline of our future goals or targets or objectives or mission statements or whatever else they might call them, they would do so by meeting with our team, and only our entire team, in person. They did not encourage any questions or feedback, and prohibited recordings. Written communications were even more abstract than the verbal, often amounting to little more than cheerleading, or, I should say, motivational messaging. The suggested strategies for achieving the general objectives or targets were only vaguely referenced, using terms such as “asset optimization” and “efficient resource utilization” and “development and pursuit of intrinsically positive growth.” The only thing that remained clear was that we were on our own in determining how we planned to “maximize throughput and operational returns.” That much I had observed long before I became the lead investigator in the current effort; I doubt that any such conclusions would be welcome in my ultimate report.

Gradually, using mostly unapproved channels of communication, I have documented that similar practices were standard in all of the other divisions of our company that I was tasked to investigate. Most of my information, of course, did not come from any of the current departmental managers or team leaders. Management at all levels apparently has internalized and/or mimicked the virtues of non-specificity and ambiguity. And it’s obvious that the example set by upper management has effectively trickled downward. This seems to be a reasonable and understandable defensive measure. Everyone knows that it would, after all, be much more difficult for the corporate bean-counters and upper administrators—not to mention appointed investigators—to distribute blame and impose accountability penalties if nobody really knew what outcomes to measure, if nobody had a clear idea of the quarterly starting point or ending point or the intervening expectations, justifications, and procedures. Every department had duly produced a mission statement and standardized quarterly reports, and these were printed in the standardized format in soft-cover binders, and duly filed with copies to the appropriate corporate libraries. The ones that I have read were universally positive, glowing even, employing acceptable circumlocutions to demonstrate above-average departmental efficiency while containing absolutely no wording that indicated what, exactly, was being completed so efficiently. I’ve spoken to more than half of the departmental team leaders, only to find that all of them have mastered one vital competency: the ability to spend at least an hour discussing their team’s successful accomplishments without once providing any clue as to what precisely they had accomplished or had intended to accomplish.

I have asked to see some of the spreadsheets and other computer models that are referenced in various reports and summaries of departmental activities. In most cases this access has been granted, grudgingly and generally only after I have repeatedly stressed the serious difficulties that may be facing the company as a whole, and only after I have been repeatedly cautioned by the creators and users of such files that the files themselves are so large and complex that I would be unlikely to adequately comprehend them. In fact, I’ve found that warning to be true; all are similar to the large, one might even say bloated, computer files that I myself have created and embellished over the past decade. The operational and diagnostic models referenced by the documents in the other departments are so extensive and labyrinthine, and the underlying formulas and algorithms so convoluted and poorly documented, that even those who refer to them and update them on a daily basis often do not seem to be able to provide a coherent explanation of their inner workings. Or perhaps, in line with the general corporate milieu, they choose not to try.

After all, those individuals, like myself, who can produce meaningful explanations of some corporate documents have long found that their immediate superiors, the people who make the decisions based on such reports, either cannot comprehend, or will not listen to, or will never bother to ask for, any background clarification. This is to be expected. The underlying computer models are, or were, state of the art and constantly evolving. The analysts who created them are, or were, college-trained specialists in such arcane arts as continuous probability and combinatorics and differential entropy. And generally, with the high employee turnover—I did note that I am a rare example of longevity, didn’t I?—the models have outlived their creators by years, if not decades, which means that the current analysts are usually working with a computational core built by a predecessor, or often by someone before their immediate predecessor, and sometimes with models based on legacy functions that were obsolete when our current analysts entered college. This is what is described as the industry standard, the system necessary to maintain a competitive edge at the highest level.

Of course, what is listed above are the general impressions I’ve gleaned from hundreds of hours of discussions and explorations of files, interactions which were almost exclusively conducted under promises of individual anonymity. The pinpointing of individual experiences and details may not be important in any case; what is important are the widespread trends or tendencies that don’t appear to be isolated within any one department. We have achieved a level of corporate uniformity of purpose and methodology that is well beyond anything we expected to achieve when I first joined the firm. It may not be the result that we had hoped for, but it certainly seems to be the logical extension of the strategies we have applied. Are they dysfunctional? Time will tell, but my final report will not.


Maldistribution and its Consequences

In last month’s post I noted that the past four decades have demonstrated that there is a significant amount of surplus in the economic system and that that surplus, obviously and unfortunately, is not widely shared within our population. Benefits at the top income levels have grown enormously since 1980, expanding the portfolios of top-level management, financial advisors, and investors. The income and wealth inequality in the United States has reached even more extreme levels than our nation experienced during the Gilded Age, the age of autocratic wealth and control that began following the Civil War and ended with the Great Depression of the 1930s. As an example, in each of the years 2016 to 2019, the top ten percent of U.S. citizens received more than half of the total annual national income and held more than three-quarters of total wealth, while the bottom fifty percent—fully half of the population—received only 15 percent of the income and held only about one percent of total wealth. This current inequality is actually more extreme than the pre-depression levels of the late 1920s. And this time, as so many times in the past, the vast concentrations of wealth and investment income have led to significant problems and instability in the national and world economic system.

The most serious economic problems and scandals of the Gilded Age and of the most recent four decades are directly attributable to these high levels of wealth inequality. While we must admit that this is not a simple relationship, we can connect much of the instability of both eras to the corrosive effects of too much money chasing too few investments in search of easy and often inordinate financial returns. We can look at the historical record to demonstrate this.

In the 40 “gilded” years between 1890 and 1930 there were 9 recessions and 2 extreme depressions, including the first year of the Great Depression. In the 40 years between 1980 and 2020 there were 6 recessions, one of them the Great Recession of 2007-2009, the worst collapse since the 1930s. The economic downturns in this latter period would have been more frequent and more serious if it hadn’t been for massive interventions by the federal government and the Federal Reserve. Of those 17 major economic downturns mentioned above, almost all were caused by overreactions to economic stresses, reactions that were commonly called “panics” back in the 1800s, in which investors crashed the economy by attempting to pull back their levels of portfolio risk in response to downturns, fiscal scandals, or rumors.

Contrast this record with the 40 years in the middle decades, between 1940 and 1980, in which economic activity was largely stabilized by New Deal regulatory systems and by high marginal tax rates on the highest income levels, redistributive taxes that ranged from 70 to 95 percent. In those four middle decades, government policies helped to level out both income and wealth inequality and to dampen financial speculation. There were 7 comparatively mild recessions: three were caused by sharp drops in government spending after major wars (WWII, Korea, and Vietnam), two by intentional policy tightening to counter inflation (1958 and 1980), and two by the 1973 OPEC oil crisis and the 1979 Iranian revolution. None were caused by rampant speculation or investor panics, and none required massive government interventions on the scale of the New Deal or the post-2008 stimulus. In short, reduced economic inequality meant reduced economic instability.

At the beginning of the Obama years (2009) there was some hope that we as a country would have learned from the Great Recession, in which vast speculative activity created a boom market in risky derivative assets that were based on poorly verified housing loans. Unfortunately, major corporate players in that collapse were bailed out and even allowed to grow by absorbing some of their failing competitors. Virtually none of the individual corporate leaders were punished for the frauds they promoted. Some meaningful legal reforms were passed by Congress, but when the Republican Party regained control in 2017 they either reversed those changes or declined to implement them. Then they passed a massive tax reform bill, one which provided few benefits to ordinary workers and which by 2025 will be sending 83 percent of the resulting annual tax savings to corporations and to the richest one percent of taxpayers. The result has been yet another significant increase in wealth inequality and instability.

The above review is phrased in generalities. Perhaps it would help to take a somewhat more detailed look at some of the specific dysfunctions created by excess wealth. We should all be familiar with a few of them. One full category involves investment bubbles, in which eager investors get together with brokers willing to take their disposable funds with the goal of inflating values in what become known as “hot” markets. The 1929 stock market crash and the Great Depression were the inevitable result of rampant, highly leveraged speculation in equities. The late 1980s brought us a real estate investment boom that ended in scandals and the collapse of the Savings and Loan industry. The 1990s had the dot-com boom, which flooded the nation with fiber optic infrastructure and poorly vetted internet start-ups and went bust in 2000. Eager investors then pivoted again, creating the housing boom that collapsed in 2008, in which a widespread house-flipping mania was layered onto a hyperactive market in creative and often illegal mortgage products pushed by shady mortgage brokerages using sales methods that often bordered on fraud. Those brokers had no incentive to avoid risky contracts because they didn’t retain any of the loans they had arranged. They pocketed their commissions and processing fees, and the resulting high-risk loans were sold off and bundled into derivative packages that were in turn traded by financial firms using short-term borrowed money at rates of leveraging that would have been illegal under the previous New Deal regulations. Once again, too many investors looking for rapid financial returns overstimulated a formerly regulated market.

Massive federal bailouts and legislative reforms managed to turn the Great Recession of 2008 into a lengthy, if gradual, economic expansion, building toward record profits, high returns to investors and the unprecedented growth of a few personal fortunes. What it did not lead to was ubiquitous prosperity. Millions of families lost their homes. Wages remained stagnant and the percentage of workers in the middle class actually declined. But the system remained largely unchanged. Once again, investors have massive amounts of disposable capital to spread around, and this time they’ve been lavishing it on new artifacts such as cryptocurrencies, non-fungible tokens (NFTs), and meme stocks. These have been accurately characterized as assets based on the “greater fool” theory, investments that have no intrinsic value beyond what can be cadged from a subsequent fool who assumes that prices will continue to grow endlessly. We can only hope that the inevitable collapse of these neo-boomlets will not affect the overall economy as much as the previous events.

There is, unfortunately, more than enough loose money left in the system to continue to cause distortions in the traditional business and real estate markets. Private equity firms are still using their financial and legal muscle to control and cannibalize successful corporations, actions that have led to the destruction of such venerable firms as Sears and Toys R Us. Even when they hold onto their purchases and attempt to manage them, the private equity strategy is to cut costs as much as possible, largely by slashing wages and benefits and staff without concern for the long-term consequences. In the real estate market wealthy investors began taking advantage of falling house prices and foreclosures in 2008, and they continue to purchase and convert family-owned homes into rental properties or flippable rapid profit sources or rarely visited second or third luxury homes. Foreign buyers have driven up housing prices in most large U.S. cities, including New York, Seattle and Miami. All this money has helped to accelerate price inflation for both owners and renters throughout the United States, making it very difficult for most ordinary workers to find housing they can afford. In so many ways, wealthy investors are increasingly using their money in ways that distort markets to their benefit with no concern about the significant negative social impacts they are creating.

Finally, wealthy individuals and large corporations have been using political contributions to influence legislators, to obtain laws and regulations favorable to themselves. Their efforts have effectively reduced their own tax liabilities, exacerbated political corruption, removed or weakened government regulations, helped secure subsidies and government contracts, and even influenced elections. They have benefited from ubiquitous distortions in economic markets that have been supported by self-serving changes in legislative and administrative politics.

But perhaps the biggest problem created by this level of inequality is social and political instability. The election of President Trump, with its deleterious effects—the tariff economy, cuts in taxation and government revenues, environmental setbacks, and savaged confidence in public functions and the media—was made possible by voters who saw themselves being ignored by politicians as they were increasingly bypassed economically. Such voters have also been strongly influenced by messages from neoconservative media outlets that have flourished with financial support from the same wealthy actors who have been lobbying legislators for favorable treatment. Today’s plutocratic corruption in the system is perhaps less blatant, a bit more indirect, than it was in the Gilded Age, but it is no less of an antidemocratic spiral designed to benefit a small coterie of corporate and financial interests. It is unclear what it will take to end this pattern. World history shows that the end result could be either an autocratic dictatorship or something more like the Progressive uprisings and reforms of the early 1900s or the 1930s. We can only hope that it will be the latter.


Maldistribution and its Discontents

In the United States the past four decades have demonstrated that there is a significant amount of surplus in the economic system. That surplus, unfortunately, is not widely shared within our population. Benefits at the top income levels have grown enormously since 1980, whether the beneficiaries are top-level managers or financial advisors or investors. In the same period wages for ordinary workers have been essentially stagnant, barely keeping up with what have been minimal levels of inflation. This is despite the fact that the average workweek for full-time workers has been trending upward, so that in 2021 the average was 41.5 hours, and more than 10 percent of non-management workers worked more than 50 hours each week. From 1971 to 2019 the share of adults in middle-income households (the vaunted middle class) decreased from 61 percent to 51 percent, and many of those who managed to remain in the middle class did so only by expanding from one full-time worker to two. In large part this is because worker compensation, which prior to 1980 had risen in step with economic productivity, has since grown only one-third as fast as productivity. In other words, the increased economic value created by workers is no longer being returned to them. The United States economy is no longer managed in ways that share our prosperity broadly.

Admittedly, economic inequality has always been a problem in our country. Back in 1989 the wealthiest 5 percent of families had 114 times as much wealth as the second quintile (the families between the 20th and 40th percentiles of the population when ranked by wealth). That’s the top 5 percent of families having more than 100 times the wealth of a lower-level 20 percent. Even back then, that did not qualify as a reasonable distribution. Twenty-seven years later, in 2016, after a period that included the boom years of the 1990s, the bust of the 2008 Great Recession, and the decade-long recovery that followed (but not the Covid recession), the top 5 percent had 248 times as much wealth as the second quintile. The excessive disparity had more than doubled. Note: Comparison is made with the second quintile because in the United States the median wealth of the first (lowest) quintile is almost always zero or negative.
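As a quick check on the claim that the disparity “more than doubled,” here is a minimal calculation using only the two ratios quoted above.

```python
# Top-5-percent family wealth relative to second-quintile family wealth,
# as cited above for 1989 and 2016.
ratio_1989 = 114
ratio_2016 = 248

growth = ratio_2016 / ratio_1989
print(f"The disparity multiple grew by a factor of {growth:.2f}")  # ~2.18, i.e. more than doubled
```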

The fact is that poverty in most modern developed countries is less a matter of scarcity of resources and more a matter of maldistribution. In most of these countries there are now growing discussions about shorter work weeks, most often without any decrease in personal income. Polls have found that large majorities of workers in Europe would prefer more leisure time to higher salaries, and recent studies have demonstrated that shorter weeks can be implemented with minimal loss in company productivity. In the United States, of course, a large proportion of workers desperately need, and deserve, higher incomes as well. Despite this, proposals for an adequate minimum wage and for shorter work hours have been met with direct opposition, even ridicule, and with charges that they are both unrealistic and “Socialist” (by implication, totalitarian and antidemocratic). But are those charges true? Given the current severely unequal distribution of income and resources, is that an accurate assertion? Wouldn’t it be possible to spread income around much more equitably and increase leisure at the same time? The answer is yes, it would, if we only had the political will.

More than nine decades ago John Maynard Keynes predicted that increases in worker productivity would make it possible for people to earn an adequate living while working only 15 hours a week. His prediction did not become reality, in part because the possibility was derailed by inflation and by expanded consumer demand fueled by advertising and easy credit, but primarily because massive shares of productivity-derived income were transferred away from wages and into the exploding financial markets, and through them into the accounts of a relatively small number of investors. This tendency had existed since the beginnings of the industrial revolution, but it increased exponentially beginning in the 1970s.

For example, look at estimates of average annual working hours over the past eight centuries. Contrast the following figures with the standard that our labor struggles, regulations, and laws have built into our current arrangement of 40 hours a week for 50 weeks a year, or 2,000 hours per year (a rough arithmetic sketch of how such totals add up follows the list):

In the 13th century, male peasants worked long hours during the growing season, with 12-hour days common, but their peak work periods totaled fewer than 150 days a year. Total estimate: 1,620 hours per year.

In the 14th century, casual laborers also worked long days, but tended to end their “working season” when their basic needs were met, after about 120 days. Total estimate: 1,440 hours.

In the 15th century, contract labor rules expanded the work year, typically requiring 10-hour days for two-thirds of the year. Total estimate: 2,300 hours.

By the 18th century, industrialization had moved much labor into factory facilities that required more than 40 weeks per year, 7 days per week, 10+ hours per day. Total estimate: 3,200 hours.
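To show how such yearly totals are arrived at, here is a rough back-of-envelope sketch in Python. The hours-per-day and days-per-year pairs are my own illustrative assumptions drawn from the descriptions above, not figures taken from the underlying historical studies, and the products only approximate the quoted estimates.

```python
# Back-of-envelope arithmetic for the annual-hours estimates listed above.
# The hours-per-day and days-per-year pairs are illustrative assumptions,
# chosen only to show how such yearly totals are reached; the products
# only approximate the estimates quoted in the historical literature.
eras = {
    "13th-century peasant":        (12,   135),  # 12-hour days, fewer than 150 working days
    "14th-century casual laborer": (12,   120),  # long days, ~120 days to meet basic needs
    "15th-century contract labor": (10,   240),  # 10-hour days for roughly two-thirds of the year
    "18th-century factory worker": (10.5, 305),  # 40+ weeks, 7 days a week, 10+ hour days
    "Modern 40-hour standard":     (8,    250),  # 8 hours a day, 5 days a week, 50 weeks a year
}

for era, (hours_per_day, days_per_year) in eras.items():
    print(f"{era}: about {hours_per_day * days_per_year:,.0f} hours per year")
```

The point of the exercise is simply that the modern 2,000-hour year sits in the middle of the historical range, not at its bottom.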

This continued through the last half of the 19th century, when the growth of productivity, in combination with long employee hours and low wages, created vast economic surpluses that were concentrated at the top. In industrial nations extreme labor hours and poverty at the bottom commonly contrasted sharply with extreme wealth in the hands of a few, a “gilded age” that was golden only for a tiny percentage of the population.

It was during this period of industrialized excess, the late 18th and 19th centuries, that anthropologists began studying hunter-gatherer societies around the globe. At first they tended to regard such small “primitive” groups as following a minimal subsistence-level economy, one that required almost constant effort just to survive. The groups they visited were generally those that had been pushed into marginal environments containing limited resources, regions with little water, sparse vegetation, and scattered animal life. As their methods and scope improved, however, more analytical and objective studies demonstrated that even these marginal groups enjoyed a significant amount of leisure time. Further research found that there had been many other pre-agricultural societies that provided not only survival levels of sustenance but regularly produced large surpluses of useful goods: food, tools, art, and elaborate socioeconomic knowledge. In many of these societies it was common to redistribute much of that excess through frequent sharing or communal feasts, exemplified by the “potlatch” events of the northwest coast cultures of what is now the United States and Canada and the “Big Time” celebrations still practiced among California’s Indigenous peoples. Anthropologists have concluded that the lessons gained from such societies indicate that Keynes’ estimate of 15 hours per week of productive work may not be unreasonable even without modern labor-saving tools.

In the late 19th and early 20th centuries, worker protests and strikes and progressive political movements responded to rampant industrial excesses by moving toward 8-hour days and 5-day weeks. The resulting 40-hour week is now a widespread standard but, unfortunately, is still unpopular among corporate leaders. As I write this, workers at Kellogg’s factories in the United States are on strike after being required to work 12-hour shifts, 7 days a week, over a period of months. Forced overtime is also a factor in a walkout at a Frito-Lay plant in Kansas, and Nabisco workers in five states are striking for a contract that would prevent the company from moving manufacturing operations to Mexico, where regulations regarding wages, benefits, and hours are much less stringent. Computerized management tools are being applied to monitor worker activity, to suppress “unproductive motion” and, ostensibly, to increase employee productivity. Conflicts over this type of micromanagement are also increasing, taking labor protests beyond the usual core issues of wages and benefits. All of this is happening despite the fact that corporate profits and stockholder returns are at record high levels.

The optimistic Keynes prediction is now being repeated by some entrepreneurs in self-serving statements about automation and the future of work. In a 2021 article the CEO of OpenAI suggested that after machines (inevitably) take over virtually all human employment, the AI industry would be so wealthy that it could bankroll a universal basic income to replace the system of paid work with paid leisure. Well, yes, I suppose it could. This, of course, ignores the current reality in which industries and owners are accumulating massive wealth without displaying any such generous impulses; instead, they continue lobbying for lower taxes, resisting higher wages and union organizing, and using technology to impose stressful and hazardous job requirements on their employees. Their intent is clearly to increase profits, not to share any of their vast wealth with anyone. Perhaps it really is time for government to enforce corporate generosity.

The United States should be paying attention to trends in western Europe, where adequate wage structures and support for unions have not only provided lower-level employees with incomes significantly higher than those for comparable jobs in the United States, but have also allowed workers the freedom to push for alternatives such as shorter work days and four-day weeks. We should be considering minimum wages and benefits that would actually allow even the lowest-paid full-time workers to support themselves and their families without working more than one job, and should seriously consider such governmental social supports as inclusive single-payer health insurance and universal basic income payments. We have a very affluent society, an economy that could easily afford all of this, if only we would find a way to reverse our decades-long trend of economic maldistribution.


JIT Downfolly

A couple of decades ago a series of upbeat advertisements was featured on television networks and many cable outlets, almost wherever paid video could find a niche. The most memorable of them began with an image from the inside of what appeared to be a small storefront, the camera panning from the large plate glass windows and door inward across a wall full of empty shelves. At the end of the 30-second spot, following some stock images of container ships and delivery trucks arriving at night, those same shelves were filled with varied, attractive merchandise. During those transitions a voiceover and brief on-screen phrases praised the benefits of something called “just-in-time” (JIT) logistics. The implication of all of the ads was that JIT was a new rational strategy that would improve efficiency and reduce costs in all aspects of commerce, especially at the end of the process, in retail stores, those locations commonly referred to as brick-and-mortar outlets.

Just to be accurate, though, JIT is not a new concept. Its origins go back at least to 1938, when the Toyota Motor Company began an effort to improve the alignment of its manufacturing processes with the suppliers of the parts it would need. The goal was to order materials at the optimum moment: not so early that parts would have to sit on shelves while waiting to be installed, but not so late that the assembly line would have to be shut down until they arrived. With proper planning, it was argued, the costs related to both inventory storage and stoppage delays could be significantly reduced, along with any waste associated with last-minute design changes. The idea was widely adopted in the manufacturing sector, although the planning process was increasingly complicated by foreign outsourcing and, often, by the desirability of ordering parts from more than one source to reduce losses from unpredictable events affecting a single supplier or region of the world. Around the turn of the century, however, procurement philosophy turned in favor of single-source contracts, and pricing concerns came to favor manufacturers in Southeast Asia.
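To make that timing tradeoff concrete, here is a minimal sketch of the standard textbook reorder-point calculation. The part counts and lead times are hypothetical, and this is an illustration of the general idea rather than Toyota’s actual planning system.

```python
# A hypothetical sketch of the timing tradeoff behind JIT ordering: order late
# enough that parts don't sit in storage, but early enough that they arrive
# before the assembly line runs dry. This is the standard textbook
# reorder-point idea, not Toyota's actual planning system.

def reorder_point(daily_usage: float, lead_time_days: float, safety_stock: float = 0.0) -> float:
    """Inventory level at which a new order must be placed to arrive just in time."""
    return daily_usage * lead_time_days + safety_stock

# A hypothetical plant consuming 500 parts per day:
print(reorder_point(500, 30))   # 15000.0 -- with a one-month delivery window
print(reorder_point(500, 180))  # 90000.0 -- if delivery stretches to six months, as during the pandemic
```

When lead times stretch from one month to six, the “optimum moment” to order moves months earlier and the inventory savings that JIT promised largely evaporate, which is exactly the failure mode described below.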

The JIT promotional advertisements are long gone from television, but if you do an internet search for JIT you will still find the positive-thinking detritus of that era: articles from business experts and pundits extolling the many benefits of precisely timing your orders for parts to match your manufacturing schedule, or of putting in your request for the latest toy from the Chinese factory, the sole source, four months prior to Christmas. These songs of praise acknowledge no possible downside.

In the retail world there were related developments contemporaneous with the full implementation of JIT. Those of us who have been “around a while,” as the saying goes, remember a time when the racks in a clothing store did not contain all of the items the store had in stock. If you liked a pair of slacks but didn’t see them in the size or color you wanted, you would ask the clerk if they had the same thing in, say, a size eight. The clerk would then go into a back room and often come out with what you needed. A version of this arrangement still exists in many shoe departments, where you have to ask a salesperson to see a pair and try it on, but in most other stores that’s no longer the case; any back-room storage space seems to be kept empty, if it hasn’t already been converted to floor display space. In most cases now, what you see out on the shelves and racks is all that they have available. This not only reduces the need for back-room storage, it is also credited with reducing the work force, as employees no longer have to leave the display floor.

Yet another related retail trend is what might be called “just-in-time employees.” In this strategy retail management saves on labor costs by keeping workers on call instead of in the store. When an unexpected number of customers appears, the manager sends a message and the employee heads to work. On the other end, when customers thin out, the manager will often tell employees that they are no longer needed. In this way the store pays workers only for the time when they are actually at work, inside the store helping customers, not for the time spent waiting for a call or traveling to and from the location. This clearly can save the company a significant amount in wages, but it exacts a serious toll on employees’ ability to plan their lives, in terms of both scheduling and income stability.

The new realities of the years of the pandemic, 2020 and 2021, have changed logistics in ways that have exposed many of the flaws in all of these forms of JIT, flaws that also highlight the negative ripple effects possible in planning that doesn’t include allowances for unforeseen events.

Retail outlets and factories that have become acclimated to JIT ordering have been hit hard as one-month delivery schedules have stretched out to six months or longer. These delays have had multiple causes, including quarantine-related closures and employee absenteeism that have shut down factories and created backlogs in the loading and unloading of container ships and in cross-country train and truck transportation. Retail outlets with no local warehousing have ended up with empty shelves and lost sales as they wait for the next just-not-in-time delivery of items they ordered for JIT arrival. Shipping costs have also multiplied, rising to six to eleven times pre-Covid rates. In many cases, corporations have responded by planning to move manufacturing operations back closer to their customer base, including into the United States.

As for JIT employees, that system always relied on a surplus of retail and warehouse workers willing to accept substandard working arrangements. The pandemic has changed that dynamic as many workers have dropped out or changed employers rather than accept jobs that offer minimal wages and benefits, uncertain hours, questionable (often restrictive or repressive) working conditions, and the threat of direct public contact with its attendant risk of Covid infection. Some politicians have used the resulting labor shortages as an excuse for cutting back on unemployment benefits, but recent surveys have shown that as many as 90 percent of the employees who have quit their jobs are, in fact, individuals over 55 who have taken early retirement. The fact is, labor relations have shifted from an employers’ market to an employees’ market in an overall environment of very low unemployment.

The pandemic was not the sole factor in these changes. It simply exacerbated the effects of four years of supply-chain volatility resulting from the Trump administration’s tariff wars, a higher-than-usual frequency of union-representation requests, strikes, and walkouts, and an increasing number of climate-related natural disasters on multiple continents. Not quite a perfect storm of negative impacts, but significant enough to cause many corporations to rethink their wage structures and working conditions, as well as the rest of the spectrum of JIT cost-cutting and operational efficiencies.

My response? It’s about time. Most of the employer-centered innovations of the past fifty years have been implemented with the sole goal of increasing shareholder returns. These include JIT inputs and scheduling, employee efficiency monitoring, wage controls, union busting, tax avoidance, outsourcing, industry consolidation, and skimping on the quality and effectiveness of products. As I have noted before, corporate strategy has long been too intensely focused on shareholder value at the expense of any other stakeholders, including the employees, customers, local infrastructure, and the geosocial environments of all of the above. Policy inputs from all those other stakeholders have been largely ignored. It’s about time that the power balance shifted back from profits to community values. If the Covid pandemic can help force such a reassessment, then perhaps there may eventually be at least one positive outcome of this global disaster.


Hair!

Visualize Iran’s Ayatollah Khomeini and Al Qaeda’s Osama bin Laden and Orthodox rabbis and ZZ Top and Duck Dynasty. What do these people have in common? If you’re thinking that I made the question too easy by including too many examples representing too many very disparate individuals, you are probably right. But that was, in large part, the point. The answer is evident from pictures and media comments regarding all of the above, from artifacts and information familiar to millions. It should also be evident to anyone familiar with these examples that there are many different reasons why grown men would allow their facial hair to grow almost or totally unimpeded, to the point praised by the title song from Hair, the point where “it stops by itself.” Some of the above-named individuals have allowed their facial hair to grow to this point because of specific commands in their chosen sacred texts. As for some others, well, let’s just say that their reasons are likely not religious.

There is another commonality among the members of the above well-bearded list. They all are followers, admittedly with differing levels of knowledge and personal commitment, of one of the traditions that have been overgeneralized as the Abrahamic religions. Their generous form of facial decoration is also a common feature among the founders of the major faiths in that tradition, notably Judaism, Christianity, Mormonism, and Bahá’í. Note that I have excluded Islam here, even though it does belong in the Abrahamic list, in deference to its strictures regarding depictions of Muhammad, much less his facial hair.

Abraham and Moses and the primary Judeo-Christian God are all generally depicted as having generous white beards and matching long hair, although these lengthy growths are often envisioned as neatly trimmed and combed and even wavy. Whether these characteristics appear or not, of course, depends on the preferences of the particular sect that is providing and venerating the image. I’m tempted to add to these prominent persons one other significant religious figure in modern European Christianity, the abundantly bearded Santa Claus. We must assume that the images of all of these men are meant to inspire reverence, placing them in the same category as aged family patriarchs who embody the desired qualities of experience, vast knowledge, and earned authority (in God’s case, this would include the related possession and sometimes arbitrary use of vast supernatural powers).

On the other hand, the alter ego of the Christian God, Jesus, the much younger, mostly benevolent version of his father, the one more likely to forgive than to punish, is depicted in most modern iconography with facial hair that is short, youthfully dark, and neatly trimmed. Jesus also generally appears with European features, an apparent mischaracterization. But in contrast to his neatly trimmed facial hair, Jesus is usually pictured with the same full shoulder-length locks that God has, albeit in a darker and more youthful version. Here again, the long hair may be intended to denote wisdom beyond his years. Perhaps this is a variant of the Samson story, in which long hair is considered concomitant with inordinate powers, in this case a different expression of strength.

Unlike their God and messiah, however, European and American Christian leaders are almost universally bare-faced and closely trimmed. This is clearly a denominational choice, a fact that can be demonstrated by comparing the bare faces of the Catholic Pope and most western Protestant leaders with the full minimal-trim growths preferred by the patriarchs of Eastern Orthodox Christian churches. Western Christian evangelists who regularly appear on television sport skin so well scraped and coated with foundation and concealers that they don’t even exhibit the common masculine flaw known as five o’clock shadow, nor do they display the evident heresy of neck hair that touches their collars. It’s almost as if they’re doing their best to remove all traces of their connection to our hairy evolutionary ancestors, a not-so-missing link they are always eager to deny. Perhaps rather than being examples for devotional imitation (as in Eastern Orthodoxy), the furry examples of the Father and the Son are merely considered historical aberrations.

So what are we to make of other tonsorial preferences and related evolutions? I am old enough to remember the years in which shoulder-length hair on males was considered unpatriotic, a “hippie” expression associated with opposition to the Vietnam War and/or the Establishment. It seems that masculine-style short hair on women was also considered subversive. Popular music stars like the Beatles and Petula Clark and “country outlaws” such as Waylon Jennings and Willie Nelson helped to change those attitudes. Today, beards and long male tresses, and the sometimes reviled female pixie cuts as well, are no longer considered rebellious or shocking or antisocial. That trend in itself can only be considered positive. We’re even making some progress in acceptance of the wide variety of more natural (i.e., not artificially straightened) hair styles preferred by black people, although that set of changes has required enforcement through legal action.

I should note here that I personally sport a short beard, one that I keep at around a quarter of an inch in length. My head hair is also relatively short. I doubt that I will ever return to daily scrapings of my cheeks and chin with sharp edges, but I would also never allow my facial hair to grow to the point where it would interfere with eating or become a temptation for nesting sparrows. I’m also not much impressed by the current fashion of semi-beards, the popular trims that are constantly maintained at a length that looks like two days’ worth of stubble. But I am liberal enough to believe that anyone should have the unquestioned right to trim the fur on their head to whatever length they prefer, including the formation of visible designs and letters and thin lines and isolated hedgerows. This is part of my broader philosophy that nobody should have to trim or shave or pluck or wax-rip whatever their body grows if they don’t want to, whether women or men, and in reference to any part of the body that contains and produces hair. Admittedly, I do have the feeling that applying hot wax to human skin and ripping it painfully away is a weird and perhaps barbaric practice, but if someone desires abnormally smooth skin and this is their preferred method of achieving that result, then I will raise no argument against it. Perhaps it beats the risk of cuts and razor burn in sensitive places. And well, yes, my biases are showing in those last two sentences, but that applies only to me. Go ahead, just as long as it is truly your decision and not that of “society” or “fashion” or “everyone is doing it.”

The broader reality is that body hair in many forms can be attractive. It is also a reminder of our kinship with the “lower animals” through evolution, animals such as the short and hirsute chimpanzees with whom we share some 97 percent of our genetic inheritance. The remaining 3 percent of our genome obviously carries an amazing amount of shape-shifting information, including, and far more significant than, those few genes that do or do not put dense concentrations of hair follicles on our skin, and that cause those follicles to grow almost-invisible peach fuzz or a self-regulating layer of warm protective fur or lengthy tresses that can reach multiple feet in length. Obviously, even those few hair genes display large variations in the physical results they produce (their phenotype, for those who prefer the correct terminology). It is fortunate that we humans have responded by developing skin coverings, clothing of extraordinarily varied types, that help make up for what our genes have lost, often using what the follicles of other animals have provided. After all, discounting such examples as images of Lady Godiva or the story of Rapunzel, it is unlikely that we humans can produce enough of our own largely skull-based fur to provide adequate protection against either embarrassment or cold. Another phenotype within the 3 percent, our highly expanded brains, has helped us make up for that hair inadequacy.


Homeland Coup

In the United States we often comment unfavorably on the failures of democratic rule in other countries, the various insurrections and coups and corrupt elections, or the simple failures to transfer power from a losing administration to the winners of an election. We compare such breaks in the rule of law and citizen consent to the long-term continuity of most western European countries and, of course, to our own success with two centuries of peaceful electoral-driven rule. What we fail to recognize often enough is the inherent fragility of any democratic form of government, even in countries with a long history of successful rule.

That veneer of exceptionalism has been progressively stripped away in the past year as we learned more about the attempts made by the administration of President Donald Trump, and by his minions outside it, to hijack the 2020 presidential election and, after the fact, to reverse the inauguration of his successor. The tales of incompetence and subterfuge are multiplying, released by former Trump associates and journalists and provided in books and media comments by Stephanie Grisham, Michael Wolff, David Cay Johnston, and Bob Woodward and Robert Costa, among others. More will be released as the House select committee investigating the January 6th attack expands its hearings on the Capitol riot that attempted to halt the certification of the Electoral College results.

The January 6th attempted insurrection was an extraordinary event, the first large-scale destructive attack on the home of our legislative bodies since the British Army burned it in 1814, and the only serious domestic attempt ever made to halt the peaceful transfer of power from one president to another. But the riots weren’t the only efforts made to block that certification. In the resumed Congressional proceedings following the riots, more than 128 Republican members voted to reject the Biden wins in Arizona and Pennsylvania. We have since learned that Trump’s staff had created a proposal in which Vice President Mike Pence would refuse to accept the electoral results from seven states, using the bogus argument that the state electors had been challenged by alternate slates. This would either give Trump the win outright or throw the decision to the House of Representatives, where Republicans controlled a majority of the state delegations that would decide such a contingent election. Fortunately, after Pence consulted several knowledgeable experts (including former Vice President Dan Quayle), he decided not to go along with the Trump plan.

There were also lawsuits intended to reject electoral results. In the months following the November election several pro-Trump legal teams filed challenges in at least nine states. As Biden himself noted on January 7th, “In more than 60 cases, in state after state after state, and then at the Supreme Court, judges, including people considered ‘his judges, Trump judges,’ to use his words, looked at the allegations that Trump was making and determined they were without any merit.” Biden’s summary was correct. There were 63 cases and only one win, a minor ruling that slightly reduced the amount of time that mail-in voters in Pennsylvania were allowed to correct their ballots. In that one win the number of votes affected was only a small fraction of the number Trump would have needed to change the overall state outcome.

There were also audits and recounts in many locations, and none of those changed the results. It soon became glaringly obvious to all but the most partisan Trump supporters that the 2020 presidential election was one of the most secure and accurate in history. On December 1st, President Trump’s Attorney General, Bill Barr, noted that, “To date, we have not seen fraud on a scale that could have effected a different outcome in the election.” In the book he published a few months later he said, more directly, that Trump’s continuing election story was “all bullshit.” As for the president-reject himself, he finally agreed that there would be an “orderly transition” to a Biden administration, adding a typical denial, “even though I totally disagree with the outcome of the election, and the facts bear me out.” That was as close as Trump ever got to a concession. In the meantime, Trump was calling Georgia’s Secretary of State, the man in charge of that state’s elections, asking him to find, somehow, somewhere, the exact number of pro-Trump votes needed to bring Georgia into his win column. We are fortunate that that official, Brad Raffensperger, chose to follow the laws of his state rather than the demands of a powerful man who remains influential with Georgia voters.

For state election officials it wasn’t just pressure from the then-president. Elements within Trump’s Department of Justice were pushing for a broad investigation of charges of election fraud, work that would have amounted to the effective harassment of election workers across the country. If they had succeeded we could have seen a series of additional audits similar to the one completed in Arizona in late September, after months of work. We may still see similar “fraudits” in other states as a result of decisions by GOP legislators, even in states that have already run official audits, despite the fact that the Arizona recount managed only to reinforce Biden’s win.

But the continuing threat went beyond all of the above. There were Trump associates who suggested that the then-president could declare martial law to stop the transfer of power, and those who supported, even incited, the January 6th rioters and “domestic terrorists” may have done so in order to justify the imposition of military rule. Trump loyalists like Anthony Tata and Kash Patel were moved into key positions in the Defense Department after many of the previous civilian leaders resigned without explanation. That, along with Trump’s expressed attitudes, led General Mark Milley, the Chairman of the Joint Chiefs of Staff, to become concerned that the then-president was planning a coup to stop the inauguration of President Biden. He noted that he had to be “on guard” for that possibility, and told journalists Carol Leonnig and Philip Rucker that “They may try, but they’re not going to succeed. You can’t do this without the CIA and the FBI. We’re the guys with the guns.” We may be fortunate that the military Chiefs of Staff and the leaders of those intelligence agencies refused to consider the wishes of their outgoing boss, the man who was still the Commander in Chief. In other countries, it is often the backing of military leadership that makes antidemocratic coups successful.

The threat is hardly over. Legislatures in Republican-led states are passing new election laws with several dangerous provisions. Their first set of moves has created restrictions designed to make it more difficult for people who tend to vote for Democrats to register and vote. That is occurring in at least 19 states. Their second strategy, in most of the same states, is redistricting, or more accurately, gerrymandering: if minority voters can get past the new obstacles, they will find themselves in districts in which they are a political minority. And the third GOP plan is even more antidemocratic. In Arizona and Georgia the legislatures have passed laws that would strip the Secretary of State and county election officials of the ability to oversee procedures and results, allowing legislators to replace traditionally nonpartisan officials with Republican-directed authorities. Other GOP-led states could soon follow suit. In more than twenty states the legislatures have also introduced bills that would limit the ability of judges to rule on election disputes. The danger is that the GOP could simply overrule the will of the voters, expanding its power in the 2022 midterm elections and making it possible for Donald Trump to win in 2024. At that point the person rejected by the voters in 2020 would be in a position to use his presidential powers and the support of his political party to achieve his dream of making his presidency permanent.

Coups of this sort have happened in other countries, as they did (with our assistance) in Bolivia and Chile and Haiti and Honduras and Iran, among others. One could easily happen in the United States, too. The military and the FBI came down on the side of the law in support of President Biden (and of Governor Gretchen Whitmer of Michigan), so we have been fortunate so far. But we could still fail to preserve our democracy if we don’t act now to protect nonpartisan control of elections, to expand voting rights, and to reject the widespread lies about fraud.
