Deniability

I’ve always been a company man. It always made perfect sense to me. Starting with the fact that I was, and would in all likelihood remain, a mid-level functionary in a large business establishment, it seemed that it would be best for me, in terms of survival and stress reduction and gratitude to those to whom I owed my employment, to avoid questioning anything. I have followed that essential blueprint successfully for 26 years. Unfortunately, now, my long-term plan may have finally backfired, and, on top of it all, only four years before I would have qualified for retirement. I can’t be sure, of course. But it may be that my quiet avoidance of conflict, and possibly my longevity in an organization in which long stable careers are rare, has now been mistaken for blind loyalty, for an indicator or source of inside knowledge, or, laughably, I think, for wisdom. In any case, when it became clear that an organizational realignment was imminent and unavoidable, when management began making noises about poking around inside the firm, and someone “up there” decided that someone else was needed to investigate “the problem,” I was chosen. It is better, I suppose, to be the investigator than the scapegoat; although it has occurred to me that I may have been selected to serve in both capacities.

The upper administration sent memos around announcing my new role and encouraging everyone to cooperate with my investigation in any way they could. I’m still not certain if that advance notice actually helped or hurt my efforts. I’ve considered that the current reputation of the top management levels may mean that their imprimatur could prove to be more of a liability than an asset. It’s also quite possible that the advance notice of this investigation might have had the effect of giving everyone the time they needed to develop new rationalizations and to get rid of any unwanted evidence or to create new supportive documents. It’s also quite possible that such an effect was intentional.

I’ve been at it for about three months now. In that time I’ve found it exceedingly difficult to gather information. This doesn’t surprise me, of course, given the concerns listed above as well as the prevailing business practices of the past two decades. At the heart of it all, I think, is deniability. Upper management had long made it clear that they didn’t want to know any details from lower management, much less from those who actually do most of the work, just in case. That was it—just in case. They never went any further with any of their statements about information flow, neither to clarify their intent nor to answer the obvious question—just in case of what?—nor to provide any other elucidation of their own. And when they decided to provide instructions or, more accurately, when they offered us a general outline of our future goals or targets or objectives or mission statements or whatever else they might call them, they would do so by meeting with our team, and only our entire team, in person. They did not encourage any questions or feedback, and prohibited recordings. Written communications were even more abstract than the verbal, often amounting to little more than cheerleading, or, I should say, motivational messaging. The suggested strategies for achieving the general objectives or targets were only vaguely referenced, using terms such as “asset optimization” and “efficient resource utilization” and “development and pursuit of intrinsically positive growth.” The only thing that remained clear was that we were on our own in determining how we planned to “maximize throughput and operational returns.” That much I had observed long before I became the lead investigator in the current effort; I doubt that any such conclusions would be welcome in my ultimate report.

Gradually, using mostly unapproved channels of communication, I have documented that similar practices were standard in all of the other divisions of our company that I was tasked to investigate. Most of my information, of course, did not come from any of the current departmental managers or team leaders. Management at all levels apparently has internalized and/or mimicked the virtues of non-specificity and ambiguity. And it’s obvious that the example set by upper management has effectively trickled downward. This seems to be a reasonable and understandable defensive measure. Everyone knows that it would, after all, be much more difficult for the corporate bean-counters and upper administrators—not to mention appointed investigators—to distribute blame and impose accountability penalties if nobody really knew what outcomes to measure, if nobody had a clear idea of the quarterly starting point or ending point or the intervening expectations, justifications, and procedures. Every department had duly produced a mission statement and standardized quarterly reports, printed in the approved format in soft-cover binders and filed with copies to the appropriate corporate libraries. The ones that I have read were universally positive, glowing even, employing acceptable circumlocutions to demonstrate above-average departmental efficiency while containing absolutely no wording that indicated what, exactly, was being completed so efficiently. I’ve spoken to more than half of the departmental team leaders, only to find that all of them have mastered one vital competency: the ability to spend at least an hour discussing their team’s successful accomplishments without once providing any clue as to what precisely they had accomplished or had intended to accomplish.

I have asked to see some of the spreadsheets and other computer models that are referenced in various reports and summaries of departmental activities. In most cases this access has been granted, grudgingly and generally only after I have repeatedly stressed the serious difficulties that may be facing the company as a whole, and only after I have been repeatedly cautioned by the creators and users of such files that the files themselves are so large and complex that I would be unlikely to adequately comprehend them. In fact, I’ve found that warning to be true; all are similar to the large, one might even say bloated, computer files that I myself have created and embellished over the past decade. The operational and diagnostic models referenced by the documents in the other departments are so extensive and labyrinthine, and the underlying formulas and algorithms so convoluted and poorly documented, that even those who refer to them and update them on a daily basis often do not seem to be able to provide a coherent explanation of their inner workings. Or perhaps, in line with the general corporate milieu, they choose not to try.

After all, those individuals, like myself, who can produce meaningful explanations of some corporate documents have long found that their immediate superiors, the people who make the decisions based on such reports, either cannot comprehend, or will not listen to, or will never bother to ask for, any background clarification. This is to be expected. The underlying computer models are, or were, state of the art and constantly evolving. The analysts who created them are, or were, college-trained specialists in such arcane arts as continuous probability and combinatorics and differential entropy. And with the high employee turnover—I did note that I am a rare example of longevity, didn’t I?—the models have generally outlived their creators by years, if not decades, which means that the current analysts are usually working with a computational core built by a predecessor, or often by someone before their immediate predecessor, and sometimes with models based on legacy functions that were already obsolete when those analysts entered college. This is what is described as the industry standard, the system necessary to maintain a competitive edge at the highest level.

Of course, what is listed above are the general impressions I’ve gleaned from hundreds of hours of discussions and explorations of files, interactions which were almost exclusively conducted under promises of individual anonymity. The pinpointing of individual experiences and details may not be important in any case; what is important are the widespread trends or tendencies that don’t appear to be isolated within any one department. We have achieved a level of corporate uniformity of purpose and methodology that is well beyond anything we expected to achieve when I first joined the firm. It may not be the result that we had hoped for, but it certainly seems to be the logical extension of the strategies we have applied. Are they dysfunctional? Time will tell, but my final report will not.


Maldistribution and its Consequences

In last month’s post I noted that the past four decades have demonstrated that there is a significant amount of surplus in the economic system and that that surplus, obviously and unfortunately, is not widely shared within our population. Benefits at the top income levels have grown enormously since 1980, expanding the portfolios of top-level management, financial advisors, and investors. The income and wealth inequality in the United States has reached even more extreme levels than our nation experienced during the Gilded Age, the age of autocratic wealth and control that began following the Civil War and ended with the Great Depression of the 1930s. As an example, in each of the years 2016 to 2019, the top ten percent of U.S. citizens received more than half of the total annual national income and held more than three-quarters of total wealth, while the bottom fifty percent—fully half of the population—received only 15 percent of the income and held only about one percent of total wealth. This current inequality is actually more extreme than the pre-depression levels of the late 1920s. And this time, as so many times in the past, the vast concentrations of wealth and investment income have led to significant problems and instability in the national and world economic system.

The most serious economic problems and scandals of the Gilded Age and of the most recent four decades are directly attributable to these high levels of wealth inequality. While we must admit that this is not a simple relationship, we can connect much of the instability of both eras to the corrosive effects of too much money chasing too few investments in search of easy and often inordinate financial returns. We can look at the historical record to demonstrate this.

In the 40 “gilded” years between 1890 and 1930 there were 9 recessions and 2 extreme depressions, including the first year of the Great Depression. In the 40 years between 1980 and 2020 there were 6 recessions, one of them the Great Recession of 2007-2009, the worst collapse since the 1930s. The economic downturns in this latter period would have been more frequent and more serious if it hadn’t been for massive interventions by the federal government and the Federal Reserve. Of those 17 major economic downturns mentioned above, almost all were caused by overreactions to economic stresses, reactions that were commonly called “panics” back in the 1800s, in which investors crashed the economy by attempting to pull back their levels of portfolio risk in response to downturns, fiscal scandals, or rumors.

Contrast this record with the 40 years in the middle decades, between 1940 and 1980, in which economic activity was largely stabilized by New Deal regulatory systems and by high marginal tax rates on the highest income levels, redistributive taxes that ranged from 70 to 95 percent. In those four middle decades, government policies helped to level out both income and wealth inequality and to dampen financial speculation. There were 7 comparatively mild recessions; three were caused by sharp drops in government spending after major wars (WWII, Korea, and Vietnam), two by intentional tightening of monetary policy to counter inflation (1958 and 1980), and two by the 1973 OPEC oil crisis and the 1979 Iranian revolution. None were caused by rampant speculation or investor panics and none required massive government interventions on the scale of the New Deal or the post-2008 stimulus. In short, reduced economic inequality meant reduced economic instability.

At the beginning of the Obama years (2009) there was some hope that we as a country had learned from the Great Recession, in which vast speculative activity created a boom market in risky derivative assets that were based on poorly verified housing loans. Unfortunately, major corporate players in that collapse were bailed out and even allowed to grow by absorbing some of their failing competitors. Virtually none of the individual corporate leaders were punished for the frauds they promoted. Some meaningful legal reforms were passed by Congress, but when the Republican Party regained control in 2017 they either reversed those changes or declined to implement them. Then they passed a massive tax reform bill, one which provided few benefits to ordinary workers and which by 2025 will be sending 83 percent of the resulting annual tax savings to corporations and to the richest one percent of taxpayers. The result has been yet another significant increase in wealth inequality and instability.

The above review is phrased in generalities. Perhaps it would help to take a somewhat more detailed look at some of the specific dysfunctions created by excess wealth. We should all be familiar with a few of them. One full category involves investment bubbles, in which eager investors get together with brokers willing to take their disposable funds with the goal of inflating values in what become known as “hot” markets. The 1929 stock market crash and the Great Depression were the inevitable result of rampant highly leveraged speculation in equities. The late 1980s brought us a real estate investment boom that ended in scandals and the collapse of the Savings and Loan industry. The 1990s had the dot-com boom, which flooded the nation with fiber optic infrastructure and poorly vetted internet start-ups and went bust in 2000. Eager investors pivoted again, creating the housing boom of the mid-2000s, in which a widespread house-flipping mania was layered onto a hyperactive market in creative and often illegal mortgage products pushed by shady mortgage brokerages using sales methods that often bordered on fraud. Those brokers had no incentive to avoid risky contracts because they didn’t retain any of the loans they had arranged. They pocketed their commissions and processing fees, and the resulting high-risk loans were sold off and bundled into derivative packages that were in turn traded by financial firms using short-term borrowed money at rates of leveraging that would have been illegal under the previous New Deal regulations. Once again, too many investors looking for rapid financial returns overstimulated a formerly regulated market.

Massive federal bailouts and legislative reforms managed to turn the Great Recession of 2008 into a lengthy, if gradual, economic expansion, building toward record profits, high returns to investors and the unprecedented growth of a few personal fortunes. What it did not lead to was ubiquitous prosperity. Millions of families lost their homes. Wages remained stagnant and the percentage of workers in the middle class actually declined. But the system remained largely unchanged. Once again, investors have massive amounts of disposable capital to spread around, and this time they’ve been lavishing it on new artifacts such as cryptocurrencies, non-fungible tokens (NFTs), and meme stocks. These have been accurately characterized as assets based on the “greater fool” theory, investments that have no intrinsic value beyond what can be cadged from a subsequent fool who assumes that prices will continue to grow endlessly. We can only hope that the inevitable collapse of these neo-boomlets will not affect the overall economy as much as the previous events.

There is, unfortunately, more than enough loose money left in the system to continue to cause distortions in the traditional business and real estate markets. Private equity firms are still using their financial and legal muscle to control and cannibalize successful corporations, actions that have led to the destruction of such venerable firms as Sears and Toys R Us. Even when they hold onto their purchases and attempt to manage them, the private equity strategy is to cut costs as much as possible, largely by slashing wages and benefits and staff without concern for the long-term consequences. In the real estate market wealthy investors began taking advantage of falling house prices and foreclosures in 2008, and they continue to purchase and convert family-owned homes into rental properties or flippable rapid profit sources or rarely visited second or third luxury homes. Foreign buyers have driven up housing prices in most large U.S. cities, including New York, Seattle and Miami. All this money has helped to accelerate price inflation for both owners and renters throughout the United States, making it very difficult for most ordinary workers to find housing they can afford. In so many ways, wealthy investors are increasingly using their money in ways that distort markets to their benefit with no concern about the significant negative social impacts they are creating.

Finally, wealthy individuals and large corporations have been using political contributions to influence legislators, to obtain laws and regulations favorable to themselves. Their efforts have effectively reduced their own tax liabilities, exacerbated political corruption, removed or weakened government regulations, helped secure subsidies and government contracts, and even influenced elections. They have benefited from ubiquitous distortions in economic markets that have been supported by self-serving changes in legislative and administrative policies.

But perhaps the biggest problem created by this level of inequality is social and political instability. The election of President Trump, with its deleterious effects—the tariff economy, cuts in taxation and government revenues, environmental setbacks, and savaged confidence in public functions and the media—was made possible by voters who saw themselves being ignored by politicians as they were increasingly bypassed economically. Such voters have also been strongly influenced by messages from neoconservative media outlets that have flourished with financial support from the same wealthy actors who have been lobbying legislators for favorable treatment. Today’s plutocratic corruption in the system is perhaps less blatant, a bit more indirect, than it was in the Gilded Age, but it is no less of an antidemocratic spiral designed to benefit a small coterie of corporate and financial interests. It is unclear what it will take to end this pattern. World history shows that the end result could be either an autocratic dictatorship or something more like the Progressive uprisings and reforms of the early 1900s or the 1930s. We can only hope that it will be the latter.


Maldistribution and its Discontents

In the United States the past four decades have demonstrated that there is a significant amount of surplus in the economic system. That surplus, unfortunately, is not widely shared within our population. Benefits at the top income levels have grown enormously since 1980, whether the beneficiaries are top-level managers or financial advisors or investors. In the same period wages for ordinary workers have been essentially stagnant, barely keeping up with what have been minimal levels of inflation. This is despite the fact that the average workweek for full-time workers has been trending upward, so that in 2021 the average was 41.5 hours, and more than 10 percent of non-management workers worked more than 50 hours each week. From 1971 to 2019 the share of adults in middle-income households (the vaunted middle class) decreased from 61 percent to 51 percent, and many of those who managed to remain in the middle class did so only by expanding from one full-time worker to two. In large part this is because worker compensation, which prior to 1980 had risen in congruence with economic productivity, has since grown only about one-third as fast as productivity. In other words, the increased economic value created by workers was no longer being returned to them. The United States economy is no longer managed in ways that share our prosperity broadly.

Admittedly, economic inequality has always been a problem in our country. Back in 1989 the wealthiest 5 percent of families had 114 times as much wealth as the second quintile (the families between the 20th and 40th percentiles of the population when ranked by wealth). That’s the top 5 percent of families having more than 100 times the wealth of a lower-level 20 percent. Even back then, that did not qualify as a reasonable distribution. Twenty-seven years later, in 2016—after a period that included the boom years of the 1990s, the bust of the 2008 Great Recession, and the following decade-long recovery, but not the Covid recession—the top 5 percent had 248 times as much wealth as the second quintile. The excessive disparity had more than doubled. Note: Comparison with the second quintile is used because in the United States the median wealth of the first (lowest) quintile is almost always zero or negative.

The fact is that poverty in most modern developed countries is less a matter of scarcity of resources and more a matter of maldistribution. In most of these countries now there are growing discussions about shorter work weeks, most often without any decrease in personal income. Polls have found that large majorities of workers in Europe would prefer more leisure time to higher salaries, and recent studies have demonstrated that shorter weeks can be implemented with minimal loss in company productivity. In the United States, of course, a large proportion of workers desperately need, and deserve, higher incomes as well. Despite this, proposals for an adequate minimum wage and for shorter work hours have been met with direct opposition, even ridicule, and with charges that they are both unrealistic and “Socialist” (by implication, totalitarian and antidemocratic). But are those charges true? Given the current severely unequal distribution of income and resources, are such proposals really unrealistic? Wouldn’t it be possible to spread income around much more equitably and increase leisure at the same time? The answer is yes, it would, if we only had the political will.

More than nine decades ago John Maynard Keynes predicted that increases in worker productivity would make it possible for people to earn an adequate living while working only 15 hours a week. His prediction did not become reality, partly because any such possibility was derailed by inflation and by expanded consumer demand fueled by advertising and easy credit, but primarily because massive percentages of productivity-derived income were transferred away from wages and into the exploding financial markets, and through them into the accounts of a relatively small number of investors. This tendency had existed since the beginnings of the industrial revolution, but it increased exponentially beginning in the 1970s.

For example, look at average annual working hours over the past eight centuries. Contrast the following figures with the standard that our efforts, regulations, and laws have produced, our current arrangement of 40 hours a week for 50 weeks a year, or 2,000 hours per year:

In the 13th century, male peasants worked long hours during the growing season, with 12-hour days common, but the annual work peaks totaled fewer than 150 days. Total estimate: 1,620 hours per year.

In the 14th century, casual laborers also worked long days, but tended to end their “working season” when their basic needs were met, after about 120 days. Total estimate: 1,440 hours.

In the 15th century, common contract labor rules expanded the work year, typically requiring 10-hour days for two-thirds of the year. Total estimate: 2,300 hours.

By the 18th century, industrialization had moved much labor into factory facilities that required more than 40 weeks per year, 7 days per week, 10+ hours per day. Total estimate: 3,200 hours.

This continued through the last half of the 19th century, when the growth of productivity, in combination with long employee hours and low wages, created vast economic surpluses that were concentrated at the top. In industrial nations extreme labor hours and poverty at the bottom commonly contrasted sharply with extreme wealth in the hands of a few, a “gilded age” that was golden only for a tiny percentage of the population.

It was during this period of industrialized excess, the late 18th and 19th centuries, that anthropologists began studying hunter-gatherer societies around the globe. At first they tended to regard such small “primitive” groups as following a minimal subsistence-level economy, one that required almost constant effort just to survive. The groups they visited were generally those that had been pushed into marginal environments with limited resources, regions containing little water, sparse vegetation and scattered animal life. As they improved their methods and scope, however, their more analytical and objective studies demonstrated that even these marginal groups enjoyed a significant amount of leisure time. Further research found that there had been many other pre-agricultural societies that not only provided survival levels of sustenance but regularly produced large surpluses of useful goods: food, tools, art, and elaborate socioeconomic knowledge. In many of these societies it was common to redistribute much of that excess through frequent sharing or communal feasts, exemplified by the “potlatch” events of the Northwest Coast cultures of what is now the United States and Canada and the “Big Time” celebrations still practiced among California’s Indigenous peoples. Anthropologists have come to the conclusion that the lessons gained from such societies indicate that Keynes’ estimate of 15 hours per week of productive work may not be unreasonable even without modern labor-saving tools.

In the late 19th and early 20th century, worker protests and strikes and progressive political movements responded to rampant industrial excesses by moving toward 8-hour days and 5-day weeks. The resulting 40-hour week is now a widespread standard but, unfortunately, is still unpopular among corporate leaders. As I write this, workers at Kellogg’s factories in the United States are on strike after being required to work 12-hour shifts for 7 days a week over a period of months. Forced overtime is also a factor in a walkout against a Frito-Lay plant in Kansas, and Nabisco workers in five states are striking for a contract that would prevent the company from moving manufacturing operations to Mexico, where regulations regarding wages, benefits, and hours are much less stringent. Computerized management tools are being applied to monitor worker activity to repress “unproductive motion” and, ostensibly, increase employee productivity. Conflicts over this type of micromanagement are also increasing, taking labor protests beyond the usual base issues of wages and benefits. All of this is happening despite the fact that corporate profits and stockholder returns are at record high levels.

The optimistic Keynes prediction is now being repeated by some entrepreneurs in self-serving statements about automation and the future of work. In a 2021 article the CEO of OpenAI suggested that once machines (inevitably) take over virtually all human employment, the AI industry will be so wealthy that it could bankroll a universal basic income to replace the system of paid work with paid leisure. Well, yes, I suppose it could. This, of course, ignores the current reality in which industries and owners are accumulating massive wealth without acting on any such generous impulses; instead, they continue lobbying for lower taxes, resisting higher wages and union organizing, and using technology to impose stressful and hazardous job requirements on their employees. Their intent is clearly to increase profits, not to share any of their vast wealth with anyone. Perhaps it really is time for government to enforce corporate generosity.

The United States should be paying attention to the trends in western Europe, where adequate wage structures and supports for unions have not only provided lower-level employees with incomes significantly higher than comparable jobs in the United States, but have allowed workers the freedom to push for alternatives such as shorter work days and four-day weeks. We should be considering minimum wages and benefits that would actually allow even the lowest-paid full-time workers to support themselves and their families without working more than one job, and should seriously consider such governmental social supports as inclusive single-payer health insurance and universal basic income payments. We have a very affluent society, an economy that could easily afford all of this, if only we would find a way to reverse our decades-long trend of economic maldistribution.


JIT Downfolly

A couple of decades ago a series of positive advertisements was featured on television networks and many cable outlets, almost wherever paid video could find a niche. The most memorable of them began with an image from the inside of what appeared to be a small storefront, the camera panning from the large plate glass windows and door inward across a wall full of empty shelves. At the end of the 30-second spot, following some stock images of container ships and delivery trucks arriving at night, those same shelves were filled with varied attractive merchandise. During those transitions a voiceover and brief on-screen phrases praised the benefits of something called “just-in-time” (JIT) logistics. The implication of all of the ads was that JIT was a new rational strategy that would improve efficiency and reduce costs in all aspects of commerce, especially at the end of the process, in retail stores, those locations commonly referred to as brick-and-mortar outlets.

Just to be accurate, though, JIT is not a new concept. Its origins go back at least to 1938, when the Toyota Motor Company began an effort to improve the alignment of its manufacturing processes with the suppliers of the parts that it would need. The goal was to order materials at the optimum moment: not so early that parts would have to sit on shelves while waiting to be installed, but not so late that the assembly line would have to be shut down until they arrived. With proper planning, it was argued, the costs related to both inventory storage and stoppage delays could be significantly reduced, along with any waste associated with last-minute design changes. The idea was widely adopted in the manufacturing sector, although the planning process was increasingly complicated by foreign outsourcing and, often, by the desirability of ordering parts from more than one source to reduce losses from unpredictable events that affected one supplier or one region of the world. Around the turn of the century, however, procurement philosophy turned in favor of single-source contracts and pricing concerns came to favor manufacturers in Southeast Asia.

The JIT promotional advertisements are long gone from television, but if you do an internet search for JIT you will find the positive-thinking detritus of that era, articles from business experts and pundits extolling the many benefits of precisely timing your orders for parts to match your manufacturing schedule, or putting in your request for the latest toy from the Chinese factory, the sole source, four months prior to Christmas. There is no possible downside in these logical songs of praise.

In the retail world there were related developments contemporaneous with full implementation of JIT. Those of us who have been “around a while,” as the saying goes, remember a time when the racks in a clothing store did not contain all of the items they had in stock. If you liked a pair of slacks but didn’t see it in the size or color you wanted, you would ask the clerk if they had the same thing in, say, a size eight. The clerk would then go into a back room and often come out with what you needed. A version of this arrangement still happens in many shoe departments, where you have to ask a salesperson to see a pair and try it on, but in most other stores that’s no longer the case; any back room storage space seems to be kept empty, if it hasn’t already been converted to floor display space. In most cases now, what you see out on the shelves and racks is all that they have available. This not only reduces back room storage, it also is credited with reducing the work force, as employees no longer have to leave the display floor.

Yet another related retail trend is what might be called “just-in-time employees.” In this new strategy retail management saves on labor costs by keeping its workers on call instead of in the store. When an unexpectedly large number of customers appears in the store, the manager will send a message and the employee will head to work. On the other end, when the store empties out, the manager will often tell employees that they are no longer needed. In this way, the store only pays workers for the period of time when they are actually at work, inside the store helping customers, not for the time they spend waiting for a call or traveling to and from the location. This clearly can save the company a significant amount in wages, but it can exact a serious toll on the employees’ ability to plan their lives, both in terms of scheduling and income stability.

The new realities of the years of the pandemic, 2020 and 2021, have changed logistics in ways that have exposed many of the flaws in all of these forms of JIT, flaws that also highlight the negative ripple effects possible in planning that doesn’t include allowances for unforeseen events.

Retail outlets and factories that have become acclimated to JIT ordering have been hit hard as one-month delivery schedules have stretched out to six months or longer. These delays have had multiple causes, including quarantine-related closures and employee absenteeism that have shut down factories and created backlogs in the loading and unloading of container ships and in cross-country train and truck transportation. Retail outlets with no local warehousing facilities have ended up with empty shelves and lost sales as they wait for the next just-not-in-time delivery of items that they ordered for JIT arrival. Shipping costs have also multiplied, rising to six to eleven times pre-Covid rates. In many cases, corporations have responded by planning to move many of their manufacturing operations back closer to their customer base, including into the United States.

As for JIT employees, that system always relied on a surplus of retail and warehouse workers willing to accept substandard working arrangements. The pandemic has changed that dynamic as many workers have dropped out or changed employers rather than accept jobs that offered minimal wages and benefits, uncertain hours, questionable (often restrictive or repressive) working conditions, and the threat of direct public contact and the attendant risk of Covid contamination. Some politicians have used the resulting labor shortages as an excuse for cutting back on unemployment benefits, but recent surveys have shown that as much as 90 percent of those employees who have quit their jobs are, in fact, individuals over 55 who have taken early retirement. The fact is, labor relations have shifted from an employers’ market to an employees’ market in an overall environment of very low unemployment.

The pandemic was not the sole factor in these changes. It simply exacerbated the effects of four years of supply-chain volatility resulting from the Trump administration’s tariff wars, higher-than-usual frequency of union-representation requests, strikes, and walkouts, and increasing numbers of climate-related natural disasters on multiple continents. Not quite a perfect storm of negative impacts, but significant enough to cause many corporations to rethink their wage structures and working conditions as well as the rest of the spectrum of JIT cost-cutting and operational efficiencies.

My response? It’s about time. Most of the employer-centered innovations of the past fifty years have been implemented with the sole goal of increasing shareholder returns. These include JIT inputs and scheduling, employee efficiency monitoring, wage controls, union busting, tax avoidance, outsourcing, industry consolidation, and skimping on the quality and effectiveness of products. As I have noted before, corporate strategy has long been too intensely focused on shareholder value at the expense of any other stakeholders, including the employees, customers, local infrastructure, and the geosocial environments of all of the above. Policy inputs from all those other stakeholders have been largely ignored. It’s about time that the power balance shifted back from profits to community values. If the Covid pandemic can help force such a reassessment, then perhaps there may eventually be at least one positive outcome of this global disaster.


Hair!

Visualize Iran’s Ayatollah Khomeini and Al Qaeda’s Osama bin Laden and Orthodox rabbis and ZZ Top and Duck Dynasty. What do these people have in common? If you’re thinking that I made that question too easy by including too many examples representing too many very disparate individuals, you are probably right. But that was, in large part, the point. The answer is evident from pictures and media comments regarding all of the above individuals, from artifacts and information familiar to millions. It should also be evident to anyone familiar with these examples that there are many different reasons why grown men would allow their facial hair to grow almost or totally unimpeded, to the point praised by the title song from Hair, the point where “it stops by itself.” Some of the above-named individuals have allowed their facial hair to grow to this point because of specific commands in their chosen sacred texts. As for some others, well, let’s just say that their reasons are likely not religious.

There is another commonality among the well-bearded men listed above. They are all followers, admittedly with differing levels of knowledge and personal commitment, of one or another of the traditions loosely grouped together as the Abrahamic religions. Their generous form of facial decoration is also a common feature among the founders of the major faiths in that tradition, notably Judaism, Christianity, Mormonism, and Bahá’í. Note that I have excluded Islam here, even though it does belong in the Abrahamic list, in deference to its strictures regarding depictions of Muhammad, much less his facial hair.

Abraham and Moses and the primary Judeo-Christian God are all generally depicted as having generous white beards and matching long hair, although these lengthy growths are often envisioned as neatly trimmed and combed and even wavy. Whether these characteristics appear or not, of course, depends on the preferences of the particular sect that is providing and venerating the image. I’m tempted to add to these prominent persons one other significant religious figure in modern European Christianity, the abundantly bearded Santa Claus. We must assume that the images of all of these men are meant to inspire reverence, placing them in the same category as aged family patriarchs who embody the desired qualities of experience, vast knowledge, and earned authority (in God’s case, this would include the related possession and sometimes arbitrary use of vast supernatural powers).

On the other hand, the alter-ego of the Christian God, Jesus, the much younger, mostly benevolent, version of his father, the one more likely to forgive than to punish, is depicted in most modern iconography with facial hair that is short, youthfully dark, and neatly trimmed. Jesus also most generally appears with European features, an apparent mischaracterization. But in contrast to his neatly trimmed facial hair, Jesus is usually pictured with the same full shoulder-length locks that God has, albeit in a darker and more youthful version. Here again, the long hair may be intended to denote wisdom beyond his years. Perhaps this is a variant of the Samson story in which long hair is considered concomitant with inordinate powers, in this case a different expression of strength.

Unlike their God and messiah, however, European and American Christian leaders are almost universally bare-faced and closely trimmed. This is clearly a denominational choice, a fact that can be demonstrated by comparing the bare faces of the Catholic Pope and most western Protestant leaders with the full minimal-trim growths preferred by the patriarchs of Eastern Orthodox Christian churches. Western Christian evangelists who regularly appear on television sport skin so well scraped and coated with foundation and concealers that they don’t even exhibit the common masculine flaw known as five o’clock shadow, nor do they display the evident heresy of neck hair that touches their collars. It’s almost as if they’re doing their best to remove all traces of their connection to our hairy evolutionary ancestors, a not-so-missing link they are always eager to deny. Perhaps rather than being examples for devotional imitation (as in Eastern Orthodoxy), the furry examples of the Father and the Son are merely considered historical aberrations.

So what are we to make of other tonsorial preferences and related evolutions? I am old enough to remember the years in which shoulder-length hair on males was considered unpatriotic, a “hippie” expression associated with opposition to the Vietnam War and/or the Establishment. It seems that masculine-style short hair on women was also considered subversive. Popular musical leaders like the Beatles and Petula Clark and “country outlaws” such as Waylon Jennings and Willie Nelson helped to change those attitudes. Today, beards and long male tresses, and the sometimes reviled female pixie cuts as well, are no longer considered rebellious or shocking or antisocial. That trend in itself can only be considered positive. We’re even making some progress in acceptance of a wide variety of more natural (i.e., not artificially straightened) hair styles preferred by black people, although that set of changes has required enforcement by legal actions.

I should note here that I personally sport a short beard, one that I keep at around a quarter of an inch in length. My head hair is also relatively short. I doubt that I will ever return to daily scrapings of my cheeks and chin with sharp edges, but also would never allow my facial hair to grow to the point where it would interfere with eating or become a temptation for nesting sparrows. I’m also not much impressed by the current fashion of semi-beards, the popular trims that are constantly maintained at a length that looks like two days’ worth of stubble. But I am liberal enough to believe that anyone should have the unquestioned right to trim the fur on their head to whatever length they prefer, including the formation of visible designs and letters and thin lines and isolated hedgerows. This is part of my broader philosophy that says that nobody should have to trim or shave or pluck or wax-rip whatever their body grows if they don’t want to, whether women or men, and in reference to any part of the body that contains and produces hair. Admittedly, I do have the feeling that applying hot wax to human skin and ripping it painfully away is a weird and perhaps barbaric practice, but if someone desires abnormally smooth skin and this is their preferred method of achieving that result, then I will raise no argument against it. Perhaps it beats the risk of cuts and razor burn in sensitive places. And well, yes, my biases are showing in those last two sentences, but that applies only to me. Go ahead, just as long as it is truly your decision and not that of “society” or “fashion” or “everyone is doing it.”

The broader reality is that body hair in many forms can be attractive. It is also a reminder of our kinship with the “lower animals” through evolution, animals such as the short and hirsute chimpanzees with whom we share some 97 percent of our genetic inheritance. The remaining three percent of our genome obviously carries an amazing amount of shape-shifting information, including, and far more significant than, those few genes that put dense concentrations of hair follicles on our skin, or not, and that cause those follicles to grow almost-invisible peach fuzz or a self-regulating layer of warm protective fur or lengthy tresses that can grow out to multiple feet in length. Obviously, even those few hair genes display large variations in the physical results they produce (their phenotype, for those who prefer the correct terminology). It is fortunate that we humans have responded by developing skin coverings, clothing of extraordinarily varied types, that help make up for what our genes have lost, often using what the follicles of other animals have provided. After all, discounting such examples as images of Lady Godiva or the story of Rapunzel, it is unlikely that we humans can produce enough of our own largely skull-based fur to provide adequate protection against either embarrassment or cold. Another phenotype within that three percent, our highly expanded brains, has helped us make up for that hair inadequacy.


Homeland Coup

In the United States we often comment unfavorably on the failures of democratic rule in other countries, the various insurrections and coups and corrupt elections, or the simple failures to transfer power from a losing administration to the winners of an election. We compare such breaks in the rule of law and citizen consent to the long-term continuity of most western European countries and, of course, to our own success with two centuries of peaceful electoral-driven rule. What we fail to recognize often enough is the inherent fragility of any democratic form of government, even in countries with a long history of successful rule.

That veneer of exceptionalism has been progressively stripped away in the past year as we learned more about the attempts that were made by the administration of President Donald Trump, and by his other minions, to hijack the 2020 presidential election and, after the fact, to reverse the inauguration of his successor. The tales of incompetence and subterfuge are multiplying, released by former Trump associates and journalists and provided in books and media comments by Stephanie Grisham, Michael Wolff, David Cay Johnston, and Bob Woodward and Robert Costa, among others. More will be released as the House Select Committee investigating the January 6th attack expands its hearings on the Capitol riot that attempted to halt the certification of the electoral college results.

The January 6th attempted insurrection was an extraordinary event, the first large-scale destructive attack on the home of our legislative bodies since the British Army burned it in 1814 and the only serious domestic attempt ever made to halt the peaceful transfer of power from one president to another. But the riots weren’t the only efforts made to block that certification. In the resumed Congressional procedure following the riots, more than 128 Republican members voted to reject the Biden wins in Arizona and Pennsylvania. We have recently learned that the Trump staff had created a proposal in which Vice President Mike Pence would refuse to accept the electoral results from seven states using a bogus argument that the state electors had been challenged by alternate teams. This would either give Trump the win outright or throw the decision into the House of Representatives, where Republicans controlled a majority of the state delegations. Fortunately, after Pence had consulted several knowledgeable experts (including former Vice President Dan Quayle), he decided not to go ahead with the Trump plan.

There were also lawsuits intended to reject electoral results. In the months following the November election several pro-Trump legal teams filed challenges in at least nine states. As Biden himself noted on January 7th, “In more than 60 cases, in state after state after state, and then at the Supreme Court, judges, including people considered ‘his judges, Trump judges,’ to use his words, looked at the allegations that Trump was making and determined they were without any merit.” Biden’s summary was correct. There were 63 cases and only one win, a minor ruling that slightly reduced the amount of time that mail-in voters in Pennsylvania were allowed to correct their ballots. In that one win the number of votes affected was only a small fraction of the number Trump would have needed to change the overall state outcome.

There were also audits and recounts in many locations and none of those affected the results. It soon became glaringly obvious to all but the most partisan Trump supporters that the 2020 presidential election was one of the most secure and accurate in history. On December 1st, President Trump’s Attorney General, Bill Barr, noted that, “To date, we have not seen fraud on a scale that could have effected a different outcome in the election.” In the book he published a few months later he said, more directly, that Trump’s continuing election story was “all bullshit.” As for the president-reject himself, he finally agreed that there would be an “orderly transition” to a Biden administration, adding a typical denial, “even though I totally disagree with the outcome of the election, and the facts bear me out.” That was as close as Trump ever got to a concession. In the meantime, Trump was calling Georgia’s Secretary of State, the man in charge of that state’s elections, asking him to find, somehow, somewhere, the exact number of pro-Trump votes to bring Georgia into his win column. We are fortunate that that official, a man named Brad Raffensperger, chose to follow the laws of his state rather than the demands of a powerful man who is still influential with Georgia voters.

For state election officials it wasn’t just pressure from the then-president. Elements within Trump’s Department of Justice were pushing for a broad investigation of charges of election fraud, work that would have included effective harassment of election workers across the country. If they had succeeded we could have seen a series of additional audits similar to the one that was completed in late September, after months of work, in Arizona. We may still see similar “fraudits” in other states as a result of decisions by GOP legislators, even in states that have already run official audits, despite the fact that the Arizona recount managed only to reinforce Biden’s win.

But the continuing threat was more than all of the above. There were Trump associates who were suggesting that the then-president could declare martial law to stop the transfer of power, and those who supported, even incited, the January 6th rioters and “domestic terrorists” may have done so in order to justify the imposition of military rule. Trump loyalists like Anthony Tata and Kash Patel were moved into key positions in the Defense Department after many previous civilian leaders resigned without explanation. That and Trump’s expressed attitudes led General Mark Milley, the Chairman of the Joint Chiefs of Staff, to become concerned that the then-president was planning a coup to stop the inauguration of President Biden. He noted that he had to be “on guard” for that possibility, and told journalists Carol Leonnig and Philip Rucker that “They may try, but they’re not going to succeed. You can’t do this without the CIA and the FBI. We’re the guys with the guns.” We may be fortunate that the military Chiefs of Staff and the leaders of those intelligence agencies refused to consider the wishes of their outgoing boss, the man who was still the Commander in Chief. In other countries, it is often the actions of military leaders that make antidemocratic coups successful.

The threat is hardly over. Legislatures in Republican-led states are passing new election laws that have several dangerous provisions. The first set of moves has created restrictions designed to make it more difficult for people who tend to vote for Democrats to register and vote. That is occurring in at least 19 states. The second strategy, in most of the same states, is redistricting, or more accurately, gerrymandering. If minorities can get past the new obstacles they’ll find themselves in districts in which they are a political minority. And the third GOP plan is even more anti-democratic. In Arizona and Georgia the legislatures have passed laws that would strip the Secretary of State and county election officials of the ability to oversee procedures and results, allowing the legislatures to replace traditionally nonpartisan actors with Republican-directed authorities. Other GOP-led states could soon follow suit. In more than twenty states the legislatures have also introduced bills that would limit the ability of judges to rule on election disputes. The danger there is that the GOP could simply overrule the will of the voters, expanding their power in the 2022 midterm elections and making it possible for Donald Trump to win in 2024. At that point the person rejected by the voters in 2020 would be in a position to use his presidential powers and the support of his political party to achieve his dream of making his presidency permanent.

Coups of this sort have happened in other countries, as they did (with our assistance) in Bolivia and Chile and Haiti and Honduras and Iran, among others. It could easily happen in the United States, too. This time the military and the FBI came down on the side of the law in support of President Biden (and of Governor Gretchen Whitmer of Michigan), so we have been fortunate in that. But we could fail to preserve our democracy if we don’t act now to protect nonpartisan control of elections and to expand voting rights and to reject the widespread lies about fraud.


The Lost War

It is September of 2021 and the United States has completed a final withdrawal from Afghanistan after spending almost twenty years attempting to create a new Afghan national government. The conflict has cost the lives of more than 3,500 soldiers from 30 different coalition countries, more than 66,000 members of Afghan military and police units, and more than a hundred thousand Afghan civilians. All of this ended approximately as it had begun, with the Islamic movement called the Taliban firmly in control of the government and a variety of smaller, more radical Islamist groups, among them ISIS and al-Qaeda, actively operating in the country.

Those results are disappointing at the very least. Those who had favored the war have had difficulties identifying any positive developments that resulted from this lengthy tragedy and its two-trillion-dollar price tag. And there are still political and military leaders who refuse to accept the end, who argue that the United States should have continued its involvement, its military presence in Afghanistan, for an undetermined future period. Many have eagerly assigned blame to the leadership of President Biden, whose administration engineered the withdrawal, or to President Trump, whose representatives negotiated the end to the war. But the reality on the ground is that for the full two decades, from the beginning in 2002, the Taliban had gradually been rebuilding its domination over rural areas, increasing its membership and forming alliances with regional warlords, the traditional Afghan rural leaders. Even before the coalition military forces began to pull out and the Taliban began retaking provincial capitals, it was obvious that the coalition-supported national government did not have effective control over most of the country. This end may have come faster than expected, but it was inevitable.

General Mark Milley, the Chair of the Joint Chiefs of Staff, noted that "In Afghanistan, our mission—our military mission—has come to an end… There are many tactical, operational, and strategic lessons to be learned." There is one problem: the larger lessons are the same ones we should have learned from our involvement in Vietnam, lessons that should have been reinforced by our knowledge of Russia's earlier attempt to build its own version of an Afghan national government. We should have been especially familiar with Russia's failure because we were partly responsible for it; we sent massive amounts of cash and weapons to the mujahideen and the warlords, the groups opposing the Russian presence and the Russian-supported Afghan government.

There have been many efforts to compare our withdrawal from Afghanistan with the withdrawal from Vietnam that ended with the fall of Saigon in 1975. Both events were at times chaotic, although the latter was quite a bit more so than the former. Both were considered an embarrassment to the United States, which prefers to portray itself as the most powerful military force in the world, virtually invincible. In both cases, we were wrong. The two conflicts were similar in many other ways. Vietnam and Afghanistan were non-war wars that bypassed the constitutional process in which Congress must officially declare war. Both were based on lies. Vietnam was authorized by congressional passage of the Gulf of Tonkin Resolution of 1964, an act based on a largely fabricated story of North Vietnamese attacks on the U.S.S. Maddox, and was sustained by the mythological domino theory of Communist expansion. The Afghanistan war was inspired by the Al-Qaeda attacks on the United States on September 11, 2001, but the arguments for a U.S. invasion ignored the fact that the Taliban had officially condemned those attacks and had promised to turn over the Al-Qaeda leadership to a third-party authority for prosecution. In both Vietnam and Afghanistan, the expansion of military and political involvement was unnecessary and of questionable value in both moral and practical terms, and we will probably never know the true motives that led to the invasions and their intensification.

The primary error of Afghanistan, however, is not that it was based on false public information or on unacknowledged motives. It is that it ignored the clear lessons of Vietnam. Following last month's Kabul withdrawal, a number of commentators have said, in essence, "at the beginning, nobody could have known that the invasion of Afghanistan would end like this." That is a false and self-serving message. The fact is that in October of 2001, as soon as the Bush administration began its verbal demonization of the Taliban and its targeted bombing of sites in Afghanistan, as soon as it suggested an invasion, it was warned against any such action by a variety of foreign-policy experts and retired military officers. There were also large anti-war protests throughout the world. Those of us in the anti-war movement knew that the war was a mistake, essentially another Vietnam, and immoral as well. The cautionary examples of the Vietnam war, Russia's Afghanistan disaster, and the British fiasco of 1842 were all raised. The administration of President George W. Bush was adequately warned. Those warnings were ignored.

A large part of the problem came from the continuing myth of U.S. military superiority. According to this construct, the failures of other countries were irrelevant. And in most conservative circles the example of Vietnam had long ago been dismissed through a creative revision of history claiming that the loss occurred only because politicians in Washington refused to allow the U.S. military to use its full powers, a variant of the MacArthur hypothesis regarding the (also undeclared) war in Korea. In this alternative mythology the war in Vietnam should have been won (again, the supposed invincibility of the U.S. military), and therefore there was no reason why the United States couldn't succeed where the Russians had failed. We can only hope that the end of the conflict in Afghanistan will provide a longer-lasting lesson in U.S. fallibility.

Perhaps we need to revisit the reality-based analyses that were made frequently after Vietnam and before Afghanistan. For one, it is a mistake to think that any country, however powerful, can long impose its will on another, especially on a country whose rugged terrain limits the traditional military approach of capturing and holding ground. In both Vietnam and Afghanistan, U.S. forces found it necessary to repeatedly clear locations they had previously captured, displacing an enemy that melted into the surrounding terrain and returned after they left. Worse, in both Vietnam and Afghanistan the U.S. was supporting deeply unpopular and notoriously corrupt national governments, and it pursued counterproductive strategies intended to depopulate and destroy rural hamlets, strategies that mostly created new opposition. The Viet Cong and the Taliban were not widely accepted either, but most residents of those countries viewed them as local participants rather than as puppets of foreign interlopers. Indigenous guerrilla forces will always have a significant advantage over traditional military units, especially if the latter are largely foreign.

These factors made it impossible, in effect, for the U.S. to "win" either conflict. To reinforce the lesson, we could add one more historical note: the British failure in Afghanistan in 1842 was not the first time their vaunted colonial forces had lost while facing all of the difficulties described above. Beginning in 1776 they had tried to defend their own unpopular "foreign" colonial system against a popular, home-grown North American enemy that repeatedly melted away into the surrounding population, only to return later. They lost that war, too, despite having a strong advantage in military power.

It is time to face the fact that the United States cannot continue to act as the world's police force and can no longer behave like a colonial power. We currently maintain more than 750 bases in 80 countries, with troops deployed in perhaps twice as many, and a military budget larger than those of the next ten countries combined. Perhaps it is time for the United States to finally absorb the lessons of Vietnam and Afghanistan, recognize what it cannot accomplish, and reduce its military presence around the world. And maybe, just maybe, we could use some of the money we save to help support the millions of refugees created by the wars in Afghanistan, Iraq, Libya, and Syria.


Just Say No?

As I write this, the United States appears to be at the beginning of the third major surge of the Covid-19 pandemic, or maybe the fourth or fifth depending on how you count such things, with the number of new infections having risen above 100,000 per day from July 30 on, up from a June 14 low of 8,069. There is as yet no evidence that new infections will start to decline in the near future, especially with the normal winter flu season yet to begin. The number of new daily cases is not expected to exceed the mid-January peak of more than 200,000, but the current caseload has surprised health authorities, who had not expected a new surge until October.

The question is why the pandemic has defied such expectations. It may be a characteristic of highly infectious diseases; certainly the 1918 pandemic had three major surges, two of them "off-season," in a pattern similar to Covid-19. But we also cannot ignore the fact that July's sharp increase in new Covid infections followed closely on the widespread relaxation of social controls that optimistically accompanied the low figures in June. In early July most states and municipalities eased or removed masking and distancing requirements, at a time when vaccination rates were still below 50 percent overall and well below that in regions where the populace is resistant to vaccination. The real question is why we expected any other result. Why would we not expect a "pandemic among the unvaccinated"?

The immediate problem is, and has been, that far too many people refuse to take part in the reasonable actions that have been shown to control the spread of disease. Admittedly, outright shutdowns of concerts and bars and restaurants and other venues where people congregate in large groups were extreme solutions, difficult for the economy, but such shutdowns have worked, in the United States and in many other countries. Less drastic options such as social distancing and wearing masks are more reasonable, and they were shown to have helped not only with Covid-19 but with the 2020-21 flu season as well. From September 2020 to May 2021 only 1,675 cases of “normal” influenza, and no deaths, were documented, a vanishingly small fraction of the usual seasonal numbers, in which tens of millions contract the flu each year and more than 20,000 die. Even distancing and masking, however, were met with serious, and often violent, opposition. That happens to be another commonality between Covid-19 and the 1918 flu; the spread of the latter was also aided by anti-mask protesters and political leaders who refused to halt large public gatherings. Even the president in 1918, Woodrow Wilson, refused to get the federal government involved in fighting the disease.

I have discussed the issues surrounding Covid-19 on a variety of social media platforms, from individual messaging to a neighborhood forum to Facebook. I've probably heard most of the reasons that people use to avoid wearing masks and to refuse vaccination. I've been told that masks are useless because nobody wears them correctly or because the virus is smaller than the pores in the mask fabric, or worse, that they promote diseases or create oxygen deficiency or weaken the body's immune response. Vaccines are apparently also horrible, and not just the anti-Covid type. They contain poisons or the remains of aborted fetuses; they can cause infertility or cancer or autism or autoimmune disorders or Alzheimer's. It is claimed that the Covid vaccines actually alter the body's DNA or cause the virus to mutate rapidly and become more dangerous. They also contain miniaturized computer chips that allow Bill Gates, or the Illuminati, or both, to track people across the globe. And government Covid mandates are infringements on the freedom of choice of the American people, unnecessary restrictions because the pandemic is really a hoax, covering up for symptoms caused by 5G phone service or devised to scare people into acceding to the social-control agenda of George Soros, or the Illuminati, or both.

Obviously, all of the above assertions are untrue. Or maybe it isn't quite so obvious, because a large number of people, perhaps more than a quarter of the U.S. population, believe enough of them to refuse to wear a mask or to get the vaccine. And some of them also protest. It's not just our country: there are currently riots in France in which largely unmasked crowds are taking to the streets to oppose the use of a national vaccination pass, the "pass sanitaire," as a requirement for travel, eating in restaurants, and other participation in public events.

Discussions of these issues, no matter where they occur, tend to follow certain patterns. In online exchanges about masks or vaccines I've found a higher than usual incidence of people attempting to back up their statements with citations of web sites. That doesn't mean there is any equivalence between those who favor mitigating actions and those who reject them; the differences appear in the types of evidence cited. When those in favor of mask use or vaccines refer to evidence, it is most often an article in a scientific journal or an unbiased news source describing findings from a scientific journal. In contrast, when anti-mask or anti-vaccine advocates provide a citation, it is almost always a video of talking heads, either a single anti-vaccine doctor like Geert Vanden Bossche or "Plandemic" promoter Judy Mikovits, or a small fringe group like the "Bakersfield doctors" or the hydroxychloroquine promoters who call themselves "America's Frontline Doctors." Videos of this type have gone viral in the past year. In the few cases when they don't cite videos, anti-vaxxers and anti-maskers supply links to articles from highly partisan sources such as Fox News or OAN, a vast echo chamber of web sites that often copy statements almost verbatim from one another, while their adherents ridicule anything from the mainstream media. Pro-mask and pro-vaccine advocates, by contrast, generally eschew videos in favor of written articles in peer-reviewed scientific journals or politically neutral sites that summarize them.

These sourcing tendencies are not limited to the pandemic controversy. It has often been noted that in order to deny the reality of climate change, or to deny that it is a human-created (and human-solvable) problem, you must reject the arguments from some 97 percent of the knowledgeable atmospheric scientists and their peer-reviewed documents in favor of a small coterie of mavericks. The same is true of those who deny evolution by rejecting almost all of the relevant scientific data.

The problem is that there is, at best, a broad ignorance of science and, at worst, a strong opposition to the pronouncements of scientific experts. Those who depend on video arguments demonstrate a failure to understand the importance of peer review and the vital scientific concept of reproducible results. Science is not just what results from speculative logic or Socratic monologue. It is, in practice and in fact, the result of observation and data collection that can be repeated.

There is also another disturbing element to discussions of the pandemic and climate change: the degree to which they mirror the modern societal divide between conservatives and liberals. On one side are groups that believe in the primacy of the individual and individual rights, those who, for example, refuse to wear masks because masking is a personal imposition they don't feel they need. On the other side are groups that stress the individual's dependence on, and responsibility to, society and the social fabric that supports our lives, those who argue that masks matter less for protecting the individual wearer than for preventing the wearer from infecting everyone else around them, just as vaccines serve both purposes.

It is even more extensive and troublesome than that. Individual “free choice” is being praised, in opposition to government health “mandates.” Unsubstantiated personal opinions are favored over the large-group expert-reviewed studies employed by science. Personal lifestyle preferences and corporate profit-based decisions are touted in resistance to the societal changes needed for environmental remediation and improvements. The demand by individuals to retain their own tax money argues against the need to raise money for government infrastructure, regulation, and social supports. The demands of individual investors to receive larger dividends trump a company’s social obligations to its customers and employees and their local communities.

In short, a wide range of individual and short-term preferences are being allowed to undermine the broad range of longer-term strategies needed to maintain society. Unfortunately, our experience over the past half-century, highlighted by the current pandemic, demonstrates that the modern conservative devotion to the individualistic Thatcherite doctrine, that "there is no such thing as society" and that "we must look to ourselves first," can be destructive to the very concept of modern civilization.


Good Guy Gun

Jack Bolle looked down at the dealer's display case. There, under the glass on a small white pedestal in the middle of the neatly spaced rows of other, lesser pistols, was the one he wanted. He had been to several local dealers to look at it, to ask to handle it, to cradle it in the palm of his hand, feel its heft, to hold it up in front of him in the two-armed pose favored by law enforcement and aim it at the wall, at the paper calendar and the posters, out through the window. It was a Glock, the 19 Gen 5, matte black with the latest high-traction surface texture, unmatched, they said, in hardness and resistance to damage and rust. An unmatched reputation. It was cool, figuratively and literally, a solid room-temperature object, but it warmed to his touch as he wrapped his palm around the grip, as his index finger caressed the trigger guard. He had read all about it on web sites and talked to the salesmen and listened to everything they could tell him. It had been recommended by carry1open.com and many other sites. But none of the photos and written descriptions could compare with direct physical contact. It was beautiful, a miracle of engineering and manufacturing. He knew he wanted it; he needed it.

It would be expensive, yes, as much as eight hundred dollars including the gun and the polymer thumb-release holster he wanted, and the ammo, and the tax, always the taxes. Government overreach. At least there was nothing they could do to stop him from buying it. And then he would have it. It would be his. He knew exactly where he would keep it, too. When he wasn't wearing it, it would sit on the shelf in the hidden cabinet he had built in the closet under the stairs, along with the other guns, the two he had inherited from his father and the others he had gradually added to his collection. But mostly the Glock was the one he would be wearing, whenever and wherever he went out. It would be his everyday defense weapon. He'd finally received his open carry permit; that had cost a bunch, too, what with the required training and all that, but it, too, was worth it. His wife didn't seem to think so, but by now she had stopped saying anything about it, so it was all good. They had enough income, and he had provided most of it. And, anyway, she spent a lot more money on shoes and clothes than he did, and there was that new vacuum cleaner she had wanted. That wasn't cheap, either. Now it was time for him again; it was time to buy the Glock. He filled out the application forms and handed them to the guy behind the counter, officially beginning the waiting period and the required background check. He would wait there at the display case a little while longer, until he could see that the Glock, his Glock now, had been moved from the case into the back room where they kept the weapons reserved for purchasers.

As Jack waited for the salesman to review his application form, his eyes briefly scanned the sporting rifles hanging on the wall behind the counter, especially the Wyndam CDI. That was also tempting: a semiauto, 5.56-millimeter Bushmaster-style long gun, sleek black, rapid firing, with a 30-round magazine. It would be fun at the range. But it wasn't something he could carry every day. That would be impractical. And it was more expensive, too; he didn't think he could get his wife to go for that, not quite yet. If he bought it, it would have to join his other so-called assault rifles, the old generic Diamondback DB-15 and the Smith and Wesson M&P15 Sport, all of them standing vertically on the rack in the closet. He had planned ahead, so there were three empty slots in there, room for future additions. The Wyndam would also take the place of the older rifles on the weekends he went to the range, once a month, at least for the first few months, until the novelty wore off and people there got used to seeing it. He could imagine it pressed firmly against his shoulder, the momentary bumps and sharp pops as he repeatedly squeezed the trigger.

It was only a week later that he finally brought the Glock home, and the day after that it was newly cleaned and readied, in the new holster and hanging from his belt, as he stood in front of the full-length mirror in the bedroom. He adjusted the belt to fit a bit lower on his hips and shifted it slightly backward, then forward again. It looked great. It was just as cool as he had imagined. He practiced a release-and-draw movement, a bit awkwardly at first, then again and again, watching the mirror until muscle memory took over and he had smoothed out the action. He smiled.

Two days after that, on a Saturday, he made a trip to the hardware store, the Glock holstered on his hip. As he had driven to the store and walked from his car to the door he sensed a new awareness, a heightened vigilance to everything around him, a readiness for rapid armed response to anything that might happen. On TV he had continued to hear, repeatedly, endlessly, about all of the crimes perpetrated by the bad guys in their city, the robberies and carjackings and assaults and revenge killings. The news was full of it. The police always arrived after the fact, too late. Now, at least, he would be prepared. He would be the good guy with a gun, prepared and observant and armed, ready to respond to any possible threat or assault with deadly force. He could avoid being a victim. He could defend others. That recognition reinforced his heightened sense of awareness, accompanied by a sort of adrenaline boost. He felt more alive, more energized, than ever. His eyes scanned the streets and parking lot around him, newly alert for any behavior that might be suspicious, anything out of the ordinary. His right hand slid down to rest on the reassuringly solid handle of the Glock on his hip. Yes, he was ready.

Inside the store, Jack noted the actions of others around him. For the most part, they would first look in his direction, then their eyes would drop to his waist, then they would look in another direction, then simply turn and walk away, veering off into a side aisle or leaving the aisle he had entered. For that reason he usually had a full aisle to himself. Out of the corner of his eye, as he looked at the products on the shelves, he spotted people who would stop at the end of the aisle, look in his direction, then move on. Nobody said anything, but it did seem that people were avoiding him. That was okay, he decided. They would soon realize that he was there to protect them, to keep them safer. They would learn to appreciate men like him. He soon found the paint and brush he needed to put a new coat on his storage shed, paid for them, and went home.

As the month went on, his experiences in other public places were similar, that is, in the locations he could enter with his weapon, the ones that didn't have "no gun" signs at the front door. There weren't many of those in his small town. One, of course, was the church they attended, but he always went there with his wife, and she had made it clear that she wouldn't feel comfortable leaving the house with him wearing the gun. In other places, when he was alone, he thought about ignoring the signs but decided, at least for now, not to confront anyone. He could leave the holster and its contents in his car; it wasn't a problem, even if he recognized that it left him unprotected and a bit nervous. Before he started open carry he had been concerned that some lib or other anti-gun nut might raise a stink at seeing him in public (he knew that some web sites had posted complaints about that), but nobody did, at least not overtly. It was mostly just avoidance, that and some momentary surprised expressions and pauses as people looked in his direction. No, nobody said anything. Gradually he even began to realize that his earlier heightened awareness of others had diminished. In fact, the overall intensity of his interactions with the world seemed to have decreased, and he was less and less conscious of the holster itself, noticing it only occasionally, as when his right hand brushed against it or when it bumped the center console as he slid into his car seat. It was, he thought, moving from being a life enhancer to being a minor inconvenience.

The solution, he decided, was to renew his awareness of the threat. Jack increased the time he spent on open carry websites, searching out stories of individuals who had successfully defended themselves or others with the weapons they had available. He could imagine himself in those situations, backing down a perp, maybe even firing a well-placed shot. At first he thought there were a large number of such incidents, but he soon realized that there was a lot of duplication, that many different web sites copied the same information. Still, the stories renewed his dedication for a few months. He again felt like a potential hero, a supporter of law and order. But then nothing happened. He never had any reason to pull the Glock out. The stories about gun owners foiling criminals were still out there, and new ones were added on occasion. Stories on the TV news about robberies and road rage and mass shootings were still there, too, but they always involved people and places he didn't know. Nothing seemed to happen around him.

It was not that his life was boring; in fact, it was mostly the same life he had led before he started open carry, but now it stood in contrast to that brief period of heightened awareness. There was also the continuing inconvenience of the weight on his hip, which got in his way when he wanted something out of his pocket or slid into his car seat. And there was the tendency, still noticeable, of people to avoid him in stores. More and more, when he went out, he didn't bother to get the Glock out of the closet. More and more, it seemed that it just wasn't worth it.


Diversity War

Recent Facebook meme: “America – My Ancestors Didn’t Travel 4,000 Miles for the Place to Be Overrun by Immigrants”

It is a continual and stupefying realization to me that a large percentage of self-professed "patriots" in the United States oppose such concepts as diversity, multiculturalism, and multilingualism. Their ideas are often expressed in opposition to immigration, or, as President Trump often noted, immigration from "the wrong countries," but also in statements decrying the loss or dilution of "American culture" or in demands that everyone speak English. Discussions and polls have shown that nobody is sure what, exactly, a definition of the common culture of the United States would include, but significant and vocal percentages of U.S. citizens believe that it would include belief in a Christian God, vague notions of "shared" northern European ideals, the ability to speak English, multi-generational family residence in the country, and support for the U.S. Constitution, flag, and/or national anthem. Accordingly, people who are not Christian, who speak English poorly or not at all, or who are, or appear to be, of non-European ancestry are regarded as suspect or illegitimate. Often such people are deemed unworthy of citizenship, of remaining in the country.

In the past few years this bigotry seems to have gotten worse. We've seen increasing numbers of physical and verbal attacks on Asian Americans, Middle Eastern Americans, Hispanics, Jews, Sikhs, Muslims, and others. Public demonstrations by White supremacy groups and Christian Nationalists have become more common and more blatant, joining and amplifying pundit messages about the potential loss of "American culture." In perhaps the most egregious demonstration of the ignorance behind such attacks, a Navajo state legislator in Phoenix was accosted by a group demanding that he leave the U.S. and return to his own country. A Navajo!

There are many things that could be said about such astonishing intolerance, but the most important base fact to begin with is that the United States has always been a multicultural and multilingual country. That is true despite our efforts to keep Africans enslaved, to chase Natives and Hispanics out of the lands we stole from them, to reject Catholic immigrants, and to send the Chinese and Irish and Mexicans and Italians back after they had completed the necessary and often backbreaking tasks we needed them to accomplish. Our country has had an unending history of accepting immigrants from virtually anywhere when we needed massive numbers of workers to build our economy, only to follow with backlash actions that attempted to “cleanse” our society of the “un-American” individuals and influences we had previously recruited.

What we so often fail to recognize is that those diverse peoples and influences have always been a significant net benefit to our country. To provide just one time-limited but very significant example of that benefit, as we approach the 80th anniversary of our official entry into World War II, I would like to list a few of the many ways that those "un-American" citizens, the ones we have so often unfairly tried to reject, helped us succeed against that war's threat to our country and to democratic government, often risking their lives to do so.

Start with a group that suffered from a dual deficit. During the war years hatred of Germans and Italians grew, and many were subjected to group internment under the revised Alien Enemies Act and Presidential Proclamation 2526, much like the Japanese Americans. A number were also victims of an older, more persistent prejudice: they were German Jews. One especially effective example was the Ritchie Boys, a group of recent European immigrants, including Jews, who had escaped the advance of Axis armies across Europe and who joined the U.S. war effort. They were trained to apply their knowledge of Europe and of German language and culture to collect useful intelligence from prisoners of war. As much as 60 percent of the actionable information about the enemy may have come from the efforts of this group. Prominent members included J.D. Salinger and David Rockefeller.

While we were recruiting refugee Europeans into the war effort there was one marginalized group at home that initially was ignored because they were considered unacceptable for combat: African Americans. The military had a pervasive policy of racial segregation. Despite that, more than one million African-Americans served in the war. Among these was the 761st Tank Battalion. The first Black tank battalion to see World War II action, the 761st played a significant role in holding back German forces in the Battle of the Bulge, spending 183 consecutive days in action. Other all-Black units also had prominent roles. One, the 969th Battalion, was later recommended for the Distinguished Unit Citation for its actions around Bastogne.

While Black infantry and artillery units were distinguishing themselves on the ground, an experiment by the Army Air Corps was proving remarkably successful: the creation of a group of pilots trained at Tuskegee University and at a variety of Army bases. The formation of the 99th Pursuit Squadron was supported by First Lady Eleanor Roosevelt. In 1943 the squadron received a Distinguished Unit Citation for its first combat action, the bombing of an Axis garrison on the island of Pantelleria, leading to its surrender in advance of the Allied invasion of Sicily. The 99th was later re-designated the 99th Fighter Squadron and, along with another Tuskegee unit, the 332nd Fighter Group, achieved an extraordinary combat record as escorts for bombing raids over Italy and Germany. Members of the 332nd earned 96 Distinguished Flying Crosses. On March 29, 2007, the Tuskegee Airmen were collectively awarded the Congressional Gold Medal by President Bush and the U.S. Congress.

In the Pacific theater, the Marine Corps generally resisted using African-American units in combat; instead they were assigned supportive tasks in Ammunition and Depot companies. Working on small islands occupied by a stubborn and often hidden enemy, they inevitably ended up in active fighting. After learning of their courage and spirit, Lieutenant General Alexander Vandegrift, the commandant of the Marine Corps, noted, “The Negro Marines are no longer on trial. They are Marines—period.”

Japanese Americans were yet another group that was, like Germans and Italians, initially subjected to internment at the beginning of U.S. involvement in World War II. Soon, however, the Army decided that they, too, were a resource that could not be ignored. The 442nd Regimental Combat Team, formed almost entirely of second-generation (Nisei) volunteers, fought in Italy and France. It became the most decorated unit of its size in U.S. military history, earning more than 18,000 awards in less than two years. Twenty-one of its members were awarded the Medal of Honor.

Many Japanese Americans also volunteered for the U.S. forces as field translators in the Pacific theater and used their language skills to gather intelligence from prisoners of war and from messages they decoded. The Allied war effort against Japan was aided significantly by the useful information provided by these men. It’s undeniable that German- and Japanese-speaking citizens and immigrants were of immeasurable benefit in anticipating and countering the movements of Axis forces.

Finally, there is one other minority group whose efforts should be recognized. Members of Native American tribes used their distinctive languages to create unbreakable codes to transmit plans and intelligence on radio communications that could otherwise have been intercepted by the enemy. Code talking had been pioneered by Cherokee and Choctaw speakers during World War I. During the Second World War there were members of the Lakota, Meskwaki, Mohawk, Comanche, Tlingit, Hopi, Cree, Crow, and Navajo serving on all of the war fronts. These men were most often assigned to front-line combat units and paired with radio operators, one of the most dangerous infantry assignments in the war because they were specifically targeted by enemy snipers. Code talkers made it possible to rapidly transmit useful information from the front with virtually no likelihood that the enemy could decode it. The relatively large number of Native languages and limited knowledge of them outside the United States made it all possible.

This focus on World War II is not intended to marginalize the many other contributions that these or other minority groups have made to the United States in its relatively short history. We tend to focus on the English colonists, but our country began as an amalgamation of indigenous peoples and immigrants from many countries, and that vaunted national culture that conservatives want to preserve is an indivisible, unique mixture of their traditions and contributions along with those of northern Europe. For the continued success of our government and economy we are indebted to all of the many and varied residents, recognized and unrecognized, documented and undocumented, whether they speak English or not, whether they look like northern Europeans or not, whether they arrived 20,000 years ago or in 1650 or just last week. They all deserve to be here and to be recognized as full citizens and colleagues.
