What Ceiling?

As I write this it is May of 2023. A Democrat is president of the United States and the Republican Party has recently gained a narrow majority in the House of Representatives. Unsurprisingly, the odd situation called the debt ceiling crisis has surfaced again. Deja vu, once more. Perhaps certain members of Congress can’t remember any historical events that occurred more than ten years ago, or perhaps their ideological framework, or the wishes of their financial supporters, won’t allow them the luxury of learning from the past, but we have re-arrived at an impasse very similar to one that almost brought the U.S. economy to its knees back in 2011. President Biden and the Republican leadership are reprising a scenario almost identical to the one that the Republicans led us into with President Obama (and, notably, then-Vice President Biden). But even that all started quite a bit further in the past.

Thanks to spending on the Revolutionary War, the United States was in debt from its inception. But there was no debt limit until the First World War. In 1917 a significant proportion of Congress opposed the scale of spending required to create a military force capable of fighting a modern enemy. What was required, in fact, was a compromise; the first of many. The only way to get enough support in Congress for the war legislation was to put an arbitrary limit on the total amount that could be borrowed, and so the first debt ceiling law was born. This limit on indebtedness is not in the Constitution. It was a creation of Congress, the branch of government that the Constitution charged with deciding on federal spending, and the only way to push the debt higher under that law is through another act of Congress. It is an odd arrangement, however, because Congress can continue to pass legislation that causes federal spending to increase well beyond the debt ceiling. It does so with great regularity, yet the executive branch cannot actually spend the money appropriated by Congress unless Congress subsequently raises the limit. Congress has done exactly that more than once a year, on average, over the past five decades, and for the most part with minimal problems. They’ve done so because failure would carry serious consequences: the United States would be unable to pay expenses that Congress had already incurred, including scheduled payments to contractors, Social Security recipients, Medicare clients, and bondholders, foreign and domestic. Any default would immediately cause the country’s credit rating to decline significantly and could drive future creditors elsewhere. Because current creditors are worldwide, it could also trigger an international monetary crisis.

Congress was relatively reasonable about the ceiling until the 1992 election of President Bill Clinton. In the 1994 mid-term election the Republican Party gained a majority in the House of Representatives for the first time in 40 years. They decided to use that win to force significant reductions in federal spending, specifically spending on social programs. Clinton, along with almost all Democrats in Congress, disagreed. That led to a 1995 standoff over the federal budget and, in turn, to what was then the longest government shutdown in history, a 21-day disaster. The next year the GOP leadership followed up by demanding major budget concessions before they would agree to raise the debt ceiling: $245 billion in tax cuts, restraints on Medicare and Medicaid growth, significant changes in welfare programs, and a balanced budget within seven years. Clinton compromised a bit, and before a default could occur Congress passed a bill that raised the ceiling, eased regulations on small businesses, and made slight changes in taxes.

Democrat Barack Obama was elected president in 2008 with a Democratic majority in Congress. Then, in the 2010 mid-term elections, the GOP, assisted by a new and more radical anti-deficit faction called the Tea Party, gained a 49-seat margin in the House of Representatives. Not long after that the debt again approached the ceiling. As in 1995, the new House majority demanded major concessions in social spending. It was estimated that if the ceiling were not raised, the U.S. Treasury would exhaust its borrowing authority by August 2nd. On July 31st, two days before that deadline, a deal was reached in which the ceiling would be raised in exchange for a complex package of future spending cuts. Despite this agreement the financial markets declined dramatically, and Standard & Poor’s downgraded the credit rating of the United States for the first time in history. The Government Accountability Office (GAO) estimated that the delay in raising the debt ceiling would increase federal borrowing costs by $1.3 billion in 2011 alone.

In their compromise, Democrats and Republicans agreed to the Budget Control Act of 2011, in which Congress promised to cut spending by $1.2 trillion as it raised the debt ceiling by some $900 billion. There was also a backup plan: if Congress failed to agree on how to cut the budget, an automatic sequestration would begin. By 2013 Congress had failed, and the backup plan cut the budgets of all federal agencies by enough to make up the $1.2 trillion in savings. The reductions didn’t apply to Social Security, Medicaid, veterans’ benefits, or civil and military employee pay, but the other effects were a miserable experience for the many federal employees who were laid off and for the citizens who depended on their work. Conservative estimates put the cost to the larger economy at more than 700,000 jobs. Yet Republicans seem to have learned nothing from any of that, to the point that they are clearly willing to make the same mistakes a third time now, in 2023.

Who is willing to do that? On April 27 almost all of the Republicans in the House voted for a debt ceiling bill tied to massive spending cuts. On May 5th, 43 Republican senators signed a letter opposing any increase in the debt limit without such cuts, enough to sustain a filibuster against any other debt ceiling bill. Note that whenever Republicans complain about “excessive” federal spending, which they do only when a Democrat holds the presidency, they conveniently ignore one form of federal spending that accounts for a significant portion of the national debt: the revenue forgone through the “tax reform” laws passed under presidents Reagan (1981 and 1986), G. W. Bush (2001) and Trump (2017). All of those bills increased the deficit and the debt significantly; by some estimates, if the Trump tax cuts had never been passed it would not now be necessary to raise the debt ceiling at all. In the face of the GOP’s single-minded opposition, President Biden has only a few options:

The first is to accept the GOP demands and go ahead with the massive reductions in social services and infrastructure spending contained in the bill passed by the House. This is unacceptable, as it would cause suffering for millions in lost jobs, healthcare, and federal services, and could push us into a lengthy recession. Not only that, but the Republican Party would learn, once again, that hostage-taking works, and would be encouraged to repeat the threat in the future.

The second is to refuse to accept anything other than a clean, single-purpose bill to raise the debt limit, with the likelihood that the GOP would refuse and allow the nation to fall into default. That, too, could severely damage the national and international economy, not to mention the standing of the United States as a world financial leader.

Third, the U.S. Treasury has the legal power to mint platinum coins in any denomination. It could create two such coins, designate their value at one trillion dollars each, and deposit them in the Treasury’s account at the Federal Reserve (the Fed). The Treasury could then draw on that account to pay its obligations. A similar option would have the Fed purchase an option to buy government properties for two trillion dollars. Later, after Congress lifted the debt ceiling, the president could buy back the options or the coins. In the meantime the government would avoid any chance of default. Both of these options were discussed back in 2011 but were not taken seriously, despite being very similar to the quantitative easing that the Fed used to bail out the private sector after the 2008 recession.

Fourth, President Biden could declare that Section 4 of the 14th Amendment to the Constitution makes the debt ceiling law unconstitutional. That section states, “The validity of the public debt of the United States, authorized by law, including all debts incurred … shall not be questioned.” In the 1935 case Perry v. United States the Supreme Court held that Section 4 prohibited a current Congress from abrogating a debt contract incurred by a prior Congress. In other words, it’s neither necessary nor proper for today’s 118th Congress to reject or approve a debt limit; previous congressional sessions have already considered and voted on the current financial obligations when they passed the legislation which called for those expenditures. After that, it becomes the job of the executive branch to honor the previous laws and spend the amounts required. Under this interpretation, it is the president’s duty to ignore the debt ceiling.

If President Biden were to follow this strategy, he and his administration would undoubtedly be taken to court, a process that could take years to adjudicate. If, in the end, the Supreme Court ruled against using the 14th Amendment in this manner, President Biden would still have significantly delayed the default and the kinds of destructive events or legislation that could have occurred in June of 2023. If, on the other hand, the Supreme Court ruled that the congressional debt limit law is, in fact, unconstitutional, not only would the current crisis be resolved, but a recalcitrant majority in one house of Congress would never again be able to hold the “full faith and credit of the United States” hostage to force the enactment of unpopular or destructive budget options. To avoid the errors of the past and to stop misuse of the debt ceiling in the future, it’s time to give the 14th Amendment a try.


Executive Risk

In the United States a great deal of the stability of the government is credited to what is commonly called the balance of powers, the system in which each of the three major branches can put limits on the others. This is usually discussed at the federal level, but it is to some degree true of state and local governments as well. To briefly review: the legislative branch, consisting of the Senate and the House of Representatives, has the power to write and approve laws. The executive branch, the president and the many federal agencies, can veto those laws but also has the job of implementing and enforcing them. The judicial branch, the federal court system headed by the Supreme Court, interprets the laws and can reject them outright or limit their application. In all of these actions, especially in the writing, implementation, and interpretation phases, there are many nuances that can enhance, frustrate, or negate the original intent of Congress.

In recent decades this tripartite balance has been distorted by a significant growth in the power of the executive branch, particularly the powers of the presidency. A phenomenon that has come to be called the “imperial presidency” has evolved. It has been decades in the making, gradually expanding executive power despite a few significant reversals. The effectiveness of the president has at times been temporarily enhanced by the popularity of charismatic individuals, especially during the administrations of Franklin Roosevelt and Dwight Eisenhower, but those instances are not part of this trend. What is involved are developments such as the increased presidential use of executive orders and signing statements, arbitrary implementation of executive agency powers, and the misuse of vaguely worded military resolutions passed by Congress, especially those that relinquished the constitutional principle that the legislative branch alone has the power to declare war.

There have been occasional bursts of excessive executive power in the past. Looking briefly at the more distant historical record, federal punitive actions rose significantly during and just after the First World War with the passage and zealous enforcement of the Sedition Act of 1918. More than ten thousand people were arrested, and many deported, in the raids carried out by the Department of Justice under Attorney General A. Mitchell Palmer. The most prominent victim of this period was Eugene V. Debs, a labor leader who was arrested on June 30, 1918, two weeks after making a speech in which he opposed the military draft. Debs ran for president in 1920 as a third-party (Socialist) candidate and received almost a million votes despite being in prison at the time. The Postmaster General also blocked thousands of periodicals from distribution through the mail because they had published anti-war or similar articles. These actions disrupted the lives of millions of people and severely curtailed public discussion of many topics. Congress repealed the law at the end of 1920, along with several other wartime laws.

The next major increase in executive branch power occurred during World War II. This was the result of wartime unity and the extraordinary popularity of Franklin D. Roosevelt, voter support that helped Roosevelt gain large majorities in both houses of Congress and (eventually) among the justices of the Supreme Court. Strong popular support for presidential actions carried through after that war into the Korean conflict and the cold war under the Truman and Eisenhower administrations. In those years the relatively high level of executive power was largely informal, less a matter of legal or bureaucratic overreach than of broad policy agreement between the president and Congress and, importantly, high levels of support among the public. Any such popularity, based as it was largely on current events or personal charisma, would be hard to maintain over time. The inevitable decline was accelerated in the post-Eisenhower decades by a series of adverse developments, beginning with public rejection of the anti-Communist excesses of Senator Joseph McCarthy and of the House Un-American Activities Committee and continuing through the war in Vietnam, the Watergate scandal, and the 1974 Nixon resignation. By the mid-1970s public trust in government had not yet reached bottom, but it was in steep decline. In 1973 Congress passed the War Powers Act to severely limit a president’s ability to deploy military forces without specific approval. A year later it put new limits on presidential budgetary maneuvering through the Congressional Budget and Impoundment Control Act.

Efforts to restore presidential powers were perhaps inevitable, and they began with the election of Ronald Reagan in 1980. To begin with, Reagan filled the top levels of many federal agencies with leaders who actively opposed the intended missions of the organizations they headed. His first Secretary of the Interior was James G. Watt, a strong anti-environmentalist who promoted commercial exploitation of public lands by the oil, mining, and timber industries. Other oddly antagonistic appointments included his first Secretary of Labor, Raymond James Donovan, a businessman hostile to organized labor who significantly reduced the size of his agency and its regulatory powers; Secretary of Education Terrel Bell, who planned to dismantle the Department of Education but was forced to recognize that any such action would require legislation; and Anne Gorsuch, who as administrator of the Environmental Protection Agency severely reduced its staff and refused to enforce many EPA regulations. This kind of policy obstructionism became common in Republican administrations from 1980 on.

President Reagan also ignored the provisions of the War Powers Act when he initiated the 1983 invasion of Grenada, not bothering to consult with congressional leaders. Congress largely ignored this, and in 1986, by a nearly unanimous vote, passed the Goldwater-Nichols Act, which unified and streamlined presidential control of the armed services. But the ultimate legislative abandonment of constitutional duty came in 2001 when, in another near-unanimous vote, Congress passed the Authorization for Use of Military Force (AUMF), a vaguely worded resolution with no expiration date that allowed President George W. Bush to begin an all-out war in Afghanistan; a separate authorization, passed in 2002, did the same for Iraq. The 2001 AUMF has provided subsequent presidents with the power to continue and even expand military operations in the so-called “war on terror” without the official congressional declaration of war required by the Constitution.

Over the most recent five decades presidents have not only usurped the congressional power over military actions, they have also reduced the legislative branch’s role in writing and passing laws. This has been done through increasingly frequent and intrusive use of signing statements, in which the president signs a bill passed by Congress but attaches a written directive specifying which parts of it the executive branch actually intends to enforce. Such attachments are nothing new, and have been justified by the wording of Article II, Section 3 of the Constitution, which says the president “shall take Care that the Laws be faithfully executed.” At least, that is the odd interpretation of the clause favored by the Supreme Court in its 1986 decision in Bowsher v. Synar. The Department of Justice (DOJ) has also consistently advised presidents that they have the authority to refuse to enforce laws they believe to be unconstitutional. The use of signing statements increased after a DOJ staff attorney, Samuel Alito (yes, the future Supreme Court justice), provided a memorandum promoting the use of “interpretive signing statements” to “increase the power of the Executive to shape the law.” In other words, the president would attempt not only to ignore elements of newly passed laws, but to influence the passage of future laws. He would do this not by an outright veto, which would risk public censure and a congressional override, but by quietly refusing to perform his executive functions.

President Reagan was the first to use this broad new “interpretive” approach, and he included in some of his statements the claim that Congress cannot pass a law that undercuts the constitutional enforcement authority of the president, a view of executive authority that also clearly supported his obstructive appointments to leadership positions in federal agencies. George W. Bush repeated similar phrases as he signaled his intent to bypass three-quarters of the more than 1,000 provisions he objected to in 161 bills that he signed. He was not alone in using the strategy, but he did set a record for written objections during his eight years in office. President Barack Obama opposed signing statements, yet still used them 37 times, referring to 122 provisions of laws. After that, President Trump resumed the Bush extremes, objecting to 716 provisions of laws in 70 signing statements in only four years. So far President Biden has reversed that trend, following his former boss’s example and averaging only about three signing statements per year.

The expansion of executive power has therefore been variable, and the election of President Biden in 2020 signaled a partial reprieve. Even so, the Trump presidency provided some indication of what could be possible if the executive branch were not staffed by people devoted more to the rule of law than to the rule of one president. Scattered among the many instances of excessive Trump signing statements were other questionable actions, including appointments of unqualified cronies to high federal positions and repeated avoidance of government transparency rules. Trump frequently destroyed official documents and conducted business meetings without the record-keeping required by the Presidential Records Act. He also often held phone calls and text conversations on unsecured private devices. Worse yet, he had plans to replace federal civil service protections with a new system that would have allowed him to remove any “disloyal” employees, the so-called “deep state” with its bureaucratic inertia, at any time. That would have allowed him to create agencies completely subservient to his personal agenda. In short, he was well on the way to assuming semi-dictatorial powers. And such dangers ramped up to extreme levels in the wake of the 2020 presidential election.

After he lost the 2020 vote, President Trump used the implied power of his office to pressure local officials and the military to manipulate the vote results in many states, and he explored having voting machines seized in precincts across the country. Attorney General Bill Barr resigned on December 14, 2020, after disputing Trump’s claim that the election was fraudulent. Two weeks later Trump asked the acting Attorney General, Jeffrey Rosen, to “just say the election was corrupt” and to reject the results. When Rosen refused, Trump tried to replace him with a loyalist who would follow his instructions, a plan that was blocked only when almost all of the top DOJ officials threatened to resign en masse. In an Oval Office meeting that December, Trump discussed options for imposing martial law to allow him to stay in office. Finally, and infamously, Trump attempted to persuade Vice President Pence to reject the electoral college votes of seven states that had voted for Biden, while his representatives were attempting to create and register alternate pro-Trump slates of electors to replace those pledged to Biden. When those attempts failed, he did his best to organize and energize the insurrectionist mob that took over the United States Capitol and temporarily halted the joint session assembled to certify the results of the election. In short, Trump wanted to combine semi-dictatorial powers with an unending reign that no longer had to submit itself to voter approval.

In effect, only two things allowed us to retain democracy in the United States after 2020. One was the clearly expressed will of the voters in the 2020 election. The other was the stubborn insistence of many government officials on following the rule of law, rejecting the pressures applied by the president and his devotees even in the face of death threats. To go on from here, to learn our lesson and avoid future disasters, we need to recognize and oppose the steadily growing power of our “imperial presidency” and rebuild the balancing strength of the legislative branch. We also need to improve the rules and regulations that govern our elections and support the people who oversee them. After all, the ultimate power belongs with the will of the voting constituents.


Poverty Memes

In the United States we are conflicted about poverty. When referring to poor people we rely on two over-generalizations. On one side we have what might be called the concept of the “noble poor,” such as the hero-protagonists of John Steinbeck novels; on the other we have victim-blaming tropes in which we assume that if someone is having financial problems it must be their own fault. Under Franklin Roosevelt and Lyndon Johnson we developed anti-poverty programs that were redistributive, aimed at correcting what was viewed as an unfair imbalance under which a quarter of the population was unable to earn an adequate income. In less than one decade (the 1970s) we replaced that with a neoliberal philosophy and ushered in the austerity regimes of Ronald Reagan and Bill Clinton, promoting images such as the unmotivated slacker and the urban welfare queen. Throughout both of these divergent periods we also lauded rags-to-riches stories, a tradition that goes back well beyond Horatio Alger and Charles Dickens and has been sustained more recently by the stories of celebrities such as Oprah Winfrey and J.D. Vance. Unfortunately, by praising such heroes we reinforce the reproachful notion that those who have not succeeded have only themselves to blame.

The problem may be that we love our stereotypes too much, that we are a bit too eager, or too lazy, in attaching ourselves to simplistic explanations and to broad groupings of people based on superficial characteristics, whether those people are minorities or millennials or women or the poor. Often, when we switch our impression of such a group, the only real change is in our focus. During World War II somewhat more than half of U.S. families were headed by single women, generally working mothers. Ever since then, that 1940s generation of women has been lauded as essential workers who provided “Rosie the Riveter” services on the home front. Only four decades later, in contrast, public alarms were being raised about what was interpreted as a dangerous increase in female-headed families, from 10 percent in 1950 to 14 percent in 1980, with almost all of those mothers holding jobs outside the home. That supposed trend toward “abnormal” families was regarded as a sign of social dysfunction. The female-headed household was included as one of the primary elements of the “culture of poverty” that sociologists and policy analysts blamed for keeping family incomes below adequate levels and for perpetuating dysfunction over multiple generations.

Admittedly, there are two significant differences between the single mothers of the World War II years and those of the 1980s and later. The first is that most of the wartime families had incomes coming in from the absent fathers who were away fighting. The second is that the federal government provided heavily subsidized day care for the mothers who took factory jobs. Both of those factors together meant that few female-headed families during the 1940s were living below the poverty line. Many even had better and more stable incomes than they had in the 1930s. The same cannot be said about the similar families of the post-Reagan era. In fact, if there is anything that perpetuates poverty in recent decades it is the combination of the absence of preschool child care and the lack of adequate family income. Even after the children of these families enter first grade in public schools their educational progress is diminished by inadequate nutrition and a home life in substandard housing. These factors have continued to make it difficult for children to escape the early economic deficits in their lives.

The “culture of poverty” theory is an odd extension of the way we tend to blame individuals for their failure to prosper, a common stereotype about the poor. The academic version was initially developed by anthropologist Oscar Lewis and first published as documentation of his 1948 study of families in Mexico. He listed 70 characteristics of poverty culture. Some of these, however, seem to be logical responses to lived experience rather than “cultural” elements. It might, for example, be reasonable for people living in poverty to exhibit fatalistic attitudes, feelings of helplessness, and mistrust of dominant social institutions. Other characteristics on his list are environmental realities that are clearly beyond the power of individuals to control, including poor housing, overcrowding, high levels of dysfunction and crime, and neighborhood social systems that are minimally organized. A large part of the problem with the culture of poverty theory is the use of the word “culture,” which implies that the characteristics are ingrained faults of poor individuals rather than flaws of the socioeconomic system.

As an adult I have had some personal experience with systemic impacts. In the mid-1970s I spent one summer as an intern for the Environmental Protection Agency. My first attempt at finding short-term housing in Washington, D.C. landed me in a rooming house on a side street less than a quarter mile from the broad, affluent boulevard known as Embassy Row. Despite its proximity to that posh strip, the neighborhood I inhabited consisted of aging Victorian structures, poorly maintained rental properties subdivided into multiple small apartments. The room I occupied was dingy and infested with cockroaches and bedbugs, with inadequate lighting and low water pressure. To do the reading and writing I needed to complete after working hours I would walk three blocks to the Hilton Hotel on Embassy Row and sit in its air-conditioned lobby. On the streets near my lodging I often passed filled black trash bags that remained in place for the full two weeks I was there.

Then I was offered a temporarily vacated room in a new condominium in a different part of the city. That building was new and clean, with functional utilities. In that neighborhood any trash bags placed on the curb disappeared within a few hours, an indication that the dominant social institutions functioned correctly there. If I had been one of the long-term residents of the previous neighborhood—if I had known that that housing and neighborhood would be my only option for years—I might have developed fatalist attitudes and mistrust of dominant institutions, but I wouldn’t consider those as being part of my personal or group culture. As a child I grew up in an old overcrowded house in a working-class neighborhood and in a family headed by a single working mother. I don’t believe that I or any of my siblings were afflicted with “cultural” attitudes that prevented us from escaping poverty.

Unfortunately, the concept of a culture of poverty has dominated public policy for more than fifty years. In economics, a similar framework exists under the title of the cycle of poverty, or the poverty trap. This concept is often used to explain why anti-poverty programs fail; the poor, it is argued, not only lack resources but are defeated by a self-perpetuating value system. Never mind that poverty itself is an open category from which most families escape within a few years. Never mind that other developed countries have succeeded in providing anti-poverty social safety nets. Never mind that our own nation actually reduced poverty significantly under the New Deal and President Johnson’s Great Society programs, incomplete as they were, and that the percentage of poor families only increased again in the 1980s, after President Reagan slashed federal social spending. Under President Clinton even more restrictions were added to the already reduced federal welfare supplements: work requirements were imposed and states were put in charge, further blunting the anti-poverty effects of the new Temporary Assistance for Needy Families (TANF) program. The modern failures of public policy to remedy poverty were intentional, justified by economic and social philosophies that blamed poverty on the attitudes of the poor rather than (properly) on the innate characteristics of a corporate economy and government policies that support inequality.

In the 2000s the pendulum seems to be swinging back. Blaming the victims remains a common response to poverty, but public programs have shifted somewhat toward providing resources for people trying to survive on below-poverty incomes. Due to congressional resistance the TANF program has not been made less onerous, but alternative methods of providing supplemental income, with minimal bureaucratic red tape, have been created, including tax credits for families with children and proposals for subsidized child care. Some jurisdictions, including Los Angeles, Stockton (California), Chicago, and parts of Canada, have been experimenting with monthly payments to needy people, a form of what is known as a Guaranteed Basic Income. Providing universal and automatic income supplements rejects the older welfare philosophy that required people to prove they deserved public assistance. It is also proving to be a simple and effective tool for reducing poverty and the related levels of human suffering, without blame or shame.

Posted in Politics, Sociocultural

Community Aid

The system of local irrigation in New Mexico contains a lengthy network of canals, most of them small and dirt-lined, maintained annually in a system that was developed over many centuries. In a few places there are larger government-maintained diversion canals some ten to twenty feet wide, most of them parallel to the few rivers with available water, but these connect with many much smaller waterways to provide water to homes and farms. Imagine a ditch five or six feet deep and somewhat wider, running through residential neighborhoods or around fields, with locked water gates every fifty or a hundred feet, gates that can be opened to divert some of the water off through an underground culvert into small fields of beans and chiles and corn and other vegetables. Then imagine it and many others like it criss-crossing the wide flood plain of Albuquerque or the sparsely settled rolling hills and dry valleys in the northern half of the state. In many locations it is difficult to take a recreational walk of two or three miles without coming across at least one of these. This is the system of the acequias, a southwestern tradition hundreds of years old.

Acequias are found under the same name in Spain, but the original irrigation ditches in New Mexico and Arizona were created and used by the aboriginal residents, the many Puebloans and the Pima and the Tohono O’odham and other groups, long before the first Europeans arrived. They made it possible for agriculture to expand beyond the few available water sources: not only the major rivers such as the Colorado, the Salt, and the Rio Grande, but a variety of smaller spring-fed creeks that flow primarily in the Spring and early Summer and, of course, the dry washes that see surface water only when it rains. In a landscape that gets less than ten inches of rain scattered throughout the year, such irrigation methods are vital. When the first European settlers arrived they quickly developed their own water distribution systems and local organizations for annual ditch maintenance, arrangements that have now been operating continuously for multiple centuries. The original settler ditches have been modified somewhat, but they survive with much the same routes and the same Spanish terminology given them by those first settlers. To keep them operating, however, each Spring the soil residues that have accumulated in the bottom over the previous year must be dug out, along with any grasses and saplings that have started growing along the ditch banks. Damage caused by moles and muskrats and beavers and domestic animals undermining the sides of the channel can occur at any time, so the water flow must be monitored and repairs made as necessary during the Summer months.

Even the mostly infilled flat residential areas of Albuquerque, by far the largest urban area in New Mexico, were at one time agricultural lands composed mostly of plots of a few acres each, most of them fed by water diverted from the Rio Grande. A person walking through these neighborhoods can follow ditch banks for miles, frequently passing small gates, each consisting of a metal framework approximately ten inches wide containing a metal plate that can be slid upward to allow water to flow through an underground culvert into the back yard of a nearby house. These days in the city almost all of these gates look like they haven’t been opened in years; the connected yards are rarely cultivated. Still, there are enough agricultural operations scattered across the valley that the water flows steadily all summer, an ample flow some two to three feet deep and five feet wide.

Most of the acequias in more rural areas are managed by those who directly benefit from them. These are not official functions of the local governments, neither the city nor the county nor any other regional government. A management council, or comisión, is composed of the parciantes, the owners of the land parcels that have connections along each ditch. They make most of the decisions about when to organize crews for Spring preparations, when to open the water flow from the river at the start of the growing season, how much time each parciante can keep their gate open each day, how to deal with drought-year and end-of-season shortages, how to negotiate with other ditch councils that depend on water from the same sources, and when to shut down the flow at the end of the growing season. These arrangements vary by local preference and are overseen and somewhat regulated by state law, but for the most part the comisiones are on their own.

To oversee day-to-day activities each council selects a mayordomo. This individual, normally one of the property owners, collects fees (a process known locally as andando collectando) and contacts and pays workers referred to as piones (peons), who are either parciantes themselves or their relatives or other designated representatives. The piones are volunteers, some out of duty to their relatives and some because they need the money, often both. Whenever work is needed to prepare the ditch before the water begins to flow in the Spring, deepening the channel or removing unwanted growth, or later in the season to repair any damage or discovered leakages, the mayordomo surveys the conditions, organizes a team, and supervises their efforts. The number of team members depends on the length of the acequia that has to be cleared or the extent of the damage that must be repaired. In most cases this involves only a few days of work at a time, so the total payroll and the fees charged to individual parciantes are relatively small.

Acequia operations are one small example of the many types of unofficial cooperative systems at work on the sidelines of our recognized private market economy. Most of these are small and localized and unique to regional cultural mores and community relationships, historical aberrations that somehow carry on through the years due to tradition and to recognized needs that have not been, or cannot be, satisfied by either government or private-sector establishments. In every part of our country there are voluntary associations staffed by unpaid or minimally salaried people and supported by donations and bake sales and bazaars and car washes. These provide for community center services and political advocacy and 4-H clubs and youth sports and, yes, often for vital resources like water and electricity and food and volunteer fire protection. This is the supplemental economy, a ubiquitous presence that persists with almost no recognition from the official economic or political establishment. In many ways it makes up for the failures of both the private and public sectors.

Most of the human efforts expended for such community activities are either unpaid work or work that is reimbursed with undocumented payments or in-kind exchanges, which means that it doesn’t really count as work in the world of theoretical economics. It is rarely included in the statistics about labor and income. Yet these activities are everywhere and they consume an enormous amount of personal time that is generally considered, and in effect dismissed, as leisure hours. Like housework and home elder and child care, the contributions of the vast supplemental economy are mostly ignored despite their importance in the operation of our communities.

Posted in Economy, Sociocultural

Pragmatic States

The United States has a lengthy history of pragmatism in both philosophy and action, a long tradition that may have helped the nation grow but may also have inspired, or at least prefigured, the serious political conflicts of the second decade of the twenty-first century. Pragmatism, or at least a practical mindset, seems to have begun at the beginning. Alexis de Tocqueville made a note of it. He thought that it was the result of the lack of hereditary class distinctions and pre-arranged social levels, but it could just as well have been a consequence of frontier attitudes and the necessity of creating an integrated functioning economy out of a collection of isolated coastal outposts.

It does seem that scientific methods and factual analysis were very much in vogue at the time of the American revolution, a reasonable extension of the Scientific Enlightenment in Europe. But there were some distinct contrasts to the old world, too, attitudes that were mentioned by de Tocqueville in his classic study Democracy in America; he noted that the people of what was then a new nation were noticeably, perhaps defiantly, practical in their statements and their actions. At times he expressed this reality in fairly limited terms, as when he wrote, “As one digs deeper into the national character of the Americans, one sees that they have sought the value of everything in this world only in the answer to this single question: how much money will it bring in?” The reality of this observation was later reflected in William James’ metaphorical (and controversial) question, “What, in short, is the truth’s cash-value in experiential terms?” It also more commonly surfaces in such disdainful American constructs as, “If you’re so smart, why ain’t you rich?”

More broadly, de Tocqueville noted that American preferences were given to the concrete rather than the abstract or the theoretical, and to the utilitarian more than the aesthetic. Returning to an emphasis on monetary measures, he attributed this practical tendency to one national characteristic, the relative equality of individuals: “The prestige that attached to old things having disappeared, birth, condition, and profession no longer distinguish men or hardly distinguish them; there remains scarcely anything but money that creates very visible differences between them and that can set off some from their peers. The distinction that arises from wealth is increased by the disappearance and diminution of all the others.”

But the practical emphasis in the new country was not simply pecuniary. Such influential leaders as Thomas Jefferson and Benjamin Franklin distinguished themselves by being less concerned about money and more devoted to politics and diplomacy, and to science that was potentially useful but still unconnected to direct commercial applications. Franklin was a founder and first secretary of the American Philosophical Society, a group devoted to the advancement of what was then known as “natural philosophy,” terminology that at that time referred to science in general, not to the abstract academic pursuits that de Tocqueville had in mind when he wrote, “I think that in no country in the civilized world is less attention paid to philosophy than in the United States.”

When the practical mindset began to develop into a modern philosophic movement, in the 1870s, the leading proponents were all prominent United States citizens. Charles Sanders Peirce is considered the originator, but he is less well known than his followers William James and John Dewey. James introduced his arguments in a small 1907 book titled Pragmatism, with the modest subtitle “a new name for an old way of thinking,” perhaps recognizing that he was building on long-standing American tendencies. He repeatedly included among his cognitive progenitors both Peirce and Walt Whitman.

At the heart of the philosophy of pragmatism is the idea that scientific concepts should be evaluated according to how well they explain and predict phenomena rather than how well they describe reality. There is, in this view, more than one way to visualize the world, and the “truth” of a statement depends on how useful it is. In some statements attributed to him, James sounded almost like a 20th-century self-help guru: “Thoughts become perception, perception becomes reality. Alter your thoughts, alter your reality.”

There are a number of possibilities that can result from such a pragmatic worldview, outcomes that differ based on the goals or results desired by an individual. In U.S. history it was all well and good in the hands of individuals such as James and Dewey, the progressive educators who believed in knowledge-based democracy; James even had a number of students who became well-known positive influences on society, including W.E.B. Du Bois and Walter Lippmann. There was also at least one, Theodore Roosevelt, who achieved a position of significant power from which he could impose his reality on others, leading one to wonder in what ways the concepts of Jamesian pragmatism encouraged and directed early twentieth-century American imperialism. Roosevelt certainly created his own version of reality when he dispatched the Great White Fleet on an around-the-world voyage despite congressional concerns about funding.

In recognition of observed reality, unfortunately, there is always the potential for the dark side to emerge in the evolution of any philosophy in which truth and facts are seen as relative or conditional or where, as Dewey noted, “immutable truth is dead and buried.” What we have been forced to recognize at the start of the 21st century is that there can be a serious downside to allowing influential persons to select or create their own facts and their own truth. The prime example of this is the movement that led to the presidency of Donald Trump, a coordinated and cross-reinforcing propaganda system combining both traditional and social media. Such possibilities may have been anticipated by James when he noted, “There is nothing so absurd that it cannot be believed as truth if repeated often enough.” Combine this potential with the pragmatic desire of like-minded politicians to craft new facts and truths to suit their own purposes and we find ourselves in what many commentators have called a fact-free era. There is no consensus about when this era began in the United States; it could be traced back to the expansion of cable television and the internet, to the manufactured justifications in the 2003 buildup to the second war in Iraq, or to the many Tea Party misrepresentations in the campaign against the Affordable Care Act (“Obamacare”). And obfuscation is hardly a new tendency. The War in Vietnam was also sustained on lies, and it was Aeschylus, after all, who is credited with the statement that “In war, truth is the first casualty.”

None of those older campaigns compare in scale to the pattern of fabrications inspired by candidate Trump and expanded during his presidency. The birther lie, the QAnon conspiracy complex, the promotion of the border wall as the sole solution to immigration, the unending multilevel election fraud lies; all of those “alternative facts” were created and sustained with the pragmatic purpose of putting a Republican in the most powerful position in the United States and keeping him there even after he lost the 2020 election. Donald Trump himself is by nature the ultimate negative pragmatist, in that he simply refuses to recognize any reality or facts that do not benefit him. He has no interest in other goals or ideals. His acolytes and supporters are equally opportunistic and they have made every effort to promote the many false narratives whether they believe in them or not. There should be no doubt that applied pragmatic philosophy in the hands of a potential despot and his avid minions almost brought an end to 230 years of successful democratic governance in the United States. Since then, the Trump program has been repeated by Jair Bolsonaro during his tenure and loss of the presidency in Brazil, inspiring a destructive 2023 riot in Brasilia’s government complex, similar to the attack two years earlier on the Capitol building in Washington, D.C.

This is not to blame the early pragmatists or their philosophy for the excesses of the recent truth-free era, just as it was wrong when conservative leaders blamed Foucault and deconstructionism for the narratives of modern leftists and their social protests. It is simply a recognition that pragmatism must be tempered by objective reality, communitarian reviews, and inclusive ideals.

Posted in Sociocultural

Just a Job

It was a job. It paid slightly above minimum wage and provided no other benefits. Its primary advantage, if you can call it that, was that it required no thought, no active personal involvement or commitment other than the repetitive physical movements that Calvin had mastered in the first hour of employment. There was also minimal opportunity for interactions with any of his co-workers, not even with the woman who stood across from him at the end of the line. He provided her with large aluminum trays with inch-high edge rims, sort of like oversize baking sheets. She loaded each tray with four rows of seven small wrapped boxes of Brussels sprouts as they came off the line of rollers eight inches above the tray. When each tray was full he would move it to a slot on a seven-foot-high metal rack and grab an empty tray from the same rack. By the time he got the new tray in place, something he thought took between three and four seconds, the woman already had the next row of boxes ready to drop into place, and the next rows after that were already queuing up on the narrow roller conveyor coming down from the packing operators. He would continue loading the trays into the rack until it was full, at which point a new rack of empty trays would appear and the full rack would be rolled away. It had to be that way. The line of sprouts-filled boxes wasn’t going to slow down or stop and there always had to be another empty tray and a place for the filled tray to go, and the filled rack had to be taken away into the freezer. This was the end of the production line and there was no other place for the boxes to go, except for spilling off the end onto the floor. Calvin had to concentrate on that. He never had a chance to look at the workers who packed the boxes a few feet up the line to his right nor the ones who rolled the racks into place behind him; his eyes were always on the trays, only the trays, his attention on getting everything in its place with no delay.
There was simply no time to look around.

That didn’t mean that Cal didn’t know what else went on in the cannery, or the freezery, if you wanted to be accurate. All of their output was frozen, never canned. He had scanned the layout during his lunch breaks. The building itself was a cavernous metal box covering a massive gray concrete slab. Somewhere out in the field Brussels sprouts had been cut off their stalks and loaded into large cardboard bins. At one end of the cannery building those bins were dumped out and the contents funneled onto a narrow white conveyor belt passing between two rows of women whose job it was to repeatedly hold individual sprouts up against rapidly spinning blades that trimmed off the remains of the stalks and the outer leaves. That was the one really dangerous job because the women had to move quickly to keep up with the flow, and the blades were kept very sharp; on occasion a finger or two ended up where they weren’t supposed to be.

After trimming, the sprouts dropped into a chute filled with hot chlorinated water that rinsed and blanched them as they floated over to another conveyor belt that took them between two long rows of women who, again very rapidly, loaded them into small wax-coated boxes on scales that made sure each box held at least 12 ounces. The women then closed the boxes and added them to another conveyor in the unending train down to the end of the line, where they slid off the belt onto a narrow roller shelf leading down to where Cal and his partner waited with the endless trays and the rolling freezer racks. At the season’s high point the line would run eleven hours each day, six days a week, only shutting down for scheduled breaks and a half-hour lunch.

After hours of repetitive unrelenting monotony and the continuous background din, Cal went out into the quiet night to walk the half-mile back to his small apartment, an upstairs room in an eighty-year-old Victorian residence that had been subdivided into five units. There he removed the clothes that reeked of the distinctive odor of cooked sprouts, clothes that he could hang up outside his window if he didn’t expect rain or high winds. He could wear them again the next day after they aired out. He threw together whatever he had available in his small refrigerator, gulped down the resulting meal, and collapsed into bed. There wasn’t much time for anything else. A few months of that schedule and he was ready for the only other benefit of the job; it was seasonal. There were three months of Brussels sprouts in the fall and three months of spinach in the spring, with two blocks of unemployment pay separating the two. It was like a paid summer vacation twice a year, a break that was needed if the workers were to recuperate from the trials of each season.

It was a job. And he had kept at it for two years, since it was the only thing he could find following graduation from high school. There weren’t many other job opportunities in his small town, and in his first few months, while still living at his parents’ home, he had barely managed to build up enough money to put down the security deposit and first month’s rent for his apartment, as minimal as it was. Never mind any kind of personal transportation; maybe he could afford a scooter, eventually. On many evenings, especially on weekends, his sleep suffered because of the late-night noise from his neighbors, all of them apparently young workers with more ordinary weekly schedules. He hoped that an unblemished work record at the cannery would help him get a better position somewhere else, but it seemed that cannery work was not the kind of experience that other employers were looking for; he had kept looking, but hadn’t received any positive responses. Perhaps, he began to believe, he was stuck.

It was during the next spinach season that he found himself walking away from the cannery next to a couple of women from the trimming line, Cassie and Helen. Slightly older than him, they had been working there for four years. They invited him over for dinner, where he discovered that they had few more possessions than he did. Their only advantage was that their ability to share the rent allowed them to have a more spacious apartment, but it was obvious that their combined incomes didn’t go much further than his. There were a few other differences he noted during their dinner. One was that even though the meal was a relatively simple stew, their cooking skills were well beyond his own. The meal was delicious. But their discussion was also notable in one aspect; every time he brought up a complaint about the cannery and the type of work they were involved in, the women would shrug and change the subject. When he expressed the worry that he would still be struggling in the cannery forty years later, as an old man, Cassie just laughed and said, “You don’t have to worry about that, believe me.” The two were much more interested in talking about current events in the world outside of their small town. In all of this they were unfailingly positive.

It was several months later, at the beginning of the summer break, that the reasons for their optimism, or their casual acceptance of the world as it existed, began to come to the surface. The two never mentioned any of that themselves, but one morning they did invite Calvin to a brunch in the dining hall at the 4th Street Baptist Church, a meeting that would include about sixteen people who worked at the cannery along with their friends and families. It would be the first time he would be in a church since he had moved out of his parents’ house, but Cassie told him it was just a gathering of friends, not anything religious, so he decided to go. At his table there were eight people, including Cassie and Helen, and four of them actively began a discussion about Middle Eastern crises and the recent movement of the United States embassy to Jerusalem and the continuing expansion of Jewish settlements there and in the West Bank. As Calvin listened he realized that they were enthused by these events, or perhaps not by the events themselves but by what they might imply for the near future. The others at his table were not actively contributing to the speculation, but they were clearly interested.

The event ended a bit after one o’clock and Cal found himself walking back to their apartments with Cassie and Helen again, a trip that began with none of them talking. A few blocks into the route Helen broke the silence. “You know, Cal, now and then it’s seemed to us like you’ve been disappointed when you complained about working at the cannery and we didn’t react the way you wanted … I mean, we didn’t do anything to encourage your negative comments or agree or disagree, or anything like that. Maybe now you’ll see why.”

Cal didn’t, but he wasn’t sure how to phrase his lack of understanding, how to get the two to express what they clearly had thoroughly incorporated into their view of the world. Something, obviously, allowed them to ignore all of the negative aspects in their lives and their lack of future options. He shook his head and said, “I’m not sure.”

Helen stopped walking and smiled. “Don’t you see? The signs, what they were talking about at our table, they’re all around us. We don’t know exactly when, but the world is building toward Armageddon. The end is on its way. The end of everything. Everything that’s happening right now is almost meaningless, it won’t last more than a couple more years. Don’t worry about any of this … this transitory stuff.” She waved her arms outward as if to dismiss the entire world. “It doesn’t mean anything, none of it. It’ll all be taken care of.”

Posted in Fiction

Change Le Même

One of the most common tropes around the beginning of the twenty-first century is a phrase that has been used to describe virtually every major event from the attack on the World Trade Center to the election of Donald Trump as president to the premiere of the musical Hamilton to the Covid-19 pandemic. That phrase, with slight variations, is “the world will never be the same again.” Sometimes the words change, but the sentiment is the same: Vladimir Putin’s Russia invades Ukraine, and “Life in Europe will never be the same again.” The United States government rejects the Paris Climate Accords of 2015, or reinstates them, and “this changes everything.” Well, of course it does, and life will never be quite the same, will it? The problem is that in media reports and political statements and commemorative gatherings those who use such phrases are attempting to connote significance or draw attention to an event by using an overgeneralized, meaningless, and increasingly trite statement.

The fact is that every event, every personal choice, every rainstorm, every car crash, every solar flare, every little or big thing, changes the flow of reality. Permanently. It doesn’t matter how small or inconsequential or “uneventful” an occurrence is. In fact it doesn’t even matter if anyone notices. If a tree falls in the forest and no sentient beings are there to notice it, does it make a difference? Yes, it does. As Heraclitus famously noted more than 2,500 years ago, “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.” This statement is an analogy for his belief that everything is constantly changing, that even in the next moment the world around us will never be the same again. I actually prefer the more direct Terry Pratchett version of the same concept because it is both more inclusive and more immediate:

“The universe is, instant by instant, recreated anew. There is in truth no past, only a memory of the past. Blink your eyes, and the world you see next did not exist when you closed them. Therefore, the only appropriate state of the mind is surprise.”

The fact is that change is the only consistency in the world, and Pratchett’s statement expresses the all-encompassing nature of that variation. It is unrealistic to imply that any single event, no matter how large and consequential, has a monopoly on world-changing effects. Yes, the fall of the Berlin Wall in 1989 changed the future of Europe, but so did the action of an unnoticed swallow pooping on that same wall a week earlier. Nobody noticed that event, and the deposit may have been washed away by rain long before the wall was destroyed, but the wall, and the neighborhood surrounding it, would never be the same again. As a process of change, the caustic poop and the erosive rain would eventually have accomplished the same thing as the sledgehammers and excavators and skip loaders that made short work of the long-standing barrier. Admittedly, the bird’s contribution would not only have taken much longer but would have been disconnected from the larger political reality. The wall was both an effect of, and a powerful visual metaphor for, that reality, but otherwise the ultimate effect on the wall would have been similar.

This is not to say that someone speaking at a commemorative gathering thirty years after the fact, in 2019, would be wrong if they stated that the removal of the wall changed the fate of Europe and the world. That is true. But that statement on its own would also be meaningless, an empty hyperbolic flourish, especially if it is not followed up with any details about the larger political collapse. Instead of applying an overused cliché to a metaphor, the speaker could tell us what specific changes resulted from the collapse of the Soviet Union and the reunification of Germany, and let us recognize, on our own, how important they were.

There is also a downside whenever we focus solely on differences that result from major events. By recognizing that there are changes all around us all the time, and that the smallest variations can be just as effective as some of the larger events, and sometimes even more effective in the aggregate, we can make more sense of what is happening and why. The famous chaos theory allegory about a butterfly changing direction in the air over China and thereby causing the consolidation of a thunderstorm over Ohio may not refer to something we will ever be completely aware of, but that doesn’t make it any less significant. But that’s perhaps the rub. In theory we know that the minuscule atmospheric fluctuations around that butterfly’s wings can affect the weather in the United States, but we can neither observe the wing movement accurately enough, nor can our most powerful computers model the resulting effects thousands of miles away. Even with the great improvements we’ve made in such forecasting, and the relative simplicity of predicting storm paths, there were widely divergent paths presaged for Hurricane Ian the week before it hit land in Florida in 2022. And even after those multiple forecast paths converged two days prior to landfall, the predictions still missed the eventual landfall point by more than a hundred miles, leaving much of the city of Fort Myers inadequately prepared.

Anyone who follows the trends in sociology or politics knows that human behavior may be even more complex and unpredictable than the weather. One event, the theft of a wallet perhaps, will have a ripple effect that changes every time a new actor reacts to hearing about the theft or responds to movements of the wallet or its contents. Predicting the extent of that one theft would require detailed information about the capabilities of the thief and the personal inclinations of every person who is asked to respond to any attempted identity fraud through driver’s license or credit card information. A very different example on a much wider scale was provided by the 2022 midterm elections, in which pre-election polls indicated a decisive “red wave” that would have provided the Republican Party with Senate leadership and a large majority in the House of Representatives, as well as control over an increased number of state governments. The unanticipated individual actions of millions of voters combined to cancel those predictions and made it obvious that the future of the United States “would never be the same,” neither in the ways that were forecast by the polls nor in the ways it existed prior to the election. We can try to predict something about the broad cumulative results of that vote, although the ripple effects will be, again, too complex to analyze. And even if the votes hadn’t been anonymous we would never be able to accurately determine why the election didn’t follow the advance polls or pundit analyses. It was the result of the complex individual motivations of millions of people, people who themselves may not know exactly why they made the decisions they did.

Big events, and the speculation about them, can deflect attention from experiences and concepts that are more meaningful in our lives. For many reasons it is important for us all to be more aware of, and indeed to pay increased attention to, the small changes that are always around us, the ones that also mean, to repeat the obvious, that the world will never be the same again. That awareness, for one, can help us as individuals have an impact on such diverse vital realities as the next election, the future of climate change, our personal relationships, the continued prosperity of the flowers and earthworms and bees in our gardens, and, perhaps most important of all, our own personal enjoyment of life.

It doesn’t matter if the things you observe are unique and extraordinary or everyday and mundane. The world around us is full of events; most of them are small and seemingly repetitive. But for all of that they can be consequential and interesting and informative. Given the right state of mind or meditative approach, few of them are either boring or a waste of time. As cognitive psychologist Steven Pinker noted, “A common man marvels at uncommon things; a wise man marvels at the commonplace.” Don’t wait for the big events, and don’t get distracted by them. Your best response to the world around you is indicated by the continuation of the Terry Pratchett quote at the beginning of this essay:

“Therefore, the only appropriate state of the mind is surprise. The only appropriate state of the heart is joy. The sky you see now, you have never seen before. The perfect moment is now. Be glad of it.”


CEO Government

In recent decades it has been common for political pundits and some candidates for office to denigrate the very title “politician” and to promote the idea that what the country needs is to populate our government with non-politicians. In truth, of course, this means individuals who have virtually no relevant experience in the job. For some reason they don’t state their arguments in those terms. What they usually say is something like “my opponent is a career politician who has never run a business or hired people or met a payroll.” They like to say that it’s only common sense that “what we need in the legislature is people who have been successful businessmen.” Politicians, they claim, are isolated from reality, self-serving, only interested in getting re-elected. Non-politicians, it is implied, would bring to government the important skills of executive management based on reality.

I’ll return to those arguments later. First we can take a look at what our country has reaped from a business-directed philosophy of government. There are several relatively recent examples of political candidates who rode to election on promises, from the punditry and their campaign supporters, that electing a prominent entrepreneur would revolutionize government. We can ignore the well-known individuals, like Ross Perot and Carly Fiorina, whose campaigns failed. We might also leave aside those who have entered public service as legislators; our assessments of the success or failure of these individuals depend too much on partisan and subjective evaluations of their voting records. The important question is what happens when a wealthy businessperson becomes a high-ranking public servant in an executive position.

In New York City we had the example of Mayor Michael Bloomberg, who was the CEO of a very successful financial firm prior to his transition to the public sector. He took office as mayor in 2002 with virtually no relevant government experience, but he was fairly popular, was reelected twice, and left after his third term only because of term limits. He adjusted well to the restrictions placed on public administrators, learning to work effectively with the 51-member city council and the various city agencies. There were, naturally, some disagreements, a few cases in which Bloomberg vetoed bills and a few in which his vetoes were overridden, but overall his administration was competent and productive.

The same can almost be said of the tenure of Arnold Schwarzenegger as governor of California, which began with a crowded 2003 special election following the recall of governor Gray Davis. Prior to this Schwarzenegger had been a professional bodybuilder, winning the Mr. Olympia title seven times, and a popular movie actor. He was reelected once and for the most part cooperated with the state legislature. His second term was his last because of term limits, although he likely wouldn’t have been chosen for a third term in any case; his approval rating at the end had dropped to 23 percent amid growing stories of ethical and sexual misconduct.

Even moderate success cannot be claimed for the tenure of Jesse Ventura as governor of Minnesota. Ventura gained his fame as a professional wrestler, color commentator, and actor prior to serving one term as mayor of Brooklyn Park, Minnesota, and he became governor in 1999, four years after his term as mayor ended. His single term was marked by significant dissension with the state legislature, producing a record number of legislative vetoes. He was accused of mishandling the state budget, ending with a large deficit, and he repeatedly denigrated the media. He chose not to run for reelection.

Thus far it seems that the recent score for non-politicians in executive government positions is decidedly mixed, and certainly not an influence that has produced the revolutionary positive effects that have been promised by those promoting the meme of business-honed entrepreneurial skills.

At the presidential level the primary example of a successful businessman becoming a public executive was the brief tenure of Warren G. Harding, 1921 to 1923. His reputation for business acumen came from his rescue and rebuilding of a failing newspaper, the Marion Star, in Marion, Ohio. On his path to the presidency he served four years in the Ohio State Senate, two years as Lieutenant Governor, and one six-year term in the United States Senate. He parlayed his senate position into a successful run for president in the 1920 election. In theory he brought the best of both worlds, achievement in business enriched with experience in government at multiple levels. His governing philosophy was that government should assist businesses as much as possible, and he appointed Herbert Hoover as Secretary of Commerce. The second year of his term was marred by multiple strikes, including a nationwide railroad walkout by 400,000 workers.

His administration was also marred by a series of scandals, most of which only came to light after Harding’s death from a heart attack in 1923. Most of them involved corrupt practices such as influence peddling and kickbacks, a pattern that has been attributed to Harding’s tendency to nominate friends and business associates with minimal relevant experience to high-ranking agency positions. The nadir was the Teapot Dome scandal, in which the Secretary of the Interior was convicted of accepting bribes to allow oil companies to drill in naval oil reserves in Wyoming and California without the benefit of competitive bidding or any of the other procedures required before implementation of such significant government decisions. Congressional hearings revealed that Harding had approved the drilling. Overall it seems that the vaunted management skills that were supposed to come with the election of a businessman president, and his agency appointments of business leaders, were not present in the Harding administration.

In the history of the United States there has been only one president who went almost directly from acting as a corporate executive to the Oval Office: Donald Trump. Harding at least had a few years of public sector experience before he was elected to the highest position; Trump had none. He is, then, the purest example of a high-level business administrator elevated to a government executive position, a fact that does not argue well for the theory of private-sector preference. Admittedly, Trump was not exactly the best example of an entrepreneur, given that what he achieved in business was financed by family wealth, was sustained as much by questionable public relations efforts as by good management, and was marred by repeated investment errors and bankruptcies. His public record is similarly replete with poor choices of campaign advisors and managers of federal agencies, inconsistent decision-making, self-serving decisions, and statements that demonstrated his lack of knowledge of U.S. history, science, laws, traditions, and regulations. Not to mention his lack of interest in all of the above. He famously treated the heads of agencies as if they were his personal minions, there to do his bidding (as if they were mid-level managers), instead of public servants dedicated to the legal missions of their organizations and the public welfare. This was especially true of his four Attorneys General, each of whom was treated as if he were Trump’s personal lawyer. He was impeached twice, once for misusing federal resources in an attempt to smear a potential competitor and once for inspiring an insurrection to keep himself in power.

We could treat the Trump debacle as a fluke, a one-time disaster that resulted from the elevation of a singularly unqualified individual, but the primary characteristic that caused Trump to fail is unfortunately common among private-sector managers: the tendency to operate as a unitary executive, to make decisions without consulting other stakeholders. Public service is a different world; administrative options are much more limited due to policies regarding transparency, mandatory public hearings, judicial review, and the fact that most of the real decision-making power is vested in the legislative bodies rather than the executive. Like a powerful CEO, President Trump was not used to having his decisions questioned, much less blocked, and that became obvious in many of his choices and in his negative reactions to setbacks. In four years the Trump administration clearly demonstrated how foolish it is to promote business leadership, by itself, as a model to reform government.

The next time you hear a candidate say the all-too-common phrase “I’m not a politician” as if that were a positive attribute, imagine yourself on a human resources team looking at applicants for a job you need to fill. You need an experienced welder, and a candidate comes in and says, “I am not a welder.” Or you need a sous chef and the applicant tells you, “I’ve never cooked a thing.” Do you hire those people? If not, tell me why you would hire a declared non-politician for a job that requires political skills, including talking to constituents, writing legislation, and compromising to get that legislation passed. And if you do hire (vote for) that person, how can you then still expect your government to work effectively and accomplish things on your behalf? Or perhaps you are in fact a conservative “small government” idealist who really doesn’t want government to work well at all.


Ban Books and More

As I write this we are in the midst of a national effort to ban ideas. This isn’t anything new; our politicians and pundits have been engaged in what has become known as the “culture wars” for years, and country-wide attempts to control political and moral speech go back more than two centuries. The United States Constitution was less than ten years old when the first Sedition Act was passed in 1798, making illegal any speech or writing that spreads “false, scandalous, and malicious” ideas about the government. The modern culture wars have a much broader purpose. Conservatives have long complained that such broad categories of tradition as Christianity, free enterprise, gun ownership, marriage, gender identification, and U.S. history are under attack. They are leading into the 2022 mid-term elections by stepping up their arguments, targeting liberalizing movements that are attempting to foster more open discussions about gay and transgender people and about the less savory aspects of the national historical record.

The conservative method of choice is, first, to distort and exaggerate any concepts they oppose. In their recent formulation there is a dangerous “gay agenda” that wants to convert or “groom” young people into “perverted” sexual behavior, and an associated “woke” agenda that wants to shame white people by telling them they are responsible for slavery and lynching and discrimination and native genocide and all of the other negative events in our shared past. Second, as part of pushing back against cultural liberalization they want to severely limit what can be taught in our schools, prohibiting discussions of inclusive gender roles and accurate history in all classrooms.

These efforts, of course, have spilled over into banning books that refer to the unwanted topics. They reject not only non-fiction books like The 1619 Project (Nikole Hannah-Jones) or And the Band Played On (Randy Shilts) that specifically discuss the banned historical record, although those are also included, but also fiction that features any minority or gay or transgender characters leading lives that are as ordinary and honest as people in their social situation can experience. The objections remain even if the references to discrimination or non-heterosexual activities are minimal. So the list this year contains notable award-winning fiction, including To Kill a Mockingbird (Harper Lee), Gender Queer (Maia Kobabe), The Handmaid’s Tale (Margaret Atwood), The Bluest Eye and Beloved (Toni Morrison), Maus (Art Spiegelman), The Absolutely True Diary of a Part-Time Indian (Sherman Alexie), Heather Has Two Mommies (Lesléa Newman and Laura Cornell), Lawn Boy (Jonathan Evison), How to Be an Antiracist (Ibram X. Kendi), Where the Wild Things Are (Maurice Sendak), and In the Dream House (Carmen Maria Machado). And, in an act that verges on self-satire, some activists have once again called for banning Fahrenheit 451 (Ray Bradbury), a repeatedly banned book about destroying all books, not just those with specific content. I don’t know about you, but I am somewhat familiar with most of these books and I am at a loss to figure out what they all have in common.

These are only a few of the most familiar books on the many lists that have been created across the country. Other lesser-known titles have also been questioned and slated for removal from classrooms and libraries, almost all of them either about gay or transgender people or about racial or cultural minorities. In the past year more than 1,500 books have been banned in more than 90 school districts in half of the states in our country. Members of the school board in the Rapid City School District in South Dakota went further, asking whether the books on their list should be not only banned but destroyed. Many school and city librarians have been threatened for having the temerity to argue against banning books, and a teacher in Norman, Oklahoma—incidentally, the home of the University of Oklahoma—was suspended because she provided her students with a QR code they could use to seek information and order books from the Brooklyn Public Library’s Books Unbanned program. After the teacher resigned under pressure, the Oklahoma Secretary of Education pushed further, demanding that her state teaching certificate be revoked. As he explained, “This is completely the tool of a far-left extreme group that is using the profession and using schools to indoctrinate, groom kids and to try to hyper-sexualize [children] and teach them to hate their country. And we’re not going to allow it.” This statement indicates that the book ban is only one part of a multi-directional appeal to paranoia directed against public education.

The book-banning effort is not merely a collection of local or regional grassroots efforts. It is a national campaign coordinated by such activist conservative groups as the American Legislative Exchange Council (ALEC) and the Family Research Council (FRC). This movement encourages people to file similar challenges against the same books in multiple school districts in almost all states. For the local participants, that provides a significant advantage, for they can receive their lists of objectionable books and offensive content from a central source. This frees them from the drudgery of searching for and actually reading parts of any of the books they oppose. It also allows the campaign to bring in individuals other than concerned parents. Many conservative politicians have been promoting book bans as part of their usual electoral activities, a strategy that is a direct extension of their ongoing culture wars, the decades-long crusade of fear-mongering being used to build voter enthusiasm in advance of elections. They’ve expanded their crusade with manufactured outrage and anxiety about transgender use of public bathrooms and teaching Critical Race Theory. They’ve incorporated it into broader attacks on public education and campaigns for school board members. In other words, more of the same, only more of it.

My own first personal experience with book banning also involved a national effort, one that occurred during my high school years. In that case the cause was anti-pornography, sparked by the publication, in 1961, of Henry Miller’s semi-autobiographical novel Tropic of Cancer. The book had first been published in France in 1934, but it had long been banned by the United States government because it “dealt too explicitly with his sexual adventures and challenged models of sexual morality.” The early 1960s campaign followed and built on the late 1950s controversies regarding Lady Chatterley’s Lover (D. H. Lawrence), Lolita (Vladimir Nabokov), and Howl (Allen Ginsberg), and it expanded to embrace William Burroughs’s Naked Lunch and J. D. Salinger’s Catcher in the Rye, among other novels containing questionable dialog. But the one set of actions that most impressed me was the extraordinary campaign against the Dictionary of American Slang, a book edited by Harold Wentworth and Stuart Flexner and first published in 1960.

The Dictionary of American Slang was, as advertised, a dictionary in the standard format. It was therefore dry and matter-of-fact, a lengthy list of individual words with their definitions and, often, sample usages. This project, properly done, quite obviously required not only including some slang terms that were considered objectionable, but also explanations that sometimes referred to a variety of body parts and bodily functions that at the time weren’t commonly found in “acceptable” books; that is, books other than the ones written by Henry Miller and D. H. Lawrence. The mere appearance of this book in public and school libraries was strongly protested, and the book was removed from some locations, because it actually contained some “obscene” terminology. In other words, it was a fairly complete compilation of American slang. What was most surprising about the effort to ban this book, however, wasn’t the book itself. It was the strategy used by those who opposed it. The morally incensed individuals who showed up at meetings of school boards and city councils and library boards brought with them handouts, printed pages listing a large number of the offensive words and phrases contained in the dictionary. These they passed out to anyone who would take one. Their intent must have been to spread to others the outrage they felt at finding a book that actually contained such words, but the reality was that they were actively distributing the very content that they were hoping to have banned from public access.

The same odd strategy is still being used as I write this. People who oppose specific books based on overt sexual descriptions or objectionable words are showing up at public meetings with printed handouts containing many of the offensive passages they object to. Their lists include only those short passages, with the “perverted” and “obscene” phrases separated from any of the pesky “socially redeeming content” that surrounds them in the actual book. In other words, they remove the broader context, the vast amount of inoffensive material that has allowed the Supreme Court to reject censorship in so many other cases. Could these individuals be arrested for distributing pornography? By their own definitions it would make sense. Instead, the media generally passes along, without comment, their argument that they are doing all of this to protect children. That happens to be the same justification used by those who fought against the Dictionary of American Slang. Back in the 1960s supporters of the Dictionary replied that its opponents were “protecting” most children from words they had already heard and used. These days the book banners seem to be going further, trying to protect children from reality, a reality with which many of those children are already all too familiar. I must admit that it would be easier to remove descriptions of reality than to revise the reality itself—the very real historical and current systems of oppression—but the people who ban books are only interested in the first of these two options.


Real Magic

Arthur had come to the conclusion that even after years of dedicated effort, after all of those hours and hours of lonely practice, staring at himself in front of a mirror, watching attentively to micro-adjust his movements and his statements, and despite the feeling that he had succeeded in virtually all of his goals, it just wasn’t enough. It just wasn’t satisfying, not any more. He began to wonder how much it had been, ever. It had been engrossing, at least. With the assistance of online videos and books and focused attention and innumerable repetitions, he had managed to master the subtle sleight-of-hand movements that were required to perform almost all of the magician’s tricks that he had ever seen performed. He could produce a specific playing card or a variety of other objects seemingly out of thin air. He could make small solid objects appear out of, or disappear into, a handkerchief or a hat or an observer’s pocket with invisible ease and without even thinking about his actions. Nobody who watched him could tell how he managed any of these subterfuges, and he still enjoyed the looks of open-mouthed astonishment and disbelief and perplexity that he regularly saw on the faces of his audiences and appreciated the small degree of local fame that he had achieved. Performance always brought positive feelings, no matter how small the crowd. But he had begun to realize that it wasn’t enough. For one thing, he always knew that, however impressed his audiences may be, it just wasn’t real. It was always a fake, a deceit hidden by body movements. It wasn’t authentic magic. Maybe, he thought, he wanted once or twice to be amazed himself.

What Arthur finally decided he wanted, what he hoped to achieve, was more, much more than tricks. He wanted something that violated the laws of physics or the prohibitive limits of resource realities. He wanted to be able to wave a wand or his open hand sideways in front of his body and proclaim some exotic mysterious phrase and see, as a result, a physical object change in form or appear out of thin air, preferably without the usual distractions such as the intervening puff of opaque smoke. “Abracosina!” he would cry while visualizing roast beef and mashed potatoes, and his dinner would appear, plated and ready to eat, a fork and knife at its side. “Lavasuto,” he would whisper, quietly but authoritatively after he had finished eating, with a slight uncurling of his fingers, and the dirty dishes would float gently away from the table, wash and dry themselves, and slip away neatly into their places in the cupboard. What good is magic, he argued to himself, if it doesn’t make your life easier? Significantly easier. But then he thought that perhaps such examples were too mundane to be considered as applications for the use of true magic. What, though, would it be like to be able to make a real difference, to construct or rebuild homes or repair cars or save people from injury? What would it be like to effortlessly feed hundreds of homeless, to turn water into wine? “Or,” he smiled, “soda into beer?” That would indeed be magic. Real magic, he told himself, should be miraculous!

Arthur decided that the answer was to be found in research into the only disciplines commonly regarded as capable of the type of magic he would consider real. What he needed was sorcery or witchcraft, what are commonly referred to as the dark arts. Maybe he needed the assistance of supernatural beings, angels or demons who could access powers hidden out of reach of normal everyday existence and beyond the knowledge of ordinary individuals, of Muggles. For a few months he devoted his spare time to reading many of the works of H. P. Lovecraft and of his Gothic sources of inspiration: Edgar Allan Poe, Matthew Lewis, and Ann Radcliffe. There he found descriptions of the kinds of events and powers he was hoping for, whether for good or evil, but he soon tired of these works and rejected them as useless; interesting, as were the Harry Potter books, but entirely fictional and thus irrelevant. He had to admit that what ran through his mind most frequently as he read were the fanciful images from Disney’s animated Sorcerer’s Apprentice, with Mickey Mouse. The conclusion he came to was that these authors had active imaginations but no real experience of the kind that would help anyone else duplicate the stories they told. In short, they had been a waste of time.

What was needed was a change of focus. He diverted his research in another, very different direction indicated by new online searches. That meant obtaining copies of grimoires, traditional spell-books such as the Key of Solomon and the Three Books of Occult Philosophy by Heinrich Agrippa, along with a few different versions of the Wiccan Book of Shadows. He cleared a corner of his bedroom and set up a shrine with a solid rosewood bookshelf, its planks untainted by stains or varnishes or paints. The spell circle, big enough for him to sit in, was defined by woven strips of switchgrass and a large number of those short votive candles that burn for hours in small glass containers. With the lights off, it was the perfect setting for concentrating on whatever spells he could attempt and whatever potential results they could create.

His research pointed toward the importance of amulets, talismans, fetishes, charms; the name was unimportant. What was important were the words uttered in conjunction with manipulating one or more of these ritual objects, and, of course, the shape and imagery of the object itself. The proper vocal inflection, a tone sincere and solemn, would probably also be important. Finding the correct combination would take some experimentation, just as sleight of hand had taken hours and hours of practice. It seemed that objects of certain shapes were most significant, the best options being ovoid forms or rounded abstract imitations of the human figure. Also important were the specific materials that comprised the object; there were mentions of quartz crystal, jade, opal, magnetite, obsidian, cast iron, gold, copper, soapstone, turquoise, ivory, and even some forms of ironwood. Fortunately for his budget it didn’t necessarily have to be an expensive substance; the critical characteristics were solidity and purity and personal resonance. Many of these items, especially in the medieval traditions, also incorporated astrological symbols as engraved images. These could include symbols of the Zodiac, those derived from Hellenistic representations of constellations, or the Vedic symbols of the Jyotisha system, or the animal designs used in Tibetan disciplines, or, less often, the hieroglyphs of the Egyptian dynastic world. It was all a bit overwhelming, especially since he had not found the forms he wanted in purchased items and felt he would have to fashion each piece himself in order to build the appropriate spiritual connection between the potential worshipper (himself) and the eternal soul of the charmed object.
He had soon invested a bit too much of his income in a set of power tools, including hardened chisels and a rock polisher and grinders of various sizes, and he began spending late nights starting with rough chunks of solid rock slightly smaller than his closed fist and working them down into smooth rounded shapes that he could caress in one hand while he meditated and concentrated.

The problem was that Arthur still didn’t know what to say or what to think while he sat in his circle in the semi-darkness surrounded by small flickering candles, closely holding his chosen talisman. He knew that others called on specific ancient deities or biblical demons or fallen angels: maybe Ares or Asmodeus or Beelzebub or Lilith or Mephistopheles or the Succubus or Tyche. Maybe he should go to Hermes, the messenger, and let him decide who should get his requests. Isn’t that the job of a courier? Or maybe directly address Hecate, the goddess of sorcery and magic. According to some texts, certain deities should be contacted only during the reign of specific astrological signs, and then only with the use of one specific type of his many sacred objects. And it was apparently common in some sources to address such powerful personages with some sort of pseudo-medieval wording, using thou and thine and wouldst and mote, as if the gods and demons were able to understand English but were stuck in the fourteenth century. Arthur preferred the straightforward spell language of modern Wiccans, but he wondered if that might be seen as dismissive, an attitude that would offend. His other disappointment was that the Wiccan spells addressed only such things as improved health, reduced pain, and better personal relationships. Vague goals, no real magic of the type he wanted. Maybe he should find other deities or demons and address them in ancient Greek or Latin phrases? Or, gods forbid, in Aramaic or Amharic or Ge’ez or Tigrinya or … R’lyehian? Who knew?

Perhaps it was his lack of experience or his methods or his uncertainty or attitude, or all of the above, but none of Arthur’s attempts to address a supernatural presence, the hours spent seated on the floor in the near-dark, ever elicited a noticeable response. The local practitioners that he contacted were mostly willing to discuss his questions, at least once he had demonstrated that he had done his homework and offered to meet them at a restaurant or tea shop, often with him paying the bill, but their advice was, in ways similar to what he had found in the grimoires, uncomfortably variable and too often contradictory. All he did was waste a lot more time and money, to add to what he had already spent on candles and books and rocks.

Well, maybe it wasn’t an entire waste, he eventually told himself; he had certainly learned a great deal about the arcane world of the supernatural and its practitioners. The whole thing just began to seem much too complex and uncertain. And maybe even ineffective, another set of tricks, this time a sleight of mind. And it seemed to be no more of a path toward what he wanted from magic than the illusions he had mastered, with the added limitation that this new path was a solitary one, with none of the positive feedback that had once been provided by audience acclaim. Maybe, he decided, it was time to return to cards and distractive patter and hidden objects. At least now he had more information to add to his comments about angels and demons and historic documents, exotic details that he could use to spice up his performances. It wasn’t really a loss. It might even make it all more fun.
