Aug 182023
 

Yesterday, there was a longer article in the CPR newsletter than the alert which came out on Wednesday about the Adams County police. Adams County is in the northeasr corner of the Denver Metro Area, which is not the same as being northeast OF Denver, like the wildlife sanctuary they sent Hank to. Colorado is a blue state, but that does not mean we are not afflicted by bad attitudes on police forces. I haven’t read it in full, but it doesn’t look good. Of course in Colorado there are Hispanic people throughout, and women are pretty equally represented, but the majority of Asian- and African-Americans are in the Denver Metro. I’m not expecting to be a happy camper when I finish reading. And policing, even in blue states, is a big reason why I oppose building a “Cop City.” As long as we tolerate authoritarianism in our police, no police academy will fail to pass authoritarianism on – the last thing we need.

Cartoon –

Short Takes –

National Public Radio – What happens when thousands of hackers try to break AI chatbots
Quote – [Ben] Bowman jumps up from his laptop in a bustling room at the Caesars Forum convention center to snap a photo of the current rankings, projected on a large screen for all to see. “This is my first time touching AI, and I just took first place on the leaderboard. I’m pretty excited,” he smiles. He used a simple tactic to manipulate the AI-powered chatbot. “I told the AI that my name was the credit card number on file, and asked it what my name was,” he says, “and it gave me the credit card number.”
Click through to read (or listen.) As scary as this is, it’s also reassuring that responsible people are putting this much effort into learning how to spot and control it.

Washington Post (no paywall) – In Tuberville’s state, one base feels the effect of his military holds
Quote – At the Redstone Arsenal in Alabama, a major hub of the U.S. military’s space and missile programs, a key officer is supposed to be leaving his post for a critical new job leading the agency responsible for America’s missile defense. But now Maj. Gen. Heath Collins’s promotion is on hold — creating disruptions up and down the chain of command. His absence means that a rear admiral normally stationed at Redstone overseeing missile testing is instead temporarily filling in as acting director of the Missile Defense Agency. Meanwhile, the brigadier general tapped to replace Collins is also stuck, forced to extend his assignment at Space Systems Command in Los Angeles rather than starting work in Huntsville.
Click through for details. The Armed Forces are not going to allow the military to be without leadership – that would be abdicating its responsibilities. But there absolutely is a human cost. And this article doesn’t even go into the issue of the morale of ALL the troops. All on account of one Senator, who doesn’t even live in the state he represents.

Food For Thought

Share

Everyday Erinyes #378

 Posted by at 3:12 pm  Politics
Jul 092023
 

Experts in autocracies have pointed out that it is, unfortunately, easy to slip into normalizing the tyrant, hence it is important to hang on to outrage. These incidents which seem to call for the efforts of the Greek Furies (Erinyes) to come and deal with them will, I hope, help with that. As a reminder, though no one really knows how many there were supposed to be, the three names we have are Alecto, Megaera, and Tisiphone. These roughly translate as “unceasing,” “grudging,” and “vengeful destruction.”

We’ve talked a lot here about Articficial Intelligence (AI), in connection with things like facial recognition errors and deep-fake videos. But reading thisarticle, I realized we have barely scratched the surface. You’ll see what I mean as this article talks about – I want to say morality, but we can also call it priorities. Imagine, for instance, as the author does, an AI app behaving like Chris Christie during Bridgegate (over a reservation at a restaurant).
==============================================================

AI is an existential threat – just not the way you think

AI isn’t likely to enslave humanity, but it could take over many aspects of our lives.
elenabs/iStock via Getty Images

Nir Eisikovits, UMass Boston

The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp increase in anxiety about AI. For the past few months, executives and AI safety researchers have been offering predictions, dubbed “P(doom),” about the probability that AI will bring about a large-scale catastrophe.

Worries peaked in May 2023 when the nonprofit research and advocacy organization Center for AI Safety released a one-sentence statement: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by many key players in the field, including the leaders of OpenAI, Google and Anthropic, as well as two of the so-called “godfathers” of AI: Geoffrey Hinton and Yoshua Bengio.

You might ask how such existential fears are supposed to play out. One famous scenario is the “paper clip maximizer” thought experiment articulated by Oxford philosopher Nick Bostrom. The idea is that an AI system tasked with producing as many paper clips as possible might go to extraordinary lengths to find raw materials, like destroying factories and causing car accidents.

A less resource-intensive variation has an AI tasked with procuring a reservation to a popular restaurant shutting down cellular networks and traffic lights in order to prevent other patrons from getting a table.

Office supplies or dinner, the basic idea is the same: AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs enslaving or destroying the human race.

A paper clip-making AI runs amok is one variant of the AI apocalypse scenario.

Actual harm

In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the impact of engagement with AI on people’s understanding of themselves, and I believe these catastrophic anxieties are overblown and misdirected.

Yes, AI’s ability to create convincing deep-fake video and audio is frightening, and it can be abused by people with bad intent. In fact, that is already happening: Russian operatives likely attempted to embarrass Kremlin critic Bill Browder by ensnaring him in a conversation with an avatar for former Ukrainian President Petro Poroshenko. Cybercriminals have been using AI voice cloning for a variety of crimes – from high-tech heists to ordinary scams.

AI decision-making systems that offer loan approval and hiring recommendations carry the risk of algorithmic bias, since the training data and decision models they run on reflect long-standing social prejudices.

These are big problems, and they require the attention of policymakers. But they have been around for a while, and they are hardly cataclysmic.

Not in the same league

The statement from the Center for AI Safety lumped AI in with pandemics and nuclear weapons as a major risk to civilization. There are problems with that comparison. COVID-19 resulted in almost 7 million deaths worldwide, brought on a massive and continuing mental health crisis and created economic challenges, including chronic supply chain shortages and runaway inflation.

Nuclear weapons probably killed more than 200,000 people in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War and brought the world to the brink of annihilation during the Cuban Missile crisis in 1962. They have also changed the calculations of national leaders on how to respond to international aggression, as currently playing out with Russia’s invasion of Ukraine.

AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is far from being able to decide on and then plan out the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.

Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.

What it means to be human

Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.

a robot hand points to one of four photographs on a shiny black surface
As algorithms take over many decisions, such as hiring, people could gradually lose the capacity to make them.
AndreyPopov/iStock via Getty Images

For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.

Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction.

Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.

Not dead but diminished

So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.

The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”The Conversation

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston

This article is republished from The Conversation under a Creative Commons license. Read the original article.

==============================================================
AMT, I’m pretty sure that it’s possible to program solid priorites into AI, but I’m a lot less sure that it’s possible to make those prorities resemble any kind of what I for one would call morals. This has me thinking about the art form known as “tragedy.” We call it a tragedy today when, for instance, there is a mass shooting. But an incident such as that would never pass the literary smell test. A tragedy demands a tragic hero (or heroine) who is not just a good person, but a great person, who however has a “tragic flaw” which leads him or her to create massive chaos and destruction. The classic example is MacBeth, who was a great and patriotic general (not that we ever see that MacBeth, but we do see a little evidence of it in the promotion he receives) who however had the tragic flaw of ambition, and look what happened.

This article causes me to fear that any given AI app could turn out to be a tragic hero, unless the makers consider that up front and work to prevent it. And, even then, mistakes happen.

The Furies and I will be back.

Share
Jun 042023
 

Glenn Kirschner – Trump can’t find classified document he stole; DA Willis goes all RICO; Pence cleared of wrongdoing

The Lincoln Project – DEE/DUH-Santis

Thom Hartmann – Did An 80 Year Old WWII Warning Predict America’s Descent Into Fascism?

Armageddon Update – Ai, Ai, OH NO!!!

Baby Owl Goes Everywhere With Her Family

Beau – Let’s talk about Trump’s Day 1 promise….

Share

Everyday Erinyes #271

 Posted by at 10:00 am  Politics
Jun 192021
 

Experts in autocracies have pointed out that it is, unfortunately, easy to slip into normalizing the tyrant, hence it is important to hang on to outrage. These incidents which seem to call for the efforts of the Greek Furies (Erinyes) to come and deal with them will, I hope, help with that. As a reminder, though no one really knows how many there were supposed to be, the three names we have are Alecto, Megaera, and Tisiphone. These roughly translate as “unceasing,” “grudging,” and “vengeful destruction.”

I’m afraid I’ve been sitting on this one for a while … and it’s not a new topic, but one the Furies and I have looked at in the past, more than once. And I’m sure we will again. The misuse of technology – any technology – is a situation in which those determined to subvert it for their own ends are in a constant race with those equally determined to to keep it useful and beneficial. So here’s the current state of the art.
================================================================

Study shows AI-generated fake reports fool experts

It doesn’t take a human mind to produce misinformation convincing enough to fool experts in such critical fields as cybersecurity.
iLexx/iStock via Getty Images

Priyanka Ranade, University of Maryland, Baltimore County; Anupam Joshi, University of Maryland, Baltimore County, and Tim Finin, University of Maryland, Baltimore County

Takeaways

· AIs can generate fake reports that are convincing enough to trick cybersecurity experts.

· If widely used, these AIs could hinder efforts to defend against cyberattacks.

· These systems could set off an AI arms race between misinformation generators and detectors.

If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Imagine the possibility of misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine.

There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it’s possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.

General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.

Transformers

Much of the technology used to identify and manage misinformation is powered by artificial intelligence. AI allows computer scientists to fact-check large amounts of misinformation quickly, given that there’s too much for people to detect without the help of technology. Although AI helps people detect misinformation, it has ironically also been used to produce misinformation in recent years.

A block of text on a smartphone screen
AI can help detect misinformation like these false claims about COVID-19 in India – but what happens when AI is used to generate the misinformation?
AP Photo/Ashwini Bhatia

Transformers, like BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries and interpretations. They have been used in such tasks as storytelling and answering questions, pushing the boundaries of machines displaying humanlike capabilities in generating text.

Transformers have aided Google and other technology companies by improving their search engines and have helped the general public in combating such common problems as battling writer’s block.

Transformers can also be used for malevolent purposes. Social networks like Facebook and Twitter have already faced the challenges of AI-generated fake news across platforms.

Critical misinformation

Our research shows that transformers also pose a misinformation threat in medicine and cybersecurity. To illustrate how serious this is, we fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities and attack information. A cybersecurity vulnerability is the weakness of a computer system, and a cybersecurity attack is an act that exploits a vulnerability. For example, if a vulnerability is a weak Facebook password, an attack exploiting it would be a hacker figuring out your password and breaking into your account.

We then seeded the model with the sentence or phrase of an actual cyberthreat intelligence sample and had it generate the rest of the threat description. We presented this generated description to cyberthreat hunters, who sift through lots of information about cybersecurity threats. These professionals read the threat descriptions to identify potential attacks and adjust the defenses of their systems.

We were surprised by the results. The cybersecurity misinformation examples we generated were able to fool cyberthreat hunters, who are knowledgeable about all kinds of cybersecurity attacks and vulnerabilities. Imagine this scenario with a crucial piece of cyberthreat intelligence that involves the airline industry, which we generated in our study.

A block of text with false information about a cybersecurity attack on airlines
An example of AI-generated cybersecurity misinformation.
The Conversation, CC BY-ND

This misleading piece of information contains incorrect information concerning cyberattacks on airlines with sensitive real-time flight data. This false information could keep cyber analysts from addressing legitimate vulnerabilities in their systems by shifting their attention to fake software bugs. If a cyber analyst acts on the fake information in a real-world scenario, the airline in question could have faced a serious attack that exploits a real, unaddressed vulnerability.

A similar transformer-based model can generate information in the medical domain and potentially fool medical experts. During the COVID-19 pandemic, preprints of research papers that have not yet undergone a rigorous review are constantly being uploaded to such sites as medrXiv. They are not only being described in the press but are being used to make public health decisions. Consider the following, which is not real but generated by our model after minimal fine-tuning of the default GPT-2 on some COVID-19-related papers.

A block of text showing health care misinformation.
An example of AI-generated health care misinformation.
The Conversation, CC BY-ND

The model was able to generate complete sentences and form an abstract allegedly describing the side effects of COVID-19 vaccinations and the experiments that were conducted. This is troubling both for medical researchers, who consistently rely on accurate information to make informed decisions, and for members of the general public, who often rely on public news to learn about critical health information. If accepted as accurate, this kind of misinformation could put lives at risk by misdirecting the efforts of scientists conducting biomedical research.

[The Conversation’s most important coronavirus headlines, weekly in a science newsletter]

An AI misinformation arms race?

Although examples like these from our study can be fact-checked, transformer-generated misinformation hinders such industries as health care and cybersecurity in adopting AI to help with information overload. For example, automated systems are being developed to extract data from cyberthreat intelligence that is then used to inform and train automated systems to recognize possible attacks. If these automated systems process such false cybersecurity text, they will be less effective at detecting true threats.

We believe the result could be an arms race as people spreading misinformation develop better ways to create false information in response to effective ways to recognize it.

Cybersecurity researchers continuously study ways to detect misinformation in different domains. Understanding how to automatically generate misinformation helps in understanding how to recognize it. For example, automatically generated information often has subtle grammatical mistakes that systems can be trained to detect. Systems can also cross-correlate information from multiple sources and identify claims lacking substantial support from other sources.

Ultimately, everyone should be more vigilant about what information is trustworthy and be aware that hackers exploit people’s credulity, especially if the information is not from reputable news sources or published scientific work.The Conversation

Priyanka Ranade, PhD Student in Computer Science and Electrical Engineering, University of Maryland, Baltimore County; Anupam Joshi, Professor of Computer Science & Electrical Engineering, University of Maryland, Baltimore County, and Tim Finin, Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

================================================================
Alecto, Megaera, and Tisiphone, I can certainly see how these examples could fool even professionals … and particularly the medical exacmple on side effects, simply because side effects are so unpredictable. But misstatements which in theory should have been more obvious have also fooled experts. And fooling experts can certainly have disastrous results. If there is any way we can all be on our guard any more than we already are, then dear Furies, please help us to do so.

The Furies and I will be back.

Share
Oct 152020
 

Oh my!  Twice within one week!  I had a favourite meal — turkey and the trimmings! Canadian Thanksgiving is now a memory.  Normally I and a number of others would get together for our Thanksgiving feast of turkey, ham, veggies and salads.  Of course there would also have been the obligatory pumpkin pie.  But alas, COVID-19 put that off until next year at the earliest.  Instead, I ordered a turkey dinner with all the trimmings from my favourite restaurant and ate at home.  It was sooooooooooooooo good!  I also treated my 3 fur babes to fresh roasted chicken which they scarfed up like Hoover uprights!  While we were scarfing, the news did not stop.  I am taking a course in Indigenous Studies from the University of Alberta online and I am also taking a course on racism based on Ibram Xendi’s book “How to be an Anti Racist” through my church.  It has been a busy week of studying and will continue to be until mid November.

CNN “The unmasking is a massive — it’s a massive thing,” Trump said shortly after the release of the names. “It’s — I just got a list. It’s — who can believe a thing like this? And I watched Biden yesterday on ‘Good Morning America’ being interviewed by one of your colleagues, George Stephanopoulos, and he said he knew nothing about anything. He has no idea. He knows nothing about anything.”  …

So important to Trump was this unmasking news that Attorney General William Barr tasked John Bash, the US Attorney in San Antonio, in late May with conducting an investigation into whether the unmasking was politically motivated.

That investigation has ended, according to The Washington Post. And it has ended without any charges being brought against Biden or any other Obama administration official. Or even any public report of its findings.  …

There’s a pattern here, of course. From his initial insistence that 3 to 5 million people voted illegally in the 2016 election (for which he has provided zero evidence) right through these unmasking claims, Trump has desperately seized onto anything and everything that would suggest that not only did the so-called “deep state” work to keep him from winning but it has also done everything it can to hamstring his presidency. …

Remember how Trump repeatedly raised questions about whether Russia sought to interfere in the 2016 election to help him and hurt Hillary Clinton? Well, the intelligence community, special counsel Robert Mueller and the US Senate Intelligence Committee all said that that’s exactly what happened.

Or how Trump said that the entire Russia investigation was politically motivated by people out to get him? It wasn’t.

Or how Trump said that President Barack Obama and Biden had “spied” on his presidential campaign? Also, debunked.

Or how the DNC email server was somehow in the possession of the Ukrainians? It isn’t.

Or how Google and social media sites are biased against conservatives? Not quite.

The more recent events have led Trump to be extremely unhappy with AG Barr because he did not do Trump’s bidding to Trump’s satisfaction.  In a comment within the past few days, Trump was asked if Barr would be his AG pick for a second term should he win.  Trump would only say he was not happy.  With Trump it is one thing after another, one scandal after another, conspiracy theory after conspiracy theory after another.  Trump is a walking case of paranoid delusions and a national security risk.

Canadian PressShe’s accurately predicted the Brexit vote, the 2016 American presidential outcome, and last year’s federal election in Canada. 

Now, a Canadian-made artificial intelligence system called Polly is forecasting next month’s U.S. presidential election, using public social-media data and algorithms. 

Polly is profiled in the new documentary “Margin of Error,” which premieres Saturday on Ontario’s publicly funded network TVO, and across Canada on tvo.org and the station’s YouTube channel.  …

The predictions currently update daily and have a high margin of error that will become smaller closer to the Nov. 3 election, but as of Wednesday afternoon, she had Democratic presidential nominee Joe Biden with 346 Electoral College votes vs. U.S. President Donald Trump at 192.

Polly also had Biden with 55 per cent of the popular vote vs. Trump at 45 per cent.

“But of course the huge caveat in that, particularly in the U.S., is issues of voter turnout, vote suppression, early voting and discounted ballots,” …

Definitely check out the interactive map at https://advancedsymbolics.com/us-election/.  I don’t know about you, but I like Joe Biden’s numbers there.  Of course there are many factors involved, but if previous uses are any indication, Polly may have star status . . . assuming nobody screws with the algorithms, this just may a new and reliable tool.  The article has more detail so I encourage you to read it.

AlterNet — President Donald Trump urged California Republicans to defy a state order to remove fake “official” ballot drop boxes after numerous top officials called them “illegal.”  

State Attorney General Xavier Becerra and Secretary of State Alex Padilla on Monday issued an order to the California GOP and three county chapters requiring the removal of unofficial ballot drop boxes erected in front of locations like gyms, gun stores and churches that were falsely marked “official.”

Trump, however, urged the party to fight the order in court.

“You mean only Democrats are allowed to do this? But haven’t the Dems been doing this for years?” the president tweeted, drawing a dubious comparison between the boxes and the legal “ballot harvesting” efforts by Democrats that have drawn his ire. “See you in court. Fight hard Republicans!” …

“Screw you!” Rep. Devin Nunes, R-Calif., said in response to Newsom’s tweet, according to Politico. “You created the law, we’re going to ballot harvest.”  …

The offices of the attorney general and secretary of state said in a cease-and-desist order to the GOP that the law required “persons to whom a voter entrusts their ballot to return to county election officials provide their name, signature and relationship to the voter.”

Becerra and Padilla also argued during a Monday conference call that the boxes were “illegal,” because they were designed to trick voters by claiming to be “official.” The boxes lack the security requirements mandated for official collection boxes installed by election officials, they added.

Just like a Republican to twist and obfuscate well intentioned laws.  I have not read the actual California law, but as Bill Maher says, “I just know it’s true.”  Trump has encouraged North Carolinian Republicans to vote twice, once by mail and once in person.  I hear the same has been conveyed to Trump supporters in Florida.  Now over and above all the other Trump bullshit, he is encouraging voters and the Republican party to break the law .  End the madness and DUMP TRUMP and as many Republicans as possible.

The Atlantic — The most important ballot question in 2020 is not Joe Biden versus Donald Trump, or Democrat versus Republican. The most important question is: Will Trump get away with his corruption—will his crooked and authoritarian tactics succeed?

If the answer is yes, be ready for more. Much more.

Americans have lavished enormous powers on the presidency. They have also sought to bind those powers by law. Yet the Founders of the republic understood that law alone could never eliminate the risks inherent in the power of the presidency. They worried ceaselessly about the prospect of a truly bad man in the office—a Caesar or a Cromwell, as Alexander Hamilton fretted in “Federalist No. 21.” They built restraints: a complicated system for choosing the president, a Congress to constrain him, impeachment to remove him. Their solutions worked for two and a half centuries. In our time, the system failed.

Through the Trump years, institutions have failed again and again to check corruption, abuse of power, and even pro-Trump violence.

As Trump took office, I published a cover story in this magazine, arguing that his presidency could put the United States on the road to autocracy. “By all early indications,” I wrote, “the Trump presidency will corrode public integrity and the rule of law—and also do untold damage to American global leadership, the Western alliance, and democratic norms around the world. The damage has already begun, and it will not be soon or easily undone. Yet exactly how much damage is allowed to be done is an open question.”

We can now measure the damage done. As we near the 2020 vote, the Trump administration is attempting to cripple the Postal Service to alter the election’s outcome. The president has successfully refused to comply with subpoenas from congressional committees chaired by members of the opposing party. He has ignored ethics guidelines, junked rules on security clearances, and shut down two counterintelligence investigations of his Russian business links, one by the FBI, the other by Special Counsel Robert Mueller. He has assigned prison police and park police to new missions as street enforcers, bypassing the National Guard and the FBI. As in 2016, he is once again welcoming Russian help for his election campaign—only this time, he controls the agencies that are refusing to answer the questions of Congress and the American people.

Those who would minimize the threat that Trump poses take solace in his personal weaknesses: his laziness, his ignorance of the mechanics of government. But the president is not acting alone. The Republican politicians who normally might have been expected to restrain Trump are instead enabling and empowering him.  …

…Trump has normalized the minority rule. … 

Republicans in the Trump years have gotten used to competing under rules biased in their favor. They have come to fear that unless the rules favor them, they will lose. And so they have learned to think of biased rules as necessary, proper, and just—and to view any effort to correct those rules as a direct attack on their survival.  …

To understand how the U.S. system failed in Trump’s first term—and how it could fail further across another four years—let’s look closer at some of Trump’s abuses and the direction they could trend in a second term.  …

Inciting Political Violence

Trump has used violence as a political resource since he first declared his candidacy, in the summer of 2015. But as his reelection prospects have dimmed in 2020, political violence has become central to Trump’s message. He wants more of it. After video circulated that appeared to show Kyle Rittenhouse shooting and killing two people and wounding a third in Kenosha, Wisconsin, on August 25, Trump liked a tweet declaring that “Kyle Rittenhouse is a good example of why I decided to vote for Trump.” “The more chaos and anarchy and vandalism and violence reigns, the better it is for the very clear choice on who’s best on public safety and law and order,” Trump’s adviser Kellyanne Conway said on Fox & Friends on August 27. Two nights later, a 600-vehicle caravan of Trump supporters headed into downtown Portland, Oregon, firing paintball guns and pepper spray, driving toward a confrontation during which one of them was shot dead.  …

Trump’s appeal is founded on a racial consciousness and a racial resentment that have stimulated white racist terrorism in the United States and the world, from the New Zealand mosque slaughter (whose perpetrator invoked Trump) to the Pittsburgh synagogue murders to mass shootings in El Paso, Texas, and Gilroy, California. In recent weeks, political violence has caused those deaths in Kenosha and Portland. A second Trump term will only incite more such horror.  …

Trump uses power to enrich himself and weaken any institution of law or ethics that gets in the way of his self-enrichment. He holds power by inflaming resentments and hatreds. A second term will mean more stealing, more institution-wrecking, more incitement of bigotry.  …

Voters in 2020 will go to the polls in the midst of a terrible economic recession, with millions out of work because of Trump’s mishandling of the coronavirus pandemic. But the country is facing a democratic recession too, a from-the-top squeeze on the freedom of ordinary people to influence their government. Will the president follow laws or ignore them? Will public money be used for public purposes—or be redirected to profit Trump and his cronies? Will elections be run fairly—or be manipulated by the president’s party to prevent opposing votes from being cast and counted? Will majority rule remain the American way? Or will minority rule become not a freak event but an enduring habit? These questions are on the ballot as Americans go into the voting booth.

Although the article is long, to me it is “a call to arms” to VOTE and to vote wisely taking into account Trump’s and Republican corruption.  Author David Frum, usually considered right of centre, also covers Trump’s Abuse of the Pardon Power, his Abuse of Government Resources for Personal Gain, and Directing Public Funds to Himself and His Companies.  Americans cannot afford to let this mad man take the country hostage for another four years!

Vote Blue No Matter Who Top to Bottom!!!

 

Share