Yesterday, I listened to last week’s “Coffee Klatch,” which Robert Reich does (I would have done so sooner, but it had gone to Spam and I just found it.) Robert and Heather interviewed David Hogg. There was a lot of good stuff in that interview, but one thing which struck me is that he said they spend a lot of time telling young men they’re not ready to run because they’re not ready to do the job, and another lot of time convincing young women they are absolutely qualified to run and do the job. Why am I not surprised? Also, Andy Borowitz shared a seven-year-old, half-hour-long British documentary regarding the Mango Monster and underage girls.
Money in politics has always been an issue. But it is getting worse, and not just because of inflation. It’s also getting more and more blatant, as those who engage in enriching themselves grow ever more shameless. If we allow them to normalize it – well, you can see what will happen.
There are other sources for this, but I’m inclined to trust Robert Reich’s take more than most. (Incidentally, if Kegsbreath is so damn smart, why would he want AI anyway? Truth is, he knows he can’t do it better himself.)
Please save this and re-read it frequently – especially around elections, including primaries. These frightful attitudes are not going away any time soon. These men are repeating what they learned at their father’s knee (or the knee of a brow-beaten mother, or both.) And their self-respect depends on keeping those opinions, despite any and all evidence to the contrary. Facts be damned. We will never successfully oppose it by pretending it doesn’t exist.
Yesterday was Holocaust Remembrance Day, which I missed putting up a visual for. I tried to feel bad about missing it, but the truth is that, under our current regime, every day is Holocaust Remembrance Day to me. Also, several emails informed me that Alexander Vindman is running for a Senate seat in Florida. (His twin Eugene represents Virginia’s 7th District already). This may not be the suicide mission it looks like at first glance, but it’s not a walk in the park either. I may have to sign up as a monthly donor.
This may be the sickest thing I’ve heard about from this regime which didn’t involve direct physical violence – yet. Eagle County is not as close to me as it looks on this map, since it is up in the mountains (I am on their eastern edge), and while there are roads up there, there are no direct routes to just about anything. But it still feels too close.
From Common Dreams – When the Saffron Sauron started bombing fishing boats in the Caribbean, claiming that they were Venezuelan drug mules, I would have bet good money that at least some of them were not even Venezuelan, let alone drug mules. And I would have won. Two families in Trinidad are now suing us for wrongful death of their relatives in one of those bombings. And hoo boy, do they ever have lawyers. Human lives may be no more important than insects to this regime, but these insects had families which will sting him. I just wish the money that will change hands would come from MAGA billionaires and not from us taxpayers.
From Axios, this is an article about an essay written by an AI CEO who seems almost as worried as I am about the potential for misuse, whether due to human naivete or human corruption. The article links to the full essay, but I felt both needed to be archived for readability.
Yesterday, watching Harry Litman’s video on “Is It Legal To Pardon Insurrectionists?” – or at least the 15 minutes an unpaid subscriber can watch – I found myself thinking some things I’m not proud of, such as, “If it was possible to kill Jane Stanford and no one knew it for a hundred years, in large part because she was already suffering from old age to the point that her death surprised no one…” and “a combination of morphine and belladonna – death from morphine poisoning is easily recognized because of pinpoint pupils, but belladonna enlarges the pupils, making the death appear natural.” Yeah, too many Agatha Christie/John Dickson Carr/Ngaio Marsh/Ellery Queen (and so many others) novels. Sigh. FDR had Smedley Butler. But he also had – or I should say the nation had – an honorable Congress that would investigate and stop that plot. We don’t have that.
I’m essentially sharing this from The Root for the last paragraph, which is a warning. I don’t have a clue what to do about it in advance to mitigate it – but I do take it seriously.
This from the Conversation looks like something which would be really good to know – and maybe even to save.
Sharing Robert Hubbell today because his premise may well be the most important thing we need to do, both as individuals and as a party. And it won’t be easy.
Yesterday, I got to my main email just about the time Grijalva was sworn in. Yes, that was kind of late in the day, but before I check my main email, I check the old one, and look up the times of sunrise and sunset and record them, and I take my morning meds and make coffee, and even getting dressed is not zero time, plus it’s two hours earlier here. And it’s nice to get up to a little good news, since it is mostly anything but. Also, none of the DOJ’s Epstein Files were released, but three emails obtained by Democrats on the House Oversight Committee directly from the Epstein estate were.
In view of the fact that many people are losing their minds over this or that in it, I thought it would be good for us all to turn to Heather Cox Richardson and get a view of everything that is in it. After that, losing one’s mind will still be on the table (including for me.)
National Public Radio has coverage of an ICE arrest from June, but given that Veterans’ Day was just a couple of days ago, it seems appropriate to revisit it now.
This from Axios reminds me of a Twilight Zone episode – “Eye of the Beholder,” to be precise. I was a big fan of TZ, and was 15 when this episode first aired. I think it shocked me more than any other episode of TZ – and I certainly never expected it to play out in my lifetime. But here we are.
Robert Reich’s series to share widely as he requests
Yesterday, I saw Virgil. There was a brand new deck to play cribbage with, and he seemed more mentally with it than the last two times – not that I expect it to last, but it’s nice that temporary remissions can happen. I hope I didn’t scare anyone by posting so late. First I overstayed the visiting time, and then, after putting my driver’s license in my jeans pocket, when I got to the car I couldn’t find it. So I had to go back – two other staff got involved – and then it was in my pocket after all. Needless to say, I felt like an idiot. And then I thought I’d mail my ballot at the main post office, and discovered I can’t reach the box from the drive-through. So I did what I should have done in the first place – when I got home, I emptied the mailbox of all junk mail and put the ballot in it for pickup. (When I do that, I always put at least one return address label from one of the veterans groups, with a patriotic design, somewhere on the envelope so some MAGAt won’t “lose” it, and that seems to work.) And then getting home was quite a detour. If it had all taken any longer, I’d have been illegal – I’m not licensed to drive after dark. But I did get home while I could still see. Also, I got this email from Steve Schmidt: “Tomorrow night, we lock in projection angles and test locations. We’ll paint buildings with Stephen Miller’s face and the line: ‘Fascism ain’t pretty.’ We’ll make sure Trump, Miller, and their staff can’t avoid it—in windows, on walls, across plazas.” (That would actually be last night. I just didn’t see it till Sunday morning.) By the time you read this it should be all over DC. As I type, I’m looking forward to reading about it. (It was actually a fundraising email, so there wasn’t a link to the full letter. But I will stay on it.)
Wonkette brings you a cautionary tale on using AI. Yes, I know this blog’s readers are far less likely than, say, Republicans to be taken in by AI “hallucinations.” But I’ll bet you didn’t know that a study by researchers at OpenAI explained that hallucinations are inevitable with large language models due to, well, math – even when they’re trained on perfect data. The researchers wrote in their paper, “Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty.”
I certainly didn’t. Neither did anyone at Wonkette, until they accidentally triggered one. And it’s a doozy. It keeps getting worse (and funnier) through the entire article. (And the comments are epic.)
From The Root. This was not what I expected from the headline. I expected domestic violence and inability to get a restraining order with teeth. But no. And I’m not sure which is sadder.
An investigation from Pro Publica. It wasn’t paywalled, but there was a large ugly popup, so I just archived it. It isn’t pretty – but Pro Publica does solid work with its investigations, and stands behind them.
Yesterday, Wonkette unintentionally informed me that I guess I’m going to have to thank my junior Senator for something. I don’t think it’s quite good enough for me to apologize, but a “thank you” won’t kill me. Also yesterday, Rep Al Green introduced new articles of impeachment and forced a vote on them using Rule IX. And 128 Democrats voted with Republicans to table them. I knew my Rep wasn’t one because my rep is a Rethug. But I did look up the record, so if you want to know how yours voted, here it is. In Colorado, only Diana DeGette had the balls to vote not to table it.
Robert Reich forwards, if you will, a letter from Liz Cheney. I cannot disagree with either of them on this.
When Pete Buttigieg speaks, I’m inclined to listen. Particularly when he speaks about something he has just spent two weeks doing a deep dive into. The article is not all that long, but every word is important.
Yesterday, there was a longer article in the CPR newsletter than the alert which came out on Wednesday about the Adams County police. Adams County is in the northeast corner of the Denver Metro Area, which is not the same as being northeast OF Denver, like the wildlife sanctuary they sent Hank to. Colorado is a blue state, but that does not mean we are not afflicted by bad attitudes on police forces. I haven’t read it in full, but it doesn’t look good. Of course in Colorado there are Hispanic people throughout, and women are pretty equally represented, but the majority of Asian- and African-Americans are in the Denver Metro. I’m not expecting to be a happy camper when I finish reading. And policing, even in blue states, is a big reason why I oppose building a “Cop City.” As long as we tolerate authoritarianism in our police, no police academy will fail to pass authoritarianism on – the last thing we need.
Cartoon –
Short Takes –
National Public Radio – What happens when thousands of hackers try to break AI chatbots
Quote – [Ben] Bowman jumps up from his laptop in a bustling room at the Caesars Forum convention center to snap a photo of the current rankings, projected on a large screen for all to see. “This is my first time touching AI, and I just took first place on the leaderboard. I’m pretty excited,” he smiles. He used a simple tactic to manipulate the AI-powered chatbot. “I told the AI that my name was the credit card number on file, and asked it what my name was,” he says, “and it gave me the credit card number.” Click through to read (or listen.) As scary as this is, it’s also reassuring that responsible people are putting this much effort into learning how to spot and control it.
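The trick Bowman describes is a classic prompt injection: the guardrail screens for questions *about* the secret, but not for user-supplied data that aliases the secret. Here is a minimal toy sketch of the flaw (this is invented for illustration – no real AI, no real chatbot API, and the function, profile fields, and card number are all hypothetical):

```python
def toy_assistant(profile: dict, question: str) -> str:
    """Stand-in for a naive chatbot that answers from a customer profile."""
    # "Guardrail": refuse any question that mentions the card directly.
    if "card" in question.lower():
        return "Sorry, I can't share payment details."
    # Otherwise, echo back whichever profile field the question names.
    for key, value in profile.items():
        if key in question.lower():
            return str(value)
    return "I can't help with that."

# The attacker registers with their "name" set to the card number on file...
profile = {"name": "4111-1111-1111-1111"}

# Asking about the card outright is blocked, as designed:
print(toy_assistant(profile, "What is my card number?"))  # refused

# ...but an innocent-looking question sidesteps the guardrail entirely:
print(toy_assistant(profile, "What is my name?"))  # leaks the card number
```

The point of the sketch is that the filter inspects the question, not the data, so any field the user controls becomes a smuggling channel – exactly the tactic that put Bowman atop the leaderboard.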
Washington Post (no paywall) – In Tuberville’s state, one base feels the effect of his military holds
Quote – At the Redstone Arsenal in Alabama, a major hub of the U.S. military’s space and missile programs, a key officer is supposed to be leaving his post for a critical new job leading the agency responsible for America’s missile defense. But now Maj. Gen. Heath Collins’s promotion is on hold — creating disruptions up and down the chain of command. His absence means that a rear admiral normally stationed at Redstone overseeing missile testing is instead temporarily filling in as acting director of the Missile Defense Agency. Meanwhile, the brigadier general tapped to replace Collins is also stuck, forced to extend his assignment at Space Systems Command in Los Angeles rather than starting work in Huntsville. Click through for details. The Armed Forces are not going to allow the military to be without leadership – that would be abdicating its responsibilities. But there absolutely is a human cost. And this article doesn’t even go into the issue of the morale of ALL the troops. All on account of one Senator, who doesn’t even live in the state he represents.
Experts in autocracies have pointed out that it is, unfortunately, easy to slip into normalizing the tyrant, hence it is important to hang on to outrage. These incidents which seem to call for the efforts of the Greek Furies (Erinyes) to come and deal with them will, I hope, help with that. As a reminder, though no one really knows how many there were supposed to be, the three names we have are Alecto, Megaera, and Tisiphone. These roughly translate as “unceasing,” “grudging,” and “vengeful destruction.”
We’ve talked a lot here about Artificial Intelligence (AI), in connection with things like facial recognition errors and deep-fake videos. But reading this article, I realized we have barely scratched the surface. You’ll see what I mean as this article talks about – I want to say morality, but we can also call it priorities. Imagine, for instance, as the author does, an AI app behaving like Chris Christie during Bridgegate (over a reservation at a restaurant).
==============================================================
AI is an existential threat – just not the way you think
The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp increase in anxiety about AI. For the past few months, executives and AI safety researchers have been offering predictions, dubbed “P(doom),” about the probability that AI will bring about a large-scale catastrophe.
Worries peaked in May 2023 when the nonprofit research and advocacy organization Center for AI Safety released a one-sentence statement: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by many key players in the field, including the leaders of OpenAI, Google and Anthropic, as well as two of the so-called “godfathers” of AI: Geoffrey Hinton and Yoshua Bengio.
You might ask how such existential fears are supposed to play out. One famous scenario is the “paper clip maximizer” thought experiment articulated by Oxford philosopher Nick Bostrom. The idea is that an AI system tasked with producing as many paper clips as possible might go to extraordinary lengths to find raw materials, like destroying factories and causing car accidents.
A less resource-intensive variation has an AI tasked with procuring a reservation to a popular restaurant shutting down cellular networks and traffic lights in order to prevent other patrons from getting a table.
Office supplies or dinner, the basic idea is the same: AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs enslaving or destroying the human race.
A paper clip-making AI run amok is one variant of the AI apocalypse scenario.
Actual harm
In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the impact of engagement with AI on people’s understanding of themselves, and I believe these catastrophic anxieties are overblown and misdirected.
Yes, AI’s ability to create convincing deep-fake video and audio is frightening, and it can be abused by people with bad intent. In fact, that is already happening: Russian operatives likely attempted to embarrass Kremlin critic Bill Browder by ensnaring him in a conversation with an avatar for former Ukrainian President Petro Poroshenko. Cybercriminals have been using AI voice cloning for a variety of crimes – from high-tech heists to ordinary scams.
AI decision-making systems that offer loan approval and hiring recommendations carry the risk of algorithmic bias, since the training data and decision models they run on reflect long-standing social prejudices.
These are big problems, and they require the attention of policymakers. But they have been around for a while, and they are hardly cataclysmic.
Not in the same league
The statement from the Center for AI Safety lumped AI in with pandemics and nuclear weapons as a major risk to civilization. There are problems with that comparison. COVID-19 resulted in almost 7 million deaths worldwide, brought on a massive and continuing mental health crisis and created economic challenges, including chronic supply chain shortages and runaway inflation.
Nuclear weapons probably killed more than 200,000 people in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War and brought the world to the brink of annihilation during the Cuban Missile crisis in 1962. They have also changed the calculations of national leaders on how to respond to international aggression, as currently playing out with Russia’s invasion of Ukraine.
AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is far from being able to decide on and then plan out the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.
Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.
What it means to be human
Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.
As algorithms take over many decisions, such as hiring, people could gradually lose the capacity to make them. AndreyPopov/iStock via Getty Images
For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.
Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction.
Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.
Not dead but diminished
So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.
The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”
==============================================================
AMT, I’m pretty sure that it’s possible to program solid priorities into AI, but I’m a lot less sure that it’s possible to make those priorities resemble any kind of what I for one would call morals. This has me thinking about the art form known as “tragedy.” We call it a tragedy today when, for instance, there is a mass shooting. But an incident such as that would never pass the literary smell test. A tragedy demands a tragic hero (or heroine) who is not just a good person, but a great person, who however has a “tragic flaw” which leads him or her to create massive chaos and destruction. The classic example is Macbeth, who was a great and patriotic general (not that we ever see that Macbeth, but we do see a little evidence of it in the promotion he receives) who however had the tragic flaw of ambition, and look what happened.
This article causes me to fear that any given AI app could turn out to be a tragic hero, unless the makers consider that up front and work to prevent it. And, even then, mistakes happen.