
Yesterday, I saw Virgil. There was a brand new deck to play cribbage with, and he seemed more mentally with it than the last two times – not that I expect it to last, but it’s nice that temporary remissions can happen.

I hope I didn’t scare anyone by posting so late. First I overstayed the visiting time, and then, after putting my driver’s license in my jeans pocket, when I got to the car I couldn’t find it. So I had to go back – two other staff got involved, and then it was in my pocket after all. Needless to say, I felt like an idiot. And then I thought I’d mail my ballot at the main post office, only to discover I can’t reach the box from the drive-through. So I did what I should have done in the first place – when I got home, I emptied the mailbox of all the junk mail and put the ballot in it for pickup. (When I do that, I always put at least one return address label from one of the veterans’ groups, with something patriotic on it, somewhere on the envelope so some MAGAt won’t “lose” it, and that seems to work.) And then getting home was quite a detour. If it had all taken any longer, I’d have been illegal – I’m not licensed to drive after dark. But I did get home while I could still see.

Also, I got this email from Steve Schmidt: “Tomorrow night, we lock in projection angles and test locations. We’ll paint buildings with Stephen Miller’s face and the line: ‘Fascism ain’t pretty.’ We’ll make sure Trump, Miller, and their staff can’t avoid it – in windows, on walls, across plazas.” (That would actually be last night. I just didn’t see it till Sunday morning.) By the time you read this, it should be all over DC. As I type, I’m looking forward to reading about it. (It was actually a fundraising email, so there wasn’t a link to the full letter. But I will stay on it.)
Wonkette brings you a cautionary tale on using AI. Yes, I know this blog’s readers are far less likely than, say, Republicans to be taken in by AI “hallucinations.” But I’ll bet you didn’t know that:
“[A] study by researchers at OpenAI explained that hallucinations are inevitable with large language models due to, well, math. Even when they’re trained on perfect data. The researchers wrote in their paper, ‘Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty.’”
I certainly didn’t. Neither did anyone at Wonkette, until they accidentally triggered one. And it’s a doozy. It keeps getting worse (and funnier) through the entire article. (And the comments are epic.)
From The Root. This was not what I expected from the headline. I expected domestic violence and an inability to get a restraining order with teeth. But no. And I’m not sure which is sadder.
An investigation from ProPublica. It wasn’t paywalled, but there was a large, ugly popup, so I just archived it. It isn’t pretty – but ProPublica does solid work with its investigations and stands behind them.

