Oct 09 2021
 

Yesterday, I realized I am starting to run ahead on news stories. I’ve had times when I was running ahead on videos (and I am now, too, but only a little bit), but I haven’t seen this many good stories all at once since I started. And I hate to cut most of them out. So I hope you’ll be tolerant of the stories getting posted a bit late, and if there’s breaking news that you think we shouldn’t have to wait for, please feel free to put it in a comment.

Cartoon –

Short Takes –

Wonkette – Please Watch Frances Haugen’s ‘60 Minutes’ Interview Before Logging Onto Facebook Again
Quote – Facebook recruited Haugen in 2019 and she agreed to join only if she could work against disinformation because she’d watched a friend get trapped in the alternate reality of online conspiracy theories. She was assigned to Civic Integrity, a showy bit of PR sleight of hand intended to show that Facebook cared about disinformation during the 2020 election. However, the company pulled the plug once the election was over, which is true only in the most literal sense. Practically speaking, the Big Lie is actively gaining ground, arguably thanks to Facebook.
Click through. I am putting this here and not in the Video Thread, partly because it’s a little long (at 13:36), but partly so that more people will see it. I know Nameless watched it when it aired, and it wouldn’t surprise me if Colleen also did. But I didn’t, and everyone should.

Center for Media and Democracy – ALEC Leaders Boast About Anti-Abortion, Anti-Trans Bills
Quote – At the 40th anniversary meeting of the Council for National Policy (CNP) in May, ALEC leaders boasted about their extensive efforts to advance state legislation to severely restrict access to abortion and limit the rights of trans students, as well as voter suppression bills. CNP is a secretive network of far-right Christian political figures and donors that works behind the scenes to influence Washington.
Click through for the status of the investigation. It just never ends!

Mother Jones – This Is the Newest Front in Anti-Vaxxers’ War on Instagram Misinfo Controls
Quote – Janny Organically is a person. Or maybe she’s a few people—it’s hard to tell. At any rate, Janny Organically is an Instagram account, and, with 104,000 followers, a decently popular one at that. Fans enjoy stylishly composed memes in fonts and hues that look like they might belong on a tea towel in a hipster souvenir shop, with lightly anti-authoritarian yet vague messaging. Janny Organically might be urging you to try a new yoga class, or fomenting a revolution, it’s not entirely clear. “Be careful not to confuse cowardice with morality,” reads one post in a script that evokes a 1970s romance novel, in front of some softly blurred flowers. Another one is less subtle: against a sepia-filtered photo of a desert road, are the words, “Open war is upon you, whether you would risk it or not.”
Click through for story. This is the nightmare of what happens when people with actual brains and skills get involved in right wing disinformation. It is horrifying.

Food for Thought –


Everyday Erinyes #271

 Posted by at 10:00 am  Politics
Jun 19 2021
 

Experts in autocracies have pointed out that it is, unfortunately, easy to slip into normalizing the tyrant, hence it is important to hang on to outrage. These incidents which seem to call for the efforts of the Greek Furies (Erinyes) to come and deal with them will, I hope, help with that. As a reminder, though no one really knows how many there were supposed to be, the three names we have are Alecto, Megaera, and Tisiphone. These roughly translate as “unceasing,” “grudging,” and “vengeful destruction.”

I’m afraid I’ve been sitting on this one for a while … and it’s not a new topic, but one the Furies and I have looked at in the past, more than once. And I’m sure we will again. The misuse of technology – any technology – is a situation in which those determined to subvert it for their own ends are in a constant race with those equally determined to keep it useful and beneficial. So here’s the current state of the art.
================================================================

Study shows AI-generated fake reports fool experts

It doesn’t take a human mind to produce misinformation convincing enough to fool experts in such critical fields as cybersecurity.
iLexx/iStock via Getty Images

Priyanka Ranade, University of Maryland, Baltimore County; Anupam Joshi, University of Maryland, Baltimore County, and Tim Finin, University of Maryland, Baltimore County

Takeaways

· AIs can generate fake reports that are convincing enough to trick cybersecurity experts.

· If widely used, these AIs could hinder efforts to defend against cyberattacks.

· These systems could set off an AI arms race between misinformation generators and detectors.

If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Imagine the possibility of misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine.

There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it’s possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.

General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.

Transformers

Much of the technology used to identify and manage misinformation is powered by artificial intelligence. AI allows computer scientists to fact-check large amounts of misinformation quickly, given that there’s too much for people to detect without the help of technology. Although AI helps people detect misinformation, it has ironically also been used to produce misinformation in recent years.

A block of text on a smartphone screen
AI can help detect misinformation like these false claims about COVID-19 in India – but what happens when AI is used to generate the misinformation?
AP Photo/Ashwini Bhatia

Transformers, like BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries and interpretations. They have been used in such tasks as storytelling and answering questions, pushing the boundaries of machines displaying humanlike capabilities in generating text.

Transformers have aided Google and other technology companies by improving their search engines and have helped the general public in combating such common problems as battling writer’s block.

Transformers can also be used for malevolent purposes. Social networks like Facebook and Twitter have already faced the challenges of AI-generated fake news across platforms.

Critical misinformation

Our research shows that transformers also pose a misinformation threat in medicine and cybersecurity. To illustrate how serious this is, we fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities and attack information. A cybersecurity vulnerability is the weakness of a computer system, and a cybersecurity attack is an act that exploits a vulnerability. For example, if a vulnerability is a weak Facebook password, an attack exploiting it would be a hacker figuring out your password and breaking into your account.

We then seeded the model with the sentence or phrase of an actual cyberthreat intelligence sample and had it generate the rest of the threat description. We presented this generated description to cyberthreat hunters, who sift through lots of information about cybersecurity threats. These professionals read the threat descriptions to identify potential attacks and adjust the defenses of their systems.
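The seed-and-complete workflow described above can be illustrated with a deliberately simplified stand-in for GPT-2: a word-level bigram model trained on a tiny invented corpus. This is not the authors’ actual setup (they fine-tuned GPT-2 on real cyberthreat intelligence); the toy model, corpus, and function names here are all invented, and the sketch only shows the mechanics of seeding a generative model with the opening of a threat description and letting it produce the rest.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Build a word-level bigram table: word -> list of observed next words."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed, max_words=20, rng=None):
    """Continue the seed text by repeatedly sampling a plausible next word."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = seed.split()
    for _ in range(max_words):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no observed continuation for the last word
        out.append(rng.choice(candidates))
    return " ".join(out)

# Tiny invented "threat intelligence" corpus for illustration only.
corpus = ("the attacker exploits a vulnerability in the server "
          "the attacker steals credentials from the server")
model = train_bigram_model(corpus)
print(generate(model, "the attacker"))
```

A real transformer does the same thing at vastly greater scale and fluency, which is exactly why its completions can read like genuine threat reports.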

We were surprised by the results. The cybersecurity misinformation examples we generated were able to fool cyberthreat hunters, who are knowledgeable about all kinds of cybersecurity attacks and vulnerabilities. Imagine this scenario with a crucial piece of cyberthreat intelligence that involves the airline industry, which we generated in our study.

A block of text with false information about a cybersecurity attack on airlines
An example of AI-generated cybersecurity misinformation.
The Conversation, CC BY-ND

This misleading piece of information contains incorrect claims concerning cyberattacks on airlines with sensitive real-time flight data. This false information could keep cyber analysts from addressing legitimate vulnerabilities in their systems by shifting their attention to fake software bugs. If a cyber analyst acted on the fake information in a real-world scenario, the airline in question could face a serious attack that exploits a real, unaddressed vulnerability.

A similar transformer-based model can generate information in the medical domain and potentially fool medical experts. During the COVID-19 pandemic, preprints of research papers that have not yet undergone a rigorous review are constantly being uploaded to such sites as medRxiv. They are not only being described in the press but are being used to make public health decisions. Consider the following, which is not real but generated by our model after minimal fine-tuning of the default GPT-2 on some COVID-19-related papers.

A block of text showing health care misinformation.
An example of AI-generated health care misinformation.
The Conversation, CC BY-ND

The model was able to generate complete sentences and form an abstract allegedly describing the side effects of COVID-19 vaccinations and the experiments that were conducted. This is troubling both for medical researchers, who consistently rely on accurate information to make informed decisions, and for members of the general public, who often rely on public news to learn about critical health information. If accepted as accurate, this kind of misinformation could put lives at risk by misdirecting the efforts of scientists conducting biomedical research.


An AI misinformation arms race?

Although examples like these from our study can be fact-checked, transformer-generated misinformation hinders such industries as health care and cybersecurity in adopting AI to help with information overload. For example, automated systems are being developed to extract data from cyberthreat intelligence that is then used to inform and train automated systems to recognize possible attacks. If these automated systems process such false cybersecurity text, they will be less effective at detecting true threats.

We believe the result could be an arms race as people spreading misinformation develop better ways to create false information in response to effective ways to recognize it.

Cybersecurity researchers continuously study ways to detect misinformation in different domains. Understanding how to automatically generate misinformation helps in understanding how to recognize it. For example, automatically generated information often has subtle grammatical mistakes that systems can be trained to detect. Systems can also cross-correlate information from multiple sources and identify claims lacking substantial support from other sources.
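The cross-correlation idea above can be sketched as a small check that flags claims reported by only one source. The function name and the sample “feeds” are invented for illustration; a real system would have to match paraphrased claims rather than exact strings, but the corroboration logic is the same.

```python
from collections import Counter

def flag_uncorroborated(claims_by_source, min_sources=2):
    """Return the set of claims reported by fewer than min_sources feeds."""
    counts = Counter()
    for claims in claims_by_source.values():
        for claim in set(claims):  # count each feed at most once per claim
            counts[claim] += 1
    return {claim for claim, n in counts.items() if n < min_sources}

# Invented sample: two hypothetical threat-intelligence feeds.
reports = {
    "feed_a": ["CVE-2021-0001 exploited in the wild"],
    "feed_b": ["CVE-2021-0001 exploited in the wild",
               "airline flight-data systems breached"],
}
print(flag_uncorroborated(reports))
```

Here the CVE claim is corroborated by both feeds, while the airline claim appears in only one and would be flagged for closer scrutiny.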

Ultimately, everyone should be more vigilant about what information is trustworthy and be aware that hackers exploit people’s credulity, especially if the information is not from reputable news sources or published scientific work. The Conversation

Priyanka Ranade, PhD Student in Computer Science and Electrical Engineering, University of Maryland, Baltimore County; Anupam Joshi, Professor of Computer Science & Electrical Engineering, University of Maryland, Baltimore County, and Tim Finin, Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

================================================================
Alecto, Megaera, and Tisiphone, I can certainly see how these examples could fool even professionals … and particularly the medical example on side effects, simply because side effects are so unpredictable. But misstatements which in theory should have been more obvious have also fooled experts. And fooling experts can certainly have disastrous results. If there is any way we can all be on our guard any more than we already are, then dear Furies, please help us to do so.

The Furies and I will be back.

Oct 08 2020
 

The world is dealing with an unprecedented health crisis caused by a new virus. With new insights every day into the way COVID-19 spreads, the way the virus behaves, and the way to deal with the pandemic, it is now more important than ever to ensure the information we share is accurate and fact-based.

This article is not based on information from fact-checkers; it is a so-called Short Take from an article published today in the SCIENCE section of ABC News on how COVID-19 disinformation is used to attack Beijing by a group that has strong ties to Steve Bannon, Donald Trump’s former White House Chief Strategist.

Although the article on the ABC News site is written for Australia, the connections to the US and China warrant noting it in this series as yet another way COVID-19 is used in political strategies. I urge you to read the full article on the ABC News site.


Anti-Beijing group with links to Steve Bannon spreading COVID-19 misinformation in Australia

ABC Science By technology reporter Ariel Bogle and Iris Zhao

Former White House Chief Strategist Steve Bannon and Chinese businessman Guo Wengui have joined forces on a number of media platforms. (Getty Images: DON EMMERT/AFP)

When Christine’s mother asked her for help printing political pamphlets about COVID-19, it took her by surprise.

She already knew her mum belonged to a new political group that aims to take down the Chinese Communist Party (CCP). Christine expected the fliers might be pro-US President Donald Trump or anti-Chinese Government.

She didn’t expect that they would contain COVID-19 health misinformation.

“I was pretty disgusted,” Christine said. “I didn’t actually know it was misinformation that could be harming people.”

Christine says her mother is involved with the New Federal State of China movement, which operates in Australia in part under the name Himalaya Australia.

The movement was launched on June 4 this year — the anniversary of the Tiananmen Square Massacre — by controversial Chinese businessman Guo Wengui and former White House strategist Steve Bannon.

Now it appears to be growing its activity in Australia. In recent weeks, pamphlets with a Himalaya Australia logo have turned up in letterboxes across Australia, while at the same time the group has grown its online presence.

A pamphlet with the Himalaya Australia logo found in a Sydney postbox.(ABC News: Supplied)

 

Donald Trump used his return to the Oval Office yesterday to promote the experimental cocktail of drugs he received during his treatment for COVID-19, while warning China will pay a “big price for what they’ve done” to the United States.

Coincidence? With many Twitter accounts besides Mr Guo’s own media platforms regularly sharing anti-CCP and pro-Trump posts and videos, the movement has a global reach.
