Hey, Mom! The Explanation.

Here's the permanent dedicated link to my first Hey, Mom! post and the explanation of the feature it contains.

Friday, May 5, 2023

A Sense of Doubt blog post #2999 - Maintaining Mental Health: Fight Against Misinformation and Fake News

Happy Day of the Dead - Cinco de Mayo.

Fitting.

Truth is dying.

It may be dead already.

But

DEAD CAN DANCE.

This post has been in the works for some time.

I am determined to publish it today no matter what.

I have this thing about some posts. I feel I have to tell the WHOLE STORY. Not sure why. I can always just do another post.

This seems a fitting post for #2999 on the cusp of a milestone.

Let's sort out some content about misinformation.

There are three categories here.

Misinformation is false, misleading, or inaccurate information, but not necessarily spread with the intent to deceive.

Disinformation is false information deliberately meant to deceive people.

And now there is a third:

Malinformation is information that is usually true and factual but is intentionally conveyed to inflict harm or imminent threat to another person, organization, or country.

Reason Magazine, which has an agenda to promote (something I agree with some of the time but not all of the time), construes malinformation as true but inconvenient information in this article about an author who alleged that the CDC exaggerated the evidence supporting the efficacy of masks during the pandemic.

That's actually not the definition of malinformation, nor do I feel that the author, Jacob Sullum, is right that the CDC exaggerated the evidence supporting the efficacy of masks during the pandemic. In fact, at first, mask efficacy was downplayed: masks were presented more as protection against spreading the virus than against contracting it.

Granted, all these categories are subjective; much like vulgarity, they are in the eye of the beholder. One person's misinformation is another person's truth. One person trusts and believes in the message of the government and its agencies, while another sees the government as a "censorship-industrial complex."

In a federal lawsuit filed last year, the attorneys general of Missouri and Louisiana, joined by scientists who ran afoul of the ever-expanding crusade against disinformation, misinformation, and malinformation, argue that such pressure violates the First Amendment. This week, Terry A. Doughty, a federal judge in Louisiana, allowed that lawsuit to proceed, saying the plaintiffs had adequately alleged "significant encouragement and coercion that converts the otherwise private conduct of censorship on social-media platforms into state action."

Doughty added that the plaintiffs "have plausibly alleged state action under the theories of joint participation, entwinement, and the combining of factors such as subsidization, authorization, and encouragement." Based on that analysis, he ruled that the plaintiffs "plausibly state a claim for violation of the First Amendment via government-induced censorship."



Mis-, dis-, and mal-information are big topics for Reason Magazine, a libertarian journalistic screed that has championed underserved voices since 1968. I like its take on sex workers and the myths of sex trafficking. I dislike other stances, such as its take on malinformation above. Then again, maybe I just want to deny inconvenient truths. I certainly do when it comes to JK Rowling's anti-trans rhetoric.

Though Reason purports to be dedicated to free minds and free markets, I see an intolerance in its dogma, at times, that feels not so free.

This article from Oct 31, 2022 is a good one about how the DHS and the FBI regularly report mis- and disinformation to tech companies for removal while maintaining that they are not "policing speech."

I am in favor of finding and removing Russian misinformation on our social media and communications platforms (as well as the FOX News Russian propaganda machine, which may be a thing of the past with the termination of Russian stooge Tucker Carlson), even if, by an absolutist definition of Free Speech, such removal is a violation.

Policing misinformation also poses numerous risks to free speech. This was one of the justifications initially given for shutting down the Disinformation Governance Board. With narrow exceptions, false statements are protected by the First Amendment, and any broad efforts to restrict misinformation would have a chilling effect on other speech.

For example, the New York Post reported in 2020 that a laptop belonging to then-candidate Joe Biden's son Hunter turned up at a Delaware repair shop, full of salacious and potentially damaging information. The story was widely panned, including by Jankowicz, as likely Russian disinformation. Twitter banned users from sharing the article, and Facebook limited its spread as well. But a year and a half later, The New York Times largely confirmed the veracity of the original report.

The laptop story provides a useful template for how DHS influence over social media moderation could look.









Accounts of this crisis of knowledge, however, overlook how its differing elements arise from a common source. Our problem concerns not just the way we generate knowledge but our attitude toward knowledge, how we present ourselves to each other as knowers. Beneath the epistemological crisis is a deeper psychological one: the problem of knowingness. Knowingness, as the philosopher and psychoanalyst Jonathan Lear defines it in Open Minded (1998), is a posture of always ‘already knowing’, of purporting to know the answers even before the question arises. When new facts come to light, the knowing person is unperturbed. You may be shocked, but they knew all along.

In 21st-century culture, knowingness is rampant. You see it in the conspiracy theorist who dismisses contrary evidence as a ‘false flag’ and in the podcaster for whom ‘late capitalism’ explains all social woes. It’s the ideologue who knows the media has a liberal bias – or, alternatively, a corporate one. It’s the above-it-all political centrist, confident that the truth is necessarily found between the extremes of ‘both sides’. It’s the former US president Donald Trump, who claimed, over and over, that ‘everybody knows’ things that were, in fact, unknown, unproven or untrue.

Knowingness is a particular danger for people whose job is to inform us. For instance, there is the pompous professor with the unassailable theory or the physician who enters the examining room certain the patient’s problems result from the condition the doctor happens to be an expert in. ‘Just as I thought,’ says the too-knowing oncologist to the patient with ambiguous test results. ‘Fibrosarcoma. It’s always fibrosarcoma.’


Knowingness may just be another way to say confirmation bias. Are we really capable of being "open minded"?

After all, I was just criticizing Reason Magazine's central axioms and rhetoric: was I right? Or does Reason's material just conflict with my "knowingness"?

Maybe the problem isn't the information but WHY the information is out there.

After all, mis-, dis-, and mal-information were not just invented in the age of social media. We humans have always engaged in falsehood and deception.

The reasons may vary but the root causes are mostly the same.

The next article from Mother Jones speaks to this issue.

Then there's a bunch of Slashdot.

This is all for today.

I cut a bunch of content and moved it to a future post.

That's how I manage!

Thanks for tuning in.




FROM Nov. 16, 2021 (told you I had this post in the works for a long time):

Disinformation Isn’t Just a Tech Problem. It’s a Social One, Too.

That’s one important takeaway from a new report that studies the “crisis of trust and truth.”

by Ali Breland

Mis- and disinformation are often viewed as a cause of society’s ills. But a new report from the Aspen Institute’s Commission on Information Disorder, which studied the global “crisis of trust and truth,” offers a different perspective on how to think about the proliferation of conspiracy theories and bogus info: The rise of disinformation is the product of long-standing social problems, including income inequality, racism, and corruption, which can be easily exploited to spread false information online.

“Saying that the disinformation is the problem—rather than a way in which the underlying problem shows itself—misses the point entirely,” the report quotes Mike Masnick, founder of Techdirt, as saying.

Disinformation, as the report’s authors explain, comes from “corporate, state actor, and political persuasion techniques employed to maintain power and profit, create harm, and/or advance political or ideological goals” and can exacerbate “long-standing inequalities and undermines lived experiences for historically targeted communities, particularly Black/African American communities.” Disinformation is a nascent and fast-moving field of study and Aspen’s report, released Monday, offers a point of view that departs from the conventional wisdom that information problems stem largely from tech and social media platforms. 


The Aspen commission, which was co-chaired by Katie Couric, Color of Change president Rashad Robinson, and former director of the Cybersecurity and Infrastructure Security Agency Chris Krebs, spent six months examining the causes of and solutions to disinformation. Advisors to the commission included Joan Donovan, research director at Harvard Kennedy School's Shorenstein Center on Media, Politics and Public Policy; Nathaniel Gleicher, head of Cybersecurity at Meta (formerly Facebook); and Evelyn Douek, a lecturer at Harvard Law School.

The report outlines goals and recommendations for addressing disinformation on social media, including amendments to Section 230, a frequently debated and often misunderstood part of the 1996 Communications Decency Act, which gives tech companies legal immunity when it comes to user-generated content posted on their platforms. The report proposes "withdraw[ing] platform immunity for content that is promoted through paid advertising and post promotion" and "remov[ing] immunity as it relates to the implementation of product features, recommendation engines, and design." In other words, algorithmically boosted content and information would no longer be legally protected.

The authors also urged the executive branch to take action on disinformation broadly, something that’s been controversial even among disinformation researchers and thinkers. They recommended the White House create a “comprehensive strategic approach to countering disinformation and the spread of misinformation” that includes “a centralized national response strategy.” 

Despite the breadth and recommendations of the report, its authors still noted the limitations of stopping disinformation. “To be clear, information disorder is a problem that cannot be completely solved,” they write. “Its eradication is not the end goal. Instead, the Commission’s goal is to mitigate misinformation’s worst harms with prioritization for the most vulnerable segments of our society.”









MIT's Technology Review shares data from a Facebook-run tool called CrowdTangle. It shows that by 2018 in the nation of Myanmar (population: 53 million), "All the engagement had instead gone to fake news and clickbait websites. In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources."

"[T]he sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more. In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a "determining role" in the atrocities. Months later, Facebook admitted it hadn't done enough "to help prevent our platform from being used to foment division and incite offline violence." Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook's algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world."

But there's a crucial piece missing from the story. Facebook isn't just amplifying misinformation.

The company is also funding it.

An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.

Facebook pays them for permission to open their content within Facebook's app (where Facebook controls the advertising) rather than having users click through to the publisher's own web site, reports Technology Review:

Early on, Facebook performed little quality control on the types of publishers joining the program. The platform's design also didn't sufficiently penalize users for posting identical content across Facebook pages — in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it and generated ad revenue. Clickbait farms around the world seized on this flaw as a strategy — one they still use today... Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of U.S. dollars a month in ad revenue, or 10 times the average monthly salary — paid to them directly by Facebook. An internal company document, first reported by MIT Technology Review in October, shows that Facebook was aware of the problem as early as 2019... At one point, as many as 60% of the domains enrolled in Instant Articles were using the spammy writing tactics employed by clickbait farms, the report said...

75% of users who were exposed to clickbait content from farms run in Macedonia and Kosovo had never followed any of the pages. Facebook's content-recommendation system had instead pushed it into their news feeds.

Technology Review notes that Facebook now pays billions of dollars to the publishers in their program. It's a long and detailed article, which ultimately concludes that the problem "is now happening on a global scale."

Thousands of clickbait operations have sprung up, primarily in countries where Facebook's payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale...

Google is also culpable. Its AdSense program fueled the Macedonia- and Kosovo-based farms that targeted American audiences in the lead-up to the 2016 presidential election. And it's AdSense that is incentivizing new clickbait actors on YouTube to post outrageous content and viral misinformation.

Reached for comment, a Facebook spokesperson told Technology Review that they'd misunderstood the issue. And the spokesperson also said "we've invested in building new expert-driven and scalable solutions to these complex issues for many years, and will continue doing so."

Google's spokesperson confirmed examples in the article violated their own policies and removed the content, adding "We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information."


Jan 14, 2022


The Ring of Fire


In case you needed any further proof that Rand Paul is an absolutely vile human being, a video of Paul from a 2013 lecture to college students has reemerged. The video shows Rand Paul - who was already a United States Senator - telling college students to spread "misinformation" to their classmates because, as Paul put it, "it works." He says he did this in medical school, and clearly he still does it today as a Senator. Ring of Fire's Farron Cousins discusses this. 

FROM: 









Do Inaccurate Search Results Disrupt Democracies? (wired.com)


Users of Google "must recalibrate their thinking on what Google is and how information is returned to them," warns an Assistant Professor at the School of Information and Library Science at UNC-Chapel Hill.

In a new book titled The Propagandists' Playbook, they're warning that simple link-filled search results have been transformed by "Google's latest desire to answer our questions for us, rather than requiring us to click on the returns." The trouble starts when Google returns inaccurate answers "that often disrupt democratic participation, confirm unsubstantiated claims, and are easily manipulatable by people looking to spread falsehoods."

By adding all of these features, Google — as well as competitors such as DuckDuckGo and Bing, which also summarize content — has effectively changed the experience from an explorative search environment to a platform designed around verification, replacing a process that enables learning and investigation with one that is more like a fact-checking service.... The problem is, many rely on search engines to seek out information about more convoluted topics. And, as my research reveals, this shift can lead to incorrect returns... Worse yet, when errors like this happen, there is no mechanism whereby users who notice discrepancies can flag it for informational review....

The trouble is, many users still rely on Google to fact-check information, and doing so might strengthen their belief in false claims. This is not only because Google sometimes delivers misleading or incorrect information, but also because people I spoke with for my research believed that Google's top search returns were "more important," "more relevant," and "more accurate," and they trusted Google more than the news — they considered it to be a more objective source....

This leads to what I refer to in my book, The Propagandists' Playbook, as the "IKEA effect of misinformation." Business scholars have found that when consumers build their own merchandise, they value the product more than an already assembled item of similar quality — they feel more competent and therefore happier with their purchase. Conspiracy theorists and propagandists are drawing on the same strategy, providing a tangible, do-it-yourself quality to the information they provide. Independently conducting a search on a given topic makes audiences feel like they are engaging in an act of self-discovery when they are actually participating in a scavenger-hunt engineered by those spreading the lies....

Rather than assume that returns validate truth, we must apply the same scrutiny we've learned to have toward information on social media.

Another problem the article points out: "Googling the exact same phrase that you see on Twitter will likely return the same information you saw on Twitter. Just because it's from a search engine doesn't make it more reliable."




The New York Times tells the story of 17-year-old Ellie Zeiler, a TikTok creator with over 10 million followers, who received an email in June from Village Marketing, an influencer marketing agency. "It said it was reaching out on behalf of another party: the White House."

Would Ms. Zeiler, a high school senior who usually posts short fashion and lifestyle videos, be willing, the agency wondered, to participate in a White House-backed campaign encouraging her audience to get vaccinated against the coronavirus...? Ms. Zeiler quickly agreed, joining a broad, personality-driven campaign to confront an increasingly urgent challenge in the fight against the pandemic: vaccinating the youthful masses, who have the lowest inoculation rates of any eligible age group in the United States...

To reach these young people, the White House has enlisted an eclectic army of more than 50 Twitch streamers, YouTubers, TikTokers and the 18-year-old pop star Olivia Rodrigo, all of them with enormous online audiences. State and local governments have begun similar campaigns, in some cases paying "local micro influencers" — those with 5,000 to 100,000 followers — up to $1,000 a month to promote Covid-19 vaccines to their fans. The efforts are in part a counterattack against a rising tide of vaccine misinformation that has flooded the internet, where anti-vaccine activists can be so vociferous that some young creators say they have chosen to remain silent on vaccines to avoid a politicized backlash...

State and local governments have taken the same approach, though on a smaller scale and sometimes with financial incentives. In February, Colorado awarded a contract worth up to $16.4 million to the Denver-based Idea Marketing, which includes a program to pay creators in the state $400 to $1,000 a month to promote the vaccines... Posts by creators in the campaign carry a disclosure that reads "paid partnership with Colorado Dept. of Public Health and Environment...." Other places, including New Jersey, Oklahoma City County and Guilford County, N.C., as well as cities like San Jose, Calif., have worked with the digital marketing agency XOMAD, which identifies local influencers who can help broadcast public health information about the vaccines.

In another article, the Times notes that articles blaming Bill Gates for the pandemic appeared on two local news sites (one in Atlanta, and one in Phoenix) that "along with dozens of radio and television stations, and podcasts aimed at local audiences...have also become powerful conduits for anti-vaccine messaging, researchers said."


Slashdot reader DevNull127 shares this transcript of James Cameron's new interview with the BBC — which they've titled "The Danger of Deepfakes."

"Almost everything we create seems to go wrong at some point," James Cameron says...

James Cameron: Almost everything we create seems to go wrong at some point. I've worked at the cutting edge of visual effects, and our goal has been progressively to get more and more photo-real. And so every time we improve these tools, we're actually in a sense building a toolset to create fake media — and we're seeing it happening now. Right now the tools are — the people just playing around on apps aren't that great. But over time, those limitations will go away. Things that you see and fully believe you're seeing could be faked.

This is the great problem with us relying on video. The news cycles happen so fast, and people respond so quickly, you could have a major incident take place in the interval between when the deepfake drops and when it's exposed as a fake. We've seen situations — you know, Arab Spring being a classic example — where with social media, the uprising was practically overnight.

You have to really emphasize critical thinking. Where did you hear that? You know, we have all these search tools available, but people don't use them. Understand your source. Investigate your source. Is your source credible?

But we also shouldn't be prone to this ridiculous conspiracy paranoia. People in the science community don't just go, 'Oh that's great!' when some scientist, you know, publishes their results. No, you go in for this big period of peer review. It's got to be vetted and checked. And the more radical a finding, the more peer review there is. So good peer-reviewed science can't lie. But people's minds, for some reason, will go to the sexier, more thriller-movie interpretation of reality than the obvious one.

I always use Occam's razor — you know, Occam's razor's a great philosophical tool. It says the simplest explanation is the likeliest. And conspiracy theories are all too complicated. People aren't that good, human systems aren't that good, people can't keep a secret to save their lives, and most people in positions of power are bumbling stooges. The fact that we think that they could realistically pull off these — these complex plots? I don't buy any of that crap! Bill Gates is not really trying to microchip you with the flu vaccine! [Laughs]

You know, look, I'm always skeptical of new technology, and we all should be. Every single advancement in technology that's ever been created has been weaponized. I say this to AI scientists all the time, and they go, 'No, no, no, we've got this under control.' You know, 'We just give the AIs the right goals...' So who's deciding what those goals are? The people that put up the money for the research, right? Which are all either big business or defense. So you're going to teach these new sentient entities to be either greedy or murderous.

If Skynet wanted to take over and wipe us out, it would actually look a lot like what's going on right now. It's not going to have to — like, wipe out the entire, you know, biosphere and environment with nuclear weapons to do it. It's going to be so much easier and less energy required to just turn our minds against ourselves. All Skynet would have to do is just deepfake a bunch of people, pit them against each other, stir up a lot of foment, and just run this giant deepfake on humanity.

I mean, I could be a projection of an AI right now.



+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

- Bloggery committed by chris tower - 2305.05 - 10:10

- Days ago = 2863 days ago

- New note - On 1807.06, I ceased daily transmission of my Hey Mom feature after three years of daily conversations. I plan to continue Hey Mom posts at least twice per week but will continue to post the days since ("Days Ago") count on my blog each day. The blog entry numbering in the title has changed to reflect total Sense of Doubt posts since I began the blog on 0705.04, which include Hey Mom posts, Daily Bowie posts, and Sense of Doubt posts. Hey Mom posts will still be numbered sequentially. New Hey Mom posts will use the same format as all the other Hey Mom posts; all other posts will feature this format seen here.
