Finally! An Interesting Twitter Files Installment That Appears To Reveal Sketchy Government Behavior

Took them long enough.

We finally have an interesting edition of the Twitter Files!

When the Twitter Files began, I actually expected something interesting to come out of them. All of the big tech companies have been unfortunately unwilling to be as transparent as they could be about how their content moderation practices work. Much of the transparency we’ve received has been either through whistleblowers leaking information (which is often misinterpreted by journalists) or through the companies partnering with academics, which often leads to rather dry analysis of what’s happening, and which maybe a dozen people read. There have been moments of openness, but the messy stuff gets hidden.

So I had hoped that when Elon took over and announced his plans to be transparent about what had happened in the past, we might actually learn some dirt. Because there's always some dirt. The big question was what form that dirt might take, and how much of it was systemic rather than one-time errors and mistakes. But, until now, the Twitter Files have been worse than useless. They were presented by journalists who had neither the knowledge nor the experience to understand what they were looking at, and who seemed intent on presenting the material within a particular narrative framing.

Because of that, I've written multiple posts walking through the "evidence" presented, and showing how Musk's chosen reporters didn't understand what they were looking at and were misrepresenting reality. Given that most journalists know to put the important revelations up top, and that each new "release" in the Twitter Files seemed more breathless, but less interesting, than the previous ones, I was basically expecting nothing of interest to come from the files at all. That was a disappointment.

As Stanford’s Renee DiResta noted, this was a real missed opportunity. If the files had actually been handed over to people who understand this field, what was important, and what was banal everyday trust & safety work, the real stories could have been discussed.

The Twitter Files thus far are a missed opportunity. To settle scores with Twitter’s previous leaders, the platform’s new owner is pointing to niche examples of arguable excesses and missteps, possibly creating far more distrust in the process. And yet there is a real need for public understanding of how platform moderation works, and visibility into how enforcement matches up against policy. We can move toward genuine transparency—and, hopefully, toward a future in which people can see the same facts in similar ways.

So when the Intercept's Lee Fang kicked off the 8th installment of the Twitter Files, I was not expecting much at all. After all, Fang was one of the authors of the very recent garbage Intercept story that totally misunderstood the role of CISA in the government and (falsely) argued that the government demanded Twitter censor the Hunter Biden laptop story. The fact that the evidence from the Twitter Files totally disproved his earlier story should at least have led Fang to question his understanding of these things.

And yet… it appears that he may have (finally) legitimately found a real story of malfeasance in the Twitter Files in his most recent installment. Like all the others, he initially posted his findings — he admits he was granted access to Twitter's internal systems via a Twitter-employed lawyer who would search for and access the documents he requested — on Twitter, in a messy and hard-to-follow thread. He then posted a more complete story on The Intercept.

The story is still somewhat messy and confused, and it’s not entirely clear Fang even fully realizes what he found, but it does suggest serious malfeasance on the part of the government. It actually combines a few other stories we’ve covered recently. First, towards the end of the summer, Twitter and Meta announced that they had found and taken down a disinformation campaign running on their platforms — and all signs suggested the campaign was being run by the US government.

As was noted at the time, the propaganda campaign did not appear to be all that successful. Indeed, it was kind of pathetic. From the details, it sounded like someone in the US government had the dumb idea of “hey, let’s just create our own propaganda social media accounts to counter foreign propaganda accounts,” rather than embracing “hey, we’re the US government, we can just speak openly and transparently.” The overall failure of the campaign was… not surprising. And we were happy that Twitter and Meta killed the campaign (and now we’re hearing that the US government is doing an investigation into how this campaign came to be in the first place).

The second recent story was about Meta's "Xcheck" program, which was initially revealed in the Facebook Files as a special kind of "whitelist" for high-profile accounts. Meta asked the Oversight Board to review the program, and just a few weeks ago the Oversight Board finally released its analysis and recommendations (after a year of researching the program). It turns out to be basically what we said when the program was first revealed: after a few too many "false positives" on high-profile accounts became embarrassing (for example, then-President Obama's Facebook account was taken down because he recommended the book "Moby Dick" and there was an automated flag on the word "dick"), someone at Facebook instituted the Xcheck program to effectively whitelist high-profile individuals, so that flags on their accounts would need to be reviewed by a human before any action was taken.

As we discussed in our podcast about Xcheck, in many ways Facebook was choosing to favor "false negatives" for high-profile accounts over "false positives." The end result is that high-profile accounts can effectively get away with more, violating the rules with a longer lag before consequences, but they're less likely to be suspended accidentally. Tradeoffs. The entire content moderation space is full of them.
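To make that tradeoff concrete, here's a minimal sketch of how a whitelist check changes an automated enforcement pipeline. All of the names here (the `Flag` type, the `enforce` function, the account IDs) are hypothetical illustrations, not anything from Meta's or Twitter's actual systems:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    account_id: str
    rule: str
    confidence: float

# Hypothetical whitelist of high-profile account IDs. In a real system
# this would live in a database, not an in-memory set.
WHITELIST = {"acct_president", "acct_celebrity"}

def enforce(flag: Flag) -> str:
    """Decide what happens when an automated classifier flags an account.

    Whitelisted accounts are never actioned automatically: their flags go
    to a human review queue instead. That trades false positives (wrongly
    suspending a prominent account) for false negatives (violations that
    stay up until a human gets around to them).
    """
    if flag.account_id in WHITELIST:
        return "queue_for_human_review"
    if flag.confidence >= 0.9:
        return "suspend_automatically"
    return "queue_for_human_review"

# The same high-confidence flag produces different outcomes:
print(enforce(Flag("acct_random_user", "spam", 0.95)))  # suspend_automatically
print(enforce(Flag("acct_president", "spam", 0.95)))    # queue_for_human_review
```

The design choice is visible in the branch order: the whitelist check comes before the confidence threshold, so no classifier score, however high, can trigger an automatic suspension of a whitelisted account.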

Again, as we noted when that story first came out, basically every social media platform has some form of this in place. It's almost a necessity at scale, to avoid accidentally banning your most high-profile users. But it comes with some serious risks and problems, which are also highlighted in the Oversight Board's policy recommendations regarding Xcheck.

Thus, it's not at all surprising that Twitter clearly has a similar whitelist feature. This was revealed, somewhat inadvertently, in an earlier Twitter Files installment when Bari Weiss, thinking she was exposing unfair treatment of the @LibsOfTikTok account, actually revealed that it was on a similar Xcheck-style whitelist: a flag on the account read DO NOT TAKE ACTION ON USER WITHOUT CONSULTING an executive team.

That's all the background that finally gets us to the Lee Fang story. It reveals that the US government apparently got some of its accounts onto this whitelist after they had been dinged earlier. The accounts, at the time, were properly labeled as being run by the US government. But here's the nefarious bit: sometime after that, the accounts were changed so that they were no longer transparent about the US government being behind them. Because they were on this whitelist, they were likely able to get away with sketchy behavior with less review by Twitter, and it likely took longer to catch that they were engaged in a state-backed propaganda campaign.

As the article notes, in 2017, someone at the US government noticed that these accounts — which, again, at the time clearly said they were run by the US government — were somehow limited by Twitter:

On July 26, 2017, Nathaniel Kahler, at the time an official working with U.S. Central Command — also known as CENTCOM, a division of the Defense Department — emailed a Twitter representative with the company’s public policy team, with a request to approve the verification of one account and “whitelist” a list of Arab-language accounts “we use to amplify certain messages.”

“We’ve got some accounts that are not indexing on hashtags — perhaps they were flagged as bots,” wrote Kahler. “A few of these had built a real following and we hope to salvage.” Kahler added that he was happy to provide more paperwork from his office or SOCOM, the acronym for the U.S. Special Operations Command.

Now, it seems reasonable to question whether Twitter should have put them on a whitelist in the first place, but if they were properly labeled, and not engaged in violative behavior, you can see how it happened. But Twitter absolutely should have had policies stating that if those accounts changed their descriptions or names or other identifying details, the whitelist flag would automatically be removed, or at least sent up for a human review to make sure it was still appropriate. And that apparently did not happen.
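For illustration, here is a sketch of what that missing control might look like: a hook that fires whenever a whitelisted account edits its profile. This is purely hypothetical; none of these names come from Twitter's actual systems, and the example account is invented:

```python
# Hypothetical sketch of the missing control: if a whitelisted account
# changes its display name or bio, drop the whitelist flag and queue a
# human re-review before the account can be re-whitelisted.

def on_profile_change(account_id, old_profile, new_profile, whitelist, review_queue):
    """Hook that fires whenever an account edits its profile."""
    if account_id not in whitelist:
        return
    if (old_profile["name"] != new_profile["name"]
            or old_profile["bio"] != new_profile["bio"]):
        # Conservative choice: revoke first, restore only after review.
        whitelist.discard(account_id)
        review_queue.append((account_id, "whitelist revoked: profile changed"))

# Example (invented account): a government-run account that stops
# disclosing its affiliation would lose its whitelist flag immediately.
whitelist = {"@example_gov_account"}
review_queue = []
on_profile_change(
    "@example_gov_account",
    {"name": "CENTCOM Updates", "bio": "Run by U.S. Central Command"},
    {"name": "Regional News", "bio": "Independent news"},
    whitelist,
    review_queue,
)
print(whitelist)     # set() -- flag removed pending human review
print(review_queue)  # [('@example_gov_account', 'whitelist revoked: profile changed')]
```

The point is simply that the trigger is the metadata change itself, not any downstream bad behavior, which is exactly the check that apparently never fired here.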

As The Intercept report notes, Twitter at this time was under tremendous pressure from basically all corners about the fact that ISIS was an effective user of social media for recruitment and propaganda. So the company had been somewhat aggressive in trying to stamp that out. And it sounds like the US accounts got caught up in those efforts.

So there is a lot of interesting stuff revealed here: more details on the US government's foreign social media propaganda campaigns, and more evidence of how Twitter's "whitelist" program works, along with the fact that it did not appear to have very good controls (not that surprising, as almost no company's similar tool has good controls, as we saw with the Oversight Board's analysis of Xcheck for Meta).

But… the spin that "Twitter aided the Pentagon in its covert online propaganda campaign" is, yet again, kinda missing the important stuff here. Neither the Pentagon nor Twitter look good in this report, but in an ideal world it would lead to more openness (a la the Oversight Board's look into Xcheck) regarding how Twitter's whitelist program works, as well as more revelations about how the DOD was able to run its foreign propaganda campaign, including how it changed Twitter accounts from being public about their affiliation to hiding it.

This is where it would be useful if a reporter who understood how all this worked were involved in the research and could ask questions of Twitter regarding how big the whitelist is (for Meta it reached about 6 million users), and what the process was for getting on it. What controls were there? Who could put people on the whitelist? Were there ever any attempts to review those who were on the whitelist to see if they abused their status? All of that would be interesting to know, and as Renee DiResta's piece noted, those are the kinds of questions that actual experts would ask if Elon gave them access to these files, rather than… whoever he keeps giving them to.
