May 09, 2017

Traces #4 by Mike Caulfield: The Coming Annotation Wars

I went to the iAnnotate conference last week and it was lovely. Annotation is slowly coming into its own as a technology; I tend to think of it as a way to "re-webify" a web that has increasingly moved away from the hypertext dreams of its founders towards something that feels more centralized and less networked, more engineered and less serendipitous.

Of course, as George W. Bush famously said, "Fool me once, shame on you. Fool me -- you can't get fooled again." I take this to mean that history cannot repeat itself precisely, because we all remain answerable to the lessons of the past. And the lessons of the past decade or so of the web have been harsh. The dream of open participation, in the economic and social climate it has been dropped into, has been as much plague as cure. It's been easily perverted by the hateful, the corporate, and, more recently perhaps, the state-sponsored.

Esther Dyson reminded us of this in her keynote conversation for the conference, wading into the discussion of whether people should be able to opt out of being annotated. She brought up the evolution of anonymity on the internet. At first, people thought anonymity was core to the medium. Later problems emerged, and the compromise was that anonymity shouldn't be encouraged, but it should be allowed to those who need it. While we can get into some deep debates about the worth of anonymity, I'm in basic agreement with her formulation -- it's your browser, and you're allowed to annotate anything you want with it. But the separate question is what should be encouraged by the design of our technology. People want to turn this into a legal debate, but it's not one. It's a tools debate, and the main product of a builder of social tools is not the tool itself but the culture it creates. So what sort of society do you want to create?

An example may suss this out a bit. One of the projects I'm working on is an annotation bot called "contextbot", and what contextbot does is pretty neat: it layers articles with fact-checking context. For example, it takes a list of 168 known front organizations for industry groups and applies annotations to news articles that mention those organizations, noting that they might be astroturf organizations and asking people to check. I'm still working out the kinks, but here's a screengrab of a sample result.
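To make the mechanics a little more concrete, here's a minimal sketch of what the matching-and-annotating step could look like, assuming the bot posts page notes through the Hypothesis web annotation API and keeps the front-group list in a plain text file. The file name, the token handling, and the wording of the note are placeholders of mine, not contextbot's actual code.

```python
# Minimal sketch: scan an article for names from a front-group watchlist and,
# when one appears, post a page note asking readers to check the source.
# Assumptions: a text file with one organization name per line, and a
# Hypothesis developer token for the bot's account.
import requests

HYPOTHESIS_API = "https://api.hypothes.is/api/annotations"
API_TOKEN = "YOUR-HYPOTHESIS-API-TOKEN"  # placeholder, not a real credential


def load_watchlist(path="front_orgs.txt"):
    """Load organization names, one per line (hypothetical file name)."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]


def find_mentions(article_text, watchlist):
    """Naive matching: return watchlist names that appear verbatim in the text."""
    lowered = article_text.lower()
    return [org for org in watchlist if org.lower() in lowered]


def annotate(url, org):
    """Post a page-level note on the article via the Hypothesis API."""
    note = {
        "uri": url,
        "text": (f"'{org}' appears on a list of known industry front groups. "
                 "Before treating it as an independent source, check who funds it."),
        "tags": ["contextbot", "possible-astroturf"],
    }
    resp = requests.post(
        HYPOTHESIS_API,
        json=note,
        headers={"Authorization": "Bearer " + API_TOKEN},
    )
    resp.raise_for_status()
    return resp.json()
```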


I've set it up so that the code will run once a day, grab the latest news, and tag it before most people read it. I've set this one up primarily to train students to better investigate and think about the articles they read on the web, but you can imagine a whole army of these things adding layers of context to articles: calling out front groups, linking readers to sources the page doesn't provide, relevant research, past articles, and Wikipedia treatments of news sources. It could be a major answer to the web's chronic lack of context.
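And here's a rough sketch of that daily pass, under the assumption that the articles come from an RSS feed and the script is kicked off once a day by cron or a similar scheduler. The feed URL is a placeholder, the helpers come from the sketch above, and real code would also need to remember which pages it had already tagged.

```python
# Rough sketch of the daily pass: pull the latest items from a news feed,
# look for watchlist mentions, and annotate matching articles.
# FEED_URL is a placeholder; load_watchlist, find_mentions, and annotate
# are the helpers from the previous sketch.
import feedparser
import requests

FEED_URL = "https://example.com/news/rss"  # placeholder, not a real feed


def daily_pass(watchlist):
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        html = requests.get(entry.link, timeout=30).text
        for org in find_mentions(html, watchlist):
            annotate(entry.link, org)


if __name__ == "__main__":
    # In practice this would run from a daily cron job, e.g.:
    #   0 6 * * * /usr/bin/python3 contextbot_daily.py
    daily_pass(load_watchlist())
```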

Except, of course, it works the other way too. If I decide, in an MRA-fueled wrath, that I'm all about "ethics in gaming journalism," I can find every article that mentions Zoë Quinn or Audrey Watters and overlay annotations that accuse them of various forms of kickbacks, corruption, and SJW conspiracy. It's virtually guaranteed that, without proper affordances, the minute annotation is popular it will become a stream of raw sewage.

This happens to every social technology every single time, completely predictably. And sometimes people get it right over time, and sometimes they don't. But weirdly, no one follows the excellent advice of George W. and designs in anticipation of this. People react to the inevitable troll-ification of environments, but they seldom spend enough design time up front thinking about it. When the trolls show up, it's always "Quelle surprise!" in the cubicles. Because who-couldda-known, right?

Annotation is incredibly promising. It's also a chance to get this sort of social design right, and learn from the repeated mistakes of the past. I'm hoping those involved with it right now will reach out to those with a deep and personal knowledge of how these things go wrong and get ahead of the curve for once.

Things to Think With

Full Spectrum Influence Operations: Clint Watts is an expert in state-sponsored social media influence operations, and his testimony to the Senate Armed Services Committee is riveting. We're vulnerable to state-sponsored attacks, he says, because we are too narrowly technological in our solutions. The Russians put together distributed and agile cross-functional networks of technologists, writers, trolls, and analysts. Americans tend to conceptualize the problem as technical, and don't understand the synergies between multiple legitimate and illegitimate entities. The Russians buy talent. Americans buy AI.

The Russians also take an ecosystem approach to the problem. "Full Spectrum Influence Operations" coordinate the actions of illegitimate actors (hackers who steal documents) with borderline actors (controllers of bot networks and paid trolls) and with legitimate actors (news agencies). They identify and infiltrate networks of "useful idiots" who are hungry for the disinformation and stolen documents the Russians can supply, because it serves the idiots' own immediate ends. This full spectrum approach launders foreign disinformation into seemingly legitimate U.S.-based news almost immediately. It's not the bots and the Gmail hacks that are the impressive piece -- it's the way in which the entire system is expertly leveraged to create legitimacy.

Downstream Policy Attitudes: When we think about fighting fake news and polarization, a naive view is that accepting facts leads to changes in what we call "downstream policy attitudes." But this isn't always the case. In research that called stronger claims of a backfire effect into question, the researchers noted that some of the more interesting inputs into policy preferences had not been studied:
"I would just say: don't place facts on too high a pedestal. This is only one component of the way that average people come to hold political preferences. For instance, we didn't see any differences on policy preferences among corrected and uncorrected groups."

The Marketplace of Ideas Is Not A Functioning Marketplace: One of the prevailing principles ingrained in us is that the best test of an idea is to throw it into what Oliver Wendell Holmes called the "competition of the market"; that is, truth need not fear lies, because just as the best product wins in the market, the truest statements and most useful ideas will show their mettle and win in the end.

As Lincoln Caplan notes, this is demonstrably not true. Scholarship going back almost fifty years shows that when it comes to truth, the necessary conditions for a functional marketplace don't exist. The idea that an effective remedy to an overabundance of false speech is more speech is countered by almost everything we know from cognitive science.

We admit this in some contexts: for example, we don't claim that someone who has committed perjury is innocent of a crime because the other parties to a case had the remedy of counterspeech. We understand that perjury's damage can't really be undone. But we seem ignorant of this in other contexts -- the most recent case being the idea that showing fake headlines in social media will be OK as long as we show real stories or debunking sites next to them. That might help slow the spread of these stories, but it's not clear it undoes the damage. I don't know what the remedy is, really. These are complex problems. But it's time to admit the marketplace of ideas metaphor is leading us astray.
 

Quick Hits

New York Times on the recasting of privacy as a luxury rather than a right. The usual suspects appear.

Fans of the blog know I've been a proponent of a decentralized web based on content-based networking. Maybe this Silicon Valley plot thread will get people interested. Or maybe it's the last gasp before the idea gets ruined by HackerNews gibberish.

Bots play a non-trivial role in the spread of misinformation, but humans have something to be proud of -- a small group of dedicated human shitposters can do this stuff without the use of bots, because our networks are just that fragile and easily gamed. Vive les humains!

Zeynep's book is out. Read it.

This 45 minute documentary on the problem of Facebook's lack of accountability will not have much new information for most readers of this blog, but it packages up quite a lot into a short presentation. It might make an excellent kickoff to a class. Good to send to relatives too.

Heading to Montreal Misinformation Conference Workshop next Monday -- if you're going to that too, ping me on Twitter.

Warmly, 

Mike.