5:08pm on Wednesday, March 1, 2017. I am in San Francisco. I am not here for GDC, the Game Developers Conference. I used to come here for the Game Developers Conference and I haven't been in a long time, and now I'm here and it's by *accident*. I have not yet had time to see any of my friends (hi, friends!) who are here in town for industry shenanigans.
Instead, I'm here working on a project that I'm too tired to even come up with a SPECIAL CODENAME for that will impress you all, but the other thing that I've done is I have managed to finish A DIFFERENT THING that should fix SOME OTHER THING in mysterious, behind-the-curtain ways. SPECIAL CODENAME is also pretty interesting because it's the first step in a BIG PLAN TO TAKE OVER THE WORLD.
It turns out that all you need to do is write things and treat them like SPECIAL CODENAME and then they start behaving just like SPECIAL CODENAME. I mean, even *I'm* impressed now (for small, low, easily satisfied values of "impressed", of course).
1.0 I'm OK
So thanks to the stunning success of my accidental newsletter (subject: "you can'"), you can sign up for "I can", an occasional letter of gentle affirmations.
Like this newsletter, I guess the point of "I can" is as much for me as it might be for you. Just writing them out should be helpful.
2.0 Grab Bag
It's Monday, March 6, 2017 now. A DIFFERENT THING continues to be in progress, meanwhile SPECIAL CODENAME was finished, and those of you who've been paying attention might have figured out what SPECIAL CODENAME was in the meantime.
First, via Twitter, Netflix is going to "trial technology that hands control to viewers", which is an amusing way of missing the lede: Netflix is in the position it's in right now precisely because, as a video-on-demand platform, it *handed control to viewers*. No, that type of control isn't enough! In a sort of all-computer-programs-eventually-read-email way, apparently all TV producers and TV networks eventually decide that they need to figure out, Once And For All, how to properly implement branching narratives so they can do "interactive narratives". The original source I had for this was The Daily Mail, but I'm not cruel and I don't want to encourage them, so I'm going to link you to the USA Today coverage (marginally better, I suppose?) instead.
The announcement was made at Mobile World Congress, which at some point might drop the Mobile from its name and just say World Congress, because later generations will scratch their heads and wonder what Mobile means, exactly.
The short of this is that branching narratives in media like television have historically turned out to be a) not very compelling to viewers (there's a whole bunch (ie decades) of inertia in the medium of television/film/video being, well, linear), b) difficult to produce (think: first your job was to conceive of, script, produce and then edit *one* compelling linear narrative. Now you have to do n linear narratives and ideally *all* of them should be fulfilling - ie don't do one "proper" ending and then a whole bunch of filler endings. In general, your audience is not dumb and won't take that shit from you), and c) expensive for little payoff (see (b)).
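To make (b) and (c) concrete, here's a toy back-of-the-envelope sketch (my own illustration - the numbers and function are hypothetical, not anything Netflix has described): give a story b choices at each of k decision points and the distinct paths multiply as b^k, and every one of those paths is, in effect, its own linear edit that someone has to script, shoot and make compelling.

```python
# Toy illustration of branching-narrative production cost.
# Assumption: b choices at each of k decision points, no paths merging,
# so every distinct path through the story needs its own full edit.

def path_count(choices_per_branch: int, decision_points: int) -> int:
    return choices_per_branch ** decision_points

# One linear narrative: one path to script, shoot and edit.
print(path_count(1, 5))  # 1
# Two choices at each of five decision points: 32 endings to make fulfilling.
print(path_count(2, 5))  # 32
# Three choices at eight decision points and you're sunk.
print(path_count(3, 8))  # 6561
```

(Real productions cheat by merging branches back together, but merging is exactly what makes the choices feel like they didn't matter.)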
There are, it turns out, other ways of making interactive narratives. Arguably, the whole point of a narrative is that it's linear in the first place! Anyway, perhaps we should try to solve this problem in text first (see: Bioware and others) before trying to solve it in one of the most expensive creative media ever conceived.
Maybe - just maybe - there's a way of making this work. Other (good) people have tried and failed. So, I suppose I'm not entirely pouring cold water on it? I'm just skeptical that it's a) a good idea, b) sustainable, c) other things, and that maybe Netflix should concentrate on the things that it's already good at. Of course, this could *also* be an amusing strategic fakeout to get, I dunno, Amazon or HBO to spin into trying to do interactive television with branching narrative in order to compete with Netflix, who behind the scenes know that this is a fool's errand.
I noted on Twitter, somewhat sarcastically, that Netflix's greatest contribution to the medium of television (aside from letting the viewer choose what to watch) has been the implementation of next-episode-autoplay, which is about as far from "branching narrative" as you can get.
Second, Adrianne Jeffries wrote on The Outline about Google's Featured Snippets, those one-box type answers that you get when "Google" "thinks" it can answer a search query. Sometimes those snippets are a) useful and b) easy to verify, like "when is mother's day", and sometimes they are an absolute tire fire, like "is Obama planning martial law" which has since been updated to answer that no, he isn't, and instead has a link to a news story about how Google used to say that *someone else* was saying yes, Obama was planning martial law.
This is what happens when the world gets to explore the Valley's murky grey area of "done is better than perfect". On the one hand, many people now agree that a thing that exists is better than a thing that does not exist because it is attempting to be perfect. That is what the statement is about. It is about things like: you know, maybe Gmail being in beta (and sometimes falling over) is better than Gmail not existing at all.
But! Clearly now we have to ask: are false answers, quickly provided and assumed authoritative, in an attempt to quickly provide access to information, better than no answers at all? And the answer is: lots of us, I think, in general, would prefer no answer at all to a false answer from an authoritative source.
This is what I imagine happened at Google: someone had the idea that hey, what if the search box didn't just return links to web pages, but provided the actual *answer* to a query? And someone else said: yeah, okay, that sounds reasonable. And reasonably, they started out with things that could be found and, to a certain extent, verified, using the Knowledge Graph, a sort of semantic search. The useful thing to know here is that a knowledge graph is one of the components that a bunch of people used to think was a prerequisite for having a Turing-test-passing artificial intelligence. You'd need to ask an AI what colour the sky was, and the AI would need to "know" that the sky is blue. One attempt at doing this was/is a project called Cyc (as in Encyclopaedia) that a bunch of nerds like me like to check in on now and then because it's a very laborious, ground-up attempt for humans to document what "common sense" is.
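At its simplest, this kind of knowledge base is a store of subject-predicate-object facts that you look up rather than guess at. Here's a minimal sketch (entirely my own illustration - the triples, names and lookup function are hypothetical, not the actual Knowledge Graph or Cyc data model):

```python
# Toy "common sense" fact store: subject + predicate -> object.
# Hypothetical illustration, not Google's or Cyc's actual representation.

triples = {
    ("sky", "has_colour"): "blue",
    ("year", "has_days"): "365 (366 in a leap year)",
}

def answer(subject, predicate):
    # Return a stored, curated fact, or None - crucially, *no* answer
    # rather than a confidently wrong one scraped from whoever ranks well.
    return triples.get((subject, predicate))

print(answer("sky", "has_colour"))              # blue
print(answer("obama", "planning_martial_law"))  # None
```

The design point is the `None`: a curated store can decline to answer, which is exactly the property a snippet scraped from high-ranking pages gives up.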
Anyway. This is sort of doable for some things but as Jeffries' article points out, is much harder to do for, well, harder things? That are harder to verify? Which leads me inevitably to The Hubris Of Silicon Valley.
It's one thing of course for a search engine to answer someone's query about, I don't know, how many days there are in a year, it's another for a search engine to authoritatively answer about other things in the world like, oh, I don't know, "is Obama planning a coup". The holes here are:
a) "organizing the world's information" is a position you can read two ways: charitably, information is just information, it doesn't have to be "true", verifiable information, which is fair enough; or, more cynically, "Google is a search engine that organizes and provides access to the world's information but makes no representations as to the veracity of any such linked information", which is what a lawyer would say these days to minimize liability and responsibility;
b) truth is hard (let's go coding and fix healthcare instead);
c) seriously though, Google's responsibility only ends at providing a *link* to information and, er, links don't constitute endorsement
Only *of course they do* because of their placement and because of what little we know about the architecture of our brains, inherited trust and because of how information is presented.
OK but! What if we're just moving fast and breaking things! Clearly Google's *intent* is that it would *prefer* to provide verified information. So for the moment, let's provide *unverified* information that is picked based upon criteria that are useless for evaluating the *veracity* of the answers to certain questions (at a base level: are things linking to other things), and let *humans* do what humans are good at, and have the humans report that the information we are preferentially highlighting is false.
The problem here of course is the labour asymmetry and the externalisation of results. *Probably* no, or at minimum very few (n < 100?), humans are involved in checking the answers to some of Google's snippets before they go live. I suspect they are algorithmically determined, with minimal review. The burden of review falls upon the uncompensated users of the system: it is in the users' interest (but not Google's?) that the information provided is correct. There's a difference when automatic, passive labour is involved in improving the information that Google provides (ie: the assumption that search results that are clicked on are 'better' and their trust improves), but that goes out the window when a bad result is prioritized and shown to users. *Then* the review process is active and requires us as users to manually flag stuff. The steps to provide feedback on a Snippet are:
- click "Feedback"
- pick an option (This is helpful, Something is missing, Something is wrong, This isn't useful)
- fill in a text field with any other constructive comments or abuse you'd like to hurl
This is clearly Too Much Work to expect people to do, but the burden falls on people who, I don't know, don't like the fact that Google is/was automatically prioritizing a result that said - with Google's implied backing - that yes, Obama was/is planning a coup.
"Ship it then fix it" as an attitude is rapidly (and rightly) coming under more examination as the implications of *one* single platform making (unverified) representations about the world become clearer. For some people, those implications were super clear *before* they happened. I guess we'll see what happens next.