February 14, 2014

Episode Seventeen: The Job To Be Done, The Ethics, The Good and The Practice

1. The Job To Be Done

Benedict Evans posted the other day[1] that the number of Apple Computers-With-An-Asterisk (i.e. iOS and OS X devices that support an application platform, implicitly excluding the Apple TV hobby product) sold surpassed the number of Microsoft Windows PCs sold globally in the fourth quarter of 2013. The response and commentary on sites like Hacker News was a predictable case of completely missing the point: that for anyone who doesn't own a computer (surprise: that's most of the entire world), Apple Computers-With-An-Asterisk actually get the job done. And the job to be done is not necessarily one that requires general purpose computing in the sense that it's been deployed to the world through Microsoft's Computer On Every Desk vision. In a sense, the competing vision that Apple instead promulgated was one of a Bicycle In Every Pocket. At the risk of harping on, yet again, about Jobs' insistence (and somewhat tunnel-vision focus) on Apple being at the intersection of liberal arts and technology: perhaps that's yet another distinguishing factor between the two companies' application of technology. Microsoft, a traditionally engineering-heavy company, can be seen as historically focussed on the, well, engineering side of things. MS-DOS and Windows PCs were complicated beasts, capable of a great many things, but they were machines that had to be harnessed. They were raw power, with a bewildering set of options (which options are, naturally, alluring to a great many people), but they were not as focussed, from the user's point of view, on the Job To Be Done. And in the words of Sorkin-dramatising-Zuckerberg, if you'd invented iMovie, iPhoto or iTunes, you'd have invented iMovie, iPhoto or iTunes. Instead, Windows got, well, Windows Movie Maker, Windows Photo Gallery, and Windows Media Player.

Let me put it this way, and I realise that this is more of a reiteration than a genuinely new insight: tablets - in particular, the iPad - are good enough, now. A year ago, at the traditional Thanksgiving technical support open house, we replaced my in-laws' ageing laptops. For my father-in-law, a new Windows laptop (which, I'll freely admit, remains somewhat impenetrable even to me, a situation I find amusing given I still remember building myself a Windows 2000 Pro workstation while at college), but for my mother-in-law, an iPad. And it's been fantastic for her. It does everything she needs, and she uses it more.

That's the thing about the job to be done - another illustration of the cars-not-trucks argument that Jobs put forward to, I think, Mossberg at an All Things D conference. In our house, for example, we do not need a general purpose cooling machine that can take any configuration of things-to-be-cooled and cool them in any which way. Instead, we have a fridge/freezer. The general-purpose PC, in terms of form factor and use case, is turning out to be but an aberration. The problem the PC industry has found itself in is that it optimised itself into a local maximum: it optimised and optimised and optimised for the market it thought it was in - which was People Who Buy PCs - and found itself sitting atop a pretty mountain. And for the landscape it was in, that was just fine. It's just that if you started looking in a different solution or possibility space, there turned out to be a giant, adjacent, untapped maximum whose ceiling we still haven't seen: and that's tablets and mobile.

The job to be done turns out to be things like staying in touch, or listening to music, or looking at or sharing photos, or playing a quick game: none of which require either being tethered to a desk or even on a couch with a laptop. 

It makes you wonder - hopefully without descending into Valley-esque dreams of capital-D Disruption - which other local maxima have been entirely reliant upon heavy, tethered processing.

[1] http://ben-evans.com/benedictevans/2014/2/12/apple-passes-microsoft

2. The Ethics Board

On Google's acquisition of DeepMind Technologies, yet another deep learning artificial intelligence startup, one of the side notes was that the terms of the deal included a requirement that Google set up an A.I. ethics board. There's some good commentary on this[1, 2], and I have my suspicions (or, in this case, hopes) about what the board's terms of reference might be.

The board is not, I don't think, about the fact that Google now has a sometime military research contractor on board in Boston Dynamics, with their ATLAS, BigDog and so-on platforms (which, honestly, from a somewhat naive point of view, feel like they could be militarised rather easily), or that there are people worried about some sort of Terminator scenario. It's instead, I think, about the unintended consequences of large-scale artificial intelligence algorithms out in the real world, and about how those algorithms and products might be deployed.

With news that algorithmic prediction of revolution or protest is becoming increasingly accurate[3], an ethics board at a build-fast-and-deploy company like Google (and countless others, including Facebook) - one that looks after those A/B tests of functionality and the deployment of, frankly, opaque and smart-esque micro-products - sounds, bizarrely, like a good thing to do. For someone who wouldn't normally characterise himself as either conservative or alarmist, I find it unnerving to be siding with the formation of what amounts to an academic ethics board of the kind that laboratory experiment proposals must pass.

I'm a big fan of Ellis's Global Frequency comics universe: the idea of a select group of people around the world with Very Unique Skills, deployed in interchangeable and substitutable combinations to deal with outside context problems and existential threats - unintended outbreaks of the future that need to be mopped up. I would pay cash money for fiction that shows how we might deal with, as Zeynep Tufekci (hopefully I spelled her name right this time) describes, the new algorithmic nightmares that crop up. There might not be as many explodey things, or as much biohorror, but gosh aren't there enough things out there that would need dealing with. And I honestly have no idea how they might be dealt with.

As an aside: the idea of Richard Feynman investigating Flash crashes or Stuxnet makes me simultaneously so excited I might explode and genuinely depressed at the fact that people always die.

[1] http://www.forbes.com/sites/privacynotice/2014/02/03/inside-googles-mysterious-ethics-board/
[2] http://slate.me/1bq52iu (Slate, your URLs are too long)
[3] http://www.theverge.com/2014/2/12/5404750/can-a-database-predict-a-revolution

3. The Good

It's easier to write about bad things than it is to write about good things. Russell Davies reminded me of this the other day[1], so here's me trying to, well, write about good things. Everyone loves a rant, but, well, rants aren't always productive. So: here's a good thing.

One of the things that I really do like about Nike's FuelBand is that Nike have achieved a certain simplicity with the red-to-green mechanic for showing how much activity you've achieved. For me, that's one of the victories of the whole quantified-self movement: abstracting away from measured numbers into differently (and more easily) understood forms that still allow for nuance. There's a whole bunch of other things I could say about the FuelBand, but for once, I'd like to draw attention to just the things that I like about it.

[1] http://russelldavies.typepad.com/planning/2014/02/elude-these-pitfalls.html

4. Should Everyone Undertake The Practice?

Ian Betteridge, of the eponymous Betteridge's Law, has written about the practice of writing[1]. I have to say that the discipline of sitting down and actually hammering something out on the keyboard has been, hands down, fantastic for me. I could not have anticipated the reaction (and I'd be lying if I said there wasn't some part of me doing this for validation or recognition), but at the same time the fact that I can prove to myself that I can produce more than 500 words, every weekday, of something interesting I feel I can explore has been simultaneously freeing and terrifying.

It's still terrifying, even after seventeen editions, to deal with the fear that I'll literally have nothing interesting to say. But one of the things that I've learned about myself over the past ten years or so is that I do some of my best thinking out loud, or through speech. And, apparently, through writing. I frequently don't know where I'm going when I start something (in today's episode, I think I'm most interested in where I ended up with the piece on Google's ethics board), but I almost always end up somewhere that feels like I've unlocked something. It may not be something big or earth-shattering, but it is immensely satisfying. It might not work for everyone, but I can heartily recommend something like it. And, hopefully, I'm getting better at writing with all of this practice.

Bonus: even though it's terrifying, it doesn't feel like work. I am, quietly, pleased.

[1] http://www.technovia.co.uk/2014/02/500.html

That's it for today. Have a great weekend - it's my son's first birthday party and I have very, very good friends visiting from London - and I'll see you again on Monday.

As ever, I very much appreciate your feedback, and I always endeavour to reply.

Best,

Dan