This one is more of a free-association one. And a short one.
Here's one reckon I just had (yes, another one, you can add it to the pile that everyone else has had) about WhatsApp and the reaction to its purchase: humans are good at *surface*. We have lots of dedicated neural structure for recognising surface-level features like faces, or edges, or inertia, or other top-level physical properties of the world. So for most of us, WhatsApp is what it presents itself as: a messaging application that runs on a (smart)phone. But the real value in WhatsApp was the infrastructure behind it: the invisible mechanism that transports a godawful number of messages worldwide and routes them, seamlessly. That is potentially worth about nineteen billion dollars -- especially when you think about mobile messaging in its previously-known-only-as-SMS form and the value and economy it generated.
So: all this talk about posthumanism and mind uploading is interesting - in particular, Ramez Naam's rebuttal of what you might call the Cult of the Singularity - but what got me thinking was: what are the neural structures that would be particularly beneficial to a post-human? What sort of things would it be great to have unconscious recognition of? You wouldn't even have to *look*, for example, to understand the infrastructure behind a particular network, you would just *know*, at some sort of atavistic level. Or what would a neural structure devoted to recognising the implications and rules behind network or graph theory imply? How might you hack together those types of brain structures?
It feels like it's this type of lack of instinctual understanding that prevents us from dealing with long-term existential threats. The things that threaten us now include complex, interdependent webs of infrastructure creating outcomes that we don't foresee, or outside context events like a giant rock coming down upon us that will *require* complex, interdependent webs of infrastructure and the rapid design and implementation of those webs. When we can now produce mathematical proofs that we can't understand, and indeed financial trading instruments that are essentially black boxes, what type of human augmentation is needed to deal with the present, never mind the future?
As ever, Ted Chiang's already written a story about how this might *feel* - if you haven't read his short story Understand, I highly recommend it.
The whole Star Trek: TNG computing infrastructure universe continues to dwell on my mind. When Geordi and Data invent a new way of inverting the positronic emissions in the deflector dish to close a dangerous subspace eruption, what exactly happens? Do they have to call up Starfleet Engineering HQ to get at the firmware of the deflector? Does Geordi have to turn off the deflector to flash it, and do they kind of have to stop the engines and come to a relative dead stop while the deflector reboots? In our mundane future, does the deflector dish subsystem actually run a tiny embedded FTP server, and does Geordi have to upload a new binary firmware to it, and then wait while it grabs a new DHCP address from the Main Computer? And then after that, does Geordi (or, more probably, Data, because no one likes writing documentation) write some documentation for the firmware diffs for the deflector and then push them upstream to Starfleet Engineering's Github repo, and then do they get merged in, and then on Patch Tuesday does Starfleet push out a bunch of updates? Is part of Engineering's job basically going around all of the ship's systems and looking at those little red badges with numbers in them and then going: oh, looks like we need to install a system update on the environmental systems today to improve O2 efficiency? In fact, if there is a Starfleet Engineering repo on Github, is that how they recruit engineers into the Academy? Look at who's contributed the most to Starfleet projects? Or, has the consumerisation of IT finally reached the 24th Century, and when you're a brand new Ensign with your newly-issued PADD, do you head into the Starfleet App Store and go put on your favourite messaging apps? Does Data publish apps in the store? Also, charts! How do you discover what app you might want to install on your brand new Warp Core? Is the model of Data and Geordi the logical conclusion of Dev/Ops, writing functionality on the fly, inside a mission envelope, and then deploying it?
Are they an effective team? Does Data even write code? Does anyone in Starfleet write code? Does everyone in the Federation know how to code?
So many questions. It's this kind of thing that you need an updated technical manual for.
Google's Tango project has me giddily excited, not least because it marks the return to the public eye of Johnny Chung Lee at Google's new Advanced Technology and Projects (ATAP) group.
The telling sentence in Tango's writeup for me is this: "The goal of Project Tango is to give mobile devices a human-scale understanding of space and motion", because a lot of people seem to have gotten caught up in the various demos and videos of the hardware that currently exist. What the goal sounds like is being able to give mobile computing devices a sense of proprioception to increase their situational awareness, another step toward embodied computing with more context. It's not necessarily about Google being able to map the inside of your house and retaining that data (that feels to me like the absolute bottom layer), but more what that knowledge enables the device that you're carrying to do. There's a whole lot of implicit information about our interactions with the world that is processed on a purely subconscious level - things like how fast and where we're moving, just in terms of our limbs, how we understand where we are in 3D space, and what our relation in that space is to other objects.
(As an aside, it's interesting that part of the way we're designing computers to see actually meshes with the way children think *we* see. There's this thing, the emission theory of sight, which is where our eyes reach out into the world and paint it into being. Which is pretty much what IR depth cameras like the Kinect do, spraying a pseudorandom dot grid out into the world at frequencies that aren't visible to us, bathing the world in little photonic tentacles that reflect back and describe the shape of the world to them. Our eyes don't see like that, of course; we don't emit anything, and merely receive reflected photons on our retinas. But isn't it interesting how we're solving the problem for the intelligence we're creating?)
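(The trick behind those photonic tentacles is just triangulation: the sensor knows the dot pattern it sprayed out, sees where each dot actually landed, and the shift between the two tells it how far away the surface is. A minimal sketch of that depth-from-disparity relationship, under a simple pinhole-camera model - the baseline and focal length here are made-up illustrative numbers, not any real sensor's calibration:

```python
# Depth from disparity, the core of structured-light sensing:
# a projector and camera sit a known distance apart (the baseline),
# and the apparent shift (disparity) of a projected dot between the
# reference pattern and the observed image encodes its depth.
# Constants below are illustrative, not a real Kinect calibration.

def depth_from_disparity(disparity_px, baseline_m=0.075, focal_px=580.0):
    """Return depth in metres for a dot that shifted disparity_px pixels."""
    if disparity_px <= 0:
        # A dot that can't be matched against the reference pattern
        # yields no depth - real sensors report holes in the depth map.
        raise ValueError("dot not matched against the reference pattern")
    return baseline_m * focal_px / disparity_px

# Dots that shift less are further away; halve the shift, double the depth.
near = depth_from_disparity(10.0)  # 4.35 m with these made-up numbers
far = depth_from_disparity(5.0)    # 8.7 m
```

Depth falls off as one-over-disparity, which is why these sensors get noticeably coarser at range: a one-pixel matching error costs far more metres at the back of the room than at the front.)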
One: the on-purpose infection of individuals with parasites for the creation of superhumans, or: has anyone rebooted Catwoman's origin story with her being created by a union with toxoplasmosis yet?
Two: Dan Simmons' Ilium/Olympos series of novels has a somewhat throwaway reference to a pervasive instrumentation and communications network that covered all living things, some sort of networked Gaia. So here's a realtime forest map.