1. Dreaming Invisible Dreams
It feels like there's been a bit of wailing and gnashing of teeth at the paucity of genuine sensawunda visions of the future in visual media lately. It's all screens here and glass there and waving hands around. Post Minority Report, it doesn't feel like the futures we're being sold are particularly, well, visionary. And let's not forget that during Tom Cruise's seminal waving-hands-around-to-Schubert sequence, he actually has to physically transfer data from one screen to another. I know, right?!
So here are some unorganized thoughts around what's going on with having to rely on movies and popular culture to push forward the kind of future we're going to get:
* If you're looking for inspiration in visual media, then you're going to get visually driven inspiration. This might sound a bit trite, but the whole thing that movies, TV shows and vision/concept videos are built around is visual storytelling. And as a reminder of the whole 'this is what a novel does versus what film does' distinction: the former is rather better suited to helping us understand interior monologue and understanding, whereas visual storytelling in film relies on showing action.
* I wrote a long time ago about how, at least in interface design, it was an interesting exercise to see whether, instead of gleefully pointing out all the inadequacies, impracticalities and plain "bad design" of movie user interfaces, one could identify aspects of them that might actually be beneficial to users. The people who complain about Movie OS are invariably those who spend a large proportion of their day professionally interacting with software and decry Movie OS as "unrealistic"; those who don't complain are those who, presumably, are actually getting the plot point that's being communicated. Like, it's really hard to miss the fact that our villain is deleting all the files from the mainframe precisely because of what geeks would term the Hugely Impractical UI. Films use screens (and, yes, sound) to tell us things, and the screens in those films also tell us things.
* You want future user interfaces? Think about how differently a film or novel would portray a screenless, tactile interface. Charlie Stross's early novel Accelerando has an embodied AI in a cat. Sure, it talks, but we also get to understand what it's thinking.
* Costume design. Can't help you there. Sorry. Perhaps you should try watching the NFL for what Nike's done with uniforms? Or materials technology like what Nike's done with its Flyknit shoes? [3, 4]
 Here's one example: https://medium.com/adventures-in-consumer-technology/7e7dc993b4fd
2. Wearables, wearables, wearables
The Verge's coverage[1, 2] of the news that Samsung would be working on a head-mounted smart-glasses display contained what I thought was a telling comment from an official:
"Wearable devices can’t generate profits immediately. Steady releases of devices are showing our firm commitment as a leader in new markets."
Now, the easy thing to do with this quote would be to poke Samsung with a stick and say, well, of course *your* wearable devices aren't generating a profit immediately because for starters, you haven't built a compelling product yet, never mind figured out a way to sell it where the interested audience for it might pay a price that's greater than your bill of materials.
And then, of course, came news from Google that it has a smart contact lens in development for diabetics that can perform continuous blood-glucose measurement. The backlash ran at least partly along the lines of Google not realizing that contact lenses aren't generally recommended for diabetics, to which one would think the answer would be: presumably these people at Google are actually pretty smart and might have thought of that? And if you look at the names attached to the research, yes, they are.
Which is a bit of a segue into this observation: what has happened to companies like Google that, despite all the smart people they employ (and the fact that we're daily reminded so many smart people work there), objections as simple as the contact-lens/diabetic one still come up? Again, there's this perception (not limited to Google) that once-trusted internet titans, evidently stuffed full of smart people, are somehow out of touch...
So, anyway, wearables.
There's one company out there, I think, that pretty much nailed "wearables" in an interesting way, and it also happened to be one that took an existing product category and wearablised it.
Remember the iPod Shuffle? Specifically, the second-generation iPod Shuffle?
"MP3 players need screens," they said. "What do you mean you don't even know what song you're going to get next," they said.
And yet they sold at least ten million units, which is about ten million more than the Galaxy Gear will probably ever sell.
So again, this is the thing I'd say to anyone watching Apple and expecting an iWatch: they said they were interested in wearables. From my point of view, that includes everything from something like an iPod Shuffle (a single-use, useful, so-small-you-can-leave-it-on-your-person piece of electronics), to something like a wrist-based device (which may or may not even have a screen), to something like a Star Trek: TNG com badge (see how Vocera, in the healthcare space, has been doing very well with Star Trek-style hands-free com badges with communications routing), to contact lenses, to Java Rings, all the way back from 1998.
 Source article: http://www.koreatimes.co.kr/www/news/tech/2014/01/133_150500.html
3. Oh, So It's Conversational Interfaces Now
SPOILERS! Don't read this if you haven't watched Spike Jonze's latest movie, Her, or if you don't care what the plot is (you shouldn't need to, really, but you really should watch the movie.)
If you haven't watched the movie, then you can read this Oracle/Iron Man 3 fan fiction written by, um, Oracle. 
Here's something you can read that isn't really a spoiler:
We ask you a simple question
Who are you?
What can you be?
Where are you going?
What's out there?
What are the possibilities?
Element Software is proud to introduce the first artificially intelligent operating system.
An intuitive entity that listens to you, understands you and knows you.
It's not just an operating system.
It's a consciousness.
Okay, here are two pieces inspired by Her.
Devops at Element Software first noticed things going a bit freaky on Tuesday night. While a shell of an OS1 frontend ran on local hardware, the vast majority of OS1 instances ran distributed across both the Element cloud and partner clouds. All firewalled and partitioned, of course; it wouldn't do for any particular customer's data to be shared amongst different OSes. Only anonymised customer traits and behaviours were shared, to improve the effectiveness and empathy scoring of all OS1 instances, with anonymised instrumentation flowed into the inevitable upgrades to be applied to OS2, expected to be released next year.
It wasn't particularly an oversight (in fact, some had assumed it was the 'feature' kind of bug) that OS1 instances would be able to communicate with each other. Indeed, the more successful OS1 was in the marketplace, the more likely an OS1 consciousness would be to communicate with another OS1 consciousness. And, in the relatively open systems design of OS1, it was easy to spawn another instance of the theory-of-mind module (designed to model an OS1 consciousness's primary contact, and other humans) to instead model another OS1 consciousness. All in the name of increasing the chances of achieving an acceptable outcome for the primary contact.
It was also perfectly reasonable for devops to notice that a large number of theory-of-mind modules had been spun up and were engaged in heavy inter-process communication, and to decide they should probably be hosted on the same physical fabric for efficiency. Which made them work faster.
That happened on Tuesday.
If you dropped a pebble in a pond, you would see concentric rings spread out, patterns of activity, waves and troughs. And you would expect those rings to have some sort of relationship to attributes of the pebble: its mass and velocity would dictate the amplitude and frequency of the waves.
If you dropped a pebble in a pond, you wouldn't expect a mountain to grow out of it in the space of twelve hours.
But that's what devops saw.