June 15, 2016

Technoccult News: Stealing the Light to Write By

So I've once again been thinking a lot about how recommendation algorithms need a "No And Stop Recommending This To Me" function, if their designers really want them to learn anything. I've discussed this before, but it came up again as I was watching Netflix this week: I came to the end of Maria Bamford's utterly brilliant Lady Dynamite, and Netflix tried to segue into Adam Sandler's latest monstrosity. I need a way to tell Netflix, Amazon, Google, Hulu, all the Technical Kids learning in this school yard we call life, "No, and I really never want you to suggest that to me again. If I want it, I'll search for it."
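To be concrete about what I'm asking for, here's a rough toy sketch of my own (entirely hypothetical names, nobody's actual API): the difference between the soft thumbs-down these services already have and a hard "never again" signal that actually removes a title from the pool.

```python
# Hypothetical sketch of the missing feature; these names don't belong to any
# real streaming service's API.

class Recommender:
    def __init__(self):
        self.blocked = set()   # titles the user has explicitly banned
        self.ratings = {}      # soft signals (thumbs up/down) that only nudge ranking

    def never_again(self, title):
        """The button I want: remove this title from all future suggestions."""
        self.blocked.add(title)

    def rate(self, title, liked):
        """The button we already have: a soft preference."""
        self.ratings[title] = 1 if liked else -1

    def recommend(self, candidates):
        # Hard exclusions come first; soft preferences only reorder what's left.
        allowed = [t for t in candidates if t not in self.blocked]
        return sorted(allowed, key=lambda t: self.ratings.get(t, 0), reverse=True)

recs = Recommender()
recs.rate("Lady Dynamite", liked=True)
recs.never_again("The Do-Over")
print(recs.recommend(["Lady Dynamite", "The Do-Over", "When We Were Kings"]))
# -> ['Lady Dynamite', 'When We Were Kings']
```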

I know they're trying to sell me stuff, and want to even be able to dictate what I'll want to be sold, but shit. Muhammad Ali died, last week, y'all, and I've been talking about him a lot. Don't you maybe want to show me a documentary about him, instead?

Or maybe a decent documentary about the Stonewall Riots?

Anyway. Hello from the rats' nest in the server that houses the gradually ever-more self-aware machine learning algorithm in the Southeastern Corner of These United States.

…I mean, "The Shores of Lake Geneva"

Yeah, I'm in Geneva, this week, for the Frankenstein's Shadow Symposium. So… hey.

 
|+| Friday Night Lights |+|
When last we conversed, I was telling you all about the IEEE Ethics Conference, in Vancouver. We left off at the end of Friday, and several wonderful panels. What happened next was the Dinner keynote by Dr Subrata Saha, on "Ethical Issues in Biomedical Engineering Research and Practice."

In the course of his talk he listed technologies like the stent, MRI, artificial hearts, joints, etc., and how they all require years of testing to model both long-term patient care and personal and societal impact. He said that we are at a crossroads of increasing technological capability, where many cherished beliefs about humanity are being called into question, and that we will have to give much more weight to considerations of the difference between what is legal and what is ethical.

He noted that conflicts of interest will need disclosure and discussion, and that issues of leveraging innovation in the face of intellectual property concerns will lead to more scrutiny. To that end, he says, industry-sponsored research needs disclosure. To avoid bias, sponsors need to have minimal involvement at ALL development stages. Trust your instincts and verify. That is, if it feels like a conflict, ask a colleague, and if they concur, then act on it. Saha says that professionals have twin traditions to help guide them: the Hippocratic Oath and public health/safety standards.

He touched on the potential use of nanotech to cure/repair osteal tissue, asking what the risks and rewards might be. Might there be dangers of nano-sized particles? Cancers? Blood/bone/brain barriers? Could we have filters implanted? This part made me think of everyone having Ionic Breeze-upgraded lungs, to keep out micro- and nanoparticles. What about impact on human health? On the ecosystem? What about bad actors? Can we count on regulation? What about developed vs developing world applications?

Saha says that it's important to remember that nanobots are 50+ years out, and that we have to find something between the dystopian and the utopian, if we're going to realistically discuss it. Any regulations we devise should be strong, uniform, and based in sound science.

He moved on to cybersecurity, and the potential hacking of biomedical interventions, and a reminder that there are vulnerabilities throughout that field (hacked pacemakers, anyone?).

He raised the issue of the ethics of deep brain stimulation, reminding the audience that fundamental personality changes have been recorded. With that in mind, is there any way that this choice can be legitimately made for others?

All in all, Dr Saha's talk worked very hard to hit literally every major contemporary biomedical ethical dilemma.

After dinner and the talk, we went for drinks with Matt, Ashley, and Dylan. Really great conversation about everything from hometown antics to the state of art and academia.

 
|+| Listen to the Rhythm of the Rock and Roll |+|
First thing in the morning, I tried to psych myself up to give a talk to a roomful of academics and industry professionals about thinking through our intentions toward machine minds, with regard to the nature of personhood and rights. There were military contractors at this conference (I didn't know if they'd be in the room) and people who'd worked with Google and Xerox PARC. I'd have been terrified if it weren't for the fact that this group had already shown itself to be STRANGELY sympathetic to the issue of thinking past/through/about bias.

As I was watching other people present (more on that in a sec), I started feeling a little self-conscious about not having more than my title and contact info on my PowerPoint, but I figured, screw it. My strongest presentation style has always been notes-supported extemporaneous speaking. Anything else I need, I tend to build from there.

We went and saw the panel on ETHICS, INNOVATION, AND DESIGN, where we caught D.E. Wittkower's "Principles of Anti-Discriminatory Design," on anticipating discriminatory outcomes and working toward implementing actively anti-discriminatory processes. He discussed the importance of having representatives of marginalized groups on your design team, while noting that not everyone is clearly aware of their own marginalization. In these situations, it is then the responsibility of the "normatively advantaged" to use said advantages to repair what damages they can. But it's important to remember that one cannot simply "imagine" oneself as a member of a different group; it requires systematic controls and experimentation. Things like Gender and Diversity Impact Assessment Reports can tell us the ways in which having a truly diverse team, rather than just a nominally diverse team ("Look at my African American"), changes how and how well our teams operate.

Wittkower told a cautionary tale about the FBI trying to go out of its way to "recognise the unique skills" of its Hispanic and Latin@ agents, and so putting them in liaison and translation positions… all of which have fewer opportunities for advancement and promotion.

Next up was Richard L. Wilson on "Anticipatory Ethics, Innovation, and Artifacts." He started by sketching a bit of the field he was discussing, including the ethics of engineering and computer science. He discussed information and computing technologies, Latour on Actor-Network Theory, and the works of James Moor, Philip Brey, and Deborah Johnson. The ultimate thrust of his work was the ways in which artifact design changes the literal physical capabilities of the user, and what that means we have to take into account as we design, build, and implement new artifacts in the world.

The next panel we saw was ETHICS AND THE INTERNET, starting with Vanessa Chenel presenting the paper "Reliability And Acceptability Of An Online Decision Support System For The Self-Selection Of Assistive Technologies By Older Canadians: A Research Protocol," which she wrote with Claudine Auger, Manon Guay, Jeffrey Jutai, Ben Mortenson, Peter Gore, and Garth Johnson. This paper concerned the use of assistive tech pathways by older Canadians, specifically the SmartAssist2 Online Portal. This is a system that would let them map and model their house online, and work in interactive consultation with the system about which in-home interventions would make their lives easier, and would make it easier for health professionals to get to them in case of emergency. The online portal is provided and maintained by healthcare professionals, all of whom are concerned by the rising demands of smart home tech, what geographic areas would potentially be serviceable, and what their wait times would be. The goal is to provide older adults with new mechanisms of information and support in their self-selection of resources.

SmartAssist2 is apparently meant to emulate clinical thought processes, and has been developed, used, and refined over the past 12 years. Users register and then can make their way through assistance topics with graphic icons which indicate the area of the house and then the type of modifications or advice or other interventions they need. They click the icon for the knowledge base and then select their specific issues, and can describe the physical characteristics of themselves and their homes to further tailor the experience; it can even adapt to regional differences in housing construction.

The main question about this one was from Ashley Shew, as to why their testing group was limited to the elderly and not also disabled individuals. The answer was, simply, funding and testing parameter controls.

Next up was Nicholas Proferes presenting his and Katie Shilton's "Values in a Shifting Stack: Exploring How Application Developers Evaluate Values and Ethics of a New Internet Protocol." This talk was about the ethical and design implications of switching to Named Data Networking from TCP/IP. In the NDN model, every chunk of data is named and cryptographically signed, and those chunks don't move via a to-from flow path, but can be decentralized, in much the same way as a torrent filesharing system. All the chunks originate from and are signed by the owner—say, the New York Times—but when you go to retrieve your Times subscription, it may be the case that those chunks will get to you faster and with greater fidelity if they come from your neighbour, rather than from the central NYT distribution server.
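If it helps to picture that, here's a toy sketch of my own (illustrative Python, not the actual NDN software stack, with a symmetric signature standing in for real public-key signing): content is asked for by name, any node that has a copy can answer, and trust comes from the publisher's signature rather than from which node replied.

```python
# Toy illustration of the NDN idea (not the real NDN libraries):
# data is addressed by name, served from any cache, and verified by signature.
import hashlib
import hmac

PUBLISHER_KEY = b"nyt-signing-key"  # stand-in for a real public-key signature scheme

def sign(name, payload):
    return hmac.new(PUBLISHER_KEY, name.encode() + payload, hashlib.sha256).hexdigest()

# The publisher names and signs each chunk once, at origin.
chunk = {
    "name": "/nyt/2016/06/15/frontpage/seg0",
    "payload": b"...article bytes...",
}
chunk["signature"] = sign(chunk["name"], chunk["payload"])

# Any node -- the NYT origin server, your neighbour's router -- may hold a copy.
caches = {"nyt-origin": dict(chunk), "neighbour": dict(chunk)}

def fetch(name):
    """Express interest in a name; accept the first copy whose signature checks out."""
    for node, data in caches.items():
        if data["name"] == name and data["signature"] == sign(data["name"], data["payload"]):
            return node, data["payload"]
    raise KeyError(name)

print(fetch("/nyt/2016/06/15/frontpage/seg0"))
```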

Proferes says that the values (both in terms of "worth" and "ethos") of NDN are that it uses end-to-end principles, rather than a shared external source; that we can ask what we mean by "fairness of traffic" in this framework; that NDN isn't less private than TCP/IP. The strange thing, Proferes says, is that many developers who say they desire the kind of encryption NDN offers still haven't implemented their own crypto. So, with that in mind, he asks, "Will NDN really improve the trustworthiness of the Internet, and can it truly replace TCP/IP?" With the opportunities for innovation as regards the Internet of Things™, and the open-source goals NDN claims to champion? Maybe.

Next was Bernard Cohen's "Ethical Considerations of the Societal Impact of the Internet of Things," where we once again discussed cars and pacemakers and other internet-connected devices and the direct personal threat they can pose. In addition, there are now 1.1 billion datapoints in the world, which represents 90% growth in the past 5 years. 2019, he says, will see half of all IP traffic coming from non-PC devices, and there will be 25 billion devices connected to the internet by 2020. He asked if we had somehow moved "too far" from fossil fuels, and what the financial impact of all this data was likely to be. No specific answers so much as an imploring to keep asking the questions.

My panel, ETHICS OF SMART MACHINES, was after that. My talk, in case you somehow forgot, was called "On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences." I didn't feel that my talk went well, at all, but apparently I was wrong. Kirsten and Matt both said it was great, which I'd chalk up to people I know, knowing my work, if it weren't for several people wanting to talk to me about it, afterward, and making reference to it, in the last few hours of the conference. So I'm going to shut up about how I thought it went, and let that stand for me.

After that, Johannes Dregger and Jonathan Niehaus presented the paper they wrote with Peter Ittermann, Hartmut Hirsch-Kreinsen, and Michael ten Hompel, "Digitization of Manufacturing and its Societal Challenges. A Framework for Future Industrial Labor." It was an overview of cyberphysical systems, smart machines, and manufacturing robots operating outside of the normal constraints used to keep humans safe from them. They discussed the uses of Augmented Reality and the digitization of human/machine interfaces, asking how all of this would change the future of industrial labour. Surveyed skeptical responses point at the job losses and the lowered qualifications necessary for once-skilled labour, while optimistic ones point at the opportunities for social and emotional enrichment, and the better quality of tools to hand.

My question was whether there was any evidence to support the idea that a threshold of joblessness might lead to the implementation of universal basic income, and what that threshold might be. Answer, "Maybe; we don't know, yet."

Next, Kevin Meuer presented his and Meg Jones' paper, "Can (and Should) Hello Barbie Keep a Secret?" Mattel's Hello Barbie both talks and records, and some children use it as a diary. That being the case, what are the ethical implications of a parent being able to simply play back a recording that a child thought was secret? What are the ways in which that harms the child's ability to form trust relationships with other humans, and specifically parental figures? And what of scenarios where we absolutely do not want a child to keep a secret, such as abuse? No easy answers on this one, but very important questions which, as more and more devices gain the ability to record and converse (even if only in a bot-like fashion), we very much need to address.

Next was Richard Wilson with "UAV's: The Impact on Civilian Populations," and if I'm honest I didn't get notes for his talk. There was so much hawkishness, and a kind of fundamental assumption that the use of drones was good and was going to happen regardless, that I kind of zoned out. A bit too much dark humour for the room, as well. Not that gallows humour is a bad thing, but you probably shouldn't assume that it'll be uncritically received as good by whatever audience is in front of you.

After that was lunch, and then the panel ETHICS OF AUTONOMOUS MACHINES, starting with Peter Reiner, Saskia K. Nagel, and Viorica Hrincu's paper "Algorithm Anxiety: Do Decision-Making Algorithms Pose a Threat to Autonomy?" It covered neuroethics, looking at algorithmic devices as extensions of the human mind, and the impact of algorithmic autonomy on human autonomy. It presented the idea of "rugged individualism" as a fiction: that we don't have any singular autonomy, but only ever relational autonomy. To that end, our algorithms have a kind of techno-social embeddedness.

He discussed Technologies of the Extended Mind (T.E.M.s), stating that for them to truly fit the category, they have to a) do some kind of cognitive work; b) loop into the human mental processes; and c) be perceived as T.E.M.s by humans. Now this last one's a bit bootstrappy, but as an example, he says that Googling new information meets all of these criteria, and so is an example of extended cognition. Outboard brains and memories. (Did I tell you how much I loved this conference? Every time I felt like I didn't get to adequately cover something, someone else would discuss it further in their talk. Felt like home.)

But as we all know, people can be deeply uncomfortable with the idea of technology as an extension of the Self. In trying to help them become more comfortable, we have to be sure we don't stray into making them be comfortable. The difference between nudging and coercion.

After this was Zhaoming Gao and Meizhen Dong's "Robot Warriors Autonomously Employing Lethal Weapons: Can They Be Morally Justified?" which looked at the question of not just whether the human use of these is morally justifiable, but also whether the machines themselves could ever contain a moral decision-making framework. Their paper was decidedly in the "No" category, looking at the frameworks necessary for moral justification and finding so-called robot warriors entirely lacking in them. I have to disagree with this conclusion, in terms of its scope, though. Never? Seems a bit much.

Then Selina Pan delivered the paper written by her, Sarah Thornton, and J. Christian Gerdes, "Prescriptive and Proscriptive Moral Regulation for Autonomous Vehicles in Approach and Avoidance." This paper essentially described the variables at play in the programming of Auto^2 crash avoidance systems, and detailed a process of research in which four cars were designed and compared, with variable sliders on how heavily they weighted each factor: Obeying the Law, Avoiding Collisions, Efficiency of Reaching Destination, and Speed. They showed that if any one of the variables was changed, even in the slightest amount, it wildly altered the way in which the Auto^2 went about accomplishing its task, even if the end goal (Arrive At Point B) was the same. They looked at the impact of the shape of the car, the kind of tires, the designer, all of it, and the impact that had on what kind of assumptions were made and baked into the process.
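Just to give a feel for the slider idea, here's a toy sketch of my own (not the authors' code, and the weights and scores are invented): each candidate maneuver gets scored against the weighted factors, and nudging the weights flips which maneuver the car picks, even though the destination never changes.

```python
# Toy illustration (not the authors' actual model): candidate maneuvers are scored
# against weighted factors, and small changes to the weights flip the chosen plan.

def score(maneuver, weights):
    return sum(weights[factor] * maneuver[factor] for factor in weights)

# Invented scores for how well each maneuver satisfies each factor (0 to 1).
candidates = {
    "swerve_over_centre_line": {"legality": 0.2, "collision_avoidance": 0.9, "efficiency": 0.8, "speed": 0.7},
    "hard_brake_in_lane":      {"legality": 1.0, "collision_avoidance": 0.8, "efficiency": 0.3, "speed": 0.1},
}

law_abiding = {"legality": 1.0, "collision_avoidance": 1.0, "efficiency": 0.5, "speed": 0.5}
in_a_hurry  = {"legality": 0.6, "collision_avoidance": 1.0, "efficiency": 0.9, "speed": 0.9}

for weights in (law_abiding, in_a_hurry):
    best = max(candidates, key=lambda m: score(candidates[m], weights))
    print(best)  # same destination, different behaviour, because of the sliders
```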

Their work was amazing and wonderfully solid, so I was fucking floored when, after their talk, they asked me for input on their project, because I was one of maybe two people they'd ever heard explicitly state that programming code is inherently biased. We're translating the world as we understand it into a language that machines can understand, so whatever biases we have, they'll learn.

The closing keynote was by Philip Chmielewski, and it was called "Techno-Ethics: Upon Shifting Selves and Translocal Networks Constructing the Metropolis." Six minutes into it, he had touched on Ghost in the Shell, Serial Experiments: Lain, architecture, cybernetics, and Foucault. He specifically discussed the ethics of architecture, built environments, and the notion of throughput as it pertains to information systems, communities, and communities as information systems. Then he drew an analogy between Information and Communications Technologies (ICT) and Coyote as Trickster God, at which point I was convinced that that weekend was some kind of very pointed object lesson.

He discussed the idea that ICT is beneficial to us, when we regard it correctly, but that its uses and intersections in our lives can take turns that are mysterious and shocking, if we aren't careful.

Then he closed out with a quote from David Gunkel. Yes.

 
|+| One Month Later |+|
So that's where we're going to end this one. Next time (later this week) we'll talk about what's going on with the final season of Person of Interest, and after that, I'll do a breakdown of the symposium I'm at right now.

For instance, I saw the Villa Diodati, where Byron and the Shelleys stayed. I saw the actual place where Mary Shelley wrote Frankenstein. So that's amazing. I was going to try to make it to Borges' grave, but I absolutely did not pack warmly enough for the weather this week, and I would really rather not get sick in Geneva, on the way back, or after I get back. But we'll see. Maybe I can find a hat to buy.

AMENDMENT: I didn't find a hat to buy, and so didn't make it to Borges' grave, but spent all day & night talking with people who really get it about something we love, so, mark it a win.

For now, I'm off to the first day of talks. If you're missing the link spam, then I really recommend you hit up the Technoccult Twitter and Facebook pages. And as always, if you like what we do, feel free to subscribe or tip and tell your friends.

If you don't like what we do, then feel free to unsubscribe, and I feel relatively sure that you won't have dreams about void-and-red-eyed Ravens and Coyotes stalking the periphery of your senses, as you go about your now-horrifyingly fraught daily tasks.

Until Next Time.