May 22, 2016

Technoccult News: Preying with Trickster Gods

And we're back in the States. The United States. Of America. Weird how we forget that there's another one of those like, right next to us. Estados Unidos Mexicanos. Anyway, we're back, and whooo cripes but it's been a busy damn week. So let's just get into it.

Hello from the Emotional and Professional Rollercoaster in the Southeastern Corner of Estos Estados Unidos (Americanos).

 
=|Accidentally Bringing Light Into The World|=
We'll get into the rundown of my notes from IEEE Ethics 2016, in just a bit, but first I want to frame the whole thing, for you. Two days before flying to Vancouver I made this Twitter post. If you don't feel like clicking through, it says "Trickster gods, man. whaddaya gonna do? [Shrug Emoji]."

The first thing we saw on arriving in Vancouver was this very interesting display. It's a kind of walk-through diorama about Raven bringing Light (and thus Knowledge, and Language) to the World. Spoiler alert: The closing keynote for the weekend invoked Coyote, as well as SERIAL EXPERIMENTS: LAIN and GHOST IN THE SHELL. So. While we were in the security line to Leave Vancouver, I noted to Kirsten that I think they may have taken that initial Tweet as a Prayer, and the lack of sleep as a sacrifice.

My mind wasn't exactly changed on this score when, after I got back to the office on Monday to do some end-of-semester paperwork, I found out that I'd been accepted for the Frankenstein's Shadow Symposium in Geneva. And after I freaked out a little bit, on Twitter, the person who sent me the acceptance pinged me and a guy named John C Havens, hoping to introduce us to each other, and get us in each others' orbits. Problem was, John and I had already met and hit it off quite well, the first day of IEEE.

Then, Tuesday, I was quoted in WIRED.

…Yeah.

So when I say it's been a busy damn week, I mean it has been a busy damn week.

Things quieted down, more or less, but the residual paperwork has been kind of steady, not to mention a little bit of an emotional hangover, yesterday. No real breaks, until today, to try to do this. But now that we have this space, let me tell you about some of the people I met, last weekend. I think you'll like many of them.

 
=|Up Off Paper, Down On Bits|=
One of the first things I noticed about IEEE was that there's a great deal of hierarchy in the organization. There's the international organization, then the regional chapters, then the operators and organizers within each region.

The opening keynote (there were three keynotes) was given by Rob Hunter, who framed the discussion of technological ethics as an issue of brand management. As a starting point, this was not my ideal, and left me feeling a little sour. This feeling didn't last long, as I simultaneously remembered that one of the major partners for the weekend was Boeing, one of the largest Military Industrial manufacturers in the world, and was shown that there were many varied perspectives in attendance, well beyond the pro-capitalist and hawkish.

The next panel we attended was called "Ethical Considerations in the Design of Artificial Intelligence," and it was moderated by the aforementioned John Havens. Two things I'm not going to lie about: 1) We were super leery of the fact that this entire panel was made up of dudes. Very much worried. 2) While listening to these people talk, I 100% felt like I should have been on this panel. Not because they were missing things, but because they were getting so much right. My "You're Murdering Yourself And Ruining Your Life" cards got a real workout this weekend.

Havens framed the discussion by noting that the framing of "who should automated vehicles choose to kill?" is deeply problematic, and offered the names of some groups working on solutions, including his own. He talked about the possibility of Crowdsourcing a code of conduct, and of refining that code, over years, by bringing it into engagement with other groups, at meetings over the next two years, and hearing and addressing their concerns. When asked, he acknowledged that the beginning of the process was heavily US/North American-centric, but that as the project grew it would specifically seek to engage with non-western perspectives, and their concerns.

Next was Mike van der Loos, whose Open Roboethics Initiative (ORI) team seems to be very aware of the need for research on intuitions and biases. He talked specifically about the nature of embodiment and how what kind of form a machine has will massively influence how it's used, and what it can learn how to do. He talked about Human/Robot collaboration, and how design factors so fully into the ability for that to even happen. He spent some time talking about the ways in which automated systems, embodied and non, might accidentally harm humans, and looking at inherently non-injurious design.

(I asked a question about this, during the Q&A; specifically, asking what kinds of bodies they were tested on, and how the testing scenarios were designed. He showed some work they'd done factoring that in.)

He spent some time differentiating between RoboEthics, Robot Ethics, and Robots' Ethics. RoboEthics he defined as the ethics of humans as applied to their tech. Robot Ethics he defined as a code of conduct implemented into a machine mind, by which it would be required to abide, in order to function (he noted, here, the limitations of AI being able to follow human ethical rules in ways with which we would be comfortable; Asimovian scenarios of robots interpreting a rule very differently than a human would, etc). And Robots' Ethics he described as giving machine minds the capacity for morality, and letting/guiding them in making their own ethical systems out of that capacity.

(See? Should've definitely been on this panel :T )

Questions he asked: How do we find ethical answers to these problems? Whose biases are going to get implemented? He referenced the different answers given by people on a survey about drones, when the question was framed differently. "Should Remote Controlled Drones Be Armed" got a roughly even split of yeses and no's, but "Should Autonomous Drones Be Armed" got resounding Nos. He talked about the ORI, here, and gave names and images of the team, showing that it was roughly 50/50 male and female, which was deeply encouraging. In an effort to further gauge public intuition, they ran 10 polls and collected 766 responses on how people thought about autonomous cars. They even measured how long it took people to answer different but similar versions of the same question, as a "How hard was it for you to answer the question" metric. Ultimately, they were testing people's feelings on the overarching question of "Who gets to make the decision as to whether you or the car gets to make the decision?"

Next up was Alan Mackworth, discussing "Trusted Artificial Autonomous Agents" (AAA), as he doesn't like the terms "AI" or "Robots," though obviously not for the same reasons I dislike them. He wanted to say a bit about navigating between the hope and the hype of machine intelligences, and talk about the idea of trust. He, too, was pretty clear on the notion that our in-built biases can drastically influence the ways in which systems are built and can develop, and he was also aware of the need to have our systems be able to interrogate their own mistakes. Not just that there was an error, but what the error was, and why it was made.

According to Mackworth, ethical rules put in place need ways to mirror the values of the user, as well as mirroring society's values. Otherwise, you could end up making a criminal AAA—one which perfectly reflects the desires and morality of the user, but not of society as a whole. To this end, he proposes several methods of instilling those values and likely behaviours into AAA: 1) Formal specification and verification; 2) hierarchical constraint-based modular architecture; 3) Inference of human values; 4) Semi-Autonomy, with a human in the loop; and 5) Participatory Action Design. This last one was a kind of user-defined model, with what he called "Wizard of Oz" techniques.

He talked about humans in "autonomous" wheelchairs (human agents moving them), with varying levels of control. L1) Humans could drive, with "the system" limiting speed. L2) Non-intrusive steering adjustments. L3) L1 + intrusive turning away from obstacles. L4) Completely autonomous, where humans were subject to the "decisions of the system." Predictably, people hated the model where they were entirely subject to the system.

Questions included, "How would they infer human values?" He talked about natural language systems, and sliders to adjust levels of autonomy. Talked about the need for more R&D and pointed to Stanford's One Hundred Year Study on AI.

After that was John Sullins, who's apparently a friend of a friend. He talked about the difficulties posed by different language models used in the fields that will have to deal with these systems, moving forward. Lawyers, Ethicists, and Engineers all speak different languages, he says, and that leads to misunderstandings and misapprehensions. On top of this, there are cases where actual use vs. reasonable use vs. intended use vs. legal use vs. ethical use may all be at odds, and may all surprise the designers and ethicists involved.

Sullins noted that classical ethics has only been designed with human agents in mind, and approached by engineers as the question of what is the best system to apply. But in ethics, the work is never truly finished, if it's being done well. We're never perfect, we're only ever less wrong. He brought in John Dewey, here, and I despaired that one of my friends was in the other room chairing a different panel, and another was in a whole other country. Dewey said that, in ethics, questions of Self-interest Vs. Benevolence, Egocentrism Vs Altruism, and Free Will Vs. Determinism are mere distractions. So instead, Sullins says, if we're following Dewey's lead, then we should focus on responsibility and the ability to embed ethics in our designs, and here he quotes Dewey about constantly asking whether the work we're doing is meant to benefit ourselves or others.

Engineers need to do this, but also need to pull back from trying to solve some specific tech problem, and be concerned with the ethical impacts of their work, on the whole. Ethical Engineering needs to become a major project, as it would prevent the "Release->Disaster->Beg Forgiveness" Cycle. Sullins suggests that ethics boards can do the work of identifying these concerns beforehand, but, as I've noted before, they themselves need to be carefully crafted, so as not to institute other biases.

He made the distinction between Artificial Ethical Agents and Artificial Moral Agents as being that an AMA wants to improve itself and be better when it screws up.

During the Q&A there was a question (not from me) about the prevalence of Western ethics in AI Ethics design, and how we could go about querying an AI about its decision-making. The general answer was that the design of ethical systems needs more consideration of embodiment.

There was a question to van der Loos about whether healthcare questions received meaningfully different answers than weapons systems questions. That is, did people feel differently about WATSON making autonomous decisions than they did about drones doing so? He answered that, in those scenarios, people were more concerned about privacy than decision-making (see previous newsletter).

Someone asked about holding moral agents legally responsible, and, though we ran out of time on this one, I asked the panel members individually if there was any chance of there ever being machine minds that we recognize as conscious. Responses were thoughtful but doubtful that we'd see it any time soon, though we did talk about Ron Arkin having developed Non Serviam systems—systems that refuse to take certain kinds of orders.

Next panel up, that day, was "Discrimination and Technology" with Anna Lauren Hoffmann, Ashley Shew, and D.E. Wittkower, which, as a title, could have gone a number of ways, and some of them very not good.

We started with Wittkower on discrimination in imaging tech and 'dysconscious racism.' He used HP's motion camera debacle as a prime example, discussed Jean-Luc Godard's unwillingness to use certain kinds of film due to the fact that they were only tested for white skin, and referenced a Guardian article I've used to teach students about the unconscious assumptions in imaging tech, and the idea of a "Global Assumption of Whiteness." He noted that certain kinds of film only got better at representing black bodies when they started trying to be able to film dark horses in low light.

Yes, you read that right. Dark coloured beasts of burden were considered, before dark-skinned humans.

He talked about selfies in POC communities as radical acts of empowerment, and the pushback against things like fingerprint scanners not recognising black hands, Google's "black people=gorillas" nightmare, and Nikon's digital systems asking Asian people if they blinked and wanted to retake the picture. He discussed the fact that these were done things; that is, they were programmed for and written down. They didn't just oops into existence.

Didn't exactly learn new things in this panel, but was deeply gratified to see this research being formally done and presented in this context by someone who wasn't me.

Next up was Anna Lauren Hoffmann on gender, technology, and discrimination. She started with three stories about lived trans experiences: 1) Body scanners at the TSA airport lines not being able to categorize non-op trans bodies, and slotting them as "A Real Risk." 2) *Nym policies at Facebook and Google and the material psychological and physical harm they posed to marginalized groups, including abuse survivors and trans individuals. 3) Tinder, in which transwomen are often reported as "fraudulent" women and have no recourse to reinstate their profiles.

Hoffmann went on to discuss what she called "non-normative identity maintenance," with this idea of a fluid, "various" self, in which who we are and how we present is self-informed, messy, and moral. She says that this stands in tension with the drive toward computational reductionism of self as externally administered and harmful, and that this tension has implications for social justice.

Self-respect and a sense of one's own value are defined by Hoffmann as a "secure conviction that one's identity and plans of life are worth pursuing," but she says that this is not just a personal issue. Our self-respect and value are shaped and constructed by the societies with which we interface. Economic power and class mobility; disability and the built environment; intersections of feminism and race theory as concerned with one's place in society; LGB concerns of heteronormativity; trans lives and the constraints of the gender binary. All of these affect these groups differently.

See, again, my note last time, about the struggle. Intersectionality becomes a necessity. It was great to see a transwoman speaking and being listened to about issues of embodied gender and tech.

Ashley Shew rounded out this panel with "Up-Standing, Norms, Technology, and Disability," a discussion of ableism, expectations, and LANGUAGE USE as marginalising disabled bodies. I'll start by noting that Shew is disabled, having had her right leg removed due to cancer, and she spent her talk not on the raised dais, but at floor-level. Her reason? "I don't walk up stairs without hand rails, or stand on raised platforms without guards."

Shew notes that a high number of wheelchair users DO NOT WANT EXOSKELETONS, but, NO ONE ASKED them. Also stair-climbing wheelchairs aren't covered by insurance. She said that when she brought this up in the class she taught, one of the engineers left the room looking visibly distressed. He came back later and said that he'd gone home to talk to his brother with spina bifida, who was the whole reason he was working on exoskeletons. He asked his brother, "Do you even want this?" And the brother said, basically, "It's cool that you're into it but… No."

So, Shew asks, why are the exoskeletons being designed? The answer, as you might guess, is transhumanists and the military. Discussing "helping our vets" makes it a noble cause, without drawing too much attention to the fact that they'll be using them on the battlefield as well.

Shew discusses disability as a social imposition, not merely an embodied one, and quotes Bill Peace on "the altruistic ignorance of bipedal people." Biases ingrained into social institutions. I was reminded very much of Natalie Kane's "Means Well Technologies." This was when I fell pretty completely in love with our IEEE weekend.

There was a question from John Havens on the potential use of mixed reality (VR/AR) for issues of identity. Hoffmann responded that she is hesitant to say that new tech will automatically solve the problems of old tech. Wittkower replied that it might lead to the ability to just Erase body types, and referred to Second Life's not exactly having "work-appropriate clothing" for women whose companies like to do telepresence meetings there, and how Snapchat's Bob Marley filter on 4/20 basically just amounted to digital blackface. Shew noted that disability simulators make people have a far lower estimation of disabled life than is actually reported by people with actual disabilities (it only takes about 18 months for people to fully acclimate to a new disability); AR might do the same.

The question was then asked about how to go about asking these questions and engaging others' lived experiences, with sensitivity. The overarching answer was that there is no one-size-fits-all answer, but that we need to be aware of our assumptions before we even start asking. If we try to do less, we'll have less in-built expectation bias.

Shew was asked, "What about users who do want and need assistive tech?" She notes that, yes, they do exist, but the problem is that they were assumed to be the predominant population, because of assumptions of wanting "normalcy," where that's defined as "more like nondisabled bodies."

ASK THE POPULATION CONCERNED WHAT THEY WANT.

 
=|Closing out the Week|=
That was most of Friday. Since we're already at 3000 words and I need to finish cleaning out my old office, that's going to be it, for this time. Next time, we'll talk about Friday's Keynote Dinner, and what Saturday felt like.

Other than that, if you enjoyed what we did here, don't forget to tell your friends, and if you have a little extra to throw into the Patreon or the Square account, then please do.

If you didn't enjoy it, then keep that cash and go buy some food for somebody who needs or would like it. Maybe do that anyway. Either way, if you've got a copy of the June issue of WIRED, flip to page 78 and I'll see you when you get there.

Until Next Time.