If you listen to the tech world of Silicon Valley, it's easy to believe the rapid advent of new technology is all upside. In fact, any time politics prepares to get in the way of technology companies (many of which have done some pretty anti-competitive and/or unethical things), those companies will characterize the politics as "stifling innovation," which is, of course, the be-all and end-all of human advancement. Duh.
In truth, technology (as I've taken great pains to show in this newsletter) can have serious negative effects. It can enable the automated killing of anyone and everyone. It can enforce performance behavior in previously casual circumstances. It can leave us in a vast and perpetual legal fugue. It is for all these reasons that we cannot allow technology to develop without oversight and consideration of consequences.
This raises the question of whether it is ethical to create new software in our current technological and legal context. On the one hand, if you're making Computer-Aided Diagnosis software to assist doctors in diagnosing and selecting treatment for cancer, your work is probably a net good for society. On the other, if you're creating software to automate a vast swath of jobs (as discussed in the previous letter, "Jobs for Horses"), then you may be on the low end of the ethical scale.
Sadly, that is the work of much of today's software engineering world. Whether you're creating software to make managing paperwork easier, or building robots for a factory line, you are slowly but surely assisting in the elimination of human labor.
In short, I (and others who do my sort of work) am killing your job. On purpose. Because your boss asked me to. Not necessarily because you want it, or because it's good for society, but because the siren call of higher efficiency is nigh un-ignorable by any true capitalist.
There is a certain hilarity in this. In an effort to be more efficient and thus compete at lower prices for the consumer, we automate work, which thereby eliminates jobs, floods the market with unemployed workers, depresses wages, and leaves everyone less able to purchase what they need. The cycle is self-destroying.
And at the center of it is the question: is it ethical to continue working on automation software? Whether you're a deontologist or a utilitarian or something else entirely, the answer is unclear.
A Kantian may say that the job of creating software which eliminates other jobs violates the categorical imperative. In essence, since you would not want someone else to automate your job away, you should not do work to eliminate theirs. Yet a Kantian might equally argue that there is no such imperative, as the elimination of work (and the transition to a post-work world) is virtuous in and of itself.
A utilitarian's view depends entirely on the time frame you are considering. In the short term, the elimination of jobs is likely a net harm, as unemployment (and thereby suffering) rises. Yet in the long term, the elimination of the need to work makes everyone better off. Assuming such a system is possible, the action is completely justified.
In the end, it is unlikely these ethical questions will change the direction of history. The call of lower costs is too great, and the carrot of a well-paying job in automation is too good to pass up. As I discussed in "Jobs for Horses," automation is inevitable. The true question is how we respond to it.
Yet there will always be something a little perverse in killing people's jobs, and convincing them to be thankful that you are.