Many professions are subject to strict guidelines and codes of ethics, binding their practitioners to common principles. Violations of these rules can lead to legal repercussions or the loss of a license to practice one's chosen profession. The Hippocratic Oath, the American Bar Association's Model Rules of Professional Conduct, and the American Society of Civil Engineers' Code of Ethics are examples of such self-regulating or self-policing frameworks.
Information Technology has none of that. Our profession remains entirely uncontrolled, unlicensed, unregulated: anybody can — and does! — call themselves a “software engineer” or “systems architect,” for example.
The ACM's Software Engineering Code of Ethics and the USENIX LISA System Administrators' Code of Ethics are two attempts to define such guidelines for an undefined profession; yet the majority of WebOps, SysAdmins, SREs, and software developers have never heard of them.
At the same time, we are increasingly responsible for building and maintaining critical infrastructure components for software that handles our users’ most private data, for products that directly or indirectly influence people’s lives. We are building the internet and the World Wide Web; we are connecting people (and increasingly, things), creating new products, and we like to “disrupt” existing industries and claim to strive to “make the world a better place.”
But rarely do we consider our direct ethical obligations as privileged insiders of this dominant economic force. How do we build self-driving cars that might one day have to decide whether their passengers should die to avoid a greater catastrophe, when we can't even guarantee the privacy of elementary students' data? Is reliance on science fiction's Three Laws of Robotics sufficient to implement ethical decision-making engines? Could (and more importantly, should) we develop automation and monitoring solutions to "scale" the delivery of lethal injections? Do we have an obligation to protect user communications from warrantless government spying, whether or not our users demand it?
Does a simple guideline such as “First, do no harm” make sense in our profession? How would this translate into the many fields of work we cover?
I'd like to review these questions and present a discussion of the obligations we have beyond simply increasing shareholder wealth. This discussion would range — as illustrated above — from the big and difficult decisions (e.g. whistleblowing, life-and-death choices, changing jobs) to simple best practices (e.g. protecting users' data in transit and at rest, communicating clear terms of service).
Jan Schaumann is an infrastructure and information security engineer and an adjunct professor of computer science. Jan has over 15 years of experience in both small-scale deployments and enormous high-availability infrastructures serving millions of users. Today he spends most of his time worrying about online privacy and infrastructure security and integrity. You can follow him on Twitter as @jschauma.
©2015, O'Reilly Media, Inc.