
WHO AUTHORS THE AUTOMATION?

Jun 20, 2019

Tim Morgan published his sixth blog post in our Emerging Fellows program, examining how intended and unintended governance gets built into automation. The views expressed are those of the author and not necessarily those of the APF or its other members.

Quis custodiet ipsos custodes? Who watches the watchers? This question, penned by the Roman poet Juvenal, may carry more urgency in the current automation age than it did two thousand years ago in the palaces of Rome.


Our money is stored as data and flows digitally at the speed of light. We socialize online. We play online games and stream movies. Apps record our exercise, sleep, and heartbeats. News comes via feeds customized to our observed interests. Search engines return ranked results based on more than just our keywords. All of it is done via algorithms written by someone for a specific reason. Even self-learning AIs are ultimately built to fulfill a human desire or need. Automations are systems that embody someone’s values.


What values are being intentionally and unintentionally encoded into the automated backbone of current society? What intended and unintended governance is being built into automation? As automation advances and becomes ubiquitous, what we develop and how we apply it become critical.


Most automation is developed to meet specific requirements. Quality assurance professionals analyze and validate software via requirements-based testing and analysis. Automations are systems, whether games, banking apps, or robots.


Complex systems have complex behaviors. They create and reinforce the values embedded within them, whether or not the developer intended to embed those values. Yet testing rarely goes beyond functional requirements to measure the systemic impact of automation on the larger world. The current institutional and market watchers are insufficient to that task.
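
To make that gap concrete, consider a minimal sketch in Python. Everything in it is hypothetical and invented for illustration: a loan-approval rule, a requirements-based unit test that validates the stated rule and passes, and an aggregate-impact check that would expose a group-level skew but is rarely written, because no functional requirement asked for it.

def approve_loan(income: float, zip_risk_score: float) -> bool:
    # Hypothetical rule: approve when income is sufficient and the
    # neighborhood risk score is low. The risk score is derived from
    # historical defaults, so any bias in that history flows silently
    # into every decision.
    return income >= 40_000 and zip_risk_score < 0.6

def test_meets_functional_requirement():
    # Requirements-based test: verifies the stated rule, and passes.
    assert approve_loan(income=50_000, zip_risk_score=0.2) is True
    assert approve_loan(income=30_000, zip_risk_score=0.2) is False

def disparate_approval_rates(applicants):
    # Systemic-impact check: almost never a written requirement, so it
    # is rarely run. Applied to real applicant data, it would surface
    # group-level skews that the unit test above can never see.
    rates = {}
    for a in applicants:
        rates.setdefault(a["group"], []).append(
            approve_loan(a["income"], a["zip_risk_score"])
        )
    return {group: sum(ok) / len(ok) for group, ok in rates.items()}

The unit test ships with nearly every project; the impact check ships with almost none, because nothing in the requirements demands it.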


Proactive policing software promotes racially biased patrolling patterns when systemic biases have been unintentionally embedded into data and code based on existing policing practices. Voting rolls are purged of legal voters because of erroneous, and sometimes intentional, partial name matches with convicted criminals. Traffic light cameras are used more for automated revenue enhancement than for protecting public safety. Social media and news media both use dynamic consumer metrics to automatically amplify attention-getting divisive stories ahead of socially uplifting ones. Successful games exploit known psychological triggers to promote compulsive game-play, even when embedding those triggers was not a conscious programming choice. Successful games, news, and social media have quickly evolved into attention predators via market selection. Automation is evolving, but market and institutional selection mechanisms are not necessarily socially benign.
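
The patrolling example can be unpacked with a toy model. The Python sketch below uses entirely invented numbers and none of the complexity of a real deployment: two districts have identical true incident rates, the historical record starts slightly skewed, patrols are dispatched each day to wherever the record shows the most incidents, and new incidents are only recorded where patrols go.

import random

random.seed(0)

TRUE_RATE = 0.3                  # identical underlying rate in both districts
recorded = {"A": 12, "B": 10}    # the historical record starts slightly skewed
PATROLS = 10                     # patrols dispatched per day

for day in range(365):
    # Data-driven targeting: send every patrol to the current "hot spot",
    # the district whose record already shows the most incidents.
    hot_spot = max(recorded, key=recorded.get)
    # Incidents can only be recorded where officers are present to see them.
    recorded[hot_spot] += sum(random.random() < TRUE_RATE
                              for _ in range(PATROLS))

print(recorded)
# After a year, district A's two-incident head start has grown into a gap
# of roughly a thousand recorded incidents, and the skewed record now
# appears to "confirm" the original targeting choice.

The point of the toy is not realism but the shape of the loop: the data does not record crime; it records where someone looked.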


The future holds some interesting values questions around advanced automation. Could we go beyond normal Don’t-Hit-A-Pedestrian safety programming in self-driving cars and add Good Samaritan assistance behaviors for people stranded or injured at the roadside? Will consumers ever get a Make-My-Life-Better setting on their social media? Will we find ways to create new Social Awareness algorithms and new Social Quality Assurance testing standards for commercial and institutional automation?


Failure to anticipate the untested social impacts of new automation before it is deployed turns the entire world into an increasingly bug-filled, chaotic free-for-all of externalized impacts and socialized costs. How technology is applied is a choice. Encouraging the development of future automation that balances profit or control with social good is an epic challenge of our era for developers and users alike. Who authors the automation? Who watches the watchers? Ultimately, we all do, if we want a better world.


© Tim Morgan 2019
