This is why we need an ethical tech manifesto

C2 team
Chelsea Manning, “Big data, big problems: It’s time for an ethical tech manifesto”

“Technology is neither good nor bad; nor is it neutral.”

So reads the first and most famous of American historian Melvin Kranzberg’s six Kranzberg Laws, a series of truisms based on what he saw as technology’s intimate relationship with sociocultural change.

Former U.S. Army intelligence analyst turned whistleblower and activist Chelsea Manning echoes Kranzberg when she offers up a law of her own: “AI is only as good as the data we put into it.”

Laying down the laws

Melvin Kranzberg introduced what became known as Kranzberg’s Laws at the 1985 annual meeting of the Society for the History of Technology:

  1. Technology is neither good nor bad; nor is it neutral.
  2. Invention is the mother of necessity.
  3. Technology comes in packages, big and small.
  4. Although technology might be a prime element in many public issues, non-technical factors take precedence in technology-policy decisions.
  5. All history is relevant, but the history of technology is the most relevant.
  6. Technology is a very human activity — and so is the history of technology.

Chelsea knows the drawbacks of data technology well. In the early days of the information wars, she was a key player in catalyzing a conversation about data security and transparency, releasing more than 700,000 Iraq and Afghan war logs and other classified documents to WikiLeaks in 2010.

Calling out technologists for being complicit when code is used for ill, Chelsea is encouraging the IT community to embed moral, social and behavioural considerations into their development timelines.

“Just because you can build a tool doesn’t mean you should.” – Chelsea Manning

She also strongly suggests that business leaders and IT developers create a manifesto for ethical tech that draws inspiration from the Toronto Declaration on equality and non-discrimination in machine learning systems, launched at the RightsCon Toronto conference in 2018. The declaration aims to develop detailed guidelines for the promotion of equality and the protection of the right to non-discrimination in machine learning.

“Applied to big datasets, machine learning enables detailed discrimination caused by the underlying data and the design and implementation of systems,” RightsCon organizers stated. “The lack of diversity among those designing and implementing systems contributes to these risks.”

 

Bad tech WILL be used against you

We’re already surrounded by artificial intelligence. It’s used in cars, in HR hiring systems and in managing power and water systems, for example. And these machines, plainly and simply, learn from what we feed them.

The problem is that “we tend not to apply critical thinking tools to algorithms,” says Timnit Gebru, a postdoctoral researcher for Microsoft’s Fairness, Accountability, Transparency and Ethics in AI research group (FATE), and co-founder of Black in AI. Hence, the perpetuation of automation bias: the propensity for humans to favour suggestions from automated decision-making systems, for better or worse.

Examples of the bias such systems absorb from their training data were exposed in the paper “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings,” produced by researchers at Boston University and Microsoft Research New England. A small sampling of what they found (a short sketch of the analogy arithmetic behind these examples follows the list):

  • Man is to king as woman is to… queen (so far so good)
  • Man is to doctor as woman is to… nurse (beg your pardon?)
  • He is to computer programmer as she is to… homemaker (oops!)
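
The analogies above fall out of simple vector arithmetic on word embeddings: for the prompt “a is to b as c is to ?”, the test asks which word’s vector lies closest to b − a + c. Below is a minimal sketch of that test, assuming gensim is installed and the pre-trained Google News word2vec file (GoogleNews-vectors-negative300.bin, the kind of embeddings the paper’s authors analyzed) has been downloaded locally; the path and the exact outputs are assumptions, not guarantees.

```python
# Minimal sketch of the "a is to b as c is to ?" analogy test behind the
# examples above. Assumes gensim is installed and the pre-trained Google News
# word2vec vectors have been downloaded locally (the path below is an assumption).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

def analogy(a, b, c, topn=1):
    """Return the word(s) whose vectors are nearest to b - a + c,
    i.e. the model's answer to "a is to b as c is to ?"."""
    return vectors.most_similar(positive=[b, c], negative=[a], topn=topn)

# Illustrative prompts from the article; actual outputs depend on the vectors,
# but the paper reports gendered completions such as "nurse" and "homemaker".
print(analogy("man", "king", "woman"))
print(analogy("man", "doctor", "woman"))
print(analogy("he", "computer_programmer", "she"))
```

The point the researchers make is that none of these associations are programmed in; they are learned from the news text the embeddings were trained on.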

Bias is part of a larger conversation on ethical tech, according to Samuel Moreau, Partner Design Director – Cortana & Artificial Intelligence at Microsoft. He says there are five types of bias:

  1. Dataset bias: Data that doesn’t represent the diversity of a population.
  2. Automation bias: Occurs when automated decisions override social or cultural considerations. The Brisha Borden case is one example.
  3. Association bias: Data associations that reinforce and multiply cultural bias.
  4. Interaction bias: Occurs when humans tamper with AI, as happened when Twitter users taught Microsoft’s Twitter bot Tay to post racist tweets.
  5. Confirmation bias: Oversimplified personalization makes biased assumptions and narrows a user’s views.

 

Facing the flaws

Consider that early speech recognition technology in cars struggled to process commands spoken by women or by men with foreign accents. And in 2015, a tweet about Google Photos’ automatic image-labelling software went viral after it tagged two black Americans as gorillas.

In addressing the issue, Google’s Chief Social Architect Yonatan Zunger admitted that, “clearly, [we have] a lot of work to do with automatic image labelling.”

But this is only the tip of the iceberg.

Crime recidivism algorithms are currently being used by some U.S. judges to inform sentencing decisions. Police forces in New Orleans, Orlando and Washington County, Oregon, have recently been flagged by the American Civil Liberties Union and other privacy activists for experimenting with surveillance tools, including Amazon’s face recognition system, Rekognition. As of 2016, one in two adult Americans was in a law enforcement face-recognition network, thanks to access to driver’s licence photos and other picture ID.

Were these tools designed for policing? Not at all, which raises two questions: should this even be happening, and are these tools robust enough to be used for such purposes, where errors could have a deeply negative impact on people’s lives?

 

Anything goes for anyone

Whatever you believe, the problem is that, at present, any AI model can be used for anything, by anyone, which leaves the technology wide open to misuse.

“There are no rules that say if you are using facial recognition tools as law enforcement, what kind of properties it should have,” says Timnit. “There are no rules that say whoever is using these automated tools needs to let people know how and why they’re using it. [There are] no standards and no documentation.”

“We don’t even know if current tools are breaking current laws.” — Timnit Gebru

An advocate for regulation, documentation and standardization, Timnit feels strongly that AI researchers “shouldn’t reduce every problem to a dataset or some metric to be optimized,” and adds, “We need to understand the social and structural problems in our space.”
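
There is no single mandated format for the documentation Timnit is calling for. Purely as an illustrative sketch, and with every field name and value below being an assumption rather than an existing standard, a team deploying an automated tool could publish a minimal, machine-readable record of how and why it is used:

```python
# Illustrative only: a minimal, machine-readable record describing how and why
# an automated tool is used. Field names and values are assumptions, not a
# recognized standard or any organization's actual documentation.
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class ToolRecord:
    name: str
    operator: str                                  # who is deploying the tool
    intended_use: str                              # what it was designed for
    out_of_scope_uses: List[str] = field(default_factory=list)
    training_data_sources: List[str] = field(default_factory=list)
    known_limitations: List[str] = field(default_factory=list)
    point_of_contact: str = ""

record = ToolRecord(
    name="face-matcher-v1",                        # hypothetical tool
    operator="Example City Photo Archive",         # hypothetical operator
    intended_use="De-duplicating photos in a consumer photo app",
    out_of_scope_uses=["Identifying people for law enforcement"],
    training_data_sources=["Licensed stock-photo dataset (hypothetical)"],
    known_limitations=["Higher error rates on under-represented groups in internal tests"],
    point_of_contact="accountability@example.com",
)

print(json.dumps(asdict(record), indent=2))        # publish alongside the tool
```

A record like this does nothing on its own, but it is the kind of minimum disclosure that would let outsiders ask how and why a tool is being used.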

 

Calling all technologists

Some big companies are beginning to react. Google Brain researcher Hugo Larochelle believes there are four basic principles for practicing responsible AI: transparency, privacy, robustness and impartiality.

Ruth Kikin-Gil, a senior UX design strategist with Microsoft, has a similar take: “Build diverse and inclusive teams, create intelligible databases and communicate the reasoning behind decisions to the end user.”

Organizational principles are one thing, but Anna Neistat, the Head of Research at Amnesty International, wants technologists to take human rights injustices personally.

“It’s their fight as well,” she says. “Ultimately, if we lose this fight, we’re all going to suffer.”

Anna’s urging IT to start thinking about what they can contribute to the cause — and fast.

“No tech innovation will get us into the future if our human rights are forgotten.”


As one widely shared headline put it, “Biased algorithms are everywhere, and no one seems to care,” a nod to how the big companies developing them have shown little interest in fixing them.

 

Taking back control of the tech in our pockets

In much the same way that regulations caught up with the auto industry and set standards for drug testing, it’s time for the IT community, big institutions, police and military forces, and governments to be introspective about their own practices and disciplines, and to create (and enforce) a rule book… before it’s too late.

Practice political agency. As citizens, our vigilance is crucial. Malicious tools need to be identified and political activists need to be supported. Paranoia and complacency will not help.

Change tech culture. Business and tech cultures must change at the middle management and development levels to encourage diversity and security. Algorithms must stop reflecting human biases. Moral and ethical questions cannot afford to stay in the lab if we wish to prevent our tech tools from serving a police state.

Wield the power of the people. Chelsea believes the labour movement of the 20th century is a good source of inspiration for the way forward. Workers sought better working conditions and treatment from employers in response to the dehumanizing aspects of industrial capitalism. Citizens need to cooperate and coordinate their efforts in order to be heard by policymakers and by the tech industry, just like workers did when they strived to assert their rights in a system they deemed unfair and inhumane.

 

The C2 Montréal Minutes: Actionable insights for creative business leaders

This article is excerpted from Transformative Collisions: The C2 Montréal 2018 Minutes, a roadmap for progressive business leaders, bold entrepreneurs and those wishing to up their creative game. You can read it in its entirety here.

Questions or comments? Drop us a line at editorial@c2.biz