
There is no such thing as Artificial Intelligence


AI is being used as a shield that is literally a shell company, allowing people to do what they really want and misdirect attention towards some machines.

Joanna Bryson

If you make a technology ubiquitous enough, people will assume that someone is answering for it - that the risks have been calculated, that someone is monitoring the deployment, and that things can be shut down if they get wonky.

But governments and international bodies are actively shirking the risk conversation, relegating it to specialized conferences and pushing it to the Q&A of info sessions. Tech companies hype ethical AI externally while merely authorizing or enabling best practices internally. No requirements. No roadblocks.

We have no real understanding of the risks of AI to society. And since the fatal flaw - bias - is a function of how AI architecture works, there is no technical solution to the problem.

We've pushed forward with adoption despite having no kill switch. And while we've normalized the idea that AI can be responsible for itself, it won't be intelligent enough for that at any point in the foreseeable future.

In a society where we're rapidly altering our environment to fit the technology, how can we ensure that we retain our ability to mold society to fit the ways we want to live?

The flaw is in the architecture?


TensorFlow load weights code

AI is not intelligent. It is accurate and efficient in its automation. We feed it a snapshot of its environment through the training data. It searches for the most informative relationships between and within all of the pieces of data and stores these relationships as weights.
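A toy sketch (not from the article) of what "storing relationships as weights" means in practice. The model below is handed a snapshot of data where the output depends on the inputs in a fixed way, and all it retains after fitting is a small array of numbers:

```python
import numpy as np

# Toy illustration: a model "learns" by distilling the relationships
# in its training data into numeric weights - and keeps nothing else.
rng = np.random.default_rng(0)

# Snapshot of the environment: 200 examples where y ≈ 3*x1 - 2*x2
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Least-squares fit: the learned weights are the whole "model"
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
print(weights)  # close to [3, -2]: the data's relationships, as numbers
```

Whatever relationships the snapshot contains - informative or distorted - end up encoded in those numbers.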

Image from Weights and Biases in AI Wiki.

Weights are what form the foundation of the model. Every model. All societies have some level of bias, and our data is an archive of the distorted relationships between social groups. These distortions are introduced into the AI through the weighting process. That is where the anti-[insert relevant undesirable]-bias lives. And that is why we cannot remove it.
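A hypothetical illustration of that claim, with entirely made-up data: fit a model to an archive of past decisions that favored one group, and the distortion lands directly in the weights. Deleting the group column doesn't help, because any correlated proxy feature absorbs it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical archive of past decisions: "group" marks a social group,
# "skill" is actual qualification. Historical decisions favored group 1.
group = rng.integers(0, 2, size=n).astype(float)
skill = rng.normal(size=n)
past_decision = skill + 2.0 * group + rng.normal(scale=0.5, size=n)

# Fit weights to the archive: the distortion is learned alongside skill.
X = np.column_stack([skill, group])
weights, *_ = np.linalg.lstsq(X, past_decision, rcond=None)
print(weights)  # ≈ [1.0, 2.0]: group membership carries real weight

# Even if we delete the group column, a correlated proxy (think zip code)
# inherits most of the group's weight:
proxy = group + rng.normal(scale=0.3, size=n)
X2 = np.column_stack([skill, proxy])
w2, *_ = np.linalg.lstsq(X2, past_decision, rcond=None)
print(w2)  # the proxy now carries the distortion instead
```

This is why removing the obviously sensitive feature is not a fix: the bias lives in the learned weights, wherever the data lets it leak through.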

[N]eural network training
uses libraries of input data to converge model weight parameters by applying the labeled input data (forward projection), measuring the output predictions and then adjusting the model weight parameters to better predict output predictions (back projection). Neural network inference is using a trained model of weight parameters and applying it to input data to receive output predictions.

Survey of Machine Learning Accelerators
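The loop the survey describes can be sketched in a few lines. This is a deliberately minimal stand-in (a single linear layer trained by plain gradient descent, not the survey's code): forward pass, measure the prediction error, adjust the weights, then run inference with the frozen weights:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])  # target relationship to recover

w = np.zeros(3)                      # model weight parameters
lr = 0.1
for _ in range(200):
    pred = X @ w                     # forward projection
    grad = X.T @ (pred - y) / len(y) # measure the prediction error
    w -= lr * grad                   # adjust weights ("back projection")

# Inference: apply the trained weights to new input data
x_new = np.array([1.0, 1.0, 1.0])
print(x_new @ w)  # ≈ 1 - 2 + 0.5 = -0.5
```

Training converges the weights; inference just replays them. Nothing in either phase resembles understanding - it is curve-fitting, executed efficiently.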

But it's called artificial intelligence


AI is not intelligent, it is a pre-programmed system that is able to complete tasks. It is unable to learn new skills or adapt to new environments. While we've made great strides in the area of AI, we are very far from the technology living up to its name.

Since the AI isn't intelligent enough to operate in society, we've begun adapting society to fit the AI - and to fit the societies and groups whose data was used to train it.

Without intelligence, there's no control, no way for us to ensure that the AI will reliably act responsibly or in the manner we intend. And there's no real way to shut it off when it goes astray.

Is risk mitigation even possible?


We're never going to be able to eliminate societal bias from AI. But we can actively constrain the use of AI to applications that do not place our institutions and beliefs at risk. And in those areas where the risk isn't as great, we can work to design products that consider harm mitigation from their inception.

We can also take personal responsibility:

  • Fight to prohibit the use of automated or hybrid AI decision-making in every area that is responsible for social order or the societal safety net - specifically law, criminal justice, government, housing, and health.

  • Stop blaming AI for AI. Hold companies responsible for the havoc their products unleash in the world.

  • Be an active and adamant advocate for your society during the consultation and design process for new AI systems in your country.

AI is not going to save us. The companies that enable, accelerate and create AI aren't going to save us. Our governments and international bodies aren't going to save us. Not on their own.

The onus is on us.