When we think about the future of our world and what exactly that looks like, it’s easy to focus on the shiny objects and technology that make our lives easier: flying cars, 3D printers, digital currencies and automated everything. In the opening scene of the animated film WALL-E – which takes place in the year 2805 – a song from “Hello, Dolly!” plays cheerfully in the background, starkly contrasting with the glimpse we get of our future planet Earth: an abandoned wasteland with heaping piles of trash around every corner. By this point, humans have evacuated Earth and live aboard a spaceship, where futuristic technology and automation have left them overweight, lazy and completely oblivious to their surroundings. Machines do everything for them, from the hoverchairs that carry them around to the robots that prepare their food. Glued all day to screens that have taken control of their lives and decisions, humans exhibit lazy behaviors like video chatting with the person physically next to them.
While yes, this is an animated, fictitious film, many speculate that it could be a somewhat accurate depiction of our future, and I tend to agree. Advancements in AI and technology are meant to make our lives easier, yet they pose a threat to society when they fall short of perfection. Today, businesses and individuals face many challenges with AI: from tech and social media giants controlling speech on their platforms to services and technologies that speed up processes but apply unintentional bias. When we start relying on algorithms to make decisions for us, things begin to take a turn for the worse, and we inch closer to the world we see in WALL-E. AI can’t just be good enough for us to create a better world for ourselves – it must be perfect. Here’s why:
An overreliance on AI amplifies the biases that we should be eliminating.
As each year passes, the global use of AI continues to grow. While advancements in AI should be making our lives easier, they’re also highlighting implicit biases that many people are working hard to eliminate. A study from MIT found that gender classification systems sold by several major tech companies had an error rate as much as 34.4 percentage points higher for darker-skinned females than for lighter-skinned males. Errors like this, likely caused by skewed training data sets, create a myriad of problems in decision making, especially in employment recruiting and criminal justice systems. Algorithms that exclude female candidates from traditionally male-dominated jobs, or algorithms that assign a criminal’s “risk score” based heavily on appearance rather than actions, only amplify the biases we should be removing.
A black-box approach to AI puts our First Amendment rights at risk.
A black-box system – one in which users have no transparency into how algorithms are developed, how models are trained or why models make the decisions they do – is deeply problematic for the ethics of AI. We humans all have blind spots, so the creation of models and algorithms should involve elevated human context, not just more powerful machines. If we punt all of our decisions to an algorithm and no longer know what’s going on behind the scenes, the use of AI risks becoming irresponsible at best and unethical at worst, even putting our First Amendment rights at risk. One study from the University of Washington found that leading AI models for identifying hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans. Biases in hate-speech tools have the potential to unfairly censor speech on social media, banning only select groups or individuals. By implementing a “human-in-the-loop” approach, humans get the final say in decision making and black-box bias can be avoided.
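In practice, a human-in-the-loop setup often means the model acts on its own only when it is highly confident, and routes everything else to a person. Here is a minimal sketch of that idea; the names (`score_toxicity`, `ReviewQueue`, the 0.95 threshold) are hypothetical placeholders, not a real moderation API.

```python
# Human-in-the-loop sketch: the model acts autonomously only when its
# confidence clears a threshold; every borderline case goes to a person.
# score_toxicity and ReviewQueue are illustrative stand-ins, not real APIs.

from dataclasses import dataclass, field
from typing import List

CONFIDENCE_THRESHOLD = 0.95  # below this, a human must make the call


@dataclass
class ReviewQueue:
    """Holds posts awaiting human moderation instead of automatic removal."""
    pending: List[str] = field(default_factory=list)

    def submit(self, post: str) -> None:
        self.pending.append(post)


def score_toxicity(post: str) -> float:
    """Stand-in for a real hate-speech model; returns a confidence score."""
    return 0.99 if "badword" in post else 0.40


def moderate(post: str, queue: ReviewQueue) -> str:
    """Remove only high-confidence cases; defer the rest to a human."""
    score = score_toxicity(post)
    if score >= CONFIDENCE_THRESHOLD:
        return "removed"       # model is confident enough to act alone
    queue.submit(post)         # otherwise a human gets the final say
    return "pending_review"
```

The design choice is the point: instead of letting the black box ban speech outright, the system narrows automation to the cases it is most sure about and keeps a person in the decision path for everything ambiguous.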
When we start relying on AI to make decisions for us, it often does more harm than good. Last year, WIRED published an article called “Artificial Intelligence Makes Bad Medicine Even Worse,” which highlights how diagnoses powered by AI aren’t always accurate – and even when they are, the conditions they detect don’t always require treatment. Imagine being screened for cancer without having any symptoms and being told that you do in fact have cancer, only to learn later that it was something that merely looks like cancer, and the algorithm was wrong. While advancements in AI should be changing healthcare for the better, AI in an industry like this absolutely must be regulated so that a human – not a machine – makes the final decision or diagnosis. If we remove the human from the equation and fail to regulate ethical AI, we risk making detrimental errors in crucial, everyday processes.
AI needs to be better than good. To protect the human, it has to be perfect. If we begin to rely on machines to make decisions for us when the technology is merely “good enough,” we amplify biases, risk our First Amendment rights and fail to regulate some of the most crucial decisions. An overreliance on less-than-perfect AI may make our lives easier, but it will also make us lazier and potentially accepting of poor decisions. At what point do we begin to rely on the machine for everything? And if we do, will we all end up evacuating an uninhabitable planet Earth, relying on hoverchairs to carry us around and machines to prepare our food for the rest of our lives – just like in WALL-E? As AI advances, we must protect the human at all costs. Perfect is the enemy of good, but for AI, it needs to be the standard.