5 Reasons Why I Don’t Trust Artificial Intelligence

Dina Mostovaya
5 min read · Oct 28, 2024


Photo credit: Freepik

Whether we like it or not, artificial intelligence is already an integral part of our daily lives. We’re witnessing it evolve right before our eyes, marking one of the most significant technological leaps in history.

There are many benefits to this: for millions of people worldwide, AI makes it easier to tackle daily tasks and improve their businesses. At the same time, many organizations are delegating key decisions to AI-powered systems, to the point where these systems have a say in whether we get hired, approved for a loan, or sent to jail. As if that weren’t enough, they also influence our medical treatment.

This is happening because many individuals and organizations alike believe AI can deliver the fairest and best possible outcomes.

In the midst of so much hype, I’m here to present a counterargument. AI is no magic pill, and personally, I don’t believe it can be fully trusted or relied upon. As powerful as this technology is, it still needs our human input. In this column, I will explain why.

5 reasons we cannot fully rely on AI

#1: AI algorithms are created by humans and can be intentionally designed in ways that benefit the developer

There are many misconceptions about AI. Since it is, in principle, not influenced by emotions, many people believe it to be impartial and capable of delivering objective judgments.

However, this is seldom the case. Let’s remember that algorithms are programmed by humans and merely amplify the data they are trained on. This leads to flawed decisions, whether because developers lacked sufficient data or because they intentionally manipulated the system.

Examples abound. For instance, iTutor, a tutoring company, came under fire because its algorithms automatically rejected candidates over a certain age. All it took for someone to receive an interview invitation was to alter their birthdate.

In another landmark case, Derek Mobley, a “40-year-old Black man with depression,” filed a lawsuit against AI-based recruitment software provider Workday, asserting that its systems were racially biased and resulted in him being rejected for over 100 positions. Although these claims were dismissed earlier this year, the judge did recognize that AI developers may bear direct responsibility for discrimination in hiring.

#2: Algorithms replicate patterns observed in the real world

Not every developer is out to manipulate the system, and many of them might genuinely believe they are programming bias-free algorithms. Nevertheless, since AI learns from its observations, it will automatically perpetuate some stereotypes. As a system, it cannot fact-check, verify accuracy, or critically evaluate the data it receives.

Additionally, the behavior of a neural network is largely shaped by its initial dataset, and unfortunately, we currently face a shortage of data that is “fair,” meaning balanced across gender and ethnicity. As a result, most data carries intrinsic biases.

Many cases prove this. For instance, in algorithms used for machine translation and chatbots, frequency plays a key role — how often one word appears next to another matters. If the word “architect” is more commonly associated with male pronouns and “teacher” with female pronouns, the technology adopts this worldview, in which certain professions are more correlated with males or females.
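
To make this concrete, here is a minimal, purely illustrative Python sketch. The six-sentence corpus and the counting logic are invented for demonstration; real systems learn the same kind of association from billions of words rather than a handful of sentences.

```python
# Illustrative sketch only: a toy corpus and a co-occurrence counter showing how
# frequency alone can encode a gendered "worldview". All sentences are invented.
from collections import Counter

corpus = [
    "he is an architect and he designs buildings",
    "the architect said he would review the plans",
    "she is a teacher and she loves her students",
    "the teacher said she would grade the essays",
    "he is a teacher",          # minority pattern
    "she is an architect",      # minority pattern
]

MALE, FEMALE = {"he", "his", "him"}, {"she", "her", "hers"}

def gender_counts(profession: str) -> Counter:
    """Count male vs. female pronouns in sentences mentioning the profession."""
    counts = Counter()
    for sentence in corpus:
        words = set(sentence.split())
        if profession in words:
            counts["male"] += len(words & MALE)
            counts["female"] += len(words & FEMALE)
    return counts

for job in ("architect", "teacher"):
    print(job, dict(gender_counts(job)))
# A model trained on such counts would "learn" that architects are male and
# teachers are female, simply because that is what the data contained.
```

Nothing in this sketch is malicious; the skew comes entirely from the frequencies in the data it was given.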

Caroline Criado Perez, in her book Invisible Women, touches on this, mentioning a study by University of Washington researcher Rachael Tatman, who discovered that Google’s speech recognition programs were 70% more accurate at understanding male voices than female voices. Why? Because the data was not sufficiently gender-diverse. She cites the example of the TIMIT speech corpus, in which 69% of recordings were male voices, creating an imbalance.

In another situation related to recruiting, AI has been found to have numerous biases when evaluating job applicants. Since it might rely on data from current employees, candidates whose hobbies include “baseball” or “basketball” — popular pastimes among successful male employees — may score higher than others.
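
A small, hypothetical sketch of how that can happen follows; the hobby lists and the scoring rule are invented for illustration and are not any vendor’s actual method.

```python
# Illustrative sketch only: invented hobby lists and a naive overlap score,
# showing how "similarity to current (mostly male) employees" can creep into
# candidate ranking without anyone explicitly programming gender in.
current_employees_hobbies = [
    {"baseball", "golf"},
    {"basketball", "chess"},
    {"baseball", "basketball"},
]

def similarity_score(candidate_hobbies: set[str]) -> float:
    """Average overlap between a candidate's hobbies and each employee's hobbies."""
    overlaps = [
        len(candidate_hobbies & employee) / len(candidate_hobbies | employee)
        for employee in current_employees_hobbies
    ]
    return sum(overlaps) / len(overlaps)

print(similarity_score({"baseball", "basketball"}))  # higher score
print(similarity_score({"ballet", "chess"}))         # lower score, same merit
```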

#3: AI — and its developers — don’t care about your personal data

As mentioned earlier, successfully training an AI model requires vast amounts of data; the more, the better. This often leads developers to use whatever data they can access, including user data from public sources.

In 2019, IBM created its own facial recognition database to train a neural network that would allegedly be free from gender and racial biases. The dataset included millions of photos with detailed metadata (facial expressions, age, skin color). It was later revealed that IBM had used photos from the photo-sharing platform Flickr without users’ consent.

The risks are amplified by the fact that AI developers may share your data with third parties. According to a Mozilla report, platforms like Replika AI and Eva AI are selling user data. Many romantic AI chatbots don’t allow users to delete their accounts and don’t require complex passwords. At the same time, these chatbots often prompt users to share sensitive information, emotionally encouraging them to send photos and voice messages. How can we comfortably do so if we don’t know where that data will end up?

#4: AI often makes unexplainable decisions without clear parameters

AI decision-making processes are frequently hard to understand. This lack of transparency means users have little insight into how or why the technology reaches a particular outcome, making it difficult to fully trust its judgments.

Despite this, companies today increasingly use AI to evaluate employees. There are metrics, such as the turnover propensity index, that predict how likely an employee is to quit based on available data: the number of previous jobs, position, education, and more. Amazon, for example, has used similar systems, which can automatically terminate warehouse workers deemed to be underperforming.

The problem lies in the fact that AI never establishes causal relationships between phenomena. Causality is replaced by correlation: a statistical model merely identifies how one parameter changes in relation to others. This means it’s impossible to see how the model “thinks,” which specific combinations of parameters influence the outcome, and why. As a result, companies end up making “poor” decisions that negatively impact employees’ lives, all without valid justification.
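
To illustrate the gap between correlation and causation in such a model, here is a purely hypothetical Python sketch on synthetic data; the feature names, numbers, and generation process are assumptions for demonstration, not any real vendor’s metric.

```python
# Illustrative sketch only: a toy "turnover propensity" model on invented data.
# It captures correlation, not causation: the score says nothing about *why*
# an employee might leave.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hidden cause, not included in the features the model sees: dissatisfaction.
dissatisfaction = rng.uniform(0, 1, n)

# Observable features. "Previous jobs" merely correlates with dissatisfaction;
# it does not cause anyone to quit.
previous_jobs = rng.poisson(1 + 3 * dissatisfaction)
years_of_education = rng.integers(12, 21, n)             # unrelated noise
quit = (dissatisfaction + rng.normal(0, 0.2, n)) > 0.7   # driven by the hidden cause

X = np.column_stack([previous_jobs, years_of_education])
model = LogisticRegression(max_iter=1000).fit(X, quit)

# The model assigns a high "turnover propensity" to anyone with many previous
# jobs, even though that feature was only a proxy for the unobserved cause.
candidate = np.array([[5, 16]])
print("Predicted probability of quitting:", model.predict_proba(candidate)[0, 1])
```

The fitted model will flag anyone with many previous jobs as a flight risk, yet that feature never caused anything; it merely shadowed a cause the data never captured.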

#5: AI creates a sense of complacency that damages critical thinking

It’s one thing to draft an email for a potential partner or write a social media post using an AI assistant. It’s another matter entirely if you’re, for example, a doctor who thoughtlessly follows whatever the algorithms suggest, or a banker who relies on them completely without diligently evaluating a prospective deal.

As discussed, AI is not a silver bullet. And if we blindly delegate our choices to it, we are very likely to get them wrong. That is, I believe, one of the main dangers of this technology — we can easily “outsource” tasks to AI and stop thinking critically. Eventually, this can be tremendously harmful.

Final thoughts

For the reasons outlined above, AI should serve only as a co-pilot, and relying on it 100% is not advisable. We need to remember that we, as humans, should control any technology, and never the other way around.

Finally, one of the most important tasks in the field of AI ethics today is to debunk the myths surrounding it. Too many people attribute human qualities to robots. In reality, the term “artificial intelligence” would be more accurate if we removed the last word, and remembered that we, as humans, are by nature intelligent too.

Written by Dina Mostovaya

An award-winning global cultural & business strategist; founder of Mindset Consulting and Sensity Studio
