
Ethical issues of artificial intelligence: ANNs

Date: 2022-09-11

We need to reassess how we use AI so that it doesn't harm society


Photo by Nadine Shaabana on Unsplash

AI, AI everywhere

You have AI in your computer, you have AI in your phone. For f*ck's sake, you even have AI listening to your every conversation through your home assistant, be it Alexa, Google Home, or whatever. It has become really hard to find anything innovative and tech-related that doesn't have some sort of AI embedded in it.

AI use cases are close to infinite: we have seen everything from AI that drives cars autonomously to AI that draws Van Gogh-style paintings from photos. But these are no ordinary AI; these are Artificial Neural Networks.

However, despite their undeniable impact on society already, Artificial Neural Networks raise serious questions that not only put AI's market growth in jeopardy but could even become a hazard to its existence. Could their biggest virtues be, at the same time, what causes their downfall?

AI impacts us in unfathomable ways

It is no secret that AI is already changing the world. Some say for the better, some say for the worse, but undeniably, AI is changing it and, more importantly, changing us, and there's nothing we can do about it. AI is affecting how we socialize with each other, how we work, how we behave, and, scarily enough, could even determine how we raise our kids.

And its influence is just growing and growing...

According to a study by PwC, AI could contribute around $16 trillion to the global economy by 2030. To fathom how huge that number is, for reference, all the known gold in the world is valued at about $9 trillion, and the cryptocurrency market, with all its everyday volatility, sits at around $2 trillion. Hence, AI would still be worth $5 trillion more, as a market, than those two combined at today's numbers.

And this is all thanks to Artificial Neural Networks.

What is AI really?

To talk about ANNs, we first need to understand what AI is, because these days you see hundreds of thousands of articles and comments about AI, but few people actually understand what it is.

AI, at this point in time, consists of statistics-focused algorithms that, using a great deal of data, learn to produce different types of outputs, like predictions, recommendations, or, with more advanced AI, actual actions (driving a car, creating a painting, composing a piece of music, etc.).

Thus, AI's versatility exists only because these algorithms can perform completely different tasks well while, mathematically, remaining intrinsically the same.

And the algorithms that are capable of adapting to such disparate use cases are none other than Artificial Neural Networks.

Now, what the actual f*ck are Artificial Neural Networks?

Artificial Neural Networks are a specific type of AI algorithm with unmatched versatility across AI use cases. You can use ANNs to drive your car, manage your Mac's M1 or M2 chip, draw paintings, or write music. All of these things, each with its own nuances (there are different types of ANNs), are done by ANNs.

But how can one single algorithm be capable of encompassing such a ludicrous amount of different practical use cases?

Well, it's simple and hard to explain at the same time.

In simple terms, an ANN is an algorithm that intertwines layers of neurons (we can consider each neuron a variable) and their corresponding weights. All the algorithm does, in each and every one of the use cases described before, is minimize an error function, much like you did in calculus during your college years.

For those of you who aren't tech-heads, what ANNs do is minimize the possibility of making a wrong call.

They are trained by computing an error function (a function that measures how wrong a given ANN output is, like when the ANN says something is a dog when in reality it is a cat) and using mathematical techniques like partial derivatives to fine-tune the neurons (variables) that will activate (those that matter for that use case) and their corresponding weights, so as to minimize the chance of making a mistake.

ANNs are incredibly powerful because they are capable of deciding what factors (variables) matter to make the best assumption and, thereby, reduce the chance of error.
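
To make that a bit more concrete, here is a minimal sketch of the idea in Python. The toy XOR-style data, the layer sizes, and the learning rate are my own illustrative choices, not anything from the real use cases above: a tiny two-layer network whose weights get nudged, using the partial derivatives of an error function, until its predictions stop being wrong.

```python
# A minimal sketch: a tiny two-layer network trained by repeatedly nudging its
# weights to shrink an error function. Toy data and sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer of "neurons"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # second layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: turn inputs into a prediction
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Error function: how wrong the network currently is (mean squared error)
    error = np.mean((pred - y) ** 2)

    # Backward pass: partial derivatives of the error w.r.t. every weight
    d_pred = 2 * (pred - y) / len(X) * pred * (1 - pred)
    d_W2, d_b2 = h.T @ d_pred, d_pred.sum(axis=0)
    d_h = d_pred @ W2.T * h * (1 - h)
    d_W1, d_b1 = X.T @ d_h, d_h.sum(axis=0)

    # Nudge every weight in the direction that reduces the error
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(pred.round(3))  # should approach [0, 1, 1, 0] as the error shrinks
```

That loop is the whole trick: compute the error, differentiate it, nudge the weights, repeat.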

Without getting into the technical details of how they actually work, my goal is to make sure you understand why they apply so well to so many practical fields. To do this, let's use examples:

  • In an autonomous car, they are trained to avoid accidents (the error).
  • In an object detector, like the face recognition system used by the Chinese government, they learn to minimize the chance of identifying a face that 1. isn't actually a face, or 2. isn't the person they think it is.
  • In a Van Gogh painting generator, they are trained to avoid creating paintings that don't resemble Van Gogh's.
  • In a model that detects Alzheimer's in brain scans, they are trained to ignore brain patterns that don't seem to be a signal of Alzheimer's.

By the way, these are all real use cases.

You see the pattern, right? The unique thing about ANNs is that they are completely detached from the reality of what they are executing: they work at an abstraction layer above the actual use case by minimizing the chance of error. Because, as you can see in the examples above, any use case can be reduced to two categories, a positive outcome and a negative one.

But what do I mean by ANNs working 'above' the actual use case?

What I mean is that ANNs aren't sentient; they don't understand what Alzheimer's is, they don't know who Van Gogh was or what kind of paintings he made. They simply learn, from humongous amounts of data, to produce the prediction or action that comes closest to the desired outcome.

They just turn any use case (a photo, for example) into actual numbers and functions (pixels, in this example) that they can then optimize to the point where they hardly ever make mistakes.

The bottom line is that ANNs work for so many use cases because they are capable of transforming any distinct use case into something universal: any use case can be divided into a positive scenario and a negative one. Therefore, any input that can be turned into numbers can be interpreted by an ANN and optimized by reducing the chance of making bad calls across the millions of data points on which these ANNs are trained.
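
As a tiny illustration of that "everything becomes numbers" idea (the image size and the random pixels below are stand-ins of my own, not real data):

```python
# A (fake) grayscale photo is just a grid of pixel intensities. Flatten it into
# one long vector of numbers and a network can score it as "positive outcome"
# vs "negative outcome". The 28x28 size and random pixels are stand-ins.
import numpy as np

rng = np.random.default_rng(1)
photo = rng.integers(0, 256, size=(28, 28))    # 784 pixel values, 0-255
x = photo.reshape(-1) / 255.0                  # one vector of numbers in [0, 1]

W = rng.normal(size=(784, 1)) * 0.01           # an (untrained) layer of weights
score = 1 / (1 + np.exp(-(x @ W)))             # probability of the "positive" class

print(x.shape, score[0])  # (784,) and a number between 0 and 1
```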

However, ANNs are expensive to train, as they need lots of data. Thus, the usual rule of thumb is that tabular data will be modeled with other, less computationally intensive algorithms in the Machine Learning spectrum, while non-tabular data (images, x-rays, sounds, text, you name it) will have to be modeled by ANNs.
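
A sketch of that rule of thumb, using scikit-learn purely as an example library of my choosing (the choice of models and datasets here is mine, not the article's):

```python
# Rule of thumb: a classic, cheaper model for tabular data; a neural network
# reserved for the non-tabular case (here, small digit images).
from sklearn.datasets import load_iris, load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Tabular data (rows and named columns): a lighter, non-neural model usually suffices
X_tab, y_tab = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X_tab, y_tab, random_state=0)
print("tabular, gradient boosting:",
      GradientBoostingClassifier().fit(Xtr, ytr).score(Xte, yte))

# Non-tabular data (8x8 pixel images flattened into numbers): reach for a neural network
X_img, y_img = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X_img, y_img, random_state=0)
print("images, neural network:",
      MLPClassifier(max_iter=500, random_state=0).fit(Xtr, ytr).score(Xte, yte))
```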

Not everything is perfect with ANNs

As I described at the beginning of the article, not everything is perfect around Artificial Neural Networks. Indeed, they offer great solutions for many of society's most pressing matters, but those same characteristics that make them brilliant also entail some worrying ethical issues of artificial intelligence.

In simple terms, ANNs are as opaque as an algorithm can be.

But what does that mean?

Simple: when the neural network gets too big, we become plainly incapable of understanding how it is making its decisions.

Remember I mentioned that they are capable of choosing which variables matter and how important they are, all by themselves?

That's a virtue and a problem at the same time.

On the one hand, you avoid human biases by letting the algorithm fine-tune itself and decide what matters based purely on data, analyzing hundreds of thousands of variables and their correlations and keeping only those that matter. This is impossible for a human being to do, period.

On the other hand, the engineers behind those models have no clue about what's going on inside the model. And this is a problem.
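
To picture what "no clue" looks like in practice, here is a small sketch (again my own example, not the author's): even with full access to a trained network, all you can inspect are matrices of raw numbers, with no human-readable reason attached to any individual decision.

```python
# After training, all the "knowledge" of a network lives in weight matrices
# full of raw numbers; nothing in them explains *why* a given call was made.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                      random_state=0).fit(X, y)

for i, W in enumerate(model.coefs_):
    print(f"layer {i}: weight matrix of shape {W.shape}")
print(model.coefs_[0][:2, :5])  # just numbers, opaque to the engineers who trained it
```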

But why? As long as it works, it's fine, isn't it? Well, it depends.

ANNs' opacity is a great problem when transparency is required. This is better understood with an example:

An insurance company doesn't want to pay for an accident a customer had. However, the customer believes the accident is covered by the insurance policy, so they go to court.

When the judge demands that the insurance company explain why it is rejecting the claim, the company can't explain its reasoning, because the decision was made by an opaque ANN that has studied millions of such cases and concluded that the company doesn't have to pay.

What do you think the judge will do?

Although the model probably made a correct recommendation, the judge couldn't care less about the model's accuracy. If no explanation is given, the insurance company will have to pay, obviously.

And this example can easily be extrapolated to any regulated sector. In fact, many regulators are rejecting AI-powered solutions with almost 100% accuracy simply because they aren't transparent.

Transparency is needed in situations where ethics must be considered. Hence, ANNs are absolutely out of the question for an ever-growing number of practical use cases that require some sort of transparency.

AI isn't being held back by a lack of funding or interest; AI is being restrained by the surge of Responsible AI (RAI), a movement that wants to enforce ethics around anything related to AI, because AI models are already making decisions that affect society considerably, and a responsible use of the technology must therefore be ensured.

And things can actually become worse.

EU regulators are notorious for crippling technology growth in the name of ethics, data privacy, transparency, and so on.

Soon, the Artificial Intelligence Act will come into force in the EU, and with it each and every AI use case will be carefully analyzed through the lens of ethics.

Never in history will a technology, AI, and most specifically an algorithm, ANNs, be more scrutinized by regulators and society. Researchers must find ways to make these models more transparent, period. If not, the ethical issues of artificial intelligence, despite its great potential, could minimize its impact, and with it, the immense innovation potential that AI is credited with.
