AI has rapidly become part of everyday life, sparking both excitement and unease. Some people are energised by the technology, whilst others fear it. Many of us, however, find ourselves somewhere in the middle: intrigued but cautious.
In this article, I will explore the ethical dimension of AI. Why does it provoke fear? Are those fears justified? And what kind of future are we creating?
Can AI make ethical decisions?
Most of us use AI for simple tasks: proofreading, brainstorming, or drafting emails. It’s fast, helpful, and unthreatening. However, unease grows as AI is increasingly used in contexts where moral questions arise. Can a machine make moral choices?
Consider a self-driving car faced with a dilemma: swerve and risk its passengers to save a child, or stay the course and harm the child. Ethical scenarios like this expose the limits of AI.
Moral reasoning is inherently human. It draws on empathy, intuition, and lived experience: things no algorithm can truly replicate. Ethics is not a fixed equation; real-world dilemmas are messy and nuanced, and they rarely fit neatly into programmable rules.
Take utilitarianism, for instance. It aims to maximise happiness and reduce harm. An AI might use it to decide that a self-driving car should save the most lives. However, the same logic could lead an AI to recommend sacrificing one person to save five or evicting a single tenant to accommodate a family.
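To make that rigidity concrete, here is a minimal Python sketch of a purely utilitarian chooser. The scenario, option labels, and scoring rule are all hypothetical, invented for illustration; no real system reduces ethics to a one-line comparison, but the failure mode is the same in spirit:

```python
# A deliberately naive utilitarian decision rule (hypothetical example).

def utilitarian_choice(options):
    """Pick whichever option saves the most lives, ignoring all other context."""
    return max(options, key=lambda option: option["lives_saved"])

options = [
    {"action": "stay the course", "lives_saved": 1},
    {"action": "sacrifice one to save five", "lives_saved": 5},
]

print(utilitarian_choice(options)["action"])
# Prints "sacrifice one to save five": the arithmetic is consistent,
# but the moral weight of that trade never enters the calculation.
```

The function is perfectly consistent, and that is exactly the problem: consistency is not the same as judgment.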
So, without human judgment, AI’s rigid logic could lead to ethically troubling outcomes. While AI can follow ethical frameworks, it cannot truly make ethical decisions as it lacks the empathy and moral responsibility to do so.
Algorithmic biases
Even when AI isn't making high-stakes decisions, it can still cause harm by reinforcing biases. The data used to train AI models often reflects embedded prejudices, meaning that AI can unintentionally amplify those biases.
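A toy sketch shows the mechanism. Everything below is hypothetical (the groups, outcomes, and counts are invented), but it captures how a system fitted to skewed historical decisions hands the skew back as if it were a neutral prediction:

```python
# A toy bias-amplification demo with invented data (hypothetical example).
from collections import Counter

# Fictional historical decisions, skewed against group "B".
history = (
    [("A", "approve")] * 80 + [("A", "reject")] * 20
    + [("B", "approve")] * 30 + [("B", "reject")] * 70
)

def train(records):
    """'Learn' an approval rate per group from past decisions."""
    totals, approvals = Counter(), Counter()
    for group, outcome in records:
        totals[group] += 1
        approvals[group] += outcome == "approve"
    return {group: approvals[group] / totals[group] for group in totals}

model = train(history)
print(model)  # {'A': 0.8, 'B': 0.3}: the old skew is now the "prediction"
```

Nothing in the code mentions prejudice, yet the output reproduces it; this is how bias hides inside apparently neutral systems.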
Consider tenant screening: an AI trained on biased data could favour certain demographics over others, perpetuating inequality. This isn’t hypothetical; Amazon once developed, and later scrapped, a recruitment algorithm that penalised CVs containing the word "women’s" because it had been trained on historical hiring data skewed toward male candidates.
Cases such as these have prompted a sense of urgency amongst organisations and governments to develop regulatory measures that address algorithmic transparency and train developers to recognise bias in their work (1). The European Union’s AI Act is one of the leading policy responses aimed at ensuring fairness and accountability.
Technology first, ethics later?
People are uneasy about AI largely because they fear losing control and losing the human touch. While it may seem a leap from using AI to proofread emails to relying on it for ethical decisions, the pace of development is rapidly narrowing that gap.
The challenge is that technologists are not ethicists. Innovation moves faster than ethical reflection, and as AI becomes more deeply embedded in everyday life, keeping up with the implications can feel overwhelming. It’s also difficult to predict where the technology will lead; after all, the internet itself began as a tool for government researchers to share work.
The future: Charting a responsible path
So, where does that leave us? We shouldn’t be overly fearful. AI is here, and it's proving to be a powerful enabler of innovation, productivity, and growth. The challenge is to ensure we use this technology responsibly.
Organisations must take a proactive role in ensuring that AI aligns with their core values, compliance requirements, and strategic objectives. Remit’s AI survey revealed that just 24% of businesses have both formal policies governing AI use in workflows and an AI council to oversee its impact, while 51% of organisations lack formal AI ethics principles altogether. These gaps highlight the need for ongoing development of governance and ethical frameworks as AI continues to evolve.
Crucially, we must stay connected to the human side. AI shouldn’t replace critical thinking, empathy, or accountability; it should serve human needs and values, not override them. By prioritising empathy, fairness, and accountability, we can harness AI’s potential while safeguarding what makes us human.
(1) "Can AI ever be truly unbiased? Exploring the challenges", Business Case Studies: https://businesscasestudies.co.uk/can-ai-ever-be-truly-unbiased-exploring-the-challenges/