Why AI Ethics Matters

Artificial Intelligence is transforming our world at a breathtaking pace. It diagnoses diseases, steers vehicles, decides on loan approvals, and influences which news you see. But with this power come fundamental questions: Who decides what AI is allowed to do? What values does it follow? And who bears the responsibility when something goes wrong?

AI ethics is not a side issue for philosophers; it is one of the most pressing challenges of our time. In this lesson, you will learn why that is the case, what real-world harm has already occurred, and why you as an AI user are directly affected.

Did you know? According to a UNESCO study from 2024, over 60 countries have published official AI ethics guidelines. Yet to this day, there is no globally binding framework. The EU AI Act (in force since 2025) is the most ambitious attempt so far, but even it does not cover all ethical questions. AI ethics is a constantly evolving field.

Ethics Is Not a Nice-to-Have

Some view ethics as a brake on innovation. The opposite is true: a lack of ethics is a risk for companies, for society, and for individuals. AI systems without ethical guardrails have already caused real harm. The following examples show what happens when ethics is considered after the fact instead of from the start.

Historical Examples of Ethical Failure

The history of AI is full of warning signs. Three particularly striking cases:

COMPAS System (2016)

The COMPAS system (Correctional Offender Management Profiling for Alternative Sanctions) was used in US courts to predict the recidivism likelihood of defendants. Judges used these scores for decisions about bail and sentencing.

The problem: An investigation by ProPublica showed that the system systematically rated Black defendants as more dangerous than white defendants – even with comparable backgrounds. The error rate for African Americans was nearly double. Yet the algorithm was used for years without those affected being able to challenge it.
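The disparity ProPublica found was measured with a standard fairness metric: the false positive rate, i.e., the share of people who did not reoffend but were still labeled "high risk." The sketch below shows how such a per-group comparison works; the data and the numbers it produces are invented for illustration and are not ProPublica's actual figures.

```python
# Comparing false positive rates across two groups, ProPublica-style.
# All data below is synthetic and purely illustrative.

def false_positive_rate(predictions, outcomes):
    """Share of non-reoffenders (outcome 0) who were labeled high risk (prediction 1)."""
    non_reoffenders = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not non_reoffenders:
        return 0.0
    return sum(non_reoffenders) / len(non_reoffenders)

# Hypothetical risk labels (1 = "high risk") and outcomes (1 = reoffended)
group_a = {"pred": [1, 1, 1, 0, 1, 0, 1, 0], "out": [1, 0, 0, 0, 1, 0, 0, 0]}
group_b = {"pred": [0, 1, 0, 0, 1, 0, 0, 0], "out": [0, 1, 0, 0, 1, 0, 0, 0]}

fpr_a = false_positive_rate(group_a["pred"], group_a["out"])
fpr_b = false_positive_rate(group_b["pred"], group_b["out"])
print(fpr_a, fpr_b)  # group A's non-reoffenders are flagged far more often
```

Both groups can have similar overall accuracy while one group's non-reoffenders are flagged much more often, which is exactly why aggregate accuracy alone can hide discrimination.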

Ethical issue: Discrimination, lack of transparency, no appeals process.

Amazon Recruiting AI (2018)

Amazon developed an AI system to automatically rate job applications. The system was trained on historical hiring data – and since Amazon had historically hired predominantly men, the AI learned: prefer male applicants.

The AI systematically downgraded resumes containing the word "women's" (e.g., "women's chess club"). Amazon shut down the project, but the case became a textbook example of how historical biases live on in AI systems.
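The mechanism is worth seeing concretely: nobody programs "prefer men" explicitly; the model simply learns which words correlate with past hiring decisions. The toy sketch below (not Amazon's actual system, and with invented data) "trains" a naive word-based scorer on historical outcomes and shows how a single word ends up penalized.

```python
# Toy illustration of bias inherited from training data. If a word like
# "women's" appears mostly in historically rejected resumes, a naive scorer
# learns to penalize it without anyone programming discrimination explicitly.
from collections import Counter

# Invented historical data: (set of resume words, hired?)
history = [
    ({"chess", "club", "engineering"}, 1),
    ({"robotics", "engineering"}, 1),
    ({"women's", "chess", "club", "engineering"}, 0),
    ({"women's", "robotics", "engineering"}, 0),
]

hired_words, rejected_words = Counter(), Counter()
for words, hired in history:
    (hired_words if hired else rejected_words).update(words)

def score(words):
    """Naive score: occurrences among hires minus occurrences among rejections."""
    return sum(hired_words[w] - rejected_words[w] for w in words)

# Two otherwise identical resumes that differ in exactly one word:
print(score({"chess", "club", "engineering"}))             # neutral
print(score({"women's", "chess", "club", "engineering"}))  # penalized
```

The second resume scores strictly lower solely because of the word "women's", mirroring how Amazon's system downgraded such resumes even though gender was never an explicit feature.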

Ethical issue: Gender discrimination through biased training data.

Microsoft Tay (2016)

Microsoft launched the chatbot "Tay" on Twitter, designed to learn through user interactions. Within 16 hours, it had to be shut down – trolls had taught it racist, sexist, and antisemitic statements, which it cheerfully repeated.

Tay revealed a fundamental problem: when an AI system learns from user behavior without filters, it adopts the worst aspects of human behavior too. Without safety mechanisms, AI becomes an amplifier of hatred.

Ethical issue: Missing safety measures, no alignment with human values.

The Trolley Problem for AI

Imagine this: a self-driving car detects that an accident is unavoidable. It can swerve left and endanger a pedestrian, or continue straight and risk the passengers. How should the AI decide?

The classic "Trolley Problem" from philosophy suddenly becomes real through autonomous driving. And it gets more complex: should the age of those involved play a role? Their number? What if a child is standing in the road?

Example: The MIT research project "Moral Machine" collected over 40 million decisions from people worldwide. The results showed massive cultural differences: in Western countries, people tended to save the younger person, while in Eastern cultures the older person was preferred (respect for elders). How do you program a "global ethics" into a car that drives in different countries? The answer is: there is no simple solution.

The Trolley Problem is just the tip of the iceberg. Similar dilemmas arise wherever AI makes decisions: should a medical AI prioritize a young or an elderly patient when resources are scarce? Should a credit AI consider the neighborhood, even if that leads to discrimination?

Who Is Responsible When AI Makes Mistakes?

When a human doctor makes a mistake, the liability question is clear. But what if an AI makes a wrong diagnosis? Who is liable?

Warning: The question of responsibility for AI errors is unresolved worldwide. Neither the EU AI Act nor other frameworks provide a complete answer yet. This means: if you use AI systems professionally, you may bear a responsibility whose scope is not yet legally defined. Never rely blindly on AI decisions in critical areas.

Responsibility is spread across a complex web of stakeholders:

  • Developers and researchers: They design the models, select the training data, and define the optimization goals. Their decisions during the development phase have far-reaching ethical consequences.
  • Companies: They deploy AI systems and bear responsibility for their impact on customers, employees, and society. Profit motives must not override ethical principles.
  • Legislators and regulators: They create the legal framework, but technology evolves faster than laws. The gap between technological reality and regulation is an ongoing problem.
  • Society and individuals: Every one of us is affected, and every one of us has a voice. Informed citizens can put pressure on companies and policymakers to demand ethical standards.

Quote: "Technology is neither good nor bad; nor is it neutral." – Melvin Kranzberg, historian. This quote applies especially to AI: it is a powerful tool whose impact depends entirely on how we design and deploy it.

Practical Tip: You don't have to be a lawyer or philosopher to understand AI ethics. Start with one simple question for every AI system you use: "Who could be disadvantaged by this system?" This question alone sharpens your awareness enormously and helps you identify potential problems early.
Quick check: Why was the COMPAS system ethically criticized?

Answer: The ProPublica investigation showed that COMPAS falsely rated Black defendants as "high risk" nearly twice as often as white defendants with comparable backgrounds. The main problem was not a technical glitch or a matter of authorization: the system amplified existing racial biases instead of correcting them, a textbook example of algorithmic discrimination with real consequences for those affected.

Key Takeaways:
  • AI ethics is not a side issue; it affects real people with real consequences, from criminal sentencing to credit decisions.
  • Historical cases like COMPAS, Amazon's recruiting AI, and Tay show: without ethical guardrails, AI causes measurable harm.
  • The Trolley Problem illustrates that many ethical AI questions have no simple answers, but the questions must still be asked.
  • Responsibility for AI ethics lies with all stakeholders: developers, companies, legislators, and society.
  • As an AI user, you can contribute by critically questioning AI systems and demanding transparency.