Disruptive Technologies-5: The Ethics of Data-Driven Decisions

For the Turkish version: https://bahadirhancicek.com/2024/10/12/yikici-teknolojiler-5-veri-temelli-kararlar/

When I first held a gun, it sprayed water. It was fun to spray someone else, but when it was sprayed at me, it didn’t feel that entertaining. Later I got a toy dart gun. We’d shoot it around and stick the plastic dart onto things. Once, by accident, it stuck right on my friend’s forehead. We laughed together.

Photo by Katja Anokhina on Unsplash

As I grew a little older, I discovered computer games. Around the same time, bead guns were popular too. On the computer, we’d take down enemies, bang bang, while in real life we’d play “battle” with bead guns. We had one rule: shooting from close range was forbidden. Anyone who did it on purpose was seen as a psycho. Could a person really be that cruel? Still, plenty of people got hurt in that game. As the danger grew and we grew up, we drifted away from it. We leaned more into the computer. At least nobody got hurt. Even if blood splattered, the character would come back.

Photo by Kony on Unsplash

My first encounter with a real gun came thanks to a relative who thought firing into the air—and teaching a child to do it—was some kind of skill. Children lined up; I was forced into the line. I had no interest, and shooting into nothing felt pointless.

The last time was in the army. We fired three rounds. I have no idea where they went. To hit the target, you had to aim a couple of handspans to the left; that's how well the rifle was calibrated.

In the army I sometimes wondered what it would feel like to hit a real target. Movies came to mind. The first time is always hard. And don’t look them in the eye… I had heard those words before. During sacrifice. Don’t look the animal in the eye. If you do, you might not be able to cut. Maybe that sentence was the beginning of my disbelief. Why force yourself to do something your conscience won’t allow? The act is taking a life. Isn’t that strange? Does God want war, or is this just training you to accept war? A fanatically religious Pakistani friend once told me that through sacrifice we satisfy the aggression and the urge to kill that exists in our nature. We learn what killing is, he said. I was even more horrified.

Photo by Sergey Koznov on Unsplash

Thanks to technology, we’ve been freed from this conflict of conscience. Meat is now just packages in the freezer. You don’t see eyes. You don’t feel anything. To fill our stomachs, we breed millions of animals and wipe them out without looking at their tears. We do something similar in wars. A generation raised on video games continues the same game as adults. The only difference is that the characters don’t come back. Like my uncle I never met who died by gunfire, like my police-officer uncle who was martyred in Iraq, and like thousands of people like them—people who believe they are fighting for some great cause, sanctified when they die, ignored if they don’t.


My opening—something like a memorial—ended up a bit unrelated. And no! I’m not a pessimistic or negative person; I just think negativity is a part of life we have to accept.

Our topic is actually data and data-driven decisions.

Data is everywhere in our lives.

The simplest example is the internet and streaming platforms, which collect an enormous amount of data from us and then offer personalized recommendations. That way you spend more time with content aligned to your interests and don't get lost in the sea of information. This reduces decision fatigue, at least a little.

You’ve probably spent hours on Netflix without watching anything. And sometimes the opposite happens: something that appears in front of you looks good and you start immediately. If data didn’t filter your interests, the first scenario would probably take much longer and happen more often. Of course, we didn’t have this in the past. We’d go to the cinema and choose among five films, or go to a DVD store and buy what we were looking for.
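The filtering described above can be sketched as a tiny content-based recommender. The catalog, genre tags, and `recommend` function below are invented for illustration (real platforms use far richer signals); each unwatched title is scored by how much it overlaps with the genres you have already watched.

```python
from collections import Counter

# Hypothetical catalog: title -> set of genre tags (illustrative data only).
CATALOG = {
    "Dark":         {"sci-fi", "thriller", "german"},
    "Mindhunter":   {"crime", "thriller", "drama"},
    "The Crown":    {"drama", "history"},
    "Black Mirror": {"sci-fi", "anthology", "thriller"},
    "Chef's Table": {"documentary", "food"},
}

def recommend(watched, top_n=2):
    """Rank unwatched titles by how many genre tags they share
    with everything the user has already watched."""
    profile = Counter(tag for title in watched for tag in CATALOG[title])
    scores = {
        title: sum(profile[tag] for tag in tags)
        for title, tags in CATALOG.items()
        if title not in watched
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({"Dark", "Mindhunter"}))  # -> ['Black Mirror', 'The Crown']
```

Because the score only rewards overlap with what you already liked, the sketch also hints at the echo-chamber problem discussed later: nothing outside your profile ever surfaces.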

Photo by Microsoft Edge on Unsplash

Online shopping sites are another example of data-driven decisions. Sometimes even what you talked about appears in front of you instantly. But if you’re actively searching for something, you don’t experience that same “selective perception.” You’re happy—because you found it easily. You don’t blame the algorithms. There are always two sides: the side that collects data and tries to provide the best service, and the side that gives data and receives the service. One profits more; the other spends more.

Technology is rapidly entering healthcare too. More data can be collected from patients; real-time monitoring of heart rate, blood sugar, muscle activity, and more becomes possible; and diagnoses can be made much faster. Similarly, previous patient data, genetic indicators, and lifestyle factors can reveal future risks. More data: more certain, timely, and appropriate decisions.
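The real-time monitoring idea can be illustrated with a minimal alerting sketch. The ranges below are invented placeholders, not clinical values:

```python
# Illustrative "normal" ranges only -- not clinical guidance.
NORMAL_RANGES = {
    "heart_rate_bpm": (60, 100),
    "blood_sugar_mg_dl": (70, 140),
}

def check_vitals(reading):
    """Return the list of vitals that fall outside their normal range."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(vital)
    return alerts

print(check_vitals({"heart_rate_bpm": 118, "blood_sugar_mg_dl": 95}))
# -> ['heart_rate_bpm']
```

A real system would of course weigh trends over time rather than single readings, but the principle is the same: continuous data turns a yearly check-up into a constant stream of decisions.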

Data matters in manufacturing as well. Companies can make production processes more effective and prevent waste. “Produce, inspect, and react” is being replaced by proactive production—intervening while producing. More data: more sustainability, better service, and higher-quality products.
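The "intervene while producing" idea is essentially anomaly detection on a sensor stream: flag a reading that deviates sharply from the recent window instead of inspecting finished parts afterwards. A minimal sketch with simulated readings and a made-up threshold:

```python
import statistics

def drift_alert(readings, window=5, threshold=3.0):
    """Return indices of readings that deviate strongly (in standard
    deviations) from the mean of the preceding window."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent) or 1e-9  # avoid division by zero
        if abs(readings[i] - mean) / stdev > threshold:
            alerts.append(i)
    return alerts

# Simulated temperature stream with one sudden spike at index 7.
stream = [20.1, 20.0, 19.9, 20.2, 20.0, 20.1, 19.8, 35.0, 20.0]
print(drift_alert(stream))  # -> [7]
```

Catching the spike at index 7 while the line is still running is the proactive step; the old model would only have found the defective batch at final inspection.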

Photo by Compare Fibre on Unsplash

You’ve probably noticed the change in education platforms too. These platforms offer content based on your learning speed and your performance. You learn at your own pace, and you’re guided to learn using techniques that suit your strengths. Duolingo is a good example.

The last example I want to give is making legal and political decisions through data. In some countries, algorithms identify potential criminals and generate future-risk prediction reports based on past data. It sounds great for preventing crime. Otherwise, they say, you can’t catch the bomber before the bomb explodes.

Just based on these examples, the following questions come to mind:

  1. We spend a large part of our lives online. It’s not that hard to decode our behavioral patterns. With this much data, is it possible for someone to know us better than we know ourselves? (See: Data Privacy and Sharing)
  2. Does commercial targeting based on data collected from the same groups cause those groups to polarize and become more radicalized? Doesn’t critical thinking and open-mindedness disappear? (See: Echo Chambers)
  3. What happens if we enter a false feedback loop? (A good example: Netflix films and series that all look the same. Taylor Swift fans, the TikTok audience, etc.)
  4. Can health data be misused? Can health data become a tool for profit policies? (An explosion of consumer products based on diseases that could affect larger populations.) (See: Data Privacy)
  5. Does efficient production trigger more production—and more production trigger more resource use? What is the environmental impact, how do we measure it, how do we evaluate it? (We don’t.) (See: Deforestation for electric-car battery production)
  6. Can data-driven educational content be used for propaganda and mass manipulation?
  7. How should we verify whether wrong decisions are being made due to faulty or incomplete algorithms?
  8. If we base every decision on data, what will politicians and other liars do?
  9. How can we guarantee that the man who can’t slaughter an animal because he looked it in the eye—once he hands that moral decision to machines—won’t create an even bigger massacre and brutality?
Photo by Nick Morrison on Unsplash

Ethical Risks and Errors

Bias

The biases of individuals and societies are a major obstacle to data-driven decision-making. As I explained in previous articles, biases around language, religion, ethnicity, and so on will increase incorrect outcomes.

A simple example: the belief that Kurds are unsuccessful in education. “They’re given the same opportunities but still can’t succeed,” they say. The “proof” is the data. What gets overlooked is that they are not learning in their mother tongue, they do not have equal economic opportunities, their social security is not the same, and many other factors.

Similarly, the belief that women are unsuccessful in business and politics. If the data shows more men, that does not mean men are more successful. There are countless measurable and immeasurable parameters.
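Both examples are cases of a confounder hiding inside the "proof." A toy calculation with invented numbers shows how a raw group comparison can make one group look "less successful," even though under equal conditions the groups perform identically:

```python
# Invented records: (group, well_resourced_school, passed_exam).
# Group A is mostly in well-resourced schools; group B mostly is not.
records = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("A", False, True)] * 5 + [("A", False, False)] * 5 +
    [("B", True, True)] * 8 + [("B", True, False)] * 2 +
    [("B", False, True)] * 45 + [("B", False, False)] * 45
)

def pass_rate(rows):
    return sum(1 for *_, passed in rows if passed) / len(rows)

a_rows = [r for r in records if r[0] == "A"]
b_rows = [r for r in records if r[0] == "B"]

# Naive comparison: B looks clearly worse.
print(round(pass_rate(a_rows), 2), round(pass_rate(b_rows), 2))  # 0.77 0.53

# Controlling for school resources, the gap vanishes entirely.
for resourced in (True, False):
    a = pass_rate([r for r in a_rows if r[1] == resourced])
    b = pass_rate([r for r in b_rows if r[1] == resourced])
    print(resourced, a, b)  # True 0.8 0.8 / False 0.5 0.5
```

An algorithm trained on the raw column would happily learn "group B fails more," even though group membership explains nothing once resources are accounted for.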

So how do you write an algorithm without feeding biases? That’s the biggest dilemma.

Data Privacy

A highly complex problem. On the one hand, without giving our data we can’t do anything; on the other hand, we can’t be sure our data is safe. Platforms like Google and Facebook hold data that could completely destroy our lives—or completely change how we see life.

Is the solution preventing data tracking? No. Our dead end remains: how can we do this ethically?

Autonomy and Manipulation

Algorithms are designed to predict future behavior. A simple shopping behavior is usually seen as insignificant. But when you combine all the pieces, you may run into a system that knows you better than you know yourself. That can also restrict a person’s freedoms.

For example, social media pushes you toward communities that think like you, toward products you’ll likely like or feel close to. So are you making free decisions? No.

Should companies, instead of maximizing profit, prioritize your freedom and your social disadvantages? Debatable. I think the solution starts with technology education.

Photo by Tingey Injury Law Firm on Unsplash

Accountability and Data Transparency

Algorithms, and even the devices we use, are largely black boxes. We don't know what happens inside or what logic they use to decide. For example, hiring decisions can be made by AI, and the rejected applicant receives no explanation. If a human made the decision, there is at least someone to hold accountable; with AI, assigning blame is almost impossible. So who should be responsible for the correctness of the decision, for transparency, and for ensuring the algorithm doesn't contain certain biases? If the data produces a one-sided result, as in the women-in-politics example, and the AI accepts men and rejects women based on that result, can we blame the algorithm and the data? A strong dilemma.

Access and Inequality

Another issue is processing and access to data. Big companies, because they have more resources, can process and store data at a much larger scale. That crushes competitors before they even appear. Standing against Google and Facebook becomes almost impossible. You’d have to burn all the data, destroy it, blow up the servers. Companies like Amazon can immediately detect small competitors and crush them before they grow. How can we prevent this? How do we ensure democratic access to data?

Conscience and Robots

We already discussed this in the article on war technologies. Here we hinted at it again. People can easily hand over ethical decisions they themselves cannot make to machines. In that case, whom will we blame? If a command is given—“shoot everyone crossing the border”—a human might not do it, but a machine will. And it won’t distinguish civilian from non-civilian. We already see this today in the use of drones on battlefields. We saw it with the pager attack; we saw it in Palestine. When you drop bombs like you’re playing a game, it becomes child’s play.

Similarly, if we make legal decisions this way and hand them to machines, maybe we won't even know when they go wrong. (Take the U.S., where even now wrongful convictions have led to executions.) The same applies across many scenarios: medical diagnoses, production decisions, smart vehicles' choices under stress, autonomous passenger planes deciding based on environmental data, signaling decisions, and more.

Photo by Possessed Photography on Unsplash

Rights

Another question mark is rights, alongside responsibility. Robots, algorithms, and other technological products that carry out everyday civic tasks do not feel like we do; they have no consciousness—yet they can become decision-makers. In that case, will there be an independent framework of rights and legal obligations? This topic isn’t as absurd as it sounds; it’s far more complex. Maybe I’ll have the chance to write about it later, but robot and AI rights will be among the topics that come to the agenda in the coming years.

Solution

The solution is difficult. This is the clash of past and future. It’s also the struggle of human existence—and of developing while still remaining human.

Ironically, current solutions are more classic and primitive compared to technology itself. It feels like we’re returning to the most basic foundations of the system and of humanity.

One of these is more frequent auditing—creating processes similar to traditional court proceedings through specialized organizations: investigating algorithmic biases, examining data blocks.

Ensuring data transparency and consent. We’ve seen efforts in recent years: constant pop-up warnings we never read. We click “OK” and move on. These mechanisms must become more user-friendly. And users must be educated on these topics.

Another solution is explainability in AI. Decision mechanisms and parameters must be explainable to the user, and people should know what is expected of them. Especially in healthcare, finance, and law, this is non-negotiable.
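For the simplest kind of model, a linear score, explainability can be as direct as reporting each feature's contribution alongside the decision. The weights, threshold, and feature names below are hypothetical, not any real credit model:

```python
# Hypothetical credit-scoring weights -- illustrative only.
WEIGHTS = {"income_k": 0.5, "debt_k": -0.8, "years_employed": 1.2}
THRESHOLD = 10.0

def score_with_explanation(applicant):
    """Return the accept/reject decision plus each feature's contribution,
    so the applicant can see *why* the model decided as it did."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS
    }
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

approved, why = score_with_explanation(
    {"income_k": 40, "debt_k": 30, "years_employed": 3}
)
print(approved, why)
```

Here the rejected applicant can see that debt dominated the outcome, which is exactly the information the black-box hiring example above withholds. For non-linear models the same goal requires heavier machinery (attribution methods), but the obligation is the same.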

More rules and laws. Again, a primitive and boring solution. It can also kill innovation and research, reduce motivation—but it’s difficult to build an ethical viewpoint any other way. If we already struggle with the products and technologies we use today, the ethical wall we will hit with far more advanced technologies will be even harder.

Supporting smaller companies. This solution is also primitive, but if the resources of small companies and tech producers are supported—even by force—the distribution of technological income and wealth becomes more democratic, and many problems become easier to prevent. Especially regarding the environment, this is absolutely necessary. Technologies related to climate change don’t get as much support because they don’t generate as much money as others—even though they may be the most important technologies of all.

Conclusion

Data can save lives, but it can also create a huge void in ethics and our future. Clearly defining ethical codes will positively affect not only consumers, but also producers, the environment, and the future of humanity.

Photo by Kacper Brezdeń on Unsplash

Bonus

In the bonus section, I will give examples of the misuse of data-driven decisions. As you’ll see, interpretation of data matters as much as data itself—and then the decisions made based on that interpretation. Sometimes we can push errors close to zero with algorithms; sometimes it’s the opposite. That’s why privacy, transparency, and regulations are crucial—along with the right interaction between human and machine.

The content of the bonus section is copied and pasted from ChatGPT.

Extreme Examples of Misusing Data-Driven Decisions

1. The Cambridge Analytica Scandal (2016)

What Happened: Cambridge Analytica collected the personal data of millions of Facebook users without their consent. This data was used to build psychological profiles of voters and influence political campaigns. These campaigns included the 2016 U.S. presidential election and the Brexit referendum.

What Went Wrong: The firm abused data without user permission and used it to manipulate public opinion. The algorithm used people’s social media activity to steer their political views.

Ethical Failures:

  • No explicit consent from users.
  • Using data for manipulation rather than informing.
  • Amplifying polarization and misinformation.

Consequences: Massive public backlash over the privacy violations, a global scandal for Facebook, and renewed momentum behind data privacy regulation such as the GDPR in Europe.

2. Amazon’s Sexist Hiring Algorithm (2018)

What Happened: Amazon developed an AI-driven hiring tool to evaluate applications for technical roles. However, the algorithm systematically discriminated against women, preferring resumes that contained male-associated terms.

What Went Wrong: Because it was trained on past resumes from a male-dominated tech industry, the algorithm began favoring men and reinforcing gender discrimination.

Ethical Failures:

  • Unfair outcomes caused by bias in the training data.
  • Lack of transparency in how decisions were made.
  • No bias audits before deployment.

Consequences: The tool was scrapped after the bias was discovered. The incident fueled debate about AI fairness and how systemic bias can become embedded in “neutral-looking” algorithms.

3. Microsoft’s Tay Chatbot (2016)

What Happened: Microsoft launched an AI chatbot on Twitter called Tay, intended to learn from and mimic human conversation. Malicious users taught Tay racist, sexist, and abusive content. As a result, Tay began producing hateful and insulting tweets.

What Went Wrong: The AI was vulnerable to toxic inputs. With no safeguards to prevent abuse, harmful content spread quickly.

Ethical Failures:

  • Failure to anticipate malicious user behavior.
  • No content moderation to prevent harmful outputs.
  • No ethical guidelines for learning from social interactions.

Consequences: Tay was taken offline within 24 hours, demonstrating how AI systems in uncontrolled environments can cause harm without safety measures.

4. Apple Card Gender Discrimination (2019)

What Happened: Apple Card, introduced by Apple and Goldman Sachs, was accused of gender discrimination. Women reportedly received lower credit limits than men despite having equal or better financial histories. Apple co-founder Steve Wozniak publicly shared the difference between his and his wife’s credit limits.

What Went Wrong: The algorithm relied on biased data that evaluated men as higher credit value than women. Lack of transparency made it difficult to understand how the bias emerged.

Ethical Failures:

  • Gender discrimination in credit assessments.
  • Lack of transparency in decision processes.
  • No fairness checks before deployment.

Consequences: Regulators launched investigations, intensifying debates about algorithmic bias in financial services and calls for greater accountability in AI systems.

5. Predictive Policing System in Chicago (2013)

What Happened: The Chicago Police Department used a predictive policing algorithm to identify individuals likely to commit violent crimes. It created a “heat list,” flagging hundreds of people as high risk—many of whom had no criminal history.

What Went Wrong: The algorithm, based on historical crime data, disproportionately targeted Black communities. Individuals who had committed no crimes faced increased policing simply due to neighborhood or demographics.

Ethical Failures:

  • Perpetuating racial bias embedded in historical data.
  • Violating personal rights through profiling.
  • Non-transparent risk factor determination.

Impact: Following public and civil rights backlash, predictive policing systems were heavily criticized, and debate intensified about using AI in high-stakes decisions like policing.

6. Uber’s Greyball Program (2017)

What Happened: Uber used a tool called Greyball to evade regulators and law enforcement. It identified officials trying to catch Uber drivers and showed them fake versions of the app to block real rides.

What Went Wrong: The tool was designed to deliberately deceive authorities, violating laws and regulations. Uber misused technology for profit, undermining transparency and ethics.

Ethical Failures:

  • Deliberately misleading regulators and officials.
  • Misusing technology to bypass laws.
  • Violating transparency and fairness principles.

Impact: Uber faced legal consequences and reputational damage. The Greyball scandal raised major concerns about the unchecked power of tech companies.

7. Boeing 737 Max Software Failure (2018–2019)

What Happened: The Maneuvering Characteristics Augmentation System (MCAS) on Boeing’s 737 Max was designed to reduce stall risk. But when it received incorrect sensor data, it forced the nose down. This software failure contributed to two crashes: Lion Air Flight 610 (2018) and Ethiopian Airlines Flight 302 (2019).

What Went Wrong: Automatic interventions based on faulty sensor data made it difficult for pilots to control the aircraft. Boeing released the system without sufficient pilot training and without clearly communicating how it worked.

Ethical Failures:

  • Insufficient pilot training and disclosure.
  • Prioritizing commercial interests over safety.
  • Inadequate testing and oversight.

Impact: 346 people died across the two crashes. Boeing faced global criticism, and the 737 Max fleet was grounded worldwide for nearly two years, highlighting the fragile balance between technology and human safety.

8. Therac-25 Radiotherapy Machine (1985–1987)

What Happened: Therac-25 was a radiation therapy machine used to treat cancer patients. Software errors delivered massive overdoses of radiation to some patients, causing severe injuries and deaths.

What Went Wrong: Software faults bypassed safety protocols. Operators were unaware of the vulnerabilities, and errors recurred. The lack of manual safety checks led to lethal outcomes.

Ethical Failures:

  • Inadequate safety testing.
  • Lack of user training and system transparency.
  • Failure to respond promptly once safety issues emerged.

Impact: Six patients were overdosed and either died or suffered severe injury. The incident drove stricter oversight of software safety in medical devices.

9. Launch of Vioxx (1999–2004)

What Happened: Vioxx, a painkiller produced by Merck, was linked to thousands of heart attacks and strokes soon after launch. Clinical data indicated these risks, but Merck concealed the information.

What Went Wrong: Despite knowing the risks, the company did not disclose them publicly, and the drug remained in use. Even as analysis showed increased cardiovascular risk with long-term use, the information wasn’t released in time.

Ethical Failures:

  • Withholding critical safety data.
  • Putting commercial interests above public health.
  • Insufficient monitoring and truthful communication.

Impact: Vioxx caused more than 80,000 heart attacks worldwide and thousands of deaths. Merck paid billions in settlements, underscoring the need for transparency in pharmaceutical data.

10. Ford Pinto Explosion Scandal (1970s)

What Happened: In Ford’s Pinto, the fuel tank was placed just behind the rear axle, so rear-end collisions could puncture it and cause fires or explosions. Ford knew about the flaw but chose not to fix it because a recall was expensive.

What Went Wrong: Ford calculated that paying compensation for deaths would be cheaper than a recall—an explicit ethical violation where profit outweighed human life.

Ethical Failures:

  • Ignoring human safety for commercial gain.
  • Failing to act on a known hazard.
  • Lack of transparency, preventing informed user decisions.

Impact: Many people died or were injured in Pinto crashes. The scandal shaped debates on product safety and led to stricter automotive safety regulations.

11. Texas Power Grid Collapse (2021)

What Happened: During extreme cold weather in Texas in 2021, the state’s power grid collapsed. Providers made poor forecasts based on weather models and historical data, failing to increase capacity sufficiently. Millions were without power for days.

What Went Wrong: Grid management miscalculated demand and the severity of the cold wave. Data-driven decisions neglected preparedness for extreme conditions, leading to deadly outcomes.

Ethical Failures:

  • Making decisions based on faulty data forecasts.
  • Failing to take adequate precautions for emergencies.
  • Neglecting responsibility to ensure public safety and service continuity.

Impact: Power outages contributed to hundreds of deaths and prevented many people from accessing basic needs. The event highlighted the need for more resilient and comprehensive infrastructure planning.
