How to Navigate the Ethical Dilemmas of AI
Artificial intelligence (AI) has transformed the business world, and people are increasingly asking how its spread affects our lives. The concern is not only personal or societal but also commercial, especially where AI can damage a company's goodwill or reputation. Companies do not want to be caught up in an AI ethics scandal.
A well-known example is Amazon, which faced a severe backlash for selling its Rekognition facial-recognition software to law enforcement agencies. After the controversy, Amazon suspended police use of the technology until a proper regulatory framework could be created.
AI and Its Ethical Dilemmas
Decisions made by AI systems carry various biases. These arise either because developers failed to notice them during development or because the training data does not correctly represent the whole population. Biased data can lead a system to ignore minorities or other segments of the population.
A well-known example of such bias is Amazon's AI recruiting tool, which systematically favored male candidates and penalized female ones; reportedly, nearly 60% of the candidates it recommended were male. Amazon discontinued the tool because it discriminated against women, which disrupted the company's hiring process. Businesses must therefore adopt ethical practices and do their best to keep biases out of their data so that AI algorithms produce more trustworthy results. It is impossible to remove every bias, but we can try to be less subjective.
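One practical way to catch this kind of bias before deployment is to compare selection rates across groups (a demographic-parity check). Below is a minimal sketch of that idea; the data and group labels are hypothetical, invented purely for illustration:

```python
# A minimal sketch of a pre-deployment bias audit: compare the rate at
# which a model selects candidates from each group. All data below is
# hypothetical and exists only to illustrate the check.

def selection_rates(records):
    """Return the fraction of candidates selected, per group."""
    totals, selected = {}, {}
    for group, was_selected in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical screening outcomes: (group, selected_by_model)
outcomes = [
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

rates = selection_rates(outcomes)
# A large gap between groups is a red flag worth auditing further.
print(rates)  # {'male': 0.75, 'female': 0.25}
```

A check like this does not prove a model is fair, but a large gap between groups is exactly the kind of signal that, had it been acted on, would have flagged the recruiting tool early.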
Things that function automatically without human interaction are called autonomous things (AuT); they include robots, drones, and self-driving cars. Most of the ethical issues arise from self-driving cars and automated drones. The market for autonomous vehicles is expected to reach $557 billion by 2026, up from a value of about $54 billion in 2019.
These vehicles pose many ethical risks, and their liability and accountability are questionable. A relevant example of this moral dilemma is the 2018 incident in which a self-driving Uber struck a pedestrian, who later died of her injuries, the first fatal crash of its kind. After extensive investigation by the NTSB and the Arizona police, the company was found not liable for the death; investigators attributed the crash to a distracted safety driver and classified it as "completely avoidable."
Another dilemma concerns lethal autonomous weapons (LAWs), part of the AI arms race. LAWs engage whatever targets they are programmed to, based on fed-in constraints and descriptions, and whether their use is acceptable remains hotly debated. According to UN discussions in 2018, South Korea, Russia, and the U.S. favor LAWs, while renowned figures such as Elon Musk and Stephen Hawking signed an open letter warning about the threats posed by their use.
One of the most significant AI dilemmas is rising unemployment due to automation. In a CNBC survey, 27% of U.S. citizens said they think AI will put their jobs at risk within five years; among people aged 18-24, the figure was 37%. These are not overwhelming numbers, but they reflect expectations for the coming five years. McKinsey likewise estimates that AI and robots could replace 30% of human labor by 2030. Both the public's expectations and McKinsey's estimate show that the fear of an AI takeover and job loss is significant.
Ways to Navigate the Dilemmas
The following are ways to navigate the ethical dilemmas of AI:
- AI must be transparent. Achieving transparency requires initiatives such as open AI research, so the public understands what lies behind the algorithms and data. Google has contributed to such open research, and AI blogs can also spread knowledge and information about the field.
- Developers and businesses that build and use AI technology must be able to explain how an algorithm reaches its conclusions. We need to know which factors lead it to a particular decision, and several techniques exist for explaining them.
- The AI community and industry must be inclusive and embrace diversity, which reduces the chances of bias while also helping to curb discrimination and unemployment.
- AI systems are being developed in many countries, yet no adequate legal framework exists. We must therefore align and modernize legal frameworks so AI can function better and deliver better results.
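To make the explainability point above concrete, here is a minimal sketch of one simple technique: for a linear scoring model, break a decision down into per-feature contributions so a human can see why the model decided as it did. The model weights, threshold, and applicant features are hypothetical, chosen only for illustration:

```python
# A minimal sketch of decision explanation for a linear scoring model.
# The weights, threshold, and applicant data below are hypothetical.

WEIGHTS = {"years_experience": 0.6, "test_score": 0.3, "referral": 0.1}
THRESHOLD = 2.0  # hypothetical cut-off for shortlisting a candidate

def explain(features):
    """Return the score, the decision, and each feature's contribution."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "shortlist" if score >= THRESHOLD else "reject"
    return score, decision, contributions

applicant = {"years_experience": 3, "test_score": 1.5, "referral": 1}
score, decision, why = explain(applicant)
print(decision, why)
# shortlist {'years_experience': 1.8, 'test_score': 0.45, 'referral': 0.1}
```

Real systems use richer techniques (such as feature-attribution methods for non-linear models), but the goal is the same: every automated conclusion should come with a human-readable account of the factors behind it.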
Navigating these ethical dilemmas will help eliminate the doubts people hold about the societal, environmental, and business impact AI can have.