Improved algorithms, greater computing power and the availability of large amounts of data have transformed society and businesses considerably in the past few years. Among these innovations, Artificial Intelligence (AI) describes the capacity of computers to perform tasks. Such a task may involve generalizing, reviewing and learning from the past, discerning meaning, or finding relationships and patterns in order to respond to the changing dynamics of the marketplace.
Challenges and issues raised by AI
Innovation at the cost of human life cannot be accepted by people around the world, and AI must be limited to those areas where it is needed most. The role of AI in redesigning automated systems in sectors such as the automobile industry and healthcare is underlined optimistically by Facebook founder Mark Zuckerberg, while at the same time Elon Musk deems it a fearful innovation. AI can behave unpredictably or be misused, and so harm people and organizations. This raises questions about the role of the ethics, law and technology that govern AI systems more than ever before.
It can be argued that digital revolutions transform people's views regarding behaviour, priorities and values, which makes AI governance a fundamental issue. Indeed, scientists are trying to build mechanisms for rewarding and social loafing into AI systems. If next-generation systems come to shape such incidents, it can be asked whether such machines can be trained towards legal and ethical considerations and the value of human life.
What happened to Tay launched by Microsoft in 2016?
Technical artefacts, according to theorists, have become increasingly capable as well as adaptable while acting in a human-like manner. This makes AI machines unpredictable: they behave not merely as tools performing pre-defined functions, but develop their own ways of acting in a self-directed manner. In 2016 Microsoft launched an AI named Tay that was endowed with the ability to learn. This robot-like artefact shaped its world view through online interactions with people and expressed itself based on the conversational behaviour it observed.
The experience proved disastrous for the company: Microsoft had to deactivate Tay in less than 24 hours after the tool produced some worrying output. Tay was supposed to interact with human users on Twitter to learn their conversation patterns, but the chatbot generated inappropriate comments, including antisemitic, sexist and racist remarks. This shows how AI implementation raises regulatory and ethical challenges, since it creates the possibility of results other than the intended ones, or entirely unexpected ones. It may also cause harm in other ways, such as the discriminatory offences generated by Tay.
Asimov: A classic example of AI failure
A classic example is provided by Asimov's laws, which reflect design-level issues raised by robots as technology or tools. One of the laws states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm”. The three laws give rise to moral dilemmas of various kinds, revealing what happens when a set of logical rules is applied and fails when interpreted by other forms of thinking. Hence, giving autonomy to AI machines will not only bring technological advancements but also create a series of legal and moral implications.
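The failure mode of such a rigid rule set can be sketched in a few lines. The following is an illustrative toy model, not a real robotics system: the scenario, the `predicted_harm_to_human` field and the action names are all hypothetical simplifications, chosen only to show how a logically sound rule can forbid every available option.

```python
def first_law_permits(action):
    """Asimov's First Law as a predicate: an action is forbidden
    whenever it is predicted to lead to harm to a human, whether
    through action or through inaction."""
    return not action["predicted_harm_to_human"]

# A trolley-style dilemma: every available action, including doing
# nothing, is predicted to harm someone.
actions = [
    {"name": "swerve",      "predicted_harm_to_human": True},
    {"name": "stay_course", "predicted_harm_to_human": True},  # inaction
]

allowed = [a["name"] for a in actions if first_law_permits(a)]
print(allowed)  # [] -- the rule set forbids every option: a deadlock
```

The empty result is the point: a rule set that is perfectly consistent in the abstract offers no guidance in situations where all outcomes violate it, which is exactly the kind of dilemma the text describes.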
Autonomous vehicles debate
Autonomous vehicles are an example of how a rigid set of design values can lead to an ethical mess. Should an autonomous vehicle facing an unavoidable collision sacrifice the person sitting in the car, or the people outside it? In one survey, 34.2% of respondents agreed to self-sacrifice but said they would never buy an autonomous vehicle, even though the probability of a crash is very small and such vehicles would increase the number of lives saved. The trade-off between an individual life and the good of other people thus generates a complex dilemma and raises a further question: who decides who lives and who suffers? The answer remains unclear to scholars and autonomous car manufacturers alike.
What about IoT?
The new paradigm based on the Internet of Things, Web 3.0 and AI depends on constant interaction between sensors, intelligent tools and people, generating massive amounts of data, cloud storage and processing that are changing people's day-to-day lives. At the same time, the advancements made in AI and machine learning, paired with the computing power to execute their algorithms, have led to breakthroughs in the scientific and commercial realms.
These technological artefacts, however, are creating legal and social issues related to data mining and accessibility, privacy, safety, transparency and algorithmic bias. For instance, owing to their high accuracy and predictive power, AI systems can infer the likelihood of depression before symptoms appear by analysing a person's social media activity. A system might predict the likelihood that a person is pregnant, or select a more aggressive candidate as a better fit for the corporate world. Such hidden bias proves to be a substantive ethical challenge, rather than a reflection of a corporate culture working to lessen existing biases.
What needs to be done to avoid challenges?
The existing legal and ethical challenges raised by the use of AI require that AI systems be built with an ethical code of conduct instilled, to eliminate bias from their decision-making. Before deciding how AI can be made ethical, organizations must think about ways to make their own decisions more ethical. Bad actors can become a real threat to national security as well as to society, which calls for a course of action that avoids long-lasting consequences. Research therefore suggests that morality must never be outsourced to AI machines, even when they are held algorithmically accountable for it. By erasing hidden biases, AI can lead to a more transparent, healthy and trustworthy corporate culture. Algorithm security reviews and audits can further correct or prevent black-box algorithms. Additionally, real-time simulations in a controlled environment will generate beneficial designs for legal and ethical AI. Developing human-friendly algorithms is another way to reach a better understanding of AI systems without ceding decision-making power to them, and to discover potential benefits and risks while preventing unexpected crises.
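One concrete form such an algorithm audit can take is a disparate-impact check over a model's decisions. The sketch below is a minimal, hypothetical example: the approval data, the group labels and the 0.8 threshold (the "four-fifths rule" used in US employment-discrimination guidance) are illustrative assumptions, not results from any real system.

```python
def selection_rate(decisions):
    """Fraction of favourable (positive) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_a, decisions_b):
    """Ratio of group A's selection rate to group B's.
    Values well below 1.0 suggest group B is favoured."""
    return selection_rate(decisions_a) / selection_rate(decisions_b)

# Hypothetical audit log: 1 = approved, 0 = rejected, split by a
# sensitive attribute such as gender or ethnicity.
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # 20% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]  # 70% approved

ratio = disparate_impact(group_a, group_b)
# The four-fifths rule flags ratios below 0.8 as potential bias.
if ratio < 0.8:
    print(f"Potential disparate impact: ratio = {ratio:.2f}")
```

An audit like this does not explain *why* a black-box model discriminates, but it makes the hidden bias measurable, which is the precondition for the correction and accountability the text calls for.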
Since AI is created by humankind, it can be expected that such a creation will inherit humankind's errors and produce adverse effects rather than solving existing real-world issues. Organizations that operate, develop or sell AI without permission from a certifying agency must be held strictly liable for harm caused by the AI. Liability could be made joint and several, permitting a plaintiff to recover the full amount of damages caused by an uncertified algorithmic operation.
The government or a certifying agency should develop a set of rules so that pre-certification and AI testing can be carried out legally. These rules will not only require AI developers to gather data and run certain tests in a secure environment, but also keep certifying agencies well informed about an AI's design intentions. Taking into account the issues AI can cause, officials must have the power to limit the scope of certification; for instance, an AI machine may be certified only for use in a pre-defined setting or in combination with other safety protocols.
Artificial Intelligence will continue to grow and to disrupt society and organizations in unimaginable ways. Over time, it has become more challenging to keep pace with the rapid development of AI machines and with how they are deployed. There is hardly any field that AI has not affected both legally and ethically, and thus present research on AI and machine learning needs to study further the extent of the damage AI can cause across different industrial sectors.