Can AI be a moral agent?
Date: 02-Sep-2023

James Moor of Dartmouth College said, “Programming a computer to be ethical is much more difficult than programming a computer to play chess. Chess is a simple domain with well-defined legal moves. Ethics operates in a complex domain with some ill-defined legal moves.” The modern world is now suffused with advances in Artificial Intelligence, including its EdTech applications. Yet no matter how inventive machine learning research becomes, certain key limitations and ethical questions about humanism remain. Moral standards are subjective and vary across cultures and over time. Machine learning is built on algorithms, but can machines be moral agents, or artificial moral agents (AMAs)?

The trouble with the current development of AI is that it cannot impart the skills necessary to build lasting interpersonal relationships. Nor can it enhance children’s social and emotional development or aid in cultivating soft skills. It is unable to teach our children the importance of a strong work ethic or the principles of self-motivation and determination, and it falls short of imparting necessary practical knowledge to students. A machine database cannot build confidence in a shy child in the classroom or resolve human anxieties. Machine algorithms are unable to gauge how sympathetic or empathetic the other person is.
 
AI is a database tool; it cannot replace humans in instilling moral principles in our children. Love and ethics are two essential lessons that can be taught and learned only through human interaction. Moral learning is about helping children develop the values already rooted in them, not about making predictions or decisions from available data. We can say that “moral education needs to be about developing first-class humans, not second-class robots”. Moral values are not so much taught as caught through human experience. AI does not adhere to human ideals; what matters to artificial beings may not align with what people value. How can we develop robots that always act according to our deepest values if humans cannot even agree on principles across cultures? For Aristotle, “morality is something about us, not something outside us to which we must conform”. Morality, however, is a highly contentious topic: although philosophers and theologians have proposed numerous moral theories, there is still disagreement, after centuries of debate, about which one is correct.
 
Only once humans have a firm grasp of values can they impart those values to technology. People cannot objectively encode morality in measurable parameters, so teaching morality to machines is difficult. In fact, it is questionable whether people in general have a solid grasp of morality at all. Face-to-face interactions tend to decrease when artificial intelligence (AI) is used, and judgements become part of more complex processes that humans may not fully understand. The moral distance technology establishes may prevent the empathy and sympathy that arise from seeing another person’s face. In the future there will be no such thing as good AI or bad AI; how we use it will be left to us. There will only be good humans and immoral humans. AI will give us the ability to transform not only our surroundings but also ourselves. It challenges us to sharpen our ability to recognise and understand our prejudices and biases, as well as the society around us. We should therefore be conscious of our own thoughts and experiences before the algorithms decide for us.