Pervasive Impact of AI and Need for Ethics
Traditionally, AI has been created and used by IT and Internet companies. For example, Google has always used AI for its search engine, Facebook has been using AI for targeted advertising and photo tagging, and Microsoft and Apple use AI to power their digital assistants.

However, the application of AI now extends far beyond the IT sector in the narrow sense. For example, experiments with self-driving cars raise societal concerns about safety; the use of drones and autonomous weapons that can kill without human intervention raises humanitarian questions about collateral damage; and decision-support systems for courtrooms and predictive policing systems that flag likely re-offenders have raised concerns over social and racial discrimination.
Moreover, AI, with its ever-increasing technological capabilities such as Deep Neural Networks and VSM, can now invade the privacy of ordinary citizens and read the emotions of individuals remotely using image recognition systems, without anyone ever knowing. Such emotional data is used to design and target campaigns and to make predictions in marketing, politics, and other domains.
Technology and Society: Broader Need to Discuss Technological Ethics and Societal Problems

The benefits of AI may be numerous. For example, the application of image recognition in healthcare may help in the early diagnosis of critical ailments such as Alzheimer's disease and cancer. But the same technology raises ethical questions when applied to an everyday application such as a self-driving car that must decide between protecting a child who steps in front of it and swerving into a structure, thereby injuring its passengers. What should it choose? Should the use of autonomous lethal weapons be allowed at all, and if so, what should be the decision constraints on the AI?
AI’s Dual Impact: Two Sides of the Same Coin, Dr Jekyll and Mr Hyde

While there are multiple benefits of AI, as with any technology its negative fallout has far-reaching and unfathomable implications, much like nuclear technology. Nuclear technology is beneficial when used for peaceful civil applications, such as nuclear power, and catastrophic when used for defence applications, such as the atom bomb, which could potentially annihilate mankind.
While the intention behind developing AI, as with any technology, has been good, driven by the spirit of scientific enquiry, innovation, and the improvement of society, the ethical problems faced by AI systems are unintended consequences of the technology, such as bias in the data and models it uses, since the power foundations of AI are Big Data, super-efficient software algorithms, and superfast computing hardware. The bias in the data, and the trajectory of the evolving models that AI systems use, may not always be transparent, and the output of these systems may not be consistent with human and societal values. This necessitates deliberately injecting a Code of Ethics and Standards to regulate the design and use of AI-based systems.
Conclusion

Mankind has entered an era of Collaborative Intelligence with the invention of Artificial Intelligence (AI). It heralds the second Machine Revolution, after the Industrial Revolution triggered by the invention of the steam engine. The Industrial era was characterized by Machines supporting Man; the second Machine Revolution is characterized by Machines competing with Man, with the potential threat of eliminating Human Intelligence. To mitigate this threat, the limits of AI have to be set by humans through responsible design and use of AI that benefits the quality of life of mankind and society in a holistic sense. The question of "improvement for whom" (for the doctor or the patient, the judge or the accused) becomes central to the discussion of AI ethics, as it is linked to power, and to a tremendous concentration of power if left in the hands of a few.