With the increased use of machine learning and artificial intelligence in the real world, there is a growing need for "conscious" designs that incorporate ethical principles and moral values such as fairness, respect, transparency, and accountability. While the feasibility of implementing values in "artificial" intelligence has been debated at length, the increasing human-machine interaction in the latest machine learning and AI systems calls for revisiting whether future AI systems must, as part of their design, embody values within their socio-technical systems.
As we interact in a world of self-driving cars, reusable rockets, and autonomous machines all around us, the need to ensure error-free, non-life-threatening machine learning algorithms is felt increasingly across domains. As a result, much research now focuses on building AI systems that go beyond replicating human "gray matter" to navigating the "gray areas" of ethics, embodying value systems through intuitive correlations.
Easy as it may sound, this requires alignment with more complex, real-world societal norms driven by morals, laws, ethics, and, most importantly, human biases. Where do we go from here, and how can we develop machines that incorporate ethics? Tune in for more on this topic.