The Evolution and Ethics of AI

By Kim Nilsson, CEO 

It’s been 20 years since IBM’s Deep Blue beat chess grandmaster Garry Kasparov; last year Google DeepMind’s AlphaGo beat human players at Go, a far more complex game to master; and two weeks ago Libratus, an artificial intelligence (AI), beat humans at poker (harder still, since the machine had to learn how to bluff). AI is rapidly impacting our day-to-day lives, from the 11,000 (and counting…) chatbots on Facebook to robot carers for the elderly.

AI is an incredibly exciting field that offers the potential to bring mammoth developments to commercial, academic and public arenas, with a host of modern organisations having already adopted data science into their business in some way. The introduction and advancement of AI across so many fields has repeatedly raised debate about morality and how ethical our reliance on computers can be. For AI to be as effective and beneficial as possible, experts are calling for a greater understanding of its developments and capabilities, and for businesses and consumers alike to recognise both the potential and the vulnerabilities that AI evolution holds.

AI and Ethical Decision-Making

With such rapid developments in this field, the question of whether ethical consciousness can be programmed into AI remains at the forefront of many people’s thinking. Our hunger for appliances and machines that work independently of human control poses significant potential problems for human welfare, the security of society and law enforcement. If we develop cars that drive without human input, for example, and a child unexpectedly runs into the road, should the car be programmed to swerve around the child and potentially hit oncoming traffic, to hit the child, or to mount the pavement and potentially hit a pedestrian? At what point does our ambition for advanced AI compromise our safety, and how do we evolve the technology whilst maintaining ethical sense? As Vincent Conitzer is keen to highlight through his studies of this question:

“humans take great pride in being the only creatures who make moral judgements”

As such, we present ourselves with an immediate dilemma: do we compromise our monopoly on morality in favour of technological development, or do we hold back our AI advancements to protect our moral hierarchy?

As it stands, the majority of contemporary AI systems base their morality on consequentialist reasoning; that is, an AI makes moral choices by weighing potential outcomes. As AI advances, it becomes entirely possible for these systems to adopt a more human moral approach and assess additional factors before making a decision. An AI might then consider not only the consequences of an action but also its effects on others, environmental concerns and privacy. When that time comes, we humans may analyse our moral decisions against those of an AI and, in effect, learn from and evolve our own thinking based on the computerised system’s. Here lies a moral concern in itself: if we reach the stage of learning how to improve our moral decision-making from an AI, surely we have already relinquished our hierarchy in this field and become less capable than the AI system?
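
For readers who like to see the distinction concretely, here is a deliberately simplified sketch of the contrast described above: a chooser that scores only the immediate outcome versus one that also weighs effects on others and privacy. Every function name, dimension and weight here is hypothetical, chosen purely for illustration; real systems are far more complex.

```python
# A minimal, purely illustrative sketch of consequence-only versus
# multi-criteria moral decision rules. All names and numbers are hypothetical.

from typing import Callable, Dict, List

Action = str
Outcome = Dict[str, float]  # hypothetical scores per ethical dimension (lower = better)

def consequentialist_choice(actions: List[Action],
                            predict: Callable[[Action], Outcome]) -> Action:
    """Pick the action whose predicted outcome causes the least immediate harm."""
    return min(actions, key=lambda a: predict(a)["harm"])

def multi_criteria_choice(actions: List[Action],
                          predict: Callable[[Action], Outcome],
                          weights: Dict[str, float]) -> Action:
    """Pick the action with the lowest weighted score across several
    ethical dimensions, not just the immediate outcome."""
    def score(a: Action) -> float:
        outcome = predict(a)
        return sum(weights[k] * outcome.get(k, 0.0) for k in weights)
    return min(actions, key=score)

# Hypothetical usage: the same predictions, two different moral policies.
predictions = {
    "action_a": {"harm": 0.2, "effect_on_others": 0.8, "privacy_risk": 0.9},
    "action_b": {"harm": 0.3, "effect_on_others": 0.1, "privacy_risk": 0.1},
}
predict = lambda a: predictions[a]
actions = list(predictions)

print(consequentialist_choice(actions, predict))  # -> action_a (least immediate harm)
print(multi_criteria_choice(
    actions, predict,
    weights={"harm": 1.0, "effect_on_others": 1.0, "privacy_risk": 1.0},
))  # -> action_b (better once wider effects are weighed)
```

The point of the sketch is simply that the two rules can disagree: once wider effects are given weight, the “least harmful” outcome is no longer automatically the moral choice.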

AI’s Place In Society

Undoubtedly, AI has a very real and relevant place in modern society. Whether in the thousands of chatbots on social media, in robotic care for the elderly or in interactive gaming opponents, AI has brought support, entertainment and pioneering possibilities to many avenues of modern living. So successful has AI been in these fields that there is a very real economic concern about humans being made redundant in a number of jobs as a result of technological developments. Bill Gates recently reinforced the concern that people could soon lose their value in several industries to AI workers, and suggested that, to combat this, robotic staff should be subject to a tax.

Ethically, we must also be aware of the roles we ask an AI to complete and the potential bias built into its design and manufacture. For example, if we come to rely on an AI to choose and administer our medication, how can we be certain that its choices are free from bias and not linked to economic gains for specific manufacturers? Should an AI make a mistake in its choice, can it be held responsible for the error? If not, is the recipient to blame, or the designer or engineer? Where will ethical responsibility lie when we advance AI so swiftly that computers carry out potentially lifesaving tasks without human oversight or control?

So important are these considerations that dedicated research and governance groups have been established to analyse and secure human safety in the face of intense and rapid AI evolution. To ensure that AI developments do not outpace our ability to keep our societies, public systems and general safety under control, groups such as the Ethics and Governance of Artificial Intelligence Fund and the Partnership on AI strive to review and maintain ethical frameworks that allow AI to progress without compromising human wellbeing, safety and life. Both groups’ work encourages a wider understanding of AI and seeks to establish best practices for the technology across a host of arenas. Through openness, understanding and knowledge, groups such as these hope to ensure that AI and humans can coexist and evolve safely, altruistically and happily.

The Ethical Evolution

Along with marketing experts such as Ogilvy & Mather, these groups push for openness in the discussion of AI, a sound understanding for all, and the adoption of responsibility by businesses that implement various forms of AI in their systems. In its 2017 Digital Trends forecast, Ogilvy & Mather set out the need for businesses to take responsibility for ethical decision-making when they incorporate AI into their plans. It expects every business to keep basic human ethical principles at the core of its practices and to ensure that these ethics are prioritised over the economic potential that AI offers.

With so many experts currently highlighting the need to limit the capabilities of their AI systems for fear that the technology will develop more quickly than we can keep up with, there is an arguably over-zealous fear that we will soon become subservient to machines. Although the potential for mammoth evolutionary change is very real, AI remains under human control, and as long as we are competent enough to recognise the vulnerabilities, we remain powerful enough to harness the potential and maintain our place on an ethical pedestal.
