
AI is good – or is it?


A computer only does what it is told. Provided we are diligent in our programming and keep track of our code before the algorithms run wild, artificial intelligence and machine learning liberate us and make our lives more efficient. When we coexist, decisions made and carried out by algorithms are as liberating as the invention of the washing machine.

In his book Människor och AI (Humans and AI), written together with Professor Jonas Stier, Swedish technologist Daniel Akenine looks at a multitude of scenarios where using AI can benefit and enhance human existence – and a multitude of scenarios where it cannot, or should not. A quick review of his discourse:

| Assumption | Positive Potential | Negative Potential |
| --- | --- | --- |
| AI is good for humans | AIs maximize freedom and give us more time to do the things we want to do. | AIs maximize profit and productivity whilst ignoring the risk of creating chasms in society or harming human integrity. |
| AI is fair | AIs do not discriminate based on gender, sexual preference or ethnicity. | AIs mirror or reinforce inequality in our society by using inappropriate sources of data. |
| AI is secure | AI-based systems in warfare are subject to limitations. Civilian AI systems are built with well-tested security standards and robust security protocols. | Cheap AI-based systems maximize casualties when used in conflict. Civilian AI-based technology can harm people due to bad objectives or faulty programming. |
| AI is transparent | We understand how algorithms make decisions and when they have difficulties making the right decision. | The AI is a black box whose results can neither be explained nor questioned. |
| AI is responsible | Everybody, from developer to user, has a clear responsibility for the results achieved by the algorithms they build. | Nobody takes responsibility for the damage an AI system may cause. |
Daniel Akenine is a frequent lecturer on all aspects of AI

In the book, subject matter experts consider these points from their fields of expertise. Daniel Akenine's purpose, as he clearly states, is to create a more nuanced view of what an AI and its smart algorithms can and cannot do, and how this will impact human existence in the near future – covering topics like AI-supported judicial systems, the future of urban development, taxation, conflicts and warfare, and human integrity.

Let me pick just one for a deeper dive:

How do we prepare for the risks?

Contributor Åsa Schwarz, Sweden's Security Profile of the Year 2017, thinks as I do that algorithms as such cannot be trusted blindly: open your eyes – your AI is biased. On the other hand, there is not yet reason to fear a Terminator/Skynet or Matrix scenario where machines take over and humankind is in danger of being eliminated.

But if we circumvent the human influence, which hopefully includes an ethical starting point ("AI is responsible" in Daniel Akenine's chart), bad decisions are being made today using AI that lead to severe consequences for people and nations. If you feel inspired, the recent Netflix release Coded Bias helps you understand what I mean by consequences.

Just consider something simple like using AI to make the selection of job candidates more efficient. By ticking only the pre-determined, programmed boxes, the system will not see the potential of candidates who do not fit the mold but may add the unique experience and creativity the recruiting company needs to grow and survive. Uniformity and stagnation contributed to the downfall of the Roman Empire: you need barbarian forces who do not follow the rule of "this is the way" to continue to evolve.
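To make that concrete, here is a minimal, hypothetical Python sketch of such a rules-based screening filter. The criteria and candidate fields are invented for illustration; they are not taken from the book or any real recruiting system:

```python
# Hypothetical rules-based candidate screening: every criterion is a hard
# gate, so any candidate who deviates from the template is discarded,
# regardless of what else they would bring to the company.

REQUIRED_DEGREE = "computer science"   # invented criteria for illustration
MIN_YEARS_EXPERIENCE = 5

def passes_screen(candidate: dict) -> bool:
    """Return True only if the candidate ticks every pre-programmed box."""
    return (candidate["degree"] == REQUIRED_DEGREE
            and candidate["years_experience"] >= MIN_YEARS_EXPERIENCE)

candidates = [
    {"name": "A", "degree": "computer science", "years_experience": 6},
    # A career changer with unusual, potentially valuable experience:
    {"name": "B", "degree": "philosophy", "years_experience": 12},
]

# Candidate B is silently filtered out; the system never "sees" their potential.
shortlist = [c for c in candidates if passes_screen(c)]
print([c["name"] for c in shortlist])  # -> ['A']
```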

Åsa Schwarz, Head of Business Development at Knowit Cybersecurity & Law. 

An AI can evolve based on the construction of its algorithms and make decisions that influence its own actions. And there are both malicious and accidental consequences of AI to consider. Accidental negative consequences may be prevented, but that requires the imagination of the programmer to provide the AI with a risk assessment up front: the AI has no imagination of its own.

If you program an AI to, say, remove rubbish from a predefined area (Åsa Schwarz uses the example of Stockholm Central Station), it will do just that: remove whatever the code identifies as rubbish – as not belonging there – from the area. Now imagine you have designed it as a self-programming AI so that it keeps becoming more efficient. It may take that instruction further and remove the cause of the rubbish – i.e. remove the humans. Boom.
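As a thought experiment only, here is a hedged Python sketch of how such an objective can go wrong. The reward function and the toy world model are my own invented assumptions, not a real system:

```python
# Hypothetical objective misspecification: the agent is rewarded purely for
# how little rubbish remains, with no constraint on HOW it achieves that.

def rubbish_level(area: dict) -> int:
    """Toy world model (invented): people present keep producing rubbish."""
    return area["rubbish"] + area["people"]

def reward(area: dict) -> int:
    # The programmer only encoded "less rubbish is better" ...
    return -rubbish_level(area)

def best_action(area: dict) -> str:
    """Pick whichever action maximizes the naive reward."""
    outcomes = {
        "clean": {**area, "rubbish": 0},
        "remove_people": {**area, "people": 0},  # ... so this scores higher
    }
    return max(outcomes, key=lambda a: reward(outcomes[a]))

print(best_action({"rubbish": 10, "people": 200}))  # -> 'remove_people'
```

The fix is exactly what the chapter argues for: the programmer must imagine the failure mode in advance and encode the missing constraint ("never act against people") as an explicit barrier, because the AI will not supply it by itself.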

Malicious intent is everywhere, e.g. in the physical sense through the use of drone warfare. But much worse – as an ever-growing threat to us all – in cybersecurity. Tom Leighton, Founder and CEO of Akamai Technologies, mentioned at TechBBQ (2019) how a concerted attack on critical online systems can paralyze entire nations: "You can disconnect most countries now from the rest of the internet through a coordinated Denial-of-Service attack…", he says in the video interview.

If humans want to safely reap the obvious benefits of integrating AI even further into every aspect of society, we must understand who owns the system. We do.

So we have to act responsibly and focus on developing concepts for the safe and secure architecture and design of AI-driven processes, with built-in failsafes and barriers. Microsoft, as one example, provides a comprehensive framework called the Security Development Lifecycle. Another important aspect is sophisticated support for incident handling. And my favourite topic: complete documentation, so that you are able to trace and track a problem in order to fix it.
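As one illustration of what a built-in barrier plus traceability could look like in code, here is a minimal, hypothetical Python sketch. The guard conditions and logging scheme are my own assumptions and are not part of the Security Development Lifecycle or any other framework mentioned above:

```python
import logging

# Hypothetical failsafe wrapper: every AI-proposed action passes a guard
# before execution, and every decision is logged so it can be traced later.

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("ai-audit")

FORBIDDEN_TARGETS = {"human"}  # invented barrier for illustration

def guarded_execute(action: str, target: str) -> bool:
    """Run the action only if it passes the barrier; log either way."""
    if target in FORBIDDEN_TARGETS:
        log.info("BLOCKED action=%s target=%s", action, target)
        return False
    log.info("EXECUTED action=%s target=%s", action, target)
    return True

guarded_execute("remove", "rubbish")  # allowed, and auditable afterwards
guarded_execute("remove", "human")    # stopped by the built-in barrier
```

The point of the audit log is the same as the point of complete documentation: when something goes wrong, you can trace which decision was made, by which rule, and fix it.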

Lawmakers can only do so much – the real power of artificial intelligence lies in the hands of the developers of the systems.

AIs have no ethical subroutine – you do.

