This Sunday we are discussing: Is Artificial Intelligence a threat?
So by the time you receive this email, it will have gone through, and been processed by, a myriad of Artificial Intelligence systems. It is a marvel of the human brain that we can exchange a few million electrical impulses between us and make things happen in the real world, for example turning up on Sunday for the meeting. But deep down what we are afraid of is that our washing machine might one day decide to murder us in the middle of the night. This won't happen, but that does not mean the washing machine cannot kill us. In my short essay I try to argue why our washing machine is not a bosom friend and why we shouldn't think it is, useful as it may be.
In the meantime, Ruel has sent us the link to his essay, followed by news from Miguel about the Maths tertulia and from David about visits to the British Cemetery in Madrid.
----Ruel's essay
Hello Lawrence,
Here is the link to the essay I wrote:
https://ruelfpepa.wordpress.com/2015/04/15/is-artificial-intelligence-a-threat/
See you on Sunday.
Best,
Ruel
---------------From Miguel
Dear tertulia member,
In case it is of interest, we announce the attached lecture.
Kind regards,
Tertulia de Matemáticas
https://sites.google.com/site/tertuliadematematicas/
------ British Cemetery in Madrid visits
I am writing this note to provide the programme of Saturday morning guided visits to the British Cemetery, all of which take place at 11.00 a.m. - we meet at the Cemetery entrance:
Saturday, 25th April : the visit will be in Spanish
Saturday, 9th May : the visit will be in English
Saturday, 30th May : the visit will be in Spanish
If you would like a visit on a different date and you can form a group
of 8 persons or more, let me know at <butler_d_j@yahoo.es>
PLEASE TAKE NOTE OF OUR WEBSITE <http://www.britishcemeterymadrid.com/> for details of location.
David Butler
-------------Lawrence's essay
The problem with artificial intelligence is that it is tainted with an original sin. It is a child begotten of human pride: the belief that we can create an intelligent system that is perfect and certainly more intelligent than us. But although we cannot create a system with perfect intelligence, we can create a system that is more efficient than us, and certainly more loyal than human beings.
There are a number of limitations of artificial intelligence that confirm that AI cannot be a perfectly intelligent system. The first of these is precisely a limitation that we human beings have: our knowledge about the world is based on probability, because we are limited to inductive reasoning. It is not that we cannot say anything with certainty about the physical world, but that we cannot say a priori when we have reached an infallible level of certainty. In other words, we cannot say in advance, before trying to prove or refute it, whether a hypothesis about the world is certain; and how many empirical examples does it take to confirm or refute a hypothesis before it becomes certain knowledge? Thus, if AI systems rely on inductive reasoning to “learn” then they have the same learning weaknesses as us.
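A toy sketch of this point, using Laplace's rule of succession (my illustration, not the essay's): however many confirming observations an inductive learner accumulates, its estimated certainty approaches 1 but never reaches it.

# Laplace's rule of succession: after n confirming observations and no
# counterexamples, the estimated probability that the next observation
# also confirms the hypothesis is (n + 1) / (n + 2).

def probability_next_confirms(n_confirmations: int) -> float:
    return (n_confirmations + 1) / (n_confirmations + 2)

for n in (10, 1000, 1000000):
    print(n, probability_next_confirms(n))
# 10 -> 0.9167, 1000 -> 0.9990, 1000000 -> 0.999999
# No finite amount of evidence yields probability exactly 1:
# induction never delivers certainty.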
As a methodology, AI has the same empirical limitations as us. Of course, we have to distinguish here between AI methodology and AI application in, say, machines (machine intelligence). If 2+3=5 is an example of AI methodology, then 2€ in bank account A plus 3€ in bank account B making 5€ in my bank is an example (although not a very imaginative one) of applied AI. Thus, although two plus three will always be five, there is no reason to suppose a priori that the amount of money we have in our bank account is always accurate. This is the empirical curse of induction: we are condemned to always having to verify the future.
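The contrast can be put in a few lines of code (a minimal sketch of my own; the account names and the hidden fee are invented for illustration):

# Methodology: an abstract truth that needs no empirical verification.
assert 2 + 3 == 5  # always holds

# Application: recorded balances are empirical claims about the world,
# so the sum must be checked against reality.
records = {"account_a": 2, "account_b": 3}
expected_total = sum(records.values())  # 5, by arithmetic alone

def actual_total_at_bank() -> int:
    # Stand-in for querying the bank; here an unrecorded 1 euro fee
    # has quietly made our records wrong.
    return 4

if expected_total != actual_total_at_bank():
    print("The arithmetic is fine; the world still needs verifying.")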
The other important limitation of AI is that, without exception until now, AI is always applied to solve human problems, especially our interaction with the world we live in. And even then, the nature of the problem is doing things better, more quickly, more accurately, for longer and more repetitively than us, and in awkward situations that are uncomfortable for us. For example, accurately filtering out digital and white noise when we are having a conversation with someone else on our mobile phone.
But before we continue, and although this is very relevant to the previous paragraph, it is important to distinguish between AI as a methodology and AI as an application. As I have already indicated, we encounter AI in machines; thus our experience of AI, and our myths, are based on how machines interact with us. So, referring to our question, the “threat” part is of course a threat to us. We are afraid that some machine might become independent of human control and start doing things to harm us. For example, we don’t worry too much, if at all, that one day a machine with an AI operating system, say a machine in a coal-fired power station, will decide to stop power production because of global warming. We don’t make AI to be a moral agent; at least not yet. But we are afraid that “somehow” a machine in a power station might “decide” to send a power surge through the grid to fry the electronic chips in our PCs and domestic appliances.
So one of our first tasks is to get rid of the Hollywood impressions and myths about how AI can be a threat to us. The threat is more likely to be the problem of induction, if a system depends for its operation on data gathering or on a fixed database. And in particular, can an AI system handle or deal with such events as a “black swan” (see Nassim Taleb) or a “dragon-king” (see Didier Sornette)?
Basically, and I mean very basically, a black swan is an event that happens so rarely that we do not even consider it in our calculations; this is akin to the problem of induction, but with some important new twists. Dragon-kings (an even more complex idea) are events (mainly, but not exclusively, negative events) that we can predict a priori from the very nature of a given system. This sounds very similar to determinism, but here the point is that the event is due to the very nature of the system and it is predictable; for example, by using pneumatic tyres we can predict a priori that tyres will have punctures. Sornette has successfully predicted certain events, for example in the stock markets, using his methodology to predict not only what will happen but also when it will happen.
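To make the black swan point concrete, here is a toy sketch of my own (not from the essay or from Taleb): a purely frequency-based model treats anything absent from its data as impossible, rather than merely unobserved.

from collections import Counter

observations = ["white swan"] * 10000   # everything ever recorded
counts = Counter(observations)
total = sum(counts.values())

def estimated_probability(event: str) -> float:
    # Frequency-based estimate: unseen events get probability zero,
    # i.e. they are treated as impossible until they happen.
    return counts[event] / total

print(estimated_probability("white swan"))  # 1.0
print(estimated_probability("black swan"))  # 0.0 -- blind to the event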
Indeed, the first limitations of AI systems are the limitations we impose on the system, never mind philosophical limitations. What we choose to include in the AI system, and what system we use to solve a given problem, will itself determine the kinds of failures of the system. Added to this is the very real likelihood of human error, carelessness and unfortunate random events.
For example, the auto pilot of the plane that crashed in France was not built to distinguish a malevolent procedure from a benevolent procedure by a pilot. Indeed, the plane is designed to recognise, and maybe prevent, illegal moves by the pilot (i.e. the “protect the plane from the pilot” concept of aircraft design), but clearly, if we accept the official version of events (and this is key), the auto pilot was not built to recognise intentional legal manoeuvres by the pilot with illegal consequences. And yet all the relevant information was available to an AI auto pilot to distinguish malevolent from legal actions: why turn off the auto pilot when there is no emergency, the weather is good, and this part of the journey is usually flown on auto pilot? Why, given the speed, height and location of the plane, do the new instructions not lead to an airfield? Why all these changes when there is only one crew member at the controls (there are ways to detect this)? Etc., etc.
The question is not whether the AI system, in this case the auto pilot, can identify a legitimate move by the pilot, but whether the AI system can identify a morally sound legitimate move by the pilot. The captain would immediately have recognised that the new change in the course of the plane was an illegal and immoral manoeuvre and would have acted to prevent the outcome. In the official version of events the pilot entered legitimate new instructions, but the auto pilot was not designed to question the morality of those new instructions; it would, however, have questioned the legitimacy of, say, increasing the speed of the plane beyond the capacity of the engines.
And this goes back to the original sin I started with: we believe we can build a perfect system when in reality the system is built in our imperfect image. And one of those imperfections is that we tend to give more value to behaviour than to intentions. Indeed, the legal profession recognises this weakness in human beings and therefore actively emphasises the importance of “intention” as a necessary condition for determining the outcome of a legal case. AI systems mimic the behavioural patterns of human beings, not the intentional acts of human beings. The auto pilot is very good at maintaining height and speed, but not very good at determining whether the new instructions are morally or legally legitimate, a mistake by the pilot, or a clever way to bypass the safety features of the system and thus intentional instructions to crash the plane. Hence, AI systems are basically systems that “ask” what is being done and how it can be reproduced, whereas we humans most times also ask, or should ask, “why is it being done?” and “is it good for me?” And as reasonable (in the legal sense) and rational (in the philosophical sense) beings, we can answer these questions whether or not we are asked.
Before our washing machine can try to murder us in the middle of the night, there are other ways in which AI systems can become a threat or a risk to us. The single most important feature of AI systems is that these systems require physical inputs and outputs to function; no private language problems here. Hence, the quality of the output (e.g. keeping the plane flying straight) depends very much on the quality of its inputs. Thus, if the system does not have a sensor to input the relevant information, for example an infra-red sensor and a camera to check whether there is a live second pilot in the cockpit, the system cannot decide whether the new manoeuvre is suspicious in the first place. Once again, AI systems are limited not only by our choice of what we want the system to do, but by our foresight of what our system ought to do.
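A minimal sketch of this “no sensor, no judgement” point (the function and its inputs are hypothetical, not a description of any real autopilot):

from typing import Optional

def manoeuvre_is_suspicious(course_leads_to_airfield: bool,
                            emergency_declared: bool,
                            crew_at_controls: Optional[int]) -> Optional[bool]:
    # crew_at_controls is None when no sensor provides the reading:
    # the relevant fact about the world never enters the system.
    if crew_at_controls is None:
        return None  # the system cannot even pose the question
    return (not course_leads_to_airfield
            and not emergency_declared
            and crew_at_controls < 2)

print(manoeuvre_is_suspicious(False, False, crew_at_controls=None))  # None
print(manoeuvre_is_suspicious(False, False, crew_at_controls=1))     # True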
The question Turing asked was whether a machine can be intelligent; in other words, can a machine use intelligence to solve human problems?
From our perspective we mustn’t confuse the term Artificial Intelligence with any notion of human intelligence; what we are talking about is basically computer software, incorporated in machines, that interacts with the environment and “...takes actions which maximize its chances of success” (Wikipedia: Intelligence/Artificial Intelligence). Sure, this is like all roads leading to Rome, but it does not follow that all these roads are the same.
I have also been using the term AI systems to include the notion of input/output algorithms within a machine designed to achieve something for our purposes. Thus, if Gilbert Ryle put to rest the “ghost in the machine” argument, i.e. the idea that the mind is some entity working in parallel with the body, we need to put to rest the idea that there is some “genius in the machine” when we talk about AI. What there is is a group of algorithms and routines, manifested as electrical patterns, that manage the digital computer in the machine. No doubt these routines and digital interactions can achieve amazing things, and they do require the hard work of some of the most gifted human beings we could ever meet, but there is no genius in the machine. The genius is some underpaid engineer in some impersonal office trying to make a decent living.
The problem with the Turing test is that the interaction with the machine is just a language-behaviour interaction. But even if somehow we could build a machine that looks like our closest friend, it would still take more than behavioural performance similar to our friend’s to establish an intelligent machine. There are, for example, common experiences and shared emotional experiences that a machine would be unlikely to use and incorporate into a dialogue successfully. To begin with, language exchanges are also emotional exchanges, and emotional exchanges need not always manifest themselves in language acts; they may manifest in physical acts, e.g. a hug, a pat on the back, a smile. An AI system probably won’t be able to interact as if it were a live person, because it takes more than just code to establish a moral and emotional system. If this were possible, personal computers would be more user-friendly.
Thus the threat here is that AI systems probably cannot be designed to interact as human beings, by virtue of the fact that human beings can override any regulating code or system on the basis of a purely emotional impulse rather than a logic circuit or algorithm. Indeed, Gödel’s first incompleteness theorem reinforces this argument with the claim that, within a sufficiently powerful consistent formal system, it is always possible to construct statements that cannot be proved by the system, but that a human being can nevertheless still make sense of the “Gödel statement.” The threat here is that an AI system can mimic a behaviour, but it is unlikely to react emotionally and give you a kiss on the cheek, or solve a problem just because we are getting emotional. As I said, there is no danger that our washing machine might want to murder us, but nor will it surprise us with a kiss. And nor will it stop colours from running just because we get very angry.
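For reference, a standard textbook statement of the theorem alluded to above (my formulation, in Rosser's strengthened form; the essay gives only the informal gloss):

% Gödel's first incompleteness theorem (Rosser's strengthening):
% for any consistent, effectively axiomatized theory T that
% interprets elementary arithmetic, there is a sentence G_T with
\[
  T \nvdash G_T
  \quad\text{and}\quad
  T \nvdash \lnot G_T ,
\]
% i.e. G_T can be neither proved nor refuted inside the system,
% even though we, reasoning outside T, can grasp what G_T says.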
The problem for us is that we have this bad habit of anthropomorphising inanimate objects, just because it makes it easier for us to relate to these objects. AI-based machines won’t be taking intentional actions independently of us, unless we program them to function in a certain way when they detect a certain empirical input. And we feel we understand things much better if we describe a machine that has broken down, or a machine not fit for purpose, as evil or bad rather than as just a machine malfunctioning. A malfunctioning machine takes away the emotional outrage we are so addicted to.
The other big problem is that we tend to play loose and dirty with language. Artificial Intelligence is just a term some scientists gave to a certain engineering activity and problem a few decades ago; AI is just a name, and there is nothing else to be implied from these two words put together. It is a quirk of English that we can build these elaborate noun groups. There is nothing intelligent about the machines we have designed and built to try and solve our problems, and they are artificial simply because these machines don’t grow on trees.
To conclude: AI systems are not a threat; what is a threat is the human component of the system. Speaking for myself, I am not afraid that an AI machine might want to kill me, but I am afraid that I might be killed by such a machine.
Best Lawrence
(typos corrected 19/04/2015)
tel: 606081813
philomadrid@gmail.com
Blog: http://philomadrid.blogspot.com.es/
PhiloMadrid Meeting
Meet 6:30pm
Centro Segoviano
Alburquerque, 14
28010 Madrid
914457935
Metro: Bilbao
-----------Ignacio------------
Open Tertulia in English.
From: January 15 at Triskel in c/San Vicente Ferrer 3.
Time: from 19:30 to 21:00
http://sites.google.com/site/tertuliainenglishmadrid/
----------------------------