The influence of AI on our behaviour
Topic by Jorge*
Essay by Lawrence
Before looking at the issue of behaviour we need to distinguish between two questions: doing AI, and what is AI? "What is AI?" lies more within the realms of philosophy, and consequently we need to take a detour before tackling the issue head on.
How we name things and objects has been a classical issue in philosophy for many decades, if not centuries. Without going into any details: for many centuries it was accepted that someone called Smith was likely a person who worked with metals; thus a blacksmith was someone who worked with iron. Of course, this progressed from being a person with such an occupation to perhaps a child of a blacksmith. Today Mr or Ms Smith is just that person's surname, and I am sure that many people today do not know what a blacksmith is.
More recently we tend to name new phenomena or new objects as part of some kind of Wittgensteinian language game: we simply decide to call something by a given word or words, and that decides the meaning of those words. Except that those who are not participants in the game do not have full access to the meaning of the terms used. To mention one set of participants, scientists are very good at this naming game.
This would not be serious if it were not for the bad habit of scientists sometimes adopting terms from ordinary language and then giving them the meaning of their own language game. Psychologists use the term "mind" differently from philosophers, and in everyday language we use the word mind with different meanings: I don't mind (it is alright by me); mind your language (be careful what you say); and so on.
The term "Artificial Intelligence" clearly belongs in the realm of a scientific language game. It has nothing to do with something not being natural, or with being intelligent. Indeed, John McCarthy, who coined the term, described chess and board games as the fruit flies (the Drosophila) of AI: see the obituary of McCarthy at Stanford News (1).
A second preliminary aspect of our discussion is that in recent years, both in academia and even more so in real life and education, we have lost the clear distinction between science and technology (applied science). MIT, founded in the USA in 1861, was not called the Massachusetts Institute of Technology for nothing: technology mattered more to a young country at the time than anything else.
This is relevant for our discussion because if by AI we mean technology, then AI has a direct bearing on our behaviour. If, however, AI is science, or at best a scientific tool, then surely it does not have a direct causal connection with our behaviour. Another example illustrates the point: experiments conducted on fruit flies or mice are not in a direct causal relationship with us, even if the results may one day be relevant to the making of a medication. In contrast, the results of a phase four clinical trial (2) do have a direct causal relationship with us, since a negative result at this stage would most definitely stop a prospective medication in its present form.
But even today the term science has been hijacked by populist media, advertising agencies and snake-oil merchants. Many claims to scientific discovery are based on a one-off study with a small cohort: at best, in these cases, "scientific" means that whatever is claimed does no harm at all. Let us set aside for now any misleading claims about some discovery that might have a negative effect: for example, claims about orange juice in a carton! Unfortunately, AI is not immune to this process of being hijacked by commercial interests.
Today, more than ever, there is a feeling that the scientist is the science. But this idea is not new. For example, Thomas Kuhn, in his book The Structure of Scientific Revolutions (3), suggests that scientists who promote an old paradigm, even in the face of evidence to the contrary, tend to behave as if they were the high priests of that paradigm. Consider, for example, the fate of Ignaz Semmelweis (4), who advocated clinical hygiene (the washing of hands), and the treatment he received from his medical peers and the medical community.
This is relevant for our discussion because in everyday life we might make value judgements about scientific claims on purely emotional and irrelevant criteria. Do not forget that Albert Einstein's theories were rejected, especially in English-speaking countries, not because they were shown to be invalid but purely because he was a German during the First World War, and a Jew. During the run-up to the Brexit referendum and the presidency of Donald Trump, scientists who dared to speak out against Brexit or the president's policies were at best dismissed as irrelevant experts, or even branded enemies of the people. Perhaps we can point to Hollywood for any negative ideas we might have about AI (5), with all those science fiction films and their box office hits.
So does AI influence our behaviour? If AI is technology, then it does influence our behaviour, by virtue of the fact that this technology is successfully implemented in many of the gadgets we depend on in modern life. Mobile phones, to give an example, are today an extension of our brain, and I do not mean that in a metaphorical sense. Not too long ago, remembering more than ten telephone numbers was a feat in itself. And text correctors and translators are a must-have in the age of the internet, even if non-English-based companies tend to have a serious problem programming their AI for the plural "s" and the possessive "s" in English.
As for the doing of AI, this is a more complex issue, not only because it is a really technical field involving many tools, but even more because it covers a wide range of disciplines. What is philosophically relevant is that AI has the same problem we have had from time immemorial: our sense perceptions. Datasets and databases are the equivalent of our sense perceptions. And the problem is not that this data might be corrupt or biased, but rather that neither the AI tool nor the AI operator seems to have the opportunity, and I emphasise seems, to go behind the data and investigate its provenance to the extent that the tool might exclude that dataset or database.
And how does AI account for political interference with policies that leads to a biased dataset? Considering the statistics of victims of Covid-19 in the UK, how can we confirm or reject the hypothesis that the British government, at the beginning of the pandemic, transferred many elderly people from hospitals to care homes with disregard for the ethical consequences for this age group? This is a political issue, not a dataset problem.
The question I am asking is not an ethical or moral question about datasets but rather: how valid are the datasets employed by AI tools? Many ethical issues about AI have already been identified and documented by those doing AI; this is well documented on the internet. As I have hinted earlier, the issue with data is not one of quantity but rather one of quality and provenance.
(1) Stanford's John McCarthy, seminal figure of artificial intelligence, dies at 84
https://news.stanford.edu/news/2011/october/john-mccarthy-obit-102511.html
(2) Step 3: Clinical Research (FDA)
(3) Scientific Revolutions
https://plato.stanford.edu/entries/scientific-revolutions/
(4) Ignaz Semmelweis
https://en.wikipedia.org/wiki/Ignaz_Semmelweis
(5) Can artificial intelligence become sentient, or smarter than we are - and then what?
*PDF articles from the late Jorge:
2016 OP015_FromElectronstophotons.pdf
https://drive.google.com/file/d/1_Gd2nOj5Q6YW6gyKKN_LnjoQWM8aulIR/view?usp=sharing
2020 ArtificialIntelligence definitions.pdf
https://drive.google.com/file/d/1PVGILmglc2on7A45t9pMDbw6oAo6dqut/view?usp=sharing
Best, Lawrence
telephone/WhatsApp: 606081813
Email: philomadrid@gmail.com
http://www.philomadrid.com