This post began with two articles on Digidig.
Abe and you, and me, and each of us. On relating with an algorithm
by Toni Muzi Falconi.
The conversation grows more important by the day, and Toni and Italo are trying to get communication professionals to discuss it before the storm hits. Still, I feel the discussion needs to take a few steps back before it can move forward: one step concerns nomenclature, the other concerns approach.
Please be patient: little formal research went into this post. I am working from a few references kept in Evernote, my own sensitivity to the subject, and my best effort at organizing the information. That is why anything you feel should be corrected is welcome as a comment, with my deepest thanks.
A data format is just a structure for our information; it can, and should, be readable by any human or system. We use data formats every day: an Excel spreadsheet and a Word document can hold the same information, but only one of them lets us manipulate the data and arrange it into formulas. For businesses where employees mix Windows and Apple devices, there is a constant struggle that OpenOffice aims to solve by bringing data formats into a common standard.
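A toy sketch of that difference (the sales figures are hypothetical): the same information written as free text and as a structured format. Only the structured version lets a program read the values back and compute with them.

```python
import csv
import io

# The same information as free text and as structured CSV (made-up figures).
as_text = "January sales were 120, February sales were 95."
as_csv = "month,sales\nJanuary,120\nFebruary,95\n"

# The structured format lets any program parse and compute with the values.
rows = list(csv.DictReader(io.StringIO(as_csv)))
total = sum(int(row["sales"]) for row in rows)
print(total)  # 215
```

Extracting the same total from the free-text sentence would require parsing natural language, which is exactly the work a shared data format spares us.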
The word “algorithm” is being used so often that we risk confusing it with something else. An algorithm is a set of steps to solve a problem: it takes input, from sensors or files, and returns information that the user, or the next system, will use for a certain end.
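In code, that definition can be made concrete. A minimal sketch (the sensor readings and threshold are invented for illustration): a fixed set of steps that turns raw input into information the next system can act on.

```python
def summarize_readings(readings, threshold):
    """A small algorithm: fixed steps from raw sensor input
    to information a user or the next system can use."""
    # Step 1: validate the input
    if not readings:
        raise ValueError("no readings")
    # Step 2: derive a value from the input
    average = sum(readings) / len(readings)
    # Step 3: return the information for a certain end
    return average, average > threshold

avg, too_hot = summarize_readings([20.5, 21.0, 23.5], threshold=22.0)
print(round(avg, 2), too_hot)  # 21.67 False
```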
It is a tool akin to a hammer or drill. Our relationship with it is one of practicality and we will abandon it as soon as it is no longer useful for our goals.
Like any tool, we measure its usefulness every time we use it and trust its quality based on the brand that built it.
(I feel we should make a distinction between an algorithm and a sort of trigger. For example, the motion sensor in some office buildings turns the lights on when it senses people are present, and off again after a period of inactivity. Something that simple is a trigger, and should not be considered an algorithm.)
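The distinction is visible in code: the office-light trigger reduces to a single conditional, with no multi-step reasoning. A sketch (the timeout value and function names are mine):

```python
# The motion-sensor "trigger" from the text: one conditional, not an
# algorithm. The five-minute timeout is illustrative.
TIMEOUT_SECONDS = 300

def update_light(motion_detected, last_motion_time, now):
    """Return (light_on, last_motion_time): on if motion was seen recently."""
    if motion_detected:
        last_motion_time = now
    light_on = (now - last_motion_time) < TIMEOUT_SECONDS
    return light_on, last_motion_time

# No motion for 600 seconds: the light is off.
on, last = update_light(motion_detected=False, last_motion_time=0, now=600)
print(on)  # False
```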
Allow me to risk a line of thought and say that an artificial intelligence (AI) starts to form when you build a system that is able to gather data on its own, communicate information to humans, and act upon it.
Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal.
The intelligent agent paradigm:
- Russell & Norvig 2003, pp. 27, 32–58, 968–972
- Poole, Mackworth & Goebel 1998, pp. 7–21
- Luger & Stubblefield 2004, pp. 235–240
- Hutter 2005, pp. 125–126

The definition used in this article, in terms of goals, actions, perception and environment, is due to Russell & Norvig (2003). Other definitions also include knowledge and learning as additional criteria.
For example, an autonomous car takes input from the user (a destination), uses an algorithm to determine the best path (from GPS coordinates and traffic information), and drives while avoiding the obstacles it finds on its way: other vehicles, people, red lights.
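The loop described above can be sketched in a few lines. This is a toy model on a grid, with all names and the stop-on-obstacle policy invented for illustration; a real car would replan instead of stopping.

```python
def plan_route(start, goal):
    """Stand-in for the routing algorithm (GPS + traffic in the real car):
    walk one grid axis, then the other."""
    path, (x, y) = [], start
    while (x, y) != goal:
        if x != goal[0]:
            x += 1 if goal[0] > x else -1
        else:
            y += 1 if goal[1] > y else -1
        path.append((x, y))
    return path

def drive(start, goal, obstacles):
    """Follow the planned route, perceiving obstacles along the way."""
    position = start
    for step in plan_route(start, goal):
        if step in obstacles:  # perceive: obstacle ahead
            break              # act: stop (toy policy; a real car replans)
        position = step        # act: move
    return position

print(drive((0, 0), (2, 1), obstacles=set()))     # reaches (2, 1)
print(drive((0, 0), (2, 1), obstacles={(2, 0)}))  # stops at (1, 0)
```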
Our relationship with this AI is still based on practical goals at this point. Our choice of a certain model, or a certain version of this car, depends more on our trust that its creator did a good job. Would you buy a car from Tesla or from Microsoft?
Other AIs may be more complex, built from algorithms that no single human being can fully comprehend. (A plane, for example, uses so many complex systems that no one person can grasp their full body of knowledge.)
At this point we can pause and reflect on how we will relate with these new entities.
Toni did just that, and proposed the following indicators for this relationship.
Thus, if it is true that there are at least four principal indicators to help investigate and evaluate the quality of any single relationship:
- the trust in the relationship by the subjects of that relationship;
- the satisfaction in the relationship by the subjects of that relationship;
- the commitment in the relationship by the subjects of that relationship;
- the equilibrium of reciprocal influence in the relationship by the subjects of that relationship ... plus other indicators pertaining specifically to the peculiar nature of the subjects of the relationship (individual/individual; organization/organization; algorithm/algorithm, and one with the others and vice versa).

Finally, the questions that need to be investigated in order to move forward could be:
- what batteries of questions need to be answered by the two subjects of the relationship for each of the selected indicators?
- what AI processes need to be created to allow each single algorithm to – in parallel with the single web user – respond to the same battery of questions?
- what other specific indicators need eventually to be introduced beyond the four? For example: in the relationship between organization and organization it is sometimes helpful to introduce an indicator pertaining to the distinct epigenetic profiles of the two subjects of the relationship;
- what is to be considered a reasonable reciprocal influence on the balance of a relationship for each of the selected indicators? For example: in the individual/individual relationship, any obvious imbalance resulting from the responses might suggest to either subject actions to create a better equilibrium – with the possible exception of the reciprocal-influence indicator in parent/child, teacher/student, or boss/subordinate relationships... obviously keeping in mind the socio-cultural specificities of the territory where the single relationship develops... and so on with every other possible combination.
In the extreme, we can imagine a future where a company exists that consists exclusively of an AI or a group of AIs. Companies are already jokingly compared to robots, and communication professionals break their hearts trying to make them “authentic” and “human”.
My point is that these AIs and algorithms are so complex that, in the end, we will be measuring the trust we have in their creators rather than the trust we have in the system itself. That does not mean I would disavow any effort to measure trust in the artificial entity: when there are no records of the creation process, we need a sound diagnosis of the honesty of the AI itself.
At this point, my suggestion is that we look to philosophers of robotics to set up ground rules and methods to keep the AI accountable, similar to Asimov’s laws of robotics. It won’t be bulletproof unless we can audit the source code and make sure those ground rules are present and are in fact used in the decision-making process.
To make things more complicated, this is a scenario where the AI is not self-aware. Once an AI becomes conscious of what it is and is able to make autonomous decisions, it will change the relationship it has with us and with other AIs. At that point the creator’s role will begin to dwindle, because he cannot foresee the effects of this autonomous decision process or the way it will shape the evolution of the system he built.
This is still a scenario where the AI does not have free will. It takes information and context from the surroundings and makes decisions based on its experience and guidelines.
John Searle did a talk at Google on this exact subject.
The next logical step is to ask at what point the AI has free will. It is also where I draw my line for the moment. Free will is not a subject I have given much thought, so forgive me for going back to Searle on this topic.
This topic spurred interesting conversations among friends. The first was with Adriana. She mentioned that we should only call something “intelligence” when it is able to produce something new: a non-human entity that can process the world around it and develop something new can be considered an artificial intelligence.
On a different note, Nuno Nunes was kind enough to read this post and point out that even when we do have access to the source code, the way the system functions can be so complex that we are unable to audit it and ascertain its honesty (for lack of a better word).
One example is deep learning systems, where data and information pass through several layers of the algorithm before an output is produced.
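A minimal sketch of why those layers resist auditing (the weights are random, not a trained network, and the layer sizes are arbitrary): the output emerges from many weighted sums stacked on top of each other, so no single weight "explains" the result.

```python
import random

random.seed(0)

def layer(inputs, weights):
    """One layer: weighted sums passed through a simple nonlinearity (ReLU)."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# Three layers of random weights, purely illustrative.
sizes = [(4, 3), (4, 4), (2, 4)]  # (neurons, inputs) per layer
weights = [[[random.uniform(-1, 1) for _ in range(n_in)]
            for _ in range(n_out)]
           for n_out, n_in in sizes]

signal = [0.5, -0.2, 0.9]
for w in weights:
    signal = layer(signal, w)  # the data is transformed at every layer

print(len(signal))  # two final outputs, shaped by dozens of weights
```

Even in this toy, tracing why one output is high means following every weight through three transformations; real systems have millions of weights and many more layers.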
Another point is that the concepts of algorithm and AI exist at different levels of detail. He explained that what I described above is a function, not the algorithm itself: the algorithm is the abstract list of instructions, not their automatic execution. In AI there is also the concept of an agent. The agent observes the environment and runs specific routines based on that observation, like what I did when building Johnny Five. (More on that later, when I find the time.)
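Nuno's three levels can be laid side by side in code. This is my own illustration, with a thermostat as the invented example: the algorithm as an abstract list of instructions, a function as one executable rendering of it, and an agent that observes before running its routine.

```python
# Level 1: the algorithm, an abstract list of instructions, not executed.
ALGORITHM = [
    "1. read the current temperature",
    "2. compare it with the setpoint",
    "3. heat if below, idle otherwise",
]

# Level 2: a function, one concrete rendering of those instructions.
def thermostat_step(temperature, setpoint):
    return "heat" if temperature < setpoint else "idle"

# Level 3: an agent, which observes the environment and then runs a
# routine based on that observation.
class Agent:
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def act(self, environment):
        observation = environment["temperature"]            # observe
        return thermostat_step(observation, self.setpoint)  # run routine

agent = Agent(setpoint=20.0)
print(agent.act({"temperature": 18.5}))  # heat
```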
Yet for communication professionals, and for others trying to understand the implications of how we relate to non-humans, brands, and companies, it is enough to have the broad strokes of what AIs are. Otherwise we will find ourselves blocked by the learning curve.
Header photo on this page was done by Cryteria and is available from Wikimedia Commons.