AI 1: Fundamental Issues in Intelligent Systems

1. Turing's 1950 "Computing Machinery and Intelligence" Paper

1.2 Read Turing’s original paper on AI (Turing, 1950). In the paper, he discusses several objections to his proposed enterprise and his test for intelligence. Which objections still carry weight? Are his refutations valid? Can you think of new objections arising from developments since he wrote the paper? In the paper, he predicts that, by the year 2000, a computer will have a 30% chance of passing a five-minute Turing Test with an unskilled interrogator. What chance do you think a computer would have today? In another 50 years (Russell & Norvig, 2010)?

I will review some of these objections one by one.

"(1) The Theological Objection." There have been many advancements in technology throughout the ages that were controversial because they were seen as an encroachment on God's domain that are no longer seen as such and AI, if handled appropriately and responsibly in a way that is useful to humans, it can be added amongst them.

"(2) The ‘Heads in the Sand’ Objection." If this objection is framed in terms of a fear of loosing command over the AI such that AI begins to command us, then this is still a valid (long-term) concern. But if this is framed as simply feeling intimidated by the power and ability of computers; in some areas computers have out-powered the human mind and the feeling of intimidation has waned as the usefulness of having these tools at our disposal has become more apparent.

"(3) The Mathematical Objection." I tend to agree with Turing's dismissal of this objection here, and I would add that human perceptions of reality are not always accurate or logical infallible and for good reason. Anil Seth, Professor of Cognitive and Computational Neuroscience at the University of Sussex and others have prosed the idea that human consciousness is a kind of controlled hallucination and that human perceptions can be inaccurate as a means of prioritizing survival over objectivity. Neuroscientist Dr. Beau Lotto stated on Ted Talks that "The brain didn't actually evolve to see the world the way it is ... The brain has evolved to see the world it is useful to see." This underlines an important point that logical acuity and utility don't always go hand in hand, although there certainly are risk involved logical fallacies.

"(4) The Argument from Consciousness." This objection roughly states that a computer cannot be equal to a human brain until it is capable of human-like consciousness, which includes emotions and creativity. Turing takes a very solipsistic position stating that we can't know with certainty that other humans even possess these characteristics, which in my opinion is a bit of a digression from the matter. My take is that AI should be gauged in terms of its usefulness to humans rather than its likeness to humans in every respect. That usefulness can even be in the form or appearing to have human characteristic by non-human means that don't require it to be conscious. Should consciousness even be a goal of AI? It certainly would make the pursuit more interesting, and we may learn something about ourselves in the process, but I would imagine consciousness might make AI be less useful and potentially more dangerous.

"(5) Arguments from Various Disabilities." Arguments in the form "you will never be able to make one to do X" in reference to human technology tend to not age well. Today's reality is yesterday's science fiction as the list of impossibles becomes possible.

"(6) Lady Lovelace's Objection." This objection states that computers "never do anything really new." This specific argument of disability that addresses our inability to impart creativity into Al programs, along with the inability to impart true human-like consciousness may be the hardest to overcome. So in that sense, one could argue that this objection has aged better than the others. But, as Turing states, computers certainly do have the ability to surprise us, even if it does not do so via creativity. I would say there is no such thing as originality since everything deemed "original" arises out of the context of a certain set of influences and restraints. In that sense, everything "new" is a re-arrangement of elements that already existed.

"(7) Argument from Continuity in the Nervous System." This states that human brains are not entirely digital and digital computers, which limits them from acting like human brains. Turning lived at the tail end of the brief era of analog computers and as such he accurately points out that computers need not be digital. The recent rise of quantum biology has suggested that human brains utilize quantum properties. New-found abilities of quantum computers seem to verify that there may be important abilities of the human mind that will soon be possible with quantum computers. I would also add that hybrid computers (analog and digital) and mixed-signal integrated circuits (which are also analog and digital) may fill in more gaps in emulating human neurons, although interest in these technologies has sadly fallen away in recent years.

Summary and Additional Objections

In summary, I would say that the only objection that still stands as a long-term concern is the ‘heads in the sand’ objection, which is related to the concept of the technological singularity. Whether we will have fair warning that we are approaching it, or whether it will be delivered in a sudden avalanche of new technology, remains to be seen. But it's certainly something that should be discussed and planned for, even if it doesn't happen for decades to come.

One notable objection that has arisen since the paper's publication is the issue of privacy and surveillance. But I would say the onus is on the companies and organizations carrying this out rather than on the technology being used, although the issue of security does involve the technology being used. One notable pending security concern is the post-quantum cryptography problem, which refers to cryptographic algorithms, such as public-key algorithms like RSA, that will likely not be resistant to attack by quantum computers.

Chance of passing a five-minute Turing Test with an unskilled interrogator

As more and more of the general public become tech-savvy, it may be hard to find or define an "unskilled interrogator." There are also tremendous shortcomings in the Turing Test that, in my opinion, can render the result meaningless. I would imagine that the result of the test largely depends on exactly how "unskilled" the interrogator is. Whether the human is fooled or not could also depend heavily on the type and scope of the questions asked. If the questions are about math or chess, for example, it may be impossible for the interrogator to know it is a computer, whereas with questions about previous relationships and feelings of rejection and joy, the computer may have little chance of fooling them unless it has a huge database of actual human conversations from which to plagiarize. The Loebner Prize applied certain restrictions to the questions, but I remain skeptical.

Objections aside, I don't think most AI programs would fare better than Turing's prediction for 2000, and when they do, it's probably due to some strange details. For example, the "Eugene Goostman" program, which simulated a 13-year-old Ukrainian boy, was said to have passed the Turing Test in 2014, but it did so in the context of the interrogators being told they were conversing with a 13-year-old Ukrainian boy, who obviously would be expected to have less command of English (the language the test was performed in) and to have limited life experience and knowledge. What chance do you think a computer would have in another 50 years? It depends on how bad the test is in light of many details! But I would say there will likely be hundreds or thousands of AI programs that will pass.


2. What qualifies as AI?

1.7 To what extent are the following computer systems instances of artificial intelligence?

  • Supermarket bar code scanners.
  • Web search engines.
  • Voice-activated telephone menus.
  • Internet routing algorithms that respond dynamically to the state of the network. (Russell & Norvig, 2010)

Supermarket bar code scanners do have 'computer vision' in the sense of processing an image as input, but they do not need AI to operate satisfactorily. The image is decoded to a string, which is then looked up (usually by a separate device) to retrieve the data associated with it, such as the price. Unless the decoding involves complex image processing to work reliably on terribly distorted images, there is no need for any AI.
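
To make this concrete, here is a minimal Python sketch of the decode-then-lookup flow (the barcode values and catalog contents are hypothetical): once the image is reduced to a string, everything downstream is a plain table lookup.

```python
# A minimal sketch of the decode-then-lookup flow described above.
# The barcode values and catalog contents here are hypothetical.

PRICE_CATALOG = {
    "0123456789012": ("Whole wheat bread", 2.49),
    "0987654321098": ("Milk, 1 gallon", 3.79),
}

def handle_scan(decoded_barcode: str) -> str:
    """Once the scanner reduces the image to a string, the rest is a
    plain table lookup -- no AI involved."""
    item = PRICE_CATALOG.get(decoded_barcode)
    if item is None:
        return "Item not found -- please rescan"
    name, price = item
    return f"{name}: ${price:.2f}"

print(handle_scan("0123456789012"))  # Whole wheat bread: $2.49
```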

Web search engines as we know them today typically do use AI. Google Search, in particular, relies quite a bit on AI algorithms operating on a set of rules that are not publicly known in any specificity.

Voice-activated telephone menus typically do use AI, but to what degree depends on what is specifically meant here. If we include smartphones' voice-activated intelligent personal assistants (IPAs) such as Siri and Google Assistant, then the answer is a resounding "yes": they use a lot of AI. But if we mean the voice-activated menu navigation encountered when calling a company's or service's phone number, which lets you speak instead of pressing keys, the amount of AI involved is much thinner.

Internet routing algorithms that respond dynamically to the state of the network can and sometimes do benefit from AI to observe and predict patterns of internet traffic. This is very purpose-specific, so I would say the degree to which they rely on AI may be less than that of voice-activated intelligent personal assistants but perhaps more than that of voice-activated menu navigation.


3. What can AI do?

1.14 Examine the AI literature to discover whether the following tasks can currently be solved by computers. For the currently infeasible tasks, try to find out what the difficulties are and predict when, if ever, they will be overcome (Russell & Norvig, 2010).

a. Playing a decent game of table tennis (Ping-Pong).

Yes, AI can do this today. For example, the Seoul Global Startup Center has created a system called fastpong, and there are numerous other examples going back to the 1980s.

b. Driving in the center of Cairo, Egypt.

Autonomous vehicles can avoid pedestrians, but operating them in areas with many pedestrians is controversial at best. Autonomous vehicle software has advanced to the point of identifying whether a moving object is a human or some other object, but it must account for situations in which some kind of accident is inevitable and there is only a "least bad" outcome (driving into a wall vs. a pedestrian, for example), and this poses many ethical as well as legal hurdles. So, even if AI can drive in areas with high numbers of pedestrians, there is still the issue of whether we allow it to. But it appears that the technology to handle driving in the presence of pedestrians is well on its way, and I believe a lot of the trepidation around autonomous vehicles is a result of how new a concept they are; it will subside over time as they prove themselves to be as safe as, and often safer than, human drivers. As for the legal hurdles, I believe these will be worked out as well, as the companies producing the vehicles begin to adhere to some legally defined rules about which outcomes should be prioritized.

c. Driving in Victorville, California.

I don't know the lay of the land in Victorville, California, but I assume the area is less congested with pedestrians. Additionally, California is one of the states that allow autonomous vehicles, so I will say yes, this can currently be solved by AI.

d. Buying a week’s worth of groceries at the market.

There are now grocery stores that use robots to stock shelves, and many are now being used to "serve customers," but getting items for specific customers is not their primary purpose at the moment. I am assuming the question does not have this kind of device in mind but rather a consumer-owned robot that can go to the supermarket and do the shopping, which does not currently exist as far as I can tell. But I would say the robots permanently installed in grocery stores should start filling consumer orders soon, particularly in a post-pandemic world. This could be combined with human- or autonomous-vehicle-based delivery.

e. Buying a week’s worth of groceries on the Web.

Web bots have been made that are capable of making purchases, so yes, this is possible. How successful this is probably depends on how much of the item selection is left up to the AI. You can order grocery items via Amazon Alexa, but you cannot rely on Alexa to observe what is running low in your kitchen and deduce what you would like ordered. In other words, humans are still largely in control of the orders.

f. Playing a decent game of bridge at a competitive level.

I read conflicting information on this, but it seems that yes, there is "computer bridge" which runs on AI, and it can play quite well at a competitive level. Bridge apparently has a psychological side that should prove more challenging for AI, but, according to the article "The History of Computer Games" published by the City University of New York (CUNY) in 2006:

"In 1998, GIB not only won the Computer Bridge World Championship, it was also the only computer player invited to play in the Par Contest at the World Bridge Championships. Out of 35 competitors, GIB finished 12th."

g. Discovering and proving new mathematical theorems.

Proving? Yes. Discovering? That poses more of a challenge, and so, although this is possible, it is not a routine ability of current-day AI. Additionally, there are many potential theorems that have never been formulated because they are not of much use and thus don't really need to be formulated. So I would think an added difficulty would be in ensuring the AI comes up with something worthwhile. With this in mind, it might be useful to guide the AI in some way toward an area worth pursuing.
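
To illustrate the "proving" half, here is a trivial machine-checked proof in Lean 4 (my own toy example, not from any source above); real theorem provers verify far deeper results, but the mechanics are the same: a stated proposition whose proof the machine checks for correctness.

```lean
-- A trivial machine-checked theorem: addition on the natural numbers
-- is commutative. Here the proof simply appeals to the standard
-- library lemma Nat.add_comm; automated provers search for such
-- proofs on their own.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```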

h. Writing an intentionally funny story.

Possibly soon, but "funny" is subjective. Apparently there is a work in progress called Tito-Joker, but I can't vouch for how "effective" it is, since its live web deployment is currently down, and although one could install it, it would take some time for it to learn and improve. I'm guessing it's not that funny. How might better AI humor be achieved? A better understanding of what makes something funny in the first place would help. For example, jokes on topics that people find slightly uncomfortable are funny because they release tension. A registry of what humans currently find uncomfortable, but not too uncomfortable, would probably be useful. Scraping this data, such as from the internet, might be challenging but not impossible.

i. Giving competent legal advice in a specialized area of law.

Yes, and this has been available for some time, even predating the expert-system boom and bust of the 1980s.

j. Translating spoken English into spoken Swedish in real time.

Yes, AI can do this. In one landmark 2012 demonstration illustrating the potential of deep neural networks, Rick Rashid, founding vice president of Microsoft Research Worldwide, spoke to an audience of Chinese students while his English words were spoken by a computer just seconds behind him, in Mandarin, and in his own voice!
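
Under the hood, such systems chain three trained components: speech recognition, machine translation, and speech synthesis. Here is a minimal sketch of that pipeline for the English-to-Swedish case (the component functions are hypothetical placeholders, not a real API):

```python
# Hypothetical pipeline sketch: each stage would be a separately
# trained model in a real system, such as the 2012 demo described above.

def recognize_speech(audio_en: bytes) -> str:
    """ASR: English audio -> English text (placeholder)."""
    ...

def translate_text(text_en: str) -> str:
    """MT: English text -> Swedish text (placeholder)."""
    ...

def synthesize_speech(text_sv: str) -> bytes:
    """TTS: Swedish text -> Swedish audio, ideally in the original
    speaker's own voice (placeholder)."""
    ...

def translate_spoken_english_to_swedish(audio_en: bytes) -> bytes:
    # Chain the three stages; a real-time system runs them
    # incrementally, a few seconds behind the speaker.
    return synthesize_speech(translate_text(recognize_speech(audio_en)))
```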


4. Understanding Intelligent Agents

2.3 For each of the following assertions, say whether it is true or false and support your answer with examples or counterexamples where appropriate (Russell & Norvig, 2010).

a. An agent that senses only partial information about the state cannot be perfectly rational.

This is false. Simple reflex agents often won't work satisfactorily if the environment is not fully observable, but other types of agents won't have this problem. For example, in a deterministic environment, a model-based agent may not be able to sense all information, yet it can fill in the gaps using its model, which can determine the current state with 100% accuracy from the previous state, since the environment is deterministic. In other cases, knowing every detail of the environment at a given moment may simply not be necessary. That said, perfectly rational agents performing non-trivial tasks are quite rare. The textbook defines them as follows:

"A perfectly rational agent acts at every instant in such a way as to maximize its expected utility, given the information it has acquired from the environment. We have seen that the calculations necessary to achieve perfect rationality in most environments are too time-consuming, so perfect rationality is not a realistic goal."

b. There exist task environments in which no pure reflex agent can behave rationally.

This is true. As previously mentioned, pure reflex agents (without a model or any randomization of actions) have trouble with partially observable environments. For example, suppose a robotic filler agent's goal is to keep an array of tanks filled above 75% while they are being depleted, and it can only observe the state of one tank at a time. With no percept history, it will not know to avoid re-checking tanks it filled moments ago, resulting in needless navigation between them, as sketched below. Other tasks directly require that the agent maintain a percept history, such as YouTube video recommendations.
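
Here is a small sketch of that failure mode (the tank levels and threshold are hypothetical):

```python
# Sketch of the tank scenario above. A pure reflex agent perceives one
# tank at a time and keeps no percept history, so a fixed scan order
# sends it back to tanks it topped up only moments ago instead of
# prioritizing tanks it hasn't seen lately.

levels = [0.80, 0.30, 0.90]  # three tanks, fraction full

def reflex_rule(observed_level: float) -> str:
    # Condition-action rule on the current percept only.
    return "fill" if observed_level < 0.75 else "move on"

for step in range(6):
    i = step % len(levels)  # memoryless: can only sweep blindly
    action = reflex_rule(levels[i])
    if action == "fill":
        levels[i] = 1.0
    print(f"step {step}: tank {i} -> {action}")
```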

c. There exists a task environment in which every agent is rational.

This is true, but these are probably task environments that don't require much intelligence in the first place, such as ones that require only a single sensor and a single action, or ones that accept many different actions as equally rational under the performance measure.

d. The input to an agent program is the same as the input to the agent function.

This is false. The agent function is a mathematical abstraction, denoted f: P* → A, which maps (takes as input) percept histories (every possible percept sequence) to the actions the agent will perform. The agent program is an implementation of the agent function, and it takes as input (as the argument to the agent program's function or method) the current percept only. If the agent program stores percept histories, it does so in a variable that persists and is available to the agent program, such as a data member, as would be the case when the agent program is implemented as a method (a.k.a. member function) on a class object.
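
A minimal sketch of this distinction (my own illustration, not the textbook's code): the agent program's method receives only the current percept, while the percept history — the input to the abstract agent function f: P* → A — persists as internal state.

```python
# The agent program's step() receives only the current percept; the
# percept history -- the input to the abstract agent function -- is
# retained internally as a member variable.

class AgentProgram:
    def __init__(self):
        self.percept_history = []  # persists between calls

    def step(self, percept) -> str:
        """Invoked with the current percept only."""
        self.percept_history.append(percept)
        return self.choose_action(tuple(self.percept_history))

    def choose_action(self, history: tuple) -> str:
        # Stand-in for the agent function: in principle it maps the
        # entire percept sequence to an action.
        return f"act-on-{history[-1]}"

agent = AgentProgram()
print(agent.step("hot"))   # act-on-hot
print(agent.step("cold"))  # act-on-cold (history now holds 2 percepts)
```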

e. Every agent function is implementable by some program/machine combination.

This is false. Current technologies have limitations compared to mathematics and probably always will. There are problems whose solutions, in the form of mathematical agent functions, would use far more resources than the physical machine possesses if implemented as agent programs, meaning the problem is intractable. For a more specific example, there are numbers that can be notated mathematically but exceed in magnitude and/or precision what a computer can represent; even if the computer's native numeric types are extended programmatically, there is still only a limited amount of memory in which to store them. Limited memory also means that percept histories are bounded in size, unlike what can be defined abstractly with an agent function.
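
A small demonstration of the precision limit in Python (this is standard IEEE-754 double-precision behavior):

```python
# IEEE-754 doubles near 1e16 are spaced about 2 apart, so adding 1 is
# lost entirely in the rounding: the mathematical value x + 1 has no
# representation distinct from x.
x = 1e16
print(x + 1 == x)  # True
```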

f. Suppose an agent selects its action uniformly at random from the set of possible actions. There exists a deterministic task environment in which this agent is rational.

This is true, but just as in part (c), these are probably task environments that don't require much intelligence in the first place, such as ones in which every action is deemed equally "useful" no matter which the agent selects, or ones where the desired output is itself random, essentially meaning the environment doesn't matter; see the sketch below. One should probably question whether such a simple agent should ever have been conceived of as an "intelligent agent" if a simple algorithm would have sufficed instead.
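
A tiny sketch of such a degenerate environment (hypothetical, of course):

```python
# A deterministic task environment in which the uniformly random agent
# is rational: the performance measure scores every available action
# identically, so random choice already maximizes expected utility.

import random

ACTIONS = ["left", "right", "wait"]

def performance(action: str) -> int:
    return 1  # every action is rewarded equally

chosen = random.choice(ACTIONS)
assert performance(chosen) == max(performance(a) for a in ACTIONS)
print(f"chose {chosen!r}; utility is maximal regardless")
```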

g. It is possible for a given agent to be perfectly rational in two distinct task environments.

This is true, but it probably depends on the degree to which the two task environments are distinct from each other. I'll give an example for which this is true, where the task environments overlap in that they have the same sensors and actuators but different environments and performance measures. Take an agent that dispenses cat food when a cat walks up to it, using a camera and eye detection. Although this machine is placed on the floor and contains cat food, it could be mounted on a wall and loaded with gum balls. In this way, we have a second task environment in which the agent, dispensing gum balls in response to detecting a human eye, would be equally rational. The positioning of the machine in the cat-feeder task environment effectively prevents humans from inadvertently triggering it (unless they are crawling on the floor, which is not considered possible), and, likewise, the gum-ball-dispenser task environment would not be inadvertently triggered by a cat, since there is no place for a cat to perch in front of the machine while it's mounted on the wall.

h. Every agent is rational in an unobservable environment.

This is false. For obvious reasons, most rational agents must observe the environment. A Roomba needs to observe obstacles in the environment in order to be rational, an autonomous vehicle needs to observe other cars in order to be rational, and there are many other examples.

i. A perfectly rational poker-playing agent never loses.

This is false. I don't know the rules of poker that well, but I do know there is an element of chance that cannot be overpowered by strategy. An unscrupulous designer might, however, be able to devise a system of hidden cameras to make an unfairly advantaged poker-playing agent, if one were to consider cheating rational under the performance measure!

References

  • Russell, S., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Pearson Prentice Hall.
  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.