John Searle and Artificial Intelligence

Introduction

John Searle, a philosopher, presented strong arguments against strong artificial intelligence (AI) based on the concepts underlying the Chinese room. The Chinese room is a thought experiment Searle devised to support his objections to strong AI. Using the concepts and ideas endorsed by Searle, the following discussion will: (1) define strong AI, (2) compare Searle's arguments to other objections to AI based on the Turing test, (3) discuss Searle's responses to the systems reply, robot reply, brain simulator reply, and combination reply using concepts from the Chinese room, and (4) identify what Searle's arguments seek to show and what they do not show. Overall, Searle's responses to the replies against his views about AI show that his arguments are consistent: AI is not capable of exhibiting the behavior and thought processes that come naturally to human beings.

Artificial Intelligence (AI) and Weak vs. Strong AI

Scholars find it difficult to establish a solid and concrete definition of artificial intelligence (AI) because the field spans various areas of science. However, to understand fully what AI is and the issues that concern it, it is necessary to study several definitions. According to Taylor (1988, p. 9), AI is "a programming style, where programs operate on data according to rules in order to accomplish goals." Taylor's definition focuses primarily on the functional nature of AI: thoughtful and purposeful programming yields effective means of carrying out and accomplishing work. In this view, programmers develop AI to create tools that increase efficiency and productivity in various fields of life. Okon's definition (2004), on the other hand, focuses on standards of intelligence: AI should exhibit "the ability of a computer (or other machine) to reason and reach conclusions" (2004, p. 285). AI should show a level of intelligence that allows reasoning and rational decision-making. Moreover, AI should operate under an expert system through which knowledge is shared, stored, and processed to develop solutions to various problems (Okon, 2004). Schank also listed the characteristics that identify AI: (1) representation, (2) decoding, (3) inference, (4) control of combinatorial explosion, (5) indexing, (6) prediction and recovery, (7) dynamic modification, (8) generalization, (9) curiosity, and (10) creativity (Partridge & Hussain, 1992, p. 4). Beyond these definitions, the difference between weak and strong AI also underscores the issues discussed below.

Weak AI refers to "the view that intelligent behavior can be modeled and used by computers to solve complex problems" (Coppin, 2004, p. 5). Weak AI thus concerns the role of programming in solving complex problems that human beings could not normally solve; for instance, the ability of a program to process complicated equations and large masses of information within seconds reflects weak AI. While weak AI uses embedded prompts and commands to solve complex problems, strong AI uses embedded intelligence to think like human beings. Strong AI is "the supposition that some forms of artificial intelligence can truly reason and solve problems… that it is possible for machines to become sapient or self-aware" (Kumar, 2009, p. 14). Supporters of strong AI emphasize the importance of integrating a desired level of intelligence and behavior into programming with the purpose of creating a computer that thinks like a human being; they believe it is possible for programmers to create a robot that would exhibit genuine human emotions (Coppin, 2004). To Searle, strong AI is "the view that the appropriately programmed digital computer thereby necessarily has a mind in exactly the same sense that [human beings] have minds" (Grewendorf & Meggle, 2002, p. 19).

While Searle does not directly criticize weak AI, he strongly opposes the concepts of strong AI. His arguments stem from his belief that computers are incapable of adopting and exhibiting human intelligence and behavior. According to Searle, not all forms of AI are flawed; weak AI, for instance, is not. He merely believes that overly ambitious forms such as strong AI are philosophically flawed because their supporters hold that computers could exhibit human behavior and thinking. As Searle put it, strong AI "involves a logical mistake and it's a rather simple mistake, namely the mistake of confusing the syntactical processes of the implemented computer program with the semantic or contentful mental processes of actual human minds" (Grewendorf & Meggle, 2002, p. 19). Searle strongly believes there is no comparison between computer processing, however advanced, and the processing of the human mind. Whatever their designs and configurations, computers cannot simulate the complexity, volatility, and depth of the human mind.

Searle’s Arguments against Strong AI

Searle supported his arguments against strong AI with a thought experiment: the Chinese room. His objective in carrying out this experiment was to prove that strong AI does not exist. In the thought experiment, Searle likened the Chinese room to a computer: programmers input data into the system, and the computer processes the data through a series of scripts or commands. The experiment runs as follows. A man who knows English but does not understand Chinese is placed in a room. In the room, the man looks at cards that contain Chinese characters making up a story, and then answers questions about the story by following instructions written in English. As a result, the man answers the questions correctly thanks to the English instructions, even though he cannot read Chinese. "The point is that, though the output is exactly what a person who understands Chinese would expect, the person in the room understands nothing of Chinese" (Wilkinson, 2000, p. 110). With this experiment, Searle wanted to show the difference between the comprehension of human beings and that of computers. The Chinese room illustrates a computer system that processes information with the help of scripts and commands; the processing may be successful, but this does not guarantee that the computer understands the content and meaning of the data.

Based on the Chinese room experiment, Searle presented his arguments by comparing syntax and semantics. According to Searle, "a computer program is a purely syntactical symbol system, and entirely lacks semantic properties" (Wilkinson, 2000, p. 111). In the Chinese room experiment, syntax refers to the contents of the cards written in Chinese that the man cannot comprehend. Thanks to the instruction cards in English, the man is able to recognize the patterns and answer the questions correctly; however, his ability to do so does not mean he understands Chinese. The system lacks semantics, the capacity to attach meaning to patterns and to understand that meaning. The syntax-versus-semantics distinction in Searle's argument reflects the difference between human beings and computers: computers are syntactical in nature, while human beings are semantical, capable of attaching appropriate meaning based on their objective or subjective understanding of data. As Searle emphasized, "Therefore, human thought is strongly disanalogous to a computer program" (Wilkinson, 2000, p. 111).
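The syntax-without-semantics point can be made concrete with a small, purely illustrative sketch: a program that produces fluent-looking replies by bare symbol lookup. The rule table below is hypothetical; the point is that nothing in the program represents what any of the symbols mean.

```python
# Illustrative sketch of a purely syntactic "Chinese room".
# The rule table is hypothetical: it pairs symbol strings with symbol
# strings, and the program would work identically if the strings were
# arbitrary, meaningless tokens.
RULES = {
    "你好吗": "我很好",          # input pattern -> canned output pattern
    "你叫什么名字": "我叫小明",
}

def chinese_room(symbols: str) -> str:
    """Return the reply dictated by the rule book, or a default token.

    Pure pattern matching: syntax with no semantics attached.
    """
    return RULES.get(symbols, "对不起")  # default reply for unknown input

print(chinese_room("你好吗"))
```

To an outside observer the output looks like a competent answer in Chinese, yet the lookup mechanism, like the man in the room, attaches no meaning to either the question or the reply.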

Searle's arguments also run against the Turing test. During the 1950s, Turing devised a test intended to show whether computers could exhibit human behavior and intelligence. The Turing test, fashioned after a popular party game, involved three individuals. Two of the three participants, a man and a woman, were placed in one room, while the third participant, the judge, was placed in a separate room. During the game, notes were slipped into the judge's room, and his task was to determine whether the answers written on the notes came from the male or the female participant. The test was then applied to AI: the male participant was replaced by a computer while the female participant remained, and the judge had to determine whether a given response came from the computer or the person. If the judge could not reliably tell the computer's responses from the person's, the computer was said to have passed the Turing test (O'Regan, 2012).
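The blind-judging protocol described above can be sketched in a few lines of code. Everything here is a hypothetical stand-in: the two canned respondents, the labels, and the trivial judge exist only to show the structure of the test, in which the judge sees nothing but written answers behind neutral labels.

```python
# Minimal sketch of the imitation-game protocol (all names hypothetical).
def human_respondent(question: str) -> str:
    return "Let me think... probably yes."

def machine_respondent(question: str) -> str:
    return "Let me think... probably yes."  # indistinguishable by design

def run_test(judge, questions) -> bool:
    """Hide both respondents behind neutral labels; the judge sees only
    the written transcript and must name the machine's label.
    Returns True if the judge correctly identifies the machine."""
    labels = {"X": human_respondent, "Y": machine_respondent}
    transcript = {label: [fn(q) for q in questions]
                  for label, fn in labels.items()}
    guess = judge(transcript)
    return labels[guess] is machine_respondent

# A judge who always accuses label "Y" happens to be right here, but
# nothing in the transcript itself could have told the labels apart.
identified = run_test(lambda transcript: "Y", ["Are you human?"])
```

When the transcripts are indistinguishable, any judge's success rate collapses to chance, which is exactly the condition under which the machine is said to pass.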

The Turing test has been met with criticism from various scholars, including Searle, who identify flaws in Turing's design. One criticism is that the judge could merely guess which respondent is the computer, so the outcome partly reflects probability; another is that the test limits the capacity of computers and discounts what programs actually do well, namely storing, retrieving, and processing data far faster than human beings (Kumar, 2009). Other criticisms include the idea that a computer cannot form opinions based solely on the responses it receives, nor assign traits and characteristics that would solidify its understanding of computer or human responses.

According to Whitby (1996, p. 36), the Turing test is also faulty because, first, "it gives no indication as to what a partial success might look like," and second, "it gives no direct indications as to how success might be achieved." Searle's own arguments against the Turing test rest on the idea that computers are primarily cognitive systems that operate on scripts, prompts, and commands. The game of pinpointing whether a computer or a human being is behind a set of responses, however, is not a cognitive task but a psychological one. Therefore, since a computer is incapable of exhibiting psychological behavior, its responses in the Turing exercise cannot deliver the results the test intends (Burkhardt, 1990).

The Chinese Room and Searle’s Responses

The Chinese room established the foundation of Searle's arguments against strong AI, but Searle has been criticized repeatedly for those arguments. One prevalent criticism of the Chinese room experiment is the systems reply. Essentially, the systems reply emphasizes that meaning could be developed by, and attached to, the system as a whole, including the story, the Chinese characters, the English instruction cards, the person, and his responses. The reply focuses on the systemic order of the arrangement, the combination of syntax and semantics, such that the capability of AI relies on the entire system: programmer, data, prompts and scripts, and the AI system itself. Searle responds that even if AI were taken as a system, it would still be unable to attach meaning and exercise semantics, just as the person in the Chinese room would still be unable to understand Chinese even if he were viewed as part of a system (Crumley, 2006). In simple terms, putting the elements of the system together will not give AI the capacity to attach meaning and process information like a human being.

The robot reply is another criticism of Searle's Chinese room experiment. It refers to the idea that "meaning is found in the system's causal interactions with the environment" (Crumley, 2006, p. 108). If the system inherent in the Chinese room were placed inside a robot as its brain, the robot would be able to respond to a human being's questions, guided by scripts, prompts, and commands; proponents of the robot reply thus hold that the robot could respond appropriately to a human being, like a person or companion. Searle, however, debunks the robot reply by emphasizing that despite the robot's capacity to respond to questions and act as a companion to a human being, it remains incapable of attaching meaning to its responses. Even if the robot appeared to understand the meanings behind its responses, it would still be incapable of attaching symbolic meanings, such as the connotative meanings of words or phrases in a conversation. "Knowing the meaning of the symbol… requires having a representation that a particular sequence of symbols is the effect of a particular interaction with the world" (Crumley, 2006, p. 108). Therefore, the robot's interaction with the external environment does not guarantee that semantics is included in AI.

The brain simulator reply refers to the idea that "the working of a brain is simulated to such detail that all the functional processes going on inside it are reflected" (Cilliers, 1998, p. 50). In the Chinese room, the system would be created so that the thought processes of the person in the room simulate the brain of someone who understands Chinese; in this way, the person would seem able to understand and attach meanings to the characters on the cards. As in his responses to the systems reply and the robot reply, Searle concedes that the system might be structured as a sophisticated simulation of the responses and cognitive processes of the human brain. He acknowledges this possibility, but argues that regardless of the system's sophistication, it would still be unable to understand data or information as human beings do. In the Chinese room, the man may appear to think like someone who understands Chinese, but only because of the simulation; on closer inspection, he still does not understand Chinese. Searle also emphasizes that the brain simulator reply focuses on simulating what the AI system is expected to do, not on how it should do it the way human beings do (Cilliers, 1998).

The combination reply is a response to the perceived weaknesses of the individual replies to Searle's ideas about AI. It puts together the ideas from the previous responses, chiefly the robot reply and the brain simulator reply, in contrast to Searle's arguments against strong AI. The aim is to reach a genuine understanding of how the system should be structured to achieve behavior and thinking comparable to a human being's. The combination reply thus emphasizes that the structure of AI could be improved if the system were simulated and adjusted (the brain simulator reply) repeatedly, based on its interaction with the external environment (the robot reply). The result would be an AI system that is more flexible and adaptive, and therefore more effective and realistic. Searle, however, argues that despite all the adjustments and simulations based on the system's interaction with its environment, the AI system still could not exhibit the capacity to apply semantics. Scripts, commands, and prompts would still structure the AI, and thus it would never be able to create and discern meaning on its own as human beings can.

Conclusion

The discussion of Searle's arguments against strong AI, and of the responses and criticisms he received, shows that AI cannot exhibit the characteristics and qualities of human beings. Searle does not rule out the strengths and advantages of AI; he acknowledges what computers can do that human beings cannot. However, Searle holds that computers merely make work easier and more efficient for human beings; they cannot be structured to act or think like human beings. His arguments are grounded in the Chinese room thought experiment: despite AI's contribution to productivity, it cannot generate, attach, or discern meaning as human beings can. Many scholars have criticized Searle's arguments through the systems reply, robot reply, brain simulator reply, and combination reply, yet he remains steadfast in arguing that, unlike humans, AI cannot process information with depth and complexity. Overall, Searle's arguments show that despite the rapid progression of technology, AI has its limits: even with the capacity to store, retrieve, and process data at remarkable speeds, an AI system cannot adopt the cognitive, psychological, emotional, and even physical functions of the human being.
