The Mind as Computer Metaphor: Benson and the Mistaken Application of Mental Steps to Software (Part 3)

Go to: Part 1, Part 2, Part 3, Part 4, Part 5

(This series of posts is based on an upcoming paper for the AIPLA Spring 2016 meeting.)

Part III. FUNCTIONALISM: A PHILOSOPHICAL ARGUMENT IN SUPPORT OF THE FUNCTIONAL EQUIVALENCE OF MENTAL STEPS AND COMPUTER OPERATIONS

Even if UDC did not support the Solicitor General’s argument that “the functions themselves are the same procedures which a human being would perform in working the same computation,” there remains the question of whether this theory nonetheless holds merit on its own. This is an important question because the argument underlies the “pencil and paper” test of patent eligibility that is frequently invoked by the courts.

The functional equivalence argument—that the mind/brain operates in much the same way as a digital computer—is now a familiar part of the “mind as computer” metaphor. The “mind as computer” metaphor is presently formalized as the computational theory of mind or computationalism,1 the view “that intelligent behavior is causally explained by computations performed by the agent’s cognitive system (or brain).”2 Simply stated, as applied to humans, it holds that cognition in the brain is carried out by computation. This is now the dominant view in cognitive science and related fields.

The Solicitor General’s argument is more specific than that. It argues that computers actually perform the same functional procedures as the mind/brain itself. This stronger claim falls within a specific version of computationalism known as “machine functionalism” formalized by Hilary Putnam:

According to this model, psychological states (“believing that p,” “desiring that p,” “considering whether p,” etc.) are simply “computational states” of the brain. The proper way to think of the brain is as a digital computer. Our psychology is to be described as the software of this computer—its “functional organization.”3

Simplified, functionalism is the view that mental states are identified by the functions they perform, rather than by the underlying structure of the brain that generates them. This thesis was inspired by numerous developments in computer science and in the field of artificial intelligence, which sought to construct machines that could think. Early successes in the field, such as the computer program Logic Theorist (1956), which successfully proved numerous mathematical theorems by a deductive process, suggested that this goal was achievable.4 In the 1930s Alan Turing proposed the model of an abstract machine (the Turing Machine) that could be programmed to compute any computable sequence.5 In the 1940s McCulloch and Pitts modeled the operation of neurons in the brain using Boolean logic, the same logic used in computer programming.6 John von Neumann, regarded with Turing as one of the architects of the modern computer,7 took these works further and proposed a general theory of automata in which both living organisms and machines could be described using the same principles, including those of the sort described by McCulloch and Pitts. To von Neumann, biological entities, including the human brain, could be modeled and replicated in digital mechanisms, at least under certain circumstances.8

However, functionalism, as proposed by Putnam, and as implicitly present in the mental steps doctrine, is not without its problems. Putnam himself came to reject his own doctrine: “Functionalism, construed as the thesis that propositional attitudes are just computational states of the brain, cannot be correct.”9

But even machine functionalism does not make the same philosophical and factual commitments set forth by the Solicitor General’s procedural equivalence argument. Machine functionalism describes the operations of the mind/brain architecturally. It makes no argument or assumption about how specific types of computations would be made by the brain, nor does it imply that a digital computer, even accounting for the differences in its “physical characteristics,” performs the same procedures as a human brain would for a given function. While there are still strong arguments for more sophisticated versions of functionalism, the Solicitor General’s procedural equivalence argument is very likely wrong, particularly when applied to mathematical operations.

Over the past two decades, significant work in neurophysiology has begun to identify the discrete structures in the brain that are involved in mathematical operations and how those operations are performed.10 The brain does not simply add, subtract, multiply and divide numbers in a single region, but instead uses between ten and twenty different regions performing different tasks.11 In particular, multiplication first involves conversion of the numbers into a linguistic or verbal (word) format to access a verbal (not numerical) memory of multiplication tables; in contrast, number comparisons (e.g., “is 3 > 7?” or deciding which of two images has more “dots”) are entirely non-linguistic.12 As Dehaene notes, “The diversity of cerebral areas involved in multiplication and comparison underline once more that arithmetic is not a holistic phrenological ‘faculty’ associated with a single calculation center. Each operation recruits an extended cerebral network. Unlike a computer, a brain does not have a specialized arithmetic processor.”13 Not only are many mathematical operations linguistically driven, they also use the same brain circuits used for the perception of time, space, and even hand and eye movement.14 Tests showed that during addition, subjects’ eyes moved to the right (increasing along an internal number line), and during subtraction, their eyes moved to the left (decreasing along the number line).15 Dehaene observes: “When we think about numbers, or do arithmetic, we do not solely rely on a purified, ethereal, abstract concept of number. Our brain immediately links the abstract number to concrete notions of size, location and time. We do not do arithmetic ‘in the abstract.’”16 The brain does not merely compute numbers: it uses multiple and diverse operations involving linguistic, spatial, visual, and temporal components.

Thus, the arguments and assumptions that underlie Benson’s procedural equivalence of computers and brains are false. Computers do not convert digital bits for “1” and “0” into the words “one” and “zero,” or activate a digital camera (the “eyes”) to determine results. The actual computational procedures performed by a computer are entirely different, both in form and in process, from what a human does, even if both ultimately achieve the same results. For example, when a computer multiplies two numbers, the underlying procedures bear no resemblance to what a human would do. What a human does in a few operations to multiply two digits, say “9 x 8,” requires dozens of operations at the level of individual logic gates (complexes of transistors). Even if a person were to perform the calculation in binary, the sequence of operations would be quite different.
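To make the contrast concrete, the following is a minimal sketch, offered purely for illustration, of the shift-and-add procedure that a binary multiplier carries out in the general case (actual hardware uses parallel adder circuits rather than a software loop, so this is an analogy, not a description of any particular machine). Where a person recalls “9 x 8” in one step from a memorized, verbally encoded table, the machine walks the bits of one operand and conditionally adds shifted copies of the other:

```python
# Minimal illustrative sketch (not actual hardware): shift-and-add multiplication,
# the general scheme a binary multiplier realizes across many logic gates.
# A human recalls "9 x 8 = 72" in a single step from a memorized table; the
# machine instead processes one bit at a time.

def shift_and_add_multiply(a: int, b: int) -> int:
    result = 0
    while b:
        if b & 1:            # if the lowest bit of b is set...
            result += a      # ...add the current shifted copy of a
        a <<= 1              # shift a left (i.e., multiply it by 2)
        b >>= 1              # advance to the next bit of b
    return result

print(shift_and_add_multiply(9, 8))  # 72, reached through a series of single-bit steps
```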

Another problem with this procedural equivalence argument is that it turns the inventor’s disclosure of the invention as required by Section 112 against the invention’s eligibility under Section 101. To satisfy Section 112, the disclosure must allow one of skill in the art to practice the invention. To an engineer this typically means an explanation of the invention’s operative principles, often in terms of engineering equations or other computational representations. Once thus described, a court can readily conclude that a human can perform the equations with “pencil-and-paper”—a trivial conclusion at best.

One rebuttal to this line of analysis is that of course computers do not do exactly what human brains do, because they have digital circuits, not neurons. What matters, this line of reasoning goes, is that the operations are functionally equivalent, not physically or procedurally the same. But this argument begs the question, since there does not appear to be any level of functional organization at which the actual native operations of the brain use the “same procedures” as, or are functionally equivalent to, those of the computer. The argument assumes that the relevant procedures are the entirely artificial ones created by humans to define the mathematical operations of interest. But that ignores the fact that these operations are implemented on a machine that was designed in the first place, in accordance with mathematical principles, precisely for the purpose of implementing such procedures. The power of computers comes not from an ability to perform monolithically complex equations per se, but rather from a design that relies on the ability of the hardware to perform a limited number of very simple, repetitive operations at high speed. This hardware model was adopted because mathematical problem solving involves breaking complex operations down into a large (often extremely large) number of simpler operations. After all, humans invented the formal symbolism of arithmetic, and likewise invented computers, as well as other machines, to perform these functions. Put another way, generally speaking, there is no algorithm executed by a computer that was not first thought of by a human computer programmer. It should be no surprise then, let alone considered an insightful analysis, that a person can perform the operations described for a computer.
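As a rough illustration of that design, the sketch below, which is hypothetical and chosen only for its simplicity, evaluates a polynomial by Horner’s rule, a textbook example of decomposing a seemingly complex expression into a long run of very simple, repeated steps:

```python
# Illustrative sketch only: Horner's rule reduces polynomial evaluation to one
# multiply and one add per coefficient, repeated in a loop. This is the kind of
# simple, repetitive workload digital hardware is built to execute at high speed.

def horner(coefficients, x):
    """Evaluate a polynomial given its coefficients from highest to lowest degree."""
    acc = 0
    for c in coefficients:
        acc = acc * x + c    # one multiply and one add, over and over
    return acc

# 3x^3 + 2x^2 + 5x + 7 evaluated at x = 2
print(horner([3, 2, 5, 7], 2))  # 49
```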

Further, the articulation of Step 1 of the Alice test, which asks whether the claim is “directed to” an abstract idea, only serves to make matters worse, not better.17 The courts use this step as a “quick look” for the “gist” of the claim.18 This merely allows the courts to create a high-level description of the purpose of the invention, which in the software domain is frequently to solve a functional problem, the very reason humans create artifacts in the first place.19 At that point it becomes trivially easy to argue that a human could perform the function. For example, in Comcast IP Holdings I, LLC v. Sprint Commc'ns Co. L.P., the claim was directed to a method of optimizing a telephone network, and included a step of “determining whether a telephony parameter associated with the request requires acceptance of a user prompt to provide to the application access to the telephony network.”20 The court boiled this down to simply “the abstract idea at the heart of the claim is the very concept of a decision,” which immediately led to the conclusion that “A decision is a basic mental process upon which everyone relies. A decision may be performed, and generally is performed, entirely in the human mind.”21 In short, Step 1 of the Alice test enables the question-begging of the fictional form of the mental steps doctrine to begin right off the bat.

Another key difference between how computers perform their operations and how humans do is that humans, but not computers, understand what they are doing and the meaning of their operations. A human undertaking the task of sorting books on a shelf alphabetically by title knows that she is dealing with books, that the sequence of words on each binding is a title, that words are composed of letters, and so forth. She performs these operations directly on the words. This knowledge of the domain impacts how the operations themselves are performed. A computer can sort the same titles, but only once each title is represented as a string of numbers—the computer does not “know” that the numbers represent a book title any more than the human’s finger “knows” she is moving a book, and it cannot use this knowledge to change the manner of sorting.
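A short, purely illustrative sketch makes the point concrete (the titles and the case-folding rule are hypothetical examples, not drawn from any case): the machine compares numeric character codes, and the librarian’s notion of “alphabetical by title” emerges only because a human supplies the rule.

```python
# Illustrative sketch only: to the machine, "titles" are sequences of numeric
# character codes compared value by value, with no notion of books or words.

titles = ["war and peace", "Moby-Dick", "Ulysses", "a tale of two cities"]

print(sorted(titles))
# ['Moby-Dick', 'Ulysses', 'a tale of two cities', 'war and peace']
# Raw code-point order: every uppercase letter sorts before every lowercase one.

print(sorted(titles, key=str.lower))
# ['a tale of two cities', 'Moby-Dick', 'Ulysses', 'war and peace']
# "Alphabetical" only because a person supplied the rule (ignore case) that
# encodes what sorting book titles is supposed to mean.

print([ord(ch) for ch in "Moby"])  # [77, 111, 98, 121]: the numbers actually compared
```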

Thus, whether taken as a specific or a general statement, the arguments made by the Solicitor General and adopted by the Supreme Court do not support the functional equivalence of the operations of digital computers in relation to human minds.

Footnotes:

1 See The Computational Theory of Mind, Stanford Encyclopedia of Philosophy (Oct. 16, 2015), http://plato.stanford.edu/entries/computational-mind/.

2 Piccinini, Computationalism in Philosophy of Mind, Philosophy Compass 4:515-532 (2009), DOI: 10.1111/j.1747-9991.2009.00215.x.

3 Putnam, Representation and Reality 73 (1988).

4 See Dasgupta, It Began with Babbage: The Genesis of Computer Science 236 (2014). Dasgupta’s book provides an exceptional review and analysis of the history of computer science.

5 Turing, On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society s2-42:230-265 (1937).

6 McCulloch, Pitts, A Logical Calculus of the Ideas Immanent in Nervous Activity, Bulletin of Mathematical Biophysics 5:115–133 (1943).

7 von Neumann authored the seminal report on the EDVAC computer in 1945, in which he described the architecture of the stored-program computer. See Dasgupta, supra note 4, at 108-112.

8 von Neumann, The General and Logical Theory of Automata (1951).

9 Putnam, supra note 3, at 73 (emphasis in original).

10 See generally Dehaene, The Number Sense—How the Mind Creates Mathematics (Revised and Updated Ed.) (2011). See also Dehaene, Molko, Cohen, and Wilson, Arithmetic and the brain, Current Opinion in Neurobiology 14:218-224 (2004), and references cited therein (summarizing several decades of research into how the brain performs mathematical and related operations).

11 Dehaene, supra note 10, at 200.

12 Id. at 180, 202, 242.

13 Id. at 204.

14 Id. at 244-245.

15 Id. at 246.

16 Id. See also Lakoff, Johnson, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought 4 (1999) (“Reason is not disembodied, as the tradition has largely held, but arises from the nature of our brains, bodies, and bodily experience…The same neural and cognitive mechanisms that allow us to perceive and move around also create our conceptual systems and modes of reason. Thus, to understand reason we must understand the details of our visual system, our motor system, and the general mechanisms of neural binding.”).

17 Alice, 134 S. Ct. at 2355.

18 Enfish, LLC v. Microsoft Corp., 56 F. Supp. 3d 1167, 1173 (C.D. Cal. 2014) ("Step one is a … 'quick look' test, the purpose of which is to identify a risk of preemption and ineligibility. If a claim's purpose is abstract, the court looks with more care at specific claim elements at step two."); Open Text S.A. v. Box, Inc., 78 F. Supp. 3d 1043, 1046 (N.D. Cal. 2015) (citing Bilski v. Kappos, 561 U.S. 593, 611-12 (2010)) (“In evaluating the first prong of the Mayo/Alice test, which looks to see if the claim in question is directed at an abstract idea, the Court distills the gist of the claim.”).

19 See Simon, The Sciences of the Artificial 4-5 (1996) (“The engineer, and more generally the designer, is concerned with how things ought to be, how they ought to be in order to attain goals, and to function.”) (emphasis in original).

20 Civ. No. CV 12-205-RGA, 2014 WL 3542055, at *4 (D. Del. July 16, 2014).

21 Id. at *6.

*The perspectives expressed in the Bilski Blog, as well as in various sources cited therein from time to time, are those of the respective authors and do not necessarily represent the views of Fenwick & West LLP or its clients.