2005/06/07

Cognition or Computation, first thoughts.

An interesting discussion from eripsa that should be weighed in on, albeit not by me. Still, we shall endeavor to stake out a position, at least provisionally, based on what seems plausible at the moment. We keep in mind, of course, that the depth of literature in this field is so far unread and unabsorbed, so we shall be speaking in some measure of ignorance. Let the following be a tenuously held position, one easy to topple were any compelling evidence against it offered. No discussion of this specific statement has been undertaken prior to its issue, and indeed no study has been made of the thread from which it is pulled. It seems best, after all, first to set out what one has so far in mind, to make it clear, and then to modify it as warranted.

There can be no doubt that supernatural forces are impossible. If "supernatural" powers, properties, or forces exist, they are merely unstudied. We do well to remember Clarke's proclamation that any sufficiently advanced technology is indistinguishable from magic; much the same may be said about the properties of what is experienced: all aspects of that which is are, in the relevant empiricist sense, natural--there is no mystery beyond lack of knowledge. And of course such knowledge is "to be had" (we do well also to recall a different sense of the final statement of Wittgenstein's Tractatus--if we can "point at it" we are able to account for it, and nothing is unaccountable). The result is that there is no soul as revealed by the religious books. Or, if there is, our empirical investigations will eventually reveal it.

The point is that it will eventually be impossible to deny that there is some sort of "reductionist" explanation of the workings of mind, and that this explanation is itself computable. That is, there will be a way to predict future subjective states, in some sense, from current brain states and subjective states. The difficulty is not with the theoretical possibility but with our ability to understand the different levels of description as separate, combined, or identical. But this is perhaps solvable--who in the sixteenth century could have understood a cell phone?

As for the "computer" aspect of the brain, it is of course a false analogy to suppose that a solid-state computer can perform the functions the brain does. Partially this is because the brain does not operate in discrete states. Partially it is because it operates within a probabilistic framework far more than the silicon chips of binary computers, or the tapes of Universal Turing Machines, do. Nevertheless, the explanation of consciousness and subjective states will be available in purely "physical" terms, and soon enough at that.
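
To make the contrast vivid, consider a small sketch in Python (the states, tables, and probabilities here are invented wholesale, purely for illustration): a deterministic transition, as in a Turing machine, maps a state and an input to exactly one next state, while a probabilistic transition of the rough sort gestured at above maps them only to a distribution over next states.

    import random

    # A deterministic, discrete transition: the same state and input
    # always yield the same next state (the Turing-machine picture).
    DETERMINISTIC_TABLE = {
        ("idle", 0): "idle",
        ("idle", 1): "active",
        ("active", 0): "idle",
        ("active", 1): "active",
    }

    def deterministic_step(state, symbol):
        return DETERMINISTIC_TABLE[(state, symbol)]

    # A probabilistic transition: the same state and input yield only a
    # distribution over next states (a crude stand-in for the brain-like
    # case described above; the numbers are made up).
    PROBABILISTIC_TABLE = {
        ("idle", 0): [("idle", 1.0)],
        ("idle", 1): [("active", 0.8), ("idle", 0.2)],
        ("active", 0): [("idle", 0.7), ("active", 0.3)],
        ("active", 1): [("active", 0.9), ("idle", 0.1)],
    }

    def probabilistic_step(state, symbol):
        outcomes, weights = zip(*PROBABILISTIC_TABLE[(state, symbol)])
        return random.choices(outcomes, weights=weights)[0]

    print(deterministic_step("idle", 1))   # always "active"
    print(probabilistic_step("idle", 1))   # usually "active", sometimes "idle"

Nothing in the sketch settles whether the brain is more like the second table; it only shows what the difference in kind amounts to.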

Then there is the question of intelligence, as supposedly contrasted with emotion. This is what is known as a false dichotomy, and on its face no less. Perhaps some bumbling boobs can still believe that these sorts of states are separate, but whoever they are, perhaps they ought to wonder what they think an emotion is, after all. No sympathy for these anti-humanists. An emotion is a subjective state no less than "understanding" or "seeing" or "remembering". Supposed "qualitative" differences are illusory. The state of understanding has one set of qualia and the state of, say, sadness has another set. If it seems odd to speak of a state being understanding-ish, then perhaps it is only because we lack the proper vocabulary. But of course vocabulary is in an important way arbitrary--in the sense that we can, as philosophers, simply make this word up and know that it has an extension. For we just saw what it was.

Ordinary experience bears this out, of course. (A debt to Dewey is here to be acknowledged.) The "understanding" and "sadness" qualia may mix in the processing of a particular sequence of data--what we could describe as 'the viewing of a movie about the Holocaust', for instance.

Where does this get us? Well, it points at this final idea: that an "intelligent" system which operates only logically cannot be a person. Its logical functions must be constrained by parameters which we can call "interests." These will be either arbitrary (decided by the "creator" of the system) or functional (due to the nature of the system--some purpose for or in its goings-on as one or more of the things it is). If you discern an Aristotelian flavor herein you are nosing in the right direction, though be careful not to get too close to the source, which may burn. Emotion is, for the type of thing we "know" as being a person, i.e. a human being, a functional parameter on its internal state. By this we can only mean that we are set up to have interests of a certain sort which are not reducible to purely logical consideration. For, of course, geometric reasoning must have assumptions. The functional interests (and the arbitrary ones, for that matter) serve, for the system, as "assumptions." The processing hardware gives rise to states that do not conform to the parameters of traditional predicate logic. Witness, for instance, our ability to use common nouns such as "cat" correctly without the term's being quantifiable either in truth-functional terms or probabilistic ones. Emotions are, then, a data-processing mechanism that works through a different set of "rules" than purely logical ones (and yes, we have been throwing 'logical' around quite loosely). A toy sketch of the idea follows below.
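
Since talk of "interests" serving as "assumptions" may look like hand-waving, here is a toy sketch in Python of the bare structure intended (every action, consequence, and weight below is made up for the purpose): pure inference lists what follows from each candidate act, but it is the interest parameters--some stipulated by a designer, some serving the system's own functioning--that make any act preferable to any other.

    # Candidate actions and their (stipulated) consequences.
    CONSEQUENCES = {
        "answer_question": {"informs_other", "costs_time"},
        "stay_silent": {"avoids_error"},
        "change_subject": {"costs_time", "avoids_error"},
    }

    # "Interests" -- some arbitrary (set by a designer), some functional
    # (serving the system's continued working). They play the role of
    # assumptions: without them, no action is preferable to any other.
    INTERESTS = {
        "informs_other": 2.0,   # arbitrary: the designer values helpfulness
        "avoids_error": 1.5,    # functional: protects the system's reliability
        "costs_time": -1.0,     # functional: time is a limited resource
    }

    def preference(action):
        """Score an action by summing the interest weights of its consequences."""
        return sum(INTERESTS.get(c, 0.0) for c in CONSEQUENCES[action])

    best = max(CONSEQUENCES, key=preference)
    print(best)  # which act the interests, not logic alone, single out

The point of the sketch is only structural: delete the INTERESTS table and the "logic" of the system grinds on, but nothing is any longer worth doing.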

What, then, are programs and instructions for computers? They are just particular events going on in particular locations and involving particular materials. One could of course describe them in general terms. To say that a computer "only follows commands" within its software architecture is to say something true, given a wide enough definition of 'command'. The same sense of 'command', however, also shows that human brains likewise operate only due to certain inputs, which are 'commands' in the relevant sense, and so one must question, then, whether we should not impute metaphysical intentionality to machines, animals, falling rocks, and so forth. But the answer is simple. Instead of taking the notion of intentionality and trying to apply it to inanimate things, we ought to examine just what it turns out to be. A simple analysis reveals that a certain sort of recursive loop within a system, whereby the system relates to its environment (in some here-unspecified way), is the characteristic of an event's being "caused" "intentionally." Thus someone who acts intentionally may be involved in event X. Following Anscombe for the most part, we can say that of the descriptions A, B, C, etc. under which the event falls (and there are indefinitely many), not all are descriptions of what was done intentionally. In event X, the person did A and C, but accidentally also B, D, etc. (Thus we also reach ethics, in the form of plausible foresight and culpability.) We get out of the trap (and the sorites paradox of intentionality) by recognizing that the reflexive consideration of what is to be done is the mark of intentionality.
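
The reflexive-loop picture can likewise be given a bare-bones sketch in Python (the event, its descriptions, and the agent's "aims" are invented for illustration, loosely in the spirit of the switch-and-light examples familiar from the action-theory literature): one event falls under many descriptions, and only those descriptions the agent considered and endorsed in deliberating count as what was done intentionally.

    # One event, X, falls under indefinitely many descriptions; we list a few.
    DESCRIPTIONS_OF_X = {
        "moving an arm",
        "flipping the switch",
        "turning on the light",
        "alerting the burglar",       # foreseeable, perhaps, but not aimed at
        "causing the bulb to warm",   # not considered at all
    }

    class Agent:
        def __init__(self, aims):
            self.aims = set(aims)

        def deliberate(self, descriptions):
            """The reflexive step: consider descriptions of what one is about
            to do, and endorse those that answer to one's aims."""
            return {d for d in descriptions if d in self.aims}

    agent = Agent(aims={"flipping the switch", "turning on the light"})
    endorsed = agent.deliberate(DESCRIPTIONS_OF_X)

    for d in sorted(DESCRIPTIONS_OF_X):
        tag = "intentional" if d in endorsed else "not intentional"
        print(f"{d}: {tag}")

Again, the sketch analyzes nothing; it only exhibits the shape of the claim, that the loop of considering one's own prospective doing is what marks some descriptions of the event as intentional and leaves the rest merely done.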

This may look as though nothing has moved, for now perhaps it seems we are describing intentionality in terms of "what it wanted to do" or "what it went after", which explains nothing. To this we have no answer except perhaps that we are after explication and not some ultimate explanation. We must have assumptions, after all. That final answer will seem like a cop-out to some, and we may allow that it appears so, yet for an initial statement of position we can go no further.
