Thus, no matter what the current debate or discussion, the central issue is often one of conceptual content: to be without a concept is to be without an idea, and we are left confronting the underlying paradox of why there is something instead of nothing. Something makes what would otherwise be mere utterances and inscriptions into instruments of communication and understanding. The philosophical problem is to demystify this power, and to relate it to what we know of ourselves and of our perception of the world around us.
Contributions to this study include the theory of ‘speech acts’ and the investigation of communication, especially the relationship between words and ‘ideas’, and between words and the ‘world’. What an utterance or sentence expresses is the proposition or claim made about the world. By extension, the content of a predicate, that is, any expression that combines with one or more singular terms to make a sentence, is the condition that the entities referred to may satisfy, in which case the resulting sentence will be true. Consequently we may think of a predicate as a function from things to sentences or even to truth-values; more generally, the content of any sub-sentential component is what it contributes to the content of the sentences that contain it. The nature of content is the central concern of the philosophy of language.
All in all, it is common to characterize people by assuming their rationality, and the most evident display of our rationality is the capacity to think. Thinking is the rehearsal in the mind of what to say, or of what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no a priori reason that their deliberations should take any more verbal a form than their actions. It is perennially tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium, that represents aspects of the world. However, the model has been attacked, notably by Ludwig Wittgenstein (1889-1951), whose influential application of these ideas was in the philosophy of mind. Wittgenstein explores the role that reports of introspection, or sensations, or intentions, or beliefs can play in our social lives, in order to undermine the Cartesian picture that they function to describe the goings-on in an inner theatre of which the subject is the lone spectator. Passages that have subsequently become known as the ‘rule-following’ considerations and the ‘private language argument’ are among the fundamental topics of modern philosophy of language and mind, although their precise interpretation is endlessly controversial.
Effectively, the hypothesis especially associated with Jerry Fodor (1935-), who is known for his ‘resolute realism’ about the nature of mental functioning, is that mental processing occurs in a language different from one’s ordinary native language, but underlying and explaining our competence with it. The idea is a development of the notion of an innate universal grammar (Avram Noam Chomsky, 1928-): since computer programs are linguistically complex sets of instructions whose execution explains the surface behaviour of the machine, it is tempting to suppose that an underlying inner language likewise explains our own intelligent behaviour, prior to any retrospective reflection on it.
As an explanation of ordinary language-learning and competence, the hypothesis has not found universal favour, since it accounts for ordinary representational powers only by invoking the image of a learner translating into an innate language whose own powers are mysteriously a biological given. An alternative is the view that everyday attributions of intentionality, beliefs, and meaning to other persons proceed by means of the tacit use of a theory that enables one to construct these interpretations as explanations of their doings. This view is commonly held along with ‘functionalism’, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending upon which feature of theories we stress. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on.
The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the nonexistence of a medium in which this theory can be couched, since the child learns the minds of others and the meaning of terms in its native language simultaneously. On the rival view, understanding is not gained by the tacit use of a ‘theory’ enabling us to infer what thoughts or intentions explain others’ actions, but by re-living the situation ‘in their shoes’, or from their point of view, and thereby understanding what they experienced and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they were our own. The suggestion is a modern development of the ‘verstehen’ tradition frequently associated with Dilthey (1833-1911), Weber (1864-1920) and Collingwood (1889-1943).
We may call any process of drawing a conclusion from a set of premises a process of reasoning. If the conclusion concerns what to do, the process is called practical reasoning; otherwise, pure or theoretical reasoning. Evidently, such processes may be good or bad: if they are good, the premises support or even entail the conclusion drawn; if they are bad, the premises offer no support to the conclusion. Formal logic studies the cases in which conclusions are validly drawn from premises, but little human reasoning is overtly of the forms logicians identify. Partly, this is because we are often concerned to draw conclusions that ‘go beyond’ our premises, in the way that conclusions of logically valid arguments do not; this is the process of using evidence to reach a wider conclusion. Some are pessimistic about the prospects of a confirmation theory here, denying that we can assess the results of abduction in terms of probability. By contrast, a cognitive process in which a conclusion is drawn from a set of premises is usually counted as an inference only where the conclusion is supposed to follow from the premises, i.e., where the inference is logically valid, deducibility being defined syntactically without any reference to the intended interpretation of the theory. Furthermore, as we reason we use an indefinite lore, or commonsense set of presuppositions, about what is likely or unlikely; a task of an automated reasoning project is to mimic this casual use of knowledge of the way of the world in computer programs.
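The idea of a program drawing conclusions that are ‘supposed to follow from the premises’ can be sketched minimally. The rule set and fact names below are invented for illustration; this is a toy forward-chaining loop, not a description of any actual automated reasoning system.

```python
# Minimal sketch of forward chaining: repeatedly apply modus ponens
# (from the premises of a rule all being established, conclude its
# conclusion) until no new conclusions appear.

def forward_chain(facts, rules):
    """facts: set of atoms; rules: list of (premises, conclusion) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)  # the rule "fires"
                changed = True
    return derived

# Hypothetical commonsense-style rules for the example.
rules = [({"raining"}, "streets_wet"),
         ({"streets_wet", "cycling"}, "take_care")]
print(forward_chain({"raining", "cycling"}, rules))
```

Every conclusion the loop adds is syntactically deducible from the starting facts, which is exactly the contrast the paragraph draws with ampliative, evidence-extending inference.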
Some ‘theories’ simply emerge as collections of supposed truths that no one has organized, making the theory difficult to survey or study as a whole. The axiomatic method is an ideal for organizing a theory: one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory more tractable since, in a sense, those few contain all the truths. In a theory so organized, the few truths from which all others are deductively implied are called ‘axioms’. David Hilbert (1862-1943) had argued that, just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made mathematical objects, so axiomatic theories, which are means of representing physical processes and mathematical structures, could be made objects of mathematical investigation.
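A standard textbook illustration of this ideal (supplied here for reference, not drawn from the passage above) is the elementary theory of groups, whose truths are all deductively inferable from three axioms:

```latex
\begin{align*}
\text{(G1)}\quad & \forall x\,\forall y\,\forall z\;\, (x \cdot y) \cdot z = x \cdot (y \cdot z)
  && \text{associativity}\\
\text{(G2)}\quad & \forall x\;\, e \cdot x = x \cdot e = x
  && \text{identity element}\\
\text{(G3)}\quad & \forall x\,\exists y\;\, x \cdot y = y \cdot x = e
  && \text{inverses}
\end{align*}
```

From these few, theorems such as the uniqueness of the identity element follow deductively, just as the axiomatic ideal demands.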
Confirmation theory aside, a theory in the philosophy of science is a generalization, or set of generalizations, purportedly referring to unobservable entities, i.e., atoms, genes, quarks, unconscious wishes. The ideal gas law, as an example, refers to such observables as pressure, temperature, and volume, while the ‘molecular-kinetic theory’ refers to molecules and their properties. Although an older usage suggests the lack of adequate evidence in support of a ‘mere theory’, present philosophical usage does not carry that connotation. It does, however, follow a tradition (as in Leibniz, 1704) in which many philosophers held the conviction that all truths, or all truths about a particular domain, followed from a few governing principles. These principles were taken to be either metaphysically prior or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that what exists is ‘caused’ by them. When the principles were taken as epistemologically prior, that is, as ‘axioms’, they were taken to be either epistemologically privileged, e.g., self-evident, not needing to be demonstrated, or such that all truths do indeed follow from them by deductive inference. Gödel (1931) showed, in the spirit of Hilbert, treating axiomatic theories as themselves mathematical objects, that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms such that we could effectively decide, of any proposition, whether or not it was in that class, would be too small to capture all of the truths.
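Stated in modern terms (a standard formulation supplied for reference, not drawn from the passage above), Gödel's first incompleteness theorem reads:

```latex
\text{If } T \text{ is a consistent, effectively axiomatized theory extending elementary arithmetic,} \\
\text{then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \lnot G_T .
```

The ‘effectively axiomatized’ clause is what the passage glosses as our being able to decide, of any proposition, whether or not it belongs to the class of axioms.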
The notion of truth occurs with remarkable frequency in our reflections on language, thought and action. We are inclined to suppose, for example, that truth is the proper aim of scientific inquiry, that true beliefs help us to achieve our goals, that to understand a sentence is to know which circumstances would make it true, that reliable preservation of truth as one argues from premises to a conclusion is the mark of valid reasoning, that moral pronouncements should not be regarded as objectively true, and so on. To assess the plausibility of such theses, and to refine them and to explain why they hold (if they do), we require some view of what truth is: a theory that would account for its properties and its relations to other matters. Thus, there can be little prospect of understanding our most important faculties in the absence of a good theory of truth.
Such a theory, however, has been notoriously elusive. The ancient idea that truth is some sort of ‘correspondence with reality’ has never been articulated satisfactorily, and the nature of the alleged ‘correspondence’ and the alleged ‘reality’ remains objectionably enigmatic. Yet the familiar alternative suggestions, that true beliefs are those that are ‘mutually coherent’, or ‘pragmatically useful’, or ‘verifiable in suitable conditions’, have each been confronted with persuasive counterexamples. A twentieth-century departure from these traditional analyses is the view that truth is not a property at all: the syntactic form of the predicate ‘is true’ distorts its real semantic character, which is not to describe propositions but to endorse them. Nevertheless, this radical approach has also faced difficulties and suggests, counterintuitively, that truth cannot have the vital theoretical role in semantics, epistemology and elsewhere that we are naturally inclined to give it. Thus, truth threatens to remain one of the most enigmatic of notions: an explicit account of it can seem essential yet beyond our reach. All the same, recent work provides some grounds for optimism.
A theory, again, in the philosophy of science, is a generalization or set of generalizations purportedly referring to unobservable entities; the molecular-kinetic theory, for instance, refers to molecules and their properties. Although an older usage suggests the lack of adequate evidence in support of a ‘mere theory’, latter-day philosophical usage does not carry that connotation: Einstein’s Special and General Theories of Relativity, for example, are taken to be extremely well founded.
There are two main views on the nature of theories. According to the ‘received view’, theories are partially interpreted axiomatic systems; according to the ‘semantic view’, a theory is a collection of models (Suppe, 1974).
The belief that snow is white owes its truth to a certain feature of the external world, namely, to the fact that snow is white. Similarly, the belief that dogs bark is true because of the fact that dogs bark. This trivial observation leads to what is perhaps the most natural and popular account of truth, the ‘correspondence theory’, according to which a belief (statement, sentence, proposition, etc.) is true just in case there exists a fact corresponding to it (Wittgenstein, 1922). This thesis is unexceptionable in itself. All the same, if it is to provide a rigorous, substantial and complete theory of truth, if it is to be more than merely a picturesque way of asserting all equivalences of the form ‘the belief that p is true if and only if p’, then it must be supplemented with accounts of what facts are, and of what it is for a belief to correspond to a fact; and these are the problems on which the correspondence theory of truth has foundered. For one thing, it is far from obvious that any significant gain in understanding is achieved by reducing ‘the belief that snow is white is true’ to ‘the fact that snow is white exists’: these expressions seem equally resistant to analysis and too close in meaning for one to provide an illuminating account of the other. In addition, the general relationship that holds between the belief that snow is white and the fact that snow is white, between the belief that dogs bark and the fact that dogs bark, and so on, is very hard to identify.
The best attempt to date is Wittgenstein’s (1922) so-called ‘picture theory’, according to which an elementary proposition is a configuration of terms, and an atomic fact is a configuration of simple objects; an atomic fact corresponds to an elementary proposition, and makes it true, when their configurations are identical and when the terms in the proposition refer to the similarly placed objects in the fact; and the truth value of each complex proposition is entailed by the truth values of the elementary ones. However, even if this account is correct as far as it goes, it would need to be completed with plausible theories of ‘logical configuration’, ‘elementary proposition’, ‘reference’ and ‘entailment’, none of which has been forthcoming.
The central characteristic of truth, one that any adequate theory must explain, is that when a proposition satisfies its ‘conditions of proof or verification’, then it is regarded as true. To the extent that the property of corresponding with reality is mysterious, we are going to find it impossible to see why what we take to verify a proposition should indicate the possession of that property. Therefore, a tempting alternative to the correspondence theory, an alternative that eschews obscure metaphysical concepts and explains quite straightforwardly why verifiability implies truth, is simply to identify truth with verifiability (Peirce, 1932). This idea can take various forms. One version involves the further assumption that verification is ‘holistic’, in that a belief is justified (i.e., verified) when it is part of an entire system of beliefs that is consistent and mutually supporting (Bradley, 1914 and Hempel, 1935). This is known as the ‘coherence theory of truth’. Another version involves the assumption that associated with each proposition is some specific procedure for finding out whether one should accept it. On this account, to say that a proposition is true is to say that the appropriate procedure would verify it (Dummett, 1979, and Putnam, 1981). Within mathematics this amounts to the identification of truth with provability.
The attractions of the verificationist account of truth are that it is refreshingly clear compared with the correspondence theory, and that it succeeds in connecting truth with verification. The trouble is that the bond it postulates between these notions is implausibly strong. We do take verification to indicate truth, but we also recognize the possibility that a proposition may be false in spite of there being impeccable reasons to believe it, and that a proposition may be true although we are not able to discover that it is. Verifiability and truth are no doubt highly correlated, but surely not the same thing.
A third well-known account of truth is ‘pragmatism’ (James, 1909 and Papineau, 1987). As we have just seen, the verificationist selects a prominent property of truth and takes it to be the essence of truth. Similarly, the pragmatist focuses on another important characteristic, namely, that true belief is a good basis for action, and takes this to be the very nature of truth. True beliefs are said to be, by definition, those that provoke actions with desirable results. Again, we have an account with a single attractive explanatory feature; but again, the relationship it postulates between truth and its alleged analysans, in this case utility, is implausibly close. Granted, true belief tends to foster success, but it happens regularly that actions based on true beliefs lead to disaster, while false beliefs, by pure chance, produce wonderful results.
One of the few uncontroversial facts about truth is that the proposition that snow is white is true if and only if snow is white, the proposition that lying is wrong is true if and only if lying is wrong, and so on. Traditional theories acknowledge this fact but regard it as insufficient and, as we have seen, inflate it with some further principle of the form ‘x is true if and only if x has property P’ (such as corresponding to reality, verifiability, or being suitable as a basis for action), which is supposed to specify what truth is. Some radical alternatives to the traditional theories result from denying the need for any such further specification (Ramsey, 1927, Strawson, 1950 and Quine, 1990). For example, one might suppose that the basic theory of truth contains nothing more than equivalences of the form ‘the proposition that p is true if and only if p’ (Horwich, 1990).
Suppose, for example, that you wish to endorse Einstein’s claim without knowing what it was. What you need is a proposition ‘K’ with the following property: from ‘K’ and any further premise of the form ‘Einstein’s claim was the proposition that p’ you can infer ‘p’, whatever it is. Now suppose, as the deflationist says, that our understanding of the truth predicate consists in the stipulative decision to accept any instance of the schema ‘the proposition that p is true if and only if p’; then your problem is solved. For if ‘K’ is the proposition ‘Einstein’s claim is true’, it will have precisely the inferential power needed. From it and ‘Einstein’s claim is the proposition that quantum mechanics is wrong’, you can use Leibniz’s law to infer ‘the proposition that quantum mechanics is wrong is true’, which, given the relevant axiom of the deflationary theory, allows you to derive ‘quantum mechanics is wrong’. Thus, one point in favour of the deflationary theory is that it squares with a plausible story about the function of our notion of truth: its axioms explain that function without the need for further analysis of ‘what truth is’.
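The inference just sketched can be set out step by step (schematically, with ‘p’ standing for the claim that quantum mechanics is wrong):

```latex
\begin{align*}
1.\;& \text{Einstein's claim is true.} && (K,\ \text{premise})\\
2.\;& \text{Einstein's claim} = \text{the proposition that } p. && (\text{premise})\\
3.\;& \text{The proposition that } p \text{ is true.} && (1,\,2,\ \text{Leibniz's law})\\
4.\;& p. && (3,\ \text{equivalence schema})
\end{align*}
```

Only step 4 uses the deflationary axiom; steps 1 to 3 are ordinary identity reasoning, which is why the schema alone suffices to give ‘K’ its inferential power.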
Support for deflationism depends upon the possibility of showing that its axioms, instances of the equivalence schema unsupplemented by any further analysis, will suffice to explain all the central facts about truth, for example, that the verification of a proposition indicates its truth, and that true beliefs have a practical value. The first of these facts follows trivially from the deflationary axioms: given our a priori knowledge of the equivalence of ‘p’ and ‘the proposition that p is true’, any reason to believe that p becomes an equally good reason to believe that the proposition that p is true. We can also explain the second fact in terms of the deflationary axioms, but not quite so easily. Consider, to begin with, beliefs of the form:
(B) If I perform the act ‘A’, then my desires will be fulfilled.
Notice that the psychological role of such a belief is, roughly, to cause the performance of ‘A’. In other words, given that I do have belief (B), then typically:
I will perform the act ‘A’
Notice also that when the belief is true then, given the deflationary axioms, the performance of ‘A’ will in fact lead to the fulfilment of one’s desires, i.e.,
If (B) is true, then if I perform ‘A’, my desires will be fulfilled
Therefore,
If (B) is true, then my desires will be fulfilled
So valuing the truth of beliefs of that form is quite reasonable. Moreover, such beliefs are derived by inference from other beliefs, and can be expected to be true if those other beliefs are true. So assigning a value to the truth of any belief that might be used in such an inference is reasonable.
To the extent that such deflationary accounts can be given of all the facts involving truth, the explanatory demands on a theory of truth will be met by the collection of all statements like ‘the proposition that snow is white is true if and only if snow is white’, and the sense that some deep analysis of truth is needed will be undermined.
Nonetheless, there are several strongly felt objections to deflationism. One reason for dissatisfaction is that the theory has an infinite number of axioms, and therefore cannot be completely written down. It can be described, as the theory whose axioms are the propositions of the form ‘p if and only if it is true that p’, but not explicitly formulated. This alleged defect has led some philosophers to develop theories that show, first, how the truth of any proposition derives from the referential properties of its constituents, and second, how the referential properties of primitive constituents are determined (Tarski, 1943 and Davidson, 1969). However, it remains controversial whether all propositions, including belief attributions, laws of nature, and counterfactual conditionals, depend for their truth values on what their constituents refer to. In addition, there is no immediate prospect of a presentable, finite theory of reference, so it is far from clear that the infinite, list-like character of deflationism can be avoided.
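The compositional idea, that the truth of a whole sentence derives from the referential properties of its constituents, can be sketched in miniature. The tiny language, its terms, and their extensions below are invented for the illustration; this is a toy model in the Tarskian spirit, not a rendering of Tarski's or Davidson's actual constructions.

```python
# Toy compositional truth definition: the truth value of a sentence is
# computed clause by clause from what its constituents refer to.

reference = {"snow": "snow", "grass": "grass"}        # singular terms
extension = {"white": {"snow"}, "green": {"grass"}}   # predicates

def true_in_model(sentence):
    """Evaluate ('pred', term), ('not', s), or ('and', s1, s2) recursively."""
    op = sentence[0]
    if op == "not":
        return not true_in_model(sentence[1])
    if op == "and":
        return true_in_model(sentence[1]) and true_in_model(sentence[2])
    predicate, term = sentence                         # atomic case
    return reference[term] in extension[predicate]

print(true_in_model(("white", "snow")))                              # True
print(true_in_model(("and", ("white", "snow"), ("green", "snow"))))  # False
```

The recursion terminates in a finite list of referential facts (the two dictionaries), which is precisely where the objection bites: for a natural language there is no immediate prospect of such a finite base.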
Additionally, it is commonly supposed that problems about the nature of truth are intimately bound up with questions as to the accessibility and autonomy of facts in various domains: questions about whether the facts can be known, and whether they can exist independently of our capacity to discover them (Dummett, 1978, and Putnam, 1981). One might reason, for example, that if ‘T is true’ means nothing more than ‘T will be verified’, then certain forms of scepticism, specifically those that doubt the correctness of our methods of verification, will be precluded, and the facts will have been revealed as dependent on human practices. Alternatively, it might be said that if truth were an inexplicable, primitive, non-epistemic property, then the fact that ‘T’ is true would be completely independent of us. Moreover, we could, in that case, have no reason to suppose that the propositions we believe actually have this property, so scepticism would be unavoidable. In a similar vein, it might be thought that a special, and perhaps undesirable, feature of the deflationary approach is that it deprives truth of such metaphysical or epistemological implications.
On closer scrutiny, however, it is far from clear that there exists any account of truth with consequences regarding the accessibility or autonomy of non-semantic matters. For although an account of truth may be expected to have such implications for facts of the form ‘T is true’, it cannot be assumed without further argument that the same conclusions will apply to the fact ‘T’. For it cannot be assumed that ‘T’ and ‘T is true’ are equivalent to one another, given the account of ‘true’ that is being employed. Of course, if truth is defined in the way that the deflationist proposes, then the equivalence holds by definition. Nevertheless, if truth is defined by reference to some metaphysical or epistemological characteristic, then the equivalence schema is thrown into doubt, pending some demonstration that the truth predicate, in the sense assumed, will be satisfied; insofar as there are thought to be epistemological problems hanging over ‘T’ that do not threaten ‘T is true’, giving the needed demonstration will be difficult. Similarly, if ‘truth’ is so defined that the fact ‘T’ is felt to be more, or less, independent of human practices than the fact that ‘T is true’, then again it is unclear that the equivalence schema will hold. It would seem, therefore, that the attempt to base epistemological or metaphysical conclusions on a theory of truth must fail, because in any such attempt the equivalence schema will be simultaneously relied on and undermined.
The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by Frege (1848-1925), was developed in a distinctive way by the early Wittgenstein (1889-1951), and is a leading idea of Davidson (1917-). The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it.
The conception of meaning as truth-conditions need not and should not be advanced as a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech acts. The claim of the theorist of truth-conditions should instead be targeted on the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions. The truth-condition of a statement is simply the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth-condition can only be defined by repeating the very same statement: the truth-condition of ‘snow is white’ is that snow is white; the truth-condition of ‘Britain would have capitulated had Hitler invaded’ is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth-conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.
Whatever it is that makes what would otherwise be mere sounds and inscriptions into instruments of communication and understanding, the philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world. Contributions to the study include the theory of ‘speech acts’ and the investigation of communication, of the relationship between words and ideas, and between words and the world. What a person expresses by a sentence is often a function of the environment in which he or she is placed. For example, the disease I refer to by a term like ‘arthritis’, or the kind of tree I refer to as an ‘oak’, will be defined by criteria of which I may know nothing. This raises the possibility of imagining two persons in different environments, to each of whom everything appears the same, yet who refer to different things; between them they define a space of philosophical problems. Concepts are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. That which is expressed by an utterance or sentence is the proposition or claim made about the world; by extension, the content of a predicate or other sub-sentential component is what it contributes to the content of sentences that contain it. The nature of content is the central concern of the philosophy of language.
In particular, there are the problems of indeterminacy of translation, inscrutability of reference, language, predication, reference, rule-following, semantics, and translation, along with the topics under subordinate headings associated with ‘logic’. The loss of confidence in determinate meaning (‘each decoding is another encoding’) is an element common both to postmodern uncertainties in the theory of criticism and to the analytic tradition that follows writers such as Quine (1908-2000). Still, it may be asked why we should suppose that fundamental epistemic notions should be accounted for in behavioural terms. What grounds are there for supposing that ‘S knows that p’ is a matter of the status of a statement within a community of subjects, rather than a matter of a relation between nature and its mirror? The answer is that the only alternative seems to be to take knowledge of inner states as premises from which our knowledge of other things is inferred, and without which that knowledge would be ungrounded. However, it is not really coherent, and does not in the last analysis make sense, to suggest that human knowledge has foundations or grounds. It should be remembered that to say that truth and knowledge ‘can only be judged by the standards of our own day’ is not to say that they are less meaningful, nor that they are more ‘cut off from the world’ than we had supposed. It is merely to say that nothing counts as justification unless by reference to what we already accept, and that there is no way to get outside our beliefs and our language so as to find some test other than coherence. The fact is that professional philosophers have thought it might be otherwise, since they alone are haunted by the spectre of epistemological scepticism.
What Quine opposes as ‘residual Platonism’ is not so much the hypostasising of non-physical entities as the notion of ‘correspondence’ with things as the final court of appeal for evaluating present practices. Unfortunately, Quine, in a way that is incompatible with his basic insights, substitutes for this a correspondence to physical entities, and especially to the basic entities, whatever they turn out to be, of physical science. Nevertheless, when their doctrines are purified, they converge on a single claim: that no account of knowledge can depend on the assumption of some privileged relation to reality. Their work brings out why an account of knowledge can amount only to a description of human behaviour.
One answer is that the belief has a coherent place or role in a system of beliefs. Perception has an influence on belief: you respond to sensory stimuli by believing that you are reading a page in a book rather than by believing that you have a centaur in the garden. Belief also has an influence on action: given a desire to act, what you believe makes a difference to what you do; you will act differently if you believe that you are reading a page than if you believe something about a centaur. Perception and action are, however, underdetermined by the content of belief: the same stimuli may produce various beliefs, and various beliefs may produce the same action. What gives the belief the content it has is the role it plays within a network of relations to other beliefs, some more directly causal than others, and in particular its role in inference and implication. For example, I infer different things from believing that I am reading a page in a book than I infer from other beliefs, just as I infer that belief from different things than I infer other beliefs.
The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of belief. That is how coherence comes in. A belief has the content it does because of the way in which it coheres within a system of beliefs (Rosenberg, 1988). We might distinguish weak coherence theories of the content of belief from strong coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of belief; strong coherence theories affirm that coherence is the sole determinant of the content of belief.
These philosophical problems include discovering whether belief differs from other varieties of assent, such as ‘acceptance’; discovering to what extent degrees of belief are possible; understanding the ways in which belief is controlled by rational and irrational factors; and discovering its links with other properties, such as the possession of conceptual or linguistic skills. This last set of problems includes the question of whether prelinguistic infants or animals are properly said to have beliefs.
Thus, we might think of coherence as inference to the best explanation based on a background system of beliefs. Since we are not aware of such inferences for the most part, the inferences must be interpreted as unconscious inferences, as information processing, based on the background system. One might object to such an account on the grounds that not all justification is inferential; more generally, the account of coherence may, at best, be reduced to the meeting of objections based on background systems (BonJour, 1985; Lehrer, 1990). The belief that one sees a shape competes with the claim that one does not, with the claim that one is deceived, and with other sceptical objections. The background system of beliefs informs one that one’s perception is trustworthy and enables one to meet the objections. A belief coheres with a background system just in case it enables one to meet the sceptical objections, and in that way justifies one in the belief. This is a standard strong coherence theory of justification (Lehrer, 1990).
Illustrating the relationship between positive and negative coherence theories in terms of the standard coherence theory is easy. If some objection to a belief cannot be met in terms of the background system of beliefs of a person, then the person is not justified in that belief. So, to return to Julie, suppose that she has been told that a warning light has been installed on her gauge to tell her when it is not functioning properly, and that when the red light is on, the gauge is malfunctioning. Suppose that when she sees the reading of 105, she also sees that the red light is on. Imagine, finally, that this is the first time the red light has been on and that, after years of working with the gauge, Julie, who has always placed her trust in it, believes what the gauge tells her: that the liquid in the container is at 105 degrees. Her belief that the liquid is at 105 degrees is not a justified belief, because it fails to cohere with her background belief that the gauge is malfunctioning. Thus, the negative coherence theory tells us that she is not justified in her belief about the temperature of the contents of the container. By contrast, when the red light is not illuminated and Julie’s background system tells her that under such conditions the gauge is a trustworthy indicator of the temperature of the liquid in the container, then she is justified. The positive coherence theory tells us that she is justified in her belief because it coheres with her background system.
The foregoing sketch and illustration of coherence theories of justification have a common feature: they are what are called internalist theories of justification. They may be contrasted with externalist views, the mark of which is the absence of any requirement that the person for whom the belief is justified have any cognitive access to the relation of reliability in question. Lacking such access, such a person will usually have no reason for thinking the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus, such a view arguably marks a major break from the modern epistemological tradition, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.
Coherence theories affirm that coherence is a matter of internal relations between beliefs and that justification is a matter of coherence. If justification is solely a matter of internal relations between beliefs, we are left with the possibility that the internal relations might fail to correspond with any external reality. How, one might object, can such a purely internal, subjective notion of justification bridge the gap between mere true belief, which might be no more than a lucky guess, and knowledge, which must be grounded in some connection between internal subjective conditions and external objective realities?
The answer is that it cannot, and that something more than justified true belief is required for knowledge. This result has, however, been established quite apart from consideration of coherence theories of justification. What is required may be put by saying that the justification must be undefeated by errors in the background system of beliefs. Justification is undefeated by errors just in case any correction of such errors in the background system of beliefs would sustain the justification of the belief on the basis of the corrected system. So knowledge, on this sort of coherence theory, is true belief that coheres with the background belief system and with corrected versions of that system. In short, knowledge is true belief plus justification resulting from coherence and undefeated by error (Lehrer, 1990). The connection between the internal subjective conditions of belief and external objective reality results from the required correctness of our beliefs about the relations between those conditions and realities. In the example of Julie, she believes that her internal subjective conditions of sensory experience and perceptual belief are connected in a trustworthy manner with the external objective reality, the temperature of the liquid in the container. This background belief is essential to the justification of her belief that the temperature of the liquid in the container is 105 degrees, and the correctness of that background belief is essential to the justification’s remaining undefeated. So our background system of beliefs contains a simple theory about our relation to the external world that justifies certain of our beliefs that cohere with that system. For such justification to convert to knowledge, that theory must be sufficiently free from error that the coherence is sustained in corrected versions of our background system of beliefs.
The correctness of the simple background theory provides the connection between the internal condition and external reality.
The coherence theory of truth arises naturally out of a problem raised by the coherence theory of justification. The problem is that anyone seeking to determine whether she has knowledge is confined to the search for coherence among her beliefs. The sensory experiences she has are deaf-mutes until they are represented in the form of some perceptual belief. Beliefs are the engines that pull the train of justification. Nevertheless, what assurance do we have that our justification is based on true beliefs? What assurance do we have that any of our justifications are undefeated? The fear that we might have none, that our beliefs might be the artifacts of some deceptive demon or scientist, leads to the quest to reduce truth to some form, perhaps an idealized form, of justification (Rescher, 1973; Rosenberg, 1980). That would close the threatening sceptical gap between justification and truth. Suppose that a belief is true if and only if it is justifiable for some person. For such a person there would be no gap between justification and truth, or between justification and undefeated justification. Truth would be coherence with some ideal background system of beliefs, perhaps one expressing a consensus among belief systems or some convergence toward a consensus. Such a view is theoretically attractive for the reduction it promises, but it appears open to a profound objection. There is a consensus that we can all be wrong about at least some matters, for example, about the origins of the universe. If there is a consensus that we can all be wrong about something, then the consensual belief system rejects the equation of truth with consensus. Consequently, the equation of truth with coherence with a consensual belief system is itself incoherent.
Coherence theories of the content of our beliefs and of the justification of our beliefs themselves cohere with our background systems, but coherence theories of truth do not. A defender of coherentism must accept the logical gap between justified belief and truth, but may believe that our capacities suffice to close the gap to yield knowledge. That view is, at any rate, a coherent one.
What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.
For example, Armstrong (1973) proposed that a belief of the form ‘This (perceived) object is F’ is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘x’ and perceived object ‘y’, if ‘x’ has those properties and believes that ‘y’ is F, then ‘y’ is F. Dretske (1981) offers a similar account, in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is F.
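Armstrong’s lawlike condition can be put schematically as follows (my formalization for illustration, not Armstrong’s own notation; ‘H’ stands for the relevant properties of the believer and ‘B’ for belief):

```latex
% Armstrong-style reliability condition (schematic):
% for all subjects x and perceived objects y,
\forall x \,\forall y \;\big[\, \big( H(x) \wedge B_x(Fy) \big) \rightarrow Fy \,\big]
% i.e., it is a law of nature that whenever a believer with
% properties H believes of a perceived object that it is F,
% the object is in fact F.
```

The belief thus functions like a thermometer reading: given the believer’s properties, the law guarantees that the belief registers the fact.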
This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief’s being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but you have been given good reason to think otherwise: to think, say, that your colour perception is awry, that magenta things look chartreuse to you and chartreuse things look magenta. If you fail to heed these reasons and believe of a thing that looks magenta to you that it is magenta, your belief will fail to be justified and will therefore fail to be knowledge, even though it is caused by the thing’s being magenta in such a way as to be a completely reliable sign (or to carry the information) that the thing is magenta.
One could fend off this sort of counterexample by simply adding to the causal condition the requirement that the belief be justified, but this enriched condition would still be insufficient. Suppose, for example, that you have been given a drug that in nearly all people, but not in you, as it happens, causes the aforementioned aberration in colour perception. The experimenter tells you that you have taken such a drug, but then says, ‘No, hold on a minute, the pill you took was just a placebo’. Suppose, further, that this last thing the experimenter tells you is false. Her telling you this gives you justification for believing of a thing that looks magenta to you that it is magenta, but the fact that the experimenter’s last statement was false makes it the case that your true belief is not knowledge, even though it satisfies the causal condition.
Goldman (1986) has proposed an importantly different causal criterion: namely, that a true belief is knowledge if it is produced by a type of process that is both ‘globally’ and ‘locally’ reliable. A process is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.
Goldman requires global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but not for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. This relevant-alternatives account of knowledge can be motivated by noting that other concepts exhibit the same logical structure. Two examples are the concept ‘flat’ and the concept ‘empty’ (Dretske, 1981). Both appear to be absolute concepts: a space is empty only if it does not contain anything, and a surface is flat only if it does not have any bumps. However, the absolute character of these concepts is relative to a standard. In the case of ‘flat’ there is a standard for what counts as a bump, and in the case of ‘empty’ there is a standard for what counts as a thing. To be flat is to be free of any relevant bumps, and to be empty is to be devoid of all relevant things.
Nevertheless, the human mind abhors a vacuum. When an explicit, coherent world-view is absent, it functions on the basis of a tacit one. A tacit world-view is not subject to critical evaluation, and it can easily harbour inconsistencies. Indeed, our tacit set of beliefs about the nature of reality is made of contradictory bits and pieces. The dominant component is a leftover from another period: the Newtonian ‘clock universe’ still lingers, and we cling to this old and tired model because we know of nothing else that can take its place. Our condition is the condition of a culture in the throes of a paradigm shift. A major paradigm shift is complex and difficult because a paradigm holds us captive: we see reality through it, as through coloured glasses, but we do not know that; we are convinced that we see reality as it is. Hence the appearance of a new and different paradigm is often incomprehensible. To someone raised believing that the Earth is flat, the suggestion that the Earth is spherical would seem preposterous: if the Earth were spherical, would not the poor antipodes fall ‘down’ into the sky?
Yet, as we face a new millennium, we are forced to face this challenge. The fate of the planet is in question, and it was brought to its present precarious condition largely because of our trust in the Newtonian paradigm. The Newtonian world-view has to go, and, if one looks carefully, the main features of the new, emergent paradigm can be discerned. The search for these features must reckon with the lingering influence of the fading paradigm: all paradigms include subterranean realms of tacit assumptions, the influence of which outlasts the adherence to the paradigm itself.
The first line of exploration suggests that the ‘weird’ aspects of quantum theory are fertile ground for a feeling of inconsistency with the prevailing world-view, a feeling that should disappear when the old world-view is replaced by the new one. If one believes that the Earth is flat, the story of Magellan’s travels is quite puzzling: how is it possible for a ship to travel due west and, without changing direction, arrive at its place of departure? Obviously, when the flat-Earth paradigm is replaced by the belief that the Earth is spherical, the puzzle is instantly resolved.
The founders of relativity and quantum mechanics were deeply engaged with philosophical questions, but their engagement was incomplete, in that none of them attempted to construct a philosophical system, even though the mystery at the heart of quantum theory called for a revolution in philosophical outlook. During the 1920s, when quantum mechanics reached maturity, Alfred North Whitehead began the construction of a full-blooded philosophical system that was based not only on science but on nonscientific modes of knowledge as well. The fading influence of the old paradigm goes well beyond its explicit claims. We believe, as the scientists and philosophers of that era did, that when we wish to find out the truth about the universe, nonscientific modes of processing human experience can be ignored: poetry, literature, art, and music are all wonderful, but, in relation to the quest for knowledge of the universe, they are irrelevant. Yet it was Whitehead who pointed out the fallacy of this assumption. In his system the building blocks of reality are not material atoms but ‘throbs of experience’. Whitehead formulated his system in the late 1920s, and yet, as far as I know, the founders of quantum mechanics were unaware of it. It was not until 1963 that J.M. Burgers pointed out that Whitehead’s philosophy accounts very well for the main features of the quanta, especially the ‘weird’ ones. Are some aspects of reality ‘higher’ or ‘deeper’ than others, and if so, what is the structure of such hierarchical divisions? What of our place in the universe? Finally, what is the relationship between our great aspirations and the lost realms of nature? An attempt to endow us with a cosmological meaning in the Newtonian universe seems totally absurd, and yet this very universe is just a paradigm, not the truth.
When you reach its end, you may be willing to join the alternative view, according to which, surprisingly, what was lost is restored to us, although in a post-postmodern context.
The philosophical implications of quantum mechanics discussed here emphasize connections with what I believe to be the western philosophical tradition, from Plato to Plotinus. Some aspects of the interpretation presented express a consensus of the physics community; other aspects are shared by some and objected to (sometimes vehemently) by others; still other aspects express my own views and convictions. Writing this turned out to be more difficult than anticipated, and I discovered that a conversational mode would be helpful, hoping that the conversations will prove not only illuminating but engaging to those who read them.
These examples make it seem likely that, if there is a criterion for what makes an alternative situation relevant that will save Goldman’s claim about local reliability and knowledge, it will not be simple.
The interesting thesis that counts as a causal theory of justification, in the meaning of ‘causal theory’ intended here, is the thesis that a belief is justified just in case it was produced by a type of process that is ‘globally’ reliable, that is, a process whose propensity to produce true beliefs (definable, to a good approximation, as the proportion of the beliefs it produces, or would produce were it used as often as opportunity allows, that are true) is sufficiently high. On this view a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in the work of F.P. Ramsey (1903-30), who made important contributions to mathematical logic, probability theory, the philosophy of science, and economics. Ramsey was one of the first thinkers to accept a ‘redundancy theory of truth’, which he combined with radical views of the function of many kinds of proposition: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy. Ramsey was an early commentator on Wittgenstein’s work, and his continuing friendship with the latter led to Wittgenstein’s return to Cambridge and to philosophy in 1929.
The most sustained and influential application of these ideas was in the philosophy of mind, by Ludwig Wittgenstein (1889-1951), whom Ramsey persuaded that there remained work for him to do. Wittgenstein was undoubtedly the most charismatic figure of twentieth-century philosophy, living and writing with a power and intensity that frequently overwhelmed his contemporaries and readers. His early period is centred on the ‘picture theory of meaning’, according to which a sentence represents a state of affairs by being a kind of picture or model of it, containing elements corresponding to those of the state of affairs and a structure or form that mirrors the structure of the state of affairs it represents. All logical complexity is reduced to that of the propositional calculus, and all propositions are truth-functions of atomic or basic propositions.
In the theory of probability Ramsey was the first to show how a ‘personalist’ theory could be developed, based on a precise behavioural notion of preference and expectation. Much of his work in the foundations of mathematics was directed at saving classical mathematics from ‘intuitionism’, or what he called the ‘Bolshevik menace of Brouwer and Weyl’.
The Ramsey sentence of a scientific theory is generated by taking all the sentences affirmed in the theory that use some theoretical term, e.g., ‘quark’, replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of theoretical terms, the sentence gives the ‘topic-neutral’ structure of the theory, but removes any implication that we know what the terms so treated characterize. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by way of a nomic, counterfactual, or other such ‘external’ relation between belief and truth. Closely allied is the nomic sufficiency account of knowledge, due primarily to Dretske (1971, 1981), A.I. Goldman (1976, 1986), and R. Nozick (1981). The core of this approach is that x’s belief that ‘p’ qualifies as knowledge just in case ‘x’ believes ‘p’ because of reasons that would not obtain unless ‘p’ were true, or because of a process or method that would not yield belief in ‘p’ if ‘p’ were not true. For example, ‘x’ would not have its current reasons for believing there is a telephone before it, or would not have come to believe this in the way it did, unless there were a telephone before it; the reason or process is thus a reliable guarantor of the belief’s being true.
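The construction just described can be displayed schematically. The following is an illustrative sketch using the text’s ‘quark’ example, with hypothetical predicates P₁, …, Pₙ standing for whatever the theory affirms of quarks:

```latex
% Conjunction of the theory's claims about the term `quark':
T(\mathrm{quark}) \;=\; P_1(\mathrm{quark}) \wedge \cdots \wedge P_n(\mathrm{quark})
% Ramsey sentence: replace the term by a variable and
% existentially quantify into the result:
\exists x \,\big( P_1(x) \wedge \cdots \wedge P_n(x) \big)
```

The existential sentence preserves the theory’s structural claims while remaining neutral on what, if anything, the term ‘quark’ names.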
A counterfactual approach says that ‘x’ knows that ‘p’ only if there is no ‘relevant alternative’ situation in which ‘p’ is false but ‘x’ would still believe that ‘p’. On this view, one’s justification or evidence for ‘p’ must be sufficient to eliminate all the alternatives to ‘p’, where an alternative to a proposition ‘p’ is a proposition incompatible with ‘p’; that is, one’s evidence must be sufficient for one to know that every alternative to ‘p’ is false. This element of our thinking about knowledge is exploited by sceptical arguments, which call our attention to alternatives that our evidence cannot eliminate. The sceptic asks how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this nature that we cannot eliminate, as well as others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that this requirement is seldom, if ever, satisfied.
This conclusion conflicts with another strand in our thinking about knowledge: that we know many things. Thus there is a tension in our ordinary thinking about knowledge. We believe that knowledge is, in the sense indicated, an absolute concept, and yet we also believe that there are many instances of that concept.
If one finds absoluteness to be too central a component of our concept of knowledge to be relinquished, one could argue from the absolute character of knowledge to a sceptical conclusion (Unger, 1975). Most philosophers, however, have taken the other course, choosing to respond to the conflict by giving up, perhaps reluctantly, the absolute criterion. This latter response holds as sacrosanct our commonsense belief that we know many things (Pollock, 1979 and Chisholm, 1977). Each approach is subject to the criticism that it preserves one aspect of our ordinary thinking about knowledge at the expense of denying another. The theory of relevant alternatives can be viewed as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.
Epistemology (from the Greek epistēmē, ‘knowledge’) is the theory of knowledge. Its fundamental questions include the origin of knowledge, the place of experience in generating knowledge, and the place of reason in doing so; the relationship between knowledge and certainty, and between knowledge and the impossibility of error; the possibility of universal scepticism; and the changing forms of knowledge that arise from new conceptualizations of the world. All these issues link with other central concerns of philosophy, such as the nature of truth and the natures of experience and meaning. Epistemology can be seen as dominated by two rival metaphors. One is that of a building or pyramid, built on foundations. In this conception it is the job of the philosopher to describe especially secure foundations, and to identify secure modes of construction, so that the resulting edifice can be shown to be sound. This metaphor favours some idea of the ‘given’ as a basis of knowledge, and of a rationally defensible theory of confirmation and inference for construction. The other metaphor is that of a boat or fuselage, which has no foundation but owes its strength to the stability given by its interlocking parts. This rejects the idea of a basis in the ‘given’, favours ideas of coherence and ‘holism’, but finds it harder to ward off scepticism. The problem of defining knowledge as true belief plus some favoured relation between the believer and the facts began with Plato’s view in the “Theaetetus” that knowledge is true belief plus some ‘logos’.
Theories, in the philosophy of science, are generalizations or sets of generalizations purportedly referring to unobservable entities, e.g. atoms, genes, quarks, unconscious wishes. The ideal gas law, by contrast, refers only to such observables as pressure, temperature, and volume; the molecular-kinetic theory refers to molecules and their properties. Although an older usage suggests a lack of adequate evidence in support of a theory (‘merely a theory’), current philosophical usage does not carry that connotation. Einstein’s special theory of relativity, for example, is considered extremely well founded.
As for space, the classical questions include: Is space real? Is it some kind of mental construct or artefact of our ways of perceiving and thinking? Is it ‘substantival’ or purely ‘relational’? According to substantivalism, space is an objective thing consisting of points or regions at which, or in which, things are located. Opposed to this is relationalism, according to which the only things that are real about space are the spatial (and temporal) relations between physical objects. Substantivalism was advocated by Clarke, speaking for Newton, and relationalism by Leibniz, in their famous correspondence, and the debate continues today. There is also the issue of whether the measures of space and time are objective, or whether an element of convention enters them. Here the influential analysis of David Lewis suggests that a regularity holds as a matter of convention when it solves a problem of co-ordination in a group. This means that it is to the benefit of each member to conform to the regularity, provided the others do so. Any number of solutions to such a problem may exist; for example, it is to the advantage of each of us to drive on the same side of the road as others, but indifferent whether we all drive on the right or the left. One solution or another may emerge for a variety of reasons. It is notable that on this account conventions may arise naturally; they do not have to be the result of specific agreement. This frees the notion for use in thinking about such things as the origin of language or of political society.
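The driving example can be put as a tiny coordination game. The payoffs below are hypothetical (1 for matching, 0 for mismatching); the sketch merely checks Lewis’s point that more than one stable solution exists, so which one obtains is conventional.

```python
# A toy of Lewis's coordination problem: each driver gains only by
# doing what the other does, and which side is chosen is indifferent.
payoff = {("left", "left"): 1, ("right", "right"): 1,
          ("left", "right"): 0, ("right", "left"): 0}

def is_equilibrium(a, b):
    """Neither driver gains by unilaterally switching sides."""
    other = {"left": "right", "right": "left"}
    return (payoff[(a, b)] >= payoff[(other[a], b)] and
            payoff[(a, b)] >= payoff[(a, other[b])])

equilibria = [(a, b) for a in ("left", "right") for b in ("left", "right")
              if is_equilibrium(a, b)]
print(equilibria)  # [('left', 'left'), ('right', 'right')]
```

Two equally good solutions, both stable once reached: exactly the situation in which, on Lewis’s account, a regularity counts as a convention.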
Conventionalism magnifies the role of decisions, or free selection from among equally possible alternatives, in order to show that what appears to be objective or fixed by nature is in fact an artefact of human convention, similar to conventions of etiquette, or grammar, or law. Thus one might suppose that moral rules owe more to social convention than to anything imposed from outside, or that supposedly inexorable necessities are in fact the shadow of our linguistic conventions. The disadvantage of conventionalism is that it must show that alternative, equally workable conventions could have been adopted, and this is often not easy to believe. For example, if we hold that some ethical norm such as respect for promises or property is conventional, we ought to be able to show that human needs would have been equally well satisfied by a system involving a different norm, and this may be hard to establish.
A convention was also suggested by Paul Grice (1913-88), directing participants in conversation to pay heed to an accepted purpose or direction of the exchange. Contributions that fail to do so are liable to be rejected for reasons other than straightforward falsity: something true but unhelpful or inappropriate may be met with puzzlement or rejection. We can thus never infer from the fact that it would be inappropriate to say something in some circumstance that what would be said, were we to say it, would be false. This inference was frequently made in ordinary language philosophy, it being argued, for example, that since we do not normally say ‘there seems to be a barn there’ when there is unmistakably a barn there, it is false that on such occasions there seems to be a barn there.
There are two main views on the nature of theories. According to the ‘received view’ theories are partially interpreted axiomatic systems; according to the semantic view, a theory is a collection of models (Suppe, 1974). However, a natural language comes ready interpreted, and the semantic problem is not one of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . .) and their meanings. An influential proposal is that this relationship is best understood by attempting to provide a ‘truth definition’ for the language, which will involve showing the effect that terms and structures of different kinds have on the truth-conditions of sentences containing them.
An axiom is a proposition laid down as one from which we may begin, an assertion that we have taken as fundamental, at least for the branch of enquiry in hand. The axiomatic method is that of defining a theory by a set of such propositions together with ‘proof procedures’. But how does a proof ever get started? Suppose I have as premises
(1) p and (2) p ➞ q. Can I infer q? Only, it seems, if I am sure of,
(3) (p & (p ➞ q)) ➞ q. Can I then infer q? Only, it seems, if I am sure that (4) (p & (p ➞ q) & ((p & (p ➞ q)) ➞ q)) ➞ q. For each new axiom (N) I need a further axiom (N + 1) telling me that the set so far implies ‘q’, and the regress never stops. The usual solution is to treat a system as containing not only axioms, but also rules of inference, allowing movement from the axioms. The rule of ‘modus ponens’ allows us to pass from the first two premises to ‘q’. This puzzle, due to Charles Lutwidge Dodgson (1832-98), better known as Lewis Carroll, shows that it is essential to distinguish the two theoretical categories, although there may be choice about which things to put in which category.
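The usual solution can be sketched in code. Modus ponens appears here not as a further premise but as a procedure applied to the premises, which is Carroll’s distinction between the two categories; the encoding of formulae as strings and `("->", p, q)` tuples is an assumption of the sketch.

```python
# Modus ponens as a *rule of inference*: a procedure that moves from
# formulae to formulae, rather than another axiom in the list.
def modus_ponens_closure(premises):
    """Close a set of formulae under modus ponens.
    Atoms are strings; conditionals are ('->', antecedent, consequent)."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            if isinstance(f, tuple) and f[0] == "->" and f[1] in derived:
                if f[2] not in derived:
                    derived.add(f[2])   # detach the consequent
                    changed = True
    return derived

theorems = modus_ponens_closure({"p", ("->", "p", "q"), ("->", "q", "r")})
print("r" in theorems)  # True: q, then r, follow by the rule
```

No regress arises: the rule is applied, not cited as an extra conditional premise that itself needs licensing.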
A theory of this type usually emerges as a body of (supposed) truths that are not neatly organized, making the theory difficult to survey or study as a whole. The axiomatic method is an idea for organizing a theory (Hilbert 1970): one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory rather more tractable since, in a sense, all the truths are contained in those few. In a theory so organized, the few truths from which all others are deductively inferred are called axioms. Just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made mathematical objects, so axiomatic theories, which are means of representing physical processes and mathematical structures, could be made objects of mathematical investigation.
When the principles were taken as epistemologically prior, that is, as axioms, either they were taken to be epistemologically privileged, e.g. self-evident, not needing to be demonstrated, or (again, inclusive ‘or’) to be such that all truths follow from them (by deductive inference). Gödel (1931) showed, by treating axiomatic theories as themselves mathematical objects, that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms that is such that we could effectively decide, of any proposition, whether or not it was in the class, would be too small to capture all of the truths.
The use of a model to test for the consistency of an axiomatized system is older than modern logic. Descartes’s algebraic interpretation of Euclidean geometry provides a way of showing that if the theory of real numbers is consistent, so is the geometry. Similar mappings were used by mathematicians in the 19th century, for example to show that if Euclidean geometry is consistent, so are various non-Euclidean geometries. Model theory is the general study of this kind of procedure: the study of interpretations of formal systems. Proof theory studies relations of deducibility between formulae of a system, defined purely syntactically, that is, without reference to the intended interpretation of the calculus. But once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us from sentences that are true under some interpretation to ones that are false under the same interpretation? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and of semantic consequence. The central questions for a calculus will be whether all and only its theorems are valid, and whether A1 . . . An ⊨ B if and only if A1 . . . An ⊢ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this turns into the question of whether the proof theory delivers as theorems all and only tautologies. There are many axiomatizations of the propositional calculus that are consistent and complete. Gödel proved in 1929 that first-order predicate calculus is complete: any formula that is true under every interpretation is a theorem of the calculus.
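For the propositional case, the semantic side of this is mechanically checkable: a formula is valid (a tautology) just in case it is true under every interpretation of its sentence letters. A minimal sketch, with formulae encoded as nested tuples (an assumption of the sketch, not a standard notation):

```python
from itertools import product

# Evaluate a formula under a valuation; atoms are strings, compounds
# are tuples: ("not", A), ("and", A, B), ("->", A, B).
def evaluate(formula, valuation):
    if not isinstance(formula, tuple):
        return valuation[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], valuation)
    if op == "and":
        return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
    if op == "->":
        return (not evaluate(formula[1], valuation)) or evaluate(formula[2], valuation)

def valid(formula, letters):
    """True iff the formula is true under every interpretation."""
    return all(evaluate(formula, dict(zip(letters, vals)))
               for vals in product([True, False], repeat=len(letters)))

# (p & (p -> q)) -> q is true under all four interpretations:
mp = ("->", ("and", "p", ("->", "p", "q")), "q")
print(valid(mp, ["p", "q"]))  # True
```

Soundness and completeness would then say that exactly the formulae this semantic check accepts are the ones derivable in the proof theory.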
The propositional calculus is the logical calculus whose expressions are letters representing sentences or propositions, and constants representing operations on those propositions to produce others of higher complexity. The operations include conjunction, disjunction, material implication and negation (although these need not all be primitive). Propositional logic was partially anticipated by the Stoics but reached maturity only with the work of Frege, Russell, and Wittgenstein.
A propositional function is the concept introduced by Frege of a function taking a number of names as arguments and delivering one proposition as the value. The idea is that ‘x loves y’ is a propositional function, which yields the proposition ‘John loves Mary’ from those two arguments (in that order). A propositional function is therefore roughly equivalent to a property or relation. In Principia Mathematica, Russell and Whitehead take propositional functions to be the fundamental kind of function, since the theory of descriptions could be taken as showing that other expressions denoting functions are incomplete symbols.
Keep in mind the two classical truth-values that a statement, proposition, or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true; if this condition obtains the statement is true, and otherwise false. Statements may be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.) but truth is the central norm governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme, and raise the issue of whether falsity is the only way of failing to be true.
A presupposition is any suppressed premise or background framework of thought necessary to make an argument valid, or a position tenable. More formally, a presupposition has been defined as a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus, if ‘p’ presupposes ‘q’, ‘q’ must be true for ‘p’ to be either true or false. In the theory of knowledge of Robin George Collingwood (1889-1943), any proposition capable of truth or falsity stands on a bed of ‘absolute presuppositions’ which are not themselves capable of truth or falsity, since a system of thought will contain no way of approaching such a question. It was suggested by Peter Strawson (1919-), in opposition to Russell’s theory of ‘definite descriptions’, that ‘there exists a King of France’ is a presupposition of ‘the King of France is bald’, the latter being neither true nor false if there is no King of France. It is, however, a little unclear whether the idea is that no statement at all is made in such a case, or whether a statement is made but fails to be either true or false. The former option preserves classical logic, since we can still say that every statement is either true or false, but the latter does not, since in classical logic the law of ‘bivalence’ holds, ensuring that nothing at all is presupposed for a proposition to be true or false. The introduction of presupposition therefore means that either a third truth-value is found, ‘intermediate’ between truth and falsity, or that classical logic is preserved, but it is impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth and falsity without knowing more than the formation rules of the language. Each suggestion carries costs, and there is some consensus that, at least where definite descriptions are involved, examples like the one given are equally well handled by regarding the overall sentence as false when the existence claim fails.
If a proposition is true it is said to take the truth-value true, and if false the truth-value false. The idea behind the term is the analogy between assigning a propositional variable one or other of these values, as in a formula of the propositional calculus, and assigning an object as the value of any other variable. Logics with intermediate values are called many-valued logics. A truth-function of a number of propositions or sentences is a function of them that has a definite truth-value, dependent only on the truth-values of the constituents. Thus (p & q) is a combination whose truth-value is true when ‘p’ is true and ‘q’ is true, and false otherwise; ¬p is a truth-function of ‘p’, false when ‘p’ is true and true when ‘p’ is false. The way in which the value of the whole is determined by the combination of values of the constituents is presented in a truth table.
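The truth-table presentation just described can be generated directly, since a truth-function is determined entirely by the values of its constituents. A minimal sketch for ‘&’ and ‘¬’:

```python
from itertools import product

# Two truth-functions: the value of the whole depends only on the
# truth-values of the constituent propositions.
conj = lambda p, q: p and q     # p & q
neg = lambda p: not p           # ¬p

# Print the truth table by running through all assignments of values.
print(f"{'p':<7}{'q':<7}{'p & q':<8}{'¬p'}")
for p, q in product([True, False], repeat=2):
    print(f"{str(p):<7}{str(q):<7}{str(conj(p, q)):<8}{neg(p)}")
```

The four rows exhaust the interpretations of two letters, which is why the table settles the value of the compound once and for all.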
In whatever manner, truths of fact cannot be reduced to any identity, and our only way of knowing them is empirically, by reference to the facts of the empirical world. Likewise, since their denial does not involve a contradiction, they are merely contingent: they could have been otherwise, and hold of the actual world, but not of every possible one. Some examples are ‘Caesar crossed the Rubicon’ and ‘Leibniz was born in Leipzig’, as well as propositions expressing correct scientific generalizations. In Leibniz’s view truths of fact rest on the principle of sufficient reason: for each of them there is a reason why it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God. The foundation of his thought is the conviction that to each individual there corresponds a complete notion, knowable only to God, from which is deducible all the properties possessed by the individual at each moment in its history. It is contingent that God actualizes the individual that meets such a concept, but his doing so is explicable by the principle of ‘sufficient reason’, whereby God had to actualize just that possibility in order for this to be the best of all possible worlds. This thesis was subsequently lampooned by Voltaire (1694-1778), although Leibniz himself was prepared to take refuge in ignorance on such questions as the nature of the soul, or the way to reconcile evil with divine providence.
The principle of sufficient reason is sometimes described as the principle that nothing can be so without there being a reason why it is so. But the reason has to be of a particularly potent kind: eventually it has to ground contingent facts in necessities, and in particular in the reason an omnipotent and perfect being would have for actualizing one possibility rather than another. Among the consequences of the principle is Leibniz’s relational doctrine of space, since if space were an infinite box there could be no reason for the world to be at one point in it rather than another, and God’s placing it at any particular point would violate the principle. In Abelard (1079-1142), as in Leibniz, the principle eventually forces the recognition that the actual world is the best of all possibilities, since anything else would be inconsistent with the creative power that actualizes possibilities.
If truth consists in concept containment, then it seems that all truths are analytic and hence necessary; and if they are all necessary, surely they are all truths of reason. Leibniz’s answer is that not every truth can be reduced to an identity in a finite number of steps; in some instances revealing the connection between subject and predicate concepts would require an infinite analysis. But while this may entail that we cannot prove such propositions a priori, it does not appear to show that they could have been false. Intuitively, it seems a better ground for supposing that they are necessary truths of a special sort. A related question arises from the idea that truths of fact depend on God’s decision to create the best world: if it is part of the concept of this world that it is best, how could its existence be other than necessary? One answer is that its existence is only hypothetically necessary, i.e. it follows from God’s decision to create this world; but God is necessary, so how could he have decided to do anything else? Leibniz says much more about these matters, but it is not clear whether he offers any satisfactory solutions.
Eliminativism is the view that the terms in which we think of some area are sufficiently infected with error for it to be better to abandon them than to continue to try to give coherent theories of their use. Eliminativism should be distinguished from scepticism, which claims that we cannot know the truth about some area; eliminativism claims rather that there is no truth there to be known, in the terms in which we currently think. An eliminativist about theology simply counsels abandoning the terms or discourse of theology, and that will include abandoning worries about the extent of theological knowledge.
Eliminativists in the philosophy of mind counsel abandoning the whole network of terms (mind, consciousness, self, qualia) that usher in the problems of mind and body. Sometimes the argument for doing this is that we should wait for a supposed future understanding of ourselves, based on cognitive science and better than anything our current mental descriptions provide; sometimes it is supposed that physicalism shows that no mental description of us could possibly be true.
Greek scepticism centred on the value of enquiry and questioning; scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject matter, e.g. ethics, or in any area whatsoever. Classically, scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth, e.g. there is a gulf between appearance and reality, and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable.
Sceptical tendencies emerged in the 14th-century writings of Nicholas of Autrecourt. His criticisms of any certainty beyond the immediate deliverance of the senses and basic logic, and in particular of any knowledge of either intellectual or material substances, anticipate the later scepticism of Bayle and Hume. The latter distinguishes between Pyrrhonistic or excessive scepticism, which he regarded as unlivable, and the more mitigated scepticism which accepts everyday or common-sense beliefs (not as the deliverance of reason, but as due more to custom and habit), but is duly wary of the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by ancient scepticism from Pyrrho through to Sextus Empiricus. Although the phrase ‘Cartesian scepticism’ is sometimes used, Descartes himself was not a sceptic, but in the method of doubt uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes himself trusts a category of ‘clear and distinct’ ideas, not far removed from the phantasia kataleptiké of the Stoics.
Scepticism should not be confused with relativism, which is a doctrine about the nature of truth, and may be motivated by trying to avoid scepticism. Nor is it identical with eliminativism, which counsels abandoning an area of thought altogether, not because we cannot know the truth, but because there are no truths capable of being framed in the terms we use.
Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis alone of which progress is possible. This is eventually found in the celebrated ‘Cogito ergo sum’: I think therefore I am. By locating the point of certainty in my own awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the famous Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes rigorously and rightly sees that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses invokes ‘clear and distinct perception’ in highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, ‘to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit’.
In his own time Descartes’s conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems of the nature of the causal connection between the two substances. It also gives rise to the problem, insoluble in its own terms, of other minds. Descartes’s notorious denial that non-human animals are conscious is a stark illustration of the problem. In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. It was a consequence of Descartes’s thought, reflected in Leibniz, that the qualities of sense experience have no resemblance to qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension there can be no empty space or ‘void’, and since there is no empty space motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).
Although the structure of Descartes’s epistemology, his philosophical theory of mind, and his theory of matter have been rejected many times, their relentless awareness of the hardest issues, their exemplary clarity, and even their initial plausibility, all contrive to make him the central point of reference for modern philosophy.
The self conceived as Descartes presents it in the first two Meditations: aware only of its own thoughts, and capable of disembodied existence, neither situated in a space nor surrounded by others. This is the pure self of ‘I’ that we are tempted to imagine as a simple unique thing that makes up our essential identity. Descartes’s view that he could keep hold of this nugget while doubting everything else is criticized by Lichtenberg and Kant, and most subsequent philosophers of mind.
Descartes holds that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions because there is no way to deny justifiably that our senses are being stimulated by some cause (an evil spirit, for example) which is radically different from the objects which we normally think affect our senses.
He also points out that the senses (sight, hearing, touch, etc.) are often unreliable, and ‘it is prudent never to trust entirely those who have deceived us even once’; he cited such instances as the straight stick which looks bent in water, and the square tower which looks round from a distance. This argument from illusion has not, on the whole, impressed commentators, and some of Descartes’s contemporaries pointed out that since such errors come to light as a result of further sensory information, it cannot be right to cast wholesale doubt on the evidence of the senses. But Descartes regarded the argument from illusion as only the first stage in a softening-up process which would ‘lead the mind away from the senses’. He admits that there are some cases of sense-based belief about which doubt would be insane, e.g. the belief that ‘I am sitting here by the fire, wearing a winter dressing gown’.
Descartes was to realize that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for what we know directly of ourselves as distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.
A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Newton’s Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. And the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.
Returning to the theory of knowledge: its central questions include the origin of knowledge, the place of experience in generating knowledge, and the place of reason in doing so; the relationship between knowledge and certainty, and between knowledge and the impossibility of error; the possibility of universal scepticism; and the changing forms of knowledge that arise from new conceptualizations of the world. All of these issues link with other central concerns of philosophy, such as the nature of truth and the natures of experience and meaning. On the first of the two rival metaphors, that of a building or pyramid built on foundations, it is the job of the philosopher to describe especially secure foundations, and to identify secure modes of construction, so that the resulting edifice can be shown to be sound. On this foundationalist conception, knowledge must be regarded as a structure raised upon secure, certain foundations, together with a rationally defensible theory of confirmation and inference as a method of construction. The foundations are found in some combination of experience and reason, with different schools (empiricism, rationalism) emphasizing the role of one over that of the other. Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes (1596-1650), who discovered his foundations in the ‘clear and distinct’ ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, rather as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth.
It is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given’.
Still, the other metaphor is that of a boat or fuselage, which has no foundation but owes its strength to the stability given by its interlocking parts. This rejects the idea of a basis in the ‘given’, favours ideas of coherence and holism, but finds it harder to ward off scepticism. In spite of these concerns, the problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato’s view in the “Theaetetus” that knowledge is true belief plus some ‘logos’. Naturalized epistemology is the enterprise of studying the actual formation of knowledge by human beings, without aspiring to certify those processes as rational, or as proof against ‘scepticism’, or even as apt to yield the truth. Naturalized epistemology would therefore blend into the psychology of learning and the study of episodes in the history of science. The scope for ‘external’ or philosophical reflection of the kind that might result in scepticism or its refutation is markedly diminished. Distinguished exponents of the approach include Aristotle, Hume, and J. S. Mill.
The task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much positivist philosophy of science, few philosophers now subscribe to it. It places too great a confidence in the possibility of a purely a priori 'first philosophy', a viewpoint beyond that of the working practitioners, from which their best efforts can be measured as good or bad. This standpoint now seems to many philosophers to be a fantasy; the more modest task actually adopted at various historical stages of investigation into different areas aims not so much at criticism as at systematization of the presuppositions of a particular field at a particular time. There is still a role for local methodological disputes within the community of investigators of some phenomenon, with one approach charging that another is unsound or unscientific, but logic and philosophy will not, on the modern view, provide an independent arsenal of weapons for such battles, which indeed often come to seem more like political bids for ascendancy within a discipline.
This is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin's theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. At some point, for example, a mutation occurred in a human population in tropical Africa that changed the haemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.
Chance can influence the outcome at each stage: first, in the creation of a genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual's actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by chance eliminated in the next; and finally, in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As Harvard biologist Stephen Jay Gould has vividly expressed it, if the process were run over again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.
Biologists often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analysed carefully. The extent to which evolution achieves perfection depends on exactly what you mean. If you mean 'Does natural selection always take the best path for the long-term welfare of a species?', the answer is no. That would require adaptation by group selection, which is unlikely. If you mean 'Does natural selection create every adaptation that would be valuable?', the answer, again, is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none have it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. The mere usefulness of a trait does not guarantee that it will evolve.
An evolutionary epistemologist, then, claims that the development of human knowledge proceeds through some natural selection process, on the model of Darwin's theory of biological natural selection. The three major components of the model of natural selection are variation, selection, and retention. According to Darwin's theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that happen to perform useful functions are selected, while those that do not are not selected; the useful variations need not arise intentionally or by design. In the modern theory of evolution, genetic mutations provide the blind variations: blind in the sense that variations are not influenced by the effects they would have; the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism. The environment provides the filter of selection, and reproduction provides the retention. Fit is achieved because those organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have better-adapted features. Evolutionary epistemology applies this blind-variation and selective-retention model to the growth of scientific knowledge and to human thought processes overall.
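The blind-variation and selective-retention model just described can be caricatured in code. The following sketch is purely illustrative: the bitstring "organisms", the fitness function, and every parameter are invented assumptions for the purpose of demonstration, not anything drawn from the evolutionary-epistemology literature.

```python
import random

def blind_variation(genome, rate=0.05, rng=random):
    # Mutations are "blind": whether a bit flips is independent of
    # whether flipping it would help or harm the organism.
    return [(1 - g) if rng.random() < rate else g for g in genome]

def fitness(genome):
    # A stand-in environment: more 1s counts as better adapted.
    return sum(genome)

def evolve(pop, generations=50, rng=None):
    rng = rng or random.Random(0)
    for _ in range(generations):
        # Variation: each organism produces a blindly mutated offspring.
        offspring = [blind_variation(g, rng=rng) for g in pop]
        # Selection: the environment filters; the less adapted do not survive.
        survivors = sorted(pop + offspring, key=fitness, reverse=True)[:len(pop)]
        # Retention: survivors' traits carry into the next generation.
        pop = survivors
    return pop

rng = random.Random(42)
pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(10)]
start = max(fitness(g) for g in pop)
final = max(fitness(g) for g in evolve(pop, rng=rng))
print(start, final)
```

Even with wholly undirected variation, selection plus retention ratchets fitness upward; that ratchet, not foresight in the variations, is what the analogy to the growth of knowledge turns on.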
The parallel between biological evolution and conceptual or 'epistemic' evolution can be seen as either literal or analogical. The literal version of evolutionary epistemology sees biological evolution as the main cause of the growth of knowledge. On this view, called the 'evolution of cognitive mechanisms' program by Bradie (1986) and the 'Darwinian approach to epistemology' by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms guiding the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology (Rescher, 1990).
On the analogical version of evolutionary epistemology, called the 'evolution of theories' program by Bradie (1986) and the 'Spencerian approach' (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of that mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) as well as Karl Popper, sees the partial fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.
Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version of evolutionary epistemology begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. In contrast, the analogical version does not require the truth of biological evolution: it simply draws on biological evolution as a source for the model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if creationism were the correct theory of the origin of species.
Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions simply come implicitly from psychology and cognitive science rather than from evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that 'if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom', i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one's knowledge beyond what one knows, one cannot proceed solely on the basis of what is already known; but, more interestingly, it also makes the synthetic claim that when expanding one's knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because it can be empirically falsified. The central claim of evolutionary epistemology is thus synthetic, not analytic; if it were analytic, rival epistemologies would be self-contradictory, which they are not. Campbell is right that evolutionary epistemology has the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature (Skagestad, 1978).
Two important issues dominate the literature: questions about realism, i.e., what metaphysical commitment must an evolutionary epistemologist make, and questions about progress, i.e., whether, according to evolutionary epistemology, knowledge develops toward a goal. With respect to realism, many evolutionary epistemologists endorse what is called 'hypothetical realism', a view that combines a version of epistemological scepticism with tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974) worries about the potential disanalogy here but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Others have argued that evolutionary epistemologists must give up the 'truth-tropic' sense of progress, because a natural selection model is essentially non-teleological; as an alternative, following Kuhn (1970), a more local, non-teleological sense of progress can be embraced in company with evolutionary epistemology.
Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind (Skagestad, 1978; Ruse, 1986). Stein and Lipton (1990) have argued, however, that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that are themselves, for the most part, the products of blind variation and selective retention. Further, Stein and Lipton argue that these heuristics are analogous to biological pre-adaptations: evolutionary precursors, such as a half-wing that precedes a wing, which have some function other than the function of their descendant structures. The heuristic constraint of epistemic variation is, on this view, not a source of disanalogy, but the source of a more articulated account of the analogy.
Many evolutionary epistemologists try to combine the literal and the analogical versions (Bradie, 1986; Stein and Lipton, 1990), saying that those beliefs and cognitive mechanisms that are innate result from natural selection of the biological sort, while those that are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs were innate, or if our non-innate beliefs were not the result of blind variation. Appealing to biological blindness is therefore not a legitimate way to produce a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind (Stein and Lipton, 1990).
Although it is a new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is relevant to understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.
What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that 'p' is knowledge just in case it has the right causal connection to the fact that 'p'. Such a criterion can be applied only to cases where the fact that 'p' is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.
For example, Armstrong (1973) proposed that a belief of the form 'This perceived object is F' is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject 'χ' and perceived object 'y', if 'χ' has those properties and believes that 'y' is F, then 'y' is F. (Dretske (1981) offers a rather similar account, in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is F.)
This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief's being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but you have been given good reason to think otherwise: to think, say, that chartreuse things look magenta to you and magenta things look chartreuse. If you fail to heed these reasons for doubting your colour perception and believe of a thing that looks magenta to you that it is magenta, your belief will fail to be justified and will therefore fail to be knowledge, even though it is caused by the thing's being magenta in such a way as to be a completely reliable sign, or to carry the information, that the thing is magenta.
One could fend off this sort of counterexample by simply adding the requirement that the belief be justified. However, this enriched condition would still be insufficient. Suppose, for example, that in an experiment you are given a drug that in nearly all people, but not in you, as it happens, causes the aforementioned aberration in colour perception. The experimenter tells you that you have taken such a drug but then adds, 'That pill was just a placebo'. Suppose further that this last statement is false. Her telling you this gives you justification for believing of a thing that looks magenta to you that it is magenta, but a fact about this justification that is unknown to you, namely that the experimenter's last statement was false, makes it the case that your true belief is not knowledge even though it satisfies Armstrong's causal condition.
Goldman (1986) has proposed an importantly different causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is 'globally' and 'locally' reliable. A process type is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.
Goldman requires global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but does not require for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. The theory of relevant alternatives can be viewed as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.
According to the theory, we need to qualify rather than deny the absolute character of knowledge. We should view knowledge as absolute, but relative to certain standards (Dretske, 1981; Cohen, 1988). That is to say, in order to know a proposition our evidence need not eliminate all the alternatives to that proposition; rather, we can know the proposition provided our evidence eliminates all the relevant alternatives, where the set of relevant alternatives (a proper subset of the set of all alternatives) is determined by some standard. Moreover, according to the relevant alternatives view, the standards determine that the alternatives raised by the sceptic are not relevant. If this is correct, then the fact that our evidence cannot eliminate the sceptic's alternatives does not lead to a sceptical result, for knowledge requires only the elimination of the relevant alternatives. The relevant alternatives view thus preserves both strands in our thinking about knowledge: knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.
The interesting thesis that counts as a causal theory of justification (in the sense of 'causal theory' intended here) is the following: a belief is justified just in case it was produced by a type of process that is 'globally' reliable, that is, whose propensity to produce true beliefs, which can be defined (to a good approximation) as the proportion of the beliefs it produces (or would produce) that are true, is sufficiently great.
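The reliabilist's 'propensity to produce true beliefs' can be approximated, crudely, as a frequency over a track record. The following is a minimal sketch under stated assumptions: the record, the threshold value, and the function names are all invented stand-ins for the deliberately vague phrase 'sufficiently great'.

```python
def global_reliability(beliefs):
    """Approximate a process type's reliability as the proportion of
    the beliefs it produces that are true."""
    if not beliefs:
        raise ValueError("no beliefs produced by this process type")
    return sum(1 for b in beliefs if b["true"]) / len(beliefs)

def justified(belief, track_record, threshold=0.9):
    # On this sketch, a belief counts as justified when its producing
    # process type's reliability clears the threshold. The 0.9 here is
    # an arbitrary stand-in for "sufficiently great".
    return global_reliability(track_record) >= threshold

# Hypothetical track record for one process type, e.g. daylight colour vision.
record = [{"true": True}] * 19 + [{"true": False}]
print(global_reliability(record))         # 0.95
print(justified({"true": True}, record))  # True
```

Note that everything of philosophical interest is hidden in how `record` is assembled: which token processes get counted as instances of the same type is exactly the problem taken up below.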
This proposal will be adequately specified only when we are told (i) how much of the causal history of a belief counts as part of the process that produced it, (ii) which of the many types to which the process belongs is the relevant type for purposes of assessing its reliability, and (iii) relative to which world or worlds the reliability of the process type is to be assessed: the actual world, the closest worlds containing the case being considered, or something else. Let us look at the answers suggested by Goldman, the leading proponent of a reliabilist account of justification.
(1) Goldman (1979, 1986) takes the relevant belief-producing process to include only the proximate causes internal to the believer. So, for instance, when I recently believed that the telephone was ringing, the process that produced the belief, for purposes of assessing reliability, includes just the causal chain of neural events from the stimulus in my ears inward, together with other concurrent brain states on which the production of the belief depended: it does not include any events in the telephone, or the sound waves travelling between it and my ears, or any earlier decisions I made that were responsible for my being within hearing distance of the telephone at that time. It does seem intuitively plausible that the process on which a belief's justification depends should be restricted to ones internal and proximate to the belief. Why? Goldman does not tell us. One answer some philosophers might give is that a belief's being justified at a given time can depend only on facts directly accessible to the believer's awareness at that time (for, if a believer ought to hold only beliefs that are justified, she must be able to tell at any given time what beliefs would then be justified for her). However, this cannot be Goldman's answer, because he wishes to include in the relevant process neural events that are not directly accessible to consciousness.
(2) Once the reliabilist has told us how to delimit the process producing a belief, he needs to tell us which of the many types to which it belongs is the relevant type. Consider, for example, the process that produces your current belief that you see a book before you. One very broad type to which that process belongs would be specified by 'coming to a belief as to something one perceives as a result of activation of the nerve endings in some of one's sense-organs'. A narrower type to which the process belongs would be specified by 'coming to a belief as to what one sees as a result of activation of the nerve endings in one's retinas'. A still narrower type would be given by inserting in the last specification a description of a particular pattern of activation of the retina's particular cells. Which of these or other types to which the token process belongs is the relevant type for determining whether the type of process that produced your belief is reliable?
If we select a type that is too broad, we will count as having the same degree of justification various beliefs that intuitively seem to have different degrees of justification. Thus the broadest type we specified for your belief that you see a book before you applies also to perceptual beliefs where the object seen is far away and glimpsed only briefly, and such beliefs are intuitively less justified. On the other hand, if we are allowed to select a type that is as narrow as we please, then we can make it out that an obviously unjustified but true belief was produced by a reliable type of process. For example, suppose I see a blurred shape through the fog far off in a field and unjustifiedly, but correctly, believe that it is a sheep: if we include enough detail about my retinal image in specifying the type of the visual process that produced that belief, we can specify a type likely to have only that one instance, and therefore to be 100 percent reliable. Goldman conjectures (1986) that the relevant process type is 'the narrowest type that is causally operative'. Presumably, a feature of the process producing a belief was causally operative in producing it just in case, had some alternative feature been present instead, the process would not have led to that belief. (We need to say 'some' here rather than 'any', because, for example, when I see an oak tree, the particular shape of my retinal image is clearly causally operative in producing my belief that I see a tree, even though there are alternative shapes, for example 'oakish' ones, that would have produced the same belief.)
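The type-relativity problem just described can be made vivid with a toy calculation: one and the same token process gets wildly different measured reliabilities depending on how broadly its type is individuated. Everything here is a hypothetical illustration; the records, feature names, and grouping keys are invented for the example.

```python
from collections import defaultdict

# Hypothetical track record of visual beliefs. Each record notes some
# features of the producing process and whether the resulting belief was true.
records = [
    {"modality": "vision", "conditions": "clear", "image": "book-01", "true": True},
    {"modality": "vision", "conditions": "clear", "image": "book-02", "true": True},
    {"modality": "vision", "conditions": "fog",   "image": "blur-17", "true": True},   # the lucky sheep
    {"modality": "vision", "conditions": "fog",   "image": "blur-09", "true": False},
    {"modality": "vision", "conditions": "fog",   "image": "blur-22", "true": False},
]

def reliability(records, type_keys):
    """Group token processes into types by the given keys and return
    each type's proportion of true beliefs."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[k] for k in type_keys)].append(r["true"])
    return {t: sum(v) / len(v) for t, v in groups.items()}

# Broad type: every visual belief lumped together.
print(reliability(records, ["modality"]))
# Narrower type: fog beliefs separated from clear ones.
print(reliability(records, ["modality", "conditions"]))
# Absurdly narrow type: individuated by the exact retinal image; the lucky
# sheep belief now belongs to a type with one instance, "100% reliable".
print(reliability(records, ["modality", "conditions", "image"]))
```

The broad grouping smears justified and unjustified beliefs together, while the narrowest grouping certifies the lucky guess; the sketch shows why Goldman needs a principled middle ground such as 'the narrowest causally operative type'.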
(3) Should the justification of a belief in a hypothetical, non-actual example turn on the reliability of the belief-producing process in the possible world of the example? That leads to the implausible result that in a world run by a Cartesian demon (a powerful being who causes the other inhabitants of the world to have rich and coherent sets of perceptual and memory impressions that are all illusory) the perceptual and memory beliefs of the other inhabitants are all unjustified, for they are produced by processes that are, in that world, quite unreliable. If we say instead that it is the reliability of the processes in the actual world that matters, we get the equally undesired result that if the actual world is a demon world then our perceptual and memory beliefs are all unjustified.
Goldman's solution (1986) is that the reliability of the process types is to be gauged by their performance in 'normal' worlds, that is, worlds consistent with 'our general beliefs about the world . . . about the sorts of objects, events and changes that occur in it'. This gives the intuitively right results for the problem cases just considered, but it implies an implausible relativity of justification. If there are people whose general beliefs about the world are very different from mine, then there may, on this account, be beliefs that I can correctly regard as justified (ones produced by processes that are reliable in what I take to be a normal world) but that they can correctly regard as not justified.
However these questions about the specifics are dealt with, there are reasons for questioning the basic idea that the criterion for a belief's being justified is its being produced by a reliable process. Doubt about the sufficiency of the reliabilist criterion is prompted by a sort of example that Goldman himself uses for another purpose. Suppose that being in brain-state B always causes one to believe that one is in brain-state B. Here the reliability of the belief-producing process is perfect, but 'we can readily imagine circumstances in which a person goes into brain-state B and therefore has the belief in question, though this belief is by no means justified' (Goldman, 1979). Doubt about the necessity of the condition arises from the possibility that one might know that one has strong justification for a certain belief and yet that knowledge is not what actually prompts one to believe. For example, I might be well aware that, having read the weather bureau's forecast that it will be much hotter tomorrow, I have ample reason to be confident that it will be hotter tomorrow; yet I irrationally refuse to believe it until my Aunt Hattie tells me that she feels in her joints that it will be hotter tomorrow. Here what prompts me to believe does not justify my belief, but my belief is nevertheless justified by my knowledge of the weather bureau's prediction and of its evidential force: I could cite that knowledge to rebut any charge that I ought not to be holding the belief. Indeed, given my justification, and given that there is nothing untoward about the weather bureau's prediction, my belief, if true, can be counted knowledge. This sort of example raises doubt whether any causal condition, be it a reliable process or something else, is necessary for either justification or knowledge.
Philosophers and scientists alike have often held that the simplicity or parsimony of a theory is one reason, all else being equal, to view it as true. This goes beyond the unproblematic idea that simpler theories are easier to work with and have greater aesthetic appeal.
One theory is more parsimonious than another when it postulates fewer entities, processes, changes or explanatory principles; the simplicity of a theory depends on essentially the same considerations, though parsimony and simplicity are obviously not the same. It is plausible to demand clarification of what makes one theory simpler or more parsimonious than another before the justification of these methodological maxims can be addressed.
If we set this descriptive problem to one side, the major normative problem is as follows: what reason is there to think that simplicity is a sign of truth? Why should we accept a simpler theory instead of its more complex rivals? Newton and Leibniz thought that the answer was to be found in a substantive fact about nature. In the "Principia," Newton laid down as his first Rule of Reasoning in Philosophy that 'nature does nothing in vain . . . for Nature is pleased with simplicity and affects not the pomp of superfluous causes'. Leibniz hypothesized that the actual world obeys simple laws because God's taste for simplicity influenced his decision about which world to actualize.
The tragedy of the Western mind, described by Koyré, is a direct consequence of the stark Cartesian division between mind and world. We discovered the 'certain principles of physical reality', said Descartes, 'not by the prejudices of the senses, but by the light of reason, and which thus possess so great evidence that we cannot doubt of their truth'. Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.
The most fundamental aspect of the Western intellectual tradition is the assumption that there is a fundamental division between the material and the immaterial world, or between the realm of matter and the realm of pure mind or spirit. The metaphysical framework based on this assumption is known as ontological dualism. As the word dual implies, the framework is predicated on an ontology, or a conception of the nature of God or Being, that assumes reality has two distinct and separable dimensions. The concept of Being as continuous, immutable, and having a prior or separate existence from the world of change dates from the ancient Greek philosopher Parmenides. The same qualities were associated with the God of the Judeo-Christian tradition, and they were considerably amplified by the role played in theology by Platonic and Neoplatonic philosophy.
Nicolas Copernicus, Galileo, Johannes Kepler, and Isaac Newton were all inheritors of a cultural tradition in which ontological dualism was a primary article of faith. Hence the idealization of the mathematical ideal as a source of communion with God, which dates from Pythagoras, provided a metaphysical foundation for the emerging natural sciences. This explains why the creators of classical physics believed that doing physics was a form of communion with the geometrical and mathematical forms resident in the perfect mind of God. This view would survive in a modified form in what is now known as Einsteinian epistemology, and it accounts in no small part for the reluctance of many physicists to accept the epistemology associated with the Copenhagen Interpretation.
At the beginning of the nineteenth century, Pierre-Simon Laplace, along with a number of other French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science, by observing its epistemology, had revealed itself to be the fundamental science, the hypothesis of God was, they concluded, entirely unnecessary.
Laplace is recognized for eliminating not only the theological component of classical physics but the entire metaphysical component as well. The epistemology of science requires, he said, that we start with inductive generalizations from observed facts to hypotheses that are 'tested by observed conformity of the phenomena'. What was unique about Laplace's view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in Laplace's view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts, and the truths about nature are only the quantities.
As this view of hypotheses and of the truths of nature as quantities was extended in the nineteenth century to a mathematical description of phenomena like heat, light, electricity, and magnetism, Laplace's assumptions about the actual character of scientific truths seemed correct. This progress suggested that if we could remove all thoughts about the 'nature of' or the 'source of' phenomena, the pursuit of strictly quantitative concepts would bring us to a complete description of all aspects of physical reality. Subsequently, figures like Comte, Kirchhoff, Hertz, and Poincaré developed a program for the study of nature that was quite different from that of the original creators of classical physics.
The seventeenth-century view of physics as a philosophy of nature, or as natural philosophy, was displaced by the view of physics as an autonomous science that was 'the science of nature'. This view, which was premised on the doctrine of positivism, promised to subsume all of nature under a mathematical analysis of entities in motion, and claimed that the true understanding of nature was revealed only in the mathematical description. Since the doctrine of positivism assumes that the knowledge we call physics resides only in the mathematical formalism of physical theory, it disallows the prospect that the vision of physical reality revealed in physical theory can have any other meaning. The irony in the history of science is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.
Epistemology since Hume and Kant has drawn back from this theological underpinning. Indeed, the very idea that nature is simple (or uniform) has come in for a critique. The view has taken hold that a preference for simple and parsimonious hypotheses is purely methodological: It is constitutive of the attitude we call ‘scientific’ and makes no substantive assumption about the way the world is.
A variety of otherwise diverse twentieth-century philosophers of science have attempted, in different ways, to flesh out this position; two examples must suffice here (see Hesse, 1969, for summaries of other proposals). Popper (1959) holds that scientists should prefer highly falsifiable (improbable) theories, and he tries to show that simpler theories are more falsifiable. Quine (1966), in contrast, sees a virtue in theories that are highly probable, and he argues for a general connection between simplicity and high probability.
Both these proposals are global. They attempt to explain why simplicity should be part of the scientific method in a way that spans all scientific subject matters. No assumption about the details of any particular scientific problem serves as a premiss in Popper’s or Quine’s arguments.
Newton and Leibniz thought that the justification of parsimony and simplicity flows from the hand of God; Popper and Quine try to justify these methodological maxims without assuming anything substantive about the way the world is. In spite of these differences in approach, the two proposals have something in common: they assume that all uses of parsimony and simplicity in the separate sciences can be encompassed in a single justifying argument. Recent developments in confirmation theory suggest that this assumption should be scrutinized. Good (1983) and Rosenkrantz (1977) have emphasized the role of auxiliary assumptions in mediating the connection between hypotheses and observations. Whether a hypothesis is well supported by some observations, or whether one hypothesis is better supported than another by those observations, crucially depends on empirical background assumptions about the inference problem at hand. The same point applies to the idea of prior probability (or prior plausibility). If one hypothesis is chosen over another even though the two are equally supported by current observations, this must be due to an empirical background assumption.
Principles of parsimony and simplicity mediate the epistemic connection between hypotheses and observations. Perhaps these principles are able to do this because they are surrogates for an empirical background theory. It is not that there is one background theory presupposed by every appeal to parsimony; this has the quantifier order backwards. Rather, the suggestion is that each parsimony argument is justified only to the degree that it reflects an empirical background theory about the subject matter. Once that background theory is brought out into the open, the principle of parsimony is entirely dispensable (Sober, 1988).
This 'local' approach to the principles of parsimony and simplicity resurrects the idea that they make sense only if the world is one way rather than another. It rejects the idea that these maxims are purely methodological. How defensible this point of view is will depend on detailed case studies of scientific hypothesis evaluation and on further developments in the theory of scientific inference.
An inference is a (perhaps very complex) act of thought by virtue of which (1) one passes from a set of one or more propositions or statements to a further proposition or statement, and (2) it appears that the latter is true if the former is or are. This psychological characterization recurs throughout the literature, with inessential variations. It is natural to desire a better characterization of inference, yet attempts to do so by constructing a fuller psychological explanation fail to capture the grounds on which inference is objectively valid: a point elaborately made by Gottlob Frege. Attempts to understand the nature of inference through the device of representing inferences by formal-logical calculations or derivations (1) leave us puzzled about the relation of formal-logical derivations to the informal inferences they are supposed to represent or reconstruct, and (2) leave us worried about the sense of such formal derivations. Are these derivations inferences? Are not informal inferences needed in order to apply the rules governing the construction of formal derivations (inferring that this operation is an application of that formal rule)? These are concerns cultivated by, for example, Wittgenstein.
Coming up with an adequate characterization of inference (and even working out what would count as an adequate characterization) is by no means a resolved philosophical problem. A categorical proposition is, traditionally, one that is not a conditional. As with the 'affirmative' and the 'negative', modern opinion is wary of the distinction, since what appears categorical may vary with the choice of a primitive vocabulary and notation. Apparently categorical propositions may also turn out to be disguised conditionals: 'x is intelligent' (categorical?) may be equivalent to 'if x is given a range of tasks, she does them better than many people' (conditional?). The problem is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.
The distinction between necessary and sufficient conditions runs as follows: if 'p' is a necessary condition of 'q', then 'q' cannot be true unless 'p' is true; if 'p' is a sufficient condition of 'q', then 'q' is true whenever 'p' is. Thus steering well is a necessary condition of driving in a satisfactory manner, but it is not sufficient, for one can steer well but drive badly for other reasons. Confusion may result if the distinction is not heeded. For example, the statement that 'A' causes 'B' may be interpreted to mean that 'A' is itself a sufficient condition for 'B', or that it is only a necessary condition for 'B', or perhaps a necessary part of a total sufficient condition. Lists of conditions to be met for satisfying some administrative or legal requirement frequently attempt to give individually necessary and jointly sufficient sets of conditions.
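The steering example can be sketched in code. This is a minimal illustration, not drawn from the text: the case list and the helper functions are hypothetical, chosen only to show how a necessary condition can fail to be sufficient.

```python
# p is NECESSARY for q:  q never holds without p  (every q-case is a p-case).
# p is SUFFICIENT for q: q holds whenever p does  (every p-case is a q-case).

def is_necessary(p, q, cases):
    """p is necessary for q iff q never holds without p."""
    return all(p(c) for c in cases if q(c))

def is_sufficient(p, q, cases):
    """p is sufficient for q iff q holds whenever p does."""
    return all(q(c) for c in cases if p(c))

# Hypothetical cases: (steers_well, drives_well). Driving well requires
# steering well, but steering well does not guarantee driving well.
cases = [(True, True), (True, False), (False, False)]
steers = lambda c: c[0]
drives = lambda c: c[1]

print(is_necessary(steers, drives, cases))   # no good driving without good steering
print(is_sufficient(steers, drives, cases))  # one can steer well yet drive badly
```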
A conditional is any proposition of the form 'if p then q'. The condition hypothesized, 'p', is called the antecedent of the conditional, and 'q' the consequent. Various kinds of conditional have been distinguished. The weakest is that of 'material implication', which says merely that either 'not-p' or 'q' is true. Stronger conditionals include elements of 'modality', corresponding to the thought that 'if p is true then q must be true'. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether conditionals are better treated semantically, yielding different kinds of conditional with different meanings, or pragmatically, in which case there should be one basic meaning, with surface differences arising from other implicatures.
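The weakest reading, material implication, can be tabulated directly. A small sketch (the function name is ours): the conditional comes out false only when the antecedent is true and the consequent false.

```python
# Material implication: 'if p then q' read as 'not-p or q'.
def material_implication(p, q):
    return (not p) or q

# Print the full truth table.
for p in (True, False):
    for q in (True, False):
        print(p, q, material_implication(p, q))
```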
It follows from the definition of ‘strict implication’ that a necessary proposition is strictly implied by any proposition, and that an impossible proposition strictly implies any proposition. If strict implication corresponds to ‘q follows from p’, then this means that a necessary proposition follows from anything at all, and anything at all follows from an impossible proposition. This is a problem if we wish to distinguish between valid and invalid arguments with necessary conclusions or impossible premises.
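These two 'paradoxes of strict implication' can be set out schematically, using the standard box for necessity (the notation here is our gloss, not the text's):

```latex
% Strict implication defined via necessity:
%   p strictly implies q  iff  \Box(p \rightarrow q)

% If q is necessary, \Box(p \rightarrow q) holds for ANY p:
\Box q \;\vdash\; \Box(p \rightarrow q)

% If p is impossible, \Box(p \rightarrow q) holds for ANY q:
\Box \neg p \;\vdash\; \Box(p \rightarrow q)
```

Hence a necessary proposition is strictly implied by every proposition, and an impossible proposition strictly implies every proposition, whatever their subject matter.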
The Humean problem of induction begins as follows. Suppose that there is some property 'A' pertaining to an observational or experimental situation, and that out of a large number of observed instances of 'A', some fraction m/n (possibly equal to 1) have also been instances of some logically independent property 'B'. Suppose further that the background circumstances not specified in these descriptions have been varied to a substantial degree, and that there is no collateral information available concerning the frequency of 'B's among 'A's or concerning causal or nomological connections between instances of 'A' and instances of 'B'.
In this situation, an 'enumerative' or 'instantial' inductive inference would move directly from the premise that m/n of observed 'A's are 'B's to the conclusion that approximately m/n of all 'A's are 'B's. (The usual probability qualification will be assumed to apply to the inference, rather than being part of the conclusion.) Here the class of 'A's should be taken to include not only unobserved and future 'A's, but also possible or hypothetical 'A's. (An alternative conclusion would concern the probability or likelihood of the next observed 'A' being a 'B'.)
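The schema just described can be displayed compactly (the layout is our gloss):

```latex
% Enumerative (instantial) induction:
\frac{\; m/n \text{ of observed } A\text{'s are } B\text{'s} \;}
     {\; \text{approximately } m/n \text{ of all } A\text{'s are } B\text{'s} \;}
```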
The traditional or Humean problem of induction, often referred to simply as 'the problem of induction', is the problem of whether and why inferences that fit this schema should be considered rationally acceptable or justified from an epistemic or cognitive standpoint, i.e., whether and why reasoning in this way is likely to lead to true claims about the world. Is there any sort of argument or rationale that can be offered for thinking that conclusions reached in this way are likely to be true if the corresponding premisses are true, or even that their chances of truth are significantly enhanced?
Hume's discussion of this issue deals explicitly only with cases where all observed 'A's are 'B's, but his argument applies just as well to the more general case. His conclusion is entirely negative and sceptical: inductive inferences are not rationally justified, but are instead the result of an essentially a-rational process, custom or habit. Hume (1711-76) challenges the proponent of induction to supply a cogent line of reasoning that leads from an inductive premise to the corresponding conclusion, and offers an extremely influential argument in the form of a dilemma (sometimes referred to as 'Hume's fork'): any such reasoning must be either demonstrative or empirical, and neither kind can do the job.
Such reasoning would, he argues, have to be either deductively demonstrative reasoning concerning relations of ideas or 'experimental', i.e., empirical, reasoning concerning matters of fact or existence. It cannot be the former, because all demonstrative reasoning relies on the avoidance of contradiction, and it is not a contradiction to suppose that 'the course of nature may change', that an order that was observed in the past will not continue into the future. But it cannot be the latter either, since any empirical argument would appeal to the past success of such reasoning, and the justifiability of generalizing from experience is precisely what is at issue, so that any such appeal would be question-begging. Hence, Hume concludes that there can be no such reasoning (1748).
An alternative version of the problem may be obtained by formulating it with reference to the so-called Principle of Induction, which says roughly that the future will resemble the past or, somewhat better, that unobserved cases will resemble observed cases. An inductive argument may be viewed as enthymematic, with this principle serving as a suppressed premiss, in which case the issue is obviously how such a premiss can be justified. Hume's argument is then that no such justification is possible: the principle cannot be justified a priori, because it is not contradictory to deny it; and it cannot be justified by appeal to its having held true in past experience without obviously begging the question.
The predominant recent responses to the problem of induction, at least in the analytic tradition, in effect accept the main conclusion of Hume's argument, namely, that inductive inferences cannot be justified in the sense of showing that the conclusion of such an inference is likely to be true if the premise is true, and thus attempt to find another sort of justification for induction. Such responses fall into two main categories: (i) pragmatic justifications or 'vindications' of induction, mainly developed by Hans Reichenbach (1891-1953), and (ii) ordinary language justifications of induction, whose most important proponent is Peter Frederick Strawson (1919- ). In contrast, some philosophers still attempt to reject Hume's dilemma by arguing either (iii) that, contrary to appearances, induction can be inductively justified without vicious circularity, or (iv) that an a priori justification of induction is possible after all.
(1) Reichenbach's view is that induction is best regarded not as a form of inference, but rather as a 'method' for arriving at posits regarding, e.g., the proportion of 'A's that are also 'B's. Such a posit is not a claim asserted to be true, but is instead an intellectual wager analogous to a bet made by a gambler. Understood in this way, the inductive method says that one should posit that the observed proportion is, within some measure of approximation, the true proportion, and then continually correct that initial posit as new information comes in.
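Reichenbach's procedure of positing the observed proportion and correcting it as new cases come in can be sketched as follows. This is a toy version of the so-called 'straight rule'; the data stream and function name are invented for illustration, not taken from Reichenbach.

```python
# A minimal sketch of the inductive "method of posits" (straight rule):
# posit that the observed relative frequency of B's among A's is the
# true limiting proportion, and revise the posit after each new case.

def straight_rule_posits(observations):
    """observations: iterable of booleans (is this A also a B?).
    Yields the posited proportion after each observation."""
    b_count = 0
    for n, is_b in enumerate(observations, start=1):
        b_count += is_b
        yield b_count / n

data = [True, True, False, True, False, True]  # hypothetical observed A's
posits = list(straight_rule_posits(data))
print(posits)  # each entry is the current posit m/n
```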
The gambler's bet is normally an 'appraised posit', i.e., he knows the chances or odds that the outcome on which he bets will actually occur. In contrast, the inductive bet is a 'blind posit': we do not know the chances that it will succeed, or even that success is possible. What we are gambling on when we make such a bet is the value of a certain proportion in the independent world, which Reichenbach construes as the limit of the observed proportion as the number of cases increases to infinity. Nevertheless, we have no way of knowing that there is even such a limit, and no way of knowing that the proportion of 'A's that are 'B's converges in the end on some stable value rather than varying at random. If we cannot know that this limit exists, then we obviously cannot know that we have any definite chance of finding it.
What we can know, according to Reichenbach, is that if there is a truth of this sort to be found, the inductive method will eventually find it. That this is so is an analytic consequence of Reichenbach's account of what it is for such a limit to exist. The only way that the inductive method of making an initial posit and then refining it in light of new observations can fail eventually to arrive at the true proportion is if the series of observed proportions never converges on any stable value, which means that there is no truth to be found concerning the proportion of 'A's that are 'B's. Thus, induction is justified, not by showing that it will succeed, or indeed that it has any definite likelihood of success, but only by showing that it will succeed if success is possible. Reichenbach's claim is that no more than this can be established for any method, and hence that induction gives us our best chance for success, our best gamble in a situation where there is no alternative to gambling.
This pragmatic response to the problem of induction faces several serious problems. First, there are indefinitely many other 'methods' for arriving at posits for which the same sort of defence can be given: methods that yield the same result as the inductive method in the long run but differ arbitrarily in the short run. Despite the efforts of Reichenbach and others, it is unclear that there is any satisfactory way to exclude such alternatives, and so to avoid the result that any arbitrarily chosen short-run posit is just as reasonable as the inductive posit. Second, even if there is a truth of the requisite sort to be found, the inductive method is only guaranteed to find it, or even to come within any specifiable distance of it, in the indefinite long run; yet any actual application of inductive results takes place in the short run, making the relevance of the pragmatic justification to actual practice uncertain. Third, and most important, Reichenbach's response to the problem simply accepts the claim of the Humean sceptic that an inductive premise never provides the slightest reason for thinking that the corresponding inductive conclusion is true. Reichenbach himself is quite candid on this point, but this does not alleviate the intuitive implausibility of saying that we have no more reason for thinking that our scientific and commonsense inductive conclusions are true than, to use Reichenbach's own analogy (1949), a blind man wandering in the mountains who feels an apparent trail with his stick has for thinking that following it will lead him to safety.
An approach to induction resembling Reichenbach's, in claiming that particular inductive conclusions are posits or conjectures rather than the conclusions of cogent inferences, is offered by Popper. However, Popper's view is even more overtly sceptical: it amounts to saying that all that can ever be said in favour of the truth of an inductive claim is that the claim has been tested and not yet been shown to be false.
(2) The ordinary language response to the problem of induction has been advocated by many philosophers, but the discussion here will be restricted to Strawson's paradigmatic version. Strawson claims that the question whether induction is justified or reasonable makes sense only if it tacitly involves the demand that inductive reasoning meet the standards appropriate to deductive reasoning, i.e., that inductive conclusions be shown to follow deductively from the inductive premisses. Such a demand cannot, of course, be met, but only because it is illegitimate: inductive and deductive reasoning are simply fundamentally different kinds of reasoning, each possessing its own autonomous standards, and there is no reason to demand or expect that one of these kinds meet the standards of the other. If induction is assessed by inductive standards, the only ones that are appropriate, then it is obviously justified.
The problem here is to understand what this allegedly obvious justification of induction amounts to. In his main discussion of the point (1952), Strawson claims that it is an analytic truth that believing a conclusion for which there is strong evidence is reasonable, and an analytic truth that inductive evidence of the sort captured by the schema presented earlier constitutes strong evidence for the corresponding inductive conclusion, thus apparently yielding the analytic conclusion that believing a conclusion for which there is inductive evidence is reasonable. Nevertheless, he also admits, indeed insists, that the claim that inductive conclusions will be true in the future is contingent, empirical, and may turn out to be false (1952). Thus, the notion of reasonable belief and the correlative notion of strong evidence must apparently be understood in ways that have nothing to do with likelihood of truth, presumably by appeal to the standards of reasonableness and strength of evidence that are accepted by the community and embodied in ordinary usage.
Understood in this way, Strawson's response to the problem of induction does not speak to the central issue raised by Humean scepticism: the issue of whether the conclusions of inductive arguments are likely to be true. It amounts to saying merely that if we reason in this way, we can correctly call ourselves 'reasonable' and our evidence 'strong', according to our accepted community standards. Nevertheless, on the underlying issue of whether following these standards is a good way to find the truth, the ordinary language response appears to have nothing to say.
(3) The main attempts to show that induction can be justified inductively have concentrated on showing that such a defence can avoid circularity. Skyrms (1975) formulates perhaps the clearest version of this general strategy. The basic idea is to distinguish different levels of inductive argument: a first level at which induction is applied to things other than arguments; a second level at which it is applied to arguments at the first level, arguing that they have been observed to succeed so far and hence are likely to succeed in general; a third level at which it is applied in the same way to arguments at the second level; and so on. Circularity is allegedly avoided by treating each of these levels as autonomous and justifying the arguments at each level by appeal to an argument at the next level.
One problem with this sort of move is that even if circularity is avoided, the movement to higher and higher levels will eventually fail simply for lack of evidence: a level will be reached at which there have not been enough successful inductive arguments to provide a basis for inductive justification at the next higher level, and if this is so, then the whole series of justifications collapses. A more fundamental difficulty is that the epistemological significance of the distinction between levels is obscure. If the issue is whether reasoning in accord with the original schema offered above ever provides a good reason for thinking that the conclusion is likely to be true, then it still seems question-begging, even if not flatly circular, to answer this question by appeal to another argument of the same form.
(4) The idea that induction can be justified on a purely a priori basis is in one way the most natural response of all: it alone treats an inductive argument as an independently cogent piece of reasoning whose conclusion can be seen rationally to follow, although perhaps only with probability, from its premise. Such an approach has, however, only rarely been advocated (Russell, 1912, and BonJour, 1986), and is widely thought to be clearly and demonstrably hopeless.
Many of the reasons for this pessimistic view depend on general epistemological theses about the possibility or nature of a priori cognition. Thus if, as Quine alleges, there is no a priori justification of any kind, then obviously an a priori justification for induction is ruled out. Or if, as more moderate empiricists have claimed, all a priori knowledge is analytic, then again an a priori justification for induction seems to be precluded, since the claim that if an inductive premise is true then the conclusion is likely to be true does not fit the standard conceptions of 'analyticity'. A full consideration of these matters is beyond the scope of the present discussion.
There are, however, two more specific and quite influential reasons for thinking that an a priori approach is impossible that can be briefly considered. First, there is the assumption, originating in Hume but since adopted by very many others, that an a priori defence of induction would have to involve 'turning induction into deduction', i.e., showing, per impossibile, that the inductive conclusion follows deductively from the premise, so that it is a formal contradiction to accept the latter and deny the former. However, it is unclear why an a priori approach need be committed to anything this strong. It would be enough if it could be argued that it is a priori unlikely that such a premise should be true and the corresponding conclusion false.
Second, Reichenbach defends his view that pragmatic justification is the best that is possible by pointing out that a completely chaotic world, in which there is simply no true conclusion to be found as to the proportion of 'A's that are 'B's, is neither impossible nor unlikely from a purely a priori standpoint, the suggestion being that there can therefore be no a priori reason for thinking that such a conclusion is true. Nevertheless, the claim that a chaotic world is a priori neither impossible nor unlikely in the absence of any further evidence does not show that such a world is not a priori unlikely relative to the occurrence of a long-run pattern of evidence in which a certain stable proportion of observed 'A's are 'B's. A world containing such-and-such a regularity might be a priori somewhat likely relative to such evidence: an occurrence, it might be claimed, that would be highly unlikely in a chaotic world (BonJour, 1986).
Goodman's 'new riddle of induction' asks us to suppose that before some specific time 't' (perhaps the year 2000) we observe a large number of emeralds (property A) and find them all to be green (property B). We proceed to reason inductively and conclude that all emeralds are green. Goodman points out, however, that we could have drawn a quite different conclusion from the same evidence. If we define the term 'grue' to mean 'green if examined before t and blue if examined after t', then all of our observed emeralds will also be grue. A parallel inductive argument will yield the conclusion that all emeralds are grue, and hence that all those examined after the year 2000 will be blue. Presumably the first of these conclusions is genuinely supported by our observations and the second is not. Nevertheless, the problem is to say why this is so, and to impose some further restriction upon inductive reasoning that will permit the first argument and exclude the second.
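Goodman's point, that the very same observations satisfy both predicates, can be made concrete in a short sketch. The cutoff date and the emerald records below are hypothetical illustrations:

```python
import datetime

# A sketch of Goodman's 'grue' predicate, with t set to the year 2000.
T = datetime.date(2000, 1, 1)

def grue(colour, examined_on):
    """'grue': green if examined before t, blue if examined after t."""
    return (colour == "green") if examined_on < T else (colour == "blue")

# Every emerald observed before t and found green is BOTH green and grue,
# so the same evidence "supports" both generalizations.
observed = [("green", datetime.date(1995, 6, 1)),
            ("green", datetime.date(1999, 12, 31))]
print(all(c == "green" for c, d in observed))  # all observed emeralds are green
print(all(grue(c, d) for c, d in observed))    # ...and all of them are grue too
```

Projecting 'grue' forward then predicts that emeralds examined after t will be blue, which is exactly the unacceptable parallel conclusion.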
The obvious alternative suggestion is that 'grue' and similar predicates do not correspond to genuine, purely qualitative properties in the way that 'green' and 'blue' do, and that this is why inductive arguments involving them are unacceptable. Goodman, however, claims to be unable to make clear sense of this suggestion, pointing out that the relations of formal definability are perfectly symmetrical: 'grue' may be defined in terms of 'green' and 'blue', but 'green' can equally well be defined in terms of 'grue' and 'bleen' (where something is 'bleen' if it is blue if examined before 't' and green if examined after 't').
The 'grue' paradox demonstrates the importance of categorization: something is 'grue' if it is examined before future time 't' and green, or not so examined and blue. Even though all emeralds in our evidence class are grue, we ought not infer that all emeralds are grue, for 'grue' is unprojectible and cannot transmit credibility from known to unknown cases. Only projectible predicates are right for induction. Goodman considers entrenchment the key to projectibility: having a long history of successful projection, 'green' is entrenched; lacking such a history, 'grue' is not. A hypothesis is projectible, Goodman suggests, only if its predicates (or suitably related ones) are much better entrenched than its rivals'. Since past successes do not guarantee future ones, induction remains a risky business. The rationale for favouring entrenched predicates is pragmatic: of the possible projections from our evidence class, the one that fits with past practice enables us to utilize our cognitive resources best. Its prospects of being true are no worse than its competitors', and its cognitive utility is greater.
For a better understanding of induction, note that the term is most widely used for any process of reasoning that takes us from empirical premises to empirical conclusions supported by the premises, but not deductively entailed by them. Inductive arguments are therefore kinds of ampliative argument, in which something beyond the content of the premises is inferred as probable or supported by them. Induction is, however, commonly distinguished from arguments to theoretical explanations, which share this ampliative character, by being confined to inferences in which the conclusion involves the same properties or relations as the premises. The central example is induction by simple enumeration, where from premises telling us that Fa, Fb, Fc . . ., where a, b, c are all of some kind 'G', it is inferred that G's from outside the sample, such as future G's, will be 'F', or perhaps that all G's are 'F'. In this way, children who have been deceived by one or two people may infer that everyone is a deceiver. Similar inferences move from some object's past possession of a property to the same object's future possession of it, or from the constancy of some law-like pattern in events and states of affairs to its future constancy: all objects we know of attract each other with a force inversely proportional to the square of the distance between them, so perhaps they all do so, and will always do so.
The rational basis of any such inference was challenged by Hume, who believed that induction presupposed belief in the uniformity of nature, but that this belief has no defence in reason and merely reflects a habit or custom of the mind. Hume was therefore sceptical about the role of reason in either explaining induction or justifying it. Trying to answer Hume, and to show that there is something rationally compelling about the inference, is referred to as the problem of induction. It is widely recognized that any rational defence of induction will have to partition well-behaved properties, for which the inference is plausible (often called projectable properties), from badly behaved ones, for which it is not. It is also recognized that actual inductive habits are more complex than those of simple enumeration, and that both common sense and science pay attention to such factors as variation within the sample giving us the evidence, the application of ancillary beliefs about the order of nature, and so on.
Nevertheless, the fundamental problem remains that experience shows us only events occurring within a very restricted part of a vast spatial and temporal order, about which we then come to believe things.
Confirmation theory seeks a measure of the extent to which evidence supports a theory. A fully formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The grandfather of confirmation theory is Gottfried Leibniz (1646-1716), who believed that a logically transparent language of science would be able to resolve all disputes. In the twentieth century a fully formal confirmation theory was a main goal of the logical positivists, since without it the central concept of verification by empirical evidence itself remained distressingly unscientific. The principal developments were due to Rudolf Carnap (1891-1970), culminating in his Logical Foundations of Probability (1950). Carnap's idea was that the required measure would be the proportion of logically possible states of affairs in which the theory and the evidence both hold, compared to the number in which the evidence itself holds: the probability of a proposition, relative to some evidence, is the proportion of the range of possibilities under which the proposition is true, compared to the total range of possibilities left by the evidence. The difficulty with the theory lies in identifying sets of possibilities so that they admit of measurement: it demands that we can put a measure on the 'range' of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone.
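Carnap's range idea can be illustrated with a deliberately tiny model. Here the 'states of affairs' are truth assignments to three atomic sentences, and the measure simply counts states; this uniform counting is a toy stand-in for Carnap's actual measure functions, and the hypothesis and evidence are invented:

```python
from itertools import product

# All possible "states of affairs": truth assignments to three atoms.
atoms = ["p", "q", "r"]
states = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=3)]

def confirmation(h, e):
    """c(h, e) = (# states where h and e hold) / (# states where e holds)."""
    e_states = [s for s in states if e(s)]
    return sum(h(s) for s in e_states) / len(e_states)

h = lambda s: s["p"] and s["q"]  # hypothesis: p-and-q
e = lambda s: s["p"]             # evidence: p
print(confirmation(h, e))  # half of the p-states also make q true
```

The hard part Carnap faced is exactly what this toy evades: putting a principled measure on infinitely many possibilities rather than counting a finite list.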
Among the obstacles the enterprise meets is the fact that while evidence covers only a finite range of data, the hypotheses of science may cover an infinite range. In addition, confirmation proves to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming variety of evidence from less compelling repetition of the same experiment. Confirmation also proved to be susceptible to acute paradoxes. Finally, scientific judgement seems to depend on such intangible factors as the problems facing rival theories, and most workers have come to stress instead the historically situated sense of what looks plausible in scientific knowledge at a given time.
A paradox arises when a set of apparently incontrovertible premises leads to unacceptable or contradictory conclusions. To solve a paradox will involve showing either that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and our concepts that we do not understand. Somewhat loosely, a paradox is a compelling argument from unacceptable premises to an unacceptable conclusion; more strictly speaking, a paradox is specified to be a sentence that is true if and only if it is false. A characteristic object lesson is: “The displayed sentence is false.”
It is easy to see that this sentence is false if true, and true if false. A paradox, in either of the senses distinguished, presents an important philosophical challenge. Epistemologists are especially concerned with various paradoxes having to do with knowledge and belief. For example, the Knower paradox is an argument that begins with apparently impeccable premises about the concepts of knowledge and inference and derives an explicit contradiction. The origin of the reasoning is the ‘surprise examination paradox’: A teacher announces that there will be a surprise examination next week. A clever student argues that this is impossible. ‘The test cannot be on Friday, the last day of the week, because it would not be a surprise: we would know the day of the test on Thursday evening. This means we can also rule out Thursday, for after we learn that no test has been given by Wednesday, we would know the test is on Thursday or Friday, and we would already know that it is not on Friday by the previous reasoning. The remaining days can be eliminated in the same manner.’
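The structure of the student’s reasoning is a backward-elimination loop: at each step the last remaining day is struck off on the grounds that an exam held then could no longer be a surprise. The following toy sketch (the function name is made up for illustration) merely renders that structure, showing that the elimination, if cogent, would rule out every day.

```python
def student_elimination(days):
    """The student's backward-elimination argument: repeatedly strike
    the last remaining day, since an exam held then would be
    predictable on its eve and hence no surprise."""
    remaining = list(days)
    eliminated = []
    while remaining:
        # If no exam has occurred by the eve of the last remaining day,
        # it would be expected then -- so (says the student) rule it out.
        eliminated.append(remaining.pop())
    return eliminated

order = student_elimination(["Mon", "Tue", "Wed", "Thu", "Fri"])
print(order)  # ['Fri', 'Thu', 'Wed', 'Tue', 'Mon']: every day 'ruled out'
```

That the loop terminates with every day eliminated is, of course, precisely what commentators since 1950 agree is unsound; the sketch displays the argument, not an endorsement of it.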
This puzzle has over a dozen variants. The first was probably invented by the Swedish mathematician Lennart Ekbom in 1943. Although the first few commentators regarded the backward elimination argument as cogent, every writer on the subject since 1950 agrees that the argument is unsound. The controversy has been over the proper diagnosis of the flaw.
Initial analyses of the student’s argument tried to lay the blame on a simple equivocation. Their failure led to more sophisticated diagnoses. The general format has been an assimilation to better-known paradoxes. One tradition casts the surprise examination paradox as a self-referential problem, fundamentally akin to the Liar, the paradox of the Knower, or Gödel’s incompleteness theorem. In this spirit, Kaplan and Montague (1960) distilled the following ‘self-referential’ paradox, the Knower. Consider the sentence:
(S) The negation of this sentence is known (to be true).
Suppose that (S) is true. Then its negation is known and hence true. However, if its negation is true, then (S) must be false. Therefore (S) is false; which is to say, the negation of (S) is true.
This paradox and its accompanying reasoning are strongly reminiscent of the Liar Paradox, which (in one version) begins by considering a sentence ‘This sentence is false’ and derives a contradiction. Versions of both arguments using axiomatic formulations of arithmetic and Gödel numbers to achieve the effect of self-reference yield important meta-theorems about what can be expressed in such systems. Roughly, these are to the effect that no predicate definable in the formalized arithmetic can have the properties we demand of truth (Tarski’s Theorem) or of knowledge (Montague, 1963).
These meta-theorems still leave us with a problem: if we add to these formalized languages predicates intended to express the concepts of knowledge (or truth) and inference, as one might do if a logic of these concepts is desired, then sentences expressing the leading principles of the Knower Paradox will be true.
Explicitly, the assumptions about knowledge and inference are:
(1) If ‘A’ is known, then A.
(2) (1) is known.
(3) If ‘B’ is correctly inferred from ‘A’, and ‘A’ is known, then ‘B’ is known.
To give an absolutely explicit derivation of the paradox by applying these principles to (S), we must add (contingent) assumptions to the effect that certain inferences have been performed. Still, as we go through the argument of the Knower, these inferences are performed. Even if we can somehow restrict such principles and construct a consistent formal logic of knowledge and inference, the paradoxical argument as expressed in natural language still demands some explanation.
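As a sketch, the derivation can be set out as follows. This is a reconstruction for illustration (writing K(·) for ‘is known’), not the Kaplan-Montague formalization itself:

\begin{align*}
&1.\ S \leftrightarrow K(\neg S)    && \text{what (S) says of itself}\\
&2.\ K(\neg S) \rightarrow \neg S   && \text{instance of principle (1)}\\
&3.\ S \rightarrow \neg S           && \text{from 1 and 2}\\
&4.\ \neg S                         && \text{from 3}\\
&5.\ \neg K(\neg S)                 && \text{from 1 and 4}\\
&6.\ K(\neg S)                      && \text{4 was correctly inferred from premises known by (2); apply (3)}\\
&7.\ \bot                           && \text{5 and 6 contradict}
\end{align*}

Step 6 is where the contingent assumption that the inference to 4 has actually been performed does its work.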
The usual proposals for dealing with the Liar often have their analogues for the Knower, e.g., that there is something wrong with self-reference, or that knowledge (or truth) is properly a predicate of propositions and not of sentences. The replies showing that some of these are not adequate are often parallel to those for the Liar paradox. In addition, one can try here what seems to be an adequate solution for the Surprise Examination Paradox, namely the observation that ‘new knowledge can drive out knowledge’, but this does not seem to work on the Knower (Anderson, 1983).
There are a number of paradoxes of the Liar family. The simplest example is the sentence ‘This sentence is false’, which must be false if it is true, and true if it is false. One suggestion is that the sentence fails to say anything; but sentences that fail to say anything are at least not true. In that case, consider the sentence ‘This sentence is not true’, which, if it fails to say anything, is not true, and hence seems true after all (this kind of reasoning is sometimes called the strengthened Liar). Other versions of the Liar introduce pairs of sentences, as in a slogan on the front of a T-shirt saying ‘The sentence on the back of this T-shirt is false’, and one on the back saying ‘The sentence on the front of this T-shirt is true’. It is clear that each of the sentences individually is well formed, and, were it not for the other, might have said something true. So any attempt to dismiss the paradox by saying that the sentences involved are meaningless will face problems.
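The T-shirt pair can be checked mechanically. The brute-force sketch below (a toy illustration; the variable names are mine) searches all truth-value assignments and confirms that none makes each sentence true exactly when what it says is the case:

```python
from itertools import product

# Front of the T-shirt: 'The sentence on the back is false.'
# Back of the T-shirt:  'The sentence on the front is true.'
# An assignment is consistent if each sentence is true precisely
# when what it says holds.
consistent = [
    (front, back)
    for front, back in product([True, False], repeat=2)
    if front == (not back) and back == front
]
print(consistent)  # []: no assignment of truth-values is consistent
```

The empty result is the formal counterpart of the paradox: well-formedness of each sentence taken alone does not guarantee that the pair can be coherently evaluated together.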
Even so, the two approaches that have some hope of adequately dealing with this paradox are ‘hierarchy’ solutions and ‘truth-value gap’ solutions. According to the first, knowledge is structured into ‘levels’. It is argued that there is not one coherent notion expressed by the verb ‘knows’, but rather a whole series of notions: knows₀, knows₁, and so on, perhaps continuing into the transfinite. When the concepts are thus ‘ramified’ and (1)-(3) are properly restricted, no contradictions follow. The main objections to this procedure are that the meaning of these levels has not been adequately explained, and that the idea of such subscripts, even implicit, in a natural language is highly counterintuitive. The ‘truth-value gap’ solution takes sentences such as (S) to lack truth-value: they are neither true nor false, for they do not express propositions. This defeats a crucial step in the reasoning used in the derivation of the paradoxes. Kripke (1975) has developed this approach in connection with the Liar, and Asher and Kamp (1986) have worked out some details of a parallel solution to the Knower. The principal objection is that ‘strengthened’ or ‘super’ versions of the paradoxes tend to reappear when the solution itself is stated.
Since the paradoxical deduction uses only the properties (1)-(3), and since the argument is formally valid, any notion that satisfies these conditions will lead to a paradox. Thus, Grim (1988) notes that ‘is known’ may be read as ‘is known by an omniscient God’ and concludes that there is no coherent single notion of omniscience. Thomason (1980) observes that with some different conditions, analogous reasoning about belief can lead to paradoxical consequences.
Overall, it looks as if we should conclude that knowledge and truth are intrinsically ‘stratified’ concepts. It would seem that we must simply accept the fact that these (and similar) concepts cannot be assigned any one fixed level, finite or transfinite. Still, the meaning of this idea certainly needs further clarification.
A paradox, again, arises when a set of apparently incontrovertible premises gives unacceptable or contradictory conclusions; to solve a paradox will involve showing either that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and concepts that we do not understand. Famous families of paradoxes include the ‘semantic paradoxes’ and ‘Zeno’s paradoxes’. At the beginning of the 20th century, Russell’s paradox and other set-theoretical paradoxes led to the complete overhaul of the foundations of set theory, while the ‘Sorites paradox’ has led to the investigation of the semantics of vagueness and fuzzy logics.
To what extent, however, can analysis be informative? This is the question that gives rise to what philosophers have traditionally called ‘the’ paradox of analysis. Thus, consider the following proposition:
(1) To be an instance of knowledge is to be an instance of justified true belief not essentially grounded in any falsehood.
(1), if true, illustrates an important type of philosophical analysis. For convenience of exposition, I will assume (1) is a correct analysis. The paradox arises from the fact that if the concept of justified true belief not essentially grounded in any falsehood is the analysans of the concept of knowledge, it would seem that they are the same concept, and hence that:
(2) To be an instance of knowledge is to be an instance of knowledge
would have to be the same proposition as (1). But then how can (1) be informative when (2) is not? This is what is called the first paradox of analysis. Classical writings on analysis suggest a second paradox of analysis (Moore, 1942):
(3) An analysis of the concept of being a brother is that to be a brother is to be a male sibling.
If (3) is true, it would seem that the concept of being a brother would have to be the same concept as the concept of being a male sibling, and that:
(4) An analysis of the concept of being a brother is that to be a brother is to be a brother
would also have to be true and, in fact, would have to be the same proposition as (3). Yet (3) is true and (4) is false.
Both these paradoxes rest upon the assumptions that analysis is a relation between concepts, rather than one involving entities of other sorts, such as linguistic expressions, and that in a true analysis, analysans and analysandum are the same concept. Both these assumptions are explicit in Moore, but some of Moore’s remarks hint at a solution: that a statement of an analysis is a statement partly about the concept involved and partly about the verbal expressions used to express it. He says he thinks a solution of this sort is bound to be right, but fails to suggest one because he cannot see a way in which the analysis can be even partly about the expression (Moore, 1942).
Ackerman (1990) suggests such a way as a solution to the second paradox, explicating (3) as: (5) An analysis is given by saying that the verbal expression ‘χ is a brother’ expresses the same concept as is expressed by the conjunction of the verbal expressions ‘χ is male’ when used to express the concept of being male and ‘χ is a sibling’ when used to express the concept of being a sibling. An important point about (5) is as follows. Stripped of its philosophical jargon (‘analysis’, ‘concept’, ‘χ is a . . . ‘), (5) seems to state the sort of information generally stated in a definition of the verbal expression ‘brother’ in terms of the verbal expressions ‘male’ and ‘sibling’, where this definition is designed to draw upon listeners’ antecedent understanding of the verbal expressions ‘male’ and ‘sibling’, and thus to tell listeners what the verbal expression ‘brother’ really means, instead of merely providing the information that two verbal expressions are synonymous without specifying the meaning of either one. Thus, this solution to the second paradox seems to make the sort of analysis that gives rise to the paradox a matter of specifying the meaning of a verbal expression in terms of separate verbal expressions already understood, and saying how the meanings of these separate, already-understood verbal expressions are combined. This corresponds to Moore’s intuitive requirement that an analysis should both specify the constituent concepts of the analysandum and tell how they are combined. But is this all there is to philosophical analysis?
To answer this question, we must note that, in addition to there being two paradoxes of analysis, there are two types of analysis that are relevant here. (There are also other types of analysis, such as reformatory analysis, where the analysans is intended to improve on and replace the analysandum. But since reformatory analysis involves no commitment to conceptual identity between analysans and analysandum, it does not generate a paradox of analysis and so will not concern us here.) One way to recognize the difference between the two types of analysis concerning us here is to focus on the difference between the two paradoxes. This can be done by means of the Frege-inspired sense-individuation condition: that two expressions have the same sense if and only if they can be interchanged ‘salva veritate’ whenever used in propositional attitude contexts. If the expressions for the analysans and the analysandum in (1) met this condition, (1) and (2) would not raise the first paradox; but the second paradox arises regardless of whether the expressions for the analysans and the analysandum meet this condition. The second paradox is a matter of the failure of such expressions to be interchangeable salva veritate in sentences involving such contexts as ‘an analysis is given thereof’. Thus, a solution (such as the one offered) that is aimed only at such contexts can solve the second paradox. This is not so for the first paradox, which will apply to all pairs of propositions expressed by sentences in which expressions for pairs of analysantia and analysanda raising the first paradox are interchanged. For example, consider the following proposition:
(6) Mary knows that some cats lack tails.
It is possible for John to believe (6) without believing:
(7) Mary has justified true belief, not essentially grounded in any falsehood, that some cats lack tails.
Yet this possibility clearly does not mean that the proposition that Mary knows that some cats lack tails is partly about language.
One approach to the first paradox is to argue that, despite the apparent epistemic inequivalence of (1) and (2), the concept of justified true belief not essentially grounded in any falsehood is still identical with the concept of knowledge (Sosa, 1983). Another approach is to argue that in the sort of analysis raising the first paradox, the analysans and analysandum are concepts that are different but that bear a special epistemic relation to each other. One can develop such an approach by suggesting that this analysans-analysandum relation has the following facets:
(1) The analysans and analysandum are necessarily coextensive, i.e., necessarily every instance of one is an instance of the other.
(2) The analysans and analysandum are knowable a priori to be coextensive.
(3) The analysandum is simpler than the analysans, a condition whose necessity is recognized in classical writings on analysis (e.g., Langford, 1942).
(4) The analysans does not have the analysandum as a constituent.
Condition (4) rules out circularity. But since many valuable quasi-analyses are partly circular, e.g., knowledge is justified true belief supported by known reasons not essentially grounded in any falsehood, it seems best to distinguish between full analysis, for which (4) is a necessary condition, and partial analysis, for which it is not.
These conditions, while necessary, are clearly insufficient. The basic problem is that they apply to many pairs of concepts that do not seem closely enough related epistemologically to count as analysans and analysandum, such as the concept of being 6 and the concept of being the fourth root of 1296. Accordingly, the solution draws upon what actually seems epistemologically distinctive about analyses of the sort under consideration, which is a certain way they can be justified: by the philosophical example-and-counterexample method, which in general terms goes as follows. ‘J’ investigates the analysis of ‘K’’s concept ‘Q’ (where ‘K’ can, but need not, be identical to ‘J’) by setting ‘K’ a series of armchair thought experiments, i.e., presenting ‘K’ with a series of simply described hypothetical test cases and asking ‘K’ questions of the form ‘If such-and-such were the case, would this count as a case of Q?’ ‘J’ then contrasts the descriptions of the cases to which ‘K’ answers affirmatively with the descriptions of the cases to which ‘K’ does not, and ‘J’ generalizes upon these descriptions to arrive at the concepts (if possible not including the analysandum) and their mode of combination that constitute the analysans of ‘K’’s concept ‘Q’. Since ‘J’ need not be identical with ‘K’, there is no requirement that ‘K’ himself be able to perform this generalization, to recognize its result as correct, or even to understand the analysans that is its result. This is reminiscent of Walton’s observation that one can simply recognize a bird as a swallow without realizing just what features of the bird (beak, wing configuration, etc.) form the basis of this recognition. (The philosophical significance of this way of recognizing is discussed in Walton, 1972.) ‘K’ answers the questions based solely on whether the described hypothetical cases strike him as cases of ‘Q’. ‘J’ observes certain strictures in formulating the cases and questions.
He makes the cases as simple as possible, to minimize the possibility of confusion and to minimize the likelihood that ‘K’ will draw upon his philosophical theories (or quasi-philosophical theories, if he is unsophisticated philosophically) in answering the questions. If different cases yield conflicting results, the conflict should, other things being equal, be resolved in favour of the simpler case. ‘J’ makes the series of described cases wide-ranging and varied, with the aim of having it be a complete series, where a series is complete if and only if no omitted case is such that, if included, it would change the analysis arrived at. ‘J’ does not, of course, use as a test-case description anything complicated and general enough to express the analysans. There is no requirement that the described hypothetical test cases be formulated only in terms of what can be observed. Moreover, using described hypothetical situations as test cases enables ‘J’ to frame the questions in such a way as to rule out extraneous background assumptions to a degree; thus, even if ‘K’ correctly believes that all and only P’s are R’s, the question of whether the concepts of P, R, or both enter the analysans of his concept ‘Q’ can be investigated by asking him such questions as ‘Suppose (even if it seems preposterous to you) that you were to find out that there was a P that was not an R. Would you still consider it a case of Q?’
Taking all this into account, the fifth necessary condition for this sort of analysans-analysandum relation is as follows:
(5) If ‘S’ is the analysans of ‘Q’, the proposition that necessarily all and only instances of ‘S’ are instances of ‘Q’ can be justified by generalizing from intuitions about the correct answers to questions of the sort indicated about a varied and wide-ranging series of simply described hypothetical situations.
An antinomy occurs when we are able to argue for, or demonstrate, both a proposition and its contradictory. Roughly speaking, a contradictory of a proposition ‘p’ is one that can be expressed in the form ‘not-p’, or, if ‘p’ can be expressed in the form ‘not-q’, then a contradictory is one that can be expressed in the form ‘q’. Thus, e.g., if ‘p’ is 2 + 1 = 4, then 2 + 1 ≠ 4 is the contradictory of ‘p’, for 2 + 1 ≠ 4 can be expressed in the form not (2 + 1 = 4). If ‘p’ is 2 + 1 ≠ 4, then 2 + 1 = 4 is a contradictory of ‘p’, since 2 + 1 ≠ 4 can be expressed in the form not (2 + 1 = 4). Mutually contradictory propositions can thus be expressed in the forms ‘r’ and ‘not-r’. The Principle of Contradiction says that mutually contradictory propositions cannot both be true and cannot both be false. Thus, by this principle, since if ‘p’ is true, ‘not-p’ is false, no proposition ‘p’ can be at once true and false (otherwise both ‘p’ and its contradictory would be false). In particular, for any predicate ‘P’ and object ‘χ’, it cannot be that ‘P’ is at once true of ‘χ’ and false of ‘χ’. This is the classical formulation of the principle of contradiction. In an antinomy, however, we cannot at present fault either demonstration; we would hope eventually ‘to solve the antinomy’ by managing, through careful thinking and analysis, to fault one or both demonstrations.
Many paradoxes are an easy source of antinomies. For example, Zeno gave some famous logical-cum-mathematical arguments that might be interpreted as demonstrating that motion is impossible. But our eyes, as it were, demonstrate motion (exhibit moving things) all the time. Where did Zeno go wrong? Where do our eyes go wrong? If we cannot readily answer at least one of these questions, then we are in antinomy. In the “Critique of Pure Reason,” Kant gave demonstrations of the same kind, though obviously not on Zeno’s topic, of both members of such pairs, e.g., that the world has a beginning in time and space, and that the world has no beginning in time or space. He argues that both demonstrations are at fault because they proceed on the basis of ‘pure reason’ unconditioned by sense experience.
We turn now to the theory of experience. ‘Experience’ cannot be defined in an illuminating way; however, we know what experiences are through acquaintance with some of our own, e.g., a visual experience of an after-image, a feeling of physical nausea, or a tactile experience of an abrasive surface (which might be caused by an actual surface, rough or smooth, or might be part of a dream, or the product of a vivid sensory imagination). The essential feature of experience is that it feels a certain way: there is something that it is like to have it. We may refer to this feature of an experience as its ‘character’.
Another core feature of the sorts of experiences with which we are concerned here is that they have representational ‘content’. (Unless otherwise indicated, ‘experience’ will be reserved for experiences with such content.) The most obvious cases of experiences with content are sense experiences of the kind normally involved in perception. We may describe such experiences by mentioning their sensory modalities and their contents, e.g., a gustatory experience (modality) of chocolate ice cream (content), but we do so more commonly by means of perceptual verbs combined with noun phrases specifying their contents, as in ‘Macbeth saw a dagger’. This is, however, ambiguous between the perceptual claim ‘There was a (material) dagger in the world that Macbeth perceived visually’ and ‘Macbeth had a visual experience of a dagger’ (the reading with which we are concerned, since Macbeth’s dagger was an hallucination).
As in the case of other mental states and events with content, it is important to distinguish between the properties that an experience ‘represents’ and the properties that it ‘possesses’. To talk of the representational properties of an experience is to say something about its content, not to attribute those properties to the experience itself. Like every other experience, a visual experience of a pink square is a mental event, and it is therefore not itself pink or square, even though it represents those properties. It is, perhaps, fleeting, pleasant or unusual, even though it does not represent those properties. An experience may represent a property that it possesses, and it may even do so in virtue of possessing that property, as when a rapidly changing (complex) experience represents something as changing rapidly. However, this is the exception and not the rule.
Which properties can be (directly) represented in sense experience is subject to debate. Traditionalists include only properties whose presence could not be doubted by a subject having appropriate experiences, e.g., colour and shape in the case of visual experience, and apparent shape, surface texture, hardness, etc., in the case of tactile experience. This view is natural to anyone who has an egocentric, Cartesian perspective in epistemology, and who wishes the pure data of experience to serve as logically certain foundations for knowledge. The immediate objects of perceptual awareness are then taken to be sense-data, items such as colour patches and shapes, which are usually supposed distinct from surfaces of physical objects. Qualities of sense-data are supposed to be distinct from physical qualities because their perception is more relative to conditions, more certain, and more immediate, and because sense-data are private and cannot appear other than they are. They are objects that change in our perceptual fields when conditions of perception change, whereas physical objects remain constant.
Others do not think that this wish can be satisfied, and are more impressed with the role of experience in providing animals with ecologically significant information about the world around them. They claim that sense experiences represent properties, characteristics and kinds that are much richer and much more wide-ranging than the traditional sensory qualities. We do not see only colours and shapes, they tell us, but also earth, water, men, women and fire; we do not smell only odours, but also food and filth. There is no space here to examine the factors relevant to the choice between these alternatives. Yet it is clear that character and content are not wholly independent: there is a close tie between them. For one thing, the relative complexity of the character of a sense experience places limitations upon its possible content, e.g., a tactile experience of something touching one’s left ear is just too simple to carry as much content as a typical everyday visual experience. Moreover, the content of a sense experience of a given character depends on the normal causes of appropriately similar experiences, e.g., the sort of gustatory experience that we have when eating chocolate would not represent chocolate unless it were normally caused by chocolate. Granting such contingent ties between the character of an experience and its possible causal origins, it again follows that its possible content is limited by its character.
Character and content are nonetheless irreducibly different, for the following reasons. (i) There are experiences that completely lack content, e.g., certain bodily pleasures. (ii) Not every aspect of the character of an experience with content is relevant to that content, e.g., the unpleasantness of an aural experience of chalk squeaking on a board may have no representational significance. (iii) Experiences in different modalities may overlap in content without a parallel overlap in character, e.g., visual and tactile experiences of circularity feel completely different. (iv) The content of an experience with a given character may vary according to the background of the subject, e.g., a certain aural experience may acquire the content ‘singing bird’ only after the subject has learned something about birds.
According to the act/object analysis of experience (which is a special case of the act/object analysis of consciousness), every experience involves an object of experience even if it has no material object. Two main lines of argument may be offered in support of this view, one ‘phenomenological’ and the other ‘semantic’.
In outline, the phenomenological argument is as follows. Whenever we have an experience, even if nothing beyond the experience answers to it, we seem to be presented with something through the experience (which is itself diaphanous). The object of the experience is whatever is so presented to us, be it an individual thing, an event, or a state of affairs.
The semantic argument is that objects of experience are required in order to make sense of certain features of our talk about experience, including, in particular, the following. (1) Simple attributions of experience, e.g., ‘Rod is experiencing a pink square’, seem to be relational. (2) We appear to refer to objects of experience and to attribute properties to them, e.g., ‘The after-image that John experienced was certainly odd’. (3) We appear to quantify over objects of experience, e.g., ‘Macbeth saw something that his wife did not see’.
The act/object analysis faces several problems concerning the status of objects of experience. Currently the most common view is that they are sense-data: private mental entities that actually possess the traditional sensory qualities represented by the experiences of which they are the objects. But the very idea of an essentially private entity is suspect. Moreover, since an experience may apparently represent something as having a determinable property, e.g., redness, without representing it as having any subordinate determinate property, e.g., any specific shade of red, a sense-datum may actually have a determinable property without having any determinate property subordinate to it. Even more disturbing is that sense-data may have contradictory properties, since experiences can have contradictory contents. A case in point is the waterfall illusion: if you stare at a waterfall for a minute and then immediately fixate on a nearby rock, you are likely to have an experience of the rock’s moving upward while it remains in the same place. The sense-data theorist must either deny that there are such experiences or admit contradictory objects.
These problems can be avoided by treating objects of experience as properties. This, however, fails to do justice to the appearances, for experience seems to present us not with bare properties, but with properties embodied in individuals. The view that objects of experience are Meinongian objects accommodates this point. It is also attractive insofar as (1) it allows experiences to represent properties other than traditional sensory qualities, and (2) it allows for the identification of objects of experience and objects of perception in the case of experiences that constitute perception.
According to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences nonetheless appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties, as Meinongian objects (which may not exist or have any form of being), and, more commonly, as private mental entities with sensory qualities. (The term ‘sense-data’ is now usually applied to the latter, but has also been used as a general term for objects of sense experiences, as in the work of G. E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. Sense-datum theorists hold that objects of perception (of which we are ‘indirectly aware’) are always distinct from objects of experience (of which we are ‘directly aware’). Meinongians, however, may treat objects of perception as existing objects of experience. Still, most philosophers will feel that the Meinongian’s acceptance of impossible objects is too high a price to pay for these benefits.
A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing (as opposed to having exactly similar experiences) appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. But on the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is always negative on the sense-datum theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)
In view of the above problems, the case for the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that any experience appears to present us with an object without accepting that it actually does. The semantic argument is more impressive, but is none the less answerable. The seemingly relational structure of attributions of experience is a challenge dealt with below in connection with the adverbial theory. Apparent reference to and quantification over objects of experience can be handled by analysing them as reference to experiences themselves and quantification over experiences tacitly typed according to content. Thus, ‘The after-image that John experienced was colourfully appealing’ becomes ‘John’s after-image experience was an appealing experience of colour’, and ‘Macbeth saw something that his wife did not see’ becomes ‘Macbeth had a visual experience that his wife did not have’.
Pure cognitivism attempts to avoid the problems facing the act/object analysis by reducing experiences to cognitive events or associated dispositions: e.g., Susy’s experience of a rough surface beneath her hand might be identified with the event of her acquiring the belief that there is a rough surface beneath her hand or, if she does not acquire this belief, with a disposition to acquire it that has somehow been blocked.
This position has attractions. It does full justice to the cognitive contents of experience, and to the important role of experience as a source of belief acquisition. It would also help clear the way for a naturalistic theory of mind, since there seems to be some prospect of a physicalist/functionalist account of belief and other intentional states. But pure cognitivism is completely undermined by its failure to accommodate the fact that experiences have a felt character that cannot be reduced to their content.
The adverbial theory is an attempt to undermine the act/object analysis by suggesting a semantic account of attributions of experience that does not require objects of experience. Unfortunately, the oddities of explicit adverbializations of such statements have driven off potential supporters of the theory. Furthermore, the theory remains largely undeveloped, and attempted refutations have traded on this. It may, however, be founded on sound intuitions, and there is reason to believe that an effective development of the theory (which is only hinted at here) is possible.
The relevant intuitions are (1) that when we say that someone is experiencing ‘an A’, or has an experience ‘of an A’, we are using this content-expression to specify the type of thing that the experience is especially apt to fit, (2) that doing this is a matter of saying something about the experience itself (and perhaps about the normal causes of like experiences), and (3) that there is no good reason to suppose that it involves describing an object of which the experience is an experience. Thus the effective role of the content-expression in a statement of experience is to modify the verb it complements, not to introduce a special type of object.
Perhaps the most important criticism of the adverbial theory is the ‘many-property problem’, according to which the theory does not have the resources to distinguish between, e.g.,
(1) Frank has an experience of a brown triangle
and:
(2) Frank has an experience of brown and an experience of a triangle.
which is entailed by (1) but does not entail it. The act/object analysis can easily accommodate the difference between (1) and (2) by claiming that the truth of (1) requires a single object of experience that is both brown and triangular, while the truth of (2) allows for the possibility of two objects of experience, one brown and the other triangular. However, (1) is equivalent to:
(1*) Frank has an experience of something’s being both brown and triangular.
And (2) is equivalent to:
(2*) Frank has an experience of something’s being brown and an experience of something’s being triangular,
and the difference between these can be explained quite simply in terms of logical scope without invoking objects of experience. The adverbialist may use this to answer the many-property problem by arguing that the phrase ‘a brown triangle’ in (1) does the same work as the clause ‘something’s being both brown and triangular’ in (1*). This is perfectly compatible with the view that it also has the ‘adverbial’ function of modifying the verb ‘has an experience of’, for it specifies the experience more narrowly just by giving a necessary condition for the satisfaction of the experience (the condition being that there be something both brown and triangular before Frank).
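The scope point can be made explicit with a schematic operator. Writing E[ ... ] for ‘Frank has an experience of ...’ (a notation introduced here purely for illustration, not one the adverbialists themselves use), (1*) and (2*) differ only in where the operator stands relative to the conjunction:

```latex
% (1*): one experience whose satisfaction condition conjoins both properties
E\big[\,\exists x\,(\mathrm{Brown}(x) \land \mathrm{Triangular}(x))\,\big]

% (2*): two experiences, each with a simpler satisfaction condition
E\big[\,\exists x\,\mathrm{Brown}(x)\,\big] \;\land\; E\big[\,\exists x\,\mathrm{Triangular}(x)\,\big]
```

Since the first schema entails the second but not conversely, the one-way entailment from (1) to (2) is captured without quantifying over objects of experience; only the satisfaction conditions differ in scope.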
A final position that should be mentioned is the state theory, according to which a sense experience of an ‘A’ is an occurrent, non-relational state of the kind that the subject would be in when perceiving an ‘A’. Suitably qualified, this claim is no doubt true, but its significance is subject to debate. Here it is enough to remark that the claim is compatible with both pure cognitivism and the adverbial theory, and that state theorists are probably best advised to adopt adverbialism as a means of developing their intuitions.
Sense-data, taken literally, are that which is given by the senses. But in response to the question of what exactly is so given, sense-data theories posit private showings in the consciousness of the subject. In the case of vision this would be a kind of inner picture show which itself only indirectly represents aspects of the external world. The view has been widely rejected as implying that we really only see extremely thin coloured pictures interposed between our mind’s eye and reality. Modern approaches to perception tend to reject any conception of the eye as a camera or lens, simply responsible for producing private images, and stress the active life of the subject in the world as the determinant of experience.
Nevertheless, the argument from illusion is usually intended to establish that certain familiar facts about illusion disprove the theory of perception called naïve or direct realism. There are, however, many different versions of the argument that must be distinguished carefully. Some of these distinctions centre on the content of the premises (the nature of the appeal to illusion); others centre on the interpretation of the conclusion (the kind of direct realism under attack). Let us begin by distinguishing the importantly different versions of direct realism which one might take to be vulnerable to familiar facts about the possibility of perceptual illusion.
A crude statement of direct realism might go as follows. In perception, we sometimes directly perceive physical objects and their properties; we do not always perceive physical objects by perceiving something ‘else’, e.g., a sense-datum. There are, however, difficulties with this formulation of the view, for one thing a great many philosophers who are ‘not’ direct realists would admit that it is a mistake to describe people as actually ‘perceiving’ something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-datum theorists should want. At least, many of the philosophers who object to direct realism would prefer to express what they are objecting to in terms of a technical (and philosophically controversial) concept such as ‘acquaintance’. Using such a notion, we could define direct realism this way: in ‘veridical’ experience we are directly acquainted with parts, e.g., surfaces, or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects.
The expressions ‘knowledge by acquaintance’ and ‘knowledge by description’, and the distinction they mark between knowing ‘things’ and knowing ‘about’ things, are generally associated with Bertrand Russell (1872-1970). Russell held that scientific philosophy required analysing many objects of belief as ‘logical constructions’ or ‘logical fictions’, and the programme of analysis that this inaugurated dominated the subsequent philosophy of logical atomism and the work of other philosophers. In Russell’s “The Analysis of Mind” (1921), the mind itself is treated in a fashion reminiscent of Hume, as no more than the collection of neutral perceptions or sense-data that make up the flux of conscious experience and that, looked at another way, also make up the external world (neutral monism); “An Inquiry into Meaning and Truth” (1940) represents a more empirical approach to the problem. Philosophers have perennially investigated this and related distinctions using varying terminology.
This distinction in our ways of knowing things was highlighted by Russell and formed a central element in his philosophy after his discovery of the theory of ‘definite descriptions’. A thing is known by acquaintance when there is direct experience of it. It is known by description if it can only be described as a thing with such-and-such properties. In everyday parlance, I might know my spouse and children by acquaintance, but know someone as ‘the first person born at sea’ only by description. However, for a variety of reasons Russell shrinks the area of things that can be known by acquaintance until eventually only current experience, perhaps my own self, and certain universals or meanings qualify; anything else is known only as the thing that has such-and-such qualities.
Because one can interpret the relation of acquaintance or awareness as one that is not ‘epistemic’, i.e., not a kind of propositional knowledge, it is important to distinguish the above views, read as ontological theses, from a view one might call ‘epistemological direct realism’: in perception we are, on at least some occasions, non-inferentially justified in believing a proposition asserting the existence of a physical object. It is ‘realism’ because it holds that these objects exist independently of any mind that might perceive them, and so it rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its being ‘direct’ realism rules out those views defended under the rubric of ‘critical realism’ or ‘representative realism’, in which there is some non-physical intermediary, usually called a ‘sense-datum’ or a ‘sense impression’, that must first be perceived or experienced in order to perceive the object that exists independently of this perception. Often the distinction between direct realism and other theories of perception is explained more fully in terms of what is ‘immediately’ rather than ‘mediately’ perceived. What relevance does illusion have for these two forms of direct realism?
The fundamental premise of the argument from illusion seems to be the thesis that things can appear to be other than they are. Thus, for example, a straight stick immersed in water looks bent; a penny viewed from a certain perspective appears elliptical; something yellow placed under red fluorescent light looks red. In all of these cases, one version of the argument goes, it is implausible to maintain that what we are directly acquainted with is the real nature of the object in question. Indeed, it is hard to see how we can be said to be aware of the real physical object at all. In the above illusions the things we were aware of actually were bent, elliptical and red, respectively. But, by hypothesis, the real physical objects lacked these properties. Thus, we were not aware of the real physical objects at all.
So far, if the argument is relevant to any of the direct realisms distinguished above, it seems relevant only to the claim that in all sense experience we are directly acquainted with parts or constituents of physical objects. After all, even if in illusion we are not acquainted with physical objects, their surfaces, or their constituents, why should we conclude anything about the nature of our relations to the physical world in veridical experience?
We are supposed to discover the answer to this question by noticing the similarities between illusory experience and veridical experience and by reflecting on what makes illusion possible at all. Illusion can occur because the nature of the illusory experience is determined not just by the nature of the object perceived, but also by other conditions, both external and internal. But all of our sensations are subject to these causal influences, and it would be gratuitous and arbitrary to select from the indefinitely many and subtly different perceptual experiences some special ones as those that get us in touch with the ‘real’ nature of the physical world. Red fluorescent light affects the way things look, but so does sunlight. Water refracts light, but so does air. We have no unmediated access to the external world.
Still, why should we suppose that we are aware of something other than a physical object in experience? Why should we not conclude that to be aware of a physical object is just to be appeared to by that object in a certain way? In its best-known form the adverbial theory proposes that the grammatical object of a statement attributing an experience to someone be analysed as an adverb. For example,
(A) Rod is experiencing a coloured square.
is rewritten as:
Rod is experiencing (coloured square)-ly.
This is presented as an alternative to the act/object analysis, according to which the truth of a statement like (A) requires the existence of an object of experience corresponding to its grammatical object. A commitment to the explicit adverbializations of statements of experience is not, however, essential to adverbialism. The core of the theory consists, rather, in the denial of objects of experience (as opposed to objects of perception) coupled with the view that the role of the grammatical object in a statement of experience is to characterize more fully the sort of experience that is being attributed to the subject. The claim, then, is that the grammatical object is functioning as a modifier and, in particular, as a modifier of a verb; it is, as it were, a special kind of adverb at the semantic level.
At this point, it might be profitable to move from considering the possibility of illusion to considering the possibility of hallucination. Instead of comparing paradigmatic veridical perception with illusion, let us compare it with complete hallucination. For any experience or sequence of experiences we take to be veridical, we can imagine qualitatively indistinguishable experiences occurring as part of a hallucination. For those who like their philosophical arguments spiced with a touch of science, we can imagine that our brains were surreptitiously removed in the night and, unbeknown to us, are being stimulated by a neurophysiologist so as to produce the very sensations that we would normally associate with a trip to the Grand Canyon. If we now ask what we are aware of in this complete hallucination, it is obvious that we are not aware of physical objects, their surfaces, or their constituents. Nor can we even construe the experience as one of an object’s appearing to us in a certain way. It is, after all, a complete hallucination, and the objects we take to exist before us are simply not there. But if we compare the hallucinatory experience with the qualitatively indistinguishable veridical experience, must we not conclude that it would be gratuitous to suppose that in veridical experience we are aware of something radically different from what we are aware of in hallucinatory experience? Again, it might help to reflect on our belief that the immediate cause of hallucinatory experience and veridical experience might be the very same brain event, and it is surely implausible to suppose that the effects of this same cause are radically different: acquaintance with physical objects in the case of veridical experience, something else in the case of hallucinatory experience.
This version of the argument from hallucination would seem to address straightforwardly the ontological versions of direct realism. The argument is supposed to convince us that the ontological analysis of sensation in both veridical and hallucinatory experience should give us the same results, but in the hallucinatory case there is no plausible physical object, constituent of a physical object, or surface of a physical object with which to identify the object of awareness. With an additional premiss we would also get an argument against epistemological direct realism. That premiss is that in a vivid hallucinatory experience we might have precisely the same justification for believing (falsely) what we do about the physical world as we do in the analogous, phenomenologically indistinguishable, veridical experience. But our justification for believing that there is a table before us in the course of a vivid hallucination of a table is surely not non-inferential in character. It certainly is not, if non-inferential justification is supposed to consist in some unproblematic access to the fact that makes our belief true: by hypothesis the table does not exist. But if the justification that the hallucinatory experience gives us is the same as the justification we get from the parallel veridical experience, then we should not describe the veridical experience as giving us non-inferential justification for believing in the existence of physical objects. In both cases we should say that we believe what we do about the physical world on the basis of what we know directly about the character of our experience.
In this brief space, I can only sketch some of the objections that might be raised against arguments from illusion and hallucination. That being said, let us begin with a criticism that accepts most of the presuppositions of the arguments. Even if the possibility of hallucination establishes that in some experience we are not acquainted with constituents of physical objects, it is not clear that it establishes that we are never acquainted with constituents of physical objects. Suppose, for example, that we decide that in both veridical and hallucinatory experience we are acquainted with sense-data. At least some philosophers have tried to identify physical objects with ‘bundles’ of actual and possible sense-data.
To establish inductively that sensations are signs of physical objects one would have to observe a correlation between the occurrence of certain sensations and the existence of certain physical objects. But to observe such a correlation in order to establish a connection, one would need independent access to physical objects and, by hypothesis, this one cannot have. If one further adopts the verificationist’s stance that the ability to comprehend is parasitic on the ability to confirm, one can easily be driven to Hume’s conclusion:
Let us chase our imagination to the heavens, or to the utmost limits of the universe; we never really advance a step beyond ourselves, nor can conceive any kind of existence, but those perceptions, which have appear’d in that narrow compass. This is the universe of the imagination, nor have we any idea but what is there produc’d. (Hume, 1739-40, pp. 67-8)
If one reaches such a conclusion but wants to maintain the intelligibility and verifiability of the assertion about the physical world, one can go either the idealistic or the phenomenalistic route.
However, hallucinatory experiences on this view are non-veridical precisely because the sense-data one is acquainted with in hallucination do not bear the appropriate relations to other actual and possible sense-data. But if such a view were plausible, one could agree that one is acquainted with the same kind of thing in veridical and non-veridical experience but insist that there is still a sense in which in veridical experience one is acquainted with constituents of a physical object.
A different sort of objection to the argument from illusion or hallucination concerns its use in drawing conclusions we have not stressed in the above discussion. In mentioning this objection, I mean to underscore an important feature of the argument. At least some philosophers (Hume, for example) have used the rejection of direct realism as a step on the road to an argument for general scepticism with respect to the physical world. Once one abandons epistemological direct realism, one faces an uphill battle indicating how one can legitimately make inferences from sensation to physical objects. But philosophers who appeal to the existence of illusion and hallucination to develop an argument for scepticism can be accused of having an epistemically self-defeating argument. One could justifiably infer sceptical conclusions from the existence of illusion and hallucination only if one justifiably believed that such experiences exist; but if one is justified in believing that illusion exists, one must be justified in believing at least some facts about the physical world (for example, that straight sticks look bent in water). The key point to stress in replying to such arguments is that, strictly speaking, the philosophers in question need only appeal to the ‘possibility’ of vivid illusion and hallucination. Although it would have been psychologically more difficult to come up with arguments from illusion and hallucination if we did not believe that we actually had such experiences, I take it that most philosophers would argue that the possibility of such experiences is enough to establish difficulties with direct realism. Indeed, if one looks carefully at the argument from hallucination discussed earlier, one sees that it nowhere makes any claims about actual cases of hallucinatory experience.
Another reply to the attack on epistemological direct realism focuses on the implausibility of claiming that there is any process of ‘inference’ wrapped up in our beliefs about the world. Even if it is possible to give a phenomenological description of the subjective character of sensation, doing so requires a special sort of skill that most people lack. Our perceptual beliefs about the physical world are surely direct, at least in the sense that they are unmediated by any sort of conscious inference from premisses describing something other than a physical object. The appropriate reply to this objection, however, is simply to acknowledge the relevant phenomenological fact and point out that the philosopher attacking epistemological direct realism is attacking a claim about the nature of our justification for believing propositions about the physical world. Such philosophers need make no claims at all about the causal genesis of such beliefs.
As mentioned, proponents of the arguments from illusion and hallucination have often intended them to establish the existence of sense-data, and many philosophers have attacked the so-called sense-datum inference presupposed in some statements of the argument. When the stick looked bent, the penny looked elliptical and the yellow object looked red, the sense-datum theorist wanted to infer that there was something bent, elliptical and red, respectively. But such an inference is surely suspect. Usually, we do not infer that because something appears to have a certain property, there is something that actually has that property. In saying that Jones looks like a doctor, I surely would not want anyone to infer that there must actually be someone there who is a doctor. In assessing this objection, it will be important to distinguish different uses of words like ‘appears’ and ‘looks’. At least sometimes, to say that something looks ‘F’ is simply to comment on the evidence for its being ‘F’, and the sense-datum inference from an ‘F’ appearance in this sense to an actual ‘F’ would be hopeless. However, we also seem to use the ‘appears’/‘looks’ terminology to describe the phenomenological character of our experience, and the inference might be more plausible when the terms are used this way. Still, it does seem that the arguments from illusion and hallucination will not by themselves constitute strong evidence for the sense-datum theory. Even if one concludes that there is something common to both the hallucination of a red thing and a veridical visual experience of a red thing, one need not describe the common constituent as awareness of something red. The adverbial theorist would prefer to construe the common experiential state as ‘being appeared to redly’, a technical description intended only to convey the idea that the state in question need not be analysed as relational in character.
Those who opt for an adverbial theory of sensation need to make good the claim that their artificial adverbs can be given a sense that is not parasitic upon an understanding of the adjectives transformed into adverbs. Still other philosophers might try to reduce the common element in veridical and non-veridical experience to some kind of intentional state, more like belief or judgement. The idea here is that the only thing common to the two experiences is the fact that in both one spontaneously takes there to be present an object of a certain kind.
These objections can all be stated within the general framework presupposed by proponents of the arguments from illusion and hallucination. A great many contemporary philosophers, however, are uncomfortable with the intelligibility of the concepts needed even to make sense of the theories attacked. Thus at least some who object to the argument from illusion do so not because they defend direct realism; rather, they think there is something confused about all this talk of direct awareness or acquaintance. Contemporary externalists, for example, usually insist that we understand epistemic concepts by appeal to nomological connections. On such a view the closest thing to direct knowledge would probably be knowledge that is not mediated by other beliefs. If we understand direct knowledge this way, it is not clear how the phenomena of illusion and hallucination would be relevant to the claim that on at least some occasions our judgements about the physical world are reliably produced by processes that do not take as their input beliefs about something else.
The expressions ‘knowledge by acquaintance’ and ‘knowledge by description’, and the distinction they mark between knowing ‘things’ and knowing ‘about’ things, are now generally associated with Bertrand Russell. However, John Grote and Hermann von Helmholtz had earlier and independently marked the same distinction, and William James adopted Grote’s terminology in his investigation of the distinction. Philosophers have perennially investigated this and related distinctions using varying terminology. Grote introduced the distinction by noting that natural languages ‘distinguish between these two applications of the notion of knowledge, the one being γνῶναι, noscere, kennen, connaître, the other being εἰδέναι, scire, wissen, savoir’ (Grote, 1865). On Grote’s account, the distinction is a matter of degree, and there are three sorts of dimensions of variability: epistemic, causal and semantic.
We know things by experiencing them, and knowledge of acquaintance (Russell changed the preposition to ‘by’) is epistemically prior to and has a relatively higher degree of epistemic justification than knowledge about things. Indeed, sensation has ‘the one great value of trueness or freedom from mistake’ (1900).
A thought (using that term broadly, to mean any mental state) constituting knowledge of acquaintance with a thing is more or less causally proximate to sensations caused by that thing, while a thought constituting knowledge about the thing is more or less distant causally, being separated from the thing and experience of it by processes of attention and inference. At the limit, if a thought is maximally of the acquaintance type, it is the first mental state occurring in a perceptual causal chain originating in the object to which the thought refers, i.e., it is a sensation. The things presented to us in sensation and of which we have knowledge of acquaintance include ordinary objects in the external world, such as the sun.
Grote contrasted the imagistic thoughts involved in knowledge of acquaintance with things with the judgements involved in knowledge about things, suggesting that the latter but not the former have propositional content specifying a state of affairs. Elsewhere, however, he suggested that every thought capable of constituting knowledge of or about a thing involves a form, idea, or what we might call conceptual propositional content, referring the thought to its object. Whether contentual or not, thoughts constituting knowledge of acquaintance with a thing are relatively indistinct, although this indistinctness does not imply incommunicability. On the other hand, thoughts constituting knowledge about a thing are relatively distinct, as a result of ‘the application of notice or attention’ to the ‘confusion or chaos’ of sensation (1900). Grote did not have an explicit theory of reference, the relation by which a thought is ‘of’ or ‘about’ a specific thing. Nor did he explain how thoughts can be more or less indistinct.
Helmholtz held unequivocally that all thoughts capable of constituting knowledge, whether ‘knowledge that has to do with Notions’ (Wissen) or ‘mere familiarity with phenomena’ (Kennen), are judgements or, as we may say, have conceptual propositional contents. Where Grote saw a difference between distinct and indistinct thoughts, Helmholtz found a difference between precise judgements that are expressible in words and equally precise judgements that, in principle, are not expressible in words, and so are not communicable (Helmholtz, 1962). As it happened, James was influenced by Helmholtz and, especially, by Grote (James, 1975). Adopting the latter’s terminology, James agreed with Grote that the distinction between knowledge of acquaintance with things and knowledge about things involves a difference in the degree of vagueness or distinctness of thoughts, though he, too, said little to explain how such differences are possible. At one extreme is knowledge of acquaintance with people and things, and with sensations of colour, flavour, spatial extension, temporal duration, effort and perceptible difference, unaccompanied by knowledge about these things. Such pure knowledge of acquaintance is vague and inexplicit. Movement away from this extreme, by a process of notice and analysis, yields a spectrum of less vague, more explicit thoughts constituting knowledge about things.
All the same, the distinction was not merely a relative one for James, as he was more explicit than Grote in not imputing content to every thought capable of constituting knowledge of or about things. At the extreme where a thought constitutes pure knowledge of acquaintance with a thing, there is a complete absence of conceptual propositional content in the thought, which is a sensation, feeling or percept, and this absence of content renders the thought incommunicable. James’s reasons for positing an absolute discontinuity between pure knowledge of acquaintance with things and knowledge about things seem to have been that any theory adequate to the facts about reference must allow that some reference is not conceptually mediated, that conceptually unmediated reference is necessary if there are to be judgements at all about things and, especially, if there are to be judgements about relations between things, and that any theory faithful to the common person’s ‘sense of life’ must allow that some things are directly perceived.
James made a genuine advance over Grote and Helmholtz by analysing the reference relation holding between a thought and the specific thing of or about which it is knowledge. In fact, he gave two different analyses. On both analyses, a thought constituting knowledge about a thing refers to and is knowledge about ‘a reality, whenever it actually or potentially terminates in’ a thought constituting knowledge of acquaintance with that thing (1975). The two analyses differ in their treatments of knowledge of acquaintance. On James’s first analysis, reference in both sorts of knowledge is mediated by causal chains. A thought constituting pure knowledge of acquaintance with a thing refers to and is knowledge of ‘whatever reality it directly or indirectly operates on and resembles’ (1975). The concepts of a thought ‘operating on’ a thing or ‘terminating in’ another thought are causal, where Grote had found teleology and final causes. On James’s later analysis, the reference involved in knowledge of acquaintance with a thing is direct. A thought constituting knowledge of acquaintance with a thing either is that thing, or has that thing as a constituent, and the thing and the experience of it are identical (1975, 1976).
James further agreed with Grote that pure knowledge of acquaintance with things, i.e., sensory experience, is epistemologically prior to knowledge about things. While the epistemic justification involved in knowledge about things rests on the foundation of sensation, all thoughts about things are fallible and their justification is augmented by their mutual coherence. James was unclear about the precise epistemic status of knowledge of acquaintance. At times, thoughts constituting pure knowledge of acquaintance are said to possess ‘absolute veritableness’ (1890) and ‘the maximal conceivable truth’ (1975), suggesting that such thoughts are genuinely cognitive and that they provide an infallible epistemic foundation. At other times, such thoughts are said not to bear truth-values, suggesting that ‘knowledge’ of acquaintance is not genuine knowledge at all, but only a non-cognitive necessary condition of genuine knowledge, knowledge about things (1976). Russell understood James to hold the latter view.
Russell agreed with Grote and James on the following points: First, knowing things involves experiencing them. Second, knowledge of things by acquaintance is epistemically basic and provides an infallible epistemic foundation for knowledge about things. (Like James, Russell vacillated about the epistemic status of knowledge by acquaintance, and it eventually was replaced at the epistemic foundation by the concept of noticing.) Third, knowledge about things is more articulate and explicit than knowledge by acquaintance with things. Fourth, knowledge about things is causally removed from knowledge of things by acquaintance, by processes of reflection, analysis and inference (1911, 1913, 1959).
But Russell also held that the term ‘experience’ must not be used uncritically in philosophy, on account of the ‘vague, fluctuating and ambiguous’ meaning of the term in its ordinary use. The precise concept found by Russell ‘in the nucleus of this uncertain patch of meaning’ is that of direct occurrent experience of a thing, and he used the term ‘acquaintance’ to express this relation, though he used that term technically, and not with all its ordinary meaning (1913). Nor did he undertake to give a constitutive analysis of the relation of acquaintance, though he allowed that it may not be unanalysable, and did characterize it as a generic concept. If the use of the term ‘experience’ is restricted to expressing the determinate core of the concept it ordinarily expresses, then we do not experience ordinary objects in the external world, as we commonly think and as Grote and James held we do. In fact, Russell held, one can be acquainted only with one’s sense-data (i.e., particular colours, sounds, etc.), one’s occurrent mental states, universals, logical forms and, perhaps, oneself.
Russell agreed with James that knowledge of things by acquaintance ‘is essentially simpler than any knowledge of truths, and logically independent of knowledge of truths’ (1912, 1929). The mental states involved when one is acquainted with things do not have propositional contents. Russell’s reasons here seem to have been similar to James’s: Conceptually unmediated reference to particulars is necessary for understanding any proposition mentioning a particular (e.g., 1918-19) and, if scepticism about the external world is to be avoided, some particulars must be directly perceived (1911). Russell vacillated about whether or not the absence of propositional content renders knowledge by acquaintance incommunicable.
Russell agreed with James that different accounts should be given of reference as it occurs in knowledge by acquaintance and in knowledge about things, and that in the former case, reference is direct. But Russell objected on a number of grounds to James’s causal account of the indirect reference involved in knowledge about things. Russell gave a descriptional rather than a causal analysis of that sort of reference: A thought is about a thing when the content of the thought involves a definite description uniquely satisfied by the thing referred to. Indeed, he preferred to speak of knowledge of things by description, rather than knowledge about things.
Russell advanced beyond Grote and James by explaining how thoughts can be more or less articulate and explicit. If one is acquainted with a complex thing without being aware of or acquainted with its complexity, the knowledge one has by acquaintance with that thing is vague and inexplicit. Reflection and analysis can lead one to distinguish constituent parts of the object of acquaintance and to obtain progressively more comprehensive, explicit, and complete knowledge about it (1913, 1918-19, 1950, 1959).
There are, then, apparent facts to be explained about the distinction between knowing things and knowing about things. Knowledge about things is essentially propositional knowledge, where the mental states involved refer to specific things. This propositional knowledge can be more or less comprehensive, can be justified inferentially and on the basis of experience, and can be communicated. Knowing things, on the other hand, involves experience of things. This experiential knowledge provides an epistemic basis for knowledge about things, and in some sense is difficult or impossible to communicate, perhaps because it is more or less vague.
If one is unconvinced by James’s and Russell’s reasons for holding that experience of, and reference to, things is at least sometimes direct, it may seem preferable to join Helmholtz in asserting that knowing things and knowing about things both involve propositional attitudes. To do so would at least allow one the advantages of unified accounts of the nature of knowledge (propositional knowledge would be fundamental) and of the nature of reference: Indirect reference would be the only kind. The two kinds of knowledge might yet be importantly different if the mental states involved have different sorts of causal origins in the thinker’s cognitive faculties, involve different sorts of propositional attitudes, and differ in other constitutive respects relevant to the relative vagueness and communicability of the mental states.
Foundationalism is a view concerning the ‘structure’ of the system of justified belief possessed by a given individual. Such a system is divided into ‘foundation’ and ‘superstructure’, so related that beliefs in the latter depend on the former for their justification but not vice versa. However, the view is sometimes stated in terms of the structure of ‘knowledge’ rather than of justified belief. If knowledge is true justified belief (plus, perhaps, some further condition), one may think of knowledge as exhibiting a foundationalist structure by virtue of the justified belief it involves. In any event, the doctrine is here construed as concerning, in the first instance, the justification of belief.
The first step toward a more explicit statement of the position is to distinguish between ‘mediate’ (indirect) and ‘immediate’ (direct) justification of belief. To say that a belief is mediately justified is to say that it is justified by some appropriate relation to other justified beliefs, i.e., by being inferred from other justified beliefs that provide adequate support for it, or, alternatively, by being based on adequate reasons. Thus, if my reason for supposing that you are depressed is that you look listless, speak in an unaccustomedly flat tone of voice, exhibit no interest in things you are usually interested in, etc., then my belief that you are depressed is justified, if at all, by being adequately supported by my justified beliefs that you look listless, speak in a flat tone of voice, and so on.
A belief is immediately justified, on the other hand, if its justification is of another sort, e.g., if it is justified by being based on experience or if it is ‘self-justified’. Thus my belief that you look listless may not be based on anything else I am justified in believing but just on the way you look to me. And my belief that 2 + 3 = 5 may be justified not because I infer it from something else I justifiably believe, but simply because it seems obviously true to me.
In these terms we can put the thesis of Foundationalism by saying that all mediately justified beliefs owe their justification, ultimately, to immediately justified beliefs. To get a more detailed idea of what this amounts to, it will be useful to consider the most important argument for Foundationalism, the regress argument. Consider a mediately justified belief that ‘p’ (we are using lowercase letters as dummies for belief contents). It is, by hypothesis, justified by its relation to one or more other justified beliefs, ‘q’ and ‘r’. Now what justifies each of these, e.g., ‘q’? If it too is mediately justified, that is because it is related in the appropriate way to one or more further justified beliefs, e.g., ‘s’. By virtue of what is ‘s’ justified? If it is mediately justified, the same problem arises at the next stage. To avoid both circularity and an infinite regress, we are forced to suppose that in tracing back this chain we arrive at one or more immediately justified beliefs that stop the regress, since their justification does not depend on any further justified belief.
According to the infinite regress argument for Foundationalism, if every justified belief could be justified only by inferring it from some further justified belief, there would have to be an infinite regress of justifications: Because there can be no such regress, there must be justified beliefs that are not justified by appeal to some further justified belief. Instead, they are non-inferentially or immediately justified; they are basic or foundational, the ground on which all our other justified beliefs are to rest.
Variants of this ancient argument have persuaded and continue to persuade many philosophers that the structure of epistemic justification must be foundational. Aristotle recognized that if we are to have knowledge of the conclusion of an argument on the basis of its premisses, we must know the premisses. But if knowledge of a premiss always required knowledge of some further proposition, then in order to know the premiss we would have to know each proposition in an infinite regress of propositions. Since this is impossible, there must be some propositions that are known, but not by demonstration from further propositions: There must be basic, non-demonstrable knowledge, which grounds the rest of our knowledge.
Foundationalist enthusiasm for regress arguments often overlooks the fact that they have also been advanced on behalf of scepticism, relativism, fideism, conceptualism and coherentism. Sceptics agree with foundationalists both that there can be no infinite regress of justifications and that, nevertheless, there would have to be one if every justified belief could be justified only inferentially, by appeal to some further justified belief. But sceptics think all genuine justification must be inferential in this way: the foundationalist’s talk of immediate justification merely disguises the absence of any rational justification properly so-called. Sceptics conclude that none of our beliefs is justified. Relativists follow essentially the same pattern of sceptical argument, concluding that our beliefs can only be justified relative to the arbitrary starting assumptions or presuppositions either of an individual or of a form of life.
Regress arguments are not limited to epistemology. In ethics there is Aristotle’s regress argument (in the “Nicomachean Ethics”) for the existence of a single end of rational action. In metaphysics there is Aquinas’s regress argument for an unmoved mover: If every mover were itself in motion, there would have to be an infinite sequence of movers, each moved by a further mover; since there can be no such sequence, there is an unmoved mover. A related argument has recently been given to show that not every state of affairs can have an explanation or cause of the sort posited by principles of sufficient reason, and that such principles are therefore false, for reasons having to do with their own concepts of explanation (Post, 1980; Post, 1987).
Foundationalism has been presented here as a view concerning the structure ‘that is in fact exhibited’ by the justified beliefs of ‘a particular person’, but the position has sometimes been construed in ways that deviate from each of those phrases. Thus, it is sometimes taken to characterise the structure of ‘our knowledge’ or ‘scientific knowledge’, rather than the structure of the cognitive system of an individual subject. As for the other phrase, Foundationalism is sometimes thought of as concerned with how knowledge (justified belief) is acquired or built up, rather than with the structure of what a person finds herself with at a certain point. Thus some people think of scientific inquiry as starting with the recording of observations (immediately justified observational beliefs), and then inductively inferring generalizations. Again, Foundationalism is sometimes thought of not as a description of the finished product or of the mode of acquisition, but rather as a proposal for how the system could be reconstructed, an indication of how it could all be built up from immediately justified foundations. This last would seem to be the kind of Foundationalism we find in Descartes. However, Foundationalism is most usually thought of in contemporary Anglo-American epistemology as an account of the structure actually exhibited by an individual’s system of justified belief.
It should also be noted that the term is used with a deplorable looseness in contemporary literary circles, and even in certain corners of the philosophical world, to refer to anything from realism (the view that reality has a definite constitution regardless of how we think of it or what we believe about it) to various kinds of ‘absolutism’ in ethics, politics, or wherever, and even to the truism that truth is stable (if a proposition is true, it stays true).
Since Foundationalism holds that all mediate justification rests on immediately justified beliefs, we may divide variations in forms of the view into those that have to do with the immediately justified beliefs, the ‘foundations’, and those that have to do with the modes of derivation of other beliefs from these, how the ‘superstructure’ is built up. The most obvious variation of the first sort has to do with what modes of immediate justification are recognized. Many treatments, both pro and con, are parochially restricted to one form of immediate justification: self-evidence, self-justification (self-warrant), justification by a direct awareness of what the belief is about, or whatever. It is then unwarrantedly assumed by critics that disposing of that one form will dispose of Foundationalism generally (Alston, 1989). The emphasis historically has been on beliefs that simply ‘record’ what is directly given in experience (Lewis, 1946) and on self-evident propositions (Descartes’ ‘clear and distinct perceptions’ and Locke’s ‘perception of the agreement and disagreement of ideas’). But self-warrant has also recently received a great deal of attention (Alston, 1989), and there is also a reliabilist version according to which a belief can be immediately justified just by being acquired by a reliable belief-forming process that does not take other beliefs as inputs (BonJour, 1985, ch. 3).
Foundationalisms also differ as to what further constraints, if any, are put on foundations. Historically, it has been common to require of the foundations of knowledge that they exhibit certain ‘epistemic immunities’, as we might put it: immunity from error, refutation or doubt. Thus Descartes, along with many other seventeenth and eighteenth-century philosophers, took it that any knowledge worthy of the name would be based on cognitions the truth of which is guaranteed (infallible), that were maximally stable, immune from ever being shown to be mistaken (incorrigible), and concerning which no reasonable doubt could be raised (indubitable). Hence the search in the “Meditations” for a divine guarantee of our faculty of rational intuition. Criticisms of Foundationalism have often been directed at these constraints (Lehrer, 1974; Will, 1974; both responded to in Alston, 1989). It is important to realize that a position that is foundationalist in a distinctive sense can be formulated without imposing any such requirements on foundations.
There are various ways of distinguishing types of foundationalist epistemology by the use of the variations we have been enumerating. Plantinga (1983) has put forward an influential characterization of ‘classical foundationalism’, specified in terms of limitations on the foundations. He construes this as a disjunction of ‘ancient and medieval foundationalism’, which takes foundations to comprise what is self-evident and ‘evident to the senses’, and ‘modern foundationalism’, which replaces ‘evident to the senses’ with ‘incorrigible’, which in practice was taken to apply only to beliefs about one’s present states of consciousness. Plantinga himself developed this notion in the context of arguing that items outside this territory, in particular certain beliefs about God, could also be immediately justified. A popular recent distinction is between what is variously called ‘strong’ or ‘extreme’ Foundationalism and ‘moderate’, ‘modest’ or ‘minimal’ Foundationalism, with the distinction depending on whether various epistemic immunities are required of foundations. Finally, there is the distinction between ‘simple’ and ‘iterative’ Foundationalism (Alston, 1989), depending on whether it is required of a foundation only that it be immediately justified, or whether it is also required that the higher level belief that the former belief is immediately justified be itself immediately justified. Alston suggests that the plausibility of the stronger requirement stems from a ‘level confusion’ between beliefs on different levels.
The classic opposition is between Foundationalism and coherentism. Coherentism denies any immediate justification. It deals with the regress argument by rejecting ‘linear’ chains of justification and, in effect, taking the total system of belief to be epistemically primary. A particular belief is justified to the extent that it is integrated into a coherent system of belief. More recently, pragmatists influenced by John Dewey have developed a position known as contextualism, which avoids ascribing any overall structure to knowledge. Questions concerning justification can only arise in particular contexts, defined in terms of assumptions that are simply taken for granted, though they can be questioned in other contexts, where other assumptions will be privileged.
Foundationalism can be attacked both in its commitment to immediate justification and in its claim that all mediately justified beliefs ultimately depend on the former. Though it is the latter that is the position’s weakest point, most of the critical fire has been directed at the former. As pointed out above, much of this criticism has been directed against some particular form of immediate justification, ignoring the possibility of other forms. Thus, much anti-foundationalist artillery has been directed at the ‘myth of the given’: the idea that facts or things are ‘given’ to consciousness in a pre-conceptual, pre-judgemental mode, and that beliefs can be justified on that basis (Sellars, 1963). The most prominent general argument against immediate justification is a ‘level ascent’ argument, according to which whatever is taken to immediately justify a belief can do so only if the subject is justified in supposing that the putative justifier does so; hence the justification involves a higher level belief after all (BonJour, 1985). In reply, it can be argued that we lack adequate support for any such higher level requirement for justification, and that if it were imposed we would be launched on an infinite regress, for a similar requirement would hold equally for the higher level belief that the original justifier was efficacious.
Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth, and justification. These combine in various ways to yield theories of knowledge. We will proceed from belief through justification to truth. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than the belief that you have a monster in the garden?
One answer is that the belief has a coherent place or role in a system of beliefs. Perception has an influence on belief. You respond to sensory stimuli by believing that you are reading a page in a book rather than believing that you have a centaur in the garden. Belief has an influence on action. You will act differently if you believe that you are reading a page than if you believe something about a centaur. Perception and action underdetermine the content of belief, however: the same stimuli may produce various beliefs, and various beliefs may produce the same action. What gives the belief the content it has is the role it plays in a network of relations to other beliefs, the role in inference and implication. For example, I infer different things from believing that I am reading a page in a book than I infer from other beliefs, just as I infer that belief from different things than I infer other beliefs from.
The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of beliefs. That is how coherence comes in. A belief has the content that it does because of the way in which it coheres within a system of beliefs (Rosenberg, 1988). We might distinguish weak coherence theories of the content of beliefs from strong coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of belief. Strong coherence theories of the content of belief affirm that coherence is the sole determinant of the content of belief.
When we turn from belief to justification, we confront a corresponding family of theories built on coherentist motifs. What makes one belief justified and another not? The answer is the way it coheres with the background system of beliefs. Again, there is a distinction between weak and strong theories of coherence. Weak theories tell ‘us’ that the way in which a belief coheres with a background system of beliefs is one determinant of justification, other typical determinants being perception, memory and intuition. Strong theories, by contrast, tell ‘us’ that justification is solely a matter of how a belief coheres with a system of beliefs. There is, however, another distinction that cuts across the distinction between weak and strong coherence theories of justification. It is the distinction between positive and negative coherence theories (Pollock, 1986). A positive coherence theory tells ‘us’ that if a belief coheres with a background system of belief, then the belief is justified. A negative coherence theory tells ‘us’ that if a belief fails to cohere with a background system of beliefs, then the belief is not justified. We might put this by saying that, according to a positive coherence theory, coherence has the power to produce justification, while according to a negative coherence theory, coherence has only the power to nullify justification.
A strong coherence theory of justification is a combination of a positive and a negative theory that tells ‘us’ that a belief is justified if and only if it coheres with a background system of beliefs.
Traditionally, belief has been of epistemological interest in its propositional guise: ‘S’ believes that ‘p’, where ‘p’ is a proposition toward which an agent, ‘S’, exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mrs. Thatcher, or in a free-market economy, or in God. It is sometimes supposed that all belief is ‘reducible’ to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or in God, a matter of your believing that free-market economies are desirable or that God exists.
It is doubtful, however, that non-propositional believing can, in every case, be reduced in this way. Debate on this point has tended to focus on an apparent distinction between ‘belief-that’ and ‘belief-in’, and the application of this distinction to belief in God. Some philosophers have followed Aquinas (c. 1225-74) in supposing that to believe in God is simply to believe that certain truths hold: That God exists, that he is benevolent, etc. Others (e.g., Hick, 1957) argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.
H.H. Price (1969) defends the claim that there are different sorts of ‘belief-in’, some, but not all, reducible to ‘beliefs-that’. If you believe in God, you believe that God exists, that God is good, etc., but, according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. One might attempt to analyse this further attitude in terms of additional beliefs-that: ‘S’ believes in ‘χ’ just in case (1) ‘S’ believes that ‘χ’ exists (and perhaps holds further factual beliefs about ‘χ’); (2) ‘S’ believes that ‘χ’ is good or valuable in some respect; and (3) ‘S’ believes that χ’s being good or valuable in this respect is itself a good thing. An analysis of this sort, however, fails adequately to capture the further affective component of belief-in. Thus, according to Price, if you believe in God, your belief is not merely that certain truths hold; you possess, in addition, an attitude of commitment and trust toward God.
Notoriously, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require a further layer of justification not required for cases of belief-that.
Some philosophers have argued that, at least for cases in which belief-in is synonymous with faith (or faith-in), evidential thresholds for constituent propositional beliefs are diminished. You may reasonably have faith in God or Mrs. Thatcher, even though beliefs about their respective attributes, were you to harbour them, would be evidentially substandard.
Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God’s existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as this is united with his belief that God exists, the belief may survive epistemic buffeting, and reasonably so, in a way that an ordinary propositional belief-that would not.
At least two large sets of questions are properly treated under the heading of the epistemology of religious belief. First, there is a set of broadly theological questions about the relationship between faith and reason, between what one knows by way of reason, broadly construed, and what one knows by way of faith. These questions we may call theological because one will find them of interest only if one thinks that in fact there is such a thing as faith, and that we do know something by way of it. Secondly, there is a whole set of questions having to do with whether and to what degree religious beliefs have warrant, or justification, or positive epistemic status. This second set of questions, unlike the first, can be raised without presupposing that there is such a thing as faith.
Rumours about the death of epistemology began to circulate widely in the 1970s. Death notices appeared in such works as Richard Rorty’s ‘Philosophy and the Mirror of Nature’ (1979) and Michael Williams’s ‘Groundless Belief’ (1977). Of late, the rumours seem to have died down, but whether they will prove to have been exaggerated remains to be seen.
Arguments for the death of epistemology typically pass through three stages. At the first stage, the critic characterizes the task of epistemology by identifying the distinctive sorts of questions it deals with. At the second stage, he tries to isolate the theoretical ideas that make those questions possible. Finally, he tries to undermine those ideas. His conclusion is that, since the ideas in question are less than compelling, there is no pressing need to solve the problems they give rise to. Thus the death-of-epistemology theorist holds that there is no barrier in principle to epistemology’s going the way of demonology or judicial astrology. These disciplines too centred on questions that were once taken very seriously; as their presuppositions came to seem dubious, debating their problems came to seem pointless. Furthermore, some theorists hold that philosophy, as a distinctive professionalized activity, revolves essentially around epistemological inquiry, so that speculation about the death of epistemology is apt to evolve into speculation about the death of philosophy generally.
Clearly, the death-of-epistemology theorist must hold that there is nothing special about philosophical problems. This is where philosophers who see little sense in talk of the death of epistemology disagree. For them, philosophical problems, including epistemological problems, are distinctive in that they are ‘natural’ or ‘intuitive’: That is to say, they can be posed and understood taking for granted little or nothing in the way of contentious theoretical ideas. Thus, unlike problems belonging to the particular sciences, they are ‘perennial’ problems that could occur to more or less anyone, anytime and anywhere. But are the standard problems of epistemology really as ‘intuitive’ as all that? Or, if they have come to seem so commonsensical, is this only because common sense is a repository for ancient theory? These are the sorts of question that underlie speculation about epistemology’s possible demise.
Because it revolves round questions like these, the death-of-epistemology movement is distinguished by its interest in what we may call ‘theoretical diagnosis’: Bringing to light the theoretical background to philosophical problems so as to argue that they cannot survive detachment from it. This explains the movement’s interest in historical-explanatory accounts of the emergence of philosophical problems. If certain problems can be shown not to be perennial, but rather to have emerged at a definite point in time, this is strongly suggestive of their dependence on some particular theoretical outlook; and if an account can be developed of the theoretical outlook from which the discipline centred on those problems emerged, that is evidence for its correctness. Still, the goal of theoretical diagnosis is to establish logical dependence, not just historical correlation. So, although historical investigation into the roots and development of epistemology can provide valuable clues to the ideas that inform its problems, history cannot substitute for problem-analysis.
The death-of-epistemology movement has many sources: In the pragmatists, particularly James and Dewey, and in the writings of Wittgenstein, Quine, Sellars and Austin. But the project of theoretical diagnosis must be distinguished from the ‘therapeutic’ approach to philosophical problems that some names on this list might call to mind. The practitioner of theoretical diagnosis does not claim that the problems he analyses are ‘pseudo-problems’, rooted in ‘conceptual confusion’. Rather, he claims that, while genuine, they are wholly internal to a particular intellectual project whose generally unacknowledged theoretical commitments he aims to isolate and criticize.
Turning to details, the task of epistemology, as these radical critics conceive it, is to determine the nature, scope and limits, and indeed the very possibility, of human knowledge. Since epistemology determines the extent to which knowledge is possible, it cannot itself take the form of empirical inquiry. Thus, epistemology purports to be a non-empirical discipline, the function of which is to sit in judgement on all particular discursive practices with a view to determining their cognitive status. The epistemologist (or, in the era of epistemologically-centred philosophy, we might as well say ‘the philosopher’) is someone professionally equipped to determine what forms of judgement are ‘scientific’, ‘rational’, ‘merely expressive’, and so on. Epistemology is therefore fundamentally concerned with sceptical questions. Determining the scope and limits of human knowledge is a matter of showing where and when knowledge is possible. But there is a project called ‘showing that knowledge is possible’ only because there are powerful arguments for the view that knowledge is impossible. Here the scepticism in question is first and foremost radical scepticism: the thesis that, with respect to this or that area of putative knowledge, we are never so much as more justified in believing one thing than another. The task of epistemology is thus to determine the extent to which it is possible to respond to the challenges posed by radically sceptical arguments, by determining where we can and cannot have justification for our beliefs. If it turns out that the prospects are more hopeful for some sorts of beliefs than for others, we will have uncovered a difference in epistemological status. The ‘scope and limits’ question and the problem of radical scepticism are two sides of one coin.
This emphasis on scepticism as the fundamental problem of epistemology may strike some philosophers as misguided. Much recent work on the concept of knowledge, particularly that inspired by Gettier’s demonstration of the insufficiency of the standard ‘justified true belief’ analysis, has been carried on independently of any immediate concern with scepticism. It must be admitted that philosophers who envisage the death of epistemology tend to assume a somewhat dismissive attitude to work of this kind. In part, this is because they tend to be dubious about the possibility of stating precise necessary and sufficient conditions for the application of any concept. But the determining factor is their thought that only the centrality of the problem of radical scepticism can explain the importance for philosophy that, at least in the modern period, epistemology has taken on. Since radical scepticism concerns the very possibility of justification, for philosophers who put this problem first, questions about what special sorts of justification yield knowledge, or about whether knowledge might be explained in non-justificational terms, are of secondary importance. Whatever importance they have will have to derive in the end from their connections, if any, with sceptical problems.
In light of this, the fundamental question for death-of-epistemology theorists becomes: ‘What are the essential theoretical presuppositions of arguments for radical scepticism?’ Different theorists suggest different answers. Rorty traces scepticism to the ‘representationalist’ conception of belief and its close ally, the correspondence theory of truth. Once we think of belief as the mind’s representation of a mind-independent ‘reality’ (mind as the mirror of nature), we will want to assure ourselves that the proper alignment has been achieved. In Rorty’s view, by switching to a more ‘pragmatic’ or ‘behaviouristic’ conception of beliefs, as devices for coping with particular, concrete problems, we can put scepticism, hence the philosophical discipline that revolves around it, behind us once and for all.
Other theorists stress epistemological foundationalism as the essential background to traditional sceptical problems. The reason for preferring this approach is that arguments for epistemological conclusions require at least one epistemological premiss. It is, therefore, not easy to see how metaphysical or semantic doctrines of the sort emphasized by Rorty could, by themselves, generate epistemological problems such as radical scepticism. On the other hand, the case for scepticism’s essential dependence on foundationalist preconceptions is by no means easy to make. It has even been argued that this approach ‘gets things almost entirely upside down’. The thought is that foundationalism is an attempt to save knowledge from the sceptic, and is therefore a reaction to, rather than a presupposition of, the deepest and most intuitive arguments for scepticism. Challenges like this certainly need to be met by death-of-epistemology theorists, who have sometimes been too ready to take as obvious scepticism’s dependence on foundationalist or other theoretical ideas. This reflects, perhaps, the dangers of taking one’s cue from historical accounts of the development of sceptical problems. It may be that, in the heyday of foundationalism, sceptical arguments were typically presented within a foundationalist context. But the crucial question is not whether some sceptical arguments take foundationalism for granted, but whether there are any that do not. This issue, which is the general issue of whether scepticism is a truly intuitive problem, can only be resolved by detailed analysis of the possibilities and resources of sceptical argumentation.
Another question concerns why anti-foundationalism should lead to the death of epistemology rather than to a non-foundational, hence coherentist, approach to knowledge and justification. It is true that death-of-epistemology theorists often characterize justification in terms of coherence. But their intention is to make a negative point. According to foundationalism, our beliefs fall naturally into broad epistemological categories that reflect objective, context-independent relations of epistemological priority. Thus, for example, experiential beliefs are thought to be naturally or intrinsically prior to beliefs about the external world, in the sense that any evidence we have for the latter must derive in the end from the former. This relation of epistemic priority is, so to say, just a fact; foundationalism is therefore committed to a strong form of realism about epistemological facts and relations: call it ‘epistemological realism’. For some anti-foundationalists, talk of coherence is just a way of rejecting this picture in favour of the view that justification is a matter of accommodating new beliefs to relevant background beliefs in contextually appropriate ways, there being no context-independent, purely epistemological restrictions on what sorts of beliefs can confer evidence on what others. If this is all that is meant, talk of coherence does not point to a theory of justification so much as to the deflationary view that justification is not the sort of thing we should expect to have theories about. There is, however, a stronger sense of ‘coherence’ which does point in the direction of a genuine theory. This is the radically holistic account of justification, according to which inference depends on assessing our entire belief-system, or total view, in the light of abstract criteria of ‘coherence’. But it is questionable whether this view, which seems to demand privileged knowledge of what we believe, is an alternative to foundationalism or just a variant form.
Accordingly, it is possible that a truly uncompromising anti-foundationalism will prove as hostile to traditional coherence theories as to standard foundationalist positions, reinforcing the connection between the rejection of foundationalism and the death of epistemology.
The death-of-epistemology movement has some affinities with the call for a ‘naturalized’ approach to knowledge. Quine argues that the time has come for us to abandon such traditional projects as refuting the sceptic, or showing how empirical knowledge can be rationally reconstructed on a sensory basis, hence justifying empirical knowledge at large. We should concentrate instead on the more tractable problem of explaining how we ‘project our physics from our data’, i.e., how retinal stimulations cause us to respond with increasingly complex sentences about events in our environment. Epistemology should be transformed into a branch of natural science, specifically experimental psychology. But though Quine presents this as a suggestion about how to continue doing epistemology, to philosophers who think that the traditional questions still lack satisfactory answers, it looks more like abandoning epistemology in favour of another pursuit entirely. It is significant, therefore, that in subsequent writings Quine has been less dismissive of sceptical concerns. But if this is how ‘naturalized’ epistemology develops, then for the death-of-epistemology theorist, its claims will open up a new field for theoretical diagnosis.
Epistemology, so we are told, is the theory of knowledge: Its aim is to discern and explain that quality or quantity, enough of which distinguishes knowledge from mere true belief. We need a name for this quality or quantity, whatever precisely it is: call it ‘warrant’. From this point of view, the epistemology of religious belief should centre on the question whether religious belief has warrant and, if it does, how much it has and how it gets it. As a matter of fact, however, epistemological discussion of religious belief, at least since the Enlightenment (and in the Western world, especially the English-speaking Western world), has tended to focus not on the question whether religious belief has warrant, but on whether it is justified. More precisely, it has tended to focus on the question whether theistic belief is justified: the belief that there exists a person like the God of traditional Christianity, Judaism and Islam, an almighty law-maker, an all-knowing, wholly benevolent and loving spiritual person who has created the world. The chief question, therefore, has been whether theistic belief is justified; the same question is often put by asking whether theistic belief is rational or rationally acceptable. Still further, the typical way of addressing this question has been by way of discussing arguments for and against the existence of God. On the pro side, there are the traditional theistic proofs or arguments: The ontological, cosmological and teleological arguments, to use Kant’s terms for them. On the other, anti-theistic side, the principal argument is the argument from evil: the argument that it is not possible, or at least not probable, that there be such a person as God, given all the pain, suffering and evil the world displays.
This argument is flanked by subsidiary arguments, such as the claim that the very concept of God is incoherent (because, for example, it is impossible that there be a person without a body), and the Freudian and Marxist claims that religious belief arises out of a sort of magnification and projection into the heavens of human attributes we think important.
But why has discussion centred on justification rather than warrant? And precisely what is justification? And why has the discussion of the justification of theistic belief focussed so heavily on arguments for and against the existence of God?
As to the first question, we can see why once we see that the dominant epistemological tradition in modern Western philosophy has tended to ‘identify’ warrant with justification. On this way of looking at the matter, warrant, that which distinguishes knowledge from mere true belief, just ‘is’ justification. The justified-true-belief theory of knowledge, the theory according to which knowledge is justified true belief, has enjoyed the status of orthodoxy. According to this view, any belief of yours has warrant for you if and only if you are justified in holding it.
But what is justification? What is it to be justified in holding a belief? To get a proper sense of the answer, we must turn to those twin towers of Western epistemology: René Descartes and, especially, John Locke. The first thing to see is that, according to Descartes and Locke, there are epistemic or intellectual duties, or obligations, or requirements. Thus, Locke:
Faith is nothing but a firm assent of the mind, which if it be regulated, as is our duty, cannot be afforded to anything but upon good reason, and so cannot be opposite to it. He that believes, without having any reason for believing, may be in love with his own fancies; but neither seeks truth as he ought, nor pays the obedience due his maker, who would have him use those discerning faculties he has given him, to keep him out of mistake and error. He that does not this to the best of his power, however he sometimes lights on truth, is in the right but by chance; and I know not whether the luckiness of the accident will excuse the irregularity of his proceeding. This, at least, is certain, that he must be accountable for whatever mistakes he runs into: Whereas, he that makes use of the light and faculties God has given him, and seeks sincerely to discover truth by those helps and abilities he has, may have this satisfaction in doing his duty as a rational creature, that though he should miss truth, he will not miss the reward of it. For he governs his assent right, and places it as he should, who in any case or matter whatsoever, believes or disbelieves, according as reason directs him. He that does otherwise, transgresses against his own light, and misuses those faculties, which were given him . . . (Essay, 4.17.24).
Rational creatures, creatures with reason, creatures capable of believing propositions (and of disbelieving them and being agnostic with respect to them), says Locke, have duties and obligations with respect to the regulation of their belief or assent. Now the central core of the notion of justification (as the etymology of the term indicates) is this: One is justified in doing something or in believing a certain way if in so doing one is innocent of wrongdoing and hence not properly subject to blame or censure. You are justified, therefore, if you have violated no duties or obligations, if you have conformed to the relevant requirements, if you are within your rights. To be justified in believing something, then, is to be within your rights in so believing, to be flouting no duty, to be satisfying your epistemic duties and obligations. This has been the dominant way of thinking about justification, and it has many important contemporary representatives. Roderick Chisholm, for example (as distinguished an epistemologist as the twentieth century can boast), in his earlier work explicitly explains justification in terms of epistemic duty (Chisholm, 1977).
The (or a) main epistemological question about religious belief, therefore, has been whether religious belief in general, and theistic belief in particular, is justified. And the traditional way to answer that question has been to inquire into the arguments for and against theism. Why this emphasis upon arguments? An argument is a way of marshalling your propositional evidence, the evidence from other propositions you believe, for or against a given proposition. And the reason for the emphasis upon argument is the assumption that theistic belief is justified if and only if there is sufficient propositional evidence for it. If there is not much by way of propositional evidence for theism, then you are not justified in accepting it. Moreover, if you accept theistic belief without having propositional evidence for it, then you are going contrary to epistemic duty and are therefore unjustified in accepting it. Thus, when W.K. Clifford trumpets that ‘it is wrong, always, everywhere, and for anyone, to believe anything upon insufficient evidence’, his is only the most strident voice in a vast chorus insisting that there is an intellectual duty not to believe in God unless you have propositional evidence for that belief. (A few others in the choir: Sigmund Freud, Brand Blanshard, H.H. Price, Bertrand Russell and Michael Scriven.)
Now how is it that the justification of theistic belief gets identified with there being propositional evidence for it? Justification is a matter of being blameless, of having done one’s duty (in this context, one’s epistemic duty): What, precisely, has this to do with having propositional evidence?
The answer, once again, is to be found in Descartes and especially Locke. Justification is the property your beliefs have when, in forming and holding them, you conform to your epistemic duties and obligations. And according to Locke, a central epistemic duty is this: To believe a proposition only to the degree that it is probable with respect to what is certain for you. What propositions are certain for you? First, according to Descartes and Locke, propositions about your own immediate experience: that you have a mild headache, or that it seems to you that you see something red. And second, propositions that are self-evident for you: necessarily true propositions so obvious that you cannot so much as entertain them without seeing that they must be true. (Examples would be simple arithmetical and logical propositions, together with such propositions as that the whole is at least as large as its parts, that red is a colour, and that whatever exists has properties.) Propositions of these two sorts are certain for you; as for other propositions, you are justified in believing one if and only if, and only to the degree to which, it is probable with respect to what is certain for you. According to Locke, therefore, and according to the whole modern foundationalist tradition initiated by Locke and Descartes (a tradition that until recently has dominated Western thinking about these topics), there is a duty not to accept a proposition unless it is certain or probable with respect to what is certain.
In the present context, therefore, the central Lockean assumption is that there is an epistemic duty not to accept theistic belief unless it is probable with respect to what is certain for you: As a consequence, theistic belief is justified only if the existence of God is probable with respect to what is certain. Locke does not argue for this proposition; he simply announces it, and epistemological discussion of theistic belief has for the most part followed him in making this assumption. This enables us to see why epistemological discussion of theistic belief has tended to focus on the arguments for and against theism: On the view in question, theistic belief is justified only if it is probable with respect to what is certain, and the way to show that it is probable with respect to what is certain is to give arguments for it from premisses that are certain, or are sufficiently probable with respect to what is certain.
There are at least three important problems with this approach to the epistemology of theistic belief. First, the standards for theistic arguments have traditionally been set absurdly high (and perhaps part of the responsibility for this must be laid at the door of some who have offered these arguments and claimed that they constitute wholly demonstrative proofs). The idea seems to be that a good theistic argument must start from what is self-evident and proceed majestically, by way of self-evidently valid argument forms, to its conclusion. It is no wonder that few if any theistic arguments meet that lofty standard, particularly in view of the fact that almost no philosophical arguments of any sort meet it. (Think of your favourite philosophical argument: Does it really start from premisses that are self-evident and move by way of self-evident argument forms to its conclusion?)
Secondly, attention has been mostly confined to three theistic arguments: The traditional ontological, cosmological and teleological arguments. But in fact there are many more good arguments: Arguments from the nature of proper function, and from the nature of propositions, numbers and sets; arguments from intentionality, from counterfactuals, from the confluence of epistemic reliability with epistemic justification, from reference, simplicity, intuition and love; arguments from colours and flavours, from miracles, play and enjoyment, morality, from beauty and from the meaning of life. There is even a theistic argument from the existence of evil.
But there is a third and deeper problem here. The basic assumption is that theistic belief is justified only if it is or can be shown to be probable with respect to some body of evidence or propositions, perhaps those that are self-evident or about one’s own mental life. But is this assumption true? The idea is that theistic belief is very much like a scientific hypothesis: It is acceptable if and only if there is an appropriate balance of propositional evidence in favour of it. But why believe a thing like that? Perhaps the theory of relativity or the theory of evolution is like that; such a theory has been devised to explain the phenomena and gets all its warrant from its success in so doing. However, other beliefs, e.g., memory beliefs, or belief in other minds, are not like that; they are not hypotheses at all, and are not accepted because of their explanatory powers. They are, instead, among the propositions from which one starts in attempting to give evidence for a hypothesis. Now, why assume that theistic belief, belief in God, is in this regard more like a scientific hypothesis than like, say, a memory belief? Why think that the justification of theistic belief depends upon the evidential relation of theistic belief to other things one believes? According to Locke and the beginnings of this tradition, it is because there is a duty not to assent to a proposition unless it is probable with respect to what is certain for you; but is there really any such duty? No one has succeeded in showing that, say, belief in other minds, or the belief that there has been a past, is probable with respect to what is certain for us. Suppose it is not: Does it follow that you are living in epistemic sin if you believe that there are other minds? Or a past?
There are urgent questions about any view according to which one has duties of the sort ‘do not believe p unless it is probable with respect to what is certain for you’. First, if this is a duty, is it one to which I can conform? My beliefs are for the most part not within my control: Certainly they are not within my direct control. I believe that there has been a past and that there are other people; even if these beliefs are not probable with respect to what is certain for me (and even if I came to know this), I could not give them up. Whether or not I accept such beliefs is not really up to me at all, for I can no more refrain from believing these things than I can refrain from conforming to the law of gravity. Second, is there really any reason for thinking I have such a duty? Nearly everyone recognizes such duties as that of not engaging in gratuitous cruelty, taking care of one’s children and one’s aged parents, and the like; but do we also find ourselves recognizing that there is a duty not to believe what is not probable (or what we cannot see to be probable) with respect to what is certain for us? It hardly seems so. It is therefore hard to see why being justified in believing in God requires that the existence of God be probable with respect to some such body of evidence as the set of propositions certain for you. Perhaps theistic belief is properly basic, i.e., such that one is perfectly justified in accepting it without accepting it on the evidential basis of other propositions one believes.
Taking justification in that original etymological fashion, therefore, there is every reason to doubt that one is justified in holding theistic belief only if one has evidence for it. Of course, the term ‘justification’ has undergone various analogical extensions in the work of various philosophers; it has been used to name various properties that are different from justification etymologically so-called, but analogically related to it. In one such use, the term simply means propositional evidence: To say that a belief is justified for someone is to say that he has propositional evidence (or sufficient propositional evidence) for it. So taken, however, the question whether theistic belief is justified loses some of its interest; for it is not clear (given this use) that there is anything wrong with holding beliefs that are unjustified in that sense. Perhaps one also does not have propositional evidence for one’s memory beliefs; if so, that would not be a mark against them, and would not suggest that there is something wrong with holding them.
Another analogically connected way to think about justification (adopted by the later Chisholm) is to think of it as simply a relation of fitting between a given proposition and one’s epistemic base, which includes the other things one believes, as well as one’s experience. Perhaps that is the way justification is to be thought of; but then it is no longer at all obvious that theistic belief has this property of justification only if it is probable with respect to some other body of evidence. Perhaps, again, it is like memory beliefs in this regard.
To recapitulate: The dominant Western tradition has been inclined to identify warrant with justification; it has been inclined to understand the latter in terms of duty and the fulfilment of obligation, and hence to suppose that there is an epistemic duty not to believe in God unless you have good propositional evidence for the existence of God. Epistemological discussion of theistic belief, as a consequence, has concentrated on the propositional evidence for and against theistic belief, i.e., on arguments for and against theistic belief. But there is excellent reason to doubt that there are epistemic duties of the sort the tradition appeals to here.
And perhaps it was a mistake to identify warrant with justification in the first place. The beliefs of the madman who thinks he is Napoleon have little warrant for him: His problem, however, need not be dereliction of epistemic duty. He is in difficulty, but it is not, or not necessarily, that of failing to fulfil epistemic duty. He may be doing his epistemic best; he may even be doing his epistemic duty in excelsis: But his madness prevents his beliefs from having much by way of warrant. His lack of warrant is not a matter of being unjustified, i.e., of failing to fulfil epistemic duty. So warrant and being epistemically justified are not the same thing. Another example: Suppose (to use the favourite twentieth-century variant of Descartes’ evil demon example) I have been captured by Alpha-Centaurian super-scientists running a cognitive experiment; they remove my brain, keep it alive in some artificial nutrient, and by virtue of their advanced technology induce in me the beliefs I might otherwise have if I were going about my usual business. Then my beliefs would not have much by way of warrant; but would that be because I was failing to do my epistemic duty? Hardly.
As a result of these and other problems, another, externalist way of thinking about knowledge has appeared in recent epistemology. A theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least some of the justifying factors need not be thus accessible, in that they can be external to the believer’s cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explanation of it.
Or perhaps the thing to say is that externalism has reappeared, for the dominant strains in epistemology prior to the Enlightenment were really externalist. According to this externalist way of thinking, warrant does not depend upon satisfaction of duty, or upon anything else to which the knower has special cognitive access (as he does to the character of his own experience and to whether he is trying his best to do his epistemic duty): it depends instead upon factors ‘external’ to the epistemic agent, such factors as whether his beliefs are produced by reliable cognitive mechanisms, or whether they are produced by epistemic faculties functioning properly in an appropriate epistemic environment.
How will we think about the epistemology of theistic belief if we think about warrant in this externalist way (which is at once both satisfyingly traditional and agreeably up to date)? I think the ontological question whether there is such a person as God is in a way prior to the epistemological question about the warrant of theistic belief. It is natural to think that if in fact we have been created by God, then the cognitive processes that issue in belief in God are indeed reliable belief-producing processes; and if in fact God created ‘us’, then no doubt the cognitive faculties that produce belief in God are functioning properly in an epistemologically congenial environment. On the other hand, if there is no such person as God, if theistic belief is an illusion of some sort, then things are much less clear. Then belief in God, in the most basic way, may well be produced by wishful thinking or some other cognitive process not aimed at truth; thus it will have little or no warrant. And belief in God on the basis of argument would be like belief in false philosophical theories on the basis of argument: do such beliefs have warrant? Accordingly, the custom of discussing the epistemological questions about theistic belief as if they could be profitably discussed independently of the ontological issue as to whether or not theism is true is misguided. The two issues are intimately intertwined.
Nonetheless, a further central idea has emerged in recent epistemology, the idea at the heart of virtue epistemology: that justification and knowledge arise from the proper functioning of our intellectual virtues or faculties in an appropriate environment. This idea is captured in the following criterion for justified belief:
(J) ‘S’ is justified in believing that ‘p’ if and only if S’s believing that ‘p’ is the result of S’s intellectual virtues or faculties functioning in an appropriate environment.
What is an intellectual virtue or faculty? A virtue or faculty in general is a power or ability or competence to achieve some result. An intellectual virtue or faculty, in the sense intended above, is a power or ability or competence to arrive at truths in a particular field, and to avoid believing falsehoods in that field. Examples of human intellectual virtues are sight, hearing, introspection, memory, deduction and induction. More exactly:
(V) A mechanism ‘M’ for generating and/or maintaining beliefs is an intellectual virtue if and only if ‘M’ is a competence to believe true propositions and to refrain from believing false propositions within a field of propositions ‘F’, when one is in a set of circumstances ‘C’.
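Criterion (V) lends itself to a crude computational gloss. The sketch below is only an illustrative toy, not anything drawn from the virtue-epistemology literature: mechanisms are modelled as functions from a proposition and circumstances to a verdict, a field as a map from propositions to truth-values, and the 0.9 reliability threshold is an arbitrary stand-in for ‘competence’.

```python
# Toy gloss on criterion (V): a mechanism 'M' counts as an intellectual
# virtue for a field 'F' in circumstances 'C' when it is sufficiently
# reliable there. All names and the threshold are illustrative assumptions.

def reliability(mechanism, field, circumstances):
    """Fraction of propositions in `field` on which `mechanism`,
    operating in `circumstances`, delivers the correct verdict."""
    verdicts = [mechanism(p, circumstances) == truth
                for p, truth in field.items()]
    return sum(verdicts) / len(verdicts)

def is_intellectual_virtue(mechanism, field, circumstances, threshold=0.9):
    return reliability(mechanism, field, circumstances) >= threshold

# The facts about colours in the scene (the 'world').
colour_field = {
    "the apple is red": True,
    "the sky is red": False,
    "the rose is red": True,
}

def sight(proposition, circumstances):
    """Crude sight: in a well-lit room it reads the facts off the scene;
    in a darkened cave it endorses nothing."""
    if circumstances == "well-lit room":
        return colour_field[proposition]
    return False

print(is_intellectual_virtue(sight, colour_field, "well-lit room"))  # True
print(is_intellectual_virtue(sight, colour_field, "darkened cave"))  # False
```

As the following paragraphs stress, the same mechanism comes out a competence relative to one field and set of circumstances and not relative to another.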
It is required that we specify a particular field of propositions ‘F’ for ‘M’, since a given cognitive mechanism will be a competence for believing some kinds of truths but not others. The faculty of sight, for example, allows ‘us’ to determine the colours of objects, but not the sounds that they make. It is also required that we specify a set of circumstances ‘C’ for ‘M’, since a given cognitive mechanism will be a competence in some circumstances but not others. For example, the faculty of sight allows ‘us’ to determine colours in a well-lighted room, but not in a darkened cave.
According to the aforementioned formulation, what makes a cognitive mechanism an intellectual virtue is that it is reliable in generating true beliefs rather than false beliefs in the relevant field and in the relevant circumstances. It is correct to say, therefore, that virtue epistemology is a kind of reliabilism. Whereas generic reliabilism maintains that justified belief is belief that results from a reliable cognitive process, virtue epistemology places a restriction on the kind of process which is allowed: namely, the cognitive processes that are important for justification and knowledge are those that have their basis in an intellectual virtue.
Finally, considerations concerning faculty reliability point to the importance of an appropriate environment. The idea is that cognitive mechanisms might be reliable in some environments but not in others. Consider an example from Alvin Plantinga. On a planet revolving around Alpha Centauri, cats are invisible to human beings. Moreover, Alpha Centaurian cats emit a type of radiation that causes humans to form the belief that there is a dog barking nearby. Suppose now that you are transported to this Alpha Centaurian planet, a cat walks by, and you form the belief that there is a dog barking nearby. Surely you are not justified in believing this. However, the problem here is not with your intellectual faculties, but with your environment. Although your faculties of perception are reliable on earth, they are unreliable on the Alpha Centaurian planet, which is an inappropriate environment for those faculties.
The central idea of virtue epistemology, as expressed in (J) above, has a high degree of initial plausibility. By making the reliability of our faculties central, virtue epistemology explains quite neatly why beliefs caused by perception and memory are often justified, while beliefs caused by wishful thinking and superstition are not. Secondly, the theory gives ‘us’ a basis for answering certain kinds of scepticism. Specifically, we may agree that if we were brains in a vat, or victims of a Cartesian demon, then we would not have knowledge even in those rare cases where our beliefs turned out true. But virtue epistemology explains that what is important for knowledge is that our faculties are in fact reliable in the environment in which we are. And so we do have knowledge so long as we are, in fact, not victims of a Cartesian demon, or brains in a vat. Finally, Plantinga argues that virtue epistemology deals well with Gettier problems. The idea is that Gettier problems give ‘us’ cases of justified belief that is ‘true by accident’. Virtue epistemology, Plantinga argues, helps ‘us’ to understand what it means for a belief to be true by accident, and provides a basis for saying why such cases are not knowledge. Beliefs are true by accident when they are caused by otherwise reliable faculties functioning in an inappropriate environment. Plantinga develops this line of reasoning in Plantinga (1988).
But although virtue epistemology has good initial plausibility, it faces some substantial objections. The first objection which virtue epistemology faces is a version of the generality problem. We may understand the problem more clearly if we consider the following criterion for justified belief, which results from an explication of (J).
(Jʹ) ‘S’ is justified in believing that ‘p’ if and only if
(A) there is a field ‘F’ and a set of circumstances ‘C’ such that
(1) ‘p’ is in ‘F’,
(2) ‘S’ is in ‘C’ with respect to the proposition that ‘p’, and
(3) if ‘S’ were in ‘C’ with respect to a proposition in ‘F’, then ‘S’ would very likely believe correctly with regard to that proposition.
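Set out schematically (the notation is an editorial convenience, not part of the text: ‘InC’ abbreviates being in circumstances ‘C’ with respect to a proposition, and the boxed arrow marks the counterfactual conditional of clause (3), whose ‘very likely’ qualification is suppressed):

```latex
% Schematic rendering of (J'): S is justified in believing p iff some field F
% and circumstances C satisfy clauses (1)-(3). The counterfactual arrow
% \boxright would need a suitable package or custom macro.
J'(S,p) \;\equiv\; \exists F\, \exists C\, \Bigl[\, p \in F
  \;\wedge\; \mathrm{InC}(S,p)
  \;\wedge\; \forall q \in F\; \bigl(\mathrm{InC}(S,q)
      \,\boxright\, \mathrm{BelievesCorrectly}(S,q)\bigr) \Bigr]
```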
The problem arises in how we are to select an appropriate ‘F’ and ‘C’. For given any true belief that ‘p’, we can always come up with a field ‘F’ and a set of circumstances ‘C’ such that ‘S’ is perfectly reliable in ‘F’ and ‘C’. For any true belief that ‘p’, let ‘F’ be the field including only the propositions ‘p’ and ‘not-p’. Let ‘C’ include whatever circumstances there are which cause ‘p’ to be true, together with the circumstances which cause ‘S’ to believe that ‘p’. Clearly, ‘S’ is perfectly reliable with respect to propositions in this field in these circumstances. But we do not want to say that all of S’s true beliefs are justified for ‘S’. And of course, there is an analogous problem in the other direction of generality. For given any belief that ‘p’, we can always specify a field of propositions ‘F’ and a set of circumstances ‘C’ such that ‘p’ is in ‘F’, ‘S’ is in ‘C’, and ‘S’ is not reliable with respect to propositions in ‘F’ in ‘C’.
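The gerrymandering move can be made concrete in a few lines. This is only an illustrative toy (the believer, the field-builder, and the example proposition are all invented for the sketch): for any true belief, however lucky, a two-member field can be built over which the believer scores perfect reliability.

```python
# Toy illustration of the generality problem: for ANY true belief 'p',
# the field F = {p, not-p} makes even a lucky guesser perfectly reliable.
# Everything here is an invented example, not an algorithm from the text.

def gerrymandered_field(p):
    """Field containing only 'p' (stipulated true) and its negation."""
    return {p: True, "not " + p: False}

def lucky_guesser(guess):
    """A believer with no sensitivity to evidence: it believes exactly the
    proposition it happened to guess, and disbelieves everything else."""
    return lambda proposition: proposition == guess

def reliability(believer, field):
    hits = [believer(q) == truth for q, truth in field.items()]
    return sum(hits) / len(hits)

p = "the number of stars is even"     # suppose this happens to be true
S = lucky_guesser(p)                  # S believes it by sheer luck
F = gerrymandered_field(p)

print(reliability(S, F))  # 1.0 -- perfectly reliable over F, yet unjustified
```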
Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F.P. Ramsey (1931), who said that a belief was knowledge if it is true, certain and obtained by a reliable process. P. Unger (1968) suggested that ‘S’ knows that ‘p’ just in case it is not at all accidental that ‘S’ is right about its being the case that ‘p’. D.M. Armstrong (1973) drew an analogy between a thermometer that reliably indicates the temperature and a belief that reliably indicates the truth. Armstrong said that a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth via laws of nature.
Closely allied to the nomic sufficiency account of knowledge is the counterfactual approach, primarily due to F.I. Dretske (1981), A.I. Goldman (1976, 1986) and R. Nozick (1981). The core of this approach is that S’s belief that ‘p’ qualifies as knowledge just in case ‘S’ believes ‘p’ because of reasons that would not obtain unless ‘p’ were true, or because of a process or method that would not yield belief in ‘p’ if ‘p’ were not true. For example, ‘S’ would not have his current reasons for believing there is a telephone before him, or would not come to believe this, unless there was a telephone before him. Thus, there is a counterfactual reliable guarantor of the belief’s being true. A variant of the counterfactual approach says that ‘S’ knows that ‘p’ only if there is no ‘relevant alternative’ situation in which ‘p’ is false but ‘S’ would still believe that ‘p’.
The relevant-alternatives account attempts to accommodate an opposing strand in our thinking about knowledge: the idea that knowledge is an absolute concept. On one interpretation, this means that the justification or evidence one must have in order to know a proposition ‘p’ must be sufficient to eliminate all the alternatives to ‘p’ (where an alternative to a proposition ‘p’ is a proposition incompatible with ‘p’). That is, one’s justification or evidence for ‘p’ must be sufficient for one to know that every alternative to ‘p’ is false. These elements of our thinking about knowledge are exploited by sceptical arguments. Such arguments call our attention to alternatives that our evidence cannot eliminate. For example (Dretske, 1970), when we are at the zoo we might claim to know that we see a zebra on the basis of certain visual evidence, namely a zebra-like appearance. The sceptic inquires how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such a deception, intuitively it is not strong enough for ‘us’ to know that we are not so deceived. By pointing out alternatives of this nature that our evidence cannot eliminate, as well as others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that this requirement that our evidence eliminate every alternative is seldom, if ever, met.
The above considerations show that virtue epistemology must say more about the selection of relevant fields and sets of circumstances. Plantinga addresses the generality problem by introducing the concept of a design plan for our intellectual faculties. Relevant specifications for fields and sets of circumstances are determined by this plan. One might object that this approach requires the problematic assumption of a Designer of the design plan. But Plantinga disagrees on two counts: he does not think that the assumption is needed, or that it would be problematic. Plantinga discusses relevant material in Plantinga (1986, 1987 and 1988). Ernest Sosa addresses the generality problem by introducing the concept of an epistemic perspective. In order to have reflective knowledge, ‘S’ must have a true grasp of the reliability of her faculties, this grasp being itself provided by a ‘faculty of faculties’. Relevant specifications of an ‘F’ and ‘C’ are determined by this perspective. Alternatively, Sosa has suggested that relevant specifications are determined by the purposes of the epistemic community. The idea is that fields and sets of circumstances are determined by their place in useful generalizations about epistemic agents and their abilities to act as reliable information-sharers.
The second objection which virtue epistemology faces is that (J) and (Jʹ) are too strong. It is possible for ‘S’ to be justified in believing that ‘p’ even when ‘S’s’ intellectual faculties are largely unreliable. Suppose, for example, that Jane is the victim of a Cartesian demon, so that almost none of her beliefs about the world around her are true. It is clear that in this case Jane’s faculties of perception are almost wholly unreliable. But we would not want to say that none of Jane’s perceptual beliefs are justified. If Jane believes that there is a tree in her yard, and she bases the belief on the usual tree-like experience, then it seems that she is as justified as we would be regarding a similar belief.
Sosa addresses the current problem by arguing that justification is relative to an environment ‘E’. Accordingly, ‘S’ is justified in believing that ‘p’ relative to ‘E’ if and only if ‘S’s’ faculties would be reliable in ‘E’. Note that on this account ‘S’ need not actually be in ‘E’ in order for ‘S’ to be justified in believing some proposition relative to ‘E’. This allows Sosa to conclude that Jane has justified belief in the above case. For Jane is justified in her perceptual beliefs relative to our environment, although she is not justified in those beliefs relative to the environment in which she is actually situated.
We have earlier made mention of analyticity, but the true story of analyticity is surprising in many ways. Contrary to received opinion, it was the empiricist Locke rather than the rationalist Kant who had the better account of this type of proposition. Frege and Rudolf Carnap (1891-1970), a German logical positivist whose first major work was “Der logische Aufbau der Welt” (1928, trans. as “The Logical Structure of the World,” 1967), are usually cast as analyticity’s modern champions. Carnap pursued the enterprise of clarifying the structures of mathematics and scientific language (the only legitimate task for scientific philosophy) in “The Logical Syntax of Language” (1937). Refinements continued with “Meaning and Necessity” (1947), while a general loosening of the original ideal of reduction culminated in “Logical Foundations of Probability” (1950), the most important single work of ‘confirmation theory’. Other works concern the structure of physics and the concept of entropy.
Both Frege and Carnap, represented as analyticity’s best friends in this century, did as much to undermine it as its worst enemies. Quine (1908- ), whose early work was on mathematical logic, issued in “A System of Logistic” (1934), “Mathematical Logic” (1940) and “Methods of Logic” (1950); it was with the collection of papers “From a Logical Point of View” (1953) that his philosophical importance became widely recognized. Putnam (1926- ) has in his later period largely been concerned to deny any serious asymmetry between truth and knowledge as it is obtained in natural science, and as it is obtained in morals and even theology. His books include “Philosophy of Logic” (1971), “Representation and Reality” (1988) and “Renewing Philosophy” (1992); collections of his papers include “Mathematics, Matter, and Method” (1975), “Mind, Language, and Reality” (1975) and “Realism and Reason” (1983). Both Quine and Putnam, represented as having refuted the analytic/synthetic distinction, not only did no such thing but in fact contributed significantly to undoing the damage done by Frege and Carnap. Finally, the epistemological significance of the distinction is nothing like what it is commonly taken to be.
Locke’s account of an analytic proposition was, for its time, everything that a succinct account of analyticity should be (Locke, 1924, pp. 306-8). He distinguished two kinds of analytic propositions: identity propositions, in which ‘we affirm the said term of itself’, e.g., ‘Roses are roses’, and predicative propositions, in which ‘a part of the complex idea is predicated of the name of the whole’, e.g., ‘Roses are flowers’. Locke calls such sentences ‘trifling’ because a speaker who uses them ‘trifles with words’. A synthetic sentence, in contrast, such as a mathematical theorem, states ‘a truth and conveys with it instructive real knowledge’. Correspondingly, Locke distinguishes two kinds of ‘necessary consequences’: analytic entailment, where validity depends on the literal containment of the conclusion in the premiss, and synthetic entailment, where it does not. (Locke did not originate this concept-containment notion of analyticity. It is discussed by Arnauld and Nicole, and it is safe to say it has been around for a very long time (Arnauld, 1964).)
Kant’s account of analyticity, which received opinion tells ‘us’ is the consummate formulation of this notion in modern philosophy, is actually a step backward. What is valid in his account is not novel, and what is novel is not valid. Kant presents Locke’s account of concept-containment analyticity, but introduces certain alien features, the most important being his characterization of analytic propositions as propositions whose denials are logical contradictions (Kant, 1783). This characterization suggests that analytic propositions based on Locke’s part-whole relation or Kant’s explicative copula are a species of logical truth. But the containment of the predicate concept in the subject concept in sentences like ‘Bachelors are unmarried’ is a different relation from the containment of the consequent in the antecedent in a sentence like ‘If John is a bachelor, then John is a bachelor or Mary read Kant’s Critique’. The former is literal containment, whereas the latter is, in general, not. Talk of the ‘containment’ of the consequent of a logical truth in the antecedent is metaphorical, a way of saying ‘logically derivable’.
Kant’s conflation of concept containment with logical containment caused him to overlook the question of whether logical truths are analytic, and the problem of how he can say mathematical truths are synthetic a priori when they cannot be denied without contradiction. Historically, the conflation set the stage for the disappearance of the Lockean notion. Frege, whom received opinion portrays as second only to Kant among the champions of analyticity, and Carnap, whom it portrays as just behind Frege, were jointly responsible for the disappearance of concept-containment analyticity.
Frege was clear about the difference between concept containment and logical containment, expressing it as like the difference between the containment of ‘beams in a house’ and the containment of a ‘plant in the seed’ (Frege, 1884). But he found the former notion, as Kant formulated it, defective in three ways: it explains analyticity in psychological terms, it does not cover all cases of analytic propositions, and, perhaps most important for Frege’s logicism, its notion of containment is ‘unfruitful’ as a definitional mechanism in logic and mathematics (Frege, 1884). In an invidious comparison between the two notions of containment, Frege observes that with logical containment ‘we are not simply taking out of the box again what we have just put into it’. His definition makes logical containment the basic notion. Analyticity becomes a special case of logical truth, and, even in this special case, the definitions employ the full power of definition in logic and mathematics rather than mere concept combination.
Carnap, attempting to overcome what he saw as a shortcoming in Frege’s account of analyticity, took the remaining step necessary to do away explicitly with Lockean-Kantian analyticity. As Carnap saw things, it was a shortcoming of Frege’s explanation that it seems to suggest that the definitional relations underlying analytic propositions can be extra-logical in some sense, say, in resting on linguistic synonymy. To Carnap, this represented a failure to achieve a uniform formal treatment of analytic propositions and left ‘us’ with a dubious distinction between logical and extra-logical vocabulary. Hence, he eliminated the reference to definitions in Frege’s explanation of analyticity by introducing ‘meaning postulates’, e.g., statements such as ‘(∀x) (x is a bachelor → x is unmarried)’ (Carnap, 1965). Like the standard logical postulates on which they were modelled, meaning postulates express nothing more than constraints on the admissible models with respect to which sentences and deductions are evaluated for truth and validity. Thus, despite their name, meaning postulates have no more to do with meaning than any other statements expressing a necessary truth. In defining analytic propositions as consequences of (an expanded set of) logical laws, Carnap explicitly removed the one place in Frege’s explanation where there might be room for concept containment, and with it the last trace of Locke’s distinction between semantic and other ‘necessary consequences’.
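Carnap’s construction can be set out schematically. The notation below is an editorial reconstruction of the standard picture, not a quotation from Carnap: a meaning postulate is laid down as an axiom, and analyticity in a language L is defined as consequence from the logical laws of L expanded by its meaning postulates.

```latex
% A Carnap-style meaning postulate (the bachelor example from the text):
\forall x\,\bigl(\mathrm{Bachelor}(x) \rightarrow \mathrm{Unmarried}(x)\bigr)

% Analyticity as consequence of the logical laws Logic_L of a language L
% together with its set MP_L of meaning postulates:
\mathrm{Analytic}_L(S) \;\iff\; \mathit{Logic}_L \cup \mathit{MP}_L \vdash S
```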
Quine, the staunchest critic of analyticity of our time, performed an invaluable service on its behalf, although one that has gone almost completely unappreciated. Quine made two devastating criticisms of Carnap’s meaning-postulate approach that expose it as both irrelevant and vacuous. It is irrelevant because, in using particular words of a language, meaning postulates fail to explicate analyticity for sentences and languages generally; that is, they do not define ‘S is analytic in L’ for variable ‘S’ and ‘L’ (Quine, 1953). It is vacuous because, although meaning postulates tell ‘us’ what sentences are to count as analytic, they do not tell ‘us’ what it is for them to be analytic.
Received opinion has it that Quine did much more than refute the analytic/synthetic distinction as Carnap tried to draw it. Received opinion has it that Quine demonstrated there is no distinction, however anyone might try to draw it. But this, too, is incorrect. To argue for this stronger conclusion, Quine had to show that there is no way to draw the distinction outside logic, in particular in a theory in linguistics corresponding to Carnap’s; his argument had to take an entirely different form. Some inherent feature of linguistics had to be exploited to show that no theory in this science can deliver the distinction. The feature Quine chose was a principle of operationalist methodology characteristic of the school of Bloomfieldian linguistics. Quine succeeds in showing that no objective sense can be made of meaning in linguistics if making sense of a linguistic concept requires, as that school claims, operationally defining it in terms of substitution procedures that employ only concepts unrelated to that linguistic concept. But Chomsky’s revolution in linguistics replaced the Bloomfieldian taxonomic model of grammars with the hypothetico-deductive model of generative linguistics, and, as a consequence, such operational definition was removed as the standard for concepts in linguistics. The standard of theoretical definition that replaced it was far more liberal, allowing the members of a family of linguistic concepts to be defined with respect to one another within a set of axioms that state their systematic interconnections, the entire system being judged by whether its consequences are confirmed by the linguistic facts. Quine’s argument does not even address theories of meaning based on this hypothetico-deductive model (Katz, 1988; Katz, 1990).
Putnam, the other staunch critic of analyticity, performed a service on behalf of analyticity fully on a par with, and complementary to, Quine’s. Whereas Quine refuted Carnap’s formalization of Frege’s conception of analyticity, Putnam refuted this very conception itself. Putnam put an end to the entire attempt, initiated by Frege and completed by Carnap, to construe analyticity as a logical concept (Putnam, 1962, 1970, 1975).
However, as with Quine, received opinion has it that Putnam did much more. Putnam is credited with having devised science-fiction cases, from the robot-cat case to the twin-earth cases, that are counterexamples to the traditional theory of meaning. Again, received opinion is incorrect. These cases are only counterexamples to Frege’s version of the traditional theory of meaning. Frege’s version claims both (1) that sense determines reference, and (2) that there are instances of analyticity, say, typified by ‘cats are animals’, and of synonymy, say, typified by ‘water’ in English and ‘water’ in twin-earth English. Given (1) and (2), what we call ‘cats’ could not be non-animals and what we call ‘water’ could not differ from what the twin-earthers call ‘water’. But, as Putnam’s cases show, what we call ‘cats’ could be Martian robots and what they call ‘water’ could be something other than H2O. Hence, the cases are counterexamples to Frege’s version of the theory.
Putnam himself takes these examples to refute the traditional theory of meaning per se, because he thinks other versions must also subscribe to both (1) and (2). He was mistaken in the case of (1). Frege’s theory entails (1) because it defines the sense of an expression as the mode of determination of its referent (Frege, 1952, pp. 56-78). But sense does not have to be defined this way, or in any way that entails (1). It can be defined as in (D).
(D) Sense is that aspect of the grammatical structure of expressions and sentences responsible for their having sense properties and relations like meaningfulness, ambiguity, antonymy, synonymy, redundancy, analyticity and analytic entailment. (Katz, 1972 & 1990). (Note that this use of sense properties and relations is no more circular than the use of logical properties and relations to define logical form, for example, as that aspect of grammatical structure of sentences on which their logical implications depend.)
Again, (D) makes sense internal to the grammar of a language and reference an external matter of language use, typically involving extra-linguistic beliefs. Therefore, (D) cuts the strong connection between sense and reference expressed in (1), so that there is no inference from the modal fact that ‘cats’ could refer to robots to the conclusion that ‘Cats are animals’ is not analytic. Likewise, there is no inference from ‘water’ referring to different substances on earth and twin earth to the conclusion that our word and theirs are not synonymous. Putnam’s science-fiction cases do not apply to a version of the traditional theory of meaning based on (D).
The success of Putnam’s and Quine’s criticisms in application to Frege’s and Carnap’s theory of meaning, together with their failure in application to a theory in linguistics based on (D), creates the option of overcoming the shortcomings of the Lockean-Kantian notion of analyticity without switching to a logical notion. This option was explored in the 1960s and 1970s in the course of developing a theory of meaning modelled on the hypothetico-deductive paradigm for grammars introduced in the Chomskyan revolution (Katz, 1972).
This theory automatically avoids Frege’s criticism of the psychological formulation of Kant’s definition because, as an explication of a grammatical notion within linguistics, it is stated as a formal account of the structure of expressions and sentences. The theory also avoids Frege’s criticism that concept-containment analyticity is not ‘fruitful’ enough to encompass truths of logic and mathematics. That criticism rests on the dubious assumption, part of Frege’s logicism, that analyticity ‘should’ encompass them (Benacerraf, 1981). But in linguistics, where the only concern is the scientific truth about natural languages, there is no basis for requiring that concept-containment analyticity encompass the truths of logic and mathematics. Moreover, since we are seeking the scientific truth about trifling propositions in natural language, we will eschew relations from logic and mathematics that are too fruitful for the description of such propositions. This is not to deny that we want a notion of necessary truth that goes beyond the trifling, but only to deny that that notion is the notion of analyticity in natural language.
The remaining Fregean criticism points to a genuine incompleteness of the traditional account of analyticity. There are analytic relational sentences, for example, ‘Jane walks with those with whom she strolls’, ‘Jack kills those he himself has murdered’, etc., and analytic entailments with existential conclusions, for example, ‘I think’, therefore ‘I exist’. The containment in these sentences is just as literal as that in an analytic subject-predicate sentence like ‘Bachelors are unmarried’. A theory of meaning construed as a hypothetico-deductive systematization of sense as defined in (D) overcomes the incompleteness of the traditional account in the case of such relational sentences.
Such a theory of meaning makes the principal concern of semantics the explanation of sense properties and relations like synonymy, antonymy, redundancy, analyticity, ambiguity, etc. Furthermore, it makes grammatical structure, specifically sense structure, the basis for explaining them. This leads directly to the discovery of a new level of grammatical structure, and this, in turn, makes possible a proper definition of analyticity. To see this, consider two simple examples. It is a semantic fact that ‘a male bachelor’ is redundant and that ‘spinster’ is synonymous with ‘woman who never married’. In the case of the redundancy, we have to explain the fact that the sense of the modifier ‘male’ is already contained in the sense of its head ‘bachelor’. In the case of the synonymy, we have to explain the fact that the sense of ‘spinster’ is identical to the sense of ‘woman who never married’ (compositionally formed from the senses of ‘woman’, ‘never’ and ‘married’). But insofar as such facts concern relations involving the components of the senses of ‘bachelor’ and ‘spinster’, and insofar as these words are syntactically simple, there must be a level of grammatical structure at which syntactically simple words are semantically complex. This, in brief, is the route by which we arrive at the level of ‘decompositional semantic structure’ that is the locus of sense structures masked by syntactically simple words.
Discovery of this new level of grammatical structure was followed by efforts to represent the structure of the senses found there. Without going into the details of sense representations, it is clear that, once we have the notion of decompositional representation, we can see how to generalize Locke’s and Kant’s informal, subject-predicate account of analyticity to cover relational analytic sentences. Let a simple sentence ‘S’ consist of an n-place predicate ‘P’ with terms T1, . . ., Tn occupying its argument places.
(A) ‘S’ is analytic in case, first, ‘S’ has a term Ti that consists of an m-place predicate ‘Q’ (m > n or m = n) with terms occupying its argument places, and second, ‘P’ is contained in ‘Q’ and, for each term Tj of T1, . . ., Ti−1, Ti+1, . . ., Tn, Tj is contained in the term of ‘Q’ that occupies the argument place in ‘Q’ corresponding to the argument place occupied by Tj in ‘P’. (Katz, 1972)
To see how (A) works, suppose that ‘stroll’ in ‘Jane walks with those with whom she strolls’ is decompositionally represented as having the same sense as ‘walk idly and in a leisurely way’. The sentence is analytic by (A) because the predicate ‘walk’ (the sense of ‘walk’) is contained in the predicate ‘stroll’ (the sense of ‘stroll’), and the term ‘Jane’ (the sense of ‘Jane’ associated with the predicate ‘walk’) is contained in the term ‘Jane’ (the sense of ‘she herself’ associated with the predicate ‘stroll’). The containment in the case of the other terms is automatic.
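Stated schematically, (A) amounts to the following; the notation is my own paraphrase for clarity, not Katz’s:

```latex
% Schematic paraphrase of (A). Here S = P(T_1, ..., T_n), and
% '\sqsubseteq' abbreviates 'is contained in the sense of'.
% The notation is illustrative, not Katz's own.
\[
\mathrm{Analytic}(S) \iff \exists i \,\Big[\, T_i = Q(T'_1,\dots,T'_m),\;
m \ge n,\; P \sqsubseteq Q,\; \text{and } \forall j \neq i:\;
T_j \sqsubseteq T'_{c(j)} \,\Big]
\]
% where c(j) picks out the argument place of Q corresponding to the
% argument place occupied by T_j in P.
```

On this reading, the subject-predicate case discussed below is simply the instance n = 1.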
The fact that (A) itself makes no reference to logical operators or logical laws indicates that analyticity for subject-predicate sentences can be extended to simple relational sentences without treating analytic sentences as instances of logical truths. Further, the source of the earlier account’s incompleteness is no longer explained, as Frege explained it, as the absence of ‘fruitful’ logical apparatus, but is now explained as the mistake of treating what is only a special case of analyticity as if it were the general case. The inclusion of the predicate in the subject is the special case (where n = 1) of the general case of the inclusion of an n-place predicate (and its terms) in one of its terms. Note also that the defects Quine complained of in connection with Carnap’s meaning-postulate explication are absent from (A). (A) contains no words from a natural language; it explicitly uses the variables ‘S’ and ‘L’ because it is a definition in linguistic theory. Moreover, (A) tells us in virtue of what property a sentence is analytic, namely, redundant predication: the predication structure of an analytic sentence is already found in the content of its term structure.
Received opinion has been anti-Lockean in holding that necessary consequences in logic and language belong to one and the same species. This seems wrong, because the property of redundant predication provides a non-logical explanation of why true statements made in the literal use of analytic sentences are necessarily true. Since the property ensures that the objects of the predication in the use of an analytic sentence are chosen on the basis of the features to be predicated of them, the truth-conditions of the statement are automatically satisfied once its terms take on reference. The difference between such a linguistic source of necessity and the logical and mathematical sources vindicates Locke’s distinction between two kinds of ‘necessary consequence’.
Received opinion concerning analyticity contains another mistake: the idea that analyticity is inimical to science. In part, the idea developed as a reaction to certain dubious uses of analyticity, such as Frege’s attempt to establish logicism and Schlick’s, Ayer’s and other logical positivists’ attempt to deflate claims to metaphysical knowledge by showing that alleged deductive truths are merely empty analytic truths (Schlick, 1948, and Ayer, 1946). In part, it developed as a response to a number of cases where alleged analytic, and hence necessary, truths, e.g., the law of excluded middle, seem to have been taken as open to revision. Such cases convinced philosophers like Quine and Putnam that the analytic/synthetic distinction is an obstacle to scientific progress.
The problem, if there is one, is not analyticity in the concept-containment sense, but the conflation of it with analyticity in the logical sense. This made it seem as if there is a single concept of analyticity that can serve as the ground for a wide range of deductive truths. But, just as there are two analytic/synthetic distinctions, so there are two concepts of concept. The narrow Lockean/Kantian distinction is based on a narrow notion of concept on which concepts are senses of expressions in the language. The broad Fregean/Carnapian distinction is based on a broad notion of concept on which concepts are conceptions, often scientific ones, about the nature of the referent(s) of expressions (Katz, 1972, and, curiously, Putnam, 1981). Conflation of these two notions of concept produced the illusion of a single concept with the content of philosophical, logical and mathematical conceptions, but with the status of linguistic concepts. This encouraged philosophers to think that they were in possession of concepts with the content to express substantive philosophical claims, such as Frege’s, Schlick’s and Ayer’s, and with a status that trivializes the task of justifying them by requiring only linguistic grounds for the deductive propositions in question.
Finally, there is an important epistemological implication of separating the broad and narrow notions of analyticity. Frege and Carnap took the broad notion of analyticity to provide foundations for necessity and apriority, and hence for some form of rationalism, and nearly all rationalistically inclined analytic philosophers followed them in this. Thus, when Quine dispatched the Frege-Carnap position on analyticity, it was widely believed that necessity, apriority and rationalism had also been dispatched, and that, as a consequence, Quine had ushered in an ‘empiricism without dogmas’ and a ‘naturalized epistemology’. But given that there is still a notion of analyticity that enables us to pose the problem of how necessary, synthetic deductive knowledge is possible (moreover, one whose narrowness makes logical and mathematical knowledge part of the problem), Quine did not undercut the foundations of rationalism. Hence, a serious reappraisal of the new empiricism and naturalized epistemology is, to say the least, very much in order (Katz, 1990).
The deductive/inductive distinction has been applied to a wide range of objects, including concepts, propositions, truths and knowledge. Our primary concern will, however, be with the epistemic distinction between deductive and inductive knowledge. The most common way of marking the distinction is by reference to Kant’s claim that deductive knowledge is absolutely independent of all experience. It is generally agreed that S’s knowledge that ‘p’ is independent of experience just in case S’s belief that ‘p’ is justified independently of experience. Some authors (Butchvarov, 1970, and Pollock, 1974), however, find this negative characterization of deductive knowledge unsatisfactory and have opted for a positive characterization in terms of the type of justification on which such knowledge depends. Finally, others (Putnam, 1983, and Chisholm, 1989) have attempted to mark the distinction by introducing concepts such as necessity and rational unrevisability rather than in terms of the type of justification relevant to deductive knowledge.
One who characterizes deductive knowledge in terms of justification that is independent of experience is faced with the task of articulating the relevant sense of experience. Proponents of the deductive frequently cite ‘intuition’ or ‘intuitive apprehension’ as the source of deductive justification, and maintain that these terms refer to a distinctive type of experience that is both common and familiar to most individuals. Hence, there is a broad sense of experience in which deductive justification is dependent on experience. An initially attractive strategy is to suggest that deductive justification must be independent of sense experience. But this account is too narrow, since memory, for example, is not a form of sense experience, yet justification based on memory is presumably not deductive. There appear to remain only two options: provide a general characterization of the relevant sense of experience, or enumerate those sources that are experiential. General characterizations of experience often maintain that experience provides information specific to the actual world while non-experiential sources provide information about all possible worlds. This approach, however, reduces the concept of non-experiential justification to the concept of being justified in believing a necessary truth. Accounts by enumeration have two problems: (1) there is some controversy about which sources to include in the list, and (2) there is no guarantee that the list is complete. It is generally agreed that perception and memory should be included. Introspection, however, is problematic, for beliefs about one’s conscious states and about the manner in which one is appeared to are plausibly regarded as experientially justified. Yet some, such as Pap (1958), maintain that experiments in imagination are the source of deductive justification.
Even if this contention is rejected and deductive justification is characterized as justification independent of the evidence of perception, memory and introspection, it remains possible that there are other sources of justification. If it should be the case that clairvoyance, for example, is a source of justified beliefs, such beliefs would be justified deductively on the enumerative account.
The most common approach to offering a positive characterization of deductive justification is to maintain that, in the case of basic deductive propositions, understanding the proposition is sufficient to justify one in believing that it is true. This approach faces two pressing issues. First, what is it to understand a proposition in the manner that suffices for justification? Proponents of the approach typically distinguish understanding the words used to express a proposition from apprehending the proposition itself, and maintain that it is the latter that is relevant to deductive justification. But this move simply shifts the problem to that of specifying what it is to apprehend a proposition. Without a solution to this problem, it is difficult, if not impossible, to evaluate the account, since one cannot be sure that the requisite sense of apprehension does not justify paradigmatic inductive propositions as well. Second, even less is said about the manner in which apprehending a proposition justifies one in believing that it is true. Proponents are often content with the bald assertion that one who understands a basic deductive proposition can thereby ‘see’ that it is true. But what requires explanation is how understanding a proposition enables one to see that it is true.
Difficulties in characterizing deductive justification in terms either of independence from experience or of its source have led some to introduce the concept of necessity into their accounts, although this appeal takes various forms. Some have employed necessity as a necessary condition for deductive justification, others as a sufficient condition, and still others as both. In claiming that necessity is a criterion of the deductive, Kant held that necessity is a sufficient condition for deductive justification. This claim, however, needs further clarification. Three theses regarding the relationship between the deductive and the necessary can be distinguished: (i) if ‘p’ is a necessary proposition and ‘S’ is justified in believing that ‘p’ is necessary, then S’s justification is deductive; (ii) if ‘p’ is a necessary proposition and ‘S’ is justified in believing that ‘p’ is necessarily true, then S’s justification is deductive; and (iii) if ‘p’ is a necessary proposition and ‘S’ is justified in believing that ‘p’, then S’s justification is deductive. For example, many proponents of the deductive contend that all knowledge of a necessary proposition is deductive. (ii) and (iii) have the shortcoming of settling by stipulation the issue of whether inductive knowledge of necessary propositions is possible. (i) does not have this shortcoming, since the recent examples offered in support of this claim by Kripke (1980) and others have been cases where it is alleged that the truth value of necessary propositions is knowable inductively. (i) has the shortcoming, however, of either ruling out the possibility of being justified in believing that a proposition is necessary on the basis of testimony or else sanctioning such justification as deductive. (ii) and (iii), of course, suffer from an analogous problem.
These problems are symptomatic of a general shortcoming of the approach: it attempts to provide a sufficient condition for deductive justification solely in terms of the modal status of the proposition believed, without making reference to the manner in which it is justified. This shortcoming, however, can be avoided by incorporating necessity as a necessary but not sufficient condition for deductive justification as, for example, in Chisholm (1989). Here two theses must be distinguished: (1) if ‘S’ is justified deductively in believing that ‘p’, then ‘p’ is necessarily true; (2) if ‘S’ is justified deductively in believing that ‘p’, then ‘p’ is a necessary proposition. (1) rules out the possibility of deductively justified false beliefs; (2), however, allows this possibility. A further problem with both (1) and (2) is that it is not clear whether they permit deductively justified beliefs about the modal status of a proposition. For they require that, in order for ‘S’ to be justified deductively in believing that ‘p’ is a necessary proposition, it must be necessary that ‘p’ is a necessary proposition. But the status of iterated modal propositions is controversial. Finally, (1) and (2) both preclude by stipulation the position advanced by Kripke (1980) and Kitcher (1980) that there is deductive knowledge of contingent propositions.
The concept of rational unrevisability has also been invoked to characterize deductive justification, though its precise sense has been presented in different ways. Putnam (1983) takes rational unrevisability to be both a necessary and a sufficient condition for deductive justification, while Kitcher (1980) takes it to be only a necessary condition. Two different senses of rational unrevisability have been associated with the deductive: (I) a proposition is weakly unrevisable just in case it is rationally unrevisable in light of any future ‘experiential’ evidence, and (II) a proposition is strongly unrevisable just in case it is rationally unrevisable in light of any future evidence. Let us consider the plausibility of requiring either form of rational unrevisability as a necessary condition for deductive justification. The view that a proposition is justified deductively only if it is strongly unrevisable entails that if a non-experiential source of justified beliefs is fallible but self-correcting, it is not a deductive source of justification. Casullo (1988) has argued that it is implausible to maintain that a proposition that is justified non-experientially is not justified deductively merely because it is revisable in light of further non-experiential evidence. The view that a proposition is justified deductively only if it is weakly unrevisable is not open to this objection, since it excludes only revision in light of experiential evidence. It does, however, face a different problem. To maintain that S’s justified belief that ‘p’ is justified deductively is to make a claim about the type of evidence that justifies ‘S’ in believing that ‘p’.
On the other hand, to maintain that S’s justified belief that ‘p’ is rationally revisable in light of experiential evidence is to make a claim about the type of evidence that can defeat S’s justification for believing that ‘p’, not a claim about the type of evidence that justifies ‘S’ in believing that ‘p’. Hence, it has been argued by Edidin (1984) and Casullo (1988) that to hold that a belief is justified deductively only if it is weakly unrevisable is either to confuse supporting evidence with defeating evidence or to endorse some implausible thesis about the relationship between the two, such that if evidence of kind ‘A’ can defeat the justification conferred on S’s belief that ‘p’ by evidence of kind ‘B’, then S’s justification for believing that ‘p’ is based on evidence of kind ‘A’.
The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by Frege, was developed in a distinctive way by the early Wittgenstein, and is a leading idea of Donald Herbert Davidson (1917-), who is also known for his rejection of the idea of a conceptual scheme, thought of as something peculiar to one language or one way of looking at the world, arguing that where the possibility of translation stops, so does the coherence of the idea that there is anything to translate. His papers are collected in “Essays on Actions and Events” (1980) and “Inquiries into Truth and Interpretation” (1983). The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it.
Wittgenstein’s main achievement is a uniform theory of language that yields an explanation of logical truth. A factual sentence achieves sense by dividing the possibilities exhaustively into two groups, those that would make it true and those that would make it false. A truth of logic does not divide the possibilities but comes out true in all of them. It therefore lacks sense and says nothing, but it is not nonsense. It is a self-cancellation of sense, necessarily true because it is a tautology, the limiting case of factual discourse, like the figure ‘0’ in mathematics. Language takes many forms, and even factual discourse does not consist entirely of sentences like ‘The fork is placed to the left of the knife’. However, the first thing that he gave up was the idea that this sentence itself needed further analysis into basic sentences mentioning simple objects with no internal structure. He came to concede that a descriptive word will often get its meaning partly from its place in a system, and he applied this idea to colour-words, arguing that the essential relations between different colours do not indicate that each colour has an internal structure that needs to be taken apart. On the contrary, analysis of our colour-words would only reveal the same pattern, ranges of incompatible properties, recurring at every level, because that is how we carve up the world.
Indeed, it may even be the case that the logic of our ordinary language is created by moves that we ourselves make. If so, the philosophy of language will lead into the connection between the meaning of a word and the applications of it that its users intend to make. There is also an obvious need for people to understand the meanings of each other’s words. There are many links between the philosophy of language and the philosophy of mind, and it is not surprising that the impersonal examination of language in the “Tractatus” was replaced by a very different, anthropocentric treatment in the “Philosophical Investigations”.
If the logic of our language is created by moves that we ourselves make, various kinds of realism are threatened. First, the way in which our descriptive language carves up the world will not be forced on us by the natures of things, and the rules for the application of our words, which feel like external constraints, will really come from within us. That is a concession to nominalism that is, perhaps, readily made. The idea that logical and mathematical necessity is also generated by what we ourselves do is more paradoxical. Yet that is the conclusion of Wittgenstein (1956) and (1976), and here his anthropocentrism has carried less conviction. However, a paradox is not a sure sign of error, and it is possible that what is needed here is a more sophisticated concept of objectivity than Platonism provides.
In his later work Wittgenstein brings the great problems of philosophy down to earth and traces them to very ordinary origins. His examination of the concept of ‘following a rule’ takes him back to a fundamental question about counting things and sorting them into types: ‘What qualifies as doing the same again?’ Of course, one may regard this question as inconsequential and suggest that we forget it and get on with the subject. But Wittgenstein’s question is not so easily dismissed. It has the naive profundity of questions that children ask when they are first taught a new subject. Such questions remain unanswered without detriment to their learning, but they point the only way to complete understanding of what is learned.
It is a truism that the meaning of a complex expression is a function of the meanings of its constituents; indeed, this is just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms (proper names, indexicals, and certain pronouns) this is done by stating the reference of the term in question.
The truth condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of ‘snow is white’ is that snow is white; the truth condition of ‘Britain would have capitulated had Hitler invaded’ is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.
On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms (proper names, indexicals, and certain pronouns) this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which an arbitrary atomic sentence containing it is true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates. For an extremely simple, but nonetheless structured, language, we can state the contributions various expressions make to truth conditions as follows:
A1: The referent of ‘London’ is London.
A2: The referent of ‘Paris’ is Paris.
A3: Any sentence of the form ‘a is beautiful’ is true if and only if the referent of ‘a’ is beautiful.
A4: Any sentence of the form ‘a is larger than b’ is true if and only if the referent of ‘a’ is larger than the referent of ‘b’.
A5: Any sentence of the form ‘It is not the case that A’ is true if and only if it is not the case that ‘A’ is true.
A6: Any sentence of the form ‘A and B’ is true if and only if ‘A’ is true and ‘B’ is true.
The principles A1-A6 form a simple theory of truth for a fragment of English. Within this theory, it is possible to derive these consequences: that ‘Paris is beautiful’ is true if and only if Paris is beautiful (from A2 and A3); that ‘London is larger than Paris and it is not the case that London is beautiful’ is true if and only if London is larger than Paris and it is not the case that London is beautiful (from A1-A6); and, in general, for any sentence ‘A’ of this simple language, we can derive something of the form ‘ ‘A’ is true if and only if A’.
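The recursive character of such a truth theory can be made vivid with a small sketch. The following is a minimal illustration, not anything drawn from the truth-conditional literature itself; the sizes are rough figures and the ‘beauty’ entries are arbitrary stipulations for the example:

```python
# A minimal, illustrative sketch of how axioms like A1-A6 can be read as a
# recursive definition of truth for a toy fragment of English. The figures
# and 'beauty' entries below are stipulations for the example, not claims.

# Reference facts (cf. A1, A2): each name is paired with its referent's features.
SIZE = {"London": 1572, "Paris": 105}          # rough areas in square km
BEAUTIFUL = {"London": False, "Paris": True}   # arbitrary stipulation

def is_true(sentence):
    """Recursively evaluate a sentence given as a nested tuple."""
    op = sentence[0]
    if op == "beautiful":                      # cf. A3
        return BEAUTIFUL[sentence[1]]
    if op == "larger":                         # cf. A4
        return SIZE[sentence[1]] > SIZE[sentence[2]]
    if op == "not":                            # cf. A5
        return not is_true(sentence[1])
    if op == "and":                            # cf. A6
        return is_true(sentence[1]) and is_true(sentence[2])
    raise ValueError("unknown operator: %s" % op)

# 'London is larger than Paris and it is not the case that London is beautiful'
s = ("and", ("larger", "London", "Paris"), ("not", ("beautiful", "London")))
print(is_true(s))  # prints True under the stipulated facts
```

Each clause mirrors one axiom: the base clauses consult the reference facts, while the clauses for ‘not’ and ‘and’ compute the truth value of a complex sentence from the truth values of the sentences on which the operators operate.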
The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. The axiom: ‘London’ refers to the city in which there was a huge fire in 1666, is a true statement about the reference of ‘London’. It is a consequence of a theory that substitutes this axiom for A1 in our simple truth theory that ‘London is beautiful’ is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name ‘London’ without knowing that last-mentioned truth condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state the constraints on the acceptability of axioms in a way that does not presuppose a prior, non-truth-conditional conception of meaning.
Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity. Second, the theorist must offer an account of what it is for a person’s language to be truly described by a semantic theory containing a given semantic axiom.
We can take the charge of triviality first. In more detail, it would run thus: since the content of the claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than a grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory that, in a somewhat more discriminating formulation, Horwich calls the minimal theory of truth, or the deflationary view of truth, fathered by Frege and Ramsey. The essential claim is that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on two points: (1) that ‘it is true that p’ says no more nor less than ‘p’ (hence the redundancy); and (2) that in less direct contexts, such as ‘everything he said was true’ or ‘all logical consequences of true propositions are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said or the kinds of propositions that follow from true propositions. For example, the second claim may translate as ‘(∀p, q)((p & (p ➝ q)) ➝ q)’, where there is no use of a notion of truth.
There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’ or ‘truth is a norm governing discourse’. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that ‘p’, then ‘p’; discourse is to be regulated by the principle that it is wrong to assert ‘p’ when not-p.
The disquotational theory of truth finds its simplest formulation in the claim that expressions of the form ‘ ‘S’ is true’ mean the same as expressions of the form ‘S’. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, then the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ‘ ‘Dogs bark’ is true’ or whether they say that dogs bark. In the former representation of what they say, the sentence ‘Dogs bark’ is mentioned, but in the latter it appears to be used, so the claim that the two are equivalent needs careful formulation and defence. On the face of it, someone might know that ‘Dogs bark’ is true without knowing what it means (for instance, if he found it in a list of acknowledged truths, although he does not understand English), and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the redundancy theory of truth.
The minimal theory states that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition ‘p’, it is true that ‘p’ if and only if ‘p’. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning (Davidson, 1990, Dummett, 1959, and Horwich, 1990). If the claim that the sentence ‘Paris is beautiful’ is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by Ramsey, Ayer, the later Wittgenstein, Quine, Strawson, Horwich and (confusingly and inconsistently, if the theory is correct) Frege himself. But is the minimal theory correct?
The minimal or redundancy theory treats instances of the equivalence principle as definitional of truth for a given sentence. But in fact, it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as:
‘London is beautiful’ is true if and only if London is beautiful
can be explained are precisely A1 and A3 above. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in the fact that ‘London is beautiful’ has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’. The idea that facts about the reference of particular words can be explanatory of facts about the truth conditions of sentences containing them in no way requires any naturalistic or any other kind of reduction of the notion of reference. Nor is the idea incompatible with the plausible point that singular reference can be attributed at all only to something that is capable of combining with other expressions to form complete sentences. That still leaves room for facts about an expression’s having the particular reference it does to be partially explanatory of the particular truth condition possessed by a given sentence containing it. The minimal theory thus treats as definitional or stipulative something that is in fact open to explanation. What makes this explanation possible is that there is a general notion of truth that has, among the many links that hold it in place, systematic connections with the semantic values of sub-sentential expressions.
A second problem with the minimal theory is that it seems impossible to formulate it without at some point relying implicitly on features and principles involving truth that go beyond anything countenanced by the minimal theory. If the minimal theory treats truth as a predicate of anything linguistic, be it utterances, types-in-a-language, or whatever, then the equivalence schema will not cover all cases, but only those in the theorist’s own language. Some account has to be given of truth for sentences of other languages. Speaking of the truth of language-independent propositions or thoughts will only postpone, not avoid, this issue, since at some point principles have to be stated associating these language-independent entities with sentences of particular languages. The defender of the minimalist theory is likely to say that if a sentence ‘S’ of a foreign language is best translated by our sentence ‘p’, then the foreign sentence ‘S’ is true if and only if ‘p’. Now the best translation of a sentence must preserve the concepts expressed in the sentence. But constraints involving a general notion of truth are pervasive in any plausible philosophical theory of concepts. It is, for example, a condition of adequacy on an individuating account of any concept that there exist what is called a ‘Determination Theory’ for that account, that is, a specification of how the account contributes to fixing the semantic value of that concept; the notion of a concept’s semantic value is the notion of something that makes a certain contribution to the truth conditions of thoughts in which the concept occurs. But this is to presuppose, rather than to elucidate, a general notion of truth.
It is also plausible that there are general constraints on the form of such Determination Theories, constraints that involve truth and which are not derivable from the minimalist's conception. Suppose that concepts are individuated by their possession conditions. A concept is something that is capable of being a constituent of the content of a mental state: a way of thinking of something, whether a particular object, or property, or relation, or another entity. A possession condition may in various ways make a thinker's possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker's perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject's environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued from intuitions about particular examples that, even though the thinker's non-environmental properties and relations remain constant, the conceptual content of his mental state can vary if the thinker's social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker's social relations, in particular his linguistic relations.
One such plausible general constraint is the requirement that when a thinker forms beliefs involving a concept in accordance with its possession condition, a semantic value is assigned to the concept in such a way that the belief is true. Some general principles involving truth can indeed, as Horwich has emphasized, be derived from the equivalence schema using minimal logical apparatus. Consider, for instance, the principle that 'Paris is beautiful and London is beautiful' is true if and only if 'Paris is beautiful' is true and 'London is beautiful' is true. This follows logically from three instances of the equivalence principle: 'Paris is beautiful and London is beautiful' is true if and only if Paris is beautiful and London is beautiful; 'Paris is beautiful' is true if and only if Paris is beautiful; and 'London is beautiful' is true if and only if London is beautiful. But no logical manipulations of the equivalence schema will allow the derivation of that general constraint governing possession conditions, truth and the assignment of semantic values. That constraint can of course be regarded as a further elaboration of the idea that truth is one of the aims of judgement.
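The derivation just described can be set out schematically. This is our own reconstruction, not notation used in the text: write T(x) for 'x is true' and corner quotes to form the name of a sentence.

```latex
\begin{array}{ll}
(1)\ T(\ulcorner P \text{ and } Q \urcorner) \leftrightarrow (P \wedge Q)
  & \text{equivalence schema} \\
(2)\ T(\ulcorner P \urcorner) \leftrightarrow P
  & \text{equivalence schema} \\
(3)\ T(\ulcorner Q \urcorner) \leftrightarrow Q
  & \text{equivalence schema} \\
(4)\ T(\ulcorner P \text{ and } Q \urcorner) \leftrightarrow
     \bigl(T(\ulcorner P \urcorner) \wedge T(\ulcorner Q \urcorner)\bigr)
  & \text{from (1)--(3) by substitution of biconditionals}
\end{array}
```

Step (4) is the compositional-looking principle; the point in the text is that it needs nothing beyond instances of the schema and elementary logic, whereas the constraint on possession conditions does.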
We now turn to the other question: 'What is it for a person's language to be correctly describable by a semantic theory containing a particular axiom, such as the axiom A6 above for conjunction?' This question may be addressed at two depths of generality. At the shallower level, the question may take for granted the person's possession of the concept of conjunction, and be concerned with what has to be true for the axiom correctly to describe his language. At a deeper level, an answer should not duck the issue of what it is to possess the concept. The answers to both questions are of great interest; we will take the shallower level of generality first.
When a person means conjunction by 'and', he is not necessarily capable of formulating the axiom A6 explicitly. Even if he can formulate it, his ability to formulate it is not the causal basis of his capacity to hear sentences containing the word 'and' as meaning something involving conjunction. Nor is it the causal basis of his capacity to mean something involving conjunction by sentences he utters containing the word 'and'. Is it then right to regard a truth theory as part of an unconscious psychological computation, and to regard understanding a sentence as involving a particular way of deriving a theorem from a truth theory at some level of unconscious processing? One problem with this is that it is quite implausible that everyone who speaks the same language has to use the same algorithms for computing the meaning of a sentence. In the past thirteen years, thanks particularly to the work of Davies and Evans, a conception has evolved according to which an axiom like A6 is true of a person's language only if there is a common component in the explanation of his understanding of each sentence containing the word 'and', a common component that explains why each such sentence is understood as meaning something involving conjunction (Davies, 1987). This conception can also be elaborated in computational terms: for an axiom like A6 to be true of a person's language is for the unconscious mechanisms which produce understanding to draw on the information that a sentence of the form 'A and B' is true if and only if 'A' is true and 'B' is true (Peacocke, 1986). Many different algorithms may equally draw on this information. The psychological reality of a semantic theory thus involves, in Marr's (1982) famous classification, something intermediate between his level one, the function computed, and his level two, the algorithm by which it is computed.
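The Davies/Evans idea can be pictured with a toy sketch of our own devising (the mini-language, its atomic sentences and the function names are all assumptions for illustration, not anything from the literature): however a parsing algorithm traverses a sentence, one shared clause embodying the information in A6 is the common component in the evaluation of every conjunction.

```python
# Toy truth theory for a hypothetical two-atom language. The single
# 'and' clause below plays the role of axiom A6: it is the one common
# component drawn on in understanding every conjunctive sentence.

ATOMS = {
    "Paris is beautiful": True,   # stipulated values for illustration
    "London is ugly": False,
}

def true_in(sentence: str) -> bool:
    """Evaluate a sentence of the toy language.

    The conjunction clause mirrors A6: 'A and B' is true iff
    'A' is true and 'B' is true.
    """
    if " and " in sentence:
        left, _, right = sentence.partition(" and ")
        return true_in(left) and true_in(right)  # the shared A6 clause
    return ATOMS[sentence]

print(true_in("Paris is beautiful and London is ugly"))  # False
```

Different algorithms (left-to-right parsing, right-to-left, parallel) could all draw on the same clause; the theory is committed only to the information used, not to the procedure, which is the intermediate Marr level the text describes.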
This conception of the psychological reality of a semantic theory can also be applied to syntactic and phonological theories. Theories in semantics, syntax and phonology are not themselves required to specify the particular algorithms that the language user employs. The identification of the particular computational methods employed is a task for psychology. But semantic, syntactic and phonological theories are answerable to psychological data, and are potentially refutable by them, for these linguistic theories do make commitments to the information drawn upon by mechanisms in the language user.
This answer to the question of what it is for an axiom to be true of a person's language clearly takes for granted the person's possession of the concept expressed by the word treated by the axiom. In the example of the axiom A6, the information drawn upon is that sentences of the form 'A and B' are true if and only if 'A' is true and 'B' is true. This informational content employs, as it has to if it is to be adequate, the concept of conjunction used in stating the meaning of sentences containing 'and'. So the computational answer we have returned needs further elaboration if we are to address the deeper question, which does not want to take for granted possession of the concepts expressed in the language. It is at this point that the theory of linguistic understanding has to draw upon a theory of concepts. It is plausible that the concept of conjunction is individuated by the condition a thinker must satisfy in order to possess it.
Finally, this response to the deeper question allows us to answer two challenges to the conception of meaning as truth-conditions. First, there was the question left hanging earlier, of how the theorist of truth-conditions is to say what makes one axiom of a semantic theory correct rather than another, when the two axioms assign the same semantic values but do so by means of different concepts. Since the different concepts will have different possession conditions, the dovetailing accounts, at the deeper level, of what it is for each axiom to be correct for a person's language will be different accounts. Second, there is a challenge repeatedly made by the minimalist theorists of truth, to the effect that the theorist of meaning as truth-conditions should give some non-circular account of what it is to understand a sentence, or to be capable of understanding all sentences containing a given constituent. For each expression in a sentence, the corresponding dovetailing account, together with the possession condition, supplies a non-circular account of what it is to understand any sentence containing that expression. The combined accounts for each of the expressions that comprise a given sentence together constitute a non-circular account of what it is to understand the complete sentence. Taken together, they allow the theorist of meaning as truth-conditions fully to meet the challenge.
The content of an utterance or sentence is that which it expresses: the proposition or claim made about the world. By extension, the content of a predicate or other sub-sentential component is what it contributes to the content of sentences that contain it. The nature of content is the central concern of the philosophy of language, and mental states likewise have contents: a belief may have the content that the prime minister will resign. A concept is something that is capable of being a constituent of such contents. More specifically, a concept is a way of thinking of something: a particular object, or property, or relation, or another entity. Such a distinction was held in Frege's philosophy of language, explored in 'On Concept and Object' (1892). Frege regarded predicates as incomplete expressions, in the same way as a mathematical expression for a function, such as sin(...) or log(...), is incomplete. Predicates refer to concepts, which themselves are 'unsaturated', and cannot be referred to by subject expressions (we thus get the paradox that the concept of a horse is not a concept). Although Frege recognized the metaphorical nature of the notion of a concept being unsaturated, he was rightly convinced that some such notion is needed to explain the unity of a sentence, and to prevent sentences from being thought of as mere lists of names.
Several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of Mary Smith, or as the person located in a certain room now. More generally, a concept 'c' is distinct from a concept 'd' if it is possible for a person rationally to believe 'c is such-and-such' without believing 'd is such-and-such'. As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by 'that ...' clauses, as in our opening examples, they will be capable of being true or false, depending on the way the world is.
The general system of concepts with which we organize our thoughts and perceptions constitutes our conceptual scheme. Among its outstanding elements are spatial and temporal relations between events and enduring objects, causal relations, other persons, meaning-bearing utterances of others, and so on. To see the world as containing such things is to share this much of our conceptual scheme. A controversial argument of Davidson's urges that we would be unable to interpret speech from a different conceptual scheme as even meaningful. Davidson daringly goes on to argue that, since translation proceeds according to a principle of charity, and since it must be possible for an omniscient translator to make sense of us, we can be assured that most of the beliefs formed within the commonsense conceptual framework are true.
Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money. None the less, we can come to learn that Anthony Blunt, art historian and Surveyor of the Queen's Pictures, was a spy; we can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with the concept. Similarly, a person's conception of a just arrangement for resolving disputes may involve something like contemporary Western legal systems. But whether or not it would be correct to do so, it is quite intelligible for someone to reject this conception by arguing that it does not adequately provide for the elements of fairness and respect that are required by the concept of justice.
Basically, a concept is that which is understood by a term, particularly a predicate. To possess a concept is to be able to deploy a term expressing it in making judgements: the ability connects with such things as recognizing when the term applies, and being able to understand the consequences of its application. The term 'idea' was formerly used in the same way, but it is avoided because of its associations with subjective mental imagery, which may be irrelevant to the possession of a concept. In the semantics of Frege, a concept is the reference of a predicate, and cannot be referred to by a subject expression; yet some such notion is needed to explain the unity of a sentence, and to prevent a sentence from being thought of as a mere list of names.
A theory of a particular concept must be distinguished from a theory of the object or objects it picks out. The theory of the concept is part of the theory of thought and epistemology; a theory of the object or objects is part of metaphysics and ontology. Some figures in the history of philosophy are open to the accusation of not having fully respected the distinction between the two kinds of theory. Descartes appears to have moved from facts about the indubitability of the thought 'I think', containing the first-person way of thinking, to conclusions about the non-material nature of the object he himself was. But though the goals of a theory of concepts and a theory of objects are distinct, each theory is required to have an adequate account of its relation to the other. A theory of a concept is unacceptable if it gives no account of how the concept is capable of picking out the object it evidently does pick out. A theory of objects is unacceptable if it makes it impossible to understand how we could have concepts of those objects.
A fundamental question for philosophy is: what individuates a given concept, that is, what makes it the one it is rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question (Schiffer, 1987). An alternative approach addresses the question by starting from the idea that a concept is individuated by the condition that must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose content contains it as a constituent. So, to take a simple case, one could propose that the logical concept 'and' is individuated by this condition: it is the unique concept 'C' such that, to possess it, a thinker has to find these forms of inference compelling: from any premisses 'A' and 'B', 'ACB' can be inferred; and from any premiss 'ACB', each of 'A' and 'B' can be inferred. Again, a relatively observational concept such as 'round' can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept which are not based on perception to those judgements that are. A statement that individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.
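The compelling forms of inference mentioned for the candidate conjunction concept 'C' can be displayed in standard natural-deduction notation (our rendering, with premisses above the line and conclusion below):

```latex
\frac{A \qquad B}{A\,C\,B}\ \ (\text{introduction})
\qquad
\frac{A\,C\,B}{A}\ \ (\text{elimination, left})
\qquad
\frac{A\,C\,B}{B}\ \ (\text{elimination, right})
```

On the possession-condition approach, finding exactly these transitions compelling, without basing them on anything further, is what makes a thinker's concept the concept of conjunction.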
A possession condition for a particular concept may actually make use of that concept; the possession condition for 'and' does so. We can also expect to use relatively observational concepts in specifying the kinds of experience that have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account that was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker's mastery of a concept is a matter of his finding it natural to go on in new cases in applying the concept.
Sometimes a family of concepts has this property: it is not possible to master any one member of the family without mastering the others. Two families that plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, ... of the natural numbers and the corresponding concepts of the numerical quantifiers (there are 0 so-and-sos, there is 1 so-and-so, ...); and the family consisting of the concepts 'belief' and 'desire'. Such families have come to be known as local holisms. A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such condition involving the thinker, C1 and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated: the possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.
Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a correctness condition for that judgement, a condition that is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker's reasons for making judgements. A thinker's visual perception can give him good reason for judging 'That man is bald'; it does not by itself give him good reason for judging 'Rostropovich is bald', even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for the concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object (or property, or function, ...) which makes the practices of judgement and inference mentioned in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker's previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue. It would allow us to say how the correctness condition is determined for a judgement in which the concept is applied to newly encountered objects. The judgement is correct if the new object has the property that in fact makes the judgemental practices mentioned in the possession condition yield true judgements, or truth-preserving inferences.
The distinction between necessary and contingent truths is associated with Leibniz, who declares that there are only two kinds of truths: truths of reason and truths of fact. The former are all either explicit identities, i.e., of the form 'A is A', 'AB is B', etc., or they are reducible to this form by successively substituting equivalent terms. Leibniz dubs them 'truths of reason' because the explicit identities are self-evident a priori truths, whereas the rest can be converted to such by purely rational operations. Because their denial involves a demonstrable contradiction, Leibniz also says that truths of reason 'rest on the principle of contradiction, or identity', and that they are necessary propositions, which are true of all possible worlds. Some examples are 'All equilateral rectangles are rectangles' and 'All bachelors are unmarried': the first is already of the form 'AB is B', and the latter can be reduced to this form by substituting 'unmarried man' for 'bachelor'. Other examples, or so Leibniz believes, are 'God exists' and the truths of logic, arithmetic and geometry.
Truths of fact, on the other hand, cannot be reduced to an identity, and our only way of knowing them is empirical, by reference to the facts of the world. Likewise, since their denial does not involve a contradiction, their truth is merely contingent: they could have been otherwise, and hold of the actual world but not of every possible one. Some examples are 'Caesar crossed the Rubicon' and 'Leibniz was born in Leipzig', as well as propositions expressing correct scientific generalizations. In Leibniz's view, truths of fact rest on the principle of sufficient reason, which states that nothing can be so unless there is a reason why it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God.
In defending the principle of sufficient reason, Leibniz runs into serious problems. He believes that in every true proposition, the concept of the predicate is contained in that of the subject. (This holds even for propositions like 'Caesar crossed the Rubicon': Leibniz thinks anyone who did not cross the Rubicon would not have been Caesar.) And this containment relationship, which is eternal and unalterable even by God, guarantees that every truth has a sufficient reason. If truth consists in concept containment, however, then it seems that all truths are analytic and hence necessary; and if they are all necessary, surely they are all truths of reason. Leibniz responds that not every truth can be reduced to an identity in a finite number of steps; in some instances revealing the connection between subject and predicate concepts would require an infinite analysis. But while this may entail that we cannot prove such propositions, it does not appear to show that the propositions could have been false. Intuitively, it seems better ground for supposing that they are necessary truths of a special sort. A related question arises from the idea that truths of fact depend on God's decision to create the best of all possible worlds: if it is part of the concept of this world that it is best, how could its existence be other than necessary? Leibniz answers that its existence is only hypothetically necessary, i.e., it follows from God's decision to create this world, but God had the power to decide otherwise. Yet God is necessarily good and non-deceiving, so how could he have decided to do anything else? Leibniz says much more about these matters, but it is not clear whether he offers any satisfactory solutions.
Finally, Kripke (1972) and Plantinga (1974) argue that some contingent truths are knowable a priori. Similar problems face the suggestion that necessary truths are the ones we know with the greatest certainty: we lack a criterion for certainty, there are necessary truths we do not know, and (barring dubious arguments for scepticism) it is reasonable to suppose that we know some contingent truths with certainty.
Issues surrounding certainty are inextricably connected with those concerning scepticism. For many sceptics have traditionally held that knowledge requires certainty, and, of course, that certain knowledge is not possible. In part in order to avoid scepticism, the anti-sceptics have generally held that knowledge does not require certainty (Lehrer, 1974; Dewey, 1960). A few anti-sceptics have held that knowledge does require certainty but, against the sceptic, that certainty is possible. The task is to provide a characterization of certainty which would be acceptable to both sceptics and anti-sceptics, for such an agreement is a precondition of an interesting debate between them.
It seems clear that certainty is a property that can be ascribed to either a person or a belief. We can say that a person, 'S', is certain, or we can say that a proposition, 'p', is certain. The two uses can be connected by saying that 'S' has the right to be certain just in case 'p' is sufficiently warranted (Ayer, 1956). Following this lead, most philosophers have taken the second sense, the sense in which a proposition is said to be certain, as the important one to be investigated by epistemology. An exception is Unger, who defends scepticism by arguing that psychological certainty is not possible (Unger, 1975).
In defining certainty, it is crucial to note that the term has both an absolute and a relative sense. Very roughly, one can say that a proposition is absolutely certain just in case there is no proposition more warranted than it (Chisholm, 1977). But we also commonly say that one proposition is more certain than another, implying that the second one, though less certain, is still certain.
Now some philosophers have argued that the absolute sense is the only sense, and that the relative sense is only apparent. Whether or not those arguments are convincing, what remains clear is that there is an absolute sense, and it is that sense which is crucial to the issues surrounding scepticism.
Let us suppose, then, that the interesting question is this: what makes a belief or proposition absolutely certain?
There are several ways of approaching an answer to that question. Some, like Russell, will take a belief to be certain just in case there is no logical possibility that the belief is false (Russell, 1922). On this definition, propositions about physical objects (objects occupying space) cannot be certain. However, that characterization of certainty should be rejected precisely because it makes the question of the existence of absolutely certain empirical propositions uninteresting. For it concedes to the sceptic the impossibility of certainty about physical objects too easily; thus, this approach would not be acceptable to the anti-sceptic.
Other philosophers have suggested that it is the role a belief plays within our set of actual beliefs that makes it certain. For example, Wittgenstein has suggested that a belief is certain just in case it can be appealed to in order to justify other beliefs while standing in no need of justification itself. Thus, the question whether there are beliefs that are certain can be answered by merely inspecting our practices to determine whether there are beliefs which play the specified role. This approach would not be acceptable to the sceptic, for it, too, makes the question of the existence of absolutely certain beliefs uninteresting. The issue is not whether there are beliefs which play such a role, but whether there are any beliefs which should play that role. Perhaps our practices cannot be defended.
Return to the rough characterization of absolute certainty given above: a belief 'p' is certain just in case there is no belief which is more warranted than 'p'. Although it does delineate a necessary condition of absolute certainty, and is preferable to the Wittgensteinian approach, it does not capture the full sense of 'absolute certainty'. The sceptic would argue that it is not strong enough. For, according to this rough characterization, a belief could be absolutely certain and yet there could be good grounds for doubting it, just as long as there were equally good grounds for doubting every proposition that was equally warranted. In addition, to say that a belief is certain is to say, in part, that we have a guarantee of its truth; there is no such guarantee provided by this rough characterization.
A Cartesian characterization of certainty seems more promising. Roughly, the approach is that a proposition 'p' is certain for 'S' just in case 'S' is warranted in believing that 'p' and there are absolutely no grounds whatsoever for doubting it. One could characterize those grounds in a variety of ways. For example, a ground 'g' could make 'p' doubtful for 'S' when (a) 'S' is not warranted in denying 'g' and:
(b1) if 'g' is added to 'S's' beliefs, the negation of 'p' is warranted; or
(b2) if 'g' is added to 'S's' beliefs, 'p' is no longer warranted; or
(b3) if 'g' is added to 'S's' beliefs, 'p' becomes less warranted (even if only slightly so).
Although there is a guarantee of sorts of 'p's' truth contained in (b1) and (b2), those notions of grounds for doubt do not seem to capture the basic feature of absolute certainty delineated in the rough account given above. For a proposition 'p' could be immune to grounds for doubt 'g' in those two senses, and yet another proposition could be more certain if there were no grounds for doubt of the kind specified in (b3). So only (b3) can succeed in providing part of the required guarantee of 'p's' truth.
An account like that contained in (b3) can provide only part of the guarantee, because it is only a subjective guarantee of 'p's' truth: 'S's' belief system would contain adequate grounds for assuring 'S' that 'p' is true, because 'S's' belief system would warrant the denial of every proposition that would lower the warrant of 'p'. But 'S's' belief system might contain false beliefs and still be immune to doubt in this sense. Indeed, 'p' itself could be certain and false in this subjective sense.
An objective guarantee is needed as well. We can capture such objective immunity to doubt by requiring, roughly, that there be no true proposition such that, if it is added to 'S's' beliefs, the result is a reduction in the warrant for 'p' (even if only slightly). Note that there will be true propositions which, if added to 'S's' beliefs, lower the warrant of 'p' only because they render evident some false proposition which actually reduces the warrant of 'p'; it is debatable whether such misleading defeaters provide genuine grounds for doubt. Thus, we can say that a belief that 'p' is absolutely certain just in case it is subjectively and objectively immune to doubt. In other words, a proposition 'p' is absolutely certain for 'S' if and only if (1) 'p' is warranted for 'S', and (2) 'S' is warranted in denying every proposition 'g' such that, if 'g' is added to 'S's' beliefs, the warrant for 'p' is reduced, and (3) there is no true proposition, 'd', such that, if 'd' is added to 'S's' beliefs, the warrant for 'p' is reduced.
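The three clauses can be compressed into a single biconditional. The notation here is ours, not the text's: write W(p) for the warrant 'p' has for 'S', and W(p | x) for the warrant 'p' would have were x added to 'S's' beliefs.

```latex
\begin{array}{l}
p \text{ is absolutely certain for } S \iff \\
\quad (1)\ p \text{ is warranted for } S; \ \text{and} \\
\quad (2)\ \forall g \, \bigl[\, W(p \mid g) < W(p) \ \rightarrow \
      S \text{ is warranted in denying } g \,\bigr]; \ \text{and} \\
\quad (3)\ \neg \exists d \, \bigl[\, d \text{ is true} \ \wedge \
      W(p \mid d) < W(p) \,\bigr].
\end{array}
\]
```

Clause (2) is the subjective immunity to doubt and clause (3) the objective immunity; together they supply the two-sided guarantee of truth the rough characterization lacked.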
This is an account of absolute certainty which captures what is demanded by the sceptic: a belief that is indubitable and guaranteed both subjectively and objectively to be true. In addition, such a characterization of certainty does not automatically lead to scepticism. Thus, this is an account of certainty that satisfies the task at hand, namely to provide the precondition for an interesting debate between the sceptic and the anti-sceptic.
Leibniz defined a necessary truth as one whose opposite implies a contradiction. Every such proposition, he held, is either an explicit identity, i.e., of the form ‘A is A’, ‘AB is B’, etc., or is reducible to an identity by successively substituting equivalent terms. (Thus, 3 above might be so reduced by substituting ‘unmarried man’ for ‘bachelor’.) This account has several advantages over the ideas of the previous paragraph. First, it explicates the notions of necessity and possibility and seems to provide a criterion we can apply. Second, because explicit identities are self-evident propositions, the theory implies that all necessary truths are knowable deductively, but it does not entail that we actually know all of them, nor does it define ‘knowable’ in a circular way. Third, it implies that necessary truths are knowable with certainty, but does not preclude our having certain knowledge of contingent truths by means other than a reduction.
Leibniz and others have thought of truth as a property of propositions, where the latter are conceived as things that may be expressed by, but are distinct from, linguistic items like statements. On another approach, truth is a property of linguistic entities, and the basis of necessary truth lies in convention. Thus A.J. Ayer, for example, argued that the only necessary truths are analytic statements, and that the latter rest entirely on our commitment to use words in certain ways.
The slogan ‘the meaning of a statement is its method of verification’ expresses the verification theory of meaning. It is more than the general criterion of meaningfulness, on which a sentence is meaningful if and only if it is empirically verifiable; it says in addition what the meaning of a sentence is: all those observations that would confirm or disconfirm the sentence. Sentences that would be verified or falsified by all the same observations are empirically equivalent, or have the same meaning. A sentence is said to be cognitively meaningful if and only if it can be verified or falsified in experience. This is not meant to require that the sentence be conclusively verified or falsified, since universal scientific laws or hypotheses (which are supposed to pass the test) are not logically deducible from any amount of actually observed evidence.
When one predicates necessary truth of a proposition, one speaks of modality de dicto, for one ascribes the modal property, necessary truth, to a dictum, namely, whatever proposition is taken as necessary. A venerable tradition, however, distinguishes this from necessity de re, wherein one predicates necessary or essential possession of some property of an object. For example, the statement ‘4 is necessarily greater than 2’ might be used to predicate of the object, 4, the property of being necessarily greater than 2. That objects have some of their properties necessarily, or essentially, and others only contingently, or accidentally, is a main part of the doctrine called ‘essentialism’. Thus, an essentialist might say that Socrates had the property of being bald accidentally, but that of being self-identical, or perhaps of being human, essentially. Although essentialism has been vigorously attacked in recent years, most particularly by Quine, it also has able contemporary proponents, such as Plantinga.
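The scope contrast in the example can be made explicit with the necessity operator ‘□’; the lambda notation used here is our gloss on the distinction, not part of the text:

```latex
\text{De dicto:}\quad \Box\,(4 > 2)
  \qquad \text{(necessity ascribed to the dictum, i.e., the whole proposition)}\\
\text{De re:}\quad \big(\lambda x.\,\Box(x > 2)\big)(4)
  \qquad \text{(the object 4 is said to possess, necessarily, the property of exceeding 2)}
```

In the de dicto reading the modal operator governs the entire sentence; in the de re reading it occurs inside the predicate ascribed to the object.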
Many philosophers have traditionally held that every proposition has a modal status as well as a truth value: every proposition is either necessary or contingent, as well as either true or false. The issue of knowledge of the modal status of propositions has received much attention because of its intimate relationship to the issue of deductive reasoning. For example, some have claimed that all knowledge of necessary propositions is deductively attainable; others reject this claim by citing Kripke’s (1980) alleged cases of necessary propositions knowable only empirically. Such contentions are often inconclusive, for they fail to take into account the following tripartite distinction: ‘S’ knows the general modal status of ‘p’ just in case ‘S’ knows that ‘p’ is a necessary proposition or ‘S’ knows that ‘p’ is a contingent proposition; ‘S’ knows the truth value of ‘p’ just in case ‘S’ knows that ‘p’ is true or ‘S’ knows that ‘p’ is false; and ‘S’ knows the specific modal status of ‘p’ just in case ‘S’ knows that ‘p’ is necessarily true, or necessarily false, or contingently true, or contingently false. It does not follow from the fact that knowledge of the general modal status of a proposition is deductively attainable that knowledge of its specific modal status is so attainable as well; nor does it follow from the fact that knowledge of the specific modal status of a proposition is empirically given that knowledge of its general modal status is empirical too.
Truths of reason and truths of fact are distinguished by Leibniz, who declares that there are only two kinds of truths: truths of reason and truths of fact. The former are all either explicit identities, i.e., of the form ‘A is A’, ‘AB is B’, etc., or they are reducible to this form by successively substituting equivalent terms. Leibniz dubs them ‘truths of reason’ because the explicit identities are self-evident truths, whereas the rest can be converted to such by purely rational operations. Because their denial involves a demonstrable contradiction, Leibniz also says that truths of reason ‘rest on the principle of contradiction, or identity’ and that they are necessary propositions, which are true of all possible worlds. Some examples are ‘All equilateral rectangles are rectangles’ and ‘All bachelors are unmarried’: the first is already of the form ‘AB is B’, and the latter can be reduced to this form by substituting ‘unmarried man’ for ‘bachelor’. Other examples, or so Leibniz believes, are ‘God exists’ and the truths of logic, arithmetic and geometry.
Truths of fact, on the other hand, cannot be reduced to an identity, and our only way of knowing them is empirically, by reference to the facts of the world. Likewise, since their denial does not involve a contradiction, their truth is merely contingent: they could have been otherwise, and hold of the actual world, but not of every possible one. Some examples are ‘Caesar crossed the Rubicon’ and ‘Leibniz was born in Leipzig’, as well as propositions expressing correct scientific generalizations. In Leibniz’s view, truths of fact rest on the principle of sufficient reason, which states that nothing can be so unless there is a reason that it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God.
In defending the principle of sufficient reason, Leibniz runs into serious problems. He believes that in every true proposition, the concept of the predicate is contained in that of the subject. (This holds even for propositions like ‘Caesar crossed the Rubicon’: Leibniz thinks anyone who did not cross the Rubicon would not have been Caesar.) And this containment relationship, which is eternal and unalterable even by God, guarantees that every truth has a sufficient reason. If truth consists in concept containment, however, then it seems that all truths are analytic and hence necessary, and if they are all necessary, surely they are all truths of reason. Leibniz responds that not every truth can be reduced to an identity in a finite number of steps: in some instances revealing the connection between subject and predicate concepts would require an infinite analysis. But while this may entail that we cannot prove such propositions demonstratively, it does not appear to show that the propositions could have been false. Intuitively, it seems a better ground for supposing that each is a necessary truth of a special sort. A related question arises from the idea that truths of fact depend on God’s decision to create the best world: if it is part of the concept of this world that it is best, how could its existence be other than necessary? Leibniz answers that its existence is only hypothetically necessary, i.e., it follows from God’s decision to create this world; but God is necessarily good, so how could he have decided to do anything else? Leibniz says much more about these matters, but it is not clear whether he offers any satisfactory solutions.
The modality of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity and those true as things happen to be: necessary as opposed to contingent propositions. Other qualifiers sometimes called ‘modal’ include the tense indicators ‘it will be the case that p’ and ‘it was the case that p’, and there are affinities between the ‘deontic indicators’, such as ‘it ought to be the case that p’ or ‘it is permissible that p’, and the logical modalities, the notions of necessity and possibility. Modal logic was of great importance historically, particularly in the light of various doctrines concerning the necessary properties of the deity, but was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by C.I. Lewis, by adding to a propositional or predicate calculus two operators, □ and ◊ (sometimes written N and M), meaning necessarily and possibly, respectively. Theses like p ➞ ◊p and □p ➞ p will be wanted. Controversial theses include □p ➞ □□p (if a proposition is necessary, it is necessarily necessary, characteristic of the system known as S4) and ◊p ➞ □◊p (if a proposition is possible, it is necessarily possible, characteristic of the system known as S5). The classical model theory for modal logic, due to Kripke and the Swedish logician Stig Kanger, involves valuing propositions not as true or false ‘simpliciter’, but as true or false at possible worlds, with necessity then corresponding to truth in all worlds, and possibility to truth in some world.
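The theses just mentioned, and the Kripke-Kanger truth conditions, can be tabulated as follows. These are standard formulations; the accessibility relation ‘R’ is part of the usual presentation and is supplied here rather than taken from the text above:

```latex
\text{(T)}\quad \Box p \rightarrow p, \qquad p \rightarrow \Diamond p\\
\text{(S4)}\quad \Box p \rightarrow \Box\Box p\\
\text{(S5)}\quad \Diamond p \rightarrow \Box\Diamond p\\[4pt]
w \models \Box p \iff \text{for every world } w' \text{ with } w\,R\,w',\ w' \models p\\
w \models \Diamond p \iff \text{for some world } w' \text{ with } w\,R\,w',\ w' \models p
```

On this semantics, the controversial theses correspond to structural conditions on R: S4 to transitivity, S5 (together with T) to R being an equivalence relation.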
Modal realism, the doctrine advocated by David Lewis, holds that different possible worlds are to be thought of as existing exactly as this one does; thinking in terms of possibilities is thinking of real worlds where things are different. This view has been charged with misrepresenting modal thought: it seems to leave it unintelligible why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowned, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge that the notion fails to fit with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them, but Lewis denies that any other way of interpreting modal statements is tenable.
Thus and so, the ‘standard analysis’ of propositional knowledge, suggested by Plato and Kant among others, implies that if one has a justified true belief that ‘p’, then one knows that ‘p’. The belief condition requires that one believe that ‘p’, the truth condition requires that any known proposition be true, and the justification condition requires that any known proposition be adequately justified, warranted or evidentially supported. Plato appears to be considering the tripartite definition in the “Theaetetus” (201c-202d), and to be endorsing its jointly sufficient conditions for knowledge in the “Meno” (97e-98a). This definition has come to be called ‘the standard analysis’ of knowledge, and it received a serious challenge from Edmund Gettier’s counterexamples in 1963. Gettier published two counterexamples to this implication of the standard analysis. In essence, they are:
(1) Smith and Jones have applied for the same job. Smith is justified in believing that (a) Jones will get the job, and that (b) Jones has ten coins in his pocket. On the basis of (a) and (b) Smith infers, and thus is justified in believing, that (c) the person who will get the job has ten coins in his pocket. As it turns out, Smith himself will get the job, and he also happens to have ten coins in his pocket. So, although Smith is justified in believing the true proposition (c), Smith does not know (c).
(2) Smith is justified in believing the false proposition that (a) Smith owns a Ford. On the basis of (a) Smith infers, and thus is justified in believing, that (b) either Jones owns a Ford or Brown is in Barcelona. As it turns out, Brown is in Barcelona, and so (b) is true. So, although Smith is justified in believing the true proposition (b), Smith does not know (b).
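Schematically, the standard analysis and the shared structure of cases (1) and (2) can be put as follows. This is our gloss on the cases, not Gettier's own formulation:

```latex
K_S(p) \iff \underbrace{p}_{\text{truth}} \;\wedge\; \underbrace{B_S(p)}_{\text{belief}} \;\wedge\; \underbrace{J_S(p)}_{\text{justification}}
% Gettier recipe: S justifiably believes a falsehood q; S validly infers p
% from q, so justification transmits to p; p is true for reasons unconnected
% with S's evidence. Hence all three conditions hold, yet not K_S(p).
```

Both cases exploit the same two assumptions: that justification does not require truth, and that justification is transmitted by competent deduction.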
Gettier’s counterexamples are thus cases where one has justified true belief that ‘p’, but lacks knowledge that ‘p’. The Gettier problem is the problem of finding a modification of, or an alternative to, the standard justified-true-belief analysis of knowledge that avoids counterexamples like Gettier’s. Some philosophers have suggested that Gettier-style counterexamples are defective owing to their reliance on the false principle that false propositions can justify one’s belief in other propositions. But there are examples much like Gettier’s that do not depend on this allegedly false principle. Here is one example inspired by Keith Lehrer and Richard Feldman:
(3) Suppose Smith knows the following proposition, ‘m’: Jones, whom Smith has always found to be reliable and whom Smith has no reason to distrust now, has told Smith, his office-mate, that ‘p’: he, Jones, owns a Ford. Suppose also that Jones has told Smith that ‘p’ only because of a state of hypnosis Jones is in, and that ‘p’ is true only because, unknown to himself, Jones has won a Ford in a lottery since entering the state of hypnosis. And suppose further that Smith deduces from ‘m’ its existential generalization, ‘q’: there is someone, whom Smith has always found to be reliable and whom Smith has no reason to distrust now, who has told Smith, his office-mate, that he owns a Ford. Smith, then, knows that ‘q’, since he has correctly deduced ‘q’ from ‘m’, which he also knows. But suppose also that on the basis of his knowledge that ‘q’, Smith believes that ‘r’: someone in the office owns a Ford. Under these conditions, Smith has justified true belief that ‘r’, knows his evidence for ‘r’, but does not know that ‘r’.
Gettier-style examples of this sort have proven especially difficult for attempts to analyse the concept of propositional knowledge. The history of attempted solutions to the Gettier problem is complex and open-ended; it has not produced consensus on any solution. Many philosophers hold, in light of Gettier-style examples, that propositional knowledge requires a fourth condition, beyond the justification, truth and belief conditions. Although no particular fourth condition enjoys widespread endorsement, there are some prominent general proposals in circulation. One sort of proposed modification, the so-called ‘defeasibility analysis’, requires that the justification appropriate to knowledge be ‘undefeated’ in the general sense that some appropriate subjunctive conditional concerning genuine defeaters of justification be true of that justification. One straightforward defeasibility fourth condition, for instance, requires of Smith’s knowing that ‘p’ that there be no true proposition ‘q’ such that if ‘q’ became justified for Smith, ‘p’ would no longer be justified for Smith (Pappas and Swain, 1978). A different prominent modification requires that the actual justification for a true belief qualifying as knowledge not depend in a specified way on any falsehood (Armstrong, 1973). The details proposed to elaborate such approaches have met with considerable controversy.
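The straightforward defeasibility condition cited from Pappas and Swain can be written as a schema, using ‘□→’ for the subjunctive conditional; the notation is ours:

```latex
S \text{ knows that } p \;\rightarrow\;
\neg\exists q\,\big[\,q \text{ is true} \;\wedge\;
\big(\,q \text{ becomes justified for } S \;\Box\!\!\rightarrow\;
p \text{ is no longer justified for } S\,\big)\big]
```

The condition is stated as necessary for knowledge, not sufficient: it rules out Gettier cases in which some unpossessed truth would, if learned, destroy the subject's justification.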
Evidential truth-sustenance has been proposed as a fourth condition able to solve the Gettier problem. More specifically, for a person, ‘S’, to have knowledge that ‘p’ on justifying evidence ‘e’, ‘e’ must be truth-sustained in this sense: for every true proposition ‘t’ that, when conjoined with ‘e’, undermines ‘S’s justification for ‘p’ on ‘e’, there is a true proposition, ‘t′’, that, when conjoined with ‘e’ & ‘t’, restores the justification of ‘p’ for ‘S’ in a way that leaves ‘S’ actually justified in believing that ‘p’. The gist of this proposal, put roughly, is that propositional knowledge requires justified true belief that is sustained by the collective totality of truths. It is argued in Knowledge and Evidence that this account handles not only Gettier-style examples such as (1)-(3), but various others as well.
Three features of this proposed solution merit emphasis. First, it avoids a subjunctive conditional in its fourth condition, and so escapes some difficult problems facing the use of such a conditional in an analysis of knowledge. Second, it allows for non-deductive justifying evidence as a component of propositional knowledge. (An adequacy condition on an analysis of knowledge is that it not restrict justifying evidence to relations of deductive support.) Third, the proposed solution is sufficiently flexible to handle cases describable as follows:
(4) Smith has a justified true belief that ‘p’, but there is a true proposition, ‘t’, which undermines Smith’s justification for ‘p’ when conjoined with it, and which is such that it is either physically or humanly impossible for Smith to be justified in believing that ‘t’.
Examples represented by (4) suggest that we should countenance varying strengths in notions of propositional knowledge. These strengths are determined by accessibility qualifications on the set of relevant knowledge-precluding underminers. A very demanding concept of knowledge assumes that it need only be logically possible for a knower to believe a knowledge-precluding underminer. Less demanding concepts assume that it must be physically or humanly possible for a knower to believe knowledge-precluding underminers. But even such less demanding concepts of knowledge need to rely on a notion of truth-sustained evidence if they are to survive a threatening range of Gettier-style examples. On any resolution, the needed fourth condition for a notion of knowledge is not a function simply of the evidence a knower actually possesses.
The controversial aftermath of Gettier’s original counterexamples has left some philosophers doubtful of the real philosophical significance of the Gettier problem. Such doubt, however, seems misplaced. One fundamental branch of epistemology seeks understanding of the nature of propositional knowledge, and our understanding exactly what propositional knowledge is essentially involves our having a Gettier-resistant analysis of such knowledge. If our analysis is not Gettier-resistant, we will lack an exact understanding of what propositional knowledge is. It is epistemologically important, therefore, to have a defensible solution to the Gettier problem, however demanding such a solution is.
Propositional knowledge (PK) is the type of knowing whose instances are labelled by means of a phrase expressing some proposition, e.g., in English a phrase of the form ‘that h’, where some complete declarative sentence is substituted for ‘h’.
Theories of ‘PK’ differ over whether the proposition that ‘h’ is involved in a more intimate fashion, such as serving as a way of picking out a propositional attitude required for knowing, e.g., believing that ‘h’, accepting that ‘h’ or being sure that ‘h’. For instance, the tripartite, or standard, analysis treats ‘PK’ as consisting in having a justified, true belief that ‘h’: the belief condition requires that anyone who knows that ‘h’ believes that ‘h’, and the truth condition requires that any known proposition be true. In contrast, some theories treat ‘PK’ as the possession of specific abilities, capacities, or powers, and view the proposition that ‘h’ as needing to be expressed only in order to label a specific instance of ‘PK’.
Although most theories of propositional knowledge purport to analyse it, philosophers disagree about the goal of a philosophical analysis. Theories of ‘PK’ may differ over whether they aim to cover all species of ‘PK’ and, if they do not have this goal, over whether they aim to reveal any unifying link between the species that they investigate, e.g., empirical knowledge, and other species of knowing.
Very many accounts of ‘PK’ have been inspired by the quest to add a fourth condition to the tripartite analysis so as to avoid Gettier-type counterexamples to it, and by the resulting need to deal with other counterexamples provoked by these new analyses. One such account adds the condition of evidential truth-sustenance described above: propositional knowledge requires justified true belief that is sustained by the collective totality of truths, while justifying evidence is not restricted to relations of deductive support. On this approach we should countenance varying strengths in notions of propositional knowledge, determined by accessibility qualifications on the set of relevant knowledge-precluding underminers: a very demanding concept of knowledge assumes that it need only be logically possible for a knower to believe a knowledge-precluding underminer, while less demanding concepts assume that it must be physically or humanly possible. But even the less demanding concepts of knowledge need to rely on a notion of truth-sustaining evidence if they are to survive a threatening range of Gettier-style examples, for the needed fourth condition is not a function simply of the evidence a knower actually possesses.
Keith Lehrer (1965) originated a Gettier-type example that has been a fertile source of important variants. It is the case of Mr Notgot, who is in one’s office and has provided some evidence, ‘e’, on the basis of which one forms a justified belief that Mr Notgot is in the office and owns a Ford, thanks to which one arrives at the justified belief that ‘h1’: ‘Someone in the office owns a Ford’. In the example, ‘e’ consists of such things as Mr Notgot’s presently showing one a certificate of Ford ownership while claiming to own a Ford, and his having been reliable in the past. Yet Mr Notgot has just been shamming, and the only reason that it is true that ‘h1’ is because, unbeknown to oneself, a different person in the office owns a Ford.
Variants on this example continue to challenge efforts to analyse species of ‘PK’. For instance, Alan Goldman (1988) has proposed that one has empirical knowledge that ‘h’ when the state of affairs (call it h*) expressed by the proposition that ‘h’ figures prominently in an explanation of the occurrence of one’s believing that ‘h’, where explanation is taken to involve one of a variety of probability relations between ‘h*’ and the belief state. But this account runs foul of a variant on the Notgot case akin to one that Lehrer (1979) has described. In Lehrer’s variant, Mr Notgot has manifested a compulsion to trick people into justifiedly believing truths that nonetheless fall short of knowledge, by means of concocting Gettierized evidence for those truths. If we make the trickster’s neurosis highly specific to the type of information contained in the proposition that ‘h’, we obtain a variant satisfying Goldman’s requirement that the occurrence of ‘h*’ significantly raises the probability of one’s believing that ‘h’. (Lehrer himself (1990, pp. 103-4) has criticized Goldman by questioning whether, when one has ordinary perceptual knowledge that an object is present, the presence of the object is what explains one’s believing it to be present.)
In grappling with Gettier-type examples, some analyses proscribe specific relations between falsehoods and the evidence or grounds that justify one’s believing. A simple restriction of this type requires that one’s reasoning to the belief that ‘h’ not crucially depend upon any false lemma (such as the false proposition that Mr Notgot is in the office and owns a Ford). However, Gettier-type examples have been constructed in which one does not reason through any false belief, e.g., a variant of the Notgot case where one arrives at the belief that ‘h’ by basing it upon a true existential generalization of one’s evidence: ‘There is someone in the office who has provided evidence e’. In response to similar cases, Sosa (1991) has proposed that for ‘PK’ the ‘basis’ for the justification of one’s belief that ‘h’ must not involve one’s being justified in believing or in ‘presupposing’ any falsehood, even if one’s reasoning to the belief does not employ that falsehood as a lemma. Alternatively, Roderick Chisholm (1989) requires that if there is something that makes the proposition that ‘h’ evident for one and yet makes something else that is false evident for one, then the proposition that ‘h’ is implied by a conjunction of propositions, each of which is evident for one and is such that something that makes it evident for one makes no falsehood evident for one. Other types of analyses are concerned with the role of falsehoods within the justification of the proposition that ‘h’ (versus the justification of one’s believing that ‘h’). Such a theory may require that one’s evidence bearing on this justification not contain falsehoods, or that no falsehoods be involved at specific places in a special explanatory structure relating to the justification of the proposition that ‘h’ (Shope, 1983).
A frequently pursued line of research concerning a fourth condition of knowing seeks what is called a ‘defeasibility’ analysis of ‘PK’. Early versions characterized defeasibility by means of subjunctive conditionals of the form, ‘If ‘A’ were the case then ‘B’ would be the case’. More recently the label has also been applied to conditions about evidential or justificational relations that are not themselves characterized in terms of conditionals. Early versions of defeasibility theories advanced conditionals where ‘A’ is a hypothetical situation concerning one’s acquisition of a specified sort of epistemic status for specified propositions, e.g., one’s acquiring justified belief in some further evidence or truths, and ‘B’ concerned, for instance, the continued justified status of the proposition that ‘h’ or of one’s believing that ‘h’.
A unifying thread connecting the conditional and non-conditional approaches to defeasibility may lie in the following facts: (1) what is a reason for being in a propositional attitude is in part a consideration, instances of the thought of which have the power to affect relevant processes of propositional attitude formation; (2) philosophers have often hoped to analyse power ascriptions by means of conditional statements; and (3) arguments portraying evidential or justificational relations are abstractions from those processes of propositional attitude maintenance and formation that manifest rationality. So even when some circumstance, ‘R’, is a reason for believing or accepting that ‘h’, another circumstance, ‘K’, may prevent an occasion from being present for a rational manifestation of the relevant power of the thought of ‘R’, and it will not be a good argument to base a conclusion that ‘h’ on the premiss that ‘R’ and ‘K’ obtain. Whether ‘K’ does play this interfering, ‘defeating’ role will depend upon the total relevant situation.
Accordingly, one of the most sophisticated defeasibility accounts, proposed by John Pollock (1986), requires that in order to know that ‘h’, one must believe that ‘h’ on the basis of an argument whose force is not defeated in the above way, given the total set of circumstances described by all truths. More specifically, Pollock defines defeat as a situation where (1) one believes that ‘p’ and it is logically possible for one to become justified in believing that ‘h’ by believing that ‘p’, and (2) one actually has a further set of beliefs, ‘S’, logically consistent with the proposition that ‘h’, such that it is not logically possible for one to become justified in believing that ‘h’ by believing it on the basis of holding the set of beliefs that is the union of ‘S’ with the belief that ‘p’ (Pollock, 1986, pp. 36, 38). Furthermore, Pollock requires for ‘PK’ that the rational presumption in favour of one’s believing that ‘h’ created by one’s believing that ‘p’ is undefeated by the set of all truths, including considerations that one does not actually believe. Pollock offers no definition of what this requirement means, but he may intend roughly the following: where ‘T’ is the set of all true propositions, (I) one believes that ‘p’ and it is logically possible for one to become justified in believing that ‘h’ by believing that ‘p’, and (II) there are logically possible situations in which one becomes justified in believing that ‘h’ on the basis of having the belief that ‘p’ and the beliefs in ‘T’. Thus, in the Notgot example, since ‘T’ includes the proposition that Mr Notgot does not own a Ford, one lacks knowledge because condition (II) is not satisfied.
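On the interpretation just offered, Pollock's two-clause definition of defeat and the undefeated-by-all-truths requirement can be summarized schematically. The abbreviations (‘B’ for belief, ‘J(h | X)’ for becoming justified in believing h on the basis of X, ‘◊’ for logical possibility) are our gloss, not Pollock's notation:

```latex
\text{Defeat:}\quad
(1)\ B(p)\ \wedge\ \Diamond\, J\big(h \mid \{p\}\big);\qquad
(2)\ \exists S'\,\big[\,S' \cup \{h\}\ \text{consistent}\ \wedge\
     \neg\Diamond\, J\big(h \mid S' \cup \{p\}\big)\,\big]\\[4pt]
\text{Undefeated by all truths (}T\text{ the set of all true propositions):}\quad
(\mathrm{I})\ B(p)\ \wedge\ \Diamond\, J\big(h \mid \{p\}\big);\qquad
(\mathrm{II})\ \Diamond\, J\big(h \mid T \cup \{p\}\big)
```

In the Notgot case, clause (II) fails: no possible situation makes one justified in believing h on the basis of p together with all the truths, since these include the truth that Notgot owns no Ford.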
But given such an interpretation, Pollock’s account illustrates the fact that defeasibility theories typically have difficulty dealing with introspective knowledge of one’s beliefs. Suppose that some proposition, say that ƒ, is false, but one does not realize this and holds the belief that ƒ. Condition (II) then rules out one’s having knowledge that h2: ‘I believe that ƒ’. At least this is so if one’s reason for believing that h2 includes the presence of the very condition of which one is aware, i.e., one’s believing that ƒ: it is incoherent to suppose that one retains the latter reason while also believing the truth that not-ƒ. This objection can be avoided, but at the cost of adopting what is a controversial view about introspective knowledge that ‘h’, namely, the view that one’s belief that ‘h’ is in such cases mediated by some mental state intervening between the mental state of which there is introspective knowledge and the belief that ‘h’, so that it is this intervening state, rather than the introspected state, that is included in one’s reason for believing that ‘h’. In order to avoid adopting this controversial view, Paul Moser (1989) has proposed a disjunctive analysis of ‘PK’, which requires that either one satisfy a defeasibility condition rather like Pollock’s or else one believes that ‘h’ by introspection. However, Moser leaves obscure exactly why beliefs arrived at by introspection count as knowledge.
Early versions of defeasibility theories had difficulty allowing for the existence of evidence that is ‘merely misleading’, as in the case where one does know that h3: ‘Tom Grabit stole a book from the library’, thanks to having seen him steal it, yet where, unbeknown to oneself, Tom’s demented mother has testified that Tom was far away from the library at the time of the theft. One’s justifiably believing that she gave the testimony would destroy one’s justification for believing that h3 if added by itself to one’s present evidence.
At least some defeasibility theories cannot deal with the knowledge one has while dying that h4: ‘In this life there is no time at which I believe that ‘d’’, where the proposition that ‘d’ expresses the details regarding some matter, e.g., the maximum number of blades of grass ever simultaneously growing on the earth. When it just so happens that it is true that ‘d’, defeasibility analyses typically treat the addition to one’s dying thoughts of a belief that ‘d’ in such a way as improperly to rule out actual knowledge that h4.
A quite different approach to knowledge, and one able to deal with some Gettier-type cases, involves developing some type of causal theory of propositional knowledge. The interesting thesis that counts as a causal theory of justification (in the meaning of ‘causal theory’ intended here) is that a belief is justified just in case it was produced by a type of process that is ‘globally’ reliable, that is, its propensity to produce true beliefs, which can be defined (to a good enough approximation) as the proportion of the beliefs it produces (or would produce were it used as often as opportunity allows) that are true, is sufficiently great. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F.P. Ramsey (1931), who said that a belief was knowledge if it is true, certain and obtained by a reliable process. P. Unger (1968) suggested that ‘S’ knows that ‘p’ just in case it is not at all accidental that ‘S’ is right about its being the case that ‘p’. D.M. Armstrong (1973) said that a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth via the laws of nature.
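The truth-ratio idea behind ‘global’ reliability can be made concrete. The following is a minimal illustrative sketch, not drawn from any of the authors cited; the function names and the 0.9 threshold are hypothetical choices made purely for illustration:

```python
# Illustrative sketch of "global" reliability as a truth ratio.
# A belief-forming process is modelled only by its track record:
# a list of booleans, True where the belief it produced was true.

def reliability(outcomes):
    """Proportion of the beliefs produced that are true."""
    if not outcomes:
        raise ValueError("process has produced no beliefs")
    return sum(outcomes) / len(outcomes)

def is_globally_reliable(outcomes, threshold=0.9):
    """On the reliabilist view sketched in the text, a belief is
    justified when produced by a process whose truth ratio is
    'sufficiently great'; the threshold here is arbitrary."""
    return reliability(outcomes) >= threshold
```

A process that yields three true beliefs out of four has a truth ratio of 0.75, and so would fail a 0.9 threshold; where the threshold itself should be set is, of course, exactly the sort of question the philosophical literature leaves open.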
Such theories require that one or another specified relation, characterizable by mention of some aspect of causation, hold between one’s belief that ‘h’ (or one’s acceptance of the proposition that ‘h’) and the state of affairs h*, e.g.: h* causes the belief; h* is causally sufficient for the belief; h* and the belief have a common cause. Such simple versions of a causal theory are able to deal with the original Notgot case, since it involves no such causal relationship, but cannot explain why there is ignorance in the variants of that case. Berent Enç (1984) has pointed out that sometimes one knows of ‘χ’ that it is ø thanks to recognizing a feature merely correlated with the presence of ø-ness. Without endorsing a causal theory himself, he suggests that such a theory would need to be elaborated so as to allow that one’s belief that ‘χ’ has ø has been caused by a factor whose correlation with the presence of ø-ness has caused in oneself (e.g., by evolutionary adaptation in one’s ancestors) the disposition that one manifests in acquiring the belief in response to the correlated factor. Not only does this strain the unity of a causal theory by complicating it, but no causal theory without other shortcomings has been able to cover instances of deductively reasoned knowledge.
Causal theories of propositional knowledge differ over whether they deviate from the tripartite analysis by dropping the requirement that one’s believing (accepting) that ‘h’ be justified. The same variation occurs regarding reliability theories, which present the knower as reliable concerning the issue of whether or not ‘h’, in the sense that some of one’s cognitive or epistemic states, θ, are such that, given further characteristics of oneself (possibly including relations to factors external to one, of which one may not be aware), it is nomologically necessary (or at least probable) that ‘h’. In some versions, the reliability is required to be ‘global’ in so far as it must concern a nomological (or probabilistic) relationship of states of type θ to the acquisition of true beliefs about a wider range of issues than merely whether or not ‘h’. There is also controversy about how to delineate the limits of what constitutes a type of relevant personal state or characteristic. (For example, in a case where Mr Notgot has not been shamming and one does know thereby that someone in the office owns a Ford, is the relevant type of state something broad, such as a way of forming beliefs about the properties of persons spatially close to one, or instead something narrower, such as a way of forming beliefs about Ford owners in offices partly upon the basis of their relevant testimony?)
One important variety of reliability theory is a conclusive-reasons account, which includes a requirement that one’s reasons for believing that ‘h’ be such that in one’s circumstances, if h* were not to occur, then, e.g., one would not have the reasons one does for believing that ‘h’, or, e.g., one would not believe that ‘h’. Roughly, the latter is demanded by theories that treat the knower as ‘tracking the truth’, theories that include the further demand that, roughly, if it were the case that ‘h’, then one would believe that ‘h’. A version of the tracking theory has been defended by Robert Nozick (1981), who adds that if what he calls a ‘method’ has been used to arrive at the belief that ‘h’, then the antecedent clauses of the two conditionals that characterize tracking will need to include the hypothesis that one would employ the very same method.
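The tracking conditions can be set out schematically. The following is a standard reconstruction rather than a quotation from Nozick, writing $\Box\!\!\rightarrow$ for the subjunctive conditional and $B_S(h)$ for ‘S believes that h’:

```latex
S \text{ knows that } h \text{ iff: } \quad
(1)\; h \text{ is true}; \quad
(2)\; B_S(h); \quad
(3)\; \neg h \;\Box\!\!\rightarrow\; \neg B_S(h); \quad
(4)\; h \;\Box\!\!\rightarrow\; B_S(h).
```

Condition (3) corresponds to the conclusive-reasons demand (had ‘h’ been false, one would not have believed it), and (4) to the further tracking demand; where a method M is used to arrive at the belief, the antecedents of (3) and (4) are relativized to S’s employing that very method.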
But unless more conditions are added to Nozick’s analysis, it will be too weak to explain why one lacks knowledge in a version of the last variant of the tricky Mr Notgot case described above, where we add the following details: (a) Mr Notgot’s compulsion is not easily changed, (b) while in the office, Mr Notgot has no other easy trick of the relevant type to play on one, and (c) one arrives at one’s belief that ‘h’ not by reasoning through a false belief but by basing the belief that ‘h’ upon a true existential generalization of one’s evidence.
Nozick’s analysis is in addition too strong to permit anyone ever to know that h5: ‘Some of my beliefs about beliefs might be otherwise, e.g., I might have rejected one of them’. If I know that h5, then satisfaction of the antecedent of one of Nozick’s conditionals would involve its being false that h5, thereby thwarting satisfaction of the consequent’s requirement that I not then believe that h5. For the belief that h5 is itself one of my beliefs about beliefs (Shope, 1984).
Some philosophers think that the category of knowing for which true, justified believing (accepting) is a requirement constitutes only a species of propositional knowledge, construed as an even broader category. They have proposed various examples of ‘PK’ that do not satisfy the belief and/or justification conditions of the tripartite analysis. Such cases are often accommodated by analyses of propositional knowledge in terms of powers, capacities, or abilities. For instance, Alan R. White (1982) treats ‘PK’ as merely the ability to provide a correct answer to a possible question. However, White may be equating ‘producing’ knowledge, in the sense of producing the correct answer to a possible question, with ‘displaying’ knowledge, in the sense of manifesting knowledge. The latter can be done even by very young children and some non-human animals independently of their being asked questions, understanding questions, or recognizing answers to questions. Indeed, an example that has been proposed as an instance of knowing that ‘h’ without believing or accepting that ‘h’ can be modified so as to illustrate this point. One such example concerns an imaginary person who has no special training or information about horses or racing, but who in an experiment persistently and correctly picks the winners of upcoming horseraces. If the example is modified so that the hypothetical ‘seer’ never picks winners but only muses over whether those horses might win, or only reports that those horses will win, this behaviour should be as much of a candidate for the person’s manifesting knowledge that the horse in question will win as would be the behaviour of picking it as a winner.
These considerations expose limitations in Edward Craig’s analysis (1990) of the concept of knowing in terms of a person’s being a satisfactory informant in relation to an inquirer who wants to find out whether or not ‘h’. Craig realizes that counterexamples to his analysis appear to be constituted by knowers who are too recalcitrant to inform the inquirer, too incapacitated to inform, or too discredited to be worth considering (as with the boy who cried ‘Wolf’). Craig admits that this might make preferable some alternative view of knowledge as a different state that helps to explain the presence of the state of being a suitable informant when the latter does obtain. One such alternative offers a recursive definition concerning one’s having the power to proceed in a way representing the state of affairs that is causally involved in one’s proceeding in this way. When combined with a suitable analysis of representing, this theory of propositional knowledge can be unified with a structurally similar analysis of knowing how to do something.
As for knowledge and belief: according to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurate if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological certainty (Prichard, 1950 and Ayer, 1956) or conviction (Lehrer, 1974) or acceptance (Lehrer, 1989). Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief (or a facsimile) are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).
The incompatibility thesis is sometimes traced to Plato (c. 429-347 BC) in view of his claim that knowledge is infallible while belief or opinion is fallible (“Republic” 476-9). But this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.
A. Duncan-Jones (1939; also Vendler, 1978) cites linguistic evidence to back up the incompatibility thesis. He notes that people often say ‘I do not believe she is guilty. I know she is’ and the like, which suggests that belief rules out knowledge. However, as Lehrer (1974) indicates, the above exclamation is only a more emphatic way of saying ‘I do not just believe she is guilty, I know she is’, where ‘just’ makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: ‘You did not hurt him, you killed him’.
H.A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, however, Prichard gives us no good reason to grant that states of belief are never ones involving confidence. Conscious beliefs clearly involve some level of confidence; to suggest that we cease to believe things about which we are completely confident is bizarre.
A.D. Woozley (1953) defends a version of the separability thesis. Woozley’s version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, though it might also be accompanied by confidence. Woozley remarks that the test of whether I know something is ‘what I can do, where what I can do may include answering questions’. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, ‘I am unsure whether my answer is true; still, I know it is correct’. But this tension Woozley explains using a distinction between conditions under which we are justified in making a claim (such as a claim to know something) and conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.
Colin Radford (1966) extends Woozley’s defence of the separability thesis. In Radford’s view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history years earlier, and yet he is able to give several correct responses to questions such as ‘When did the Battle of Hastings occur?’ Since he forgot that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066, he would deny having the belief that the Battle of Hastings took place in 1066; indeed, he would deny being certain, or having any right to be confident, that 1066 was the correct date. Radford would nonetheless insist that Jean knows when the Battle occurred, since he clearly remembers the correct date. Radford admits that it would be inappropriate for Jean to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought at least to believe that we have the knowledge we claim, or else our behaviour is ‘intentionally misleading’.
Those who agree with Radford’s defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lacks beliefs about English history is plausible on this Cartesian picture, since Jean does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting that Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain’s (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.
D.M. Armstrong (1973) takes a different tack against Radford. Armstrong will grant Radford the point that Jean does know that the Battle of Hastings took place in 1066; in fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible but no more than just possible with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught that the Battle occurred in 1066, and subsequently ‘guessed’ that it took place in 1066, we would surely describe the situation as one in which Jean’s false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford’s original case as one in which Jean’s true belief became unconscious but persisted long enough to cause his guess. Thus, while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.
Armstrong’s response to Radford was to reject Radford’s claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him (Sorenson, 1982). If Armstrong is correct in suggesting that Jean believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Jean knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to compare the examinee case with examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): for no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York City, even though she has every reason to believe that the President is in Washington, DC. In fact, Samantha is a completely reliable clairvoyant, and she has arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha’s belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. But Radford’s examinee is in a similar position. Even if Jean does have the belief that Radford denies him, Radford does not have an example of knowledge: suppose that Jean’s memory had been sufficiently powerful to produce the relevant belief. As Radford says, Jean has every reason to suppose that his response is mere guesswork, and hence every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.
Perception is a fundamental philosophical topic, both for its central place in any theory of knowledge and for its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception: (1) it gives us knowledge of the world around us; (2) we are conscious of that world by being aware of ‘sensible qualities’: colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment; (3) such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound; (4) there ensues even more complex neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received. (Much of this complexity has been revealed by the difficulties of writing programs enabling computers to recognize quite simple aspects of the visual scene.) The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain.
Calling such supposed items names like ‘sense-data’ or ‘percepts’ exacerbates the tendency, but once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there will now seem little connection between these items in immediate experience and any independent reality. Reactions to this problem include ‘scepticism’ and ‘idealism’.
A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.
Further, perceptual knowledge is knowledge acquired by or through the senses, and it includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something-that the light has turned green, that the roast is burning, that the melon is overripe, and that it is time to get up-by some sensory means. Seeing that the light has turned green is learning something-that the light has turned green-by use of the eyes. Feeling that the melon is overripe is coming to know a fact-that the melon is overripe-by one’s sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.
Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas, see, by the newspapers, that our team has lost again, see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees, and hence comes to know something about, the gauge (that it reads a certain way), one does not come to know, in this way at least, what one is described as coming to know by perceptual means. If one cannot hear that the bell is ringing, one cannot, in this way at least, hear that one’s visitors have arrived. In such cases one sees (hears, smells, etc.) that ‘a’ is ‘F’, coming to know thereby that ‘a’ is ‘F’, by seeing (hearing, etc.) that some other condition, ‘b’s’ being ‘G’, obtains. When this occurs, the knowledge (that ‘a’ is ‘F’) is derived from, or dependent on, the more basic perceptual knowledge that ‘b’ is ‘G’.
Consciousness has become so lively a topic in the cognitive and brain sciences over the past three decades that, instead of ignoring it, many physicalists now seek to explain it (Dennett, 1991). Here we focus exclusively on ways those neuro-scientific discoveries have impacted philosophical debates about the nature of consciousness and its relation to physical mechanisms. Thomas Nagel argues that conscious experience is subjective, and thus permanently recalcitrant to objective scientific understanding.
He invites us to ponder ‘what it is like to be a bat’ and urges the intuition that no amount of physical-scientific knowledge (including neuro-scientific knowledge) supplies a complete answer. Nagel’s intuition pump has generated extensive philosophical discussion. At least two well-known replies make direct appeal to neurophysiology. John Biro suggests that part of the intuition pumped by Nagel, that bat experience is substantially different from human experience, presupposes systematic relations between physiology and phenomenology. Kathleen Akins (1993) delves deeper into existing knowledge of bat physiology and reports much that is pertinent to Nagel’s question. She argues that many of the questions about bat subjectivity that we still consider open hinge on questions that remain unanswered about neuro-scientific details. One example of the latter is the function of various cortical activity profiles in the active bat.
More recently the philosopher David Chalmers (1996) has argued that any possible brain-process account of consciousness will leave open an ‘explanatory gap’ between the brain process and the properties of the conscious experience. This is because no brain-process theory can answer the “hard” question: why should that particular brain process give rise to conscious experience? We can always imagine (“conceive of”) a universe populated by creatures having those brain processes but completely lacking conscious experience. A theory of consciousness requires an explanation of how and why some brain process causes consciousness replete with all the features we commonly experience. The fact that the hard question remains unanswered shows that we will probably never get a complete explanation of consciousness at the level of neural mechanisms. Paul and Patricia Churchland have recently offered the following diagnosis and reply. Chalmers offers a conceptual argument, based on our ability to imagine creatures possessing brains like ours but wholly lacking in conscious experience. But the more one learns about how the brain produces conscious experience (and a literature is beginning to emerge, e.g., Gazzaniga, 1995), the harder it becomes to imagine a universe consisting of creatures with brain processes like ours but lacking consciousness. These are not just bare assertions. The Churchlands appeal to some neurobiological detail. For example, Paul Churchland (1995) develops a neuro-scientific account of consciousness based on recurrent connections between thalamic nuclei (particularly “diffusely projecting” nuclei like the intralaminar nuclei) and the cortex. Churchland argues that this thalamocortical recurrency accounts for the selective features of consciousness, for the effects of short-term memory on conscious experience, for vivid dreaming during REM (rapid-eye-movement) sleep, and for other “core” features of conscious experience.
In other words, the Churchlands are claiming that when one learns about activity patterns in these recurrent circuits, one can't "imagine" or "conceive of" this activity occurring without these core features of conscious experience. (Other than just mouthing the words, "I am now imagining activity in these circuits without selective attention/the effects of short-term memory/vivid dreaming . . . ")
A second focus of sceptical arguments about a complete neuro-scientific explanation of consciousness is sensory qualia: the introspectable qualitative aspects of sensory experience, the features by which subjects discern similarities and differences among their experiences. The colours of visual sensations are a philosopher’s favourite example. One famous puzzle about colour qualia is the alleged conceivability of spectral inversions. Many philosophers claim that it is conceptually possible (if perhaps physically impossible) for two humans not to differ neurophysiologically, while the colour that fire engines and tomatoes appear to have to one subject is the colour that grass and frogs appear to have to the other (and vice versa). A large amount of neuro-scientifically informed philosophy has addressed this question. A related area where neuro-philosophical considerations have emerged concerns the metaphysics of colours themselves (rather than colour experiences). A longstanding philosophical dispute is whether colours are objective properties existing external to perceivers, or rather are identifiable with or dependent upon minds or nervous systems. Some recent work on this problem begins with characteristics of colour experiences: for example, that colour similarity judgments produce colour orderings that align on a circle. With this resource, one can seek mappings of phenomenology onto environmental or physiological regularities. Identifying colours with particular frequencies of electromagnetic radiation does not preserve the structure of the hue circle, whereas identifying colours with activity in opponent-processing neurons does. Such a tidbit is not decisive for the colour objectivist-subjectivist debate, but it does convey the type of neuro-philosophical work being done on traditional metaphysical issues beyond the philosophy of mind.
We saw in the discussion of Hardcastle (1997) two sections above that neuro-philosophers have entered disputes about the nature and methodological import of pain experiences. Two decades earlier, Dan Dennett (1978) took up the question of whether it is possible to build a computer that feels pain. He compares and notes tensions between neurophysiological discoveries and common-sense intuitions about pain experience. He suspects that the incommensurability between scientific and common-sense views is due to incoherence in the latter. His attitude is wait-and-see. But, foreshadowing Churchland’s reply to Chalmers, Dennett favours scientific investigations over conceivability-based philosophical arguments.
Neurological deficits have attracted philosophical interest. For thirty years philosophers have found implications for the unity of the self in experiments with commissurotomy patients. In carefully controlled experiments, commissurotomy patients display two dissociable seats of consciousness. Patricia Churchland scouts philosophical implications of a variety of neurological deficits. One deficit is blind-sight. Some patients with lesions to primary visual cortex report being unable to see items in regions of their visual fields, yet perform far better than chance in forced-guess trials about stimuli in those regions. A variety of scientific and philosophical interpretations have been offered. Ned Block (1988) worries that many of these conflate distinct notions of consciousness. He labels these notions ‘phenomenal consciousness’ (‘P-consciousness’) and ‘access consciousness’ (‘A-consciousness’). The former is the ‘what it is like’-ness of experience. The latter is the availability of representational content to self-initiated action and speech. Block argues that P-consciousness is not always representational, whereas A-consciousness is. Dennett and Michael Tye are sceptical of non-representational analyses of consciousness in general. They provide accounts of blind-sight that do not depend on Block’s distinction.
Many other topics are worth neuro-philosophical pursuit. We mentioned commissurotomy and the unity of consciousness and the self, which continues to generate discussion. Qualia beyond those of colour and pain have begun to attract neuro-philosophical attention, as has self-consciousness. The first issue to arise in the ‘philosophy of neuroscience’ (before there was a recognized area) was the localization of cognitive functions to specific neural regions. Although the ‘localization’ approach had dubious origins in the phrenology of Gall and Spurzheim, and was challenged severely by Flourens throughout the early nineteenth century, it re-emerged in the study of aphasia by Bouillaud, Auburtin, Broca, and Wernicke. These neurologists made careful studies (where possible) of linguistic deficits in their aphasic patients, followed by post-mortem brain autopsies. Broca’s initial study of twenty-two patients in the mid-nineteenth century confirmed that damage to the left cortical hemisphere was predominant, and that damage to the second and third frontal convolutions was necessary to produce speech-production deficits. Although the anatomical coordinates Broca postulated for the ‘speech production centre’ do not correlate exactly with the damage that produces production deficits, both this area of frontal cortex and speech-production deficits still bear his name (‘Broca’s area’ and ‘Broca’s aphasia’). Less than two decades later Carl Wernicke published evidence for a second language centre. This area is anatomically distinct from Broca’s area, and damage to it produced a very different set of aphasic symptoms. The cortical area that still bears his name (‘Wernicke’s area’) is located around the first and second convolutions in temporal cortex, and the aphasia that bears his name (‘Wernicke’s aphasia’) involves deficits in language comprehension.
Wernicke's method, like Broca's, was based on lesion studies: a careful evaluation of the behavioural deficits followed by post-mortem examination to find the sites of tissue damage and atrophy. Lesion studies suggesting more precise localization of specific linguistic functions remain a cornerstone of aphasia research to this day.
Lesion studies have also produced evidence for the localization of other cognitive functions: for example, sensory processing and certain types of learning and memory. However, localization arguments for these other functions invariably include studies using animal models. With an animal model, one can perform careful behavioural measures in highly controlled settings, then ablate specific areas of neural tissue (or use a variety of other techniques to block or enhance activity in these areas) and remeasure performance on the same behavioural tests. But since we lack an animal model for (human) language production and comprehension, this additional evidence isn't available to the neurologist or neurolinguist. This fact makes the study of language a paradigm case for evaluating the logic of the lesion/deficit method of inferring functional localization. Philosopher Barbara Von Eckardt (1978) attempts to make explicit the steps of reasoning involved in this common and historically important method. Her analysis begins with Robert Cummins' early analysis of functional explanation, but she extends it into a notion of structurally adequate functional analysis. These analyses break down a complex capacity C into its constituent capacities c1, c2, . . ., cn, where the constituent capacities are consistent with the underlying structural details of the system. For example, human speech production (complex capacity ‘C’) results from formulating a speech intention, then selecting appropriate linguistic representations to capture the content of the speech intention, then formulating the motor commands to produce the appropriate sounds, then communicating these motor commands to the appropriate motor pathways (constituent capacities c1, c2, . . ., cn). A functional-localization hypothesis has the form: brain structure S in an organism (type) O has constituent capacity ci, where ci is a function of some part of O.
For example: Broca's area (S) in humans (O) formulates motor commands to produce the appropriate sounds (one of the constituent capacities, ci). Such hypotheses specify aspects of the structural realization of a functional-component model. They are part of the theory of the neural realization of the functional model.
Armed with these characterizations, Von Eckardt argues that inference to a functional-localization hypothesis proceeds in two steps. First, a functional deficit in a patient is hypothesized based on the abnormal behaviour the patient exhibits. Second, localization of function in normal brains is inferred on the basis of the functional deficit hypothesis plus the evidence about the site of brain damage. The structurally adequate functional analysis of the capacity connects the pathological behaviour to the hypothesized functional deficit. This connection suggests four adequacy conditions on a functional deficit hypothesis. First, the pathological behaviour ‘P’ (e.g., the speech deficits characteristic of Broca's aphasia) must result from failing to exercise some complex capacity ‘C’ (human speech production). Second, there must be a structurally adequate functional analysis of how people exercise capacity ‘C’ that involves some constituent capacity ci (formulating motor commands to produce the appropriate sounds). Third, the operation of the steps described by the structurally adequate functional analysis, minus the operation of the component performing ci (Broca's area), must result in pathological behaviour P. Fourth, there must not be a better available explanation for why the patient does P. Argument to a functional deficit hypothesis on the basis of pathological behaviour is thus an instance of argument to the best available explanation. When postulating a deficit in a normal functional component provides the best available explanation of the pathological data, we are justified in drawing the inference.
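The two-step inference can be rendered as a toy sketch. The pipeline below is a minimal illustration assuming a simple linear decomposition of speech production; the step names and the matching procedure are hypothetical conveniences, not Von Eckardt's own formalism:

```python
# Toy sketch (not Von Eckardt's formalism) of the lesion/deficit
# inference: a complex capacity C is treated as a pipeline of
# constituent capacities c1..cn, and knocking out one component
# should reproduce the pathological behaviour P. Names hypothetical.

SPEECH_PIPELINE = [
    "formulate_speech_intention",        # c1
    "select_linguistic_representations", # c2
    "formulate_motor_commands",          # c3 (hypothesized role of Broca's area)
    "relay_to_motor_pathways",           # c4
]

def run_pipeline(lesioned=None):
    """Return the steps actually executed, skipping a lesioned component."""
    return [c for c in SPEECH_PIPELINE if c != lesioned]

def deficit_hypothesis(observed_steps):
    """Step one of the inference: find the constituent capacity whose
    removal best explains the observed (pathological) performance."""
    for c in SPEECH_PIPELINE:
        if run_pipeline(lesioned=c) == observed_steps:
            return c
    return None  # no single-component deficit explains the data

# A patient whose behaviour looks like the pipeline minus motor-command
# formulation (Broca's aphasia, on this toy rendering):
observed = run_pipeline(lesioned="formulate_motor_commands")
print(deficit_hypothesis(observed))  # -> formulate_motor_commands
```

On this rendering, the fourth adequacy condition (no better available explanation) corresponds to the fact that exactly one lesioned component reproduces the observed behaviour.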
Von Eckardt applies this analysis to a neurological case study involving a controversial reinterpretation of agnosia. Her philosophical explication of this important neurological method reveals that most challenges to localization arguments dispute either the localization of a particular type of functional capacity or the generalization from localization of function in one individual to all normal individuals. (She presents examples of each from the neurological literature.) Such challenges do not impugn the validity of standard arguments for functional localization from deficits. It does not follow that such arguments are unproblematic; but they face difficult factual and methodological problems, not logical ones. Furthermore, the analysis of these arguments as involving a type of functional analysis and inference to the best available explanation carries an important implication for the biological study of cognitive function. Functional analyses require functional theories, and structurally adequate functional analyses require checks imposed by the lower-level sciences investigating the underlying physical mechanisms. Arguments to the best available explanation are often hampered by a lack of theoretical imagination: the available explanations are often severely limited. We must seek theoretical inspiration from any level of theory and explanation. Hence making explicit the ‘logic’ of this common and historically important form of neurological explanation reveals the necessity of joint participation from all scientific levels, from cognitive psychology down to molecular neuroscience. Von Eckardt anticipated what came to be heralded as the ‘co-evolutionary research methodology,’ which remains a centerpiece of neurophilosophy to the present day.
Over the last two decades, evidence for localization of cognitive function has come increasingly from a new source: the development and refinement of neuroimaging techniques. The form of localization-of-function argument appears not to have changed from that employing lesion studies (as analysed by Von Eckardt). Instead, these imaging technologies resolve some of the methodological problems that plague lesion studies. For example, researchers do not need to wait until the patient dies, and in the meantime probably acquires additional brain damage, to find the lesion sites. Two functional imaging techniques are prominent: positron emission tomography, or PET, and functional magnetic resonance imaging, or fMRI. Although these measure different biological markers of functional activity, both now have a resolution down to around 1 mm. As these techniques increase the spatial and temporal resolution of functional markers and continue to be used with sophisticated behavioural methodologies, the possibility of localizing specific psychological functions to increasingly specific neural regions continues to grow.
What we now know about the cellular and molecular mechanisms of neural conductance and transmission is spectacular. The same evaluation holds for all levels of explanation and theory about the mind/brain: maps, networks, systems, and behaviour. This is a natural outcome of increasing scientific specialization. We develop the technology, the experimental techniques, and the theoretical frameworks within specific disciplines to push forward our understanding. Still, a crucial aspect of the total picture gets neglected: the relationship between the levels, the ‘glue’ that binds knowledge of neuron activity to subcellular and molecular mechanisms, network activity patterns to the activity of and connectivity between single neurons, and behaviour to network activity. This problem is especially glaring when we focus on the relationship between ‘cognitivist’ psychological theories, postulating information-bearing representations and processes operating over their contents, and the activity patterns in networks of neurons. Co-evolution between explanatory levels still seems more like a distant dream than an operative methodology.
It is here that some neuroscientists appeal to ‘computational’ methods. If we examine the way that computational models function in more developed sciences (like physics), we find the resources of dynamical systems constantly employed. Global effects (such as large-scale meteorological patterns) are explained in terms of the interaction of ‘local’ lower-level physical phenomena, but only by dynamical, nonlinear, and often chaotic sequences and combinations. Addressing the interlocking levels of theory and explanation in the mind/brain using computational resources that have worked to bridge levels in more mature sciences might yield comparable results. This methodology is necessarily interdisciplinary, drawing on resources and researchers from a variety of levels, including higher levels like experimental psychology, ‘program-writing’ and ‘connectionist’ artificial intelligence, and philosophy of science.
However, the use of computational methods in neuroscience is not new. Hodgkin, Huxley, and Katz incorporated values of voltage-dependent potassium conductance they had measured experimentally in the squid giant axon into an equation from physics describing the time evolution of a first-order kinetic process. This equation enabled them to calculate best-fit curves for modelled conductance-versus-time data that reproduced the S-shaped (sigmoidal) function suggested by their experimental data. Using equations borrowed from physics, Rall (1959) developed the cable model of dendrites. This theory provided an account of how the various inputs from across the dendritic tree interact temporally and spatially to determine the input-output properties of single neurons. It remains influential today, and has been incorporated into the GENESIS software for programming neurally realistic networks. More recently, David Sparks and his colleagues have shown that a vector-averaging model of activity in neurons of the superior colliculi correctly predicts experimental results about the amplitude and direction of saccadic eye movements. Working with a more sophisticated mathematical model, Apostolos Georgopoulos and his colleagues have predicted the direction and amplitude of hand and arm movements based on the averaged activity of 224 cells in motor cortices. Their predictions have been borne out under a variety of experimental tests. We mention these particular studies only because we are familiar with them. We could easily multiply examples of the fruitful interaction of computational and experimental methods in neuroscience one-hundred-fold. Many of these extend back before ‘computational neuroscience’ was a recognized research endeavour.
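The first-order kinetics mentioned above can be sketched in a few lines. This is a hedged illustration of the general scheme, using the gating exponent of four from the standard Hodgkin-Huxley potassium model; the parameter values are illustrative placeholders, not the fitted squid-axon data:

```python
import math

# Sketch of first-order gating kinetics of the kind Hodgkin, Huxley,
# and Katz used for the potassium conductance. The gating variable n
# relaxes exponentially,
#     n(t) = n_inf - (n_inf - n0) * exp(-t / tau),
# but the conductance g_K(t) = g_bar * n(t)**4 rises as an S-shaped
# (sigmoidal) curve. Parameter values are illustrative, not fitted data.

g_bar, n0, n_inf, tau = 36.0, 0.05, 0.9, 2.0  # mS/cm^2; n dimensionless; ms

def n(t):
    """Exponential relaxation of the gating variable toward n_inf."""
    return n_inf - (n_inf - n0) * math.exp(-t / tau)

def g_K(t):
    """Fourth-power gating turns exponential relaxation into a sigmoid."""
    return g_bar * n(t) ** 4

# Slow start, acceleration, then saturation toward g_bar * n_inf**4:
conductances = [round(g_K(t), 3) for t in (0.0, 1.0, 2.0, 4.0, 8.0)]
```

The fourth power is what converts a plain exponential relaxation into the S-shaped conductance curve the experimental data suggested.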
We've already seen one example, the vector transformation account of neural representation and computation, under active development in cognitive neuroscience. Other approaches using ‘cognitivist’ resources are also being pursued. Many of these projects draw upon ‘cognitivist’ characterizations of the phenomena to be explained. Many exploit ‘cognitivist’ experimental techniques and methodologies. Some even attempt to derive ‘cognitivist’ explanations from cell-biological processes (e.g., Hawkins and Kandel 1984). As Stephen Kosslyn puts it, cognitive neuroscientists employ the ‘information processing’ view of the mind characteristic of cognitivism without trying to separate it from theories of brain mechanisms. Such an endeavour calls for an interdisciplinary community willing to communicate the relevant portions of the mountain of detail gathered in individual disciplines to interested nonspecialists: not just people willing to confer with those working at related levels, but researchers trained in the methods and factual details of a variety of levels. This is a daunting requirement, but it does offer some hope for philosophers wishing to contribute to future neuroscience. Thinkers trained in both the ‘synoptic vision’ afforded by philosophy and the factual and experimental basis of genuine graduate-level science would be ideally equipped for this task. Recognition of this potential niche among graduate programs in philosophy has been slow, but there is some hope that a few programs are taking steps to fill it.
In the final analysis there will be philosophers unprepared to accept that, if a given cognitive capacity is psychologically real, then there must be an explanation of how it is possible for an individual in the course of human development to acquire that capacity, and that such explanations can have a role to play in philosophical accounts of concepts and conceptual abilities. The most obvious basis for such a view would be a Fregean distrust of ‘psychology’ that leads to a rigid division of labour between philosophy and psychology. The operative thought is that the task of a philosophical theory of concepts is to explain what a given concept is, or what a given conceptual ability consists in. This, it is frequently maintained, is something that can be done in complete independence of explaining how such a concept or ability might be acquired. The underlying distinction is one between philosophical questions centring on concept possession and psychological questions centring on concept acquisition. Yet however strictly one adheres to this distinction, it provides no support for rejecting the claim that if it is impossible for an individual to acquire a given capacity, then that capacity cannot be psychologically real. The neo-Fregean distinction is directed against the view that facts about how concepts are acquired have a role to play in explaining and individuating concepts. But a supporter of the acquisition constraint need not dispute this: all that the supporter is committed to is the principle that no satisfactory account of what a concept is should make it impossible to explain how that concept can be acquired. This principle says nothing about the further question of whether psychological explanation has a role to play in a constitutive explanation of the concept, and hence is not in conflict with the neo-Fregean distinction.
The modern world-view assumed that communion with the essences of physical reality and its associated theories was possible, but it made no other provision for the knowing mind. The totality to which modern theory contributes is a view of the universe as an unbroken, undissectible, and undivided dynamic whole: a complicated tissue of events, in which connections of different kinds alternate, overlay, or combine, and in that way determine the texture of the whole. Errol Harris noted, in thinking about the special character of wholeness in modern epistemology, that a unity without internal content is a blank or empty set and is not recognized as a whole. A collection of merely externally related parts does not constitute a whole, in that the parts will not be “mutually adaptive and complementary to one another.”
Wholeness requires a complementary relationship between unity and difference and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all parts that constitute the whole, even though the whole is exemplified in its parts. This principle of order “is nothing real in and of itself. It is the way the parts are organized, and not another constituent additional to those that constitute the totality.”
In a genuine whole, the relationships between the constituent parts must be “internal or immanent” in the parts, as opposed to a more spurious whole in which parts appear to disclose wholeness due to relationships that are external to the parts. The collections of parts that would allegedly constitute the whole in both subjective theory and physical reality are each examples of the spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all so that they interlock and become mutually binding. All the same, this is also consistent with the manner in which we have begun to understand the relation between parts and whole in modern biology.
Much of the ambiguity in attempts to explain the character of wholes in both physical reality and biology derives from the assumption that order exists between or outside parts. But the order of complementary relationships between difference and sameness in any physical event is never external to that event: the connections are immanent in the event. From this perspective, the addition of non-locality to this picture of the dynamic whole is not surprising. The relationship between part, as quantum events apparent in observation or measurement, and the undissectible whole, revealed but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity in modern physics.
If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a single significant whole that evinces progressive order in complementary relations to its parts. Given that this whole exists in some sense within all parts, one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is not unreasonable to conclude, in philosophical terms at least, that the universe is conscious.
But since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representations or descriptions. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated by appeals to scientific knowledge.
A full account of the structure of consciousness will need to address those higher, conceptual forms of consciousness to which little attention has so far been paid. One promising point of departure is the thought that an explanation of everything distinctive about consciousness will emerge out of an account of what it is for a subject to be capable of thinking about himself. But this alone is not sufficient for a proper understanding of the complex phenomenon of consciousness. There are no facts about linguistic mastery that will by themselves determine or explain what might be termed the cognitive dynamics of individual thought processes. What a theory of consciousness needs, rather, is to chart the characteristic features individuating the various distinct conceptual forms of consciousness, in a way that provides a taxonomy showing how those features are determined at the level of content. The promising hope is that these higher forms of consciousness emerge from a rich foundation of non-conceptual representations of thought, and that such forms of conscious thought hold the key not just to an account of how mastery of the relevant conceptual repertoire is achieved, but to a proper understanding of the complexity of self-consciousness and of consciousness overall.
And yet, to believe a proposition is to hold it to be true. The philosophical problems here include discovering whether belief differs from other varieties of assent, such as acceptance; discovering to what extent degrees of belief are possible; understanding the ways in which belief is controlled by rational and irrational factors; and discovering its links with other properties, such as the possession of conceptual or linguistic skills. This last set of problems includes the question of whether prelinguistic infants or animals are properly said to have beliefs.
Traditionally, belief has been of epistemological interest in its propositional guise: ‘S’
believes that ‘p’, where ‘p’ is a proposition toward which an agent, ‘S’, exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mrs. Thatcher, or in a free-market economy, or in God. It is sometimes supposed that all belief is ‘reducible’ to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or in God a matter of your believing that free-market economics are desirable or that God exists.
It is doubtful, however, that non-propositional believing can, in every case, be reduced in this way. Debate on this point has tended to focus on an apparent distinction between belief-that and belief-in, and the application of this distinction to belief in God. Some philosophers have followed Aquinas in supposing that to believe in God is simply to believe that certain truths hold: that God exists, that he is benevolent, etc. Others (e.g., Hick, 1957) argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.
H.H. Price (1969) defends the claim that there are different sorts of belief-in, some, but not all, reducible to beliefs-that. If you believe in God, then, among other things, you believe that God exists. But, according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. One might attempt to analyse this further attitude in terms of additional beliefs-that: ‘S’ believes in ‘X’ just in case (1) ‘S’ believes that ‘X’ exists (and perhaps holds further factual beliefs about ‘X’); (2) ‘S’ believes that ‘X’ is good or valuable in some respect; and (3) ‘S’ believes that ‘X's’ being good or valuable in this respect is itself a good thing. An analysis of this sort, however, fails adequately to capture the further affective component of belief-in. Thus, according to Price, if you believe in God, you believe not merely that certain truths hold; you possess, in addition, an attitude of commitment and trust toward God.
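The reductive analysis (1)-(3) can be put schematically as a predicate over a stock of beliefs-that. Everything below is a hypothetical toy, and Price's objection is precisely that such a predicate, even when satisfied, omits the affective pro-attitude:

```python
# Toy rendering of the reductive analysis that Price finds inadequate:
# conditions (1)-(3) as a predicate over a set of beliefs-that.
# All names are illustrative, not Price's notation.

def believes_in(subject, obj, beliefs_that):
    """The reductive proposal: S believes in X iff S holds (1)-(3)."""
    return (
        (subject, obj, "exists") in beliefs_that             # (1) X exists
        and (subject, obj, "is_valuable") in beliefs_that    # (2) X is good/valuable
        and (subject, obj, "value_is_good") in beliefs_that  # (3) that value is itself good
    )

S_beliefs = {
    ("S", "God", "exists"),
    ("S", "God", "is_valuable"),
    ("S", "God", "value_is_good"),
}

# The predicate is satisfied, yet nothing in it captures the
# attitude of commitment and trust that Price takes to be essential.
print(believes_in("S", "God", S_beliefs))  # -> True
```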
Notoriously, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require further justification not required for the case of belief-that.
Some philosophers have argued that, at least for cases in which belief-in is synonymous with faith (or faith-in), evidential thresholds for constituent propositional beliefs are diminished (Audi, 1990). You may reasonably have faith in God or in a governmental official, even though beliefs about their respective attributes, were you to harbour them, would be evidentially substandard.
Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may retain an undiminished belief, in part because the evidence does not bear on his pro-attitude, so long as this is united with his belief that God exists. The belief may survive epistemic buffeting, and reasonably so, in a way that an ordinarily formed propositional belief would not.
When we act for a reason, is the reason a cause of our action? Is explaining an action by giving the reason for which it is done a kind of causal explanation? The view that it is not cites the existence of a logical relation between an action and its reason: an action would not be the action it is if it did not get its identity from its place in an intentional plan of the agent (it would just be a piece of behaviour, not explicable by reasons at all). On this view, reasons and actions are not ‘loose and separate’ events between which causal relations hold. The contrary view, espoused by Davidson in his influential paper “Actions, Reasons, and Causes” (1963), claims that a reason is a mental event, and that unless this event is causally linked to the acting we could not say that it is the reason for which the action is performed: actions may be performed for one reason rather than another, and the reason that explains the action is the one that was causally efficacious in prompting it.
The distinction between reasons and causes is motivated in good part by a desire to separate the rational from the natural order. Historically, it probably traces back at least to Aristotle's similar (but not identical) distinction between final and efficient causes. More recently, the contrast has been drawn primarily in the domain of action and, secondarily, elsewhere.
Many who have insisted on distinguishing reasons from causes have failed to distinguish two kinds of reason. Consider my reason for sending a letter by express mail. Asked why I did so, I might say I wanted to get it there in a day, or simply, to get it there in a day. Strictly, the reason is expressed by ‘to get it there in a day’. But what this expresses is my reason only because I am suitably motivated: I am in a reason state, wanting to get the letter there in a day. It is reason states - especially wants, beliefs, and intentions - and not reasons strictly so called, that are candidates for causes. The latter are abstract contents of propositional attitudes; the former are psychological elements that play motivational roles.
If reason states can motivate, however, why (apart from confusing them with reasons proper) deny that they are causes? For one thing, they are not events, at least in the usual sense entailing change: they are dispositional states (this contrasts them with occurrences, but does not imply that they admit of dispositional analysis). It has also seemed to those who deny that reasons are causes that the former justify as well as explain the actions for which they are reasons, whereas the role of causes is at most to explain. Another claim is that the relation between reasons (and here reason states are often cited explicitly) and the actions they explain is non-contingent, whereas the relation of causes to their effects is contingent. The ‘logical connection argument’ proceeds from this claim to the conclusion that reasons are not causes.
These arguments are inconclusive. First, even if causes are events, sustaining causation may explain, as where the (state of) standing of a broken table is explained by the (condition of) support of stacked boards replacing its missing legs. Second, the ‘because’ in ‘I sent it by express because I wanted to get it there in a day’ is in some sense causal; indeed, where it is not so taken, this purported explanation would at best be construed as only rationalizing, rather than justifying, my action. And third, if any non-contingent connection can be established between, say, my wanting something and the action it explains, there are close causal analogues, such as the connection between bringing a magnet to iron filings and their gravitating to it: this is, after all, a ‘definitional’ connection, expressing part of what it is to be magnetic, yet the magnet causes the filings to move.
There is, then, a clear distinction between reasons proper and causes, and even between reason states and event causes; but the distinction cannot be used to show that the relation between reasons and the actions they justify is in no way causal. Precisely parallel points hold in the epistemic domain (and for all propositional attitudes, since they all similarly admit of justification, and explanation, by reasons). Suppose my reason for believing that you received my letter today is that I sent it by express yesterday. My reason, strictly speaking, is that I sent it by express yesterday; my reason justifies the further proposition I believe for which it is my reason; and my reason state, my evidence belief, both explains and justifies my belief that you received the letter today. I can say that what justifies that belief is (in fact) that I sent the letter by express yesterday, but this statement expresses my believing that evidence proposition; if I do not believe it, then my belief that you received the letter is not justified. It is not justified by the mere truth of that proposition (and can be justified even if that proposition is false).
Similarly, there are, for beliefs as for actions, at least five main kinds of reasons: (1) normative reasons, reasons (objective grounds) there are to believe (say, to believe that there is a greenhouse effect); (2) person-relative normative reasons, reasons for (say) me to believe; (3) subjective reasons, reasons I have to believe; (4) explanatory reasons, reasons why I believe; and (5) motivating reasons, reasons for which I believe. (1) and (2) are propositions and thus not serious candidates to be causal factors. The states corresponding to (3) may or may not be causal elements. Reasons why, case (4), are always (sustaining) explainers, though not necessarily even prima facie justifiers, since a belief can be causally sustained by factors with no evidential value. Motivating reasons, case (5), are both explanatory and possess whatever minimal justificatory power (if any) a reason must have to be a basis of belief.
Finally, the natural tendency of the mind is to be restless. Thinking seems to be a continuous and ongoing activity. The restless mind lets thoughts come and go incessantly from morning till night. They give us no rest for a moment. Most of these thoughts are not exactly invited; they just come, occupy our attention for a while, and then disappear. Our true essence can be likened to the sky, and our thoughts are the clouds. The clouds drift through the sky, hide it for a while and then disappear. They are not permanent. So are thoughts. Because of their incessant movement they hide our essence, our core, and then move away to make room for other thoughts. Thoughts resemble the waves of the ocean, always in a state of motion, never standing still. These thoughts arise in our mind due to many reasons. There is a tendency on the part of the mind to analyse whatever it contacts. It likes to compare, to reason, and to ask questions. It constantly indulges in these activities.
Everyone's mind has a kind of a filter, which allows it to accept and let in certain thoughts, and to reject others. This is the reason why some people occupy their minds with thoughts about a certain subject, while others don't even think about the same subject.
Why are some people attracted to football while others are not? Why do some love and admire a certain singer while others don't? Why do some people think incessantly about a certain subject that others never think about? It is all due to this inner filter, an automatic, unconscious filter. We never stop and say to certain thoughts 'come' and to others 'go away'; it happens automatically. This filter was built over the years. It was, and still is, built constantly by the suggestions and words of the people we meet, and as a consequence of our daily experiences.
Every event, happening, or word has an effect on the mind, which produces thoughts accordingly. The mind is like a thought factory, working in shifts day and night, producing thoughts. The mind also gets thoughts directly from the surrounding world. The space around us is full of thoughts, which we constantly pick up, let pass through our minds, and then exchange for new ones. It is like catching fish in the ocean, throwing them back into the water, and then catching new ones.
This activity of the restless mind occupies our attention all the time. Now our attention is on this thought, then on another one. We devote a great deal of energy and attention to these passing thoughts. Most of them are not important; they just waste our time and energy.
This is enslavement. It is as if some outside power is always putting a thought in front of us to pay attention to, like a relentless boss constantly giving us a job to do. There is no real freedom. We enjoy freedom only when we are able to still the mind and choose our thoughts. There is freedom when we can decide which thought to think and which to reject. We live in freedom when we are able to stop the incessant flow of thoughts.
Stopping the flow of thoughts may look infeasible, but constant training in concentration exercises and meditation eventually leads to this condition. The mind is like an untamed animal: it can be taught self-discipline and obedience to a higher power. Concentration and meditation show us in a clear and practical manner that we, the inner true essences, are this controlling power. We are the bosses of our minds.
That is to say, no assumptions are to be taken for granted, and no thoughtful conclusion should be lightly dismissed as fallacious, in a study assembled around the phenomenon of consciousness. This becomes all the more important when, exercising whatever ingenuous caution is humanly possible, we try to move ahead toward a positive conclusion on the topic.
Many writers, along with a few well-known new-age gurus, have played fast and loose with firm interpretation, grounding the mental in some vague, informal sense of cosmic consciousness. Such treatments are all too often shelved in the new-age section of a commercial bookstore, and those who purchase them expecting rigour will be quite disappointed.
What makes our species unique is the ability to construct a virtual world in which the real world can be imagined and manipulated in abstract forms and ideas. Evolution has produced hundreds of thousands of species with brains, among them tens of thousands of species with complex behavioural and learning abilities. There are also many species in which sophisticated forms of group communication have evolved. For example, birds, primates, and social carnivores use extensive vocal and gestural repertoires to structure behaviour in large social groups. Although we share roughly 98 percent of our genes with our primate cousins, the course of human evolution widened the cognitive gap between us and all other species, including those cousins, into a yawning chasm.
Research in neuroscience has shown that language processing is a staggeringly complex phenomenon that places incredible demands on memory and learning. Language functions extend, for example, into all major lobes of the neocortex: auditory information is associated with the temporal area; tactile information with the parietal area; and attention, working memory, and planning with the frontal cortex of the left, or dominant, hemisphere. The left prefrontal region is associated with verb and noun production tasks and with the retrieval of words representing action. Broca’s area, next to the mouth-tongue region of the motor cortex, is associated with vocalization in word formation, and Wernicke’s area, by the auditory cortex, is associated with sound analysis in the sequencing of words.
Lower brain regions, like the cerebellum, have also evolved in our species to help in language processing. Until recently, the cerebellum was thought to be exclusively involved with automatic or preprogrammed movements, such as throwing a ball, jumping over a high hurdle, or playing notes on a musical instrument. Imaging studies in neuroscience suggest, however, that the cerebellum is also activated during speech, most strongly when the subject is making difficult word associations. This suggests that the cerebellum plays a role in such associations by providing access to automatic word sequences and by supporting rapid shifts in attention.
The midbrain and brain stem, situated on top of the spinal cord, coordinate the many input and output systems that play a crucial role in the interplay through which we have adaptively adjusted and coordinated our communicative functions. Vocalization has a special association with the midbrain, which coordinates the interaction of the oral and respiratory tracts necessary to make speech sounds. Since vocalization requires synchronous activity among oral, vocal, and respiratory muscles, these functions probably connect to a central site: the central gray area of the brain stem. This central gray area links the reticular nuclei and brain-stem motor nuclei to comprise a distributed network for sound production. While human speech is dependent on structures in the cerebral cortex, and on rapid movement of the oral and vocal muscles, this is not true of vocalization in other mammals.
Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. Language processing is clearly not accomplished by stand-alone or unitary modules that evolved with the addition of separate modules eventually wired together on some neural circuit board.
Similarly, individual linguistic symbols are given over to clusters of distributed brain areas and are not confined to any particular area. The specific sound patterns of words may be produced in dedicated regions; all the same, the symbolic and referential relationships between words are generated through a convergence of neural codes from different and independent brain regions. Word comprehension and retrieval result from combinations of simpler associative processes in several separate brain regions that require input from other regions. The symbolic meaning of words, like the grammar that is essential for the construction of meaningful relationships between strings of words, is an emergent property of the complex interaction of several brain parts.
While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressure in this new ecological niche favoured pre-adaptive changes required for symbolic communication. Nevertheless, as this communication resulted in increasingly complex behaviour, social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.
Although male and female hominids favoured pair bonding and created more complex social organizations in the interests of survival, the interplay between social evolution and biological evolution changed the terms of survival radically. The enhanced ability to use symbolic communication to structure social interaction eventually made this communication the largest determinant of survival. Since this communication was based on a symbolic vocalization that required the evolution of neural mechanisms and processes that did not evolve in any other species, it marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.
Nonetheless, if we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While one mode of understanding the situation necessarily displaces the other, both are required to achieve a complete understanding of the situation.
Most experts agree that our ancestors acquired spoken language based on complex grammar and syntax between two hundred thousand and one hundred thousand years ago. The mechanisms in the human brain that allowed for this great achievement, however, clearly evolved over great spans of time. In biology textbooks, the lists of prior adaptations that enhanced the ability of our ancestors to use communication normally include those tending to increase intelligence, to alter oral and auditory abilities, and to separate or localize functional representations on the two sides of the brain, along with the evolution of some innate or hard-wired grammar. However, when we look at how our ability to use language could actually have evolved over the entire course of hominid evolution, the process seems both more basic and more counterintuitive than we had previously imagined.
Although we share some aspects of vocalization with our primate cousins, the mechanisms of human vocalization are quite different and have evolved over great spans of time. Incremental increases in hominid brain size over the last 2.5 million years enhanced cortical control over the larynx, which originally evolved to prevent food and other particles from entering the windpipe or trachea; this eventually contributed to the use of vocal symbolization. Humans have more voluntary motor control over sound produced in the larynx than any other vocal species, and this control is associated with higher brain systems involved in skeletal muscle control, as opposed to merely visceral control. As a result, humans have direct cortical motor control over phonation and oral movement, while chimps do not.
The larynx in modern humans is positioned comparatively low in the throat, which significantly increases the range and flexibility of sound production. The low position of the larynx allows greater changes in the volume of the resonant chamber formed by the mouth and pharynx and makes it easier to shift sounds to the mouth and away from the nasal cavity. As a result, the sounds that comprise the vowel components of speech became much more variable, including extremes in resonance combinations such as the “ee” sound in “tree” and the “aw” sound in “flaw.” Equally important, the repositioning of the larynx dramatically increases the ability of the mouth and tongue to modify vocal sounds. This shift in the larynx also makes it more likely that food and water passing over the larynx will enter the trachea, which explains why humans are more prone to choking. Yet this disadvantage, which could have caused the shift to be selected against, was clearly outweighed by the advantage of being able to produce all the sounds used in modern language systems.
Some have argued that this removal of constraints on vocalization suggests that spoken language based on complex symbol systems emerged quite suddenly in modern humans only about one hundred thousand years ago. It is, however, far more likely that language use began with very primitive symbolic systems and evolved over time into increasingly complex systems. The first symbolic systems were not full-blown language systems, and they were probably not as flexible and complex as the vocal calls and gestural displays of modern primates. The first users of primitive symbolic systems probably coordinated most of their social communications with call and display behaviour like that of modern apes and monkeys.
Critically important to the evolution of enhanced language skills is that behavioural adaptations preceded and situated the biological changes. This represents a reversal of the usual course of evolution, where biological change precedes behavioural adaptation. When the first hominids began to use stone tools, they probably did so in a very haphazard fashion, drawing on their flexible ape-like learning abilities. Still, the use of this technology over time opened a new ecological niche where selective pressures occasioned new adaptations. As tool use became more indispensable for obtaining food and organizing social behaviour, mutations that enhanced the use of tools probably functioned as a principal source of selection for both bodies and brains.
The first stone choppers appear in the fossil record about 2.5 million years ago, and they appear to have been fabricated with a few sharp blows of stone on stone. These primitive tools, which were hand-held and probably used to cut flesh and to chip bone to expose the marrow, were likely created by Homo habilis, the first large-brained hominid. Stone-tool making is obviously a skill passed on from one generation to the next by learning, as opposed to a physical trait passed on genetically. After these tools became critical to survival, this introduced selection for learning abilities that did not exist in other species. Although the early tool makers may have had brains roughly comparable to those of modern apes, they were already undergoing the process of becoming adapted for symbol learning.
The first symbolic representations were probably associated with social adaptations that were quite fragile, and any support that could reinforce these adaptations in the interest of survival would have been favoured by evolution. The expansion of the forebrain in Homo habilis, particularly the prefrontal cortex, was one of the core adaptations. Increased connectivity to brain regions involved in language processing enhanced this adaptation over time.
It is easy to imagine why incremental improvements in symbolic representation provided a selective advantage. Symbolic communication probably enhanced cooperation between mothers and infants, allowed foraging techniques to be more easily learned, served as the basis for better coordinating scavenging and hunting activities, and generally improved the prospect of attracting a mate. As the list of domains in which symbolic communication was used grew longer over time, this probably produced new selective pressures that made the communication more elaborate. As more functions became dependent on this communication, those who failed at symbol learning, or could use symbols only awkwardly, were less likely to pass on their genes to subsequent generations.
The crude language of the earliest users of symbols must have been heavily supplemented by gestures and nonsymbolic vocalizations, and their spoken language only gradually became a relatively independent and closed cooperative system. Only after hominids who used symbolic communication had evolved did symbolic forms progressively take over functions previously served by nonsymbolic forms. This is reflected in modern languages: the structure of syntax often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.
The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world itself. The idea is of a perceivable, objective spatial world that causes the subject's perceptions, perceptions that vary with the subject's changing position within an essentially stable world. The idea that there is an objective world is thus bound up with the idea that the subject is somewhere, and where he is is given by what he can perceive.
While the brain that evolved this capacity was obviously a product of Darwinian evolution, it was Darwin who realized that the different chances of survival of differently endowed offspring could account for the natural evolution of species. Nature “selects” those members of a species best adapted to the environment in which they find themselves, just as human animal breeders may select for desirable traits in their livestock, and thereby control the evolution of the kind of animal they wish. In the phrase of Spencer, nature guarantees the “survival of the fittest.” The Origin of Species was more successful in marshalling the evidence for evolution than in providing a convincing mechanism for genetic change; Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the “gene” as the unit of inheritance that the synthesis known as “neo-Darwinism” became the orthodox theory of evolution.
The solution to the mystery of how evolution by natural selection can shape sophisticated mechanisms, mechanisms in which the body as a whole seems evidently to exist for the sake of some complex action, is found in the workings of natural selection itself. The process is fundamentally very simple: natural selection occurs whenever genetically influenced variation among individuals affects their survival and reproduction. If a gene codes for characteristics that result in fewer viable offspring in future generations, that gene is gradually eliminated. For instance, genetic mutations that increase vulnerability to infection, or cause foolish risk taking or a lack of interest in sex, will never become common. On the other hand, genes that confer resistance to infection, appropriate risk taking, and success in choosing fertile mates are likely to spread in the gene pool, even if they have substantial costs.
A classic example is the spread of a gene for dark wing colour in a British moth population living downwind from a major source of air pollution. Pale moths were conspicuous on smoke-darkened trees and easily caught by birds, while a rare mutant form of the moth, whose colour closely matched that of the bark, escaped the predators’ beaks. As the tree trunks became darkened, the mutant gene spread rapidly and largely displaced the gene for pale wing colour. The point is that natural selection involves no plan, no goal, and no direction, just genes increasing and decreasing in frequency depending on whether individuals with those genes have, compared with other individuals, greater or lesser reproductive success.
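The dynamics of the moth example can be sketched as a toy simulation. The fitness values (1.0 versus 0.7) and the starting allele frequency below are illustrative assumptions, not measured values from the historical moth data:

```python
# Toy haploid model of the moth scenario above. The dark-wing allele
# starts rare; on soot-darkened trees, pale moths are eaten more often,
# so carriers of the dark allele leave more surviving offspring.
# Fitness values and starting frequency are assumptions for illustration.

def next_frequency(p, w_dark=1.0, w_pale=0.7):
    """One generation of selection on an allele at frequency p."""
    mean_fitness = p * w_dark + (1 - p) * w_pale
    return p * w_dark / mean_fitness

p = 0.01  # the dark allele begins as a rare mutant
for generation in range(60):
    p = next_frequency(p)

print(round(p, 3))  # after 60 generations the dark allele is near fixation
```

No plan or goal appears anywhere in the model: the allele frequency simply rises because carriers reproduce more, which is the whole of the mechanism the text describes.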
The simplicity of natural selection has been obscured by many misconceptions. For instance, Herbert Spencer’s nineteenth-century catch phrase “survival of the fittest” is widely thought to summarize the process, but it has in fact given rise to several misunderstandings. First, survival is of no consequence by itself. This is why natural selection has created some organisms, such as salmon and annual plants, that reproduce only once, then die. Survival increases fitness only insofar as it increases later reproduction. Genes that increase lifetime reproduction will be selected for even if they result in reduced longevity. Conversely, a gene that decreases total lifetime reproduction will obviously be eliminated by selection even if it increases an individual’s survival.
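The trade-off just described, where a gene that shortens life can still be favoured, can be made concrete with a back-of-the-envelope comparison; every number below is invented purely for illustration:

```python
# Illustrative comparison of two hypothetical genotypes: one lives fast
# and dies young, the other lives longer but reproduces more slowly.
# Selection tracks lifetime reproduction, not lifespan; all figures
# here are assumptions made up for this sketch.

def lifetime_offspring(offspring_per_year, years_alive):
    """Total offspring over a lifetime in this simplified model."""
    return offspring_per_year * years_alive

live_fast = lifetime_offspring(offspring_per_year=3, years_alive=2)
live_long = lifetime_offspring(offspring_per_year=1, years_alive=4)

# Despite the shorter life, the live-fast genotype leaves more offspring
# and so is the one favoured by selection.
print(live_fast, live_long)  # 6 4
```

Halving the lifespan while tripling the yearly output still wins, which is exactly why salmon-style reproduce-once-then-die strategies can evolve.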
Further confusion arises from the ambiguous meaning of “fittest.” The fittest individuals in the biological sense are not necessarily the healthiest, strongest, or fastest. In today’s world, as in many of those of the past, individuals of outstanding athletic accomplishment need not be the ones who produce the most grandchildren, a measure that should be roughly correlated with fitness. To someone who understands natural selection, it is no surprise that parents are concerned about their children’s reproduction.
A gene or an individual cannot be called “fit” in isolation but only with reference to a particular species in a particular environment. Even in a single environment, every gene involves compromise. Consider a gene that makes rabbits more fearful and thereby helps to keep them from the jaws of foxes. Imagine that half the rabbits in a field have this gene. Because they do more hiding and less eating, these timid rabbits might be, on average, somewhat less well fed than their bolder companions. If, of a hundred rabbits hunkered down in the March swamps awaiting spring, two-thirds of the fearful ones starve to death while this is the fate of only one-third of the rabbits who lack the gene for fearfulness, the gene is being selected against. It might be nearly eliminated by a few harsh winters. Milder winters or an increased number of foxes could have the opposite effect; it all depends on the current environment.
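The arithmetic in the rabbit example can be made explicit. The per-winter starvation rates are the ones given above; the multi-winter extrapolation is an illustrative assumption:

```python
# The rabbit example above, made numerical. Start with 100 fearful and
# 100 bold rabbits. In a harsh winter two-thirds of the fearful rabbits
# starve but only one-third of the bold ones (the rates in the text);
# running several winters in a row is an assumption for illustration.

def harsh_winter(fearful, bold):
    """Survivors after one winter at the starvation rates above."""
    return fearful - (2 * fearful) // 3, bold - bold // 3

fearful, bold = harsh_winter(100, 100)
print(fearful, bold)  # 34 67: one winter leaves the fearful outnumbered 2:1

# A few harsh winters in a row nearly eliminate the fearful line, as the
# text suggests; milder winters or more foxes could reverse the trend.
for _ in range(2):
    fearful, bold = harsh_winter(fearful, bold)
print(fearful, bold)  # 4 30
```

The same code with the rates swapped would select against boldness instead, which is the sense in which fitness is always relative to the current environment.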
The version of an evolutionary ethic called “social Darwinism” emphasizes the struggle for natural selection and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competitive and aggressive relations between people in society, or between societies themselves. More recently, the relation between evolution and ethics has been rethought in the light of biological discoveries concerning altruism and kin selection.
The most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, as this communication resulted in increasingly complex and condensed behaviour, social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.
Because this communication was based on symbolic vocalization requiring the evolution of neural mechanisms and processes that did not evolve in any other species, it marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.
If the emergent reality in this mental realm cannot be reduced to, or entirely explained as, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete account of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. No scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actualized experience of that thought or feeling as an emergent aspect of global brain function.
Even if we include both aspects of biological reality, movement toward a more complex order in biological reality is associated with the emergence of new wholes that are greater than the sum of their parts. The entire biosphere is a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complicated and complex systems, marked by the appearance of a new and profound complementarity in the relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. Even so, it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.
If we also concede that an indivisible whole contains, by definition, no separate parts, and that a phenomenon can be assumed to be “real” only when it is an “observed” phenomenon, we are led to even more interesting conclusions. The indivisible whole whose existence is inferred from the results of these experiments cannot in principle be itself the subject of scientific investigation, and there is a simple reason why this is the case. Science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, we confront an “event horizon” of knowledge where science can say nothing about the actual character of this reality. If this is a property of the entire universe, then we must also conclude that undivided wholeness exists on the most primary and basic level in all aspects of physical reality. What we deal with in science per se, however, are manifestations of this reality, which are invoked or “actualized” in acts of observation or measurement. Since the reality that exists between the space-like separated regions is a whole whose existence can only be inferred in experience, as opposed to proven by experiment, the correlations between the particles, and the sum of these parts, do not constitute the “indivisible” whole. Physical theory allows us to understand why the correlations occur. Nevertheless, it cannot in principle disclose or describe the actual character of the indivisible whole.
The scientific implications of this extraordinary relationship between parts (where to know what it is like to have an experience is to know its qualia) and the indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When we factor this into our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.
All that is required to embrace the alternative view of the relationship between mind and world that is consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality has an actual existence independent of human observers or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn should appear self-evident in logical and philosophical terms. Nor is it necessary to attribute any extra-scientific properties to the whole in order to embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. All that is required is that we distinguish between what can be “proven” in scientific terms and what can be reasonably “inferred” in philosophical terms based on the scientific evidence.
Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally have expertise on only one side of the two-culture divide. Perhaps more important, many potential threats to the human future, such as environmental pollution, arms development, overpopulation, the spread of infectious diseases, poverty, and starvation, can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason: the implications of the amazing new fact of nature known as non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. The intent here is not to suggest that what is most important about this background can be understood in its absence, though those who do not wish to struggle with the background material should feel free to pass over it. The hope is that this material will provide a common ground for understanding when we meet it again in what follows.
Another aspect of the evolution of a brain that allowed us to construct symbolic universes based on a complex language system, one particularly relevant for our purposes, concerns consciousness of self. Consciousness of self as an independent agency or actor is predicated on a fundamental distinction or dichotomy between this self and other selves. The self, as it is constructed in human subjective reality, is perceived as having an independent existence and a self-referential character in a mental realm separate and distinct from the material realm. It was the assumed separation between these realms that led Descartes to posit his famous dualism in understanding the nature of consciousness in the mechanistic classical universe.
In a thought experiment, instead of bringing about a course of events, as in a normal experiment, we are invited to imagine one. We may then be able to “see” that some result follows, or that some description is appropriate, or our inability to describe the situation may itself have some consequence. Thought experiments played a major role in the development of physics: for example, Galileo probably never dropped two balls of unequal weight from the Leaning Tower of Pisa in order to refute the Aristotelian view that a heavy body falls faster than a lighter one. He merely asked us to imagine a heavy body made into the shape of a dumbbell, with the connecting rod made gradually thinner until it is finally severed. The thing is one heavy body until the last moment, and then two light ones, but it is incredible that this final snip alters the velocity dramatically. Other famous examples include the Einstein-Podolsky-Rosen thought experiment. In the philosophy of personal identity, our apparent capacity to imagine ourselves surviving drastic changes of body, brain, and mind is a permanent source of difficulty. There is no consensus on the legitimate place of thought experiments, either as substitutes for real experiments or as reliable devices for discerning possibilities. Thought experiments one dislikes are sometimes called intuition pumps.
For familiar reasons, supposing that people are characterized by their rationality is common, and the most evident display of our rationality is our capacity to think. This is the rehearsal in the mind of what to say, or what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no a priori reason that their deliberations should take any more verbal a form than their actions. It is permanently tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium that represents aspects of the world. However, the model has been attacked, notably by Wittgenstein, as insufficient, since no such presence could carry a guarantee that the right use would be made of it. And such an inner presence seems unnecessary, since an intelligent outcome might in principle arise without it.
In the philosophy of mind, as in ethics, the treatment of animals exposes major problems. If other animals differ from human beings, how is the difference to be characterized: do animals think and reason, or have thoughts and beliefs? For philosophers as different as Aristotle and Kant, the possession of reason separates humans from animals, and alone allows entry to the moral community.
For Descartes, animals are mere machines and lack consciousness or feelings. In the ancient world the rationality of animals was defended with the example of Chrysippus’ dog. This animal, tracking prey, comes to a crossroads with three exits, and without pausing to pick up the scent, reasons, according to Sextus Empiricus: the animal went either by this road, or by that, or by the other; it did not go by this or by that; therefore, it went by the other. The ‘syllogism of the dog’ was discussed by many later writers, since in Stoic cosmology animals were supposed to occupy a place on the great chain of being somewhat below human beings, the only terrestrial rational agents. Philo Judaeus wrote a dialogue attempting to show, against Alexander of Aphrodisias, that the dog’s behaviour does not exhibit rationality, but simply shows it following the scent; by way of response Alexander has the animal jump down a shaft (where the scent would not have lingered). Plutarch sides with Philo; Aquinas discusses the dog; and scholastic thought in general was quite favourable to brute intelligence (in medieval times animals were commonly made to stand trial for various offences). In the modern era Montaigne uses the dog to remind us of the frailties of human reason; Rorarius undertook to show not only that beasts are rational, but that they make better use of reason than people do. James I of England defends the syllogizing dog, and Henry More and Gassendi both take issue with Descartes on the matter. Hume is an outspoken defender of animal cognition, but with the rise of the view that language is the essential manifestation of mentality, animals’ silence began to count heavily against them, and they are denied thoughts outright by, for instance, Davidson.
Dogs are frequently shown in pictures of philosophers, as symbols of assiduity and fidelity.
Descartes’s first work, the Regulae ad Directionem Ingenii (1628/9), was never completed. In Holland between 1628 and 1649, Descartes first wrote, and then cautiously suppressed, Le Monde (1634), and in 1637 produced the Discours de la méthode as a preface to the treatise on mathematics and physics in which he introduced the notion of Cartesian co-ordinates. His best-known philosophical work, the Meditationes de Prima Philosophia (Meditations on First Philosophy), together with objections by distinguished contemporaries and replies by Descartes (the Objections and Replies), appeared in 1641. The authors of the objections are: first set, the Dutch theologian Caterus; second set, Mersenne; third set, Hobbes; fourth set, Arnauld; fifth set, Gassendi; and sixth set, Mersenne. The second edition (1642) of the Meditations included a seventh set by the Jesuit Pierre Bourdin. Descartes’s penultimate work, the Principia Philosophiae (Principles of Philosophy), published in 1644, was designed partly for use as a theological textbook. His last work was Les Passions de l’âme (The Passions of the Soul), published in 1649. He died in Sweden, where he contracted pneumonia, allegedly through being required to break his normal habit of late rising in order to give lessons at 5:00 a.m. His last words are supposed to have been “Ça, mon âme, il faut partir” (So, my soul, it is time to part).
All the same, Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible.
Cartesian doubt is the method of investigating the extent of knowledge and its basis in reason or experience used by Descartes in the first two Meditations. It attempts to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The secure point is eventually found in the celebrated “Cogito ergo sum”: I think, therefore I am. By locating the point of certainty in my awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries, in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes rigorously and rightly saw that it takes divine dispensation to certify any relationship between the two realms thus divided, and in order to prove the reliability of the senses he invokes a “clear and distinct perception” of highly dubious proofs of the existence of a benevolent deity. This has not met with general acceptance: as Hume drily puts it, “to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit.”
Descartes’s notorious denial that non-human animals are conscious is a stark illustration of the difficulties his dualism raises. In his conception of matter Descartes also gives preference to rational cogitation over anything from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but ultimately an entirely geometrical one, with extension and motion as its only physical nature.
Although the structure of Descartes’s epistemology, theory of mind, and theory of matter has been rejected many times, its relentless exposure of the hardest issues, its exemplary clarity, and even its initial plausibility all contrive to make him the central point of reference for modern philosophy.
The term instinct (Lat. instinctus, impulse or urge) implies innately determined behaviour, inflexible in the face of changing circumstance and outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason but by instinct was common to Aristotle and the Stoics, and the inflexibility of instinctive behaviour was used in defence of this position as early as Avicenna. A continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are triggered by specific environments is a guiding principle of ethology. In this sense being social may be instinctive in human beings; yet, given what we now know about the evolution of human language abilities, our real or actualized self is clearly not imprisoned in our minds.
The self is implicitly a part of the larger whole of biological life; the human observer derives its existence from embedded relations to this whole, and constructs its reality with evolved mechanisms that exist in all human brains. This suggests that any sense of the “otherness” of self and world is an illusion, one that disguises the relations between the part and the whole of which it is an expression. The self, so related to the whole, is a biological reality. A proper definition of this whole, of course, cannot be confined to a fragment: it must include the evolution of the larger indivisible whole, the cosmos and the unbroken evolution of all life from the first self-replicating molecule that was the ancestor of DNA. It should also include the complex interactions among all the parts of biological reality from which self-regulating wholes emerge. The whole, in turn, sustains the properties that allow for the existence of the parts.
Complications arise because ordinary language and the coordinate systems of mathematics have conditioned the ways in which physical reality and metaphysical concerns are described. In the history of mathematics, the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed. The first scientific revolution of the seventeenth century allowed scientists to frame their understanding of physical reality in the classical paradigm, with the result that the stark Cartesian division between mind and world became one of the most characteristic features of Western thought. What follows is not, however, another strident and ill-mannered diatribe against our misunderstandings; it draws instead upon the principles of physical reality and the epistemological foundations of physical theory to suggest a picture of undivided wholeness.
The subjectivity of our mind affects our perceptions of the world that is held to be objective by natural science. Both aspects, mind and matter, may be seen as individualized forms belonging to the same underlying reality.
Our everyday experience confirms the apparent fact that there is a dual-valued world of subjects and objects. We, as conscious, experiencing beings with personalities, are the subjects, whereas everything for which we can come up with a name or designation seems to be an object, that which is opposed to us as subjects. Physical objects are only part of the object-world. There are also mental objects, objects of our emotions, abstract objects, religious objects, and so on. Language objectifies our experience. Experiences per se are purely sensational and do not make a distinction between object and subject. Only verbalized thought reifies the sensations by conceptualizing them and pigeonholing them into the given entities of language.
Some thinkers maintain that subject and object are only different aspects of experience: I can experience myself as subject in the act of self-reflection. The fallacy of this argument is obvious: being a subject implies having an object. We cannot experience something consciously without the mediation of understanding and mind. Our experience is already conceptualized at the time it comes into our consciousness. Our experience is negative insofar as it destroys the original pure experience. In a dialectical process of synthesis, the original pure experience becomes an object for us. The common state of our mind is only capable of apperceiving objects. Objects are reified negative experience. The same is true for the objective aspect of this theory: by objectifying myself I do not dispense with the subject; the subject is causally and apodeictically linked to the object. As soon as I make an object of anything, I have to realize that it is the subject which objectifies. Only the subject can do that. Without the subject there are no objects, and without objects there is no subject. This interdependence, however, is not to be understood in terms of a dualism in which object and subject are really independent substances. Since the object is only created by the activity of the subject, and the subject is not a physical entity but a mental one, we have to conclude that the subject-object dualism is purely mentalistic.
The Cartesian dualism posits the subject and the object as separate, independent, and real substances, both of which have their ground and origin in the highest substance of God. Cartesian dualism, however, contradicts itself: the very fact that Descartes posits the “I,” that is the subject, as the only certainty defies materialism, and thus the concept of some “res extensa.” The physical thing is only probable in its existence, whereas the mental thing is absolutely and necessarily certain. The subject is superior to the object. The object is only derived; the subject is original. This makes the object not only inferior in its substantive quality and in its essence, but relegates it to a level of dependence on the subject. The subject recognizes that the object is a “res extensa,” and this means that the object cannot have essence or existence without acknowledgment by the subject. The subject posits the world in the first place, and the subject is posited by God. Apart from the problem of interaction between these two different substances, Cartesian dualism is therefore not eligible for explaining and understanding the subject-object relation.
By denying Cartesian dualism and resorting to monistic theories such as extreme idealism, materialism, or positivism, the problem is not resolved either. What the positivists did was simply to verbalize the subject-object relation in linguistic forms: it was no longer a metaphysical problem, but only a linguistic one, since our language has formed this subject-object dualism. These thinkers are superficial, because they do not see that in the very act of their analysis they inevitably think in the mind-set of subject and object. By relativizing subject and object in terms of language and analytical philosophy, they avoid the elusive and problematical opposition of subject and object, which has been the fundamental question in philosophy ever since. Shunning these metaphysical questions is no solution. Excluding something by reducing it to a more material and verifiable level is not only pseudo-philosophy but a depreciation and decadence of the great philosophical ideas of mankind.
Therefore, we have to come to grips with the idea of subject and object in a new manner. We experience this dualism as a fact in our everyday lives. Every experience is subject to this dualistic pattern. The question, however, is whether this underlying pattern of subject-object dualism is real or only mental. Science assumes it to be real. This assumption does not prove the reality of our experience, but only that with this method science is most successful in explaining empirical facts. Mysticism, on the other hand, believes that there is an original unity of subject and object. To attain this unity is the goal of religion and mysticism: man has fallen from this unity by disgrace and by sinful behaviour, and his task is now to find his way back and strive toward this highest fulfilment. Yet, on the conclusion reached above, are we not forced to admit that the mystic way of thinking is also only a pattern of the mind, and that mystics, like scientists, have their own frame of reference and methodology for explaining the supra-sensible facts most successfully?
If we assume mind to be the originator of the subject-object dualism, then we can confer no more reality on the physical than on the mental aspect, nor can we deny the one in terms of the other.
The crude language of the earliest users of symbols must have been largely gestural, supplemented by nonsymbolic vocalizations. Their spoken language probably became relatively independent as a closed cooperative system only after hominids began to use symbolic communication, as vocal symbolic forms progressively took over functions served by non-vocal ones. This is reflected in modern languages. The structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.
The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. Central to this is the idea of a perceivable, objective spatial world that causes the subject’s ideas: his perceptions change with his changing position within the world and with the more or less stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere are connected, and where he is, is given by what he can perceive.
Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. Language processing is clearly not accomplished by stand-alone or unitary modules that evolved with the addition of separate modules that were eventually wired together on some neural circuit board.
While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, this communication resulted directly in increasingly complex and condensed social behaviour. Social evolution began to take precedence over physical evolution in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.
Because this communication was based on symbolic vocalization, it required the evolution of neural mechanisms and processes that did not evolve in any other species. This marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.
If the emergent reality in this mental realm cannot be reduced to, or entirely explained in terms of, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete account of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. And no scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actualized experience of that thought or feeling as an emergent aspect of global brain function.
If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While one mode of understanding the situation necessarily displaces the other, both are required to achieve a complete understanding of the situation.
Even if we cannot reduce these two aspects of biological reality to each other, movement toward a more complex order in biological reality is associated with the emergence of new wholes that are greater than the sum of their parts. Indeed, the entire biosphere is a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complicated and complex systems, marked by the appearance of a new and profound complementarity in the relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. It does, however, make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.
If we also concede that an indivisible whole contains, by definition, no separate parts, and that a phenomenon can be assumed to be “real” only when it is an “observed” phenomenon, we are led to more interesting conclusions. The indivisible whole whose existence is inferred from the results of these experiments cannot in principle be itself the subject of scientific investigation. There is a simple reason why this is the case. Science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, we confront an “event horizon” of knowledge where science can say nothing about the actual character of this reality. If non-locality is a property of the entire universe, then we must also conclude that an undivided wholeness exists on the most primary and basic level in all aspects of physical reality. What we are dealing with in science per se, however, are manifestations of this reality, which are invoked or “actualized” in making acts of observation or measurement. Since the reality that exists between the space-like separated regions is a whole whose existence can only be inferred in experiment, the correlations between the particles, and the sum of these parts, do not constitute the “indivisible” whole. Physical theory allows us to understand why the correlations occur. Nevertheless, it cannot in principle disclose or describe the actualized character of the indivisible whole.
The scientific implications of this extraordinary relationship between parts (quanta) and indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When these factors are taken into account in our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.
All that is required to embrace the alternative view of the relationship between mind and world that are consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality or has an actual existence independent of human observers or any act of observation, epistemological realism assumes that progress in science requires strict adherence to scientific mythology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn should appear self-evident in logical and philosophical terms. Attributing any extra-scientific properties to the whole to understand is also not necessary and embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. This is, in this that our distinguishing character between what can be “proven” in scientific terms and what can be reasonably “inferred” in philosophical terms based on the scientific evidence.
Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally had expertise on only one side of a two-culture divide. Perhaps, more important, many of the potential threats to the human future - such as, to, environmental pollution, arms development, overpopulation, and spread of infectious diseases, poverty, and starvation - can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason, the implications of the amazing new fact of nature sustaining the non-locality that cannot be properly understood without some familiarity wit the actual history of scientific thought. The intent is to suggest that what is most important about this back-ground can be understood in its absence. Those who do not wish to struggle with the small and perhaps, the fewer amounts of back-ground implications should feel free to ignore it. However, this material will be no more challenging as such, that the hope is that from those of which will find a common ground for understanding and that will meet again on this commonly functions in an effort to close the circle, resolves the equations of eternity and complete the universe to obtainably gain in its unification of which that holds within.
Another aspect of the evolution of a brain that allowed us to construct symbolic universes based on complex language system that is particularly relevant for our purposes concerns consciousness of self. Consciousness of self as an independent agency or actor is predicted on a fundamental distinction or dichotomy between this self and the other selves. Self, as it is constructed in human subjective reality, is perceived as having an independent existence and a self-referential character in a mental realm separately distinct from the material realm. It was, the assumed separation between these realms that led Descartes to posit his famous dualism in understanding the nature of consciousness in the mechanistic classical universe.
In a thought experiment, instead of bringing a course of events, as in a normal experiment, we are invited to imagine one. We may then be able to “see” that some result following, or tat some description is appropriate, or our inability to describe the situation may itself have some consequences. Thought experiments played a major role in the development of physics: For example, Galileo probably never dropped two balls of unequal weight from the leaning Tower of Pisa, in order to refute the Aristotelean view that a heavy body falls faster than a lighter one. He merely asked used to imagine a heavy body made into the shape of a dumbbell, and then connecting rod gradually thinner, until it is finally severed. The thing is one heavy body until the last moment and he n two light ones, but it is incredible that this final outline alters the velocity dramatically. Other famous examples include the Einstein-Podolsky-Rosen thought experiment. In the philosophy of personal identity, our apparent capacity to imagine ourselves surviving drastic changes of body, brain, and mind is a permanent source of difficulty. There is no consensus on the legitimate place of thought experiments, to substitute either for real experiment, or as a reliable device for discerning possibilities. Thought experiments are alike of one that dislikes and are sometimes called intuition pumps.
For familiar reasons, supposing that people are characterized by their rationality is common, and the most evident display of our rationality is our capacity to think. This is the rehearsal in the mind of what to say, or what to do. Not all thinking is verbal, since chess players, composers and painters all think, and there is no a priori reason that their deliberations should take any more verbal a form than this actions. It is permanently tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium that represents aspects of the world. Still, the model has been attacked, notably by Wittgenstein, as insufficient, since no such presence could carry a guarantee that the right use would be made of it. Such an inner present seems unnecessary, since an intelligent outcome might arise in principle weigh out it.
In the philosophy of mind as well as ethics the treatment of animals exposes major problems if other animals differ from human beings, how is the difference to be characterized: Do animals think and reason, or have thoughts and beliefs? In philosophers as different as Aristotle and Kant the possession of reason separates humans from animals, and alone allows entry to the moral community.
For Descartes, animals are mere machines and ee lack consciousness or feelings. In the ancient world the rationality of animals is defended with the example of Chrysippus’ dog. This animal, tracking a prey, comes to a cross-roads with three exits, and without pausing to pick-up the scent, reasoning, according to Sextus Empiricus. The animal went either by this road, or by this road, or by that, or by the other. However, it did not go by this or that, but he went the other way. The ‘syllogism of the dog’ was discussed by many writers, since in Stoic cosmology animals should occupy a place on the great chain of being somewhat below human beings, the only terrestrial rational agents: Philo Judaeus wrote a dialogue attempting to show again Alexander of Aphrodisias that the dog’s behaviour does no t exhibit rationality, but simply shows it following the scent, by way of response Alexander has the animal jump down a shaft (where the scent would not have lingered). Plutah sides with Philo, Aquinas discusses the dog and scholastic thought in general was quite favourable to brute intelligence (being made to stand trail for various offences in medieval times was common for animals). In the modern era Montaigne uses the dog to remind us of the frailties of human reason: Rorarious undertook to show not only that beasts are rational, but that they make better use of reason than people do. James the first of England defends the syllogising dog, and Henry More and Gassendi both takes issue with Descartes on that matter. Hume is an outspoken defender of animal cognition, but with their use of the view that language is the essential manifestation of mentality, animals’ silence began to count heavily against them, and they are completely denied thoughts by, for instance Davidson.
Dogs are frequently shown in pictures of philosophers, their assiduity and fidelity serving as a symbol of those same philosophical virtues.
The term instinct (Lat., instinctus, impulse or urge) implies innately determined behaviour, inflexible in changing circumstances and outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason but by instinct was common to Aristotle and the Stoics, and the inflexibility of animal behaviour was used in defence of this position as early as Avicenna. A continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. In this sense being social may be instinctive in human beings; yet, given what we now know about the evolution of human language abilities, our real or actualized self is clearly not imprisoned in our minds.
The self is implicitly a part of the larger whole of biological life; the human observer derives its existence from its embedded relations to this whole, and constructs its reality on the basis of evolved mechanisms that exist in all human brains. This suggests that any sense of the 'otherness' of self and world is an illusion, one that disguises the self's own actualization within the relations among the parts that characterize it. The self, related as it is to the temporality of the whole, is a biological reality. One might object that a proper definition of this whole need not include the evolution of the larger, undissectible whole. Yet it should include the cosmos and the unbroken evolution of all life from the first self-replicating molecule that was the ancestor of DNA, together with the complex interactions among all the parts of biological reality from which any emergent, self-regulating whole results, responsible for the properties that sustain the existence of the parts.
The developments traced here were conditioned both by physical reality and by metaphysical concerns: in the history of mathematics, the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed. The first scientific revolution of the seventeenth century allowed scientists a better understanding of how the classical paradigm in physics issued in the stark Cartesian division between mind and world that became one of the most characteristic features of Western thought. This is not, however, another strident and ill-mannered diatribe against our misunderstandings; it draws instead upon the principles of physical reality and the epistemological foundations of physical theory.
Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometry and numerical relationships. We imagine that the seeds of the scientific imagination were planted in ancient Greece rather than in the Chinese or Babylonian cultures, partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge and allowed greater cultural accessibility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigations. However, it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
The Greek philosophers we now recognize as the originators of scientific thought were mystics who probably perceived their world as replete with spiritual agencies and forces. The Greek religious heritage made it possible for these thinkers to attempt to coordinate diverse physical events within a framework of immaterial and unifying ideas. The fundamental assumption that there is a pervasive, underlying substance out of which everything emerges and into which everything returns is attributed to Thales of Miletos. Thales apparently arrived at this conclusion out of the belief that the world was full of gods, and his unifying substance, water, was similarly charged with spiritual presence. Religion in this instance served the interests of science because it allowed the Greek philosophers to view "essences" underlying and unifying physical reality as if they were "substances."
Nonetheless, the belief that the mind of God as Divine Architect permeates the workings of nature was a founding principle of scientific thought, as pronounced by Johannes Kepler; most contemporary physicists would feel some discomfort in reading Kepler's original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle there with an intensity that might offend those who practice science in the modern sense of that word. "Physical laws," wrote Kepler, "lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His image so that we may take part in His own thoughts . . . Our knowledge of numbers and quantities is the same as that of God's, at least as far as we can understand something of it in this mortal life."
The history of science amply testifies to the manner in which scientific objectivity results in physical theories that must be assimilated into "customary points of view and forms of perception." The framers of classical physics derived, like the rest of us, their "customary points of view and forms of perception" from macro-level visualizable experience. Thus, the descriptive apparatus of visualizable experience became reflected in the classical descriptive categories.
A major discontinuity appears, however, as we move from a descriptive apparatus dominated by the character of our visualizable experience to a complete description of physical reality in relativistic and quantum physics. The actual character of physical reality in modern physics lies largely outside the range of visualizable experience. Einstein was acutely aware of this discontinuity: "We have forgotten what features of the world of experience caused us to frame pre-scientific concepts, and we have great difficulty in representing the world of experience to ourselves without the spectacles of the old-established conceptual interpretation. There is the further difficulty that our language is compelled to work with words that are inseparably connected with those primitive concepts."
It is time for the religious imagination and the religious experience to engage the complementary truths of science in filling that silence with meaning. However, this does not mean that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require a commitment to any ontology, and is in no way diminished by the lack of one. One is as free to recognize a basis for an exchange between science and religion as one is free to deny that this basis exists: there is nothing in our current scientific world-view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conception of the nature of God or Being. The question of belief in an ontology remains what it has always been, a question; and the physical universe on the most basic level remains what it has always been, a riddle. The ultimate answer to the question and the ultimate meaning of the riddle are, and probably always will be, a matter of personal choice and conviction.
Our frame of reference is meant chiefly to set out an alternative account of the relationship between mind and world, with its defining features and fundamental preoccupations; there is certainly nothing new in the suggestion that the contemporary scientific world-view legitimates such an alternative conception. The essential point of attention is consciousness, and it remains the subject of our study.
At the end of this sometimes labourious journey we arrive at conclusions that should make the trip worthwhile. By way of introduction: there is nothing in contemporary physics or biology to support belief in the stark Cartesian division between mind and world that some have rather aptly described as "the disease of the Western mind." Let us therefore consider the legacy in Western intellectual life of that stark division, sanctioned by René Descartes.
Descartes is commonly called the father of modern philosophy, inasmuch as he made epistemological questions the primary and central questions of the discipline. But this is misleading for several reasons. In the first place, Descartes's conception of philosophy was very different from our own. The term "philosophy" in the seventeenth century was far more comprehensive than it is today, and embraced the whole of what we nowadays call natural science, including cosmology and physics, and subjects like anatomy, optics, and medicine. Descartes's reputation as a philosopher in his own time was based as much as anything on his contributions in these scientific areas. Secondly, even in those Cartesian writings that are philosophical in the modern academic sense, the epistemological concerns are rather different from the conceptual and linguistic inquiries that characterize present-day theory of knowledge. Descartes saw the need to base his scientific system on secure metaphysical foundations: by "metaphysics" he meant inquiries into God and the soul and, generally, all the first things to be discovered by philosophizing. Yet he was quick to realize that although this view united heaven and earth in a shared and communicable frame of knowledge, it presented us with a picture of physical reality that was totally alien from the world of everyday life. Even so, there was nothing in this view of nature that could explain or provide a foundation for the mental, or for direct experience as distinctly human.
Descartes's explorations of these foundations include questions about knowledge and certainty; but even here he is not primarily concerned with the criteria for knowledge claims, or with definitions of the epistemic concepts involved, for his aim is to provide a unified framework for understanding the universe. Descartes was convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.
A scientific understanding of these ideas could be derived, Descartes declared, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton's Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. And the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and principle of scientific knowledge.
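The core of Descartes's algebraic geometry can be put in a single formula: a geometrical figure is identified with the set of coordinate points satisfying an equation. For example, the circle of radius $r$ centred at the origin of a Cartesian plane is

```latex
\[
x^{2} + y^{2} = r^{2},
\]
```

so that questions about the figure become questions about the equation, amenable to precise deduction.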
The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanisms lacking any concerns about its spiritual dimension or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes’s stark division between mind and matter became perhaps the most central feature of Western intellectual life.
The view of the relationship between mind and world sanctioned by classical physics and formalized by Descartes thus became a central preoccupation in Western intellectual life. And the tragedy of the Western mind is that we have lived since the seventeenth century with the prospect that the inner world of human consciousness and the outer world of physical reality are separated by an abyss, a void, that cannot be bridged or reconciled.
In classical physics, external reality consisted of inert and inanimate matter moving according to wholly deterministic natural laws, and wholes were made up of collections of discrete atomized parts. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. But in one very important respect it also made the first scientific revolution possible. Copernicus, Galileo, Kepler, and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God and that doing physics was a form of communion with these ideas.
The tragedy of the Western mind is a direct consequence of the stark Cartesian division between mind and world. This is the tragedy of the modern mind which “solved the riddle of the universe,” but only to replace it by another riddle: The riddle of itself. Yet, we discover the “certain principles of physical reality,” said Descartes, “not by the prejudices of the senses, but by rational analysis, which thus possess so great evidence that we cannot doubt of their truth.” Since the real, or that which actually remains external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.
Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind or in human subjectivity was accurate, much less the absolute truth? He did so by making a leap of faith: God constructed the world, said Descartes, according to the mathematical ideas that our minds could uncover in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally "revealed" truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what has been termed the "hidden ontology of classical epistemology." Descartes's legacy lingers on in the widespread conviction that science does not provide a "place for man," or for all that we know as distinctly human in subjective reality.
The notion of the unity of consciousness has had an interesting history in philosophy and psychology. Taking Descartes to be the first major philosopher of the modern period, the unity of consciousness was central to the study of the mind for the whole of the modern period until the 20th century. The notion figured centrally in the work of Descartes, Leibniz, Hume, Reid, Kant, Brentano, James, and most of the major precursors of contemporary philosophy of mind and cognitive psychology. It played a particularly important role in Kant's work.
A couple of examples will illustrate the role that the notion of the unity of consciousness played in this long literature. Consider a classical argument for dualism (the view that the mind is not the body, indeed is not made out of matter at all). It starts like this: When I consider the mind, that is to say, myself insofar as I am only a thinking thing, I cannot distinguish in myself any parts, but apprehend myself to be clearly one and entire.
Descartes then asserts that if the mind is not made up of parts, it cannot consist of matter, presumably because, as he saw it, anything material has parts. He then goes on to say that this would be enough to prove dualism by itself, had he not already proved it elsewhere. What grounds the argument is the unified consciousness that I have of myself.
Here is another, more elaborate argument based on unified consciousness. The conclusion will be that no system of components acting in concert could ever achieve unified consciousness. William James's well-known version of the argument starts as follows: Take a sentence of a dozen words, take twelve men, and tell each man one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.
James generalizes this observation to all conscious states. To get dualism out of this, we need to add a premise: that if the mind were made out of matter, conscious states would have to be distributed over some group of components in some relevant way. The thought experiment is meant to show that conscious states cannot be so distributed. Therefore, the conscious mind is not made out of matter. Call the argument that James is using the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first, this argument goes all the way back to Descartes. Versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant's treatment of it (as he found it in Descartes and Leibniz, not James, of course).
Kant did not think that we could uncover anything about the nature of the mind, including whether or not it is made out of matter. To make the case for this view, he had to show that all existing arguments that the mind is not material do not work, and he set out to do just this in the chapter of the Critique of Pure Reason on the Paralogisms of Pure Reason (1781); paralogisms are faulty inferences about the nature of the mind. The Unity Argument is the target of a major part of that chapter: if one is going to show that we cannot know what the mind is like, one must dispose of the Unity Argument, which purports to show that the mind is not made out of matter. Kant's argument that the Unity Argument does not support dualism is simple. He urges that the idea of unified consciousness being achieved by something that has no parts or components is no less mysterious than its being achieved by a system of components acting together. Remarkably enough, though no philosopher has ever met this challenge of Kant's and no account exists of what an immaterial mind not made out of parts might be like, philosophers continued to rely on the Unity Argument until well into the 20th century. It may be a bit difficult for us to recapture this now, but the idea that no system of material components could achieve unified consciousness had a strong intuitive appeal for a long time.
The unity of consciousness was in addition central to one of Kant's own famous arguments, his 'transcendental deduction of the categories'. In this argument, boiled down to its essentials, Kant claims that in order to tie various objects of experience together into a single unified conscious representation of the world, something that he simply assumed we could do, we must be able to apply certain concepts to the items in question. In particular, we have to apply concepts from each of four fundamental categories of concept: quantitative, qualitative, relational, and what he called 'modal' concepts. Modal concepts concern whether an item might exist, does exist, or must exist. Thus, the four kinds of concept are concepts for how many units, what features, what relations to other objects, and what existence status is represented in an experience.
It was relational conceptual representation that most interested Kant and of relational concepts, he thought the concept of cause-and-effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause and effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on "the secure path of a science.” The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.
Although the unity of consciousness had been at the centre of pre-20th century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to it. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgments, etc.), rather than on how these elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being regarded as either a myth or something that we cannot and do not need to study scientifically. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: Consciousness may be the last bastion of occult properties, epiphenomena, immeasurable subjective states - in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of 'phenomenology' into a respectable theory.
The unity of consciousness next became an object of serious attention in analytic philosophy only as late as the 1960s. In the years since, new work has appeared regularly. The accumulated literature is still not massive but the unity of consciousness has again become an object of serious study. Before we examine the more recent work, we need to explicate the notion in more detail than we have done so far and introduce some empirical findings. Both are required to understand recent work on the issue.
To expand on our earlier notion of the unity of consciousness, we need to introduce a pair of distinctions. Current work on consciousness labours under a huge, confusing terminology. Different theorists talk of access consciousness, phenomenal consciousness, self-consciousness, simple consciousness, creature consciousness, state consciousness, monitoring consciousness, awareness as equated with consciousness, awareness distinguished from consciousness, higher-order thought, higher-order experience, qualia, the felt qualities of representations, consciousness as displaced perception . . . and on and on. We can ignore most of this profusion, but we do need two distinctions: between consciousness of objects and consciousness of our representations of objects, and between consciousness of representations and consciousness of self.
It is very natural to think of self-consciousness as a cognitive state or, more accurately, as a set of cognitive states. Self-knowledge is an example of such a state. There are plenty of things that I know about myself. I know the sort of thing I am: a human being, a warm-blooded rational animal with two legs. I know of many properties I have and much of what is happening to me, at both physical and mental levels. I also know things about my past, things I have done, and people I have met. But I have many self-conscious cognitive states that are not instances of knowledge. For example, I have the capacity to plan for the future - to weigh up possible courses of action in the light of goals, desires, and ambitions. I am capable of a certain type of moral reflection, tied to moral self-understanding and moral self-evaluation. I can pursue questions like: What sort of person am I? Am I the sort of person I want to be? Am I the sort of individual that I ought to be? Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects.
When I say that I am a self-conscious creature, I am saying that I can do all these things. But what do they have in common? Could I lack some and still be self-conscious? These are central questions that take us to the heart of many issues in metaphysics, the philosophy of mind, and the philosophy of psychology.
Given the range of putatively self-conscious cognitive states, one might naturally assume that there is a single ability that they all presuppose. This is my ability to think about myself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding.
The proposed account would be on a par with other noted examples of the deflationary strategy. If there is a straightforward explanation, in terms of the semantics of the first person, of what makes "self contents" immune to error through misidentification, then it seems fair to say that the problem of self-consciousness has been dissolved, at least as much as solved.
Such an account would be on a par with deflationary accounts in other areas, such as the redundancy theory of truth. The redundancy (or deflationary) theory of truth claims that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence 'redundancy'), and (2) that in less direct contexts, such as 'everything he said was true' or 'all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, 'all logical consequences of true propositions are true' is translated as ∀p∀q((p ∧ (p → q)) → q), where no notion of truth appears. It is supposed in classical (two-valued) logic that each statement has one of these values and not both; a statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true; if this condition obtains the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme, and they raise the issue of whether falsity is the only way of failing to be true. On this view, if a language is provided with a truth definition in accordance with the semantic theory of truth, that is a sufficient characterization of its concept of truth, and there is no further philosophical chapter to write about truth itself or truth as shared across different languages.
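The two deflationary points can be displayed schematically; the second is the truth-free rendering of 'all logical consequences of true propositions are true':

```latex
\[
\text{It is true that } p \;\leftrightarrow\; p
\]
\[
\forall p\,\forall q\,\bigl(\,(p \land (p \rightarrow q)) \rightarrow q\,\bigr)
\]
```

In neither schema does a truth predicate do any descriptive work; the apparatus of quantification over propositions replaces it.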
The view is similar to that of the disquotational theory of truth, on which asserting that a sentence is true is equivalent simply to asserting the sentence itself.
There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth' or 'truth is a norm governing discourse'. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science aims to be such that, whenever it holds that 'p', then p; discourse is to be regulated by the principle that it is wrong to assert 'p' when not-p.
It is important to stress that the deflationary theory of self-consciousness, like any theory that accords a serious role in self-consciousness to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language:
Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed. We communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp. (Dummett, 1978)
So how can a thinker incapable of reflexively referring to himself, as English speakers do with the first-person pronoun, plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are in fact first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.
The best developed functionalist theory of self-reference has been proposed by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a subjective belief, that is, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief, since "agency entails neither linguistic ability nor conscious belief." The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief through the equation of truth conditions and utility conditions, where utility conditions are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. To illustrate, consider a creature 'x' who is hungry and has a desire for food at time 't'. That creature has a token belief b(p) that conjoins with its desire for food to cause it to eat what is in front of it at that time. The utility condition of that belief is that there is food in front of 'x' at that time. Moreover, for b(p) to cause 'x' to eat what is in front of it at 't', b(p) must be a belief that 'x' has at 't'. Therefore, the utility/truth condition of b(p) is that whatever creature has this belief faces food when it has it.
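As an illustrative toy model (my own sketch, not Mellor's formalism; all names here are invented for the example), one can model a belief as a causal function from a desire to an action, with the belief's utility condition being the state of the world in which the caused action satisfies the desire:

```python
from dataclasses import dataclass

@dataclass
class World:
    """A minimal world state: is there food in front of the creature?"""
    food_in_front: bool

def belief_facing_food(desire: str) -> str:
    """Token belief b(p): conjoined with the desire for food,
    it causes the creature to eat what is in front of it."""
    if desire == "food":
        return "eat what is in front"
    return "do nothing"

def desire_for_food_satisfied(world: World, action: str) -> bool:
    """The desire for food is satisfied only if the caused action
    is eating and food really is in front of the creature."""
    return action == "eat what is in front" and world.food_in_front

# The utility condition of b(p) is the world-state in which the action
# it causes satisfies the desire -- here, that food is in front.
# Equating utility conditions with truth conditions gives the belief
# its content without any linguistic expression being presupposed.
assert desire_for_food_satisfied(World(food_in_front=True),
                                 belief_facing_food("food"))
assert not desire_for_food_satisfied(World(food_in_front=False),
                                     belief_facing_food("food"))
```

The point of the sketch is only that the belief's content is fixed by its causal role between desire and action, not by any sentence the creature could utter.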
And a belief with this content is, of course, the subjective belief whose natural linguistic expression would be "I am facing food now." Nevertheless, a belief that would naturally be expressed with these words can be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.
For in order to believe 'p', I need only be disposed to eat what I face if I feel hungry: a disposition which causal contiguity ensures that only my simultaneous hunger can provide, and only by making me eat, and only then. That is what makes my belief refer to me and to the time at which I have it. And that is why I need have no idea who I am or what the time is, no concept of the self or of the present, no implicit or explicit grasp of any "sense" of "I" or "now," to fix the reference of my subjective beliefs: causal contiguity fixes them for me.
Causal contiguity may also explain why no internal representation of the self is required, even at what other philosophers have called the subpersonal level. Mellor believes that reference to distal objects can take place when an internal state serves as a causal surrogate for the distal object, and hence as an internal representation of that object. No such causal surrogate, and hence no such internal representation, is required in the case of subjective beliefs. The relevant causal components of subjective beliefs are the believer and the time.
The necessary contiguity of cause and effect is also the key to the functionalist account of self-reference in conscious subjective belief. Mellor adopts a relational theory of consciousness, equating conscious beliefs with second-order beliefs to the effect that one is having a particular first-order subjective belief. It is simply a fact about our cognitive constitution that these second-order beliefs are reliably, though of course fallibly, generated, so that we tend to believe that we believe things that we do in fact believe.
The law of contiguity in Leibniz extends the principle that there are no discontinuous changes in nature: "natura non facit saltum," nature makes no leaps. Leibniz was able to use the principle to criticize the mechanical system of Descartes, which would imply such leaps in some circumstances, and to criticize contemporary atomism, which implied discontinuous changes of density at the edge of an atom. According to Hume, moreover, the contiguity of events is an important element in our interpretation of their conjunction as causal.
Other advocates of the functionalist point of view include Putnam and Sellars, and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effect it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or "realization" of the program the machine is running. The principal advantages of functionalism include its fit with the fact that we know of mental states, both our own and those of others, via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism functionalism is too generous, and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be variably realized in causal architectures, just as much as they can be in different neurophysiological states.
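The triplet definition and the software/hardware analogy can be made vivid with a small sketch. All names here are hypothetical illustrations, not anyone's published formalism:

```python
# Illustrative sketch: functionalism individuates a mental state by a
# triplet of relations -- its typical causes, its effects on other mental
# states, and its effects on behaviour -- while remaining silent about
# the "hardware" that realizes it.

from dataclasses import dataclass

@dataclass
class FunctionalRole:
    typical_causes: set        # inputs that typically produce the state
    effects_on_states: set     # other mental states it tends to produce
    effects_on_behaviour: set  # actions it tends to cause

# Two different realizations (neurons, silicon) count as the same mental
# state so long as they occupy the same functional role.
pain_in_humans = FunctionalRole({"tissue damage"}, {"distress"}, {"wincing"})
pain_in_robots = FunctionalRole({"tissue damage"}, {"distress"}, {"wincing"})

assert pain_in_humans == pain_in_robots  # same role, hence same state
```

Note that the comparison is purely structural: nothing in `FunctionalRole` mentions what the state is made of, which is exactly the critics' worry that anything imitating the role would count as having the state.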
Confronted with the range of putatively self-conscious cognitive states, one might assume that a single ability is presupposed. This is my ability to think about myself: I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding. These are ways of thinking about myself.
Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects. My knowledge that I am a human being deploys certain conceptual abilities that I can also deploy in thinking that you are a human being. The same holds when I congratulate myself for satisfying the exacting moral standards of autonomous moral agency. This involves concepts and descriptions that can apply equally to myself and to others. On the other hand, when I think about myself, I am also putting to work an ability that I cannot put to work in thinking about other people and other objects. This is precisely the ability to apply those concepts and descriptions to myself. It has become common to refer to this ability as the ability to entertain "I"-thoughts.
Either both subject and object, mind and matter, are real, or both are unreal, imaginary. The assumption of a merely illusory subject or illusory object leads to dead-ends and to absurdities. It would entail an extreme form of skepticism, wherein everything is relative or subjective and nothing could be known for sure. This is not only devastating for the human mind, but also most ludicrous.
Does this leave us with the only remaining option, that both subject and object are alike real? That would again create a real dualism, which, as we realized, is only created in our mind. So, what part of this dualism is not real?
To answer this, we have first to inquire into the meaning of the term "real." Reality comes from the Latin word "realitas," which could be literally translated as "thing-hood." "Res" does not only have the meaning of a material thing; "res" can have a lot of different meanings in Latin, most of which have little to do with materiality, e.g., affairs, events, business, a coherent collection of any kind, a situation, and so on. Such meanings are always subjective, and therefore related to the way of thinking and feeling of human beings. Outside of the realm of human beings, reality has no meaning at all. Only in the context of conscious and rational beings does reality become something meaningful. Reality is the whole of human affairs insofar as these are related to the world around us. Reality is never the bare physical world without the human being. Reality is the totality of human experience and thought in relation to an objective world.
Now this is the next aspect we have to analyse. Is this objective world, which we encounter in our experience and thought, something that exists on its own, or is it dependent on our subjectivity? That the subjective mode of our consciousness affects our perceptions of the objective world is conceded by most scientists. Nevertheless, they assume a real and objective world that would exist even without a human being alive to observe it. One way to handle this problem is the Kantian solution of the "thing-in-itself," which is inaccessible to our mind because of the mind's inherent limitations. This does not help us very much; it just posits some undefinable entity outside of our experience and understanding. Hegel, on the other hand, denied the inaccessibility of the "thing-in-itself" and thought that knowledge of the world as it is in itself is attainable, but only by "absolute knowing," the highest form of consciousness.
One of the most persuasive proofs of an independent objective world is the following argument from science: if we put a camera into a landscape where no human beings are present, leave the place and let the camera take some pictures automatically on a timer, and come back some days later to develop the pictures, we will find the same picture of the landscape as if we had taken it ourselves. Common sense tells us the same: when we wake up in the morning, it is highly probable, even certain, that we find ourselves in the same environment, without changes, without things having left their places uncaused.
Is this empirical argument sufficient to persuade even the most sceptical thinker that there is an objective world out there? Hardly. If a sceptic nonetheless tries to uphold the position of a solipsistic monism, the above-mentioned argument carries no force, since the objects out there, camera and pictures included, could be assumed to be subjective mental constructs. Not even Berkeley assumed such an extreme position. His immaterialism was based on the presumption that the world around us is the object of God's mind, that is, that all objects are ideas in a universal mind. This is more persuasive. We could even close the gap between the religious concept of "God" and the philosophical concept by relating both of them to the modern quantum-physical concept of the vacuum. All have one thing in common: there must be an underlying reality which contains and produces all the objects. This idea of an underlying reality is, interestingly enough, a continuous line of thought throughout the history of mankind. Almost every great philosopher and every great religion has assumed some kind of supreme reality. I deal with this idea in my historical account of mind's development.
We're still stuck with the problem of subject and object. If we assume that there is an underlying reality, neither physical nor mental, neither object nor subject, but producing both aspects, we end up with the identity of subject and object. So long as there is only this universal "vacuum," nothing is yet differentiated; everything is one and the same. By a dialectical process of division, or by random fluctuations of the vacuum, elementary forms are created, which develop into more complex forms and finally into living beings with both a mental and a physical aspect. The only question to answer is how these two aspects were produced and developed. Maybe there is an infinite number of aspects, but only two are visible to us, as Spinoza postulated. Since the mind does not evolve out of matter, there must either have been a concomitant evolution of mind and matter, or matter has evolved whereas mind has not, in which case mind would somehow be valued above matter. Since both are aspects of one reality, however, both are alike significant. Science conceives the whole physical world and human beings to have evolved gradually from an original vacuum state of the universe (singularity). So, has mind just popped into the world at some time in the past, or has mind emerged from the complexity of matter? The latter is not sustainable, and this leaves us with the possibility that the other aspect, mind, has different attributes and qualities. This can be supported empirically: we do not believe that our personality is something material, that our emotions, our love and fear, are of a physical nature. The qualia and properties of consciousness are completely different from the properties of matter as science has defined it. By the very nature and essence of each aspect, we can therefore assume a different dialectical movement for each.
Whereas matter, by the very nature of its properties, is bound to evolve gradually and to exist in perpetual movement and change, mind, by the very nature of its own properties, is bound to a different evolution and existence. Mind as such has not evolved. The individualized form of mind in the human body, that is, the subject, can change, although in different ways than matter changes. Both aspects have their own sets of laws and patterns. Since mind is also non-local, it comprises all individual minds. Actually, there is only one consciousness, which is only artificially split into individual minds because of its connection with brain-organs, the means of manifestation and expression for consciousness. Both aspects are interdependent and together constitute the world and the beings as we know them.
Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometry and numerical relationships. We imagine that the seeds of the scientific imagination were planted in ancient Greece, rather than in the Chinese or Babylonian cultures, partly because the social, political, and economic climates in Greece were more open to the pursuit of knowledge and allowed it wider cultural accessibility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigations. But it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
The Greek philosophers we now recognize as the originators of scientific thought were mystics who probably perceived their world as replete with spiritual agencies and forces. The Greek religious heritage made it possible for these thinkers to attempt to coordinate diverse physical events within a framework of immaterial and unifying ideas. The fundamental assumption that there is a pervasive, underlying substance out of which everything emerges and into which everything returns is attributed to Thales of Miletos. Thales apparently arrived at this conclusion from the belief that the world was full of gods, and his unifying substance, water, was similarly charged with spiritual presence. Religion in this instance served the interests of science because it allowed the Greek philosophers to view "essences" underlying and unifying physical reality as if they were "substances."
The belief that the mind of God as Divine Architect permeates the workings of nature was a guiding principle of scientific thought for Johannes Kepler, and most contemporary physicists would probably feel some discomfort in reading Kepler's original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle there with an intensity that might offend those who practice science in the modern sense of that word. "Physical laws," wrote Kepler, "lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His image so that we may take part in His own thoughts . . . Our knowledge of numbers and quantities is the same as that of God's, at least as far as we can understand something of it in this mortal life."
The history of science amply testifies to the manner in which scientific objectivity results in physical theories that must be assimilated into "customary points of view and forms of perception." The framers of classical physics derived, like the rest of us, their "customary points of view and forms of perception" from macro-level visualizable experience. Thus, the descriptive apparatus of visualizable experience became reflected in the classical descriptive categories.
A major discontinuity appears, however, as we move from a descriptive apparatus dominated by the character of our visualizable experience to a complete description of physical reality in relativistic and quantum physics. The actual character of physical reality in modern physics lies largely outside the range of visualizable experience. Einstein was acutely aware of this discontinuity: "We have forgotten what features of the world of experience caused us to frame pre-scientific concepts, and we have great difficulty in representing the world of experience to ourselves without the spectacles of the old-established conceptual interpretation. There is the further difficulty that our language is compelled to work with words that are inseparably connected with those primitive concepts."
It is time for the religious imagination and the religious experience to engage the complementary truths of science in filling that silence with meaning. However, this does not mean that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require commitment to any ontology, and is in no way diminished by the lack of one. And one is as free to recognize a basis for an exchange between science and religion as one is free to deny that this basis exists - there is nothing in our current scientific world-view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conceptions of the nature of God or Being. The question of belief in an ontology remains what it has always been - a question, and the physical universe on the most basic level remains what it has always been - a riddle. And the ultimate answer to the question and the ultimate meaning of the riddle are, and probably always will be, a matter of personal choice and conviction.
Our frame of reference is meant to embrace the rich affiliation between mind and world, with its defining features and fundamental preoccupations, and there is certainly nothing new in the suggestion that the contemporary scientific world-view legitimates an alternate conception of the relationship between mind and world. The essential point of attention is "consciousness," which remains at the centre of our study.
But at the end of this sometimes laborious journey lies a conclusion that should make the trip worthwhile: there is nothing in contemporary physics or biology that compels belief in the stark Cartesian division between mind and world that some have rather aptly described as "the disease of the Western mind." Let us begin, then, by considering the legacy in Western intellectual life of the stark division between mind and world sanctioned by René Descartes.
Descartes is often called the father of modern philosophy, inasmuch as he made epistemological questions the primary and central questions of the discipline. But this is misleading for several reasons. First, Descartes's conception of philosophy was very different from our own. The term "philosophy" in the seventeenth century was far more comprehensive than it is today, and embraced the whole of what we nowadays call natural science, including cosmology and physics, and subjects like anatomy, optics, and medicine. Descartes's reputation as a philosopher in his own time was based as much as anything on his contributions in these scientific areas. Secondly, even in those Cartesian writings that are philosophical in the modern academic sense, the epistemological concerns are rather different from the conceptual and linguistic inquiries that characterize present-day theory of knowledge. Descartes saw the need to base his scientific system on secure metaphysical foundations: by "metaphysics" he meant inquiries into God, the soul, and in general all the first things to be discovered by philosophizing. Yet while this view united heaven and earth in a shared and communicable frame of knowledge, it presented us with a view of physical reality totally alien to the world of everyday life. There was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we experience directly as distinctly human.
These foundational explorations include questions about knowledge and certainty, but even here Descartes is not primarily concerned with the criteria for knowledge claims, or with definitions of the epistemic concepts involved; his aim is to provide a unified framework for understanding the universe. Descartes was convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent analytic geometry.
A scientific understanding of these ideas could be achieved, Descartes declared, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton's "Principia Mathematica" in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.
The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, free of any concern about its spiritual dimension or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes's stark division between mind and matter became perhaps the most central feature of Western intellectual life.
The view of the relationship between mind and world sanctioned by classical physics and formalized by Descartes thus became a central preoccupation in Western intellectual life. And the tragedy of the Western mind is that we have lived since the seventeenth century with the prospect that the inner world of human consciousness and the outer world of physical reality are separated by an abyss, a void that cannot be bridged or reconciled.
In classical physics, external reality consisted of inert and inanimate matter moving according to wholly deterministic natural laws, with wholes made up of collections of discrete atomized parts. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics until at least the time of Galileo. But in one very important respect it also made the first scientific revolution possible: Copernicus, Galileo, Kepler, and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God, and that doing physics was a form of communion with these ideas.
The tragedy of the Western mind is a direct consequence of the stark Cartesian division between mind and world. This is the tragedy of the modern mind which "solved the riddle of the universe," only to replace it with another riddle: the riddle of itself. We discover the "certain principles of physical reality," said Descartes, "not by the prejudices of the senses, but by rational analysis, which thus possess so great evidence that we cannot doubt of their truth." Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.
Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind or in human subjectivity was accurate, much less the absolute truth? He did so by making a leap of faith - God constructed the world, said Descartes, according to the mathematical ideas that our minds could uncover in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally "revealed" truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what is termed the "hidden ontology of classical epistemology." The Cartesian legacy lingers in the widespread conviction that science does not provide a "place for man" or for all that we know as distinctly human in subjective reality.
The notion of the unity of consciousness has had an interesting history in philosophy and psychology. Taking Descartes to be the first major philosopher of the modern period, the unity of consciousness was central to the study of the mind for the whole of the modern period until the 20th century. The notion figured centrally in the work of Descartes, Leibniz, Hume, Reid, Kant, Brentano, James, and most of the other major precursors of contemporary philosophy of mind and cognitive psychology. It played a particularly important role in Kant's work.
A couple of examples will illustrate the role that the notion of the unity of consciousness played in this long literature. Consider a classical argument for dualism (the view that the mind is not the body, indeed is not made out of matter at all). It starts like this: when I consider the mind, which is to say myself insofar as I am only a thinking thing, I cannot distinguish in myself any parts, but apprehend myself to be clearly one and entire.
Descartes then asserts that if the mind is not made up of parts, it cannot consist of matter, presumably because, as he saw it, anything material has parts. He then goes on to say that this would be enough to prove dualism by itself, had he not already proved it elsewhere. The key premise is the unified consciousness that I have of myself.
Here is another, more elaborate argument based on unified consciousness. The conclusion will be that no system of components acting in concert could ever achieve unified consciousness. William James's well-known version of the argument starts as follows: take a sentence of a dozen words, take twelve men, and tell to each one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.
James generalizes this observation to all conscious states. To get dualism out of this, we need to add a premise: that if the mind were made out of matter, conscious states would have to be distributed over some group of components in some relevant way. The thought experiment is meant to show that conscious states cannot be so distributed. Therefore, the conscious mind is not made out of matter. Call the argument that James is using the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first, this argument goes all the way back to Descartes. Versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant's treatment of it (as he found it in Descartes and Leibniz, not James, of course).
Kant did not think that we could uncover anything about the nature of the mind, including whether or not it is made out of matter. To make the case for this view, he had to show that all existing arguments that the mind is not material do not work, and he set out to do just this in the chapter of the Critique of Pure Reason on the Paralogisms of Pure Reason (1781); paralogisms are faulty inferences about the nature of the mind. The Unity Argument is the target of a major part of that chapter: if one is going to show that we cannot know what the mind is like, one must dispose of the Unity Argument, which purports to show that the mind is not made out of matter. Kant's argument that the Unity Argument does not support dualism is simple. He urges that the idea of unified consciousness being achieved by something that has no parts or components is no less mysterious than its being achieved by a system of components acting together. Remarkably enough, though no philosopher has ever met this challenge of Kant's, and no account exists of what an immaterial mind not made out of parts might be like, philosophers continued to rely on the Unity Argument until well into the 20th century. It may be difficult for us to recapture this now, but the idea that no system of material components could achieve unified consciousness had a strong intuitive appeal for a long time.
The notion of the unity of consciousness was in addition central to one of Kant's own famous arguments, his 'transcendental deduction of the categories'. In this argument, boiled down to its essentials, Kant claims that to tie various objects of experience together into a single unified conscious representation of the world, something he simply assumed we could do, we must apply certain concepts to the items in question. In particular we have to apply concepts from each of four fundamental categories of concept: quantitative, qualitative, relational, and what he called 'modal' concepts. Modal concepts concern whether an item might exist, does exist, or must exist. Thus, the four kinds of concept are concepts for how many units, what features, what relations to other objects, and what existence status is represented in an experience.
It was relational conceptual representation that most interested Kant and of relational concepts, he thought the concept of cause-and-effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause and effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on "the secure path of a science.” The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.
Consciousness may well be the most challenging and pervasive source of problems in the whole of philosophy. Our own consciousness seems to be the most basic fact confronting us, yet it is almost impossible to say what consciousness is. Is mine like yours? Is ours like that of animals? Might machines come to have consciousness? Is it possible for there to be disembodied consciousness? Whatever complex biological and neural processes go on backstage, it is my consciousness that provides the theatre where my experiences and thoughts have their existence, where my desires are felt and where my intentions are formed. But then how am I to conceive the "I," or self, that is the spectator of this theatre? One of the difficulties in thinking about consciousness is that the problems seem not to be scientific ones: Leibniz remarked that if we could construct a machine that could think and feel, and blow it up to the size of a mill so as to be able to examine its working parts as thoroughly as we pleased, we would still not find consciousness, and he drew the conclusion that consciousness resides in simple subjects, not complex ones. Even if we are convinced that consciousness somehow emerges from the complexity of brain functioning, we may still feel baffled about the way the emergence takes place, or why it takes place in just the way it does.
The nature of conscious experience has been the largest single obstacle to physicalism, behaviourism, and functionalism in the philosophy of mind: these are all views that, according to their opponents, can only be believed by feigning permanent anaesthesia. But many philosophers are convinced that we can divide and conquer: we may make progress by breaking the subject into different skills, and by recognizing that rather than a single self or observer we would do better to think of a relatively undirected whirl of cerebral activity, with no inner theatre, no inner lights, and above all no inner spectator.
Perception is a fundamental philosophical topic, both for its central place in any theory of knowledge and its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception: (1) It gives us knowledge of the world around us. (2) We are conscious of that world by being aware of "sensible qualities": colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment. (3) Such consciousness is effected through highly complex information channels, such as the output of three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound. (4) There ensues even more neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received (much of this complexity has been revealed by the difficulty of writing programs enabling computers to recognize quite simple aspects of the visual scene). The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings, and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain. Calling such supposed items names like sense data or percepts exacerbates the tendency.
But once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there now seems little connection between these items in immediate experience and any independent reality. Reactions to this problem include scepticism and idealism.
A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.
If one is without idea, one is without concept; and likewise, if one is without concept, one is without idea. The term "idea" (Gk., visible form) stretches all the way from one pole, where it denotes a subjective, internal presence in the mind, somehow thought of as representing something about the world, to the other pole, where it represents an eternal, timeless, unchanging form or concept: the concept of the number series or of justice, for example, thought of as independent objects of enquiry and perhaps of knowledge. These two poles are not distinct meanings of the term, although they give rise to many problems of interpretation, but between them they define a space of philosophical problems. On the one hand, ideas are that with which we think, or, in Locke's terms, whatever the mind may be employed about in thinking. Looked at that way, they seem to be inherently transient, fleeting, and unstable private presences. On the other, ideas provide the way in which objective knowledge can be expressed. They are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. Plato's theory of Forms is a celebration of the objective and timeless existence of ideas as concepts, and on this view ideas are reified to the point where they make up the only real world, of separate and perfect models of which the empirical world is only a poor cousin. This doctrine, notably in the Timaeus, opened the way for the Neoplatonic notion of ideas as the thoughts of God. The concept gradually lost this other-worldly aspect, until after Descartes ideas became assimilated to whatever it is that lies in the mind of any thinking being.
Together with a general bias toward the sensory, so that what lies in the mind may be thought of as something like images, and a belief that thinking is well explained as the manipulation of images, this was developed by Locke, Berkeley, and Hume into a full-scale view of the understanding as the domain of images, although they were all aware of anomalies that were later regarded as fatal to this doctrine. The defects in the account were exposed by Kant, who realized that the understanding needs to be thought of more in terms of rules and organized principles than of any kind of copy of what is given in experience. Kant also recognized the danger of the opposite extreme (that of Leibniz) of failing to connect the elements of understanding with those of experience at all (Critique of Pure Reason).
It has become more common to think of ideas, or concepts, as dependent upon social and especially linguistic structures, rather than as the self-standing creatures of an individual mind, but the tension between the objective and the subjective aspects of the matter lingers on, for instance in debates about the possibility of objective knowledge, of indeterminacy in translation, and of identity between the thoughts people entertain at one time and those that they entertain at another.
To possess a concept is to be able to deploy a term expressing it in making judgements: the ability connects with such things as recognizing when the term applies, and being able to understand the consequences of its application. The term "idea" was formerly used in the same way, but is avoided because of its association with subjective mental imagery, which may be irrelevant to the possession of a concept. In the semantics of Frege, a concept is the reference of a predicate, and cannot be referred to by a subject term. Frege regarded predicates as incomplete expressions for a function, just as "sine . . ." or "log . . ." is incomplete. Predicates refer to concepts, which themselves are "unsaturated," and cannot be referred to by subject expressions (we thus get the paradox that the concept of a horse is not a concept). Although Frege recognized the metaphorical nature of the notion of a concept being unsaturated, he was rightly convinced that some such notion is needed to explain the unity of a sentence, and to prevent sentences from being thought of as mere lists of names.
Mental states have contents: a belief may have the content that I will catch the train, a hope may have the content that the prime minister will resign. A concept is something that is capable of being a constituent of such contents. More specifically, a concept is a way of thinking of something - a particular object, or property, or relation, or other entity.
Several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of May Smith, or as the person located in a certain room now. More generally, a concept "c" is distinct from a concept "d" if a thinker can rationally believe that something is such-and-such under "c" without believing that it is such-and-such under "d." As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by "that . . ." clauses, as in our opening examples, they will be capable of being true or false, depending on the way the world is.
Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money; none the less, we can come to learn that Anthony Blunt, art historian and Surveyor of the Queen's Pictures, was a spy: we can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with the concept. Similarly, a person's conception of a just arrangement for resolving disputes may involve something like contemporary Western legal systems. But whether or not it would be correct, it is quite intelligible for someone to reject this conception by arguing that it does not adequately provide for the elements of fairness and respect that are required by the concept of justice.
A theory of a particular concept must be distinguished from a theory of the object or objects it picks out. The theory of the concept is part of the theory of thought and epistemology; a theory of the object or objects is part of metaphysics and ontology. Some figures in the history of philosophy - and perhaps even some of our contemporaries - are open to the accusation of not having fully respected the distinction between the two kinds of theory. Descartes appears to have moved from facts about the indubitability of the thought "I think," containing the first-person way of thinking, to conclusions about the non-material nature of the object he himself was. But though the goals of the two kinds of theory differ, each is required to give an adequate account of its relation to the other. A theory of concepts is unacceptable if it gives no account of how the concept is capable of picking out the object it evidently does pick out. A theory of objects is unacceptable if it makes it impossible to understand how we could have concepts of those objects.
A fundamental question for philosophy is: what individuates a given concept - that is, what makes it the one it is, rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question. An alternative addresses the question by starting from the idea that a concept is individuated by the condition that must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept "and" is individuated by this condition: it is the unique concept "C" to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premisses "A" and "B," "A C B" can be inferred; and from any premiss "A C B," each of "A" and "B" can be inferred. Again, a relatively observational concept such as "round" can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept that are not based on perception to those that are. An account that individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.
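The shape of this possession condition can be sketched in code. The following toy Python model is purely illustrative (the representation of "A C B" as a tagged pair, and the function names, are inventions, not part of any theory in the literature); it simply exhibits the two compelling inference forms as one introduction rule and two elimination rules.

```python
# A toy sketch of the possession condition for the logical concept "and":
# the unique concept C such that a thinker finds these inference forms
# primitively compelling. "A C B" is represented here as a tagged pair.

def intro(a, b):
    """From any two premisses A and B, infer A C B."""
    return ("and", a, b)

def elim_left(conj):
    """From any premiss A C B, infer A."""
    tag, a, _ = conj
    assert tag == "and"
    return a

def elim_right(conj):
    """From any premiss A C B, infer B."""
    tag, _, b = conj
    assert tag == "and"
    return b

p = intro("it is raining", "it is cold")
assert elim_left(p) == "it is raining"
assert elim_right(p) == "it is cold"
```

The point of the sketch is only structural: possessing "and" is modelled as finding exactly these transitions compelling, with nothing further underwriting them.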
A possession condition for a particular concept may actually make use of that concept; the possession condition for "and" does not. We can also expect to use relatively observational concepts in specifying the kinds of experience mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question as such within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account that was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker's mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.
Sometimes a family of concepts has this property: it is not possible to master any one of the members of the family without mastering the others. Two of the families that plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of numerical quantifiers (there are 0 so-and-so's, there is 1 so-and-so, . . .); and the family consisting of the concepts "belief" and "desire." Such families have come to be known as "local holisms." A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such condition involving the thinker, C1, and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated, and the possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.
A possession condition may in various ways make a thinker's possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker's perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject's environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. It is also intuitively plausible that even though the thinker's non-environmental properties and relations remain constant, the conceptual content of his mental state can vary if the thinker's social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker's social relations, in particular his linguistic relations.
Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a correctness condition for that judgement, a condition that is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker's reasons for making judgements. A thinker's visual perception can give him good reason for judging "That man is bald": it does not by itself give him good reason for judging "Rostropovich is bald," even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for a concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object (or property, or function, . . .) which makes the practices of judgement and inference in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker's previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue. It would allow us to say how the correctness condition is determined for a judgement about a newly encountered object. The judgement is correct if the new object has the property that in fact makes the judgmental practices in the possession condition yield true judgements, or truth-preserving inferences.
Despite the fact that the unity of consciousness had been at the centre of pre-20th-century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to it. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgments, etc.), rather than on how these elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being treated as either a myth or something that we cannot and do not need to study scientifically. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: consciousness may be the last bastion of occult properties, epiphenomena, immeasurable subjective states - in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of 'phenomenology' into a respectable theory.
The unity of consciousness next became an object of serious attention in analytic philosophy only as late as the 1960s. In the years since, new work has appeared regularly. The accumulated literature is still not massive but the unity of consciousness has again become an object of serious study. Before we examine the more recent work, we need to explicate the notion in more detail than we have done so far and introduce some empirical findings. Both are required to understand recent work on the issue.
To expand on our earlier notion of the unity of consciousness, we need to introduce a pair of distinctions. Current work on consciousness labours under a huge, confusing terminology. Different theorists speak of access consciousness, phenomenal consciousness, self-consciousness, simple consciousness, creature consciousness, state consciousness, monitoring consciousness, awareness as equated with consciousness, awareness distinguished from consciousness, higher-order thought, higher-order experience, qualia, the felt qualities of representations, consciousness as displaced perception, . . . and on and on. We can ignore most of this profusion, but we do need two distinctions: between consciousness of objects and consciousness of our representations of objects, and between consciousness of representations and consciousness of self.
It is very natural to think of self-consciousness as a cognitive state or, more accurately, as a set of cognitive states. Self-knowledge is an example of such a cognitive state. There are plenty of things that I know about myself. I know the sort of thing I am: a human being, a warm-blooded rational animal with two legs. I know of many properties I have and much of what is happening to me, at both physical and mental levels. I also know things about my past, things I have done and people I have met. But I have many self-conscious cognitive states that are not instances of knowledge. For example, I have the capacity to plan for the future - to weigh up possible courses of action in the light of goals, desires, and ambitions. I am capable of a certain type of moral reflection, tied to moral self-understanding and moral self-evaluation. I can pursue questions like: What sort of person am I? Am I the sort of person I want to be? Am I the sort of individual that I ought to be? This is my ability to think about myself. Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thinking about other people and other objects.
When I say that I am a self-conscious creature, I am saying that I can do all these things. But what do they have in common? Could I lack some and still be self-conscious? These are central questions that take us to the heart of many issues in metaphysics, the philosophy of mind, and the philosophy of psychology.
Given the range of putatively self-conscious cognitive states, one might naturally assume that there is a single ability that they all presuppose: my ability to think about myself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding.
The proposed account would be on a par with other noted examples of the deflationary strategy. If there is a straightforward explanation, in terms of the semantics of the first person, of what makes "self contents" immune to error through misidentification, then it seems fair to say that the problem of self-consciousness has been dissolved, at least as much as solved.
This proposed account would be on a par with such examples as the redundancy theory of truth. That is to say, the redundancy, or deflationary, theory of truth claims that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence, "redundancy"), and (2) that in less direct contexts, such as 'everything he said was true', or 'all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the latter claim translates as '(∀p)(∀q)((p & (p → q)) → q)', where there is no use of a notion of truth.
There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth' or 'truth is a norm governing discourse'. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science aims to have it that whenever it holds that 'p', then 'p'; discourse is to be regulated by the principle that it is wrong to assert 'p' when not-p.
It is important to stress how the redundancy or deflationary theory of self-consciousness, like any theory of consciousness that accords a serious role in self-consciousness to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language:
Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed. We communicate thought by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind other than via the medium of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp. (Dummett, 1978)
So how can a thinker incapable of reflexively referring to himself, as English speakers do with the first-person pronoun, plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are in fact first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.
The best developed functionalist theory of self-reference has been put forward by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a subjective belief, which is to say, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief, since "agency entails neither linguistic ability nor conscious belief." The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief through the equation of truth conditions and utility conditions, where utility conditions are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. To illustrate, consider a creature "x" who is hungry and has a desire for food at time "t". That creature has a token belief b(p) that conjoins with its desire for food to cause it to eat what is in front of it at that time. The utility condition of that belief is that there is food in front of "x" at that time. Moreover, for b(p) to cause "x" to eat what is in front of it at "t", b(p) must be a belief that "x" has at "t". Therefore, the utility/truth condition of b(p) is that whatever creature has this belief faces food at the time it has it.
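Mellor's premise that beliefs are causal functions from desires to actions can be caricatured in a few lines of code. The sketch below is purely illustrative and not Mellor's apparatus: the function names and the "world" dictionary are inventions. It shows how a token belief's utility condition, and hence its truth condition, can be read off from whether the action it causes satisfies the conjoined desire.

```python
# Toy sketch of a Mellor-style subjective belief: the belief is modelled
# as a causal function from a desire to an action, and its content (truth
# condition) is identified with its utility condition.

def belief_facing_food(desire):
    """Token subjective belief b(p): 'I am facing food now'."""
    if desire == "hunger":
        return "eat what is in front of you"
    return None

def desire_satisfied(action, world):
    """The desire for food is satisfied only if food really is in front."""
    return action == "eat what is in front of you" and world["food_in_front"]

# Utility condition: the action caused by belief + hunger satisfies the
# desire exactly when there is food in front of the agent -- so that is
# the belief's truth condition, with no first-person concept deployed.
action = belief_facing_food("hunger")
assert desire_satisfied(action, {"food_in_front": True})
assert not desire_satisfied(action, {"food_in_front": False})
```

Nothing in the sketch mentions an "I": the belief's reference to the believer and the time falls out of which creature tokens it and when, which is the point of the passage above.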
And a belief with this content is, of course, the subjective belief whose natural linguistic expression would be "I am facing food now." On the other hand, a belief that would naturally be expressed with these words can be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.
For in order to believe "p," I need only be disposed to eat what I face if I feel hungry: a disposition which causal contiguity ensures that only my simultaneous hunger can provide, and only into making me eat, and only then. That is what makes my belief refer to me and to when I have it. And that is why I need have no idea who I am or what the time is, no concept of the self or of the present, no implicit or explicit grasp of any "sense" of "I" or "now," to fix the reference of my subjective beliefs: causal contiguity fixes them for me.
Causal contiguity, on this explanation, may well be why no internal representation of the self is required, even at what other philosophers have called the sub-personal level. Mellor believes that reference to distal objects can take place when an internal state serves as a causal surrogate for the distal object, and hence as an internal representation of that object. No such causal surrogate, and hence no such internal representation, is required in the case of subjective beliefs. The relevant causal components of subjective beliefs are the believer and the time.
The necessary contiguity of cause and effect is also the key to the functionalist account of self-reference in conscious subjective belief. Mellor adopts a relational theory of consciousness, equating conscious beliefs with second-order beliefs to the effect that one is having a particular first-order subjective belief. It is simply a fact about our cognitive constitution that these second-order beliefs are reliably, though of course fallibly, generated, so that we tend to believe that we believe things that we do in fact believe.
The law of continuity in Leibniz extends the principle that there are no discontinuous changes in nature: "natura non facit saltum," nature makes no leaps. Leibniz was able to use the principle to criticize the mechanical system of Descartes, which would imply such leaps in some circumstances, and to criticize contemporary atomism, which implied discontinuous changes of density at the edge of an atom. According to Hume, however, the contiguity of events is an important element in our interpretation of their conjunction as causal.
Functionalism's advocates include Putnam and Sellars, and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or "realization" of the program the machine is running. The principal advantage of functionalism is its fit with the way we know of mental states, both our own and others', via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism functionalism is too generous, and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, when our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be "variably realized" in causal architectures, just as much as they can be in different neurophysiological states.
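The software/hardware comparison can be made concrete with a toy example of variable (multiple) realizability. In the sketch below, all class and function names are invented for illustration: two quite different "realizations" occupy the same functional role, because they share typical causes and the same effects on behaviour.

```python
# Illustrative sketch of the functionalist idea: a mental state is
# defined by its role (typical causes, effects on behaviour), and that
# role can be realized by different underlying "hardware."

class NeuronRealizer:
    """One realization: analog accumulation of evidence."""
    def __init__(self):
        self.level = 0.0
    def stimulate(self, evidence):
        self.level += evidence
    def believes_rain(self):
        return self.level > 0.5

class SiliconRealizer:
    """A structurally different realization: digital event counting."""
    def __init__(self):
        self.hits = 0
    def stimulate(self, evidence):
        if evidence > 0:
            self.hits += 1
    def believes_rain(self):
        return self.hits >= 1

def behaviour(agent):
    # The functional role: the belief state disposes the agent to act.
    return "take umbrella" if agent.believes_rain() else "leave umbrella"

for realizer in (NeuronRealizer(), SiliconRealizer()):
    realizer.stimulate(1.0)                       # same typical cause...
    assert behaviour(realizer) == "take umbrella" # ...same effect on behaviour
```

On a functionalist reading, both objects are in "the same" belief state, since the software-level description abstracts from how each class stores it; this is exactly the feature that critics of both the too-generous and too-parochial kinds seize upon.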
In logic and mathematics, a function is a relation that associates members of one class "X" with a unique member "y" of another class "Y." The association is written y = f(x); the class "X" is called the domain of the function, and "Y" its range. Thus "the father of x" is a function whose domain includes all people and whose range is the class of male parents, but the relation "son of x" is not a function, because a person can have more than one son. "Sine x" is a function from angles to real numbers; the circumference of a circle, πx, is a function of its diameter x; and so on. Functions may take sequences x1, . . ., xn as their arguments, in which case they may be thought of as associating a unique member of "Y" with any ordered n-tuple as argument. Given the equation y = f(x1, . . ., xn), x1, . . ., xn are called the independent variables, or arguments, of the function, and "y" the dependent variable or value. Functions may be many-one, meaning that different members of "X" may take the same member of "Y" as their value, or one-one, when to each member of "X" there corresponds a distinct member of "Y." A function with domain "X" and range within "Y" is called a mapping from "X" to "Y," written f: X ➝ Y. If the function is such that (1) if x, y ∈ X and f(x) = f(y), then x = y, then the function is an injection from X to Y. If also (2) if y ∈ Y, then (∃x)(x ∈ X & y = f(x)), then the function is a bijection of "X" onto "Y." A bijection is both an injection and a surjection, where a surjection is any function whose domain is "X" and whose range is the whole of "Y." Since functions are relations, a function may be defined as a set of ordered pairs.
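For functions on finite sets, conditions (1) and (2) can be checked mechanically. The following Python sketch is purely illustrative: it represents a function as a dict of ordered pairs, exactly as the closing sentence suggests, and tests the injection, surjection, and bijection properties.

```python
# A function on finite sets, represented as a dict (a set of ordered pairs).

def is_injection(f):
    """(1) f(x) = f(y) implies x = y: no two arguments share a value."""
    values = list(f.values())
    return len(values) == len(set(values))

def is_surjection(f, codomain):
    """(2) every y in the codomain is f(x) for some x in the domain."""
    return set(f.values()) == set(codomain)

def is_bijection(f, codomain):
    """A bijection is both an injection and a surjection."""
    return is_injection(f) and is_surjection(f, codomain)

square = {1: 1, 2: 4, 3: 9}               # one-one: an injection
assert is_injection(square)

parity = {1: "odd", 2: "even", 3: "odd"}  # many-one: not an injection
assert not is_injection(parity)
assert is_surjection(parity, {"odd", "even"})

shift = {0: 1, 1: 2, 2: 0}                # one-one and onto {0, 1, 2}
assert is_bijection(shift, {0, 1, 2})
```

The `parity` example shows a many-one function; `shift` shows why a bijection pairs off the two classes without remainder.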
One of Frége’s logical insights was that a concept is analogous of a function, as a predicate analogous to the expression for a function (a functor). Just as “the square root of x” takes you from one number to another, so “x is a philosopher’ refers to a function that takes us from his person to truth-values: True for values of “x” who are philosophers, and false otherwise.”
Foundationalism can be attacked both in its commitment to immediate justification and in its claim that all mediately justified beliefs ultimately depend on the former. Though in some respects the latter is the position's weaker point, most of the criticism has been directed at the former, and much of this criticism has been directed against some particular form of immediate justification, ignoring the possibility of other forms. Thus much anti-foundationalist artillery has been directed at the "myth of the given": the idea that something is given to consciousness in a pre-conceptual, pre-judgemental mode, and that beliefs can be justified on that basis (Sellars, 1963). The most prominent general argument against immediate justification is that whatever confers justification does so only if the subject is justified in supposing that the putative justifier has what it takes to do so. Hence, since the justification of the original belief depends on the justification of the higher-level belief just specified, the justification is not immediate after all. We may lack adequate support for any such higher-level requirement for justification: and if it were imposed we would be launched on an infinite regress, for a similar requirement would hold equally for the higher-level belief that the original justifier was efficacious.
The reflexive considerations initiated by functionalism suggest that an intelligent system, or mind, may fruitfully be thought of as the result of a number of sub-systems performing simpler tasks in co-ordination with each other. The sub-systems may be envisaged as homunculi, or small, relatively stupid agents. The archetype is a digital computer, where a battery of switches capable of only one response (on or off) can make up a machine that can play chess, write dictionaries, etc.
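The homunculus picture can be illustrated with the simplest possible "agents." In the Python sketch below, each component knows only a single on/off response (a NAND gate, chosen here merely as a convenient primitive), yet wiring such components together yields a capacity, exclusive-or, that none of them has alone:

```python
def nand(a, b):
    """A maximally stupid homunculus: one fixed on/off response."""
    return not (a and b)

def xor(a, b):
    """A more complex capacity built entirely from NAND homunculi."""
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# The composite system does something no single switch can do.
truth_table = [xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
assert truth_table == [False, True, True, False]
```

NAND is in fact functionally complete, so in principle any digital capacity, including the chess-playing machine of the text, decomposes into batteries of this one trivial response.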
Nonetheless, when we are confronted with the range of putatively self-conscious cognitive states, one might assume that there is a single ability that is presupposed. This is my ability to think about myself: I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding. These are all ways of thinking about myself.
Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects. My knowledge that I am a human being deploys certain conceptual abilities that I can also deploy in thinking that you are a human being. The same holds when I congratulate myself for satisfying the exacting moral standards of autonomous moral agency. This involves concepts and descriptions that can apply equally to me and to others. On the other hand, when I think about myself, I am also putting to work an ability that I cannot put to work in thinking about other people and other objects. This is precisely the ability to apply those concepts and descriptions to myself. It has become common to refer to this ability as the ability to entertain "I"-thoughts.
What is an "I"-thought? Obviously, an "I"-thought is a thought that involves self-reference: I can think an "I"-thought only by thinking about myself. Equally obviously, though, this cannot be all there is to say on the subject. I can think thoughts that involve self-reference but are not "I"-thoughts. Suppose I think that the next person to get a parking ticket in the centre of Toronto deserves everything he gets. Unbeknownst to me, the very next recipient of a parking ticket will be me. This makes my thought self-referring, but it does not make it an "I"-thought. Why not? The answer is simply that I do not know that I will be the next person to get a parking ticket in downtown Toronto. If "A" is that unfortunate person, then there is a true identity statement of the form I = A, but since I do not know that this identity holds, I cannot be ascribed the thought that I will deserve everything I get. And so I am not thinking genuine "I"-thoughts, because one cannot think a genuine "I"-thought if one is ignorant that one is thinking about oneself. So it is natural to conclude that "I"-thoughts involve a distinctive type of self-reference. This is the sort of self-reference whose natural linguistic expression is the first-person pronoun "I," because one cannot use the first-person pronoun without knowing that one is thinking about oneself.
This is still not quite right, however, because thought contents can be specified either directly or indirectly. The suggestion is that all the cognitive states to be considered presuppose the ability to think about oneself. This is not only something they all have in common; it is also what underlies them all. We can now see in more detail what this suggestion amounts to. The claim is that what makes all those cognitive states modes of self-consciousness is the fact that they all have contents that can be specified either directly by means of the first-person pronoun "I" or indirectly by means of the indirect reflexive pronoun "he": they are first-person contents.
The class of first-person contents is not a homogeneous class. There is an important distinction to be drawn between two different types of first-person content, corresponding to two different modes in which the first person can be employed. The existence of this distinction was first noted by Wittgenstein in an important passage from The Blue Book: there are two different cases in the use of the word "I" (or "my"), which may be called "the use as object" and "the use as subject." Examples of the first kind of use are these: "My arm is broken," "I have grown six inches," "I have a bump on my forehead," "The wind blows my hair about." Examples of the second kind are: "I see so-and-so," "I try to lift my arm," "I think it will rain," "I have a toothache" (Wittgenstein 1958).
The explanation given of the distinction hinges on whether or not the judgements involve identification. One can point to the difference between these two categories by saying that the cases of the first category involve the recognition of a particular person, and that in these cases there is the possibility of an error; or, as Wittgenstein puts it: The possibility of an error has been provided for . . . It is possible that, say in an accident, I should feel a pain in my arm, see a broken arm at my side, and think it is mine when really it is my neighbour's. And I could, looking into a mirror, mistake a bump on his forehead for one on mine. On the other hand, there is no question of recognizing a person when I have a toothache. To ask "Are you sure that it is you who have pains?" would be nonsensical (Wittgenstein 1958).
Wittgenstein is drawing a distinction between two types of first-person content. The first type, which he describes as involving the use of "I" as object, can be analysed in terms of more basic propositions. Suppose that the thought "I am B" involves such a use of "I." Then we can understand it as a conjunction of the following two thoughts: "a is B" and "I am a." We can term the former the predication component and the latter the identification component (Evans 1982). The reason for breaking the original thought down into these two components is precisely the possibility of error that Wittgenstein stresses in the second passage cited. One can be quite correct in predicating that someone is B, even though mistaken in identifying oneself as that person.
To say that a statement “a is B” is subject to error through misidentification relative to the term “a” means the following is possible: The speaker knows some particular thing to be “B,” but makes the mistake of asserting “a is B” because, and only because, he mistakenly thinks that the thing he knows to be “B” is what “a” refers to (Shoemaker 1968).
The point, then, is that one cannot be mistaken about who is being thought about. In one sense, Shoemaker's criterion of immunity to error through misidentification relative to the first-person pronoun (hereafter simply "immunity to error through misidentification") is too restrictive. Beliefs with first-person contents that are immune to error through misidentification tend to be acquired on grounds that usually result in knowledge, but they do not have to be. The definition of immunity to error through misidentification needs to be adjusted to accommodate them, by formulating it in terms of justification rather than knowledge.
The connection to be captured is between the sources and grounds from which a belief is derived and the justification there is for that belief. Beliefs and judgements are immune to error through misidentification in virtue of the grounds on which they are based. The category of first-person contents being picked out is not defined by its subject matter or by any points of grammar. What demarcates the class of judgements and beliefs that are immune to error through misidentification is the evidence base from which they are derived, or the information on which they are based. So, to take an example, my thought that I have a toothache is immune to error through misidentification because it is based on my feeling a pain in my teeth. Similarly, the fact that I am consciously perceiving you makes my belief that I am seeing you immune to error through misidentification.
A suggestive definition, then, is this: to say that a statement "a is b" is subject to error through misidentification relative to the term "a" means that the following is possible: the speaker is warranted in believing that some particular thing is "b," because his belief is based on an appropriate evidence base, but he makes the mistake of asserting "a is b" because, and only because, he mistakenly thinks that the thing he justifiably believes to be "b" is what "a" refers to.
First-person contents that are immune to error through misidentification can be mistaken, but they do have a basic warrant in virtue of the evidence on which they are based, because the fact that they are derived from such an evidence base is closely linked to the fact that they are immune to error through misidentification. Of course, there is room for considerable debate about what types of evidence base are correlated with this class of first-person contents. It seems, then, that the distinction between different types of first-person content can be characterized in two different ways. We can distinguish between those first-person contents that are immune to error through misidentification and those that are subject to such error. Alternatively, we can distinguish between first-person contents with an identification component and those without such a component. For present purposes, these different formulations pick out the same classes of first-person contents, although in interestingly different ways.
Any first-person content subject to error through misidentification contains an identification component of the form "I am a," which itself employs the first-person pronoun. Of that employment we can ask in turn: does it or does it not have an identification component? Clearly, on pain of an infinite regress, at some stage we will have to arrive at an employment of the first-person pronoun that does not presuppose an identification component. The conclusion, then, is that any first-person content subject to error through misidentification will ultimately be anchored in a first-person content that is immune to error through misidentification.
It is also important to stress that any theory of self-consciousness that accords a serious role to mastery of the semantics of the first-person pronoun is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language. The principle has been defended most vigorously by Michael Dummett.
Even so, thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed (Dummett 1978).
Dummett goes on to draw the clear methodological implications of this view of the nature of thought: we communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language. It is these principles, which relate to what is open to view in the use of language rather than to anything accessible only otherwise than via the medium of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.
Many philosophers would want to dissent from the strong claim that the philosophical analysis of thought through the philosophical analysis of language is the fundamental task of philosophy. But there is a weaker principle that is very widely held, which may be called the Thought-Language Principle.
As it stands, there is a distinction between two different roles that the pronoun "he" can play in such oratio obliqua clauses. On the one hand, "he" can be employed in a clause expressing a proposition that the antecedent of the pronoun (i.e., the person named just before the clause in question) would have expressed using the first-person pronoun. In such a situation "he" is functioning as a quasi-indicator, and when it does so it may be written "he*." Others have described this as the indirect reflexive pronoun. When "he" is functioning as an ordinary indicator, it picks out an individual in such a way that the person named just before the clause need not realize the identity of himself with that person. Clearly, then, the class of first-person contents is not a homogeneous class.
There is an obvious but central question that arises in considering the relation between the content of thought and the content of language, namely, whether there can be thought without language, as theories like the functionalist theory suggest. The conception of thought and language that underlies the Thought-Language Principle is clearly opposed to the proposal that there might be thought without language, but it is important to realize that neither the principle nor the considerations adverted to by Dummett directly yield the conclusion that there cannot be thought in the absence of language. According to the principle, the capacity for thinking particular thoughts can only be analysed through the capacity for linguistic expression of those thoughts. On the face of it, however, this does not yield the claim that the capacity for thinking particular thoughts cannot exist without the capacity for their linguistic expression.
That thoughts are wholly communicable does not entail that thoughts must always be communicated, which would be an absurd conclusion. Nor does it appear to entail that there must always be a possibility of communicating thoughts in any sense that would be incompatible with the ascription of thoughts to a non-linguistic creature. There is, after all, a clear distinction between thoughts being wholly communicable and its being actually possible to communicate any given thought. But without that conclusion there seems no way of getting from a thesis about the necessary communicability of thought to a thesis about the impossibility of thought without language.
A subject has self-awareness to the extent that he is able to distinguish himself from the environment and its contents. He has psychological self-awareness to the extent that he is able to distinguish himself as a psychological subject within a contrast space of other psychological subjects. What does this require? The notion of a non-conceptual point of view brings together the capacity to register one's distinctness from the physical environment and various navigational capacities that manifest a degree of understanding of the spatial nature of the physical environment. One very basic reason for thinking that these two elements must be considered together emerges from the point that the richness of the self-awareness that accompanies the capacity to distinguish the self from the environment is directly proportional to the richness of the awareness of the environment from which the self is being distinguished. So no creature can understand its own distinctness from the physical environment without having an independent understanding of the nature of the physical environment, and since the physical environment is essentially spatial, this requires an understanding of the spatial nature of the physical environment. But this cannot be the whole story. It leaves unexplained why an understanding should be required of this particular essential feature of the physical environment. After all, it is also an essential feature of the physical environment that it be composed of objects that have both primary and secondary qualities, but there is no reflection of this in the notion of a non-conceptual point of view. More is needed to understand the significance of spatiality.
First, let us take a step back from primitive self-consciousness to consider the account of self-identifying first-person thoughts given in Gareth Evans's The Varieties of Reference (1982). Evans places considerable stress on the connection between the form of self-consciousness that he is considering and a grasp of the spatial nature of the world. As far as Evans is concerned, the capacity to think genuine first-person thoughts implicates a capacity for self-location, which he construes in terms of a thinker's capacity to conceive of himself as an element of the objective order. Though we need not endorse the particular gloss that Evans puts on this, the general idea is very powerful. The relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is himself a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. Evans tends to stress a dependence in the opposite direction between these notions.
The very idea of a perceivable, objective spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere cannot be separated, and where he is is given by what he can perceive (Evans 1982).
But the main thrust of his work is very much that the dependence holds equally in the opposite direction.
It seems that this general idea can be extrapolated and brought to bear on the notion of a non-conceptual point of view. What binds together the two apparently discrete components of a non-conceptual point of view is precisely the fact that a creature's self-awareness must be awareness of itself as a spatial being that acts upon and is acted upon by the spatial world. Evans's own gloss on how a subject's self-awareness is awareness of himself as a spatial being involves the subject's mastery of a simple theory explaining how the world makes his perceptions as they are, with principles like "I perceive such and such; such and such holds at p; so (probably) I am at p" and "I am at p; such and such does not hold at p; so I cannot really be perceiving such and such, even though it appears that I am" (Evans 1982). This is not very satisfactory, though. If the claim is that the subject must explicitly hold these principles, then it is clearly false. If, on the other hand, the claim is that these are the principles of a theory that a self-conscious subject must tacitly know, then the claim seems very uninformative in the absence of a specification of the precise forms of behaviour that can only be explained by the ascription of such a body of tacit knowledge. We need an account of what it is for a subject to be correctly described as possessing such a simple theory of perception. The point, however, is simply that the notion of a non-conceptual point of view as presented can be viewed as capturing, at a more primitive level, precisely the same phenomenon that Evans is trying to capture with his notion of a simple theory of perception.
But it must not be forgotten that a vital role in this is played by the subject's own actions and movements. Appreciating the spatiality of the environment and one's place in it is largely a function of grasping one's possibilities for action within the environment: realizing that if one wants to return to a particular place from here one must pass through these intermediate places, or that if there is something there that one wants, one should take this route to obtain it. That this is something Evans's account could potentially overlook emerges when one reflects that a simple theory of perception of the form described could be possessed and deployed by a subject that moves only passively, whereas the notion of a non-conceptual point of view incorporates the dimension of action by emphasizing the particularities of navigation.
Moreover, stressing the importance of action and movement indicates how the notion of a non-conceptual point of view might be grounded in the self-specifying information for action to be found in visual perception. Think particularly of the concept of an affordance, so central to Gibsonian theories of perception. One important type of self-specifying information in the visual field is information about the possibilities for action and reaction that the environment affords the perceiver; such affordances are non-conceptual first-person contents. The development of a non-conceptual point of view clearly involves certain forms of reasoning, and clearly we will not have a full understanding of the notion of a non-conceptual point of view until we have an explanation of how this reasoning can take place. The spatial reasoning involved in developing a non-conceptual point of view upon the world is largely a matter of calibrating different affordances into an integrated representation of the world.
In short, any learned cognitive ability must be constructible out of more primitive abilities already in existence. There are good reasons to think that the perception of affordances is innate. And so, if the perception of affordances is the key to the acquisition of an integrated spatial representation of the environment via the recognition of affordance symmetries, affordance transitivities, and affordance identities, then it is perfectly conceivable that the capacities implicated in an integrated representation of the world could emerge non-mysteriously from innate abilities.
Nonetheless, there are many philosophers who would be prepared to countenance the possibility of non-conceptual content without accepting the use of the theory of non-conceptual content to solve the paradox of self-consciousness. This is a more substantial task, as the methodology adopted rests on the first of the marks of content, namely that content-bearing states serve to explain behaviour in situations where the connections between sensory input and behavioural output cannot be plotted in a law-like manner (the functionalist theory of self-reference). This is not to say that every instance of intentional behaviour where there are no such law-like connections between sensory input and behavioural output needs to be explained by attributing to the creature in question representational states with first-person contents. Even so, many such instances of intentional behaviour do need to be explained in this way. This offers a way of establishing the legitimacy of non-conceptual first-person contents. What would satisfactorily demonstrate their legitimacy would be the existence of forms of behaviour in pre-linguistic or non-linguistic creatures for which inference to the best explanation (which in this context includes inference to the most parsimonious explanation) demands the ascription of states with non-conceptual first-person contents.
Non-conceptual first-person contents and the pick-up of self-specifying information in the structure of exteroceptive perception provide very primitive forms of non-conceptual self-consciousness, even if forms that can plausibly be viewed as in place from birth or shortly afterward. The dimension along which forms of self-consciousness must be compared is the richness of the conception of the self that they provide. A crucial element in any form of self-consciousness is how it enables the self-conscious subject to distinguish between self and environment - what many developmental psychologists term self-world dualism. In this sense, self-consciousness is essentially a contrastive notion. One implication of this is that a proper understanding of the richness of any conception of the self requires that we take into account the richness of the conception of the environment with which it is associated. In the case of both somatic proprioception and the pick-up of self-specifying information in exteroceptive perception, there is a relatively impoverished conception of the environment. One prominent limitation is that both are synchronic rather than diachronic. The distinction between self and environment that they offer is a distinction that is effective at a time but not over time. The contrast between propriospecific and exterospecific invariants in visual perception, for example, provides a way for a creature to distinguish between itself and the world at any given moment, but this is not the same as a conception of oneself as an enduring thing distinguishable over time from an environment that also endures over time.
One possible reaction to the paradox of self-consciousness is that it arises only because unrealistic and ultimately unwarranted requirements are being placed on what is to count as genuinely self-referring first-person thought. Support for such an objection will be found in those theories that attempt to explain first-person thoughts in a way that does not presuppose any form of internal representation of the self or any form of self-knowledge. The paradox arises because mastery of the semantics of the first-person pronoun is available only to creatures capable of thinking first-person thoughts whose contents involve reflexive self-reference and thus seem to presuppose mastery of the first-person pronoun. If, though, it can be established that the capacity to think genuinely first-person thoughts does not depend on any linguistic and conceptual abilities, then arguably the problem of circularity will no longer have purchase.
There is an account of self-reference and genuinely first-person thought that can be read in a way that poses just such a direct challenge to the account of self-reference underpinning the paradox. This is the functionalist account. As noted earlier, on the functionalist view reflexive self-reference is a completely non-mysterious phenomenon susceptible to a functional analysis. Reflexive self-reference is not dependent upon any antecedent conceptual or linguistic skills. Nonetheless, the functionalist account of reflexive self-reference is deemed to be sufficiently rich to provide the foundation for an account of the semantics of the first-person pronoun. If this is right, then the circularity at the heart of the paradox of self-consciousness can be avoided.
The circularity problems at the root of consciousness arise because mastery of the semantics of the first-person pronoun requires the capacity to think first-person thoughts whose natural expression is by means of the first-person pronoun. It seems clear that the circle will be broken if there are forms of first-person thought more primitive than these, forms that do not require linguistic mastery of the first-person pronoun. What creates the problem of capacity circularity is the thought that we need to appeal to first-person contents in explaining mastery of the first-person pronoun, combined with the thought that any creature capable of entertaining first-person contents will have mastered the first-person pronoun. So if we want to retain the thought that mastery of the first-person pronoun can only be explained in terms of first-person contents, capacity circularity can be avoided only if there are first-person contents that do not presuppose mastery of the first-person pronoun.
On the other hand, it seems to follow from everything said earlier about "I"-thoughts that first-person thought in the absence of linguistic mastery of the first-person pronoun is a contradiction in terms. First-person thoughts have first-person contents, where first-person contents can only be specified in terms of either the first-person pronoun or the indirect reflexive pronoun. So how could such thoughts be entertained by a thinker incapable of reflexive self-reference? How can a thinker who has not mastered the first-person pronoun plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are genuine first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.
The best developed functionalist theory of self-reference has been provided by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a "subjective belief," that is to say, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. The explanation of subjective belief that he offers makes such beliefs independent of both linguistic abilities and conscious beliefs. From this basic account he constructs an account of conscious subjective beliefs and of the reference of the first-person pronoun "I." These putatively more sophisticated cognitive states are causally derivable from basic subjective beliefs.
Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief: "agency entails neither linguistic ability nor conscious belief" (Mellor 1988). The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief via the equation of truth conditions and utility conditions, where utility conditions are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. We can see how this works by considering Mellor's own example. Consider a creature 'x' who is hungry and has a desire for food at time 't'. That creature has a token belief b(p) that conjoins with its desire for food to cause it to eat what is in front of it at that time. Moreover, for b(p) to cause 'x' to eat what is in front of it at 't', b(p) must be a belief that 'x' has at 't'. For Mellor, therefore, the utility/truth condition of b(p) is that the creature that has this belief is actually facing food at the time it has it. And a belief with this content is, of course, the subjective belief whose natural linguistic expression would be 'I am facing food now'. On the other hand, a belief that would naturally be expressed with these words can be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.
What secures self-reference in the belief b(p) is the contiguity of cause and effect. The essence of a subjective belief is that it causes action conjointly with a desire or set of desires, and the relevant sort of conjunction is possible only if it is the same agent at the same time who has both the desire and the belief.
For in order to believe 'p', I need only be disposed to eat what I face if I feel hungry, a disposition which causal contiguity ensures that only my simultaneous hunger can provoke, and only into making me eat, and only then.
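The causal structure of Mellor's example can be sketched as a toy computational model. This is my own illustration, not Mellor's formalism: the names `World`, `act`, and `utility_condition_holds` are assumptions introduced only to make the belief-desire-action dependency explicit. The point it displays is that the belief's utility condition (the circumstance in which acting on it satisfies the desire) coincides with its truth condition (that food is actually in front of the agent).

```python
from dataclasses import dataclass


@dataclass
class World:
    """The circumstance the belief b(p) is 'about'."""
    food_in_front: bool


def act(believes_food_here: bool, desires_food: bool) -> str:
    """Functionalist premise: belief and desire jointly cause action."""
    return "eat" if (believes_food_here and desires_food) else "do_nothing"


def desire_satisfied(action: str, world: World) -> bool:
    """Eating satisfies the desire for food only if food is actually there."""
    return action == "eat" and world.food_in_front


def utility_condition_holds(world: World) -> bool:
    """The utility condition of b(p): the action caused by the belief,
    conjoined with the desire for food, satisfies that desire. This holds
    exactly when food is in front of the agent -- the truth condition."""
    return desire_satisfied(act(True, True), world)


print(utility_condition_holds(World(food_in_front=True)))   # -> True
print(utility_condition_holds(World(food_in_front=False)))  # -> False
```

Note that nothing in the model requires the agent to possess language: the belief's content is fixed entirely by its causal role in behaviour, which is Mellor's reason for ascribing such beliefs to non-linguistic creatures.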
Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometric and numerical relationships. We speculate that the seeds of the scientific imagination were planted in ancient Greece, as opposed to Chinese or Babylonian culture, partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge with marginal cultural utility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigation; but it was only when this inheritance from Greek philosophy was wedded to certain beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
All the same, newer logical frameworks point to the logical conditions for the description and comprehension of experience in domains such as quantum physics. While normally referred to as the principle of complementarity, the use of the word "principle" is unfortunate, in that complementarity is not a principle as that word is used in physics. Complementarity is rather a logical framework for the acquisition and comprehension of scientific knowledge, one that discloses a new relationship between physical theory and physical reality and undermines all appeals to metaphysics.
Under the logical conditions for description in quantum mechanics, the two conceptual components of classical causality, space-time description and energy-momentum conservation, are mutually exclusive and can be coordinated only through the limitations imposed by Heisenberg's indeterminacy principle.
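The limitations invoked here can be stated formally. In the standard textbook formulation (not drawn from this text), the indeterminacy principle bounds the product of the uncertainties of the conjugate quantities belonging to the two descriptive modes:

```latex
\Delta x \,\Delta p_x \;\geq\; \frac{\hbar}{2},
\qquad
\Delta E \,\Delta t \;\geq\; \frac{\hbar}{2}
```

The first relation limits how precisely a space-time description (position $x$) and a momentum-conservation description ($p_x$) can be applied to the same situation at once; the second does the same for energy and time. It is in this quantitative sense that the two mutually exclusive constructs are "coordinated."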
The logical framework of complementarity is useful and necessary when the following requirements are met: (1) the theory consists of two individually complete constructs; (2) the constructs preclude one another in a description of the unique physical situation to which they both apply; and (3) both together constitute a complete description of that situation. When we discover a situation in which complementarity clearly applies, we necessarily confront an imposing limit to our knowledge of that situation. Knowledge can never be complete in the classical sense, because we are unable simultaneously to apply the mutually exclusive constructs that make up the complete description.
Why, then, must we use classical descriptive categories, like space-time description and energy-momentum conservation, in our descriptions of quantum events? If classical mechanics is only an approximation of the actual physical situation, it would seem to follow that classical descriptive categories are not adequate to describe that situation. If, for example, quantities like position and momentum are abstractions with properties that are "definable and observable only through their interactions with other systems," why should we treat these classical categories as if they were actual quantities in physical theory and experiment? The question is rarely discussed, but it carries formidable implications for the future of scientific thought.
Heidegger's theory of spatiality distinguishes three different types of space: (1) world-space, (2) regions (Gegend), and (3) Dasein's spatiality. What Heidegger calls "world-space" is space conceived as an "arena" or "container" for objects. It captures both our ordinary conception of space and theoretical space - in particular absolute space. Chairs, desks, and buildings exist "in" space, but world-space is independent of such objects, much like the absolute space "in which" things exist. However, Heidegger thinks that such a conception of space is an abstraction from the spatializing conduct of our everyday activities. The things that we deal with are near or far relative to us; according to Heidegger, this nearness or farness of things is how we first become familiar with that which we (later) represent to ourselves as "space." This familiarity is what renders the understanding of space (in a "container" metaphor or in any other way) possible. It is because we act spatially, going to places and reaching for things to use, that we can even develop a conception of abstract space at all. What we normally think of as space - world-space - turns out not to be what space fundamentally is; world-space is, in Heidegger's terminology, space conceived as vorhanden. It is an objectified space founded on a more basic space-of-action.
Since Heidegger thinks that space-of-action is the condition for world-space, he must explain the former without appealing to the latter. Heidegger's task then is to describe the space-of-action without presupposing such world-space and the derived concept of a system of spatial coordinates. However, this is difficult because all our usual linguistic expressions for describing spatial relations presuppose world-space. For example, how can one talk about the "distance between you and me" without presupposing some sort of metric, i.e., without presupposing an objective access to the relation? Our spatial notions such as "distance," "location," etc. must now be redescribed from a standpoint within the spatial relation of self (Dasein) to the things dealt with. This problem is what motivates Heidegger to invent his own terminology and makes his discussion of space awkward. In what follows I will try to use ordinary language whenever possible to explain his principal ideas.
The space-of-action has two aspects: regions (space as Zuhandenheit) and Dasein's spatiality (space as Existentiale). The sort of space we deal with in our daily activity is "functional" or zuhanden, and Heidegger's term for it is "region." The places we work and live in - the office, the park, the kitchen, etc. - all have different regions that organize our activities and contextualize "equipment." My desk area as my work region has a computer, printer, telephone, books, etc., in their appropriate "places," according to the spatiality of the way in which I work. Regions differ from space viewed as a "container"; the latter notion lacks a "referential" organization with respect to our context of activities. Heidegger wants to claim that referential functionality is an inherent feature of space itself, and not just a "human" characteristic added to a container-like space.
In our activity, how do we specifically stand with respect to functional space? We are not "in" space as things are, but we do exist in some spatially salient manner. What Heidegger is trying to capture is the difference between the nominal expression "we exist in space" and the adverbial expression "we exist spatially." He wants to describe spatiality as a mode of our existence rather than conceiving space as an independent entity. Heidegger identifies two features of Dasein's spatiality - "de-severance" (Ent-fernung) and "directionality" (Ausrichtung).
De-severance describes the way we exist as a process of spatial self-determination by "making things available" to ourselves. In Heidegger's language, in making things available we "take in space" by "making the farness vanish" and by "bringing things close."
We are not simply contemplative beings, but we exist through concretely acting in the world - by reaching for things and going to places. When I walk from my desk area into the kitchen, I am not simply changing locations from point A to B in an arena-like space, but I am “taking in space” as I move, continuously making the “farness” of the kitchen “vanish,” as the shifting spatial perspectives are opened as I go along.
This process is also inherently "directional." Every de-severing is aimed toward something or in a certain direction that is determined by our concern and by specific regions. I must always face and move in a certain direction that is dictated by a specific region. If I want to get a glass of iced tea, instead of going out into the yard, I face toward the kitchen and move in that direction, following the region of the hallway and the kitchen. Regions determine where things belong, and our actions are coordinated in directional ways accordingly.
De-severance, directionality, and regionality are three ways of describing the spatiality of a unified Being-in-the-world. As aspects of Being-in-the-world, these spatial modes of being are equiprimordial. Regions "refer" to our activities, since they are established by our ways of being and our activities. Our activities, in turn, are defined in terms of regions. Only through the region can our de-severance and directionality be established. Our object of concern always appears in a certain context and place, in a certain direction. It is because things appear in a certain direction and in their places "there" that we have our "here." We orient ourselves and organize our activities, always within regions that must already be given to us.
Heidegger's analysis of space does not refer to temporal aspects of Being-in-the-world, even though they are presupposed. In the second half of Being and Time he explicitly turns to the analysis of time and temporality in a discussion that is significantly more complex than the earlier account of spatiality. Heidegger makes the following five distinctions between types of time and temporality: (1) the ordinary or "vulgar" conception of time; this is time conceived as Vorhandenheit. (2) world-time; this is time as Zuhandenheit. Dasein's temporality is divided into three types: (3) Dasein's inauthentic (uneigentlich) temporality, (4) Dasein's authentic (eigentlich) temporality, and (5) temporal originality or “temporality as such.” The analyses of the vorhanden and zuhanden modes of time are interesting, but it is Dasein's temporality that is relevant to our discussion, since it is this form of time that is said to be founding for space. Unfortunately, Heidegger is not clear about which temporality plays this founding role.
We can begin by excluding Dasein's inauthentic temporality. This mode of time refers to our unengaged, "average" way in which we regard time. It is the “past we forget” and the “future we expect,” all without decisiveness and resolute understanding. Heidegger seems to consider that this mode of temporality is the temporal dimension of de-severance and directionality, since de-severance and directionality deal only with everyday actions. As such, inauthentic temporality must itself be founded in an authentic basis of some sort. The two remaining candidates for the foundation are Dasein's authentic temporality and temporal originality.
Dasein's authentic temporality is the "resolute" mode of temporal existence. Authentic temporality is realized when Dasein becomes aware of its own finite existence. This temporality has to do with one's grasp of one's own life as a whole from one's own unique perspective. Life gains meaning as one's own life-project, bounded by one's realization that one is not immortal. This mode of time appears to have a normative function within Heidegger's theory. In the second half of BT he often refers to the inauthentic or "everyday" mode of time as lacking some primordial quality which authentic temporality possesses.
In contrast, temporal originality is the formal structure of Dasein's temporality itself. In addition to its spatial Being-in-the-world, Dasein also exists essentially as "projection." Projection is oriented toward the future, and this futural orientation regulates our concern by constantly realizing various possibilities. Temporality is characterized formally as this dynamic structure of "a future that makes present in the process of having been." Heidegger calls the three moments of temporality - the future, the present, and the past - the three ecstases of temporality. This mode of time is not normative but rather formal or neutral; as Blattner argues, the temporal features that constitute Dasein's temporality describe both inauthentic and authentic temporality.
There are some passages that indicate that authentic temporality is the primary manifestation of temporality, because of its essential orientation toward the future. For instance, Heidegger states that "temporality first showed itself in anticipatory resoluteness." Elsewhere, he argues that "the 'time' which is accessible to Dasein's common sense is not primordial, but arises rather from authentic temporality." In these formulations, authentic temporality is said to found the other, inauthentic modes. According to Blattner, this is "by far the most common" interpretation of the status of authentic time.
However, in agreement with Blattner and Haar, there are far more passages where Heidegger considers temporal originality, as distinct from authentic temporality, to be founding both for it and for Being-in-the-world as well. Here are some examples: Temporality has different possibilities and different ways of temporalizing itself. The basic possibilities of existence, the authenticity and inauthenticity of Dasein, are grounded ontologically on possible temporalizations of temporality. Time is primordial as the temporalizing of temporality, and as such it makes possible the Constitution of the structure of care.
Heidegger's conception seems to be that it is because we are fundamentally temporal - having the formal structure of ecstatico-horizonal unity - that we can project, authentically or inauthentically, our concernful dealings in the world and exist as Being-in-the-world. It is on this account that temporality is said to found spatiality.
Since Heidegger uses the term "temporality" rather than "authentic temporality" whenever the founding relation is discussed between space and time, I will begin the following analysis by assuming that it is originary temporality that founds Dasein's spatiality. On this assumption two interpretations of the argument are possible, but both are unsuccessful given his phenomenological framework.
I will then consider the possibility that it is "authentic temporality" which founds spatiality. Two interpretations are also possible in this case, but neither will establish a founding relation successfully. I will conclude that despite Heidegger's claim, an equiprimordial relation between time and space is most consistent with his own theoretical framework. I will now evaluate the specific arguments in which Heidegger tries to prove that temporality founds spatiality.
The principal argument appears in the section entitled "The Temporality of the Spatiality that is Characteristic of Dasein." Heidegger begins the section with the following remark: Though the expression 'temporality' does not signify what one understands by "time" when one talks about 'space and time', nevertheless spatiality seems to make up another basic attribute of Dasein corresponding to temporality. Thus with Dasein's spatiality, existential-temporal analysis seems to come to a limit, so that this entity that we call "Dasein" must be considered as 'temporal' 'and' as spatial coordinately.
Accordingly, Heidegger asks, "Has our existential-temporal analysis of Dasein thus been brought to a halt . . . by the spatiality that is characteristic of Dasein . . . and Being-in-the-world?" His answer is no. He argues that since "Dasein's constitution and its ways to be are possible ontologically only on the basis of temporality," and since the "spatiality that is characteristic of Dasein . . . belongs to Being-in-the-world," it follows that "Dasein's specific spatiality must be grounded in temporality."
Heidegger's claim is that the totality of regions-de-severance-directionality can be organized and re-organized, "because Dasein as temporality is ecstatico-horizonal in its Being." Because Dasein exists futurally as "for-the-sake-of-which," it can discover regions. Thus, Heidegger remarks: "Only on the basis of its ecstatico-horizonal temporality is it possible for Dasein to break into space."
However, in order to establish that temporality founds spatiality, Heidegger would have to show that spatiality and temporality must be distinguished in such a way that temporality not only shares a content with spatiality but also has additional content as well. In other words, they must be truly distinct and not just analytically distinguishable. But what is the content of "the ecstatic-horizonal constitution of temporality?" Does it have a content above and beyond Being-in-the-world? Nicholson poses the same question as follows: Is it human care that accounts for the characteristic features of human temporality? Or is it, as Heidegger says, human temporality that accounts for the characteristic features of human care, serves as their foundation? The first alternative, according to Nicholson, is to reduce temporality to care: "the specific attributes of the temporality of Dasein . . . would be in their roots not aspects of temporality but reflections of Dasein's care." The second alternative is to treat temporality as having some content above and beyond care: "the three-fold constitution of care stems from the three-fold constitution of temporality."
Nicholson argues that the second alternative is the correct reading. Dasein lives in the world by making choices, but "the ekstasis of temporality lies well prior to any choice . . . so our study of care introduces us to a matter whose scope outreaches care: the ekstases of temporality itself." Accordingly, "what we were able to make clear is that the reign of temporal ekstasis over the choices we make accords with the place we occupy as finite beings in the world."
But if Nicholson's interpretation is right, what would be the content of "the ekstases of temporality itself," if not some sort of purely formal entity or condition such as Kant's "pure intuition?" But this would imply that Heidegger has left phenomenology behind and is now engaging in establishing a transcendental framework outside the analysis of Being-in-the-world, such that this formal structure founds Being-in-the-world. This is inconsistent with his initial claim that Being-in-the-world is itself foundational.
I believe Nicholson's first alternative offers a more consistent reading. The structure of temporality should be treated as an abstraction from Dasein's Being-in-the-world, specifically from care. In this case, the content of temporality is just the past and the present and the future ways of Being-in-the-world. Heidegger's own words support this reading: "as Dasein temporalizes itself, a world is too," and "the world is neither present-at-hand nor ready-to-hand, but temporalizes itself in temporality." He also states that the zuhanden "world-time, in the rigorous sense of the existential-temporal conception of the world, belongs to temporality itself." In this reading, "temporality temporalizing itself," "Dasein's projection," and "the temporal projection of the world" are three different ways of describing the same "happening" of Being-in-the-world, which Heidegger calls "self-directive."
However, if this is the case, then temporality does not found spatiality, except perhaps in the trivial sense that spatiality is built into the notion of care that is identified with temporality. The content of "temporality temporalizing itself" simply is the various openings of regions, i.e., Dasein's "breaking into space." Certainly, as Stroeker points out, it is true that "nearness and remoteness are spatio-temporal phenomena and cannot be conceived without a temporal moment." But this necessity does not constitute a foundation. Rather, they are equiprimordial. The addition of temporal dimensions does indeed complete the discussion of spatiality, which abstracted from time. But this completion, while it better articulates the whole of Being-in-the-world, does not show that temporality is more fundamental.
If temporality and spatiality are equiprimordial, then all of the supposedly founding relations between temporality and spatiality could just as well be reversed and still hold true. Heidegger's view is that "because Dasein as temporality is ecstatico-horizonal in its Being, it can take along with it a space for which it has made room, and it can do so factically and constantly." But if Dasein is essentially a factical projection, then the reverse should also be true. Heidegger appears to have assumed the priority of temporality over spatiality perhaps under the influence of Kant, Husserl, or Dilthey, and then based his analyses on that assumption.
However, there may still be a way to save Heidegger's foundational project in terms of authentic temporality. Although Heidegger never specifically says that authentic temporality founds spatiality, since he suggests earlier that the primary manifestation of temporality is authentic temporality, such a reading may perhaps be justified. This reading would ground the whole spatio-temporal structure of Being-in-the-world in authentic temporality. The resoluteness of authentic temporality, arising out of Dasein's own "Being-towards-death," would supply a content to temporality above and beyond everyday involvements.
On this reading, spatiality has its foundation in resoluteness: Dasein determines its own Situation through anticipatory resoluteness, which includes particular locations and involvements, i.e., the spatiality of Being-in-the-world. The same set of circumstances could be transformed into a new situation with a different significance, if Dasein resolutely chooses to bring that about. Authentic temporality in this case can be said to found spatiality, since Dasein's spatiality is determined by resoluteness. This reading moreover enables Heidegger to construct a hierarchical relation between temporality and spatiality within Being-in-the-world, rather than going outside of it to a formal transcendental principle, since the choice of spatiality is grasped phenomenologically in terms of the concrete experience of decision.
Moreover, one might argue that according to Heidegger one's own grasp of "death" is a uniquely temporal mode of existence, whereas there is no such weighty conception involving spatiality. Death is what makes Dasein "stand before itself in its ownmost potentiality-for-Being." Authentic Being-towards-death is a "Being toward a possibility - indeed, toward a distinctive possibility of Dasein itself." One could argue that notions such as "potentiality" and "possibility" are distinctively temporal, nonspatial notions. So "Being-towards-death," as temporal, appears to be much more ontologically "fundamental" than spatiality.
However, Heidegger is not yet out of the woods. I believe that labelling the notions of anticipatory resoluteness, Being-towards-death, potentiality, and possibility specifically as temporal modes of being (to the exclusion of spatiality) begs the question. Given Heidegger's phenomenological framework, why assume that these notions are only temporal (without spatial dimensions)? If Being-towards-death, potentiality-for-Being, and possibility were "purely" temporal notions, what phenomenological sense can we make of such abstract conceptions, given that these are manifestly our modes of existence as bodily beings? Heidegger cannot have in mind such an abstract notion of time, if he wants to treat authentic temporality as the meaning of care. It would seem more consistent with his theoretical framework to say that Being-towards-death is a rich spatio-temporal mode of being, given that Dasein is Being-in-the-world.
Furthermore, the interpretation that defines resoluteness as uniquely temporal suggests too voluntaristic or subjectivistic a notion of the self, one that controls its own Being-in-the-world by deciding its future. This would drive a wedge between the self and its Being-in-the-world, thereby creating a temporal "inner self" which can decide its own spatiality. However, if Dasein is Being-in-the-world, as Heidegger claims, then all of Dasein's decisions should be viewed as concretely grounded in Being-in-the-world. If so, spatiality must be an essential constitutive element of those decisions.
Hence, authentic temporality, if construed narrowly as the founding mode of temporality, at first appears able to found spatiality, but it also commits Heidegger either to an account of time that is too abstract, or to a notion of the self far more like Sartre's than his own. What is lacking in Heidegger's theory, and what generates this sort of difficulty, is a developed conception of Dasein as a lived body - a notion more fully developed by Merleau-Ponty.
The elements of a more consistent interpretation of authentic temporality are present in Being and Time. This interpretation incorporates a view of "authentic spatiality" into the notion of authentic temporality. This would be Dasein's resolutely grasping its own spatio-temporal finitude with respect to its place and its world. Dasein is born in a particular place, lives in a particular place, and dies in a particular place, all of which it can relate to in an authentic way. The place Dasein lives is not a place of anonymous involvements. The place of Dasein must be there where its own potentiality-for-Being is realized. Dasein's place is thus a determination of its existence. Had Heidegger developed such a conception more fully, he would have seen that temporality is equiprimordial with thoroughly spatial and contextual Being-in-the-world. They are distinguishable but equally fundamental ways of emphasizing our finitude.
The internal tensions within his theory eventually lead Heidegger to reconsider his own positions. In his later period, he explicitly develops what may be viewed as a conception of authentic spatiality. For instance, in "Building Dwelling Thinking," Heidegger states that Dasein's relations to locations and to spaces inhere in dwelling, and dwelling is the basic character of our Being. The notion of dwelling expresses an affirmation of spatial finitude. Through this affirmation one acquires a proper relation to one's environment.
But the idea of dwelling is in fact already present in Being and Time. In discussing the term "Being-in-the-world," Heidegger explains that the word "in" is derived from "innan" - to "reside," "habitare," "to dwell." The emphasis on "dwelling" highlights the essentially "worldly" character of the self.
Thus from the beginning Heidegger had a conception of spatial finitude, but this fundamental insight was undeveloped because of his ambition to carry out the foundational project that favoured time. From the 1930's on, as Heidegger abandons the foundational project focussing on temporality, the conception of authentic spatiality comes to the fore. For example, in Discourse on Thinking Heidegger considers the spatial character of Being as "that-which-regions (die Gegnet)." The peculiar expression is a re-conceptualization of the notion of "region" as it appeared in Being and Time. Region is given an active character and defined as the "openness that surrounds us" which "comes to meet us." By giving it an active character, Heidegger wants to emphasize that region is not brought into being by us, but rather exists in its own right, as that which expresses our spatial existence. Heidegger states that "one needs to understand ‘resolve’ (Entschlossenheit) as it is understood in Being and Time: as the opening of man [Dasein] particularly undertaken by him for openness, . . . which we think of as that-which-regions." Here Heidegger is asserting an authentic conception of spatiality. The finitude expressed in the notion of Being-in-the-world is thus transformed into an authentic recognition of our finite worldly existence in later writings.
The return to the conception of spatial finitude in the later period shows that Heidegger never abandoned the original insight behind his conception of Being-in-the-world. But once committed to this idea, it is hard to justify singling out one aspect of the self - temporality - as the foundation for the rest of the structure. All of the existentiales and zuhanden modes, which constitute the whole of Being-in-the-world, are equiprimordial, each mode articulating different aspects of a unified whole. The preference for temporality as the privileged meaning of existence reflects the Kantian residue in Heidegger's early doctrine, which he later rejected as still excessively subjectivistic.
Meanwhile, it seems natural to combine this close connection with an account of self-consciousness as the capacity to think “I”-thoughts that are immune to error through misidentification. Since such immunity turns on the semantics of “I,” this would be a redundancy account of self-consciousness: once we have an account of what it is to be capable of thinking “I”-thoughts, we will have explained everything distinctive about self-consciousness. The proposal stems from the thought that what is distinctive about “I”-thoughts is that they are either themselves immune to error through misidentification or rest on further “I”-thoughts that are immune in that way.
Once we have an account of what it is to be capable of thinking thoughts that are immune to error through misidentification, we will have explained everything about the capacity to think “I”-thoughts. This claim derives from the thought that immunity to error through misidentification depends on the semantics of “I.”
Once again: when we have an account of that semantics, we will have explained everything distinctive about the capacity to think thoughts that are immune to error through misidentification.
The suggestion is that the semantics of “I” will explain what is distinctive about the capacity to think thoughts immune to error through misidentification. Semantics alone cannot be expected to explain the capacity for thinking thoughts. The point, rather, is that all there is to the capacity to think thoughts that are immune to error is the capacity to think the sort of thought whose natural linguistic expression involves “I,” where this capacity is given by mastery of the semantics of “I.” What remains is to explain what it is to master the semantics of “I,” and in particular how such mastery yields thoughts immune to error through misidentification.
On this view, mastery of the semantics of “I” may be construed as the single most important element in a theory of self-consciousness.
A quick reformulation of the challenge that might be put to a defender of the redundancy or deflationary theory is this: how can mastery of the semantics of “I” make sense of the distinction between “I”-contents that are immune to error through misidentification and “I”-contents that lack such immunity? This is only an apparent difficulty, however, once one remembers that judgements employing “I” as object, which lack such immunity, can be broken down into their component elements: an identification component and a predication component. It is the composite character of such judgements that mastery of the semantics of “I” must be called upon to explain. Identification-free judgements, by contrast, are immune to error through misidentification.
It is also important to stress that the redundancy and deflationary theories of self-consciousness - and any theory of self-consciousness that accords a serious role to mastery of the semantics of “I” - are motivated by an important principle that has governed much of the development of analytical philosophy: the principle that the analysis of thought can only proceed through the philosophical analysis of language. We communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language. It is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind other than via the medium of language, that endow our sentences with the senses they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.
Still, at the core of the notion of broad self-consciousness is the recognition of what developmental psychologists call “self-world dualism.” Any subject properly described as self-conscious must be able to register the distinction between himself and the world; of course, this is a distinction that can be registered in a variety of ways. The capacity for self-ascription of thoughts and experiences, in combination with the capacity to understand the world as a spatially and causally structured system of mind-independent objects, is a high-level way of registering this distinction.
Consciousness of objects is closely related to sentience and to being awake. It is (at least) being in a distinctive informational and behavioural state, responsive to one's condition and to one's immediate environmental surroundings. It is the ability, for example, to process and act responsively to information about food, friends, foes, and other items of relevance. One finds consciousness of objects in creatures much less complex than human beings. It is what we (at any rate first and primarily) have in mind when we say of some person or animal coming out of general anaesthesia, ‘It is regaining consciousness.’ Consciousness of objects is not just any form of informational access to the world; it is knowing about, being conscious of, things in the world.
We are conscious of our representations when we are conscious, not (just) of some object, but of our representations: ‘I am seeing [as opposed to touching, smelling, tasting] and seeing clearly [as opposed to dimly].’ Consciousness of our own representations is the ability to process and act responsively to information about oneself, but it is not just any form of such informational access. It is knowing about, being conscious of, one's own psychological states. In Nagel's famous phrase (1974), when we are conscious of our representations, it is ‘like something’ to have them. If, as seems likely, there are forms of consciousness that do not involve consciousness of objects, they might consist in consciousness of representations, though some theorists would insist that this kind of consciousness is not of representations either (via representations, perhaps, but not of them).
The distinction just drawn between consciousness of objects and consciousness of our representations of objects may seem similar to Block's (1995) well-known distinction between P- [phenomenal] and A- [access] consciousness. Here is his definition of ‘A-consciousness’: "A state is A-conscious if it is poised for direct control of thought and action." He tells us that he cannot define ‘P-consciousness’ in any "remotely non-circular way" but will use it to refer to what he calls "experiential properties," what it is like to have certain states. Our consciousness of objects may appear to be a form of A-consciousness. It is not, however; it is a form of P-consciousness. Consciousness of an object is - how else can we put it? - consciousness of the object. Even if consciousness is just informational access of a certain kind (something that Block would deny), it is not just any form of informational access, and we are talking about conscious access here. Recall the idea that it is like something to have a conscious state. Other closely related ideas are that in a conscious state, something appears to one, and that conscious states have a ‘felt quality’. A term for all this is phenomenology: conscious states have a phenomenology. (Thus some philosophers speak of phenomenal consciousness here.) We could now state the point we are trying to make this way: if I am conscious of an object, then it is like something to have that object as the content of a representation.
Some theorists would insist that this last statement be qualified. While such a representation of an object may provide everything that a representation has to have for its contents to be like something to me, they would urge, something more is needed. Different theorists would add different elements. For some, I would have to be aware, not just of the object, but of my representation of it. For others, I would have to attend to the object, or to my representation of it, in a certain way. We cannot go into this controversy here. We are merely making the point that consciousness of objects is more than Block's A-consciousness.
Consciousness of self involves, not just consciousness of states that it is like something to have, but consciousness of the thing that has them, i.e., of oneself. It is the ability to process and act responsively to information about oneself, but again it is more than that. It is knowing about, being conscious of, oneself, indeed of oneself as oneself. Consciousness of oneself in this way is often called consciousness of self as the subject of experience. Consciousness of oneself as oneself seems to require indexical abilities, and a special indexical ability at that: not just an ability to pick something out but to pick something out as oneself. Human beings have such self-referential indexical ability. Whether any other creatures have it is controversial. The leading nonhuman candidates would be chimpanzees and other primates that have been taught enough language to use first-person pronouns.
The literature on consciousness sometimes fails to distinguish consciousness of objects, consciousness of one's own representations, and consciousness of self, or treats one of the three, usually consciousness of one's own representations, as though it were the whole of consciousness. (Some conscious states have no objects, yet are not consciousness of a representation either. We cannot pursue that complication here.) The term ‘conscious’ and its cognates are ambiguous in everyday English. We speak of someone regaining consciousness - where we mean simple consciousness of the world. Yet we also say things like, She was only dimly conscious of what motivated her to say that - where we do not mean that she lacked either consciousness of the world or consciousness of self but rather that she was not conscious of certain things about herself, specifically, certain of her own representational states. To understand the unity of consciousness, making these distinctions is important. The reason is this: the unity of consciousness takes a different form in consciousness of self than it takes in either consciousness of one's own representations or consciousness of objects.
So what is unified consciousness? As we said, the predominant form of the unity of consciousness is being aware of several things at the same time. Intuitively, this is the notion of several representations being aspects of a single encompassing conscious state. A more informative idea can be gleaned from the way philosophers have written about unified consciousness. As it emerges from what they have said, the central feature of unified consciousness is taken to be something like this: a group of representations are related to one another such that to be conscious of any of them is to be conscious of others of them and of the group of them as a single group.
Call this notion (x). Now, unified consciousness of some sort can be found in all three of the kinds of consciousness we delineated. (It can be found in a fourth, too, as we will see in a moment.) We can have unified consciousness of: objects represented to us; the representations themselves; and oneself as the subject of those representations. In the first case, the represented objects would appear as aspects of a single encompassing conscious state. In the second case, the representations themselves would so appear. In the third case, one is aware of oneself as a single, unified subject. Does (x) fit all three (or all four, including the fourth yet to be introduced)? It does not. At most, it fits the first two. Let us see how this unfolds.
Unified consciousness of objects is the consciousness one has of the world around one (including one's own body) as aspects of a single world, of the various items in it as linked to other items in it. What makes it unified can be illustrated by an example. Suppose that I am aware of the computer screen in front of me and of the car sitting in my driveway. If awareness of these two items is not unified, I will lack the ability to compare the two. If I cannot bring the car as I am aware of it to the state in which I am aware of the computer screen, I could not answer questions such as, Is the car the same colour as the WordPerfect icon? Or even, As I am experiencing them, is the car to the left or to the right of the computer screen? We can compare represented items in these ways only if we are aware of both items together, as parts of the same field or state or act of consciousness. That is what unified consciousness does for us. (x) fits this kind of unified consciousness well. There are a couple of disorders of consciousness in which this unity seems to break down or be missing. We will examine them shortly.
Unified consciousness of one's own representations is the consciousness that we have of our representations, consciousness of our own psychological states. The representations by which we are conscious of the world are particularly important but, if those theorists who maintain that there are forms of consciousness that do not have objects are right, they are not the only ones. What makes consciousness of our representations unified? We are aware of many representations together, so that they appear as aspects of a single state of consciousness. As with unified consciousness of the world, here we can compare items of which we have unified consciousness. For example, we can compare what it is like to see an object to what it is like to touch the same object. Thus, (x) fits this kind of unified consciousness well, too.
Unified consciousness of self is awareness of oneself, not just as a subject but, in Kant's words, as the "single common subject" of many representations and the single common agent of various acts of deliberation and action.
This is one of the two forms of unified consciousness that (x) does not fit. When one is aware of oneself as the common subject of experiences, the common agent of actions, one is not aware of several objects. Some think that when one is aware of oneself as subject, one is not aware of oneself as an object at all. Kant believed this. Whatever the merits of this view, when one is aware of oneself as the single common subject of many representations, one is clearly not aware of several things. Instead, one is aware of, and knows that one is aware of, the same thing - via many representations. Call this kind of unified consciousness (Y). Although (Y) is different from (x), we still have the core idea: unified consciousness consists in tying what is contained in several representations, here many representations of oneself, together so that they are all part of a single field or state or act of consciousness.
Unified consciousness of self has been argued to have some very special properties. In particular, there is a small but important literature on the idea that the reference to oneself as oneself by which one achieves awareness of oneself as subject involves no ‘identification.’ Generalizing the notion a bit, some claim that reference to self does not proceed by way of attribution of properties or features to oneself at all. One argument for this view is that one is or could be aware of oneself as the subject of each of one's conscious experiences. If so, awareness of self is not what Bennett calls ‘experience-dividing’ - statements expressing it have "no direct implications of the form ‘I will experience C rather than D.’" If this is so, the linguistic activities, using first-person pronouns, by which we refer to ourselves as subject, and the representational states that result, must have some unusual properties.
Finally, we need to distinguish a fourth kind of unified consciousness. Let us call it unity of focus. Unity of focus is our ability to pay unified attention to objects, to one's representations, and to one's own self. It is different from the other sorts of unified consciousness. In the other three situations, consciousness ranges over many different objects or many instances of consciousness of an object (in unified consciousness of self). Unity of focus picks out one such item (or a small number of them). Wundt captured what I have in mind well in his distinction between the field of consciousness and the focus of consciousness. The consciousness of a single item on which one is focussing is unified because one is aware of many aspects of the item in one state or act of consciousness (especially relational aspects, e.g., any dangers it poses, how it relates to one's goals, etc.) and one is aware of many different considerations with respect to it in one state or act of consciousness (goals, how well one is achieving them with respect to this object, etc.). (x) does not fit this kind of unified consciousness any better than it fits unified consciousness of self. Here we are not, or need not be, aware of many items. Instead, one is integrating many properties of an item, especially properties that involve relationships to oneself, and integrating many of one's abilities and applying them to the item, and so on. Call this form of unified consciousness (z). One way to think of the affinity of (z) (unity of focus) to (x) and (Y) is this: (z) occurs within (x) and (Y) - within unified consciousness of world and self.
Though this has often been overlooked, all forms of unified consciousness come in both simultaneous and across-time versions. That is to say, the unity can consist in links of certain kinds among phenomena occurring at the same time (synchronically) and it can consist in links of certain kinds among phenomena occurring at different times (diachronically). In its synchronic form, it consists in such things as our ability to compare items with one another, for example, to see if one item fits into another item. Diachronically, it consists in a certain crucial form of memory, namely, our ability to retain a representation of an earlier object in the right way and for long enough to bring it as recalled into current consciousness of currently represented objects, in the same way as we do with simultaneously represented objects. Though this process across time has usually been called the unity of consciousness, sometimes even to the exclusion of the synchronic unity just delineated, another good name for it would be continuity of consciousness. Note that this process of relating earlier to current items in consciousness is more than, and perhaps different from, the learning of new skills and associations. Even severe amnesiacs can do the latter.
That consciousness can be unified both across time and at a given time shows how central the unity of consciousness is to cognition. Without the ability to retain representations of earlier objects and unite them with currently represented objects, most complex cognition would simply be impossible. The only bits of language that one could understand, for example, would probably be single words; even the simplest sentence is an entity spread over time. Now, unification in consciousness might not be the only way to unite earlier cognitive states (earlier thoughts, earlier experiences) with current ones, but it is a central way and the one best known to us. The unity of consciousness is central to cognition.
Just as thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable, it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed.
Note that (x), (Y) and (z) are not the only kinds of mental unity. Our remarks about (z), specifically about what can be integrated in focal attention, might already have suggested as much. There is unity in the exercise of our cognitive capacities, unity that consists of integration of motivating factors, perceptions, beliefs, etc., and there is unity in the outputs, unity that consists of integration of behaviour.
Human beings bring a strikingly wide range of factors to bear on a cognitive task such as seeking to characterize something or trying to decide what to do about something. For example, we can bring to bear what we want and what we believe; our attitudes toward ourselves, our situation, and our context; information from each of our various senses; information about the situation, other people, others' beliefs, desires, attitudes, etc.; the resources of however many languages we have available to us; the many kinds of memory; bodily sensations; our various and very diverse problem-solving skills; and so on. Not only can we bring all these elements to bear, we can integrate them in a way that is highly structured and ingeniously appropriate to our goals and the situation(s) before us. This form of mental unity could appropriately be called unity of cognition. Unity of consciousness often goes with unity of cognition because one of our means of unifying cognition with respect to some object or situation is to focus on it consciously. However, there is at least some measure of unified cognition in many situations of which we are not conscious, as is testified by our ability to balance, control our posture, and manoeuvre around obstacles while our consciousness is entirely absorbed with something else, and so on.
At the other end of the cognitive process, we find an equally interesting form of unity, what we might call unity of behaviour: our ability to coordinate our limbs, eyes, bodily attitude, and so on, in seamless integrated action. The precision and complexity of the behavioural coordination we can achieve would be difficult to exaggerate. Think of a concert pianist performing a complicated work.
One of the most interesting ways to study psychological phenomena is to see what happens when they or related phenomena break down. Phenomena that look simple and seamless when functioning smoothly often turn out to have all sorts of structure when they begin to malfunction. Like other psychological phenomena, we would expect unified consciousness to be open to being damaged, distorted, etc., too. If the unity of consciousness is as important to cognitive functioning as we have been suggesting, such damage or distortion should create serious problems for the people to whom it happens. The unity of consciousness is damaged and distorted in both naturally occurring and experimental situations. Some of these situations are indeed very serious for those undergoing them.
In fact, unified consciousness can break down in what look to be two distinct ways. There are situations in which it is natural to say that one unified conscious being has split into two unified conscious beings, without the unity itself being destroyed or even significantly damaged; and there are situations in which we still have one being with one instance of consciousness, but the unity itself is damaged or even destroyed. In the former cases, there is reason to think that a single instance of unified consciousness has become two (or something like two). In the latter cases, unity of consciousness has been compromised in some way but nothing suggests that anything has split.
Consciousness is possibly the most challenging and pervasive source of problems in the whole of philosophy. Our own consciousness may be the most basic fact confronting us, yet it is almost impossible to say what consciousness is. Is your consciousness like mine? Is ours like that of animals? Might machines come to have consciousness? Is it possible that there might be disembodied consciousness? Whatever complex biological and neural processes go on backstage, it is my consciousness that provides the theatre where my experiences and thoughts have their existence, where my desires are felt and where my intentions are formed. But then how am I to conceive the “I,” or self, that is the spectator of this theatre? One of the difficulties in thinking about consciousness is that the problems seem not to be scientific ones: Leibniz remarked that if we could construct a machine that could think and feel, and blow it up to the size of a mill so as to examine its working parts as thoroughly as we pleased, we would still not find consciousness; he drew the conclusion that consciousness resides in simple subjects, not complex ones. Even if we are convinced that consciousness somehow emerges from the complexity of brain functioning, we may still feel baffled about the way the emergence takes place, or why it takes place in just the way it does.
Subsequently, in conceding that a given thought has a natural linguistic expression, we are also saying something about how it is appropriate to characterize the content of that thought. We are saying something about what is being thought. That content is given by the sentence that follows the “that” clause in reporting a thought, a belief, or any propositional attitude. The proposal, then, is that “I”-thoughts are all and only the thoughts whose propositional contents constitutively involve the first-person pronoun. This is still not quite right, however, because thought contents can be specified in different ways. They can be specified directly or indirectly.
Although the functionalist account of self-reference is not ultimately successful as a strategy, examining it reveals the correct approach to solving the paradox of self-consciousness. The successful response to the paradox of self-consciousness must reject the classical view of content. The thought that, despite all this, there are first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.
The best-developed functionalist theory of self-reference has been provided by Hugh Mellor (1988-1989). The basic phenomenon to be explained is what it is for a creature to have what he terms a subjective belief, that is to say, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. The explanation of subjective beliefs that Mellor offers makes such beliefs independent of both linguistic abilities and conscious beliefs. From this basic account he constructs an account of conscious subjective beliefs and then of the reference of the first-person pronoun “I.” These putatively more sophisticated cognitive states are causally derived from basic subjective beliefs.
Another phenomenon where we may find something like a split without diminished or destroyed unity is hemi-neglect, the strange phenomenon of losing all sense of one side of one's body or sometimes a part of one side of the body. Whatever it is exactly that is going on in hemi-neglect, unified consciousness remains. It is just that its ‘range’ has been bizarrely circumscribed. It ranges over only half the body (in the most common situation), not seamlessly over the whole body. Where we expect proprioception and perception of the whole body, in these patients they are of (usually) only one-half of the body.
A third candidate phenomenon is what used to be called Multiple Personality Disorder and is now, more neutrally, called Dissociative Identity Disorder (DID). Everything about this phenomenon is controversial, including whether there is any real multiplicity of consciousness at all, but one common way of describing what is going on in at least some central cases is to say that the units, whether we call them persons, personalities, sides of a single personality, or whatever, ‘take turns’, usually with pronounced changes in personality. When one is active, the other(s) usually is (are) not. If this is an accurate description, then here too we have a breach in unity of some kind in which unity is nevertheless not destroyed. Notice that whereas in brain bisection cases the breach, whatever it is like, is synchronic (at a time), here it is diachronic (across time), different unified ‘packages’ of consciousness taking turns. The breach consists primarily in some pattern of reciprocal (or sometimes one-way) amnesia - some pattern of each ‘package’ not remembering having the experiences or doing the things had or done when another ‘package’ was in charge.
By contrast to brain bisection and DID cases, there are phenomena in which unified consciousness does not seem to split but does seem to be damaged or even destroyed altogether. In brain bisection and dissociative identity cases, the most that is happening is that unified consciousness is splitting into two or more reasonably intact units - two or more at a time, or two or more across time. It is a matter of controversy whether even that is happening, especially in DID cases, but we clearly do not have more than that. In particular, the unity itself does not disappear; although it may split, it does not, we could say, shatter. There are at least three kinds of case in which unity does appear to shatter.
One is some particularly severe forms of schizophrenia. Here the victim seems to lose the ability to form an integrated, interrelated representation of his or her world and his or her self together. The person speaks in ‘word salads’ that never get anywhere, indeed sometimes never become complete sentences. The person is unable to put together integrated plans of action even at the level necessary to obtain sustenance, tend to bodily needs, or escape painful irritants. Here, saying that unity of consciousness has shattered, rather than split, seems correct. The behaviour of these people seems to express no more than what we might call experience-fragments, each lasting a tiny length of time and unconnected to any others. In particular, except for the (usually semantically irrelevant) associations that lead these people from each entry to the next in the word salads they create, to be aware of one of these states is not to be aware of any others - or so the evidence suggests.
In schizophrenia of this sort, the shattering of unified consciousness is part of a general breakdown or deformation of mental functioning: perception, desire, belief, even memory all suffer massive distortions. In another kind of case, the normal unity of consciousness seems to be just as absent but there does not seem to be a general disturbance of the mind. This is what some researchers call dysexecutive syndrome. What characterizes the breakdown in the unity of consciousness here is that subjects are unable to consider two things together, even things that are directly related to one another. For example, such people cannot figure out whether a piece of a puzzle fits into a certain place even when the piece and the puzzle are both clearly visible and the piece obviously fits. They cannot crack an egg into a pan. And so on.
A disorder presenting similar symptoms is simultagnosia or Balint's syndrome (Balint was an early 20th-century Hungarian neurologist). In this disorder, which is fortunately rare, patients see only one object located at one ‘place’ in the visual field at a time. Outside a few degrees of arc in the visual field, these patients say they see nothing and seem to be receiving no information (Hardcastle, in progress). In both dysexecutive disorder and simultagnosia (if we have two different phenomena here), subjects seem not to be aware of even two items in a single conscious state.
We can pin down what is missing in each case a bit more precisely. Recall the distinction, introduced at the beginning of this article, between being conscious of individual objects and having unified consciousness of a number of objects at the same time. Broadly speaking, we can think of the two phenomena isolated by this distinction as two stages. First, the mind ties together various sensory information into representations of objects. In contemporary cognitive research, this activity has come to be called binding (Hardcastle 1998 is a good review). Then, the mind ties these represented objects together to achieve unified consciousness of a number of them at the same time. (The first theorist to separate these two stages was Kant, in his doctrine of synthesis.) The first stage continues to be available to dysexecutive and simultagnosia patients: they continue to be aware of individual objects, events, etc. The damage seems to be to the second stage: it is the tying of objects together in consciousness that is impaired or missing altogether. The distinction can be put this way: these people can achieve (z), unity of focus, with respect to individual objects, but little or no unified consciousness of any of the three kinds over a number of objects.
The same distinction can also help make clear what is going on in the severe forms of schizophrenia just discussed. Like dysexecutive syndrome and simultagnosia patients, severe schizophrenics lack the ability to tie represented objects together, but they also seem to lack the ability to form unified representations of individual objects. In a different jargon, these people seem to lack even the capacity for object constancy. Thus their cognitive impairment is much more severe than that experienced by dysexecutive syndrome and simultagnosia patients.
With the exception of brain bisection patients, who do not evidence distortion of consciousness outside of specially contrived laboratory situations, the split or breach occurs naturally in all the patients just discussed. Indeed, they are a central class of the so-called ‘experiments of nature’ that are the subject-matter of contemporary neuropsychology. Since all the patients in whom these problems occur naturally are severely disadvantaged by their situation, this is further evidence that the ability to unify the contents of consciousness is central to proper cognitive functioning.
Is there anything common to the six situations of breakdown in unified consciousness just sketched? How do they relate to (x), (y) or (z)?
In brain bisection cases, the key evidence for a duality of some kind is that there are situations in which whatever is aware of some items being represented in the body in question is not aware of other items being represented in that same body at the same time. We looked at two examples of the phenomenon, in connection with the word TAXABLE and the doing of arithmetic. With respect to these represented items, there is a significant and systematically extendable situation in which to be aware of some of these items is not to be aware of others of them. This seems to be what motivates in us the judgment that these patients evidence a split in unified consciousness. If so, brain bisection cases are a straightforward case of a failure to meet the conditions for (x). However, they are more than that. Because the ‘centres of consciousness’ created in the lab do not communicate with one another except in the way that any mind can communicate with any other mind, there is also a breakdown in (y). One subject of experience aware of itself as the single common subject of its experiences seems to become two (in some measure at least).
In DID cases, a central feature is some pattern of amnesia. Again, this is a situation in which being conscious of some represented objects goes with not being conscious of others in a systematic way. The main difference is that the breach is at a time in brain bisection cases, across time in DID cases. So again the breakdown in unity consists in a failure to meet the conditions for (x). However, because DID cases are diachronic, there is also a breakdown in (y) across time: though there is continuity across time within each personality, there seems to be little or no continuity, conscious continuity at any rate, from one to another.
The same pattern is evident in the cases of severe schizophrenia, dysexecutive disorder and simultagnosia that we considered. In all three cases, consciousness of some items goes with lack of consciousness of others. In these cases, to be aware of a given item is precisely not to be aware of other relevant items. However, in the severe schizophrenia cases we considered, there is also a failure to meet the conditions of (z).
Hemi-neglect is a bit different. Here we do not have two or more ‘packages’ of consciousness, and we do not have individual conscious states that are not unified with other conscious states. Not, at any rate, as far as we know - for there to be conscious states not unified with the states on which the patient can report, there would have to be consciousness of what is going on on the side neglected by the subject with whom we can communicate, and there is no evidence for this. Here none of the conditions for (x), (y) or (z) fails to be met - but that may be because hemi-neglect is not a split or a breakdown in unified consciousness in the first place. It may be simply a shrinking of the range of phenomena over which an otherwise intact unified consciousness ranges.
It is interesting that none of the breakdown cases we have considered evidences damage to or destruction of the unity in (y). We have seen cases in which unified consciousness might split at a time (brain bisection cases) or over time (DID cases) but not cases in which the unity itself is significantly damaged or destroyed. Nor is our sample unrepresentative; the cases we have considered are the most widely discussed cases in the literature. There do not seem to be many cases in which it is plausible to say that (y), awareness of oneself as a single common subject, has been damaged or destroyed.
After a long hiatus, serious work on the unity of consciousness began again in recent philosophy with two books on Kant, by P. F. Strawson (1966) and Jonathan Bennett (1966). Both had an influence far beyond the bounds of Kant scholarship. Central to these works is an exploration of the relationship between unified consciousness, especially unified consciousness of self, and our ability to form an integrated, coherent representation of the world, a linkage that the authors took to be central to Kant's transcendental deduction of the categories. Whatever the merits of this claim as Kant interpretation, their work set off a long line of writings on the supposed link. Quite recently the approach prompted a debate about unity and objectivity among Michael Lockwood, Susan Hurley and Anthony Marcel in Peacocke (1994).
Another issue that led philosophers back to the unity of consciousness was the neuropsychological results of brain bisection operations, which we explored earlier. Starting with Thomas Nagel (1971) and continuing in the work of Charles Marks (1981), Derek Parfit (1971 and 1984), Lockwood (1989), Hurley (1998) and many others, these operations have been a major theme in work on the unity of consciousness since the 1970s. Much ink has been spilled on the question of what exactly is going on in the phenomenology of brain bisection patients. Nagel goes so far as to claim that there is no whole number of ‘centres of consciousness’ in these patients: there is too much unity to say "two,” yet too much splitting to say "one.”
Some recent work by Justine Sergent (1990) might seem to support this conclusion. She found, for example, that when a sign ‘6’ was sent to one hemisphere of the brain in these subjects and a sign ‘7’ was sent to the other in such a way that a crossover of information from one hemisphere to the other was extremely unlikely, they could say that the six is a smaller number than the seven but could not say whether the signs were the same or different. It is not certain that Sergent's work does support Nagel's conclusions. First, Sergent's claims are controversial - not all researchers have been able to replicate them. Second, even if the data are good, the interpretation of them is far from straightforward. In particular, they seem to be consistent with there being a clear answer to any precise ‘one or two?’ question that we could ask. (‘Unified consciousness of the two signs with respect to numerical size?’ Yes. ‘Unified consciousness of the visible structure of the signs?’ No.) If so, the fact that the evidence is mixed, some pointing to the conclusion ‘one’, some pointing to the conclusion ‘two’, would not support Nagel's view that there may be no whole number of subjects that these patients are.
Much of the work since Nagel has focussed on the same issue, the kind of split that the laboratory manipulation of brain bisection patients induces. Some attention has also been paid to the implications of these splits. For example, could one hemisphere commit a crime in such a way that the other could not justifiably be held responsible for it? Or, if such splitting occurred regularly and was regularly followed by merging with ‘halves’ from other splits, what would the implications be for our traditional notion of what philosophers call ‘personal identity’, namely, being or remaining one and the same thing? (Here we are talking about identity in the philosopher's sense of being or remaining one thing, not in the sense of the term that psychologists use when they talk of such things as ‘identity crises’.)
Parfit has made perhaps the largest contribution to the issue of the implications of brain bisection cases for personal identity. Phenomena relevant to identity in things other than persons can be a matter of degree. This is well illustrated by the famous ship of Theseus example. Suppose that over the years, the ship of Theseus was rebuilt, board by board, until every single board in it had been replaced. Is the ship at the end of the process the ship that started the process or not? Now suppose that we take all those rotten, replaced boards and reassemble them into a ship. Is this ship the original ship of Theseus or not? Many philosophers have been certain that such questions cannot arise for persons; identity in persons is completely clear and unambiguous, not something that could be a matter of degree as it obviously can be with other objects. As Parfit argues, the possibility of persons (or at any rate minds) splitting and fusing puts real pressure on such intuitions about our specialness; perhaps the continuity of persons can be as partial and tangled as the continuity of other middle-sized objects.
Lockwood's exploration of brain bisection cases goes off in a different direction - two different directions, in fact (we will examine the second below). Like Nagel, Marks, and Parfit, Lockwood has written on the extent to which what he calls ‘co-consciousness’ can split. (‘Co-consciousness’ is the term that many philosophers now use for the unity of consciousness; roughly, two conscious states are said to be co-conscious when they are related to one another as conscious states are related to one another in unified consciousness.) He also explores the possibility of psychological states that are not determinately in any of the available ‘centres of consciousness’ and the implications of this possibility for the idea of the specious present, the idea that we are directly and immediately aware of a certain tiny spread of time, not just the current infinitesimal moment of time. He concludes that the determinateness of psychological states' being in an available ‘centre of consciousness’ and the notion that psychological states spread over at least a small amount of time in the specious present stand or fall together.
Some philosophers' interest in pathologies of unified consciousness extends beyond brain bisection cases. In what is perhaps the most complex work on the unity of consciousness to date, Hurley examines most of the kinds of breakdown phenomena that we introduced earlier. She starts with an intuitive notion of co-consciousness that she does not formally define. She then explores the implications of a wide range of ‘experiments of nature’ and laboratory experiments for the presence or absence of co-consciousness across the psychological states of a person. For example, she considers acallosal patients (people born without a corpus callosum). When present, the corpus callosum is the chief channel of communication between the hemispheres. When it is cut, what looks like two centres of consciousness can be generated: two internally co-conscious systems that are not co-conscious with one another. Hurley argues that in patients in whom it never existed, things are not so clear. Even though the channels of communication in these patients are often in part external (behavioural cuing activity, etc.), the result may still be a single co-conscious system. That is to say, the neurological and behavioural basis of unified consciousness may be very different in different people.
Hurley also considers research by Trevarthen in which a patient is conscious of some object seen by, say, the right hemisphere until her left hand, which is controlled by the right hemisphere, reaches for it. Somehow the act of reaching for it seems to obliterate the consciousness of it. Very strange - how can something pop into and disappear from unified consciousness in this way? This leads her to consider the notion of partial unity. Could two centres of consciousness, ‘A’ and ‘C’, though not co-conscious with one another, nonetheless each be co-conscious with some third thing, e.g., the volitional system ‘B’ (the system of intentions, desires, etc.)? If so, ‘co-conscious’ is not a transitive relation - ‘A’ could be co-conscious with ‘B’ and ‘C’ could be co-conscious with ‘B’ without ‘A’ being co-conscious with ‘C’. This is puzzling enough. Even more puzzling is the question of how activation of the system ‘B’ with which both ‘A’ and ‘C’ are co-conscious could result in either ‘A’ or ‘C’ ceasing to be conscious of an object aimed at by ‘B’.
Hurley's response to Trevarthen's cases (and Sergent's cases, which we examined in the previous section) is to accept that intention can obliterate consciousness and then to distinguish times. At any given time in Trevarthen's cases, the situation with respect to unity is clear. That the picture does not conform to our usual expectations for diachronic singularity or transitivity then becomes simply an artefact of the cases, not a problem. It is not made clear how this reconciles Sergent's evidence with unity. One strategy would be the one we considered earlier, of making the questions suitably precise. For precise questions, there seems to be a coherent answer about unity for every phenomenon Sergent describes.
Hurley also considers what she calls Marcel's case. Here subjects are asked to report the appearance of some item in consciousness in three ways at the same time - say, by blinking, pushing a button, and saying ‘I see it’. Remarkably, any of these acts can be done without the other two. The question is, what does this tell us about unified consciousness? In a case in which the subject pushes the button but neither blinks nor says anything, for example, is the hand-controller aware of the object while the blink-controller and the speech-controller are not? How could the conscious system become fragmented in such a way?
Hurley's answer is that it cannot. What induces the appearance of incoherence about unity is the short time scale. Suppose that it takes some time to achieve unified consciousness, perhaps because complex processes are involved. If that were the case, then we would not have a stable unity situation in Marcel's case. The subjects are not given enough time to achieve unified consciousness of any kind.
There is a great deal more to Hurley's work. She urges, for example, that there is a normative dimension to unified consciousness - conscious states have to cohere for unified consciousness to result. Systems in the brain have to achieve what she calls ‘dynamic singularity’ - being a single system - for unified consciousness to result.
A third issue that got philosophers working on the unity of consciousness again is binding. Here the connection is more distant, because binding as usually understood is not unified consciousness as we have been discussing it. Recall the two stages of cognition laid out earlier. First, the mind ties together various sensory information into representations of objects. Then the mind ties these represented objects to one another to achieve unified consciousness of a number of them at the same time. It is the first stage that is usually called binding. The representations that result at this stage need not be conscious in any of the ways delineated earlier - many perfectly good representations affect behaviour and even enter memory without ever becoming conscious. Representations resulting from the second stage need not be conscious, either, but when they are, we have at least some of the kinds of unified consciousness delineated.
In the past few decades, philosophers have also worked on how unified consciousness relates to the brain. Lockwood, for example, thinks that relating consciousness to matter will involve more issues on the side of matter than most philosophers think. (We mentioned that his work goes off in two new directions. This is the second one.) Quantum mechanics teaches us that the way in which observation links to physical reality is a subtle and complex matter. Lockwood urges that our conceptions will have to be adjusted on the side of matter as much as on the side of mind if we are to understand consciousness as a physical phenomenon and physical phenomena as open to conscious observation. If it is the case not only that our understanding of consciousness is affected by how we think it might be implemented in matter but also that processes of matter are affected by our (conscious) observation of them, then our picture of consciousness stands as ready to affect our picture of matter as vice versa.
The Churchlands (Paul M. and Patricia S.) and Daniel Dennett (1991) have radical views of the underlying architecture of unified consciousness. The Churchlands see unity itself much as other philosophers do. They do argue that the term ‘consciousness’ covers a range of different phenomena that need to be distinguished from one another, but the important point for present purposes is that they urge that the architecture of the underlying processes probably consists not of transformations of symbolically encoded representations, as most philosophers have believed, but of vector transformations in what are called phase spaces. Dennett articulates an even more radical view, encompassing both unity and underlying architecture. For him, unified consciousness is simply a temporary ‘virtual captain’, a small group of related information-parcels that happens to gain temporary dominance in a struggle for control of such cognitive activities as self-monitoring and self-reporting in the vast array of microcircuits of the brain. We take these transient phenomena to be more than they are because each of them is, for the moment, ‘me’: the temporary coalition of conscious states winning at the moment is what I am, is the self. Radical implementation, narrowed range and transitoriness notwithstanding, when unified consciousness is achieved, these philosophers tend to see it in the way we have presented it.
Dennett's and the Churchlands' views fit naturally with a dynamic systems view of the underlying neural implementation. The dynamic systems view is the view that unified consciousness is a result of certain self-organizing activities in the brain. Dennett thinks that given the nature of the brain, a vast assembly of neurons receiving electrochemical signals from other neurons and passing such signals to yet other neurons, cognition could not take any form other than something like a pandemonium of competing bits of content, the ones that win the competitions being the ones that are conscious. The Churchlands tend to agree with Dennett about this. They see consciousness as a state of the brain, the ‘wet-ware’, not a result of information processing, of ‘software’. They also advocate a different picture of the underlying neurological process. As we said, they think that transformations of complex vectors in a multi-dimensional phase space are the crucial processes, not competition among bits of content. However, they agree that it is very unlikely that the processes that subserve unified consciousness are sentence-like or language-like at all. It is too early to say whether these radically novel pictures of the system that implements unified consciousness will hold any important implications for what unified consciousness is or when it is present.
Hurley is also interested in the relationship of unified consciousness to brain physiology. It would be truer to say that she resists certain standard ways of linking them, however, than that she herself links them. In particular, while she clearly thinks that physiological phenomena have all sorts of implications and give rise to all sorts of questions about the unity of consciousness, she strongly resists any simplistic patterns of connection. Many researchers have been attracted by some variant of what she calls the isomorphism hypothesis. This is the idea that changes in consciousness will parallel changes in brain structure or function. She wants to insist, to the contrary, that often two instances of the same change in consciousness will go with very different changes in the brain. We saw an example in the last section. In most of us, unified consciousness is closely linked to an intact, functioning corpus callosum. However, in acallosal people, there may be the same unity but achieved by mechanisms, such as cuing activity external to the body, that are utterly different from communication through a corpus callosum. Going the opposite way, different changes in consciousness can go with the same changes to structure and function in the brain.
Two philosophers have gone off in directions different from any of the above, Stephen White (1991) and Christopher Hill (1991). White's main interest is not the unity of consciousness as such but what one might call the unified locus of responsibility - what it is that ties something together to make it a single agent of actions, i.e., something to which attributions of responsibility can appropriately be made. He argues that unity of consciousness is one of the things that go into becoming unified as such an agent but not the only thing. Focussed coherent plans, a continuing single conception of the good, a reasonably good autobiographical memory, certain future states of persons mattering to us in a special way (mattering to us because we take them to be future states of ourselves, one would say if it were not blatantly circular), a certain continuing kind and degree of rationality, and certain social norms and practices all play a part as well. In his picture of moral responsibility, unbroken unity of consciousness at and over time is only a small part of the story.
Hill's fundamental claim is that a number of different relationships between psychological states have a claim to be considered unity relationships, including: being owned by the same subject; being phenomenally next to one another (and other relationships that states in the field of consciousness appear to bear to one another); being about the same objects as other conscious states; and jointly having the appropriate sorts of effects (functional unity). An interesting question, one that Hill does not consider, is whether all these relations are what interests us when we talk about the unity of consciousness or only some of them (and if only some, which ones). Hill also examines scepticism about the idea that clearly bounded individual conscious states exist. Since we have been assuming throughout that such states do exist, it is perhaps fortunate that Hill argues that we can safely do so.
In some circles, the idea that consciousness has a special kind of unity has fallen into disfavour. Nagel (1971), Donald Davidson (1982), and Dennett (1991) have all urged that the mind's unity has been greatly overstated in the history of philosophy. The mind, they say, works mostly out of the sight and control of consciousness. Moreover, even states and acts of ours that are conscious can fail to cohere. We act against what we know perfectly well to be our own most desired courses of action, for example, or do things while telling ourselves that we must avoid doing them. There is an approach to the small incoherencies of everyday life that does not require us to question whether consciousness is unified in this way, the Freudian approach (e.g., Freud 1916/17). This approach accepts that the unity of consciousness exists much as it presents itself but argues that the range of material over which it extends is much smaller than philosophers once thought. This latter approach has some appeal. If something is out of sight and/or control, it is out of the sight or control of what? The answer would seem to be: the unified conscious mind. If so, the only necessary difference between the pre-twentieth-century vision of unified consciousness as ranging over everything in the mind and our current vision is that the range of psychological phenomena over which unified consciousness ranges has shrunk.
A final historical note. At the beginning of the 21st century, work on the unity of consciousness continues apace. For example, a major conference was recently devoted to the unity of consciousness, the Association for the Scientific Study of Consciousness conference held in Brussels in 2000, and encyclopaedias of philosophy (such as this one) and of cognitive science are commissioning articles on the topic. Psychologists are taking up the issue. Bernard Baars's (1988, 1997) notion of the global workspace is an example. Another example is work on the role of unified consciousness in precise control of attention. However, the topic is not yet at the centre of consciousness studies. One illustration of this is that it can still be missing entirely from anthologies of current work on consciousness.
Turning to a different issue: philosophers used to think that the unity of consciousness has huge implications for the nature of the mind, indeed entails that the mind could not be made out of matter. We have seen that the prospects for this inference are not good. What about the nature of consciousness? Does the unity of consciousness have any implications for this issue?
There are currently at least three major camps on the nature of consciousness. One camp sees the ‘felt quality’ of representations as something unique, in particular as quite different from the power of representations to change other representations and shape belief and action. On this picture, representations could function much as they do without its being like anything to have them. They would merely not be conscious. If so, consciousness may not play any important cognitive role at all, its unity included (Jackson 1986; Chalmers 1996). A second camp holds, to the contrary, that consciousness is simply a special kind of representation (Rosenthal 1991, Dretske 1995, and Tye 1995). A third holds that what we label ‘consciousness’ is really something else. On this view, consciousness will in the end be ‘analysed away’ - the term is too coarse-grained and presents things in too distorted a way to have any use in a mature science of the mind.
The unity of consciousness does not obviously have strong implications for the truth or falsity of any of these views. If it is as central and undeniable as many have suggested, its existence may cut against the eliminativist position. With respect to the other positions, however, the unity of consciousness seems neutral.
Whatever its implications for other issues, the unity of consciousness seems to be a real feature of the human mind, indeed central to it. If so, any complete picture of the mind will have to provide an account of it. Even those who hold that the extent to which consciousness is unified has been overrated owe us an account of what it is that has been overrated.
To say that one has an experience that is conscious (in the phenomenal sense) is to say that one is in a state of its seeming to one some way. On another formulation, to say an experience is conscious is to say that there is something it is like for one to have it. Feeling pain and sensing colours are common illustrations of phenomenally conscious states. Consciousness has also been taken to consist in the monitoring of one's own states of mind (e.g., by forming thoughts about them, or by somehow "sensing" them), or else in the accessibility of information to one's capacities for rational control or self-report. Intentionality has to do with the directedness or aboutness of mental states - the fact that, for example, one's thinking is of or about something. Intentionality includes, and is sometimes taken to be equivalent to, what is called "mental representation.”
It can seem that consciousness and intentionality pervade mental life - perhaps that one or both somehow constitute what it is to have a mind. But achieving an articulate general understanding of either consciousness or intentionality presents an enormous challenge, part of which lies in figuring out how the two are related. Is one in some sense derived from or dependent on the other? Or are they perhaps quite independent and separate aspects of mind?
On one frequent understanding among philosophers, consciousness is a certain feature shared by sense-experience and imagery, perhaps belonging also to a broad range of other mental phenomena (e.g., episodic thought, memory, and emotion). It is the feature that consists in its seeming some way to one to have such experiences. To put it another way: conscious states are states of its seeming somehow to a subject.
For example, it seems to you some way to see red, and it seems to you another way to hear a crash, to visualize a triangle, or to suffer pain. The sense of ‘seems’ relevant here may be brought out by noting that, in the last example, we might just as well speak of the way it feels to be in pain. And - some may say - in the same sense, it seems to you some way to think through the answer to a math problem, or to recall where you parked the car, or to feel anger, shame, or elation. (Note, however, that it is not simply to be assumed that saying it seems some way to you to have an experience is equivalent to saying that the experience itself seems or appears some way to you - that it is an object of appearance. The point is just that the way something sounds to you, the way something looks to you, etc., all constitute ‘ways of seeming.’) States that are conscious in this sense are said to have some phenomenal character or other - their phenomenal character being the specific way it seems to one to have a given experience. Sometimes this is called the ‘qualitative’ or ‘subjective’ character of experience.
Another oft-used means of trying to get at the relevant notion of consciousness, preferable to some, is to say that there is, in a certain sense, always ‘something it is like’ to be in a given conscious state - something it is like for one who is in that state. Relating the two locutions, we might say: there is something it is like for you to see red, to feel pain, etc., and the way it seems to you to have one of these experiences is what it is like for you to have it. The phenomenal character of an experience, then, is what someone would inquire about by asking, e.g., ‘What is it like to experience orgasm?’ - and it is what we speak of when we say that we know what that is like, even if we cannot convey this to one who does not know. And if we want to speak of persons, or other creatures (as distinct from their states), being conscious, we will say that they are conscious just if there is something it is like for them to be the creature they are - for example, something it is like to be a nocturnal creature such as a bat.
The examples of conscious states given comprise a varied lot. But some sense of their putative unity as instances of consciousness might be gained by contrasting them with what we are inclined to exclude, or can at least conceive of excluding, from their company. Much of what goes on, we would ordinarily believe, is not (or at any rate we may suppose is not) conscious in the sense at issue. A leaf's fall from a tree branch, we may suppose, is not a conscious state of the leaf - a state of its seeming somehow to the leaf. Nor, for that matter, is a person's falling off a branch itself a conscious state - rather, the feeling of falling is the sort of thing that is conscious, if anything is. Dreaming of falling would also be a conscious experience in this sense. But while we can in some way be said to sense the position of our limbs even while dreamlessly asleep, we may still suppose that this proprioception (though perhaps in some sense a mental or cognitive affair) is not conscious - we may suppose that it does not then seem (or feel) any way to us sleepers to sense our limbs, as ordinarily it does when we are awake.
The ‘way of seeming’ or ‘what it is like’ conception of consciousness just invoked is sometimes marked by the term ‘phenomenal consciousness.’ But the qualifier ‘phenomenal’ suggests that there are other kinds of consciousness (or perhaps other senses of ‘consciousness’). Indeed there are, at least, other ways of introducing notions of consciousness, and these may appear to pick out features or senses altogether distinct from that just presented. For example, it is said that some (but not all) of what goes on in the mind is ‘accessible to consciousness.’ Of course this by itself does not so much specify a sense of ‘conscious’ as put one in use. (One will want to ask: just what is this ‘consciousness’ that has ‘access’ to some mental goings-on but not others, and what could ‘access’ mean here anyway?) However, some have evidently thought that, rather than speak of consciousness as what has access, we should understand consciousness as itself a certain kind of susceptibility to access. For example, Daniel Dennett (1969) once theorized that one's conscious states are just those whose contents are available to one's direct verbal report - or, at least, to the ‘speech centre’ responsible for generating such reports. And Ned Block (1995) has proposed that, on one understanding of ‘conscious’ (to be found at work in many ‘cognitive’ theories of consciousness), a conscious state is just a representation ‘poised for free use in reasoning and other direct “rational” control of action and speech.’ Block labels consciousness in this sense ‘access consciousness.’
Block insists that we should distinguish phenomenal consciousness from access consciousness, and he argues that a mental representation's being poised for use in reasoning and rational control of action is neither a necessary nor a sufficient condition for the state's being phenomenally conscious. Similarly, he distinguishes phenomenal consciousness from what he calls ‘reflexive consciousness’ - where this has to do with one's capacity to represent one's own mind to oneself - to have, for example, thoughts about one's own thoughts, feelings, or desires. Such a conception of consciousness finds some support in a tendency to say that conscious states of mind are those one is ‘conscious of’ or ‘aware of’ being in, and to interpret this ‘of’ as indicating that some kind of reflexivity is involved - wherein one represents one's own mental representations. On one prominent variant of this conception, consciousness is taken to be a kind of scanning or perceiving of one's own psychological states or processes - an ‘inner sense.’
This threefold division among phenomenal, access, and reflexive consciousness need not be taken to reflect clear and coherent distinctions already contained in our pre-theoretical use of the term ‘conscious.’ Block seems to think that, on the contrary, our initial, ordinary use of ‘conscious’ is too confused even to count as ambiguous. Thus, in articulating an interpretation, or set of interpretations, of the term adequate to frame theoretical issues, we cannot simply describe how it is currently employed - we must assign it a more definite and coherent meaning than is extant in common usage.
Whether or not this is correct, getting solid ground here is not easy, and a number of theorists of consciousness would balk at proceeding on the basis of Block's proposed threefold distinction. Sometimes the difficulty may be merely terminological. John Searle, for example, would recognize phenomenal consciousness, but deny that Block's other two candidates are proper senses of ‘conscious’ at all. The reality of some sort of access and reflexivity is apparently not at issue - just whether either captures a sense of ‘conscious’ (perhaps confusedly) woven into our use of the term. However, in contrast to both Block and Searle, there are also those who doubt that there is a properly phenomenal sense, distinct from both of the other two, for us to pick out with any term. This is not just a dispute about words, but about what there is for us to talk about with them.
The substantive issues here are very much bound up with differences over the proper way to conceive of the relationship between consciousness and intentionality. If there are distinct senses in which states of mind could be correctly said to be ‘conscious’ (answering perhaps to something like Block's threefold distinction), then there will be distinct questions we can pose about the relation between consciousness and intentionality. But if one of Block's alleged senses is somehow fatally confused, or if he is wrong to distinguish it from the others, or if it is the sense of no term we can with warrant apply to ourselves or our states, then there will be no separate question in which it figures that we should try to answer. Thus, trying to work out a reasoned view about what we are (or should be) talking about when we talk about consciousness is an unavoidable and non-trivial part of trying to understand the relation between consciousness and intentionality.
To clarify further the disputes about consciousness and their links to questions about its relation to intentionality, we need to get an initial grasp of the relevant way the terms ‘intentionality’ and ‘intentional’ are used in philosophy of mind.
We have already had some indication of why it is difficult to get a theory of consciousness started. While the term ‘conscious’ is not esoteric, its use is not easily characterized or rendered consistent in a manner providing an uncontentious framework for theoretical discussion. Where the term ‘intentional’ is concerned, we also face initially confusing and contentious usage. But here the difficulty lies partly in the fact that the relevant use of cognate terms is simply not that found in common speech (as when we speak of doing something ‘intentionally’). Though ‘intentionality,’ in the sense here at issue, does seem to attach to some real and fundamental (maybe even defining) aspect of mental phenomena, the relevant use of the term is tangled up with some rather involved philosophical history.
One way of explaining what is meant by ‘intentionality’ in the (more obscure) philosophical sense is this: it is that aspect of mental states or events that consists in their being of or about things, as pertains to the questions ‘What are you thinking of?’ and ‘What are you thinking about?’ Intentionality is the aboutness or directedness of mind (or states of mind) to things, objects, states of affairs, events. So if you are thinking about San Francisco, or about the increased cost of living there, or about your meeting someone there at Union Square - your mind, your thinking, is directed toward San Francisco, or the increased cost of living, or the meeting in Union Square. To think at all is to think of or about something in this sense. This ‘directedness’ conception of intentionality plays a prominent role in the influential philosophical writings of Franz Brentano and those whose views developed in response to his.
But what kind of ‘aboutness’ or ‘of-ness’ or ‘directedness’ is this, and to what sorts of things does it apply? How do the relevant ‘intentionality-marking’ senses of these words (‘about,’ ‘of,’ ‘directed’) differ from: the sense in which the cat is wandering ‘about’ the room; the sense in which someone is a person ‘of’ high integrity; the sense in which the river's course is ‘directed’ toward the fields?
It has been said that the peculiarity of this kind of directedness/aboutness/of-ness lies in its capacity to relate thought or experience to objects that (unlike San Francisco) do not exist. One can think about a meeting that has not occurred, or will never occur; one can think of Shangri La, or El Dorado, or the New Jerusalem, as one may think of their shining streets, of their total lack of poverty, or of their citizens' peculiar garb. Thoughts, unlike roads, can lead to a city that is not there.
But to talk in this way only invites new perplexities. Is this to say (with apparent incoherence) that there are cities that do not exist? And what does it mean to say that, when a state of mind is in fact directed toward something that does exist, that state nevertheless could be directed toward something that does not exist? It can well seem to be something very fundamental to the nature of mind that our thoughts, or states of mind more generally, can be of or about things or ‘point beyond themselves.’ But a coherent and satisfactory theoretical grasp of this phenomenon of ‘mental pointing’ in all its generality is difficult to achieve.
Another way of trying to get a grip on the topic asks us to note that the potential for mental directedness toward the non-existent is evidently closely associated with the mind's potential for falsehood, error, inaccuracy, illusion, hallucination, and dissatisfaction. What makes it possible to believe (or even just suppose) something about Shangri La is that one can falsely believe (or suppose) that something exists. In the case of perception, what makes it possible to seem to see or hear what is not there is that one's experience may in various ways be inaccurate, subject to illusion, or hallucinatory. And what makes it possible for one's desires and intentions to be directed toward what does not and will never exist is that one's desires and intentions can be unfulfilled or unsatisfied. This suggests another strategy for getting a theoretical hold on intentionality: employ a notion of satisfaction, stretched to encompass susceptibility to each of these modes of assessment, each of these ways in which something can either go right or go wrong (true/false, veridical/nonveridical, fulfilled/unfulfilled), and speak of intentionality in terms of having ‘conditions of satisfaction.’ On John Searle's (1983) conception, intentional states are those having conditions of satisfaction. What are conditions of satisfaction? In the case of belief, these are the conditions under which the belief is true; in the case of perception, the conditions under which sense-experience is veridical; in the case of intention, the conditions under which an intention is fulfilled or carried out.
However, while the conditions of satisfaction approach to the notion of intentionality may furnish an alternative to introducing this notion by talking of ‘directedness to objects,’ it is not clear that it can get us around the problems posed by the ‘directedness’ talk. For instance, what are we to say where thoughts are expressed using names of nonexistent deities or fictional characters? Will we do away with a troublesome directedness to the nonexistent by saying that the thoughts that Zeus is Poseidon's brother, and that Hamlet is a prince, are just false? This is problematic. Moreover, how will we state the conditions of satisfaction of such thoughts? Will this not also involve an apparent reference to the nonexistent?
A third important way of conceiving of intentionality, one particularly central to the analytic tradition deriving from the study of Frege and Russell, asks us to concentrate on the notion of mental (or intentional) content. Often it is assumed that to have intentionality is to have content. And frequently mental content is otherwise described as representational or informational content - and ‘intentionality’ (at least as this applies to the mind) is seen as just another word for what is called ‘mental representation,’ or for a certain way of bearing or carrying information.
But what is meant by ‘content’ here? As a start we may note: the content of a thought, in this sense, is what is reported when one answers the question ‘What does she think?’ by something of the form ‘She thinks that p.’ And the content of a thought is what two people are said to share when they are said to think the same thought. (Similarly, the contents of beliefs are what two persons share when they hold the same belief.) Content is also what may be shared in this way even while the ‘psychological modes’ of states of mind differ. For example: believing that I will soon be bald and fearing that I will soon be bald share the content that I will soon be bald.
Also, content is commonly taken to be not only that which is shared in the ways illustrated, but that which differs in a way revealed by considering certain logical features of the sentences we use to talk about states of mind. Notably: the constituents of the sentence that fills in for ‘p’ when we say ‘x thinks that p’ or ‘x believes that p’ are often interpreted in such a way that they display ‘failures of substitutivity’ of (ordinarily) co-referential or co-extensional expressions, and this appears to reflect differences in mental content. For example: if George W. Bush is the eldest son of the vice-president under Ronald Reagan, and George W. Bush is the current US President, then it can be validly inferred that the eldest son of Reagan's vice-president is the current US President. However, we cannot always make the same sort of substitutions of terms when we use them to report what someone believes. From the fact that you believe that George W. Bush is the current US President, we cannot validly infer that you believe that the eldest son of Reagan's vice-president is the current US President. That last may still be false, even if George W. Bush is indeed the eldest son. These logical features of the sentences ‘x believes that George W. Bush is the current US President’ and ‘x believes that George W. Bush is the eldest son of Reagan's vice-president’ seem to reflect the fact that the beliefs reported by their use have different contents: these sentences are used to state what is believed (the belief content), and what is believed in each case is not the same. Someone's belief may have the one content without having the other.
Similar observations can be made about other intentional states and the reports made of them - especially when these reports contain an object clause beginning with ‘that’ and followed by a complete sentence (e.g., she thinks that p; he intends that p; she hopes that p; he fears that p; she sees that p). Sometimes it is said that the content of such states is ‘given’ by the ‘that p’ clause when ‘p’ is replaced by a sentence - the so-called ‘content clause.’
This ‘possession of content’ conception of intentionality may be coordinated with the ‘conditions of satisfaction’ conception roughly as follows. If states of mind contrast in respect of their satisfaction (say, one is true and the other false), they differ in content. (One and the same belief content cannot be both true and false - at least not in the same context at the same time.) And if one says what the intentional content of a state of mind is, one says much or perhaps all of what conditions must be met if it is to be satisfied - what its conditions of truth, or veridicality, or fulfilment, are. But one should be alert to how the notion of content employed in a given philosopher's views is heavily shaped by those views. One should note how commonly it is held that the ordinary notion of representational content is ambiguous or in need of refinement. (Consider, for example, Jerry Fodor's defence of a distinction between ‘narrow’ and ‘wide’ content, Edward Zalta's distinction between ‘cognitive’ and ‘objective’ content (1988), and John Perry's distinction between ‘reflexive’ and ‘subject-matter’ content.)
It is arguable that each of these gates of entry into the topic of intentionality (directedness, conditions of satisfaction, and mental content) opens onto a unitary phenomenon. But evidently there is also considerable fragmentation in the conceptions of both consciousness and intentionality that are in the field. To get a better grasp of some of the ways the relationship between consciousness and intentionality can be viewed, without begging questions or trying to present a positive theory on the topic, it is useful to take a look at the recent history of thinking about intentionality, in a way that will bring several issues about its relationship with consciousness to the fore. Together with the preceding discussion, this should provide the background necessary for examining some of the differences that divide those who theorize about consciousness, since such theorizing is intimately involved with views of the consciousness-intentionality relation.
If we are to acknowledge the extent to which the notion of intentionality is a creature of philosophical history, we have to come to terms with the divide in twentieth-century western philosophy between the so-called ‘analytic’ and ‘continental’ philosophical traditions. Both have been significantly concerned with intentionality. But differences in approach, vocabulary, and background assumptions have made dialogue between them difficult. It is almost inevitable, in a brief exposition, to give largely independent summaries of the two. We will start with the ‘continental’ side of the story - more specifically, with the Phenomenological movement in continental philosophy. However, while these traditions have developed without a great deal of intercommunication, they do have common sources, and have come to focus on issues concerning the relationship of consciousness and intentionality that are recognizably similar.
A thorough look at the historical roots of controversies over consciousness and intentionality would take us farther into the past than it is feasible to go in this article. A relatively recent, convenient starting point would be in the philosophy of Franz Brentano. He more than any other single thinker is responsible for keeping the term ‘intentional’ alive in philosophical discussions of the last century or so, with something like its current use, and was much concerned to understand its relationship with consciousness. However, it is worth noting that Brentano himself was very aware of the deep historical background to his notion of intentionality: He looked back through scholastic discussions (crucial to the development of Descartes' immensely influential theory of ideas), and ultimately to Aristotle for his theme of intentionality. One may go further back, to Plato's discussion (in the Sophist, and the Theaetetus) of difficulties in making sense of false belief, and yet further still, to the dawn of Western Philosophy, and Parmenides' attempt to draw momentous consequences from his alleged finding that it is not possible to think or speak of what is not.
In Brentano's treatment what seems crucial to intentionality is the mind's capacity to ‘refer’ or be ‘directed’ to objects existing solely in the mind - what he called ‘mental or intentional inexistence.’ It is subject to interpretation just what Brentano meant by speaking of an object existing only in the mind and not outside of it, and what he meant by saying that such ‘immanent’ objects of thought are not ‘real.’ He complained that critics had misunderstood him here, and appears to have revised his position significantly as his thought developed. But it is clear at least that his conception of intentionality is dominated by the first strand in thought about intentionality mentioned above - intentionality as ‘directedness toward an object’ - along with whatever difficulties that brings with it.
Brentano's conception of the relation between consciousness and intentionality can be brought out partly by noting he held that every conscious mental phenomenon is both directed toward an object, and always (if only ‘secondarily’) directed toward itself. (That is, it includes a ‘presentation’ - an ‘inner perception’ - of itself.) Since Brentano also denied the existence of unconscious mental phenomena, this amounts to the view that all mental phenomena are, in a sense, ‘self-presentational.’
His lectures in the late nineteenth century attracted a diverse group of central European intellectuals (including that great promoter of the unconscious, Sigmund Freud) and the problems raised by Brentano's views were taken up by a number of prominent philosophers of the era, including Edmund Husserl, Alexius Meinong, and Kasimir Twardowski. Of these, it was Husserl's treatment of the Brentanian theme of intentionality that was to have the widest philosophical influence on the European Continent in the twentieth century - both by means of its transformation in the hands of other prominent thinkers who worked under the aegis of ‘phenomenology’ - such as Martin Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty - and through its rejection by those embracing the ‘deconstructionism’ of Jacques Derrida.
In responding to Brentano, Husserl also adopted his concern with properly understanding the way in which thought and experience are “directed toward objects.” Husserl criticized Brentano's doctrine of ‘inner perception,’ and did not deny (even if he did not affirm) the reality of unconscious mentation. But Husserl retained Brentano's primary focus on describing conscious ‘mental acts.’ He also believed that knowledge of one's own mental acts rests on an ‘intuitive’ apprehension of their instances, and held that one is, in some sense, conscious of each of one's conscious experiences (though he denied this meant that every conscious experience is an object of an intentional act). Evidently Husserl wished to deny that all conscious acts are objects of inner perception, while also affirming that some kind of reflexivity - one that is, however, neither judgment-like nor sense-like - is essentially built into every conscious act. But the details of the view are not easy to make out. (A similar (and similarly elusive) view was expressed by Jean-Paul Sartre in the doctrine that “All consciousness is a non-positional consciousness of itself.”)
One of Husserl's principal points of departure in his early treatment of intentionality (in the Logical Investigations) was his criticism of (what he took to be) Brentano's notion of the ‘mental inexistence’ of the objects of thought and perception. Husserl thought it a fundamental error to suppose that the object (the ‘intentional object’) of a thought, judgment, desire, etc. is always an object ‘in’ (or ‘immanent to’) the mind of the thinker, judger, or desirer. The objects of one's ‘mental acts’ of thinking, judging, etc. are often objects that ‘transcend,’ and exist independently of, these acts (states of mind) that are directed toward them (that ‘intend’ them, in Husserl's terms). This is particularly striking, Husserl thought, if we focus on the intentionality of sense perception. The object of my visual experience is not something ‘in my mind,’ whose existence depends on the experience - but something that goes beyond or ‘transcends’ any (necessarily perspectival) experience I may have of it. This view is phenomenologically based, for, Husserl says, the object is experienced as perspectivally given, hence as ‘transcendent’ in this sense.
In cases of hallucination, on Husserl's view, we should say not that there is an object existing ‘in one's mind,’ but that the object intended does not exist at all. This does not do away with the ‘directedness’ of the experience, for that is properly understood (according to the Logical Investigations) as its having a certain ‘matter’ - where the matter of a mental act is what may be common to different acts, as when, for example, one believes that it will not rain tomorrow, and hopes that it will not rain tomorrow. The difference between the mental acts illustrated (between hoping and believing) Husserl would term a difference in their ‘quality.’ Husserl was later to re-interpret his notions of act-matter and quality as components of what he called (in Ideas, 1913) the ‘noema’ or ‘noematic structure’ that can be common to distinct particular acts. So intentional directedness is understood not as a relation to special (mental) objects toward which one is directed, but rather as the possession by mental acts of matter/quality (or later, ‘noematic’) structure.
This unites Husserl's discussion with the ‘content’ conception of intentionality described above: he himself would accept that the matter of an act (later, its ‘noematic sense’) is the same as the content of a judgment, belief, desire, etc., in one sense of the term (or rather, in one sense he found in the ambiguous German ‘Inhalt’). However, it is not fully clear how Husserl would view the relationship between act-matter and noematic sense quite generally and those semantic correlates of ordinary language sentences that some would identify as the contents of the states of mind reported in them. This is a difficulty partly because of his later emphasis (e.g., in Experience and Judgment) on the importance of what he called ‘pre-predicative’ experience. He believed that the sort of judgments we express in ordinary and scientific languages are ‘founded on’ the intentionality of pre-predicative experience, and that it is a central task of philosophy to clarify the way in which such experience of our surroundings and our own bodies underlies judgment, and the capacity it affords us to construct an ‘objective’ conception of the world. Pre-predicative experience is, paradigmatically, sense experience as it is given to us, independently of any active judging or predication. But did Husserl hold that what makes such experience pre-predicative is that it altogether lacks the content that is expressed linguistically in predicative judgment, or did he think that such judgment merely renders explicit a predicative content that even ‘pre-predicative’ experience already (implicitly) has? Just what does the ‘pre-’ in ‘pre-predicative’ entail?
Perhaps this is not clear. In any case, the theme of a type of intentionality more fundamental than that involved in predicative judgments that ‘posit’ objects, and to be found in everyday experience of our surroundings, was taken up, in different ways, by the later phenomenologists Heidegger and Merleau-Ponty. The former describes a type of directed ‘comportment’ toward beings in which they ‘show themselves’ as ‘ready-to-hand.’ Heidegger thinks this characterizes our ordinary practical involvement with our surroundings, and regards it as distinct from, and somehow providing a basis for, entities' showing themselves to us as ‘present-at-hand’ (or ‘occurrent’) - as they do when we take a less context-bound, more theoretical stance toward the world. Later, Merleau-Ponty (1945/1962), influenced by his study of Gestalt psychology and of neurological case studies describing pathologies of perception and action, held that normal perception involves a consciousness of place tied essentially to one's capacities for exploratory and goal-directed movement, which is indeterminate relative to attempts to express or characterize it in terms of ‘objective’ representations - though it makes such an objective conception of the world possible.
Whether or not Heidegger's and Merleau-Ponty's moves in these directions actually contradict Husserl, they clearly go beyond what he says. Another basic, exegetically complex, apparent difference between Husserl and the two later philosophers, pertinent to the relationship of consciousness and intentionality, lies in the dispute over Husserl's proposed ‘phenomenological reduction.’ Husserl claimed it is possible (and, indeed, essential to the practice of phenomenology) to conduct an investigation into the structure of consciousness that carefully abstains from affirming the existence of anything in spatio-temporal reality. By this ‘bracketing’ of the natural world - by reducing the scope of one's assertions first to the subjective sphere of consciousness, then to its abstract (or ‘ideal’) atemporal structure - one is able to apprehend what consciousness and its various forms essentially are, in a way that supplies a foundation for the philosophical study of knowledge, meaning, and value. Both Heidegger and Merleau-Ponty (along with a number of Husserl's other students) appear to have questioned whether it is possible to reduce one's commitments as thoroughly as Husserl appears to have prescribed through a ‘mass abstention’ from judgment about the world, and thus whether it is correct to regard one's intentional experience as a whole as essentially detachable from the world at which it is directed. Seemingly crucial to their doubts about Husserl's reduction is their belief that an essential part of intentionality consists in a distinctively practical involvement with the world that cannot be broken by any mere abstention from judgment.
The Phenomenological themes just hinted at (the notion of a ‘pre-predicative’ type of intentionality; the (un)detachability of intentionality from the world) link with issues regarding consciousness and intentionality as these are understood outside the Phenomenological tradition - in particular, the notion of non-conceptual content, and the internalism/externalism debate, to be considered in Section (4). But it is by no means a straightforward matter to describe these links in detail. Part of the reason lies in the general difficulty in being clear about whether what one philosopher means by ‘consciousness’ (or its standard translations) is close enough to what another means for it to be correct to see them as speaking to the same issues. And while some of the Phenomenological philosophers (Brentano, Husserl, Sartre) make thematically central use of terms cognate with ‘consciousness’ and ‘intentionality,’ and consider questions about intentionality first and foremost as questions about the intentionality of consciousness, they do not explicitly address much that (in the latter half of the twentieth century) came to seem problematic about consciousness and intentionality. Is their ‘consciousness’ the phenomenal kind? Would they reject theories of consciousness that reduce it to a species of access to content? If so, on what grounds? (Given their interest in the relation of consciousness, inner perception, and reflection, it may be easier to discern what their stances on reductive ‘higher order representation’ theories of consciousness would be.)
In some ways the situation is more difficult still in the cases of Merleau-Ponty and Heidegger. For the former, though he willingly enough uses words standardly translated as ‘consciousness’ and ‘intentionality,’ says little to explain how he understands such terms generally. And the latter deliberately avoids these terms in his central work, Being and Time, in order to forge a philosophical vocabulary free of the errors in which they had, he thought, become enmeshed. However, it is not obvious how to articulate the precise difference between what Heidegger rejects, in rejecting the allegedly error-laden understanding of ‘consciousness’ and ‘intentionality’ (or their German translations), and what he accepts when he speaks of beings ‘showing’ or ‘disclosing’ themselves to us, and of our ‘comportment’ directed toward them.
Nevertheless, one can plausibly read Brentano's notion of ‘presentation’ as equivalent to the notion of phenomenally conscious experience, as this is understood in other writers. For Brentano says, ‘We speak of presentation whenever something appears to us.’ And one may take ways of appearing as equivalent to ways of seeming, in the sense proper to phenomenal consciousness. Further, Brentano's attempt, through what he called ‘descriptive or Phenomenological psychology,’ to analyze how intentional phenomena present themselves, the fundamental kinds to which they belong, and their necessary interrelationships, may plausibly be interpreted as an effort to articulate the philosophically salient, highly general phenomenal character of intentional states (or acts) of mind. And Husserl's attempts to delineate the structure of intentionality as it is ‘given’ in consciousness, as well as the Phenomenological productions of Sartre, can arguably be seen as devoted to laying bare to thought the deepest and most general characteristics of phenomenal consciousness, as they are found in ‘directed’ perception, judgment, imagination, emotion and action. Also, one might reasonably regard Heideggerean disclosure of the ready-to-hand and Merleau-Ponty's ‘motor-intentional’ consciousness of place as forms of phenomenally conscious experience - as long as one's conception of phenomenal consciousness is not tied to the notion that the subjective ‘sphere’ of consciousness is, in essence, independent of the world revealed through it.
In any event, to connect classic Phenomenological writings with current discussions of consciousness and its relation to intentionality, more background is needed on aspects of the other main current of Western philosophy in the past century particularly relevant to the topic of intentionality - broadly labelled ‘analytic’.
It seems fair to say that recent work in philosophy of mind in the analytic tradition that has focussed on questions about the nature of intentionality (or ‘mental content’) has been shaped most not by the writings of Brentano, Husserl and their direct intellectual descendants, but by the seminal discussions of logico-linguistic concerns found in Gottlob Frege's (1892) “On Sense and Reference” and Bertrand Russell's “On Denoting” (1905).
But Frege and Russell's work comes from much the same era, and from much the same intellectual environment, as Brentano's and the early Husserl's. And fairly clear points of contact have long been recognized. One is Russell's criticism of Meinong's ‘theory of objects’, which derived from the problem of intentionality and led Meinong to countenance objects, such as the golden mountain, that are capable of being the object of thought although they do not exist. This doctrine was one of the principal targets of Russell's theory of definite descriptions. However, it came as part of a complex and interesting package of concepts in the theory of meaning, and scholars are not united in supposing that Russell was fair to it.
Another point of contact lies in the similarities between Husserl's meaning/object distinction (in Logical Investigation I) and Frege's (prior) sense/reference distinction. Indeed the case has been influentially made (by Follesdal 1969, 1990) that Husserl's ‘meaning/object’ distinction is borrowed from Frege (though with a change in terminology), and that Husserl's ‘noema’ is properly interpreted as having the characteristics of Fregean ‘sense.’
Nonetheless, a number of factors make comparison and integration of debates within the two traditions complicated and strenuous. Husserl's notion of noema (hence his notion of intentionality) is most fundamentally rooted, not in reflections on the logical features of language, but in a contrast between the object of an intentional act and the object ‘as intended’ (the way in which it is intended), and in the idea that a structure would remain in perceptual experience even if it were radically non-veridical. And what Husserl seeks is a ‘direct’ characterization of this (and other) kinds of experience from the point of view of the experiencer. On the other hand, Frege and Russell's writings bearing on the topic of intentionality concentrate mainly and most explicitly on issues that grow from their own pioneering achievements in logic, and have given rise to ways of understanding mental states primarily through questions about the logic and semantics of the language used to speak of them.
Broadly speaking, logico-linguistic concerns have been methodologically and thematically dominant in the analytic Frege-Russell tradition, while the Phenomenological Brentano-Husserl lineage is rooted in attempts to characterize experience as it is evident from the subject's point of view. For this reason perhaps, discussions of consciousness and intentionality are more obviously intertwined from the start in the Phenomenological tradition than in the analytic one. The following sketch of relevant background in the latter case will, accordingly, most directly concern the treatment of intentionality. But by the end, the bearing of this on the treatment of consciousness in analytic philosophy of mind will have become more evident, and it will be clearer how similar issues concerning the consciousness-intentionality relationship arise in each tradition.
Central to Frege's legacy for discussions of mental or intentional content has been his distinction between ‘sense’ (Sinn) and ‘reference’ (Bedeutung), and his application of this distinction to cope with an apparent failure of substitutivity of ordinarily co-referential expressions in contexts created by psychological verbs - contexts of the sort mentioned above in expounding the notion of mental content - a task important to his development of logic. The need for a distinction between the sense and reference of an expression became evident to Frege when he considered that, even if ‘a’ is identical to ‘b’, and you understand both ‘a’ and ‘b’, still, it can be for you a discovery, an addition to your knowledge, that a = b. This is intelligible, Frege thought, only if you have different ways of understanding the expressions ‘a’ and ‘b’ - only if they involve for you distinct ‘modes of presentation’ of the self-same object to which they refer. In Frege's celebrated example: you may understand the expressions ‘The Morning Star’ and ‘The Evening Star’ and use them to refer to what is one and the same object - the planet Venus. But this is not sufficient for you to know that the Morning Star is identical with the Evening Star. For the ways in which an object (the reference) is ‘given’ to your mind when you employ these expressions (the senses or Sinne you ‘grasp’ when you use them) may differ in such a manner that ignorance of astronomy would prevent your realizing that they are but two ways in which the same object can be given.
The relevance of all this to intentionality becomes clearer once we see how Frege applied the sense/reference distinction to whole sentences. The sentence ‘The Evening Star = The Morning Star’ has a different sense from the sentence ‘The Evening Star = The Evening Star’, even though their reference (according to Frege, their truth value) is the same. The failure of substitutivity of co-referential expressions in ‘that p’ contexts created by psychological verbs can consequently be understood (Frege proposed) in this way: the reference of the terms shifts in these contexts, so that, for example, ‘the Evening Star’ no longer refers to its customary reference (the planet Venus), but to a sense that functions, for the subject of the verb (the person who thinks, judges, desires), as his or her mode of presentation of this object. The sentence occurring in this context no longer refers to its truth value, but to the sense in which the mode of presentation is embedded - what Frege calls the ‘thought’, or, in other philosophers' usage, the ‘content’ of the subject's state of mind. This thought or content is to be understood not as a mental image, nor literally as anything essentially private to the mind that thinks it, but as one and the same abstract entity that can be ‘grasped’ by two minds, and that must be so grasped if communication is to occur.
While on the surface this story may appear to be only about logic and semantics, and though Frege did not himself elaborate a general account of intentionality, what he says readily suggests the following picture. Intentional states of mind - thinking about Venus, wishing to visit it - involve some special relation (such as ‘mental grasping’), not to anything ‘in one's mind,’ nor to any imagery, but to an abstract entity, a thought, which also constitutes the sense of a linguistic expression that can be used to report one's state of mind - a sense that is grasped or understood by speakers who use it.
This style of account, together with the Fregean thesis that ‘sense determines reference,’ and the history of criticisms that both have elicited, form much of the background of contemporary discussions of mental content. It is often assumed, with Frege, that we must recognize (as some thinkers in the empiricist tradition allegedly did not) that thoughts or contents cannot consist in images or essentially private ‘ideas.’ But philosophers have frequently criticized Frege's view of thought as some abstract entity ‘grasped’ by or ‘present to’ the mind, and have wanted to replace Frege's unanalyzed ‘grasping’ with something more ‘naturalistic.’
Relatedly, it may be granted that the content of the thought reported is to be identified with the sense of the expression with which we report it. But then, it is argued, the identity of this content will not be determined individualistically, and may, in some respects, lie beyond the grasp of (or not be fully ‘present to’ the mind of) the psychological subject. For what determines the reference of an expression may be a natural causal relation to the world - as Saul Kripke influentially argued is true for proper names, like ‘Nixon’ and ‘Cicero,’ and for ‘natural kind’ terms like ‘gold’ and ‘water.’ Or (as Tyler Burge (1979) has influentially argued) two speakers who, considered as individuals, are qualitatively the same may nevertheless each assert something different simply because of differing relations they bear to their respective linguistic communities. (For example, what one speaker's utterance of ‘arthritis’ means is determined not by what is ‘in the head’ of that speaker, but by the medical experts in his or her community.) And if the reference and truth conditions of the expressions by which one's thought is reported or expressed are not determined by what is in one's head, and the content of one's thought determines their reference and truth conditions, then the content of one's thought is also not determined individualistically. Rather, it is necessarily bound up with one's causal relations to certain natural substances, and one's membership in a certain linguistic community. Both linguistic meaning and mental content are ‘externally’ determined.
The development of such ‘externalist’ conceptions of intentionality informs the reception of Russell's legacy in contemporary philosophy of mind as well. Russell also helped to put in play a conception of the intentionality of mental states according to which each such state is seen as involving the individual's ‘acquaintance with a proposition’ (counterpart to Fregean ‘grasping’) - which proposition is at once both what is understood in understanding expressions by which the state of mind is reported, and the content of the individual's state of mind. Thus, intentional states are ‘propositional attitudes.’ Also importantly, Russell's famous analysis of definite descriptions into phrases employing existential quantifiers and general predicates underlay many subsequent philosophers' rejection of any conception of intentionality (like Meinong's) that sees in it a relation to non-existent objects. And Russell's treatment drew attention to cases of what he called ‘logically proper names’ that apparently defy such analysis in descriptive terms (paradigmatically, the terms ‘this’ and ‘that’), and which (he thought) thus must refer ‘directly’ to objects. Reflection on such ‘demonstrative’ and ‘indexical’ (e.g., ‘I,’ ‘here,’ ‘now’) reference has led some to maintain that the content of our states of mind cannot always be constituted by Fregean senses, but must be seen as consisting partly of the very objects in the world outside our heads to which we refer demonstratively or indexically - another source of support for an ‘externalist’ view of mental content, hence of intentionality.
Yet another important source of externalist proclivities in twentieth-century philosophy lies in the thought that the meaningfulness of a speaker's utterances depends on their potential intelligibility to hearers - language must be public - an idea that has found varying and influential expression in the work of Ludwig Wittgenstein, W.V.O. Quine, and Donald Davidson. This, coupled with the assumption that intentionality (or ‘thought’ in the broad (Cartesian) sense) must be expressible in language, has led some to conclude that what determines the content of one's mind must lie in the external conditions that enable others to attribute content.
However, the movement from Frege and Russell toward externalist views of intentionality should not simply be accepted as yielding a fund of established results: it has been subject to powerful and detailed challenges. But without plunging into the details of the internalism/externalism debate about mental content, we can recognize, in the issues just raised, certain themes bearing particularly on the connection between consciousness and intentionality.
For example: it is sometimes assumed that, whatever may be true of content or intentionality, the phenomenal character of one's experience, at least, is ‘fixed internally’ - i.e., it involves no necessary relations to the nature of particular substances in one's external environment or to one's linguistic community. But then the purported externalist finding that neither meanings nor contents are ‘in the head’ can, of course, be read as showing the insufficiency of phenomenal consciousness to determine any intentionality or content. Something like this consequence is drawn by Putnam (1981), who takes the stream of consciousness to comprise nothing more than sensations and images, which (as Frege saw) should be sharply distinguished from thought and meaning. This interpretation of the import of externalist arguments may be reinforced by a tendency to tie (phenomenal) consciousness to non-intentional sensations, sensory qualities, or ‘raw feels,’ and hence to dissociate consciousness from intentionality (and allied notions of meaning and reference) - a tendency that has been prominent in the analytic tradition.
But it is not at all evident that externalist theories of content require us to estrange consciousness from intentionality. One might argue (as do Martin Davies (1997) and Fred Dretske (1997)) that in certain relevant respects the phenomenal character of experience is also essentially determined by causal environmental connections. By contrast, one may argue (as do Ludwig (1996b) and Horgan and Tienson (2002)) that since it is conceivable that a subject has experience much like our own in phenomenal character, but radically different in external causes from what we take our own to be (in the extreme case, a mind bewitched by a Cartesian demon into massive hallucination), there must indeed be a realm of mental content that is not externally determined.
One other aspect of the Frege-Russell tradition of theorizing about content that impinges on the consciousness/intentionality connection is this. If ‘content’ is identified with the sense or the truth-condition determiners of the expressions used in the object-clause reporting intentional states of mind, it will seem natural to suppose that possession of mental content requires the possession of conceptual capacities of the sort involved in linguistic understanding - ‘grasping senses.’ But then, to the extent that the phenomenal character of experience is inadequate to endow a creature with such capacities, it may seem that phenomenal consciousness has little to do with intentionality.
However, this raises large issues. One is this: it should not be granted without question that the phenomenal character of our experience could be as it is in the absence of the sorts of conceptual capacities sufficient for (at least some types of) intentionality. And this is tied to the issue of whether or not the phenomenal character of experience is (as some suppose) a purely sensory affair. Some would maintain, on the contrary, that thought (not just imagistic, but conceptual thought) has phenomenal character too. If so, then it is very far from clear that phenomenal character can be divorced from whatever conceptual capacities are necessary for intentionality.
Moreover, we may ask: Are concepts, properly speaking, always necessary for intentionality anyway? Here another issue rears its head: is there not perhaps a form of sensory intentionality, which does not require anything as distinctively intellectual or conceptual as is needed for the grasping of linguistic senses or propositions? (This presumably would be a kind of intentionality had by the pre-linguistic (e.g., babies) or by non-linguistic creatures (e.g., dogs).) Suppose that there is, and that this type of intentionality is inseparable from the phenomenal character of perceptual experience. Then, even if one assumes that such phenomenal consciousness is insufficient to guarantee the possession of concepts, it would be wrong to say that it has little to do with intentionality. (Advocates of varying versions of the idea that there is a distinctively ‘non-conceptual’ kind of content include Bermudez 1998, Crane 1992, Evans 1982, Peacocke 1992, and Tye 1995 - for a notable voice of opposition to this trend, see McDowell 1994.) A deep difficulty in assessing these debates lies in getting an acceptable conception of concepts with which to work. We need to understand clearly what ‘having a concept of F’ does and does not require, before we can be clear about the content of and justification for the thesis of non-conceptual content.
These proposals about non-conceptual content bear some affinity with aspects of the Phenomenological tradition alluded to earlier: Husserl's notion of ‘pre-predicative’ experience; Heidegger's notion of the ‘ready-to-hand’; and Merleau-Ponty's idea that in normal active perception we are conscious of place, not via a determinate ‘representation’ of it, but rather relative to our capacities for goal-directed bodily behaviour. Though to see the extent to which any of these are ‘non-conceptual’ in character would require not only more clarity about the conceptual/non-conceptual contrast, but also considerable fresh exegesis of these philosophers' works.
Also, one may plausibly try to find an affinity between externalist views in analytic philosophy and the later phenomenologists' rejection of Husserl's reduction, based on their doubt that we can prise consciousness off from the world at which it is directed and study its ‘intentional essence’ in solipsistic isolation. But even if externalism can be defined broadly enough to encompass Heidegger, Merleau-Ponty, Kripke, and Burge, the comparison is strained when we take account of the different sources of ‘externalism’ in the phenomenologists. These have to do, it seems (very roughly), with the idea that the way we are conscious of things (or at least, for Heidegger, the way they ‘show themselves’ to us) in our everyday activity cannot be quite generally separated from our actual engagement with the entities of which we are thus conscious (which show themselves in this way). Also relevant is the idea that one's use of language (hence one's capacity for thought) requires gearing one's activity to a social world or cultural tradition, in which antecedently employed linguistic meaning is taken up and made one's own through one's relation with others. All this is supposed to make it infeasible to study the nature of intentionality by globally uprooting, in thought, the connection of experience with one's spatial surroundings (and - crucially for Merleau-Ponty - one's own body), and with one's social environment. Whatever the merits of this line of thought, we should note: neither a causal connection with ‘natural kinds’ unmediated by reference-determining ‘modes of presentation,’ nor deference to the linguistic usage of specialists, nor belief in the need to reconstruct speakers' meaning from observed behaviour, plays a role in the phenomenologists' doubts about the reduction.
The arduous exegesis required for a clearer and more detailed comparison of these views is not possible here. Nevertheless, in following some of the main lines of thought in treatments of intentionality, descending on the one hand primarily from Brentano and Husserl, and on the other from Frege and Russell, certain fundamental issues concerning its relationship to consciousness have emerged. These include, first, the connection between consciousness and self-directed or self-reflexive intentionality. (It has already been seen that this topic preoccupied Brentano, Husserl and Sartre; its emergence as an important issue in analytic philosophy of mind will become more evident below.) Second, there is concern with the way in which (and the extent to which) mind is world-involving. (In the Phenomenological tradition this can be seen in the controversy over Husserl's Phenomenological reduction; in the Frege-Russell tradition it appears in the internalism/externalism debate about mental content.) Third, there is the putative distinction between conceptual or theoretical, and sensory or practical, forms of intentionality. (In phenomenology this shows up in Husserl's contrast between judgment and pre-predicative experience, and in related notions of his successors; in analytic philosophy it shows up in the (more recent) attention to the notion of ‘non-conceptual’ content.)
For more clarity regarding the consciousness-intentionality relationship, and how these three topics figure prominently in views about it, it is necessary now to turn attention back to philosophical disagreements regarding consciousness itself.
Consider the proposal that sense experience manifests a kind of intentionality distinct from and more basic than that involved in propositional thought and conceptual understanding. This might help form the basis for an account of consciousness. Perhaps conscious states of mind are distinguished partly by their possession of a type of content proper to the sensory subdivision of mind.
One source of the idea that a difference in type of content helps constitute a distinction between what is and is not phenomenally conscious lies in the apparent distinction between sense experience and judgment. To have conscious visual experience of a stimulus - for it to look some way to you - is one thing. To make judgments about it is something else. (This seems evident in the persistence of a visual illusion even once one has become convinced of the error.) However, on some accounts of consciousness, this distinction itself is doubtful, since conscious sense experience is taken to be nothing more than a form of judging. One such view is expressed by Daniel Dennett (1991), who takes the relevant form of judging to consist in one's possession of information or mental content available to the appropriate sort of ‘probes’ - an availability of content he calls ‘cerebral celebrity.’ For Dennett what distinguishes conscious states of mind is not their possession of a distinctive type of intentional content, but rather the richness of that content and its availability to the appropriate sort of cognitive operations. (Since the relevant class of operations is not sharply defined, neither, for Dennett, is the difference between which states of mind are conscious and which are not.)
Recent accounts of consciousness that, by contrast, give central place to a distinction between (conceptual) judgment and (non-conceptual, but still intentional) sense-experience include Michael Tye's (1995) theory, which holds that it is (by metaphysical necessity) sufficient for a conscious sense-perception that some representation of sensory stimuli is formed in one's head, ‘map-like’ in character, whose (‘non-conceptual’) content is ‘poised’ to affect one's (conceptual) beliefs. This form of mental representation Tye would contrast with the ‘sentential’ form proper to belief and judgment - and in that way he might preserve the judgment/experience contrast as Dennett does not. Consider also Fred Dretske's (1995) view that phenomenally conscious sensory intentionality consists in a kind of mental representation whose content is bestowed through a naturally selected ‘function to indicate.’ Such natural (evolution-implanted) sensory representation can arise independently of learning (unlike the more conceptual, language-dependent sort), and is found widely distributed among evolved life.
Both Tye's and Dretske's views of consciousness (unlike Dennett's) make crucial use of a contrast between the type of intentionality proper to sense-experience and that proper to linguistically expressed judgment. On the other hand, there is also some similarity among the theories, which can be brought out by noting a criticism of Dennett's view, analogues of which arise for Tye's and Dretske's views as well.
Some might think Dennett's account concerns only some variety of what Ned Block would call ‘access consciousness’. For on Dennett's account, it seems, to speak of visual consciousness is to speak of nothing over and above the sort of availability of informational content that is evinced in unprompted verbal discriminations of visual stimuli. And this view has been criticized for neglecting phenomenal consciousness. It seems we may conceive of a capacity for spontaneous judgment triggered by and responsive to visual stimuli, which would occur in the absence of the judger's phenomenally conscious visual experience of the stimuli: the stimuli do not look any way at all to the judger, and yet they trigger accurate judgments about their presence. The notion of such a (hypothetical) form of ‘blind-sight’ may be elaborated in such a way that we conceive of the judgment it affords as being at least as finely discriminatory (and as fine-grained in informational content) as that enjoyed by those with extremely poor, blurry and un-acute conscious visual experience (as in the ‘legally blind’). But a view like Dennett's seems to make this scenario inconceivable.
However, this kind of criticism does not concern only those theories that would elide any experience/judgment distinction. For Tye's and Dretske's theories, though they depend on forms of that contrast (and are offered as theories of phenomenal consciousness), can raise similar concerns. For one might think that the hypothetical blind-sighter would be as rightly regarded as having ‘poised’ map-like representations in her visual system as would someone with a comparable form of conscious vision. And one might find it unclear why we should think the visual system of such a blind-sighter must be performing its naturally endowed indicating functions more poorly than the visual system of a consciously sighted subject would.
Whatever the cogency of these concerns, one should note their distinctness from the issues about ‘kinds of intentionality’ that appear to separate both Tye and Dretske from Dennett. The notion that there is a fundamental distinction to be drawn between kinds of intentional content (separating the more intellectual from the more sensory departments of mind) sometimes forms the basis of an account of consciousness (as with Dretske's and Tye's, though not with Dennett's). But it is also important to recognize what unites Dennett, Tye, and Dretske. Despite their differences, all propose to account for consciousness by starting with a general understanding of intentionality (or mental content or representation) to which consciousness is inessential.
They then offer to explain consciousness as a special case of intentionality thus understood - in terms of the operations the content is available for, or the form in which it is represented, or the nature of its external source. The blind-sight-based objection to Dennett, and its possible extension to Dretske and Tye, helps bring this commonality to light. The discussion so far has shown how some theories purport to account for consciousness on the basis of intentionality in a way that focuses attention on attempts to discern a distinctively sensory type of intentionality. A different strategy for explaining consciousness via intentionality highlights the importance of clarity regarding the connection between consciousness and reflexivity. On such a view (roughly): experiences or states of mind are conscious just insofar as the mind represents itself as having them.
In David Rosenthal's variant of this approach, a state is conscious just when it is a kind of (potentially non-conscious) mental state one has, which one (seemingly without inference) thinks that one is in. A theory of this sort starts with some way of classifying mental states that is supposed to apply to conscious and non-conscious states of mind alike. The proposal then is that such a state is conscious just when it belongs to one of those mental kinds, and the (‘higher order’) thought occurs to the person in that state that he or she is in a state of that kind. So, for example, it is maintained that certain non-conscious states of mind can possess ‘sensory qualities’ of various sorts - one may, in a sense, be in pain without feeling pain, and one may have a red sensory quality even when nothing looks red to one. The idea is that one has a conscious visual experience of red, or a conscious pain sensation, just when one has such a red sensory quality, or pain-quality, and the thought (itself also potentially non-conscious) occurs to one that one has a red sensory quality, or pain-quality.
This way of accounting for consciousness in terms of intentionality may, like the theories already mentioned, provoke the concern that the distinctively phenomenal sense of consciousness has been slighted - though this time, not in favour of some ‘access’ consciousness, but in favour of reflexive consciousness. One focus of such criticism lies in the idea that such higher-order thought requires the possession of concepts - concepts of types of mental states - that may be lacking in creatures with first-order mentality. And it is unclear (in fact, it seems false) that these beings would therefore have no conscious sensory experience in the phenomenal sense. Might there not be a way the world looks to rabbits, dogs, monkeys, and human babies, and might they not feel pain, though they lack the conceptual wherewithal to think about their own experience?
One line of response to such concerns is simply to bite the bullet: dogs, babies, and the like might altogether lack higher order thought, but that is no problem for the theory because, indeed, they also altogether lack feelings. Rosenthal, for his part, takes a different line: lack of cognitive sophistication need not instantly disqualify one from consciousness, since the possession of primitive mentalistic concepts requires so little that practically any organism we would consider a serious candidate for sensory consciousness (certainly babies, dogs, and bunnies) would qualify.
A number of additional worries have been raised about both the necessity and the sufficiency of ‘higher order thought’ for conscious sense experience. In the face of such doubts, one may preserve the idea that consciousness consists in some kind of higher order representation - the mind's ‘scanning’ itself - by abandoning ‘higher order thought’ for another form of representation: one that is not thought-like or conceptual, but somehow sensory in character. Perhaps, just as we can distinguish between primitive sensory perception of things in our environment and the more intellectual, conceptual operations based on it, so we can distinguish the thoughts we have about our own (‘inner’) mental goings-on from the (‘inner’) sensing of them. And, if we propose that consciousness consists in this latter sort of higher order representation, it seems we will escape the worries occasioned by the Rosenthalian variant of the ‘reflexivist’ doctrine. In considering such theories, two of the consciousness-themes discerned earlier come together: the reflexivity of mind (higher order representation), and the contrast between conceptual and non-conceptual (sensory) forms of representation.
Criticism of ‘inner sense’ theories is likely to focus not so much on the thought that such inner sensing can occur without phenomenal consciousness, or that the latter can occur without the former, as on the difficulty in understanding just what inner sensing (as distinct from higher order thought) is supposed to be, and why we should think we have it. It seems inner sense theorists share with those who distinguish between conceptual and non-conceptual (or sensory) flavours of intentionality the challenge of clarifying and justifying some version of this distinction. But they bear the additional burden of showing how such a distinction can be applied not just to intentionality directed at tables and chairs, but at the ‘furniture of the mind’ as well. One may grant that there are non-conceptual sensory experiences of objects in one's external environment while doubting one has anything analogous regarding the ‘inner’ landscape of mind.
It should be noted that, in spite of the difficulties faced by higher order representation theories, they draw on certain perennially influential sources of philosophical appeal. We do have some willingness to speak of conscious states of mind as states we are conscious or aware of being in. It is tempting to interpret this as indicating some kind of reflexivity. And the history of philosophy reveals many thinkers attracted to the idea that consciousness is inseparable from some kind of self-reflexivity of mind. As noted, varying versions of this idea can be found in Brentano, Husserl, and Sartre, and we can go further back to Kant (1787), who spoke explicitly of ‘inner sense,’ and to Locke (1690), who defined consciousness as the ‘perception of what passes in a man's mind.’ Brentano (controversially) interpreted Aristotle's enigmatic and terse discussion of “seeing that one sees” in De Anima as an anticipation of his own ‘inner perception’ view. However, there is this critical difference between the thinkers just cited and contemporary purveyors of higher order representation theories. The former do not maintain, as do the latter, that consciousness consists in one's forming the right sort of higher order representation of a possibly non-conscious type of mental state. Even if they think that consciousness is inseparable from some sort of mental reflexivity, they do not suggest that consciousness can, so to speak, be analysed into mental parts, none of which essentially requires consciousness. (Some could not maintain this, since they explicitly deny mentality without consciousness.) There is a difference between saying that reflexivity is essential to consciousness and saying that consciousness just consists in or is reducible to a species of mental reflexivity. Advocacy of the former without advocacy of the latter is certainly possible.
Suppose one holds that phenomenal consciousness is distinguishable both from ‘access’ and from ‘reflexivity,’ and that it cannot be explained as a special case of intentionality. One might conclude from this that phenomenal consciousness and intentionality constitute two distinct realms within the mental, and embrace the idea that the phenomenal is a matter of non-intentional qualia or raw feels. One important current in the analytic tradition has evinced this attitude - it is found, for example, in Wilfrid Sellars' (1956) distinction between ‘sentience’ (sensation) and ‘sapience.’ Whereas the qualities of feelings involved in the former - mere sensations - require no cognitive sophistication and are readily attributable to brutes, the latter - involving awareness of, awareness that - requires that one have the appropriate concepts, which cannot be guaranteed by just having sensations; one needs learning and inferential capacities of a sort Sellars believed possible only with language. “Awareness,” Sellars says, “is a linguistic affair.”
Thus we may arrive at a picture of mind that places sensation on one side, and thought, concepts, and ‘propositional attitudes’ on the other. If one recognizes a distinctively phenomenal consciousness not captured by ‘representationalist’ theories of the kinds just scouted, one may then want to say: that is because the phenomenal belongs to mere sentience, and the intentional to sapience. Other influential philosophers of mind have operated with a similar picture. Consider Gilbert Ryle's (1949) contention that the stream of consciousness contains nothing but sensations that provide “no possibility of deciding whether the creature that had these was an animal or a human being; an ignoramus, a simpleton, or a sane man” - sensations of which it cannot appropriately be asked whether they are correct or incorrect, veridical or non-veridical. And Wittgenstein's (1953) influential criticism of the notion of understanding as an ‘inner process,’ and of the idea of a language for private sensation divorced from public criteria, could be interpreted in ways that sever (phenomenal) consciousness from intentionality. (Such an interpretation would assume that if consciousness could secure understanding, understanding would be an ‘inner process,’ and that if phenomenal character bore intentionality with it, private sensations could impart meaning to words.) Also recall Putnam's conviction that the (internal) stream of consciousness cannot furnish the (externally fixed) content of meaning and belief. A similar attitude is evident in Donald Davidson's distinction between sensation and thought (the former is nothing more than a causal condition of knowledge, while the latter can furnish reasons and justifications, but cannot occur without language).
Richard Rorty (1979) makes a Sellarsian distinction between the phenomenal and the intentional key to his polemic against epistemological philosophy overall, and ‘foundationalism’ in particular (and takes a generally deflationary view of the phenomenal or ‘qualitative’ side of this divide).
But it is possible to reject attempts to subsume the phenomenal under the intentional, as in the ‘representationalist’ accounts of consciousness variously exemplified by Dennett, Dretske, Lycan, Rosenthal, and Tye, without adopting this ‘two separate realms’ conception. We can believe that there is no conception of the intentional from which the phenomenal can be explanatorily derived that does not already include the phenomenal, while still believing that the phenomenal character of experience cannot be separated from its intentionality, and that having experience of the right sort of phenomenal character is sufficient for having certain forms of intentionality.
Here one might leave open the question whether there is also some kind of phenomenal character (perhaps that involved in some kinds of bodily sensation or after-images) whose possession is not sufficient for intentionality. (Though if we say there is such non-intentional phenomenal character, this would give us a special reason for rejecting representationalist explanations of phenomenal consciousness.) If, on the other hand, we say phenomenal character always brings intentionality with it, that might be termed a ‘representationalist’ view of a sort. But its endorsement is consistent with a rejection of the attempts to derive phenomenality from intentionality, or to reduce the former to a species of the latter, which commonly attract the ‘representationalist’ label. We should distinguish the question of whether the phenomenal can be explained by the intentional from the question of whether the phenomenal is separable from the intentional.
Closer consideration of two of the three themes earlier identified as common to the Phenomenological and analytic traditions is needed to come to grips with the latter question. It is necessary to inquire: (1) whether an externalist conception of intentionality can justify separating phenomenal character from intentionality. And one needs to ask: (2) whether one's verdict on the ‘separability’ question stands or falls with acceptance of some version of a distinction between conceptual and non-conceptual (or distinctively sensory) forms of intentionality.
The dialectical situation regarding (1) is complex. One way it may seem plausible to answer question (1) in the affirmative, and to restrict phenomenal character and intentionality to different sides of some internal/external divide, is to conduct a Cartesian thought experiment, in which one conceives of consciousness with all its subjective riches surviving the utter annihilation of the spatial realm of nature. (Similarly, but less radically, one may conceive of a ‘brain in a vat’ generating an extended history of sense experience indistinguishable in phenomenal character from that of an embodied subject.) If one is committed to an externalist view of intentionality - but rejects the intentionalizing strategies for dealing with consciousness - one may conclude that phenomenal character is altogether separable from (and insufficient for) intentionality. However, one may draw rather different conclusions from the Cartesian thought experiment - turning it against externalism. It may seem to one that, since the intentionality of experience would apparently survive along with its phenomenal character, the causal tie between the mind's content and the world of objects beyond it that (according to some versions of externalism) fixes content is, in at least some cases (or for some contents), no more than contingent. Alternatively, relying on whatever one relies on to argue that this or that relation of experience and world is essential to having any intentionality at all, one might take this to show that phenomenal character is also externally determined, in a way that renders the Cartesian scenario of a consciousness totally unmoored from the world an illusion.
And, if Merleau-Ponty or Heidegger thinks that Husserl's Phenomenological reduction to a sphere of ‘pure’ consciousness cannot be completed, and their reasons make them externalists of some sort, it hardly seems to establish that they are committed to a realm of raw sensory phenomenal consciousness, devoid of intentionality. In fact their rejection of Husserl's notion of ‘uninterpreted’ sensory or ‘hyletic’ data in experience would seem to indicate that they, at least, would strongly deny they held such views.
In this arena it is far from clear what we are entitled to regard as secure ground and what as ‘up for grabs.’ However, there do seem to be ways in which all would probably admit that the phenomenal character of experience and externally individuated content come apart, ways in which such content goes beyond anything phenomenal consciousness can supply. For the way it seems to me to experience this computer screen may be no different from the way it seems to my twin to experience some entirely distinct one. Thus where intentional contents are distinguished in such a way as to include the particular objects experienced or thought of, phenomenal character cannot determine the possession of content. Still, that does not show that no content of any sort is fixed by phenomenal character. Perhaps, as some would say, phenomenal character determines ‘narrow’ or ‘notional’ content, but not ‘wide’ (externally ‘fixed’) content. Nor is it even clear that we must judge the sufficiency of phenomenal character for intentionality by adopting some general account of content and its individuation (as ‘narrow’ or ‘wide’ for instance), and then ask whether one's possession of content so considered is entailed by the phenomenal character of one's experience. One may argue that the phenomenal character of one's experience suffices for intentionality as long as having it makes one assessable for truth, accuracy (or other sorts of ‘satisfaction’) without the addition of any interpretation, properly so-called, such as is involved in assessment of the truth or accuracy of sentences or pictures.
Even if one does not globally divide phenomenal character from intentionality along some inner/outer boundary line, to address questions of the sufficiency of phenomenal character for intentionality (and thus of the separability of the latter from the former) one still needs to look at question (2) above, and the potential relevance of distinctions that have been proposed between conceptual and non-conceptual forms of content or intentionality. Again the situation is complex. Suppose one regards the notion of non-conceptual intentionality or content as unacceptable on the grounds that all content is conceptual. But suppose one also thinks it is clear that phenomenal character is confined to sensory experience and imagery, and that this cannot bring with it the rational and inferential capacities required for genuine concept possession. Then one will have accepted the separability of phenomenal consciousness from intentionality. However, one may, by contrast, take the apparent susceptibility of phenomenally conscious sense experience to assessment for accuracy, without need for additional, potentially absent interpretation, to show that the phenomenal character of experience is inherently intentional. Then one will say that the burden lies on anyone who claims conceptual powers are crucial to such assessability and can be detached from the possession of such experience: they must identify those powers and show that they are both crucial and detachable in this way. Additionally, one may reasonably challenge the assumption that phenomenal consciousness is indeed confined to the sensory realm; one may say that conceptual thought also has phenomenal character. Even if one does not, one may still base one's confidence in the sufficiency of phenomenal character for intentionality on one's confidence that there is a kind of non-conceptual intentionality that clearly belongs essentially to sense experience.
From these considerations, we can see that it is critical to answer the following questions in order to decide whether or not phenomenal character is wholly or significantly separable from intentionality. (i) Does every sort of intentionality that belongs to thought and experience require an external connection for which phenomenal character is insufficient?
(ii) Does every sort of intentionality that belongs to sense-experience and sensory imagery require conceptual abilities for which phenomenal character is insufficient? And (iii) does every sort of intentionality that belongs to thought require conceptual capacities for which phenomenal character is insufficient?
Suppose one finds phenomenal character quite generally inadequate for the intentionality of thought and sense-experience, by answering ‘yes’ either to (i), or to both (ii) and (iii). And suppose one makes the plausible (if non-trivial) assumption that what guarantees no intentionality for sensory experience, imagery, or conceptual thought guarantees no intentionality that belongs to our minds at all (including that of emotion, desire, and intention - for these latter presuppose the former). Then one will find phenomenal character altogether separable from intentionality. Phenomenal character could be as it is even if intentionality were completely taken away. There is no form of phenomenal consciousness, and no sort of intentionality, such that the first suffices for the second.
A more moderate view might answer only one of (ii) and (iii) in the affirmative (probably (iii) would be the choice). But still, in that case one recognizes some broad mental domain whose intentionality is in no respect guaranteed by phenomenal character. And that, too, would mark a considerable limitation on the extent to which phenomenal consciousness brings intentionality with it.
On the other hand, suppose that one answers ‘no’ to (i), and to either (ii) or (iii). Now, external connections and conceptual capacities seem to be what we might most plausibly regard as conditions necessary for the intentionality of thought and experience that could be stripped away while phenomenal character remains constant. So if one thinks that neither is in fact both generally essential to intentionality and removable while phenomenal character persists unchanged, and one can think of nothing else that is essential for thought and experience to have any intentionality but for which phenomenal character is insufficient, it seems reasonable to conclude that phenomenal character is indeed sufficient for intentionality of some sort. If one has gone this far, it seems unlikely that one will then think that actual differences in phenomenal character still leave massively underdetermined the different forms of intentionality we enjoy in perceiving and thinking. So, one will probably judge that some kind of phenomenal character suffices for, and is inseparable from, many significant forms of intentionality in at least one of these domains (sensory or cognitive): there are many differences in phenomenal character, and many in intentionality, such that you cannot have the former without the latter. If one also rejects both (ii) and (iii), then one will accept that appropriate forms of phenomenal consciousness are sufficient for a very broad and important range of human intentionality.
Suppose one rejects both the view that consciousness is explanatorily derived from a more fundamental intentionality and the view that phenomenal character is insufficient for intentionality because it is a matter of purely inward feeling. It seems one might then press further, and argue for what Flanagan calls ‘consciousness essentialism’ - the view that the phenomenal character of experience is not only sufficient for various forms of intentionality, but necessary as well.
This type of thesis needs careful formulation. It does not necessarily commit one to a Cartesian (or Brentanian or Sartrean) claim that all states of mind are conscious - a total denial of the reality of the unconscious. A more qualified thesis does seem desirable. While Freud's waning prestige may have weakened tendencies to assume that he had somehow demonstrated the reality of unconscious intentionality, the rise of cognitive science has created a new climate of educated opinion that takes elaborate non-conscious mental machinations for granted. Even if we do not acquiesce in this view, we do (and long have) appealed to explanations of human behaviour that recognize some sort of intentional state other than phenomenally conscious experiences and thoughts.
One way of maintaining the necessity of consciousness to mind while preserving some space for mind that is not conscious is Searle's: roughly, that we should first distinguish between what he calls ‘intrinsic’ intentionality on the one hand, and merely ‘as if’ intentionality and ‘interpreter relative’ intentionality on the other. We may sometimes speak as if artifacts (like thermostats) had beliefs or desires - but this is not to be taken literally. And we may impose ‘conditions of satisfaction’ on our acts and creations (words, pictures, diagrams, etc.) by our interpretation of them - but they have no intentionality independent of our interpretive practices. Intrinsic intentionality, on the other hand - the kind that pertains to our beliefs, perceptions, and intentions - is neither a mere manner of speech, nor is our possession of it derived from others' interpretive stance toward us. But then, Searle asks, what accounts for the fact that states with intrinsic intentionality are directed at objects under aspects, and why are they directed under the aspects they are (why do they have the content they do)? With conscious states of mind, Searle says, their phenomenal or subjective character determines their ‘aspectual shape.’ Where non-conscious states of mind are concerned, there is nothing to do the job but their relationship to consciousness. The right relationship, he holds, is this: a non-conscious state of mind must be ‘potentially conscious.’ If some psychological theories (of language, of vision) postulate an unconscious so deeply buried that its mental representations cannot even potentially become conscious, so much the worse for those theories.
Searle's views have aroused a number of criticisms. Among the problem areas are these. First, how are we to explain the requirement that intrinsically intentional states be ‘potentially conscious’ without making it either too easy or too difficult to satisfy? Second, just why is it that the intrinsic intentionality of non-conscious states needs accounting for, while that of conscious states is somehow unproblematic? Third, Searle's argument appears to offer no general reason to rule out all efforts to give ‘naturalistic’ accounts of conditions sufficient to impose - without the help of consciousness - genuine and not merely interpreter-relative intentionality.
Another approach is taken by Kirk Ludwig, who argues that there is nothing to determine whose state of mind a given non-conscious state of mind is, unless that state consists in a disposition to produce a conscious mental state of the right sort. Alleged mental processes that did not tend to produce someone's conscious states of mind appropriately would be no one's, which is to say that they would not be mental states at all. Roughly: consciousness is needed to provide that unity of mind without which there would be no mind. And Ludwig argues that it is therefore a mistake to attribute to us many of the unconscious inferences with which psychological theorists have long been wont to populate our minds.
The persuasiveness of Searle's and Ludwig's arguments depends heavily on demonstrating the failure of alternative accounts of the job they enlist consciousness to do (such as securing ‘aspectual shape,’ or ownership). One may grant (as does Colin McGinn (1991)) that phenomenal character is inseparable from intentionality but cannot be explained by it, while still maintaining that genuine intentionality (mental content) is quite adequately imposed on animal brains by their acquisition of natural functions of content-bearing - in which consciousness evidently plays no essential role. Or one may (like Jerry Fodor (1987)) maintain a robust realist ‘representational theory of mind,’ proposing that the content of mental symbols is stamped on them by their being in the ‘right causal relation’ to the world - while despairing of the prospects for a credible naturalistic theory of consciousness.
The preceding discussion has conveyed some of the complexities and potential ambiguities in talk of ‘consciousness’ and ‘intentionality’ that must be appreciated if one is to resolve questions about the relationship between the two with any clarity. Brief surveys of relevant aspects of the Phenomenological and analytic traditions have brought out some shared areas of interest, namely: the relationship of consciousness to reflexivity and ‘self-directed’ intentionality; the distinction between conceptual and non-conceptual (or sensory) forms of intentionality; and the concern with the extent to which either conscious experience or intentional states of mind are essentially ‘world-involving.’ These concerns were seen to bear on attempts to account for consciousness in terms of intentionality, and on questions that arise even if those attempts are rejected - questions regarding the separability of phenomenal consciousness and intentionality. Some attention was given to views that, in some sense, reverse the order of explanation proposed by intentionalizing views of consciousness, and take the facts of consciousness to explain the facts of intentionality. Now it is possible to step back and distinguish four general views of the consciousness-intentionality relationship discernible in the philosophical positions canvassed above, as follows.
(1) Consciousness is explanatorily derived from intentionality.
(2) Consciousness is underived and separable from intentionality.
(3) Consciousness is underived but also inseparable from intentionality.
(4) Consciousness is underived from, inseparable from, and essential to intentionality.
To adopt view (1) is to accept some intentionalizing strategy with respect to consciousness, such as is variously represented by Dennett, Dretske, Lycan, Rosenthal, and Tye. These views differ importantly among themselves. Their differences have much to do with how they treat consciousness-reflexivity issues and the conceptual/non-conceptual (or conceptual/sensory) contrast, and how they view the intersection between the two. But if we accept (1), then our answer to the question of what consciousness has to do with intentionality will ultimately be given in some prior general account of content or intentionality. And there will be no special issue regarding the internal or external fixation of the phenomenal character of experience, over and above what arises for mental content generally.
On the other hand, suppose one rejects (1) and holds that experiences are conscious in a phenomenal sense that does not yield to an approach in which one conceives of intentionality (or content, or information bearing) independently of consciousness and then, by adverting to special operations, or sources, or contents, tells us what consciousness is. At this point, one faces a choice between (2) and (3).
By embracing (2) we adopt the ‘raw feel’ conception of phenomenality seemingly implicit in Sellars and Ryle. If, on the other hand, we accept (3), we endorse a much more intimate relationship between consciousness and intentionality: without proposing to account for the former on the basis of the latter, we hold that phenomenal character is sufficient for intentionality.
But adoption of (3) leaves open a further basic question. Consciousness (of the appropriate sort) may be sufficient for, though underived from, intentionality, while intentionality does not itself require consciousness. Thus we come to ask whether having conscious experience of an appropriate sort is necessary to having either sensory or more-than-sensory (conceptual) intentionality. Adopting thesis (4), we say ‘yes’ - that such intentionality can come only with consciousness - and we will probably have gone as far in making consciousness fundamental to mind as one reasonably can. Again, this is not necessarily to deny the reality of non-conscious mental phenomena. But it could, in a broad way, be interpreted as siding with Husserl, Ludwig, and Searle in thinking of consciousness as the irreplaceable source of intentionality and meaning.
This abstract list of four options might leave one without a sense of what is at stake in adopting this or that view. Perhaps the positions themselves will become a little clearer if we make explicit four broad areas of philosophical concern to which the choice among them is relevant.
First, they are relevant to the issue of how to conceive of the mind, or the domain of psychology, as a whole. Is there some unity to the concept of mind, or to psychological phenomena? Is there something that deserves to be considered the essence of the mental? If consciousness can be thoroughly intentionalized (as (1) would have it), maybe (with suitable qualifications) we could uphold the thesis that intentionality is the “mark of the mental.” If we reject (1) and embrace (3), seeing intentionality as inseparable from the phenomenal character of experience, then we still might maintain that both consciousness and intentionality are necessary for real minds - at least, if we adopt (4) as well. But a unified view of the mind seems difficult (if not impossible) to maintain if one segregates phenomenal character to non-intentional sensation - as in (2). Even if one does not, one may lack a unifying conception of the mental domain if one is not satisfied with arguments that show that phenomenal consciousness is essential to genuine (not merely “as if” or “interpreter derived”) intentionality. In any case, both consciousness and intentionality signify broad enough psychological categories that one's view of their extension and relationship will do much to draw one's map of psychology's terrain.
Second (and relatedly), views about the consciousness-intentionality relationship bear significantly on general questions about the explanation of mental phenomena. One may ask what kinds of things we might try to explain in the mental domain, what sorts of explanations we should seek, and what prospects of success we have in finding them. If we accept (1) and some intentionalizing account of consciousness, we will not suppose, as do some (Chalmers 1996; Levine 2001; McGinn 1991; Nagel 1974), that phenomenal consciousness poses some specially recalcitrant (maybe hopelessly unsolvable) problem for reductive physicalist or materialist explanations. Rather, we will see the basic challenge as that of giving a natural scientific account of intentionality or mental representation. And this indeed is a reason some are attracted to (1). One may believe that it offers us the only hope for a natural scientific understanding of consciousness. The underlying thought is that a science of consciousness must adopt this strategy: First conceive of intentionality (or content or mental representation) in a way that separates it from consciousness, and see intentionality as the outcome of familiar (and non-intentional) natural causal processes. Then, by further specifying the kind of intentionality involved (in terms of its use, its sources, its content), we can account for consciousness. In other words: 'naturalize' intentionality, then intentionalize consciousness, and mind has found its place in nature.
However, we should recognize a distinction between those whose envisioned naturalistic explanation would require underlying forms of necessity and impossibility stronger than those pertaining to laws of nature generally - such as conceptual or 'metaphysical' necessity - and those who see the link between explanans and explanandum as simply one of natural scientific law. David Chalmers' (1996) proposals for 'naturalistic dualism' (unlike those of the aforementioned naturalizers) put him in the second group. He argues that phenomenal consciousness in its various forms supervenes (not conceptually or metaphysically, but only as a matter of nature's laws) on functional organization, and that this permits us to envisage ('non-reductive') ways of explaining consciousness by appeal to such organization.
Those who reject attempts to explain phenomenal consciousness via a theory of intentionality may still reasonably proclaim allegiance to 'naturalism.' One may take phenomenal consciousness to be, in a sense, psychologically basic (if all that is mental is either phenomenally conscious or intentional, and no intentionalizing account of phenomenal character is feasible). But one might still hold that some non-intentional neuropsychological, or other recognizably physicalist, explanation of the phenomenal character of experience is to be had, even if the explanatory link fails to exhibit an appropriately strong conceptual or metaphysical necessity. On such a view, nothing stronger than psychophysical laws of nature is needed to give us the prospect of a natural scientific account of consciousness.
However, if we not only reject intentionalizing accounts of phenomenal character, but also see it as inseparable from intentionality (if we reject both (1) and (2)), then whatever problems attach to physicalist explanations of consciousness will also infect the prospects for explaining intentionality - to some extent at least. And this will hold even if we remain aloof from (4), and do not claim that phenomenal consciousness is essential to intentionality. For if we think that much of the intentionality we have in perceiving, imagining, and thinking is integral to the phenomenal character of such experience, then without a reductive explanation of that phenomenal character, our possession of the intentionality it brings with it will not be reductively explained either.
Finally, it should be noted that if one holds (4), this may have important consequences for what forms of psychological explanation one can accept, since on that view one's mental processes must bear the right relationship to one's conscious experiences to count as one's mental processes at all. If this is right, postulated processes that do not bear this relation to our experiential lives cannot be going on in our minds.
A third area of concern is epistemological: the choice among (1)-(4) has consequences for the theory of knowledge. If one embraces (2), together with something like a Sellarsian or Davidsonian distinction between sensation and thought - putting phenomenal character exclusively on the 'sensation' side, and intentionality exclusively on the 'thought' side of this divide - the place of consciousness in a philosophical account of knowledge will likely be meager: at most, phenomenal character will be a causal condition, without a role to play in the warrant or justification of claims to knowledge. However, if one takes routes (1) or (3), the situation will appear rather different. If one either intentionalizes consciousness or views intentionality as inseparable from phenomenal character, there will be more room to view consciousness as central to accounts of the warrant involved in first-person ('introspective') knowledge of mind, and in empirical or perceptual knowledge. Just how one goes about this, and with what success, will depend on how (if one chooses (1)) one intentionalizes consciousness, and (if one chooses (1) or (3)) on what sort of intentionality or content one thinks phenomenal consciousness brings with it. The place of consciousness in one's understanding of introspective or empirical knowledge will be rather different depending on how one resolves the issues regarding reflexivity, the conceptual/non-conceptual distinction, and externalism.
A fourth area of philosophical concern, closely bound to our conception of the relation of consciousness and intentionality, has to do with value. How intimately is consciousness bound up with those features of our own and others' lives that give them intrinsic or non-instrumental value for us? We may think that the pleasure and suffering that demand our ethical concern are necessarily phenomenally conscious - and that this evaluative significance remains even if phenomenal character is non-intentional. However, the more intentionality is seen as inherent to the phenomenal character of experience, the more the latter will be bound to manifestations of intelligence, emotion, and understanding that appear to give human (and perhaps at least some animal) life its special importance for us. It may seem that those opting for (3) share at least this much ground with their intentionalizing opponents who go for (1): Both (unlike those who adopt (2)) are in a position to claim consciousness is crucial to whatever special moral regard we think appropriate only toward those whose psychologies involve a kind of intentionality for which possession of painful or pleasant experience is not sufficient. However, this needs qualification on two counts. First, if one's embrace of (1) includes an intentionalizing strategy that limits phenomenal character to the sensory realm, one will limit the moral significance of phenomenal consciousness accordingly. Second, to those who hold (3), it may seem that their opponents' intentionalizing theories remove from view the very qualities of experience that make life worth living, and so those opponents will hardly seem like allies on the issue of value.
Further, if one goes so far as to take on (4) - conscious essentialism - those who make that additional commitment might wonder how those who do not could ultimately accord the possession of consciousness much greater non-instrumental value than the possession of a sophisticated but totally non-conscious mind.
From this survey it seems fair to conclude that working out a detailed view of the relation between consciousness and intentionality is hardly a peripheral matter philosophically. Potentially it has extensive consequences for one's views concerning these four important, broad topics: (I) The unity of mental phenomena (Do consciousness or intentionality (or both together) somehow unify the domain of the psychological?) (II) The explanation of mental phenomena (Can consciousness and intentionality be explained separately? Is explaining the one key to explaining the other?) (III) Introspective and empirical knowledge (What relation to intentionality would give consciousness a central epistemological role in either?) (IV) The value of human and other animal life (What relation of consciousness and intentionality (if any) underlies the non-instrumental value we accord ourselves and others?)
We collectively glorify our ability to think as the distinguishing characteristic of humanity; we personally and mistakenly glorify our thoughts as the distinguishing pattern of who we are. From the inner voice of thought-as-words to the wordless images within our minds, thoughts create and limit our personal world. Through thinking we abstract and define reality, reason about it, react to it, recall past events, and plan for the future. Yet thinking remains both woefully underdeveloped in most of us and grossly overvalued. We can best gain some perspective on thinking in terms of energies.
Automatic thinking draws us away from the present, allowing our thoughts to meander where they will, carrying our passive attention along with them. Like water running down a mountain stream, thoughts on autopilot careen through the spaces of perception, randomly triggering associative links within our vast storehouse of memory. In itself, such associative thought is harmless. However, our tendency to believe in, act upon, and drift away with such undirected thought keeps us operating in an automatic mode. Lulled into an inner passivity by our daydreams and thought streams, we lose contact with the world of actual perceptions, of real life. In the automatic mode of thinking, I am completely identified with my thoughts, believing that my thoughts are I.
Another mode of automatic thinking consists of repetitious and habitual patterns of thought. These thought tapes and our running commentary on life, unexamined by the light of awareness, keep us enthralled, defining who we are and perpetuating all our limiting assumptions about what is possible for us. Driving and driven by our emotions, these ruts of thought create our false persona, the mask that keeps us disconnected from others and from our own authentic self. More than any other single factor, automatic thinking hinders our contact with presence, limits our being, and forms our path. The autopilot of thought constantly calls us away from the immediacy of the present moment, keeping us fixed on the most superficial levels of our being.
Sometimes we even notice strange, unwanted thoughts that we consider horrible or shameful. We might be upset or shaken that we would think such thoughts, but those reactions only serve to sustain the problematic thoughts by feeding them energy. Furthermore, that self-disgust is based on the false assumption that we are our thoughts, that even unintentional thoughts, arising from our conditioned minds, are we. They are not we, and we need not act upon or react to them. They are just thoughts, with no inherent power and no real message about who we are. We can just relax and let them go - or not. Troubling thoughts that recur over a long period and hinder our inner work may require us to examine and heal their roots in our conditioning, perhaps with the help of a psychotherapist.
Sensitive thinking puts us in touch with the meaning of our thoughts and enables us to think logically, solve problems, make plans, and carry on a substantive conversation. A good education develops our ability to think clearly and intentionally with the sensitive energy. With that energy level in our thinking brain, no longer totally submerged in the thought stream, we can move about in it, choosing among and directing our thoughts based on their meaning.
Conscious thinking means stepping out of the thought stream altogether, and surveying it from the shore. The thoughts themselves may even evaporate, leaving behind a temporary empty streambed. Consciousness reveals the banality and emptiness of ordinary thinking. Consciousness also permits us to think more powerfully, holding several ideas, their meanings and ramifications in our minds at once.
When the creative energy reaches thought, truly new ideas spring up. Creative thinking can happen after a struggle, after exhausting all known avenues of relevant ideas and giving up, clearing and emptying the stage so the creative spark may enter. The quiet, relaxed mind also leaves room for creative thought, a clear channel for creativity. Creative and insightful thoughts come to all of us in regard to the situations we face in life. The trick is to be aware enough to catch them, to notice their significance, and, if they withstand the light of sober and unbiased evaluation, to act on them.
On the spiritual path, we work to recognize the limitations of thought, to recognize its power over us, and especially to move beyond it. Along with Descartes, we subsist in the realm of 'thoughts,' but thoughts are just thoughts. They are not we. They are not who we are. No thought can enter the spiritual realms. Rather, the material world defines the boundaries of thought, despite its power to conceive lofty abstractions. We cannot think our way into spiritual reality. On the contrary, identification with thinking prevents us from entering the depths. As long as we believe that refined thinking represents our highest capacity, we shackle ourselves exclusively to this world. All our thoughts, all our books, all our ideas wither before the immensity of the higher realms.
A richly developed body of spiritual practices engages thought, from repetitive prayer and mantras, to contemplation of an idea, to visualizations of deities. In a most instructive and invaluable exercise, we learn to see beyond thought by embracing the gaps, the spaces between thoughts. After sitting quietly and relaxing for some time, we turn our attention toward the thought stream within us. We notice thoughts come and go of their own accord, without prodding or pushing from us. If we can abide in this relaxed watching of thought, without falling into the stream and flowing away with it, the thought stream begins to slow and the thoughts fragment. Less enthralled by our thoughts, we begin to see that we are not our thoughts. Less controlled by, and at the mercy of, our thoughts, we begin to be aware of the gaps between thought particles. These gaps open onto the consciousness underlying all thought. Settling into these gaps, we enter and become the silent consciousness beneath thought. Instead of our being in our thoughts, our thoughts are in us.
There is potentially a rich and productive interface between neuroscience/cognitive science and psychoanalysis/psychotherapy. The two traditions, however, have evolved largely independently, based on differing sets of observations and objectives, and tend to use different conceptual frameworks and vocabularies. Each has distinctive contributions to make; what is needed is a useful common reference point for further exploration of the relations between neuroscience/cognitive science and psychoanalysis/psychotherapy.
The historical gap between neuroscience/cognitive science and psychotherapy is being productively closed by, among other things, the suggestion that recent understandings of the nervous system as a modeler and predictor bear a close and useful similarity to the concepts of projection and transference. The gap could perhaps be valuably narrowed still further by a comparison, in the two traditions, of the concepts of the "unconscious" and the "conscious" and the relations between the two. It is suggested that these be understood as two independent "story generators" - each with a different style of function and both operating optimally as reciprocal contributors to each other's ongoing story evolution. A parallel and comparably optimal relation might be imagined for neuroscience/cognitive science and psychotherapy.
For the sake of argument, imagine that human behaviour and all that it entails (including the experience of being a human and interacting with a world that includes other humans) is a function of the nervous system. If this were so, then there would be lots of different people who are making observations of (perhaps different) aspects of the same thing, and telling (perhaps different) stories to make sense of their observations. The list would include neuroscientists and cognitive scientists and psychologists. It would include as well psychoanalysts, psychotherapists, psychiatrists, and social workers. If we were not too fussy about credentials, it should probably include as well educators, and parents and . . . babies? Arguably, all humans, from the time they are born, spend a considerable amount of time making observations of how people (others and themselves) behave and why, and telling stories to make sense of those observations.
The stories, of course, all differ from one another to greater or lesser degrees. In fact, the notion that "human behaviour and all that it entails . . . is a function of the nervous system" is itself a story, used to make sense of observations by some people and not by others. It is not my intent here to try to defend this particular story, or any other story for that matter. Very much to the contrary, what I want to do is to explore the implications and significance of the fact that there are different stories and that they might be about the same (some)thing.
In so doing, I want to try to create a new story that helps to facilitate an enhanced dialogue between neuroscience/cognitive science, on the one hand, and psychotherapy, on the other. That new story concerns what is here being called the "nervous system," which others are free to call the "self," "mind," "soul," or whatever best fits their own stories. What is important is the idea that the multiple stories, evident in their conflicts, may not in fact be about disconnected and adversarial entities, but could rather be about fundamentally, understandably, and valuably interconnected parts of the same thing.
"Non-conscious Prediction and a Role for Consciousness in Correcting Prediction Errors" by Regina Pally (Pally, 2004) is the take-off point for my enterprise. Pally is a practising psychiatrist, psychoanalyst, and psychotherapist who has actively engaged with neuroscientists to help make sense of her own observations. I am a neuroscientist who recently spent two years as an Academic Fellow of the Psychoanalytic Centre of Philadelphia, an engagement intended to expand my own set of observations and forms of story-telling. My hope is that from this complementarity - from our similarities and our differences - something of significance will emerge in this commentary.
Many psychoanalysts (and psychotherapists too, I suspect) feel that the observations/stories of neuroscience/cognitive science are at best irrelevant to their own activities, and at worst destructive of them; much the same probably holds, in reverse, for many neuroscientists/cognitive scientists. Pally clearly feels otherwise, and it is worth exploring a bit why this is so in her case. A general key, I think, is in her line "In current paradigms, the brain has intrinsic activity, is highly integrated, is interactive with the environment, and is goal-oriented, with predictions operating at every level, from lower systems to . . . the highest functions of abstract thought." Contemporary neuroscience/cognitive science has indeed uncovered an enormous complexity and richness in the nervous system, making it not so different from how psychoanalysts (or most other people) would characterize the self, at least not in terms of complexity, potential, and vagary. Given this complexity and richness, there is substantially less reason than there once was to believe psychotherapists and neuroscientists/cognitive scientists are dealing with two fundamentally different things.
Pally is, I suspect, more aware of this than many psychotherapists because she has been working closely with contemporary neuroscientists who are excited about the complexity to be found in the nervous system. And that has an important lesson, but there is an additional one at least as important in the immediate context. In 1950, two neuroscientists wrote that the sooner we recognize that the complex functional Gestalts which leave the reflex physiologist confounded are in fact supported by even the simplest functions of the nervous system, the sooner the terminological barriers that seemed insurmountable between the lower levels of neurophysiology and higher behavioural theory will simply dissolve away.
And in 1951 another said: "I am coming more to the conviction that the rudiments of every behavioural mechanism will be found far down in the evolutionary scale and represented in primitive activities of the nervous system."
Neuroscience (and what came to be cognitive science) was engaged from very early on in an enterprise committed to the same kind of understanding sought by psychotherapists, but passed through a phase (roughly from the 1950s through the 1980s) when its own observations and stories were less rich in those terms. It was a period that gave rise to the notion that the nervous system was "simple" and "mechanistic," which in turn made neuroscience/cognitive science seem less relevant to those with broader concerns - perhaps even threatening and apparently adversarial if one equated the nervous system with "mind," or "self," or "soul," since mechanics seemed degrading to those ideas. Arguably, though, the period was an essential part of the evolution of the contemporary neuroscience/cognitive science story, one that laid needed groundwork for rediscovery and productive exploration of the richness of the nervous system. Psychoanalysis/psychotherapy has, of course, moved through its own story evolution over the same period. That the two stories seemed remote from one another during this period was never adequate evidence that they were not about the same thing, but only an expression of their needed independent evolutions.
An additional reason that Pally is comfortable with the likelihood that psychotherapists and neuroscientists/cognitive scientists are talking about the same thing is her recognition of isomorphisms (or congruities, Pulver 2003) between the two sets of stories, places where different vocabularies in fact seem to be representing the same (or quite similar) things. I am not sure I am comfortable calling these "shared assumptions" (as Pally does), since they are actually more interesting and probably more significant if they are instead instances of coming to the same ideas from different directions (as I think they are). In this case, the isomorphisms tend to imply, rephrasing Gertrude Stein, that there proves to be a there there. Regardless, Pally has entirely appropriately and, I think, usefully called attention to an important similarity between the psychotherapeutic concept of "transference" and an emerging recognition within neuroscience/cognitive science that the nervous system does not so much collect information about the world as generate a model of it, act in relation to that model, and then check incoming information against the predictions of that model. Pally's suggestion that this model reflects in part early interpersonal experiences, can be largely "unconscious," and so may cause inappropriate and troubling behaviour in current time seems entirely reasonable. So too does her thought that an analyst can help by bringing the model to "consciousness" through the intermediary of recognizing the transference onto the analyst.
The increasing recognition of substantial complexity in the nervous system, together with the presence of identifiable isomorphisms, provides a solid foundation for suspecting that psychotherapists and neuroscientists/cognitive scientists are indeed talking about the same thing. But the significance of different stories for better understanding a single thing lies as much in the differences between the stories as in their similarities/isomorphisms - in the potential for differing and not obviously isomorphic stories to modify one another productively, yielding a new story in the process. With this thought in mind, I want to call attention to some places where the psychotherapeutic and the neuroscientific/cognitive scientific stories have edges that rub against one another rather than smoothly fitting together, and perhaps to ways each could be usefully further evolved in response to those non-isomorphisms.
Unconscious stories and "reality." Though her primary concern is with interpersonal relations, Pally clearly recognizes that transference and related psychotherapeutic phenomena are one (actually relatively small) facet of a much more general phenomenon: the creation, largely unconsciously, of stories that are understood to be, but are not necessarily, accurate accounts of the "real world." Ambiguous figures illustrate the same general phenomenon in a much simpler case, that of visual perception. Such figures may be seen in either of two ways; they represent two "stories," with the choice between them being, at any given time, largely unconscious. More generally, a serious consideration of a wide array of neurobiological/cognitive phenomena clearly implies that, as Pally says, we do not see "reality," but only have stories to describe it that result from processes of which we are not consciously aware.
All of this raises some quite serious philosophical questions about the meaning and usefulness of the concept of "reality." In the present context, what is important is that this set of questions can seem to provide an insurmountable barrier between the stories of neuroscientists/cognitive scientists, who largely think they are dealing with reality, and psychotherapists, who feel more comfortable in more idiosyncratic and fluid spaces. In fact, neuroscience and cognitive science can proceed perfectly well in the absence of a well-defined concept of "reality," and largely do so without being fully conscious of it. And psychotherapists actually make more use of the idea of "reality" than is entirely appropriate. There is, for example, a tendency within the psychotherapeutic community to presume that unconscious stories reflect "traumas" and other historically verifiable events, while the neurobiological/cognitive science story says quite clearly that they may equally reflect predispositions whose origins lie in genetic information and hence bear little or no relation to "reality" in the sense usually meant. They may, in addition, reflect random "play," putting them even further out of reach of easy historical interpretation. In short, with regard to the relation between "story" and "reality," each set of stories could usefully be modified by greater attention to the other. Differing concepts of "reality" (perhaps the very concept itself) get in the way of usefully sharing stories. The neuroscientists'/cognitive scientists' preoccupation with "reality" as an essential touchstone could valuably be lessened, and the therapist's sense of the validation of stories in terms of personal and historical idiosyncrasies could be helpfully adjusted to include a sense of actual material underpinnings.
The Unconscious and the Conscious. Pally appropriately makes a distinction between the unconscious and the conscious, one that has always been fundamental to psychotherapy. Neuroscience/cognitive science has been slower to make a comparable distinction but is now rapidly beginning to catch up. Clearly some neural processes generate behaviour in the absence of awareness and intent, and others yield awareness and intent with or without accompanying behaviour. An interesting question, however, raised at a recent open discussion of the relations between neuroscience and psychoanalysis, is whether the "neurobiological unconscious" is the same thing as the "psychotherapeutic unconscious," and whether the perceived relations between the "unconscious" and the "conscious" are the same in the two sets of stories. Is this a case of an isomorphism or, perhaps more usefully, a masked difference?
An oddity of Pally's article is that she herself acknowledges that the unconscious has mechanisms for monitoring prediction errors, and yet implies, both in the title of the paper and in much of its argument, that there is something special or distinctive about consciousness (or conscious processing) in its ability to correct prediction errors. And here, I think, there is evidence of a potentially useful "rubbing of edges" between the neuroscientific/cognitive scientific tradition and the psychotherapeutic one. The issue is whether one regards consciousness (or conscious processing) as somehow "superior" to the unconscious (or unconscious processing). There is a sense in Pally of an old psychotherapeutic perspective of the conscious as a mechanism for overcoming the deficiencies of the unconscious - of the conscious as the wise father/mother and the unconscious as the willful child. Actually, Pally does not quite go this far, as I will point out in the following, but there is enough of a trend to illustrate the point, and, without more elaboration, I do not think many neuroscientists/cognitive scientists will catch Pally's more insightful lesson. I think Pally is almost certainly correct that the interplay of the conscious and the unconscious can achieve results unachievable by the unconscious alone, but I think also that neither psychotherapy nor neuroscience/cognitive science is yet in a position to say exactly why this is so. So let me take a crack here at a new story that could help with that common problem, and perhaps help both traditions as well.
A major and surprising lesson of comparative neuroscience, supported more recently by neuropsychology (Weiskrantz, 1986) and, more recently still, by artificial intelligence, is that an extraordinarily rich repertoire of adaptive behaviour can occur unconsciously, in the absence of awareness and intent - that is, it can be supported by unconscious neural processes. It is not only modelling the world, prediction, and error correction that can occur this way, but virtually (and perhaps literally) the entire spectrum of externally observed behaviour, including fleeing from threats, approaching good things, generating novel outputs, learning from doing so, and so on.
This extraordinary terrain, discovered by neuroanatomists, electrophysiologists, neurologists, behavioural biologists, and recently extended by others using more modern techniques, is the unconscious of which the neuroscientist/cognitive scientist speaks. It is the area that is so surprisingly rich that it creates, for some people, the puzzle about whether there is anything else at all. Moreover, it seems, at first glance, to be a totally different terrain from that of the psychotherapist, whose clinical experience reveals a territory occupied by drives, unfulfilled needs, and the detritus with which the conscious would prefer not to deal.
As indicated earlier, it is one of the great strengths of Pally's article to suggest that the two terrains may in fact turn out to be the same in many ways. But if they are the same, the question becomes: in what way do the "unconscious" and the "conscious" differ at all? Where now are the "two stories"? Pally touches briefly on this point, suggesting that the two systems differ not so much (or at all?) in what they do, but rather in how they do it. This notion of two systems with different styles seems to me worth emphasizing and expanding. Unconscious processing is faster and handles many more variables simultaneously. Conscious processing is slower and handles far fewer variables at one time. There are likely a host of other differences in style as well - in the handling of number, for example, and of time.