THE OPEN VOICE OF THOUGHT
By: Richard J. Kosciejew
In the first decade of the seventeenth century, the invention of the telescope provided independent evidence to support Copernicus’s views. The Italian physicist and astronomer Galileo Galilei used the new device to unexpected effect. He became the first person to observe moons circling Jupiter, the first to make detailed drawings of the surface of the Moon, and the first to see how Venus waxes and wanes through phases as it circles the Sun.
This telescopic view of Venus helped convince Galileo that Copernicus’s Sun-centred system was not merely a convenient construction of the mind, useful for calculation, but a description of how the world actually is. He fully understood the danger of supporting what the Church regarded as heresy: at that time it was the heretics, the dissenting nonconformists and disbelieving infidels outside orthodox privilege, and not the ordained holders of what was declared true and right, who were made to atone and to answer for the grounds of their beliefs.
Nonetheless, his “Dialogue on the Two Chief World Systems,” comparing the Ptolemaic and Copernican models, was read as taking the Copernican predictions for granted, and so as an insinuation against the Church, which had decreed restrictions on teaching the doctrine. The Copernican system itself was entirely mathematical, in the sense of predicting the observed positions of celestial bodies on the basis of an underlying geometry without exploring the mechanics of celestial motion. Its superiority over the Ptolemaic system was not as clear-cut as popular history suggests: Copernicus’s system adhered to circular planetary motion and still let the planets run on 48 epicycles and eccentrics. It was not until the work of the founder of modern astronomy, Johannes Kepler (1571-1630), and the Italian scientist Galileo Galilei (1564-1642), that the heliocentric system became markedly simpler than the Ptolemaic one.
The “Dialogue,” published in 1632, was worded with worldly-wise care to avoid controversy; even so, Galileo was summoned before the Inquisition and tried under the legislation known in English as “The Witches’ Hammer.” In the following year, under threat of torture, he was forced to recant.
Nicolaus Copernicus (1473-1543), the Polish astronomer, developed the first heliocentric theory of the universe in the modern era; it was presented in “De Revolutionibus Orbium Coelestium,” published in the year of Copernicus’s death. The system is entirely mathematical, in the sense of predicting the observed positions of the celestial bodies on the basis of an underlying geometry, without exploring the mechanics of celestial motion. Its mathematical and scientific superiority over the Ptolemaic system was not as direct as popular history suggests: although Ptolemy’s astronomy was a magnificent mathematical achievement, observationally adequate as late as the sixteenth century, and not markedly more complex than its Copernican rival, its basis was a series of disconnected, ad hoc hypotheses, and hence it has become a symbol for any theory that shares the same disadvantage. Ptolemy (fl. AD 146-170) wrote wide-ranging astronomical treatises that remained influential in Byzantium and the Islamic world. He also wrote extensively on geography, where he was probably the first to use systematic coordinates of latitude and longitude, and his work was not superseded until the sixteenth century. Similarly, in musical theory his treatise on “Harmonics” is a detailed synthesis of Pythagorean mathematics and empirical musical observation.
The Copernican system adhered to circular planetary motion, and let the planets run on 48 epicycles and eccentrics. It was not until the work of Johannes Kepler (1571-1630) that this changed; his laws of planetary motion are the first mathematical, scientific laws of astronomy of the modern era. They state (1) that the planets travel in elliptical orbits, with one focus of the ellipse being the sun; (2) that the radius between sun and planet sweeps out equal areas in equal times; and (3) that the squares of the periods of revolution of any two planets are in the same ratio as the cubes of their mean distances from the sun.
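Kepler’s third law is easy to check against the numbers. The short Python sketch below uses approximate modern values for the planets’ mean distances (in astronomical units) and periods (in years), figures not taken from the text, and confirms that the square of the period divided by the cube of the distance comes out nearly the same for every planet.

```python
# Minimal check of Kepler's third law: T^2 / a^3 is (nearly) constant.
# Semi-major axes in astronomical units, periods in years (approximate modern values).
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

for name, (a, T) in planets.items():
    ratio = T**2 / a**3          # should be ~1 when a is in AU and T in years
    print(f"{name:8s}  T^2/a^3 = {ratio:.3f}")
```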
Progress was made in mathematics, and to a lesser extent in physics, from the time of classical Greek philosophy to the seventeenth century in Europe. In Baghdad, for example, from about A.D. 750 to A.D. 1000, substantial advances were made in medicine and chemistry, and the relics of Greek science were translated into Arabic, digested, and preserved. Eventually these relics reentered Europe via the Arabic kingdoms of Spain and Sicily, and the works of figures like Aristotle and Ptolemy reached the budding universities of France, Italy, and England during the Middle Ages.
For much of this period the Church provided the institutions, such as the teaching orders, needed for the rehabilitation of philosophy. But the social, political, and intellectual climate in Europe was not ripe for a revolution in scientific thought until the seventeenth century; indeed, well into the nineteenth century the work of the new class of intellectuals we call scientists was more avocation than vocation, and the word ‘scientist’ did not appear in English until around 1840.
Copernicus would have been described by his contemporaries as an administrator, a diplomat, an avid student of economics and classical literature and, most notably, a highly honoured and highly placed church dignitary. Although we have named a revolution after him, this devoutly conservative man did not set out to create one. The placement of the sun at the centre of the universe, which seemed right and necessary to Copernicus, was not the result of making careful astronomical observations. In fact, he made very few observations in the course of developing his theory, and then only to ascertain whether his previous conclusions seemed correct. The Copernican system was also not any more useful in making astronomical calculations than the accepted model and was, in some ways, much more difficult to implement. What, then, was his motivation for creating the model and his reasons for presuming that the model was correct?
Copernicus felt that the placement of the sun at the centre of the universe made sense because he viewed the sun as the symbol of the presence of a supremely intelligent God in a man-centred world. He was apparently led to this conclusion in part because the Pythagoreans believed that fire exists at the centre of the cosmos, and Copernicus identified this fire with the fireball of the sun. The only support that Copernicus could offer for the greater efficacy of his model was that it represented a simpler and more mathematically harmonious model of the sort that the Creator would obviously prefer. The language used by Copernicus in “The Revolution of Heavenly Orbs” illustrates the religious dimension of his scientific thought: “In the midst of all the sun reposes, unmoving. Who, indeed, in this most beautiful temple would place the light-giver in any other part than whence it can illumine all other parts?”
The belief that the mind of God as Divine Architect permeates the working of nature was the guiding principle of the scientific thought of Johannes Kepler. For this reason, most modern physicists would probably feel some discomfort in reading Kepler’s original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle with an intensity that might offend those who practice science in the modern sense of that word: “Physical laws,” wrote Kepler, “lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His image in order that we may take part in His own thoughts. Our knowledge of numbers and quantities is the same as that of God’s, at least insofar as we can understand something of it in this mental life.”
Believing, like Newton after him, in the literal truth of the words of the Bible, Kepler concluded that the word of God is also transcribed in the immediacy of observable nature. Kepler’s discovery that the motions of the planets around the sun were elliptical, as opposed to perfect circles, may have made the universe seem, in ordinary terms, a less perfect creation of God. For Kepler, however, the new model placed the sun, which he also viewed as the emblem of a divine agency, more truly at the centre of a mathematically harmonious universe than the Copernican system allowed. Communing with the perfect mind of God requires, as Kepler put it, “knowledge of numbers and quantities.”
Since Galileo did not use, or even refer to, the planetary laws of Kepler when those laws would have made his defence of the heliocentric universe more credible, his attachment to godlike circles probably reflected deeply rooted aesthetic and religious ideals. But it was Galileo, even more than Newton, who was responsible for formulating the scientific idealism that quantum mechanics now forces us to abandon. In “Dialogue Concerning the Two Chief World Systems,” Galileo said the following about the followers of Pythagoras: “I know perfectly well that the Pythagoreans had the highest esteem for the science of number and that Plato himself admired the human intellect and believed that it participates in divinity solely because it is able to understand the nature of numbers. And I myself am inclined to make the same judgement.”
This article of faith - that mathematical and geometrical ideas mirror precisely the essences of physical reality - was the basis for the first scientific revolution. Galileo’s faith is illustrated by the fact that the first mathematical law of this new science, a constant describing the acceleration of bodies in free fall, could not be confirmed by experiment. The experiments conducted by Galileo, in which balls of different sizes and weights were rolled simultaneously down an inclined plane, did not, as he frankly admitted, yield precise results. And since the vacuum pump had not yet been invented, there was simply no way that Galileo could subject his law to rigorous experimental proof in the seventeenth century. Galileo believed in the absolute validity of this law in the absence of experimental proof because he also believed that movement could be subjected absolutely to the law of number. What Galileo asserted, as the French historian of science Alexandre Koyré put it, was “that the real is, in its essence, geometrical and, consequently, subject to rigorous determination and measurement.”
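Galileo’s law of free fall can be stated as the claim that the distance fallen grows with the square of the elapsed time, s = ½gt². A minimal Python sketch, using the modern value g ≈ 9.81 m/s² that Galileo had no means of measuring precisely, illustrates the regularity he asserted: the ratio s/t² stays constant.

```python
# Galileo's law of free fall: distance fallen grows as the square of elapsed time.
# g is the modern value of gravitational acceleration near the earth's surface.
g = 9.81  # m/s^2

for t in range(1, 6):                 # seconds of fall
    s = 0.5 * g * t**2                # distance fallen, in metres
    print(f"t = {t} s   s = {s:7.2f} m   s/t^2 = {s / t**2:.3f}")
# The final column is constant (g/2), which is the content of the law.
```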
The popular image of Isaac Newton is that of a supremely rational and dispassionate empirical thinker. Newton, like Einstein, had the ability to concentrate unswervingly on complex theoretical problems until they yielded a solution. But what most consumed his restless intellect were not the laws of physics. In addition to believing, like Galileo, that the essences of physical reality could be read in the language of mathematics, Newton also believed, with perhaps even greater intensity than Kepler, in the literal truth of the Bible.
Nonetheless, for Newton the mathematical language of physics and the language of biblical literature were equally valid sources of communion with the natural and immediate truths existing in the mind of God. The point is that during the first scientific revolution the marriage between mathematical idea and physical reality, or between mind and nature through mathematical theory, was viewed as a sacred union. In our more secular age, the correspondence takes on the appearance of an unexamined article of faith or, to borrow a phrase from William James, “an altar to an unknown god.” Heinrich Hertz, the famous nineteenth-century German physicist, nicely described what it is about the practice of physics that tends to inculcate this belief: “One cannot escape the feeling that these mathematical formulae have an independent existence and an intelligence of their own, that they are wiser than we, wiser than their discoverers, that we get more out of them than we originally put into them.”
While Hertz made this statement without having to contend with the implications of quantum mechanics, the feeling he described remains the most enticing and exciting aspect of physics. The discovery that elegant mathematical formulae provide a framework for understanding the origins and transformations of a cosmos of enormous age and dimension is a staggering one for budding physicists. Professors of physics do not, of course, tell their students that the study of physical laws is an act of communion with the perfect mind of God or that these laws have an independent existence outside the minds that discover them. The business of becoming a physicist typically begins, however, with the study of classical or Newtonian dynamics, and this training provides considerable covert reinforcement of the feeling that Hertz described.
Thus, in evaluating Copernicus’s legacy, it should be noted that he set the stage for far more daring speculations than he himself could make. The heavy metaphysical underpinning of Kepler’s laws, combined with an obscure style and demanding mathematics, caused most contemporaries to ignore his discoveries. Even his Italian contemporary Galileo Galilei, who corresponded with Kepler and possessed his books, never referred to the three laws. Instead, Galileo provided the two important elements missing from Kepler’s work: a new science of dynamics that could be employed in an explanation of planetary motion, and a staggering new body of astronomical observations. The observations were made possible by the invention of the telescope in Holland c. 1608 and by Galileo’s ability to improve on this instrument without ever having seen the original. Thus equipped, he turned his telescope skyward and saw some spectacular sights.
It was only after the publication in 1632 of Galileo’s famous book supporting the Copernican theory that put the sun and not the earth at the centre of things, “Dialogue on the Two Chief World Systems,” that he was to commit his ideas on infinity to paper. By then he had been brought before the Inquisition, tried and imprisoned. It was the “Dialogue” that caused his precipitous fall from favour. Although Galileo had been careful to have his book passed by the official censors, it still fell foul of the religious authorities, particularly as Galileo had put into the mouth of his dim but traditional character Simplicio an afterword that could be taken to be the viewpoint of the Pope. This gave the impression, whether or not it was intended, that the Vicar of Christ was backward in his thinking.
Whether triggered by this self-evident disrespect, or by the antipathy a man of Galileo’s character would inevitably generate in a bureaucracy, the authorities decided he needed to be taught a lesson. Someone dug back into the records and found that Galileo had been warned off this particular astronomical topic before. When he first mentioned the Copernican theory in writing, back in 1616, it had been decided that putting the sun at the centre of the universe rather than the earth was nothing short of heretical. Galileo had been told that he must not hold or defend such views, and possibly also that he must not teach them. There is no evidence that this third part of the injunction was ever put in place. The distinction matters: Galileo should then have been allowed to teach (and write about) the idea of a sun-centred universe provided he did not try to show that it was actually true. Although there is no record that Galileo went against this instruction, the Inquisition acted as if he had.
Over the generations, alongside the development of science, our picture of the size of the universe has been expanding. In the classical conception developed by the late Greek philosopher Ptolemy, the earth was the centre of a series of spheres, the outermost being the one that carries the stars. This ‘sphere of fixed stars’ (as opposed to the moving planets) began at 5 myriad myriad and 6,946 myriad stades, and a third of a myriad stade. A myriad is 10,000, and each stade is around 180 metres long, so the figure amounts to about 100 million kilometres. It is not clear how thick the sphere was considered to be, but the figure is on the small side when you consider that the nearest star, Alpha Centauri, is actually around 4 light years, roughly 38 million million kilometres, away.
Copernicus not only transformed astronomy by putting the sun at the centre of the solar system; he expanded its scale, putting the sphere of the stars at around 9 billion kilometres. It was not until the nineteenth century that these figures, little more than guesses, were finally put aside, when technology had developed sufficiently for the first reasonably accurate measurements of stellar distances to be made. These made it clear that the stars varied considerably in distance, with one of the first stars measured, Vega, found to be more than six times as far away as Alpha Centauri - a difference in distance of a good 2 × 10¹⁴ kilometres - nothing trivial.
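To put these historical estimates side by side, the following sketch converts Ptolemy’s figure from stades into kilometres and compares it with Copernicus’s estimate and with modern distances. A stade of roughly 180 metres and a Vega distance of about 25 light years are assumed here; they are approximations introduced for illustration, not figures fixed by the text.

```python
# Rough comparison of historical estimates of the distance to the stars
# with modern values (all figures approximate).
STADE_M = 180                          # one stade, in metres (approximate)
MYRIAD = 10_000

# Ptolemy's figure for the start of the sphere of fixed stars, in stades:
# 5 myriad myriad + 6,946 myriad + a third of a myriad.
ptolemy_stades = 5 * MYRIAD * MYRIAD + 6_946 * MYRIAD + MYRIAD / 3
ptolemy_km = ptolemy_stades * STADE_M / 1000
print(f"Ptolemy's sphere of fixed stars : {ptolemy_km:.3e} km")   # ~1e8 km

copernicus_km = 9e9                    # Copernicus's sphere of stars, ~9 billion km
light_year_km = 9.46e12                # one light year in kilometres
alpha_centauri_km = 4 * light_year_km  # ~3.8e13 km
vega_km = 25 * light_year_km           # Vega is roughly 25 light years away

print(f"Copernicus's sphere of stars    : {copernicus_km:.3e} km")
print(f"Alpha Centauri (about 4 ly)     : {alpha_centauri_km:.3e} km")
print(f"Vega (roughly 25 ly)            : {vega_km:.3e} km")
print(f"Vega / Alpha Centauri           : {vega_km / alpha_centauri_km:.1f}x")
```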
The publication of Nicolaus Copernicus’s “De Revolutionibus Orbium Coelestium” (On the Revolution of the Heavenly Spheres) in 1543 is traditionally considered the inauguration of the scientific revolution. Ironically, Copernicus had no intention of introducing radical ideas into cosmology. His aim was only to restore the purity of ancient Greek astronomy by eliminating novelties introduced by Ptolemy. With such an aim in mind he modelled his book, which would turn astronomy upside down, closely on Ptolemy’s “Almagest.” At the core of his theory was a stationary sun at the centre of the universe, with the planets, earth included, revolving around it; the earth was ascribed, in addition to an annual revolution around the sun, a daily rotation about its axis.
Copernicus’s greatest achievement is his legacy. By introducing mathematical reasoning into cosmology, he dealt a severe blow to Aristotelian commonsense physics. His concept of an earth in motion launched the notion of the earth as a planet. His explanation that he had been unable to detect stellar parallax because of the enormous distance of the sphere of the fixed stars opened the way for future speculation about an infinite universe. Nonetheless, Copernicus still clung to many traditional features of Aristotelian cosmology. He continued to advocate the entrenched view of the universe as a closed world and to see the motion of the planets as uniform and circular.
The results of his discoveries were promptly published in the “Sidereus Nuncius” (The Starry Messenger) of 1610. Galileo observed that the moon was very similar to the earth, with mountains, valleys and oceans, and not at all the perfect, smooth spherical body it was claimed to be. He also discovered four moons orbiting Jupiter. As for the Milky Way, instead of being a stream of light, it turned out to be a vast aggregate of stars. Later observations resulted in the discovery of sunspots, the phases of Venus, and the strange phenomenon that would later be recognized as the rings of Saturn.
Having announced these sensational astronomical discoveries - which reinforced his conviction of the reality of the heliocentric theory - Galileo resumed his earlier studies of motion. He now attempted to construct the comprehensive new science of mechanics required in a Copernican world, and the results of his labours were published in Italian in two epoch-making books: “Dialogue Concerning the Two Chief World Systems” (1632) and “Discourses and Mathematical Demonstrations Concerning Two New Sciences” (1638). His studies of projectiles and free-falling bodies brought him very close to the full formulation of the laws of inertia and acceleration (the first two laws of Isaac Newton). Galileo’s legacy includes both the modern notion of ‘laws of nature’ and the idea of mathematics as nature’s true language: he contributed to the mathematization of nature and the geometrization of space, as well as to the mechanical philosophy that would dominate the seventeenth and eighteenth centuries. Perhaps most important, it is largely due to Galileo that experiment and observation serve as the cornerstone of scientific reasoning.
Today, Galileo is remembered equally well because of his conflict with the Roman Catholic Church. His uncompromising advocacy of Copernicanism after 1610 was responsible, in part, for the placement of Copernicus’s “De Revolutionibus” on the Index of Forbidden Books in 1616. At the same time, Galileo was warned not to teach or defend Copernicanism in public. Nonetheless, the election of Galileo’s friend Maffeo Barberini as Pope Urban VIII in 1623 filled Galileo with the hope that such a verdict could be revoked. With, perhaps, some unwarranted optimism, Galileo set to work to complete his “Dialogue” (1632). However, Galileo underestimated the power of the enemies he had made during the previous two decades, particularly some Jesuits who had been the targets of his acerbic tongue. The outcome was that Galileo was summoned to Rome and there forced to abjure, on his knees, the views he had expressed in his book. Ever since, Galileo has been portrayed as a victim of a repressive church and a martyr in the cause of freedom of thought; as such, he has become a powerful symbol.
Despite his passionate advocacy of Copernicanism and his fundamental work in mechanics, Galileo continued to accept the age-old views that planetary orbits were circular and the cosmos an enclosed world. These beliefs, as well as his hesitation to apply mathematics as rigorously to astronomy as he had applied it to terrestrial mechanics, prevented him from arriving at the correct law of inertia. Thus it remained for Isaac Newton to unite heaven and earth in the integrative achievement of his “Philosophiae Naturalis Principia Mathematica” (Mathematical Principles of Natural Philosophy), published in 1687. The first book of the “Principia” contained Newton’s three laws of motion. The first expounds the law of inertia: every body persists in a state of rest or uniform motion in a straight line unless compelled to change that state by an impressed force. The second is the law of acceleration, according to which the change of motion of a body is proportional to the force acting upon it and takes place in the direction of the straight line along which that force is impressed. The third, and most original, assigns to every action an opposite and equal reaction. These laws governing terrestrial motion were extended to include celestial motion in book three of the “Principia,” where Newton formulated his most famous law, the law of gravitation: every body in the universe attracts any other body with a force directly proportional to the product of their masses and inversely proportional to the square of the distance between them.
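The law of gravitation stated above can be evaluated directly. As a minimal illustration, and using the modern value of the gravitational constant G, a quantity Newton himself never measured, the following sketch computes the attraction between the earth and the moon.

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2
G = 6.674e-11          # gravitational constant, N m^2 / kg^2 (modern value)

m_earth = 5.97e24      # mass of the earth, kg
m_moon  = 7.35e22      # mass of the moon, kg
r       = 3.84e8       # mean earth-moon distance, m

F = G * m_earth * m_moon / r**2
print(f"Earth-moon gravitational force ~ {F:.2e} N")   # roughly 2e20 N
```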
The “Principia” is deservedly considered one of the greatest scientific masterpieces of all time. In 1704, Newton published his second great work, the “Opticks,” in which he formulated his corpuscular theory of light and his theory of colours. In later editions Newton appended a series of ‘queries’ concerning various related topics in natural philosophy. These speculative and sometimes metaphysical statements, on such issues as light, heat, ether, and matter, became most productive during the eighteenth century, when the book and its experimental method propagated and became immensely popular.
The seventeenth-century French scientist and mathematician René Descartes was also one of the most determinative thinkers in Western philosophy. Descartes stressed the importance of scepticism in thought and proposed the idea that existence has a dual nature: one physical, the other mental. The latter concept, known as Cartesian dualism, continues to engage philosophers today. This passage from “Discourse on Method” (first published in his Philosophical Essays in 1637) contains a summary of his thesis, which includes the celebrated phrase “I think, therefore I am.”
So, then, examining attentively what I was, and seeing that I could pretend that I had no body and that there was no world nor any place in which I was, but that I could not for all that pretend that I did not exist; and that, on the contrary, from the very fact that I thought of doubting the truth of other things, it followed very evidently and very certainly that I existed; whereas if I had only ceased to think, even though all the rest of what I had ever imagined had been true, I would have had no reason to believe that I existed. From this I concluded that I was a substance whose whole essence or nature consists in thinking, and which, in order to exist, needs no place and depends on no material thing. Thus this ‘I’, that is to say the mind by which I am what I am, is entirely distinct from the body, is easier to know than the body, and would not cease to be all that it is even if the body did not exist.
William Blake’s religious beliefs were never entirely orthodox, but it would not be surprising if his concept of infinity embraced God, or even if he had equated the infinite with God. It is a very natural thing to do. If you believe in a divine creator who is greater than the universe, unbounded by the extent of time, it is hard not to make a connection between this figure and infinity itself.
There have been exceptions, philosophers and theologians who were unwilling to make this linkage. Such was the ancient Greek distaste for infinity that Plato, for example, could only conceive of an ultimate form, the Good, that was finite. Aristotle saw the practical need for infinity, but still felt the chaotic influence of apeiron was too strong, and so came up, as we have seen, with the concept of potential infinity - not a real thing, but a direction toward which the numbers could point. Such ideas largely died out, however, along with ancient Greek intellectual supremacy.
It is hard to attribute the break away from this tradition to one individual, but Plotinus was one of the first of the Greeks to make a specific one-to-one correspondence between God and the infinite. Born in A.D. 204, Plotinus was technically Roman, but was so strongly influenced by the Greek culture of Alexandria (he was born in the Egyptian town of Asyut) that intellectually, at least, he can be considered a Greek philosopher. He incorporated a mystical element (largely derived from Jewish tradition) into the teachings of Plato, sparking off the branch of philosophy since called Neoplatonism - as far as Plotinus was concerned, though, he was a simple interpreter of Plato with no intention of generating a new philosophy.
He argued that his rather loosely conceived god, the One, had to be infinite, as confining it to any measurable number would in some way reduce its oneness, introducing a form of duality. This was presumably because once a finite limit was imposed on God there had to be ‘something else’ beyond the One, and that meant the collapse of unity.
The early Christian scholars followed in a similar tradition. Although they were aware that Greek philosophy had developed outside the Christian framework, they were able to take the core of Greek thought, particularly the works of Aristotle and Plato, and recast its structure in a way that made it compatible with the Christianity of the time.
St. Augustine, one of the first to bring Plato’s philosophy into line with the Christian message, was not limited by Plato’s thinking on infinity. In fact, he was to argue not only that God was infinite, but that God could deal with and contain infinity.
Augustine is one of the first Christian writers after the original authors of the New Testament whose work is still widely read. Born in A.D. 354 in the town of Tagaste (now Souk Ahras in Algeria), Augustine seemed originally to be set on a glittering career as a scholar and orator, first in Carthage, then in Rome and Milan. Although his mother was Christian, he himself dabbled with the dualist Manichean sect, but found its claims to be poorly supported intellectually, and was baptized a Christian in 387. He intended at this point to retire into a monastic state of quiet contemplation, but the Church hierarchy was not going to let a man of his talents go to waste. He was made a priest in 391 and became Bishop of Hippo (now Annaba, or Bona, on the Mediterranean coast) in 395.
Later heavyweight theologians would pull back a little from Augustine’s certainty that God was able to deal with the infinite. While God himself was in some senses equated with infinity, it was doubted that he could really deal with infinite concepts other than Himself, not because he was incapable of managing such a thing, but because such things could not exist. Those who restricted God’s imagination in this way might argue that he similarly could not conceive of a square circle, not because of some divine limitation, but because there simply was no such thing to imagine. A good example is the argument put forward by St. Thomas Aquinas.
Aquinas, born at Roccasecca in Italy in 1225, joined the then newly formed Dominican order in 1243. His prime years of contribution to philosophy and the teachings of the Church were the 1250s and 1260s, when he managed to overcome the apparent conflict between Augustine’s dependence on spiritual interpretation and the newly re-emerging views of Aristotle, flavoured by the intermediary work of the Arab scholar Averroës, which placed much more emphasis on deductions made from the senses.
Aquinas managed to bring together these two, apparently incompatible views by suggesting that, though we can only know of things through the senses, interpretation has to come from the intellect, which is inevitably influenced by the spiritual. When considering the infinite, Aquinas put forward the interesting challenge that although God’s power is unlimited, he still cannot make an absolutely unlimited thing, no more than he can make an unmade thing (for this involves contradictory statements being both true).
Sadly, Aquinas’s argument is not very useful, because it relies on the definition of a ‘thing’ as being inherently limited, echoing Aristotle’s argument that there cannot be an infinite body because a body has to be bounded by a surface, and infinity cannot be totally bounded. Simply saying that ‘a thing cannot be infinite because a thing has to be finite’ is a circular argument that does not take the point any further. He does, however, have another go at showing how creation must be finite, even if God is infinite, and this one has more logical strength.
In his book “Summa Theologiae,” Aquinas argues that nothing created can be infinite, because any set of things, whatever they might be, has to be a specific set of entities, and the way entities are specified is by numbering them off. But there are no infinite numbers, so there can be no infinite real things. This was a point of view that would have a lot going for it right through to the late nineteenth century, when infinite countable sets crashed onto the mathematical scene.
Yet it seems that the challenge of such difficulties stimulated the young moral philosopher and epistemologist Bernard Bolzano (1781-1848), pushing him into original patterns of thought rather than leaving him to follow, sheep-like, the teachings of the university. He was marked out as something special. In 1805, still only 24, he was awarded the chair of philosophy of religion. In the same year he was ordained a priest, and it was with this status, as a Christian philosopher rather than from any position of mathematical authority, that he would produce most of his important texts.
Most, but not all, of these texts are given to the consideration of infinity. Bolzano’s most significant work on the subject, “Paradoxien des Unendlichen,” was written in retirement and only published after his death in 1848. The title translates as “Paradoxes of the Infinite.”
Bolzano looks at two possible approaches to infinity. One is simply the case of setting up a sequence of numbers, such as the whole numbers, and saying that as it cannot conceivably be said to have a last term, it is inherently infinite - not finite. It is easy enough to show that the whole numbers do not have a point at which they stop: give a name to that supposed last number, whatever it might be, and call it ‘ultimate’. Then what is wrong with ultimate + 1? Why is that not also a whole number?
The second approach to infinity, which he ascribes in “Paradoxes of the Infinite” to ‘some philosophers . . . and notably, in our day’, the German philosopher Georg Wilhelm Friedrich Hegel (1770-1831) and his followers, considers the ‘true’ infinity to be found only in God, the absolute. Those taking this approach, Bolzano says, dismiss his first conception of infinity as the ‘bad infinity’.
Although Hegel’s form of infinity is reminiscent of the vague Augustinian infinity of God, Bolzano points out that it leaves, as the only alternative, a substandard infinity that merely reaches toward the absolute but never reaches it. In “Paradoxes of the Infinite” he describes this form of potential infinity as a variable quantity knowing no limit to its growth, always growing into the infinite and never reaching it.
As far as Hegel and his colleagues were concerned, using this approach, there was no need for a real infinity beyond some unreachable absolute. Instead we deal with a variable quantity that is as big as we need it to be (or, often in calculus, as small as we need it to be) without ever reaching the absolute, the ultimate, the truly infinite.
Bolzano argues, though, that there is something else, an infinity that does not have this ‘whatever you need it to be’ elasticity. In fact, a truly infinite quantity (for example, the length of a straight line unbounded in either direction, meaning: the magnitude of the spatial entity containing all the points determined solely by their abstractly conceivable relation to two fixed points) does not by any means need to be variable, and in the adduced example it is, in fact, not at all variable. Conversely, it is quite possible for a quantity merely capable of being taken greater than we have already taken it, and of becoming larger than any preassigned (finite) quantity, nevertheless to remain constantly finite, which holds in particular of every numerical quantity 1, 2, 3, 4, . . .
As the eighteenth century progressed, the optimism of the philosophes waned and a reaction began to set in. Its first manifestation occurred in the religious realm. The mechanistic interpretation of the world - shared by Newton and Descartes - had, in the hands of the philosophes, led to ‘materialism’ and ‘atheism’. Thus, by mid-century the stage was set for a revivalist movement, which took the form of Methodism in England and Pietism in Germany. By the end of the century the romantic reaction had begun. Fuelled in part by religious revivalism, the romantics attacked the extreme rationalism of the Enlightenment, the impersonalization of the mechanistic universe, and the contemptuous attitude of ‘mathematicians’ toward imagination, emotion, and religion.
The romantic reaction, however, was not anti-scientific; its adherents rejected a specific type of mathematical science, not the entire enterprise. In fact, the romantic reaction, particularly in Germany, would give rise to a creative movement - the “Naturphilosophie” - that in turn would be crucial for the development of the biological and life sciences in the nineteenth century, and would nourish the metaphysical foundation necessary for the emergence of the concepts of energy, force, and conservation.
Thus and so, in classical physics, external reality consisted of inert and inanimate matter moving in accordance with wholly deterministic natural laws, and collections of discrete atomized parts constituted wholes. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. Nevertheless, in one very important respect it also made the first scientific revolution possible. Copernicus, Galileo, Kepler and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God and that doing physics was a form of communion with these ideas.
Even though instruction at Cambridge was still dominated by the philosophy of Aristotle, some freedom of study was permitted in the student’s third year. Newton immersed himself in the new mechanical philosophy of Descartes, Gassendi, and Boyle; in the new algebra and analytical geometry of Vieta, Descartes, and Wallis; and in the mechanics and Copernican astronomy of Galileo. At this stage Newton showed no great talent. His scientific genius emerged suddenly when the plague closed the University in the summer of 1665 and he had to return to Lincolnshire. There, within eighteen months, he began revolutionary advances in mathematics, optics, and astronomy.
During the plague years Newton laid the foundations for elementary differential and integral calculus, seven years before its independent discovery by the German philosopher and mathematician Leibniz. The ‘method of fluxions’, as he termed it, was based on his crucial insight that the integration of a function (finding the area under its curve) is merely the inverse procedure to differentiating it (finding the slope of the curve at any point), with differentiation taken as the basic operation. Newton produced simple analytical methods that unified a host of disparate techniques previously developed on a piecemeal basis to deal with such problems as finding areas, tangents, the lengths of curves, and their maxima and minima. Even though Newton could not fully justify his methods - rigorous logical foundations for the calculus were not developed until the nineteenth century - he receives the credit for developing a powerful tool of problem solving and analysis in pure mathematics and physics. Isaac Barrow, a Fellow of Trinity College and Lucasian Professor of Mathematics in the University, was so impressed by Newton’s achievement that when he resigned his chair in 1669 to devote himself to theology, he recommended that the 27-year-old Newton take his place.
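Newton’s insight that integration and differentiation are inverse procedures is easy to illustrate numerically. In the sketch below, written for illustration only with the arbitrary example function f(x) = x², the running integral of f is differentiated and f itself is recovered.

```python
# Numerical illustration of the fundamental theorem of calculus:
# differentiating the running integral of f recovers f.
def f(x):
    return x * x                       # example function, chosen arbitrarily

h = 1e-4                               # step size for the central difference

def integral(a, b, n=10_000):
    """Approximate the area under f from a to b with the midpoint rule."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

def F(x):
    return integral(0.0, x)            # running integral of f from 0 to x

for x in [0.5, 1.0, 2.0]:
    derivative_of_F = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    print(f"x = {x}:  dF/dx ~ {derivative_of_F:.4f},  f(x) = {f(x):.4f}")
```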
Newton’s initial lectures as Lucasian Professor dealt with optics, including his remarkable discoveries made during the plague years. He had reached the revolutionary conclusion that white light is not a simple homogeneous entity, as natural philosophers since Aristotle had believed. When he passed a thin beam of sunlight through a glass prism, he noted the oblong spectrum of colours - red, yellow, green, blue, violet - that formed on the wall opposite. Newton showed that the spectrum was too long to be explained by the accepted theory of the bending (or refraction) of light by dense media. The old theory held that all rays of white light striking the prism at the same angle would be equally refracted. Newton argued that white light is really a mixture of many different types of rays, that the different types of rays are refracted at different angles, and that each different type of ray is responsible for producing a given spectral colour. A so-called crucial experiment confirmed the theory. Newton selected out of the spectrum a narrow band of light of one colour. He sent it through a second prism and observed that no further elongation occurred. All the selected rays of the one colour were refracted at the same angle.
These discoveries led Newton to the logical, but erroneous, conclusion that telescopes using refracting lenses could never overcome the distortions of chromatic dispersion. He therefore proposed and constructed a reflecting telescope, the first of its kind and the prototype of the largest modern optical telescopes. In 1671 Newton donated an improved version of the telescope to the Royal Society of London, the foremost scientific society of the day. As a consequence, he was elected a fellow of the society in 1672. Later that year Newton published his first scientific paper in the Philosophical Transactions of the society; it dealt with the new theory of light and colour and is one of the earliest examples of the short research paper.
Newton’s paper was well received, but two leading natural philosophers, Robert Hooke and Christiaan Huygens, rejected Newton’s naive claim that his theory was simply derived with certainty from experiments. In particular they objected to what they took to be Newton’s attempt to prove by experiment alone that light consists in the motion of small particles, or corpuscles, rather than in the transmission of waves or pulses, as they both believed. Although Newton’s subsequent denial of the use of hypotheses was not convincing, his ideas about scientific method won universal assent, along with his corpuscular theory, which reigned until the wave theory was revived in the early nineteenth century.
The debate soured Newton’s relations with Hooke. Newton withdrew from public scientific discussion for about a decade after 1675, devoting himself to chemical and alchemical researches. He delayed the publication of a full account of his optical researches until after the death of Hooke in 1703. Newton’s “Opticks” appeared the following year. It dealt with the theory of light and colour and with Newton’s investigations of the colours of thin films, of ‘Newton’s rings’, and of the phenomenon of the diffraction of light. To explain some of these observations he had to graft elements of a wave theory of light onto his basic corpuscular theory.
Newton’s greatest achievement was his work in physics and celestial mechanics, which culminated in the theory of universal gravitation. Even though Newton also began this research in the plague years, the story that he discovered universal gravitation in 1666 while watching an apple fall from a tree in his garden is merely a myth. By 1666, Newton had formulated early versions of his three laws of motion. He had also discovered the law stating the centrifugal force (the force away from the centre) on a body moving uniformly in a circular path. However, although he knew the law of centrifugal force, he did not yet have a correct understanding of the mechanics of circular motion. He thought of circular motion as the result of a balance between two forces, one centrifugal and the other centripetal (toward the centre), rather than as the result of one force, a centripetal force, which constantly deflects the body away from its inertial path in a straight line.
Newton’s outstanding insight of 1666 was to imagine that the earth’s gravity extended to the moon, counterbalancing its centrifugal force. From his law of centrifugal force and Kepler’s third law of planetary motion, Newton deduced that the centrifugal (and hence the centripetal) force on the moon or on any planet must decrease as the inverse square of its distance from the centre of its motion. For example, if the distance is doubled, the force becomes one-fourth as much; if the distance is tripled, the force becomes one-ninth as much. This theory agreed with Newton’s data to within about 11 percent.
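The reasoning of this 1666 “moon test” can be reproduced with modern figures, which Newton knew only approximately; the values below are assumptions for illustration, not numbers from the text. The sketch dilutes surface gravity by the inverse square of the earth-moon distance and compares the result with the centripetal acceleration the moon’s orbit actually requires.

```python
# Newton's 1666 "moon test": does the earth's gravity, diluted by the inverse
# square of distance, account for the moon's centripetal acceleration?
import math

g_surface = 9.81                  # m/s^2, gravity at the earth's surface
r_earth   = 6.371e6               # m, earth's radius
r_moon    = 3.84e8                # m, mean earth-moon distance (~60 earth radii)
T_moon    = 27.32 * 24 * 3600     # s, sidereal month

# Gravity at the moon's distance, if it falls off as 1/r^2:
g_at_moon = g_surface * (r_earth / r_moon) ** 2

# Centripetal acceleration needed to keep the moon in its (nearly circular) orbit:
a_centripetal = 4 * math.pi**2 * r_moon / T_moon**2

print(f"inverse-square prediction : {g_at_moon:.5f} m/s^2")
print(f"orbital requirement       : {a_centripetal:.5f} m/s^2")
# The two values agree to within about a percent.
```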
In 1679, Newton returned to his study of celestial mechanics when his adversary Hooke drew him into a discussion of the problem of orbital motion. Hooke is credited with suggesting to Newton that circular motion arises from the centripetal deflection of inertially moving bodies. Hooke further conjectured that since the planets move in ellipses with the sun at one focus (Kepler’s first law), the centripetal force drawing them to the sun should vary as the inverse square of their distances from it. Hooke could not prove this theory mathematically, although he boasted that he could. Not to be shown up by his rival, Newton applied his mathematical talents to proving Hooke’s conjecture. He showed that if a body obeys Kepler’s second law (which states that the line joining a planet to the sun sweeps out equal areas in equal times), then the body is being acted upon by a centripetal force. This discovery revealed for the first time the physical significance of Kepler’s second law. Given this discovery, Newton succeeded in showing that a body moving in an elliptical path and attracted to one focus must indeed be drawn by a force that varies as the inverse square of the distance. Later these results were set aside by Newton.
In 1684 the young astronomer Edmund Halley, tired of Hooke’s fruitless boasting, asked Newton whether he could prove Hooke’s conjecture and was surprised to be told that Newton had solved the problem a full five years before but had now mislaid the paper. At Halley’s constant urging, Newton reproduced the proofs and expanded them into a paper on the laws of motion and problems of orbital mechanics. Finally Halley persuaded Newton to compose a full-length treatment of his new physics and its application to astronomy. After eighteen months of sustained effort, Newton published (1687) the “Philosophiae Naturalis Principia Mathematica” (The Mathematical Principles of Natural Philosophy), or the “Principia,” as it is universally known.
By common consent the “Principia” is the greatest scientific book ever written. Within the framework of an infinite, homogeneous, three-dimensional, empty space and a uniform and eternally flowing ‘absolute’ time, Newton fully analysed the motion of bodies in resisting and non-resisting media under the action of centripetal forces. The results were applied to orbiting bodies, projectiles, pendulums, and bodies free-falling near the earth. He further demonstrated that the planets are attracted toward the sun by a force varying as the inverse square of the distance, and he generalized that all heavenly bodies mutually attract one another. By further generalization, he reached his law of universal gravitation: every piece of matter attracts every other piece with a force proportional to the product of their masses and inversely proportional to the square of the distance between them.
Given the law of gravitation and the laws of motion, Newton could explain a wide range of hitherto disparate phenomena, such as the eccentric orbits of comets, the cause of the tides and their major variations, the precession of the earth’s axis, and the perturbation of the motion of the moon by the gravity of the sun. Newton’s one general law of nature and one system of mechanics reduced to order most of the known problems of astronomy and terrestrial physics. The work of Galileo, Copernicus, and Kepler was united and transformed into one coherent scientific theory. The new Copernican world-picture finally had a firm physical basis.
Because Newton repeatedly used the term ‘attraction’ in the “Principia,” mechanistic philosophers attacked him for reintroducing into science the idea that mere matter could act at a distance upon other matter. Newton replied that he had only intended to show the existence of gravitational attraction and to discover its mathematical law, not to inquire into its cause. He no more than his critics believed that brute matter could act at a distance. Having rejected the Cartesian vortices, he reverted in the early 1700s to the idea that some material medium, or ether, caused gravity. Newton’s ether, however, was no longer a Cartesian ether acting solely by impacts among particles. The ether had to be extremely rare, so that it would not obstruct the motions of the celestial bodies, and yet elastic or springy, so that it could push large masses toward one another. Newton postulated that the ether consisted of particles endowed with very powerful short-range repulsive forces. His unreconciled ideas of forces and ether influenced later natural philosophers in the eighteenth century, when they turned to the phenomena of chemistry, electricity and magnetism, and physiology.
With the publication of the “Principia,” Newton was recognized as the leading natural philosopher of the age, but his creative career was effectively over. After suffering a nervous breakdown in 1693, he retired from research to seek a governmental position in London. In 1696 he became Warden of the Royal Mint and in 1699 its Master, an extremely lucrative position. He oversaw the great English recoinage of the 1690s and pursued counterfeiters with ferocity. In 1703 he was elected president of the Royal Society and was re-elected each year until his death. He was knighted in 1705 by Queen Anne, the first scientist to be so honoured for his work.
As any overt appeal to metaphysics became unfashionable, the science of mechanics was increasingly regarded, says Ivor Leclerc, as ‘an autonomous science’, and any alleged role of God as a ‘deus ex machina’. At the beginning of the nineteenth century, Pierre-Simon Laplace, along with a number of other great French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science, by its own epistemology, had revealed itself to be the fundamental science, the hypothesis of God was, they concluded, unnecessary.
Pierre-Simon Laplace (1749-1827) is recognized for eliminating not only the theological component of classical physics but the ‘entire metaphysical component’ as well. The epistemology of science requires, he held, that we proceed by inductive generalizations from observed facts to hypotheses that are ‘tested by observed conformity of the phenomena’. What was unique about Laplace’s view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in Laplace’s view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts, and the truths about nature are only the quantities themselves.
The seventeenth-century view of physics as a philosophy of nature, or natural philosophy, was displaced by the view of physics as an autonomous science that was ‘the science of nature’. This view, which was premised on the doctrine of positivism, promised to subsume all of nature within a mathematical analysis of entities in motion, and claimed that the true understanding of nature was revealed only in its mathematical description. Since the doctrine of positivism assumed that the knowledge we call physics resides only in the mathematical formalism of physical theory, it disallowed the prospect that the vision of physical reality revealed in physical theory could have any other meaning. The irony, in the history of science, is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.
So, then, the decision was motivated by our conviction that these discoveries have more potential to transform our conception of the ‘way things are’ than any previous discovery in the history of science; their implications extend well beyond the domain of the physical sciences, and the best efforts of large numbers of thoughtful people in fields other than physics will be required to understand them.
In less contentious areas, European scientists made rapid progress on many fronts in the seventeenth century. Galileo himself investigated the laws governing falling objects, and discovered that the duration of a pendulum’s swing is constant for any given length. He explored the possibility of using this to control a clock, an idea that his son put into practice in 1641. Two years later, another Italian mathematician and physicist, Evangelista Torricelli, made the first barometer. In doing so, he discovered atmospheric pressure and produced the first artificial vacuum known to science. In 1650 the German physicist Otto von Guericke invented the air pump. He is best remembered for carrying out a demonstration of the effects of atmospheric pressure. Von Guericke joined two large hollow bronze hemispheres, and then pumped out the air within them to form a vacuum. To illustrate the strength of the vacuum, von Guericke showed how two teams of eight horses pulling in opposite directions could not separate the hemispheres. Yet the hemispheres fell apart as soon as the air was let back in.
Throughout the seventeenth century major advances occurred in the life sciences, including the discovery of the circulatory system by the English physician William Harvey and the discovery of microorganisms by the Dutch microscope maker Antoni van Leeuwenhoek. In England, Robert Boyle established modern chemistry as a full-fledged science, while in France the philosopher and scientist René Descartes made numerous discoveries in mathematics, as well as advancing the case for rationalism in scientific research.
However, the century’s greatest achievements came in 1665, when the English physicist and mathematician Isaac Newton fled from Cambridge to his rural birthplace in Woolsthorpe to escape an epidemic of the plague. There, in the course of a single year, he made a series of extraordinary breakthroughs, including new theories about the nature of light and gravitation and the development of calculus. Newton is perhaps best known for his proof that the force of gravity extends throughout the universe and that all objects attract each other with a precisely defined and predictable force. Gravity holds the moon in its orbit around the earth and is the principal cause of the earth’s tides. These discoveries revolutionized how people viewed the universe, and they marked the birth of modern science.
Newton’s work demonstrated that nature was governed by basic rules that could be identified using the scientific method. This new approach to nature and discovery liberated eighteenth-century scientists from passively accepting the wisdom of ancient writings or religious authorities that had never been tested by experiment. In what became known as the Age of Reason, or the Age of Enlightenment, scientists in the eighteenth century began to apply rational analysis, careful observation, and experiment to the solution of a variety of problems.
Advances in the life sciences saw the gradual erosion of the theory of spontaneous generation, the long-held notion that life could spring from nonliving matter. They also brought the beginnings of scientific classification, pioneered by the Swedish naturalist Carolus Linnaeus, who classified up to 12,000 living plants and animals into a systematic arrangement.
By the year 1700 the first steam engine had been built. Improvements in the telescope enabled German-born British astronomer Sir William Herschel to discover the planet Uranus in 1781. Throughout the eighteenth century science began to play an increasing role in everyday life. New manufacturing processes revolutionized the way that products were made, heralding the Industrial Revolution. In “An Inquiry into the Nature and Causes of the Wealth of Nations,” published in 1776, British economist Adam Smith stressed the advantages of the division of labour and advocated the use of machinery to increase production. He urged governments to allow individuals to compete within a free market in order to produce fair prices and maximum social benefits. Smith’s work for the first time gave economics the stature of an independent subject of study, and his theories greatly influenced the course of economic thought for more than a century.
With knowledge in all branches of science accumulating rapidly, scientists began to specialize in particular fields. Specialization did not necessarily mean that discoveries became narrower as well: from the nineteenth century onward, research began to uncover principles that unite the universe as a whole.
In chemistry, one of these discoveries was a conceptual one: that all matter is made of atoms. Originally debated in ancient Greece, atomic theory was revived in a modern form by the English chemist John Dalton in 1803. Dalton provided clear and convincing chemical proof that such particles exist. He discovered that each atom has a characteristic mass and that atoms remain unchanged when they combine with other atoms to form compound substances. Dalton used atomic theory to explain why substances always combine in fixed proportions - a field of study known as quantitative chemistry. In 1869 Russian chemist Dmitry Mendeleyev used Dalton’s discoveries about atoms and their behaviour to draw up his periodic table of the elements.
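A small worked illustration of fixed proportions may be helpful here; the figures are modern values added for clarity, not taken from the text. In water, hydrogen and oxygen always combine in a mass ratio of roughly 1 to 8, because each water molecule contains two hydrogen atoms (2 × 1.0 mass units) and one oxygen atom (16.0 mass units):
2 × 1.0 : 16.0 ≈ 1 : 8.
However much water is decomposed or synthesized, that ratio never changes, which is precisely what Dalton’s atomic picture explains.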
Other nineteenth-century discoveries in chemistry included the world’s first synthetic fertilizer, manufactured in England in 1842. In 1846 German chemist Christian Schoenbein accidentally developed the powerful and unstable explosive nitrocellulose. The discovery occurred after he had spilled a mixture of nitric and sulfuric acids and then mopped it up with a cotton apron. After the apron had been hung up to dry, it exploded. He later learned that the cellulose in the cotton apron had combined with the acids to form a highly flammable explosive.
In 1828 the German chemist Friedrich Wöhler showed that it was possible to make carbon-containing organic compounds from inorganic ingredients, a breakthrough that opened an entirely new field of research. By the end of the nineteenth century, hundreds of organic compounds had been synthesized, including mauve, magenta, and other synthetic dyes, as well as aspirin, still one of the world’s most useful drugs.
In physics, the nineteenth century is remembered chiefly for research into electricity and magnetism, pioneered by physicists such as Michael Faraday and James Clerk Maxwell of Great Britain. In 1821 Faraday demonstrated that a magnet could set a current-carrying conductor into continuous motion. This experiment, and others that he carried out, led to the development of electric motors and generators. While Faraday’s genius lay in discovery by experiment, Maxwell produced theoretical breakthroughs of even greater note. Maxwell’s development of the electromagnetic theory of light took many years. It began with the paper “On Faraday’s Lines of Force” (1855-1856), in which Maxwell built on the ideas of British physicist Michael Faraday. Faraday had explained electric and magnetic effects as resulting from lines of force that surround conductors and magnets. Maxwell drew an analogy between the behaviour of the lines of force and the flow of a liquid, deriving equations that represented electric and magnetic effects. The next step toward Maxwell’s electromagnetic theory was the publication of the paper “On Physical Lines of Force” (1861-1862). Here Maxwell developed a model for the medium that could carry electric and magnetic effects. He devised a hypothetical medium that consisted of a fluid in which magnetic effects created whirlpool-like structures. These whirlpools were separated by cells created by electric effects, so the combination of magnetic and electric effects formed a honeycomb pattern.
Maxwell could explain all known effects of electromagnetism by considering how the motion of the whirlpools, or vortices, and cells could produce magnetic and electric effects. He showed that the lines of force behave like the structures in the hypothetical fluid. Maxwell went further, considering what would happen if the fluid could change density, or be elastic. The movement of a charge would then set up a disturbance that spread through the medium as a wave. The speed of these waves would be equal to the ratio of the value of a current measured in electrostatic units to the value of the same current measured in electromagnetic units. German physicists Friedrich Kohlrausch and Wilhelm Weber had calculated this ratio and found it to be the same as the speed of light. Maxwell inferred that light consists of waves in the same medium that causes electric and magnetic phenomena.
Maxwell found gratifying supporting evidence for this inference in work he did on defining basic electrical and magnetic quantities in terms of mass, length, and time. In the paper “On the Elementary Relations of Electric Quantities” (1863), he wrote that the ratio of the two definitions of any quantity based on electric and magnetic forces is always equal to the velocity of light. He considered that light must consist of electromagnetic waves, but he first needed to prove this by abandoning the vortex analogy and developing a mathematical system. He achieved this in “A Dynamical Theory of the Electromagnetic Field” (1864), in which he developed the fundamental equations that describe the electromagnetic field. These equations showed that light is propagated as two waves, one magnetic and the other electric, which vibrate perpendicular to each other and perpendicular to the direction in which they are moving (like a wave travelling along a string). Maxwell first published this solution in “Note on the Electromagnetic Theory of Light” (1868), restating it without reference to the earlier mechanical model: electromagnetic radiation is radiation in which associated electric and magnetic field oscillations are propagated through space. The electric and magnetic fields are at right angles to each other and to the direction of propagation. In free space the phase speed of waves of all frequencies has the same value (c = 2.997 924 58 × 10⁸ m s⁻¹). The range of frequencies over which electromagnetic radiation has been studied is called the ‘electromagnetic spectrum’, and the methods of generating radiation, and its interactions with matter, depend upon frequency. It can also be shown that the rate at which energy is radiated by an accelerating charge is proportional to the square of the acceleration. These points and others are set out, with proofs, in the treatise that gathered together all of his work on electricity and magnetism, “A Treatise on Electricity and Magnetism” (1873).
The treatise also suggested that a whole family of electromagnetic radiation must exist, of which visible light was only one part. In 1888 German physicist Heinrich Hertz made the sensational discovery of radio waves, a form of electromagnetic radiation with wavelengths too long for our eyes to see, confirming Maxwell’s ideas. Unfortunately, Maxwell did not live long enough to see this vindication of his work. He also did not live to see the ether (the medium in which light waves were said to be propagated) disproved by the classic experiments of German-born American physicist Albert Michelson and American chemist Edward Morley in 1881 and 1887. Maxwell had suggested an experiment much like the Michelson-Morley experiment in the last year of his life. Although Maxwell believed the ether existed, his equations were not dependent on its existence, and so they remained valid.
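A brief worked illustration, using modern numbers added here rather than drawn from the text, shows how wavelength and frequency span this spectrum through the relation c = λν. Green light of wavelength 5.0 × 10⁻⁷ m has a frequency of
ν = c/λ ≈ (3.0 × 10⁸ m s⁻¹)/(5.0 × 10⁻⁷ m) = 6.0 × 10¹⁴ Hz,
whereas a radio wave one metre long oscillates at only about 3.0 × 10⁸ Hz, well outside the narrow band of frequencies to which the eye responds.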
Maxwell’s other major contribution to physics was to provide a mathematical basis for the kinetic theory of gases, which explains that gases behave as they do because they are composed of particles in constant motion. Maxwell built on the achievements of German physicist Rudolf Clausius, who in 1857 and 1858 had shown that a gas must consist of molecules in constant motion colliding with each other and with the walls of their container. Clausius developed the idea of a mean free path, which is the average distance that a molecule travels between collisions.
Maxwell’s development of the kinetic theory of gases was stimulated by his success with the similar problem of Saturn’s rings. It dates from 1860, when he used a statistical treatment to express the wide range of velocities (speeds and the directions of the speeds) that the molecules in a quantity of gas must inevitably possess. He arrived at a formula to express the distribution of velocities among gas molecules, relating it to temperature. He showed that the temperature of a gas is a measure of the motion of its molecules, so the molecules in a gas will speed up as the gas’s temperature increases. Maxwell then applied his theory with some success to viscosity (how much a gas resists movement), diffusion (how gas molecules move from an area of higher concentration to an area of lower concentration), and other properties of gases that depend on the nature of the molecules’ motion.
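One consequence of Maxwell’s distribution can be sketched as a minimal worked example, using modern values that are not part of the original text: the most probable molecular speed is vp = √(2kT/m), or equivalently √(2RT/M) per mole. For nitrogen (molar mass M = 0.028 kg mol⁻¹) at T = 300 K,
vp = √(2 × 8.31 J mol⁻¹ K⁻¹ × 300 K / 0.028 kg mol⁻¹) ≈ 4.2 × 10² m s⁻¹,
so a typical air molecule at room temperature moves at several hundred metres per second, and the whole distribution shifts toward higher speeds as the temperature rises.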
Maxwell’s kinetic theory did not fully explain heat conduction (how heat travels through a gas). Austrian physicist Ludwig Boltzmann modified Maxwell’s theory in 1868, resulting in the Maxwell-Boltzmann distribution law, which gives the number of particles (n) having an energy (E) in a system of particles in thermal equilibrium. It has the form:
n = n0 exp(-E/kT),
where n0 is the number of particles having the lowest energy, k the Boltzmann constant, and T the thermodynamic temperature.
If the particles can only have certain fixed energies, such as the energy levels of atoms, the formula gives the number (n1) with an energy (E1) above the ground-state energy. In certain cases several distinct states may have the same energy, and the formula then becomes:
n1 = g1n0 exp(-E1/kT),
where g1 is the statistical weight of the level of energy E1, i.e., the number of states having energy E1. The distribution of energies obtained from the formula is called a Boltzmann distribution.
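As a quick worked example of what the formula implies (the numbers are an added illustration, not from the original text), consider two levels separated by E1 = 0.10 eV in a system at room temperature, where kT ≈ 0.026 eV:
n1/n0 = exp(-0.10/0.026) ≈ exp(-3.9) ≈ 0.02,
so only about two per cent as many particles occupy the upper level as the lower one, and raising the temperature increases that fraction.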
Both Maxwell’s and Boltzmann’s formulations contributed to successive refinements of kinetic theory, which proved applicable to molecules of all sizes and to a method of separating gases in a centrifuge. Because the kinetic theory was derived using statistics, it also revised opinions on the validity of the second law of thermodynamics, which states that heat cannot flow from a colder to a hotter body of its own accord. In the case of two connected containers of gases at the same temperature, it is statistically possible for the molecules to diffuse so that the faster-moving molecules all concentrate in one container while the slower molecules gather in the other; this hypothesis is known as Maxwell’s demon. Although such an event is very unlikely, it is possible, and the second law is therefore not absolute, but only highly probable.
Maxwell is generally considered the greatest theoretical physicist of the 1800s. He combined a rigorous mathematical ability with great insight, which enabled him to make brilliant advances in the two most important areas of physics at that time. In building on Faraday’s work to discover the electromagnetic nature of light, Maxwell not only explained electromagnetism but also paved the way for the discovery and application of the whole spectrum of electromagnetic radiation that has characterized modern physics. Physicists now know that this spectrum also includes radio, ultraviolet, and X-ray waves, to name a few. In developing the kinetic theory of gases, Maxwell gave the final proof that the nature of heat resides in the motion of molecules.
Maxwell’s famous equations, devised in 1864, describe mathematically the interaction between electric and magnetic fields. His work demonstrated the principles behind electromagnetic waves, created when electric and magnetic fields oscillate simultaneously. Maxwell realized that light was a form of electromagnetic energy, but he also thought that the complete electromagnetic spectrum must include many other forms of waves as well.
With the discovery of radio waves by German physicist Heinrich Hertz in 1888 and of X-rays by German physicist Wilhelm Roentgen in 1895, Maxwell’s ideas were proved correct. In 1897 British physicist Sir Joseph J. Thomson discovered the electron, a subatomic particle with a negative charge. This discovery countered the long-held notion that atoms were the basic units of matter.
As in chemistry, these nineteenth-century discoveries in physics proved to have immense practical value. No one was more adept at harnessing them than American physicist and prolific inventor Thomas Edison. Working from his laboratories in Menlo Park, New Jersey, Edison devised the carbon-granule microphone in 1877, which greatly improved the recently invented telephone. He also invented the phonograph, the electric light bulb, several kinds of batteries, and the electric meter. Edison was granted more than 1,000 patents for electrical devices, a phenomenal feat for a man who had no formal schooling.
In the earth sciences, the nineteenth century was a time of controversy, with scientists debating the earth’s age. Estimates ranged from less than 100,000 years to several hundred million years. In astronomy, greatly improved optical instruments enabled important discoveries to be made. The first observation of an asteroid, Ceres, took place in 1801. Astronomers had long noticed that Uranus exhibited an unusual orbit. French astronomer Urbain Jean Joseph Leverrier predicted that another planet nearby caused Uranus’s odd orbit. Using mathematical calculations, he narrowed down where such a planet would be located in the sky, and in 1846, with the help of German astronomer Johann Galle, Leverrier discovered Neptune. The Irish astronomer William Parsons, the third Earl of Rosse, became the first person to see the spiral form of galaxies beyond our own. He did this with the Leviathan, a 183-cm (72-in) reflecting telescope built on the grounds of his estate in Parsonstown (now Birr), Ireland, in the 1840s. His observations were hampered by Ireland’s damp and cloudy climate, but his gigantic telescope remained the world’s largest for more than 70 years.
In the nineteenth century the study of microorganisms became increasingly important, particularly after French biologist Louis Pasteur revolutionized medicine by correctly deducing that some microorganisms are involved in disease. In the 1880s Pasteur devised methods of immunizing people against diseases by deliberately treating them with weakened forms of the disease-causing agents; his vaccine against rabies was a milestone in the field of immunization, one of the most effective forms of preventive medicine the world has yet seen. In the area of industrial science, Pasteur invented the process of pasteurization to help prevent the spread of disease through milk and other foods.
Pasteur’s work on fermentation and spontaneous generation had considerable implications for medicine, because he believed that the origin and development of disease are analogous to the origin and process of fermentation. That is, disease arises from germs attacking the body from outside, just as unwanted microorganisms invade milk and cause fermentation. This concept, called the germ theory of disease, was strongly debated by physicians and scientists around the world. One of the main arguments against it was the contention that the role germs played during the course of disease was secondary and unimportant: the notion that tiny organisms could kill vastly larger ones seemed ridiculous to many people. Pasteur’s studies convinced him that he was right, however, and in the course of his career he extended the germ theory to explain the causes of many diseases.
Pasteur also determined the natural history of anthrax, a fatal disease of cattle. He proved that anthrax is caused by a particular bacillus and suggested that animals could be given anthrax in a mild form by vaccinating them with attenuated (weakened) bacilli, thus providing immunity from potentially fatal attacks. In order to prove his theory, Pasteur began by vaccinating twenty-five sheep; later he inoculated these and twenty-five more sheep with an especially strong culture, leaving ten sheep untreated. He predicted that the second twenty-five sheep would all perish, and he concluded the experiment dramatically by showing, to a sceptical crowd, the carcasses of those twenty-five sheep lying side by side.
Pasteur spent the rest of his life working on the causes of various diseases, including septicaemia, cholera, diphtheria, fowl cholera, tuberculosis, and smallpox, and on their prevention by means of vaccination. He is best known for his investigations concerning the prevention of rabies, otherwise known in humans as hydrophobia. After experimenting with the saliva of animals suffering from this disease, Pasteur concluded that the disease rests in the nerve centres of the body: when an extract from the spinal column of a rabid dog was injected into the bodies of healthy animals, symptoms of rabies were produced. By studying the tissues of infected animals, particularly rabbits, Pasteur was able to develop an attenuated form of the virus that could be used for inoculation.
In 1885 a young boy and his mother arrived at Pasteur’s laboratory; the boy had been bitten badly by a rabid dog, and Pasteur was urged to treat him with his new method. At the end of the treatment, which lasted ten days, the boy was inoculated with the most potent rabies virus known; he recovered and remained healthy. Since that time, thousands of people have been saved from rabies by this treatment.
Pasteur’s research on rabies resulted, in 1888, in the founding of a special institute in Paris for the treatment of the disease. This became known as the Pasteur Institute, and it was directed by Pasteur himself until he died. (The institute still flourishes and is one of the most important centres in the world for the study of infectious diseases and other subjects related to microorganisms, including molecular genetics.) By the time of his death in Saint-Cloud on September 28, 1895, Pasteur had long since become a national hero and had been honoured in many ways. He was given a state funeral at the Cathedral of Notre-Dame, and his body was placed in a permanent crypt in his institute.
Also during the nineteenth century, the Austrian monk Gregor Mendel laid the foundations of genetics, although his work, published in 1866, was not recognized until after the century had closed. Nevertheless, the British scientist Charles Darwin towers above all other scientists of the nineteenth century. His publication of “On the Origin of Species” in 1859 marked a major turning point for both biology and human thought. His theory of evolution by natural selection (independently and simultaneously developed by British naturalist Alfred Russel Wallace) initiated a violent controversy that has not yet subsided. Particularly controversial was Darwin’s theory that humans resulted from a long process of biological evolution from apelike ancestors. The greatest opposition to Darwin’s ideas came from those who believed that the Bible was an exact and literal statement of the origin of the world and of humans. Although the public initially castigated Darwin’s ideas, by the late 1800s most biologists had accepted that evolution occurred, although not all agreed on the mechanism, known as natural selection.
In the twentieth-century, scientists achieved spectacular advances in the fields of genetics, medicine, social sciences, technology, and physics.
At the beginning of the twentieth century, the life sciences entered a period of rapid progress. Mendel’s work in genetics was rediscovered in 1900, and by 1910 biologists had become convinced that genes are located in chromosomes, the threadlike structures that contain proteins and deoxyribonucleic acid (DNA). During the 1940s American biochemists discovered that DNA taken from one kind of bacterium could influence the characteristics of another. These experiments made it clear that DNA is the chemical that makes up genes and is thus the key to heredity.
After American biochemist James Watson and British biophysicist Francis Crick established the structure of DNA in 1953, geneticists became able to understand heredity in chemical terms. Since then, progress in this field has had astounding results. Scientists have identified the complete genome, or genetic catalogue, of the human body. In many cases, scientists now know how individual genes become activated and what effects they have in the human body. Genes can now be transferred from one species to another, sidestepping the normal processes of heredity and creating hybrid organisms that are unknown in the natural world.
At the turn of the twentieth century, Dutch physician Christiaan Eijkman showed that disease can be caused not only by microorganisms but by a dietary deficiency of certain substances now called vitamins. In 1909 German bacteriologist Paul Ehrlich introduced the world’s first bactericide, a chemical designed to kill specific kinds of bacteria without killing the patient’s cells as well. Following the discovery of penicillin in 1928 by British bacteriologist Sir Alexander Fleming, antibiotics joined medicine’s chemical armoury, making the fight against bacterial infection almost a routine matter. Antibiotics cannot act against viruses, but vaccines have been used to great effect to prevent some of the deadliest viral diseases. Smallpox, once a worldwide killer, was completely eradicated by the late 1970s, and in the United States the number of polio cases dropped from 38,000 in the 1950s to fewer than ten a year by the end of the twentieth century. By the middle of the twentieth century, scientists believed they were well on the way to treating, preventing, or eradicating many of the most deadly infectious diseases that had plagued humankind for centuries. Nonetheless, by the 1980s the medical community’s confidence in its ability to control infectious diseases had been shaken by the emergence of new types of disease-causing microorganisms. New cases of tuberculosis developed, caused by bacterial strains that were resistant to antibiotics. New, deadly infections for which there was no known cure also appeared, including the viruses that cause haemorrhagic fever and the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome.
In other fields of medicine, the diagnosis of disease has been revolutionized by the use of new imaging techniques, including magnetic resonance imaging and computed tomography. Scientists were also on the verge of success in curing some diseases using gene therapy, in which the insertion of normal or genetically altered genes into a patient’s cells replaces nonfunctional or missing genes.
Improved drugs and new tools have made routine many surgical operations that were once considered impossible. For instance, drugs that suppress the immune system enable the transplant of organs or tissues with a reduced risk of rejection, and endoscopy permits the diagnosis and surgical treatment of a wide variety of ailments using minimally invasive surgery. Advances in high-speed fibre-optic connections permit surgery on a patient using robotic instruments controlled by surgeons at another location. Known as ‘telemedicine’, this form of medicine makes it possible for skilled physicians to treat patients in remote locations or places that lack medical help.
In the twentieth-century the social sciences emerged from relative obscurity to become prominent fields of research. Austrian physician Sigmund Freud founded the practice of psychoanalysis, creating a revolution in psychology that led him to be called the ‘Copernicus of the mind’. In 1948 the American biologist Alfred Kinsey published “Sexual Behaviour in the Human Male,” which proved to be one of the best-selling scientific works of all time. Although criticized for his methodology and conclusions, Kinsey succeeded in making human sexuality an acceptable subject for scientific research.
The twentieth century also brought dramatic discoveries in the field of anthropology, with new fossil finds helping to piece together the story of human evolution. A completely new and surprising source of anthropological information became available from studies of the DNA in mitochondria, cell structures that provide energy to fuel the cell’s activities. Mitochondrial DNA has been used to track certain genetic diseases and to trace the ancestry of a variety of organisms, including humans.
In the field of communications, Italian electrical engineer Guglielmo Marconi sent his first radio signal across the Atlantic Ocean in 1901. American inventor Lee De Forest invented the triode, or vacuum tube, in 1906. The triode eventually became a key component in nearly all early radio, television, and computer systems. In the 1920s, Scottish engineer John Logie Baird achieved the first transmission of a recognizable moving image. In the 1920s and 1930s American electronic engineer Vladimir Kosma Zworykin significantly improved television’s picture and reception. In 1935 British physicist Sir Robert Watson-Watt used reflected radio waves to locate aircraft in flight. Radar signals have since been reflected from the moon, planets, and stars to learn their distances and to track their movements.
In 1947 American physicists John Bardeen, Walter Brattain, and William Shockley invented the transistor, an electronic device used to control or amplify an electrical current. Transistors are much smaller and far less expensive than triodes, require less power to operate, and are considerably more reliable. Since their first commercial use in hearing aids in 1952, transistors have replaced triodes in virtually all applications.
During the 1950s and early 1960s minicomputers were developed using transistors rather than triodes. Earlier computers, such as the electronic numerical integrator and computer (ENIAC), first introduced in 1946 by American electrical engineer John Presper Eckert Jr., used as many as 18,000 triodes and filled a large room. The transistor, however, initiated a trend toward microminiaturization, in which individual electronic circuits can be reduced to microscopic size. This drastically reduced computers’ size, cost, and power requirements and eventually enabled the development of electronic circuits with processing speeds measured in billionths of a second.
Further miniaturization led in 1971 to the first microprocessor - a computer on a chip. When combined with other specialized chips, the microprocessor becomes the central arithmetic and logic unit of a computer smaller than a portable typewriter. With their small size and a price less than that of a used car, today’s personal computers are many times more powerful than the physically huge, multimillion-dollar computers of the 1950s. Once used only by large businesses, computers are now used by professionals, small retailers, and students to complete a wide variety of everyday tasks, such as keeping data on clients, tracking budgets, and writing school reports. People also use computers to communicate with each other over worldwide networks, such as the Internet and the World Wide Web, to send and receive e-mail, to shop, or to find information on just about any subject.
During the early 1950s public interest in space exploration developed. The focal event that opened the space age was the International Geophysical Year, from July 1957 to December 1958, during which hundreds of scientists around the world coordinated their efforts to measure the earth’s near-space environment. As part of this study, both the United States and the Soviet Union announced that they would launch artificial satellites into orbit for nonmilitary space activities.
When the Soviet Union launched the first Sputnik satellite in 1957, the feat spurred the United States to intensify its own space exploration efforts. In 1958 the National Aeronautics and Space Administration (NASA) was founded for the purpose of developing human spaceflight. Throughout the 1960s NASA experienced its greatest growth; among its achievements, NASA designed, manufactured, tested, and eventually used the Saturn rocket and the Apollo spacecraft for the first manned landing on the Moon in 1969. In the 1960s and 1970s, NASA also developed the first robotic space probes to explore the planets Mercury, Venus, and Mars. The success of the Mariner probes paved the way for the unmanned exploration of the outer planets in earth’s solar system.
From the 1970s through the 1990s, NASA focussed its space exploration efforts on a reusable space shuttle, which was first deployed in 1981. In 1998 the space shuttle, along with its Russian counterpart known as Soyuz, became the workhorse that enabled the construction of the International Space Station.
In 1900 the German physicist Max Planck proposed the then sensational idea that energy is not infinitely divisible but is always given off in small, discrete amounts, or quanta. Five years later, German-born American physicist Albert Einstein successfully used quanta to explain the photoelectric effect, which is the release of electrons when metals are bombarded by light. This, together with Einstein’s special and general theories of relativity, challenged some of the most fundamental assumptions of the Newtonian era.
Unlike the laws of classical physics, quantum theory deals with events that occur on the smallest of scales. Quantum theory explains how subatomic particles form atoms, and how atoms interact when they combine to form chemical compounds. Quantum theory deals with a world where the attributes of any single particle can never be completely known - an idea known as the uncertainty principle, put forward by the German physicist Werner Heisenberg in 1927. The principle states that the product of the uncertainty in a measured value of a component of momentum (px) and the uncertainty in the corresponding co-ordinate (x) is of the same order of magnitude as the Planck constant:
Δpx × Δx ≥ h/4π,
where Δx represents the root-mean-square value of the uncertainty. For most purposes one can assume the following:
Δpx × Δx = h/2π.
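A rough order-of-magnitude estimate, added here as an illustration with modern constants rather than taken from the original text, shows the scale at which this matters. For an electron confined to an atom-sized region, Δx ≈ 1 × 10⁻¹⁰ m, the momentum uncertainty is at least
Δpx ≥ h/(4πΔx) ≈ (6.63 × 10⁻³⁴ J s)/(4π × 10⁻¹⁰ m) ≈ 5 × 10⁻²⁵ kg m s⁻¹,
which for an electron of mass 9.11 × 10⁻³¹ kg corresponds to a velocity uncertainty of roughly 6 × 10⁵ m s⁻¹ - far from negligible on the atomic scale.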
The principle can be derived exactly from quantum mechanics, a physical theory that grew out of Planck’s quantum theory and deals with the mechanics of atomic and related systems in terms of quantities that can be measured. Quantum mechanics has several mathematical forms, including ‘wave mechanics’ (Schrödinger) and ‘matrix mechanics’ (Born and Heisenberg), all of which are equivalent.
Nonetheless, it is most easily understood as a consequence of the fact that any measurement of a system disturbs the system under investigation, with a resulting lack of precision in measurement. For example, if it were possible to see an electron and thus measure its position, photons would have to be reflected from the electron. If a single photon could be used and detected with a microscope, the collision between the electron and the photon would change the electron’s momentum (the Compton effect), and as a result the wavelength of the photon would be increased by an amount Δλ, whereby:
Δλ = (2h/m0c) sin² ½φ.
This is the Compton equation, in which h is the Planck constant, m0 the rest mass of the particle, c the speed of light, and φ the angle between the directions of the incident and scattered photons. The quantity h/m0c is known as the Compton wavelength, symbol λc, which for an electron is equal to 0.002 43 nm.
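As a short worked case, added here for illustration and not drawn from the text: for a photon scattered through φ = 90° by an electron, sin² 45° = ½, so
Δλ = (2h/m0c) × ½ = h/m0c = λc ≈ 0.002 43 nm,
i.e., the photon’s wavelength grows by one electron Compton wavelength, a shift easily detectable for X-rays but utterly negligible for visible light.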
A similar relationship applies to the determination of energy and time, thus:
ΔE × Δt ≥ h/4π.
The effects of the uncertainty principle are not apparent with large systems because of the small size of h; however, the principle is of fundamental importance in the behaviour of systems on the atomic scale. For example, the principle explains the inherent width of spectral lines: if the lifetime of an atom in an excited state is very short, there is a large uncertainty in its energy, and the line resulting from a transition is broad.
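A rough worked figure makes the point; the numbers are an added illustration, not part of the original text. For an excited atomic state with a lifetime Δt ≈ 10⁻⁸ s, the energy-time relation gives
ΔE ≥ h/(4πΔt) ≈ 5 × 10⁻²⁷ J ≈ 3 × 10⁻⁸ eV,
which corresponds to a natural linewidth in frequency of about ΔE/h ≈ 8 MHz - tiny beside an optical transition frequency of around 10¹⁵ Hz, but readily resolved by precision spectroscopy.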
Thus, there is uncertainty on the subatomic level. Quantum physics nevertheless successfully predicts the overall outcome of subatomic events, a fact that firmly relates it to the macroscopic world, that is, the one in which we live.
In 1934 Italian-born American physicist Enrico Fermi began a series of experiments in which he used neutrons (subatomic particles without an electric charge) to bombard atoms of various elements, including uranium. The neutrons combined with the nuclei of the uranium atoms to produce what he thought were elements heavier than uranium, known as transuranium elements. In 1939 other scientists demonstrated that in these experiments Fermi had not formed heavier elements, but instead had achieved the splitting, or fission, of the uranium atom’s nucleus. These early experiments led to the development of fission both as an energy source and as a weapon.
These fission studies, coupled with the development of particle accelerators in the 1950s, initiated a long and remarkable journey into the nature of subatomic particles that continues today. Far from atoms being indivisible, scientists now know that they are made up of twelve fundamental particles known as quarks and leptons, which combine in different ways to make all the kinds of matter currently known.
Advances in particle physics have been closely linked to progress in cosmology. From the 1920s onward, when the American astronomer Edwin Hubble showed that the universe is expanding, cosmologists have sought to rewind the clock and establish how the universe began. Today, most scientists believe that the universe started with a cosmic explosion some time between ten and twenty billion years ago. However, the exact sequence of events surrounding its birth, and its ultimate fate, are still matters of ongoing debate.
Apart from their assimilation into the paradigms of science, Descartes posited the existence of two categorically different domains of existence - the res extensa and the res cogitans, or the ‘extended substance’ and the ‘thinking substance’. Descartes defined the extended substance as the realm of physical reality, within which primary mathematical and geometrical forms reside, and the thinking substance as the realm of human subjective reality. Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in the mind or in human subjectivity was accurate, much less the absolute truth? He did so by making a leap of faith: God constructed the world, said Descartes, in accordance with the mathematical ideas that our minds are capable of uncovering in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally ‘revealed’ truths, and it was this seventeenth-century metaphysical presupposition that became, in the history of science, what we term the ‘hidden ontology of classical epistemology’.
While classical epistemology would serve the progress of science very well, it also presents us with a terrible dilemma about the relationship between ‘mind’ and ‘world’. If there are no real or necessary correspondences between non-mathematical ideas in subjective reality and external physical reality, how do we know that the world in which we live, breathe, and have our being undeniably exists? Descartes’ resolution of this dilemma took the form of an exercise. He asked us to direct our attention inward and to divest our consciousness of all awareness of external physical reality. If we do so, he concluded, the real existence of human subjective reality could be confirmed.
As it turned out, this resolution was considerably more problematic and oppressive than Descartes could have imagined. ‘I think, therefore I am’ may be a marginally persuasive way of confirming the real existence of the thinking ‘self’. However, the understanding of physical reality that obliged Descartes and others to doubt the existence of this self implied that the separation between the subjective world, or the world of life, and the real world of physical reality was ‘absolute’. Here the understanding of the relationship between mind and world is framed within the larger context of the history of mathematical physics, the origins and extensions of the classical view of the foundations of scientific knowledge, and the various ways in which physicists have attempted to meet challenges to the efficacy of classical epistemology. This serves as background for a new relationship between parts and wholes in quantum physics, as well as for similar views of that relationship which have emerged in the so-called ‘new biology’ and in recent studies of the evolution of modern humans.
Nevertheless, at the end of this arduous journey lie two conclusions. First, there is no solid functional basis in contemporary physics or biology for believing in the stark Cartesian division between ‘mind’ and ‘world’, which some have described as ‘the disease of the Western mind’. Second, there is a new basis for dialogue between two cultures that are now badly divided and very much in need of an enlarged sense of common understanding and shared purpose. Let us briefly consider the legacy in Western intellectual life of the stark division between mind and world sanctioned by classical physics and formalized by Descartes.
The first scientific revolution of the seventeenth century freed Western civilization from the paralysing and demeaning forces of superstition, laid the foundations for rational understanding and control of the processes of nature, and ushered in an era of technological innovation and progress that provided untold benefits for humanity. Nevertheless, as classical physics progressively dissolved the distinction between heaven and earth and united the universe in a shared and communicable frame of knowledge, it presented us with a view of physical reality that was totally alien from the world of everyday life.
Philosophers quickly realized that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience as distinctly human. In a mechanistic universe, Descartes said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent ‘algebraic geometry’.
A scientific understanding of these ideas could be derived, he said, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional co-ordinates. Following the publication of Isaac Newton’s “Principia Mathematica” in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.
Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible. This method seeks to place knowledge upon a secure foundation by first asking us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The process is eventually dramatized in the figure of the evil demon, or malin génie, whose aim is to deceive us, so that our senses, memories, and reasoning lead us astray. The task then becomes one of finding a demon-proof point of certainty, and Descartes produces his famous ‘Cogito ergo sum’: I think, therefore I am. It is on this slender basis that the correct use of our faculties has to be re-established, but it seems as though Descartes has denied himself any materials to use in reconstructing the edifice of knowledge. He has a basis, but no way of building on it without invoking principles that will not be demon-proof, and so will not meet the standards he had apparently set for himself. It is possible to interpret him as using ‘clear and distinct ideas’ to prove the existence of God, whose benevolence then justifies our use of clear and distinct ideas (‘God is no deceiver’): this is the notorious Cartesian circle. Descartes’s own attitude to this problem is not quite clear; at times he seems more concerned with providing a body of knowledge that our natural faculties will endorse than with one that meets the more severe standards with which he starts. For example, in the second set of ‘Replies’ he shrugs off the possibility of the ‘absolute falsity’ of our natural system of beliefs, in favour of our right to retain ‘any conviction so firm that it is quite incapable of being destroyed’. Nonetheless, assent - the act of agreeing intellectually to something proposed as true - is warranted only when a belief is worthy of it, that is, when there are points that support it. Reasons justify a belief insofar as they support something that is open to question and would otherwise have to be taken merely on someone’s word. When the mind uses reason in this way to attain truth or knowledge, the aim is to test the possibilities and weigh the complex elements of our concerns, so that judgement performs its proper function and clarifies rather than muddies the water.
However, individual events, say the collapse of a bridge, are usually explained by specifying their cause: the bridge collapsed because of the pressure of the flood water and its weakened structure. This is an example of causal explanation. There are usually indefinitely many causal factors responsible for the occurrence of an event, and the choice of a particular factor as ‘the cause’ appears to depend primarily on contextual considerations. Thus, one explanation of an automobile accident may cite the icy road conditions, another the inexperienced driver, and still another the defective brakes. Context may determine which of these and other possible explanations is the appropriate one. These explanations of ‘why’ an event occurred are sometimes contrasted with explanations of ‘how’ an event occurred. A ‘how’ explanation of an event consists in an informative description of the process that led to the occurrence of the event, and such descriptions are likely to involve descriptions of causal processes.
Furthermore, human actions are often explained by being ‘rationalized’ - i.e., by citing the agent’s beliefs and desires (and other ‘intentional’ mental states such as emotions, hopes, and expectations) that constitute a reason for doing what was done. You opened the window because you wanted some fresh air and believed that by opening the window you could secure this result. It has been a controversial issue whether such rationalizing explanations are causal, i.e., whether they invoke beliefs and desires as causes of the action. Another issue is whether these ‘rationalizing’ explanations must conform to the covering-law model, and if so, what laws might underwrite such explanations.
The need for such natural beliefs, which cannot be certified by reason alone, eventually became the cornerstone of Hume’s philosophy, and the basis of most twentieth-century reactions to the method of doubt.
In his own time, René Descartes’ conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems about how mind and body can causally interact. One response was to assign the causal efficacy to the action of God: events in the world merely form occasions on which God acts so as to bring about the events normally accompanying them, and thought of as their effects. Although this position is associated especially with Malebranche, it is much older, and was held by many among the Islamic practitioners of kalam, the discipline of adducing philosophical proofs to justify elements of religious doctrine. Kalam plays a role in Islam parallel to that which scholastic philosophy played in the development of Christianity, and its practitioners were known as the Mutakallimun. Descartes’ conception also gives rise to the problem, insoluble in its own terms, of ‘other minds’. His notorious denial that nonhuman animals are conscious is a stark illustration of the problem.
In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the ‘piece of wax’ surviving changes to all its sensible qualities, matter is not an empirical concept but ultimately an entirely geometrical one, with extension and motion as its only physical nature. Descartes’ thought here is reflected in Leibniz’s view, held later by Russell, that the qualities of sense experience have no resemblance to the qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension, there can be no empty space or ‘void’; and since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).
Although the structure of Descartes’ epistemology, theory of mind, and theory of matter has been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.
It seems, nonetheless, that the radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concerns about its spiritual dimensions or ontological foundations. In the meantime, attempts to rationalize, reconcile, or eliminate Descartes’ stark division between mind and matter became perhaps the most central feature of Western intellectual life.
Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematical descriptions of the motions of matter with linguistic representations of external reality in the subjective space of mind. Descartes’ compatriot Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed that ‘Liberty, Equality, Fraternity’ are the guiding principles of this consciousness. Rousseau also made godlike the idea of the ‘general will’ of the people to achieve these goals and declared that those who do not conform to this will were social deviants.
Evenhandedly, Rousseau’s attempt to posit a ground for human consciousness by reifying nature was revived in a different form by the nineteenth-century Romantics in Germany, England, and the United States. Goethe and Friedrich Schelling proposed a natural philosophy premised on ontological monism (the idea that God, man, and nature are grounded in an indivisible spiritual Oneness) and argued for the reconciliation of mind and matter with an appeal to sentiment, mystical awareness, and quasi-scientific musing. In Goethe’s attempt to wed mind and matter, nature became a mindful agency that ‘loves illusion’, shrouds man in mist, ‘presses him to her heart’, and punishes those who fail to see the ‘light’. Schelling, in his version of cosmic unity, argued that scientific facts were at best partial truths and that the mindful creative spirit that unifies mind and matter is progressively moving toward ‘self-realization’ and ‘undivided wholeness’.
Descartes believed there are two basic kinds of things in the world, a belief known as substance dualism. For Descartes, the principles of existence for these two kinds of things - bodies and minds - are completely different from one another: bodies exist by being extended in space, while minds exist by being conscious. According to Descartes, nothing can be done to give a body thought and consciousness. No matter how we shape a body or combine it with other bodies, we cannot turn the body into a mind, a thing that is conscious, because being conscious is not a way of being extended.
For Descartes, a person consists of a human body and a human mind, each causally interacting with the other. For example, the intentions of a human being - states that exist only in the mind - may cause that person’s limbs to move. In this way, the mind can affect the body. In addition, the sense organs of a human being may be affected by light, pressure, or sound, external sources that in turn affect the brain, which in turn affects mental states. Thus, the body may affect the mind. Exactly how mind can affect body, and vice versa, is a central issue in the philosophy of mind, known as the mind-body problem. According to Descartes, this interaction of mind and body is peculiarly intimate. Unlike the interaction between a pilot and his ship, the connection between mind and body more closely resembles two substances that have been thoroughly mixed.
Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to virtually all existentialist writers can, however, be identified. The term itself suggests one major theme: The stress on concrete individual existence and consequently on subjectivity, individual freedom and choice.
Most philosophers since Plato have held that the highest ethical good is the same for everyone: insofar as one approaches moral perfection, one resembles other morally perfect individuals. The nineteenth-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existential, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, “I must find a truth that is true for me . . . the idea for which I can live or die.” Other existentialist writers have echoed Kierkegaard’s belief that one must choose one’s own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgement of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The nineteenth-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.
All existentialists have followed Kierkegaard in stressing the importance of passionate individual action in deciding questions of both morality and truth. They have insisted, accordingly, that personal experience and acting on one’s own convictions are essential in arriving at the truth. Thus, the understanding of a situation by someone involved in that situation is superior to that of a detached, objective observer. This emphasis on the perspective of the individual agent has also made existentialists suspicious of systematic reasoning. Kierkegaard, Nietzsche, and other existentialist writers have been deliberately unsystematic in the exposition of their philosophies, preferring to express themselves in aphorisms, dialogues, parables, and other literary forms. Despite their antirationalist position, however, most existentialists cannot be said to be irrationalists in the sense of denying all validity to rational thought. They have held that rational clarity is desirable wherever possible, but that the most important questions in life are not accessible to reason or science. Furthermore, they have argued that even science is not as rational as is commonly supposed. Nietzsche, for instance, asserted that the scientific assumption of an orderly universe is for the most part a useful fiction, an invention of the human mind.
Perhaps the most prominent theme in existentialist writing is that of choice. Humanity’s primary distinction, in the view of most existentialists, is the freedom to choose. Existentialists have held that human beings do not have a fixed nature, or essence, as other animals and plants do: each human being makes choices that create his or her own nature. In the formulation of the twentieth-century French philosopher Jean-Paul Sartre, existence precedes essence. Choice is therefore central to human existence, and it is inescapable; even the refusal to choose is a choice. Freedom of choice entails commitment and responsibility. Because individuals are free to choose their own path, existentialists have argued, they must accept the risk and responsibility of following their commitment wherever it leads.
Kierkegaard held that it is spiritually crucial to recognize that one experiences not only a fear of specific objects but also a feeling of general apprehension, which he called dread. He interpreted it as God’s way of calling each individual to make a commitment to a personally valid way of life. The word anxiety (German ‘Angst’) has a similarly crucial role in the work of the twentieth-century German philosopher Martin Heidegger: Anxiety leads to the individual’s confrontation with nothingness and with the impossibility of finding ultimate justification for the choices he or she must make. In the philosophy of Sartre, the word nausea is used for the individual’s recognition of the pure contingency of the universe, and the word anguish is used for the recognition of the total freedom of choice that confronts the individual at every moment.
Existentialism as a distinct philosophical and literary movement belongs to the nineteenth and twentieth centuries; however, elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many premodern philosophers and writers.
The first to anticipate the major concerns of modern existentialism was the seventeenth-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride: The human self, which combines mind and body, is itself a paradox and contradiction.
Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the nineteenth-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history: Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual’s response to this situation must be to live a totally committed life, and this commitment can only be understood by the individual who has made it. The individual therefore must always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a ‘leap of faith’ into a Christian way of life, which, although incomprehensible and full of risk, was the only commitment he believed could save the individual from despair.
Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the “Death of God” and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.
Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis - in this case the phenomenology of the twentieth-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one’s life. Heidegger contributed to existentialist thought an original emphasis on being and ontology, as well as on language.
Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre’s philosophy is explicitly atheistic and pessimistic: He declared that human beings require a rational basis for their lives but are unable to achieve one, and thus human life is a ‘futile passion’. Sartre nonetheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.
Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on twentieth-century theology. The twentieth-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard’s concerns, especially the conviction that a personal sense of authenticity and commitment is essential to religious faith.
A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The nineteenth-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In “Notes from the Underground” (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive: Only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in “The Brothers Karamazov” (1879-80), “We must love life more than the meaning of it.”
In the twentieth century, the novels of the Austrian Jewish writer Franz Kafka, such as “The Trial” (1925, trans. 1937) and “The Castle” (1926, trans. 1930), present isolated men confronting vast, elusive, menacing bureaucracies: Kafka’s themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theatre of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard’s thought can be found in the novels of Walker Percy and John Updike, and various existential themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur Miller.
The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the ‘death of God’ theologian Friedrich Nietzsche. After declaring that God and ‘divine will’ do not exist, Nietzsche reified the ‘essences’ of consciousness in the domain of subjectivity as the ground for individual ‘will’ and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth’. The problem, claimed Nietzsche, is that earlier versions of the ‘will to truth’ disguise the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual ‘will’.
In Nietzsche’s view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no real or necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he declared that we are all locked in ‘a prison house of language’. The prison, as he conceived it, nonetheless also represented a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on one’s own ‘willing’.
Those who fail to enact their existence in this space, says Nietzsche, are enticed into sacrificing their individuality on the nonexistent altar of religious beliefs and democratic or socialist ideals, and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, not only exalts natural phenomena and favours reductionist examinations of phenomena at the expense of the individual who feels, perceives, thinks, wills, and especially reasons; it also seeks to reduce mind to a mere material substance, and thereby to displace or subsume the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.
A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.
By contrast, many philosophers associated with the movement have focussed on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as ‘time’ and ‘freedom’, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.
Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.
For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as ‘time is unreal’, analyses that then aided in determining the truth of such assertions.
Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called ‘atomic propositions’. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical views based on this logical analysis of language, and the insistence that meaningful propositions must correspond to facts constitute what Russell called ‘logical atomism’. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements ‘John is good’ and ‘John is tall’ have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property ‘goodness’ as if it were a characteristic of John in the same way that the property ‘tallness’ is a characteristic of John. Such failure results in philosophical confusion.
Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, ‘Tractatus Logico-Philosophicus’ (1921, trans. 1922), in which he first presented his theory of language, Wittgenstein argued that all philosophy is a ‘critique of language’ and that philosophy aims at the ‘logical clarification of thoughts’. The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts - the propositions of science - are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.
Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as ‘logical positivism’. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).
The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends entirely on the meanings of the terms constituting the statement. An example would be the proposition ‘two plus two equals four.’ The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually meaningless. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s ‘Language, Truth and Logic’ in 1936.
The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the ‘Tractatus’, he initiated a new line of thought culminating in his posthumously published ‘Philosophical Investigations’ (1953; trans. 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.
This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.
Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate ‘systematically misleading expressions’ in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.
Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.
Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, are needed in addition to logic in analysing ordinary language.
Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.
The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyse ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language, and to how language is used in everyday discourse, can aid in resolving philosophical problems. The examination of one’s own thoughts and feelings is the province of introspection; the rational, by contrast, is marked by reasonable, logical calculation. A logical calculus, also called a formal language or a logical system, is a system in which explicit rules are provided to determine (1) which expressions count as expressions of the system, (2) which sequences of expressions count as well formed (well-formed formulae), and (3) which sequences count as proofs. Such a system may include axioms from which proofs begin or at which they terminate; its best-known instances are the propositional calculus and the predicate calculus.
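As a minimal illustration of such a system (a standard Hilbert-style presentation of the propositional calculus, offered here only as an example and not as anything drawn from the text above), the rules might be:

\[
\begin{aligned}
&\text{Axiom 1: } \varphi \rightarrow (\psi \rightarrow \varphi)\\
&\text{Axiom 2: } \bigl(\varphi \rightarrow (\psi \rightarrow \chi)\bigr) \rightarrow \bigl((\varphi \rightarrow \psi) \rightarrow (\varphi \rightarrow \chi)\bigr)\\
&\text{Axiom 3: } (\lnot\varphi \rightarrow \lnot\psi) \rightarrow (\psi \rightarrow \varphi)\\
&\text{Rule of inference (modus ponens): from } \varphi \text{ and } \varphi \rightarrow \psi, \text{ infer } \psi.
\end{aligned}
\]

A proof is then a finite sequence of well-formed formulae, each of which is either an axiom or follows from earlier members of the sequence by the rule; this makes concrete the three kinds of rule - for expressions, for well-formed formulae, and for proofs - mentioned above.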
The most immediate issues surrounding certainty are those connected with ‘scepticism’. Although Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter, e.g., ethics, or in any area whatsoever. Classical scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth, e.g., there is a gulf between appearances and reality; and it frequently cites the conflicting judgements that our methods deliver, so that doubt about whether we ever attain the truth comes to seem warranted. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus, so that the scepticism of Pyrrho and the new Academy was a system of argument opposed to dogmatism, and particularly to the philosophical system-building of the Stoics.
As it has come down to us, particularly in the writings of Sextus Empiricus, its method was typically to cite reasons for finding an issue undecidable (sceptics devoted particular energy to undermining the Stoics’ conception of some truths as delivered by direct apprehension, or katalepsis). As a result the sceptic counsels epochē, or the suspension of belief, and then goes on to celebrate a way of life whose object was ataraxia, or the tranquillity resulting from suspension of belief.
The merely mitigated scepticism which accepts everyday or commonsense belief - not as the delivery of reason, but as due more to custom and habit - is nonetheless sceptical about the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by the ancient sceptics from Pyrrho through to Sextus Empiricus. Despite the fact that the phrase ‘Cartesian scepticism’ is sometimes used, Descartes himself was not a sceptic; however, in the ‘method of doubt’ he uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes trusts in categories of ‘clear and distinct’ ideas, not far removed from the phantasiá kataleptikê of the Stoics.
Many sceptics have traditionally held that knowledge requires certainty, and, of course, they claim that certain knowledge is not possible. Part of the background is the principle that every effect is a consequence of an antecedent cause or causes; yet for causality to hold it is not necessary for an effect to be predictable, since the antecedent causes may be too numerous, too complicated, or too interrelated for analysis. Nevertheless, in order to avoid scepticism, it has generally been held that knowledge does not require certainty. Except for alleged cases of things that are evident for one just by being true, it has often been thought that anything known must satisfy certain criteria, or standards, as well as being true. In the case of knowledge arrived at by ‘deduction’ or ‘induction’, there will be criteria specifying when such inference is warranted; apart from alleged cases of self-evident truths, there will be general principles specifying the sort of consideration that makes accepting a claim warranted to some degree.
Besides, there is another view - the absolute, global view that we do not have any knowledge whatsoever. However, it is doubtful that any philosopher seriously entertains absolute scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to anything non-evident, showed no such hesitancy about assenting to ‘the evident’; the non-evident is any belief that requires evidence in order to be warranted.
René Descartes (1596-1650), in his sceptical guise, never doubted the content of his own ideas. What he challenged was whether they ‘corresponded’ to anything beyond ideas.
All the same, the Pyrrhonist and Cartesian forms of virtually global scepticism have been held and defended. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, rather than the truth or belief conditions, that provides the grist for the sceptic’s mill. The Pyrrhonist will suggest that no non-evident, empirical belief is sufficiently warranted, whereas the Cartesian sceptic will agree that no empirical belief about anything other than one’s own mind and its contents is sufficiently warranted, because there are always legitimate grounds for doubting it. The essential difference between the two views thus concerns the stringency of the requirements for a belief’s being sufficiently warranted to count as knowledge.
William James (1842-1910), although with characteristic generosity exaggerating his debt to Charles S. Peirce (1839-1914), charged that the method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and criticized its individualist insistence that the ultimate test of certainty is to be found in the individual’s personal consciousness.
From his earliest writings, James understood cognitive processes in teleological terms. ‘Thought’, he held, assists us in the satisfaction of our interests. His ‘will to believe’ doctrine, the view that we are sometimes justified in believing beyond the evidence, relies upon the notion that a belief’s benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which considers that we find the meaning of terms by examining their application to objects in experimental situations, similarly reflects the teleological approach in its attention to consequences.
Such an approach, however, sets James’s theory of meaning apart from verificationism, with its dismissal of metaphysics. Unlike the verificationists, who take cognitive meaning to be a matter only of consequences in sensory experience, James took pragmatic meaning to include emotional and motor responses. Moreover, his pragmatic method was a way of assessing the value of metaphysical claims, not a way of dismissing them as meaningless. It should also be noted that in his more circumspect moments James did not hold that even his broad set of consequences was exhaustive of a term’s meaning. ‘Theism’, for example, he took to have antecedent, definitional meaning, in addition to its important pragmatic meaning.
James’s theory of truth reflects his teleological conception of cognition: a true belief is one that is compatible with our existing system of beliefs and that leads us to satisfactory interaction with the world.
Peirce’s famous pragmatist principle, by contrast, is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid: if we believe this, we expect that it would turn litmus paper red; we expect an action of ours to have certain experimental results. The pragmatic principle holds that listing the conditional expectations of this kind that we associate with applications of a concept provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: for the clarificationist, the pragmatic principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing.
Most important is the application of the pragmatic principle to Peirce’s account of reality: when we take something to be real, we think it is ‘fated to be agreed upon by all who investigate’ the matter to which it stands; in other words, if I believe that it is really the case that ‘p’, then I expect that anyone who inquired into the matter would arrive at the belief that ‘p’. It is not part of the theory that the experimental consequences of our actions should be specified in a narrowly empiricist vocabulary - Peirce insisted that perceptual judgements are themselves laden with theory - nor is it his view that the collected conditionals that clarify a concept are all analytic. In addition, in later writings, he argues that the pragmatic principle could only be made plausible to someone who accepted its metaphysical realism: it requires that ‘would-bes’ are objective and, of course, real.
If realism itself can be given a fairly quick clarification, it is more difficult to chart the various forms of opposition to it, for they seem legion. One kind of opponent denies that the entities posited by the relevant discourse exist, or at least that they exist independently. The standard example is ‘idealism’: the view that reality is somehow mind-dependent or mind-co-ordinated - that the supposedly real objects of the ‘external world’ do not exist independently of cognizing minds, but exist only as in some way correlative to mental operations. The doctrine of ‘idealism’ turns on the conception that reality as we understand it is meaningful and reflects the workings of mind, and it construes this as meaning that the inquiring mind itself makes a formative contribution not merely to our understanding of the nature of the ‘real’, but even to the character we ascribe to it.
The term ‘real’ is most straightforwardly used when qualifying another description: a real ‘x’ may be contrasted with a fake ‘x’, a failed ‘x’, a near ‘x’, and so on. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed to its existence by some doctrine or theory. The central error in thinking of reality as the totality of existence is to think of the ‘unreal’ as a separate domain of things, perhaps unfairly denied the benefits of existence.
Nothing, taken strictly, would be the nonexistence of all things - a notion that has fascinated some thinkers and that others dismiss as the product of confusing the term ‘nothing’ with the name of a thing instead of a ‘quantifier’. (Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain.) This confusion leads the unsuspecting to think that a sentence such as ‘Nothing is all around us’ talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate ‘is all around us’ has any application. The feelings that led some philosophers and theologians, notably Heidegger, to talk of the experience of nothing are not properly the experience of anything, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. The difference between ‘existentialists’ and ‘analytic philosophers’ on this point is that, whereas the former are afraid of nothing, the latter think that there is nothing to be afraid of.
A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other substantive problems arise over conceptualizing empty space and time.
Realism, in this sense, names the standard opposition between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (b. 1925), is borrowed from the ‘intuitionistic’ critique of classical mathematics, and proposes that the unrestricted use of the ‘principle of bivalence’ is the trademark of ‘realism’. However, this has to overcome counterexamples both ways: although Aquinas was a moral ‘realist’, he held that moral reality was not sufficiently structured to make true or false every moral claim, while Kant believed that he could use the law of bivalence happily in mathematics precisely because mathematics was only our own construction. Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right thing - surrounding objects really exist and are independent of us and of our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the most prominent opposition to realism has come from philosophers such as Goodman, who is impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.
The modern treatment of existence in the theory of ‘quantification’ is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is itself an operator on a predicate, indicating that the property it expresses has instances. Existence is therefore treated as a second-order property, or a property of properties. In this it is like number, for when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with number is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem, nevertheless, is created by sentences like ‘This exists’, where some particular thing is indicated; such a sentence seems to express a contingent truth (for this might not have existed), yet no other predicate is involved. ‘This exists’ is therefore unlike ‘Tame tigers exist’, where a property is said to have an instance, for the word ‘this’ does not pick out a property but an individual.
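A schematic rendering may make the point plainer (the notation here is standard quantificational shorthand, supplied for illustration rather than drawn from the text):

\[
\text{‘Tame tigers exist’}:\quad \exists x\,\bigl(\mathrm{Tiger}(x) \wedge \mathrm{Tame}(x)\bigr),
\]

which says that the property of being a tame tiger has at least one instance; in Frege’s idiom, the number of tame tigers is not nought. ‘This exists’, by contrast, would have to be rendered along the lines of \(\exists x\,(x = \text{this})\), where ‘this’ contributes an individual rather than a property, and it is just this asymmetry that generates the problem noted above.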
Ever since Plato, the ground of being has tended to become a self-sufficient, perfect, unchanging, and eternal something, identified with the Good or with God, but whose relation to the everyday world remains obscure. The celebrated ontological argument for the existence of God was first proposed by Anselm in his Proslogion. The argument proceeds by defining God as ‘something than which nothing greater can be conceived’. God then exists in the understanding, since we understand this concept. However, if he existed only in the understanding, something greater could be conceived, for a being that exists in reality is greater than one that exists only in the understanding. But then we could conceive of something greater than that than which nothing greater can be conceived, which is a contradiction; therefore, God cannot exist only in the understanding, but must exist in reality.
An influential argument (or family of arguments) for the existence of God, the cosmological argument, takes as its premise that all natural things are dependent for their existence on something else. The totality of dependent things must then itself depend upon a non-dependent, or necessarily existent, being, which is God. Like the argument to design, the cosmological argument was attacked by the Scottish philosopher and historian David Hume (1711-76) and by Immanuel Kant.
Its main problem, nonetheless, is that it requires us to make sense of the notion of necessary existence. For if the answer to the question of why anything exists is that some other thing of a similar kind exists, the question merely arises again; the ‘God’ who ends the regress must exist necessarily, and must not be an entity about which the same kinds of question can be raised. The other problem with the argument is that of attributing concern and care to the deity, that is, of connecting the necessarily existent being it derives with human values and aspirations.
The ontological argument has been treated by modern theologians such as Barth, following Hegel, not so much as a proof with which to confront the irreligious, but as an explanation of the deep meaning of religious belief. Collingwood regards the argument as proving not that because our idea of God is that of id quo maius cogitari nequit, therefore God exists, but that because this is our idea of God, we stand committed to belief in its existence. Its existence is a metaphysical point, or absolute presupposition, of certain forms of thought.
In the 20th century, modal versions of the ontological argument have been propounded by the American philosophers Charles Hartshorne, Norman Malcolm, and Alvin Plantinga. One version defines something as unsurpassably great if it exists and is perfect in every ‘possible world’. One then allows that it is at least possible that an unsurpassably great being exists; this means that there is a possible world in which such a being exists. However, if it exists in one world, it exists in all (for the fact that such a being exists in one world entails that it exists and is perfect in every world), so it exists necessarily. The correct response to this argument is to disallow the apparently reasonable concession that it is possible that such a being exists. This concession is much more dangerous than it looks, since in the modal logic involved, from ‘possibly necessarily p’ we can derive ‘necessarily p’. A symmetrical proof starting from the assumption that it is possible that such a being does not exist would derive that it is impossible that it exists.
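The modal step can be set out compactly (this uses the S5 principle that whatever is possibly necessary is necessary, and is offered only as a reconstruction of the reasoning just described). Writing \(g\) for ‘an unsurpassably great being exists’, and granting that such a being, if it exists at all, exists necessarily (\(g \rightarrow \Box g\)):

\[
\Diamond\Box g \;\vdash_{\mathrm{S5}}\; \Box g \;\vdash\; g,
\qquad\text{while symmetrically}\qquad
\Diamond\lnot\Box g \;\vdash_{\mathrm{S5}}\; \lnot\Box g \;\vdash\; \lnot g .
\]

Everything therefore turns on which ‘possibility’ premise one concedes, which is why the apparently innocent concession is more dangerous than it looks.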
The doctrine of acts and omissions holds that it makes an ethical difference whether an agent actively intervenes to bring about a result, or omits to act in circumstances in which it is foreseen that, as a result of the omission, the same result occurs. Thus, suppose that I wish you dead. If I act to bring about your death, I am a murderer; however, if I happily discover you in danger of death and fail to act to save you, I am not acting, and therefore, according to the doctrine of acts and omissions, not a murderer. Critics reply that omissions can be as deliberate and immoral as commissions: if I am responsible for your food and fail to feed you, my omission is surely a killing. ‘Doing nothing’ can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and, depending on the context, may be a way of deceiving, betraying, or killing. Nonetheless, criminal law finds it convenient to distinguish discontinuing an intervention, which is permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be described or defined in a way that bears any general moral weight.
The doctrine of double effect is a principle attempting to define when an action that has both good and bad results is morally permissible. In one formulation such an action is permissible if (1) the action is not wrong in itself, (2) the bad consequences are not intended, (3) the good is not itself a result of the bad consequences, and (4) the two consequences are commensurate. Thus, for instance, I might justifiably bomb an enemy factory, foreseeing but not intending the death of nearby civilians, whereas bombing the nearby civilians intentionally would be disallowed. The principle has its roots in Thomist moral philosophy. St. Thomas Aquinas (1225-74) held that it is meaningless to ask whether a human being is two things (soul and body) or one, just as it is meaningless to ask whether the wax and the shape given to it by the stamp are one thing or two: on this analogy the soul is the form of the body. Life after death is possible only because a form itself does not perish (perishing is a loss of form).
The form is therefore in some sense available to reanimate a new body; it is thus not I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. On Aquinas’s account, a person has no privileged self-understanding: we understand ourselves as we do everything else, by way of sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. Difficulties at a similar point led the logical positivists to abandon the notion of an epistemological foundation altogether and to flirt with the coherence theory of truth, and it is now widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given’.
The special way that we each have of knowing our own thoughts, intentions, and sensations has been questioned by philosophers of behaviourist and functionalist tendencies, who have found it important to deny that there is such a special way, arguing that the way I know of my own mind is much the same as the way that I know of yours, e.g., by seeing what I say when asked. Others, however, point out that the behaviour of reporting the results of introspection is a particular and legitimate kind of behaviour that deserves notice in any account of human psychology. The philosophy of history is reflection upon the nature of history, or of historical thinking. The term was used in the 18th century, e.g., by Voltaire, to mean critical historical thinking as opposed to the mere collection and repetition of stories about the past; in Hegelian usage, however, it came to mean universal or world history. The Enlightenment confidence that the age of superstition and barbarism was being replaced by science, reason, and understanding gave history a progressive moral thread, and under the influence of the German philosopher and founder of Romanticism Johann Gottfried Herder (1744-1803), and of Immanuel Kant, this idea was taken further, so that the philosophy of history became the detecting of a grand system: the unfolding of the evolution of human nature as witnessed in successive stages (the progress of rationality or of Spirit). This essentially speculative philosophy of history is given an extra twist in the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engines of historical change. The idea is readily intelligible only to the extent that the world of nature and the world of thought become identified. The work of Herder, Kant, Fichte, and Schelling is synthesized by Hegel: history has a plot, namely the moral development of man, equated with freedom of the will; it is the development of thought, a logical development in which various necessary moments in the life of the concept are successively achieved and improved upon. Hegel’s method is at its most successful when the object is the history of ideas, and the evolution of thinking may be pictured as marching in step with the logical oppositions and their resolutions encountered by various systems of thought.
Within the revolutionary communism of Karl Marx (1818-83) and the German social philosopher Friedrich Engels (1820-95) there emerges a rather different kind of story, based upon Hegel’s progressive structure but delaying the achievement of the goal of history to a future in which the political conditions for freedom come to exist, so that economic and political forces rather than ‘reason’ are in the engine room. Although speculations upon the direction of history as a whole continued to be written, philosophical interest shifted to the nature of historical understanding, and in particular to a comparison between the methods of natural science and those of the historian. For writers such as the German neo-Kantian Wilhelm Windelband and the German philosopher, literary critic, and historian Wilhelm Dilthey, it is important to show that the human sciences, such as history, are objective and legitimate, but are nonetheless in some way different from the enquiries of the natural scientist. Since the subject-matter is the past thoughts and actions of human beings, what is needed is the ability to re-live that past thought, knowing the deliberations of past agents as if they were the historian’s own. The most influential British writer on this theme was the philosopher and historian R. G. Collingwood (1889-1943), whose ‘The Idea of History’ (1946) contains an extensive defence of the Verstehen approach: understanding others is not gained by the tacit use of a ‘theory’ enabling us to infer what thoughts or intentions explain their actions, but by re-living their situation and thereby understanding what they experienced and thought. The immediate questions then concern the form of historical explanation, and the fact that general laws have little or no place in the human sciences.
The theory-theory is the view that everyday attributions of intention, belief, and meaning to other persons proceed via the tacit use of a theory that enables one to construct these interpretations as explanations of their doings. The view is commonly held along with functionalism, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending on which feature of theories is being stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on. The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the nonexistence of a medium in which this theory can be couched, since the child learns simultaneously about the minds of others and the meaning of terms in its native language.
On the opposed view, our understanding of others is not gained by the tacit use of a ‘theory’ enabling us to infer what thoughts or intentions explain their actions, but by re-living their situation - ‘walking in their moccasins’, or seeing things from their point of view - and thereby understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they were our own. The suggestion is a modern development of the ‘Verstehen’ tradition associated with Dilthey, Weber, and Collingwood.
On Aquinas’s account, then, the form is in some sense available to reanimate a new body; it is not I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. Belief, existence, necessity, fate, creation, sin, justice, mercy, redemption, and God remain in play; yet once descriptions of a supreme Being are arrived at, there remains the problem of providing any reason for supposing that anything answering to those descriptions exists. Nor are we granted any privileged self-understanding: we understand ourselves, just as we do everything else, through sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. In the theory of knowledge Aquinas holds the Aristotelian doctrine that knowing entails some similarity between the knower and what is known: a human being’s corporeal nature therefore requires that knowledge start with sense perception. The same limitations do not apply further up the hierarchy of beings, such as the angels of the celestial heavens.
In the domain of theology Aquinas deploys the distinction, emphasized by Eriugena, between the existence of God and our comprehension of what God is, and he offers five ways aimed at demonstrating the former. They are: (1) motion is only explicable if there exists an unmoved first mover; (2) the chain of efficient causes demands a first cause; (3) the contingent character of existing things in the world demands a different order of existence, in other words something that has necessary existence; (4) the gradations of value among things in the world require the existence of something that is most valuable, or perfect; and (5) the orderly character of events points to a final cause, or end, towards which all things are directed, and the existence of this end demands a being that ordained it. These are arguments of natural theology: standing between reason and faith, Aquinas lays out proofs of the existence of God.
He readily recognizes that there are doctrines, such as the Incarnation and the nature of the Trinity, known only through revelation, and whose acceptance is more a matter of moral will. God’s essence is identified with his existence, as pure activity. God is simple, containing no potential. Accordingly, we cannot obtain knowledge of what God is (his quiddity), and must remain content with descriptions that apply to him partly by way of analogy (a procedure perhaps doing the same work as the principle of charity, which suggests that we regulate our procedures of interpretation by maximizing the extent to which we see our subjects as humanly reasonable, rather than the extent to which we see them as right about things); God reveals himself, but not his essence. A standing problem in ethics is posed by the English philosopher Philippa Foot in her ‘The Problem of Abortion and the Doctrine of the Double Effect’ (1967). A runaway train or trolley comes to a section in the track that is under construction and completely impassable. One person is working on one branch and five on the other, and the trolley will put an end to anyone working on the branch it enters. Clearly, to most minds, the driver should steer for the less populated branch. But now suppose that, left to itself, it will enter the branch with the five employees on it, and you as a bystander can intervene, altering the points so that it veers onto the other. Is it right, or obligatory, or even permissible for you to do this, thereby apparently involving yourself in responsibility for the death of the one person? After all, whom have you wronged if you leave it to go its own way? The situation is typical of many others in which utilitarian reasoning seems to lead to one course of action, while a person’s integrity or principles may oppose it.
Describing events that merely happen does not in itself permit us to talk of rationality and intention, which are the categories we may apply only if we conceive of events as actions: things deliberated upon, considered, designed, planned, and thought out, responsive to reasons. We think of ourselves not only passively, as creatures within which things happen, but actively, as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the ‘will’ and ‘free will’. Other problems in the theory of action include drawing the distinction between an action and its consequence, and describing the structure involved when we do one thing ‘by’ doing another. Even the dating and placing of an action raises problems: someone shoots someone on one day and in one place, and the victim then dies on another day and in another place. Where and when did the murderous act take place?
It is not clear, moreover, that only events are causally related. Kant refers to the example of a cannonball at rest upon a cushion, causing the cushion to be the shape that it is, and this suggests that states of affairs or objects or facts may also be causally related. The central problem is to understand the element of necessitation or determinacy of the future. Events, Hume thought, are in themselves ‘loose and separate’: how then are we to conceive of the connection between them? The relationship is not perceptible, for all that perception gives us (Hume argues) is knowledge of the patterns that events actually fall into, rather than any acquaintance with the connections determining the patterns. It is, however, clear that our conception of everyday objects is largely determined by their causal powers, and all our action is based on the belief that these causal powers are stable and reliable. Although scientific investigation can give us wider and deeper dependable patterns, it seems incapable of bringing us any nearer to the ‘must’ of causal necessitation. Particular puzzles about causality arise quite apart from the general problem of forming any conception of what it is: how are we to understand the causal interaction between mind and body? How can the present, which exists, owe its existence to a past that no longer exists? How is the stability of the causal order to be understood? Is backward causation possible? Is causation a concept needed in science, or dispensable?
The problem of free will is, nonetheless, that of reconciling our everyday consciousness of ourselves as agents with the best view of what science tells us that we are. Determinism is one part of the problem. It may be defined as the doctrine that every event has a cause. More precisely, for any event ‘C’, there will be some antecedent state of nature ‘N’ and a law of nature ‘L’, such that given ‘L’, ‘N’ will be followed by ‘C’. But if this is true of every event, it is true of events such as my doing something or choosing to do something. So my choosing or doing something is fixed by some antecedent state ‘N’ and the laws. Since determinism is universal, these in turn are fixed, and so on backwards to events for which I am clearly not responsible (events before my birth, for example). So no events can be voluntary or free, where that means that they come about purely because of my willing them when I could have done otherwise. If determinism is true, then there will be antecedent states and laws already determining such events: how then can I truly be said to be their author, or be responsible for them?
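Schematically (the notation is mine, intended only to display the structure of the definition just given):

\[
\forall C\;\exists N\,\exists L\;\bigl[(N \wedge L) \Rightarrow C\bigr],
\]

where \(N\) is an antecedent state of nature, \(L\) a law of nature, and ‘\(\Rightarrow\)’ is read as nomological necessitation: given the law, the antecedent state is followed by the event. Applied to my choices, the antecedent states and the laws do the fixing, and that is what generates the worry about authorship.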
The dilemma of determinism puts the point as follows: if an action is the end of a causal chain that stretches back in time to events for which the agent has no conceivable responsibility, then the agent is not responsible for the action.
The dilemma then adds that if an action is not the end of such a chain, it is merely random, lacking any definite plan, purpose or pattern: no antecedent events brought it about, and in that case nobody is responsible for its ever occurring. So, whether or not determinism is true, responsibility is shown to be illusory.
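Set out in schematic form (my own summary of the two horns just described), letting D stand for ‘the action is determined’ and R for ‘the agent is responsible’:

\[
(D \rightarrow \neg R) \;\wedge\; (\neg D \rightarrow \neg R) \;\vdash\; \neg R
\]

Since either D or not-D must hold, responsibility fails on both horns; this is the constructive-dilemma form of the argument.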
Still, there is this to say: to have a will is to be able to desire an outcome and to purpose to bring it about. Strength of will, or firmness of purpose, is supposed to be good, and weakness of will bad.
A mental act of willing or trying, whose presence is sometimes supposed to make the difference between intentional or voluntary action and mere behaviour, is a ‘volition’. Theories that there are such acts are problematic, and the idea that they make the required difference is a case of explaining a phenomenon by citing another that raises exactly the same problem, since the intentional or voluntary nature of the act of volition now itself needs explanation. For Kant, to act in accordance with the law of autonomy, or freedom, is to act in accordance with universal moral law and regardless of selfish advantage.
The categorical imperative, a central notion in Kantian ethics, is contrasted with the hypothetical imperative, which embeds a command that is in place only given some antecedent desire or project: ‘If you want to look wise, stay quiet’. The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, the injunction or advice lapses. A categorical imperative cannot be so avoided; it is a requirement that binds anybody, regardless of their inclinations. It could be expressed as, for example, ‘Tell the truth (regardless of whether you want to or not)’. The distinction is not always marked by the presence or absence of the conditional or hypothetical form: ‘If you crave drink, don’t become a bartender’ may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.
In the Grundlegung zur Metaphysik der Sitten (1785), Kant gave several formulations of the categorical imperative: (1) the formula of universal law: ‘Act only on that maxim through which you can at the same time will that it should become a universal law’; (2) the formula of the law of nature: ‘Act as if the maxim of your action were to become through your will a universal law of nature’; (3) the formula of the end-in-itself: ‘Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end’; (4) the formula of autonomy, or the consideration of the will of every rational being as a will which makes universal law; and (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.
A central object in the study of Kant’s ethics is to understand these expressions of the inescapable, binding requirements of the categorical imperative, and to understand whether they are equivalent at some deep level. Kant’s own applications of the notion are not always convincing. One cause of confusion is in relating Kant’s ethics to theories such as ‘expressivism’: it is easy to suppose that, because the imperative is unconditional, it cannot be the expression of a sentiment, but must derive from something ‘unconditional’ or ‘necessary’ such as the voice of reason. The imperative is the standard mood of sentences used to issue requests and commands; whether the basic function of language is to communicate information or to issue commands is itself debatable, and signals in animal signalling systems may often be interpreted either way. A related problem is to understand the relationship between commands and other action-guiding uses of language, such as ethical discourse; the ethical theory of ‘prescriptivism’ in fact equates the two functions. A further question is whether there is an imperative logic. ‘Hump that bale’ seems to follow from ‘Tote that barge and hump that bale’ in the way that ‘It is raining’ follows from ‘It is windy and it is raining’: but it is harder to say how to include other forms. Does ‘Shut the door or shut the window’ follow from ‘Shut the window’, for example? The usual way to develop an imperative logic is to work in terms of the possibility of satisfying the one command without satisfying the other, thereby turning it into a variation of ordinary deductive logic.
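A minimal sketch of that satisfaction-based approach (the symbols are mine, not part of any standard text): write !p for the command to bring it about that p. Then one command entails another just in case it cannot be satisfied without the other also being satisfied:

\[
!p \;\models\; !q \quad\text{iff}\quad \text{every state of affairs satisfying } p \text{ also satisfies } q
\]

On this account, ‘Tote that barge and hump that bale’ entails ‘Hump that bale’, since p ∧ q cannot hold without q; and, more controversially, ‘Shut the window’ would entail ‘Shut the door or shut the window’, which is why the disjunctive case is harder to judge.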
Although the morality of people and their ethics amount to the same thing, there is a usage that restricts ‘morality’ to systems such as that of Kant, based on notions of duty, obligation, and principles of conduct, reserving ‘ethics’ for the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and generally avoiding the separation of ‘moral’ considerations from other practical considerations. The scholarly issues are complicated and complex, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests.
Moral motivation has been a major topic of philosophical inquiry, especially in Aristotle, and again since the seventeenth and eighteenth centuries, when the ‘science of man’ began to probe into human motivation and emotion. For writers such as the French moralists, or Hutcheson, Hume, Smith and Kant, a primary task was to delineate the variety of human reactions and motivations. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and other tendencies such as empathy, sympathy or self-interest. The task continues, especially in the light of a post-Darwinian understanding of ourselves.
In some moral systems, notably that of Immanuel Kant, ‘real’ moral worth comes only with acting rightly because it is right. If you do what is right or equitable, but from some other motive, such as fear or prudence, no moral merit accrues to you. Yet that in turn seems to discount other admirable motivations, such as acting from sheer benevolence or ‘sympathy’. The question is how to balance these opposing ideas, and how to understand acting from a sense of obligation without duty or rightness beginning to seem a kind of fetish. An ethic of such particular motivations thus stands opposed to one relying on highly general and abstract principles, particularly those associated with the Kantian categorical imperatives. The view may go as far as to say that, taken on its own, no general consideration settles any particular case; an estimate of what to do can only proceed by identifying the salient features of a situation that weigh on one side or another.
Moral dilemmas, however they arise, are of intense philosophical concern, and they test even a profound and influential defence of common sense. Situations in which each possible course of action breaches some otherwise binding moral principle are serious dilemmas, the stuff of many tragedies. The conflict can be described in different ways. One suggestion is that whichever action the subject undertakes, he or she does something wrong. Another is that this is not so, for the dilemma means that in the circumstances what she or he did was as right as any alternative. It is important to the phenomenology of these cases that action leaves a residue of guilt and remorse, even though it was not the subject’s fault that she or he faced the dilemma, although the rationality of such emotions can be contested. Any normative system with more than one fundamental principle seems capable of generating dilemmas; yet dilemmas exist, such as where a mother must decide which of two children to sacrifice, in which no principles are pitted against each other. If we accept that dilemmas arising from conflicting principles are real and important, this fact can be used to argue against theories, such as various kinds of ‘utilitarianism’, that recognize only one sovereign principle. Alternatively, regretting the existence of dilemmas and the unordered jumble of principles that generates them, a theorist may use their occurrence to argue for the desirability of locating and promoting a single sovereign principle.