Real Security is a Girl with a Gun

Among the many astounding, double-speakish things Donald Trump has said on the campaign trail is the suggestion that readier access to guns, and looser laws about who can carry them and where, would somehow make us safer. Trump commented on the Paris terrorist attacks of November 2015 with the following words:

“When you look at Paris — you know the toughest gun laws in the world, Paris — nobody had guns but the bad guys. Nobody had guns. Nobody. … They were just shooting them one by one and then they (security forces) broke in and had a big shootout and ultimately killed the terrorists. …  You can say what you want, but if they had guns, if our people had guns, if they were allowed to carry — it would’ve been a much, much different situation.”*

*reported by Jeremy Diamond at CNN, November 2015

Sarah Connor from Terminator 2. Image borrowed from Rachel B’s 2010 post on women in Sci-Fi.

My sense is that the statistics are very much against that proposal. The U.S. has some of the most lenient laws regarding guns, and some of the highest rates of gun-related violent crime, among similarly situated countries (e.g. Canada, France, Germany). For a quick version of the argument, read John Donohue’s October 2015 piece in Newsweek.

Trump’s argument to the contrary employs a common statistical fallacy: focusing on a single event, rather than the aggregate of events of a certain type. It’s like people who are scared to fly, yet not afraid to drive or ride in cars, despite the fact that each hour in a moving car is far riskier than each hour on an airplane in flight. (To be fair, similarly fallacious arguments have been offered by gun control advocates who seek to draw their conclusions from reflection on single events.)

At the same time, I’m willing to consider well-reasoned and evidence-based arguments, such as that of John Lott in More Guns, Less Crime (University of Chicago Press, 2010), that readier access to guns does lead to greater safety in many circumstances. Whatever the outcome of that discussion, however, I’d like to “trump” Trump’s trumpism with a radical proposal of my own: If greater legal access to guns would make all of us safer, yet the vast majority of violent crimes are committed by men, why not restrict the legal use of guns to women?** We could even initiate public training programs for such women, who would thereby become better protected against violent attacks (such as rapes and muggings) from men with or without guns. Such women could also take protector roles in rogue shooter scenarios.

**For statistics regarding male and female violent crime, at least in the US, see the 2011 data compiled by the FBI and a nice interpretive post by Jennie Ruby

Trinity from The Matrix. Image borrowed from Rachel B’s 2010 post on women in Sci-Fi.

Such a policy, I reason, would kill two birds with one stone (if you’ll forgive the metaphor): (1) If Trump or Lott is right (though of course they might not be), then it would “make all of us safer” in situations like those of the Paris attacks; and (2) it would especially protect a class of human beings who are so often (and so asymmetrically) the victims of violent crimes such as rape, murder, and assault.

Though a mere feeling is not a satisfactory argument, I (who am male) would personally feel much safer living in a world where only women had legal access to guns. The vast majority of violent crimes are committed by men. Men, with or without guns, are statistically a serious threat to the well-being of all citizens. I’m not at all comforted by the thought that I or the men around me could have more guns than we currently do. How well have the men been doing with the guns so far? Not well, statistically speaking. If the women around me were more fully equipped and trained than the men, however, I would feel that the firepower was in better hands. Women, after all, are so rarely the perpetrators of violent crimes; and any rogue male or female who went on a spree would quickly be put in check by his or her sisters.

Most importantly, if women were the only ones allowed to legally own and operate firearms, this additional power would significantly even the playing field between men and women: that is, between the class of people who are most often the perpetrators of violent crime and the class who are rarely the perpetrators yet frequently the victims. Anytime a man sought to intimidate a woman with physical force, he would be roundly put in check.

Sigourney Weaver in Alien. Image borrowed from Rachel B’s 2010 post on women in Sci-Fi.

Of course, I’m not entirely serious about this proposal. And certain aspects of the argument don’t quite make sense. For instance: If more guns really do make us safer (which, again, has not been established), what would a criminally threatened man do, in my imagined scenario, when no gun-toting women were around?

Nonetheless, the proposal is worth considering, if only as a thought experiment. Those who object to the policy may implicitly reveal their investment in male privilege by doing so. Statistically speaking, men with or without guns are dangerous. Women bear almost no responsibility for the threats of violence that we face, yet they are often its victims. How can we solve the problem? Restricting male opportunities for violence while enhancing female protections against that violence does sound like a good policy.


Religion vs. Science vs. … Anthropology?: On the Way to a Third Way

It’s a commonplace of modern thought that religion may be opposed to science and vice versa. That old story about a stand-off between Galileo and the Catholic Church, high-profile debates about evolutionism vs. creationism, and, more esoterically and academically, conversations about secularism and the recent “return” of religion — these instances and many others bear witness to this sense of tension.

One of the most striking images of the conflict comes in the form of those ubiquitous bumper decals: the fish that reads “Jesus,” and the fish with legs that reads “Darwin.”

Image from http://onetotheother.com/category/evolution/

For my part, I prefer a third and very different fish: the fish presented in René Magritte’s painting “The Search for Truth” (1963). Magritte’s painting presents only four recognizable objects on a canvas boasting such a noble and venerable name: an open window, a wall, an abstract circular object, and a giant fish.

René Magritte – “The Search for Truth” (1963)

Considered on its own, the fish is an ambiguous figure: something we eat, that nourishes us; yet, at the same time, like us, a living animal, and thus capable of reminding us of the living nature we share, just so far, with other animals. In Magritte’s rendition, the fish’s ambiguity is exploited through its connection with the usually serious topic of “truth.” Is Magritte suggesting that the truth is somehow especially close to the mundane facts and interests of our digestion? Or perhaps that it is somehow connected to our aqueous animal origins? Or — by virtue of the fish’s eerily floating, larger-than-life appearance — that the truth goes far beyond the expectations and requirements of our natural understanding?

I’d like to read the Magritte fish as suggesting all of these things at once. If I were in the market for bumper decals, I’d order a special “Magritte fish” to answer the Jesuses and the Darwins. And what, more prosaically, would that answer consist in?

The very fact that religion and science seem to be able to “stand off” suggests that they are entities of a common type. From this perspective, we might try to understand what religion and science are competing about, and what they are competing for, by asking the question: What do religion and science have in common? Among other things, they are both historical institutions that make use of ecological resources, financial assets and economic systems, as well as the practices and commitments of individual human beings, to realize their ends. Their ends may appear to differ more radically than their means, but some of those ends are shared as well: for instance, the fixation of individual agents’ beliefs and commitments, and propagation of the institutional framework and practices that sustain each as an institution.

To see religion and science as institutions is to see them as a part of human life and human history, without denying either that (a) human life and human history are a part of natural history, or that (b) natural history may be only one kind of history or reality among others. In other words: the institutional framework is agnostic about which of the two, religion or science, is “right” when they conflict; nonetheless, the framework allows for cases of competition to be identified, analyzed, and explained. Might the framework not also suggest ways in which such conflicts could be resolved?

As Bruno Latour notes in his We Have Never Been Modern (1993), cultural anthropologists have long conducted studies of foreign peoples without denying either (a) the materiality of their artifacts and practices, or (b) the reality of the putative objects of their languages and conceptual schemes. The anthropologists are characteristically ontologically agnostic, both on practices and on deities. They are fully symmetrical in their treatment of the “scientific” and the “mythic,” the “material” and the “spiritual.” As a result, they’re able to see human beings as animals with access to the divine (without settling, or even attempting to settle, issues about the ontological status of “animality” and “divinity”). They adopt the same self-distancing from “truth” that Magritte’s fish symbolizes. They recognize the human, and seek an explanation for both the sciences and religion on that basis.

Image from http://thankgodforevolution.com/node/2079

In Defense of Analytic-Continental Crossovers

As the old saying goes, “There are two kinds of people in the world: those who believe that there are only two kinds of people in the world, and those who know better.” Apart from any lessons this statement might have for the liar paradox, it bears considering when confronting the issue of the so-called “analytic-continental” divide in 20th- and 21st-century European and American philosophy. I won’t attempt to summarize that contested distinction here — there are discussions of it all over the place; Gary Gutting’s NYT op-ed is a good place to start — but I wanted to say a bit about why I’m personally interested in work in both areas, and how I conceive of the relation between analytic and continental philosophy in my own work.




From Sticky Embraces, author of the very funny philosophy blog, Hugging the Horse: http://stickyembraces.tumblr.com/page/2

Though it’s still obviously an oversimplification, we might start by dividing professional philosophers into four categories rather than the original two:

(1) those who work entirely (or almost entirely) in analytic philosophy
(2) those who work entirely (or almost entirely) in continental philosophy
(3) those who work almost entirely in neither (for instance, some of those in Asian philosophy), and
(4) those whose work spans the divide

The division is still insufficiently nuanced, of course. Some groups of professional philosophers that it arguably doesn’t capture very well include those working in “Philosophy of X” sub-disciplines (such as philosophers of science or philosophers of art) whose work is anchored in study of and reflection upon X. Such philosophers may engage with both analytic and continental treatments of their main subject, without having to identify as analytic or continental thinkers themselves. The same is true of historians of philosophy.* Also, analytic and continental philosophy might be distinguished along several different lines, of which an initial tally might include the methodological, the thematic, and the ancestral (that is, in terms of the authors and authorities considered “canonical”: for instance, Frege for the analytics, Heidegger for the continentals).

*The historian of German philosophy Robert Pippin, for instance, discusses both Strawson and Heidegger as secondary source materials for making sense of Kant, in Kant’s Theory of Form: An Essay on the Critique of Pure Reason (Yale, 1982).

How do I conceptualize the relation between analytic and continental philosophy (including characteristic methodologies, themes, and ancestries of each) in my own work?

When I look over the history of philosophy from Plato to the present day, the philosophers whose methodologies I admire the most, and would like most to imitate, are those who effectively and instructively combine rigor of argumentation with breadth of vision. From this perspective, figures like Plato, Aristotle, Leibniz, and Peirce are most impressive to me. Frege thought and argued very rigorously — as rigorously as Peirce, or more so — but his philosophical vision was much, much narrower. Dewey, on the other hand, was broader than Peirce, but hardly as rigorous. Russell was both rigorous and broad-minded, but he never (as far as I can see) managed to connect his rigorous treatments of logical and epistemological issues with his broader interests in society, politics, and religion. (In this way, he resembled his great ancestor in British philosophy, John Locke.) Analyses of this sort could, of course, be continued. Let me clarify that I only mean for this “rigor + breadth” formula to describe my own ideal for a philosophical methodology. I don’t take it to describe the only valid kind of work in philosophy, nor do I take it to (by itself) give much guidance regarding what substantive philosophical positions should be adopted.

20th- and 21st-century analytic philosophy provides an extraordinary pool of resources for enhancing the rigor of a course of thinking and argumentation. It’s thus an excellent resource and sets an indispensable criterion for anyone seeking to follow a rigorous philosophical methodology. At the same time, the effort to achieve and defend an overall philosophical vision will inevitably be restricted if one refuses to engage with the accumulated intellectual experiments of 19th-, 20th-, and 21st-century continental philosophy, including Marx, Nietzsche, Heidegger, Gadamer, Merleau-Ponty, Foucault, and many others.

From this perspective, some familiar criticisms of continental and analytic philosophy take on a new appearance. Analytic philosophy is sometimes said to be narrow, provincial, and empty of experiential content or cultural implications. Continental philosophy is sometimes said to be insufficiently careful and articulate about the inference structure of its arguments. This perspective also allows us to see a possible philosophical advantage to engaging deeply with both analytic and continental traditions: an enhanced ability to practice philosophy rigorously, articulately, boldly, and imaginatively, all at the same time.

The Meaning of Metal

Like many Americans of my generation, I have a lot of experience with the genre of music called heavy metal — represented by bands like Black Sabbath, Slayer, Metallica, and Linkin Park. When I was in high school, I knew a lot of heavy metal fans and I attended a number of concerts. As I got older, however, my musical tastes moved away from metal. At the time, I thought it was because I had grown tired of what I perceived as limitations of the genre: its focus on the emotions of anger and fear, on the topics of death and destruction, and its seeming inability to express a wider range of emotions and thoughts than these.

Last week, however, as I was flipping through radio stations, I heard the opening chords of “Enter Sandman” by Metallica, and I decided to give this old metal classic a listen. For those who don’t know, “Enter Sandman” was the opening track of Metallica’s Black Album, an early-90s album that might be the most famous heavy metal album of all time.* “Enter Sandman” itself is an excellent and unusually accessible example of a song in the metal genre — hence its continued radio play more than two decades after its original release date.

While listening to this song, I had a revelation that changed my perception of the metal genre entirely. What I realized is simply this: Heavy metal is valuable precisely because it puts emotions like anger and fear at the center of its emotional landscape. In this way, it is a counterweight to the false “positivity” and shallow sentimentality of so much other popular music. And, for all of these reasons, it has a strong affinity (or, at least, an analogy) to those modern intellectual traditions that have also emphasized the significance of such dark and difficult emotions, as well as diagnosed the reasons for the avoidance of these emotions by so many widely accepted interpretive frameworks. I am thinking in particular of Arthur Schopenhauer, Friedrich Nietzsche, Sigmund Freud, and the existentialism of Camus and Sartre.

I encourage readers to take another listen to Metallica’s “Enter Sandman” from this perspective. The song includes the voice of a young boy saying a conventional Christian prayer that includes the words, “If I die before I wake / I pray the Lord my soul to take.” The lead singer then sings “Hush little baby, don’t say a word / And never mind that noise you heard / It’s just the beast under your bed / in your closet, in your head.” Here the child is being told (truthfully) that there are beasts in the world: other people can be beastly, the ways of the world can be beastly, and we ourselves harbor beasts in the unexplored or unacknowledged parts of ourselves (what Freud calls the unconscious). The world is not (only) a playground of sun and light; it includes dark, difficult, cruel, destructive forces (what Freud calls the death drive). Near the end of this verse, the volume and intensity of the guitars rise until we reach the chorus, at which point the rhythms and harmonies suggest a shift from the verse’s vulnerability to a manifestation of some unexpected new power, even a kind of security maintained by the forcefulness and violence of the deeper and darker position at which we’ve just arrived. And the listener hears the sung refrain: “Exit light / Enter night / Take my hand / We’re off to never-never land.” In other words: I will lead you into the hell of the unconscious, where fear and anger are among the most powerful forces, so that you can master these and become a stronger person.

I still think of heavy metal as just one genre among others, with its own conventionally-based limitations. But I can better appreciate now the value of its message. It helps us to notice and to more creatively orient ourselves to the tragic, aggressive, and destructive undercurrents that are an unavoidable and important aspect of reality itself.

*It’s generally thought that the genre began with Black Sabbath in the 1970s — who were the first to take the longer-standing tradition of hard rock, inclusive of groups like Led Zeppelin and the Rolling Stones, in the “darker” and “heavier” direction that defines metal. Sabbath was followed by Slayer, Anthrax, Megadeth, and Metallica, among others, but Metallica seems to have been the band from this early era with the greatest productivity and staying power (as judged by popularity and name recognition) in the 1990s and 2000s.

Animals, Species Identity, and Personality

Captioned cat photos, stuffed animals, cartoons, and shamanism are just a few examples of the ways that human beings personify animals. They’ve been doing it for a long time — perhaps for as long as there have been human beings at all. And when we personify animals, we suppose that it is their species that gives them some, but not all, of their personalities. Our dog Donner is always hungry, always loyal — our cat Caitlyn is always aloof and sly. Yet Donner is unusually quiet for a dog, and Caitlyn is unusually playful with water for a cat.


Navajo Kachina Dancer Dolls – Bear – from http://greywolftradingpost.com/kachinas.htm

In this way, we include the species identity of Donner and Caitlyn within our conception of their personal identity. Now imagine the way that we humans would look to a dog or cat.* Wouldn’t they include our species identity within their conception of our (individual) personal identities? From this perspective, wouldn’t they sometimes laugh at how “typically human” certain human behaviors really are? Wouldn’t they recognize (for instance) a proud investment banker and a humble small-town farmer — whom we might think of as having very different personalities — as different, yes, but also both typically human in many ways? We might even imagine these animals laughing at the way that, even when human beings do something unusual for the human species as a whole (say, tightrope walking, scuba diving, or airplane flying), the humans involved use their human bodies to do it. From this perspective, the image of a human being scuba diving or airplane flying is actually a little ridiculous. Perhaps an ironic approach to our species-typical characteristics — the inevitably “human, all too human” remainder that accompanies all of our extraordinary conduct — is just what the doctor ordered, at the onset of what some have described as a “posthumanist” age.** Another conclusion one might draw from this thought experiment is that the notion of “human nature” may have an inning even after certain classical humanist, anthropocentric, and “essentializing” tendencies have been overcome.

*This assumes that we can “look like” something to them — that is, that humans share some cognitive competence and phenomenological experience of the “looking like” sort with dogs and cats. I think the assumption is eminently reasonable, though it is of course controversial. Readers who object to the assumption can follow the argument by considering what human beings would look like to a non-human species to which things could “look like” something.

**See, for instance, Cary Wolfe, What is Posthumanism? (University of Minnesota Press, 2010) and Rosi Braidotti, The Posthuman (Polity Press, 2013).

What is the Value of Historical Thinking?

According to a certain way of thinking about history and present-day life, historical research has little to no practical value, since its subject matter (the past) is, by definition, “dead and gone.”* From this perspective, historical inquiries concerning the emergence of written language, or the French Revolution, have no contemporary relevance; they are merely divertissements for those peculiarly curious about them.


Jacques Louis David – “Napoleon Crossing the Alps” (1803)

This way of thinking is profoundly oversimplified. We human beings are “always already” historical beings, in the sense that we are always oriented to the past in one way or another.** And how we are so oriented makes a difference to our everyday conduct. A person who believes that the French Revolution was motivated primarily by economic interests will behave differently than one who believes it was motivated by moral ones; and this is the sort of question that careful historiography helps to clarify (generally by complicating one’s initial imagination of the past). Granted, this difference is sometimes quite subtle: of the magnitude, for instance, of the difference it would make to the average U.S. citizen’s day-to-day behavior if he or she believed that the continent of Europe was situated in the Southern rather than the Northern Hemisphere.

This orientation takes place through (1) the present-day conditions that the past has brought about, (2) our ideas about what the past has been like, and (3) our ideas about how present-day conditions have been brought about.*** It is precisely this orientation to the past that historical inquiry, and above all well-crafted historical argument, stands to destabilize and change.

A deeper and more nuanced historical sensibility about some set of past events enables a deeper and more nuanced orientation to those parts of the present world that these past events affect.

___

*I borrow the phrase “dead and gone” from Allan Megill, who employs it as the name of one of four modes of historiography in Megill, Historical Knowledge, Historical Error (Chicago U.P., 2007).

**The term “always already,” and the concept of a state in which something always already is, are borrowed from Martin Heidegger, Being and Time (1965 [1927]).

***For a brilliant analysis of the variety of (interconnected) ways in which the past, and our awareness of the past, affect our present, see Hans-Georg Gadamer, Truth and Method (1960).

The Question(s) of Human Nature

There is more than one question of human nature. As has often been noted, discussants of “human nature” employ the term in a wide variety of different ways, and this ambiguity easily leads to confusion.*

*Such confusions have been noted and discussed by many: for instance, in David L. Hull, “On Human Nature” (1986), reprinted in Hull, The Metaphysics of Evolution (U. of Chicago Press, 1987); Susan Oyama, Evolution’s Eye (Duke U.P., 2000); Evelyn Fox Keller, The Mirage of a Space Between Nature and Nurture (Duke U.P., 2010); Helen Longino, Studying Human Behavior (U. of Chicago Press, 2012); and Roger Smith, Being Human (Columbia U.P., 2007).

For instance: When asking “What is human nature?,” are we asking for the human essence (nature as Wesen or ousia) – that is, those features or qualities that are only and always true of human beings? Or are we asking for an identification of that part of human life that is attributable to nature (as opposed to the non-natural)? And, if this last is our aim, what is our intended contrast class – that is, the “non-natural”? For instance: do we mean the natural as opposed to the cultural, or as opposed to the super-natural, or something else? Or, yet again, are we asking for a description of the general or typical tendency of human beings (something like nature as physis)?** Or are we asking for a tally of human universals – those aspects of human life (if any) that appear regardless of geographical, cultural, or other peculiarities whenever or wherever human beings live their lives? And, if this last, how do we determine what cases to include or exclude from such universal claims? What unusual cases – feral children, for instance – do we exclude or at least qualify by ceteris paribus clauses or similar devices as we draw up our tally – and how, precisely, do we do this?

** In an essay on essentialism from 1980, Elliott Sober analyzes this sense of “nature” under the heading of the “natural state model,” appropriately associated with Aristotle’s views of natures (physis).

Conflating these senses leads to comical inferential non-sequiturs, as in the following passage, appropriately ridiculed by Hull (1986): “[O]ne trait common to man everywhere is language; in the sense that only the human species displays it, the capacity to acquire language must be genetic” (Eisenberg 1972, 126, cited in Hull 1986, 386). Hull’s own commentary on the passage makes the problem entirely clear: “In the space of a few words, [the author] elides from language being common to man everywhere (universality), to the capacity to acquire language being unique to the human species (species specificity), to its being genetic” (Hull 1986, 386-7).
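To spell out the elision in standard first-order notation (my own schematic gloss, not Hull’s or Eisenberg’s): let H(x) mean “x is human” and L(x) mean “x has language.” Then:

universality: ∀x (H(x) → L(x)) (all humans have language)
species specificity: ∀x (L(x) → H(x)) (only humans have language)

These are converse conditionals, logically independent of one another, and neither says anything about whether the capacity in question is genetic; the quoted inference slides across all three claims as if they were one.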

Due to such ambiguities, it makes more sense to speak of the “questions” of human nature than of the question of it. But this ambiguity has another important consequence: the question appears, for this reason, deceptively difficult to resolve. Answers that appear decisive from a precisely articulated point of view – for instance, Hull’s treatment of the question of a biologically-supported concept of an essential human nature – nonetheless fail to foreclose “the” question insofar as they fail to satisfy inquirers concerned with other aspects of the conceptual constellation that has formed around the term “human nature.”

Efforts to develop a full “philosophy of human nature,” or what is sometimes called a “philosophical anthropology,” seem to have foundered on the sheer magnitude of the task, producing analyses that are disappointingly thin*** or unrealistically concise and principled. (The latter error appears to affect nearly the entire tradition of reflection on the question from Aristotle forward.) Nonetheless, if the various meanings of the question of human nature are carefully distinguished, the effort to answer each of them individually is not an absurd one. The picking-apart of the questions, the careful attempt to answer them, and the evaluation of other thinkers’ attempts to answer them would all appear to be legitimate pursuits for philosophers.

*** Unfortunately, this weakness characterizes much of the contemporary introductory literature on this topic – for instance, Howard P. Kainz’s Philosophy of Human Nature (Open Court, 2008), Joel J. Kupperman’s Theories of Human Nature (Hackett, 2010), and Leslie Stevenson and David L. Haberman’s Ten Theories of Human Nature (Oxford U.P., 1998). On this topic I much prefer the shorter and denser treatments of Richard Schacht, “Philosophical Anthropology: What, Why, and How,” Philosophy and Phenomenological Research (1990) and Sami Pihlström, “On the Concept of Philosophical Anthropology,” Journal of Philosophical Research (2003), as well as the literature listed in the first footnote above.

Here I’d like to contribute to such a project by seeking to carefully discriminate some of the various senses in which human nature might be asked about. The procedure I will employ loosely follows Roger Smith’s in Being Human (Columbia U.P., 2007): a distinction between meanings of the English word “nature,” which will then be read into the question of the meaning of “human nature” itself.

1. Nature-as-opposed-to-non-nature

If the question is about the natural-as-opposed-to-non-natural components or aspects of human life, the answer requires some standard for estimating what is and is not a part of nature. Of course, the latter question is controversial. Many would say that everything that happens is part of nature. Such a position entails that everything human beings do or are is part of human nature in that sense.****

**** This, incidentally, was the all-inclusive sense of nature that Descartes distinguished, in Meditation VI, from the more restricted, Aristotelian sense of nature as usual (“normal”) tendency. Meditations, Objections, and Replies (Hackett Publishing, 2006 [1641]), 47-48. Descartes construes the Aristotelian conception as “extrinsic,” that is, “merely a designation dependent on my thought,” whereas the other is “really in things and thus is not without some truth” (48).

Yet, for non-naturalists and some “qualified” (say, “non-reductive” or “soft”) naturalists, “natural” descriptions and disciplinary perspectives form only a subset of all epistemically-acceptable descriptions and disciplinary perspectives. For such thinkers the question, “What part of human existence, life, and behavior is natural?” could be rephrased as the question, “What part of human existence, life, and behavior is treatable by (or appears within) naturalistic disciplinary perspectives?” For instance: What aspects of human forms of life are epistemically accessible to physics, chemistry, and biology? How do human beings appear from within these perspectives? These questions lead intuitively to a further question, which presents a strategy for delimiting naturalism itself: What real aspects of human forms of life, if any, are not epistemically accessible to naturalistic perspectives (or, to particular kinds of naturalistic perspectives), and why?

How are we to decide between the broadly naturalistic and non-naturalistic perspectives just described? I’m not at all sure. My point is just that there is a legitimate future for the question of human nature beyond the kinds of semantic ambiguity, coupled with metaphysical disagreement, just recounted.

2. Nature-as-physis

Another of our identified senses of the human nature question is the one wherein “human nature” is synonymous with “typical tendency, barring other intervening factors.” Note that despite the association of this notion of nature (nature as physis) with Aristotelianism and hence essentialism, there is nothing self-contradictory in supposing that nature of this kind is changeable or transformable. Even under the assumption of nature as physis, there may be changes of nature (for instance: changes of gene-frequency, or changes of typical function) as well as changes that override nature (as when a natural tendency is diverted or destroyed, and thus the mode of existence of the changed thing or organism ceases to be classifiable as “natural”). Human nature may be largely an empty set, in that human life may have no “typical tendency” or character; or it may be that a great many of its “typical” tendencies or characters are nonetheless also changeable.

3. and 4. Nature-as-essence

Finally, one might mean human nature-as-essence. The question of human essences is really a combination of two questions, however: 3. What is distinctive about human beings? and 4. What is common to all human beings? The former question is a long-standing concern of comparative psychologists (which I will discuss in more detail in a later post); the latter, of cultural anthropologists and historians of (relatively) exotic civilizations.

As is well-known, universal claims have the peculiar logical feature that they can be refuted by a single counter-instance. We might seek to immunize a theory of human universals against such counter-instances by construing human universals as a long series of conditional statements such as, “When the number of commodities of a certain kind brought to market is relatively low, and the demand is relatively high, the price of those commodities will be relatively high,” or “Suicides are (ceteris paribus) more common in times of peace than in times of war.”*****

***** These examples are inspired by economist Alfred Marshall’s supply and demand curves, and sociologist Émile Durkheim’s famous book on suicide, respectively.

But each of these “universals” will have exceptions, themselves not easily captured (without exceptions of their own) by any single set of conditional statements. As a heuristic formal description, the series-of-conditionals model helps us understand the kind of reasoning some advocates of human universals would like to engage in. But its difficulties raise the question of whether an exhaustive tally of such descriptions can fairly be expected, and, if not, why we should hold out hope for a theory of human universals at all, except in some appropriately qualified sense. A similar problem threatens the prospects of many universalisms, whether naturalistic and Darwinian or more purely logical (as in the case of, say, Hegel or Lévi-Strauss). Given this difficulty, one might relax the requirement of a “universal” characterization of human beings to the requirement of a “general” characterization.****** In any case, the intended meanings of our claims about universal characteristics of human life are often more nuanced and qualified than is represented by universal statements, or any set of conditional statements, so rigorously construed.

****** Some resources for thinking about the logic of general claims like these are provided by the first few chapters of Michael Thompson, Life and Action (Harvard U.P., 2008).
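The logical predicament can be put schematically (again, my own notation, not that of the authors cited above). A strict universal claim about human beings has the form

∀x (H(x) → P(x))

and is falsified by a single counter-instance. The conditionalized, “immunized” version has the form

∀x ((H(x) ∧ C1(x) ∧ … ∧ Cn(x)) → P(x))

where C1, …, Cn are the ceteris paribus background conditions. The trouble registered above is that each Ci tends to harbor exceptions of its own, so the tally of conditions threatens to regress rather than to terminate.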

Anthropological and Historical Definitions … and the Question, “What is Philosophy?”

As a professional philosopher, I’m often expected to have something to say about definitional questions. When it comes to those definitional questions that concern human practices or disciplines – questions like, “What is Art?” “What is Science?” and “What is Philosophy?” – I find it useful to distinguish between what we might call “anthropological” and “historical” strategies of definition.* I’m unsure whether this distinction applies to other concepts as effectively, or what kinds of changes to the distinction might be necessary to make it apply (but see my recent post on “natural kinds” for some thoughts on this). In this post, I will attempt simply to clarify the distinction as it applies to human practices, taking the familiar (at least to me!) case of “philosophy” as my example.

* I came to notice this distinction, by the way, in the process of reading Stephen Davies’ and Cynthia Freeland’s excellent introductory texts in the Philosophy of Art: here and here, respectively. What follows may be read as a distillation and commentary on a distinction they recognized before me, as well as an extension of that distinction to definitional questions other than that of “Art.”

The historical strategy of definition attempts to track the changing status and role of a practice throughout its history. “Etymological” definitions are an attempt of the “historical” sort, but obviously incomplete by the standards of full historical consciousness: the origins of something do not include or necessitate that its course or its end will be any particular way. Employing this strategy also involves issues of historical reconstruction, and therefore hermeneutics, when we attempt to “recapture” and make sense of this history.

The anthropological strategy of definition attempts to say what the practice in question “essentially” is: What, if anything, are the common roots, causes, and types of the practice? Of course, “essentialism” (whether Aristotelian, Kripkean, or otherwise) is a problematic and controversial position. It also raises the possibility of a critique of one or another instance of the practice in question, on the basis of those allowable features (per its “essential” definition) that it does not instantiate. Again, there is a hermeneutic dimension here.

Having distinguished these strategies of definition, I’d like to propose two controversial theses about them. First, that efforts to define human practices today will find both of these strategies of definition indispensable. Second, that neither strategy may successfully be employed independently of the other. Anthropological commitments have a rightful claim to inform historical views; historical views have a rightful claim to inform anthropological ones.

As promised, we will now take “Philosophy” as an example.

I. Criteria of a Satisfactory Answer to the Question, “What is Philosophy?”

Following the first of the controversial theses, I would argue that any satisfactory answer to the question, “What is Philosophy?,” will include orientation to anthropological commitments about what kind of practice philosophy is, and to an historical account of the traditions that issue in what we today call “Philosophy.” It will also involve a hermeneutic dimension in regard to both strategies, a fact that could be captured by saying that the definition will be both a creative and constructive activity (that is, the definition itself will be partly a recommendation and a proposed organization of semiotic space), and it will be evidence-responsive – including, for instance, an interest in paying due consideration to the following:

(1) the motivations for asking the question, which force attention to “common language” and historical tradition aspects of meaning-fixation

(2) the principle-of-parity, which forces inclusion of extra-traditional elements into the tradition and thus suggests the existence of at least some cases wherein “anthropological” strategies of definition make fair claims of authority to revise definitions arrived at by “historical” means alone

(3) those evidence bases and commitments (natural-scientific, religious, political) that we don’t want to (otherwise) abandon, or could only abandon here on pain of contradiction.

II. Attempt at Substantive Answer to the Question, “What is Philosophy?”

From an etymological-historical perspective, philosophy began with the ancient Greeks, where it was identified with the project of learning for the sake of learning. The effort to carry out this project soon raised procedural questions, however, which transformed the identity of the practice in question. These questions included, “How does one, or how can one best, learn in the most comprehensive and effective, or ‘best,’ sense? Indeed, how should we determine what is ‘best’ here?” These procedural questions were soon deemed as fundamental to the project as its pre-procedural, substantive aims (since the value and effectiveness of the pre-procedural aims were seen to be dependent, in a way, on how the procedural questions were answered – see Plato’s Republic for support of this claim and others in this paragraph). At this point, various features that are still identifiable in contemporary representatives of philosophy took shape within the tradition:

(1) A comprehensive hunger for knowledge.
(2) Concern with “meta” and “reflective” (what I’ve so far called “procedural”) questions.
(3) Commitment to responsiveness to “reason” – that is, to objections brought from any quarter
(where “any quarter” is meant both sociologically and ideologically)

This practice was then carried, self-consciously (as tradition) through a variety of instantiations, including the Hellenic, medieval (Arabic, Jewish, and Christian), early modern, and late modern phases. In the views of various contributors to philosophy, these three features were differently emphasized or distributed, but all three were retained as strong possibilities within the tradition. Thus, medieval philosophers toyed with the idea of a domain of reasons that was fundamentally non-rational – what is variously called “faith” or “revelation” – while retaining, at an institutional level, the open-endedness of this particular question. Modern empiricism, various “subjectivisms,” and Lebensphilosophie did something similar. Regarding “comprehensiveness” of attention, some philosophers retained this in a very deliberate and explicit way (for instance, Hegel), while others combined this kind of commitment with a strong sense of the finitude of human life, thus adopting a “generality” and (inevitably relatively superficial) “breadth,” rather than full comprehensiveness. Hume, for instance, once wrote of a personal distaste for everything besides “philosophy and general learning,” which distaste was sufficiently strong to motivate him to live modestly on a modest inheritance and avoid the necessity of working. Philosophy and general learning are thereby, in the taste and language of the Edinburgh philosopher as in other places, very closely associated. Sketching a degree of comprehensiveness, generality, and rigor somewhat intermediate between Hume’s and Hegel’s, Wilfrid Sellars once described the philosophers’ characteristic concern as with “how things, in the widest sense of the term, hang together in the widest sense of the term.”

My suggestion is that these three features are a reasonable “cluster-concept” characterization of the practice of Philosophy, anthropologically defined, where Philosophy is understood as whatever is (i) linguistically and historically continuous with this tradition; or, (ii) was linguistically identified as “philosophy” at one or another time in the past; or, (iii) is or was sufficiently anthropologically analogous to our own conception, to support (via parity arguments) our own contemporary identification of the instance in question as one of Philosophy. These three features are selected, specifically, as ones that a wide variety of present-day practitioners, as well as the most plausible and widely-recognized predecessors or ancestors of the present-day practice, would recognize as philosophical. These criteria have been roughly shared among quite a wide range of practitioners, for several thousand years – namely, wherever Greek civilization made a mark and the accounts of philosophy in Plato, Aristotle, and the Hellenic commentators served as a constraint on the interpretation of the meaning of φιλοσοφια. Until the 19th century, this included at least groups in Europe, broadly construed, and a substantial part of the Middle East. Today it includes groups of people all over the world.

The historical narrative that connects us to the baptismal origins of Philosophy eventually comes – in the course of developments spanning the 17th through 19th centuries – to the point of enabling the identification of practices in non-Western contexts as also instances of Philosophy. In other words: The history of Philosophy itself includes a moment wherein the history is discovered to be insufficient to define and delimit the concept of Philosophy itself. Philosophy is discovered to be an anthropological as well as a historical phenomenon. Those who were familiar with Philosophy, within the so-called Western tradition, came to know of texts, traditions, and practices largely historically unconnected to this tradition, and to identify these practices as also “philosophical” by virtue of their similarity to the practices familiar in the West. Thus Schopenhauer, as is well known, was impressed by the philosophy of the ancient Indian Upanishads. The existence of a Chinese philosophical tradition was another early realization along these lines. Since that time, African and Native American traditions, among others, have also been recognized and studied. (See, for instance, the work of Kwame Gyekye on the Akan conceptual scheme; Claude Lévi-Strauss’s La Pensée Sauvage [The Savage Mind]; and Keith Basso’s Wisdom Sits in Places on the Apache.)

Undoubtedly, further awareness of these formerly disconnected historical and cultural situations in which philosophy had taken shape had an effect on the subsequent history of philosophy itself. To some extent, these various traditions have had the opportunity to merge into a single tradition, albeit (of course) one that remains relatively easily capable of demarcation into a great variety of separate conversations.

Fake Carnage and Real Carnage

In the USA, we watch action movies for fun, confident that the carnage they portray is something that never really happens: buildings explode, but the good guy escapes, rescues the girl, saves the innocent townspeople from the bad guys, and everyone leaves the theater with a smile on their face.


Promotional photography for the film A Good Day to Die Hard (2013), directed by John Moore and starring Bruce Willis (depicted here)

In real wars, buildings really do explode, and the good guys don’t always (or even usually) escape. Innocent women and children die by the hundreds or thousands in such explosions on a daily basis. If you’ve ever experienced the unexpected death of someone emotionally close to you, then you know about the irreplaceable hole that their absence leaves in your daily existence. In extreme cases (for instance: the death of a spouse or a child), you may ask yourself, “How can I possibly continue?” and “What possible meaning could there be to my own life?” On the basis of this experience, one can begin to imagine what it is like to be the victim of a real war – not the imaginary war that is portrayed in the movies. What is it really like, for instance, to have dozens of the people closest to you – your family, friends, and fellow villagers – ripped from the world forever, blown to pieces in the carnage of war? Would you ever forgive the attackers? Could you ever survive, as a person, beyond this utter rape and destruction of the most basic sources of meaning of your life?


[1st image from http://bangordailynews.com/2012/01/08/politics/iraq-war-second-costliest-ever-to-fuel-debt-for-years-to-come/, accessed 9-10-2014]
[2nd image from http://www.petitionbuzz.com/petitions/bushandblair, accessed 9-10-2014]

In the past week, I spoke with several Iraq war veterans. Their frank recountings of their experiences in Iraq convinced me of two very troubling things:

(1) These ex-soldiers were fully aware of the horrors of war.
(2) Most Americans have no idea what such soldiers have experienced – and, relatedly, most Americans have no appreciation of the horror of war.

I am myself a non-veteran, and item (2) applies as much to me as anyone else (though we are all able to imaginatively identify with the victims of war, in accordance with the thought-experiment I described above).

[from Islamic State video, as posted at http://rt.com/news/176528-islamic-state-iraq-video/, accessed 9-10-2014]

U.S. officials and journalists are currently discussing the possibility that the newly formed Islamic State could carry out anti-U.S. terrorist campaigns on U.S. soil, utilizing the border between the U.S. and Mexico to gain access to the country. As security forces scramble to defend the border, a great many Americans have probably never heard of the Islamic State, or know next to nothing about its origins and aims. We are currently a country of extraordinarily superficial minds. Compare the attention that the average American gives to their cell-phone, their car, their favorite music, or their online dating profile, on the one hand, to the attention that they give to current affairs, the history of human civilization and human conflict, the intellectual and cultural foundations of their own nation’s form of government (such as the Constitution), and philosophical questions like, “What is democracy?” “What is the best form of government, and why?” “What makes a human life more or less meaningful to the person living it?”, on the other.

If war hit the U.S. directly today – as it has eventually hit the territory of every empire in every part of the world for all of human history (see the histories of China, Rome, France, Italy, Germany, Japan, UK, and so on) – would Americans be even remotely prepared for it? Would they know how to respond tactically? Even more troublingly – would they know how to respond philosophically? Would they know what was worth fighting for, and what side to fight for? Or would they be so shocked that their friends and family, their public institutions, their cell-phones and their internet, had been ripped from their hands, that they would stare blindly into the black hole formed by the sudden absence of everything that they had ever known or ever cared about, and would simply whine and cajole and become “depressed” until their addictions were returned to them?


[from http://www.blackpressusa.com/2014/02/cellphones-may-accelerate-nj-online-gambling/, accessed 9-10-2014]

We must seriously consider the possibility that Americans are a people currently tethered to illusion. Their dependence on illusion is existential: Without the illusions provided by cellular phones, video games, television and film, they would literally have no idea who they are, and no idea of reality. Older Americans are addicted to the myth of American exceptionalism and the good old “downhome” ways; younger Americans are addicted to soft drugs and their cell-phones.

It would be self-contradictory to fault mainstream Americans for making their lives meaningful as best they can, in the ways that are available to them: I began this post by noting that the unfathomable tragedy of war is that it fatally severs such meaningful human relationships. But I am concerned that Americans’ habit of immersing themselves in the comfortable insularity of yesteryear, or the radical superficiality so abundantly (and profitably) made available to them by the design-and-marketing geniuses of Disney, Apple, Microsoft, Walmart, Nintendo, And So On, will result (if it hasn’t already) in an irremediable impoverishment of the American soul. It also portends a quick and lemming-like demise of their castles made of dreams, if and when those stronger, tougher sandstorm winds of the East blow in.

How to tell if you’re having a legitimation crisis

I’m reading Habermas’s 1973 book, Legitimation Problems in Late Capitalism (translated as Legitimation Crisis), and wondering whether the United States is currently undergoing a legitimation crisis (as well as a “rationality crisis”) in Habermas’s sense. Thoughts?

I have my own suspicions about Habermas’s positions in that book, particularly his sense of the import of “motivation problems.” But I’m impressed by the apparent relevance of the various categories of social crisis that he delineates and relates to one another. The analysis is, of course, built on a Marxist framework (the notion of the “crises of capitalism”), but Habermas updates the thesis for later developments, including the growth of administrative authority, the phenomenon of “class compromise” (basically, class conflicts mediated and lukewarmly/incompletely resolved through governmental policy), and the contribution that a bourgeois “achievement” morality, supplemented by cultural institutions like Protestantism (as Weber famously suggested), has made to the acceptance of capitalism among the middle classes.