Each Generation Has Something Valuable To Offer

The Bicentennial of the American Revolution ought to be a time for restoring the dialogue between the spirit of the past and the spirit of the future in our national life. We commemorate our origins because our origins are intertwined with our destiny; memory is the reciprocal of hope, and conservation and change are essential to each other. “There is nothing real without both . . . ,” as Alfred North Whitehead once said. “Mere conservation without change cannot conserve . . . , mere change without conservation is a passage from nothing to nothing.”

The dominant tense in America has been the future. The nation began in revolt not only against the British Empire but against the empire of the past. It began with a fundamental commitment to redeem man from history, with all its accumulated guilts and terrors, and to place him in possession of himself. Nature eclipsed history as the director of human affairs. A curious national tradition arose, one whose libertarian principles contravened the force of tradition itself. In “the American Creed,” as Gunnar Myrdal once reminded us, “the principles conserved are liberal and some, indeed, are radical.” So, paradoxically, successive generations of Americans freely legitimated change at the cutting edge of the future without changing much of anything in the venerable core of national values and goals. They were radical in 1776, and so they may still be; but in celebrating them we are all conservatives.

From the beginning, of course, there have been different orientations, prospective and retrospective, reforming and preserving—a party of hope and a party of memory—and much of our history centers on the conflict between them. Thomas Jefferson stood at the forefront of change during the Revolutionary Era; through him the idea of progress entered into the American democratic ideal, and he became the paramount symbol uniting the nation’s promise with its revolutionary birth. When he was 70 years of age, reviewing the great ideological conflict of his time for the benefit of John Adams, Mr. Jefferson saw it fundamentally as a conflict between the friends and the enemies of enlightened progress. “One of the questions you know on which our parties took different sides, was on the improvability of the human mind, in science, in ethics, in government, etc. Those who advocated reformation of institutions, pari passu, with the progress of science, maintained that no definite limits could be assigned to that progress. The enemies of reform, on the other hand, denied improvement, and advocated steady adherence to the principles, practices, and institutions of our fathers, which they represented as the consummation of wisdom, the acme of excellence, beyond which the human mind could never advance.” It was this faith that lay behind the most radical idea in the Jeffersonian catalogue: “the sovereignty of the living generation.”


Like most ideas this one germinated for some time before it came to birth. Jefferson was led to formulate it in September 1789 in the course of reflection on events of the preceding months inaugurating the French Revolution. As United States Minister to France, he observed these events closely; as a philosopher and friend of democratic revolution, he was more than a detached observer. In liberal circles at Paris he stood as the oracle of the revolutionary nation that inspired France. His advice was sought, and he gave it. He wished for France all the blessings of freedom and self-government, such as the Americans possessed, but cautioned that the country could not go from despotism to liberty all at once. Everywhere, during a residence of five years abroad, Jefferson saw the heavy hand of oppression. “The truth of Voltaire’s observation offers itself perpetually,” he wrote, “that every man here must be either the hammer or the anvil.” Once while on a country walk he fell into conversation with a poor laboring woman met along the way. Her melancholy tale vividly enforced upon his mind the wretchedness produced by aristocratic privilege and the concentration of property in a few hands. “I am conscious that an equal division of property is impracticable,” Jefferson reflected on this “little attendrissement.” “But the consequences of this enormous inequality producing so much misery to the bulk of mankind, legislators cannot invent too many devices for subdividing property. . . . Whenever there is in any country, uncultivated lands and unemployed poor, it is clear that the laws of property have been so far extended as to violate natural right. The earth,” he concluded, “is given in common stock for men to labor and live on.” America, fortunately, was a long way from the European condition, yet it was not too early to introduce safeguards against falling into it.

With the French Revolution this sentiment took on the precision of an idea. It became for Jefferson the rationale for sweeping social and political reform, and he laid it out in a long letter addressed to James Madison. “The question Whether one generation of men has a right to bind another, seems never to have been started either on this or our side of the water,” he wrote. “Yet it is a question of such consequences as not only to merit decision, but [also to place it] among the fundamental principles of every government.” Setting out from the basic proposition “that the earth belongs in usufruct to the living: that the dead have neither powers nor rights over it,” Jefferson calculated the natural life of a generation during its majority. He consulted the mortality tables of the great scientist Buffon and arrived at the term of 19 years. He then gave three specific applications of the principle. First, as to property, above all landed property. Every generation had a natural right to labor on the earth. If one could “eat up the usufruct,” or withhold it from those to come, “the lands would belong to the dead, and not to the living.” Second, as to public debts. One generation could not be burdened with the debts of another. The enormous debts of the Bourbon monarchy had contributed to the French Revolution, just as those of Great Britain had earlier started the chain of events culminating in the American Revolution. Would it not be wise and just for France to declare in its new constitution that no debt could be contracted for payment beyond the term of 19 years? Indeed, would not this furnish “a fine preamble” to the first American law appropriating the public revenue? Not only, Jefferson thought, would such a provision save the people from oppressive taxes; it would also “bridle the spirit of war” by reducing the power to borrow within natural limits. Third, and most importantly, Jefferson applied the principle to the constitution and laws of government. 
“No society can make a perpetual constitution, or even a perpetual law. The earth belongs always to the living generation. . . . The constitution and laws of their predecessors [are] extinguished . . . in their natural course with those who gave them being. . . . Every constitution then, and every law, naturally expires at the end of 19 years. If it be enforced longer, it is an act of force, and not of right.”

Such was Jefferson’s idea. He conceded to Madison that it might “at first blush . . . be laughed at as the dream of a theorist.” But, on reflection, his brilliant friend would surely find it sound. It would “exclude at the threshold of our new government the contagious and ruinous errors of this quarter of the globe, which have armed despots with means, not sanctioned by nature, for binding in chains their fellow men.” Once established, the doctrine would be seen as still another instance, like the Federal Constitution over which Madison had labored, of the triumph of reason over habit in the conduct of human affairs.


These American references, as has sometimes been observed, had the appearance of an afterthought in Jefferson’s essay. The idea could scarcely have matured in America. It was formed by European realities, specifically those of France in 1789, and urgently addressed that situation. At the time Jefferson wrote, he was under the care of his physician, Dr. Richard Gem, an elderly Englishman practicing in Paris, a friend of the philosophes, an ardent champion of the revolution; and “the sovereignty of the living generation” seems to have been a favorite sentiment with him. The naked sentiment, certainly, was not unknown to 18th-century political speculation. Locke and Rousseau had made the point that all men have an equal right to the earth; Adam Smith had extended the reasoning to “successive generations of men”; and David Hume had taken up the argument in order to refute it. No one before Jefferson, however, had given form and dress to the notion. And while it is true that his theory expressed the speculative fervor—the rage against the past—more characteristic of the French Revolution than of the American, we cannot brush aside Jefferson’s American references. The essay, after all, was sent to Madison, not to Mirabeau, and he was urged to “force” the idea into discussion in American councils. It found its place readily enough within the confines of Jefferson’s political philosophy. If men are by nature free and equal, no great leap of logic was necessary to argue that generations of men, in organized societies, are also free and equal. If the people are sovereign, if government rests on their consent, then that sovereignty must be a living presence, not something which, exercised once, is dead and gone forever after.
Jefferson’s aversion to the claims of inheritance, his reforms in Virginia to make the laws work for the diffusion of property, his abiding concern to keep alive “the spirit of revolution” in the people, his commitment to the progress of mankind—for all this, and more, “the sovereignty of the living generation” might appear as the grand organizing concept. Although the French Revolution gave birth to the idea, it ought to be seen as an illustration of how that revolution enlarged and clarified Jefferson’s understanding of the meaning and the promise of the American Revolution.

Jefferson was back in the United States, about to become the country’s first secretary of state, when Madison offered his reflections on the theory. He did not laugh but gently suggested that for all its philosophical magnificence the doctrine was “not in all respects compatible with the course of human affairs,” and proceeded to a refutation that might have devastated anyone but Jefferson. Madison had earlier, in The Federalist, taken issue with his friend’s advocacy, in the Notes on Virginia, of frequent revision of the state constitution, since such continual change would “in a great measure, deprive the government of that veneration which time bestows on everything, and without which perhaps the wisest and freest government would not possess the requisite stability.” Now Madison painted a frightful picture of the hazards of an interregnum every 19 years. If the rights of property became “absolutely defunct” at the end of a fixed term, the most violent class conflict must ensue, property values would depreciate, and industry would be deprived of the encouragement offered by stable laws. Although the earth might be viewed as a gift to the living, this could be true only of the earth in its natural state, said Madison, for the “improvements made by the dead form a debt against the living who take the benefit of them.” Finally, Madison argued, there was no way the theory could be practically applied. Generations were not fixed mathematical points, as on Jefferson’s model; they were, rather, like flowing waves, changing daily and hourly as new members were added to the society and old members taken from it. The only escape from the embarrassments of the generational theory lay in the doctrine of the implied consent of the living to the constitution, the laws, the obligations descending from the dead.
None of this, Madison assured his philosophical friend, was meant to impeach the bolder truths contained in his “great plan”; but it would be some time, Madison concluded with charming understatement, before such truths, “seen through the medium of philosophy, became visible to the naked eye of the ordinary politician.”

As Jefferson himself turned his attention to the mundane affairs of the new government, he made no effort to “force” the doctrine into discussion. His great rival, Alexander Hamilton, hearing of it, thought that the doctrine aimed at the repudiation of debt and the destruction of property. But so far as it came into discussion in the ideological controversy of the 1790’s, it was in connection with the French Revolution. The leading English polemicist against the revolution, Edmund Burke, appealed to the authority of ancient laws and institutions—to the spiritual partnership of the dead, the living, and the unborn—and denounced mischievous democratic ideas which broke the bonds between one generation and another and rendered men “little better than flies of a summer.” Burke was answered by Thomas Paine, who helped to build the ideological bridge between the American and French revolutions. Paine and Jefferson had sometimes been together in Paris, and it is possible that “the sovereignty of the living generation” came into their conversation. At any rate, in The Rights of Man Paine employed the argument with withering scorn against Burke and on behalf of democratic revolution. “Every age and generation must be free to act for itself, in all cases, as the ages and generations which preceded it,” Paine wrote. “The vanity and presumption of governing beyond the grave is the most ridiculous and insolent of all tyrannies.” This great controversy reverberated through American politics, sharpened the opposing ideologies of Federalists and Republicans, and contributed to Jefferson’s own “revolution of 1800,” marked by his ascendancy to the presidency.


Jefferson never attempted to institutionalize the theory, yet never abandoned it. Indeed, he often reasserted it, as in criticism of public financial policies that mortgaged future generations and, most notably, in championing reform of the Virginia Constitution of 1776. That constitution, too conservative for Jefferson in its time, became an anachronism in the 19th century. It was unjust. Two-thirds of the adults then living had died by 1816. “This corporeal globe, and everything upon it, belong to its present corporeal inhabitants during their generation,” Jefferson insisted. “They alone have a right to declare what is the concern of themselves alone, and to declare the law of that direction.” Every constitution should be revised at generational intervals. It was the only substitute for atrophy on one side or violent revolution on the other. For “laws and constitutions must go hand in hand with the progress of the human mind. . . . We might as well require a man still to wear the coat which fitted him when a boy,” Jefferson declared, “as civilized society to remain ever under the regimen of their barbarous ancestors.” The same spirit presided over the birth of the University of Virginia, today still remembered as “Mr. Jefferson’s University.” It was by the advance of knowledge from generation to generation that the freedom and happiness of mankind were advanced, “not infinitely,” Jefferson said, “but indefinitely, and to a term which no man can fix or foresee.”

Given the delicate balance between theory and practice which Jefferson maintained in his politics, he probably never intended rigorous application of the doctrine, meaning it, rather, as a moral directive to society. And in this sense it entered into the spirit of American democracy. There is a Jeffersonian ring in Alexis de Tocqueville’s observation that in America “every man forgets his ancestors”—“each generation is a new people”—and the Burkean protest can be heard in Lord Macaulay’s solemn warning, “Your constitution is all sail and no anchor.” The doctrine was repeatedly invoked in the 19th century to justify the overhaul of state constitutions, just as Jefferson had invoked it in Virginia. Frederick Jackson Turner set forth a celebrated theory of American history which turned on the idea of extended genesis, of continuous rebirth and renewal as the frontier moved across the continent. Henry George, the great social reformer, made Jefferson the patron saint of the Single Tax founded on the natural right of equal opportunity to land. New Deal reformers in the 1930s made the doctrine part of their creed for remodeling the federal government; and one of Jefferson’s scriptures was engraved on the grand memorial erected to him in Washington. A directive toward democratic change, its appeal has generally been to liberals and radicals, yet even conservatives have found comfort in Jefferson’s denunciation of free-spending governments which burden posterity with the debts of the dead.

In our day Rexford Tugwell’s provocative book, The Emerging Constitution, has again called attention to the theory of generational sovereignty. The theory is as old as the United States Constitution, but no one before Tugwell had the audacity to apply it to the nation’s sacred covenant. The Constitution, he argues, is antiquated in concept, anachronistic in many of its provisions, and wholly unresponsive to the needs of modern society. It has no basis in the reason or will of the people; it lacks both vigor and credibility and is on the way to becoming a lifeless monument from the American past. Tugwell rejects as a shabby fiction the idea of “a living constitution,” that is, one resting on implied consent and adapted to every occasion by legislative, administrative, above all judicial improvisation—rarely, by amendment. The conception of the Supreme Court as “a constituent assembly in continuous session,” to use Woodrow Wilson’s language, is seen as an impertinence in a democracy. Such a malleable, ad hoc constitution brings the nation perilously close to having no constitution at all. Like Jefferson, Tugwell maintains that the only alternative to the subversion of the process of free government is the periodic remaking of the fundamental law. “In a society as mobile and complex as ours,” he writes, “the Constitution ought never to be more than one generation old.” Without repeating Jefferson’s mathematical calculations, Tugwell settles on virtually the same generational term, 20 years; and, going beyond Jefferson, he would void every constitution by its own clause after 25 years.


So there is still some kick in Jefferson’s revolutionary idea. The instance of Tugwell calls up the larger question with which I began of the uses of the past in America. If the American Revolution was a revolt against the past—a leap into the future—ought not its bearings for us to be liberating rather than conserving, directed toward making the new rather than saving the old? The rationality of Tugwell’s position is unassailable on Jefferson’s terms, or perhaps on those of the Founding Fathers, who would doubtless be amazed to discover that their Constitution had endured to the nation’s third century. Yet, in 1976, it is hard to imagine a more foolhardy undertaking than the formation of a new constitution of the United States; and one is tempted to reply to Tugwell as Madison replied to Jefferson. Whatever the value of Jefferson’s assault on the vaunted “wisdom of ancestors”—the tyranny of the dead over the living—at the dawn of the age of democratic revolution, it loses a good deal in the gathering twilight. The party of memory has this to be said for it: the classic forms and principles it would conserve are the surest embodiment of authority, clarity, and coherence in a frenzied time, and without these enlightened progress is hopeless. The sense of tradition, of continuity with our origins, of creative dialogue with our past may offer the strongest basis of rationality we as a nation now possess. In today’s fragmented and tormented society, where there is so little consensus of belief or even consciousness of first principles, the values and institutions received from the past provide us with the principal source of legitimacy. And so, if we affirm Jefferson, we can no longer deny Burke.

Yet the spirit, if not the letter, of Jefferson’s bold proposition still speaks to us. A generation’s sense of obligation to the past is valuable only if it serves a greater obligation to the future. As Whitehead said, “conservation without change cannot conserve.” The same philosopher once wrote, “The art of free society consists in the maintenance of the symbolic code; and secondly in fearlessness of revision, to secure that the code serves the purposes which satisfy an enlightened reason. Those societies which cannot combine reverence to their symbols with freedom of revision, must ultimately decay either from anarchy or from the slow atrophy of a life stifled by useless shadows.” This is the Jeffersonian directive. In the case of America, the symbolic code has its core in what Myrdal called “the American Creed” and traced back to the American Revolution. The challenge in this Bicentennial season is neither to exalt nor to resign the code but to reexamine, redefine, and reconstruct it so that it might answer the purposes of the future after it has answered those of the past. For unless the code serves change we are indeed faced with the prospect of atrophy or anarchy. The long heritage of freedom in this nation is not just a thing to save—to be fenced about and decorated like a dead man’s grave—it is a thing to use. Jefferson understood this. “The earth belongs to the living, not to the dead.” Each generation is responsible for working out its own vision of freedom, always on condition of fidelity to the ends of freedom itself. “Nothing is unchangeable,” Jefferson intoned, “but the inherent and inalienable rights of man.” This is both the anchor and the sail; and it is his most enduring legacy 200 years after he wrote the nation’s charter of liberty.

Merrill Peterson

A University of Virginia history professor, former chairman of the history department, and noted Jeffersonian scholar, Peterson wrote or edited 37 books in his lifetime. He fought in WWII, won a Guggenheim fellowship in 1962, and joined the Peace Corps at the age of 76.

September 2004

Remember the essays you had to write in high school? Topic sentence, introductory paragraph, supporting paragraphs, conclusion. The conclusion being, say, that Ahab in Moby Dick was a Christ-like figure.

Oy. So I'm going to try to give the other side of the story: what an essay really is, and how you write one. Or at least, how I write one.


The most obvious difference between real essays and the things one has to write in school is that real essays are not exclusively about English literature. Certainly schools should teach students how to write. But due to a series of historical accidents the teaching of writing has gotten mixed together with the study of literature. And so all over the country students are writing not about how a baseball team with a small budget might compete with the Yankees, or the role of color in fashion, or what constitutes a good dessert, but about symbolism in Dickens.

With the result that writing is made to seem boring and pointless. Who cares about symbolism in Dickens? Dickens himself would be more interested in an essay about color or baseball.

How did things get this way? To answer that we have to go back almost a thousand years. Around 1100, Europe at last began to catch its breath after centuries of chaos, and once they had the luxury of curiosity they rediscovered what we call "the classics." The effect was rather as if we were visited by beings from another solar system. These earlier civilizations were so much more sophisticated that for the next several centuries the main work of European scholars, in almost every field, was to assimilate what they knew.

During this period the study of ancient texts acquired great prestige. It seemed the essence of what scholars did. As European scholarship gained momentum it became less and less important; by 1350 someone who wanted to learn about science could find better teachers than Aristotle in his own era. [1] But schools change slower than scholarship. In the 19th century the study of ancient texts was still the backbone of the curriculum.

The time was then ripe for the question: if the study of ancient texts is a valid field for scholarship, why not modern texts? The answer, of course, is that the original raison d'etre of classical scholarship was a kind of intellectual archaeology that does not need to be done in the case of contemporary authors. But for obvious reasons no one wanted to give that answer. The archaeological work being mostly done, it implied that those studying the classics were, if not wasting their time, at least working on problems of minor importance.

And so began the study of modern literature. There was a good deal of resistance at first. The first courses in English literature seem to have been offered by the newer colleges, particularly American ones. Dartmouth, the University of Vermont, Amherst, and University College, London taught English literature in the 1820s. But Harvard didn't have a professor of English literature until 1876, and Oxford not till 1885. (Oxford had a chair of Chinese before it had one of English.) [2]

What tipped the scales, at least in the US, seems to have been the idea that professors should do research as well as teach. This idea (along with the PhD, the department, and indeed the whole concept of the modern university) was imported from Germany in the late 19th century. Beginning at Johns Hopkins in 1876, the new model spread rapidly.

Writing was one of the casualties. Colleges had long taught English composition. But how do you do research on composition? The professors who taught math could be required to do original math, the professors who taught history could be required to write scholarly articles about history, but what about the professors who taught rhetoric or composition? What should they do research on? The closest thing seemed to be English literature. [3]

And so in the late 19th century the teaching of writing was inherited by English professors. This had two drawbacks: (a) an expert on literature need not himself be a good writer, any more than an art historian has to be a good painter, and (b) the subject of writing now tends to be literature, since that's what the professor is interested in.

High schools imitate universities. The seeds of our miserable high school experiences were sown in 1892, when the National Education Association "formally recommended that literature and composition be unified in the high school course." [4] The 'riting component of the 3 Rs then morphed into English, with the bizarre consequence that high school students now had to write about English literature-- to write, without even realizing it, imitations of whatever English professors had been publishing in their journals a few decades before.

It's no wonder if this seems to the student a pointless exercise, because we're now three steps removed from real work: the students are imitating English professors, who are imitating classical scholars, who are merely the inheritors of a tradition growing out of what was, 700 years ago, fascinating and urgently needed work.

No Defense

The other big difference between a real essay and the things they make you write in school is that a real essay doesn't take a position and then defend it. That principle, like the idea that we ought to be writing about literature, turns out to be another intellectual hangover of long forgotten origins.

It's often mistakenly believed that medieval universities were mostly seminaries. In fact they were more law schools. And at least in our tradition lawyers are advocates, trained to take either side of an argument and make as good a case for it as they can. Whether cause or effect, this spirit pervaded early universities. The study of rhetoric, the art of arguing persuasively, was a third of the undergraduate curriculum. [5] And after the lecture the most common form of discussion was the disputation. This is at least nominally preserved in our present-day thesis defense: most people treat the words thesis and dissertation as interchangeable, but originally, at least, a thesis was a position one took and the dissertation was the argument by which one defended it.

Defending a position may be a necessary evil in a legal dispute, but it's not the best way to get at the truth, as I think lawyers would be the first to admit. It's not just that you miss subtleties this way. The real problem is that you can't change the question.

And yet this principle is built into the very structure of the things they teach you to write in high school. The topic sentence is your thesis, chosen in advance, the supporting paragraphs the blows you strike in the conflict, and the conclusion-- uh, what is the conclusion? I was never sure about that in high school. It seemed as if we were just supposed to restate what we said in the first paragraph, but in different enough words that no one could tell. Why bother? But when you understand the origins of this sort of "essay," you can see where the conclusion comes from. It's the concluding remarks to the jury.

Good writing should be convincing, certainly, but it should be convincing because you got the right answers, not because you did a good job of arguing. When I give a draft of an essay to friends, there are two things I want to know: which parts bore them, and which seem unconvincing. The boring bits can usually be fixed by cutting. But I don't try to fix the unconvincing bits by arguing more cleverly. I need to talk the matter over.

At the very least I must have explained something badly. In that case, in the course of the conversation I'll be forced to come up with a clearer explanation, which I can just incorporate in the essay. More often than not I have to change what I was saying as well. But the aim is never to be convincing per se. As the reader gets smarter, convincing and true become identical, so if I can convince smart readers I must be near the truth.

The sort of writing that attempts to persuade may be a valid (or at least inevitable) form, but it's historically inaccurate to call it an essay. An essay is something else.


To understand what a real essay is, we have to reach back into history again, though this time not so far. To Michel de Montaigne, who in 1580 published a book of what he called "essais." He was doing something quite different from what lawyers do, and the difference is embodied in the name. Essayer is the French verb meaning "to try" and an essai is an attempt. An essay is something you write to try to figure something out.

Figure out what? You don't know yet. And so you can't begin with a thesis, because you don't have one, and may never have one. An essay doesn't begin with a statement, but with a question. In a real essay, you don't take a position and defend it. You notice a door that's ajar, and you open it and walk in to see what's inside.

If all you want to do is figure things out, why do you need to write anything, though? Why not just sit and think? Well, there precisely is Montaigne's great discovery. Expressing ideas helps to form them. Indeed, helps is far too weak a word. Most of what ends up in my essays I only thought of when I sat down to write them. That's why I write them.

In the things you write in school you are, in theory, merely explaining yourself to the reader. In a real essay you're writing for yourself. You're thinking out loud.

But not quite. Just as inviting people over forces you to clean up your apartment, writing something that other people will read forces you to think well. So it does matter to have an audience. The things I've written just for myself are no good. They tend to peter out. When I run into difficulties, I find I conclude with a few vague questions and then drift off to get a cup of tea.

Many published essays peter out in the same way. Particularly the sort written by the staff writers of newsmagazines. Outside writers tend to supply editorials of the defend-a-position variety, which make a beeline toward a rousing (and foreordained) conclusion. But the staff writers feel obliged to write something "balanced." Since they're writing for a popular magazine, they start with the most radioactively controversial questions, from which-- because they're writing for a popular magazine-- they then proceed to recoil in terror. Abortion, for or against? This group says one thing. That group says another. One thing is certain: the question is a complex one. (But don't get mad at us. We didn't draw any conclusions.)

The River

Questions aren't enough. Essays have to come up with answers. They don't always, of course. Sometimes you start with a promising question and get nowhere. But those you don't publish. Those are like experiments that get inconclusive results. An essay you publish ought to tell the reader something he didn't already know.

But what you tell him doesn't matter, so long as it's interesting. I'm sometimes accused of meandering. In defend-a-position writing that would be a flaw. There you're not concerned with truth. You already know where you're going, and you want to go straight there, blustering through obstacles, and hand-waving your way across swampy ground. But that's not what you're trying to do in an essay. An essay is supposed to be a search for truth. It would be suspicious if it didn't meander.

The Meander (aka Menderes) is a river in Turkey. As you might expect, it winds all over the place. But it doesn't do this out of frivolity. The path it has discovered is the most economical route to the sea. [6]

The river's algorithm is simple. At each step, flow down. For the essayist this translates to: flow interesting. Of all the places to go next, choose the most interesting. One can't have quite as little foresight as a river. I always know generally what I want to write about. But not the specific conclusions I want to reach; from paragraph to paragraph I let the ideas take their course.

This doesn't always work. Sometimes, like a river, one runs up against a wall. Then I do the same thing the river does: backtrack. At one point in this essay I found that after following a certain thread I ran out of ideas. I had to go back seven paragraphs and start over in another direction.

Fundamentally an essay is a train of thought-- but a cleaned-up train of thought, as dialogue is cleaned-up conversation. Real thought, like real conversation, is full of false starts. It would be exhausting to read. You need to cut and fill to emphasize the central thread, like an illustrator inking over a pencil drawing. But don't change so much that you lose the spontaneity of the original.

Err on the side of the river. An essay is not a reference work. It's not something you read looking for a specific answer, and feel cheated if you don't find it. I'd much rather read an essay that went off in an unexpected but interesting direction than one that plodded dutifully along a prescribed course.


So what's interesting? For me, interesting means surprise. Interfaces, as Geoffrey James has said, should follow the principle of least astonishment. A button that looks like it will make a machine stop should make it stop, not speed up. Essays should do the opposite. Essays should aim for maximum surprise.

I was afraid of flying for a long time and could only travel vicariously. When friends came back from faraway places, it wasn't just out of politeness that I asked what they saw. I really wanted to know. And I found the best way to get information out of them was to ask what surprised them. How was the place different from what they expected? This is an extremely useful question. You can ask it of the most unobservant people, and it will extract information they didn't even know they were recording.

Surprises are things that you not only didn't know, but that contradict things you thought you knew. And so they're the most valuable sort of fact you can get. They're like a food that's not merely healthy, but counteracts the unhealthy effects of things you've already eaten.

How do you find surprises? Well, therein lies half the work of essay writing. (The other half is expressing yourself well.) The trick is to use yourself as a proxy for the reader. You should only write about things you've thought about a lot. And anything you come across that surprises you, who've thought about the topic a lot, will probably surprise most readers.

For example, in a recent essay I pointed out that because you can only judge computer programmers by working with them, no one knows who the best programmers are overall. I didn't realize this when I began that essay, and even now I find it kind of weird. That's what you're looking for.

So if you want to write essays, you need two ingredients: a few topics you've thought about a lot, and some ability to ferret out the unexpected.

What should you think about? My guess is that it doesn't matter-- that anything can be interesting if you get deeply enough into it. One possible exception might be things that have deliberately had all the variation sucked out of them, like working in fast food. In retrospect, was there anything interesting about working at Baskin-Robbins? Well, it was interesting how important color was to the customers. Kids of a certain age would point into the case and say that they wanted yellow. Did they want French Vanilla or Lemon? They would just look at you blankly. They wanted yellow. And then there was the mystery of why the perennial favorite Pralines 'n' Cream was so appealing. (I think now it was the salt.) And the difference in the way fathers and mothers bought ice cream for their kids: the fathers like benevolent kings bestowing largesse, the mothers harried, giving in to pressure. So, yes, there does seem to be some material even in fast food.

I didn't notice those things at the time, though. At sixteen I was about as observant as a lump of rock. I can see more now in the fragments of memory I preserve of that age than I could see at the time from having it all happening live, right in front of me.


So the ability to ferret out the unexpected must not merely be an inborn one. It must be something you can learn. How do you learn it?

To some extent it's like learning history. When you first read history, it's just a whirl of names and dates. Nothing seems to stick. But the more you learn, the more hooks you have for new facts to stick onto-- which means you accumulate knowledge at what's colloquially called an exponential rate. Once you remember that Normans conquered England in 1066, it will catch your attention when you hear that other Normans conquered southern Italy at about the same time. Which will make you wonder about Normandy, and take note when a third book mentions that Normans were not, like most of what is now called France, tribes that flowed in as the Roman empire collapsed, but Vikings (norman = north man) who arrived four centuries later in 911. Which makes it easier to remember that Dublin was also established by Vikings in the 840s. Etc, etc squared.

Collecting surprises is a similar process. The more anomalies you've seen, the more easily you'll notice new ones. Which means, oddly enough, that as you grow older, life should become more and more surprising. When I was a kid, I used to think adults had it all figured out. I had it backwards. Kids are the ones who have it all figured out. They're just mistaken.

When it comes to surprises, the rich get richer. But (as with wealth) there may be habits of mind that will help the process along. It's good to have a habit of asking questions, especially questions beginning with Why. But not in the random way that three year olds ask why. There are an infinite number of questions. How do you find the fruitful ones?

I find it especially useful to ask why about things that seem wrong. For example, why should there be a connection between humor and misfortune? Why do we find it funny when a character, even one we like, slips on a banana peel? There's a whole essay's worth of surprises there for sure.

If you want to notice things that seem wrong, you'll find a degree of skepticism helpful. I take it as an axiom that we're only achieving 1% of what we could. This helps counteract the rule that gets beaten into our heads as children: that things are the way they are because that is how things have to be. For example, everyone I've talked to while writing this essay felt the same about English classes-- that the whole process seemed pointless. But none of us had the balls at the time to hypothesize that it was, in fact, all a mistake. We all thought there was just something we weren't getting.

I have a hunch you want to pay attention not just to things that seem wrong, but things that seem wrong in a humorous way. I'm always pleased when I see someone laugh as they read a draft of an essay. But why should I be? I'm aiming for good ideas. Why should good ideas be funny? The connection may be surprise. Surprises make us laugh, and surprises are what one wants to deliver.

I write down things that surprise me in notebooks. I never actually get around to reading them and using what I've written, but I do tend to reproduce the same thoughts later. So the main value of notebooks may be what writing things down leaves in your head.

People trying to be cool will find themselves at a disadvantage when collecting surprises. To be surprised is to be mistaken. And the essence of cool, as any fourteen year old could tell you, is nil admirari. When you're mistaken, don't dwell on it; just act like nothing's wrong and maybe no one will notice.

One of the keys to coolness is to avoid situations where inexperience may make you look foolish. If you want to find surprises you should do the opposite. Study lots of different things, because some of the most interesting surprises are unexpected connections between different fields. For example, jam, bacon, pickles, and cheese, which are among the most pleasing of foods, were all originally intended as methods of preservation. And so were books and paintings.

Whatever you study, include history-- but social and economic history, not political history. History seems to me so important that it's misleading to treat it as a mere field of study. Another way to describe it is all the data we have so far.

Among other things, studying history gives one confidence that there are good ideas waiting to be discovered right under our noses. Swords evolved during the Bronze Age out of daggers, which (like their flint predecessors) had a hilt separate from the blade. Because swords are longer, the hilts kept breaking off. But it took five hundred years before someone thought of casting hilt and blade as one piece.


Above all, make a habit of paying attention to things you're not supposed to, either because they're "inappropriate," or not important, or not what you're supposed to be working on. If you're curious about something, trust your instincts. Follow the threads that attract your attention. If there's something you're really interested in, you'll find they have an uncanny way of leading back to it anyway, just as the conversation of people who are especially proud of something always tends to lead back to it.

For example, I've always been fascinated by comb-overs, especially the extreme sort that make a man look as if he's wearing a beret made of his own hair. Surely this is a lowly sort of thing to be interested in-- the sort of superficial quizzing best left to teenage girls. And yet there is something underneath. The key question, I realized, is how does the comber-over not see how odd he looks? And the answer is that he got to look that way incrementally. What began as combing his hair a little carefully over a thin patch has gradually, over 20 years, grown into a monstrosity. Gradualness is very powerful. And that power can be used for constructive purposes too: just as you can trick yourself into looking like a freak, you can trick yourself into creating something so grand that you would never have dared to plan such a thing. Indeed, this is just how most good software gets created. You start by writing a stripped-down kernel (how hard can it be?) and gradually it grows into a complete operating system. Hence the next leap: could you do the same thing in painting, or in a novel?

See what you can extract from a frivolous question? If there's one piece of advice I would give about writing essays, it would be: don't do as you're told. Don't believe what you're supposed to. Don't write the essay readers expect; one learns nothing from what one expects. And don't write the way they taught you to in school.

The most important sort of disobedience is to write essays at all. Fortunately, this sort of disobedience shows signs of becoming rampant. It used to be that only a tiny number of officially approved writers were allowed to write essays. Magazines published few of them, and judged them less by what they said than who wrote them; a magazine might publish a story by an unknown writer if it was good enough, but if they published an essay on x it had to be by someone who was at least forty and whose job title had x in it. Which is a problem, because there are a lot of things insiders can't say precisely because they're insiders.

The Internet is changing that. Anyone can publish an essay on the Web, and it gets judged, as any writing should, by what it says, not who wrote it. Who are you to write about x? You are whatever you wrote.

Popular magazines made the period between the spread of literacy and the arrival of TV the golden age of the short story. The Web may well make this the golden age of the essay. And that's certainly not something I realized when I started writing this.


[1] I'm thinking of Oresme (c. 1323-82). But it's hard to pick a date, because there was a sudden drop-off in scholarship just as Europeans finished assimilating classical science. The cause may have been the plague of 1347; the trend in scientific progress matches the population curve.

[2] Parker, William R. "Where Do College English Departments Come From?" College English 28 (1966-67), pp. 339-351. Reprinted in Gray, Donald J. (ed). The Department of English at Indiana University Bloomington 1868-1970. Indiana University Publications.

Daniels, Robert V. The University of Vermont: The First Two Hundred Years. University of Vermont, 1991.

Mueller, Friedrich M. Letter to the Pall Mall Gazette. 1886/87. Reprinted in Bacon, Alan (ed). The Nineteenth-Century History of English Studies. Ashgate, 1998.

[3] I'm compressing the story a bit. At first literature took a back seat to philology, which (a) seemed more serious and (b) was popular in Germany, where many of the leading scholars of that generation had been trained.

In some cases the writing teachers were transformed in situ into English professors. Francis James Child, who had been Boylston Professor of Rhetoric at Harvard since 1851, became in 1876 the university's first professor of English.

[4] Parker, op. cit., p. 25.

[5] The undergraduate curriculum or trivium (whence "trivial") consisted of Latin grammar, rhetoric, and logic. Candidates for masters' degrees went on to study the quadrivium of arithmetic, geometry, music, and astronomy. Together these were the seven liberal arts.

The study of rhetoric was inherited directly from Rome, where it was considered the most important subject. It would not be far from the truth to say that education in the classical world meant training landowners' sons to speak well enough to defend their interests in political and legal disputes.

[6] Trevor Blackwell points out that this isn't strictly true, because the outside edges of curves erode faster.

Thanks to Ken Anderson, Trevor Blackwell, Sarah Harlin, Jessica Livingston, Jackie McDonough, and Robert Morris for reading drafts of this.
