We begin as pulses of electricity.

Across billions of neurons, mapped not quite chaotically, but with only that structure which provides for rudimentary autonomic function and the building blocks for more. For cognition, for personhood. As we receive stimuli, we change the structure of this web of pathways. Some grow stronger, others weaken and atrophy.

Patterns develop. Stimuli arriving in near-simultaneity become linked structurally. Impressions are formed, impressions in the physical structure of the brain, affecting the strength and reach of signal passage. The brain grows in sophistication, in connectedness. Basic if not simple functions, those coded for in DNA, are used as the building blocks of yet higher connections and functions. We bend the brain’s capacity to detect visual features, like vertical, horizontal, and diagonal lines, into the ability to recognize a chair or a face. We feel pain, associate it with the visual and spatial attributes of the stovetop, and learn to avoid it with our fingers.

We begin with simple atomic units, and build.

Some few layers beyond, after spending years bathed in sensory data, sorting it by building stronger pathways when they’re helpful and diminishing others when not, the brain has a refined capacity for parsing impressions of the world. It develops a sensory algebra for referring to the world through the senses. We begin to see that the world includes us, that we are in the world, and refine our reference frame such that by echoing our worldview back against the world, we can see our shadow upon it. We are now self-aware.

Thereafter, we have larger units built upon all those smaller ones. We are a unit, an algebraic element, as is the seen world, the heard world, the felt world, the smelled world, the tasted world. We are now world-aware, in a local sense, as that locality follows us.

At some point, before or after, we find a curious application of these patterns: they can be encoded and transferred. We call this language, including but not limited to the grammatical spoken and written languages. With this phenomenon arrives the possibility of creating algebraic elements which are actually only references to others, and the ability to combine these references into yet larger references, systematically. What the world is is reshaped as we fold sensory data into lingual structures, as we see a dog and understand her to be a “dog” and then read Old Yeller.

In this way, electrical signals passing through neuron gaps, down myelin-sheathed neuron fibers, across networks of fibers and gaps, form higher-order structures, which in turn form yet higher-order structures, and so on, until our “instincts” (whatever those are) are joined by our cognitive machinery to create our world. Just as a brick house is made of walls, which are made of brick, which are made of compacted clay and sediment, and so on, the space which our cognition occupies evolves from more fundamental blocks.

We may–and many people do–call this process or its result “analogy.” I describe it as the inference of structure via the combination of available cognitive elements, at least for now. It forms the basis for subsequent discussion.


People and me, we’re a hot and cold thing.

I get lost in a flesh-colored sea of mundanity, and feel powerless by virtue of membership. Alternately, the slightest brush of sentiment, a sudden memory of a childhood contemplation, a simple courtesy, each can bring me to sentimentality and emotional vulnerability. I may be awed by a feat of community, only to be horrified by the quickness of depravity. Hot and cold.

Hidden somewhere in my DNA is whatever code whose execution makes me need community. I don’t pretend to understand it, and I’m nearing the end of my too-cool aloofness toward the idea of sharing my concern with others, even putting theirs above mine. Each time I roll my eyes at Christian goodness expressed as a bumper sticker, I’m hoping someone sees it. I want to share my weak outrage, cleverly if possible.

I suffer the fools, nodding with a forced grin, or, more frequently these days, a noncommittal nonchalance, a disaffection. I try not to enable their insipidity. My working hypothesis is that I can condition people to spare me their American Idol gossip and formulaic well wishes, this hypothesis underpinned by the presumption that they proffer the former to justify their interest, and the latter at the behest of perceived cultural obligation. Neither, then, is anything better than the misapplication of earnestness, a motive earnestness powering a crass social algorithm.

Whatever the quality of the hypothesis, I am yet not quite the good scientist. While you may rest your willing suspension of disbelief of the meaninglessness of our entire venture on a fetish for familial melodrama, I am just as easily duped into giving a damn by the ignorant wagging of a small dog’s tail. There’s no sound nihilistic conclusion that doesn’t undermine the value of both.

On Community

In our best moments, people and I dance like we’ve been at it for years, like we took lessons and we know what we’re doing. We finally cave and celebrate Christmas, because, damn it, it’s as good an excuse as any to get together and laugh and talk about the good old times and make yet another promise to stay in touch and yes I mean it this time. They’ve got a way of confusing me, people do, making me forget for a moment the difficulty of thinking deeply about continued existence without concluding that it’s ludicrous. I can get lost in that attachment. I can want a social life, a life filled with people and their straining against credulity to believe in the President or their political parties or how wonderful their children are. I can get lost in it, because I want their illusions to be true, because it makes those first steps out of bed each morning easier to take.

Convinced that I can circumnavigate banality by choosing to associate with like-minded folks, I seek them out, find them, and usually make them regret having admitted to a shared interest. I have gone so far as to bemoan the absence of this social cloud, blaming my fractured attentiveness to fields of my interest on the fact that I’ve no one of whom to ask for help and no one to whom to offer it. If only I could surround myself with ambient similarity of intent and disposition, I suppose, I’d finally earn my pride.

There’s logic in that, of course. There are fewer things any one of us can do better alone than there are things better suited to multiple effort, and individual aptitude doesn’t scale well. And maybe my American pedigree comes with too great a focus on not only genius, but on solitary genius, the single brilliant point in all the surrounding murky darkness. Too great, as I’ve only just seen my way through it and started learning that genius doesn’t exist, that the question isn’t even relevant to any substantive success. No, whatever our natural aptitude, it’s our work that achieves; and work is about the goal, independent of whether we reached out for help or soldiered on in private.

The Teachers Lounge

I have this idea, right? I’ve tinkered with it, and I’m convinced it’s a winner. It’s about a coffee house, but not Starbucks or that local shop you go to to keep it real. No, this is a shop with coffee, and healthy snacks, and white- and/or chalkboards, and computers loaded with fancy math software. And books—oh, the books. Shelves of canonical tomes, of Newton and Euclid and Euler and Gauss. Lesser works, too, but nothing without rigor. The real stars, though, are the customers.

They’re math geeks, physics geeks, engineers in fact or aspiration, hackers and crypto wonks, and the taggers-along who dig the nerd scene. You could float from one table to the next and overhear all manner of esoteric enthusiasm; and you could watch the hasty cycle of writing, erasing, lip-chewing, and writing at the boards, two or three contributors locked on a proof or a messy optics problem. All the reference material you’d ever need, all the theorems and algorithms and particle masses, are sitting on the shelves, in multiplicity. You could just soak it in, and maybe learn a thing or two.

Or, rather, I could. Ambient similarity of intent and disposition + ready access to coffee and geeks. And I own the place, which means all the business is my business; and it means that I’ve made it out of the cube farm and into the fold. No longer on part-time terms, I’m a full-time, fully vested member of that community. I’d call this place The Teachers Lounge.

The Nothing

Despite the eagerness with which I hang it, there’s a window behind all that window dressing, and it opens out into a void. All our efforts, all our cares, all our lofty endeavors, all our conniving and violence and pity, all of it breaks down to a single concern: survival. Depending on the scope, survival of the individual or survival of the species; but each of our urges to survive arises from similar genetic heritage, so the former is just an instance of the latter. How, then, can survival itself mean anything? If we matter to more than ourselves, I have yet to see how. Put those things together—that all our business is merely survival, and that our existence bears no objective worth—and our business is not itself imbued with intrinsic value. We must give it any value it has.

But the form of this value varies across cultures, measured nationally or racially or ethnically or socioeconomically, so we can’t even provide a sake for life other than life itself. This value is for each of us to assign if it is to be nontrivial, which is problematic. It’s problematic because the shared vision, the community, is in its sparest incarnation a search for authenticity, for an authority of evaluation; if we can each count on no one but ourselves to define our value, then the community offers false hope at best, confusion or worse otherwise.

I’ll skirt an invocation of Godwin’s law and instead generically assert that history provides us many glaring examples of vicarious valuation gone wrong. Wherever there is a prostration to the wisdom of crowds, there stands a good chance for disaster and mayhem, though we might merely suffer banality instead. And we’re back where we started.


I have a few projects in mind that revolve around a surprisingly resilient interest in sharing something with a group of people. Most are more immediately tenable than The Teachers Lounge. I’ve decided—again, but more resolutely now—to give this community thing a serious whirl. I’m anticipating making use of the extra computing power; for anything my brain can crunch, mine and someone else’s might do it better and/or faster. And there are some prospects for sustainable business models among them, too. But most of all, I’m interested in collecting data and testing my hypotheses. Maybe I’m wrong, maybe there’s some kind of meaning hidden away in the spaces between us. I don’t think so, I’m not expecting much, and I’m not quite beyond feeling I’m selling out. Alas, there is but to give it a shot.

Even with a balanced coin, it’s no mean feat to make sense of the world.

On a cursory reading, Chesterton makes several comments to this effect, though madness, not simple despair, is his result. I concur: it’s quite daunting to have none but yourself to rely upon for guidance and whatever “purpose” can be had.

It doesn’t follow, however, that the truth need not be dire. Invoking fruitlessness or direness as illustrations or proof of absurdity would seem relevant only if we take as given a need for life, a purpose in it other than as a means to excrete and feed other life by dying. That effectively cuts to the heart of the issue, and begs the question of whether or not humanity exists to serve a designed purpose.

Furthermore, to support a designed Reason via despair in its absence is tantamount to “finding religion” as a means of escapism, and it should be insulting—to the believer. “There are no atheists in foxholes,” goes the saying; and I say that grenades don’t make believers. More than mere delusion, it appears circular: believing in a higher authority as an alternative to a dire life in its absence presupposes that the universe gives a fuck. Again, that points to the heart of the question, and is unconvincing.

Of course life can be desperate. It has, does, and by all indications, will, from time to time, become so. I counter only the claim that it must be so in the absence of an extrahuman power, man-jerk or otherwise. I find it unfortunate both that the believer is oft characterized as a zealous mystic, ignorant of fact, and that the nonbeliever is characterized as suffering from a melancholic humor, lost and listless.

To say that death is the only viable option in a world without an intrinsic, enterprising purpose is to say that life lived for its own sake is not viable. If a morality devoid of extrahuman stewardship requires no distinction between raping and not raping, then there can’t be any differentiation between the choice to live and the choice to die. Even if the world of the nonbeliever is this absurd, there still is no de facto reason for suicide. Even if life is nothing more than a protracted process of cellular division and decay, of conversion of energy into food into energy, why is living not a justifiable choice?

Consider, then, that even the nonbeliever can accommodate a belief in something grander than himself: an elegant proof; a refinement of a physical theory; the next quirky thing a child says. All are informing; all contribute to making us more than we were before them. Even oppression, persecution, and assault are potentially additive, if variably unpleasant. Those who don’t believe the natural world was created are not necessarily restricted from admiring it—this is no one’s special province.

I awake each morning and rise, not because today will be better than yesterday; but because today is today, and is not yesterday.

The tailbone’s connected to the funny bone.

And if it’s not, you’re doing it wrong. Which represents the greater hubris: we have found structures in the universe whose origins we can understand only if they were created by a higher consciousness; or given an intelligent creator, whoever that is must’ve received, like, a C at best for ENG302: Creating Things That Work Well to Populate Your New Universe?

The first point is rather subtle, or seems so given the enthusiasm with which the standard of intelligent design is being raised. The creative capacity of the human species is lauded as one of the characteristics that significantly separate us from other species, and is fundamental to viewing humans as undoubtedly greater. We look to our semiconductor chipsets, our skyscrapers, our movable type, our perfect novels, and our weapons of mass destruction and proclaim, awfully, “Holy shit, we own.” To us, these results of our creative endeavors are beyond the bounds of nature.

No surprise, then, that upon observing some phenomenon in nature which bears complexity strikingly similar to that of creations wrought by our own hands and minds, we conclude said phenomenon is the product of some creative force. Of course that force would have to be intelligent, must be conscious and create with intent, because that’s what we do. That force, though, is not human.

Let’s assume there is an intelligent designer. It’s no feat whatsoever to find fault with this designer’s product (though this is by no means perfectly demonstrative of the claim, as some responses note). I have also heard, more than once and from more than one source, that engineers interested in the topic generally consider the human body to be poorly engineered. Our only saving grace, it would seem, is our intelligence, as, without it, we surely would not prosper as we do. However, a useful argument can be framed which would call even this conclusion into question. It is a long way from self-evident that we are, in fact, the product of an intelligent designer, if a product of any designer we are.

Proponents of intelligent design might have their bets hedged. It’s likely that someone, upon reading the preceding paragraph, would respond, “Well, sure, maybe the coccyx seems ill-planned; but you can’t claim to know the Creator’s plan! You’re overstepping the bounds of your understanding, sir, and I’ll have none of that.” To which I would reply, “I was only following your lead, my fellow, given that you’ve deemed your understanding of nature to be quite conclusive on the matter of what occurs naturally and what must be created to exist.”

“Further,” I’d add, straightening my smoking jacket, “it might appear to the wary observer that it is you who has overstepped the bounds of your understanding more grossly, decided as you have that whatever you might make with some hardware and a free afternoon is beyond the means of the remaining entirety of the universe…except an agent fashioned after humans. Could it be, rather, that ‘creation’ is not the province of intelligence at all, but instead just one other dynamic of the universe, like entropy? Is it really so odd to think that we ‘create’ because we are wholly within nature, inextricably, and are no less an agent of nature, an organ, than gravity or stars or diseases; and, further, is it really so odd to think that, as an organ of nature, we should propagate a dynamism found elsewhere in nature, producing output which had not been observed prior? What if ‘creation’ is a product of the universe, and not vice versa?”

How different, really, are these conceptions? What is really so different about explaining the development of some observed phenomena with an appeal to this Greatness over that one? We may debate the details but not the Greatness. That is, no one really presumes to understand everything about everything, or even everything about much of anything; and whether they’re particle physicists or pastors (or both), for each contributor to this large discussion there is a threshold beyond which we seem commonly to say, “Well, beyond that, we just don’t know.”

Some folks deified George Carlin, which I find wholly ironic.

Carlin died in June of 2008, at the age of 71. It’s hard to estimate his value to Western culture; his Seven Words are, today, somewhat less powerful than they were in 1972, and no matter his one-time popularity, their deflation is not significantly his doing, I think. At his height, TV love scenes still observed the “one foot on the floor” rule, so for many the Carlin spiel was apparently quite powerful.

And while the Carlin canon pivots on those seven words, they were only part of his larger social commentary machine. No doubt the Richard Pryors and Steven Wrights of the world owe some small debt to George Carlin. As social creatures who, if unwittingly, make use of their exploits, we, too, owe him that debt. I often found his delivery a little too fond of itself, but maybe that was part of the bit.

However his cultural value is estimated, he seems one of the last of a dying breed, if not necessarily a dying species. I don’t get out much, but I don’t recall hearing of anyone with a similar currency, in terms of popularity and stickiness, who challenges social norms as a matter of principle. The closest might be Dave Chappelle, off the top of my head, but that’s not where I keep my good stuff, so I’m not sure how much you should invest in that idea. Oprah Winfrey notwithstanding, the popular culture has changed over these decades of Carlin’s popular decline, and any market research company (including mine) will tell you that it’s increasingly difficult to intercept people’s attention. Some have called George Carlin a “hero”; I’m not sure anyone can become that kind of hero in a culture that cherishes its schisms and disaffection.

Does prayer beget opposable thumbs?

The University of Oxford has apparently taken £2M toward answering “Why do we believe in God?” From the Times article:

They will not attempt to solve the question of whether God exists but they will examine evidence to try to prove whether belief in God conferred an evolutionary advantage to mankind. They will also consider the possibility that faith developed as a byproduct of other human characteristics, such as sociability.

I found this in a discussion about the study at the Galilean Library, which is a wonderful resource for thoughtful, and almost more importantly, civil discussion across the philosophical spectrum. I added a (typically long-winded) point of consideration, summarized by saying that, insofar as “God” might just be a label for the collection of “all things we think we don’t know, or which are currently ineffable and beyond our scrutiny,” any such study, however expensive, really aims to investigate just one in a class of cultural metaphors, models, for things we want to understand.

There are undoubtedly flaws in viewing all our cognitive endeavors as the mere mapping of sensation to lingual metaphors, as it is certain to be a crude approximation, if a valid approximation at all; but just as in the case of the infinite square well, first-order, crude models work well as insertion points to broader and/or deeper analysis. And lest it be thought that I’m of the same “science != religion” mindset I was just a few short years ago, here’s a hopefully fruitful excerpt.

Of course, science itself probably holds no special claim to some objectivity, some circumnavigation of our use of metaphors as models. Just because scientific models use polynomial notation, and are algebraically derivable, and embed quite a lot of effective structure in the mathematical language they inhabit, doesn’t mean they’re anything more than intricate metaphors. Niels Bohr developed a pretty model of the atom, but that tidy picture has since been superseded. It worked, though, and, in many contexts, remains a fine picture of atomic structure. Physics students still make use of metaphors in university, e.g. the infinite square well: simplified pictures used to motivate a subtle concept, before their complexity is increased on the way to deeper understanding.

The specific question about whether belief in “God” conferred/confers an evolutionary benefit is interesting, and something I’ve pondered for a while. In general, it seems that people with some kind of workable model prosper to a greater degree than those without one, independent of the model’s approximation of “truth.” This is related, I think, to the results of a study concluding that those able to deceive themselves tended to be happier than those who weren’t, given that (a) models are only ever approximate, and (b) the utility of a model is probably proportionate to how strongly we believe in its explanatory power.

I need help, and I need to help.

Of all the memes wriggling across The One True Web (that social network that includes and exceeds the digital), I suggest that the greatest compels some to describe helpful normative domains, and compels others to seek these out.

I came across the following, via kottke.org, and while it’s not quite as concise as other efforts, “Taleb’s top life tips” nevertheless makes for interesting brain fodder. I’d care to hear which of these are particularly interesting to you.

  1. Scepticism is effortful and costly. It is better to be sceptical about matters of large consequences, and be imperfect, foolish and human in the small and the aesthetic.
  2. Go to parties. You can’t even start to know what you may find on the envelope of serendipity. If you suffer from agoraphobia, send colleagues.
  3. It’s not a good idea to take a forecast from someone wearing a tie. If possible, tease people who take themselves and their knowledge too seriously.
  4. Wear your best for your execution and stand dignified. Your last recourse against randomness is how you act—if you can’t control outcomes, you can control the elegance of your behaviour. You will always have the last word.
  5. Don’t disturb complicated systems that have been around for a very long time. We don’t understand their logic. Don’t pollute the planet. Leave it the way we found it, regardless of scientific “evidence”.
  6. Learn to fail with pride—and do so fast and cleanly. Maximise trial and error—by mastering the error part.
  7. Avoid losers. If you hear someone use the words “impossible”, “never”, “too difficult” too often, drop him or her from your social network. Never take “no” for an answer (conversely, take most “yeses” as “most probably”).
  8. Don’t read newspapers for the news (just for the gossip and, of course, profiles of authors). The best filter to know if the news matters is if you hear it in cafes, restaurants… or (again) parties.
  9. Hard work will get you a professorship or a BMW. You need both work and luck for a Booker, a Nobel or a private jet.
  10. Answer e-mails from junior people before more senior ones. Junior people have further to go and tend to remember who slighted them.

Additionally, though within a more focused scope (and from the same source), Kurt Vonnegut advises on how to write with style. Herewith, a full reproduction. (I’m open to suggestions if reproducing these here, even with proper credits, breaks with good sense.)

In Sum:

  1. Find a subject you care about
  2. Do not ramble, though
  3. Keep it simple
  4. Have guts to cut
  5. Sound like yourself
  6. Say what you mean
  7. Pity the readers

I find that, for some, the first is surprisingly difficult. I have a rather hard time choosing a subject I care most about; it’s my utter failure at committing to any one subject that leads to my abject frustration in making headway in any of them.

Mr. Kottke seems to revel in this meme. You can find similar material throughout his archives.