Across billions of neurons, mapped not quite chaotically, but with only that structure which provides for rudimentary autonomic function and the building blocks for more. For cognition, for personhood. As we receive stimuli, we change the structure of this web of pathways. Some grow stronger, others weaken and atrophy.
Patterns develop. Stimuli arriving in near-simultaneity become linked structurally. Impressions are formed, impressions in the physical structure of the brain, affecting the strength and reach of signal passage. The brain grows in sophistication, in connectedness. Basic if not simple functions, those coded for in DNA, are used as the building blocks of yet higher connections and functions. We bend the brain’s capacity to detect visual features, like vertical, horizontal, and diagonal lines, into the ability to recognize a chair or a face. We feel pain, associate it with the visual and spatial attributes of the stovetop, and learn to avoid it with our fingers.
We begin with simple atomic units, and build.
Some few layers beyond, after spending years bathed in sensory data, sorting it by building stronger pathways when they’re helpful and diminishing others when not, the brain has a refined capacity for parsing impressions of the world. It develops a sensory algebra for referring to the world through the senses. We begin to see that the world includes us, that we are in the world, and refine our reference frame such that by echoing our worldview back against the world, we can see our shadow upon it. We are now self-aware.
Thereafter, we have larger units built upon all those smaller ones. We are a unit, an algebraic element, as is the seen world, the heard world, the felt world, the smelled world, the tasted world. We are now world-aware, in a local sense, as that locality follows us.
At some point, before or after, we find a curious application of these patterns: they can be encoded and transferred. We call this language, including but not limited to the grammatical spoken and written languages. With this phenomenon arrives the possibility of creating algebraic elements which are actually only references to others, and the ability to combine these references into yet larger references, systematically. What the world is is reshaped as we fold sensory data into lingual structures, as we see a dog and understand her to be a “dog” and then read Old Yeller.
In this way, electrical signals passing through neuron gaps, down myelin-sheathed neuron fibers, across networks of fibers and gaps, form higher-order structures, which in turn form yet higher-order structure, and so on, until our “instincts” (whatever those are) are joined by our cognitive machinery to create our world. Just as a brick house is made of walls, which are made of brick, which are made of compacted clay and sediment, and so on, the space which our cognition occupies evolves from more fundamental blocks.
We may–and many people do–call this process or its result “analogy.” I describe it as the inference of structure via the combination of available cognitive elements, at least for now. It forms the basis for subsequent discussion.
I’m still trying to figure out what I think about that. I started in February of 1998, studying physics at Miami University (Ohio), and soon after decided that I might as well pursue a dual math/physics major. Some years later, logistical practicality prevailed, and I chose to study only mathematics. I’m still skeptical that I ever did.
I’d come to define myself, in part, in terms of this studenthood. Between first registration and final grade tally, I married twice and divorced once. I saw two daughters come into the world, and one leave it. I bought a house, divested myself of that house, and bought another. I accepted the job I’ve had for a decade, and which I’ve spent the last few hoping to leave. Always, actively or in that part of my forebrain where ego and regret collide, I was a student.
And now I’m not. Now I’m “done.” I expect my diploma in the mail soon, and I will hang it in our office proudly. Though…though it’ll represent not so much a goal reached, but the closing of a personal epoch. An end to a struggle I never fully joined. A curtain call for the doppelgänger, the ostensible student. And a beginning of my real education.
Not that I haven’t learned anything; that’d be a bit melodramatic and glib. I’ve learned that the Stewart calculus text is a credible device for (a) teaching various recipes for mathematical cooking, e.g., L’Hôpital’s Rule; and for (b) scaring students into believing they’re “bad at math.”
I’ve learned enough of Riemann and Cauchy to have a first-order approximation of how much yet there is to know. I’ve learned some of the basic lexicon of mathematics, of continuity and metric spaces, of normal subgroups and homomorphisms. With a little review, I could even do something with them.
Which is to say, I’ve also learned what it is to truly study a subject. Well, actually, I suppose I can’t claim that: a recursively enumerable set isn’t necessarily a recursive set, which is to say that my knowing that I did not truly study a subject doesn’t confer to me knowledge of what it is to study it. Another first-order approximation, then. But useful nonetheless.
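The asymmetry I’m leaning on can be sketched in a few lines of Python. (A toy of my own devising, not anything from the coursework: `f`, `semi_decide`, and `decide` are hypothetical names, and the perfect squares stand in for a genuinely undecidable set, which of course I can’t actually write down.)

```python
def f(k):
    return k * k  # enumerates the perfect squares: 0, 1, 4, 9, ...

def semi_decide(n):
    """Halts (with True) iff n is in the range of f.

    This is all recursive enumerability guarantees: a procedure that
    says "yes" on members. For a non-member of a general r.e. set,
    this loop would simply never return -- there is no "no".
    """
    k = 0
    while True:
        if f(k) == n:
            return True
        k += 1

def decide(n):
    """A full decider: answers both "yes" and "no".

    Possible here only because f happens to be increasing, so the
    search can safely give up. That extra structure is exactly what
    a merely recursively enumerable set does not provide.
    """
    k = 0
    while f(k) < n:
        k += 1
    return f(k) == n
```

Knowing how to say “yes I didn’t study” doesn’t hand me the procedure for saying what studying is; the two questions are not the same machine.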
Now there is but to finally carry the mantle of student, freed from the illusion that completing repetitive problem sets on line integrals or expectation values actually constitutes more than a cheap rip-off of mathematics. What I’ve gleaned from this at-once trivial and enlightening realization applies not only to my development, but also to that of my children. What I’ve gleaned is that, even if these cobblestone streets were paved with good intentions, what we generally refer to as “education” is at its very best vocational training, and is at its very worst a commoditizing of our time and attention, of our will to power.
I failed as a student in more ways than I have the patience to note at the moment. Among this litany is my failure to form any academic community. I’m not sure how much I missed out on, but I bet it’s substantial. I’m a product of my passive-aggressive heuristic for dealing with people, characterized by idolizing my antisocial tendencies and then lamenting my lack of human connection. This failure as a student is my failure as a person, notwithstanding the logistical challenges a parent-student faces.
I’m officially done with this particular rant. I’ve really drained myself by alternating self-congratulation and “Woe is me.” I created what Carl Rogers might call a self-image at odds with reality: every Einstein quote, every time I watched Good Will Hunting, every browsing of Carl Friedrich Gauss’ or Terence Tao’s Wikipedia pages, contributed to the fantasy that I would, or more importantly, should, be similarly prodigious and capable. My particular knot of neurons and ganglia translated that information into a proscription against hard work, as if I shouldn’t need to work hard, as if instead I should need only to surround myself with the trappings of genius to free genius. And that, that’s the most important thing I’ve learned.
My most ambitious academic product is stored here (LaTeX file here). It’s an interesting, short paper I all but transcribed from my faculty advisor’s notes. I did learn a bit about computability, and LaTeX, and I enjoyed the process. Now I’m moving on, perhaps to study mathematics.
I get lost in a flesh-colored sea of mundanity, and feel powerless by virtue of membership. Alternately, the slightest brush of sentiment, a sudden memory of a childhood contemplation, a simple courtesy, each can bring me to sentimentality and emotional vulnerability. I may be awed by a feat of community, only to be horrified by the quickness of depravity. Hot and cold.
Hidden somewhere in my DNA is whatever code whose execution makes me need community. I don’t pretend to understand it, and I’m nearing the end of my too-cool aloofness toward the idea of sharing my concern with others, even putting theirs above mine. Each time I roll my eyes at Christian goodness expressed as a bumper sticker, I’m hoping someone sees it. I want to share my weak outrage, cleverly if possible.
I suffer the fools, nodding with a forced grin, or, more frequently these days, a noncommittal nonchalance, a disaffection. I try not to enable their insipidity. My working hypothesis is that I can condition people to spare me their American Idol gossip and formulaic well wishes, this hypothesis underpinned by the presumption that they proffer the former to justify their interest, and the latter at the behest of perceived cultural obligation. Neither, then, is anything better than the misapplication of earnestness, a motive earnestness powering a crass social algorithm.
Whatever the quality of the hypothesis, I am yet not quite the good scientist. While you may rest your willing suspension of disbelief of the meaninglessness of our entire venture on a fetish for familial melodrama, I am just as easily duped into giving a damn by the ignorant wagging of a small dog’s tail. There’s no sound nihilistic conclusion that doesn’t undermine the value of both.
In our best moments, people and I dance like we’ve been at it for years, like we took lessons and we know what we’re doing. We finally cave and celebrate Christmas, because, damn it, it’s as good an excuse as any to get together and laugh and talk about the good old times and make yet another promise to stay in touch and yes I mean it this time. They’ve got a way of confusing me, people do, making me forget for a moment the difficulty of thinking deeply about continued existence without concluding that it’s ludicrous. I can get lost in that attachment. I can want a social life, a life filled with people and their straining against credulity to believe in the President or their political parties or how wonderful their children are. I can get lost in it, because I want their illusions to be true, because it makes those first steps out of bed each morning easier to take.
Convinced that I can circumnavigate banality by choosing to associate with like-minded folks, I seek them out, find them, and usually make them regret having admitted to a shared interest. I have gone so far as to bemoan the absence of this social cloud, blaming my fractured attentiveness to fields of my interest on the fact that I’ve no one of whom to ask for help and no one to whom to offer it. If only I could surround myself with ambient similarity of intent and disposition, I suppose, I’d finally earn my pride.
There’s logic in that, of course. There are fewer things any one of us can do better alone than there are things better suited to multiple effort, and individual aptitude doesn’t scale well. And maybe my American pedigree comes with too great a focus on not only genius, but on solitary genius, the single brilliant point in all the surrounding murky darkness. Too great, as I’ve only just seen my way through it and started learning that genius doesn’t exist, that the question isn’t even relevant to any substantive success. No, whatever our natural aptitude, it’s our work that achieves; and work is about the goal, independent of whether we reached out for help or soldiered on in private.
I have this idea, right? I’ve tinkered with it, and I’m convinced it’s a winner. It’s about a coffee house, but not Starbucks or that local shop you go to to keep it real. No, this is a shop with coffee, and healthy snacks, and white- and/or chalkboards, and computers loaded with fancy math software. And books—oh, the books. Shelves of canonical tomes, of Newton and Euclid and Euler and Gauss. Lesser works, too, but nothing without rigor. The real stars, though, are the customers.
They’re math geeks, physics geeks, engineers in fact or aspiration, hackers and crypto wonks, and the taggers-along who dig the nerd scene. You could float from one table to the next and overhear all manner of esoteric enthusiasm; and you could watch the hasty cycle of writing, erasing, lip-chewing, and writing at the boards, two or three contributors locked on a proof or a messy optics problem. All the reference material you’d ever need, all the theorems and algorithms and particle masses, are sitting on the shelves, in multiplicity. You could just soak it in, and maybe learn a thing or two.
Or, rather, I could. Ambient similarity of intent and disposition + ready access to coffee and geeks. And I own the place, which means all the business is my business; and it means that I’ve made it out of the cube farm and into the fold. No longer on part-time terms, I’m a full-time, fully vested member of that community. I’d call this place The Teachers Lounge.
Despite the eagerness with which I hang it, there’s a window behind all that window dressing, and it opens out into a void. All our efforts, all our cares, all our lofty endeavors, all our conniving and violence and pity, all of it breaks down to a single concern: survival. Depending on the scope, survival of the individual or survival of the species; but each of our urges to survive arises from similar genetic heritage, so the former is just an instance of the latter. How, then, can survival actually mean anything, really? If we matter to more than ourselves, I have yet to see how. Put those things together—that all our business is merely survival, and that our existence bears no objective worth—then our business is not itself imbued with intrinsic value. We must give it any value it has.
But the form of this value varies across cultures, measured nationally or racially or ethnically or socioeconomically, so we can’t even provide a sake for life other than life itself. This value is for each of us to assign if it is to be nontrivial, which is problematic. It’s problematic because the shared vision, the community, is in its sparest incarnation a search for authenticity, of an authority for evaluation; if we can each count on no one else but ourselves to define our value, then the community offers false hope at best, confusion or worse otherwise.
I’ll skirt an invocation of Godwin’s law and instead generically assert that history provides us many glaring examples of vicarious valuation gone wrong. Wherever there is a prostration to the wisdom of crowds, there stands a good chance for disaster and mayhem, though we might merely suffer banality instead. And we’re back where we started.
I have a few projects in mind that revolve around a surprisingly resilient interest in sharing something with a group of people. Most are more immediately tenable than The Teachers Lounge. I’ve decided—again, but more resolutely now—to give this community thing a serious whirl. I’m anticipating making use of the extra computing power; for anything my brain can crunch, mine and someone else’s might do it better and/or faster. And there are some prospects for sustainable business models among them, too. But most of all, I’m interested in collecting data and testing my hypotheses. Maybe I’m wrong, maybe there’s some kind of meaning hidden away in the spaces between us. I don’t think so, I’m not expecting much, and I’m not quite beyond feeling I’m selling out. Alas, there is but to give it a shot.
Upon cursory reading, Chesterton also makes several comments to this effect, though madness, not simple despair, is his result. I concur: it’s quite daunting to have none but yourself to rely upon for guidance and whatever “purpose” can be had.
It doesn’t follow, however, that the truth need not be dire. Invoking fruitlessness or direness as illustrations or proof of absurdity would seem relevant only if we take as given a need for life, a purpose in it other than as a means to excrete and feed other life by dying. That effectively cuts to the heart of the issue, and begs the question of whether or not humanity exists to serve a designed purpose.
Furthermore, to support a designed Reason via despair in its absence is tantamount to “finding religion” as a means of escapism, and it should be insulting to the believer. It’s been said, “There are no atheists in foxholes”; and I say that grenades don’t make believers. More than mere delusion, it appears circular: believing in a higher authority as an alternative to a dire life in its absence presupposes that the universe gives a fuck. Again, that points to the face of the question, and is unconvincing.
Of course life can be desperate. It has, does, and by all indications, will, from time to time, become so. I counter the claim that it must be so in the absence of an extrahuman power, man-jerk or otherwise. I find it unfortunate, both that the believer is oft characterized as a zealous mystic, ignorant of fact; and that the nonbeliever is characterized as suffering from a melancholic humor, lost and listless.
To say that death is the only viable option in a world without an intrinsic, enterprising purpose is to say that life lived for its own sake is not viable. If a morality devoid of extrahuman stewardship requires no distinction between raping and not raping, then there can’t be any differentiation between the choice to live and the choice to die. Even if the world of the nonbeliever is this absurd, there still is no de facto reason for suicide. Even if life is nothing more than a protracted process of cellular division and decay, of conversion of energy into food into energy, why is living not a justifiable choice?
Consider, then, that even the nonbeliever can accommodate a belief in something grander than himself: an elegant proof; a refinement of a physical theory; the next quirky thing your son says. All are informing, all contribute to making you more than you were before them. Even oppression, persecution, and assault are potentially additive, if variably unpleasant. Those who don’t believe it was created are not necessarily restricted from admiring the natural world; this is no one’s special province.
I awake each morning and rise, not because today will be better than yesterday; but because today is today, and is not yesterday.
And if it’s not, you’re doing it wrong. Which represents the greater hubris: we have found structures in the universe whose origins we can understand only if they were created by a higher consciousness; or given an intelligent creator, whoever that is must’ve received, like, a C at best for ENG302: Creating Things That Work Well to Populate Your New Universe?
The first point is rather subtle, or seems so given the enthusiasm with which the standard of intelligent design is being raised. The creative capacity of the human species is lauded as one of the characteristics that significantly separate us from other species, and is fundamental to viewing humans as undoubtedly greater. We look to our semiconductor chipsets, our skyscrapers, our movable type, our perfect novels, and our weapons of mass destruction and proclaim, awfully, Holy shit, we own. To us, these results of our creative endeavors are without the bounds of nature.
No surprise, then, that upon observing some phenomenon in nature which bears complexity strikingly similar to that of creations wrought by our hands and minds, we conclude said phenomenon is the product of some creative force. Of course that force would have to be intelligent, must be conscious and create with intent, because that’s what we do. That force, though, is not human.
Let’s assume there is an intelligent designer. It’s no feat whatsoever to find fault with this designer’s product (though this is by no means perfectly demonstrative of the claim, as some responses note). I have also heard, more than once and from more than one source, that engineers interested in the topic generally consider the human body to be poorly engineered. Our only saving grace, it would seem, is our intelligence, as, without it, we surely would not prosper as we do. However, a useful argument can be framed which would call even this conclusion into question. It is a long way from self-evident that we are, in fact, the product of an intelligent designer, if a product of any designer we are.
Proponents of intelligent design might have their bets hedged. It’s likely that someone, upon reading the preceding paragraph, would respond, Well, sure, maybe the coccyx seems ill-planned; but you can’t claim to know the Creator’s plan! You’re overstepping the bounds of your understanding, sir, and I’ll have none of that. To which I would reply, I was only following your lead, my fellow, given that you’ve deemed your understanding of nature to be quite conclusive on the matter of what occurs naturally and what must be created to exist.
Further, I’d add, straightening my smoking jacket, it might appear to the wary observer that it is you who has overstepped the bounds of your understanding more grossly, decided as you have that whatever you might make with some hardware and a free afternoon is beyond the means of the remaining entirety of the universe…except an agent fashioned after humans. Could it be, rather, that ‘creation’ is not the province of intelligence at all, but instead just one other dynamic of the universe, like entropy? Is it really so odd to think that we ‘create’ because we are wholly within nature, inextricably, and are no less an agent of nature, an organ, than gravity or stars or diseases; and, further, is it really so odd to think that, as an organ of nature, we should propagate a dynamism found elsewhere in nature, producing output which had not been observed prior? What if ‘creation’ is a product of the universe, and not vice versa?
How different, really, are these conceptions? What is really so different about explaining the development of some observed phenomena with an appeal to this Greatness over that one? We may debate the details but not the Greatness. That is, no one really presumes to understand everything about everything, or even everything about much of anything; and whether they’re particle physicists or pastors (or both), for each contributor to this large discussion there is a threshold beyond which we seem commonly to say, Well, beyond that, we just don’t know.
Carlin died in June of 2008, at the age of 71. It’s hard to estimate Carlin’s value to Western culture; his Seven Words are, today, somewhat less powerful than they were in 1972, and no matter his one-time popularity, their deflation is not significantly his doing, I think. At his height, TV love scenes still involved the “one foot on the floor” rule, so for many the Carlin spiel was apparently quite powerful.
And while the Carlin canon pivots on those seven words, they were only part of his larger social commentary machine. No doubt the Richard Pryors and Steven Wrights of the world owe some small debt to George Carlin. As social creatures who, if unwittingly, make use of their exploits, we, too, owe him that debt. I often found his delivery a little too fond of itself, but maybe that was part of the bit.
However his cultural value is estimated, he seems one of the last of a dying breed, if not necessarily a dying species. I don’t get out much, but I don’t recall hearing of anyone with a similar currency, in terms of popularity and stickiness, who challenges social norms as a matter of principle. The closest might be Dave Chappelle, off the top of my head, but that’s not where I keep my good stuff so I’m not sure how much you should invest in that idea. Oprah Winfrey notwithstanding, the popular culture has changed over these decades of Carlin’s popular decline, and any market research company (including mine) will tell you that it’s increasingly difficult to intercept people’s attention. Some have called George Carlin a “hero”; I’m not sure anyone can become that kind of hero in a culture that cherishes its schisms and disaffection.
The University of Oxford has apparently taken £2M toward answering “Why do we believe in God?” From the Times article:
They will not attempt to solve the question of whether God exists but they will examine evidence to try to prove whether belief in God conferred an evolutionary advantage to mankind. They will also consider the possibility that faith developed as a byproduct of other human characteristics, such as sociability.
I found this in a discussion about the study in the Galilean Library, which is a wonderful resource for thoughtful, and almost more importantly, civil discussion across the philosophical spectrum. I added a (typically long-winded) point of consideration, summarized by saying that, as much as “God” might just be a label for the collection of “all things we think we don’t know, or which are currently ineffable and beyond our scrutiny,” then any such study, however expensive, really aims to investigate just one in a class of cultural metaphors, models, for things we want to understand.
There are undoubtedly flaws in viewing all our cognitive endeavors as mere mapping sensation to lingual metaphors, as it is certain to be a crude approximation, if a valid approximation at all; but just as in the case of the infinite square well, first-order, crude models work well as insertion points to broader and/or deeper analysis. And lest it be thought that I’m of the same “science != religion” mindset I was just a few short years ago, here’s a hopefully fruitful excerpt.
Of course, science itself probably holds no special claim to some objectivity, some circumnavigation of our use of metaphors as models. Just because scientific models use polynomial notation, and are algebraically derivable, and embed quite a lot of effective structure in the mathematical language they inhabit, doesn’t mean they’re anything more than just intricate metaphors. Niels Bohr developed a pretty model for the atom, but this perfect picture has since been rebutted. It worked, though, and, in many contexts, is a fine picture of atomic structure. Physics students still make use of metaphors in university, e.g. the infinite square well. These are simplified pictures used to motivate a subtle concept, before their complexity is increased on the way to deeper understanding.
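(For the record, the infinite square well is about as first-order as models get: a particle confined to a box of width $L$ with impenetrable walls, whose allowed energies fall out of the Schrödinger equation already quantized,

$$E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \ldots$$

A crude picture, certainly, but one that exhibits the essential strangeness the fuller theory elaborates, which is precisely why it works as an insertion point.)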
The specific question about whether belief in “God” conferred/confers an evolutionary benefit is interesting, and something I’ve pondered for a while. In general, it seems that people with some kind of workable model prosper to a greater degree than those who don’t, independent of the model’s approximation of “truth.” This is related, I think, to results of a study concluding that those able to deceive themselves tended to be happier than those who weren’t, given that (a) models are only ever approximate, and (b) the utility of a model is probably proportionate to how strongly we believe in its explanatory power.
Of all the memes wriggling across The One True Web (that social network that includes and exceeds the digital), I suggest that the greatest compels some to describe helpful normative domains, and compels others to seek these out.
I came across the following, via kottke.org, and while it’s not quite as concise as other efforts, “Taleb’s top life tips” nevertheless makes for interesting brain fodder. I’d care to hear which of these are particularly interesting to you.
- Scepticism is effortful and costly. It is better to be sceptical about matters of large consequences, and be imperfect, foolish and human in the small and the aesthetic.
- Go to parties. You can’t even start to know what you may find on the envelope of serendipity. If you suffer from agoraphobia, send colleagues.
- It’s not a good idea to take a forecast from someone wearing a tie. If possible, tease people who take themselves and their knowledge too seriously.
- Wear your best for your execution and stand dignified. Your last recourse against randomness is how you act—if you can’t control outcomes, you can control the elegance of your behaviour. You will always have the last word.
- Don’t disturb complicated systems that have been around for a very long time. We don’t understand their logic. Don’t pollute the planet. Leave it the way we found it, regardless of scientific “evidence”.
- Learn to fail with pride—and do so fast and cleanly. Maximise trial and error—by mastering the error part.
- Avoid losers. If you hear someone use the words “impossible”, “never”, “too difficult” too often, drop him or her from your social network. Never take “no” for an answer (conversely, take most “yeses” as “most probably”).
- Don’t read newspapers for the news (just for the gossip and, of course, profiles of authors). The best filter to know if the news matters is if you hear it in cafes, restaurants… or (again) parties.
- Hard work will get you a professorship or a BMW. You need both work and luck for a Booker, a Nobel or a private jet.
- Answer e-mails from junior people before more senior ones. Junior people have further to go and tend to remember who slighted them.
Additionally, though within a more focused scope (and from the same source), Kurt Vonnegut advises on how to write with style. Herewith, a full reproduction. (I’m open to suggestions if reproducing these here, even with proper credits, breaks with good sense.)
- Find a subject you care about
- Do not ramble, though
- Keep it simple
- Have guts to cut
- Sound like yourself
- Say what you mean
- Pity the readers
I find, for my part, that the first is surprisingly difficult. I rather have a hard time choosing a subject I care most about; it’s my utter failure at committing to any one subject that leads to my abject frustration in making headway in any of them.
Sometime ago, I read two pieces a synthesis of which might be interpreted to say that all we can hope for in life is to concoct an artifice which distracts us from the impending and concurrent oblivion of purposelessness which would otherwise crush us. Herewith my earnest and hopefully readable attempt at such a synthetic enlightenment.
… creativity is essentially an overwhelming presence of awareness, and may very well be mindfulness, and could be a form of meditation, or it could be more like lucid dreaming (outside of dreaming – as in, lucid wakefulness), or it could be the state attained through the creative mind, which seems to be on a whole different level of consciousness altogether.
Hydragenic responds “intuitively.”
Yes, yes, yes. And can we throw the word ‘otherness’ in there somehow as well? Creativity as an approaching of the divine via a process of lucid mindfulness that allows us to appreciate, however briefly and superficially, the intrinsic strangeness of everything other than oneself.
Hg soaks in this for a bit, and illuminates some deeper reflection, steeped over years and an embrace of the strange.
We’re all looking for meaning in life, one way or another. I’ve come to the conclusion that meaning comes from creativity. Creativity in its broadest sense: the bees in the garden gathering pollen to make honey, friends and family making relationships and babies, businesses concocting fascinating products, singers pulling together words and melodies, painters filling canvases with dreams and desires.
Thus, so far, we get the picture that meaning is a consequence of, and not a cause of nor reason for, our existence. A search for meaning, then, should not be a search for some pocket of import somewhere in the aether, should not be an effort to distill some back-of-the-universe answer to the question “What is the purpose of life?” Rather, we have but to create our meaning, and live our meaning.
Where, in other hands, this sort of inspection might yield a gaudy melodrama, Hg finds empowerment, in the attribution of meaning to those who would build their own.
To discover something is to encounter its essence, which the dictionary describes as “the basic, real, and invariable nature of a thing or its significant individual feature or features”. Its true identity, in other words: what makes it different to everything else. It strikes me that art – creativity – is the process of divining and defining uniqueness. It’s fine to make connections between things, but ultimately those things are separate.
Does that sound too bleak? I see it as strength, as infinite richness. Too abstract? We all encounter art on a daily basis, in one form or another. Too solipsistic? I can’t dispute that: all I am is all I am.
Jared Christensen has touched on a parallel set of observations. Though mostly specific to the software industry (my own feeble gross overgeneralization of a culture, but it suffices), Jared’s piece provides a subtle but explicit hook to abstraction. And, anyway, the “software industry” is just an instance of a class of social dynamic systems, which allows for a natural comparison across common points of structure.
Sometimes I look around at the state of software, and systems in general, and wonder if they are run by [agents of chaos (my paraphrasing)]. Or perhaps the chaos began by human fallibility, but now the mess is willfully maintained in order to feed this ecosystem that thrives on the system failure. Do companies actually put overly complex, mildy [sic] destructive products out into the market, intentionally giving rise to and continuing to feed an ecosystem of other companies that thrive on repairing the damage? Are some systems designed to be so irritating and complex that whole industries must be erected to make sense of it (*cough* taxes)? Is broken the preferred state for the makers of some products and systems we interact with every day? And does the ecosystem have the power to perpetuate the failure, supplanting the creator’s will to rectify the problems?
Surely, you’ve heard arguments of similar logical structure applied to government, to law enforcement, to lawyers, to the profession of educators, to book publishers, and to practically innumerable other facets of our shared lives. Each year, hundreds of high school and university departments order the nth edition of Larry Hackajob’s monotonously uninspiring text for <some class>, not because it’s an improvement, but due to a self-propagated system of kickbacks and peer pressure designed not only to justify this subsequent edition, but to penalize the frugality of just using last year’s edition. Western societies, and most industrialized ones at least, require work to justify the creation of jobs that buoy the societies themselves. Consider pieces of Roosevelt’s New Deal, which allocated portions of the federal budget explicitly for the creation of jobs, not because there was a pressing need for any of them so much as to start the nation’s long economic repair. We might usually consider work to carry an intrinsic necessity, but these programs, however indefatigably noble and even ethically necessary, were artifacts. The theme echoes through time.
Drawing them together, I can’t help but conclude that all our efforts, all our endeavors, are not dissimilar from those Depression-era policies nor from the willful messes of agents of chaos. If, as Hg posits, meaning is our product and not the other way ’round, then we build our own temples to chaos and fabricate our need to aspire and achieve, no matter the ambition or arena. Assuming this, there’s absolutely no wonder so many of us are lost, listless, and sometimes paralyzed by the prospect of choosing a course.
Once, quite without meaning to, I advised someone to embrace just such a notion. When asked how to decide the rightness of an act or decision, I said something like, “Well, I don’t know, it’s hard to say; but maybe we just have to decide what we would want to be right, and stay as close to that as we can.”
I care about the answer non-academically, in a real, I have an assignment and it is to live and I don’t want to fail kind of way. As I told Hg, though, in a comment:
Truth be told, I’m a little lost on this question. There’s no greater sensibility to leaving the world than staying, so at the very least, an inclination to survive keeps me going. But if the most we can hope for is to revel in the possibilities of our creative efforts (in your quite useful, broad notion), that’s kind of just a dressed-up hedonism…or no?
There’s no manual I’m aware of, except the one we’re writing and editing. Since any sense of meaning is necessarily a social one, I’m curious how others find their materials and what grammar they use to construct sense and order (unabashedly subjective, those) in life. So share.