Copyright © 1993 Dave Gross
Contact Dave Gross for reprint permission.

Reprogramming Your Mind for Fun and Profit

with a brief review of meme theory

by Dave Gross

It is much too simplistic to say that the brain is a computer, that thoughts and memory are data, that thinking is analogous to running software. But ever since these analogies have presented themselves (often in reverse, with computer engineers using, or misusing, brain analogies to explain their creations), they have proven irresistible to those trying to understand the human mind.

If the mind is software, asks the student of urban legends, what would a computer virus of the mind look like? If the brain is a computer, the neuroscientist muses, where are the blueprints? If memories are data, wonders the dream theorist, where are they stored and in what format?

Psychedelic researchers like Timothy Leary and John Lilly have often phrased their questions about the mind in terms of "reprogramming" it (similar in etymology if not in connotation to the more ominous "deprogramming" undergone by unfortunates removed from the incorrect cult). Lilly, in fact, went so far as to declare:

"All human beings, all persons who reach adulthood in the world today are programmed biocomputers. No one of us can escape our own nature as programmable entities. Literally, each of us may be our programs, nothing more, nothing less."

Human beings are not gods (and, Lilly aside, we are more than algorithms), but we are not quite animals either. Through our use of symbolic language, we have merged our mostly genetic history with an almost self-perpetuating, human-created universe populated with ideas, each competing on the turf of the cultural meme pool for momentary natural selective advantage.

But memes, god bless the little buggers, are much less static than genes. Just as some viruses mutate so quickly (even in the course of a single colonization) that they defy identification as a species with species-constant traits, memes mutate extremely quickly and tolerate a great deal of variation at every step of transmission. (If a phrase-meme gets transmogrified in a human brain into a song-meme, which then turns into a tune-meme and then into a rhythm-meme -- when that meme is passed to another human who "can't get this rhythm out of my head," what can we say for the memetic success of the original phrase-meme?)

Because individual human survival at this point in the genetic game depends so much on the individual's ability to understand and manipulate the memes with which he or she interacts -- and, more importantly, because those memes may be very different from the ones encountered by even close genetic ancestors -- the process for handling these interactions must be genetically manipulated (gently) on a meta-level.

Certainly much of this meta-learning is done very efficiently by the brain already. But the question that concerns us is this: Is the brain really doing the most efficient learning and acting that it is capable of, or is it genetically predisposed to learning about a world that doesn't exist anymore -- a world modeled by natural selection in which any surviving trait has survived because of its relevance in the past and not necessarily in the present?

Are our brains designed to learn about the world as it was 10,000 years ago or as it is today? If our brains had flexible enough meta-learning skills, perhaps there would be no difference.

I think, however, it is more likely that our brains have developed more haphazardly, and I think the available evidence would back that up.

If this is so, another question presents itself: Can we consciously grab the reins of our normally subconscious mental processes and change them in such a way as to make the processes more pleasing, efficient, or accurate?

Most of us consciously deal with our brains on a very high level -- content to use the wonderful features without looking for the source code, to use one of those inaccurate computer analogies. When we try to remember a name or a face, we are usually content to push the mental "try to remember" button rather than examine how the brain goes about committing something to memory.

In other words, we often make a conscious decision to do something, without doing that something in a conscious way. We use pre-packaged brain routines rather than trying to invent our own methods.

And why not? One of the classic ways to unnerve your tennis opponent is to say "Great shot! Try to remember how to do it again," whereupon the opponent thinks so hard about the shot that he can't repeat it. Thinking about a process for which you already have a working subconscious subroutine is usually inefficient.

But what of cases in which our brain seems "hard-wired" to do a certain process in a certain way -- the activities for which natural selection has not yet provided the brain with flexible "meta-learning" skills? For instance, the brain learns some linguistic techniques and processes them in such a specific portion of the brain that a doctor can often determine where brain damage has occurred just by listening to her patient talk. Are these places necessarily the best places (for our own interests, which may not coincide with those of our genes), or might moving these skills to another part of the brain -- at least on occasion -- be worth the effort? Also, are there parts of our brain that are unused, or underused, because they have fallen out of evolutionary fashion?

I recently read in the newspaper about a man with the strange hobby of trying to memorize one million digits of pi. So far he's up in the high seven thousands, with an accuracy of 99.9982%. Most people are incapable of memorizing more than a handful of arbitrary digits at a time, due to the limitations of whatever part of our brains is dedicated to memorizing series of arbitrary digits.

This fellow, though, expanded his algorithm to take over other areas of his brain normally unused for this function. By grabbing more system resources, including a better "memorization processor," he was able to accomplish his impossible-sounding task. Here is how he reprogrammed his memorization-of-numbers algorithm (taken from the newspaper account):

No one can remember 1 million numerals. Or 7,777. But people can remember little stories, even nonsensical ones. That's the secret: to translate numerals to words and then stories...

As [Michael] Harty does it, each numeral from 0 to 9 is given a consonant sound. The second and third numbers of pi, 1 and 4, are T and R, sounding like a tear from the eye. The next digits, 159, are TLP, expanded to tulip. A tear on a tulip. It's called visual memory or visual representation. Here's the beginning of pi, starting with a numeral translated directly to a picture:

Line 1 is a picture of a bow tie and it's tearing, that is, it's crying, on a tulip, which moves away and picks up a device which puts a big notch into a yellow mule. The yellow mule becomes quite upset, and he grabs a big box of a laundry detergent called Fab, I don't know if it's even out anymore, and he dumps it on a gray bomb, and the gray bomb, of course, gets angry as well...

So Michael Harty has taken a process that is normally allocated slight (and to Harty, insufficient) resources, and reconfigured it in such a way as to take over the more developed brain processes of narrative-following, visual memory, and linguistic memory.
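
Harty's digit-to-consonant scheme is simple enough to sketch in code. The account above gives only a few of the mappings (1 = T, 4 = R, 5 = L, 9 = P); the full table below is the standard "major system" mnemonic alphabet, which is an assumption on my part about the rest of his scheme, though it agrees with his "tear" (14) and "tulip" (159) examples. A minimal Python sketch:

# A minimal sketch of translating digits into consonant sounds, as in the
# newspaper account above. The full digit-to-sound table is the standard
# "major system" -- an assumption on my part; the article itself only
# confirms 1=T, 4=R, 5=L, and 9=P.

MAJOR_SYSTEM = {
    "0": "s/z",
    "1": "t/d",
    "2": "n",
    "3": "m",
    "4": "r",
    "5": "l",
    "6": "ch/sh/j",
    "7": "k/g",
    "8": "f/v",
    "9": "p/b",
}

def digits_to_sounds(digits):
    """Translate a string of digits into the consonant sounds to hang words on."""
    return [MAJOR_SYSTEM[d] for d in digits if d in MAJOR_SYSTEM]

# The first decimals of pi, chunked the way Harty's story chunks them:
print(digits_to_sounds("14"))   # ['t/d', 'r']         -> "tear"
print(digits_to_sounds("159"))  # ['t/d', 'l', 'p/b']  -> "tulip"

Choosing a concrete word for each chunk and stringing the words into a running story is then the step that hands the job over to narrative and visual memory.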

We already do some of this without really being conscious of it. When someone in Physics 101 says "I understand the question, I just can't visualize it," you can understand the process they're going through, even if you aren't thinking much about the process itself. That the person is busy trying to transfer a problem from the inadequate mathematical part of his brain to the highly efficient visual-processing powerhouse is implied but not examined.

Or if you say "it's too loud in here to think," are you really acknowledging that you are frustrated by not being able to use the extra brain resources normally available in the aural processing areas to do non-auditory brain tasks?

Human beings use our sense of smell much less than many of the beasties from near the spot where our branch of the evolutionary tree sprouted. Could it be that, while the efficiency of the scent-receiving organ and the importance of smell have been greatly reduced, the processing methods and areas in the brain are still there -- suffering from disuse, but efficient in their own ways?

Certainly the legendary way in which smell (home-baked bread, for instance) can summon back vivid memories suggests that our atrophied sense of smell has left behind a mental processor with great potential.

But really, it's only speculation at this point as to which mental processors are most efficient at which tasks, which tasks can be reformulated in such a way as to reprogram the brain's way of handling them (how would one memorize pi using scent processing? I have no idea), and which brain functions are consciously mutable. Studies that have been done so far aren't looking directly for this sort of information, and only hint at the answers.

Anecdotal accounts of savants in various areas (Michael Harty, et al.) are a good starting point, but not the end of the story. Most of us could come up with areas in which we would like our mental functioning to work in ways more convenient to our own personal needs. Most of us, however, feel straitjacketed by what intuitively seem to be immutable mental processes in these areas. I think that there is hope for non-intuitive, even ridiculous-seeming reprogramming of brain skills.

The often-taught process of using mnemonics in memorization is hardly even an example of this -- mnemonics are taught as if they were magic tricks, without any awareness of the mental processes behind them, and the application of similar techniques to tasks other than memorization is not even hinted at.

The people who are equipped to swim in the information age will be those who have learned to change their minds -- to change their ways of processing information so that their brains are the servants of their will, and not the slaves of natural selection.


Memes: an introduction

The word "meme" was invented by Richard Dawkins in his 1976 book The Selfish Gene. In his words, "examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body... so memes propagate themselves in the meme pool by leaping from brain to brain..."
"We need a name for the new replicator, a noun that conveys the idea of a unit of cultural transmission, or a unit of imitation. `Mimeme' comes from a suitable Greek root, but I want a monosyllable that sounds a bit like `gene.' I hope my classicist friends will forgive me if I abbreviate mimeme to meme. If it is any consolation, it could alternatively be thought of as being related to `memory,' or to the French word même. It should be pronounced to rhyme with `cream.'"
-- Richard Dawkins
"[W]hen we look at the evolution of cultural traits and at their survival value, we must be clear whose survival we are talking about. Biologists... are accustomed to looking for advantages at the gene level (or the individual, the group, or the species level according to taste). What we have not previously considered is that a cultural trait may have evolved in the way that it has, simply because it is advantageous to itself."
-- Richard Dawkins



