The Death of Long-Term Memory

This fascinating article at Scientific American, about human and animal consciousness, contains the following passage:

In humans, the short-term storage of symbolic information—as when you enter an acquaintance’s phone number into your iPhone’s memory—is associated with conscious processing.

A few years ago, when I was first learning about memory, the example probably would have gone more like “your short-term memory holds small amounts of information, like a phone number, while you rehearse it in your head until you have it memorized.”

The main difference between the examples is that the iPhone has replaced our own biological storage as the final resting place of long-term memories. I think this points toward a more general trend, in which technology is taking over many of the functions our brains used to carry out. Why memorize a phone number when you can, at any time, retrieve it on a screen with a few swipes of your finger? Why commit the times table to memory when a calculator is always close at hand?

Storing memories outside of our brains is nothing new; scrawling something on paper is much the same. However, the ease with which we can store information in, and retrieve it from, these external memory banks is improving at an exponential rate. Today, much of the human race’s collective store of knowledge can be searched in a fraction of a second with a few keystrokes in a search engine. Maybe tomorrow, our fingers won’t even be an intermediary step; a direct link between our minds and databases need not be science fiction. Google may not just be the future of computers, but the future of the human race.

As we continue to improve our access to information outside of our heads, I think there will be less emphasis on teaching people raw information, and more emphasis on teaching what to do with information. [self plug] Scientific research into topics like human creativity (which computers don’t seem to have mastered yet) and cognitive psychology will become increasingly important [/self plug], as will disciplines like philosophy and math, which deal purely with how to manipulate information into something useful. We should probably also keep Keanu Reeves around to make sure we haven’t slipped into The Matrix without realizing it.

Christof Koch (2009). Exploring Consciousness Through the Study of Bees. Scientific American.

Minor Issues


I’ve been thinking about music a lot lately. Yesterday, I had a conversation about why certain chords tend to “sound good” together. It seems like a lot of it has to do with the physical layout of an instrument; certain chords are easier to play together on a guitar. Since most rock music is based on guitars, chords that are easy to play together “sound good” together in rock music.

The thing is, this is all arbitrary. There’s no real, underlying reason why certain patterns sound good; it’s just a matter of what was easiest to play, and thus what musicians played, and thus what we’ve been exposed to our whole lives. Other cultures hear different patterns growing up, and would think ours sound weird. If we’d grown up hearing random patterns of chords (within certain limitations, I’m sure), those would sound natural together.

This seems unsatisfying somehow. Music feels like this transcendental, magical stuff that, when done right, can tickle the deepest reaches of our souls. If the line between beautiful music and shitty music is really just a proxy for the line between familiar and unfamiliar, filtered through historical accidents in our culture (like the layout of a guitar), it seems less magical, less eternal.

I think an even more striking example is the difference between major chords and minor chords. To people in Western culture, major chords usually sound happy, and minor chords usually sound sad. Why? Did one of the first popular musicians happen to associate minor chords with sad lyrics, then later musicians just followed suit? Could it have just as easily gone the other way?
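For what it’s worth, the acoustic difference between the two chord types is tiny. Here’s a minimal sketch in Python (an illustration only, assuming equal temperament and the standard A4 = 440 Hz tuning):

```python
# The only structural difference between a major triad and a minor triad
# is the middle note, which moves by a single semitone.
# (Illustration only; assumes equal temperament and A4 = 440 Hz.)
ROOT = 440.0  # A4

def note(semitones_above_root):
    # In equal temperament, each semitone multiplies frequency by 2**(1/12).
    return ROOT * 2 ** (semitones_above_root / 12)

a_major = [note(0), note(4), note(7)]  # root, major third, perfect fifth
a_minor = [note(0), note(3), note(7)]  # root, minor third, perfect fifth

print("A major:", [round(f, 1) for f in a_major], "Hz")
print("A minor:", [round(f, 1) for f in a_minor], "Hz")
# Happy vs. sad, and all that changed is one note shifting by about 6%.
```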

I dunno. I’m inclined to refuse to believe in the arbitrariness of music. Maybe minor chords are more similar to the sounds of crying and other expressions of sorrow, so their sadness is deeply imprinted in our genes and our souls. Maybe there is a deeper reason to prefer patterns of chord progressions, even if the specific set of chords in them is arbitrary.

I tried to look this up, as I figured it’d be a common issue and is certainly subject to scientific scrutiny. However, Google only comes up with speculation, and a quick search of PsycINFO (a database of psychology research) turns up only 10 results. One of them is an article from 1942 titled “The preference of twenty-five Negro college women for major and minor chords”, which might be a bit outdated. I guess, then, that this is still an open issue, and I’m one of the only nerds who spends time thinking about crap like this.

Of course, overthinking music, while fun, is pointless. No amount of intellectual pondering can take away the fact that music feels magical, and that is what really matters.

You Don’t Write on Your Own Facebook Wall

A peculiar fact about Facebook is that you are not supposed to write on your own wall. Because that really could have gone either way, eh? With blogs, conversations take place on a single blog, often with the blog’s owner commenting on his or her own posts. This has the advantage of keeping the entire conversation in one place. But a disadvantage is that anyone who comments on a blog has to keep going back to see if anyone responded.

What I wonder is who decided that posting on your own blog is OK, but posting on your own Facebook profile is not. Was it one person who persuasively argued for a position? (e.g., I’ve seen it argued that posting on your own wall is like leaving a note on your own fridge and hoping your friends will stop by to read it) Or did it just happen naturally due to subtle properties of Facebook that make having conversations between walls easier than having them on a single wall? Or was it completely arbitrary, with one position that just happened to spread around and eventually became codified as a new taboo?

It makes you wonder if other taboos develop in similar ways. Like, who decided it was wrong to wear a hat at dinner time? I’m sure there was a good reason for it at one point, but now, I see no reason why having a piece of cloth on your head disrupts a meal.

Of course, a good source of LOLs is breaking taboos, so I’m gonna go write inane messages to friends on my own Facebook wall while I eat pizza in a cowboy hat.

The Psychology of Ice Cream

In the psychological study of learning, there has been a lot of research on how to reinforce behaviours. Of particular interest is the timing of rewards. If you want someone to keep doing something, do you reward them every time they do it? Or do you reward them only some of the time?

Well, it turns out that if you want somebody (or somerat) to do something a lot, and keep doing it, it’s best to reward them only some of the time, and to randomly determine whether they get rewarded or not. This is called a variable-ratio schedule. If you don’t believe me, here is a graph with writing and numbers. Graphs do not lie:

This is why gambling is so addictive. You get rewarded for pulling that lever, but randomly and only every so often. It may also be why checking email can be addictive. Clicking that inbox gets rewarded with a message, but only sometimes.

I think this also applies to ice cream. As we all know, the best part of many ice creams is the chunks. Vanilla ice cream is OK on its own, but in a spoonful with a nice big chunk of cookie dough or a brownie bit, it’s infinitely more rewarding.

But usually, in a tub of, say, 100 spoonfuls, there might only be, say, 25 spoonfuls that contain yummy chunks. And since the chunks are randomly distributed throughout the tub, each spoonful has about a 25% chance of containing a chunk. If eating a spoonful of ice cream is the behaviour and chunks are the reward, this is what we call a VR4 (variable-ratio 4) schedule: reinforcement is random, but on average, every 4th behaviour is rewarded. It’s the perfect recipe for making someone eat ice cream quickly, and keep eating it.
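To make the arithmetic concrete, here’s a minimal simulation in Python (the tub size and chunk count are just the made-up numbers from above):

```python
import random

def eat_tub(spoonfuls=100, chunks=25, seed=None):
    """Simulate a tub where chunks are scattered at random.

    25 chunks in 100 spoonfuls means a ~25% chance per spoonful, i.e. a
    VR4 (variable-ratio 4) schedule: on average, every 4th spoonful is
    rewarded, but you never know which one.
    """
    rng = random.Random(seed)
    tub = [True] * chunks + [False] * (spoonfuls - chunks)
    rng.shuffle(tub)

    gaps, since_last = [], 0  # spoonfuls eaten since the last chunk
    for has_chunk in tub:
        since_last += 1
        if has_chunk:
            gaps.append(since_last)
            since_last = 0
    return gaps

gaps = eat_tub(seed=42)
print("spoonfuls between chunks:", gaps)
print("average gap: %.1f" % (sum(gaps) / len(gaps)))  # hovers around 4
```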

This is why I eat so much ice cream. It’s friggin’ science. And while I often complain that there are not enough chunks in ice cream, it’s clear that ice cream manufacturers have outsmarted me. It wouldn’t be quite so addictive if every spoonful had a chunk.

It’s also why you shouldn’t eat right from the tub. With the magic of psychology at work, you would probably eat the entire tub in the time it takes to, say, write a blog post about the psychology of ice cream.

*burp*

———-

P.S. If you do a Google image search for “Reese ice cream”, you will find a surprising number of pictures of Reese Witherspoon eating ice cream. She, too, must be a victim of variable ratio reinforcement.

Halifax

The last major stop on my trip was Halifax, where I presented research at the Canadian Psychological Association convention. The poster was about my research on the relationship between geomagnetic activity and creativity. Basically, I found that when the earth’s magnetic field is disturbed by funky stuff going down on the sun, people are more creative. So, you know, pretty out-there stuff. Surprisingly, nobody really challenged it, and most people found it quite interesting. One person asked me if this means there is something to magnetic bracelets, and I said no, those are a scam and they are stupid. I think maybe she was wearing one, so that was insulting, but dude, they’re a scam.

Halifax is a beautiful city. I’d love to live there someday (though maybe I’d regret it come winter). Here are some pictures:

Apparently Halifax has the most pubs per capita in North America, and was populated only because residents were promised free booze for a year. My kind of place.

The Keith’s brewery is there, obviously.

Alexander Keith, who was a mayor of Halifax in addition to brewing average-tasting beer, is buried in this graveyard:

We saw Anonymous protesting Scientology. One sign said “honk if you oppose Scientology”, but I was on a tour bus at the time, so I just sorta made a honking motion in the air. Because seriously, screw Scientology.

Peggy’s Cove, a tiny fishing/tourism village just outside Halifax, is gorgeous. Look:

This girl in a prom dress was chasing two ducks and some giant mutant duck-goose-thing. She was laughing as she tortured the poor birds, while other nicely dressed people took pictures. It was all very surreal.

Anyway, Halifax was probably my favourite part of the trip, because I did lots of fun things and ate lots of delicious foods and met lots of awesome people. You should go.

Beer : Statistics :: Peas : Carrots

I once got slightly intoxicated while “studying” the night before a major exam in statistics. Normally this would not be something to be proud of, but the fact is, despite the morning headache, my mind was clear of distractions and all that wonderful statistical knowledge flowed onto the paper just as smoothly as the beer flowed into my belly the night before. I aced the exam and secured my future in psychology. (This is a story that Nick likes to tell whenever someone mentions exams and drinking in the same sentence).

It turns out there is a very good reason that beer and statistics go together like peas and carrots. The study of statistics has been linked with beer since its early history. Anyone with basic stats knowledge has heard of Student’s t-distribution, often used to tell whether two groups differ from each other on some measure. Student was the pen name of William Sealy Gosset, a statistician working in Dublin. The dude chummed around with some of the more familiar names in stats, like Pearson and Fisher.

The thing is, Gosset didn’t give a crap about discovering the inner workings of the mind by poking and prodding samples of unsuspecting humans. No, Gosset just wanted to use mathematics to brew tasty beer. He worked for the Guinness brewery, applying statistical knowledge to growing and brewing barley. Guinness wanted to protect this powerful secret knowledge from competitors, so Gosset was forced to publish under a fake name; apparently more math-creative than naming-creative, he chose the name “Student.”

So that’s how Student’s t-distribution was born. And that’s why having a few pints of Guinness before a major stats exam should be encouraged. Even if it results in failure – and it very well might – mention to the prof that it was a tribute to the long and fascinating history of beer and statistics. That’s gotta be worth a few bonus marks.
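And in case you’ve only ever met Student’s t in a lecture, here’s a minimal sketch in Python (the tasting scores are made-up numbers) of the kind of two-group comparison Gosset’s distribution makes possible:

```python
from scipy import stats

# Hypothetical taste ratings for two barley batches (made-up numbers).
batch_a = [7.1, 6.8, 7.4, 7.0, 6.9, 7.3]
batch_b = [6.2, 6.5, 6.1, 6.7, 6.4, 6.3]

# Student's t-test: do the two batches differ, on average?
t, p = stats.ttest_ind(batch_a, batch_b)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p-value says the difference is bigger than chance alone would
# predict, which is exactly the kind of question Gosset faced at Guinness.
```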

…..

P.S. I hope you noticed the subtle normal curve in the picture of the Guinness up there. That took some serious Photoshop skills, you know.

Get Smart

Wired Magazine has just put up a set of articles on the topic of intelligence: Get Smarter: 12 Hacks That Will Amp Up Your Brainpower.

It’s partly just a movie promotion (for the Steve Carell remake of Get Smart. Get it?), and a lot of it is oversimplified or just plain wrong, but there is some interesting stuff in there that’s worth thinking about as long as you have a few grains of salt at the ready.

One thing I found particularly interesting is the person who tried to maximize their time by cutting down on sleep. As I’ve long maintained, sleep sucks; for the most part, it’s a waste of time. Unfortunately, the experiment didn’t seem to work: they switched to short naps throughout the day, with less overall sleeping time, and felt crappy all the time. Bummer. I’m still waiting for that anti-sleeping pill with no side effects. Get on it, science!

Publish and Perish

Not to brag or nothin’, but you are now a friend/acquaintance/worshiper of a published scientific researcher. My first publication finally popped up on the internet recently (even though it was apparently published in 2007; the journal seems to be running behind or something).

Here is the full reference:
Sorrentino, R. M., Seligman, C., & Battista, M. E. (2007). Optimal distinctiveness, values, and uncertainty orientation: Individual differences on perceptions of self and group identity. Self and Identity, 6, 322-339.

If you’re subscribed through your university (or wherever), you can find the article here, at your local library, or through Google. That’s right, I’m Googlable.

Optimal distinctiveness refers to the fact that people don’t like to feel too different from other people, but also don’t like to feel too similar. However, this is true for some people more than others. We found that people who prefer certainty to uncertainty tend to think of themselves as more similar to other people after being made to feel different. In other words, these certainty-oriented people want to assimilate back into the crowd when they feel like weirdos who don’t fit in.

We proved this with advanced science. Here is some science from the article:

Those are graphs and formulae. It doesn’t get much more scientific than that.

I do find it strange that this article costs $43.75 to purchase without a subscription. That’s more than most books, just for one article that is, no offense to the authors (none taken), not all that exciting. What’s strangest, though, is that I don’t get a dime of that. Musicians complain that record companies take a large percentage of the profit from record sales. With us, publishers take 100%.

Plus, isn’t science supposed to be free, open, and collaborative?

Oh well. Luckily, with the internet, it’s nearly free to distribute a file containing a research article, and many researchers make their own work available free of charge on their personal web sites. Hey, maybe I should do that. I will soon. You just stay tuned.

Anyway, I’m done bragging / feeling sorry for my broke self.

See also: Optimal Distinctiveness Theory on Wikipedia. Oh look, there’s our article! How did that get there? *WINKY FACE*

Arthur C. Clarke, RIP


Arthur C. Clarke died today (*). The man was a genius. I’ve only recently started reading his books, but his impact has been felt throughout my life. Nearly every piece of science fiction created since the 50s owes something to Clarke. More directly, seeing 2001: A Space Odyssey as a kid, even though I didn’t fully understand it at the time, probably had quite the impact on me. It’s a testament to human curiosity about life’s most perplexing questions, and the fact that there is more to life than this earthly existence, with no need to invoke the supernatural to appreciate it. Perhaps this was part of what sparked my interest in science.

Speaking of which, anybody interested in science should take note of Clarke’s laws of prediction:

1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.

There’s a lot to take away from those three little statements. But I think the main message is one of hope rather than cynicism. What seems impossible may very well be possible; what we consider magic today may be within our reach tomorrow.

Even though it’s impossible, let’s hope Clarke is now a glowing fetus looking down on us from a bubble floating in space. Float in peace, Arthur C. Clarke.


* Actually, he died tomorrow, since he was in Sri Lanka, where it’s already Wednesday.

Book Review: Stumbling on Happiness, by Daniel Gilbert


As anyone who has studied any psychology knows, humans have one of the most advanced brains out of all the animals on this planet, but they’re far from perfect. There are a lot of situations in which our brains make minor mistakes, and some situations in which they outright betray us. Stumbling on Happiness is an overview of many of these mistakes, with a special focus on mistakes we make when we remember how we felt in our past, or try to predict how we’ll feel in the future.

The book is extremely easy to read. It’s often hilarious, and not just in an “I’m a clever scientist so I’ll throw in a reference to some obscure work of literature and everyone will laugh” sort of way, but actually hilarious. It also steers clear of psychology jargon and statistics. As someone studying psychology, I could complain that he sometimes oversimplifies things (for example, discussing the theory of cognitive dissonance without ever calling it by name), but really, the book isn’t meant for psychologists. Anyone could read it and learn a lot about how the mind works, then go read the original research for the details. And even though I don’t fully agree with every conclusion he reaches, I’m glad he never simplifies to the point of being dishonest, like some popular psychology books that offer an easy answer to eternal happiness. In science, and especially in psychology, there are no easy answers.

A caution though: the book is more about stumbling, less about happiness. As Gilbert clearly states at the beginning, this isn’t a book about how to make you happy. It’s about how you often suck at predicting what your future will be like, and that happens to include how happy you’ll be. This book can teach you a bit about human psychology, but it cannot teach you how to be happy. He does give one scientifically verified suggestion for how to predict your own happiness, but you won’t like it.

I really enjoyed reading Stumbling on Happiness. Or at least, I currently think I enjoyed it; my brain may not be entirely accurate in retrieving how happy I was while reading it. But I’m pretty sure that anyone interested in psychology could amplify their own future happiness by picking up this book.