The Death of Long Term Memory

This fascinating article at Scientific American, about human and animal consciousness, contains the following passage:

In humans, the short-term storage of symbolic information—as when you enter an acquaintance’s phone number into your iPhone’s memory—is associated with conscious processing.

A few years ago, when I was first learning about memory, the example probably would have gone more like “your short-term memory holds small amounts of information, like a phone number, while you rehearse it in your head until you have it memorized.”

The main difference between the examples is that the iPhone has replaced our own biological storage as the final resting place of long-term memories. I think this points toward a more general trend, in which technology is taking over many of the functions our brains used to carry out themselves. Why memorize a phone number when you can, at any time, just retrieve it on a screen with a few swipes of your finger? Why commit the times table to memory when a calculator is always close at hand?

Storing memories outside of our brains is nothing new. Scrawling something on paper is much the same. However, the ease with which we can store and retrieve these external memory banks is improving at an exponential rate. Today, a lot of the human race’s collective store of knowledge can be searched in fractions of a second with a few keystrokes in a search engine. Maybe tomorrow, our fingers won’t even be an intermediary step; a direct link between our minds and databases need not be science fiction. Google may not just be the future of computers, but the future of the human race.

As we continue to improve our access to information outside of our heads, I think there will be less emphasis on teaching people raw information, and more emphasis on teaching what to do with information. [self plug] Scientific research into topics like human creativity (which computers don’t seem to have mastered yet) and cognitive psychology will become increasingly important [/self plug], as will disciplines like philosophy and math, which deal purely with how to manipulate information into something useful. We should probably also keep Keanu Reeves around to make sure we haven’t slipped into The Matrix without realizing it.

Christof Koch (2009). Exploring Consciousness Through the Study of Bees. Scientific American.

LHC

I love Google’s title image for today:

It’s a nice mix of recognizing an extremely important scientific accomplishment with just a pinch of end-of-the-world paranoia.

The truth is that the world has about the same chance of ending today as it did yesterday. But I think the dimwitted people protesting the Large Hadron Collider aren’t all bad. It’s seriously nice to be reminded that the world could end at any moment. All of human history is just a brief blip in time on a cosmic scale; it could end right now and the universe would barely notice. But the thing is, in a universe with a past almost completely devoid of our existence, and a future that could very easily be the same, all we’ve got are our short little lives here in the present.

The fact that the universe is vast, cold, and uncaring does not make our lives meaningless. It’s the opposite; it shows that we are the exception rather than the rule, so we damn well better take advantage of this fleeting gift and make our lives mean something. It also makes it all the more incredible that we are on our way to understanding this vast, cold, and uncaring universe with technology like the LHC. Even if it did end human existence, at least we went out trying to understand our place in the universe. And with a good excuse to have sex.

(xkcd rules)

You Don’t Write on Your Own Facebook Wall

A peculiar fact about Facebook is that you are not supposed to write on your own wall. Because that really could have gone either way, eh? With blogs, conversations take place on a single blog, often with the blog’s owner commenting on his or her own blog. It has the advantage of the entire conversation being in one place. But a disadvantage is that anyone who comments on a blog will have to go back to that blog to see if anyone responded to it.

What I wonder is who decided that posting on your own blog is OK, but posting on your own Facebook profile is not. Was it one person who persuasively argued for a position? (e.g., I’ve seen it argued that posting on your own wall is like leaving a note on your own fridge and hoping your friends will stop by to read it) Or did it just happen naturally due to subtle properties of Facebook that make having conversations between walls easier than having them on a single wall? Or was it completely arbitrary, with one position that just happened to spread around and eventually became codified as a new taboo?

It makes you wonder if other taboos develop in similar ways. Like, who decided it was wrong to wear a hat at dinner time? I’m sure there was a good reason for it at one point, but now, I see no reason why having a piece of cloth on your head disrupts a meal.

Of course, a good source of LOLs is breaking taboos, so I’m gonna go write inane messages to friends on my own Facebook wall while I eat pizza in a cowboy hat.

Vodka Illusions

Bill Deys recently wrote about a Business Week article stating that, in a blind taste test, all vodkas taste pretty much the same.

It was an informal test with a writer and a few friends. Without statistical analysis, it’s impossible to tell whether the friends were guessing at an above-chance level (there was one correct brand identification during the trial, but who knows if it was based on taste or just luck). Still, the theory behind it makes sense; vodka is basically alcohol and water, without any oak barrels or extra ingredients added, so any differences would have to be subtle. And if people who claim to be able to distinguish one brand from another can’t do so even in an informal test, the differences can’t be as major as we’ve been led to believe.
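Out of curiosity, the chance-level question is easy to work out with a simple binomial calculation: with a handful of tasters each picking one brand out of several, how often would chance alone produce at least the observed number of correct identifications? Here’s a rough sketch in Python; the trial numbers are made up for illustration, since the article doesn’t report exact figures:

```python
from math import comb

def prob_at_least(successes: int, trials: int, p_chance: float) -> float:
    """Probability of getting at least `successes` hits by luck alone,
    when each guess is correct with probability `p_chance`."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical numbers: 5 tasters each try to name 1 of 4 brands,
# and exactly 1 guess is correct. Chance of a lucky hit per guess = 1/4.
p = prob_at_least(1, 5, 0.25)
print(f"P(at least 1 correct by chance) = {p:.2f}")  # ~0.76
```

In other words, a single correct guess out of five is exactly what blind luck would produce most of the time, which is why one hit in an informal test tells us nothing either way.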

The implication here is that all vodkas are the same. Is that really true, though? I don’t think so. I’d argue that the appeal of a drink is about more than just the electrical signals going from our tongues and noses to our brains. It’s also about atmosphere, expectations about taste, preparation rituals, discussion of the drink with other people, etc. These factors are eliminated from a blind taste test, but present in real life. A blind test may reveal that vodkas are the same in the absence of knowledge about what brand is being drunk (drinken? drunken?), which is interesting information, but doesn’t exactly map onto real-life drinking situations.

In real life, the subjective experience of a drink is different depending on the brand. For some people, buying a $100 bottle of vodka, putting it in the freezer, garnishing it and mixing it with just the right amount of ice (or not) is more enjoyable than doing the same with a $20 bottle. Furthermore, it probably actually tastes better to them. It may be an “illusion” in the sense that the difference in taste is not purely based on receptors in the tongue and nose; but does it really matter if good taste signals are originating in the tongue or in the drinker’s own biased brain? No; a better taste is a better taste.


The problem, though, is that if people knew all vodkas were physically identical, they might have a harder time deceiving themselves into believing that “better” brands actually taste better. I guess that’s the difference between actual physical differences in taste and illusory ones; illusions can disappear as soon as one becomes aware of them. It’d be hard to enjoy a $100 bottle of vodka knowing that the stuff inside is the same as the stuff in the $20 bottle.

Luckily I’m not so into vodka after several pukey experiences with it, and I doubt the same lack of brand differences applies to more complex drinks like rum, scotch, wine, and beer. Still, a lot of the differences are probably all in our heads, and there is nothing wrong with that.

Here is a dog made of beer labels:

(from here.)

This is the 2nd post in an unintentional series of posts about the link between alcohol and psychology. See the 1st: Beer and Statistics.

Publish and Perish

Not to brag or nothin’, but you are now a friend/acquaintance/worshiper of a published scientific researcher. My first publication finally popped up on the internet recently (even though it was apparently published in 2007, the journal seems to be running behind or something).

Here is the full reference:
Sorrentino, R. M., Seligman, C., & Battista, M. E. (2007). Optimal distinctiveness, values, and uncertainty orientation: Individual differences on perceptions of self and group identity. Self and Identity, 6, 322-339.

If you’re subscribed through your university (or wherever), you can find the article here, at your local library, or through Google. That’s right, I’m Googlable.

Optimal distinctiveness refers to the fact that people don’t like to feel too different from other people, but also don’t like to feel too similar. However, this is true for some people more than others. We found that people who prefer certainty to uncertainty also tend to try thinking of themselves as similar to other people after being made to feel different. In other words, these certainty oriented people tend to want to assimilate back into a crowd when they feel like they are weirdos who don’t fit in.

We proved this with advanced science. Here is some science from the article:

Those are graphs and formulae. It doesn’t get much more scientific than that.

I do find it strange that this article costs $43.75 to purchase without a subscription. That’s more than most books, just for one article that is, no offense to the authors (none taken), not all that exciting. What’s strangest, though, is that I don’t get a dime of that. Musicians complain that record companies take a large percentage of the profit from record sales. With us, publishers take 100%.

Plus, isn’t science supposed to be free, open, and collaborative?

Oh well. Luckily, with the internet, it’s nearly free to distribute a file containing a research article, and many researchers make their own work available free of charge on their personal web sites. Hey, maybe I should do that. I will soon. You just stay tuned.

Anyway, I’m done bragging / feeling sorry for my broke self.

See also: Optimal Distinctiveness Theory on Wikipedia. Oh look, there’s our article! How did that get there? *WINKY FACE*

Arthur C. Clarke, RIP


Arthur C. Clarke died today (*). The man was a genius. I’ve only recently started reading his books, but his impact has been felt throughout my life. Nearly every piece of science fiction created since the 50s owes something to Clarke. More directly, seeing 2001: A Space Odyssey as a kid, even though I didn’t fully understand it at the time, probably had quite the impact on me. It’s a testament to human curiosity about life’s most perplexing questions, and the fact that there is more to life than this earthly existence, with no need to invoke the supernatural to appreciate it. Perhaps this was part of what sparked my interest in science.

Speaking of which, anybody interested in science should take note of Clarke’s laws of prediction:

  1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
  2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
  3. Any sufficiently advanced technology is indistinguishable from magic.

There’s a lot to take out of those three little statements. But I think the main message is one of hope rather than cynicism. What seems impossible may very well be possible; what we consider magic today may be within our reach tomorrow.

Even though it’s impossible, let’s hope Clarke is now a glowing fetus looking down on us from a bubble floating in space. Float in peace, Arthur C. Clarke.


* Actually, he died tomorrow, since he was in Sri Lanka, where it’s already Wednesday.

Book Review: The Singularity is Near, by Ray Kurzweil

The singularity refers to a time, sometime in the future, when machines become more intelligent than biological humans, and technology begins to improve rapidly as a result. The Singularity is Near is Ray Kurzweil’s attempt to justify his belief that the singularity is coming sooner than most people think, and what consequences it will have.

Oh, what consequences.

Kurzweil envisions a future where almost nothing is impossible. Human-machine hybrids live forever in a world with very few problems, playing and engaging in intellectual pursuits in any virtual reality environment they can imagine. This isn’t your typical flying-car future. What use are flying cars when anybody can instantly obtain any information, or experience any location, just by thinking about it? It sounds like science fiction, but Kurzweil convincingly argues that it is not fiction at all.

The best part is that, if he’s right, almost everyone reading this can experience this future in their lifetime. This book should be prescribed to suicide-prone people. With a Utopian future just a few years off, why end it now?

Some would probably argue that Kurzweil is too hopeful. He does seem a little, uh, off at times. The dude is on a radical diet involving dozens of drugs and food restrictions, just so his aging body can last long enough to see the singularity he so believes in. And how many times do we need to be reminded that in the future, you can become the opposite gender and have sex with whoever, or whatever, you want? That’s cool if you’re into it, but in a world with almost no limits, I think most people will come up with even more interesting stuff to do with their time. And although he argues each point well, if he’s wrong about even one – for example, one fundamental limit on technology is reached, or one catastrophic world-altering event sets us back – all his predictions could fall apart.

Still, even a small chance that he’s right should give us all an enthusiastic hope for the future. Reading this book (and its shorter predecessor, The Age of Spiritual Machines) made me happy to be alive in today’s world; I don’t think I could give a book any higher a recommendation than that.

P.S. I wrote more about this book at this post. Yes, it took me more than 6 months to read it. In fact, it probably took me over a year. It’s damn thick. But although it does have boring bits, it’s worth the time investment.

We Got a New Camera

So this is a photo-blog now.

The camera is a Canon Powershot SD750 and it’s very nice. It can recognize human faces and focus on them, just like The Terminator. It has not, however, killed anybody or time traveled. Yet.

I don’t like people much, though, so I only take pictures of dogs.

THANKS20070002

Cuuuute!

THANKS20070005

Cuuuuuuute!

THANKS20070007_1

SUPER CUTE!

THANKS20070011

AHHH!! WHAT THE FUCK!!!

Magnets: Beyond Holding Things to Fridges


Some random but fascinating tidbits that I’ve learned while writing my comps today:

  • Over fifty distinct sensory systems have been identified in living things. Why, then, is a “sixth sense” seen as a far-out impossibility?
  • The genome of bacteria that can sense magnetic fields is only about 4.3 million base pairs, roughly a megabyte of information. All the information needed to create this organism could easily fit in an email attachment. The human genome works out to about 750 megabytes. Bigger than a bacterium’s, but still smaller than Windows XP.
  • Magnetic structures, similar to those that allow the bacteria above to detect magnetic fields, have been found in a 4-billion-year-old meteorite from Mars. That’s half a billion years older than the earliest known life on Earth. It suggests that the ability to detect magnetic fields may have been one of the first sensory systems to evolve, and that it may have been brought to Earth from Mars.
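The megabyte comparison follows from simple arithmetic: DNA has four possible bases, so each base carries two bits of information. A quick sketch (the genome lengths are approximate ballpark figures, about 4.3 million base pairs for the bacterium and 3 billion for humans, not numbers from the cited paper):

```python
# Each DNA base is one of four letters (A, C, G, T), so it carries
# log2(4) = 2 bits of information -- a quarter of a byte.
BITS_PER_BASE = 2

def genome_size_bytes(base_pairs: int) -> float:
    """Raw information content of a genome, in bytes."""
    return base_pairs * BITS_PER_BASE / 8

bacterium_bytes = genome_size_bytes(4_300_000)     # ~4.3 Mbp bacterium
human_bytes = genome_size_bytes(3_000_000_000)     # ~3 Gbp human genome

print(f"bacterium: ~{bacterium_bytes / 1e6:.1f} MB")  # ~1.1 MB
print(f"human:     ~{human_bytes / 1e6:.0f} MB")      # ~750 MB
```

By this count the bacterium really does fit in an email attachment, with room to spare.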

While I still want to get this part of my comprehensive exams over with, it’s actually turning out to be pretty cool. My paper involves the following kickass things: Ghosts, hallucinations, Jesus, pigeon navigation, The Virgin Mary, ESP, psychokinesis, turtle navigation, mental patients, God, airplane crashes, whale suicide, lobster navigation, and now, Martians.

References:

Kirschvink, J. L., Walker, M. M., & Diebel, C. E. (2001). Magnetite-based magnetoreception. Current Opinion in Neurobiology, 11, 462-467.

Rotten Apple


Is it just me, or are Apple’s new iPods sorta disappointing?

The Shuffle hasn’t changed. The Nano does video and has a new interface, but looks fat and ugly (best comment I’ve seen about it: “does it do the truffle shuffle?”). What’s now called the “Classic” has the new interface and looks pretty good, and got a storage increase, which is nice, but for me, not worth upgrading for.

What will be talked about most is the new iPod Touch. It is, almost literally, an iPhone without the phone. It has all the stuff that people have been asking for in the top-of-the-line iPod for years: A giant touch screen, a snazzy new interface, and most importantly, a wireless internet connection to connect to the iTunes store directly (and, I pray to Jobs, sync with a computer wirelessly). It even has some cool stuff nobody expected, like having a web browser and doing fun stuff while in a Starbucks location.

This is all awesome. If it stopped there, and we used logical assumptions to fill in the blanks, I’d currently be putting my old iPod and a few internal organs up on eBay to pay whatever it could possibly cost to get my hands on one. Unfortunately, one of those logical assumptions turns out to be false. I’m talking about storage capacity. This is the most advanced, most expensive iPod ever. It looks like an iPhone, but its focus is on music and video. So you’d expect it to hold the most songs and videos, at least as much as the Classic if not more. But that assumption is wrong. The biggest Touch is only 16GB.

It looks like they took the “iPhone without the phone” part a bit too literally. Why would I want this new iPod, then, when it does less than the iPhone but doesn’t cost much less? This sentiment is reinforced by the fact that Apple has also announced that the Touch’s WiFi capabilities will be immediately available on the iPhone as well, which has just dropped in price to only $100 more than the Touch.

I’d rather pay $100 more for the same device with a phone. But I’ll do neither, not only because the iPhone still isn’t available in Canada, but because 8GB, or even 16GB, isn’t enough space to hold even a medium-sized music collection. Mine is already bigger than the previous 80GB ceiling, so even if I could afford it, using the Touch or iPhone as my primary iPod just isn’t practical.

Why can’t Apple just make one device that does everything? Fix the stupid problems with the iPhone, add a 160GB drive to it, and nobody could resist it.