Why I’m Cutting Down on Artificial Sweeteners

I may have been wrong about artificial sweeteners. I’ve always been a big fan of them, because I love stuffing sweet things in my mouth, but also try to keep my daily calorie count somewhat reasonable.

Sweeteners also have an ideological draw — I love artificial things. I’m typing this on an artificial iPad, powered by artificial electricity, basking in artificial heat, under an artificial roof. So when I see people react with hostility to anything that isn’t “natural” (whatever that even means), I push back. You think artificial sweeteners are poison because Splenda packets aren’t plucked from the ground like potatoes? Well, then I’m gonna put Splenda on everything! I’ll sprinkle it on beef I don’t even care. Take that, hippy!

And in theory, my pettiness should be supported by science. If controlling weight is the goal, then all that matters is calories in and calories out, right? Sweeteners lead to fewer calories in, so they help control weight in a world where calories are frickin’ everywhere. That’s the theory.

The thing is, the best theory in the world is worthless without data. More and more data is coming out about artificial sweeteners, and the results often differ from what theory would predict.

Most data isn’t conclusive. When you look at the whole population, people who use artificial sweeteners tend to be overweight. That’s just a correlation — maybe bigger people are trying to lose weight with sweeteners. It’s not evidence that sweeteners don’t work, but it’s a lack of evidence that sweeteners do work.

True experiments, in which people changed their intake of artificial sweeteners, would be more definitive if they showed an effect. Here’s a recent meta-analysis reviewing studies on artificial sweeteners, including randomized controlled trials. The researchers concluded:

Evidence from [randomized controlled trials] does not clearly support the intended benefits of nonnutritive sweeteners for weight management.

So again, not evidence that they cause weight increases, or poison you, or have any negative effects. But also not evidence that they do have their intended effect: weight loss.

I’ve seen speculation and emerging research on why the theory doesn’t match up with the data. Some people think it’s a psychological thing — the classic “I had a Diet Coke, so I can order two Baconators instead of one” phenomenon. Some think it’s more biological, with sweeteners mixing up the critters in our guts so they suck at dealing with the calories we do consume.

Whatever the case, there’s simply a lack of evidence that artificial sweeteners help with weight loss, or have any other positive effects. When it comes to translating research into actual behavior, here’s where I’ve come down, personally, for now:

  • Artificial sweeteners won’t kill me, so I won’t avoid them. I’ll use up the packets and syrups that we have around the house.
  • But there’s no evidence that they’ll help me, either. That puts them on the same scientific level as any other bullshit health intervention, like eating organic food, following fad diets, or getting acupuncture. I wouldn’t do those things, so why continue slurping down Splenda?
  • Therefore, I’ll reduce my intake of artificial sweeteners. I’ll use sugar when I need it, or better yet, just have fewer sweet things overall. If I have the willpower for that, it’ll almost certainly lead to fewer calories in, with no mysterious counteracting force.

That’s where I currently stand, but I’m a scientist, so I’ll keep updating my opinions and behaviour as new evidence comes out.

For now, I’m cutting down on sweet things …

… right after the holidays.

On the Political Correctness of “g”

I’m a scientist now. Specifically, a scientist at a neuroscience company. But not a neuroscientist. I know, confusing, but the point is that I’ll probably be writing mostly about neuroscience on this blog now.

One project I’ve been working on involves intelligence. For decades, there has been a war between the idea that intelligence is one thing—i.e., there is a “g” factor that powers all intellectual feats—and the idea that intelligence is many things—i.e., there are several independent factors that power different intellectual feats.

I have no idea which is true. The data behind the debate is complicated, and I have a feeling it won’t be unambiguously interpreted until we have a better understanding of the physical workings of the brain. But one thing that I find fascinating is that the “g” idea is seen as the politically incorrect position.

Why? I suppose it’s because it simplifies people’s intellectual ability to a single number, which makes people uncomfortable. If your g factor is low, everything you can possibly do with your mind is held back. Which, scientifically speaking, sucks.

What I don’t understand is why adding more factors is more politically correct. Let’s say there are two independent factors underlying intelligence: g1 and g2. If you’re below average on g1, well, there’s still a 50% chance that you’re below average on g2 as well, in which case your mind is still behind on every possible intellectual feat. But hey, any given person still has a 50% chance of not being bad at everything. Is that the difference between acceptable and offensive? A coin flip’s worth of hope?
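
To make the coin-flip arithmetic concrete, here’s a minimal Python sketch of a toy two-factor model (the normal distributions and the independence assumption are purely illustrative, not claims about real intelligence data):

```python
import random

# Toy model: two independent "intelligence factors", g1 and g2,
# each drawn from a standard normal distribution.
N = 1_000_000
people = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

below_g1 = [p for p in people if p[0] < 0]       # below average on g1
below_both = [p for p in below_g1 if p[1] < 0]   # ...and also below average on g2

# Among those below average on g1, about half are below average on g2 too,
# so roughly a quarter of everyone is behind on every factor in this toy model.
print(f"P(below g2 | below g1) ~ {len(below_both) / len(below_g1):.2f}")  # ~0.50
print(f"P(below both overall)  ~ {len(below_both) / N:.2f}")              # ~0.25
```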

It’s like the old argument for common ground (kinda) between atheists and religious folks: “I contend we are both atheists, I just believe in one fewer god than you do.” Similarly, little-g enthusiasts just believe in one (or more) fewer intelligence factors than their politically correct colleagues. They have even more common ground, because they agree that there is at least one measurable variation in intelligence. Is there really such a big difference between the positions? Can’t we all just get along?

I’m content to reserve judgement and follow the data in any direction, regardless of which direction is popular and deemed inoffensive for arbitrary reasons.

Could What You Do Without Thinking Be the Key to Artificial Intelligence?

My Master’s thesis explored the links between intuition and intelligence. I found that measures of intuition were closely related to intelligence: people who tended to rely on quick, unconscious decision making also tended to be more intelligent. When poking at the implications, I wrote:

The intuition-intelligence link also holds promise in the advancement of artificial intelligence (AI). Herbert Simon (see Frantz, 2003) has used AI as a framework for understanding intuition. However, this also works in the other direction. A greater understanding of intuition’s role in human intelligence can be translated to improved artificial intelligence. For example, Deep Blue, a chess-playing AI, was said to be incapable of intuition, while the computer’s opponent in a famous 1997 chess match, Garry Kasparov, was known for his intuitive style of play (IBM, 1997).  While it is difficult to describe a computer’s decision-making process as conscious or unconscious, the AI’s method does resemble analytic thought rather than intuitive thought as defined here. Deep Blue searched through all possible chess moves, step-by-step, in order to determine the best one. Kasparov, in contrast, had already intuitively narrowed down the choices to a small number of moves before consciously determining the most intelligent one. Considering that, according to IBM, Deep Blue could consider 200 000 000 chess positions per second, and Kasparov could only consider 3 positions per second, Kasparov’s unconscious intuitive processing must have been quite extensive in order to even compete with Deep Blue. Deep Blue’s lack of intuition did not seem to be an obstacle in that match (the AI won), but perhaps an approximation of human intuition would lead to even greater, more efficient intelligence in machines.

That was back in 2007, just a decade after Deep Blue beat Kasparov at chess. Here we are another decade later, and Google’s AlphaGo has beaten a champion at the more complex game of Go.

I’m no expert on machine learning, but my understanding is that AlphaGo does not play in the same way as Deep Blue, which brute-forces the calculation of 200 000 000 positions per second. That’s the equivalent of conscious deliberation: considering every possibility, then choosing the best one. Intuition, however, relies on non-conscious calculations. Most possibilities have already been ruled out by the time an intuitive decision enters consciousness, which is why intuition can seem like magic to the conscious minds experiencing it.

Intuition seems closer to how AlphaGo works. By studying millions of human Go moves, then playing against itself to become better than human (creepy), it learns patterns. When playing a game, instead of flipping through every possible move, it has already narrowed down the possibilities based on its vast, learning-fueled “unconscious” mind. AI has been improved by making it more human.
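
As a very loose sketch of the contrast (this is not AlphaGo’s or Deep Blue’s actual code; every function name here is hypothetical):

```python
# Hypothetical sketch of the two decision styles. The helpers passed in
# (legal_moves, evaluate, policy) are assumed for illustration only.

def brute_force_move(position, legal_moves, evaluate):
    """Deep Blue-style deliberation: score every legal move, pick the best."""
    return max(legal_moves(position), key=lambda move: evaluate(position, move))

def intuitive_move(position, legal_moves, policy, evaluate, top_k=3):
    """AlphaGo-style play: a learned policy (the 'unconscious') first narrows
    the options to a few promising candidates; only those get deliberate
    evaluation, the way Kasparov only weighed a handful of moves."""
    priors = policy(position)  # assumed: dict mapping each legal move to a learned score
    candidates = sorted(legal_moves(position),
                        key=lambda move: priors.get(move, 0.0),
                        reverse=True)[:top_k]
    return max(candidates, key=lambda move: evaluate(position, move))
```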

Which is to say: hah! I was right! I called this ten years ago! Pay me a million dollars, Google.


P.S. This Wired article also rather beautifully expresses the match in terms of human/machine symmetry.

The Age of the Companion is Here

[Image: Doctor Who companions fan art by strawberrygina]

[I don’t watch Dr. Who and have no idea if this makes sense] [Source]

For the last seven years or so, our technology lives have revolved around apps. The majority of what we do with our devices is a result of choosing an app, then opening it up to get something done*. Call it the Age of the App.

I believe that in 2016 we are moving into a different age: The Age of the Companion.

A companion isn’t a piece of software that you open to get something done. Rather, it proactively works with you to get things done, works autonomously when you’re not around, and may integrate multiple apps and hardware to work.

That’s still a vague definition, but it ties together a trend I’m seeing that hasn’t yet had a good name put to it.
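
To make it a little less vague, here’s a toy sketch of what a companion’s core loop might look like (every name in it, from the sensor readers to the notify call, is something I made up for illustration):

```python
import time

def companion_loop(sensors, services, notify, interval_s=3600):
    """Toy companion: it runs on its own schedule, pulls context from several
    sources (apps, hardware), and proactively speaks up only when one of its
    services has something useful to offer. All callables are hypothetical."""
    while True:
        # Gather context from whatever hardware and apps are available,
        # e.g. {"steps": ..., "sleep": ..., "calendar": ...}.
        context = {name: read() for name, read in sensors.items()}

        # Let each service decide whether it has a helpful suggestion.
        for service in services:
            suggestion = service(context)
            if suggestion:
                notify(suggestion)   # proactive: you never opened anything

        time.sleep(interval_s)       # keeps working when you're not around
```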

The simplest examples are companions that work directly alongside or inside good old apps (remember apps? From the previous age? Those were the days). Facebook’s M is a companion that lives inside of Facebook Messenger, which is itself a sort of companion to Facebook. M’s artificial intelligence is able to chat with a person, offering assistance with almost anything; a good example is getting it to cancel your cable subscription for you. A companion that helps you avoid the pure evil of Rogers or Comcast is sure to become your best friend pretty quick. Right now, a human on the other end takes over when the AI can’t, but that will become less common as AI improves.

Similarly, I’ve been using an app called Lark that could be considered a companion. It gathers information from HealthKit (e.g. steps taken, weight, sleep) and sends proactive notifications to motivate you to be healthier. When you open the app, it chats with you, Messenger-style, to gather more information and offer advice.

These examples are software, but companions can be hardware too. The Apple Watch is a bit muddled in its purpose, but I think it works best as a companion to both you and your iPhone. It sends notifications, sure, but even when you’re not paying attention to it, it’s gathering data about you, occasionally offering up advice based on that data (“you’ve almost reached your goal” is a good example; “stand up” every hour is … less good). It can pass information on to other companions (like Lark), essentially forming an AI committee that collaborates to better your life.

Amazon is a surprising early leader in the Age of the Companion. The Echo is a semi-creepy always-listening rod that sits in your house and collaborates with various apps to help you, interacting using mics and speakers alone, as if it thinks it’s people. Their Dash technology detects when physical goods (detergent, printer ink, medical supplies) are running low and automatically orders more. Soon, Prime Air delivery drones will take over from there, flying packages to your home in 30 minutes. Right now I wouldn’t consider those single-purpose drones companions, but what if they ask “can I grab anything else on the way?” before coming? What if they ask “can I help mow your lawn while I’m here?” when they arrive? Maybe putting blades on our AI isn’t advisable (especially after they talk to evil cable companies; it might give them ideas), but, you get the gist.

See the pattern? This isn’t just software. It’s not just the Internet of Things. It’s not just artificial intelligence. It’s all these advancements working together to automatically, proactively make a specific person’s life better.

In the Age of Companions, “there’s an app for that” is replaced by “can we help with that?”

It’ll get disturbing before it gets mundane. “Hey uhhh, your watch detected a drop in serotonin, and your calendar said you’re free for a few hours, so I invited all your nearby friends over to cheer you up. Also, watch out, your new puppy is about to air-drop.” But when companions mature, the world will be much different, and hopefully better. The nice thing is that the Age of Companions is already underway, so even if Lark doesn’t prolong our lives indefinitely, we’ll get to experience a different world if we’re around for a few more years.

The future is becoming a complicated place to live in, but at least we won’t have to do it alone.


 

* It’s easy to forget that it wasn’t always this way. Most machines served a single function; a phone was a phone (like, a thing you talked into), a screen was a screen, a camera was a camera, etc. Computers have always had applications, of course, but they were expensive toolsets, different than what we generally consider “apps” today.


 

P.S. Andy Berdan pointed me to another perfect example called x.ai. It automatically works to schedule meetings with a group of people, just by including its address in a regular email. It highlights that, like the other examples above, these companions aren’t apps, and they can run on many platforms.

Our established technology—hardware, sensors, apps, messaging, even email—is becoming a platform for companions. Just like clocks are now only tiny pieces of smartphones, all our whiz-bang gadgets and applications are becoming nothing more than infrastructure upon which companions are built.

Laziness Drives Progress

Image via Rinspeed

I think about autonomous cars a lot.

That’s partly because I don’t enjoy driving. However, a lot of people do. Many of those people promise that they will never buy a self-driving vehicle. I propose that laziness will drive that promise right out of them.

Today, even people who own cars will occasionally take a taxi. To the airport, or out drinking, or when traveling. As taxis become autonomous, they will be even more convenient. Imagine tapping your smartphone, then 30 seconds later a car arrives for you, and you can step inside and keep dicking around on your phone, or have a meal, or get work done, until it drops you off right at your destination. And it only costs a few dollars.

Even people who love driving will take advantage of that once in a while. At first maybe it’ll only be to get to the airport. But then it’ll be when they have a deadline coming up, or are really hung over, or are just feeling lazy.

As those situations become more common, and driving your own car becomes less common, the per-trip cost of owning a car becomes prohibitive. Is it worth tens of thousands of dollars in purchase price, fuel, maintenance, and insurance just to drive a car once a day? Once a week? What about once a month?
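
A rough back-of-the-envelope sketch of that per-trip math (every dollar figure below is an assumption I invented, not real pricing data):

```python
# Back-of-the-envelope: per-trip cost of car ownership vs. hailing an
# autonomous taxi. All numbers are illustrative assumptions.
annual_ownership_cost = 3000 + 1500 + 1000 + 1500  # depreciation, fuel, maintenance, insurance
robotaxi_fare = 5.00                                # assumed fare for a short autonomous trip

for trips_per_year in (365, 52, 12):                # daily, weekly, monthly driving
    per_trip = annual_ownership_cost / trips_per_year
    print(f"{trips_per_year:>4} trips/year: owning ~ ${per_trip:,.2f} per trip "
          f"vs. ~ ${robotaxi_fare:.2f} for a robotaxi")
```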

“I’m too lazy to drive, just this once” can quickly become “I haven’t driven in a month and I might as well sell my car.” As more and more people succumb to laziness and rely on a cloud of autonomous vehicles, houses will gradually lose their driveways and garages, and the thrill of driving will be confined to go-kart tracks.

In short, human laziness will lead to a more efficient, car-ownership-free world.

I think it’ll be a good change. The people who disagree will be too lazy to resist it.

How Artificial Intelligence Will Kill Science With Thought Experiments

Think about this:

Science—empirical study of the world—only exists because thought experiments aren’t good enough. Yet.

Philosophers used to figure out how stuff worked just by thinking about it. They would take stuff they knew about how the world worked, and purely by applying intuition, logic and math to it, figure out new stuff. No new observations were needed; with thought alone, new discoveries could be created out of the raw material of old discoveries. Einstein developed a lot of his theories using thought experiments. He imagined gliding clocks to derive special relativity and accelerating elevators to derive general relativity. With thought alone, he figured out many of the fundamental rules of the universe, which were only later verified with observation.

That last step is always needed, because even the greatest human intelligence can’t account for all variables. Einstein’s intuition could not extend to tiny things, so his thought experiments alone could not predict the quantum weirdness that arose from careful observation of the small. Furthermore, human mental capacity is limited. Short-term memory can’t combine all relevant information at once, and even with Google, no human is capable of accessing all relevant pieces of information in long-term memory at the right times.

But what happens when we go beyond human intelligence?


New York as painted by an artificial intelligence

If we can figure out true artificial intelligence, the limitations above could disappear. There is no reason that we can’t give rise to machines with greater-than-human memory and processing power, and we already have the Internet as a repository of most current knowledge. Like the old philosophers on NZT, AI could take the raw material of stuff we currently know and turn it into new discoveries without any empirical observation.

Taken to a distant but plausible extreme, an advanced AI could perfectly simulate a portion of the world and perform a million thought experiments within it, without ever touching or observing the physical world.

We would never need science as we know it again if there were perfect thought experiments. We wouldn’t need to take the time and money required to mess with reality if new discoveries about reality could be derived just by asking Siri.

It solves ethical issues. There are a lot of potentially world-saving scientific discoveries held back by the fact that science requires messing with people’s real lives. AI could just whip up a virtual life to thought-experiment on. Problem solved.

Of course, AI brings up new ethical problems. Is a fully functioning simulated life any less real than a physical one? Should such a simulation be as fleeting as a thought?

As technology advances, there will be a lot to think about.

Light and Dark in Daily Deals

Dealfind.com, one of those daily deal Groupon clones that everyone got sick of, often posts questionable deals. Some are merely useless or frivolous (oh hi, Justin Bieber toothbrush), but others are actively deceptive.

One such deal was for a “Crystal Bala Bracelet With Magnetic Hematite Beads.” While careful to avoid specific health claims, they do claim that “in Buddhism, the pañca bala, or Five Strengths are critical to the achievement of enlightenment. Now you can keep them close to you every day with the Bala Bracelet.”

How does a mere bracelet help you achieve enlightenment? Well:

“Crystals catch and refract the light every time you move [and] six beads of magnetic hematite polarize the effect of light and dark”

Sciencey yet spiritual! It must work. It’s not quite the magnetic bracelets you see at summer festivals that claim to cure cancer, but it’s still manipulative and deceptive.

Luckily, Dealfind has a forum to clear up any misconceptions about the products, so I dug a little deeper. Here’s my conversation:

Mike (me)

Can you provide a link to the peer reviewed scientific articles supporting the claim “six beads of magnetic hematite polarize the effect of light and dark”? I’m sure they just got left off by accident. Thanks!

Amy (Dealfind Admin)

Hi Mike,

Thanks for your inquiry.
Our deal page states:

“In Buddhism, the pañca bala, or Five Strengths are critical to the achievement of enlightenment. Now you can keep them close to you every day with the Bala Bracelet. Each of the crystal-encrusted balls represents one of the bala: Faith, Energy, Mindfulness, Concentration and Wisdom. Six beads of smooth magnetic hematite provide the perfectly polarized color choice to offset the crystals.”

For more of a scientific background, please contact Widget Love at 1.800.990.6771.

Thank you!

Mike

I have to call them just to have any idea about whether or not the bracelet does what it says it does? 😦

Can you at least explain what “polarize the effect of light and dark” and “polarized color” even mean?

I want to know more about what I’m getting into before buying into this sca–…er…product. I’m afraid polarizing my dark could have serious medical effects.

Thx!

The above post was deleted shortly after I posted it. Later:

Mike

Oh fiddlesticks, I think my follow-up post failed to go through so I’ll post my question again:

Can you at least explain what “polarize the effect of light and dark” and “polarized color” mean?

Thanks!

Mesha (Dealfind Admin)

Hi Mike,

Thank you for your post.

In this sense polarized means that although the colours range from one extreme to another (both dark and light) they compliment each other and the crystals.

For more of a scientific background, please contact Widget Love at 1.800.990.6771.

I hope this helps! 🙂

Mike

Ah, so it’s saying “there are black rocks and white rocks but they are both rocks.”

Thanks! That clears up everything! I’ll take 50!

That post was deleted too.

Yeah, I’m kind of just being a dick. But trying to sell people bullshit (bullshit capitalizing on the perfectly respectable religion of Buddhism) is also pretty dickish. So screw Dealfind and the dickshit company they promote. It’s just a cheap bracelet, but every penny milked from gullible people through lies is a penny too much.

Why Horror Movies Are Scary, and Why People Like Them Anyway

A while ago, I was contacted by a PR agency who had seen one of my talks about the psychology of horror. A British media company was putting together a Halloween marketing campaign, and wanted some advice on how to use some scariness to make it more effective. I wrote them the below summary of why people regularly expose themselves to horror. I have no idea if the campaign ever went anywhere, but I figure it makes for an interesting read, so here it is.

Why are horror movies scary?

The answer to this is less obvious than it first appears. It might seem self-evident that scary movies are scary because they have scary things in them. But that just shifts the question to “what makes things scary?” Plus, fear is, by definition, an emotional response to danger. People sitting in a comfortable chair with their friends, munching on popcorn, are in no danger. They know they are in no danger.

So why are they scared anyway?

1) Because horror movies show us things that we were born scared of. Millions of years of evolution have programmed us to be frightened by things like spiders, growling monsters, and darkness. Early people who weren’t scared of these things tended to die, so they never got a chance to be our ancestors. With the survivors’ genes in us, we can’t help but feel the fear that kept them alive.

2) Because horror movies show us things that we’ve learned to be scared of. We may not be born scared of knives, needles, or clowns, but a few bad real-life encounters with them and we learn to fear them pretty quick. Movies can take advantage of the lessons we’ve learned from being scared for real.

3) Because we get scared when people we like are scared. Horror movies show us shots of people being scared just as much as they show us what is scaring them. When we’ve grown to like a character, we can’t help but feel some empathy for them when they appear to be frightened.

4) Because filmmakers exaggerate. No matter how realistic, a scary image on a screen pales in comparison to the real thing. That is why filmmakers need to exaggerate to make up for our safety from real danger. Extra dark settings, disorienting camera angles, anticipatory music, and discordant sounds (think the violins in Psycho) all make a scary image even scarier.

5) Because our bodies tell us we’re scared. For all the reasons above, our brains and our bodies are tricked into thinking we’re really scared. Our heart rates go up, we sweat more, and we breathe faster. These bodily reactions feed back into our conscious experience of fear. Furthermore, horror movies are one of the most visceral types of film. In one study, horror was one of only two genres that had a significant and identifiable physiological response. (The other was comedy).

So why would people watch something that scares them?

Again, fear is an emotional response to danger. Usually one that makes us want to run away, or at least turn off the TV. Why would we not only keep watching a scary movie, but pay money to do it?

6) Because some people like the rush of being scared for its own sake. Studies have found that the more scared people report being during a movie, the more they enjoy it. For some fans of horror movies (but not everyone), excitement is fun, whether it’s from joy or fear. My research showed that people high in sensation seeking—those who say they frequently seek out intense thrills—reported liking the horror genre more than people low in sensation seeking.

7) Because some people like the relief when it’s all over. The happy moments of a horror movie can be just as important as the horrifying parts. A moment of relief after escaping the bad guy can seem even more positive than it would normally, because our hearts are still beating with excitement. The leftover emotion from being scared can translate into happiness when the source of fear is gone.

8) Because you can control your image by controlling your reactions to a horror film. In my study, even though everyone had about the same “gut reaction” to horror imagery (a negative one), what they said they liked varied a lot. People with rebellious sorts of personalities were proud to say they liked horror movies.

9) Because it helps us hook up. Although they have the same negative “gut reaction” to horror, men say they like the genre more than women do. Research supports the idea that men and women who react “appropriately” to frightening films—men being fearless and women being fearful—tend to be liked more by the opposite sex. Horror films are perfect for dates.

There you go. Just a few of the many reasons that we’re happy to be horrified.

On Lying

I recently finished reading Sam Harris’s short essay on the topic of lying, which is called, no lie, Lying. In it, he explores the rationality of communicating things that are not true, and comes to the conclusion that it is wrong to lie.

Yeah. Obviously. But Harris goes further than what many people mean when they say “it’s wrong to lie,” arguing that even seemingly justified forms of lying, like little white lies, lying to protect someone, and false encouragement, are all wrong in their own way.

He’s convincing, for the most part. Take false encouragement: the lies we tell without a second thought, like “yeah, I love your blog, you are such a good writer.” It seems harmless, and it would be awkward to say otherwise to someone, but Harris makes a good point: “False encouragement is a kind of theft: it steals time, energy, and motivation a person could put toward some other purpose.”

I’ve always been a big believer that the truth is the fastest route to success, both on a societal level (hence my interest in science) and on a personal level. It would be easy to get carried away with this, becoming one of those people who spouts his opinion whether asked for it or not, and is rarely invited to the next party. However, I think it is possible to tactfully express the truth whenever asked to.

I appreciate blunt people. Others may not, but even they can be served well by the right kind of bluntness. If I tell you that yes, you actually do look like a giant turd in that brown dress (like really, brown dress? What were you thinking?), it might hurt at first, but when you show up to the party in a different dress and get genuine compliments rather than awkward false encouragement, you’re better off in the long run.

Harris also makes the point that lying is not only harmful to the people being lied to, but taxing for the liar. Keeping up a lie takes a lot of mental effort, since the lie was fabricated in the liar’s mind. Every time the lie comes up, the liar has to check against his memory of previous lies, who knows what, how the lie affects everything else; he essentially has to store a new version of reality entirely in his head, often fabricated in real-time. When the truth comes up, though, it’s easy to keep track of; the truth-teller only has to keep track of one version of reality. The real one.

Many of these examples assume the people involved are regular, sane people, who ultimately just want to get along. Where Harris starts to lose me is when discussing situations where this arrangement breaks down. He discusses a hypothetical situation of a murderer showing up at your door looking for a little boy who you are sheltering. Should you tell the murderer the truth? Harris argues that lying could have unintended harmful consequences; the murderer might go to the next house and murder someone else, or at best, it just shifts the burden of dealing with the murderer to someone else. Instead, a truth like “I wouldn’t tell you even if I knew,” coupled with a threat, could mollify the situation without a lie.

I’d argue that, when facing someone for whom cooperation and rationality have obviously broken down (e.g., a kid murderer), sometimes there are known consequences of lying (e.g., saving a kid’s life) that are almost certainly less harmful than far-fetched unknown consequences. Harris later makes this same point on a larger scale, when justifying lying in the context of war and espionage, saying the usual rules of cooperation no longer apply. I think blowing up a city with a bomb and stabbing a kid with a knife are both situations where cooperation has broken down, and both situations where lying can be a tool used in good conscience.

There are no absolute moral principles that work in all situations. Life is too complicated for that. Trying to summarize it in simple prescriptive rules (as many religions have) doesn’t work. So, the rule “lying is always wrong” can’t work. There are extreme situations where the rule breaks down.

Luckily, most people will never encounter such an extreme situation in their daily lives. This is where Harris’s main point is spot on: we should lie a lot less than we do. If everyone told the truth in every normal situation, relationships would be stronger, and people would be happier and more productive. I’ve certainly been more aware of my honesty since reading the book, so it’s fair to say it literally changed my life. That’s certainly worth the $2.00 it costs (buy it here). No word of a lie.

Book Review: Moonwalking With Einstein, by Joshua Foer

Memory is often taken for granted in a world where paper and transistors store information better than neurons ever could. Moonwalking With Einstein shines a much-needed light on the art of memorization. It could have been a dry collection of basic science and light philosophy on the subject, but Foer makes it riveting by telling the story of his own head-first dive into the world of memory as sport.

I had no idea this went on, but every year, there are regional and worldwide memory championships in which people compete to perform seemingly superhuman feats of memory, such as memorizing decks of cards as fast as possible, or recalling hundreds of random numbers. After covering one of these events, Foer became so curious that he began training to participate himself.

What he discovered is that these impressive acts of memorization actually boil down to a few simple tricks that anyone can learn. While not a how-to manual, the tricks are simple enough that anyone can pick them up just by reading about how Foer learned them. I can still recall a list of 15 unusual items (in order) that Foer’s mentor, Ed Cooke, used to first teach the memory palace technique. It’s only a matter of practice and refinement for anyone, no matter how forgetful, to memorize several decks of cards.

This humanization of the extraordinary carries throughout the book. Foer himself keeps a modest tone about his damn impressive accomplishments, emphasizing that he’s just a regular forgetful dude who lives in his parents’ basement. The other memory championship contestants, too, can do amazing things during the contest, but it’s clear that the ability to memorize a poem doesn’t translate to a successful personal life.

In fact, Foer is critical of those who do profit from using memory tricks. His contempt for Tony Buzan, the entrepreneur who makes millions on books and sessions related to memory, comes through every time Buzan’s name comes up. He might as well add “coughBULLSHITcough” after every claim of Buzan’s. More substantially, a tangent on savantism takes a strange turn when Foer begins to suspect that one self-proclaimed[1] memory savant, Daniel Tammet, may have more in common with the memory championship contestants than with Rain Man[2]. When Foer confronts him about it directly, things get a bit uncomfortable.

By wrapping fascinating facts and anecdotes about memory up with his own story, Foer keeps it riveting throughout. This is one of those books that I literally had trouble putting down. Anyone with even a passing interest in the human mind should remember to stick Moonwalking With Einstein in their brain hole.


[1] And expert-proclaimed; psychologist Simon Baron-Cohen (yes, a relation) studied Tammet and was more convinced of his traditional savantism.

[2] The inspiration for Rain Man, Kim Peek, also makes an appearance, and he is a more convincing case of naturally freakish memory.