On the Political Correctness of “g”

I’m a scientist now. Specifically, a scientist at a neuroscience company, but not a neuroscientist. I know, confusing, but the point is that I’ll probably be writing mostly about neuroscience on this blog now.

One project I’ve been working on involves intelligence. For decades, there has been a war between the idea that intelligence is one thing—i.e., there is a “g” factor that powers all intellectual feats—and the idea that intelligence is many things—i.e., there are several independent factors that power different intellectual feats.

I have no idea which is true. The data behind the debate is complicated, and I have a feeling it won’t be unambiguously interpreted until we have a better understanding of the physical workings of the brain. But one thing that I find fascinating is that the “g” idea is seen as the politically incorrect position.

Why? I suppose it’s because it simplifies people’s intellectual ability to a single number, which makes people uncomfortable. If your g factor is low, everything you can possibly do with your mind is held back. Which, scientifically speaking, sucks.

What I don’t understand is why adding more factors is more politically correct. Let’s say there are two independent factors underlying intelligence: g1 and g2. If you’re below average on g1, well, there’s still a 50% chance that you’re below average on g2 as well, which means your mind is still behind on every possible intellectual feat. But hey, any given person still has that 50% chance of not being bad at everything. Is that the difference between acceptable and offensive? A coin flip’s worth of hope?
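
To make that coin-flip arithmetic concrete, here’s a minimal sketch in Python (the normal distributions and the sample size are my own illustrative assumptions, not anything from the intelligence literature): sample two independent factors and check how often someone below average on the first is also below average on the second.

```python
import random

# Toy model: two independent "intelligence factors", both standard normal.
# The distributions and sample size are illustrative assumptions only.
N = 100_000
below_g1 = 0
below_both = 0

for _ in range(N):
    g1 = random.gauss(0, 1)  # factor 1; the population average is 0
    g2 = random.gauss(0, 1)  # factor 2, independent of factor 1
    if g1 < 0:
        below_g1 += 1
        if g2 < 0:
            below_both += 1

# Among people below average on g1, about half are below average on g2 too.
print(f"P(below average on g2 | below average on g1) ~= {below_both / below_g1:.2f}")
```

Run it and the conditional probability hovers around 0.50: exactly the coin flip above.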

It’s like the old argument for common ground (kinda) between atheists and religious folks: “I contend we are both atheists, I just believe in one fewer god than you do.” Similarly, little-g enthusiasts just believe in one (or more) fewer intelligence factors than their politically correct colleagues. They have even more common ground, because they agree that there is at least one measurable variation in intelligence. Is there really such a big difference between the positions? Can’t we all just get along?

I’m content to reserve judgement and follow the data in any direction, regardless of which direction is popular and deemed inoffensive for arbitrary reasons.

Could What You Do Without Thinking Be the Key to Artificial Intelligence?

My Master’s thesis explored the links between intuition and intelligence. I found that measures of intuition were closely related to intelligence: people who tended to rely on quick, unconscious decision making also tended to be more intelligent. When poking at the implications, I wrote:

The intuition-intelligence link also holds promise in the advancement of artificial intelligence (AI). Herbert Simon (see Frantz, 2003) has used AI as a framework for understanding intuition. However, this also works in the other direction. A greater understanding of intuition’s role in human intelligence can be translated to improved artificial intelligence. For example, Deep Blue, a chess-playing AI, was said to be incapable of intuition, while the computer’s opponent in a famous 1997 chess match, Garry Kasparov, was known for his intuitive style of play (IBM, 1997).  While it is difficult to describe a computer’s decision-making process as conscious or unconscious, the AI’s method does resemble analytic thought rather than intuitive thought as defined here. Deep Blue searched through all possible chess moves, step-by-step, in order to determine the best one. Kasparov, in contrast, had already intuitively narrowed down the choices to a small number of moves before consciously determining the most intelligent one. Considering that, according to IBM, Deep Blue could consider 200 000 000 chess positions per second, and Kasparov could only consider 3 positions per second, Kasparov’s unconscious intuitive processing must have been quite extensive in order to even compete with Deep Blue. Deep Blue’s lack of intuition did not seem to be an obstacle in that match (the AI won), but perhaps an approximation of human intuition would lead to even greater, more efficient intelligence in machines.

That was back in 2007, just a decade after Deep Blue beat Kasparov at chess. Here we are another decade later, and Google’s AlphaGo has beaten a champion at the more complex game of Go.

I’m no expert on machine learning, but my understanding is that AlphaGo does not play in the same way as Deep Blue, which brute-forces the calculation of 200 000 000 positions per second. That’s the equivalent of conscious deliberation: considering every possibility, then choosing the best one. Intuition, however, relies on non-conscious calculations. Most possibilities have already been ruled out by the time an intuitive decision enters consciousness, which is why intuition can seem like magic to the conscious minds experiencing it.

Intuition seems closer to how AlphaGo works. By studying millions of human Go moves, then playing against itself to become better than human (creepy), it learns patterns. When playing a game, instead of flipping through every possible move, it has already narrowed down the possibilities based on its vast, learning-fueled “unconscious” mind. AI has been improved by making it more human.
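
I’m not reproducing either engine’s actual internals here, but the contrast can be sketched in a few lines of Python. The legal_moves(), evaluate(), and policy_scores() functions below are hypothetical stand-ins: the brute-force player deliberates over every legal move, while the “intuitive” player first lets a learned policy throw most moves away and only deliberates over the survivors.

```python
# Toy contrast between "deliberate over everything" and "intuitively prune first".
# legal_moves(), evaluate(), and policy_scores() are hypothetical stand-ins,
# not Deep Blue's or AlphaGo's real internals.

def brute_force_move(position, legal_moves, evaluate):
    # Deep Blue-style: score every single legal move and pick the best one.
    return max(legal_moves(position), key=lambda m: evaluate(position, m))

def intuitive_move(position, legal_moves, evaluate, policy_scores, top_k=3):
    # AlphaGo-style (very loosely): a learned policy first narrows the field
    # to a handful of promising candidates ("intuition"), and only those
    # survivors get the expensive, deliberate evaluation.
    ranked = sorted(legal_moves(position),
                    key=lambda m: policy_scores(position, m),
                    reverse=True)
    return max(ranked[:top_k], key=lambda m: evaluate(position, m))
```

Same goal, wildly different amounts of deliberation: one pays the evaluation cost for every move, the other only for the few that pattern recognition lets through.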

Which is to say: hah! I was right! I called this ten years ago! Pay me a million dollars, Google.


P.S. This Wired article also rather beautifully expresses the match in terms of human/machine symmetry.

How Artificial Intelligence Will Kill Science With Thought Experiments

Think about this:

Science—empirical study of the world—only exists because thought experiments aren’t good enough. Yet.

Philosophers used to figure out how stuff worked just by thinking about it. They would take stuff they knew about how the world worked, and purely by applying intuition, logic and math to it, figure out new stuff. No new observations were needed; with thought alone, new discoveries could be created out of the raw material of old discoveries. Einstein developed a lot of his theories using thought experiments. He imagined gliding clocks to derive special relativity and accelerating elevators to derive general relativity. With thought alone, he figured out many of the fundamental rules of the universe, which were only later verified with observation.

That last step is always needed, because even the greatest human intelligence can’t account for all variables. Einstein’s intuition could not extend to tiny things, so his thought experiments alone could not predict the quantum weirdness that arose from careful observation of the small. Furthermore, human mental capacity is limited. Short-term memory can’t combine all relevant information at once, and even with Google, no human is capable of accessing all relevant pieces of information in long-term memory at the right times.

But what happens when we go beyond human intelligence?


New York as painted by an artificial intelligence

If we can figure out true artificial intelligence, the limitations above could disappear. There is no reason that we can’t give rise to machines with greater-than-human memory and processing power, and we already have the Internet as a repository of most current knowledge. Like the old philosophers on NZT, AI could take the raw material of stuff we currently know and turn it into new discoveries without any empirical observation.

Taken to a distant but plausible extreme, an advanced AI could perfectly simulate a portion of the world and perform a million thought experiments within it, without ever touching or observing the physical world.

We would never need science as we know it again if there were perfect thought experiments. We wouldn’t need to take the time and money required to mess with reality if new discoveries about reality could be derived just by asking Siri.

It could also solve ethical issues. A lot of potentially world-saving scientific discoveries are held back by the fact that science requires messing with people’s real lives. AI could just whip up a virtual life to thought-experiment on. Problem solved.

Of course, AI brings up new ethical problems. Is a fully functioning simulated life any less real than a physical one? Should such a simulation be as fleeting as a thought?

As technology advances, there will be a lot to think about.

Tolerance, Conflict, and Nonflict

A lot of conflict can be explained in terms of differing tolerance levels. A disagreement may simply be a matter of one person hitting their limit before another.

An example will help: let’s say a couple is fighting because he feels like he always has to clean up her mess around the house. It would be easy to label her as a slob and/or him as a clean freak, but maybe they just have different levels of tolerance for messes.

Let’s say he can tolerate four dirty dishes before cleaning up, while she can tolerate five. They agree on most things: too many dirty dishes are bad, cleaning up after one dish is a waste of time, etc. They have no fundamental disagreement. Yet, that one-dish difference will result in him cleaning up every time, simply because his four-dish limit gets hit first. That can lead to other conflicts, such as unequal division of labour, questioning compatibility, failure to communicate, etc. All because of one very small difference in tolerance.
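
Here’s a toy simulation of that dynamic in Python (the four- and five-dish limits come from the example; everything else, like the number of dirty-dish events, is made up): dishes pile up one at a time, and whoever hits their limit first does the cleaning.

```python
# Toy model of the dishes example. The 4- and 5-dish limits come from the
# scenario above; the number of dirty-dish events is made up.
HIS_LIMIT, HER_LIMIT = 4, 5

def simulate(events=10_000):
    dishes = 0
    he_cleans = she_cleans = 0
    for _ in range(events):
        dishes += 1  # someone leaves another dirty dish
        if dishes >= HIS_LIMIT:    # his lower limit is always reached first...
            he_cleans += 1
            dishes = 0
        elif dishes >= HER_LIMIT:  # ...so she never gets to hers
            she_cleans += 1
            dishes = 0
    return he_cleans, she_cleans

print(simulate())  # he cleans every single time; she never does
```

With those numbers he does all the cleaning, every cycle, even though they only disagree by a single dish.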

How does this help us resolve conflict? On one hand, it can help foster understanding of different points of view. Many conflicts are not between people on opposite sides of a line, but between people at different distances from it, on the same side. It’s worth noting that most people don’t choose their limits; they’re born with them, they had them instilled early on, or they’ve simply convinced themselves that their own limits are the rational ones. Sometimes the resolution to a conflict can be as easy as “ok, your limit is here, my limit is here, and that’s okay.”

On the other hand, living with other humans often necessitates adjusting our tolerance levels. Things run smoother if our limits are close. In the example above, if she dropped her tolerance to four dishes 50% of the time, each of them would clean up half the time, and they’d live happily ever after. Sometimes it’ll have to go the other way too: if he’s not too ragey with disgust after four dishes, he could wait until five, then she hits her limit and naturally cleans up. Either way, hooray for compromise.

This may be a subtle point, but I think it’s a good one: many disagreements are not disagreements at all. It’s not that one person is wrong and the other is right. They’re just feeling different things based on how close they are to their limit. That is much easier to deal with than genuine conflict, especially if it’s recognized as the non-conflict (nonflict) it is.

Why Horror Movies Are Scary, and Why People Like Them Anyway

A while ago, I was contacted by a PR agency that had seen one of my talks about the psychology of horror. A British media company was putting together a Halloween marketing campaign and wanted advice on how to use a bit of scariness to make it more effective. I wrote them the summary below of why people regularly expose themselves to horror. I have no idea if the campaign ever went anywhere, but I figure it makes for an interesting read, so here it is.

Why are horror movies scary?

The answer to this is less obvious than it first appears. It might seem self-evident that scary movies are scary because they have scary things in them. But that just shifts the question to “what makes things scary?” Plus, fear is, by definition, an emotional response to danger. People sitting in a comfortable chair with their friends, munching on popcorn, are in no danger. They know they are in no danger.

So why are they scared anyway?

1) Because horror movies show us things that we were born scared of. Millions of years of evolution have programmed us to be frightened by things like spiders, growling monsters, and darkness. Early people who weren’t scared of these things tended to die, so they never got a chance to be our ancestors. With the survivors’ genes in us, we can’t help but feel the fear that kept them alive.

2) Because horror movies show us things that we’ve learned to be scared of. We may not be born scared of knives, needles, or clowns, but a few bad real-life encounters with them and we learn to fear them pretty quick. Movies can take advantage of the lessons we’ve learned from being scared for real.

3) Because we get scared when people we like are scared. Horror movies show us shots of people being scared just as much as they show us what is scaring them. When we’ve grown to like a character, we can’t help but feel some empathy for them when they appear to be frightened.

4) Because filmmakers exaggerate. No matter how realistic, a scary image on a screen pales in comparison to the real thing. That is why filmmakers need to exaggerate to make up for our safety from real danger. Extra dark settings, disorienting camera angles, anticipatory music, and discordant sounds (think the violins in Psycho) all make a scary image even scarier.

5) Because our bodies tell us we’re scared. For all the reasons above, our brains and our bodies are tricked into thinking we’re really scared. Our heart rates go up, we sweat more, and we breathe faster. These bodily reactions feed back into our conscious experience of fear. Furthermore, horror movies are one of the most visceral types of film. In one study, horror was one of only two genres that had a significant and identifiable physiological response (the other was comedy).

So why would people watch something that scares them?

Again, fear is an emotional response to danger. Usually one that makes us want to run away, or at least turn off the TV. Why would we not only keep watching a scary movie, but pay money to do it?

6) Because some people like the rush of being scared for its own sake. Studies have found that the more scared people report being during a movie, the more they enjoy it. For some fans of horror movies (but not everyone), excitement is fun, whether it comes from joy or fear. My research found that people high in sensation seeking—those who say they frequently seek out intense thrills—liked the horror genre more than people low in sensation seeking.

7) Because some people like the relief when it’s all over. The happy moments of a horror movie can be just as important as the horrifying parts. A moment of relief after escaping the bad guy can seem even more positive than it would normally, because our hearts are still beating with excitement. The leftover emotion from being scared can translate into happiness when the source of fear is gone.

8) Because you can control your image by controlling your reactions to a horror film. In my study, even though everyone had about the same “gut reaction” to horror imagery (a negative one), what they said they liked varied a lot. People with rebellious sorts of personalities were proud to say they liked horror movies.

9) Because it helps us hook up. Although they have the same negative “gut reaction” to horror, men say they like the genre more than women do. Research has found that men and women who react “appropriately” to frightening films—men being fearless and women being fearful—tend to be liked more by the opposite sex. Horror films are perfect for dates.

There you go. Just a few of the many reasons that we’re happy to be horrified.

On Lying

I recently finished reading Sam Harris’s short essay on the topic of lying, which is called, no lie, Lying. In it, he explores the rationality of communicating things that are not true, and comes to the conclusion that it is wrong to lie.

Yeah. Obviously. But Harris goes further than what many people mean when they say “it’s wrong to lie,” arguing that even seemingly justified forms of lying, like little white lies, lying to protect someone, and false encouragement, are all wrong in their own way.

He’s convincing, for the most part. Take false encouragement: the lies we tell without a second thought, like “yeah, I love your blog, you are such a good writer.” It seems harmless, and it would be awkward to say otherwise to someone, but Harris makes a good point: “False encouragement is a kind of theft: it steals time, energy, and motivation a person could put toward some other purpose.”

I’ve always been a big believer that the truth is the fastest route to success, both on a societal level (hence my interest in science) and on a personal level. It would be easy to get carried away with this, becoming one of those people who spouts his opinion whether asked for it or not, and is rarely invited to the next party. However, I think it is possible to tactfully express the truth whenever asked to.

I appreciate blunt people. Others may not, but even they can be served well by the right kind of bluntness. If I tell you that yes, you actually do look like a giant turd in that brown dress (like really, brown dress? What were you thinking?), it might hurt at first, but when you show up to the party in a different dress and get genuine compliments rather than awkward false encouragement, you’re better off in the long run.

Harris also makes the point that lying is not only harmful to the people being lied to, but taxing for the liar. Keeping up a lie takes a lot of mental effort, since the lie exists only in the liar’s mind. Every time the lie comes up, the liar has to check it against his memory of previous lies, who knows what, and how the lie affects everything else; he essentially has to store a new version of reality entirely in his head, often fabricated in real time. The truth, though, is easy to keep track of; the truth-teller only has to keep track of one version of reality. The real one.

Many of these examples assume the people involved are regular, sane people who ultimately just want to get along. Where Harris starts to lose me is when discussing situations where this arrangement breaks down. He discusses a hypothetical situation of a murderer showing up at your door looking for a little boy you are sheltering. Should you tell the murderer the truth? Harris argues that lying could have unintended harmful consequences; the murderer might go to the next house and murder someone else, or at best, it just shifts the burden of dealing with the murderer to someone else. Instead, a truth like “I wouldn’t tell you even if I knew,” coupled with a threat, could defuse the situation without a lie.

I’d argue that, when facing someone for whom cooperation and rationality have obviously broken down (e.g., a kid murderer), lying sometimes has a known benefit (e.g., saving a kid’s life) that almost certainly outweighs its far-fetched unknown consequences. Harris later makes this same point on a larger scale, when justifying lying in the context of war and espionage, saying the usual rules of cooperation no longer apply. I think blowing up a city with a bomb and stabbing a kid with a knife are both situations where cooperation has broken down, and both situations where lying can be a tool used in good conscience.

There are no absolute moral principles that work in all situations. Life is too complicated for that. Trying to summarize it in simple prescriptive rules (as many religions have) doesn’t work. So, the rule “lying is always wrong” can’t work. There are extreme situations where the rule breaks down.

Luckily, most people will never encounter such an extreme situation in their daily lives. This is where Harris’s main point is spot on: we should lie a lot less than we do. If everyone told the truth in every normal situation, relationships would be stronger, and people would be happier and more productive. I’ve certainly been more aware of my honesty since reading the book, so it’s fair to say it literally changed my life. That’s certainly worth the $2.00 it costs (buy it here). No word of a lie.

Book Review: Moonwalking With Einstein, by Joshua Foer

Memory is often taken for granted in a world where paper and transistors store information better than neurons ever could. Moonwalking With Einstein shines a much-needed light on the art of memorization. It could have been a dry collection of basic science and light philosophy on the subject, but Foer makes it riveting by telling the story of his own head-first dive into the world of memory as sport.

I had no idea this went on, but every year, there are regional and worldwide memory championships in which people compete to perform seemingly superhuman feats of memory, such as memorizing decks of cards as fast as possible, or recalling hundreds of random numbers. After covering one of these events, Foer became so curious that he began training to participate himself.

What he discovered is that these impressive acts of memorization actually boil down to a few simple tricks that anyone can learn. The book isn’t a how-to manual, but the tricks are simple enough that anyone can pick them up just by reading about how Foer learned them. I can still recall a list of 15 unusual items (in order) that Foer’s mentor, Ed Cooke, used to first teach the memory palace technique. It’s only a matter of practice and refinement for anyone, no matter how forgetful, to memorize several decks of cards.

This humanization of the extraordinary carries throughout the book. Foer himself keeps a modest tone about his damn impressive accomplishments, emphasizing that he’s just a regular forgetful dude who lives in his parents’ basement. The other memory championship contestants, too, can do amazing things during the contest, but it’s clear that the ability to memorize a poem doesn’t translate to a successful personal life.

In fact, Foer is critical of those who do profit from using memory tricks. His contempt for Tony Buzan, the entrepreneur who makes millions on books and sessions related to memory, comes through every time Buzan’s name comes up. He might as well add “coughBULLSHITcough” after every claim of Buzan’s. More substantially, a tangent on savantism takes a strange turn when Foer begins to suspect that one self-proclaimed[1] memory savant, Daniel Tammet, may have more in common with the memory championship contestants than with Rain Man[2]. When Foer confronts him about it directly, things get a bit uncomfortable.

By wrapping fascinating facts and anecdotes about memory up in his own story, Foer keeps the book riveting throughout. This is one of those books that I literally had trouble putting down. Anyone with even a passing interest in the human mind should remember to stick Moonwalking With Einstein in their brain hole.


[1] And expert-proclaimed; psychologist Simon Baron-Cohen (yes, relation) studied Tammet and was more convinced of his traditional savantism.

[2] The inspiration for Rain Man, Kim Peek, also makes an appearance, and is more convincing as someone whose freakish memory comes naturally.

Evolution’s Failures

I think it’s hilarious to imagine evolution’s failures.

Think of how our digestive systems are able to function no matter which way we’re sitting or lying, carrying food to the right place in a peristaltic wave, even if it’s going against gravity. Think of the pre-human who didn’t get that gene. He’s all like, “check out this handstand!”, then as soon as he’s upside-down, all the wooly mammoth he ate earlier is pouring out of his face. He suffocates, dying before he ever had a chance to procreate, and his shitty genes never get passed on. Hilarious.

Thing is, one day that guy will be us.

Evolution is not only biological, but technological. We already pity the people of the past—most of human history—who didn’t expect to live past the age of thirty. Technology has doubled our lifespan just by tuning up our default biological hardware from the outside. Think of what we can do once technology moves inside.

It’s a near certainty that we will merge with technology. We already rely on it, and there’s gotta be a better way of interacting with it than through our fingers. When our brains and bodies are made more of bits and bytes than nerves and leukocytes, the people of today will be the pre-humans.

Looking back, we’ll think that our squishy biological way of doing things was hilarious. “That’s right, son,” we’ll say, to our sons. “We had computers we plugged into walls, but our own method of recharging was—hah, it’s so gross, but get this—we mashed up other living things with our teeth then let them slide down our throat. There were actually people who couldn’t find things to eat, and they died. Forever! They didn’t even have a backup.”

And our sons, they probably won’t even understand how (or why) we managed to get through the day.

Evolution makes failures of us all.

The Myth of the Evil Genius

Joker by Nebezial

The evil genius only exists in fiction.

An evil genius cannot exist in reality, because in reality, intelligence and evil are incompatible. A genius acts rationally, and history consistently shows that it is rational to be good.

Genius and evil are two terms that are nearly impossible to define, but most people know them when they see them. Adolf Hitler was evil. Osama Bin Laden was probably evil. Albert Einstein was a genius. Bill Gates is probably one too.

It’s not that evil doesn’t pay; genius and evil both pay, in some sense. Bill and Osama both had mansions, and could probably afford the most expensive bacon at the grocery store (though I guess Osama would have passed). The difference is that Bill is living a comfortable life that leaves a trail of advancements and improved lives. Osama is at the bottom of the ocean riddled with bullets, and has left a trail of destruction and ruined lives.

Osama and Adolf did gain power, but was it through genius? I doubt it. They excelled in some areas—charisma, mostly, and probably a good helping of being in the right place at the right time—but I doubt they were geniuses. Not in the sense meant here: extreme mental ability for coming to correct conclusions.

On both an individual and a societal level, it is rational to be good. More often than not, the correct choice between a good option and an evil option is the good option, all things considered. Murdering a person you can’t stand may be easier than altering your own life to get away from him (say, packing up and moving away), but on an individual level, murder will probably land you in jail or dead yourself, and on a societal level, allowing people to murder willy-nilly wouldn’t be conducive to happiness and productivity.

That’s why the evil genius doesn’t exist. Even if the impulse to do evil was there, a true genius would take a moment, and think “hmm, considering all the consequences, maybe genocide isn’t such a spiffy idea.” If The Joker was really so smart, he’d figure out a way to resolve his Batman problem without blowing up innocent people and getting thrown in Arkham again and again.

Evil cannot result from the cool calculated machinations of a genius. In real life, evil is in the hot passion of an argument when a knife is nearby. It’s in the subtle biases of a politician whose values are misguided. And in that sense, evil is in all of us; luckily we also have an inner genius to play superhero.

The Emotional Ramifications of Bleeps and Bloops

The iPhone needs more options for the new text message sound. There are only six beeps, bongs, and honks available, with no ability to add new ones.

I say this not out of a vain need for customization, but for the emotional well-being of iPhone users.

This is modern life:

You meet someone you like, and she likes you enough to give you her phone number. You send her an innocuous text, then wait with bated breath for a reply. BONG, an innocuous text in return. You do this back-and-forth a few times, and soon each message contains not just neutral words but embedded emotion.

Eventually it’s BEEP BEEP here are your plans for the evening; DING! here comes a compliment you’ll remember for the rest of your life. Pair that sound with those consequences enough times, and they become inextricably linked. A smile hits your lips and your heart leaps into your throat with every buzz of your pocket.

Maybe you go on a few adventures. Maybe you screw. Maybe you make plans for the future. But nothing lasts forever, and when things inevitably go sour, all the positive associations with that tone become ambivalent, then negative. Finally, DONG! we need 2 talk.

Those associations are embedded deep, and they never quite go away. Alerts for even the most frivolous texts now make your mouth go dry; they’re Pavlov’s bell in reverse.

It doesn’t take long to cycle through all six tones.

Technology is so embedded in our lives that we must increasingly consider not only its practical ramifications, but the full spectrum of human emotion as well.