The Matrix Meets Braid: Artificial Brains in Gunfights

It’s The Matrix meets Braid: a first-person shooter video game “where time moves only when you move.” You can stare at the bullets streaking toward you as long as you like, but moving to dodge them causes the enemies and bullets to move forward in time as well.

The game is called SUPERHOT, and the designers describe it by saying “With this simple mechanic we’ve been able to create gameplay that’s not all about reflexes – the player’s main weapon is careful aiming and smart planning – while not compromising on the dynamic feeling of the game.”

Here’s the trailer:

I’ve always loved questions about what it would be like to distort time for yourself relative to the rest of the universe (and the potential unintended consequences, as we explored in discussing why The Flash is in a special hell).

In SUPERHOT, it’s not that you can distort time exactly – after all, whenever you take a step, your enemies get the same amount of time to take a step themselves. Instead, your brain is running as fast as it likes while (the rest of) your body remains in the same time stream as everything else.

And then it struck me: this might be close to the experience of an emulated brain housed in a regular-sized body.

Let’s say that, in the future, we artificially replicate/emulate human minds on computers. And let’s put an emulated human mind inside a physical, robotic body. The limits on how fast it can think are its hardware and its programming. As technology and processor speeds improve, the “person” could think faster and faster and would experience the outside world as moving slower and slower in comparison.

… but even though you might have a ridiculously high processing speed to think and analyze a situation, your physical body is still bound by the normal laws of physics. Moving your arms or legs requires moving forward in the same stream of time as everyone else. In order to, say, turn your head to look to your left and gather more information, you need to let time pass for your enemies, too.

Robin Hanson, professor of economics at George Mason University and author of Overcoming Bias, has put a lot of thought into the implications of whole-brain emulation. So I asked him:

Is Superhot what an emulated human would experience in a gunfight?

His reply:

“An em could usually speed up its mind to deal with critical situations, though this would cost more per objective second. So a first-person shooter where time only moves when you do does move in the direction of letting the gamer experience action in an em world. Even better would be to let the gamer change the rate at which game-time seems to move, to have a limited gamer-time budget to spend, and to give other non-human game characters a similar ability.”

He’s right: thinking faster would require running more cycles per second, which takes resources. And yeah, you would need infinite processing speed to think indefinitely while the rest of the world was frozen. It would be more consistent to add a “mental cycle” budget that ran down at a constant rate from the gamer’s external point of view.
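A minimal sketch of that “mental cycle budget” mechanic (all the names and numbers here are my own illustrative assumptions, not from SUPERHOT or Hanson): the budget drains in proportion to objective time elapsed, and thinking at a higher speedup burns it faster per objective second.

```python
# Hypothetical sketch of a "mental cycle budget" mechanic:
# cycles burn at a rate proportional to objective (external) time,
# and picking a higher speedup costs more per objective second.

class CycleBudget:
    def __init__(self, budget_cycles, drain_per_second=1.0):
        self.budget = budget_cycles    # remaining mind-cycles
        self.drain = drain_per_second  # cycles burned per objective second at 1x

    def think(self, objective_seconds, speedup):
        """Spend cycles to think at `speedup`x for `objective_seconds`.

        Returns the subjective seconds actually experienced; if the
        budget runs out, the thinking time is scaled down to match.
        """
        cost = self.drain * speedup * objective_seconds
        if cost > self.budget:
            # Not enough cycles: think only as long as the budget allows.
            objective_seconds = self.budget / (self.drain * speedup)
            cost = self.budget
        self.budget -= cost
        return objective_seconds * speedup  # subjective time experienced

em = CycleBudget(budget_cycles=100.0)
print(em.think(objective_seconds=2.0, speedup=10.0))  # 20.0 subjective seconds
print(em.budget)                                      # 80.0 cycles left
```

Note how this matches Hanson’s point: infinite speedup would drain any finite budget instantly, so frozen-time contemplation can never be free.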

I don’t know about you, but I would buy that game! (Even if a multi-player mode would be impossible.)

What’s so special about living longer?

Atheist death panel: Red America's suspicions confirmed.

After reading about the death panel we held at Skepticon IV last week, a very clever philosopher friend of mine named Henry Shevlin wrote to me with a challenge to the transhumanist perspective. The transhumanist argument, which Eliezer made eloquently in the panel, is that death is a terrible thing that we should be striving to prevent for as long as possible.
Henry asks:

“Is death a tragedy because it involves a possible loss of utility, or because there’s some special harm in the annihilation of the individual? So consider two scenarios… Earth 1B and Earth 2B. Both of them have 100 million inhabitants at any one time. But Earth 1B has a very high life expectancy and a very low birth rate, while Earth 2B has a lower life expectancy and a very high birth rate. Otherwise, though, the two worlds are very similar. Which world is morally superior, by which I mean, generates more utils?”

Good question. Why, exactly, is prolonging existing lives better than creating new lives?
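On a plain total-utilitarian tally, the two worlds can come out exactly even. Here’s a toy calculation (the specific numbers are my illustrative assumptions, not Henry’s): with a steady-state population, total person-years per century depend only on population size, not on how those years are carved into lives.

```python
# Toy total-utilitarian accounting for Henry's two worlds.
# Assumes steady state: deaths per year = population / life_expectancy,
# and births exactly replace them, so population stays constant.

def person_years_per_century(population, life_expectancy):
    lives_started = 100 * population / life_expectancy  # lives begun per century
    return lives_started * life_expectancy              # = 100 * population

earth_1b = person_years_per_century(100_000_000, life_expectancy=200)  # long lives
earth_2b = person_years_per_century(100_000_000, life_expectancy=40)   # short lives
print(earth_1b == earth_2b)  # True: same total person-years either way
```

So if each person-year carries the same utility, Option 1 alone can’t break the tie between the worlds, which is exactly what makes Henry’s challenge bite.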

Let’s start with Henry’s Option 1 — that a person’s death is a tragedy because of the loss of the utility that person would have had, if he hadn’t died. Starting with this premise, can we justify our intuition that it’s better to sustain a pre-existing life than to create a new one?

One possible tack is to say that we can only compare utilities of possible outcomes for currently existing people — so the utility of adding a new, happy person to this world is undefined (and, being undefined, it can’t compensate for the utility lost from an existing person’s death). Sounds reasonable, perhaps. But that also implies that the utility of adding a new, miserable person to this world is undefined. That doesn’t sound right! I definitely want a moral theory which says that it’s bad to create beings whose lives are sheer agony.

You might also be tempted to argue that utility’s not fungible between people. In other words, my loss of utility from dying can’t be compensated for by the creation of new utility somewhere else in the world. But that renders utilitarianism completely useless! If utility’s not fungible, then you can’t say that it’s good for me to pay one penny to save you from a lifetime of torture.

Or you could just stray from utilitarianism in this case, and claim that the loss of a life is bad not just because of the loss of utility it causes. That’s Henry’s Option 2 — that death is a tragedy because there’s some special harm in the annihilation of the individual. You could then argue that the harm caused by the death of an existing person vastly outweighs the good caused by creating a new person. I’m uncomfortable with this idea, partly because there doesn’t seem to be any way to quantify the value of a life if you’re not willing to stick to the measuring system of utils. But I’m also uncomfortable with it because it seems to imply that it’s always bad to create new people, since, after all, the badness of their deaths is going to outweigh the good of their lives.

ETA: Of course, you could also argue that you care more about the utils experienced by your friends and family than about the utils that would be experienced by new people. That’s probably true, for most people, and understandably so. But it doesn’t resolve the question of why you should prefer that an unknown stranger’s life be prolonged than that a new life be created.

How Should Rationalists Approach Death?

“How Should Rationalists Approach Death?” That’s the title of the panel I’m moderating this weekend at Skepticon, and I couldn’t be more excited. It’s a big topic – we won’t figure it all out in an hour, but I know we’ll get people to think. Do common beliefs about death make sense? How can we find comfort about our mortality? Should we try to find comfort about death? What should society be doing about death?

I managed to get 4 fantastic panelists, all of whom I respect and admire:

  • Greta Christina is an author, blogger, and speaker extraordinaire. Her writing has appeared in multiple magazines and newspapers, including Ms., Penthouse, Chicago Sun-Times, On Our Backs, and Skeptical Inquirer. I’ve been thrilled to see her becoming a well-known and respected voice in the secular community. She delivered the keynote address at the Secular Student Alliance’s 2010 Conference, and has been on speaking tours around the country.
  • James Croft is a candidate for an Ed.D. at Harvard and works with the Humanist Chaplaincy at Harvard. I had the pleasure of meeting James two years ago at an American Humanist Association conference, where we talked and argued for hours. Eloquent, gracious, and sharp, he’s a great model of intellectual engagement. He’s able to disagree agreeably, but also change his mind when the occasion calls for it.
  • Eliezer Yudkowsky co-founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI), where he works as a full-time Research Fellow. He’s written must-read essays on Bayes’ Theorem and human rationality as well as great works of fiction. Have you heard me rave about Harry Potter and the Methods of Rationality? That’s him. His writings, especially on the community blog LessWrong, have influenced my thinking quite a bit.
  • And some lady named Julia Galef, who apparently writes a pretty cool blog with her brother, Jesse.

To give you a taste of what to expect, I chose two passages about finding hope in death – one from Greta, the other from Eliezer.

Greta:

But we can find ways to frame reality — including the reality of death — that make it easier to deal with. We can find ways to frame reality that do not ignore or deny it and that still give us comfort and solace, meaning and hope. And we can offer these ways of framing reality to people who are considering atheism but have been taught to see it as inevitably frightening, empty, and hopeless.

And I’m genuinely puzzled by atheists who are trying to undercut that.

Eliezer:

I wonder at the strength of non-transhumanist atheists, to accept so terrible a darkness without any hope of changing it. But then most atheists also succumb to comforting lies, and make excuses for death even less defensible than the outright lies of religion. They flinch away, refuse to confront the horror of a hundred and fifty thousand sentient beings annihilated every day. One point eight lives per second, fifty-five million lives per year. Convert the units, time to life, life to time. The World Trade Center killed half an hour. As of today, all cryonics organizations together have suspended one minute. This essay took twenty thousand lives to write. I wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?

If you’re coming to Skepticon – and you should, it’s free! – you need to be there for this panel.

“No problem!” Michio Kaku predicts our sci-fi future.

On Wednesday night I went to the Strand to hear celebrity physicist Michio Kaku promote his new book, “Physics of the Future: How Science will Change Daily Life by 2100.”

I know Kaku is supposed to be a real heavyweight in the physics community — he co-founded string field theory — so I was surprised to hear him talking like the kind of giddy, incautious futurist that gave futurism a bad name: “By 2100, our destiny is to become like the gods we once worshiped and feared.” He even had a catch phrase, like a salesman: No problem. (For example: “Lose your hand in an accident? No problem! Scientists will create a new mechanical hand that can touch and feel.”)

I’m willing to believe that it’s possible to make certain predictions about the near- to medium-term future with some confidence. But bad prognosticating is so easy and so common that my skepticism is on a hair trigger, and hyperbole and flowery language set it off immediately.

I flipped through Kaku’s book after the talk to see whether my first impression was accurate. Indeed, the book is full of grandiose claims about telekinesis, immortality, and avatars on far away planets. He tempers them with plentiful qualifiers like “may,” “might,” and “could.” But that doesn’t change the fact that he offers no evidence that some of the new technologies he’s describing are going to happen at all, let alone in the next 90 years.

Take this claim, for example: “By midcentury, the era of emotional robots may be in full flower.” Kaku describes how victims of brain injury whose emotional centers were severed from their cerebral cortices became incapable of making choices; everything had the same value to them. Emotion plays a crucial role in human decision-making, and Kaku makes the case that it would be critical to intelligent robot decision-making as well.

But the fact that instilling robots with emotion would make them more effective isn’t evidence that such a feat would be possible by 2100, or ever. There is a recently created robot called Kismet whose face can mimic a wide range of emotions, but, Kaku acknowledges, “scientists have no illusion that the robot actually feels emotions.” So there’s nothing, at least in this book, that backs up Kaku’s original claim about an “era of emotional robots” flowering this century.

Another claim that raised my eyebrows was that of telekinesis: later this century, Kaku says, we’ll be able to move things around with our minds. How will we do this? “In the future, room-temperature superconductors may be hidden inside common items, even nonmagnetic ones,” Kaku writes. “If a current is turned on within the object, it will become magnetic and hence it can be moved by an external magnetic field that is controlled by your thoughts.”

But there’s no evidence that room-temperature superconductivity is even possible. Currently, the world record high temperature for superconductors is -211 degrees Fahrenheit. And we don’t even understand the science behind that success, Kaku says — so as far as I can tell, there’s no way to know how much higher we’ll be able to bring the record.
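For a sense of scale, that Fahrenheit figure is still deep in cryogenic territory; a quick unit conversion puts the record around 138 kelvin, versus roughly 293 K for room temperature:

```python
# Convert the superconductivity record from Fahrenheit to kelvin.
def fahrenheit_to_kelvin(f):
    celsius = (f - 32) * 5 / 9
    return celsius + 273.15

record = fahrenheit_to_kelvin(-211)
print(round(record))  # 138 -- about 155 kelvin short of room temperature (~293 K)
```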

I don’t have any problem with speculation about radical new technologies. But that speculation should be framed as, “This is something that is theoretically possible,” not as “This will probably happen in the next X years.” Attaching a date to your speculation implies a precision which is, in Kaku’s and in most cases of future forecasting, deceptive.
