Astro Teller, Captain of Moonshots at X, on the Future of AI, Robots, and Coffeemakers By Erico Guizzo Posted 8 Dec 2016 | 17:39 GMT
Astro Teller has an unusual way of starting a new project: He tries to kill it.
Teller is the head of X, formerly called Google X, the advanced technology lab of Alphabet. At X’s headquarters not far from the Googleplex in Mountain View, Calif., Teller leads a group of engineers, inventors, and designers devoted to futuristic “moonshot” projects like self-driving cars, delivery drones, and Internet-beaming balloons.
To turn their wild ideas into reality, Teller and his team have developed a unique approach. It starts with trying to prove that whatever it is that you’re trying to do can’t be done—in other words, trying to kill your own idea. As Teller explains, “Instead of saying, ‘What’s most fun to do about this or what’s easiest to do first?’ we say, ‘What is the most likely reason this project won’t make it?’ ”
The ideas that survive get additional rounds of scrutiny, and only a tiny fraction eventually becomes official projects; the proposals that are found to have an Achilles’ heel are discarded, and Xers quickly move on to their next idea. It’s all part of Teller’s plan to “systematize innovation” and turn X into an assembly line of moonshots.
The moonshots that X has pursued since its founding six years ago are a varied bunch. While some were quite successful, such as Google Brain, which led to AI technologies now used in a number of Google products, others faced backlash, as was the case, most notably, with Google Glass. With Teller at the helm—his official title is “Captain of Moonshots”—X sees itself playing a key role in shaping the future of its parent company.
“If Alphabet wants to continue to grow, it needs to have one or more mechanisms for creating new problems to have,” Teller says, adding, “That’s X’s mission . . . our product is producing new Alphabet entities.”
To learn more about how they approach things at X, and get an update on its current projects, IEEE Spectrum senior editor Erico Guizzo spoke with Teller at Google’s office in New York City. The following has been edited and condensed for clarity.
IEEE Spectrum: Your grandfather, the famed nuclear physicist Edward Teller, wrote an article for Spectrum in 1973 on potential non-military applications of thermonuclear power. One of his ideas was using it for spacecraft propulsion. If the spacecraft could be accelerated to one-thousandth the speed of light, he wrote, “We could get to Mars in a week; the round trip would be two weeks.” So in the article he basically starts by looking at a technology and then envisions a revolutionary application for it. How does that compare to how X comes up with its moonshots and goes about turning them into reality?
Astro Teller: I’m not sure that Edward would have agreed to this but I think he and a lot of other amazing inventors of the last hundred years have enjoyed starting from a technology they wanted to have work and then tried to figure out if they could. I’m sure, at the margins, that happens at X, but it is not our process. We work really hard for that not to be our process, because chasing the tech first can occasionally lead to wonderful things, but it’s not the most efficient way to get important answers.
So our process is first you have to say what the huge problem is you’re trying to solve. You have to be able to describe it in order for it to have any chance of taking root at X. And there has to be some articulatable, hard but potentially solvable, technology problem at the middle of it. Once that’s true, we go down a path where instead of saying, “What’s most fun to do about this or what’s easiest to do first?” we say, “What is the most likely reason this project won’t make it?”
So if we were working on Edward’s space travel idea—just to use the example that you’ve given me—instead of saying, “How good a propulsion system would this be?” we would say, “Of all of the possible reasons—cost, danger to the astronaut, heat—what is the most likely reason this will turn out to be a bad idea?” Let’s just look at that for an hour, a day if necessary. If we succeed in killing the idea on the basis of that, thank god we didn’t work on all the other issues first. And if we don’t [kill it], if the first thing that we named doesn’t turn out to be an Achilles’ heel for this project, great. Then let’s go and look at the next two or three most exposed aspects of the project.
Spectrum: So the thing that might kill an idea, it could be a major technical limitation, or maybe it’s just cost?
Teller: I get asked frequently, “At what stage do you make a business plan for the moonshot?” And the answer is never Stage 1 or Stage 5 or Stage 17. The answer is always, “Is making a business plan the next most efficient thing we can do to try to kill this project?” And there are some projects where we have a business plan now and it becomes more detailed all the time. Let’s say for the self-driving car group. We never said in the early days, “Okay, let’s make the business plan.” Because if you can make cars that drive themselves, the world is going to change in such a dramatic way that the details of your business plan are not going to kill that as a project.
There are other things that we’re doing—let’s say our airborne wind turbines—where it’s such a cost-driven business. Energy generation is all determined by the levelized cost of energy, the LCOE, and that number determines whether you’re competitive or not. So we need to be thinking sooner rather than later about that for that project.
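The LCOE that Teller invokes has a standard definition: lifetime costs divided by lifetime energy output, both discounted to present value. A minimal sketch of that calculation—the parameter names and the flat cost-and-output profile are simplifying assumptions here, not X’s model:

```python
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Levelized cost of energy in $/MWh: total discounted lifetime cost
    divided by total discounted lifetime energy output."""
    # Up-front capital plus each year's operating cost, discounted to today.
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    # Each year's energy output, discounted on the same schedule.
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy
```

A generation technology is competitive when this single number comes in below the alternatives’, which is why a cost-driven project has to confront it early rather than late.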
Spectrum: You once said that for every major project that takes off at X—like the self-driving car—you consider lots and lots of other ideas. How do you keep coming up with new stuff? And how do you compete with other places like, say, Y Combinator, and other incubators and accelerators that have people bringing them ideas all the time?
Teller: We have a team that’s dedicated to coming up with ideas, but the rest of X, and to a lesser extent the rest of Alphabet, and in particular the founders [Larry Page and Sergey Brin], are sources of ideas. And we’re looking at academic work that’s happening all the time. We go to conferences, and we invite people to come visit us. So people bring us ideas, too.
We sometimes bring academics who have special expertise in for months at a time to just see if we can find something. What typically happens is that you’ll sit with us for three or four months trying to talk us into doing more and more research on the thing you like doing, and we’ll keep trying to talk you into reframing your excitement in the terms that I gave: Huge problem with the world, radical proposed solution, underlying hard technology problem that can cause that radical solution to be realized. Sometimes we can’t connect—our way of being and your way of being just don’t match. And occasionally it does work. For example, the contact lens work that we did came out of two academics from the University of Washington.
Spectrum: Yes, they wrote an article on bionic eyesight for Spectrum in 2009.
Spectrum: And once you have all these ideas, how do you keep track of them? Do you put them on a database or a giant board on the wall?
Teller: We do keep track of projects, especially after we’ve killed them. For two reasons: No. 1, we don’t want to reinvent the wheel. We don’t want someone who gets hired two years after we kill a project to come up with the same idea and then spend three months rediscovering it.
Reason No. 2 is we want to keep track of the ideas that we’ve had—and we do have a database—because sometimes there are presumptions [that could be revisited]. Say we’re going to not work on this project because one of the necessary constituent ingredients is a battery with 10 times the energy density of lithium-ion; it doesn’t exist, we don’t consider that a safe bet that it will appear in the next five years, so let’s not start this project. But then if that battery appeared, we would want to be able to go back and say, “Hey, wait a second, now we can reconsider this as a potential moonshot because that was the reason we killed it and that technology has now appeared.”
So it’s not as indexed as that might make it sound. We can’t type “We got the batteries” somewhere and have all of the moonshots that we killed because of batteries naturally drop out of our database. It’s not that organized. But we do internal post-mortems on these things so that we can learn.
One of the other reasons that we write up these little case studies every time we kill a project, especially for the more advanced projects that we kill, is because we want to ask the question, “Okay, nobody is a bad person for not having succeeded here, but now that we know that this project should’ve been turned off, because we turned it off, how could we have gotten to this answer faster?” Hindsight is always 20/20, but we can still ask the question. Sometimes the answer is, you know, we played that about as well as it could be played, and there are other times where we say, “Yeah, that was kind of a mistake on our part, we could’ve looked at this thing where the Achilles’ heel was early on and we didn’t.” And maybe that’s because we didn’t have the expertise in house, but that’s not a good reason for not looking; we should have hired a consultant and gotten that done faster.
Spectrum: How many entries do you have in this database?
Teller: I don’t know how many are in there, but I’m sure there are at least a hundred. The ones that are in the database are the ones that have received hundreds to thousands of person-hours. The ideas that don’t last very long don’t get put in there, but some of them are wonderful ideas.
“There’s no shame in most of the questions being wrong. You don’t really know which are genius and which are crazy until after you dig in.”
For example, a few years ago someone asked, “I wonder if we could harvest the power in an avalanche.” I remember sitting around and we did some calculations about the total kinetic energy you could get from a medium-sized avalanche, how many avalanches a year there were, and if you had an idealized piece of equipment that you could somehow magically teleport from one place to another so it was always at the bottom of some avalanche that was happening—so what is the best-case scenario for this—and it just wasn’t big enough. We killed that idea in half an hour. And that was an intellectually stimulating half an hour, and I think that that was a gorgeous question that was asked. There’s no shame in most of the questions being wrong. You don’t really know which are genius and which are crazy until after you dig in.
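The avalanche exercise is a classic Fermi estimate, and its spirit is easy to reproduce. Every figure below—the avalanche mass, the flow speed, the electricity price—is an illustrative assumption, not a number from X’s actual session:

```python
# Back-of-envelope check in the spirit Teller describes.
mass_kg = 1e7        # assume a medium avalanche moves ~10,000 tonnes of snow
speed_m_s = 30.0     # assume a typical flow speed of ~30 m/s
kinetic_j = 0.5 * mass_kg * speed_m_s ** 2   # KE = 1/2 m v^2
kwh = kinetic_j / 3.6e6                      # joules -> kilowatt-hours
value_usd = kwh * 0.12                       # at ~$0.12/kWh retail
print(f"{kwh:,.0f} kWh, worth about ${value_usd:,.0f}, per avalanche")
```

On these assumptions, even granting the magically teleporting harvester and perfect capture, the best case is on the order of a hundred dollars of electricity per avalanche—consistent with the idea dying in half an hour.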
Someone else, a year or two later, asked whether we could put a ring of copper around the North Pole and have the Earth’s magnetic flux generate current in the copper coil. Now, how much would that copper cost, and how much would it cost to lay that coil? And then where is the place we could actually pull it down to, and how much heat would be lost along the way, and then, okay, what if we made the copper cross section bigger so that the resistance is lower so it loses less heat? Oh well, that’s a lot of copper all of a sudden. No, you can’t close the circle—pun intended—there. That one also probably died in half an hour.
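The copper-ring numbers can likewise be roughed out in a few lines. The latitude, cable cross section, and copper price below are illustrative assumptions; only the material constants are standard reference values:

```python
import math

R_EARTH = 6.371e6        # Earth radius, m
LAT = math.radians(70)   # assume the ring sits at 70° N
RHO_CU = 1.68e-8         # copper resistivity, ohm·m
DENSITY_CU = 8960        # copper density, kg/m^3
PRICE_CU = 6.0           # assume roughly $6/kg for copper

area = 1e-4                                       # assume a 1 cm^2 cross section
length = 2 * math.pi * R_EARTH * math.cos(LAT)    # ring circumference, m
resistance = RHO_CU * length / area               # R = rho * L / A
mass_t = length * area * DENSITY_CU / 1000        # tonnes of copper
cost = mass_t * 1000 * PRICE_CU                   # metal cost alone, USD

print(f"{length/1000:,.0f} km of cable, {resistance:,.0f} ohms, "
      f"{mass_t:,.0f} t of copper ≈ ${cost/1e6:,.0f}M")
```

The trade-off Teller mentions falls straight out of these formulas: doubling the cross section halves the resistance but doubles the mass, and therefore the cost, since both scale linearly with area—“that’s a lot of copper all of a sudden.”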
Spectrum: I’d like to talk about how X has evolved since it was formed six years ago and overseen by Sergey Brin. Back in 2013 you described X as “Sergey’s Batcave,” and you said he was Bruce Wayne and you were Lucius Fox…
Teller: It’s a little bit different now.
Spectrum: Tell me about it. What are some of the things that you still do the same way and some of the things that you do differently?
Teller: The single biggest change has been the process of Alphabetization, where I’m now formally the CEO of this thing, X, which is a sibling to Google. I meet with Larry and Sergey on a regular basis, and so do the CEOs of all of the other Alphabet entities. But we’re in a somewhat unusual position with respect to them, because our product is producing new Alphabet entities.
Verily came from a set of projects within X and it’s now an independent Alphabet entity that is a peer to X. The self-driving car group is in the middle of spinning out from X right now. There are other projects which aren’t technically independent Alphabet entities, like Google Brain, that were spun out from X and went back to Google.
And since our best case scenario is that the projects, as they become more mature, head toward graduation—toward this independent Alphabet entity status—we want Larry and Sergey and Ruth [Porat, Alphabet’s CFO], in particular, to be very knowledgeable about and very comfortable with each of these things, so that by the time the moment of graduation happens, they are not saying, “What is this?” but that they are excited about it and they’ll take ownership of it so that we can retract our oversight and spend time on other projects.
Spectrum: Do they give you guidelines or point to areas they want X to focus on?
Teller: Both. Larry, Sergey, Ruth, also David Drummond [Alphabet’s chief legal officer] and Eric Schmidt [Alphabet’s executive chairman], function as sort of the board of directors, informally at least, for X. And for a relatively mature X project like Loon, for example, they will also meet regularly with Loon to give them advice and to create the kind of oversight needed once it leaves X and it is its own entity.
Teller: Of course not. We’ve always had as our mission to make things that solve a real large problem in the world and can produce the kinds of returns to Alphabet that will justify our ongoing existence. No matter how well-meaning the founders and the board of Alphabet might be, they are not going to just pour indefinite money into something they don’t have high confidence will produce more value than they’re spending. And that can take a good long time to come to fruition, but I believe that their money is being well spent, and I think they believe their money is being well spent.
As a thought experiment, you or your readers might look at the things that we’ve graduated and imagine all of them being purchased by somebody, and then come up with a number for what that would cost to buy. I promise you, the number you come up with will be many times what it cost us to build them.
If Alphabet wants to continue to grow, it needs to have one or more mechanisms for creating new problems to have, not just new solutions. If you have a fixed set of problems, your ability to solve them sort of asymptotes. How are you going to go find new problems to have—the right problems to have—that you at least have a shot at solving? That’s X’s mission. So it’s not an accident that we’re going to be useful for Alphabet: We were designed to be useful to Alphabet. As I was saying, our product is producing new Alphabet entities. So I’d like to think that while Alphabet will find other ways over time to add to its roster—for example through acquisitions—that we will be influential in helping to shape what Alphabet is in 10 or 20 years.
Teller: Which I hope people found to be a balanced view, neither whitewashing AI and robotics, nor giving in to or allowing for the hysteria to continue on the negative side. It was a really interesting project.
Spectrum: Yes, so AI seems to be everywhere these days, and in fact Google Brain came out of X and became part of Google in 2012, and now it’s used in a bunch of Google products. How do you see AI evolving and being incorporated into things and our daily lives?
Teller: Artificial intelligence has already changed the world in some pretty dramatic ways and will certainly do so even more in the future. But it’s a component technology. The transistor has changed the world. But saying, “How are transistors going to change the world?” is almost the wrong layer of abstraction—it’s like trying to understand a river by talking about H2O. So artificial intelligence will participate meaningfully in causing technologies to become more intelligent and will shift how we try to deliver value to people. Less and less will technologies need to do what we want them to do through straightforward mechanical and structural solutions, and more and more they’ll solve the problems through the increment of intelligence.
For example, instead of trying to make a car so that it can survive being smashed into a brick wall at 50 miles an hour, which is the current standard for safety, 20 years from now the question will not be, “Can it survive the crash?” but “Is it smart enough not to crash into the wall at all?” And that will be the new, better way of talking about the actual safety of the car.
We will see that happen in everything from our watches to our phones to our cars to the coffeemaker that we walk up to. When you walk up to a coffeemaker in 10 or 20 years, you’re not going to push a button and tell it how much coffee you want. It’s going to figure out how much coffee you want based on all the coffees you’ve ever ordered, probably taking into account how you look, maybe details of your biometrics, and it’s going to give you the coffee that will actually cause you to feel the best.
Spectrum: Switching now to one of my favorite topics, robots. The one place where I think many people would like to see more robots—I certainly would—is in our homes. I’m thinking of home robots that would do chores that we don’t want to do so we have more time to spend with our kids, families, hobbies. And what is more valuable than time?
Teller: I love it. I love that you say that, and agree with you. Look, we’re hiring no discipline faster than we’re hiring people from the field of machine learning. As that previous answer about artificial intelligence suggests, we’re up to our eyebrows in machine learning and only doing more so.
I’m making a distinction that if you start from the presumption that the tech is what’s important you may end up in the wrong place. You already have a robot in your house that washes dishes. I’m sorry that it doesn’t look like a mechanical man, but you do—it’s called a dishwasher. And it may or may not be the case that a man-looking mechanical object will wash your dishes better. If it does, and it gives you more time, great. If not, maybe the current shape of the robot that’s already in your kitchen is acceptable.
I’m more interested in using technology to get your dishes washed to get you more time than I am in whether it looks like the kind of robot that you have in your fantasy. Because our lives are already awash in things that have the three components that fundamentally determine something to be a robot: Sensing, computation, and actuation. It has to be aware of the world, has to think about the world, and has to do something back to the world.
Your dishwasher is a robot. Self-driving cars are robots. If making something that’s a biped saves you more time, great. And if making no bipeds ever is the best way to solve all of the world’s problems, I don’t think we should be hung up about bipeds, or quadrupeds for that matter.
And I know that there are some people, to some extent even me a little bit—it’s kind of sad to say that—but it’s my job not to let us get sucked into what would be most fun to build. I actually want us to solve the problems. And that means I need to be an evangelist, and I am full time an evangelist, for solving the problems. And not letting any of us have fun in the technology at the expense of lost efficiency. Because Alphabet deserves for us to actually generate value while we’re generating the technology, and value is produced by actually solving problems.
Spectrum: So when you think of tasks at home—cleaning, doing dishes, clearing the table, finding things, picking up toys from the floor—those things take a lot of our time, and that, I think, is a significant problem worth solving. Robots aren’t the best solution, is that what you’re saying?
Teller: No, that’s not what I’m saying. What I’m saying is, look at the introduction of the early PCs, like the Apple II. It was something for tech hobbyists and it was a replacement in the early days for calculators and for typewriters—basically, word processing and VisiCalc. It was in the home to some extent but actually the bulk of Apple IIs in the first couple of years were sold to IBM, ironically, and to a few other places that had a lot of calculators and a lot of typewriters and wanted to do them more efficiently. It turned out to be useful for solving secondary and tertiary problems that we couldn’t even imagine using computers for—nobody started in the earliest days of the PC saying, “Let’s make Facebook or Gmail.” That’s just not how the dialogue went, and I don’t think that that would have been the most productive way to get started.
“I think robotics is coming to the home, but if we over-constrain the problem to be the robot that the 1950s and 1960s embedded in our brain as our visual image of a robot, that is a great way to undermine the best solutions.”
Sure, there was the occasional luminary or visionary who did say things like that. But I think there’s a lot to be said for starting by replacing calculators with spreadsheets. And robots are likely to do the same thing in and out of the home. So we’re focusing on the ways we can use technology to solve your problems for you—like a thermostat in your home that doesn’t just stay at a fixed temperature but pays attention to whether you’re home or not, what time of day it is, and what that means about the temperature you’d like, as Nest does. That’s a robot too, by the way. So I think robotics is coming to the home, is coming to the office in lots of ways, but if we over-constrain the problem to be the robot that the 1950s and 1960s embedded in our brain as our visual image of a robot, that is a great way to undermine the best solutions. Great engineers remove all unnecessary constraints so as to find the best solutions. That’s all I’m championing here.
Spectrum: The last update we had on the robotics group at X was that they were looking for a moonshot to pursue. Have they found one?
Teller: Multiple ones. So we haven’t made any of these public yet, but as I guess you understand, we took all of the robotics work that was happening, except for Boston Dynamics, and we have been regrouping the people and some of the ideas into three, maybe four groups. And they’re now hard at work on interesting things, and like the rest of X projects, they’ll be successful or not successful. They’re sort of in their adolescence right now, they’re only a year in, not even quite…
Spectrum: So they are not just helping out with other projects that could use some robotics, they are actually working on their own robot projects?
Teller: A few of those people did end up on other X projects, but no, there are some new X robotics projects that have now been defined as things that we would recognize as moonshots and that we’re moving forward.
Spectrum: I’d like to push back a bit on the issue of personal robots in our homes. I’m sure you’re familiar with Willow Garage and how it got started…
Teller: Yes, I am. And you know, many of those people are now at X.
Spectrum: Right. So Willow, too, had a goal that was kind of like a moonshot, which was building a personal robot, and they faced a lot of challenges and saw first-hand how hard it is. But in their attempt, what they created and gave to the community, it’s such a giant legacy, and some people just wished that there was a continuation to that, a Willow Garage Redux, or something like that.
Teller: So, good news, bad news. The good news is I want the same thing that you want, and so do all those people. In order for them to have the best chance to solve a real problem and to continue to grow, to flourish, to have a bigger impact on the world, some nuance needs to be added to “personal robotics.” The phrase you just used is not a problem.
Spectrum: Personal robots to help people at home?
Teller: That’s not a problem. You could say, “My laundry is on the floor, I wished someone else would pick up my laundry,” that’s a problem. Then we can have a discussion about whether that problem is a big enough problem to justify robotics, or a personal robot in your home. And we get into an interesting conversation about issues of safety and cost and when you’d actually adopt that, but I’m not pushing back because I don’t believe in personal robotics: I deeply believe in personal robotics, but getting the details right is the difference between success and failure. So what is it that—I’m pushing back on you now—you want to see, and don’t tell me personal robotics, tell me a very concrete thing in your life you wished a robot would solve and that you believe you’d spend on the order of five to ten thousand dollars to have fixed.
Spectrum: I’ve thought about that a lot and the answer is, I don’t know, but I think we don’t know because we haven’t experimented enough, and that’s something a place like X would have the capability to do, just like you did with other hardware, with Glass and Tango. So you’d put, I don’t know, a thousand robots in the homes of your employees or other testers and see what they can do and what they can’t. So that level of experimentation I’m hoping that someone will go and try.
Teller: I hear you. How much do you think people would pay? I’m not selling you one right now, I’m just curious. You just painted a picture of a platform for learning about what these robots could do, they are in a thousand homes—you reference Glass, we had some interesting learnings, positively and negatively, with that—let’s say we had an explorer program and a thousand people were buying these robots, what do you think they’d pay for them?
Spectrum: Would they be subsidized by you?
Teller: No, you’d be paying for it. If you believed that it was a serious learning platform, what would you pay to have one of these robots in your home?
Spectrum: Hmm, for a robot that would do something, maybe a couple thousand dollars or a little more?
Teller: So on the order of three to six thousand dollars?
“We built Glass as a learning platform, and after we started selling it . . . we fell into the trap of letting, and even amplifying, the world in seeing it as a product. Which led to a set of expectations that the learning prototype was not going to meet.”
Teller: That’s interesting. I know you’re supposed to be interviewing me but I’m really curious, so I’m going to ask you. We built Glass as a learning platform, and after we started selling it, two things happened, which ended up in a bad cycle. The world was determined to see it as a product, and we fell into the trap of letting, and even amplifying, the world in seeing it as a product. Which led everyone, including ourselves, to a set of expectations that the learning prototype was not going to meet. So we can avoid part of that, but even if we made the [home robot platform] that you were talking about, what do you think the media would say, right after it came out?
Spectrum: This happens often with Kickstarter and other crowdfunded projects—many people don’t understand that they are supporting an effort to build something, not simply buying a product. And I suspect the same thing could happen to a robot designed as a learning platform if you’re charging people money for it.
Teller: Well, but now you’ve proposed an interesting alternate thing, which is, Alphabet or some other company would spend a huge amount of money to develop some new thing for which it’s not clear what the value is, then build a whole bunch of them, and give them to people without charging them.
Teller: Well, that’s a nice thing to wish for but that doesn’t seem super realistic, does it?
Spectrum: Well, it goes back to the idea of doing an experiment to find out what works and what doesn’t when it comes to robots in the home. Of course, there are some details that…
Teller: Hah! I have to live with these details, so I look forward to hearing your more detailed business plan for me to go implement when you have it.
Look, I’m very serious. We care about this. We want to solve this. I don’t think that what I want in terms of the future is different than what you want. We’re trying to find a path that actually is a financially responsible path. It doesn’t mean that we’re not taking risks, that we might not be wrong, that we might not have to entirely stop or go back and try again. But it can’t just be a Hail Mary, it can’t be “I wanna build Rosie the robot” so we’re just going to build it whether it makes any sense to or not.
Spectrum: That’s not a moonshot?
Teller: No, that’s not a moonshot. A moonshot, by definition, is a huge problem, which we can name, a radical proposed solution to the huge problem, and a clear, articulatable set of hard technology aspects, which we have some reason to believe would build the radical solution, which would solve the hard problem.
Spectrum: I guess I keep going back to what I see as a huge problem, which is, “Give me more time to do the things I want to do.” Whether the solution will take the shape of a humanoid robot or just a mobile dishwasher-looking machine with arms, I don’t know and I don’t care, I just want it to do my chores for me.
Teller: I think that one of the interesting and unanswered questions, which we’re serious about looking into, and I’m sure the rest of the world is also, is “Can your time best be saved by a single all-purpose thing or by a range of special-purpose things?”
The phone is a good example and the PC before it are examples where having it be just cheap enough, just ubiquitous enough, just programmable enough has turned out to be so powerful, even if it isn’t perfect for everything. On the other hand, things like your thermostat and your dishwasher in your home are some early evidence that special-purpose things might actually be better. And I really don’t know which one is going to turn out to be the right way to do it, but we’re very interested in that question.
Spectrum: Let’s talk about a different kind of robot project: the self-driving cars at X. What’s the latest with that?
Teller: That one is in the middle of graduating.
Spectrum: In a TED talk you gave earlier this year, you said that after a lot of real-world testing with your self-driving cars, you decided that having a person in the car ready to take over if needed is not ideal, and that’s something that we’re seeing now, that indeed it might not be physically, humanly possible to do that…
Teller: That’s what we discovered four years ago.
Spectrum: But trying to be fully autonomous has its challenges, too. Can you talk about this trade-off?
Teller: It’s harder. Reasonable people can disagree, but we have been very vocal that what we believe is the responsible thing to do, and ultimately the most valuable thing to do, is to go solve what in the parlance of the self-driving car world is called Level 4 driving. That’s fully autonomous driving: You push a button, the car takes you from point A to point B, and there’s no expectation that you’d be able to take over—depending on the car, maybe no mechanism for you to take over.
I think the only downside to that is we can’t responsibly field those cars in a large way until it’s very reliably driving safer than people are driving, and that’s a very high bar, given the complexity of what people do when they drive. So we have set ourselves up for a very hard problem, but we believe it’s the right problem to solve. And it’s a problem whose scope is big enough in terms of the number of dollars wasted every year sitting in traffic and the number of lives lost every year to poor driving, so this is a problem worth solving from a moral perspective and from a financial perspective. So I don’t think this is a misplaced bet, but if you’re saying, “Is this harder?” Yes. It’s harder, it’s much harder.
Spectrum: And how is the testing going for those cars?
Teller: We’re testing them in fully autonomous mode, and we have safety drivers and they just sit there all day with their hands [near the steering wheel] in case something bad happens. I think it’s a pretty dull job at this point because they go many, many miles in between having to even note something down about something weird the car did. But we learn a lot in this process, we gather a lot of evidence about where it’s working, and little places where we could do things better. There’s a lot of interesting problems still to solve, but they are getting more and more subtle.
Like if we’re waiting to move out into traffic and we do it safely but there were two or three holes as the cars were going by, and a human would have done it sooner, then our drivers might write on their little clipboard, “We could’ve probably done it sooner,” and then later we go look at that data and see how we could adjust the algorithms. It’s still safe, but if we had somebody being driven by the car, we would have wasted a tiny bit of their time because it didn’t merge at the first really safe opportunity. But the better we are, the longer we have to wait for each of those pieces of evidence.
So we’re driving 20,000 miles autonomously every week, and yet we’re probably seeing a tenth or a fifth as many of these interesting things every week as we used to, just because the car’s doing so well now that we have to wait a long time for it to do something even mildly imperfect.
Spectrum: What about Project Wing, your drone delivery project? Is that graduating, too?
Teller: That one is at an earlier phase. I certainly hope it does graduate at some point, but there are hard technical aspects to getting UAVs to move packages around in a very safe, very reliable way. And a big chunk of it, as with the cars, is building up confidence for the regulators. So we’re doing that in a responsible way: we have a sandbox where they can watch us, and then we slowly expand the sandbox as they build comfort with what we’re doing, as part of our process and part of their process. That’s the stage that we’re in, and that’s why we’re doing the testing that we’re doing.
And we’ve learned some things there. Of course, we’ve been doing lots of flying over private land, where there were no other people around, so we were not shocked to see our vehicles doing a pretty good job. But having the regulators sit there and watch it, and then being able to expand the footprint of where we operate, is a good process for us and for the regulators to go through.
Spectrum: And what is the moonshot exactly—delivery anywhere, even in urban areas?
Teller: Yeah, I’d describe that particular moonshot as: if the cost and the time for you to have whatever you want, whenever you need it, were pretty close to zero, the world would change in a whole bunch of really dramatic ways. There are a lot of things in your life that you own even though you rarely use them. You have a hammer at home, and you probably use that hammer one thousandth of the time, not even, one hundred thousandth of the time. But you don’t share that hammer with anybody. What a waste for the planet. Think about how much richer we could all be functionally if we could just share things between us. But we don’t, because we don’t want to have to wait 20 minutes to have a car drive a hammer to your house. There’s a lot of food that you keep in your house that’s very slowly going stale. On the off chance you need a particular spice or that extra half cup of milk, it goes bad in your refrigerator just in case you need it. You’re just wasting the planet’s resources, and that’s mainly because when you want something, you can’t have it quickly enough.
When someone brings you a pair of shoes, a pizza, or whatever, in a 6,000-pound car or a small truck, that doesn’t make any sense if the thing that you ordered weighs less than 6 pounds, one thousandth the weight of the delivery vehicle. And it’s clogging up the streets, it’s making noise pollution, it’s adding actual carbon pollution. There’s got to be a better way to do these things. So the moonshot is: How can we remove a bunch of the remaining friction in how physical things are moved around in the real world? One of the ways that’s most visceral for people to experience right now is food, because it’s one of the things that we have delivered most often, but that’s just the tip of the iceberg of how the world would change if, whenever you wanted something, it would just magically appear for you.
Spectrum: I saw that you’re making progress with another project, Loon, using high-altitude balloons to beam Internet to people in remote areas.
Teller: Lots of things have been going well for Loon, but probably the best news that we’ve given out recently is that, as we get smarter about what the winds are likely to be like above and below us, we get better and better at steering our balloons. So we can plan out a series of weird escalator rides where we go up and down about 4 or 4.5 kilometers, and we’ve gotten much better at being energy efficient about putting weight onto the balloon by stuffing air into a bag, basically. We had one particular balloon that stayed up for more than two months over Peru and just did little tight circles near Lima. That’s not a one-time thing now; we’re getting better and better at doing that, which really changes what Loon is going to be able to do. So that’s a project that’s going pretty well.
Spectrum: Finally, it looks like X is hiring. What would you tell my readers to convince them to apply, or in other words, why is X still a place that will come up with really cool tech projects that will capture people’s imagination, like it did with the self-driving car project several years ago?
Teller: Look, as they say on Wall Street, past performance is not a perfect predictor of future performance. That said, if you want to know the kinds of things that X is likely to do in the future, look at things like Google Brain, the life science businesses that we built, the self-driving car, Google Glass, the Tango tablet that came from X, the indoor localization technology [now part of Google Maps] that also came from us originally. Look at things like Loon, our attempt to fix the connectivity problem, or Makani, our airborne wind turbines, or Wing, our project with self-flying vehicles for package delivery. And there’s a startup that spun out from X in its early days called Flux, which is interested in changing how buildings are designed and built.
I think those are representative of the kinds of things that we take on. And we’re doing many things other than that, but those are some representative examples. If people are considering working at X, I think imagining having been at one of those places as it went from 10 people to 100 people is a good way for them to imagine what their time at X might feel like.
One of the nice things about X is that it’s designed to admit, “This is not a project we should continue.” On the surface, many people see that as scary: “Oh, I won’t have good job security there.” But the exact opposite is true, because we’re so good at recycling people. Talented people are our lifeblood; that’s actually where the value of X is. So we’re wired almost entirely to make sure that once we get world-class people into X, when their projects stop, they just end up helping us start the next one.
Spectrum: And they are okay with that?
Teller: I think it would be dangerous to claim that everyone is fine with that, but that is very strongly X’s culture, and once people have been inside long enough to believe that it’s true, they get a kind of comfort that many of them have never felt before. It’s dynamic stability, as opposed to static stability: if you’re moving fast enough on a bicycle, you’re not going to fall over, ironically, because you’re moving. It feels a little bit like that, I think, to many of the people at X.