'I Think We're Heading Toward the Best World Ever': An Interview With Sam Altman - The New York Times

We spoke to the OpenAI founder just two days before he was ousted by his company’s board in a surprise coup. To him, the future seemed bright.

When Sam Altman, the chief executive of OpenAI, strolled into The Times’s San Francisco bureau last Wednesday morning, he looked carefree and relaxed — a happy mogul in an orange hoodie, riding a wave of success atop one of Silicon Valley’s hottest companies.

Mr. Altman was there to record an interview for the podcast I co-host, “Hard Fork,” and to discuss the roller coaster year he has had since the release of ChatGPT, the chatbot that took the world by storm and kicked off a generative A.I. boom.

None of us — Mr. Altman included — had any idea that two days later, he would be fired from OpenAI by the company’s board, or that his ouster would kick off one of the most dramatic sequences of events in recent Silicon Valley history.

Less than a week after our interview, everything has changed for Mr. Altman. He now plans to run a new advanced A.I. lab at Microsoft, along with his co-founder Greg Brockman, who quit OpenAI in solidarity. Almost all of OpenAI’s employees have threatened to go to Microsoft to join them unless the board reinstates Mr. Altman and Mr. Brockman.

In our interview, Mr. Altman discussed the success of ChatGPT, his optimism about the future of A.I. and whether he is an “accelerationist” who believes that A.I. should progress faster.

We still don’t know the exact reason for the OpenAI board’s firing of Mr. Altman, but Ilya Sutskever, OpenAI’s chief scientist and a member of its board who led the coup, reportedly took issue with Mr. Altman’s aggressive push for faster A.I. progress. And while Mr. Altman is no longer in charge of OpenAI, he’s still the biggest figure in A.I. today, and his views remain as relevant as ever.

This is an edited transcript of the conversation among me; my co-host, Casey Newton; and Mr. Altman.

Casey Newton: It has been just about a year since ChatGPT was released, and I wonder if you have been doing some reflecting about where it has brought us in the development of A.I.

Sam Altman: Frankly, it has been such a busy year. There has not been a ton of time for reflection.

Casey Newton: That’s why we brought you in. We want you to reflect here.

Sam Altman: Great. I can do it now. I mean, I definitely think this was the year where the general average tech person went from taking A.I. not that seriously to taking it pretty seriously. Given that, I think in some sense that’s the most significant update of the year.

Casey Newton: I would imagine that a lot of the past year has been watching the world catch up to things that you have been thinking about for some time. Does it feel that way?

Sam Altman: Yeah, it does. We always thought on the inside of OpenAI that it was strange that the rest of the world didn’t take this more seriously, like it wasn’t more excited about it.

Casey Newton: I think if it was five years ago and you had explained what ChatGPT was going to be, I would have thought, “Wow, sounds pretty cool.” But I think until I actually used it, it was just hard to know what it was.

Sam Altman: I actually think we could have explained it and it wouldn’t have made that much of a difference. We tried. People are busy with their lives.

Kevin Roose: So I’m curious what you feel like you have learned about language models specifically from putting them out into the world.

Sam Altman: What I think you can’t do in the lab is understand how technology and society are going to co-evolve. So you can say, “Here’s what the model can do or not do.” But you can’t say, “And here’s exactly how society is going to progress alongside it.” And that’s where you just have to see what people are doing — how they’re using it.

One example that I think is instructive, because it was the first and the loudest, is what happened with ChatGPT and education. Days after the release of ChatGPT, school districts were falling all over themselves to ban it. And that didn’t really surprise us. We could have predicted it. But the thing that happened after that — quickly — was school districts and teachers saying: “Hey, actually, we made a mistake. And this is a really important part of the future of education. And the benefits far outweigh the downside. And not only are we un-banning it, we’re encouraging our teachers to make use of it in the classroom. We’re encouraging our students to get really good at this tool, because it’s going to be part of the way people live.”

And that is just not something that could have happened without releasing it.

Kevin Roose: But looking back, do you wish that you had done more to sort of give people some sort of a manual to say, “Here’s how you can use this at school or at work”?

Sam Altman: Two things. One, I wish we had done something intermediate between the release of GPT-3.5 in the API and ChatGPT. Now, I don’t know how well that would have worked, because I think there was just going to be some moment where it went viral.

Now, the second thing is, should we have released more of a how-to manual? And I honestly don’t know. I think we could have done some things that would have been helpful, but I really believe that it’s not optimal for tech companies to tell people, like, “Here is how to use this technology” and “Here’s how to do whatever.” And the organic thing that happened there actually was pretty good.

Kevin Roose: I want to talk about AGI and the path to AGI later on. But first I want to just define AGI.

Sam Altman: So I think it’s a ridiculous and meaningless term.

Kevin Roose: Yeah?

Sam Altman: So I apologize that I keep using it.

Kevin Roose: I mean, I just never know what people are talking about when they’re talking about it.

Sam Altman: They mean really smart A.I.

Kevin Roose: Yeah. So it stands for artificial general intelligence. And you could probably ask a hundred different A.I. researchers, and they would give you a hundred different definitions. Researchers at Google DeepMind just released a paper this month that sort of offers a framework. They have levels ranging from Level 0, which is no A.I. at all, all the way up to Level 5, which is superhuman. And they suggest that currently ChatGPT, Bard, LLaMA are all at Level 1, which is sort of equal to, or slightly better than, an unskilled human. Would you agree with that?

Sam Altman: I think the thing that matters is the curve and the rate of progress. And there’s not going to be some milestone that we all agree, like, OK, we’ve passed it and now it’s called AGI. I think most of the world just cares whether this thing is useful to them or not. And we currently have systems that are somewhat useful, clearly. And whether we want to say it’s a Level 1 or 2, I don’t know.

But people use it a lot, and they really love it. There are huge weaknesses in the current systems. I’m a little embarrassed by GPTs, but people still like them, and that’s good. It’s nice to do useful stuff for people. So, yeah, call it a Level 1. Doesn’t bother me at all.

Kevin Roose: What are today’s A.I. systems useful and not useful for doing?

Sam Altman: I would say the main thing they’re bad at is reasoning. And a lot of the valuable human things require some degree of complex reasoning. They’re good at a lot of other things — like, GPT-4 is vastly superhuman in terms of its world knowledge. It knows more than any human has ever known. On the other hand, again, sometimes it totally makes stuff up in a way that a human would not. But, you know, if you’re using it to be a coder, for example, it can hugely increase your productivity. And there’s value there even though it has all of these other weak points. If you are a student, you can learn a lot more than you could without using this tool. Value there, too.

Kevin Roose: Right now, I think what’s holding a lot of people back in a lot of companies and organizations is that it can be unreliable — it can make up things; it can give wrong answers. Which is fine if you’re doing creative writing assignments, but not if you’re a hospital or a law firm or something else with big stakes. And how do we solve this problem of reliability? And do you think we’ll ever get to the sort of low fault tolerance that is needed for these really high-stakes applications?

Sam Altman: So first of all, I think this is a great example of people understanding the technology, making smart decisions with it, and of society and the technology evolving together. What you see is that people are using it where appropriate and where it’s helpful and not using it where you shouldn’t. And for all of the sort of fear that people have had, both users and companies seem to really understand the limitations and are making decisions about where to roll it out. The kind of controllability, reliability — whatever you want to call it — is going to get much better. I think we’ll see a big step forward there over the coming years. And I think that there will be a time — I don’t know if it’s like 2026, 2028, 2030, whatever, but there will be a time where we just don’t talk about this anymore.

Casey Newton: Let’s maybe start moving into some of the debates that we’ve been having about A.I. over the past year. And actually, I want to start with something that I haven’t heard as much about, but that I do bump into when I use your products, which is they can be quite restrictive in how you use them. I think mostly for great reasons, right? I think you guys have learned a lot of lessons from the past era of tech development. At the same time, I feel like, for example, if I tried to ask ChatGPT a question about sexual health, it’s going to call the police on me. So I’m just curious how you approach that subject.

Sam Altman: One thing: No one wants to be scolded by a computer. We have started very conservative, which I think is a defensible choice. Other people may have made a different one. But again, that principle of controllability. What we’d like to get to is a world where, if you want some of the guardrails relaxed a lot and you’re not a child or something, then we’ll relax the guardrails. But I think starting super conservative here, although annoying, is a defensible decision, and I wouldn’t have gone back and made it differently. We have relaxed it already. We will relax it much more, but we want to do it in a way where it’s user controlled.

Kevin Roose: Are there certain red lines you won’t cross? Things that you will never let your models be used for other than things that are, like, obviously illegal or dangerous?

Sam Altman: Yeah, certainly things that are illegal and dangerous. There’s a lot of other things that I could say, but where those red lines will be so depends on how the technology evolves, that it’s hard to say right now. We really try to just study the models and predict capabilities as we go, but if we learn something new, we change our plans.

Kevin Roose: One other area where things have been shifting a lot over the past year is in A.I. regulation and governance. I think a year ago, if you’d asked the average congressperson, “What do you think of A.I.?” They would have said, “What’s that?” We just recently saw the Biden White House put out an executive order about A.I. You have obviously been meeting a lot with lawmakers and regulators, not just in the U.S., but around the world. What’s your view of how A.I. regulation is shaping up?

Sam Altman: It’s a really tricky point to get across. What we believe is that on the frontier systems, there does need to be proactive regulation. But heading into overreach and regulatory capture would be really bad. And there’s a lot of amazing work that’s going to happen with smaller models, smaller companies, open-source efforts. And it’s really important that regulation not strangle that. So I’ve sort of become a villain for this. But …

Casey Newton: Yeah. How do you feel about this?

Sam Altman: Annoyed. But I have bigger problems in my life right now. But this message of, “Regulate us, regulate the really capable models that can have significant consequences, but leave the rest of the industry alone” — it’s just a hard message to get across.

Casey Newton: Here is an argument that was made to me by a high-ranking executive at a major tech company as some of this debate was playing out. This person said to me that there are essentially no harms these models can cause that the internet itself doesn’t already enable, and that doing the sort of work proposed in this executive order is essentially just pulling up the ladder behind you, ensuring that the folks who’ve already raised the money can reap all of the profits of this new world while the little people get left behind. So I’m curious what you make of that argument.

Sam Altman: I disagree with it on a bunch of levels. First of all, I wish the threshold for when you do have to report was set differently and based off of evals and capability thresholds.

Casey Newton: Not FLOPS?

Sam Altman: Not FLOPS.

Kevin Roose: FLOPS are a measure of the amount of computing used to train these models. The executive order says that if you’re above a certain computing threshold, you have to tell the government you’re training a model that big.

Sam Altman: But no small effort is training at 10-to-the-26th FLOPS. Currently, no big effort is either. So that’s like a dishonest comment. Second of all, the burden of just saying “Here’s what we’re doing” is not that great. Third of all, to say that there’s nothing you can do here that you couldn’t already do on the internet is either dishonest or shows a lack of understanding. You could maybe say that with GPT-4, but I don’t think that’s really true. There are some new things. And GPT-5 and -6, there will be very new things. And saying that we’re going to be cautious and responsible and have some testing around that, I think, is going to look more prudent in retrospect than it maybe sounds right now.
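
For a sense of scale, here is a minimal back-of-the-envelope sketch of the reporting threshold being discussed. It assumes the common rule of thumb that training a dense model takes roughly six floating-point operations per parameter per training token; that approximation, and the model size and token count in the example, are illustrative assumptions rather than figures from the interview.

```python
# Illustrative only: a rough estimate of training compute against the
# executive order's 10^26-FLOP reporting threshold. Assumes the common
# approximation of ~6 FLOPs per parameter per training token.

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total floating-point operations for one training run."""
    return 6 * parameters * tokens

REPORTING_THRESHOLD = 1e26  # FLOPs, per the executive order

# Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens.
run = training_flops(70e9, 2e12)
print(f"Estimated training compute: {run:.1e} FLOPs")  # ~8.4e23
print("Must report?", run >= REPORTING_THRESHOLD)      # False, roughly 100x below
```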

Casey Newton: I would say for me, these seem like the absolute gentlest regulations you can imagine: Tell the government and report on any safety testing you did.

Sam Altman: It does seem reasonable.

Kevin Roose: Some people — some of the more vocal critics of OpenAI — have said that you are specifically lying about the risks of human extinction from A.I., creating fear so that regulators will come in and make laws or give executive orders that prevent smaller competitors from being able to compete with you. Andrew Ng — who was, I think, one of your professors at Stanford — recently said something to this effect. What’s your response to that?

Sam Altman: Yeah, I actually don’t think we’re all going to go extinct. I think it’s going to be great. I think we’re heading towards the best world ever. But when we deal with a dangerous technology as a society, we often say that we have to confront and successfully navigate the risks to get to enjoy the benefits. And that’s like a pretty consensus thing. I don’t think that’s a radical position. I can imagine that if this technology stays on the same curve, there are systems that are capable of significant harm in the future. And Andrew also said — not that long ago — that he thought it was totally irresponsible to talk about AGI because it will never happen.

Kevin Roose: I think he compared it to worrying about overpopulation on Mars.

Sam Altman: And I think now he might say something different. So, humans are very bad at having intuition for exponentials. Again, I think it’s going to be great. I wouldn’t work on this if I didn’t think it was going to be great. People love it already, and I think they’re going to love it a lot more. But that doesn’t mean we don’t need to be responsible and accountable and thoughtful about what the downsides could be. And in fact, I think the tech industry often has only talked about the good and not the bad. And that doesn’t go well either.

Kevin Roose: I know we talked about AGI and it not being your favorite term, but it is a term that people in the industry use as sort of a benchmark or a milestone or something that they’re aiming for. And I’m curious what you think the barriers between here and AGI are. Maybe let’s define AGI as sort of a computer that can do any cognitive task that a human can.

Sam Altman: Let’s say we make an A.I. that is really good, but it can’t go discover novel physics. Would you call that AGI?

Kevin Roose: I probably would, yeah. Would you?

Sam Altman: Well, again, I don’t like the term, but I wouldn’t call that done with the mission. We’d still have a lot more work to do.

Casey Newton: The vision is to create something that is better than humans at doing original science — something that can invent, can discover?

Sam Altman: I am a believer that all real sustainable human progress comes from scientific and technological progress. And if we can have a lot more of that, I think it’s great. And if a system can do things that we, unaided, on our own can’t do — just even as a tool, it helps us go do that? I would consider that a massive triumph and happily retire. But before that, I can imagine that we do something that creates incredible economic value, but is not the kind of AGI superintelligence that we should aspire to.

Casey Newton: What are some of the barriers to getting to that place where we’re doing novel physics research?

Sam Altman: We talked earlier about the model’s limited ability to reason, and I think that’s one thing that needs to be better. The model needs to be better at reasoning. An example of this, which my co-founder Ilya uses sometimes and which has really stuck in my mind, is that there was a time in Newton’s life when the right thing for him to do was to read every math textbook he could get his hands on, talk to every smart professor, talk to his peers, do problem sets, whatever. And that’s kind of what our models do today. But at some point, Newton was never going to invent calculus that way, because it didn’t exist in any textbook. At some point, he had to go think of new ideas and then test them out and build them. And that phase, that second phase, we don’t do yet. And I think you need that before we want to call it AGI.

Kevin Roose: What is “superalignment”? You all just recently announced that you are devoting a lot of resources and time and computing power to it.

Sam Altman: Alignment is how you get these models to behave in accordance with the human who’s using them. And superalignment is how you do that for super capable systems. So we know how to align GPT-4 pretty well — better than people thought we were going to be able to do. But we don’t yet know what the new challenges will be for much more capable systems. And so that’s what that team researches.

Kevin Roose: So what kinds of questions are they investigating, or what research are they doing? Because, I confess, I lose my grounding in reality when you start talking about super capable systems and the problems that can emerge with them. Is this sort of a theoretical future forecasting team?

Sam Altman: Well, they try to do work that is useful today, but for the theoretical systems of the future. So they’ll have their first result coming out, I think, pretty soon. They’re interested in these questions of, as the systems get more capable than humans, what it is going to take for them to reliably solve something like the climate challenge.

Casey Newton: This is the stuff where my brain starts to melt as I ponder the implications, because you’ve made something that is smarter than every human. But you, the human, have to be smart enough to ensure that it always acts in your interests, even though by definition it is way smarter than you.

Sam Altman: Yeah, we need some help there.

Casey Newton: I want to ask you about this feeling that Kevin and I have had; we call it “A.I. vertigo.” There’s this moment when you contemplate an A.I. future, and you start to think about what it might mean for the job market, your own job, your daily life, for society. And there is this kind of dizziness that I find sets in. And this year I actually had a nightmare about AGI. And then I sort of asked around, and I feel like people who work on this stuff — that’s not uncommon. I wonder if you have had these moments of vertigo. Or is there at some point where you think about it long enough that you feel like you get your legs underneath you?

Sam Altman: There were some. There were some very strange, extreme vertigo moments. Particularly around the launch of GPT-3. But you do get your legs under you. And I think the future will somehow be less different than we think. Like, we invent AGI, and it matters less than we think. And yet it’s what I expect to happen.

Kevin Roose: Why is that?

Sam Altman: There’s, like, a lot of inertia in society, and humans are remarkably adaptable to any amount of change.

Kevin Roose: I do want to push us a little bit further into the future than the five-year horizon we’ve been talking about. If you can imagine a good post-AGI world, what does it look like? Does it have a government? Does it have companies? What do people do all day?

Sam Altman: A lot of material abundance. People continue to be very busy. But the way we define work always moves. Like, our jobs would not have seemed like real jobs to people several hundred years ago. This would have seemed like incredibly silly entertainment. But it’s important to me. It’s important to you. And hopefully it has some value to other people as well. The jobs of the future may seem even sillier to us, but I hope people in our society get even more fulfillment out of them. And everybody can have a really great quality of life, to a degree that I think we probably just can’t imagine now. Of course we’ll still have governments. Of course, people will still squabble over whatever they squabble over. Less different in all of these ways than someone would think. And then, unbelievably different in terms of what you can get a computer to do for you.

Kevin Roose: One fun thing about becoming a very prominent person in the tech industry, as you are, is that people have all kinds of theories about you. One fun one that I heard the other day is that you have a secret Twitter account where you are way less measured and careful.

Sam Altman: I don’t anymore. I did for a while. I decided I just couldn’t keep up with the OPSEC.

Kevin Roose: What was your secret Twitter account?

Sam Altman: Obviously, I can’t. I had a good alt. A lot of people have good alts. But I think I just got, like, too well-known.

Kevin Roose: The theory that I heard attached to this was that you are secretly an accelerationist, a person who wants A.I. to go as fast as possible, and that all this careful diplomacy that you’re doing and asking for regulation — this is really just the sort of polite face that you put on for society. But deep down you just think we should go all gas, no brakes toward the future.

Sam Altman: No, I certainly don’t think all gas, no brakes toward the future. But I do think we should go to the future. And that probably is what differentiates me from most of the A.I. companies. I think A.I. is good. Like, I don’t secretly hate what I do all day. I think it’s going to be awesome. I want to see this get built. I want people to benefit from this. So all gas? No brakes? Certainly not. And I don’t even think most people who say it mean it. But I am a believer that this is a tremendously beneficial technology and that we have got to find a way to safely and responsibly get it into the hands of the people, to confront the risks so that we get to enjoy the huge rewards. And maybe relative to most people who work on A.I., that does make me an accelerationist. But compared to those accelerationist people, I’m clearly not one of them. So, I think you want the C.E.O. of this company to be somewhere in the middle — which I think I am.

Kevin Roose: So you’re acceleration-adjacent.

Sam Altman: I believe that this will be the most important and beneficial technology humanity has yet invented. And I also believe that if we’re not careful about it, it can be quite disastrous. And so we have to navigate it carefully.
