Narratives

92: AI, Philanthropy and the Future with Holden Karnofsky

In this episode, I'm joined by Craig Fratrik and by Holden Karnofsky, Co-CEO of Open Philanthropy, to discuss AI, philanthropy, and the future. We also discuss how the variance of future outcomes is much higher than people presume, and the consequences of that fact.

Transcript:

William Jarvis 0:05
Hey folks, welcome to Narratives. Narratives is a podcast exploring the ways in which the world is better than in the past, the ways in which it is worse than in the past, towards a better, more definite vision of the future. I'm your host, William Jarvis. And I want to thank you for taking the time out of your day to listen to this episode. I hope you enjoy it. You can find show notes, transcripts and videos at narrativespodcast.com. Holden, how are you doing this evening?

Holden Karnofsky 0:43
Doing fine. How are you?

Will Jarvis 0:46
Doing great, doing great. And I just want to note, for the listeners, we're also joined by my friend Craig Fratrik as well tonight. Well, Holden, do you mind giving us kind of a brief bio and some of the big ideas you're interested in?

Holden Karnofsky 1:00
Sure, yeah. Brief bio is, you know, I worked in finance for a few years out of college. And after a couple of years of that I wanted to give to charity, and kind of wished I could find a website that would help me figure out how to help the most people possible with the money that I was giving. And I had trouble finding that. So, you know, coworkers and I facing a similar problem decided to create it, and my coworker Elie Hassenfeld and I left and started GiveWell, which is now a website that basically tries to find the charities that can help people the most per dollar spent, and publishes these recommendations that people use to give. And there's, you know, a couple hundred million dollars each year now being given to GiveWell's recommended charities through those recommendations. After a few years of GiveWell, we met Cari Tuna and Dustin Moskovitz. Dustin is one of the co-founders of Facebook and also Asana. And they were trying to give away a very large fortune, in the billions of dollars, and had a similar goal: they wanted to do as much good as possible, they wanted to help people as much as possible with that. And we felt that they were in a little bit of a different position than, let's say, an individual donor coming to a website, because they wanted to build a whole foundation. So we basically started an internal project at GiveWell, and that eventually spun out into the organization I work at now, which is called Open Philanthropy. And Open Philanthropy, just like GiveWell, and just like my general career theme, is about how do we help people as much as possible with the resources we have, but it's more serving kind of a large-philanthropist model and less an individual-donor model. So a lot of what Open Philanthropy does, not all of it, is kind of higher risk, higher reward, doing what we call hits-based giving, where you might fund 10 different projects, and nine of them might kind of fail, and one of them might just be so epic that it makes up for everything. And so that's the direction I've kind of moved. And then over the years, as part of my job, I've just been more and more intensely looking for outsized opportunities to do good: things in the world that are what we call important, neglected and tractable. So things that are just a big deal, that aren't getting enough attention, and that we might be able to do something about. And that's what I'm always looking for. And as part of that, I've become very interested in this idea of longtermism, which is that if you want to help the most people, then one of the best things you can do is anything that helps the entire future of the world go better for all the generations that will exist to come. And so I've now become co-CEO of Open Philanthropy; my coworker Alexander Berger runs the other half, and I run the side that is focused on longtermism. That last transition, to focusing on longtermism, was in 2021, so it's all recent. And I'm happy to go into next what I see as some of the major ideas that could matter for the entire future. But that's kind of my story and how I came to what I'm interested in today.

Will Jarvis 4:02
I love that. And what are some of these big ideas that you think really matter for the long term?

Holden Karnofsky 4:07
Sure. Um, so one idea that I've just become extremely interested in is the idea that developing the right kind of AI, or the wrong kind, depending on how you think it's all going to shake out, could make this the most important century of all time for humanity. And the basic idea there would be that if you had something that was kind of able to do what humans do to advance science and technology, but it was also digital, so that it can sort of reproduce itself in a way that's much faster and less bottlenecked than humans, then you, according to standard economic growth models, would get a massive explosion in productivity. And that could be a much bigger change from today than today was from pre-industrial times, which would be really huge. In theory, if you just look at the economic modeling, it kind of says output goes to infinity. That's not actually possible, but you could see just a crazy, crazy amount of progress. In the same way that the last couple hundred years have seen more progress than the previous millions or billions of years, you might see the next decades see more scientific and technological advancement than everything that came before, combined. And so that could take us into a really qualitatively unfamiliar future. And how that transition goes actually could be something that matters for billions of years, especially if that scientific advancement kind of hits the limits of how far we can go, especially if we end up in some kind of stable society, which I think is very possible if technology advances enough. So that becomes very important and neglected, at least for me, because you're looking at just unimaginable stakes if this is true. And I think it's something that most people are not focused on when they think about, hey, what can I do to help the world? So that's what I call the most important century hypothesis, and I'm very interested in it. When I launched my blog, Cold Takes, about six months ago, I led off with the most important century hypothesis, a series of like 10 or 12 pieces making that case. I'm also just interested in any other narrative about what matters for the very long run, and what are things we can do to change the course of everything that's going on with humanity. And another set of ideas or narratives I've been examining recently is the idea that a lot of the best things that have ever happened have come from scientific and technological advancement and economic growth, what some people call progress, and what I, in one blog post, called rowing, which is like: if you're on a boat, you could argue about where it's pointed, or you could just row it forward. And so, you know, a lot of people feel that kind of rowing forward, just gaining more capabilities, doing whatever we want to do and being able to do it faster and better, has been a lot of the source of life getting better in the past. And so I think a lot of people believe that when you think about how to help the world, what you should be thinking about is how to speed things up: how to have more progress, more innovation, more advancement, more wealth, more economic growth. And I think some people really hate this idea; they think that economic growth and technology are evil, and we should slow them down. Some people really love this idea, and they think that economic growth and technology are the best thing that's ever happened, and maybe the only good thing that's ever happened, and that's what we need more of, that's what we should focus on. And as you can probably predict from the structure of the sentence, I don't agree with either.
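To make the "goes to infinity" point concrete, here is a minimal sketch of the kind of feedback loop those growth models describe; the equation is illustrative, not one Karnofsky cites. Suppose output Y feeds back into its own growth rate, because more output funds more (copyable) researchers, who produce more ideas:

$$\frac{dY}{dt} = a\,Y^{1+\varepsilon}, \qquad \varepsilon > 0,$$

which integrates to

$$Y(t) = \left(Y_0^{-\varepsilon} - a\,\varepsilon\,t\right)^{-1/\varepsilon},$$

so $Y$ diverges at the finite time $t^{*} = Y_0^{-\varepsilon}/(a\,\varepsilon)$. With ordinary exponential growth ($\varepsilon = 0$) output never diverges; it is the self-reinforcing loop that produces the "singularity," which is why the modeling result is read as "explosive growth" rather than a literal infinity.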

Will Jarvis 7:30
I think Craig and I both agree economic growth is incredibly, incredibly important. I guess the first question I have is, you know, about the Steven Pinker, Enlightenment Now picture: things have been getting better, infant mortality is going down. A lot of normal people, I think, are somewhat skeptical of this, and my pet theory is it's because, since 1970, wages have been fairly stagnant. If things aren't really getting better, it feels like they're getting worse, or they're fairly flat. So where do you come down on that, between, you know, the tech stagnation view versus the Steven Pinker, things-have-been-getting-better view? Or do you think that they kind of go together, in that things have gotten a lot better in many ways, a lot of people have come out of extreme poverty, and at the same time things have just kind of slowed down since the 70s?

Holden Karnofsky 8:20
Yeah, well, they can certainly both be true, and to some extent, I think they both are true. I mean, one story you could tell is that life has been getting better, but the rate at which it's been getting better has been slowing down. And technology has been advancing at a historical outlier pace, but the pace has been slowing down. So you can simultaneously be at an incredibly high speed by historical standards and be slowing down, and I think both are true. I mean, Enlightenment Now is more about, I think, quality of life. And I think in many ways you would expect quality of life to just lag technological progress. So right now, I do think we're getting slower technological progress than we were, let's say, 50 years ago, or maybe a little more than 50 years ago, maybe like 70 or 80 years ago. We are getting slower. But also, so much of the world is still yet to reap a lot of the benefits of new technology and new capabilities, that life is probably getting better about as fast as it ever was for the average person on Earth. That would be my guess as to where we are right now, and I'm happy to elaborate on that. I mean, I've got blog posts on all this stuff on my blog, and charts and all that. And some of it I know a fair amount about, and some of it I know very little about, but that's my high-level take.

Craig Fratrik 9:35
Interesting. So what I hear is kind of just being aware of at what scale the question gets asked, right? So the point about technology: if we look over, as you've discussed on your blog, and I'm a big fan of that post, you know, 10,000, 20,000, 30,000 years, technology is this incredible hockey stick. But then if we look over 40 years, it looks like it's this declining thing. And then if you look at the sort of unhappiness with lifestyle, the kind of Great Stagnation, Tyler Cowen, you know, the Western kitchen looks very similar to 50 years ago. But then if you scale outside of Western societies and you go all the way up to the entire world, you get a very different picture. So is that one of your main emphases, just being careful about what scale, or at least considering multiple scales, when you're answering these questions?

Holden Karnofsky 10:27
That's a good way of putting it. I think a lot of these debates are really confused because people just have different context windows in mind, or they have different things they're looking at. And so, you know, my kind of gloss would be: sure, in the USA, or in rich countries, the last few decades have seen slower growth in technology, and probably slower growth in quality of life for the average person, than the decades before that. On the other hand, if you zoom out and you're looking at hundreds of years, or thousands of years, it just looks like we're in the most exciting, crazy, ridiculous couple centuries of all time. If you look at it on a chart, it's almost vertical, it's just like a straight line up. That's the US on short and long timeframes. And then if you talk about the world, you get a slightly different picture, which is, you know, I don't think things are slowing down on a global basis, because there are still so many people in levels of poverty that you almost never see in the US. And so, for people across the world, I think the pace is quite high. And then the same applies, I think, when you look at the future. If you kind of ask, well, do we think the future is going to get better, I kind of want to say: look, the path we're on is an upward path, and I expect it to continue to be upward by default. But then there's crazy stuff that could happen; we could hit the next hockey stick, we could hit the next discontinuous thing. And, you know, I tend to think that technological progress and wealth growing seem like they've probably been a good thing over the last few hundred years. But if you take them up another notch, and you accelerate them even more, you're now looking at a different phenomenon, and you're now out of sample. And so I think it's worth asking questions about: is that going to be a good thing? And how should we think about that?

Craig Fratrik 12:09
And one thing I'd add to that, I just wondered about the general thing, and I mean, you've mentioned it with some of the stuff you've worked on, of just trying to make those out-of-sample predictions. That is not easy, right? Like the most confident sorts of predictions we have are, you know, the sorts of stuff GiveWell does with the RCTs, and, you know, scaling slowly and that sort of thing. How do you think about that problem? What do you think about facing that challenge?

Holden Karnofsky 12:35
Yeah, sure. I mean, the easiest kind of prediction to make is to look at what's been going on recently and say it'll just keep happening. And the longer something's been happening, the more likely it is it'll keep happening. So, for example, economic growth has been, globally, around a few percent a year for the past couple centuries. And so if you say, what will it be over the next 10 years, I would say probably a few percent a year. If you ask what it will be over the next 100 years, well, I'm a lot less confident, and I think there's a lot of room for something really wacky to happen. I think there's a lot of room for things to slow down a lot, and for us to run out of new technology and completely stagnate; I think there's a good argument that might happen. And I think there's a good argument for an explosion, the opposite. So I think there's a good argument that we figure out how to build the right kind of AI, or some other technology, that resolves certain bottlenecks and makes much faster growth possible, and then you see things go totally vertical, for better or for worse. And then I think there's also the possibility we just go extinct sometime over the next 100 years. So I just think you have to radically reduce your confidence when you're talking about, you know, the next several decades, because there aren't even that many decades of the current regime we're in; I mean, the Industrial Revolution wasn't that long ago. So I tend to think that people are talking too much about, well, are things slowing down since 1970? And what are the next 20 years like, and what were the last 20 years like? And I think there's more of that than there needs to be, relative to: well, in the next 50 years, in the next 100 years, what's something really crazy that could happen? How can we guard against the worst version of that happening? And how do you make the predictions? I mean, you know, I don't think there's any good way to make reliable predictions about the future, and the further out you go, and the more possibilities you entertain, the harder it is. But I think we have to do it, because I think the stakes are enormous. So I think we just have to do our best and try and learn as we go.

Will Jarvis 14:23
Have you thought about what we can do to increase the likelihood of, like, some singularity-like thing, where we really increase the slope of economic growth, you know, where we develop an AGI or something like that? Is it spending more money on basic research for AI? Is it, you know, inspiring more smart people to work on it? What does that look like? Or have you thought about it?

Holden Karnofsky 14:43
Are you asking about how to make this happen faster? Or how to make it go better?

Will Jarvis 14:49
Yeah, well, I would ask both, I guess. The first question is: how would you make it happen faster, if you were thinking about it?

Holden Karnofsky 14:56
Well, I mean, I just have to be totally open and say that, you know, I have a blog post in the most important century series that kind of says, look: if you're saying, hey, this could be a billion-dollar company, maybe the appropriate reaction is to be really excited. But when we're talking about the most important century, in the sense I'm talking about, my reaction tends to be, like, vertigo, getting a little dizzy and nervous and kind of nauseous. And, you know, so I have a tough time. When I think about the prospect of an AI-driven productivity explosion that kind of takes scientific advancement to a pace that we've never seen before, I have a little trouble being like, great, how do we make it happen? I'm a little like, ooh, maybe that would be really good, maybe that would be really bad. If anything, I kind of wish we had a little more time to just think about what we're dealing with here, and think about how to make the worst case not happen. So yeah, I mean, I guess if I did want to speed it up, I would generally expect more investment in AI; that would be my best guess. But I'm not sure that speeding it up is the best way to make it go well, and I think whether it goes well is a lot more important than whether it comes five or 10 years earlier. So I tend to think more about how to make it go well.

Craig Fratrik 16:06
Can you go into that more? Can you just expand on why, tell me more about that feeling of vertigo? I mean, I know you do this in your series, but for the podcast, can you talk about why? You know, accelerated scientific advances, economic growth, these all seem like generally good things. Unpack that vertigo.

Holden Karnofsky 16:25
Sure. So one way of just thinking about it is that what we're talking about would be very out of sample. You know, what are the historical trends, and what do we think is next? I think a lot of people, including Enlightenment Now, are saying: for hundreds of years, economic growth has been happening, and the world has been getting better, so probably economic growth is more good than bad, and if there's more of it, things will be better. And I agree with that, as stated, as a guess as to what's coming. But if the next 50 years see more growth than you would expect from the current pace in a kajillion years, then we're just out of sample. And one way of thinking about it is: over the past 100 years, we've seen people going from not having enough to eat to having enough to eat, and from dying of infectious diseases to having straightforward cures. So that stuff is great, and that stuff seems to be making the world better. But that's not what's going to happen if we get a kajillion times that much progress. We're going to run out of diseases to cure, we're going to run out of people who are hungry to feed. And what we're going to start doing instead is maybe start creating virtual worlds where people can instantly clone themselves and break the laws of physics and, who knows, maybe, like, reprogram each other's minds. And when you think about that stuff, it's a lot less clear that that would be good. And we can't assume it would be good just from the fact that things have been good for a couple hundred years, because we're talking about the equivalent of another kajillion years at that rate. So that's the sense of vertigo; I mean, that's the core of it. The core is that I think people tend to talk about overly short timeframes, and so they miss when they're out of sample. They're saying, you know, well, progress has always been good, I'm sure it'll be good. And I don't think that applies when you talk about a potential explosion, something we've never seen before. That's the high-level sense of vertigo. A little more detailed sense of vertigo is just thinking through: well, what would that actually look like, if we had a massive explosion in scientific and technological advancement? And there are two things that make me nervous. One, one of the pieces in my blog post series is about digital people. So it's about the idea of having people who live in virtual worlds and kind of are as malleable or programmable as computers in some sense, or their environments are. It's just one example of what happens if technology advances a lot; there are many other things that could happen. This is one example that I think is particularly easy to think about. And it makes particularly clear that if we got that technology, we would have the potential for an incredible utopia or an incredible dystopia, and we don't really know which one we're gonna get. And so, you know, it would become possible for potentially authoritarian, abusive regimes to last forever, is one argument that I make in that series. It would also be possible for disease and health problems and violence to be gone forever. So that's the vertigo too. It's just like, gosh, if we were digital, we could really be in a utopia or a dystopia, and I don't know which one we get if digital people technology drops tomorrow. So that's an object-level thing. And then a third thing that specifically makes me nervous is, you know, if we were to build AIs that can do everything humans do to advance science and technology, that would be the second time in the history of life that there was something capable of creating its own technology. The first time was humans. So in a lot of ways, we would be welcoming another species to our planet that is able to do what we're able to do, and able to be as powerful as we are. And that makes me nervous too, because I don't know what that species is going to want. And I think it would really depend on exactly how we build it, and I'm not sure that the default way of building it results in something good. So another post that I have is about why AI alignment might be hard, and why, if you just blindly forge forward with today's AI development paradigms until you get something really, really powerful, well, that thing might actually have goals of its own and want to kill you. And so that makes me nervous too. Just, gosh, there's never been something that's as powerful as humans and is not human before. That makes me nervous. So yeah, those are all reasons. I think the upside is large, but the downside is large, and I spend my time trying to think about how to get the good without the bad.

Will Jarvis 20:45
Do you have any directional thoughts on, you know, how you get the good without the bad? Is it just, you know, spend a lot of resources on AI safety and alignment, and try and make sure that goes well? Or is there anything more specific you've thought about?

Holden Karnofsky 21:00
Yeah, I do. In my head, it's kind of like the first gate you have to get through is: you don't want an AI that has goals of its own that you didn't intend at all. You don't want to accidentally design something where, you know, you've been giving it thumbs-up signs to signify that it did something good, and now it just wants as many thumbs-up signs as it can get, even if that means, like, disassembling stars to turn them into thumbs-up signs, or I don't know. That's like step one, that's gate one. And so getting around that, and figuring out how to design an AI that doesn't do that, is often referred to as the alignment problem. And I've generally been very interested in the alignment problem recently. It's something I've been thinking about myself; it's something I've been trying to get more people to think about. I think that people who are into weird, abstract questions, if they can think productively about the alignment problem, that would be a really great thing to do. If we do get aligned AI, we're not out of the woods. I think it matters a lot if good people or bad people, good governments or bad governments, are the ones to kind of have a lead in AI technology. And so thinking about that, and thinking about, you know, what kind of government do we have right now? How can we work toward that government being the kind of government that won't use advanced AI for abuse? I mean, I think there's an awful lot to do. And so with all that in mind, just saying, well, how do we make this come faster, is quite low on my list.

Will Jarvis 22:26
Yeah. Making sure it goes well is really important. Yeah.

Holden Karnofsky 22:31
Yeah. So I'm willing to wait an extra 10 years to get kind of a, you know, human-rights-respecting, AI-driven regime instead of God knows what, right? Especially because, and this is something that needs defending, I'm happy to discuss it if you want, and I do talk about it in the blog post series, but at a certain level of technological advancement, you run out of new innovations, and you may be able to create a civilization that stays just as it is for billions of years. So if this century is going to be when we decide what the next several billion years look like, I'd rather take some extra time and get it right.

Will Jarvis
Definitely, definitely, definitely.

Holden Karnofsky
And I can live with the 10 years that we had to wait.

Will Jarvis 23:13
Are you worried about the 10 years, like, to some extent?

Holden Karnofsky 23:15
Yeah, I think on a 1 billion year timeframe. Yeah, exactly. Exactly.

Will Jarvis 23:18
Really, it changes your perspective, right? Yeah. I'm curious, and this kind of goes along with this, it's a bit of a segue, but do you think this trend towards less violence will continue? Or do you think, you know, something like the rhetorical violence on Twitter eventually spills over into the real world again, and we get some kind of mean reversion back to what historical levels of violence have been?

Holden Karnofsky 23:41
Yeah, I've written a couple of posts about The Better Angels of Our Nature and about the hypothesis that violence is declining. And, you know, as far as I can tell, everyday violence, things like homicide, that is declining, and has been declining for a long time, and my best guess is that it continues to decline. I do think there's a much bigger question of whether that's been offset by the increased rare risk of giant, crazy catastrophes with tons of deaths. That's a hard thing to know from looking at charts, because we're talking about rare, giant events, right? But an interesting fact is, when you look at the deaths from the world wars, and some of the atrocities committed in the 20th century, they kind of make up for a lot, if not all, or more than all, of the drop in homicides over the past several centuries. So what does the future look like? I kind of think it's the same direction. You're going to see fewer homicides every day, and the rare risk of something really wacky happening just gets bigger and bigger. So, you know, there's the possibility that at some point in the coming decades, someone will be able to deliberately design a bioweapon that would be much worse than COVID-19. I mean, that's a kind of risk we've never faced before. Nuclear war is a risk that has only existed for, you know, whatever, 70 or 80 years. So there are a lot of new cards, new things coming onto the table, that could be more violent and more dramatic and more complete than past things. But homicides are going down. So that's good.

Will Jarvis 25:25
Definitely. Well, I always find that fact fairly disturbing, in the sense that: is there just some equilibrium level of human violence, where we just commit this much violence, and maybe we can move it over to these, like, tail-risk things, but at the end of the day this amount of people are getting murdered every so often, or something like that? And maybe, morally, we don't end up getting that much better over time. Yeah.

Holden Karnofsky 25:48
I don't think there's a magic equilibrium. I think it's just super uncertain. I don't think it all cancels out; I just think it's hard to say. And, you know, in general, when I think about historical narratives, I mean, I've always been into these grand narratives in history. I was a social studies major in college and read a lot of the social theorists, you know, Marx and Durkheim and Weber. But I do think a lot of the narratives people want to talk about today are emotionally loaded narratives, like: are things getting better? Are things getting worse? And I mean, I have answers to those questions, but my answers tend to be kind of complicated and annoying and unsatisfying. And what I prefer is narratives about what we're reliably seeing more of as time goes on, what we're reliably seeing less of as history advances, or as years go by. Is it just one random thing happening after another, or are there systematic trends that are consistent? And I think there are systematic trends. If I were to just say what distinguishes later times from earlier times, I wouldn't say they're better, I wouldn't say they're worse. Now, saying that today is better than a few hundred years ago, well, that's pretty big; it's better than any time in the past, and I think that's also true, and that's great. But fundamentally, that's not what makes today different from the past, what makes a later date different from an earlier date. One approximation would be that there's more people, which has been basically true throughout history; population has almost always grown. And there's a greater stock of ideas. So there are people, and people innovate, and people have ideas. And as the ideas come into the world, they don't go away; the ideas remain there for other people to use, and so the stock of ideas grows. And then I do think there was probably a cultural change in the last few hundred years where the rate of ideas went up a lot, and it now might be slowing down a bit. But you always have more people and more ideas. And that leads to something I call empowerment, which means that we have more capabilities, we have more options, there's more things we can do. So what does that mean for violence? Well, on one hand, it means that we can enforce laws better. And so if we all agree, or 90% of us agree, that we don't want homicide, we have an easier time stamping it out. On the other hand, if one very powerful government decides they want to kill lots of people, well, they also have more power to do that. They have more options, they have more technologies that can help them oppress people at great scale. So that's kind of how I tend to think of it: start from the idea of what's really going on in history, what's the arrow of time. And it's pretty reliable: there's more people, and there's more of a stock of ideas. And then everything else, about does that make things better, does that make things worse, falls out of that. And it's a bit inconsistent, and it's a bit contingent, and it's a bit unsatisfying. Which is why, when we look at potential events that could cause dramatic changes in everything, and huge accelerations, we shouldn't be too confident about what direction that's going to go.

Will Jarvis
Got it. Got it. Great. Did you have a question?

Craig Fratrik 28:48
Yeah. Alright, so a quick question about the people trend, the one that's, you know, predictable. Isn't world population maybe the one trend that isn't going to keep going up? I mean, do you have a stand on whether population peaks?

Holden Karnofsky 29:03
I haven't looked at the official projections in a while. I think by default, fertility rates seem to be falling in, like, developed countries, to the point where you would actually get below replacement. So historically, yeah, population has almost always gone up; I mean, there are exceptions, like the Black Death. But in the future, yeah, I think by default you might expect population to flatten out and even decline at some point. Then there's this question of what we mean by population, because, you know, if we did create a technology like digital people, or even advanced AI, you could have science happening as if population were exploding. And depending on your philosophical views about whether digital people are conscious, which we can talk about if you want, some people could believe that population is declining, and some people could believe it's exploding. But yeah, I do think there's a good chance, you're right, there's a good chance of population decline in our future. And that would be something that would be very historically unusual. But the stock of ideas seems to be solidly on the upswing.

Craig Fratrik 30:02
If I were going to summarize some of the things we've discussed so far, maybe the quickest way I could put it would be: the variance of the future is much higher than people are considering or taking seriously, right? And that's informed by some of the stuff you said earlier, and the stuff you just said recently. It's like, actually, empowerment has been going up over time, and projecting that forward just keeps it going. So the question I want to ask about that is: what is the experience like, trying to yell at society, hey, variance is going up, you should be paying attention to this, when most of society is engaging in Twitter disputes and assuming mean reversion, assuming, like, ergodic distributions or something?

Holden Karnofsky 30:48
I mean, it definitely seems to me that it's hard to get people hyped up about the long-run future, or even the long-run past. You know, a thing that I just consistently do on my blog, I have a post called something like "If I were a billion years old," and I just consistently try and take this very, very long-run view. Every time I talk about anything, I'm just like, forget about what's happened since 1990; how far back can we go, right? So I have a "Has Life Gotten Better?" series where I'm like, okay, I agree with Enlightenment Now that life has gotten better since a few hundred years ago. What about the 10,000 years before that? What about the hundreds of thousands of years before that? I have a series about what's happening to the rate of innovation, you know, great art, great science, and instead of going back to 1970, or back to 1920, I go back to 1400, which is as far back as I was able to go. So I'm always trying to do that. And then when I look to the future, I ask: well, what are things that could happen in the next 50 years that could actually matter for hundreds of years, thousands of years, billions of years? Most things won't; what are the things that could? And look, I mean, if the media were full of this stuff already, then I would not be writing all this stuff, I wouldn't bother. And when I do write it, it's not exactly viral wildfire most of the time. So I don't know, I'm not screaming at anyone, I'm not mad at anyone. But does it feel like people are generally uninterested? Yeah, it mostly does. Some people are very interested, and that's great. But for the most part, it generally seems pretty hard to get people to have serious intellectual conversations about very long time frames.

Craig Fratrik 32:28
I realized an implicit assumption of my question. It just, I think, comes from the fact that I have been relatively exposed to these ideas recently. So I'm very familiar with what more mainstream society discusses, and then I read your stuff, and I find it very interesting and compelling, and I notice it's a small minority of what goes on. But maybe the better way to frame the question is: what has been the progress of these ideas over the past five or 10 years? Because maybe there's a hidden growth curve there that I don't see, being kind of a latecomer to these ideas.

Holden Karnofsky 33:02
Sorry, can you clarify that? I don't think I understand the question. Yeah.

Craig Fratrik 33:05
So I see you fighting this uphill battle, and I appreciate it. I support you, I like it, I'm cheering you on.

Holden Karnofsky 33:12
Good. Thanks.

Craig Fratrik 33:13
But what I have less visibility into is what it was like developing these ideas over the past five years, or since the time you've been working on this stuff. And the question I want to ask is: has there been significant growth in the ideas, and in the people engaging with the ideas? Has there been growth that's not, you know, visible to me, to us on the outside?

Holden Karnofsky 33:35
Oh, I see. Yeah. I mean, they're, by and large, not my ideas. As part of my job, I try and get to know the people who are obsessing all day about how to help the most people possible, and how to think about the long-run future. And it is true that a lot of ideas are kind of floating around in those social circles, and they're very interesting. And there are a lot of private discussions and emails and stuff that are really interesting, but they're not getting a rigorous public presentation. And a lot of what we've been trying to do at Open Philanthropy is take some of the most important ideas and really critically examine them, pin them down, and put them in a form where you can, you know, put a technical report online. And then I've been writing these blog posts where I kind of summarize the technical reports, talking about the big-picture lessons and big-picture takeaways. So I would say there is a community of people who talk about this stuff, who have a lot of great ideas. The community has been growing, the ideas have been advancing, but in the scheme of things, it's a small community. And these are very early-stage ideas, to the point where a lot of them just aren't written up anywhere, or are getting their first kind of real public analysis with the reports that we've been writing, sometimes.

Craig Fratrik
Makes sense.

Will Jarvis 34:53
You know, Holden, I really like your writings on utopia. Which is funny, because utopia is almost, you know, it's often viewed as like a fool's errand. Why should we care more about thinking about utopia?

Holden Karnofsky 35:10
Sure. I mean, I spent about a year kind of obsessed with utopia. I would look around and say, you know, has anyone thought about what we hope the world eventually looks like, when we have a lot more advanced technology and can do whatever we want with it? And the answer was always like, well, fiction. I mean, there's almost nothing else. There are projects where people try to create utopias today, but I was curious about what we hope the world looks like when we have a lot of capabilities and empowerment and technology that we don't have today, and that's hard to find outside of fiction. And then, you know, I was like, fine, I'll read the fiction and check it out. And I felt like a lot of what I found was just genuinely unappealing. I would also try and kind of survey people, using Mechanical Turk, on how they felt about it, and they didn't find it appealing either. I tried going to a conference on utopian studies, and it was like a literary criticism conference, and basically everyone there was just talking about why it's silly to talk about utopia, or what weird beliefs make us think it would be an okay idea to talk about utopia. So I was like, wow. I mean, I feel like it would be a very natural thing to be having debates over what we hope the world eventually looks like, in the long run, when we have a lot more technological capabilities than we have today. But this is a very unpopular topic. I would even say it's a topic that people mock, a topic that people hate, a topic that people are uncomfortable with. So I've been writing a bit on my blog about that. And, you know, I think there are just some kind of structural reasons: it's just hard to discuss utopia without kind of grossing people out and putting them off. It's hard to tell a story about anything super different from today and have it sound appealing, because a lot of people are attached to various things about the status quo in their lives. It's hard to tell a story without conflict and have it not sound boring. And that point I just made has been remarked on a bunch before; these other ones, I think, less so. It's also hard to talk about a world and not have it sound homogenous, because the world we're in is very diverse, and people are able to do all kinds of different things, and it's hard to describe that, and it's hard to get a feel for that. So I think that is something that holds us back. But I think we should try and push through these obstacles, and should talk about what we hope the world eventually looks like. And that is part of my feeling that there isn't enough discussion about extreme possibilities for the long-run future, how we get the good ones and avoid the bad ones. And one of the reasons I think there's not enough discussion of that stuff is because I think a lot of the long-run future could be here faster than we think.

Will Jarvis 37:51
Definitely, definitely. We should at least have thought about it a little bit before it hits us, right?

Holden Karnofsky 37:55
Yeah, I think so. I mean, if you think that technology's not going to be able to do all the things we can think of for another million years, then that would be one thing. My view is roughly that the rate of technological advance is somewhat of a function of population, although ideas get harder to find over time, so over time you need more population to have more ideas. But if you have something that fulfills all the functions of a human scientist or technological advancer, and you can duplicate it, and that would be AI, then you kind of have this exploding, self-reinforcing population loop, and you can get any rate of technological advance. And so when you think about crazy sci-fi futures, I mean, those could be very, very soon in calendar time. And so thinking about what we want the world to look like does seem worth doing. And a lot of it, for me, is also just trying to continue to find ways to think about the long run, and get myself into the headspace of someone who's really asking: when I zoom out on a chart that has billions of years on it before us and billions of years after us, what are the things that matter for how that chart looks? Instead of just getting wrapped up in stuff that will matter for another five years and then won't matter anymore.
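As a rough illustration of that loop, here is a toy simulation; the parameter values are made up, and this is only a sketch of the dynamic being described, not a model Karnofsky endorses. With a fixed research population and ideas getting harder to find, growth decelerates; if output can be converted back into more researchers, as with duplicable AI, the loop becomes self-reinforcing and the technology level explodes.

```python
# Toy model of the population -> ideas -> technology feedback loop.
# All parameters are illustrative, not calibrated to anything.

def simulate(copyable: bool, steps: int = 200):
    tech = 1.0         # level of technology / output
    researchers = 1.0  # effective research population
    for step in range(1, steps + 1):
        # Ideas get harder to find: growth per researcher falls as tech rises.
        growth = 0.02 * researchers / tech ** 0.5
        tech *= 1 + growth
        if copyable:
            # Duplicable researchers: output is reinvested in more copies,
            # closing the self-reinforcing loop.
            researchers = tech
        if tech > 1e12:  # call this an "explosion" and stop early
            return step, tech
    return steps, tech

for label, copyable in [("fixed human population", False),
                        ("duplicable AI researchers", True)]:
    step, tech = simulate(copyable)
    print(f"{label}: tech level {tech:.3g} after {step} steps")
```

The same idea-production function gives slowly decelerating growth in the first case and a finite-time explosion in the second; the only difference is whether the researcher count can scale with output.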

Craig Fratrik 39:10
Definitely. So I have a question, and I think it's similar to one that Will was kind of after, so apologies for stealing it. It's kind of like: what's your optimism posture in doing this work? Again, I'm going a little meta. So, you know, there are all these things, as you say: we've been empowered more and more over time, and the next century could be just, you know, 1000x, just so much more than what we've seen before. And I hear this conversation and I'm like, oh yeah, these types of discussions don't land with our natural inclinations, which were formed in hunter-gatherer tribes when the world was static, more or less, right? And so, do you do this work because you feel like it's the best thing we can do? I can imagine you thinking, well, we have a 1% chance of success, and anything we do increases that, so it's definitely worthwhile. Or you could think we have a 99% chance of success, and we've just got to get that 1% chance of failure lower and lower. So do you have, like, a scale? Are you hopeful?

Holden Karnofsky 40:19
Are you kind of asking, like, do I think, by default, things are gonna go well or badly?

Craig Fratrik 40:23
Yeah, that's fine. Well, yeah, sorry, that's it.

Holden Karnofsky 40:25
You know, I think surprisingly little about that question. I think I resist the framings of optimist and pessimist. Where I really want to focus, what I really want to convince people of, is that it's indeterminate. It's not a story written in the heavens for us to live out. We're here right now, doing things that matter, and we might get a good outcome, we might get a bad outcome. And, you know, ignoring it all, and just paying attention to stuff that doesn't fit into that picture, is not the way we're going to get a good outcome, I don't think. Maybe it is, especially if you're a person who's going to make things worse by getting involved. But I tend to not frame things that way. And to be honest, I've just been all over the place. I've had times when I've just thought, well, this will almost certainly go fine, and eventually it'll be great, because eventually we will make the most of our empowerment, and whatever problems come up, we'll find a way to fix them, because as we get more capabilities, those include the ability to reverse problems that we've created with our new capabilities. So I've had that kind of thought. And I've also had the kind of thought that's just like, gosh, we're just a train headed for a wall, and no one is interested in finding the steering wheel, or whatever, trains don't have steering wheels, maybe that's the problem, you know. But I mean, we just don't have time, we're not going to wake up to what's going on, and it's going to be too late, and we're going to, like, create the seed of an advanced civilization, and that seed is going to take over the galaxy before any of us can really react. And, you know, if we introduce some new species onto this planet, whether that's digital people or AIs, we're very quickly going to be less powerful than them, and they're going to decide where things go. And so the kind of half-assed, unthoughtful thing we created will soon be running everything, and we'll have lost our chance to have any say. So I've honestly occupied both positions. And I don't really spend time trying to convince people of one or the other. I try instead to spend time convincing people that it's not written in stone, and that we should care which one we get.

Craig Fratrik 42:40
I see. So it's something like: that sort of external evaluation of the likelihood of success or failure is not very actionable. Instead, the key point is that, again, there's much more variance than people think, and therefore we should do a lot of stuff to try and make sure it goes well.

Holden Karnofsky 42:58
Yeah, that tends to be what I focus on the most.

Will Jarvis 43:01
Yeah, super smart, super smart. Well, going off of that, this is a question Craig floated to me, for you. You know, you've worked in EA in a lot of different capacities: there's the idea generation, like writing the blog, the media stuff; there's community building; there's org building; there's all kinds of different stuff. And Craig, you know, if you want, expand on that a little bit. But what has been the most important, and what for you do you think has been most impactful, of the things you've worked on?

Holden Karnofsky 43:32
Yeah. You know, I think I have a little trouble answering that question, because I think a lot of what is most important comes down to philosophical judgment calls. So while I care a great deal about the long-run future, and I'm currently spending all my time on it, or just about, for division-of-labor reasons, I've also spent tons of my career just trying to help reduce global poverty and improve health in poor countries, and do other things that just try to make the world better today. And it can be kind of apples and oranges. Open Philanthropy has worked on a lot of issues, and GiveWell has worked on a lot of different issues, and you would have to have a long, kind of philosophical discussion to figure out which did more good. But, you know, I do think in terms of some of the stuff I'm particularly glad to have been involved in. I mean, I think GiveWell is redirecting a lot of donations, and it's also probably causing a lot of donations to happen that wouldn't have happened otherwise, and getting them to where they can really help people the most per dollar. I think they've caused a lot of anti-malarial equipment to be distributed, and a lot of children to be treated for intestinal parasites. And, gosh, I think those are much better things to do with money than most things you can do with money, and than what else would have been done with the money. So I'm really proud to have played a part in helping to start that organization. You know, I'm not there anymore, but I'm really proud of the role I played in helping them get started. And also, when I look at some of Open Philanthropy's big projects, I mean, I'm really glad that we've gotten so involved in farm animal welfare. I think it's incredible how poorly animals are treated on factory farms, and I'm really proud that we've supported a lot of amazing work by animal advocacy groups that certainly had momentum before we came in, but I'm glad we supported them and glad we've tried to help speed them up. Over the last few years, we've seen an incredible number of corporate pledges to treat animals better, and we've also seen great progress in alternatives to animal products, you know, for food. So hopefully animal suffering is a lot lower than it would be otherwise. And a third example I'd give of something that I'm proud of is that we started a biosecurity and pandemic preparedness program a number of years ago, several years before the COVID-19 pandemic. We were supporting a lot of organizations in the US that plan around pandemics, to try and have better policy for preventing pandemics, and that were part of the pandemic response. And once COVID-19 hit, a lot of people were interested in pandemic preparedness, but it was too late then to go back in time and have these organizations be well funded for several years and build up capacity. So I'm glad we were able to support them.

Will Jarvis 46:42
Holden, well, I want to first thank you, for humanity; you've done really a lot of good, and I really mean that. And that's a great segue: you're really good at finding, you know, causes that are underserved. Do you have a system for thinking through how to find those things? Or is it just something where you study a lot of things, and then you notice, wow, no one's working on this? It's like GiveWell, right? I mean, how long have we had charities? And yet, at the same time, most charities are quite ineffective, and nobody had ranked these things and figured out how we can do the most good. And maybe that's a difficult question to answer, but how do you go about finding, you know, alpha when you're trying to do these things?

Holden Karnofsky 47:24
Sure. I mean, thanks for the kind words. One thing about alpha is just that effective altruism is really new, and it could be interesting to talk about why that is. Because, you know, I just don't think it was true 50 years ago that you had a bunch of people all kind of intensively asking: how do I do the most good per dollar? And I think the reason people are asking today might have something to do with just the fact that we live in an incredible information age. Some of the questions that GiveWell asks, that Open Philanthropy asks, you know, 50 years ago, I don't know how you would even start to answer them. Go to your local library? Oh, gosh, that's going to be a tough one. These days, you can kind of say: well, I made a database of 1000 charities, I went through all their mission statements that I pulled from the tax records, I looked at the ones that were relevant, I went to their websites, I looked at what they do, I went on Google Scholar, found the papers on what they do, read the papers, found the papers they reference, read those. So I think that may be part of why this whole effective altruism thing just hasn't been around that long. And I think there has been a lot of low-hanging fruit. And I think if you made a clone of me today, at age 20, you know, I don't think they would have as easy a time finding cool stuff to do that nobody's doing. So that's sort of it. You know, our general framework at Open Philanthropy, that we just use over and over again to find things we might want to get involved in, is importance, neglectedness and tractability. So we look for things that matter a lot, and you can kind of quantify that: you can say, here's a problem in the world; if we fixed the problem, how big a deal would that be? How much wealth would it create? How many people would it help? How many lives would it save? So you can put rough numbers on importance. Neglectedness: we look for things that aren't getting enough attention, so you can ask, you know, how many dollars are directed towards solving this problem right now, or how many people work on it? And then tractability is kind of the hardest one; you never really know if it's tractable, but you can at least ask yourself: do we see a path forward? Are there activities we can do that we think would make a difference? So we do use those criteria over and over again. We use them for the long-run future, and we use them for helping global health and wellbeing, which is the other work that we do. And we do turn up a lot of different causes that way. When you specifically look for neglected things, and you're always chucking stuff away when it already gets plenty of attention, that is, I don't know, it probably sounds too easy, but that is kind of a way to cheat and find good things, things that other people aren't doing.
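For concreteness, here is a toy version of that importance, neglectedness, tractability screen. The causes and 0-to-10 scores are invented, and the real Open Philanthropy assessments are qualitative investigations rather than a three-number product; this just shows how multiplying the criteria automatically demotes causes that are crowded or intractable, however important.

```python
# Toy importance / neglectedness / tractability screen.
# Causes and 0-10 scores are invented for illustration.
causes = {
    "hypothetical cause A": (9, 8, 3),  # huge, ignored, hard to move
    "hypothetical cause B": (6, 2, 8),  # solvable but already crowded
    "hypothetical cause C": (4, 9, 6),  # modest, ignored, workable
}

def score(ratings):
    importance, neglectedness, tractability = ratings
    # Multiplying means a low rating on any one axis tanks the cause:
    # huge-but-crowded and neglected-but-hopeless both drop down the list.
    return importance * neglectedness * tractability

for name in sorted(causes, key=lambda n: score(causes[n]), reverse=True):
    print(f"{score(causes[name]):4d}  {name}")
```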

Will Jarvis 50:01
Well, Holden, are you down for a quick round of overrated or underrated before we let you go?

Holden Karnofsky 50:06
Sure, I'll give it a shot.

Will Jarvis 50:07
So I'll throw out a term, and you tell us whether it's overrated or underrated, and maybe a sentence or two on why.

Holden Karnofsky 50:12
Okay, yeah. This is always tough, because I never know how something is rated. Actually, do you have any thoughts on who's rating it? Underrated or overrated by whom? That might help.

Will Jarvis 50:25
Yeah, that's a great question. I guess...

Holden Karnofsky 50:28
The average person listening to this podcast? Yeah, that's tough. I don't know anything about them.

Will Jarvis 50:34
Craig, maybe your thoughts?

Craig Fratrik 50:36
No, I don't think you can do the average person listening to this podcast. I mean, I guess you could, but it is really tough. My default is more like society in general, although one of the answers that always comes up in Conversations with Tyler is "overrated by some, underrated by most." That's the classic pattern that always shows up, so I think anything that fits that pattern is a fine answer. I think the baseline has got to be society, unless you want a specific one.

Holden Karnofsky 51:15
America, or global society, or...?

Craig Fratrik 51:19
I don't think it'd be global society. I mean, as you know better than we do, lots of the world is struggling at subsistence, just poor and getting by. So maybe the right audience is educated, college-educated America.

Holden Karnofsky 51:37
Okay, college-educated America. I don't know that much about what college-educated America thinks, but I'll give this a shot.

Will Jarvis 51:46
Agriculture: overrated or underrated?

Holden Karnofsky 51:49
I think underrated in some sense, because I think it really was an incredibly huge deal. There are bestsellers out there arguing that agriculture is a huge deal and saying it was a big part of why we had civilization at all. So there are bestsellers, but I agree with them more than I disagree with them, and they haven't necessarily penetrated everywhere. It's not 100% clear that we couldn't have had civilization, or that it would have been very different, without agriculture, but it's probably true. And I think civilization has been a big deal. It means we have a lot more people than we would have otherwise, and it probably means that, at least right now, life is better than it was before and than it would have been otherwise. So, probably underrated.

Will Jarvis 52:36
Hunter-gatherer happiness: overrated or underrated?

Holden Karnofsky 52:39
I think no one has any idea how happy hunter-gatherers are, so I think anyone who's got a confident view on that is wrong. We don't know. I guess I would say they were pretty similar to us.

Will Jarvis 52:59
Good stuff. AI existential risk: overrated or underrated?

Holden Karnofsky 53:04
Underrated.

Will Jarvis 53:05
Gotcha. Here, I'll zoom in a little bit: do you think it's overrated or underrated within the rationalist community?

Holden Karnofsky 53:13
I think underrated. I just think this is a huge, huge, huge deal. And I do think there are people who get the impression, "Oh, lots of people are talking about this. I'll ignore it, or I'll work on some other thing." And, to be clear, I do think some people should work on other things. But unless you restrict yourself to pretty selective circles, I think AI existential risk is underrated.

Will Jarvis 53:40
Go on Reddit. That's good, yeah. One more, and then Craig can ask his before we let you go. Philanthropy generally: overrated or underrated?

Holden Karnofsky 53:48
Oh, I think it's underrated. When we started Open Philanthropy, I read a lot about the history of philanthropy, and I kind of expected it was all going to be silly stuff that didn't matter. What I learned is that some of the most incredibly important events that have made life better over the last 100 years had a major role for philanthropy. The Rockefeller Foundation funded the obscure agricultural research that led to the Green Revolution, which has been credited with lifting a billion people out of poverty and is probably one of the major events of the century for human wellbeing. And there was a feminist philanthropist, Katharine McCormick, who basically provided completely neglected funding for the research that led to the pill, which I think was also incredibly revolutionary. And I think there's more where that came from. So I think philanthropy has been a big deal and can continue to be a big deal.

Will Jarvis 54:39
That's good stuff. Craig, do you have any last questions?

Craig Fratrik 54:42
Yeah, I have a quick one. When I was listening to the 80,000 Hours podcast, you mentioned you had a kid recently, and I was just wondering: what was the most surprising part about having a kid?

Holden Karnofsky 54:57
Well, the most surprising thing for me: I really wanted a kid, but I thought I only wanted an older kid. I thought we would just be kind of rolling our eyes through the infant phase, because an infant doesn't really have a personality. So the biggest surprise for me is that I really enjoy my kid even now. Even when he was a newborn, he didn't really do much, and I really enjoyed him anyway. So that was a good surprise.

Will Jarvis 55:26
I love that. Well, Holden, thank you so much for taking the time tonight. I really appreciate it. Where should people find your blog? Where would you like to send people? We'll put links in the show notes as well.

Holden Karnofsky 55:37
Sure. My blog is called Cold Takes, that's "cold," space, "takes," and the website is cold-takes.com. You can Google Open Philanthropy, you can Google GiveWell. Donating to GiveWell top charities is probably the most tangible action I'd point people to if they don't know what else to do. But if you want to read more of my weird thoughts, Cold Takes is where to read them.

Will Jarvis 56:03
We'll do that. Just before we let you go: if someone is super wealthy and doesn't know what they should do with their money, could you give them the quick pitch on where they should put it?

Holden Karnofsky 56:15
Well, I wish I had a better answer to that right now, because Open Philanthropy is on a mission to help our main funders give away all their money within their lifetimes, and that's just a lot of work. Finding great places to put that much money is a lot of work. So it's kind of awkward, but we're looking for that ourselves right now, and I don't have a million places where I can point and say, "Give a gift here." I wish I could give a better answer. I think GiveWell is a good starting point, and I would also encourage reading up on effective philanthropy and effective altruism, learning, and making some of your own decisions.

Will Jarvis 56:48
That’s good stuff. I love that. Well, thank you, Holden.

Holden Karnofsky 56:51
Cool. All right, good talking to you. Thank you both.

Will Jarvis 56:59
Thanks for listening. We'll be back next week with a new episode of Narratives.
