Narratives

136: Ben Reinhardt - Speculative Technologies

In this episode, we're joined by previous guest Ben Reinhardt to discuss his newest venture, Speculative Technologies. https://www.spec.tech/library/introducing-speculative-technologies

We also discuss the ins and outs of starting a new organization, how to bootstrap legitimacy, the second order effects of technology, large language models, and the future of robotics and its current challenges. 

Transcript:

William Jarvis 0:05

Hey folks, welcome to Narratives. Narratives is a podcast exploring the ways in which the world is better than it was in the past, the ways in which it is worse, and how we can get to a better, more definite vision of the future. I'm your host, William Jarvis, and I want to thank you for taking the time out of your day to listen to this episode. I hope you enjoy it. You can find show notes, transcripts, and videos at narrativespodcast.com.

Will Jarvis 0:38

Ben, how you doing this evening?

Ben Reinhardt 0:40

I'm doing well. As I mentioned earlier, I'm slightly overwhelmed, but other than that, things are going well.

Will Jarvis 0:49

Yeah, being a founder is no joke, man. They don't tell you this at the founder school we all go to before we get there.

Ben Reinhardt 0:56

Oh, man. Okay, so something that I've realized is that starting any organization is the same, regardless of what the form of that organization is. Everybody knows that when you start a company you roughly go through the same things. But I think that starting a nonprofit is the same thing, which is also the same thing as starting an academic lab. I know a number of people who are starting new labs, and it's the exact same process, where you basically have to somehow bootstrap legitimacy for this thing that has no track record.

Will Jarvis 1:37

So how do you go about that? It is an interesting job. What do you think has been particularly good for you in bootstrapping legitimacy?

Ben Reinhardt 1:45

I think that having written an obscenely long piece about my entire thought process has actually been pretty good for bootstrapping legitimacy, in the sense that I've clearly at least thought through what I'm doing. I might be wrong about it, but I'm clearly not just making things up. This is a thing I think about all the time: how to bootstrap legitimacy. Coming on your podcast, right?

Will Jarvis 2:34

Doing the podcast tour, getting your name out alongside different folks who have done some good stuff. And writing your ideas down seems to be important; writing online, I think, is a really powerful mechanism. Definitely, definitely.

Ben Reinhardt 2:47

The thing I would say about writing online is to write specifically, as opposed to generically: writing things where people say, oh, you're the person who wrote that piece with this very particular opinion. That's good. Also, truly, once you can get a couple of people excited about what you're doing, it's about leveraging that. It's like finding program managers by getting one researcher in a field to take you seriously, and then they introduce you to other people. That's how you bootstrap legitimacy.

Will Jarvis 3:36

Definitely, I think that makes sense. And that's a great segue. For the audience, Ben, can you talk a little bit about what you're building? It's really, really exciting.

Ben Reinhardt 3:44

Sorry, I got ahead of myself. So I'm building an organization called Speculative Technologies. We're a nonprofit research organization modeled after ARPA, and the mandate that I've set out for us is to build new paradigms in materials and manufacturing. That's the one-liner about it.

Will Jarvis 4:13

I love that. And I know you spent a lot of time going to conferences, exploring manufacturing and new technologies around materials science. Was there an aha moment where you realized there's a lot we can do here that we have not done yet as a species?

Ben Reinhardt 4:30

I'm not sure if there was an aha moment around that, but can I tell you what there was an aha moment about? The aha moment was that in basically all domains, there are things that we could be doing that we're not doing. So the trick is less to figure out where you can do things that people aren't doing, and more to narrow down. The aha moment was around the fact that we should be focusing on materials and manufacturing, and that came from crystallizing the opinion (which, again, many people may disagree with me on) that the biggest social and economic effects of technologies are all second-order effects. I can dig into that. And the technologies that are most likely to create second-order effects are new materials and new manufacturing processes. Every time we create one of those, there's this ripple of second-order effects that happens out in the world.

Will Jarvis 5:46

Can you talk about that a little bit?

Ben Reinhardt 5:48

Yeah. The example I point to the most is how, a couple of decades ago, really not too long ago, overpopulation was an existential threat to humanity. You had The Population Bomb, and you had people worried about this exponential number of people leading to starvation and a cascade of effects: migrations that would destabilize even developed countries, and really bad things for humanity. And if you look around the world today, we passed the eight billionth person not too long ago, with not that much fanfare, because overpopulation just isn't a thing we worry about. That is in large part thanks to the fact that we invented the Haber-Bosch process, which lets us fix nitrogen from the air and create much more fertilizer than we could using just natural fertilizers. We figured out how to do some amount of genetic engineering, so you have the whole Green Revolution and crops that are much more productive. And then, more generally, we've created a lot of technology that has made people a lot more wealthy, which tends to make people have fewer kids. Between those three, these second-order effects of technologies, this existential threat was no longer a problem. So that's the classic one. And then we can point to all these other things: whales are doing a lot better, in large part because we use electricity and natural gas instead of whale oil. Our cities are no longer covered in soot, not because we made really good technology for cleaning soot off of buildings, but because we don't use coal for heating and powering factories anymore. And everybody points to the ways in which computers have reconfigured society. The list goes on. But all of the big social and economic effects are second-order effects that were very hard to predict in the first place.

Will Jarvis 8:30

Gotcha. And so, with these positive second-order effects, it sounds like you think too many people are focusing on software and not enough people are focusing on manufacturing and materials science. Is that correct? Or is it just higher leverage for humanity to focus on manufacturing and materials science?

Ben Reinhardt 8:51

I'm not sure it's about the balance of the absolute number of people. But I think that right now there is a lot of work not being done. Actually, let me rescind that: there's a lot of work in computer science that's not being done, too. I just think that there is a particularly large amount of work that's not being done in materials and manufacturing because of institutional constraints. And that's where I see my highest leverage point being.

Will Jarvis 9:49

Can you talk about those institutional drag factors that you've noticed, that prevent people from innovating more in these spaces?

Ben Reinhardt 9:58

So let's see. I guess what I like to do is walk through the different institutions that we would hope are doing this. In academia, a lot of the incentives are around doing new things, especially new ideas, and showing that a new idea is feasible. So people will come up with a new material, or they'll come up with one piece of a manufacturing process, and then say, okay, I did it, I wrote my paper. And if they try to build on it, reviewers and their peers say: well, this has been done, this is not new, why are you doing this? And so what that does is create things that are, not unscalable exactly, but that require a lot more work to scale. You'll see this all over the place: scaling materials and manufacturing processes often requires as much research as inventing them in the first place. So that work is not being done in academia. Or you get a situation where you have a whole bunch of components, and again, you need to do just as much research to put those components together into a functional system as you did to invent the components in the first place. So that's academia.

Then we think about startups. Startups are really great mechanisms for scaling things that you know how to scale, building products, and selling things to customers. And that's a great thing. But the kind of work that I was describing is still really research; there are really fundamental uncertainties in there. So it just does not make a good investment, both because of the levels of uncertainty, not just about whether it will work but whether it will even turn into a product or something you can capture value from, and additionally because the type of work you need to do often creates public goods, whether you like it or not. In the sense that, if you think of it as a very large design space, once you say, okay, this point in the design space is the right point, everybody else can copy that, but it might take you many years and millions of dollars to find that point. That is the opposite of a moat; it's exactly the anti-moat. So between those things, and the timescales and amount of money involved and the amount of value you can capture, doing this work is just not well suited for a venture-capital-backed startup.

Then we might turn to industrial labs as a third and final institution. This work was, in the past, done by industrial labs; Bell Labs is your classic example. But due to a combination of restructuring of the industry, a lot of outsourcing of R&D to startups (partially because of that restructuring and partially because of increased investor expectations from the public markets), and the fact that the companies with the slack to do really speculative work are now primarily software companies, you don't have companies doing this sort of work, especially when it would go contrary to their main product line. They'll do research when it will augment their main product line, but when it's something like, okay, we're going to reinvent the process by which we make this thing, they have no incentive to do that. So that is a very long explanation of why this sort of work does not have a home in any institution right now.

Will Jarvis 14:47

That makes sense. And how difficult has it been to convince philanthropists and funders to fund it? Because it makes sense, right? There's no place that does this right now. And it is speculative; it's in the name, right? Speculative Technologies. It's like: we're not sure exactly what we're going to find. We'll probably find something out there, but we're not exactly sure what it's going to be. That sounds like a difficult pitch. I just finished fundraising, and it's pretty hard to fundraise even when you have a pretty concrete thing you're selling. So it sounds like a pretty difficult thing to go out there and do, but you've been successful at it. Can you talk about that process at all, and how you think you've made it work?

Ben Reinhardt 15:26

Yeah, I would say I've been somewhat successful. I mean, I need to be more successful in order to achieve our goals; let's call a spade a spade. Right now, our funding has come from very generous donors who kind of get it already. They did not give us money because I convinced them that this was a problem worth addressing. Actually, I would say that's the biggest next thing I need to do: really start trying to convince people of this. So in that sense, it was a matter of finding funders who already realized this was an issue worth tackling, and then making the slightly easier argument that the way I'm proposing to tackle it has a chance of succeeding. That's what they're betting on.

Will Jarvis 16:51

I like that, and I think there's a good lesson here. Initially, you start with a group of core people who are sympathetic to what you're building, and you come to them and say: hey, I've got this really good plan to go out there and try to solve this. What do you think about this plan? Would you be interested in supporting it? And then you bootstrap, and start convincing more people as you build success over time.

Ben Reinhardt 17:12

Yeah, I assume it's similar to building a startup, right? Your initial customers are going to be the ones that get it. And then once you start having a lot of success and getting the testimonials, then maybe you start to convince other people who right now are skeptical, once you show them momentum.

Will Jarvis 17:35

Exactly. You get these early adopters, and then you can move into the more general public over time as you succeed.

Ben Reinhardt 17:42

See, building new organizations is the same.

Will Jarvis 17:49

The domain is very wide, so it's a wise observation. I think it's definitely on the money. So, out of curiosity, I wanted to ask: how are you thinking about finding program managers? And can you talk about what the program manager role looks like? They are program managers, right? I'm using the right term there?

Ben Reinhardt 18:06

Yeah. I'm not sure that's exactly the right title, and it may change in the future, but DARPA calls them program managers, so that seems like a reasonable name. The problem with it is that they're PMs, and you also have product managers and project managers, and all three of these roles are wildly different. So anyway, the question is: how do I go about finding them?

Will Jarvis 18:41

Can you describe the role first, and then we can talk about who, ideally, slots in there?

Ben Reinhardt 18:46

Yeah. So the role is effectively to be kind of like the mini-CEO of a program. And what I mean by a program is roughly a five-year effort that starts off with a lot of planning around, basically: what projects need to happen within this program umbrella, who is going to be doing those projects, why are we doing this whole thing in the first place, what's going to happen at the end of it, and what are the risks? Right now our rough framework is to first go through and answer the questions in the Heilmeier Catechism, which are roughly: one, what are you trying to do, described with zero jargon; two, how is it done today (I may be mixing up the order on these); three, what is new in what you're trying to do; four, who cares; five, what are the risks; six, how long will it take; and seven, how much will it cost? I may have missed one; we'll put it in the show notes. So you plan it, and then the program manager's job is to figure all that out, and then figure out who at other organizations we could work with and give money to in order to work on separate projects that all have to work together toward a final goal. So that's the role. And then the way I go about finding them, frankly, is mostly some combination of, one, telling people I know: hey, I'm looking for people who might have really ambitious, good program ideas, introduce them to me. And two, having a very rough sense of a program that might fit into the organization, reading a bunch of papers in that area, and then contacting the authors: hey, I read your paper, I'm really interested in what you think about this more broadly, do you think there's room for a program there? And if they say, yeah, there might be room for a program there, then: oh, cool, do you know anybody who might be really great at running that? And they'll introduce me to other people and point me to other papers, and you wash, rinse, and repeat.

Will Jarvis 21:46

Makes sense. And are there any characteristics you've found that make people successful in these types of roles? Are they insiders or outsiders? Do they know the field well but not exactly fit the mold of the average person who ends up as a tenured professor in academia? What kinds of characteristics do these people have?

Ben Reinhardt 22:08

Yeah. So, full disclosure, I'm still figuring this out, so I'll tell you my current hypotheses, and I want to flag high epistemic uncertainty about what I'm about to say. Whenever someone's doing one of these programs, it's going to be broader than anything one person can have done, but they should have strong experience in at least one piece of what the program will be. They need to have done some amount of real, physical research. What I say is: they don't need to have a PhD, but they need to have done something like that. They need to know what it is like to suffer while trying to get the real world to yield up its secrets, doing something that nobody has ever done before. That's a pretty hard requirement. And then beyond that, it's some combination of the curiosity and critical thinking of a scientist, combined with the hustle and ambition of a startup founder, with a giant dash of epistemic humility. That's kind of the general recipe.

Will Jarvis 23:55

That's awesome. That sounds hard to find.

Ben Reinhardt 23:58

Yeah, it's really hard.

Will Jarvis 24:01

Not impossible, which is good. Does most of your job end up being recruiting at the end of the day?

Ben Reinhardt 24:10

I think so, unless that ends up being someone else's job. Again, it's similar to startups: at the end of the day, the founder's job ends up being recruiting and fundraising, for the most part. I would estimate that there are under 10,000 of these people in the world; that's kind of my rough hunch.

Will Jarvis 24:42

That sounds about right as a rough hunch. Is it above 1,000, though?

Ben Reinhardt 24:47

It is much more than 1,000, I suspect. And I suspect that, to some extent, it can be trained. That's kind of the hope with building a new institutional structure, right? At first, the only people you can get are the ones who, through some set of luck, are good for that institution. But eventually, institutions kind of generate people who are good for them. I would argue that, because startups became such a big thing, there are more people who would make good founders in the world now than there were in the 1980s.

Will Jarvis 25:37

Yeah, Matt Clifford talks about that. His question is always: in a given country, what do the ambitious people do? If you're in Singapore, it's the civil service or something like that. And perhaps in Ben Reinhardt-istan, people go and become program managers.

Ben Reinhardt 25:55

That I agree with. I agree with Matt on most things; by default, I agree with Matt on things.

Will Jarvis 26:01

He's a smart guy to default to. Yeah, exactly. That's cool. So Ben, other than fundraising and recruiting and trying to do everything all at once, which is what building new organizations is, what is the single biggest challenge you've faced so far? If you could wave a magic wand and alleviate it, what would it be?

Ben Reinhardt 26:22

Besides fundraising and recruiting? Hmm, I have to go pretty far down my priority list for the biggest challenge. I guess the thing that's kind of tied to those is a broader question: how do you know that research is good? That challenge sounds incredibly philosophical, but it's also the question of how we will know when we're successful. What will that actually look like? And on top of that, is there any way to get leading indicators that we're doing good work? Because throughout history you can see both inventors and scientists who don't see the impact of their work until after they're dead. That is both hard to course-correct on, and hard to convince other people that we're doing a good job, if that's the case. So really trying to figure out how you judge yourself is, I think, a big challenge.

Will Jarvis 28:03

It's a huge challenge in the research space. Do you have any sense of how to go about measuring these things? We talked to Donald Braben a long time ago, and he had some heuristic, which I cannot quite recall, of just checking in with people and seeing if they're still following interesting paths or something. I don't know. What do you think?

Ben Reinhardt 28:21

Yeah. Right now, the hunch we're acting off of is tied up in these roadmaps: here's our hypothesis about the things we would want to hit in order to achieve this technology, and then really going deep, not necessarily on a justification, but on the reasoning behind all of it, to ourselves. So you can say: we think this technology is worth building because it will have these general-purpose uses; we could imagine a future where people could do this; and here are the biggest risks and the things we need to do to show that these are not risks. It's almost like building up this really big causal chain, and I think at some point you just need to ask: are we executing on this causal chain, yes or no? How are things going? But I might almost push back against the idea of measurement per se, in the sense that measurement involves somehow making a universally transferable sense of something. And, again, this is not something many people would agree with, but I think what we've lost, and what we need to do, is build these chains of trust, where I build trust with the program managers, they build trust with scientists, the scientists build trust with each other, and then you get everybody to be really honest and just say: is this working? Is this stupid? And go from that. I realize that's deeply unsatisfying to everybody, including myself. But I think this is a thing that has been lost, and on the margin we need to do more of it.

Will Jarvis 31:03

It does seem like a lot is lost if you just get obsessed with the free-rider problem, and all you're trying to track is making sure that, on the margin, someone isn't running off with the money or something like that.

Ben Reinhardt 31:17

Yeah, crackpots. And that risk is not zero.

Will Jarvis 31:25

Exactly. So we were talking about Matt Clifford earlier, and one of the things he mentioned about how he can identify whether one of the startups at Entrepreneur First is doing really well is the frequency of positive surprises. Do you think something like that might work here as well? It's not so much hard metrics as wildly interesting things happening quickly.

Ben Reinhardt 31:51

I think that's probably a good one. And another one: so, I'm not sure if you know this, but I actually worked for Matt. In 2019 I worked at Entrepreneur First in Singapore, so I was intimately involved in this question of how we know whether founders are doing well. Another leading indicator that actually was pretty strong was just how much they get done. And now that you bring it up, I think that's actually a pretty good indicator even in science, or sorry, not science, but in technology research, because I think we want to exclude the kind of Einstein going off and spending twelve years figuring out general relativity. There definitely are domains where it's possible for people to just go off in a room and solve Fermat's Last Theorem or whatever; I don't think that's where we are. I think that when you're doing experimental, building-focused research, what you want to see is still: oh, I tried this, it didn't work, I'm doing this other thing; I had this idea, I executed on it, it was a bad idea. You want to see that iteration speed. It doesn't necessarily mean pivoting; it just means some level of activity.

Will Jarvis 33:37

Yes. I heard Friedman describe this to me once. He said: you've got a very low-powered flashlight, and you're running around a dark room looking for treasure, and the only lever you have is moving around faster. That's how you should think about finding product-market fit, which may be related.

Ben Reinhardt 33:56

Yeah, I think it's not quite the same analogy. But I think there really is something to just doing smart things. And I almost think that planning itself can be one of those things you're doing, right? So if you say, I wrote down these five possible scenarios, ruled these two out, and talked to these two people about them, I would look at that and say, yeah, that's great.

Will Jarvis 34:28

As opposed to just flailing about or something.

Ben Reinhardt 34:30

Yeah. There's a term that Peter van Hardenberg (I'm butchering his name; he runs Ink & Switch) gave me that I loved, which comes from boating, which I don't do at all: unless you have water jets on the front of your boat, you can't turn unless you're moving. So in order to be able to change direction in a boat, you have to be moving at all. There's something around that that's really important. But at the same time, the tension there is: what is good action versus flailing action? And I don't know how to discriminate between those.

Will Jarvis 35:27

Maybe it's like obscenity: you kind of know it when you see it, or something like that, which is not helpful to describe on the podcast, but perhaps is true.

Ben Reinhardt 35:35

Yeah, I think so. I'm kind of optimistic and confident that if we pay attention, it might be possible to come up with better theories about how that works.

Will Jarvis 35:52

Which would be beautiful if it's successful. Ben, you're a super smart guy. Why pick this problem in particular? I realize I've never asked you this. You've got a lot of options, right? You could see all kinds of different stuff. But this is what appealed to you.

Ben Reinhardt 36:08

So the really honest answer is because I want there to be more awesome sci-fi shit in the world. That's actually it; that's what I want to happen. And I've managed to convince myself that this is the best way that I, given my set of skills, can try to make that happen.

Will Jarvis 36:40

I love that. I love that.

Ben Reinhardt 36:42

There's nothing more profound to it.

Will Jarvis 36:45

No, that's great. And for you personally, are there any technologies you're super jazzed about that you think have just been overlooked?

Ben Reinhardt 36:53

My big one is general-purpose telerobotics. Basically, the possibility, through some system (you could imagine putting on a headset and getting into some contraption), of being able to seamlessly act through a robot anywhere in the world. The reason for that is some combination of a belief that general-purpose automation is going to be much harder than many people think it is, and the fact that it's not just automation: it actually allows people to effectively teleport. Something that would really excite me is a scientific lab where you basically have collaborators all over the world working together 24/7, with different shifts based on different places in the world, and everybody just seamlessly handing off. And that's really not even something people are thinking about automating anytime soon. So anyway, that's one.

Will Jarvis 38:18

I like that. And you said something interesting there, that general-purpose automation can be much harder than people realize. We were talking to a Facebook ML engineer who works in automation, general-purpose automation, or he did at the time, and that's one of the things he mentioned as well: you don't realize how hard a lot of this stuff is. And we see that even with self-driving cars; it seems like they're perennially five years away or something. Maybe we're getting there now.

Ben Reinhardt 38:45

Yeah, I'm kind of bullish on the cars. You can ride around San Francisco in a Cruise car now. Although, yeah, I don't know how well that will go from city to city. But in terms of the robotics automation problem: it's hard, it's really hard. I get into arguments about this on Twitter sometimes. And people who have some amount of training in an area are often the ones who are least able to predict a paradigm shift, so I could be completely wrong about this. But I don't see the mechanism by which a lot of the advances we're seeing in AI on computers, which really depend on truly obscene amounts of data, translate into the physical world without some sort of discovery about how to do things differently.

Will Jarvis 40:00

Gotcha. I think you mentioned this to me when we were having coffee: there just isn't the same data layer for the real world that we have for text.

Ben Reinhardt 40:09

Yeah, exactly. Literally everything that goes on in a computer can become data for training a large language model or something, but we don't have anything like that in the real world. So hopefully I'm wrong.

Will Jarvis 40:33

That'd be great. So do you think the effect of large language models is being somewhat overrated right now, for, say, the next ten years of human productivity?

Ben Reinhardt 40:43

I think so, though I think we're getting beyond my area here. The thing I would say is that I'm making a very scoped claim: I think the effect of large language models on doing things outside of a computer is going to be limited. Inside of a computer, I have no way of knowing.

Will Jarvis 41:24

Gotcha. That makes sense, and it directionally jibes with my thoughts. Ben, I want to thank you for taking the time to come on; it's been a ton of fun. Speculative Technologies: what can the audience do? Are you looking for money? Are you trying to recruit for anything? Is there anything we can help you with?

Ben Reinhardt 41:46

Yeah. Very concretely: if you know anybody who wants to donate, that's always appreciated. We're always looking for good program managers. And if there is some technology that is currently bottlenecked in materials or manufacturing that you think could be really big if true, get in touch. And just spread the word; when this airs, the website should be live. And yes, we have a domain: it's spec.tech.

Will Jarvis 42:24

Perfect. Love it, that's easy. Easier than...

Ben Reinhardt 42:26

speculativetechnologies.org; I think both of them redirect.

Will Jarvis 42:30

Awesome, love that. Well, Ben, thank you so much for coming on again. Really appreciate you.

Ben Reinhardt 42:34

Thanks for having me, Will. It is always a pleasure.

Will Jarvis 42:37

Definitely. Special thanks to our sponsor, Bismarck Analysis, for the support. Bismarck Analysis creates the Bismarck Brief, a newsletter featuring intelligence-grade analysis of key industries, organizations, and live players. You can subscribe to the Bismarck Brief at brief.bismarckanalysis.com. Thanks for listening. We'll be back next week with a new episode of Narratives. Special thanks to Donovan Dorrance, our audio editor. You can check out Donovan's work and music at donovandorrance.com.

Transcribed by https://otter.ai

Narratives is a project exploring the ways in which the world is better than it has been, the ways that it is worse, and the paths toward making a better, more definite future.
Narratives is hosted by Will Jarvis. For more information, and more episodes, visit www.narrativespodcast.com