The Tim Ferriss Show Transcripts: Elad Gil, Consigliere to Empire Builders — How to Spot Billion-Dollar Companies Before Everyone Else, The Misty AI Frontier, How Coke Beat Pepsi, When Consensus Pays, and Much More (#863)



Please enjoy this transcript of my interview with Elad Gil (@eladgil), CEO of Gil & Co, a multi-stage investment firm, holding company, and operating company working on the world’s most advanced technologies. Elad is a serial entrepreneur, operating executive, and investor or advisor to private companies, including Airbnb, Anduril, Coinbase, Figma, Instacart, OpenAI, SpaceX, and Stripe. He was previously VP of Corporate Strategy at Twitter and started the mobile team at Google. He was the founder and CEO of Mixer Labs and Color. Elad is the author of the bestseller High Growth Handbook: Scaling Startups from 10 to 10,000 People.


Listen to this episode on Apple Podcasts, Spotify, Overcast, Podcast Addict, Pocket Casts, Castbox, YouTube Music, Amazon Music, Audible, or on your favorite podcast platform.


Transcripts may contain a few typos. With many episodes lasting 2+ hours, it can be difficult to catch minor errors. Enjoy!


Tim Ferriss: Elad, nice to see you. Thanks for making the time. Appreciate it.

Elad Gil: Great to see you, as always.

Tim Ferriss: I thought we could begin with something we were chatting about, or you were explaining before we started recording, which is a new phenomenon of sorts. Could you explain what we were just talking about?

Elad Gil: Oh yeah, we were just talking about some of the acquisitions that are happening in the AI world. We saw that xAI just got an option to effectively purchase Cursor, it looks like. Obviously, Scale was partially taken by Meta. There’s been a variety of these deals that have been happening over the last year or two.

And separate from that, we were just talking about, well, what does that mean for the AI research community and the AI community in general? And I think the most interesting, or one of the interesting things that’s happened over the last year or so is Meta really started aggressively bidding on AI talent, which was a very rational strategy. They’re going to spend tens of billions of dollars on compute, so it made sense to have a real budget to go after people. And normally, what happens in tech is a single company will go public, and a bunch of people from that company will be enriched and then a subset of them will continue to be heads down and working really hard and focused on their original mission. And a subset of people will start to get distracted. They may go and work on passion projects for society. They may get involved with politics. They may go start a company. They may just check out and hang out or go to the beach kind of thing.

And what happened recently is because of the Meta offers and then all the other major tech companies having to match offers for their best researchers, somewhere between 50 and a few hundred people effectively had an IPO, but as a class of people. It wasn’t like they were at one company. They were spread across Silicon Valley, but all of their pay packages suddenly went up dramatically and they experienced the equivalent of an IPO, and that’s really unusual. It’s kind of the personal IPO. And the only time in history I can think of where I’ve seen it happen before is in crypto where a bunch of the really early crypto holders or founders suddenly as a class all went effectively public in ’20, I guess ’17-ish, and then again, more recently.

But this is really interesting, it’s under discussed. It may not have huge long-term implications, but it does mean a subset of people will change what they’re focused on, try and do big science projects to help humanity work on AI for science maybe. Maybe some people will go off and do personal quests or things like that. 

Tim Ferriss: Yeah. Or just “quiet quit” and do lots of drugs and chase vices. I mean, there’s that too. 

Elad Gil: Definitely that.

Tim Ferriss: In that case, you look around, say Austin, you’ve got the Dellionaires, which refers to Dell post-IPO or early employees and so on. But as a class of people, when that happens, I suppose we don’t know how large or how long term the implications are, but there seem to be implications. And I know only a few people who I would go to as technical enough and also broad enough in their awareness and networks to watch AI. To the extent that someone can watch it comprehensively, I would put you in that bucket. And you wrote this week just to talk about some of the other elements at play here, the compute constraints that AI labs are facing and the implications maybe for the next one to five years. This is in a piece people should check out: “Random Thoughts While Gazing at the Misty AI Frontier.” Good headline, by the way.

Elad Gil: Very dramatic.

Tim Ferriss: Yeah, very dramatic, I love it. It’s very evocative. So would you mind explaining, actually, before we move to the compute constraints, because I do want you to talk to that next, but for people who don’t have any real context on the talent wars and what you were just mentioning earlier with Meta, on the high end, what do some of these pay/equity packages, these compensation packages, look like that are getting offered?

Elad Gil: Yeah. I don’t have exact knowledge of the full range and everything else. The rumors and the things that have made it into the press, the claims are that these things are between tens of millions and hundreds of millions of dollars per person. And again, it’s a very small number of people who would get anything that’s outsized. But I think the basic idea is we’re in one of the most important technology races of all times. The faster that we get to better and better AI, the more economic value will effectively show up. And therefore, people are really willing to pay in an outsized way for the handful of people who are the world’s best at this thing.

And five, 10 years ago, these people were well compensated, but it was a completely different ballgame because it just wasn’t the core of everything that’s happening in technology. But also honestly, societally and politically and for education and health, it’s going to have all these really broad, and I think largely positive implications for the world, but it is the moment of transformation, and so suddenly these pay packages were going way up.

Tim Ferriss: What are the compute constraints that you discussed in your recent piece?

Elad Gil: All the different — people call them labs now — that’s OpenAI, that’s Anthropic, that’s Google, that’s xAI, et cetera. All the labs are basically training these giant models. And effectively, what you do is you buy a bunch of chips from NVIDIA. You’re actually building out a system so you have chips from NVIDIA, you have memory from Hynix and Samsung and other places and you’re building out data centers. There’s all these things that go into building these big systems and data centers and everything else. And you basically have clusters of hundreds of thousands or millions, or the scale keeps going up, of systems that you’re buying from NVIDIA and from others. Google has its TPUs, there are other systems as well, and you’re using that to basically train an AI model.

What that means is you’re running huge amounts of data against these big clouds, and eventually the crazy thing is your output or your model is literally like a flat file. It’s almost like outputting a text doc or something. And that text doc is what you then load to run AI, which is insane if you think about it. You use a giant cloud for months and months and months, and your output is like a small file.

And that small file is a mix of representing all of humanity’s knowledge that’s available on the internet, plus logic and reasoning and other things built into it. And you can think about that in the context of your brain. You have three or four billion base pairs of DNA, and that’s more than enough to specify everything about your physical being, but also your brain and your mind and how it works and how you can see things and talk and taste things and all your senses, and everything’s just encapsulated in this very small number of genes, actually. And so similarly, you can encapsulate all of human knowledge into this flat file effectively.

Tim Ferriss: How do you think about the constraints then? What are the constraints?

Elad Gil: Every year, the constraint on building out these big clouds to train AI, and then also what’s known as inference, where you’re actually using these chips to understand, to run the AI system itself, you need lots and lots of chips from NVIDIA to do this or TPUs or others, but then you also need other things. You need packaging to actually be able to package the chips, and so there’s a whole supply chain around building out these systems. Different parts of that supply chain have constraints on them at different times, and so right now the major constraint is memory or a specific type of memory that’s largely made by Korean companies, although there’s some broader providers of it. People think that that memory constraint will exist for about two years, maybe, plus or minus, because ultimately the capacity of those companies has been lower than the capacity for everything else in the system.

People think other constraints in the future may literally be building out the data centers or power and energy to run these things, but for today it’s this memory. Everybody in the industry is constrained in terms of how much compute they can buy to throw at these things. What that does is it creates a ceiling on top of how big you can scale these models up in the short run because every lab is buying as much as it can. A bunch of startups are buying as much of this compute as they can, and everybody’s constrained. What that means though is you have an artificial ceiling on how big a model can get in the short run, and how much inference can run or how many things you can actually do with AI right now.

That also means that you’re effectively enforcing a situation where no one lab can pull so far ahead of everybody else because they can’t buy 10 times as much compute as everybody else. And there are these scaling laws that the more compute you have, the bigger the AI model you can build, in many cases, the more performant it can be eventually. That may mean that over the next two years-ish, all these labs should be roughly close to each other because nobody has the capacity to pull ahead. And when the constraint comes off, there is some world where you could make an argument that suddenly somebody can pull far ahead of everybody else. So right now, OpenAI, Anthropic, Google, they’re reasonably close in terms of capabilities, although some will pull ahead on one thing versus another. That should roughly continue everybody thinks for the next at least two years because of this.
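The scaling laws mentioned here are empirical power-law fits: loss tends to fall smoothly as training compute grows. Here is a toy sketch of that shape, where every constant is made up purely for illustration and not fit to any real lab’s data:

```python
# Toy power-law loss curve of the form L(C) = L_irreducible + a * C^(-alpha).
# All constants below are arbitrary, chosen only to show the qualitative shape.
def loss(compute, irreducible=1.7, a=8.0, alpha=0.05):
    """Illustrative loss as a function of training compute (FLOPs)."""
    return irreducible + a * compute ** (-alpha)

# More compute -> lower loss, but with diminishing returns per order of magnitude.
for exp in (21, 22, 23, 24, 25):
    c = 10.0 ** exp
    print(f"1e{exp} FLOPs -> loss {loss(c):.3f}")
```

The point the curve makes visible: each additional order of magnitude of compute buys a smaller absolute improvement, which is why a lab that could buy 10x the compute of everyone else would still pull ahead, just not 10x ahead.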

Tim Ferriss: Google is also constrained by the memory from Samsung, Micron, et cetera. They’re similarly constrained as the other players?

Elad Gil: Right now, everybody is similarly constrained and a subset of these labs either are already making their own chips or systems like Google has TPUs and other things. Amazon has actually built its own chips called Trainium. There’s basically different systems for different companies, but fundamentally all of them are limited in terms of how much they can either manufacture themselves, purchase themselves. And a year or two ago, the main constraint was packaging, now it’s memory. Two years from now, who knows, maybe it’s something else. We constantly are hitting the bottlenecks as we’re trying to do this build out.

Tim Ferriss: This is probably going to be a naive question because I’m a muggle and not able to write technical white papers or anything approaching that, but it seems to me that, I’m not the first person to say this, we’re better at forecasting problems than solutions, potentially. And so for instance, way back in the day, the price per gallon of gasoline or petrol goes above a certain point. Okay, people are forecasting doom and destruction, but past a certain price per barrel, suddenly new means of extraction became feasible and there were investments made in things like fracking and so on. Is there a plausible scenario in which there is some type of workaround? Along those lines, if that makes any sense. I don’t know. Maybe there isn’t.

Elad Gil: As far as I know, there, so far at least, is not. And part of that is because of the way that some of these things are built and it’s basically the capacity that you need, for example, for memory is basically a type of fab, and so you just need time to build out the fab and to get the equipment and put the lines in place. It’s a traditional CapEx into infrastructure cycle and these companies basically underinvested in that because they didn’t quite believe the demand forecast that other people had around this stuff, and so now they’re trying to catch up.

And so it’s one of these things where everybody keeps saying, “Well, AI is growing so fast, how can it possibly keep growing at this rate?” But it keeps growing at this rate, it just keeps going, and that’s because these capabilities are so impactful and so important. And so you look at the revenue of these companies and it’s interesting, I can send you the chart later, but Jared on my team pulled together a graph of how long did it take for companies to get to a billion dollars in revenue, and then from a billion to 10 billion, and then from 10 to 100. And there’s only a small number of companies that have ever done that. And you can literally look by generation of company how long it took. And so for example, I can’t remember, it was ADP or somebody, it took them 30 years to get to a billion in revenue or whatever it is, and Anthropic and OpenAI did that in a year.

So for Google, it took four years or whatever. I don’t remember exactly what the numbers are, but it was like as you go through these subsequent generations, it gets faster and faster to get to scale. Right now, OpenAI and Anthropic are each rumored to be roughly around $30 billion run rate.

Tim Ferriss: That’s crazy.

Elad Gil: Because four years ago they didn’t have any revenue. And that’s 0.1 percent of US GDP. So AI probably went from zero to half a percent of GDP, at least as a revenue contributor. And you extrapolate out, and if they hit 100 billion in revenue in the next year, two years, whatever it is, then we’re getting close to a place where each of these companies is a percent or two of GDP. That’s insane if you think about that.
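The back-of-the-envelope arithmetic here can be checked in a few lines, using the rumored $30 billion run rates from the conversation and an approximate $28 trillion US GDP (neither figure is official data):

```python
# Rough check of the run-rate-to-GDP figures discussed above.
# Both inputs are approximate/rumored numbers, not official statistics.
US_GDP = 28_000_000_000_000        # ~ $28 trillion, approximate US GDP
run_rate_each = 30_000_000_000     # ~ $30 billion rumored run rate per lab

share_each = run_rate_each / US_GDP
print(f"Each lab: {share_each:.2%} of GDP")          # ~0.11%
print(f"Both labs combined: {2 * share_each:.2%}")   # ~0.21%

# Extrapolation: if each lab reached $100 billion in revenue
future = 100_000_000_000 / US_GDP
print(f"At $100B each: {future:.2%} of GDP")         # ~0.36%
```

Note this counts only the two labs’ direct revenue; as the conversation says, it excludes AI-driven cloud revenue at Azure, GCP, and Amazon.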

Tim Ferriss: It’s bananas, yeah. It’s bananas.

Elad Gil: This stuff is really actually important and useful. That doesn’t include the cloud revenue for Azure for doing AI stuff or Google GCP or Amazon. It’s just those two companies. It’s insane.

Tim Ferriss: I would love to dig into your thinking because you’re one of the best first principles and also systems thinkers I’ve met, and I love having conversations with you because I always learn something new, and it’s not necessarily a data point, but often it might be a lens or a framework for thinking about different things. And that framework evolves for you as well. But for instance, if I was looking at this interview you did, this is a while back with First Round Capital, and you were talking about market first and then strength of team second, but you talked about passing on investing in Lyft’s Series C. This was at the time, right? And ultimately, part of it seemed to hinge on winner-take-all versus oligopoly versus other.

I’m curious how you are thinking about that within the AI space, because I mean, you started skating for that puck before almost anyone I know, if not everyone I know. And so how are you thinking about that? And this ties into something that you mentioned in your piece that I haven’t heard anyone else talking about, but I’ll give the sentence as a cue. I don’t think you’ll need it, but: “Founders running successful AI companies should all take a cold, hard look at exiting in the next 12 to 18 months, which might be a value maximizing moment for outcomes.” And you went back to the dot-com bust and the survival rates and then breakout rates. How are you thinking about, could you just explain that sentence?

Elad Gil: Sure.

Tim Ferriss: And then also explain how you’re thinking about whether you think this will be winners-take-all, oligopoly, like what type of dynamic you think emerges?

Elad Gil: So in terms of the precedent, and that doesn’t mean it’s going to happen here, but if you look at every technology cycle, 90 percent, 95 percent, 99 percent of the companies in that cycle go bust. And that dates way back even to what was high-tech a hundred years ago, which was the automotive industry. Detroit had dozens of car companies and hundreds of suppliers, and it collapsed into a small number of auto companies essentially, and so this is not a new story. During the internet cycle or bubble of the ’90s, 450 companies went public in ’99, 450 or so companies went public in the first few months of 2000, and so that was 900 companies. And, say, another 500 to 1,000 went public in the couple years before that. So you had somewhere between 1,500 and 2,000 companies go public.

Go public, so that means they kind of made it. And of those, how many have survived? A dozen, maybe two dozen. And so that’s out of 2,000 companies, 1,980 or so went under, one form or another, or maybe they got bought for a little bit. And so there’s no reason to think that the AI cycle will be any different. And every cycle’s like that. SaaS was like that and mobile was like that and crypto was like that. Most companies are not going to make it. A handful will, and we can talk about those. And so if you’re running an AI company right now, you should ask yourself, what is the nature of the durability of your company? And are you one of that dozen or two that are going to be really important 10 years from now? Or is now a good moment for you to sell because what you’re doing will start to get commoditized, or will be competed away by a lab, or will be something that the market will shift or the technology will shift and you’ll become obsolete?

There’s a handful of companies that will continue to be great. They should never sell, they should never exit, they should keep going. But there’s probably a lot of companies for which now, or the next 12 to 18 months, is the best moment possible in terms of the value that they’ll get for what they’re doing. For every company, there’s a value maximizing moment where they hit their peak, and it’s usually a window, usually six to 12 months, where what you’re doing is important enough, you’re scaling enough, everything’s working before some headwind hits you.

And sometimes it’s very predictable that that headwind is coming and you can see it. And often, you see it in the second derivative of growth. How fast are you growing starts to plateau a little bit and you’re either going to keep going up or you should sell. That’s really what that’s meant to be. I’m incredibly bullish around AI, as you can tell from the rest of the conversation. And so it’s less about the transformation that’s happening overall because of this technology, and more that only a handful of companies are going to continue to be really important, and so are you one of them or not? If you’re one of them, you should never, ever, ever sell.
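The “second derivative of growth” signal can be made concrete with a few lines of arithmetic: revenue keeps rising every period, but the growth rate itself is falling. The revenue series below is entirely hypothetical:

```python
# Sketch of the "second derivative of growth" signal: revenue rises each
# period, but the growth rate is decelerating. Figures are hypothetical.
revenue = [10, 18, 30, 44, 58, 70]  # quarterly revenue, $M (made up)

# First derivative: period-over-period growth rate
growth = [(b - a) / a for a, b in zip(revenue, revenue[1:])]

# Second derivative: change in the growth rate itself
accel = [b - a for a, b in zip(growth, growth[1:])]

for g, d in zip(growth[1:], accel):
    flag = " <- growth decelerating" if d < 0 else ""
    print(f"growth {g:.0%}, change in growth {d:+.2f}{flag}")
```

In this made-up series every quarter’s revenue is higher than the last, yet every change in the growth rate is negative, which is exactly the plateau a founder can see before the headline numbers look bad.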

Tim Ferriss: So what are the characteristics of that handful? The handful that have durable advantage, right? Because you look back at 2000 and it’s like, man, what would you have used to try to pick out Google and Amazon, right?

Elad Gil: Yeah. 

Tim Ferriss: And I’m not saying that’s the best comparator, but within the many just avalanche of AI companies, which are those that you think have durable advantage? I mean, of course, some of the name brand labs come to mind. Maybe they become the interface for everything else, who knows? But how would you answer that in terms of either shared characteristics or actual names? What sets apart the handful that you think will make it?

Elad Gil: Yeah. I mean, I think the core labs will be around for a while, so that’s OpenAI, Anthropic, Google, barring some accident or disaster or some blowup, but it seems like they’re in a reasonably stable spot. And to your point on market structure, I wrote a Substack post, I don’t know, three years ago or something predicting that it would probably be an oligopoly market, that there’d be a handful and they’d be aligned with the clouds, and that’s roughly what happened. I mean, there’s Meta and there’s xAI and there’s other players that may change this. xAI didn’t exist when I wrote that post. But it feels to me like in the short run, that’s an oligopoly. There’s no reason for that to be a monopoly market, unless one of them pulls ahead so much on capabilities that it just becomes the default for everyone. And that could happen, but so far it hasn’t. And again, this compute constraint may prevent that in the short run, or at least act as a check on it.

As you move up the stack and you say, “Well, there’s different application companies, there’s Harvey for legal, there’s Abridge for health, there’s Decagon and Sierra for customer success.” There’s these different companies per application. There’s three or four lenses that you can look at. One is if the underlying model gets better, does your product or service get dramatically better for your customers in a way that they still want to keep using you?

Second, how deep and broad are you going from a product perspective? Are you building out multiple products? Are they all integrated in a cohesive whole? Is it really being built directly into the processes in a company in a way that it’s hard to pull out? Often the issue for companies and adoption of AI isn’t how good is the AI, it’s how much do I have to change the workflows and the ways that my people do things in order to adopt it? It’s about change management, usually. It’s not about technology.

And so if you’ve been able to embed yourself enough into workflows and how people do business and how they work and how everything else ties together, that tends to be quite durable. Are you capturing and storing and using proprietary data? Sometimes it’s useful. I think data moats in general are overstated, but I think sometimes it can be actually quite useful and that’s usually the system of record view of the world. There’s a handful of criteria around, will this thing be long-term defensible or not? And at the application level, that’s often one potential lens on it.

Tim Ferriss: So, question, if people are listening to this and they are in the position of perhaps a founder who should consider identifying their short period of maximum valuation and perhaps hitting the parachute in some way, what are the options? Because I think of some of these companies, I’m not going to name them, but there are multiple companies that have multi-billion dollar valuations. There seems to be, again, from a mostly layperson perspective, i.e. me, that the labs probably can build what they are currently selling without too much trouble. Do they aim to be acquired by a lab, in which case there’s a build versus buy decision for the lab itself? Are they aiming for one of, not the OpenAIs or Anthropics, but maybe somebody who’s trying to get more skin in the game like Amazon or fill in the blank. What are the exit options?

Elad Gil: Yeah, I think there’s a lot of exit options. And the thing that’s crazy right now is if you go back 10 or 15 years, the biggest market cap in the world was 300 billion. And the biggest tech market cap was, I don’t know, 200-ish or something. I think the biggest one at the time was Exxon or somebody 15 years ago. And over the last 10 or 15 years, what happens is we suddenly ended up with these multi-trillion dollar market caps, which everybody thought was nuts at the time, but things will probably only get bigger. There’ll probably be more aggregation versus less into the biggest winners.

There’s more and more companies who have these market caps between say a hundred billion and a few trillion in a way that is unprecedented. And that means there’s enormous buying power because one percent of three trillion is 30 billion. We can dilute one percent and pay $30 billion for something, which is insane. That’s truly unprecedented. And that means that these really big acquisitions can happen.
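The dilution arithmetic here is simple but worth seeing spelled out, using the illustrative numbers from the conversation:

```python
# The acquisition math from the conversation: a trillion-dollar-scale
# acquirer paying for a deal in stock. Numbers are illustrative.
market_cap = 3_000_000_000_000   # $3 trillion acquirer
dilution = 0.01                  # dilute shareholders by 1%

purchase_power = market_cap * dilution
print(f"1% dilution buys: ${purchase_power / 1e9:.0f}B")  # $30B
```

The design point is that the purchase doesn’t require cash on hand: a 1 percent stock issuance by a $3 trillion company funds a $30 billion acquisition, which is why this cohort of giant market caps creates unprecedented buying power.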

Tim Ferriss: For the companies that I’m imagining, again, I don’t want to name names, that may have, seem to have a limited lifespan. When I’m in these small group threads with friends of mine who are oftentimes, not always, but I’m in a bunch of them. And when they’re tech investors, very successful tech investors, and I’m like, “Okay, these five companies, you’ve got 10 chips. How would you allocate your 10 chips?” There’s certain companies that can consistently get zero, even though they’re reasonably well known. Why would one of the labs buy one of those?

Elad Gil: Depends on what it is. And it may be a lab, it may be one of the big tech incumbents and Apple, Amazon, Google’s kind of both things. There’s Oracle, there’s Samsung, there’s Tesla, there’s SpaceX now in the market doing things that there’s a bunch of different buyers of different types. There’s Snowflake and Databricks. There’s Stripe, Coinbase if you’re doing financial service. There’s just a ton of companies that actually are quite large, that’s kind of the point. And so often you end up selling to one of four things, right? You can sell to one of the big labs or hyperscalers or giant tech companies.

You can sell to somebody who cares a lot about your vertical. For example, a Thomson Reuters if you’re doing legal or accounting or things that are kind of related to that. I mean, I think actually one thing that doesn’t happen enough is merger of competitors, particularly private companies where you can do that because ultimately if your primary vector is winning and you’re neck and neck with somebody and you’re competing on every deal and you’re destroying pricing for each other, maybe it’s better to just merge. That actually was X.com and PayPal in the ’90s. Elon Musk, Peter Thiel were running different companies and they merged because they said, “Why are there two people doing this? Why fight?”

Tim Ferriss: Yeah. Or Uber, Lyft way back in the day. That might not have been a merger. It might have been an acquisition.

Elad Gil: Yeah. And the rumor is that that almost happened and then the Uber side walked away from it, but all the money that Uber spent on fighting Lyft for all those years maybe would’ve been better spent just buying them. Maybe not, I don’t know the exact math on it. But often, it actually does make sense to say, “You know what? We’ll just stop fighting it out and we’ll just combine and just go win.” Because if the primary purpose is to win the market, you’re already fighting all these big incumbents that already exist anyhow, so why make it even harder?

Tim Ferriss: As you know, and we talk about this a lot, but we’ll talk about you with your investing hat on. But before you even put that, let’s call it full-time investing hat on, you had a lot in your background that may or may not have helped you. And I’m curious, if you look at your biology background, the math background, do you think any of those things or other elements materially contributed to how you think about investing that has given you an advantage in, I suppose there are different stages to winning deals, but sometimes they’re not crowded, but let’s just talk about the selection, the selection process.

Elad Gil: The math stuff helped me, I think, in two ways. One is it’s helped me with certain aspects of technical or algorithmic CS and understanding it, and sometimes it’s useful in the context of how certain things work in AI or things like that, or just fluency with numbers and data and, I don’t know what to call it, nerd language or something. And I did the math degree, honestly, just for fun. I think that’s actually the thing that was helpful. I only did an undergrad degree in math, so I didn’t go that far with it, but I did the very abstract pure math stuff. And I think that was a good forcing function of how to really think logically step by step about things because roughly the way that at least I learned how to do proofs was you do the logical sequence, but then sometimes you do these intuitive leaps and then go back and try and prove it to yourself, or flesh out the reasoning behind that intuition. And I think sometimes investing is a little bit like that.

Tim Ferriss: When did you first have the inkling that you could be good at investing? And that could be investing writ large, it could be maybe within the context of our conversations, startups and angel investing. When did you first go, “Huh, yeah, maybe I could be good at this”? Was there a moment or a deal or anything like that that comes to mind?

Elad Gil: Not really. I’m really hard on myself so even now I second guess myself a lot. Somebody was telling me that the two people that always beat themselves up the most in hindsight are me and this one other person who’s another well-known founder/investor. And so I don’t think there’s a single moment where I’m like, “Wow, this makes sense for me to do.” I think it just organically kept going because I was getting into some very strong companies, and then that allowed me to continue what I’m doing. Yeah, I wish I had a moment like that.

Tim Ferriss: Goddammit, you need to revise your genesis story like every good founder.

Elad Gil: Yeah, ever since I was seven, I’ve been thinking about investing in technology.

Tim Ferriss: So getting into those deals, what allowed you to get into those deals? Because some people have an informational advantage and they put themselves in a position to have an informational advantage. And I think that had I not — I don’t want this to be a leading question, but it’s like had I not moved to Silicon Valley when I did, 2000, and then subsequently stayed there, moved to San Francisco specifically, nothing that I was able to do in angel investing would’ve been possible. But there’s more to your story because a lot of people moved there with hopes of startup riches in whatever capacity. Not saying that that’s why you moved there, but what was it that allowed you to get into those deals? There are certain things that come to mind based on our prior conversations, but I’ll just leave it at that. Why were you able to get into or select those deals?

Elad Gil: Yeah. I think there’s what happened early and what happens now, and I think those two things are different. I think to your point, the single most important thing for anybody wanting to break into any industry is go to the headquarters or cluster of that industry. Move to wherever that thing is, and all the advice that you can do anything from anywhere and everything’s remote is all BS. And you see that for every industry, not just tech.

If you wanted to get into the movie business, people wouldn’t say, “Hey, you can write a film script from anywhere, you can digitally score it from anywhere, you can edit it from anywhere, you can film it anywhere, go to Dallas and join their burgeoning film scene.”

They’d say, “Go to Hollywood.”

And if you want to do something in finance, and you’re like, “Well, you could raise money from anywhere and come up with trading strategies and a hedge fund strategy from anywhere and you could do it from anywhere.”

People would say, “Hey, go to whatever.” Seattle, they’d be like, “Go to New York or go to X, Y, Z financial center.”

So the same is true for tech. And Shreyan on my team has been performing this sort of unicorn analysis of where is all the private market cap aggregating for technology. And traditionally, about half of it’s been the US and then half of that has been the Bay Area. But with AI, 91 percent of private technology market cap is the Bay Area, 91 percent of the entire global set of AI market cap is all in one 10 by 10 area. If you want to do stuff in AI, you should probably be in the Bay Area. Probably the secondary place is New York, and then after that, it just drops off a cliff, and really it’s the Bay Area. If you want to do defense tech, you probably should be in Southern California, close to where SpaceX and Anduril are, and Irvine, Orange County, et cetera, or El Segundo. There’s a lot of startups there. If you want to do FinTech and crypto, maybe it’s New York.

But the reality is these are very strong clusters. To your point, number one is I was just in the right location. I was in the right networks, and the default was that I was running a startup myself. I was at Google for many years, and then I left to start a company. People just started coming to me for advice. The way I ended up investing in Airbnb is I was helping them when there were eight people or something, raise their Series A, and introduced them to a bunch of people and helped with some of the strategy there in very light ways. They would’ve done it without me. They said, “Hey, at the end of it, do you want to invest a little bit?” I said, “Great, that sounds wonderful.” This was very organic.

Or the way I invested in Stripe is I had sold an early infrastructure API company to Twitter when Twitter was, say, 90 people or so, and I sent an email to Patrick, the CEO of Stripe, just saying, “Hey, I’ve heard great things about you and I really like what Stripe is doing and I would use it for my own startup. I sold this API company myself. Do you want to just talk about this stuff?” I went on a couple walks and then a week or two later he texted me and he’s like, “Hey, we’re doing a round. Do you want to invest?” The first few things that I did were very organic where the founders were like, “I want you on board.”

I didn’t think, “Oh, I should be an investor and I’m going to chase things.” I just really liked talking to smart people, I liked working on certain business problems, and I love technology and its translation to the world. I was just a nerd and I met other nerds and we hit it off. That’s the early story for me.

Tim Ferriss: But it just struck me that I’m sure people have heard, or I’m sure you’ve heard this before, but if you want money, ask for advice. If you want advice, ask for money. It just struck me that it goes the other way around too. It’s like, if you offer a bunch of advice, oftentimes you get to give money. If you try to give money, you might get solicited for advice.

Elad Gil: Yeah, good point.

Tim Ferriss: When did you write the High Growth Handbook? When was that published?

Elad Gil: It’s a while ago now. It’s probably seven-ish years ago, something like that.

Tim Ferriss: Seven years ago. All right. Yeah, we’re going to come back to that in a minute because you were in the right place geographically speaking. You were in the center of the switchboard. Like you said, some of these initial standout investments came about very organically. What I’d be curious to hear, because you also said yourself not too long ago that there’s what I did then, there’s what I did now. There’s also what you did in between along the way. I’m wondering, for instance, if you would still stand by this, this is from that first round interview I was mentioning.

“As a general rule, when I make investments, it’s market first and the strength of the team second,” and there’s more to it. But would you still agree with that?

Elad Gil: 90 percent, yes. Every once in a while you meet somebody exceptional and you just back them, maybe when it’s so early. I led the first round of Perplexity, the very, very first round. The way that came about was Aravind, the CEO, I think he pinged me on LinkedIn, literally. This was when nobody was doing anything in AI and he was an OpenAI engineer or a researcher. He’s like, “Hey, I’m at OpenAI,” which nobody cared about at the time, “I’m thinking of doing something in AI. I heard that you’re talking about this stuff and nobody else is talking about it. Can we meet up?”

We just started meeting every two weeks and brainstorming, and then that led to investing in that. That was a people-first thing where he was just so good. Every time we talked, he’d show up a week later with a thing that we discussed built. Who does that?

Tim Ferriss: Yeah, that’s a good sign.

Elad Gil: It’s so good. Or, the way I ended up investing in Anduril. Google shuts down Maven, which was their defense project. I think, “Well, if the incumbents aren’t going to do it, what a great place for startups to play.” Because there’s been a long history of Silicon Valley and the defense industry. That’s HP and that’s a lot of the early brands. I was just looking for something or somebody to work on in this area and it was very unpopular at the time. I ran into, I think it was Trae Stephens, who’s one of the co-founders of Anduril, who’s also at Founders Fund, at some brunch or something else. Again, right city to be in.

He said, “Oh, I’m working on this new defense thing.” I said, “Amazing. Let’s talk about it.” Sometimes it’s just looking for these things in a market, and sometimes it’s people. Anduril was looking for a market and then finding amazing people. Perplexity was in between, where I was looking at everything in AI, because I thought it was going to be incredibly important, but not very many people were. Then I just ran across an exceptional individual. That’s when I funded OpenAI. That’s when I funded Harvey, which is the early legal — I funded a lot of really early stuff because they were the only people doing anything in this market that I thought would be really important.

Tim Ferriss: Let me come back to a few things you said. You mentioned the Perplexity founder, the founder who said he’d heard or read or found you talking about this stuff. Where was that? Was that a post on your blog? Was it somewhere else? How did he actually find you talking about anything?

Elad Gil: Yeah, I think he pinged me in part because I was involved with a bunch of the prior wave of technology companies: Airbnb, Stripe, Coinbase, Instacart, Square, a bunch of stuff like that. I think at that point I was already known as a founder and investor. But then on top of that, I was trawling AI researchers and just asking them about what’s going on because it was so interesting. There was a bunch of art that was being done with these things called GANs at the time, these generative adversarial networks.

I was playing around with that. I tried to hire engineers to build me effectively what’s Midjourney, because I just thought it’d be really cool to make it easy to make AI art.

Tim Ferriss: Let me pause for a second because this is my second question and it’s a good time. You mentioned AI and said you thought it would be incredibly important. What were the indicators of that? What was the smoke in the distance where you’re like, “Oh, that’s an interesting direction”?

Elad Gil: Yeah, I think there were two or three things. AI was one of those things that people have always talked about. When I was doing my math degree, I took a lot of theoretical CS classes, and there were the early neural network classes and things like that and the math behind it. There’s always been this promise of building these artificial intelligences of different forms. One could argue Google was the first AI-first company. Back then it was called machine learning, and it was a different technology basis in some sense. I think 2012 was when AlexNet came out, and there was this proof that you can start scaling things and have really interesting characteristics in terms of how AI systems work.

Then 2017 is when a team at Google invented the transformer architecture, which everything is based on now, or roughly everything. For example, if you look at the GPT in ChatGPT, the T stands for transformer. Around 2020-ish I think was when GPT-3 came out, and that was such a big step from GPT-2. It still wasn’t good enough to really do stuff with, but you’re like, oh, shit, the scaling law papers are out. The step function in capabilities was huge. You suddenly have a generalizable model available via an API that anybody can ping. Just extrapolate that out to the next step and this is going to be really important.

It’s basically looking at that capability step and playing around with the technology, and then reading the scaling law papers. Or just in general, the scaling laws seemed to work for everything. You’re like, wow, this is going to be really, really important, so let me start getting involved with it.

Tim Ferriss: Do you think you would have or could have done that without a mathematics background? I’m guessing there were probably some other folks, but that leads me to the question of, how were you finding and ingesting that? Was it the talk of the town? In a sense, within your social circles and the networks that you’re a part of, it was an open discussion, so you were engaged with it. Or are you ingesting vast quantities of information from different fields and this happened to be something that really caught your attention?

Elad Gil: I guess it’s three things. I’ve always ingested a lot of information from a lot of different fields just because I like learning about stuff. I was always this mix of math and biology and anime and art and other things. It was always a mix. Then it was something that my friends were talking about, but it was a bit more toy-like, “Oh, this is cool and look at what came out.” But most people didn’t then extrapolate. It’s like early crypto or Bitcoin, everybody was talking about it, but very few people bought it. I think that was part of it. Then third, honestly, I just thought it was really neat stuff that I kept playing around with.

This is back to the GAN stuff and the art, where these different models would come out and you could mess around with them. One of the things that’s really underdiscussed, in terms of its importance relative to this wave of foundation models and AI and everything else, is the way AI or machine learning used to work: your team at a company or wherever else would go to what’s known as an MLOps team, an operations team whose whole thing was helping you set up all the data and the pipelines and everything to train a model. You’d train a model that was customized to your use case and what you were trying to accomplish.

Then you had to build a bunch of internal services to interact with that model. It was a huge pain to get to the point where you had a working ML system up and running in production. Then suddenly, you have a thing where you just do an API call. With a line of code or a few lines of code, anybody anywhere in the world can ping it, but not just that, it’s generalizable. It’s not just specialized to one use case, like spell correction or whatever. You can use it for anything and it has all of the internet embedded in it in some sense, in terms of the knowledge base. It can start having these advanced reasoning capabilities. But one of the most important things is, hey, you can get it with a couple lines of code.

You don’t have to go and build an MLOps team. You don’t have to host it. You don’t have to interact with it. You don’t have to do all this extra stuff. It just works. That’s really important.
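The before-and-after Elad describes here can be made concrete. Below is a minimal sketch of the "few lines of code" side; the endpoint URL and model name are made up for illustration (real providers such as OpenAI or Anthropic differ in details but use a very similar request shape):

```python
import json

# Hypothetical endpoint and model name, for illustration only.
API_URL = "https://api.example.com/v1/chat/completions"

def build_request(prompt: str, model: str = "general-purpose-llm") -> dict:
    """The entire 'integration': one JSON payload for one HTTP POST.
    No MLOps team, no data pipelines, no custom model hosting."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The same generalizable model handles any use case, not just one
# specialized task like spell correction.
payload = build_request("Correct the spelling: 'recieve'")
print(json.dumps(payload))
# Sending it would be a single call, e.g.:
#   requests.post(API_URL, headers={"Authorization": "Bearer <key>"}, json=payload)
```

Contrast that with the old workflow: collecting training data, running a training pipeline, hosting the resulting model, and building internal services around it, all for a single specialized task.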

Tim Ferriss: It’s huge. Yeah, it’s hard to overstate.

I have a million questions for you. The problem with this is the embarrassment of riches of directions that we could go. I am using, with my team, Claude Code and assorted tools for all sorts of stuff right now. One of them, it just so happens, overlaps with an area of great skill and experience for you, which is angel investing. This is the first time where I feel really enabled, though there is some manual effort involved, as you might imagine, to go back and do an analysis of 20 years of angel investing and try to do any number of things. I suspect that a lot of what interests me is not particularly useful, like doing some counterfactuals.

What if I had held each of these for three years, for five years, for whatever? That’s just Opus Dei, whipping myself in the back, for the most part. But in doing an analysis like that, there are certain things that immediately come to mind for me that might be of interest. I want to hear what you would do, if you would even do this. Part of it is, frankly, just curiosity. Are the stories I tell myself about this true or not? I’m interested in things like, who made certain introductions? Are there certain people who just sent me deals that were basically in hospice care, shipped over as a last-ditch effort? Are there people who actually sent me good stuff consistently, et cetera?

There are a million and one ways I could try to interrogate the data and enrich it. We’re doing a pretty good job of enriching it with Claude and other tools. OpenAI is very good at this. What are some of the more interesting questions or lines of examination, do you think, looking back? Whatever it is, in my case, it’s roughly 20 years of stuff.

Elad Gil: Yeah. The weird thing I’ve been doing is uploading pictures of founders and asking the models to predict if they’d be good founders.

Tim Ferriss: Oh, wow.

Elad Gil: Because if you think about it, we do this all the time when we meet people. We quickly try to create an assessment of that person, their personality and what they’re like. There’s all these micro features. Do you have crow’s feet by your eyes which suggests that your smiles are genuine? What does that imply about the sense of humor you have? Or, have you furrowed your brow over time and what does that mean? There’s all these micro features. When you meet people, you actually can get a pretty quick impression of them pretty fast. It doesn’t mean it’s correct, right? But we actually do this really fast as people.

I have this whole set of prompts that I’ve been messing around with just for fun around, can you extrapolate a person’s personality based off of a few images? Therefore, can you be predictive about their behavior in any way? I think that’s fun, right?

Tim Ferriss: Yeah. You’re finding any signal there or not? TBD?

Elad Gil: Yeah, it actually works pretty well. I’ve been doing weird shit, right?

Tim Ferriss: Right, practice smiling, people.

Elad Gil: Yeah, yeah. No, but I think it’s interesting because we do this all the time where we read people, and that’s part of the prompt. It’s like, you’re a very good cold reader of people based on micro features, et cetera, spell it out. Then based on that, not only do you give me your interpretation of this person, but you explain the specific micro features behind each thing that you’re stating about the person, and it will break it down for you. It’s amazing. Imagine what this technology is. It’s crazy. Again, I’m not saying it’s fully accurate and I’m not saying it’ll be predictive, but it’s done pretty well in terms of nailing people.

It’s even done things like, “Oh, this person probably has this type of sense of humor.” Or, “This person probably holds themselves back in most social settings and then chimes in with a witty wry thing that nobody expects or whatever.” It’s very specific.

Tim Ferriss: Very specific.

Elad Gil: Yeah. It’s amazing. I’ve been doing stuff like that, which may not be your question, but I’ve been finding it really fun.

Tim Ferriss: Well, it’s related in the sense that, and I’m sure I’m missing some steps, but I love angel investing and the dose makes the poison, so there’s usually a case to be made when I get to a certain threshold. I’m like, “Okay, this isn’t fun anymore.” I love dark chocolate too, but I don’t want just to be force-fed dark chocolate all day. I’ve talked about this, but I really do enjoy the learning and the sport of it, frankly, and interacting with some very, very smart people. Not all of them work out as far as founders of companies, but ultimately, I’m trying to figure out how to separate signal from noise.

Also, it’s fun to try to use anything, but in this case investing, to sharpen your own thinking and to stress test your own beliefs and the assumptions that undergird some of your predictions, things like that. I’m just wondering if you’ve ever done a retrospective analysis of your startup investing or if you’re like, “No, more Marc Andreessen style, only forward.”

Elad Gil: Yeah. Early on when I was first starting to invest, I would have this long grid of things by which I would score each company, and then I’d go back and see if it was correct. It was roughly correct. I think the hard part is there’s a lot of randomness in outcomes. There’s the company that sells for a few billion dollars that you thought was dead or whatever it is, right?

Tim Ferriss: Sure.

Elad Gil: How do you score things like that? It’s like, well, right now we’re in this really weird market moment where trillions of dollars of market cap are all chasing the same prize. They’re going to do all sorts of stuff that wouldn’t happen normally. It’s really hard to account for that kind of thing relative to all this. I’m much more in the Marc Andreessen camp of, I think very little about the past. I think close to zero about my own past, I just am like, “Let’s keep going.” Maybe that’s bad and there should be dramatically more self-reflection.

I try to self-reflect in the moment, but I don’t try to re-extrapolate and examine my entire life and decisions. If anything, most of the decisions have been ones where I’m really upset with myself for not being more aggressive on something. In other words, I’ve invested in the company, but I should have tried even harder to invest more even if I tried really, really hard because there’s a handful of companies that really matter, and that’s all that matters as an investor. Obviously, as a person, I enjoy getting involved with different companies and different founders and helping them whether the thing works or not, or I think the technology’s interesting or whatever.

But the reality, from a returns perspective, is there’s a very clear power law that people talk about, and it’s true. I remember a friend of mine did this analysis, I think it may have been Yuri Milner or someone, where it’s like, look at all the companies from, I don’t remember the exact dates, 2000 or 2004 until today in technology. It was something like a hundred companies drove 90-something percent of all the returns, and 10 companies total drove 80 percent of all returns over a two-decade period in technology. If you weren’t in those 10 companies, you were a bad investor. Once you start dealing with these power laws and these outsize outcomes and all of that, how can you rate that?

It’s basically, did you hit one of 10 things or not? That’s really the rating. That’s probably the correct rating for investment.
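The concentration Elad describes can be sketched with made-up numbers. Assuming a Zipf-style rank distribution of returns (a common stand-in for venture power laws, not the actual data from the analysis he mentions), a handful of companies dominates:

```python
# Hypothetical power-law sketch: the company ranked k returns an amount
# proportional to 1 / k**alpha. alpha is chosen for illustration only,
# not fit to any real dataset.
alpha = 1.5
n = 1000  # a made-up universe of venture-backed companies
returns = [1 / k**alpha for k in range(1, n + 1)]
total = sum(returns)

top10_share = sum(returns[:10]) / total    # share of all returns from the top 10
top100_share = sum(returns[:100]) / total  # share from the top 100
print(f"top 10: {top10_share:.0%}, top 100: {top100_share:.0%}")
```

With these illustrative parameters, the top 10 companies capture roughly 80 percent of all returns and the top 100 capture roughly 95 percent, which is the shape of the result Elad recalls: miss the top handful and your returns look nothing like the industry’s.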

Tim Ferriss: I’d love to try to focus on some early-ish decisions on this podcast, right? Because like you said, there’s how you did things then and there’s how you’re doing things now, which isn’t to say that one is better than the other, but certainly what you did in the past tends to inform what you’re able to do and what you do in the present. What I’m curious about, and we won’t spend a ton of time on this, but it might be interesting to folks, is to discuss when you moved from purely doing angel investing yourself to involving other investors in your deals, right?

There are multiple ways to do this, but the reason I want to ask this is because you did a number of SPVs. I’ll explain what that is: special purpose vehicle. For folks, you might be familiar with venture capital firms. They have funds and they raise, let’s just call it $100 million for a fund. It can be more or less, of course. Then they invest in a bunch of different companies, and then you see who wins and who loses. Then if there are profits, conventionally, let’s just use the textbook example, the venture capital firm takes 20 percent of the upside and the LPs, the investors, get 80 percent.

The venture capital firm takes a management fee to keep the lights on, although it usually does a lot more than keep the lights on. With the SPVs, you’re investing in, let’s just say for simplicity, a single company, right? There are advantages to that in simplicity for somebody who’s putting together the SPV, but you also have a lot of reputational risk, because if you have a fund and you have a couple of losers, your investors don’t automatically go to zero. But if you have an SPV and it goes to zero, that could really hurt you reputationally.
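Tim’s textbook fund structure, the so-called "2 and 20," works out as follows. These are illustrative numbers only (real fund terms vary, and carry is often calculated net of fees and hurdles):

```python
# Hypothetical "2 and 20" fund economics, purely illustrative.
fund_size = 100_000_000                  # the $100M fund from Tim's example
mgmt_fees = 0.02 * fund_size * 10        # 2% per year over a 10-year fund life
exit_value = 300_000_000                 # assume the portfolio returns 3x overall

profit = exit_value - fund_size          # carry applies to gains, not principal
gp_carry = 0.20 * profit                 # the firm keeps 20% of the upside
to_lps = exit_value - gp_carry           # LPs get principal back plus 80% of profit

print(f"fees: ${mgmt_fees:,.0f}, GP carry: ${gp_carry:,.0f}, to LPs: ${to_lps:,.0f}")
```

In this sketch the firm earns $20M in fees and $40M in carry, and the LPs receive $260M back on their $100M. An SPV in a single company follows the same split but has no portfolio to smooth out a zero.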

When I look at some of your early SPVs, which I think included, certainly, a number of name brands like Instacart and so on, how did you choose which companies to do the SPVs with? Because that seems like a very important set of decisions to lay the groundwork for creating optionality for what you do after that.

Elad Gil: Yeah. I think to your point, I’ve always been terrified of losing other people’s money. I’m fine if I lose my own money. It’s my decision. I’m an adult. It’s okay. The people giving me money are adults or institutions, et cetera, asking me to invest on their behalf, but similarly there, I was just terrified of ever losing money for people. I’ve tried over time to be judicious about the SPVs that I did early on, and the focus was on things that I thought would really be outsized companies. That was, to your point, Instacart, it was early Stripe, it was Coinbase, it was a couple things like that that were amongst my very first SPVs.

The emphasis was very much on, do I think this can be a massive thing? Also, do I think there’s enough downside protection, in some sense, that even if it didn’t work as well as I thought, it would still be a good outcome for people? Yeah, I tried to do that very diligently. It’s interesting because a lot of people ping me for help as they think about becoming investors, or they’re scouts for a fund, which means basically they’re given a small amount of money by a venture capital fund. Sequoia famously has this program: they give people money and then those people invest money on their behalf. Some of the scouts that I’ve talked to basically treat it like free money or an option.

They’re just like, “Oh, just throw out a bunch of stuff, maybe something works.” I’ve pointed out to them, “Hey, if you actually want to become a professional investor at some point, this is your track record.” A, you’re a fiduciary in some sense, so maybe you’ll be more careful from that perspective, but B, this will establish your track record, and do you want to have a good one or a bad one? How do you think about that? Again, sometimes people just get lucky and they hit the one thing out of a hundred, but that more than returns everything and they look great. But it’s hard to be consistently good at this stuff or consistently hit great companies.

Tim Ferriss: All right. I want to double click on a few things you said, and maybe you could walk us through a pseudonymous example. It doesn’t need to be a named company, but when you’re talking about setting your track record, you did an excellent job of that before you then went on later to raise funds and so on. I would love you to perhaps explain —

