IPFS, CoinList, and the Filecoin ICO with Juan Benet and Dalton Caldwell

by Y Combinator, 6/30/2017

Juan Benet is the founder of Protocol Labs (YC S14). They’re working on IPFS, Filecoin, and CoinList.

Dalton Caldwell is a Partner at YC.





Transcript

Craig Cannon [00:00] – Hey, this is Craig Cannon, and you’re listening to Y Combinator’s podcast. Today’s episode is with Dalton Caldwell, who’s a partner at YC, and Juan Benet, who’s the founder of Protocol Labs, a YC company that’s working on IPFS, Filecoin and CoinList. If you’re just getting into cryptocurrency, I highly recommend listening to episode 244 of Tim Ferriss’ podcast, which does a pretty good job of covering all the terms and explaining how they all connect to each other. Before we get started, I wanna let you know that this is a really long episode, so it’s pretty much broken up into three parts. Part one starts right after this, and it’s Juan’s explanation of IPFS and Filecoin. Part two is our conversation with Dalton, and that starts around minute 11. Part three is Juan answering questions from Twitter, and that starts around one hour and 40 minutes in. All right, here we go. Let’s just start with a description of all the words we’ve been talking about, IPFS, Protocol Labs, et cetera.

Juan Benet [00:50] – Protocol Labs is a research, development, and deployment lab for networks that I started to really build the IPFS project and Filecoin, and to create a place where we could create the kinds of projects that could turn into something like IPFS or Filecoin or other things. I really wanted to build an organization that someone like Satoshi could have seen as a way to build a project through and be like, oh yeah, instead of doing this on my own anonymously, I could go and build it at Protocol Labs. It was born out of a personal frustration: when I was starting the IPFS project, I didn’t have such an organization that I could go to and build the project there. Really, I think the only options were either a university or Google. In the university case, it would have been killed in the publish-or-perish world: hey, this is way too ambitious, focus on this one little thing, maybe publish that, and move on to the next thing. It would not have been an implementation project, similar to how the web could never really have been built as a grassroots project. On the flip side, I think this kind of tech is stuff that Google might be interested in funding, in the sense that Google funds a lot of protocols and a lot of research. But it also runs counter to basic Google positions around data, control of data, and how information flows on the internet. It’s in direct opposition, so it’s stuff that probably shouldn’t be funded or directly controlled by Google. It’s the kind of stuff that has the potential to really rebalance power on the internet. So I figured I would start an organization that’s separate. Protocol Labs is really a group that is trying to create a number of these projects and protocols around things that we think are broken on the internet. The charge that we have for ourselves, the mission, is to go in and improve and upgrade a whole bunch of the software and protocol machinery that we have running the internet, both the low-level actual internet parts and the web and more user-facing pieces. We have a very open-ended perspective of, hey, we just want to improve computing in general and improve the pipeline of going from research to a product that people use. It just happens that for now and for the next few years, we’re super focused on how information moves around the internet: how to distribute it better, how to change and rebalance the power associated with information, how to give people sovereignty over their data. Just making it more efficient, making it route around things like attackers and hostile censorship, making it so that information has more permanence, a whole bunch of questions around this. There are two projects there. One of them is IPFS, the InterPlanetary File System. It’s used by a ton of organizations: businesses, projects, blockchain networks, governments, and so on. A short way to describe IPFS is to say, hey, it’s a large-scale, content-addressed content distribution system. It’s a peer-to-peer protocol for moving around anything, any kind of content: files, data, hypermedia, whatever, in a peer-to-peer way, with proper content addressing and cryptographic verification and all this kinda stuff, and a whole bunch of tooling around the guts of making all of that work, which is peer-to-peer networks and the ability to work across a whole bunch of different transports.
There’s no end to the really important pieces of peer-to-peer machinery that you have to build, and that’s what the IPFS work was really about. It’s used by a ton of people both in the blockchain space and outside. It’s used in the blockchain space because it fits really well with the model where you have authenticated data structures, you have hashes, and you address things that way. It’s used outside because people want to distribute things in a better way. People want to address things by what they are, not where they are. It’s time for the internet to move from location addressing to content addressing. In a big way, we’ve been, I guess, appointed to do so. We have to slog through the really hard work of doing that. We’re doing it and it’s great. We’re succeeding, but there’s more to go. There’s a lot more to go.
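To make the location-addressing versus content-addressing distinction concrete, here is a minimal sketch in Go. This is a toy, not IPFS’s actual CID format (real CIDs use multihash and multibase encodings); the point is only that the address is derived from the data itself, so it doesn’t matter which peer hands you the bytes.

```go
package main

import (
	"crypto/sha256"
	"encoding/base32"
	"fmt"
)

// contentAddress derives an identifier from the bytes themselves.
// A location address ("https://host/path") names a server; a content
// address names the data, so any peer can serve it and anyone can
// verify what they received.
func contentAddress(data []byte) string {
	digest := sha256.Sum256(data)
	return "sha256-" + base32.StdEncoding.EncodeToString(digest[:])
}

func main() {
	data := []byte("hello, interplanetary world")
	addr := contentAddress(data)
	fmt.Println("address:", addr)

	// Verification: re-hash the retrieved bytes and compare them to the
	// address you asked for. A hostile or flaky peer can't lie about content.
	fmt.Println("verified:", contentAddress(data) == addr)
}
```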

Craig Cannon [05:49] – What’s the current status of making it all human readable? Because I knew that was an issue early on.

Juan Benet [05:53] – Oh, like making human-readable names?

Craig Cannon [05:54] – Yeah.

Juan Benet [05:56] – Human-readable names are an interesting question. Human-readable names should map to content, and people should use them when they know and are aware that that name is now subject to a consensus protocol. Human-readable names either require a consensus protocol that is global in scale and makes everyone agree on what the value of a name is, or they’re relative. Meaning, there’s GNS, the GNU Name System, which is relative on a trust graph. It kinda maps more to how humans think about names, where I might call a friend Jeremy, and I know him as Jeremy, but he actually might have a last name as well, and he might have other names that he goes by on the internet, and other people call other people Jeremy. GNS, or the approach of using trust graphs and social networks to name people, is a really interesting and good one, but it doesn’t give you URIs or names that you can print on a billboard that a ton of people can look at and enter into their computer, which is the whole point of human-readable naming. For that, you really are stuck with consensus. When you’re stuck with consensus, you either have something hierarchical like DNS and so on, or you have something like blockchain naming: Namecoin or ENS or Blockstack. Human-readable naming is important for people to type, but I think we have this massive addiction to human-readable naming, and it shouldn’t be used in a lot of places, because it brings in a whole bunch of baggage: hey, now you need a consensus system, now you need a network stack, now you need a whole bunch of things that normally you shouldn’t need just to address or point to some information. We still want hashes to be the main thing that people use to link to things, and just maybe allow human readability as an entry point to all of that. Do you want me to describe Filecoin first or do you wanna dive deeper?
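A toy sketch in Go of the “hashes as primary links, names as an entry point” idea. This is IPNS-like in spirit only, not the actual IPNS record format: the stable name is derived from a public key, and a signed record repoints it at new content hashes, so the name is mutable and verifiable without any global consensus. A consensus layer like DNS or ENS would only be needed to map a human-readable string to this key.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// A toy mutable pointer: the stable "name" is the hash of a public key,
// and the value it points to is a content hash, signed by the matching
// private key. Immutable hashes stay the real links; the name is just a
// verifiable, repointable entry point. (A real record would also carry
// a sequence number to prevent replaying old records.)
type nameRecord struct {
	ContentHash string // what the name currently points to
	Sig         []byte // signature over ContentHash
}

func publish(priv ed25519.PrivateKey, contentHash string) nameRecord {
	return nameRecord{contentHash, ed25519.Sign(priv, []byte(contentHash))}
}

func resolve(pub ed25519.PublicKey, rec nameRecord) (string, bool) {
	return rec.ContentHash, ed25519.Verify(pub, []byte(rec.ContentHash), rec.Sig)
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	name := sha256.Sum256(pub) // the stable, non-human-readable name

	rec := publish(priv, "sha256-SOMECONTENTHASH") // repoint at new content
	got, ok := resolve(pub, rec)
	fmt.Printf("name %x... -> %s (valid: %v)\n", name[:6], got, ok)
}
```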

Craig Cannon [08:00] – Yeah.

Juan Benet [08:02] – The Filecoin project was born out of IPFS, as a way to incentivize the distribution of content in the IPFS network. Think about the problem of storing bytes of data in the world: there are a lot of people with disks and a lot of people with data. It’s effectively a market, where some people want to buy storage and some people want to provide storage and provide a valuable service. In the old peer-to-peer tradition, people would just do resource sharing and try and hope to achieve the right balance. It’s been shown that that works for some use cases, but it doesn’t work for others. What was really missing there was an understanding that there’s actually a spectrum, where on one end some people contribute massive amounts of storage and don’t really need to use the network very much, and on the other end you have people asking for massive amounts of storage to store all their data who don’t plan to contribute any storage. This is basic: hey, introduce a currency and now you’ve mediated this market. That’s what Filecoin is about. It’s creating a currency that can mediate this market. Now, there’s a whole second aspect to it. You can look at a network like Bitcoin as an entity that managed to get tons of people around the planet to amass massive amounts of computing power to maintain the Bitcoin blockchain, all of the Bitcoin mining that’s going on. Can you create a different proof-of-work function that maintains a blockchain, and that instead of just crunching through hashes trying to find ones below a target, also causes a valuable side effect? That valuable side effect is: hey, you have to store a whole bunch of files in order to have power in the consensus. A way of framing it is that in the Filecoin consensus, if you want to participate and maintain the Filecoin blockchain, what’s counted as your influence over the consensus is not your raw CPU power, but rather the amount of storage you are providing to the rest of the network. For that, we use proofs of storage, and specifically a new kind of proof we came up with, we, I guess, discovered, which we call proof of replication. That checks and verifies that content has been correctly and independently stored. Independently doesn’t mean different physical hardware; rather, it means that a distinct array of bytes somewhere is being used to store this replica, that you can’t de-duplicate it, and that you can’t cheat by pre-generating a lot of the content. That’s a very specific problem, but the thing there is that Filecoin, with this different work function, can organize massive amounts of storage to then sell in the network. You get a lot of people to mine the currency, you have a very strong incentive to mine the currency, and then you can sell all of that storage, all of that supply that comes online, to users, and mediate this market. It’s a blockchain-powered decentralized storage network. That’s the way to think about it.
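To give a flavor of how a verifier can cheaply keep checking that storage is really being provided, here is a minimal challenge-response proof-of-storage sketch in Go. This is emphatically not Filecoin’s proof of replication, which additionally “seals” each replica so copies can’t be deduplicated or regenerated on demand; it only shows the basic possession-checking idea.

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// Toy proof of storage: at setup, while the client still holds the file,
// it precomputes the answers to a few random challenges. Later it can
// check, keeping only a few hashes locally, that the miner still holds
// the bytes, because answering requires hashing nonce||file.
type challenge struct {
	nonce  []byte
	answer [32]byte
}

func setup(file []byte, rounds int) []challenge {
	cs := make([]challenge, rounds)
	for i := range cs {
		nonce := make([]byte, 16)
		rand.Read(nonce)
		cs[i] = challenge{nonce, sha256.Sum256(append(nonce, file...))}
	}
	return cs
}

// respond is run by the miner, which must actually hold the file.
func respond(nonce, file []byte) [32]byte {
	return sha256.Sum256(append(nonce, file...))
}

func main() {
	file := []byte("pretend this is a gigabyte of user data")
	cs := setup(file, 3) // client keeps cs and can discard the file

	proof := respond(cs[0].nonce, file) // miner proves possession
	fmt.Println("still stored:", proof == cs[0].answer)
}
```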

Craig Cannon [11:15] – Dalton, you wanna just kick it off? What’s your first question?

Dalton Caldwell [11:18] – My first question is, maybe start with the timeline of you as a founder, what your initial idea was, why you started the company and just how it got here.

Juan Benet [11:29] – Sounds good. It was probably 2013, late 2013 or so. I’d been working on a whole bunch of knowledge tools, meaning software tools that can help you learn faster or help scientists figure out what’s in papers better. I found this really annoying problem, which is that data sets, scientific data sets, were not well versioned, were not well managed, and so on. There’s a whole bunch to that problem, but it struck me as this hugely lacking thing: computer scientists have Git, so we have versioning, and we also have BitTorrent, so we know how to move around large volumes of content very efficiently in a peer-to-peer way. What really seemed to be missing was this sort of combination of Git and BitTorrent that would enable these data sets to be distributed worldwide, well versioned, and so on. That sent me on a path of re-engaging with a whole bunch of stuff that I’d been thinking about many years before, a lot of peer-to-peer stuff. My background is in these sorts of systems and networking. I studied that at Stanford. At the time, I had been looking into things like wireless networks and why peer-to-peer networks like Skype work, and so on. It always struck me that that was a very untapped area. I think the potential there was vastly underutilized.

Dalton Caldwell [13:10] – A lot of the problems were with usability. I don’t know if you know my whole background, but my first company, imeem, was peer-to-peer. It was distributed social networking. A lot of these ideas keep recycling every few years. One thing that we noticed is how hard it was for users to deal with the negative side effects of having something that is peer-to-peer. BitTorrent worked pretty well, but even Skype. Skype kept it really quiet; you didn’t know that it was peer-to-peer. Unless your upstream bandwidth was saturating and you got a nasty letter from your ISP or something, you had no knowledge as a user. My takeaway during that era was that usability always trumped the elegance of peer-to-peer models. Then when I saw YouTube take off, YouTube is exactly the sort of thing you’d expect to be built on top of BitTorrent, but in fact it was entirely centralized and they were streaming everything themselves. Holy cow, because it worked so well, and Flash Video worked so well, that’s what won out. My knowledge going into this, and we’ll come back to your story in a second, is that usability, to me, is such an important requirement for these distributed systems to get used by end users.

Juan Benet [14:20] – Absolutely. Without a question. Famously, I think Drew has even pointed out how there were a whole bunch of clunky sync, file sharing sync things that really just did not work. The big thrust of Dropbox for a while was just get usability right, get the user experience flawless, and it almost doesn’t matter what you do underneath the hood as long as you make sure the UX is like-

Dalton Caldwell [14:47] – I’m sure back in the day, everyone was like, well, we have rsync, that’s good enough. We don’t need Dropbox, we have rsync.

Juan Benet [14:52] – But then there’s this other fundamental difference, which is that, yes, absolutely, building these systems is hard and you have to pay attention to the UX, but there’s a whole bunch of places where, economically, it makes a ton of sense to do something better and to do something that has a different arrangement. There was a period of time, basically from 2003 to 2009 or so, where peer-to-peer was sort of dead. I sorta call this the peer-to-peer winter, similar to the AI winter. There’s been a series of AI winters, and that was kind of the peer-to-peer winter. There probably were peer-to-peer winters before, because peer-to-peer is actually a pretty old concept. A lot of people have been struggling with the differences between making things peer-to-peer or centralized since the beginning of the internet. There’s a whole bunch of reasons why a lot of the companies that were getting built around that time failed, why products failed, and why there were very few success stories. I think Skype and BitTorrent are probably the biggest success stories from that entire time. Skype didn’t really talk about peer-to-peer very much, and BitTorrent, aside from Blizzard and a few others, was mostly used for moving around a lot of movies. That said, though, the actual CS behind it, the actual engineering reasons for choosing to do something peer-to-peer, make a ton of sense. This actually connects very well with Protocol Labs as a company, because the key thing is to understand deeply what the benefits of using some technology are. From a research and theory perspective, what is the theoretical difference between doing one thing one way or another, between centralized models or decentralized models, between doing things peer-to-peer or doing things in a hierarchical, well-structured way? Those different properties can give you a different range of opportunities. Now, peer-to-peer is a lot harder to build with because you don’t have a lot of control. When you build centralized things, it’s a lot easier for people to get going. Lots of established ways of doing things and so on.

Dalton Caldwell [17:04] – Rolling out changes. We could enumerate all these. It’s easy to roll out a website. It’s hard to distribute software and get everyone to upgrade.

Juan Benet [17:15] – I would argue that it’s easy to roll out a website today because you’re working on top of decades of centralized engineering, whereas we haven’t had the same deep level of engineering on the peer-to-peer side. The majority of groups that end up going into peer-to-peer end up having to create a lot of stuff from scratch, because it either hadn’t been done or had been done in a way that wasn’t reusable. This was actually one of the big thrusts of the IPFS project in general: create a huge toolkit that people can use to build applications in peer-to-peer land without having to reinvent everything from scratch. It was this really huge frustration for us. Okay, great, it’s 2013, ’14 at the time, and we have to go back and rewrite tons of normal peer-to-peer stuff that could have been written 10 years before, mostly because the language and tooling had changed and we wanted to do a few different things. We couldn’t reuse a whole bunch of the libraries that were out there, or the libraries made a whole bunch of assumptions about reality that were broken. Very famously, just from the engineering perspective, things like assuming that you are gonna be working on top of TCP, and that the port that you have is a TCP port and not a UDP port or some other transport, can right away make a library completely unusable for a project years down the road.
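This transport-assumption problem is what the libp2p work that grew out of IPFS set out to fix. As a hedged sketch of the design idea only (this is not libp2p’s actual API), the cure is to write protocol logic against a transport interface instead of hard-coding TCP:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
)

// A sketch of transport-agnostic design: code that speaks to a Transport
// interface doesn't care whether the bytes ride over TCP, QUIC,
// WebSockets, or some transport that doesn't exist yet.
type Transport interface {
	Dial(ctx context.Context, addr string) (io.ReadWriteCloser, error)
	Listen(addr string) (net.Listener, error)
}

// tcpTransport is one implementation; a UDP- or QUIC-based one could be
// swapped in without touching any protocol logic built on top.
type tcpTransport struct{}

func (tcpTransport) Dial(ctx context.Context, addr string) (io.ReadWriteCloser, error) {
	var d net.Dialer
	return d.DialContext(ctx, "tcp", addr)
}

func (tcpTransport) Listen(addr string) (net.Listener, error) {
	return net.Listen("tcp", addr)
}

func main() {
	var t Transport = tcpTransport{}
	fmt.Printf("transport ready: %T\n", t)
}
```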

Dalton Caldwell [18:34] – I remember dealing with NAT traversal.

Juan Benet [18:36] – Yeah. NAT traversal is a wonderful problem. It still plagues people everywhere.

Dalton Caldwell [18:42] – Let’s go back. You were working on distributed systems. This was interesting to you. How did this turn into the company? What was the thing you applied to YC with? What was the timeline there?

Juan Benet [18:51] – I applied to YC with a plan of doing this, of building both IPFS and Filecoin, and a company called Protocol Labs. Right from the beginning, it was this large-scale plan of going to build a whole bunch of different things, all around distributed peer-to-peer systems, all about decentralization, and with a business model of taking a portion of currency. This was in 2014, when this was a very new thing. People weren’t doing this. There was basically Ethereum and a couple other groups that had also come to the same conclusion. Aside from a few side projects that we started, and basically delaying our timelines, in terms of the software taking a lot longer to build than expected, we’ve pretty much followed the plan: from the beginning, we had both IPFS and Filecoin. Connecting to what I was saying earlier, I had this problem around data sets and versioning and so on. That led down the rabbit hole of really thinking through how information moves on the internet in the first place. How do we do addressing in general? It turns out, with HTTP and so on, we do all this location-addressing stuff. It works very well for certain kinds of use cases, but absolutely terribly for a bunch of other use cases, and it introduces a whole bunch of brittleness to the infrastructure. Exploring this set of ideas that had been well trodden by lots of groups before me, and before the current wave of peer-to-peer–

Dalton Caldwell [20:30] – Do you remember MojoNation?

Juan Benet [20:32] – Of course.

Dalton Caldwell [20:33] – I would run it in my dorm at Stanford.

Juan Benet [20:35] – Awesome.

Dalton Caldwell [20:36] – It had a lot of the primitives in there. I ran a node, and I had storage space on my PC. I had fast internet. It was great.

Juan Benet [20:44] – I was not familiar with Mojo until I chatted with Zooko about it. It turns out Mojo pioneered all this, completely.

Dalton Caldwell [20:51] – I thought it was so cool. I completely drank the Kool-Aid. This was in 1999. That was my favorite thing.

Juan Benet [20:59] – It was a great era. The first major, large-scale DHTs had just been deployed. You had a bunch of people building peer-to-peer networks, like Kazaa, whose team then went on to build Skype and a whole bunch of other things. It was very promising. It was the moment where everyone was getting connected to the internet. You could now build huge, large-scale infrastructures and so on. Again, there was this peer-to-peer winter. There’s a whole bunch of reasons why that happened, and people could sit around debating them, but I think it had to do with the fact that the first primary use case that people were using peer-to-peer for was copyright infringement, and that being not a viable strategy for a lot of companies. Another thing was that it was right around the same time as the rise of the normal cloud. Google and other companies were investing very deeply into building large-scale distributed systems, but they were building hierarchical structures. They ended up funding a ton of work down the road in a bunch of labs, and a lot of the labs that were doing peer-to-peer research switched entirely to doing cloud infrastructure research. That’s another point. Then, probably third, there was no digital currency, so you couldn’t actually pay people correctly. You had a bunch of trusted intermediaries. You had the beginnings of cryptocurrency.

Dalton Caldwell [22:24] – Mojo was a currency, as I recall.

Juan Benet [22:25] – That’s right. You had the beginnings of digital currencies, but they were still very unproven and still, I think, relied on significant trust in a bunch of places. You didn’t have the same fungibility that you, sorry, the same level of trustlessness that you have with something like Bitcoin. I think the properties were not quite there yet with digital currencies. I think another one was just the hardware: the hardware that people had did not warrant a peer-to-peer structure yet. Meaning, it made sense for a number of use cases, but for a different set of use cases, it didn’t make that much sense. It’s interesting to think about computing and a normal computing problem this way, because a lotta people always get hung up on how things scale, but when you actually think about the total magnitude of data in a problem, sometimes you realize, oh, yeah, just throw that into one server. You have one server, and maybe you replicate that to five servers that are all full copies of the index, and you’re done. You don’t have to build a very complicated distributed system to deal with this, because your total amount of data is way smaller than the latest disks. So, whatever.

Dalton Caldwell [23:36] – Let’s think about this, just to put this in context. In a lotta ways, history is repeating itself, and the same ideas cycle back. I’ve heard Marc Andreessen say this about Webvan: he’ll keep funding ideas that didn’t work, over and over again, ’cause eventually they’ll work. Instacart and Webvan. It seems like a lot of these ideas are well known to researchers and computer scientists, and we’re trying them again. There’s a bunch of things that are different. You listed a few of them. But to enumerate them so I understand: the tools are better. Is that one of them?

Juan Benet [24:03] – Yeah, massively.

Dalton Caldwell [24:05] – Just the tools are better. Two, something about the hardware infrastructure of bandwidth plus CPU?

Juan Benet [24:12] – Computing changed. Just the numbers, the actual raw numbers that people have, either just in disks and–

Dalton Caldwell [24:18] – So it’s Moore’s law-type stuff.

Juan Benet [24:20] – Yeah. It’s not just Moore’s law, because you have to account for accelerating returns in computing and in storage and so on, but not so much in bandwidth. An interesting comparison is realizing that storage is decreasing in cost super rapidly, whereas bandwidth is not. There’s this trade-off between storage and bandwidth where storage is getting cheaper at a really rapid rate, whereas bandwidth is not. Because of that, what you end up with is the feeling that you’re constantly saturating your pipe and that the internet is constantly slow, because we keep building larger and larger applications and larger media; you’re just putting a lot more data through it. Bandwidth is just not improving as fast. Eventually, we’re gonna get to a point where it might actually be cheaper to physically ship stuff to consumers.

Dalton Caldwell [25:26] – Hard drives in 747s.

Juan Benet [25:26] – It’s crazy. Already, if you look at how large companies move data, they do not send it over the internet. They ship it in packages or move it around physically in some other way.

Dalton Caldwell [25:41] – Or direct fiber. If you do data center to data center transfer, you have a direct fiber line, and it’s not actually on the internet.

Juan Benet [25:46] – That’s right. If you have some really fast link, not really an uplink because you’re in the core, some really fast link between two data centers, then, yeah. But for example, if you’re a company and you’re trying to put some data into Amazon, Amazon will say, “Hey, just ship us a hard drive and we’ll put it on for you.” There’s packet switching, and then there’s also package switching.
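To put rough numbers on the packet-versus-package point (the figures below are illustrative assumptions, not numbers from the conversation), a short back-of-the-envelope calculation in Go:

```go
package main

import "fmt"

// Back-of-the-envelope "packet switching vs package switching" math.
// All numbers are illustrative assumptions.
func main() {
	const dataTB = 100.0       // data to move, in terabytes
	const linkGbps = 1.0       // sustained network throughput
	const shippingHours = 24.0 // overnight courier for a box of drives

	bits := dataTB * 8e12 // TB -> bits
	netHours := bits / (linkGbps * 1e9) / 3600

	effGbps := bits / (shippingHours * 3600) / 1e9
	fmt.Printf("network transfer: %.0f hours\n", netHours) // ~222 hours
	fmt.Printf("shipped drives:   %.0f hours (effective %.1f Gbps)\n",
		shippingHours, effGbps) // ~9.3 Gbps effective
}
```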

Dalton Caldwell [26:14] – Are those the big differences? I did the enumeration. Am I forgetting any other major factor about why, this time, we’re running the same play again, but this time it’s gonna work?

Juan Benet [26:25] – I don’t think it’s the same play. I think it’s vastly different. I think that when you think about what the projects are trying to do and what they’re building and what applications people are going for, it’s very different. I think maybe Mojo was one exception in that they were really far ahead thinking about cryptocurrency and resource sharing and all this kinda stuff.

Dalton Caldwell [26:44] – Remember, it was hard drive space. Again, as a user, I could rent out my hard drive space, I could rent out my CPU. There were three things: storage, bandwidth, and CPU. You earned Mojo from each of those things.

Juan Benet [26:54] – That’s right. There were a few people around at the time, especially on the Cypherpunks mailing list. You can go back and read a bunch of ideas that have just become reality in the last few years. There were definitely a lotta people already thinking about the things that we’re doing now, but nowhere close to doing them. There’s one big difference between this wave and the last wave: being able to actually reach a range of applications that were dreams and ideas back then, but far away, makes this wave quite different in its goals. When you think about peer-to-peer in 2001, you don’t think about Mojo that much. You think about Napster, you think about Kazaa, you think about those systems. BitTorrent was actually at the tail end; BitTorrent got massive, but right as a whole bunch of the other ones were failing and going away. When you think about what was working really well with peer-to-peer networks at the time, it was mostly pretty simple peer-to-peer structures. Definitely there were people using DHTs, and there were definitely people doing some amount of distribution of files and so on, but it was mostly around very simple file-sharing problems.

Dalton Caldwell [28:16] – Again, so to summarize, the use case really matters. That’s what you’re saying.

Juan Benet [28:20] – I think both the tooling and the use cases that people got to are very different. Take smart contracts: you had the beginnings of what smart contracts were gonna be able to do, but you didn’t have them at the level of trustlessness that, say, Ethereum gives you today. That is a very important piece of infrastructure. Once you deploy something like Ethereum, a whole bunch of other things become instantly possible, which you did not have at the time. You did not have this kind of worldwide computer, effectively, that allows you to run some very expensive but trustless code. You don’t have to trust the computers running this code to give you the right output. You have a way to verify that all of the computation was done correctly and all this kinda stuff.

Dalton Caldwell [29:11] – Let’s try to use the same thought experiment. There’s infinite demand for free music. I remember, I’m exactly the right age. I was in college when Napster took off. There was a product that everyone wanted. Yes, it was illegal, but there was infinite demand for that. What is the closest analog for the current generation of things that you think there’s inherent consumer demand for, that can be the equivalent thing that pushes this wave?

Juan Benet [29:40] – There’s a lot there because, first of all, it’s not about consumers. This peer-to-peer wave, and the reason why it’s massive, is not because consumers are using it. I think that’s one of the things that Silicon Valley has failed to understand. In 2013 and ’14, a lot of the blockchain tech was being built in New York and Europe, far ahead of Silicon Valley. I remember having a lot of conversations with people here and in New York and Europe. The level of thought outside of Silicon Valley was vastly superior. It was very surprising and annoying to me, because I was like, wait, Silicon Valley’s supposed to be the place where all of the tech gets built. The reality is not that there was more thinking elsewhere; certainly people in Silicon Valley understood all of that and had thought about it. But the understanding in Silicon Valley of what businesses or value propositions might actually be useful was dramatically centered around consumers. In reality, what Bitcoin and Ethereum did was allow you to create any kind of financial instrument extremely cheaply, and with almost free verification that the instrument proceeds correctly, which is not normally a consumer need.

Dalton Caldwell [30:55] – With regards to the consumer part, what is the burning desire or need that you think is best solved?

Juan Benet [31:02] – But it’s not one thing. It really isn’t one thing.

Dalton Caldwell [31:04] – Can we name one?

Craig Cannon [31:05] – Maybe you should just talk about Filecoin now.

Dalton Caldwell [31:07] – What is the burning need, and it’s okay if it’s not consumers, what is the thing with Filecoin that is gonna make people, whether businesses or consumers, get really excited about using it?

Juan Benet [31:19] – Filecoin is not representative of the entire industry; Filecoin is one example. With Filecoin, the point, and this is a whole different argument that I think makes sense with or without a peer-to-peer winter or summer, is the massive latent storage that’s out there, and putting it to good use. There are exabytes of storage that are not in use right now, and if you were to add them to the market, you would drive the price down significantly. That’s true whether or not there’s currently a peer-to-peer wave, and whether or not people are excited about peer-to-peer or decentralization in any way. Now, there’s a point that you can build a network like Filecoin that can use decentralization, and can use financial assets created cryptographically, to organize a massive group of people around the planet to do this. Forgetting all of the excitement around decentralization, just think about Bitcoin as a way to incentivize people to add a bunch of hardware to a network. There’s been nothing like it. It generated a massive, massive amount of computing power dedicated to doing one single thing, which is trying to find hashes that are below a target. You have tens of thousands of people around the planet that worked really hard to add a bunch of hardware to this. You end up with this insane hash rate that, when you actually work out the amount of power and computation it’s using, is one of the most powerful computing networks on the planet. When you take that idea of creating a very strong financial incentive for people around the planet to do something, and you couple it with building some other kind of resource-sharing network, something like file storage, you can organize a massive, massive network as well. You can put all of that latent storage that is already there, depreciating and going to waste, into valuable use. Filecoin’s a business. You have to think about these networks as services and businesses that are solving some set of problems, but it’s not always just one problem. That’s, I think, fundamentally different about this type of thing compared to normal consumer products. They solve a lotta problems for a lotta people.

Dalton Caldwell [33:41] – You said financial, though. Again, they’re doing it for financial reasons. Again, what I’m looking for is what is the incentive for someone to get involved, whether it’s a business or a consumer, the reason you would put a miner on the network?

Juan Benet [33:53] – For Filecoin specifically, the reason why somebody would add storage to the network, the primary motivator, will be money. That’s what’s gonna drive this massive amount of storage. Now, a secondary and very important motivator is the fact that data is completely centralized in a whole bunch of providers. We get a lot of businesses and people highly concerned about this, who want to distribute their data across a number of providers and want stronger guarantees. They want a different set of features. But you don’t necessarily need the peer-to-peer to achieve that; that just happens to come with the package. Think about Bitcoin miners: their motivations are not fundamentally about just enabling peer-to-peer and so on. A huge motivator there is money. Now, that’s not true of the early Bitcoin miners. A lot of the early Bitcoin miners were primarily motivated by building a digital currency that was not controlled by any government. That’s something very different from what we have today. What we have today is a structure where it’s a massive business and people are going for it. That’s, I think, fundamentally different. But it doesn’t make sense to try and box it in and say, hey, there’s one thing that the entire industry is trying to do. Filecoin is completely different from the rest of the industry. We’re using things from the industry to create a very powerful service. The reason I mentioned financial instruments is because that is a fundamental innovation that both Bitcoin and Ethereum enable: the ability to create financial instruments extremely cheaply, without spending tens to hundreds of thousands of dollars, instead writing a few lines of code. You don’t have to litigate this in court. If it goes wrong, it just automatically settles in a computer. What happened with the blockchain stuff is that software began to eat finance and law in a way that had never happened before. There were a whole bunch of ideas that people had had for a long time, some of them a few years, some of them decades, that got knocked loose by the existence of a digital currency that was ubiquitous. Suddenly, a ton of these applications could be built. It’s a very different thing from the early peer-to-peer time. The fact that IPFS and Filecoin happen to relate a lot to the early peer-to-peer goals is a side effect. The majority of the blockchain world does not care at all about those early goals. They care about different goals. They care about a different kind of decentralization: decentralization of power, not of resources. Filecoin happens to care about decentralization of resources and distribution and use and all that kinda stuff, but it’s a very different thing.

Craig Cannon [36:34] – How were people incentivized with Mojo to put their drives online?

Dalton Caldwell [36:38] – Mojo was the name of the currency. It was tokens; it wasn’t really a currency. You would earn Mojo, and you could spend it on other stuff. They were very vague about that, but you could spend it on other storage.

Craig Cannon [36:52] – It wasn’t liquid in the same way.

Dalton Caldwell [36:55] – It was kind of like when Bitcoin very first came out. It was sort of like a cool thing on Slashdot. It wasn’t a serious project. What was interesting is people–

Juan Benet [37:05] – To some people.

Dalton Caldwell [37:06] – People spent a lotta time doing black hat stuff to try to earn more. It was very fun to try to get more. I used to read the commit list, and a lot of what they had to write was anti-hacking stuff, which you would expect. Someone with a hacking brain, whenever they see new stuff, it’s always fun to take advantage of it. Cool. What do you think? This is sort of an aside, but I read YC applications for all this stuff, and I’m trying to understand what the best uses are. Where do smart contracts help you, as a founder? This is a little bit outside of the IPFS thing, but what are the use cases where, in their current state, smart contracts are most useful? I see a lotta people applying with these, and I’ve yet to see one with a non-conceptual use case. Is there a case in your business where you would use smart contracts?

Juan Benet [38:06] – You can think of Filecoin as a smart contract, whether it’s implemented on top of Ethereum or not. You can think of the idea of a protocol declaring what the rules of a transaction are going to be, with a very clear, cryptographic way of proceeding through that transaction and verifying it at the end. That’s effectively a smart contract. You can think of Bitcoin, that whole thing, as a smart contract.

Dalton Caldwell [38:36] – I feel the metaphor. I’m thinking of the part where we eat the financial world. What’s the lowest hanging fruit?

Juan Benet [38:45] – You can go today and start writing something that behaves like equity, or something that is a derivative: all of these kinds of financial instruments that would take you a ton of time to think about and reason about and inject into any kind of legal jurisdiction in the world. You’re now able to do that in a totally different way, with a whole bunch of assets that represent real value. I think there’s a ton of these that have very direct use cases and applications, but they’re not consumer. That’s why you’re seeing a wave of things that seem weird to Silicon Valley. They seem like, oh, this would never work. Yet there’s a ton of companies out there in the world that actually need these kinds of things, and they’re like, okay, wow, that means I don’t have to spend hundreds of thousands to millions of dollars in legal fees just to understand, reason about, and conduct these transactions, and then have to worry about litigation down the road, in the millions of dollars, to try and make sure the transaction is safe. You can turn that into single dollars of transaction fees. That is a massive shift. We haven’t even begun to see the offshoots of that. There’s been the beginnings of this. You’ll see a ton of assets being created on Ethereum that have a bunch of different kinds of properties. Effectively, you get to create any kind of financial instrument that you want, as long as you can reason about how to program it and you can deploy it into the network. You can solve a whole bunch of these kinds of problems. One interesting example is insurance. You can do insurance trivially on top of Ethereum. There’s, I think, a really fun one, though I’ve yet to use it. Maybe insurance is an interesting consumer one, actually. An insurance policy is a very simple idea. There’s a whole bunch of regulation in the regular jurisdictions around how insurance policies work, but you can definitely create financial structures around insuring some activity. There’s a contract out there where you can tell it your flight and buy an insurance policy for a few ether. If the flight gets delayed, it pays out some amount. If the flight gets canceled, it pays out some amount. All of that can happen by writing a few dozen, maybe a few hundred, lines of Solidity. It needs some sort of oracle that brings in the real-world data of the flights, but once you have that, you’re set, and you can now create an insurance policy. I don’t know exactly who built it, but it was effectively a trivial project, and you now have what would normally take a company of dozens to hundreds of people reasoning about the entire legal landscape around insurance, how to provide it, the legal protections to make sure it goes correctly, and how you collect. All of that madness goes away completely by replacing it with a single smart contract. I think those are the kinds of things we’ll start seeing. There’s a big bottleneck right now, though. Let me rephrase this, ’cause I don’t think it’s characteristic of the entire space: one of the fundamental innovations of something like Ethereum is this software eating finance and law.
When you can create these contracts and financial instruments really trivially, a whole bunch of things open up. So far, the people who have waded through the difficulty of using these platforms happen to be people looking at large-scale transactions: things that would otherwise require a lotta money to do, or things that suddenly become possible in the space of crypto assets, and they’re just kinda transplants from the regular world. They look at some way of doing transactions in the regular world, and they say, “Wouldn’t it be great to do that with ether?” Then they go and build it. What you’re gonna start seeing in the near term is this blocker around user experience where, right now, nobody can use these blockchain systems from normal consumer devices with the kind of UX that people expect. There’s a massive barrier there for the ton of applications that could be geared towards consumers. Instead of the entire space starting from consumer needs, now you can create something that solves some important consumer need, but right now it cannot get deployed easily and it cannot take off, because the UX is so hard to get right. Every individual project has to spend an enormous amount of resources thinking about the UX for its users. One great example of this is OpenBazaar. It was a great project. They were building a completely decentralized eBay-type thing that allows buyers and sellers to come in and buy and sell things. When the project started, they had to build their entire peer-to-peer stack from the ground up. That was a huge undertaking for them. At the same time, we were building IPFS. They found IPFS, it made a lotta sense for them to switch over, and they did. That hopefully saved them a lot of time in the lower layers, but then they had to go and build all of the UX side of things. They had an application that you could download and run locally, but then thinking about mobile, you now have to build that mobile application and give people the same kind of utility. That is yet another massive undertaking. They’re doing it, and I’m super impressed. They have this awesome mobile app that, I don’t know if it’s out yet, but it’s super exciting. I think it’s one of the very first things in the entire space that gives you the really nice, normal UX that you would expect in normal products. The entire space has to catch up. I think it’s gonna take about a year or two before you start seeing these things get mainstream consumer use. It could happen faster. Maybe it’s one library away: if somebody writes a really solid library that solves a bunch of the problems, then everything gets easy. Just because you’re not seeing a ton of consumer use yet does not mean these things aren’t about to hit in a huge way.
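The flight-insurance contract described here would really be Solidity plus an oracle feeding in flight status. As a hedged sketch of just the settlement logic, written in Go for consistency with the other snippets, with all names and payout multiples invented for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// A sketch of the flight-insurance settlement logic. The real thing would
// be a Solidity contract with an oracle reporting flight status; all
// identifiers and amounts here are invented for illustration.
type FlightStatus int

const (
	OnTime FlightStatus = iota
	Delayed
	Canceled
)

type Policy struct {
	Flight  string
	Premium uint64 // paid up front, in some small unit of ether
	settled bool
}

// settle computes the payout once a trusted oracle reports the status.
func (p *Policy) settle(status FlightStatus) (uint64, error) {
	if p.settled {
		return 0, errors.New("policy already settled")
	}
	p.settled = true
	switch status {
	case Delayed:
		return p.Premium * 5, nil // e.g. 5x premium for a delay
	case Canceled:
		return p.Premium * 20, nil // larger payout for cancellation
	default:
		return 0, nil // on time: premium stays with the pool
	}
}

func main() {
	p := Policy{Flight: "SFO->JFK 2017-06-30", Premium: 100}
	payout, _ := p.settle(Delayed) // status would come from an oracle
	fmt.Println("payout:", payout)
}
```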

Craig Cannon [45:06] – That’s what I wanted to focus on before. You kind of juxtaposed 2013, 2014. People not really getting it here, things not being built here. Obviously, in 2017, things have changed. What has changed? What’s motivating people now to start building these things? I wonder, we have a lot of founders listening, and they’re trying to figure out the ideas, what’s needed. What changed to make this possible?

Juan Benet [45:37] – What changed to make what specifically possible? ‘Cause there’s a lot of them.

Craig Cannon [45:41] – One trajectory is why is San Francisco into this now? Then the other is what changed to start pushing? Obviously, Ethereum came out, but then all these products are following, as well.

Juan Benet [45:52] – I think San Francisco and Silicon Valley got interested when the ICO craze happened, when you suddenly had projects raising tens of millions: 20, 35, 100 million dollars. Suddenly everyone was like, what the hell is going on? What’s happening? It was actually the money that drew attention, which I think is pretty short-sighted. The underlying hard, foundational work that people are reaping the benefits of right now, all of that creation of value, happened over the last two years with Ethereum, which was a project that here was seen as this crazy science-project thing that was never gonna work, and was disregarded completely by tons of people. They just failed to do the deep thinking of looking at these very contrarian perspectives and ideas that dared to question underlying base assumptions about consumer products today: that nobody really cares about giving your data to everyone, nobody really cares about trust, nobody really cares about running these kinds of transactions, everyone has some easy way to use their mobile app, everyone trusts Google, Facebook, Apple, Amazon, Airbnb, whatever, and it’s really just about convenience. If you don’t have something that’s convenient, screw it, it’s never gonna work. That was just false. I don’t mean to characterize the entire Valley that way. I think there were a ton of people thinking very deeply about what Bitcoin was gonna do to the world, and a lotta people invested very heavily in Bitcoin and in creating Bitcoin companies. That turned out, in a number of ways, really well, and in other cases, not so well. But what can be said about the whole space is we’re seeing projects emerging that are about building large-scale infrastructure that might take years to build out before the utility is shown. That’s just something that normal VC can’t entertain. VC is not built for long-term investment in things that are extremely high risk and building some deep, foundational technology. VC is built for 10-year returns, which means that in two or three years, you have to show very strong signs of massive adoption and a really strong business, which means that if something is more than two years out in development, and there are research questions to be solved, it doesn’t fit. You have to go and figure out all of that before.

Dalton Caldwell [48:36] – Historically, wasn’t this the government that funded all this stuff?

Juan Benet [48:38] – That’s right. This is part of why I’m building Protocol Labs: there’s this huge open area where stuff around building large-scale infrastructure is not getting funded. My claim is you couldn’t build something as free and open, and that works as well, as the internet today, because no group would fund it. What you would end up with is a massively stunted version of something highly centralized and controlled by a couple of groups, and it wouldn’t have the amazing generality of something like TCP/IP. Part of what’s beautiful about TCP/IP, DNS, that whole era of protocols, is that people worked super hard for months and years at a time to think about the interfaces and refine them, so that you could end up with something sufficiently abstract to support a ton of use cases, and sufficiently concrete to actually work today. That kind of development is not super fast, and it takes a lot of work and a lot of money. That’s not something that VC funds. VC funds clear application use cases.

Dalton Caldwell [49:52] – Why should VC? In those cases–

Juan Benet [49:54] – I’m just describing.

Dalton Caldwell [49:55] – Universities and Bell Labs. Those things exist today. It’s mostly doing AI stuff, but the closest equivalent to that are things like OpenAI or Google Brain and all of that, where there’s absolutely no practical use for that stuff, but man, there’s a lotta money in grant research.

Juan Benet [50:11] – No practical use for Google Brain?

Dalton Caldwell [50:13] – They’re not productizing it immediately.

Juan Benet [50:16] – I disagree very strongly. I think that a very clear use for Google Brain is massive cost reductions to a company like Google.

Dalton Caldwell [50:23] – I guess I can speak much more confidently about OpenAI ’cause I actually know how that works. We’re not productizing that.

Craig Cannon [50:30] – You see a lotta stuff being rolled in, like speech-to-text, already.

Dalton Caldwell [50:33] – But again, it’s not a startup. There’s not a double bottom line where people are trying to monetize. That’s not why it’s being funded. I’m saying, if you want people working on this foundational stuff, and you’re trying to replicate what you feel worked really well, do you think it needs to be direct? Does the analog need to be directly replicated?

Juan Benet [50:56] – These kinds of projects don’t get funded. You could see it with something like SpaceX and Tesla. SpaceX and Tesla both went through major, major funding issues. If Elon hadn’t been extremely wealthy, both projects would have probably failed. That is a clear example. SpaceX is maybe not directly consumer, but Tesla, definitely. Tesla is a very consumer-oriented thing, but it was extremely difficult. It was a large-scale, long-term project that just scared the hell out of VC, with good reason: it’s extremely unlikely that you would get any of that to work. But what I’m highlighting is not that VC has to fund this. What I’m saying is that’s not what VC funds. Because it’s not what VC funds, and because there is no strong ARPA organizing major large-scale infrastructure endeavors like there used to be, you have this gap, this hole of things that weren’t getting funded. Bell Labs is a great example. Part of the reason that I started Protocol Labs is to try and re-create the spirit of the labs in an organization, at least focused around networks. The reason that you had something like Bell Labs was that you had an entity that was very enlightened in its perspective about technology, understood how to innovate, and understood how large-scale innovations could be done with deep investment over decades. They would run projects for multiple decades. Sometimes they would break up projects: these first five years are gonna be about this, the next five years are gonna be about this other thing, and so on.

Dalton Caldwell [52:38] – Out of curiosity.

Juan Benet [52:39] – You wouldn’t see a product until much later.

Dalton Caldwell [52:40] – Do you know what their budget was if we translated it into modern dollars?

Juan Benet [52:44] – I don’t know what the budget was, but it’s tens to hundreds of billions or more. Massive.

Craig Cannon [52:49] – Then you figured it out with Ethereum, right? Because the value accrues to the people that are creating and developing the protocol. There’s a fundamental shift.

Juan Benet [52:55] – Yes. But that’s, I think, something a bit different. I wanna draw an analogy between what happened at Bell Labs and Google Brain. Bell Labs was about constructing massive cost reductions for Bell. The reason Bell Labs got to thrive as an organization was because it represented a very strong financial interest for this massive monopoly that had an enormous business. They had deep pockets to invest deeply into things that were gonna save them a lot of money later. Bell could look at things like, oh, wow, vacuum tubes are really inefficient, or vacuum tubes break a lot and it’s a huge pain to repair them. Wouldn’t it be great if we had something better? It took something like 10 to 20 years before the transistor. I might be wrong on those dates, but the point is that Bell Labs understood the need for this massive cost reduction, and had it as one of the open problems. If you were a researcher at Bell Labs at the time, one of the things you could work on that was seen as a very important problem was to replace vacuum tubes: create something that can replace vacuum tubes. It took a whole bunch of open-ended thinking and deep fundamental research from a physics perspective, in solid-state physics, this is the story of Shockley and Bardeen and Brattain and so on, to come up with the transistor and solve that problem. But that was an innovation that happened on a time scale of decades, primarily motivated by cost reductions on the large-scale Bell front. The funding that Bell could feed into tens to hundreds of researchers thinking about this specific problem on a 10-year time horizon, to try and get that cost reduction, is something that, to date, only massive monopolies have been able to provide. Either massive monopolies in business, like Bell or Google, or massive monopolies in power, like the US, able to say: we need something that connects all the computers, and we just need it, so fund it, whatever it takes.

Dalton Caldwell [55:16] – Like the Space Race.

Juan Benet [55:17] – Or the Space Race. We need to get to the moon; I don’t care how much money it takes, just make it happen. That kind of directed power and funding can predictably innovate, which is kind of amazing. At Bell Labs, you had a place where they could chart out the things that they were working on, not exactly and precisely per year, but they would know the relative progress through a whole bunch of open-ended problems whose solutions ended up giving people Nobel Prizes. This was the kind of innovation that is seen and recognized by the world as a stroke of genius, so hard and so unpredictable, and yet Bell Labs was able to reliably get a whole bunch of researchers to achieve these kinds of innovations. The questions around why Bell Labs ultimately failed and fell apart have to do more with the surrounding ecosystem, its funding source–

Dalton Caldwell [56:17] – Is that when they broke apart the monopoly?

Juan Benet [56:19] – That’s right. Breaking Bell apart effectively stifled and killed Bell Labs. A few things happened. One was the rise of Silicon Valley, and the great invention, or not invention, but the great use of stock options, of just giving stock to everyone in the company working on something, which caused a ton of people working on very research-oriented things at the time to become quite wealthy, or get very significant personal returns. That, coupled with the excitement around everything happening in Silicon Valley in the ’50s and ’60s, with a number of people moving out and then coming back and talking about all the great and exciting things happening in the West, started to drain a lot of people out of Bell Labs and out of Boston. It’s known as the brain drain. Part of what happened there was not only that people were leaving and going and creating other research organizations that had different funding models, but Bell also started getting broken up. This is more like the ’80s and ’90s; I forget the exact dates. But when Bell got broken up, Bell Labs had to find a way to charge the new, separate entities for all of its work, and it just became infeasible to fund and maintain a massive organization like that. It ended up breaking apart. There are a few interesting financial reasons why Bell Labs couldn’t continue existing, but the research organization itself was amazing, and continued to be amazing for a very long time.

Craig Cannon [57:56] – According to one site, their budget was 500 million in ’74, which translates to 1.5 billion today, or 2% of AT&T’s gross revenue.

Juan Benet [58:05] – 1 billion.

Dalton Caldwell [58:07] – One and a 1/2.

Juan Benet [58:07] – 1 and 1/2 billion. I was off, ’cause I said it’s something like 10 to 100. That’s a lot better. That is a lot cheaper than what I expected foundational research to cost, but that’s still massive. On the scale of 10 years, that’s 15 billion. You have to have 10 to 50 billion, and be ready to commit it for two decades, to be able to undertake some of these projects.

Dalton Caldwell [58:34] – Probably self-driving cars are getting that kind of money today. Probably AI’s getting that kind of money today. Not many things are, though. There are few things that maybe, if you added up all of the R and D budget being put into them, are getting that, but it’s probably not for–

Juan Benet [58:49] – I think something like Google Brain is a clear example of this kind of thing happening again, where Google saw massive advancements in machine learning and said: we want to apply all of those massive advancements in machine learning to a whole bunch of the normal Google applications, and we want all of our applications to get better, faster, stronger and so on, and reduce costs. Not only are we gonna be able to do a whole bunch of new and cool things, but we’re also gonna be able to do them a lot cheaper, which, effectively, is making money. The thing to think about is once you’re an organization that’s big enough, you don’t have to sell more products to make money. You just have to cut your costs.

Dalton Caldwell [59:29] – To make sure I understand, you’re saying that is analogous to the Bell Labs model.

Juan Benet [59:32] – That is analogous.

Dalton Caldwell [59:32] – I’m just making sure I get what you’re saying.

Juan Benet [59:34] – I wouldn’t wanna claim that something like Google Brain or even X is akin to Bell Labs because I think those are very different organizations. I think that Google Brain and X are much more focused on shorter term valuable things than Bell Labs was. I think both Brain and X can’t yet afford to innovate on the multi-decade time scale. They’re innovating on single decade time scales on their own. I think that if you look around the planet, they’re the closest thing, probably, but they’re still kinda far away. I think it’ll take much more, not only capital, but also reach of that organization, to be able to undertake some of these larger scale fundamental research programs. When you start seeing Google funding open-ended physics research labs, then we’re in the–

Craig Cannon [01:00:28] – For a decade or more.

Juan Benet [01:00:28] – For a decade or more. Where you see a Google-run physics lab have a budget for a decade or more and 50-plus researchers, and you start seeing some Nobel Prizes won outta Google, then we’re talking about the same thing, but we’re far away from that. We don’t necessarily need to re-create the same kinda structure. I think what we can do is look at a different thing that’s going on, and look at how innovation happens in a very different, open-ended way on the internet. The internet has a lot of similarities with the research culture of Bell Labs in that it’s extremely open, you get a lotta people thinking about problems, you have a lot of people talking about problems. Not only talking about potential solutions, but trying them out and so on. People sharing knowledge and ideas through the internet and working in open-ended groups have been able to achieve very important results, but they’re of a very different nature. You don’t get something like a transistor out of random open-source collaboration. You get something like Bitcoin and Ethereum, which are, arguably–

Dalton Caldwell [01:01:39] – And the Linux kernel, just to use a very different example. That came out of the internet. The Linux kernel exists ’cause of the internet.

Juan Benet [01:01:46] – Yes, the Linux kernel’s an awesome example. You had the ability to undertake these major, major infrastructure projects, things that take a long time to create and mature, on the internet. A whole other interesting avenue here is how do you fund these things? How can you fund these long-term endeavors that are much more open ended on the internet and so on? Bitcoin and Ethereum proposed one example of how you fund that. This goes back to what you were starting to bring up earlier, which is the idea that you have a protocol, and you say, hey, it’s gonna create a whole bunch of value. It also has this native token that’s gonna capture a whole bunch of that value. Not all of it, but some subset. That native token is gonna be of limited supply, so because we’re creating this token, we can take some of that token and give it to the people building the protocol, who can then sell it for dollars or whatever to feed themselves. That way, they can actually fund the development of the project. This is effectively what Ethereum did. That kind of funding model allows people to remain very close to the actual protocol layer and to think deeply about the protocol itself and its concerns without having to think about a product or a service on top. This is precisely what Protocol Labs’ business model is.

Craig Cannon [01:03:23] – How do we keep the funding going? ’Cause obviously there’s a certain amount of hype and excitement right now with all the ICOs happening.

Juan Benet [01:03:30] – There’s a ton of stuff happening right now.

Dalton Caldwell [01:03:37] – What if we do it over 10 years? Let’s drill down, ’cause that’s a great point. We were just talking about it. What’s the 10-year picture? Do you have to keep selling bits, not you personally, but if you’re one of these folks, do you have to keep constantly reissuing tokens to keep feeding yourself?

Juan Benet [01:03:51] – It depends on whether or not the token appreciates. If the token appreciates enough, then you’re gonna have to sell less and less of it over time. You saw this happen with Bitcoin. There were some people that were early to Bitcoin whose personal wealth is now at a point where, unless there was a major crash in their assets, they don’t have to work again. Bitcoin is 10 years old now, almost. It started in 2008, 2009.
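To make that arithmetic concrete, here is a back-of-the-envelope sketch of how appreciation shrinks the number of tokens a team has to sell each year. All of the numbers (burn rate, allocation, growth rate) are hypothetical, not figures from any real project.

```python
# Hypothetical back-of-the-envelope: how token appreciation shrinks the
# fraction of a development allocation that must be sold each year.
# All numbers are made up for illustration.

BURN_USD_PER_YEAR = 2_000_000   # annual cost of the dev team, in dollars
dev_allocation = 15_000_000     # tokens reserved to fund development
price = 0.50                    # token price in year 1, dollars
growth = 1.5                    # assume the token appreciates 50% per year

for year in range(1, 6):
    sold = BURN_USD_PER_YEAR / price      # tokens sold to cover the burn
    dev_allocation -= sold
    print(f"year {year}: price ${price:6.2f}, "
          f"sold {sold:12,.0f} tokens, "
          f"{dev_allocation:12,.0f} left")
    price *= growth

# With these made-up numbers, tokens sold per year shrink geometrically
# (4M, 2.67M, 1.78M, ...), and the total ever sold converges to 12M of
# the 15M reserved. If the price crashes instead, the allocation drains
# quickly, which is the risk Juan describes.
```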

Craig Cannon [01:04:17] – Roughly, yeah.

Juan Benet [01:04:20] – It’s roughly 10 years old. I think maybe you could claim that the origins of Bitcoin happened through the Cypherpunk mailing list and MojoNation and all these other things and all of those discussions. That was long-term innovation that happened, and it only got funded afterwards. It’s a very different approach than, say, the Bell Labs centralized perspective. I think the funding of these things is gonna depend entirely on whether these things continue to be useful. If Ethereum continues to be useful five, 10 years from now and continues to grow, if Ethereum becomes more and more successful and continues to solve a whole bunch of problems, then an ether is gonna be worth a ton more. As that ether is worth a lot more, you’re now gonna have tens of thousands of people that right now are crypto millionaires turning into people with 10 million, 100 million, potentially billionaires. Who knows? I don’t know. I think at that point, the valuation of something like Ethereum gets as high as something like Google and Apple and so on. Who knows? Maybe it is worth that, but I don’t think it’s quite there yet. You have this very different way of building a service where you take a share of the worth of the service, in a sense. Having ether is kind of like having a share of the worth of the network. It’s not the worth of the network totally, the network is worth more than that, but it is a subset of that. Then if you choose to hold it, and it accrues in value, then you have now gotten a return. It’s risky. I would definitely not encourage anyone to invest so deeply into cryptocurrencies that they have a very significant fraction of their net worth in them.

Dalton Caldwell [01:06:07] – During the dot com bubble, everyone was a day trader and everyone made money. You couldn’t lose.

Juan Benet [01:06:13] – Really?

Dalton Caldwell [01:06:15] – If you would have bought and held, to this day, and you were lucky enough to have enough exposure to, what, like Apple and Google, it actually woulda been okay. But if back then you were day trading and you didn’t have one of those big winners, you just lost all your money in the early days.

Craig Cannon [01:06:30] – Or if you just sold early.

Dalton Caldwell [01:06:31] – If you just sold early, it would have been rough. But it’s kinda tricky right now because it’s really hard to lose money.

Juan Benet [01:06:37] – There’s a very big honeymoon period right now, where a ton of people have just finally understood the massive value that can be generated by these things. The excitement is all around, and everyone is really stoked. If you caught Google and you invested early on, if you were one of the early investors of Google during the height of the bubble, or PayPal or any of these amazing companies that got built through that period–

Dalton Caldwell [01:07:03] – Google didn’t go public till ’04, so I stand corrected. I’m trying to think of a ’99 example. Who could you have caught? If you were a day trader, definitely Apple.

Juan Benet [01:07:09] – Yeah, but I wouldn’t think of this like day trading and IPO markets. You have to think about–

Dalton Caldwell [01:07:17] – I just mean it was hot. It was the same thing where popular media was like, hey, you could buy this stock and make three X. It’s more of the way people–

Juan Benet [01:07:23] – If you invested that way, you probably ended up getting thrown into the bubble and you probably lost a ton of money.

Dalton Caldwell [01:07:29] – I mean, even if you would have held and you had a decent portfolio, then even if you were the dumbest money at the top of the market, you woulda done okay.

Juan Benet [01:07:37] – Yeah, maybe, I don’t know the math.

Craig Cannon [01:07:40] – Maybe that’s the question. Do you apply an ETF model right now and just buy some of everything?

Dalton Caldwell [01:07:45] – You have to hold it, right? One of the learnings is– Yeah, go ahead.

Juan Benet [01:07:49] – I think the more important thing that’s going on is deeper, which is that a whole bunch of important things are getting built. If you find them, you can fund them. You can be part of them and you can help create them, and create massive amounts of value. The people that do that are gonna get greatly rewarded. I think that goes along with diligence. My perspective on this, and the way that I look at a lot of the space, is that I think deeply about each of these pieces of technology. I approach it much more like investing in a startup or investing in a project that I think is worthwhile and should happen, even if I lose all the money that I invest in it. I think about the underlying value that’s being created. What is this thing going to enable two, five, 10 years from now? I think within the crypto space, you don’t even need to think 10 years out.

Dalton Caldwell [01:08:40] – Just to do a minor push on that, that’s a little different, though, than basic research. Isn’t part of basic research that you wanna believe the researchers are good, but you don’t actually wanna worry about what they’re working on, ’cause they’re gonna do great stuff? Do you know what I’m saying? This is something I go through when I’m looking at these. You wanna understand that it’s a good team and that you believe in their vision, but if you get too into the details, you’ll miss the boat.

Juan Benet [01:09:05] – We’re mixing so many different topics, which is awesome, by the way. I rarely get to go this deep into a lot of this conversation, and I love it. I don’t mean to imply that Bitcoin or Ethereum is like Bell Labs. It’s a different thing. What it shows is that you can see innovation of the kind that you saw at something like Bell Labs happening in the open internet, with people exchanging ideas, with people scrounging up funding in whatever way they can until later. Now that we have cryptocurrency, a new funding model can emerge, and now we can start thinking about this in a deeper way. When I think about the structuring of Protocol Labs, we think about Filecoin as a specific service and business that has a much shorter term perspective. Filecoin has to work and be successful and valuable in two, three years, not five or 10. We’re nowhere near close to be able to–

Dalton Caldwell [01:10:01] – You’re already two, three years in. Or is it five years in? The company’s two or three years in?

Juan Benet [01:10:06] – Three years. We were building IPFS first. IPFS is out there and creating a ton of value for people, but IPFS is not something that we planned to monetize directly. IPFS is a large-scale infrastructure project that happens to be at a layer where you should not put money in. Money should not be a question on moving data. Money should be a concern that’s applied on top, to a subset of those transactions. A subset of those transactions are gonna be the Filecoin transactions. Now we’re building that out, we’re setting off and doing all this work, and that’s where I really start the clock on Filecoin. We’ve had a whole bunch of detours. We ended up building this whole platform called CoinList, where you can go and have token sales and so on, because that whole madness of how do you correctly and legally do a token sale is a huge rabbit hole.

Dalton Caldwell [01:11:00] – We could burn the whole hour on that.

Juan Benet [01:11:04] – But we built a platform that allows people to do that in a compliant way in the US, using a set of documents that we developed called the SAFT. Similar to the SAFE, but for tokens. That was a detour that took us four, six months of work, but it’s gonna end up being super vital for the entire rest of the ecosystem, and for Protocol Labs, in that we will have not only Filecoin, but other projects down the road that will end up using CoinList. You invest a little bit deeper into this thing, and now you have it. Filecoin has been delayed by IPFS and its success, and by things like CoinList and so on. Now we’ve managed to successfully switch gears to go and invest very deeply into the whole thing. One of the most interesting things about Filecoin that you’ll see coming out very soon is that we spent about a month and a 1/2 rethinking the entire protocol from scratch, and looking at all of the advancements that have happened in distributed systems and crypto and the blockchain space in general in the last two or three years since the paper came out. We got to upgrade the entire system before it’s out and live and people are using it, and it ends up being a very different protocol. We’re about to ship the new Filecoin paper, and it’s a very different protocol than when people first saw it. It solves a whole bunch of important problems. We had, for a brief period of a month and a 1/2, something akin to a Bell Labs feel: four or five people in a house doing nothing but reading papers and working on hard research problems and reading the papers of Turing Award winners, and then being a step ahead of some of them and being like, oh, wow, they just published this thing. That’s a problem that we solved a while ago or something. You get glimpses of this happening. You can think of someone like Vitalik as operating entirely in that space, where he’s just thinking about large-scale problems on the five, 10-year time horizon.

Craig Cannon [01:13:11] – Creator of Ethereum.

Juan Benet [01:13:13] – Vitalik, the creator of Ethereum. He has managed to build for himself a lab similar to one you would have at a place like Bell Labs or something, but in a very different landscape. You probably won’t see the creation of a Bell Labs like the one before. It’s possible that someone like Google and so on creates it, but I think what we’ll have instead is a much more distributed version of it, where you will have smaller labs that are able to get large-scale, 10-year time horizon funding. What I’m particularly interested in, and I’ll throw this out there now, is an interesting problem that I think is worth solving, and if we solve it, it can have massive implications for the world. If you’re a researcher and you wanna think about not just starting businesses and starting companies and so on, but really think deeply about what kind of problems, if we solved them today, would create enormous value for humanity worldwide, there’s a very specific problem, and it’s an economics problem, and it’s also an AI problem. It’s the credit assignment problem, which is that if you have a set of agents that are participating in a set of endeavors, and those endeavors either create or destroy value, how do you correctly propagate reward back to the agents? Meaning if you have a number of people working on a startup and you create a whole bunch of value in a startup, and you capture some of that as a reward, how do you propagate that reward back? Effectively, this is equity. Equity right now is the way that this is done. At a larger scale, in the market, it’s seen as investments and capital and the capital formation world and so on. But then when you look at it in a different world, in science, for example, you have labs that are effectively accruing academic credit, economic/social credit and credibility that they’re gonna be able to use to then get further grants to fund the rest of the thing. That’s a different answer. When you think about open source, we don’t have, today, an easy way of correctly figuring out and computing what the credit assignment is on something like the Linux kernel. Linus has done an enormous amount of work and created a huge fraction of the value of the Linux kernel, but so have a ton of other people that have been slogging and wading through major, major issues. The majority of those people that are building this huge foundational thing that is now on a huge fraction of the computers on the planet did not see any kind of reward attributed to them on the scale of the companies that came after, that used their technology and captured value. You can see something like Android as capturing massive amounts of value that went into the Android business and Google and all of those groups that completely rode on the value created by the Linux kernel group. You can think about this across every single business. Every single business that runs computers on a large scale has gotten value out of the Linux kernel group. How can we propagate reward back so that all of those people no longer have to worry about other day jobs and can really just focus on this thing? And can you do this at a big scale across all possible projects?
We are super interested in solving this problem because we think if we solve it, even if we just have a bit of a good answer, then we can fundamentally change how open source gets built, in that it would be great if people that wanna work on open-source projects can just do that without having to have a day job that they don’t like or whatever. There’s a lotta people I know that operate in that landscape, where they have some job that’s kind of interesting and they do it because they have options. They’re not gonna work on something they completely don’t like or whatever, although there are a lotta people that are in that position. But at the same time, it’s not what they love the most; it’s what will pay their bills. At the same time, they’re creating a ton of value by working on a whole bunch of interesting open-source projects, but there’s no easy way for them to get rewarded by the value that’s captured many, many layers deep after. I claim, and this is a complete guess, and I could be totally wrong about this, but I claim that we could solve that problem in a way where we have a function, like a function I could run over all of the people on GitHub that have contributed to all of the projects that Protocol Labs runs and all of the projects that Protocol Labs projects use. We’re talking about not only the community that’s working on one project, but also the other communities your project depends on. We depend on things like the Linux kernel. Can we figure out a way to correctly and accurately propagate reward back in a way that’s fair and that correctly gauges a whole bunch of these hard questions about opportunity cost–

Dalton Caldwell [01:18:10] – I’ll take a swing at that. Have you seen the papers about how to fairly slice a cake?

Juan Benet [01:18:17] – Yes.

Dalton Caldwell [01:18:18] – Essentially, you slice and I pick. They found a way to extrapolate that to multiple parties. This isn’t the actual solution, but I wonder if you could use something like that, where other contributors are all slicing other people’s cake.

Craig Cannon [01:18:35] – So they decide proof of work.

Juan Benet [01:18:38] – That’s a good intuition, that’s a good intuition, but then are you sure that’s not gameable? Because then I could just get a collection of 10 people where we all like each other and we all give each other huge slices.

Craig Cannon [01:18:49] – But that’s how companies work oftentimes, right? There’s someone who doesn’t always push the best code, but they might be a huge morale boost. Them being on the project is actually super valuable.

Dalton Caldwell [01:18:58] – To touch on that, though: so what, you’re looking for something that doesn’t use human intervention whatsoever? A purely algorithmic answer?

Juan Benet [01:19:05] – I think it’s fine to feed in human intervention along the way. There’s interesting research done on large companies and governments where you have all these peer reviews and manager reviews and all this kind of 360 review perspective. Out of that, you can get good signal. Otherwise, if we weren’t getting good signal, then there’s no hope for any kind of company that’s large. Surely, something’s working, which shows you can definitely get interesting human feedback in the loop, and you can take that as a signal. But the hard thing, I claim, is that we need to allow the collection of that feedback to have humans in the loop, but do so in such a way that it is extremely difficult to game, because, again, people will quickly learn that they can just give each other really high ratings, and that will translate into really big boosts and promotions and so on, or greater rewards. You have to get something that’s not easy to game. But then further, you take people out of the equation in the choosing part at the very top. Right now, all of that feedback always propagates all the way to the top, and it’s ultimately people making decisions about compensation and all this kinda stuff. This is in companies, but in science, it’s grant funding: people that actually choose who to give grants to and what research to fund. Or in open source, it’s like, hey, a company decided to invest deeply into this project because they thought it was super valuable, and they allocated engineers to just work on it, but they’re not directly just giving money to everyone in that project. If we take humans out of the loop in that decision process and put in an algorithm that people can have confidence in, that this is gonna be a correct and fair allocation of the reward, at least better than most humans would do at a first-pass approximation, if we can do that, turn that into an algorithm, then, I claim, we could fix a whole bunch of cap tables around the world that are really screwed up, and you can fix a whole bunch of the way that grant funding is done in science, because you’re not gonna rely as hard on prior success. Or rather, you’re not gonna rely as hard on social signals, and you’re gonna rely more on deep achievement. And, I claim, you can do something fundamentally new, which is you can start propagating rewards through open source to the point where a lot of people can gravitate to the things that they think are extremely valuable, and they invest their time instead of investing their money into things they think are cool and interesting and valuable. If those turn out to be valuable in such a way that reward ends up getting propagated back to them, they can then turn that contribution into eating. We’re headed for a very big economic problem, and we’re already kind of in the middle of it, but we’re gonna have bigger problems as automation comes in and AI comes in and all this kinda stuff. It’s gonna challenge our basic notions of worth and value, in economic terms. We live in a world that’s centered very rigidly around a perspective of, hey, you get a job and you work and you contribute value to an endeavor, and you get back some pay, and you turn that pay into food.
If you want food and shelter and survival, and if you want nice things, and if you wanna be able to not only survive but also afford school for your kids or healthcare and so on, you have to have a job. This job is mediated by a whole bunch of external forces. It prevents a ton of people from allocating their work to what they think is actually most fundamentally valuable. I claim it doesn’t do as good of a job as it should in correctly rewarding major contributions. We see people with Nobel Prizes and Turing Awards that have made massive contributions to the world, and their net worths are nowhere near those of groups that ended up doing terrible things for the world and managed to get away with it. The claim here is that this, on the small scale, could dramatically improve something like open source, and potentially companies and how you allocate compensation there, but on the big scale, a really good answer to this problem could be a new economic model. It could be a new version of capitalism, or it could be something else that’s not called capitalism. It could be something around just correct value attribution. I don’t know, it’s a whole new world.
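To make the shape of that idea concrete, here is a minimal sketch of a credit-assignment function of the kind Juan gestures at: a project’s captured reward is split between its direct contributors and, recursively, the projects it depends on. The project graph, the contributor weights, and the 20% upstream share are all hypothetical knobs; sourcing such weights in a collusion-resistant way is exactly the open problem he describes.

```python
# A minimal sketch of the credit-assignment idea: propagate a project's
# captured reward back to direct contributors and, recursively, to the
# projects it depends on. The graph, contributor weights, and the
# dependency share (20%) are hypothetical knobs, not a real mechanism.

# project -> (contributor weights, upstream dependencies)
PROJECTS = {
    "filecoin":     ({"alice": 3, "bob": 1}, ["ipfs"]),
    "ipfs":         ({"carol": 2, "dan": 2}, ["linux-kernel"]),
    "linux-kernel": ({"linus": 5, "greg": 3}, []),
}
DEP_SHARE = 0.20  # fraction of each project's reward passed upstream

def propagate(project: str, reward: float, payouts: dict) -> None:
    """Split `reward` between direct contributors and dependencies."""
    weights, deps = PROJECTS[project]
    upstream = reward * DEP_SHARE if deps else 0.0
    local = reward - upstream
    total_w = sum(weights.values())
    for person, w in weights.items():
        payouts[person] = payouts.get(person, 0.0) + local * w / total_w
    for dep in deps:
        propagate(dep, upstream / len(deps), payouts)

payouts: dict = {}
propagate("filecoin", 1_000_000, payouts)  # $1M of captured value
for person, amount in sorted(payouts.items()):
    print(f"{person:>6}: ${amount:,.2f}")
```

Run on this hypothetical graph, $1M captured by filecoin pays alice and bob directly, while a slice flows upstream through ipfs to the kernel contributors, which is the "many layers deep" propagation the conversation is about.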

Dalton Caldwell [01:23:45] – I think that’s super interesting, and we’ve had a lot of discussions internally around basic income. I think where I get hung up on this is that let’s pretend that we did have the algorithm. Let’s pretend someone did the research and they found a fair way to allocate worth. Would anyone accept it? Essentially, the tricky part is not the technical challenge, it’s getting people to ever believe a computer is fair. What if the algorithm said, actually, you’re not worth very much? It’s very hard to imagine people saying, you know what, you’re right. This algorithm is inherently fair.

Craig Cannon [01:24:19] – I think it actually meshes quite well with the American mindset, which is I can do work and create more value than the next person, rather than relying on some social system around you. You’re like, I’ll just do it. Right now, we rely on the market to decide what’s valuable, but who knows?

Juan Benet [01:24:34] – Ideally. I think for this to work correctly, you have to have markets involved, and you have to have this kind of algorithm either work in a market or be a market; you can turn an algorithm into a market. Then, ideally, you wouldn’t have one computer that decides what you’re worth, but rather, you have an entire large-scale system, and relative worth is being ascribed by other groups. You have a lot of cases where one group thinks something is really valuable, and another group doesn’t think so. That’s fine. They themselves are accruing value and worth in whatever ways, and they can propagate it however they want. Similar to companies going in opposite directions or whatever. Yes, it’s gonna call into question a bunch of hard things, like here’s what your contribution’s really worth. But my claim is that what we have right now is much worse. I kind of describe this as similar to the self-driving car problem. People think, how could a computer ever possibly drive better than me? Computers are stupid. I am a great driver. I can go fast and I can react really well and so on. I would never trust a computer. Yet, it’s taken a long time, it’s taken over 50 years since the first plans to do this appeared, but now we have computers that drive better than humans. Pretty soon, they’re gonna start getting deployed, and we’re gonna start riding in them and so on. People will see this is gonna save a ton of lives, comparatively. My claim is you can create something that’s fair, and you can create something that is also provably fair. One of the things here about algorithms is you can have a computation that’s provable, that actually runs over the whole thing and can produce a cryptographically verifiable proof that it was done correctly and that it was correctly assigning the right thing. It could give you a trace of all of the validation: here’s the argument as to why this determination was made. I think that would be a much better place to be than where we are now, where it’s extremely fuzzy, based on a whole bunch of factors that have biases all over the place. That allows a few people that understand all of those biases and perspectives to then game them, and put themselves in positions of greater and greater power, which is, by the way, I think, one of the big reasons why capital accumulates. There’s a whole bunch of reasons why capital accumulates and centralizes, but I think one of them is the fact that once you understand enough about how all this stuff works, you can then position yourself and maneuver yourself to expose yourself to things that generate a lot of capital and wealth that don’t necessarily generate or create a lot of value. There’s a very big difference between capital and value that is not correctly accounted for. The value of a dollar today does not equate to just raw, fundamental value. We use an approximation, and we think that it’s a good enough approximation and continue using it, but in a lot of ways, you can see things that are worth massive amounts of money while destroying value. There’s tons of companies that make a lot of money by dumping a bunch of crap into the ocean and wrecking it. There’s a whole bunch of externalities that we cannot properly calculate and account for in those situations. Ultimately, at least in most countries in the world, you have groups of people making those decisions at the very top and deciding what the outcomes are for major bad actors, actors that have made serious mistakes, like in the 2008 crisis, major mistakes.
All these things should fail, but if they fail, we’re gonna be in deeper trouble, so let’s just bail them out and continue as if nothing happened, to some degree, not quite, but to some degree. A ton of these people walked away scot-free, and in some cases actually made money through the financial crisis. People that were directly responsible for the problem ended up with returns. This is screwed up. I think this is something extremely far away from a correct and fair distribution of value. I think that’s an open problem of the kind that predates companies or capitalism or this kind of thing. If we find a good solution to this problem, it could, over decades, translate into a rewiring of how we think and how we value things and how we allocate resources and all that. On the small scale, I think we’re beginning to see a few experiments in this direction. I see things like what Ethereum was able to do with its own resources, being able to just give a lotta people ether that then accrued in value and so on, and do things through RFPs and try to get some vague measure of what this might be worth, and giving people a share of the return. Not dollars or euros, but instead ether, which means it’s a share of the potential future value generated by the network. It’s a step in the right direction. We’re gonna … way into this, but it’s a step in the right direction. I think we are gearing up to try some things like this.
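On the “provably fair” point, here is a toy sketch of the weakest version of that property: publish the raw inputs and a deterministic allocation function, then commit to the result with a hash so any verifier can re-run the computation and check it byte for byte. A real system would want succinct cryptographic proofs rather than full re-execution; the names and numbers below are made up for illustration.

```python
# A toy version of "provably fair": publish the inputs and a deterministic
# allocation function, then commit to the result with a hash so anyone can
# re-run the computation and verify it byte for byte. Real systems would
# use succinct cryptographic proofs; this sketch only shows the idea.
import hashlib
import json

def allocate(contributions: dict) -> dict:
    """Deterministic toy rule: reward proportional to contribution."""
    total = sum(contributions.values())
    return {k: round(v / total, 6) for k, v in sorted(contributions.items())}

def commitment(obj) -> str:
    """Canonical-JSON SHA-256 commitment to a JSON-serializable object."""
    blob = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

inputs = {"alice": 120, "bob": 40, "carol": 40}   # hypothetical raw signals
alloc = allocate(inputs)

# Publisher announces: the inputs, the allocation, and a commitment to both.
proof = commitment({"inputs": inputs, "allocation": alloc})
print(alloc)   # {'alice': 0.6, 'bob': 0.2, 'carol': 0.2}
print(proof)

# Any verifier re-runs allocate() on the published inputs and checks the
# commitment matches; a mismatch means the allocation was tampered with.
assert commitment({"inputs": inputs, "allocation": allocate(inputs)}) == proof
```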

Dalton Caldwell [01:29:27] – Do you think, having looked at the relative distribution of wealth from crypto, that that is a good model? ’Cause isn’t it really concentrated in a small number of people that happened to have the resources to be early?

Juan Benet [01:29:43] – I don’t think that. I think a few people ended up getting a lot of value. Also, with a lot of these projects, a few people ended up creating massive amounts of that value. For example, I think people should not at all undervalue Vitalik’s contribution. I think he’s contributed an enormous amount to the entire space. I use that as an example. There’s probably a whole bunch of misallocations all over the place that we can probably find.

Dalton Caldwell [01:30:11] – I’m just thinking through what you were saying about–

Juan Benet [01:30:14] – But I also know a lot of people, in the dozens to hundreds, that made a lot of money through crypto, who slogged through the creation of value in this new network, who understood the value of this thing and were willing to take the risk and work on it. They really spent the better part of a year and a 1/2 working on something that was completely, super high risk, unclear that it was gonna work out and so on. They’ve seen returns that are higher than most startups. Higher than what their distribution would have been in–

Dalton Caldwell [01:30:49] – Like a 400X return, right? If you bought Ethereum at the crowd sale?

Craig Cannon [01:30:53] – It depends when you got in. Conversely, think of it like you just happen to luck into being one of the first 10 employees at a giant company, but the 25th person is the person who actually created the value, and their allocation is much less than yours. That model is not figured out yet.

Juan Benet [01:31:10] – I am deeply frustrated by that problem. I desperately wanna fix that problem. I think that if we fix that problem, then we can have massively open-ended creation of value. It’s a strong claim, but I think fixing that issue–

Craig Cannon [01:31:23] – It makes sense.

Juan Benet [01:31:24] – It would make a ton of tech companies work extremely well and be able to generate tons of value. Not only tech companies, but tech projects in general. The company is fading away, or not fading away, but rather a new thing has come in, which is the network, or a market that is not a company but functions kinda like a company. You can think of Ethereum not as a company, but rather as this network that has some shared asset that is incenting people to work on it and so on. There’s some loose organization, but it’s not really centralized, it’s not really central planning. There’s a whole bunch of things there that are interesting and are pushing in the right direction. I would say that the distribution of wealth is probably flatter. I don’t know this to be 100% the case. I need to look at the raw data.

Dalton Caldwell [01:32:06] – I think the raw data for Bitcoin a few years ago wasn’t pretty. I looked at it, for what it’s worth.

Juan Benet [01:32:11] – For Bitcoin, what about Ethereum?

Dalton Caldwell [01:32:12] – I don’t actually know. I haven’t seen any breakdowns on that. But I remember, I was actually very curious about this, not what Satoshi has, but the other people, the people that bought in early, basically. What is their relative distribution and all that other good stuff?

Juan Benet [01:32:29] – I’m kinda bothered by that. I’m kinda bothered by the fact that in crypto, right now, you’re seeing the normal issues with capital flooding in, which is that if you’re a speculator that has a lotta capital, you can afford to get much greater rewards than the people that actually build the thing. That, to me, is, again, another frustrating thing that I–

Dalton Caldwell [01:32:47] – This is why I ask ’cause it’s like here we are creating all this new stuff–

Juan Benet [01:32:51] – But I think it’s incremental. I think it’s a step in the right direction, and a big step, dare I say a quantum leap in the right direction. What’s interesting to me is not what has been done so far, but the tools we now have. Meaning that, and this is starting to get into the experiments we’re gonna run next year and the year after that, we’re looking at the possibilities of issuing a token to a whole bunch of contributors that have created a whole bunch of value for a ton of projects that we think are valuable. Mostly, right now, we’re gonna do it with our own projects, but if this works well, we can do it in an even deeper way and across projects that contribute value to us. We’re gonna issue this token, and then we’re either gonna do things like issue dividends or buy it back, and create a way for us to directly share a fraction of the value that Protocol Labs creates with the people that helped create that value. This is a huge experiment. It could go completely wrong. It could change why people contribute. It could bring in a lotta people that are not deeply interested in the right things and are kinda just looking for money. That’s what I worry about. I don’t wanna do anything that would cause that, ’cause I think open source is an amazing place where people are motivated to work on the project because of what they believe in. That’s super important. I would hate it if whatever kind of experiment in this direction kills the fact that the Linux kernel is built by a bunch of people that really care deeply about the problems and are fixing them. It has to be done carefully, but I think we can start running some experiments.

Craig Cannon [01:34:42] – Right, ’cause you don’t want every project to end up at some weird local maximum, where companies are using this now, value will accrue to me, I can jump to the next thing.

Juan Benet [01:34:50] – Exactly.

Craig Cannon [01:34:52] – Is this being wrapped into CoinList yet, these ideas?

Juan Benet [01:34:56] – They are ideas we are thinking about. These are not being wrapped into CoinList or other things just yet. There’s a lotta things that we have to carefully consider here. The thing about incentive engineering is that it’s hard. This is why this problem is an open problem. If you’re listening to this and you’re a researcher and you care about this, get in touch, because I’m probably gonna start a small research group to solve this problem. I don’t expect a successful result for years. I think this is a long-term thing. I think this is the kind of research project that Protocol Labs could fund that is one of those long-term innovation things. I don’t think we need that many people. Probably the right 10 people can solve this problem, maybe even less. Maybe it’s a singular person that actually figures it out, as has happened in a ton of other cases in history. But I think we can start, at the very least, collecting some data to assist the theory, data that might come with some of these experiments. We are thinking about this, but it’s early. Right now, I mostly wanna reward a lot of the people that were very early to the IPFS project, that saw the value created and said, “Wow, this is an awesome project and we wanna make this a reality” and so on. We’ve been slogging through a ton of hard work for the last three years. Right now, I guess, Go IPFS turned three years old two days ago. The protocol itself is a little bit older, but that was the code base. There’s a ton of people that came in and helped out tremendously, some of whom it didn’t make sense for us to hire into Protocol Labs, for whatever reason. Some of those cases are academics, some of them are grad students and professors who talked with me and walked me through certain important things that ended up contributing value to the project. Then I wanna find a way to divert some of the return that we’ll see from Filecoin. ’Cause what we’re gonna do is we’re gonna create this whole Filecoin network, and that will generate a ton of value. A whole bunch of people, the miners in the Filecoin network, are gonna get a ton of value, and so will Protocol Labs. Then can we divert some of that value that Protocol Labs gets back, and pump it straight into all of the open source work that we do, in a way that doesn’t hurt it? I’m very, very wary and careful about anywhere where money and open source get mixed, because it can get really screwed up and it can kill projects, but I think that things like Ethereum are examples of things being done better and right in some direction, and at least in a successful one. You can look at the Ethereum community, and it’s filled with researchers. People who are thinking deep and hard about theory and the correct application, people that are thinking about consensus and consensus problems. The kinda stuff that only Turing Award winners normally think about, or grad students that are trying to upend 20 years of research or whatever. There’s people in the Ethereum community actually doing this work. It’s amazing. It should not be undervalued. It’s extremely difficult to find communities where not only is that valued by everyone around, but it’s also greatly rewarded. That’s, I think, an example in the right direction, and I think one that we can build on and create more of.
If Ethereum and Filecoin and these networks get to be massive and end up being of the same degree and scale as a whole bunch of the other ways of doing things, the centralized tech companies and so on, then we can start looking at rewarding people across company lines. Here’s an interesting problem. There’s a ton of people that work at Google, and a ton of people that work at a bunch of the other places, that could contribute massively to these projects by just spending a few hours a week on them. They’re the right people that have the right insights, that have the right perspective. Right now, they can’t work for another company because it’s a conflict of interest, but they can contribute to open source. Now, in many cases, they do. Then reward can be back propagated in a weird way. People contribute, and then later, if the value gets greater, there’s this back prop that happens out of distributions of this token. I don’t know how that’s gonna turn out and work with IP and so on, but I think it’s gonna come in kinda like a wrecking ball. I know a lot of researchers in crypto and game theory and so on that understood the crypto world, then got a bunch of Bitcoin or ether, and now can just chill out and be grad students or professors, in some cases, and just do the research that they really care about, and they’re now personally wealthy. It’s awesome. That’s fantastic. That is a great example of a correct application of the rewards problem right there. The people that generated massive amounts of value by slogging through really hard theory problems for years and came up with the right solutions and so on are now able to correctly make contributions, in some cases short time span contributions. Again, knowledge work is really hard to measure in hours. You can’t measure knowledge work in hours. Somebody’s investment over a decade can put them into the right perspective to make the right contribution at the right time that creates something like Ethereum or Bitcoin or whatever. How do you correctly reward that? I think something like these cryptocurrency networks reward that better than the normal notion of yearly pay in a salaried thing that was built for the Industrial Revolution, where you needed manufacturing and you needed to just bill for hours, because you had to spend a whole bunch of hours working at something. I think this is an interesting thing. I think we’re starting to see this develop. We’re thinking about things like that: how do we build Protocol Labs as an organization that can do deep research in a bunch of different directions with a bunch of collaborators around the planet in a bunch of different organizations, and how can we structure things in such a way that if those things we collaborate on succeed greatly, everyone gets rewarded? Everyone who contributed to that thing gets rewarded fairly. That’s super hard to try and solve, but we wanna do that.

Craig Cannon [01:41:18] – I think this is a good place to pause. You gotta roll. We do have some questions from Twitter. That would be awesome to address. Cool. We can just pause. Thank you.

Dalton Caldwell [01:41:26] – Thanks.

Craig Cannon [01:41:28] – Let’s go into the Twitter questions. We got a handful. You can answer them however you’d like. From @StartupSanatana, how does Filecoin’s data storage network, how is it natural slash unnatural disaster proof?

Juan Benet [01:41:45] – Ah, great question. It really depends on the scale of the natural disaster. If a comet hit the planet, that’s a little hard. There’s a few pieces here. One of them is that IPFS is, by nature, what I like to call fully distributed, or logically decentralized, as Vitalik calls it, which is that the nodes in the IPFS network can continue talking to each other even if the rest of the network disappears. Because Filecoin uses IPFS and so on, Filecoin nodes will be able to talk to each other even if they can’t talk to the rest of the network. Now, there’s a question there of how you clear transactions. That’s a thing that we have active and deep research on. We want to have a network that can shard, where you can have a subset of the Filecoin network operating and clearing transactions even if it can’t talk to the rest of the network. That’s a hard problem. The first iteration of the Filecoin network that goes live won’t quite do that, but the way it’ll be: if you get isolated from the rest of the network, you may not be able to clear transactions, but you might be able to distribute files, at least for some period of time. Then if you are in the rest of the network, but 1/2 of it disappears because of some huge natural disaster, I guess slightly less than 1/2, n divided by two, we can survive those failures, because when people add data to the network, it gets split up into pieces and gets erasure coded. You get this really nice replication factor where, without adding too much overhead, you can get a huge resilience factor: you can survive huge numbers of failures, and your data can still be there. The exact numbers on this we’ll come up with and publish down the road, but it’s gonna be a tunable parameter. You can crank up the level of erasure codedness, effectively, that you want.

Craig Cannon [01:43:42] – On the user side?

Juan Benet [01:43:44] – On the user side, yeah. If you have a megabyte of data that’s really important, you just crank up the replication factor, the splitting into pieces and erasure coding, so that you have hundreds of these pieces. They all go out to a whole bunch of different miners. Now you are in a much better position than if only three people were storing it. That’s, I guess, one set of answers.
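As a concrete illustration of the splitting-plus-erasure-coding idea, here is a toy single-parity code: split the data into k shards, add one XOR parity shard, and any one lost shard can be rebuilt from the rest. Production systems use Reed-Solomon-style codes, where k data shards plus m parity shards survive m losses at (k+m)/k storage overhead; the parameters below are hypothetical, not Filecoin’s actual scheme.

```python
# Toy illustration of the erasure-coding idea: split data into k shards and
# add parity so the file survives lost shards. This single-parity XOR code
# tolerates any ONE lost shard; Reed-Solomon-style codes tolerate m losses
# with k data + m parity shards at (k+m)/k storage overhead.
# Parameters here are hypothetical, not Filecoin's.

def encode(data: bytes, k: int):
    """Split into k equal shards plus one XOR parity shard."""
    data = data.ljust(-(-len(data) // k) * k, b"\0")  # pad to a multiple of k
    size = len(data) // k
    shards = [bytearray(data[i * size:(i + 1) * size]) for i in range(k)]
    parity = bytearray(size)
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return shards + [parity]

def recover(shards, lost: int):
    """Rebuild the one missing shard by XOR-ing all the others."""
    size = len(next(s for s in shards if s is not None))
    rebuilt = bytearray(size)
    for idx, shard in enumerate(shards):
        if idx != lost:
            for i, b in enumerate(shard):
                rebuilt[i] ^= b
    return rebuilt

shards = encode(b"important megabyte of data", k=4)
original = shards[1][:]
shards[1] = None                       # a miner disappears
assert recover(shards, lost=1) == original
print("recovered shard 1 after one failure; overhead =", (4 + 1) / 4)
```

Cranking up the tunable parameter Juan mentions corresponds to raising m (more parity shards spread across more miners) at the cost of the (k+m)/k overhead.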

Craig Cannon [01:44:11] – Next question. Robert Andrew Smith (@robertandrewsm). When will Filecoin sale details be released? Then, following up on that.

Juan Benet [01:44:20] – Filecoin sale details will be released very soon. Unfortunately, we can’t give you an exact date, but it’s weeks away. It’s sometime in the next few weeks. Very soon. We are working as hard as we can right now to get it out the door. The reason we can’t announce an exact date yet is that there are a couple of processes running right now that have a date they were gonna finish, ideally a week or two weeks out, but there’s a little bit of unpredictability there. I wanna be able to do the sale as soon as possible, but subject to that. So really, weeks. Expect news very, very soon.

Craig Cannon [01:45:12] – Then he also asked another question. What other plans do you have with CoinList?

Juan Benet [01:45:18] – So CoinList is another project that Protocol Labs started, in partnership with AngelList. It’s a token sale platform that will allow token project creators to launch their networks and run token sales without having to slog through the hundreds of hours that we spent both building this kind of platform and going through legal and so on. CoinList works with the SAFT. CoinList will work with a lot of sales, both ones that include the SAFT and others that don’t. Basically, there’s this important piece: if you wanna run a token sale in the US, there’s a question around whether or not you’re selling a security. If you think you’re indeed selling a security, then you should limit the sale of that security to accredited investors, at least in the US. When you do that, CoinList makes it easy, and you can accredit in the same way that you would accredit through AngelList. But that’s not even the main selling point of CoinList. The main selling point of CoinList will be decreasing the amount of work for token sale creators and creating a network that focuses on finding really high signal projects. There’s a ton of projects in this space. One of the things that we care a lot about is how do you find really, really good projects, and help those gain attention and stick out, and how can you help them prove it? It’s one thing for a project to convey a lot of things, but it becomes really useful when you have independent third parties think about those projects. We’re very interested in solving that signal problem of how you correctly figure out what the really solid and outstanding projects are. We think that’s gonna be an important value proposition from CoinList: really finding the best things around.

Craig Cannon [01:47:20] – That was actually a question I wanted to ask before, but didn’t. Do you have any rules of thumb that you can give to people around filtering out all the noise right now?

Juan Benet [01:47:29] – We’ve gotten a ton of applications. A lot of interesting stuff is coming down the pipeline, some really, really cool stuff. We’ve also seen some scams. We’ve actually seen some applications that are outright scams. We don’t wanna be in the position of being gatekeepers that prevent really good ideas from getting through. If we don’t understand something, we shouldn’t be gatekeepers that have to be convinced. But on the flip side, we also don’t want things that we can tell are outright scams on the platform. We want at least some layer of barriers there to make sure that the projects that do get listed on CoinList pass a certain bar of quality. Now, there could be some very cleverly engineered and designed scams or whatever that fool even us. Anyone investing through any kind of investment platform is ultimately responsible for doing their own diligence, but at the very least, we’re gonna, I think, cut out a huge fraction of those things. We’re working on ways of helping project creators highlight their technical strengths and the value they propose, in ways that let them shine against other projects that might spend a whole bunch of money on marketing and so on but actually have no important technical depth underneath the hood. That’s a whole bunch of interesting problems that we wanna help solve with CoinList.

Craig Cannon [01:49:04] – Obviously, accredited investors is a major part of that.

Juan Benet [01:49:07] – Yeah, and I think accredited investors weighing in on things is an important part, although I would say, and this is an important piece that we’re gonna have to message better: not all sales that go through CoinList are gonna be only for accredited investors. There will be some sales that are not securities. Then people can buy normally. There will also be cases where some sales might wanna do a Reg D 506(c) offering in the US, so that’s accredited investors only, but are able to do a Reg S offering to the rest of the world and figure things out outside. This is similar to what Blockchain Capital did. We’re looking deeply into that, and we expect that a number of tokens will be able to do that. I can’t definitively say that they will certainly be able to do that, because there are still some legal questions there that we need to solve. Additionally, we want to involve crowdfunding as well. We think it’s very important that people in the US that are not accredited, but that understand the tech really well, are able to make investments like that. It’s just that the burden of doing crowdfunding is quite large. There are questions of how that combines with cryptocurrency and so on that we are doing the legal review and legal work on at the moment. We hope to have news on that relatively soon. That’s stuff that we are actively working on and trying to enable, because we don’t want the accredited investor limit to prevent people that truly understand the tech, and perhaps are much better investors than accredited investors. Having a million dollars does not mean that you know which cryptocurrency network is gonna be better or which cryptocurrency network actually will work. There were a lotta people investing that had a lotta money and could lose it on things that didn’t work out, and a lotta people that understood that something like Ethereum was gonna be really valuable. We wanna enable people to come into things like this, and so we’re looking at crowdfunding. We’re also looking at other ways of potentially involving people that, for whatever reason, can’t directly invest in the presale, but perhaps can come in when the token goes live in an actual token sale, broader, on live exchanges, at some sort of discount that puts them into a good position. Sometimes that can be done by, instead of coming in and investing early, helping the network. One of the big parts of gathering investors for a network like this is gathering people that are really well aligned with the network and wanna help it grow. That’s what investors should be. Investors should not just be random speculators that are trying to make a quick buck. We are interested in helping create large-scale communities that have really strong buy-in from people that wanna help create them and see the promise. One of the things that we’re thinking about is, okay, great, there’s a lotta people that maybe are, unfortunately, limited by the laws around accreditation. However, they probably have the ability to get involved directly with the project and contribute in another way. In a sense, they could get rewarded by either getting paid for their work in tokens, or potentially being able to buy the token when it comes out at a discount that they have that other people don’t have.

Craig Cannon [01:52:32] – That’s a good point. I hadn’t heard about that. @JesseJumpcut asks, I’m having trouble understanding the market need for Filecoin. Is storage a burning pain that consumers face?

Juan Benet [01:52:43] – Oh, definitely. All you have to think about is how much data is being generated by computers. This is not just consumers, although consumers do have a lotta data. I have a phone here. I don’t even know how much storage it has, but it has a lot, and I use a lot of data by having applications that download video or whatever, or when you take pictures and video and so on. The right way to answer this question is to look at the growth in the market for cloud storage. It’s growing tremendously. Cloud storage, in general, is the idea of reselling storage to other groups. Consumers, massive businesses and so on are seeing exponential growth in data. Data and the need for storage, that’s one of the few things you can look at and say this is growing exponentially and shows no sign of stopping anytime soon. You kinda have to extrapolate: are humans gonna continue proliferating and building more cities and–

Craig Cannon [01:53:49] – I think I would reframe.

Juan Benet [01:53:52] – We’re gonna need more and more data.

Craig Cannon [01:53:55] – But I think I would reframe the question, actually. What differentiates Filecoin from Dropbox? Why care about using it?

Juan Benet [01:54:02] – Totally. Filecoin is not to be seen as a Dropbox replacement, although there will be Dropbox-like things that build on top of Filecoin. I think you should think about Filecoin as replacing cloud storage. It’s something that Dropbox would use. A company like Dropbox would think about, oh, do we run our own managed infrastructure, or do we use AWS, or do we use something like Filecoin? That’s where the economic improvement comes: what is the relative advantage of something like Filecoin over other cloud storage offerings, in a sense. The thing here is there’s a certain set of features that Filecoin will bring that other things don’t have. Being able to have decentralized data means that if Amazon doesn’t like you anymore, they can’t, in one turn, shut you off in a way where suddenly you have to move to another provider and deal with changing all of your addresses and everything. With Filecoin, it would continue to work. Two, there’s a whole bunch of features like that, and erasure coding and so on, that we could go into. Then there’s a whole bunch of other things around the market dynamics in general, in that you shouldn’t think of Filecoin as another provider. Think about Filecoin as a market. Filecoin is a market that layers across all providers, and enables a whole bunch of providers that right now are not selling storage to the world to come in and sell it. Think about how much storage there is on the planet that right now is not being sold to other people; if that storage came online, it would drive the price down. That storage right now is depreciating. A lotta people have invested huge amounts of money in massive arrays of hard drives that are not making them any money. They’re losing money on those investments. Think about creating a market that enables anybody to sell that storage to the rest of the world for a profit. There’s a whole bunch of questions there. Wow, can you really achieve economies of scale with a network like this? Can you really get better unit economics? Can you provide bytes cheaper than something like Google Cloud or Amazon or whatever? Our bet there is yes. There’s a whole bunch of places and cases where certain individuals or groups in the world have access to either really cheap storage, or storage that’s positioned well in the network, somewhere between the backbone and a whole bunch of consumers. If they become Filecoin miners and storage nodes, they could actually be at a better optimization point than even something like Amazon. That’s a bet. We think it’s right. That, on its own, is an interesting reason for people to opt to choose something like Filecoin. Think about it kinda like an algorithmic market. Right now you have a very inefficient market where, when you wanna hire storage, you have to go and research companies, and you have to look at them, and you have to sign up with them. You have to be a legal entity, either a person or a company or whatever. You have to have a credit card. You enter into some legal agreement. Then they can sell you bytes. It’s this huge, onerous process. When you compare them, you see their websites and so on.
Compare that to something closer to an actual spot market, where any storage that’s available worldwide, that has shown good metrics, shown to be online for a long period of time, shown to be good or whatever, can then be sold to you at the cheapest possible price immediately, algorithmically. This is about changing the market completely. It’s going from a world of centralized storage providers to a world where there’s a huge market, and it’s mediated programmatically.
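
To make the algorithmic spot market concrete, here’s a minimal Python sketch of the idea: providers post asks, only providers with a good track record enter the book, and a client is matched to the cheapest qualifying offer immediately, with no sign-up process. All names, fields, and numbers here are hypothetical illustration, not the actual Filecoin protocol.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Ask:
    price_per_gb: float                          # ordering key: cheapest first
    provider_id: str = field(compare=False)
    uptime_score: float = field(compare=False)   # "shown to be online for a long period"

class SpotMarket:
    def __init__(self, min_uptime: float = 0.95):
        self.min_uptime = min_uptime
        self.asks: list[Ask] = []                # min-heap keyed on price

    def post_ask(self, ask: Ask) -> None:
        # Only providers with good metrics enter the order book.
        if ask.uptime_score >= self.min_uptime:
            heapq.heappush(self.asks, ask)

    def match(self, max_price: float) -> Ask | None:
        # Serve the cheapest qualifying ask immediately, algorithmically.
        if self.asks and self.asks[0].price_per_gb <= max_price:
            return heapq.heappop(self.asks)
        return None

market = SpotMarket()
market.post_ask(Ask(0.004, "miner-1", 0.99))
market.post_ask(Ask(0.002, "miner-2", 0.97))
print(market.match(max_price=0.01))   # matches miner-2's cheaper ask
```

The point of the sketch is the contrast Juan draws: the matching step replaces the research-compare-sign-a-contract process with a single programmatic lookup over every provider in the world.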

Craig Cannon [01:57:59] – Okay, so adding onto that, the same person, @JesseJumpcut, had another question. How does Filecoin plan to compete with companies like Sia and Storj, I don’t know them, that have been out for a while?

Juan Benet [01:58:10] – Totally. There’s a few things there. One is the reason Filecoin is not out yet is that we spent a ton of our time building the IPFS project and getting that out the door. There’s a ton of people using the IPFS project who want Filecoin to be out. We know we already have a ton of users lined up that right now are not going to those other competitors. They’re actually either on S3 or other places, and would jump directly to us. Then, the deeper way to look at it is to think about the technology. We’re about to release the second version of our protocol. It’s just a fundamentally different thing. It operates in a different way, it offers different guarantees and so on. We think those different guarantees have a very significant market need and solve a whole bunch of market needs that these other networks don’t. That’s how we’re gonna be able to compete. Another thing is, I don’t know how it’ll play out, but I actually bet a lotta people will be mining on both networks, or all of the networks. We’ll see how that actually plays out. Right now, there’s a lot of drivers driving for both Uber and Lyft. We think the tokens, the rewards in tokens, and people’s expectations of how this will end up working will drive people to mine on one. I guess an interesting question right now would be, did people switch from Storj to Sia when the Sia coin appreciated a lot? That’s an interesting question that people should look into.

Craig Cannon [01:59:52] – Can I cross-list storage?

Juan Benet [01:59:57] – In some ways, you will be able to. In other ways, you won’t. This actually is very protocol-dependent, and different protocols allow it in different ways. Some of the things you won’t be able to cross-list. Some of the things you will be able to cross-list. There, people will be trying to get–

Craig Cannon [02:00:14] – Trying to game it a little bit.

Juan Benet [02:00:15] – Yeah. People are participating in two different networks. They’re storing data. Because of the proof of replication, when you have proof of replication-backed storage, that ensures the storage is unique to this particular request, and that’s a very important thing from a game theory perspective. You don’t want networks of … basically pretending to be storing huge amounts of data when they’re only storing one copy, and the thing is not replicated. That’s what the proof of replication is there for. Some things you won’t be able to cross-list, but some things, like fast retrieval and so on, those will be cross-listable. But answering the question in a deeper way, I look at Filecoin as something very different from these other networks. It’s not solving exactly the same problem. Filecoin is solving the problem of how you create a market and allow any provider in. There’s actually a possibility where Sia and Storj make sense to route content to, because those networks provide a tiered structure. We’ll see what happens.
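
Here’s a toy Python illustration of the proof-of-replication intuition: each replica is encoded under a key tied to the specific miner and deal, so one physical copy can’t be passed off as many independent replicas. This is only the intuition, not Filecoin’s actual construction, and the function names are made up.

```python
import hashlib

def seal(data: bytes, miner_id: str, deal_id: str) -> bytes:
    # Derive a replica-specific keystream and XOR it over the data, so the
    # sealed bytes differ for every (miner, deal) pair.
    key = hashlib.sha256(f"{miner_id}:{deal_id}".encode()).digest()
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def replica_commitment(data: bytes, miner_id: str, deal_id: str) -> str:
    # The network checks a commitment to the *sealed* replica, so the same
    # raw data stored for two deals yields two distinct commitments.
    return hashlib.sha256(seal(data, miner_id, deal_id)).hexdigest()

# One copy of the data cannot answer for two deals at once.
assert replica_commitment(b"blob", "minerA", "deal1") != \
       replica_commitment(b"blob", "minerA", "deal2")
```

The game-theory point Juan makes falls out of this: because each deal demands its own uniquely sealed copy, a miner who claims to store ten replicas has to spend the disk for ten encodings, not one.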

Craig Cannon [02:01:20] – This one is a little bit in the weeds. This is from user Holy Nakamoto, who referenced a GitHub issue from a couple years ago.

Juan Benet [02:01:28] – Oh, man.

Craig Cannon [02:01:30] – I don’t know if you remember this one. Is the idea of IPFS rendering DDoS attacks impossible hyperbole?

Juan Benet [02:01:39] – Well, it depends on how you read it. It’s not hyperbole on the whole. There are some ways you can take that question of DDoS prevention and say, “Oh, well, no, you can’t possibly mitigate all possible DDoS attacks on something.” But the way to think about IPFS is that once you have a piece of content, or anybody else around the network has it, you can retrieve it from them, and it doesn’t have to come from the original source. We’ve already seen cases where people can DoS a specific location and can DoS the URL that some resource is at. But if it’s a name, and you know some providers that have that content and can reach them, the DoS attackers can’t necessarily know who those providers are. There could be a whole lot of reasons for this. It could be they’re actually disconnected. You’re in a network that the attackers are not connected to, or you have access to a network where you can search through a whole bunch of nodes that are willing to share routing information with you, but are unwilling to open it broadly to the whole world. This starts getting into private networks. People are building private IPFS networks where they have their own set of content that is not exposed to the rest of the world, and you’re gonna be able to search through a network like that. Right there alone, you have entire barriers where the DoSers can’t even get to the content, first of all. They can’t even get to the machines that are serving it. That solves it. The other case is, hey, if there’s some really popular piece of content that gets replicated to tons of people, now the DoS attack gets way harder. Now you have to DoS thousands of people. In that particular case, it’s not that it’s impossible. It becomes intractable, even for a sophisticated attacker, to DoS all possible computers that have this piece of content. Especially with really incendiary things that a lot of people want to replicate, think about WikiLeaks-type stuff, a lotta people will wanna replicate it all over the place. Then it very quickly becomes very difficult for an attacker to actually silence all possible machines. It is not hyperbole. The attack is impossible in some cases, and intractable in others.
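
A minimal sketch of why content addressing blunts this kind of attack: the address is derived from a hash of the content, so a block fetched from any peer at all can be verified locally, and the original publisher doesn’t need to be reachable. The `Peer` class and its `get` method are hypothetical stand-ins, and a real IPFS CID encodes more than a bare SHA-256 hex digest.

```python
import hashlib

def cid_of(block: bytes) -> str:
    # Stand-in for a real content identifier: just the hash of the bytes.
    return hashlib.sha256(block).hexdigest()

class Peer:
    """Hypothetical peer with an in-memory block store."""
    def __init__(self, store: dict):
        self.store = store
    def get(self, cid: str):
        return self.store.get(cid)

def fetch(cid: str, peers: list) -> bytes:
    # Try peers in any order; any response whose hash matches the CID is
    # authentic, no matter who served it.
    for peer in peers:
        block = peer.get(cid)
        if block is not None and cid_of(block) == cid:
            return block
    raise LookupError(f"no reachable peer has {cid}")

block = b"really popular content"
cid = cid_of(block)
peers = [Peer({}), Peer({cid: block})]   # origin is down; a replica answers
assert fetch(cid, peers) == block
```

This is the “DoS gets intractable” argument in miniature: an attacker has to take down every peer holding the block, because any single surviving replica is provably as good as the original.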

Craig Cannon [02:04:00] – Okay, cool. That’s a good answer. Next question. Eric Tang asks, what do you see as the most immediate industry slash tech stack slash use case for being decentralized?

Juan Benet [02:04:18] – Basically, a way to reframe that question is where is it valuable to have decentralization.

Craig Cannon [02:04:23] – I think he’s kind of leaning towards, what is a product or use case for something built on Ethereum to be decentralized in that way? Or maybe IPFS would be a better way to put it.

Juan Benet [02:04:41] – I think decentralization changes the properties of the infrastructure. It shouldn’t be a thing the end user has to care about. In a lotta cases, some users will care about it, but I think it’s not something they should have to care about, meaning developers are the ones who should think about whether decentralization matters. That has a lot to do with, again, the specific use cases and applications you’re dealing with. Look at things like Slack or GitHub or Google Docs, consumer applications that people use daily to do their work and talk to their coworkers or loved ones, things like messengers and so on. All of that flow of information is passing through a set of centralized agents that can be brought down and frequently are brought down. There’s a lotta cases where GitHub does go down or Slack does go down, or your connection to them gets severed in some way. You just can’t reach them. Maybe you’re offline or whatever. That is a great example where logical centralization sucks. The fact that you can’t reach that origin server prevents you from using any of the data or working together or whatever. It gets so bad that you could have a room full of people with laptops open on Slack or on a Google Doc, and they can’t work together. Their supercomputers, which, let’s be clear here, are more powerful than all of the computers on the planet were a few decades ago, can’t figure out that the content or application they wanna run is really between them and the machines right next to them, and instead pipe all of the data flows straight up the uplink, into the data center and then back. That’s just stupid and wrong. We should not live in that world. I wanna live in a world where, if you have a computer and you’re trying to work with somebody across from you, that data can flow from one person to another, and you can continue working whether or not some random machine somewhere else in the world is failing.

Craig Cannon [02:06:49] – So to answer the question, literally anything where people are interacting with each other.

Juan Benet [02:06:54] – It’s an infrastructure thing. There’s a whole bunch of cases where you wanna think about how the underlying data flows move. Answering the question for the Ethereum case, it’s really about power. Where do you want people to be able to exert power? Doing that transaction through Ethereum and having a smart contract allows you to cut out trust and power all over the place, and have a very clear thing that people agree to that is enforced by a computer, not by courts that are slow and expensive.

Craig Cannon [02:07:25] – I think with a lot of these things, you don’t necessarily have to make it obvious to the end user that this is what you’re doing. It just works; it’s better. Eric asked one more question around decentralization. Where and how does decentralization gain an advantage over centralization when you think about scale and cost?

Juan Benet [02:07:49] – Where do decentralization benefits actually provide better unit economics, where your costs for providing the service are actually better? This is where you wanna think about having as effective an optimization process as you can get. Providing cheap storage to the world, or cheap distribution of content to the world, is a huge optimization problem. You’re dealing with billions of computers around the planet that are all trying to store or retrieve content, and a whole bunch of places where you can store it and move it. Then you’re dealing with, again, billions of people using those computers, where some subset of those billions could actually work on maintaining the network, and some of them are gonna be consumers. That turns it into a super complex optimization problem. The point is, it is actually quite difficult for a centrally planned organization to correctly find and leverage all possible local minima, all the places where this is exactly the right spot to store something or distribute it from, and get the best cost reduction. That’s, I think, where decentralization has a massive advantage over centralized services: you’re literally enabling any person in the world who says, “Oh, I have a clear idea of how I can get cheap power, cheap connectivity, cheap storage, cheap disks,” whatever, to bring that in and create a service. A deeper way to look at it is, do you think markets are more efficient, or do you think central planning is more efficient? Looking at this question naively, the naive answer is, well, markets are better because central planning is bad. The slightly deeper answer is, well, no. If you had a massive computer that was actually able to calculate everything correctly, then you could solve that. You could have correct allocation of resources with one program. But then the even deeper version is that not all agents are similarly incented, which means one agent might produce an answer that is not actually optimal for everyone; it’s optimal for that agent. Markets are kinda fundamental in how we operate. Markets allow individual actors to leverage optimizations, and those optimizations might not be optimizations for somebody else. That’s, I think, where decentralization of power is really important to these networks: decentralization of power, and choice in how to run the service, affects the kinds of optimizations people may wanna do. A great example of this: I know of a lot of Bitcoin mines that have super cheap power. They’re able to get super cheap power because they’re in a particular country where they’re able to get a certain deal, or because they know the right people, or whatever. There’s a whole bunch of reasons why they suddenly have much better unit economics than a major player would have. They don’t have enough power to service everyone in the world, but they could at least contribute that piece. If you collect a whole bunch of these pieces, you actually can build a large-scale service. That’s, I guess, one of the insights.
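
A small sketch of the “collect a whole bunch of pieces” point: no single small provider can serve everyone, but sorting heterogeneous offers by unit cost and filling demand greedily aggregates them into one large, cheap service. The providers, capacities, and prices below are made up for illustration.

```python
def allocate(demand_gb: float, offers: list) -> list:
    """offers: (provider, capacity_gb, price_per_gb) tuples.
    Greedily fill demand from the cheapest capacity first."""
    plan = []
    for provider, capacity, price in sorted(offers, key=lambda o: o[2]):
        if demand_gb <= 0:
            break
        take = min(capacity, demand_gb)   # a small provider contributes its piece
        plan.append((provider, take, price))
        demand_gb -= take
    return plan

# e.g. a miner with cheap hydro power undercuts a big cloud provider for the
# first 50 TB, and the remainder spills over to the pricier capacity.
print(allocate(80_000, [("cheap-hydro-miner", 50_000, 0.002),
                        ("big-cloud", 1_000_000, 0.023)]))
```

A central planner would have to discover every one of those local cost advantages itself; a market lets each actor who found one simply post an offer, and the greedy fill does the rest.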

Craig Cannon [02:11:03] – Last question. What other projects should people be paying attention to right now?

Juan Benet [02:11:07] – Oh, wow. That is a great question. Oh, man, there’s a ton of interesting stuff. I’ll rattle off a few names. I think there’s probably a lot that people already know about. Of course, if people don’t yet understand how Ethereum works and all of that, definitely dive in. It’s a better introduction to the future, I guess, than Bitcoin ever was. Definitely dive into all of that world. Definitely look at things like OpenBazaar and a whole bunch of applications being built with these new kinds of networks, and things like Zcash and so on that bring a new property into the world. Then, if we wanna think about newer, earlier things, there’s a whole bunch of interesting developments around these networks. There’s a lotta people building on Ethereum. There’s 0x, which is a decentralized exchange. There’s Livepeer, which is a peer-to-peer distribution thing that will be interacting with that; it aligns really well with a lot of the IPFS tech and the Ethereum tech. There are things like Tezos, which is a project to build a smart contracts platform where smart contracts are written in OCaml, so you have a lot more certainty about the programming language and the properties of the programs. Ideally, you would like to get to a point where everything is provable. That’s probably infeasible, and there’s probably a theoretical argument why you can’t do that and still have a useful thing, but maybe there’s something there where a network could have everything be provable and still be really useful for a certain class of computation. Then, Numerai is actually super interesting. Numerai is a hedge fund that kinda decentralizes the data modeling. The predictive power of the models is decentralized. Individual participants can come in and contribute different algorithms to try and leverage the hedge fund’s data to trade better. That, I think, is a very interesting mix of both competition between the participants coming in, and collaboration, in that all of them together are gonna win together. Numerai’s using a token. I think those are a set of projects that are pretty interesting. There are probably further-out things that are gonna come out. If you’re into research, I would highly encourage you to follow the proof of stake line of work. We’re getting ever closer. I think we’re quite close to something that can succeed and work at scale. There are already several provable protocols. Anyway, that’s some interesting stuff.

Craig Cannon [02:14:01] – This was great. Thanks, man.

Juan Benet [02:14:02] – Thank you. Thanks so much for having me.

Craig Cannon [02:14:05] – Okay, thanks for listening. As always, you can check out the transcript at blog.ycombinator.com. We’ll also have the video of the interview up there. Please remember to subscribe and rate the show. See you next time.
