At the Intersection of AI, Governments, and Google - Tim Hwang

by Y Combinator · 6/16/2017

Tim Hwang is the Global Public Policy Lead on AI and Machine Learning for Google. Learn more about Tim at TimHwang.org.





Transcript

Craig Cannon [00:00] – Hey, this is Craig Cannon, and you’re listening to Y Combinator’s podcast. So today’s episode is with Tim Hwang. Tim’s the global public policy lead on AI and machine learning for Google. And what that basically means is he interacts with governments to inform Google’s opinions on policy. He also helps educate governments on what things like machine learning actually mean and helps them figure out what the implications might be. So in this episode, Tim walks us through how governments are thinking about AI, and he also shares some thoughts on what the future might look like. Alright, here we go. And so I think that with AI and then policy on AI, you’ve kind of nested two obscure things together, so people don’t really know what you’re talking about. So could you just back up a little bit and explain what doing policy for Google actually means in the context of AI?

Tim Hwang [00:48] – Sure, definitely. So you know, I think the really interesting thing about AI is basically that, you know, a lot of the modern techniques in artificial intelligence, if you even asked people like a decade ago, they would have told you like, this is never gonna be a thing, it’s a complete dead end, why are you doing this research. And it really has kind of exploded in a completely unexpected way in the last few years. And so really a lot of the challenge has been like, okay, everybody’s kind of wrapping their heads around what the, you know, even what the business impact of the technology is gonna be. But there’s increasingly a lot of people trying to figure out like what the social impact of the technology will be. And I would say policy really sits at that interface between these really cool technological capabilities that are coming about and then like, what society in general’s gonna do about it.

Craig Cannon [01:27] – And so what would be a tangible example, at Google, of a policy that you guys have worked to figure out?

Tim Hwang [01:34] – Sure, so there’s a couple of really interesting problems that we’ve been working on very closely. One of them is this question about fairness in machine learning systems. To give you one really concrete challenge we’ve been thinking a lot about: once a machine learning system is behaving in a biased way, one way of trying to de-bias it is collecting more diverse data. Well, one of the big problems is when you do that, you end up collecting lots and lots of data about minorities, which raises all these really interesting questions around privacy and what have you. And that ends up being a really interesting problem because it’s both a technical challenge, right, which is, can you collect an adequately diverse data set, but on the other hand, also a policy question, which is, what is society comfortable with you collecting and what are the practices? And that ends up being a really interesting trade-off that you have to navigate if you’re interested in these problems.
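
To make that trade-off concrete, here is a minimal sketch of the kind of disaggregated check it implies: measuring a model’s accuracy separately per group, which is only possible if the sensitive attribute was collected in the first place. An illustrative snippet, not Google’s actual tooling; all data and names are hypothetical.

```python
import numpy as np

def disaggregated_accuracy(y_true, y_pred, groups):
    """Report accuracy separately for each group (hypothetical helper).

    A large gap between groups is one simple signal that a model may be
    behaving in a biased way, but computing it requires having the
    group attribute, which is exactly the privacy tension above.
    """
    for g in np.unique(groups):
        mask = groups == g
        acc = (y_true[mask] == y_pred[mask]).mean()
        print(f"group={g!r}: n={mask.sum()}, accuracy={acc:.3f}")

# Toy data: the model happens to do worse on group "b".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
disaggregated_accuracy(y_true, y_pred, groups)
```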

Craig Cannon [02:22] – And so what do you actually have to do? Like are you going and doing user interviews with people, or is it just guessing?

Tim Hwang [02:28] – Yeah, part of it’s user interviews. Part of it’s actually working with people who know, right? It turns out that issues of privacy, particularly minority privacy, are not new problems. And so a lot of our work is actually talking with people who are experts in that space, right? People who have worked on bias and discrimination questions in the past, and a lot of data scientists, and trying to get them to talk to one another. Because I think right now what we’re really trying to do is bridge these human values on the one hand with a lot of what’s happening on the technological side.

Craig Cannon [02:55] – And so if I’m a company, and I’m like, I can’t afford a policy guy like Tim, and I will be dealing with large amounts of data that may or may not discriminate against people, are there any like obvious no-go’s that you would tell someone?

Tim Hwang [03:09] – Well, I think it’s to be sure that you’re interrogating the data, right? I think that’s one important place to start. Now, one of the interesting things about machine learning is that there’s lots of potential points of failure, and I think every single interesting point of failure is being investigated right now. But one of the most common problems is just that you don’t adequately think through your data. And so the machine does what the machine does, right, which is try to optimize against the objective function that you give it. And it’ll often maximize in ways that you don’t expect. And that is in fact part of the problem, right? So one of the examples that I always think about is this project that we released called Deep Dream. One of the problems in computer vision is trying to figure out what the computer actually thinks it sees when it looks at an image. And so you go through this process where you show it an image and ask, what do I have to do to this image to make it look more like what you think, for example, a sandwich looks like? And you edit the image slightly, and you keep repeating this process until you surface the ideal version of whatever the computer thinks that thing is. And it turns out that when you ask it to reveal what it thinks a barbell looks like, barbells always show up with human arms attached to them.

Craig Cannon [04:19] – Oh, wow.

Tim Hwang [04:20] – Right, yeah. And that’s just what comes out, right, because you’ve trained it on photos of barbells that always have someone holding the barbell. And so it ends up learning this completely bad representation. And what do you gotta do? I mean, a big part of it is just the awareness that that can happen, and then, how do you interrogate your data set to make sure it doesn’t have those problems.
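
The loop Tim describes (show an image, nudge it, ask again) is essentially gradient ascent on a class score. Below is a heavily simplified Python sketch of that idea, assuming a pretrained torchvision classifier; the actual Deep Dream work maximizes intermediate-layer activations rather than a single class score, and the class id here is arbitrary.

```python
import torch
import torchvision.models as models

# Freeze a pretrained classifier; we optimize the image, not the net.
model = models.squeezenet1_1(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 543                     # arbitrary ImageNet class id
img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise

for step in range(200):
    score = model(img)[0, target_class]  # how "class-like" the image is
    score.backward()                     # which pixels to nudge, and how
    with torch.no_grad():
        img += 0.5 * img.grad / (img.grad.norm() + 1e-8)  # small edit
        img.grad.zero_()
# `img` now approximates what the network thinks the class looks like;
# run this for a barbell class and arm-like structures tend to appear.
```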

Craig Cannon [04:37] – And you guys are doing some interesting stuff around adversarial data, right?

Tim Hwang [04:40] – Yeah, that’s right. So I think that adversarial examples and generative adversarial networks are some of the hottest areas in the research right now. It’s almost become a joke that there are so many of what they call GANs out there right now; everybody has a GAN.

Craig Cannon [04:54] – So what does that mean? What does that stand for?

Tim Hwang [04:56] – So a generative adversarial network. It’s a very particular way of setting up machine learning. But adversarial examples lead to these really fascinating results, where you can take a picture of a panda, and that’s the classic example, and you edit a couple of the pixels, and the computer will basically be like, yup, that’s definitely a gibbon. And the really fascinating thing is that it still looks like a panda to humans, right?

Craig Cannon [05:17] – And so what data are you seeding into that image to make it think it’s a gibbon?

Tim Hwang [05:20] – Well, a lot of it is basically that you’re editing particular pixels within the image that we know will set off the machine to behave in certain ways. Cause it turns out, basically, that we always assume a computer will see the same thing that we do, just based on the visuals. But how we process is actually completely different from machines. This researcher, David Weinberger, wrote this awesome article recently, which basically tries to argue that machine learning is generating knowledge, but one of the most interesting things about it is that it’s generating knowledge in maybe a way that is completely different from the way our human brains work. And that ends up being a really interesting challenge: how do you understand the knowledge that you’re getting, and how do you understand the reasoning behind the knowledge that you’re getting, from machine learning systems?
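
One well-known way such pixel edits are computed is the fast gradient sign method: take the gradient of the model’s loss with respect to the input image and step every pixel slightly in the direction that increases the loss. A minimal PyTorch sketch, assuming a preprocessed input tensor and its true label; this is one published technique, not necessarily the exact method behind every such demo.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()  # any pretrained classifier

def fgsm(image, true_label, epsilon=0.01):
    """One-step attack: nudge every pixel slightly uphill on the loss.

    `image` is assumed to be a preprocessed (1, 3, 224, 224) tensor and
    `true_label` its class id; epsilon controls how visible the change
    is. A sketch of a published technique, not a product API.
    """
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()

# Usage sketch: adv = fgsm(panda_tensor, panda_label)
# model(adv).argmax() will often differ, while adv still looks like a panda.
```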

Craig Cannon [06:03] – And maybe that’s a sensible segue into like how people are investigating the impact of AI as it relates to like automation and what humans are good at doing and what computers are good at doing.

Tim Hwang [06:12] – Yeah, right.

Craig Cannon [06:13] – And so when you travel around, you meet with people, you meet with different countries, like how are people gauging the effects of automation and AI right now and its effects over the next like, you know, decade?

Tim Hwang [06:23] – Yeah, so it’s an evolving picture. And I think right now everybody is just surprised at all of the things that machines can do that we thought humans were gonna be good at for the foreseeable future, right? So Go is the canonical example. But there’s all sorts of really interesting reasoning and other things that machines are engaging in now. And so one thing I always tell people is that everybody wants to think about AI as if it were this huge meteor just crashing into the earth, where they’re like, what do we do when the AI arrives? And it turns out that it doesn’t work like that, right? In fact, what we really need to get to is thinking about how particular technical capabilities will map onto the economy. And that’s where a lot of the work is happening right now.

Craig Cannon [07:09] – Okay, and so yeah, let’s go into some examples.

Tim Hwang [07:11] – Yeah, sure, so for example, one really interesting question is this adversarial example, right, which is basically like, everybody always assumes that like, okay, if it can be automated, it definitely will be automated, right? But that’s like a fallacy, because in certain cases, like, you may really worry about the security of your systems, right? So if someone, for example, can like hold up a photo and cause a security camera to be like, oh, it’s definitely Tim, open the door, right, like that ends up being a real reason why you would not necessarily want to implement a machine learning system for you know, access control, for instance. And so that’s actually really interesting because that means that if we don’t solve that research problem, that means that we’ll be limited in the kind of domains that machine learning enters into. And I think that’s what we’re really interested in right now is like what are these kind of gateway research questions that if we got through would like totally change the nature of like, who, when, and why someone would implement this stuff.

Craig Cannon [08:01] – And so are those things attracting the interest and the momentum of the research community? Because I can see a certain direction where it becomes incredibly product-focused, right, where like I’m a researcher, I’m incredibly talented, and figuring out if this security camera’s gonna work with an adversarial network might not be of highest interest to me.

Tim Hwang [08:20] – Right, right, right.

Craig Cannon [08:21] – Is that like blocking people, or is like the general concept enough?

Tim Hwang [08:24] – I mean, I think right now it’s a little bit unevenly divided. It turns out that research interest is not necessarily policy-relevant, right? And so in some cases they’re overlapping. I think there’s a lot of interest in adversarial examples. There’s a lot of interest in, what are attacks, essentially, that you could put on these machines to get them to behave in ways that you don’t expect? That seems to be a place where security, which is very much a policy interest, will map on quite nicely to security as a research interest. But take something like fairness, right? I was talking to a machine learning researcher the other day who was basically like, look, I could not in good faith advise a grad student to work on machine learning fairness issues, because it’s just not considered a serious problem in the field, right? And that has less to do with the research itself and more to do with the norms of the field. And that ends up being a big issue: we don’t have coverage on certain types of things, and in practice it may really limit where these technologies are implemented.

Craig Cannon [09:19] – Well I think it’s a material issue right now, like there’s a gap between product understanding and like actual deep research.

Tim Hwang [09:25] – Yeah, that’s right. And I say this to a lot of people. Everybody’s always like, so what skills do we need to teach people in the future because of machine learning? And I think one enormous skill will be domain knowledge. Because coming up with a technical capability is just one part of this huge picture, right, which is, okay, so then how do we actually introduce automation in a way that makes sense to people? And that’s a huge task. And so, I don’t know, my personal prediction is that interface, and how we effectively collaborate with machines, particularly with these new types of models, is still a big open question, and it seems like it will be in increasingly high demand as you suddenly have access to these capabilities.

Craig Cannon [10:05] – So what I’ve been wondering then is like, does, for example, you know, TensorFlow or any one of the machine learning APIs, does that become the new AWS for products, or do people have to build their own to create like a defensible company?

Tim Hwang [10:21] – I mean, I think cloud services will have the same impact on the economy that they always have, right? And I think one interesting thing is that all these companies are now competing to offer cloud ML services. And the upshot of that is basically that you don’t need a PhD in machine learning to get all the benefits of machine learning. And I think that will shape the space, for sure.

Craig Cannon [10:43] – Okay, cool. So what are the other areas, aside from the first one we talked about, for automation and work? Like where are other people interested?

Tim Hwang [10:53] – Well, so I think the other thing we’re interested in, and I’m really interested in, is, is it possible to pull off machine learning with less and less data, right? And so there’s a couple examples of that, but one of them is one-shot learning, right? Where people are basically working on the ability to teach machines with a much smaller number of examples. Now, that actually has a really big impact on the game, cause it means that you can implement machine learning effectively in situations where it’s really expensive to collect lots of data. There’s also one really cool interface between VR and AI that’s happening right now, where the whole idea is, there’s a project called Universe from OpenAI and another project called DeepMind Lab, which is basically, imagine you need to teach a robot to get through a maze. Well, you could have it physically run through that maze millions of times, or you could just have a virtual 3D environment that you have a computer run through. And it learns how to do that in virtual space, and then you put it into practice in a real robot. And so that’s another really exciting thing: you don’t necessarily need an expensive physical setup to collect the data you need in order to accomplish tasks in the real world.
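
The “run the maze in simulation” idea is, at its core, reinforcement learning in a virtual environment. Here is a toy tabular Q-learning sketch on a one-dimensional corridor, just to show the mechanic; Universe and DeepMind Lab do this at scale with rich 3D worlds.

```python
import numpy as np

n_states, goal = 6, 5
q = np.zeros((n_states, 2))        # value of (state, action); 0=left, 1=right
rng = np.random.default_rng(0)

for episode in range(500):         # thousands of cheap virtual runs
    s = 0
    while s != goal:
        # Mostly act greedily, sometimes explore.
        a = int(rng.integers(2)) if rng.random() < 0.1 else int(q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s2 == goal else 0.0
        q[s, a] += 0.5 * (r + 0.9 * q[s2].max() - q[s, a])  # TD update
        s = s2

print(q.argmax(axis=1)[:goal])  # learned policy: head right toward the goal
```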

Craig Cannon [11:57] – Okay, so then what, like, I guess what I’m curious about then is like how are these countries preparing for this like, you know, again, not a meteor strike, but like perhaps a gradual shift over 20 years, 30 years, to a very different world than what we have right now?

Tim Hwang [12:13] – Yeah, I think right now you’re seeing a bunch of different ideas out in the space. You know, for example basic income, right, universal basic income, which would fundamentally reshape the social contract and how we think about, for example, welfare in a whole number of countries. So you see proposals like that. I think you see a number of proposals that are more focused on education. So what are skills that people would need in the space? And that ranges everywhere from everybody needs to be a programmer, to, oh, well we need to really encourage computational thinking, right, which is the ability to work effectively with data. And so there’s a couple of different options out there. Some of the more interesting ones that I’ve heard of are a little bit more obscure. So some people have said, oh, well maybe we need automation insurance. So in the future your employer will provide you with a contract that says, if your job turns out to be replaced by AI at some point in the future, we’ll pay out at some kind of rate. Right, so people are experimenting with lots of options right now. I think what we actually need in the space is more experimentation. Even proponents of basic income, a lot of them will tell you, we actually don’t know in practice what this would look like if it were rolled out at any level of scale. And so it’s cool seeing YCR and a couple other places experiment with this.

Craig Cannon [13:27] – And so where is the traction happening, then, with all of these experiments? Like it seems very limited, but is it all like in northern Europe, or I know there’s a basic income study in India at this point. Who seems to be focusing most on this area?

Tim Hwang [13:42] – Yeah, we’re seeing a lot of different countries engage in this. I think northern Europe is kind of leading the way in terms of their willingness to experiment with some of these models. And I think they’ve got a couple things going for them. On one hand, they have a skilled labor force that is relatively expensive. So I think they’re interested in and excited about AI in large part because it offers the prospect of bringing, for example, manufacturing back to the country, because it allows them to compete on the same footing as other countries that have offered labor at much lower cost. So that’s one thing that’s good for them. I think the other thing that’s encouraging a lot of experiments is that they have a lot more coordination between government, industry, and labor, which is making it more possible to experiment with these sorts of things. So in a really interesting way, it turns out that maybe northern Europe is actually a little bit ahead in its ability to experiment with and understand some of these programs.

Craig Cannon [14:35] – And then as a like, Google, or Alphabet, as like this international institution at this point, how are you guys thinking about interacting with different countries as this happens?

Tim Hwang [14:44] – Yeah, so we’re investigating at the moment, right? So the question is, who on the research side should we be working with, and what are programs that we could support that will give us a better handle on this picture? Because, look, ultimately, we’re a technology company, right? And so we know that we don’t have all the talents necessary to evaluate, you know, what is a proper social welfare program. But on the other hand, we do think it’s really important that we encourage a better societal understanding of how to deal with these technologies. And so I think we’re much in the mode of, how can we support this? And that’s partially through, potentially, resources, but also potentially expertise as well. If you want to know anything about machine learning, we’ve got people who can tell you about that. Now we have to marry that with people who have a good understanding of how this will impact society, either through economics or otherwise.

Craig Cannon [15:34] – And do you ever feel like the information you’re disseminating is guiding the conversation and guiding the future, like people are playing into the game as if it’s intentional? Or is it just open?

Tim Hwang [15:46] – I think it’s very open, right. You know, I think it’s easy, particularly in the Valley, to be like, oh my god, these big companies. But we’re only one part of a much, much larger picture of what’s happening in the economy. I totally think that’s the case. You know, we talk about AI and automation, but we also want to talk about demographic shifts happening in the economy. Like what’s it mean that we have an aging workforce, right? Or what’s it mean that we have falling workforce participation in the United States? Those are trends that are almost as large as what someone comes up with in a lab and presents at a machine learning conference. And so I think it’s really important that we look at this all in a bigger perspective.

Craig Cannon [16:23] – Okay, and so what do you guys do to keep that in mind? I imagine you just have like a whole policy team to manage that sort of thing.

Tim Hwang [16:30] – Yeah, that’s kind of what we’re responsible for: keeping track of a lot of this stuff and getting a better understanding of who is researching in the space. Cause as I said, I think we’re still really early on in this technology. Again, if you had asked someone 10 years ago whether or not neural nets were gonna be a thing, they’d be like, eh, I don’t know, it probably wouldn’t work, right? But we’re at a phase right now where it suddenly has become technically real, and I think that understanding is just starting to percolate out into a bunch of other fields, who are like, okay, well I guess we now gotta assess what’s going on.

Craig Cannon [16:59] – And so do you see companies and organizations in countries locking their gates because they’re scared? Because it feels new, and it’s obviously massively hyped, but there’s also some reality behind it. Has there been a negative reaction?

Tim Hwang [17:11] – Yeah, I wouldn’t say so. I mean, I think by and large what we’re seeing is that a lot of the governments are just really curious. They actually want a better understanding of what’s going on. So in many cases I think what we’re seeing is like people asking, you know, like what is happening in the technology. So I think the phase of what to do about it is still on its way.

Craig Cannon [17:30] – Right, so you give them like the PowerPoint deck, and they’re like oh, okay, I kinda get how this works, and then they go home, you know, whatever, to like Japan and they’re like, okay, they think about it? That’s how it works?

Tim Hwang [17:41] – Well, yeah, I think so. I mean, this is how government progresses, right? They ask questions, they get information, and then there’s a long process to figure out what you do around it. But that isn’t to say there aren’t laws and other regulations being passed that have relevance for machine learning. So one of the most interesting aspects of the GDPR, which is a new privacy regulation in Europe, is the potential for what they call a right to explanation. The idea is that certain kinds of automated decision-making might be so significant as to require, or give citizens the right to, some kind of human-understandable explanation from the system for what it’s doing. And that raises all sorts of interesting challenges about how you actually pull that off. And so I don’t want to make it sound like no governments are taking action, but I think that’s the beginning part of it, right? And by and large the stance of most governments has been: understand what’s going on.
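
For the simplest models, an explanation can be as direct as reporting how much each input pushed the decision. Below is a toy sketch of that idea for a linear scoring model, with hypothetical weights and feature names; the open challenge Tim points to is producing anything comparable for deep networks.

```python
import numpy as np

# Hypothetical linear scoring model: weights and feature names made up.
weights = np.array([0.8, -1.2, 0.4])
names = ["income", "missed_payments", "account_age"]

def explain(applicant):
    """Print each feature's contribution to the score, largest first."""
    contributions = weights * applicant
    for name, c in sorted(zip(names, contributions), key=lambda t: -abs(t[1])):
        print(f"{name}: {c:+.2f}")
    print("score:", round(float(contributions.sum()), 2))

explain(np.array([1.5, 2.0, 3.0]))
# -> missed_payments: -2.40, income: +1.20, account_age: +1.20, score: 0.0
```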

Craig Cannon [18:36] – Do you think someone’s doing it particularly well now?

Tim Hwang [18:41] – Yeah, I mean, I was really excited by some of the stuff happening out of the UK. So last year, they actually did a report giving an account of the risks and opportunities from artificial intelligence, and I think it’s a really good account. And then last year, under the Obama administration, there was a really good report done in the US as well on the topic.

Craig Cannon [19:00] – Okay, and so like can you go specific on that?

Tim Hwang [19:04] – Yeah, sure. So I mean, I think what we at least had in the US case, right, was basically a report that really focused in on like okay, what are the real concrete risks here? And part of the idea was to pivot away from discussions that were just like, okay the main thing we’ve gotta talk about here is whether or not robots are gonna destroy us, right, or decide to take over, right? Which I agree is like kind of an interesting scenario to consider, but like, there’s a lot of core, near-term problems that need to be dealt with. And I think that was one thing they did that was very useful.

Craig Cannon [19:36] – Aside from the stuff we talked about, what do you find to be particularly exciting, both like here, at a local Bay area level, as far as like research, and then at global, international research level, moving this stuff forward?

Tim Hwang [19:51] – So I think there’s two things that I find really interesting right now. One of them is the intersection of machine learning and art, right? Largely these are technologies we’ve been using to solve pretty pragmatic things, like, how do we ensure that we can adequately recognize cats in photos? But what’s really interesting is a bunch of people are playing around right now with the question, could I use this for artistic purposes? So Google has this really fun project called AI Experiments, which is a lot of small things like this that demonstrate the artistic possibilities of the technology. We also have another program called Magenta, which is looking into machine learning in music and whether there are ways of creating better creative collaborations between humans and machines on that front.

Craig Cannon [20:33] – And have you experimented with it personally?

Tim Hwang [20:36] – Yeah, some of it’s really fun. There’s one project which is basically like a melody generator. Like you play some notes on a piano, and then the computer will play alongside you.

Craig Cannon [20:44] – Like harmonize with you, or?

Tim Hwang [20:45] – Yeah, exactly right. So you can kind of improvise with a computer, which is super cool. There’s another project called Giorgio Cam, named after Giorgio Moroder, which you use on your phone: you take a couple of photos of things in the room, and it produces this boppin’, like, electronic dance hit that uses the words of the objects in the room as a rhyming set of lyrics.

Craig Cannon [21:03] – Woah!

Tim Hwang [21:04] – Super cool, yeah. And a great example of like how the technology’s becoming like really accessible. Cause again if you wanted to like do that like 10 years ago, it would have required a huge amount of money, and you know, a bunch of PhDs to try to work on this problem.

Craig Cannon [21:16] – Right, yeah. I’ve been fascinated with that, like how it’s become distributed just even in the past year. Like, I told you about all the speech to text stuff that I’m working on. It’s just like, man, like the fidelity of it is shocking just in like one year.

Tim Hwang [21:29] – Right, right. And it’s gotten way better, which I think is super interesting. I think the other thing is these really unexpected things that emerge too. So the other thing that I think is really cool right now is there’s a paper that came out, from DeepMind I think, earlier this year, which showed that if you get two machines to talk to one another, and you set up a third computer that basically says, I can read what you’re saying, or I can’t read what you’re saying, you can train those two systems to come up with the rudiments of encryption, without ever needing to program encryption into the computers, which is also super cool.

Craig Cannon [22:00] – Woah.

Tim Hwang [22:01] – They learn how to accomplish that task. And it’s not very good encryption, but the basics are learned by these systems, so long as you give them good reinforcement: okay, that’s still cognizable, I can still understand what you’re saying, versus a third party being like, oh, I can’t read that.
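
The setup being described, usually attributed to a 2016 paper on adversarial neural cryptography, has three networks: Alice encrypts with a shared key, Bob decrypts with the key, and Eve eavesdrops without it. The only training signal is Bob’s reconstruction error and Eve’s failure to do better than chance. A compressed PyTorch sketch, with arbitrary sizes and a simplified loss:

```python
import torch
import torch.nn as nn

N = 16  # bits per message and per key, arbitrary

def mixer():
    # Tiny stand-in for the paper's networks: 2N inputs -> N outputs.
    return nn.Sequential(nn.Linear(2 * N, 2 * N), nn.Tanh(),
                         nn.Linear(2 * N, N), nn.Tanh())

alice, bob = mixer(), mixer()                        # both get the key
eve = nn.Sequential(nn.Linear(N, 2 * N), nn.Tanh(),
                    nn.Linear(2 * N, N), nn.Tanh())  # sees only ciphertext
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_e = torch.optim.Adam(eve.parameters())

def random_bits(batch):
    return torch.randint(0, 2, (batch, N)).float() * 2 - 1  # +/-1 bits

for step in range(3000):
    msg, key = random_bits(256), random_bits(256)

    # Eve trains to read the ciphertext without the key.
    opt_e.zero_grad()
    cipher = alice(torch.cat([msg, key], dim=1))
    eve_err = (eve(cipher.detach()) - msg).abs().mean()
    eve_err.backward()
    opt_e.step()

    # Alice and Bob train so Bob recovers the message while Eve is left
    # guessing (mean error 1.0 is chance level for +/-1 bits).
    opt_ab.zero_grad()
    cipher = alice(torch.cat([msg, key], dim=1))
    bob_err = (bob(torch.cat([cipher, key], dim=1)) - msg).abs().mean()
    eve_err2 = (eve(cipher) - msg).abs().mean()
    (bob_err + (1.0 - eve_err2) ** 2).backward()
    opt_ab.step()
```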

Craig Cannon [22:17] – Oh man. And so do you have thoughts on how this will become distributed in such a way that any day we’ll be interacting with it in our everyday lives, as just like fun projects? Like will it exist in the art space, will it be, you know, new programming languages for folks to learn when they’re younger?

Tim Hwang [22:35] – Yeah, I mean, I was talking to Peter Norvig, who is one of the researchers we have, and one of the founding fathers of AI, and he had this really interesting thought, which is basically that we may be approaching the period where we actually have to entirely rethink how we teach computer science, because machine learning is such a powerful tool, and also, cognitively, it works in a way that’s totally counterintuitive, right? So I do less software than I used to, but definitely when I was in the trenches doing coding work, it was very much, okay, let’s get a bunch of smart people in the room, let’s come up with a bunch of rules, and then let’s get those rules into the machine. Versus this much different mode of thought, which is basically, let’s present the machine with a bunch of examples and then verify whether or not the machine has learned the proper lesson. And so his idea is that we may really want to rethink how we teach CS from the very first moment you step into a classroom, which I think is a super compelling idea. Cause it was always thought of like, oh, machine learning’s just gonna become this complement to how you do programming. But I wonder whether software in the future will actually look more and more machine learning focused, right, and you’ll actually change your entire approach to programming systems.
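
As a toy illustration of those two modes of thought, the sketch below produces the same behavior twice: once by writing the rule down explicitly, and once by showing a model labeled examples and then verifying what it learned. The task is made up purely for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Mode 1: smart people in a room write the rule down explicitly.
def inside_circle_rule(x, y):
    return x ** 2 + y ** 2 < 1.0

# Mode 2: show the machine labeled examples instead of rules...
rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(5000, 2))
labels = [inside_circle_rule(x, y) for x, y in pts]
model = DecisionTreeClassifier(max_depth=8).fit(pts, labels)

# ...then verify whether it learned the proper lesson.
test = rng.uniform(-2, 2, size=(1000, 2))
truth = np.array([inside_circle_rule(x, y) for x, y in test])
print("agreement with the rule:", (model.predict(test) == truth).mean())
```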

Craig Cannon [23:46] – Oh, man, that’s fascinating. I mean, it’s already kind of gone that way, in that like many CS programs are so technical you actually never build a web app.

Tim Hwang [23:53] – Yeah, that’s right.

Craig Cannon [23:55] – Like you can go through Stanford CS and never build a web app.

Tim Hwang [23:56] – Yeah, and I think it’s a very natural trend that like, we’re getting to higher and higher levels of abstraction, so like in some ways machine learning is like this ultimate level of abstraction where it’s like, even if you wanted to understand what’s happening in like a neural net, it might be actually kind of difficult to do so.

Craig Cannon [24:09] – Yeah, I mean, I guess so. But I see it becoming like, there’s just new ways of thinking about how you ought to be programming, right, like how you structure the code because at a certain point, things will just become abstracted and you won’t have to do it anymore. Like I think about it in the context of like, you know, Parse creating an API, right? Like that will exist for many things. Like I could see like a Squarespace type thing but for like a proper web app, right, and you just drag your database in. And you’re good to go. You never even think about it.

Tim Hwang [24:36] – That’s right, yeah.

Craig Cannon [24:38] – And so, ironically, like programmers might lose their jobs way sooner than they think.

Tim Hwang [24:43] – Well, it’s particularly interesting because there’s this emerging research right now on using machine learning to train machine learning systems, which raises this meta-level. Right now there’s a lot of hand work that goes into building a model so it learns the right representations. But if a machine can do that in the future, it gets even more abstracted, where you might not even need to be a specialist, because in some ways the machine kind of codes itself.

Craig Cannon [25:07] – So I think one thing that a lot of people are curious about is how you’re actually going to build a business around AI. So we can start broad and then go more narrow: do you think AI will be dominated by massive companies like Google and Facebook, or will there be, you know, very successful AI products on the small scale?

Tim Hwang [25:28] – Yeah, so I actually think that there’s a ton of room for competition here, and it’ll be interesting to see how all the various companies find their niches in the space. I think there’s two really interesting trends right now. One of them is the emergence of cloud platforms, where basically all the companies have said, there’s a long tail of uses that we would never be able to take advantage of ourselves; well, we may be able to provide the services that power those services, right? So for example, Google’s offering Cloud ML right now, and I think it’s a really interesting development in the space which creates a lot of opportunity, because it means there’s all these industries that might not necessarily be AI industries that might be able to seize the benefit from the technology. So that seems like a pretty huge thing to me. I think a second one which is really interesting is some of the one-shot learning stuff we talked about earlier, which is basically that the amount of data you need to pull off certain types of machine learning applications is going down over time. And what that tells me is that there might not necessarily be a first-mover advantage in the space: you may have collected a bunch of data, but if it’s not the relevant data, and the amount of data you need is going down over time, then the real big challenge is less the data and more your ability to build good interfaces and good experiences around the technology.

Craig Cannon [26:44] – Yeah, I’ve been wondering about that like as I play around with it and build like, tiny little web apps and stuff, like how much of this is just entirely reliant on the product, as like it’s all plug and play, and so to a certain extent, like, folks can almost guess which techniques you’re implementing, which APIs you’re using. And if they’re faster, with better engineers, and then they have like the magic touch of the product person, I don’t see any reason why they can’t just jump ahead.

Tim Hwang [27:11] – Yeah, right, right. And I think we’re maybe fooled by like the nature of the field right now, where it’s like, ah, we gotta get like, the most researchers to go and compete on this thing. And that is like, a big, important part of it, because they’re producing a lot of the breakthroughs in the space, but it is, I think, important to consider too that like, there’s still this big, open question of how this actually becomes effectively part of the product.

Craig Cannon [27:33] – Oh, well, absolutely. Well, I mean, we did an interview at Baidu, and it may or may not come out before yours, so we might do like a fourth-wall jump, but they explicitly are focusing on things for over 100 million people, and you’re like, okay, well, I can build plenty of successful startups or businesses for less than 100 million, maybe even a million. And so yeah, I think there are all these fantastic opportunities for people. And yet folks seem to be focusing on very similar implementations, you know, whether it’s like chat bots or customer service, which I guess is effectively the same thing. Why do you think that is? Is it that they just follow what seems to be the market leader? Or are these the most obvious?

Tim Hwang [28:14] – Yeah, I think we’re also still trying to figure it out, right? And I think we can’t avoid that AI is a technology but AI is also like a position, it’s a marketing position, right, which I think is actually a really key part of the picture, right, where it’s like, why do we think about like Siri or the Google Assistant as like AIs but we don’t necessarily think about, like the Facebook newsfeed as an AI. Right, these are all systems that are all powered by machine learning, but there’s something about its representation as like, oh yeah, this is a machine that talks to you, that makes our brains snap immediately to like pop-culture, you know, equals AI, right? And then that ends up being a really big part of it too, in that there’s a lot of incentives to like correspond to what we think of as AI, even though some of the most powerful AI applications may not even come in the form of like a personified, you know, personality.

Craig Cannon [29:08] – Well I think that’s a super interesting angle. It’s like out here, seemingly, it makes sense to like raise your money as like an AI business. But when you look at Facebook, right, Facebook if you log in doesn’t say AI anywhere, and clearly they have a lot of people using it. So I wonder if it is like a massive positioning thing that many companies do end up missing because you just have to get like the nerdy people interested in it, to sell it, to raise the money, if you’re gonna do venture-backed or whatever, but then your end user is like, why am I paying all this money for this chat bot?

Tim Hwang [29:43] – I mean, for example, yeah, if you wanna talk about one of the most critical applications of machine learning to date, it’s spam filters, right? Spam is this incredibly huge, systemic problem on the internet, and it is largely contended with by machine learning right now; those are largely the tools that we use to deal with it. And that’s an application that we never think about. Like with many technologies, the most important applications will be some of the least visible.
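
In miniature, a learned spam filter is just a text classifier trained on labeled messages. A minimal scikit-learn sketch with made-up data; production filters rely on far more signals than the message text alone.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: a handful of labeled messages.
train_texts = ["win cash now", "cheap pills online", "lunch tomorrow?",
               "meeting notes attached", "free prize claim now", "see you at 5"]
train_labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

# Bag-of-words features feeding a naive Bayes classifier.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(train_texts, train_labels)

print(spam_filter.predict(["claim your free cash prize",
                           "notes from the meeting"]))
# -> ['spam' 'ham'] on this toy data
```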

Craig Cannon [30:11] – So what are you excited about? What are you gonna build? What are you gonna build with AI?

Tim Hwang [30:16] – I gotta think about it some more. I mean, you know, I’m really interested in these kind of small-scale machine learning projects. I think we might have talked about it earlier, but we have this really crazy story where it turned out that there was this cucumber farm in Japan that was using machine learning to build a really cheap robot to sort cucumbers. It turns out cucumber sorting is a really big problem in the cucumber farming space. And that was trained using just 3,000 or 4,000 photos of cucumbers, and that was sufficient to train a model to do a pretty good job at sorting cucumbers. And so I’m really interested in this kind of artisanal machine learning, where it’s like, what are these very specific daily problems that I have? And it’s a good way, I think, of wrapping my head around, okay, what are actually going to be the practical uses? Not necessarily the Cadillac uses that are being foreseen right now, which are the demonstration uses of the technology.
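
A few thousand labeled photos can go a long way if you start from a pretrained network and fine-tune only its final layer. Here’s a sketch of that pattern in PyTorch; the cucumbers/ directory layout (one subfolder per grade) and the training details are assumptions for illustration, not the actual farm’s setup.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical dataset: cucumbers/<grade>/*.jpg, a few thousand photos.
tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("cucumbers/", transform=tf)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False                      # keep the pretrained features
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new sorter head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
for epoch in range(5):
    for images, grades in loader:
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(images), grades)
        loss.backward()
        opt.step()
```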

Craig Cannon [31:10] – And then you can open up like, Tim’s general store online?

Tim Hwang [31:13] – Yeah, that’s right.

Craig Cannon [31:13] – And people like download like, Tim’s Cucumber App? Yeah, I mean, I cracked my iPhone earlier and was getting it fixed this morning, and the guy had an entire box of assorted iPhone screws from literally like an iPhone, you know, iPhone one to an iPhone seven now. And these are just like, he’s got like a side hustle, buying and selling iPhones that are broken online, and if they’re totally damaged, he just strips all the components, but he spent like half an hour trying to figure out what screw would fit. So there you go, you can like use Tim’s Screw Identifier. Like, it’s super handy stuff.

Tim Hwang [31:51] – Yeah, I think it will be just a lot of small things like that. And what’s particularly interesting, going back a little bit to what we were talking about earlier, is: what is the cost of solving a problem through machine learning, and what is the cost of solving a problem through traditional coding? That’s actually one way of thinking about the problem, right? For example, for computer vision, the economics are now way in favor of machine learning. It’s just way easier to design an effective image recognition system with ML than with traditional coding techniques. And I think that’s one really interesting way of thinking about it: for a given task, how long until machine learning is the preferred way of solving this problem with a computer?

Craig Cannon [32:31] – It totally makes sense, as like new kinds of entrepreneurs pop up in these very small niche things that are essentially like one-developer projects that previously like might have even seemed, like, way too laborious to spend your time engineering, like you’re never gonna pay someone to do it, you’re not gonna do it yourself, but you start plugging into like these cloud ML things and all of a sudden you have this app.

Tim Hwang [32:53] – Right, right.

Craig Cannon [32:54] – As far as distribution, I don’t know, I’ve heard more and more people talking about like localizing certain things to the device, which makes them amazing. Have you experimented with that yet?

Tim Hwang [33:05] – Yeah, so we’re actually working on a little bit of research right now. I haven’t played around with it myself. But for example, there’s a couple papers around what they call federated learning, which is working on exactly this premise. The bet is, okay, what happens in the future when the edges of our network, like the phones, have way more powerful processing? Is it possible for us to do the majority of the training for these systems on device, with a lot less data floating in the cloud? And the idea is basically that the local model would update, and it would share its learnings with all the other devices in the network. And it’s a really interesting way of thinking about how you actually do this, because what you ideally want is models that are loaded on the device and can also train on the device, right? Cause right now one of the ironies is that there’s a big disparity between training, which is computationally and data intensive, and actual execution, which can be pretty cheap computationally.
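
In its simplest form, the idea works like this: each device takes a few gradient steps on its own data, only the resulting weights travel to the server, and the server averages them into the next global model. A toy NumPy sketch of one federated-averaging loop on a linear model (the real papers add compression, privacy protections, and much larger models):

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Ten "devices", each with its own private data that never leaves it.
devices = []
for _ in range(10):
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

global_w = np.zeros(2)
for round_ in range(20):
    local_ws = []
    for X, y in devices:                  # this part runs on the device
        w = global_w.copy()
        for _ in range(10):               # a few local SGD steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_ws.append(w)                # only the weights are shared
    global_w = np.mean(local_ws, axis=0)  # the server just averages

print(global_w)  # -> close to [2, -1], with no raw data ever pooled
```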

Craig Cannon [34:04] – It also creates a giant latency problem, with everything that’s like, in big quotes, AI right now. Like, you know, most people, if you give them Siri, they’re like, oh, it’s constantly broken. But if you could communicate with it in a way that’s like, eh, you didn’t understand, let me go again immediately afterward, all of a sudden the experience is entirely different.

Tim Hwang [34:21] – Yeah, latency ends up being really key, not just for conversational interfaces. Think about, for example, how we deal with using this in medicine, right, where you may need a response really soon if you’re gonna use it for diagnosis or whatever.

Craig Cannon [34:35] – Totally, like if this thing turns into a robot surgeon arm and I move it to the Amazon, like I can’t rely on my like, you know, hotspot to connect it.

Tim Hwang [34:43] – That’s right, yeah, yeah. And so yeah, I think again, we’re talking about implementation, which ends up being like this really big piece of the AI picture which is still being worked out. Like we know we can get machines to do these remarkable things. The question is like, what do people actually want out of it?

Craig Cannon [34:57] – So I guess one of the last questions I have for you is, you know, people are interested in AI, machine learning across the board, or at least people paying attention to this are into it. If someone wants to get more into it, and they’re thinking about like, eh, how do I position myself, like what should I pay attention to, where should I focus, cause like, you know, now tens of thousands of people are checking it out, what would you say? What would you focus on?

Tim Hwang [35:22] – So I think there’s two really interesting problems in the space right now that desperately need more people to get involved and more people to organize events around. One of them is this security thing, right? In the traditional computer security space we’ve got events like capture the flag, where people can show their mettle and their ability to secure and compromise systems. I actually think we really need that in the machine learning space, and I’d be really excited to see it: imagine a game where you have to train a machine learning model on a set of data, and then people take turns trying to get past your computer vision system.
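
Scoring for a game like that might look something like the following sketch: the attacker earns points for inputs that flip the defender’s predictions while staying within a small perturbation budget, so the attack stays invisible to humans. All names here are hypothetical; no such standard event or harness is described in the conversation.

```python
import numpy as np

def score_attack(defender_predict, originals, perturbed, budget=0.05):
    """Fraction of attack inputs that fool the defender legally.

    `defender_predict` is any function mapping a batch of flattened
    inputs to labels; `budget` caps how far each pixel may move.
    Hypothetical scoring rule for an ML capture-the-flag contest.
    """
    flipped = defender_predict(originals) != defender_predict(perturbed)
    within = np.abs(perturbed - originals).max(axis=1) <= budget
    return float((flipped & within).mean())
```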

Craig Cannon [35:57] – Ah, cool.

Tim Hwang [35:58] – Which I think would be super cool to do. And I think that’s one big piece that would be really cool for people to work on. I think the second thing that’s about to be in really strong demand is thinking about the visual dimension of this, which happens on a couple of levels. That’s both the interface of how you work with machine learning systems, but also just visually how you represent a neural net. If you’ve read the technical papers, one of the things you’ll see is that they’re largely written by machine learning experts, and so they don’t really have a good sense of how to visually portray what a neural net is doing. And that stuff ends up being incredibly important for people to both understand the technology and be able to use it effectively. And so I think that’s another thing on the way: really high demand for people who understand this research and can give it good voice in terms of representing it visually.

Craig Cannon [36:49] – And then if someone isn’t into machine learning yet, what would you recommend they read, study, watch, what should they check out?

Tim Hwang [36:59] – So I mean, I think it’s really nice because we’re now living in a world where there’s a lot more resources for how to learn about machine learning. So I’m a huge fan of Ian Goodfellow’s textbook on deep learning. It was really funny, I was in Cambridge picking up a physical copy of this textbook, cause MIT Press is the publisher, and the guy selling me the book was like, this is the Harry Potter of technical guides, because it had been flying off the shelves so aggressively. It’s really good, though; its reputation is very well deserved.

Craig Cannon [37:25] – Okay.

Tim Hwang [37:26] – You know, one of the things I’ve been thinking a lot about is the history of all this. It’s important to recognize that AI has been through this hype cycle before, and there have been long AI winters where this technology totally oversold itself. And it’s important to understand those dynamics. So two books I’ll mention. One of them is John Markoff’s Machines of Loving Grace, which is all about the history of AI and particularly its competition with the notion of IA, right, intelligence augmentation, which I think is a really interesting battle that we’re having right now in terms of what this technology is really about and what it should be used for. A second book which is great, also from MIT Press, is Cybernetic Revolutionaries, which is about the socialist Allende government in Chile in the early 1970s. They tried to set up a project called Project Cybersyn, where they said, let’s automate the entire economy: all factories will have data links that connect to a single, central command center, where we will actively control the economy. And that’s another great example from the history of cybernetics, but also of what people tried to do back then, which I think is useful for making sure we understand what the limitations of the technology are today.

Craig Cannon [38:38] – That’s very neat, I haven’t read that. I will absolutely check it out. Cool, man, so if anyone wants to follow you online, where do they go?

Tim Hwang [38:44] – Oh, sure, my website is TimHwang, T-I-M-H-W-A-N-G, dot O-R-G (timhwang.org). I’m not the Korean pop star of the same name. And I’m also on Twitter @TimHwang, T-I-M-H-W-A-N-G.

Craig Cannon [38:58] – Very cool. Alright, thanks dude.

Tim Hwang [38:59] – Yeah, thanks for having me, Craig.

Craig Cannon [39:01] – Alright, thanks for listening. So please remember to subscribe and rate the show, wherever you find your podcasts. And if you want to watch videos from any of these episodes or read the transcript, you can check out blog.ycombinator.com. Alright, see you next time.
