Canadian Information Processing Society (CIPS)
http://www.cips.ca

CIPS CONNECTIONS

INTERVIEWS by STEPHEN IBARAKI

Pedro Domingos, Globally Renowned, Top-Ranking Data Science and AI Researcher, Leads Team to Top AI Prize

This week, Stephen Ibaraki has an exclusive interview with Pedro Domingos.

Pedro Domingos is a professor of Computer Science at the University of Washington in Seattle. He is a winner of the SIGKDD Innovation Award, the highest honor in data science. He is a Fellow of the Association for the Advancement of Artificial Intelligence and has received a Fulbright Scholarship, a Sloan Fellowship, the National Science Foundation's CAREER Award, and numerous best paper awards.

He received his Ph.D. from the University of California at Irvine and is the author or co-author of over 200 technical publications. He has held visiting positions at Stanford, Carnegie Mellon, and MIT. He co-founded the International Machine Learning Society in 2001. His research spans a wide variety of topics in machine learning, artificial intelligence and data science, including scaling learning algorithms to big data, maximizing word of mouth in social networks, unifying logic and probability, and deep learning.

To listen to the interview, click on this MP3 file link

The latest blog on the interview can be found in the Canadian IT Pro Connection where you can provide your comments in an interactive dialogue.
http://www.canitpro.net

PARTIAL EXTRACTS AND QUOTES FROM THE EXTENSIVE DISCUSSIONS:

Interview Time Index (MM:SS) and Topic

  Let's start with your most recent roles and research interests. You and Ph.D. student Abe Friesen co-authored the paper that won the top prize in July at the 24th International Joint Conference on Artificial Intelligence, the world's largest AI conference. [The RDIS optimization approach, on average, performed tasks between 100,000 and 10 billion times more accurately than previous methods and has broad applications in all areas of science, engineering and business.] This latest achievement was followed by winning the KDD 2015 Test of Time Award and publishing a new book, "The Master Algorithm," and preceded by winning the KDD 2014 Innovation Award.

:01:02: You and Abe developed a new algorithm, often described as magical by others. Let's explore this work. Can you define optimization in the context of AI and the broad class of nonconvex optimization problems?
"....Optimization is really what a lot of science, engineering and business problems boil down to at the end of the day. It's the problem of finding the settings for the variables that give you the most of what you want....What our algorithm is basically doing is taking some ideas from AI and computer science and bringing them over to the problem of optimization with numeric variables. So you can do optimization with discrete variables, which is what people have traditionally done in AI, or you can do it with numeric variables which is what happens in most of engineering and increasingly in AI...."

:02:43: What are the roots of RDIS, the evolution of the research, and what is RDIS?
"....RDIS stands for Recursive Decomposition into Independent Subspaces and the idea is actually very intuitive, it's based on how we human beings solve problems. We break them up into smaller sub-problems and then we break them up again until the problems that remain are simple enough so that you can just solve them one at a time outright. Then you can combine the solutions back again and then find the solution for the whole problem. As widespread as this is in computer science, the thing that's amazing is that in continuous problems people don't do that....So our motivation behind this was really to bring some of those ideas from AI and computer science over into continuous optimization...."

:04:22: Now let's get to the magical part, what is the performance of RDIS?
"....The thing that is amazing about this is that when we do this we often end up getting exponential improvements in either the speed with which we can solve problems or the quality of the solution given the fixed time....The reason this happens is because we are doing this decomposition of the problems into small sub-problems (the individual problems are over a smaller number of variables), and as a result there is an exponentially smaller space to search. When you can do this (not always), you could potentially get really spectacular improvements...."

:05:40: Can you provide some specific examples where it can be applied and what this means to each domain of the application?
"....In principle this could be applied to any domain where continuous optimization is applied and the number of domains where that happens is really endless. Vision is one and another one is robotics....Protein folding....In business, for example, what is the best use to put your resources to or how much of different things to produce...In engineering for example, designing the shape of an airplane is a continuous optimization problem....Same thing for cars....Electronic circuits....Designing power plants, figuring out what is the best configuration of components to do different things...."

:07:31: I can see now applications in things like economics and finance, epidemiology, genomics and so on. Do you see applications in those areas?
"....Definitely so. In economics there's all these variables that you need to vary to get your best results. A very classic example in finance is you want to find the optimal sequence of trades that will maximize your profit, like the amount of money that you make at the end of the day....In epidemiology, one of the big things is to detect outbreaks early, but you can't have sensors everywhere because that is not feasible, so you might want to optimize exactly where you put them so that the cost is minimal, but you get the earliest detection possible...."

:08:53: How do you see the work evolving and what are your specific next steps?
"....One of the things we want to do is explore more applications of this new algorithm. We are still finding out what it's good for and what it's not, but the other thing that we want to do is improve the algorithm. I think part of why we won this award is that this is not necessarily just one algorithm, it's potentially a whole new direction in optimization, and whole new directions in optimization don't come up every day....One of the things we want to do is come up with better methods for figuring out how to break the problem into sub-problems...Another example of something that we want to deal with is right now we are only dealing with the problems in the variables that they're originally described in. For example, like the positions of the different amino acids in the proteins, I think that you can probably solve the problems a lot better if you transform those variables into a new set of variables....I have a feeling that if you combine ideas like that with this algorithm you will find much better ways to split the problem in terms of the right variables and as a result do much better..."

:10:55: There are a lot of interdisciplinary thoughts in this. Are you interfacing with many of the other departments at your university?
"....Definitely. One of the fascinating things about optimization is that it cuts across so many disciplines...."

:11:32: You talked about some areas where perhaps it's not so good. What are some of its limitations?
"....This algorithm is not a silver bullet; it's not going to solve every problem. It's a very good solution when the problem has this characteristic that it can be divided into smaller sub-problems...."

  Pedro, your latest achievement was followed by winning the KDD 2015 Test of Time Award and publishing a new book, "The Master Algorithm," and preceded by winning the KDD 2014 Innovation Award. Let's drill into these in a little more detail.

:12:57: What led to winning the Test of Time Award?
"....This was a paper that we wrote 15 years ago that was one of the first papers to do what is now called data stream mining. The idea in data stream mining is that these days what you have is not a database that you go and do data mining projects on and six months later you deliver the results and the customer uses them. These days what almost every organization has is a continuous stream of data that is coming to them day by day, second by second....What we did in that paper was we took one of the classic machine learning algorithms, which is decision tree induction, and then produced an online data stream version of that algorithm that has this property of learning the model as it goes along and we were also able to prove - I think that's part of why the work had a lot of impact - that we could guarantee that our algorithm learned a model that was statistically indistinguishable from the model that it would have learned if it was applied in batch mode to the whole infinite dataset into the future with infinite resources. These days everybody is doing data stream mining, but at the time it was kind of a very speculative idea, but it's totally happened now, which is why we got this Test of Time Award...."

:15:28: What made you get into this area of research and thinking that you could produce these kinds of results?
"....The reason I do machine learning is that the impact you can have with machine learning is amazing. You could work on robotics and maybe produce better robots and that's great. You could work on protein folding and produce better drugs. You could work on a lot of these different things that people in research work on, but the thing about machine learning is if you come up with a new, better machine learning algorithm you will potentially have impact not just in one of these areas but across a large number of them. In terms of bang for the buck for your research, I think machine learning is hard to beat. The other thing is that it's a lot of fun, you get to play in everybody's backyard. Today you collaborate with biologists, tomorrow you collaborate with economists or with engineers so you never have a dull moment...."

:18:02: How about the KDD Innovation Award – what contributed to this outstanding honor?
"....The KDD Innovation Award is not really an award for one particular piece of work. It's like the Nobel Prize; it's an award for some outstanding body of work that you have done over time. It could be just one thing that you did that had a lot of impact, but more often it's for a series of things....Data stream mining was one of them, another one was the work that I've done that has also had a lot of impact on how to do data mining for 'word-of-mouth' marketing, for marketing in social networks....I've also done a lot of other work on scaling things up. Most recently I have worked on things like unifying logic and probability so we can have models that can handle a lot of uncertainty and complexity...."

:19:56: You've come out with this remarkable book and have some really interesting ideas in there. Can you talk about your new book and some of the key points that the audience needs to pay attention to?
"....The book is called 'The Master Algorithm How the Quest for the Ultimate Learning Machine Will Remake Our World'. It's not a technical book. There's no equations there (well there's a couple), there's no pseudo-code; it's really trying to explain the deep ideas behind machine learning to a general book-reading audience....The reason I decided to write this book is that I realized there's a really urgent need for one because machine learning is not a small obscure field anymore that a few scientists worry about, it's something that touches everybody's life every day....Part of my goal was to demystify machine learning and give people a conceptual model of how learning works...."

:23:28: Everybody has to read this book. As you indicated machine learning impacts every part of our lives, credit cards, fraud detection, email spam, you name it, it's there everywhere.
"....Actually credit cards are a good example. In fact a bank these days uses machine learning basically at every step of what it does. It uses it to choose who to send offers to, to validate the offers, it uses it as you said for fraud detection when you get those messages saying 'strange use of your credit card'; that was a machine learning algorithm. It uses it in its investments to trade things, it uses it to decide which customers maybe who are about to leave so that it can make them a better offer...."

:24:40: What will computers and robots look like by 2020?
"....There's a lot of robots in factories and then there are very simple ones like the Roomba that can vacuum a floor, but I think we are reaching the point where robots can really take off because the computing power is there, the sensors are there, the components are inexpensive enough that this actually becomes feasible. Also AI - because the crucial question is building the brain of the robot - has progressed to the point where we can actually start doing these things, but I think a lot of the robots that we are going to see are going to be very different from what people imagine. A self-driving car is a robot, it's just a robot in the shape of a car. Other robots are going to be very specifically designed for the particular thing that they do in the world so there's going to be a Cambrian explosion of different kinds of shapes and sizes and types of robots. It's going to be very exciting. The same with computers, I think we are going to see increasingly powerful computers on the one end. At the other end of the spectrum we are also going to see tinier and tinier computers embedded in everything until the world that we live in is going to have intelligence embedded in it everyhere...."

:28:41: Are you saying AI, robots and machine learning or deep learning and the impact it's going to have could be a major disruptor and pivotal point in our history?
"....This gets back to when you asked me why do you work on machine learning? It's that if you care about all these global problems, machine learning and all these technologies are all going to be a big part of solving them. In a way we are limited by our ingenuity, but I think we have so many tools at our disposal today that it behooves us to use them to solve all these problems, and I think we are going to see a lot of progress on them in the next 10 to 20 years. Of course not all of it is going to come from technology, but I think technology is a big part of it...."

:29:35: Can you talk further about recent advances in knowledge discovery and data mining?
"....Knowledge discovery and data mining didn't exist as a field or barely existed 20 years ago, but now it's this huge sprawling field that reaches everywhere. Even I, an old-timer, can't really keep up with everything that goes on there anymore, but I can tell you what some of the main things are. One of the main things is that data streams have become one of the big things that people do, mining data continuously. Another very big one which again basically did not exist 20 years ago is mining networks, learning about networks (often social networks, but could also be other kinds of networks)....Another area is that the world is full of unstructured data. We used to only mine databases of records, but these days we mine text, audio, video, and increasingly we mine combinations of them...."

:32:58: Some people say what separates us from other kinds of primates or Neanderthals or Denisovans is that we have this sort of critical mass of people who speak and communicate and share knowledge and that creates social learning and in essence improves our ability to evolve and to do disruptive innovation. Can you comment?
"....I agree. What gives us this great advantage over other animals is the fact that we are social and we learn from others. We used to learn from a hundred others and now we learn from 7 billion others. Just picture how fast the progress can be when you have a 7 billion-wide social network learning to do things better as opposed to a network of basically other members of your tribe. It's pretty exciting...."

:34:27: At the heart of this is your Master Algorithm?
"....I didn't mention this before, but the Master Algorithm is this idea that one algorithm can learn all the knowledge that there is to learn by being applied to the appropriate data, and if that algorithm exists (it's a hypothesis which I can't prove but I believe it does exist), then first of all, our job as machine learning researchers is to discover it and second of all, imagine what that algorithm can do once it can use all the data that all the people in the world produce...."

:35:11: Have you ever talked to Tom Mitchell at Carnegie Mellon about his Never-Ending Language Learner (NELL)?
"....I would describe NELL as Tom Mitchell's approach to the Master Algorithm....I mentioned there are five schools of thought in machine learning and one of them is the Symbolists and Tom Mitchell is very much one of the leaders of the symbolist school of thought where you try to learn using symbolic representations...."

:36:21: You just mentioned one area of machine learning and you talked about others. Can you talk about these areas in a little bit more detail?
"....The idea of the symbolists comes from Newell and Simon's physical symbol system hypothesis (PSSH) which is the idea of abstraction. Computing and in particular learning can happen just by manipulating symbols in the same way that mathematicians or logicians manipulate symbols when they make deductions or when they make proofs....Another one is the Connectionists. Connectionists are people whose approach to machine learning is to reverse engineer the brain. They say the best living algorithm is the one inside your skull so let's see what goes on in there and see if we can reproduce it on the computer....The Evolutionaries are the people who say the best living algorithm in the world is not your brain, it's evolution because evolution produced your brain and the rest of you as well (and other living creatures) so it's pretty amazing. We have a good sense of how evolution works with genes and natural selection so let's try to simulate that on a computer except we are going to direct evolution to produce the things that we want as opposed to just letting it happen randomly....The Bayesians are a school of thought that comes from statistics. It has a long history in statistics. The key problem for Bayesians is uncertainty, that your knowledge is uncertain and you need ways to reason with that uncertainty and the way to do that is with probability....The Analogizers are a looser grouping of people who do learning based on the notion of similarity, of making an analogy between situations that you already know and the situations that you would like to understand....There are more ideas but these are the five major ones and in the book I have one chapter about each one of them...."

:39:49: Can you overview your work with the International Machine Learning Society?
"....The main job of the International Machine Learning Society is to organize the International Conference on Machine Learning (ICML) which is the top or one of the two top conferences on machine learning in the world. What we do is we select sites for the conference, invite candidates, and also select or find the General Chair for the conference - the General Chair is the person who is in charge of the conference as a whole - and the Program Chairs who are the people who will then recruit the program committee and then eventually review the papers and choose which papers are going to be in the conference, choose invited speakers, etc...."

:40:36: Have you ever had people like Geoffrey Hinton, Yann LeCun and Yoshua Bengio at the conference?
"....Those three people that you mentioned are probably the three most prominent connectionists. The main connectionist's conference is another conference called NIPS (Neural Information Processing Systems) and the history goes back to the different tribes of machine learning. ICML sort of has its roots in the symbolist's school of machine learning which is more connected to classic symbolic AI. NIPS has more of its roots in a whole new neural network movement that took off in the 80s....The good news is that 20 years ago when I started going to these conferences there were very few people who actually went to both. These days a lot of people go to both ICML and NIPS and there's a good interface between the two communities...."

:41:40: What are the big questions in machine learning?
"....One of the big questions is, 'Is there such a thing as a universal learner?'....If there is such an algorithm the next question is, 'What is it going to look like?'....There are similar questions in all the schools of thought, but I think at the end of the day (and this is very much the argument that I make in the book and I make to the community), each of these tribes is solving a real problem and they have some very brilliant solutions to them, but in the end to solve the machine learning problem it's not enough to solve one of these problems, you have to solve all of them in the same algorithm...The biggest question for me is how we combine these pieces into what you might call the grand unified theory of machine learning; in the same way that there are grand unified theories in physics like the standard model or in biology like the central dogma, I think we should be looking for one in machine learning and in AI...."

:44:36: Perhaps you will be able to do that?
"....Actually I would love to do that, but part of the reason I wrote this book is I think my chances as an individual or my group of doing that are very small. I also think one, we need more people in this field and two, we need new ideas. I think part of the problem is that as clever as the ideas are that people in these different areas have come up with already, my feeling is that there are some really crucial ideas still missing and I would like to see new people come into this field with a fresh mind, maybe from other areas. People who will think of things that we haven't thought of. My guess is that it's going to take one of those people to discover this universal learner...."

:45:26: Have you ever looked at the work of Judea Pearl, especially his last piece of work where he has created this mathematical model for causal relationships or causality?
"....Judea Pearl is a shining example of an AI researcher, in fact he's probably the most prominent of the Bayesians. Judea Pearl created and developed Bayesian networks and algorithms for them and this caused a revolution in AI and I think a lot of the growth in the Bayesian school came from him....."

:46:33: What are big questions in artificial intelligence overall?
"....In artificial intelligence the problem really boils down to the following thing: on the one hand we need powerful representations that we can encode knowledge in - if we want to build a really intelligent system as opposed to a system that does a very specialized thing we need very powerful representations. At the same time when the representation is powerful it also becomes intractable. Computing with it, doing inference with it becomes unbearably expensive so AI really boils down to this problem of how do you find representations that are expressive enough for what you want to do with them like controlling the robots or whatever or building a knowledge base on the web, but at the same time not be so expressive that they become intractable...."

:48:47: We covered some of this already but what are some of the big questions in data science?
"....I guess the first question is what is it exactly? Data science is a very new term which has become a very popular one. My view is that data science is actually something that encompasses both machine learning and statistics and high performance and distributed computing and human-computer interaction. So I think data science is the combination of all the different things that it takes to extract knowledge from data. It's interesting as it's a very broad field because it touches on many different things all the way from the hardware to the human who is trying to understand what the learning algorithm is doing...."

:50:46: What are the big questions in scaling learning algorithms to big data?
"....One of them is how to parallelize learning algorithms....How to learn on data streams that we already talked about....Another one is because we are dealing with networks as opposed to isolated examples and this creates a huge scaling problem....I think at the end of the day the most interesting scaling up problem is what algorithms can we design for this world of large scale data that are actually different from the ones that we had before?...."

:53:04: What are the major questions in maximizing word of mouth in social networks?
"....In reality the way you work in social networks is you do something now, you see what the results are and then you do something as a result of what has happened and so on. So dealing with networks over time is a big problem. Another is identifying the network. All of this will not work well if you don't know what the network is, so how do you figure who is linked to whom and how? Often what you have is indirect results of people's connections, so another problem is how can you infer people's connections from the behavior of the networks...."

:54:56: What are the big questions in unifying logic and probability?
"....That's one of the areas that I've worked most intensively in and it's I think one of the key parts of coming up with this master algorithm. It has to do with this issue that we talked about earlier, this trade-off between this expressiveness and tractability....The good news is that in the last decade we've made spectacular progress on this problem and I would even say at this point we have mostly figured out how we can unify logic and probability in a single representation. For example there is one called Markov logic networks....I have all this data, I have this very powerful representation but just doing it so that I can do very myopic research is probably not going to get me there in terms of constructing the formula....so what else can we do? To me this is the single biggest and most important question...."

:58:56: Again in a broader sense, what are the big questions in deep learning?
"....The central question in deep learning is how can you discover a representation of the world in the internal layers of your network? This is a great problem but it is still far from solved, so what is preventing this from happening?....Another problem is scaling up, you take advantage of a lot of data and all these things have to run fast enough that you can stream the data through and basically learn your model as the data streams through. Another question is how do you incorporate more of these symbolic types of learning and inference into a deep learning network? Again, if you believe in modeling the brain you know that the brain understands language and the brain can reason and plan, but today's deep networks can't do that yet. So one of the frontiers is how do you make them do that?...."

:01:00:55: We've already talked about some of the applications of this, can you perhaps further elaborate for example on business, government, media, education and society?
"....Deep learning folks and machine learning folks in general are very ambitious. I don't think there is ever a problem that they look at that they don't think that they could ever apply machine learning to. What you are already seeing that you are going to see more of in the future are businesses where there is learning in every nook and cranny of the business. When you look at companies like Google and Amazon that is already the case. They use machine learning pretty much everywhere....What is true of business is also true of government, is also true of healthcare and is also true of education...."

:01:02:23: Can you describe your most significant and influential research achievements and the practical outcomes seen today and forecasted into the future? I know you touched on some of them; are there others you wish to outline?
"....Some of the things that I looked at are....I was looking to unify logic and probability as a step towards developing a universal learner. Another one has been the scaling up of machine learning algorithms to big data, not just to large databases, but to data streams. Another one is this whole area of modeling networks so one application of that is modeling word-of-mouth, but there are many others....Something that I have been working a lot on that I haven't mentioned yet is we have our own approach to deep learning in my group and others. It's called Sum-Product Networks. It has the same characteristics as the other deep learning systems - it has many internal layers where you can develop your own representation of the world - but the inference is always tractable...."

:01:04:50: You've done so much research that has lasting impact and there are broad applications and implications of your work throughout every domain out there. Can you summarize some of the lessons you've learned in doing this research that perhaps can help other researchers that are out there?
"....One lesson that I've learned is when you do research you should always swing for the fences. If you are an academic researcher like me you want to go for the game changers. If you can, you want to start in a whole new direction or start a whole new way of doing things....You should not be afraid to fail. You will fail most of the time but there is nothing wrong with that....Embrace the uncertainty and realize that research is a search, as the name implies. Most of the places where you look for the treasure you will find nothing, but it doesn't matter because eventually you will find a treasure because in research there isn't just one treasure in one place, there are treasures buried all over the place....Be ambitious. Don't be afraid to do something that seems too far-reaching because often you will be surprised....Computer science is an applied field so even if you do fundamental research, one of the things you want to do is to be always talking with people from lots of different areas, industry, biology, government, and from different sectors of industry. You always want to be looking at everything...."

:01:08:47: Can you share any valuable lessons from your many awards and recognitions?
"....When you do your work, your research, you actually shouldn't think about winning awards, you should think about doing the best that you can. If you do the best that you can, if you are ambitious, if you swing for the fences, if you look for what are the big problems that everybody has, then you will end up winning awards almost as a side effect of having done that....If you always try to be on the frontier, really pushing things forward and really trying to do things that will have an impact, then with your ingenuity and your hard work, these things will happen...."

:01:10:11: Pedro, what are the greater burning challenges and research problems for today's youth to solve to inspire them to go into computing?
"....I think one of the best things that a new person coming in to the field can do is to have a fresh perspective. This is one of the reasons that I wrote the book. I would like to help that happen. What you want to do when you come into a field like computing, you want to see what the problems are, you should also bring your own motivations....If somebody comes to computer science or machine learning from another field with ideas that haven't been applied here before, a revolution can happen....Another thing that you can do as a new person coming into computing is bridge areas....You should be very willing to learn but at the same time you should be very skeptical. These things in a way are contradictory but I think really essential to being an innovator...."

:01:15:39: Past, present, and future, can you name someone who inspires you, and why is this so?
"....Judea Pearl is a great inspiration. When he started doing what he was doing in AI nobody believed in it and then it just swept the field....Tom Mitchell is another great machine learning researcher....Geoff Hinton. He started out as a psychologist but he is now more of a computer scientist...."

:01:16:45: How has the ACM and its resources supported your research? Which ACM assets (SIGS, conferences, publications, digital library, peer network...) are the most valuable to you?
"....ACM supports my research every day. I'm a member of SIGKDD (the ACM Special Interest Group that deals with data science)....SIGKDD newsletter....The publications are the single biggest thing that the ACM does. So many of the papers that I read and build on came from ACM conferences, not just SIGKDD but SIGMOD and you name it....The CACM. Reading the Communications of the ACM is how I keep up with what's happening in computer science outside of AI. And then there's the whole networking aspect. The ACM is absolutely crucial to what I do...."

:01:18:32: What surprises you?
"....I'm continuously being surprised actually, and I think if you want to be a researcher and innovator, never losing the ability to be surprised is very important. One of the things that I'm continuously surprised about is things in everyday life. I think when you are an AI researcher you have an amazing respect for the brilliance of the human intellect. When I look at any scene it's a deep mystery how I'm actually able to look at that and see the objects that are there and figure out how to pick them up, and all these different things that human beings are able to do....So everyday life is full of surprises if your goal is to understand how intelligence and learning work and how to implement them on a computer...."

:01:21:04: What will you do next?
"....My next three months will be largely taken up with promoting the book, so there will be a lot of things to do, pieces to write, media, talks to give and so on and so forth. I'm also continuing to do research, so I have several very exciting problems that my students and I are working on, some of them related to deep learning and some of them to unifying logic and probability....I think in research and a lot of other things it pays to be opportunistic. You should always have your eyes open for the opportunities that come up, and if a new, better thing to do appears, then run with it...."

:01:22:09: From your extensive speaking, travels, and work, are there any stories you can share (perhaps something amusing, surprising, unexpected or amazing)?
"....There was a point in about 1994–1995 where people who were getting PhDs in machine learning were not getting a lot of job offers, and then suddenly everything changed and multiple companies were trying to hire people. The amazing thing is that ever since then things have never stopped growing. Every time I think machine learning has reached a plateau, something else comes along...."

:01:23:50: Describe the types of research being created or updated that will drive our experiences in five or ten years. What will this experience be like? Can you paint a picture for our audience?
"....I believe in Don Norman's idea of The Invisible Computer: the most mature technologies are the ones you don't notice anymore, but in a way that's when they are having the most impact. So I think computing is going to become even more pervasive, but it's also going to become less visible, and in some sense our ultimate victory will be when people no longer think about computing...."

:01:27:24: You continue to make significant historical contributions. How will your growing status contribute to your vision for the world, society, industry, academia, governments and technology?
"....When I was about 15 years old I decided, very naively, that I was going to learn everything that there was to know. Very soon I realized that this was a completely unfeasible goal, and I settled on the next best thing, which is to learn as much as I can and learn the things that will make the most difference to the world....One of the things that I have enjoyed over the last couple of decades of my life, and hopefully will over the next one as well, is that as I interact with more people, and more varied people, I can get a better picture of how these things all connect. Being in machine learning, if I can't learn everything there is to know, then the next best thing is to work on machine learning and develop algorithms that can help synthesize knowledge from very different areas...."

:01:29:02: You choose the topic area. What do you see as the three top challenges facing us today and how do you propose they be solved?
"....One thing that's happening in the world today that is very worrisome is that we have abused our antibiotics, and there are superbugs that are growing and we have no antibiotics that will work against them....Another example (and this might not be the thing that's most prominent on a lot of people's minds) is things like nuclear proliferation....There are many others like this. You were talking about the UN Millennium Development Goals; I think it's very heartening that so much progress was made on many of them, but there's also a lot still left to do...."

:01:31:45: You reminded me of another area: precision genetic medicine and the CRISPR/Cas9 technique, which is really inexpensive, readily available, and in essence allows us to modify our history. What's going to happen, given there aren't a lot of policies around it yet?
"....This is a great example of how new learning and new technology creates both enormous opportunities and enormous dangers. The ability to manipulate the genome has almost unlimited potential to do good. You could prevent a lot of the disease that happens today, you could create better crops, but on the other hand the potential for harm is also unlimited...."

:01:33:34: If you were conducting this interview, what questions would you ask, and then what would be your answers?
"....This interview was so comprehensive you've probably asked all the questions that I would have asked and many more besides. So I don't know if I have a good answer to that question and I hope my answers to the previous questions were good enough that they make up for a lack of one to this one...."

:01:34:06: Pedro, with your demanding schedule, we are indeed fortunate to have you come in to do this interview. Thank you for sharing your substantial wisdom with our audience.