Meet Edgar Duéñez-Guzmán, a research engineer on our Multi-Agent Research team who’s drawing on knowledge of game theory, computer science, and social evolution to get AI agents working better together.
What led you to working in computer science?
I’ve wanted to save the world ever since I can remember. That’s why I wanted to be a scientist. While I loved superhero stories, I realised scientists are the real superheroes. They are the ones who give us clean water, medicine, and an understanding of our place in the universe. As a child, I loved computers and I loved science. Growing up in Mexico, though, I didn’t feel like studying computer science was feasible. So, I decided to study maths, treating it as a solid foundation for computing, and I ended up doing my university thesis in game theory.
How did your studies impact your career?
As part of my PhD in computer science, I created biological simulations, and ended up falling in love with biology. Understanding evolution and how it shaped the Earth was exhilarating. Half of my dissertation was devoted to these biological simulations, and I went on to work in academia studying the evolution of social phenomena, like cooperation and altruism.
From there I started working in Search at Google, where I learned to deal with massive scales of computation. Years later, I put all three pieces together: game theory, evolution of social behaviours, and large-scale computation. Now I use those pieces to create artificially intelligent agents that can learn to cooperate amongst themselves, and with us.
What made you decide to apply to DeepMind over other companies?
It was the mid-2010s. I’d been keeping an eye on AI for over a decade and I knew of DeepMind and some of their successes. Then Google acquired it and I was very excited. I wanted in, but I was living in California and DeepMind was only hiring in London. So, I kept tracking the progress. As soon as an office opened in California, I was first in line. I was fortunate to be hired in the first cohort. Eventually, I moved to London to pursue research full time.
What surprised you most about working at DeepMind?
How ridiculously talented and friendly people are. Every single person I’ve talked to also has an exciting side outside of work. Professional musicians, artists, super-fit bikers, people who appeared in Hollywood movies, maths olympiad winners – you name it, we have it! And we’re all open and committed to making the world a better place.
How does your work help DeepMind make a positive impact?
At the core of my research is making intelligent agents that understand cooperation. Cooperation is the key to our success as a species. We can access the world’s information and connect with friends and family on the other side of the world because of cooperation. Our failure to address the catastrophic effects of climate change is a failure of cooperation, as we saw during COP26.
What’s the best thing about your job?
The flexibility to pursue the ideas that I think are most important. For example, I’d love to help use our technology for better understanding social problems, like discrimination. I pitched this idea to a group of researchers with expertise in psychology, ethics, fairness, neuroscience, and machine learning, and then created a research programme to study how discrimination might originate in stereotyping.
How would you describe the culture at DeepMind?
DeepMind is one of those places where freedom and potential go hand-in-hand. We have the opportunity to pursue ideas that we feel are important, and there’s a culture of open discourse. It’s not uncommon to infect others with your ideas and form a team around making them a reality.
Are you part of any groups at DeepMind? Or other activities?
I love getting involved in extracurriculars. I’m a facilitator of Allyship workshops at DeepMind, where we aim to empower participants to take action for positive change and encourage allyship in others, contributing to an inclusive and equitable workplace. I also love making research more accessible and talking with visiting students. I’ve created publicly available educational tutorials for explaining AI concepts to teenagers, which have been used in summer schools across the world.
How can AI maximise its positive impact?
For AI to have the most positive impact, its benefits need to be shared broadly, rather than kept by a tiny number of people. We should be designing systems that empower people and that democratise access to technology.
For example, when I worked on WaveNet, the new voice of the Google Assistant, it was cool to be working on a technology that is now used by billions of people in Google Search and Maps. That’s nice, but then we did something better. We started using this technology to give people with degenerative disorders, like ALS, their voices back. There are always opportunities to do good, we just have to take them.