120. Liam Fedus and Barrett Zoph - AI scaling with mixture of expert models

Released Wednesday, 20th April 2022

AI scaling has really taken off. Ever since GPT-3 came out, it’s become clear that one of the things we’ll need to do to move beyond narrow AI and towards more generally intelligent systems is to massively scale up the size of our models, the compute they consume, and the amount of data they’re trained on, all at the same time.

That’s led to a huge wave of highly scaled models that are incredibly expensive to train, largely because of their enormous compute budgets. But what if there were a more flexible way to scale AI, one that allowed us to decouple model size from compute budget, so that we could chart a more compute-efficient course to scale?

That’s the promise of so-called mixture of experts models, or MoEs. Unlike more traditional transformers, MoEs don’t update all of their parameters on every training pass. Instead, they route inputs intelligently to sub-models called experts, which can each specialize in different tasks. On a given training pass, only those experts have their parameters updated. The result is a sparse model, a more compute-efficient training process, and a new potential path to scale.
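To make the routing idea concrete, here is a minimal sketch of top-1 ("switch"-style) routing in plain Python/NumPy. The layer sizes, the single-matrix "experts", and all variable names are illustrative assumptions, not anything specified in the episode; a real MoE layer uses full feed-forward experts, a load-balancing loss, and batched dispatch across devices.

```python
# Minimal top-1 routing sketch (illustrative assumptions: toy sizes, one
# weight matrix per expert instead of a full FFN block).
import numpy as np

rng = np.random.default_rng(0)
d_model, num_experts, num_tokens = 16, 4, 8

# Router: a single linear layer that scores each token against each expert.
router_w = rng.normal(scale=0.1, size=(d_model, num_experts))
# Each "expert" here is just one dense matrix; in practice it is an FFN block.
expert_w = rng.normal(scale=0.1, size=(num_experts, d_model, d_model))

tokens = rng.normal(size=(num_tokens, d_model))

logits = tokens @ router_w                                  # (num_tokens, num_experts)
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
chosen = probs.argmax(axis=-1)                              # top-1 expert per token

out = np.zeros_like(tokens)
for e in range(num_experts):
    mask = chosen == e
    if mask.any():
        # Only the selected expert processes these tokens, so only its
        # parameters would receive gradients; the other experts stay untouched.
        out[mask] = (tokens[mask] @ expert_w[e]) * probs[mask, e:e+1]

print(out.shape)  # (num_tokens, d_model)
```

The point of the sketch is the sparsity: each token touches, and produces gradients for, only the one expert its router score selects, which is what lets parameter count grow without a matching growth in per-token compute.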

Google has been pushing the frontier of research on MoEs, and my two guests today in particular have been involved in pioneering work on that strategy (among many others!). Liam Fedus and Barrett Zoph are research scientists at Google Brain, and they joined me to talk about AI scaling, sparsity and the present and future of MoE models on this episode of the TDS podcast.

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:

  • 2:15 Guests’ backgrounds
  • 8:00 Understanding specialization
  • 13:45 Speculations for the future
  • 21:45 Switch transformer versus dense net
  • 27:30 More interpretable models
  • 33:30 Assumptions and biology
  • 39:15 Wrap-up