Thursday, November 21 • 10:55am - 11:30am
Improving Performance of Deep Learning Workloads With Volcano - Ti Zhou, Baidu Inc


Baidu has internally improved the performance of its large-scale deep learning workloads by adopting the Volcano project. Volcano's CRD-based computing resource model allows resources to be used more efficiently and computing models to be configured more flexibly. Volcano provides a unified abstraction over underlying capabilities such as gang scheduling, fair share, priority queues, and job suspend/resume, filling functional gaps in the native Job-based training operator.
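To illustrate the CRD-based model described above, the following is a minimal sketch of a Volcano Job custom resource, written here as a Python dict. The field names follow Volcano's `batch.volcano.sh/v1alpha1` API; the job name, queue name, image, and replica counts are illustrative assumptions, not Baidu's actual configuration:

```python
# Hypothetical sketch of a Volcano Job custom resource (batch.volcano.sh/v1alpha1).
# Queue name, image, and replica counts are assumed for illustration.
volcano_job = {
    "apiVersion": "batch.volcano.sh/v1alpha1",
    "kind": "Job",
    "metadata": {"name": "paddle-training"},
    "spec": {
        "minAvailable": 3,            # gang scheduling: start only when 3 pods can run
        "schedulerName": "volcano",   # hand pods to the Volcano scheduler
        "queue": "training",          # fair-share / priority queue (assumed name)
        "tasks": [
            {
                "name": "worker",
                "replicas": 3,
                "template": {
                    "spec": {
                        "containers": [
                            {"name": "worker", "image": "paddlepaddle/paddle:latest"}
                        ],
                        "restartPolicy": "Never",
                    }
                },
            }
        ],
    },
}

# Sanity check: minAvailable must not exceed the total replica count,
# or the gang can never be satisfied.
total_replicas = sum(t["replicas"] for t in volcano_job["spec"]["tasks"])
assert volcano_job["spec"]["minAvailable"] <= total_replicas
```

The `minAvailable` field is what distinguishes gang scheduling from the native Job behavior: pods are created only when the whole group can be placed, avoiding partial allocations that waste GPU resources.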

After adopting Volcano, Baidu's internal resource utilization increased by 15%, and training task completion speed increased by 10%. This talk will cover the overall functionality of Volcano, the transformation of the legacy operator to support Volcano, and a comparison of deep learning training performance before and after adopting Volcano.


Ti Zhou

Senior Architect, Baidu
Ti Zhou, a Kubernetes member and LF AI & Data TAC member, currently serves as a senior architect at Baidu Inc, focusing on the PaddlePaddle Deep Learning Framework and the Baidu Cloud Container Engine, helping developers deploy cloud-native machine learning on private and public clouds.

Thursday November 21, 2019 10:55am - 11:30am PST
Room 1AB - San Diego Convention Center Upper Level
  Machine Learning + Data