Chaoyang He is Co-founder and CTO of FedML, Inc., a startup building a community around open and collaborative AI from anywhere at any scale. He received his Ph.D. in Computer Science from the University of Southern California, Los Angeles, USA, advised by Professor Salman Avestimehr (USC ECE & CS), Professor Mahdi Soltanolkotabi (USC ECE & CS), Professor Murali Annavaram (USC ECE & CS), and Professor Tong Zhang (HKUST). He also works closely with researchers and engineers at Google, Facebook, Amazon, and Tencent. Previously, he was an R&D Team Manager and Principal Software Engineer at Tencent (2014-2018), a Team Leader and Senior Software Engineer at Baidu (2012-2014), and a Software Engineer at Huawei (2011-2012). His research focuses on distributed/federated machine learning algorithms, systems, and applications.
Chaoyang He has received a number of awards in academia and industry, including the Amazon ML Fellowship (2021-2022), Qualcomm Innovation Fellowship (2021-2022), Tencent Outstanding Staff Award (2015-2016), WeChat Special Award for Innovation (2016), Baidu LBS Group Star Award (2013), and Huawei Golden Network Award (2012). During his Ph.D. studies, he published papers at ICML, NeurIPS, CVPR, ICLR, AAAI, and MLSys, among others. Beyond pure research, he also has R&D experience with Internet products and businesses such as Tencent Cloud, Tencent WeChat Automotive / AI in Car, Tencent Games, Tencent Maps, Baidu Maps, and Huawei Smartphone, including three years of R&D team management experience at Tencent (2016-2018).
Responsible and Trustworthy Data Economy: blockchain-empowered machine learning and data analytics: verification, proof of contribution, privacy protection, and robustness to malicious/cheating users.
Trustworthy Federated Learning: achieving high model performance on decentralized data under constraints of security, privacy, label deficiency, and system resources in a lifelong manner, via practical and operational system and ML co-design.
ML and System Co-design: edge ML engine for training and inference; device-edge-cloud collaborative machine learning; MLOps with strong observability and monitoring capabilities to mitigate issues from data drift over time, skewed data distributions between training and deployment, and system heterogeneity; training efficiency on resource-constrained devices: reducing memory/energy/computation/communication costs for training large models on resource-constrained commodity devices; systems that decouple storage and computing resources for model training; serverless computing for cloud-based distributed training and multi-tenant training systems; multi-cloud machine learning and data analytics.
Machine Learning Applications with Strong Demand in Real World: computer vision, natural language processing, graph learning, recommendation systems, time-series forecasting
Systems: distributed/cloud/edge systems, mobile systems, open source library, product design
04/01/2022: quantitative summary of my Ph.D. (as of March 2022): (1) Publications: 30, h-index 12, citations 2153, US Patents 1; (2) Professional Reviews: 97 (52 reviews for conferences, 28 for workshops, and 17 for journals); (3) Open Source Slack Users: 988; (4) Invited Talks: > 5 (Facebook, Amazon, Stanford, USC ISI, Sony, etc.); (5) Funding raised with my advisors: several grants and a startup (> $3M USD)
03/30/2022: I received a very moving graduation gift: a Tang Dynasty (618-907) poem and a plum painting by a Qing Dynasty artist, which my labmate had just bought at an American auction. The mood of the poem closely matches my first two years after coming to the United States: lonely, away from the crowd, quietly accumulating. It is also the mood one should hold in the deepest parts of scientific research. Here is the translation.
03/25/2022: I passed my Ph.D. defense. See my acknowledgments here. The thesis manuscript is maintained here.
11/21/2021: Our FL4NLP Workshop proposal (Federated Learning for Natural Language Processing) has been accepted to ACL 2022. We welcome submissions of your excellent work! Check this PDF for more details (topics, speakers, organizers, etc.). You can read our FedNLP paper to find some interesting topics. We will announce next steps soon.