Dr. Xulong Tang (Assistant Professor, Computer Science) has received a new three-year grant, announced last week by the National Science Foundation (NSF). Tang's research focuses on deep neural networks and how to train such networks in less time and with fewer computing resources.
"This is really a timely award that allows my group to continue pursuing cutting edge problems in the field. I also want to appreciate the collaborators, the support from our department, and the effort from talented students in my group," he said.
Abstract: Deep Neural Networks (DNNs) have become one of the most popular machine-learning techniques for solving real-world problems in object classification, autonomous vehicles, natural language processing, and other domains. Due to ever-growing problem sizes and complexity, the training and inference of DNN models are increasingly time-consuming and require enormous computing resources. As such, multi-GPU infrastructure is a desirable platform that has been widely used in modern DNN tasks. However, the delivered scalability of DNN execution is severely limited by a lack of architectural awareness and easy-to-use runtime support. This research uncovers and addresses the architectural bottlenecks of DNN execution. The outcome of this research is expected to enable scalable DNN execution on multi-GPU infrastructure.