Dr. Qiqing Christine Ouyang is a Distinguished Engineer for IBM Quantum Computing Technical Partnership and Systems Strategy. She is an IBM Master Inventor and a member of the IBM Academy of Technology.
Dr. Ouyang is a thought leader in collaborative innovation for emerging technologies. She is currently building the IBM Q Network, a collaboration of industrial, academic, and government organisations worldwide with the mission to advance quantum computing and launch the first waves of commercial applications. Prior to her current role, Dr. Ouyang held various technical executive positions at IBM, where she developed the Hybrid Cloud Reference Architecture for Analytics and built long-term, strategic partnerships with clients. Dr. Ouyang holds 120+ patents and has 100+ scientific publications in major journals and conferences. She started her career at the IBM TJ Watson Research Center as a Research Staff Member. Her deep technical roots are in solid-state physics and nanotechnology.
Dr. Ouyang received two B.S. degrees (double major) with highest honours, one in Electrical Engineering and the other in Economics and Management, from Tsinghua University, Beijing, China; an M.S. in Electrical Engineering from the University of Notre Dame; and a PhD in Electrical Engineering from the University of Texas at Austin.
Keynote:
From Sci-fi to Reality: Quantum Computing is Here
Abstract:
Classical computers have been and will continue to be a driving problem-solving force, but many of the world’s biggest mysteries and potentially greatest opportunities remain beyond their grasp. Here at IBM, we believe that quantum computing will augment classical computing, potentially opening doors that we once thought would remain locked indefinitely.
Quantum computers are incredibly powerful machines that offer a novel approach to processing information. Built on the principles of quantum mechanics, they exploit complex and fascinating laws of nature that are always there, but usually remain hidden from view. By harnessing such natural behaviour, quantum computers can run new types of algorithms to process information more holistically.
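As a minimal illustration of the quantum behaviour described above, the sketch below prepares a two-qubit entangled Bell state, a correlation no pair of classical bits can express. It assumes IBM’s open-source Qiskit SDK, which the abstract itself does not name.

```python
# A minimal sketch, assuming IBM's open-source Qiskit SDK (not named in the abstract):
# prepare a two-qubit Bell state, the simplest example of entanglement.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # Hadamard gate: qubit 0 enters an equal superposition of |0> and |1>
qc.cx(0, 1)  # controlled-NOT: entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state)  # all amplitude sits on |00> and |11>: measuring one qubit fixes the other
```

Measuring either qubit immediately determines the other, the kind of correlation that quantum algorithms harness and that has no classical analogue.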
While quantum computing is still in its infancy, rapid progress is driving scientific advancements in realizing quantum’s potential in areas such as chemistry, finance, machine learning and optimization. Christine will discuss this radically different approach to computing and how the roadmap for mainstream adoption of this new technology is being forged today by IBM in collaboration with a global community across business, academia and research.
Bio:
Professor Yutong Lu is the Director of the National Supercomputing Center in Guangzhou, China. She is a professor in the School of Computer Science, Sun Yat-sen University, and a member of the HPC special expert committee of the Chinese national key R&D plan. She received her B.S., M.S., and PhD degrees from the National University of Defense Technology (NUDT). Her extensive research and development experience has spanned several generations of domestic supercomputers in China, and she is the deputy chief designer of the Tianhe project. She won the first-class award and the outstanding award of the Chinese national science and technology progress awards in 2009 and 2014, respectively. She is currently leading several innovation projects on HPC and big data supported by MOST, NSFC and Guangdong Province. Her continuing research interests include parallel operating systems (OS), high-speed communication, large-scale file systems and data management, and advanced HPC/BD/AI convergent application environments.
Keynote:
A Capable Platform for Next-Generation Supercomputing
Abstract:
Supercomputing technology has been developing very fast and has impacted science and society deeply and broadly. Computing-driven and big-data-driven scientific discovery has become a necessary research approach in the global environment, life science, nano-materials, high-energy physics and other fields. Furthermore, rapidly increasing computing requirements from economic and social development also call for the power of exascale systems. Nowadays, the development of computing science, data science and intelligent science has brought new changes and challenges to the systems, technologies and applications of HPC. Usage and delivery modes based on cloud computing also attract supercomputer users. Future exascale system design faces many challenges, in architecture, system software, the application environment and more. This talk will analyse the usage mode of the current Supercomputing Center and then discuss the design and application environment of future supercomputing systems.
Deng Yuefan is a Professor of Applied Mathematics at Stony Brook University, the Mt. Tai Scholar at the National Supercomputer Centre of China in Jinan, an awardee of China’s most prestigious Thousand-Scholar Program, and a Visiting Professor of Computer Science at the National University of Singapore. Prof. Deng earned his BA (1983) in Physics from Nankai University and his PhD (1989) in Theoretical Physics from Columbia University.
Prof. Deng’s research covers parallel computing, molecular dynamics, Monte Carlo methods, and biomedical engineering. He has published more than 85 papers in these areas and supervised 25 doctoral theses. He is the architect of the Galaxy Beowulf Supercomputer at Stony Brook, built in 1997, and of the NankaiStars Supercomputer, which was China’s fastest when it was completed in 2004. He also built a supercomputer prototype called RedNeurons in 2007 with financial support from China’s Ministry of Science and Technology and Shanghai’s Commission of Science and Technology. His research in the US is supported by the DOE, NSF and NIH, as well as New York State. He has lectured widely in the US, Germany, Russia, Brazil and Singapore, as well as the Greater China region.
Keynote:
Supercomputing Multiscale Modeling in Biomedical Engineering Guided by Machine Learning for Optimal Accuracy and Efficiency
Abstract:
TBC
Professor Dahua Lin is Co-Founder of SenseTime. He is also an assistant professor in the Department of Information Engineering at the Chinese University of Hong Kong (CUHK) and the director of the CUHK-SenseTime Joint Lab. Prior to joining CUHK, he served as a research assistant professor at the Toyota Technological Institute at Chicago from 2012 to 2014. His research interests cover computer vision, machine learning and big data analytics. In recent years, he has focused primarily on deep learning and its applications to high-level visual understanding, probabilistic inference and big data analytics.
Professor Lin has published about seventy papers in top conferences and journals, e.g. ICCV, CVPR, ECCV, NIPS and T-PAMI. His seminal work on a new construction of Bayesian nonparametric models won the Best Student Paper Award at NIPS 2010. He also received the Outstanding Reviewer Award at ICCV 2009 and ICCV 2011. He has supervised and co-supervised the CUHK team in international competitions, winning multiple awards at ImageNet 2016, ActivityNet 2016 and ActivityNet 2017. He also served as an area chair of ECCV 2018.
Dahua Lin received his PhD from the Department of Electrical Engineering and Computer Science (EECS) at the Massachusetts Institute of Technology in 2012. He received his M.Phil. from the Department of Information Engineering at the Chinese University of Hong Kong in 2007, and his B.Eng. from the Department of Electrical Engineering and Information Science at the University of Science and Technology of China in 2004.
Dr. Lin Gan is the assistant director and the director of the R&D center at the National Supercomputing Center in Wuxi. He is also an assistant researcher in the Department of Computer Science at Tsinghua University.
His research interests include high-performance solutions for scientific applications on state-of-the-art platforms such as CPUs, FPGAs, and GPUs. He is currently leading several major projects to develop highly efficient software and tools for the Chinese homegrown Sunway CPUs and to explore novel architectures for next-generation supercomputing systems.
Dr. Gan is the recipient of the 2016 ACM Gordon Bell Prize, a 2017 ACM Gordon Bell Prize finalist nomination, the 2018 IEEE-CS TCHPC Early Career Researchers Award for Excellence in HPC, the Most Significant Paper Award in 25 Years at FPL 2015, and the 2017 Tsinghua-Inspur Computational Earth Science Young Researcher Award, among other honours.
Keynote:
Boosting the Efficiencies for HPC Systems: Lessons from Sunway and Reconfigurable Architectures
Abstract:
There is a strategic shift in HPC system architectures. On one side, accelerators such as GPUs or even dedicated vector engines are added to the main general-purpose CPUs, whether on chip in homogeneous arrangements like Sunway, or externally as with NVIDIA or NEC. On the other side, FPGA-based reconfigurable accelerated computing is gaining traction as a possible integral part of core HPC system configurations, rather than being an option. This talk focuses on some unconventional but important HPC systems, the Sunway many-core CPU and the FPGA-based reconfigurable engine, and introduces some unique architectural features and arithmetic-level algorithmic techniques that greatly benefit the performance of some numerical applications.
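One well-known instance of such arithmetic-level techniques, offered here only as an illustrative sketch and not necessarily the approach covered in the talk, is mixed-precision iterative refinement: perform the expensive solve in reduced precision, which is cheap on many-core and FPGA hardware, then recover full accuracy with a few high-precision residual corrections.

```python
# Illustrative sketch of mixed-precision iterative refinement (an assumption,
# not necessarily the technique covered in the talk): solve in float32,
# then recover float64 accuracy with cheap residual-correction steps.
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test system
b = rng.standard_normal(n)

A32 = A.astype(np.float32)                        # reduced-precision copy
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
for _ in range(3):
    r = b - A @ x                                 # residual in full precision
    x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)

print("residual norm:", np.linalg.norm(b - A @ x))  # close to full float64 accuracy
```

In a production solver the reduced-precision factorization would be computed once and reused across refinement steps; that reuse is where the performance gain comes from.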
Dr Lim Keng Hui is the Executive Director of the Institute of High Performance Computing (IHPC) in A*STAR. He leads the research institute to advance scientific knowledge, and deliver impact to the industry and society through research in computational modelling and simulation, visualisation and artificial intelligence.
Prior to his current role, Keng Hui was the Director of the Singapore University of Technology and Design (SUTD)’s Digital Manufacturing & Design Centre (DManD), where he co-led the research centre to develop leading-edge capabilities in digital manufacturing and computational design. Concurrently, he was also the Director of the National Additive Manufacturing Innovation Cluster (NAMIC), where he set up and led the translational research centre, which focuses on additive manufacturing (AM) R&D with companies, standards development, and industry outreach.
Before SUTD, Keng Hui had held several key appointments in A*STAR. He was the Deputy Executive Director of the National Metrology Centre (NMC), where he managed research capability development, resource planning and talent development. He was also the Director of the SERC Engineering Cluster, where he established and managed large-scale strategic initiatives, including pioneering programmes in the Future of Manufacturing (e.g. AM, robotics, remanufacturing, optical engineering, logistics & supply chain management), the national Marine & Offshore (M&O) programme, as well as the A*STAR Urban Systems and MedTech programmes. He also supported the establishment of the National Robotics Programme (NRP), the A*STAR Technology Adoption Programme (TAP) and the Technology Centre for Offshore & Marine Singapore (TCOMS).
Keng Hui was formerly the Head of Product Development at a manufacturing company; CTO and co-founder of a medical image diagnostics startup based on his invention with NUS and the National Skin Centre; CTO and co-founder of a medical imaging and robotics start-up in Boston based on his invention at MIT and the Massachusetts General Hospital; and a research scientist at the National University Hospital.
Keng Hui is currently an adjunct professor at SUTD. He has served on national-level R&D committees in advanced manufacturing, robotics and urban solutions, as well as standards committees in AM. He holds several patents, and was a recipient of the Innovator’s Award from the Prime Minister’s Office for his MedTech work. He received his degrees from Imperial College, MIT and NUS.
Keynote:
Modelling and Simulation: Innovations to Solve Challenges in Data Centres
Abstract:
Data centres are increasingly important due to societal progress and technological advancement. The mobile wireless telecommunication industry’s progress from 4G to 5G is expected to bring about a significant capacity demand for data centres. In addition, moving towards Industry 4.0, the interconnectivity of cyber-physical systems will further accelerate the growth of data centres. In view of their increasing criticality, data centres have to be protected against many disruptive factors, including heat build-up: electronic devices consume electricity, and the natural byproduct is heat. To ensure reliable operation and satisfactory equipment lifetime, the ambient temperature for electronics must be maintained within acceptable limits, as temperature affects the performance of electronic systems in many ways. In this talk, I will share how modelling and simulation have been used to develop innovations that solve data centre challenges, particularly in heat management. These innovation successes came through collaboration with companies, research institutes, agencies and academia, and include server rack design, system modelling of the refrigeration cycle, and a passive on-demand cooling system.
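To make the heat-management arithmetic concrete, here is a minimal sketch of the steady-state heat balance Q = ṁ·cp·ΔT that sizes the cooling airflow for a rack. The rack power and temperature limits below are illustrative assumptions, not figures from the talk.

```python
# Minimal sketch (illustrative numbers, not the models discussed in the talk):
# steady-state heat balance for one server rack, Q = m_dot * cp * dT,
# solved for the airflow needed to keep the outlet temperature within limits.
rack_power_kw = 10.0   # IT load; essentially all of it becomes heat
cp_air = 1.005         # specific heat of air, kJ/(kg*K)
rho_air = 1.2          # air density, kg/m^3
delta_t = 12.0         # allowed inlet-to-outlet temperature rise, K

mass_flow = rack_power_kw / (cp_air * delta_t)   # kg/s of air through the rack
volume_flow_cfm = mass_flow / rho_air * 2118.88  # m^3/s -> cubic feet per minute

print(f"required airflow: {volume_flow_cfm:.0f} CFM per rack")
```

For a 10 kW rack this gives roughly 1,500 CFM, in line with the common rule of thumb of 120 to 160 CFM per kilowatt of IT load.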
Indermohan (Inder) S. Monga serves as the Division Director for the Scientific Networking Division at Lawrence Berkeley National Lab and Executive Director of the Energy Sciences Network, a high-performance network user facility optimized for large-scale science, interconnecting the National Laboratory System in the United States. Under his leadership, the organization also focuses on advancing the science of networking for collaborative and distributed research applications. He contributes to ongoing research projects tackling network programmability, analytics and quality of experience, driving convergence between the application layer and the network. He currently holds 23 patents and has 20+ years of industry and research experience in telecommunications and data networking. He holds an undergraduate degree in electrical/electronics engineering from the Indian Institute of Technology Kanpur, India, and pursued graduate studies at Boston University.
Keynote:
Cracking open the network ‘black-box’
Abstract:
The size of the digital universe is growing at an exponential rate, with machine-generated data adding significantly to that growth. Data acquisition in its raw form, and its transformation through data analysis into insights, requires a network infrastructure that makes collection and movement of data from instrument to data center to high-performance computing or cloud a seamless experience. This talk discusses the emerging trend of integrating application workflows with compute, storage, and network resources in unique ways, with experiences from the Energy Sciences Network, the science network super-highway supporting big-data research in the US.
Mark Stickells, Executive Director of the Pawsey Supercomputing Centre, is a research executive with more than 20 years’ experience working at a senior level in innovative research and business development roles in complex, multi-stakeholder environments. Through national and international programs and joint ventures, Mark has successfully led initiatives to accelerate the impact of research, development and education programs for Australia’s key energy, mining and agricultural sectors.
He is a former Chief Executive of an LNG research and development alliance of CSIRO, Curtin University and UWA, partnering with Chevron, Woodside and Shell. Prior to his appointment at Pawsey, Mark led the innovation and industry engagement portfolio at The University of Western Australia. In addition, Mark is the current Chair of the Board of All Saints’ College and was appointed an adjunct Senior Fellow of the Perth USAsia Centre (an international policy think tank) in 2017.
Keynote:
Beyond hyperscale, to hyper-connected – an emerging Asian HPC zone
Abstract:
The future of HPC isn’t just a matter of hardware and software challenges and developments; it will largely be shaped by how we use it. The nature of science and the multi-disciplinary complexity of the world’s wicked problems mean that collaboration between research and infrastructure providers, subject experts, industry and government will only increase. We need to collaborate at scale across geographic boundaries to tackle issues that affect us worldwide, embracing technology, diversity, and utility.
Our HPC future is hyper-connected. We are uniquely placed to accommodate challenges of both distance and scale. But in addition to making advances in hardware and software, and growing a pipeline of talent with the necessary digital skills, we need digitally-literate people who can build linkages, support collaboration, and span historical boundaries. By engaging, collaborating, and working at scale across both geographic and organisational boundaries, we recruit diverse perspectives and talents for our problem solving, and enlarge our discovery bandwidth.
Asia has long been an important economic zone. Its future is also as an emerging zone of HPC collaboration and engagement.
Dr. Dan Stanzione, Associate Vice President for Research at The University of Texas at Austin since 2018 and Executive Director of the Texas Advanced Computing Center (TACC) since 2014, is a nationally recognized leader in high performance computing. He is the principal investigator (PI) for a National Science Foundation (NSF) grant to acquire and deploy Frontera, which will be the fastest supercomputer at any U.S. university. Stanzione is also the PI of TACC’s Stampede2 and Wrangler systems, supercomputers for high performance computing and for data-focused applications, respectively. For six years he was co-PI of CyVerse, a large-scale NSF life sciences cyberinfrastructure. Stanzione was also a co-PI for TACC’s Ranger and Lonestar supercomputers, large-scale NSF systems previously deployed at UT Austin. Stanzione received his bachelor’s degree in electrical engineering and his master’s degree and doctorate in computer engineering from Clemson University.
Keynote:
Computing for the Endless Frontier
Abstract:
In August 2018, the Texas Advanced Computing Center (TACC) at the University of Texas at Austin was selected as the sole awardee of the National Science Foundation’s “Towards a Leadership Class Computing Facility” solicitation. In this talk, I will describe the main components of the award: the Phase 1 system, “Frontera”, which will be the largest university-based supercomputer in the world when it comes online in 2019; the plans for facility operations and scientific support for the next five years; and the plans to design a Phase 2 system in the mid-2020s to be the NSF leadership system for the latter half of the decade, with capabilities 10x beyond Frontera. The talk will also cover the growing and shifting nature of the scientific workloads that require advanced capabilities, the technology shifts and challenges the community is currently facing, and the ways TACC has restructured, and is restructuring, to face these challenges.
Sinisa is a Global Business Development Executive and a driver of restructuring and growth strategies that consistently deliver multimillion-dollar revenue growth. He has directed leadership teams with sales in excess of $500M and has been recognized as a thought leader for defining new marketing strategies and building the foundation for strategic alliances. Sinisa has actively led teams across multiple countries (both matrix and direct reporting) in building product vision, marketing, partnership and alliance strategies. He acts as a true partner to business executives, cultivating collaborative relationships for organizational and financial success.
He is currently Director, IBM Systems for Cloud and Cognitive Platforms, where he drives the IBM Power server business across Asia Pacific, including cognitive and AI solutions for deep learning, machine learning and high-performance computing.
Gilad Shainer is an HPC evangelist who focuses on high-performance computing, high-speed interconnects, leading-edge technologies and performance characterization. He serves as a board member in the OpenPOWER, CCIX, OpenCAPI and UCF organizations, a member of the IBTA and a contributor to the PCI-SIG PCI-X and PCIe specifications. Mr. Shainer holds multiple patents in the field of high-speed networking and is a recipient of the 2015 R&D 100 award for his contribution to the CORE-Direct collective offload technology. He holds an M.Sc. degree and a B.Sc. degree in Electrical Engineering from the Technion – Israel Institute of Technology.
Keynote:
Intelligent Data Center Architecture to Enable Next Generation HPC/AI Platforms
Steve Scott serves as Cray’s Senior Vice President and Chief Technology Officer, responsible for guiding the long-term technical direction of Cray’s supercomputing, storage and analytics products. Dr. Scott rejoined Cray in 2014 after serving as principal engineer in the platforms group at Google and before that as the senior vice president and chief technology officer for NVIDIA’s Tesla business unit. Dr. Scott first joined Cray in 1992, after earning his Ph.D. in computer architecture and BSEE in computer engineering from the University of Wisconsin-Madison. He was the chief architect of several Cray supercomputers and interconnects. Dr. Scott is a noted expert in high performance computer architecture and interconnection networks. He holds 35 U.S. patents in the areas of interconnection networks, cache coherence, synchronization mechanisms and scalable parallel architectures. He received the 2005 ACM Maurice Wilkes Award and the 2005 IEEE Seymour Cray Computer Engineering Award, and is a Fellow of IEEE and ACM. Dr. Scott was named to HPCwire’s “People to Watch in High Performance Computing” in 2012 and 2005.
Keynote:
The Changing Face of HPC
Abstract:
As CMOS performance plateaus, processor architectures are becoming more diverse in an attempt to gain performance through architectural specialization. Workloads are becoming more heterogeneous, as well, as machine learning, AI and data analytics gain traction across a wide set of markets and problem domains. This drives future system architects to embrace heterogeneity and focus on data-centric system designs that can ingest, manipulate and analyze vast amounts of data, while bringing a robust set of computational technologies to bear.
Join Dr. Steve Scott, CTO at Cray, as he discusses the end of Moore’s Law, growing architectural and workload diversity, HPC/AI convergence, and Cray’s next-generation Shasta system and Slingshot interconnect.
Jay Hiremath leads the platform and software engineering team for AMD EPYC™ Server Processors. He has over twenty-five years’ experience in the technology industry, with the past ten focused on HPC platform and software engineering.
Keynote:
AMD Silicon and Software Solutions for HPC
Abstract:
Momentum is building for AMD’s HPC business, with a growing number of customers announcing deployments using the AMD EPYC™ 7000 Series Processor. Come learn about the latest updates to the AMD EPYC™ Server Processor product line and AMD Software for HPC.
DK Panda is a Professor and University Distinguished Scholar of Computer Science and Engineering at the Ohio State University. He has published over 400 papers in the area of high-end computing and networking. The MVAPICH2 (High Performance MPI and PGAS over InfiniBand, iWARP and RoCE) libraries, designed and developed by his research group (http://mvapich.cse.ohio-state.edu), are currently being used by more than 2,675 organizations worldwide (in 81 countries). More than 392,000 downloads of this software have taken place from the project’s site. This software is empowering several InfiniBand clusters (including the 12th, 15th and 31st ranked ones) in the TOP500 list. The RDMA packages for Apache Spark, Apache Hadoop and Memcached, together with the OSU HiBD benchmarks from his group (http://hibd.cse.ohio-state.edu), are also publicly available. These libraries are currently being used by more than 185 organizations in 26 countries. More than 18,000 downloads of these libraries have taken place. He is an IEEE Fellow. More details about Prof. Panda are available at http://www.cse.ohio-state.edu/~panda.
Keynote:
How to Design Convergent HPC, Deep Learning and Big Data Analytics Software Stacks for Exascale Systems?
Abstract:
This talk will focus on challenges in designing convergent HPC, Deep Learning, and Big Data Analytics software stacks for exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X programming models, taking into account support for multi-core systems (Xeon, OpenPOWER, and ARM), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy awareness. Features and sample performance numbers from the MVAPICH2 libraries (http://mvapich.cse.ohio-state.edu) will be presented. An overview of RDMA-based designs for Hadoop (HDFS, MapReduce, RPC and HBase), Spark, Memcached, Swift, and Kafka using native RDMA support for InfiniBand and RoCE will also be presented, along with the benefits of these designs on various cluster configurations using the publicly available RDMA-enabled packages from the OSU HiBD project (http://hibd.cse.ohio-state.edu). For the Deep Learning domain, we will focus on scalable DNN training with Caffe and TensorFlow using the MVAPICH2-GDR MPI library and RDMA-enabled Big Data stacks (http://hidl.cse.ohio-state.edu).
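As a flavour of the MPI+X model and the GPUDirect RDMA support the abstract refers to, here is a minimal sketch. It is an illustration only: it assumes the mpi4py and CuPy Python packages running over a CUDA-aware MPI build such as MVAPICH2-GDR, none of which the abstract prescribes. A GPU-resident buffer is handed directly to MPI, so the transfer can move device-to-device without staging through host memory.

```python
# Minimal sketch, assuming mpi4py + CuPy over a CUDA-aware MPI (e.g. MVAPICH2-GDR):
# GPU buffers are passed straight to MPI, enabling GPUDirect RDMA transfers.
# Run with: mpirun -np 2 python this_script.py
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n = 1 << 20  # one million floats, resident in GPU memory

if rank == 0:
    buf = cp.arange(n, dtype=cp.float32)
    comm.Send(buf, dest=1, tag=11)   # device pointer handed to the MPI library directly
elif rank == 1:
    buf = cp.empty(n, dtype=cp.float32)
    comm.Recv(buf, source=0, tag=11)
    print("rank 1 received, last element =", float(buf[-1]))
```

With a host-only MPI build the same buffers would first have to be copied to CPU memory; letting the library see the device pointer is what allows RDMA-capable networks to bypass that staging step.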