Keynote: Paving the Road to Exascale
Gilad Shainer
Chairman
HPC & A.I. Advisory Council
Bio: Gilad Shainer is an HPC evangelist who focuses on high-performance computing, high-speed interconnects, leading-edge technologies and performance characterization. He serves as a board member of the OpenPOWER, CCIX, OpenCAPI and UCF organizations, is a member of the IBTA and has contributed to the PCISIG PCI-X and PCIe specifications. Mr. Shainer holds multiple patents in the field of high-speed networking and is a recipient of the 2015 R&D100 award for his contribution to the CORE-Direct collective offload technology. He holds an M.Sc. degree and a B.Sc. degree in Electrical Engineering from the Technion Institute of Technology.
Abstract: The latest revolution in high-performance computing and artificial intelligence is the move to a co-design architecture, a collaborative effort among industry, academia, and manufacturers to reach Exascale performance by taking a holistic system-level approach to fundamental performance improvements. Co-design architecture improves system efficiency and optimizes performance by creating synergies between the hardware and the software. The session will review the next generation of data center architecture and the new approaches driving the development of Exascale platforms.
Keynote: Drive AI To Enter New Stage
Qingchun Song
Sr. Director, Market Development of APAC
Mellanox APAC
Bio: Qingchun Song holds a Master of Computer Science from Tsinghua University, China. He has 15 years of experience in the HPC and storage industry and has participated in supercomputer designs in China, Japan, Singapore and Korea. He served as Taiwan General Manager for three years, supporting worldwide hyperscale customers, and was the principal architect of AI solutions in China, where he successfully built the RDMA ecosystem across China's mainstream machine learning/deep learning and big data frameworks.
Abstract: Distributed training and big data analysis are the foundation of AI. All mainstream training frameworks have moved from TCP to RDMA, and SparkRDMA has been upstreamed. RDMA offloads communication from the CPU, freeing cycles for the application, and in-network computing technologies reduce the CPU performance impact of the Intel Meltdown and Spectre fixes. High-performance networking is the key to driving AI into a new stage.
Keynote: Medical Discoveries when Big Data, AI & HPC Converge
Dr. Rangan Sukumar
Sr. Data Analytics Architect
Office of the CTO, CRAY Inc.
Bio: Rangan Sukumar is a Senior Analytics Architect in the CTO’s office at Cray Inc. His role is three-fold: (i) Solutions architect – Creating bleeding-edge solutions for scientific and enterprise problems in the long-tail of the Big Data market requiring scale and performance beyond what cloud computing offers, (ii) Technology visionary – Designing the roadmap for analytic products through evaluation of customer requirements and aligning them with emerging hardware and software technologies, (iii) Analytics evangelist – Demonstrating what Big Data and HPC can do for data-centric organizations. Before his role at Cray, he served as a group leader, data scientist and artificial intelligence/machine learning researcher scaling algorithms on unique super-computing infrastructures at the Oak Ridge National Laboratory. He has over 70 publications in areas of disparate data collection, organization, processing, integration, fusion, analysis and inference – applied to a wide variety of domains such as healthcare, social network analysis, electric grid modernization and public policy informatics.
Abstract: This talk is about the convergence of high performance computing (HPC) technologies for Big Data problems and artificial intelligence workflows. The convergence, achieved by combining the HPC interconnect with HPC best practices and communication collectives: (i) enables processing of graph datasets 1000x bigger, up to 100x faster than competing tools on commodity hardware (e.g. GraphX); (ii) provides a 2-26x speed-up on matrix factorization workloads compared to cloud-friendly Apache Spark; and (iii) promises over 90% scaling efficiency on deep learning workloads (i.e. a potential reduction in training time from days to hours). These benchmark results, when assembled into data science workflows, enable creative applications for the discovery of domain-specific insights.
The talk will delve deeper into a use-case of applying artificial intelligence to medical 'Big Data' represented as massive, ad-hoc, heterogeneous graph networks. We will present the Cray Graph Engine (CGE) as a demonstration of the convergence of HPC and AI for Big Data that is capable of: (i) speeding up ad-hoc searches (e.g. a query-able semantic database) and graph-theoretic mining; (ii) scaling to massive data sizes; and (iii) providing newer functionality for temporal, streaming and snapshot analysis of massive graphs. We will demonstrate graph theory applied to a semantic database extracted from PubMed, containing over 90 million knowledge nuggets published across over 27 million publications in the medical literature, and show how this capability was used as: (i) a demonstration of "explainable" artificial intelligence that augments clinical/medical researchers at the Historical Clinico-pathological Conference in Baltimore, USA to solve mystery illnesses; (ii) a hypothesis-generation tool that discovered the relationship between beta-blocker treatment and diabetic retinopathy at The University of Tennessee Health Sciences Center, Memphis, USA; and (iii) a knowledge browser that revealed xylene as an environmental carcinogen at the Oak Ridge National Lab, USA.
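As a rough, small-scale illustration of the kind of ad-hoc semantic query the Cray Graph Engine accelerates (CGE itself runs SPARQL-style queries over billions of triples; the triples, predicates and namespace below are purely hypothetical), a Python sketch using rdflib might look like this:

```python
# Hypothetical toy graph: a few medical "knowledge nuggets" as RDF triples.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")  # made-up namespace for illustration
g = Graph()
g.add((EX.beta_blocker, EX.treats, EX.hypertension))
g.add((EX.beta_blocker, EX.associated_with, EX.diabetic_retinopathy))
g.add((EX.xylene, EX.associated_with, EX.cancer))

# An ad-hoc SPARQL query of the sort a hypothesis-generation workflow might run.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?subject ?object
    WHERE { ?subject ex:associated_with ?object . }
""")
for subj, obj in results:
    print(subj, obj)
```

CGE exposes the same query pattern, but distributed across a supercomputer's memory rather than running in a single Python process.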
Keynote: Hybrid HPC strategies with a Hyperscale Cloud
Dr Jer-Ming Chia
Senior Program Manager for Azure Specialized Compute
Microsoft
Bio: Trained as a computational geneticist, Jer-Ming Chia holds a PhD in Human Genetics and works on delivering high performance computing solutions to customers with complex computational problems across different verticals. Originally from Singapore, he is currently a Senior Program Manager for the Azure Specialized Compute group in Seattle where he is now based.
Abstract: We will discuss the different approaches that organizations are taking to complement internal resources with a hyperscale Cloud — optimizing resource usage to service more user types, enabling rapid development, and delivering faster results.
Keynote: The Path Toward Tomorrow’s AI
Dr Loy Chen Change
Senior Research Consultant
SenseTime Group Limited
Bio: Loy Chen Change, PhD, is a senior research consultant at SenseTime. He is also an Adjunct Assistant Professor at the Chinese University of Hong Kong. He received his PhD (2010) in Computer Science from Queen Mary University of London. His research interests include computer vision and pattern recognition, with a focus on face analysis, deep learning, and visual surveillance. He has published more than 90 papers in top journals and conferences of computer vision and machine learning. His journal paper on image super-resolution was selected as the 'Most Popular Article' by IEEE Transactions on Pattern Analysis and Machine Intelligence in 2016. He serves as an Associate Editor of the IET Computer Vision Journal and a Guest Editor of the International Journal of Computer Vision. He is a senior member of IEEE.
Abstract: In this talk, I will discuss how computer vision and AI-powered deep learning help us recognize faces and objects, and understand the world. I will also share our on-going efforts in developing new AI technologies, for example, deep networks that can enhance images or hallucinate faces from very low-resolution inputs.
Anchoring AI in Singapore
Adhiraj Saxena
Manager, Industry Innovations
AI Singapore
Bio: Adhiraj is the Manager of Industry Innovation at AI Singapore, a national initiative to anchor deep AI capabilities in Singapore. He oversees AI Singapore's '100 Experiments' programme, with a mission to proliferate the adoption of AI among Singapore-based enterprises and start-ups. He is a Singaporean who graduated from the National University of Singapore and served in the Public Service for 7 years. Before joining AI Singapore, he was a project manager in the Ministry of Finance. He is also an Executive Committee member of the EDB Society, the alumni association of the Economic Development Board of Singapore.
Abstract: AI Singapore is a national initiative to anchor deep capabilities in Artificial Intelligence. This talk will share the programmes and initiatives that enable these outcomes, as well as some of the exciting and innovative projects and challenges that Singapore will be embarking on. AI Singapore will also share areas where industry can be supported and can contribute to this AI transformation under its 100 Experiments, AI Apprenticeship and other programmes.
Network Computing: Accelerate HPC & AI Performance
Avi Telyas
Director of Solution Engineering
Mellanox APAC
Bio: Avi Telyas is a Director of Systems Engineering at Mellanox Technologies, leading the APAC Sales Engineering and FAE teams. Based in Tokyo, Avi is deeply involved in large HPC, machine learning and AI deployments in Japan and across APAC. In his free time, Avi codes with AI frameworks and gets too excited talking about it. Avi holds a BSc (summa cum laude) in Computer Science from the Technion Institute of Technology, Israel.
Abstract: The latest revolution in HPC is the move to a co-design architecture, a collaborative effort among industry, academia, and manufacturers to reach Exascale performance by taking a holistic system-level approach to fundamental performance improvements. Co-design recognizes that the CPU has reached the limits of its scalability, and offers In-Network Computing to share the responsibility for handling and accelerating application workloads, offloading the CPU. By placing data-related algorithms on an intelligent network, we can dramatically improve data center and application performance.
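As a rough sketch of the kind of collective operation that in-network computing offloads (for example, Mellanox SHARP computes reductions inside the switch fabric instead of on the hosts), the minimal mpi4py example below shows the MPI Allreduce that such offloads accelerate; the rank count and buffer size are arbitrary:

```python
# Run with an MPI launcher, e.g.: mpirun -np 4 python allreduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes a local, gradient-like buffer.
local = np.full(4, rank, dtype=np.float64)
total = np.empty_like(local)

# The global sum below is the step an in-network engine can compute in the
# switches, instead of bouncing partial sums through the host CPUs.
comm.Allreduce(local, total, op=MPI.SUM)

if rank == 0:
    print("sum across ranks:", total)
```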
Reconfigurable Accelerator Platform with Intel FPGAs
Robin Liu
Business Development Manager
Intel PSG Asia-Pac
Bio: Robin Liu has been in the FPGA industry for more than 15 years in various technical and business roles. In 2015 Robin started leading the Asia-Pacific regional team developing new business in the data center, virtualization and artificial intelligence areas with FPGA technologies from the Intel Programmable Solutions Group (formerly Altera Corporation).
Abstract: The presentation introduces Intel FPGA products and platforms that provide flexible and powerful acceleration for popular workloads, including AI, data analytics, and other HPC applications.
New era in supercomputing – a wide variety of choices!
Rajesh Chhabra
General Manager-South East Asia, Greater China & Western Australia
Cray Inc.
Bio: Rajesh Chhabra is well recognized as one of the High Performance Computing (HPC) experts in Asia-Pacific. He has worked almost exclusively in the HPC field over his 18-year career, spanning government R&D centers, universities and commercial enterprises. He has performed a variety of roles, such as application programming, system administration, software development, program management, business development, sales and operations, but always within the HPC domain. Expertise across the HPC hardware, middleware and software domains gives him a unique capability to design end-to-end HPC solutions.
He worked as a research support specialist at one of the A*STAR institutes in Singapore early in his career and then moved to Queensland University of Technology (QUT) in Australia, performing similar research support and HPC system administration roles. He led a key national infrastructure project in Australia under the APAC Grid program, leading the User Interface and Visualisation domain. In 2006, he joined Altair and moved to India to establish a global development center for PBS Works. In 2008 he took on a business development and thought-leadership role for PBS Works covering the entire Asia-Pacific region, and grew sales by over 300% during his time at Altair. He moved into the hardware industry a few years ago when he joined Silicon Graphics (SGI), looking after the whole of Asia (except Japan) from Singapore. Under his leadership the regional sales grew by 400% in a very short span of time. In his current role at Cray he covers a spread-out region of South East Asia, Greater China and Western Australia.
He holds a Master’s in Information Technology from Swinburne University of Technology Australia and a Master’s in Technology Management from Griffith University Australia.
Abstract: For nearly a decade, the supercomputing world has had a more or less defined architecture. From CPU architecture to the set of interconnect choices, the options have arguably been fairly limited. However, a new era is upon us where buyers are going to be spoiled for choice! With a variety of CPUs on the horizon, a range of interconnects, cooling technologies and, above all, the choice of flexible business models (capex, opex, on-premises or cloud), buyers may find themselves taking longer than usual to finalize a system. As HPC becomes a platform for AI, even more complexity lies ahead for those looking to purchase a balanced system. This talk will highlight the current supercomputing landscape in view of growing AI demands and share Cray's efforts to address these market challenges.
HPE Exascale HPC Strategy, Technologies and Status
Todd Churchward
Senior Technologist, HPC & AI
HPE APAC
Bio: Todd Churchward is a Solution Architect for HPE with responsibilities for major projects across Asia Pacific including Australia, New Zealand, India, China, Korea and ASEAN countries. Todd has extensive experience architecting Petascale high performance computer systems, high performance file systems, and large scale persistent storage environments. Todd also has extensive skills in HPC application porting and tuning as well as the development of scientific cloud computing environments.
Todd has a diverse technical background, with a Bachelor of Applied Science in Surveying, a Master of Geographic Information Systems, and a Post Graduate Diploma in Information Technology (Software Engineering). His role at HPE encompasses solution design, implementation and support of large-scale computer solutions; application optimisation and benchmarking; consultancy engagements; as well as delivering end-user training in HPC application development and optimisation. Todd has over 25 years' experience in technical computing and software, including Geomatics, GIS, SCADA and Industrial Control Systems, as well as fifteen years in HPC compute, storage and graphics with SGI prior to joining HPE.
Todd has been the technical lead on several Petascale national computational and storage systems, including the National Computational Infrastructure (NCI) and the Pawsey Centre in Australia, A*STAR in Singapore, and NeSI in New Zealand. Todd has also worked with many university, public sector and private organisations to deliver innovative, highly effective and productive HPC solutions across a diverse range of industries.
Abstract: Exascale computing presents significant challenges in the form of system scale, power efficiency, affordability, manageability, programmability, usability and sustained application performance. HPE is tackling these challenges head-on by delivering innovative technologies and approaches that will enable significant breakthroughs in system efficiency and allow innovative new approaches and algorithms. This session will cover HPE's Exascale motivation, strategy, component technologies, integration approach and program status.
Compute, Storage and Interconnects – Performance and Usage Scenarios
Sandeep Lodha
CEO & Director
Netweb
Bio: A data science enthusiast with over 25 years of experience, Sandeep Lodha serves as Co-Founder and Director of the Singapore-based IT solutions company Netweb Pte Ltd. He is a software engineer and a management graduate with a deep interest in the area of Big Data analytics. Sandeep has driven several high-value projects in enterprise computing, Cloud and HPC for Netweb in India, Singapore and the Middle East. Prior to establishing Netweb in Singapore, Sandeep Lodha served as head of Sales and Marketing for Netweb Technologies, India, where he laid out major marketing and operational milestones for the company.
Sandeep Lodha also serves as a founder and board member of the Tyrone Foundation, an initiative to make quality education available to weaker sections of society in India and to promote awareness of academic excellence in the HPC domain.
Abstract: The storage world is changing rapidly, and the single largest reason is user requirements. Today's users have varied requirements, and no one solution fits everywhere. A range of new technologies has emerged that offers good solutions to these new-age requirements. NVMe is one such technology, and storage solutions built around it are gaining traction because they offer extremely dense storage with extremely high performance. PFS and DFS solutions have been around for a long time but are now evolving to meet newer workloads, offering a good alternative for some use cases. I intend to cover some of the exciting solutions in these areas.
Parallel File System Implementation for Artificial Intelligence and Machine Learning Computing
Carlos Thomaz
DDN
Bio:
Carlos Thomaz
Carlos is the Technical Product Manager for DDN's Exascaler product line, a parallel file system based on the Lustre File System. He has held this position since 2015, when he re-joined DDN. Carlos has 19 years of experience in the HPC and technical markets. Prior to joining DDN, he worked for Sun Microsystems and Seagate in different technical roles across pre-sales, professional services and technical product teams. Since 2011 his role has been dedicated primarily to storage for high performance computing; he supports customers and projects worldwide, as well as co-managing the Exascaler development team.
Shuichi Ihara
Shuichi Ihara is Senior Manager of the Performance Engineering group at DDN. Shuichi joined DDN in 2010 and has been responsible for I/O benchmarking and performance optimization of software and hardware. He is also Manager of the Lustre development team at DDN. His team is working on adding new Lustre features to address complex customer challenges and requirements. Before joining DDN, Shuichi worked at Sun Microsystems for 10 years; during his last three years there, he was in the Lustre engineering team and worked on the development of the Lustre/HPC software appliance stack. He has more than 12 years of HPC experience and has been involved in many large HPC deployment projects all over the world.
Abstract: The realm of supercomputing is constantly changing. New software development paradigms change the way we address current computational problems, and the adoption of new algorithms and technologies brings novelty to software and hardware development. Today we are experiencing a shift in the technology landscape, with business and analytical applications overlapping the traditional HPC environment. Recently, the adoption of artificial intelligence and machine learning workloads has made system designers and architects re-think how traditional supercomputers are built and deployed, covering not only the compute and network design but also the storage framework.
This presentation addresses the new challenges that storage administrators face with these new paradigms. It explains the power of parallel file systems, their attributes and features, and the approaches being implemented in the real world. The content is based on DDN's experience implementing the Lustre File System as a parallel storage solution for traditional HPC centers as well as for new ventures engaged in AI and machine learning.
Managing trillions of research data files
David Honey
Data Management & Storage Principal Consultant, HPC & AI
HPE APAC
Bio: David is a Data Management Expert with HPE’s High Performance Computing & AI Solutions Sales business unit.
David’s experience covers all aspects of the system life cycle from requirements analysis, infrastructure design, capacity planning, continuity planning, integration planning and benchmarking for new systems through to configuration management, change management and problem management for mature systems.
He has provided consultancy to clients on technology and architecture options, designing solutions in conjunction with multiple vendors and managing complex implementations.
David’s knowledge covers data sharing, hierarchical storage and data protection.
Prior to joining HPE, David worked for SGI for 18 years and managed ICT Infrastructure for Telecom NZ Ltd for 12 years before that. David has a Bachelor of Science in Physics from Victoria University of Wellington and is a certified PMI Project Management Professional of 11+ years standing.
Abstract: High-volume scientific instruments, sensor networks and the Internet of Things promise to make the Research Data Manager's life increasingly difficult. All-flash storage and parallel file systems may maintain data-access service levels but won't hold back the rising tide. Fast, large-scale storage presents greater challenges for data management and data protection.
This presentation introduces the 7th-generation Data Management Framework (DMF) product from HPE and explains how DMF addresses the cost of ownership of Exascale data stores while accelerating HPC jobs, protecting data and making data management easier.
AI for HPC and HPC for AI Workflows: The Differences, Gaps and Opportunities with Data Management
Dr. Rangan Sukumar
Sr. Data Analytics Architect
Office of the CTO, CRAY Inc.
Bio: Rangan Sukumar is a Senior Analytics Architect in the CTO’s office at Cray Inc. His role is three-fold: (i) Solutions architect – Creating bleeding-edge solutions for scientific and enterprise problems in the long-tail of the Big Data market requiring scale and performance beyond what cloud computing offers, (ii) Technology visionary – Designing the roadmap for analytic products through evaluation of customer requirements and aligning them with emerging hardware and software technologies, (iii) Analytics evangelist – Demonstrating what Big Data and HPC can do for data-centric organizations. Before his role at Cray, he served as a group leader, data scientist and artificial intelligence/machine learning researcher scaling algorithms on unique super-computing infrastructures at the Oak Ridge National Laboratory. He has over 70 publications in areas of disparate data collection, organization, processing, integration, fusion, analysis and inference – applied to a wide variety of domains such as healthcare, social network analysis, electric grid modernization and public policy informatics.
Abstract: The convergence of best practices from enterprise cloud computing and scientific supercomputing provides a tremendous opportunity for productivity and performance at scale. Scientists who solve differential equations on supercomputers are beginning to adopt AI tools in their workflows for process automation, ensemble analysis, computational steering, etc., and AI practitioners in the enterprise are looking to HPC concepts to speed up and scale up their algorithms to handle bigger data and more complex distributed models. However, in reality, when these two worlds meet, key bottlenecks with data management emerge. We compare and contrast data management strategies in both communities and share lessons learned on a diverse suite of hybrid computational and data-intensive use-cases. In doing so, we identify the following gaps and opportunities: (i) introducing parallelism for I/O on emerging hardware and software storage technologies; (ii) developing and implementing communication-aware algorithms; (iii) creating easy-to-use tools/middleware for seamless programmability and portability of data and code; and (iv) designing end-to-end workflow benchmarks that include data management requirements.
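As a hedged illustration of gap (i) above, the sketch below has every MPI rank write its own slice of a dataset into one shared HDF5 file; it assumes h5py built against a parallel HDF5 library, and the file name and sizes are arbitrary:

```python
# Run with an MPI launcher, e.g.: mpirun -np 4 python parallel_write.py
import numpy as np
import h5py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_local = 1024  # elements owned by each rank

# One shared file, opened collectively through the MPI-IO driver.
with h5py.File("features.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("features", shape=(size * n_local,), dtype="f8")
    # Non-overlapping, contiguous slices keep the parallel writes simple.
    dset[rank * n_local:(rank + 1) * n_local] = np.random.rand(n_local)
```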
The Vision of HPC & AI Advisory Council
Gilad Shainer
Chairman
HPC-AI Advisory Council
Bio: Gilad Shainer is an HPC evangelist who focuses on high-performance computing, high-speed interconnects, leading-edge technologies and performance characterization. He serves as a board member of the OpenPOWER, CCIX, OpenCAPI and UCF organizations, is a member of the IBTA and has contributed to the PCISIG PCI-X and PCIe specifications. Mr. Shainer holds multiple patents in the field of high-speed networking and is a recipient of the 2015 R&D100 award for his contribution to the CORE-Direct collective offload technology. He holds an M.Sc. degree and a B.Sc. degree in Electrical Engineering from the Technion Institute of Technology.
Abstract: The HPC-AI Advisory Council is a leading worldwide organization for high-performance computing and artificial intelligence research, development, outreach and education activities. With more than 400 member organizations, the council supports various best-practice activities, worldwide educational programs, advanced research activities and more. The session will review the council's mission and programs, and its view of the future of HPC.
Industrial-Level Deep Learning Training Infrastructure: the Practice and Experience from SenseTime
Dr. Shengen Yan
Research Director
SenseTime Group Limited
Bio: Shengen Yan received his PhD degree from the Institute of Software, Chinese Academy of Sciences. He was a Postdoctoral researcher at the Multimedia Lab at the Chinese University of Hong Kong from Nov. 2015 to Nov. 2017, and a visiting researcher at North Carolina State University from June 2013 to Feb. 2014. Currently, he serves as the R&D Director of the Algorithm Platform department at SenseTime, helping SenseTime build its deep learning supercomputer and distributed training system. Shengen Yan has published about 20 papers in the area of parallel computing and deep learning. He is also the first person in China to publish two consecutive first-author papers at PPoPP (the world's top conference in parallel computing). He has served as a PC member or reviewer for several academic conferences and journals. Before joining SenseTime, Shengen Yan was the technical leader of the Minwa project (the world's largest deep learning supercomputer at the time) at Baidu Research.
Abstract: SenseTime is a leading cutting-edge artificial intelligence company that focuses on innovative computer vision and deep learning technologies. SenseTime is dedicated to building an industrial-level deep learning training infrastructure. SenseTime has developed a new deep learning training framework from scratch and offers customers an optimized GPU supercomputer. The SenseTime training platform is broadly used across different applications, such as face detection and tracking, facial identity recognition and object detection.
RDMA over ML/DL and Big Data Framework
Ido Shamay
Software Architect
Mellanox Technologies
Bio: Ido Shamay is a Software Architect at Mellanox Technologies, focused on distributed AI/Big Data application acceleration, network virtualization technologies and network congestion algorithms. Before that, Ido worked as a cloud networking developer, a Linux network device driver maintainer and a performance engineer analyzing distributed HPC and cloud applications. Ido holds a BSc in Computer Science from the Technion Institute of Technology, Israel.
Abstract: Exponential data growth and the increasing complexity of machine learning algorithms have raised the network requirements of modern Big Data and artificial intelligence applications. RDMA is the most efficient way to move data across the network, providing applications with transport-level acceleration and direct end-to-end memory access semantics in user space, bypassing the kernel networking stack and delivering high throughput, low latency and low CPU overhead. RDMA has been used by the HPC community for a long time, and it is becoming the de-facto solution for distributed artificial intelligence and Big Data applications as well, already adopted by the main Big Data and machine learning frameworks.
Why AI Frameworks Need RDMA
Dr. Bairen Yi
Hong Kong University of Science and Technology
Bio: Bairen is currently a 3rd-year M.Phil. student at HKUST. He has 5 years' experience in CUDA programming and data mining, 3 years' experience in large-scale machine learning system design and implementation, and 1 pending patent in data center networking. He is a contributor to numerous open source software projects, including ZeroMQ, Apache Spark, and Google TensorFlow.
Abstract: The recent breakthrough of AI can be attributed not only to advances in algorithmic modeling, but also to warehouse-scale data volumes and warehouse-scale computing infrastructure. If each computing or storage chip is a soldier, it is datacenter networking that orchestrates them to fight like an army. Thanks to RDMA, we can enjoy the highest throughput and lowest latency all at once, making AI applications in the cloud swifter than ever before. In this talk, we will present some of the practical concerns in deploying RDMA-enabled AI applications in the cloud, and how to address them by combining the best of hardware from Mellanox and software from HKUST.
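As one concrete, hedged example of what "RDMA-enabled" means for a framework, TensorFlow 1.x (when built with verbs support) lets a distributed job move tensors over RDMA simply by selecting a different transport protocol; the host names and ports below are placeholders:

```python
import tensorflow as tf  # TensorFlow 1.x distributed runtime

cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# "grpc+verbs" keeps gRPC for control messages but moves tensor payloads over
# RDMA verbs; plain "grpc" would fall back to TCP.
server = tf.train.Server(cluster,
                         job_name="worker",
                         task_index=0,
                         protocol="grpc+verbs")
server.join()
```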
Student Cluster Competition (SCC) Experience Sharing
Liu Siyuan
Nanyang Technological University
Bio: Siyuan is currently a Computer Science senior at Nanyang Technological University (NTU). He has participated in 5 student cluster competitions in the past two years. He led the NTU team to win the “Deep Learning Excellence Award” at ISC17 and both the “Overall Champion” and “Highest LINPACK Award” at SC17.
Abstract: Student cluster competitions are unique high-performance computing competitions. They have gained popularity among students around the world, and the team from Nanyang Technological University has been participating in these competitions for the past 4 years, winning many awards along the way. In this sharing session, I will introduce what student cluster competitions are, share our team's experience with them, and discuss what we have learnt from them.
Executing AI Projects Successfully In Your Organisation
Dr Bhushan Desam
AI Global Business Leader, Lenovo Data Center Group
Lenovo
Bio: Dr. Bhushan Desam is currently the global business leader for Artificial Intelligence at Lenovo Data Center Group. Since joining Lenovo in early 2016, Bhushan has combined his engineering mindset with over a decade of expertise in high-performance computing and its various applications to shape and execute Lenovo's artificial intelligence (AI) strategy. In his current role, he engages with customers to help them capitalize on the benefits of AI, machine learning and deep learning to solve their most challenging business and research problems. He holds a Ph.D. in engineering from the University of Utah and a management degree from the MIT Sloan School of Management.
Abstract: Artificial intelligence (AI) has been on the agenda for many organizations, given the technical advances made in recent years in areas like deep learning. As organizations begin to implement various use cases, it is critical to be successful right from the beginning, both to create business value and to justify further investment for a broader impact. However, because AI is a new technology, several factors need to be considered that are distinct from those involved in implementing a mature technology. For example, architectural considerations during early prototyping can have major implications on TCO as the activity ramps up. In this talk, we will discuss the important factors that can influence the successful outcome of AI projects.
Meet the most demanding HPC and AI needs with help of Microsoft Azure
Luka Debeljak
APAC Manager for Azure Applications & Infrastructure
Microsoft
Bio: Luka Debeljak is the Manager of the Cloud Infrastructure Business for the Asia Pacific Region (APAC), focused on rapid adoption of Microsoft Azure cloud platform in the area. The solutions and workloads he is focused on vary from basic Infrastructure services up to Big Data platforms and HPC workloads, inclusive of many OSS solutions running on Microsoft Cloud. Luka has been with Microsoft for 14 years and has extensive knowledge and passion for everything related to cloud computing, cloud platforms, hybrid solutions architectural concepts, and technology in general. Luka holds a Master of Science in Computer Science with a focus on Artificial Intelligence (Machine Learning) and Computer Programming from the University of Ljubljana in Slovenia.
Abstract: High performance computing (HPC) applications are some of the most challenging to run in the cloud due to requirements that can include fast processors, a low-latency network interconnect, parallel file systems, and specialized compute such as GPUs. Similar trends can be observed with new AI and machine (deep) learning scenarios.
Please join us for this session to understand how you can run these workloads on Microsoft Azure with extremely high performance and at hyperscale. Topics will cover HPC scenarios, Azure services (e.g. Azure Batch) and implementation cases.
Spark Over RDMA: Accelerate Big Data
Ido Shamay
Software Architect
Mellanox Technologies
Bio: Ido Shamay is a Software Architect at Mellanox Technologies, focused on distributed AI/Big Data application acceleration, network virtualization technologies and network congestion algorithms. Before that, Ido worked as a cloud networking developer, a Linux network device driver maintainer and a performance engineer analyzing distributed HPC and cloud applications. Ido holds a BSc in Computer Science from the Technion Institute of Technology, Israel.
Abstract: The opportunity in accelerating Spark by improving its network data transfer facilities has been under much debate in the last few years. RDMA (remote direct memory access) is a network acceleration technology that is very prominent in the HPC (high-performance computing) world, but has not yet made its way to mainstream Apache Spark. Proper implementation of RDMA in network-oriented applications can improve scalability, throughput, latency and CPU utilization. In this talk we are going to present a new RDMA solution for Apache Spark that shows amazing improvements in multiple Spark use cases. The solution is under development in our labs, and is going to be released to the public as an open-source plug-in.
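For readers who want a feel for how such a plug-in is wired in, the hedged PySpark sketch below swaps Spark's default shuffle manager for an RDMA-based one; the jar path and class name follow the open-source SparkRDMA plug-in's documentation and should be treated as assumptions to verify against the released artifact:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("groupby-over-rdma")
    # Make the plug-in jar visible to the driver and every executor...
    .config("spark.driver.extraClassPath", "/opt/sparkrdma/spark-rdma.jar")
    .config("spark.executor.extraClassPath", "/opt/sparkrdma/spark-rdma.jar")
    # ...and replace the shuffle manager so shuffle blocks move over RDMA.
    .config("spark.shuffle.manager",
            "org.apache.spark.shuffle.rdma.RdmaShuffleManager")
    .getOrCreate()
)

# Any wide transformation (groupBy, join, sort) now exercises the RDMA shuffle.
df = spark.range(0, 10_000_000)
df.groupBy((df.id % 1024).alias("bucket")).count().show(5)
```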
Building an integrated AI-accelerated HPC hardware infrastructure
Francis Lam
Director of HPC Product Management
Huawei Technologies
Bio: Francis brings more than 20 years of HPC and IT industry experience, specializing in server systems design and HPC solution architecture. Before joining Huawei Enterprise USA as Director of HPC Product Management, Francis served in the Huawei US R&D Center from 2011 as an HPC System Architect. Francis is responsible for driving the future direction of Huawei HPC products and solutions.
Prior to joining Huawei, Francis served world-leading HPC and IT solution providers such as Hewlett-Packard, Oracle/Sun Microsystems and Super Micro.
Abstract: A broad range of scientific research disciplines and industries has increasingly adopted artificial intelligence to speed up analysis, increase the precision of predictions, design better machines and make new discoveries. It is becoming critically important for organizations to plan and integrate AI capability into their high performance infrastructure. This talk presents a forward-looking HPC architecture that deeply integrates AI with HPC in a flexible, efficient and scalable manner.
Building a High Performance Analytics Platform
Cheng Jang Thye
Chief Architect
Fujitsu Asia Pte Ltd
Bio: Jang Thye joined Fujitsu Singapore in June 2015 as Chief Architect. In his current role, he is responsible for driving strategic technology initiatives and oversees all aspects of solution and architecture design with various delivery teams to deliver a robust ICT portfolio to customers.
With over 20 years of professional experience in the industry, Jang Thye has held numerous positions including Systems Engineer, IT Architect, Senior Business Development Manager, and Systems Architect. Prior to Fujitsu, Jang Thye was Chief Architect at CA Technologies where he led the Asia Pacific/Japan (APJ) Solution Architect team in engaging major accounts in the APJ market. His key focus was driving Mobile Application Technologies, particularly in areas such as security, social networking, and application development in the market.
Jang Thye holds a Master of Science (Computer Science and Information Systems) and a Bachelor of Science (First Class Honours) (Computer Science and Information Systems), both from the National University of Singapore.
Jang Thye is married and has 2 boys. Outside of work, Jang Thye has multiple interests, such as tennis, badminton, swimming, cycling, violin, harmonica, and Go (WeiQi).
Abstract: Fujitsu sees a trend where HPC technologies and traditional data analytics will unify to bring unique capabilities to the digital transformation journey. As the cost of Flash storage goes down, there is a push towards leveraging it for high-performance analytics use cases. This session aims to provide deeper insight into considerations such as cost, storage performance and the software stack when moving towards high-performance analytics. Fujitsu will also introduce our partner solution from Iguazio, who will share insights on building a high-performance data platform using Flash, enabling users to analyze data in one simple, fast and secure platform, eliminating data pipeline complexities and reducing time to insight.
Access, Control, and Optimize HPC Clusters & Clouds with PBS Works 2018
Dr Bill Nitzberg
CTO of PBS Works
Altair
Bio: Dr. Bill Nitzberg is the CTO of PBS Works at Altair and “acting” community manager for the PBS Pro Open Source Project (www.pbspro.org). With over 25 years in the computer industry, spanning commercial software development to high-performance computing research, Dr. Nitzberg is an internationally recognized expert in parallel and distributed computing. Dr. Nitzberg served on the board of the Open Grid Forum, co-architected NASA’s Information Power Grid, edited the MPI-2 I/O standard, and has published numerous papers on distributed shared memory, parallel I/O, PC clustering, job scheduling, and cloud computing. When not focused on HPC, Bill tries to improve his running economy for his long-distance running adventures.
Abstract: PBS Works has become a key technology both to increase productivity and to reduce expenses for organizations all around the world. With PBS Works 2018, Altair is reimagining the HPC experience with new versions of the PBS Works suite to Control HPC, Access HPC, and Optimize HPC. For system administrators, Altair's new PBS Works Control tools provide 360-degree visibility and control to configure, deploy, monitor, troubleshoot, report, and simulate HPC clusters and clouds, including automatically bursting peak workloads to public clouds and creating and managing cloud appliances with ease. For engineers and researchers, PBS Works Access portals provide natural access to HPC (no IT expertise needed) to run solvers, view progress, manage data, and use 3D remote visualization from anywhere — via the web, the desktop, and mobile devices. Finally, for systems, PBS Pro optimizes HPC and is now dual-licensed — Open Source and Commercial — providing the best of both worlds to match organizational goals. In 2018, PBS Works is stronger, faster, and better!
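As a small, hedged illustration of the workload-manager side of this suite, the Python sketch below generates a PBS Pro batch script and submits it with qsub; the queue name, resource selection and solver command are site-specific placeholders:

```python
import os
import subprocess
import tempfile

# A minimal PBS Pro job script: two nodes, 24 cores each, 30-minute wall time.
job_script = """#!/bin/bash
#PBS -N demo_job
#PBS -q workq
#PBS -l select=2:ncpus=24:mpiprocs=24
#PBS -l walltime=00:30:00
#PBS -j oe
cd $PBS_O_WORKDIR
mpirun ./my_solver input.dat
"""

with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(job_script)
    path = f.name

# qsub prints the new job identifier (e.g. "1234.pbsserver") on success.
job_id = subprocess.check_output(["qsub", path]).decode().strip()
print("submitted:", job_id)
os.unlink(path)
```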
Heterogeneous Supercomputing and the POWER9 Processor
Dr H. Peter Hofstee
Distinguished Research Staff Member
IBM Research
Bio: Dr H. Peter Hofstee is a Dutch physicist and computer scientist who currently is a distinguished research staff member at the IBM Austin Research Laboratory, USA, and a part-time professor in Big Data Systems at Delft University of Technology, Netherlands. Hofstee is best known for his contributions to Heterogeneous computing as the chief architect of the Synergistic Processor Elements in the Cell Broadband Engine processor used in the Sony Playstation3 and the first supercomputer to reach sustained Petaflop operation. After returning to IBM research in 2011 he has focused on optimizing the system roadmap for big data, analytics, and cloud, including the use of accelerated compute. His early research work on coherently attached reconfigurable acceleration on POWER7 paved the way for the new coherent attach processor interface on POWER8. Hofstee is an IBM Master Inventor with more than 100 issued patents and a member of the IBM Academy of Technology.
Abstract: This talk looks at the state of heterogeneous supercomputing, and more specifically at the design and capabilities of large clusters based on heterogeneous POWER9 systems, most notably those combining POWER9 and NVIDIA processors. The talk will cover some recent results in conventional supercomputing, but will also address the use of such systems for newer applications such as Big Data analytics and AI. We also intend to discuss what other POWER9 system configurations are possible, including systems that would deliver as much as 100GB/s of network bandwidth, another 100GB/s of flash storage bandwidth, and a third 100GB/s to accelerators. We present some recent work on accelerator designs that could leverage this level of bandwidth.
Network Accelerated AI
Elad Wind
Director of Networking Solutions
Mellanox APAC
Bio: Elad Wind is currently Director of Technical Marketing and is a founding member of the Mellanox Singapore office where he promotes the adoption of Mellanox Ethernet Switching in Asia Pacific. Since 2010, Elad has served in various technical roles at Mellanox. Elad holds an MBA from Tel-Aviv University and ESSEC Business School Paris, and a Bachelor of Science degree in Electrical Engineering from the Technion, Israel.
Abstract: With AI implemented in more applications, from healthcare and smart cities to security and finance, organizations are adopting AI clouds to enhance their competitive advantage. Mellanox's highest-performing, multi-tenant, scalable cloud networks accelerate the world's leading artificial intelligence, machine learning and Big Data analytics platforms today. Mellanox innovations enable smart offloads such as RoCE and GPUDirect for TensorFlow and other leading frameworks, dramatically improving neural network training performance and overall machine learning applications. For purpose-built 100 Gigabit per second AI fabrics, high performance, low latency and end-to-end congestion avoidance become hard requirements.
Hyperion Research HPC Market Update
Alex Norton
Analyst
Hyperion Research Holdings, LLC
Bio: Alex Norton’s primary focus is understanding the HPC industry, the vendors, products, and users. He utilizes his background in applied mathematics to provide deeper analysis on the HPC data.
Specifically, Mr. Norton focuses on:
– Conducting numerical analysis of Hyperion Research's worldwide product, revenue, and technology HPC databases.
– Updating and redesigning various HPC database structures.
– Providing analytical and technical support for key Hyperion data products such as the quarterly HPC QView.
– Participating in surveys with key HPC experts, writing both short- and long-term assessments and reports, and delivering research results and analytical findings to clients.
– Collaborating with Hyperion Research team members in developing Hyperion Research's capabilities in key HPC sectors including AI/ML/DL, quantum computing, HPDA, etc.
Mr. Norton received his Bachelor of Arts in Mathematics from Washington University in St. Louis, with a concentration in applied mathematics.
Abstract: Hyperion Research's analyst, Alex Norton, will present an update on worldwide trends and events in the HPC market space, starting with an update on Hyperion Research (the former IDC HPC team). He will then give an overview of the HPC market, looking at competitive segments, vendor shares, and purchases by industry, processor and region around the world. Next, he will present market forecasts, followed by a deeper dive into the HPC market in Asia.
He will then present an overview of exascale plans around the world, show highlights of Hyperion's market tracking of big data/HPDA/AI/ML/DL, and conclude with an overview of research on the ROI and ROR from investments in HPC.
AI: Another Infrastructure?
Fumiki Negishi
Regional Sales Director for HPC/AI APJ, Data Center Group Sales
Intel
Bio: Fumiki Negishi is the Regional Sales Director for AI & HPC in Asia Pacific & Japan, in Intel's Data Center Group Sales organization. Fumiki joined Intel in 2011 as a Sales Executive for HPC & FSI and has been in his current position since 2015. Prior to joining Intel, Fumiki was at IBM Japan, where he held a variety of technical sales and consulting positions focused on utilizing advanced IT in industries ranging from FSI and retail to manufacturing. Fumiki has been engaged in key "first of a kind" projects, from winning and delivering the first and largest multi-architecture Linux cluster in Japan in 2004, to what is now the largest supercomputer system in Japan.
Abstract: AI is fundamentally a data analytics revolution. While the innovation is currently driven by software, AI at scale is more of an infrastructure challenge. For the few who already have a large and scalable infrastructure, AI is yet another workload, but one with unique characteristics. For those who do not, there is the new-but-old challenge of creating an infrastructure that is also sustainable. I will address what these unique characteristics are and how we should prepare to effectively host this new class of workloads.
Dell EMC – A day in life … 2030
Romain Bottier
Dell EMC HPC & AI Subject Matter Expert
DELL EMC
Bio: Romain Bottier is the Subject Matter Expert for HPC and AI at Dell EMC (South Asia). Romain comes from an HPC storage background, having spent close to 10 years working with HPC customers from different horizons across Europe and the Middle East, and for the past 5 years in Asia.
Abstract: Dell EMC has for decades been looking at driving human progress through technology democratization. Today, Dell EMC pictures the 2030 horizon and how HPC and AI, along with other technologies such as VR/AR, will change the way we live by simplifying human-machine interactions. Please join us for a glimpse of our 2030 vision and how Dell EMC collaborates with the research community and industry to reach it.
Watercooling and Impact on Performance and TCO
Matthew T. Ziegler
Director HPC and AI Architecture
Lenovo
Bio: Matthew T. Ziegler is a member of the global HPC and AI team at Lenovo and is currently serving as the Director of HPC and AI Architecture. His current role at Lenovo focuses primarily on working with system development to design and architect next-generation computing systems for high performance computing and AI. Before joining Lenovo in 2014 as part of the acquisition of the System business from IBM, Matthew spent 13 years as a HPC architect and Life Sciences/Bioinformatics subject matter expert. Matthew has a Bachelor of Arts in Molecular, Cellular and Developmental Biology from the University of Colorado, Boulder and spent 10 years working on various genomic research projects prior to moving solely into computing technology.
Abstract: Double the processing power every two years at the same cost. For decades, that concept, called Moore's law, has been the accepted norm. Add more transistors. Shrink the chipset. Increase the power. All these knobs and buttons could be combined and manipulated to deliver the performance trajectory that Moore's law promised. The last of these, increasing the power, has been the knob of choice for processor vendors in recent years. This steady increase in power from generation to generation has introduced a whole new slew of challenges to overcome in the datacenter. With the combination of high-wattage processors and co-processors, air cooling is reaching its limit. Liquid-cooling options have continued to evolve, and a side effect of heat extraction by liquid is an increase in overall performance from each liquid-cooled processor. In this session, we'll highlight the performance, density and TCO benefits that can be leveraged from liquid cooling versus traditional air cooling.
Convergence of Big Data and AI with HPC
Prof. Satoshi Matsuoka
Professor, Global Scientific Information and Computing Center & Dept. of Mathematical and Computing Sciences
Tokyo Institute of Technology
Bio: Satoshi Matsuoka has been a Full Professor at the Global Scientific Information and Computing Center (GSIC), a Japanese national supercomputing center hosted by the Tokyo Institute of Technology, and since 2016 a Fellow at the AI Research Center (AIRC), AIST, the largest national lab in Japan; in 2017 he also became head of RWBC-OIL (Open Innovation Lab on Real World Big Data Computing), a joint lab between the two institutions. He is the leader of the TSUBAME series of supercomputers and won the 2014 IEEE-CS Sidney Fernbach Memorial Award, one of the most prestigious awards in the field of HPC. From April 2018 he will become the director of RIKEN CCS, the top-tier HPC center that represents HPC in Japan, which currently hosts the K Computer and is developing the next-generation Post-K machine, along with a multitude of ongoing cutting-edge HPC research. He also has several important roles in research centers and projects in Singapore, including serving on the NSCC Steering Committee, as an advisory consultant for A*CRC-A*STAR, and as an advisory board member of AI Singapore.
Abstract: With the rapid rise of Big Data and AI (BD/AI) as a new breed of high-performance workloads on supercomputers, we need to accommodate them at scale, and traditional simulation-based HPC and BD/AI will converge. Our TSUBAME3 supercomputer at Tokyo Institute of Technology came online in August 2017 and became the greenest supercomputer in the world on the Green 500 ranking at 14.11 GFlops/W. The other aspect of TSUBAME3 is that it embodies various data- or "BYTES-oriented" features to allow for HPC-to-BD/AI convergence at scale, including significant scalable horizontal bandwidth as well as support for deep memory hierarchy and capacity, along with high flops in low-precision arithmetic for deep learning. Furthermore, TSUBAME3's technologies will be commoditized to construct one of the world's largest BD/AI-focused and "open-source" cloud infrastructures, ABCI (AI Bridging Cloud Infrastructure), hosted by AIST-AIRC (AI Research Center), the largest publicly funded AI research center in Japan. The performance of the machine is slated to be several hundred AI-Petaflops for machine learning; the true nature of the machine, however, is its BYTES-oriented optimization and acceleration in the memory hierarchy, I/O, the interconnect, etc., for high-performance BD/AI. ABCI will be online in Spring 2018, and its architecture, software, and the data center infrastructure design itself will be made open to drive rapid adoption and improvement by the community, unlike the concealed cloud infrastructures of today.
HPC Cooling Technologies
Prof. Satoshi Matsuoka
Professor, Global Scientific Information and Computing Center & Dept. of Mathematical and Computing Sciences
Tokyo Institute of Technology
Bio: Satoshi Matsuoka has been a Full Professor at the Global Scientific Information and Computing Center (GSIC), a Japanese national supercomputing center hosted by the Tokyo Institute of Technology, and since 2016 a Fellow at the AI Research Center (AIRC), AIST, the largest national lab in Japan; in 2017 he also became head of RWBC-OIL (Open Innovation Lab on Real World Big Data Computing), a joint lab between the two institutions. He is the leader of the TSUBAME series of supercomputers and won the 2014 IEEE-CS Sidney Fernbach Memorial Award, one of the most prestigious awards in the field of HPC. From April 2018 he will become the director of RIKEN CCS, the top-tier HPC center that represents HPC in Japan, which currently hosts the K Computer and is developing the next-generation Post-K machine, along with a multitude of ongoing cutting-edge HPC research. He also has several important roles in research centers and projects in Singapore, including serving on the NSCC Steering Committee, as an advisory consultant for A*CRC-A*STAR, and as an advisory board member of AI Singapore.
Abstract: Tokyo Institute of Technology's TSUBAME3.0, the 2017 successor to the highly successful TSUBAME2/2.5, deploys a series of innovative technologies, including ultra-efficient warm-water liquid cooling and power control, inherited from years of basic research such as JST-CREST UltraGreen computing and from deployments such as TSUBAME2.0, which became the "greenest production" supercomputer in the world in 2010, and the TSUBAME-KFC prototype, which was #1 in the world in power efficiency on the Green500 twice in a row in 2013 and 2014. TSUBAME3.0 became #1 on the Green500 list, for the first time as a multi-petascale supercomputer, surpassing the previous result by 50% at 14.11 Gigaflops/W. This is only about 1/3 of the 50 Gigaflops/W goal for exascale machines, indicating that investment in the technology has allowed continuous performance scaling of supercomputers.
Intelligent Case Retrieval System (ICRS)
Dr. Victor Chu
SPIRIT Centre, NTU
Bio: TBC
Abstract: Sharing of the Intelligent Case Retrieval System (ICRS) project under the Translational R&D Grants Programme.
Continuous Space Representations of Language for Email Processing and Question & Answering
Dr. Rafael E. Banchs
Institute for Infocomm Research, A*STAR
Bio: TBC
Abstract: Deep learning technologies are changing the scope of machine learning in several different ways. In this presentation, we will describe deep learning based technologies for continuous space representations of language currently being developed at I2R. These technologies can be applied to different natural language applications such as email pre-processing and auto-response generation, question & answering, and intelligent conversational agents.
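To make "continuous space representations" concrete, the toy sketch below represents each word as a dense vector, a sentence as the average of its word vectors, and measures semantic closeness with cosine similarity; the 4-dimensional vectors are invented for illustration, whereas real systems learn hundreds of dimensions from data:

```python
import numpy as np

# Made-up word vectors; a trained model would supply these.
word_vecs = {
    "refund":   np.array([0.9, 0.1, 0.0, 0.2]),
    "payment":  np.array([0.8, 0.2, 0.1, 0.1]),
    "invoice":  np.array([0.7, 0.3, 0.0, 0.2]),
    "meeting":  np.array([0.1, 0.9, 0.3, 0.0]),
    "schedule": np.array([0.0, 0.8, 0.4, 0.1]),
}

def embed(sentence):
    """Average the vectors of known words: a simple sentence representation."""
    vecs = [word_vecs[w] for w in sentence.split() if w in word_vecs]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

email = embed("please process my refund payment")

# Routing the email to the closer topic is a crude form of email triage.
print("billing :", cosine(email, embed("invoice payment refund")))
print("calendar:", cosine(email, embed("schedule meeting")))
```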
Panel Discussion: AI High-performance Computing in Fintech
Panelists:
Sinuhe Arroyo, CEO, Taiger
Swara Mehta, Senior Vice President, Technology Group – Data Science, GIC
Johnson Poh, Head of Data Science, DBS
Topic: Last year, global banks like Goldman Sachs and JP Morgan released strategy reports on harnessing alternative datasets for financial trading, while Blackrock announced a massive overhaul to focus on quantitative trading. We have seen AI being applied to all areas of finance, from KYC and AML compliance to fraud detection and customer identification. When high-performance GPU computing is made more readily accessible, what will be the implications for the financial industry? What possibilities are there on the horizon?
Panel Discussion: "Will AI High-performance Computing Replace Data Scientists?"
Panelists:
Drew Perez, Managing Director, Adatos
Topic: Evolutionary Algorithms, Meta-Learning, Transfer Learning and Hyperparameter Optimisation methods like Bayesian Optimisation have come together in recent years leading to solutions like AutoML and Datarobot that allow for the automated building of machine learning models. With the availability of high-performance GPU computing facilitating these methods to explore a greater search space in a shorter amount of time, does it mean the replacement of most data scientists?
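For context, the hedged sketch below shows the simplest cousin of these methods, a random search over a model's hyperparameter space with scikit-learn; the dataset and parameter ranges are arbitrary, and AutoML tools layer Bayesian optimisation, meta-learning and far larger search budgets on top of the same idea:

```python
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(10, 200),
        "max_depth": randint(2, 12),
        "min_samples_leaf": randint(1, 10),
    },
    n_iter=20,  # GPU-rich setups let automated tools explore far more configurations
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```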
Panel Discussion: AI High-performance Computing in Biotech
Panelists:
Aneesh Sathe, CEO, Qritive
Hossein Nejati, Chief Technology Officer, KroniKare
Huang Chao-Hui, Senior Bioinformatics Scientist, MSD
Topic: We have seen deep learning creating breakthroughs in all areas of biotech and healthcare, from medical image classification to the identification of precursor microRNAs. With high-performance GPU computing made more readily available and optimised for neural network training, what further possibilities might it open up? What are the implications for corporates, and the ways doctors, healthcare practitioners and scientists work in the future?