Tokyo Tech, Tohoku University, Fujitsu, & RIKEN Collaborate to Develop Distributed Training of Large Language Models
TOKYO, May 22, 2023 - (JCN Newswire) - Tokyo Institute of Technology (Tokyo Tech), Tohoku University, Fujitsu Limited, and RIKEN today announced that they will embark on the research and development of distributed training of Large Language Models (LLMs) (1) on the supercomputer Fugaku in May 2023, within the scope of the initiatives for the use of Fugaku defined by Japanese policy.
LLMs are deep learning AI models that serve as the core of generative AI, including ChatGPT (2). The four organizations aim to improve the environment for creating LLMs that can be widely used by academia and companies, to contribute to improving AI research capabilities in Japan, and to increase the value of utilizing Fugaku in both academic and industrial fields by disclosing the results of this R&D in the future.
Background
While many anticipate that LLMs and generative AI will play a fundamental role in the research and development of technologies for security, the economy, and society overall, the advancement and refinement of these models will require high-performance computing resources that can efficiently process large amounts of data.
Tokyo Tech, Tohoku University, Fujitsu, and RIKEN are undertaking an initiative to this end that will focus on research and development toward distributed training of LLMs.
Implementation period
From May 24, 2023 to March 31, 2024
*Within the period of the initiative for the use of Fugaku defined by Japanese policy
Roles of each organization and company
The technology used in this initiative will allow the organizations to efficiently perform large-scale language model training in the large-scale parallel computing environment of the supercomputer Fugaku. The roles of each organization and company are as follows:
Tokyo Institute of Technology: Oversight of overall processes, parallelization and acceleration of LLMs
Tohoku University: Collection of training data, selection of models
Fujitsu: Acceleration of LLMs
RIKEN: Distributed parallelization and acceleration of communication for LLMs, acceleration of LLMs
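The release does not describe the project's actual implementation, but the core idea behind distributed (data-parallel) training on a massively parallel machine like Fugaku can be illustrated in a minimal, self-contained sketch. Here, each worker computes gradients on its own shard of a batch, and the gradients are then averaged across workers (the role an all-reduce collective plays in real systems such as MPI or NCCL). The toy linear model and all function names below are our own illustrative assumptions, not the project's code:

```python
# Illustrative sketch of data-parallel training (assumed, simplified):
# each worker holds a full copy of the model weights, computes gradients
# on its shard of the batch, then gradients are averaged so all replicas
# apply the same update and stay synchronized.

def local_gradient(w, shard):
    # Toy linear model y = w * x with squared-error loss; returns the
    # mean gradient dL/dw over this worker's shard of (x, y) pairs.
    g = 0.0
    for x, y in shard:
        g += 2 * (w * x - y) * x
    return g / len(shard)

def all_reduce_mean(grads):
    # Stand-in for an MPI/NCCL all-reduce: average gradients across workers.
    return sum(grads) / len(grads)

def data_parallel_step(w, batch, n_workers, lr=0.01):
    # Split the global batch into one shard per worker (round-robin).
    shards = [batch[i::n_workers] for i in range(n_workers)]
    # Each worker computes its local gradient independently.
    grads = [local_gradient(w, s) for s in shards]
    # Synchronize: every replica applies the same averaged gradient.
    return w - lr * all_reduce_mean(grads)
```

With equally sized shards, one data-parallel step produces exactly the same update as a single worker processing the whole batch, which is why this scheme scales training without changing the optimization result.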
Future Plans
To support Japanese researchers and engineers in developing LLMs in the future, the four organizations plan to publish the research results obtained through the scope of the initiatives for the use of Fugaku defined by Japanese policy on GitHub (3) and Hugging Face (4) in fiscal 2024. It is also anticipated that many researchers and engineers will participate in improving the basic model and in new applied research, creating efficient methods that lead to the next generation of innovative research and business results.
The four organizations will additionally consider collaborations with Nagoya University, which develops data generation and learning methods for multimodal applications in industrial fields such as manufacturing, and CyberAgent, Inc., which provides data and technology for building LLMs.
Comment from Toshio Endo, Professor, Global Scientific Information and Computing Center, Tokyo Institute of Technology:
"This collaboration will integrate Tokyo Tech's and RIKEN's parallelization and acceleration of large-scale language models using the supercomputer "Fugaku", Fujitsu's development of high-performance computing infrastructure software for Fugaku and performance tuning of AI models, and Tohoku University's natural language processing technology. In collaboration with Fujitsu, we will also utilize the small research lab we established under the name "Fujitsu Collaborative Research Center for Next Generation Computing Infrastructure" in 202X. We look forward to working together with our colleagues to contribute to the improvement of Japan's AI research capabilities, taking advantage of the large-scale distributed deep learning capabilities offered by "Fugaku"."
Comment from Kentaro Inui, Professor, Graduate School of Information Sciences, Tohoku University:
"We aim to build a large-scale language model that is open-source, available for commercial use, and primarily based on Japanese data, with transparency in its training data. By enabling traceability of the training data, we anticipate that this will facilitate research robust enough to scientifically verify issues common to AI, such as the black box problem, bias, misinformation, and so-called "hallucination" phenomena. Leveraging the insights into deep learning for Japanese natural language processing developed at Tohoku University, we will construct large-scale models. We look forward to contributing to the enhancement of AI research capabilities in our country and beyond by sharing the research results obtained through this initiative with researchers and developers."
Comment from Seishi Okamoto, EVP, Head of Fujitsu Research, Fujitsu Limited:
"We are excited for the chance to leverage the powerful, parallel computing resources of the supercomputer Fugaku to supercharge research into AI and advance the research and development of LLMs. Going forward, we aim to incorporate the fruits of this research into Fujitsu's new AI Platform, codenamed "Kozuchi," to deliver paradigm-shifting applications that contribute to the realization of a sustainable society."
Comment from Satoshi Matsuoka, Director, RIKEN Center for Computational Science:
"The A64FX (5) CPU is equipped with an AI acceleration function known as SVE. However, software development and optimization are essential to maximize its capabilities and to utilize it for AI applications. We feel that this joint research will play an important role in bringing together experts in LLMs and computer science in Japan, including RIKEN R-CCS researchers and engineers, to advance techniques for building LLMs on the supercomputer "Fugaku". Together with our collaborators, we will contribute to the realization of Society 5.0."
Project name
Distributed Training of Large Language Models on Fugaku (Project Number: hp230254)
(1) Large Language Models:
Neural networks with hundreds of millions to billions of parameters that have been pre-trained on large amounts of data. GPT in language processing and ViT in image processing are representative examples of recent large-scale models.
(2) ChatGPT:
A large-scale language model for natural language processing developed by OpenAI that supports tasks such as interactive dialogue and automatic text generation with high accuracy.
(3) GitHub:
A platform used to publish open-source software around the world.
(4) Hugging Face:
A platform used to publish AI models and datasets around the world.
(5) A64FX:
An Arm-based CPU developed by Fujitsu and installed in the supercomputer Fugaku.
About Tokyo Institute of Technology
Tokyo Tech stands at the forefront of research and higher education as the leading university for science and technology in Japan. Tokyo Tech researchers excel in fields ranging from materials science to biology, computer science, and physics. Founded in 1881, Tokyo Tech hosts over 10,000 undergraduate and graduate students per year, who develop into scientific leaders and some of the most sought-after engineers in industry. Embodying the Japanese philosophy of "monotsukuri," meaning "technical ingenuity and innovation," the Tokyo Tech community strives to contribute to society through high-impact research. www.titech.ac.jp/english/
About Tohoku University
Tohoku University is home to 18,000 students across 10 faculties, 15 graduate schools and six research institutes. About 10 percent of the students come from abroad, contributing to one of the most cosmopolitan academic environments in Japan. Tohoku University's excellent learning environment, international outlook, and research influence have led to it being conferred the status of a Designated National University by the Japanese government in June 2017. For the last four years, it has also held the number one spot in Times Higher Education's annual ranking of Japanese universities, a list that highlights institutional resources, quality of education and overall student experience.
About Fujitsu
Fujitsu's purpose is to make the world more sustainable by building trust in society through innovation. As the digital transformation partner of choice for customers in over 100 countries, our 124,000 employees work to resolve some of the greatest challenges facing humanity. Our range of services and solutions draw on five key technologies: Computing, Networks, AI, Data & Security, and Converging Technologies, which we bring together to deliver sustainability transformation. Fujitsu Limited (TSE:6702) reported consolidated revenues of 3.7 trillion yen (US$28 billion) for the fiscal year ended March 31, 2023 and remains the top digital services company in Japan by market share. Find out more: www.fujitsu.com.
About RIKEN
As the leadership center of high-performance computing, the RIKEN Center for Computational Science (R-CCS) explores the "Science of computing, by computing, and for computing." The outcomes of this exploration, such as open-source software technologies, are its core competence, and the R-CCS strives to enhance that competence and to promote these technologies throughout the world. The R-CCS, in collaboration with Fujitsu, developed Fugaku, the most powerful supercomputer in the world. Full operations of Fugaku began in March 2021, offering an orders-of-magnitude increase in computing capabilities and synergy with other IT ecosystems, such as big data and artificial intelligence. In the past, the R-CCS operated the K computer (2012-2019), which produced a multitude of world-leading scientific and engineering results, not only by academia but also by industry.