Author: boat, Chen Ping

The world’s largest chip now has a second generation: WSE-2 will ship in the third quarter of this year. Built on a 7-nanometer process, WSE-2 packs 2.6 trillion transistors.

A large number of chips aimed at accelerating artificial intelligence and machine learning workloads have entered the market in recent years. Tailored to different machine learning algorithms, these chips focus on a few key areas, but most share a common limitation: chip size.

Two years ago, Cerebras unveiled a revolution in chip design: the Wafer Scale Engine (WSE), a chip with 1.2 trillion transistors, 57 times as many as Nvidia’s Titan V GPU, and physically larger than an iPad. Cerebras’ “brute-force aesthetics” once prompted the exclamation: “WSE has more transistors than the human brain has neurons!” The chip made Cerebras well known in the industry.

Today, the company announced that its next-generation chip, Wafer Scale Engine 2 (WSE-2), will be available in the third quarter of this year. Built on a 7nm process, it doubles the transistor count to 2.6 trillion and contains 850,000 AI cores.

WSE-2: 46,225 square millimeters, 2.6 trillion transistors. Image source: https://cerebras.net/

Cerebras has long been committed to scaling up silicon to attack machine learning problems. When Andrew Feldman founded the company in 2015, training neural networks took a long time; large networks took weeks. The biggest bottleneck was that data had to shuttle back and forth between the processor and external DRAM, wasting both time and energy. The WSE R&D team’s insight: expand the chip until it can hold all the needed data alongside the AI processor cores. With the growth of natural language processing, image recognition and other fields, neural networks have become enormous, and the AI field needs a correspondingly enormous chip. How large?
As large as possible: a whole wafer, 46,225 square millimeters. CEO Feldman said: “When you want to make changes, you always run into physical design challenges. Everything is tied to geometry. It’s really difficult, but TSMC has been an extraordinary partner.”

The move to 7nm is itself a big step forward, but according to Feldman the company has also improved the microarchitecture of its AI cores. He declined to give details, but said that after working with customers for more than a year, Cerebras has learned lessons and folded them into the new core.

WSE-2: 7nm process technology, 2.6 trillion transistors

WSE-2 uses TSMC’s 7nm process technology. This allows the logic, and to some extent the SRAM, to be scaled down, and WSE-2 now has 850,000 AI cores. The die size is unchanged from WSE, but nearly every headline figure has roughly doubled, as shown in the following comparison (figures as stated in this article):

- Process: 16nm (WSE-1) → 7nm (WSE-2)
- Transistors: 1.2 trillion → 2.6 trillion
- AI cores: 400,000 → 850,000
- On-chip SRAM: 18 GB → 40 GB
- Die size: 46,225 square millimeters (unchanged)

Like the first generation, WSE-2 packs hundreds of thousands of AI cores onto a 46,225-square-millimeter piece of silicon. WSE-2 has 850,000 cores and 2.6 trillion transistors; by comparison, the next-largest AI chip on the market is about 826 square millimeters with 0.054 trillion transistors. Against the 40 MB of on-chip memory of NVIDIA’s Ampere A100, Cerebras puts 40 GB of SRAM on board, 1,000 times as much.

The picture shows WSE-1; WSE-2 looks the same but doubles the core count. Its cores are connected by a 2D mesh, each with an FMAC datapath. The goal of Cerebras and WSE is to provide a single platform, built on innovative patented design, that allows ever-larger processors for AI computing; it has now been extended to a wider range of HPC workloads.

WSE-2 builds on the first generation

The first-generation WSE chip has 1.2 trillion transistors, 57 times as many as NVIDIA’s flagship Titan V GPU. It was built on TSMC’s 16-nanometer process technology.
It covers an area of 46,225 square millimeters and contains 400,000 cores. On-chip memory is up to 18 GB, power consumption is 15,000 watts (roughly the power of 6 induction cookers), memory bandwidth is 9 PB/s, and communication fabric bandwidth is 100 Pb/s. In addition, the first-generation WSE also delivered a 3,000-fold speedup and a 10,000-fold expansion in memory bandwidth.

The key to the WSE-2 design is a custom graph compiler, which takes PyTorch or TensorFlow models, maps each layer to a physical region of the chip, and lets data flow through the computation asynchronously. With such a large processor, no energy is wasted shuttling data off-chip. The compiler and processor are also designed with sparsity in mind, enabling high utilization regardless of batch size, and allowing parameter-search algorithms to run simultaneously.

How does WSE-2 compare with the first generation?

The size of the chip itself has not changed: 300 mm is still the largest wafer size in volume production, so the overall WSE-2 die is the same, but the AI core count has doubled. WSE-2 is still laid out as a 7×12 rectangular array of dies.

Comparison of WSE-2 (left) and WSE-1 (right).

In addition, CS-2, the computer system that hosts WSE-2, has not changed much. CS-2 is designed for fast, flexible training and low-latency data center inference. CS-2 is powered by WSE-2 and, compared with other data center AI solutions, offers higher compute density, faster memory, higher-bandwidth interconnect, and programming through leading ML frameworks.

Cerebras, a fast-growing startup

Cerebras Systems was founded in California in 2015. Co-founder and CEO Andrew Feldman previously founded the server chip company SeaMicro.

Andrew Feldman, co-founder and CEO of Cerebras.

After several years of rapid growth, the company has roughly doubled in size, with about 300 engineers in Silicon Valley, San Diego, Toronto and Tokyo.
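The graph-compiler idea described earlier (each network layer is assigned its own physical region of the chip, with activations streaming between regions as an asynchronous pipeline) can be sketched in a few lines. This is a toy illustration under assumptions of my own, not Cerebras’ actual compiler: the `place_layers` function, the layer names, and the per-layer cost numbers are all hypothetical.

```python
# Toy sketch of layer-to-region placement (NOT the Cerebras compiler):
# give each layer a block of cores proportional to its share of compute,
# so pipeline stages finish at roughly the same rate.

TOTAL_CORES = 850_000  # WSE-2 core count, from the article

def place_layers(layer_costs, total_cores=TOTAL_CORES):
    """Map each layer name to a half-open core range [start, end),
    sized proportionally to that layer's relative compute cost."""
    total_cost = sum(layer_costs.values())
    placement, start = {}, 0
    for name, cost in layer_costs.items():
        n = max(1, int(total_cores * cost / total_cost))  # at least one core
        placement[name] = (start, start + n)
        start += n
    return placement

# Hypothetical relative compute costs for a small CNN.
costs = {"conv1": 2.0, "conv2": 4.0, "fc": 1.0, "softmax": 0.1}
for name, (lo, hi) in place_layers(costs).items():
    print(f"{name}: cores {lo}..{hi - 1} ({hi - lo} cores)")
```

The proportional sizing is the point of the sketch: if every stage of the pipeline has compute matched to its workload, no region sits idle waiting on another, which is one way a dataflow design can keep utilization high independent of batch size.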
Reference links:
https://www.anandtech.com/show/16626/cerebras-unveils-wafer-scale-engine-two-wse2-26-trillion-transistors-100-yield
https://www.chinaventure.com.cn/news/114-20190820-347306.html
https://cerebras.net/product/
https://spectrum.ieee.org/tech-talk/semiconductors/processors/cerebras-giant-ai-chip-now-has-a-trillions-more-transistors