World's Fastest Supercomputer

IBM, Nvidia Build “World’s Fastest Supercomputer” for US Government
The DOE’s new Summit system features a unique architecture that combines HPC and AI computing capabilities.
https://www.datacenterknowledge.com/supercomputers/ibm-nvidia-build-world-s-fastest-supercomputer-us-government
IBM and DOE Launch World's Fastest Supercomputer
Frederic Lardinois (@fredericl) / Jun 8, 2018
https://techcrunch.com/2018/06/08/ibms-new-summit-supercomputer-for-the-doe-delivers-200-petaflops/
IBM and the U.S. Department of Energy's Oak Ridge National Laboratory (ORNL) today unveiled Summit, the department's newest supercomputer. IBM claims that Summit is currently the world's "most powerful and smartest scientific supercomputer," with a peak performance of a whopping 200,000 trillion calculations per second. That performance should put it comfortably at the top of the Top 500 supercomputer ranking when the new list is published later this month. It would also mark the first time since 2012 that a U.S.-based supercomputer has held the top spot on that list.
Summit, which has been in the works for a few years now, features 4,608 compute servers with two 22-core IBM Power9 chips and six Nvidia Tesla V100 GPUs each. In total, the system also features over 10 petabytes of memory. Given the presence of the Nvidia GPUs, it’s no surprise that the system is meant to be used for machine learning and deep learning applications, as well as the usual high performance computing workloads for research in energy and advanced materials that you would expect to happen at Oak Ridge.
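For scale, the component totals quoted elsewhere in this piece follow directly from that node specification. The quick back-of-the-envelope Python arithmetic below reproduces them; the ~7.8 teraflops double-precision figure per Tesla V100 is NVIDIA's published peak, used here only as a rough assumption.

# Back-of-the-envelope totals from Summit's published node specification.
nodes = 4608
cpus_per_node = 2            # 22-core IBM Power9 chips per node
gpus_per_node = 6            # NVIDIA Tesla V100 GPUs per node

total_cpus = nodes * cpus_per_node   # 9,216 processors
total_gpus = nodes * gpus_per_node   # 27,648 GPUs

# Assumption: ~7.8 TFLOPS double-precision peak per V100 (NVIDIA spec sheet figure).
fp64_tflops_per_gpu = 7.8
gpu_peak_petaflops = total_gpus * fp64_tflops_per_gpu / 1000

print(f"CPUs: {total_cpus:,}   GPUs: {total_gpus:,}")
print(f"GPU-only FP64 peak: ~{gpu_peak_petaflops:.0f} petaflops")   # ~216 PF, consistent with the ~200 PF claim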
IBM was the general contractor for Summit, and the company collaborated with Nvidia, Red Hat and InfiniBand networking specialist Mellanox on delivering the new machine.
“Summit’s AI-optimized hardware also gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery,” said Jeff Nichols, ORNL associate laboratory director for computing and computational sciences, in today’s announcement.
Summit is one of two of these next-generation supercomputers that IBM is building for the DOE. The second one is Sierra, which will be housed at the Lawrence Livermore National Laboratory. Sierra, which is also scheduled to go online this year, is less powerful, at an expected 125 petaflops, but both systems are significantly more powerful than any other machine in the DOE's arsenal right now.

Karl Freund
Karl Freund is a Moor Insights & Strategy Senior Analyst for deep learning & HPC
Summit is housed at the Oak Ridge National Laboratory in Oak Ridge, Tennessee. Capable of over 200 petaflops (200 quadrillion operations per second), Summit consists of 4,600 dual-socket IBM Power9 nodes connected by over 185 miles of fiber-optic cabling. Each node is equipped with 6 NVIDIA Volta Tensor Core GPUs, delivering total throughput that is 8 times faster than its predecessor, Titan, for double-precision tasks, and 100 times faster for the reduced-precision tasks common in deep learning and AI. China has held the top spot in the Top 500 for the last 5 years, so this brings the virtual HPC crown home to the USA.

Some of the specifications are truly amazing: the system circulates cooling water at a rate equivalent to 9 Olympic pools per day, and as an AI supercomputer, Summit has already achieved (limited) "exascale" status, delivering 3 exaflops at the reduced precision used for AI. What may be more important, though, is the science that this new system will enable; it is already at work on drug discovery using quantum chemistry, chronic pain analysis, and the study of mitochondrial DNA.
For those who cannot afford a full-fledged $100M supercomputer, NVIDIA also announced the new HGX-2 chassis, available from many vendors, which can be connected to a standard server for some serious AI in a box. The HGX-2 supports 16 Volta GPUs, interconnected via the new NVSwitch fabric so they act as a single massive GPU, delivering 2 petaflops of performance for AI and HPC. As you can see, NVIDIA is paying a lot of attention to the idea of fusing AI with HPC.
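That 2-petaflops figure is a Tensor Core (mixed-precision) number; assuming NVIDIA's published ~125 teraflops Tensor Core peak per V100, a one-line check in Python:

# 16 Volta GPUs x ~125 Tensor Core TFLOPS each (mixed precision, per NVIDIA's spec)
print(16 * 125 / 1000, "petaflops")   # -> 2.0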

The scientific advances in deep neural networks (DNNs) for HPC took center stage in the announcement. As I have noted in previous articles, DNNs are showing tremendous promise in High Performance Computing (HPC): DNNs can be trained on massive datasets created by running traditional simulations on supercomputers, and the resulting AI can then predict the outcomes of new simulations with startling accuracy, in roughly 1/1000th the time and cost. The good news for NVIDIA is that both supercomputing and AI are powered by, you guessed it, NVIDIA GPUs. With NVIDIA's new platforms, scientists have even more tools for using GPU hardware and developing GPU software.
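To make the simulation-plus-surrogate idea concrete, here is a minimal, hypothetical Python sketch. The function expensive_simulation is a stand-in for a real HPC solver, and the small scikit-learn network only illustrates the pattern of training on simulation outputs and then predicting new cases cheaply; it is not any method actually used on Summit.

# Sketch: train a small neural-network "surrogate" on simulation results,
# then use it to estimate new cases far faster than rerunning the solver.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(params):
    # Hypothetical stand-in for a costly physics solver.
    x, y = params
    return np.sin(3 * x) * np.cos(2 * y) + 0.1 * x * y

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(2000, 2))                     # sampled input parameters
y_train = np.array([expensive_simulation(p) for p in X_train])   # "supercomputer" outputs

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

X_new = rng.uniform(-1, 1, size=(3, 2))
print("surrogate:", surrogate.predict(X_new))                    # near-instant estimates
print("solver:   ", [expensive_simulation(p) for p in X_new])    # ground truth for comparison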

The announcement of Summit as the world's fastest computer was not a surprise; as a public project funded by the U.S. DOE, Summit has frequently been the subject of discussion. What is significant is that NVIDIA and the DOE believe the future of HPC will be infused with AI, all running on the same hardware. The NVIDIA GPUs deliver 95% of Summit's performance, cementing the legitimacy and leadership of GPU-accelerated computing. HGX-2 makes that an affordable path for many researchers and cloud providers, while Summit demonstrates the art of the possible and provides a public platform for research. Taken together, AI plus HPC also paves the way for future growth for NVIDIA.
https://www.forbes.com/sites/moorinsights/2018/06/12/ibm-and-nvidia-reach-the-summit-the-worlds-fastest-supercomputer/#69ebabcd31af

The Summit system, with 9,216 IBM processors boosted by 27,648 Nvidia graphics chips, takes up as much room as two tennis courts and draws as much power as a small town. It'll be used for civilian research into subjects like materials science, cancer, fusion energy, astrophysics and the Earth's changing climate.

Summit can perform 200 quadrillion (200,000 trillion) calculations per second, or 200 petaflops. Until now, the world’s fastest supercomputer has been the Sunway TaihuLight system at the National Supercomputing Center in Wuxi, China, capable of 93.01 petaflops.

Graphene: The Future of Computing?

Could make your computer a thousand times (1000x) faster.
Superconductive and ultra-thin.
Conducts electricity 10 times better than copper and 250 times better than silicon.
Researchers built a transistor (circuit) from graphene and, by applying current, achieved a 1000-fold increase in performance.

Graphene Computers Work 1000 Times Faster, Use Far Less Power

Graphene-coated copper could dramatically boost future CPU performance
• By Joel Hruska on February 21, 2017


IBM builds graphene chip that’s 10,000 times faster, using standard CMOS processes


While current chips are made of silicon, the prototype processor is made of carbon nanotubes (a graphene-related form of carbon), with resistive RAM (RRAM) layered over it. The team claims this makes for "the most complex nanoelectronic system ever made with emerging nanotechnologies," creating a 3D computer architecture.
If you follow a lot of tech circles, you may have seen graphene (a super-thin layer of carbon arranged in such a way that it has electrical properties verging on miraculous) come up in the news quite a bit, receiving plaudits for its extraordinary electrical conductivity and its possible applications in several different technologies. What you haven't heard much about is the ugly part of graphene: it is impossible to build semiconductor transistors out of the material as it stands now, since it has no electrical band gap to speak of. If that sounds confusing, that's alright. That's what this article is for!
Band Gap? What’s That?
A band gap is a tiny energy gap between a material's conduction band and its valence band that tells us how readily current will flow between the two. It acts like a little gatekeeper that keeps an electrical charge in one place until it is "turned off." Virtually all computer chips are made of a semiconductor material, which means the material has a moderate band gap, so it neither conducts electricity too readily nor rejects every electrical charge. This comes down to basic molecular structure, so there is quite a bit of chemistry involved in building a chip.
Very large band gaps exist in materials like rubber, which resist electrical current so strongly that they would rather catch fire than carry a charge. That's why rubber is used to insulate the wires inside cables. Materials with a negligible band gap are known as conductors, while those with virtually no band gap whatsoever are known as superconductors.
Today most chips are made of silicon, which serves as a very sturdy and reliable semiconductor. Remember, we need semiconductors that can quickly be switched on and off at will, not superconductors, which lose the charge they were given the moment the band no longer supplies it.
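To make the conductor / semiconductor / insulator distinction concrete, here is a small illustrative Python sketch. The band-gap values and thresholds are rough, approximate figures chosen for illustration (graphene has essentially no gap, silicon sits around 1.1 eV, and rubber-like insulators are several eV), not authoritative materials data.

# Rough illustration: classify materials by approximate band gap (in eV).
# Values and thresholds are approximate and for illustration only.
BAND_GAPS_EV = {
    "graphene": 0.0,   # no band gap: conducts well but cannot be switched off
    "copper":   0.0,   # metallic conductor
    "silicon":  1.1,   # the classic semiconductor used in chips
    "rubber":   8.0,   # very large gap: an insulator, good for cable sheathing
}

def classify(gap_ev):
    if gap_ev < 0.1:
        return "conductor (no usable on/off switch for a transistor)"
    if gap_ev < 4.0:
        return "semiconductor (can be switched on and off, good for chips)"
    return "insulator (blocks current)"

for material, gap in BAND_GAPS_EV.items():
    print(f"{material:10s} ~{gap:.1f} eV -> {classify(gap)}")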
Why Is Graphene Not Good for Building Chips?
As I mentioned earlier, graphene is an extremely efficient conductor of electricity but nothing much more than that. It can push a charge at an incredible speed, but it cannot retain it. In a binary system you may need to retain data so that your running programs don’t just close the instant they open. It’s important in a RAM chip, for example, to ensure that the data inside it can stay put and remain readable for the foreseeable future. When a transistor is in the “on” state, it registers a “1.” In an “off” state, it registers a “0.” A superconductor would be unable to “switch off” because the difference between “on” and “off” voltage is so small (because of the tiny band gap I mentioned earlier).
That’s not to say that graphene wouldn’t have a place in a modern-day computer. It certainly could be used to deliver information from one point to another quickly. Also, if supplemented by other technology, we could possibly see graphene used in transistors at some point in the future. Whether that would be an efficient investment of capital is up to the industry to decide.
There’s Another Material! (One I believe has more promise)
One of the problems with silicon is its inflexibility when working on ultra-thin surfaces. A piece of silicon can only be shaved so thin before it stops being functional. That's why we were exploring the use of graphene in the first place (it's a single atom thick). Since graphene may not prove promising without truckloads of money being invested in its development, scientists began trying other materials, one of which is titanium trisulfide (TiS3). The material not only functions even at the thickness of a single molecule, but it also has a band gap very similar to that of silicon.
The implications of this are far-reaching for miniature technology products which pack a vast amount of hardware in a very constrained amount of space. Thinner materials will also dissipate heat more efficiently, making them favorable for large power-hungry computers.
Graphene As A Promising Material For Computer Processors
Since graphene technology was introduced, it has gained popularity as one of the most advanced materials with diverse applications. It can be used in mechanical and biological engineering applications. Car manufacturers are taking advantage of its light weight and strength, making it an excellent choice of material to be combined with polymer composites.
It is also a popular choice for energy storage and for solar cells. Recently, however, it has also generated buzz because of the introduction of the graphene processor, which is expected to improve computing in more ways than one.
IBM Taking Advantage of Graphene
Among others, IBM is one company that has expressed a serious commitment to building a graphene processor, which is expected to redefine the future of computers.
By 2019, the company expects to develop a processor that is smaller and significantly more powerful than what is available on the market today. The goal is to build graphene transistors that measure only 7 nanometres yet are unrivalled in the power they can provide to the computers of the future. As a demonstration of how seriously it is pursuing this component of a graphene CPU, the company has invested $3 billion to fund the development of the technology and to refine it before it is finally introduced to the market.