Latest Tech 27 March 2019

Huawei introduces the P30 and P30 Pro. Ryzen CPUs from the 1000 and 2000 series are starting to show up at bargain prices now that the Ryzen 3000 series is expected soon. The motherboards that will expose all of the Ryzen 3000 capabilities will be the X470/X570 series; older boards should still work, but without the full feature set. Expect many announcements at the E3 expo this summer, as well as at shows such as Computex 2019.

CES Show recap

Just a quick review and recap of some other noteworthy items from the last CES show. I have already written about AMD and NVidia, but they were not the be-all and end-all of the show. Here is a list of some other products and manufacturers.

Lenovo Yoga S940

Lenovo showed off a swathe of great new Yoga laptops at CES 2019, and one favorite is the well-built Yoga S940. It’s a wonderfully slim and light laptop with a Contour Glass display that comes in up to 4K resolution with HDR and Dolby Vision support.

Built out of aluminum, it weighs 2.64 pounds, is just 0.48 inches (12.2mm) thick, and comes with an 8th generation Intel Core i7 processor, up to 16GB of RAM and 1TB of SSD storage.

The Lenovo Yoga S940 goes on sale in May 2019, starting at $1,499 (US).

Acer Swift 7

Acer impressed many at CES 2019 by somehow making its teeny Swift 7 laptop even smaller and lighter.

In its aim to make the ‘world’s thinnest laptop’, Acer’s flagship Ultrabook for 2019 is just 9.95mm (0.39 inches) thin and weighs in at just 890 grams (1.96 pounds).

Meanwhile, a smaller chassis allows the Acer Swift 7 to shrink its bezels even further this year, achieving a screen-to-body ratio of 92%.

It’s still a sturdy laptop, though, with a chassis made of magnesium-lithium and magnesium-aluminum alloys. Acer claims that these materials are two to four times tougher than regular aluminum, while also being up to 35% lighter. Thin, light and powerful – there’s a lot to be impressed with on the new Acer Swift 7, and it’s one of the best laptops seen at this year’s CES.

LG gram 17

Speaking of thin and light laptops, LG wowed many as well with the LG gram 17, an incredibly light 17-inch laptop that weighs just 1.3kg – which is lighter than many other smaller laptops. You’re not going to see another 17-inch laptop that’s this light in 2019.

Its 16:10 display has a “2K” resolution of 2,560 x 1,600, and the laptop packs a Whiskey Lake Intel Core i7 processor, 16GB of RAM, SSD storage, Thunderbolt 3, and even ports, such as a microSD reader, that thicker laptops often leave out.

It will go on sale for $1,700 (US) later this year.

Alienware Area-51m

The Alienware Area-51m is another innovative gaming laptop from CES 2019. Unlike other gaming laptops, the Area-51m allows its processor and graphics card to be upgraded, making it a future-proof machine that will keep playing games for years.

It answers one of our biggest complaints about gaming laptops – the lack of upgradability – and it does so with Dell’s customary high build quality and attractive design. I have been looking into this one and may add it to my shopping list.

LG Signature Series OLED TV R (OLED65R9)

At one time, seeing a TV appear out of thin air would have been something straight out of a magic act. But LG’s new rollable Signature Series OLED TV R isn’t magic – it’s engineering and display technology taken to the nth degree. While some other 2019 TVs can do 8K and sit flush on the wall, only the 65R9 harnesses OLED’s natural flexibility to roll up on itself when you’re done watching it. Tech geeks beware – it is contagious.

Panasonic GZ2000 4K OLED TV

Just like the Las Vegas strip itself, the TVs of CES 2019 have been all about the glitz. Whether it’s 8K resolutions or rollable displays, the ‘wow’ factor may have been upped, but there’s a sense that it’s been a game of spec-chasing and headline-baiting. The Panasonic GZ2000 4K OLED, on the other hand, is a pure movie-lover’s dream – there are no gimmicks here, just a commitment to the best possible picture quality.

For further details, or just to check out the latest computer tech, I like to go to the following: AnandTech at https://www.anandtech.com/ and Tom's Hardware at https://www.tomshardware.com/

CES Show

Sorry it has taken so long to write. Things have been insane for me and my schedule since last December (2018). I have a full-time job in addition to four different businesses and activities I am involved in, and I was just overwhelmed with demands on my time. Now, about the last CES show in Vegas. There were, as usual, a large number of vendors and manufacturers, but I would have to say that the most anticipated announcements were from AMD and NVidia. As expected, they both introduced new video cards, and AMD also introduced its new Ryzen 3000 series of processors. As with the previous Ryzen 1000 and 2000 series, the Ryzen 3000 lineup will come in Ryzen 3, 5 and 7 tiers, plus a new Ryzen 9.

AMD is expected to bump the Ryzen 3 3000 models from four cores up to six, the Ryzen 5 3000 chips from six to eight, and the Ryzen 7 3000 parts from eight to twelve.

The introduction of Ryzen 9 3000 processors with up to 16 cores and 32 threads is perhaps the most surprising aspect of the recent leaks because it would effectively push the mainstream AM4 platform into Threadripper territory, much like Intel has encroached upon its own HEDT lineup with its mainstream Core i9-9900K.

The Ryzen Threadripper 2990WX (32 cores) and 2970WX (24 cores) are also getting a lot of press. The competition between Intel and AMD has become unbelievable. While Intel has traditionally had the performance advantage, it now simply boils down to pricing and whether you are a gamer or a business-oriented computer user. AMD also has an edge in that it is now able to deliver 7nm parts with greater performance.

Intel has tried to counter with the Core i9 and the 24-core workstation Xeon CPUs. However, the workstation solution is far too expensive, and the Core i9 still uses an older process at a higher price. Intel may soon be completely irrelevant if it does not get its processors onto more modern process nodes (7nm). Other developments at CES were, of course, related to graphics cards: AMD announced its new Radeon VII cards.

At CES 2019, AMD announced the Radeon VII, marking a return to high-end consumer graphics. It is based on a 7nm architecture and promises to compete with the Nvidia GeForce RTX 2080 at a similar price point. You won’t have to wait long to get your hands on it either – the Radeon VII hits the streets on February 7.

And, if you can’t justify ponying up the cash for a Vega card, AMD already offers the Ryzen 3 2200G and Ryzen 5 2400G APUs with integrated Vega graphics, which launched on February 12, 2018.
AMD’s Navi GPU architecture will be on its way later this year, with the latest speculation suggesting a July release.

AMD’s Navi design will be the first genuinely new Radeon chip since Vega launched onto our desktops a year and a half ago. That architecture has, though, been given a fresh lick of paint with the AMD Radeon VII gaming GPU.

But the next-gen 7nm Navi GPUs will most likely be specced to dominate the mid-range market, taking on the GTX 1660 Ti et al, rather than going toe-to-toe with Nvidia’s top Turing GPUs at the high end.

Lisa Su, AMD’s popular CEO, said they are committed to releasing the new Navi graphics cards this year, and the latest rumour has them shipping alongside the Ryzen 3000 processors. That could be a real challenge to Intel and NVidia.

But why should you wait for the next 7nm Radeon GPU? What will AMD’s next-gen GPUs deliver to make them a worthy upgrade from the Polaris design, and will they really arrive alongside the new Zen 2 CPUs?
The 14nm Vega 10 and Polaris 10 GPUs, used in the RX Vega and RX 500-series cards respectively, hold a total of 4,096 Stream Processors for Vega and 2,304 inside the Polaris chip. Thanks to the 7nm process, AMD could fit roughly 1.6x more logic into the same die space with Navi… if TSMC’s numbers are to be believed.
Pricing all depends on whether AMD targets the high-end or mid-range markets with Navi. This will most likely also affect whether AMD uses expensive HBM2 memory or GDDR6. A midrange RX 680 could be somewhere around $330 to $400 at most.
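Purely as a back-of-the-envelope exercise, and not as announced specs, here is what that claimed ~1.6x logic-density gain would imply if AMD simply scaled today's shader counts into the same die area:

```python
# Hypothetical scaling of current shader counts by TSMC's claimed 7nm density gain.
# These are illustrative numbers only, not leaked or announced Navi specifications.
vega10_sps = 4096      # RX Vega (14nm) stream processors
polaris10_sps = 2304   # Polaris 10 / RX 500-series (14nm) stream processors
density_gain = 1.6     # rough logic-density improvement claimed for 7nm

print(int(vega10_sps * density_gain))     # ~6553 shaders in a Vega-sized 7nm die
print(int(polaris10_sps * density_gain))  # ~3686 shaders in a Polaris-sized 7nm die
```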

USB What ?

The industry now appears to be doing the same thing to USB that it recently did to Wi-Fi. As described in my previous article on that subject, all the previous versions have been renamed for supposedly more logical and easier understanding by the public. Well, get ready for more of the same where USB is concerned. The USB 3.0 standard was eventually re-branded to USB 3.1 Gen 1. Now USB 3.2 is about to come out, and the USB-IF (USB Implementers Forum) has decided it is necessary to spell out the various versions for the public.

So here goes: USB 3.1 Gen 1 (formerly known as USB 3.0), which offers speeds up to 5 Gbps, will be rebranded to USB 3.2 Gen 1, while USB 3.1 Gen 2, which supports communication rates up to 10 Gbps, will be called USB 3.2 Gen 2 moving forward. Since USB 3.2 has double the throughput (20 Gbps) of USB 3.1 Gen 2, the updated standard has been designated USB 3.2 Gen 2×2. In order to achieve a data transfer rate of 20 Gbps, USB 3.2 Gen 2×2 employs two high-speed 10 Gbps channels. Are you with me so far?

Next, as noted by the USB-IF, conventional USB hosts and devices were designed as single-lane solutions. USB Type-C cables, on the other hand, support multi-lane operation, which opens the door to scalable performance. As a result, USB 3.2 Gen 2×2 is only possible over a USB Type-C connection. To avoid overwhelming the consumer with technicalities, the USB-IF suggested a separate marketing name for each standard: USB 3.2 Gen 1 should be identified as SuperSpeed USB, while USB 3.2 Gen 2 and USB 3.2 Gen 2×2 are labeled SuperSpeed USB 10Gbps and SuperSpeed USB 20Gbps, respectively.

There is no actual date set for when USB 3.2 devices will arrive. Some think they might come out later this year, but it could be much longer. Either way, it will probably be a while before the standard catches on in the motherboard space, since manufacturers would have to incorporate third-party USB 3.2 controllers into their products. So basically this is just a heads-up for those planning on spending to upgrade in the near future.
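To keep the renaming straight, here is a minimal lookup sketch of the scheme described above; the mapping data mirrors the USB-IF re-branding as summarized in this article, while the helper function and variable names are mine, purely for illustration:

```python
# Sketch of the USB 3.x renaming scheme described above.
# The mapping reflects the USB-IF re-branding; the lookup helper is illustrative only.
USB_RENAMES = {
    "USB 3.0":       {"new_name": "USB 3.2 Gen 1",   "marketing": "SuperSpeed USB",        "speed_gbps": 5},
    "USB 3.1 Gen 1": {"new_name": "USB 3.2 Gen 1",   "marketing": "SuperSpeed USB",        "speed_gbps": 5},
    "USB 3.1 Gen 2": {"new_name": "USB 3.2 Gen 2",   "marketing": "SuperSpeed USB 10Gbps", "speed_gbps": 10},
    "USB 3.2":       {"new_name": "USB 3.2 Gen 2x2", "marketing": "SuperSpeed USB 20Gbps", "speed_gbps": 20},  # two 10 Gbps lanes, Type-C only
}

def describe(old_name: str) -> str:
    """Return a one-line summary of what an old USB branding is now called."""
    info = USB_RENAMES[old_name]
    return f"{old_name} -> {info['new_name']} ({info['marketing']}, {info['speed_gbps']} Gbps)"

for old in USB_RENAMES:
    print(describe(old))
```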

WiFi 6 ?

Get ready for the next generation of wifi (wireless) technology: Wi-Fi 6 is going to start appearing on devices in 2019. But should you replace your old router and get a new one? And is this going to make your internet run faster? Here’s what you should know!
The history of wifi
Those of you of a certain age will remember when home internet access was wired only: just one computer could get online at a time, and a single MP3 took half an hour to download. Then wifi came along and changed everything. The first wifi protocol appeared in 1997, offering 2Mbit/s link speeds, but it was only with the arrival of 802.11b and 11Mbit/s speeds in 1999 that people seriously started thinking about home wifi.
Wifi standards, as well as a whole host of other electronics standards, are managed by the IEEE: The Institute of Electrical and Electronics Engineers. Specifically, IEEE 802 refers to local area network standards, and 802.11 focuses on wireless LAN. In the 20 years since 802.11b arrived, we’ve seen numerous new standards of all sorts come out, though not all of them apply to home networking.
The introduction of 802.11g in 2003 (54Mbit/s) and 802.11n in 2009 (a whopping 600Mbit/s) were both significant moments in the history of wifi. Another significant step forward was the introduction of dual-band routers with both 2.4GHz and 5GHz bands, tied to the arrival of 802.11n, which could offer faster speeds at shorter ranges.
Today, with 802.11ac in place, that 5GHz band can push speeds of 1,300Mbit/s, so we’re talking speeds that are more than 600 times faster than they were in 1997. Wi-Fi 6 takes that another step forward, but it’s not just speed that’s improving.
Explaining wifi technology can get quite technical. A lot of recent improvements, including those arriving with Wi-Fi 6, involve some clever engineering to squeeze more bandwidth out of the existing 2.4GHz and 5GHz bands your router already employs. The end result is more capacity on the same channels, with less interference between them, as well as faster data transfer speeds.
Turning wifi up to six
In the past, Wi-Fi versions were identified by a letter or a pair of letters that referred to a wireless standard. The current version is 802.11ac, but before that, we had 802.11n, 802.11g, 802.11a, and 802.11b. It was not comprehensible, so the Wi-Fi Alliance — the group that stewards the implementation of Wi-Fi — is changing it.
All of those convoluted codenames are being changed. So instead of the current Wi-Fi being called 802.11ac, it’ll be called Wi-Fi 5 (because it’s the fifth version). It’ll probably make more sense this way, starting with the first version of Wi-Fi, 802.11b:
Wi-Fi 1: 802.11b (1999)
Wi-Fi 2: 802.11a (1999)
Wi-Fi 3: 802.11g (2003)
Wi-Fi 4: 802.11n (2009)
Wi-Fi 5: 802.11ac (2014)
Now, instead of wondering whether “ac” is better than “n” or if the two versions even work together, you’ll just look at the number. Wi-Fi 5 is higher than Wi-Fi 4, so obviously it’s better. And since Wi-Fi networks have always worked together, it’s somewhat clearer that Wi-Fi 5 devices should be able to connect with Wi-Fi 4 devices, too. (Technically, Wi-Fi 1, Wi-Fi 2, and Wi-Fi 3 aren’t being branded because they aren’t widely in use, but I’ve labeled how it would look above for clarity.)
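To show how much simpler the numbered scheme is to work with, here is a tiny sketch based on the generation list above; the table just mirrors that list, and the comparison helper is mine, for illustration only:

```python
# Wi-Fi generation names mapped to the 802.11 standards listed above.
WIFI_GENERATIONS = {
    1: ("802.11b", 1999),
    2: ("802.11a", 1999),
    3: ("802.11g", 2003),
    4: ("802.11n", 2009),
    5: ("802.11ac", 2014),
    6: ("802.11ax", 2019),  # Wi-Fi 6, expected on devices from 2019
}

def newer(gen_a: int, gen_b: int) -> int:
    """With simple generation numbers, 'which is newer?' is just a comparison."""
    return max(gen_a, gen_b)

print(WIFI_GENERATIONS[newer(4, 5)])  # -> ('802.11ac', 2014): Wi-Fi 5 is the newer network
```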
The Wi-Fi Alliance even wants to see this branding go beyond hardware. So in the future when you connect to a Wi-Fi network on your phone or laptop, your device will tell you what Wi-Fi version you’re connected to. That way, if two networks are available — one showing “4” and the other showing “5” — you’d be able to choose the newer, faster option.
Now that the retroactive renaming is done, it’s time for the future. If you’ve been closely following router developments over the past year (no judgments here), you’ll know that the next generation of Wi-Fi is on the horizon, with the promise of faster speeds and better performance when handling a multitude of devices. It was supposed to be called 802.11ax, but now it’ll go by a simpler name: Wi-Fi 6.
One of the most important changes Wi-Fi 6 brings with it is, of course, the new naming system: Using a simple succession of numbers is going to make it a lot easier for consumers to keep track of standards and make sure they’ve got compatible kit set up. The more technical term for Wi-Fi 6 is 802.11ax, if you prefer the old naming.
Expect to see the new Wi-Fi 6 name on hardware products and inside software menus from 2019, as well as funky little logos not unlike the one Google uses for its Chromecast devices.
As always, the improvements with this latest generation of wifi are in two key areas: Raw speed and throughput (if wifi was a highway, we’d be talking about a higher maximum speed limit for vehicles, as well as more lanes to handle more vehicles at once). Wi-Fi 6 will support 8K video streaming, provided your internet supplier is going to give you access to sufficient download speeds in the first place.
In practice that means support for transfer rates of 1.1Gbit/s over the 2.4GHz band (with four streams available) and 4.8Gbit/s over the 5GHz band (with eight streams available), though the technology is still being refined ahead of its full launch next year—those speeds may, in fact, go up (it’s been hitting 10Gbit/s in the lab). Roughly speaking, you can look forward to 4x to 10x speed increases in your wifi.
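As a quick back-of-the-envelope check, those headline numbers line up with the "4x to 10x" claim; every input below is a figure quoted in this piece (the 1.3Gbit/s 802.11ac rate is the one mentioned earlier):

```python
# Rough arithmetic on the Wi-Fi 6 link rates quoted above.
rate_24ghz, streams_24ghz = 1.1, 4   # Gbit/s over 2.4GHz, four streams
rate_5ghz, streams_5ghz = 4.8, 8     # Gbit/s over 5GHz, eight streams
ac_baseline = 1.3                    # Gbit/s, the 802.11ac 5GHz figure quoted earlier

print(round(rate_24ghz / streams_24ghz, 3))           # ~0.275 Gbit/s per 2.4GHz stream
print(round(rate_5ghz / streams_5ghz, 3))             # ~0.6 Gbit/s per 5GHz stream
print(round(rate_24ghz + rate_5ghz, 2))               # ~5.9 Gbit/s aggregate across both bands
print(round((rate_24ghz + rate_5ghz) / ac_baseline, 1))  # ~4.5x, in line with the "4x to 10x" claim
```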
Another improvement Wi-Fi 6 will bring is improved efficiency, which means a lower power draw, which means less of a strain on battery life (or lower figures on your electricity bill). It’s hard to quantify the difference exactly, especially as Wi-Fi 6 has yet to be finalized, but it’s another step in the right direction for wifi standards—it shouldn’t suck the life out of your phone or always-on laptop quite as quickly.
What will you have to do?
Not a lot. As is usually the case, Wi-Fi 6 is going to be backwards compatible with all the existing wifi gear out there, so if you bring something home from the gadget shop that supports the new standard, it will work fine with your current setup—you just won’t be able to get the fastest speeds until everything is Wi-Fi 6 enabled.
How long that takes is going to depend on hardware manufacturers, software developers, internet service providers, and everyone else in the industry. You might just have to sit tight until your broadband provider of choice deems the time is right to upgrade the hardware it supplies to you (though you could just upgrade the router yourself).
When you’re out and about in the wider world you might start to see certain networks advertising faster speeds, using the new terminology, but this rebrand is brand new: We’ll just have to wait and see how these new names and logos get used in practice. Would you swap coffee shops for Wi-Fi 6?
Bear in mind that it’s also going to take a while for this to roll out properly. When we say 2019, that’s the very earliest that fully approved Wi-Fi 6 devices are going to start appearing on the scene, so it might be months or years before everyone catches up. Some early devices making use of the draft technology have already appeared on the scene.
Even if you have no problems with download and upload speeds right now, Wi-Fi 6 is intended to fix some of the pain points that still exist: Trying to get decent wifi in a crowded space, for example, or trying to connect 20 different devices to the same home router without the wireless performance falling off a cliff.
The Wi-Fi Alliance says that it expects companies to adopt this numerical advertising in place of the classic lettered versions. It also expects to see earlier versions of Wi-Fi start to be referred to by their updated numbered names as well.
Because the Wi-Fi Alliance represents just about every major company that makes any kind of product with Wi-Fi in it, its actions usually reflect what the industry wants. So presumably, tech companies are on board with the branding change and will start to advertise it this way.

AMD Ryzen Threadripper 2 with up to 32 Cores!

AMD Ryzen Threadripper 2 with up to 32 cores – yes, you read that right, 32 cores. AMD has quickly ramped up its Zen architecture and is now delivering Threadripper 2 (2nd generation). AMD’s Zeppelin silicon has 8 cores, and the first-generation Threadripper used two of them to get to the top SKU of 16 cores. Inside the CPU, however, there are four pieces of silicon: two active and two inactive. For this second generation of Threadripper, called Threadripper 2 or the Threadripper 2000 series, AMD is turning those inactive dies into active ones, substantially increasing the core count for the high-end desktop and workstation user.

The top parts therefore have four active dies, with eight active cores on each die (four per CCX). Four active dies would nominally mean eight memory channels, but AMD’s X399 platform only has support for four channels. For the first generation this meant that each of the two active dies had two memory channels attached – in the second-generation Threadripper this is still the case: the two newly active parts of the chip do not have direct memory access.

Not long ago several motherboard vendors stated that some of the current X399 motherboards on the market might struggle with power delivery to the new parts, so we are likely to see a motherboard refresh from several manufacturers. AMD’s Threadripper 2 is quite competitive with high-end Core i7s and even the new Intel Core i9s given the right circumstances. However, keep in mind that AMD may shortly move to a new CPU manufacturing process (7nm) that will further increase performance. So buyer beware – do your homework before making a purchase. AMD is finally starting to give Intel competition, especially on price. Threadripper 2 runs from 8 cores to 32 cores depending on which CPU flavor you pick!
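To make the die layout easier to picture, here is a minimal sketch of the 32-core configuration described above; the die names and helper code are mine, for illustration only:

```python
# Sketch of the 32-core Threadripper 2 topology described above:
# four active dies of 8 cores each, but only two dies wired to memory on X399.
dies = [
    {"name": "die0", "cores": 8, "memory_channels": 2},  # direct memory access
    {"name": "die1", "cores": 8, "memory_channels": 2},  # direct memory access
    {"name": "die2", "cores": 8, "memory_channels": 0},  # compute-only, memory traffic routed via the other dies
    {"name": "die3", "cores": 8, "memory_channels": 0},  # compute-only
]

total_cores = sum(d["cores"] for d in dies)               # 32 cores
total_channels = sum(d["memory_channels"] for d in dies)  # 4 channels, the X399 limit
print(total_cores, total_channels)
```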

World's Fastest Supercomputer

IBM, Nvidia Build “World’s Fastest Supercomputer” for US Government
The DOE’s new Summit system features a unique architecture that combines HPC and AI computing capabilities.
https://www.datacenterknowledge.com/supercomputers/ibm-nvidia-build-world-s-fastest-supercomputer-us-government
IBM and DOE Launch World's Fastest Supercomputer
Frederic Lardinois (@fredericl), June 8, 2018 – https://techcrunch.com/2018/06/08/ibms-new-summit-supercomputer-for-the-doe-delivers-200-petaflops/

IBM and the U.S. Department of Energy’s Oak Ridge National Laboratory (ORNL) today unveiled Summit, the department’s newest supercomputer. IBM claims that Summit is currently the world’s “most powerful and smartest scientific supercomputer” with a peak performance of a whopping 200,000 trillion calculations per second. That performance should put it comfortably at the top of the Top 500 supercomputer ranking when the new list is published later this month. That would also mark the first time since 2012 that a U.S.-based supercomputer holds the top spot on that list.
Summit, which has been in the works for a few years now, features 4,608 compute servers with two 22-core IBM Power9 chips and six Nvidia Tesla V100 GPUs each. In total, the system also features over 10 petabytes of memory. Given the presence of the Nvidia GPUs, it’s no surprise that the system is meant to be used for machine learning and deep learning applications, as well as the usual high performance computing workloads for research in energy and advanced materials that you would expect to happen at Oak Ridge.
IBM was the general contractor for Summit and the company collaborated with Nvidia, RedHat and InfiniBand networking specialists Mellanox on delivering the new machine.
“Summit’s AI-optimized hardware also gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery,” said Jeff Nichols, ORNL associate laboratory director for computing and computational sciences, in today’s announcement.
Summit is one of two of these next-generation supercomputers that IBM is building for the DoE. The second one is Sierra, which will be housed at the Lawrence Livermore National Laboratory. Sierra, which is also scheduled to go online this year, is less powerful at an expected 125 petaflops, but both systems are significantly more powerful than any other machine in the DoE’s arsenal right now.

Karl Freund, Moor Insights & Strategy Senior Analyst for deep learning & HPC:
Summit sits at the Oak Ridge National Laboratory in Oak Ridge, Tennessee. Capable of over 200 petaflops (200 quadrillion operations per second), Summit consists of 4,600 IBM dual-socket Power9 nodes, connected by over 185 miles of fiber optic cabling. Each node is equipped with 6 NVIDIA Volta Tensor Core GPUs, delivering total throughput that is 8 times faster than its predecessor, Titan, for double-precision tasks, and 100 times faster for the reduced-precision tasks common in deep learning and AI. China has held the top spot in the Top 500 for the last 5 years, so this brings the virtual HPC crown home to the USA.

Some of the specifications are truly amazing; the system exchanges water at the rate of 9 Olympic pools per day for cooling, and as an AI supercomputer, Summit has already achieved (limited) “exascale” status, delivering 3 exaflops of AI precision performance. What may be more important, though, is the science that this new system will enable—it is already at work on drug discovery using quantum chemistry, chronic pain analysis, and the study of mitochondrial DNA.
For those who cannot afford a full-fledged $100M supercomputer, NVIDIA also announced the new HGX-2 chassis, available from many vendors, which can be connected to a standard server for some serious AI in a box. The HGX-2 supports 16 Volta GPUs, interconnected via the new NVSwitch fabric to act as a single massive GPU, delivering 2 petaflops of performance for AI and HPC. As you can see, NVIDIA is paying a lot of attention to the idea of fusing AI with HPC.

The scientific advances in deep neural networks (DNNs) for HPC took center stage in the announcement. As I have noted in previous articles, DNNs are showing tremendous promise in High Performance Computing (HPC), not just in consumer applications. DNNs can be trained with massive datasets created by running traditional simulations on supercomputers. The resulting AI can then be used to predict outcomes of new simulations with startling accuracy, in 1/1000th the time and cost. The good news for NVIDIA is that both supercomputing and AI are powered by – you guessed it – NVIDIA GPUs. Scientists now have even more tools to use GPU hardware and to develop GPU software with NVIDIA’s new platforms.

The announcement of Summit as the world’s fastest computer was not a surprise; as a public project funded by the U.S. DOE, Summit has frequently been the subject of discussion. What is significant is that NVIDIA and the DOE believe that the future of HPC will be infused with AI, all running on the same hardware. The NVIDIA GPUs are delivering 95% of Summit’s performance, cementing the legitimacy and leadership of GPU-accelerated computing. HGX-2 makes that an affordable path for many researchers and cloud providers, while Summit demonstrates the art of the possible and a public platform for research. When combined, AI plus HPC also paves the way for future growth for NVIDIA.
https://www.forbes.com/sites/moorinsights/2018/06/12/ibm-and-nvidia-reach-the-summit-the-worlds-fastest-supercomputer/#69ebabcd31af

The Summit system, with 9,216 IBM processors boosted by 27,648 Nvidia graphics chips, takes as much room as two tennis courts and as much power as a small town. It’ll be used for civilian research into subjects like material science, cancer, fusion energy, astrophysics and the Earth’s changing climate.

Summit can perform 200 quadrillion (200,000 trillion) calculations per second, or 200 petaflops. Until now, the world’s fastest supercomputer has been the Sunway TaihuLight system at the National Supercomputing Center in Wuxi, China, capable of 93.01 petaflops.
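Those headline figures are self-consistent, as a quick sanity check shows; every input below is a number quoted in this section:

```python
# Sanity check on the Summit figures quoted above.
nodes = 4608
cpus_per_node, gpus_per_node = 2, 6
peak_pflops = 200
sunway_pflops = 93.01

print(nodes * cpus_per_node)                  # 9,216 Power9 CPUs, matching the figure above
print(nodes * gpus_per_node)                  # 27,648 Volta GPUs, matching the figure above
print(round(peak_pflops * 1000 / nodes, 1))   # ~43 teraflops per node
print(round(peak_pflops / sunway_pflops, 2))  # ~2.15x the previous #1, Sunway TaihuLight
```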

Graphene the Future of Computing ?

Could make your computer a thousand (1000x) times faster
Superconductive and ultra-thin
Conducts electricity 10 times better than copper, and 250 times better than silicon
Researchers built a transistor (circuit) from graphene and applied current, resulting in a 1000 times increase in performance

Graphene Computers Work 1000 Times Faster, Use Far Less Power

Graphene-coated copper could dramatically boost future CPU performance
By Joel Hruska, February 21, 2017


IBM builds graphene chip that’s 10,000 times faster, using standard CMOS processes


While current chips are made of silicon, the prototype processor is made of graphene carbon nanotubes, with resistive RAM (RRAM) layered over it. The team claims this makes for “the most complex nanoelectronic system ever made with emerging nanotechnologies,” creating a 3D computer architecture.
If you follow a lot of tech circles, you may have seen graphene (a super-thin layer of carbon arranged in such a way that it has electrical properties verging on miraculous) come up in the news quite a bit, receiving plaudits about its massively fluid electrical conductivity and possible applications in several different technologies. What you haven’t heard much of is the ugly part of graphene: It’s impossible to build semiconductor transistors out of the material as it stands now since it has no electrical band gap to speak of. If that sounds confusing, that’s alright. That’s what this article is for!
Band Gap? What’s That?
A band gap is a tiny space between the conduction band and the valence band that tells us at what level current will actually flow between the two. It’s like a little gatekeeper that keeps an electrical charge in one space until it is “turned off.” Virtually all chips in computers are made of a semiconductor material, which means they have a moderate band gap that lets them neither conduct electricity too readily nor reject every electrical charge. This has to do with basic molecular structure, so there is quite a bit of chemistry involved in building a chip.
Very large band gaps exist in materials like rubber, which resist electrical currents so strongly that they would sooner catch fire than carry the charge. That’s why rubber is used to insulate the wires inside cables. Materials with a negligible band gap are known as conductors, while those with virtually no band gap whatsoever are known as superconductors.
Today most chips are made of silicon, which serves as a very sturdy and reliable semiconductor. Remember, we need semiconductors that can quickly be turned on and off at will, not superconductors, which will lose the charge they were given the moment the band no longer supplies it.
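To put rough numbers on that picture, here is a small sketch that classifies a material by its band gap; the electron-volt thresholds and example values are approximate textbook figures I am adding for illustration, not numbers from the articles above:

```python
# Rough classification of materials by band gap, in electron-volts (eV).
# Thresholds and example gaps are approximate, illustrative values.
def classify(band_gap_ev: float) -> str:
    if band_gap_ev <= 0.05:
        return "conductor"       # charge flows freely, hard to switch off
    if band_gap_ev < 4.0:
        return "semiconductor"   # can be switched on and off - what chips need
    return "insulator"           # resists current, e.g. the rubber around a wire

examples = {"graphene": 0.0, "silicon": 1.1, "rubber-like insulators": 5.5}
for material, gap in examples.items():
    print(material, "->", classify(gap))
```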
Why Is Graphene Not Good for Building Chips?
As I mentioned earlier, graphene is an extremely efficient conductor of electricity but nothing much more than that. It can push a charge at an incredible speed, but it cannot retain it. In a binary system you may need to retain data so that your running programs don’t just close the instant they open. It’s important in a RAM chip, for example, to ensure that the data inside it can stay put and remain readable for the foreseeable future. When a transistor is in the “on” state, it registers a “1.” In an “off” state, it registers a “0.” A superconductor would be unable to “switch off” because the difference between “on” and “off” voltage is so small (because of the tiny band gap I mentioned earlier).
That’s not to say that graphene wouldn’t have a place in a modern-day computer. It certainly could be used to deliver information from one point to another quickly. Also, if supplemented by other technology, we could possibly see graphene used in transistors at some point in the future. Whether that would be an efficient investment of capital is up to the industry to decide.
There’s Another Material! (One I believe has more promise)
One of the problems with silicon is its inflexibility when working on ultra-thin surfaces. A piece of silicon can only be shaved so thin before it stops being functional. That’s why we were exploring the use of graphene in the first place (it’s a single atom thick). Since graphene may not prove promising without investing truckloads of money into its development, scientists began trying other materials, one of which is titanium trisulfide (TiS3). The material not only has the ability to function even at the thickness of a single molecule, but it also has a band gap very similar to that of silicon.
The implications of this are far-reaching for miniature technology products which pack a vast amount of hardware in a very constrained amount of space. Thinner materials will also dissipate heat more efficiently, making them favorable for large power-hungry computers.
Graphene As A Promising Material For Computer Processors
From the time graphene technology was introduced, it has gained popularity as one of the most advanced materials, with diverse applications. It can be used in mechanical and biological engineering applications. Car manufacturers are taking advantage of its weight and strength, making it an excellent material to combine with polymer composites.

It is also a popular choice for energy storage and for solar cells. Recently, though, it has also generated buzz because of the prospect of a graphene processor, which is expected to improve computing in more ways than one.

IBM Taking Advantage of Graphene

Among others, IBM is one company that has expressed a serious commitment to building a graphene processor, which is expected to redefine the future of computers. By 2019, the company expects to develop a processor that is smaller and significantly more powerful than what is available in the market today. The goal is to build IBM graphene transistors that measure only 7 nanometers but are unrivaled in terms of the power they can provide to the computers of the future. As a demonstration of how serious it is in the pursuit of a graphene CPU, the company has invested $3 billion to fund the development of the technology and to have it polished before it is finally introduced to the market.

The Technological Singularity

The technological singularity, or simply the Singularity, is the belief that the invention of artificial superintelligence (ASI), in combination with neurochips, will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization and to humans themselves. Many refer to the human change as becoming cyborgs (half machine and half human). John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence, and argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.

Some, such as myself, believe that the combination of technologies like molecular nanotechnology, neurochips (computer interfaces with human brains), and artificial intelligence may all combine to radically change the entire world and human society. Some even speculate that AI might bring about the end of mankind. The Singularity may happen within the next 20 or 30 years, if not sooner.

A neurochip is a chip (integrated circuit/microprocessor) designed to interact with neuronal cells. In science fiction there have been many stories of cyborgs with mechanical or machine-based parts. However, I believe we are coming to a period where we may be able to use organic systems that are far more compatible. We are even seeing the dawn of being able to create human organs or parts to replace failed organs. “Neuromorphic computing—the next big thing in artificial intelligence—is on fire.” That is a quote from a February 2018 article by Shelley Fan! Cybernetics is considered to be the science of integrating computers with people (inside of them). We may be in for some very interesting, but also dangerous, times ahead!