Cloud computing is a myth. That’s not to say that it doesn’t exist, or it doesn’t work, or it hasn’t become a vital part of our global digital infrastructure. But the mental picture that the term ‘cloud computing’ conjures, of an immaterial source of unlimited computing power on-demand, just doesn’t chime with the reality.
The true picture of cloud computing is millions of vast data centres, packed with sophisticated servers doing computation by turning electrical energy into heat, serviced by advanced cooling systems and complex power supplies. Whether it is storing a file in your Dropbox or opening a web page, every bit of data that moves apparently effortlessly between client and server does so at a direct energy cost.
Cloud computing is integral to concepts such as the Internet of Things (IoT) and the roll-out of 5G networks, both in implementing the network infrastructure and in processing the vast amounts of data that will flow over these networks once they’re in place. The 5G network’s low latency will also enable algorithms running in cloud-computing data centres to partner with autonomous vehicles to make real-time decisions about traffic management and routing options.
With the amount of computing being undertaken in cloud data centres growing exponentially over the next decade – Gartner forecast that revenues from global public cloud services would grow by 17.5% to be worth $214.3 billion in 2019 – so too will the amount of energy consumed. Small improvements in energy efficiency throughout the system will add up to large energy savings in the overall data centre.
One obvious place to start is to make data-centre power supplies more efficient. An easy way of doing this is to swap out the existing silicon MOSFETs for silicon-carbide parts. These can switch at higher frequencies, to enable more efficient conversion, and run at higher temperatures than silicon equivalents, reducing the burden on data-centre cooling systems.
Designers might like to work from a blank sheet of paper when trying to solve the complex systemic optimisations involved in building a more energy-efficient data centre. The reality for many, though, will be to make many small improvements to what already exists. For power-supply design, UnitedSiC offers a range of SiC FETs in the widely used DFN8x8 package. The devices’ stacked cascode topology means they can be driven like a silicon MOSFET but will switch faster and handle more power, simplifying circuit design.
For example, building a 3kW LLC circuit with the UnitedSiC UF3SC065040D8S SiC FET involves wiring two of the devices in parallel (to meet thermal constraints) for each theoretical device in the circuit topology. A half-bridge rectifier has two theoretical devices, so the complete circuit takes four devices. Building the same circuit using competing parts would take at least six devices.
The advantages of SiC technology only grow at higher powers. Building a 5kW LLC circuit using the same UnitedSiC part takes three parallel devices for each theoretical device in the topology, and therefore only six devices for the complete half-bridge. Competing solutions need 10 devices to achieve the same thing.
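The device counts above follow from a simple calculation: divide the target power by what one device can handle thermally, round up to get the parallel count per switch, then multiply by the two switch positions in a half-bridge. A minimal sketch of that arithmetic follows; the per-device power figure is an illustrative assumption chosen to reproduce the counts in the text, not a UnitedSiC datasheet value:

```python
import math

def half_bridge_device_count(total_power_kw, power_per_device_kw, switch_positions=2):
    """Parallel devices needed per theoretical switch, multiplied by the
    number of switch positions in the topology (2 for a half-bridge)."""
    per_switch = math.ceil(total_power_kw / power_per_device_kw)
    return per_switch * switch_positions

# Hypothetical assumption: each FET handles ~1.7 kW under the design's
# thermal constraints.
print(half_bridge_device_count(3, 1.7))  # 2 per switch -> 4 devices
print(half_bridge_device_count(5, 1.7))  # 3 per switch -> 6 devices
```

The same arithmetic with a lower per-device power limit yields the six- and ten-device counts quoted for competing parts, which is where the parts-count advantage comes from.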
It is these kinds of optimisations that will enable us to scale up our use of cloud computing without overwhelming our energy budgets. This practical approach to improving the efficiency of data centres will also help us to take a little of the hot air out of the myth of cloud computing.