Author: Anup Bhalla
Data-centers are big. Really, really big. Do an image search for “data-centers” and you will get back multiple aerial PR shots of vast, featureless buildings, often newly built on equally vast, featureless plains. The clear implication is that, vast as the new buildings already are, their owners are keen to leave plenty of room for future expansion.
Amazon, Apple, Facebook, Google and many less well-known organisations operate such vast data-centers at multiple sites worldwide. For commercial reasons, they will never say exactly how many servers they have, although it is safe to assume that for some of them, the number is in the millions.
And here is where the engineering challenge comes in. Every joule of energy that goes into these data-centers has to be paid for, and when you are running millions of servers, even a small energy saving per machine adds up across the entire estate.
One key area is the power supply for each server, which can have a huge impact on overall energy use. Several basic issues are relatively straightforward to address. For example, feeding the supply from a higher input voltage reduces the current drawn for a given power, and therefore the I²R heating losses in the distribution path. Being sure not to over-specify the supplies also helps – there is no point in running a 500W supply to deliver 300W. And then there is the basic conversion efficiency of the supply itself. HP has estimated that many server power supplies run at an efficiency of 65% to 80% – which means that in the worst case, around a third of the energy you are paying for is doing no useful work save burdening your data-center cooling system.
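The arithmetic behind those two points can be sketched in a few lines. This is an illustrative calculation only – the 300W server load, the 50mΩ feed resistance and the voltage levels are assumed figures, not measurements from any real installation:

```python
def cable_loss_w(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss in the distribution path for a given delivered power.

    Higher feed voltage means lower current for the same power, and the
    loss falls with the square of that current.
    """
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm


def psu_waste_w(output_w: float, efficiency: float) -> float:
    """Watts dissipated inside the supply to deliver output_w."""
    return output_w / efficiency - output_w


# Same 300 W server load, assumed 50 milliohm feed resistance:
print(round(cable_loss_w(300, 120, 0.05), 3))  # 0.312 W at 120 V in
print(round(cable_loss_w(300, 230, 0.05), 3))  # 0.085 W at 230 V in

# Conversion losses at 65% vs 90% efficiency:
print(round(psu_waste_w(300, 0.65), 1))  # 161.5 W wasted per server
print(round(psu_waste_w(300, 0.90), 1))  # 33.3 W wasted per server
```

At 65% efficiency, more than 160W per 300W server goes straight to heat; multiply that across millions of machines and the scale of the opportunity is obvious.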
More complex power-supply designs can push this efficiency up to 90% or more. But many designers have yet to pick up on a simpler way to boost efficiency: using more efficient semiconductor devices. For example, UnitedSiC co-packages a normally-on silicon carbide (SiC) JFET with a Si MOSFET in a cascode architecture, to produce a normally-off SiC FET device. This can be driven in the same way as Si IGBTs, Si FETs, SiC MOSFETs and Si super-junction devices, but has ultra-low gate charge and exceptional reverse-recovery characteristics that can be exploited to build highly efficient switching power supplies.
Parts such as the UnitedSiC UF3SC065030D8S and UF3SC065040D8S SiC FETs have a couple of other advantages. The first is a very low RDS(on), which cuts conduction losses inside the device and translates directly into improved efficiency. The second is that they come in the popular surface-mount DFN8x8 package, already used in applications such as telecoms equipment, where space is at a premium. Dropping in these SiC FETs enables designers to develop denser power supplies within the existing thermal budget of a case or a rack.
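The RDS(on) benefit is easy to quantify: conduction loss in a FET is simply I²<sub></sub>RMS × RDS(on). The values below are illustrative placeholders, not datasheet figures – consult the UF3SC065030D8S and UF3SC065040D8S datasheets for real on-resistance numbers:

```python
def conduction_loss_w(i_rms_a: float, rds_on_ohm: float) -> float:
    """Resistive conduction loss in a switching FET: P = I_rms^2 * RDS(on)."""
    return i_rms_a ** 2 * rds_on_ohm


# 10 A RMS through the switch; RDS(on) values are assumed for illustration.
print(conduction_loss_w(10, 0.030))  # 3.0 W at an assumed 30 milliohm
print(conduction_loss_w(10, 0.080))  # 8.0 W at an assumed 80 milliohm
```

Because the loss scales linearly with RDS(on) but quadratically with current, a lower-resistance device pays off most in high-current server supplies, and the watts saved per switch come off both the electricity bill and the cooling load.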
As I said at the beginning of this blog, data-centers can be big – really big. Reducing the energy that their servers consume also saves on cooling costs, as well as providing an opportunity to protect or improve systemic reliability. Optimisation involves making a set of quite complex trade-offs between capital and operating costs, energy efficiency and computing density, reliability and so on. The advantage of substituting SiC devices into server power supplies is that it is a straightforward, cost-effective move – and one which provides a lot of small savings that add up to a big, valuable difference.