High-Density Power Distribution in Modern Data Centers
Article • 8 minute read
Data Center Power Gets a Voltage Boost
by Bill Schweber for Mouser Electronics
Data center power distribution is evolving to meet the rising demands of AI workloads. This article explores the shift from 48V distribution to 800VDC (±400V) topologies and the design trade-offs needed to deliver more power efficiently while managing safety and space.
The unprecedented power demands of artificial intelligence (AI) computing have forced designers to rethink the way data centers are built. Data center architectures are transitioning from 48V distribution and backplanes to higher-voltage power delivery at 800V. This is due to the massive increase in aggregate server-rack power needs, even as their constituent integrated circuit (IC) and other loads operate at ever-lower voltages, with rails reaching below 1V.
According to a recent US Department of Energy report, data centers consumed about 4.4 percent of total US electricity in 2023 and are expected to consume approximately 6.7 to 12 percent by 2028.1 Further, the International Energy Agency (IEA) reported that data centers consumed between 1.4 and 1.7 percent of global electricity in 2022, which is expected to double by 2026.2
The solution to this seemingly unstoppable demand encompasses three broad components:
Increased supply from both renewable and non-renewable sources
Improved transmission lines and infrastructure
Greater efficiency at the user load
Getting from Line to Load
The migration of server-rack power designs to 800VDC (or ±400V) distribution is the third iteration of this power-path challenge. Using 800VDC for internal distribution requires fewer conversion stages and offers significantly higher efficiency along with reduced thermal dissipation.
A review of the first three generations of topologies reveals the ongoing transformation and transitions. In all cases, the power path begins with three-phase high-voltage transmission lines that are typically stepped down to 480VAC near the data center (the exact value varies with the locale).
First-Generation Topology
The first power distribution topology could provide about 10kW to 15kW per rack (Figure 1).
Figure 1: First-generation line-to-load power distribution could deliver about 10kW to 15kW to a single chassis. (Source: Mouser Electronics)
The three-phase AC line charges an uninterruptible power supply (UPS), which provides backup power in case of a main power source failure. The power distribution unit (PDU), in turn, gets its power from the UPS battery and distributes it to the connected equipment via a DC/AC inverter while implementing the needed monitoring and control.
This power is still not ready for use by the many diverse loads on the circuit boards. The AC/DC rectifier plus DC/DC converter transforms the AC to DC and pre-regulates the DC voltage output, usually 12V. That 12VDC output rail is then further regulated down to lower DC voltages, such as 5V and 1V, for the loads on the server’s PC boards.
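To put rough numbers on this conversion chain, the short sketch below multiplies assumed per-stage efficiencies (illustrative values only, not measurements from any specific product) to estimate how much input power a first-generation rack needs for a given IT load:

```python
# Back-of-envelope sketch of the first-generation line-to-load chain.
# The per-stage efficiencies are illustrative assumptions, not measured
# values for any specific product.

STAGES = {
    "UPS double conversion (AC-DC-AC)": 0.94,
    "PDU distribution": 0.99,
    "AC/DC rectifier + DC/DC to 12V": 0.92,
    "12V-to-point-of-load regulation": 0.88,
}

def chain_efficiency(stages: dict) -> float:
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eta = 1.0
    for eff in stages.values():
        eta *= eff
    return eta

if __name__ == "__main__":
    eta_total = chain_efficiency(STAGES)
    rack_load_kw = 12.0  # a first-generation rack in the 10kW-15kW range
    input_kw = rack_load_kw / eta_total
    print(f"End-to-end efficiency: {eta_total:.1%}")
    print(f"Input power for a {rack_load_kw:.0f}kW IT load: {input_kw:.1f}kW "
          f"({input_kw - rack_load_kw:.1f}kW lost as heat)")
```

With these assumed figures, only about three-quarters of the power drawn from the line reaches the loads; the rest becomes heat that the cooling system must remove.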
Second-Generation Topology
In this approach, the output of the DC/DC converter is increased to 48V from 12V (Figure 2). While this may seem counterintuitive, since the voltages required by the loads have decreased, it is more efficient and reduces losses. The 48V output rail is then regulated down to the end-load voltages using multiple efficient DC/DC converters (also called regulators).
Figure 2: Using a 48VDC rail improves performance with respect to efficiency and losses, but is still inadequate for today’s AI-driven systems. (Source: Mouser Electronics)
This higher-voltage arrangement is designed to minimize resistive voltage drop (IR drop) and power loss (I²R) for a given power level, along with the associated thermal effects. Although designers often initially focus on voltage drop, a significant amount of power is also lost as heat, and dissipation becomes the dominant concern when power is transferred from the PDU to the converters.
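The arithmetic behind this is straightforward. The sketch below compares a 12V and a 48V bus carrying the same power through the same assumed path resistance; the 4x reduction in current yields a 16x reduction in I²R loss:

```python
# Minimal sketch of the IR-drop / I^2R argument: delivering the same power at
# 48V instead of 12V cuts current by 4x and conduction loss by 16x. The path
# resistance is an assumed illustrative figure, not a measured value.

def distribution_loss(power_w: float, volts: float, resistance_ohm: float):
    current = power_w / volts                # I = P / V
    v_drop = current * resistance_ohm        # IR drop
    p_loss = current ** 2 * resistance_ohm   # I^2R dissipation
    return current, v_drop, p_loss

POWER_W = 15_000   # roughly one rack's worth of load
R_PATH = 0.002     # 2 milliohms of assumed PDU-to-converter path resistance

for bus_v in (12.0, 48.0):
    i, v_drop, p_loss = distribution_loss(POWER_W, bus_v, R_PATH)
    print(f"{bus_v:4.0f}V bus: {i:7.1f}A, drop {v_drop:5.2f}V, loss {p_loss:7.1f}W")
```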
In this second-generation design, the entire power subsystem—UPS, battery, PDU, and converters—is located within the rack it powers. Using this approach, a rack can be supplied with up to about 100kW (with suitable additional cooling).
Third-Generation Topology
This topology changes both the location of the power subsystem and the voltages supplied to the rack (Figure 3).
Figure 3: Going to a high-voltage DC distribution topology dramatically reduces losses and improves all aspects of power-delivery performance. (Source: Mouser Electronics)
The power subsystem is instead placed in an adjacent rack or cabinet called a "sidecar" (Figure 4) and uses an 800V (or ±400V) cable or busbar to deliver power to the computation rack. In many installations, a single sidecar can support a row of adjacent racks, as the power losses are sufficiently low using 800V distribution to each server rack. Using the sidecar arrangement, a single rack can support 500kW and more (again, with suitable cooling).
Figure 4: Along with the transition to 800VDC, the power distribution system is moved to an adjacent cabinet, with a heavy cable or busbar linking that rack and the processor rack. (Source: Fotograf/stock.adobe.com; generated with AI)
One benefit of using a sidecar is that the space in the rack previously occupied by the power subsystem is now available for placing the AI processor's graphics processing units (GPUs), the central processing unit (CPU), and networking switches in closer proximity. This placement reduces interunit propagation delays, which otherwise severely degrade performance when handling the large number of data transfers in AI-related processing.
It may seem counterintuitive to move the power subsystem further from the load it supplies, given that IR and I²R losses are a serious issue, but shifting to 800V drastically reduces the necessary current, thus reducing the losses and making this shift a net gain.
A significant benefit is that the cabling within the rack can be slimmed to smaller wires, saving copper cost and space while still offering lower losses. The same reasoning applies to the rigid busbars, which are generally used for transferring large amounts of current between points. Reducing busbar losses and their size means reduced thermal dissipation as well as simplified placement and routing.
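As a rough illustration of the copper savings, the sketch below estimates the conductor cross-section needed to hold conduction loss to a fixed fraction of a 500kW load at 48V versus 800V. The run length and loss budget are assumptions for illustration only, and the calculation ignores thermal and ampacity limits:

```python
# Sketch of the copper-sizing argument: for the same delivered power and the
# same allowed conduction loss, the required conductor cross-section scales
# with I^2, i.e. with 1/V^2. The run length and loss budget are assumptions,
# and the calculation ignores thermal/ampacity limits entirely.

RHO_CU = 1.72e-8     # resistivity of copper, ohm*m
LENGTH_M = 3.0       # assumed one-way run from sidecar to rack
POWER_W = 500_000    # ~500kW rack, per the sidecar discussion above
LOSS_BUDGET = 0.005  # allow 0.5% of the load to be lost in the run

def required_cross_section_mm2(volts: float) -> float:
    current = POWER_W / volts
    max_resistance = LOSS_BUDGET * POWER_W / current**2  # R <= P_loss / I^2
    area_m2 = RHO_CU * LENGTH_M / max_resistance         # from R = rho*L/A
    return area_m2 * 1e6

for bus_v in (48.0, 800.0):
    print(f"{bus_v:5.0f}V: {POWER_W / bus_v:8.0f}A, "
          f"needs roughly {required_cross_section_mm2(bus_v):8.1f} mm^2 of copper")
```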
Another important advantage of 800VDC distribution concerns the UPS. In most second-generation (48V) designs, a single battery-backup unit and UPS can serve multiple racks, while each rack has its own power distribution unit. However, these UPS systems are supplied by AC, so they must first rectify it to DC to charge the large battery pack, and the battery pack's DC output must then be converted back to AC for distribution. This two-stage conversion adds inefficiency and cost. In contrast, direct 800V distribution requires no AC/DC rectifier at the UPS, as the battery pack can be recharged directly from the controlled DC bus.
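A quick comparison with assumed stage efficiencies (illustrative, not vendor figures) shows why eliminating the rectify-then-invert round trip matters:

```python
# Quick comparison of the two UPS paths described above, using assumed
# illustrative stage efficiencies (not vendor figures). The legacy path
# rectifies AC to DC for the battery and then inverts back to AC; the 800VDC
# path feeds the loads from the controlled DC bus with no inverter stage.

ETA_RECTIFIER = 0.97  # AC -> DC stage efficiency (assumed)
ETA_INVERTER = 0.96   # DC -> AC stage efficiency (assumed)

double_conversion = ETA_RECTIFIER * ETA_INVERTER  # rectify, then invert
direct_dc = ETA_RECTIFIER                         # single rectification onto the DC bus

load_kw = 100.0
for label, eta in (("Rectify + invert (AC out)", double_conversion),
                   ("Direct DC bus", direct_dc)):
    overhead_kw = load_kw * (1.0 / eta - 1.0)
    print(f"{label:26s}: {eta:.1%} efficient, "
          f"{overhead_kw:.1f}kW of conversion loss per {load_kw:.0f}kW load")
```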
However, when using 800VDC rather than 48V for internal distribution, safety becomes a major consideration. Recognize that 800VDC is not a Safety Extra-Low Voltage (SELV) as defined by various regulatory standards. SELV is an electrical safety concept that limits a circuit's voltage to a safe level (typically less than 50VAC or 120VDC, depending on the installation specifics) to minimize the risk of electric shock under both normal operation and fault conditions.
Any design exceeding SELV levels must incorporate many safety features related to insulation spacing, creepage and clearance, power access and cutoff, and more. Fortunately, many engineers now have experience with these voltages, and knowledgeable consultants are available. Additionally, experience with electric vehicles has broadened the knowledge base and component availability.
Regulating Power Delivery
The Renesas Electronics RBA300N10x series MOSFETs (Figure 5) deliver the higher power density and efficiency needed by third-generation topologies and beyond. These power MOSFETs are built on Renesas's REXFET-1 MOSFET wafer process with a split-gate topology, which reduces on-resistance by 30 percent.
These MOSFETs are qualified to JEDEC standards, ensuring suitability for industrial applications such as data centers. Key features of the RBA300N10x include high current capability and compact packaging, making them ideal for power management in data centers serving the high-power demands of AI applications.
Conclusion
In response to growing data center energy needs, engineers from electrical, mechanical, and thermal disciplines have developed ways to significantly increase the power capacity of a single server rack. While tens of kilowatts were considered a maximum just a few decades ago, capacities beyond 500kW—an increase of more than an order of magnitude—are now in use. This gain was achieved through innovations in power transformation, power management, and the use of a higher internal distribution voltage of 800V versus 48V. These higher voltages bring tangible benefits but also present major issues related to safety mandates, component selection, and physical arrangements.
From Grid to Rack: Powering Tomorrow’s Modern Data Centers
Discover how electricity flows from the grid to the heart of digital operations in today’s modern data centers.
How much power do data centers really use?
Global data center energy use is currently about 415 TWh and is expected to increase to 945 TWh by 2030
Source: IEA 2025, Energy and AI
Legacy data centers use around 5-10 MW of power, but hyperscale data centers use 100 MW or more
Source: IEA 2024, What the data centre and AI boom could mean for the energy sector
Data center power infrastructure
Learn more about each step in the flow of power through a data center
Power grid
The power grid is the large-scale, interconnected network that generates, transmits and distributes electricity from power plants to the data center.
Switchgear
The switchgear manages all power that goes into the data center. It takes power from the grid (or onsite generators in the event of a grid outage) and routes it to IT equipment, HVAC, and other onsite needs (lighting, etc.). The switchgear typically converts grid power to 480VAC.
Generator
The generator acts as a backup electrical power system that ensures uninterrupted operation in the event of power outages.
UPS
The uninterruptible power supply (UPS) provides power for IT equipment. It has a battery backup that can support IT equipment in outages.
UPS batteries
UPS batteries are rechargeable power sources that provide immediate, seamless backup power to IT equipment during a utility power failure.
Cabinet PDU
The cabinet power distribution unit (PDU) provides isolation, voltage regulation and monitoring capabilities to balance loads for IT equipment.
Remote power panels (RPPs)
Remote power panels (RPPs) provide localized power distribution, monitoring and control for IT servers.
Rack PDU - IT Load
Rack PDUs are the final mile of power distribution to IT equipment. They act like smart power strips that monitor power usage and offer remote power cycling. Output voltage is typically 120VAC to 240VAC.
Article • 6 1/2 minute read
How Digital Power Can Boost Data Center Efficiency
by Adam Kimmel for Mouser Electronics
Explore how digital power technologies enhance energy efficiency in AI-driven data centers, with a focus on solutions from Renesas and industry trends.
Artificial intelligence (AI) continues to permeate nearly every industry, and the demand for resilient, scalable data processing capabilities has surged in response. This escalation puts pressure on data centers, creating the need for advancements in power management to ensure power conversion efficiency, scalability, and reliability. Traditional analog power systems, while suitable for legacy applications, struggle to meet these new requirements. Digital power management has emerged as a novel solution, offering enhanced flexibility and improved energy efficiency—the latter being a primary performance indicator for data centers.
This article examines how digital power management, driven by Renesas solutions, enhances efficiency, scalability, and sustainability. It also explores real-world deployments and emerging trends in renewable integration, predictive analytics, and superconducting chips to demonstrate the future of powering AI data centers.
What Is Digital Power Management?
Digital power management refers to the use of digital controllers and processors to monitor, control, and optimize power systems. Unlike analog systems, which rely on fixed hardware configurations, digital solutions offer programmability, which allows dynamic adjustments to power parameters in response to varying loads, temperatures, and conditions. This adaptability results in more efficient energy usage, improved control precision, and enhanced system monitoring. These factors are all essential in managing the intensive power demands of AI applications, which require big data analysis for peak performance.
One of digital power’s key advantages is its ability to provide real-time intelligence, enabling power systems to adapt to their environment and optimize efficiency automatically. Examples of these adaptive responses include automatic compensation (for changes in load and system temperature), adaptive dead-time control, and dynamic voltage scaling for optimal system performance.
These features are especially beneficial in data centers, where high-demand workloads can fluctuate significantly. Digital power management offers additional benefits that address data center energy efficiency, including the following:1
Intelligent power management
Reduced development time for system products
Reduced bill of materials (BOM) costs
Enhanced reliability and product lifespan
Compact system design and forward-compatible system upgrade within the same chassis
With these advancements in digital power, the supporting technology is well-suited to meet the increasing demands of AI processing in data centers.
The AI Boom and Need for Smarter Power
AI workloads are characterized by their intensive computational demands, which directly correlate with significant power consumption. Traditional power management approaches struggle to adapt to these dynamic loads, often causing inefficiencies and increased operational costs. Digital power management addresses these challenges by providing the adaptability and intelligence required to optimize power usage in real time.
McKinsey projects that global data center capacity demand will increase by 19 to 22 percent annually from 2023 to 2030, reaching an annual demand of 171 to 219 gigawatts (GW).2 This surge is largely driven by the compounding nature of AI application expansion, which consistently draws more power than traditional workloads. It is no surprise that generative AI, deployed at scale by hyperscalers like AWS, Google, and Microsoft, is the key driver of demand for data center capacity.
In addition, Goldman Sachs Research estimates that the global data center market's power usage is currently around 55GW, with AI accounting for 14 percent of this consumption. By 2027, AI's share is expected to grow from 14 percent to 27 percent of the overall market, taking share from cloud and traditional workloads. This shift highlights the growing importance of efficient power management solutions. It could also lead to a 50 percent increase in global data center power demand by 2027 and an increase of as much as 165 percent by 2030.3
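Combining the quoted figures gives a rough sense of the absolute numbers involved. This is back-of-envelope arithmetic based only on the percentages cited above, and it assumes the 50 percent increase is measured against today's roughly 55GW:

```python
# Back-of-envelope arithmetic combining the Goldman Sachs Research figures
# quoted above: ~55GW of global data center power today with AI at 14%, a
# ~50% rise in total demand by 2027, and AI reaching 27% of that total.
# Assumes the 50% increase is measured against today's ~55GW.

current_total_gw = 55.0
ai_share_now = 0.14
total_2027_gw = current_total_gw * 1.50
ai_share_2027 = 0.27

ai_now_gw = current_total_gw * ai_share_now
ai_2027_gw = total_2027_gw * ai_share_2027

print(f"AI load today:   ~{ai_now_gw:.1f}GW of {current_total_gw:.0f}GW")
print(f"AI load in 2027: ~{ai_2027_gw:.1f}GW of ~{total_2027_gw:.0f}GW "
      f"({ai_2027_gw / ai_now_gw:.1f}x growth)")
```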
How Digital Power Enhances Efficiency
Digital power systems enable fine-tuned, adaptive control over voltage and current, minimizing power waste and enhancing operational reliability. This capability is especially important in AI environments where loads can vary rapidly. Through real-time monitoring and intelligent adjustments, digital power not only helps reduce operational costs but also extends equipment lifespans by preventing overcurrent or overheating conditions. Adaptive control is the key: it continually optimizes system performance far faster than a human operator could respond.
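As a conceptual illustration only (not any vendor's API), the sketch below shows the kind of closed-loop adjustment a digital controller performs: read telemetry, compare against targets, and nudge the operating point. All names, thresholds, and the policy itself are hypothetical placeholders:

```python
# Conceptual sketch only (not any vendor's API) of the closed-loop adjustment
# a digital power controller performs: read telemetry, compare against
# targets, and nudge the operating point. All names, thresholds, and the
# policy itself are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Telemetry:
    output_voltage: float  # V
    load_current: float    # A
    temperature_c: float   # degrees C

def adjust_setpoint(t: Telemetry, v_target: float, v_step: float = 0.005) -> float:
    """Return a new voltage setpoint that compensates for load-induced droop
    and backs off when the stage runs hot (a simple illustrative policy)."""
    setpoint = v_target
    if t.output_voltage < v_target - 0.01:    # sagging under load: raise slightly
        setpoint += v_step
    elif t.output_voltage > v_target + 0.01:  # overshooting: lower slightly
        setpoint -= v_step
    if t.temperature_c > 105.0:               # assumed thermal guard band
        setpoint -= 2 * v_step                # shed a little voltage/power
    return setpoint

# Example: a 0.8V AI-accelerator rail sagging during a load step
print(adjust_setpoint(Telemetry(0.785, 450.0, 92.0), v_target=0.80))
```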
Renesas Electronics Digital Power Solutions
Digital power solutions can also log relevant data for predictive maintenance and offer programmable control, facilitating the rapid deployment of new systems. These benefits reduce the cost and complexity of managing large-scale data center infrastructure. Renesas Electronics has developed several innovative product solutions enabling digital power in data centers.
ISL68239IRAZ Digital Multiphase Controller
The Renesas ISL68239IRAZ is a digital multiphase controller capable of managing up to 12 phases across three outputs. It features adaptive voltage scaling (AVS), PMBus compatibility, and flexible configuration options, supporting various topologies, including DCR and RDS(on) current sensing. The controller's versatility and precision make it a cornerstone of digital power architectures for AI applications.
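One way to picture what a 12-phase controller manages is phase count versus load: spreading current across phases and shedding phases at light load keeps each active phase near an efficient operating point. The sketch below uses assumed per-phase limits and a simple shedding rule; it is not the ISL68239IRAZ's actual algorithm:

```python
# Illustrative sketch of multiphase current sharing and phase shedding:
# spread the load current across phases and drop phases at light load so each
# active phase stays near an efficient operating point. The per-phase limits
# and the shedding rule are assumptions, not the ISL68239IRAZ's real behavior.

import math

MAX_PHASES = 12
PHASE_CURRENT_LIMIT_A = 60.0  # assumed per-phase capability
PHASE_SWEET_SPOT_A = 35.0     # assumed most-efficient per-phase current

def phases_for_load(load_a: float) -> int:
    """Fewest phases that keep per-phase current at or below the sweet spot,
    without exceeding the assumed per-phase limit."""
    needed = max(1, math.ceil(load_a / PHASE_SWEET_SPOT_A))
    needed = min(needed, MAX_PHASES)
    if load_a / needed > PHASE_CURRENT_LIMIT_A:
        raise ValueError("load exceeds this rail's assumed capability")
    return needed

for load_a in (20, 150, 400):
    n = phases_for_load(load_a)
    print(f"{load_a:4d}A load -> {n:2d} phases active ({load_a / n:5.1f}A per phase)")
```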
ISL99390FRZ-TR5935 Smart Power Stage
The ISL99390FRZ-TR5935 is a Smart Power Stage module that integrates high-accuracy current and temperature sensing. Its features include high efficiency, integrated protection against overcurrent and overtemperature conditions, and a compact design, facilitating the high-density power solutions essential for space-constrained data centers.
RAA211412 and ISL85005 DC/DC Regulators
The RAA211412 regulator supports a wide operating input voltage range of 5.8V to 45V, and the output voltage is user-programmable down to 0.8V using a simple resistor divider. The device can deliver up to 1A of output current with tight load and line regulation and offers high efficiency from no load to full load (Figure 1).
Figure 1: The RAA211412 DC/DC regulator's efficiency remains relatively constant from light to full load, with only slight variation depending on the input voltage. (Source: Renesas Electronics)
The RAA211412 regulator employs a pulse width modulation (PWM) peak-current mode control architecture at a 630kHz switching frequency, providing fast transient response performance. This is an important performance attribute for server and AI functions, which feature fast, repetitive load shift cycles from quiescent to active modes.
Another suitable IC is the Renesas ISL85005 synchronous buck regulator, with integrated 5A, 4.5V-to-18V high-side and low-side FETs (Figure 2). The regulator's output current rating makes it a good fit for higher-current loads, whether individual devices or a circuit subfunction. Its switching frequency is internally set at 500kHz, but it can be synchronized to an external clock signal from 300kHz to 2MHz. As with the RAA211412, the regulator's output voltage is programmed using an external resistor divider and can range between 1.2V and 5.0V.
Figure 2: The ISL85005 regulator is suited for low-voltage, higher-current loads, as low as 1.2V and up to 5 A. (Source: Renesas Electronics)
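Both regulators set their output with an external feedback divider. The sketch below applies the common divider relation Vout = Vref × (1 + Rtop/Rbottom); the 0.8V reference is inferred from the RAA211412's 0.8V minimum programmable output and should be treated as an assumption rather than a datasheet value:

```python
# Generic feedback-divider arithmetic: Vout = Vref * (1 + Rtop / Rbottom).
# The 0.8V reference below is inferred from the RAA211412's 0.8V minimum
# programmable output and is an assumption, not a value taken from the
# datasheet; check the datasheet before selecting parts.

V_REF = 0.8  # assumed feedback reference voltage

def divider_output(r_top_ohm: float, r_bottom_ohm: float, v_ref: float = V_REF) -> float:
    """Output voltage produced by a top/bottom feedback divider."""
    return v_ref * (1.0 + r_top_ohm / r_bottom_ohm)

def r_top_for_output(v_out: float, r_bottom_ohm: float, v_ref: float = V_REF) -> float:
    """Top resistor needed for a desired output, given a chosen bottom resistor."""
    return r_bottom_ohm * (v_out / v_ref - 1.0)

# Example: target a 3.3V rail with a 10k bottom resistor
r_bottom = 10_000.0
r_top = r_top_for_output(3.3, r_bottom)
print(f"R_top = {r_top / 1000:.2f}k ohms -> Vout = {divider_output(r_top, r_bottom):.2f}V")
```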
Deploying Digital Power in AI Data Centers
Moving to digital power management involves integrating components into existing data center systems. When designing in these new components, essential considerations include system compatibility, scalability, and the ability to establish comprehensive monitoring systems to unlock the full benefits of digital power management. By addressing these factors, data center operators can harness the full capabilities of digital power technologies to meet the growing demand for AI application power.
A significant system design outcome of Renesas' digital power technologies, which are designed to be agile, versatile, and scalable, is that they enable data center designers to use a single standard architecture to manage power demands across various processor types, including x86 CPUs, GPUs, SoCs, and FPGAs. This scalability is crucial in supporting the rapid growth of AI applications.4
Upcoming Trends in Digital Power for AI Data Centers
As AI continues to evolve, several emerging trends are shaping the future of digital power management in data centers.
Integration of Renewable Energy Sources
Companies like Google are entering groundbreaking partnerships to construct data centers near solar and wind farms across the US, gaining access to additional power sources. Google’s $20 billion (USD) project aims to create several industrial parks dedicated to AI data centers, the first of which is expected to be partly operational by 2026 and completed by 2027. This approach enables Google to reduce its reliance on fossil fuels in the US electricity grid, thereby reducing pollution by using renewable energy directly on-site.5
Development of Superconducting AI Chips
Snowcap Compute, a startup developing AI chips using superconducting technology, has secured $23 million (USD) in funding. Superconducting materials conduct electricity without resistance, significantly reducing power consumption. Superconductor chips have traditionally faced a major hurdle, notably their energy-intensive cryogenic cooling requirements, but the increasing demand for AI computing power and the growing strain on electricity grids are making that trade-off more attractive. Snowcap claims its chips will offer 25 times better performance per watt than current leading chips (even after accounting for cooling energy). The company expects to produce its first chip by the end of 2026, with complete systems arriving in the coming years.6
Adoption of Predictive Analytics for Power Management
Implementing real-time power management in virtualized data centers, using predictive analytics, is also gaining traction. By leveraging machine learning algorithms to predict power consumption patterns, data center operators can optimize resource allocation, enhance energy efficiency, and maintain performance.
A study from National Central University in Taiwan demonstrated how this proactive approach enables data centers to adjust their power settings dynamically. Machine learning algorithms are integrated into power management to forecast power consumption patterns and predict how power settings will need to be adjusted. This approach saves energy and improves efficiency, ensuring optimal performance while minimizing energy waste.7
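A minimal sketch of the forecasting idea is shown below: fit a simple trend to recent rack power samples and project the next interval so power and cooling settings can be adjusted ahead of time. The samples, model, and threshold are purely illustrative:

```python
# Minimal sketch of the forecasting idea: fit a simple trend to recent rack
# power samples and project the next interval so power and cooling settings
# can be adjusted ahead of time. The samples, model, and threshold are purely
# illustrative; production systems use far richer features and models.

import numpy as np

# Assumed: average rack power (kW) sampled once per minute over ten minutes
samples_kw = np.array([61.0, 62.5, 64.0, 66.2, 68.0, 70.5, 72.1, 74.0, 76.3, 78.1])
minutes = np.arange(len(samples_kw))

# Least-squares linear trend: power ~= slope * t + intercept
slope, intercept = np.polyfit(minutes, samples_kw, deg=1)
next_minute = len(samples_kw)
forecast_kw = slope * next_minute + intercept

print(f"Trend: {slope:.2f} kW/minute")
print(f"Forecast for minute {next_minute}: {forecast_kw:.1f} kW")
if forecast_kw > 75.0:  # assumed provisioning threshold for this rack
    print("Pre-emptively raise the cooling/power budget for this rack")
```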
A Smarter, Scalable Future with Digital Power
The evolution of AI demands a fundamental change in how data centers manage power. Digital power solutions, including those offered by Renesas, provide the adaptability, efficiency, and reliability necessary to support AI’s growing influence. By embracing these technologies, data center operators can ensure optimal performance, scalability, and energy use in the AI era.
Liquid Cooling: Improving Server Density with More Efficient Cooling
Explore how liquid cooling enables higher server density in compact data centers by improving thermal efficiency, reducing energy use, and outperforming traditional air cooling in space-constrained environments.