The Top 7 Design Features of Efficient Data Centers
Efficiency in data center design will always remain at the top of any client’s list of infrastructure priorities. Yet the means and methods of achieving efficiency through design and IT hardware continue to evolve, and this evolution is defining the industry. According to the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, data center energy use, estimated to reach 139 billion kWh by 2020, remains a key operational cost management issue for our clients. Here’s a look at some of the tools, methods, and hardware advancing the reduction of that mega-energy load.
1. Internet of Things (IoT)
IoT has found its way into mission-critical operations. Discrete sensors embedded in data center infrastructure enable fresh management tools such as DCIM software, analytics, and new architectures. Through closely coupled energy delivery, cooling, and energy recovery systems, efficiencies rarely attainable just six years ago are now within reach. With the increased automation available through these tiny, networked IoT devices, managing complex and ever-changing plants is increasingly executed from offsite remote locations, just as IT infrastructure management has achieved over the last decade.
2. Emerging Infrastructure
In addition to plant infrastructure, including energy exchange machinery, we watch emerging IT infrastructure features for potential effects on the future of the physical data center environment. Here, too, the landscape continues its rapid change. Beyond the megatrends of cloud/hybrid, edge computing, and security, we see changes in storage and networking technologies that will alter the character of the white space with more storage equipment. Driven by the vastly larger amounts of data produced by IoT and video appliances, combined with falling costs and rising performance, both data center and edge storage will explode and change the IT footprint of the white space.
United Launch Alliance in Centennial, Colorado by DLR Group. Photo by Ed LaCasse.
An example of these technological paradigm shifts might be Microsoft’s HoloLens. Many of these exploratory technologies have the capability to exponentially expand the creation, movement, and storage of data on a scale heretofore never imagined. I always hesitate to point to specific technologies, as doing so can quickly date the principles underlying the message, so I hope the message will not be lost on those reading this in 2020. The principle is this: data is being created exponentially, and it will need a place to live so that humans and AI programs can work with it to enhance our human experience.
3. Scalable Resiliency
The bulk of a data center’s initial cost lies in the electrical and mechanical systems, which must deliver 100 to 200 times the power demands of an average office building. Adding to this is the need for redundancy and resilience to ensure that a component failure or maintenance event – say, replacing a fan motor – does not interrupt critical IT services that require constant connection. This is where many of the high initial costs originate. However, many operations grow over time, which permits the use of scalable infrastructure and allows our clients to expand their plant as their IT needs grow. This yields the best initial-cost solution while allowing them to scale quickly as their needs change.
Holland Computing Center in Omaha, Nebraska by DLR Group. Photo by Tom Kessler.
4. Shifting Risks
Many clients understand the ever-escalating cost of downtime and the consequences of disaster recovery following a complete loss of equipment and/or staff. FEMA P-361, FEMA 426/P-431, and FM Global 1-40 offer national reference standards for durability and survival by design. Many locations and jurisdictions subject to natural hazards have created their own sets of enforceable codes that draw, in part or in whole, from some of these standards. It’s critical that design teams understand the client’s risk tolerances and communicate the costs of physical durability. Surprisingly, hardening is typically not as costly as one might think, particularly when compared to the total project cost and the ever-increasing value of the assets within.
5. Diminishing Rack Space
Cloud computing’s most visible impact on current and future data centers is revealing itself in the enterprise client’s white space. The combination of virtual machines (VMs) and cloud solutions has slowed the growth of rack deployments. Of course, each client’s services and application set will shape its own cloud strategy. In some cases, growth has stopped entirely as applications have moved out of private data centers and into the cloud. How much, how long, and what future requirements will look like requires a creative imagination. In 20 short years, data centers have changed into something few imagined.
One change we have all championed is the move to liquid-cooled IT equipment. Capable of exponential improvements in energy and space conservation as it matures, this technology will force the entire machine fabrication process to be rethought for increasingly dense and efficient spaces. Whether this translates into less rack space or simply helps manage exponential growth in technology, it raises the ceiling on a designer’s opportunity to elevate the human experience.
Benedictine University Goodwin Hall of Business in Lisle, Illinois by DLR Group. Photo by James Steinkamp.
6. Automated Uptime Management
We have been exploring adaptive Power over Ethernet (PoE)-enabled damper actuators that tie airflow to the actual demands of the rack or racks they support. This leads to optimized pressure and airflow management, directing cooling exactly where it’s needed as the environment changes. Analytics and low-cost sensors are making the approach increasingly appealing.
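As a rough illustration of the idea, a demand-based damper could be driven by a simple proportional loop on rack inlet temperature. Everything here – the setpoint, the gain, and the function name – is a hypothetical sketch, not the control logic of any deployed product:

```python
# Hypothetical sketch: proportional damper control keyed to rack demand.
# Assumes each PoE damper/actuator reports rack inlet temperature and
# accepts a 0-100% open command. Setpoint and gain are illustrative.

SETPOINT_C = 24.0   # target rack inlet temperature, degrees Celsius
GAIN = 8.0          # % of damper travel per degree C of error

def damper_command(inlet_temp_c: float, current_open_pct: float) -> float:
    """Open the damper further as inlet temperature rises above setpoint."""
    error = inlet_temp_c - SETPOINT_C
    target = current_open_pct + GAIN * error
    # Clamp to the actuator's physical travel range.
    return max(0.0, min(100.0, target))

# A rack running 2 C hot drives its damper further open:
print(damper_command(26.0, 40.0))  # 40 + 8*2 = 56.0
```

A real deployment would layer analytics on top of this loop – trending sensor data to anticipate demand rather than merely reacting to it – but the principle of directing cooling only where the racks ask for it is the same.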
7. Uptime Management
For one of our repeat clients, we deployed an autonomous warning and control system that detects severe weather and responds automatically. We combined wind speed sensors with lightning strike distance and flash rate sensors to detect and react to developing violent weather. Should wind speeds exceed a preset level, the system triggers alarms and prepares the plant to abandon the more vulnerable but less energy-intensive exterior cooling systems in favor of safely bunkered interior cooling systems. Typically, lightning flash rates are proportional to a storm’s severity, with one exception: a drastic or immediate drop in flash rate often precedes a tornado or microburst. Coupling the wind speed sensor with a lightning flash rate and distance detector has produced our predictive weather response system. Together, the data from these commonly manufactured devices alerts staff and automates equipment responses to lower risks for our clients.
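The trigger logic described above can be sketched as a small set of threshold checks. The thresholds, drop ratio, and names below are illustrative assumptions, not the configured values of the deployed system:

```python
# Hypothetical sketch of the weather-response triggers described above.
# All thresholds and the drop ratio are illustrative assumptions.

WIND_LIMIT_MPS = 25.0    # wind speed above which exterior cooling is abandoned
FLASH_DROP_RATIO = 0.5   # a >50% collapse in flash rate flags tornado/microburst risk

def weather_response(wind_mps: float,
                     flash_rate_now: float,
                     flash_rate_prev: float) -> str:
    """Map current sensor readings to a plant action."""
    if wind_mps > WIND_LIMit_MPS if False else wind_mps > WIND_LIMIT_MPS:
        # High wind: retreat to bunkered interior cooling.
        return "switch_to_interior_cooling"
    if flash_rate_prev > 0 and flash_rate_now < flash_rate_prev * FLASH_DROP_RATIO:
        # A sudden drop in lightning flash rate can precede a tornado or microburst.
        return "alarm_and_prepare"
    return "normal_operation"

print(weather_response(30.0, 5.0, 5.0))  # switch_to_interior_cooling
print(weather_response(10.0, 1.0, 5.0))  # alarm_and_prepare
```

In practice, both readings would be trended over time rather than compared as single samples, but the decision structure – wind speed as a hard limit, flash-rate collapse as an early warning – is the one described.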