
Cheaper way to deal with the waste heat at data centers?

At any given moment, millions of financial transactions, e-mails, and online videos may be coursing through the circuitry of one of the massive computer server farms that have proliferated in the Internet age.

In Alfonso Ortega's thinking, it's a lot of hot air. He's not talking about spam.

The Internet generates a huge amount of waste heat, said Ortega, a Villanova University professor of energy technology.

By some estimates, 3 percent of the nation's electricity is devoted to computer processing and data centers, enough to light up a couple of states. The cost of cooling the equipment nearly equals the cost of powering the computers that process the bits and bytes, said Ortega, who is also the college of engineering's associate dean for graduate studies and research.
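
In industry terms, that ratio is captured by power usage effectiveness, or PUE: total facility power divided by the power that actually reaches the computing equipment. A minimal Python sketch, using assumed figures rather than any from the article, shows how cooling costs that rival computing costs push PUE toward 2:

    # Power usage effectiveness (PUE) = total facility power / IT power.
    # All figures below are assumptions for illustration.
    it_power_kw = 1000.0       # servers, storage, and network gear
    cooling_power_kw = 950.0   # chillers, fans, and air handlers
    overhead_kw = 50.0         # lighting and power-distribution losses

    pue = (it_power_kw + cooling_power_kw + overhead_kw) / it_power_kw
    print(f"PUE = {pue:.2f}")  # 2.00: half the electricity does no computing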

"People started to pay attention when companies said, 'Wow, unbelievable, we're now eating up half of our costs just to keep this thing cool,' " he said.

Ortega's Villanova team, along with a consortium of four other universities and four area corporate partners, was recently awarded a five-year, $3.4 million grant by the National Science Foundation to study ways to improve energy usage at data centers. The effort is formally called the Industry/University Cooperative Research Center in Energy-Efficient Electronic Systems.

While other researchers are looking at the efficiency of computer hardware - the lead institution is Binghamton University - Villanova's expertise is in thermodynamics. Ortega wants to explore ways to reduce or reuse the gales of waste heat generated by data centers, heat that is now simply dissipated into the atmosphere by air-conditioners.

Ortega envisions reusing the heat to warm offices or nearby greenhouses. But in reality, hot air from computer equipment is not "high-quality" thermal energy, and it is difficult to recover usable heat or electricity from it.
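
A rough thermodynamic sketch shows why. The Carnot limit caps how much of a heat stream can ever be converted to work, and the cap shrinks with the temperature gap between the heat and its surroundings. With assumed temperatures, not figures from the article, the arithmetic looks like this:

    # Carnot limit on converting server exhaust heat to work.
    # Both temperatures are assumptions chosen for illustration.
    t_exhaust_k = 40.0 + 273.15   # assumed server exhaust-air temperature
    t_ambient_k = 20.0 + 273.15   # assumed outdoor temperature

    eta_max = 1.0 - t_ambient_k / t_exhaust_k
    print(f"Theoretical maximum conversion: {eta_max:.1%}")  # about 6.4%

Real machinery would recover far less than that theoretical 6.4 percent, which is why direct reuse, such as warming a building, is the more plausible route.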

Rather, researchers are looking at developing control mechanisms to rapidly shift processing loads to other data centers to take advantage of cheaper electric rates or cooler climates.
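
The logic of such a control mechanism can be stated simply: periodically score each candidate site on its current electricity price and outdoor temperature (a proxy for cooling burden), then route deferrable work to the cheapest. A toy Python sketch, in which the site names, prices, and scoring rule are all assumptions:

    # Toy geographic load shifting: send deferrable work to the site
    # where power and cooling are currently cheapest. All data assumed.
    sites = {
        # site: (electricity price in $/kWh, outdoor temperature in deg C)
        "lulea":    (0.04, 2.0),
        "virginia": (0.09, 28.0),
        "oregon":   (0.06, 15.0),
    }

    def score(price, temp_c):
        # Lower is better: raw price plus a rough penalty for the extra
        # cooling energy that warmer outside air implies (assumed weight).
        return price + max(0.0, temp_c - 10.0) * 0.001

    best = min(sites, key=lambda s: score(*sites[s]))
    print("Route deferrable jobs to:", best)  # lulea, in this toy data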

They are also developing sensors to identify the hottest servers to target with cooling, which would be more efficient than air-conditioning an entire building.
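
That kind of targeting amounts to ranking racks by sensor readings and acting only on the hottest. A minimal sketch with made-up readings and an assumed threshold:

    # Hot-spot targeting: rank servers by temperature sensors and direct
    # extra cooling only at the hottest. Readings and threshold assumed.
    readings = {
        "rack03-u12": 41.5,  # degrees C at the server sensor
        "rack07-u04": 29.8,
        "rack01-u20": 38.2,
        "rack05-u16": 26.1,
    }
    THRESHOLD_C = 35.0

    hot_spots = sorted(
        (name for name, t in readings.items() if t > THRESHOLD_C),
        key=readings.get, reverse=True,
    )
    print("Target cooling at:", hot_spots)  # ['rack03-u12', 'rack01-u20']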

"There are many different opportunities for efficiency," said Ortega. "Some are obvious, some not so obvious."

In some ways, the emergence of big high-density data centers represents the industry's returning to its roots. In the early days of computing, large centralized mainframes did the heavy processing. Then, the work was decentralized to minicomputers and desktop devices when the market switched to personal computers.

"What has happened in the Internet era is the reemergence of centralized computing," said Ortega. "We've kind of come full circle."

Some large Internet companies have opened facilities in cool northern latitudes to reduce their energy costs.

Facebook in October announced it was building a new server farm near the Arctic Circle in Sweden to handle its European business. The 900,000-square-foot facility - the size of five Wal-Mart Supercenters - will run on cheap hydroelectric power and be cooled by free Nordic winter air.

But some data centers need to be closer to their customers.

Steel Orca L.L.C., a Newtown start-up, is aiming to open a 300,000-square-foot "digital utility center" next year in Falls Township, Bucks County, at the former site of U.S. Steel's Fairless Works. The green-energy data center would use electricity generated by solar, wind, and landfill gas to supplement power drawn from the grid. It would also use water from an aquifer and the nearby Delaware River as a coolant, said David Crocker, the venture's chief executive.

Crocker said Steel Orca planned to donate 5,000 square feet to the Villanova venture to allow researchers to use the site as a laboratory on sustainable-energy practices.

In addition to Steel Orca, the local corporate partners include Delaware Valley Liebert, a supplier of data-center equipment and air-conditioning units, and Verizon Wireless and Comcast, which operate large data centers to serve their phone, Internet, and cable customers.

Data centers are located all around the country, often in nondescript, very secure buildings with multiple backup supplies of power and Internet connections, as well as redundant air-conditioning systems.

"In the data-center business," said Crocker, "you can't afford to go down for any reason."

One such facility is the Philadelphia Technology Park (PTP), a 25,700-square-foot data center in the Navy Yard in South Philadelphia.

PTP was built for the Philadelphia Stock Exchange, but when the exchange was acquired in 2008, the data center became surplus. Enterprise Technology Parks L.L.C. of Baltimore bought the site last year and opened it as a merchant data center, where companies can safely house their own computer equipment.

One selling point for the Navy Yard site: It is outside both the New York City and Washington "blast zones," areas that would be devastated in a nuclear attack. One client, Legg Mason Capital Management, uses the Philadelphia site as its disaster recovery facility for data stored in Baltimore.

On a tour last week, PTP President Corey Blanton pointed out the cooling system, which uses a typical data-center design called "cold aisle, hot aisle."

Amid the racks of computer servers that stand in erect rows like library book stacks, cool air blows up from floor vents beneath every other aisle. After passing through the servers, the hot air is recovered through ceiling vents over the "hot aisle."

Gary W. Aron, a vice president of software provider Asset Vue, said that, in earlier incarnations, data-center operators responded to excessive heat conditions by simply cranking up the air-conditioning so much that the wind would "make your hair stand on end."

Current designers are exploring more efficient cooling systems, including piping liquid coolant directly to heat-recovery units installed on each rack of servers, a practice that once was used with some mammoth mainframe units in the early days of computing.