
Inside one of the U's better-kept secrets: the Downtown Data Center

Chris Pedersen, operations specialist at the University of Utah's Downtown Data Center, stands inside the DDC's main production area on October 18, 2018.

Chris Pedersen swipes his key card, taps his finger on a biometric scanner, and takes a step back. With one well-rehearsed lunge, he fights the door open against a blast of air.

"Sometimes you need a head start," he says.

Welcome to the University of Utah's Downtown Data Center, home of a significant portion of the U's storage and server capabilities. The DDC is what you might call thoughtfully overbuilt. Separating the main production area and a maintenance hallway is a towering wall of cooling fans, with a serious pressure differential on either side.

Designing a structure in an earthquake zone is complicated by nature: hope for the best, plan for the worst. Originally built in 1938, the DDC occupies an unmarked, city block-length property and is fortified with earthquake-resistant building materials, right down to fake exterior windows that disguise about a foot of concrete and reinforced steel.

Wall of cooling fans, facing the DDC production area.

Inside, a stable environment is maintained by redundant automated systems for cooling, humidity control, particulate filtration, and power distribution.

As you might expect, access is tightly monitored and controlled, open only to authorized staff members, some of whom are restricted to particular areas. Besides biometrics and key cards, surveillance cameras are trained on every door and common area, and the DDC is staffed 24/7/365.

The production area has a concrete floor, which makes the layout somewhat unusual as data centers go. Rather than burying cabling under the floor, it's all dropped from above and "very well organized," Pedersen said.

Everything runs on battery backup, which alone can sustain the entire building's computing resources for 15 minutes. In an extended emergency, diesel generators with 25,000 gallons of backup fuel kick on within seconds and can keep the place humming for several weeks. To conserve power during an outage, critical systems stay up while non-critical systems (like research nodes) are powered down.
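
As a purely illustrative sketch of that priority scheme (not the DDC's actual control system; the system names and critical/non-critical tags below are assumptions), the load-shedding logic might look something like this:

```python
# Illustrative sketch only: not the DDC's actual control logic.
# Assumes each hosted system is tagged as critical or non-critical.
from dataclasses import dataclass

@dataclass
class HostedSystem:
    name: str
    critical: bool
    powered: bool = True

def shed_noncritical_load(systems):
    """During an extended outage, keep critical systems up and power down the rest."""
    for system in systems:
        if not system.critical:
            system.powered = False  # e.g., research compute nodes
    return [s.name for s in systems if s.powered]

# Hypothetical inventory for illustration
inventory = [
    HostedSystem("core-network", critical=True),
    HostedSystem("enterprise-storage", critical=True),
    HostedSystem("research-node-01", critical=False),
]

print(shed_noncritical_load(inventory))  # ['core-network', 'enterprise-storage']
```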

The DDC is operated by the Communications Infrastructure (CI) group in UIT and serves a mix of university and external tenants. Incoming equipment prompts a consultation on what can be added to an enterprise data center, including how much to buy and what connectivity is needed. But within strict parameters, such as hardware being enterprise-class and rack-mountable, Pedersen said, if tenants provide the equipment, the DDC will host it.

The Center for High Performance Computing (CHPC), while also part of UIT, is one of three partners, alongside UIT and ITS, in setting DDC policy, approving operating budgets, and serving on the Data Center Advisory Committee, according to DDC Manager Glen Cameron. CHPC operates and maintains most of its production systems on-site, "racking and stacking" equipment in its own exclusive footprint. With non-techie names like Kingspeak, Ember, and Redwood, these high-performance computing clusters are familiar to faculty with sponsored research projects (including those in protected environments), along with other eligible students, staff, and postdocs, though many users may never have visited them in person.

The job description for DDC staff, Pedersen said, could just as well be "Jack of all trades." 

"We have an extremely diverse technical staff," he said. "We basically do everything in-house, from the racks and cable ladders to the network connections and power rails. We terminate all of our own copper and fiber, running the spools of cable. The staff seek out the certifications they need, and deploy all the upgrades and build-outs. We have a great team."

Why downtown?

UIT took full occupancy of the DDC in April 2012, after a lengthy search for alternatives. Building from the ground up was a possibility, but deemed cost-prohibitive.

That left seven sites in production, all of which, Pedersen said, were in desperate need of major upgrades or were at capacity, unable to grow any further to meet IT's expanding needs. Most of these were on campus. Consolidating these production sites into one main, modern, expandable facility was the driver for the purpose-built downtown data center.

As you might expect of buildings not purpose-built to be data centers, all had unique challenges. There were limits on what kind of cooling and power could be brought in. The power grid on campus at the time, Pedersen explained, was also far from perfect. Most buildings also ran the risk of plumbing mishaps and flooding.

"You can have a flood above you, and it’s going to rain in your IT space," Pedersen said. 

The downtown location's biggest advantage was cost, and it had already been scoped out to be a data center by its previous tenant, MCI WorldCom, a telecommunications company that went bankrupt and lost the property to foreclosure.

"This was a very unique find. It was an empty shell, but it already had a determined future to be a data center," Pedersen said.

A model of efficiency

A data center, you might think — what a power drain. While true that operating so many computing resources demands ample energy, the "out with the old, in with the new" adage has helped the DDC become a model of efficiency.

"What we've found is that the influx of new equipment is denser, takes up less space, is more powerful compute-wise, and uses power more effectively," Pedersen said. "We're seeing a curve that’s flattening out. We thought about five years in, we'd need to expand, but as legacy gear comes out, cloud services [are] used, virtualizations deployed, and more people moved to virtual environments hosted by the U, that means older gear is going out, with more efficient gear coming in."

The production room is pressurized so that air is pushed through the machines, rather than relying on fans to pull air in, which also cuts down on power consumption.

What's cool about cooling

Water misters keep the humidity level just right.

Cooling might not seem like a showstopper, until you think of the fan that bleeds heated air from a CPU tower. Now imagine the amount of heat generated in a data center.

The data center uses a system that's called "hot aisle containment." Racks upon racks of servers sit in the open production space, or inside enclosed "pods." Outside on the floor, it's always a pleasant 72 to 75 degrees. Inside these containment pods, temps can rise to 95 degrees, but all that hot air is pumped up and out of the building.

The DDC's Power Usage Effectiveness (PUE) rating, a measure of energy efficiency, is 1.2 — comparable, Pedersen said, to data centers run by tech giants Google and Amazon. 
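
For context, PUE compares everything the facility consumes with what actually reaches the computing equipment:

PUE = total facility energy ÷ IT equipment energy

At a PUE of 1.2, roughly 0.2 watts go to cooling, lighting, and power distribution for every 1 watt delivered to the servers. As a hypothetical illustration, a 1-megawatt IT load at that rating would draw about 1.2 megawatts in total.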

To maintain this world-class efficiency rating, the DDC uses traditional dry air chillers and evaporative cooling (water evaporation to cool the chilled water that comes into the building). Water misters in the maintenance hallway adjacent to the main production area add humidity as needed. A wall of fans (and 240 particulate filters) keeps the air inside clean and the temperature constant. The third cooling system is "Direct Air Economization," the design piece that enables the DDC's high efficiency. Pedersen said that by utilizing fresh outside air nearly 80 percent of the year, the system filters and humidifies that air before pushing it directly through the clean production environment for heat exchange.

There you have it ... a peek inside one of the university's most interesting "secret" places.

DDC at a glance

  • 79,000 square feet, including 16,000 square feet of warehouse space
  • 40 miles of copper
  • 25,000 gallons of reserve generator diesel fuel
  • 14 full-time staff
  • 24/7/365 coverage 

Power capacity

  • Scalable to 10 megawatts
  • Current use: 1 megawatt
  • 208 volts run to the racks

Power usage effectiveness (PUE)

PUE is a ratio used to measure energy efficiency. The DDC's 1.2 PUE rating is on par with data centers run by Google, Amazon, and eBay.

ACI project

Aside from some lingering connections, the vast majority of the U's production systems have moved to Application Centric Infrastructure (ACI).

That network conversion included:

  • 1,800+ server ports 
  • 1,600 new terminations running on about 11 miles of cable

ACI-related project (in progress) 

DDC and network staff are performing what Pedersen dubs a "surgical extraction" of unused fiber-optic cable (dark fiber, in industry parlance), along with old switch gear and copper. Switch gear, Pedersen said, is generally exchanged with Cisco for credit; copper is commonly recycled if it's unused and up to industry standards; and dark fiber is removed and destroyed.
