
CHPC has strong presence at SuperComputing 2014

CHPC booth at SuperComputing 2014

By Emily Rushton

SuperComputing 2014 brings together people from across the country to debut their research and innovations in high performance computing, networking, storage, and analysis. The conference attracts more than 10,000 attendees and over 350 exhibitors, including UIT’s Center for High Performance Computing (CHPC).

Sam Liston, Senior Systems Administrator for CHPC, has been in charge of the department’s participation for the past 13 years.

“It’s a good place to show off, show your presence, and show your participation in the community among other HPC centers, national labs, and the industry,” said Liston. “It’s a good place to go see and hear about new technologies.” 

For CHPC, it’s all about networking and showcasing the latest advances in high performance computing.

“Our booth has sort of become a place to demo. Unlike other booths that have a schedule of presenters throughout, ours is more of a meeting place,” said Liston. “We make it very lounge-y with lots of seating; vendors who want to meet with us can stop by, and we can set up ad hoc meetings with our peers.”

This year, CHPC staff collaborated with the Scientific Computing and Imaging Institute (SCI) on two demos for their booth. The first showcased ViSUS, a visualization application that enables real-time management of large datasets on a variety of systems, including desktops, laptops, and portable devices.

“It matches the visualization with the capabilities of the display device, whether you’re on your phone or at a big display wall,” said Liston. “And the interesting thing was that they did some real-time visualizations while things were being computed on CHPC resources. So on our clusters, things were being created and simulated, and then visualized remotely there [at the booth].”

The second demo focused on the power of coprocessors, which are special-purpose processing units that perform specific types of operations. The demo showed different ways of visualizing the same thing, using two separate coprocessors – an Intel Xeon Phi and an NVIDIA GPU.

“It’s visualizing things that are really sort of a grand challenge,” said Liston. “Things that have been historically too large to visualize – structures with five to seven million atoms. To think about rendering that many atoms in an interactive fashion is a big feat.”

There’s a big push toward coprocessors in high performance computing these days, and for good reason. CHPC has been using them for the past six years.

“It’s like potentially having hundreds of computers in a single card,” he said. “If you have the right sort of workload for it, it can really pay off in speeding up your code.”
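The payoff Liston describes comes from data parallelism: the same operation applied across millions of values at once, spread over a card’s many cores. As a rough sketch of that idea in Python, using the CuPy library purely as a stand-in (the article doesn’t describe CHPC’s actual codes or toolchains, and the array size and arithmetic here are arbitrary):

```python
import numpy as np
import cupy as cp  # GPU array library, standing in for any coprocessor toolkit

# A data-parallel workload: the same arithmetic applied to millions of values.
x_host = np.random.rand(10_000_000).astype(np.float32)

# Move the data onto the coprocessor, where its many cores work in parallel.
x_dev = cp.asarray(x_host)
y_dev = cp.sqrt(x_dev) * 2.0 + 1.0  # runs on the device, not the host CPU

# Copy the result back to host memory for further use.
y_host = cp.asnumpy(y_dev)
```

Whether a code actually sees that speedup depends, as Liston notes, on having the right sort of workload: large, regular, and parallel enough to keep all those cores busy.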

There were takeaways from the conference, too. One of the new methodologies emerging onto the scene is object storage, a way of managing data as objects rather than as a traditional hierarchy of files and directories.

“Essentially you’re abstracting files and allowing the system to store them in a more efficient way,” said Liston.
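In practice, an object store addresses data by flat buckets and keys rather than directory paths, often through an S3-style API, and leaves the physical layout of the bytes to the system. A minimal sketch of that access pattern in Python with boto3 (the endpoint, bucket, and key names below are hypothetical, not anything CHPC has deployed):

```python
import boto3

# Objects live in flat buckets and are addressed by key; the store decides
# how the bytes are actually laid out and replicated behind the scenes.
s3 = boto3.client("s3", endpoint_url="https://objects.chpc.example.edu")  # hypothetical endpoint

# Archive a dataset as a single object: one PUT, no directory tree to manage.
with open("simulation_output.h5", "rb") as f:
    s3.put_object(Bucket="research-archive", Key="2014/run-042/output.h5", Body=f)

# Retrieve it later by the same key.
obj = s3.get_object(Bucket="research-archive", Key="2014/run-042/output.h5")
data = obj["Body"].read()
```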

As storage systems scale to very large capacities and file counts, object storage can become a necessity.

“One of the directions that we think this would be advantageous is a huge research archive. That’s something that the U does not have right now, a place to store things long-term,” he said. “So that’s something we’re looking at – perhaps using object storage as the backend for it.”  
