University of Kentucky

University of Kentucky researchers move massive data loads easily with an SDN-enabled campus network.

From astronomy to bioinformatics to diagnostic medicine to physics and beyond, science requires moving massive data sets. Making that possible is Cody Bumgardner’s mission.

“Researchers need to move data that’s generated on campus to the cloud and other universities,” explains Bumgardner, director of research computing at University of Kentucky.

Big Data Chokes the Network

Moving data measured in terabytes and petabytes is essential for scientific collaboration, but the stark reality is that traditional campus networks simply aren’t designed to support pervasive big data.

Many universities use a science DMZ to tackle this problem. A science DMZ is a portion of the network, built near the campus perimeter, that is optimized for high-performance scientific applications rather than enterprise computing. Researchers' desktops, servers and other data services are isolated inside the science DMZ. The challenge is that security and other network policies aren't enforced consistently between the science DMZ and the rest of the campus network.

University of Kentucky wanted to take a fresh approach: an all-campus science DMZ. “We wanted to keep the researchers’ desktops and servers on the campus network,” says Bumgardner.

A cross-departmental team got to work.

Using SDN to Speed Big Data Flows


University of Kentucky uses software defined networking (SDN) to enable scientific data flows to take a high-speed path through the campus network and to the cloud.

“The initial idea was to add OpenFlow code to control the existing campus switches and then use high-end routers to connect them,” says Bumgardner. But there were setbacks: OpenFlow code didn’t exist for all the necessary boxes, different flavors of OpenFlow caused interoperability woes, and some switches’ OpenFlow implementations degraded performance.

Then the team tested the Aruba 5400R Switch Series, an OpenFlow-enabled, high-performance, low-latency, advanced Layer 3 modular switch from Aruba, a Hewlett Packard Enterprise company.

“Having SDN-enabled switches from Aruba allowed us to achieve our goal,” says Jacob Chappell, programmer/systems analyst at the Center for Computational Sciences.

An Intelligent, Programmable Network

“The 5400R switch was one of the few switches that we tested that implemented the OpenFlow NORMAL rule, allowing SDN switches to act as normal switches until an SDN rule was applied,” explains Bumgardner. That capability was the key to moving forward.
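The NORMAL behavior Bumgardner describes can be pictured as a lowest-priority table-miss entry: any packet that matches no installed SDN rule is handed back to the switch's conventional forwarding pipeline. A minimal sketch of that idea (the rule format, priorities and port names here are hypothetical, not the 5400R's actual pipeline):

```python
# Illustrative sketch of OpenFlow's NORMAL fallback: packets that match no
# installed SDN rule are processed by the switch's traditional L2/L3 logic.
# Rule format and port names are hypothetical, not Aruba's implementation.

NORMAL = "NORMAL"  # OpenFlow reserved port: traditional (non-SDN) forwarding

class FlowTable:
    def __init__(self):
        # A priority-0 table-miss entry sends everything to NORMAL, so the
        # switch behaves like an ordinary switch until SDN rules are added.
        self.rules = [{"priority": 0, "match": {}, "action": NORMAL}]

    def install(self, priority, match, action):
        self.rules.append({"priority": priority, "match": match, "action": action})
        self.rules.sort(key=lambda r: r["priority"], reverse=True)

    def lookup(self, packet):
        # The highest-priority rule whose match fields all hold wins.
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule["match"].items()):
                return rule["action"]

table = FlowTable()
# Before any SDN rule is installed, everything is switched normally.
assert table.lookup({"src": "10.1.2.3", "dst": "10.9.9.9"}) == NORMAL

# Divert one researcher's flow onto a high-speed path (hypothetical port 48);
# all other traffic still falls through to NORMAL forwarding.
table.install(priority=100, match={"src": "10.1.2.3"}, action="output:48")
assert table.lookup({"src": "10.1.2.3", "dst": "10.9.9.9"}) == "output:48"
assert table.lookup({"src": "10.5.5.5", "dst": "10.9.9.9"}) == NORMAL
```

This is why the NORMAL rule mattered for an all-campus deployment: switches could be rolled out everywhere without changing day-to-day behavior, with SDN rules layered on only where research traffic required them.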

The 5400R switches work in conjunction with Aruba Virtual Application Networks (VAN) SDN Controller Software, which acts as a unified control point in the OpenFlow-enabled network to simplify management, provisioning and orchestration.

The university deployed more than 3,000 SDN-enabled ports across campus to handle scientific flows without any impact on the academic, administrative and residential IT needs of over 30,000 students, faculty and staff. The 5400R switches are used for distribution and access.


Fast, Secure Data Flows

With SDN, the university can extend its science DMZ all the way to the switch port in a researcher’s building or office—and enforce consistent network and security policies. Ordinary traffic moves through the 5400R switches in “normal” mode, while scientific flows from researchers or supercomputer locations are diverted to take a high-bandwidth path.

“The normal rule is important because we only want to modify traffic that affects researchers, be able to drop attacks from hostile sources, or avoid middleboxes,” says Lowell Pike, network programmer in the Computer Science department.
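The three cases Pike lists map naturally onto a small policy function: drop traffic from hostile sources, steer researcher flows around the middleboxes, and leave everything else to NORMAL forwarding. A sketch under assumed addresses (the subnets, attacker address and port name are hypothetical, not the university's actual rules):

```python
# Sketch of the three policy cases described above. All addresses and the
# bypass port name are hypothetical examples, not real campus configuration.
import ipaddress

RESEARCH_SUBNETS = [ipaddress.ip_network("10.40.0.0/16")]   # science DMZ hosts
HOSTILE_SOURCES = {ipaddress.ip_address("203.0.113.7")}     # known attacker

def policy_action(src_ip: str) -> str:
    src = ipaddress.ip_address(src_ip)
    if src in HOSTILE_SOURCES:
        return "drop"                      # block attacks at the switch
    if any(src in net for net in RESEARCH_SUBNETS):
        return "output:bypass-uplink"      # divert around the middleboxes
    return "NORMAL"                        # ordinary campus traffic untouched

assert policy_action("203.0.113.7") == "drop"
assert policy_action("10.40.1.25") == "output:bypass-uplink"
assert policy_action("172.16.0.9") == "NORMAL"
```

In practice each branch would be expressed as an OpenFlow rule at a distinct priority, with the NORMAL case as the table-miss default.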

Right now, the OpenFlow rules are installed manually, but efforts to automate are underway.

Smashing Bottlenecks

Using SDN to create high-priority paths for big-data flows alleviates another performance crunch in the traditional campus network. Appliances, or middleboxes, that provide traffic shaping, load balancing, firewall and other network functions can be a big bottleneck.

“Even if you have a 100Gbps network, the middleboxes can drop the north/south speed to 100Mbps,” says Pike.

But now, scientific flows are diverted around these performance-sapping middleboxes, while network policies are enforced via SDN.

For researchers, the data floodgates have opened. Big data transfers between campus and research sites on Internet2 are 88 times faster. A transfer that previously would have taken a month to complete can now be done in less than eight hours.

Exploring a New Way to Build Campus Networks

SDN enabled the University of Kentucky to build a smarter, faster, more cost-effective network and support academic collaboration. The ability to accelerate big-data flows using SDN also meant the university needed fewer high-end routers, which delivered savings that could be reinvested into more high-speed switch ports.

“Using SDN is a strategic approach to deploying campus networks,” says Bumgardner. “We can push the money into higher capacity instead of router feature sets.”

That strategy is not only critical for big-data flows, but also fits the changing nature of traffic flows as more applications are hosted in the cloud and off-site data centers.

“Even if you’re sitting in a classroom, your traffic is going up and out to a data center or the cloud, because that’s where the learning management system is,” says Bumgardner. “Our research shows that 90 percent of campus traffic is north/south.”

With SDN switches and controllers from Aruba, adapting to the changing nature of campus networks has never been easier.