Any number of experts will give you their opinions on the hot trends in datacenters these days. For example, analyst Andreas M. Antonopoulos, senior VP of Nemertes Research, lists consolidation, growth, availability and operational efficiency as the four main themes that come up in Nemertes surveys of IT executives. Others emphasize such issues as cutting costs or implementing a service-oriented architecture (SOA).

While it’s true that most datacenters are growing, and most people want to cut costs and operate efficiently, these perennial factors recently seem to be driven by an increasing emphasis on using the datacenter to better achieve specific business goals. In that light, we can see more specifically how four key trends—consolidation, virtualization, upgrading networks and improving the power and cooling infrastructure—constitute the operational realities of growth, cost containment and efficiency improvement. Note that these are not four distinct trends, but, as explained below, each facilitates and/or requires the other three.

Cramming It In

Over the past two decades, datacenters have evolved from mainframe/dumb terminal to client/server architectures, then to Linux clusters and distributed computing.

While few people pine for the days of green screens and command line interfaces, neither do datacenter staff want to manage an ever-growing number of different boxes scattered all over the landscape. Instead, they are seeking more simplicity through consolidation.

“IT executives are condensing their infrastructure so they can deliver new services and enhance existing service levels while holding the line with respect to both budgets and staffing levels,” said Matt Eastwood, program VP of worldwide server research at IDC.

From a hardware standpoint, this trend manifests in several ways:

■ Server Consolidation—Clusters of commodity servers were once viewed as the replacement for mainframes. Now many organizations are finding they can more efficiently run their applications on larger servers or on mainframes, cutting both their support costs and the number of server software licenses.

■ Processor Density—Both AMD and Intel are putting multiple processor cores on a single chip, packing more processing power into a given space and cutting power and cooling needs by up to 40 percent. AMD was first out the door and now has dual-core Athlon (desktop), Opteron (server) and Turion (mobile) 64-bit processors.

In August, AMD announced that its upcoming quad-core Opteron server processors, scheduled for release in mid-2007, would fit into the same power and thermal envelope as its current dual-core processors, making upgrades easy once the new processors are available. Intel, meanwhile, has its Core 2 Duo line of desktop/mobile processors and dual-core Xeon server CPUs. It plans to release quad-core Xeons late this year, with a quad-core desktop version coming in early 2007.

■ Blade Servers and Switches—These also reduce the space required in the datacenter. Market research by Gartner indicates that 564,189 blade units shipped worldwide last year, and that as much as 20 percent of datacenter servers will be blades by 2010. IDC said that the number of blade servers shipped in the first quarter of 2006 was 29.5 percent higher than in the same quarter a year earlier. Revenue growth was even stronger, expanding 43.4 percent to $591 million, as customers switched from single-processor blades to blades with two or four processors.

“We recommend clients reserve blades for certain applications such as Web serving, high performance computing, terminal servicing and distributed applications,” said Gartner research director Jane Wright. “We don’t feel comfortable putting heavy database applications on blades, and middle-tier applications we look at on a case-by-case basis.”

After a few years of hype, it is finally the right time to start deploying blades, according to Bob Gill, chief research officer for market researchers TheInfoPro, Inc. (TIP) in New York City. “Two to three years ago, blade servers were pitched as a cost savings, and people were routinely disappointed because the costs of blades were higher than 1U rackmount servers,” he said. “Now they understand that while the initial costs are higher they can save on operation costs.”

This doesn’t just apply at the datacenter, Gill added, but in remote locations as well. “If I can have a relatively unskilled person just slide another blade in that is already networked and connected to storage,” he said, “I have saved money and I need less expertise at the remote location.”

Gartner’s Wright said that blades, though more expensive, can reduce the cost of networking and storage, resulting in an overall cost savings. But she recommends that server administrators consult with the networking staff before installing blades to ensure they are compatible with the organization’s current network standards.

While IBM, HP and other major hardware vendors are expanding their blade offerings, Sun Microsystems has dropped out of the blade server market. Sun does, however, offer blades based on the Advanced Telecom Computing Architecture (ATCA) specification for use in carrier-grade, high-availability telecommunications equipment. (For more on blade servers, see BCR, November 2006.)

In addition to these hardware-level changes, organizations are also shutting down branch office servers and/or multiple datacenters and consolidating these resources into a single corporate datacenter. The remote users then access their applications and data over the WAN.

“Many companies are consolidating their IT equipment into fewer locations to improve management and reduce costs,” said Kevin McCalla, power product manager for Liebert Corp. of Columbus, OH, a subsidiary of Emerson Network Power. “Large datacenters are being built or retrofitted to house high-density blade servers, data storage devices, power and cooling equipment, VOIP and additional systems.”

The City of Bergen, Norway, for example, had local Windows NT file/print/communication servers at its 100 schools, and IT personnel were constantly being called out to address problems on these boxes.

“Disk, fan and power supply failures were common, and system corruption by viruses and other problems meant that engineers were faced with an ever-increasing workload,” said CTO Ole-Bjorn Tuftedal. “We had an amazing range of software versions and disk images, and the demand for human intervention was constant.”

To address this, the city replaced those 100 servers with 20 IBM HS20 diskless blade servers sitting in an enclosure in the datacenter. This has drastically lowered its support costs.

Countering this trend, however, is the emphasis on redundancy for regulatory compliance or as part of disaster recovery and business continuity plans. In this case, many organizations locate their servers at a main datacenter, but then set up one or more mirrored backup sites in other locations.

“Instead of consolidating our datacenters, we are going in the opposite direction,” said Dan Agronow, CTO of The Weather Channel Interactive in Atlanta. “In the past we have consolidated, but now we are expanding the number of datacenters we have in order to enhance our disaster recovery posture.”

Living Virtually

The next hot topic is virtualization of servers, networks and storage. Virtualization pools physical resources, allowing them to be logically shared, and thus more fully utilized. For example, a disk drive can be partitioned into two or more logical volumes, or multiple disks can be joined into a single virtual disk. Server and network resources can similarly be shared or combined.
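
As a rough sketch of the pooling idea (a toy model in Python, not any particular vendor’s product), the following fragment pools the capacity of several physical disks and carves it into logical volumes. The class, names and sizes are invented for illustration:

```python
# Toy model of storage virtualization: several physical disks are pooled
# and the combined capacity is carved into logical volumes. Purely
# illustrative; not any vendor's API.

class StoragePool:
    def __init__(self, physical_disks_gb):
        # The pool's capacity is the sum of all physical disks behind it.
        self.capacity_gb = sum(physical_disks_gb)
        self.allocated_gb = 0
        self.volumes = {}

    def create_volume(self, name, size_gb):
        # A logical volume may span disks; consumers only see its size.
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        self.volumes[name] = size_gb

pool = StoragePool([300, 300, 300])   # three drives behave as one 900-GB pool
pool.create_volume("db_logs", 200)
pool.create_volume("file_share", 500)
print(pool.capacity_gb - pool.allocated_gb, "GB still unallocated")
```

The same pattern applies to servers and networks: the consumer sees a logical slice, while the underlying physical resources are shared and kept busy.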

“This virtualization of datacenter resources means that servers are no longer tied to specific operating systems or applications, and can be easily shared and automatically repurposed based on business priorities and service level agreements,” said Susan Davis, vice president at Egenera, Inc., a blade server manufacturer in Marlboro, MA. “By turning servers into dynamic assets, enterprises can dramatically reduce the number they need, use a single server as backup for many servers, and reduce datacenter complexity.”

Virtualization can offer other benefits, but so far it is mostly being used for server consolidation.

“Virtualization has been portrayed as offering all kinds of dramatic benefits like dynamic provisioning and disaster recovery,” said TIP’s Bob Gill. “People who are doing it say this is a dirt-simple way to save a lot of money.”

Jim Jones is the network administrator for WTC Communications in Wamego, KS, which provides telephone, Internet and cable television services to customers in the Kansas River Valley. He began using EMC Corp.’s VMware ESX Server two years ago, with the primary goal of running additional servers without adding to the power load.

“We are a telco so we run everything on DC,” said Jones. “Everything that runs on AC has to go through an [expensive] inverter to go from DC back to AC, so AC is prime real estate.”

WTC went from eight physical servers down to three, with another two for its EMC storage area network, significantly cutting the amount of power needed. By installing ESX Server, WTC runs from 18 to 24 virtual machines on those three physical servers, and the newest version of the ESX software continually monitors performance and shifts loads between the physical servers when needed.

“We used to have to go back and look at hours of graphs and try to figure out what needs to be where at different times of the day,” said Jones. “The newest version of ESX knows when physical servers are being overtaxed and reallocates the virtual machines to other servers.”
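
To make the idea concrete, here is a deliberately simplified Python sketch of that kind of rebalancing: when a host exceeds a load threshold, its largest virtual machine is moved to the least-loaded host that can take it. This is a toy greedy heuristic with invented host and VM names, not VMware’s actual algorithm:

```python
# Toy VM rebalancing heuristic (not VMware's algorithm): move the biggest
# VM off any host whose total load exceeds a threshold, onto the least
# loaded host that can still accommodate it.

def rebalance(hosts, threshold=0.80):
    """hosts: dict mapping host name -> list of (vm name, load fraction)."""
    load = {h: sum(l for _, l in vms) for h, vms in hosts.items()}
    for host, vms in hosts.items():
        while load[host] > threshold and vms:
            vm, vm_load = max(vms, key=lambda v: v[1])   # biggest VM first
            target = min(load, key=load.get)             # least-loaded host
            if target == host or load[target] + vm_load > threshold:
                break                                    # nowhere better to put it
            vms.remove((vm, vm_load))
            hosts[target].append((vm, vm_load))
            load[host] -= vm_load
            load[target] += vm_load
    return hosts

hosts = {"esx1": [("web1", 0.50), ("web2", 0.45)],
         "esx2": [("mail", 0.20)],
         "esx3": [("dns", 0.10)]}
print(rebalance(hosts))   # web1 migrates from the overloaded esx1 to esx3
```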

Virtualization also saves time when it comes to provisioning. Previously, whenever a new box was purchased, it had to be built from scratch; even if it was the same model, the BIOS and drivers were probably different, so there was no way to deploy a standard image. Virtualization eliminates that particular problem.

“Now we deploy a new server in five minutes,” he said. “Weeks of work are saved this way.”

Since WTC only had a few servers, it could virtualize all of them at one time. Larger organizations will take a more gradual approach. For example, PHH Arval, a commercial fleet management firm headquartered in Sparks, MD, which manages 625,000 vehicles in North America, has racks of virtualized Hewlett-Packard blade servers. Whenever it is time to refresh older equipment or start a new project, the workload is virtualized.

“We have a big push for further consolidation and reduction in footprint, reduction in overhead and support costs,” said PHH CIO Tim Talbot. “To me virtualization is just the next natural progression in technology. If you have to refresh or add new equipment, it is foolish not to do it within a virtualized footprint.”

Better Connections

The above trends of consolidation and virtualization, while reducing the number of servers, also increase the load on the network. A blade server chassis may have a dozen or more servers sharing the same network connection. Consolidation drives CPU and memory utilization much higher, but that many physical and virtual servers sharing the same network interface cards and uplinks can easily produce a bottleneck.
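
Some back-of-the-envelope arithmetic shows how quickly the demand adds up. The chassis size, VM counts and per-VM traffic below are hypothetical, not figures from the article:

```python
# Hypothetical oversubscription math for a consolidated blade chassis.
blades = 14                       # assumed blades per chassis
vms_per_blade = 4                 # assumed virtual machines per blade
peak_mbps_per_vm = 100            # assumed worst-case traffic per VM
uplink_gbps = 2 * 1               # two shared 1-Gbps chassis uplinks

peak_demand_gbps = blades * vms_per_blade * peak_mbps_per_vm / 1000
print(f"Potential demand: {peak_demand_gbps:.1f} Gbps over {uplink_gbps} Gbps of uplink")
print(f"Oversubscription: {peak_demand_gbps / uplink_gbps:.1f}:1")
```

With those assumptions, 56 virtual machines could ask for 5.6 Gbps from 2 Gbps of shared uplink, a 2.8:1 oversubscription that shows up as congestion whenever many of those servers transmit at once.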

“In many cases, as they consolidate multiple servers to single physical boxes, they are not engineering the network requirements of these boxes,” said TIP’s Bob Gill. “They are just putting multiple loads in single boxes and hoping everything works out OK.”

This is particularly true during resource-intensive operations such as backups. Bill Trousell, TIP’s managing director of networking, said that datacenter consolidation and virtualization are making it necessary for many organizations to upgrade to 10-Gbps Ethernet.

“Both of these [consolidation and virtualization] are causing aggregation of information flow,” he said. “In order not to have the network contribute to any latency issues, companies have to make sure the backbone has the bandwidth to handle the aggregate server total, which is much larger than it used to be.”

He said, therefore, that 10-Gbps Ethernet at the core is at the top of networkers’ wish lists. In fact, 20 percent of the organizations TIP has recently surveyed already have it in use, and another 36 percent plan to move to it by the end of 2007.

“Moving a number of servers from geographically diverse locations into a datacenter, or consolidating local servers onto one machine—both have an impact on the networking,” Trousell said. “In order to facilitate that, you have to look at whether the backbone can handle the demand, and that is where people are looking at 10-Gbps.”

Another factor facilitating the move to 10-Gbps Ethernet has been the price drop. “Two to three years ago, a 10-Gigabit module ran about $20,000; today a module can cost about $4,000, with some as low as $1,000,” said Mike Johnson, a LAN/WAN specialist with technology provider CDW Corp. “As a result, increasing numbers of organizations are able to take advantage of the network speed improvements offered by 10-Gigabit.”

Ten-Gigabit links are also being used for off-site connections. For example, when The Weather Channel implemented its multiple-datacenter approach, it worked with its telecommunications provider, Verizon, to beef up the connections between sites to accommodate the amount of data that would be replicated or synchronized.

“Before, we had 1-Gbps links, which limited the amount of data we could move between facilities,” said Weather Channel CTO Dan Agronow. Now they are connected by a 10-Gbps ring which, he said, gives them plenty of room to grow, without having to resort to data compression.

But increasing the size of the pipe is not always the best option. In addition to datacenter mirroring such as The Weather Channel is doing, companies also need to consider the WAN when consolidating remote servers into a datacenter.

“Through consolidation, companies are winding up where the applications are being served out of one location and used by a geographically dispersed workforce,” said TIP’s Bill Trousell. “What we are seeing now is movement toward using application acceleration and WAN data compression, particularly in international locations where bandwidth is expensive.”

Powering Up And Cooling Down

In addition to the network capacity problems caused by increasing datacenter density through consolidation and virtualization, there are the interconnected problems of providing adequate power and cooling.

“In the past, this equipment might have been spread over several sites and the costs of the power appeared manageable,” said Liebert’s Kevin McCalla. “When a large number of these high-density systems are concentrated in one location, the use of power escalates dramatically.”

The more power that equipment uses, the more heat it generates, and, according to Kenneth G. Brill, executive director of The Uptime Institute, 10 percent of all racks are already too hot. Then there is the cost. “HP research has found that for every dollar spent on IT in the datacenter, companies spend a dollar or more to power and cool that equipment,” said Mark Potter, vice president, HP BladeSystem.

These issues are putting a limit on how fully a datacenter can execute its consolidation strategy. “We could put far more servers physically in a cabinet than we have power for those servers,” said The Weather Channel’s Agronow.

He is not alone. According to TIP’s Gill, the problem is widespread. “In the past, it was all ‘smaller, faster, cheaper,’ now this whole energy profile is a major problem,” he said. “The vendors are painfully aware of it and are working on ways to solve it.”

Solutions to improve server cooling are available. For example, this summer HP introduced the Active Cool Fan, which is based on the design of radio-controlled airplane engines. The company says the new design consumes one-third the power of traditional fans while being 50 percent more effective at cooling. Verari Systems of San Diego now uses a vertical cooling technology for its blade racks, venting at the top of the racks (rather than requiring the traditional alternating hot aisle/cold aisle design). IBM, Silicon Graphics, HP and Liebert have reintroduced liquid cooling.

Most datacenters have enough capacity to cool the datacenter as a whole; they just can’t cool a specific rack loaded with blades or other compact servers. But increased cooling and drawing away heat are, at best, an intermediate solution. The better choice is to reduce wasted power and not generate the heat in the first place.

At the CPU level, both AMD and Intel are cutting the power consumption, and thereby heat, of their latest designs. According to IDC research analyst Jed Scaramella, power issues are one of the major reasons companies are switching to AMD processors. “IDC believes the perceived thermal benefits of AMD systems resonate with IT managers in their initiatives to control the increasing power and cooling costs of the datacenter,” he said.

But CPUs are just part of the wasted power. “We see these marketing campaigns from Intel and AMD getting a lot of positive buzz over the fact that they are slightly more efficient,” said TIP’s Gill. “And really, the processor is just one small piece of the equation. You have to also think about disk drives and power supplies.”

To cut energy losses in the power supplies, Rackable Systems offers DC power solutions that can be installed for a single rack, a row of racks or a whole datacenter. A rectifier on each rack converts AC to DC power, which is then fed to DC power supplies in each server, rather than each server having its own AC-to-DC converter. This setup is more efficient, resulting in nearly 14 percent less power consumed and 54 percent less heat. An additional advantage is that the heat from the AC-to-DC conversion is generated outside the server enclosure, keeping the server components cooler.
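
A rough calculation shows where savings of that order can come from. The efficiency figures below are illustrative assumptions (chosen to land near the 14 percent cited above), not Rackable’s published specifications:

```python
# Hypothetical comparison of per-server AC-to-DC conversion vs. a shared
# rack-level rectifier feeding DC supplies. Efficiency values are assumed.
it_load_watts = 10_000            # power the server components actually need

per_server_psu_eff = 0.72         # assumed typical per-server AC-to-DC supply
rack_rectifier_eff = 0.90         # assumed shared rack rectifier
dc_supply_eff = 0.93              # assumed simpler DC supply in each server

ac_draw_conventional = it_load_watts / per_server_psu_eff
ac_draw_dc_rack = it_load_watts / (rack_rectifier_eff * dc_supply_eff)
savings = 1 - ac_draw_dc_rack / ac_draw_conventional

print(f"Conventional: {ac_draw_conventional:.0f} W, rack-level DC: {ac_draw_dc_rack:.0f} W")
print(f"About {savings:.0%} less power drawn from the wall")
```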

“DC-based systems can dramatically increase server efficiency and reliability while reducing power consumption,” said Colette LaForce, Rackable’s VP of marketing. “In California alone, it is estimated that power consumed on a daily basis in datacenters is the equivalent of 5,000 barrels of oil—so you can imagine the impact a 10 to 30 percent reduction would have.”

Greg Palmer, ESSG Product Line Manager for American Power Conversion, said that APC is focused on driving UPS efficiencies higher. One way it achieves this is through modular design.

“UPSs typically perform at high efficiency only when heavily loaded,” said Palmer. “Modular UPS designs significantly lower the cost of UPS redundancy, also allowing the UPS to be loaded to a higher level and, in the end, to run more efficiently, saving the customer money.”
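
A simple example illustrates the point. The efficiency curve below is made up for illustration (fixed losses dominate at light load), and the load levels are hypothetical:

```python
# Toy UPS efficiency curve: fixed losses matter less as the load rises.
def ups_efficiency(load_fraction, fixed_loss=0.04):
    return load_fraction / (load_fraction + fixed_loss)

# Monolithic 2N redundancy: two big UPSs each carry roughly half the load.
# Modular N+1 redundancy: one spare module lets the rest run closer to full load.
for label, load in [("2N monolithic, ~40% loaded", 0.40),
                    ("N+1 modular, ~80% loaded", 0.80)]:
    print(f"{label}: {ups_efficiency(load):.1%} efficient")
```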

Conclusion: Moore Power

Just as Moore’s Law predicts an ever-increasing number of transistors in a single integrated circuit, so can we expect to see an ever-increasing number of processors in the datacenter for the foreseeable future. But it won’t happen haphazardly. By taking advantage of advances in the above four areas, datacenter managers can continue to provide greater quantities of computing power within existing budgets and physical plants.
