Mark Brandes doesn't put a priority on computing performance. A systems analyst at Central Grocers Cooperative Inc., he'll take availability every time. Last summer, when the Franklin Park, Ill., company needed to upgrade its Data General Aviion systems, he opted for a clustered system. That way, he says, "if we have any downtime on one server, the system will keep running."
What Brandes also got, though he wasn't expecting it, was better performance. His daily inventory, a job that used to take two hours, now takes just 10 minutes. "It blows our socks off," Brandes says. "The first time we ran it, we thought something was wrong."
From Windows NT-based systems with single failover capabilities to fault-tolerant systems based on Unix or proprietary operating systems, hardware clusters no longer require a performance trade-off for availability. Virtually all server vendors, including Compaq, Data General, Digital, IBM, NCR, Sequent, Tandem, and Unisys, are working on clusters, though NT clusters are still in their infancy. "If you're sitting there with a maxed-out server, you can add another server and increase performance," says David Flawn, marketing director for Data General Corp. in Westboro, Mass.
Clusters, which combine processors or systems into a single-system image, help IS shops cope with outages, disasters, updates, and upgrades. Shops that don't use clusters must shut down systems to do updates, and they must spend money on a new box to add processing power. With clusters, they don't have to do either. Above all is the issue of availability: if a company's single-processor system goes down, the business goes down.
But typically, clustering a system sacrifices some performance. Two 50-Mips processors, for example, provide less than 100 Mips when clustered, because some power is lost to overhead. That's less of an issue now, though, thanks to the screaming performance of today's processors: they're powerful enough to absorb much of that overhead. Yet even Flawn admits, "You just can't make a database server process any faster by clustering."
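A rough way to picture that overhead is a simple scaling model in which each node added to a cluster contributes its raw throughput minus a coordination penalty. The sketch below is illustrative only: the 50-Mips figure comes from the example above, and the 10% per-node penalty is an assumption, not a vendor benchmark.

    # Illustrative model of cluster scaling: each added node contributes its
    # raw throughput, discounted by a coordination penalty. The 10% penalty
    # is an assumption for the example, not a measured figure.

    def effective_mips(node_mips, nodes, penalty_per_extra_node=0.10):
        """Return aggregate throughput after clustering overhead."""
        total = 0.0
        for n in range(1, nodes + 1):
            # Every node past the first pays the coordination penalty.
            penalty = penalty_per_extra_node if n > 1 else 0.0
            total += node_mips * (1.0 - penalty)
        return total

    print(effective_mips(50, 2))   # two 50-Mips machines: 95.0, not 100
    print(effective_mips(50, 4))   # four machines: 185.0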
Flawn says Data General's Cluster-In-A-Box solution, which combines processors with storage devices, software, and services, enhances overall clustering performance even more. "You achieve bulletproof redundancy, but you don't have a second server sitting around, waiting for something to go wrong," he explains.
Brandes of Central Grocers evaluated clusters from Data General, Hewlett-Packard, IBM, and NCR before selecting two clustered Aviion 8500 four-processor systems from Data General. He made his choice on administrative grounds. "Data General had the most advanced clustered system because both systems are accessed from the same formatted Unix drive," says Brandes. "On the others, we'd have to maintain a separate drive for each system and keep them in sync for users to access both."
But some cluster vendors say availability, not performance, remains the goal. "You cluster hardware for availability," says Casey Powell, chairman and CEO of Sequent Computer Systems Inc. in Beaverton, Ore. Powell draws an analogy to a consumer electronics store that has 100 TV sets tuned to the same channel. With Sequent systems, he says, no matter how many processors he has, the system scales. "I want tuners," he adds, "not televisions."
Powell believes IS managers require 99.99% availability from servers. In an effort to deliver, Powell is directing a hardware cluster architecture built from quads of four Intel processors, 64 quads per node, and eight nodes per single system image. In February 1997, Sequent expects to begin shipping Unix systems built according to NUMA (Nonuniform Memory Access), an architecture for building clusters from inexpensive commodity boards and local memory. Even with Powell's focus on system availability, he expects the new Sequent systems to perform six to 12 times better than the current Symmetry systems.
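NUMA's appeal rests on locality: memory on a node's own board is fast, while references that cross the interconnect to another node cost more. A toy model of that trade-off, with invented latency figures and hit ratios rather than Sequent's actual numbers, might look like this:

    # Toy model of NUMA memory access: references that hit a node's local
    # memory are cheap; references to another node's memory are not.
    # The latencies and locality fractions are illustrative assumptions.

    def average_latency(local_ns, remote_ns, local_fraction):
        """Weighted average access time for a given locality ratio."""
        return local_fraction * local_ns + (1.0 - local_fraction) * remote_ns

    # With 90% of references staying local, the average stays near local cost.
    print(average_latency(local_ns=60, remote_ns=600, local_fraction=0.90))  # 114.0
    # Poor locality gives away most of the advantage.
    print(average_latency(local_ns=60, remote_ns=600, local_fraction=0.50))  # 330.0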
Higher Speed
One Sequent customer has already come to count on both availability and performance, though administering multiple nodes is sometimes a concern. "You don't inherently give up performance with clusters," says Mike Prince, CIO of Burlington Coat Factory in Burlington, N.J. "We won't cluster active databases, because we encounter overhead that may not be warranted. But if we want to increase CPU speed, we could run a parallel query, spread it around, and a user thinks he has tremendous performance."
Prince, who oversees a Sequent implementation of 28 processors, says gains in speed outweigh the overhead from some database queries. His greatest performance concern comes from online transaction processing applications that require a distributed lock manager. "That can be the bottleneck, causing us to lose efficiency and speed for the applications," says Prince. "However, the NUMA architecture should allow us to have more processors working without diminishing returns on a given application."
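The "spread it around" approach Prince describes can be sketched in a few lines: partition the rows across nodes, let each node compute a partial result, then merge. The sketch below is schematic, using threads on one machine to stand in for cluster nodes; it is not Sequent's or any database vendor's implementation.

    # Schematic parallel query: deal the rows out across "nodes," let each
    # compute a partial aggregate, then merge the partials at a coordinator.
    # The node count and data are invented for illustration.

    from concurrent.futures import ThreadPoolExecutor

    def partial_sum(rows):
        """The work one node would do on its slice of the table."""
        return sum(rows)

    def parallel_query(rows, nodes=4):
        slices = [rows[i::nodes] for i in range(nodes)]   # round-robin partition
        with ThreadPoolExecutor(max_workers=nodes) as pool:
            partials = list(pool.map(partial_sum, slices))
        return sum(partials)                              # coordinator merges

    inventory_counts = list(range(1, 10001))
    print(parallel_query(inventory_counts))   # same answer as a one-node scan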
One analyst believes that NUMA or no NUMA, clustering is sometimes an option to increase performance. "A newer, faster processor may not be available beyond what you have, or some new high-end processing horsepower cost could be much too high," says Dan Kusnetzky, a Unix and client-server analyst at International Data Corp., a market research firm in Framingham, Mass.
Kusnetzky says the best-performing hardware is influenced by the application that runs on the server. "Was the application partitioned to run on multiple processors, or was it one large program?" he asks. "If it's one large program, it will only run as fast as one processor can compute."
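Kusnetzky's point is essentially Amdahl's Law: the serial portion of a program caps whatever extra processors can deliver. A quick estimate, with an assumed split between serial and parallelizable work:

    # Amdahl's Law estimate: a program that is one large serial job gains
    # nothing from extra processors, while a well-partitioned one speeds up
    # in its parallel portion. The fractions below are assumptions.

    def speedup(parallel_fraction, processors):
        """Estimated speedup for a workload that is partly parallelizable."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / processors)

    print(speedup(0.0, 8))   # one large program: 1.0, no gain at all
    print(speedup(0.9, 8))   # 90% partitioned: about 4.7x on eight CPUs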
The most powerful hardware clusters for the next two years, Kusnetzky believes, will be IBM's premium-priced System/390 Parallel Sysplex mainframe system, and Digital's OpenVMS system. "Digital did things in the '80s with the operating system, the file system, and the hardware to make the cluster look like and operate like one computer," says Kusnetzky. "So now, the systems administrator doesn't even have to treat it like a cluster."
What about operating systems? Kusnetzky believes both Unix and NT are at a technological disadvantage compared with proprietary systems. "NT and Unix aren't clustered operating systems, and they don't even have a file system, like OpenVMS, to hide cluster hardware from the applications," he says. "NT is struggling to compete with Unix, to do things Unix already does, while Unix doesn't do what OpenVMS does."
Yet Brandes of Central Grocers is managing well with Unix and without such a file system. "So far, we've had no downtime," he says. Well, not exactly. One drive has crashed. But a RAID (redundant array of inexpensive disks) configuration let Brandes and his team pop in a new disk. "No one," he says, "knew the difference."
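The transparent swap Brandes describes depends on redundancy such as parity: in parity-based RAID levels, the parity block is the XOR of the data blocks, so any one lost block can be rebuilt from the survivors. A minimal sketch with toy byte strings standing in for drive contents (not Data General's actual implementation):

    # Minimal illustration of RAID-style parity: the parity block is the XOR
    # of the data blocks, so any single lost block can be rebuilt from the
    # surviving blocks plus parity. Drive contents here are toy values.

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                result[i] ^= b
        return bytes(result)

    drives = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(drives)               # written to the parity drive

    # Drive 1 fails; rebuild its contents from the survivors plus parity.
    rebuilt = xor_blocks([drives[0], drives[2], parity])
    print(rebuilt == drives[1])               # True: the data is recovered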
Copyright 1997 CMP Media Inc.
