Mark Brandes doesn't put a priority on computing performance. A systems analyst at Central Grocers Cooperative Inc., he'll take availability every time. Last summer, when the Franklin Park, Ill., company needed to upgrade its Data General Aviion systems, he opted for a clustered system. That way, he says, "if we have any downtime on one server, the system will keep running."
What Brandes also got, though he wasn't expecting it, was better performance. His daily inventory, a job that used to take two hours, now takes just 10 minutes. "It blows our socks off," Brandes says. "The first time we ran it, we thought something was wrong."
From Windows NT-based systems with single failover capabilities to fault-tolerant systems based on Unix or proprietary operating systems, hardware clusters no longer require a performance trade-off for availability. Virtually all server vendors, including Compaq, Data General, Digital, IBM, NCR, Sequent, Tandem, and Unisys, are working on clusters, though NT clusters are still in their infancy. "If you're sitting there with a maxed-out server, you can add another server and increase performance," says David Flawn, marketing director for Data General Corp. in Westboro, Mass.
Clusters, which combine processors or systems into a single-system image, help IS shops cope with outages, disasters, updates, and upgrades. Shops that don't use clusters must shut down systems to do updates, and they must spend money on a new box to add processing power. With clusters, they don't have to do either. Above all is the issue of availability: if a company's single-processor system goes down, the business goes down.
But typically, clustering a system sacrifices some performance. Two 50-Mips processors,...