Content area
Speed isn't everything. If a fast system's complexity prevents users from getting the benefits they want, performance can actually be a hindrance.
That issue is at the heart of a debate in the user community over two fast, multiprocessing architectures: MPP (massively parallel processing) and SMP (symmetric multiprocessing).
In an MPP system, each node has its own operating system and memory. The system must therefore contend with latency and overhead whenever transaction-processing requests hop among nodes. By contrast, SMP systems share a single operating system and a common memory. But SMP slows down for applications that require large volumes of data and a lot of interaction, which is typical of data-warehouse applications. Such issues led many people to believe that SMP would slowly fade away. It's not happening.
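That trade-off can be captured in a toy cost model. In the sketch below, the latency figures, node count, and function names are all invented for illustration; real systems vary widely:

```python
# Toy cost model contrasting SMP (shared memory) with MPP (shared nothing).
# All latency figures are illustrative assumptions, not measured values.

LOCAL_ACCESS_US = 1      # one memory access within a node, in microseconds
INTERCONNECT_US = 50     # extra cost when a request must hop to another node

def mpp_query_cost(accesses, nodes):
    """Shared nothing: data is partitioned, so on average only 1/nodes of
    the accesses land on the local partition; the rest hop the interconnect."""
    local = accesses / nodes
    remote = accesses - local
    return local * LOCAL_ACCESS_US + remote * (LOCAL_ACCESS_US + INTERCONNECT_US)

def smp_query_cost(accesses):
    """Shared memory: every access stays in the single address space."""
    return accesses * LOCAL_ACCESS_US

print(smp_query_cost(10_000))      # all-local accesses
print(mpp_query_cost(10_000, 8))   # far higher once inter-node hops dominate
```

The model is crude, but it shows why interaction-heavy workloads punish a shared-nothing design: every cross-partition request pays the interconnect toll.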
Case in point: General Accident Insurance in Philadelphia. The insurer knew that an enterprise data warehouse it wanted to build would hold data that spanned multiple years. More important, General Accident expected that the data warehouse would grow from 100 Gbytes to 3 terabytes very quickly. An MPP system seemed the natural solution. But that's not what the company chose.
"To handle so many joins for data mining, we decided we had to use a parallel system," says Charlie Drumm, a senior business consultant at General Accident. "We're not a high-technology shop, and to bring in MPP on Day 1 would have brought us to our knees."
Specifically, Drumm doesn't think enough MPP-specific tools are available, or that MPP is yet mature enough for commercial applications. The MPP architecture carries complexities stemming from its "shared nothing" design, in which each node has its own memory and operating system. "I knew we'd be safe with SMP," reasons Drumm. "It's widely used, less complex, and there are many more tools."
Because they're built on a common platform, General Accident's NCR WorldMark systems can migrate from SMP to MPP in the future.
"I see convergence of the two technologies," says Drumm. "MPP is for very large data warehouses, but SMP has TP [transaction processing] and data-mart life cycles."
Industry analysts say General Accident's decision is no accident. "SMP came a long way during the quasi-religious war," says Howard Richmond, a VP and analyst with Gartner Group Inc., an IT advisory firm in Stamford, Conn. "MPP is dead; long live SMP."
Actually, Richmond believes there are nothing but survivors. "The data indicates that MPP is not as broad, but scalability says it will occupy the largest applications," he says. "It's not one or the other."
NCR Corp. embodies the industry's position. It sells both MPP and SMP systems. The MPP systems are designed specifically to run with the company's Teradata parallel database. But NCR believes in SMP for its high-availability transaction processing systems. "We'll focus SMP on 8[-processor] Pentium Pros," says Russell Holt, a VP and general manager of computers and servers for NCR. "Beyond that, we'll cluster SMP or move to MPP."
Holt says the gap between SMP and MPP is software. "Clusters and MPP from a hardware perspective look the same," he says, "but you don't have to change the software with SMP."
Holt acknowledges that Teradata, on MPP, is not suitable for online updates, but he says it is well-suited to huge volumes of data. By the same token, Informix and Oracle databases are optimized for small, localized chunks of data, making them ideal for OLTP (online transaction processing). NCR will run any of the databases, on either SMP or MPP. "We can handle any application," says Holt.
Better Clustering
The desire to cost-effectively support scalable applications is driving some vendors to embrace a new architecture that can better cluster SMP systems: NUMA (non-uniform memory access). With this architecture, vendors can link commodity motherboards, each containing multiple processors, to enhance scalability and performance by means of shared memory, and do it in a cost-effective manner. "NUMA has taken over for MPP as the hot architecture," says Gary Smaby, an analyst with Smaby Group, an IT advisory firm in Minneapolis. "Every vendor will have NUMA or something like it."
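In rough terms, NUMA stitches several SMP boards into one address space: an access is cheap when the data lives on the requesting processor's own board and costlier otherwise. A minimal sketch, with board counts and latencies assumed purely for illustration:

```python
# Minimal NUMA sketch: several SMP boards share one address space,
# but an access costs more when the target page lives on another board.
# Latencies and sizes here are assumptions for illustration only.

BOARDS = 4               # multiprocessor boards linked into one system
PAGES_PER_BOARD = 1024
LOCAL_NS = 100           # assumed local-memory latency, nanoseconds
REMOTE_NS = 400          # assumed remote-board latency, nanoseconds

def home_board(page):
    """Pages are striped across boards: the page number fixes its home."""
    return (page // PAGES_PER_BOARD) % BOARDS

def access_cost(cpu_board, page):
    """Non-uniform: the cost depends on whether the page is local."""
    return LOCAL_NS if home_board(page) == cpu_board else REMOTE_NS

print(access_cost(0, 5))   # page 5 lives on board 0: local access
print(access_cost(1, 5))   # same page from board 1: remote access
```

One address space means existing SMP software runs unchanged; the non-uniform costs are the price of gluing the boards together.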
Smaby believes MPP was an architectural experiment that was never market-driven. Few applications, he says, are "worth the pain" of MPP. NCR shields users from the complexities, Smaby adds, and IBM, the other primary MPP vendor (along with Pyramid/SNI), markets an SP2 MPP system that is "neither fish nor fowl," as he describes it, since it includes SMP processors. "Users will buy new technology that does things better, faster, and cheaper," says Smaby. Better yet, with SMP, "there is no rewriting of applications."
One user counting on ease of transition is Bloomberg LP, the New York financial-information provider. Bloomberg runs 70 Data General systems, half of which are NUMA-based systems containing 16-Mips to 32-Mips processors; DG won't ship Intel-based NUMA systems until early next year. The systems run applications, including Bloomberg's stock ticker, that can each require hundreds of processors. "NUMA could scale to 32 processors and that was something we wanted to try," says Bob Ostrow, a Bloomberg partner. "We have a large process count, a lot of simultaneous events, and we need a lot of processors working."
Bloomberg started using multiprocessing systems in 1985 and SMP systems in late 1992, so Ostrow didn't think MPP was suitable for its operations. But last January, the company purchased DG's NUMA systems as soon as they became available. Now, Ostrow notices a 20% performance improvement moving from an older 16-way system to the newer 16-way NUMA system. "DG says 'your mileage may vary' depending on the application," says Ostrow. "If it didn't work, it would be slower, and instead it's 20% faster."
The Boeing Co., which uses Sequent's SMP systems, hopes its supplier's NUMA Q systems will help with its mainframe migration strategy for as many as 40,000 users. The Seattle builder of jet planes expects to have as many as 20 Sequent NUMA Q systems in operation as Oracle database servers by the end of next year. So far, Boeing has successfully migrated one business unit, Skin and Spar (wing components) in Auburn, Wash. This month, Boeing plans to implement four more business units: Tubes and Cables in Everett, Wash.; Composites in Wichita, Kan.; Contracting in Everett, Wash.; and Finance in Renton, Wash.
"We're constrained in connecting everyone to the data because some applications are very data-intensive," says Maureen Morton, senior principal scientist at Boeing. "Some machines can't handle it, and we can break it up across machines more with additional power."
Morton expects NUMA Q systems will give Boeing more memory, more processing power, and help with heavy batch processing. Since applications are coming off the mainframe, they are sequential. "If we can get better performance without rewriting those applications, we'll really be ahead," says Morton. "We'll know in a few months if we'll be able to implement the applications on NUMA Q much faster."
Morton says she's pleased with Sequent's track record thus far. "Our experience with Sequent has been very strong, and clustering gives us 99.6% availability," she says. "We had our first outage in Skin and Spar after eight months, and the failover was successful."
Ready For Prime Time?
So far, Boeing's supplier, Sequent Computer Systems Inc. in Beaverton, Ore., has a handful of its NUMA Q systems being tested by customers. But it plans to move to general availability in February 1997. "Applications are driving SMP," says Kevin Joyce, director of inbound product marketing for Sequent, "and CIOs look at application requirements."
Some industry analysts want to see more availability and performance figures from the field before they get too excited about NUMA architectures. Gartner Group's Richmond, for one, says he's concerned about latency between near- and remote-memory sources. DG and Sequent claim the systems will pump data fast enough to mask such latency. "It might work, but it's too early to tell," says Richmond. "I must see real customer workloads first."
One SMP vendor, Digital Equipment Corp., must resolve such performance concerns before it will consider a NUMA solution for its customers. "NUMA says you stretch shared memory, but we're hearing about 25-to-1 performance differences between local and remote data," says Rick Gillett, senior consulting engineer in Digital's Alpha server division.
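A back-of-the-envelope calculation shows why such a ratio worries Digital: even a small fraction of remote references can dominate average latency. The 25-to-1 ratio below comes from Gillett's figure; the access mixes are hypothetical workloads:

```python
# Effective memory latency under a 25:1 remote-to-local ratio (the figure
# Digital cites); the remote-access fractions are hypothetical workloads.

LOCAL = 1.0      # normalized local-access latency
REMOTE = 25.0    # remote access, per the reported 25-to-1 concern

def effective_latency(remote_fraction):
    """Weighted average of local and remote access costs."""
    return (1 - remote_fraction) * LOCAL + remote_fraction * REMOTE

for f in (0.01, 0.05, 0.20):
    print(f, effective_latency(f))
# Even 5% remote traffic more than doubles average latency:
# 0.95 * 1 + 0.05 * 25 = 2.2x local
```

By this arithmetic, a NUMA system lives or dies on keeping the remote fraction tiny, which is exactly the data-placement problem Richmond and Gillett are waiting to see solved under real workloads.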
Digital uses its own Memory Channel to cluster SMP systems for increased performance. But Digital doesn't rule out a NUMA solution of its own. "Start with SMP for inherent programming ease," says Gillett. "Then connect the processors with combinations of hardware and software for applications to perform best without communications problems."
For current SMP users, application performance is always the issue. "Everybody's application will fit into SMP," says Bloomberg's Ostrow, "and you want the system to fit the application."
Analysts couldn't agree more. "Applications still drive the SMP market," says Smaby. Adds Gartner Group's Richmond: "The NUMA war will be total hype battles with words of fury and counterclaims. But the market just wants to scale applications beyond what they can scale today."
Copyright 1996 CMP Media Inc.
