The era of instant access to information has arrived. In the past, it was difficult to retrieve and manipulate data because much corporate information was stored in files kept on mainframe computers that were accessible only to information systems (IS) professionals. The development of personal computers changed all that. Users expect to be able to get the information they want, in the form they want it, when they want it.
When information was costly to collect and manipulate, "the constraining variable was the availability of data, not the need for it."(1) When data were scarce, accountants agreed on fixed reporting formats. Now management wants accountants to manipulate data in ways that are appropriate for today's problems, without regard for familiar fixed formats.(2) Client/server, a cooperative form of computing, lets this happen from the user's desktop.
WHAT IS CLIENT/SERVER COMPUTING?
Client/server computing, as its name implies, processes information via two or more computers connected in such a way that users perceive the system as an integrated whole. The user's workstation or PC works with data maintained on another computer. The workstation is called the client, and the computer on which the data are maintained is called the server. The server may be another PC, a workstation, a minicomputer, a mainframe, or a combination of several computers, and it can provide data to many clients.
Before client/server computing, when application systems were kept only on centralized mainframe computers, terminals displayed data and reports in fixed formats. When medium-sized computers were used as the host processor, this configuration became known as host-based processing. (See Figure 1, leftmost part.) (Figure 1 omitted) The computer actually running the application is known as the host, and the only function for the user's terminal is to request and display predefined input and output screens.
By the mid-1980s, many terminals were replaced by PCs that could emulate terminals. Emulating a terminal means that the PC is running a program, such as ProComm or CrossTalk, that allows it to respond as if it were a terminal connected to the host computer. The advantage of terminal emulation is that users have a device that does more than just perform as a dedicated terminal.
Client/server applies when the user's PC -- the client -- participates in the processing in a meaningful way through one of three arrangements: distributed presentation, distributed applications, or distributed databases.
DISTRIBUTED PRESENTATION
The lowest level of client/server participation is distributed presentation. At this level, the client computer runs a program that permits the user to request data from the server computer, manipulate the data in useful ways, and present the data in formats the user designates. (See Figure 1, second column.) This method of splitting the processing tasks is the easiest client/server arrangement to implement.
The distinguishing characteristic of distributed presentation is that clients can only read the data, which have been organized and loaded on the server for a specific purpose. Many current client/server implementations support only distributed presentation. The databases created on servers for client access are relational databases (databases in which users see only tables containing rows of data) that can be accessed with a relational query language such as SQL (structured query language). Relational query languages let users extract rows and columns of data from the tables, manipulate the data with arithmetic operators, and designate reporting formats.
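For instance, a query against a hypothetical table of regional receivables (the table and column names here are illustrative, not from any system described in this article) could select rows, compute a derived column with arithmetic operators, and order the result:

   -- Illustrative only: list each region's receivables and compute
   -- days sales outstanding as a derived column.
   SELECT region,
          receivables_balance,
          annual_sales,
          receivables_balance / (annual_sales / 365.0) AS days_outstanding
   FROM   regional_sales
   WHERE  annual_sales > 0
   ORDER BY days_outstanding DESC;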
Client computers have no capability for changing data on the host or server systems. Data on the server typically are subsets or combinations of master files maintained by legacy systems on host computers. Legacy systems are the decades-old transaction processing systems -- such as receivables, payables, inventory, and payroll -- that capture and represent an organization's business. Users usually cannot access legacy systems directly unless they are programmers.
Distributed presentation appeals to users because PC software such as spreadsheet programs and database query languages is easier to use than mainframe software such as programming languages. One reason PC software is easier to use is its graphical user interface, called a GUI (pronounced goo-ey). GUIs are characterized by pull-down menus, online help screens, overlapping and resizable windows, and other presentation devices that help users organize their work.
Accelerated information delivery for Prudential. A distributed presentation client/server system can add flexibility to information processing. When the Prudential Bank & Trust Company relied on mainframe systems exclusively for transaction processing and information reporting, the typical turnaround time for report requests was a week. Using its new distributed presentation client/server system, Prudential has reduced the turnaround from a week to minutes and has given analysts the ability to write "what if" queries to discover new trends or correlations in corporate data (Figure 2).(3) (Figure 2 omitted) If analysts can think of a situation that may show interesting results, all they have to do to get the results is write the SQL query representing the situation. Prudential's system is typical of the efforts that many large banks are making to encapsulate their legacy systems with interfaces, data warehouses, and utilities that let users have easier, more flexible access to mainframe data. The implementation time for this system was six months.
How did client/server computing help the bank speed up access? The bank's analysts were given client computers with access to servers that receive daily updates from the mainframe databases. The same customer, transaction, and product databases reside on the mainframe as before, but now daily updates flow via RJE (remote job entry) to a mainframe-resident data warehouse and to multiple databases on server computers (Figure 2). Access software known as middleware, parts of which run on server and client computers, allows the client computers to reach the data warehouse and the databases on the servers. On client computers, analysts have a query language for the relational databases on the servers, a spreadsheet program to manipulate the data, and other application software. Particularly valuable queries are saved for later use and made available to other users on the network. Instead of having to rekey data into spreadsheet models or report writers, analysts receive data directly from their own queries, save the information on their hard disks, and import it into a database, spreadsheet, or other application model. Furthermore, analysts do most of their own information systems work. Previously they would have requested custom reports from the information systems group and waited a week for the results. The backlog of requests in IS has diminished considerably.
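A "what if" query of the kind the Prudential analysts write might look like the following sketch. The tables and columns are our invention, not the bank's actual schema, but the pattern -- joining customer files to surface a correlation -- is typical:

   -- Hypothetical sketch: which large depositors also carry large
   -- loans, and what is the combined balance for each?
   SELECT c.customer_id,
          c.customer_name,
          d.deposit_balance + l.loan_balance AS combined_balance
   FROM   customers c
          JOIN deposits d ON d.customer_id = c.customer_id
          JOIN loans    l ON l.customer_id = c.customer_id
   WHERE  d.deposit_balance > 10000
     AND  l.loan_balance > 50000
   ORDER BY combined_balance DESC;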
DISTRIBUTED APPLICATIONS
The middle level of client/server computing is distributed applications, in which the client participates in application processing. (See Figure 1, third column.) In this arrangement, processing on the client computer can initiate changes to the data stored on the server. This system is worthy of the name cooperative processing because programs on the client and on the server have to cooperate to process changes and to ensure the integrity of changes made to the databases.
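A minimal sketch suggests what that cooperation requires. In the transaction below (the table names are hypothetical, not drawn from any system described in this article), the database software on the server applies both changes or neither, so the files cannot be left half-updated:

   -- Hypothetical sketch: an order entry that must succeed or fail
   -- as a unit, keeping the inventory and order files consistent.
   BEGIN TRANSACTION;

   UPDATE inventory
   SET    quantity_on_hand = quantity_on_hand - 10
   WHERE  part_id = 'A-100';

   INSERT INTO orders (part_id, quantity, order_date)
   VALUES ('A-100', 10, '1994-08-01');

   COMMIT;  -- or ROLLBACK if either statement fails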
Improved response time for Avis. Better service to customers is one benefit of distributed applications systems. In 1993, Avis Rent A Car Systems, Inc. implemented a distributed application client/server system to improve response time to customers making reservations for rental cars (Figure 3).(4) (Figure 3 omitted) Since 1972, agents had accessed the Wizard reservation system through IBM 3270-type terminals and looked up insurance regulations in foot-thick paper volumes. With the new system, average response time per reservation has decreased from two-and-a-half minutes to two minutes, which translates into telecommunications savings of $1 million per year for 23 million customer calls. Training time for new agents fell from six weeks on the 3270 terminals to three weeks on the new client computers -- Macintosh LC IIIs. Implementation time for this system was 18 months.
How did client/server computing enable these improvements? It made more information about reservations and regulations more readily available to the agents. The Wizard reservation system still resides on the IBM 3090 mainframe with its customer and rate information databases (Figure 3).
Server computers receive periodic data updates from the mainframe. The servers maintain databases for reservation information and insurance regulations across the country. Agents use client computers connected to the servers to make reservations and answer customers' questions. Typical transactions now require 40% fewer keystrokes because the program running on the client computers has more single-key functions and is more flexible than the host program it replaced. The client computer automatically gives agents regional insurance regulations they formerly had to look up in paper rate books.
DISTRIBUTED DATABASES
The most complex type of client/server computing is distributed database, in which the data the client requests and updates may reside on multiple servers or hosts. (See Figure 1, rightmost column.) Distributed database systems are difficult to implement because the theoretical, technical, and practical hurdles to maintaining databases in multiple places simply have not been overcome. Further developments in database and networking technologies are needed before this type of system can be implemented readily.
HAZARDS OF AN IMMATURE TECHNOLOGY
As with other new technologies, client/server computing has deficiencies, many related to its relative youthfulness compared to mainframe computing. Developers of mainframe systems have had 30 years to perfect general approaches to making systems work for one mainframe computer with one copy of files being updated by centrally controlled software. Developers of client/server systems are beginning to address the complexities engendered by having a network of computers containing multiple copies of files being updated by multiple programs. Some aspects of client/server computing that have not been addressed satisfactorily are backup/recovery, program change control, access control, data synchronization, network configuration, and user training.
Backup/recovery. On mainframe systems, backup/recovery procedures are fairly straightforward: All files are copied to secondary storage devices with a few utility programs. On client/server systems, the matter is not nearly as straightforward because the data are distributed and duplicated across many computers. Multiple copies of the same data may reside on different systems, but as each system may be slightly different, each separate system requires its own backup. Systems from different vendors or even systems running different versions of the same software may require different utility programs to take the backups.
In a mainframe environment, a centralized information systems operations staff has the responsibility for backups, so usually they are performed on schedule. There may be no counterpart to the centralized operations staff at the client/server locations, but control procedures are still needed to ensure that backups are performed. To ensure timely backups, one insurance company developed a program that disables logons for users who fail to perform backups.(5) Another problem is that the greater the number of servers and clients, the less likely it is that all the backups can or will be taken at the same time.
Program change control. In mainframe environments, program change control procedures usually ensure that only tested, authorized programs go into production libraries and that prior versions can be restored to production status if needed. No change control analog exists in the client/server environment. Change control is needed to ensure that application software on servers and clients is consistent with new database structures on servers. In addition to controlling their own application-specific program changes, organizations need automated distribution and version tracking for the proprietary software they acquire from vendors that want certification of the number of users or the number of computers using the software.
Access control. Security provisions also are easier to implement in a mainframe environment because there is only one system with one set of files to secure. Each different computer in a client/server environment may have its own variant of security features -- if it has any at all. Coordinating multiple sets of security features and administering access privileges obviously means more work.
Data synchronization. Data synchronization is the process of updating multiple copies of the same data on different computers so that all copies of the data are consistent. Except for backup purposes, data synchronization has not been required in mainframe environments because only one copy of each data file existed. Often synchronization strategies in client/server environments are hard to evaluate. For example, for a particular network, would it be better to rebuild server databases completely each day or to update server databases from a change database driven by changes to the master database? The answer varies depending on how long complete rebuilding would take and how complicated managing a change database might be.
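The change-database strategy can be sketched in SQL. Assuming a hypothetical change table that logs each day's inserts, updates, and deletions against the master database (all names here are illustrative), the server's copy is brought into line by applying the logged changes:

   -- Hypothetical sketch: apply one day's logged changes to a
   -- server's copy of a customer table. Assumes at most one logged
   -- change per customer per day.
   UPDATE customers_server
   SET    balance = (SELECT new_balance
                     FROM   customer_changes
                     WHERE  change_type = 'UPDATE'
                       AND  customer_changes.customer_id =
                            customers_server.customer_id)
   WHERE  customer_id IN (SELECT customer_id
                          FROM   customer_changes
                          WHERE  change_type = 'UPDATE');

   INSERT INTO customers_server (customer_id, balance)
   SELECT customer_id, new_balance
   FROM   customer_changes
   WHERE  change_type = 'INSERT';

   DELETE FROM customers_server
   WHERE  customer_id IN (SELECT customer_id
                          FROM   customer_changes
                          WHERE  change_type = 'DELETE');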
Lack of data synchronization became apparent quickly to one company in a pilot project for a client/server profit-and-loss analysis application. The system's graphical front-end and data-downloading features were successful, but the proprietary software had no provision for synchronizing account codes on the mainframe database and on the servers.(6) Initially the installation team entered the new codes on the servers manually, but this task consumed so much time that it negated the benefits of the application. Even though the pilot was deemed a success, the system was not implemented companywide because the procedures for keeping the account codes synchronized were cumbersome.(7)
Network configuration. In mainframe environments, information systems professionals have learned how to estimate and anticipate telecommunications requirements so that equipment configurations that satisfy response needs can be installed. User demands on client/server systems have been much less predictable, with the result that initial configurations often have been inadequate.(8) The only remedy has been to add more telecommunications or processing capacity at significantly higher cost. Predictions of effective capacities and actual usage will improve in the future, but, in the short run, some client/server implementations have been thwarted due to insufficient data transmission capacity.
User training. In mainframe environments, user training is confined to the application systems running on the mainframe. In client/server environments, users still may need training on the mainframe systems, but they also must be trained to use the software on client computers, such as spreadsheet programs, database query languages, and graphical presentation programs. Users want faster access to more information, but enabling better access brings the risk that users will not have the expertise to use it -- especially at first. Because users will be asking software tools to do more of their work for them, there are more opportunities for user queries to overload a network with inefficient processing. For example, one program for backing up a manufacturing database took 22 hours to run until an expert SQL coder was hired to untangle its SQL statements, after which the program ran in one hour.(9)
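What the expert's rewrite looked like can only be guessed at -- the source does not show the code -- but one classic pattern illustrates how much equivalent queries can differ in cost. The first form below can force the database to rescan a large table once for every row of another table; the second produces the same result with a single pass over each table. All table names are hypothetical:

   -- Slow form: the correlated subquery may scan the transactions
   -- table once for every row in parts.
   SELECT part_id
   FROM   parts p
   WHERE  (SELECT COUNT(*)
           FROM   transactions t
           WHERE  t.part_id = p.part_id) > 100;

   -- Faster equivalent: one pass over each table, then a grouped count.
   SELECT   p.part_id
   FROM     parts p JOIN transactions t ON t.part_id = p.part_id
   GROUP BY p.part_id
   HAVING   COUNT(*) > 100;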
IMPLICATIONS FOR MANAGEMENT ACCOUNTANTS
Client/server systems are valuable because they provide more flexible access to a greater amount of information than is available from mainframe-only systems. Initially they may cost more and be more frustrating for developers and users, but the benefit of better information access should exceed the cost.
Client/server systems come closest to implementing McKinnon and Bruns' vision of the management accounting system of the future, which will have:
[A] large real-time database into which information is continually flowing. Labeling and storage should be sufficiently flexible to allow managers throughout the company to find what they want easily and to construct their own reports to get the information they need....The management accounting system needs to be accessible and friendly. Output formats should be as flexible as possible to allow managers to use quantitative summaries or graphic displays. The goal should be to enable any manager to work with the data in any way he or she chooses with full confidence that the information obtained will be current and reliable.(10)
One example of the way management accountants can use a client/server system involves collecting and analyzing quality cost data.(11) Client/server computing should facilitate quality cost calculation and analysis because the data may be manipulated as if they were in one large database even though they may be kept in several different places by different computers. If all the computers with production and cost data were linked in a client/server network, accountants could access them to perform integrated analyses that otherwise would not be possible without several time-consuming steps such as data extraction, data entry, and data recoding. In the past, these steps required manual procedures, which meant that the analyses were too costly to conduct routinely. A client/server system can eliminate these manual operations and should improve the quality of the analyses.
Suppose that manufacturing plants convert raw materials into parts that are shipped to other plants for assembly and shipment to customers. Any one part is manufactured in several different plants and may be shipped to several assembly plants. At the assembly plants, some parts do not meet quality standards and must be reworked if possible. If the parts cannot be reworked, they are scrapped. Furthermore, suppose that each plant has its own client/server computing system, complete with servers that maintain data about that plant's costs, daily production, and rework/scrap. If all the plants' computing systems were linked together into a client/server network, then a management accountant located at any plant (or headquarters) could access all the cost, production, and rework/scrap data from all the plants.
In this manufacturing environment, management may want to analyze the data to determine the causes of the quality problems and whether it would be profitable for the company to incur additional costs to decrease the rework/scrap expense at the assembly plants. In a client/server computing environment, all the data would be accessible to analyze causes and calculate the costs of the quality problem. From the servers at each manufacturing plant, the accountant could retrieve data for the parts produced and the costs of production. From the servers at each assembly plant, the accountant could retrieve data on the number of defective parts and the costs of rework and scrap. Then these data could be analyzed to evaluate causes of the defects and how they affect profitability.
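Bringing those retrievals together could be as simple as one aggregate query. In the sketch below, the table and column names are hypothetical; the query totals rework and scrap costs by part across all the assembly plants' defect records and ranks the parts by total quality cost:

   -- Hypothetical sketch: total rework and scrap cost by part,
   -- summed across all assembly plants' defect records.
   SELECT   part_id,
            SUM(defect_count)             AS total_defects,
            SUM(rework_cost)              AS total_rework_cost,
            SUM(scrap_cost)               AS total_scrap_cost,
            SUM(rework_cost + scrap_cost) AS total_quality_cost
   FROM     assembly_defects
   GROUP BY part_id
   ORDER BY total_quality_cost DESC;

A similar query joining the defect records to each manufacturing plant's production table could trace the costliest defects back to the producing plant or the supplying vendor.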
To analyze the data, an accountant could import the retrieved data into a spreadsheet model. The spreadsheet then could be used to determine the parts needing the most rework/scrap, the most frequent causes of the defects, and the costs of the defects. It would be useful to know the causes because it might show that most defects are being produced at a specific plant or that material supplied by one vendor is of lower quality than materials supplied by other vendors. With this knowledge, management can work to improve the quality at the source.
Information about the costs associated with poor-quality parts has several uses. First, it can help management understand just how expensive it is to produce parts of inferior quality. These cost data might be available in a legacy system, but it would be time-consuming and costly to have each plant report the data to a central location where they could be summarized and analyzed; in a client/server system, the data are available much faster.
Accountants also can use cost data to evaluate whether investments should be made to change current operations to improve quality. These investments might include new machinery and equipment as well as training employees in quality management techniques. One problem with evaluating such investments is that the benefit estimates often omit some of the cost savings that improved quality will produce. With access to cost and production data from individual plants, the accountant should be able to improve the cost/benefit analysis for investments in quality improvements.
LOOKING TO THE FUTURE
As shown by the quality example, client/server systems should help management accountants accumulate data, evaluate performance, and compare alternative courses of action. When client/server systems become the dominant computing configuration, management accountants will become proficient users of their organizations' client/server systems -- or else. As McKinnon and Bruns also said:
Understanding data processing and communications technologies is critical, because the value of even the right information is conditional on the medium through which it is received....The line between the data and their transmission has blurred to the point that accountants can no longer afford to be technologically ignorant. The management accountant of the future must be skilled in both system strategy and technology, in addition to being proficient in collection and interpretation of data.(12)
Management accountants have a greater opportunity than ever to become even more valuable members of their organizations. By mastering the technology of client/server computing and relational database systems, management accountants will help their organizations obtain the best benefits of distributed computing in a cost-effective manner.
1 John C. Burton, "What lies ahead for SEC's financial reporting?" Legal Times, Oct. 8, 1984, p. A10.
2 H. Thomas Johnson and Robert S. Kaplan, Relevance Lost: The Rise and Fall of Management Accounting, Boston, Harvard Business School Press, 1991.
3 James Daly, "Bank enlists PCs, Macs to speed information retrieval," Computerworld, May 31, 1993, pp. 39-40.
4 Thomas Hoffman, "Avis saves time, money with Mac client/server system," Computerworld, May 24, 1993, p. 39.
5 Richard Pastore, "A Controlling Interest," CIO, Sept. 1, 1993, p. 52.
6 Rosemary Cafasso, "Holiday Inn won't let go of its big iron -- yet," Computerworld, May 3, 1993, pp. 89-90.
7 Joe Panepinto, "Client/Server Breakdown," Computerworld, Oct. 4, 1993, p. 110.
8 Kyle Pope, "In Downsizing from Mainframes to PCs, Unexpected Glitches Often Defer Gains," The Wall Street Journal, May 19, 1993, p. C3; James A. Hepler, "Network Jam," Computerworld, April 26, 1993, pp. 89-90.
9 Panepinto, p. 110.
10 Sharon M. McKinnon and William J. Bruns, Jr., The Information Mosaic, Boston, Harvard Business School Press, 1992, p. 222.
11 Lawrence P. Carr and Thomas Tyson, "Planning Quality Cost Expenditures," MANAGEMENT ACCOUNTING(R), October 1992, pp. 52-56.
12 McKinnon and Bruns, p. 223.
A. Faye Borthick, CMA, CPA, CISA, is a professor of accountancy at Georgia State University. She has a DBA degree from the University of Tennessee and is a member of the Knoxville chapter of IMA. She can be reached at (404) 651-4472.
Harold P. Roth, CMA, CPA, is a professor of accounting at the University of Tennessee, Knoxville. He holds a Ph.D. degree from Virginia Polytechnic Institute and State University and is a past president of the Knoxville Chapter of IMA. He can be reached at (615) 974-1756.
Copyright Institute of Management Accountants Aug 1994
