Abstract:
In an era that demands increased accountability and transparency of business practices, much of a company's potential exposure and risk hinges on the integrity and reliability of its information systems. Forward-looking CIOs recognize how the business and the systems that protect and transmit its vital information are inextricably interdependent. Leveraging that interdependency to the competitive advantage of the business is their job. They are responsible for enabling (through IT and operations) an adaptive enterprise, one that can flex to handle change without disrupting the business.
This paper aims to present some considerations about estimating the external memory space required to support a database.
Keywords: u-Gov management, evaluation system, tuning, memory space
Introduction
This paper explains how to build an adaptive enterprise that establishes a tight partnership between business and IT, and in turn delivers greater business agility. It presents the technical highlights behind GES - a framework to help customers get more from their current IT investments and make more strategic choices about how they evolve their infrastructure to more tightly align it to their business. It examines the steps required to design and architect that infrastructure - and how to best leverage management software and IT services and solutions to get more business value from their current and future IT investments [1], [2]. This paper highlights leadership in the areas of infrastructure technology, management software, and IT services and solutions - and spells out the role of independent software vendors and system integrators at each level of the enterprise. In addition, the paper explores the sourcing options available to enterprises for introducing these new technologies and best practices into existing IT environments without causing disruption to the business.
The physical performance of a database can be judged from several perspectives, among them:
- increased speed of response from the system to the information needs of end users;
- reduction of the external memory allocated to support the database;
- enhancing the privacy of data from the database;
- preventing the intentional or unintentional destruction of the database, and its restoration in case of incident;
- providing a user-friendly interface with the database.
1. The Challenge with a Vertical Approach to IT
When rapid change is imposed on an IT infrastructure that is not designed to handle it, the only choices are costly redeployment of existing assets or, worse, adding yet another silo of incompatible IT. In such an environment, every change makes the business less agile [3].
System complexity slows down change - especially at scale - and increases the burden on management and maintenance. In fact, the cost of change is the fastest-growing component of infrastructure total cost of ownership (TCO), according to META Group's July 2002 report "How Do I Minimize Infrastructure TCO by Cutting the Cost of Change?" By any measure, the traditional organization and management of IT is incompatible with business agility. A new way of looking at IT assets is required for enterprise success.
2. Estimating the Required External Memory for Databases
The estimation and use of external memory space depends very much on the target DBMS and also on the hardware used to implement the database. Also, the required memory is influenced by the organization of data.
In general, external memory space estimation is based on the size of each logical record and on the number of logical records in the relation. For the Ingres DBMS, the required memory space is generally calculated for each table by multiplying the number of logical records by their size and adding the space needed for indexes. The size of a logical record is the sum of its attribute sizes plus auxiliary space introduced by the operating system. In the Ingres system, the size of a logical record is calculated by summing the sizes of its attributes, plus 2 bytes for variable-length strings and another byte for fields that accept nulls. Memory requirements are expressed in a number of pages or blocks of a certain size. The size of a page can be a multiple of 1024 bytes. The structure of a page may differ from one DBMS to another and depends on the manner in which data is organized [4], [5]. For example, the Ingres system organizes data in HEAP files. A page is 2048 bytes, of which 40 bytes are used as the page header and the remaining 2008 bytes are available for storing user data. The total space required for a HEAP file can therefore be calculated as:
tuples_per_page = 2008 / (tuple_size + 2)
page_number = total_tuples / tuples_per_page
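To make the calculation concrete, here is a minimal sketch in Python of the HEAP estimate above, assuming the stated 2048-byte page with a 40-byte header (2008 usable bytes) and 2 bytes of per-tuple overhead; the function name and example figures are illustrative only:

```python
import math

PAGE_DATA_BYTES = 2048 - 40  # 2008 bytes available for user data, per the text

def heap_pages(total_tuples: int, tuple_size: int) -> int:
    """Estimate the number of 2048-byte pages needed for an Ingres HEAP file."""
    tuples_per_page = PAGE_DATA_BYTES // (tuple_size + 2)  # 2-byte tuple overhead
    return math.ceil(total_tuples / tuples_per_page)

# Example: 100,000 tuples of 120 bytes each -> 16 tuples/page, 6250 pages.
print(heap_pages(100_000, 120))
```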
For HASH files, the formula is the same as for a HEAP file, except that it takes into account the fill factor, which is the percentage of a page that will be used before the page is considered full. The default fill factor is 50%, but it can be modified by programmers. Under these conditions the formulas become:
tuples_per_page = (fill_factor x 2008) / (tuple_size + 2)
hash_page_number = total_tuples / tuples_per_page
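Extending the sketch above, a version for HASH files applies the fill factor before dividing, per the formula just given (again an illustrative sketch, not vendor documentation):

```python
import math

PAGE_DATA_BYTES = 2008  # usable bytes per 2048-byte page, as stated above

def hash_pages(total_tuples: int, tuple_size: int, fill_factor: float = 0.5) -> int:
    """Estimate pages for a HASH file; only fill_factor * 2008 bytes of each
    page are filled before the page is considered full (default 50%)."""
    tuples_per_page = int(fill_factor * PAGE_DATA_BYTES) // (tuple_size + 2)
    return math.ceil(total_tuples / tuples_per_page)

# The same 100,000 x 120-byte table roughly doubles at the default 50% fill factor:
print(hash_pages(100_000, 120))  # 8 tuples/page -> 12,500 pages
```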
In the case of ISAM type files, the memory space is determined as in the previous case, except that the space required for the ISAM index must be added. In the case of the DB2 and Oracle DBMSs, the required memory support is also expressed in pages or blocks of 512, 1024 or 4096 bytes. The blocks can be linked, each block containing the address of the previous and of the next block (Figure no. 1).
When the database is defined, it is allotted a certain space where tables will be defined at the logical level. At the physical level the database is organized in partitions, which can store data organized in files. A partition may contain one or more files. A logical table from the database space can be mapped to a file within a partition. A table can also be decomposed into two or more parts, each part mapping at the physical level to a file.
Oracle accepts defining clusters and at a physical level a cluster can be mapped to a file. The correspondence between the logical and physical level is shown in Figure no. 2.
In this context, the physical structure of a non-cluster type page is presented in Figure no. 3.
The structure of a cluster type page is similar, except that the first record (tuple) is preceded by 5 bytes instead of 4.
Now, based on these clarifications, we give the formula for determining the external memory required by a non-cluster relation with a single index, as follows:
(formula omitted in the source copy)
Where:
- i = 1..n identifies a column in the relation;
- NUMPAG = number of pages;
- NUMROW = total number of tuples from the reference table;
- AVEBYT = average tuple size, expressed in bytes;
- PCTFREE = percentage of page space reserved as free;
- DP = page size (512, 1024, 2048);
- 76 bytes = PAGE HEADER size;
- Ni = the percentage of null values in column i;
- Ci = the average length of column i in the table.
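Since the formula itself is omitted in this copy, the following Python sketch is only a plausible reconstruction from the parameters listed above: the usable space per page is DP minus the 76-byte header, reduced by the PCTFREE reserve, and AVEBYT is the sum of the average column lengths Ci plus roughly one byte per nullable value (Ni is treated here as a fraction between 0 and 1). It should not be read as the authors' exact formula.

```python
import math

def estimate_numpag(numrow: int, col_lengths: list[float],
                    null_fractions: list[float],
                    dp: int = 2048, pctfree: float = 10.0) -> int:
    """Rough NUMPAG estimate for a non-cluster table (assumed reconstruction)."""
    # AVEBYT: average tuple size = sum of Ci, plus ~1 byte per nullable value.
    avebyt = sum(c + n for c, n in zip(col_lengths, null_fractions))
    # Usable page space: page size DP minus 76-byte header, minus PCTFREE reserve.
    usable = (dp - 76) * (1 - pctfree / 100)
    rows_per_page = math.floor(usable / avebyt)
    return math.ceil(numrow / rows_per_page)
```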
By summing the external memory space required for each table and adding the memory space required for the table indexes, we obtain the necessary storage space for the whole database. The external memory space required for an index is more difficult to estimate than the space required for a table. In practice, the space needed for indexes is considered to be approximately 20 to 30% of the space needed for the data in the tables.
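Applying this 20-30% rule of thumb is straightforward; a simple illustration, taking 25% as the midpoint:

```python
def estimate_database_pages(table_pages: list[int], index_overhead: float = 0.25) -> int:
    """Total storage = sum of per-table page estimates plus an index overhead factor."""
    return round(sum(table_pages) * (1 + index_overhead))
```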
More specifically, the parameters to be taken into account are:
- number of indexes for the relationship;
- distribution of values in index columns (in particular the number of distinct values in comparison with the total number of values);
- length of the index columns;
- percentage of null values in the indexed columns;
- the moment when the indexes are created.
The number of pages for an index can be estimated using the following formula:
(formula omitted in the source copy)
Where:
- ELK = estimated key length;
- 1.1 = standard factor adding 10% for the upper (non-leaf) nodes of a B-tree index.
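The index formula is likewise omitted in this copy; a plausible sketch using the two given quantities assumes each leaf entry occupies roughly ELK bytes plus a row pointer (the 4-byte pointer and the 76-byte page header are assumptions carried over from above), with the result multiplied by the standard 1.1 factor for the upper B-tree levels:

```python
import math

def estimate_index_pages(numrow: int, elk: int, dp: int = 2048,
                         pointer_bytes: int = 4) -> int:
    """Rough B-tree index page estimate (assumed reconstruction)."""
    entries_per_page = math.floor((dp - 76) / (elk + pointer_bytes))
    leaf_pages = math.ceil(numrow / entries_per_page)
    return math.ceil(leaf_pages * 1.1)  # +10% for the upper (non-leaf) nodes
```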
3. Tuning a New Adaptive Generic Evaluation System for U-Gov Management
An adaptive GES is one in which business demand is constantly matched by IT supply. It operates on a consumption-based model: the business uses what it needs, then pays only for what it uses. In an adaptive enterprise, every cost is variable, resulting in optimal use of the enterprise's assets. Ultimately, this serves the business well: it frees the CIO from focusing on the delivery mechanisms of information to focus on more business-strategic applications of the information itself. This brings the enterprise and its supply chain closer together; reduces unproductive and redundant work on the part of IT staff; and, most important, improves the ability of the enterprise to deliver satisfaction to customers in the form of correct, immediate, valuable information.
To support this kind of agility, the adaptive enterprise must meet three requirements:
* IT resources must be delivered in the form of services. Web services are moving the industry to a service-delivery model. In this model, infrastructure and IT services - including applications - are delivered, allocated and paid for when and where required.
* Resources must be virtualized. Virtualization means the management and control of physical servers, storage and networks to create virtual resources - computing, information and communications. Virtualization dramatically reduces adaptation time, advancing it from the purchasing or redeployment time frame to the millisecond world of intelligent management software.
* The end-to-end environment must become business-process-based. Success today depends on the ability of the IT environment to respond and adapt intelligently to changing business conditions. This is a new and crucial requirement that is at the core of adaptiveness: the continuous analysis of business needs and the intelligent delivery of managed resources to optimize business capability and flexibility.
Adaptiveness is built in through consistent simplification, standardization, integration and modularity of the elements and aspects of the feedback loop.
Business processes are the day-to-day functions that keep the enterprise running. From human resources to accounting to supply-chain functions, the ability of the business to meet its customers' needs is the driving force against which the IT environment must deliver. The business processes collectively and continually set and adjust levels of IT resources to meet changing demands. While the priorities for allocating resources to each business process are in constant flux, the supply of resources available to the business must always meet the demand. This is core to creating an adaptive enterprise. Applications acquire, organize and transform the information needed to support business processes. Enterprise applications like SAP or PeopleSoft organize and deliver information across an enterprise; desktop applications serve local business processes the same way. In the Darwin Reference Architecture, applications will be requested and delivered as services, using web services standards like J2EE and .NET. For example, rather than being built redundantly into monolithic "order entry" and CRM applications, a discrete business process like "verify customer entitlement" may be delivered as an application service in both contexts.
Infrastructure services deliver the secure, continuous computing power and storage capacity applications require. In the past, resources were delivered by assigning servers directly to applications. But low utilization and high cost from "one application, one server" policies compel organizations toward shared, virtualized and on-demand solutions that make better use of processor and storage capacity. The GES adaptive infrastructure technologies and solutions accept requirements from and deliver infrastructure services to applications through open industry interfaces and web services.
Virtualized resources provide the foundation for the adaptive GES, with computing power, information and communications delivered as services abstracted from their underlying physical servers, storage and network infrastructures. Sharing or pooling of IT resources helps eliminate over-deployed and underutilized technology components, reducing costs for hardware and software and further reducing management complexity (Figure no. 4). Sharing IT resources across business functions also helps to increase business agility, enabling the rapid provisioning of new services or resources and the scaling of established services.
4. Automated and Intelligent Management
Business agility is realized when shared resources are dynamically allocated as needed by business processes [6], [7], [8]. The creation of this dynamic link between business and IT is aided by a clear assessment and measurement of the agility gained from transforming the underlying IT infrastructure. Management software analyzes demand signals from every part of the organization - from the infrastructure through the extended enterprise - delivering business insight while managing and optimizing the user experience in a secure, continuous infrastructure. A key contributor is human input: managing and adjusting the shape of the IT environment itself to ensure maximum flexibility and agility in the face of unpredictable changes in the future business environment.
Coordination and orchestration of infrastructure depends on moment-to-moment inventory and monitoring, planning, provisioning and maintenance. All management and control processes are driven by demands at the IT element level and the IT service and business levels, and are communicated according to open standards.
By synchronizing infrastructure, application services and processes with business processes through automated and intelligent management and provisioning, HP enables enterprises to reduce the cost of change, reduce the total cost of ownership, simplify management complexity and provide the enterprise with the ability to rapidly implement the solutions that provide the corporation with a competitive advantage (Figure no. 5).
5. Impact on the Overall Performance of a Database
The organization of data on disk can have a major impact on the overall performance of a database, with respect to both the use of external memory space and the cost of database queries.
It is known that access to data stored in main memory is much faster than access to data stored in secondary memory.
Problems arise when there is not enough main memory to cover all processes: the operating system transfers pages from main memory to disk in order to free up memory, and later, when one of these pages is required, it must transfer the page back from disk. In this context, the two resources, main memory and external memory, can influence each other. For example:
- adding more main memory should result in less paging;
- more efficient use of main memory can reduce the number of input/output (I/O) operations.
Regarding the cost of database queries, it is known that it is largely determined by the number of disk accesses, which depends on the size of the blocks and on the cardinality of the table.
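A minimal illustration of this point: for a full table scan, the number of disk accesses is roughly the cardinality divided by the blocking factor (tuples per block), so larger blocks mean proportionally fewer accesses (the figures below are illustrative only):

```python
import math

def full_scan_block_reads(cardinality: int, tuple_size: int, block_size: int) -> int:
    """Disk accesses for a full scan ~= cardinality / blocking factor."""
    blocking_factor = block_size // tuple_size  # tuples per block
    return math.ceil(cardinality / blocking_factor)

# Doubling the block size from 2048 to 4096 bytes halves the reads:
print(full_scan_block_reads(1_000_000, 128, 2048))  # 62,500
print(full_scan_block_reads(1_000_000, 128, 4096))  # 31,250
```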
Conclusions
The agility of an infrastructure can be assessed along three dimensions:
- Time - the time needed to implement or react to a business environment change;
- Range - the range of implementation across geographies, business processes or operating units;
- Ease - the breadth and scope of change that the infrastructure can support.
One approach to simplification is consolidation: underutilized, underpowered and over-deployed resources are identified and streamlined into an updated infrastructure. This infrastructure then contains fewer elements - making it easier to manage - and delivers results with improved speed and ease when executing changes.
Standards extend the benefits of simplification throughout the enterprise and simplify the context in which IT assets are deployed and used. Standardization can be applied across different processes, procedures, technologies and applications.
DBMS tuning refers to the tuning of the DBMS itself and to the configuration of the memory and processing resources of the computer running it. This is typically done by configuring the DBMS, but the resources involved are shared with the host system.
After the database is created, initialised and populated, it needs to be maintained. Various database parameters may need changing and the database may need to be tuned for better performance; the application's data structures may be changed or extended, new related application programs may be written to add to the application's functionality, and so on.
References
[1] Black, F., Derman, E., Toy, W., (1990), A One-Factor Model of Interest Rates and its Application to Treasury Bond Options, Financial Analysts Journal
[2] Booch, G., Maksimchuk, R., (2007), Object-Oriented Analysis and Design with Applications, Addison-Wesley
[3] Lewis, James P., (1997), Fundamentals of Project Management, AMACOM Publishing House, New York
[4] Lock, D., (1996), Project Management (Sixth Edition), Wiley Publishing House, New York, ISBN 047023723-6
[5] David, N., Cârstea, C., (2008), Web Applications Architecture, Review of the "Henri Coanda" Air Force Academy, No. 1/2008, Management and Socio-Humanities Section, Publishing House of the "Henri Coanda" Air Force Academy, Brasov, Romania, ISSN 1842-9238, pp. 67-71
[6] Cârstea, C., (2011), New Approach about Project Management of Complex Information Systems, LAP LAMBERT Academic Publishing, Saarbrücken, Germany
[7] Kawabe, T., (2007), A control method with brain machine interface for man-machine systems, 6th WSEAS International Conference on Telecommunications and Informatics (TELE-INFO '07), Dallas, Texas, USA
[8] Cârstea, C., David, N., (2008), Solutions about Evaluation and Control Data for Complex Information Systems, 7th WSEAS International Conference on Telecommunications and Informatics (TELE-INFO '08), New Aspects of Telecommunications and Informatics, Istanbul, Turkey, 27-30 May 2008, pp. 165-169, ISBN 978-960-6766-64-0, ISSN 1790-5117, www.wseas.org, http://www.worldses.org/books/index.html
Supplementary recommended readings
Date, C. J., (2004), An Introduction to Database Systems, Addison Wesley
Cârstea, C., David, N., (2008), The Advantages of Flexible Evaluation Systems in Project Management, 7th WSEAS International Conference on Telecommunications and Informatics (TELE-INFO '08), New Aspects of Telecommunications and Informatics, Istanbul, Turkey, 27-30 May 2008, pp. 169-173, ISBN 978-960-6766-64-0, ISSN 1790-5117, www.wseas.org, http://www.worldses.org/books/index.html