Chapter Eight

8.1 Traditionally, what happened to the requirements specification document as systems analysts moved from the analysis to design segments of information systems development?
The requirements specification document is output from analysis. The contents of this document get "transformed" into design equivalents as the systems analysts begin to customize the proposed information system for the specific hardware and software platforms to be used for its creation and ultimate operation in some production environment within an organization.

8.2 What is one of the main drawbacks of traditional information systems development with regard to the requirements specification document?
When changes to the system need to be made, as in the construction example, these changes are rarely reflected in the requirements specification document, due in part to: 1) time constraints, 2) little use for this document after design, and 3) difficulty keeping the document in agreement with design documents. However, because of the need for the end user to be familiar with this document, these changes should be made.

8.3 How is an object-oriented model of information systems different from a historical model of an information system?
An object-oriented model seeks to overcome the deficiencies of the historical model. The object-oriented model created during requirements determination is not transformed as with traditional systems development methods, but rather "refined" during the design activity. The model created during analysis is enhanced with more detail but is still the same model.

8.4 What are the advantages of a refined rather than a transformed requirements specification model?
It allows systems analysts to work with the original model all the way through implementation of the new information system. This makes it easier to validate and verify the completed information system against the user's original requirements specification document.
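The refinement idea in 8.3 and 8.4 can be illustrated with a small sketch (Python used for illustration only; the class and attribute names are hypothetical, not from the text):

```python
# Analysis-phase model: a class capturing only the user's requirements
# (hypothetical example -- a customer in a video store system).
class Customer:
    def __init__(self, name, phone):
        self.name = name
        self.phone = phone

# Design-phase model: the SAME class, refined rather than transformed --
# the original attributes and meaning survive intact; design merely adds
# platform-specific detail, here a flat-file storage format.
class Customer:
    def __init__(self, name, phone):
        self.name = name
        self.phone = phone

    def to_record(self):
        # design-time addition: how the object persists on disk
        return f"{self.name}|{self.phone}"

print(Customer("Pat", "555-0100").to_record())
```

Because the analysis model is only enhanced, the finished system can still be checked directly against the user's original requirements.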
8.5 List and give a brief description of the five components of an information system within the physical design activity.
1) hardware -- issues such as PCs, client-server, mainframe, networking, memory, disk capacity, etc. 2) software -- operating system, networking software, communications support software, backup and recovery software, productivity software, work-group software. 3) data -- security, integrity, redundancy, physical structure, access paths, file managers, database managers, backup and recovery, conversion, disaster plan. 4) people -- training, organizational culture, change readiness, staffing. 5) procedures -- training materials, user documentation, on-line help, operational documentation, manual operation procedures.

8.6 List and discuss the different areas of feasibility that need to be addressed within the physical design activity.
1) economic feasibility -- cost/benefit analysis. 2) operational feasibility -- a measure of whether or not this system will work for this particular organization. It considers issues such as management philosophy, make-up of the work force, employee technology expertise, and management support. 3) technical feasibility -- measures the practicality of the new or changed system.

8.7 Discuss some of the different kinds of documentation and why these particular kinds of documentation have become so prevalent in today's information systems.
Documentation includes user training documentation, on-line help, training videos, and computer-aided instruction. These have become prevalent because few people even look at user manuals, much less rely on them. Users need help at their fingertips.

8.8 What is meant by "creeping commitment" and what is its significance within the testing phase of information systems development?
Creeping commitment describes the involvement of the end user in the information systems testing process.
This is important because as the user continues to be involved with testing, he or she is exposed to the system's functionality at an early stage and can suggest changes early rather than at the very late time of acceptance testing.

8.9 Why is user acceptance testing such an important part of information systems testing?
User acceptance testing is the last opportunity for the user to make any recommendations for change prior to implementation or wide-scale distribution of the information system.

8.10 What is meant by conversion as part of information systems development, and what are the two main types?
Conversion is the process of moving the user community's data from an old system to a new system. Abrupt cutover conversion, or plunge, is when an old system is completely replaced with a new system, such as during the period from Friday to Monday when the system is not being used. A parallel conversion involves one system running simultaneously with another system until users are satisfied that the new system is working correctly.

8.11 List and briefly describe the three phases within the implementation activity.
1) install -- installation and testing of hardware and software, documentation and data availability, and people readiness. 2) activate -- get people ready to use the new system. This includes management support and commitment. 3) institutionalize -- make the new system the status quo within the organization.

8.12 What are some of the advantages and disadvantages of prototyping?
Advantages are that it gives end users a tangible product to look at, evaluate, and critique rather than just looking at diagrams. Disadvantages include 1) elevation of user expectations of system completeness, and 2) expansion of the scope of the system's boundaries beyond what is documented in the requirements specification document.

8.13 What are the three types of windows most likely to be found in the human interaction component of the object model?
1) Security/logon window. 2) Setup window. 3) Business function windows.

8.14 What other object type is most often found in the human interaction component of the object model?
Reports are another common aspect of most information systems and are also addressed in the human interaction component.

8.15 How does the Coad notation show message communication across object model component boundaries?
Scenarios are the most appropriate place to show class connections across components.

8.16 Why does the Coad object model have a data management component?
The main reason is maintainability of the object model across multiple hardware, software, and data management platforms.

8.17 What is the purpose of the system interaction component and why is it necessary in the object model?
The purpose of the system interaction component is to interface the proposed information system, such as the video store information system, with other systems and with physical devices that are controlled by the information system. Similar to the data management component, the system interaction component allows the object model to be modular, supporting a "plug and play" concept for various devices and interfacing systems.

8.18 Are devices such as printers and barcode readers candidate devices to be included as objects in the system interaction component of the object model? Why or why not?
Yes, because these devices are a part of the information system and will interface with the rest of the information system through the system interaction component of the object model.

8.19 Discuss several object-oriented information systems development strategies.
Today, many information systems development organizations are heavily invested in structured analysis and structured design because their systems analysts are well versed in these methodologies or strategies.
So it is not too unusual for an organization wishing to experiment with object-oriented programming to have a structured analysis and design project culminate with object-oriented programming.

8.20 Which object-oriented information systems development strategy is the best?
Research and industry practice continue to suggest that the optimal strategy in the long run is to use object-oriented analysis, design, and programming.

Chapter Nine

9.1 Discuss the characteristics that distinguish data from information.
Data is in a raw state, such as facts or observations about physical phenomena or business transactions. Data in itself is thus not very useful to those in need. Raw data needs to be organized into a form that is usable and understandable by those who need it. Information is data that has been transformed into a meaningful and useful context for an end user.

9.2 What are some of the characteristics that make up quality and usable information?
1) accessibility -- how easy is it to use the information that is obtained? Information must be easily identifiable and accessible. 2) timeliness -- how long does it take to get necessary information? 3) relevance -- is the information free of trivial and superfluous details, or lacking in sufficient needed detail? 4) accuracy -- to what extent is the information error-free? 5) usability -- is the information's presentation format acceptable (e.g., text, numbers, graphics)?

9.3 What are internal outputs? What are external outputs? Briefly define each.
Internal outputs are intended to be used by people or other information systems within an organization. External outputs are intended to be used by people or other information systems outside the organization. The distinctions between the two are: 1) internal outputs tend to be proprietary or confidential in nature, not to be seen by those outside the organization; 2) external outputs have a more formal or cosmetic look (e.g., laser printout) because they are seen and used by others outside the organization.

9.4 What is the difference between a static and a dynamic output?
Static outputs are those that can be predetermined and planned for, such as hard copy reports of information used within the organization (sales reports, inventory levels, receivables and payables, etc.). The word static implies that the output does not change in format, but does change in content, as with a telephone bill: the format stays the same every month, but the charges and locations of phone calls will always change. Dynamic outputs cannot be easily determined or planned for during information systems development. They are often "one-time" reports whose format and contents are used once and never used again.

9.5 Briefly define three types of output formats.
1) zoned output -- the traditional row- and column-oriented textual and numeric information. 2) graphic output -- drawings, symbols, icons, graphs, windows, and video images. 3) narrative output -- word processor-type output, used to produce letters, memos, documents, and books.

9.6 What are the different types of analytical output reports, and what are the advantages of each?
Horizontal and vertical analytical reports present information in a form that allows the person reviewing it to analyze the information and establish similarities and differences in the information as presented, such as a comparison of two years' financial statements. A counterbalance report is used for projections and forecasts, including best and worst case scenarios. A variance report is used to compare actual performance indicators with budget, forecast, standard or baseline, or quota indicators. Exception reports are used to identify unusual or exceptional conditions, unusual as defined by the user for each particular situation. Historical reports are records of an organization's activities---its own history.
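The variance and exception reports described in 9.6 can be sketched in a few lines of code (Python for illustration; the account names and figures are hypothetical):

```python
# Variance report: compare actual performance indicators against budget.
# An exception report falls out naturally: show only the lines whose
# variance is "unusual," as defined by the user through a threshold.
budget = {"sales": 100000, "payroll": 40000, "supplies": 5000}
actual = {"sales": 92000, "payroll": 41500, "supplies": 4800}

def variance_report(budget, actual):
    """Return (item, budget, actual, variance) rows for every item."""
    return [(item, budget[item], actual[item], actual[item] - budget[item])
            for item in budget]

def exception_report(rows, threshold):
    """Keep only the rows whose variance exceeds the user's threshold."""
    return [row for row in rows if abs(row[3]) > threshold]

rows = variance_report(budget, actual)
for item, b, a, v in exception_report(rows, threshold=2000):
    print(f"{item:<10}{b:>10}{a:>10}{v:>10}")
```

With these figures only the sales line is printed, since its variance of -8000 is the only one beyond the 2000 threshold.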
9.7 What are some of the advantages that business analysts and managers get from the use of spreadsheet software?
Spreadsheets have the advantages of multi-dimensional worksheets, aggregation of worksheets, graphs, and "what if" and "goal seeking" capabilities. A spreadsheet has the ability to make effective visual presentations of sometimes hard-to-understand numeric information.

9.8 Discuss four types of graphical representations that are possible with the use of spreadsheet software.
1) scatter diagrams -- used to reveal trends in data. Trends are much easier to see when presented in a visual format rather than numerical form. 2) line diagrams -- used to show data fluctuations over a specific time period. This allows you to view trends with respect to time. 3) bar charts (vertical and horizontal) -- show proportions or quantities as they relate to each other. Horizontal bar charts are used to compare different items in the same time period. Vertical bar charts are used to compare the same item in different time periods. 4) sectographs -- split a total amount into its proportionate or sectional parts. A pie chart segments a circle into two or more pieces of the pie. Each slice represents a percent of the total.

9.9 What are some of the internal control aspects that must be considered with regard to output?
1) timing -- how often should "output" happen? Daily, hourly, weekly, on demand? This is determined in consultation with the user. 2) volume -- how much output is needed: one page, 100 pages, one video screen display? Also determined in consultation with the user. 3) processing time -- the amount of time it takes the computer to process the outputs. This depends on the complexity of the output's content. 4) distribution -- who gets possession of the output? Depending on the confidentiality of the output's information, distribution lists of users are strictly monitored and adhered to. 5) access -- not everyone should have access to all information.
Where confidentiality is important, authorization is enforced before anyone can request and receive a specific display or sound output. Use of passwords serves to keep information in the right hands.

Chapter Ten

10.1 What is meant by data validation in an information systems context?
Data validation involves a process of error-checking or editing of input data, such as errors in alphabetic data. This serves to protect against the "garbage in, garbage out" concept. Bad or incorrect input data will almost always lead to bad or incorrect output.

10.2 Describe the types of data validation and verification.
1) Self-checking digits, also referred to as check digits, are used most often on bank credit and debit cards, checking and savings accounts, inventory identification numbers, and customer and membership accounts. The concept makes use of an algorithm and some or all of the specific numbers in the card number, account number, inventory number, and so on. The result is a number that becomes a part of the original number. 2) A combination check makes use of two or more associated attributes to validate the data entered in one, two, or more of those attributes. 3) Limit and range checks are used to validate numeric input data. Limits can be either high limits or low limits, and the number entered must be greater than or equal to the low limit and less than or equal to the high limit. 4) Completeness checks can be established in many different ways. One of the most common is to have the information system establish a completeness check for each data entry screen in the system. Each data entry screen object should know what data are required to be entered in order to proceed with processing.

10.3 What are the two methods of data input in information systems?
1) batch -- consists of three parts. a: groups of related data are collected. b: the groups of related data are entered onto some electronic medium such as magnetic tape or disk.
c: the entire batch of related data is processed as a group by the information system. 2) on-line -- processing of input data as it occurs, such as with an ATM transaction.

10.4 What are some of the advantages and disadvantages of group processing?
Advantages: 1) collecting and entering can be done without impacting the performance of the computer system. 2) specially trained data personnel can assist with the entering activity, making it very efficient and accurate. 3) processing can be done very quickly since it is coming directly from an electronic medium. 4) processing can be done during system down-times (nights, weekends). Disadvantages: 1) data collection has to be centralized. 2) data entry needs to be done by specialized personnel. 3) processing is delayed, making some data old or untimely when it is finally processed. 4) because batch processing is done during off-hours, input errors will not get corrected until the next regularly scheduled processing time. 5) the off-hours operator may have to call the systems analyst or programmer if the program malfunctions.

10.5 What are some of the advantages and disadvantages of on-line processing?
Advantages: 1) data can be entered by its owners. 2) data can be entered as close to the source document as possible. 3) immediate feedback regarding correctness and acceptability of data. 4) data can immediately update a database, making it as current as possible. Disadvantages: 1) equipment may be costly. 2) users are not always well trained to input data. 3) user data entry procedural controls may be lacking. 4) software must have additional controls to handle it. 5) because data is only entered during normal business hours, the normal computer load is heavier. 6) data entry could actually be slower than batch processing for the same data.

10.6 What are four general guidelines to keep in mind when entering data into a system?
1) input only the data that are necessary--reduce redundancy.
2) do not enter data that can be derived or calculated by the information system. If the system can retrieve data on its own, let it, such as with arithmetic formulas that sum lists of numbers, give totals, etc. 3) use business codes. A business code is a group of one or more alphabetic, numeric, or special characters that identify and describe something in the information system (such as social security numbers, telephone numbers, and bank account numbers). 4) alphabetic and numeric data should be entered in a left-to-right, top-to-bottom motion.

10.7 Briefly discuss four guidelines that should be followed when using business codes as input data.
1) the codes should allow for expansion--you will probably always be adding items. 2) the codes should be unique--so as to distinguish them from other data and reduce the possibility of mistaking the code for other data. 3) the size of the code (number of positions) is important because the larger the number of positions in the code, the more difficult it is for a person to use and enter into an information system. 4) the codes selected should be convenient and easy to use.

10.8 What are the different modes of operation in a graphical user interface PC environment?
1) navigation -- the way a user instructs the computer's software to do something, such as launching a software program or directing the computer to print or copy a file. Navigation is usually done either by using readable word menus or by clicking on an iconic button with a mouse. Other navigation methods include the use of pull-down, pop-up, or nested function menus. 2) data entry -- the actual data that users input into the graphical user interface based software, such as typed words on a page and names and test scores entered into a spreadsheet program.

10.9 What are some of the disadvantages associated with graphical user interfaces?
Users not remembering what a particular icon symbol represents; the size and color of an icon not being clear enough to make it recognizable; too many icons displayed on the screen at one time, causing confusion.

Chapter Eleven

11.1 Describe what is meant by the term persistent data.
Persistent data is a term used in computer science and software engineering circles. The term is synonymous with files and databases, and describes the storage of input or calculated data. This storage usually takes place electronically on magnetic disks.

11.2 Discuss what distinguishes a bit, a byte, an attribute, and a record. Be specific.
A bit is a single binary digit, either 0 or 1. It takes several bits in combination, usually 8 as in 8-bit ASCII code, which is common on most personal computers, to represent a character or byte. A byte describes a single alphanumeric character, such as a letter, number, space, comma, or period. An attribute is made up of one or more bytes that together create a template for a meaningful piece of data, such as a word. A record combines one or more related attributes.

11.3 Why is it so important that a data structure be simple?
The simpler the data structure, the more easily it is maintained over the life of the information system. This also makes it easier for users to retrieve their own output information using the data structures as input data to some report generator software.

11.4 What is a key attribute, and what is its purpose in file and database design?
A key attribute helps identify one instance of a record from all other instances in the file. Examples include a social security number, bank account number, or a student identification number. Key attributes may include alphabetic letters and/or numbers. Keys are either primary, secondary, or foreign.
A primary key uniquely identifies one record instance from all other record instances in the file (no duplication). Secondary keys provide an alternative path to data records in a file, but do not have to be unique. Foreign keys are attributes associated with the records in one file that are used to associate data records in another file with those records; in other words, foreign keys represent the access path that links records in different files together.

11.5 What is a master file? Discuss its importance within a database environment.
Master files contain the records for a group of similar things which collectively represent the foundational data component for the information system. A master file makes up the core data component upon which all other activities draw.

11.6 What purpose does a temporary file have in a database environment?
Temporary files can be created and used any time an information system needs to do so. For example, if data in the master or transaction files are not in the order needed for a report or display, these files can be rearranged temporarily in order to produce the required output.

11.7 Discuss the purposes of a log file.
1) log files allow the information system to be audited. 2) they allow the information system to be analyzed for a variety of statistical information about the processing done within the information system. 3) they provide a mechanism for recovery or reconstruction of the master and transaction files in the event of system failure.

11.8 What distinguishes sequential from direct access?
Sequential access is access by physical record location or position within a file. It means starting at record one and sequentially reading or writing every record to the end of the file or some other stopping point within the file's records. Direct, or random, access is access by physical address.
It allows the user to go directly to certain records within files without having to sequentially scan from the first record in the file.

11.9 What is the role of normalization in file and database design?
Normalization is the process of simplifying complex data structures so that the resulting data structures will be more easily maintained and more flexible in meeting the present and future needs of the user.

11.10 Briefly discuss the first three normal form levels.
At the unnormalized level, the data structure does not conform to the rule associated with first normal form, even if the data structure happens to conform to one or more higher-level normal form rules. To put a data structure into first normal form, all repeating groups must be removed from the structure, creating a new structure for each distinct repeating group. The rule for second normal form is to remove all non-key data elements that are not fully, functionally dependent on all data elements that make up the key (partial dependencies). This rule only applies to data structures already in first normal form, and more specifically only to those data structures that have two or more data elements or attributes making up the primary key for the data structure. In third normal form, all data elements (attributes) that are uniquely identified by another non-key data element (attribute) must be removed (transitive dependencies). This normal form rule is applied to all data structures normalized through second normal form.

11.11 Briefly discuss the history of OODB.
Object-oriented concepts came initially from object-oriented programming (OOP), which was developed in the 1960s with the Simula language as an alternative to traditional programming methods. Little came of this work until Xerox's Smalltalk was introduced around 1980. The initial concept for OODB seemed to arrive on the coattails of Smalltalk and PS-Algol in the early 1980s.

11.12 Discuss the characteristics of an OODB.
There appears to be a minimum set of characteristics that a data model must possess before it can be considered an object data model; these include: 1) support for the representation of complex objects. 2) extensibility; allowing the definition of new data types as well as operations that act on them. 3) encapsulation of data and methods. 4) inheritance of data and methods from other objects. 5) object identity.

11.13 List the thirteen rules found in the OODB "manifesto".
1) Support complex objects. 2) Support object identity. 3) Allow objects to be encapsulated. 4) Support types or classes. 5) Support inheritance. 6) Avoid premature binding. 7) Be computationally complete. 8) Be extensible. 9) Be able to remember data locations. 10) Be able to manage very large databases. 11) Accept concurrent users. 12) Be able to recover from hardware/software failures. 13) Support data query in a simple way.

11.14 Discuss several strengths and weaknesses of OODB.
Strengths: 1) data modeling, 2) nonhomogeneous data, 3) variable-length and long strings, 4) complex objects, 5) version control, 6) schema evolution, 7) equivalent objects, 8) long transactions, 9) user benefits. Weaknesses: 1) a new problem-solving approach, 2) lack of a common data model with a strong theoretical foundation, 3) limited success stories.

Chapter Twelve

12.1 Briefly discuss some of the standards to which "good" software/information systems must conform. What is the main reason for this need to conform?
1) software must work correctly. This sounds obvious, but the implications of software that does not work can be extremely burdensome, if not devastating. 2) the software must conform to the requirements specification document created during analysis and any modifications made during design. 3) the software must be reliable over time while processing a potentially limitless combination of types of data, as well as user keystrokes and pointing device clicks. 4) the software must be maintainable and evolutionary over time. 5) the software must be user friendly.
However, user friendly is a relative term. The user's ability, computer knowledge, and so on must be taken into account when developing software. 6) the software should be easy to test and implement. 7) the software should use computer resources efficiently.

12.2 What is the first step necessary in software construction? Why?
As with any other "construction" project, a set of software "blueprints" and any other supporting documents which contain the model of the system are necessary. It is necessary to know what you are doing and where you are going, and the blueprints serve as your guide.

12.3 What are the uses and implications of software libraries and software reuse?
Many believe that software libraries and reuse can significantly contribute to the reduction of the software industry's large software development backlog. Reuse can reduce the time it takes for a software engineer to create an equivalent software module to just the library retrieval time needed to locate the appropriate reusable software module. Reuse may also eliminate the need to test an already tested and certified module.

12.4 List and briefly describe three different approaches to software construction.
1) top-down software construction takes a functional approach. A software engineer starts with the design overview of the system and then creates the program code for it. The software engineer then works his or her way down into the details of the system, writing the program code layer by layer until the bottom-most modules are programmed. 2) bottom-up software construction starts with the software engineer writing the program code for the lowest-level details of the system and proceeding to higher levels by a progressive aggregation of details until they collectively fit the requirements for the system. Next, the software engineer would construct the code for the module or modules that call these bottom-most modules.
The engineer would then continue working in an upward or outward manner until reaching the top-most, control-oriented modules. 3) middle-out software construction starts at a convenient middle of the system and then proceeds both upward and downward to levels as appropriate.

12.5 What is meant by the chief programmer concept?
The chief programmer is a person or group of persons who reviews the code of junior programmers with the intent of offering improvement suggestions.

12.6 What are cohesion and coupling and what is their purpose in software construction?
Cohesion is a measure of the strength of the interrelatedness of statements within a module. Coupling is a measure of the strength of the connection between modules. Both cohesion and coupling have an effect on software maintainability over its life and its efficiency in utilizing computer resources. The ideal situation is to maximize cohesion and minimize coupling.

12.7 What are the different types of cohesion? Which are more desirable? Which are less desirable?
1) coincidental cohesion -- exists when a module performs multiple, virtually unrelated actions, such as writing a computer program from start to finish using just one "main" module which contains all of the programming statements necessary to accomplish the job of the program. This is the worst type of cohesion. 2) logical cohesion -- exists in a module when it performs a series of related actions, one or more of which are requested by the calling module. An example would be a module that calculates net pay for hourly, salaried, and piece-work employees. The calling module tells this module what type of employee to calculate the net pay for, and then the module uses only one of the three calculation actions available. Logical cohesion is a lower-quality form of cohesion. 3) temporal cohesion -- exists in a module when it performs a series of actions related by time.
This type of cohesion is better than coincidental or logical, but is still on the lower end of cohesion types. 4) procedural cohesion -- exists in a module when it performs a series of actions related by the sequence of steps being performed. Procedural cohesion is better than temporal cohesion because the actions being performed are related to one another in order to get the job done, not merely because they are done at the same time. 5) communicational cohesion -- exists in a module when it performs a series of actions related by the sequence of steps to be followed and, in addition, all of the actions are performed on the same data. Communicational cohesion is better than procedural cohesion because the actions being performed are related and affect the same data. 6) information cohesion -- exists in a module when it performs several actions, each having its own entry/exit point and independent code, with all actions performed on the same data structure. 7) functional cohesion -- the best kind of cohesion. It exists in a module when it performs exactly one action or achieves a single goal. Because modules with functional cohesion have only one task or one goal, they can more readily be reused in a variety of situations.

12.8 What are the different types of coupling? Which are more desirable? Which are less desirable?
1) content coupling -- exists when one module directly references the contents of another module. This is the worst type of coupling. 2) common coupling -- exists when at least two modules have access to the same global data. This type of coupling is very unmanageable and is not much better than content coupling. 3) control coupling -- exists if one module passes an element of control to some other module which, in turn, explicitly controls the logic of the second module. 4) stamp coupling -- exists if one module is allowed to pass an entire data structure or record to another module.
This is difficult to maintain over time because one might assume that all data elements in the data structure are being utilized in some way by the second module when, in fact, only one or a few of the data elements are used. 5) data coupling -- this type of coupling views other modules as "black boxes" that receive input data variables from other modules, provide output data variables to other modules, or both. If coupling must occur at all, data coupling is preferred. 12.9 What are the three main principles to which software testing must conform? Why are they important? 1) software testing begins with plans for the tests and ends with the user accepting the tested software. This completes the development loop and brings the software into compliance with the requirements specification document. 2) the intent of software testing is to cause and discover errors. This is necessary because most, if not all, errors should be caught prior to end use. 3) the rules of reasonableness should prevail. You cannot test every conceivable action in a software program; time and money are always a prevailing concern. Testing should follow commonly accepted techniques designed to provide a certain threshold of statistical confidence in the tests. 12.10 What distinguishes the two terms, white box and black box? White box refers to software testing from an inside perspective. The software tester can actually look at the code statements within the portion being tested and change them if they do not work correctly. White box testing is usually done by testers who wrote the code themselves. Black box refers to software testing from an outside perspective. The software tester does not have the ability to look at code statements and change them. Black box testing is usually done on software written by authors other than the testers themselves. 12.11 What are the differences between alpha- and beta-level testing?
Alpha testing is done in-house by a company's own programmers, software engineers, and internal users. Beta testing is a level of testing done after alpha testing provides satisfactory results; satisfactory does not necessarily mean bug-free. Beta testing usually involves a select group of customers who test the software as they would use it in their normal environment. In exchange, these customers get an advance copy of the soon-to-be-released software. 12.12 Discuss the importance of the feedback/fallback loop in the generic software testing methodology. The feedback and fallback arrows in figure 13.6 indicate that whenever testing discovers errors at any level of the methodology, the programming staff will need to make coding changes followed by a trip back through the layers to ensure defect-free code at all test levels. What must be kept in mind is that changes made because of errors in one module or section of a program may fix the identified bug but cause an error in some other, seemingly unrelated section of the program when it is tested again. 12.13 What distinguishes acceptance testing from system, function, integration, and unit testing? Acceptance testing is usually done by the end user of the software program; it is also referred to as beta testing. By the time software can be purchased in stores, it has usually gone through thousands of hours of acceptance testing. Chapter Thirteen 13.1 List and define the three phases of implementation. 1) installation -- putting the new information system into place. This means physically putting in place all five components (hardware, software, people, procedures, data) of an information system. 2) activation -- getting the user to use the new or changed information system. 3) institutionalization -- making the new or changed information system the status quo within the organization. 13.2 What is conversion, and what are the two basic conversion strategies?
Conversion is the switching from one information system to another, usually the replacement of an obsolete information system with a new or changed one. A cutover, or abrupt, conversion strategy is one in which the user uses the old system up to a specific point in time, after which the user starts using the new system. This would usually happen over a Friday-to-Monday period in which the system was not being used. A parallel conversion strategy is one that allows the user to continue to use the old system and the new system simultaneously for a period of time, eventually discarding the old system. The difference between the two is that a parallel conversion implies that one or more users will be using both the old and the new information system for a period of time. 13.3 What are the three main factors to consider when choosing a conversion strategy? 1) the design of the information system. Certain characteristics of both the new and old systems must be taken into account when deciding which strategy to use. If the systems are very different, it might not be wise to use both at the same time (parallel); if the systems are similar, running both at once can be very useful. It always depends on the particular situation. 2) user needs and preferences. Users may prefer a particular conversion strategy, and monetary considerations may dictate the use of one strategy over another. The fact that a user might not use a new system if the old one is still available is yet another consideration. 3) risk factors. The risk profile of the organization, the information system, and the user must be considered. Some organizations are more conservative than others and may opt for a parallel conversion; for others, the risk of failure of a cutover strategy is worth taking. Once again, it depends on the particular situation. 13.4 What are some of the advantages and disadvantages of the different conversion strategies?
CUTOVER: Advantages include no duplication of effort for users; no transition costs from keeping the old system running; and a learning advantage for groups converted later if cutover is done in groups. Disadvantages include high risk; results that are not comparable with the old system; and a sense of user insecurity, since users do not know the new system the way they knew the old. PARALLEL: Advantages include low risk; a high sense of user security (users can still use the old system); and the ability to compare results with the old system. Disadvantages include a duplication of effort for users; transition costs; and additional processing strain on the computer (doing the same thing twice). 13.5 What is one of the important consequences that the development team almost always faces during the activation stage of implementation? There will always be problems that the development team must not only expect but be prepared to correct. The development team should also monitor the users' usage of the information system, taking note of what the users like and dislike, patterns of usage, and shortcut opportunities to improve the system's effectiveness. 13.6 What are some of the reasons that new or changed information systems meet with user resistance? People in general do not like change. Changes in the way they have to do their jobs inevitably leave many people with a fear of job loss, a fear of failure, and a fear of uncertainty (they do not know the new system). People will also naturally prefer an old system over a new one because they "know" it better. Other factors are politics, pride, and loss of control; some people have a vested interest in the status quo. People also feel that management does not allow enough time to learn the new system. 13.7 Briefly discuss a common misperception about why a new information system fails. Very often, the new technology takes much of the blame. Any number of reasons are cited: "It doesn't work," "It's too hard to learn," "The old system was better," and so on.
Much less blame is placed upon the organization's infrastructure, and even less blame is placed upon the human users. In reality, however, it is often just the opposite: human factors usually deserve the most blame, then the organization's infrastructure, and finally, the least blame should fall upon the technology itself. 13.8 How do the stages of organizational change relate to the stages of information system implementation? 1) The unfreezing stage of organizational change relates to the installation stage of information system implementation. Unfreezing relates to the end of an old situation (an old, obsolete, or inadequate information system). The forces for change (the need for a new or changed information system) exceed the forces for remaining with the status quo (the existing system). 2) The moving stage of organizational change relates to the activation stage of information system implementation. During this stage, people are starting to use the new information system; it is a transition phase. Users struggle with a high degree of ambiguity during this period because they are dealing with the problem and trying to adjust to something new or different. Users also expend a high degree of energy because they may have to devote active mental energy, physical endurance, and willpower. 3) The refreezing stage of organizational change relates to the institutionalization stage of information system implementation. The changes introduced by the new system now become the new status quo. 13.9 What is force field analysis and how is it used to deal with user resistance to information systems? Force field analysis is a survey tool used to identify an individual's perceived positive and negative aspects of some current or pending organizational change. Users list all barriers or benefits that they perceive from the new system, using "strength" or "importance" lines as a method of ranking these items.
This listing can be done either verbally in a group setting or anonymously through the use of electronic meeting software. The rank-ordered lists are then presented back to the individuals in aggregate form for verification and clarification, and the rank-ordered lists with potential solutions are then presented to management for consideration and action. 13.10 What are some of the critical success factors for information system implementation? 1) user commitment -- there must be a champion or advocate (with organizational influence) for the new system. Users have to see that others are using the system in order to fully accept it themselves. 2) organizational trust -- management and staff must have trust in one another. 3) open communication between information systems management and staff and user management and staff. 4) financial commitment from user management -- insufficient or limited funding for the new system can cause significant problems during the development process and during the system's implementation. 5) a common view of the information system's development and implementation strategy between management and staff -- all participants must agree on the strategy chosen for the development and implementation of the information system. If everyone is not on the same page, the development and/or implementation of the system can be seriously jeopardized.
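The aggregation step described in question 13.9 -- collecting each participant's perceived forces for and against a change, summing the strength rankings, and presenting a rank-ordered list -- can be sketched in a few lines of code. This is a minimal illustration only; the force names, the 1-to-5 strength scale, and the simple sum-and-sort ranking are assumptions made for the example, not details from the text.

```python
from collections import defaultdict

# Hypothetical survey responses: each participant rates the forces they
# perceive as driving the change or restraining it, with a strength
# score from 1 (weak) to 5 (strong). All names and scores are invented.
responses = [
    {"driving": {"faster reporting": 5, "fewer manual steps": 3},
     "restraining": {"learning curve": 4, "fear of job loss": 2}},
    {"driving": {"faster reporting": 4},
     "restraining": {"learning curve": 5, "loss of control": 3}},
]

def aggregate(responses, side):
    """Sum strength scores for one side and rank forces, strongest first."""
    totals = defaultdict(int)
    for r in responses:
        for force, strength in r[side].items():
            totals[force] += strength
    return sorted(totals.items(), key=lambda kv: -kv[1])

driving = aggregate(responses, "driving")
restraining = aggregate(responses, "restraining")
```

The two rank-ordered lists correspond to what would be fed back to participants in aggregate form and then passed, with potential solutions, to management.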