Sunday, August 23, 2020

International Quality Management System - Practice exam Coursework

Universal Quality Management System - Practice Test - Coursework Example. Delivering quality products and services also helps in building a reputation for a business, which influences the achievement of competitive advantage (Goetsch and Davis, 2013). Generally, organizations seek accreditation from the international body that recognizes quality standards. The body grants ISO certification to organizations that meet international quality standards, which helps build a company's brand and reputation. Quality is not attainable unless it is created, meaning that products and services cannot reach quality standards by accident. The achievement of quality is a process, since quality helps create value for both customers and the business. Based on this understanding, organizations have moved beyond merely achieving quality to improving the process by applying total quality management. Total quality management is a process that focuses on increasing a company's competitive level through the continuous improvement of its capabilities. Capabilities in this case include services, business processes, employees, products and the business environment (p. 4). Total quality management also focuses on continually achieving customer satisfaction at the lowest cost. Quality can be achieved by improving a business's operating characteristics, making products durable and dependable, and improving product features. Juran defined quality as 'fitness for use' and believed that money was the language most commonly used in management. Juran developed the Juran trilogy, which has quality improvement, quality control and quality planning as its three main elements. In quality planning, Juran suggested that it is important for a business to identify its customers and their needs before embarking on the process of delivering quality (p. 10). Once these elements have been identified, a business is able to develop products and services that can meet the needs of its customers. For this

Friday, August 21, 2020

Teddington Tennis Club Essay Example | Topics and Well Written Essays - 1500 words

Teddington Tennis Club - Essay Example. The form essentially offers two options for the user: one, to view past results, and two, to enter results online. Initially, the user needs to choose the year of competition, then the week, and finally the teams playing that week. From here the options vary. The user can either view the results, if the year selected is in the past, or else update the results for present and future games. This is done by virtually hiding all the controls present in the form until the user clicks on the appropriate buttons, which are 'View' and 'Update' (the names are self-explanatory). This is done by assigning a tag to each of these controls and setting their Visible property to 'false'. A sample piece of code is shown below. Once these are selected, the user can click the 'GO' button to see the result input boxes filled with players' names adjacent to them. When the user fills them in and clicks 'Save', the data gets stored in the appropriate tables. The query used to update these values is given below: The creation of the financial report, detailing the players who are members for the current financial year and those who are not, essentially depends on the data available in two tables, namely 'Members' and 'Players'. These tables contain details about the players, their member IDs, and their membership details. The report is created using the Report Wizard of MS Access, which lists out the fields that should be presented in the report in addition to the variables by which the report must be ordered. The report is generated by analysing the value of the 'Members' table field column Current, which records whether the player is a member or not. If the value for a member ID is 'Y', the player is a member, and if 'N', the player needs to renew his membership. Based on the Member ID and Player ID relationship, the player details are obtained from the Players table. An example of the report that was generated for the current year is shown below.

Wednesday, August 19, 2020

Understanding Racial Discrimination Essay Topics

Understanding Racial Discrimination Essay Topics

The subject of racial discrimination essay topics has been at the center of much discussion and debate. While some suggest that the topic is already too heated and too costly, others feel that the subject is too fundamental for students to be left out of it. As with any subject in English-language writing, the debate is bound to rage on, but whether one is a firebrand or a peacemaker is only going to be decided by one's perspective.

The need for an essay on race has undoubtedly grown to the point where one needs to have an opinion on the issue. Whether one agrees or disagrees with the moral of the issue is entirely up to the student. Many people want their position known, while others find it odd and not worth addressing.

One principal reason why so many people are discussing this issue is the double standards within our society. In today's society, there are two groups: white and black. They live a different life from everyone else. For some reason, the two groups of people can be viewed as somehow worse than the rest of us, yet they are held up as angels by the remainder of society.

While everyone is entirely free to believe what they want and to be exactly who they want, America has a lot of problems as a nation. Many of these problems are connected to the events of the civil rights movement. Racism and discrimination have been an issue in America since the beginning. If one were to look at the history of humankind, one would see that it seems as though everyone has their own list of people they would rather be left alone by.

Nature itself will change in response to man's actions. If a fire must be put out by firefighters, nature will serve its purpose. Of course, when it comes to the acceptance of difference, the process may not be so clear. Nevertheless, no one should be allowed to discriminate against another person, regardless of the differences between them.

A good thing to know is that you don't have to go to school to learn the lesson. If you're not keen on taking a class to learn about racial discrimination essay topics, there is still another way to get educated. You can get your hands on some books about the subject, check the Internet, or even do a Google search.

As long as you remember that the essay isn't required reading, you should be fine. However, if you have read any of the articles or book reviews, you will probably reach the conclusion that the topic is far from simple and too sensitive to allow for easy discussion.

Monday, August 10, 2020

Current Affairs For Essay Writing

Current Affairs For Essay Writing

Current affairs for essay writing is a great way to complete an essay. Some people prefer a term paper or thesis, but this is certainly not required of most students today, as they like to write short, clever and contemporary essays on contemporary topics.

A future student who has been wondering how he or she will secure a better career should look at the coming generation of Internet writers. These essay writers are professionals, and their services are easily available at various online portals. Their essays have many similarities with the ones you would find in schools and colleges, but there are a few things that distinguish them from an ordinary school-style essay.

Even a student with knowledge of science, technology and economics would find these essays interesting, as the topics range from current issues in the economy and politics to world issues like terrorism and crime. It is important that the writer keeps his readers interested while writing his essays. This is especially true of essays dealing with current affairs; otherwise, the essay does not do justice to the ability of the writer if he only concentrates on the subject matter.

The topic for essays should be something that interests the reader and increases their engagement. For the most part, essays dealing with current issues do not have historical perspectives associated with them. However, if the essay is built around recent developments and the ideas they present, it gives a feeling of familiarity. Some of the topics would include jobs, politics, country, economy and education.

This is because these essays deal with ongoing global issues. Other topics include family issues, health care, love and marriage, suicide and philosophy. Therefore, if you have concentrated on particular aspects of these issues, you should prepare your essay to suit the kind of paper you are going to write.

The English content of the essay is important. If you want your essay to have any kind of effect on the reading level of the readers, you should decide on an approach based on this particular subject. Your essay should ideally be conversational and short. It should allow the reader to grasp the substance of the essay without the need for tedious explanations.

However, if you cannot use formal English for general essay writing, you should write in a conversational register instead. This will help you convey your thoughts on the subject to the best effect. For this reason, writers who are fluent in both formal and casual English are always preferred, and an essay is usually more enjoyable when it includes a blend of the two.

Tuesday, July 28, 2020

Essay Samples

Essay Samples

The perfect job (essay) samples are essential when you take your diploma and look for employment. Job samples should be comparable to a job description. In fact, you can use the job sample as an inspiration for your job description.

The perfect job (essay) samples are usually the job samples that you are already searching for. If you are looking for work, you can use the job sample as your inspiration for a job. Furthermore, if you need to find a new job, it is also possible to use the job sample as your introduction.

In order to create a good essay sample, you should know the patterns of the job description. For example, you should consider for a moment that it could be technical writing, or you could be offered research work. Similarly, the job sample might be creative writing or IT writing. Each of these is specific to its workplace and the kind of work, so your ideal essay sample should not depend only on the nature and kind of work.

Another important thing to consider when writing a job (essay) sample is that your essay should be original. The ideal job (essay) samples should be different from other job descriptions. A job (essay) sample should be unique in its approach to writing and in its content. Moreover, an essay that is unique is likely to be well received by the employer.

In creating a job sample, you should be careful to use a variety of keywords. Although you may concentrate on the keywords that are used in a specific job description, it is still useful to write about other things as well. Thus, you should consider using different keywords for different job descriptions. This will help you make the job sample original.

The perfect job (essay) samples are likely to be informative, instructive or valuable. This is important. If you are writing about a job, you should concentrate on the significant aspects of the job and avoid writing about insignificant things.

The perfect job (essay) samples are likely to include names of locations and organizations. In addition, they are likely to contain a section of opinions or tips. Finally, the perfect job (essay) samples should show examples of work.

As you can see, there are several things to consider when you are writing the perfect job (essay) samples. It is also a smart idea to discuss these things with your teacher before you start to create your own work samples.

Saturday, July 18, 2020

Finding Free College History Papers Online

Finding Free College History Papers Online

Free college history papers online are available in a variety of formats for use by those hoping to supplement their college education. If you are keen on working towards your degree but don't have the resources or the time to do so, then working on college papers online is an ideal option.

In order to get started with online college papers, or any other kind of college work at your own pace, it is important that you are able to identify the right website for working with free college history papers online. As long as you take a few moments to identify the options that will work best for you, you will find that it is much easier to finish assignments, and you will feel more confident that you will be able to complete the work on time.

One of the first things you will want to do is ensure that your online class papers are available throughout the semester in which you plan to take final exams. The last thing you want is to give an instructor the chance to see an incomplete assignment that they can access at their convenience. You want to make sure that when you are reviewing the documents you have available, they are all neatly organized, so that you don't have to struggle to find the source of the information you need.

Your final exams should be the most important part of your academic career, and your final assignments should help you become familiar with your coursework and how to make it more effective. You should always take the time to make sure that you are well prepared and able to present your assignments and tests in the best way possible, and it will then be much easier for you to go into your final exam with confidence.

When you are evaluating your final papers, don't assume that the instructor will know exactly what to look for, or that he or she will understand what the main points are. Some instructors may not be familiar with online college papers and may not be able to understand what you are trying to say, but that doesn't mean you can't be successful in your future life as a student.

Since these are your final exams, it is important that you find the exact format that you should prepare for. Once you have that information, it will be much easier for you to find the materials you need in order to take the final exam you are trying to complete.

The last thing you want to do before you start preparing for your final exam is to make sure that you are familiar with all of the elements that will come up in your final paper. Remember that there are different kinds of final papers, and each of them will require a certain amount of preparation.

One of the best things about online college papers is that they allow you to finish your final exams when you need to, without travelling to class or spending a lot of money on travel costs. It is best to start thinking about your future career while you are finishing your work on papers online, and while some students will only work on papers when they are studying at home, others may even be working on their final exam during their lunch hour!

Sunday, July 5, 2020

Software Testing - How do we measure the progress of testing - Free Essay Example

Chapter 10 Metrics and Models in Software Testing

How do we measure the progress of testing? When do we release the software? Why do we devote more time and resources to testing a particular module? What is the reliability of the software at the time of release? Who is responsible for the selection of a poor test suite? How many faults do we expect during testing? How much time and how many resources are required to test a software product? How do we know the effectiveness of a test suite? We may keep on framing such questions without much effort. However, finding answers to such questions is not easy and may require a significant amount of effort. Software testing metrics may help us to measure and quantify many things, which in turn may provide answers to such important questions.

10.1 Software Metrics

"What cannot be measured, cannot be controlled" is a reality in this world. If we want to control something, we should first be able to measure it. Therefore, everything should be measurable. If a thing is not measurable, we should make an effort to make it measurable. The area of measurement is very important in every field, and we have mature and established metrics to quantify many things. However, in software engineering this area of measurement is still in its developing stage and may require significant effort to make it mature, scientific and effective.

10.1.1 Measure, Measurement and Metrics

These terms are often used interchangeably. However, we should understand the difference amongst them. Pressman explained this clearly as [PRES05]: a measure provides a quantitative indication of the extent, amount, dimension, capacity or size of some attribute of a product or process; measurement is the act of determining a measure; a metric is a quantitative measure of the degree to which a product or process possesses a given attribute. For example, a measure is the number of failures experienced during testing, measurement is the way of recording such failures, and a software metric may be the average number of failures experienced per hour during testing.

Fenton [FENT04] has defined measurement as: "It is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules."

The basic issue is that we want to measure every attribute of an entity, and we should have established metrics to do so. However, we are still in the process of developing metrics for many attributes of the various entities used in software engineering. Software metrics can be defined as [GOOD93]: "The continuous application of measurement based techniques to the software development process and its products to supply meaningful and timely management information, together with the use of those techniques to improve that process and its products."

Many things are covered in this definition. Software metrics are related to measures which, in turn, involve numbers for quantification; these numbers are used to produce a better product and to improve the related process. We may like to measure quality attributes such as testability, complexity, reliability, maintainability, efficiency, portability, enhanceability and usability of a software product. We may also like to measure its size and the effort, development time and resources required for it.

10.1.2 Applications

Software metrics are applicable in all phases of the software development life cycle.
In the software requirements and analysis phase, where the output is the SRS document, we may have to estimate the cost, manpower requirement and development time for the software. The customer may like to know the cost of the software and the development time before signing the contract. As we all know, the SRS document acts as a contract between customer and developer. The readability and effectiveness of the SRS document may help to increase the confidence level of the customer and may provide better foundations for designing the product. Some metrics are available for cost and size estimation, such as COCOMO, the Putnam resource allocation model and the function point estimation model. Some metrics are also available for the SRS document itself, such as the number of mistakes found during verification, change request frequency and readability.

In the design phase, we may like to measure the stability of a design, coupling amongst modules, cohesion of a module and so on. We may also like to measure the amount of data input to a software system, processed by it and produced by it. A count of the amount of data input to, processed in, and output from software is called a data structure metric. Many such metrics are available, such as number of variables, number of operators, number of operands, number of live variables, variable spans and module weakness. Some information flow metrics are also popular, such as FAN-IN and FAN-OUT. Use cases may also be used to design metrics, such as counting actors, counting use cases and counting the number of links. Some metrics may also be designed for various attributes of websites, such as number of static web pages, number of dynamic web pages, number of internal page links, word count, number of static and dynamic content objects, time taken to search a web page and retrieve the desired information, and similarity of web pages.

Software metrics have a number of applications during the implementation phase and after its completion. Halstead software size measures are applicable after coding, such as token count, program length, program volume, program level, difficulty, estimation of time and effort, and language level. Some complexity measures are also popular, such as cyclomatic complexity, knot count and feature count.

Software metrics have found a good number of applications during testing. One area is reliability estimation, where the popular models are Musa's basic execution time model and the logarithmic Poisson execution time model. The Jelinski-Moranda model [JELI72] is also used for the calculation of reliability. Source code coverage metrics are available that calculate the percentage of source code covered during testing. Test suite effectiveness may also be measured. Number of failures experienced per unit of time, number of paths, number of independent paths, number of du-paths, percentage of statement coverage and percentage of branch conditions covered are also useful software metrics.

The maintenance phase may have many metrics, such as number of faults reported per year, number of requests for changes per year, percentage of source code modified per year and percentage of obsolete source code per year.

We may find a number of applications of software metrics in every phase of the software development life cycle. They provide meaningful and timely information which may help us to take corrective actions as and when required. Effective implementation of metrics may improve the quality of software and may help us to deliver the software in time and within budget.
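As a small aside, the complexity measures mentioned above are easy to compute once a control-flow graph of a unit is available. The sketch below is a minimal illustration (it is not from this chapter; the graph and function are hypothetical) of cyclomatic complexity, V(G) = E - N + 2P, computed from edge and node counts.

# Illustrative sketch: cyclomatic complexity V(G) = E - N + 2P,
# where E = number of edges, N = number of nodes and P = connected components
# of the control-flow graph.
def cyclomatic_complexity(edges, num_nodes, num_components=1):
    """Return V(G) for a control-flow graph given as a list of (from, to) edges."""
    return len(edges) - num_nodes + 2 * num_components

# Hypothetical control-flow graph of a unit with one decision and one loop:
# nodes 1..7, edges as listed below.
cfg_edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (6, 2), (5, 7)]
print(cyclomatic_complexity(cfg_edges, num_nodes=7))  # -> 3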
10.2 Categories of Metrics

There are two broad categories of software metrics, namely product metrics and process metrics. Product metrics describe the characteristics of the product, such as size, complexity, design features, performance, efficiency, reliability and portability. Process metrics describe the effectiveness and quality of the processes that produce the software product. Examples are the effort required in the process, the time to produce the product, the effectiveness of defect removal during development, the number of defects found during testing, and the maturity of the process [AGGA08].

10.2.1 Product metrics for testing

These metrics provide information about the testing status of a software product. The data for such metrics are generated during testing and may help us to know the quality of the product. Some of the basic metrics are:

(i) Number of failures experienced in a time interval
(ii) Time interval between failures
(iii) Cumulative failures experienced up to a specified time
(iv) Time of failure
(v) Estimated time for testing
(vi) Actual testing time

With these basic metrics, we may derive some additional metrics:

(i) Average time interval between failures
(ii) Maximum and minimum failures experienced in any time interval
(iii) Average number of failures experienced in time intervals
(iv) Time remaining to complete the testing

We may design similar metrics to find indications about the quality of the product.

10.2.2 Process metrics for testing

These metrics are developed to monitor the progress of testing, the status of design and development of test cases, and the outcome of test cases after execution. Some of the basic process metrics are:

(i) Number of test cases designed
(ii) Number of test cases executed
(iii) Number of test cases passed
(iv) Number of test cases failed
(v) Test case execution time
(vi) Total execution time
(vii) Time spent on the development of a test case
(viii) Total time spent on the development of all test cases

On the basis of the above direct measures, we may design the following additional metrics, which convert the base metric data into more useful information (a short computation sketch is given at the end of this subsection):

(i) % of test cases executed
(ii) % of test cases passed
(iii) % of test cases failed
(iv) Total actual execution time / total estimated execution time
(v) Average execution time of a test case

These metrics, although simple, may help us to know the progress of testing and may provide meaningful information to the testers and the project manager. An effective test plan may force us to capture data and convert it into useful metrics for both process and product. This document also guides the organization for future projects and may suggest changes in the existing processes in order to produce a good quality, maintainable software product.
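The derived process metrics listed above are simple ratios of the base counts. The following sketch is illustrative only; the variable names and sample numbers are assumed, not taken from the chapter.

# Base process metrics collected during a test cycle (sample numbers are assumed).
designed, executed, passed, failed = 120, 100, 88, 12
total_execution_time_hrs = 25.0        # total time spent executing test cases
estimated_execution_time_hrs = 30.0    # estimate made in the test plan

# Derived process metrics from section 10.2.2.
pct_executed = 100.0 * executed / designed
pct_passed = 100.0 * passed / executed
pct_failed = 100.0 * failed / executed
time_ratio = total_execution_time_hrs / estimated_execution_time_hrs
avg_execution_time_hrs = total_execution_time_hrs / executed

print(f"% executed: {pct_executed:.1f}, % passed: {pct_passed:.1f}, "
      f"% failed: {pct_failed:.1f}")
print(f"actual/estimated time: {time_ratio:.2f}, "
      f"average time per test case: {avg_execution_time_hrs:.2f} h")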
10.3 Object Oriented Metrics used in Testing

Object oriented metrics capture many attributes of a software system, and some of them are relevant to testing. Measuring structural design attributes of a software system, such as coupling, cohesion or complexity, is a promising approach towards early quality assessments. Several metrics are available in the literature to capture the quality of design and source code.

10.3.1 Coupling Metrics

Coupling relations increase complexity, reduce encapsulation and potential reuse, and limit understanding and maintainability. The coupling metrics require information about attribute usage and method invocations of other classes. These metrics are given in table 10.1. Higher values of coupling metrics indicate that a class under test will require a larger number of stubs during testing. In addition, each interface will need to be tested thoroughly.

Table 10.1: Coupling Metrics
- Coupling Between Objects (CBO): CBO for a class is a count of the number of other classes to which it is coupled. [CHID94]
- Data Abstraction Coupling (DAC): Data abstraction is a technique of creating new data types suited for an application to be programmed. DAC = number of ADTs defined in a class. [LI93]
- Message Passing Coupling (MPC): Counts the number of send statements defined in a class.
- Response For a Class (RFC): The set of methods that can potentially be executed in response to a message received by an object of that class. It is given by RFC = |RS|, where RS is the response set of the class. [CHID94]
- Information flow-based coupling (ICP): The number of methods invoked in a class, weighted by the number of parameters of the methods invoked. [LEE95]
- Information flow-based inheritance coupling (IHICP): Same as ICP, but only counts method invocations of ancestors of classes.
- Information flow-based non-inheritance coupling (NIHICP): Same as ICP, but only counts method invocations of classes not related through inheritance.
- Fan-in: Count of modules (classes) that call a given class, plus the number of global data elements. [BINK98]
- Fan-out: Count of modules (classes) called by a given module, plus the number of global data elements altered by the module (class). [BINK98]

10.3.3 Inheritance Metrics

Inheritance metrics require information about the ancestors and descendants of a class. They also collect information about methods overridden, inherited and added (i.e. neither inherited nor overridden). These metrics are summarized in table 10.3. If a class has a larger number of children (sub-classes), more testing may be required for the methods of that class. The greater the depth of the inheritance tree, the more complex the design, as more methods and classes are involved. Thus, we may test all the inherited methods of a class, and the testing effort will increase accordingly. (An illustrative computation of two of these metrics follows table 10.3.)

Table 10.3: Inheritance Metrics
- Number of Children (NOC): The number of immediate subclasses of a class in a hierarchy. [CHID94]
- Depth of Inheritance Tree (DIT): The depth of a class within the inheritance hierarchy is the maximum number of steps from the class node to the root of the tree, measured by the number of ancestor classes.
- Number of Parents (NOP): The number of classes that a class directly inherits from (i.e. multiple inheritance). [LORE94]
- Number of Descendants (NOD): The number of subclasses (both directly and indirectly inherited) of a class.
- Number of Ancestors (NOA): The number of superclasses (both directly and indirectly inherited) of a class. [TEGA92]
- Number of Methods Overridden (NMO): When a method in a subclass has the same name and type signature as in its superclass, the method in the superclass is said to be overridden by the method in the subclass. [LORE94]
- Number of Methods Inherited (NMI): The number of methods that a class inherits from its super (ancestor) classes.
- Number of Methods Added (NMA): The number of new methods added in a class (neither inherited nor overriding).
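Inheritance metrics such as NOC and DIT can be read directly off the class hierarchy. The short sketch below is an illustrative example only; the hierarchy and helper functions are assumed, not taken from the chapter.

# Illustrative computation of NOC and DIT from a class hierarchy.
# parents maps each class to the list of classes it directly inherits from.
parents = {
    "Account": [],                 # root class
    "SavingsAccount": ["Account"],
    "CurrentAccount": ["Account"],
    "SalarySavingsAccount": ["SavingsAccount"],
}

def noc(cls):
    """Number of Children: immediate subclasses of cls."""
    return sum(cls in p for p in parents.values())

def dit(cls):
    """Depth of Inheritance Tree: longest path from cls up to a root class."""
    if not parents[cls]:
        return 0
    return 1 + max(dit(p) for p in parents[cls])

for c in parents:
    print(c, "NOC =", noc(c), "DIT =", dit(c))
# Account: NOC = 2, DIT = 0; SalarySavingsAccount: NOC = 0, DIT = 2, etc.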
10.3.4 Size Metrics

Size metrics indicate the length of a class in terms of lines of source code and the methods used in the class. These metrics are given in table 10.4. If a class has a larger number of methods with greater complexity, then more test cases will be required to test that class. When a class with many complex methods is inherited, it will require more rigorous testing. Similarly, a class with a larger number of public methods will require thorough testing of those public methods, as they may be used by other classes.

Table 10.4: Size Metrics
- Number of Attributes per Class (NA): Counts the total number of attributes defined in a class.
- Number of Methods per Class (NM): Counts the number of methods defined in a class.
- Weighted Methods per Class (WMC): A count of the sum of the complexities of all methods in a class. Consider a class K1 with methods M1,...,Mn defined in the class, and let C1,...,Cn be the complexities of those methods; WMC is the sum of these complexities. [CHID94]
- Number of public methods (PM): Counts the number of public methods defined in a class.
- Number of non-public methods (NPM): Counts the number of private methods defined in a class.
- Lines Of Code (LOC): Counts the lines in the source code.

10.4 What should we measure during testing?

We should measure everything (if possible) that we want to control and that may help us to find answers to the questions given at the beginning of this chapter. Test metrics may help us to measure the current performance of any project. The collected data may become historical data for future projects. This data is very important because, in the absence of historical data, all estimates are just guesses. Hence, it is essential to record key information about current projects. Test metrics may become an important indicator of the effectiveness and efficiency of a software testing process and may also identify risky areas that need more testing.

10.4.1 Time

We may measure many things during testing with respect to time, some of which are:

1) Time required to run a test case
2) Total time required to run a test suite
3) Time available for testing
4) Time interval between failures
5) Cumulative failures experienced up to a given time
6) Time of failure
7) Failures experienced in a time interval

A test case requires some time for its execution. A measurement of this time may help to estimate the total time required to execute a test suite. This is the simplest metric and may be used to estimate the testing effort. We may calculate the time available for testing at any point during testing if we know the total allotted time. Generally, the unit of time is seconds, minutes or hours per test case; total testing time and the time needed to execute a planned test suite may be defined in terms of hours.

When we test software, we experience failures. These failures may be recorded in different ways, such as time of failure, time interval between failures, cumulative failures experienced up to a given time, and failures experienced in a time interval.
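All of these time-related measures can be derived from a simple log of failure times. The sketch below is illustrative; it uses the failure times tabulated in tables 10.5 and 10.6, which follow, and computes failure intervals, cumulative failures up to a given time, and failures per 20-minute interval.

# Failure times in minutes, as recorded during testing (same data as table 10.5).
failure_times = [12, 26, 35, 38, 50, 70, 106, 125, 155, 200]

# Time interval between successive failures (table 10.5, last column).
intervals = [t - prev for prev, t in zip([0] + failure_times, failure_times)]

# Cumulative failures experienced up to a given time t.
def cumulative_failures(t):
    return sum(1 for ft in failure_times if ft <= t)

# Failures experienced in each 20-minute interval (table 10.6, last column).
window = 20
counts = [cumulative_failures(t) - cumulative_failures(t - window)
          for t in range(window, 201, window)]

print(intervals)                 # [12, 14, 9, 3, 12, 20, 36, 19, 30, 45]
print(cumulative_failures(60))   # 5 failures experienced up to 60 minutes
print(counts)                    # [1, 3, 1, 1, 0, 1, 1, 1, 0, 1]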
Consider table 10.5 and table 10.6, where a time based failure specification and a failure based failure specification are given:

Table 10.5: Time based failure specification
Sr. No. of failure occurrence   Failure time (minutes)   Failure interval (minutes)
1                               12                       12
2                               26                       14
3                               35                       09
4                               38                       03
5                               50                       12
6                               70                       20
7                               106                      36
8                               125                      19
9                               155                      30
10                              200                      45

Table 10.6: Failure based failure specification
Time (minutes)   Cumulative failures   Failures in interval of 20 minutes
20               01                    01
40               04                    03
60               05                    01
80               06                    01
100              06                    00
120              07                    01
140              08                    01
160              09                    01
180              09                    00
200              10                    01

These two tables give us an idea about the failure pattern and may help us to define the following:

1) Time taken to experience n failures
2) Number of failures in a particular time interval
3) Total number of failures experienced after a specified time
4) Maximum / minimum number of failures experienced in any regular time interval

10.4.2 Quality of source code

We may assess the quality of the delivered source code, after a reasonable time since release, using the following formula:

Quality of source code = WDB / (WDB + WDA)

where
WDB: number of weighted defects found before release
WDA: number of weighted defects found after release

The weight for each defect is defined on the basis of defect severity and removal cost. A severity is assigned to each defect by testers based on how important or serious the defect is. A lower value of this metric indicates that fewer defects, or less serious defects, were detected before release. We may also calculate the number of defects per executed test case. This may also be used as an indicator of source code quality as the source code progresses through the series of test activities [STEP03].

10.4.3 Source Code Coverage

We may like to execute every statement of a program at least once before its release to the customer. Hence, the percentage of source code coverage may be calculated as:

Percentage of source code coverage = (number of statements of the source code executed at least once during testing / total number of statements of the source code) x 100

A higher value of this metric gives confidence about the effectiveness of a test suite. We should write additional test cases to cover the uncovered portions of the source code.

10.4.4 Test Case Defect Density

This metric may help us to know the efficiency and effectiveness of our test cases:

Test case defect density = (number of failed test cases / number of test cases executed) x 100

where
Failed test case: a test case that, when executed, produced an undesired output.
Passed test case: a test case that, when executed, produced a desired output.

A higher value of this metric indicates that the test cases are effective and efficient, because they are able to detect a larger number of defects.

10.4.5 Review Efficiency

Review efficiency is a metric that gives insight into the quality of the review process carried out during verification:

Review efficiency = (number of defects found during review / total number of defects found during review and testing) x 100

The higher the value of this metric, the better the review efficiency.
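The metrics of sections 10.4.2 to 10.4.5 are again simple ratios. A minimal sketch is given below; the counts are assumed sample values, and the formulas follow the definitions given above.

# Assumed sample counts for one release.
weighted_defects_before = 180     # WDB: weighted defects found before release
weighted_defects_after = 20       # WDA: weighted defects found after release
statements_total = 5000
statements_executed = 4600        # statements executed at least once during testing
tests_executed, tests_failed = 100, 12
defects_in_review, defects_in_testing = 30, 45

quality_of_source_code = weighted_defects_before / (
    weighted_defects_before + weighted_defects_after)                 # section 10.4.2
statement_coverage = 100.0 * statements_executed / statements_total   # section 10.4.3
test_case_defect_density = 100.0 * tests_failed / tests_executed      # section 10.4.4
review_efficiency = 100.0 * defects_in_review / (
    defects_in_review + defects_in_testing)                           # section 10.4.5

print(quality_of_source_code, statement_coverage,
      test_case_defect_density, review_efficiency)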
10.5 Software Quality Attributes Prediction Models

Software quality is dependent on many attributes, such as reliability, maintainability, fault proneness, testability and complexity. A number of models are available for the prediction of one or more of these quality attributes. Such models are especially beneficial for large-scale systems, where testing experts need to focus their attention and resources on problem areas in the system under development.

10.5.1 Reliability Models

Many reliability models for software are available, where the emphasis is on failures rather than faults. We experience failures during the execution of a program. A fault in the program may lead to failure(s), depending upon the input(s) given to the program when executing it. Hence, time of failure and time between failures may help us to find the reliability of software. As we all know, software reliability is the probability of failure-free operation of software for a given time under specified conditions. Generally, we consider calendar time: we may like to know the probability that a given software system will not fail in one month, or one week, and so on. However, most of the available models are based on execution time. The execution time is the time for which the computer actually executes the program. Reliability models based on execution time normally give better results than those based on calendar time. In many cases, we have a mapping table that converts execution time to calendar time for the purpose of reliability studies. In order to differentiate the two timings, execution time is represented by tau and calendar time by t.

Most of the reliability models are applicable at the system testing level. Whenever software fails, we note the time of failure and also try to locate and correct the fault that caused the failure. During system testing, software may not fail at regular intervals and may not follow a particular pattern. The variation in time between successive failures may be described in terms of the following functions:

mu(tau): average number of failures up to time tau
lambda(tau): average number of failures per unit time at time tau, known as the failure intensity function

It is expected that the reliability of a program increases due to fault detection and correction over time, and hence the failure intensity decreases accordingly.

(i) Basic Execution Time Model

This is one of the popular models of software reliability assessment and was developed by J. D. Musa [MUSA79] in 1979. As the name indicates, it is based on execution time (tau). The basic assumption is that failures occur according to a non-homogeneous Poisson process (NHPP) during testing. Many examples may be given of real-world events where Poisson processes are used, such as:

* Number of users using a website in a given period of time
* Number of persons requesting railway tickets in a given period of time
* Number of e-mails expected in a given period of time

The failures during testing represent a non-homogeneous process, and the failure intensity decreases as a function of time. J. D. Musa assumed that the decrement in failure intensity per failure observed is constant, so the failure intensity as a function of the mean failures experienced is given as:

lambda(mu) = lambda0 (1 - mu / nu0)

where
lambda0: initial failure intensity at the start of testing
nu0: total number of failures experienced up to infinite time
mu: number of failures experienced up to a given point in time

Musa [MUSA79] has also given the relationship between failure intensity (lambda) and the mean failures experienced (mu), shown in figure 10.1. If we take the first derivative of the equation given above, we get the slope of the failure intensity:

d(lambda)/d(mu) = -lambda0 / nu0

The negative sign indicates a decreasing trend in failure intensity. This model also assumes a uniform failure pattern, meaning an equal probability of failures due to various faults. The relationship between execution time (tau) and mean failures experienced (mu), shown in figure 10.2, is:

mu(tau) = nu0 [1 - exp(-lambda0 tau / nu0)]

This relationship may be derived by solving d(mu)/d(tau) = lambda0 (1 - mu / nu0) for mu(tau). The failure intensity as a function of execution time, shown in figure 10.3, is:

lambda(tau) = lambda0 exp(-lambda0 tau / nu0)

This relationship is useful for calculating the present failure intensity at any given value of execution time. Two additional equations are given to calculate the additional failures required to be experienced to reach a failure intensity objective (lambdaF) and the additional execution time required to reach that objective.
These equations are given as:

delta-mu = (nu0 / lambda0) (lambdaP - lambdaF)
delta-tau = (nu0 / lambda0) ln(lambdaP / lambdaF)

where
delta-mu: expected number of additional failures to be experienced to reach the failure intensity objective
delta-tau: additional execution time required to reach the failure intensity objective
lambdaP: present failure intensity
lambdaF: failure intensity objective

delta-mu and delta-tau are very interesting metrics for knowing the additional time and additional failures required to achieve a failure intensity objective.

Example 10.1: A program will experience 100 failures in infinite time. It has now experienced 50 failures. The initial failure intensity is 10 failures/hour. Use the basic execution time model for the following:
(i) Find the present failure intensity.
(ii) Calculate the decrement of failure intensity per failure.
(iii) Determine the failures experienced and the failure intensity after 10 and 50 hours of execution.
(iv) Find the additional failures and additional execution time needed to reach the failure intensity objective of 2 failures/hour.

Solution:
(a) The present failure intensity is
lambda = lambda0 (1 - mu / nu0) = 10 (1 - 50/100) = 5 failures/hour.
(b) The decrement of failure intensity per failure is
d(lambda)/d(mu) = -lambda0 / nu0 = -10/100 = -0.1 per failure.
(c) Failures experienced and failure intensity after 10 and 50 hours of execution:
(i) After 10 hours of execution:
mu(10) = 100 [1 - exp(-10 x 10 / 100)] = 100 (1 - e^-1) = 63.2 failures
lambda(10) = 10 exp(-1) = 3.68 failures/hour
(ii) After 50 hours of execution:
mu(50) = 100 (1 - e^-5) = 99.3 failures
lambda(50) = 10 e^-5 = 0.067 failures/hour
(d) With a failure intensity objective of 2 failures/hour:
delta-mu = (100/10)(5 - 2) = 30 failures
delta-tau = (100/10) ln(5/2) = 9.16 hours

(ii) Logarithmic Poisson Execution Time Model

With a slight modification of the failure intensity function, Musa presented the logarithmic Poisson execution time model. The failure intensity function is given as:

lambda(mu) = lambda0 exp(-theta mu)

where theta is the failure intensity decay parameter, which represents the relative change of failure intensity per failure experienced. The slope of the failure intensity is:

d(lambda)/d(mu) = -theta lambda(mu)

The expected number of failures for this model is always infinite at infinite time. The relation for mean failures experienced is:

mu(tau) = (1/theta) ln(lambda0 theta tau + 1)

The expression for failure intensity with respect to time is:

lambda(tau) = lambda0 / (lambda0 theta tau + 1)

The relationships for the additional number of failures and additional execution time are:

delta-mu = (1/theta) ln(lambdaP / lambdaF)
delta-tau = (1/theta) (1/lambdaF - 1/lambdaP)

When the execution time is large, the logarithmic Poisson model may give larger values of failure intensity than the basic model.

Example 10.2: The initial failure intensity of a program is 10 failures/hour. The program has experienced 50 failures. The failure intensity decay parameter is 0.01/failure. Use the logarithmic Poisson execution time model for the following:
(a) Find the present failure intensity.
(b) Calculate the decrement of failure intensity per failure.
(c) Determine the failures experienced and the failure intensity after 10 and 50 hours of execution.
(d) Find the additional failures and additional execution time needed to reach the failure intensity objective of 2 failures/hour.

Solution:
(a) With mu = 50 failures and theta = 0.01/failure, the present failure intensity is
lambda = lambda0 exp(-theta mu) = 10 exp(-0.01 x 50) = 6.06 failures/hour.
(b) The decrement of failure intensity per failure is
d(lambda)/d(mu) = -theta lambda = -0.01 x 6.06 = -0.06 per failure.
(c) Failures experienced and failure intensity after 10 and 50 hours of execution:
(i) After 10 hours of execution:
mu(10) = (1/0.01) ln(10 x 0.01 x 10 + 1) = 100 ln 2 = 69.3 failures
lambda(10) = 10 / (10 x 0.01 x 10 + 1) = 5 failures/hour
(ii) After 50 hours of execution:
mu(50) = 100 ln 6 = 179.2 failures
lambda(50) = 10/6 = 1.67 failures/hour
(d) With a failure intensity objective of 2 failures/hour:
delta-mu = (1/0.01) ln(6.06/2) = 111 failures
delta-tau = (1/0.01)(1/2 - 1/6.06) = 33.5 hours
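The two Musa models can be coded directly from the relationships above. The sketch below is illustrative; it implements the basic and logarithmic Poisson execution-time models and reproduces the figures of Examples 10.1 and 10.2.

import math

# Basic execution time model (Musa).
def basic(lam0, nu0):
    lam_mu = lambda mu: lam0 * (1 - mu / nu0)                   # intensity vs failures
    mu_tau = lambda tau: nu0 * (1 - math.exp(-lam0 * tau / nu0))
    extra_failures = lambda lam_p, lam_f: (nu0 / lam0) * (lam_p - lam_f)
    extra_time = lambda lam_p, lam_f: (nu0 / lam0) * math.log(lam_p / lam_f)
    return lam_mu, mu_tau, extra_failures, extra_time

lam_mu, mu_tau, dmu, dtau = basic(lam0=10, nu0=100)
print(lam_mu(50))               # 5.0 failures/hour            (Example 10.1, part i)
print(mu_tau(10), mu_tau(50))   # ~63.2 and ~99.3 failures     (Example 10.1, part iii)
print(dmu(5, 2), dtau(5, 2))    # 30 failures, ~9.16 hours     (Example 10.1, part iv)

# Logarithmic Poisson execution time model.
def log_poisson(lam0, theta):
    lam_mu = lambda mu: lam0 * math.exp(-theta * mu)
    mu_tau = lambda tau: (1 / theta) * math.log(lam0 * theta * tau + 1)
    extra_failures = lambda lam_p, lam_f: (1 / theta) * math.log(lam_p / lam_f)
    extra_time = lambda lam_p, lam_f: (1 / theta) * (1 / lam_f - 1 / lam_p)
    return lam_mu, mu_tau, extra_failures, extra_time

lam_mu, mu_tau, dmu, dtau = log_poisson(lam0=10, theta=0.01)
print(lam_mu(50))               # ~6.06 failures/hour          (Example 10.2, part a)
print(mu_tau(10), mu_tau(50))   # ~69.3 and ~179.2 failures    (Example 10.2, part c)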
(iii) The Jelinski-Moranda Model

The Jelinski-Moranda model [JELI72] is the earliest and simplest software reliability model. It proposed a failure intensity function of the form

lambda(ti) = phi (N - i + 1)

where
phi = constant of proportionality
N = total number of errors present
i = number of errors found by time interval ti

This model assumes that all failures have the same failure rate. It means that the failure rate is a step function, and there will be an improvement in reliability after fixing an error. Hence, every failure contributes equally to the overall reliability. Here, failure intensity is directly proportional to the number of errors remaining in the software.

Once we know the value of the failure intensity function using any reliability model, we may calculate reliability using the equation given below:

R(t) = exp(-lambda t)

where lambda is the failure intensity and t is the operating time. The lower the failure intensity, the higher the reliability, and vice versa.

Example 10.3: A program may experience 200 failures in an infinite time of testing. It has experienced 100 failures. Assume the constant of proportionality is 0.02. Use the Jelinski-Moranda model to calculate the failure intensity now and after the experience of 150 failures.

Solution:
Total expected number of failures (N) = 200
Failures experienced (i) = 100
Constant of proportionality (phi) = 0.02
We know
lambda = phi (N - i + 1) = 0.02 (200 - 100 + 1) = 2.02 failures/hour
After 150 failures:
lambda = 0.02 (200 - 150 + 1) = 1.02 failures/hour
Failure intensity will decrease with every additional failure experienced.
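The Jelinski-Moranda intensity and the reliability expression R(t) = exp(-lambda t) are equally easy to evaluate. The sketch below is illustrative; it reproduces Example 10.3 and shows the corresponding reliability for an assumed one-hour operating time.

import math

def jelinski_moranda(phi, N, i):
    """Failure intensity after i failures: lambda = phi * (N - i + 1)."""
    return phi * (N - i + 1)

def reliability(lam, t):
    """Probability of failure-free operation for time t at failure intensity lam."""
    return math.exp(-lam * t)

lam_100 = jelinski_moranda(phi=0.02, N=200, i=100)   # 2.02 failures/hour
lam_150 = jelinski_moranda(phi=0.02, N=200, i=150)   # 1.02 failures/hour
print(lam_100, lam_150)
print(reliability(lam_150, t=1))   # ~0.36 for one hour of operation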
10.5.2 An example of a fault prediction model in practice

It is clear that software metrics can be used to capture the quality of object oriented design and code. These metrics provide ways to evaluate the quality of software, and their use in earlier phases of software development can help organizations in assessing a large software development quickly, at a low cost. To obtain help in planning and executing testing by focusing resources on the fault-prone parts of the design and code, a model that predicts faulty classes should be used. The fault prediction model can also be used to identify classes that are prone to have severe faults. One can use this model with respect to high severity of faults to focus the testing on those parts of the system that are likely to cause serious failures. In this section, we describe models used to find the relationship between object oriented metrics and fault proneness, and how such models can be of great help in planning and executing testing activities [MALH09, SING10].

In order to perform the analysis we used the public domain KC1 NASA data set [NASA04]. The data set is available on www.mdp.ivv.nasa.gov. The 145 classes in this data set were developed using the C++ language. The goal of our analysis is to explore empirically the relationship between object oriented metrics and fault proneness at the class level. Therefore, fault proneness is the binary dependent variable and the object oriented metrics (namely WMC, CBO, RFC, LCOM, DIT, NOC and SLOC) are the independent variables. Fault proneness is defined as the probability of fault detection in a class. We first associated defects with each class according to their severities. The value of severity quantifies the impact of the defect on the overall environment, with 1 being most severe and 5 being least severe. Faults with severity rating 1 were classified as high severity faults, faults with severity rating 2 as medium severity faults, and faults with severity ratings 3, 4 and 5 as low severity faults, since at severity rating 4 no class was found to be faulty and at severity rating 5 only one class was faulty. Table 10.7 summarizes the distribution of faults and faulty classes at the high, medium and low severity levels in the KC1 NASA data set after preprocessing of the faults.

Table 10.7: Distribution of Faults and Faulty Classes at High, Medium and Low Severity Levels
Level of severity   Number of faulty classes   % of faulty classes   Number of faults   % distribution of faults
High                23                         15.56                 48                 7.47
Medium              58                         40.00                 449                69.93
Low                 39                         26.90                 145                22.59

The min, max, mean, median, standard deviation, 25% quartile and 75% quartile for all metrics in the analysis are shown in table 10.8.

Table 10.8: Descriptive Statistics for metrics
Metric   Min.   Max.   Mean     Median   Std. Dev.   Percentile (25%)   Percentile (75%)
CBO      0      24     8.32     8        6.38        3                  14
LCOM     0      100    68.72    84       36.89       56.5               96
NOC      0      5      0.21     0        0.7         0                  0
RFC      0      222    34.38    28       36.2        10                 44.5
WMC      0      100    17.42    12       17.45       8                  22
LOC      0      2313   211.25   108      345.55      8                  235.5
DIT      0      6      1        1        1.26        0                  1.5

The low values of DIT and NOC indicate that inheritance is not much used in the system. The LCOM metric has high values. Table 10.9 shows the correlations among the metrics, which is an important statistic.

Table 10.9: Correlations among Metrics
Metric   CBO      LCOM     NOC      RFC      WMC      LOC      DIT
CBO      1
LCOM     0.256    1
NOC      -0.03    -0.028   1
RFC      0.386    0.334    -0.049   1
WMC      0.245    0.318    0.035    0.628    1
LOC      0.572    0.238    -0.039   0.508    0.624    1
DIT      0.4692   0.256    -0.031   0.654    0.136    0.345    1

Several of the correlation coefficients are significant at the 0.01 level: WMC, LOC and DIT are correlated with RFC, and WMC and CBO are correlated with LOC. Therefore, these metrics are not totally independent and represent redundant information.

The next step of our analysis found the combined effect of object oriented metrics on the fault proneness of classes at various severity levels. We obtained four multivariate fault prediction models using the logistic regression (LR) method: the first for high severity faults, the second for medium severity faults, the third for low severity faults and the fourth for ungraded severity faults. In a multivariate logistic regression model, the coefficient and the significance level of an independent variable represent the net effect of that variable on the dependent variable, in our case fault proneness. Tables 10.10, 10.11, 10.12 and 10.13 provide the coefficient (B), standard error (SE), statistical significance (sig) and odds ratio (exp(B)) for the metrics included in each model. Two metrics, CBO and SLOC, were included in the multivariate model for predicting high severity faults. CBO, LCOM, NOC and SLOC were included in the multivariate model for predicting medium severity faults. Four metrics, CBO, WMC, RFC and SLOC, were included in the model for low severity faults. Similarly, CBO, LCOM, NOC, RFC and SLOC were included in the ungraded severity model.

Table 10.10: High severity faults model statistics
Metric     B        S.E.    Sig.    Exp(B)
CBO        0.102    0.033   0.002   1.107
SLOC       0.001    0.001   0.007   1.001
Constant   -2.541   0.402   0.000   0.079

Table 10.11: Medium severity faults model statistics
Metric     B        S.E.    Sig.     Exp(B)
CBO        0.190    0.038   0.0001   1.209
LCOM       -0.011   0.004   0.009    0.989
NOC        -1.070   0.320   0.001    0.343
SLOC       0.004    0.002   0.006    1.004
Constant   -0.307   0.340   0.367    0.736

Table 10.12: Low severity faults model statistics
Metric     B        S.E.    Sig.    Exp(B)
CBO        0.167    0.041   0.001   1.137
RFC        -0.034   0.010   0.001   0.971
WMC        0.047    0.018   0.028   1.039
SLOC       0.003    0.001   0.001   1.003
Constant   -1.447   0.371   0.005   0.354
Table 10.13: Ungraded severity faults model statistics
Metric     B        S.E.    Sig.     Exp(B)
CBO        0.195    0.040   0.0001   1.216
LCOM       -0.010   0.004   0.007    0.990
NOC        -0.749   0.199   0.0001   0.473
RFC        -0.016   0.006   0.006    0.984
SLOC       0.007    0.002   0.0001   1.007
Constant   0.134    0.326   0.680    1.144

To validate our findings we performed 10-cross validation of all the models. For the 10-cross validation, the classes were randomly divided into parts of approximately equal size (14 partitions of ten data points each and one partition of five data points). The performance of binary prediction models is typically evaluated using a confusion matrix (see table 10.14). In order to validate the findings of our analysis, we used the commonly used evaluation measures sensitivity, specificity, completeness, precision and ROC analysis.

Table 10.14: Confusion matrix
Observed \ Predicted       1.00 (Fault-Prone)        0.00 (Not Fault-Prone)
1.00 (Fault-Prone)         True Fault Prone (TFP)    False Not Fault Prone (FNFP)
0.00 (Not Fault-Prone)     False Fault Prone (FFP)   True Not Fault Prone (TNFP)

Precision: the ratio of the number of classes correctly predicted to the total number of classes, i.e. (TFP + TNFP) / (TFP + FNFP + FFP + TNFP).

Sensitivity: the ratio of the number of classes correctly predicted as fault prone to the total number of classes that are actually fault prone, i.e. TFP / (TFP + FNFP).

Completeness: the number of faults in classes classified as fault-prone, divided by the total number of faults in the system.

Receiver Operating Characteristic (ROC) curve: the performance of the outputs of the predicted models was evaluated using ROC analysis. The ROC curve, defined as a plot of sensitivity on the y-coordinate versus 1 - specificity on the x-coordinate, is an effective method of evaluating the quality or performance of predicted models [EMAM99]. While constructing ROC curves, we selected many cutoff points between 0 and 1 and calculated sensitivity and specificity at each cutoff point. The optimal cutoff point (the one that maximizes both sensitivity and specificity) can be selected from the ROC curve [EMAM99]. Hence, by using the ROC curve one can easily determine the optimal cutoff point for a model. The Area Under the ROC Curve (AUC) is a combined measure of sensitivity and specificity; in order to compute the accuracy of the predicted models, we use the area under the ROC curve.

We summarize the results of cross validation of the models predicted via the LR approach in table 10.15.

Table 10.15: Results of 10-cross validation of models
Model       Cutoff   Sensitivity   Specificity   Precision   Completeness   AUC     SE
Model I     0.25     64.60         66.40         66.21       59.81          0.686   0.044
Model II    0.77     70.70         66.70         68.29       74.14          0.754   0.041
Model III   0.49     61.50         64.20         63.48       71.96          0.664   0.053
Model IV    0.37     69.50         68.60         68.96       79.59          0.753   0.039

The ROC curves for the LR model with respect to the high, medium, low and ungraded severity of faults are shown in figure 10.4. Based on the findings of this analysis, one can use the SLOC and CBO metrics in the earlier phases of software development to measure the quality of the system and to predict which classes with higher severity faults need extra attention. This can help management focus resources on those classes that are likely to cause serious failures. Also, if required, developers can reconsider the design and take corrective actions. The models predicted in this section could be of great help in planning and executing testing activities. For example, if one has the resources available to inspect 26 percent of the code, one should test the 26 percent of classes predicted to have more severe faults. If these classes are selected for testing, one can expect most of the severe faults to be covered.
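To make the use of such models concrete, the sketch below is an illustrative example (not part of the original study). It applies the high-severity model of table 10.10 (logit = -2.541 + 0.102 CBO + 0.001 SLOC) to a hypothetical class, and computes precision and sensitivity from assumed confusion-matrix counts, following the definitions above.

import math

def fault_proneness_high_severity(cbo, sloc):
    """Logistic model with coefficients taken from table 10.10 (high severity faults)."""
    logit = -2.541 + 0.102 * cbo + 0.001 * sloc
    return 1.0 / (1.0 + math.exp(-logit))

# A hypothetical class with CBO = 14 and SLOC = 500.
p = fault_proneness_high_severity(cbo=14, sloc=500)
print(f"predicted probability of a high-severity fault: {p:.2f}")   # ~0.35
# With a cutoff of 0.25 (table 10.15, Model I) this class would be flagged fault-prone.

# Evaluation measures from the confusion matrix of table 10.14 (assumed counts).
TFP, FNFP, FFP, TNFP = 40, 18, 20, 67
precision = (TFP + TNFP) / (TFP + FNFP + FFP + TNFP)   # correctly predicted classes
sensitivity = TFP / (TFP + FNFP)                       # fault-prone classes caught
specificity = TNFP / (TNFP + FFP)
print(precision, sensitivity, specificity)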
There are many schools of thought about the usefulness and applications of software metrics. However, every school of thought accepts the old quote of software engineering: "You cannot improve what you cannot measure; and you cannot control what you cannot measure." In order to control and improve various activities, we should have something to measure those activities. This 'something' differs from one school of thought to another. Despite the different views, most of us feel that software metrics help to improve productivity and quality. Software process metrics frameworks such as the Capability Maturity Model for software (CMM-SW) and ISO 9001 are widely used, and many organizations are putting serious effort into implementing them.

10.5.3 Maintenance effort prediction model

The cost of software maintenance is increasing day by day. Development may take two to three years, but the same software may have to be maintained for another ten or more years. Hence, maintenance effort is becoming an important factor for software developers. The obvious question is: how should we estimate maintenance effort in the early phases of the software development life cycle? The estimate may help us to calculate the cost of software maintenance, which a customer may like to know as early as possible in order to plan the costing of the project.

Maintenance effort is defined as the number of lines of source code added or changed during the maintenance phase. A model has been used to predict maintenance effort using an Artificial Neural Network (ANN) [AGGA06, MALH09]. This is a simple model and its predictions are quite realistic. In this model, maintenance effort is used as the dependent variable. The independent variables are eight object oriented metrics, namely WMC, CBO, RFC, LCOM, DIT, NOC, DAC and NOM. The model is trained and tested on two commercial software products, a User Interface Management System (UIMS) and a Quality Evaluation System (QUES), which are presented in [LI93]. The UIMS system consists of 39 classes and the QUES system consists of 71 classes.

The ANN used in the model belongs to the family of multilayer feed-forward networks and is referred to as an M-H-Q network, with M source nodes, H nodes in the hidden layer and Q nodes in the output layer. The input nodes are connected to every node of the hidden layer but are not directly connected to the output node. Thus, the network does not have any lateral or shortcut connections. The ANN repetitively adjusts the weights so that the difference between the desired output and the actual output of the network is minimized. The network learns by finding a vector of connection weights that minimizes the sum of squared errors on the training data set. The summary of the ANN used in the model for predicting maintenance effort is shown in table 10.16.

Table 10.16: ANN Summary
Architecture
  Layers              3
  Input units         8
  Hidden units        9
  Output units        1
Training
  Transfer function   Tansig
  Algorithm           Back propagation
  Training function   TrainBR

The ANN was trained by the standard error back propagation algorithm at a learning rate of 0.005, with the minimum squared error as the training stopping criterion. The main measure used for evaluating model performance is the Mean Absolute Relative Error (MARE).
The main measure used for evaluating model performance is the Mean Absolute Relative Error (MARE), which is the preferred error measure of software measurement researchers and is calculated as follows [FINN96]:

    MARE = (1/n) * Σ_{i=1..n} |estimate_i - actual_i| / actual_i

where:
estimate is the predicted output of the network for each observation
actual is the observed value for that observation
n is the number of observations

To establish whether the models are biased and tend to over- or under-estimate, the Mean Relative Error (MRE) is calculated as follows [FINN96]:

    MRE = (1/n) * Σ_{i=1..n} (estimate_i - actual_i) / actual_i

We use the following steps in model prediction:
1. The input metrics are normalized using min-max normalization. Min-max normalization performs a linear transformation on the original data. Suppose that minA and maxA are the minimum and maximum values of an attribute A. It maps a value v of A to a value v' in the range 0 to 1 using the formula (a small numeric sketch of this normalization and of the MARE and MRE measures is given at the end of this subsection):

    v' = (v - minA) / (maxA - minA)

2. Perform principal components (P.C.) analysis on the normalized metrics to produce the domain metrics.
3. Divide the data into training, test and validation sets using a 3:1:1 ratio.
4. Develop the ANN model based on the training and test data sets.
5. Apply the ANN model to the validation data set in order to evaluate the accuracy of the model.

The P.C. extraction analysis and the varimax rotation method are applied on all the metrics. The rotated component matrix is given in table 10.17, which shows the relationship between the original object oriented metrics and the domain metrics. The values above 0.7 (shown in bold in table 10.17) are the metrics used to interpret the PCs. For each PC, we also provide its eigenvalue, variance percent and cumulative percent. The interpretations of the PCs are as follows:
* P1: DAC, LCOM, NOM, RFC and WMC are cohesion, coupling and size metrics, so this dimension combines size, coupling and cohesion. It shows that there are classes with many internal methods (methods defined in the class) and external methods (methods called by the class), which means that cohesion and coupling are related to the number of methods and attributes in the class.
* P2: MPC is a coupling metric that counts the number of send statements defined in a class.
* P3: NOC and DIT are inheritance metrics that count the number of children and the depth of the inheritance tree of a class.

P.C.           P1       P2       P3
Eigenvalue     3.74     1.41     1.14
Variance %     46.76    17.64    14.30
Cumulative %   46.76    64.40    78.71
DAC            0.796    0.016    0.065
DIT           -0.016   -0.220   -0.85
LCOM           0.820   -0.057   -0.079
MPC            0.094    0.937    0.017
NOC            0.093   -0.445    0.714
NOM            0.967   -0.017    0.049
RFC            0.815    0.509   -0.003
WMC            0.802    0.206    0.184
Table 10.17: Rotated principal components

We employed the ANN technique to predict the maintenance effort of the classes. The inputs to the network were the domain metrics P1, P2, and P3. The network was trained using the back propagation algorithm. Table 10.16 shows the best architecture, which was determined experimentally. The model is trained on the training and test data sets and evaluated on the validation data set. Table 10.18 shows the MARE, MRE, r and p-value results of the ANN model evaluated on the validation data. The correlation between the predicted change and the observed change is represented by the coefficient of correlation (r). The significance level of a validation is indicated by a p-value; a commonly accepted p-value is 0.05.

MARE       0.265
MRE        0.09
r          0.582
p-value    0.004
Table 10.18: Validation results of ANN model

ARE Range   Percent
0-10%       50
11-27%      9.09
28-43%      18.18
44%         22.72
Table 10.19: Analysis of model evaluation accuracy

Table 10.19 shows, for the validation data set, the percentage of observations whose absolute relative error (ARE) falls in each range. We conclude that the predictions made by the model are valid for the population.
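The following small numpy sketch (the metric and effort values are invented for illustration) shows the min-max normalization of step 1 and the MARE and MRE measures defined above.

import numpy as np

def min_max(v):
    # maps each value of an attribute into the range 0 to 1
    return (v - v.min()) / (v.max() - v.min())

def mare(actual, estimate):
    # mean absolute relative error
    return np.mean(np.abs(estimate - actual) / actual)

def mre(actual, estimate):
    # mean relative error: a positive value indicates over-estimation on average
    return np.mean((estimate - actual) / actual)

wmc = np.array([5.0, 12.0, 3.0, 20.0, 8.0])        # hypothetical raw WMC values
print("normalized WMC:", min_max(wmc))

actual   = np.array([40.0, 55.0, 120.0, 80.0])     # hypothetical observed maintenance effort
estimate = np.array([35.0, 60.0, 100.0, 90.0])     # hypothetical predicted maintenance effort
print("MARE:", mare(actual, estimate))
print("MRE :", mre(actual, estimate))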
Figure 10.5 plots the predicted number of lines added or changed versus the actual number of lines added or changed.

Software testing metrics are one part of metrics studies and focus on the testing issues of processes and products. Test suite effectiveness, source code coverage, defect density and review efficiency are some of the popular testing metrics. Testing efficiency may also be calculated as the size of the software tested divided by the resources used. We should also have metrics that provide immediate, real-time feedback to testers and the project manager on the quality of testing during each test phase, rather than waiting until the release of the software.

MULTIPLE CHOICE QUESTIONS

Note: Select the most appropriate answer for each of the following questions.

10.1 One fault may lead to: (a) One failure (b) Two failures (c) Many failures (d) All of the above
10.2 Failure occurrences can be represented as: (a) Time to failure (b) Time interval between failures (c) Failures experienced in a time interval (d) All of the above
10.3 What is the maximum value of reliability? (a) 0 (b) 1 (c) 100 (d) None of the above
10.4 What is the minimum value of reliability? (a) 0 (b) 1 (c) 100 (d) None of the above
10.5 As the failure intensity decreases, reliability: (a) Increases (b) Decreases (c) No effect (d) None of the above
10.6 Basic and logarithmic execution time models were developed by: (a) Victor Basili (b) J.D. Musa (c) R. Binder (d) B. Littlewood
10.7 Which is not a cohesion metric? (a) Lack of cohesion in methods (b) Tight class cohesion (c) Response for a class (d) Information flow cohesion
10.8 Which is not a size metric? (a) Number of attributes per class (b) Number of methods per class (c) Number of children (d) Weighted methods per class
10.9 Choose an inheritance metric: (a) Number of children (b) Response for a class (c) Number of methods per class (d) Message passing coupling
10.10 Which is not a coupling metric? (a) Coupling between objects (b) Data abstraction coupling (c) Message passing coupling (d) Number of children
10.11 What can be measured with respect to time during testing? (a) Time available for testing (b) Time to failure (c) Time interval between failures (d) All of the above
10.12 NHPP stands for: (a) Non-homogeneous Poisson process (b) Non-heterogeneous Poisson process (c) Non-homogeneous programming process (d) Non-heterogeneous programming process
10.13 Which is not a test process metric? (a) Number of test cases designed (b) Number of test cases executed (c) Number of failures experienced in a time interval (d) Number of test cases failed
10.14 Which is not a test product metric? (a) Time interval between failures (b) Time to failure (c) Estimated time for testing (d) Test case execution time
10.15 Testability is dependent on: (a) Characteristics of the representation (b) Characteristics of the implementation (c) Built-in test capabilities (d) All of the above
10.16 Which of the following is true?
(a) Testability is inversely proportional to complexity (b) Testability is directly proportional to complexity (c) Testability is equal to complexity (d) None of the above
10.17 Cyclomatic complexity of code provides: (a) An upper limit for the number of test cases needed for the code coverage criterion (b) A lower limit for the number of test cases needed for the code coverage criterion (c) A direction for testing (d) None of the above
10.18 The higher the cyclomatic complexity: (a) The more the testing effort (b) The less the testing effort (c) The testing effort is infinite (d) None of the above
10.19 Which is not an object oriented metric given by Chidamber and Kemerer? (a) Coupling between objects (b) Lack of cohesion (c) Response for a class (d) Number of branches in a tree
10.20 Precision is defined as: (a) The ratio of the number of classes correctly predicted to the total number of classes (b) The ratio of the number of classes wrongly predicted to the total number of classes (c) The ratio of the total number of classes to the classes wrongly predicted (d) None of the above
10.21 Sensitivity is defined as: (a) The ratio of the number of classes correctly predicted as fault prone to the total number of classes (b) The ratio of the number of classes correctly predicted as fault prone to the total number of classes that are actually fault prone (c) The ratio of faulty classes to the total number of classes (d) None of the above
10.22 Reliability is measured with respect to: (a) Effort (b) Time (c) Faults (d) Failures
10.23 Choose an event where a Poisson process is not used: (a) Number of users using a website in a given time interval (b) Number of persons requesting railway tickets in a given period of time (c) Number of students in a class (d) Number of e-mails expected in a given period of time
10.24 Choose a data structure metric: (a) Number of live variables (b) Variable span (c) Module weakness (d) All of the above
10.25 Software testing metrics are used to measure: (a) Progress of testing (b) Reliability of software (c) Time spent during testing (d) All of the above

Exercises

10.1 What is a software metric? Why do we really need metrics in software? Discuss the areas of application and the problems faced during the implementation of metrics.
10.2 Define the following terms: (a) Measure (b) Measurement (c) Metrics
10.3 (a) What should we measure during testing? (b) Discuss the things which can be measured with respect to time. (c) Explain any reliability model where the emphasis is on failures rather than faults.
10.4 Describe the following metrics: (a) Quality of source code (b) Source code coverage (c) Test case defect density (d) Review efficiency
10.5 (a) What metrics are required to be captured during testing? (b) Identify some test process metrics. (c) Identify some test product metrics.
10.6 What is the relationship between testability and complexity? Discuss the factors which affect software testability.
10.7 Explain the software fault prediction model. List the metrics used in the analysis of the model. Define precision, sensitivity and completeness. What is the purpose of using the receiver operating characteristic (ROC) curve?
10.8 Discuss the basic model of software reliability. How can we calculate and?
10.9 Explain the logarithmic Poisson model and find the values of and .
10.10 Assume that the initial failure intensity is 20 failures / CPU hr. The failure intensity decay parameter is 0.05 / failure. We assume that 50 failures have been experienced. Calculate the current failure intensity.
10.11 Assume that the initial failure intensity is 5 failures / CPU hr. The failure intensity decay parameter is 0.01 / failure. We have experienced 25 failures up to this time. Find the failures experienced and the failure intensity after 30 and 60 CPU hrs of execution.
10.12 Explain the Jelinski-Moranda model of reliability theory. What is the relationship between and t?
10.13 A program is expected to have 500 faults. The assumption is that one fault may lead to one failure. The initial failure intensity is 10 failures / CPU hr. The program is released with a failure intensity objective of 6 failures / CPU hr. Calculate the number of failures experienced before release.
10.14 Assume that a program will experience 200 failures in infinite time. It has now experienced 100 failures. The initial failure intensity was 10 failures / CPU hr. (a) Determine the present failure intensity. (b) Calculate the failures experienced and the failure intensity after 50 and 100 CPU hrs of execution. Use the basic execution time model for the above calculations.
10.15 What is software reliability? Does it exist? Describe the following terms: (i) MTBF (ii) MTTF (iii) Failure intensity (iv) Failures experienced in a time interval
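As an illustration of the kind of calculation exercise 10.14 calls for, here is a small, hypothetical Python sketch. It assumes the usual basic execution time model relations, lambda(mu) = lambda0 * (1 - mu/nu0), mu(tau) = nu0 * (1 - exp(-lambda0*tau/nu0)) and lambda(tau) = lambda0 * exp(-lambda0*tau/nu0), with execution time tau measured from the start of testing; check these forms against the ones given earlier in the chapter before relying on the numbers.

import math

lambda0 = 10.0   # initial failure intensity (failures / CPU hr)
nu0 = 200.0      # total failures expected in infinite time
mu_now = 100.0   # failures experienced so far

# (a) present failure intensity: lambda(mu) = lambda0 * (1 - mu / nu0)
print("present failure intensity:", lambda0 * (1 - mu_now / nu0), "failures / CPU hr")

# (b) failures experienced and failure intensity after 50 and 100 CPU hrs of execution
for tau in (50.0, 100.0):
    mu = nu0 * (1 - math.exp(-lambda0 * tau / nu0))   # expected failures by time tau
    lam = lambda0 * math.exp(-lambda0 * tau / nu0)    # failure intensity at time tau
    print(f"tau = {tau:.0f} CPU hrs: mu = {mu:.1f} failures, lambda = {lam:.3f} failures / CPU hr")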