DINESH CHOUDHARY1*, VIJAY KUMAR2*
1Department of Computer Science & Engineering, Kautilya Institute of Technology & Engineering and School of Management, Jaipur, India
2Department of Computer Science & Engineering, Stani Memorial College of Engineering & Technology, Jaipur, India
*Corresponding author: vijay_matwa@yahoo.com
Received: - Accepted: - Published: 01-11-2011
Volume: 1 Issue: 1 Pages: 1-9
J Comput Simulat Model 1.1 (2011): 1-9
Software testing is an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test, with respect to the context in which it is intended to operate. This includes, but is not limited to, executing a program or application with the intent of finding software bugs.

Testing can never completely establish the correctness of computer software. Instead, it furnishes a criticism or comparison of the state and behavior of the product against oracles: principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

Over its existence, computer software has continued to grow in complexity and size, and every software product has a target audience: the audience for video game software, for example, is completely different from that for banking software. When an organization develops or otherwise invests in a software product, it must therefore assess whether the product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment. A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually; more than a third of this cost could be avoided if better software testing were performed.

A primary purpose of testing is to detect software failures so that defects may be uncovered and corrected. This is a non-trivial pursuit: testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions.
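The oracle idea above can be made concrete with a small unit test. The sketch below is illustrative only: `absolute_value` and its one-line specification are hypothetical stand-ins, and each assertion compares the product's observed behavior against that specification oracle for one specific input, which is exactly why a passing suite cannot prove correctness for all inputs.

```python
import unittest

# Hypothetical function under test; its specification (our oracle)
# says absolute_value(x) returns x for x >= 0 and -x otherwise.
def absolute_value(x):
    return x if x >= 0 else -x

class AbsoluteValueTest(unittest.TestCase):
    # Each test compares observed behavior against the
    # specification oracle for one specific input.
    def test_positive_input(self):
        self.assertEqual(absolute_value(5), 5)

    def test_negative_input(self):
        self.assertEqual(absolute_value(-3), 3)

    def test_zero_boundary(self):
        self.assertEqual(absolute_value(0), 0)

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AbsoluteValueTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Even if all three tests pass, the suite has only established correct behavior at three points of the input space, not under all conditions.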
The scope of software testing often includes static examination of code as well as execution of that code in various environments and conditions, addressing two distinct questions: does the code do what it is supposed to do, and does it do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team, with various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
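One way the "past versions of the same product" oracle is exercised in practice is differential regression testing: run the old and new implementations on the same inputs and flag any divergence. The sketch below is illustrative; `sort_v1` and `sort_v2` are hypothetical stand-ins for a prior release and the version under test, and a disagreement only tells us that the two differ, not which one is wrong.

```python
import random

def sort_v1(items):
    # Prior release: serves as the oracle.
    return sorted(items)

def sort_v2(items):
    # New version under test (here, a reimplementation).
    result = list(items)
    result.sort()
    return result

def differential_test(trials=100, seed=42):
    # Compare the two versions on randomly generated inputs;
    # any divergence signals a defect in one of them.
    rng = random.Random(seed)
    for _ in range(trials):
        data = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        assert sort_v2(data) == sort_v1(data), f"divergence on {data}"
    return trials
```

Note that even 100 agreeing trials demonstrate only the absence of divergence on those specific inputs, echoing the point above that testing establishes failure under specific conditions rather than correctness under all.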