cs562 Notes Feb. 14, 2014

Defects in handling stress and unusual situations

Heavy usage in a system
    Insufficient response time or throughput on minimal configurations
        Think slowest CPU, least memory, least disk space, oldest OS
        It is a defect if the system doesn't meet requirements on such a configuration
        Be sure to test production software on minimally configured platforms

Incompatibility with HW/SW configurations
    Think problems with OSes, graphics cards, external libraries, browsers, etc.
    Test on different platforms; be sure to try lots of them

Handling peak loads
    Running short on memory, disk space, or network bandwidth under a heavy load could be considered a defect
    The system should handle such situations gracefully
    Examples at higher levels too: lots of patients in a hospital (pandemic), lots of users accessing a website (Black Friday)
    Generate heavy loads to test systems against known possibilities

Inappropriate resource management
    Release resources when you no longer need them
        Memory leaks
        Memory overuse
        File locks
        etc.
    Programs should be able to recover from a crash
        Users expect this now, even if they didn't save their files

Formal test cases
-Test cases are an explicit set of instructions designed to expose a particular class of defect in a system. May be a group of tests.
-Identify and classify them: each should be numbered, have a title, indicate the system, subsystem, or module, and include a reference to the design/requirements documentation.
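The resource-management point above can be sketched in Python (a minimal illustration, not from the lecture; the function names are hypothetical). The `with` statement guarantees the file handle is released as soon as it is no longer needed, even if an exception occurs, which is the pattern that avoids leaked handles and lingering file locks:

```python
def count_lines(path):
    # Resource is acquired here and released automatically when the
    # "with" block exits, even on an exception -- no leaked handle.
    with open(path) as f:
        return sum(1 for _ in f)

def count_lines_leaky(path):
    # Leaky version for contrast: if sum() raised an exception,
    # close() would never run and the handle (and any lock on the
    # file) would stay held until garbage collection.
    f = open(path)
    n = sum(1 for _ in f)
    f.close()
    return n
```

The same idea appears in other languages as RAII (C++), try-with-resources (Java), or `defer` (Go): tie the release of the resource to a scope rather than relying on the programmer to remember it on every path.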
-Instructions: tell the tester how to run the test
-Expected result: tell the tester what should happen
-Cleanup: tell the tester how to return the system to a normal state

Could classify test cases by level of importance:
    Critical
    General
    Low priority

Both automated and manual testing are important with GUIs

Test-first development
    It is a good idea to develop test cases before you even start programming (part of design)
    Can drive the design process
    Leads to better programming styles
    Lets customers know what results to realistically expect
    Testing is not the same as a proof of correctness

Large system testing

Integration testing
    Test how the parts of a system work together
    Simplest form: big bang testing
        Put everything together and see if it works
        Leads to serious integration problems
    Incremental testing is better
        Test subsystems first, then combine them
        Horizontal: divide into sub-applications and test each separately
        Vertical: divide into layers
            Top down: start with the UI and simulate the lower parts with stubs
                Stubs use the same interface (function names) but return simulated results (e.g. hardcoded data instead of real functionality)
            Bottom up: start with the DB/network/other lower layers
            Sandwich: do both
    Problems when you need to go back
        Ripple effect of fixing errors
        Impact analysis
        Regression testing: go back and rerun all the tests on that subsystem
    When to stop testing??
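The top-down stub idea above can be sketched in Python (a minimal illustration; the class and function names are hypothetical, not from the lecture). The stub exposes the same interface as the real lower layer but returns hardcoded data, so the upper layer can be integration-tested before the real layer exists:

```python
class RealPatientDB:
    # The real lower layer -- not finished yet in top-down integration.
    def lookup(self, patient_id):
        raise NotImplementedError("talks to the real database")

class PatientDBStub:
    # Same interface (same method name and signature) as RealPatientDB,
    # but with simulated, hardcoded results instead of real functionality.
    def lookup(self, patient_id):
        return {"id": patient_id, "name": "Test Patient"}

def render_patient_banner(db, patient_id):
    # Upper (UI) layer under test; it depends only on the interface,
    # so it cannot tell the stub from the real database layer.
    record = db.lookup(patient_id)
    return f"Patient {record['id']}: {record['name']}"

# Top-down integration test: exercise the UI with the stub plugged in.
print(render_patient_banner(PatientDBStub(), 42))
```

When the real layer is finished, the stub is swapped out and the same tests are rerun against `RealPatientDB`, which is exactly the regression-testing step mentioned above.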