Reading List: Test (Pyramid / Unit / Integration / Acceptance etc.)

General QA
Pyramid / test categories
Oracles and Simulation
Fakes and Mocks
Test Data Mgmt.
Other Test Patterns
Agile testing
Exploratory Testing
Unit Testing
API Testing
Regression Testing
UI-specific testing
UI-specific testing: Selenium-specific
Performance testing
Server testing
Test Automation
Bug Mgmt
Lists

General QA

  • Mosaic: Peter Wilson: Ten Common Mistakes Companies Make Setting Up and Managing Software Quality Assurance Departments – this article advocates for QA driving not just QA activities but the overall process; the ten: 1) not properly defining objectives; 2) not properly defining QA’s responsibilities and staffing; 3) sr. mgmt. not understanding their responsibility for QA; 4) not holding the QA dept. accountable for project success; 5) assuming existing standards / processes are followed and are sufficient; 6) separating methodology responsibilities from review and enforcement; 7) not integrating measurement into the process; 8) ignoring, misunderstanding, or not communicating risk; 9) lack of mgmt. reporting from QA; 10) QA dept. positioned too low in the org
  • EvilTester.com: blog on Exploratory, Selenium, Technical testing etc.
  • Josiah Renaudin: Is the “Traditional Tester” Just a Myth? – most good testers already had at least some dev skills – the need for this has only increased with agile short cycles and the need for automated regression testing to support a build pipeline

Pyramid / test categories

Oracles and Simulation

Fakes and Mocks

Test Data Mgmt.

Other Test Patterns

Agile testing

  • Matt Heusser: Programmer / Tester Pair Programming
  • Martin Fowler & Manasi Kulkarni: Agile Fluency and Testing (1hr12m vid)
  • Michael Bolton: Drop the Crutches – “Test cases are formally structured, specific, proceduralized, explicit, documented, and largely confirmatory test ideas. And, often, excessively so”; “The idea that we could know entirely what the requirements are before we’ve discussed and decided we’re done seems like total hubris to me”; “Instead of overfocusing on test cases and worrying about completing them, focus on risk. Ask ‘How might some person suffer loss, harm, annoyance, or diminished value?’”
  • Shift Left but Get It First-Time Right: An Interview with Shankar Konda – “shift left is more about how we accelerate the development activity in conjunction with the testing processes”; “moving away from a shared services model like the test center of excellence to a more federated model of testing, where quality assurance and testing teams work collaboratively with the development teams”; “Automation which used to happen at the end of the testing lifecycle is now a thing of the past. Now we are talking about how progressive automation, or holistic automation across the lifecycle, can enable the development teams to accelerate the process to integrate”; “explore a test-driven development approach by integrating QA with the agile teams with early creation and execution of automation test scripts”; “gone are those days where the traditional model of testing was as a gatekeeper. In the good old days, you are trying to find a defect, and once you find a defect, you’re expecting to get it repaired, and then you try to retest that repair and to see if the defect doesn’t exist anymore—so traditionally, it was more of a quality control function… the market is not accepting that kind of methodology anymore… In the modern era, what is happening now is testing is not just testing anymore—it is more quality engineering now. It is more how quality can be engineered into the practices of the development lifecycle itself. In fact, for a couple of our customer engagements, TCS has redefined the roles of the quality assurance and testing professionals. They are now being referred to as “quality engineers” instead of quality analysts. They’re not any longer just testers because of the fact that they need to enable the other aspects of lifecycle development and become that part of the development team. They are not just part of the testing team anymore; they are part of the development team, working as quality engineers”; “expecting the quality team to perform an anchor role in getting things done. They don’t want them to be over the fence and telling other people that something is wrong. They want something to be anchored and help facilitate between the teams and get things done without raising a red flag, so the anchor role is now between the development teams, the business, and the operations”
  • Atlassian: Quality at Speed (30min video + transcribed Q&A on Atlassian’s “Quality Assistance” approach)
  • Shalloway, Beaver, Trott: Lean-Agile Software Development: The Role of Quality Assurance in Lean-Agile Software Development – role of testers to prevent defects, not find them; two lean principles: build quality in, eliminate waste; use found defects to improve process; QA at the end of the cycle is wasteful; team benefits from spec’d tests upfront even if not yet automated
  • Gary Miller: From Test Cases to Context-Driven: A Startup Story (1h15m vid) – references heuristic approaches by Bach, Hendrickson, Johnson and how he applied them to evolve an approach and shorten release cycles
  • Gregory & Crispin: Using Models to Help Plan Tests in Agile Projects – book chapter – Modeling test planning using Agile Testing Quadrants, Nonfunctional requirements, Test automation Pyramid
  • Amy Reichert: Use manual modular tests for testing automation development – architecting tests in a modular fashion leads to more maintainable tests; starting with manual testing of modules then automating later often makes sense; cleaner test architecture avoids the pile-of-hard-to-maintain-tests syndrome
  • Bill Wake: Resources on Agile Testing
  • Keith Klain: qualityremarks blog
  • #ShiftLeft
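Reichert’s modular-test idea above is easiest to see in code. Here is a minimal sketch under assumed names — `login` and `create_record` are hypothetical modules, not from the cited article: each step is a small reusable function, and scenarios compose them, so a change to one step is fixed in one place instead of in every test.

```python
def login(session, user):
    """Reusable module: authenticate a user into a session."""
    session["user"] = user
    session["authenticated"] = True
    return session

def create_record(session, payload):
    """Reusable module: create a record for the logged-in user."""
    assert session.get("authenticated"), "must log in first"
    record = {"owner": session["user"], **payload}
    session.setdefault("records", []).append(record)
    return record

def test_user_can_create_record():
    """A scenario composed from modules, not duplicated step-by-step."""
    session = login({}, "alice")
    record = create_record(session, {"title": "hello"})
    assert record["owner"] == "alice"
    assert session["records"] == [record]
```

The same modules can first be exercised manually (or as a checklist), then wired into automation later, which is the sequencing Reichert recommends.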

Exploratory Testing

Unit Testing

  • Steve Berczuk: The Value of Testing Simply – “too many low-value tests can slow integration time, lengthening the feedback cycle that makes agile work”;  “tests that cover more code (see Ruberto article below) don’t always improve quality. Some tests have low value and thus, implicitly, high cost”; “when it comes to testing, the line between trivial and valuable can be a fine one. If you err too much on the side of seemingly complex testing, you may lose opportunities to find and fix problems early”; “Validating that configuration files are syntactically correct and that the code in your persistence layer is mapping all the essential fields in the data store seem like the kinds of errors one could catch by inspection, but we don’t always do so”; “any validation you can do as part of your build pipeline helps you fix the problems earlier and at lower cost. Some research even shows that simple testing using validation and static analysis can prevent most critical failures (see Colyer article below)”
  • Adrian Colyer: Simple testing can prevent most critical failures – “the more catastrophic the failure (impacting all users, shutting down the whole system etc.), the simpler the root cause tends to be”; “Almost all catastrophic failures (48 in total – 92%) are the result of incorrect handling of non-fatal errors”; “In 23% of the catastrophic failures, while the mistakes in error handling were system specific, they are still easily detectable. More formally, the incorrect error handling in these cases would be exposed by 100% statement coverage testing on the error handling stage”; “Almost all (98%) of the failures are guaranteed to manifest on no more than 3 nodes. 84% will manifest on no more than 2 nodes…. It is not necessary to have a large cluster to test for and reproduce failures”
  • John Ruberto: 100 Percent Unit Test Coverage is not Enough – “One hundred percent unit test coverage doesn’t say anything about missing code, missing error handling, or missing requirements”; “The unit tests will check that the code is working as the developer intended the code to work, but not necessarily that the code meets customer requirements”; “Having the code 100 percent executed during testing also does not mean the tests are actually good tests, or even test anything at all”; “Unit testing is a fantastic way to test exception handling because the developer has full control of the context and can inject the conditions necessary to test the errors”
  • James O Coplien: Why Most Unit Testing is Waste
  • (book) Osherove: The Art of Unit Testing: with examples in C# – Clear explanations of fakes, mocks, stubs
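Ruberto’s closing point — that unit tests are well suited to exception handling because the test controls the context and can inject the failure — can be sketched briefly. All names here (`save_with_fallback`, `StorageError`) are hypothetical, assumed for illustration:

```python
class StorageError(Exception):
    """Hypothetical failure raised by a storage backend."""

def save_with_fallback(value, primary, fallback):
    """Try the primary store; on StorageError, use the fallback."""
    try:
        primary(value)
        return "primary"
    except StorageError:
        fallback(value)
        return "fallback"

def test_fallback_on_primary_failure():
    # Inject the error condition: a stub primary that always fails.
    def failing_primary(value):
        raise StorageError("disk full")
    saved = []
    assert save_with_fallback(42, failing_primary, saved.append) == "fallback"
    assert saved == [42]
```

This is also the kind of test Colyer’s summary argues for: the error-handling statements are exactly the ones that coverage numbers alone will not prove correct unless a test forces them to run.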

API Testing

Regression Testing

  • Justin Rohrman: Language in Software Testing – good overview of what ‘regression testing’ can mean to various parties: “…the real smoke test was at the intersection of all their desires. Our product managers wanted something that would represent the most important functionality in our product for a handful of highly profitable customers. These were scenarios that must work for our customers to be able to do their job. Within the test group, we wanted coverage that would find important bugs. These were severe failures that would reduce the value of our software for large groups of people. That might be a browser crash, or a data persistence problem, or just an image overlapping a submit button on a webpage. What our manager really wanted, was to avoid the question of “why wasn’t this tested?” and also have the smoke test performed quickly, using as few people for as little time as we could get away with. What we ended up with was a set of 10 or so scenarios built as automated user interface tests. These ran against candidate builds toward the end of each release and took about 15 minutes to finish”
  • Martin Fowler: Eradicating non-determinism in tests – quarantine; lack of isolation; async behavior; remote services; time; resource leaks
  • Friendly Tester: Flawed Approach to Regression Testing (FART) (12min vid) – 3 layers of a SUT: system -> knowledge -> checks; need to regularly review automated test suites; think of automation as change detection; balance increasing automated checks with need to increase knowledge (via exploratory) – related interview write-up
  • Friendly Tester: How Often Do You Really Fix A “Failing” Automated Check – regularly review automated checks, ensure they are adding value
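One of the non-determinism sources Fowler lists — time — has a standard remedy worth a sketch: inject the clock instead of reading it globally, so the check becomes deterministic. The `greeting` function is hypothetical, assumed here only to illustrate the pattern:

```python
import datetime

def greeting(now=None):
    """Accepting `now` as a parameter makes this testable without flakiness."""
    now = now or datetime.datetime.now()
    return "Good morning" if now.hour < 12 else "Good afternoon"

def test_greeting_is_deterministic():
    # Fixed timestamps: the test passes identically at any time of day.
    morning = datetime.datetime(2024, 1, 1, 9, 0)
    afternoon = datetime.datetime(2024, 1, 1, 15, 0)
    assert greeting(now=morning) == "Good morning"
    assert greeting(now=afternoon) == "Good afternoon"
```

The same injection idea generalizes to Fowler’s other sources (remote services, async behavior): replace the uncontrolled dependency with one the test supplies.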

UI-specific testing

UI-specific testing: Selenium-specific

Performance testing

Server testing

Test Automation

  • Deliberate Testing in an Agile World: An Interview with Dan North – “Most of the teams I work with who would describe themselves as agile tend to have two types of testing: automated feature and unit testing, and manual exploratory testing… it’s almost embarrassing how many types of testing we aren’t even aware of; never mind whether or not we are choosing to do them”; “…the idea of testing teams itself is flawed. Testing is a set of capabilities that should be intrinsic to any software delivery team, rather than something handed off to a dedicated testing team”
  • Richard Bradshaw: A look at test automation and test automators – responses to tweets by Alan Page; difference between Automation In Testing and Test Automation; devs should be primarily writers of test automation

Bug Mgmt

  • Justin Rohrman: The Bug Reporting Spectrum – “…boring triage meetings, where managers decide what is or is not a bug”; “use the title format ‘X fails when Y’”; “If the steps to reproduce the bug were needed, they focused on the parts that were critical”; “I reduced the amount of written bug reports by at least 50%”; “Bug reports were mostly done through demonstration and conversation. We were able to discover new problems, demonstrate exactly how they were triggered, and get them fixed without ever touching a bug tracking system. We went to the bug tracker only when we had questions that couldn’t be answered immediately, or bugs that were complicated and needed some research before they were fixed”; “The idea of “zero defects” is a lie… At some point during feature development, a tester, programmer, or product person will stumble across a problem that can’t be fixed immediately. That issue might be complicated, it might require research, or the programmer may be busy working on something else. Either way the bug can’t be fixed now, and not documenting it is risky business”; “My general rule now is to only make a bug report when it’s an absolute necessity. If there is a question that no-one can answer in the next day, or if there is a bug that can’t be fixed yet. Most of the time, I find that a conversation can solve problems that a bug report introduces”; “Some people say the best tester is the one that finds the most bugs. I’d change that and say the best tester is the one that can get the most bugs fixed. That means reporting them in a way people care about and understand”
  • Melissa Eaden: When Testers Deal With Process Debt: Ideas to Manage It And Get Back To Testing Faster – warning signs of process debt; example of a good defect management plan; tips for cleaning up test cases; consolidating and maintaining documentation; the devil is in the details: automation; equipment, licenses, and tools; other things which add to non-code technical debt; sunken cost fallacy

Lists
