Reading List: Test (Pyramid / Unit / Integration / Acceptance etc.)

General QA
Pyramid / test categories
Oracles and Simulation
Fakes and Mocks
Test Data Mgmt.
Other Test Patterns
Agile testing
Exploratory Testing
Unit Testing
Regression Testing
UI-specific testing
UI-specific testing: Selenium-specific
Performance testing
Lists

General QA

  • Mosaic: Peter Wilson: Ten Common Mistakes Companies Make Setting Up and Managing Software Quality Assurance Departments – argues that QA should drive not just testing but the overall process; the ten mistakes: 1) not properly defining objectives; 2) not properly defining QA’s responsibilities and staffing; 3) senior management not understanding their own responsibility for QA; 4) not holding the QA department accountable for project success; 5) assuming existing standards and processes are followed and are sufficient; 6) separating methodology responsibilities from review and enforcement; 7) not integrating measurement into the process; 8) ignoring, misunderstanding, or not communicating risk; 9) lack of management reporting from QA; 10) positioning the QA department too low in the org
  • EvilTester.com: blog on Exploratory, Selenium, Technical testing etc.
  • Josiah Renaudin: Is the “Traditional Tester” Just a Myth? – most good testers have always had at least some dev skills; agile’s short cycles and the need for automated regression testing to support a build pipeline have only increased that need

Pyramid / test categories

Oracles and Simulation

Fakes and Mocks

Test Data Mgmt.

Other Test Patterns

Agile testing

  • Michael Bolton: Drop the Crutches – “Test cases are formally structured, specific, proceduralized, explicit, documented, and largely confirmatory test ideas. And, often, excessively so”; “The idea that we could know entirely what the requirements are before we’ve discussed and decided we’re done seems like total hubris to me”; “Instead of overfocusing on test cases and worrying about completing them, focus on risk. Ask ‘How might some person suffer loss, harm, annoyance, or diminished value?’”
  • Shift Left but Get It First-Time Right: An Interview with Shankar Konda – “shift left is more about how we accelerate the development activity in conjunction with the testing processes”; “moving away from a shared services model like the test center of excellence to a more federated model of testing, where quality assurance and testing teams work collaboratively with the development teams”; “Automation which used to happen at the end of the testing lifecycle is now a thing of the past. Now we are talking about how progressive automation, or holistic automation across the lifecycle, can enable the development teams to accelerate the process to integrate”; “explore a test-driven development approach by integrating QA with the agile teams with early creation and execution of automation test scripts”; “gone are those days where the traditional model of testing was as a gatekeeper. In the good old days, you are trying to find a defect, and once you find a defect, you’re expecting to get it repaired, and then you try to retest that repair and to see if the defect doesn’t exist anymore—so traditionally, it was more of a quality control function… the market is not accepting that kind of methodology anymore… In the modern era, what is happening now is testing is not just testing anymore—it is more quality engineering now. It is more how quality can be engineered into the practices of the development lifecycle itself. In fact, for a couple of our customer engagements, TCS has redefined the roles of the quality assurance and testing professionals. They are now being referred to as “quality engineers” instead of quality analysts. They’re not any longer just testers because of the fact that they need to enable the other aspects of lifecycle development and become that part of the development team. They are not just part of the testing team anymore; they are part of the development team, working as quality engineers”; “expecting the quality team to perform an anchor role in getting things done. They don’t want them to be over the fence and telling other people that something is wrong. They want something to be anchored and help facilitate between the teams and get things done without raising a red flag, so the anchor role is now between the development teams, the business, and the operations”
  • Atlassian: Quality at Speed (30min video + transcribed Q&A on Atlassian’s “Quality Assistance” approach)
  • Shalloway, Beaver, Trott: Lean-Agile Software Development: The Role of Quality Assurance in Lean-Agile Software Development – role of testers to prevent defects, not find them; two lean principles: build quality in, eliminate waste; use found defects to improve process; QA at the end of the cycle is wasteful; team benefits from spec’d tests upfront even if not yet automated
  • Gary Miller: From Test Cases to Context-Driven: A Startup Story (1 hr video) – references heuristic approaches by Bach, Hendrickson, and Johnson and how he applied them to evolve an approach and shorten release cycles
  • Gregory & Crispin: Using Models to Help Plan Tests in Agile Projects – book chapter – Modeling test planning using Agile Testing Quadrants, Nonfunctional requirements, Test automation Pyramid
  • Amy Reichert: Use manual modular tests for testing automation development – architecting tests in a modular fashion leads to more maintainable tests; starting with manual testing of modules and automating them later often makes sense; a cleaner test architecture avoids the pile-of-hard-to-maintain-tests syndrome
  • Bill Wake: Resources on Agile Testing
  • Keith Klain: qualityremarks blog
  • #ShiftLeft
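The Shalloway, Beaver & Trott entry above argues that the team benefits from specifying tests up front even before they are automated. A minimal sketch of one way to realize that, assuming a hypothetical registry and `@pending` marker convention (these names are illustrative, not from the book):

```python
# Hypothetical sketch: record specified checks as functions immediately,
# and mark the not-yet-automated ones as pending so the suite documents
# intent without failing the build. SPEC, @spec, and @pending are
# illustrative conventions, not from the book.

SPEC = []

def spec(fn):
    """Register a specified test, automated or not."""
    SPEC.append(fn)
    return fn

def pending(fn):
    """Mark a test as specified with the story but not yet automated."""
    fn.pending = True
    return fn

def price_with_discount(amount):
    """Example system under test: 10% off orders over 100."""
    return amount * 0.9 if amount > 100 else amount

@spec
def discount_applies_over_100():
    assert price_with_discount(150) == 135

@spec
@pending
def discount_rounds_to_cents():
    pass  # specified up front; automation to follow

def run_suite():
    """Run automated specs; report pending ones instead of skipping silently."""
    results = {}
    for t in SPEC:
        if getattr(t, "pending", False):
            results[t.__name__] = "pending"
        else:
            t()
            results[t.__name__] = "pass"
    return results
```

Real test frameworks offer the same idea natively (e.g. pytest’s skip/xfail markers); the point is that the specification exists and is visible from day one.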
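The Reichert entry above recommends architecting tests as reusable modules. A minimal sketch of what that composition looks like, assuming a hypothetical shopping flow (the `login`/`add_to_cart`/`checkout` names are illustrative, not from the article):

```python
# Hypothetical sketch: each module encapsulates one step of a workflow;
# end-to-end tests are compositions of modules, so a UI or API change is
# fixed in one module rather than in dozens of monolithic scripts.

def login(session, user):
    """Module: authenticate; reused by every flow needing a signed-in user."""
    session["user"] = user
    return session

def add_to_cart(session, item):
    """Module: cart manipulation; written once, composed many times."""
    session.setdefault("cart", []).append(item)
    return session

def checkout(session):
    """Module: order placement; depends only on the session contract."""
    assert session.get("user"), "must be logged in"
    return {"order_for": session["user"], "items": list(session["cart"])}

# A composed end-to-end test is just a sequence of modules:
def test_purchase_flow():
    s = login({}, "alice")
    s = add_to_cart(s, "book")
    order = checkout(s)
    assert order == {"order_for": "alice", "items": ["book"]}
```

The same modules can first be executed as a manual checklist, then wired into an automated runner later, which is the migration path the article suggests.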

Exploratory Testing

Unit Testing

Regression Testing

UI-specific testing

  • Jon Bellah: Visual Regression Testing with PhantomCSS – changes like CSS edits are particularly likely to cause dramatic breakage; humans’ change blindness limits their effectiveness at manually testing for visual regressions; handling dynamic content by using JavaScript to inject static stand-in content before capture; checking vs. testing
  • Dave Haeffner: How to Handle Visual Testing False Positives
  • LinkedIn: Keqiu Hu: How we make UI tests stable – a possibly questionable approach to UI testing that almost seems overly mocked, but interesting; 700 UI tests and 1,000 unit tests running stably and swiftly; “flaky tests were worse than no tests. In other words, if a test wasn’t stable, we would rather eliminate it from our test suite”; how can we make UI tests stable? addresses the flaky testing environment, flaky testing framework, and flaky tests themselves; a “trunk guardian”
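The Bellah and Haeffner entries above both touch on visual-regression false positives. The usual mitigation is a tolerance threshold: only fail when the fraction of meaningfully-changed pixels exceeds a limit. A minimal sketch of that idea, modeling screenshots as 2-D lists of grayscale values (real tools such as PhantomCSS operate on PNG captures; the function names and thresholds here are illustrative):

```python
# Hypothetical sketch of tolerance-based visual comparison: per-pixel noise
# from anti-aliasing or font rendering is ignored, and the test only fails
# when the overall proportion of changed pixels crosses a threshold.

def diff_ratio(baseline, candidate, per_pixel_tol=5):
    """Fraction of pixels whose grayscale values differ by more than per_pixel_tol."""
    total = diff = 0
    for row_a, row_b in zip(baseline, candidate):
        for a, b in zip(row_a, row_b):
            total += 1
            if abs(a - b) > per_pixel_tol:
                diff += 1
    return diff / total if total else 0.0

def visually_equal(baseline, candidate, max_diff_ratio=0.01):
    """Pass when fewer than 1% of pixels changed meaningfully."""
    return diff_ratio(baseline, candidate) <= max_diff_ratio

base     = [[100, 100], [100, 100]]
same_ish = [[102, 100], [99, 100]]   # rendering jitter within tolerance
broken   = [[100, 100], [0, 0]]      # half the pixels changed drastically
# visually_equal(base, same_ish) → True; visually_equal(base, broken) → False
```

Tuning both thresholds against known-good capture pairs, and masking or statically injecting dynamic regions as Bellah suggests, is what keeps the false-positive rate workable.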

UI-specific testing: Selenium-specific

Performance testing

Lists
