Once an organization transitions from a start-up to a more mature business, it often finds that its software development velocity stalls when it tries to add new features or refactor problematic code. This happens because, without solid automated tests, developers can't know whether they've broken existing behavior.
Management then tries to address the problem by prioritizing automated testing. Because higher-level end-to-end (e2e) testing promises more coverage per line of test code written, it is often pushed as a silver bullet. However, e2e testing comes with serious drawbacks, and decision makers tend to overestimate its ability to find bugs while underestimating the value of other types of automated tests and quality assurance strategies.
This post describes some broad categories of automated testing and clarifies why we write automated tests in the first place, so that developers and managers can make informed choices when putting together a testing strategy that works.