Early Performance Feedback
One of the tenets of Agile product development is to seek and learn from feedback, even from your deployment process and server environments. At my company, while we build and test for well-performing code, we wait until we're in the Certification/Staging environment to officially test performance. This is yet another activity that, albeit valuable and necessary, should be moved earlier in the development cycle. First, this testing is performed just late enough to rule out any significant performance-enhancing changes at the code level. Second, the timing of these tests adds little value to the software development process, so no learning can occur.
I asked my team to consider the following:
- write tests that time certain functions of our application
- time the tests to get an average performance time for each (note that we're not testing performance, we're merely benchmarking it). Take these benchmarks during the same window in which you'll regularly run the tests, so the baseline reflects real conditions.
- schedule the tests to run and compare actual performance against the expected performance time. Factor in a little tolerance, say 10%, and fail each test when it performs outside that range.
- these tests would run as part of a regression test suite, triggered after every build deployment.
- failed tests would be identified and investigated immediately to see if the changed code could have done anything to adversely affect performance.
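The steps above can be sketched as a simple timing test. This is a minimal illustration, not our actual suite: `build_report` is a hypothetical stand-in for any application function you want to benchmark, and the baseline would normally be recorded ahead of time in the environment where the suite runs.

```python
import time
import statistics

def build_report():
    # Hypothetical function under test; stands in for any
    # application function whose performance you want to track.
    return sum(i * i for i in range(100_000))

def benchmark(func, runs=10):
    """Return the average wall-clock time in seconds over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

# Baseline average recorded earlier, in the same environment and at the
# same time of day the suite regularly runs (derived on the fly here
# only so the example is self-contained).
EXPECTED_SECONDS = benchmark(build_report)
TOLERANCE = 0.10  # fail when more than 10% slower than the baseline

def test_build_report_performance():
    elapsed = benchmark(build_report)
    assert elapsed <= EXPECTED_SECONDS * (1 + TOLERANCE), (
        f"build_report averaged {elapsed:.4f}s, more than 10% over "
        f"the {EXPECTED_SECONDS:.4f}s baseline"
    )
```

A test like this would be picked up by the regression suite after every build deployment; a failure flags the commit for immediate investigation rather than waiting for the Certification cycle.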
As Agile so often does, this challenged conventional thinking. Why would you test in an environment known to be a horrible performer? How could you trust the test results and not spend countless hours investigating errors that prove to have nothing to do with the code? But just think of the benefits:
- shifts responsibility for performance to the developer, rather than a tester or person responsible for supporting testing (possibly even a developer who didn't write the code)
- provides the opportunity to immediately change code that affected performance. Remember that the earlier you find and fix a problem, the less time is actually spent doing so. Finding such problems during a Certification cycle will likely take considerably longer, and the cost to modify related code can be considerably higher; incurring that cost when it could have been avoided is not Lean thinking.
- encourages the developer to think about performance while designing code, rather than trusting that someone else's test will report back any poor performance
- investigating these results facilitates learning, which enhances future code design as well as future test design. Such continuous process improvement and commitment to individuals are hallmarks of Agile, Lean and Scrum.
I don't see this as optional. I think we absolutely have a responsibility to ourselves and our company to bring this testing (or timing and investigating) forward. And we'll all benefit from it. Lastly, I'd like to give a special call out to Mike Kelly, who took the time to discuss this topic with me and write a wonderful post on testreflections.com. Thanks Mike!