The SQA2 Blog: BBT - Behavior Based Testing
Applying QA Methodologies
After close to 20 years performing Quality Assurance, I have applied many different methodologies, including, finally, BBT, all in the quest to deliver complete coverage and execute it in the most efficient and effective manner. I spent many long nights verifying requirements, refactoring tests, and performing risk assessments on the tests I had to execute, knowing that if someone scrutinized my coverage, they could probably find some case my test plan didn't cover. When you present that plan, you draw on the skill and experience from past projects to show why the gap won't be a critical risk, and you hope the stakeholder sees it the same way you do.
But I’m preaching to the choir! You already know this. It is the constant struggle all QA Leads and Managers face on a daily basis (and we haven’t even mentioned what happens to your ulcers when your timetable gets squished: development gets delayed, QA keeps the amount of time it planned, and the release date gets pushed out. Nice dream, huh?). No! In reality, QA time gets squashed and go-live rarely changes, so the risk climbs even higher because you have to whittle down the tests you can complete in the time given. And yet you are still expected to find EVERY defect.
Automation or no automation?
After 20 years of strategy, planning, getting it right (sometimes), and defects slipping into production, I keep searching for something to make my job easier. Automation is a double-edged sword, but it does help: it executes tests faster and increases my coverage, yet it quickly goes out of date and the maintenance can be overbearing. Considering that, I’d still rather have it than not.
So what would you say if I told you, “I found it”? You’d laugh in my face, right? I would have too. A colleague introduced me to Behavior Based Testing (BBT) a few months ago, and since then I’ve tried to punch holes in it. With my background in statistics and applied math, I was eager to get under its hood and dive in. An amalgam of Behavior Driven Development and cause-and-effect graphing, it combines the benefits of both and uses statistical models to determine the test cases. Yeah, I was skeptical too! How can a framework map out my test cases and ensure I capture all the positive and negative paths? But the more I sat with the straightforward concepts, the more I wondered why nobody had done it before. And it worked!
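To make the cause-and-effect idea concrete, here is a minimal sketch of my own (an illustration, not the actual BBT engine, and all the names in it are hypothetical): causes are boolean input conditions, effects are rules over those conditions, and enumerating every combination of causes yields both the positive and the negative test cases automatically.

```python
from itertools import product

# Hypothetical example: causes are boolean input conditions.
causes = ["valid_user", "valid_password"]

# Effects are rules over the cause values (assumed for illustration).
effects = {
    "login_succeeds": lambda c: c["valid_user"] and c["valid_password"],
    "error_shown": lambda c: not (c["valid_user"] and c["valid_password"]),
}

def generate_test_cases(causes, effects):
    """Enumerate every combination of causes and record the expected effects."""
    cases = []
    for values in product([True, False], repeat=len(causes)):
        inputs = dict(zip(causes, values))
        expected = {name: rule(inputs) for name, rule in effects.items()}
        cases.append({"inputs": inputs, "expected": expected})
    return cases

cases = generate_test_cases(causes, effects)
for case in cases:
    print(case["inputs"], "->", case["expected"])
```

Even this toy version shows the appeal: the single positive path and every negative path fall out of the model, so nothing depends on a tester remembering to write the unhappy cases.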
Pinch me. Is this a dream?
I have now been able to watch this from end to end. I can sit down with stakeholders and get buy-off because I can put the test model in a format that makes sense to them. From that point, the engine generates all the test cases, positive and negative. I can’t imagine having a stakeholder go through and confirm individual test cases; in fact, I can’t think of one who ever has. They will confirm my strategy but rarely go into the detail to verify all my cases. But rue the day you miss a defect. Now, though, I can walk them through a visual of their requirements and verify it with them, both of us knowing that if we both confirm it, the test cases generated from it will provide complete coverage. <Enter load off my shoulders here>