In today’s fast-paced software development landscape, test automation plays a vital role in ensuring product quality and reducing time-to-market. This guide outlines the steps we took to implement a robust automation framework for a client in the healthcare industry. Using Playwright, Python, and Behavior-Driven Development (BDD), we developed a solution that handles complex scenarios, improves scalability, and integrates seamlessly into CI/CD pipelines.
Why Playwright?
One common issue in UI/browser automation is element loading timing: tests that interact with elements before they finish loading produce false failures from timeout errors. Playwright’s action methods (e.g., click) automatically wait until the target element is actionable, eliminating most explicit waits and the boilerplate that comes with them. Additionally, Playwright offers advantages such as:
- Simplified setup and configuration
- Broad browser support
- Network interception and mocking
- High performance
These features were especially valuable in ensuring that healthcare systems with intricate workflows and critical data operated smoothly without disruptions.
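As a minimal illustration of that auto-waiting behavior (the site and link below are stand-ins, not the client’s application):
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")
    # click() auto-waits for the link to be attached, visible, stable,
    # and enabled - no explicit wait or sleep required
    page.get_by_role("link", name="More information").click()
    browser.close()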
Framework Setup: Step-by-Step Guide
Environment Setup:
Install Python 3.11+
Install dependencies:
pip install -r requirements.txt
playwright install
Sample requirements.txt:
behave
behave-html-formatter
playwright
pytest
pytest-playwright
Define Behave Configuration
Create a behave.ini file to configure default behaviors:
[behave]
show_skipped = false
[behave.userdata]
browser = chrome
env = qa
basic_report_path = reports/automation_report.html
[behave.formatters]
html = behave_html_formatter:HTMLFormatter
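These userdata values act as defaults and can be overridden per run from the command line with behave’s -D flag, for example:
behave -Dbrowser=firefox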
Store environment details as JSON files in the resources/ directory, named after the env value (e.g., resources/qa.json). For example:
{
    "environment_name": "google_prod",
    "url": "https://google.com"
}
Initialize Playwright in Behave Fixtures
Define a fixture in features/environment.py to initialize Playwright:
from behave import fixture, use_fixture
from playwright.sync_api import sync_playwright
import json

@fixture
def playwright_fixture(context):
    run_configs = context.config.userdata
    browser_type = run_configs.get('browser')
    env_type = run_configs.get('env')
    playwright = sync_playwright().start()
    if browser_type == 'firefox':
        context.browser = playwright.firefox.launch(headless=False)
    else:
        # headless=False is convenient locally; CI containers typically need headless=True
        context.browser = playwright.chromium.launch(headless=False)
    # one shared page for step definitions and page objects to drive
    context.page = context.browser.new_page()
    try:
        with open(f'resources/{env_type}.json', 'r') as env_file:
            context.env = json.load(env_file)
    except FileNotFoundError:
        raise RuntimeError(f"Environment file for {env_type} not found.")
    context.playwright = playwright
    yield
    # teardown: runs after the test run completes
    context.browser.close()
    playwright.stop()

def before_all(context):
    use_fixture(playwright_fixture, context)
Create Feature Files
Define BDD feature files for test scenarios. Example:
Feature: Google Search

  Scenario: Basic search
    Given on search page
    And "Playwright" entered in search field
    When search triggered
    Then Search Results page with expected results shown
Implement Step Definitions
Write step definitions in Python to implement the feature steps (the bodies are filled in once the page object is introduced below):
from behave import given, when, then

@given("on search page")
def step_impl(context):
    pass

@given('"{query}" entered in search field')
def step_impl(context, query):
    pass

@when("search triggered")
def step_impl(context):
    pass

@then("Search Results page with expected results shown")
def step_impl(context):
    pass
Reuse Page Objects
Encapsulate page interactions in reusable page objects:
from playwright.sync_api import expect

class GoogleSearchPage:
    def __init__(self, context):
        self.context = context
        self.query_field = "//textarea[@title='Search']"
        self.search_button = "//input[@value='Google Search']"

    def enter_query(self, query):
        self.context.page.locator(self.query_field).fill(query)

    def trigger_search(self):
        self.context.page.locator(self.search_button).click()

    def verify_results(self, query):
        # web-first assertion; assumes Google's "<query> - Google Search" title format
        expect(self.context.page).to_have_title(f"{query} - Google Search")
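With the page object in place, the skeleton step definitions above can delegate to it. A hedged sketch (the pages.google_search_page module path and storing the page object on the context are our conventions, not requirements of Behave or Playwright):
from behave import given, when, then
from pages.google_search_page import GoogleSearchPage  # hypothetical module path

@given("on search page")
def step_impl(context):
    context.search_page = GoogleSearchPage(context)
    context.page.goto(context.env["url"])

@given('"{query}" entered in search field')
def step_impl(context, query):
    context.query = query  # kept on the context for the assertion step
    context.search_page.enter_query(query)

@when("search triggered")
def step_impl(context):
    context.search_page.trigger_search()

@then("Search Results page with expected results shown")
def step_impl(context):
    context.search_page.verify_results(context.query)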
Parallel Test Execution
To reduce test execution time, Python’s multiprocessing library was used:
import multiprocessing
import subprocess
import sys

def run_script(command):
    # exit with behave's return code so failures surface in process.exitcode
    sys.exit(subprocess.run(command, shell=True).returncode)

commands = ["behave -Dbrowser=chrome -Denv=qa", "behave -Dbrowser=firefox -Denv=qa"]

if __name__ == "__main__":
    processes = [multiprocessing.Process(target=run_script, args=(cmd,)) for cmd in commands]
    for process in processes:
        process.start()
    for process in processes:
        process.join()
Integrating with CI/CD Pipelines
Tests were integrated into CI/CD pipelines with steps for execution, report generation, and notifications:
- task: Bash@3
  displayName: Execute Automation
  inputs:
    targetType: inline
    script: |
      docker run --rm my-test-image:latest behave -Denv=qa -Dbrowser=chrome
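A report-publishing step can follow the execution step. A hedged sketch using the Azure DevOps PublishBuildArtifacts task (this assumes the reports/ directory is mounted or copied out of the container after the run; the artifact name is illustrative):
- task: PublishBuildArtifacts@1
  displayName: Publish Automation Report
  inputs:
    PathtoPublish: reports/automation_report.html
    ArtifactName: automation-report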
Docker was used to maintain consistent environments:
FROM python:3.11
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN playwright install --with-deps
# copy the framework source last so the dependency layers stay cached
COPY . .
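The image referenced in the pipeline step above can then be built from this Dockerfile:
docker build -t my-test-image:latest .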
Dynamic Data Management
Tests dynamically generated or retrieved data using APIs or the UI. The Behave context object was used to share data across steps:
context.data = api.get_patient_data(patient_id)
assert context.data["name"] == "John Doe"
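In step definitions, this pattern amounts to one step writing to the context and a later step reading from it. A hedged sketch (create_patient and the #patient-name locator are illustrative placeholders, not the client’s actual API):
from behave import given, then
from playwright.sync_api import expect

@given("a patient record exists")
def step_impl(context):
    # create_patient() is a hypothetical helper wrapping the data-setup API
    context.patient = create_patient()

@then("the patient name is displayed")
def step_impl(context):
    # web-first assertion: waits until the element shows the expected text
    expect(context.page.locator("#patient-name")).to_have_text(context.patient["name"])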
Error Handling and Reporting
Failed test cases were automatically retried to minimize flaky test failures:
from behave.contrib.scenario_autoretry import patch_scenario_with_autoretry

def before_feature(context, feature):
    # retry every scenario up to three times before reporting a failure
    for scenario in feature.walk_scenarios():
        patch_scenario_with_autoretry(scenario, max_attempts=3)
Reports were generated in JSON and HTML formats and uploaded to a Teams channel for visibility.
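The Teams upload can be as simple as posting to an incoming webhook configured for the channel. A minimal sketch (the webhook URL is a placeholder):
import requests

TEAMS_WEBHOOK_URL = "https://example.webhook.office.com/..."  # placeholder

def notify_teams(summary):
    # Teams incoming webhooks accept a simple JSON payload with a "text" field
    requests.post(TEAMS_WEBHOOK_URL, json={"text": summary}, timeout=10)

notify_teams("Automation run complete: reports/automation_report.html")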
Results and Outcomes
For this particular client:
- Regression testing time was reduced by 67%, dropping from 12 hours to just 4 hours.
- Automation identified an average of 9 defects per release.
- The framework significantly improved scalability and integration into DevOps pipelines, ensuring reliability for critical healthcare workflows.
Contact Us
If you’re looking to implement scalable, efficient automation frameworks for your healthcare platform or any other industry, our team of QA experts is here to help. Contact us today to learn how we can optimize your quality assurance processes and support your business growth.