What is CI/CD?
Continuous Integration/Delivery (CI/CD) prioritizes shipping new releases of a build frequently and quickly. Like Agile, a project that’s launched remains open to continuous iteration.
The difference is this: the project also remains ready to ship at all times, instead of waiting for iterations to run their course.
A CI/CD pipeline looks like this:
- A developer has code they want to integrate into the project
- An external CI server runs an ‘integration’ test: it grabs the source files and attempts a build with the new code.
- If the build completes successfully, the server packages the changes with the source files. If not, it notifies the team.
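The loop above can be sketched in a few lines of Python. This is illustrative pseudologic, not any particular CI engine’s API: the build, package, and notify steps are passed in as plain callables so the flow itself is easy to see.

```python
def attempt_integration(change, build, package, notify):
    """One pipeline run: try to build with the new code; package the
    result on success, otherwise alert the team."""
    if build(change):
        return package(change)
    notify(change)
    return None

# Stubbed run: a change that builds cleanly gets packaged.
artifact = attempt_integration(
    "feature-123",
    build=lambda c: True,          # stand-in for the real build step
    package=lambda c: f"pkg:{c}",  # stand-in for packaging
    notify=lambda c: None,         # stand-in for team notification
)
```

In a real engine such as Jenkins, each callable corresponds to a pipeline stage; the point here is only the branch: package on success, notify on failure.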
CI engines (like Jenkins or Bamboo) have dashboards that display current and previous builds, logs of previous check-ins and their status (successful/failed), what broke (and when), etc. Everyone remains informed about any change in code, infrastructure, or configuration. This ensures that deployment failures are caught (and fixed) early.
Note: There’s a difference between a ‘successful build’ and a ‘quality build’. Even if a new integration succeeds, it isn’t considered ready to ship until it has passed a series of tests by QA engineers. That’s where automated testing with Selenium comes in handy.
Selenium automates frequent and recurrent functional, performance, and compatibility testing. This gives developers near-instant feedback for faster debugging, leaving them with more time to code business logic for newer versions/features.
Modern web development needs Selenium testing because:
- It automates repeated testing of smaller components of a larger codebase
- It’s integral to agile development and CI/CD
- It frees resources from manual testing
- It’s consistently reliable and catches bugs that human testers might miss
- It can provide extensive test coverage
- It’s precise; the customizable error reporting is an added plus
- It’s reusable; you can refactor and reuse an end-to-end test script every time a new feature gets deployed
- It’s scalable; over time, you can develop an extensive library of repeatable test cases for a product
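As a concrete illustration, a minimal Selenium smoke test in Python might look like the sketch below. The URL and expected title are placeholders, and actually running it requires `pip install selenium` plus a browser driver on the PATH; the pass/fail rule is kept in a pure helper so it can be reused and checked on its own.

```python
def title_matches(actual_title, expected_fragment):
    """Pure pass/fail rule: is the expected fragment in the page title?"""
    return expected_fragment.lower() in actual_title.lower()

def run_smoke_test(url, expected_fragment):
    """Open the page in a real browser and apply the check.
    Assumes the `selenium` package and a chromedriver install."""
    from selenium import webdriver  # imported lazily: needs a browser setup
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        return title_matches(driver.title, expected_fragment)
    finally:
        driver.quit()
```

Because the helper is separate from the browser plumbing, the same rule can be reused across browsers and test runs, which is what makes such scripts scalable into a test library.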
What Types of Testing can be Automated with Selenium?
Types of testing that are commonly automated with Selenium are:
Compatibility Testing
Done by QA professionals/testers to ensure that the web app works consistently across different browser-OS combinations. For example: testing on different devices (mobile and desktop) to ensure that the front-end scales responsively; testing on different browsers to see if video ads render on the pages as they should.
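A compatibility suite is usually the same check repeated across a browser matrix. A sketch of that shape, with the driver construction injected so the loop stays generic (the browser names are illustrative):

```python
def run_matrix(check, browsers, driver_factory):
    """Run one check against every browser in the matrix and collect
    pass/fail results per browser."""
    results = {}
    for name in browsers:
        driver = driver_factory(name)
        try:
            results[name] = check(driver)
        finally:
            driver.quit()  # always release the browser, pass or fail
    return results

# With real Selenium, driver_factory could be something like:
#   lambda name: {"chrome": webdriver.Chrome, "firefox": webdriver.Firefox}[name]()
```

The same `check` function then runs unchanged on every browser-OS combination the matrix names.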
Performance Testing
A series of tests done by QA professionals/testers to ensure that the project meets performance benchmarks set by the stakeholders. For example, a tester writes a script that checks whether all elements on the homepage load within 2 seconds on different browsers/browser versions.
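The 2-second rule translates naturally into a timed check. A sketch, with the budget kept as a pure rule and the timing wrapped around a real driver (the `selenium` package and a driver install are assumed for the second function):

```python
import time

def within_budget(elapsed_seconds, budget_seconds=2.0):
    """Pure benchmark rule: did the page load inside the budget?"""
    return elapsed_seconds <= budget_seconds

def timed_load(driver, url):
    """Navigate with a Selenium driver and return elapsed seconds.
    driver.get() blocks until the page load event under the default
    page-load strategy, so the wall clock is a rough load time."""
    start = time.perf_counter()
    driver.get(url)
    return time.perf_counter() - start
```

Running `within_budget(timed_load(driver, url))` per browser version gives the benchmark described above.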
Integration Testing
Done by developers to verify that units/modules coded separately (which work on their own) also work when put together. Parallel Test Calculator, for instance, has separate layers: the UI takes input and the business logic calculates the output, then sends it back to the UI to display. The tester verifies whether the layers relay data correctly when integrated.
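The calculator example can be sketched as two tiny layers plus an integration point that checks they relay data correctly (the layer split here is illustrative, not the real product’s code):

```python
import operator

def parse_input(text):
    """'UI' layer: turn raw input like '3 + 4' into operands and an operator."""
    a, op, b = text.split()
    return float(a), op, float(b)

def compute(a, op, b):
    """Business-logic layer: perform the calculation."""
    ops = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}
    return ops[op](a, b)

def calculate(text):
    """Integration point: the UI layer feeds the business layer."""
    a, op, b = parse_input(text)
    return compute(a, op, b)
```

Unit tests cover `parse_input` and `compute` in isolation; the integration test exercises `calculate`, where the hand-off between layers can break.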
System Testing
Also known as black-box testing. Done by testers/QA professionals with no context of the code or any previously executed tests, and typically centered on a single user workflow. The checkout process on a product website, for instance, comprises validating user credentials, fetching products from the cart, checking their availability, and validating payment details before redirecting to the bank’s website. The tester writes a script to verify that the entire system is functional.
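Treated as a black box, the whole flow is an ordered list of stages that must all succeed. A sketch of that shape (the stage names and the dictionary-backed session are illustrative stand-ins for real Selenium page interactions):

```python
def run_workflow(session, stages):
    """Execute each stage in order; the end-to-end check fails at the
    first stage that does, mirroring a real user's path."""
    for name, stage in stages:
        if not stage(session):
            return f"failed at: {name}"
    return "ok"

CHECKOUT = [
    ("validate credentials", lambda s: s.get("logged_in", False)),
    ("fetch cart",           lambda s: bool(s.get("cart"))),
    ("check availability",   lambda s: all(s.get("in_stock", {}).get(i, False)
                                           for i in s.get("cart", []))),
    ("validate payment",     lambda s: s.get("card_valid", False)),
]
```

The report names the first broken stage, which is exactly the feedback a failed end-to-end run should give.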
Functional Testing
Also done by testers/QA professionals, typically from the user’s point of view. The aim is to verify that every touchpoint on the web app is functional. From the previous example, the tester writes a series of test cases to check that sign-up, product search, checkout, review, bookmark, and all other features function as intended (and fail gracefully when invalid values are entered in input fields).
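Functional cases pair valid and invalid inputs with expected outcomes. A sketch using a hypothetical sign-up email check (the validator and its rules are made up for illustration):

```python
def valid_email(value):
    """Hypothetical stand-in for a sign-up form's input validation."""
    name, _, domain = value.partition("@")
    return bool(name) and "." in domain

def run_cases(feature, cases):
    """Each case is (input, expected); collect the cases that misbehave."""
    return [(inp, expected) for inp, expected in cases
            if feature(inp) != expected]

failures = run_cases(valid_email, [
    ("user@example.com", True),   # valid value accepted
    ("not-an-email", False),      # invalid value rejected
    ("@example.com", False),      # missing local part rejected
])
```

An empty `failures` list means the feature both works as intended and rejects bad input, the two halves of a functional case.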
Regression Testing
A series of tests done to ensure that newly built features work with the existing system. In the same example, say the product website launches a new feature: promotional codes that automatically apply to eligible items before checkout. The tester writes cases to verify that the new feature doesn’t break the rest of the checkout flow.
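A regression case pins down the old behavior while exercising the new one. A sketch with a hypothetical promo code (`SAVE10` and its 10% discount are made up for illustration):

```python
def checkout_total(prices, promo=None):
    """Cart total; the NEW feature applies an eligible promo code."""
    subtotal = sum(prices)
    if promo == "SAVE10":  # hypothetical promotional code
        subtotal *= 0.9
    return round(subtotal, 2)

# Regression check: with no promo code (or an ineligible one), totals
# must match the pre-feature behavior exactly.
```

The key assertion is the no-promo path: if adding the feature changed that total, the new code broke the existing checkout.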
Well-written test suites can also automate Smoke and Sanity testing with Selenium.