The success of an e-commerce website largely depends on a combination of a few basic qualities: it should not only present its products in the most appealing way, but also be secure and stable.
As the owner of such a website, you definitely do not want to lose money when something breaks. As a developer, you have to find a way to ensure that the most important functionality is working correctly and get notified of any issues as soon as possible.
In this article, we will discuss one possible way to achieve these goals: automated testing.
What is automated testing?
As the name implies, automated testing is a technique that uses specialized tools to validate functionality without relying solely on manual effort. By integrating automated tests into your website, you benefit from increased accuracy, efficiency, speed, and reusability of test scripts, freeing your staff to focus on other important tasks.
If you are familiar with testing in general, you are likely aware of different types of tests such as unit, integration, and performance testing. For e-commerce websites, we recommend considering the automation of end-to-end (E2E) tests that replicate real user behavior in the actual application environment.
With an automated test script, you can simulate actions like opening the product list, applying filters, selecting and purchasing a product – essentially replicating the actions of a real user. Any potential issues encountered during the process can be immediately identified and reported.
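As a minimal, framework-agnostic sketch of this idea (the step names and the forced payment failure below are invented for illustration, not taken from any real test suite), an E2E script is essentially a sequence of user steps in which the first failure is captured and reported:

```javascript
// Hypothetical sketch: run a sequence of simulated user steps and
// report the first one that fails, as an E2E runner would.
function runFlow(steps) {
  for (const { name, action } of steps) {
    try {
      action(); // in a real E2E tool, this would drive an actual browser
    } catch (err) {
      return { passed: false, failedStep: name, reason: err.message };
    }
  }
  return { passed: true, failedStep: null, reason: null };
}

// Simulated purchase flow; the payment step is forced to fail here
// so that the report shows exactly where the flow broke.
const report = runFlow([
  { name: 'open product list', action: () => {} },
  { name: 'apply filters', action: () => {} },
  { name: 'select product', action: () => {} },
  { name: 'pay', action: () => { throw new Error('payment gateway timeout'); } },
]);

console.log(report);
```

A real E2E framework adds browser automation, assertions, screenshots, and reporting on top, but the core value is the same: a named step fails, and you know immediately which part of the user journey is broken.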
Which functionality should be covered by automated tests?
If you’re impressed by the advantages of automated tests and are planning to start writing them immediately, please pause for a moment and keep reading: it doesn’t make sense to automate everything.
Based on what we have discussed so far, it’s easy to identify what should be covered by automated tests as a priority. Here is our top three list:
- Functionality critical for the user flow (e.g. payment processing);
- Scenarios that are difficult to test manually or require significant manual effort (e.g. filling multiple forms with specific rules or setting up complex filter criteria);
- Repetitive tests (e.g. when there are multiple white-label sites with the same code base but different configurations).
Having automated tests in place for these cases will serve as an excellent starting point and a significant step towards improved stability.
Why is automated testing a valuable addition to monitoring and alerting?
When talking about web application reliability and stability, one can’t help but mention concepts like metrics, monitoring, and alerts. If the monitoring system is properly set up and already provides meaningful results, including real-time incident notifications, one could argue that nothing beyond that is needed. However, these components typically only run in the production environment and only react to certain critical events. You should also have a tool that proactively and continuously checks the functionality of the website.
Automated tests can be run in any environment, starting with the developer’s computer. When your team develops a new feature or fixes a bug, you want to check the overall health of the application as early as possible.
When you add automated testing to the continuous integration (CI) process, it’s much more likely that the quality level of the application won’t drop, and you’ll be able to spot potential bugs after the initial deployment to an internal instance.
So if a test script fails when paying for a product, you don’t have to wait until the same thing happens to a real user on the real site – plus, you already have an idea of what change might be causing that problem, and thus the ability to investigate it directly.
Why will project managers be happy (and also unhappy!) if you integrate automated tests?
As the people responsible for organizational tasks and for communicating directly with the website owner’s representatives, project managers should always have good evidence that everything that is live, or meant to go live, is working properly. With automated tests, this becomes possible.
When tests are failing, they can see in advance that some features require additional work and can plan accordingly, and they can escalate production issues to developers as soon as possible. When tests are passing, they can be confident that a new release will be delivered on time without issues, or that the production environment is functioning as expected.
But even automated processes may sometimes falter. For example, a pipeline that should execute a test script may get stuck due to a lack of resources, which may lead to false failure reports. If these reports cannot be distinguished from real failures, they may cause some panic. This is where teamwork becomes important: someone with a technical background should be able to react to a failure notification and determine whether it is a real problem or just a false alarm.
Why do you probably need a dedicated QA engineer in your team to create and maintain the tests?
Creating an automated test script means writing code. But if you think this is just another routine task for your development team, you may be mistaken. Of course, developers are usually great at learning new technologies, and they know exactly how everything on the website should work. That alone is not enough, though.
Automated testing is not only about the initial implementation, but also about maintenance. Whenever something changes on the website, the test scripts may have to be updated accordingly to cover the new state. Developers who concentrate on their primary job will produce better results than developers who are constantly distracted by doing what a QA engineer is supposed to do.
From a professional perspective, QA engineers look at the testing process from a different angle. They can identify the most relevant test scenarios, select more suitable testing tools, and will most likely spend less time writing test scripts. Developers should create features; QA engineers should verify them. This tandem is key to more efficient work and, as a result, to the reliability of the website’s functions.
Why did we decide to start writing automated tests, and how do we use them?
One of the latest products created by freshcells, the TravelSandbox® QuickStart, is a sales platform for the online travel market. It has been running successfully in production for multiple clients, including some very large players in the travel industry.
Like other e-commerce websites, it lets users buy a product, which in our case is a combination of accommodation, a flight, and various additional services such as insurance or rental cars. To cover all of this, we have to implement booking flows that are quite complex from a technical perspective – for example, when insurance is booked directly from a third-party provider, we handle separate payments for it and for the main trip package. Moreover, from time to time we have to add new services or payment methods, or adapt to changes in external APIs or business requirements.
This naturally leads to the necessity of keeping all parts of the booking process extremely stable, especially the ones that depend on external systems, and of being able to catch issues before they affect real users.
The first thing we did was establish the monitoring infrastructure: every important internal action of the booking microservice produces log records, which can be used to identify errors and trigger alerts when the mean error rate reaches a certain level. The support team gets notified about such cases immediately and can take care of the issues directly.
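The alerting logic described above can be sketched roughly as follows. Note that the log record shape and the 5% threshold are assumptions for illustration only, not our actual monitoring configuration:

```javascript
// Hypothetical sketch of threshold-based alerting on booking logs.
// A log record is assumed to look like { action: string, level: 'error' | 'info' }.
const ERROR_RATE_THRESHOLD = 0.05; // assumed: alert when more than 5% of records are errors

function errorRate(records) {
  if (records.length === 0) return 0;
  const errors = records.filter((r) => r.level === 'error').length;
  return errors / records.length;
}

function shouldAlert(records) {
  return errorRate(records) > ERROR_RATE_THRESHOLD;
}

// Example: 1 error out of 10 booking records -> 10% error rate -> alert fires.
const records = [
  { action: 'book', level: 'error' },
  ...Array.from({ length: 9 }, () => ({ action: 'book', level: 'info' })),
];
console.log(errorRate(records), shouldAlert(records));
```

In practice this kind of check runs inside a monitoring system over a sliding time window, but the decision at its core is exactly this simple ratio-against-threshold comparison.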
But we wanted to improve on this even more, and that is when the idea of running automated tests against the production environments at least twice a day came to mind. This way, we could explicitly verify the most critical booking-related functions, and even if there were no real bookings in the last 24 hours, we could still trigger some ourselves to check whether the system was in a normal state.
Creating automated tests was definitely a solution worth considering, if only for the following reasons:
- Even though the client websites differ to some extent because of custom requirements, they all originated from the same QuickStart core, which allows us to take advantage of the reusability of test scripts;
- The ability to run tests outside standard working hours – late in the evening or on weekends, when no manual resources are available but the system load may increase, since people have time to relax and book a trip for an upcoming holiday – gives us additional insight into these very important periods, when our clients earn the most money.
As a testing framework we decided to use Cypress – a modern, lightweight solution offering cross-browser support, debuggability, CLI commands, official Docker images for running in CI, automatic screenshots on failure, and videos of the entire test suite, available for further analysis of the results in the Cypress Dashboard.
For our booking flow tests it was also especially important to be able to configure the number of retry attempts: the availability check for an offer may fail if a room or a flight is booked out, but that does not mean the booking itself is broken. In such cases we did not want to mark the test as failed right away, but to start over with another offer. Fortunately, Cypress provides retries out of the box.
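In Cypress, retries are a configuration option. A minimal sketch of a `cypress.config.js` enabling them might look like this (the retry counts shown are illustrative, not our actual values):

```javascript
// cypress.config.js – minimal sketch; retry counts are illustrative.
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  retries: {
    runMode: 2,  // retry a failed test up to 2 times in `cypress run` (CI)
    openMode: 0, // no retries during interactive development
  },
});
```

Retries can also be set per test directly in a spec, e.g. `it('books an offer', { retries: 2 }, () => { ... })`, which is useful when only a few tests depend on volatile external availability.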
An additional plus was the official ESLint plugin, which lets us write tests following best practices and keep their code as clean as the rest of our application code. The final decision was made by the developer and DevOps teams together, meaning that everyone involved in writing tests and including them in the CI process agreed that Cypress suited their needs.
Before writing tests, we prepared some documentation.
First, we created flow diagrams as the visual representation of the booking process to be tested for each client. This helped us identify the common patterns for the core implementation and specifics to add on top for the client projects.
Second, we agreed on the booking rules for each client. Testing against the production environment means that the bookings land in real booking systems, and not every system is able to detect that a booking was just a test based on certain criteria, such as a specific first and last name of the traveler, a specific comment, or a fake credit card number.
Sometimes it is necessary to request cancellation by sending an email with the reservation number, or to select a specific travel agency that should receive the payment and manage the post-booking operations. All these rules have to be considered in advance to create a test that covers the critical functionality while not disturbing the back offices.
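As an illustration, the detection side of such rules can boil down to a simple predicate over the booking data. The field names and marker values below are invented for the sketch, not any client's actual criteria:

```javascript
// Hypothetical sketch: decide whether an incoming booking is a test booking,
// based on agreed marker criteria. Field names and marker values are invented.
const TEST_TRAVELER = { firstName: 'Test', lastName: 'Booking' };
const TEST_COMMENT_MARKER = 'AUTOMATED-TEST';

function isTestBooking(booking) {
  const byName =
    booking.traveler &&
    booking.traveler.firstName === TEST_TRAVELER.firstName &&
    booking.traveler.lastName === TEST_TRAVELER.lastName;
  const byComment = (booking.comment || '').includes(TEST_COMMENT_MARKER);
  return Boolean(byName || byComment);
}

// A booking matching either criterion is treated as a test booking.
console.log(isTestBooking({ traveler: { firstName: 'Test', lastName: 'Booking' }, comment: '' }));
console.log(isTestBooking({ traveler: { firstName: 'Jane', lastName: 'Doe' }, comment: 'AUTOMATED-TEST run' }));
console.log(isTestBooking({ traveler: { firstName: 'Jane', lastName: 'Doe' }, comment: 'Sea view please' }));
```

The important point is that these markers must be agreed with each booking system in advance; a predicate like this is only useful if the receiving back office actually filters on the same criteria.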
Finally, as a proof of concept, we wrote tests for the core to be executed against an internal environment. In parallel, we developed a CI solution to trigger the test run automatically after each deployment to this environment, and set up a cron job that does the same four times a day. Once this was done, we trained all developers on the basics of Cypress and the exact implementation of our tests.
With this foundation, we were ready to start working on the first client integration of the booking flow tests, which successfully went live and is running today, helping us keep track of the most important website features.
Introducing automated testing is a great opportunity to verify and maintain the reliability of your e-commerce website’s most critical functions. At freshcells, we definitely plan to continue using automated tests, not only for the booking flow but also for other areas, covering both end-user experiences and internal scenarios such as content management with freshMS.