A good strategy for regularly doing QA (Quality Assurance) as you create features for your product can lead to a really solid, polished, and pleasing-to-use experience. I say this from the perspective of a Programmer who gets things functionally working per requirements and does his best to make it look nice, but then ultimately passes it off to a person whose single role is to perform QA before the functionality makes it into the released software.
And I'll say this: I've seen what I produce, and I'm always relieved to know there's another set of skilled and talented eyes that exercise it for Quality. This particular co-worker has become a very critical part of our team (as well as a good friend who does huge DPS in Neverwinter during Monday game nights).
I recently had a conversation with him about how he goes about diligently performing his QA magic, and he was decent enough to provide me with an in-depth write-up that I'd like to share. Without further delay, here is How to QA:
I'm going to outline my typical day at OurCoolProduct since "All of QA" could probably fill up a couple of volumes. This is meant to grant some general insight into the testing process, so if you want to drill deeper, just let me know and I can do a followup.
Requirements understanding
For a given chunk of work, I take some time to write out/plan out the following:
- Who does this chunk of work have an effect on? Is this a user-facing change, or is this a backend/dev-only kind of thing? Basically Black Box Testing (functionality testing, assumes the user has no insight into inner workings) vs White Box Testing (tests inner workings, typically database- or API-level testing). A small sketch contrasting the two follows this list.
- What does this chunk of work entail? What's the scope of my testing? Are we testing one piece of functionality that requires a few different tests to ensure it works? Are we testing something that covers multiple parts of the project? The testing pyramid doesn't directly apply here per se, but if you're testing some API calls then you're looking at integration points and not necessarily an end-to-end test.
- Why does this chunk of work matter to the user? This goes big picture, and it involves putting yourself in the shoes of your user. Is what I'm testing and experiencing aligned with what the developer/product owner and the user expect? If as a user my experience feels way off, that's a red flag. This has some overlap with all of the UX practices that our Designer pushes, as the quality of the experience matters as much as the actual experience being what's expected. Depending on need, a QA can also fall into the UX side of things.
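To make the black box vs. white box distinction concrete, here's a minimal sketch using Playwright (just an example framework, not necessarily what we run; the route, selector text, and endpoint are made up). The first test only looks at what a user can see; the second pokes at the API underneath.

```ts
import { test, expect } from '@playwright/test';

// Black box: drive the UI the way a user would and assert only on what they can see.
test('Custom Attribute Upload is visible on the Data Source page', async ({ page }) => {
  await page.goto('/data-sources'); // hypothetical route, relative to a configured baseURL
  await expect(page.getByText('Custom Attribute Upload')).toBeVisible();
});

// White box: call the API underneath and assert on the inner workings directly.
test('custom attributes API returns a well-formed list', async ({ request }) => {
  const res = await request.get('/api/custom-attributes'); // hypothetical endpoint
  expect(res.ok()).toBeTruthy();
  expect(Array.isArray(await res.json())).toBeTruthy();
});
```

The second test is also what I mean by checking integration points rather than a full end-to-end flow.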
Creating and writing tests
As the majority of my feature testing is on the exploratory side, this is where scope comes into play and why Kanbanize and good ACs (acceptance criteria) help a lot. I'll either pull up Notepad or take out my notebook and start writing down some potential test flows based off the ACs. If I'm testing, say, custom attribute creation, I'll map out the user flow start to finish.
EX. User opens OurCoolProduct -> User navs to Data Source Page -> User looks for Custom Attribute Upload tab -> User clicks Template -> Template downloads -> User opens Template -> User Adds values -> User saves template -> User clicks upload button and selects their file -> Upload completes
You've probably heard of the "happy path," and that's basically what this is: what I consider the bare minimum of testing. When every step works perfectly, you expect the perfect result. It's basically what our Product Owner would show off at a demo.
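Mapped into an automated check, that happy path might look like the sketch below (again assuming Playwright; the route, labels, and file names are placeholders, and the template would really be filled in by hand outside the browser):

```ts
import { test, expect } from '@playwright/test';

// Happy path for the custom attribute upload flow mapped out above.
test('custom attribute upload - happy path', async ({ page }) => {
  await page.goto('/data-sources');
  await page.getByRole('tab', { name: 'Custom Attribute Upload' }).click();

  // Download the template.
  const downloadPromise = page.waitForEvent('download');
  await page.getByRole('link', { name: 'Template' }).click();
  const download = await downloadPromise;
  await download.saveAs('template.csv');

  // Assume a pre-filled copy of the template exists for the upload step.
  await page.setInputFiles('input[type="file"]', 'filled-template.csv');
  await page.getByRole('button', { name: 'Upload' }).click();

  await expect(page.getByText('Upload complete')).toBeVisible();
});
```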
Based off the happy path, determined with the help of the ACs, I can then come up with tests for the feature. This is where the more intricate side of testing comes in, though: when you get off the happy path and into potential pitfalls and edge cases. It's important to distinguish between a true edge case and a user behavior that doesn't follow the happy path but is something users typically do.
EX. We might expect our users to be using basic strings as their custom attribute names, i.e. "Test One", "Custom", "Attribute 3", etc. However, we know a lot of them do things contrary to expectation, such as using parentheses or numbering: "1) Test A", "2. Test B", etc. Knowing this user behavior gives you a starting point to get into the more obscure ways the user can use the feature, and you can start extrapolating some really weird behavior, such as using emojis. Having this "other angle" or "outside of the box" view is what a lot of people bring up when they talk about the value a QA adds.
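A cheap way to cover that spread of names is to parameterize one test over the values we know (or suspect) users will type. A minimal sketch, with made-up routes and field labels:

```ts
import { test, expect } from '@playwright/test';

// Names real users actually type, plus the weirder extrapolations.
const attributeNames = [
  'Test One',           // plain string, the expected case
  '1) Test A',          // parentheses/numbering we know users do
  '2. Test B',
  '  Leading spaces',   // whitespace
  'Ünïcödé attribute',  // non-ASCII characters
  'Emoji 🚀 attribute', // the really weird extrapolation
];

for (const name of attributeNames) {
  test(`creates a custom attribute named ${JSON.stringify(name)}`, async ({ page }) => {
    await page.goto('/data-sources');                    // hypothetical route
    await page.getByLabel('Attribute name').fill(name);  // hypothetical field
    await page.getByRole('button', { name: 'Create' }).click();
    await expect(page.getByText(name)).toBeVisible();    // the name should round-trip intact
  });
}
```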
Execution of (manual) tests
Now that I have the basic roadmap of my test as well as tests to find pitfalls/boundaries, this is where we actually execute those tests. I'm not going to spend a lot of time talking about setting up test environments since you have a lot of insight into those already, but the standard practice of ensuring you have something that mimics production as closely as possible applies.
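For what it's worth, if any of the tests are scripted, keeping the environment choice out of the tests themselves helps with that "mimic production" goal. A minimal Playwright config sketch, where QA_BASE_URL and the staging URL are placeholders:

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Point the same suite at whichever environment mimics production best.
    baseURL: process.env.QA_BASE_URL ?? 'https://staging.example.com',
    trace: 'retain-on-failure', // keep traces from failed runs for triage
  },
});
```

Then the run is just `QA_BASE_URL=https://other-env.example.com npx playwright test`.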
The following is a very useful tool for form testing:
Bug Magnet
I've used Bug Magnet for years, and it's an exceptionally useful tool for boundary and edge case testing. Note that it's easier to use for web testing (which the majority of mine is), but the values themselves can easily be plugged in anywhere. It contains Lorem ipsum text, numbers, whitespace, Unicode, URLs, etc. Very useful for form testing.
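Bug Magnet itself is a browser extension you click through, but the same kind of values can live in a reusable list for scripted form checks. This list is hand-picked in the same spirit (it's not pulled from the extension):

```ts
// Boundary/edge values in the spirit of Bug Magnet's menus.
export const formBoundaryInputs: string[] = [
  '',                                    // empty
  '   ',                                 // whitespace only
  'Lorem ipsum dolor sit amet, consectetur adipiscing elit.',
  'a'.repeat(255),                       // at a common length limit
  'a'.repeat(256),                       // just past it
  '0', '-1', '3.14159', '2147483648',    // numeric boundaries
  'Iñtërnâtiônàlizætiøn',                // accented Unicode
  '田中さんにあげて下さい',                 // non-Latin script
  '<script>alert(1)</script>',           // markup that should be escaped, never executed
  'https://example.com/?q=a&b=c#frag',   // URL with query string and fragment
];
```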
Side note: We don't put a ton of emphasis on accessibility testing, to mixed feelings, but OurCompany has a pretty nice cheat sheet to reference: Redacted!SorryFolks!.
Colorblinding
Colorblinding is a really nice extension for simulating various types of color blindness; I've used it a lot on projects where accessibility was a core focus.
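Colorblinding is a visual aid you check by eye; if you also want something scriptable for the basics, the separate @axe-core/playwright package (my example here, not something this write-up otherwise uses) can flag contrast and labeling issues automatically. A minimal sketch with a made-up route:

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// Automated accessibility scan; complements, but doesn't replace, looking at the UI
// through a colorblindness simulator.
test('Data Source page has no detectable accessibility violations', async ({ page }) => {
  await page.goto('/data-sources'); // hypothetical route
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);
});
```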
The most important thing to note about executing manual tests is to not just ensure that you're running the test correctly, but that what you're seeing can be interpreted at more than the UI level, whether through network calls, API logs, or console logging. Seeing an aberration at the manual level is good, but being able to interpret what it means is the most important part of getting something actionable to your devs. "This looks wrong" is a lot less valuable than "This looks wrong, and here's the console error/500 error/warning in the logs," and the latter vastly reduces the lag time between finding an issue and work on that issue beginning. A common practice drilled into our heads at MyPreviousCompany for any project was to set a main priority of "Get log access".
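One way to bake that habit in: even when the flow itself is driven by hand or semi-manually, have something listening for console errors and 5xx responses so the "here's the error" part comes for free. A sketch of that listening, assuming Playwright and a made-up route:

```ts
import { test, expect } from '@playwright/test';

// Collect console errors and failed requests while the flow runs, so a bug report can say
// "this looks wrong and here's the console error/500" instead of just "this looks wrong".
test('upload flow surfaces no console or server errors', async ({ page }) => {
  const consoleErrors: string[] = [];
  const serverErrors: string[] = [];

  page.on('console', (msg) => {
    if (msg.type() === 'error') consoleErrors.push(msg.text());
  });
  page.on('response', (response) => {
    if (response.status() >= 500) serverErrors.push(`${response.status()} ${response.url()}`);
  });

  await page.goto('/data-sources'); // hypothetical route
  // ... exercise the upload flow here ...

  expect(consoleErrors).toHaveLength(0);
  expect(serverErrors).toHaveLength(0);
});
```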
Interpreting and rerunning tests
After you've collected the results of your tests, interpreting them, making any adjustments, and rerunning them is a necessary step. This prevents a few things, namely getting into the habit of just seeing a happy path (even in boundary/edge testing) and moving on, and it also forces you to consider even more test cases you hadn't. This is where it gets tricky, though, since you don't want to fall down a rabbit hole of testing cases so extreme that they're not valuable. This can be done a couple of different ways, but for this example I'll use user profile testing. Basically, this is an exercise that puts you in the shoes of different kinds of users. I'm going to post the basic premise of it from https://qablog.practitest.com/5-ideas-on-how-user-profiles-can-improve-your-testing/
When creating a User Profile you will define all the personal and professional traits that are relevant to the work this person does with your application. For example:
- Years of experience in the field
- Knowledge of your tool, or similar tools
- The way in which he/she works: on a desk in his office, in the field while walking, on a store behind a counter, does the person access the application from his iPhone or BlackBerry?
- Interactions with other users of your tool or other tools with which they need to communicate and collaborate
- You may even want to define some demographic information if relevant such as age, nationality, language, etc
Using this kind of methodology, you can expand your testing from "QA user" to "actual user" and hopefully find some test cases you didn't think of the first time. A healthy exercise I did on my first team at MyPreviousCompany was to come up with 5-7 user profiles and have the team mob test the product coming from different user profile angles, and we found a good amount of issues based off, say, a "power user" trying to break the app vs. "the not-tech-savvy account manager" who barely knew how to log in to their iPad.
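If it helps to see what a profile looks like written down, here's a minimal sketch of profiles as plain test data; the traits and the two example profiles are just illustrations of the idea above:

```ts
// User profiles as test data; each mob-testing pass picks one and plays that role.
interface UserProfile {
  name: string;
  yearsInField: number;
  toolFamiliarity: 'new' | 'occasional' | 'power';
  device: 'desktop' | 'tablet' | 'phone';
  workingContext: string;
}

const profiles: UserProfile[] = [
  {
    name: 'Power user',
    yearsInField: 10,
    toolFamiliarity: 'power',
    device: 'desktop',
    workingContext: 'lives in the app all day and actively tries to break it',
  },
  {
    name: 'Not-tech-savvy account manager',
    yearsInField: 2,
    toolFamiliarity: 'new',
    device: 'tablet',
    workingContext: 'checks in occasionally and barely remembers how to log in',
  },
];

for (const profile of profiles) {
  console.log(`Run a test pass as "${profile.name}" on ${profile.device}: ${profile.workingContext}`);
}
```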
There are tons of other QA approaches to a product similar to this one, which we could get into specifically later if you want.
Regression and Release testing
For the purposes of OurCoolProduct, most of the big regression tests come as we're prepping for a release. Normally I would do a fully manual sweep of the app, but since our existing automation is pretty good, I tend to end up just testing the things we can't cover as easily, such as the UI of the d3 graphs.
Release testing would normally take a lot longer due to having to test the integration of new features with each other, but an advantage of the current dev branch is that the features of a release already tend to exist together, and the use of the .15 environment as a new-feature "holding pen" of sorts helps with confidence in feature separation. Release candidates go through a typical process of being tested on the .6 environment, with emphasis on upgrading from current client versions to the candidate version via the UI. I tend to lump regression testing in with a successful upgrade to the release client. If everything looks good, then the release gets the final stamp of approval, we demo any features the users may have missed, and we ship it as needed.
So that was all very high-level stuff without getting too into the weeds, but that's the general outline of how I operate QA for OurCoolProduct. It's definitely not how I've done it on the last five teams I've been on over the years, so how you end up doing QA will likely be different too. What's important is that the general testing cycle itself stays consistent, and you end up just molding practices around it that are relevant to the project (much like development).