Software testing is often seen as a tick in the box – one of the final things to do (and often squeezed or condensed at the last minute) before some huge software project is sent into production.
Management generally don’t want unplanned hiccups. They’re hoping for and expecting a steady stream of “yeses” from their test teams. And yet “no” is sometimes what I’m required to give them when I feel that we can do better on a project.
It’s difficult or at least awkward to do this. No one wants the responsibility of holding up a move to production and appearing to be the blocker or troublemaker.
Still, I’m not afraid to say no if I think we can improve upon what has been designed and tested. Waving things through with a meek “yes” might be the easy option. But I’m interested in good, even great – not just in easy. If the decision is still to proceed, I can be confident that it’s an informed one, with the risks outlined and accepted.
This article sets out a little about what I do and what I look for in a good tester.
What Non Functional Testing really measures
I work in Non Functional Testing.
Whereas Functional Testing is concerned with what a system does (processing payments, for example), Non Functional Testing relates to how it does it – essentially, tests that measure the how rather than the what.
Non Functional Testing usually covers what we sometimes refer to as the “–ilities”: operability, maintainability, recoverability, availability and scalability – as well as other areas such as performance, volume, utilisation, security, resilience and disaster recovery. For clarity, I’m fully aware those last ones don’t end in “ility”, but these system quality attributes are where we would start. There are several more, but I personally feel the lines between them blur, and these would meet most general needs.
In Non Functional Testing, not all requirements are as quantifiable or measurable as, say, performance SLAs. Instead, the requirements are sometimes subjective – operability, for example.
So, to avoid relying on subjective or qualitative measures alone, we continually ask our clients for feedback to understand exactly what they want and need. By repeatedly asking ourselves the “is it testable?” question, we can get to the point where our tests and their results are meaningful and useful.
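To make that concrete, here’s a minimal sketch (in Python, purely for illustration) of what “is it testable?” can look like once a vague “responses should feel fast” requirement is pinned down to something measurable. The 2-second, 95th-percentile threshold and the sample timings are assumptions for the example, not real project SLAs.

```python
# A minimal sketch, assuming an illustrative "responses should feel fast"
# requirement pinned down as: 95% of responses complete within 2 seconds.
# The threshold and sample data are assumptions, not real project SLAs.
import statistics

def p95(response_times_s: list[float]) -> float:
    """Return the 95th percentile of a set of response times (in seconds)."""
    return statistics.quantiles(response_times_s, n=100)[94]

def meets_sla(response_times_s: list[float], threshold_s: float = 2.0) -> bool:
    """The subjective 'feels fast' restated as a pass/fail, testable check."""
    return p95(response_times_s) <= threshold_s

# Illustrative timings (seconds) from a test run:
measured = [0.4, 0.6, 0.5, 1.1, 0.7, 0.9, 1.8, 0.6, 0.5, 2.4]
print(f"p95 = {p95(measured):.2f}s, SLA met: {meets_sla(measured)}")
```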
Some of this testing can be done using automation and tools – faster test cycles are just one benefit – and they really excel in the Performance Testing arena. However, I do remind clients that this can be expensive: there’s no point wasting time and resources on automation unless you’ve first researched exactly what you need and what problem you are trying to solve.
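As a flavour of what that automation looks like at its very simplest, here’s a hedged sketch. The endpoint, request count and concurrency level are hypothetical, and a real engagement would use a dedicated performance tool against a properly sized environment – but the principle of automating the measurement is the same.

```python
# A minimal load-generation sketch using only the Python standard library.
# EXAMPLE_URL, the request count and the concurrency level are assumptions
# for illustration; real performance tests would use a dedicated tool.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

EXAMPLE_URL = "http://localhost:8080/health"  # hypothetical endpoint

def timed_request(url: str) -> float:
    """Issue one request and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def run_load(url: str, total_requests: int = 50, concurrency: int = 5) -> list[float]:
    """Fire total_requests requests through a small thread pool and collect timings."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_request, [url] * total_requests))

if __name__ == "__main__":
    timings = run_load(EXAMPLE_URL)
    print(f"fastest {min(timings):.3f}s, slowest {max(timings):.3f}s")
```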
Qualities of a top Non Functional Tester
I expect testers to display high standards and to be the fussy, picky and nosey type. They should be analytical thinkers who are constantly curious.
Length of experience and technical knowledge aren’t everything. Anyone who works with us should show that they love leaping on new ideas, wanting to inspect and challenge them straight away.
A natural interest in wanting to know more is always a good sign. If you’re the kind of person who can’t see a gadget without wanting to disassemble it, work out how it all works and then fit it back together again, you might make a good tester!
Communication skills are particularly relevant in managing testing, because you sometimes have to switch quickly between the deepest of techie conversations with a tech/dev team and much more high-level, non-technical discussions with clients.
Understanding the tone needed and the stress that different groups are under is helpful when trying to move messages between those groups, so that projects are guided safely through to completion.

Whoever we communicate with, the tester’s job is to help the project succeed, so the language should never be framed as “your software failed, you need to fix it.” We try to be a lot more understanding and supportive than that. We never want anything to fail, and if it does, there’s a shared responsibility for that and for how we get to a better position.
Perhaps most importantly for me, a tester shouldn’t just be a yes person. If we’re interested in being guardians of quality and making things better, we’re not just going to fall like dominoes when a manager decides that a project really needs to be signed off now.
If I need to speak up about concerns – too many defects, or other design- or test-related issues with a project – I’ll say it, even if that’s in front of senior directors (obviously never without ensuring my immediate management are informed first. Nobody likes to be hung out to dry, and shooting blame helps no one!).
Again, that comes from wanting to achieve the best result for the project, not because I want to be difficult by holding things up. Unnecessary hold-ups are pointless. If something is good, I’m happy to say so and let’s get on with it! But if it’s not, I’ll speak up so that our projects can make the news for the right reasons (instead of being a PR fail, like this financial services IT project reported by the BBC).
Building a high performance testing team
I want to see people who have my level of dedication to the work – the people who want the best for the projects they work on. They’re not always easy to find. On one occasion, I spent 2 years reviewing and interviewing around 200 candidates to ultimately build a team of 20, but it was a team I was immensely proud of, and one that worked fabulously as just that: a team.
This is what I look for and expect when someone’s on my testing team:
- Skills sharing: I never ask anyone to do what I can’t, and I want members of the team to share what they know so that we can all level up together. Myself included!
- Initiative: I’m happy to offer help, but I don’t tolerate fools. If you come to me with a question, bring your own proposal for how you might deal with the situation. It might not be right, but at least you’ve shown me that you tried. And I’m not a genius – you may well have thought of a fantastic suggestion that I haven’t, and I want to hear it.
- Attitude: I’d favour someone with the right thinking towards testing over someone with great academic or intellectual achievements who isn’t interested in being a supportive team player.
- Diversity of thought: I don’t want to work with cookie-cutter testers. People with lots of different backgrounds and skillsets make for a much stronger team. (It works for Google.)
How to avoid common mistakes in Non Functional Testing
One of the biggest issues I see is testing being left or considered far too late. The earlier we’re engaged, the better. We encourage “shift left” testing, which means a better outcome and less stress all round. Waterfall, Agile or any hybrid in between – bring your testers to the table early in the project.
Allow enough time for proper testing
Managers need to allow enough time for their test team to find defects, and for the tech/dev team to have enough space for those defects to be fixed and retested properly. Rushing this never leads to anything good, and if the end result is a project that doesn’t work properly in production, there’s going to be an unhappy client. No one wants that.
Create manageable test sets
It’s important to create manageable test sets so that testing can be as efficient as possible. Failing to combine requirements (performance, utilisation and volume, for example) means a test set can balloon unnecessarily, which makes it hard (and expensive) to maintain.
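As a rough illustration (the scenario name, field names and figures below are assumptions, not a real project artefact), combining those requirements might mean defining one scenario that exercises performance, volume and utilisation together, rather than maintaining three separate test sets:

```python
# An illustrative sketch of one combined non-functional scenario.
# All names and values are assumptions made up for this example.
from dataclasses import dataclass

@dataclass
class NonFunctionalScenario:
    name: str
    concurrent_users: int        # performance: load level driving response-time checks
    dataset_rows: int            # volume: production-like data size for the same run
    max_cpu_percent: float       # utilisation: resource ceiling observed during the run
    p95_response_time_s: float   # performance: pass/fail threshold for the same run

# One scenario covers performance, volume and utilisation together,
# so there is one test to schedule, run and maintain instead of three.
peak_day = NonFunctionalScenario(
    name="peak trading day",
    concurrent_users=500,
    dataset_rows=10_000_000,
    max_cpu_percent=75.0,
    p95_response_time_s=2.0,
)
print(peak_day)
```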
Understand the broader architecture
Another mistake is when people fail to understand the environmental constraints and architecture in which they’re working. Some tests might not be appropriate if they could interfere with another activity. For example, I’ve seen servers being switched off by people who didn’t realise that another team was running a multi-day Soak test. We need to be aware and informed of what’s going on around us rather than thinking in silos.
Balance automation and manual testing
Automating everything might sound great to the bean counters, but it can sometimes lead to unnecessary expense and a sub-optimal result. It’s always important to know what the client really wants to achieve and then pick the spots where automation (via the right tools, not just any tools) makes sense. There’s no point doing tests unless you know what you’re trying to achieve. Automation can help you prove there are no problems, but manual testing can help you find problems – don’t dismiss either too easily.
Testers need to be brave and speak up
As I said before, waving things through is the easy route, but that doesn’t lead to the best results. If we say no, it’s not because of an ego trip or because we want to be difficult. It’s simply because we believe we can do better – and want to.
So, there we go. A good tester is naturally curious, able to communicate – and not afraid to say no!