One of the more positive outcomes of the COVID-19 crisis is that people are thinking of “the Future of Work” in the Mutable Enterprise and of business continuity in the face of the next pandemic, or whatever crisis comes next. This is giving an impetus towards digital transformation, but (as Bloor consultant Tim Connolly points out in “Remote working: how strong is your trust culture?”, 16 March 2020) this has to be done carefully, with due regard to maintaining digital quality and Trust in the new digital channels.
Which is why I was listening to a webinar from Eran Kinsbruner (Chief Evangelist, Perforce Perfecto) about moving testing into the cloud because (Eran says), “to ensure business continuity at all times, teams need to utilize a cloud infrastructure that is always on, available, secure, and scalable”.
I’d agree, and Perfecto is an important player at scale. However, I was particularly taken by a question about “non-functional” testing (NFT) in the Q&A after the webinar.
“Non-functional testing” isn’t testing that doesn’t work. It is testing against customers’ general expectations that digital channels/apps will be trustworthy and usable – secure, resilient, reliable, performant, fraud-resistant, offering a good User Experience and so on. In other words, it tests the (often unexpressed) customer and business requirements that aren’t “functional requirements” (hence the name) – the requirements that get built as automated business functions such as “display catalogue”, “make sale” etc.
Eran commented that NFT was important and that Perfecto had many capabilities that supported it. He also made the excellent point that it must “shift left” – in other words, you start it as near to the beginning of the project as possible; you don’t wait until everything is built (when it is too late to find out that a digital channel has systemic security or performance issues).
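To make “shift left” concrete, a performance expectation can be written as an ordinary automated test that runs from the very first build, alongside the functional tests. This is a minimal sketch only – the `display_catalogue` function and the 100 ms budget are invented for illustration, not anything from Perfecto or Compuware:

```python
import time

# Hypothetical business function under development; stands in for whatever
# the team is actually building ("display catalogue", say).
def display_catalogue(items):
    return sorted(items)

def test_display_catalogue_meets_latency_budget():
    # Non-functional requirement stated up front, not left until go-live:
    # the catalogue must be prepared in under 100 ms for 10,000 items.
    items = list(range(10_000, 0, -1))
    start = time.perf_counter()
    result = display_catalogue(items)
    elapsed = time.perf_counter() - start
    assert result[0] == 1, "functional check: catalogue should be ordered"
    assert elapsed < 0.1, f"latency budget exceeded: {elapsed:.3f}s"

test_display_catalogue_meets_latency_budget()
```

The point is not the specific numbers but that the expectation is written down, automated, and failing loudly from day one – long before an architecture-level performance problem becomes expensive to fix.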
I was interested to see Chris O’Malley (CEO, Compuware) say much the same about performance testing, in the context of Mainframe DevOps: “By automating shift-left performance testing, your teams can improve agility, deliver higher quality applications, reduce development costs and deliver better customer experiences”.
Nevertheless, I can still find apparently respectable web sources which imply that NFT is done after functional testing, which I think is misleading. In fact, I think the traditional “design, then build, then test, then check non-functional stuff if you still have time” is plain wrong – partly because it encourages poor testing when deadlines slip, but mainly because it is wasteful. If you discover that the basic architecture or design of a system is insecure just before you go live, you may have to throw most of it away and rebuild, as well as impacting the business with a missed deadline.
So, how do you test things like security and performance before you’ve built everything? Well, no doubt the software vendors I’ve referenced will help with specifics; but, broadly, you can look at designs and requirements, identify performance, security and similar antipatterns, and get rid of them early on, before they are coded. If the requirements include, for example, remote access through the firewall with nothing but a name check against a list of authorised people held, in clear, on the browser, this is wrong on so many levels that waiting for the security officer to throw it out just before (hopefully) it goes live would be silly. Fixing such a mess before anybody starts coding it is much cheaper than waiting until it is embedded in code; and a skilled developer, with help from automation, can identify many potential issues very early on.
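The automation hinted at above can start very simply: even a plain-text scan of requirement statements against a list of known security antipatterns catches howlers like the one described before a line of code exists. This is a toy sketch – the rule list and the requirement texts are invented for illustration, and real tools would work on structured design artefacts rather than free text:

```python
# Known-bad phrases mapped to the underlying security antipattern.
# Deliberately crude: a real review would use richer rules and context.
ANTIPATTERNS = {
    "in clear": "credentials or sensitive data stored unencrypted",
    "on the browser": "sensitive data held client-side",
    "name check": "authentication by identity alone, no credential",
}

def review_requirements(requirements):
    """Return (requirement, issue) pairs for every antipattern found."""
    findings = []
    for req in requirements:
        lowered = req.lower()
        for phrase, issue in ANTIPATTERNS.items():
            if phrase in lowered:
                findings.append((req, issue))
    return findings

# Example requirements, echoing the scenario in the text above.
requirements = [
    "Remote access through the firewall with nothing but a name check",
    "List of authorised people held, in clear, on the browser",
]
for req, issue in review_requirements(requirements):
    print(f"FLAG [{issue}]: {req}")
```

Crude as it is, this kind of check costs minutes to run at the requirements stage, versus a rebuild if the same flaw surfaces at the security sign-off just before go-live.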
Now, what is in a name? Well, I think “Non-Functional Testing” has unfortunate baggage. It implies that it is less important than testing business functions, and many people are unsure exactly what is included under that heading. There is an alternative name, “UX/DX (User Experience/Developer Experience) Testing”, but that term, to most people, misses out some stuff that still needs testing.
It’s difficult, because most techies say NFT and know roughly what it means (I hope), but I think that the term misleads business stakeholders, who speak ordinary English.
My favoured terms would be Customer (or business) Requirements Testing vs Customer (or business) Expectations Testing (where Expectations are the often-unstated non-functional requirements). Of course, if these requirements really are unstated, the first stage of Customer Expectations Testing is to get the expectations stated, or you have nothing to test against.
Do readers have my problems with the term “non-functional testing”? We expect (assume) that our systems will be performant, resilient, secure, easy to use, maintainable etc., without always specifically saying so. Perhaps the term should be System Assumptions Testing…