11 Jul 2022 · Greatest Hits

    How to Introduce Testing in Teams and Companies 

    15 min read

    Testing is a way to encourage change in a company.

    Testing provides a robust safety net and supports code refactoring. Refactoring can refresh the codebase and make the code more flexible. Flexible software enables the quick introduction of new features, benefiting the business.

    Testing, which on the surface appears to be a mere technical detail, can energize the development of software products.

    This article offers some suggestions to encourage the introduction of testing practices in companies looking to improve product quality and production efficiency.

    Joining new companies and teams

    Before we start talking about testing, let’s take a look at a common situation that you might have experienced in your career.

    When you’re new to an organization, you can’t always address existing technical issues. The possibilities vary depending on the role: an outside consultant can bring knowledge and experience but cannot, for example, make strategic decisions. A software developer working as an employee will have a different scope than a software architect or CTO.

    In addition to role, technical issues are influenced by the available budget and the people who make up the workforce. For example, in some situations, development is outsourced to external software houses where intervention is not feasible.

    All possible aspects aside, it’s always interesting to observe the dynamics that govern software production: every company has its own idiosyncrasies, and the best ideas are often drawn from them.

    A pattern observed with some frequency is that of the young, dynamic, and responsive company that acquires market share over time and grows at a dizzying pace. As the company gets bigger, it grows sluggish and loses its initial momentum (as per the Ringelmann Effect). Satisfying customer needs becomes a very demanding process and delivery times increase. The rush leaves less time for reflection and improvement, resulting in a downward spiral: more and more defects appear, and existing features need continuous rework. Production and releases slow to a crawl and customer dissatisfaction grows.

    There are many individual causes that give rise to such situations: over-demanding clients, under-staffed teams, lack of skills, and even more sensitive and personal issues in interpersonal relationships.

    In practice, it is often a mix of several factors. An effective place to start is to take teams and companies through three essential steps: first, viewing; second, measuring; and third, improving. Before testing can be introduced and implemented, the current state of the organization (and its strengths/weaknesses) should be determined. This way, various stakeholders can be made aware of the current state of affairs and will be able to more easily see where testing can fit in and make major improvements.

    Viewing

    In this context, the verb “view” is used in a broad sense. It starts with the actual visualization of data, then the introduction of physical and electronic boards to share the vision, roadmap, and backlog, and goes all the way down to information radiators that show the real-time health of the Continuous Integration pipeline. These activities lead people to observe behaviors and situations that are not clear to everyone. In the technical domain, it’s not uncommon to see teams gain a new awareness of the state of affairs in the organization and how their work relates to other domains.

    Once we are “fully aware of our work and how it fits into the bigger picture”, it becomes natural to start measuring.

    Measuring

    Without altering anything about the existing process, the second step is to introduce metrics. You could start by counting and cataloging activities, features developed, bugs introduced, and support tickets processed. In a short time, patterns will emerge, and the people involved in the process will gain new insights. Personal feelings and judgments give way to data and statistics, which generally have the advantage of being more objective.
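
    As a minimal illustration of this first measurement step, the sketch below tallies work items by type from a hypothetical tickets.csv export; the file name and the type column are assumptions, to be adapted to whatever your issue tracker actually produces.

    ```python
    # A minimal sketch of a first measurement step: counting work items by type.
    # "tickets.csv" and the "type" column are hypothetical; adapt them to the
    # export format of your own issue tracker.
    import csv
    from collections import Counter

    def count_by_type(path: str) -> Counter:
        """Tally how many features, bugs, support tickets, etc. were processed."""
        with open(path, newline="") as f:
            return Counter(row["type"] for row in csv.DictReader(f))

    if __name__ == "__main__":
        totals = count_by_type("tickets.csv")
        for kind, count in totals.most_common():
            print(f"{kind}: {count}")
    ```

    Even a table this simple, refreshed regularly, is often enough to make recurring patterns visible and to replace impressions with numbers.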

    Improving

    Once you have identified the critical points, the first improvement experiments can be implemented. A clear hypothesis and a precise, measurable expected result allow the actual change to begin.

    What role does testing play in this design?

    A vital role, as long as certain precautions are taken.

    Testing as a catalyst for change

    In a previous article, we saw how there is a wide range of tools and approaches for testing.

    In this stage, you can (and should!) engage in multi-layered discussions regarding the available options and practices, in order to choose the best fit for the organization.

    With product teams and developers, it’s good to open a conversation to discuss current technical practices and tools, to share existing knowledge, and gain new insights.

    Furthermore, it’s just as essential to foster a debate on the topic with staff outside of the development department. Non-technical staff interact with the product as well, and their expertise should be taken into account.

    As we mentioned, testing is often relegated to a mere technical issue, but this is a reductive view. Testing hypotheses is an approach that has its roots in the experimental method: it is how science distanced itself from superstition and morphed into its current form.

    Therefore, testing and experimenting are some of the most effective ways to challenge our own assumptions and those of the people we work with.

    When a customer asks for a new feature, “How would you test it?” is a very powerful question that will get them to think analytically about their request: if their ideas are clear, it will be easy for them to offer plenty of test cases. If there is hesitation, the lack of detail is brought to light, and there is then an opportunity to further refine the requested functionality and the underlying needs. In practice, this can be done by taking advantage of Acceptance Testing benefits.
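
    To make this concrete, here is a minimal sketch in Python with pytest of how a customer’s examples can be captured directly as acceptance checks. The free-shipping feature, the shipping_fee function, and the threshold values are all hypothetical, invented purely for illustration.

    ```python
    # Hypothetical acceptance checks derived from examples given by the customer:
    # "orders of 50 euros or more ship free; below that, shipping costs 5 euros".
    import pytest

    def shipping_fee(order_total: float) -> float:
        # Placeholder implementation of the hypothetical feature under discussion.
        return 0.0 if order_total >= 50 else 5.0

    @pytest.mark.parametrize(
        "order_total, expected_fee",
        [
            (49.99, 5.0),   # just below the free-shipping threshold
            (50.00, 0.0),   # exactly at the threshold
            (120.00, 0.0),  # well above the threshold
        ],
    )
    def test_shipping_fee_matches_customer_examples(order_total, expected_fee):
        assert shipping_fee(order_total) == expected_fee
    ```

    If the customer can add rows to this table without hesitation, the request is probably ready for development; if they struggle, the conversation itself has already done its job.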

     

    How to talk about testing to non-technical people

    One of the greatest obstacles to the introduction of cultural change is the inability to clearly express its advantages. Speaking of testing, it is not uncommon to find a certain inhibition or inability to convey the importance of a given change to people who are not directly involved with the technical side of things (or think they are not).

    When addressing this topic with a team of developers, it’s usually pretty easy to reach a consensus about the benefits that testing brings to development (although there are plenty of naysayers).

    However, there are prerequisites (time to learn new practices, introduce tools, etc.) and sometimes not everything is manageable within the team. It often happens that you are forced to ask for more budget (in time and money) to enable the desired change of pace.

    And it is precisely when developers try to convey their needs to non-technical domains that difficulties begin to emerge. The reason for this is often that developers try to convince non-technical people by depicting only technical advantages, for example when talking about technical debt.

    Developers often talk about refactoring and technical debt, and are unaware that non-developers don’t necessarily understand the context. Non-technical staff often think that adding new features simply boils down to adding more code: they don’t understand the complexity of the development process (as most developers don’t understand the inherent complexity of business affairs).

    Very often, programmers fail to trace the common thread that testing weaves through the entire implementation chain, from idea to software in production, and consequently fail to convince people outside the team. It is important to frame discussions in a way that all participants can understand.

    In these situations, the person who can help the team most is someone with a certain degree of reputation and authority, such as a coach, engineering manager, or CTO. Their experience and impartial (“super partes”) position make it possible to bridge the proverbial gap between business and development, favoring the introduction of a “new language” everyone can understand.

    A mature team, which has learned to see and measure the fruits of its labor, should be able to effectively communicate its experience to its business-oriented co-workers. This communication often takes the form of various data visualizations that display agreed-upon performance metrics. Information, in the form of metrics, is a powerful weapon to open the eyes of non-technical people to technical issues.

    The number of open bugs and regressions, and the time spent fixing existing code compared to the time spent innovating and adding value to the product, are all important points to highlight.

    People leading teams and companies think in terms of tactics and strategies. A healthy company is one that makes a profit, which allows it to maintain a certain level of well-being for its employees. When a development team can show that a new approach would lead to more efficiency and a better product, you can be sure that non-technical executives will be eager to implement it.

    Testing culture plays a key role here: testing everything, from idea to production code, is one of the most effective ways to reduce waste and maximize efficiency.

    The strategic importance of testing

    So, we’ve seen how testing has strategic relevance within the entire workflow of a company. Hopefully now we can stop thinking of testing simply as the way we “check that the software runs”.

    Testing code is important, but it’s only one of the many places where we can use testing.

    As we mentioned, testing should also become part of the culture of those who collect user needs. An effective business analyst doesn’t just collect requirements and pass them on to the development team; they are also able to explore each request, determining the actual need and getting the client to make the details clear and explicit.

    Adding new functionality to a product without necessary governance can sneakily lead us into the “Swiss Army Knife” trap: little by little we create a tool with lots of functionality, but which doesn’t particularly excel at any of said functions.

    The ultimate goal is to solve problems, developing software only when needed.

    Less functional analysis, more examples

    In large firms, it is not uncommon to see an equally large hierarchy of roles and job titles. Communication between departments becomes complex, and with it, so does the way of conveying the needs that the software should meet.

    In these situations, a typical solution is to use specification documents and meticulously collected requirements, which are meant to accurately convey intent and leave no room for doubt or interpretation.

    And the belief is that, once written, these requirements will never change.

    Unfortunately, this often remains a mere illusion. Edward Berard once said that “Walking on water and developing software from a specification are easy if both are frozen”. The bigger the software to be implemented, the more true this statement is.

    This topic has been talked about for years. Brian Marick has spilled rivers of ink emphasizing the importance of examples in testing, so much so that he named his consulting firm Exampler.

    In a previous article, we touched on his work regarding tests, particularly his way of cataloging them. In that important piece, Marick helps the reader understand the various types of tests, making an important distinction: it is one thing to verify results (checking), and another to explore possible unexpected behaviors (testing).

    This distinction is of fundamental importance because, in addition to clarifying the types of tests and the techniques associated with them, it places the emphasis on what should be automated (checking) and what should be performed manually (testing).
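
    In code, the difference is easy to see. The sketch below shows a check in this sense: a fixed input with a known expected output, something a machine can verify on every run, whereas exploratory testing has no predefined expected value and remains a human activity. The parse_price helper is hypothetical, invented purely for illustration.

    ```python
    # A "check": the expected result is known in advance, so a machine can verify
    # it on every pipeline run. The parse_price helper is hypothetical.
    def parse_price(text: str) -> float:
        return float(text.replace("€", "").replace(",", ".").strip())

    def test_parse_price_check():
        # Checking: fixed input, fixed expected output.
        assert parse_price("€ 19,90") == 19.90

    # Exploratory testing, by contrast, cannot be scripted in advance: a tester
    # probes the system with unusual inputs ("", "19,90,00", very long strings)
    # looking for surprising behavior, and judges the results as they go.
    ```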

    When gathering requirements, it is therefore important to take these aspects into account: for functional requirements, there must be a large collection of tests (a test suite) that developers and QA engineers will implement as automated tests, run continuously in the CI/CD pipeline.

    For non-functional requirements, we have to set up a suite of tests to assess the robustness and security of the solution offered; additionally, we need to design a series of exploratory tests that QA engineers can perform manually to constructively criticize the product.
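
    As a sketch of what an automated non-functional check might look like, the example below asserts a simple response-time budget around a hypothetical search function; the function, the data set, and the 0.1-second budget are all invented for illustration, and a real suite would lean on dedicated load, robustness, and security tooling.

    ```python
    # A minimal, illustrative non-functional check: a response-time budget.
    # search() and the 0.1-second budget are hypothetical.
    import time

    def search(catalog: list[str], term: str) -> list[str]:
        return [item for item in catalog if term in item]

    def test_search_stays_within_time_budget():
        catalog = [f"product-{i}" for i in range(100_000)]
        start = time.perf_counter()
        search(catalog, "product-99")
        elapsed = time.perf_counter() - start
        assert elapsed < 0.1, f"search took {elapsed:.3f}s, budget is 0.1s"
    ```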

    The ability to ask for examples during requirement gathering, in addition to facilitating the writing of a test suite and the subsequent creation of automated tests, allows you to validate the clarity of the solution required by clients.

    If the requester of a particular feature is unable to give clear examples of its use and cannot accurately formulate the tests that would be required to verify proper implementation, then we have a clear signal that further refinement is needed before development starts. We will be able to separate clear requests from those that need further refinement to be implemented, user stories from spikes, and so on, all to the benefit of the development process.

    Using tests as small contracts with stakeholders

    The concept of the User Story was born in the XP days. Ron Jeffries, the father of the user story, introduced a key to interpreting it, the so-called 3Cs: Card, Conversation, Confirmation.

    By Card, Ron means the importance of having a place to gather information as it emerges. The term Card is not simply symbolic: the first user stories were actually written on index cards, which the early XP teams had in abundance as the old filing cabinets fell into disuse.

    The second “C” is the keystone: Conversation. Gathering and writing requirements are necessary activities, but they are often not sufficient. Those who ask for functionality must be able to explain their needs in a face-to-face conversation with the development team that will implement the required software. Only through this discussion can all doubts be resolved; it gives the user the opportunity to better convey their needs, and the programmers the chance to fulfill their key tasks: questioning the expressed need, clarifying ideas, highlighting any technical implications, and finally accepting the request and proceeding to development.

    The third “C” stands for Confirmation: once development is complete, a user story must be confirmed. Here is where the Acceptance Tests of XP culture come into play, and where we, too, find our beloved tests. In this scenario, they act as validators of the initial hypotheses, almost as if they were the clauses of a small contract stipulated between those who ask for a functionality (the user or their representative) and the developers. And these clauses protect both parties: they allow those who requested the development to be sure that the system will behave as expected, and the developers to have a precise idea of when the development can be considered finished.
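
    The contract-like nature of these tests shows in their shape: each one states a precondition, an action, and an agreed outcome. Below is a minimal, hypothetical sketch for a story such as “a registered user can redeem their welcome coupon only once”; the Account class and its rules are invented purely for illustration, not taken from any real system.

    ```python
    # Hypothetical acceptance test acting as a small "contract clause" for the
    # story "a registered user can redeem their welcome coupon only once".
    class Account:
        def __init__(self):
            self.coupon_redeemed = False
            self.credit = 0

        def redeem_welcome_coupon(self) -> bool:
            if self.coupon_redeemed:
                return False  # the clause: a second redemption is refused
            self.coupon_redeemed = True
            self.credit += 10
            return True

    def test_welcome_coupon_can_be_redeemed_only_once():
        # Given a registered user who has not used the coupon yet
        account = Account()
        # When they try to redeem the welcome coupon twice
        first = account.redeem_welcome_coupon()
        second = account.redeem_welcome_coupon()
        # Then the first attempt succeeds, the second is refused,
        # and the credit is added exactly once
        assert first is True
        assert second is False
        assert account.credit == 10
    ```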

    Promoting a culture of quality

    This article is the result of several studies and much practice in the field. We have not yet talked about quality: now is the time to do so.

    What we have done so far is describe the road that leads to the realization of a quality product, a road made safe and smooth by a multitude of tests, each with its own precise purpose. This is a fundamental prerequisite: one cannot proceed with determination on treacherous roads, where risks and dangers lurk around every corner, just as we don’t drive fast when road conditions are bad and the car hasn’t seen a mechanic in a long while. You can’t safely go fast if you don’t prepare well first.

    Testing is a fundamental part of quality-driven culture, a prerequisite for avoiding the two biggest risks for a company developing products: the risk of doing the wrong thing, and the risk of doing it too late.

    With this article, we wanted to bring the topic of testing to a higher level, to dispel the myth that it is an engineering vanity, and to convince you that testing’s rightful place is in the business strategy.

    Changing course is not a simple operation, and many companies don’t succeed at it. This article’s purpose is to encourage you to take the first steps: narrow the range of action and focus on a precise theme, one that is clear and able to influence every part of the company.

    Tests have a cost, no doubt, but their costs vary enormously. The closer the tests are to those who make the product, and the closer they are written and executed to the development cycle, the cheaper and more effective they are. As time goes by, their cost increases: just think of the cognitive load of revisiting a theme from months or years earlier, or of non-technical people having to deal with unexpected anomalies and behaviors.

    Forgoing testing can be a way to save time and money, sure, but it can (and often will) come back to bite you down the road. There is no law that requires testing (excluding, of course, special cases subject to special laws), but that isn’t a reason not to implement tests.

    Either way, if you don’t test your product, someone else will, and it’s more expensive to go back and fix problems with a finished product than it is to make fixes during development.


    Written by:
    Ferdinando has been a developer for a long time. Today he works with teams as an independent consultant, coaching and training them in both the technical and process aspects of their work. He published a book about Git. He is a volunteer with the Italian Agile Movement. He is passionate about conferences: he likes to attend, speak at, and organize them.
    Reviewed by:
    I picked up most of my soft/hardware troubleshooting skills in the US Army. A decade of Java development drove me to operations, scaling infrastructure to cope with the thundering herd. Engineering coach and CTO of Teleclinic.