In this week’s episode of Semaphore Uncut, I had the honor of speaking with author, consultant, and continuous delivery thought leader Dave Farley.
Dave, who has been in the industry for more than 30 years, was kind enough to share his experience as a strategic software development consultant, industry patterns (and anti-patterns) he has observed, best practices for setting up successful testing strategies, and more.
Listen to our entire conversation below, and check out my favorite parts in the episode highlights!
Like this episode? Be sure to leave a ⭐️⭐️⭐️⭐️⭐️ review on the podcast player of your choice and share it with your friends!
Highlights from this Episode
Darko Fabijan (00:02): Hello and welcome to Semaphore Uncut Podcast, where we talk about continuous delivery, continuous integration, testing, and developing software in general. Today with us we have Dave Farley.
Dave, thank you so much for joining us. Please feel free to go ahead and introduce yourself.
Dave Farley: It’s a pleasure, thank you. My name is Dave Farley. I’m one of the authors of the Continuous Delivery book that described continuous delivery in the way that you think about it these days, I think, for the first time.
These days I make a living as a consultant advising organizations on how to improve their software engineering practice in general, but specifically in the context of continuous delivery. So if you’re a big organization with legacy systems or a complicated build system or anything else, I help people get over those sorts of problems.
A day in the life of a strategic software development consultant
Darko: Can you guide us through some examples of how day-to-day life looks as a software consultant helping companies on their journey?
Dave: Sure. One of my clients described what I do for a living as strategic consultancy. So I’m no longer the sort of consultant that goes in and kind of writes code for people. I used to do that, but that’s not really what I do anymore. Mostly what I do these days is to advise organizations on broader topics.
And so mostly my consultancy kind of falls into three different groups of activities. I do quite a lot of public speaking, so I speak at conferences and things like that. And sometimes I get engaged by organizations to go in and talk to them and try and get them interested or enthusiastic about ideas around continuous delivery. That’s a small part of what I do.
Occasionally, I do things like run training courses for people, but the bulk of my work is really about consultancy. So what I tend to do is go into an organization and try and analyze the way in which they practice software development, from soup to nuts.
We try and do some kind of value stream analysis and understand how their development process works, and then usually I kind of critique it. I’ll kind of give them advice about different parts of that, and that usually boils down to a bunch of different kinds of activities that they might carry out.
Patterns (and anti-patterns) in software development companies
Darko (03:00): It would be interesting to hear more, since some of us may recognize ourselves in those categories. What are some of the anti-patterns you’re seeing, and what are people struggling with the most?
Miscategorizing software development
Dave: Quite a lot of things. Let me philosophize for a moment to try and put that into context. I think that the biggest anti-pattern, the trillion-dollar mistake that our industry has made, is miscategorizing what software development is. Nearly all organizations, or nearly all of my clients’ organizations anyway, try to treat software development as a production problem: a problem of scaling up production in order to produce things more reliably, for example.
And software development isn’t that kind of problem. I think of waterfall development as the equivalent of a production line approach. And software isn’t that kind of problem.
Software development, in my eyes, is always an exercise in learning and discovery. So I think that, first and foremost, we should be optimizing our work to be really, really good at learning, discovery, experimentation, exploration, and those sorts of things. I think that’s the biggest anti-pattern. As for one of the common facets of it that I see in my clients’ organizations: I think it’s fair to say that agile thinking has, over the last 20 years, won the argument for how to approach software development at some level.
Misinterpreting agile software development
And so what I tend to see is lots of technical teams in bigger organizations practicing what they think of as agile software development. Usually that means they’re practicing some form of Scrum, and usually that means they’re having stand-up meetings and running in things called sprints, but they’re not delivering software at the end of the sprint.
The stand-up meetings are usually status meetings, and there’s very little or no automated testing, no continuous planning, and no customer involvement. So it’s not really Scrum, let alone really agile or anything else.
Disconnecting business and development teams
The last one that I’ll call out as commonly broken is the interface between the business and development teams. The story or requirements process is often inadequate.
I most commonly see these requirements expressed as technical instructions. So do this thing, add this column to a database, refactor that component—those sorts of things. That’s not really an effective way of organizing a development process for lots of reasons. Some of them are obvious and some of them are subtle, but that’s a poor interaction. It’s sort of like trying to write code by remote control, and that’s not an effective strategy.
So there’s a whole bunch of things that people commonly get wrong, but I think the big one it all boils down to is this misapprehension: trying to treat our problem as though it’s a production line when it is not. It’s nothing like a production line. It’s a creative, exploratory, learning-intensive problem.
The last part of that issue in big organizations is the other really hard problem in software development: not only is this a problem of exploration and learning, it’s also a problem of managing complexity. Ultimately, all of our work has to fit within the constraints of what fits inside a human being’s head.
Therefore, we’ve got to treat very, very seriously ideas like modularity and coupling and separation of concerns. And that’s true at an organizational level, at least as much as it’s true at the technical level of the software that we build. And that’s another area that traditional organizations comprehensively tend to get wrong.
Continuous delivery should be a standard engineering practice
Dave (09:31): I saw something on Twitter this week. I was involved in a conversation where somebody posted a challenge: could you name a single practice across our industry that we could consider to be standard? I thought hard about this and couldn’t, and then somebody piped up and said version control.
Yet I came across an organization last year that didn’t use version control. In fact, I think using version control for some kinds of software is actually quite unusual; when people are configuring production systems, often they don’t use version control. So even something that fundamental, or what I would consider fundamental to doing a decent job, is not used across the board.
You couldn’t say that about other professions. All surgeons wash their hands; there are no surgeons that don’t wash their hands. I think we’re an odd industry in that respect, and in part I think that boils down to not looking in the right places for how to solve these problems.
One of the reasons why I value continuous delivery and the thinking around it so highly is not because of my personal involvement. I don’t think this is down to just having a personal connection with the idea. I genuinely don’t believe that’s true. What I do have a personal connection with from my point of view is the idea of the application of the scientific method. I think the scientific method is humanity’s best problem-solving technique. And I believe, genuinely, that continuous delivery is an application of the scientific method to solving problems in software.
That means continuous delivery has a decent case to be considered a genuine engineering discipline for software. And if that’s the case, the implication is that people who practice continuous delivery ought to do a better job than people who don’t, because that’s what engineering does: it amplifies the effectiveness of craft, creativity, and understanding, and makes those things higher quality and more reliable. That’s what happens in other disciplines.
So we ought to be able to observe that in software development, and the evidence is that that’s what we see. If you read the Accelerate book, which draws on the State of DevOps reports, that’s what the numbers tell us. They tell us that organizations that practice continuous delivery produce higher-quality software more quickly, the people working on it enjoy it more, and the organizations that practice it make more money. Those are pretty good measures, on the whole.
Setting up your testing strategy for a faster feedback loop
Darko (16:54): In the talk that we had previously, you mentioned the strategy of testing. What do you see as a successful testing strategy and getting to a fast feedback loop?
Dave: Yeah, by all means. So the mental model I have when I think of this is based on a real-life project. I was fortunate, while in the middle of writing the Continuous Delivery book, to get employed building one of the world’s highest performance financial exchanges. I was already immersed in the ideas of continuous delivery, so we did it based on continuous delivery from scratch.
What we’re trying to achieve is the fast, high-quality feedback that you’re talking about. What that means is that we try to get to the point where we can make a change and release it into production with a degree of confidence.
And there’s a number of things that go into that. The first thing is the ability to try and weed out the bad changes as far as we can. So the spine of the Continuous Delivery book is organized around an idea called a deployment pipeline. And the aim of a deployment pipeline is to organize the evaluation of any change that’s destined for production and try to eliminate the bad changes.
This is another one of those things that we learned from science. We’re trying to treat it as a falsification mechanism. If any test fails, we’re going to reject the change.
Then we commit that code, and what we’re looking for is feedback, very quickly, that gives us a sense of whether this was a good change or a bad one, with a fairly high level of confidence that if all of those tests pass, everything else will be okay.
Generally, I advise my clients that the target they should be aiming for is feedback in under five minutes with roughly an 80% level of confidence. That says a huge amount about the nature of the tests you can afford to run in that amount of time. It means they can’t afford to be starting up another process, talking to a database, talking to a file system, talking to a message queue, or any of those things.
Really, we’re talking about the small, focused unit tests that are, on the whole, the output of test-first, test-driven development. That gives you lots of beautiful properties in terms of the impact on the design of your system, and it’s a very powerful step in its own right.
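To make the kind of test Dave describes concrete, here is a minimal sketch in Python using the standard `unittest` module. The `OrderBook` class is a hypothetical example (not from Dave’s actual exchange): the point is that everything runs in memory, with no database, file system, or message queue, so thousands of such tests can finish well inside the five-minute budget.

```python
import unittest

class OrderBook:
    """Hypothetical in-memory domain object: a tiny limit-order book."""

    def __init__(self):
        self.bids = []  # list of (price, quantity) tuples

    def place_bid(self, price, quantity):
        self.bids.append((price, quantity))
        # Keep the highest-priced bid first.
        self.bids.sort(key=lambda bid: -bid[0])

    def best_bid(self):
        return self.bids[0] if self.bids else None

class OrderBookTest(unittest.TestCase):
    """Fast, isolated tests: no process startup, no I/O, pure logic."""

    def test_best_bid_is_highest_price(self):
        book = OrderBook()
        book.place_bid(100, 5)
        book.place_bid(101, 3)
        self.assertEqual(book.best_bid(), (101, 3))

    def test_empty_book_has_no_best_bid(self):
        self.assertIsNone(OrderBook().best_bid())
```

Run with `python -m unittest` in the module’s directory; because each test touches only in-process objects, a large suite of them still gives feedback in seconds.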
The limitations of test-driven development
Dave (19:54): Academic research suggests that just that kind of testing will eliminate something around 70-odd percent of production defects. So you’re talking about a 10x improvement. This is one step in that direction. Seventy-odd percent of production defects means that you spend much less time chasing and fixing bugs in production. That’s brilliant, and essentially all I’ve described so far is continuous integration.
The limitation of test-driven development is that it answers the question, “Does the software do what I, as a software developer, intended it to do?” It just verifies that. It’s a bit like double-entry bookkeeping, but for software. That’s brilliant, but it’s not enough. You also need user-centered tests: you need to evaluate the change from the perspective of the user of the system.
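A user-centered test is typically written in the language of the user’s problem domain rather than in technical terms. Here is a hedged sketch of that style: the `AccountDsl` class below is a hypothetical test-infrastructure layer (in a real system it would drive the deployed application over HTTP, a UI, or a queue), and the account names and amounts are invented for illustration.

```python
class AccountDsl:
    """Hypothetical driver that hides technical detail behind domain language.

    A real implementation would talk to the running system; this sketch
    keeps an in-memory model so the example is self-contained.
    """

    def __init__(self):
        self.balances = {}

    def create_account(self, name, opening_balance=0):
        self.balances[name] = opening_balance

    def transfer(self, source, target, amount):
        # Only move money if the source can cover the transfer.
        if self.balances[source] >= amount:
            self.balances[source] -= amount
            self.balances[target] += amount

    def balance_of(self, name):
        return self.balances[name]

def test_transfer_moves_money_between_accounts():
    # The test reads in the user's language -- accounts and transfers --
    # with no mention of how the system implements them.
    bank = AccountDsl()
    bank.create_account("alice", opening_balance=100)
    bank.create_account("bob")
    bank.transfer("alice", "bob", 30)
    assert bank.balance_of("alice") == 70
    assert bank.balance_of("bob") == 30
```

The design choice that matters here is the separation: the test case states what the user needs, while the DSL layer absorbs every change to how the system delivers it.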
What people get wrong about microservices
Darko (26:19): There has been quite a lot of talk about microservices over the last couple of years. What are your thoughts on microservices in general? Is it something that you think has high potential value?
Dave: I like the microservices approach, but I actually think it’s an expression of some deeper principles. I talked earlier about software being essentially an exercise in learning and managing complexity, and I think what microservices do is give you a little help on both of those fronts; there are other strategies you can also use to keep those things in check. If you understand that that’s what’s going on, you better understand why microservices are important.
The reason I say this is because I think lots of people get microservices wrong. I think microservices are important for two reasons. First, each service is simple and therefore relatively easy to reason about. That’s a big step forward in terms of being able to fit things into people’s heads.
The other attribute of microservices, though, that’s really important is the degree to which they decouple organizationally. So the reason why Amazon first took the step to microservices wasn’t because of fitting stuff into people’s heads as much as it was to liberate the organization to be able to work more independently in different parts of the organization. And I think that’s a crucially, vitally important strategy. And you want to be able to do that. What that means is you don’t get to test all of the microservices together before you release them.
We talked about anti-patterns earlier on. One of the anti-patterns that I see very commonly in large organizations is that everybody has read about microservices, and it’s kind of an obviously good idea when you read about it; it makes an awful lot of sense. But as with lots of ideas, what we tend to do is take them on board while leaving out the bits that sound hard or are difficult to think about, and just go for the easy bits.
I see lots of organizations writing these tightly coupled components that they call microservices, but to my mind they aren’t, because they’re not independently deployable; the teams can’t even imagine deploying them without testing them together. Independent deployability is a key value, and a key difficulty, of microservices.
PS. Be on the lookout for Dave’s upcoming book!
Darko (33:37): You mentioned working on a new book and what software development will look like in a hundred years, which is quite intriguing. Can you share more about when your book will be available and what it’s about?
Dave: Sure. I’m probably talking about this a bit early in the life of the book. I’ve written about half of it so far, and I’m just preparing to maybe release it on Leanpub to get some feedback. So if people want to track where I’m up to, the best thing they can do is follow me on Twitter, where I’ll announce it. My handle is @DaveFarley77.
The working title, which I’ll probably change, is Modern Software Engineering, which is a bit of a grandiose title. And yes, at some level, the idea of what software development will look like in a hundred years’ time is related to that; it’s one of the spinoffs I’m thinking about in the book.
Darko: Again, thank you very much for this conversation, and good luck with your book. We’ll be sure to share the link to your Twitter account so people can discover the book as soon as you publish something and start gathering feedback.
Dave: Thank you very much!