I've recently been doing a lot of test-driven development at my new job, and one of the things I've noticed is that sometimes we just run into snags: times when we hit a wall and it feels like we aren't making any real forward progress. There have been a few times now where we have gotten the code to the point where the actual product works, but we spend a lot of time struggling to get the tests to pass and to really test the system in a way that we felt was good and proper. The trick is juggling the fact that we want to be a lean team that develops quickly, but that we also want to write tests first that will pass when we implement these new features. Sometimes that last part, making the tests pass when they should, can be a lot more challenging than it ought to be, and it sometimes seems that we're put in a position where we need to choose between cutting corners or further increasing the risk that we ship late. To be honest, I don't really have a right answer for all this, but in this post I'll think out loud about some ideas.
Come on, it is foolish to think that we will ever get to a point where we never hit test snags, so it is very important that we have a plan for when it does happen. At my company we sell ourselves as polyglot programmers, so one project we could be doing full-stack TypeScript and the next all Clojure (yaass, please give us more Clojure projects email@example.com 😉). My point is that you can't just document "every possible kind of unit test". Every project is different, and even if we only took on React projects, some projects use all class-based components while others are all functional components, versions of everything change all the time, browsers change all the time, etc. I believe that it is also good to always be pushing the boundaries and trying new things, but in a TDD way. The best way to get better at TDD is to actually do TDD, and stumbling in the beginning is a part of it. I think building a culture that truly appreciates automated tests, pair programming to keep each other honest, and doing test-first development is great, but the downside gets magnified as well when it's two people pair programming, spending two and a half hours trying to get a test to pass, when all along the answer was just to render the component in your test using "mount" and not "shallow". Things like this that just make you do a big facepalm in hindsight happen more than we genius engineers would like to admit, and it seems to me that we need to create an efficient overall process that takes into account the fact that test snags do happen.
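To make the "mount vs. shallow" facepalm concrete, here is a minimal sketch of that kind of snag, assuming Jest plus Enzyme and a hypothetical `UserBadge` component whose child `Avatar` renders an `img.avatar` element (the component names and the CSS class are illustrative, not from the original post):

```typescript
// Sketch of the shallow-vs-mount snag, assuming Jest + Enzyme are set up
// and that a hypothetical <UserBadge> renders a child <Avatar>, which in
// turn renders <img className="avatar" />.
import React from "react";
import { shallow, mount } from "enzyme";
import UserBadge from "./UserBadge"; // hypothetical component under test
import Avatar from "./Avatar";       // hypothetical child component

describe("UserBadge", () => {
  it("shallow() never sees the child's rendered markup", () => {
    // shallow() renders only one level deep: <Avatar /> stays an
    // unrendered stub, so this selector matches nothing no matter
    // how long you stare at the assertion.
    const wrapper = shallow(<UserBadge name="Ada" />);
    expect(wrapper.find("img.avatar")).toHaveLength(0);
    // The child is present as an element, just not rendered:
    expect(wrapper.find(Avatar)).toHaveLength(1);
  });

  it("mount() renders the full tree, so the markup is there", () => {
    // mount() performs a full DOM render, including all children,
    // so the <img> emitted by <Avatar> is actually findable.
    const wrapper = mount(<UserBadge name="Ada" />);
    expect(wrapper.find("img.avatar")).toHaveLength(1);
  });
});
```

The point isn't that `mount` is always right; it's that the "same" assertion can be impossible to satisfy under one rendering mode and trivial under the other, which is exactly the kind of thing that eats an afternoon of pairing.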
Once you have a mature project you will often have other files that are doing a similar thing or testing something that works similarly, and that's great when it's there because you can use it as a reference. It's often the first tests that are the hardest, though, and even when there are tests they can be missing key assertions, or certain important things may just not be tested at all. The first tests of any "new thing" are often the hardest, and you as the pioneer are laying down the precedent for future similar things. It can be very tempting to just say, "ok, screw this test. The thing works when it's actually run, but idk how the %$@#& to get this test passing". It is tempting to do that, but of course that goes against your moral code of doing things the nice way, the pure and holy TDD way. I think figuring out a "right" way that works and can be thoroughly tested is what needs to be focused on in order to overcome a snag, but it can mean banging and banging on the compiler until it finally works, and it can be tough to say how long you will be banging before the issue is behind you. Still, it's interesting to think about how we can maximize our development velocity without compromising on our test-first methodologies.
One solution I'm going to propose in the next retro meeting is that we make a ticket in the backlog of our issue tracker when this happens. I think we should clearly mark these tickets with a tag or a different color or something so that it is very apparent that it is a "test snag ticket". Individuals can then spike on it later during their down time, trying to make progress on the snag when they are relaxed and have time to fully research, experiment, etc. Then, when an actual working solution is discovered, we can come back together and build out the code in disciplined TDD style so that it can ultimately be merged into the shipped code. This frees up the pair programming team that initially hit the snag to move on to something else so they can stay in their flow and be productive, all without feeling like they are abandoning TDD. The truth is that TDD is hard, and it is notorious for being very slow, partly, in my opinion, because people who hit test snags don't react effectively. They either spend the afternoon pairing, banging their heads against the wall making no progress, or they just blow it off and are left with a suite with loads of tests that are all passing but are missing key assertions because unit testing the right thing was too hard. I am very fortunate to work on a team where every one of the engineers, the PM, the COO, and basically everyone in the consultancy emphasizes the importance of having strong automated tests and doing TDD through and through, and I want us to find a good balance between not wasting loads of time and not compromising on TDD principles, so that we can really be that legendary team that is super efficient and puts out bulletproof software week after week after week.
The posts on this site are written and maintained by Jim Lynch.