Monday, 4 November 2024

Document as you go

In my younger days working at McDonald's I was taught to clean as you go. The advice runs deeper than it first appears: it naturally breaks a large problem down into smaller ones and encourages you to tackle them immediately.

For some reason this approach has stuck with me, and I apply it to how I code and how I document. For example, when a colleague or report asks me a question, I can create a quick document that captures the answer and send them the link. I am no longer the bottleneck for that information, and we have something to build upon and reference in the future.

Before going all in on this approach, consider the cons: you need to make sure your docs are discoverable and easy to keep up to date. That way you end up with a single source of truth that isn't you, rather than a pile of documents that are all out of date.

Writing documentation that is easy to keep up to date is its own topic entirely; the short version is to keep things brief and factual. Making things discoverable is a harder topic to broach, and it relies on your tools more than your technique.

Ultimately, documentation should make you redundant as far as that part of your job goes, freeing up time to work on more interesting problems rather than answering the same questions over and over.

Saturday, 26 October 2024

Unit testing, and the 100% target conundrum

What to do if a manager wants 100% coverage in unit tests

100% coverage just means the code was executed during the test; it does not capture whether the test actually checks the outcome of the code being run. Developers often have to review and maintain test files of 500+ lines, where it becomes difficult to ensure each test is still relevant. Unit tests also only ensure the individual pieces work; they do not test the integration of those pieces. And with large swaths of test code come mistakes and shortcuts.
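
To make that concrete, here is a minimal JUnit sketch with an invented class: both tests below give totalWithTax 100% line coverage, but only the second one would catch a broken calculation.

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical class under test, purely for illustration.
class PriceCalculator {
    fun totalWithTax(subtotal: Double, taxRate: Double): Double =
        subtotal + subtotal * taxRate
}

class PriceCalculatorTest {

    // Executes every line of totalWithTax, so coverage tools report 100%,
    // but nothing is asserted. A bug in the calculation would still pass.
    @Test
    fun `covers the code but verifies nothing`() {
        PriceCalculator().totalWithTax(100.0, 0.2)
    }

    // Checks the outcome the requirement actually cares about.
    @Test
    fun `adds 20 percent tax to the subtotal`() {
        assertEquals(120.0, PriceCalculator().totalWithTax(100.0, 0.2), 0.001)
    }
}
```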

Ultimately the user is the highest priority. The best documentation of the user experience is the set of requirements for the features. Each requirement has a logic component and (usually) a visual component.

The logic should be tested with fast integration tests, and the visuals with fast snapshot tests (like Paparazzi). Coverage can then be calculated from the integration tests to find dead code, missing requirements, and so on. The goal is 100% coverage of the requirements.
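
As a sketch of the visual side, a Paparazzi test renders a piece of UI on the JVM and verifies it against a recorded image. The composable and its content below are made up purely for illustration.

```kotlin
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import app.cash.paparazzi.Paparazzi
import org.junit.Rule
import org.junit.Test

// Hypothetical piece of UI; in a real project this lives in production code.
@Composable
fun OrderTotalLabel(total: String) {
    Text(text = "Total: $total")
}

class OrderTotalSnapshotTest {

    @get:Rule
    val paparazzi = Paparazzi()

    // The recorded image is the assertion: one snapshot per visual requirement.
    @Test
    fun `shows the formatted order total`() {
        paparazzi.snapshot {
            OrderTotalLabel(total = "£12.99")
        }
    }
}
```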

Tests need to be fast enough to run on every pull request and merge.

Integration tests can focus on the requirement as a whole, which makes them more robust and better able to withstand implementation changes and refactors.
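
For example, a requirement like "the basket shows the total of all items" can be exercised through the feature's entry point with its real collaborators wired together, rather than mocking each class. The names and classes below are invented for illustration.

```kotlin
import java.util.Locale
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical feature code, simplified for illustration.
class BasketRepository(private val prices: List<Double>) {
    fun loadPrices(): List<Double> = prices
}

class BasketViewModel(private val repository: BasketRepository) {
    fun totalLabel(): String =
        "Total: £%.2f".format(Locale.UK, repository.loadPrices().sum())
}

class BasketTotalIntegrationTest {

    // Exercises the requirement through the ViewModel and repository together,
    // so moving logic between the two classes does not break the test.
    @Test
    fun `shows the total of all items in the basket`() {
        val viewModel = BasketViewModel(BasketRepository(prices = listOf(4.50, 5.49)))
        assertEquals("Total: £9.99", viewModel.totalLabel())
    }
}
```

If the repository talked to a network or database, a fake implementation at that boundary would keep the test fast while still covering the requirement as a whole.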

Unit tests can be saved for complex classes where they will have the best return on investment.

This approach solves issues like:

- Writing unit tests for simple classes like transformers, mappers, and middle layers that just tie dependencies together, which wastes developer and CI runner time.

- Developers chasing a 100% coverage number instead of reviewing and focusing on what makes a good test.

Once these elements are in place, the team can move on to automated tests that run the entire application (like Espresso), which can run uncoupled from merges and pull requests. If run on a regular cadence, any issues they find can be scheduled for review before dev complete or code cut-off, catching regressions in time. Development should be responsible for these tests: they are a lot of code, they need frequent updates, and they require a deep understanding of the code base.
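
As a rough sketch of what those application-level tests look like (the activity, view ids, and strings below are placeholders, not from any real project):

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.espresso.matcher.ViewMatchers.withText
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// Runs the real app on a device or emulator, so it is slow and belongs on a
// scheduled pipeline rather than on every pull request.
@RunWith(AndroidJUnit4::class)
class CheckoutFlowTest {

    @get:Rule
    val activityRule = ActivityScenarioRule(CheckoutActivity::class.java)

    @Test
    fun placingAnOrderShowsTheConfirmation() {
        onView(withId(R.id.place_order_button)).perform(click())
        onView(withId(R.id.confirmation_title)).check(matches(withText("Order confirmed")))
    }
}
```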

In summary:

- 100% coverage of feature requirements
- Integration tests and fast snapshot tests
- Unit tests for complicated individual classes only
- Code coverage is a possible indicator of code smells or missing requirements, but not the target
- Focus on good tests, rather than coverage