Notes about books read in 2019
Here are some notes on books I read in 2019. They’re not book reviews, more notes on whatever I found interesting or problematic in them. I can recommend reading all of them; the books I didn’t get much out of are not listed here. As much as possible I try to link to the author’s own site. There are no referral links.
This one has been on my list for a long time, and it was definitely time I got to it. If you are in tech and you only have room for one book in 2020, read it. It is refreshing to see something with reliable data and research behind it. I do not agree with everything, and the third part is definitely superfluous, but it is a good book. I keep finding it sad that we have all kinds of evidence showing that Release Engineering is the most important piece of infrastructure you can invest in as the leader of dev teams, and yet I still see no change toward Release Engineering being valued in the mainstream. Not even a bit. DevOps keeps paying at best lip service to it.
The 2020 Commission Report on the North Korean Nuclear Attacks Against the United States: A Speculative Novel, by Jeffrey Lewis
I have spent a lot of time in Resilience Engineering this year, as I did last year. Mostly around learning from incidents, but also around how we can make people understand the reality of making decisions in high-pressure/high-tempo environments. The world of nuclear weapons and nuclear incidents is highly congruent to the world of software incidents, particularly security incidents. Jeffrey Lewis provides one of the most gripping and believable narratives of how an incident unfolds.
It is also one of the best post-incident review artifacts I have seen in the past few years. There is a lot in the novel that we, as software incident investigators, can use to structure our own post-incident review artifacts.
It has also reinforced my belief that we need to get rid of nuclear weapons, but that is a different question altogether.
I have always had a particular interest in Human Factors and in mistakes in general. Multiple members of my family work in the healthcare industry and are deeply interested in Human Factors, so this was right up my alley. It is probably one of the best books I have seen on how we talk about mistakes as operators of the systems that failed. I will probably need to reread it, but its core proposition is that we do not have the correct language to talk about mistakes.
It links to something else I keep talking about: we have very few sociological studies of experts and expert networks, particularly in engineering. I would love to see more sociological and anthropological studies of engineers. Marianne Paget’s language, and her attempt to describe what operators involved in medical errors feel, has stuck with me.
It has already influenced the language I use, and will do so even more. Here is an extract that I slightly edited to adapt it to software:
Errors are indigenous feature of the diagnostic [..] process. Because they are, reparation is an intrinsic feature of the work. Reparation has its origin in the progressive refinement of understanding what requires work. It adheres in the intention of finding an appropriate [solution] for a [system]. Yet nothing requires reparation. Nothing assures its occurrence. On the contrary, an act of will by a specific [operator] forges the correction of an error or fails to do so.
This short book is a good introduction to a trade-off we weigh all the time in software: we can do it “fast” but “cut corners”, or we can do it slower and cut fewer corners. A lot of my time in 2019 was spent trying to find a better term than “technical debt” to describe the complexity of our everyday work as developers, and to get rid of the term “debt” in general. I think we have far outlived that analogy. This book is a good approach to one possible contender, but I am still searching.
I think it is time we moved past this kind of language. There is a world of real actions taken every day by devs, ops, business people, and designers around our systems, and this language just makes it harder to engage fully with the reality of those systems.
Erik Hollnagel offers some approaches to some of these problems.
I feel ambivalent about this book. On one hand, it makes a good case for what I call “production readiness” or “release readiness”. It is something I would like to hear more discussion about in our field. It is the single biggest differentiator I can find between the Erlang and Elixir communities and mainstream software. It has some good examples that will speak to a lot of the mainstream software crowd, and it addresses some real problems they all deal with every day.
On the other hand, it falls really, really short of my expectations. Yes, it has applicable advice, and yes, it will engage a crowd that needs it. But it does nothing to address the reality of how we got here, or what it means for the way we develop and code to move toward “production readiness”. It helps you tack on some production readiness after the fact, as an ops team that has some power over the release. It provides some things to point developers to so they fix their shit.
But it does not engage with the fact that all our tools, languages, and stacks are aggressively making it hard to be production ready. It regularly slips into really blame-y and “heroic” language. And it does not address the underlying reality of our field. To be honest, I am still searching for a book that does.
2020 Reading List
Currently on my reading list for 2020, some of which I have already started: