When I wrote Real-World Maintainable Software, I highlighted the fact that Dr. Winston W. Royce introduced the world to the concept of Waterfall Project Development with a stark warning: the implementation described is risky and invites failure.
Did the world heed this warning? No, we went full-on Waterfall for decades, suffering failed project after failed project, never wondering whether anything had to change.
This is a breathtakingly naive point of view from the author of a book with such a lofty title.
The idea that everyone went “full waterfall” is hilariously misinformed. And the claim that it was “failed project after failed project” until waterfall went away is comical too. Where did all that software come from if every project failed? Did all software engineers do nothing for decades and still get work?
I thought “oh, maybe I should take this guy seriously because he wrote a book with a title like that…?” Nah. It’s not that kind of book. It’s a glorified list in pamphlet form for junior developers who don’t know to buy something else.
I know hubris is one of the qualities of a good programmer, but he seems to have skipped the details of that one.
Quality is not a measurable entity
The moment at which I knew this article couldn’t be taken seriously came later than expected, but here it is:
How does one measure code quality? I’m a big advocate of linting, and have used rules including cyclomatic complexity, but is that, or tools such as SonarQube, an effective measure of quality? You can write code that passes those checks, but what if it doesn’t address the acceptance criteria? Is it still quality code then?
What if we don’t define code quality in terms of the aesthetics of that code?
Why is the perception of code quality so important prior to that code ever having been executed?
How can we test our code? What does its testability tell us? What do its tests tell us?
Is our test code good quality? Does it need to be? How can we know it is? Is its quality measured by the same metrics as the code it is testing?
‘Clean Code’ by Uncle Bob is a good place to start when answering these questions.
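The quoted passage’s point about complexity checks can be made concrete with a minimal sketch. This is a crude cyclomatic-complexity estimate in Python (counting decision points with the `ast` module), not the actual algorithm used by SonarQube, radon, or any real linter; the example function name and snippet are hypothetical:

```python
import ast

# Decision-point node types used in this crude approximation.
# Real tools use more refined rules; this is illustrative only.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

# A snippet with minimal complexity passes any complexity threshold,
# yet the metric says nothing about whether it meets the acceptance
# criteria (e.g. if the requirement was "sum the absolute values").
snippet = "def total(xs):\n    return sum(xs)\n"
print(cyclomatic_complexity(snippet))
```

This is exactly the gap the questions above are probing: a low score on such a metric tells you the code is structurally simple, not that it is correct.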
Am I too dumb or is this some LLM gibberish that has nothing to do with probabilistic programming…
I’ve copied the text and put it into a tool that tries to detect whether it was written by an AI (using an AI model itself, of course, the irony). Here is one such tool: https://stealthwriter.ai/
About 66% of the content is likely human-written, while 34% appears AI-generated.
The tool can also obfuscate or rewrite passages so that they avoid being detected by such tools. So it is possible this article was written by an AI model, an obfuscation tool was applied, and then maybe a human edited and rephrased things. I can imagine this being part of the publishing process. Complete assumption on my part.