Right, I guess I’m more taking issue with the article than with you, because the “trail” is always documents, and it’s pretty easy for LLMs to fake documents. I mean, humans have been half-assing verification checks for decades, and it has kind of worked, because even a half-assed verification document has required at least some fluency with the code under test; just producing a plausible verification trail forced the engineering team to develop enough understanding of the code to maintain it. Now the relationship between plausible documentation and a dev’s understanding of the code being verified is much less reliable, so we need more precise mechanisms. In other words, the signals of trust have always been broken; it just hasn’t been a problem until now, because there were side effects that made the signals a good-enough proxy for what we actually wanted. That proxy is no longer reliable.