Something I worry about with generative AI in business and commercial use: almost no one fully reads anything in those environments.
Now imagine when even the author hasn’t read what was written… yikes. How do AI writing and AI reading change this reality?
Steven Sinofsky has an interesting post here about “slop” at work. I think about this in terms of heuristics. Our brains are lazy, and we all naturally develop shortcuts for evaluating a piece of work. Historically, a PRD or tech spec fully fleshed out with diagrams and clear explanations of context was a good signal that serious work and discovery had gone into the document. A PR full of detailed test cases indicated attention to detail and that thought had gone into the possible edge cases. But with LLMs it is easy to take a few half-thought-through bullet points and make them look credible, or to generate plausible test cases at scale.
We’re going to need to retrain our brains.