The 5 myths of the agentic coding apocalypse
Agentic coding with AI tools requires careful management, testing, and maintenance to avoid potential pitfalls and ensure high-quality software.
There are two prevailing narratives about agentic coding. The first is that you can write a single sentence and the AI will hand you back a million-dollar app. The second is that since the AI is writing all the code, humans have no idea what's inside it, so it must eventually fail and trigger a large-scale apocalypse. Both narratives are caricatures of reality.

Working with agentic coding is like eating at a restaurant that specializes in fusion cuisine: the chef has a good reputation, but you have no idea what you'll get. You have little insight into the actual code coming from the AI, and you largely have to accept whatever you're served. The quality of the code depends on the prompt, and garbage in, garbage out is a very real concern.

Engineering managers have spent decades managing contractors under their supervision. Assigning work and evaluating the work product is what engineering managers do, and maintaining quality and control in that process is at the core of software engineering. When using agentic coding, it's essential to have checkpoints at every stage, carefully track integration, and assume you're taking delivery from outside contractors.

One of the biggest challenges with agentic coding is testing. Automated tests can help determine whether a recent fix broke something else, but they're limited by their own blind spots, and tests generated by an AI inherit the same blind spots as human-written ones. To compensate, test like an outsider: incorporate adversarial test practices and build in instrumentation that surfaces unexpected behavior.

Security is another concern. AI coding models have been trained on public internet data, which includes faulty code and bad advice, so they may reproduce insecure coding patterns learned from that data. To mitigate this, use multiple AIs based on different large language models to code-review each other's work, and prioritize security best practices.
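The adversarial-testing advice above can be sketched as a small harness. Everything here is hypothetical: `slugify` stands in for a piece of AI-delivered code, and the point is the hostile input generator and the invariant checks, not the function itself.

```python
import random
import string

# Hypothetical AI-generated function under test (a stand-in for delivered code).
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in title)
    return "-".join(w.lower() for w in cleaned.split())

def adversarial_inputs(n: int = 200):
    """Generate hostile inputs a polite unit test would never try."""
    rng = random.Random(0)  # fixed seed so any failure is reproducible
    pool = string.printable + "\u00e9\u00fc\u6f22\u0000\u202e"  # accents, CJK, NUL, RTL override
    yield ""            # empty string
    yield " " * 50      # whitespace only
    yield "a" * 10_000  # oversized input
    for _ in range(n):
        yield "".join(rng.choice(pool) for _ in range(rng.randint(0, 40)))

def check_invariants(func) -> int:
    """Assert properties that must hold for *any* input; return cases checked."""
    checked = 0
    for text in adversarial_inputs():
        slug = func(text)              # must not raise on hostile input
        assert isinstance(slug, str)   # always returns a string
        assert " " not in slug         # no residual whitespace
        assert slug == slug.lower()    # lowercasing actually happened
        checked += 1
    return checked
```

The fixed random seed is the instrumentation hook: when an invariant fails, the exact input that broke it can be regenerated and handed back to the AI as a bug report.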
Source: ZDNet