Two Test Frameworks. Same AI Agent. Same Prompt. Same Task.
𝗔𝗱𝗱 𝗮 𝗹𝗼𝗴𝗶𝗻 𝘁𝗲𝘀𝘁.
Framework 1 result:
- Test passes on first run.
- Credentials pulled from the fixture.
- File placed in the right folder.
- Naming follows the existing convention.
- Page object used. No raw selectors.
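Here is a minimal sketch of what that output could look like, assuming a pytest + Playwright setup. The `LoginPage` page object and the `test_credentials` fixture are hypothetical names standing in for whatever the repo already defines:

```python
# tests/auth/test_login.py — placed and named per the repo's convention
from pages.login_page import LoginPage  # hypothetical page object module


def test_valid_login(page, test_credentials):
    """Log in via the page object, with credentials from a shared fixture."""
    login_page = LoginPage(page)  # `page` provided by pytest-playwright
    login_page.open()
    login_page.login(test_credentials.username, test_credentials.password)
    assert login_page.is_logged_in()
```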
Framework 2 result:
- Test technically runs.
- Credentials hardcoded directly in the test.
- New file dropped in the root directory.
- Named `test_new.py`.
- Raw selectors everywhere. No page object in sight.
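For contrast, a sketch of the Framework 2 output. Every value here is invented, but the shape will look familiar:

```python
# test_new.py — dropped in the repo root
from playwright.sync_api import sync_playwright


def test_new():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/login")          # hardcoded URL
        page.fill("#username", "admin")                 # hardcoded credentials
        page.fill("#password", "Password123!")
        page.click("button.btn.btn-primary")            # raw selectors
        assert page.is_visible("div.dashboard-header")
        browser.close()
```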
The test in Framework 1 looks like it was written by an actual engineer.
The test in Framework 2 technically works... but it is a complete mess.
────────────────────────────────────────
🧠 𝐇𝐞𝐫𝐞 𝐈𝐬 𝐖𝐡𝐲
The AI Agent does not decide what good tests look like.
It reads what already exists in your repo and continues the pattern.
Framework 1 had fixture files, page objects, consistent naming, and a clear folder structure. The agent read all of that. Matched against it. Wrote a test that fits right in.
Framework 2 had hardcoded values, raw selectors, no structure, and copy-pasted setup code in every file. The agent read that too. It drew one conclusion: this is the standard here. And continued exactly that pattern.
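Concretely, the "standard" the agent picks up can be as small as one shared fixture. A minimal sketch of the kind of `conftest.py` Framework 1 might contain (fixture name and env var names are hypothetical):

```python
# conftest.py — the kind of shared fixture the agent pattern-matches on
import os
from collections import namedtuple

import pytest

Credentials = namedtuple("Credentials", ["username", "password"])


@pytest.fixture
def test_credentials():
    """Read credentials from the environment instead of hardcoding them."""
    return Credentials(
        username=os.environ["TEST_USERNAME"],  # hypothetical env var names
        password=os.environ["TEST_PASSWORD"],
    )
```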
────────────────────────────────────────
📌 𝐓𝐡𝐞 𝐔𝐧𝐜𝐨𝐦𝐟𝐨𝐫𝐭𝐚𝐛𝐥𝐞 𝐓𝐫𝐮𝐭𝐡
Before you hand your repo to an agent, ask yourself one question:
“Would I be comfortable showing this code to senior engineers?”
If yes, start using AI coding agents.
If no, fix the framework issues first, then bring in AI. Because it won’t fix bad test automation code. It will scale it.
────────────────────────────────────────
Want to learn how to use AI coding agents in test automation? Check out this live workshop.