
Memberships

AI & QA Accelerator

604 members • Free

3 contributions to AI & QA Accelerator
Playwright CLI: The Practical Guide
🧠 Automation tools used to be built for humans.
1. A QA engineer wrote the code.
2. Read the errors.
3. Decided what to try next.

That was the normal workflow for years. But now everything has changed. Starting in early 2026, AI coding agents can handle all of those steps, while QA engineers act as managers and agentic leads.
────────────────────────────────────────
🟠 Playwright MCP
It was the first serious tool for this new AI QA workflow. It let an AI agent look at the page, click buttons, take page snapshots, and do basic browser tasks.

Main use cases for Playwright MCP in test automation:
- Gathering locators for UI tests
- Debugging flaky or failed tests
- Reading console and network logs

How it works:
1. The user asks an AI agent that has access to Playwright MCP to do a task.
2. The AI coding agent controls Playwright MCP to interact with a browser.

For a while, that seemed like a great option, but soon enough it turned out to have a few fatal issues...
────────────────────────────────────────
🔴 Playwright MCP is not the best option for test automation
Here is how Playwright MCP works:
1. It loads a full page snapshot (HTML + CSS) into the AI agent's context after each page interaction.
2. It also loads large MCP metadata that tells the agent how to use the tool.

That means Playwright MCP can eat 20–30% of the agent's context in a single use. And once the context crosses 50–60%, agents start making mistakes and losing track of earlier instructions.
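The scaling behind those percentages can be sketched with back-of-the-envelope arithmetic. All numbers below are assumptions for illustration only, not measurements of any real agent or MCP server:

```typescript
// Hedged sketch: how quickly per-interaction page snapshots can fill an
// agent's context window. Every constant here is an assumed example value.

const contextWindow = 200_000;  // assumed agent context size, in tokens
const mcpMetadata = 15_000;     // assumed one-time MCP tool-schema overhead
const snapshotTokens = 25_000;  // assumed tokens per full page snapshot

// How many page interactions fit before the context crosses a given fraction?
function interactionsUntil(fraction: number): number {
  const budget = contextWindow * fraction - mcpMetadata;
  return Math.floor(budget / snapshotTokens);
}

const n = interactionsUntil(0.6); // interactions until ~60% of context is used
```

Under these assumed numbers, only a handful of interactions fit before the window is dominated by snapshots, which matches the complaint above.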
So technically it works, but the context overhead and cost are not great.

Quick recap: the AI agent's context is its working memory. It holds the current conversation, instructions, code, and everything else the agent needs to stay on track.
────────────────────────────────────────
🟢 Playwright CLI
Playwright CLI was built to solve those problems. It gives AI agents a simple command-line utility they can call like any other terminal command:
- The agent runs small commands and gets back short results.
- It reads the full HTML page only when needed, not on every interaction like Playwright MCP does.
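As an illustration of that short-command pattern, here are standard Playwright test-runner invocations an agent could call from a terminal (the agent-oriented Playwright CLI itself may expose different commands; this is only a sketch of the idea):

```
npx playwright test --list                               # enumerate tests, short output
npx playwright test tests/login.spec.ts --reporter=line  # run one file, one line per test
```

Each command returns a few lines of text, so the agent's context grows by bytes, not by whole page snapshots.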
3 likes • 6d
Excellent article 👍 Now I want to try it out.
AI Coding Agents for QA: Part 5 — Stop Writing Prompts. Start Writing Task Specs
You open Cursor, Copilot, or whatever AI tool you like... You type: "write a login test". The agent responds. It looks like a test. Imports are there. Structure looks familiar. But you look closer.
- Hardcoded credentials.
- Wrong file location.
- No page objects.
- Naming conventions are ignored.
- And on top of all that, you run it... it fails.
────────────────────────────────────────
🧠 Why the Agent Guesses Wrong
Most people at this point blame the model.
- "Claude is bad at tests."
- "GPT doesn't understand Playwright."
- "I need a better model."

But the reality is... the model did not fail you. You gave it nothing useful to work with. Think of the agent like a new hire. Smart. Fast. Capable. But they have never seen your project before.
➤ They do not know where your fixtures live.
➤ They do not know how you name test files.
➤ They do not know what credential pattern you use.
➤ They do not know whether you run tests after every change.

You told them: "write a login test." So they try to find all that information and make a lot of assumptions. Every assumption is a guess. Every guess is a risk of being wrong. That is an onboarding problem and a lack of proper documentation.
────────────────────────────────────────
📝 What a Real Task Spec Looks Like
In the AI coding agents world, that documentation is often called a "task spec." A task spec is not a longer prompt. It is a precise set of constraints that leaves the agent very little room to guess. Here is the difference.

Weak prompt:
```
write a login test
```

Good task spec:
```
Write a login test.
Before making any changes, inspect the existing tests in /tests/auth/ and follow the existing suite structure, naming, and conventions.
Task:
- Add a test for successful login using the existing credentials fixture.
- Place it in the appropriate existing auth test suite.
- Do not hardcode credentials or duplicate fixture data.
- Do not create new files unless no existing test file is appropriate.
```
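For illustration, this is the shape of test such a spec should steer the agent toward: fixture-driven credentials, a page object, no inline secrets. It is a hedged sketch with invented names (`LoginPage`, `credentialsFixture`), and the page object is a plain stub rather than a real Playwright wrapper:

```typescript
// Hedged sketch: all names are invented for this example, not from the post.

type Credentials = { user: string; pass: string };

// Stand-in for a shared credentials fixture (in a real suite, loaded from config/env)
const credentialsFixture: Credentials = { user: "demo-user", pass: "demo-pass" };

// Minimal page-object stub; a real one would wrap Playwright's `page` object
class LoginPage {
  private loggedInAs: string | null = null;

  async login(creds: Credentials): Promise<void> {
    // Real version: fill the form and submit via Playwright locators
    this.loggedInAs = creds.user;
  }

  async loggedInUser(): Promise<string | null> {
    return this.loggedInAs;
  }
}

// The test body: uses the fixture, no hardcoded credentials, asserts the outcome
async function successfulLoginTest(): Promise<boolean> {
  const loginPage = new LoginPage();
  await loginPage.login(credentialsFixture);
  return (await loginPage.loggedInUser()) === credentialsFixture.user;
}
```

The point is structural: every constraint in the spec (fixture, page object, placement) maps to something visible in the code, so there is nothing left for the agent to guess.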
0 likes • 11d
@Art Martinez I also had to buy my own licenses; the company did not provide any. I hope it will all work out and eventually they will pay. But for now I am happy to pay, it's only $20.
AI in QA 2026: The Transformation That Already Happened
Many people say AI is a bubble. It very well might be. But here's what doesn't change: AI has already transformed the tech industry. By the end of 2026, it will be unrecognizable.
────────────────────────────────────────
AI Has Already Changed QA Forever
Tasks that defined QA work are already "solved" by AI:
∙ Document gathering - 10x faster
∙ Summarization - 10x faster
∙ Brainstorming - 10x faster
∙ Document creation - 10x faster

The same thing is happening to test automation right now. AI coding agents like Cursor, Claude CLI, and Codex make code writing almost instant.
────────────────────────────────────────
🔴 So Is It All Scary?
Yes and no. It depends on what you do next. If you're a manual tester, your time is running out.
⤷ Will manual jobs disappear? No.
⤷ Will they compress? Absolutely.

Where you needed 5 people before for documentation and research work, now 2 people can do it. The Automation + AI combination is a brutal reality. It's not coming. It's happening NOW. If you're still doing pure manual work, you need to hurry up and decide: fight in a brutal job market, or dedicate a few months to learning automation.
────────────────────────────────────────
🚫 The "AI Will Write Tests For Me" Delusion
Don't kid yourself thinking AI coding agents can write the code while you just tell them what to do. AI agents write consistent, production-quality code ONLY if you tell them HOW to do it.
That requires using advanced features like:
∙ AGENTS.md
∙ SKILL.md
∙ Proper test design patterns
∙ Understanding technical limits

If those names mean nothing to you, that's a problem you must fix ASAP. Only then can you start using AI for coding properly.
────────────────────────────────────────
⚡ The Gap Is Widening Every Day
Current QA automation engineers and SDETs already KNOW all the foundational skills, and they have the experience. When they start using AI coding agents, you simply will not be able to catch up. They're already ahead. AI makes them 10x faster. The gap gets wider every single day.
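As a hedged illustration of the first item on that list, an AGENTS.md is a plain markdown file in the repo root that tells coding agents how the project works. The contents below are invented for this example (paths, naming rules, and commands are assumptions, not from the post):

```markdown
# AGENTS.md — guidance for AI coding agents (illustrative example)

## Project layout
- UI tests live in /tests/, page objects in /pages/, fixtures in /fixtures/.

## Conventions
- Name test files <feature>.spec.ts and add them to the existing suite for that area.
- Never hardcode credentials; use the shared credentials fixture.

## Workflow
- Run `npx playwright test` after every change and fix failures before finishing.
```

The effect is the same as a good task spec, but written once per repo instead of once per prompt.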
0 likes • 11d
AI is crazy now; I cannot believe how fast it gets better.
Rob L.
@rob-l-5910
Full speed to become AI QA

Active 6d ago
Joined Apr 15, 2026