
Memberships

Cursor Skool

59 members • Free

AI Automation Agency Hub

248.4k members • Free

AI Developer Accelerator

9.9k members • Free

AI Automation Society

143.6k members • Free

Coding the Future With AI

1.3k members • Free

Business AI Alliance

7.2k members • Free

AI Money

272 members • $9/m

6 contributions to Cursor Skool
Cursor Rules
Hey all! I'm new to Cursor and have been using it for less than a month. My problem right now is how to set up proper Cursor rules for my project. I've tried using ChatGPT to generate the rules, but it's not working very well: the structure is a mess and the rules are sometimes too complicated. Does anybody know a good way to do this? My project has an external backend; I'm developing the frontend in one folder and the backend in another, and I'd like to separate the rules for backend and frontend.
0 likes • 1d
Hi. If you haven't found a solution yet, this is what I use:
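One way to split rules per folder (a rough sketch; the file names, globs, and rule text here are placeholders, not the commenter's actual setup) is to put scoped `.mdc` rule files under `.cursor/rules/`, using the `globs` frontmatter field so each rule only applies to its part of the repo:

```
.cursor/rules/
├── frontend.mdc
└── backend.mdc
```

```
---
description: Frontend conventions
globs: frontend/**
alwaysApply: false
---
- Use functional React components and hooks.
- Keep components small and colocate tests with components.
```

A matching `backend.mdc` with `globs: backend/**` keeps the two rule sets from bleeding into each other, which is usually cleaner than one giant rules file generated by ChatGPT.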
GPT-5 Codex now in Cursor
OpenAI just rolled out GPT-5 Codex, and it's now available in Cursor. Codex is a specialized variant of GPT-5 optimized for coding, refactoring, and code review. It's designed to handle long, complex tasks autonomously (sometimes working for hours), adapt its reasoning depth to task complexity, and catch critical issues in reviews.

Benchmarks: 74.5% on SWE-bench Verified. OpenAI also highlighted that it uses drastically fewer tokens on simple tasks, so hopefully it's also cheaper to use. There's no information on context length; it's probably the same as GPT-5, roughly 200k.
1 like • 1d
I've also been using it over the past few days. It's good. I use it for peer review and for confirming claims, e.g. what Sonnet claims is not always true 🤣
Codex in Cursor
🚀 Tried out the new Codex update today, and wow, it's a game changer. What stood out the most for me:
- IDE Extension: Codex now lives directly in my workflow, no context switching.
- Sign in with ChatGPT: zero setup pain, no API keys, just works.
- Local ↔ Cloud Handoff: started a task locally, then delegated it to Codex Cloud and picked it back up seamlessly.
- GitHub Reviews: mentioned @codex in a PR and instantly got a structured review plus suggested fixes.

Honestly, it feels like pairing with a teammate who never gets tired. If you haven't tried it yet and you're already on ChatGPT, it's included in your plan; worth turning on and testing in your daily dev loop.
0 likes • 19d
Codex + Cursor + Planner = 🚀

Following up on my last post about Codex in Cursor: here's what it looks like in action. 🖥️ In the screenshot you can see:
- Codex reviewing my BDD tests right inside Cursor.
- A checklist of fixes (assertions, env config, run scripts).
- Planner integration keeping tasks truth-aligned.
- Terminal, code editor, and Codex feedback loop all in one place.

The flow feels natural:
1. I run and debug locally in Cursor.
2. Codex reviews and points out gaps.
3. The PM bot updates Planner and creates next actions.

Instead of "just another AI tool," it's becoming an actual teammate: executing, reviewing, coordinating. If you're running multi-step projects (tests, infra, APIs), this loop saves a ton of overhead. I'm starting to trust Codex as both reviewer and executor, with Planner + PM keeping everything clean.
Documentation changes now that we have Cursor and LLMs
"Just ask grok-code-fast-1"

Quick thought that came to mind: I'm setting up deployment for an application and needed to define environment variables for it. I had no idea what environment variables the project had or what I should set. It's a new project, so I couldn't even look it up in the documentation.

So I just asked grok-code-fast-1 to go through the code and list all the environment variables used in the codebase. A couple of seconds later I had the list.

My first instinct was "I should document these somewhere." But that got me thinking: why? Why can't I just do the same thing next time I need to know the environment variables? Documentation gets out of date, while the code is always correct. So, where possible, I feel I should prefer asking documentation-related questions directly of the AI, especially now that we have a blazing-fast, ridiculously cheap model to comb through the code.

Which means that in the future, documentation should focus far more on the things that can't be figured out just by looking at the code: design decisions, tradeoffs, reasoning, specifications, and so on. That kind of documentation also helps the model when developing software.

Side question: is there a way for Cursor to do the same without context pollution? Could Cursor Agent ask Grok to go through the code, look for the environment variables, and return just a summary, so the main Agent doesn't have to read all the code?

What do you think? Give me your thoughts in the comments.
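For this particular task there's also a deterministic alternative to asking a model: a short script that scans the codebase for environment-variable references. A minimal sketch (the function name and regex patterns are my own; extend the patterns for whatever frameworks your project actually uses):

```python
# Sketch: scan a codebase for environment-variable references.
# Patterns cover Node (process.env.FOO) and Python (os.environ["FOO"],
# os.environ.get("FOO"), os.getenv("FOO")); add more for other stacks.
import re
from pathlib import Path

ENV_PATTERNS = [
    re.compile(r"process\.env\.([A-Z][A-Z0-9_]*)"),                          # Node/JS
    re.compile(r"os\.environ(?:\.get)?\(?\[?[\"']([A-Z][A-Z0-9_]*)[\"']"),  # Python
    re.compile(r"os\.getenv\([\"']([A-Z][A-Z0-9_]*)[\"']"),                 # Python
]

def find_env_vars(root: str, exts=(".py", ".js", ".ts")) -> set[str]:
    """Return the set of environment-variable names referenced under root."""
    found = set()
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            text = path.read_text(errors="ignore")
            for pattern in ENV_PATTERNS:
                found.update(pattern.findall(text))
    return found
```

Unlike an LLM pass, this gives the same answer every run and costs nothing, though the model approach wins when references are indirect (e.g. variable names built dynamically).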
1 like • 20d
You can tell it to follow different rules, and hopefully they will work. Overall, I think there's a high risk you'll forget what you wanted to ask next time. You need a process and a protocol; the model doesn't matter so much. I think you need to reinforce a targeted process and tasks, like the todo list Cursor has.
How I’m Pairing Codex + Cursor with Modern Project Knowledge & Task Tracking
TL;DR: The combo of Codex (AI codegen/review) and Cursor (AI IDE/context) is now my default stack for any dev work, not just PowerBI. The key: plugging both into our Planner (tasks) and Cortex (docs/decisions) so every step is tracked, reviewable, and shareable.

For those not in our stack:
- Planner = advanced project/task tracker (dependencies, AI action logs).
- Cortex = semantic doc/knowledge base (like Notion/Confluence, but with built-in AI for tagging, context, and audit).
- Codex = code reviewer, optimizer, and PR fixer (via cloud or in-editor).
- Cursor = IDE/workspace, doc/test generator, and context anchor.

PowerBI example (real use):
- Built and tested a PowerBI Dataflow JSON Validator in Cursor.
- Used Codex for code review/PR feedback.
- CI/CD ensures every change goes through the same process: no shortcuts, all traceable.

But it's not just PowerBI:
- I'm using this same Codex + Cursor + Planner + Cortex combo for new app builds, backend services, even internal tooling.
- The benefit: all process, logic, and decisions are linked together. You can trace any feature or fix from code, to doc, to planner task, to review history.

Why it matters:
- Onboarding = instant (all links and context in one place).
- Review/governance = built-in, not bolted on after the fact.
- Works for any coding project, not just dashboards.

If you're building apps, pipelines, or internal tools and want AI plus governance, try connecting your copilot/IDE to your project and doc stack. DM me if you want examples or want to compare notes!
Delyan Dosev
2
14 points to level up
@delyan-dosev-3323
AI enthusiast

Active 17h ago
Joined Jul 26, 2025