
Memberships

Crypto Cash Flow

92 members • Free

Faceless Freedom

36 members • Free

Agent Zero

2.5k members • Free

23 contributions to Agent Zero
Okay, we REALLY REALLY need a STOP button 🔴
I saw someone else on here had mentioned the need for a stop button. I wasn't sure at first whether it was truly necessary, as I have only been seriously using Agent Zero for the past two weeks. Well, it seems they were indeed correct. Today I found the need for one in the middle of my workflow.

I had been doing some market research, and with my agent going at it for about 30 minutes solid, I managed to fill the context. The memory compaction kicked in and started doing its job. I have seen this before, so I thought, no big deal: the research was about complete and it was running tokens on the final response. Then something went wrong and the memory compaction failed. It hung at that step forever. I nudged it to continue giving the final response. It seems that with the context being full, it had no reference to what had already been given as output, so it proceeded to do the research per the original instructions again! It just kept going!

I paused it, I sent a stop message, and the agent acknowledged but continued its tasks anyway. I gave it further instructions to abort; again it acknowledged that "the user wants me to stop" but it still kept going. It just kept burning up tokens. There was no way I wanted it to run through the whole 30-minute research routine again! The only solution I had was to stop the Docker container.

This behavior is just unacceptable. If I were using online models and had to pay API fees, I could have been throwing money away had I left the agent unattended. That's time and money wasted. Thankfully, I have the hardware to run everything locally, so in this case it was just my time and the hassle of getting it to stop.

Can someone perhaps bring this up in today's call? I'm unsure if I will be able to make it. Thanks. It would be nice to see this implemented in a way that stops/aborts all running tasks.
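For anyone who ends up implementing this, here is a minimal sketch of what a hard stop could look like: a cooperative cancellation flag that the agent loop checks between steps, so a run can be aborted cleanly instead of killing the Docker container. All names here are hypothetical illustrations, not Agent Zero's actual internals.

```python
import threading

# Hypothetical "stop button": a shared flag the agent loop checks between
# steps. Setting it aborts the run at the next step boundary.
stop_event = threading.Event()

def agent_loop(steps, stop_after=None):
    """Run steps until done, or until stop_event is set."""
    completed = []
    for i, step in enumerate(steps):
        if stop_event.is_set():        # user pressed Stop -> abort now
            return completed, "aborted"
        completed.append(step)         # stand-in for a real tool call
        if stop_after is not None and i + 1 == stop_after:
            stop_event.set()           # simulate the Stop click mid-run
    return completed, "finished"

done, status = agent_loop(range(100), stop_after=3)
print(done, status)                    # only the first 3 steps ran
```

The key design point is that the abort check lives in the loop itself, not in the LLM: the model "acknowledging" a stop request does nothing unless the framework enforces it between tool calls.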
NEED ADVICE ON TECH STACK
After downloading various Ollama models on my host machine, I have decided (due to hardware restrictions, namely the GPU) that I cannot afford to use these free models until I have a desktop setup that lets them run proficiently. They just take forever to do anything. I will pay for API calls for the sake of speed. Free really isn't free; you're going to pay one way or the other, and my time is spread very thin as it is. Also, I will be traveling and am not taking my main machine with me. So setting up cloud-based tools that are compatible with a smaller laptop-and-phone config is what I am doing now. I am going to be using a Claude Code, Cursor, Agent Zero stack, although I'm not certain how A0 plays into this if I'm using a Lenovo ThinkPad. It's my travel/everyday machine: very small and compact, but it really serves its purpose. Any input regarding this new approach will be greatly appreciated. Thanks
1 like • 12d
Have you tried looking into the Venice API options? If you use the A0T crypto token and stake it, you get API credit. I haven't used it myself yet, since I have the hardware to run things locally, but it seems like a very good option: pay once and you get monthly API credits.
Need Suggestions
Hello everyone, I recently set up Agent Zero using Docker and LM Studio. I don't have much experience yet and I'm still trying to understand what I can do with it. It would be really helpful if you could share some ideas or examples of how you use it. Also, please suggest some models that work well. I'm using an RTX 5070 Ti (16GB VRAM) and 32GB RAM. Thanks a lot! 😊
1 like • 12d
Another idea: use a vision model and it can process receipts or tag images.
Working With Local LLMs
My setup is with local models only. I have had great success with the latest Qwen3.5 family (vision support in all of them). The latest GLM (4.7 and 5) models are pretty good too. I just wanted to quickly share a tool that I came across that will save you time when testing which models will work on your PC: LLMFIT https://github.com/AlexsJones/llmfit
Working With Local LLMs
0 likes • 18d
Yeah, with that hardware you can run quite a few good models at high speed. You have to balance a few things:
1. Mixture of Experts
2. Context length
3. Concurrency
4. Quant size
5. Then shove it ALL into your vRAM

Tell me your setup and I can give you some tips for your system.
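For points 4 and 5, a rough back-of-envelope helps before downloading anything. This is a sketch under simple assumptions (1B parameters at 8-bit ≈ 1 GB of weights, plus a flat overhead fraction for KV cache and runtime buffers); actual usage varies by runtime and context length:

```python
def est_vram_gb(params_billion, quant_bits, overhead_frac=0.15):
    """Very rough VRAM estimate: weights at the given quant size, plus a
    flat fudge factor for KV cache and buffers. A sketch, not a benchmark."""
    weights_gb = params_billion * quant_bits / 8   # 1B params @ 8-bit ~= 1 GB
    return weights_gb * (1 + overhead_frac)

# A 20B model at 4-bit quant needs roughly 11.5 GB, so it fits a 16 GB
# card with room left over for a longer context window.
print(round(est_vram_gb(20, 4), 1))
```

Tools like the LLMFIT one mentioned above do this more carefully, but the rule of thumb is enough to rule out models that obviously won't fully offload.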
1 like • 13d
@Jubaer Utshob download LM Studio, then go to discover models and use their downloader. It gives you hints to what will fully offload to the GPU. I recommend trying GPT-20B. That will fit in your vRAM. Or one of the small new Qwen3.5 models.
Fine-tuned LLM for Agent Zero
Just out of curiosity, has anyone fine-tuned a (small) LLM with the Agent Zero profile so that it would know about its environment and available tools? I am experimenting with the new Qwen3.5 locally and finding it to be really good for some use cases, but it also needs to process this big context with every call.
0 likes • 14d
@Armin J Sorry, I guess I misunderstood. You are looking to replace / find something smaller than Qwen3.5 for your main model?
0 likes • 14d
@Armin J Oh, I think I understand now. No, I haven't even thought of that. Fine-tuning and training a model to work better specifically with Agent Zero sounds like a cool idea, though. You are right: in theory, less explanation would need to go to the LLM if the framework were tweaked and the LLM itself was trained. With less added context going back and forth, it could be faster. I think it may be possible with Unsloth. Time and cost? I haven't ventured into model training yet, but I would try this if I had the time. I do see that it could be a huge benefit: once the model is specific to Agent Zero, you reap the benefit in speed and possibly intelligence going forward, like forever.
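If anyone does try this, the data-prep side is the straightforward part: bake the system prompt into chat-format training examples so the fine-tuned model eventually internalizes it and no longer needs it sent on every call. A hypothetical sketch; the prompt text, tool names, and message format here are placeholders, not Agent Zero's real ones:

```python
import json

# Placeholder for the real Agent Zero profile / system prompt.
SYSTEM_PROMPT = "You are Agent Zero. Tools: code_execution, memory, browser."

def make_example(user_msg, assistant_msg):
    # One chat-format training example. The system prompt is repeated in
    # every example so the model learns to behave as if it were present.
    return {"messages": [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_msg},
        {"role": "assistant", "content": assistant_msg},
    ]}

examples = [
    make_example("list files in /tmp",
                 '{"tool": "code_execution", "args": {"code": "ls /tmp"}}'),
]

# JSONL: one training example per line, the format most trainers accept.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
```

A file like this could then be fed to a trainer such as Unsloth; whether the fine-tune actually beats just caching the prompt is the part that needs testing.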
Daniel ITwizard
Level 3 • 20 points to level up
@daniel-p-4970
Certified IT professional and Linux user. Investigative researcher.

Active 9h ago
Joined Mar 2, 2026
USA