
Memberships

AI Money Lab
27.6k members • Free

Master AI Automation (Free)
1.1k members • Free

Data Alchemy
37.5k members • Free

AI Automation Agency Hub
248.4k members • Free

Online Business Friends
83.5k members • $10/m

🎙️ Voice AI Bootcamp
5.7k members • Free

13 contributions to Data Alchemy
End of the road?
I hope everyone had a good Christmas. I am making my last attempt at fixing my issues. It has been more than 6 months and I still have not written a single line of code, as I am constantly stuck on the Ollama API nonsense. I can tell you it is running here: http://localhost:11434/ The browser says "Ollama is running". But none of this API stuff works:

localhost:11434/chat/generate
localhost:11434/api/generate
localhost:11434/api/chat/generate
http://localhost:11434/api
http://localhost:11434/chat

My chatbots tell me: "Check Documentation: Carefully review the Ollama documentation, specifically looking for instructions on enabling the API or configuring API access. You might need to modify a configuration file or use different commands depending on your installation method." I have looked everywhere and can't find anything, or at least don't understand anything. After all, I am not a coder and want to use Ollama with agents to do the work I can not do. If it's something to do with the model, I don't understand. If it's something to do with the install, I don't understand. I have nowhere left to turn. It's make or break for me. New Year and new start, whether that is working with Ollama or getting a soul-destroying office job... this is it. Please can anyone help?
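For reference, Ollama's HTTP API expects a POST request with a JSON body to /api/generate (or /api/chat), not a GET to the bare paths listed above, which is why opening them in a browser fails while the root page says "Ollama is running". A minimal sketch using only Python's standard library (the model name "llama3" is a placeholder for whatever model has been pulled locally):

```python
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"{OLLAMA_BASE}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("llama3", "Say hello")
# The actual call needs a running Ollama server, so it is left commented out:
# resp = urllib.request.urlopen(req)
# print(json.loads(resp.read())["response"])
```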
0 likes • Dec '24
What I really don't understand is why you would respond when you only have a bad attitude. Thanks for nothing.
2 likes • Dec '24
Thanks for offering to help. I have worked around that issue, so I should be able to complete my project now. Cheers.
I am seriously confused about this
Wondering if anyone can explain something to me. I am going through the Introduction to LangChain and looking at this page to see which LLM models can be used: https://python.langchain.com/docs/integrations/llms/ For example, my preferred model is there as OllamaLLM (langchain-ollama). On this page, https://python.langchain.com/docs/tutorials, I see some interesting things that I want to work with:

Build a Simple LLM Application with LCEL
Build a Chatbot
Build vector stores and retrievers
Build an Agent

Now, if I take a look at building a chatbot here, https://python.langchain.com/docs/tutorials/chatbot/, I can see instructions on how to set up and install, etc. For example:

pip install -qU langchain-xxxxx

Available options: OpenAI, Anthropic, Azure, Google, Cohere, NVIDIA, FireworksAI, Groq, MistralAI, Together

import getpass
import os

os.environ["ANTHROPIC_API_KEY"] = getpass.getpass()
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-5-sonnet-20240620")

My question is this: if I want to use Ollama, is it just a case of changing the code like this?

os.environ["OLLAMA_API_KEY"] = getpass.getpass()
from langchain_ollama import ChatOllama
model = ChatOllama(model="model_name")

And finally, if this is the case, how do I go about getting the Ollama API key? Thanks for your help :o)
0 likes • Oct '24
Fantastic, thanks for this. I will take a look 👍
0 likes • Oct '24
Hi, yes I have reviewed them but have not had any time to attack them. I am again fighting with VS Code just to get it to install requirements. It's driving me insane.
Scratching my head about this for a long time
The Introduction to LangChain (LLM Applications) was very interesting, and it touches on some things that I have been struggling with. My question about this video goes like this. My great battle is using my preferred LLM, in this case Ollama, and although most frameworks say they are compatible with local Ollama, I have never been able to get a straight answer from anyone about how to make that happen.

I found it relatively easy creating working models on my local machine with a GPU, so I run Ollama locally with something like AnythingLLM in a Docker container, but when I attempt to host live for the world, things start going wrong. Most cloud hosts that allow Ollama to run are pretty expensive, and if one decides to use a paid-for product, there are other restrictions like cost and number of requests. In my mind it would be perfect to run lightweight agents just like in this video and run a private LLM somewhere, or use an endpoint at Hugging Face, but that is still a mystery to me.

So, oh yes, my question. If I want to replicate what David is doing in this video, how would I reference either a local or remote hosted Ollama installation?
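One common pattern for the local-versus-remote question above is to make the Ollama base URL configurable rather than hard-coding localhost. A minimal sketch, assuming the client only needs the server's base URL (the env var name mirrors the OLLAMA_HOST variable the Ollama CLI itself reads, and the remote hostname is purely illustrative):

```python
import os

def ollama_base_url() -> str:
    """Return the Ollama base URL, defaulting to a local install on the standard port."""
    return os.environ.get("OLLAMA_HOST", "http://localhost:11434").rstrip("/")

def chat_url() -> str:
    """Full URL for the /api/chat endpoint, local or remote."""
    return f"{ollama_base_url()}/api/chat"

# With no OLLAMA_HOST set, this points at the local default;
# exporting OLLAMA_HOST=https://my-box.example.com retargets the same code
# at a remote host with no code changes.
print(chat_url())
```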
1 like • Oct '24
I made a big decision to abandon my live bots. I wasted too much time on them. In the end, the straw that broke the camel's back was HTTPS. I can not believe it was impossible to get SSL to work on the subdomain, and the host would not help. So, back to studying. I will do my project the old-fashioned way, using the coding I learn here.
0 likes • Oct '24
Yeah, my local install is fine. I wanted to migrate to a live server. That's when it all went wrong.
Hmm, that seems contradictory...
Ok, so I think I have mentioned this before and I have had an answer, but my query goes a little further now. The thing is, I don't want to pay for the ChatGPT API while I am studying, so I chose to use a local installation of Ollama. So far that's all good, and I have had success running chatbots on my local machine.

But, and this is a big but... In the Building Applications with LLMs section, video "OpenAI - Function Calling", at minute 1.55 David says "If you want to follow along you need an OpenAI key, no access to GPT is required". Okay, what does that mean? The OpenAI key is a paid-for service giving access to OpenAI. I don't get it.

Also, I wish someone would make a video for those of us who wish to use free open-source LLMs in our frameworks. Anyway, that's my thought for the day. Cheers :o)
Please please can someone help?
Dear Team, I don't know where to start. I have been stuck on section 2 forever because I can not get VS Code to load dependencies etc... Let me explain. After failure after failure, and having ChatGPT sending me around in a loop, I am at my wits' end, and I hope someone can offer some suggestions.

The first thing I need to understand is that while running VS Code on Windows I can do 2 things:
1. Set up VS Code to connect via WSL.
2. Use the Windows native directory structure.

Question: which option should I opt for? Because I have issues with both.
In WSL I can not run HTML files (not found).
In Windows I can run Python and HTML files but struggle to load dependencies; they are underlined in red.

ChatGPT says "command palette. Type Python: Select Interpreter". If I do, there is nothing to select. ChatGPT also says "pip install Flask torch diffusers". Is this done in the terminal of the individual project? Is it done in Windows or WSL? I have run that install on both.

WSL: runs and updated, but can not run HTML.
Windows: error:

LocalCache\local-packages\Python311\Scripts' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
To update, run: C:\Users\carls\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\python.exe -m pip install --upgrade pip
PS C:\Users\carls\Projects\Repos\to-video> -m pip install --upgrade pip
-m : The term '-m' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ -m pip install --upgrade pip
+ ~~
    + CategoryInfo          : ObjectNotFound: (-m:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

OUTCOME: I now have 2 repositories, 1 for Windows and 1 for WSL, both of which have issues, and all I am doing as I follow instructions is load more and more code onto my machine.
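The "-m is not recognized" error in the log above comes from pasting the upgrade command without the leading python executable: -m is an argument to python, not a command PowerShell knows. A small sketch of invoking pip through the interpreter from Python itself, which also guarantees packages land in the same environment the selected interpreter uses:

```python
import subprocess
import sys

# sys.executable is the full path of the running Python interpreter,
# so this is equivalent to typing "python -m pip --version" with the
# correct python on PATH. Swap "--version" for
# ["install", "Flask", "torch", "diffusers"] to install packages.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # e.g. "pip 24.x from ..."
```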
1 like • Sep '24
@Anaxareian Aia I have deleted everything and started again. It now looks like VS Code is working as a Windows installation. Thanks.
1 like • Sep '24
@Nick Young Yes, I was. Thanks for clearing that up.
Carl Scutt
3
4 points to level up
@carl-scutt-7219
In between tech jobs, so looking to move into AI, but not sure which direction.

Active 16d ago
Joined Jul 29, 2024