DebtDeflation
This is really nice. My first reaction was "oh great, another Ollama/WebGenUI wrapper on llama.cpp", but it's actually much more: it supports not only the LLM but also embedding models, vector databases, and TTS/STT. Everything needed to build a fully functioning voice chatbot.
furyofantares
This looks sweet.

Totally irrelevant, but... "Language Learning Model"? Probably just some brainfart, or I'm missing something, but it would be hilarious if the authors of this did this whole project without knowing what LLM stands for.

hannofcart
Downloaded and checked it out. Looks great so far. Tried using it to read a bunch of regulatory PDFs using GPT-4o.

Some quick and early feedback:

1. The citations seem a bit dicey. The responses are largely correct, but the citations window seems to show content that's a bit garbled.

2. Please, please add text search within existing chat content. For example, if I searched for something about giraffes in one of the chats, search the chat history and allow switching to that chat.

tencentshill
As someone who doesn't know what an Embed or Vector is, this has been the only offline AI tool I've been able to install and start using on my standard office PC.
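For anyone in the same boat: an embedding is just a fixed-length list of numbers representing a piece of text, and a vector database stores those lists so similar texts can be found by comparing them. A toy sketch of the idea (the texts and vectors here are made up for illustration; real embedding models produce hundreds of dimensions):

```python
import math

# Toy "embeddings": each text is mapped to a short, made-up list of numbers.
store = {
    "the cat sat on the mat": [0.9, 0.1, 0.0],
    "a feline rested on a rug": [0.8, 0.2, 0.1],
    "quarterly revenue grew 10%": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Similarity of two vectors: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec):
    """The core of what a vector database does: return the most similar stored text."""
    return max(store, key=lambda text: cosine(query_vec, store[text]))

# A query vector close to the "cat" region matches the cat sentences, not finance.
print(nearest([0.85, 0.15, 0.05]))
```

Tools like AnythingLLM hide all of this: they pick an embedder and a vector store for you, which is exactly why it works out of the box on a standard office PC.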
101008
LLMs will become like web frameworks in the future, in the sense that they will be free, open source, and everybody will be able to build on them. Sure, there will be paid options, just as there are paid web frameworks, but most of the time the free options will be more than good enough for most jobs.
hm-nah
I’ve been attempting to deploy a customized AnythingLLM instance within an enterprise env. TimC (and presumably the dev crew) are top notch and very responsive.

Waiting for EntraID integration. After that, a customized version of AnythingLLM can tick the boxes for most of an org's lowest-hanging use cases.

Thanks for the killer app TimC and crew!

A4ET8a8uTh0
This definitely makes it super easy for less technical folks to access it (got it up and running in less than 5 minutes). Initial reaction is positive with just Ollama. Everything is automatically detected, and if you want to set it up manually, you still can. Let's see how it does after adding Hugging Face (quick and painless).
nunobrito
There was an error while installing on Linux; it was solved with:

    sudo chown root:root /home/hn/AnythingLLMDesktop/anythingllm-desktop/chrome-sandbox
    sudo chmod 4755 /home/hn/AnythingLLMDesktop/anythingllm-desktop/chrome-sandbox

Other than that it worked really well.

phren0logy
I have been really impressed with AnythingLLM as a no-fuss way to use LLMs locally and via APIs. For those of us who want to tinker, there's a solid range of choice for embedders and vector stores.

The single install desktop packaging is very slick. I look forward to the upcoming new features.

indigodaddy
Question on AnythingLLM hosted. Is the $50/mo basically logging into some sort of remote desktop environment and running an AnythingLLM instance there that you manage? Everything else is still the same, i.e. it's still BYOK and all that, right? Or do you also get some sort of AI/token usage with the monthly fee?

Oh, and the other question: in your hosted version, does the instance have access to a GPU that you provide? In that case we could just use a local model with (relative) ease, right?

And if this hosted service is as I've described, I feel like this sort of service, coupled with top-tier frontier open-source models, is the (not-too-distant) future for AI, at least in the short to medium term (which is probably a pretty short period, relatively, given the rapid pace of AI development).

Thanks

egamirorrim
I've got an OpenAI API key, and I pay for ChatGPT. I'd imagine switching to this and using OpenAI would end up costing quite a lot? How are people running it relatively cheaply?
CuriouslyC
I noticed you put LiteLLM in your list of providers. Was that just marketing, or did you separately re-implement support for all the models LiteLLM already supports?
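For context, LiteLLM's selling point is one uniform call signature that gets routed to many backends via a provider-prefixed model name like "ollama/llama3". A toy illustration of that routing pattern (the handler functions are hypothetical stand-ins, not LiteLLM's actual internals, and make no network calls):

```python
# Hypothetical backend handlers; a real client would call the provider's API here.
def _call_openai(model, messages):
    return f"[openai:{model}] would answer {messages[-1]['content']!r}"

def _call_ollama(model, messages):
    return f"[ollama:{model}] would answer {messages[-1]['content']!r}"

PROVIDERS = {"openai": _call_openai, "ollama": _call_ollama}

def completion(model, messages):
    """Dispatch one uniform call signature to the right backend,
    keyed on the 'provider/model' prefix."""
    provider, _, name = model.partition("/")
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return PROVIDERS[provider](name, messages)

msgs = [{"role": "user", "content": "hi"}]
print(completion("ollama/llama3", msgs))
```

The question above boils down to whether AnythingLLM plugs into a router like this once, or maintains its own per-provider integrations alongside it.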
ranger_danger
    $ docker pull mintplexlabs/anythingllm
    Using default tag: latest
    Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
fl_rn_st
What a coincidence, I just set up AnythingLLM yesterday to try it at an enterprise level. I'm super impressed with most of what I've used so far.

I just wish there was an option to properly include custom CSS. The default interface looks a little bit... dated.

Keep up the amazing work!

politelemon
> AnythingLLM packages as an AppImage but you will not be able to boot if you run just the AppImage.

The word 'boot' suggests it will affect the computer's startup process; I think you meant to say you will not be able to 'start the application'.

philipjoubert
This looks really great. Are you planning on adding shortcut keys anytime soon?
conception
Can this roll into home assistant and provide alexa at home in any capacity?
md3911027514
this is super sweet for developers (or anyone else) who like to have granular control over their LLM setup:

- ability to edit the system prompt

- ability to change the LLM temperature

- ability to choose which model to use (open source or closed)
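Those three knobs map directly onto the fields of an OpenAI-style chat completion request, which most of these tools speak under the hood. A minimal sketch of such a payload (the model name and defaults are just examples):

```python
import json

def build_chat_request(system_prompt, user_message, model="llama3", temperature=0.7):
    """Assemble an OpenAI-style chat payload exposing the three knobs above:
    system prompt, temperature, and model choice."""
    return {
        "model": model,              # open-source or closed model name
        "temperature": temperature,  # 0.0 = near-deterministic, higher = more varied
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("You are a terse assistant.", "Summarize RAG in one line.")
print(json.dumps(payload, indent=2))
```

A GUI that exposes these fields is essentially just editing this dict for you before each request.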

santamex
How does this differ from chatboxai?

https://github.com/Bin-Huang/chatbox

vednig
How do you ensure "privacy by default" if you're also providing cloud models?
somesun
Where is the desktop app download?

Or do I need to build it from the source code on GitHub?

anonymous344
What kind of PC does it need? RAM, etc.?
ranger_danger
Finally.
m1keil
Came here to say that I really like the content on your YouTube channel.