Did I miss some major development there? Because I don't see why it's being talked about everywhere when it's barely anything.
Even if you don't want to use agents, it's still useful as a convenient library for calling an OpenAI-compatible endpoint.
https://langroid.github.io/langroid/quick-start/llm-interact...
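For context on what "calling an OpenAI-compatible endpoint" amounts to, here's a minimal sketch that builds the request by hand with the standard library. The base URL, model name, and dummy API key are placeholders; it assumes a local server (e.g. llama.cpp or vLLM) exposing the usual /chat/completions shape.

```python
import json
import urllib.request

def build_chat_request(base_url, model, messages):
    """Build an HTTP request for an OpenAI-compatible /chat/completions endpoint."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Local servers typically accept any bearer token; this one is fake.
            "Authorization": "Bearer sk-placeholder",
        },
        method="POST",
    )

# Hypothetical local server URL and model name:
req = build_chat_request(
    "http://localhost:8000/v1",
    "local-model",
    [{"role": "user", "content": "Say hello."}],
)
# resp = urllib.request.urlopen(req)  # uncomment with a real server running
# print(json.load(resp)["choices"][0]["message"]["content"])
```

The point of the wrapper libraries is that this same request shape works against OpenAI, Azure OpenAI, or a local model by swapping only the base URL.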
First of all, every library/framework I've found is moving so fast that all the tutorials and printed material (O'Reilly books etc.) are already out of date. Many of the changes are out of necessity, since it's a rapidly developing space, but sometimes it just feels like someone got high and decided to add 3 more layers of abstraction. AI coding assistants would normally help a noob like me, but these codebases and docs are too loose for the assistants to deliver the benefits I'd get in a more established codebase.
LangChain seems to be where a lot of the action is with regard to modularity, i.e., swapping different components into each part of the pipeline. That matters to me because I need either local or HIPAA-compliant tools (Azure OpenAI works, Anthropic won't return my requests for a BAA, and for local models I need a bigger GPU).
But actually using LangChain is a pretty horrible experience because, at least for my uses and as a noob, it's too buried in abstractions for quick iteration. The GUI-based tools like Flowise and Langflow are too limited in available components, and they mostly hide the underlying problems, so errors are tough to track down.
I'm thrilled that there has been so much work on adding JSON output and agent support at the LLM level, as hopefully it can bring some of these astronauts back to earth (or at least into low orbit).