I have had the opposite experience. For complex tasks, LLMs fail in subtle ways that require inspection of their output: essentially, the "declarative" to "imperative" translation is bug-ridden.
My trick has been to create DSLs (I call them primitives) that are loaded as context before I make declarative incantations to the LLM.
These micro-languages reduce the error space dramatically and allow for more user-friendly and high-level interactions with the LLM.
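As a rough sketch of what that setup can look like (the primitive names, the prompt wording, and the missing "send to model" step are all invented here for illustration, not anyone's actual tooling):

// Hypothetical sketch: load DSL "primitives" as context, then make a
// declarative request against them.
const PRIMITIVES: &str = "\
primitive fetch(url) -> body       // HTTP GET, returns the response body as text
primitive parse_csv(text) -> rows  // split text into rows of trimmed cells
primitive store(table, rows)       // upsert rows into the named table
";

fn build_prompt(declarative_request: &str) -> String {
    // The primitives are loaded first so every later instruction is interpreted
    // in terms of this restricted vocabulary, which shrinks the error space.
    let mut prompt = String::new();
    prompt.push_str("You may only combine the following primitives:\n");
    prompt.push_str(PRIMITIVES);
    prompt.push_str("\nTask: ");
    prompt.push_str(declarative_request);
    prompt.push_str("\nRespond with a sequence of primitive calls only.\n");
    prompt
}

fn main() {
    let prompt = build_prompt("load prices.csv from example.com and store it in `prices`");
    println!("{prompt}"); // hand this to whatever model/API you use
}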
> Do you know the industry term for a project specification that is comprehensive and precise enough to generate a program?
> Code, it's called code.
As any software dev knows, the problem is not the language. The problem is that the vague musings of an "ideas guy" cannot be turned into an actual program without a lot of guesswork, refinement, clarification, "filling-in", and outright lying. The required understanding and precision of description is just not that far from the understanding and precision required to write the code.
As others have pointed out, natural language is often insufficient to describe precisely the operations that you want. Declarative programming solves this with specialized syntax; AI codegen solves this by guessing at what you left out, and then giving you specific imperative code that may or may not do what you want. Personally, I'll be investing my time and resources into the former.
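To make "guessing at what you left out" concrete: even a one-sentence spec like "remove duplicate names from the list" is compatible with several different programs, and the codegen has to pick one for you. A toy sketch, with both variants invented for illustration:

use std::collections::HashSet;

// "Remove duplicate names from the list": two programs that both satisfy the
// sentence but behave differently. The sentence never says which one you meant.

// Variant A: keep the first occurrence, preserve order, case-sensitive.
fn dedup_keep_order(names: &[&str]) -> Vec<String> {
    let mut seen = HashSet::new();
    names
        .iter()
        .filter(|n| seen.insert(n.to_string()))
        .map(|n| n.to_string())
        .collect()
}

// Variant B: case-insensitive, order not preserved.
fn dedup_case_insensitive(names: &[&str]) -> Vec<String> {
    let set: HashSet<String> = names.iter().map(|n| n.to_lowercase()).collect();
    set.into_iter().collect()
}

fn main() {
    let names = ["Alice", "bob", "Alice", "Bob"];
    println!("{:?}", dedup_keep_order(&names));        // ["Alice", "bob", "Bob"]
    println!("{:?}", dedup_case_insensitive(&names));  // two names, arbitrary order
}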
Here is a code example
ReadFileAndUpdate
- read file.txt in %content%
- set %content.updated% as %now%
- write %content% to file.txt
I call this intent-based programming. There isn't a strict syntax (there are few rules): the developer creates a Goal (think of a function) and writes steps (each starting with -) to solve the goal. I've been using it for clients with very good results, and in the 9 months I've been building code with it, the experience has shown that far less code needs to be written and you see the project from a different perspective.
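For comparison, here is roughly what that goal has to expand to as ordinary code. This is only a hand-written sketch; the key=value file format, the `updated` field, and the Unix-timestamp choice are my assumptions, not something the tool prescribes:

use std::fs;
use std::io;
use std::time::{SystemTime, UNIX_EPOCH};

// Rough hand-expansion of the ReadFileAndUpdate goal above.
fn read_file_and_update() -> io::Result<()> {
    // - read file.txt into %content%
    let content = fs::read_to_string("file.txt")?;

    // - set %content.updated% as %now%
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before 1970")
        .as_secs();
    let mut lines: Vec<String> = content
        .lines()
        .filter(|l| !l.starts_with("updated="))
        .map(str::to_owned)
        .collect();
    lines.push(format!("updated={now}"));

    // - write %content% to file.txt
    fs::write("file.txt", lines.join("\n"))
}

fn main() -> io::Result<()> {
    read_file_and_update()
}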
Plain language is not precise enough.
The idea that "I tell the AI what I want, and it writes the code for me!" is "declarative programming" is so wrong it's almost comical... except that it's yet another instance of the LLM-idiots confusing their lack of knowledge with not needing to know or understand it. In the process the term "declarative programming" as a concept may end up flushed down the shitter.
For instance:
// pseudocode input
fizzBuzz(count)
  for each i in count
    if divisible by 3, print 'fizz'
    if divisible by 5, print 'buzz'
    if both, print 'fizz buzz'

// rust output
fn fizz_buzz(count: i32) {
    for i in 1..=count {
        match (i % 3, i % 5) {
            (0, 0) => println!("fizz buzz"),
            (0, _) => println!("fizz"),
            (_, 0) => println!("buzz"),
            _ => println!("{}", i),
        }
    }
}
First, on imperative versus declarative. I would describe imperative as "giving a list of instructions to follow". The words "instruction" and "direction" are largely synonyms in my mind, and the difference between them may be subtler than the difference between the original words they are trying to explain. Instead, I would say that declarative programming gives "a goal and a set of constraints". We describe what we want, not how to get it. A large part of describing what we want is describing what we don't want or can't do.
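A small illustration of that shift from "how" to "what", in plain Rust rather than a real constraint language (iterator chains are only mildly declarative, but the contrast is visible):

// Imperative: a list of instructions - how to get there, step by step.
fn evens_squared_imperative(xs: &[i32]) -> Vec<i32> {
    let mut out = Vec::new();
    for &x in xs {
        if x % 2 == 0 {
            out.push(x * x);
        }
    }
    out
}

// More declarative: state the goal ("squares of the elements") and the
// constraint ("only the even ones") and let the iterator machinery do the steps.
fn evens_squared_declarative(xs: &[i32]) -> Vec<i32> {
    xs.iter().filter(|&&x| x % 2 == 0).map(|&x| x * x).collect()
}

fn main() {
    let xs = [1, 2, 3, 4, 5, 6];
    assert_eq!(evens_squared_imperative(&xs), evens_squared_declarative(&xs));
    println!("{:?}", evens_squared_declarative(&xs)); // [4, 16, 36]
}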
On using LLMs for declarative programming, I assert that we already do this. Prompt engineering is all about defining a set of constraints on the LLM's response. The goal is often within the system prompt: answer the user's question given the context. The user's request is just one of many constraints on the answer.
This declaration in the form of constraints is a direct result of the fact that LLMs operate on conditional probabilities. An LLM chooses each token by taking the list of all possible tokens with their a priori probabilities and conditioning those on the tokens that precede it. By prefacing the generated output with a list of tokens describing constraints, we condition the LLM's generation to fit those constraints. The generated text is the result of applying the constraints to the space of all possible outcomes.
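In symbols, this is just the standard autoregressive factorization (nothing specific to any one model): with c the prompt, i.e. the constraint tokens, and y_1, ..., y_T the generated tokens,

    P(y_1, \dots, y_T \mid c) = \prod_{t=1}^{T} P(y_t \mid c, y_1, \dots, y_{t-1})

Every constraint token in c sits on the right-hand side of every factor, which is the precise sense in which the prompt "conditions" the output.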
As we know, this isn't perfect. Most declarative languages and their engines use strict logic to limit the generated solutions, whereas LLMs are probabilistic. The constraints aren't specified in concrete terms but as a set of arbitrary tokens whose influence on the generated output is based on frequency of occurrence within a corpus of text rather than any logical rules.
Still, the fact that the generated output is the result of conditioning based on a set of tokens provided by the user means that it uses constraints to determine an outcome that fits those constraints, which is exactly how we solve a problem based on a declarative description.
The real problem always comes back to the fact that the LLM can't just make code appear out of nowhere; it needs _your_ prompt (or at least code in the context window) to know what code to write. If you can't exactly describe the requirements - or, what is increasingly happening, don't _know_ the actual technical descriptions for what you are trying to accomplish - it's kinda like having a giant hammer with no nail to hit.
I'm worried about a future where we program ourselves into a circle, all programs starting to look the same simply because the original "hardcore" or "forgotten" patterns and strategies of software design "just don't need to be taught anymore". In other words, people getting things to work but having no idea how they work. Yes, I get the whole "most people don't know how cars work but use them", but as a software engineer, not really knowing how the actual source code itself works? It feels strange and probably ultimately the wrong direction.
I also think the entire idea of a fully automated feature build / test / deploy AI system is just impossible... the complexity of such a landscape is far too large to automate with some sort of token generator. AGI could do it, of course, but LLMs are so far from AGI it's laughable.
Note: when a syntax is similar to natural language, it is a good example if you want to prove to people that such syntaxes are a bad idea.
I prefer to use these tools as part of a red/green/refactor workflow, where I don't use them in the refactor step or the test-case step.