By @inacio17m on 7/23/2024
I see LLMs as a new programming primitive, akin to databases in the 70s and the explosion of web technologies in the 2000s. They offer programmers a unique capability: the power to infuse code with reasoning and natural language understanding.
Yet, new primitives need new abstractions. With RΞASON, I hope to have created one.
RΞASON is an open-source backend TypeScript framework for building great LLM apps.
One of the unique aspects of RΞASON is its use of TypeScript's `interface` to get structured output from LLMs:
Install with:

```bash
npx use-reason@latest
```
Create `src/entrypoints/joke.ts`:
```ts
import { reasonStream } from 'tryreason'

interface Joke {
  joke: string;
  topics: string[];
}

export async function* GET() {
  return reasonStream<Joke>('tell me a joke')
}
```
And run with:

```bash
npm run dev
```
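Assuming file-based routing (so `src/entrypoints/joke.ts` maps to `/joke`) and whatever port `npm run dev` prints (`3000` below is just a placeholder), you can then watch the structured joke stream in:

```bash
curl http://localhost:3000/joke
```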
This blog post will explore some of RΞASON's features and its design philosophy.
It won't be a tutorial; if you wish to learn how to use RΞASON, our docs are the best place to start.
As said above, I see LLMs as a new programming primitive.
I don't, however, see LLMs as a completely new way to program — LLMs should be adapted into our current programming paradigms, not the other way around.
To accomplish this, the right abstractions are needed. With RΞASON, I hope to have created them by following 5 essential principles:
Both the input and output of LLMs are text. This is great for human-to-LLM interactions, but it's far from ideal for code-to-LLM interactions:
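To see why, consider what calling an LLM from code looks like without structured output. A minimal sketch (the `callLLM` helper is hypothetical): you beg the model for JSON and hope it complies:

```ts
// Hypothetical helper that returns a raw text completion from some LLM API.
declare function callLLM(prompt: string): Promise<string>

async function getJoke(): Promise<{ joke: string; topics: string[] }> {
  const text = await callLLM(
    'Tell me a joke. Respond ONLY with JSON shaped like { "joke": string, "topics": string[] }'
  )

  // Brittle: the model may wrap the JSON in prose or markdown fences,
  // rename a key, or produce invalid JSON entirely.
  try {
    return JSON.parse(text)
  } catch {
    throw new Error(`LLM did not return valid JSON: ${text}`)
  }
}
```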
To address this problem, we need to somehow get structured output from LLMs. And while some frameworks offer solutions, they often come at the price of learning entirely new APIs.
RΞASON tackles this differently: it uses familiar concepts like TypeScript's `interface` and JSDoc comments.
```ts
import { reason } from 'tryreason'

interface Joke {
  /** Use this property to indicate the age rating of the joke */
  rating: number;

  joke: string;

  /** Use this property to explain the joke to those who did not understand it */
  explanation: string;
}

const joke = await reason<Joke>('tell me a really spicy joke')
```
With `reason()`, you directly call an LLM and receive structured output from it just by passing an `interface`. For example, here's the output of `joke`:
```json
{
  "joke": "I'd tell you a chemistry joke but I know I wouldn't get a reaction.",
  "rating": 18,
  "explanation": "This joke is a play on words. The term 'reaction' refers to both a chemical process and a response from someone. The humor comes from the double meaning, implying that the joke might not be funny enough to elicit a response."
}
```
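And since the `interface` doubles as the return type, the result is fully typed; your editor autocompletes the fields and the compiler catches mistakes (a small illustration, nothing RΞASON-specific):

```ts
joke.rating        // number
joke.explanation   // string
// joke.punchline  // compile error: 'punchline' does not exist on type 'Joke'
```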
A framework should only help in areas that do not differentiate your business/app.
Yet, in the context of LLM apps, frameworks often offer pre-made prompts, agents, and retrieval strategies. Those are key areas for the success of your app. You should be the one in charge of them, not the framework.
Moreover, with LLMs still being really new, libraries that try to offer pre-made prompts/agents/retrieval will either:

- become outdated as models and techniques evolve; or
- become bloated trying to cover every use case.
This is why RΞASON does not offer any pre-made prompts or agents. Instead, I try to offer awesome ways to create your own prompts and agents.
Many frameworks offer pre-made agent templates like:

```ts
const agent = new ConversationalAgent()
```
This approach has its pitfalls: as discussed previously, it risks becoming outdated or bloated.
You might wonder, 'Can't the framework provide basic agents for developers to extend?' Yes, that is a possibility, but it requires developers to learn both how the pre-made agent class is structured and the framework's API.
The alternative, creating custom agents, is often a more flexible path. However, the traditional Object-Oriented Programming (OOP) approach can lead to a lot of boilerplate code, as seen in LangChain's agent creation example. This issue is akin to why React transitioned from class components to functional components (source).
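To make the contrast concrete, here is a generic illustration of the class-based pattern; this is not LangChain's actual API, just the shape of the boilerplate it tends to involve:

```ts
// Hypothetical base classes, standing in for any class-based agent library.
declare abstract class Tool {
  abstract name: string
  abstract description: string
  abstract call(input: string): Promise<string>
}
declare class Agent {
  constructor(options: { tools: Tool[]; systemPrompt: string })
  run(input: string): Promise<string>
}

// The boilerplate: one class per tool, plus the wiring to assemble the agent.
class SerpApiTool extends Tool {
  name = 'serp-api'
  description = 'Searches the web for current events'

  async call(query: string): Promise<string> {
    // ...call the search API here...
    return ''
  }
}

const agent = new Agent({
  tools: [new SerpApiTool()],
  systemPrompt: 'You are a helpful assistant that can answer questions about current events.',
})
```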
RΞASON addresses this by using functions to represent agents and actions. Let's see how the equivalent agent looks in RΞASON:
```ts
import { useAgent } from 'tryreason'
import serpApi from '../actions/serp-api'

export const actions = [
  serpApi
]

/**
 * You are a helpful assistant that can answer questions about current events.
 */
export default async function WebAgent(userMessage: string) {
  const agent = await useAgent()
  return agent.run(userMessage)
}
```
When the LLM selects an action, RΞASON simply calls it (it's just a normal JavaScript function), passes in the parameters generated by the LLM, waits for the output, and returns it back to the LLM.
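For illustration, `../actions/serp-api.ts` could be as simple as the sketch below. The URL, parameters, and return shape here are my assumptions (a real call would also need an API key), but the point stands: it's just a plain exported function:

```ts
/**
 * Searches the web and returns the top results for the given query.
 */
export default async function serpApi(query: string): Promise<string> {
  // A real implementation would also pass an API key.
  const res = await fetch(`https://serpapi.com/search?q=${encodeURIComponent(query)}`)
  const results = await res.json()

  // Return something the LLM can read back as plain text.
  return JSON.stringify(results)
}
```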
Both the action and the agent are normal JavaScript functions. This approach, I believe, is highly effective for creating agents in a more streamlined and developer-friendly way.
LLMs are compute-heavy; a single completion can take 20+ seconds to generate. This is why streaming is a must for great LLM experiences.
However, streaming just text is not enough: for the reasons discussed above, you need to be able to stream structured output as well.
RΞASON supports this natively:
```ts
import { reasonStream } from 'tryreason';

interface City {
  description: string;
  state: string;
  country: string;
  population: number;
}

export async function* GET(req: Request) {
  return reasonStream<City>('tell me about San Francisco')
}
```
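On the consuming side, the idea is that you receive progressively more complete objects instead of raw text chunks. A sketch, assuming `reasonStream`'s result can be iterated with `for await` and that each yield is a partially-filled `City` (check the docs for the exact semantics):

```ts
import { reasonStream } from 'tryreason';

interface City {
  description: string;
  state: string;
  country: string;
  population: number;
}

// Assumption: each iteration yields a City with more fields filled in,
// so a UI could render `description` while `population` is still streaming.
for await (const partialCity of reasonStream<City>('tell me about San Francisco')) {
  console.log(partialCity)
}
```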
In order to create a great LLM app, observability should *probably* come above all else.
You need to know what your users are asking, what the LLM is responding, how long it takes to respond, where your agents are getting stuck, what actions they are calling, how your actions are responding, and so on.
We're super happy to say that RΞASON is OpenTelemetry-compatible out of the box!
You don't need to download any extra packages, add decorators, or anything else; by using RΞASON you get observability for free.
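Since the traces are plain OpenTelemetry, wiring them into any OTLP-compatible backend (Jaeger, Honeycomb, Grafana Tempo, and so on) should come down to the standard OTel environment variables. A sketch; whether RΞASON picks these up exactly like this is an assumption, so check the docs:

```bash
# Standard OpenTelemetry exporter settings, read by most OTel SDKs.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_SERVICE_NAME="my-reason-app"

npm run dev
```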
RΞASON is still in its early stages, so expect bugs when playing with it, but I'm committed to fixing them as fast as possible. Please open a GitHub issue if you find any.
I look forward to hearing what you all think and seeing what you all build with RΞASON.