# Promptd
Promptd is a set of tools to help you manage AI prompts. Below are instructions on how to use it as part of your application.
## 🔨 Installation
The JavaScript client is available from NPM:
```bash
yarn add @promptd-js/client     # yarn
npm install @promptd-js/client  # npm
```
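Then import the client wherever you need it:

```ts
import { Promptd } from '@promptd-js/client';
```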
## 📖 Quick start
### Fetch template
For this example, we're going to fetch an existing prompt template defined in the Promptd dashboard, whose slug (our way of uniquely identifying a prompt) is `say-hello-to-user`.

First, we need to instantiate a `Promptd` object and fetch the prompt template by its slug:
```ts
const promptd = new Promptd({ apiKey: promptdApiKey });
const template = promptd.template(promptSlug);
```
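In a real application you would typically load the API key from the environment rather than hard-coding it. A minimal sketch, where the `PROMPTD_API_KEY` variable name is just an assumption, not something the library requires:

```ts
import { Promptd } from '@promptd-js/client';

// Hypothetical: read the key from an environment variable instead of hard-coding it.
const promptd = new Promptd({ apiKey: process.env.PROMPTD_API_KEY ?? '' });
const template = promptd.template('say-hello-to-user');
```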
### Render prompt
What we get is a Mustache template, which we can turn into text by feeding the necessary template variables into the `render()` method. The template might look like `Say hello to {{ name }} in a cheerful way.`
```ts
const prompt = await template.render({ name });
```
The rendered raw prompt can be accessed with the `text()` method:
```ts
const raw_prompt = await prompt.text();
```
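For example, with the template above and a concrete value (the name `Alice` here is purely for illustration):

```ts
const prompt = await template.render({ name: 'Alice' });
console.log(await prompt.text());
// => "Say hello to Alice in a cheerful way."
```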
### Generate response
At this point, we can ask our LLM of choice to generate a response based on the full prompt. In this example, we're using llm-api to abstract away the underlying LLM, but `promptd` does not impose anything on that side of things.

Under the hood, we use zod-gpt to enforce fully validated and typed responses from the LLM.
```ts
import { z } from 'zod';
import { OpenAIChatApi } from 'llm-api';

const ai = new OpenAIChatApi({ apiKey: openAiKey }, { model: 'gpt-4-turbo' });

const result = await prompt.generate(
  {
    schema: z.object({
      message: z.string().describe('The message to be sent to user.'),
    }),
  },
  ai,
);

return result.data.message;
```
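Because the schema is a standard zod schema, you can also define it as a named constant and reuse its inferred TypeScript type. A minimal sketch, assuming the client propagates zod's type inference to `result.data`:

```ts
import { z } from 'zod';

// Hypothetical named schema, equivalent to the inline one above.
const helloSchema = z.object({
  message: z.string().describe('The message to be sent to user.'),
});

// Standard zod type inference: { message: string }
type HelloResponse = z.infer<typeof helloSchema>;

const result = await prompt.generate({ schema: helloSchema }, ai);
const reply: HelloResponse = result.data;
```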
To get an unstructured response, simply leave the schema empty:
```ts
const ai = new OpenAIChatApi({ apiKey: openAiKey }, { model: 'gpt-4-turbo' });
const result = await prompt.generate({}, ai);

return result.data;
```
You may also specify the default LLM model to use for this prompt when fetching the template:
```ts
const template = promptd.template(promptSlug, { ai });
const prompt = await template.render({ name });
const result = await prompt.generate({});
```
### Full listing
```ts
import { z } from 'zod';
import { Promptd } from '@promptd-js/client';
import { OpenAIChatApi } from 'llm-api';

const promptdApiKey = 'xxx'; // Grab this in your Promptd project settings page.
const promptSlug = 'say-hello-to-user'; // Find this on the prompt page in the Promptd dashboard.

const promptd = new Promptd({ apiKey: promptdApiKey });
const template = promptd.template(promptSlug); // `template` is a Mustache template

// We're using OpenAI's GPT-4 model in this example, but we're LLM-agnostic:
const openAiKey = 'yyy'; // Grab this in your OpenAI account.
const ai = new OpenAIChatApi({ apiKey: openAiKey }, { model: 'gpt-4-turbo' });

export const sayHello = async (name: string) => {
  // For example, the prompt body might be: "Say hello to {{ name }} in a cheerful way."
  const prompt = await template.render({ name });
  // If we called sayHello('Alice'), prompt would be "Say hello to Alice in a cheerful way."

  const result = await prompt.generate(
    {
      schema: z.object({
        message: z.string().describe('The message to be sent to user.'),
      }),
    },
    ai,
  );

  // The result might now contain something akin to "Hello, Alice, what a pleasure to meet you!"
  return result.data.message;
};
```
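With the helper in place, calling it is straightforward:

```ts
const greeting = await sayHello('Alice');
console.log(greeting); // e.g. "Hello, Alice, what a pleasure to meet you!"
```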