Prompt

A prompt is used to generate dynamic content by sending a completion request to the LLM of your choice. A new prompt instance is created by rendering a template with all required variables:

const promptd = new Promptd({ apiKey: promptdApiKey });
// `template` is a Mustache template with a single variable, `name`
const template = promptd.template(promptSlug);
const prompt = await template.render({ name });

Optionally, you can pass a runId to associate with the prompt when rendering it:

const run = await promptd.startRun({ name: 'Script say hello' });
const prompt = await template.render({ name }, run.id);

📝 How to use

A prompt instance supports the following actions:

generate(options: RequestOptions, ai?: CompletionApi)

Asks the LLM of your choice to generate a response based on this prompt. In this example we use llm-api to abstract away the underlying LLM provider, but promptd does not impose any particular client for that side of things.

By passing a Zod schema, you can opt in to zod-gpt to enforce fully validated and typed responses from the LLM.

import { OpenAIChatApi } from 'llm-api';
import { z } from 'zod';
 
const ai = new OpenAIChatApi({ apiKey: openAiKey }, { model: 'gpt-4-turbo' });
const result = await prompt.generate(
  {
    schema: z.object({
      message: z.string().describe('The message to be sent to user.'),
    }),
  },
  ai,
);
console.log(result.data.message);
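
Because a schema was supplied, zod-gpt validates the response against it, and result.data is typed accordingly; result.data.message above is a string.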

options

Request options can be overridden via this parameter. The RequestOptions object extends the request options defined in zod-gpt.

type RequestOptions<T extends z.ZodType = z.ZodType> = {
  // set a zod schema to enable JSON output
  schema?: T;
 
  // set to automatically slice the prompt on token overflow. the prompt is sliced starting from the last character
  // default: false
  autoSlice?: boolean;
 
  // attempt to auto heal the output via reflection
  // default: false
  autoHeal?: boolean;
 
  // set message history, useful if you want to continue an existing conversation
  messageHistory?: ChatRequestMessage[];
 
  // the number of times to retry this request due to rate limit or recoverable API errors
  // default: 3
  retries?: number;
  // default: 30s
  retryInterval?: number;
  // default: 60s
  timeout?: number;
};
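
For example, several of these options can be overridden per call. The conversation history below is illustrative; the message shape follows llm-api's ChatRequestMessage:

const result = await prompt.generate(
  {
    schema: z.object({
      message: z.string().describe('The message to be sent to user.'),
    }),
    // attempt to repair output that fails schema validation
    autoHeal: true,
    // continue an existing conversation (illustrative history)
    messageHistory: [
      { role: 'user', content: 'Hi there!' },
      { role: 'assistant', content: 'Hello! How can I help you today?' },
    ],
    // retry up to 5 times instead of the default 3
    retries: 5,
  },
  ai,
);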

ai

The LLM model to use for completion. If different models are set here and at the template level, the model passed to the generate() call takes precedence.

import { OpenAIChatApi } from 'llm-api';
 
const gpt35Turbo = new OpenAIChatApi(
  { apiKey: openAiKey },
  { model: 'gpt-3.5-turbo' },
);
const template = promptd.template(promptSlug, { ai: gpt35Turbo });
 
const prompt1 = await template.render({ name: 'Leo' });
const result1 = await prompt1.generate({}); // Model gpt-3.5-turbo is used
 
const gpt4o = new OpenAIChatApi({ apiKey: openAiKey }, { model: 'gpt-4o' });
const prompt2 = await template.render({ name: 'Lena' });
const result2 = await prompt2.generate({}, gpt4o); // Model gpt-4o is used

text()

Returns the raw prompt as a text string.

const name = 'Anne';
const prompt = await template.render({ name }); // Say hello to {{ name }} in a cheerful way!
const rawPrompt = prompt.text();
console.log(rawPrompt); // Say hello to Anne in a cheerful way!

variables()

Returns the variables used for rendering this prompt.

const prompt = await template.render({ name: 'Anne' }); // Say hello to {{ name }} in a cheerful way!
const vars = prompt.variables();
console.log({ vars }); // { vars: { name: 'Anne' } }

setRunId(runId: string)

Set the runId to be associated with this prompt.

const run = await promptd.startRun({ name: 'Script say hello' });
const prompt = await template.render({ name: 'Anne' });
prompt.setRunId(run.id);