prompty

Published: Mar 2, 2026 License: MIT Imports: 17 Imported by: 0

README


TL;DR — prompty is a library for prompt management, templating, and unified interaction with LLMs in Go. It supports loading prompts from files, Git, or HTTP and works with multiple backends (OpenAI, Anthropic, Gemini, Ollama) without locking you to a single vendor.

Modules and installation

The project is split into multiple Go modules. Install only what you need:

Layer | Package | Install
--- | --- | ---
Core | prompty (templates, file/embed registries in-tree) | go get github.com/skosovsky/prompty
Adapters | OpenAI | go get github.com/skosovsky/prompty/adapter/openai
 | Gemini | go get github.com/skosovsky/prompty/adapter/gemini
 | Anthropic | go get github.com/skosovsky/prompty/adapter/anthropic
 | Ollama | go get github.com/skosovsky/prompty/adapter/ollama
Registries | Git (remote) | go get github.com/skosovsky/prompty/remoteregistry/git

fileregistry and embedregistry are part of the core module (github.com/skosovsky/prompty).

Quick Start

A copy-paste-friendly example: load a prompt (in-memory here), format it with a payload, then call the OpenAI API via the adapter. Requires OPENAI_API_KEY in the environment.

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/openai/openai-go/v3"
	"github.com/openai/openai-go/v3/option"
	"github.com/skosovsky/prompty"
	"github.com/skosovsky/prompty/adapter"
	openaiadapter "github.com/skosovsky/prompty/adapter/openai"
)

func main() {
	tpl, err := prompty.NewChatPromptTemplate(
		[]prompty.MessageTemplate{
			{Role: prompty.RoleSystem, Content: "You are {{ .bot_name }}."},
			{Role: prompty.RoleUser, Content: "{{ .query }}"},
		},
		prompty.WithPartialVariables(map[string]any{"bot_name": "HelperBot"}),
	)
	if err != nil {
		log.Fatal(err)
	}
	type Payload struct {
		Query string `prompt:"query"`
	}
	ctx := context.Background()
	exec, err := tpl.FormatStruct(ctx, &Payload{Query: "What is 2+2?"})
	if err != nil {
		log.Fatal(err)
	}

	adp := openaiadapter.New()
	params, err := adp.TranslateTyped(ctx, exec)
	if err != nil {
		log.Fatal(err)
	}
	client := openai.NewClient(option.WithAPIKey(os.Getenv("OPENAI_API_KEY")))
	resp, err := client.Chat.Completions.New(ctx, *params)
	if err != nil {
		log.Fatal(err)
	}
	parts, err := adp.ParseResponse(ctx, resp)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(adapter.TextFromParts(parts))
}

Main abstractions

  • Registry — supplies ChatPromptTemplate by id (from files, embed, or remote). Interface: GetTemplate(ctx, id) (*ChatPromptTemplate, error).
  • Adapter — maps PromptExecution to a provider request and parses the response. Interface: Translate(ctx, exec) (any, error), ParseResponse(ctx, raw) ([]ContentPart, error), and optionally ParseStreamChunk (or ErrStreamNotImplemented).
  • Templating — ChatPromptTemplate is built from message templates and optional tools; pass a typed payload (a struct with prompt tags) to FormatStruct(ctx, payload) to get a PromptExecution. Registries can load YAML manifests and support WithPartials for shared {{ template "name" }} partials. Template functions (funcmaps) include truncate_chars, truncate_tokens, render_tools_as_xml, and render_tools_as_json.

Pipeline: Registry → Template + payload → PromptExecution → Adapter → provider API. HTTP/transport is the caller’s responsibility.

Features

  • Domain model: ContentPart (text, image, tool call/result), ChatMessage, ToolDefinition, PromptExecution with metadata; open-ended roles in manifests (validation in adapters). Message-level: provider-specific options are passed only via ChatMessage.Metadata (e.g. anthropic_cache for prompt caching, gemini_search_grounding for Gemini).
  • Media: exec.ResolveMedia(ctx, fetcher) fills MediaPart.Data using a Fetcher (e.g. mediafetch.DefaultFetcher{}); use before Translate for adapters that require inline data (Anthropic, Ollama); OpenAI and Gemini accept URL natively.
  • Templating: text/template with fail-fast validation, PartialVariables, optional messages, chat history splicing. DRY: registries support WithPartials(pattern) so manifests can use {{ template "name" }} with shared partials (e.g. _partials/*.tmpl).
  • Template functions: truncate_chars, truncate_tokens, render_tools_as_xml / render_tools_as_json for tool injection.
  • Registries: load manifests from filesystem (fileregistry), embed (embedregistry), or remote HTTP/Git (remoteregistry) with TTL cache.
  • Adapters: map PromptExecution to provider request types (OpenAI, Anthropic, Gemini, Ollama); parse responses back to []ContentPart. Tool result is multimodal: ToolResultPart.Content is []ContentPart (text and/or images). Adapters that do not support media in tool results return ErrUnsupportedContentType when MediaPart is present in ToolResultPart.Content.
  • Observability: PromptMetadata (ID, version, description, tags, environment) on every execution.

Registries

Package | Description
--- | ---
github.com/skosovsky/prompty/fileregistry | Loads YAML manifests from a directory; lazy load with cache; Reload() clears the cache; WithPartials(relativePattern) enables {{ template "name" }}
github.com/skosovsky/prompty/embedregistry | Loads from embed.FS at build time; eager load; no mutex; WithPartials(pattern) for shared partials
github.com/skosovsky/prompty/remoteregistry | Fetches via a Fetcher (HTTP or Git); TTL cache; Evict/EvictAll; Close() for resource cleanup

Template name and environment resolve to {name}.{env}.yaml (or .yml), with fallback to {name}.yaml. Name must not contain ':'.

Adapters

Package | Translate result | Notes
--- | --- | ---
github.com/skosovsky/prompty/adapter/openai | *openai.ChatCompletionNewParams | Tools, images (URL/base64), tool calls
github.com/skosovsky/prompty/adapter/anthropic | *anthropic.MessageNewParams | Images as base64 only
github.com/skosovsky/prompty/adapter/gemini | *gemini.Request | Model set at call site
github.com/skosovsky/prompty/adapter/ollama | *api.ChatRequest | Native Ollama tools

Each adapter implements Translate(ctx, exec) (any, error) and TranslateTyped(ctx, exec) for the concrete type; ParseResponse(ctx, raw) returns []prompty.ContentPart; ParseStreamChunk(ctx, rawChunk) returns stream parts or ErrStreamNotImplemented. Helpers: adapter.TextFromParts and adapter.ExtractModelConfig.

  • Tool results — ToolResultPart.Content is []ContentPart (multimodal). Adapters that do not support media in tool results return adapter.ErrUnsupportedContentType when a MediaPart is present.
  • Media — OpenAI and Gemini accept image URLs in MediaPart natively. Anthropic and Ollama accept base64 only: call exec.ResolveMedia(ctx, fetcher) before Translate when using image URLs, passing a Fetcher (e.g. mediafetch.DefaultFetcher{} or a custom implementation); otherwise the adapter returns adapter.ErrMediaNotResolved. The core has no HTTP dependency; the default fetcher lives in mediafetch.

Architecture

flowchart LR
    Registry[Registry]
    Template[ChatPromptTemplate]
    Format[FormatStruct]
    Exec[PromptExecution]
    Adapter[ProviderAdapter]
    API[LLM API]

    Registry -->|GetTemplate| Template
    Template -->|FormatStruct + payload| Format
    Format --> Exec
    Exec -->|Translate| Adapter
    Adapter -->|request| API
    API -->|raw response| Adapter
    Adapter -->|ParseResponse| ContentParts[ContentPart]

Pipeline: Registry → Template + typed payload → fail-fast validation → rendering (with tool injection) → PromptExecution → Adapter → LLM API. HTTP/transport is the caller’s responsibility.

Template functions

  • truncate_chars .text 4000 — trim by rune count
  • truncate_tokens .text 2000 — trim by token count (uses TokenCounter from template options; default CharFallbackCounter)
  • render_tools_as_xml .Tools / render_tools_as_json .Tools — inject tool definitions into the prompt (e.g. for local Llama)

Development

This repo uses Go Workspaces (go.work). The root and all adapter/registry submodules must be listed there so that changes to the core prompty package and adapters compile together in one PR without publishing intermediate versions.

Build and test (from repo root):

go work sync
go build ./...
go test ./...
cd adapter/openai && go build . && go test . && cd ../..
cd adapter/anthropic && go build . && go test . && cd ../..
cd adapter/gemini && go build . && go test . && cd ../..
cd adapter/ollama && go build . && go test . && cd ../..
cd remoteregistry/git && go build . && go test . && cd ../..

Ensure go.work includes: ., ./adapter/openai, ./adapter/anthropic, ./adapter/gemini, ./adapter/ollama, ./remoteregistry/git.

Running examples locally: go.work already includes ./examples/basic_chat, ./examples/git_prompts, and ./examples/funcmap_tools. From the repo root run go run ./examples/basic_chat (or cd into an example dir and go run .). Each example’s go.mod uses replace for local development; remove those when using a published module.

License

MIT. See LICENSE.

Documentation

Overview

Package prompty provides prompt template management for LLM applications. It supports multimodal content, tool definitions, fail-fast validation, and provider-agnostic rendering via text/template.

Example
package main

import (
	"context"
	"fmt"

	"github.com/skosovsky/prompty"
)

func main() {
	tpl, err := prompty.NewChatPromptTemplate(
		[]prompty.MessageTemplate{
			{Role: prompty.RoleSystem, Content: "You are {{ .bot_name }}."},
			{Role: prompty.RoleUser, Content: "{{ .query }}"},
		},
		prompty.WithPartialVariables(map[string]any{"bot_name": "HelperBot"}),
	)
	if err != nil {
		panic(err)
	}
	type Payload struct {
		Query string `prompt:"query"`
	}
	ctx := context.Background()
	exec, err := tpl.FormatStruct(ctx, &Payload{Query: "What is 2+2?"})
	if err != nil {
		panic(err)
	}
	fmt.Println(exec.Messages[0].Content[0].(prompty.TextPart).Text)
	fmt.Println(exec.Messages[1].Content[0].(prompty.TextPart).Text)
}
Output:

You are HelperBot.
What is 2+2?


Constants

This section is empty.

Variables

var (
	ErrMissingVariable  = errors.New("prompty: required template variable not provided")
	ErrTemplateRender   = errors.New("prompty: template rendering failed")
	ErrTemplateParse    = errors.New("prompty: template parsing failed")
	ErrInvalidPayload   = errors.New("prompty: payload struct is invalid or missing prompt tags")
	ErrTemplateNotFound = errors.New("prompty: template not found in registry")
	ErrInvalidManifest  = errors.New("prompty: manifest file is malformed")
	ErrReservedVariable = errors.New("prompty: reserved variable name in payload (use a different prompt tag than Tools)")
	// ErrInvalidName indicates template name or env contains invalid characters (e.g. ':', path separators).
	ErrInvalidName = errors.New("prompty: invalid template name")
)

Sentinel errors for template and registry operations. All use prefix "prompty:" for identification. Callers should use errors.Is/errors.As.

Functions

func ValidateID added in v0.3.0

func ValidateID(id string) error

ValidateID checks that id is safe for use in paths and cache keys. Rejects empty id and ids containing '/', '\\', "..", or ':'. Call before registry GetTemplate/Stat or path resolution.

func ValidateName

func ValidateName(name, env string) error

ValidateName checks that name and env are safe for use in paths and cache keys. Rejects empty name and names containing '/', '\\', "..", or ':'. Call before registry GetTemplate or path resolution.
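The documented rejection rules can be sketched in a few lines. validateName below is a hypothetical reimplementation of the described behaviour (reject empty, '/', '\\', "..", ':'), not the library's source.

```go
package main

import (
	"fmt"
	"strings"
)

// validateName mirrors the documented rules: reject empty names and
// names containing '/', '\', "..", or ':'. Illustrative only.
func validateName(name string) error {
	if name == "" {
		return fmt.Errorf("invalid template name: empty")
	}
	if strings.ContainsAny(name, `/\:`) || strings.Contains(name, "..") {
		return fmt.Errorf("invalid template name: %q", name)
	}
	return nil
}

func main() {
	// "doctor.prod" is fine: a single '.' is allowed, only ".." is not.
	for _, n := range []string{"doctor", "doctor.prod", "../etc/passwd", "a:b"} {
		fmt.Printf("%-14s -> %v\n", n, validateName(n))
	}
}
```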

Types

type CharFallbackCounter

type CharFallbackCounter struct {
	CharsPerToken int
}

CharFallbackCounter estimates tokens as runes/CharsPerToken. Zero value uses 4 chars per token (English average).

func (*CharFallbackCounter) Count

func (c *CharFallbackCounter) Count(text string) (int, error)

Count returns estimated token count: ceil(rune_count / CharsPerToken). If CharsPerToken <= 0, uses 4.
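The arithmetic is worth seeing once. estimateTokens below reproduces the documented formula (integer ceiling of rune count over CharsPerToken, defaulting to 4) as a standalone sketch, not the library's code.

```go
package main

import "fmt"

// estimateTokens reproduces the documented formula:
// ceil(rune_count / charsPerToken), with charsPerToken <= 0 meaning 4.
func estimateTokens(text string, charsPerToken int) int {
	if charsPerToken <= 0 {
		charsPerToken = 4
	}
	runes := len([]rune(text))
	return (runes + charsPerToken - 1) / charsPerToken // integer ceiling
}

func main() {
	fmt.Println(estimateTokens("hello world", 0)) // 11 runes, default 4 → ceil(2.75) = 3
	fmt.Println(estimateTokens("héllo", 2))       // 5 runes (not 6 bytes) → ceil(2.5) = 3
}
```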

type ChatMessage

type ChatMessage struct {
	Role     Role
	Content  []ContentPart
	Metadata map[string]any // Provider-specific flags; adapters read known keys (e.g. anthropic_cache, gemini_search_grounding)
}

ChatMessage is a single message with role and content parts (supports multimodal). Provider-specific options (e.g. anthropic_cache, gemini_search_grounding) are passed only via Metadata; adapters may ignore unknown keys.

type ChatPromptTemplate

type ChatPromptTemplate struct {
	Messages         []MessageTemplate
	PartialVariables map[string]any
	Tools            []ToolDefinition
	ModelConfig      map[string]any
	Metadata         PromptMetadata
	ResponseFormat   *SchemaDefinition // JSON Schema for structured output (passed to PromptExecution)
	RequiredVars     []string          // explicit required vars from manifest; merged with template-derived in FormatStruct
	// contains filtered or unexported fields
}

ChatPromptTemplate holds message templates and options for rendering. Use NewChatPromptTemplate to construct; options are applied via ChatTemplateOption. Fields must not be mutated after construction to ensure goroutine safety.

func CloneTemplate

func CloneTemplate(c *ChatPromptTemplate) *ChatPromptTemplate

CloneTemplate returns a copy of the template with cloned slice and map fields. Registries use this so callers cannot mutate the cached template.

func NewChatPromptTemplate

func NewChatPromptTemplate(messages []MessageTemplate, opts ...ChatTemplateOption) (*ChatPromptTemplate, error)

NewChatPromptTemplate builds a template with defensive copies and applies options. Returns ErrTemplateParse if any message content fails to parse.

Example
package main

import (
	"fmt"

	"github.com/skosovsky/prompty"
)

func main() {
	msgs := []prompty.MessageTemplate{
		{Role: prompty.RoleSystem, Content: "You are a helpful assistant."},
		{Role: prompty.RoleUser, Content: "Hello, {{ .user_name }}!"},
	}
	tpl, err := prompty.NewChatPromptTemplate(msgs)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(tpl.Messages))
}
Output:

2

func (*ChatPromptTemplate) FormatStruct

func (c *ChatPromptTemplate) FormatStruct(ctx context.Context, payload any) (*PromptExecution, error)

FormatStruct renders the template using payload struct (prompt tags), merges variables, validates, splices history.

Example
package main

import (
	"context"
	"fmt"

	"github.com/skosovsky/prompty"
)

func main() {
	tpl, _ := prompty.NewChatPromptTemplate([]prompty.MessageTemplate{
		{Role: prompty.RoleSystem, Content: "Hello, {{ .name }}!"},
	})
	type Payload struct {
		Name string `prompt:"name"`
	}
	ctx := context.Background()
	exec, err := tpl.FormatStruct(ctx, &Payload{Name: "Alice"})
	if err != nil {
		panic(err)
	}
	text := exec.Messages[0].Content[0].(prompty.TextPart).Text
	fmt.Println(text)
}
Output:

Hello, Alice!

func (*ChatPromptTemplate) ValidateVariables added in v0.3.0

func (c *ChatPromptTemplate) ValidateVariables(data map[string]any) error

ValidateVariables runs a dry-run execute with the given data (same merge as FormatStruct: PartialVariables + data + Tools). Returns an error with role/message index context if any template references a missing or invalid variable.

type ChatTemplateOption

type ChatTemplateOption func(*ChatPromptTemplate)

ChatTemplateOption configures ChatPromptTemplate (functional options pattern).

func WithConfig

func WithConfig(config map[string]any) ChatTemplateOption

WithConfig sets model config (e.g. temperature, max_tokens).

func WithMetadata

func WithMetadata(meta PromptMetadata) ChatTemplateOption

WithMetadata sets prompt metadata for observability.

func WithPartialVariables

func WithPartialVariables(vars map[string]any) ChatTemplateOption

WithPartialVariables sets default variables merged with payload (payload overrides).

func WithPartialsFS added in v0.3.0

func WithPartialsFS(fsys fs.FS, pattern string) ChatTemplateOption

WithPartialsFS sets an fs.FS and pattern for partials (e.g. embed FS and "partials/*.tmpl"); enables {{ template "name" }}.

func WithPartialsGlob added in v0.3.0

func WithPartialsGlob(glob string) ChatTemplateOption

WithPartialsGlob sets a glob pattern (e.g. "_partials/*.tmpl") to parse before message templates; enables {{ template "name" }}.

func WithRequiredVars

func WithRequiredVars(vars []string) ChatTemplateOption

WithRequiredVars sets explicit required variable names (e.g. from manifest variables.required). Merged with variables inferred from template content in FormatStruct.

func WithResponseFormat added in v0.2.0

func WithResponseFormat(schema *SchemaDefinition) ChatTemplateOption

WithResponseFormat sets the JSON Schema for structured response format (used by OpenAI, Gemini).

func WithTokenCounter

func WithTokenCounter(tc TokenCounter) ChatTemplateOption

WithTokenCounter sets the token counter for truncate_tokens in templates.

func WithTools

func WithTools(tools []ToolDefinition) ChatTemplateOption

WithTools sets tool definitions available in templates as .Tools.

Example
package main

import (
	"fmt"

	"github.com/skosovsky/prompty"
)

func main() {
	tpl, _ := prompty.NewChatPromptTemplate(
		[]prompty.MessageTemplate{
			{Role: prompty.RoleSystem, Content: "Tools: {{ render_tools_as_json .Tools }}"},
		},
		prompty.WithTools([]prompty.ToolDefinition{
			{Name: "get_weather", Description: "Get weather", Parameters: nil},
		}),
	)
	fmt.Println(len(tpl.Tools))
}
Output:

1

type ContentPart

type ContentPart interface {
	// contains filtered or unexported methods
}

ContentPart is a sealed interface for message parts. Only package types implement it via isContentPart().

type Fetcher added in v0.3.0

type Fetcher interface {
	Fetch(ctx context.Context, url string) (data []byte, mimeType string, err error)
}

Fetcher defines how media URLs are resolved into raw bytes. Callers can use mediafetch.DefaultFetcher or provide a custom implementation (e.g. S3, local files).

type MediaPart added in v0.2.0

type MediaPart struct {
	MediaType string // "image", "audio", "video", "document"
	MIMEType  string // e.g. "application/pdf", "image/jpeg"
	URL       string // Optional: link (adapters may fetch and convert to inline)
	Data      []byte // Optional: raw bytes (base64 is decoded by adapters as needed)
}

MediaPart holds universal media (image, audio, video, document). URL or Data may be set. Adapters that do not accept URL natively may download the URL in Translate(ctx) and send inline data.

type MessageTemplate

type MessageTemplate struct {
	Role     Role           // RoleSystem, RoleUser, RoleAssistant (and others; see Role* constants)
	Content  string         // Go text/template: e.g. "Hello, {{ .user_name }}"
	Optional bool           // true → skip if all referenced variables are zero-value
	Metadata map[string]any `yaml:"metadata,omitempty"`
}

MessageTemplate is the raw template for one message before rendering. After FormatStruct it becomes a ChatMessage with substituted values. Optional: true skips the message if all referenced variables are zero-value. Provider-specific options go in Metadata (e.g. anthropic_cache: true, gemini_search_grounding: true).

type PromptExecution

type PromptExecution struct {
	Messages       []ChatMessage
	Tools          []ToolDefinition
	ModelConfig    map[string]any
	Metadata       PromptMetadata
	ResponseFormat *SchemaDefinition `json:"response_format,omitempty" yaml:"response_format,omitempty"`
}

PromptExecution is the result of formatting a template; immutable after creation.

func (*PromptExecution) AddMessage added in v0.3.0

func (e *PromptExecution) AddMessage(msg ChatMessage) *PromptExecution

AddMessage appends one message. Clones Messages before append; returns e for chaining.

func (*PromptExecution) ResolveMedia added in v0.2.1

func (e *PromptExecution) ResolveMedia(ctx context.Context, fetcher Fetcher) error

ResolveMedia fills Data and MIMEType for all MediaParts that have a URL but no Data, using the provided Fetcher. Only "image" media type is supported; other types with URL and empty Data return an error (fail-fast).

func (*PromptExecution) WithHistory added in v0.3.0

func (e *PromptExecution) WithHistory(history []ChatMessage) *PromptExecution

WithHistory appends history messages after system/developer block. Clones Messages before append; returns e for chaining.

type PromptMetadata

type PromptMetadata struct {
	ID          string // From manifest id
	Version     string
	Description string   // From manifest description
	Tags        []string // From manifest metadata.tags
	Environment string   // Set by registry when loading by env (e.g. production); not from manifest
}

PromptMetadata holds observability metadata.

type PromptRegistry

type PromptRegistry = Registry

PromptRegistry is an alias for Registry for backward compatibility; prefer Registry.

type ReasoningPart added in v0.2.0

type ReasoningPart struct {
	Text string
}

ReasoningPart is the hidden reasoning chain returned by some models (e.g. DeepSeek R1, OpenAI o-series).

type Registry added in v0.3.0

type Registry interface {
	GetTemplate(ctx context.Context, id string) (*ChatPromptTemplate, error)
	List(ctx context.Context) ([]string, error)
	Stat(ctx context.Context, id string) (TemplateInfo, error)
}

Registry returns a chat prompt template by id, lists ids, and returns template metadata. id is a single identifier (e.g. "doctor", "doctor.prod"); environments are expressed via file layout.

type Role

type Role string

Role is the message role in a chat (system, developer, user, assistant, tool).

const (
	RoleSystem    Role = "system"
	RoleDeveloper Role = "developer" // Replaces system for OpenAI o1/o3-style models
	RoleUser      Role = "user"
	RoleAssistant Role = "assistant"
	RoleTool      Role = "tool"
)

Chat message roles.

type SchemaDefinition added in v0.2.0

type SchemaDefinition struct {
	Name        string         `json:"name,omitempty" yaml:"name,omitempty"`
	Description string         `json:"description,omitempty" yaml:"description,omitempty"`
	Schema      map[string]any `json:"schema" yaml:"schema"` // JSON Schema
}

SchemaDefinition describes a structured output (JSON Schema) for response format.

type TemplateInfo added in v0.3.0

type TemplateInfo struct {
	ID        string
	Version   string
	UpdatedAt time.Time
}

TemplateInfo holds metadata about a template without parsing its body.

type TextPart

type TextPart struct {
	Text string
}

TextPart holds plain text content.

type TokenCounter

type TokenCounter interface {
	Count(text string) (int, error)
}

TokenCounter estimates token count for a string. Callers can plug in an exact tokenizer (e.g. tiktoken); default is CharFallbackCounter.

type ToolCallPart

type ToolCallPart struct {
	ID        string // Empty for models that do not support ID (e.g. base Gemini)
	Name      string
	Args      string // Full JSON string of arguments (non-stream response)
	ArgsChunk string // Chunk of JSON arguments (streaming); client glues chunks
}

ToolCallPart represents an AI request to call a function (in assistant message). In streaming: ArgsChunk holds incremental JSON; Args is set in non-stream ParseResponse.

type ToolDefinition

type ToolDefinition struct {
	Name        string         `json:"name"`
	Description string         `json:"description"`
	Parameters  map[string]any `json:"parameters,omitempty"` // JSON Schema for parameters
}

ToolDefinition is the universal tool schema. JSON tags are required for template functions (e.g. render_tools_as_json) that marshal tools.

type ToolResultPart

type ToolResultPart struct {
	ToolCallID string
	Name       string
	Content    []ContentPart
	IsError    bool
}

ToolResultPart is the result of a tool call (in message with Role "tool"). Content is a slice of multimodal parts (text, images, etc.).

type VariableError

type VariableError struct {
	Variable string
	Template string
	Err      error
}

VariableError wraps a sentinel error with variable and template context. Use errors.Is(err, ErrMissingVariable) and errors.As(err, &variableErr) to inspect.

func (*VariableError) Error

func (e *VariableError) Error() string

Error implements error.

func (*VariableError) Unwrap

func (e *VariableError) Unwrap() error

Unwrap returns the wrapped error for errors.Is/errors.As.

Directories

Path Synopsis
Package adapter defines the ProviderAdapter interface for mapping prompty's canonical PromptExecution to provider-specific request/response types (e.g.
Package embedregistry provides an embed.FS-based prompt registry that loads all YAML manifests at construction (eager).
Package fileregistry provides a filesystem-based prompt registry that loads YAML manifests on demand (lazy) and caches them.
internal
cast
Package cast provides type conversion helpers for map[string]any and similar generic data.
Package manifest parses YAML/JSON prompt manifests into prompty.ChatPromptTemplate.
Package mediafetch provides safe URL download for adapters that require inline media (e.g.
Package remoteregistry provides a remote prompt registry that loads YAML manifests via a Fetcher (HTTP or Git).
