gollm is a Go package designed to help you build your own AI golems. Just as the mystical golem of legend was brought to life with sacred words, gollm empowers you to breathe life into your AI creations using the power of Large Language Models (LLMs). This package simplifies and streamlines interactions with various LLM providers, offering a unified, flexible, and powerful interface for AI engineers and developers to craft their own digital servants.
Among its high-level helpers, gollm provides pre-built functions such as ChainOfThought for complex reasoning tasks. gollm can handle a wide range of AI-powered tasks, including complex reasoning: use the ChainOfThought function to analyze problems step-by-step.

To install gollm, run:

go get github.com/teilomillet/gollm

The following quick-start example creates an LLM instance with a custom configuration and generates a response:
package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "time"

    "github.com/teilomillet/gollm"
)
func main() {
    // Load API key from environment variable
    apiKey := os.Getenv("OPENAI_API_KEY")
    if apiKey == "" {
        log.Fatalf("OPENAI_API_KEY environment variable is not set")
    }

    // Create a new LLM instance with custom configuration
    llm, err := gollm.NewLLM(
        gollm.SetProvider("openai"),
        gollm.SetModel("gpt-4o-mini"),
        gollm.SetAPIKey(apiKey),
        gollm.SetMaxTokens(200),
        gollm.SetMaxRetries(3),
        gollm.SetRetryDelay(time.Second*2),
        gollm.SetLogLevel(gollm.LogLevelInfo),
    )
    if err != nil {
        log.Fatalf("Failed to create LLM: %v", err)
    }

    ctx := context.Background()

    // Create a basic prompt
    prompt := gollm.NewPrompt("Explain the concept of 'recursion' in programming.")

    // Generate a response
    response, err := llm.Generate(ctx, prompt)
    if err != nil {
        log.Fatalf("Failed to generate text: %v", err)
    }

    fmt.Printf("Response:\n%s\n", response)
}
Here's a quick reference guide for the most commonly used functions and options in the gollm package:
Create and configure an LLM instance:

llm, err := gollm.NewLLM(
    gollm.SetProvider("openai"),
    gollm.SetModel("gpt-4"),
    gollm.SetAPIKey("your-api-key"),
    gollm.SetMaxTokens(100),
    gollm.SetTemperature(0.7),
    gollm.SetMemory(4096),
)

Build a prompt:

prompt := gollm.NewPrompt("Your prompt text here",
    gollm.WithContext("Additional context"),
    gollm.WithDirectives("Be concise", "Use examples"),
    gollm.WithOutput("Expected output format"),
    gollm.WithMaxLength(300),
)

Generate a response:

response, err := llm.Generate(ctx, prompt)

Perform chain-of-thought reasoning:

response, err := tools.ChainOfThought(ctx, llm, "Your question here")

Optimize a prompt:

optimizer := optimizer.NewPromptOptimizer(llm, initialPrompt, taskDescription,
    optimizer.WithCustomMetrics(/* custom metrics */),
    optimizer.WithRatingSystem("numerical"),
    optimizer.WithThreshold(0.8),
)
optimizedPrompt, err := optimizer.OptimizePrompt(ctx)

Compare models:

results, err := tools.CompareModels(ctx, promptText, validateFunc, configs...)
The gollm package offers a range of advanced features to enhance your AI applications:
Create sophisticated prompts with multiple components:
prompt := gollm.NewPrompt("Explain the concept of recursion in programming.",
gollm.WithContext("The audience is beginner programmers."),
gollm.WithDirectives(
"Use simple language and avoid jargon.",
"Provide a practical example.",
"Explain potential pitfalls and how to avoid them.",
),
gollm.WithOutput("Structure your response with sections: Definition, Example, Pitfalls, Best Practices."),
gollm.WithMaxLength(300),
)
response, err := llm.Generate(ctx, prompt)
if err != nil {
log.Fatalf("Failed to generate explanation: %v", err)
}
fmt.Printf("Explanation of Recursion:\n%s\n", response)
Use the ChainOfThought function for step-by-step reasoning:
question := "What is the result of 15 * 7 + 22?"
response, err := tools.ChainOfThought(ctx, llm, question)
if err != nil {
    log.Fatalf("Failed to perform chain of thought: %v", err)
}
fmt.Printf("Chain of Thought:\n%s\n", response)
Load examples directly from files:
examples, err := utils.ReadExamplesFromFile("examples.txt")
if err != nil {
    log.Fatalf("Failed to read examples: %v", err)
}

prompt := gollm.NewPrompt("Generate a similar example:",
    gollm.WithExamples(examples...),
)

response, err := llm.Generate(ctx, prompt)
if err != nil {
    log.Fatalf("Failed to generate example: %v", err)
}
fmt.Printf("Generated Example:\n%s\n", response)
Create reusable prompt templates for consistent prompt generation:
// Create a new prompt template
template := gollm.NewPromptTemplate(
    "AnalysisTemplate",
    "A template for analyzing topics",
    "Provide a comprehensive analysis of {{.Topic}}. Consider the following aspects:\n"+
        "1. Historical context\n"+
        "2. Current relevance\n"+
        "3. Future implications",
    gollm.WithPromptOptions(
        gollm.WithDirectives(
            "Use clear and concise language",
            "Provide specific examples where appropriate",
        ),
        gollm.WithOutput("Structure your analysis with clear headings for each aspect."),
    ),
)

// Use the template to create a prompt
data := map[string]interface{}{
    "Topic": "artificial intelligence in healthcare",
}
prompt, err := template.Execute(data)
if err != nil {
    log.Fatalf("Failed to execute template: %v", err)
}

// Generate a response using the created prompt
response, err := llm.Generate(ctx, prompt)
if err != nil {
    log.Fatalf("Failed to generate response: %v", err)
}
fmt.Printf("Analysis:\n%s\n", response)
Ensure your LLM outputs are valid JSON:
prompt := gollm.NewPrompt("Analyze the pros and cons of remote work.",
gollm.WithOutput("Respond in JSON format with 'topic', 'pros', 'cons', and 'conclusion' fields."),
)
response, err := llm.Generate(ctx, prompt, gollm.WithJSONSchemaValidation())
if err != nil {
log.Fatalf("Failed to generate valid analysis: %v", err)
}
var result AnalysisResult
if err := json.Unmarshal([]byte(response), &result); err != nil {
log.Fatalf("Failed to parse response: %v", err)
}
fmt.Printf("Analysis: %+v\n", result)
Use the PromptOptimizer to automatically refine and improve your prompts:
initialPrompt := gollm.NewPrompt("Write a short story about a robot learning to love.")
taskDescription := "Generate a compelling short story that explores the theme of artificial intelligence developing emotions."

optimizerInstance := optimizer.NewPromptOptimizer(
    llm,
    initialPrompt,
    taskDescription,
    optimizer.WithCustomMetrics(
        optimizer.Metric{Name: "Creativity", Description: "How original and imaginative the story is"},
        optimizer.Metric{Name: "Emotional Impact", Description: "How well the story evokes feelings in the reader"},
    ),
    optimizer.WithRatingSystem("numerical"),
    optimizer.WithThreshold(0.8),
    optimizer.WithVerbose(),
)

optimizedPrompt, err := optimizerInstance.OptimizePrompt(ctx)
if err != nil {
    log.Fatalf("Optimization failed: %v", err)
}
fmt.Printf("Optimized Prompt: %s\n", optimizedPrompt.Input)
Compare responses from different LLM providers or models:
configs := []*gollm.Config{
    {
        Provider:  "openai",
        Model:     "gpt-4o-mini",
        APIKey:    os.Getenv("OPENAI_API_KEY"),
        MaxTokens: 500,
    },
    {
        Provider:  "anthropic",
        Model:     "claude-3-5-sonnet-20240620",
        APIKey:    os.Getenv("ANTHROPIC_API_KEY"),
        MaxTokens: 500,
    },
    {
        Provider:  "groq",
        Model:     "llama-3.1-70b-versatile",
        APIKey:    os.Getenv("GROQ_API_KEY"),
        MaxTokens: 500,
    },
}

promptText := "Tell me a joke about programming. Respond in JSON format with 'setup' and 'punchline' fields."

validateJoke := func(joke map[string]interface{}) error {
    // Type-assert the fields so missing or non-string values are caught too
    setup, okSetup := joke["setup"].(string)
    punchline, okPunchline := joke["punchline"].(string)
    if !okSetup || !okPunchline || setup == "" || punchline == "" {
        return fmt.Errorf("joke must have both a setup and a punchline")
    }
    return nil
}

results, err := tools.CompareModels(context.Background(), promptText, validateJoke, configs...)
if err != nil {
    log.Fatalf("Error comparing models: %v", err)
}

fmt.Println(tools.AnalyzeComparisonResults(results))
Enable memory to maintain context across multiple interactions:
llm, err := gollm.NewLLM(
    gollm.SetProvider("openai"),
    gollm.SetModel("gpt-3.5-turbo"),
    gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
    gollm.SetMemory(4096), // Enable memory with a 4096 token limit
)
if err != nil {
    log.Fatalf("Failed to create LLM: %v", err)
}

ctx := context.Background()

// First interaction
prompt1 := gollm.NewPrompt("What's the capital of France?")
response1, err := llm.Generate(ctx, prompt1)
if err != nil {
    log.Fatalf("Failed to generate response: %v", err)
}
fmt.Printf("Response 1: %s\n", response1)

// Second interaction, referencing the first
prompt2 := gollm.NewPrompt("What's the population of that city?")
response2, err := llm.Generate(ctx, prompt2)
if err != nil {
    log.Fatalf("Failed to generate response: %v", err)
}
fmt.Printf("Response 2: %s\n", response2)
Prompt Engineering: Use NewPrompt() with options like WithContext(), WithDirectives(), and WithOutput() to create well-structured prompts.

prompt := gollm.NewPrompt("Your main prompt here",
    gollm.WithContext("Provide relevant context"),
    gollm.WithDirectives("Be concise", "Use examples"),
    gollm.WithOutput("Specify expected output format"),
)
Utilize Prompt Templates: Create reusable PromptTemplate objects for consistent prompt generation.

template := gollm.NewPromptTemplate(
    "CustomTemplate",
    "A template for custom prompts",
    "Generate a {{.Type}} about {{.Topic}}",
    gollm.WithPromptOptions(
        gollm.WithDirectives("Be creative", "Use vivid language"),
        gollm.WithOutput("Your {{.Type}}:"),
    ),
)
Leverage Pre-built Functions: Use helpers like ChainOfThought() for complex reasoning tasks.

response, err := tools.ChainOfThought(ctx, llm, "Your complex question here")
Work with Examples: Use ReadExamplesFromFile() to load examples from files for consistent outputs.

examples, err := utils.ReadExamplesFromFile("examples.txt")
if err != nil {
    log.Fatalf("Failed to read examples: %v", err)
}
Implement Structured Output: Use WithJSONSchemaValidation() to ensure valid JSON outputs.

response, err := llm.Generate(ctx, prompt, gollm.WithJSONSchemaValidation())
Optimize Prompts: Use the PromptOptimizer to refine prompts automatically.

optimizer := optimizer.NewPromptOptimizer(llm, initialPrompt, taskDescription,
    optimizer.WithCustomMetrics(
        optimizer.Metric{Name: "Relevance", Description: "How relevant the response is to the task"},
    ),
    optimizer.WithRatingSystem("numerical"),
    optimizer.WithThreshold(0.8),
)
Compare Model Performances: Use CompareModels() to evaluate different models or providers.

results, err := tools.CompareModels(ctx, promptText, validateFunc, configs...)
Implement Memory for Contextual Interactions: Enable memory so the LLM retains context across requests.

llm, err := gollm.NewLLM(
    gollm.SetProvider("openai"),
    gollm.SetModel("gpt-3.5-turbo"),
    gollm.SetMemory(4096),
)
Error Handling and Retries: Configure retries to handle transient API errors gracefully.

llm, err := gollm.NewLLM(
    gollm.SetMaxRetries(3),
    gollm.SetRetryDelay(time.Second * 2),
)
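Retries apply per request; you can additionally bound total latency with a standard-library context deadline. A minimal sketch, assuming Generate honors context cancellation (as its context parameter suggests):

ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

response, err := llm.Generate(ctx, prompt)
if err != nil {
    // Returned once retries are exhausted or the deadline expires
    log.Printf("generation failed: %v", err)
}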
Secure API Key Handling: Load API keys from environment variables rather than hard-coding them.

llm, err := gollm.NewLLM(
    gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
)
Check out our examples directory for more usage examples.
gollm is actively maintained and under continuous development. With the recent refactoring, we've streamlined the codebase to make it simpler and more accessible for new contributors. We welcome contributions and feedback from the community.
gollm is built on a philosophy of pragmatic minimalism and forward-thinking simplicity:
Build what's necessary: We add features as they become needed, avoiding speculative development.
Simplicity first: Additions should be straightforward while fulfilling their purpose.
Future-compatible: We consider how current changes might impact future development.
Readability counts: Code should be clear and self-explanatory.
Modular design: Each component should do one thing well.
We welcome contributions that align with our philosophy! Whether you're fixing a bug, improving documentation, or proposing new features, your efforts are appreciated. Thank you for helping make gollm better!
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.