Prompt

The prompt keyword is one of AIScript's core AI integration features, allowing you to easily interact with large language models (LLMs) directly from your code.

Basic Usage

The simplest form of the prompt keyword takes a string argument and returns the model's response:

let answer = prompt "What is the capital of France?";
print(answer); // The capital of France is Paris.

In this basic form, AIScript uses the default model configuration (typically an OpenAI model) to generate the response.

Advanced Configuration

For more control over the prompt behavior, you can provide additional configuration options:

let detailed_answer = prompt {
    input: "Explain quantum computing",
    model: "claude-3.7",
    max_tokens: 500,
    temperature: 0.7,
    system_prompt: "You are a quantum physics expert"
};

Configuration Options

| Option | Type | Description |
| --- | --- | --- |
| input | String | The prompt text to send to the LLM |
| model | String | The model identifier to use (e.g., "gpt-4", "claude-3.7") |
| max_tokens | Number | Maximum number of tokens in the generated response |
| temperature | Number | Controls randomness (0.0-1.0; lower is more deterministic) |
| system_prompt | String | System message to guide the model's behavior |
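
To illustrate the temperature option, the sketch below contrasts a low setting for consistent answers with a high setting for creative ones; defaults for any omitted options come from your provider configuration:

// Low temperature: favors consistent, factual answers
let fact = prompt {
    input: "List the planets of the solar system",
    temperature: 0.1
};

// High temperature: favors varied, creative answers
let idea = prompt {
    input: "Invent a name for a fictional planet",
    temperature: 0.9
};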

Dynamic Prompts

You can construct prompts dynamically using variables and string interpolation:

let topic = "quantum computing";
let detail_level = "beginner";
let response = prompt f"Explain {topic} at a {detail_level} level";
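
Because prompt is an expression, dynamic prompts also compose well with functions. A minimal sketch, assuming AIScript's fn definitions work as elsewhere in these docs:

// Illustrative helper that wraps a common prompt pattern
fn explain(topic, detail_level) {
    return prompt f"Explain {topic} at a {detail_level} level";
}

let response = explain("quantum computing", "beginner");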

API Keys and Configuration

To use the prompt feature, you need to configure your API keys. AIScript supports multiple LLM providers including cloud-based services and local models.

Environment Variables

Set your API keys using environment variables:

# For OpenAI
export OPENAI_API_KEY="your_openai_key"

# For Anthropic Claude
export CLAUDE_API_KEY="your_claude_key"

# For Ollama (local models)
export OLLAMA_API_ENDPOINT="http://localhost:11434/v1"

Configuration File

Alternatively, you can specify API keys in your project configuration file:

project.toml
[ai.openai]
api_key = "YOUR_OPENAI_API_KEY"

[ai.anthropic]
api_key = "YOUR_CLAUDE_API_KEY"

# For Ollama (local models)
[ai.ollama]
api_endpoint = "http://localhost:11434/v1"
model = "llama3.2" # or any other model installed in your Ollama instance

Local Models with Ollama

AIScript supports running local models through Ollama, which allows you to run AI models on your own hardware without sending data to external services.

Setting Up Ollama

  1. Install Ollama from ollama.ai
  2. Pull models you want to use:
    ollama pull llama3.2
    ollama pull deepseek-r1
    ollama pull codellama
  3. Start Ollama (it runs on port 11434 by default)
  4. Configure AIScript to use Ollama

Example with Local Models

// Using a local Llama model
let response = prompt {
    input: "What is rust?",
    model: "llama3.2"
};

// Using a local coding model
let code_explanation = prompt {
    input: "Review this Python function for potential issues",
    model: "codellama"
};

Error Handling (not supported yet)

Once supported, error handling will work with the prompt keyword just as it does with other AIScript operations:

let response = prompt "Generate a complex response" |err| {
    print(f"Error occurred: {err}");
    return "Fallback response";
};

Best Practices

  • Be specific: The more specific your prompt, the better the response quality.
  • Consider context limits: Each model has a maximum context window. Very large prompts may be truncated.
  • Handle rate limits: API providers have rate limits. Consider implementing retry logic for production applications, as sketched after this list.
  • Manage costs: Using LLMs incurs costs based on token usage. Monitor usage to avoid unexpected charges.
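
A minimal retry sketch, assuming while loops, fn definitions, and the |err| handler semantics shown in the Error Handling section (itself not yet supported); a production version would also add backoff between attempts:

// Hypothetical retry helper; every construct here is illustrative
fn prompt_with_retry(text, max_attempts) {
    let attempts = 0;
    while attempts < max_attempts {
        attempts = attempts + 1;
        let response = prompt text |err| {
            print(f"Attempt {attempts} failed: {err}");
            return ""; // empty string marks a failed attempt
        };
        if response != "" {
            return response;
        }
    }
    return "Fallback response";
}

let answer = prompt_with_retry("Generate a complex response", 3);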

Examples

Simple Question-Answering

let question = "What is the capital of Japan?";
let answer = prompt question;
print(answer);

Generating Content with Specific Parameters

let poem = prompt {
    input: "Write a haiku about programming",
    temperature: 0.9,
    max_tokens: 100
};
print(poem);

Using Different Models

// Using OpenAI
let openai_response = prompt {
    input: f"Summarize this article: {article_text}",
    model: "gpt-4"
};

// Using Claude
let claude_response = prompt {
    input: f"Summarize this article: {article_text}" ,
    model: "claude-3.7"
};

// Using Ollama (local models)
let local_response = prompt {
    input: f"Summarize this article: {article_text}",
    model: "llama3.2"
};

Chaining Prompts

let topic = prompt "Suggest a topic for a blog post";
let outline = prompt f"Create an outline for a blog post about: {topic}";
let introduction = prompt f"Write an introduction for a blog post with this outline: {outline}";

print("Topic:", topic);
print("Outline", outline);
print("Introduction:", introduction);

Notes and Limitations

  • The response from a prompt is always a string. If you need structured data, ask for a specific format such as JSON in your prompt and parse the result, as sketched after this list.
  • Network connectivity is required for the prompt keyword when using cloud providers, since it makes API calls to external services; local models through Ollama only need the Ollama server to be reachable.
  • API costs apply based on your provider's pricing model, typically based on the number of tokens processed.
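
A hedged sketch of the JSON approach; parse_json is a hypothetical helper standing in for whatever JSON-parsing facility your AIScript version provides:

// Ask the model for machine-readable output
let raw = prompt "List three programming languages as a JSON array of strings, with no extra text";

// parse_json is hypothetical; substitute the JSON-parsing
// function your AIScript version actually provides
let languages = parse_json(raw);
print(languages);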