# Get Started with aisdk

aisdk is a production-grade AI SDK for R, providing a unified interface for multiple AI model providers (OpenAI, Anthropic, Gemini, DeepSeek, etc.). It features a layered architecture, robust error handling, and advanced capabilities such as multi-agent orchestration and a distributed MCP (Model Context Protocol) ecosystem.

## Installation

You can install the development version from GitHub:

```r
# install.packages("devtools")
devtools::install_github("YuLab-SMU/aisdk")
```
## Basic Usage

### 1. Configuration

Load the library and set your API keys as environment variables. We recommend using an `.Renviron` file for persistence.
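A minimal configuration sketch follows. The environment-variable names (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`) are the conventional ones and are assumed here; check your provider's documentation for the exact names aisdk reads.

```r
library(aisdk)

# For the current session only:
Sys.setenv(OPENAI_API_KEY = "sk-...")

# For persistence, add the same key=value pairs to ~/.Renviron instead:
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
```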
### 2. Creating a Model

Create a model instance using provider factory functions.

```r
# Create an OpenAI model
model <- create_openai()$language_model("gpt-4o")

# Create an Anthropic / Gemini model
# model <- create_anthropic()$language_model("claude-3-5-sonnet-20241022")
# model <- create_gemini()$language_model("gemini-1.5-pro")

# Create models from other supported providers (DeepSeek, Volcengine, Stepfun, etc.)
# model <- create_deepseek()$language_model("deepseek-chat")
# model <- create_volcengine()$language_model("your-endpoint-id")
# model <- create_stepfun()$language_model("step-1-8k")
```

If you want a package-wide default, set it once and omit `model` from high-level helpers:
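For example, assuming `set_model()` accepts the same `"provider:model"` identifiers used elsewhere in this guide:

```r
# Set a package-wide default once per session
set_model("openai:gpt-4o")

# High-level helpers can now omit `model`
response <- generate_text(prompt = "Hello!")
cat(response$text)
```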
### 3. Generating Text

Use `generate_text()` for simple request-response interactions.

```r
# Simple prompt
response <- generate_text(
  prompt = "What are the advantages of using R6 classes in R?"
)
cat(response$text)

# Using a system prompt for behavior control
response <- generate_text(
  model = model,
  system = "You are a helpful R programming expert.",
  prompt = "Explain the difference between `lapply` and `sapply`."
)
```

### 4. Streaming Responses
For real-time output, use `stream_text()` with a callback.

```r
stream_text(
  prompt = "Write a short poem about data science.",
  callback = function(chunk, done) {
    if (!done) cat(chunk)
  }
)
```

### 5. Managing State with ChatSession
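Because the callback is an ordinary R closure, it can also accumulate the streamed chunks into a single string; this sketch uses only the `stream_text()` interface shown above:

```r
# Collect streamed chunks into a buffer via a closure
buffer <- character(0)
stream_text(
  prompt = "Summarize the tidyverse in one sentence.",
  callback = function(chunk, done) {
    if (!done) {
      buffer <<- c(buffer, chunk)  # append each incoming chunk
    } else {
      cat("\n[received", length(buffer), "chunks]\n")
    }
  }
)
full_text <- paste(buffer, collapse = "")
```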
For stateful conversations, use `ChatSession`. It automatically manages history and provides a higher-level API.

```r
# Initialize a session
session <- ChatSession$new()

# First message
res1 <- session$send("Hi, I'm analyzing gene expression data.")
cat(res1$text)

# Second message (context is preserved)
res2 <- session$send("What R packages would you recommend for normalization?")
cat(res2$text)

# Check history
# session$history()
```

### 6. Tools and Function Calling
aisdk makes it easy to give LLMs access to R functions.

```r
# Define a tool using the schema DSL
weather_tool <- tool(
  name = "get_weather",
  description = "Get current weather for a city",
  parameters = z_object(
    city = z_string("The city name, e.g., Beijing")
  ),
  execute = function(args) {
    paste("The weather in", args$city, "is sunny, 25 degrees C.")
  }
)

# Use the tool
response <- generate_text(
  model = model,
  prompt = "What's the weather like in Shanghai?",
  tools = list(weather_tool),
  max_steps = 3
)
cat(response$text)
```

### 7. Interactive Console Chat
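A tool's `execute` function can run real R code rather than return a canned string. The sketch below uses only the `tool()`, `z_object()`, and `z_string()` constructors shown above; the dataset and prompt are illustrative:

```r
# A tool that computes summary statistics for a column of the built-in iris data
stats_tool <- tool(
  name = "column_summary",
  description = "Compute the mean and standard deviation of a numeric column in the iris dataset",
  parameters = z_object(
    column = z_string("The column name, e.g., Sepal.Length")
  ),
  execute = function(args) {
    x <- iris[[args$column]]
    paste0("mean = ", round(mean(x), 2), ", sd = ", round(sd(x), 2))
  }
)

response <- generate_text(
  model = model,
  prompt = "What is the average Sepal.Length in the iris dataset?",
  tools = list(stats_tool),
  max_steps = 3
)
cat(response$text)
```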
For a more interactive experience, aisdk provides a built-in REPL (Read-Eval-Print Loop) called `console_chat()`. This allows you to chat with the model directly in your R terminal, with support for streaming, slash commands, and an intelligent agent that can execute R code or bash commands.

```r
# Optional: set a package-wide default first
# set_model("openai:gpt-4o")

# Start an interactive chat with an intelligent agent (default)
# console_chat("openai:gpt-4o")

# Start a simple chat without agent tools
# console_chat("openai:gpt-4o", agent = NULL)
```

`console_chat()` now exposes a structured terminal workflow rather than a plain print stream:
- A persistent status bar shows the current model, sandbox mode, view mode, streaming state, local execution state, and tool activity.
- Agent mode can run shell commands, read and write files, and execute R code through natural language.
- Tool-heavy turns stay compact in normal use, but remain inspectable when you need to understand what happened.
- The console keeps a shared frame structure for status, timeline, and overlay surfaces, while still degrading safely to an append-only terminal path.
#### View modes

- `clean`: the default mode. Keeps the transcript calm and hides raw tool payloads.
- `inspect`: adds per-turn tool timeline summaries and an inspector overlay for drilling into the latest turn or an individual tool.
- `debug`: enables detailed tool logging and thinking output for development and troubleshooting.
You can switch modes live:

```r
# Enter inspect mode
# /inspect on

# Enter debug mode
# /debug on

# Return to compact clean mode
# /debug off
```

#### Inspector workflow
The inspector is useful when a turn used tools and you want to understand the execution path without switching the entire console into debug mode.
- `/inspect turn`: opens the inspector overlay for the latest turn.
- `/inspect tool <index>`: opens the inspector overlay for a specific tool in the latest turn.
- `/inspect next` / `/inspect prev`: moves between tools inside the current inspector context.
- `/inspect close`: closes the active inspector overlay.
The inspector overlay shows:
- turn or tool identity
- elapsed time
- summary of arguments and results
- captured messages and warnings
- compact previews of raw arguments and raw results
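Put together, a typical inspection sequence looks like this (the tool index is illustrative):

```r
# Open the inspector for the latest turn
# /inspect turn

# Jump to the second tool of that turn
# /inspect tool 2

# Step through the remaining tools
# /inspect next

# Close the overlay when done
# /inspect close
```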
#### Common slash commands

- `/help`: show all available commands.
- `/model <id>`: switch the model on the fly.
- `/save [path]` / `/load <path>`: persist and restore sessions.
- `/history`: inspect conversation history.
- `/stats`: inspect session usage statistics.
- `/stream [on|off]`: toggle token streaming.
- `/local [on|off]`: toggle local execution mode in the shared session environment.
- `/clear`: reset the conversation history.
- `/quit`: exit the interactive session.
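For example, a session persistence round trip (the path and file extension are illustrative; `/save` with no path presumably uses a default location):

```r
# Save the current session to a file
# /save ~/chats/gene-expression.rds

# ...later, in a fresh console_chat() run:
# /load ~/chats/gene-expression.rds
# /history
```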
#### Practical usage pattern

In day-to-day work, the recommended workflow is:

- Start in `clean` mode for normal conversation.
- Switch to `inspect` when a turn uses tools and you want a concise explanation of what ran.
- Open the inspector overlay for the latest turn or a specific tool.
- Use `debug` only when developing providers, tools, or console behavior itself.
## Next Steps

Explore the following vignettes for more advanced features:
- Providers: Configure models, connect via Custom Providers, and use AIHubMix advanced compatibility wrappers.
- Agents: Build specialized AI workers and multi-agent systems.
- Tools: Deep dive into the Tool system and Schema DSL.
- Sessions: Advanced session management and persistence.
- Console Chat: Full guide to the terminal REPL, inspect/debug modes, and inspector overlay workflow.
- MCP: Connect to external tools via the Model Context Protocol.
- Skills: Progressive knowledge loading for agents.