# LLMs (inference)
> [!TIP]
> Location of concrete implementations within the framework: `Hive-agent-framework/adapters`.
> Location of base abstractions within the framework: `Hive-agent-framework/llms`.
A Large Language Model (LLM) is an AI designed to understand and generate human-like text. Trained on extensive text data, LLMs learn language patterns, grammar, context, and basic reasoning to perform tasks like text completion, translation, summarization, and answering questions.
To unify the differences between the various provider APIs, the framework defines a common interface: a set of actions that can be performed with any supported model.
## Providers (adapters)

| Name           | LLM                           | Chat LLM                                       | Structured output               |
| -------------- | ----------------------------- | ---------------------------------------------- | ------------------------------- |
| WatsonX        | ✅                            | ⚠️ (model-specific template must be provided)  | ❌                              |
| Ollama         | ✅                            | ✅                                             | ⚠️ (JSON only)                  |
| OpenAI         | ❌                            | ✅                                             | ⚠️ (JSON schema only)           |
| Azure OpenAI   | ❌                            | ✅                                             | ⚠️ (JSON schema only)           |
| LangChain      | ⚠️ (depends on the provider)  | ⚠️ (depends on the provider)                   | ❌                              |
| Groq           | ❌                            | ✅                                             | ⚠️ (JSON object only)           |
| AWS Bedrock    | ❌                            | ✅                                             | ⚠️ (JSON only, model-specific)  |
| VertexAI       | ✅                            | ✅                                             | ⚠️ (JSON only)                  |
| BAM (Internal) | ✅                            | ⚠️ (model-specific template must be provided)  | ✅                              |
All providers' examples can be found in `examples/llms/providers`.

Are you interested in creating your own adapter? Jump to the [Adding a new provider (adapter)](#adding-a-new-provider-adapter) section.
## Usage

### Plain text generation

Source: `examples/llms/text.ts`
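The following is a minimal sketch of plain text generation. It assumes an Ollama adapter exported from `Hive-agent-framework/adapters/ollama/llm` with a `modelId` constructor option; the actual import path, class name, and options may differ, so check the adapter's source.

```ts
// Minimal sketch: the import path and constructor options are assumptions,
// not verified framework API. The `generate` / `getTextContent` calls are
// the ones documented in this section.
import { OllamaLLM } from "Hive-agent-framework/adapters/ollama/llm";

const llm = new OllamaLLM({ modelId: "llama3.1" }); // any provider adapter works here

const response = await llm.generate("What is the capital of France?");
console.log(response.getTextContent()); // plain-text view of the model output
```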
> [!NOTE]
> The `generate` method returns a class that extends the base `BaseLLMOutput` class.
> This class allows you to retrieve the response as text using the `getTextContent` method, along with other useful metadata.
> [!TIP]
> You can enable streaming communication (internally) by passing `{ stream: true }` as the second parameter to the `generate` method.
### Chat text generation

Source: `examples/llms/chat.ts`
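A minimal sketch of chat generation follows. It assumes a chat adapter at `Hive-agent-framework/adapters/ollama/chat` and a `BaseMessage` primitive at `Hive-agent-framework/llms/primitives/message`; those paths and the `BaseMessage.of` factory are assumptions, while `generate`, `getTextContent`, and the `messages` getter are documented in this section.

```ts
// Minimal sketch: import paths and the message factory are assumptions.
import { OllamaChatLLM } from "Hive-agent-framework/adapters/ollama/chat";
import { BaseMessage } from "Hive-agent-framework/llms/primitives/message";

const llm = new OllamaChatLLM({ modelId: "llama3.1" });

const response = await llm.generate([
  BaseMessage.of({ role: "user", text: "Hello, what is 1 + 1?" }),
]);
console.log(response.getTextContent()); // text of the model's reply
console.log(response.messages.length); // individual message chunks (see note below)
```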
> [!NOTE]
> The `generate` method returns a class that extends the base `ChatLLMOutput` class.
> This class allows you to retrieve the response as text using the `getTextContent` method, along with other useful metadata. To retrieve all messages (chunks), access the `messages` property (getter).
> [!TIP]
> You can enable streaming communication (internally) by passing `{ stream: true }` as the second parameter to the `generate` method.
### Streaming

Source: `examples/llms/chatStream.ts`
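Below is a sketch of explicit streaming. It assumes the adapter exposes an async-iterable `stream` method yielding output chunks with the documented `getTextContent` method; that method name and the import paths are assumptions.

```ts
// Minimal sketch: assumes an async-iterable `stream` method on the adapter.
import { OllamaChatLLM } from "Hive-agent-framework/adapters/ollama/chat";
import { BaseMessage } from "Hive-agent-framework/llms/primitives/message";

const llm = new OllamaChatLLM({ modelId: "llama3.1" });

for await (const chunk of llm.stream([
  BaseMessage.of({ role: "user", text: "Tell me a short story." }),
])) {
  process.stdout.write(chunk.getTextContent()); // print tokens as they arrive
}
```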
### Callback (Emitter)

Source: `examples/llms/chatCallback.ts`
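The sketch below shows event-based consumption. It assumes the promise returned by `generate` exposes an `observe` method whose emitter fires a `newToken` event per streamed token; the `observe`/`newToken` names and import paths are assumptions, not verified framework API.

```ts
// Minimal sketch: `observe` and the "newToken" event name are assumptions.
import { OllamaChatLLM } from "Hive-agent-framework/adapters/ollama/chat";
import { BaseMessage } from "Hive-agent-framework/llms/primitives/message";

const llm = new OllamaChatLLM({ modelId: "llama3.1" });

const response = await llm
  .generate([BaseMessage.of({ role: "user", text: "Hello!" })], { stream: true })
  .observe((emitter) =>
    emitter.on("newToken", ({ value }) => {
      process.stdout.write(value.getTextContent()); // called for every streamed token
    }),
  );
console.log(response.getTextContent()); // full response once generation finishes
```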
### Structured generation

Source: `examples/llms/structured.ts`
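A sketch of structured generation follows. It assumes a `generateStructured` helper that takes a zod schema and returns a parsed object conforming to it; the method name, import paths, and zod usage are assumptions here.

```ts
// Minimal sketch: `generateStructured` and its signature are assumptions.
import { z } from "zod";
import { OllamaChatLLM } from "Hive-agent-framework/adapters/ollama/chat";
import { BaseMessage } from "Hive-agent-framework/llms/primitives/message";

const llm = new OllamaChatLLM({ modelId: "llama3.1" });

const response = await llm.generateStructured(
  z.object({ answer: z.string(), confidence: z.number().min(0).max(1) }),
  [BaseMessage.of({ role: "user", text: "What is the capital of France?" })],
);
console.log(response); // parsed object matching the schema, e.g. { answer: "Paris", ... }
```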
## Adding a new provider (adapter)

To use an inference provider that is not mentioned in our providers list, feel free to create a request.

If it gets approved and you want to implement it on your own, do the following. Let's assume the name of your provider is `Custom`. A rough skeleton of a chat adapter is sketched after this list.

- Base location within the framework: `Hive-agent-framework/adapters/custom`
- Text LLM (filename): `llm.ts` (example implementation)
- Chat LLM (filename): `chat.ts` (example implementation)
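As a starting point, a `chat.ts` adapter might look like the skeleton below. The `ChatLLMOutput` base class is named by this document; the `ChatLLM` base class, its import path, and the `_generate` abstract member are assumptions, so consult the base abstractions in `Hive-agent-framework/llms` for the real contract.

```ts
// Rough skeleton only: the base-class import path and abstract member names
// are assumptions, not the framework's verified contract.
import { ChatLLM, ChatLLMOutput } from "Hive-agent-framework/llms/chat";
import { BaseMessage } from "Hive-agent-framework/llms/primitives/message";

export class CustomChatLLMOutput extends ChatLLMOutput {
  constructor(public readonly messages: BaseMessage[]) {
    super();
  }

  // Text view over all returned messages (documented on ChatLLMOutput).
  getTextContent(): string {
    return this.messages.map((msg) => msg.text).join("\n");
  }
}

export class CustomChatLLM extends ChatLLM<CustomChatLLMOutput> {
  protected async _generate(input: BaseMessage[]): Promise<CustomChatLLMOutput> {
    // Call the provider's SDK here (preferred over hand-rolled HTTP calls)
    // and map the raw response into framework messages.
    throw new Error("Not implemented.");
  }
}
```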
> [!IMPORTANT]
> If the target provider offers an SDK, use it.
> [!IMPORTANT]
> All provider-related dependencies (if any) must be included in `devDependencies` and `peerDependencies` in the `package.json`.
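For instance, if your adapter wrapped a hypothetical `@custom/sdk` package (name and version range illustrative), the relevant `package.json` entries would look like this:

```json
{
  "devDependencies": {
    "@custom/sdk": "^1.0.0"
  },
  "peerDependencies": {
    "@custom/sdk": "^1.0.0"
  }
}
```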
> [!TIP]
> To simplify work with the target REST API, feel free to use the helper `RestfulClient` class. Its usage can be seen in the WatsonX LLM adapter.
> [!TIP]
> Parsing environment variables should be done via the helper functions (`parseEnv` / `hasEnv` / `getEnv`) provided by the framework.
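A hypothetical usage of those helpers is sketched below. The import path and the zod-based `parseEnv` signature are assumptions; only the function names come from this document.

```ts
// Hypothetical usage only: import path and signatures are assumptions.
import { parseEnv, hasEnv, getEnv } from "Hive-agent-framework/internals/env";
import { z } from "zod";

// Validate a required variable against a schema (assumed zod-based signature).
const apiKey = parseEnv("CUSTOM_API_KEY", z.string().min(1));

// Fall back to a default when an optional variable is absent.
const baseUrl = hasEnv("CUSTOM_BASE_URL")
  ? getEnv("CUSTOM_BASE_URL")
  : "https://api.custom.example.com";
```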