ChatAnthropic
Anthropic is an AI safety and research company, and the creator of Claude.
This will help you get started with Anthropic chat models. For detailed documentation of all ChatAnthropic features and configurations, head to the API reference.
Overview
Integration details
Class | Package | Local | Serializable | PY support | Package downloads | Package latest |
---|---|---|---|---|---|---|
ChatAnthropic | @langchain/anthropic | ❌ | ✅ | ✅ | | |
Model features
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|
✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ |
Setup
You'll need to sign up and obtain an Anthropic API key, and install the @langchain/anthropic integration package.
Credentials
Head to Anthropic's website to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable:
export ANTHROPIC_API_KEY="your-api-key"
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_API_KEY="your-api-key"
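If you prefer not to rely on the environment variable, the key can also be passed to the model constructor directly. This is a minimal sketch; it assumes the apiKey constructor option exposed by recent versions of @langchain/anthropic (older versions use anthropicApiKey), so adjust to your installed version:
import { ChatAnthropic } from "@langchain/anthropic";

// Sketch: pass the key explicitly instead of reading ANTHROPIC_API_KEY.
// Assumption: the option is named `apiKey`; some versions accept `anthropicApiKey` instead.
const llmWithExplicitKey = new ChatAnthropic({
  model: "claude-3-haiku-20240307",
  apiKey: "your-api-key",
});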
Installation
The LangChain ChatAnthropic integration lives in the @langchain/anthropic package:
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Instantiation
Now we can instantiate our model object and generate chat completions:
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-haiku-20240307",
temperature: 0,
maxTokens: undefined,
maxRetries: 2,
// other params...
});
Invocation
const aiMsg = await llm.invoke([
[
"system",
"You are a helpful assistant that translates English to French. Translate the user sentence.",
],
["human", "I love programming."],
]);
aiMsg;
AIMessage {
"id": "msg_013WBXXiggy6gMbAUY6NpsuU",
"content": "Voici la traduction en français :\n\nJ'adore la programmation.",
"additional_kwargs": {
"id": "msg_013WBXXiggy6gMbAUY6NpsuU",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 29,
"output_tokens": 20
}
},
"response_metadata": {
"id": "msg_013WBXXiggy6gMbAUY6NpsuU",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 29,
"output_tokens": 20
},
"type": "message",
"role": "assistant"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 29,
"output_tokens": 20,
"total_tokens": 49
}
}
console.log(aiMsg.content);
Voici la traduction en français :
J'adore la programmation.
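The returned AIMessage also carries the token counts shown in the output above, so you can read usage programmatically. A minimal sketch based on the standardized usage_metadata field:
// Token usage is exposed on the standardized `usage_metadata` field of the AIMessage.
console.log(aiMsg.usage_metadata);
// { input_tokens: 29, output_tokens: 20, total_tokens: 49 }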
Chaining
We can chain our model with a prompt template like so:
import { ChatPromptTemplate } from "@langchain/core/prompts";
const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant that translates {input_language} to {output_language}.",
],
["human", "{input}"],
]);
const chain = prompt.pipe(llm);
await chain.invoke({
input_language: "English",
output_language: "German",
input: "I love programming.",
});
AIMessage {
"id": "msg_01Ca52fpd1mcGRhH4spzAWr4",
"content": "Ich liebe das Programmieren.",
"additional_kwargs": {
"id": "msg_01Ca52fpd1mcGRhH4spzAWr4",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 23,
"output_tokens": 11
}
},
"response_metadata": {
"id": "msg_01Ca52fpd1mcGRhH4spzAWr4",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 23,
"output_tokens": 11
},
"type": "message",
"role": "assistant"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 23,
"output_tokens": 11,
"total_tokens": 34
}
}
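Because the prompt template parameterizes both languages, the same chain can be reused for other translations just by changing the inputs (a usage sketch; the exact output will vary):
// Reuse the chained prompt + model with different template values.
await chain.invoke({
  input_language: "English",
  output_language: "Spanish",
  input: "I love programming.",
});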
Content blocks
One key difference to note between Anthropic models and most others is that the contents of a single Anthropic AI message can be either a single string or a list of content blocks. For example, when an Anthropic model calls a tool, the tool invocation is part of the message content (as well as being exposed in the standardized AIMessage.tool_calls field):
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
const calculatorSchema = z.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute."),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
});
const calculatorTool = {
name: "calculator",
description: "A simple calculator tool",
input_schema: zodToJsonSchema(calculatorSchema),
};
const toolCallingLlm = new ChatAnthropic({
model: "claude-3-haiku-20240307",
}).bindTools([calculatorTool]);
const toolPrompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always needs to use a calculator.",
],
["human", "{input}"],
]);
// Chain your prompt and model together
const toolCallChain = toolPrompt.pipe(toolCallingLlm);
await toolCallChain.invoke({
input: "What is 2 + 2?",
});
AIMessage {
"id": "msg_01DZGs9DyuashaYxJ4WWpWUP",
"content": [
{
"type": "text",
"text": "Here is the calculation for 2 + 2:"
},
{
"type": "tool_use",
"id": "toolu_01SQXBamkBr6K6NdHE7GWwF8",
"name": "calculator",
"input": {
"number1": 2,
"number2": 2,
"operation": "add"
}
}
],
"additional_kwargs": {
"id": "msg_01DZGs9DyuashaYxJ4WWpWUP",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "tool_use",
"stop_sequence": null,
"usage": {
"input_tokens": 449,
"output_tokens": 100
}
},
"response_metadata": {
"id": "msg_01DZGs9DyuashaYxJ4WWpWUP",
"model": "claude-3-haiku-20240307",
"stop_reason": "tool_use",
"stop_sequence": null,
"usage": {
"input_tokens": 449,
"output_tokens": 100
},
"type": "message",
"role": "assistant"
},
"tool_calls": [
{
"name": "calculator",
"args": {
"number1": 2,
"number2": 2,
"operation": "add"
},
"id": "toolu_01SQXBamkBr6K6NdHE7GWwF8",
"type": "tool_call"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 449,
"output_tokens": 100,
"total_tokens": 549
}
}
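To see the difference described above, you can capture the result and inspect both the raw content blocks and the standardized tool call field. This is a minimal sketch reusing the chain defined above:
// Capture the result so we can inspect it.
const toolCallMessage = await toolCallChain.invoke({
  input: "What is 2 + 2?",
});

// `content` is a list of blocks: a "text" block followed by a "tool_use" block.
console.log(toolCallMessage.content);

// The same tool invocation is also exposed in the standardized `tool_calls` field.
console.log(toolCallMessage.tool_calls);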
Custom headers
You can pass custom headers in your requests like this:
import { ChatAnthropic } from "@langchain/anthropic";
const llmWithCustomHeaders = new ChatAnthropic({
model: "claude-3-sonnet-20240229",
maxTokens: 1024,
clientOptions: {
defaultHeaders: {
"X-Api-Key": process.env.ANTHROPIC_API_KEY,
},
},
});
await llmWithCustomHeaders.invoke("Why is the sky blue?");
AIMessage {
"id": "msg_019z4nWpShzsrbSHTWXWQh6z",
"content": "The sky appears blue due to a phenomenon called Rayleigh scattering. Here's a brief explanation:\n\n1) Sunlight is made up of different wavelengths of visible light, including all the colors of the rainbow.\n\n2) As sunlight passes through the atmosphere, the gases (mostly nitrogen and oxygen) cause the shorter wavelengths of light, such as violet and blue, to be scattered more easily than the longer wavelengths like red and orange.\n\n3) This scattering of the shorter blue wavelengths occurs in all directions by the gas molecules in the atmosphere.\n\n4) Our eyes are more sensitive to the scattered blue light than the scattered violet light, so we perceive the sky as having a blue color.\n\n5) The scattering is more pronounced for light traveling over longer distances through the atmosphere. This is why the sky appears even darker blue when looking towards the horizon.\n\nSo in essence, the selective scattering of the shorter blue wavelengths of sunlight by the gases in the atmosphere is what causes the sky to appear blue to our eyes during the daytime.",
"additional_kwargs": {
"id": "msg_019z4nWpShzsrbSHTWXWQh6z",
"type": "message",
"role": "assistant",
"model": "claude-3-sonnet-20240229",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 13,
"output_tokens": 236
}
},
"response_metadata": {
"id": "msg_019z4nWpShzsrbSHTWXWQh6z",
"model": "claude-3-sonnet-20240229",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 13,
"output_tokens": 236
},
"type": "message",
"role": "assistant"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 13,
"output_tokens": 236,
"total_tokens": 249
}
}
API reference
For detailed documentation of all ChatAnthropic features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html