langchain_community.chat_models.minimax.MiniMaxChat

Note

MiniMaxChat implements the standard Runnable Interface. 🏃

The Runnable Interface has additional methods that are available on runnables, such as with_types, with_retry, assign, bind, get_graph, and more.

class langchain_community.chat_models.minimax.MiniMaxChat[source]

Bases: BaseChatModel

MiniMax chat model integration.

Setup:

To use, you should have the environment variable ``MINIMAX_API_KEY`` set with your API key.

export MINIMAX_API_KEY="your-api-key"
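
The key can also be set from Python before the model is constructed; a minimal stdlib-only sketch:

import getpass
import os

# Prompt for the key only if it is not already set in the environment.
if "MINIMAX_API_KEY" not in os.environ:
    os.environ["MINIMAX_API_KEY"] = getpass.getpass("MiniMax API key: ")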
Key init args — completion params:
model: Optional[str]

Name of MiniMax model to use.

max_tokens: Optional[int]

Max number of tokens to generate.

temperature: Optional[float]

Sampling temperature.

top_p: Optional[float]

Total probability mass of tokens to consider at each step.

streaming: Optional[bool]

Whether to stream the results or not.

Key init args — client params:
api_key: Optional[str]

MiniMax API key. If not passed in, will be read from the env var MINIMAX_API_KEY.

base_url: Optional[str]

Base URL for API requests.

See full list of supported init args and their descriptions in the params section.

Instantiate:
from langchain_community.chat_models import MiniMaxChat

chat = MiniMaxChat(
    api_key=api_key,  # or omit and set the MINIMAX_API_KEY env var
    model='abab6.5-chat',
    # temperature=...,
    # other params...
)
Invoke:
messages = [
    # System prompt (Chinese): "You are a professional translator; translate the user's Chinese into English."
    ("system", "你是一名专业的翻译家,可以将用户的中文翻译为英文。"),
    # Human message (Chinese): "I like programming."
    ("human", "我喜欢编程。"),
]
chat.invoke(messages)
AIMessage(
    content='I enjoy programming.',
    response_metadata={
        'token_usage': {'total_tokens': 48},
        'model_name': 'abab6.5-chat',
        'finish_reason': 'stop'
    },
    id='run-42d62ba6-5dc1-4e16-98dc-f72708a4162d-0'
)
Stream:
for chunk in chat.stream(messages):
    print(chunk)
content='I' id='run-a5837c45-4aaa-4f64-9ab4-2679bbd55522'
content=' enjoy programming.' response_metadata={'finish_reason': 'stop'} id='run-a5837c45-4aaa-4f64-9ab4-2679bbd55522'
stream = chat.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk
full
AIMessageChunk(
    content='I enjoy programming.',
    response_metadata={'finish_reason': 'stop'},
    id='run-01aed0a0-61c4-4709-be22-c6d8b17155d6'
)
Async:
await chat.ainvoke(messages)

# stream
# async for chunk in chat.astream(messages):
#     print(chunk)

# batch
# await chat.abatch([messages])
AIMessage(
    content='I enjoy programming.',
    response_metadata={
        'token_usage': {'total_tokens': 48},
        'model_name': 'abab6.5-chat',
        'finish_reason': 'stop'
    },
    id='run-c263b6f1-1736-4ece-a895-055c26b3436f-0'
)
Tool calling:
from langchain_core.pydantic_v1 import BaseModel, Field


class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )


class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(
        ..., description="The city and state, e.g. San Francisco, CA"
    )

chat_with_tools = chat.bind_tools([GetWeather, GetPopulation])
ai_msg = chat_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
ai_msg.tool_calls
[
    {
        'name': 'GetWeather',
        'args': {'location': 'LA'},
        'id': 'call_function_2140449382',
        'type': 'tool_call'
    }
]
Structured output:
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel, Field


class Joke(BaseModel):
    '''Joke to tell user.'''
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")


structured_chat = chat.with_structured_output(Joke)
structured_chat.invoke("Tell me a joke about cats")
Joke(
    setup='Why do cats have nine lives?',
    punchline='Because they are so cute and cuddly!',
    rating=None
)
Response metadata:
ai_msg = chat.invoke(messages)
ai_msg.response_metadata
{'token_usage': {'total_tokens': 48},
 'model_name': 'abab6.5-chat',
 'finish_reason': 'stop'}
param cache: Union[BaseCache, bool, None] = None

Whether to cache the response.

  • If true, will use the global cache.

  • If false, will not use a cache.

  • If None, will use the global cache if it’s set, otherwise no cache.

  • If an instance of BaseCache, will use the provided cache.

Caching is not currently supported for streaming methods of models.
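
For example, a minimal sketch of enabling the global cache with an in-memory backend (assuming the API key is set in the environment):

from langchain_community.chat_models import MiniMaxChat
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Register a process-wide cache; with cache=True (or cache=None) the model
# reuses it, so repeated identical prompts skip the API call.
set_llm_cache(InMemoryCache())
chat = MiniMaxChat(cache=True)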

param callback_manager: Optional[BaseCallbackManager] = None

[DEPRECATED] Callback manager to add to the run trace.

param callbacks: Callbacks = None

Callbacks to add to the run trace.

param custom_get_token_ids: Optional[Callable[[str], List[int]]] = None

Optional encoder to use for counting tokens.

param max_tokens: int = 256

Denotes the number of tokens to predict per generation.

param metadata: Optional[Dict[str, Any]] = None

Metadata to add to the run trace.

param minimax_api_host: str = 'https://api.minimax.chat/v1/text/chatcompletion_v2' (alias 'base_url')
param minimax_api_key: SecretStr [Required] (alias 'api_key')

MiniMax API key.

Constraints
  • type = string

  • writeOnly = True

  • format = password

param minimax_group_id: Optional[str] = None (alias 'group_id')

[DEPRECATED, kept for backward compatibility] Group ID.

param model: str = 'abab6.5-chat'

Model name to use.

param model_kwargs: Dict[str, Any] [Optional]

Holds any model parameters valid for the create call that are not explicitly specified.

param rate_limiter: Optional[BaseRateLimiter] = None

An optional rate limiter to use for limiting the number of requests.

param streaming: bool = False

Whether to stream the results or not.

param tags: Optional[List[str]] = None

Tags to add to the run trace.

param temperature: float = 0.7

A non-negative float that tunes the degree of randomness in generation.

param top_p: float = 0.95

Total probability mass of tokens to consider at each step.

param verbose: bool [Optional]

Whether to print out response text.

__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) BaseMessage

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • messages (List[BaseMessage]) –

  • stop (Optional[List[str]]) –

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) –

  • kwargs (Any) –

Return type

BaseMessage

async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) List[Output]

Default implementation runs ainvoke in parallel using asyncio.gather.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters
  • inputs (List[Input]) – A list of inputs to the Runnable.

  • config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – A config to use when invoking the Runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Optional[Any]) – Additional keyword arguments to pass to the Runnable.

Returns

A list of outputs from the Runnable.

Return type

List[Output]

async abatch_as_completed(inputs: Sequence[Input], config: Optional[Union[RunnableConfig, Sequence[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) AsyncIterator[Tuple[int, Union[Output, Exception]]]

Run ainvoke in parallel on a list of inputs, yielding results as they complete.

Parameters
  • inputs (Sequence[Input]) – A list of inputs to the Runnable.

  • config (Optional[Union[RunnableConfig, Sequence[RunnableConfig]]]) – A config to use when invoking the Runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Optional[Any]) – Additional keyword arguments to pass to the Runnable.

Yields

A tuple of the index of the input and the output from the Runnable.

Return type

AsyncIterator[Tuple[int, Union[Output, Exception]]]
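
A minimal sketch, assuming chat is the MiniMaxChat instance from the Instantiate section; each yielded item pairs the input's index with its output (or an exception, when return_exceptions=True):

import asyncio

async def main() -> None:
    prompts = ["Tell me a joke", "What is the capital of France?"]
    # Results arrive in completion order, not input order.
    async for i, msg in chat.abatch_as_completed(prompts, return_exceptions=True):
        print(i, msg)

asyncio.run(main())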

async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, run_id: Optional[UUID] = None, **kwargs: Any) LLMResult

Asynchronously pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. get more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • messages (List[List[BaseMessage]]) – List of list of messages.

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

  • tags (Optional[List[str]]) –

  • metadata (Optional[Dict[str, Any]]) –

  • run_name (Optional[str]) –

  • run_id (Optional[UUID]) –

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
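
A minimal sketch, assuming chat from the Instantiate section; note the input is a list of message lists and the result is a single LLMResult:

import asyncio
from langchain_core.messages import HumanMessage

async def main() -> None:
    result = await chat.agenerate(
        [[HumanMessage(content="Hi")], [HumanMessage(content="Tell me a joke")]]
    )
    # One list of candidate Generations per input prompt.
    for generations in result.generations:
        print(generations[0].text)

asyncio.run(main())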

async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) LLMResult

Asynchronously pass a sequence of prompts and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. get more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult

async ainvoke(input: LanguageModelInput, config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) BaseMessage

Default implementation of ainvoke, calls invoke from a thread.

The default implementation allows usage of async code even if the Runnable did not implement a native async version of invoke.

Subclasses should override this method if they can run asynchronously.

Parameters
  • input (LanguageModelInput) –

  • config (Optional[RunnableConfig]) –

  • stop (Optional[List[str]]) –

  • kwargs (Any) –

Return type

BaseMessage

async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) str

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.

Parameters
  • text (str) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

str

async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) BaseMessage

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.

Parameters
  • messages (List[BaseMessage]) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

BaseMessage

as_tool(args_schema: Optional[Type[BaseModel]] = None, *, name: Optional[str] = None, description: Optional[str] = None, arg_types: Optional[Dict[str, Type]] = None) BaseTool

Beta

This API is in beta and may change in the future.

Create a BaseTool from a Runnable.

as_tool will instantiate a BaseTool with a name, description, and args_schema from a Runnable. Where possible, schemas are inferred from runnable.get_input_schema. Alternatively (e.g., if the Runnable takes a dict as input and the specific dict keys are not typed), the schema can be specified directly with args_schema. You can also pass arg_types to just specify the required arguments and their types.

Parameters
  • args_schema (Optional[Type[BaseModel]]) – The schema for the tool. Defaults to None.

  • name (Optional[str]) – The name of the tool. Defaults to None.

  • description (Optional[str]) – The description of the tool. Defaults to None.

  • arg_types (Optional[Dict[str, Type]]) – A dictionary of argument names to types. Defaults to None.

Returns

A BaseTool instance.

Return type

BaseTool

Typed dict input:

from typing import List
from typing_extensions import TypedDict
from langchain_core.runnables import RunnableLambda

class Args(TypedDict):
    a: int
    b: List[int]

def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via args_schema:

from typing import Any, Dict, List
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnableLambda

def f(x: Dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: List[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})

dict input, specifying schema via arg_types:

from typing import Any, Dict, List
from langchain_core.runnables import RunnableLambda

def f(x: Dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"a": int, "b": List[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})

String input:

from langchain_core.runnables import RunnableLambda

def f(x: str) -> str:
    return x + "a"

def g(x: str) -> str:
    return x + "z"

runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")

New in version 0.2.14.

async astream(input: LanguageModelInput, config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) AsyncIterator[BaseMessageChunk]

Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.

Parameters
  • input (LanguageModelInput) – The input to the Runnable.

  • config (Optional[RunnableConfig]) – The config to use for the Runnable. Defaults to None.

  • kwargs (Any) – Additional keyword arguments to pass to the Runnable.

  • stop (Optional[List[str]]) –

Yields

The output of the Runnable.

Return type

AsyncIterator[BaseMessageChunk]

astream_events(input: Any, config: Optional[RunnableConfig] = None, *, version: Literal['v1', 'v2'], include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) AsyncIterator[Union[StandardStreamEvent, CustomStreamEvent]]

Beta

This API is in beta and may change in the future.

Generate a stream of events.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).

  • name: str - The name of the Runnable that generated the event.

  • run_id: str - Randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.

  • parent_ids: List[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.

  • tags: Optional[List[str]] - The tags of the Runnable that generated the event.

  • metadata: Optional[Dict[str, Any]] - The metadata of the Runnable that generated the event.

  • data: Dict[str, Any]

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

ATTENTION This reference table is for the V2 version of the schema.

| event                | name             | chunk                           | input                                         | output                                          |
|----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|
| on_chat_model_start  | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} |                                                 |
| on_chat_model_stream | [model name]     | AIMessageChunk(content="hello") |                                               |                                                 |
| on_chat_model_end    | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world")           |
| on_llm_start         | [model name]     |                                 | {'input': 'hello'}                            |                                                 |
| on_llm_stream        | [model name]     | 'Hello'                         |                                               |                                                 |
| on_llm_end           | [model name]     |                                 | 'Hello human!'                                |                                                 |
| on_chain_start       | format_docs      |                                 |                                               |                                                 |
| on_chain_stream      | format_docs      | "hello world!, goodbye world!"  |                                               |                                                 |
| on_chain_end         | format_docs      |                                 | [Document(…)]                                 | "hello world!, goodbye world!"                  |
| on_tool_start        | some_tool        |                                 | {"x": 1, "y": "2"}                            |                                                 |
| on_tool_end          | some_tool        |                                 |                                               | {"x": 1, "y": "2"}                              |
| on_retriever_start   | [retriever name] |                                 | {"query": "hello"}                            |                                                 |
| on_retriever_end     | [retriever name] |                                 | {"query": "hello"}                            | [Document(…), ..]                               |
| on_prompt_start      | [template_name]  |                                 | {"question": "hello"}                         |                                                 |
| on_prompt_end        | [template_name]  |                                 | {"question": "hello"}                         | ChatPromptValue(messages: [SystemMessage, …])   |

In addition to the standard events, users can also dispatch custom events (see example below).

Custom events will only be surfaced in the v2 version of the API!

A custom event has the following format:

| Attribute | Type | Description                                                                                              |
|-----------|------|----------------------------------------------------------------------------------------------------------|
| name      | str  | A user defined name for the event.                                                                       |
| data      | Any  | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |

Here are declarations associated with the standard events shown above:

format_docs:

def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)

some_tool:

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})

Example:

from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# will produce the following events (run_id and parent_ids
# have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Example: Dispatch Custom Event

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio


async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)
Parameters
  • input (Any) – The input to the Runnable.

  • config (Optional[RunnableConfig]) – The config to use for the Runnable.

  • version (Literal['v1', 'v2']) – The version of the schema to use, either v2 or v1. Users should use v2. v1 is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in v2.

  • include_names (Optional[Sequence[str]]) – Only include events from runnables with matching names.

  • include_types (Optional[Sequence[str]]) – Only include events from runnables with matching types.

  • include_tags (Optional[Sequence[str]]) – Only include events from runnables with matching tags.

  • exclude_names (Optional[Sequence[str]]) – Exclude events from runnables with matching names.

  • exclude_types (Optional[Sequence[str]]) – Exclude events from runnables with matching types.

  • exclude_tags (Optional[Sequence[str]]) – Exclude events from runnables with matching tags.

  • kwargs (Any) – Additional keyword arguments to pass to the Runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.

Yields

An async stream of StreamEvents.

Raises

NotImplementedError – If the version is not v1 or v2.

Return type

AsyncIterator[Union[StandardStreamEvent, CustomStreamEvent]]

batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) List[Output]

Default implementation runs invoke in parallel using a thread pool executor.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters
  • inputs (List[Input]) –

  • config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –

  • return_exceptions (bool) –

  • kwargs (Optional[Any]) –

Return type

List[Output]
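
A minimal sketch, assuming chat from the Instantiate section; max_concurrency in the config caps how many requests run in parallel:

responses = chat.batch(
    ["Tell me a joke", "What is 2 + 2?"],
    config={"max_concurrency": 2},  # limit parallel threads/requests
)
for msg in responses:
    print(msg.content)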

batch_as_completed(inputs: Sequence[Input], config: Optional[Union[RunnableConfig, Sequence[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) Iterator[Tuple[int, Union[Output, Exception]]]

Run invoke in parallel on a list of inputs, yielding results as they complete.

Parameters
  • inputs (Sequence[Input]) –

  • config (Optional[Union[RunnableConfig, Sequence[RunnableConfig]]]) –

  • return_exceptions (bool) –

  • kwargs (Optional[Any]) –

Return type

Iterator[Tuple[int, Union[Output, Exception]]]
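
A sketch of the same inputs through batch_as_completed; unlike batch, results are yielded in completion order, tagged with the input index:

# `chat` is the MiniMaxChat instance from the Instantiate section.
for i, msg in chat.batch_as_completed(["Tell me a joke", "What is 2 + 2?"]):
    print(i, msg.content)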

bind_tools(tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]], **kwargs: Any) Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], BaseMessage][source]

Bind tool-like objects to this chat model.

Parameters
  • tools (Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]]) – A list of tool definitions to bind to this chat model. Can be a dictionary, pydantic model, callable, or BaseTool. Pydantic models, callables, and BaseTools will be automatically converted to their schema dictionary representation.

  • **kwargs (Any) – Any additional parameters to pass to the Runnable constructor.

Return type

Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], BaseMessage]

call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) str

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • message (str) –

  • stop (Optional[List[str]]) –

  • kwargs (Any) –

Return type

str

configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) RunnableSerializable[Input, Output]

Configure alternatives for Runnables that can be set at runtime.

Parameters
  • which (ConfigurableField) – The ConfigurableField instance that will be used to select the alternative.

  • default_key (str) – The default key to use if no alternative is selected. Defaults to “default”.

  • prefix_keys (bool) – Whether to prefix the keys with the ConfigurableField id. Defaults to False.

  • **kwargs (Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) – A dictionary of keys to Runnable instances or callables that return Runnable instances.

Returns

A new Runnable with the alternatives configured.

Return type

RunnableSerializable[Input, Output]

from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-sonnet-20240229"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI()
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(
        configurable={"llm": "openai"}
    ).invoke("which organization created you?").content
)
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) RunnableSerializable[Input, Output]

Configure particular Runnable fields at runtime.

Parameters

**kwargs (Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) – A dictionary of ConfigurableField instances to configure.

Returns

A new Runnable with the fields configured.

Return type

RunnableSerializable[Input, Output]

from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ",
    model.invoke("tell me something about chess").content
)

# max_tokens = 200
print("max_tokens_200: ", model.with_config(
    configurable={"output_token_number": 200}
    ).invoke("tell me something about chess").content
)
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, run_id: Optional[UUID] = None, **kwargs: Any) LLMResult

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. get more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • messages (List[List[BaseMessage]]) – List of list of messages.

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

  • tags (Optional[List[str]]) –

  • metadata (Optional[Dict[str, Any]]) –

  • run_name (Optional[str]) –

  • run_id (Optional[UUID]) –

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
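
A minimal sketch of the sync variant, assuming chat from the Instantiate section; llm_output carries provider-specific metadata when the provider returns any:

from langchain_core.messages import HumanMessage

result = chat.generate([[HumanMessage(content="Hi")]])
print(result.generations[0][0].text)  # top candidate for the first prompt
print(result.llm_output)              # provider-specific metadata, if any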

generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) LLMResult

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. get more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult

get_num_tokens(text: str) int

Get the number of tokens present in the text.

Useful for checking if an input fits in a model’s context window.

Parameters

text (str) – The string input to tokenize.

Returns

The integer number of tokens in the text.

Return type

int

get_num_tokens_from_messages(messages: List[BaseMessage]) int

Get the number of tokens in the messages.

Useful for checking if an input fits in a model’s context window.

Parameters

messages (List[BaseMessage]) – The message inputs to tokenize.

Returns

The sum of the number of tokens across the messages.

Return type

int

get_token_ids(text: str) List[int]

Return the ordered ids of the tokens in a text.

Parameters

text (str) – The string input to tokenize.

Returns

A list of ids corresponding to the tokens in the text, in the order they occur in the text.

Return type

List[int]
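
A sketch of the three token helpers, assuming chat from the Instantiate section; note the base-class defaults count with a generic tokenizer (get_token_ids may require the transformers package), so treat the numbers as estimates for MiniMax models:

from langchain_core.messages import HumanMessage

print(chat.get_num_tokens("我喜欢编程。"))                          # token count for a string
print(chat.get_num_tokens_from_messages([HumanMessage(content="Hi")]))  # summed over messages
print(chat.get_token_ids("hello")[:5])                              # ordered token ids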

invoke(input: LanguageModelInput, config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) BaseMessage

Transform a single input into an output. Override to implement.

Parameters
  • input (LanguageModelInput) – The input to the Runnable.

  • config (Optional[RunnableConfig]) – A config to use when invoking the Runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details.

  • stop (Optional[List[str]]) –

  • kwargs (Any) –

Returns

The output of the Runnable.

Return type

BaseMessage

predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) str

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • text (str) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

str

predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) BaseMessage

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • messages (List[BaseMessage]) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

BaseMessage

stream(input: LanguageModelInput, config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) Iterator[BaseMessageChunk]

Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output.

Parameters
  • input (LanguageModelInput) – The input to the Runnable.

  • config (Optional[RunnableConfig]) – The config to use for the Runnable. Defaults to None.

  • kwargs (Any) – Additional keyword arguments to pass to the Runnable.

  • stop (Optional[List[str]]) –

Yields

The output of the Runnable.

Return type

Iterator[BaseMessageChunk]

to_json() Union[SerializedConstructor, SerializedNotImplemented]

Serialize the Runnable to JSON.

Returns

A JSON-serializable representation of the Runnable.

Return type

Union[SerializedConstructor, SerializedNotImplemented]
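
For example (a sketch, assuming chat from the Instantiate section): the payload is a plain dict whose "type" key distinguishes a full constructor representation from a not-implemented stub:

serialized = chat.to_json()
# 'constructor' if the class supports serialization, else 'not_implemented'.
print(serialized["type"])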

with_structured_output(schema: Union[Dict, Type[BaseModel]], *, include_raw: bool = False, **kwargs: Any) Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]][source]

Model wrapper that returns outputs formatted to match the given schema.

Parameters
  • schema (Union[Dict, Type[BaseModel]]) – The output schema as a dict or a Pydantic class. If a Pydantic class then the model output will be an object of that class. If a dict then the model output will be a dict. With a Pydantic class the returned attributes will be validated, whereas with a dict they will not be. If method is “function_calling” and schema is a dict, then the dict must match the OpenAI function-calling spec.

  • include_raw (bool) – If False then only the parsed structured output is returned. If an error occurs during model output parsing it will be raised. If True then both the raw model response (a BaseMessage) and the parsed model response will be returned. If an error occurs during output parsing it will be caught and returned as well. The final output is always a dict with keys “raw”, “parsed”, and “parsing_error”.

  • kwargs (Any) –

Returns

If include_raw is True, then a dict with keys:
  • raw: BaseMessage

  • parsed: Optional[_DictOrPydantic]

  • parsing_error: Optional[BaseException]

If include_raw is False, then just _DictOrPydantic is returned, where _DictOrPydantic depends on the schema:
  • If schema is a Pydantic class, then _DictOrPydantic is the Pydantic class.

  • If schema is a dict, then _DictOrPydantic is a dict.

Return type

A Runnable that takes any ChatModel input and returns the output described above.

Example: Function-calling, Pydantic schema (method=”function_calling”, include_raw=False):
from langchain_community.chat_models import MiniMaxChat
from langchain_core.pydantic_v1 import BaseModel

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

llm = MiniMaxChat()
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")

# -> AnswerWithJustification(
#     answer='A pound of bricks and a pound of feathers weigh the same.',
#     justification='The weight of the feathers is much less dense than the weight of the bricks, but since both weigh one pound, they weigh the same.'
# )
Example: Function-calling, Pydantic schema (method=”function_calling”, include_raw=True):
from langchain_community.chat_models import MiniMaxChat
from langchain_core.pydantic_v1 import BaseModel

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

llm = MiniMaxChat()
structured_llm = llm.with_structured_output(AnswerWithJustification, include_raw=True)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")

# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_function_8953642285', 'type': 'function', 'function': {'name': 'AnswerWithJustification', 'arguments': '{"answer": "A pound of bricks and a pound of feathers weigh the same.", "justification": "The weight of the feathers is much less dense than the weight of the bricks, but since both weigh one pound, they weigh the same."}'}}]}, response_metadata={'token_usage': {'total_tokens': 257}, 'model_name': 'abab6.5-chat', 'finish_reason': 'tool_calls'}, id='run-d897e037-2796-49f5-847e-f9f69dd390db-0', tool_calls=[{'name': 'AnswerWithJustification', 'args': {'answer': 'A pound of bricks and a pound of feathers weigh the same.', 'justification': 'The weight of the feathers is much less dense than the weight of the bricks, but since both weigh one pound, they weigh the same.'}, 'id': 'call_function_8953642285', 'type': 'tool_call'}]),
#     'parsed': AnswerWithJustification(answer='A pound of bricks and a pound of feathers weigh the same.', justification='The weight of the feathers is much less dense than the weight of the bricks, but since both weigh one pound, they weigh the same.'),
#     'parsing_error': None
# }
Example: Function-calling, dict schema (method=”function_calling”, include_raw=False):
from langchain_community.chat_models import MiniMaxChat
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils.function_calling import convert_to_openai_tool

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

dict_schema = convert_to_openai_tool(AnswerWithJustification)
llm = MiniMaxChat()
structured_llm = llm.with_structured_output(dict_schema)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")

# -> {
#     'answer': 'A pound of bricks and a pound of feathers both weigh the same, which is a pound.',
#     'justification': 'The difference is that bricks are much denser than feathers, so a pound of bricks will take up much less space than a pound of feathers.'
# }
