langchain.chat_models.base.init_chat_model
- langchain.chat_models.base.init_chat_model(model: str, *, model_provider: Optional[str] = None, configurable_fields: Literal[None] = None, config_prefix: Optional[str] = None, **kwargs: Any) → BaseChatModel [source]
- langchain.chat_models.base.init_chat_model(model: Literal[None] = None, *, model_provider: Optional[str] = None, configurable_fields: Literal[None] = None, config_prefix: Optional[str] = None, **kwargs: Any) → _ConfigurableModel
- langchain.chat_models.base.init_chat_model(model: Optional[str] = None, *, model_provider: Optional[str] = None, configurable_fields: Union[Literal['any'], List[str], Tuple[str, ...]] = None, config_prefix: Optional[str] = None, **kwargs: Any) → _ConfigurableModel
Beta
This feature is in beta. It is actively being worked on, so the API may change.
Initialize a ChatModel from the model name and provider.
Must have the integration package corresponding to the model provider installed.
New in version 0.2.7.
Changed in version 0.2.8: Added support for configurable_fields and config_prefix.
Changed in version 0.2.12: Added support for Ollama via the langchain-ollama package. Previously the now-deprecated langchain-community version of Ollama was installed by default.
- Parameters
model – The name of the model, e.g. "gpt-4o", "claude-3-opus-20240229".
model_provider –
The model provider. Supported model_provider values and the corresponding integration package:
openai (langchain-openai)
anthropic (langchain-anthropic)
azure_openai (langchain-openai)
google_vertexai (langchain-google-vertexai)
google_genai (langchain-google-genai)
bedrock (langchain-aws)
cohere (langchain-cohere)
fireworks (langchain-fireworks)
together (langchain-together)
mistralai (langchain-mistralai)
huggingface (langchain-huggingface)
groq (langchain-groq)
ollama (langchain-ollama) [support added in langchain==0.2.12]
If not specified, will attempt to infer model_provider from model. The following providers will be inferred based on these model prefixes:
gpt-3… 或 gpt-4… → openai
claude… → anthropic
amazon… → bedrock
gemini… → google_vertexai
command… → cohere
accounts/fireworks… → fireworks
configurable_fields –
Which model parameters are configurable:
None: No configurable fields.
"any": All fields are configurable. See the security note below.
Union[List[str], Tuple[str, ...]]: The specified fields are configurable.
Fields are assumed to have the config_prefix already stripped if a config_prefix is specified. Defaults to None if model is specified; defaults to
("model", "model_provider")
if model is not specified.
*Security note*: Setting
configurable_fields="any"
means fields like api_key, base_url, etc. can be altered at runtime, potentially redirecting model requests to a different service/user. Make sure that if you are accepting untrusted configurations, you enumerate the configurable fields explicitly, i.e. configurable_fields=(...).
config_prefix – If config_prefix is a non-empty string, the model will be configurable at runtime via the
config["configurable"]["{config_prefix}_{param}"]
keys. If config_prefix is an empty string, the model will be configurable via config["configurable"]["{param}"].
kwargs – Additional keyword arguments to pass to
<<selected ChatModel>>.__init__(model=model_name, **kwargs).
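The prefix-based provider inference and the config-key naming rules described above can be sketched in plain Python. This is an illustrative approximation only: the helper names `infer_model_provider` and `config_key` are hypothetical and do not appear in LangChain; the sketch simply mirrors the documented prefix table and the `{config_prefix}_{param}` key convention.

```python
from typing import Optional

# Prefix table as documented above (hypothetical name, not LangChain's code).
_PREFIX_TO_PROVIDER = [
    (("gpt-3", "gpt-4"), "openai"),
    (("claude",), "anthropic"),
    (("amazon",), "bedrock"),
    (("gemini",), "google_vertexai"),
    (("command",), "cohere"),
    (("accounts/fireworks",), "fireworks"),
]

def infer_model_provider(model: str) -> Optional[str]:
    """Return the provider inferred from a model-name prefix, or None if unknown."""
    for prefixes, provider in _PREFIX_TO_PROVIDER:
        if model.startswith(prefixes):
            return provider
    return None

def config_key(param: str, config_prefix: str = "") -> str:
    """Build the config["configurable"] key for a runtime-configurable param."""
    return f"{config_prefix}_{param}" if config_prefix else param

print(infer_model_provider("gpt-4o"))                  # openai
print(infer_model_provider("claude-3-opus-20240229"))  # anthropic
print(config_key("model", "foo"))                      # foo_model
print(config_key("temperature"))                       # temperature
```

This is why, in the config-prefix example further below, the runtime keys become "foo_model", "foo_model_provider", and "foo_temperature".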
- Returns
A BaseChatModel corresponding to the model_name and model_provider specified if configurability is inferred to be False. If configurable, a chat model emulator that initializes the underlying model at runtime once a config is passed in.
- Raises
ValueError – If model_provider cannot be inferred or is not supported.
ImportError – If the model provider's integration package is not installed.
- Initialize non-configurable models
# pip install langchain langchain-openai langchain-anthropic langchain-google-vertexai
from langchain.chat_models import init_chat_model

gpt_4o = init_chat_model("gpt-4o", model_provider="openai", temperature=0)
claude_opus = init_chat_model("claude-3-opus-20240229", model_provider="anthropic", temperature=0)
gemini_15 = init_chat_model("gemini-1.5-pro", model_provider="google_vertexai", temperature=0)

gpt_4o.invoke("what's your name")
claude_opus.invoke("what's your name")
gemini_15.invoke("what's your name")
- Create a partially configurable model with no default model
# pip install langchain langchain-openai langchain-anthropic
from langchain.chat_models import init_chat_model

# We don't need to specify configurable=True if a model isn't specified.
configurable_model = init_chat_model(temperature=0)

configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "gpt-4o"}}
)
# GPT-4o response

configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "claude-3-5-sonnet-20240620"}}
)
# claude-3.5 sonnet response
- Create a fully configurable model with a default model and a config prefix
# pip install langchain langchain-openai langchain-anthropic
from langchain.chat_models import init_chat_model

configurable_model_with_default = init_chat_model(
    "gpt-4o",
    model_provider="openai",
    configurable_fields="any",  # this allows us to configure other params like temperature, max_tokens, etc at runtime.
    config_prefix="foo",
    temperature=0
)

configurable_model_with_default.invoke("what's your name")
# GPT-4o response with temperature 0

configurable_model_with_default.invoke(
    "what's your name",
    config={
        "configurable": {
            "foo_model": "claude-3-5-sonnet-20240620",
            "foo_model_provider": "anthropic",
            "foo_temperature": 0.6
        }
    }
)
# Claude-3.5 sonnet response with temperature 0.6
- Bind tools to a configurable model
You can call any declarative ChatModel method on a configurable model in the same way that you would with a normal model.
# pip install langchain langchain-openai langchain-anthropic
from langchain.chat_models import init_chat_model
from langchain_core.pydantic_v1 import BaseModel, Field

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

configurable_model = init_chat_model(
    "gpt-4o",
    configurable_fields=("model", "model_provider"),
    temperature=0
)

configurable_model_with_tools = configurable_model.bind_tools([GetWeather, GetPopulation])
configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
# GPT-4o response with tool calls

configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?",
    config={"configurable": {"model": "claude-3-5-sonnet-20240620"}}
)
# Claude-3.5 sonnet response with tools