LangChain
langchain_core.globals.get_llm_cache¶
langchain_core.globals.get_llm_cache() → BaseCache [source]¶
Get the value of the llm_cache global setting.

Returns
    The value of the llm_cache global setting.

Return type
    BaseCache