langchain_experimental.text_splitter.SemanticChunker¶
- class langchain_experimental.text_splitter.SemanticChunker(embeddings: Embeddings, buffer_size: int = 1, add_start_index: bool = False, breakpoint_threshold_type: Literal['percentile', 'standard_deviation', 'interquartile', 'gradient'] = 'percentile', breakpoint_threshold_amount: Optional[float] = None, number_of_chunks: Optional[int] = None, sentence_split_regex: str = '(?<=[.?!])\\s+')[source]¶
Splits text based on semantic similarity.
Adapted from Greg Kamradt's excellent notebook: https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb
All credit to him.
At a high level, this splits the text into sentences, groups them into groups of three sentences, and then merges adjacent groups that are similar in the embedding space.
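For orientation, a minimal usage sketch follows. OpenAIEmbeddings and the sample text are illustrative assumptions only; any Embeddings implementation can be passed.

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings  # assumed embedding backend

# With the defaults, sentences are split on ., ? and !, each sentence is
# combined with buffer_size=1 neighbour on either side before embedding,
# and a new chunk starts wherever the distance between adjacent groups
# crosses the percentile breakpoint.
text_splitter = SemanticChunker(OpenAIEmbeddings())

chunks = text_splitter.split_text(
    "LangChain helps build LLM applications. It ships many integrations. "
    "Bananas are rich in potassium. They grow in tropical climates."
)
print(chunks)
```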
Methods
- __init__(embeddings[, buffer_size, ...])
- atransform_documents(documents, **kwargs): Asynchronously transform a list of documents.
- create_documents(texts[, metadatas]): Create documents from a list of texts.
- split_documents(documents): Split documents.
- split_text(text)
- transform_documents(documents, **kwargs): Transform a sequence of documents by splitting them.
- Parameters
embeddings (Embeddings) –
buffer_size (int) –
add_start_index (bool) –
breakpoint_threshold_type (Literal['percentile', 'standard_deviation', 'interquartile', 'gradient']) –
breakpoint_threshold_amount (Optional[float]) –
number_of_chunks (Optional[int]) –
sentence_split_regex (str) –
- __init__(embeddings: Embeddings, buffer_size: int = 1, add_start_index: bool = False, breakpoint_threshold_type: Literal['percentile', 'standard_deviation', 'interquartile', 'gradient'] = 'percentile', breakpoint_threshold_amount: Optional[float] = None, number_of_chunks: Optional[int] = None, sentence_split_regex: str = '(?<=[.?!])\\s+')[source]¶
- Parameters (see the configuration sketch after this list)
embeddings (Embeddings) –
buffer_size (int) –
add_start_index (bool) –
breakpoint_threshold_type (Literal['percentile', 'standard_deviation', 'interquartile', 'gradient']) –
breakpoint_threshold_amount (Optional[float]) –
number_of_chunks (Optional[int]) –
sentence_split_regex (str) –
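A sketch of two alternative breakpoint configurations. The numeric values are illustrative choices, not documented defaults, and OpenAIEmbeddings is again just one possible Embeddings implementation.

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings  # assumed embedding backend

# Start a new chunk where the distance between adjacent sentence groups
# exceeds the 95th percentile of all observed distances.
percentile_chunker = SemanticChunker(
    OpenAIEmbeddings(),
    breakpoint_threshold_type="percentile",
    breakpoint_threshold_amount=95.0,  # illustrative value
)

# Start a new chunk where the distance is more than three standard
# deviations above the mean distance.
stddev_chunker = SemanticChunker(
    OpenAIEmbeddings(),
    breakpoint_threshold_type="standard_deviation",
    breakpoint_threshold_amount=3.0,  # illustrative value
)
```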
- async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a list of documents.
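A minimal sketch of the async variant, assuming an asyncio entry point; the embedding model and the input document are placeholders.

```python
import asyncio

from langchain_core.documents import Document
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings  # assumed embedding backend


async def main() -> None:
    chunker = SemanticChunker(OpenAIEmbeddings())
    docs = [
        Document(
            page_content="First topic sentence. More on that topic. "
            "An unrelated topic starts here."
        )
    ]
    # Returns a sequence of (typically smaller) Document chunks.
    chunks = await chunker.atransform_documents(docs)
    print(len(chunks))


asyncio.run(main())
```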
- create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document][source]¶
Create documents from a list of texts; an example follows below.
- Parameters
texts (List[str]) –
metadatas (Optional[List[dict]]) –
- Return type
List[Document]
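A sketch of create_documents with per-text metadata; the text and the source file name are made up for illustration, and the metadata passed for each text is copied onto the chunks produced from it.

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings  # assumed embedding backend

chunker = SemanticChunker(OpenAIEmbeddings(), add_start_index=True)

texts = [
    "Cats purr when they are content. Dogs wag their tails. "
    "Quantum computers use qubits instead of bits."
]
docs = chunker.create_documents(texts, metadatas=[{"source": "notes.txt"}])

for doc in docs:
    # Each chunk carries the metadata of the text it came from; with
    # add_start_index=True it should also include the chunk's character
    # offset within that text.
    print(doc.metadata, doc.page_content[:40])
```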