Quickstart Tutorial from the Official LangChain Website


Quickstart Guide

This tutorial gives you a quick walkthrough of building an end-to-end language model application with LangChain.

Installation

To get started, install LangChain with the following command:

pip install langchain
# or
conda install langchain -c conda-forge

Environment Setup

Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc.

For this example, we will use OpenAI's API, so we first need to install their SDK:

pip install openai

We then need to set the environment variable in the terminal:

export OPENAI_API_KEY="..."

Alternatively, you can do this from inside a Jupyter notebook (or Python script):

import os
os.environ["OPENAI_API_KEY"] = "..."

If you want to set the API key dynamically, you can use the openai_api_key parameter when initializing the OpenAI class, e.g. for a per-user API key.

from langchain.llms import OpenAI

llm = OpenAI(openai_api_key="OPENAI_API_KEY")

Building a Language Model Application: LLMs

Now that we have installed LangChain and set up our environment, we can start building our language model application.

LangChain provides many modules that can be used to build language model applications. Modules can be combined to create more complex applications, or used individually for simple applications.

LLMs: Get Predictions from a Language Model

The most basic building block of LangChain is calling an LLM on some input. Let's walk through a simple example of how to do this. For this purpose, let's pretend we are building a service that generates a company name based on what the company makes.

To do this, we first need to import the LLM wrapper.

from langchain.llms import OpenAI

We can then initialize the wrapper with any arguments. In this example, we probably want the output to be more random, so we'll initialize it with a high temperature.

llm = OpenAI(temperature=0.9)

We can now call it on some input!

text = "What would be a good company name for a company that makes colorful socks?"
print(llm(text))

Feetful of Fun

For more details on how to use LLMs within LangChain, see the LLM getting started guide.

Prompt Templates: Manage prompts for LLMs

Calling an LLM is a great first step, but it's just the beginning. Normally, when you use an LLM in an application, you are not sending user input directly to the LLM. Instead, you are probably taking user input and constructing a prompt, and then sending that to the LLM.

For example, in the previous example, the text we passed in was hardcoded to ask for the name of a company that makes colorful socks. In this imaginary service, what we would want to do is take only the user input describing what the company does, and then format the prompt with that information.

This is easy to do with LangChain!

First, let's define the prompt template:

from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

Let's now see how this works! We can call the .format method to format it.

print(prompt.format(product="colorful socks"))

What is a good name for a company that makes colorful socks?
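Under the hood, substitution with the default f-string template format behaves much like Python's built-in str.format. A minimal plain-Python sketch of the same substitution, with no LangChain required (format_prompt here is a hypothetical helper, not a LangChain function):

```python
# The template from the example above, with a named {product} variable.
template = "What is a good name for a company that makes {product}?"

def format_prompt(**kwargs) -> str:
    # Substitute the named variables into the template string.
    return template.format(**kwargs)

print(format_prompt(product="colorful socks"))
# -> What is a good name for a company that makes colorful socks?
```

This is why PromptTemplate declares input_variables: it knows which names the template expects before any formatting happens.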

For more details, check out the getting started guide for prompts.

Chains: Combine LLMs and Prompts in Multi-Step Workflows

Up until now, we've worked with the PromptTemplate and LLM primitives by themselves. But of course, a real application is not just one primitive, but rather a combination of them.

A chain in LangChain is made up of links, which can be either primitives like LLMs or other chains.

The most core type of chain is an LLMChain, which consists of a PromptTemplate and an LLM.

Extending the previous example, we can construct an LLMChain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM.

from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9)

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:

from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)

Now we can run that chain only specifying the product!

chain.run("colorful socks")
# -> '\n\nSocktastic!'

There we go! There's the first chain – an LLMChain. This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.

For more details, check out the getting started guide for chains.

Agents: Dynamically Call Chains Based on User Input

So far the chains we've looked at run in a predetermined order.

Agents no longer do this: they use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user.

When used correctly, agents can be extremely powerful. In this tutorial, we show you how to easily use agents through the simplest, highest-level API.

In order to load agents, you should understand the following concepts:

  • Tool: A function that performs a specific duty. This can be things like: Google Search, database lookup, Python REPL, other chains. The interface for a tool is currently a function that takes a string as input and returns a string as output.
  • LLM: The language model powering the agent.
  • Agent: The agent to use. This should be a string that references a supported agent class. Because this notebook focuses on the simplest, highest-level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents (coming soon).

Agents: For a list of supported agents and their specifications, see here.

Tools: For a list of predefined tools and their specifications, see here.
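The string-in, string-out tool interface can be illustrated without LangChain. The word_count_tool below is a hypothetical toy tool for illustration only, not one of the predefined tools:

```python
# A toy "tool" matching the agent tool interface: it takes a string as
# input and returns a string as output. A real tool would call a search
# engine, a database, a Python REPL, or another chain instead.
def word_count_tool(query: str) -> str:
    return f"The query contains {len(query.split())} words."

print(word_count_tool("high temperature in SF yesterday"))
# -> The query contains 5 words.
```

Because every tool shares this uniform interface, the agent can pick any of them by name and simply pass along the string it decides to use as the action input.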

For this example, you will also need to install the SerpAPI Python package.

pip install google-search-results

And set the appropriate environment variables.

import os
os.environ["SERPAPI_API_KEY"] = "..."

Now we can get started!

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
llm = OpenAI(temperature=0)

# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)


# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Now let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")

> Entering new AgentExecutor chain...
 I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action: Calculator
Action Input: 57^.023
Observation: Answer: 1.0974509573251117

Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.

> Finished chain.

Memory: Add State to Chains and Agents

So far, all the chains and agents we’ve gone through have been stateless. But often, you may want a chain or agent to have some concept of “memory” so that it may remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot – you want it to remember previous messages so it can use that context to have a better conversation. This would be a type of “short-term memory”. On the more complex side, you could imagine a chain/agent remembering key pieces of information over time – this would be a form of “long-term memory”. For more concrete ideas on the latter, see this awesome paper.
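The "short-term memory" idea can be sketched in plain Python: keep a buffer of past turns and prepend it to each new prompt. The BufferMemory class below is a hypothetical illustration of the concept, not LangChain's actual implementation:

```python
# Minimal sketch of conversation buffer memory: store every turn and
# render the whole history as context for the next prompt.
class BufferMemory:
    def __init__(self):
        self.turns = []  # list of (human, ai) pairs

    def save(self, human: str, ai: str) -> None:
        self.turns.append((human, ai))

    def as_context(self) -> str:
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

memory = BufferMemory()
memory.save("Hi there!", "Hello! How are you today?")

# Each new prompt carries the full history, so the model sees the context.
prompt = f"Current conversation:\n{memory.as_context()}\nHuman: I'm doing well!\nAI:"
print(prompt)
```

This matches the prompt shapes shown in the traces below: the history grows with each turn, and the formatted prompt always ends with the fresh human input.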

LangChain provides several specially created chains just for this purpose. This notebook walks through using one of those chains (the ConversationChain) with two different types of memory.

By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let’s take a look at using this chain (setting verbose=True so we can see the prompt).

from langchain import OpenAI, ConversationChain

llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)

output = conversation.predict(input="Hi there!")
print(output)

> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi there!
AI:

> Finished chain.
' Hello! How are you today?'

output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
print(output)

> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi there!
AI:  Hello! How are you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:

> Finished chain.
" That's great! What would you like to talk about?"

Building a Language Model Application: Chat Models

Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than exposing a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
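The difference between the two interfaces can be sketched as a pair of toy functions. These are hypothetical illustrations of the shapes involved, not LangChain's actual types:

```python
from typing import List, NamedTuple

class ChatMessage(NamedTuple):
    role: str      # e.g. "system", "human", "ai"
    content: str

# "text in, text out": the plain LLM interface.
def fake_llm(prompt: str) -> str:
    # A stand-in for a real model call; it just echoes for illustration.
    return f"(completion for: {prompt})"

# "chat messages in, chat message out": the chat model interface.
def fake_chat_model(messages: List[ChatMessage]) -> ChatMessage:
    last = messages[-1].content
    return ChatMessage(role="ai", content=f"(reply to: {last})")

reply = fake_chat_model([ChatMessage("human", "Hi there!")])
print(reply.role)  # -> ai
```

Structuring the input as role-tagged messages is what lets chat models distinguish system instructions from user turns, as the real examples below show.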

Get Message Completions from a Chat Model

You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.

from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatOpenAI(temperature=0)

You can get completions by passing in a single message.

chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})

You can also pass in multiple messages for OpenAI's gpt-3.5-turbo and gpt-4 models.

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})

You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter:

batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})

You can recover things like token usage from this LLMResult:

result.llm_output['token_usage']
# -> {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}

Chat Prompt Templates

Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt – this returns a PromptValue, which you can convert to a string or a Message object, depending on whether you want to use the formatted value as input to an LLM or a chat model.

For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})

Chains with Chat Models

The LLMChain discussed in the above section can be used with chat models as well:

from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."

Agents with Chat Models

Agents can also be used with chat models; you can initialize one using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type.

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)

# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)


# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")

> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
  "action": "Search",
  "action_input": "Olivia Wilde boyfriend"
}

Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought:I need to use a search engine to find Harry Styles' current age.
Action:
{
  "action": "Search",
  "action_input": "Harry Styles age"
}

Observation: 29 years
Thought:Now I need to calculate 29 raised to the 0.23 power.
Action:
{
  "action": "Calculator",
  "action_input": "29^0.23"
}

Observation: Answer: 2.169459462491557

Thought:I now know the final answer.
Final Answer: 2.169459462491557

> Finished chain.
'2.169459462491557'

Memory: Add State to Chains and Agents

You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.

from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])

llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)

conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'


conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"

conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
Published by Windows12系统管理员 on 2023-06-14. Unless otherwise noted, articles on this site are released under the CC 4.0 license; please credit the source when reposting.