llm.invoke("how can langsmith help with testing?")
Output
‘Test Case Generation:\n\nGenerate Test Cases Automatically:* Langsmith can generate test cases based on the specifications or code under test, reducing manual effort and ensuring thorough coverage.\n* Extract Test Cases from Code: It can extract test cases from existing code, helping to maintain test coverage and prevent regression bugs.\n\nTest Execution and Automation:\n\nExecute Tests Automatically:* Langsmith can automate test execution, saving time and ensuring consistent and repeatable results.\n* Integrate with CI/CD Pipelines: It can be integrated into CI/CD pipelines to run tests automatically as part of the build and deployment process.\n\nTest Reporting and Analysis:\n\nGenerate Test Reports:* Langsmith can generate detailed test reports that provide insights into test coverage, pass/fail rates, and any issues encountered.\n* Analyze Test Results: It can analyze test results to identify patterns, bottlenecks, and areas for improvement i…’
chain.invoke({"input": "how can langsmith help with testing?"})
Output
‘Test Case Generation\n\nNatural language processing (NLP): Langsmith can extract test cases from user stories, requirements documents, and other text-based sources.\nMachine learning (ML): Langsmith’s ML algorithms can automatically generate test cases based on predefined rules and patterns.\n\nTest Case Management\n\nCentralized repository: Langsmith provides a central repository for storing and managing test cases, ensuring consistency and traceability.\nCollaboration and version control: Multiple users can collaborate on test cases, track changes, and maintain different versions.\nTest case prioritization: Langsmith can prioritize test cases based on risk, coverage, and business value.\n\nTest Execution**\n\nTest automation: Langsmith can generate automated test scripts from test cases, reducing manual effort and increasing efficiency.\n* Integration with test tools: Langsmith can integrate with various test tools, such as Selenium a…’
from langchain_core.output_parsers import StrOutputParser
output_parser = StrOutputParser()
We can now add this to the previous chain:
chain = prompt | llm | output_parser
We can now invoke it and ask the same question. The answer will now be a string (rather than a ChatMessage).
chain.invoke({"input": "how can langsmith help with testing?"})
Note: running with the Gemini API in Colab, these invocations showed no difference in output format; the result was a plain string either way.
‘Langsmith can assist with testing by:\n\n1. Automating Test Case Generation:\n\nGenerates test cases based on the specifications or requirements.\n Reduces manual effort and improves test coverage.\n\n2. Code Coverage Analysis:*\n\nAnalyzes the codebase and identifies areas not covered by tests.\n* Helps ensure comprehensive testing and reduces the risk of missed defects.\n\n3. Test Documentation Generation:\n\nAutomatically generates test plans, test cases, and reports.\n Streamlines documentation and improves communication among stakeholders.\n\n4. Assisted Testing:\n\nProvides AI-powered suggestions and guidance during manual testing.\n Identifies potential issues and helps testers focus on high-risk areas.\n\n5. Test Case Optimization:\n\nOptimizes test cases to reduce redundancy and increase efficiency.\n Reduces test execution time and improves overall testing productivity.\n\n6. Integration with Test Management Tools:*\n\n Integrates with…’
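The `prompt | llm | output_parser` composition works because each component implements the `|` operator to chain runnables, so the output of one stage becomes the input of the next. A minimal pure-Python sketch of the idea (the `Runnable` class and the three toy stages below are illustrative stand-ins, not LangChain's actual classes):

```python
class Runnable:
    """Toy stand-in for LangChain's Runnable: wraps a function, supports `|`."""

    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # Chaining: the output of self becomes the input of other.
        return Runnable(lambda value: other.invoke(self.invoke(value)))


# Toy stages mimicking prompt | llm | output_parser
prompt = Runnable(lambda d: f"Question: {d['input']}")
llm = Runnable(lambda text: {"content": f"Answer to '{text}'"})
output_parser = Runnable(lambda msg: msg["content"])

chain = prompt | llm | output_parser
print(chain.invoke({"input": "how can langsmith help with testing?"}))
```

The point of the sketch is only the plumbing: the parser at the end unwraps the message dict, just as StrOutputParser turns a ChatMessage into a string.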
Retrieval Chain
To answer this question ("how can langsmith help with testing?") accurately, we need to supply the LLM with additional relevant information. This can be done via retrieval. Retrieval is especially important when you have too much data to pass to the LLM directly: you can use a retriever to select only the most relevant pieces of information and pass just those to the model.
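The core retrieval idea — score documents against the question and keep only the best matches — can be illustrated with a toy keyword-overlap retriever in plain Python (no LangChain; scoring by shared words is a deliberate simplification of real embedding-based similarity):

```python
def retrieve(query, documents, k=2):
    """Return the k documents sharing the most words with the query (toy scoring)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


docs = [
    "LangSmith lets you visualize test results",
    "LangChain composes prompts and models into chains",
    "Bananas are rich in potassium",
]
print(retrieve("how can langsmith help with testing", docs, k=1))
```

A real retriever would use vector embeddings and a vector store, but the contract is the same: question in, small list of relevant documents out.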
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.documents import Document

prompt = ChatPromptTemplate.from_template("""Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}""")

document_chain = create_stuff_documents_chain(llm, prompt)

document_chain.invoke({
    "input": "how can langsmith help with testing?",
    "context": [Document(page_content="langsmith can let you visualize test results")],
})
'Langsmith can help with testing by letting you visualize test results.'
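What create_stuff_documents_chain does before calling the model is "stuff" every retrieved document into the prompt's {context} slot. That formatting step can be sketched in plain Python (the helper below is illustrative; the template string mirrors the prompt above):

```python
def stuff_documents(question, documents):
    """Join all documents into one context block and fill the prompt template (toy)."""
    context = "\n\n".join(documents)
    return (
        "Answer the following question based only on the provided context:\n"
        f"<context>\n{context}\n</context>\n"
        f"Question: {question}"
    )


filled = stuff_documents(
    "how can langsmith help with testing?",
    ["langsmith can let you visualize test results"],
)
print(filled)
```

"Stuffing" works well while the documents fit in the context window; for larger corpora you would retrieve first and stuff only the top matches.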
response = retrieval_chain.invoke({"input": "how can langsmith help with testing?"})
print(response["answer"])
Output
LangSmith can help with testing by allowing developers to create datasets, which are collections of inputs and reference outputs, and use these to run tests on their LLM applications. These test cases can be uploaded in bulk, created on the fly, or exported from application traces. LangSmith also makes it easy to run custom evaluations (both LLM and heuristic based) to score test results.
from langchain.chains import create_history_aware_retriever
from langchain_core.prompts import MessagesPlaceholder
# First we need a prompt that we can pass into an LLM to generate this search query
prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}"),
    ("user", "Given the above conversation, generate a search query to look up to get information relevant to the conversation"),
])
retriever_chain = create_history_aware_retriever(llm, retriever, prompt)
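The history-aware retriever first asks the LLM to rewrite the follow-up question into a standalone search query, then hands that query to the underlying retriever. The control flow can be sketched with toy stand-ins (here `generate_query` just folds the conversation into the query; a real chain would call the LLM with the prompt above):

```python
def generate_query(chat_history, user_input):
    """Toy stand-in for the LLM call: fold the conversation into one search query."""
    history_text = " ".join(message for _, message in chat_history)
    return f"{history_text} {user_input}".strip()


def history_aware_retrieve(chat_history, user_input, retriever):
    query = generate_query(chat_history, user_input)
    return retriever(query)


chat_history = [("human", "Can LangSmith help test my LLM applications?"),
                ("ai", "Yes!")]
docs = {"langsmith": "LangSmith lets you create datasets and run tests."}
retriever = lambda q: [text for key, text in docs.items() if key in q.lower()]
print(history_aware_retrieve(chat_history, "Tell me how", retriever))
```

Note why this matters: "Tell me how" alone matches nothing, but a query built from the whole conversation still mentions LangSmith, so the right document is retrieved.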
To verify how this works, we can try a scenario where the user asks a follow-up question.
from langchain_core.messages import HumanMessage, AIMessage
chat_history = [
    HumanMessage(content="Can LangSmith help test my LLM applications?"),
    AIMessage(content="Yes!"),
]
response = retriever_chain.invoke({
    "chat_history": chat_history,
    "input": "Tell me how",
})
chat_history = [
    HumanMessage(content="Can LangSmith help test my LLM applications?"),
    AIMessage(content="Yes!"),
]
response = retrieval_chain.invoke({
    "chat_history": chat_history,
    "input": "Tell me how",
})
print(response['answer'])
Output
LangSmith allows developers to create datasets, which are collections of inputs and reference outputs, and use these to run tests on their LLM applications. These test cases can be uploaded in bulk, created on the fly, or exported from application traces. LangSmith also makes it easy to run custom evaluations (both LLM and heuristic based) to score test results.