
Enabling debugging

Method 1: set the module-level attribute

python

import langchain
langchain.debug = True

Method 2: use the globals helper

python
from langchain.globals import set_debug
set_debug(True)

Example

python
from langchain_community.llms import Ollama
import langchain

llm = Ollama(model="qwen2:1.5b")
langchain.debug = True  # enable debug logging before invoking the model
res = llm.invoke("你是谁?")
print(res)

Output

shell
[llm/start] [llm:Ollama] Entering LLM run with input:
{
  "prompts": [
    "你是谁?"
  ]
}
[llm/end] [llm:Ollama] [479ms] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "我是一个大型语言模型,由阿里云开发。",
        "generation_info": {
          "model": "qwen2:1.5b",
          "created_at": "2025-03-31T15:36:00.096341Z",
          "response": "",
          "done": true,
          "done_reason": "stop",
          "context": [
            151644,
            872,
            198,
            105043,
            100165,
            11319,
            151645,
            198,
            151644,
            77091,
            198,
            35946,
            101909,
            101951,
            102064,
            104949,
            3837,
            67071,
            102661,
            99718,
            100013,
            1773
          ],
          "total_duration": 466333833,
          "load_duration": 28063000,
          "prompt_eval_count": 11,
          "prompt_eval_duration": 225000000,
          "eval_count": 12,
          "eval_duration": 211000000
        },
        "type": "Generation"
      }
    ]
  ],
  "llm_output": null,
  "run": null,
  "type": "LLMResult"
}
我是一个大型语言模型,由阿里云开发。