Ruby langchainrb gem and custom configuration for the model setup

Question background:

I am working on a prototype using the langchainrb gem. I am using the Assistant module to implement a basic RAG architecture.
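For context, a minimal sketch of that kind of prototype, assuming the Langchain::Assistant API shown in the gem's README; the retrieval/tool wiring of the real prototype is omitted and the instructions string is made up for illustration:

ruby
require "langchain"

# Sketch of the working baseline: an OpenAI-backed LLM with the gem's default
# model settings, driven through the Assistant module.
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

assistant = Langchain::Assistant.new(
  llm: llm,
  instructions: "Answer questions using the retrieved context." # hypothetical prompt
)

assistant.add_message(content: "What do the indexed documents say about X?")
assistant.run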

Everything works, and now I would like to customize the model configuration.

In the documentation there is no clear way of setting up the model. In my case, I would like to use OpenAI with:

  • temperature: 0.1
  • Model: gpt-4o

In the README, there is a mention of using llm_options.

If I go to the OpenAI module documentation, it says I have to check the ruby-openai configuration:

But there is no mention of temperature there, for example. Also, in the example in the Langchain::LLM::OpenAI documentation, the options are totally different.

ruby
# ruby-openai options:

CONFIG_KEYS = %i[
  api_type
  api_version
  access_token
  log_errors
  organization_id
  uri_base
  request_timeout
  extra_headers
].freeze
ruby
# Example from the Langchain::LLM::OpenAI class documentation:

{
  n: 1,
  temperature: 0.0,
  chat_completion_model_name: "gpt-3.5-turbo",
  embeddings_model_name: "text-embedding-3-small"
}.freeze

Solution:

I had a conflict between llm_options and default_options. I thought they were the same setting with different priorities.
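As far as I can tell (a hedged sketch, not something the quoted docs state outright), llm_options is forwarded to the underlying ruby-openai client, so it takes the client-level CONFIG_KEYS listed above, while default_options carries request-level parameters such as temperature and the model name:

ruby
# Sketch: client-level settings, assuming llm_options is passed through to the
# ruby-openai client (OpenAI::Client); model parameters do not belong here.
llm = Langchain::LLM::OpenAI.new(
  api_key: ENV["OPENAI_API_KEY"],
  llm_options: {
    request_timeout: 120, # seconds, a ruby-openai CONFIG_KEYS entry
    log_errors: true
  }
)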

For the needs expressed in the question, I have to use default_options, as shown here:

ruby
llm =
  Langchain::LLM::OpenAI.new(
    api_key: ENV["OPENAI_API_KEY"],
    default_options: {
      temperature: 0.0,
      chat_completion_model_name: "gpt-4o"
    }
  )
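
With that in place, the defaults apply to every request made through this client. A quick sanity check (a sketch, assuming the chat interface shown in the gem's README):

ruby
# Sketch: this call should be answered by gpt-4o at the configured temperature;
# the message format and response accessor follow the README examples.
response = llm.chat(messages: [{ role: "user", content: "Say hello in one word." }])
puts response.chat_completion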