Ruby langchainrb gem and custom configuration for the model setup


Background:

I am working on a prototype using the langchainrb gem. I am using the Assistant module to implement a basic RAG (Retrieval-Augmented Generation) architecture.

Everything works, and now I would like to customize the model configuration.


In the documentation there is no clear way of setting up the model. In my case, I would like to use OpenAI with:

  • temperature: 0.1
  • model: gpt-4o

In the README, there is a mention of using llm_options.

If I go to the OpenAI Module documentation:


It says I have to check here:

But there is no mention of temperature, for example. Also, in the example in the Langchain::LLM::OpenAI documentation, the options are totally different.


```ruby
# ruby-openai options:

CONFIG_KEYS = %i[
  api_type
  api_version
  access_token
  log_errors
  organization_id
  uri_base
  request_timeout
  extra_headers
].freeze
```
```ruby
# Example in the Langchain::LLM::OpenAI documentation:

{
  n: 1,
  temperature: 0.0,
  chat_completion_model_name: "gpt-3.5-turbo",
  embeddings_model_name: "text-embedding-3-small"
}.freeze
```
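The mismatch between those two hashes is the crux of the confusion: the first lists the only keys the underlying ruby-openai client understands, so a generation parameter such as temperature can never travel through llm_options. A toy plain-Ruby sketch of that filtering (the `slice` call is illustrative, not the library's actual code):

```ruby
# Toy illustration: the ruby-openai client only reads the keys listed in
# CONFIG_KEYS, so a generation parameter like :temperature passed via
# llm_options never reaches a chat request.
CONFIG_KEYS = %i[
  api_type api_version access_token log_errors
  organization_id uri_base request_timeout extra_headers
].freeze

llm_options = { request_timeout: 120, temperature: 0.1 }

# Keep only the keys the client recognizes; :temperature is dropped.
accepted = llm_options.slice(*CONFIG_KEYS)
```

So llm_options is the right place for connection-level settings such as request_timeout, and the wrong place for sampling parameters.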

Answer:

I had confused llm_options with default_options. I thought they were the same thing with different priorities.

For the needs expressed in the question, I have to use default_options, as in here:

```ruby
llm =
  Langchain::LLM::OpenAI.new(
    api_key: <OPENAI_KEY>,
    default_options: {
      temperature: 0.0,
      chat_completion_model_name: "gpt-4o"
    }
  )
```
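In other words, default_options behave like request-level defaults that an individual chat call can still override. A plain-Ruby sketch of that semantics (no gem required; the merge order here is an assumption about the library's behavior, not its actual source):

```ruby
# Sketch: default_options act as per-request defaults; parameters given
# on a specific call take precedence over them.
DEFAULTS = {
  temperature: 0.0,
  chat_completion_model_name: "gpt-4o"
}.freeze

# Hypothetical helper mimicking how defaults and per-call params combine.
def chat_params(defaults, per_call = {})
  defaults.merge(per_call) # per-call values win over the defaults
end

baseline = chat_params(DEFAULTS)
creative = chat_params(DEFAULTS, temperature: 0.9)
```

The configured llm object can then be handed to the assistant via its llm: keyword, per the langchainrb README.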