Gemini CLI Core

Gemini CLI's core package (packages/core) is the backend portion of Gemini CLI, handling communication with the Gemini API, managing tools, and processing requests sent from packages/cli. For a general overview of Gemini CLI, see the main documentation page.


Role of the core

While the packages/cli portion of Gemini CLI provides the user interface, packages/core is responsible for:

  • Gemini API interaction: Securely communicating with the Google Gemini API, sending user prompts, and receiving model responses.

  • Prompt engineering: Constructing effective prompts for the Gemini model, potentially incorporating conversation history, tool definitions, and instructional context from GEMINI.md files.

  • Tool management & orchestration (see the sketch after this list):

    • Registering available tools (e.g., file system tools, shell command execution).

    • Interpreting tool use requests from the Gemini model.

    • Executing the requested tools with the provided arguments.

    • Returning tool execution results to the Gemini model for further processing.

  • Session and state management: Keeping track of the conversation state, including history and any relevant context required for coherent interactions.

  • Configuration: Managing core-specific configurations, such as API key access, model selection, and tool settings.
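
The interaction between the API, prompt construction, and tool orchestration is easiest to picture as a loop: send the prompt, execute any tools the model requests, return the results, and repeat. The sketch below illustrates only that loop; the Tool, ModelTurn, and ModelClient shapes and the sendMessage/functionCalls names are hypothetical stand-ins, not the actual packages/core API.

    // Hypothetical shapes; the real packages/core types differ.
    interface Tool {
      name: string;
      execute(args: Record<string, unknown>): Promise<string>;
    }

    interface ModelTurn {
      text?: string;
      functionCalls?: { name: string; args: Record<string, unknown> }[];
    }

    interface ModelClient {
      sendMessage(parts: unknown[]): Promise<ModelTurn>;
    }

    // Core loop: send the prompt, run any requested tools, feed the
    // results back, and repeat until the model answers in plain text.
    async function runTurn(
      client: ModelClient,
      tools: Map<string, Tool>,
      userPrompt: string,
    ): Promise<string> {
      let turn = await client.sendMessage([{ text: userPrompt }]);

      while (turn.functionCalls?.length) {
        const responses: unknown[] = [];
        for (const call of turn.functionCalls) {
          const tool = tools.get(call.name);
          const output = tool
            ? await tool.execute(call.args)
            : `Unknown tool requested: ${call.name}`;
          responses.push({ functionResponse: { name: call.name, response: { output } } });
        }
        // Return tool results to the model for further processing.
        turn = await client.sendMessage(responses);
      }

      return turn.text ?? '';
    }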

Security considerations

The core plays a vital role in security:

  • API key management: It handles the GEMINI_API_KEY and ensures it's used securely when communicating with the Gemini API (see the sketch after this list).

  • Tool execution: When tools interact with the local system (e.g., run_shell_command), the core (and its underlying tool implementations) must do so with appropriate caution, often involving sandboxing mechanisms to prevent unintended modifications.
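
As an illustration of these two points, a backend of this kind typically reads the key from the environment rather than from source code, and gates shell-style tools behind an explicit policy check. The sketch below is a hedged example only; requireApiKey, isCommandAllowed, and the allowlist are invented for illustration and are not Gemini CLI APIs.

    // Sketch: read the API key from the environment; never hard-code it.
    function requireApiKey(): string {
      const key = process.env['GEMINI_API_KEY'];
      if (!key) {
        throw new Error('GEMINI_API_KEY is not set; refusing to start.');
      }
      return key;
    }

    // Sketch: gate shell-style tools behind an explicit policy check.
    // This allowlist is an arbitrary example, not the CLI's real policy.
    const ALLOWED_COMMANDS = new Set(['ls', 'cat', 'git']);

    function isCommandAllowed(command: string): boolean {
      const executable = command.trim().split(/\s+/)[0] ?? '';
      return ALLOWED_COMMANDS.has(executable);
    }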

Chat history compression

To ensure that long conversations don't exceed the token limits of the Gemini model, the core includes a chat history compression feature.

When a conversation approaches the token limit for the configured model, the core automatically compresses the conversation history before sending it to the model. This compression is designed to be lossless in terms of the information conveyed, but it reduces the overall number of tokens used.

You can find the token limits for each model in the Google AI documentation.
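
The trigger can be pictured as a threshold check performed before each request. The following sketch is illustrative only: the Message shape, the countTokens and summarize helpers, the 90% threshold, and the number of turns kept verbatim are all assumptions rather than the real packages/core behavior.

    interface Message {
      role: 'user' | 'model';
      text: string;
    }

    // Hypothetical helpers: a token estimator and a summarizer (in practice
    // the model itself can be asked to produce the summary).
    declare function countTokens(history: Message[]): Promise<number>;
    declare function summarize(history: Message[]): Promise<string>;

    const COMPRESSION_THRESHOLD = 0.9; // example value: compress at 90% of the limit
    const KEEP_RECENT_TURNS = 4;       // example value: keep the last few turns verbatim

    async function maybeCompress(history: Message[], tokenLimit: number): Promise<Message[]> {
      const used = await countTokens(history);
      if (used < tokenLimit * COMPRESSION_THRESHOLD) {
        return history; // still within budget; send as-is
      }
      // Replace older turns with a summary while keeping the newest turns intact.
      const recent = history.slice(-KEEP_RECENT_TURNS);
      const summary = await summarize(history.slice(0, -KEEP_RECENT_TURNS));
      return [{ role: 'user', text: `Summary of earlier conversation: ${summary}` }, ...recent];
    }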

Model fallback

Gemini CLI includes a model fallback mechanism to ensure that you can continue to use the CLI even if the default "pro" model is rate-limited.

If you are using the default "pro" model and the CLI detects that you are being rate-limited, it automatically switches to the "flash" model for the current session. This allows you to continue working without interruption.
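
Conceptually, the fallback is a catch-and-retry wrapped around the request. The sketch below is a simplified illustration; the Session shape, the isRateLimitError helper, and the model identifiers are assumptions, not the CLI's actual code.

    interface Session {
      model: string;
      send(prompt: string): Promise<string>;
    }

    // Placeholder identifiers; substitute the model names actually in use.
    const PRO_MODEL = 'pro-model-id';
    const FLASH_MODEL = 'flash-model-id';

    // Hypothetical check: real code would inspect the API error (e.g. HTTP 429).
    function isRateLimitError(error: unknown): boolean {
      return error instanceof Error && error.message.includes('429');
    }

    async function sendWithFallback(session: Session, prompt: string): Promise<string> {
      try {
        return await session.send(prompt);
      } catch (error) {
        if (session.model === PRO_MODEL && isRateLimitError(error)) {
          session.model = FLASH_MODEL; // fall back for the rest of the session
          return session.send(prompt);
        }
        throw error;
      }
    }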

File discovery service

The file discovery service is responsible for finding files in the project that are relevant to the current context. It is used by the @ command and other tools that need to access files.
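
As a rough mental model, file discovery amounts to walking the project tree while skipping directories that make poor context. The sketch below is hypothetical: the skip list is an arbitrary example, and a real implementation would typically also honor ignore rules such as those in .gitignore.

    import * as fs from 'node:fs';
    import * as path from 'node:path';

    // Example skip list only; a real implementation would typically also
    // honor ignore rules (e.g. .gitignore) instead of a hard-coded set.
    const SKIP_DIRS = new Set(['node_modules', '.git', 'dist']);

    function listProjectFiles(root: string): string[] {
      const results: string[] = [];
      for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
        const fullPath = path.join(root, entry.name);
        if (entry.isDirectory()) {
          if (!SKIP_DIRS.has(entry.name)) {
            results.push(...listProjectFiles(fullPath)); // recurse into subdirectories
          }
        } else if (entry.isFile()) {
          results.push(fullPath);
        }
      }
      return results;
    }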

Memory discovery service

The memory discovery service is responsible for finding and loading the GEMINI.md files that provide context to the model. It searches for these files in a hierarchical manner, starting from the current working directory and moving up to the project root and the user's home directory. It also searches in subdirectories.

This allows you to have global, project-level, and component-level context files, which are all combined to provide the model with the most relevant information.
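
The upward part of that hierarchical search can be pictured as walking from the current directory toward the filesystem root and collecting every GEMINI.md along the way, plus a user-level file. The sketch below is a hedged illustration only (it omits the downward search into subdirectories), and the ~/.gemini/GEMINI.md location and the ordering shown are assumptions rather than the real implementation.

    import * as fs from 'node:fs';
    import * as os from 'node:os';
    import * as path from 'node:path';

    // Collect GEMINI.md files from the starting directory up to the
    // filesystem root, plus an assumed user-level file in the home
    // directory. Ordering and de-duplication here are illustrative.
    function findContextFiles(startDir: string): string[] {
      const found: string[] = [];
      let dir = path.resolve(startDir);

      while (true) {
        const candidate = path.join(dir, 'GEMINI.md');
        if (fs.existsSync(candidate)) {
          found.push(candidate);
        }
        const parent = path.dirname(dir);
        if (parent === dir) break; // reached the filesystem root
        dir = parent;
      }

      // Assumed location of the user-level context file.
      const userLevel = path.join(os.homedir(), '.gemini', 'GEMINI.md');
      if (fs.existsSync(userLevel) && !found.includes(userLevel)) {
        found.push(userLevel);
      }
      return found;
    }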

You can use the /memory command to show, add, and refresh the content of loaded GEMINI.md files.
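
For example, inside an interactive session (the exact argument syntax may vary between versions):

    /memory show       (print the combined context that is currently loaded)
    /memory add <text to remember>
    /memory refresh    (re-scan and reload GEMINI.md files)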