Agent Mode#201

Open
vkarpov15 wants to merge 9 commits into main from vkarpov15/agents-wip

Conversation

@vkarpov15
Member

No description provided.

Copilot AI review requested due to automatic review settings March 30, 2026 18:06
@vercel

vercel bot commented Mar 30, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: studio · Deployment: Ready · Actions: Preview, Comment · Updated (UTC): Mar 30, 2026 9:23pm


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c84fa59791

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines 62 to 63
  for await (const textPart of textStream) {
    yield { textPart };

P1: Handle non-text stream events before yielding textPart

agentMode defaults to true and line 59 passes tools into streamLLM(), but this loop treats every streamed chunk as plain text; when the model emits a tool event, it gets forwarded as textPart: { ... } instead of a structured tool event. In the create-document flow this is concatenated into the suggestion string, producing invalid output like [object Object] whenever a tool is invoked. Please branch on event type here (as done in ChatThread streaming) or disable tools for this endpoint.

Useful? React with 👍 / 👎.
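The branching Codex asks for might look like the following sketch. It assumes streamLLM() yields either plain strings (text deltas) or objects carrying a `type` field for tool events; that event shape is an assumption here, not the confirmed fullStream contract, so adapt it to what streamLLM() actually emits.

```javascript
// Hedged sketch: wrap the raw LLM stream and branch on event shape before
// yielding, instead of treating every chunk as text. The `type` field values
// and the wrapper name are illustrative assumptions.
async function* streamWithToolEvents(llmStream) {
  for await (const event of llmStream) {
    if (typeof event === 'string') {
      // Plain text delta: keep the existing { textPart } shape.
      yield { textPart: event };
    } else if (event.type === 'tool-call' || event.type === 'tool-result') {
      // Forward structured tool events unchanged so the UI can render status.
      yield event;
    }
    // Any other event type is dropped for this endpoint.
  }
}
```

ChatThread streaming reportedly already branches this way; the other option the review offers is simply disabling tools for this endpoint.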

Comment on lines +66 to 70
  return client.post('', { action: 'ChatThread.toggleAgentMode', ...params }).then(res => res.data);
},
streamChatMessage: async function* streamChatMessage(params) {
  // Don't stream on Next.js or Netlify for now.
  const data = await client.post('', { action: 'ChatThread.createChatMessage', ...params }).then(res => res.data);

P2: Avoid exposing agent toggle in Lambda path without wiring

In the Lambda API branch, this commit adds toggleAgentMode, but chat requests still go through ChatThread.createChatMessage (line 70), which does not use chatThread.agentMode or tool-enabled prompting. So the new toggle appears to work but has no effect in Next.js/Netlify deployments. Either route Lambda chat through the agent-aware path or hide/disable this toggle for Lambda mode.

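One way to keep the toggle honest, sketched under the assumption that the client knows its deployment flavor; the `deploymentMode` value, the `'lambda'` string, and the helper name are all hypothetical, not taken from the PR.

```javascript
// Hypothetical helper: only show/enable the Agent Mode toggle when chat
// actually routes through the agent-aware streaming path.
function canToggleAgentMode(deploymentMode) {
  // Next.js/Netlify deployments go through ChatThread.createChatMessage,
  // which ignores agentMode, so the toggle would be a no-op there.
  return deploymentMode !== 'lambda';
}
```

The alternative the review mentions is routing Lambda chat through the agent-aware path instead of hiding the toggle.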

Contributor

Copilot AI left a comment


Pull request overview

Adds an “Agent Mode” to chat threads that enables the LLM to call server-side tools (MongoDB exploration + script type-checking), streams tool-call status to the UI, and persists agent-mode + tool-call metadata.

Changes:

  • Add agentMode to chat threads, plus a new API/action to toggle it from the frontend.
  • Extend LLM streaming to emit tool-call/tool-result events and display tool activity in the chat UI.
  • Introduce server-side agent tools (estimatedDocumentCount, find, findOne, typeCheck) and persist tool-call history on assistant messages.

Reviewed changes

Copilot reviewed 16 out of 16 changed files in this pull request and generated 7 comments.

Summary per file:

  • package.json: Adds runtime deps needed for agent tooling (notably TypeScript).
  • frontend/src/chat/chat.js: Adds tool-call handling during streaming, an agent-mode toggle method, and a scroll helper.
  • frontend/src/chat/chat.html: Adds the Agent Mode toggle button and minor layout/icon tweaks.
  • frontend/src/chat/chat-message/chat-message.js: Adds tool input formatting for tool-call display.
  • frontend/src/chat/chat-message/chat-message.html: Renders the tool-call status list above assistant messages; adjusts message width.
  • frontend/src/chat/chat-message-script/chat-message-script.html: Minor styling adjustments to script editor UI controls.
  • frontend/src/api.js: Adds the ChatThread.toggleAgentMode() client API method.
  • backend/integrations/streamLLM.js: Switches to fullStream and emits text/tool-call/tool-result events.
  • backend/helpers/getAgentTools.js: New agent tool definitions (MongoDB exploration + script type-check).
  • backend/db/chatThreadSchema.js: Persists agentMode on chat threads.
  • backend/db/chatMessageSchema.js: Persists toolCalls metadata on chat messages.
  • backend/authorize.js: Adds authorization mapping for ChatThread.toggleAgentMode.
  • backend/actions/Model/streamChatMessage.js: Enables agent tools for model chat streaming (currently impacts the streaming contract).
  • backend/actions/ChatThread/toggleAgentMode.js: New action to toggle agentMode for a thread.
  • backend/actions/ChatThread/streamChatMessage.js: Uses the agent system prompt, emits tool-call/tool-result events, and stores toolCalls.
  • backend/actions/ChatThread/index.js: Exports the new toggleAgentMode action.
Comments suppressed due to low confidence (1)

backend/actions/Model/streamChatMessage.js:64

  • Model.streamChatMessage now passes through the output of streamLLM(), which can emit tool-call/tool-result events (objects). The current loop wraps every event as { textPart: event }, so tool-call objects will be sent as textPart and break consumers that expect textPart to be a string (e.g. create-document AI streaming). Update the loop to detect typeof event === 'string' and yield { textPart: event }, otherwise yield the tool event as-is or ignore it for this endpoint.
  const llmMessages = [{ role: 'user', content: [{ type: 'text', text: content }] }];
  const llmOptions = agentMode ? { ...options, tools: getAgentTools(db) } : options;
  const textStream = streamLLM(llmMessages, system, llmOptions);

  for await (const textPart of textStream) {
    yield { textPart };
  }
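The typeof check described in the comment could look like this sketch, which takes the "ignore tool events for this endpoint" option; the generator name is illustrative, and the loop body stands in for the excerpt above.

```javascript
// Sketch of the suggested fix for Model.streamChatMessage: only forward
// string chunks as textPart; drop non-string (tool) events so downstream
// consumers that expect textPart to be a string keep working.
async function* textPartsOnly(llmStream) {
  for await (const event of llmStream) {
    if (typeof event === 'string') {
      yield { textPart: event };
    }
    // Object events (tool-call / tool-result) are intentionally ignored here.
  }
}
```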


  } else {
    const assistantChatMessageIndex = this.chatMessages.indexOf(assistantChatMessage);
    assistantChatMessage = event.chatMessage;
    assistantChatMessage.toolCalls = toolCalls;

Copilot AI Mar 30, 2026


When the final assistant chatMessage arrives, the code overwrites event.chatMessage.toolCalls with the local toolCalls array (assistantChatMessage.toolCalls = toolCalls). Since the backend persists and returns toolCalls on the assistant message, this overwrite is redundant and can produce stale UI if the client missed a tool-result event. Prefer the server-provided toolCalls (or only fall back to local state when the field is absent).

Suggested change:

  - assistantChatMessage.toolCalls = toolCalls;
  + // Prefer server-provided toolCalls; fall back to local state if absent.
  + if (assistantChatMessage.toolCalls && assistantChatMessage.toolCalls.length) {
  +   toolCalls = assistantChatMessage.toolCalls;
  + } else if (toolCalls.length) {
  +   assistantChatMessage.toolCalls = toolCalls;
  + }

Copilot uses AI. Check for mistakes.
  if (tc) {
    tc.status = 'done';
  }
  yield { toolResult: event.toolResult };

Copilot AI Mar 30, 2026


toolResult events currently get yielded to the client with the full output payload. Since the UI only uses tool name/status, sending tool outputs increases bandwidth and allows users to access sampled document data via the network inspector even though it's not rendered. Consider stripping output before yielding to the client (or gate it behind a debug flag) and only persist/display a status update.

Suggested change:

  - yield { toolResult: event.toolResult };
  + const sanitizedToolResult = {
  +   toolName: event.toolResult.toolName,
  +   status: tc ? tc.status : 'done'
  + };
  + yield { toolResult: sanitizedToolResult };

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>