feat(primer-api): add AI-powered Primer bot for Slack #7713
hectahertz wants to merge 2 commits into main
Conversation
Pull request overview
Adds a new internal @primer/api workspace that powers an AI assistant workflow for the #primer Slack channel, using GitHub Actions + GitHub Models/OpenAI and primer.style docs retrieval.
Changes:
- Introduces `packages/primer-api` with a Slack-posting Action entry point, local HTTP API, prompt templates, and primer.style doc retrieval.
- Adds a `Primer Bot` GitHub Actions workflow triggered by `repository_dispatch` and `workflow_dispatch`.
- Updates `package-lock.json` to include the new workspace and its dependencies.
Reviewed changes
Copilot reviewed 11 out of 12 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| packages/primer-api/tsconfig.json | TypeScript build configuration for the new workspace. |
| packages/primer-api/src/prompts.ts | Defines system/user prompt construction for Slack-oriented responses. |
| packages/primer-api/src/llm.ts | OpenAI-compatible chat completion wrapper integrating retrieved context. |
| packages/primer-api/src/knowledge.ts | Component matching + primer.style doc fetching and prompt-context formatting. |
| packages/primer-api/src/index.ts | Local/dev HTTP server exposing /ask and /health. |
| packages/primer-api/src/config.ts | Env-based configuration (GitHub Models vs OpenAI, ports, optional auth). |
| packages/primer-api/src/action.ts | GitHub Action entry point: read event payload, call LLM, post to Slack thread. |
| packages/primer-api/package.json | Declares the new private workspace package and scripts/deps. |
| packages/primer-api/README.md | Setup and usage docs for Slack workflow + local testing. |
| packages/primer-api/.env.example | Example env vars for local runs. |
| .github/workflows/primer-bot.yml | Workflow wiring for dispatch triggers and running the bot. |
| package-lock.json | Lockfile updates for new workspace + dependencies. |
```ts
const server = createServer(async (req, res) => {
  // CORS headers
  res.setHeader('Access-Control-Allow-Origin', '*')
  res.setHeader('Access-Control-Allow-Methods', 'POST, OPTIONS')
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization')
```
The server sets Access-Control-Allow-Origin: * and listens on all interfaces by default (server.listen(config.port)). If someone runs this outside localhost, it becomes a publicly callable LLM proxy (and potentially exposes Slack posting if configured). Consider binding to 127.0.0.1 by default and/or making CORS opt-in / restricted to known origins.
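One way to tighten this is an origin allowlist plus a loopback bind, sketched below. The allowlist contents and the helper name are illustrative, not from the PR:

```typescript
// Hypothetical allowlist of origins permitted to call the dev server.
const ALLOWED_ORIGINS = new Set(['http://localhost:3000'])

// Echo the Origin header back only when it is allowlisted; otherwise
// return null so the handler sends no Access-Control-Allow-Origin at all.
function corsOriginFor(origin: string | undefined): string | null {
  return origin !== undefined && ALLOWED_ORIGINS.has(origin) ? origin : null
}

// In the request handler:
//   const allowed = corsOriginFor(req.headers.origin)
//   if (allowed) res.setHeader('Access-Control-Allow-Origin', allowed)
//
// And bind to loopback by default instead of all interfaces:
//   server.listen(config.port, '127.0.0.1')
```

With this shape, a browser on an unknown origin gets no CORS grant, and non-local clients cannot reach the socket at all unless the operator opts in.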
```ts
export function formatContext(ctx: RetrievedContext): string {
  const sections: string[] = []

  sections.push(`Available Primer React components: ${ctx.componentList}`)
```
formatContext always injects a comma-joined list of all Primer React components into the prompt. This can add a lot of tokens/cost and may push the prompt toward context window limits without improving answer quality. Consider omitting it, truncating it, or only including it when no relevant component match is found.
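A minimal truncation helper could look like this; the 40-component cap and the function name are assumptions for illustration:

```typescript
// Cap how many component names get injected into the prompt context,
// noting how many were omitted so the model knows the list is partial.
function truncateComponentList(components: string[], max = 40): string {
  if (components.length <= max) return components.join(', ')
  const shown = components.slice(0, max).join(', ')
  return `${shown}, … and ${components.length - max} more`
}
```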
```ts
  return `## Context from Primer documentation

${context}

## Question
```
buildUserPrompt uses Markdown headers (## ...). Since the system prompt explicitly forbids Slack-style headers (#) in the assistant output, including header syntax in the user message/context can increase the chance the model mirrors that formatting. Consider switching to plain-text delimiters (e.g. Context: / Question:) instead of ## headings.
```diff
-  return `## Context from Primer documentation
-${context}
-## Question
+  return `Context from Primer documentation:
+${context}
+Question:
```
```ts
export async function ask(question: string, config: Config): Promise<AskResult> {
  const openai = getClient(config)

  // Retrieve relevant context from MCP data layer
```
The inline comment says context is retrieved from an "MCP data layer", but retrieveContext in this package fetches primer.style docs and uses @primer/react/generated/components.json. Update the comment to match the actual implementation so future maintenance/debugging isn’t misleading.
```diff
-  // Retrieve relevant context from MCP data layer
+  // Retrieve relevant Primer docs and component metadata for the question
```
```ts
 *
 * Triggered by repository_dispatch with event_type 'primer-bot-question'.
 * Reads the question from the payload, generates an answer using the LLM,
 * and posts it back to Slack via incoming webhook.
```
The file header comment mentions posting to Slack via an "incoming webhook", but the implementation uses chat.postMessage with a bot token. Please update the comment to reflect the actual Slack API being used.
```diff
- * and posts it back to Slack via incoming webhook.
+ * and posts it back to Slack using the Slack Web API (chat.postMessage) with a bot token.
```
```yaml
        description: 'Question to ask the Primer bot'
        required: true

permissions: {}
```
permissions: {} at the workflow level will leave GITHUB_TOKEN with no permissions, which causes actions/checkout to fail (it needs contents: read). Add at least permissions: contents: read (either at the workflow level or job level).
```diff
-permissions: {}
+permissions:
+  contents: read
```
```ts
function readBody(req: IncomingMessage): Promise<string> {
  return new Promise((resolve, reject) => {
    const chunks: Uint8Array[] = []
    req.on('data', (chunk: Uint8Array) => chunks.push(chunk))
    req.on('end', () => resolve(Buffer.concat(chunks).toString()))
    req.on('error', reject)
  })
```
readBody buffers the entire request body with no size limit. If this server is run anywhere beyond local dev, a large request can cause memory pressure/DoS before the later question.length check. Add a maximum body size (e.g. stop reading after N bytes and return 413).
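A bounded variant could look like the sketch below. The 64 KiB cap is an arbitrary assumption, and the param is typed against a minimal structural interface (which `http.IncomingMessage` satisfies) purely to keep the sketch self-contained:

```typescript
// Minimal stream shape; node:http's IncomingMessage satisfies it.
interface BodyStream {
  on(event: string, cb: (arg?: any) => void): void
  destroy(): void
}

const MAX_BODY_BYTES = 64 * 1024 // assumed cap; tune to the largest expected question

function readBody(req: BodyStream, limit = MAX_BODY_BYTES): Promise<string> {
  return new Promise((resolve, reject) => {
    const chunks: Uint8Array[] = []
    let received = 0
    let done = false
    req.on('data', (chunk: Uint8Array) => {
      if (done) return
      received += chunk.length
      if (received > limit) {
        done = true
        req.destroy() // stop reading further data
        reject(new Error('Payload Too Large')) // caller maps this to HTTP 413
        return
      }
      chunks.push(chunk)
    })
    req.on('end', () => { if (!done) resolve(Buffer.concat(chunks).toString()) })
    req.on('error', (err) => { if (!done) { done = true; reject(err) } })
  })
}
```

The handler then answers 413 when the promise rejects with the size error, before the body ever reaches the later `question.length` check.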
🤖 Lint issues have been automatically fixed and committed to this PR.
Closes #
This adds an AI-powered assistant for the #primer Slack channel. It supplements the existing Moveworks-based Primer Bot (which does static FAQ matching on @mentions) by providing dynamic, context-aware answers powered by GitHub Models (GPT-4o).
The architecture avoids hosting anything. A Slack Workflow triggers a GitHub Action via `repository_dispatch`, which fetches relevant docs from primer.style, sends them to GPT-4o with the question, and posts the answer back to Slack as a thread reply.

How it compares with the existing Primer Bot:
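For illustration, the dispatch body the Slack Workflow's webhook step would POST to GitHub's `/repos/OWNER/REPO/dispatches` endpoint could be built like this. The `event_type` matches the workflow trigger named in this PR; the field names inside `client_payload` are assumptions:

```typescript
// Build the repository_dispatch request body (client_payload keys are assumed,
// not taken from the PR; event_type matches the 'primer-bot-question' trigger).
function buildDispatchBody(question: string, channel: string, threadTs: string): string {
  return JSON.stringify({
    event_type: 'primer-bot-question',
    client_payload: { question, channel, thread_ts: threadTs },
  })
}
```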
Example invocations
Local CLI testing (no Slack needed):
Expected output:
Manual dispatch via GitHub UI:
HTTP server for local dev:
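A local request might look like the sketch below; the port and the request shape are assumptions (check `.env.example` for the actual defaults), only the `/ask` route is confirmed by the PR:

```typescript
// Hypothetical client for the local dev server's /ask endpoint.
async function askLocal(question: string, port = 3000): Promise<unknown> {
  const res = await fetch(`http://localhost:${port}/ask`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question }),
  })
  if (!res.ok) throw new Error(`HTTP ${res.status}`)
  return res.json()
}
```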
Expected response:
```json
{
  "answer": "`Banner` is the newer, recommended component for displaying important messages...",
  "model": "gpt-4o-2024-11-20",
  "componentsMentioned": ["Banner", "Flash"]
}
```

Full Slack flow (after setup):
`repository_dispatch` to GitHub

Changelog
New

- `packages/primer-api/` - New package with the Primer bot logic (knowledge retrieval, LLM integration, Slack posting)
- `.github/workflows/primer-bot.yml` - GitHub Action workflow triggered by `repository_dispatch` or `workflow_dispatch`

Changed
N/A
Removed
N/A
Rollout strategy
- Marked `"private": true`, not published to npm

Testing & Reviewing
To test locally:
To test the HTTP server:
TypeScript:
Setup for full Slack integration (not required for review)
Needs three GitHub Actions secrets:
- `MODELS_TOKEN` - GitHub PAT with `models:read` scope
- `SLACK_BOT_TOKEN` - Slack bot token with `chat:write` scope

Then a Slack Workflow that triggers on :robot_face: reaction and sends a `repository_dispatch` webhook. Full setup instructions in packages/primer-api/README.md.

Merge checklist