refactor(dashboard): unify AI chat surfaces on assistant-ui Thread #1427

mantrakp04 wants to merge 1 commit into
Conversation
Replaces the bespoke `ai-chat-shared` chat UI used by ask-ai, the stack companion widget, vibe coding chat, and the create-dashboard preview with the shared `assistant-ui` `Thread` component. Extracts the streaming request/format helpers into a new `chat-stream` module and the tool-call UI into a reusable `ToolFallback`.
Greptile Summary

This PR unifies four separate AI chat surfaces (ask-ai, stack-companion widget, vibe-coding chat, create-dashboard preview) onto the shared `assistant-ui` `Thread` component.
Confidence Score: 3/5

Safe to merge for most surfaces, but the analytics chat transport captures `currentUser` at creation time while a code comment promises per-request freshness. The analytics transport passes `currentUser` as a plain value and omits it from the `useMemo` deps, with a comment claiming liveness; that claim only holds when `currentUser` is a getter function. The dashboard-preview sibling uses the getter pattern correctly. A session-token refresh mid-session would leave the analytics surface sending stale auth credentials. `use-ai-query-chat.ts` needs the ref+getter pattern for `currentUser`, and `chat-stream.ts` should log dropped SSE parse failures.
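The ref+getter fix the review calls for can be sketched framework-free as follows. Everything here is illustrative: `makeTransportFactory`, `CurrentUser`, and the `send` shape are hypothetical stand-ins for the real `use-ai-query-chat.ts` code, shown only to demonstrate why closing over a getter (rather than a value) keeps credentials fresh.

```typescript
// Hypothetical sketch of the ref+getter pattern; names are illustrative,
// not the real module's API.
type CurrentUser = { id: string, accessToken: string };

function makeTransportFactory() {
  // Plays the role of a React ref: mutated on every render, read at request time.
  let latestUser: CurrentUser | null = null;
  return {
    // Called on every render with the fresh user (like `ref.current = currentUser`).
    updateUser(user: CurrentUser) { latestUser = user; },
    // The transport closes over a getter, not a value, so a token refresh
    // mid-session is picked up on the next request.
    createTransport() {
      const getUser = () => latestUser;
      return {
        send(body: unknown) {
          const user = getUser(); // resolved at request time, not creation time
          return { body, authToken: user?.accessToken ?? null };
        },
      };
    },
  };
}
```

With a plain value instead of `getUser`, the second `send` below would still carry the stale token; the getter avoids that without invalidating the memoized transport.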
Sequence Diagram

```mermaid
sequenceDiagram
    participant Surface as UI Surface
    participant Thread as Thread component
    participant Adapter as ChatModelAdapter
    participant ChatStream as chat-stream.ts
    participant Backend as /api/latest/ai/query/stream
    Surface->>Thread: render(welcome, assistantContentComponents)
    Thread->>Adapter: "run({ messages, abortSignal })"
    Adapter->>ChatStream: formatThreadMessagesForBackend(messages)
    Adapter->>ChatStream: sendAiStreamRequest(baseUrl, user, body)
    ChatStream->>Backend: POST JSON + auth headers
    Backend-->>ChatStream: SSE UIMessageChunk stream
    ChatStream-->>Adapter: ReadableStream of UIMessageChunk
    Adapter->>ChatStream: readUIMessageStream + uiPartsToChatContent
    Adapter-->>Thread: "yield { content: ChatContent }"
    Thread-->>Surface: AssistantMessage via ToolFallback / MarkdownText
```
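The adapter loop in the diagram can be sketched roughly as below. The helper name `formatThreadMessagesForBackend` comes from the PR, but its signature, the chunk shape, and the streaming plumbing here are simplified assumptions; the real adapter would POST the body via `sendAiStreamRequest` and read the SSE stream.

```typescript
// Simplified stand-ins for the real types in the PR.
type UIMessageChunk = { type: string, text?: string };
type ChatContent = { parts: string[] };

// Stand-in for the chat-stream helper of the same name.
function formatThreadMessagesForBackend(messages: { role: string, text: string }[]) {
  return messages.map((m) => ({ role: m.role, content: m.text }));
}

async function* runAdapter(
  messages: { role: string, text: string }[],
  stream: AsyncIterable<UIMessageChunk>,
): AsyncGenerator<{ content: ChatContent }> {
  const body = formatThreadMessagesForBackend(messages); // request payload
  void body; // in the real module this is POSTed by sendAiStreamRequest
  const parts: string[] = [];
  for await (const chunk of stream) {
    if (chunk.type === "text-delta" && chunk.text) parts.push(chunk.text);
    // Yield the accumulated content after every chunk so the Thread
    // component can render incrementally.
    yield { content: { parts: [...parts] } };
  }
}
```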
```typescript
    }
  }
  return result;
}
```

```typescript
export type WireMessage = { role: string, content: unknown };
```
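A flattening step producing this `WireMessage` shape might look like the sketch below. The `Part` union and `toWireMessage` helper are simplified assumptions for illustration; only the `toolName ?? "tool"` defaulting is taken directly from the diff.

```typescript
// Re-declared locally so the sketch is self-contained.
type WireMessage = { role: string, content: unknown };

// Simplified assumption about the message-part union; the real one is richer.
type Part =
  | { type: "text", text: string }
  | { type: "tool-call", toolCallId: string, toolName?: string };

function toWireMessage(role: string, parts: Part[]): WireMessage {
  const content = parts.map((raw) =>
    raw.type === "text"
      ? { type: "text", text: raw.text }
      // Mirror the diff's defaulting: unnamed tools fall back to "tool".
      : { type: "tool-call", toolCallId: raw.toolCallId, toolName: raw.toolName ?? "tool" },
  );
  return { role, content };
}
```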
```typescript
/**
 * `DefaultChatTransport` configured for the unified `/api/latest/ai/query/stream`
 * endpoint. Shared by `useChat`-style callers (analytics, create-dashboard).
 * `transformMessages` runs after `convertToModelMessages` and can prepend
 * extra context messages.
 */
export function createUnifiedAiTransport(opts: {
  backendBaseUrl: string,
  /** Either a value (closed at creation) or a getter called at request time for liveness. */
```
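Resolving that "value or getter" option at request time might look like the sketch below. The option semantics are from the diff's doc comment; the `resolveUser` helper and `User` type are illustrative, not the real module's code.

```typescript
type User = { accessToken: string };
// Mirrors the doc comment: a plain value, or a getter for per-request liveness.
type UserOrGetter = User | (() => User);

function resolveUser(currentUser: UserOrGetter): User {
  // A getter is invoked on each request, so callers passing a function get
  // fresh credentials; a plain value stays whatever it was at creation time.
  return typeof currentUser === "function" ? currentUser() : currentUser;
}
```

This is exactly the distinction the review flags: the analytics surface passes a plain value (frozen at creation), while the dashboard-preview surface passes a getter.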
**Silent drop of parse failures hides backend protocol errors**

When `parseJsonEventStream` fails to parse an SSE chunk (`parseResult.success === false`), the chunk is silently discarded with no logging of `parseResult.error` or `parseResult.rawValue`. In production, malformed backend responses will cause the stream to appear to end cleanly with no observable signal. A `captureError` call, or at minimum a `console.warn` on the failure branch, would make these failures visible.
Path: apps/dashboard/src/components/assistant-ui/chat-stream.ts
Line: 140-155
```typescript
result.push({
  type: "tool-call",
  toolCallId: raw.toolCallId,
  toolName: raw.toolName ?? "tool",
}
```
```typescript
runAsynchronouslyWithAlert(doSave(messagesToSave, title));
const persist = useCallback((priorMessages: readonly ThreadMessage[], finalAssistantContent: ThreadAssistantContentPart[]) => {
```

Suggested change:

```diff
- const persist = useCallback((priorMessages: readonly ThreadMessage[], finalAssistantContent: ThreadAssistantContentPart[]) => {
+ const persist = useCallback((priorMessages: readonly ThreadMessageLike[], finalAssistantContent: ThreadAssistantContentPart[]) => {
```

Type mismatch in the `persist()` callback: it expects `ThreadMessage[]` but receives `ThreadMessageLike[]` from `ChatModelRunOptions`.
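The direction of the suggested fix can be shown with a toy example: a callback typed against the wider "Like" shape accepts full messages too, while the reverse assignment fails. The two types below are simplified stand-ins, not assistant-ui's actual definitions.

```typescript
// Simplified stand-ins for the assistant-ui types named in the suggestion.
type ThreadMessageLike = { role: string, content: unknown };
type ThreadMessage = ThreadMessageLike & { id: string, createdAt: Date };

// Typed against the wider input shape, as the suggestion proposes.
function persist(priorMessages: readonly ThreadMessageLike[]): number {
  return priorMessages.length;
}

const full: ThreadMessage[] = [
  { id: "m1", createdAt: new Date(), role: "user", content: "hi" },
];
// A ThreadMessage[] is assignable to readonly ThreadMessageLike[], so both
// the run-options messages and lighter-weight values typecheck.
const saved = persist(full);
```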
Summary

- Replace the bespoke `ai-chat-shared` chat UI (used by ask-ai, the stack companion widget, vibe coding chat, and the create-dashboard preview) with the shared `assistant-ui` `Thread` component.
- Extract the streaming request/format helpers into the `components/assistant-ui/chat-stream.ts` module so each surface only owns its `ChatModelAdapter`.
- Add a reusable `ToolFallback` for tool-call rendering and delete the now-unused `ai-chat-shared.tsx` (-1386 / +747 lines net).

Stacked on top of `refactor/data-grid-and-dashboard-surfaces`.

Test plan

- `pnpm lint`
- `pnpm typecheck`