Frequently Asked Questions

Common questions about Workflow DevKit covering troubleshooting, migration, compatibility, and advanced usage.

Getting unstuck

Why does my stream stop when the user refreshes the page?

With standard streaming, the response is tied to a single HTTP connection. When the page reloads, the connection closes and the response is lost. Use WorkflowChatTransport to make the stream durable. The workflow keeps running on the server, and the client reconnects to the same run using the runId. See Resumable Streams.

Why is my workflow step failing silently?

Open the Workflow Web UI locally and inspect the step execution trace:

npx workflow inspect runs --web

The step debugger shows the state of each step, including failures that do not surface in console logs. Click into a run to see the full step trace, retry attempts, and error details. See Observability.

Why does WorkflowChatTransport ignore my custom body fields?

WorkflowChatTransport shapes its POST body differently from the default AI SDK transport. To add custom fields, use the prepareSendMessagesRequest hook:

new WorkflowChatTransport({
  prepareSendMessagesRequest: async (config) => ({
    ...config,
    body: JSON.stringify({
      ...JSON.parse(config.body as string),
      customField: "value",
    }),
  }),
})

See the WorkflowChatTransport API Reference for all options.

Why does streaming need to live inside a step?

Workflow functions must be deterministic to support replay. Since streams bypass the event log for performance, reading stream data in a workflow function would break determinism. By requiring all stream operations to happen in steps, the framework ensures consistent behavior across replays. See Streaming.
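The replay requirement can be illustrated with a toy memoized step runner. This is a sketch of the concept only, not the DevKit's engine: step results are recorded in an event log, and a replay reads the recorded result instead of re-executing. Anything read outside a step (such as raw stream chunks) would not be in the log and would differ between runs.

```typescript
// Toy model of deterministic replay (not the real Workflow engine).
type EventLog = Map<string, unknown>;

function runStep<T>(log: EventLog, key: string, fn: () => T): T {
  if (log.has(key)) return log.get(key) as T; // replay: reuse recorded result
  const result = fn(); // first run: execute and record
  log.set(key, result);
  return result;
}

// A workflow with a nondeterministic step (a timestamp stands in for stream data).
function workflow(log: EventLog): number {
  return runStep(log, "step-1", () => Date.now());
}

const log: EventLog = new Map();
const first = workflow(log);   // executes the step and records the result
const replay = workflow(log);  // replays from the log
console.log(first === replay); // true: the replay is deterministic
```

If the workflow read Date.now() directly instead of through runStep, the two calls would return different values and the replay would diverge, which is exactly why stream reads must live inside steps.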

My reconnection endpoint returns an empty stream. What is wrong?

Check that the runId in the reconnection URL matches the run you want to resume. You can inspect the run in the Web UI or CLI (the run does not need to be active to connect to a stream):

npx workflow inspect runs

Also check that the startIndex query parameter is not set beyond the number of chunks the run has produced. If omitted, the stream starts from the beginning.
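The startIndex behavior can be sketched as a slice over the chunks the run has recorded so far (an assumption about the server-side semantics, shown here as a toy function):

```typescript
// Toy model: the reconnection endpoint returns recorded chunks from startIndex.
const chunks = ["Hello", ", ", "world"]; // chunks the run has produced so far

function resumeFrom(startIndex: number | undefined): string[] {
  // An omitted startIndex means "start from the beginning".
  const from = startIndex ?? 0;
  // A startIndex beyond the recorded chunks yields nothing, which
  // looks like an empty stream on the client.
  return chunks.slice(from);
}

console.log(resumeFrom(undefined)); // ["Hello", ", ", "world"]
console.log(resumeFrom(1));         // [", ", "world"]
console.log(resumeFrom(10));        // []
```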

Migration

How do I migrate from ephemeral streaming to durable streaming?

The core changes are:

  1. Add "use workflow" and "use step" directives to your route handler
  2. Wrap your generation call inside a step function
  3. Return the runId in the response headers
  4. Add a reconnection endpoint using getRun()
  5. Use WorkflowChatTransport on the client
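Step 3 can be sketched with a web-standard Response. The header name used here is an illustrative assumption, not the DevKit's actual header:

```typescript
// Hedged sketch of returning the runId in the response headers (step 3).
// The handler shape and header name are assumptions for illustration.
function startHandler(runId: string, body: string): Response {
  return new Response(body, {
    headers: { "x-workflow-run-id": runId }, // client stores this for reconnection
  });
}

const res = startHandler("run_123", "hello");
console.log(res.headers.get("x-workflow-run-id")); // "run_123"
```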

See the full walkthrough in Migrate from Ephemeral to Durable Streaming.

Can I migrate incrementally?

Yes. Start with one route or feature. Workflow does not require restructuring your entire app. The "use workflow" directive and "use step" wrapper are additive changes to existing route handlers. Other routes continue working as before.

What do I get after migrating?

Retries and observability come built in. You do not need to wire a separate retry system or logging infrastructure. The local dev tools and step debugger are available immediately for debugging during development.

Compatibility

Can I run workflows locally during development?

Yes. Workflow runs locally with full dev tools, including the step debugger and execution trace viewer. The Local World is bundled and requires zero configuration. Run npx workflow inspect runs --web to open the Web UI.

Does Workflow work with the AI SDK?

Yes. Workflow integrates with streamText, generateText, and other AI SDK functions through DurableAgent. Wrap AI SDK calls inside step functions and use WorkflowChatTransport on the client for durable streaming with reconnection.

Which frameworks does Workflow support?

Workflow DevKit supports Next.js, Vite, Astro, Express, Fastify, Hono, Nitro, Nuxt, SvelteKit, and NestJS. See the Getting Started guides for framework-specific setup instructions.

Can I use DurableAgent instead of manual step composition?

Yes. DurableAgent is designed for agentic workloads where the task outlives a single request-response cycle. It fits naturally with Workflow's execution model and gives you the same retries, observability, and local tooling. See Building Durable AI Agents.

Advanced usage

How does stream reconnection work after a network failure?

  1. The client stores the runId from the initial workflow response header
  2. If the stream is interrupted before receiving a "finish" chunk, WorkflowChatTransport automatically reconnects
  3. The reconnection request includes the startIndex of the last chunk received
  4. The server returns the stream from that position forward
  5. The client continues rendering from where it left off
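The loop above can be simulated with a flaky chunk source. This is a toy model of the reconnection behavior, not the real WorkflowChatTransport: each fetch cuts off early, and the client reconnects from the index of the next chunk it needs until it sees the "finish" chunk.

```typescript
// Toy simulation of automatic reconnection (not the real transport).
const allChunks = ["Once", " upon", " a", " time", "finish"];

// Returns chunks from startIndex, but drops the connection after a few chunks.
function fetchStream(startIndex: number, cutOffAfter: number): string[] {
  return allChunks.slice(startIndex, startIndex + cutOffAfter);
}

function readWithReconnect(): string[] {
  const received: string[] = [];
  while (received[received.length - 1] !== "finish") {
    // Reconnect, asking for chunks starting after the last one received.
    const batch = fetchStream(received.length, 2); // each attempt drops after 2 chunks
    received.push(...batch);
  }
  return received;
}

console.log(readWithReconnect()); // all five chunks, despite two dropped connections
```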

See Resumable Streams for a complete implementation.

How do I handle user input mid-workflow?

Use hooks. Workflow hooks pause execution and wait for external input before continuing. This fits use cases like confirmations, approvals, or user choices during a multi-step AI flow. See Human in the Loop.
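The pause-and-resume control flow behind hooks can be sketched with a generator. This is a toy model of the idea, not the DevKit's hook API: the workflow yields a question, stays suspended, and resumes only when the driver supplies the user's answer.

```typescript
// Toy sketch of pausing a workflow for human input (not the real hook API).
function* approvalFlow(): Generator<string, string, any> {
  const approved = yield "Deploy to production?"; // pause until input arrives
  return approved ? "deployed" : "cancelled";
}

const flow = approvalFlow();
const question = flow.next().value;    // workflow pauses here with its question
// ...time passes; the user answers out of band...
const outcome = flow.next(true).value; // resume with the user's input
console.log(question, "->", outcome);  // Deploy to production? -> deployed
```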

What happens to in-progress runs when I redeploy?

This depends on the World you are using. With the Vercel World, in-progress runs continue executing on their original deployment, so redeploying is safe. With the Local World, the in-memory queue does not persist across server restarts, so in-progress runs will not resume.

Can multiple clients read the same stream?

Yes. Multiple clients can connect to the same run's stream using getRun(runId).getReadable(). Each client gets its own ReadableStream instance. Use the startIndex parameter to control where each client starts reading from.
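The independence of readers can be modeled as each client holding its own cursor over the run's recorded chunks. This is a toy object model, an assumption standing in for the real per-call ReadableStream:

```typescript
// Toy model: each call hands out a reader with its own private cursor.
const runChunks = ["a", "b", "c"]; // the run's recorded chunks

function getReadable(startIndex = 0) {
  let i = startIndex; // cursor is private to this reader
  return {
    read: () => (i < runChunks.length ? runChunks[i++] : undefined),
  };
}

const clientA = getReadable();  // reads from the beginning
const clientB = getReadable(1); // skips the first chunk via startIndex

console.log(clientA.read(), clientA.read()); // "a" "b"
console.log(clientB.read());                 // "b" — A's reads did not advance B
```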