90% of AI chat systems see their UX collapse the moment tool calling is added. The open-source agui project diagnoses the core problem: pure text streaming (rendering output character by character, like a typewriter) is fundamentally not a viable product protocol.

What this is

When most teams build AI chat, the first step is always the same: print the LLM's output onto the page character by character. But take just one step further and add tool calling (letting the AI search the web or send an email before answering), and a pure text stream stops being enough: users can't tell whether the AI is thinking or browsing; a failed search can only surface as an error mangled into the prose; and data that was originally structured gets mashed into a blob of text.
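The structure-loss problem can be seen in a small sketch (the types and function here are illustrative, not anything from agui): once a tool result is flattened into the token stream, the frontend can no longer tell results from prose.

```typescript
// Hypothetical structured result from a search tool.
interface SearchResult {
  title: string;
  url: string;
}

// A pure text stream can only carry strings, so the result is serialized
// into prose and its structure is destroyed at the UI boundary.
function flattenIntoText(results: SearchResult[]): string {
  return "I found: " + results.map(r => `${r.title} (${r.url})`).join(", ");
}

const results: SearchResult[] = [
  { title: "AG-UI docs", url: "https://example.com/agui" },
];

// The frontend receives one opaque string; it cannot render the URL as a
// link, show a loading state, or distinguish a result from an error message.
const blob = flattenIntoText(results);
console.log(blob);
```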

The agui project's answer is a unified event stream: text, tool input, tool output, errors, and user interruptions all become first-class members of the same UI message protocol. The frontend no longer renders a single block of text; it renders each event according to its type. It is like upgrading a single-lane road into a multi-lane highway: the AI's thinking and execution become visible to, and controllable by, the user.
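A minimal TypeScript sketch of what such a unified event stream can look like (the event names and fields below are assumptions for illustration, not agui's actual wire format):

```typescript
// Illustrative event types for a unified UI message stream.
// All shapes here are assumptions, not agui's schema.
type UiEvent =
  | { type: "text-delta"; text: string }                    // streamed prose
  | { type: "tool-input"; tool: string; args: unknown }     // AI starts a tool call
  | { type: "tool-output"; tool: string; result: unknown }  // tool finished
  | { type: "error"; message: string }                      // errors get their own lane
  | { type: "user-interrupt" };                             // user cancelled

// The frontend dispatches on the event type instead of concatenating text.
function render(event: UiEvent): string {
  switch (event.type) {
    case "text-delta":
      return event.text;
    case "tool-input":
      return `calling ${event.tool}...`;
    case "tool-output":
      return `${event.tool} returned ${JSON.stringify(event.result)}`;
    case "error":
      return `error: ${event.message}`;
    case "user-interrupt":
      return "stopped by user";
  }
}

const stream: UiEvent[] = [
  { type: "text-delta", text: "Let me check." },
  { type: "tool-input", tool: "web_search", args: { q: "agui" } },
  { type: "tool-output", tool: "web_search", result: { hits: 3 } },
];
stream.forEach(e => console.log(render(e)));
```

Because the union is discriminated on `type`, the compiler forces the frontend to handle every lane explicitly; a search failure arrives as an `error` event, not as prose smuggled into the text stream.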

Industry view

This is the right direction for AI frontend engineering: protocol first, capabilities later. Get the message pipeline solid, then integrate tools incrementally. The frontend stays thin, and the backend can more easily evolve into a true Agent system (an AI that autonomously calls tools to complete tasks).
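One concrete reading of "protocol first, frontend thin": if the frontend renders generic tool events rather than tool-specific components, the backend can ship new tools without a frontend release. A hypothetical sketch (tool names and event shape are invented for illustration):

```typescript
// The frontend knows the protocol, not the individual tools.
interface ToolEvent {
  tool: string;                        // e.g. "search", "send_email" (illustrative)
  phase: "start" | "done" | "failed";
  payload?: unknown;
}

// One generic renderer covers every tool, present and future.
function renderTool(e: ToolEvent): string {
  switch (e.phase) {
    case "start":  return `${e.tool}: running`;
    case "done":   return `${e.tool}: done`;
    case "failed": return `${e.tool}: failed`;
  }
}

// The backend later ships a brand-new "calendar" tool; the frontend
// renders it with zero code changes, because only the protocol matters.
console.log(renderTool({ tool: "calendar", phase: "start" }));
```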

That said, such custom protocols carry hidden risks. The industry currently lacks a unified UI message standard; every team defines its own data blocks, so frontend components cannot be reused across projects and the ecosystem is highly fragmented. Meanwhile, pushing state management onto the frontend event stream means that if the protocol is not decoupled enough, the frontend can become the new performance bottleneck in complex tool scenarios.

Impact on regular people

For enterprise IT, building in-house AI assistants can no longer focus solely on backend LLM capabilities; the robustness of the frontend interaction protocol will directly determine whether the system can pass production environment acceptance.

For individual careers, product managers and designers need to add a new skill: designing interactions for the "process of AI calling tools," rather than just drawing a chat box.

For the consumer market, users will gradually get used to an AI that is no longer a black-box text generator but a transparent system whose searching, calculating, and trial-and-error they can watch, and that visibility will make human-machine trust much easier to establish.