We fired our prompts — and hired an agent to build our forms

Elias Garzon

You are a monday.com CRM power user, about to spin up a lead generation form as you have done countless times before. You hit “Create a form”, but this time you are met with a new, clean, and vibrant prompt box: “Provide some details about your form…”

Intrigued by the new capability, you type what you need: “A lead generation form that captures the lead’s name, job title, phone number, email, company name, and a description of their needs.” Seconds later, the form appears — no drag-and-drop, no config panels. It’s almost perfect. You just want to tweak the title and add one more question. You adjust the prompt accordingly and click regenerate. Chaos. Some questions have disappeared, others have a different title, and now it seems you will have to do everything manually after all. Again. The old-school way.

This is when the magic broke. The first iteration of our feature had no memory of what it had just built. Every prompt started from zero. Prompt-only generation was great if you got exactly what you asked for, but brittle for real, complex forms. The solution? AI agents. Specifically, a conversational agent that collaborates, remembers, and supports true iterative form building.

Wait, what just happened?

The user in our opening example was interacting with our “prompt-to-form” feature — a single-shot LLM API call (what we call an AI block in monday.com) that translated natural language into a structured form. It understood its task well: parse the request, validate against our schema, handle edge cases, and return a clean, complete form definition. When it worked, it felt magical.

But it didn’t scale.

When we pushed the system further — asking it to generate longer forms or handle full form and question settings — things started to break:

  • The model began hallucinating a schema
  • Outputs became malformed or incomplete

Even though we carefully defined its system prompt and validation rules, the LLM couldn’t juggle the growing complexity. Most critically, it had no memory. Every prompt was treated like a brand-new form. You couldn’t modify or iterate — even the smallest tweak meant starting over.

This exposed an opportunity to provide a far more powerful experience: instead of treating each prompt as a blank slate, we could enable true AI-assisted collaboration.

The solution? AI Agents

Instead of relying on a one-shot, prompt-to-output LLM call, we introduced a form-building agent — a system that provides an LLM with memory, structure, and actions.

🔁 Agent = LLM + Tools + State + Control Flow

At the core, the LLM still interprets the user’s prompt. But now, its output is routed through a graph-based flow (powered by LangGraph) that adds:

  • Memory: Persist the state of the current form
  • Decision logic: Route the LLM’s output to the correct tool when needed
  • Controlled execution: Only valid, structured changes are made to the form
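The three ingredients above can be sketched in a few lines of plain Python. This is an illustration only, not our implementation: it omits LangGraph entirely, and FormState, route, and the action names are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical state object: this is the agent's memory, persisted across turns.
@dataclass
class FormState:
    title: str = "Untitled form"
    questions: list = field(default_factory=list)

def route(llm_output: dict, state: FormState) -> FormState:
    """Decision logic: send the LLM's structured output to the right handler."""
    if llm_output.get("action") == "add_question":
        state.questions.append(llm_output["question"])
    elif llm_output.get("action") == "set_title":
        state.title = llm_output["title"]
    # Controlled execution: unrecognized actions change nothing.
    return state

# Two consecutive "turns": the second builds on the state left by the first.
state = FormState()
state = route({"action": "add_question", "question": {"type": "email"}}, state)
state = route({"action": "set_title", "title": "Lead generation"}, state)
```

Because the state outlives any single prompt, a follow-up like “tweak the title” modifies the existing form instead of regenerating it from scratch.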

Tools, not magic

Instead of asking the LLM to mutate an entire JSON structure, we define tools — focused actions that reflect anything a user can perform on the form. Each tool has a clear purpose, input schema, and output format.

Among the tools we defined are formQuestionGeneratorTool, which adds questions to the form, and formQuestionOrderingTool, which repositions them. Each tool acts like a low-level function that the agent can invoke. The LLM doesn’t directly edit the form — it calls tools with structured arguments. This is not so different from what a human would do when building a form!
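As a sketch of what such a tool looks like, assuming a hypothetical argument schema (the real formQuestionGeneratorTool’s inputs are internal), a tool is just a focused, validated function:

```python
# Hypothetical input schema for a question-adding tool.
QUESTION_TYPES = {"text", "email", "phone"}

def form_question_generator_tool(args: dict) -> dict:
    """Clear purpose, input schema, and output format."""
    missing = {"type", "title"} - args.keys()
    if missing:
        return {"ok": False, "error": f"missing fields: {sorted(missing)}"}
    if args["type"] not in QUESTION_TYPES:
        return {"ok": False, "error": f"unknown question type: {args['type']}"}
    return {"ok": True, "question": {"type": args["type"], "title": args["title"]}}

result = form_question_generator_tool(
    {"type": "email", "title": "Enter your email address"}
)
```

Invalid arguments produce a structured error the agent can react to, instead of silently corrupting the form.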

The agent in the wild

Here’s how a single user interaction plays out:

  1. User types: “Add a question for the user’s email address”
  2. LLM interprets the intent → “add a question”
  3. Agent sees that formQuestionGeneratorTool is available and matches the user’s request
  4. Agent responds with a request to call this tool with specific arguments (e.g., type: email, title: “Enter your email address”)
  5. The tool is executed and its output is sent back to the agent (e.g., “tool call executed successfully”)
  6. Agent deems the user’s request fulfilled, so it can respond with a final message: “A question asking for the email address has been added successfully.”
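The six steps above can be wired together as a simple loop. Everything here is a stand-in: fake_llm scripts the model’s two responses and the tool body is trivial, but the control flow matches the interaction just described.

```python
def form_question_generator_tool(args: dict) -> dict:
    # Trivial stand-in tool body, for illustration only.
    return {"ok": True, "question": args}

TOOLS = {"formQuestionGeneratorTool": form_question_generator_tool}

def fake_llm(user_message: str, tool_result=None) -> dict:
    """Scripted stand-in for the real model."""
    if tool_result is None:
        # Steps 2-4: interpret intent, pick a tool, request a call with arguments.
        return {"tool": "formQuestionGeneratorTool",
                "args": {"type": "email", "title": "Enter your email address"}}
    # Step 6: the request is fulfilled, respond with a final message.
    return {"final": "A question asking for the email address has been added successfully."}

def run_turn(user_message: str) -> str:
    decision = fake_llm(user_message)                   # steps 1-4
    result = TOOLS[decision["tool"]](decision["args"])  # step 5: execute the tool
    return fake_llm(user_message, tool_result=result)["final"]

reply = run_turn("Add a question for the user's email address")
```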

Why This Works

  • Reliability: Tool outputs are deterministic and validated
  • Scalability: We can add or modify tools as needed
  • Modularity: We can test each tool independently
  • Control: We define what the agent can and cannot do
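The modularity point in particular pays off in testing: each tool is a plain function that can be exercised with no LLM in the loop. A sketch, using a hypothetical reordering function in the spirit of formQuestionOrderingTool:

```python
def form_question_ordering_tool(args: dict) -> dict:
    """Reorder existing questions by id; reject orderings that drop or invent ids."""
    by_id = {q["id"]: q for q in args["questions"]}
    if set(args["ordered_ids"]) != set(by_id):
        return {"ok": False, "error": "ordered_ids must match existing question ids"}
    return {"ok": True, "questions": [by_id[i] for i in args["ordered_ids"]]}

# Unit test in isolation: no agent, no model, fully deterministic.
out = form_question_ordering_tool({
    "questions": [{"id": "q1", "title": "Name"}, {"id": "q2", "title": "Email"}],
    "ordered_ids": ["q2", "q1"],
})
```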

This shift from “LLM as the main executor” to “LLM as the orchestrator” gave us full control over the form-building experience — and dramatically improved stability and performance. However, even the best tools are only as effective as the one who wields them.

With great tools comes great responsibility?

With our tools defined, we shifted to designing the system prompt — and the key was treating it like code: versioned, testable, and precise. It didn’t just describe what the agent could do; it defined what it should assume, which tools to call when, and how to respond. 

In one early case, it always added new questions to the bottom of the form — even when the user requested them to appear elsewhere.

The issue? It used formQuestionGeneratorTool correctly but failed to follow up with formQuestionOrderingTool to place the questions correctly.

We fixed it by updating the system prompt:

“After adding one or more questions, reorder them using the formQuestionOrderingTool in a separate step (ID will be available only after adding the questions).
Example: User says ‘Move the email question to the top’ → Get form → Reorder all questions with email question first.”

That single addition solved it — the agent never made the mistake again.
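Treating the prompt like code also means guarding fixes like this one against regressions. A minimal sketch, with a hypothetical constant name and a heavily abbreviated prompt:

```python
# Hypothetical versioned prompt constant; the text is abbreviated for illustration.
AGENT_SYSTEM_PROMPT_V2 = """\
You are a form-building agent.
After adding one or more questions, reorder them using the
formQuestionOrderingTool in a separate step.
"""

def test_ordering_rule_is_present():
    # Regression check: the fix for the "always appends at the bottom" bug
    # must never be dropped from a future prompt revision.
    assert "formQuestionOrderingTool" in AGENT_SYSTEM_PROMPT_V2

test_ordering_rule_is_present()
```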

This is the fundamental shift: with an agent, the system prompt becomes a playbook for reasoning, not just output. It guides planning, sequencing, and decision-making — just like you would.

Now it builds with you — not just for you

Switching to an agent-based system transformed our form builder.

Want to update the title, add a question, tweak appearance, and reorder — all in one message? You can. And the agent applies each change exactly as intended.

We now support everything our previous version could — plus more! Areas that were previously limitations, like form appearance and question-level settings, are now fully supported. Next, we’re bringing this to existing forms, so users can modify live forms just as easily.

The future’s here — start building with it

By combining well-defined tools, persistent state, and a graph-driven flow around an LLM, we didn’t just improve a feature — we unlocked a new way of building with AI. What started as a one-shot prompt became a reliable, collaborative form-building agent.

This isn’t just about forms. It’s about how we build AI systems that reason, adapt, and grow with our system. AI isn’t a layer you sprinkle on top — it’s now part of the foundation. And how you architect around it will define what your product can do — and what it can become.

We’ve shown what’s possible for our use case. Now it’s your turn.