# Strict Mode vs Flexible Mode
FlowKit supports two conversation modes that control how the LLM interacts with users.
## Overview
| Aspect | Flexible Mode | Strict Mode |
|---|---|---|
| LLM Freedom | High | Minimal |
| Message Control | LLM generates responses | Uses defined messages exactly |
| Conversation Feel | Natural, adaptive | Consistent, predictable |
| Best For | General assistance | Enterprise, regulated flows |
| LLM Requirements | Works with most models | Strong instruction following needed |
## Flexible Mode (Default)
The LLM has freedom to respond naturally while following the flow structure.
### Characteristics
- LLM generates responses in its own words
- Can add personality and contextual responses
- More natural conversation feel
- Adapts to user's tone and style
### Example
```typescript
const assistant = agent("Support Bot")
  // Flexible mode is the default; no need to set it
  .personality("friendly and helpful")
  .build();

const supportFlow = flow("support", assistant)
  .ask("issue", "What can I help you with today?", text(), "issue")
  .then("confirm")
  .ask("confirm", "Got it! Anything else?", yesNo(), "more")
  .when({ "yes": "issue", "no": "bye" })
  .say("bye", "Thanks for reaching out!")
  .done()
  .build();
```

### Conversation Example

```
Bot: Hey there! What can I help you with today?
User: my internet is slow
Bot: Oh no, slow internet is frustrating! I've noted that down.
     Is there anything else you'd like help with?
User: no that's it
Bot: Great! Thanks so much for reaching out.
     We'll look into your slow internet issue. Take care!
```

## Strict Mode
The LLM uses exact messages defined in the flow with minimal deviation.
### Characteristics
- Uses exact messages from step definitions
- Consistent experience every time
- Better for compliance and quality control
- Easier to audit and test
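Because strict mode repeats step messages verbatim, transcripts can be audited with plain string comparison. A minimal self-contained sketch — the `auditTranscript` helper and the data shapes are illustrative, not FlowKit API:

```typescript
// Expected messages per step, taken from the flow definition.
const stepMessages: Record<string, string> = {
  name: "Please enter your full name:",
  dob: "What is your date of birth? (MM/DD/YYYY)",
};

// Returns the ids of steps whose bot message deviated from the definition.
// In strict mode this list should always be empty.
function auditTranscript(
  transcript: { step: string; text: string }[],
  expected: Record<string, string>
): string[] {
  return transcript
    .filter((t) => t.step in expected && t.text !== expected[t.step])
    .map((t) => t.step);
}

// Example: one recorded turn matches, one was paraphrased.
const deviations = auditTranscript(
  [
    { step: "name", text: "Please enter your full name:" },
    { step: "dob", text: "Could you share your birth date?" },
  ],
  stepMessages
);
// deviations === ["dob"]
```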
### Example
```typescript
const assistant = agent("Verification Bot")
  .strict() // Strict mode enabled
  .personality("Follow the verification process exactly")
  .build();

const verifyFlow = flow("verify", assistant)
  .ask("name", "Please enter your full name:", name(), "full_name")
  .then("dob")
  .ask("dob", "What is your date of birth? (MM/DD/YYYY)", custom("date"), "birth_date")
  .then("confirm")
  .ask("confirm", "Is this information correct: {{full_name}}, born {{birth_date}}?", yesNo(), "confirmed")
  .when({ "yes": "success", "no": "name" })
  .say("success", "Thank you. Your identity has been verified.")
  .done()
  .build();
```

### Conversation Example

```
Bot: Please enter your full name:
User: john smith
Bot: What is your date of birth? (MM/DD/YYYY)
User: january 15 1990
Bot: Is this information correct: John Smith, born 01/15/1990?
User: yes
Bot: Thank you. Your identity has been verified.
```

## When to Use Each
### Use Flexible Mode When:
- Building general-purpose chatbots
- User experience is priority
- Conversation needs to feel natural
- You want the LLM to handle edge cases gracefully
- Brand voice allows variation
### Use Strict Mode When:
- Building enterprise/regulated applications
- Consistency is critical
- Legal/compliance requirements exist
- You need predictable, testable responses
- Exact wording matters (financial, healthcare, legal)
- You want to control the experience precisely
## Hybrid Approach
You can use different agents for different flows:
```typescript
// Casual greeting flow - flexible
const greeterAgent = agent("Greeter")
  .personality("Be warm and friendly")
  .build();

// Verification flow - strict
const verifierAgent = agent("Verifier")
  .strict()
  .personality("Follow verification exactly")
  .build();

const greetingFlow = flow("greeting", greeterAgent)
  .ask("hello", "Hi! How's it going?", text(), "greeting")
  .then("transfer")
  .say("transfer", "Let me transfer you to verification...")
  .done()
  .build();

const verifyFlow = flow("verify", verifierAgent)
  // strict verification steps...
  .build();

// Use different engines for different flows
const greetingEngine = new FlowEngine(greetingFlow, { llm: adapter, storage });
const verifyEngine = new FlowEngine(verifyFlow, { llm: adapter, storage });
```

## Technical Differences
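Internally, the mode mainly changes the system prompt sent to the LLM. As a rough mental model — the `buildSystemPrompt` helper and its template text are illustrative assumptions, not FlowKit internals — a mode-aware prompt might be assembled like this:

```typescript
type Mode = "flexible" | "strict";

// Illustrative sketch of a mode-aware system prompt builder.
function buildSystemPrompt(
  agentName: string,
  instructions: string,
  stepId: string,
  message: string,
  mode: Mode
): string {
  const lines = [`You are ${agentName}. ${instructions}`];
  if (mode === "strict") {
    lines.push(
      "IMPORTANT: Use EXACTLY the message defined for the current step.",
      "Do not paraphrase or add to the message.",
      `Current step: ${stepId}`,
      `Say EXACTLY: "${message}"`
    );
  } else {
    lines.push(`Current step: ${stepId}`, `Ask: ${message}`);
  }
  return lines.join("\n");
}
```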
### System Prompt
Flexible mode's system prompt allows LLM freedom:

```
You are [Agent Name]. [Instructions]
Current step: [step_id]
Ask: [message]
...respond naturally...
```

Strict mode's system prompt enforces exact messages:

```
You are [Agent Name]. [Instructions]
IMPORTANT: Use EXACTLY the message defined for the current step.
Do not paraphrase or add to the message.
Current step: [step_id]
Say EXACTLY: "[message]"
```

### Extraction
Both modes use the same JSON extraction step, but strict mode is more demanding: because the scripted messages leave no room for follow-up clarification, extraction has to succeed on the user's first answer.

## LLM Model Considerations
### Flexible Mode
Works well with most models:
- GPT-4o-mini (recommended)
- GPT-4o
- Llama 3
- Qwen
- Most 3B+ models
### Strict Mode
Requires models with good instruction following and JSON extraction:
- GPT-5.2
- GPT-5 mini
- GPT-5 nano (recommended)
- GPT-4o
- GPT-4o-mini
- Claude 3
- Qwen 4B+
- Llama 3.2 3B (may struggle with JSON)
Tip: For strict mode with local models, use qwen3:4b or larger.
## Migration Between Modes
Changing from flexible to strict (or vice versa) is simple:
```typescript
// Before (flexible) - mode defaults to "flexible"
const assistant = agent("Bot").build();
```

```typescript
// After (strict) - just add .strict()
const assistant = agent("Bot")
  .strict()
  .build();
```

No flow changes needed - only the agent configuration changes.
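Since only the agent configuration changes, the mode decision can be isolated behind a single flag (e.g. strict in production, flexible in development). A hedged sketch — `makeAgentConfig` and the config shape are illustrative stand-ins, not FlowKit API; with the real builder this would be a conditional `.strict()` call:

```typescript
interface AgentConfig {
  name: string;
  mode: "flexible" | "strict";
}

// Centralize the mode decision so flows never need to change.
function makeAgentConfig(name: string, strict: boolean): AgentConfig {
  return { name, mode: strict ? "strict" : "flexible" };
}

// e.g. driven by an environment flag
const config = makeAgentConfig("Bot", process.env.STRICT_MODE === "1");
```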