Package Exports
- @junniepat/conversational-ai-input
- @junniepat/conversational-ai-input/dist/index.esm.js
- @junniepat/conversational-ai-input/dist/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@junniepat/conversational-ai-input) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
@junniepat/conversational-ai-input
Transform rigid forms into engaging conversations with AI-powered intelligence
🚀 Try Live Demo →
Experience the future of form interactions! Our live demo showcases real AI processing with Mistral Cloud - no setup required.
🎯 What You'll See:
- ✅ Real AI Processing - Watch natural language transform into structured data
- ✅ Voice Input - Speak naturally and see instant transcription
- ✅ File Uploads - Drag & drop documents for AI analysis
- ✅ Multiple AI Providers - Test with OpenAI, Anthropic, Mistral, and more
- ✅ Interactive Configuration - Try different models and settings
🚀 Quick Start
npm install @junniepat/conversational-ai-input
import { ConversationalInput } from '@junniepat/conversational-ai-input';
function App() {
return (
<ConversationalInput
aiProvider="mistral"
apiKey="your-api-key"
onSubmit={(data) => console.log('Extracted:', data)}
/>
);
}
Ready to transform your forms? Try the live demo →
📚 Interactive Documentation →
Explore all component features with our interactive Storybook! Test different configurations, see live examples, and understand the API.
🎯 What You'll Find:
- ✅ Component Stories - All ConversationalInput variants and use cases
- ✅ Interactive Controls - Adjust props and see changes in real-time
- ✅ Code Examples - Copy-paste ready code snippets
- ✅ AI Integration Demos - See Mistral Cloud AI processing in action
- ✅ Security Examples - Learn secure implementation patterns
- ✅ Responsive Testing - Test on different screen sizes
Perfect for developers who want to understand the component before integrating! Explore Storybook →
A powerful, flexible React component that transforms any form input into a conversational, AI-ready interface. Perfect for job applications, customer support, surveys, and any scenario where you want to gather information naturally.
✨ Features
- 🎤 Voice Input: Built-in speech-to-text with Web Speech API
- 📎 File Upload: Drag & drop file support with validation
- 🤖 AI Ready: Designed for AI processing and clarification
- 🎨 Highly Customizable: Render props, custom styling, and flexible configuration
- 📱 Responsive: Works perfectly on all devices
- ♿ Accessible: WCAG compliant with proper ARIA labels
- 🔒 Privacy First: Works offline and with local LLMs
- ⚡ Lightweight: Only ~15KB gzipped
🎯 Use Cases
Job Applications
<ConversationalInput
onSubmit={handleJobApplication}
placeholder="Describe your experience and why you'd be great for this role..."
requireFiles={true}
acceptedFileTypes={['.pdf', '.doc', '.docx']}
labels={{
addAttachments: "Upload Resume",
submit: "Submit Application"
}}
/>
Customer Support
<ConversationalInput
onSubmit={handleSupportRequest}
placeholder="How can we help you today? Describe your issue..."
enableFileUpload={true}
acceptedFileTypes={['.png', '.jpg', '.pdf']}
labels={{
addAttachments: "Add Screenshots",
submit: "Send Message"
}}
/>
Surveys & Research
<ConversationalInput
onSubmit={handleSurveyResponse}
placeholder="Share your thoughts and experiences..."
enableVoice={true}
enableFileUpload={false}
labels={{
submit: "Submit Response"
}}
/>
🎨 Customization
Custom Styling
<ConversationalInput
onSubmit={handleSubmit}
classNames={{
container: "max-w-4xl mx-auto",
textarea: "h-32 text-lg font-serif bg-gradient-to-r from-purple-50 to-blue-50",
submitButton: "bg-gradient-to-r from-green-500 to-emerald-600 text-white",
voiceButton: "bg-purple-600 text-white border-0",
}}
/>
Render Props for Complete Control
<ConversationalInput
onSubmit={handleSubmit}
render={{
submitButton: ({ onClick, disabled, isSubmitting, text }) => (
<button
onClick={onClick}
disabled={disabled}
className="custom-submit-button"
>
{isSubmitting ? 'Processing...' : text}
</button>
),
voiceButton: ({ isListening, onClick, disabled }) => (
<button
onClick={onClick}
disabled={disabled}
className={`voice-btn ${isListening ? 'recording' : ''}`}
>
{isListening ? '🔴 Recording...' : '🎤 Start Voice'}
</button>
),
}}
/>
Form Integration
<ConversationalInput
onSubmit={() => {}} // Form handles submission
showSubmitButton={false}
submitTrigger="none"
onTextChange={(text) => setFormData(prev => ({ ...prev, description: text }))}
onFilesChange={(files) => setFormData(prev => ({ ...prev, attachments: files }))}
/>
🤖 AI Integration
OpenAI Integration
import { ConversationalInput } from '@junniepat/conversational-ai-input';
const processWithOpenAI = async (text: string, files?: File[]) => {
const response = await fetch('/api/openai/process', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ text, files })
});
const result = await response.json();
return result.extractedData;
};
<ConversationalInput
onSubmit={processWithOpenAI}
placeholder="Describe your needs..."
/>
Local LLM Integration (Ollama)
const processWithLocalLLM = async (text: string) => {
const response = await fetch('http://localhost:11434/api/generate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
model: 'mixtral',
prompt: `Extract structured information: ${text}`,
stream: false
})
});
const result = await response.json();
return JSON.parse(result.response);
};
Clarification System
import { Clarifier } from '@junniepat/conversational-ai-input';
function MyComponent() {
const [clarification, setClarification] = useState<string | null>(null);
const handleSubmit = async (text: string, files?: File[]) => {
const result = await processWithAI(text, files);
if (result.needsClarification) {
setClarification(result.clarificationQuestion);
}
};
return (
<div>
<ConversationalInput onSubmit={handleSubmit} />
{clarification && (
<Clarifier
question={clarification}
onClarify={handleClarify}
type="info"
suggestions={["Yes", "No", "I'll provide more details"]}
/>
)}
</div>
);
}
📚 Examples
Check out our comprehensive examples:
import {
BasicUsage,
FormIntegration,
CustomStyling,
RenderProps
} from '@junniepat/conversational-ai-input/examples';
// See examples in action
<BasicUsage />
<FormIntegration />
<CustomStyling />
<RenderProps />
🔧 API Reference
ConversationalInput Props
| Prop | Type | Default | Description |
|---|---|---|---|
| onSubmit | (text: string, files?: File[]) => Promise<void> \| void | Required | Callback when the form is submitted |
| placeholder | string | "Type your message..." | Placeholder text |
| requireFiles | boolean | false | Whether files are required |
| acceptedFileTypes | string[] | ['*'] | Accepted file types |
| maxFileSize | number | 10 MB | Maximum file size in bytes |
| className | string | "" | Custom CSS class |
| showClearButton | boolean | true | Show clear-text button |
| enableVoice | boolean | true | Enable voice input |
| enableFileUpload | boolean | true | Enable file upload |
| showSubmitButton | boolean | true | Show submit button |
| submitTrigger | 'button' \| 'enter' \| 'both' \| 'none' | 'both' | Submit trigger behavior |
| clearAfterSubmit | boolean | true | Clear text after submission |
| initialValue | string | "" | Initial text value |
| value | string | undefined | Controlled text value |
| onTextChange | (text: string) => void | undefined | Text change callback |
| onFilesChange | (files: File[]) => void | undefined | Files change callback |
| autoSubmitOnEnter | boolean | false | Auto-submit on Enter key |
| classNames | ClassNamesObject | {} | Custom CSS classes |
| render | RenderObject | {} | Custom render functions |
| validateInput | (text: string) => string \| null | undefined | Custom validation |
| isSubmitting | boolean | false | Loading state |
| disabled | boolean | false | Disable component |
Labels Customization
labels={{
submit: "Send Message",
clear: "Clear Text",
addAttachments: "Add Files",
useVoice: "Use Voice",
listening: "Listening...",
cvReady: "CV Ready"
}}
Custom CSS Classes
classNames={{
container: "max-w-4xl mx-auto",
textarea: "h-32 text-lg font-serif",
actionBar: "bg-gray-100 p-3",
voiceButton: "bg-blue-500 text-white",
fileButton: "bg-green-500 text-white",
submitButton: "bg-purple-600 text-white",
clearButton: "bg-red-500 text-white",
fileDisplay: "bg-white border rounded",
errorDisplay: "bg-red-100 border-red-300"
}}
🎭 Render Props
Available Render Props
- voiceButton: Custom voice button rendering
- fileButton: Custom file upload button
- submitButton: Custom submit button
- clearButton: Custom clear button
- fileDisplay: Custom file display
- errorDisplay: Custom error display
Render Props Interface
interface VoiceButtonRenderProps {
isListening: boolean;
isSupported: boolean;
onClick: () => void;
disabled: boolean;
className: string;
}
interface FileButtonRenderProps {
onClick: () => void;
disabled: boolean;
className: string;
acceptedTypes: string[];
}
interface SubmitButtonRenderProps {
onClick: () => void;
disabled: boolean;
className: string;
isSubmitting: boolean;
text: string;
}
🎯 Advanced Patterns
Multi-Step Form Integration
function MultiStepForm() {
const [step, setStep] = useState(1);
const [formData, setFormData] = useState({});
const handleStepSubmit = async (text: string, files?: File[]) => {
setFormData(prev => ({ ...prev, [`step${step}`]: { text, files } }));
if (step < 3) {
setStep(step + 1);
} else {
await submitFinalForm(formData);
}
};
return (
<div>
<h2>Step {step} of 3</h2>
<ConversationalInput
onSubmit={handleStepSubmit}
placeholder={`Tell us about step ${step}...`}
showSubmitButton={true}
submitTrigger="button"
/>
</div>
);
}
Real-time Validation
<ConversationalInput
onSubmit={handleSubmit}
validateInput={(text) => {
if (text.length < 10) return "Please provide at least 10 characters";
if (text.length > 1000) return "Please keep it under 1000 characters";
return null; // No error
}}
onTextChange={(text) => {
// Real-time validation feedback
const error = validateInput(text);
setValidationError(error);
}}
/>
File Processing Pipeline
const processFiles = async (files: File[]) => {
const results = [];
for (const file of files) {
if (file.type.startsWith('image/')) {
const processed = await processImage(file);
results.push(processed);
} else if (file.type === 'application/pdf') {
const processed = await processPDF(file);
results.push(processed);
}
}
return results;
};
<ConversationalInput
onSubmit={handleSubmit}
onFilesChange={processFiles}
acceptedFileTypes={['.pdf', '.png', '.jpg', '.jpeg']}
/>
🌐 Browser Support
- ✅ Chrome 66+
- ✅ Firefox 60+
- ✅ Safari 11.1+
- ✅ Edge 79+
- ✅ Mobile browsers (iOS Safari, Chrome Mobile)
Note: Voice input requires HTTPS in production (Web Speech API requirement).
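Because support varies by browser (and the API is absent during server-side rendering), it can be useful to feature-detect before surrounding the component with voice-related UI. A minimal sketch — the component performs its own detection internally; this helper is only an illustration for gating UI outside it:

```typescript
// Detect Web Speech API support. Returns false during SSR / in Node
// and in browsers that expose neither the unprefixed nor the
// webkit-prefixed constructor.
function isSpeechRecognitionSupported(): boolean {
  const g = globalThis as any;
  if (typeof g.window === 'undefined') return false; // not a browser
  return Boolean(g.window.SpeechRecognition || g.window.webkitSpeechRecognition);
}
```

Note that even in a supporting browser, the API will only work on a secure origin (HTTPS or localhost).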
📦 Installation Requirements
Peer Dependencies
{
"react": "^16.8.0 || ^17.0.0 || ^18.0.0",
"react-dom": "^16.8.0 || ^17.0.0 || ^18.0.0"
}
Optional Dependencies
{
"lucide-react": "^0.300.0" // For icons (included in bundle)
}
🚀 Performance
- Bundle Size: ~15KB gzipped
- Tree Shaking: Full ES module support
- Lazy Loading: Voice recognition only loads when needed
- Memory Efficient: Proper cleanup of event listeners
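"Lazy loading" here means the voice-recognition code path is only initialized on first use. The general pattern can be sketched with a memoized async loader (a hypothetical stand-in — the component's internals may differ):

```typescript
// Memoized async loader: the expensive initialization runs at most once,
// on first call; every caller shares the same promise.
function lazy<T>(load: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | null = null;
  return () => {
    if (!cached) cached = load();
    return cached;
  };
}

// Stand-in for a dynamic import of a voice-recognition module.
let initCount = 0;
const getVoice = lazy(async () => {
  initCount += 1;
  return { start: () => 'listening' };
});
```

In the real component, `load` would be a dynamic `import()` of the speech module, so users who never touch the microphone never pay for it.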
🔒 Security & Privacy
⚠️ Critical Security Notes
NEVER call Ollama directly from the browser! Always use server-side proxy routes:
// ❌ DANGEROUS - Don't do this
const response = await fetch('http://localhost:11434/api/generate', {
method: 'POST',
body: JSON.stringify({ model: 'mixtral', prompt: text })
});
// ✅ SAFE - Use server proxy
const response = await fetch('/api/ollama/process', {
method: 'POST',
body: JSON.stringify({ text, files })
});
Server-Side Proxy Example (Next.js)
// pages/api/ollama/process.ts
export default async function handler(req: NextApiRequest, res: NextApiResponse) {
const { text, files } = req.body;
const response = await fetch('http://localhost:11434/api/generate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
model: 'mixtral',
prompt: `Extract structured data: ${text}`,
stream: false
})
});
const result = await response.json();
res.json({ extractedData: JSON.parse(result.response) });
}
Built-in Security Features
- 🔒 SSR-Safe Voice: Auto-hides microphone on non-HTTPS origins
- 🛡️ Prompt Injection Defense: Sanitizes user input and strips system prompts
- 📊 PII Detection: Built-in detection of emails, phones, SSNs
- 🚫 Profanity Filter: Configurable content filtering
- 📏 Input Validation: Minimum character requirements and length limits
- 🔐 Data Minimization: Only collects fields defined in schema
- ⏰ Configurable Retention: Field-level TTLs and org-scoped policies
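The package's PII detector is internal, but the idea behind it can be sketched with a few regexes. This is a simplified illustration, not the library's actual patterns — production-grade detection needs considerably more care:

```typescript
// Simplified PII scan: flags email addresses, US-style phone numbers,
// and SSN-like patterns in free text. Illustrative only.
const PII_PATTERNS: Record<string, RegExp> = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/,
  phone: /\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,
};

function detectPII(text: string): string[] {
  return Object.entries(PII_PATTERNS)
    .filter(([, re]) => re.test(text))
    .map(([kind]) => kind);
}
```

A detector like this can drive a warning in the UI before the text is ever sent to an AI provider.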
Privacy & Compliance
- No External Dependencies: Works completely offline
- Local Processing: Voice and file processing happen in browser
- No Data Collection: Zero telemetry or analytics
- GDPR Ready: Built-in consent management and data export
- Audit Logs: Immutable logs for compliance (EEOC, GDPR)
- Encryption: PII encrypted at rest with configurable keys
🤝 Contributing
We welcome contributions! Please see our Contributing Guide for details.
Development Setup
git clone https://github.com/mr-junniepat/conversational-input-oss.git
cd conversational-input-oss
npm install
npm run dev
npm run build
npm run test
🔒 Security & API Key Management
⚠️ Important Security Notes
Never expose API keys in client-side code! This package is designed for secure AI integration.
✅ Secure Implementation Patterns
1. Environment Variables (Recommended)
# .env.local (never commit this file)
MISTRAL_API_KEY=your_actual_api_key_here
OPENAI_API_KEY=your_openai_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
// ✅ SECURE - Use environment variables
<ConversationalInput
aiProcessing={{
provider: 'mistral',
apiKey: process.env.MISTRAL_API_KEY, // Loaded from environment
model: 'mistral-small-latest'
}}
/>
2. Server-Side Proxy (Most Secure)
// ✅ MOST SECURE - API calls go through your server
<ConversationalInput
aiProcessing={{
provider: 'custom',
endpoint: '/api/ai-process', // Your secure server endpoint
}}
/>
// pages/api/ai-process.js (Next.js example)
export default async function handler(req, res) {
const response = await fetch('https://api.mistral.ai/v1/chat/completions', {
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.MISTRAL_API_KEY}`,
'Content-Type': 'application/json',
},
body: JSON.stringify(req.body)
});
const data = await response.json();
res.json(data);
}
❌ What NOT to Do
// ❌ NEVER DO THIS - API key exposed in client code
<ConversationalInput
aiProcessing={{
provider: 'mistral',
apiKey: 'YOUR_API_KEY_HERE', // EXPOSED! Hardcoded keys ship to every visitor's browser
}}
/>
🛡️ Built-in Security Features
- Input Sanitization: Automatic cleaning of user input
- PII Detection: Identifies and warns about sensitive data
- Prompt Injection Defense: Protects against malicious prompts
- File Upload Validation: Secure file type and size checking
- SSR-Safe Voice: Microphone access only on secure origins
🔐 Production Checklist
- API keys stored in environment variables
- .env files added to .gitignore
- Server-side proxy for AI calls (recommended)
- HTTPS enabled for voice features
- Input validation configured
- File upload restrictions set
- Rate limiting implemented
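Rate limiting for the proxy route can be as simple as a fixed-window counter per client. An in-memory sketch (single-process only — clustered deployments typically use Redis or a gateway-level limiter instead):

```typescript
// Fixed-window rate limiter: allow `limit` requests per `windowMs` per key.
// In-memory only; state is lost on restart and not shared across processes.
function createRateLimiter(limit: number, windowMs: number) {
  const hits = new Map<string, { count: number; windowStart: number }>();
  return (key: string, now: number = Date.now()): boolean => {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true; // new window, allowed
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}
```

In an API route like the proxy example above, you would call the limiter with the client IP and respond with HTTP 429 when it returns false.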
📄 License
MIT License - see LICENSE file for details.
🆘 Support & Contact
📞 Get Help
- 📚 Interactive Docs: Storybook Documentation
- 🎮 Live Demo: Try the Component
- 🐛 Bug Reports: GitHub Issues
- 💬 Questions: GitHub Discussions
- 📖 Documentation: Full API Reference
- 🎯 Examples: Live Examples
- 🤖 AI Integration: AI Integration Guide
👨‍💻 Author & Creator
Patrick Igwe - Full-Stack Developer & AI Enthusiast
- 🔗 LinkedIn: Patrick Igwe
- 🐙 GitHub: @mr-junniepat
- 🌐 Portfolio: PromptForms
💼 Professional Services
Need help implementing conversational AI in your project? I offer:
- 🎯 Custom Implementation - Tailored solutions for your use case
- 🏢 Enterprise Integration - Large-scale deployments and consulting
- 🎓 Training & Workshops - Team training on conversational UI best practices
- 🚀 Technical Consulting - Architecture and AI strategy guidance
Let's connect on LinkedIn to discuss your project! Connect with Patrick →
🌟 Why Conversational Input?
Traditional forms are rigid and frustrating. Our conversational approach makes data collection feel natural and engaging:
- 🎯 Better Completion Rates: Users are more likely to complete conversational forms
- 📊 Higher Quality Data: Natural language often contains richer information
- 🤖 AI Ready: Designed from the ground up for AI processing
- 📱 Mobile First: Perfect for mobile and voice-first interfaces
- ♿ Inclusive: Works for users with disabilities and different input preferences
Built with ❤️ by Patrick Igwe