
A Case Study in Operationalizing Generative AI
Lyssin needed an AI-enabled system that could accurately and consistently categorize and prioritize unstructured data.
Olio Apps built a workbench to collaboratively develop prompts with Lyssin stakeholders.
Lyssin got a production-ready SaaS application with reliable, highly efficient and consistent prompts that met their business goals.
“Olio's flexibility, technical confidence, and passion for the mission stood out. They handled new and complex technologies with ease, always stayed available, and truly cared about delivering something valuable. They made the process collaborative, not transactional.”
Lyssin, a Portland-based performance optimization company, helps organizations capture and elevate frontline employee insights into actionable strategic intelligence. With a patent-pending approach to employee feedback, Lyssin had a bold vision: enable company leadership to hear directly from their teams in a way that was meaningful, prioritized, and actionable. They needed a technical partner who could determine whether a large language model could actually solve this problem, and then build their version 1 application.
Leadership needs to understand what's happening on the front lines of their organizations, but traditional employee feedback mechanisms are broken. Comment boxes overflow with unstructured feedback ranging from cafeteria complaints to serious operational concerns. Without context or prioritization, this valuable intelligence gets lost in noise.
“Organizations that can't surface critical employee insights risk missing early warning signs of operational problems, talent retention issues, and cultural challenges that directly impact the bottom line.”
— Josh Frantz, CEO and co-founder, Lyssin
Lyssin envisioned a system that could automatically categorize, score, and prioritize employee feedback, surfacing what matters most to leadership. But they faced fundamental questions: Could AI reliably extract meaning from unstructured employee comments? Could it distinguish between urgent issues and everyday gripes? Could it understand context well enough to group related feedback without creating chaos?
Adding to the challenge, obtaining real employee feedback data for testing proved difficult during the discovery phase. The team would need creative solutions just to validate the concept.
As Olio Apps began building Lyssin's system, we focused on the inherent challenge of working with large language models: optimizing their performance for consistent, reliable results in a production environment. LLMs require careful prompt engineering and systematic testing to ensure they deliver accurate classifications that meet specific business requirements.
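As a rough illustration of what that looks like in practice (not Lyssin's actual prompt or schema), a classification step of this kind typically asks the model for structured output and validates it before anything downstream depends on it. The category names, the 1-5 urgency scale, and the completeWithLLM helper below are assumptions made for the sketch.

```ts
// Illustrative sketch only: categories, the urgency scale, and the
// completeWithLLM helper are assumptions, not Lyssin's actual schema.
type Classification = {
  category: "operations" | "safety" | "culture" | "facilities" | "other";
  urgency: number; // 1 (low) to 5 (critical)
  summary: string;
};

const classifyPrompt = (feedback: string) => `
You are classifying anonymous employee feedback.
Return ONLY a JSON object with keys "category", "urgency" (1-5), and "summary".
Allowed categories: operations, safety, culture, facilities, other.

Feedback:
"""
${feedback}
"""`;

async function classifyFeedback(
  feedback: string,
  completeWithLLM: (prompt: string) => Promise<string>, // any provider's text-completion call
): Promise<Classification> {
  const raw = await completeWithLLM(classifyPrompt(feedback));
  const parsed = JSON.parse(raw) as Classification;
  // Reject malformed output so bad classifications never reach the dashboard.
  if (!parsed.category || parsed.urgency < 1 || parsed.urgency > 5) {
    throw new Error("LLM returned an invalid classification");
  }
  return parsed;
}
```

Validating the model's structured output is what turns a raw LLM response into a classification the rest of the system can trust.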
The technical requirements were complex and interconnected. The system needed to:
Traditional approaches wouldn't work: processing massive datasets in a single prompt would be prohibitively slow and expensive. Instead, we combined strategic prompt engineering with intelligent data retrieval so that each prompt handled only the most relevant feedback items. This hybrid approach balanced accuracy with performance while keeping costs manageable.
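The retrieval half of that hybrid can be sketched roughly as follows; the embed helper and topKRelevant function are illustrative assumptions, standing in for whichever embeddings service and ranking logic a real system would use.

```ts
// Sketch of "retrieve only the relevant items before prompting", assuming
// an embed() helper that turns text into a vector via some embeddings API.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function topKRelevant(
  query: string,
  feedbackItems: { id: string; text: string }[],
  k: number,
  embed: (text: string) => Promise<number[]>, // assumed embeddings helper
) {
  const queryVec = await embed(query);
  const scored = await Promise.all(
    feedbackItems.map(async (item) => ({
      item,
      score: cosine(queryVec, await embed(item.text)),
    })),
  );
  // Only the k most relevant items are included in the downstream prompt,
  // keeping token cost and latency bounded regardless of dataset size.
  return scored.sort((a, b) => b.score - a.score).slice(0, k).map((s) => s.item);
}
```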
But the bigger challenge was iteration speed and collaboration. How do you accelerate the prompt refinement process? How do you enable non-technical stakeholders to participate meaningfully in optimizing AI behavior?
The breakthrough came early in the project, when we created a custom prompt engineering workbench that accelerated iteration and enabled collaborative refinement of prompts.
The Workbench gave the Lyssin team the ability to:
This tool transformed prompt engineering from an opaque, developer-only activity into a transparent, collaborative process. When real sample data was scarce, we used LLMs to generate fabricated feedback with controlled parameters (specific tones, urgency levels, topics), then tested our classification system against this synthetic dataset.
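The general shape of that synthetic-data technique is sketched below. The specific topics, tones, urgency levels, and the generateWithLLM helper are placeholders for illustration, not Lyssin's actual parameters; the key idea is that the generation parameters double as the expected labels when testing the classifier.

```ts
// Sketch of generating labeled synthetic feedback with controlled parameters.
// Topics, tones, and generateWithLLM() are assumptions for illustration.
const topics = ["scheduling", "equipment", "management", "safety"];
const tones = ["frustrated", "neutral", "constructive"];
const urgencies = [1, 3, 5];

async function buildSyntheticDataset(
  generateWithLLM: (prompt: string) => Promise<string>,
) {
  const dataset: { text: string; topic: string; tone: string; urgency: number }[] = [];
  for (const topic of topics) {
    for (const tone of tones) {
      for (const urgency of urgencies) {
        const text = await generateWithLLM(
          `Write one short piece of anonymous employee feedback about ${topic}, ` +
          `in a ${tone} tone, at urgency level ${urgency} on a 1-5 scale. ` +
          `Return only the feedback text.`,
        );
        // The generation parameters become the expected labels for testing.
        dataset.push({ text, topic, tone, urgency });
      }
    }
  }
  return dataset;
}
```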
The workbench also allowed us to evaluate different LLM providers. After testing several options, we selected Claude for its superior accuracy in classification tasks.
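A provider comparison of this kind can be as simple as running the same labeled examples through each candidate and measuring accuracy. The adapter interface and the accuracy metric in the sketch below are assumptions made for illustration, not the workbench's actual evaluation code.

```ts
// Sketch of comparing providers: run the same labeled examples through each
// provider's adapter and report category accuracy. Interfaces are assumed.
type LabeledExample = { text: string; expectedCategory: string };
type ProviderAdapter = {
  name: string;
  classify: (text: string) => Promise<{ category: string }>;
};

async function compareProviders(
  examples: LabeledExample[],
  providers: ProviderAdapter[],
) {
  for (const provider of providers) {
    let correct = 0;
    for (const ex of examples) {
      const result = await provider.classify(ex.text);
      if (result.category === ex.expectedCategory) correct++;
    }
    // A simple per-provider accuracy score is often enough to pick a winner.
    console.log(
      `${provider.name}: ${((100 * correct) / examples.length).toFixed(1)}% accurate`,
    );
  }
}
```

Pairing a harness like this with the synthetic dataset gives a repeatable way to compare providers on the same footing.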
What once required endless meetings and emailed spreadsheets became an efficient feedback loop. The Lyssin team could experiment with prompt variations and immediately see the results. We estimate this approach saved at least 80 hours of development and testing time per prompt.
This wasn't a traditional vendor-client relationship. The project required deep collaboration between Olio's technical team (two developers, a fractional CTO, and a part-time project manager) and Lyssin's product team, including CEO Josh Frantz, a product manager, and a designer.
Together, these cross-functional teams worked through product design, sprint planning, and workflow refinement. The Lyssin team brought domain expertise about employee feedback and CEO needs. Olio brought expertise in AI implementation and system architecture. The Prompt Workbench served as the bridge, enabling non-technical stakeholders to participate meaningfully in prompt engineering decisions.

The engagement resulted in a comprehensive, AI-powered feedback system with three key components:
A mobile application where employees can submit anonymous feedback. The app uses AI to automatically score and categorize submissions, making it easy for employees to share insights without fear of reprisal.
An intuitive dashboard that displays employee feedback ranked by importance and frequency. Key features include:
The prompt engineering workbench that made the entire system possible. Beyond the initial build, it allows Lyssin to adjust and tune prompts and potentially create additional LLM-powered features in the future.
The project delivered significant value across multiple dimensions:
Olio built a functioning, scalable SaaS platform now serving Lyssin's customers. The system successfully transitioned to Lyssin's internal team with comprehensive training and documentation.
The Lyssin team can now evolve the system and continue prompt engineering independently, thanks to the tooling and training Olio provided.
The project proved that AI and LLMs could solve a complex, real-world problem: turning unstructured employee feedback into strategic intelligence that CEOs can act on.
The Prompt Engineering Workbench saved approximately 80 hours of development and testing time per prompt, dramatically accelerating the iteration cycle.
“The project validated that AI could reliably solve a complex, real-world problem: turning unstructured employee feedback into strategic intelligence. Lyssin is now successfully serving paying customers with a production system that consistently delivers accurate categorization and prioritization.”
— Josh Frantz, CEO and co-founder, Lyssin