Job Openings
Research Internship: Diagnostic & Perceptual Evaluation Framework for Generative Speech
Full-time | Voice & Conversational AI | Global Enterprise AI Platform
Duration: 4-8 Months
Location: Switzerland (Europe), on-site at AGIGO’s Zurich Office
About AGIGO
AGIGO™ is the first enterprise-grade conversational AI platform that empowers enterprises to transform customer engagement and business performance with high-agency AI agents: agents that match well-trained human customer agents in naturalness, responsiveness, and autonomous task resolution. Built for on-premises or hybrid deployment, with no reliance on third-party services, our proprietary platform gives enterprises full control, observability, and data sovereignty. Its unified core, tunable base models, and end-to-end design toolchain deliver context-aware, adaptable agents that engage directly with customers in real time. Founded in February 2025 in Switzerland by a team of 18 experienced AI pioneers, AGIGO is driven by a bold vision to lead the next major wave in AI by transforming how businesses interact with their customers.
Your Research Mission
The objective of this internship is to design and build a next-generation diagnostic and perceptual evaluation framework for generative speech models: a system that not only tells us whether a model is better, but why. You will combine robust objective metrics with novel techniques for automated failure diagnosis and perceptual correlation. The resulting framework will become a core internal tool, guiding model selection and optimization across AGIGO’s voice-synthesis development and deployment cycle.
Phase 1: Foundational Objective Metrics
In the initial phase of your project, you will implement a state-of-the-art suite of automated metrics that provide a comprehensive, objective view of model performance and robustness, going far beyond conventional Word Error Rate (WER):
Aggregated WER: An ensemble of diverse ASR models (autoregressive and non-autoregressive (AR/NAR) models of different architectures) to measure intelligibility robustness.
Semantic Error Rate (SER): You will implement a metric that goes beyond simple word matching. By comparing the semantic embeddings (e.g., from a T5 or BERT model) of the ground truth text and the ASR-transcribed text, this metric can tolerate minor transcription differences ("the car" vs. "a car") while heavily penalizing meaning-altering errors, e.g., hallucinations or repeated n-grams.
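To make the SER idea concrete, here is a minimal sketch of the interface, assuming the ground truth and the ASR transcript are compared by embedding similarity. The bag-of-words cosine below is only a stand-in so the code is self-contained; a real implementation would replace `bow_cosine` with cosine similarity over sentence embeddings from a pre-trained encoder such as T5 or BERT:

```python
import math
from collections import Counter

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts.
    Placeholder for neural sentence-embedding similarity."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_error_rate(ref: str, hyp: str) -> float:
    """SER = 1 - similarity. With real embeddings, minor wording changes
    ("the car" vs. "a car") score low, while hallucinations score high."""
    return 1.0 - bow_cosine(ref, hyp)
```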
Signal & Perceptual Proxy Suite
- Integrate standardized metrics such as STOI, PESQ, and SI-SDR from TorchAudio-Squim to assess signal-level fidelity.
- Integrate non-intrusive perceptual objective metrics based on neural networks, such as DNSMOS.
- Implement speaker similarity metrics using pre-trained speaker-verification models to quantify performance in voice cloning tasks (optional, if time allows).
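As a concrete starting point for the aggregated-WER metric above, the sketch below assumes each ASR model in the ensemble has already produced a transcript. The plain word-level Levenshtein WER and the unweighted mean are illustrative simplifications; a production version would normalize text first and could weight the ASR models:

```python
def wer(ref: str, hyp: str) -> float:
    """Word Error Rate: word-level Levenshtein distance over reference length."""
    r, h = ref.lower().split(), hyp.lower().split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j - 1] + cost,  # substitution or match
                          d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1)         # insertion
    return d[len(r)][len(h)] / max(len(r), 1)

def aggregated_wer(ref: str, hypotheses: list[str]) -> float:
    """Mean WER over transcripts from several ASR models
    (the unweighted mean is an assumption)."""
    return sum(wer(ref, h) for h in hypotheses) / len(hypotheses)
```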
Phase 2: Automated Failure Diagnostics & Adversarial Testing
This is where the project becomes truly innovative. The goal is to automatically find and categorize the subtle failures that plague even the best TTS models. You will develop a classifier to detect common TTS failure modes on generated audio:
Hallucination Detector: Identifies repeated phrases, word omissions, and truncated sentences.
Prosody Mismatch Detector: A model trained to detect when the intonation of a sentence does not match its punctuation, e.g., a question spoken as a statement.
Artifact Detector: A model that specifically listens for common synthesis artifacts like metallic ringing or hissing.
Automated Challenge Set Generation: A system to automatically find or generate difficult text samples (e.g., tongue twisters, complex numerical expressions) that are likely to cause a given model to fail, creating a constantly evolving stress test. We could potentially use an LLM to pursue this line of research. (Optional, if time allows.)
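As a first cut, two of the failure modes above (looping repetitions and truncations) can be caught with cheap text-level heuristics applied to the ASR transcript of the generated audio; the `n` and `tol` thresholds below are illustrative assumptions, and the learned prosody and artifact detectors would sit alongside these checks:

```python
from collections import Counter

def repeated_ngrams(text: str, n: int = 3, min_repeats: int = 2):
    """Return n-grams that recur in the transcript: a cheap proxy
    for looping/babbling hallucinations."""
    words = text.lower().split()
    counts = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return [ng for ng, c in counts.items() if c >= min_repeats]

def is_truncated(ref_text: str, transcript: str, tol: float = 0.7) -> bool:
    """Flag likely truncation when the transcript of the synthesized audio
    is much shorter than the input text (threshold `tol` is an assumption)."""
    return len(transcript.split()) < tol * len(ref_text.split())
```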
Key Research Challenges
Predictive Evaluation: Can you analyze a model's internal states or confidence scores before synthesis to predict whether it is likely to fail on a given piece of text? This could be used to build a fallback or self-correction mechanism directly into the TTS engine.
Multi-Lingual Generalization: How do these advanced metrics and diagnostic tools generalize across different languages and phonetic systems? You will lay the groundwork for a truly universal evaluation suite.
Your Impact
Your project will develop a fast, reliable, and diagnostic evaluation pipeline that will accelerate the selection of the best TTS systems among candidates for deployment in real use cases. By moving beyond slow, subjective listening tests, your work will enable us to iterate on models faster, catch regressions, and scientifically measure progress towards truly human-like speech synthesis.
We value original thinking and encourage you to help shape and redefine the project’s direction as your research uncovers new insights. AGIGO fosters an open, collaborative environment where ideas can evolve freely. Exceptional innovation often emerges where disciplines and perspectives intersect, and we actively support creative exploration that pushes the boundaries of what Voice-AI can achieve.
What You Bring
Required
- Master's student (preferred) or PhD student in Computer Science, Machine Learning, or a related field
- Strong Python programming skills and Git
- Solid understanding of ML fundamentals and MLOps
- Hands-on experience with PyTorch
- Fluent in English, highly motivated, and willing to learn
Plus Points
- Experience with Hugging Face models (for LLMs, ASR, or "speech-LLMs")
- Familiarity with audio benchmarks
- Knowledge of speech, ASR, and/or TTS concepts
- Hands-on experience with large-scale data processing pipelines
- Hands-on experience with audio AI (ASR/TTS) model training and development
What You Will Gain
- Direct company impact: your project will strengthen the agility and effectiveness of AGIGO's industry-leading Voice-AI research
- Mentorship: work closely with our expert team of researchers and engineers
- Top-tier AI infrastructure: access to GPU clusters with NVIDIA Hopper (H200) and Blackwell RTX GPUs
- Research visibility: we will actively support you in publishing your work at a top-tier conference or in a journal paper
- Disciplined and inspiring research environment: a team of sharp minds grounded in expertise, autonomy, and a shared pursuit of impactful breakthroughs
- Paid internship: market-level salary, flexible hours, and free coffee, drinks, fruit, and snacks
- Career path: this internship may lead to a full-time permanent role in AGIGO's world-class AI R&D team
How to Apply
To apply, please send your resume and a brief introduction to internships@agigo.ai with the subject line:
Research Internship – Evaluation Framework for Generative Speech – [Your Full Name].
By submitting your application, you agree to allow AGIGO to store and process your data for recruitment purposes. Unless otherwise requested, we may retain your data for up to one year to consider you for this or other future opportunities.
AGIGO™ is a registered trademark of AGIGO AG, Switzerland.