Job Openings

Research Internship: Universal Phonetizer for Next-Generation Voice AI

Full-time | Voice & Conversational AI | Global Enterprise AI Platform

Duration:    4-8 Months

Location:    Switzerland (Europe), on-site at AGIGO’s Zurich Office

About AGIGO

AGIGO™ is the first enterprise-grade conversational AI platform that empowers enterprises to transform customer engagement and business performance with high-agency AI agents: agents that match well-trained human customer-service agents in naturalness, responsiveness, and autonomous task resolution. Built for on-premises or hybrid deployment, with no reliance on third-party services, our proprietary platform gives enterprises full control, observability, and data sovereignty. Its unified core, tunable base models, and end-to-end design toolchain deliver context-aware, adaptable agents that engage directly with customers in real time. Founded in February 2025 in Switzerland by a team of 18 experienced AI pioneers, AGIGO is driven by a bold vision to lead the next major wave in AI by transforming how businesses interact with their customers.

Your Research Mission

In this internship, your mission is to architect and train a dynamic, universal neural phonetizer which, based on AGIGO’s groundbreaking proprietary innovation, is capable of inferring the correct pronunciation of any word, including new or foreign ones. When brought to a production-ready state, the model will replace static G2P dictionaries, thereby finally solving a long-standing problem that has hampered the user experience of voice-based conversational AI systems for decades. The phonetizer should handle accent variations with high accuracy and low latency and be designed for seamless integration with LLM-based voice-synthesis systems. At the forefront of voice-AI innovation, your project will lay the foundation for the commercial application of recent AGIGO inventions, further strengthen AGIGO’s leadership in voice synthesis, and ultimately contribute to an enhanced multilingual user experience in voice-enabled applications.

Phase 1: Data Foundation - Large-Scale Aligned Text Corpus Creation

Your first task will be to engineer a robust data-processing pipeline to create a massive, high-quality training corpus of isolated words with phoneme annotations, i.e., triplets of (word, phoneme sequence, audio). This involves:

Forced Alignment at Scale: You will utilize and refine forced-alignment tools to process over 100,000 hours of multilingual speech. The goal is to obtain precise timestamps for every phoneme in the dataset, creating a vast corpus of (audio_segment, phoneme_sequence) pairs (a minimal sketch of this step follows below).

Data Curation and Normalization: You will develop strategies to filter noisy alignments and normalize text while handling variations in pronunciation across our diverse datasets. This foundational work is critical for training a state-of-the-art model.
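
For illustration, the sketch below shows how frame-level phoneme timestamps could be extracted with torchaudio’s CTC forced-alignment utilities. The emissions, phoneme IDs, and frame rate are dummy placeholders; the actual acoustic model and alignment toolchain remain open design choices.

    # Minimal forced-alignment sketch (assumes torchaudio >= 2.1).
    import torch
    import torchaudio.functional as F

    FRAME_RATE = 50  # frames per second emitted by the acoustic model (assumption)

    # Per-frame log-probabilities from a CTC acoustic model: (batch=1, time, classes).
    # Random values stand in for real emissions here.
    log_probs = torch.randn(1, 200, 64).log_softmax(dim=-1)

    # Target phoneme IDs for the utterance (illustrative; 0 is reserved for blank).
    targets = torch.tensor([[7, 12, 3, 41, 9]], dtype=torch.int32)

    # Frame-level alignment of each target phoneme against the emissions.
    alignments, scores = F.forced_align(log_probs, targets, blank=0)

    # Collapse repeated frames and blanks into per-phoneme spans with timestamps.
    for span in F.merge_tokens(alignments[0], scores[0].exp(), blank=0):
        print(f"phoneme {span.token}: "
              f"{span.start / FRAME_RATE:.2f}s to {span.end / FRAME_RATE:.2f}s "
              f"(p={span.score:.2f})")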

Phase 2: Model Architecture & Training - Non-Autoregressive Phonetization

While we are open to exploring large, LLM-based sequence-to-sequence models, our primary focus is on production viability. We therefore prioritize non-autoregressive (NAR) architectures for their superior inference speed and modest compute requirements, including the ability to run on CPU-only machines.

The Model: The proposed architecture consists of a powerful, pre-trained speech encoder (e.g., a wav2vec2-style or HuBERT encoder) followed by a linear projection layer that maps to phonemes of the International Phonetic Alphabet (IPA).
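
As a rough illustration of this design (not a fixed implementation), the sketch below pairs a pre-trained wav2vec2 encoder from Hugging Face transformers with a linear projection to phoneme logits; the checkpoint name and the IPA inventory size of 200 are assumptions.

    # Encoder + linear-projection sketch (assumes torch and transformers installed).
    import torch
    import torch.nn as nn
    from transformers import Wav2Vec2Model

    class NARPhonetizer(nn.Module):
        def __init__(self, encoder_name="facebook/wav2vec2-xls-r-300m", num_phonemes=200):
            super().__init__()
            # Pre-trained speech encoder (wav2vec2-style).
            self.encoder = Wav2Vec2Model.from_pretrained(encoder_name)
            # Linear projection to IPA phoneme logits (+1 for the CTC blank).
            self.head = nn.Linear(self.encoder.config.hidden_size, num_phonemes + 1)

        def forward(self, waveform):
            # waveform: (batch, samples) of 16 kHz audio.
            features = self.encoder(waveform).last_hidden_state  # (batch, frames, hidden)
            return self.head(features).log_softmax(dim=-1)       # per-frame log-probs

    model = NARPhonetizer()
    log_probs = model(torch.randn(2, 16000))  # one second of dummy audio per item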

The Training Objective: The model will be trained end-to-end for phoneme recognition using the Connectionist Temporal Classification (CTC) loss function. This NAR approach predicts the full phoneme sequence for a given audio input in a single forward pass, making it extremely efficient. We are also open to exploring recent or alternative loss functions and methodologies.
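
The sketch below shows this objective with PyTorch’s built-in nn.CTCLoss; all shapes, the blank index of 0, and the random targets are illustrative only.

    # CTC training-objective sketch with dummy data.
    import torch
    import torch.nn as nn

    ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

    batch, frames, classes = 2, 100, 201  # 200 IPA phonemes + 1 blank (assumption)
    # CTC expects log-probabilities shaped (time, batch, classes).
    log_probs = torch.randn(frames, batch, classes, requires_grad=True).log_softmax(dim=-1)

    targets = torch.randint(1, classes, (batch, 12))  # phoneme IDs; 0 is the blank
    input_lengths = torch.full((batch,), frames)      # emission frames per utterance
    target_lengths = torch.tensor([12, 9])            # phonemes per utterance

    loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
    loss.backward()  # one forward pass per utterance; no autoregressive decoding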

Evaluation: The system will be evaluated intrinsically with Phoneme Error Rate (PER) on held-out test sets. The extrinsic evaluation will involve integrating your model into our end-to-end TTS pipeline and measuring the impact on synthesized-speech quality and intelligibility (e.g., via aggregated ASR-based WER), alongside other well-established objective metrics for TTS systems.
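
For reference, PER is the Levenshtein (edit) distance between reference and hypothesis phoneme sequences, normalized by the reference length, as in this small self-contained sketch:

    # Phoneme Error Rate via dynamic-programming edit distance.
    def phoneme_error_rate(ref, hyp):
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i  # cost of deleting all reference phonemes up to i
        for j in range(len(hyp) + 1):
            d[0][j] = j  # cost of inserting all hypothesis phonemes up to j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                substitute = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                d[i][j] = min(substitute, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    # One deletion out of seven reference symbols -> PER = 1/7.
    print(phoneme_error_rate(list("ʃɛdjuːl"), list("ʃɛduːl")))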

Key Research Challenges

This is where we move beyond the baseline. This project is a chance to do deep applied research with direct impact. We are eager to explore and innovate with you on topics such as:

Disentangled Representation: Can we train the model to separate phonetic content from speaker identity or prosodic information, leading to a more robust phoneme recognizer?

Extended Phoneme Inventories: Can we extend the phoneme inventory from single phonemes to bi-phone or tri-phone units?

Zero-Shot Cross-Lingual Phonetization: By training on a diverse set of languages, can the model generalize to pronounce words from unseen languages by learning a universal phonetic space?

Accent and Dialect Modeling: You can investigate conditioning the model on language/accent tags to produce tailored pronunciations, e.g., generating a US-English vs. UK-English phoneme sequence for "schedule".
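
One possible, purely hypothetical way to realize such conditioning is a learned accent embedding added to the encoder features before the phoneme projection; every name and dimension below is illustrative.

    # Hypothetical accent-conditioned projection head.
    import torch
    import torch.nn as nn

    class AccentConditionedHead(nn.Module):
        def __init__(self, hidden=768, num_phonemes=200, num_accents=8):
            super().__init__()
            self.accent_emb = nn.Embedding(num_accents, hidden)
            self.proj = nn.Linear(hidden, num_phonemes + 1)  # +1 for the CTC blank

        def forward(self, encoder_features, accent_id):
            # encoder_features: (batch, frames, hidden); accent_id: (batch,)
            conditioned = encoder_features + self.accent_emb(accent_id).unsqueeze(1)
            return self.proj(conditioned).log_softmax(dim=-1)

    head = AccentConditionedHead()
    features = torch.randn(1, 100, 768)     # dummy encoder output
    us = head(features, torch.tensor([0]))  # e.g. accent 0 = US English
    uk = head(features, torch.tensor([1]))  # e.g. accent 1 = UK English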

Your Impact

The final, trained model and system will be integrated directly as a core component of our production voice-synthesis service, immediately improving its capabilities for our users. Your work will have a tangible impact. We believe that the best ideas come from collaboration, and we are fully open to discussing how to enhance, modify, or expand the project scope based on your research insights, interests, and expertise.

What You Bring

Required

  • PhD student (preferred) or Master's student in Computer Science, Machine Learning, or a related field
  • Strong Python programming skills and Git
  • Solid understanding of ML fundamentals and MLOps
  • Hands-on experience with PyTorch
  • Fluent in English, highly motivated, and willing to learn

Plus Points

  • Experience with Hugging Face models (for LLMs, ASR, or "speech-LLMs")
  • Familiarity with phonetization and linguistics concepts
  • Knowledge of speech, ASR, and/or TTS concepts
  • Hands-on experience with large-scale data processing pipelines
  • Hands-on experience with audio AI (ASR/TTS) model training and development

What You Will Gain

  • Direct product impact: your research and code used in AGIGO’s production platform
  • Mentorship: work closely with our expert team of researchers and engineers
  • Top-tier AI infrastructure: access to GPU clusters with NVIDIA Hopper (H200) and Blackwell RTX GPUs
  • Research visibility: we will actively support you in publishing your work at a top-tier conference or in a journal paper
  • Disciplined and inspiring research environment: a team of sharp minds grounded in expertise, autonomy, and a shared pursuit of impactful breakthroughs
  • Paid internship: market-level salary, flexible hours, and free coffee, drinks, fruit, and snacks
  • Career path: this internship may lead to a full-time permanent role in AGIGO's world-class AI R&D team

How to Apply

To apply, please send your resume and a brief introduction to internships@agigo.ai with the subject line:

Research Internship – Universal Phonetizer – [Your Full Name]

By submitting your application, you agree to allow AGIGO to store and process your data for recruitment purposes. Unless otherwise requested, we may retain your data for up to one year to consider you for this or other future opportunities.

AGIGO™ is a registered trademark of AGIGO AG, Switzerland.
