Olympus is a complete framework for implementing spoken dialog systems. It was created at Carnegie Mellon University (CMU) in the late 2000s and benefits from ongoing improvements in functionality. Its main purpose is to help researchers interested in conversational agents implement and test their ideas on complete systems, without having to build them on their own. To this end, Olympus incorporates the RavenClaw dialog manager, which supports mixed-initiative interaction, as well as components that handle speech recognition, understanding, generation, and synthesis. Olympus uses a Galaxy message-passing layer to integrate its components and supports multi-modal interaction. The Olympus/RavenClaw distribution includes several example systems that demonstrate its various features.
The Olympus architecture incorporates modules developed by researchers at Carnegie Mellon and by others, in previous and ongoing research projects. These include:
- Dialog management is handled by RavenClaw, a task-independent dialog engine based on the AGENDA dialog manager first introduced as part of the CMU Communicator system.
- Low-level interaction management (e.g., the exact timing of the start and end of utterances, the handling of interruptions) is performed by the Apollo interaction manager.
- For speech recognition, Olympus currently supports engines from the CMU Sphinx family (Sphinx 2, Sphinx 3, PocketSphinx) and provides an interface for adding support for other engines.
- Natural language understanding is done by Phoenix, a robust parser based on CFG-like grammars.
- The Helios component integrates information from various levels and assigns a confidence measure to each user input.
- Natural language generation uses the Rosetta template-based generation system.
- Kalliope, the synthesis interface, currently allows the use of SAPI 5-compliant TTS engines, CMU's Flite, and the proprietary Cepstral Swift engine.
- The communication between the different modules is handled by the MIT/MITRE Galaxy Communicator architecture.
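To make the flow through these modules concrete, here is a toy sketch of a hub-and-spoke pipeline in the spirit of the Galaxy Communicator design, where a central hub routes message frames between components. The component names echo the Olympus modules above, but the frame format, class names, and routing logic are simplified inventions for illustration only; the real Galaxy API and Olympus message protocols differ.

```python
# Toy hub that routes a frame (a plain dict) through a fixed chain of
# components, loosely mimicking the hub-and-spoke message passing that
# Galaxy provides for Olympus. Purely illustrative; not the Galaxy API.
from typing import Callable, Dict, List

class ToyHub:
    """Registers components and pushes each frame through them in order."""
    def __init__(self) -> None:
        self.components: List[Callable[[Dict], Dict]] = []

    def register(self, component: Callable[[Dict], Dict]):
        self.components.append(component)
        return component  # usable as a decorator

    def process(self, frame: Dict) -> Dict:
        for component in self.components:
            frame = component(frame)
        return frame

hub = ToyHub()

@hub.register
def recognizer(frame):      # stand-in for a Sphinx recognition engine
    frame["hyp"] = frame["audio"].lower()
    return frame

@hub.register
def parser(frame):          # stand-in for the Phoenix parser
    frame["slots"] = {"query": frame["hyp"]}
    return frame

@hub.register
def confidence(frame):      # stand-in for Helios confidence annotation
    frame["confidence"] = 0.9
    return frame

@hub.register
def dialog_manager(frame):  # stand-in for RavenClaw
    frame["act"] = "inform" if frame["confidence"] > 0.5 else "confirm"
    return frame

result = hub.process({"audio": "LEAVING FROM AIRPORT"})
```

The point of the hub design is that components never call each other directly; each only consumes and produces frames, so a module (say, a different recognizer) can be swapped without touching the rest of the pipeline.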
Funding for the development of Olympus and the systems that it is based on has been provided by a variety of sponsors, including the Defense Advanced Research Projects Agency (DARPA) for CMU Communicator and CALO, the Office of Naval Research for LARRI, the National Science Foundation for the Let's Go project (grants 0208835 and 0741773) and for learning-related capabilities, and The Boeing Corporation for development of the Treasure Hunt system.
The Olympus system and several example applications are available under a modified BSD Open Source license and are accessible through our public Subversion repository. For complete download instructions, see: Download.
We continue to work on a complete set of documentation for Olympus/RavenClaw. Some of it is already (at least partially) available, such as a tutorial and some reference pages. We will post announcements here and on the distribution mailing list as more of it is completed. In the meantime, feel free to post questions to the developers' mailing list or get in touch with people at Carnegie Mellon.
Questions and Problems
For additional information, send email to the developers' mailing list (olympus-developers@cs.cmu.edu).
- Development News
- Speech Lab Mailing Lists
- Join the mailing list or view the mailing list archives.
Interested in Contributing?
If you are interested in contributing to this wiki, please get in touch with one of the current maintainers. Unrestricted editing access has been curtailed, as we've noticed that some people from the web have been making spurious edits and adding unrelated materials. If you are working with Olympus/Ravenclaw and would like to help improve on-line documentation, please let us know and we'll set you up.
If you are a developer and have been making changes to the code that you believe could benefit others, please consider becoming an official contributor. Getting started is as simple as announcing yourself on the developers' mailing list. Even if all you do is bring attention to bugs and propose new features, you'd be contributing.
Many people have contributed to Olympus over the years (including prior to its existence...). The main contributors are:
Other (sometimes significant) contributions have been made by:
- Alan W Black
- Maxine Eskenazi
- Scott Judy
- Andrew Hoskins
- Brian Langner
- Matthew Marge
- Udhyakumar Nallasamy
- Alexander I. Rudnicky
- Jahanzeb Sherwani
- Svetlana Stenchikova
- Yitao Sun
The ideas incorporated in Olympus/Ravenclaw are described in the following papers. You can learn about more recent developments by consulting papers that cite the first paper below, as well as the others:
- Bohus, Dan & Alexander I. Rudnicky (2009), "The RavenClaw dialog management framework: Architecture and systems", Computer Speech & Language
- Bohus, Dan & Alexander I. Rudnicky (2003), "RavenClaw: Dialog Management Using Hierarchical Task Decomposition and an Expectation Agenda", Eurospeech 2003
- Bohus, Dan; Antoine Raux; Thomas K. Harris; Maxine Eskenazi & Alexander I. Rudnicky (2007), "Olympus: an open-source framework for conversational spoken language interface research", Bridging the Gap: Academic and Industrial Research in Dialog Technology workshop at HLT/NAACL 2007
- Raux, Antoine & Maxine Eskenazi (2007), "A Multi-Layer Architecture for Semi-Synchronous Event-Driven Dialogue Management", IEEE Automatic Speech Recognition and Understanding Workshop
- Rudnicky, Alexander I. & Wei Xu (December 1999), "An agenda-based dialog management architecture for spoken language systems", IEEE ASRU Workshop, p. I-337
- Ward, Wayne & Sunil Issar (March 1994), "Recent improvements in the CMU spoken language understanding system", ARPA Human Language Technology Workshop