AUDIMUS.SERVER creates complete, verbatim transcripts of every utterance in a meeting, rather than just a summary, in less than playback time. How offline transcription works:
MEDIA INPUT: Just select the media file you want to transcribe and let AUDIMUS.SERVER do the rest. The system accepts any type of media file.
PROCESSING: Through automatic speech recognition, AUDIMUS.SERVER produces a complete transcript of the content provided. The transcript is enriched with a semantic description of the recognized content and with markers for the speech and non-speech regions.
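The processing stage above yields timed regions, some carrying recognized words and some marking non-speech audio. A minimal sketch of what such an output might look like, with invented segment data and field names (the real AUDIMUS.SERVER output format is not documented here):

```python
from dataclasses import dataclass

# Hypothetical shape of a recognizer's output: a list of timed segments,
# each tagged as speech (with its transcript) or non-speech (music, silence).
@dataclass
class Segment:
    start: float        # seconds from the beginning of the media file
    end: float
    kind: str           # "speech" or "non-speech"
    text: str = ""      # recognized words, empty for non-speech segments

def speech_only(segments):
    """Keep only the speech regions, ready for transcript assembly."""
    return [s for s in segments if s.kind == "speech"]

segments = [
    Segment(0.0, 3.2, "non-speech"),                  # opening jingle
    Segment(3.2, 7.9, "speech", "good evening and welcome"),
    Segment(7.9, 9.1, "non-speech"),                  # pause
    Segment(9.1, 14.6, "speech", "here are tonight's headlines"),
]

transcript = " ".join(s.text for s in speech_only(segments))
```

Keeping the non-speech segments alongside the text is what lets downstream tools place captions only where someone is actually talking.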
EXPORT: Subtitles can be exported in multiple formats.
LANGUAGES: Different language models are provided, each tailored to a specific theme (e.g. news, sports). These are trained on large amounts of audio and text, with a total vocabulary exceeding 200,000 words. In addition, the phonetic transcriptions can be adjusted to adapt the models to local language varieties.
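Adjusting phonetic transcriptions typically means editing a pronunciation lexicon that maps words to phoneme sequences. A hedged sketch of the idea, where the words, phone symbols, and dictionary layout are all illustrative assumptions rather than AUDIMUS.SERVER's actual lexicon format:

```python
# Hypothetical base pronunciation lexicon: word -> phoneme sequence.
# The phone symbols below are illustrative only.
lexicon = {
    "tomato": ["t", "ah", "m", "ey", "t", "ow"],
}

# Local adaptation: override the standard pronunciation of an existing
# word and add a regional word the base lexicon lacks.
local_overrides = {
    "tomato": ["t", "ah", "m", "aa", "t", "ow"],
    "lorry": ["l", "aa", "r", "iy"],
}

# Later entries win, so overrides replace the base pronunciations.
adapted = {**lexicon, **local_overrides}
```

Because the recognizer looks words up in this lexicon, a handful of such overrides can be enough to adapt a general model to a local accent or vocabulary.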
METADATA: The speech metadata produced can be ingested into MAM systems, creating an accurate and fully automatic description of their contents. This exhaustive archive indexing allows relevant media files to be retrieved through text searches over the spoken content, rather than only over a set of manually created keywords.
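Searching spoken content by text usually comes down to an inverted index: each word points to the media files (and time offsets) where it was spoken. A minimal sketch with invented file names and transcripts, not the actual MAM integration:

```python
from collections import defaultdict

def build_index(transcripts):
    """Map each word to the (media_file, start_time) pairs where it occurs.

    `transcripts` maps a media file name to a list of
    (start_time_seconds, recognized_text) segments.
    """
    index = defaultdict(set)
    for media_file, segments in transcripts.items():
        for start, text in segments:
            for word in text.lower().split():
                index[word].add((media_file, start))
    return index

# Invented example data standing in for recognizer output.
transcripts = {
    "news_0412.mp4": [(12.0, "election results announced today")],
    "sports_0413.mp4": [(48.5, "final results of the match")],
}

index = build_index(transcripts)
hits = sorted(index["results"])   # every file and offset where "results" was spoken
```

Because each posting carries a start time, a hit can jump straight to the moment the word was spoken, not just to the file.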
TIMECODES: AUDIMUS.SERVER is able to parse several screenplay formats and compute accurate time codes by aligning the automatically extracted dialogues with the audio track. The caption files are produced according to user-defined formatting rules.
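Once each dialogue line has been aligned to an offset in the audio track, writing captions is a matter of rendering those offsets as timecodes. A small sketch of the standard SRT-style rendering (HH:MM:SS,mmm), shown as an assumption about one common output convention rather than the product's internal formatter:

```python
def srt_timecode(seconds):
    """Render a second offset as an SRT-style timecode, e.g. 01:01:01,500."""
    ms = round(seconds * 1000)
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"
```

So an aligned dialogue starting 3661.5 seconds into the track would be captioned at 01:01:01,500.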
FORMATS: Select the subtitle format most compatible with your workflow, a pre-formatted Word document, or an XML representation. The exported speech metadata can be pushed into an NLE video editor, a text formatting application, or a subtitle correction and translation tool.
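As one concrete example of the subtitle formats mentioned above, here is a sketch that emits SRT, whose cue structure (index, time range, text, blank line) is publicly standardized; the caption entries themselves are invented:

```python
def to_srt(captions):
    """Serialize (start, end, text) caption entries as an SRT document.

    Each cue is: a 1-based index, a "start --> end" time line,
    the caption text, and a blank separator line.
    """
    lines = []
    for i, (start, end, text) in enumerate(captions, 1):
        lines.append(str(i))
        lines.append(f"{start} --> {end}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

srt = to_srt([
    ("00:00:03,200", "00:00:07,900", "Good evening and welcome."),
    ("00:00:09,100", "00:00:14,600", "Here are tonight's headlines."),
])
```

Other targets (a Word document, an XML representation) would serialize the same timed entries through a different writer, which is why keeping the metadata format-neutral until export matters.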