AI Proofreader

What's the AI Proofreader?

The AI Proofreader is an LLM-powered safeguard layer that proofreads caption lines in real time, based on the event's AI Context.

It works with both methods of producing captions (human and ASR).

It works best with events where the language used is highly specific and technical or where correct spelling of brands and names is critical.

What it does

The AI Proofreader dynamically supervises caption lines and applies the following checks:

  • Check spelling (names, brands, places, products, etc.) & grammar
  • Infer missing words in gaps in the conversation
  • Fix punctuation & formatting
  • Tidy sentence structure (remove filler words)
  • Apply other custom rules
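The checks above can be pictured as a small pipeline that each caption line passes through. The sketch below is purely illustrative and is not the product's actual implementation: the rule functions, the `CONTEXT_NAMES` glossary, and the filler list are all hypothetical stand-ins for what the LLM does internally.

```python
# Hypothetical glossary of names supplied by the event's AI Context.
CONTEXT_NAMES = {"khazad-dum": "Khazad-dûm", "gandalf": "Gandalf"}

# Hypothetical filler words for the "tidy sentence structure" check.
FILLERS = {"um", "uh", "er"}

def fix_names(line: str) -> str:
    """Replace misspelled known names using the context glossary."""
    words = line.split()
    return " ".join(CONTEXT_NAMES.get(w.lower().strip(".,"), w) for w in words)

def remove_fillers(line: str) -> str:
    """Drop common filler words from the line."""
    return " ".join(w for w in line.split() if w.lower().strip(",") not in FILLERS)

def fix_punctuation(line: str) -> str:
    """Capitalise the first letter and ensure terminal punctuation."""
    line = line.strip()
    if not line:
        return line
    line = line[0].upper() + line[1:]
    if line[-1] not in ".?!":
        line += "."
    return line

def proofread(line: str) -> str:
    """Run each check in turn, as the Proofreader conceptually does."""
    for check in (fix_names, remove_fillers, fix_punctuation):
        line = check(line)
    return line
```

For example, `proofread("um, gandalf fell in khazad-dum")` corrects both names, drops the filler, and adds the final punctuation.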

What it doesn't do

  • It doesn't listen to audio or process it
  • It is not the same as the ASR dictionary; instead, the AI Proofreader acts after the ASR engines have processed the audio and produced text.
  • It doesn't do fact checking.

Examples

A great example is the work of Professor Tolkien. We shared two caption lines (one produced by a human captioner and one produced by ASR) with the AI Proofreader.

Original: "dwarves live in cause of doom"
Corrected: "dwarves live in Khazad-dûm"

Original: "Boromiet took over when scanning off fails"
Corrected: "Boromir took over when Gandalf fell"
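The corrections above show how names from the event context can rescue misheard words. Below is a minimal, hypothetical sketch of that idea using Python's difflib to fuzzy-match a transcribed word against a context glossary; the real Proofreader uses an LLM, not string distance, so treat this only as an intuition pump.

```python
from difflib import get_close_matches

# Hypothetical glossary drawn from the event's AI Context.
GLOSSARY = ["Khazad-dûm", "Gandalf", "Boromir", "Tolkien"]

def correct_word(word: str, cutoff: float = 0.6) -> str:
    """Return the closest glossary entry, or the word unchanged."""
    matches = get_close_matches(word, GLOSSARY, n=1, cutoff=cutoff)
    return matches[0] if matches else word
```

Here `correct_word("Boromiet")` returns "Boromir", while an ordinary word like "took" is left untouched because nothing in the glossary is close enough.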

LLM Engines

The AI Proofreader is built on top of the best LLM engines available. It makes use of the extensive world knowledge in the LLMs' training data and also applies the event-specific AI Context, as configured in the AI Context Builder.

Engines available:

  • OpenAI: GPT 4o Mini, GPT 4o, GPT 5 Mini, GPT 5 Nano
  • Anthropic: Claude 4 Sonnet, Claude 3.5 Haiku

We use different LLM engines because each has its own strengths and weaknesses: each model offers a different balance between speed and depth of thinking. We currently limit the choice to models available in the OpenAI and Anthropic APIs because, in our testing, these are the only publicly available models able to correctly follow the rules we set for the AI Proofreader.
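The speed-versus-depth trade-off could be captured in a small engine catalogue like the sketch below. The rankings, and the idea of picking an engine with a single preference flag, are illustrative assumptions, not the product's actual configuration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Engine:
    provider: str
    model: str
    relative_speed: int   # higher = faster (illustrative ranking)
    relative_depth: int   # higher = more thorough (illustrative ranking)

# Hypothetical catalogue mirroring the providers listed above.
ENGINES = [
    Engine("OpenAI", "GPT 4o Mini", relative_speed=3, relative_depth=1),
    Engine("OpenAI", "GPT 4o", relative_speed=2, relative_depth=2),
    Engine("Anthropic", "Claude 3.5 Haiku", relative_speed=3, relative_depth=1),
    Engine("Anthropic", "Claude 4 Sonnet", relative_speed=1, relative_depth=3),
]

def pick_engine(prefer: str) -> Engine:
    """Pick the fastest or deepest engine from the catalogue."""
    key = (lambda e: e.relative_speed) if prefer == "speed" else (lambda e: e.relative_depth)
    return max(ENGINES, key=key)
```

With these (invented) rankings, `pick_engine("depth")` selects Claude 4 Sonnet and `pick_engine("speed")` selects GPT 4o Mini.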

If you think other LLM models should be available, please contact us and we'll be happy to consider it.

Before or After

The AI Proofreader can run before or after the captions are distributed to destinations (and become visible to the audience). The "Required for Distribution" toggle controls this.

We recommend you keep this toggle on for all events.


When should I have it on?

Keep it on if you are using machine translation, or livestreaming captions or translations. In these cases the text must be delivered to the audience in its final form, as no further corrections can reach it.

When can I have it off?

You can turn it off if you are providing captions via the browser or overlay. These are the only destinations that can display further caption corrections after a line has been delivered.

Rules & Custom Rules

Proofreader rules

The AI Proofreader comes with a set of rules that are applied by default. These rules are:

  • Check Names and Proper Nouns (default: on): the Proofreader checks text against the names provided in the context.
  • Allow Factual Errors (default: on): this toggle prevents factual corrections.
  • Custom Rules: if you have preferences, such as preferring lowercase for certain words, or want the Proofreader to filter out specific words, you can use conversational language in this field to explain what you want.
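Conceptually, these rules are combined into instructions for the LLM. The sketch below shows one hypothetical way to assemble them; the parameter names mirror the rules above, but the instruction text itself is invented.

```python
def build_instructions(check_names: bool,
                       allow_factual_errors: bool,
                       custom_rules: str = "") -> str:
    """Assemble proofreading instructions from the rule toggles (illustrative)."""
    parts = []
    if check_names:
        parts.append("Check names and proper nouns against the event context.")
    if allow_factual_errors:
        parts.append("Do not correct factual errors; fix only transcription mistakes.")
    if custom_rules:
        parts.append(custom_rules.strip())
    return "\n".join(parts)
```

A custom rule written in conversational language, such as "Lowercase 'internet'.", would simply be appended to the generated instructions.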

AI Context

The AI Proofreader uses the AI Context to understand the event and its content. The context can be fine-tuned at any time during the event and is automatically consulted while the AI Proofreader is running.

Latency

Latency depends on the underlying LLM engine, but is typically under 1 second (around 400-500 ms).

Last updated: January 13, 2026 at 09:22 AM
