Machine Translation
Machine translation fills in first drafts for target languages so translators don’t start from a blank cell. Comvi integrates multiple providers; you enable the ones that fit your workload and budget, then pick a provider per request or fall back to a designated primary.
Provider matrix
| Provider | Type | Glossary behaviour | Best for |
|---|---|---|---|
| Google Translate | Classical neural | Hints only (shown to editor, not applied by provider) | Wide coverage (130+ languages), high volume |
| DeepL | Classical neural | Hints only | Strong European + East Asian quality |
| Amazon Translate | Classical neural | Hints only | AWS-native pipelines |
| Azure Translator | Classical neural | Hints only | Azure-native pipelines |
| OpenAI | LLM (GPT models) | Injected into prompt, provider respects terms | Marketing copy, tone, context-sensitive content |
| Anthropic | LLM (Claude models) | Injected into prompt, provider respects terms | Nuanced long-form, brand voice |
Classical vs LLM
Classical neural providers are fast, cheap, and consistent. They excel at straightforward UI strings — labels, buttons, short descriptions. They do not read your instructions; whatever you pass as a glossary is surfaced to the human editor as a hint, not applied by the provider itself.
LLM providers are slower and cost more per request, but they follow custom instructions, maintain tone and register, and respect glossaries by prompt injection. Use them for marketing pages, onboarding flows, emails, and anywhere a generic translation would feel off.
A reasonable default: classical for bulk fills on new languages, LLM for high-visibility surfaces (landing page, paid-conversion flows, support emails).
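That default can be sketched as a small routing rule. The surface names and function below are illustrative assumptions, not part of Comvi’s API:

```python
# Illustrative routing: LLM providers for high-visibility surfaces,
# classical neural MT for everything else. Surface names are examples.
HIGH_VISIBILITY = {"landing-page", "paid-conversion", "support-email"}

def pick_provider(surface: str) -> str:
    """Return the provider family to use for a translation request."""
    if surface in HIGH_VISIBILITY:
        return "llm"        # tone and context matter here
    return "classical"      # fast, cheap bulk fill for plain UI strings

print(pick_provider("landing-page"))
```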
Provider API keys are configured server-side by the Comvi administrator. Project-level settings control which providers are enabled here and how they behave.
- Open MT settings: go to Project Settings → Machine Translation.
- Enable providers: toggle on the providers you want available in this project. You can enable several.
- Set a primary provider: the primary provider runs when the editor or bulk action doesn’t specify one.
- Pick LLM models: per LLM provider, pick a model. Cheaper, faster models (GPT-5 Nano, Claude Haiku) are fine for most UI strings; reserve larger models for marketing copy.
- Decide whether to auto-apply: when auto-apply is enabled, MT suggestions are written automatically as `Not reviewed` so a human can verify them later. Leave it off if you want translators to explicitly accept each suggestion.
Single-key translation
In the Translations editor, click the MT icon on any target-language cell. Comvi sends the source value to the primary provider and drops the result in as `Not reviewed`. Review, edit if needed, and mark `Translated`.
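The single-key flow amounts to a small state transition. A minimal sketch, with the provider call stubbed out — the names and shapes here are assumptions, not Comvi’s actual API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Cell:
    source: str
    target: Optional[str] = None
    status: str = "Not translated"

def mt_fill(cell: Cell, translate: Callable[[str], str]) -> Cell:
    """Fill one target cell and mark it Not reviewed for a human pass.

    MT output never enters the lifecycle as Translated directly."""
    cell.target = translate(cell.source)
    cell.status = "Not reviewed"
    return cell

# Stub provider in place of a real MT call.
cell = mt_fill(Cell(source="Save"), translate=lambda s: f"[de] {s}")
print(cell.status)  # Not reviewed
```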
Bulk translation
- Filter to what you need: on the Translations page, filter by status `Not translated` (and optionally by language or namespace) to scope the batch.
- Select keys and open bulk actions: tick the rows, click the bulk menu, choose Translate.
- Pick languages and provider: select target languages. Optionally override the primary provider for this batch — e.g. “this one’s a marketing page, run it through Anthropic instead”.
- Start: the job runs in the background. Large batches take minutes depending on the provider. Results land as `Not reviewed` for the human pass.
Glossary behaviour
Glossaries define preferred target terms for specific source terms. They interact with MT differently depending on provider type:
- LLM providers (OpenAI, Anthropic) — matching glossary terms are injected into the prompt with their preferred translations. The provider is explicitly told to use them. In practice this is reliable but not guaranteed — always review critical terminology.
- Classical providers (Google, DeepL, Azure, Amazon) — glossary terms are shown to the editor as hints during review. Comvi does not apply provider-specific glossary APIs. If you need enforced terminology with a classical provider, use a find-and-replace pass after bulk translation, or switch that batch to an LLM provider.
This is a deliberate product choice — see Glossaries for rationale.
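Both paths can be sketched in a few lines — prompt injection for the LLM path, and the post-hoc find-and-replace pass suggested above for classical output. Function names, prompt wording, and data shapes are illustrative assumptions, not Comvi internals:

```python
def inject_glossary(source: str, glossary: dict) -> str:
    """LLM path: inject matching glossary terms into the prompt."""
    hits = {s: t for s, t in glossary.items() if s.lower() in source.lower()}
    terms = "\n".join(f'- translate "{s}" as "{t}"' for s, t in hits.items())
    return ("Translate the text below. Use these preferred terms:\n"
            f"{terms}\n\nText: {source}")

def enforce_terms(translated: str, replacements: dict) -> str:
    """Classical path: post-hoc find-and-replace on the MT output.

    Maps unwanted renderings to preferred ones. Naive substring
    replace; real usage needs word-boundary handling."""
    for unwanted, preferred in replacements.items():
        translated = translated.replace(unwanted, preferred)
    return translated
```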
Translation lifecycle and review
Machine translation output always enters the lifecycle as `Not reviewed` — never directly as `Translated`. This means:
- Bulk MT fills a wave of `Not reviewed` values
- A translator or reviewer reads each, edits or approves it → `Translated`
- The next Publish picks up the `Translated` values if your CDN status filter includes only `Translated`
To keep raw MT out of production, configure the CDN status filter to include only `Translated`.
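The publish gate is conceptually just a status filter. A minimal sketch — not the actual CDN configuration format:

```python
def publishable(entries, allowed=("Translated",)):
    """Keep only entries whose status passes the CDN status filter."""
    return [e for e in entries if e["status"] in allowed]

entries = [
    {"key": "cta.title", "status": "Translated"},
    {"key": "cta.body", "status": "Not reviewed"},  # raw MT, held back
]
print([e["key"] for e in publishable(entries)])  # ['cta.title']
```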
Cost and throughput
Providers bill per character (classical) or per token (LLM). Comvi records usage per request so you can see where the spend is going. Rough rules of thumb:
- Classical neural: fractions of a cent per short string; large bulk runs stay cheap.
- LLM: one to two orders of magnitude more per string, slower throughput, higher quality.
Run a small test batch on a new provider before bulk-translating a large project.
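A back-of-envelope estimator for those rules of thumb. The unit prices below are placeholder assumptions for illustration only — check your provider’s actual pricing:

```python
def estimate_cost(strings, provider_type):
    """Rough spend estimate: classical bills per character, LLM per token.

    Rates are illustrative placeholders, not real provider prices."""
    chars = sum(len(s) for s in strings)
    if provider_type == "classical":
        return chars * 0.00002    # e.g. ~$20 per million characters
    tokens = chars / 4 * 2        # ~4 chars/token, prompt + completion
    return tokens * 0.0005        # LLM rates run far higher per string
```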
Language pair support
- Google Translate — widest, 130+ languages
- Azure Translator — 100+
- DeepL — 30+, strongest in European + East Asian pairs
- Amazon Translate — 70+
- OpenAI / Anthropic — most widely-spoken languages; quality degrades on less common ones
Unsupported pairs in a bulk run are skipped and reported in the job result.
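The skip-and-report behaviour can be sketched as follows — the coverage set and result shape are invented for illustration:

```python
def run_batch(keys, pair, supported_pairs):
    """Translate keys for one language pair; unsupported pairs
    are skipped with a per-key reason for the job summary."""
    if pair not in supported_pairs:
        reason = f"language pair {pair[0]}->{pair[1]} not supported"
        return {"translated": [], "skipped": [(k, reason) for k in keys]}
    return {"translated": list(keys), "skipped": []}

supported = {("en", "de"), ("en", "fr")}
result = run_batch(["home.title"], ("en", "gsw"), supported)
print(result["skipped"])  # [('home.title', 'language pair en->gsw not supported')]
```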
Limits
- MT output always lands as `Not reviewed`. There is no “auto-approve MT” setting.
- Glossary injection is LLM-only. Classical providers get hints for humans, not terms.
- Per-request quotas depend on the provider account tier configured on the server side.
Troubleshooting
Bulk translation skipped some keys
Usually language-pair coverage. Check the job summary for a skipped-with-reason breakdown.
LLM output ignores my instructions
If custom instructions are configured through the API, they are sent with every request, but LLMs can stray on long batches. Shorten and sharpen the instruction, switch to a stronger model, or split the batch.
Glossary terms aren’t being used
You are probably on a classical provider. Check the provider type in the matrix above — switch to an LLM provider or handle the term post-hoc.
Quality is poor on a specific language
Try a different provider for that pair. DeepL often wins in Europe + East Asia; Google wins on coverage; LLMs win on nuance but lose on consistency for very short strings.