PUBLICATION

Translation as a Scalable Proxy for Multilingual Evaluation

Published

16 Jan 2026

Authors

Sheriff Issaka, Erick Rosas Gonzalez, Lieqi Liu, Evans Kofi Agyei, Lucas Bandarkar, Nanyun Peng, David Ifeoluwa Adelani, Francisco Guzmán, Saadia Gabriel

Abstract

The rapid proliferation of LLMs has created a critical evaluation paradox: while LLMs claim multilingual proficiency, comprehensive non-machine-translated benchmarks exist for fewer than 30 languages, leaving >98% of the world's 7,000 languages in an empirical void. Traditional benchmark construction faces scaling challenges such as cost, scarcity of domain experts, and data contamination. We evaluate the validity of a simpler alternative: can translation quality alone indicate a model's broader multilingual capabilities? Through systematic evaluation of 14 models (1B-72B parameters) across 9 diverse benchmarks and 7 translation metrics, we find that translation performance is a good indicator of downstream task success (e.g., Phi-4, median Pearson r: MetricX = 0.89, xCOMET = 0.91, SSA-COMET = 0.87). These results suggest that the representational abilities supporting faithful translation overlap with those required for multilingual understanding. Translation quality thus emerges as a strong, inexpensive first-pass proxy for multilingual performance, enabling translation-first screening with targeted follow-up for specific tasks.
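To make the correlation analysis concrete, below is a minimal Python sketch of the kind of computation the abstract describes: correlating a model's per-language translation-metric scores with its per-language scores on downstream benchmarks, then reporting the median Pearson r across benchmarks. All scores, language codes, and benchmark names here are hypothetical placeholders, not data from the paper, and the sketch is not the authors' actual pipeline.

```python
# Sketch: correlate per-language translation quality with downstream benchmark
# performance for one model, then summarize with the median Pearson r.
# All numbers below are hypothetical placeholders, not results from the paper.
from statistics import median
from scipy.stats import pearsonr

# Hypothetical per-language translation quality scores (e.g., from a metric
# such as xCOMET), keyed by language code.
translation_scores = {"swh": 0.82, "yor": 0.64, "hau": 0.71, "amh": 0.58, "ibo": 0.66}

# Hypothetical per-language accuracies on two downstream benchmarks.
benchmark_scores = {
    "benchmark_a": {"swh": 0.74, "yor": 0.55, "hau": 0.63, "amh": 0.49, "ibo": 0.57},
    "benchmark_b": {"swh": 0.69, "yor": 0.51, "hau": 0.60, "amh": 0.46, "ibo": 0.54},
}

correlations = []
for name, scores in benchmark_scores.items():
    # Correlate only over languages covered by both the metric and the benchmark.
    langs = sorted(set(translation_scores) & set(scores))
    x = [translation_scores[lang] for lang in langs]
    y = [scores[lang] for lang in langs]
    r, _ = pearsonr(x, y)  # Pearson correlation coefficient and p-value
    correlations.append(r)
    print(f"{name}: r = {r:.2f}")

print(f"median Pearson r = {median(correlations):.2f}")
```

Summarizing with the median rather than the mean keeps the headline statistic robust to a single outlier benchmark, which is one plausible reason a median Pearson r is the figure quoted above.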

Subjects

Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
