Rubric Judging
Description:
Participants will work as evaluators to review and refine rubric items used to assess the quality of Generative AI responses. Evaluators will be given prompts and asked to define criteria (rubrics) that determine whether AI-generated answers meet high-quality standards.
Purpose:
To support the training and fine-tuning of Large Language Models (LLMs) by improving the accuracy and effectiveness of evaluation rubrics.
Main requirements:
- Native speaker of the required language
- Fluent in English
- Strong analytical and creative thinking skills
- Excellent communication and collaboration abilities
- Familiarity with Generative AI systems
- Excluded countries: Applicants cannot be located in Afghanistan, Argentina, Bolivia, Brazil, Chile, China, Colombia, Cuba, Ecuador, Iran, Iraq, Mexico, North Korea, Panama, Russia, Sudan, Syria, Ukraine (Crimea, Luhansk, Donetsk), United Kingdom, Venezuela, or Yemen
About OneForma
OneForma is a global digital and technology services company. We combine data, intelligence, and experience to deliver human-centric solutions to complex business challenges.
OneForma is an equal-opportunity employer and will not discriminate against any of our applicants based on race, gender, religion, or cultural background.
This role is available in multiple languages.
Select the one most relevant to you.