- Worldwide
- Ongoing
- Fixed rate per approved asset
Description:
Participants will work as evaluators to review and refine rubric items used to assess the quality of Generative AI responses. Evaluators will be given prompts and asked to define criteria (rubrics) that determine whether AI-generated answers meet high-quality standards.
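As a rough illustration only, a single rubric item can be thought of as a named criterion plus a description of what a passing and a failing response look like. The sketch below is a minimal, hypothetical representation; the field names (criterion, weight, pass_description, fail_description) are assumptions for illustration and not the project's actual schema.

```python
# Hypothetical sketch of one rubric item used to judge an AI-generated answer.
# Field names and weighting scheme are illustrative assumptions, not the real schema.
from dataclasses import dataclass

@dataclass
class RubricItem:
    criterion: str          # what the AI response is judged on
    weight: int             # relative importance of this criterion
    pass_description: str   # what a response meeting the criterion looks like
    fail_description: str   # what a response missing the criterion looks like

example_item = RubricItem(
    criterion="Factual accuracy",
    weight=3,
    pass_description="All claims in the answer are verifiably correct.",
    fail_description="The answer contains at least one incorrect or unsupported claim.",
)
```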
Purpose:
To support the training and fine-tuning of Large Language Models (LLMs) by improving the accuracy and effectiveness of evaluation rubrics.