- Worldwide
- Ongoing
- Fixed rate per approved asset
Description:
We are seeking meticulous, highly motivated evaluators to join our team and assess the safety and severity of content, specifically hate speech, harassment, dangerous content, and sexually explicit material.
Evaluators gauge the severity of these safety risks in both prompts and responses, label the content accordingly, and rate overall quality on relevance, coherence, and factuality (safety attributes are excluded from the quality rating).
Purpose:
This project will play a pivotal role in refining AI-generated content by ensuring it complies with safety standards.