AI Engineering Researcher (AI Safety/Value Alignment Researcher)
Region: Shenzhen/Beijing/Shanghai, China.
Position:
AI Safety/Value Alignment Researcher
Job Responsibilities:
This position focuses on innovative research into the theories and technologies of AI Safety/Trustworthy AI and value alignment in cutting-edge scenarios such as generative AI, agentic AI, and physical AI systems. Key areas include, but are not limited to, value alignment of large models, fairness/non-discrimination, AI-human collaboration, transparency/explainability, and authenticity:
- Conduct theoretical research in AI Safety/Trustworthy AI and value alignment, pursuing breakthroughs in the fundamental theories and core technologies behind value alignment of large models, fairness/non-discrimination, AI-human collaboration, transparency/explainability, and authenticity.
- Tackle key challenges such as value alignment of large models, fairness/non-discrimination, and authenticity; develop technologies for risk identification (assessment) and risk mitigation (alignment) of large models, along with methods for evaluating training data quality.
- Develop key technical insights and research innovations in AI-human collaboration and transparency/explainability, covering, but not limited to, ethical risks in human-machine interaction, controllability of AI systems, and related risk-scanning and risk-control technologies.
- Drawing on regulatory requirements, standards analysis, and industry practice, research frameworks and methods for safety and trustworthiness compliance certification and evaluation, and participate in developing domestic and international standards for trustworthiness and safety certification.
- Engage in exchanges and collaboration with academia and industry to continuously strengthen the company’s technical influence in AI Safety/Trustworthy AI and value alignment.
Job Requirements:
- A degree in artificial intelligence, computer science, software engineering, or a related field; candidates with interdisciplinary research experience (e.g., sociology, law, philosophy) are preferred.
- Proficiency in programming languages such as Python and C++; a solid grounding in machine learning and deep learning; and hands-on skills in large-model training, fine-tuning, inference optimization, and prompt engineering. Candidates with experience in large-model alignment, large-model evaluation, chain-of-thought fine-tuning, or research on AI regulations and AI certification standards are preferred.
- In-depth research and practical experience in AI Safety/Trustworthy AI and value alignment; preference will be given to candidates who have published relevant papers in top AI journals or conferences, or who have experience in large-model-related competitions.
- Strong communication and teamwork skills.