Job Opportunities

ECAI is pleased to welcome Huawei as a sponsor of the conference. As part of their sponsorship, Huawei is offering exciting career opportunities in the field of Artificial Intelligence.

We invite interested candidates to explore and apply for the open research positions listed below, which reflect Huawei’s commitment to advancing AI through innovative research and collaboration.

Job Description: AI Governance Technology Researcher

Region: Paris, France.

Responsibilities:

AI technology is advancing rapidly and brings with it a series of ethical and governance challenges. This position focuses on research into AI governance technologies and standards.

  • AI governance theory and technology research: Participate in research on the fundamental theory and methodology of AI governance based on the latest AI progress, produce insights into academic and/or industry developments in this area, and propose new AI governance frameworks or systematic mechanisms accordingly. Carry out specific technology research for AI, e.g., accountability and agentic AI governance. Carry out research on AI system risk evaluation, classification, and mitigation mechanisms, and propose new risk governance frameworks.
  • Global AI governance regulation research: Follow the latest legislative developments on AI, participate in regulatory requirement analysis, and produce insights on it. Propose updates to the internal governance framework accordingly.
  • AI standards: Participate in international and regional AI governance standardization by cooperating and communicating with external partners and ecosystems, contributing especially to topics such as AI trustworthiness, AI risk governance, and AI governance technologies.

Requirements:

  1. Both of the following skills are required:
    • Strong technical background in AI governance, with practical experience in AI governance research or industry projects.
    • Deep understanding of the latest AI progress, e.g., AIGC and agentic AI.
  2. A background in applying AI to a specific industry vertical is preferred; experience in standardization is also preferred.
  3. A master’s degree or above in artificial intelligence, computer science, mathematics (statistics, optimization, or applied mathematics), or another relevant field of research is required; a PhD with a background in artificial intelligence is preferred.
  4. Fluent in written and spoken English.
  5. Excellent communication skills, teamwork spirit, and a high degree of initiative and autonomy are required.

Contact:

AI Algorithm Engineer/Natural Language Processing/Speech and Semantics

Region: Shenzhen/Beijing/Shanghai, China.

Job Preference:

Natural Language Processing / Speech and Semantics

Job Responsibilities:

  1. Engage in applied research and development in areas such as speech recognition and synthesis, machine translation, dialogue and question-answering systems, deep natural language processing, multimodal semantic understanding, and knowledge graphs.
  2. Be responsible for the design and development of core algorithms and platforms for natural language processing, enhancing the product’s core competitiveness and user experience.

Job Requirements:

  1. Major in Computer Science, Machine Learning, Statistics, Applied Mathematics, or related fields.
  2. Proficient in at least one commonly used deep learning framework, with a good understanding of common neural network architectures such as RNNs, CNNs, Transformers, GANs, and GNNs, and familiarity with the definitions and basic implementation methods of common NLP tasks.
  3. Strong research background and achievements, with a keen interest in algorithm research and strong ability to abstract business problems. Possess creative thinking and the ability to transform new ideas into engineering applications. Be passionate about research work and have excellent teamwork and communication skills.
  4. Strong programming skills, proficient in mainstream programming languages such as C++, Java, and Python.
  5. Have published relevant papers in high-level international conferences and academic journals, such as top conferences or workshops in computational linguistics and NLP, or have won awards in high-level competitions.

Contact:

AI Engineering Researcher/AI Safety/Value Alignment Researcher

Region: Shenzhen/Beijing/Shanghai, China.

Job Preference:

AI Safety/Value Alignment Researcher

Job Responsibilities:

This position focuses on innovative research in AI Safety/Trustworthy AI and value alignment theories and technologies in cutting-edge scenarios such as generative AI, agentic AI, and physical AI systems. This includes, but is not limited to, key areas such as value alignment of large models, fairness/non-discrimination, AI-human collaboration, transparency/explainability, and authenticity:

  1. Conduct theoretical research in the field of AI Safety/Trustworthy AI/Value Alignment, and make breakthroughs in fundamental theories and core technologies related to value alignment of large models, fairness/non-discrimination, AI-human collaboration, transparency/explainability, and authenticity.
  2. Address key challenges such as value alignment of large models, fairness/non-discrimination, and authenticity, and develop corresponding technologies for risk identification (assessment) and risk mitigation (alignment) of large models, as well as corresponding training data quality evaluation technologies.
  3. Conduct key technical insights and research innovations in AI-human collaboration and transparency/explainability, including, but not limited to, ethical risks in human-machine interaction, controllability of AI systems, and related risk scanning/risk control technologies.
  4. Based on regulatory requirements, standard analysis, and industry practices, conduct research on safety and trustworthiness compliance certification and evaluation frameworks and methods, and participate in the development of domestic and international standards related to trustworthiness and safety certification.
  5. Engage in exchanges and collaborations with academia and industry to continuously enhance the company’s technical influence in the field of AI Safety/Trustworthy AI and value alignment.

Job Requirements:

  1. A degree in artificial intelligence, computer science, software engineering, or related fields. Candidates with interdisciplinary research experience (e.g., sociology, law, philosophy) are preferred.
  2. Proficiency in programming languages such as Python and C++; a solid understanding of machine learning and deep learning; and skills in large model training, fine-tuning, inference optimization, and prompt engineering. Candidates with experience in large model alignment, large model evaluation, fine-tuning of reasoning chains, or research in AI regulations and AI certification standards are preferred.
  3. In-depth research and practical experience in the field of AI Safety/Trustworthy AI and value alignment; preference will be given to candidates who have published related papers in top AI journals or conferences, or have experience in large model-related competitions.
  4. Strong communication and teamwork skills.

Contact:
