About AIRoA
The AI Robot Association (AIRoA) is launching a groundbreaking initiative: collecting one million hours of humanoid robot operation data with hundreds of robots, and leveraging it to train the world’s most powerful Vision-Language-Action (VLA) models.
What makes AIRoA unique is not only the unprecedented scale of real-world data and humanoid platforms, but also our commitment to making everything open and accessible. We are building a shared “robot data ecosystem” where datasets, trained models, and benchmarks are available to everyone. Researchers around the world will be able to evaluate their models on standardized humanoid robots through our open evaluation platform.
For researchers, this means an opportunity to:
- Work on fundamental challenges in robotics and AI: multimodal learning, tactile-rich manipulation, sim-to-real transfer, and large-scale benchmarking.
- Access state-of-the-art infrastructure: hundreds of humanoid robots, GPU clusters, high-fidelity simulators, and a global-scale evaluation pipeline.
- Collaborate with leading experts across academia and industry, and publish results that will shape the next decade of robotics.
- Contribute to an initiative that will redefine the future of embodied AI, with all results made open to the world.
As we prepare for our official launch on October 1, 2025, we are assembling a world-class team ready to pioneer the next era of robotics.
We invite ambitious researchers and engineers to join us in this bold challenge to rewrite the history of robotics.
Job Description
In this role, you will:
- Design and implement data preprocessing pipelines for multimodal robot datasets
- Train VLA models using supervised learning, RL, fine-tuning, RLHF, and training from scratch
- Develop and evaluate models both in simulation and on physical robots
- Improve training robustness and efficiency through algorithmic innovation
- Analyze model performance and propose enhancements based on empirical results
- Deploy VLA models onto real humanoid and mobile robotic platforms
- Publish research at top-tier conferences (e.g., NeurIPS, CoRL, CVPR)
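To give a concrete flavor of this work, below is a minimal, purely illustrative sketch (not AIRoA code) of one supervised, behavior-cloning-style training step for a VLA-type policy on multimodal batches in PyTorch. The VLAPolicy class, batch keys, and tensor shapes are hypothetical placeholders.

```python
# Illustrative sketch only: all module names, batch keys, and shapes below are
# hypothetical placeholders, not part of AIRoA's actual codebase.
import torch
import torch.nn as nn

class VLAPolicy(nn.Module):
    """Toy stand-in for a vision-language-action model: encodes an image and a
    tokenized instruction, then regresses a continuous action vector."""
    def __init__(self, vocab_size=32000, action_dim=14, hidden=256):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, hidden))
        self.text = nn.EmbeddingBag(vocab_size, hidden)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, action_dim))

    def forward(self, images, tokens):
        feats = torch.cat([self.vision(images), self.text(tokens)], dim=-1)
        return self.head(feats)

def train_step(policy, batch, optimizer):
    """One supervised update on a batch of (image, instruction, action) triples."""
    pred = policy(batch["image"], batch["tokens"])
    loss = nn.functional.mse_loss(pred, batch["action"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    policy = VLAPolicy()
    opt = torch.optim.AdamW(policy.parameters(), lr=3e-4)
    batch = {  # random stand-in data with plausible shapes
        "image": torch.randn(8, 3, 224, 224),
        "tokens": torch.randint(0, 32000, (8, 16)),
        "action": torch.randn(8, 14),
    }
    print("loss:", train_step(policy, batch, opt))
```

In practice the same loop would be adapted to fine-tune pretrained VLM backbones and to RL or RLHF objectives, which is where much of the role's algorithmic work sits.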
Requirements
Required Qualifications
- MS degree with 3+ years of industry experience, or PhD in Computer Science, Electrical Engineering, or a related field
- Experience with open-ended learning, reinforcement learning, and frontier methods for training LLMs / VLMs / VLAs such as RLHF and reward function design
- Experience working with simulators or real-world robots
- Knowledge of the latest advancements in large-scale machine learning research
- Experience with deep learning frameworks such as PyTorch
Preferred Qualifications
- PhD or equivalent research experience in robot learning
- Practical experience implementing advanced control strategies on hardware, including impedance control, adaptive control, force control, or MPC
- Experience using tactile sensing for dexterous manipulation and contact-rich tasks
- Familiarity with simulation platforms and benchmarks (e.g., MuJoCo, PyBullet, Isaac Sim) for training and evaluation
- Proven track record of achieving significant results as demonstrated by publications at leading conferences in Machine Learning (NeurIPS, ICML, ICLR), Robotics (ICRA, IROS, RSS, CoRL), and Computer Vision (CVPR, ICCV, ECCV)
- Strong end-to-end system building and rapid prototyping skills
- Experience with robotics frameworks like ROS
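Several of the items above concern simulation-based training and evaluation. As a hedged, purely illustrative sketch of that workflow (not an AIRoA pipeline), the snippet below rolls out a placeholder policy in MuJoCo's official Python bindings and accumulates a toy score; the XML model, random_policy, and reward are invented for illustration.

```python
# Illustrative sketch only: the model XML, policy, and scoring below are toy
# placeholders used to show the shape of a MuJoCo evaluation rollout.
import numpy as np
import mujoco

XML = """
<mujoco>
  <worldbody>
    <body name="arm" pos="0 0 0.1">
      <joint name="hinge" type="hinge" axis="0 0 1"/>
      <geom type="capsule" fromto="0 0 0 0.2 0 0" size="0.02"/>
    </body>
  </worldbody>
  <actuator>
    <motor joint="hinge"/>
  </actuator>
</mujoco>
"""

def random_policy(obs: np.ndarray, nu: int) -> np.ndarray:
    """Placeholder: in practice this would be a trained VLA model's action head."""
    return np.random.uniform(-0.5, 0.5, nu)

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

score = 0.0
for _ in range(500):
    obs = np.concatenate([data.qpos, data.qvel])   # minimal proprioceptive state
    data.ctrl[:] = random_policy(obs, model.nu)    # apply torques
    mujoco.mj_step(model, data)                    # advance physics one step
    score += -abs(data.qpos[0] - 1.0)              # toy objective: reach 1.0 rad
print("rollout score:", score)
```

On hardware, the same policy interface would instead drive a real robot, typically through a middleware such as ROS.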
Benefits
No comparable project anywhere in the world currently collects data and develops foundation models at this scale. This is one of Japan’s leading national projects, supported by a substantial investment of 20.5 billion yen from NEDO.
This position will play a crucial role in the project’s success. You will have broad discretion and responsibility, and if the project succeeds you will gain both a great sense of achievement and the opportunity to make a meaningful contribution to society.
Furthermore, we strongly encourage engineers to actively build their careers through this project, for example by publishing research papers and engaging in academic activities.