CoRL 2025 Workshop (under review)
Robotics is entering an era where high-fidelity simulation and data-driven controllable world models are converging to revolutionize learning for manipulation and navigation. Traditionally, physics-based simulators and digital twins have been the go-to approach for training and evaluating robot policies. However, constructing realistic simulations with engines like Unity or Unreal is labor-intensive and often fails to capture the full complexity of the real world.
Recent advances in large-scale pretrained models have enabled the generation of videos, 3D assets, and even entire simulators. At the same time, compact world models learned from scratch can capture task-specific dynamics tailored to a particular robot. Together, these approaches show that simulations no longer need to be manually constructed; they can be learned directly from data.
This workshop aims to bridge the spectrum from physics-grounded simulation and photorealistic digital twins, through AI-controlled simulators, to fully learned neural world models. We will explore how cutting-edge simulation technologies, such as differentiable physics engines and realistic rendering, can integrate with or evolve into learned models of the world.
The goal is to bring together researchers from traditionally separate communities: those focused on high-fidelity simulators and those developing learned implicit models. Together, we will discuss how to combine the strengths of both approaches to advance robot learning.
For questions / comments, reach out to: learning-to-simulate-robot-worlds@googlegroups.com
Website template adapted from the OSC/ORLR workshops, originally based on the template of the BAICS workshop.