Give instructions to your Space Engineers
Google DeepMind introduced SIMA 2, and Space Engineers was included in the project's research, an important step both for the studio and for the advancement of artificial intelligence in interactive environments. The collaboration shows how far general agents have come and how they are beginning to understand and operate in complex virtual worlds.
The original SIMA demonstrated impressive versatility, mastering over six hundred language-instructable skills. It could follow commands such as turning left, climbing stairs, or opening a map, performing each action by watching the game screen and using a virtual keyboard and mouse. Instead of relying on hidden systems or developer shortcuts, it interacted with games the same way a player would, demonstrating that a broad skill set can be acquired through vision and control alone.
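The loop described above, where a language instruction plus the current screen frame is mapped to keyboard and mouse actions, can be sketched as follows. This is a minimal illustration only; the class and field names (`ScreenFrame`, `KeyPress`, `InstructionFollowingAgent`) are hypothetical assumptions, and the dictionary lookup is a toy stand-in for SIMA's learned policy.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ScreenFrame:
    """Raw pixels captured from the game window (hypothetical type)."""
    pixels: bytes

@dataclass
class KeyPress:
    """A single virtual-keyboard action (hypothetical type)."""
    key: str

class InstructionFollowingAgent:
    """Toy agent: maps a language instruction and a frame to actions.

    A real agent like SIMA conditions on the pixels with a learned
    model; this stub ignores them and uses a fixed lookup table.
    """

    def __init__(self) -> None:
        self.policy = {
            "turn left": [KeyPress("a")],
            "open the map": [KeyPress("m")],
            "climb the stairs": [KeyPress("w"), KeyPress("space")],
        }

    def act(self, instruction: str, frame: ScreenFrame) -> List[KeyPress]:
        # Unknown instructions yield no actions in this sketch.
        return self.policy.get(instruction.lower(), [])

agent = InstructionFollowingAgent()
frame = ScreenFrame(pixels=b"\x00" * 16)
print([a.key for a in agent.act("Turn left", frame)])  # -> ['a']
```

The key design point the sketch mirrors is the interface: the agent receives only what a human player would (pixels in, keyboard and mouse events out), with no privileged access to game internals.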
SIMA 2 represents a significant shift. Instead of focusing only on following instructions, the agent is now built on a Gemini model, allowing it to reason about situations, interpret tasks more deeply, and make decisions with greater independence. This evolution moves SIMA from simply executing commands to understanding intent and adapting to the task at hand.
One of the clearest demonstrations of this progress is SIMA 2's work within Space Engineers. The game's detailed voxel world, supported by realistic physics and fully deformable structures, creates an environment that demands spatial awareness and problem solving. Players can assemble, disassemble, damage, or destroy any component, producing scenarios where intelligent decisions matter. Observing how an AI agent handles such a system offers insight into how future agents might cope with creative, open-ended environments.
DeepMind's ongoing development, along with contributions from studios like the Space Engineers team, suggests an exciting future for AI behavior in games and other interactive simulations.





