Why do we perceive the visual world as stable? The information provided by our eyes is jumpy, because we make rapid, frequent eye movements called saccades. The result is much like a movie filmed with a hand-held video camera (but even worse). Somehow, the brain transforms this chaotic information into a stable percept. One idea is that the stabilization depends on a special class of visually responsive neurons in the brain. They "sneak a peek" at the part of the visual scene that they will see after the saccade, an operation called presaccadic remapping. A direct test of this idea is nearly impossible; one would have to find all such neurons, silence them, and see whether visual perception becomes jumpy when the eyes move. We therefore followed the dictum, "To understand a system, you must try to make it." We built a system that uses video cameras for eyes, a computer model for a brain, and robots that use the model to guide their arms. We will train the system until the robots reach for and grab objects accurately even as their camera eyes move. After training, we will examine the simulated neurons in the model. If presaccadic remapping is necessary for stabilizing visual inputs, the trained neurons should exhibit the property. We could then computationally manipulate those neurons to understand how they promote visual stability. The robotic system that we develop should be useful for solving myriad problems in neuroscience that are beyond the reach of current biological methods.
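
The "sneak a peek" idea can be illustrated with a toy model. In the sketch below, a neuron has a Gaussian receptive field in retinal coordinates; presaccadic remapping means that just before a saccade, the neuron responds as if its field had already shifted by the planned saccade vector, so its remapped response previews what it will see after the eyes land. All numbers, the Gaussian tuning, and the function names here are our own illustrative assumptions, not part of the proposed robotic system.

```python
import numpy as np

def response(stim_retinal, rf_center, sigma=2.0):
    """Gaussian tuning over retinal position (toy model of a visual neuron)."""
    return float(np.exp(-(stim_retinal - rf_center) ** 2 / (2 * sigma ** 2)))

def retinal_position(stim_world, gaze):
    """Retinal coordinate of a world-fixed stimulus given the gaze direction."""
    return stim_world - gaze

# Illustrative numbers: the eyes will jump 10 degrees rightward.
gaze_before, saccade = 0.0, 10.0
gaze_after = gaze_before + saccade
rf = 5.0     # neuron's classic receptive field center (retinal coordinates)
stim = 15.0  # a stimulus fixed in the world

# Before the saccade, the stimulus falls outside the classic field...
r_before = response(retinal_position(stim, gaze_before), rf)
# ...but after the saccade it lands squarely inside it.
r_after = response(retinal_position(stim, gaze_after), rf)

# Presaccadic remapping: just before the eyes move, the neuron responds as if
# its field had already shifted by the saccade vector (its "future field").
r_remapped = response(retinal_position(stim, gaze_before), rf + saccade)

# The remapped response previews the post-saccadic one exactly.
assert r_remapped == r_after
```

In the trained model, we can search for simulated neurons whose activity shows this signature: responding to the future-field location before the camera eyes actually move.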