On Logical Extrapolation for Mazes with Recurrent and Implicit Networks

Date:

We investigate whether recurrent and implicit neural networks can perform logical extrapolation on maze-solving tasks. Testing models across multiple dimensions beyond maze size, we identify several failure modes and find evidence that at least one recurrent network has learned a dead-end filling heuristic. Training on more diverse data addresses some failure modes but does not improve extrapolation performance overall. Our results show that logical extrapolation remains vulnerable to goal misgeneralization, and we propose that studying extrapolation dynamics could inform the design of future architectures.
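For context, dead-end filling is a classical maze-solving heuristic: repeatedly fill in any passage cell with at most one open neighbor (excluding the start and goal) until no dead ends remain; on a maze with a unique solution, the surviving passage cells are exactly the solution path. The sketch below is an illustrative implementation on a 0/1 grid (0 = passage, 1 = wall); it is not the paper's code, and the grid encoding and function name are assumptions.

```python
from typing import List, Tuple

Cell = Tuple[int, int]

def dead_end_fill(grid: List[List[int]], start: Cell, goal: Cell) -> List[List[int]]:
    """Fill dead-end passages (0 = passage, 1 = wall) until only the path remains.

    Assumes the maze has a unique solution path from start to goal.
    """
    rows, cols = len(grid), len(grid[0])
    maze = [row[:] for row in grid]  # work on a copy
    changed = True
    while changed:
        changed = False
        for r in range(rows):
            for c in range(cols):
                if maze[r][c] == 0 and (r, c) not in (start, goal):
                    open_neighbors = sum(
                        1
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= r + dr < rows and 0 <= c + dc < cols
                        and maze[r + dr][c + dc] == 0
                    )
                    if open_neighbors <= 1:  # dead end: wall it off
                        maze[r][c] = 1
                        changed = True
    return maze

if __name__ == "__main__":
    maze = [
        [1, 1, 1, 1, 1],
        [1, 0, 0, 0, 1],
        [1, 1, 1, 0, 1],
        [1, 0, 0, 0, 1],
        [1, 1, 1, 1, 1],
    ]
    solved = dead_end_fill(maze, start=(1, 1), goal=(3, 3))
    # The dead-end branch at (3, 1)-(3, 2) is filled; the path survives.
    print(solved)
```

A learned maze solver that implements something like this loop would explain the iterative, depth-dependent behavior the paper probes, while still being a heuristic rather than a general search procedure.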