ViMo: A Generative Visual GUI World Model for App Agent

Dezhao Luo*,1, Bohan Tang*,2, Kang Li2, Georgios Papoudakis3, Jifei Song3, Shaogang Gong1, Jianye Hao3, Jun Wang4, Kun Shao†,3
1Queen Mary University of London 2University of Oxford 3Huawei Noah's Ark Lab 4University College London
* Equal Contribution

Corresponding author: shaokun2@huawei.com

ViMo generates the next GUI based on the current GUI observation and a user action.

Abstract

App agents, which autonomously operate mobile Apps through Graphical User Interfaces (GUIs), have gained significant interest in real-world applications. Yet, they often struggle with long-horizon planning, failing to find the optimal actions for complex, multi-step tasks. To address this, world models are used to predict the next GUI observation based on user actions, enabling more effective agent planning. However, existing world models primarily focus on generating only textual descriptions, lacking essential visual details. To fill this gap, we propose ViMo, the first visual world model designed to generate future App observations as images. To meet the challenge of generating text in image patches, where even minor pixel errors can distort readability, we decompose GUI generation into graphic and text content generation. We propose a novel data representation, the Symbolic Text Representation (STR), which overlays text content with symbolic placeholders while preserving graphics. With this design, ViMo employs an STR Predictor to predict the graphics of future GUIs and a GUI-text Predictor to generate the corresponding text. Moreover, we deploy ViMo to enhance agent-focused tasks by predicting the outcome of different action options. Experiments show ViMo's ability to generate visually plausible and functionally effective GUIs that enable App agents to make more informed decisions.

Method Overview

Framework of our ViMo. We first detect text content (actual words) in the current GUI and overlay it with text symbols (rectangle-shaped placeholders with a black border and white fill) to create the STR. The STR and the user action are then input to the STR Predictor to generate the STR of the next GUI. Next, each text symbol within the STR is located and assigned a unique ID token, and the LLM predicts the text content corresponding to each token. Finally, the next GUI image is constructed by overlaying the predicted text onto the STR.
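The STR construction step above can be sketched as follows. This is a minimal illustration, not the released implementation: `text_boxes` stands in for the output of an off-the-shelf text detector (e.g. OCR), and the function name is hypothetical. Each detected text region is replaced by a white rectangle with a black border, yielding the Symbolic Text Representation of the GUI.

```python
from PIL import Image, ImageDraw


def to_str_image(gui, text_boxes):
    """Build the STR of a GUI screenshot by overlaying each detected
    text region with a symbolic placeholder: a rectangle with a black
    border and white fill. `text_boxes` is a list of
    (left, top, right, bottom) pixel boxes from a text detector.
    Graphics outside the boxes are preserved unchanged.
    """
    str_img = gui.copy()
    draw = ImageDraw.Draw(str_img)
    for box in text_boxes:
        draw.rectangle(box, fill="white", outline="black", width=2)
    return str_img


# Example: a synthetic 100x100 "GUI" with one text region.
gui = Image.new("RGB", (100, 100), "gray")
str_img = to_str_image(gui, [(10, 10, 90, 30)])
```

The placeholders are later located again in the predicted STR, given unique ID tokens, and filled in with the text predicted by the GUI-text Predictor.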

Qualitative Comparisons

GUI generation comparison in graphic generation (Top) and text generation (Bottom, cropped for space efficiency)

Quantitative Comparisons

Decision optimisation comparisons on App agent performance. Apps are categorised into “Leisure”, “Work”, and “System”.

Visualisations

Visualisation of ViMo in generating the next GUI. For each example, the action is displayed at the top, with the current GUI shown on the left and the generated GUI on the right.

BibTeX

to be added