DISTRL:
AN ASYNCHRONOUS DISTRIBUTED
REINFORCEMENT LEARNING FRAMEWORK
FOR ON-DEVICE CONTROL AGENTS

Taiyi Wang1 2 * †, Zhihao Wu3 *, Jianheng Liu4, Jianye Hao3, Jun Wang4, Kun Shao3 †
1 University of Cambridge
2 Powersense Technology Limited
3 Huawei Noah's Ark Lab
4 University College London
* Equal Contribution
† Corresponding authors: Taiyi.Wang@cl.cam.ac.uk, shaokun2@huawei.com

Comparison between DistRL Agent and DigiRL Agent on Real Web Browsing Task:

Host-worker Running Loop Illustrations for DistRL Asynchronous Framework:

Abstract

On-device control agents, especially on mobile devices, are responsible for operating mobile devices to fulfill users' requests, enabling seamless and intuitive interactions. Integrating Multimodal Large Language Models (MLLMs) into these agents enhances their ability to understand and execute complex commands, thereby improving user experience. However, fine-tuning MLLMs for on-device control presents significant challenges due to limited data availability and inefficient online training processes. This paper introduces DistRL, a novel framework designed to enhance the efficiency of online RL fine-tuning for mobile device control agents. DistRL employs centralized training and decentralized data acquisition to ensure efficient fine-tuning in the context of dynamic online interactions. Additionally, the framework is backed by our tailor-made RL algorithm, which effectively balances exploration with the prioritized utilization of collected data to ensure stable and robust training. Our experiments show that, on average, DistRL delivers a 3X improvement in training efficiency and enables training data collection 2.4X faster than the leading synchronous multi-machine methods. Notably, after training, DistRL achieves a 20% relative improvement in success rate compared to state-of-the-art methods on general Android tasks from an open benchmark, significantly outperforming existing approaches while maintaining the same training time. These results validate DistRL as a scalable and efficient solution, offering substantial improvements in both training efficiency and agent performance for real-world, in-the-wild device control tasks.

System Design

DistRL is an asynchronous distributed reinforcement learning framework that decouples trajectory collection from policy learning. The framework consists of a central Host Learner equipped with GPUs for policy training and multiple heterogeneous Workers running Android emulators or connected to mobile devices. This separation aligns each task with the appropriate hardware: CPU-intensive environment interactions run on the worker machines, while GPU-accelerated policy training runs on the host server. The host maintains a Circular Replay Buffer for storing trajectories and a FIFO Queue for efficiently managing incoming experiences from workers, as sketched below.
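The following minimal sketch illustrates this host-side data path under simple assumptions: a thread-safe FIFO queue receives trajectories from workers, and a background thread drains it into a fixed-capacity circular buffer so the GPU learner never blocks on worker I/O. Class and variable names are illustrative, not DistRL's actual API.

import collections
import queue
import random
import threading

class CircularReplayBuffer:
    """Fixed-capacity trajectory buffer; the oldest entries are overwritten first."""
    def __init__(self, capacity: int):
        self.buffer = collections.deque(maxlen=capacity)

    def add(self, trajectory):
        self.buffer.append(trajectory)

    def sample(self, batch_size: int):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

# Workers push finished trajectories into the FIFO queue; a background thread
# drains it into the replay buffer while the learner samples training batches.
incoming = queue.Queue()                       # FIFO queue of worker trajectories
replay = CircularReplayBuffer(capacity=10_000)

def drain_queue():
    while True:
        replay.add(incoming.get())             # blocks only this thread, not the learner

threading.Thread(target=drain_queue, daemon=True).start()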

The workers operate in parallel through multi-threading, with each thread executing the latest policy received from the Host Learner. Workers collect trajectories during environment interactions and asynchronously send them back to the host for training (see the worker-loop sketch below). The framework scales well: two 96-vCPU machines support up to 32 concurrent emulators, and trajectory collection throughput grows nearly linearly as workers are added.
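A minimal sketch of one worker's collection loop follows, under the same assumptions as above; env, get_latest_policy, and push are hypothetical stand-ins for the emulator wrapper, the host's weight-sync endpoint, and whatever transport delivers trajectories to the host's queue (e.g. incoming.put).

def run_worker(env, get_latest_policy, push, max_steps: int = 10):
    """Collect trajectories forever, refreshing the policy before each episode."""
    while True:
        policy = get_latest_policy()            # sync the latest weights from the host
        obs = env.reset()
        trajectory = []
        for _ in range(max_steps):
            action, log_prob = policy(obs)      # behavior policy mu on this worker
            next_obs, reward, done = env.step(action)
            trajectory.append((obs, action, reward, log_prob))
            obs = next_obs
            if done:
                break
        push(trajectory)                        # asynchronous hand-off to the host

Each worker thread runs this loop independently, so a slow emulator only delays its own trajectories rather than the whole training step.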

A-RIDE: The Backbone of DistRL

Our method employs advantage-based estimation to refine policy-gradient updates, extending Generalized Advantage Estimation (GAE; Schulman et al., 2015) to balance exploration and exploitation during learning.

By introducing a trace decay parameter, A-RIDE manages the bias-variance trade-off in advantage calculations, optimizing the stability and convergence of the policy. A-RIDE incorporates enhancements tailored to distributed, asynchronous environments, ensuring robust policy stability and efficient learning in complex device control tasks.
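For reference, a standard GAE-style advantage computation with a trace-decay parameter is sketched below; this is the textbook estimator that A-RIDE builds on, not A-RIDE itself, which additionally applies the Retrace corrections defined in the next part.

import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """rewards: r_0..r_{H-1}; values: V(s_0)..V(s_H); returns A(s_t, a_t) for each t."""
    advantages = np.zeros(len(rewards))
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # one-step TD error
        gae = delta + gamma * lam * gae                          # exponentially decayed trace
        advantages[t] = gae
    return advantages

Larger trace decay reduces bias at the cost of higher variance, and smaller values do the opposite; this is the bias-variance trade-off referred to above.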

The Retrace(λ) update is defined as:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \delta_t,$$

where the correction term $\delta_t$ is calculated as

$$\delta_t = \sum_{k=t}^{H} \gamma^{\,k-t} \left( \prod_{i=t+1}^{k} c_i \right) \left[ r_k + \gamma\, Q(s_{k+1}, a_{k+1}) - Q(s_k, a_k) \right].$$

Here, $Q(s_t, a_t)$ is the estimated action-value function; $\gamma \in [0, 1]$ is the discount factor; $H$ is the time horizon; $c_i = \lambda \min(1, \rho_i)$ with $\lambda \in [0, 1]$ the trace-decay parameter; and $\rho_i = \pi(a_i \mid s_i) / \mu(a_i \mid s_i)$ is the importance-sampling ratio between the target policy $\pi$ and the behavior policy $\mu$.
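A direct NumPy transcription of this correction for a single finished trajectory is sketched below; it is an illustrative reference implementation, not DistRL's production code, and assumes that log-probabilities under both policies were stored at collection time.

import numpy as np

def retrace_correction(rewards, q_values, q_next, log_pi, log_mu,
                       gamma=0.99, lam=0.95):
    """
    rewards[k]       = r_k,   q_values[k] = Q(s_k, a_k),
    q_next[k]        = Q(s_{k+1}, a_{k+1}) (use 0 for the terminal step),
    log_pi/log_mu[k] = log-probabilities of a_k under pi and mu.
    Returns delta_t for every start index t of a trajectory of length H.
    """
    H = len(rewards)
    rho = np.exp(np.asarray(log_pi) - np.asarray(log_mu))   # importance ratios rho_i
    c = lam * np.minimum(1.0, rho)                          # truncated traces c_i
    td = np.asarray(rewards) + gamma * np.asarray(q_next) - np.asarray(q_values)
    deltas = np.zeros(H)
    for t in range(H):
        trace = 1.0                                         # empty product for k = t
        for k in range(t, H):
            if k > t:
                trace *= c[k]                               # prod_{i=t+1}^{k} c_i
            deltas[t] += (gamma ** (k - t)) * trace * td[k]
    return deltas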

To ensure effective exploration within the action space and prevent the generation of nonsensical or invalid commands, we incorporate entropy regularization into the actor loss function:

$$\mathcal{L} = -\,\mathbb{E}_{\mu}\!\left[ A(s_t, a_t) \log \pi(a_t \mid s_t) \right] - \beta\, \mathbb{E}_{\mu}\!\left[ \mathbb{H}\big(\pi(a_t \mid s_t)\big) \right] + \lambda\, \mathbb{E}_{\mu}\!\left[ \mathcal{P}_{\text{invalid}}(a_t) \right],$$

where $A(s_t, a_t)$ is the advantage function, defined as the difference between the action-value function $Q(s_t, a_t)$ and the state-value function $V(s_t)$: $A(s_t, a_t) = Q(s_t, a_t) - V(s_t)$. The entropy term $\mathbb{H}$ promotes exploration, and $\mathcal{P}_{\text{invalid}}(a_t)$ imposes penalties on invalid actions. The coefficients $\beta$ and $\lambda$ are tuned to balance exploration and stability.
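A hedged PyTorch sketch of this regularized actor objective is shown below; the invalid-action penalty is passed in as a precomputed per-action cost, since its exact form depends on the environment's action grammar, and the coefficient names are illustrative.

import torch

def actor_loss(log_pi, advantages, entropy, invalid_penalty,
               beta=0.01, lam_penalty=0.1):
    """
    log_pi:          log pi(a_t | s_t) for the sampled actions          (shape [B])
    advantages:      A(s_t, a_t) = Q(s_t, a_t) - V(s_t)                 (shape [B])
    entropy:         entropy of the policy at each state                (shape [B])
    invalid_penalty: P_invalid(a_t), e.g. 1.0 for malformed commands    (shape [B])
    """
    pg_term = -(advantages.detach() * log_pi).mean()      # policy-gradient term
    entropy_term = -beta * entropy.mean()                 # encourages exploration
    penalty_term = lam_penalty * invalid_penalty.mean()   # discourages invalid actions
    return pg_term + entropy_term + penalty_term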

DistRL Pipeline

DistRL adopts a distributed asynchronous setup where multiple worker agents generate trajectories under the behavior policy μ and send them to a central learner.

The trajectory-level value estimator $V^{\text{traj}}$ is trained against the final (Monte Carlo) trajectory reward with a cross-entropy loss:

$$\mathcal{L}(V^{\text{traj}}) = -\,\mathbb{E}_{\nu}\!\left[ r(s_H, a_H) \log V^{\text{traj}}(s_H, a_H) + \big(1 - r(s_H, a_H)\big) \log\!\big(1 - V^{\text{traj}}(s_H, a_H)\big) \right].$$
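This objective is a binary cross-entropy between the predicted trajectory value and the final sparse reward; a minimal PyTorch sketch, assuming $V^{\text{traj}}$ outputs a probability in (0, 1).

import torch.nn.functional as F

def trajectory_value_loss(v_traj, final_reward):
    """
    v_traj:       V_traj(s_H, a_H), predicted success probability, shape [B]
    final_reward: r(s_H, a_H) in {0.0, 1.0}, 1 if the task succeeded, shape [B]
    """
    return F.binary_cross_entropy(v_traj, final_reward)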

The actor is updated with policy gradients based on these advantage estimates, and the enhanced Retrace corrections are applied for off-policy learning. Collection and learning run asynchronously across worker nodes, keeping fine-tuning efficient despite sparse rewards and the delays introduced by distributed collection.

Results

Training performance (32 emulators) of the current SoTA (DigiRL) and DistRL, highlighting the enhanced efficiency of DistRL's distributed framework during online training: (a) wall-clock time comparison; (b) training efficiency comparison; (c) accumulated trajectory collection comparison; (d) scalability of the different training frameworks.

Framework Type | Framework Name           | General (Train) | General (Test) | Web Shopping (Train) | Web Shopping (Test)
---------------|--------------------------|-----------------|----------------|----------------------|--------------------
Prompting      | AppAgent + GPT-4v        | 41.4            | 43.0           | 31.2                 | 35.2
Prompting      | AppAgent + Gemini        | 39.1            | 45.3           | 30.5                 | 32.0
Learning       | AutoUI                   | 38.3            | 40.6           | 42.2                 | 44.5
Learning       | DigiRL (single, online)  | 64.6 ± 1.5      | 59.9 ± 2.1     | 63.3 ± 1.5           | 59.6 ± 3.1
Learning       | DigiRL (multi)           | 67.7 ± 1.3      | 61.2 ± 2.4     | 64.5 ± 1.1           | 59.9 ± 2.8
Learning       | DistRL (Ours)            | 75.5 ± 0.2      | 73.2 ± 1.1     | 69.8 ± 0.5           | 68.5 ± 1.7

Main comparison of success rates (%) of the different agents across these settings. Each experiment is repeated three times, and the mean and standard deviation are reported. Results are evaluated with our autonomous evaluator on the 128 user instructions in the train and test sets.

For full results and more details, please refer to our paper.

BibTeX


@article{wang2024distrl,
  title={DistRL: An Asynchronous Distributed Reinforcement Learning Framework for On-Device Control Agents},
  author={Wang, Taiyi and Wu, Zhihao and Liu, Jianheng and Hao, Jianye and Wang, Jun and Shao, Kun},
  journal={arXiv preprint arXiv:2410.14803},
  year={2024}
}