Multi-Agent Reinforcement Learning with Graph Convolutional Networks for Collaborative Task Scheduling in Distributed Virtual Reality Systems
Abstract
Efficiently scheduling collaborative tasks in multi-user, distributed virtual reality (VR) systems is challenging owing to the dynamic nature of user interactions and to resource constraints. To address this problem, a novel task scheduling framework integrating multi-agent reinforcement learning and graph convolutional networks is proposed. The distributed VR scheduling problem is formulated as a Markov game, with the multi-agent proximal policy optimisation (MAPPO) algorithm serving as the foundational decision-making framework. Graph convolutional networks capture the dynamic topological relationships between tasks and users, enhancing global perception and enabling more refined collaborative decisions through graph attention mechanisms. Experimental results demonstrate that the proposed model outperforms baseline approaches in task completion rate, system throughput, and resource utilisation, and that it maintains low latency and high success rates across a range of simulated VR application scenarios. This research presents a structured methodology for intelligent scheduling in distributed collaborative systems, advancing the integration of graph-based learning and multi-agent coordination in complex virtual environments. It contributes to scheduling theory and practical system design, offering insights for the development of responsive and scalable VR platforms.
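To make the pipeline described above concrete, the following is a minimal sketch of how a graph attention encoder over a task-user graph could feed per-agent action logits for a MAPPO-style stochastic policy. It assumes standard single-head graph attention mechanics; all names (`GraphAttentionLayer`, `TaskGraphActor`), feature sizes, and the action space are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: GAT encoder over a task-user graph + per-node policy head.
# Hypothetical names and dimensions; not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention: each node attends over its neighbours."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) binary adjacency.
        h = self.W(x)                                     # (N, out_dim)
        N = h.size(0)
        # Pairwise attention logits e_ij = a([h_i || h_j]).
        hi = h.unsqueeze(1).expand(N, N, -1)
        hj = h.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        e = e.masked_fill(adj == 0, float("-inf"))        # neighbours only
        alpha = torch.softmax(e, dim=-1)                  # (N, N) weights
        return F.elu(alpha @ h)                           # aggregated embeddings

class TaskGraphActor(nn.Module):
    """Illustrative actor: graph-attention encoding, then action logits per node."""
    def __init__(self, feat_dim, hid_dim, n_actions):
        super().__init__()
        self.gat = GraphAttentionLayer(feat_dim, hid_dim)
        self.policy = nn.Linear(hid_dim, n_actions)

    def forward(self, x, adj):
        return self.policy(self.gat(x, adj))              # (N, n_actions)

# Usage: 6 nodes (tasks/users), 8 features each, 4 scheduling actions per agent.
x = torch.randn(6, 8)
adj = (torch.rand(6, 6) > 0.5).float()
adj.fill_diagonal_(1)                                     # self-loops keep softmax finite
logits = TaskGraphActor(8, 16, 4)(x, adj)
dist = torch.distributions.Categorical(logits=logits)     # MAPPO-style stochastic policy
print(dist.sample())                                      # one action per agent
```

In a full MAPPO training loop, these logits would parameterise each agent's policy, with a centralised critic consuming the same graph embeddings; the sketch only shows the graph-attention perception step the abstract highlights.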
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.