Abstract: The path signature, a mathematically principled and universal feature of sequential data, boosts the performance of deep learning models on various sequential-data tasks when used as a complementary feature. However, it suffers from the curse of dimensionality when the path dimension is high. To tackle this problem, we propose a novel, trainable path development layer, which exploits representations of sequential data in finite-dimensional matrix Lie groups. In this talk, I will introduce the path development layer and its theoretical background. In addition, I will present the backpropagation algorithm for the development layer, based on an optimisation method on manifolds known as trivialisation. Furthermore, numerical experiments demonstrate that the path development consistently and significantly outperforms signature features on several empirical datasets. In particular, stacking the development layer with a suitable matrix Lie group on top of an LSTM is empirically shown to alleviate the gradient issues of LSTMs, and the resulting hybrid model achieves state-of-the-art performance. I will conclude the talk with applications to generative models for time series. The talk is based on joint work with Hang Lou (UCL) and Siran Li (Shanghai Jiao Tong University). The papers can be found at arxiv.org/pdf/2204.00740.pdf and arxiv.org/abs/2305.12511.
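To make the idea concrete, here is a minimal NumPy sketch of the development recursion for a piecewise-linear path: a trainable linear map θ sends each path increment into the Lie algebra so(m) (anti-symmetric matrices), and the development is the ordered product of the corresponding matrix exponentials, which lives in the group SO(m). The parameter names and dimensions below are illustrative, not taken from the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

d, m, T = 3, 4, 10                  # path dimension, matrix size, path length

# Trainable parameters of a linear map theta: R^d -> so(m).
# Anti-symmetrising each slice keeps the image in the Lie algebra
# of SO(m), so every exponential factor below is orthogonal.
W = rng.standard_normal((d, m, m))
theta = W - np.swapaxes(W, 1, 2)    # each theta[i] is anti-symmetric

path = rng.standard_normal((T, d))  # a toy d-dimensional discrete path

Z = np.eye(m)                       # the development starts at the identity
for k in range(T - 1):
    dx = path[k + 1] - path[k]                 # path increment
    M = np.tensordot(dx, theta, axes=1)        # theta(dx) in so(m)
    Z = Z @ expm(M)                            # multiply by the group element

# The result is a fixed-size (m x m) feature of the whole path,
# independent of the path length T, and lies in SO(m).
assert np.allclose(Z.T @ Z, np.eye(m), atol=1e-8)
```

Note that the output size m × m is chosen by the user and does not grow with the path dimension d, which is the sense in which the development sidesteps the dimensionality blow-up of the truncated signature.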
30 Jun 2024