A deep reinforcement learning approach for early classification of time series
Title | A deep reinforcement learning approach for early classification of time series |
Publication Type | Conference Paper |
Year of Publication | 2018 |
Authors | Martinez C., Perrin G., Ramasso E., Rombaut M. |
Conference Name | 2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO) |
Sponsors | European Assoc Signal Processing; IEEE Signal Processing Soc; ROMA TRE Univ Degli Studi; MathWorks; Amazon Devices |
Conference Location | Rome, Italy |
ISBN Number | 978-90-827970-1-5 |
Keywords | Deep Q-Network, early classification, Reinforcement Learning, time-sensitive applications, time series |
Abstract | In many real-world applications, ranging from predictive maintenance to personalized medicine, early classification of time series data is of paramount importance for supporting decision makers. In this article, we address this challenging task with a novel approach based on reinforcement learning. We introduce an early classifier agent, an end-to-end reinforcement learning agent (deep Q-network, DQN) [1] able to perform early classification efficiently. We formulate the early classification problem in a reinforcement learning framework: we introduce a suitable set of states and actions, and we define a specific reward function that aims to find a compromise between earliness and classification accuracy. Whereas most existing solutions do not explicitly take time into account in the final decision, our approach lets the user set this trade-off in a more flexible way. In particular, we show experimentally on datasets from the UCR time series archive [2] that this agent is able to continually adapt its behavior without human intervention and progressively learn to compromise between accurate and fast predictions. |
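The abstract describes an MDP in which the state is the currently observed prefix of a time series, the actions are "wait for one more point" or "stop and predict a class", and the reward trades off earliness against accuracy. The sketch below illustrates one plausible way to set up such an environment and a Q-network head; it is a minimal illustration under assumptions, not the authors' implementation, and every name in it (EarlyClassificationEnv, QNet, lambda_delay) is hypothetical. The actual state encoding, reward shaping, and network architecture used in the paper may differ.

```python
# Minimal sketch of an early-classification MDP and DQN head (assumed, illustrative).
import numpy as np
import torch
import torch.nn as nn

class EarlyClassificationEnv:
    """State: zero-padded prefix of one time series.
    Actions: 0 = wait (observe one more point), 1..K = stop and predict class k-1."""
    def __init__(self, series, label, n_classes, lambda_delay=0.01):
        self.series = np.asarray(series, dtype=np.float32)  # full time series
        self.label = label                                   # true class in {0, ..., K-1}
        self.n_classes = n_classes
        self.lambda_delay = lambda_delay                     # earliness/accuracy trade-off
        self.t = 1                                           # number of observed points

    def state(self):
        s = np.zeros_like(self.series)
        s[:self.t] = self.series[:self.t]                    # observed prefix, zero-padded
        return s

    def step(self, action):
        if action == 0:                                      # wait: small penalty per extra point
            self.t = min(self.t + 1, len(self.series))
            done = self.t == len(self.series)                # end of series forces termination here
            return self.state(), -self.lambda_delay, done
        pred = action - 1                                    # stop and classify
        reward = 1.0 if pred == self.label else -1.0
        return self.state(), reward, True

class QNet(nn.Module):
    """Q-values over 1 + K actions, computed from the padded prefix."""
    def __init__(self, series_len, n_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(series_len, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + n_classes))

    def forward(self, x):
        return self.net(x)
```

In this sketch the per-step waiting penalty lambda_delay is the knob that encodes the earliness/accuracy compromise the abstract refers to: raising it pushes the agent to classify sooner, lowering it lets the agent gather more evidence before deciding. Standard DQN training (experience replay, target network, epsilon-greedy exploration) would be applied on top of this environment.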