Date of Award

Summer 8-16-2024

Level of Access Assigned by Author

Open-Access Thesis

Degree Name

Master of Science (MS)

Department

Computer Engineering

Advisor

Yifeng Zhu

Second Committee Member

Vincent Weaver

Third Committee Member

Vikas Dhiman

Abstract

AI is rapidly emerging as one of the most reliable and effective approaches to prediction across industries. One significant application domain is long sequence time-series forecasting (LSTF). While current LSTF models excel in the temporal dimension, they often neglect the spatial dimension. Considering both dimensions is crucial, however, as many real-world applications demand and benefit from it. Achieving high prediction capacity is essential in both long sequence time-series forecasting and spatiotemporal forecasting, and transformers are employed because of their ability to enhance prediction capacity. This thesis builds upon previous temporal LSTF transformer models and extends them to the spatiotemporal realm to assess whether this approach improves predictions on both novel and established data. The findings reveal that models with Spatial Embedding exhibit enhanced prediction capability when the data distribution varies across locations, but they do not improve prediction when the distribution is similar across locations. Spatial Embedding also improves prediction for less complex models.
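The core idea described above, adding a learned per-location (Spatial) embedding to the inputs of a temporal transformer, can be illustrated with a minimal sketch. All names, shapes, and the use of NumPy here are assumptions for illustration; this is not the thesis implementation, and the embedding table shown is a random placeholder rather than a trained parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

num_locations = 4   # hypothetical number of sensor sites
seq_len = 96        # time steps in one input window
d_model = 16        # model (feature) dimension

# In a real model this table would be a trained parameter:
# one embedding vector per location.
spatial_embedding = rng.normal(size=(num_locations, d_model))

# Input sequences, one per location, already projected to d_model
# (as a temporal transformer's token embeddings would be).
x = rng.normal(size=(num_locations, seq_len, d_model))

# Broadcast each location's embedding across every time step,
# so the encoder can distinguish otherwise similar sequences
# that come from different locations.
x_with_space = x + spatial_embedding[:, None, :]

assert x_with_space.shape == (num_locations, seq_len, d_model)
```

The addition mirrors how positional encodings inject temporal order: here the injected signal identifies *where* a sequence was observed, which matters most when the data distribution differs across locations.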