HyperAI
Time Series Forecasting on ETTh1 (720) 2
Metrics: MAE, MSE

Results

Performance results of various models on this benchmark.

| Model Name | MAE | MSE | Paper Title |
|---|---|---|---|
| DLinear | 0.359 | 0.189 | Are Transformers Effective for Time Series Forecasting? |
| Transformer | 0.4213 | 0.2501 | Long-term series forecasting with Query Selector -- efficient model of sparse attention |
| QuerySelector | 0.373 | 0.2136 | Long-term series forecasting with Query Selector -- efficient model of sparse attention |
| Informer | 0.357 | 0.201 | Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting |
| SCINet | 0.25 | 0.099 | SCINet: Time Series Modeling and Forecasting with Sample Convolution and Interaction |
| PatchTST/64 | 0.236 | 0.087 | A Time Series is Worth 64 Words: Long-term Forecasting with Transformers |
| FiLM | 0.24 | 0.09 | FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting |
| AutoCon | 0.223 | 0.078 | Self-Supervised Contrastive Learning for Long-term Forecasting |
| SegRNN | 0.233 | 0.085 | SegRNN: Segment Recurrent Neural Network for Long-Term Time Series Forecasting |
| PatchMixer | 0.243 | 0.093 | PatchMixer: A Patch-Mixing Architecture for Long-Term Time Series Forecasting |
| NLinear | 0.226 | 0.08 | Are Transformers Effective for Time Series Forecasting? |
| Parallel Series Transformer | 0.286 | 0.129 | How Features Benefit: Parallel Series Embedding for Multivariate Time Series Forecasting with Transformer |
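The MAE and MSE columns report the two standard pointwise error metrics for this benchmark. A minimal sketch of how they are computed, using hypothetical toy arrays rather than the benchmark's actual evaluation code:

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean absolute error: average of |y_true - y_pred| over all points.
    return float(np.mean(np.abs(y_true - y_pred)))

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean squared error: average of (y_true - y_pred)^2 over all points.
    return float(np.mean((y_true - y_pred) ** 2))

# Toy example (hypothetical values, not ETTh1 data):
y_true = np.array([0.1, 0.2, 0.3])
y_pred = np.array([0.1, 0.1, 0.5])
print(mae(y_true, y_pred))  # 0.1
print(mse(y_true, y_pred))  # ~0.0167
```

Both metrics are averaged over every variable and forecast step; lower is better, and MSE penalizes large errors more heavily than MAE.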