Continual Learning
Continual Learning, also known as Incremental Learning or Lifelong Learning, refers to training methods in which a model learns a sequence of tasks while retaining the knowledge acquired on earlier ones. Data from old tasks is no longer accessible once training moves on to a new task; in the common task-incremental setting, a task identifier (task-id) is supplied at evaluation time so the model knows which task an input belongs to. Continual Learning aims to make models adaptable in dynamic environments and is of particular value in applications where the data distribution changes over time.
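To make this protocol concrete, the sketch below shows naive sequential training in PyTorch under the task-incremental setting: each task's loader is the only data available while that task is learned, and the task-id selects a per-task output head. All class and helper names are illustrative assumptions, and the loop deliberately includes no anti-forgetting mechanism; methods like those in the table below add regularization, parameter isolation, or replay on top of a loop like this.

```python
# Minimal sketch of task-incremental training: tasks arrive one at a time,
# old-task data is unavailable, and a task-id selects the matching head.
# MultiHeadNet and make_task_loader are illustrative, not from any library.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class MultiHeadNet(nn.Module):
    """Shared feature extractor with one classification head per task."""
    def __init__(self, in_dim, hidden, classes_per_task):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, c) for c in classes_per_task]
        )

    def forward(self, x, task_id):
        # The task-id picks the head, as in task-incremental evaluation.
        return self.heads[task_id](self.backbone(x))

def make_task_loader(task_id, n=256, in_dim=32):
    # Stand-in for a real task stream (e.g. one Split MNIST task);
    # random data keeps the sketch self-contained.
    x = torch.randn(n, in_dim)
    y = torch.randint(0, 2, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

model = MultiHeadNet(in_dim=32, hidden=64, classes_per_task=[2] * 5)
criterion = nn.CrossEntropyLoss()

for task_id in range(5):                # tasks are seen strictly in sequence
    loader = make_task_loader(task_id)  # only the current task's data exists
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for epoch in range(3):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x, task_id), y)
            loss.backward()             # naive fine-tuning: with no continual
            optimizer.step()            # learning method, shared backbone
                                        # features drift and old tasks degrade
```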
The benchmarks below pair each evaluation setting with the best-performing model reported for it; a dash marks entries with no model listed.

| Benchmark | Best model |
| --- | --- |
| ASC (19 tasks) | CTR |
| Visual Domain Decathlon (10 tasks) | Res. adapt. decay |
| CIFAR-100 (20 tasks) | Model Zoo-Continual |
| Tiny-ImageNet (10 tasks) | ALTA-ViTB/16 |
| F-CelebA (10 tasks) | CAT (CNN backbone) |
| 20Newsgroup (10 tasks) | - |
| CUBS (fine-grained, 6 tasks) | CondConvContinual |
| DSC (10 tasks) | CTR |
| Flowers (fine-grained, 6 tasks) | CondConvContinual |
| ImageNet (fine-grained, 6 tasks) | CondConvContinual |
| Sketch (fine-grained, 6 tasks) | - |
| Stanford Cars (fine-grained, 6 tasks) | CPG |
| WikiArt (fine-grained, 6 tasks) | - |
| CIFAR-100 (10 tasks) | RMN (ResNet) |
| ImageNet-50 (5 tasks) | CondConvContinual |
| Permuted MNIST | RMN |
| Split CIFAR-100 | - |
| 5-dataset (1 epoch) | - |
| 5-Datasets | - |
| CIFAR-100 AlexNet (300 epochs) | - |
| CIFAR-100 ResNet-18 (300 epochs) | IBM |
| CIFAR-100 (20 tasks, 1 epoch) | - |
| Coarse-CIFAR100 | Model Zoo-Continual |
| CUB-200-2011 (20 tasks, 1 epoch) | - |
| mini-ImageNet (20 tasks, 1 epoch) | TAG-RMSProp |
| mini-ImageNet | - |
| MiniImageNet ResNet-18 (300 epochs) | - |
| MLT17 | - |
| Rotated MNIST | Model Zoo-Continual |
| Split CIFAR-10 (5 tasks) | H$^{2}$ |
| Split MNIST (5 tasks) | H$^{2}$ |
| TinyImageNet ResNet-18 (300 epochs) | - |
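Leaderboards such as the one above most commonly rank methods by the average accuracy over all tasks after the final task has been learned, sometimes alongside a forgetting measure such as backward transfer. Below is a minimal sketch of both, assuming an accuracy matrix acc where acc[i][j] is the accuracy on task j measured right after training on task i (the matrix itself would come from an evaluation loop like the one sketched earlier).

```python
# Standard continual-learning summary metrics computed from an accuracy
# matrix acc, where acc[i][j] = accuracy on task j after training on task i.
def average_accuracy(acc):
    """Mean accuracy over all tasks after the final task is learned."""
    final = acc[-1]
    return sum(final) / len(final)

def backward_transfer(acc):
    """Average change in each task's accuracy between learning it and the
    end of training; negative values indicate forgetting."""
    T = len(acc)
    return sum(acc[-1][i] - acc[i][i] for i in range(T - 1)) / (T - 1)

# Example with 3 tasks: task 0 degrades from 0.95 to 0.80 by the end.
acc = [
    [0.95, 0.00, 0.00],
    [0.90, 0.93, 0.00],
    [0.80, 0.88, 0.91],
]
print(average_accuracy(acc))   # (0.80 + 0.88 + 0.91) / 3 ≈ 0.863
print(backward_transfer(acc))  # ((0.80-0.95) + (0.88-0.93)) / 2 = -0.10
```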