TabNet: New Kid on the Boosting Block

"Enter Google's TabNet in 2019. According to the paper, this Neural Network was able to outperform the leading tree-based models across a variety of benchmarks. Not only that, it is considerably more explainable than boosted tree models as it has built-in explainability. It can also be used without any feature preprocessing." ... Read More

Zero-Shot Image Classification – CLIP

Zero-shot text classification appeared last year and was a pretty cool thing. At the beginning of this year, we got an implementation of zero-shot image classification. What a way to start the year :-) ...
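
To make it concrete, here is a minimal sketch of zero-shot image classification with CLIP via the Hugging Face transformers implementation; the checkpoint name, image path, and candidate labels are example assumptions.

```python
# Zero-shot image classification with CLIP (Hugging Face transformers);
# the model id, image file, and labels are illustrative.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Image-text similarity scores turned into label probabilities
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```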

QuantumStat: NLP on Steroids

Another post on NLP and the transfer-learning paradigm. Since, at least in the post-BERT era, training your own NLP model from scratch is rarely worth it, the normal way of doing things is: find a model that is already fine-tuned for your purpose, and use it, ...
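
That "find it and use it" workflow is a couple of lines with the Hugging Face pipeline API; the task and model id below are placeholder examples of what you might pick out of a catalog like QuantumStat.

```python
# Using an already fine-tuned model via the Hugging Face pipeline API;
# the task and model id are illustrative examples.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("This workflow saves weeks of training time."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```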

AdapterHub: New Kid on the Block for Transfer Learning

First there was BERT; at least, all of this started with it. It is a general-purpose language model, pre-trained for different languages and available in different size variations. If you want to use it for a specific task, you have to fine-tune it, meaning that you take ...
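
For flavor, here is roughly what the adapter route looks like, following the AdapterHub quickstart; this sketch assumes the adapter-transformers fork of transformers and that the "sentiment/sst-2@ukp" adapter identifier is available on the hub.

```python
# Loading a pre-trained task adapter instead of fine-tuning the full model;
# assumes the adapter-transformers package and the sst-2 adapter id.
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Download the adapter (and its prediction head) from AdapterHub
adapter_name = model.load_adapter("sentiment/sst-2@ukp")
model.set_active_adapters(adapter_name)

inputs = tokenizer("AdapterHub makes transfer learning cheap.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```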