Zero-shot text classification appeared last year and was a pretty cool thing. At the beginning of this year, an implementation of zero-shot image classification arrived as well. What a way to start the year :-) ... Read More
Introduction Another post related to NLP and the transfer-learning paradigm. Since, at least in the post-BERT era, it is rarely worthwhile to train your own model from scratch for anything NLP-related, the usual way of doing things is: find a model that is already fine-tuned for your purpose, and use it, ... Read More
With the latest HuggingFace BERT-based transformers, it is possible to implement text classification against an arbitrarily chosen set of categories. ... Read More
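As a quick sketch of what that post describes, zero-shot classification against arbitrary categories can be done with the HuggingFace `pipeline` API. The example sentence, label set, and choice of `facebook/bart-large-mnli` as the underlying model are my own illustration, not necessarily what the post uses:

```python
from transformers import pipeline

# Build a zero-shot classifier on top of an NLI model
# (facebook/bart-large-mnli is a common public choice; assumption, not the post's pick).
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Classify a sentence against categories the model was never explicitly trained on.
result = classifier(
    "The new GPU delivers twice the performance of last year's model.",
    candidate_labels=["technology", "sports", "cooking"],
)

# Labels come back sorted by score, highest first.
print(result["labels"][0], round(result["scores"][0], 3))
```

The trick under the hood is that each candidate label is turned into a hypothesis ("This text is about technology.") and scored with natural language inference against the input, so no per-category training data is needed.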
Introduction First, there was BERT; at least, all of this started with it. It is a general-purpose language model, pre-trained for different languages and in different size variations. If you wanted to use it for a specific task, you would fine-tune it in a way that you take ... Read More