One of the key movements in Data Science over the past year, 2020, was the low-code trend. It played out in two approaches:
- AutoML: writing only a few lines of code that runs a vast number of models, reports results for all of them, and selects the most successful one. Examples: PyCaret, H2O, etc.
- libraries that implement common AI tasks in a few lines. Today I'll talk about one of them: fast.ai.
fast.ai is a Python library, based on PyTorch, that implements common Deep Learning tasks in VERY FEW lines of code.
You can find information about it at: fast.ai · Making neural nets uncool again
For example: you want to train an image classifier. Of course, you wouldn't start from scratch: that's the old approach. Nowadays you take an already pretrained neural network and fine-tune it on your images, using the transfer learning approach.
Next, you don't need to think about how to organize images per class or how to label the classes. That's already well established: you simply create one folder per image class and store all of the images for that class there.
Next, you don't have to write code that traverses your folders on disk, recognizes the classes, and then looks for images in those folders. Already done, again.
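As a rough illustration of that convention (this is plain Python, not fastai's actual implementation), labeling images from a folder-per-class layout takes only a few lines:

```python
from pathlib import Path

def label_from_folders(root):
    """Collect (image_path, class_name) pairs from a folder-per-class layout.

    Assumes root/<class_name>/<image files> -- the convention that
    fastai's data loaders recognize out of the box.
    """
    extensions = {".jpg", ".jpeg", ".png"}
    samples = []
    for class_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        for img in sorted(class_dir.iterdir()):
            if img.suffix.lower() in extensions:
                # The folder name is the label -- no separate label file needed
                samples.append((img, class_dir.name))
    return samples
```

With fast.ai you never write even this much: its data loaders do the traversal and labeling for you.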
Next, you run the training procedure. The shortest variant is one epoch: not very customizable, but how often will you really need finer control over it?
Next, you save the model.
Next, you run inference with the model.
All of this is already done with fast.ai.
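Put together, the whole pipeline above fits in roughly ten lines. Here's a sketch using the fastai v2 API (`vision_learner` was called `cnn_learner` in older releases); it assumes fastai is installed, `path` points at a folder-per-class image directory like the one described above, and `test_image` is a real image file:

```python
from pathlib import Path

def train_and_predict(path, test_image):
    """Sketch of the fastai pipeline: load data from folders, fine-tune a
    pretrained network, save (export) the model, and run inference.
    Assumes `path` and `test_image` exist on disk and fastai is installed."""
    from fastai.vision.all import (
        ImageDataLoaders, Resize, vision_learner, resnet34,
        error_rate, load_learner,
    )
    # The folder-per-class layout is picked up and labeled automatically
    dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))
    # Pretrained resnet34 + transfer learning, one call
    learn = vision_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)          # the shortest variant: one epoch
    learn.export("model.pkl")   # saved relative to the learner's path (the data folder)
    # Reload the exported model and run inference
    learn_inf = load_learner(Path(path) / "model.pkl")
    return learn_inf.predict(test_image)
```

`predict` returns the predicted class name, its index, and the per-class probabilities. Actually running this requires fastai and a real image folder; the point is how little of it you write yourself.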
More cool stuff:
- all of this is very low-code, implementable in very few lines. However, if you want better control over the process, you are welcome to take it.
- Integration with wandb.ai: Weights & Biases (wandb.ai). I've already written about them; you can find an article on the site at wandb.ai – AI Daily News (striki.ai)
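Hooking fastai up to Weights & Biases is a one-callback affair. A sketch, assuming both fastai and wandb are installed and you are logged in to wandb (the project name here is an arbitrary example):

```python
def train_with_wandb(path):
    """Sketch: the same training loop as before, with metrics streamed to
    Weights & Biases. Assumes a folder-per-class image directory at `path`."""
    import wandb
    from fastai.vision.all import (
        ImageDataLoaders, Resize, vision_learner, resnet34, error_rate,
    )
    from fastai.callback.wandb import WandbCallback

    wandb.init(project="fastai-demo")  # hypothetical project name
    dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))
    # WandbCallback logs losses, metrics, and hyperparameters to your wandb run
    learn = vision_learner(dls, resnet34, metrics=error_rate, cbs=WandbCallback())
    learn.fine_tune(1)
    wandb.finish()
    return learn
```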
Less cool stuff:
- Newer versions, new problems. Backward compatibility is not their strength.
- Documentation and examples for the features that give you finer control are not ideal. I would expect more examples on GitHub, grouped at a finer granularity, per topic.
OK, maybe not an ideal library, but worth taking a look and giving it a chance.
And let's make neural nets uncool again :-), as stated on their site.