Research Papers, Blogs & Resources
Graph-Based Methods
- DeepWalk: Online Learning of Social Representations
- HOPE: Asymmetric Transitivity Preserving Graph Embedding
- Feature Extraction for Graphs
- An Overview of Feature Extraction in Graphs
Discipline the Method
Loss Functions
- Comprehensive Survey of Loss Functions in Machine Learning
- Classification
- Regression
Tech Blogs
- AirBnB Engineering
- Spotify Research
- Netflix Research
- DoorDash ML Blog
- Uber Engineering
- Lyft Engineering
- Shopify Engineering
- Meta Engineering
- LinkedIn Engineering
- Kaggle Competition Blog
Knowledge Distillation
- [Knowledge Distillation: A Survey](https://arxiv.org/pdf/2006.05525.pdf)
- [Distilling the Knowledge in a Neural Network](https://arxiv.org/pdf/1503.02531.pdf)
- [DistilBERT, a distilled version of BERT](https://arxiv.org/pdf/1910.01108.pdf)
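As a quick illustration of the idea in Hinton et al. above: soften teacher and student logits with a temperature T, then mix the resulting KL term with the usual cross-entropy on hard labels. A minimal sketch (PyTorch; the function name and the values of `T` and `alpha` are illustrative assumptions):

```python
# Minimal sketch of the distillation loss from Hinton et al. (1503.02531).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # KL between temperature-softened distributions, scaled by T^2 as in the paper
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)  # standard supervised term
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits for a 10-class problem
s = torch.randn(32, 10, requires_grad=True)
t = torch.randn(32, 10)
y = torch.randint(0, 10, (32,))
distillation_loss(s, t, y).backward()
```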
Modalities and Mixture of Experts
Deep Learning for Tabular Data
Embeddings
GANs
- GAN = generator + discriminator: the discriminator is the adversarial model that judges the generator's samples (see the training sketch after this list)
- GAN Tricks and Hacks: https://github.com/soumith/ganhacks
- [Some Cool Applications of GANs](https://medium.com/@jonathan_hui/gan-some-cool-applications-of-gans-4c9ecca35900)
- [MNIST GAN using Keras](https://medium.com/datadriveninvestor/generative-adversarial-network-gan-using-keras-ce1c05cfdfd3)
- Deep generative models research
- [Sketch-to-Color Anime Translation using GANs](https://medium.com/@sanjay035/sketch-to-color-anime-translation-using-generative-adversarial-networks-gans-8f4f69594aeb)
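A minimal sketch of that generator/discriminator loop (PyTorch; the toy data, network sizes, and hyperparameters are illustrative assumptions, not taken from the linked posts):

```python
# Minimal GAN training sketch: the discriminator judges samples as real/fake,
# and the generator learns to fool it.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in "real" data
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator: push real -> 1, fake -> 0 (detach so only D updates here)
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: make the updated discriminator output 1 on fakes
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```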
Encoder-Decoders
- Sequence to Sequence Learning with Neural Networks
- Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation
- Deep Visual-Semantic Alignments for Generating Image Descriptions
- [A Neural Network for Machine Translation, at Production Scale](https://ai.googleblog.com/2016/09/a-neural-network-for-machine.html)
- [Smart Compose: Using Neural Networks to Help Write Emails](https://ai.googleblog.com/2018/05/smart-compose-using-neural-networks-to.html)
- [Sequence-to-Sequence Learning for Program Repair](https://medium.com/@martin.monperrus/sequence-to-sequence-learning-program-repair-e39dc5c0119b)
- [Image Captioning with Keras: Teaching Computers to Describe Pictures](https://towardsdatascience.com/image-captioning-with-keras-teaching-computers-to-describe-pictures-c88a46a311b8)
- [Tab-delimited bilingual sentence pairs](http://www.manythings.org/anki/)
- [Keras lstm_seq2seq example](https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py)
Autoencoders
- [UFLDL Tutorial: Autoencoders](http://ufldl.stanford.edu/tutorial/unsupervised/Autoencoders/)
- [Autoencoder (Wikipedia)](https://en.wikipedia.org/wiki/Autoencoder)
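In the spirit of the UFLDL tutorial above, a minimal under-complete autoencoder sketch: the encoder compresses each input to a small code, the decoder reconstructs it, and training minimizes reconstruction error (PyTorch; all dimensions and the random data are illustrative):

```python
# Minimal under-complete autoencoder: learn to reconstruct inputs through
# a low-dimensional bottleneck.
import torch
import torch.nn as nn

input_dim, code_dim = 64, 8
encoder = nn.Sequential(nn.Linear(input_dim, 32), nn.ReLU(), nn.Linear(32, code_dim))
decoder = nn.Sequential(nn.Linear(code_dim, 32), nn.ReLU(), nn.Linear(32, input_dim))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
mse = nn.MSELoss()

x = torch.randn(256, input_dim)     # stand-in dataset
for epoch in range(200):
    code = encoder(x)               # bottleneck representation
    recon = decoder(code)           # reconstruction of the input
    loss = mse(recon, x)            # penalize reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()
```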
Attention
- Neural Machine Translation by Jointly Learning to Align and Translate
- Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
- Attention in Deep Networks with Keras
- Tx as a hyperparameter is discussed in the 2015 paper ([arXiv:1508.04025](https://arxiv.org/abs/1508.04025)), not in the original 2014 attention paper ([arXiv:1409.0473](https://arxiv.org/abs/1409.0473)); in the 2014 paper, Tx is simply the length of the whole input sentence (see the sketch below).
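To make the role of Tx concrete, here is a small NumPy sketch of the additive attention from the 2014 paper, where Tx is the source-sentence length; all shapes and weight values are illustrative assumptions:

```python
# Sketch of Bahdanau-style additive attention (arXiv:1409.0473).
# Tx = length of the input sentence; weights/shapes are illustrative.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

Tx, enc_dim, dec_dim, attn_dim = 6, 4, 4, 8
h = np.random.randn(Tx, enc_dim)        # encoder states h_1 .. h_Tx
s_prev = np.random.randn(dec_dim)       # previous decoder state s_{t-1}

W_a = np.random.randn(attn_dim, dec_dim)
U_a = np.random.randn(attn_dim, enc_dim)
v_a = np.random.randn(attn_dim)

# e_j = v_a . tanh(W_a s_{t-1} + U_a h_j); alpha = softmax(e); c = sum_j alpha_j h_j
e = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h[j]) for j in range(Tx)])
alpha = softmax(e)                      # attention weights over the Tx positions
context = alpha @ h                     # context vector fed to the decoder
```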
Transformers
Explainable AI
- "Why Should I Trust You?": Explaining the Predictions of Any Classifier (LIME)
- Integrated Gradients
- Robustness of Interpretability Methods
- Interpretable Machine Learning Web Book
- LIME TDS 1 | LIME Blog | LIME Text Explain