I make sure I read a lot of research papers to keep up with the interesting ideas floating around in the field of Machine Learning. Beyond staying current with research, it is also worth revisiting the past and giving older papers a read.
It often takes many hours, and there is no guarantee that one walks away with the whole story. To be open and transparent: reading papers is arduous at the start, there are no two ways about it. Sometimes I find myself trying hard to recollect a past read.🤔
“What was that paper?? Damn! I don’t remember.”
To avoid such situations, I recently started documenting the papers that I read, writing a brief summary of each one. This not only solves the problem of documenting ideas, but also helps me maintain the habit of reading papers.
- HydroNets: Leveraging River Structure for Hydrologic Modeling [ Summary ]
- CompGCN: Composition-Based Multi-Relational Graph Convolutional Networks [ Summary ]
- SPECTER: Document-level Representation Learning using Citation-informed Transformers [ Summary ]
- Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks [ Summary ]
- Relational inductive biases, deep learning, and graph networks [ Summary ]
- Directional Message Passing For Molecular Graphs [ Summary ]