Recently I have been sharing a lot more papers related to machine learning. Lots of exciting progress, new ideas, and emerging trends.
Let's review a few that I recently shared ↓
We are seeing great success with Transformer architectures as efficient and effective models for NLP tasks. Transformers have also started to make their way into the world of computer vision and beyond. Check out this survey for an overview:
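To make the vision angle concrete, here is a minimal sketch (my own illustration, not code from the survey) of the core trick behind vision Transformers: slice an image into fixed-size patches and linearly project each patch into a token embedding, so a standard Transformer encoder can consume the image as a sequence. It assumes PyTorch, and `image_to_patch_tokens` is a made-up name:

```python
import torch

def image_to_patch_tokens(img, patch=16, dim=64):
    # img: (batch, channels, H, W); H and W must be divisible by `patch`
    b, c, h, w = img.shape
    # Carve the image into non-overlapping patch x patch tiles
    p = img.unfold(2, patch, patch).unfold(3, patch, patch)
    p = p.contiguous().view(b, c, -1, patch, patch)
    # Flatten each tile into a vector: (batch, num_patches, c * patch * patch)
    p = p.permute(0, 2, 1, 3, 4).reshape(b, -1, c * patch * patch)
    # Toy linear projection into the token embedding space
    proj = torch.nn.Linear(c * patch * patch, dim)
    return proj(p)  # (batch, num_patches, dim)

tokens = image_to_patch_tokens(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 64])
```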
Object segmentation is one of the fundamental problems in computer vision, but it's far from solved. New results described in the paper below show the potential to extend recent ideas to more complex problems like video object segmentation:
I am personally very excited about neural rendering because of the advanced applications these techniques enable. It's a very hard problem, but newer ideas from computer graphics and ML are starting to make strides:
We need powerful mathematical ideas and algorithms for dealing with complex data structures, and GNNs fit those needs. They are being applied to all kinds of difficult problems and keep improving in robustness and scalability:
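For anyone new to GNNs, here is a toy sketch (mine, not from the paper) of the mean-aggregation message-passing step that most GNN variants build on: each node averages the features of its neighbors (and itself) and pushes the result through a learned transformation. Only NumPy is assumed:

```python
import numpy as np

def gnn_layer(H, A, W):
    # H: (nodes, features), A: adjacency matrix, W: learned projection
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # neighborhood sizes
    H_agg = (A_hat @ H) / deg               # mean over each neighborhood
    return np.maximum(0, H_agg @ W)         # ReLU(H_agg W)

# Toy graph: a 3-node path 0-1-2 with 4-dim node features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 4)
W = np.random.randn(4, 4)
print(gnn_layer(H, A, W).shape)  # (3, 4)
```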
Pre-trained models such as BERT and GPT-3 have revolutionized the NLP field. They are very effective at a wide variety of NLP tasks. The paper below provides an up-to-date overview of recent advances in large pre-trained models for NLP:
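As a quick taste of why practitioners love these models, here is a short example using the Hugging Face `transformers` library (my choice of tooling, not something prescribed by the paper). The default sentiment pipeline pulls down a fine-tuned BERT-family model on first use:

```python
# pip install transformers  (model weights download on the first call)
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Pre-trained language models transfer remarkably well."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```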
Lots of architectural improvements are being proposed in the Transformer regime, aimed at better efficiency or effectiveness. This recent work proposes an efficient hierarchical architecture for handling long sequences. A great read overall:
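To give a flavor of what "hierarchical" can mean here, below is a toy two-level encoder (my own illustration of the general idea, not the paper's actual architecture): run self-attention locally within fixed-size chunks, pool each chunk into a summary, then attend over the summaries. It assumes PyTorch 1.9+ for `batch_first`:

```python
import torch
import torch.nn as nn

class ToyHierarchicalEncoder(nn.Module):
    def __init__(self, dim=64, chunk=32, heads=4):
        super().__init__()
        self.chunk = chunk
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (batch, seq, dim), seq divisible by chunk
        b, s, d = x.shape
        c = x.view(b * (s // self.chunk), self.chunk, d)
        c, _ = self.local_attn(c, c, c)  # O(chunk^2) attention per chunk
        summaries = c.mean(dim=1).view(b, s // self.chunk, d)
        out, _ = self.global_attn(summaries, summaries, summaries)
        return out  # (batch, num_chunks, dim): one token per chunk

x = torch.randn(2, 128, 64)
print(ToyHierarchicalEncoder()(x).shape)  # torch.Size([2, 4, 64])
```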
While it's fun to see all the new improvements in deep learning architectures, there is nothing more pressing and exciting than responsibly applying them to the real world. Check out this survey on applications and techniques for fast ML in Science:
IMO, we are far away from reaping the benefits of quantum computing in machine learning, but I sense it could eventually help us build powerful and scalable ML systems that solve problems which weren't tractable before. In NLP specifically, here is recent work:
There's a lot of progress happening in deep learning across a wide range of applications. That said, deep learning approaches still underperform on tabular data. The following paper provides an overview of DNNs for the tabular domain:
As ML practitioners, we are constantly looking for best practices on how to apply machine learning concepts and algorithms to real-world applications. Here is one interesting recent paper on the topic:
Here is another paper discussing machine learning practices outside "Big Tech". There is a lot to learn here, especially because the availability of computing resources is currently a big challenge for the widespread development of ML systems at scale:
Those are some of the interesting machine learning papers I shared in the last month or so.
I will continue to do more research and continue to share some of the exciting work happening in the field.
Follow @omarsar0 if you don't want to miss these paper updates in the future.