Computing hardware is getting really freaking powerful! I think edge inference and on-device ML will take off very quickly.
It's pretty exciting to think about what this means for industry ML development and some new cool problems we can work on: (1/5)
1. Continuous delivery. How do you ship new releases in a way that doesn't overwhelm the end user? (Good) ML models may need frequent updates. I already get annoyed at the number of Docker Desktop updates, lol. (2/5)
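One way to ship frequent model updates without hammering every user at once is a staged rollout on the client side. Here's a minimal sketch (the function name, version strings, and percentage scheme are all hypothetical): each device deterministically buckets itself into [0, 100) per release, so a given device's decision is stable but each release reshuffles who goes first.

```python
import hashlib

def should_update(device_id: str, new_version: str, rollout_pct: float) -> bool:
    """Deterministically bucket a device into [0, 100) for a staged rollout.

    Hashing device_id together with the release means each release
    re-shuffles which devices update first, while a given device's
    decision stays stable for that release (no update flip-flopping).
    """
    digest = hashlib.sha256(f"{device_id}:{new_version}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # value in [0, 100)
    return bucket < rollout_pct

# Example: offer a new model to ~10% of devices first,
# then widen rollout_pct as telemetry comes back clean.
print(should_update("device-42", "model-v2.3", 10.0))
```

The same idea underlies how app stores and browsers stage their own updates; the server only has to publish a version and a percentage.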
2. Authentication. ML "as a service" has been fairly cloud-based so far, so OAuth has worked decently. I think it's likely we move away from REST for MLaaS. It would be super cool to do authentication at the data & function transformation levels. (3/5)
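To make "authentication at the function transformation level" concrete, here's a minimal sketch using an HMAC over both the transformation name and its arguments (the secret, function names, and payload shape are hypothetical): a tag minted for one transformation can't be replayed against another.

```python
import hashlib
import hmac
import json

SECRET = b"per-client-shared-secret"  # hypothetical; would come from key exchange

def sign_call(fn_name: str, payload: dict) -> str:
    """Sign a specific call, not just a session: the MAC covers the
    transformation name AND its canonicalized arguments."""
    msg = fn_name.encode() + b"\n" + json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_call(fn_name: str, payload: dict, tag: str) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(sign_call(fn_name, payload), tag)

tag = sign_call("normalize", {"col": "age"})
print(verify_call("normalize", {"col": "age"}, tag))  # True
print(verify_call("drop_pii", {"col": "age"}, tag))   # False: different function
```

Contrast with a bearer token on a REST endpoint, which authorizes the whole surface at once rather than individual transformations.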
3. More fragmentation in the end-to-end ML pipeline. We don't really know how to develop and fix ML pipelines on our own clusters, even when we have root access and can select * on PII. How are we going to do it on other people's machines? (4/5)
This post would not be complete if I didn't acknowledge the many more cool areas like federated learning and distributed training (possibly the same thing, depending on who I ask), compiler optimization, MLIR, building Bigger and Deeper networks, etc. Back to work 🙃 (5/5)