Shreya Shankar

@sh_reya

6 Tweets Dec 09, 2022
I thought about this a lot when releasing ML APIs that include classification model outputs in the response. Example: individual probabilities may be hard to trust, but a calibration curve can help. If the user knows what that means, they feel smart and are more likely to engage. 1/6
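(For context, a minimal sketch of what such a calibration curve could look like, assuming scikit-learn and a generic binary classifier; the data and model here are placeholders, not the actual API:)

```python
# Sketch: calibration curve for a binary classifier's predicted probabilities.
# Assumptions: scikit-learn and matplotlib available; synthetic data stands in
# for real API predictions.
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data and model; in practice these would be the API's own outputs.
X, y = make_classification(n_samples=5000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]

# Fraction of actual positives vs. mean predicted probability, per bin.
frac_pos, mean_pred = calibration_curve(y_test, probs, n_bins=10)

plt.plot(mean_pred, frac_pos, marker="o", label="model")
plt.plot([0, 1], [0, 1], linestyle="--", label="perfectly calibrated")
plt.xlabel("Mean predicted probability")
plt.ylabel("Fraction of positives")
plt.legend()
plt.show()
```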
But there is a fine line between demonstrating your competence through cool plots and through obscurity. I’ve seen ML monitoring tools show 20+ plots where it’s unclear what 19 of them are doing other than flexing. If the user doesn’t understand your product, you’re toast. 2/6
I learned this the hard way when I released some predictions wrapped in technical buzzwords (like softmaxed outputs and baseline-agnostic risk ratio) and didn’t hear back. The email is probably still sitting in a customer’s inbox, sigh. 3/6
I went through 3+ cycles of turning the same model’s outputs into a product. The thing that “stuck” for us was a minimal set of tables and plots that implicitly suggested how to take action given historical model performance. 4/6
All figures had to be simple enough for anyone to understand, but at least one needed to be just complicated enough that a customer believed they couldn’t make it on their own. 5/6
Anyways, it was a fun lesson I learned, and I see it play out everywhere around me. In new programming languages and frameworks. In new dev tools. Etc. It is truly important for a user to understand everything and simultaneously feel uniquely smart while using it. 6/6