Some things about machine learning products just baffle me. For example, I'm curious why computer vision APIs release "confidence scores" alongside generated labels. What's the business value? And does that value outweigh the potential security concerns? (1/4)
Wouldn't publishing these confidence scores make it easier for an adversary to "steal" the model (e.g., fine-tune a model to minimize the KL divergence between its softmaxed outputs and the API-assigned scores — sketched below)? Or even attack the model, since the scores let you approximate what its parameters do? (3/4)
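A minimal sketch of that extraction idea, assuming a hypothetical `query_api` that returns the vendor's per-label confidence scores and an arbitrary local `student` model (both names are placeholders, not any real vendor's API):

```python
import torch
import torch.nn.functional as F

def distillation_step(student, optimizer, images, query_api):
    """One step of training a local model to mimic the API's published scores."""
    api_probs = query_api(images)  # (batch, num_labels) confidence scores from the API
    log_probs = F.log_softmax(student(images), dim=-1)  # student's softmaxed outputs, in log space
    # KL(api_probs || student_probs): pushes the student's output distribution
    # toward the scores the API publishes for the same inputs.
    loss = F.kl_div(log_probs, api_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

With enough queried inputs, the student converges toward the API's input-to-score mapping, which is exactly the "stealing" concern above.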
I would love to hear more about how people think about productizing machine learning. What do you obfuscate, and what do you publish to the user? (4/4)