studying for the Berkeley DB prelim after having built first-gen MLsys (feature stores, prediction serving, etc.) on top of Postgres, Hadoop, Spark, etc. is kind of bonkers
I see how frustrating it must be for academics to watch industry-born projects repeatedly converge on the same lessons: SQL is the best interface, networking costs are real, one size actually doesn't fit all, and so on
I also see how frustrating it is for industry people to open the latest conf proceedings and read a million ML-inspired query optimization papers
It is also fun to read transcripts of conversations from the System R (an early relational DBMS) reunion. A treasure trove of gossip that reassured me that the competitive, scoop-y, fear-mongering culture of empirical ML will probably die out within a couple of decades