Internet of Things (IoT)

IoT devices such as wearable gadgets and smartphones are severely resource-constrained, in both memory and computation. State-of-the-art learning models (e.g., deep neural networks, decision tree ensembles) are extremely bulky and cannot fit on IoT devices due to their large memory footprints and computation costs. In principle, one could keep the models in the cloud and have the IoT device query them when needed, but the latency from the communication overhead is often prohibitive. On the other hand, linear models such as Lasso and Elastic-Net, while sparse and interpretable, are not powerful enough for most applications.

We design a new class of compact and interpretable models for resource-impoverished settings by combining the computational advantages of sparse predictors with the non-linear expressivity of multiple prototypes. Specifically, we introduce two different convex formulations. One results in learning representations richer than those obtained from Lasso- and Elastic-Net-based methods, while the other amounts to improper learning of k-DNFs (disjunctions of conjunctions with at most k literals each) via a Boolean relaxation.
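
To make the prototype idea concrete, here is a minimal sketch of how prediction with such a model might look: the score is a sparse weighted combination of similarities between the input and a small set of learned prototypes, so only the few prototypes with nonzero weight are ever touched at inference time. The function name, the RBF similarity, and the toy data below are illustrative assumptions, not the exact formulation from the papers.

```python
import numpy as np

def predict(x, prototypes, weights, gamma=1.0):
    """Score an input as a sparse weighted sum of similarities to prototypes.

    x          : (d,) input feature vector
    prototypes : (m, d) learned prototype vectors
    weights    : (m,) weight vector, mostly zero after sparse training
    gamma      : bandwidth of the RBF similarity (an assumed choice of kernel)
    """
    # Non-linearity comes from the similarity to each prototype.
    sims = np.exp(-gamma * np.sum((prototypes - x) ** 2, axis=1))
    # Sparsity in `weights` keeps memory and compute small enough
    # for a resource-constrained device.
    return float(weights @ sims)

# Toy usage: 3 prototypes in 4 dimensions, only 2 carry nonzero weight.
rng = np.random.default_rng(0)
B = rng.normal(size=(3, 4))   # learned prototypes (illustrative values)
w = np.array([0.7, 0.0, -1.2])
x = rng.normal(size=4)
print(predict(x, B, w))
```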

More info:
Paper 1    Paper 2