Our methods are grounded in statistical machine learning and pattern recognition.
We are now focused on:
- Probabilistic models
Our work includes probabilistic graphical models, such as Markov models, hierarchical hidden Markov models, conditional random fields, and, more recently, Bayesian non-parametric models.
We ask two questions:
- Which models can we construct that remain computationally scalable on big-data problems?
- How do we enhance the expressiveness of such models, which representations should they use, and can we develop appropriate inference and learning algorithms?
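As a small illustration of the probabilistic graphical models mentioned above, the sketch below implements the forward algorithm for a toy hidden Markov model with NumPy. The transition, emission, and initial distributions are invented for illustration and do not correspond to any model from our work.

```python
import numpy as np

# Toy HMM: 2 hidden states, 2 observation symbols (illustrative values).
A = np.array([[0.7, 0.3],      # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # emission probabilities per state
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution

def forward_likelihood(obs):
    """Forward algorithm: P(obs) under the HMM, in O(T * N^2) time."""
    alpha = pi * B[:, obs[0]]            # initialize with first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate and re-weight by emission
    return alpha.sum()

print(forward_likelihood([0, 1, 0]))
```

The same recursion underlies inference in richer variants such as hierarchical HMMs, where the state space is structured but the dynamic-programming principle is unchanged.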
- Compressed sensing and sparsity modeling
Which sparse, robust models can we construct for big data?
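A minimal sketch of the sparsity idea, assuming a standard compressed-sensing setup (not our specific methods): recover a sparse signal from fewer linear measurements than dimensions by l1-regularized least squares, solved here with iterative soft-thresholding (ISTA). All dimensions and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressed sensing: recover sparse x from m < n measurements y = Phi @ x.
n, m, k = 50, 25, 3                       # signal length, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true

def ista(y, Phi, lam=0.01, n_iter=500):
    """ISTA for min_x 0.5*||Phi x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = x - (Phi.T @ (Phi @ x - y)) / L                     # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft-threshold
    return x

x_hat = ista(y, Phi)
print(np.linalg.norm(x_hat - x_true))     # recovery error; small for sparse x_true
```

Accelerated variants (e.g. FISTA) and structured-sparsity penalties follow the same template, which is one reason sparsity models scale well to big data.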