About Us

SAIGE is a group of engineers researching machine learning to solve real-world signal processing problems.

We live in a world full of complex problems. For an AI to solve them, it usually takes a complex model that can closely approximate the problem. In this era of AI, we can finally afford such complex models, thanks to technological advances in computing power, theoretical machine learning research, and the availability of big data. As a result, we sometimes see AI begin to compete with human intelligence on certain problems.

However, we believe that AI is still missing many powerful features that natural intelligence surely possesses. For example, does your brain need ten times more sugar when it solves a problem that is ten times more difficult? Maybe a little more, but not that much. Does a mosquito need much energy to sense its target and fly to it accordingly? The intelligent systems found in nature are not only effective but also efficient. One of SAIGE's main research agendas is to build machine learning models that run more efficiently at test time and in hardware. Such systems range from deep neural networks and probabilistic topic models that are defined and operate in a bitwise fashion, to psychoacoustically informed cost functions for training less complex models that still produce perceptually equivalent results.
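To give a concrete, if simplified, feel for the bitwise idea (a toy sketch, not SAIGE's actual implementation), the following Python snippet shows a feedforward layer whose weights and activations are constrained to ±1. With everything binarized, each dot product reduces to integer accumulation, which in hardware can be realized with XNOR and popcount operations instead of floating-point multiplications.

```python
import numpy as np

def sign(x):
    # Binarize to {-1, +1}; ties break toward +1.
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bitwise_layer(x_bin, W_bin):
    # With x and W in {-1, +1}, the dot product needs only integer
    # accumulation; in hardware it maps to XNOR plus popcount.
    return sign(W_bin @ x_bin)

# Hypothetical usage: a tiny two-layer bitwise network on a random input.
rng = np.random.default_rng(0)
x  = sign(rng.standard_normal(16))        # binarized input vector
W1 = sign(rng.standard_normal((8, 16)))   # binarized weights, layer 1
W2 = sign(rng.standard_normal((4, 8)))    # binarized weights, layer 2
y  = bitwise_layer(bitwise_layer(x, W1), W2)
print(y)  # four values in {-1, +1}
```

The point of the sketch is the cost profile: once trained, such a network needs no floating-point arithmetic at test time.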

Another important intelligent behavior is collaboration. It is a rather abstract concept and not straightforward for a computational model to mimic, but we have found some interesting applications that can benefit from collaboration between devices and sensors. For example, we have been interested in consolidating many different audio signals recorded by various devices to recover the commonly dominant source in the audio scene. Since each recording contains both the dominant source of interest and its own artifacts (e.g., additive noise, reverberation, band-pass filtering), a naïve average of the recordings is not a good solution to this problem. We call this kind of problem collaborative audio enhancement. Another kind of collaboration in nature happens between different sensors, as when we recognize someone else's emotion by looking at her facial expression as well as listening to her voice. Building a machine learning model that fuses the decisions made from these different kinds of sensor signals is therefore another of our research directions.
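To make the consolidation problem concrete, here is a toy Python sketch (an illustration of ours, not the group's actual method) comparing a naïve average of noisy recordings against a weighted one that favors cleaner devices. The sketch assumes the per-device noise power is given; a real collaborative audio enhancement system would have to infer each device's artifacts from the recordings themselves.

```python
import numpy as np

def naive_average(recordings):
    # Treats every device equally, so a single very noisy or
    # band-limited recording degrades the consolidated estimate.
    return np.mean(recordings, axis=0)

def weighted_consolidation(recordings, noise_power):
    # Hypothetical weighting: each recording is weighted by the
    # inverse of its (assumed known) noise power, so cleaner
    # devices contribute more to the estimate.
    w = 1.0 / np.asarray(noise_power)
    w = w / w.sum()
    return np.tensordot(w, recordings, axes=1)

# Hypothetical demo: one clean source, three devices with unequal noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 8000)
source = np.sin(2 * np.pi * 440 * t)          # the dominant source
noise_power = [0.01, 0.25, 1.0]               # per-device noise levels
recordings = np.stack([
    source + rng.normal(0, np.sqrt(p), t.size) for p in noise_power
])

for name, est in [("naive", naive_average(recordings)),
                  ("weighted", weighted_consolidation(recordings, noise_power))]:
    mse = np.mean((est - source) ** 2)
    print(f"{name:8s} MSE: {mse:.4f}")
```

Even this crude weighting beats the naïve average because the noisiest device no longer drags the estimate down; the research question is how to estimate such weights, and richer artifacts like reverberation and band-pass filtering, blindly from the recordings alone.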