Some Past Research Projects (not a complete list)

 

Image source: Caltech database

Automatic Portrait Beautification

In collaboration with Changhyung Lee (now at Samsung), Morgan Schramm (HP), and Prof. Jan Allebach (ECE, Purdue).

This research aims to develop automatic methods to improve the appearance of subjects in digital pictures. Portrait beautification is routinely performed in the fashion industry using image editing software such as Photoshop. Processing images this way is slow and costly. Our methods allow one to process a large number of pictures quickly and without user input.

Different beautification steps have been considered.

  • Skin Smoothing. Smoothing the skin reduces the appearance of wrinkles and blemishes in the subject. To learn about our proposed method for automatic skin smoothing, look at this presentation or see the following publication; a small illustrative sketch follows the reference:

C. Lee, M. Schramm, M. Boutin, and J. Allebach, "An Algorithm for Automatic Skin Smoothing in Digital Portraits," IEEE Int'l Conference on Image Processing (ICIP), 2009. pdf
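As a rough illustration of the general idea (edge-preserving smoothing restricted to skin regions), the sketch below uses OpenCV's bilateral filter. It is not the algorithm from the paper above, and the skin mask is assumed to come from a separate detection step.

```python
# Minimal sketch of edge-preserving skin smoothing (illustrative only;
# not the algorithm from the ICIP 2009 paper above).
import cv2
import numpy as np

def smooth_skin(img_bgr, skin_mask):
    """Blend a bilateral-filtered version of the image into the skin regions.

    img_bgr   : uint8 BGR image.
    skin_mask : float mask in [0, 1], 1 where skin was detected
                (skin detection itself is assumed to be done elsewhere).
    """
    # Bilateral filtering smooths fine texture (wrinkles, blemishes) while
    # preserving strong edges such as the face contour and the eyes.
    smoothed = cv2.bilateralFilter(img_bgr, d=9, sigmaColor=40, sigmaSpace=9)

    # Blend: keep the original image outside the skin mask.
    mask3 = np.dstack([skin_mask] * 3).astype(np.float32)
    out = mask3 * smoothed.astype(np.float32) + (1.0 - mask3) * img_bgr.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```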

This research was funded by a grant from the Hewlett-Packard Company.

 

     

 

Hardware Friendly Descreening

In collaboration with Hasib Siddiqui (now at Qualcomm) and Prof. Charles A. Bouman (ECE, Purdue).

In this project, we developed an efficient descreening algorithm that only requires a small number of fixed-point operations. More precisely, our algorithm requires a total of 181 additions, 18 multiplications, and 117 bitwise shifts per pixel, making it particularly attractive for hardware implementation.
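The toy example below illustrates the style of fixed-point, shift-based arithmetic that makes such algorithms hardware friendly; it implements a simple 3x3 binomial smoothing filter using only integer additions and bit shifts, and is not the descreening algorithm itself.

```python
# Illustration of fixed-point, shift-based filtering (adds and bit shifts only).
# This is NOT the descreening algorithm from the papers below; it only shows
# the kind of arithmetic that makes such algorithms hardware friendly.
import numpy as np

def binomial_smooth_3x3(img):
    """3x3 binomial filter [1 2 1; 2 4 2; 1 2 1] / 16 on a 2D uint8 image,
    using only integer additions and shifts (no multiplications)."""
    p = np.pad(img.astype(np.int32), 1, mode='edge')
    out = np.empty_like(img, dtype=np.int32)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = p[y:y+3, x:x+3]
            # Weights 1, 2, 4 realized as left shifts; the division by 16 is a
            # final right shift (with +8 for rounding).
            acc = (win[0, 0] + win[0, 2] + win[2, 0] + win[2, 2]
                   + ((win[0, 1] + win[1, 0] + win[1, 2] + win[2, 1]) << 1)
                   + (win[1, 1] << 2))
            out[y, x] = (acc + 8) >> 4
    return out.astype(np.uint8)
```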

For more information, see the following publications:

  • H.S. Siddiqui, M. Boutin, and C.A. Bouman, "Hardware Friendly Descreening," IEEE Int'l Conference on Image Processing (ICIP), San Diego, CA, October 12-15, 2008. pdf
  • H. Siddiqui, M. Boutin, and C.A. Bouman, "Hardware Friendly Descreening," IEEE Transactions on Image Processing, to appear (2008). pdf

This research was funded by a grant from the Hewlett-Packard Company.

Light-weight methods for text area identification in natural images

In collaboration with Prof. Ed Delp (ECE, Purdue) and Next Wave Systems LLC.

In this project, we developed methods for identifying the areas of an image that contain text. The methods are specifically designed to be deployed on devices with a small processor and limited energy, such as a cellular phone or a PDA. Moreover, they perform equally well regardless of the language or font used.
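As a rough sketch of what a lightweight text-area cue can look like (this is not the method from the paper below, and the block size and thresholds are arbitrary), one can threshold the block-wise density of strong horizontal intensity transitions:

```python
# Generic, lightweight text-area heuristic (block-wise edge density).
# Only an illustration of the kind of low-cost processing targeted here;
# it is not the segmentation method from the ICIP 2008 paper below.
import numpy as np

def text_candidate_blocks(gray, block=16, thresh=0.12):
    """Return a boolean map of blocks whose density of strong horizontal
    intensity transitions is high, a crude cue for printed text."""
    g = gray.astype(np.int16)
    # Strong horizontal gradients (text strokes produce many of them).
    edges = np.abs(np.diff(g, axis=1)) > 40
    h, w = edges.shape
    hb, wb = h // block, w // block
    density = edges[:hb * block, :wb * block].reshape(hb, block, wb, block).mean(axis=(1, 3))
    return density > thresh
```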

For more information, see the following publication:

  • S.A. Jafri, M. Boutin, and E. Delp, "Automatic text area segmentation in natural images," IEEE Int'l Conference on Image Processing (ICIP), San Diego, CA, October 12-15, 2008.

 

This research is funded by a DARPA STTR grant.


 

Automatic translation of text in natural scenes using a mobile device

In collaboration with Prof. Ed Delp (ECE, Purdue) and Next Wave Systems LLC.

In this project, we are developing a hand-held translation device using the built-in camera of a commercially available cellular phone or PDA. We are targeting languages written in a character set different from that of English. Without significant knowledge of such a language, it is extremely difficult to obtain a translation with currently available translation devices because of the difficulty of entering the text by hand. By taking a picture of the sign instead, we circumvent this problem. See this press release.

For more information, see the following publication:

  • S.A.R. Jafri, A.K. Mikkilineni, M. Boutin, and E.J. Delp, "The Rosetta Phone: a real-time system for automatic detection and translation of signs," IS&T/SPIE joint symposium, Multimedia on Mobile Devices 2008, San Jose, CA, Jan.-Feb. 2008.
  • S.A. Jafri, M. Boutin, and E. Delp, "Automatic text area segmentation in natural images," IEEE Int'l Conference on Image Processing (ICIP), San Diego, CA, October 12-15, 2008.

 

This research is funded by a DARPA STTR grant.

 

 

Software friendly color trapping

In collaboration with Haiyin Wang and Prof. Jan Allebach (ECE, Purdue).

For mechanical reasons, the color planes of an image are often shifted with respect to each other when they are printed on a color printer. This phenomenon, called "color plane mis-registration," creates gap and halo artifacts in the printed image. Color trapping is an image processing technique that modifies the edges of an image to hide the effects of small color plane mis-registrations. In this project, we are developing efficient automatic color trapping algorithms for software or firmware implementation.
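The toy sketch below illustrates the basic idea on two binary ink planes: one plane is spread slightly under the other along their common boundary, so that a small registration shift does not open a white gap. It is only an illustration of the concept, not one of the algorithms from the papers below.

```python
# Toy illustration of color trapping on binary cyan/magenta ink masks.
# Not one of the algorithms from the papers below.
import numpy as np
from scipy.ndimage import binary_dilation

def trap_cm(cyan, magenta):
    """cyan, magenta: boolean ink masks for two color planes."""
    # Pixels of the cyan region that lie right next to the magenta region.
    boundary = binary_dilation(magenta) & cyan
    # Spread magenta into those boundary pixels (a one-pixel overprint),
    # so a small mis-registration no longer exposes white paper.
    trapped_magenta = magenta | boundary
    return cyan, trapped_magenta
```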

For more information, see the following publications:

  • H. Wang, M. Boutin, J. Trask, and J. Allebach, "An efficient method for color trapping," IS&T/SPIE joint symposium, Color Imaging XIII: Processing, Hardcopy, and Applications, San Jose, CA, Jan.-Feb. 2008. pdf
  • H. Wang, M. Boutin, J. Trask, and J. Allebach, "Three efficient low-cost algorithms for automatic color trapping," submitted to IEEE Transactions on Image Processing, 2008.

This research is funded by a grant from the Hewlett-Packard Company.

 

   

A novel representation method for the shape of point configurations

In collaboration with Prof. Gregor Kemper (Math, TU Munich).

In this work, we showed that the distribution of distances (i.e., the multiset of pairwise distances) is a faithful representation of the shape of generic point configurations. In other words, we showed that, given any two point configurations not belonging to an exceptional set of measure zero, there exists a rotation, translation, and reflection mapping the first configuration onto the second if and only if the distributions of distances of the two configurations coincide. We extended this result to point configurations in other spaces transformed by a variety of groups. We also derived an analogous result for the problem of recognizing the shape of a Gaussian mixture from samples of that mixture. Our main motivation is the problem of recognizing the shape of an object represented by points, given noisy measurements of these points.
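The easy direction of this statement (invariance of the distance distribution under rigid motions) can be checked numerically with a few lines of code; the papers below establish the much harder converse for generic configurations.

```python
# Numerical check of the invariance direction: the sorted list of pairwise
# distances of a point configuration is unchanged by rotations, translations
# and reflections, so the distributions can be compared directly.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
P = rng.standard_normal((6, 2))               # a generic 6-point configuration in the plane

theta = 1.3                                    # apply a rigid motion (rotation + translation)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Q = P @ R.T + np.array([2.0, -1.0])

d_P = np.sort(pdist(P))                        # distribution (multiset) of pairwise distances
d_Q = np.sort(pdist(Q))
print(np.allclose(d_P, d_Q))                   # True: the two distributions coincide
```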

For more information, see the following publications:

  • M. Boutin and G. Kemper, "On reconstructing n-point configurations from the distribution of distances or areas," Adv. Appl. Math. 32, 709-735, 2004. ps , pdf
  • M. Boutin and G. Kemper, "On Reconstructing Configurations of Points in P2 from a Joint Distribution of Invariants," Applicable Algebra in Engineering, Communication and Computing, 15(6):361-391, 2005. ps , pdf
  • M. Boutin and G. Kemper, "Which Point Configurations are Determined by the Distribution of their Pairwise Distances," Int. J. Comput. Geometry and Appl., 17(1):31-43, 2007. ps , pdf
  • M. Boutin, K. Lee, and M. Comer, "Lossless shape representation using invariant statistics: the case of point-sets," Proceedings of the Asilomar Conference on Signals, Systems and Computers, October 29-November 1, 2006, Pacific Grove, CA, USA. ps, pdf
  • M. Boutin and M. Comer, "Faithful Shape Representation for 2D Gaussian Mixtures," IEEE Int'l Conference on Image Processing (ICIP), San Antonio, September 2007. ps, pdf

 

 

Efficient Graph Representations for browsing and indexing

In collaboration with Prof. Gregor Kemper (Math, TU Munich).

In this work, we developed novel representations for weighted graphs. These representations are constructed from simple statistics of the graph, such as the distribution of the edge weights and the distribution of the sums of adjacent weights. They provide a polynomial-time algorithm for comparing graphs which gives the right answer in the vast majority of cases; in particular, we have shown that some of our proposed representations yield the right answer with probability one. The motivation for this work is the problem of browsing through a large set of graphs.
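A minimal sketch of such statistics follows (the choices here are generic, not necessarily the exact ones from the paper): the sorted edge weights and the sorted per-vertex sums of adjacent weights are both invariant under vertex relabeling, so two graphs can be compared by comparing these distributions.

```python
# Sketch of relabeling-invariant graph statistics (generic choices, for
# illustration only; not necessarily those analyzed in the paper below).
import numpy as np

def graph_signature(W):
    """W: symmetric weighted adjacency matrix with zero diagonal."""
    iu = np.triu_indices_from(W, k=1)
    edge_weights = np.sort(W[iu])            # distribution of the edge weights
    vertex_sums = np.sort(W.sum(axis=1))     # distribution of sums of adjacent weights
    return edge_weights, vertex_sums

def same_signature(W1, W2):
    e1, v1 = graph_signature(W1)
    e2, v2 = graph_signature(W2)
    return np.allclose(e1, e2) and np.allclose(v1, v2)

# Example: a relabeled copy of a random weighted graph has the same signature.
W = np.triu(np.random.rand(5, 5), 1)
W = W + W.T
perm = np.random.permutation(5)
print(same_signature(W, W[np.ix_(perm, perm)]))   # True
```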

For more information, see the following publication:

  • M. Boutin and G. Kemper, "Lossless representation of graphs using distributions," submitted (2008). ps, pdf

This research is funded by NSF grant CCF-0728929.

 

Improved conditioning of Structure from Motion through variable elimination

In collaboration with Pierre-Louis Bazin, Ji Zhang and Prof. Daniel G. Aliaga (CS, Purdue).

In this project, we are concerned with the problem of reconstructing a scene from a set of images taken from unknown viewpoints by a projective camera. We are using symbolic equation manipulation techniques to decouple the unknown variables (i.e., the camera poses and the scene points) in order to decrease the complexity of the problem and improve its conditioning. In particular, we have been able to eliminate all the camera parameters from the equations, obtaining a simple set of degree-two and degree-three polynomial equations that provide a problem formulation with much improved conditioning.
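On a toy system, this kind of symbolic elimination can be reproduced with a lexicographic Groebner basis; the actual structure-from-motion equations are of course much larger, but the principle of removing a variable from the system is the same.

```python
# Toy example of symbolic variable elimination via a lexicographic Groebner
# basis (illustration of the technique only, not the project's equations).
from sympy import symbols, groebner

x, y = symbols('x y')
G = groebner([x + y - 3, x*y - 2], x, y, order='lex')
print(list(G))   # [x + y - 3, y**2 - 3*y + 2]: the second polynomial no longer involves x
```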

For more information, see the following publications:

  • P.-L. Bazin and M. Boutin, "Structure from Motion: a new look from the point of view of invariant theory," SIAM Journal on Applied Mathematics, 64(4):1156-1174, 2004. ps , pdf
  • M. Boutin, J. Zhang, and D.G. Aliaga, "Improving the numerical stability of structure from motion by algebraic elimination," IS&T/SPIE joint symposium, Computational Imaging IV conference, San Jose, CA, January 2006. ps , pdf
  • J. Zhang, M. Boutin, and D.G. Aliaga, "Robust bundle adjustment for structure from motion," IEEE Int'l Conference on Image Processing (ICIP), 2006. ps , pdf
  • J. Zhang, D.G. Aliaga, M. Boutin, and R. Insley, "Angle-Independent Bundle Adjustment Refinement," 3DPVT, 2006. pdf
  • J. Zhang, M. Boutin, and D.G. Aliaga, "Variable Elimination for 3D from 2D," IS&T/SPIE joint symposium, Visual Communication and Image Processing conference (VCIP), San Jose, CA, Jan.-Feb. 2007. ps, pdf

This research was funded in part by NSF grant MSPA-MCS-0434398.

 

 

Last update: December 15, 2011.


Prof. Boutin's webpage