I am now working at NASA JPL, in the Computer Vision Group. My main area at JPL is robust 3D perception for mapping and manipulation. I am part of several robotics projects, including the DARPA Robotics Challenge and the DARPA ARM Challenge, among others.
You can still browse my PhD projects on this website.
What are the spatial representations, computer vision algorithms, and object search strategies needed to visually localize objects in large environments?
In this regard, I made four main contributions, each of which spawned a project of its own:
- Learning 3D context of everyday objects from Kinect images
- What can we learn from 38,000 rooms?
- Active visual search in large unexplored environments
- Kinect@Home: Crowdsourcing Kinect images in the wild
- My paper on learning 3D context was an IROS 2012 Best Paper finalist!
- Check out the Wired and BBC articles on Kinect@Home here and here. The full list (that I could gather so far) is here.
- I've finally launched Kinect@Home!
- July 2012: T-RO paper on object search in large-scale unexplored environments submitted!
- July 2012: Two first-author papers accepted to IROS 2012! See you in Portugal!
- June 2012: RAS paper on object search using spatial relations was accepted!
- April 2012: I am co-organising the Active Semantic Perception workshop at IROS 2012, a continuation of the IROS 2011 workshop.
- Feb. 2012: I am co-organising the semantic perception workshop at ICRA 2012!
- Feb. 2011: I am organising a workshop at IROS 2011, check it out!
A Kinect@Home 3D model!
- Active Visual Search
- Semantic Mapping
- Machine Learning