• ESRI
  • NAVTEQ
  • Veriplace
  • AT&T Interactive
  • DigitalGlobe
  • Google
  • Yahoo! Inc.
  • ZoomAtlas
  • Digital Map Products
  • Microsoft Research (MSR)
  • Pitney Bowes Business Insight

Sponsorship Opportunities

For information on exhibition and sponsorship opportunities at the conference, contact Yvonne Romaine at yromaine@oreilly.com

Media Partner Opportunities

For media partnerships, contact mediapartners@oreilly.com or download the Media & Promotional Partner Brochure (PDF)

Press and Media

For media-related inquiries, contact Maureen Jennings at maureen@oreilly.com

Where 2.0 Newsletter

To stay abreast of conference news and to receive email notification when registration opens, please sign up for the Where 2.0 Conference newsletter (login required)

Where 2.0 Ideas

Have an idea for Where to share? where-idea@oreilly.com

Contact Us

View a complete list of Where 2.0 contacts

John Zelek

Associate Professor, University of Waterloo

Website | @jzelek

John Zelek is an Associate Professor in Systems Design Engineering at the University of Waterloo, with expertise in intelligent mechatronic control systems that interface with humans, specifically: (1) wearable sensory substitution and assistive devices; (2) probabilistic visual and tactile perception; (3) the design, synthesis, and analysis of wearable haptic devices; and (4) human-robot interaction. He received the best paper award at the 2007 International IEEE/IAPRS Computer and Robot Vision conference, Distinguished Performance Awards from the Faculty of Engineering at the University of Waterloo in 2006 and 2008, and the 2004 Young Investigator Award from the Canadian Image Processing & Pattern Recognition Society for his work in robotic vision. He is also the CTO of Tactile Sight Inc.

Sessions

Mobile
Location: Ballroom IV | Level: Intermediate
John Zelek (University of Waterloo)
Average rating: 2.75 (4 ratings)
A smart mobile device (e.g., iPhone) contains a camera, GPS, and accelerometers, all of which can be used to determine location. We exploit the camera to perform Visual SLAM (Simultaneous Localization and Mapping), object recognition, and the computation of depth. The camera triangulates on landmarks to obtain geo-position, which is useful when GPS data is unavailable.
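The triangulation idea in the abstract can be sketched in two dimensions: given bearings to two landmarks whose positions are known, the observer sits at the intersection of the two lines of sight. This is a minimal illustrative sketch, not the session's actual Visual SLAM pipeline; the function name and coordinate conventions are assumptions for the example.

```python
import math

def triangulate(l1, bearing1, l2, bearing2):
    """Estimate a 2D observer position from bearings (radians,
    counter-clockwise from the +x axis) to two landmarks with known
    coordinates. Returns (x, y), or None when the lines of sight
    are parallel and no unique intersection exists."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # The observer P lies on the line through each landmark along the
    # reversed bearing: P = L_i - r_i * d_i. Solve the 2x2 linear
    # system a*d1 - b*d2 = L1 - L2 for the unknown ranges a, b.
    dx, dy = l1[0] - l2[0], l1[1] - l2[1]
    det = d2[0] * d1[1] - d1[0] * d2[1]
    if abs(det) < 1e-12:
        return None  # bearings are parallel: position is ambiguous
    a = (d2[0] * dy - d2[1] * dx) / det
    return (l1[0] - a * d1[0], l1[1] - a * d1[1])

# Example: an observer at the origin sees one landmark due east at
# (10, 0) (bearing 0) and another due north at (0, 5) (bearing pi/2).
pos = triangulate((10, 0), 0.0, (0, 5), math.pi / 2)
```

In practice a visual system would recover bearings from calibrated camera geometry and fuse many landmarks with uncertainty, but the two-landmark intersection above is the core geometric step.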