September 26-27, 2016
New York, NY

Speakers

New speakers are added regularly. Please check back to see the latest updates to the agenda.

Pieter Abbeel is an associate professor in UC Berkeley’s EECS department, where he works in machine learning and robotics—in particular his research is on making robots learn from people (apprenticeship learning) and how to make robots learn through their own trial and error (reinforcement learning). Pieter’s robots have learned advanced helicopter aerobatics, knot tying, basic assembly, and organizing laundry. He has won various awards, including best paper awards at ICML and ICRA, the Sloan Fellowship, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) Award, the Office of Naval Research Young Investigator Program (ONR-YIP) Award, the DARPA Young Faculty Award (DARPA-YFA), the National Science Foundation Faculty Early Career Development Program Award (NSF-CAREER), the Presidential Early Career Award for Scientists and Engineers (PECASE), the CRA-E Undergraduate Research Faculty Mentoring Award, the MIT TR35, the IEEE Robotics and Automation Society (RAS) Early Career Award, and the Dick Volz Best US PhD Thesis in Robotics and Automation Award.

Presentations

Deep reinforcement learning for robotics Session

Pieter Abbeel explores deep reinforcement learning for robotics.

Alekh Agarwal is a researcher at Microsoft Research New York City working on machine learning. His research spans several areas, including online learning and optimization and learning with partial feedback, which routinely arises in interactive machine learning and reinforcement learning problems. Alekh has won several awards, including a best paper award at NIPS 2015.

Presentations

Interactive learning systems: Why now and how? Session

Alekh Agarwal explains why interactive learning systems that go beyond the routine train/test paradigm of supervised machine learning are essential to the development of AI agents. Along the way, Alekh outlines the novel challenges that arise at both the systems and learning side of things in designing and implementing such systems.

Robbie Allen is the founder and CEO of Automated Insights. The company’s Wordsmith NLG platform is revolutionizing the way professionals generate content with data. Wordsmith helps data-driven industries, including financial services, ecommerce, real estate, business intelligence, and media, achieve content scale, efficiency, and personalization for clients including the Associated Press, Allstate, the Orlando Magic, and Yahoo. Robbie drives the company’s strategic vision, oversees engineering and research, and ensures the company continues to be named one of the best places to work in the Raleigh-Durham area, an honor it’s received from the Triangle Business Journal four years in a row. In 2015, Robbie was named the North Carolina Technology Association’s Tech Exec of the Year. Robbie started writing code to automate the writing process while working at Cisco, where he was a distinguished engineer, the company’s top technical position. He has authored or coauthored 10 books about enterprise software and software development and spoken at numerous events including Strata, SXSW, and the MIT Sloan CIO Symposium. Robbie has two engineering master’s degrees from MIT and was recently appointed an adjunct professor at the UNC Kenan-Flagler Business School.

Presentations

The future of natural language generation, 2016–2026 Session

Natural language generation, the branch of AI that turns raw data into human-sounding narratives, is coming into its own in 2016. Robbie Allen explores the real-world advances in NLG over the past decade and then looks ahead to the next. Computers are already writing finance, sports, ecommerce, and business intelligence stories. Find out what—and how—they’ll be writing by 2026.

Eduardo Arino de la Rubia is chief data scientist at Domino Data Lab. Eduardo is a lifelong technologist with a passion for data science who thrives on effectively communicating data-driven insights throughout an organization. He is a graduate of the MTSU Computer Science Department, General Assembly’s Data Science Program, and the Johns Hopkins Coursera Data Science Specialization. Eduardo is currently pursuing a master’s degree in negotiation, conflict resolution, and peacebuilding from CSUDH. You can follow him on Twitter at @earino.

Presentations

What I learned by replacing middle-class manufacturing jobs with ML and AI Session

Manufacturing in the United States is facing extreme pressures from globalization. Eduardo Arino de la Rubia synthesizes what he learned working side by side with the workers he was replacing with AI and ML, discussing their struggles, how they saw the technology that would take their jobs, the limitations of the technology, and what his real impact was in the face of globalization.

Amitai Armon is the chief data scientist for Intel’s Advanced Analytics group, which provides solutions for the company’s challenges in diverse domains ranging from design and manufacturing to sales and marketing, using machine learning and big data techniques. Previously, Amitai was the cofounder and director of research at TaKaDu, a provider of water-network analytics software to detect hidden underground leaks and network inefficiencies. The company received several international awards, including the World Economic Forum Technology Pioneers award. Amitai has about 15 years of experience in performing and leading data science work. He holds a PhD in computer science from Tel Aviv University in Israel, where he previously completed his BSc (cum laude, at the age of 18).

Presentations

Intel's new processors: A machine-learning perspective Session

Intel has recently released new processors for the Xeon and Xeon Phi product lines. Amitai Armon discusses how these processors are used for machine-learning tasks and offers data on their performance for several types of algorithms in both single-node and multinode settings.

Guruduth Banavar is vice president and chief science officer for cognitive computing at IBM, where he is responsible for advancing the next generation of cognitive technologies and solutions with IBM’s global scientific ecosystem, including academia, government agencies, and other partners. Most recently, he led the team responsible for creating new cognitive systems in the family of IBM Watson designed to create new partnerships between people and machines to augment and scale human expertise in all industries. Previously, as chief technology officer for IBM’s Smarter Cities initiative, Banavar designed and implemented big data and analytics systems to help make cities more livable and sustainable. Prior to that, he was director of IBM Research in India, where he and his team received a presidential award for innovation. He holds more than 25 patents and has published extensively in media outlets around the world.

Presentations

Transforming your industry with cognitive computing Session

In the last decade, the availability of massive amounts of new data, the development of new AI techniques, and the availability of scalable computing infrastructure have given rise to a new class of machine capabilities we call cognitive computing. Guruduth Banavar offers an overview of the technological breakthroughs that are enabling this trend.

Jon Barker is a solution architect with NVIDIA, helping customers and partners develop applications of GPU-accelerated machine learning and data analytics to solve defense and national security problems. Jon is particularly focused on applications of the rapidly developing field of deep learning. Prior to joining NVIDIA, Jon spent almost a decade as a government research scientist within the UK Ministry of Defence and the US Department of Defense R&D communities. While in government service, he led R&D projects in sensor data fusion, big data analytics, and machine learning for multimodal sensor data to support military situational awareness and aid decision making. Jon has a PhD and BS in pure mathematics from the University of Southampton, UK.

Presentations

Managing the deep learning computer-vision pipeline with DIGITS Session

The process for deploying an effective neural network is iterative. Before an effective neural network is reached, many parameters must be evaluated and their impact on performance assessed. Jon Barker offers an overview of DIGITS, a deep learning GPU-training system designed to provide a real-time interactive user interface targeted toward accelerating the development process.

Genevieve Bell is an Australian-born anthropologist and researcher. With a father who was an engineer and a mother who was an anthropologist, perhaps Genevieve was fated to ultimately work for a technology company. As director of user interaction and experience in Intel Labs, Genevieve leads a research team of social scientists, interaction designers, human-factors engineers, and computer scientists that shapes and helps create new Intel technologies and products that are increasingly designed around people’s needs and desires. In this team and her prior roles, Genevieve has fundamentally altered the way Intel envisions and plans its future products so that they are centered on people’s needs rather than simply silicon capabilities. She is also an accomplished industry pundit on the intersection of culture and technology and a regular public speaker and panelist at technology conferences worldwide, sharing myriad insights gained from her extensive international field work and research. Genevieve’s first book is Divining the Digital Future: Mess and Mythology in Ubiquitous Computing, cowritten with Paul Dourish of the University of California at Irvine. In 2010, she was named one of Fast Company’s inaugural 100 Most Creative People in Business. Genevieve is the recipient of several patents for consumer electronics innovations. She holds a PhD and a master’s degree in cultural anthropology from Stanford and a bachelor’s degree in anthropology from Bryn Mawr.

Presentations

Artificial intelligence: Making a human connection Keynote

Genevieve Bell explores the meaning of “intelligence” within the context of machines and its cultural impact on humans and their relationships. Genevieve interrogates AI not just as a technical agenda but as a cultural category in order to understand the ways in which the story of AI is connected to the history of human culture.

Peter Brodsky is a middle school dropout, a college graduate, and a PhD dropout. Peter built and sold his first company and is now building his second.

Presentations

Building an AI startup: Realities and tactics Session

AI is all the rage in tech circles, and the press is awash in tales of AI entrepreneurs striking it rich after being acquired by one of the giants. Matt Turck and Peter Brodsky explain why the realities of building a startup are different and offer successful strategies and tactics that consider not just technical prowess but also thoughtful market positioning and business excellence.

Cristian Canton is a senior SDE and researcher at Microsoft Technology and Research, where he is primarily focused on bridging the gap between bleeding-edge state-of-the-art computer vision and machine-learning research and impactful products. Cristian’s topics of interest revolve around computer vision and machine learning applied to scene understanding. He holds a BSc and a PhD in telecommunications engineering from the Technical University of Catalonia (UPC) and an MSc from the Swiss Federal Institute of Technology of Lausanne (EPFL).

Presentations

Building and applying emotion recognition Session

Anna Roth and Cristian Canton walk you through building a system to recognize emotions by inferring them from facial expressions. Cristian and Anna explain how they trained their emotion recognition CNN from noisy data and how to approach labeling subjective data like emotion with crowdsourcing before showing a demo of this work in action, as it is exposed in Microsoft’s Emotion API.

Roger Chen is the program cochair for the O’Reilly Artificial Intelligence Conference. Previously, he was a principal at O’Reilly AlphaTech Ventures (OATV), where he invested in and worked with early-stage startups primarily in the realm of data, machine learning, and robotics. Roger has a deep and hands-on history with technology; before he worked in venture capital, he was an engineer at Oracle, EMC, and Vicor and developed novel nanotechnology as a PhD researcher at UC Berkeley. Roger holds a BS from Boston University and a PhD from UC Berkeley, both in electrical engineering.

Presentations

Closing remarks Keynote

Program chairs Ben Lorica and Roger Chen close the first day of keynotes.

Closing remarks Keynote

Program chairs Ben Lorica and Roger Chen offer closing remarks on the last day of keynotes.

Monday opening remarks Keynote

Program chairs Ben Lorica and Roger Chen open the first day of keynotes.

Tuesday opening remarks Keynote

Program chairs Ben Lorica and Roger Chen open the second day of keynotes.

Lili Cheng is a corporate vice president of Microsoft’s AI and Research Division, where she is responsible for the AI developer platform. The platform includes Cognitive Services, a collection of powerful cognitive AI APIs for vision, speech, language understanding, knowledge, and search that enable developers to easily add AI to their apps and services, and the Bot Framework, which makes it easy for developers to build intelligent conversational AI, connect it to their customer experiences, deploy it in their own custom UIs, and embed it in Skype, Microsoft Teams, Cortana, Bing, Facebook, Slack, and more. Prior to Microsoft, Lili worked in Apple’s advanced technology group on the user interface research team, where she focused on QuickTime Conferencing and QuickTime VR. Lili is a registered architect; she worked in Tokyo and Los Angeles for Nihon Sekkei and Skidmore, Owings & Merrill on commercial urban design and large-scale building projects. She has taught in NYU’s Interactive Telecommunications program as well as at Harvard University.

Aparna Chennapragada is a director of product management at Google and is currently the technical assistant (TA) to the CEO. She most recently led Google Now, a personalized digital assistant that proactively helps you through the day. Having led multiple efforts across Google Search and YouTube over the years, she is excited about the potential of AI and algorithms in powering life-changing products. Aparna’s 15 years of experience in the tech industry, as a computer scientist, engineer, and product leader, started with helping build the world’s first content delivery services at Akamai Technologies. She holds a master’s degree in management and engineering from MIT, a master’s degree in computer science from the University of Texas at Austin, and a bachelor’s degree in computer science from the Indian Institute of Technology (IIT), Madras.

Presentations

Lessons on building data products at Google Keynote

Aparna Chennapragada explores building data products at Google.

As the founder and CEO of Lumiata, Ash Damle drives the company’s vision of leveraging the best of data science and medical knowledge to power high-value healthcare around the world. At Lumiata, Ash has pioneered the Lumiata Medical Graph, a first-of-its-kind medical graph based on current scientific research and clinical practice that combines multisourced health data with medical knowledge and analyzes the complex, multidimensional relationships between them, allowing for the delivery of hyperpersonalized business and clinical insights across the entire healthcare network. Ash is a technologist and data scientist deeply rooted in the application of big data to health and its intersection with design as well as a global entrepreneur, who has worked with clients and partners in the United States, China, the UK, Canada, Australia, France, Germany, India, and Japan. He graduated from MIT with degrees in both computer science and mathematics, has published numerous papers, and has received patents in real-time unstructured semantic analysis.

Presentations

Achieving precision medicine at scale: Building medical AI to predict individual disease evolution in real time Session

AI in healthcare demands models that can handle the complexity of health data and implementation of automation, precision, speed, and transparency with minimal error. Drawing on Lumiata’s experience with building medical AI, Ash Damle discusses key considerations in dealing with high-dimensional data, deep learning, and how to apply practical AI in healthcare today.

Kenny Daniel is founder and CTO of Algorithmia. Kenny’s goal with Algorithmia is to accelerate AI development by creating a marketplace where algorithm developers can share their creations and application developers can make their applications smarter by incorporating the latest machine learning algorithms—an idea he came up with while working on his PhD, when he encountered a plethora of algorithms that never see the light of day. Kenny has also worked with companies like wine enthusiast app Delectable to build out their deep learning-based image recognition systems. Kenny holds degrees from Carnegie Mellon University and the University of Southern California, where he studied artificial intelligence and mechanism design.

Presentations

Lessons learned from deploying the top deep learning frameworks in production Session

By building a marketplace for algorithms, Algorithmia gained unique experience with building and deploying machine-learning models using a wide variety of frameworks. Kenny Daniel shares the lessons Algorithmia learned through trial and error, the pros and cons of different deep learning frameworks, and the challenges involved with deploying them in production systems.

Tom Davenport is the President’s Distinguished Professor of Information Technology and Management at Babson College, the cofounder of the International Institute for Analytics, a fellow of the MIT Center for Digital Business, and a senior advisor to Deloitte Analytics. Tom teaches analytics and big data in executive programs at Babson, Harvard Business School, MIT Sloan School, and Boston University. He pioneered the concept of “competing on analytics” with his best-selling 2006 Harvard Business Review article (and his 2007 book by the same name). Tom has written or edited 17 books and over 100 articles for Harvard Business Review, Sloan Management Review, the Financial Times, and many other publications. He also writes a weekly column for the Wall Street Journal’s Corporate Technology section. His most recent book is Only Humans Need Apply: Winners and Losers in the Age of Smart Machines (with Julia Kirby). Tom has been named one of the Top Three Business/Technology Analysts in the World, one of the 100 Most Influential People in the IT Industry, and one of the world’s Top 50 Business School Professors by Fortune magazine. Tom earned a PhD from Harvard University in social science and has taught at the Harvard Business School, the University of Chicago, Dartmouth’s Tuck School of Business, Boston University, and the University of Texas at Austin.

Presentations

Only humans need apply: Adding value to the work of very smart machines Session

The automation of decisions and actions now threatens even knowledge-worker jobs. Tom Davenport describes both the threat of automation and the promise of augmentation—combining smart machines with smart people—and explores five roles that individuals can adopt to add value to AI, as well as what these roles mean for businesses.

Currently a Forbes 30 under 30 star and partner at the Longevity Fund, Laura Deming has wanted to cure aging since the age of eight. After years working on nematode longevity at UCSF, Laura matriculated at MIT at 14 to work on artificial organogenesis and bone aging. She is now based in San Francisco, working to find and fund therapies to extend the human health span. She has also recently become a board observer at Navitor Pharmaceuticals.

Presentations

Genetic architect: Investigating the structure of biology with machine learning Session

Each human genome is a 3-billion-base-pair set of encoding instructions. Decoding the genome using deep learning fundamentally differs from most tasks, as we do not know the full structure of the data and therefore cannot design architectures to suit it. Laura Deming and Sasha Targ describe novel machine-learning search algorithms that allow us to find architectures suited to decoding genomics.

Greg Diamos leads computer systems research at Baidu’s Silicon Valley AI Lab (SVAIL), where he helped develop the Deep Speech and Deep Voice systems. Previously, Greg contributed to the design of compiler and microarchitecture technologies used in the Volta GPU at NVIDIA. Greg holds a PhD from the Georgia Institute of Technology, where he led the development of the GPU-Ocelot dynamic compiler, which targeted CPUs and GPUs from the same program representation.

Presentations

The need for speed: Benchmarking deep learning workloads Session

Greg Diamos and Sharan Narang discuss the impact of AI on applications within Baidu, including autonomous driving and speech recognition, offering a brief introduction to the challenges in training deep learning algorithms as well as the different workloads that are used in various deep learning applications.

Jana Eggers is a tech executive focused on products and the messages surrounding them. Jana has started and grown SMBs and led large organizations within enterprises. She supports, subscribes to, and contributes to customer-inspired innovation, systems thinking, Lean analytics, and autonomy, mastery, and purpose-style leadership. Jana’s software and technology experience includes technology and executive positions at Intuit, Blackbaud, Basis Technology (internationalization technology), Lycos, American Airlines, Los Alamos National Laboratory (computational chemistry and supercomputing), Spreadshirt (customized apparel ecommerce), and acquired startups that you’ve never heard of. Jana is a frequent speaker, writer, and CxO educator on innovation, change, and technology. She holds a bachelor’s degree in mathematics and computer science from Hendrix College in Arkansas and pursued graduate studies in computer science at Rensselaer Polytechnic.

Presentations

How to scope an AI project Session

Drawing on her experience implementing AI systems in large enterprises, Jana Eggers covers the dos and don'ts of scoping a project across time, money, and people and compares and contrasts AI projects with typical IT and data science projects to explore the new aspects you need to consider as you add AI to your tech portfolio.

Rana el Kaliouby is cofounder and CEO of Affectiva—a pioneer in emotion AI, the next frontier of artificial intelligence—where she leads the company’s award-winning emotion recognition technology, built on a science platform that uses deep learning and the world’s largest emotion data repository of nearly 4.9 million faces analyzed from 75 countries, amounting to more than 50 billion emotion data points. Previously, Rana was a research scientist at MIT Media Lab, where she spearheaded the applications of emotion technology in a variety of fields, including mental health and autism research. Her work has appeared in numerous publications including the New Yorker, Wired, Forbes, Fast Company, the Wall Street Journal, the New York Times, CNN, CBS, Time magazine, Fortune, and Reddit. A TED speaker, she was recognized by TechCrunch as a women founder who crushed it in 2016, by Entrepreneur magazine as one of the seven most powerful women to watch in 2014, and on Ad Age’s 40 under 40 list. Rana has also been inducted into the Women in Engineering Hall of Fame and is a recipient of Technology Review’s 2012 Top 35 Innovators Under 35 award and Smithsonian magazine’s 2015 American Ingenuity Award for Technology. Rana holds a BSc and MSc in computer science from the American University in Cairo and a PhD from the Computer Laboratory at the University of Cambridge.

Presentations

Why AI needs emotion Keynote

Highly connected, interactive artificial intelligence systems surround us daily, but as smart as these systems are, they lack the ability to truly empathize with us humans. Rana el Kaliouby explores why emotion AI is critical to accelerating adoption of AI systems, how emotion AI is being used today, and what the future will look like.

Oren Etzioni is chief executive officer of the Allen Institute for Artificial Intelligence. Oren has been a professor in the University of Washington’s Computer Science Department since 1991, receiving several awards including GeekWire’s Hire of the Year (2014), Seattle’s Geek of the Year (2013), the Robert Engelmore Memorial Award (2007), the IJCAI Distinguished Paper Award (2005), AAAI Fellow (2003), and a National Young Investigator Award (1993). He is also the founder or cofounder of several companies including Farecast (sold to Microsoft in 2008) and Decide (sold to eBay in 2013) and the author of over 100 technical papers that have garnered over 25,000 citations. The goal of Oren’s research is to solve fundamental problems in AI, particularly the automatic learning of knowledge from text. Oren holds a PhD from Carnegie Mellon University and a BA from Harvard.

Presentations

The future of AI Session

Oren Etzioni offers his perspective on the future of AI, based on cutting-edge research at the Allen Institute for AI on projects such as Aristo and Semantic Scholar. This future reflects the institute's mission: AI for the common good.

Shahin Farshchi is a principal at Lux Capital, where he empowers entrepreneurs aiming to accelerate humanity toward a fantastic future through feats of engineering. Shahin’s recent investments include deep learning company Nervana, recently acquired by Intel; Planet Labs, which is launching the world’s largest fleet of Earth-imaging satellites; Plethora, which is rolling out a fleet of robotic machine shops; Flex Logix, making chips that can reprogram themselves; and Zoox, designing what comes after the automobile.

Pascale Fung is a professor in the Department of Electronic & Computer Engineering at the Hong Kong University of Science & Technology. She is an elected fellow of the Institute of Electrical and Electronic Engineers (IEEE) for her contributions to human-machine interactions and an elected fellow of the International Speech Communication Association for fundamental contributions to the interdisciplinary area of spoken language human-machine interactions. She is keenly interested in promoting AI research for the betterment of humanity, including AI for ethical fintech and medical practices. Pascale has recently become a partner in the Partnership on AI, an organization of top AI players in industry and academia focused on promoting AI to benefit people and society. She is a member of the Global Future Council on Artificial Intelligence and Robotics, a think tank of the World Economic Forum, and blogs for the forum’s online publication, Agenda. Pascale has been recognized as one of 2017’s Outstanding Women Professionals and a Woman of Hope in 2014. She holds a PhD in computer science from Columbia University. She is a fluent speaker of seven European and Asian languages.

Presentations

How to make robots empathetic to human feelings in real time Session

Pascale Fung describes an approach to enable an interactive dialogue system to recognize user emotion and sentiment in real time and explores CNN models that recognize emotion and sentiment from raw speech input without feature engineering. These modules allow otherwise conventional dialogue systems to have “empathy” and answer users while being aware of their emotion and intent.

Some are cognitive scientists; others are computer scientists and engineers. Mark Hammond is a cognitive entrepreneur who brings together both fields along with business acumen. He has a deep passion for understanding how the mind works, combined with an understanding of our own human nature, and turns that knowledge into beneficial applied technology. As the founder and CEO of Bonsai, Mark is enabling AI for everyone. Mark has been programming since the first grade and started working at Microsoft as an intern and contractor while still in high school. He has held positions at Microsoft and numerous startups and in academia, including turns at Numenta and in the Yale Neuroscience Department. He holds a degree in computation and neural systems from Caltech.

Presentations

Unlock the power of AI: A fundamentally different approach to building intelligent systems Session

Mark Hammond explains how Bonsai’s platform enables every developer to add intelligence to their software or hardware, regardless of AI expertise. Bonsai’s suite of tools—a new programming language, AI engine, and cloud service—abstracts away the lowest-level details of programming AI, allowing developers to focus on concepts they want a system to learn and how those concepts can be taught.

Binh Han is a senior software engineer and data scientist at Arimo, a leader in building the enterprise brain, where she focuses on building its machine learning platform, with a focus on deep learning. Previously, Binh held multiple software engineering and research positions involving big data analytics. She has authored and coauthored numerous publications and presentations on scientific computing, spatiotemporal data mining, and distributed systems. Binh holds a PhD in computer science from Georgia Tech.

Presentations

Deeply active learning: Approximating human learning with smaller datasets combined with human assistance Session

Natural-language assistants are the emergent killer app for AI. Getting from here to there with deep learning, however, can require enormous datasets. Christopher Nguyen and Binh Han explain how to shorten the time to effectiveness and the amount of training data that's required to achieve a given level of performance using human-in-the-loop active learning.

Song Han is a rising fifth-year PhD student at Stanford University under Bill Dally. Song’s research interests are deep learning and computer architecture; he is currently focused on improving the accuracy and efficiency of neural networks on mobile and embedded systems. Song has worked on deep compression that can compress state-of-the-art CNNs by 10x–49x and compress SqueezeNet to only 470KB, which fits fully in on-chip SRAM. He proposed a DSD training flow that improved the accuracy of a wide range of neural networks and designed the EIE accelerator, an ASIC that works on the compressed model and is 13x faster and 3,000x more energy efficient than a Titan X GPU. Song’s work has been covered by The Next Platform, TechEmergence, Embedded Vision, and O’Reilly. His work on deep compression won the best paper award at ICLR ’16.

Presentations

Deep neural network model compression and an efficient inference engine Session

Neural networks are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. Song Han explains how deep compression addresses this limitation by reducing the storage requirement of neural networks without affecting their accuracy and proposes an energy-efficient inference engine (EIE) that works with this model.

Zachary Hanif leads the security machine learning team at Capital One, where he currently works to create powerful analytics within batch and real-time data processing engines through applied statistics and rapid correlation. In addition to his individual contributions, Zachary is currently working to establish the Center for Machine Learning within Capital One. His research interests revolve around applications of machine learning and graph mining within the realm of massive security data and the automation of model validation and governance.

Presentations

Growing up: Continuous integration for machine-learning models Session

Developing and validating frequently updated models is core to professional data science teams. Zachary Hanif discusses the adaptation of CI tools and practices to solve model governance and accuracy tracking concerns in a complex environment with adversarial and temporal data complications.
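One concrete shape such a CI check might take (the names, toy models, and threshold below are illustrative, not Capital One's actual tooling) is an accuracy-regression gate that fails the build when a retrained model slips against the production baseline:

```python
# A hedged sketch of an accuracy-regression gate for a model CI pipeline.

def accuracy(model, holdout):
    correct = sum(1 for x, label in holdout if model(x) == label)
    return correct / len(holdout)

def ci_gate(candidate, baseline, holdout, tolerance=0.01):
    """Allow the candidate to ship only if it does not regress more than
    `tolerance` against the production baseline on held-out data."""
    return accuracy(candidate, holdout) >= accuracy(baseline, holdout) - tolerance

# Toy classifiers: label a number "big" above a learned cutoff.
holdout = [(3, "small"), (9, "big"), (1, "small"), (7, "big")]
baseline = lambda x: "big" if x > 5 else "small"
candidate = lambda x: "big" if x > 8 else "small"  # newly trained; misses 7

print(ci_gate(candidate, baseline, holdout))  # False: block the deploy
```

In practice such a gate runs on every retrain, with the temporal and adversarial complications the talk addresses (e.g., a drifting holdout set) layered on top.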

Shohei Hido is the chief research officer of Preferred Networks, a spin-off of Preferred Infrastructure, Inc., where he is currently responsible for Deep Intelligence in Motion, a software platform for using deep learning in IoT applications. Previously, Shohei led Preferred Infrastructure’s Jubatus project, an open source software framework for real-time streaming machine learning, and worked at IBM Research in Tokyo for six years as a staff researcher in machine learning and its applications to industry. Shohei holds an MS in informatics from Kyoto University.

Presentations

Chainer: A flexible and intuitive framework for complex neural networks Session

Open source software frameworks are the key to applying deep learning technologies. Orion Wolfe and Shohei Hido introduce Chainer, a Python-based standalone framework that lets users intuitively implement many kinds of complex models, including recurrent neural networks, with great flexibility and performance comparable to other GPU-accelerated frameworks.

Babak Hodjat is cofounder and chief scientist of Sentient, where he is responsible for the core technology behind the world’s largest distributed artificial intelligence system. Babak is a serial entrepreneur, having started a number of Silicon Valley companies as main inventor and technologist. Prior to cofounding Sentient, Babak was senior director of engineering at Sybase iAnywhere, where he led mobile solutions engineering. Previously, Babak was cofounder, CTO, and board member of Dejima Inc. (acquired by Sybase) and was the primary inventor of Dejima’s patented, agent-oriented technology applied to intelligent interfaces for mobile and enterprise computing—the technology behind Apple’s Siri. Babak is a published scholar in the fields of artificial life, agent-oriented software engineering, and distributed artificial intelligence and has 25 granted or pending patents to his name. Babak holds a PhD in machine intelligence from Kyushu University, in Fukuoka, Japan.

Presentations

The new artificial intelligence frontier Session

Babak Hodjat discusses the progress in AI, diving into how AI can offer unique solutions in verticals such as investment, medical diagnosis, and ecommerce. Babak details how using massively scaled distributed evolutionary computation, mimicking biological evolution, allows an AI to learn, adapt, and react faster to provide customers with the answers and decisions they need.
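For readers new to the technique, the evolutionary-computation loop Babak describes reduces, in miniature, to score, select, and mutate. This toy sketch (purely illustrative, not Sentient's massively distributed system) evolves a one-dimensional solution:

```python
# Evolutionary computation in miniature: score a population, keep the
# fittest half, refill with mutated copies, repeat.
import random

def evolve(fitness, population, generations=50, mutation=0.1, seed=0):
    rng = random.Random(seed)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)         # score and rank
        survivors = population[: len(population) // 2]     # selection
        children = [s + rng.gauss(0, mutation) for s in survivors]  # mutation
        population = survivors + children
    return max(population, key=fitness)

# Toy objective: maximize -(x - 3)^2, whose optimum is x = 3.
best = evolve(lambda x: -(x - 3) ** 2, [float(x) for x in range(-10, 10)])
print(best)  # 3.0
```

Scaling this loop means evaluating fitness for many candidates in parallel across machines, which is where the distributed-systems work comes in.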

Xuedong “XD” Huang serves as Microsoft’s chief speech scientist and leads Microsoft’s Advanced Technology group, which includes Microsoft’s worldwide Advanced Technology Labs in Egypt, Israel, and Germany. XD joined Microsoft to found the company’s speech recognition team. As the head of Microsoft’s spoken language efforts for over a decade, he provided technical, engineering, and business leadership to bring speech recognition to the mass market. XD introduced SAPI to Windows in 1995 and later the enterprise-grade Speech Server in 2004. Prior to his current role, he spent five years in Bing as chief architect working to improve search relevance for the Web. Before Microsoft, he was on the faculty at Carnegie Mellon University and directed Sphinx-II, which had not only the best performance of all categories in 1992’s DARPA speech recognition benchmarking but also the most dramatic error reduction in the history of DARPA-sponsored speech recognition evaluations. XD received the Allen Newell research excellence leadership medal in 1992, an IEEE Best Paper award in 1993, and the SpeechTek Top 10 Leaders award in 2003. He was honored as an IEEE Fellow in 2000 and the Asian American Engineer of the Year in 2011. He was recently named to Wired magazine’s 2016 Next list. XD holds over 80 patents and has published 100 papers and two books.

Presentations

Progress of delivering real AI workloads Session

Progress in enterprise AI workloads, particularly in deep learning, big data, and computing infrastructure, will profoundly impact productivity for users. XD Huang outlines enterprise AI examples to illustrate the collective efforts and exciting opportunities modern AI technologies are making possible.

Anirudh Koul is a senior data scientist at Microsoft AI and Research. An entrepreneur at heart, he has been running a mini-startup team within Microsoft, prototyping ideas using computer vision and deep learning techniques for augmented reality, productivity, and accessibility, building tools for communities with visual, hearing, and mobility impairments. Anirudh brings a decade of production-oriented applied research experience on petabyte-scale social media datasets, including Facebook, Twitter, Yahoo Answers, Quora, Foursquare, and Bing. A regular at hackathons, he has won close to three dozen awards, including top-three finishes for three years consecutively in the world’s largest private hackathon, with 16,000 participants. Some of his recent work, which IEEE has called “life changing,” has been showcased at a White House AI event, Netflix, and National Geographic and to the Prime Ministers of Canada and Singapore.

Presentations

How advances in deep learning and computer vision can empower the blind community Session

Anirudh Koul and Saqib Shaikh explore cutting-edge advances at the intersection of computer vision, language, and deep learning that can help describe the physical world to the blind community. Anirudh and Saqib then explain how developers can utilize this state-of-the-art image description, as well as visual question answering and other computer-vision technologies, in their own applications.

Yann LeCun is director of AI research at Facebook and Silver Professor at New York University, affiliated with the Courant Institute of Mathematical Sciences, the Center for Neural Science, and the Center for Data Science, for which he served as founding director until 2014. Over his career, Yann has held a wide range of positions, including a postdoc at the University of Toronto, head of the Image Processing Research department at AT&T Labs-Research, and a researcher at the NEC Research Institute, as well as the 2015–2016 annual visiting professor chair of computer science at Collège de France. His research interests include machine learning and artificial intelligence with applications to computer vision, natural language understanding, robotics, and computational neuroscience. Yann is best known for his work in deep learning and the invention of the convolutional network method, widely used for image, video, and speech recognition. He is the recipient of the 2014 IEEE Neural Network Pioneer Award and the 2015 IEEE Pattern Analysis and Machine Intelligence Distinguished Researcher Award. Yann holds a PhD in computer science from Université Pierre et Marie Curie (Paris).

Presentations

Obstacles to progress in AI Keynote

The essence of intelligence is the ability to predict. Prediction, perception, planning/reasoning, attention, and memory are the pillars of intelligence. Yann LeCun describes several projects at FAIR and NYU on unsupervised learning, question answering with a new type of memory-augmented network, and various applications for vision and natural language understanding.

Ben Lorica is the chief data scientist at O’Reilly Media. Ben has applied business intelligence, data mining, machine learning, and statistical analysis in a variety of settings, including direct marketing, consumer and market research, targeted advertising, text mining, and financial engineering. His background includes stints with an investment management company, internet startups, and financial services.

Presentations

Closing remarks Keynote

Program chairs Ben Lorica and Roger Chen close the first day of keynotes.

Closing remarks Keynote

Program chairs Ben Lorica and Roger Chen offer closing remarks on the last day of keynotes.

Monday opening remarks Keynote

Program chairs Ben Lorica and Roger Chen open the first day of keynotes.

Tuesday opening remarks Keynote

Program chairs Ben Lorica and Roger Chen open the second day of keynotes.

Vikash Mansinghka is a research scientist at MIT, where he leads the Probabilistic Computing Project, and a cofounder of Empirical Systems, a new venture-backed AI startup aimed at improving the credibility and transparency of statistical inference. Previously, Vikash cofounded a venture-backed startup based on his research that was acquired by Salesforce, was an advisor to Google DeepMind, and held graduate fellowships at the National Science Foundation and MIT’s Lincoln Laboratory. He served on DARPA’s Information Science and Technology advisory board from 2010 to 2012 and currently serves on the editorial boards of the Journal of Machine Learning Research and Statistics and Computing. Vikash holds a PhD in computation, an MEng in computer science, and BS degrees in mathematics and computer science, all from MIT. His PhD dissertation on natively probabilistic computation won the MIT George M. Sprowls dissertation award in computer science, and his research on the Picture probabilistic programming language won an award at CVPR.

Presentations

Probabilistic programming for augmented intelligence Session

The next generation of AI systems will provide assisted intuition and judgment for everyday people trying to collaboratively solve hard problems. Vikash Mansinghka and Richard Tibbetts explore how AI will be used on problems like malnutrition, public health, education, and governance—complex, ambiguous areas of human knowledge where data is sparse and there are no rules.
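To give a flavor of what probabilistic programming means: the generative model is written as ordinary code, and inference conditions it on observed data. The toy rejection sampler below is purely illustrative, not the Probabilistic Computing Project's actual machinery:

```python
# A toy sketch of the probabilistic-programming idea using naive
# rejection sampling over a coin-flip model.
import random

def model(rng):
    """A coin with unknown bias produces 10 flips."""
    bias = rng.random()  # prior: bias ~ Uniform(0, 1)
    heads = sum(rng.random() < bias for _ in range(10))
    return bias, heads

def infer_bias(observed_heads, samples=20000, seed=1):
    """Posterior mean of the bias, conditioned on the observed head count."""
    rng = random.Random(seed)
    draws = (model(rng) for _ in range(samples))
    accepted = [bias for bias, heads in draws if heads == observed_heads]
    return sum(accepted) / len(accepted)

estimate = infer_bias(8)
print(round(estimate, 2))  # near 0.75, between the prior mean and 8/10
```

Real systems replace rejection sampling with far more efficient inference engines, but the programming model — write the simulator, ask questions of it — is the same, which is what makes it suited to sparse, ambiguous domains.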

A scientist, best-selling author, and entrepreneur, Gary Marcus is currently professor of psychology and neural science at NYU and CEO and cofounder of the recently formed Geometric Intelligence, Inc. Gary’s efforts to update the Turing test have spurred a worldwide movement, and his research on language, computation, artificial intelligence, and cognitive development has been published widely in leading journals such as Science and Nature. He is also the author of four books, including The Algebraic Mind, Kluge: The Haphazard Evolution of the Human Mind, and the New York Times best-seller Guitar Zero, and contributes frequently to the New Yorker and the New York Times. Gary’s most recent book, The Future of the Brain: Essays by the World’s Leading Neuroscientists, features the 2014 Nobel Laureates May-Britt and Edvard Moser.

Presentations

Hilary Mason is founder and CEO of Fast Forward Labs, a machine intelligence research company, and data scientist in residence at Accel Partners. Previously Hilary was chief scientist at Bitly. She cohosts DataGotham, a conference for New York’s homegrown data community, and cofounded HackNY, a nonprofit that helps engineering students find opportunities in New York’s creative technical economy. Hilary served on Mayor Bloomberg’s Technology Advisory Board and is a member of Brooklyn hacker collective NYC Resistor.

Presentations

Practical AI product development Session

Hilary Mason explores a framework for applied AI research, with a focus on algorithmic capabilities that are useful for building real-world products today. Drawing on real-world examples, Hilary outlines a system for thinking about which AI capabilities are ready to transition from pure research to applied products and how to make the transition from research paper to a working product.

Jim McHugh is vice president and general manager at NVIDIA. He currently leads DGX-1, the world’s first AI supercomputer in a box. Jim focuses on building a vision of organizational success and executing strategies to deliver computing solutions that benefit from GPUs in the data center. With over 25 years of experience as a marketing and business executive with startup, mid-sized, and high-profile companies, Jim has a deep knowledge and understanding of business drivers, market/customer dynamics, technology-centered products, and accelerated solutions. Previously, Jim held leadership positions with Cisco Systems, Sun Microsystems, and Apple, among others.

Presentations

Thor’s hammer Keynote

We are entering a new computing paradigm—an era where software will write software. This is the biggest and fastest transition since the advent of the Internet. Big data and analytics brought us information and insight; AI and deep learning turn that insight into superhuman knowledge and real-time action. Jim McHugh shares real-world examples of companies solving problems once thought unsolvable.

Diogo Moitinho de Almeida is a data scientist, software engineer, and hacker. Currently, Diogo is a senior data scientist at Enlitic, where he works to radically improve the quality of medical diagnosis using deep learning, advance the state of the art in modeling, and build novel ways to interact with neural networks. Previously, he was a medalist at the International Math Olympiad, ending a 13-year losing streak for the Philippines; received the top prize in the Interdisciplinary Contest in Modeling, achieving the highest distinction of any team from the Western Hemisphere; and won a Kaggle competition, setting a new state of the art for black box identification of causality and getting the opportunity to speak at the Conference on Neural Information Processing Systems.

Presentations

Deep learning: Modular in theory, inflexible in practice Session

The high-level view of deep learning is elegant: composing differentiable components together trained in an end-to-end fashion. The reality isn't that simple, and the commonly used tools greatly limit what we are capable of doing. Diogo Almeida explains what we can do about it and offers a practical attempt at a deep learning library of the future.
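The "elegant view" the talk opens with can be written down in a few lines; the inflexibility appears in everything beyond this. A hypothetical two-component toy (not Diogo's library):

```python
# Deep learning's elegant view in miniature: each component returns its
# output plus a function that backpropagates gradients (the chain rule).

def scale(x, w):
    """Component 1: y = w * x, with backward pass returning (dL/dx, dL/dw)."""
    return w * x, lambda grad: (grad * w, grad * x)

def square(y):
    """Component 2: z = y ** 2, with backward pass returning dL/dy."""
    return y * y, lambda grad: grad * 2 * y

x, w = 3.0, 0.5
y, back1 = scale(x, w)      # forward through component 1
z, back2 = square(y)        # forward through component 2: z = 2.25
dz_dy = back2(1.0)          # backward pass, starting from dL/dz = 1
dz_dx, dz_dw = back1(dz_dy)
print(z, dz_dx, dz_dw)      # 2.25 1.5 9.0
```

Composition really is this clean in theory; the practical pain comes from components that don't fit the tools' assumptions about shapes, control flow, and memory.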

Urs Muller is a developer at NVIDIA, where he built and leads an autonomous driving team that creates novel deep-learning solutions for self-driving cars on NVIDIA’s high-performance DRIVE PX platform. Previously, Urs worked at Bell Labs and later founded Net-Scale Technologies, Inc., a prime contractor on several robotics and machine-learning DARPA programs.

Presentations

End-to-end learning for autonomous driving Session

Urs Muller presents the architecture and training methods used to build an autonomous road-following system. A key aspect of the approach is eliminating the need for hand-programmed rules and procedures such as finding lane markings, guardrails, or other cars, thereby avoiding the creation of a large number of “if, then, else” statements.

Aman Naimat is the senior vice president for Demandbase, where he works on leveraging the latest developments in artificial intelligence and data science for marketing and sales platforms. Aman was previously founder and CTO of Spiderbook, a data-driven sales engine for account-based targeting. Before Spiderbook, he was the cofounder of TopCorner, a platform for open government. Aman has been building CRM systems since he was 19 and has founded and worked in various startups in search, trading systems, and enterprise software. Aman was the architect for the Oracle CRM applications, the director of special projects for the CEO’s office at Oracle, and the senior director of product management in the Oracle Database Group. Aman holds an MS in computer science from Stanford, where his research focused on artificial intelligence and natural language understanding, and a master’s degree in public policy from Stanford. He was also a fellow at the Stanford Graduate School of Business. Aman holds a number of patents and has authored scientific publications on information retrieval, CRM, and databases.

Presentations

Making AI a reality for the enterprise and the physical world Session

Aman Naimat and Mark Patel present an analysis of the current adoption of AI in industry based on a systematic study of the entire business Internet at over 500,000 companies. Drawing on this data, Aman and Mark offer a new economic framework to discover, measure, and motivate future use cases for AI.

Sharan Narang is a researcher on the Systems team at Baidu’s Silicon Valley AI Lab (SVAIL), where he plays an important role in improving the performance and programmability of the deep learning framework used by researchers at SVAIL. Sharan’s research focuses on reducing the memory requirement of deep learning models, and he has explored techniques like pruning neural network weights and quantization to achieve this goal. He also proposed a DSD training flow that improved the accuracy of deep learning applications by ~5%. Previously, Sharan worked on next-generation mobile processors at NVIDIA.

Presentations

The need for speed: Benchmarking deep learning workloads Session

Greg Diamos and Sharan Narang discuss the impact of AI on applications within Baidu, including autonomous driving and speech recognition, offering a brief introduction to the challenges in training deep learning algorithms as well as the different workloads that are used in various deep learning applications.

Jasmine Nettiksimmons is a data scientist at Stitch Fix, where she focuses on robust parameter estimation in observational data and assessing how successfully humans interact with a live recommendation system. Prior to joining Stitch Fix, she worked in the field of cognitive aging, with research focusing on biomarker profiles that are predictive of cognitive decline and dementia. In addition to her work in cognitive aging, she has a broad publication record across many public health and social issues, including rural health care delivery, childhood obesity, domestic violence prevention, and family-friendly policy usage. Jasmine holds a PhD in epidemiology from UC Davis.

Presentations

Combining statistics and expert human judgement for better recommendations Session

Jay Wang and Jasmine Nettiksimmons explore the business model of Stitch Fix, an emerging startup that uses artificial intelligence and human experts to deliver a personalized shopping experience, and highlight the challenges encountered in implementing Stitch Fix's recommendation algorithm and integrating AI with human stylists.

Christopher Nguyen is CEO and cofounder of Arimo (née Adatao), the leader in collaborative, predictive intelligence for enterprises. Previously, Christopher served as engineering director of Google Apps and cofounded two successful startups. As a professor, he also cofounded the computer engineering program at HKUST (香港科技大学). Christopher has a BS from UC Berkeley, where he graduated summa cum laude, and a PhD from Stanford, where he created the first standard-encoding Vietnamese software suite, authored RFC 1456, and contributed to Unicode 1.1. He is also a cocreator of the open source Distributed DataFrame project.

Presentations

Deeply active learning: Approximating human learning with smaller datasets combined with human assistance Session

Natural-language assistants are the emerging killer app for AI. Getting from here to there with deep learning, however, can require enormous datasets. Christopher Nguyen and Binh Han explain how to shorten the time to effectiveness and reduce the amount of training data required to achieve a given level of performance using human-in-the-loop active learning.
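The core query-selection step in human-in-the-loop active learning is uncertainty sampling: ask a human to label only the examples the model is least sure about. A toy sketch (the "model" and data are illustrative, not Arimo's system):

```python
# Uncertainty sampling: route the least-confident prediction to a human.

def uncertainty(prob):
    """0 for confident predictions (prob near 0 or 1), 1 at prob = 0.5."""
    return 1 - abs(prob - 0.5) * 2

def pick_query(pool, predict_proba):
    """Select the unlabeled example with the most uncertain prediction."""
    return max(pool, key=lambda x: uncertainty(predict_proba(x)))

# Toy model: probability a message is spam grows with exclamation marks.
predict_proba = lambda n_exclaims: min(n_exclaims / 4, 1.0)
pool = [0, 1, 2, 3, 4]
print(pick_query(pool, predict_proba))  # 2, where the model outputs 0.5
```

Each human answer retrains the model, so labeling effort concentrates where it shifts the decision boundary most — which is how the dataset requirement shrinks.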

Peter Norvig is a director of research at Google. Previously, he directed Google’s core search algorithms group. Peter is coauthor of Artificial Intelligence: A Modern Approach, the leading textbook in the field, and coteacher of an artificial intelligence course that signed up 160,000 students, helping to kick off the current round of massive open online classes. He is a fellow of the AAAI, ACM, California Academy of Science, and American Academy of Arts & Sciences.

Presentations

Software engineering of systems that learn in uncertain domains Keynote

Building reliable, robust software is hard. It is even harder when we move from deterministic domains (such as balancing a checkbook) to uncertain domains (such as recognizing speech or objects in an image). The field of machine learning allows us to use data to build systems in these uncertain domains. Peter Norvig looks at techniques for achieving reliability (and some of the other -ilities).

Tim O’Reilly has a history of convening conversations that reshape the computer industry. In 1998, he organized the meeting where the term “open source software” was agreed on and helped the business world understand its importance. In 2004, with the Web 2.0 Summit, he defined how “Web 2.0” represented not only the resurgence of the web after the dot-com bust but a new model for the computer industry, based on big data, collective intelligence, and the internet as a platform. In 2009, with his Gov 2.0 Summit, Tim framed the conversation about the modernization of government technology that has shaped policy and spawned initiatives at the federal, state, and local levels and around the world. He has now turned his attention to implications of the on-demand economy, AI, robotics, and other technologies that are transforming the nature of work and the future shape of the economy. He shares his thoughts about these topics in his new book, WTF? What’s the Future and Why It’s Up to Us (Harper Business, October 2017). Tim is the founder and CEO of O’Reilly Media and a partner at O’Reilly AlphaTech Ventures (OATV). He sits on the boards of Maker Media (which was spun out from O’Reilly Media in 2012), Code for America, PeerJ, Civis Analytics, and POPVOX.

Presentations

Why we'll never run out of jobs Keynote

There are many who fear that in the future, AI will do more and more of the jobs done by humans, leaving us without meaningful work. To believe this is a colossal failure of the imagination. Tim O'Reilly explains why we can't just use technology to replace people; we must use it to augment them so that they can do things that were previously impossible.

Mark Patel is a partner at McKinsey & Company, where he advises semiconductor, high-tech, and industrial clients on their challenges related to strategy and operations. Mark has a decade of experience helping companies tackle their challenges in strategy, operations, and marketing. He is based in San Francisco and has served clients in the United States, Europe, and the Middle East. Previously, Mark was head of strategy and senior vice president of commercial operations for Amyris, a renewable-products company. Mark holds an MBA from Stanford University as well as master’s and bachelor’s degrees in engineering from the University of Cambridge.

Presentations

Making AI a reality for the enterprise and the physical world Session

Aman Naimat and Mark Patel present an analysis of the current adoption of AI in industry based on a systematic study of the entire business Internet at over 500,000 companies. Drawing on this data, Aman and Mark offer a new economic framework to discover, measure, and motivate future use cases for AI.

Naveen Rao is the vice president and general manager of artificial intelligence solutions at Intel. Naveen’s fascination with computation in synthetic and neural systems began around age 9 when he began learning about circuits that store information and encountered the AI themes prevalent in sci-fi at the time. He went on to study electrical engineering and computer science at Duke, but continued to stay in touch with biology by modeling neuromorphic circuits as a senior project. After studying computer architecture at Stanford, Naveen spent the next 10 years designing novel processors at Sun Microsystems and Teragen, specialized chips for wireless DSP at Caly Networks, video content delivery at Kealia, Inc., and video compression at W&W Comms. After a stint in finance doing algorithmic trading optimization at ITG, Naveen was part of Qualcomm’s neuromorphic research group, leading the effort on motor control and doing business development. Naveen was the founder and CEO of Nervana (acquired by Intel), which brings together engineering disciplines and neural computational paradigms to evolve the state of the art and make machines smarter. Naveen holds a PhD in neuroscience from Brown, where he studied neural computation and how it relates to neural prosthetics in the lab of John Donoghue.

Presentations

Deep learning at scale and use cases Keynote

Deep learning has made a major impact in the last three years. Imperfect interactions with machines, such as speech or image processing, have been made robust by deep learning that finds usable structure in large datasets. Naveen Rao outlines deep learning challenges and explores how changes to the organization of computation and communication can lead to advances in capabilities.

Anna S. Roth is a program manager at Microsoft Technology and Research, where she works on computer vision services. Prior to Microsoft Cognitive Services, she worked at Bing analyzing query logs for catchy stories about user behavior. Anna holds a bachelor’s degree in applied math from Harvard. Say hello on Twitter at @AnnaSRoth.

Presentations

Building and applying emotion recognition Session

Anna Roth and Cristian Canton walk you through building a system to recognize emotions by inferring them from facial expressions. Cristian and Anna explain how they trained their emotion recognition CNN from noisy data and how to approach labeling subjective data like emotion with crowdsourcing before showing a demo of this work in action, as it is exposed in Microsoft’s Emotion API.
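A common first line of defense against the noisy-label problem the talk mentions is aggregating crowdsourced annotations by agreement, keeping a label only when enough annotators concur. A hedged sketch (the 0.6 threshold and emotion labels are illustrative, not Microsoft's actual pipeline):

```python
# Majority-vote aggregation for subjective crowdsourced labels.
from collections import Counter

def aggregate(labels, min_agreement=0.6):
    """Return the majority label, or None when agreement is too low."""
    winner, votes = Counter(labels).most_common(1)[0]
    return winner if votes / len(labels) >= min_agreement else None

print(aggregate(["happy", "happy", "neutral", "happy", "surprise"]))  # happy
print(aggregate(["anger", "disgust", "contempt"]))  # None: too ambiguous
```

Dropping low-agreement examples (or down-weighting them during training) keeps genuinely ambiguous expressions from teaching the CNN contradictory lessons.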

Suman Deb Roy is a computer scientist and the author of Social Multimedia Signals: A Signal Processing Approach to Social Network Phenomena. Suman currently works as the lead data scientist at betaworks, a New York-based startup studio. Previously, he worked with Microsoft Research and was a fellow at the Missouri School of Journalism. Suman is the recipient of the IEEE Communications Society MMTC Best Journal Paper Award in 2015 and the Missouri Honor Medal for Outstanding PhD Research in 2013. He also serves as an editor of the IEEE Special Technical Community on Social Networking. Suman is responsible for building the machine-learning algorithms driving product features in Digg, Instapaper, and Poncho.

Presentations

The identities of bots: A learning architecture for conversational software Session

The recent explosion of bots on communication platforms has rekindled the hopes of conversational AI. However, building intelligent and customizable bots is not bottlenecked only by NLP and speech recognition; our biggest limitation is the inability to modularize the goals of human-bot interconnection. Suman Roy explains why we need a layered architecture for bots to learn about us from data.

Jennifer Rubinovitz is a machine-learning scientist with a passion for enabling human-computer collaboration at DBRS Innovation Labs, where she works with artists on integrating artificial intelligence and machine learning into their work. Jennifer came to DBRS on the heels of earning her MS in computer science with a machine-learning concentration at Columbia University, where she researched data tools and machine-learning algorithms to help early-stage entrepreneurs. Jen holds a bachelor’s in computer science from Rutgers University and also studied at Ringling College of Art & Design.

Presentations

Leveraging artificial intelligence in creative technology Session

Jennifer Rubinovitz and Amelia Winger-Bearskin offer an overview of how artificial intelligence researchers and artists at the DBRS Innovation Lab have collaborated on five different projects (and counting), ranging from composing modern classical music to visualizing deep neural networks in virtual reality.

Sanford Russell is in charge of NVIDIA’s autonomous driving ecosystem in North America, where he leads the development of self-driving vehicles with NVIDIA partners, transportation startups, and research institutions. Previously, Sanford served as general manager of NVIDIA’s CUDA-accelerated software platform. Before joining NVIDIA 17 years ago, he worked at Silicon Graphics. Sanford has a degree in marketing from the University of Massachusetts Dartmouth.

Presentations

Deploying AI-based services in the data center for real-time responsive experiences Session

In the new era of artificial intelligence, every organization must examine how to extract intelligence from its data using deep learning. Sanford Russell explores how NVIDIA GPUs are deployed today to accelerate deep learning inference workloads in the data center.

Saqib Shaikh is a software engineer at Microsoft, where he has worked for 10 years. Saqib has developed a variety of Internet-scale services and data pipelines powering Bing, Cortana, Edge, MSN, and various mobile apps. Being blind, Saqib is passionate about accessibility and universal design; he serves as an internal consultant for teams including Windows, Office, Skype, and Visual Studio and has spoken at several international conferences. Saqib has won three Microsoft hackathons in the past year. His current interests focus on the intersection between AI and HCI and the application of technology for social good.

Presentations

How advances in deep learning and computer vision can empower the blind community Session

Anirudh Koul and Saqib Shaikh explore cutting-edge advances at the intersection of computer vision, language, and deep learning that can help describe the physical world to the blind community. Anirudh and Saqib then explain how developers can utilize this state-of-the-art image description, as well as visual question answering and other computer-vision technologies, in their own applications.

Vin Sharma is the director of machine learning solutions in the Data Center group at Intel, where he focuses on autonomous driving and automated trading. Vin has helped build data center infrastructure software platforms—most recently the Trusted Analytics Platform—and has helped drive enterprise adoption of open source software like Linux, KVM, OpenStack, and Hadoop, as well as analytics, for over 20 years. Before joining Intel, Vin held various engineering and management roles at HP for 15 years, building enterprise software products based on Linux, Java, XML, and other open source software.

Presentations

AI on IA Session

Vin Sharma explores how Intel is investing in artificial intelligence, using open source software platforms, frameworks, and libraries, as well as its own hardware, to advance the field.

Sasha Targ is an MD-PhD student at the University of California, San Francisco, where she uses deep learning to solve problems in computational genomics and medicine. Sasha studied biology and physics at MIT and graduated Phi Beta Kappa in three years in order to pursue research full time. She is also interested in the intersection of public health and technology and has worked on interventions to improve access to preventive health information in the Boston Chinatown community. Previously, she conducted six years of basic immunology research into mechanisms of antibody development that could be used to create better vaccines, resulting in two Science coauthorships.

Presentations

Genetic architect: Investigating the structure of biology with machine learning Session

Each human genome is a 3-billion-base-pair set of encoding instructions. Decoding the genome using deep learning fundamentally differs from most tasks, as we do not know the full structure of the data and therefore cannot design architectures to suit it. Laura Deming and Sasha Targ describe novel machine-learning search algorithms that allow us to find architectures suited to decoding genomic data.
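In its simplest form, the architecture search the abstract alludes to is random search: sample candidate architectures from a space, score each, keep the best. In this hedged sketch the search space and the scoring function are stand-ins, not the speakers' actual method:

```python
# Random search over a toy architecture space.
import random

SEARCH_SPACE = {"layers": [1, 2, 4, 8], "width": [32, 64, 128], "kernel": [3, 7, 11]}

def sample_architecture(rng):
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def validation_score(arch):
    # Stand-in for "train the model, measure held-out accuracy":
    # deeper and wider helps here, and a mid-sized kernel is preferred.
    return arch["layers"] * 0.05 + arch["width"] / 256 - abs(arch["kernel"] - 7) * 0.01

def random_search(trials=20, seed=0):
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(trials)]
    return max(candidates, key=validation_score)

best = random_search()
print(best)
```

The interesting research is in replacing blind sampling with smarter proposals, since each "score" is an expensive training run on genomic data.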

Richard Tibbetts is CEO of Empirical Systems, an MIT spinout building an AI-based data platform that provides decision support to organizations that use structured data. Previously, he was founder and CTO at StreamBase, a CEP company that merged with TIBCO in 2013, as well as a visiting scientist at the Probabilistic Computing Project at MIT.

Presentations

Probabilistic programming for augmented intelligence Session

The next generation of AI systems will provide assisted intuition and judgment for everyday people trying to collaboratively solve hard problems. Vikash Mansinghka and Richard Tibbetts explore how AI will be used on problems like malnutrition, public health, education, and governance—complex, ambiguous areas of human knowledge where data is sparse and there are no rules.

Matt Turck is a managing director of FirstMark Capital, where he invests across a broad range of early-stage enterprise and consumer startups, with a particular focus on big data, AI, and frontier tech companies. Previously, Matt was a managing director at Bloomberg Ventures, the investment and incubation arm of Bloomberg LP, which he helped start, and the cofounder of TripleHop Technologies, a venture-backed enterprise search software startup that was acquired by Oracle. Matt is passionate about building communities and organizes two large monthly events, Data Driven NYC (which focuses on data-driven startups, big data, and AI) and Hardwired NYC (which focuses on frontier tech, including the Internet of Things, AR/VR, drones, and other emerging technologies). Matt graduated from Sciences-Po (IEP) Paris and holds a master of laws (LLM) from Yale Law School. He blogs at mattturck.com.

Presentations

Building an AI startup: Realities and tactics Session

AI is all the rage in tech circles, and the press is awash in tales of AI entrepreneurs striking it rich after being acquired by one of the giants. Matt Turck and Peter Brodsky explain why the realities of building a startup are different and offer successful strategies and tactics that consider not just technical prowess but also thoughtful market positioning and business excellence.

Benjamin Vigoda is the CEO of Gamalon Machine Intelligence. Previously, Ben was technical cofounder and CEO of Lyric Semiconductor, a startup that created the first integrated circuits and processor architectures for statistical machine learning and signal processing. The company was named one of the 50 Most Innovative Companies by Technology Review and was featured in the Wall Street Journal, New York Times, EE Times, Scientific American, Wired, and other media. Lyric was successfully acquired by Analog Devices, and Lyric’s products and technology are being deployed in leading smartphones and consumer electronics, medical devices, wireless base stations, and automobiles.

Ben also cofounded Design That Matters, a not-for-profit that for the past decade has helped solve engineering and design problems in underserved communities and has saved thousands of infant lives by developing low-cost, easy-to-use medical technology such as infant incubators, UV therapy, pulse oximeters, and IV drip systems that have been fielded in 20 countries. He has won entrepreneurship competitions at MIT and Harvard and fellowships from Intel and the Kavli Foundation/National Academy of Sciences and has held research appointments at MIT, HP, Mitsubishi, and the Santa Fe Institute. Ben has authored over 120 patents and academic publications. He currently serves on the DARPA Information Science and Technology (ISAT) steering committee. Ben holds a PhD from MIT, where he developed circuits for implementing machine-learning algorithms natively in hardware.

Presentations

Bayesian program learning for the enterprise Session

Benjamin Vigoda explains how Bayesian program learning can do things that other machine-learning approaches can't and why it's especially suited to enterprise data challenges.

Jianqiang “Jay” Wang is a data science lead at Stitch Fix working on recommendation algorithms and human computer interaction. Previously, Jay worked in academia on survey sampling, nonparametric smoothing, and Bayesian hierarchical models; at HP Labs on demand forecasting and supply-chain management; and as a data scientist at Twitter on ads CTR prediction and ranking. Jay holds a PhD in statistics from Iowa State University.

Presentations

Combining statistics and expert human judgement for better recommendations Session

Jay Wang and Jasmine Nettiksimmons explore the business model of Stitch Fix, an emerging startup that combines artificial intelligence with human experts to deliver a personalized shopping experience, and highlight the challenges encountered in implementing Stitch Fix's recommendation algorithm and in integrating the AI with human stylists.

Pete Warden is the mobile/embedded lead for the TensorFlow team. Pete was formerly the CTO of Jetpac, which was acquired by Google for its deep learning technology optimized to run on mobile and embedded devices. He previously worked at Apple on GPU optimizations for image processing and has written several books on data processing for O’Reilly.

Presentations

TensorFlow for mobile poets Session

Pete Warden shows you how to train an object recognition model on your own images and then integrate it into a mobile application. Drawing on concrete examples, Pete demonstrates how to apply advanced machine learning to practical problems without the need for deep theoretical knowledge or even much coding.
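The retraining workflow Pete describes rests on transfer learning: a pretrained network serves as a fixed feature extractor, and only a small classifier on top of its feature vectors is trained on the new labels. A minimal pure-Python sketch of that final step, where the feature vectors and labels are hypothetical stand-ins for real "bottleneck" activations (no TensorFlow is used):

```python
import math

def train_softmax(features, labels, n_classes, lr=0.5, epochs=200):
    """Train a single softmax layer on fixed feature vectors via SGD."""
    dim = len(features[0])
    w = [[0.0] * dim for _ in range(n_classes)]
    for _ in range(epochs):
        for x, y in zip(features, labels):
            # Forward pass: class scores -> softmax probabilities.
            scores = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
            m = max(scores)  # subtract max for numerical stability
            exps = [math.exp(s - m) for s in scores]
            z = sum(exps)
            probs = [e / z for e in exps]
            # Backward pass: cross-entropy gradient is (p_c - [c == y]) * x_i.
            for c in range(n_classes):
                err = probs[c] - (1.0 if c == y else 0.0)
                for i in range(dim):
                    w[c][i] -= lr * err * x[i]
    return w

def predict(w, x):
    scores = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    return scores.index(max(scores))

# Hypothetical 3-D "bottleneck" features for two image classes.
feats = [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1], [0.0, 0.1, 1.0], [0.1, 0.0, 0.9]]
labs = [0, 0, 1, 1]
w = train_softmax(feats, labs, n_classes=2)
print(predict(w, [0.95, 0.1, 0.05]))  # prints 0
```

In the real workflow the pretrained network computes the feature vectors, which is why retraining on a handful of your own images is fast enough to run on a laptop.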

Francisco Webber is the CEO and cofounder of Cortical.io, a company that develops natural language processing solutions for big text data. Francisco’s medical background in genetics combined with his more than two decades of experience in information technology inspired him to create semantic folding, a groundbreaking technology based on the latest findings on the way the human neocortex processes information. Prior to Cortical.io, Francisco founded Matrixware Information Services, a company that developed the first standardized database of patents. Francisco also initiated the Information Retrieval Facility, a nonprofit research institute, with the goal of bridging the gap between science and industry in the information retrieval domain.

Presentations

AI is not a matter of strength but of intelligence Session

Francisco Webber offers a critical overview of current approaches to artificial intelligence using "brute force" (aka big data machine learning) as well as a practical demonstration of semantic folding, an alternative approach based on computational principles found in the human neocortex. Semantic folding is not just a research prototype—it's a production-grade enterprise technology.
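Semantic folding encodes text as sparse binary "semantic fingerprints" and compares meanings by overlap between fingerprints. A toy sketch of that comparison, assuming a tiny hypothetical corpus (Cortical.io's actual fingerprints are learned from large corpora and are far richer):

```python
# Each word's fingerprint is the set of contexts it occurs in -- a sparse
# binary vector. Similarity is the overlap between two fingerprints.
# The four-sentence corpus below is a hypothetical stand-in.

contexts = [
    "the dog chased the cat",
    "the cat sat on the mat",
    "stocks fell on the market today",
    "the market rallied as stocks rose",
]

def fingerprint(word):
    """Set of context IDs in which the word occurs."""
    return {i for i, ctx in enumerate(contexts) if word in ctx.split()}

def overlap(w1, w2):
    """Similarity as the size of the fingerprint intersection."""
    return len(fingerprint(w1) & fingerprint(w2))

print(overlap("cat", "dog"), overlap("cat", "stocks"))  # prints 1 0
```

The point of the session's "intelligence over strength" framing is visible even in the toy: similarity comes from how the representation is structured, not from the volume of data crunched at query time.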

Martin Wicke is a software engineer working on making sure that TensorFlow is a thriving open source project. Before joining Google’s Brain team, Martin worked in a number of startups and did research on computer graphics at Berkeley and Stanford.

Presentations

High-level APIs for scalable machine learning Session

TensorFlow is a system for scalable machine learning. However, working with raw TensorFlow, and profiling, optimizing, and debugging large-scale models, can be daunting for novice and expert users alike. Martin Wicke explores new APIs built on TensorFlow that aim to make building complex models easier and allow users to scale quickly.

New York-native Amelia Winger-Bearskin is an accomplished artist and creative technologist with a deep understanding of digital media, visual storytelling, and performance. Amelia is currently the director of the DBRS Innovation Lab, where beautiful things are made with big data, and an artist in residence (tech) at Pioneer Works. She also created LoveMachine.ai, a research group exploring human-positive AI in virtual experiences that hosts a salon series. Amelia’s video artwork was included in the 2014 Storytelling: La biennale d’art contemporain autochtone, 2e édition (Art Biennale of Contemporary Native Art) at Art Mur (Montreal, Canada). She performed as part of the 2012 Gwangju Biennial and created an interactive portion of the Exchange Archive at the Museum of Modern Art (MoMA) in 2013. She is the cofounder of the Stupid Hackathon and was an assistant professor of interactive performance art at Vanderbilt University before coming home to NYC to study in NYU’s Interactive Telecommunications Program.

Presentations

Leveraging artificial intelligence in creative technology Session

Jennifer Rubinovitz and Amelia Winger-Bearskin offer an overview of how artificial intelligence researchers and artists at the DBRS Innovation Lab have collaborated on five different projects (and counting), ranging from composing modern classical music to visualizing deep neural networks in virtual reality.

Orion Wolfe started his career working on aerospace guidance, controls, modeling, and simulation. He then worked as a quantitative trader, developing strategies as part of a statistical arbitrage trading group. Inspired to learn more about computer science while furthering his skills in modeling, Orion joined a research group in applied machine learning with a focus on semiconductor manufacturing tools and anomaly detection. He holds a BS with honors in electrical engineering from UCLA and an MS in management science and engineering from Stanford. While in college, Orion conducted research on particle physics and support vector machines.

Presentations

Chainer: A flexible and intuitive framework for complex neural networks Session

Open source software frameworks are key to applying deep learning technologies. Orion Wolfe and Shohei Hido introduce Chainer, a standalone Python-based framework that enables users to intuitively implement many kinds of models, including recurrent neural networks, with a great deal of flexibility and performance comparable to other GPU-enabled frameworks.
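Chainer's distinguishing idea is define-by-run: the backward graph is recorded while ordinary Python code executes forward, so data-dependent control flow, as in a recurrent network, needs no special graph syntax. A toy sketch of the idea (this is not Chainer's actual API):

```python
# Minimal define-by-run autodiff: each operation records its inputs and
# local gradients as it runs, building the backward graph on the fly.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # (parent Var, local gradient) pairs

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, grad=1.0):
        self.grad += grad
        for parent, local in self.parents:
            parent.backward(grad * local)

# A "recurrent" computation whose graph depth depends on a runtime loop.
w = Var(0.5)
h = Var(1.0)
for _ in range(3):   # the graph grows step by step as the loop runs
    h = h * w
h.backward()         # h = w^3, so dh/dw = 3 * w^2 = 0.75
print(h.value, w.grad)  # prints 0.125 0.75
```

Because the loop itself builds the graph, sequences of different lengths simply produce graphs of different depths, which is the flexibility the session highlights for RNNs.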

Reza Bosagh Zadeh is the founder and CEO of Matroid and an adjunct professor at Stanford University. His work focuses on machine learning, distributed computing, and discrete applied mathematics. Reza holds a PhD in computational mathematics from Stanford, where he was advised by Gunnar Carlsson. His awards include a KDD best paper award and the Gene Golub Outstanding Thesis Award. He has served on the technical advisory boards of Microsoft and Databricks.

As part of his research, Reza built the machine-learning algorithms behind Twitter’s who-to-follow system, the first product to use machine learning at Twitter. Reza is the initial creator of the linear algebra package in Apache Spark, and through Apache Spark, his work has been incorporated into industrial and academic cluster computing environments. In addition to research, Reza designed and teaches two PhD-level classes at Stanford: Distributed Algorithms and Optimization (CME 323) and Discrete Mathematics and Algorithms (CME 305).

Presentations

Benefits of scaling machine learning Session

Machine learning is evolving to utilize new hardware, such as GPUs and large commodity clusters. Reza Zadeh presents two projects that have benefited greatly from scaling: obtaining leading results on the Princeton ModelNet object recognition task and matrix computations and optimization in Apache Spark.
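The matrix computations Reza describes scale by block partitioning: a large matrix is split into blocks, each block product is computed on a separate worker, and the partial products are reduced by output block. A single-process toy sketch of the pattern (Spark's distributed matrices apply it across a cluster):

```python
# Block-partitioned matrix multiply on plain lists of lists. Assumes square
# n x n matrices with block size bs dividing n, for simplicity.

def matmul(a, b):
    """Plain dense matrix multiply."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def block(m, r0, r1, c0, c1):
    """Extract the submatrix m[r0:r1, c0:c1]."""
    return [row[c0:c1] for row in m[r0:r1]]

def block_matmul(a, b, bs):
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            for k in range(0, n, bs):
                # Each block product is independent, so in a cluster these
                # run in parallel and are summed (reduced) by key (i, j).
                p = matmul(block(a, i, i + bs, k, k + bs),
                           block(b, k, k + bs, j, j + bs))
                for di in range(bs):
                    for dj in range(bs):
                        c[i + di][j + dj] += p[di][dj]
    return c

a = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
b = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(block_matmul(a, b, 2) == matmul(a, b))  # prints True
```

The same reduce-by-block-key structure is why these computations map so naturally onto commodity clusters.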

Matthew Zeiler is the founder and CEO of Clarifai, where he is applying his award-winning research to create the best visual recognition solutions for businesses and developers and power the next generation of intelligent apps. An artificial intelligence expert, Matt led groundbreaking research in computer vision, alongside renowned machine learning pioneers Geoff Hinton and Yann LeCun, that has propelled the image recognition industry from theory to real-world practice. He holds a PhD in machine learning from NYU.

Presentations

Unlocking AI: How to enable every human in the world to train and use AI Session

Fostering diversity in the burgeoning AI community is a responsibility that falls upon all of us, not just corporate gatekeepers or data scientists with advanced technical degrees. Matt Zeiler unveils groundbreaking new technologies that will transform the way AI is “taught” and make both teaching and using AI accessible to anyone in the world.

Angela Zhou is a data scientist at x.ai focused on understanding scheduling-related email through data analysis, machine learning, and NLP. In the past, she has used statistical machine-learning methods for iris detection and recognition. Angela holds a BS in mathematics from Northeastern University in China and a master’s degree in statistics from Columbia University.

Presentations

A peek at x.ai’s data science architecture Session

In any human-machine interaction, you need a dialogue model: the machine must understand and be able to respond appropriately. Angela Zhou discusses x.ai's AI personal assistant, Amy Ingram, who schedules meetings for you, focusing on the way x.ai has approached both understanding and responding.