Machine learning in data quality management
Traditional rule-based data quality management is costly and hard to scale in a big data environment, since it depends on subject-matter experts across the business, data, and technology domains. A machine-learning-based approach offers a cost-effective and scalable way to manage data quality over large volumes of data.
Jennifer Yang discusses a use case that demonstrates how to use machine learning techniques in the data quality management space in the financial industry. You’ll discover the results of applying various machine learning techniques in the four most commonly defined data validation categories and learn approaches to operationalize the machine learning data quality management solution. Jennifer also shares lessons learned along the way.
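The core idea the talk contrasts, replacing hand-written validation rules with thresholds learned from historical data, can be illustrated with a minimal sketch. This is not the speaker's implementation; the function names, the loan-amount data, and the choice of a simple mean/standard-deviation profile are all illustrative assumptions:

```python
import statistics

def learn_profile(values):
    """Learn a statistical profile (mean, stdev) from historical data,
    instead of hand-writing a fixed validity range as a rule-based check would."""
    return statistics.mean(values), statistics.stdev(values)

def flag_outliers(values, profile, k=3.0):
    """Flag values more than k standard deviations from the learned mean."""
    mean, stdev = profile
    return [v for v in values if abs(v - mean) > k * stdev]

# Hypothetical historical values assumed clean; the new batch has one bad record.
history = [1200.0, 1500.0, 1100.0, 1300.0, 1400.0, 1250.0, 1350.0]
profile = learn_profile(history)
print(flag_outliers([1280.0, 1320.0, 99999.0], profile))  # → [99999.0]
```

A production system would use richer models per validation category, but the principle is the same: the acceptable range is learned from the data rather than specified by an expert.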
Wells Fargo ECS
Jennifer Yang is the head of data management and data governance at Wells Fargo Enterprise Core Services. Previously, she served in various senior leadership roles in risk management and capital management at major financial institutions. Her unique experience allows her to understand data and technology from both the end user’s and data management’s perspectives. She’s passionate about leveraging the power of new technologies to gain insights from data and develop cost-effective, scalable business solutions. Jennifer holds an undergraduate degree in applied chemistry from Beijing University, a master’s degree in computer science from the State University of New York at Stony Brook, and an MBA specializing in finance and accounting from New York University’s Stern School of Business.