Faced with the need to handle increasing volumes of data, alternative datasets ("alt data"), and AI, many enterprises are working to design or redesign their big data architectures. The traditional approach is to store everything in a data lake, process it via ETL, and analyze it in batch or interactive modes. However, this decade-old pattern often fails to generate sufficient ROI.
Yaron Haviv argues for a different approach, in which information is ingested, enriched, and analyzed in context as it arrives, including via machine learning or deep learning, and is then immediately made available to users or used to drive automated actions. He explains how modern hardware, microservices, and serverless functions make it possible to achieve much higher performance while still benefiting from CI/CD, autoscaling, and fast software rollouts. The resulting "continuous analytics" solutions yield faster answers for the business while remaining simpler and less expensive for IT.
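To make the pattern concrete, here is a minimal sketch (not iguazio's actual API; the event fields, reference table, and `score` stub are illustrative assumptions) of a serverless-style handler that enriches and analyzes each event in context as it arrives, then returns a decision that can drive an automated action immediately, rather than landing the raw data for a later batch ETL pass:

```python
# Illustrative sketch of continuous analytics: ingest -> enrich -> analyze -> act,
# one event at a time, in the style of a stateless serverless function.

# Hypothetical reference data used to enrich incoming events in-flight.
REFERENCE = {"sensor-7": {"site": "plant-a", "max_temp": 80.0}}

def score(event):
    """Stand-in for an ML/DL model: flag readings above the site threshold."""
    return event["temp"] > event["max_temp"]

def handler(event):
    """Process one event in context as it arrives."""
    # Enrich the raw event with reference data.
    enriched = {**event, **REFERENCE.get(event["sensor"], {})}
    # Analyze immediately (here, a trivial scoring function).
    enriched["alert"] = score(enriched)
    # In a real deployment this would publish to a stream or call an API;
    # returning the decision keeps the sketch self-contained.
    return enriched

result = handler({"sensor": "sensor-7", "temp": 92.5})
```

The key design point is that enrichment and analysis happen inline per event, so the result is actionable the moment it is produced; scaling comes from running many such stateless handlers in parallel.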
Yaron Haviv is CTO at iguazio. Yaron is a serial entrepreneur with deep technological experience in the fields of big data, cloud, storage, and networking. Previously, he was the vice president of data center solutions at Mellanox, where he led technology innovation, software development, and solution integration, and the CTO and vice president of R&D at Voltaire, a high-performance computing, I/O, and networking company. Yaron is a CNCF member and one of the authors in the CNCF working group.
©2019, O'Reilly Media, Inc.