As software becomes more free and open, it is also becoming more complex and expensive to operate. How can we in the open source community clarify best practices and recommended operations for modeling complex, interconnected services so users can focus on their ideas? How can we as developers deliver those best practices in our applications so users are free to focus on the science on their choice of substrate (e.g., laptop, cloud, or bare metal; x86, ARM, ppc64el, or s390x)?
Antonio Rosales offers an overview of Juju, an open source tool for distilling the best practices and operations needed to run interconnected big data solutions, such as modeling a multinode Apache Spark cluster across a diverse set of substrates and adding other services to build larger solutions. Antonio leads a demo of Juju in action, and all of the software shown is free and open source, so you can try it yourself.
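As a flavor of the kind of modeling the demo covers, the sketch below shows how a multinode Spark cluster might be described with the Juju CLI. The charm name `apache-spark` and unit counts are illustrative, and the commands assume a Juju controller has already been bootstrapped against your chosen substrate (laptop, cloud, or bare metal):

```
# Illustrative sketch only: charm name and counts are assumptions,
# and a bootstrapped Juju controller is required beforehand.
juju deploy apache-spark            # deploy the Spark charm
juju add-unit -n 2 apache-spark     # scale out to a three-node cluster
juju status                         # watch the units come up
```

The same model can then be extended by deploying additional charms and relating them to the cluster, which is the "adding other services to build larger solutions" idea described above.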
This session is sponsored by Canonical.
Antonio Rosales is an engineering manager at Canonical. Antonio has spent the past 15 years in the Unix/Linux community working with Sun Microsystems and IBM. He enjoys working on open source projects, specifically those that enable people to iterate on their ideas faster and realize their solutions.
©2016, O'Reilly Media, Inc. • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
Apache Hadoop, Hadoop, Apache Spark, Spark, and Apache are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries and are used with permission. The Apache Software Foundation has no affiliation with, and does not endorse or review, the materials provided at this event, which is managed by O'Reilly Media and/or Cloudera.