Impure is a visual programming language for building workspaces that enable the exploration of complex datasets. Its main goal is to help non-expert users manipulate and understand data through a set of visual elements that, connected to one another, operate on and visualize the data, supporting the generation of insight.
Mendeley is opening up the world's knowledge and connecting researchers to accelerate scientific progress. With 700K researchers and 60M research documents uploaded, Mendeley is approaching the scale at which genuinely useful things can be done with the attention metadata we capture about research articles. I will demonstrate influential author prediction, paper recommendation and an Open API.
Interactive simulations of wildfires and traffic are projected down onto a physical sandtable. Users can change the modeled topography by moving the sand directly, with the elevation map generated by a projector-camera pair. GIS layers with active agent-based models are displayed for training, community outreach and incident command by public safety workers.
The move to cloud infrastructure and the need to handle big data have created the perfect catalysts for organizations to introduce new infrastructure software and break ties with their expensive incumbent vendors. Ed will share a detailed strategy on how to leverage open source database solutions like PostgreSQL to contain database cost and free budget for other, more valuable initiatives.
This presentation lays bare the dark underbelly of analytics in the enterprise. Drawing on darkly humorous experiences, the speaker will explain why executives treat analytics as an occult phenomenon. The talk will give executives the mental tools to separate strategically valuable analytics projects from fishing expeditions, and provide litmus tests to keep the witch doctors honest.
Live demonstration of ambient computing using projector-camera pairs to scan the room and place interactive simulations into the space. All surfaces are rendered interactive. We will demonstrate a 3D sandtable for firefighter training and STEM education where the 3D sand becomes an interactive surface.
Apache Cassandra is a second-generation distributed database originally open-sourced by Facebook. Its write-optimized shared-nothing architecture results in excellent performance and scalability.
This tutorial will cover application design with Cassandra through a series of exercises with Twissandra, a simple Twitter clone written in Python and Django.
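The write-optimized, denormalized modeling style such a tutorial teaches can be sketched in plain Python. Below, dicts stand in for Cassandra column families; the names (`post_tweet`, `timeline`) are illustrative and not Twissandra's actual schema:

```python
from collections import defaultdict

# Plain dicts stand in for Cassandra column families; the schema here
# is illustrative, not Twissandra's actual data model.
tweets = {}                   # tweet_id -> {"username": ..., "body": ...}
timeline = defaultdict(list)  # username -> list of tweet_ids in their feed
followers = defaultdict(set)  # username -> set of follower usernames

def post_tweet(tweet_id, username, body):
    """Write the tweet once, then denormalize it into every follower's
    timeline -- a write-heavy pattern that suits Cassandra's architecture,
    because reads of a timeline then touch a single row."""
    tweets[tweet_id] = {"username": username, "body": body}
    timeline[username].append(tweet_id)
    for follower in followers[username]:
        timeline[follower].append(tweet_id)

followers["alice"].add("bob")
post_tweet("t1", "alice", "hello strata")
print([tweets[t]["body"] for t in timeline["bob"]])  # ['hello strata']
```

The design choice to duplicate tweet IDs into each follower's timeline trades extra writes for cheap reads, which is the trade-off Cassandra's shared-nothing architecture is built around.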
Apache Avro provides an expressive, efficient standard for representing large data sets. Avro data is programming-language neutral and MapReduce-friendly. It may eventually replace gzipped CSV-like formats as a dominant format for data.
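For illustration, an Avro record schema is declared in plain JSON; the record and field names below are hypothetical:

```json
{
  "type": "record",
  "name": "PageView",
  "fields": [
    {"name": "url", "type": "string"},
    {"name": "user_id", "type": "long"},
    {"name": "timestamp", "type": "long"}
  ]
}
```

Because the schema travels with the data file, a program in any language can read it without out-of-band agreements about column order or types, which is where CSV-like formats fall short.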
This presentation will focus on how businesses can maximize big data analytics for deeper customer insights.
We're presenting our database-oriented textual analytics and processing platform - a suite of tools designed to allow a wide range of functionality (frequency analysis, clustering, classification) from within your database of text sources.
This is a recommendation engine built on self-organizing flocking behavior mapped to survey answers such as the US Census. Alignment, cohesion, separation, attraction, and wandering are weighted to balance the interaction of respondent characteristics. It's intended to be used as a meditative discovery tool, suggesting patterns in complex data sets to be pursued with traditional analysis methods.
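The weighted combination of steering behaviors described above can be sketched in a few lines; the specific force vectors and weights here are illustrative, not the exhibit's actual tuning:

```python
# Minimal sketch of combining weighted steering forces, boids-style.
# Each force is a 2-D vector from one behavior (alignment, cohesion,
# separation, attraction, wander); weights balance their influence.
def combine_steering(forces, weights):
    """Return the weighted sum of 2-D steering vectors."""
    x = sum(w * fx for (fx, _), w in zip(forces, weights))
    y = sum(w * fy for (_, fy), w in zip(forces, weights))
    return (x, y)

# alignment, cohesion, separation, attraction, wander (illustrative)
forces = [(1.0, 0.0), (0.5, 0.5), (-0.2, 0.1), (0.0, 1.0), (0.1, -0.1)]
weights = [1.0, 0.8, 1.5, 0.6, 0.3]
print(combine_steering(forces, weights))
```

Raising one weight (e.g. separation) shifts the whole flock's emergent pattern, which is how respondent characteristics can be balanced against each other.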
We will discuss the impact of the information explosion, the effectiveness of current technological directions, and explore the success that new perception-based, human-computer interfaces provide in analyzing and understanding complex data. Real examples will be used to illustrate that effective man-machine environments are essential in productively dealing with multi-dimensional information.
How do you build a crack team of data scientists on a shoestring budget? In this 40-minute presentation from the co-founder of Infochimps, Flip Kromer will draw from his experiences as a teacher and his vast programming and data experience to share lessons learned in building a team of smart, enthusiastic hires.
BigDataCamp is a free unconference for users of Hadoop and other Big Data-related technologies with the purpose of exchanging ideas and sharing experiences about this quickly emerging field. This BigDataCamp takes place the evening of Jan 31st at the Santa Clara Convention Center, the night before Strata kicks off.
BillGuard is a personal finance security startup harnessing the “collective vigilance” of crowds to protect everyone from unwanted charges such as hidden fees, billing errors, scams and fraud on our credit and debit card bills. With BillGuard, whenever anyone flags a charge on their bill that also appears on yours, you get an alert and even help getting your money back.
The state of open data today is a real mess. It's very difficult to find the data you need and be confident that it's timely and accurate. There is a growing list of companies now vying to become the key destinations for people to gather around new datasets and be excited together. What projects, partnerships and even ventures would be created if there was a marketplace for data?
How do you go about building a product around data using Hadoop? This talk will present how LinkedIn builds and maintains features such as People You May Know. We will present our architecture for doing so (open-sourced), as well as knowledge we've gained along the way.
There's never just one way to do things, but for big bets like Big Data, it helps to learn about paths that others have taken. In this presentation, Bob Page, VP Data & Analytics Platforms for eBay, gives a "behind the scenes" look at the systems and procedures that power decision-making at the world's largest online marketplace.
BuzzData intends to become the first destination for people excited about working with data. Structured as a collaborative social hub, BuzzData aims to help people discover trending datasets in a curated environment. Whether you're a hacker looking for inspiration, or an entrepreneur sourcing a streaming feed, BuzzData's unique community marketplace is designed to make your world happier.
In 2001, the Institute of Medicine declared that “between the care we have and the care we could have lies not just a gap, but a chasm,” yet nothing’s really changed. Healthcare remains one of the most richly endowed yet poorly equipped knowledge industries anywhere. Using real world examples, we’ll see how BIG DATA may be just what the doctor ordered, but only if we pick the right problems.
This tutorial describes how to draw clear, concise, accurate graphs that are easier to understand than many of the graphs one sees today. The tutorial emphasizes how to avoid common mistakes that produce confusing or even misleading graphs. Graphs for one, two, three, and many variables are covered as well as general principles for creating effective graphs.
Ram Peddibhotla, a Director from Intel’s Software and Services Group, will discuss how the future of mobile involves ubiquity across multiple hardware platforms. Specifically, Ram will discuss how open source software will shape the next generation of computing devices, improving compatibility.
Topics for any discipline that focuses on quantitative or technical data have always depended on the datasets that were available at the time. Crowdsourcing has changed that — democratizing the data-collection process and cutting researchers’ reliance on stagnant, overused datasets. Tools like Amazon Mechanical Turk allow anyone to gather data overnight, rather than waiting years.
Artistic visualizations and infographics tell the stories of rich data in unique, compelling ways, synthesizing datasets so they can be interpreted, absorbed, and experienced beyond the spreadsheet, pie chart, and bar graph.
This tutorial offers a basic introduction to practicing data science.
We'll walk through several typical projects that range from
conceptualization to acquiring data, to analyzing and visualizing it,
to drawing conclusions.
Zane Adam from Microsoft speaks about the Azure Data Marketplace.
The OpenStack project was launched last summer by Rackspace, NASA, and a number of other cloud technology leaders in an effort to build a fully-open cloud computing platform. It is a collection of scalable, standards-based projects currently consisting of OpenStack Compute and OpenStack Object Storage. This session will introduce the projects and describe how they can help manage your data.
Moderated by: Marshall Kirkpatrick
After Kennedy, you couldn't win an election without TV. After Obama, it was social media. But tomorrow's citizen gets their information from visualizations.
In this panel, three acclaimed designers show how they apply visualization to big data, making complex, controversial topics easy to understand and explore.
Moderated by: Julie Steele
Does information really want to be free? While the Internet is full of open data, there's plenty of data companies are willing to pay handsomely for -- particularly if it's timely and well aggregated.
As a result, data marketplaces are a burgeoning business. This panel will look at the market for data, and where it's headed.
A wiki database and tools for crowdsourced interpretation of the human genome. A $100 peek into your and your family's profile of diseases and drug reactions.
The new data centricity demands that we rethink how we collect, store, manage, analyze and share our data, as all these processes now require limitless resources. This talk will focus on the changes in infrastructure requirements to support the new world and how innovations are removing barriers for companies to be successful.
Data-Publica’s team created the first directory of the French Public Sector Information in less than 9 months. Referencing more than 1800 datasets, Data-Publica became the 3rd PSI directory in the world after data.gov and data.gov.uk. The project gave birth to Data-Publica.com, a company that will leverage this directory and launch the first data marketplace in France in early 2011.
DataSift is a real-time programmable curation platform. It is a cloud-based collaborative platform that lets users define, in our own unique stream definition language, rules that determine how content is curated. On top of this we have augmentations from Klout and PeerIndex for social authority, and natural language processing from Lexalytics. Finally, all data can be stored and filtered.
The tools we use play a key role in how we use and respond to big data. Hear about the changes being led by key architects of future big data systems.
We are designing DemandEstimator as a decision-support tool for executives in sales, distribution and marketing roles. It combines the power of data mash-ups, advanced analytics and information visualization to provide granular, fact-based insights leading to targeted decisions.
Chart.io graduated from the Y Combinator in 2010 (TechCrunch coverage: http://tcrn.ch/9bVRdB). Chart.io is an extremely easy to use SaaS Business Intelligence tool for SMBs. We let companies connect their databases (and other 3rd party data sources) and create real-time charts which they can share with their whole team. Our goal is to bring the power of enterprise BI to small companies.
Organizations today possess massive data - in tera- and petabytes - that needs to be effectively collected, stored and processed. Hadoop is a cost-effective option that helps manage this big data. To derive real returns from these big data systems, one needs to extract useful insights and act on them.
When faced with endless data and the need to manage it, there are a variety of proven design patterns that will help designers create usable, efficient, and effective interfaces. From distributing workload to reducing sensory overload, we’ll review the techniques that are enabling the highly scalable user interfaces of today and tomorrow.
There has been an explosion in database technology designed to handle big data and deep analytics from both established vendors and startups. This session will provide a quick tour of the primary technology innovations and systems powering the analytic database landscape.
In a first, Forbes presented all federal campaign contributions by America’s wealthiest people in our September 2010 online edition of the Forbes 400. We combined human effort and homegrown database code to sort through 6 million political donations and find the 20,000 that came from America’s richest people.
Learn how to leverage data exhaust, the digital byproduct of our online activities, to solve problems and discover insights about the world around you. We will walk through a real world example which combines several datasets and statistical techniques to discover insights and make predictions about attendees at O'Reilly Strata.
With thousands of datapoints per second from nodes around the world, how can you tell when something isn't right? The bottom line is: it's hard, but with the right tools it is achievable.
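One simple building block for spotting when something "isn't right" is a z-score check against a trailing window of readings. This is a minimal sketch with illustrative thresholds; production monitoring pipelines use far more robust methods:

```python
from statistics import mean, stdev

# Flag a reading that falls more than k standard deviations from the
# mean of a trailing window of recent values. k=3.0 is a common but
# illustrative default.
def is_anomalous(window, value, k=3.0):
    mu, sigma = mean(window), stdev(window)
    if sigma == 0:
        return value != mu
    return abs(value - mu) > k * sigma

window = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0]
print(is_anomalous(window, 10.1))  # a normal reading
print(is_anomalous(window, 25.0))  # a spike
```

At thousands of datapoints per second the hard part is keeping such windows and statistics updated incrementally per metric, which is where purpose-built tooling earns its keep.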
Discover how the industrial revolution in data will affect your business. Learn about the new opportunities and challenges that big data and analytics provide, hear from successful data-driven businesses, and plan for the impact on your organization's infrastructure and personnel.
Much useful business data is in "semi-structured" form: government filings, insurance claims, customer comment forms, etc. Although most search tools don't take advantage of it, knowing a little structure goes a long way. This talk will show how semi-structured data can be interpreted, summarized, and applied to produce business value in several real-life examples.
Ever been stuck waiting for a late flight as another airline’s flight boards and leaves? Using 10 years of flight data from the FAA, this application looks at your desired departure and arrival cities and then tells you the average arrival and departure delay by airline. You can even choose your threshold of pain, i.e. your tolerance for delays, and filter out airlines that exceed it.
FocusLab is a powerful tool for analyzing and visualizing user behavior. It marries the quantitative power of traffic analytics with the insight of focus groups, making it an important part of the online professional's toolkit. The tool makes it easy to perform segmentations on populations by establishing meaningful groupings based on the correlations of specific behaviors.
90,000 items on Afghanistan, 291,000 on Iraq - and another 251,000 cables. Managing the Wikileaks release is just one of the huge data journalism projects the Guardian's data team has embarked on. This talk will look at how journalists can make sense of data, get stories out of it and our role in supplying open data to the world.
Developing a social network map is fundamental to comprehensively understanding a person. Social networks are dynamic and better derived from real-world data than static configurations. However, the vast majority of this real-world data is unstructured. This presentation will show how Synthesys uses very large scale unstructured data to create social network maps for reporting and further analysis.
Many of the tools Google created to store, query, analyze, visualize data are exposed to external developers. This talk will give you an overview of Google services for Data Crunchers: Google Storage for developers, BigQuery, Machine Learning API, App Engine, Visualization API.
This tutorial will explain MapReduce and how to develop big data applications in Java and high level languages such as Pig and Hive SQL. Using examples it will cover how to prototype, debug, monitor, test and optimize big data applications for Hadoop. Attendees will get hands-on instruction and will leave with a solid understanding of how to analyze data on Hadoop clusters and practical examples.
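The map, shuffle, and reduce phases that MapReduce is built on can be sketched in pure Python with a word count, the standard introductory example. Hadoop distributes these same phases across a cluster; this single-process sketch only illustrates the model:

```python
from itertools import groupby
from operator import itemgetter

# Mapper: emit (word, 1) for every word in a line of input.
def mapper(line):
    for word in line.split():
        yield (word, 1)

# Shuffle: sort pairs by key and group them, as the framework does
# between the map and reduce phases.
def shuffle(pairs):
    return groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))

# Reducer: sum the counts for one key's group.
def reducer(word, counts):
    return (word, sum(c for _, c in counts))

lines = ["big data big insight", "big deal"]
pairs = [p for line in lines for p in mapper(line)]
result = dict(reducer(w, g) for w, g in shuffle(pairs))
print(result)  # {'big': 3, 'data': 1, 'deal': 1, 'insight': 1}
```

Pig and Hive generate this same map/shuffle/reduce plan from higher-level scripts and SQL, which is why they are covered alongside Java in the tutorial.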
Our talk summarizes some recent thinking in the field of vertical search and illustrates it in the context of a new version of Westlaw, called WestlawNext. We argue that getting the right allocation of function between person and machine is the key to making specialist content more findable and search results more understandable.
Much of the world's most valuable information is trapped in digital sand, siloed in servers scattered around the globe. In this talk I'll discuss the promise of big data, which will come to pass in the coming decade, driven by advances in three principal areas: sensor networks, cloud computing, and machine learning.
A discussion of Big Data approaches to analysis problems in marketing, forecasting, academia and enterprise computing. We focus on practices to enhance collaboration and employ rich statistical methods: a Magnetic, Agile and Deep (MAD) approach to analytics. While the approach is language-agnostic, we show that sophisticated statistics can be easily scaled in traditional environments like SQL.
Data, goals, systems and technologies are in a constant state of flux, and organizations that succeed will embrace this change. Cerrio is a visual platform for building and maintaining adaptable distributed systems. The platform scales naturally for real-time and big data. Business logic is altered at runtime and maintenance done visually. Big data is fast data, and fast data is easily managed.
"Water, water everywhere, nor any drop to drink." - Rime of the Ancient Mariner. People feel overwhelmed with data. But the problem is not with the amount of data. The problem is that data is not presented in a form that people can understand and use. Juice Analytics will present and demonstrate proven techniques to design information applications to present data in enjoyable and rewarding ways.
Data modeling competitions allow companies and researchers to post a problem and have it scrutinised by the world's best data scientists. By exposing a problem to a wide audience, competitions are a great way to get the most out of a dataset. In just a few months, Kaggle's competitions have helped to progress the state of the art in chess ratings and HIV research.
Virtual worlds are a goldmine of untapped insights, even for predicting physical behaviors. Not only will we share PARC findings and methods developed to extract key data from online games, but more importantly, we'll discuss how social scientists converted and processed raw behavioral metrics into meaningful psychological variables that can be applied to a broad spectrum of business applications.
Windows Azure Marketplace includes data, imagery, and real-time web services from leading commercial data providers and authoritative public data sources. Customers have access to datasets such as demographic, environmental, financial, retail, weather and sports.
Certain recent academic developments in large data have immediate and sweeping applications in industry. They offer forward-thinking businesses the opportunity to achieve technical competitive advantages. However, these little-known techniques have not been discussed outside academia–until now. What if you knew about important new large data techniques that your competition doesn't yet know about?
Moderated by: Alistair Croll
Today's web analyst has moved far beyond funnels and visitors. Automated systems decide who gets what content, and language parsing tries to distill sentiment from millions of online interactions.
This panel will look at where web analytics is headed, and how new algorithms and approaches are yielding fresh insights into online commerce.
Sharing data on the Web comes with a tough trade-off between minimalism and enabling creative new scenarios. This session will explore Web APIs that focus on exposing data and let clients decide how to use it. We'll share our experiences while designing the Open Data Protocol (odata.org), what we found to be great and terrible ideas and what we hear from folks running OData Web APIs.
Edd Dumbill and Alistair Croll welcome you to Strata.
Alistair Croll and Edd Dumbill welcome you back to Strata.
Join OpenStack contributors, users, and backers immediately after Strata ends - to celebrate the second release of the fastest-growing open source cloud platform, code-named Bexar. There will be a community Meetup with speakers from 6:00 - 7:00 pm, followed by an open bar from 7:00 - 9:00 pm.
OpenTSDB is an open-source, distributed time series database designed to monitor large clusters of commodity machines at an unprecedented level of granularity. OpenTSDB allows operation teams to keep track of all the metrics exposed by operating systems, applications and network equipment, and makes the data easily accessible.
If you are a leading enterprise or web company, then two things are almost certainly true. Data is the lifeblood of your business. And you face an ever-increasing need to scale your applications and data services.
ParStream has just won the One to Watch award from Nvidia & Adobe for presenting the first analytical database to exploit the potential of GPUs. Thanks to its novel indexing technology enabling efficient parallel processing, ParStream delivers ultra-fast response times on Big Data.
Weather is everywhere, and due to the pervasive nature of weather data, its importance in our daily lives is sometimes lost. WeatherTrends360 is now available for users to analyze their own personal weather calendar, for any point on the surface of the Earth, up to one year out.
Moderated by: Alistair Croll
The convergence of big, open data, ubicomp, and new interfaces will change the way humans work, play, learn, and love. It's a slow transformation that happens one tweet, one blog, and one game at a time -- but it's also an inexorable road towards the singularity.
In this panel discussion, we'll look beyond the bytes and algorithms to think about humanity awash in a sea of information.
Moderated by: Drew Conway
Data doesn't just show us the past—it can help predict the future. Several new firms harvest massive amounts of open data, trying to anticipate everything from the right ad placement to the next terrorist attack. In this session, we bring together the founders of these firms to discuss the technology—and ethics—of looking into the future.
The rise of sensor network data and the expectation for low latency query responses combine to obsolete available databases and storage platforms. We have built a platform for web-scale OLAP and in this talk I will cover how we made our infrastructure capable of real-time update and query performance over hundreds of terabytes of multidimensional data.
Ours is a new era of big behavioral data. Unprecedented business model experimentation is rapidly eroding individual privacy despite rising consumer concerns. Successfully managing privacy is a key differentiator for services providers. In the B2B space, the stakes to get privacy right are even higher. This talk will discuss the implications of privacy in order to succeed in the B2B space.
If you're a new startup looking for investment, or a team at a large company seeking the green light for a new product, nothing convinces like real running code. But how do you solve the chicken-and-egg problem of filling your early prototype with real data? We'll discuss how to use open datasets and public web APIs as a proxy for the final product while you're still in the development stage.
Qwerly has built the web's largest database consolidating people's social media profiles: the "DNS for people." We are able to resolve unique identifiers, such as a Twitter handle or email address, to a person's other social media presences, e.g. their Facebook, LinkedIn, Foursquare, Plancast, Quora, etc. Qwerly provides a commercial API for businesses that wish to incorporate its information.
Open access to information promises to connect citizens to their representatives, improving government transparency and helping educators transform the classroom.
In this real-world panel, practitioners in government and the public sector will give us a glimpse into how data and new interfaces are transforming how we teach and govern.
Join practitioners from a range of industries to learn how they're putting new tools and massive data sets to work. We'll hear how music, geophysics, and the legal system are all changing by putting huge, rich information into the hands of business.
Moderated by: Andrew Odewahn
Information is changing healthcare forever. From the study of epidemics, to machine learning that can improve diagnosis, to the sequencing of the human genome, we're doing the math of life itself.
This panel of practitioners will show us what they're doing in healthcare, pharmaceuticals, and genomics, and how it will change the way we discover, treat, and eliminate disease.
Can machines help us make better decisions? In this panel, real-world practitioners from the travel, finance, and energy industries give us an inside look at how they're applying machine learning to their industries, optimizing the use of resources and helping with decision support.
Hadoop and HBase make it easy to store terabytes of data, but how do you scale your search mechanism to sift through these mountains of bits and retrieve large result sets in a matter of milliseconds? Careful use of the Solr search server, based on Lucene, made these requirements come to life in our production environment. Come learn how we query terabytes of data in a highly available system.
Most analytics systems rely on large offline computations, which means results come in hours or days behind. Twitter is all about realtime, but with over 160 million users producing over 90 million tweets per day, we need realtime analytics that scale horizontally. This talk discusses the development of that infrastructure, as well as the products we are beginning to build on top of it.
New technologies are driving a new era of global collaboration among scientists and researchers. Digital scholarship, the ability to create, collect, publish and collaborate in new digital mediums, is driving the exponential growth of data related to scholarly research. This talk will highlight evolving strategies used to appraise and predict success of institutions and researchers.
Retailers and their suppliers have always operated on the cutting edge of data science. In fact, this industry is responsible for many of the technology advances that have contributed to the exponential growth of data, analytics, and related technology. This session covers the history of data science in retail, current trends, and explores future directions in the “big” data age.
Riak Core is a general implementation of a distributed systems model, enabling you to build a customized, scalable, highly available distributed system without too huge an investment. Justin will explain that model, its history, and how it can be used to build new data systems.
With growing amounts of digital data at the fingertips of software developers the need for a scalable, easy to use framework is tremendous. This talk introduces Apache Mahout - a project with the goal of implementing scalable machine learning algorithms for the masses.
Singly is sponsoring the Locker Project, an open source community effort to build a platform that helps anyone collect all of their social and personal data from wherever it is online back into one secure place. We then help developers create really awesome utilities that everyone can use atop their own data.
While the majority of charts are designed to handle a variety of data, there is a certain novelty in presenting data in a very succinct way. By designing a presentation method restricted to specific data points, we can realize an economy of space and interface.
Social media websites are producing ginormous amounts of data and creating a massive demand for insight related to users, how they engage with features, where they are coming from, why they are visiting, what excites them, and so forth.
The Gaggle framework enables data exchange between independently developed bioinformatics software for exploratory analysis of biological data.
Join us in the Sponsor Pavilion immediately following sessions on Wednesday, February 2. Have a drink and some delectable nibbles, network with other Strata attendees, and visit our Sponsors who are at the leading edge of the data conversation.
At the inaugural edition of Strata, we’re hosting our first ever Startup Showcase. Highlighting the startup ecosystem’s creativity and variety, the Showcase will give you a chance to get your company in front of a global community of leaders in the technology industry — as well as potential investors.
As part of Strata, we'll be holding a Science Fair. It's a place to demonstrate cutting-edge technologies and cool toys — the more hands-on, the better. Whether it's software that breaks the rules of computing, a compelling new interface, or a prototype that pushes the envelope, we want to see it.
This session explores how to get more done, faster with high-performance Map/Reduce and expand the universe of Hadoop possibilities with tools to speed and simplify development and deployment of analytic applications.
From customer behaviors and usage statistics to security postures and operational analytics, Splunk's ability to make sense of all types of machine data, structured or unstructured, and mash it up with other business data provides complete real-time visibility and operational intelligence. This tutorial demos a new approach for analyzing your organization's petabytes of data to derive real-time insights.
Digital Reasoning is making the tools and power of Synthesys, the leading Entity Oriented Analytics Solution, available to everyone. This brief demonstration will explain the benefits of the service as well as showing how the service is used for open cloud-scale text mining. Strata Conference marks the introduction of this service, and all attendees will receive priority access during our BETA period.
Interactive visualizations have become the new media for telling stories online. This session will focus on going from a good visualization to a great visualization by focusing on organization, user interface, and formatting. You should expect to leave this session confident in your ability to consistently create excellent interactive visuals.
Data competitions come of age: from movie recommendations to life and death. Possibly the biggest news at Strataconf is Heritage Provider Network's $3 million predictive modeling prize - the biggest data mining competition ever. It requires data scientists to build algorithms that predict who will go to hospital in the next year, so that preventive action can be taken.
With Big Data comes Big Promises. Mine the blogosphere and discover the secret of eternal wealth. Feast on the Twitter feeds for the wisdom of the ages. We have visited this land in the past, naming it data warehousing and business intelligence. Will we learn the lessons of history? Can we do it differently today? Let’s take this present moment to review the past and imagine the future.
Spire, the database from Drawn to Scale, is built from the ground up to handle "big data" without sacrificing Real-Time Queries, Fulltext Search, and simple deployment. Unlike other distributed DBs, Spire's design allows you to scale both the number of users and amount of data.
For more than 20 years now, data warehousing has put manners on unruly enterprise data. Yet, physics tells us that disorder inexorably increases unless we endlessly fight it. As information volumes and types explode into chaos, is it time to declare the warehouse dead? Or we could move from classical to quantum physics and create a new information architecture. It’s time to make some new choices…
Big data and analytics have developed a mythology rooted in underlying assumptions. We need to ignore these myths and think clearly about how organizations use data, which means understanding how people use information and make decisions.
Birds of a Feather (BoF) sessions provide face-to-face exposure to those interested in the same projects and concepts. BoFs can be organized for individual projects or broader topics (best practices, open data, standards). BoF topics are entirely up to you. Thursday's Lunchtime BoF sessions will happen on the hotel side of the Hyatt Regency, Mezzanine Level.
This talk demonstrates how an eclectic blend of storage, analysis, and visualization techniques can be used to gain serious insight from Twitter data, but also to answer fun questions such as "What do Justin Bieber and the Tea Party have (and not have) in common?"
Corporations and government agencies are using simulations connected to live data feeds to explore how their decisions affect customers (and citizens) and may lead to unexpected outcomes. In this demo I'll show how three organizations are doing it using an online simulation development and hosting tool called Forio Simulate.
We'll demonstrate technology for viewing extremely large data sets of time-series or real-time data, as well as other tabular data sets. We'll look at data sets related to retail analysis, crime, weather, server performance metrics, real-time sensors, IT security, surveys, cheating teachers and more.
awe.sm is a platform for understanding the value social media drives to your business. VIPLi.st is a standalone project built on top of awe.sm's APIs, using data from our customer Plancast to map the social pathways by which attendees discover and sign up for Plancast events.
Moderated by: Alistair Croll
"Many hands make light work", as the saying goes. That's true when thousands of people can collaborate on a data set. In this session, we'll look at collective interfaces that allow many distributed users to examine and share data with one another, and how that's changing traditional desktop visualization tools.
Birds of a Feather (BoF) sessions provide face-to-face exposure to those interested in the same projects and concepts. BoFs can be organized for individual projects or broader topics (best practices, open data, standards). BoF topics are entirely up to you. Wednesday's Lunchtime BoF sessions will happen on the hotel side of the Hyatt Regency, Mezzanine Level.
Data science is evolving rapidly. I'll talk about our current and near-future technical and philosophical challenges, including realtime vs. non-realtime analysis and streams of data vs. traditional databases, as well as some of the opportunities we have to learn amazing things about the world through our data, and what this means for those of us immersed in working with it.
Data integration and viz technology have given rise to an appetite for government data–the Gov 2.0 movement. Do government agencies have good data? Sort of: I believe that an understanding of data limitations has gotten short shrift in the drive to develop the next app. I'll discuss why a knowledge of the complexities of government data is crucial to building quality decision-making tools.
Looking up a word in a dictionary is so 20th century. Wordnik.com shows you how big data and Wordnik's word graph model can offer new perspectives on what words mean.
To many people, Big Data means Open Data: social graphs, voting records, weather patterns, and more. But who owns data? Most of our laws were written for atoms, not bits; they're woefully out of date in an information age. When you share data, does it become more or less valuable? If someone adds to your data, is it still yours? This panel will tackle the gray area of data ownership.
The ability to collect, crunch, act upon, and share huge amounts of data disrupts nearly every industry, tearing down barriers to entry and creating entirely new businesses.
This panel of investors will discuss where they see the opportunities in the Big Data industry, and how they think about the value of new ventures in the space.
Companies must spend their money and time on the right software initiatives. With exploding volumes of critical data, gaining new insight and mastery over business operations demands new investments in BI at multiple levels. Ed will show a proven path for avoiding exorbitant database software fees and shifting that spend to areas like BI, where you can realize a stronger ROI.
Big Data and predictive analytics can deliver incredible insight that can be used for purposes both good and not so good. Drawing on real-world examples, this session will examine the fine line between competitive advantage and bad behavior, and the implications for a complex cast of stakeholders. Let’s begin a dialog on ethics now instead of waiting for our first major crisis.
Asthmapolis will demonstrate devices that use GPS to track the time and location where people use their asthma inhalers, and a variety of interfaces to help patients, physicians and public health agencies put that information to work to improve asthma control.
The world's available scientific and factual data is growing at an alarming pace, but how do we use all this information? How do we incorporate it into our decision-making process? Joshua Martell will give an inside look into how Wolfram|Alpha works: what it takes to make data "computable", understand user input, and present meaningful results.
A defining characteristic of modern life is the incredible proliferation of digital information. The Economist estimates that the amount of information created each year is growing at a 60% compounded rate. According to the Harvard Business Review, we humans generated more data last year than in all of previous human history.
YourSports is mapping the sports graph: the relationship a fan has to the teams they root for, root against, and just watch; the relationship a fan has to the games they attend, play in, coach, and create content for; and the relationship fans have to each other.
In the world of sports, whoever controls this data shapes the future of the industry. YourSports aims to do just that.