The best big data insight comes when an organization looks at itself from the inside out. Taking on the challenge of making your own network as transparent and clean-running as possible is the best grounding for real business intelligence insights.
Robert Plant, Associate Professor, School of Business Administration, University of Miami, 5/22/2013
Most of us think we have a big data set related to our email storage; I personally have about six gigabytes of email. The business school where I reside has about 3 terabytes of live email storage, containing emails going back to around 1995, when we moved to Windows NT. However, this is just a drop in the proverbial email ocean when compared to some ...
Ariella Brown, Technology Blogger, 5/22/2013
Have you ever asked your doctors why they prescribed a particular brand of pill? For surprisingly many, the honest answer is that several pills all have the same effect, but the one prescribed is made by the company that gives them thousands of dollars in fees and gifts.
James M. Connolly, US Correspondent, 5/21/2013
If you look at the information that could be associated with a single household system or appliance, the data store isn't huge, but viewed in the aggregate -- all systems in all houses in all towns -- the impact on energy efficiency can be tremendous.
Michael Ross, Chief Scientist, eCommera, 5/20/2013
With the explosion in multichannel commerce, retailers and brands already know they need to respond faster than ever to changes in consumer behavior. And in the push to drive innovation and find new cost savings, more businesses are recognizing the untapped potential of their data.
James M. Connolly, US Correspondent, 5/17/2013
College students get queasy when they think of their institution of higher learning as being a business with budgets and management mandates. After all, the classroom, the dorms, and the campus are at the root of the word collegial.
Date: 4/25/2013
In this one-hour webinar, you'll learn:
• Best practices for integrating your data into one manageable and results-oriented store
• What different types of data (log files, sensor readings, video, images) will demand of your storage and query capabilities
• Approaches to convincing the wider business that data needs to come out of its silos and be unified for maximum use
At The Big Data Show, we caught up with James Robinson of OpenSignal, who encourages a team approach to visualizations. One reason: it sometimes takes a graphic designer or project manager to push the technically minded visualization producer to go the extra distance.
Sam Zindel, Data Strategist at iCrossing Digital Marketing, filled us in on how supermarket delivery giant Ocado uses big data to identify vegetarians on its website and serve them specific content.
This was part of Sam's talk at The Big Data Show about putting the customer at the heart of digital marketing, as well as making the most of the data you already have.
ETL, which stands for Extract, Transform, and Load, is central to a lot of big data work. But what does that mean in practice? Let's explain it with an example:
Lauren is a data scientist working at a university, looking to bring together different datasets to make sure students are offered courses which best suit their profiles. To do this, she needs to pull data from lots of places into a centralized data warehouse.
First, she needs to extract data from the original sources, which can include existing university databases, as well as web crawling for social media information on students.
Next, Lauren has to transform the extracted data into a form the centralized data warehouse can use. For this, she can apply a series of rules or functions to get the data into shape -- for instance, converting dates of birth into ages, deriving aggregated values, deduplicating records, or joining data from multiple sources, depending on what the final data warehouse needs.
Finally, Lauren can load this data into the data warehouse, giving her a way to gain new insight on students by mining for patterns in this collected data.
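To make those three steps concrete, here's a minimal plain-Python sketch of a pipeline like Lauren's. The record layout, field names, and transformation rules are all invented for illustration; a real warehouse load would go through proper ETL tooling rather than a Python list.

```python
from datetime import date

# Toy source records -- stand-ins for rows extracted from university databases.
raw_records = [
    {"student_id": 1, "name": "Ana", "dob": "2001-03-14"},
    {"student_id": 2, "name": "Ben", "dob": "1999-11-02"},
    {"student_id": 1, "name": "Ana", "dob": "2001-03-14"},  # duplicate row
]

def transform(records):
    """Deduplicate on student_id and convert date of birth to age."""
    seen, cleaned = set(), []
    for rec in records:
        if rec["student_id"] in seen:
            continue  # drop duplicate records
        seen.add(rec["student_id"])
        y, m, d = map(int, rec["dob"].split("-"))
        today = date.today()
        age = today.year - y - ((today.month, today.day) < (m, d))
        cleaned.append({"student_id": rec["student_id"],
                        "name": rec["name"], "age": age})
    return cleaned

warehouse = []                               # stand-in for the data warehouse
warehouse.extend(transform(raw_records))     # the "load" step
print(warehouse)
```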
At last week's Big Data Show we were lucky enough to speak to Lauren Walker, Sales Leader at IBM Big Data Solutions, who gave us a great message from her real-time analytics talk: Babies, Brains, and Buses.
This case study focused on big data's ability to improve the survival rate of premature babies by combining machine information and human content in real time.
On the opening day of the Big Data Show, Mike Cornwell, CEO of The IDM, was generous enough to give us some time to discuss his afternoon panel session. He also offered a word of caution on the state of marketing data. We're all getting excited about big data, but it seems most people still can't deal with their small data in the best possible manner.
"Just knowing enough to find some insight from information and using it intelligently for marketing still seems to be beyond a lot of organizations," he said.
Does this resonate with your business? Have you got the small data figured out before you invest time in de-siloing and bringing more information together?
Big data is awash with acronyms at the moment, none more widely used than HDFS. Let's cut to the chase... it stands for Hadoop Distributed File System.
This is the system of distributing files that allows Hadoop to work on huge data sets at speed. It splits files into blocks, spreads those blocks across different servers, and stores duplicate copies of each block on separate machines.
Let's see why with an example.
Sarianne works in the financial markets, and she runs a lot of predictive models to keep the risk on her investments to a minimum.
Using HDFS, her queries through Hadoop can run quickly because the data blocks are stored separately -- meaning all the pieces of the computation can run at once, rather than queuing up behind each other.
As an added benefit, if one server fails (as one is bound to, given the number of servers and disk drives needed to run big data projects), it won't stop Sarianne's models from pulling the data they need: because HDFS duplicated those blocks onto other machines, Hadoop can still return Sarianne's results without interruption.
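Here's a toy Python sketch of that idea -- splitting data into blocks, placing copies on more than one "server," and reading around a failure. It's an analogy only: the tiny block size, the replica count, and the dict-based servers are invented stand-ins, not the real HDFS machinery.

```python
import itertools

BLOCK_SIZE = 4   # bytes per block here; real HDFS blocks are 64-128 MB
REPLICAS = 2     # copies of each block; HDFS typically keeps three

data = b"ABCDEFGHIJKLMNOP"
blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

# Place each block on REPLICAS distinct "servers" (plain dicts here).
servers = {name: {} for name in ("s1", "s2", "s3")}
placement = itertools.cycle(servers)
for idx, block in enumerate(blocks):
    for _ in range(REPLICAS):
        servers[next(placement)][idx] = block

def read(block_id, dead=()):
    """Read a block from any live server that holds a copy."""
    for name, store in servers.items():
        if name not in dead and block_id in store:
            return store[block_id]
    raise IOError("all replicas lost")

# Even with server s1 down, every block is still readable.
print(b"".join(read(i, dead={"s1"}) for i in range(len(blocks))))
```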
Continuing our series of interviews with businesses leveraging big data, we talk to James Gill, CEO of GoSquared.
GoSquared offers real-time web analytics, using big data technologies to surface the analytical data that counts. Marketing managers and IT departments benefit from GoSquared's ability to pick out the most actionable insights as they happen.
Pig simplifies the process of running analytics through Hadoop on your big data sets.
Like the animal, Pig is not a fussy eater, getting its name from its ability to crunch through data no matter what form it takes. It acts as a scripting interface to Hadoop, meaning a lack of MapReduce programming experience won't hold you back.
Example: Harvey works in a government office, looking to formulate new solutions for his city's parking problems. He knows how to use data, but writing his own map and reduce functions is a little beyond him.
Luckily, he's been set up with access to the databases through Pig, meaning he can draw on sources like parking ticket records and population density maps. Taking advantage of Pig's eat-anything attitude, he can also mine topics from a call for email suggestions his department sent to local residents, as well as sensor information about the amount of traffic on the roads. In spite of his limited programming capabilities, Harvey can use Pig to query these data sets and sketch out some draft suggestions to alleviate the local parking problems.
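Real Pig scripts are written in Pig Latin rather than Python, but to keep this series to a single language, here's a plain-Python analogue of the kind of grouped count a short Pig Latin GROUP/COUNT statement would express over Harvey's ticket records. The streets, fields, and numbers are invented.

```python
from collections import Counter

# Invented sample data standing in for Harvey's parking-ticket records.
tickets = [
    {"street": "Elm St",  "hour": 9},
    {"street": "Elm St",  "hour": 9},
    {"street": "Oak Ave", "hour": 17},
    {"street": "Elm St",  "hour": 17},
]

# GROUP tickets BY (street, hour), then COUNT each group -- the kind of
# aggregation a Pig script would express in a line or two over the full set.
hotspots = Counter((t["street"], t["hour"]) for t in tickets)
for (street, hour), n in hotspots.most_common():
    print(f"{street} at {hour}:00 -- {n} tickets")
```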
In the first of a series of interviews with business leaders who leverage big data, we talk to James Robinson, CTO and co-founder of OpenSignal.
OpenSignal combines big data technologies and sensor data from mobile phones to give insight to both mobile consumers and telecommunications giants. Robinson is also a contributing writer on Big Data Republic.
Hadoop is the open-source software framework that quickly became almost synonymous with big data. But what does it actually do?
Whereas traditional data queries run on a single server, Hadoop lets you run them across a large number of machines. By spreading the computational load across many servers, it enables you to deal with big data in a timely fashion.
Tobias runs an online DVD store -- and he wants to increase sales by recommending products to customers as they check out. But he doesn't just want to recommend bestsellers, he wants a smart system that recommends based on the buyer's demographics and taste.
That's where Hadoop helps out. It enables Tobias to spot patterns across all of his customers' data -- age, sex, genre preference, actor preference, period of production, and many other defining elements. He can access this information quickly because the different parts of the search can be carried out simultaneously on separate machines, instead of having to run one after another on a single machine.
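As a loose, single-machine analogy for that fan-out, here's a toy Python sketch using the standard multiprocessing module: the work of scoring customers is split across worker processes and the results merged at the end. The customer data and the genre-overlap scoring rule are invented, and a real Hadoop job would run across a cluster, not local processes.

```python
from multiprocessing import Pool

# Invented customer genre histories.
customers = {
    "alice": {"sci-fi", "thriller"},
    "bob":   {"comedy", "romance"},
    "carol": {"sci-fi", "horror"},
    "dave":  {"thriller", "sci-fi"},
}

def score(args):
    """Count genre overlap between the shopper and one customer."""
    name, genres, target = args
    return name, len(genres & target)

if __name__ == "__main__":
    target = {"sci-fi", "thriller"}  # the shopper at checkout
    tasks = [(n, g, target) for n, g in customers.items()]
    with Pool(2) as pool:            # two workers standing in for servers
        scores = pool.map(score, tasks)
    # Recommend from the most similar customers' purchases.
    print(sorted(scores, key=lambda s: -s[1]))
```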
I want to tackle Hadoop, but before we get there, we're going to need to explore MapReduce. MapReduce is a programming model for processing large datasets, and the clue to its function is in its name.
When you want to pull certain information from your datasets, it "maps" out the relevant information for your query.
Then it "reduces" the information down, sorts it based on any rules you've applied, and gives you just the data you were after.
Virginia is a medical researcher looking to carry out research on diabetes patients. For the purposes of her study, she wants to see any geographical concentrations of diabetes patients who are male, between the ages of 40 and 50, and who smoke.
The map step in the MapReduce model finds the records that fit Virginia's criteria.
Then comes the reduce step -- aggregating the geographical data of these records and producing an ordered list of the cities with the highest populations of the defined type. This simple process has allowed Virginia to identify areas of concentration for further study.
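Here's Virginia's query as a toy, single-machine rendition of the map and reduce steps in plain Python. The patient records and field names are invented; a real MapReduce job would distribute both phases across a cluster.

```python
from collections import Counter

# Invented patient records.
patients = [
    {"city": "Leeds",  "sex": "M", "age": 44, "smoker": True,  "diabetic": True},
    {"city": "Leeds",  "sex": "M", "age": 47, "smoker": True,  "diabetic": True},
    {"city": "Oxford", "sex": "F", "age": 45, "smoker": True,  "diabetic": True},
    {"city": "Oxford", "sex": "M", "age": 42, "smoker": False, "diabetic": True},
]

def map_phase(record):
    """Emit (city, 1) for each record matching Virginia's criteria."""
    if (record["diabetic"] and record["smoker"]
            and record["sex"] == "M" and 40 <= record["age"] <= 50):
        yield record["city"], 1

def reduce_phase(pairs):
    """Sum the counts per city and order highest first."""
    totals = Counter()
    for city, n in pairs:
        totals[city] += n
    return totals.most_common()

pairs = (pair for rec in patients for pair in map_phase(rec))
print(reduce_phase(pairs))   # [('Leeds', 2)]
```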
MapReduce itself is pretty straightforward, but once we start ramping up the amount and types of data used we will need Hadoop's help -- which is where things get a bit more complex.
Today we're going to take a look at the V that allows big data to be immediate and reactive: Velocity.
As well as having to master the sheer volume and variety of information within big data, organizations also have to contend with the speed at which all of this data is generated. Real benefit can be gained by pouncing on this data in real time -- affecting outcomes while they are still forming.
What kind of benefit?
Well, as we've already established, data can take many different forms. How working on this stream of real-time big data will benefit you will depend on your industry. For this example I'll focus on the financial services sector.
Andy is in charge of online security for a big bank, trying to make sure his customers' money is safe. Detecting fraud after the event is of limited use; spotting it as it happens can be priceless. If a malicious computerized attack is launched on Andy's bank, it will generate thousands of events every second -- but Andy has put the right system in place to detect those events by comparing them to the way actual, normal customers behave. And because it happens in real time, alarms go off to let him know.
Many fraudsters will access online banking and go directly to the transfer section of a website without first checking balances and transactions. That clickstream is foreign and unfamiliar to the complex event processing engine, and thus gets flagged.
In this way the bank can stamp down on the illegal activity as it happens, rather than chasing up after the event.
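Here's a toy Python sketch of that clickstream rule. The event names and the rule itself are invented stand-ins for what a real complex event processing engine would do at scale.

```python
# A normal session checks balances before moving money; a session that
# jumps straight to "transfer" looks foreign and gets flagged.
EXPECTED_BEFORE_TRANSFER = {"view_balance", "view_transactions"}

def flag(session):
    """Return True if 'transfer' appears before any expected page."""
    for event in session:
        if event == "transfer":
            return True          # reached transfer without the usual steps
        if event in EXPECTED_BEFORE_TRANSFER:
            return False
    return False

print(flag(["login", "view_balance", "transfer"]))   # False -- normal
print(flag(["login", "transfer"]))                   # True  -- flagged
```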
Today we're going to take a look at the V that makes big data big: Volume.
It's no secret we're inundated with data these days, from mobile devices, machines, social media, transactions, satellites… pretty much everything is throwing data out. And technology has reached a point that allows us to capture and keep everything, too.
Why would we bother?
Because harnessing such a vast quantity of data can reveal information and patterns about people and objects that we otherwise couldn't see.
John runs a tradeshow and wants to make it a genuinely unique experience that all his attendees will want to repeat.
Tim is an attendee at the show, and has been for five years. John's company has been tracking his every data point at the show for that whole time -- from his online activity before the show, to checking in at his hotel, to scanning his ticket as he enters, to the stands and sessions he has attended in previous years, even down to what he has had for his lunch.
Keeping hold of all of this data on Tim means John can present him with a really personalized experience -- a dedicated map and timetable guiding Tim to the content he has a history of making a beeline for, and even a voucher for his favorite vegetarian lunch!
That's a lot of data, but what makes this truly big data is that John's company has been collecting this information from every attendee at every one of its shows -- allowing it to offer this personalized, highly valued experience to everyone.
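As a small sketch of the rollup involved, here's some toy Python that condenses an invented event log for one attendee into the kind of profile that would drive those personalized touches.

```python
from collections import Counter

# Invented event log for one attendee across several years of shows.
events = [
    ("2009", "session", "Cloud 101"), ("2009", "lunch", "veggie wrap"),
    ("2010", "session", "Big Data"),  ("2010", "lunch", "veggie wrap"),
    ("2011", "session", "Big Data"),  ("2011", "lunch", "falafel"),
]

# Roll the raw events up into a profile that drives personalization.
sessions = Counter(what for _, kind, what in events if kind == "session")
lunches = Counter(what for _, kind, what in events if kind == "lunch")

print(sessions.most_common(1))        # [('Big Data', 2)] -- plan his map
print(lunches.most_common(1)[0][0])   # 'veggie wrap'     -- print his voucher
```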
With good management of the volume of data, big data allows organizations to grow and experiment based on previous encounters.
There's plenty of talk about big data's three V's: volume, velocity, and variety. But what exactly do these terms mean?
We're going to take a quick trip through one of these today: Variety.
This is the concept within big data that lets you gain insight by combining data sets that would not traditionally sit together. Linking your traditional analytical data sets with many different types of information opens up a new world of analytical possibilities.
So what's so exciting about this?
Well, it allows you to collate data sets that don't obviously relate to each other. Data experts can then analyze this collated data to spot patterns or create insights you would previously have been blind to. Variety, when tackled well, lets you see revelations in the data your organization already produces.
An example: Judith is a brand manager. She loves her job and is very good at it, but she knows she would benefit from listening even more closely to the voice of her customer.
From traditional financial information, Judith can already see the performance of her brand. It doesn't take a data scientist to spot which week did well and which did badly. But it won't tell her why.
Harnessing variety in data, Judith's data team can relate this data to what's being said about her brand on social media, as well as in free-text fields on customer satisfaction surveys. These disparate sets of data can be brought together, contextualized, and visualized in a way that gives Judith clues as to what her brand has done to influence customer behavior.
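To show how simple the joining step can look in miniature, here's a toy Python sketch keying invented weekly sales figures against invented weekly sentiment scores; in practice the sentiment numbers would themselves be mined from social posts and survey text.

```python
# Invented weekly figures: traditional sales data and social sentiment.
sales     = {"wk1": 120_000, "wk2": 95_000, "wk3": 140_000}
sentiment = {"wk1": 0.40,    "wk2": -0.25,  "wk3": 0.60}

# "Variety" here is just the join: two unrelated data sets keyed by week.
for week in sales:
    trend = "up" if sentiment[week] > 0 else "down"
    print(f"{week}: sales {sales[week]:>7,} | social sentiment {trend}")
```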
Suddenly, Judith has the vision to generate hypotheses on ways to amplify positive results and mitigate negative trends.