The text below is an excerpt from the original article by Giovanna Mingarelli, published June 6, 2014 on The Huffington Post.
Big Data* is creating lots of buzz these days — especially in the humanitarian sector. In the last few years, governments, businesses, humanitarian organizations and citizens have been using Big Data to accomplish feats ranging from analyzing Google search queries to predict flu outbreaks, to helping the U.S. government better understand the needs of people impacted by natural disasters, like Hurricane Sandy.
In this regard, I was fortunate to participate in a recent discussion at George Washington University in D.C. hosted by the World Economic Forum’s Global Agenda Councils on Catastrophic Risks, Future of the Internet and Data Driven Development, and the United Nations’ Office for the Coordination of Humanitarian Affairs (UN OCHA), which together examined: “The Role of Big Data In Catastrophic Resilience.”
Participants at this event explored the use of data available today and how it could help decision makers prevent, prepare for and recover from catastrophic events.
Three key trends emerged from these discussions:
1) The Relevance of Data
In his opening talk of the day, former Secretary of Homeland Security, The Honorable Tom Ridge, asked: “Are we to be data rich and knowledge poor?” In a world where zettabytes** of information are being produced by our cell phones, credit cards, computers, homes — and even the sensor-equipped cars, trains, and buildings that make up our cities — the problem isn’t the quantity of data, but its relevance.