
Big data

Big data is a term for data sets that are so large or complex that traditional data processing application software is inadequate to deal with them. Challenges include capture, storage, analysis, data curation, search, sharing, transfer, visualization, querying, updating and information privacy. The term “big data” often refers simply to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from data, and seldom to a particular size of data set. “There is little doubt that the quantities of data now available are indeed large, but that’s not the most relevant characteristic of this new data ecosystem.”[2] Analysis of data sets can find new correlations to “spot business trends, prevent diseases, combat crime and so on.”[3] Scientists, business executives, medical practitioners, advertisers and governments alike regularly meet difficulties with large data sets in areas including Internet search, finance, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics,[4] connectomics, complex physics simulations, biology and environmental research.[5]

Data sets grow rapidly – in part because they are increasingly gathered by cheap and numerous information-sensing mobile devices, aerial sensors (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks.[6][7] The world’s technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s;[8] as of 2012, every day 2.5 exabytes (2.5×10¹⁸ bytes) of data are generated.[9] One question for large enterprises is determining who should own big-data initiatives that affect the entire organization.[10]
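
As a rough back-of-the-envelope illustration of the figures quoted above, the short Python sketch below works out what “doubling every 40 months” implies over a decade and restates the 2.5-exabytes-per-day figure in terabytes; the ten-year horizon is an illustrative choice, not a number from the source.

    # Back-of-the-envelope check of the figures quoted above: doubling every
    # 40 months, and 2.5 exabytes (2.5x10^18 bytes) generated per day as of 2012.
    # The ten-year horizon below is an illustrative assumption.
    DOUBLING_PERIOD_MONTHS = 40
    DAILY_VOLUME_2012_BYTES = 2.5e18  # 2.5 exabytes

    def growth_factor(months: float) -> float:
        """Multiplier implied by one doubling every DOUBLING_PERIOD_MONTHS."""
        return 2 ** (months / DOUBLING_PERIOD_MONTHS)

    # Doubling every 40 months over a decade (120 months) means 2**3 = 8x capacity.
    print(f"growth over 10 years: x{growth_factor(120):.0f}")

    # 2.5 exabytes per day expressed in terabytes (1 TB = 10^12 bytes).
    print(f"daily volume in 2012: {DAILY_VOLUME_2012_BYTES / 1e12:,.0f} TB")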

Relational database management systems and desktop statistics and visualization packages often have difficulty handling big data. The work may require “massively parallel software running on tens, hundreds, or even thousands of servers”.[11] What counts as “big data” varies depending on the capabilities of the users and their tools, and expanding capabilities make big data a moving target. “For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration.”[12]
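
To make the “massively parallel” idea concrete at toy scale, the hypothetical Python sketch below shows the split-process-merge pattern such software relies on. The input records, chunk size and worker count are arbitrary assumptions; real deployments distribute this kind of map/reduce computation across many machines with frameworks such as Hadoop or Spark rather than a single process pool.

    # Toy, single-machine stand-in for the "massively parallel" pattern described
    # above: split the data into chunks, process the chunks concurrently (map),
    # then combine the partial results (reduce).
    from collections import Counter
    from multiprocessing import Pool

    def count_words(chunk):
        """Map step: word frequencies for one chunk of records."""
        counts = Counter()
        for record in chunk:
            counts.update(record.split())
        return counts

    if __name__ == "__main__":
        # Hypothetical input: pretend each string is one log record.
        records = ["error disk full", "info job done", "error disk full"] * 1000
        chunks = [records[i:i + 500] for i in range(0, len(records), 500)]

        with Pool(processes=4) as pool:
            partial = pool.map(count_words, chunks)  # map runs in parallel workers

        totals = sum(partial, Counter())             # reduce: merge partial counts
        print(totals.most_common(3))

The split/merge shape is the part that scales; distributing the chunks over hundreds or thousands of servers changes the scheduling and data movement, not the logic.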

Definition

Figure: Visualization of daily Wikipedia edits, created by IBM. At multiple terabytes in size, the text and images of Wikipedia are an example of big data.

The term has been in use since the 1990s, with some giving John Mashey credit for coining it, or at least popularizing it.[13][14] Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process the data within a tolerable elapsed time.[15] Big data “size” is a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data.[16] Big data requires a set of techniques and technologies with new forms of integration to reveal insights from data sets that are diverse, complex, and of massive scale.[17]

In a 2001 research report[18] and related lectures, META Group (now Gartner) defined data growth challenges and opportunities as being three-dimensional, i.e. increasing volume (amount of data), velocity (speed of data in and out), and variety (range of data types and sources). Gartner, and now much of the industry, continue to use this “3Vs” model for describing big data.[19] In 2012, Gartner updated its definition as follows: “Big data is high volume, high velocity, and/or high variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization.” Gartner’s definition of the 3Vs is still widely used, and agrees with a consensual definition stating that “Big Data represents the Information assets characterized by such a High Volume, Velocity and Variety to require specific Technology and Analytical Methods for its transformation into Value”.[20] Additionally, some organizations have added a fourth V, “veracity”, to describe it,[21] a revision challenged by some industry authorities.[22] The 3Vs have been expanded to other complementary characteristics of big data:[23][24]

  • Volume: big data doesn’t sample; it just observes and tracks what happens
  • Velocity: big data is often available in real-time
  • Variety: big data draws from text, images, audio, video; plus it completes missing pieces through data fusion
  • Machine learning: big data often doesn’t ask why and simply detects patterns[25]
  • Digital footprint: big data is often a cost-free byproduct of digital interaction[24][26]

The growing maturity of the concept more starkly delineates the difference between big data and Business Intelligence:[27]

  • Business Intelligence uses descriptive statistics on data with high information density to measure things, detect trends, and so on.
  • Big data uses inductive statistics and concepts from nonlinear system identification[28] to infer laws (regressions, nonlinear relationships, and causal effects) from large sets of data with low information density,[29] in order to reveal relationships and dependencies or to predict outcomes and behaviors[28][30] (a sketch of this approach follows the list).
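
As a minimal, hypothetical illustration of this inductive, model-fitting approach (in contrast to descriptive summaries), the Python sketch below infers a nonlinear relationship from noisy, low-information-density observations and then uses it to predict unseen outcomes. The simulated data and the quadratic form are assumptions chosen purely for the example.

    # Minimal sketch of the inductive approach: fit a model to noisy observations
    # with low information density, then predict outcomes at unseen points.
    # The simulated data and the quadratic form are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated observations: a weak nonlinear signal buried in heavy noise.
    x = rng.uniform(0, 10, size=100_000)
    y = 0.5 * x**2 - 2.0 * x + rng.normal(scale=25.0, size=x.size)

    # Inductive step: infer the underlying relationship from the data alone.
    coeffs = np.polyfit(x, y, deg=2)        # least-squares quadratic fit
    model = np.poly1d(coeffs)

    # Predictive step: estimate outcomes at inputs that were never observed.
    print("fitted coefficients:", np.round(coeffs, 2))   # close to [0.5, -2.0, 0.0]
    print("prediction at x = 12:", round(float(model(12.0)), 1))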

Links:

https://en.wikipedia.org/wiki/Big_data

http://www.webopedia.com/TERM/B/big_data.html

https://www.sas.com/en_us/insights/big-data/what-is-big-data.html

https://www.ibm.com/big-data/us/en/
