Collective Intelligence
Collective intelligence is a shared or group intelligence that emerges from the collaboration and competition of many individuals; it appears in consensus decision making in organisms (including some bacteria) and in computer networks. The term is used in sociobiology, political science, and in the context of mass peer review and crowdsourcing web applications (e.g. Wikipedia). This broader definition involves consensus, social capital, and formalisms such as voting systems, social media, and other means of quantifying mass activity.
Everything from a political party to a public wiki can reasonably be described as this loose form of collective intelligence. The notion of collective intelligence has also been called ‘symbiotic intelligence.’ A precursor of the concept is found in entomologist William Morton Wheeler’s observation that seemingly independent individuals can cooperate so closely as to become indistinguishable from a single organism. Wheeler saw this collaborative process at work in ants, which acted like the cells of a single beast he called a ‘superorganism.’
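The “quantifying mass activity” idea can be made concrete with the two simplest aggregation mechanisms: majority voting over discrete options and averaging over independent numeric estimates. The sketch below is a toy illustration in Python; the data and function names are invented for the example.

```python
from collections import Counter
from statistics import mean

def majority_vote(votes):
    """Return the option chosen most often by the group."""
    return Counter(votes).most_common(1)[0][0]

def crowd_estimate(estimates):
    """Aggregate independent numeric guesses by averaging them."""
    return mean(estimates)

# Invented example: a group picks an option and guesses a quantity.
print(majority_vote(["A", "B", "A", "A", "C"]))   # -> 'A'
print(crowd_estimate([120, 95, 110, 130, 105]))   # -> 112
```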
Knowledge Graph
The Knowledge Graph is a knowledge base used by Google to enhance its search engine’s results with semantic-search information gathered from a wide variety of sources. It was added to Google’s search engine in 2012, starting in the United States. Alongside the usual list of links to other sites, it provides structured, detailed information about the topic, the goal being that users can resolve their query without having to navigate to other sites and assemble the information themselves.
According to Google, this information is derived from many sources, including the CIA World Factbook, Freebase, and Wikipedia. The feature is similar in intent to answer engines such as Ask Jeeves and Wolfram Alpha. As of 2012, its semantic network contained over 500 million objects and more than 3.5 billion facts about and relationships between these different objects which are used to understand the meaning of the keywords entered for the search.
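Google has not published the Knowledge Graph’s internal design, but a semantic network of objects and relationships is commonly modelled as subject-predicate-object triples. The Python sketch below is purely illustrative (the facts and the query helper are assumptions, not Google’s API); it shows how structured facts can answer a query directly instead of returning links.

```python
# A toy semantic network stored as (subject, predicate, object) triples.
# The facts below are illustrative, not drawn from the actual Knowledge Graph.
triples = {
    ("Marie Curie", "field", "physics"),
    ("Marie Curie", "field", "chemistry"),
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
}

def query(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for (s, p, o) in triples if s == subject and p == predicate]

print(query("Marie Curie", "born_in"))  # -> ['Warsaw']
print(query("Marie Curie", "field"))    # -> ['physics', 'chemistry'] (order may vary)
```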
Natural Language Processing
Natural language processing (NLP) is a field of computer science, artificial intelligence, and linguistics concerned with the interactions between computers and human (natural) languages.
As such, NLP is related to the area of human–computer interaction. Many challenges in NLP involve natural language understanding, that is, enabling computers to derive meaning from human or natural language input. An automated online assistant providing customer service on a web page is one example of an application in which natural language processing is a major component.
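As a rough sketch of what “deriving meaning from input” can look like at its very simplest, the toy assistant below maps a customer message to an intent by keyword matching. Real systems use statistical or neural models; the intents and keywords here are invented for illustration.

```python
# Toy intent detection for an automated customer-service assistant.
# Plain keyword matching stands in for real natural language understanding.
INTENTS = {
    "refund":   {"refund", "money back", "return"},
    "shipping": {"ship", "delivery", "arrive", "track"},
    "greeting": {"hello", "hi", "hey"},
}

def detect_intent(message):
    text = message.lower()
    for intent, keywords in INTENTS.items():
        # First intent with a matching keyword wins; crude but illustrative.
        if any(kw in text for kw in keywords):
            return intent
    return "unknown"

print(detect_intent("When will my order arrive?"))  # -> 'shipping'
print(detect_intent("I want my money back"))        # -> 'refund'
```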
Deep Learning
Deep learning refers to a sub-field of machine learning (systems that examine data, from sensors or databases, and identify complex relationships) based on learning several levels of representations. These levels correspond to a hierarchy of features, factors, or concepts, where higher-level concepts are defined from lower-level ones, and the same lower-level concepts can help to define many higher-level concepts.
Deep learning is part of a broader family of machine learning methods based on learning representations. An observation (e.g., an image) can be represented in many ways (e.g., a vector of pixels), but some representations make it easier to learn tasks of interest (e.g., is this the image of a human face?) from examples, and research in this area attempts to define what makes better representations and how to learn them.
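A minimal sketch of the hierarchy-of-representations idea, using nothing but NumPy: each layer re-represents the output of the layer below it, so higher-level features are built from lower-level ones. The layer sizes and random weights are arbitrary assumptions; a real network would learn its weights from examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One level of representation: a linear map followed by a ReLU nonlinearity."""
    return np.maximum(0.0, w @ x + b)

x = rng.random(64)  # e.g. a flattened 8x8 image, represented as a vector of pixels

# Three stacked levels of representation with arbitrary (untrained) weights.
w1, b1 = rng.normal(size=(32, 64)), np.zeros(32)
w2, b2 = rng.normal(size=(16, 32)), np.zeros(16)
w3, b3 = rng.normal(size=(8, 16)), np.zeros(8)

h1 = layer(x, w1, b1)   # lower-level features of the raw input
h2 = layer(h1, w2, b2)  # mid-level features composed from h1
h3 = layer(h2, w3, b3)  # higher-level concepts defined from lower-level ones

print(h3.shape)  # (8,)
```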