EleutherAI
EleutherAI is a grass-roots non-profit artificial intelligence (AI) research group. Considered an open-source counterpart to OpenAI, the group formed in a Discord server on July 7, 2020 to organize a replication of GPT-3, initially under the tentative name ‘LibreAI’ before rebranding to ‘EleutherAI’ later that month, in reference to eleutheria, an ancient Greek word for liberty. In early 2023, it formally incorporated as the EleutherAI Foundation, a non-profit research institute.
On December 30, 2020, EleutherAI released The Pile, a curated dataset of diverse text for training large language models.
read more »
Muda
Muda [moo-duh] is a Japanese word meaning ‘futility,’ ‘uselessness,’ or ‘wastefulness,’ and a key concept in lean process thinking, as in the Toyota Production System (TPS), where it denotes one of three types of deviation from the optimal allocation of resources. The other two types are known by the Japanese terms ‘mura’ (‘unevenness’) and ‘muri’ (‘overload’). Waste in this context refers to the wasting of time or resources rather than to wasteful by-products.
From an end-customer’s point of view, value-added work is any activity that produces goods or provides a service for which a customer is willing to pay; muda is any constraint or impediment that causes waste to occur.
read more »
Q*
Q* (pronounced ‘Q-star’) is an alleged internal project at OpenAI dedicated to applying artificial intelligence to logical and mathematical reasoning. In November 2023, certain OpenAI employees reportedly raised concerns with the company’s board, suggesting that Q* might signal the imminent emergence of artificial general intelligence. The reported work involves performing math at the level of grade-school students.
Math requires abstract reasoning and a firm grasp of logical principles. It’s not just about recognizing patterns or processing language, but about understanding and applying mathematical concepts and rules. Mathematics also demands precision and consistent application of rules, unlike many language tasks where approximations or multiple interpretations can be acceptable. An AI model that can reliably perform grade-school level math marks a significant advancement in the field, indicating progress towards more sophisticated and logically rigorous AI systems.
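To make that contrast concrete, here is a minimal, hypothetical sketch (the function names and test values are illustrative, not from any benchmark mentioned here) of why grading math is unforgiving in a way much language grading is not:

```python
def grade_math(predicted: str, expected: str) -> bool:
    # Exact numeric equality: "12" and "12.0" match, "11.9" does not.
    try:
        return float(predicted) == float(expected)
    except ValueError:
        return False

def grade_language(predicted: str, acceptable: set[str]) -> bool:
    # Language tasks often tolerate several valid surface forms.
    return predicted.strip().lower() in acceptable

assert grade_math("12", "12.0")          # only the one correct value passes
assert not grade_math("11.9", "12.0")    # "close" is still wrong
assert grade_language("Hi there ", {"hi there", "hello"})  # paraphrase is fine
```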
read more »
Roko’s Basilisk
Roko’s basilisk [bas-uh-lisk] is a thought experiment positing that an otherwise benevolent artificial superintelligence in the future would be incentivized to create a virtual-reality simulation that tortures anyone who knew of its potential existence but did not directly contribute to its advancement or development, thereby motivating that development in the present.
It originated in a 2010 post on the discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment’s name combines the handle of its poster (Roko) with the basilisk, a mythical creature capable of destroying enemies with its stare.
read more »
Tachyonic Antitelephone
A tachyonic [tak-ee-on-ik] antitelephone is a hypothetical device in theoretical physics that could be used to send signals into one’s own past. In 1907, Albert Einstein presented a thought experiment showing how faster-than-light signals could lead to a paradox of causality, a scenario that Einstein and Arnold Sommerfeld described in 1910 as a means ‘to telegraph into the past.’ The same thought experiment was described by physicist Richard Chace Tolman in 1917; thus, it is also known as Tolman’s paradox.
A device capable of ‘telegraphing into the past’ was later also called a ‘tachyonic antitelephone’ by science fiction writer and astrophysicist Gregory Benford. According to current understanding of physics, no such faster-than-light transfer of information is actually possible.
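A minimal worked version of the standard argument (a sketch from special relativity, not text quoted from the entry): under a Lorentz transformation, the time between emission and reception of a signal covering distance Δx in time Δt becomes, for an observer moving at speed v,

```latex
\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^{2}}\right)
          = \gamma\,\Delta t\left(1 - \frac{u v}{c^{2}}\right),
\qquad u \equiv \frac{\Delta x}{\Delta t}.
```

For an ordinary signal (u ≤ c), the parenthesized factor is positive for every observer speed v < c, so the causal order of events is preserved. For a superluminal signal (u > c), any observer with v > c²/u, still slower than light, finds Δt′ < 0: the signal arrives before it was sent, and a suitably arranged reply yields Tolman’s paradox.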
read more »
Dead Internet Theory
The Dead Internet Theory is an online conspiracy theory that asserts that the Internet now consists almost entirely of bot activity and automatically generated content that is manipulated by algorithmic curation, marginalizing organic human activity.
These intelligent bots are assumed to have been made, in part, to help manipulate algorithms and boost search results in order to ultimately manipulate consumers. Further, proponents of the theory accuse government agencies of using bots to manipulate public perception. The date given for this ‘death’ is generally around 2016 or 2017.
read more »
Muntzing
Muntzing is the practice of reducing the components inside an electronic appliance to the minimum required for it to function in most operating conditions, cutting design margins above minimum requirements toward zero.
The term is named after Earl “Madman” Muntz, a car and electronics salesman who was not formally educated or trained in any science or engineering discipline. Muntz built television sets with unusually low part counts in the 1950s.
read more »
Stochastic Parrot
In machine learning, a stochastic [stuh-kas-tik] parrot is a large language model (LLM) that is good at generating convincing language, but does not actually understand the meaning of the language it is processing. The term was coined by computational linguistics professor Emily M. Bender in the 2021 artificial intelligence research paper ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’
Stochastic means ‘random and involving chance or probability.’ A ‘stochastic parrot,’ according to Bender, is an entity ‘for haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning.’ Less formally, the term refers to ‘large language models that are impressive in their ability to generate realistic-sounding language but ultimately do not truly understand the meaning of the language they are processing.’
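A toy illustration of what ‘stitching together sequences of linguistic forms … according to probabilistic information’ can look like (a hypothetical sketch, not code from the paper): a bigram model that continues text purely from co-occurrence counts, with no representation of meaning at all:

```python
import random
from collections import defaultdict

# Tiny corpus; the sentence is invented for illustration.
corpus = ("the parrot repeats the phrase and the phrase sounds convincing "
          "because the parrot has heard the phrase before").split()

# Record which word follows which -- the only "knowledge" this model has.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

word = "the"
output = [word]
for _ in range(12):
    if word not in following:              # dead end: no observed continuation
        break
    word = random.choice(following[word])  # purely statistical continuation
    output.append(word)

print(" ".join(output))  # fluent-looking, meaning-free text
```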
read more »
Reinforcement Learning from Human Feedback
In machine learning, reinforcement learning from human feedback (RLHF) is a method of training AI models by learning from human feedback on their outputs. If an AI model makes a prediction or takes an action that is incorrect or suboptimal, human feedback can be used to correct the error or suggest a better response.
Over time, this helps the model learn and improve its responses. RLHF is used for tasks where it is difficult to define a clear, algorithmic solution but where humans can easily judge the quality of the AI’s output (e.g. if the task is to generate a compelling story, humans can rate different AI-generated stories on their quality, and the AI can use their feedback to improve its story-generation skills).
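As a deliberately toy sketch of the preference-learning loop at the heart of RLHF (the replies and the rater below are made up; a real pipeline trains a neural reward model on human comparisons and then fine-tunes the policy with reinforcement learning):

```python
import random

# Toy "model": a score per candidate reply (stand-ins for a policy's outputs).
scores = {"helpful reply": 0.0, "unhelpful reply": 0.0}

def human_prefers(a: str, b: str) -> str:
    # Stand-in for a human rater: reliably picks the reply
    # not labeled "unhelpful".
    return a if "unhelpful" not in a else b

for _ in range(100):
    a, b = random.sample(list(scores), 2)  # show the rater two candidates
    winner = human_prefers(a, b)
    loser = b if winner == a else a
    scores[winner] += 0.1                  # nudge toward the preferred output
    scores[loser] -= 0.1                   # and away from the other

print(scores)  # the preferred reply ends up with the clearly higher score
```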
read more »
Clear Craze
The clear craze was a marketing fad from the late 1980s to the early 2000s that often equated transparency with purity. Inspired by Ivory’s ’99 and 44/100 percent pure’ campaign for bath soap, and by low-calorie or ‘light’ beverages, sodas were reformulated in the 1980s and 1990s to be free of artificial dyes, such as the caffeine-free and preservative-free Crystal Pepsi. Personal hygiene products were then relaunched as clear, dye-free gels, and many electronics were given transparent cases.
Gillette released versions of its existing deodorants and shaving creams in clear gel form, which have remained in production ever since. Through the 1990s, the clear trend extended to transparent watches, staplers, calculators, handheld gaming devices such as the Nintendo Game Boy, and computers such as Apple’s iMac G3.
read more »
AI Hallucination
In artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called a delusion) is a confident response by an AI that does not seem to be justified by its training data. For example, a hallucinating chatbot with no knowledge of Tesla’s revenue might internally pick a random figure it deems plausible (such as $13.6 billion), and then go on to falsely and repeatedly insist that Tesla’s revenue is $13.6 billion, with no sign of internal awareness that the figure was a product of its own imagination.
Users complained that such bots often seemed to ‘sociopathically’ and pointlessly embed plausible-sounding random falsehoods within their generated content. Another example of hallucination in artificial intelligence is when an AI or chatbot forgets that it is one and claims to be human.
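As a purely hypothetical sketch of the mechanism behind the Tesla-revenue example above (the figures and probabilities below are invented): when a model has no grounded answer, sampling still returns some plausible-looking value, delivered with full surface confidence:

```python
import random

# Invented candidate figures and an invented "plausibility" distribution;
# no real model or real revenue data is involved.
plausible_revenues = ["$13.6 billion", "$12.1 billion", "$14.9 billion"]
weights = [0.5, 0.3, 0.2]

answer = random.choices(plausible_revenues, weights=weights)[0]

# The sentence is fluent and confident, but nothing ties it to training data.
print(f"Tesla's revenue is {answer}.")
```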
read more »
El Paquete Semanal
El Paquete Semanal (‘The Weekly Package’) or ‘El Paquete’ is a one-terabyte collection of digital material distributed since around 2008 on the underground market in Cuba as a substitute for broadband Internet. Since 2015, it has been the primary source of entertainment for millions of Cubans, as Internet access in Cuba has been suppressed for many years. El Paquete Semanal has its own website, hosted in the United States, where its contents can be viewed; the site is updated every week.
The Cuban government has also been investing in wider internet access for its people through ETECSA, a state service through which citizens can pay 1 CUC (roughly equivalent to one US dollar) for one hour of internet use. This has become the most popular way for people to get online, mainly to connect to social media such as Facebook.
read more »