Q* (pronounced ‘Q-star’) is an alleged internal OpenAI project dedicated to applying artificial intelligence to logical and mathematical reasoning. In November 2023, certain OpenAI employees reportedly raised concerns with the company’s board, suggesting that Q* might signify the imminent emergence of artificial general intelligence. The reported work involves performing mathematics at the level of grade-school students.
Math requires abstract reasoning and a firm grasp of logical principles. It’s not just about recognizing patterns or processing language, but about understanding and applying mathematical concepts and rules. Mathematics also demands precision and consistent application of rules, unlike many language tasks where approximations or multiple interpretations can be acceptable. An AI model that can reliably perform grade-school level math marks a significant advancement in the field, indicating progress towards more sophisticated and logically rigorous AI systems.
Furthermore, math problems often require multi-step reasoning and the ability to plan and execute a sequence of operations. This involves not just understanding individual concepts but also how to combine them to reach a solution. Therefore, success in mathematical tasks can be a foundation for more complex cognitive tasks.
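To make the idea of multi-step reasoning concrete, here is a small illustrative Python sketch of a grade-school word problem solved as an explicit sequence of planned steps. The problem and its decomposition are invented for illustration only and are not drawn from any reported Q* benchmark.

```python
# Illustrative only: a hypothetical grade-school word problem solved as an
# explicit sequence of planned steps, the kind of multi-step structure
# described above. Not based on any released detail about Q*.

def solve_books_problem(shelves: int, books_per_shelf: int, checked_out: int) -> int:
    """How many books remain in a classroom after some are checked out?"""
    # Step 1: combine two given quantities into an intermediate result.
    total_books = shelves * books_per_shelf   # 4 * 12 = 48
    # Step 2: apply a second operation to that intermediate result.
    remaining = total_books - checked_out     # 48 - 9 = 39
    return remaining

if __name__ == "__main__":
    # Each step must be exactly right; a plausible-looking guess at either
    # stage produces a wrong final answer, which is what separates this from
    # fuzzier language tasks where approximations can be acceptable.
    print(solve_books_problem(shelves=4, books_per_shelf=12, checked_out=9))  # 39
```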
GPT-4, primarily a language model, is trained on large amounts of text data and excels at tasks involving language understanding and generation. Its approach, however, is based on identifying patterns in data, which differs from the logical, structured reasoning that mathematics requires. While GPT-4 can perform some basic arithmetic and solve simple problems, it doesn’t ‘understand’ math in the traditional sense; it relies on patterns seen in its training data, which can lead to inconsistencies and errors, especially in more complex or abstract mathematical tasks.
Early news of Q* and its abilities was met with skepticism in the AI community and disputed by OpenAI, with spokesperson Lindsey Held Bolton telling ‘The Verge,’ ‘[Interim CEO] Mira told employees what the media reports were about but she did not comment on the accuracy of the information.’ Additionally, a source familiar with the situation told ‘The Verge’ that the board never received a letter about such a groundbreaking development, and that the progress of the company’s research did not factor into Altman’s abrupt termination.
Reactions from others in the field of AI were also dismissive of claims of artificial general intelligence (AGI). François Chollet, an AI researcher at Google whose work focuses on how to achieve greater generality in artificial intelligence, noted: ‘Every single month from here on there will be rumors of AGI having been achieved internally. Just rumors, never any actual paper, product release, or anything of the sort. The first panic over imminent AGI was circa 2013 about Atari Q-learning by DeepMind. The second one was circa 2016 over Deep RL (partially triggered by AlphaGo).’
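For context on the Chollet quote: ‘Atari Q-learning’ refers to DeepMind’s deep Q-network work, which builds on the classic Q-learning update rule. The minimal tabular sketch below shows only what the ‘Q’ in Q-learning conventionally denotes, an estimate of the long-term value of taking an action in a state; any connection between that ‘Q’ and OpenAI’s alleged Q* project is speculation, not something OpenAI has confirmed.

```python
# Minimal tabular Q-learning update, included only to illustrate what Q(s, a)
# means in reinforcement learning. Any relation to OpenAI's alleged Q* project
# is speculative; the environment and values below are made up for illustration.
from collections import defaultdict

ALPHA = 0.1   # learning rate
GAMMA = 0.99  # discount factor

# Q-table: maps (state, action) pairs to estimated values, defaulting to 0.0.
Q = defaultdict(float)

def q_update(state, action, reward, next_state, actions):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    td_target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (td_target - Q[(state, action)])

# Example: a single update in a toy two-state environment.
q_update(state="s0", action="right", reward=1.0, next_state="s1",
         actions=["left", "right"])
print(Q[("s0", "right")])  # 0.1 after one step from a zero-initialized table
```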


