Artificial General Intelligence (AGI)

Research into computer-based Artificial Intelligence (AI) began over sixty years ago with the development of Expert Systems (ref.), but AI only began to (appear to) show some parallels with human intelligence with the deployment of workable Large Language Models (LLMs) (ref.) in the late 2010s.

A proposed derivative of AI, called Artificial General Intelligence (AGI), currently lacks an agreed definition - but is broadly taken to mean an AI-based system which equals or surpasses human mental capacities across a wide range of cognitive abilities.

Despite this lack of an agreed definition, recent progress with AI LLMs has prompted some leading computer scientists (ref.) and philosophers to suggest that Artificial General Intelligence could one day exist (note that there is also very little agreement on a timescale).

A 2024 paper in the Proceedings of the 41st International Conference on Machine Learning, authored by researchers at one of the firms involved in promoting the research - Google DeepMind - reviewed nine 'case studies' of previous definitions, and proposed six 'principles' by which (they say) AGI might be judged.

The authors write: "We hope our framework will prove adaptable and scalable – for instance, how we define and measure progress toward AGI might change with technical advances such as improvements in interpretability that provide insight into models’ inner workings."

Note that AGI's definition is greatly complicated by the lack of agreement about what biological Intelligence itself actually is, and how to define it.

"Numerous definitions of and hypotheses about intelligence have been proposed since before the twentieth century, with no consensus reached by scholars." (Source: Wikipedia)

Despite the lack of an agreed definition, tests of IQ (Intelligence Quotient) are still routinely used to grade students, job-applicants and patients.

Some proponents of AGI suggest the possibility that systems could one day become 'conscious' (by some unknown means), though again, there is currently no agreement about what Consciousness itself actually is.

Does an ant carrying a piece of leaf across the forest floor ‘know’ it’s doing so? Is it conscious? If science could construct a machine as complex as the human brain, would it ‘know’ it existed? Will it one day be possible to explain consciousness via the laws of physics?

To recap the unknowns surrounding AGI:

Notes:

1) The exact workings of the computer programming constructions known as Neural Networks - on which LLMs are fundamentally based - are unclear.

"Multilayer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding." (Source: Proceedings of the National Academy of Sciences, 2018, 115 (33) [open access])
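For readers unfamiliar with the term, the following sketch (plain Python, illustrative only, with made-up weights) shows the kind of elementary computation a single neural-network layer performs: a weighted sum of its inputs passed through a simple nonlinearity. LLMs chain billions of such operations, which is one reason their overall behaviour resists interpretation even though each individual step is simple arithmetic.

    # A minimal, illustrative neural-network layer: weighted sums plus a
    # simple nonlinearity. Real LLMs chain billions of such operations.
    import random

    def relu(x):
        # Rectified Linear Unit - a very common, very simple nonlinearity.
        return max(0.0, x)

    def dense_layer(inputs, weights, biases):
        # One output per neuron: the dot-product of the inputs with that
        # neuron's weights, plus a bias, passed through the nonlinearity.
        outputs = []
        for neuron_weights, bias in zip(weights, biases):
            activation = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
            outputs.append(relu(activation))
        return outputs

    # Toy example: 3 inputs feeding 2 neurons, with arbitrary weights.
    random.seed(0)
    inputs = [0.5, -1.2, 3.0]
    weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(2)]
    biases = [0.1, -0.3]
    print(dense_layer(inputs, weights, biases))

Each step above is transparent; the difficulty described in the quotation arises when millions of such neurons, with learned rather than hand-chosen weights, interact across many layers.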

2) Because of these Neural Network obscurities, and the complexity of the enormous bodies of training data fed into them, it is not always straightforward to understand exactly how or why LLM queries produce the results which they do.
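As a hedged illustration of the point above, the toy sketch below (plain Python, with invented numbers) mimics only the final step of text generation: the model assigns a score (a 'logit') to every candidate next word, the scores are converted into probabilities, and one word is sampled. In a real LLM those scores emerge from billions of learned parameters, and the sampling step is itself random, so tracing exactly why a particular answer appeared is far from straightforward.

    # Toy illustration of next-word selection: scores -> probabilities -> sample.
    # The numbers are invented; in a real LLM they come from the network itself.
    import math, random

    candidate_words = ["Paris", "London", "Rome", "banana"]
    logits = [4.1, 2.3, 2.0, -1.5]   # the model's raw scores for each candidate

    # Softmax: turn arbitrary scores into probabilities that sum to 1.
    exps = [math.exp(score) for score in logits]
    probabilities = [e / sum(exps) for e in exps]

    # Sample the next word in proportion to its probability.
    next_word = random.choices(candidate_words, weights=probabilities, k=1)[0]

    for word, p in zip(candidate_words, probabilities):
        print(f"{word}: {p:.3f}")
    print("chosen:", next_word)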

3) Although extremely useful and fast for solving many real-world problems, LLMs are never 100% reliable - sometimes giving entirely ludicrous answers which are nevertheless presented with great 'confidence', a syndrome known in AI jargon as 'Hallucination'. Presumably, an AGI system would also 'hallucinate'.
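One hedged way to see why such 'confidence' can mislead: the probability step sketched above always produces a tidy, assertive-looking distribution, whether or not the underlying scores reflect genuine knowledge. In the illustrative snippet below (invented numbers again), the model 'answers' a factual question with high confidence in both cases, even when its scores happen to favour the wrong name.

    # Illustrative only: a softmax yields confident-looking probabilities
    # regardless of whether the underlying scores encode real knowledge.
    import math

    def softmax(logits):
        exps = [math.exp(score) for score in logits]
        return [e / sum(exps) for e in exps]

    # Invented scores for the question "Who wrote 'Middlemarch'?"
    candidates = ["George Eliot", "Charles Dickens", "Jane Austen"]
    cases = {
        "well grounded": [5.0, 1.0, 0.5],   # scores favouring the correct author
        "hallucinating": [0.5, 5.0, 1.0],   # scores favouring a wrong author
    }

    for label, logits in cases.items():
        probabilities = softmax(logits)
        answer, confidence = max(zip(candidates, probabilities), key=lambda pair: pair[1])
        print(f"{label}: answers '{answer}' with {confidence:.0%} 'confidence'")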

4) Various skeptical observers of what they call the 'AGI Hype Cycle' have pointed out that most of the groups involved in AGI research are fully commercial enterprises which stand to gain financially from investments in, sales of, and promotion of the systems which they create - whether those systems are truly AGI (according to a yet-to-be-agreed definition) or simply appear to be.