The exciting news about the level of intelligence demonstrated by AI systems is that they are approaching qualitatively new capabilities in learning, planning, and reasoning that match or exceed human performance in some respects. As the number of artificial neurons and connections grows into the tens of billions, new capabilities have emerged. Humans, by contrast, think with neurons bathed in a chemical soup that is kept free of blood, and that soup is part of what makes us able to think the way we do. Neurotransmitters, hormones, and many other chemicals are part of how we think and feel. So synapses and their connectivity are critical, but the electrical signal is only one dimension of thought.
Our present push to build intelligent machines does use specialized neural structures, mathematically speaking, such as deep neural networks and the transformer architecture. Humans have the amygdala, cortex, hypothalamus, and so on, each a complex, specialized thinking system. Our present math has some analogs to these functions, but they are primitive. Even a primitive AGI is exciting and powerful, though as far as I can tell it is barren of the emotional so far.
These simple structures can produce a certain type of intelligence that is very useful and powerful. Specializations are also developing "under the hood": papers now describe "mixture of experts" layers, prompt pre-processing, output "safety" filters, and the various stages of GPT models (embedding, transformer blocks, feed-forward networks, and so on). A rough sketch of the mixture-of-experts idea is below.
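For readers who haven't met the term, here is a minimal, purely illustrative Python sketch of the mixture-of-experts idea: a small gating network scores several specialized sub-networks ("experts") and the layer's output is a weighted blend of them. The shapes and random weights are hypothetical and not any production model's actual configuration.

```python
# Toy "mixture of experts" layer: a gating network scores each expert and
# the output is a softmax-weighted blend of the experts' outputs.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts = 16, 4

# In this sketch each "expert" is just a random linear map.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
gate = rng.normal(size=(d_model, n_experts))   # gating network weights

def moe_layer(x):
    """Route input vector x through a softmax-weighted mix of experts."""
    scores = x @ gate
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over experts
    return sum(w * (x @ E) for w, E in zip(weights, experts))

x = rng.normal(size=d_model)
print(moe_layer(x).shape)   # (16,) -- same shape in, same shape out
```

Real systems route each token to only a few experts for efficiency, but the blending idea is the same.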
Back in the 1990s, this idea, that biochemistry and electrical signaling are both involved in thinking and intelligence, was put forward by a Finnish researcher and others. The 1960s had already confirmed that combination :-).
Hype and media excitement are making "AGI" the new buzzword, replacing "green" as the term marketers will drain of meaning.
As the capabilities of these AGI systems grow, the need for "safety" becomes more important and urgent. The human safety system includes fear, nurturing, prediction, socialization, and so much more, features yet to be added to the so-called AGI.
I'm excited to work with these new LLMs and am looking forward to the Nvidia GTC conference later this month. I've studied neural networks since 1991, and only recently has the field impressed and excited me. We had fuzzy logic, which didn't go anywhere until recently; we had neural network math, but it was pathetic, and the hardware was pathetic too.
Later we had "expert systems" that were not expert, but that did generate study of how to extract expert knowledge from people and of the stages of expert development.
When deep neural networks began to do object recognition well, I was excited to do video processing and image work, and that has only become more powerful. Audio processing is now being integrated into LLMs as well.
For me, it was fun to ask ChatGPT to write some Python code that would take sound samples from a recording and make audio spectrum plots that could later be used in a recognition task. It produced the code in a few minutes, and it took me about an hour to make it work on my system, but now I have a tool I can use as a component in a workflow! A rough sketch of that kind of script follows.
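The code ChatGPT gave me isn't reproduced here; the sketch below is a minimal, hypothetical version of the same idea, assuming a WAV file named recording.wav and the numpy/scipy/matplotlib stack.

```python
# Minimal sketch: read a WAV recording, compute a spectrogram, and save the
# plot as an image that a later recognition step could consume.
# "recording.wav" and "spectrum_plot.png" are illustrative placeholders.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

sample_rate, samples = wavfile.read("recording.wav")
if samples.ndim > 1:
    samples = samples[:, 0]          # keep one channel if the file is stereo

freqs, times, power = spectrogram(samples, fs=sample_rate)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.colorbar(label="Power [dB]")
plt.savefig("spectrum_plot.png", dpi=150)
```

The saved images can then be fed to an image-based classifier, which is exactly the kind of component-in-a-workflow reuse I had in mind.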
The coupling of the LLM to code generation is wonderful for me, since I write code but am not a coding expert; I use code to accomplish a task, often hardware and software integration. I can read Python and several other languages, C++, SQL, JavaScript, PHP, Perl, Fortran, and so on, but I only know enough to get by, and then it fades from lack of use. Having a coding assistant that can also help debug is wonderful.
I was able to give ChatGPT an error message I saw when I ran some of its code, and it walked me through debugging in a conversation that eventually made the code run. I learned from the interaction and ended up with usable code.