"You describe yourself as a rationalist. Unfortunately, you're as mistaken in that as you are in pretty much everything else."
Higgins, I've read through some of your posting history and I can see that you are an intelligent - and normally civil - guy. There's no need to get angry.
"estimate the performance necessary for replicating all of the cortex, as it has a uniform architecture" - Bollocks.
How much computational neuroscience literature have you read? 10 hours worth? 100 hours worth? 1,000 hours worth? I only ask this to speed up the conversation. Your reply of 'bollocks' indicates to me that it is unlikely that you are up to date on the computational neuroscience literature.
The general uniformity of the laminar cortical architecture has been a key finding of neuroscience since Hubel and Wiesel's discovery of columnar maps in the primary visual cortex of cats. The same general six-layered microcircuit, with its characteristic distributions of excitatory and inhibitory cell types and initial wiring patterns, is repeated throughout the cortex of all mammalian brains, with minor variations across cortical regions (motor and frontal cortical regions have some key differences) and between species. At the higher macrocircuit level, the cortex is partitioned into (mostly) genetically predefined modules, each of which repeats the same general wiring strategy but has unique inter-module connectivity.
The most striking direct evidence of cortical microcircuit uniformity comes from intentional or accidental rewiring experiments. If the normally 'auditory' cortex is rewired to receive input from the retina, the microcircuit develops the 'software' wiring for visual processing instead (although, as expected for various reasons, not at the same level of capability).
The detailed microwiring changes dynamically during development and learning - encoding acquired mind 'software', if you will - and is thus ultimately unique to each individual. In blind humans, the 'visual' cortical modules typically develop advanced audio processing capabilities, including even active echolocation (sonar) in some cases.
The general uniformity of cortical architecture leads to the 'one learning algorithm' hypothesis, the key insight behind 'deep learning' (just another name for large cortex-inspired ANNs), and the success of that field is strong indirect supporting evidence for the principle. The same general deep CNN architecture that has now achieved primate/human-level performance in vision is also leading to similar breakthroughs in a wide variety of other pattern recognition challenges, such as speech recognition and automated translation.
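To make the 'one architecture, many tasks' point concrete, here is a minimal Python/PyTorch sketch. The layer sizes, class counts, and input shapes are hypothetical and chosen only for illustration; the point is simply that nothing in the stack is vision-specific:

```python
import torch
import torch.nn as nn

def make_cortex_like_net(in_channels: int, num_classes: int) -> nn.Sequential:
    """One generic deep CNN 'microcircuit', reused across modalities.

    All layer sizes here are illustrative assumptions - nothing in
    the architecture itself is specific to vision or audio.
    """
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),  # pool to 1x1 regardless of input size
        nn.Flatten(),
        nn.Linear(64, num_classes),
    )

# The same 'algorithm', wired to different senses:
vision_net = make_cortex_like_net(in_channels=3, num_classes=1000)  # RGB images
audio_net = make_cortex_like_net(in_channels=1, num_classes=40)     # spectrograms

x_img = torch.randn(8, 3, 224, 224)   # a batch of images
x_spec = torch.randn(8, 1, 128, 128)  # a batch of log-mel spectrograms
print(vision_net(x_img).shape)   # torch.Size([8, 1000])
print(audio_net(x_spec).shape)   # torch.Size([8, 40])
```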
See this video by Andrew Ng; it summarizes the importance of the universal cortical learning hypothesis and how it has revolutionized - and is still revolutionizing - machine learning.
Also - just to be clear - there is still a great deal of research to do in learning algorithms. The brain uses a complex mix of supervised, unsupervised, and reinforcement learning, and in general its learning machinery is more complex and powerful than anything we have yet in DL/ML.
Even so, our current learning algorithms are already powerful enough to rival isolated equivalent brain circuits in some cases - at least for tasks simple enough that an ideal training environment can be constructed. The 'commonsense reasoning' that you mention below will probably require embedding an AGI for virtual years in a complex simulated 3D environment (running the simulation much faster than realtime is key - it's a performance issue).
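As a rough back-of-envelope illustration of why the speedup matters (the 18-virtual-year figure and the speedup factors are assumptions for illustration, not claims from the literature):

```python
# Back-of-envelope: wall-clock time to give an AGI a simulated childhood.
# All numbers here are illustrative assumptions.
VIRTUAL_YEARS = 18  # a full simulated 'childhood and education'

for speedup in (1, 10, 100, 1000):  # simulation speed vs. realtime
    wall_clock_days = VIRTUAL_YEARS * 365 / speedup
    print(f"{speedup:>5}x realtime -> {wall_clock_days:8.1f} days "
          f"({wall_clock_days / 365:.2f} years)")

# At 1x you wait 18 years per experiment; at 1000x, under a week.
```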
...
"machine learning is already making steady exponential progress" - Patently untrue, do you know what the word means?
Performance is constrained by the complexity of ANNs in terms of size and speed. It takes a few years' worth of visual experience for a state-of-the-art DANN to achieve human-level performance, which requires running the simulation dozens of times faster than real time so that training takes a more reasonable few weeks. Everything is performance-constrained: with more compute power we can train and experiment with ever larger ANNs. Model complexity is thus increasing exponentially along with compute performance.
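To make those figures concrete, a quick sketch; the 3 years of experience, 50x speedup, and 2-year doubling time are illustrative assumptions consistent with the rough numbers above:

```python
# Illustrative numbers only: check the 'few weeks' claim and project
# exponential model growth.
visual_experience_years = 3  # assumed experience needed for human-level vision
speedup = 50                 # assumed simulation speed vs. realtime
training_weeks = visual_experience_years * 52 / speedup
print(f"Training time: {training_weeks:.1f} weeks")  # ~3 weeks

# If trainable model complexity doubles every ~2 years (assumption),
# then after t years it has grown by a factor of 2**(t/2):
for t in (2, 4, 10, 20):
    print(f"after {t:2d} years: {2 ** (t / 2):>7.0f}x larger models")
```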
...
"what future evidence would cause you to update that to a high ['higher' - enough with the heavy-handed framing] probability?"
Any evidence of advances in the understanding of the G in AGI (see below).
...
Human-level general intelligence will require very large ANNs with complexity similar to that of a human brain, along with a similarly complex and lengthy educational development process. Essentially, we will need to actually raise and educate AGIs - somewhat like children. The key is accelerating the learning speed, which is already essential for training visual circuits.
There is still a great deal of research to do in unraveling many of the brain's macrocircuits and principles. On the other hand, we also have a huge backlog of theories and wiring data - far more than we can test. Massively accelerating experimental throughput is the key to accelerating progress.
"Say in 5 years, you experience an AI 'app' that has intelligence - but only that of a 5 year old." - I've no idea what you think is to be gained by this orgasm of conjecture but it's certainly not a rationalist's argument, it's unfounded speculation about an as-yet-invisible near-term major breakthrough. And what do you mean: "only that of a five year old"? What towering, empty, casual, blind arrogance. Not a parent, are you? ROFL
The exact timeframe is irrelevant - the question is simply to gauge your current general intuition for where the principal locus of uncertainty/difficulty in achieving AGI resides. Is it in getting to infant AGI? Toddler AGI? Adult AGI?
For example, once we achieve AGI-5 (toddler AI), it is still uncertain that AGI-30 (adult) is near. It could be that there is a huge remaining complexity gap. I doubt this - instead I believe most of the complexity/difficulty is in getting up to AGI-5. In that - at least - it appears we agree.
"initially its just simulating infant brains that don't do much, and certainly aren't interesting yet" - You consider the cognitive plasticity of infants to be uninteresting yet anticipate the development of AGI in ML in just a decade or two? HAHAHAHAHAHA. Please stop, it hurts.
Now I sense that you are intentionally misreading me. By 'simulating an infant brain', I meant simulating a brain with the size and complexity of a human infant's - without yet understanding the correct seed wiring architecture. It would be a brain-dead infant, essentially. There is still a huge amount of research work to do in going from "now we can simulate an infant brain" to "now we understand all of the initial wiring/algorithms that infant brains need to become adults".
A while back the AI/AGI community split into a few directions. One group believed that AGI could be achieved by advances in logic and traditional computer science. Another group believed that AGI will require artificial brains - based on understanding the learning mechanisms in real brains. The former group's thesis has essentially been disproven and is a dead end; all advances in AI at this point are actually coming from the latter group (neuroscience/machine learning).
General 'commonsense' reasoning requires a general internal predictive model of the world, which is something that comes around or after AGI-5. It's not an isolated problem you can solve independently beforehand - it's essentially AI-complete.
Cheers
-Alix