>>19723 As far as I understand it, "neural networks" are only superficially similar to how our brains work. They were modeled on/inspired by the way neurons connect to each other in layers, but beyond that we don't really know whether the human brain does anything resembling backprop. Moreover, the structure of the brain changes over time, whereas current artificial neural networks keep a fixed topology. We also don't really have much information on how exactly our own neurons connect to and influence each other - mapping the connectome is still an active area of research.
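To make the "fixed topology" point concrete, here's a minimal sketch (plain NumPy, with layer sizes I picked arbitrarily): the wiring is decided the moment the weight matrices are allocated, and a backprop/gradient step only changes the numbers inside them, never the connectivity itself.

    import numpy as np

    # Toy network with an arbitrary, fixed topology: 4 inputs -> 8 hidden -> 1 output.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 8)) * 0.1, np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)

    def forward(x):
        h = np.tanh(x @ W1 + b1)   # hidden activations
        return h, h @ W2 + b2      # hidden layer and output

    # One gradient-descent step on a squared-error objective, with a fake batch.
    x = rng.normal(size=(16, 4))
    target = rng.normal(size=(16, 1))
    h, y = forward(x)
    err = y - target                     # dLoss/dy for 0.5*(y - target)^2
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h**2)         # backprop through tanh
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)
    for p, g in [(W1, dW1), (b1, db1), (W2, dW2), (b2, db2)]:
        p -= 0.1 * g                     # values change; the wiring (shapes) never does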
Even breakthroughs in reinforcement learning such as AlphaZero ultimately come down to learning how to do tree search efficiently to maximize a defined objective. The issue is that the techniques we have are only feasible when we can write down a clean, closed-form objective to maximize or minimize. One ability ascribed to "general AI" is dynamic reasoning, which will probably require some way for the system to come up with/discover its own objectives. Given that, progress towards the sort of general AI we see in movies, one that could feasibly mimic a human, will probably first require some understanding of how our connectome works.
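To show what "a defined objective" means here, a toy sketch (not AlphaZero, just an exhaustive search over a tiny made-up subtraction game): the only thing guiding the search is a reward function we wrote down by hand. AlphaZero's contribution is learning a policy/value network that makes this kind of search efficient, but the objective itself - win the game - is still fixed from the outside; nothing in the setup lets the system invent its own goals.

    from functools import lru_cache

    # Toy "subtraction game": players alternately remove 1 or 2 stones;
    # whoever takes the last stone wins.

    def reward(stones_left):
        # Hand-written, closed-form objective: if it's your turn and no stones
        # remain, the previous player just took the last one, so you lose.
        return -1 if stones_left == 0 else None

    @lru_cache(maxsize=None)
    def best_value(stones):
        r = reward(stones)
        if r is not None:
            return r
        # Tree search: maximize our reward assuming the opponent does the same.
        return max(-best_value(stones - move) for move in (1, 2) if move <= stones)

    print(best_value(9))   # -1: a multiple of 3 is a losing position for the player to move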
This is my own personal opinion, but I think there also needs to be a fundamental shift in our architecture (both computer hardware and the way we currently build/train networks) to enable it. The fact that we're throwing kilowatts of power at training networks while the human brain manages on a couple dozen watts feels, to me, like we're trying to brute-force our way there, and that there ought to be some more elegant, simpler approach that would also solve the power consumption problem at the same time.