Here's the short version.
1.) Experts know next to nothing about how neurons work. They understand that neurons communicate via neurotransmitters at synapses, that synapses have activation thresholds, and that a neuron uses electrical signals to communicate within itself, but beyond that very little is settled. Neurons are fully-realized cells with their own metabolism, and may carry significant internal state or perform significant computation on their own. They also don't work alone: companion cells (glia) may well contribute too. And even if neurons were as simple as the units in e.g. TensorFlow, synapses for whatever reason secrete and respond to anywhere from a few to around a hundred different neurotransmitters, each with its own activation threshold. Nobody knows what significance the different neurotransmitters have, or whether the neuron 'understands' that one signal is different from another, although we do know synapses switch to different mixes of neurotransmitters over time (e.g.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3209552/). The safe bet IMO is that real neurons are better understood as networked computers than as the labeled nodes on a graph we currently treat them as. Even in the most optimistic case, we're still incredibly far from emulating real neurons (never mind that we'd need to understand how they work first). The fact that you can get impressive results from current RNNs speaks to the mathematical power of networks, not to RNNs resembling how real brains work.
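To make the contrast concrete, here's a toy sketch in Python. Everything about `ToyBioNeuron` is invented for illustration (the transmitter names, thresholds, and state rule are not a real biological model); the point is only the structural difference between a stateless weighted sum and a unit that keeps internal state and distinguishes transmitter types.

```python
import math

def ann_neuron(inputs, weights, bias):
    """A TensorFlow-style unit: a stateless weighted sum plus a nonlinearity."""
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

class ToyBioNeuron:
    """Hypothetical sketch of the 'networked computer' view: per-transmitter
    activation thresholds plus internal state that persists between signals."""
    def __init__(self, thresholds):
        # thresholds: dict mapping transmitter name -> activation threshold
        self.thresholds = thresholds
        self.state = 0.0  # persistent internal state (made up for illustration)

    def receive(self, transmitter, amount):
        # Only signals at or above that transmitter's own threshold register,
        # and whether the neuron fires depends on accumulated state, not just
        # the current input.
        if amount >= self.thresholds.get(transmitter, float("inf")):
            self.state += amount
        return self.state > 1.0  # fires once enough state has built up
```

Note that two identical calls to `ann_neuron` always return the same value, while two identical signals to a `ToyBioNeuron` can produce different results because the second one lands on different internal state.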
2.) Experts know next to nothing about the local structure of the brain. fMRI studies seemed super rad, but a large fraction have turned out to rest on broken statistics. The remaining reverse-engineering techniques are virtually useless outside of brain-injury diagnosis (e.g.
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268). Pretty much all we know is that the structure is significant, or in less fancy words, that destroying neurons makes stuff stop working. Even in the most optimistic case from #1, we're nowhere close to understanding what shape of neural network can admit human-like intelligence, let alone being able to engineer one.
3.) Experts know next to nothing about the global structure of the brain. The brain has a changing electromagnetic field: brainwaves. What is their significance? Nobody knows. They could be waste radiation from the brain's operation, or they could carry important global state, or maybe even function like neuron 'wifi'. Neurons also respond to chemicals in the blood: neurotransmitters, hormones, antagonists, inhibitors, and so on, and certain brain neurons connect to the glands that secrete these compounds, so they're capable of changing global state that way too. As far as I know, none of this is modeled in current artificial networks, and even if it were, nobody has any clue why it's useful or how you might start usefully introducing it.
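For flavor, here's a toy sketch of what a chemical global-state channel might look like structurally (the "hormone" mechanism here is entirely made up; it's just meant to show a knob that isn't a per-connection weight): a single scalar shifts every unit's effective threshold at once, so the whole network's behavior changes without any weight changing.

```python
def modulated_layer(inputs, weights, hormone_level):
    """Toy illustration of global chemical state (invented mechanism):
    one scalar 'hormone_level' shifts the firing threshold of every
    unit in the layer simultaneously, independent of the weights."""
    outputs = []
    for row in weights:
        pre = sum(w * x for w, x in zip(row, inputs))
        # A higher global hormone level lowers every threshold at once.
        outputs.append(1.0 if pre > 0.5 - hormone_level else 0.0)
    return outputs
```

The same input that fails to fire any unit at one hormone level can fire all of them at another, which is the kind of global, non-synaptic influence point 3 is describing.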
4.) Billions of years of evolution aside, it's not even obvious where you should start training a neural network. What conditions make human-like intelligence or consciousness beneficial? It could be a ****ing accident!