I suffer from two horrifying and mutually reinforcing afflictions: I’m an engineer and an information scientist. I have no hope of ever interacting in any reasonable way with normal humans, though I can occasionally fake it for short periods of time. That doesn’t make us, normal or engineer, any less interesting. The processing element we call the human brain is very interesting!
Over the last several months I’ve been learning about neural network technology. I’m still very low on the learning curve. In essence, though, at least one form of neural network creates its output by solving a mathematical equation. This is in contrast to a more typical computer program, which creates its output by executing a sequence of steps: an algorithm. The magic in a neural network is the set of coefficients in the equation. These coefficients are derived, via a complex process, from training data: a set of examples of the form “when you see an input that looks like that, then the output should be this.” Given a large enough training set, the equation becomes more likely to produce a correct answer even for inputs it hasn’t been trained on, but only if those inputs are relatively “near” the training data. Sufficiently obscure inputs will still produce hogwash. The remedy is to include these more obscure inputs in the training data. Note that a neural network can’t “unlearn” anything. All you can do is reinforce correct responses at the expense of other responses. Doing so may, unfortunately, make more mainstream responses slightly less likely to be correct. Fortunately, for artificial neural networks, there are ways to compensate for this by adjusting the architecture of the network[1].
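To make the idea concrete, here’s a toy sketch of the simplest possible case: the “network” is just the equation y = w·x + b, and training repeatedly nudges the coefficients w and b toward values that reproduce the examples. The data, learning rate, and loop counts are all hypothetical choices for illustration; real networks have many layers, nonlinearities, and millions of coefficients, but the principle is the same.

```python
# Training examples of the form "when you see this input, the output
# should be this." (Hypothetical data drawn from y = 2x + 1.)
examples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # the coefficients start out knowing nothing
lr = 0.01         # learning rate: how hard each example nudges them

for _ in range(5000):          # many passes over the training data
    for x, y in examples:
        pred = w * x + b       # the "equation" that produces output
        err = pred - y
        w -= lr * err * x      # gradient of squared error w.r.t. w
        b -= lr * err          # gradient of squared error w.r.t. b

# An input "near" the training data generalizes well: 1.5 was never
# in the examples, yet the fitted equation answers close to 4.0.
print(w * 1.5 + b)
```

Nothing in the code stops us from asking about inputs far outside the training range; for this linear toy the line happens to extrapolate, but for a real nonlinear network such obscure inputs are exactly where the hogwash comes from.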
The same is true of humans. In our element we get it right most of the time; at least we behave consistently and in alignment with expectations in that particular space. Outside our zone of familiarity, god only knows what nonsense we’ll conjure up. The remedy is to enlarge “our element” by increasing the variety of our experiences. The downside is that human architecture is fixed: the more we expand our experience, the less likely we are to remain in alignment with where we started. Sometimes we’re lucky and where we started adapts to us, at least in a limited fashion, but this is not usually the case. This is likely at the root of the aphorism “You can never go home again.” In the extreme this creates an interesting paradox. The universe is very large and complex. Humans are fixed, finite processing elements. If we were to maximize our “training set,” we would likely be increasingly out of touch with any specific circumstance we might find ourselves in. We would be much closer to universal “truth” and much less able to apply it in any meaningful way.
Where does this leave us? The easy and obvious observation is that we’re all victims and beneficiaries of our experience. We are more robust when our range of experiences equips us to be effective in the range of circumstances we’re likely to find ourselves in. We’re at risk, and we feel fear, when we find ourselves in circumstances we’re not equipped for. There are a ton of places you can take this, such as: How much diversity do you need for the life you’re likely to have? What is the value of education (indirect, transitive experience) versus direct experience? How should you “plot the course” of your experiences into the future?
I’ve rambled too long on this. Suffice it to say that this helps me better understand and appreciate the technology-induced and technology-enabled stresses[2] on our society today. We’re forcing experiences on people that they are ill-equipped to deal with. In hindsight, I’d bet the outcome of this circumstance will have been obvious[3]. Human society has a “hull speed,” and in our rush to democratize we’ve exceeded it. Everyone should benefit from our advances, but it appears that timing and rate may be critical.