>>16611
I don't agree that neural networks force anything more than a human child forces things as it grows and learns. There is a certain element of chaos and uncertainty that a simple algorithm probably couldn't handle, but neural networks are better at working with chaos and with limited data than a plain list of "if this then that" rules that sometimes passes as AI. I guess we can agree that at some level planning will need computers and programs at varying levels of complexity.
Could I make the argument that this is mostly a problem of insufficient training data and/or insufficient scale of the neural network? That is pretty much what separates a human (or animal) mind from a current neural network. It probably doesn't need anywhere near human levels of complexity to produce results at or above human level, since it will be a specialized system, and when it makes a mistake it will be corrected and improved. Unlike a human, who might repeat the same mistake multiple times, or multiple humans who could repeat the same mistake generation after generation.
What I am a bit worried about, however, is "black boxing" the whole planning process with neural networks and algorithms and such. I'm not afraid of any machine uprising, but it's dangerously undemocratic if nobody, or only a select few, can tell how and on what basis certain planning results were reached.
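The black-box worry isn't hopeless, though: even an opaque model can be audited from the outside. A minimal sketch of one common technique, permutation importance: shuffle one input across a set of cases and see how much the model's outputs move. Inputs the model ignores barely change anything. Everything here is hypothetical, `black_box_score` is a stand-in toy function, not any real planning model.

```python
import random

# Hypothetical stand-in for an opaque planning model: scores a region's
# proposed allocation from three inputs. In reality this would be the
# trained network whose internal reasoning we cannot read directly.
def black_box_score(population, capacity, demand):
    return 2.0 * demand + 0.5 * capacity + 0.0 * population

# A small audit set of (population, capacity, demand) cases.
random.seed(0)
cases = [(random.random(), random.random(), random.random())
         for _ in range(200)]

# Permutation importance for input number `idx`: shuffle that input
# across the audit cases and measure the average change in output.
def importance(cases, idx):
    shuffled_col = [c[idx] for c in cases]
    random.shuffle(shuffled_col)
    total = 0.0
    for c, v in zip(cases, shuffled_col):
        perturbed = list(c)
        perturbed[idx] = v
        total += abs(black_box_score(*perturbed) - black_box_score(*c))
    return total / len(cases)

names = ["population", "capacity", "demand"]
scores = {n: importance(cases, i) for i, n in enumerate(names)}
print(scores)  # demand dominates; population, which the model ignores, scores 0
```

This only tells you *which* inputs drive the results, not *why*, but even that much is a public, checkable fact about the planner, which is exactly the kind of transparency a democratic process would need.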