Joe Joyce wrote on Fri, Dec 15, 2017 12:31 AM UTC:
I agree, Aurelian. I think it's obvious that neural nets could 'easily' (with much hardware, time, and $$) play games like I've described at human level, and possibly a bit beyond. My point is that there are far too many indeterminacies for even the best neural nets to successfully predict game states (i.e., what the opponent, or even the AI itself, will do in a couple of turns) for the software to consistently outperform the best humans or human teams. The game tree for even a specific game of Macysburg (a 32x32 abstract strategy war game riff on the Battle of Gettysburg during the American Civil War) is ridiculous. If AlphaZero depends in part on the exact board configuration, that can and does change significantly from game to game. And predicting future game states does not work except in the most limited of circumstances.

The best the AI can achieve is a generalized knowledge of how terrain affects movement and combat. It can apply those rules very well in limited situations and be a brilliant tactician, but so can humans. The AI clearly has the potential to be better at tactics, but how much better? And I don't think the AI can be significantly better at strategy without teaching us more about strategy.

I think people find it very hard to grasp the total range of possibilities. The game starts with about 42 pieces on the board, all of which can move every turn if they have a nearby leader. And there are 3 reinforcement turns, each of which brings in another ~42 pieces, so expect to have ~100 pieces maneuvering in the middle of the game. Exactly where each type of piece stands each turn, the exact order in which the pieces are moved, exactly where the terrain is in relation to each piece, and what that terrain is - different pieces get different effects - together determine what attacks can be made, and changing any of those conditions changes what happens *each turn*.
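To give a rough sense of that range, here is a back-of-envelope sketch. The piece counts come from the post; the moves-per-piece figure is my own deliberately low assumption, just to show how fast the one-turn branching factor blows up when move order matters:

```python
import math

# Assumed mid-game piece count, per the post (~100 pieces maneuvering).
pieces_mid_game = 100

# If the order in which pieces move matters, the number of distinct
# move ORDERS alone -- before any piece even picks a move -- is 100!.
move_orders = math.factorial(pieces_mid_game)
print(f"100! ~ {move_orders:.3e}")

# Now assume each piece has only 5 candidate moves (an illustrative
# lowball; real options would be far higher). One turn's raw
# branching factor becomes:
moves_per_piece = 5
one_turn_branching = move_orders * moves_per_piece ** pieces_mid_game
print(f"one-turn branching ~ {one_turn_branching:.3e}")
```

Even with these toy numbers, a single turn branches into more than 10^227 continuations, which is the point: no tree search can look meaningfully far ahead in that.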
I maintain that unless quantum computers work exactly as advertised, the AI *cannot* effectively predict future game states to any overwhelmingly useful degree. Thus, based on Monte Carlo statistical approaches, such AIs can be at best only marginally better than the best humans/human teams.
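On the Monte Carlo point, a toy sketch (nothing to do with Macysburg specifically; the win probability is a made-up stand-in for real playouts) of why such estimates improve so slowly: the value estimate is just a sample mean, so its standard error shrinks only as 1/sqrt(n), regardless of how large the game tree is.

```python
import random

def playout(win_prob: float) -> int:
    """Stand-in for one random rollout: 1 for a win, 0 for a loss."""
    return 1 if random.random() < win_prob else 0

def mc_estimate(win_prob: float, n: int) -> float:
    """Monte Carlo value estimate: the mean of n random playouts."""
    return sum(playout(win_prob) for _ in range(n)) / n

random.seed(0)  # fixed seed so the sketch is repeatable
for n in (100, 10_000):
    est = mc_estimate(0.55, n)
    print(f"{n:>6} playouts -> estimate {est:.3f}")
```

Each additional decimal place of confidence costs roughly 100x more playouts, which is why a statistical evaluator can be good without being overwhelmingly better than strong human judgment.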