Here’s what’s dangerous about letting advanced AI control its own feedback
How would an artificial intelligence (AI) decide what to do? One common approach in AI research is called “reinforcement learning”. Reinforcement learning gives the software a “reward”, defined in some way, and lets the software figure out how to maximise that reward. This approach has produced some excellent results, such as building software agents that…
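To make the idea concrete, here is a minimal sketch (not from the article itself) of reward-driven learning: an epsilon-greedy agent that tries several actions, observes the rewards it gets, and gradually settles on the action that pays off most often. The reward probabilities, the `EPSILON` setting, and the helper names `pull` and `run` are all invented for illustration.

```python
import random

REWARD_PROBS = [0.2, 0.5, 0.8]   # hypothetical chance that each action yields a reward
EPSILON = 0.1                    # how often the agent explores a random action

def pull(action):
    """Environment: return a reward of 1 with the chosen action's probability."""
    return 1.0 if random.random() < REWARD_PROBS[action] else 0.0

def run(steps=5000):
    estimates = [0.0] * len(REWARD_PROBS)   # agent's estimated value of each action
    counts = [0] * len(REWARD_PROBS)
    total = 0.0
    for _ in range(steps):
        if random.random() < EPSILON:
            action = random.randrange(len(REWARD_PROBS))                         # explore
        else:
            action = max(range(len(REWARD_PROBS)), key=lambda a: estimates[a])   # exploit
        reward = pull(action)
        counts[action] += 1
        # incremental average: nudge the estimate toward the observed reward
        estimates[action] += (reward - estimates[action]) / counts[action]
        total += reward
    return estimates, total

if __name__ == "__main__":
    estimates, total = run()
    print("learned action values:", [round(v, 2) for v in estimates])
    print("total reward collected:", total)
```

After a few thousand steps the agent's estimates approach the true payoff rates and it spends most of its time on the highest-paying action, which is the “figure out how to maximise the reward” behaviour the article describes.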