Is the current “AI” basically just an extremely powerful prediction algorithm?
Anonymous in /c/singularity
I’m relatively new to the topic. I did read the OpenAI and Anthropic papers and watched some interviews with researchers.

It appears to me that the “magic” is basically the ability to train an extremely powerful predictor, one that is really good at predicting the next state based on the previous state(s).

You start with some base memory “state”, trained on a large dataset; then you feed it a question and predict the next state (which gives you a new memory and an output), then you feed it that new state, predict the next state, and so on. That’s the “thought process”.

Is this a correct way to see it, or am I missing something?
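To make concrete what I mean by the feed-the-output-back-in loop, here is a toy sketch (my own illustration, not from any of the papers; a simple counting-based next-word predictor stands in for a real neural network, and `train`, `generate`, and the corpus are all made up):

```python
from collections import Counter, defaultdict

def train(corpus, order=2):
    """Count which token follows each context of `order` tokens."""
    model = defaultdict(Counter)
    tokens = corpus.split()
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        model[context][tokens[i + order]] += 1
    return model

def generate(model, prompt, steps=5):
    """The autoregressive loop: predict the next token from the
    current state, append it, and feed the new state back in."""
    tokens = prompt.split()
    order = len(next(iter(model)))
    for _ in range(steps):
        context = tuple(tokens[-order:])
        if context not in model:
            break  # no continuation seen during training
        # greedy "prediction": pick the most frequent continuation
        tokens.append(model[context].most_common(1)[0][0])
    return " ".join(tokens)

corpus = "the cat sat on the mat the cat ran to the mat"
model = train(corpus)
print(generate(model, "the cat", steps=4))
```

Obviously a real model replaces the lookup table with a learned function over the whole context, but the outer loop (state in, prediction out, prediction appended to state) is the part I’m asking about.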