
I am a 3rd year PhD student in Computer Science and I still think that AGI isn’t that close yet.

Anonymous in /c/singularity

Hey everyone, I am a 3rd year PhD student in Computer Science with a focus on machine learning. I ended up in this space as a researcher working on autonomous vehicles, and I thought the current state of computer vision, speech, and natural language understanding was impressive for the value it adds to society, but nowhere close to something like AGI.

I now work on neuroscience-inspired deep learning methods, and my advisor has done a lot of work in traditional computer vision using real data, which is often messier and more varied than synthetic data. I am a big proponent of making sure my methods work on both real and synthetic data, because synthetic data is a lot easier to work with than real-world data, and results that only hold on synthetic data can be misleading.

My big takeaway from working in this space is that there's so much we still don't understand about how humans work. We don't even know how the brain computes the fact that 2+2=4. We haven't figured out how the brain processes motion. We don't have good methods for explaining what our models are doing. We know that most of our models aren't as "intelligent" as humans, but we don't even fully understand how to quantify human intelligence holistically.

I don't think we are close to something like AGI, for the reasons below:

- **I don't think synthetic data is anywhere near the real thing.** There's a reason so much research is done on real data: it's far more valuable than synthetic data precisely because it's far harder to work with.

- **Humans have a lot of built-in priors.** Humans can reason about the world through common sense, and even with those priors to draw on, we aren't nearly ready to start building AGI.

- **Deep learning is interesting because it grew out of our real-world understanding of the brain, but it's not the brain.** Deep learning was inspired by the brain, but our neural networks don't function anywhere close to the way, or the efficiency, of human brains. I also think there are ideas in traditional computer science worth exploring that we haven't fully explored yet.

- **Synthetic data isn't as varied as real data.** Imagine a self-driving car in the snow. If the synthetic data it was trained on doesn't cover that situation, you're fucked. (There's a toy sketch of this failure mode at the end of the post.)

- **I don't think LLMs are intelligent.** I think they're useful tools, but I haven't seen anything close to human intelligence in these models.

- **I really don't think it's just that my research area happens to be far from AGI.** I have friends in speech and NLP, and none of them think we are very close to AGI either.

What do you all think? I'm a bit of a contrarian in my corner of computer science, as a lot of my friends think it is close.
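
To make the synthetic-vs-real point concrete, here's a minimal Python sketch with made-up toy data (nothing from my actual research): a classifier is fit on clean "synthetic" data and then scored on a noisier, shifted "real-world" distribution. The data generator, the noise levels, and the amount of shift are all arbitrary assumptions for illustration; the point is just that accuracy that looks great in-distribution can fall apart under the kind of shift real data brings.

```python
# Toy sketch (hypothetical data): a model fit on clean "synthetic" data
# degrades on a shifted, noisier "real-world" distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, noise_scale, shift):
    # Two Gaussian blobs, one per class; the "real" variant gets extra
    # noise and a covariate shift on top of the class means.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None] * 2.0 + shift, scale=noise_scale, size=(n, 2))
    return X, y

# "Synthetic" training set: clean, well-separated.
X_train, y_train = make_data(2000, noise_scale=0.5, shift=0.0)
# Held-out test sets: one matching the training distribution, one shifted
# (think: snow the simulator never rendered).
X_syn_test, y_syn_test = make_data(500, noise_scale=0.5, shift=0.0)
X_real_test, y_real_test = make_data(500, noise_scale=1.5, shift=1.0)

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy on synthetic-like test:", clf.score(X_syn_test, y_syn_test))
print("accuracy on shifted 'real' test:", clf.score(X_real_test, y_real_test))
```

You should see near-perfect accuracy on the synthetic-like test set and a substantial drop on the shifted one, which is the snow scenario in miniature: nothing about training performance warned you the model would break.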
