A Path Towards AGI?
While we still don’t fully grasp intelligence, concept models & step-reasoning may let us simulate it.

Samuel Joseph Troyer | January 6, 2025

Transformers can’t obtain a conceptual understanding of their environment. Therefore, they cannot achieve AGI—let me explain …

Language, for a human, is one way of describing our perception of the world; language, for transformers, is […]