Seeking a Jeff Hawkins quote

I recall hearing Jeff Hawkins at some point say that he believes AGI (or similar) will almost certainly not happen through LLMs, because they use so much energy. Does anyone know the quote I’m thinking of, or where I might be able to find it? Even date ranges for when it was recorded or released would help with my search.

If you’re confident he hasn’t said that, or that there’s a quote I may have misheard, I welcome that too :sweat_smile:

There are a bunch of related quotes in this interview over on the Numenta site - Q&A with Jeff Hawkins on ChatGPT, the Brain, and the Future of AI

Hm, thanks for the link. The closest quote I found was:

This doesn’t mean I am against more limited forms of AI, such as ChatGPT. In fact, my company, Numenta, has created technology that greatly lowers the cost of running language models such as GPT. We are excited by how we can not only make these models less expensive, but also greatly reduce the energy required to run them. But ultimately, we will build machines that are intelligent in the same way we are, and those machines will work similarly to the brain.

I’ve been thinking about it, and I think it was less anti-LLM and more anti-hyperscaling. It wasn’t about excluding LLMs so much as the expectation that bigger LLMs will not reach AGI by themselves.

There might be something in this interview; ignore Baratunde’s obsession with ChatGPT.

This is a long interview by a layman where he asks all the usual questions.

The capabilities video that @nleadholm presented at the meetup in December explains the difference in compute needed and why. I set this to start at that point.

In that meetup, Hawkins continually handed off questions to the rest of the team. Quotes from the team speak for the project, how the theory works, and the research results so far, because the entire team is brilliant, and it’s their work, built on what Hawkins knows, that made this project possible. Eight people over five years brought it to where it is today. Quote them all.


A bit late, but maybe it’s this one from Lex Fridman Podcast #25 in 2019? However, this was before LLMs and the data center boom.

Lex Fridman: “You think, if we just scale things up significantly, so take these dumb artificial neurons, […] if we just have a lot more of them, do you think some of the elements that we see in the brain may start emerging?”

Jeff Hawkins: “No, I don’t think so. We can do bigger problems and of the same type; I mean, it’s been pointed out by many people that today’s convolutional neural networks aren’t really much different than the ones we had quite a while ago. Just, they’re bigger and train more, and have more labeled data and so on, but I don’t think you can get to the kind of things I know the brain can do, and that we think about as intelligence, by just scaling it up. So that may be, it’s a good description of what’s happened in the past, what’s happened recently with the re-emergence of artificial neural networks. It may be a good prescription for what’s going to happen in the short term, but I don’t think that’s the path. I’ve said that earlier, there’s an alternate path.”


That was round one. He was so uninteresting as an interviewer in the round two video that I didn’t watch that one. But the questions he asks in it are ones that ordinary people would ask, not what an expert would ask. That makes it more accessible to a non-technical audience, so there might be some good stuff in there. There was one with three dudes who were so annoying I could barely watch it all the way through; at the end they had a little discussion about why self-modifying evolving machines were possible, despite a clear explanation from Hawkins earlier of why they aren’t. There was another from around the same time that I couldn’t watch more than halfway through :grinning_face_with_smiling_eyes: he stopped partway through to explain how he had ChatGPT summarize 45 years of AI research to prepare himself. It’s a strange world out there.

Just remembered that I read the last book on my iPad using Libby, and there might be something in there. The biggest difference in power required comes down to how many steps it takes to learn and identify objects, and that is an enormous gap.

I don’t think you can get to the kind of things I know the brain can do, and that we think about as intelligence, by just scaling it up

Thanks! I’m not sure if that was exactly what I was looking for, but I think the sentiment is right :folded_hands:
