Trying to understand voting process

Ah, where you’re coming from makes a lot more sense to me now. Thank you for that :slight_smile:

While TBT leans pretty heavily on some of the higher-level principles of the brain's functioning, it doesn't try to constrain itself in the same way something like an HTM neuron would. If you're looking for a more biologically aligned approach, I would honestly look there. Here's a link to their ML guide on it: found here, as well as a short FAQ on the differences between TBT and HTM: found here.

If you're interested in learning more about the voting mechanisms, I dug into them a bit for another one of @ElyMatos's posts. You can find info on that here: About compositionality and heterarchy - #6 by HumbleTraveller

I believe that response is pretty accurate to what's going on under the hood, though I agree with you, it would be helpful to have a dedicated video or an authoritative post on it from the TBT team themselves.

Re. tolerance to noise…
I'm not 100% sure how the team is handling this in code, but they've spent quite a bit of time researching the sparse activation patterns in columnar layers 4 and 5b. They determined something along the lines of 3×10^211 possible neuronal sequencing patterns per time step within a given column's search space (with something like a 2% pattern overlap). At the time, this struck me as incredibly noise/fault tolerant.
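You can get a back-of-envelope feel for why sparse codes are so noise tolerant with a bit of combinatorics. The numbers below (2048 cells, 40 active) are illustrative values I picked for the sketch, not the figures above; the point is just that the code space is astronomically large while accidental overlap between random patterns is vanishingly rare.

```python
from math import comb

# Hypothetical population: n cells, w active per time step.
n, w = 2048, 40  # illustrative values only

# Number of distinct sparse activation patterns.
patterns = comb(n, w)
print(f"{patterns:.3e}")  # on the order of 1e84

# Probability that a random pattern shares at least half its active
# bits with a given fixed pattern (a rough false-match estimate).
false_match = sum(
    comb(w, k) * comb(n - w, w - k) for k in range(w // 2, w + 1)
) / patterns
print(false_match)  # effectively zero
```

With that kind of separation, flipping a handful of bits almost never turns one valid pattern into another, which is the intuition behind the fault tolerance mentioned above.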

I'm not sure if they've worked this into the TBT framework yet, but I'd be surprised if it isn't something they're at least actively working towards. Perhaps @nleadholm or @vclay could provide more insight here?
