Interval Timer Implementation - Looking for Feedback on Approach

Hi everyone,

I am Daniel and I am starting to work on the Interval Timer feature for modeling object behaviors. Before diving into the code, I wanted to share my understanding of the issue and get some feedback to make sure I am on the right track.

After reading the documentation and exploring the codebase, here is my current understanding:

What does the interval timer do?

The timer works like a clock that marks the time since the last significant event. Instead of continuous time, it uses discrete “time cells”. Each tick represents an interval, and when a significant event happens, the timer resets to tick 0.

The main purpose is to improve learning of dynamic behaviors.

How does it integrate with the architecture?

The timer maps to the cortical column layers like this:

L1 receives the current tick from the timer (broadcast to all LMs)
L4 receives features from the Sensor Module (already implemented)
L5b stores the current state in the sequence (new, advances when timer resets)
L6a stores spatial location (already implemented)

The timer would be a global component in MontyBase that broadcasts its state to all Learning Modules through the existing step flow.

Main components to implement

  • IntervalTimer class with methods like get_current_tick(), reset(), step(), and set_speed()
  • Modifications to the State class to include tick information
  • Buffer updates to store temporal information with observations
  • Hypothesis expansion from (location, rotation) to (location, rotation, state)
  • Object model changes to associate features with both location and temporal state

Questions I have

What defines a “significant event” that resets the timer? Should it be based on feature changes detected by the SM, state changes detected by the LM, or something else?

How many time cells should the timer have by default? The documentation shows 12 as an example, but is this configurable per experiment?

When the timer reaches the maximum tick without a reset, should it stay at the last tick or do something else?

Proposed approach

Start with a simple IntervalTimer class with configurable num_time_cells (default 12)
Use logarithmic resolution so short intervals have more precision than long ones (as suggested in the documentation)
Add tick to non_morphological_features in State to minimize changes to the existing structure
Make the timer optional in MontyBase so existing experiments keep working
Broadcast the timer to all LMs, including static morphology models; static LMs will simply ignore the temporal information
Use the existing FeatureChangeFilter pattern as inspiration for detecting significant events
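To make the logarithmic-resolution point concrete, here is a sketch of how a duration could map onto time cells. Everything here is my own assumption for illustration (the function name, the base of the mapping, `max_duration`); the documentation only suggests that short intervals should get finer precision:

```python
import math


def duration_to_tick(duration, num_time_cells=12, max_duration=100.0):
    """Map the duration since the last reset onto a discrete time cell.

    Logarithmic resolution: the mapping is steepest near zero, so short
    intervals are spread over many cells (fine precision) while long
    intervals share coarser cells. All parameter choices are illustrative.
    """
    if duration <= 0:
        return 0
    # Normalize to (0, 1], then squash with log1p so that
    # duration == max_duration lands on the last cell.
    frac = min(duration / max_duration, 1.0)
    tick = int(math.log1p(frac * (math.e - 1)) * num_time_cells)
    return min(tick, num_time_cells - 1)
```

With these defaults, the first half of `max_duration` already covers well over half of the cells, which is the kind of asymmetry the documentation hints at.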

I am open to discussing any of these points. If there are many things to clarify, I am also available for a meeting to talk through the details.

Looking forward to your feedback!


Hi @Daniel_Lizarazo welcome to the forum and thank you for looking into this item and volunteering your help!

Your understanding of the problem and proposed implementation is already really good, and most of your open questions are also still open questions for us that we would have to figure out empirically.

Before I go into detailed replies to the points you listed, I just wanted to mention that there is a video on YouTube where we discuss this part specifically, if you haven’t seen it yet: https://youtu.be/01U-ZXEjEsc?si=2w1yE49mFKm51Uwl Also, as this topic is quite on the cutting edge of our theory and research, and our own thinking around some of the details is still evolving, I think it would be great to meet some time to talk about this item in more depth. Especially since it relates to a lot of other items here (Object Behaviors) and is likely tricky to implement and test in isolation. But maybe we can come up with a good plan for that without exploding the scope of what you signed up for :slight_smile:

Now, to some of your points specifically:

re. the Main components to implement list: As I just mentioned, the scope of this item can easily get quite large, so I would try and keep it focused on the interval timer and associated infrastructure for now. Items like Hypothesis space including state, object models including state, and sensor modules detecting changes are all related but listed as separate items. Of course, we are super happy if you want to contribute to those as well, but I’m trying to define a self-contained first step instead of having you solve all the complexities of modeling object behaviors. I’m happy to talk more about your questions around those other items as well, but will leave them out for now. From your bullet list, I would focus on the first 3 items first, simulate significant events, and defer incorporating this info in models or hypotheses for now. One additional item would be to add a way for individual learning modules to adjust the speed of the global timer.

re. How many time cells should the timer have by default?: This is an open question that we would like to determine empirically, so having a parameter for this, as you suggest, would be great.

re. What happens when the timer reaches the end without a significant event: We don’t have a definite answer for this yet. Something we propose is that as time passes, the threshold for detecting a significant event gets lower and lower. For now, I would recommend just implementing a reset of the timer when the end is reached, and we see how far this gets us.

re. modifying the State class: This is a bit of a tricky question that would be great to discuss with you and the team a bit more. Conceptually, the tick information seems quite different from the CMP messages, since it is a global signal that all the LMs receive. But I agree that adding it to the State class seems like the simplest solution for now. I would, however, add it as a new field instead of as part of the non_morphological_features.
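To illustrate the “new field” suggestion, here is a purely hypothetical simplification (the real `State` class in tbp.monty carries much more than this, and these field names are placeholders):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class State:
    """Simplified stand-in for a CMP message."""

    # Existing (placeholder) contents of the message.
    location: tuple = (0.0, 0.0, 0.0)
    non_morphological_features: dict = field(default_factory=dict)
    # New: the global timer tick as its own field, kept separate
    # from the per-observation non-morphological features.
    tick: Optional[int] = None
```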

I hope this helps! I am happy to talk more about any of those and also to set up a meeting sometime. Just DM me for scheduling if you are interested in that.

Best wishes,

Viviane


Hi @Daniel_Lizarazo , was great meeting with you just now, and thanks again for your interest in the project. You should be able to create a personal fork from the TBP organizational fork below.

That way you (and others) can push changes to TBP’s feat.modeling_behaviors, serving as a location to collect all the changes we need in order to enable modeling of object behaviors in Monty. Let me know if you run into any issues.


Here is the Excalidraw board we were using.


Thanks for already sharing this @nleadholm. It was great meeting you @Daniel_Lizarazo and thanks for your interest in helping us on this! Here are a couple of meeting notes:

  • We talked about how the interval timer ties into all the other items that need to be completed to model and recognize object behaviors. It is one of the earliest dependencies so it is a great point to start. It also doesn’t depend on any other work so it seems like a good task to isolate.

  • Since the interval timer is part of a larger project (modeling object behaviors), and it will take a while to validate that it works in combination with our other plans for implementing this, we will develop on a feature fork (link shared with Niels). This follows our workflow as outlined in this RFC: tbp.monty/rfcs/0014_conducting_research_while_building_a_stable_platform.md at main · thousandbrainsproject/tbp.monty · GitHub. We will not integrate the interval timer on its own into tbp.monty. Instead, we will work on the feature repository until the first (or potentially second) green box is reached and then do an implementation project to integrate the entire solution into Monty.

  • There are two steps involved that you can start with (yellow area).

  1. Global Interval Timer Represents Time that has Passed since Reset:

Required changes:

  • New Monty class for interval timer
    • Needs to be included in Monty setup & configs
  • Timer class has step and reset function + duration variable
  • Timer class defines how much to increment duration at each step (speed parameter)
  • Monty steps the timer

How this could integrate into Monty’s architecture:

Rough sketch of basic Interval timer class:
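The drawing shared here isn’t embedded in this thread, so as a stand-in, a minimal sketch of what such a class might look like (class and attribute names are my assumptions, following the required-changes list above):

```python
class IntervalTimer:
    """Global timer tracking the duration since the last significant event.

    Hypothetical sketch: `speed` controls how much `duration` advances per
    Monty step, and `max_duration` caps the timer until the next reset.
    """

    def __init__(self, speed=1.0, max_duration=12.0):
        self.speed = speed
        self.max_duration = max_duration
        self.duration = 0.0

    def step(self):
        """Advance the timer by one Monty step, scaled by the speed dial."""
        self.duration = min(self.duration + self.speed, self.max_duration)

    def reset(self):
        """Reset when a significant event occurs (or the end is reached)."""
        self.duration = 0.0

    def set_speed(self, speed):
        """Let individual learning modules adjust the global timer's speed."""
        self.speed = speed
```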

(in future versions, the step function could also increment by actual time measurements in the environments (*self.speed) but since we don’t work in a real-time environment, this would be the easiest way to start. We could also test using the more neural-like representation with a log resolution but it doesn’t seem like a hard requirement that we need to start with for a first implementation.)

  2. Global Interval Timer Sends Timing Input to LMs

Required changes:

  • Monty coordinates sending timing signals to LMs (similar to SM to LM and LM to LM info routing)
  • Either involves updating CMP format to include interval timing or adding another channel for communicating it to the LM (this is still an open question)

How this could integrate into Monty’s workflow (adding the dark blue line)

Open Questions:

  • Should it be truly global or do we have a timer-lm connectivity matrix?
    V: for now would start with global
  • Should timing be part of CMP message or a separate signal that is only associated with input inside the LM?
    V: for now attaching to CMP seems easiest but may be more elegant/brain like as a separate input that is only associated with CMP input inside the LM.
  • How do we broadcast the reset signal?
    V: Even if an LM doesn’t receive input, it should know about resets happening. This would argue for a separate input channel.

Other:

  • Since there are various other dependencies to resolve before we can learn behavior models and test them, for now the changes here involve adding infrastructure to Monty and covering it with unit tests. We don’t need to run any experiments or analyze results.

I hope this helps :slight_smile: Let me know if you run into any issues or questions.

Best wishes,

Viviane


Hello,

I don’t want to hijack this thread with potentially unrelated questions, especially before doing the due diligence of reading everything this project has produced about this absolutely fascinating topic (this thread actually made me buy some books; what can you do? this project is full of gems like this one…). So, I apologize in advance for some lack of proper grounding. Feel free to ignore me. The question that popped into my head was this: I’m aware that Monty is resilient to spatial noise, but will it be equally resilient to the nondeterministic latency of computer hardware when it comes to interpreting and memorizing time signals? Won’t the tick signatures produced by this solution be tightly coupled to the execution speed of the experiments, per execution? As opposed to the universality of the spatial experience, whose end results (including noise) depend only on configuration and not so much on when each particular machine produces an event? Will a movie recorded on your computer be replayable on mine? Will it even be replayable on yours?

Thank you!
Nuno


Hi @nunoo good questions!

I don’t have a definite answer to how robust Monty will be to noise in time signals, as we haven’t implemented this yet. But the idea is that if there is a consistent latency or a totally different speed than what was learned, Monty will adjust its expectations accordingly (see Speed Detection to Adjust Timer).

Ideally, there is also some amount of tolerance to individual noisy timing inputs (could use a similar mechanism as we use to add tolerance for noise in locations), but we are usually quite good at picking up even slight differences in the timing of a sequence (like a melody), so we might not want too much tolerance here.

Maybe the more important mechanism would be that, just because one observation didn’t exactly match the expected timing, we don’t completely eliminate that hypothesis, and following consistent observations can help us still recognize the sequence. This is basically what the evidenceLM mechanism allows us to do, and it would also apply to timing signals that cause a prediction error due to noise.
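As a toy illustration of that idea (the numbers, thresholds, and function name are made up, and this is far simpler than the actual evidence updates in Monty): one out-of-tolerance timing observation subtracts some evidence but doesn’t eliminate the hypothesis, so a mostly consistent sequence can still win out.

```python
def update_evidence(evidence, timing_error, tolerance=0.5, reward=1.0, penalty=0.5):
    """Add evidence for a well-timed observation, subtract some for a mismatch."""
    if abs(timing_error) <= tolerance:
        return evidence + reward
    return evidence - penalty


# A mostly well-timed sequence with one noisy outlier:
errors = [0.1, 0.2, 3.0, 0.1, 0.3]
ev = 0.0
for e in errors:
    ev = update_evidence(ev, e)
# Evidence stays positive despite the outlier, so the hypothesis survives.
```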

Hope that makes sense!

Best wishes,

Viviane


Thank you Viviane !

After I raised my question I realized that the clock is logical and ticks with the steps. The number of ticks between events automatically aligns with the processing capabilities of the machine executing those steps, so the number of ticks between significant events is always the same, regardless of the machine. Wall-clock time is irrelevant, and machine quirks are irrelevant in the sim. There, Monty’s logical clock imposes the frequency, even when that world is simulated to go faster or slower. The speed dial is crucial to infer the same sequence at different speeds, but in the end the simulated experiment is completely scripted. Two musical notes may be separated by 2 seconds in the real world, but even if one particular machine A takes 10 seconds to go through 5 steps and another machine B just 1 second, they both see 5 ticks between those notes in the sim.

However, when machine A meets the world, it hears that second note after 1 of its ticks, while machine B hears it after 10 of its ticks. Machine A accelerates its clock to keep up; machine B slows it down. This makes transfer learning the same problem as recognizing accelerated or slowed versions of a modeled sequence, both possibly tackled by the same speed dial. But as you mention, this speed dial seems too coarse, doesn’t it? Especially as I read about how finely grained human perception of time can be, and the huge effect attention has on its dilation. The way it is currently modeled also starts to resemble part of a control-theory problem.

Even as this led me to ponder how different and personal the perceived speed of the world must actually be among humans, I could not avoid noticing, as you also note in your answer, that a very similar issue must exist with spatial inference itself, as Monty meets the world at different places and with a different resolution than the ones it was trained with. Maybe the big difference is that a lost tick inherently (i.e. mathematically) acts more like an off-object observation, making it much more disruptive for inference? Can speed adjustment really compensate in time for a match?

I realize I need to dig deeper into spatial inference in Monty before coming back to this issue. Your answer just reinforced the notion that my ignorance about the basics is no longer sustainable. I’ve actually been trying to reduce the number of concepts in Monty I would need to handle in the first place through an LLM-driven ontological compression based on code, docs, and video transcripts, but the result is still just too overwhelming. There are no meta shortcuts here… it just takes time I guess :slight_smile: Just as an aside, while starting in that direction, I played with the idea of whether recognizing sequences could be modeled as sensing the shape of the time derivative as the object moves, just another object, composed by the brain itself… I plotted these shapes for the stapler and other shapes statically moving according to their intrinsic behavior (e.g. a hinge), but didn’t find them distinct enough.

Thank you!


Hi @nunoo

Yes, I think using speed adjustments will allow Monty to generalize well to seeing the same temporal sequence at different speeds.

But as you mention, this speed dial seems too coarse, doesn’t it?

I don’t think this is an issue; you can make the time representation as dense or coarse as you want (in computers, at least). What I meant was that we don’t want to add noise robustness by allowing a large range of temporal offsets to be consistent with a learned interval. What I think is the better mechanism is what we do for spatial matching, where one inconsistent observation adds negative evidence but isn’t sufficient to eliminate the hypothesis completely.

but the result is still just too overwhelming. There are no meta shortcuts here… it just takes time I guess :slight_smile:

100% agree, it’s a pretty steep learning curve as you have to learn completely new models. We’re trying to figure out good ways to make this a bit easier, so if you come across any “aha” moments, concepts that could be explained better, or other ideas about what helped you understand something, feel free to let us know!

Best wishes,

Viviane
