Hey there,
I was thinking through some potential replacements for the Cartesian coordinate system. Some of the ones that came to mind: spherical, octree, various flavors of hexagonal, truncated octahedron (courtesy of @nleadholm), and rhombic dodecahedron.
Spherical seemed to have issues with touch-based sensor integration, and octrees, while promising, have various issues. Additionally, an octree still uses Cartesian coordinates, so it's not really much of a departure from what's currently being used.
I decided to compare the other four. Here is what I ended up calculating. I'd be curious to know the community's thoughts and opinions. Did I calculate anything wrong? Is there a coordinate system you think would be better?
| Operation | Cube Coordinates (control) | Hexagonal Lattice (uniform) | Hexagonal Lattice (offset) | Truncated Octahedron | Rhombic Dodecahedron |
|---|---|---|---|---|---|
| Neighbor lookups | O(6) | O(8) | O(12) | O(14) | O(12) |
| A* Pathfinding | O(6^d) | O(8^d) | O(12^d) but anisotropic | O(14^d) | O(12^d) |
| Memory Usage | 1.15x but lower per-node cost | 1.15x but lower per-node cost | 1.15x | 1.15x | 1.0x |
| Geometric Computation | Integer math | Mix of int and floating-point | Floating-point math | Floating-point math | Integer math |
| Collision & Intersection | O(6) | O(8) | O(12) but more complex | O(14) | O(12) |
| Anisotropy vs. Isotropy | 0.00 (Perfectly Isotropic) | ~9.06 × 10⁻¹⁷ (Nearly Isotropic) | ~7.85 × 10⁻¹⁷ (Nearly Isotropic) | 0.0718 (Low Anisotropy) | 0.00 (Perfectly Isotropic) |
Some quick comparison notes:
(spoiler alert) A Rhombic Dodecahedral (RD) grid is going to be the most computationally efficient of all those described above. As such, most of these comparisons are going to be comparing the others against RD. That said, there are some interesting considerations to be had when comparing RD against uniform hexagonal latticing. Details below.
RD vs Truncated Octahedron (TO)
- TL;DR: RD is significantly more efficient than TO, especially in the following: pathfinding (~4.7x fewer nodes expanded at depth 10), memory efficiency (~15% reduction), removal of floating-point overhead (~3x reduction) and geometric computations (~17% fewer ops).
Neighborhood lookup and connectivity:
This one's pretty straightforward. RD only has 12 faces, TO has 14. For graph-based lookups, RD's adjacency operations scale as O(12) ≈ O(1), whereas TO's scale as O(14) ≈ O(1) but with 17% more operations per step. Because of this, RD is roughly 17% more efficient when it comes to neighbor lookups.
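For concreteness, here's a minimal Python sketch of what an O(12) RD neighbor lookup could look like. This assumes an FCC integer-coordinate convention (nodes at integer (x, y, z) with even coordinate sum), which is my own illustrative choice rather than anything in Monty:

```python
# The 12 nearest-neighbor offsets of an FCC (rhombic dodecahedral) lattice:
# all sign/axis combinations of (±1, ±1, 0).
RD_NEIGHBORS = [
    (dx, dy, dz)
    for dx in (-1, 0, 1)
    for dy in (-1, 0, 1)
    for dz in (-1, 0, 1)
    if abs(dx) + abs(dy) + abs(dz) == 2  # exactly two axes move by ±1
]

def rd_neighbors(cell):
    """Return the 12 neighbors of an FCC cell using pure integer math."""
    x, y, z = cell
    return [(x + dx, y + dy, z + dz) for dx, dy, dz in RD_NEIGHBORS]
```

Note that the lookup is just 12 fixed integer offsets applied per cell, with no floating-point work at all, and neighbors stay on the lattice (coordinate sums remain even).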
A* Pathfinding
A* operates over a graph where each node represents a spatial cell and each edge represents a valid movement. The “branching factor” (the number of neighbors per node) determines the number of nodes expanded per step.
* RD has 12 neighbors
* TO has 14 neighbors
We can assume that A* explores on average b^d nodes for branching factor b over depth d, such that RD → O(12^d) and TO → O(14^d). Since RD has a smaller branching factor, the search expands ~17% fewer nodes per step. If we take an example depth of d = 10, RD expands ≈ 6.2 × 10¹⁰ nodes, whereas TO expands ≈ 2.9 × 10¹¹ nodes.
Because of this, we can assume that RD is ~4.7x faster in an A* pathfinding task (the ~17% per-step saving compounds with depth)!
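The depth-10 numbers under this simple b^d model are easy to check directly:

```python
# Nodes expanded under the b**d branching model at depth d = 10.
rd_nodes = 12 ** 10   # 61_917_364_224  ≈ 6.2 × 10^10
to_nodes = 14 ** 10   # 289_254_654_976 ≈ 2.9 × 10^11

# The per-step ratio (14/12) compounds over the 10 steps.
print(to_nodes / rd_nodes)  # ≈ 4.67
```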
Memory storage efficiency:
Each spatial cell requires an index in a hashmap or voxel grid (Monty uses voxel grids). RD naturally aligns with an FCC lattice, which allows it to get away with using integer indexing. TO requires more complex indexing due to its shape.
For a 3D grid of size N³, storing all grid points: RD needs ~15% fewer cells for equivalent space coverage than TO. This translates to 15% lower memory usage for spatial indexing.
Geometric Computation (Intersection, Volume, Distance):
- For distance calculations between nodes, RD uses integer-based calculations due to FCC alignment. TO requires floating-point arithmetic, which is about 3× more computationally expensive.
- For collision checks and volume computations, RD has 12 faces and TO has 14, so TO requires ~17% more face-intersection tests. Additionally, each TO face is a mix of hexagons and squares, requiring more complex surface-area computations.
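As an illustration of the kind of integer math FCC alignment enables, here's a hedged sketch of a closed-form hop distance between two RD cells. It assumes the same FCC convention as above (integer coordinates with even sum, moves of (±1, ±1, 0)); this is a sketch of one known FCC distance formula, not Monty's implementation:

```python
def fcc_hops(a, b):
    """Number of RD-cell hops between FCC sites a and b (integer 3-tuples,
    each with an even coordinate sum)."""
    dx, dy, dz = (abs(p - q) for p, q in zip(a, b))
    # Each hop changes exactly two coordinates by 1, so at least
    # (dx + dy + dz) / 2 hops are needed; and since a single axis can
    # only advance by 1 per hop, at least max(dx, dy, dz) hops too.
    return max(max(dx, dy, dz), (dx + dy + dz) // 2)
```

The whole computation stays in integer arithmetic; the `// 2` is exact because differences of even-sum lattice points always have an even coordinate sum.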
RD vs Offset Hexagonal stacking (OHS)
- TL;DR: RD is generally more efficient in computational tasks, especially in pathfinding (~10-15% faster), memory usage (~15% lower), and geometric calculations (~10-20% faster). RD also ensures uniform movement cost, making real-time navigation more predictable. However, one thing that may set OHS apart is the planar bias it introduces, which could provide certain advantages depending on Monty’s application.
Neighborhood lookups:
RD has 12 neighbors. An offset hexagonal lattice also has 12 neighbors (6 horizontal + 3 above + 3 below). Because of this, both are equally efficient.
Pathfinding Complexity (graph search):
For A* pathfinding, both RD and OHS have a branching factor of 12, meaning their number of expanded nodes is equivalent (O(12^d)). However, they differ in spatial connectivity: the geometric nature of RD ensures uniform movement cost regardless of direction, whereas OHS has anisotropic movement.
Moving within a single hexagonal layer is simple (6 neighbors), but moving between layers requires a vertical-alignment check. Because of this, some movements within OHS are longer than others, complicating heuristic computations in A* lookups.
Since RD offers uniform movement, its heuristic calculations are simpler, potentially making A* faster in real-time applications. Testing would need to be done to confirm this, but I'm estimating slightly better performance (~10-15% faster A* execution) due to the uniform distance metric.
Memory Storage:
Each lattice structure impacts memory usage based on the number of cells needed to represent a volume. RD naturally fits an FCC lattice, which is ~15% denser than OHS. This translates to a lower memory footprint for RD (~15% lower), as well as faster spatial indexing due to fewer elements needing to be stored.
Geometric computation (intersect, vol and distance):
RD aligns with an FCC grid, meaning distances can be computed using integer math. OHS requires hexagonal-to-Cartesian transformations, adding floating-point arithmetic overhead.
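To make that overhead concrete, here's a small sketch of one common axial-to-Cartesian conversion (pointy-top orientation, unit hex size; the exact constants depend on the orientation convention chosen, so treat this as illustrative rather than Monty's method):

```python
import math

SQRT3 = math.sqrt(3.0)  # the irrational row spacing forces floating point

def axial_to_cartesian(q, r):
    """Map axial hex coordinates (q, r) to Cartesian (x, y),
    pointy-top orientation, unit hex size."""
    x = SQRT3 * (q + r / 2.0)
    y = 1.5 * r
    return (x, y)
```

The √3 factor is what keeps this out of pure integer arithmetic: any in-plane distance or intersection test that round-trips through Cartesian space inherits the floating-point cost.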
RD has 12 identical faces. OHS cells require additional face-normal transformations (due to the lack of uniformity in their faces). This makes collision detection slower (requiring ~10-20% more operations).
RD vs. Uniform Hexagonal stacking (UHS)
- TL;DR: This one's a bit of a toss-up in my opinion. RD wins out in having better uniformity in its movement cost, it's more memory-efficient spatially, and it's able to leverage integer-based calculations. However, UHS has lower computational overhead in pathfinding tasks, more efficient adjacency lookups, and a (potentially significant) lower per-node memory cost.
- In my opinion, the winner here depends on the application. RD is superior where uniformity of movement cost really matters: physics sims, space/deep-sea navigation, drone nav, that sort of thing. However, UHS is better for environments with horizontal motion dominance (e.g. ground-based robotics). No clear winner on this one.
Neighbor lookup and connectivity:
Unlike with OHS, each UHS cell only has 8 neighbors (6 horizontal + 1 above + 1 below); RD, of course, has 12. This means that UHS has ~33% fewer neighbors per node, resulting in ~33% more efficient neighbor lookups than RD.
Pathfinding (graph search):
For A* pathfinding, the branching factor (b) affects the number of nodes expanded. RD → O(12^d); UHS → O(8^d). At a depth of d = 10, RD expands ~6.2 × 10¹⁰ nodes, whereas UHS expands ~1.1 × 10⁹.
Because of this branching difference, UHS explores ~58x fewer nodes at depth 10 than RD, which drastically reduces computational overhead.
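Again assuming the b^d model, the depth-10 counts can be checked directly:

```python
# Nodes expanded under the b**d branching model at depth d = 10.
rd_nodes = 12 ** 10    # 61_917_364_224 ≈ 6.2 × 10^10
uhs_nodes = 8 ** 10    # 1_073_741_824  ≈ 1.1 × 10^9

# The ratio is (12/8)**10 = 1.5**10.
print(rd_nodes / uhs_nodes)  # ≈ 57.7
```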
Memory Storage Efficiency:
As we said before, RD naturally aligns with an FCC lattice, which is 15% more efficient than a cubic grid. UHS has a lower neighborhood density, meaning it requires more nodes in order to achieve the same connectivity resolution.
For a fixed 3D volume, RD requires ~15% fewer nodes than UHS to represent the same space. However, UHS stores fewer neighbors per node (8 as opposed to 12), reducing per-node memory usage.
Overall, the two balance out, with UHS using slightly less memory per node but needing more nodes to achieve the same resolution. It's honestly a bit of a toss-up with this one.
It should be noted that we're not taking into account the index-compression techniques available to hex lattices. Axial coordinates, for instance, may allow us to further reduce the neighbor count of UHS (from 8 → 6). This applies to both the UHS and OHS coordinate systems.
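For reference, here's a minimal sketch of UHS adjacency under axial coordinates (my own construction for illustration: (q, r) axial in-plane plus an integer layer index z):

```python
# The 6 axial unit steps within a hex layer; stacking adds (0, 0, ±1).
AXIAL_IN_PLANE = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def uhs_neighbors(q, r, z):
    """Return the 8 neighbors of a UHS cell: 6 in-plane + 1 above + 1 below."""
    out = [(q + dq, r + dr, z) for dq, dr in AXIAL_IN_PLANE]
    out += [(q, r, z - 1), (q, r, z + 1)]
    return out
```

Like the RD case, adjacency itself is pure integer offsets; it's only the in-plane distance/geometry work that pulls in floating point.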
Geometric Computation:
RD is able to leverage integer math due to its FCC lattice alignment. UHS relies on a mix of integer and floating-point arithmetic (hex distance calculations in-plane, integer jumps for vertical movement). RD is going to outperform UHS here simply by removing that floating-point overhead (resulting in ~3x faster distance calculations).
For intersection and collision checks, RD has 12 faces, while UHS only has 8. This results in UHS requiring ~33% fewer face checks when handling collisions.
Once I get a little more familiar with Monty's code base, I may tinker around with trying to implement UHS. RD looks super interesting to me, but honestly, programming that is going to be a bit beyond my skill set. Would be cool to see someone else give it a shot though!
Update: Including cube coordinates as a baseline and calculating Anisotropy vs. Isotropy.
It should be noted that even though cube coordinates are listed as perfectly isotropic, this isn't actually the case. When we include diagonality calculations in the work, we actually find its index to be something like ~9.06 × 10⁻¹⁷. Also, because cubes don't handle diagonal neighbor lookups well, their neighbor connectivity should really be O(26) (counting all edge and corner diagonals in 3D), not O(6). However, to remain consistent across the table, I've left it as O(6) (I would need to adjust UHS as well if I updated cube to account for its diagonal limitations).
Also, because calculating the anisotropy is a bit more involved than the other row data, I’m going to include the steps used to perform the calculation.
Steps Taken to Calculate Anisotropy
Step 1: Define the Movement Connectivity of Each System
Each spatial system has a unique set of neighboring movement directions:
- Uniform Hexagonal Lattice (8 directions: 6 in-plane + 1 above + 1 below)
- Offset Hexagonal Lattice (12 directions: 6 in-plane + 3 above + 3 below)
- Truncated Octahedron Lattice (14 directions)
- Rhombic Dodecahedron Lattice (12 directions)
- Cube Grid (6 directions)
Step 2: Compute Movement Magnitudes
For each system, we calculated the magnitude of each movement vector:

‖Vᵢ‖ = √(xᵢ² + yᵢ² + zᵢ²)

where Vᵢ = (xᵢ, yᵢ, zᵢ) is a movement vector in the x, y, z dimensions.
Step 3: Compute Mean & Standard Deviation of Movement Magnitudes
We calculated:
- Mean movement magnitude:

  μ = (1/N) Σᵢ ‖Vᵢ‖

  This gives the average movement effort across all directions.
- Standard deviation of movement magnitudes:

  σ = √( (1/N) Σᵢ (‖Vᵢ‖ − μ)² )

  This captures the spread (variation) of movement efforts.
Step 4: Compute the Anisotropy Index
The anisotropy index is the normalized standard deviation:

A = σ / μ
If A = 0, the system is "fully isotropic" (equal movement cost in all directions), whereas higher values of A indicate "greater anisotropy" (i.e. some directions are more costly than others).
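To make the four steps concrete, here's a short Python version of the whole calculation. The direction sets are my own reconstruction (TO modeled as a BCC lattice with unit cube edge: 6 axis steps of length 1 plus 8 body-diagonal steps of length √3/2), so treat the geometry as an assumption; with it, the TO value lands on the ~0.0718 reported in the table:

```python
import math

def anisotropy(directions):
    """Anisotropy index A = sigma / mu over the step magnitudes (Step 4)."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in directions]  # Step 2
    mu = sum(mags) / len(mags)                                           # Step 3
    var = sum((m - mu) ** 2 for m in mags) / len(mags)
    return math.sqrt(var) / mu

# Cube grid: 6 axis-aligned unit steps.
cube = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

# Rhombic dodecahedron (FCC): 12 steps of (±1, ±1, 0), all length √2.
rd = [(dx, dy, dz)
      for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
      if abs(dx) + abs(dy) + abs(dz) == 2]

# Truncated octahedron (BCC, unit cube edge): 6 axis steps of length 1
# plus 8 body-diagonal steps of length √3/2 ≈ 0.866.
to = cube + [(sx * 0.5, sy * 0.5, sz * 0.5)
             for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]

print(anisotropy(cube))  # 0.0 (all steps identical in length)
print(anisotropy(rd))    # ~0 (floating-point noise at most)
print(anisotropy(to))    # ≈ 0.0718
```

The tiny ~10⁻¹⁷ values in the table for the hex lattices are this same kind of floating-point noise: their step lengths are equal in exact arithmetic, but computing them via square roots leaves residue at the last-bit level.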






