Conversation
- SqliteVectorStore: only catch "no such table" OperationalError, re-raise others
- test_visualizer: fix det.ts → obs.ts, add @pytest.mark.tool, remove double teardown and duplicate assertion
- conftest: remove unreachable duplicate return

…ypes
- EmbeddingModel/CLIPModel: @overload so single-arg returns Embedding, multi-arg returns list
- conftest: cast getfixturevalue returns to correct Store/BlobStore types

- MobileCLIPModel, TorchReIDModel: add matching overloads for embed/embed_text
- Remove now-redundant cast in memory/embedding.py

…n guard, strict zip
- Delete dimos/memory/type.py (license header only, no code)
- Add validate_identifier() to search() and delete() in SqliteVectorStore
- Change zip(strict=False) to zip(strict=True) in Batch transformer
# Conflicts:
#   dimos/memory2/test_visualizer.py
#   dimos/memory2/transform.py

# Conflicts:
#   dimos/robot/unitree/go2/blueprints/basic/unitree_go2_basic.py
#   dimos/robot/unitree/go2/blueprints/smart/unitree_go2.py
Greptile Summary

This PR extends
Confidence Score: 4/5

Safe to merge for the visualization and storage changes, but the new module classes have multiple P1 runtime errors that must be fixed before they are exercised. Five P1 bugs are concentrated in the newly-added module.py classes (SemanticSearch, MemoryModule, Recorder) and one in _gap_threshold. The core storage, transform, and visualization work is clean and well-tested. Score 4 reflects that the bugs are real and fixable but isolated to a subset of the PR.

Affected files: dimos/memory2/module.py (multiple P1 bugs in SemanticSearch/MemoryModule/Recorder), dimos/memory2/transform.py (_gap_threshold IndexError edge case)

Important Files Changed
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    subgraph Storage ["Storage Layer"]
        B[Backend] -->|"scalar (int/float)"| MC[metadata value col]
        B -->|"blob"| BS[BlobStore]
        B --> VS[VectorStore]
    end
    subgraph Stream ["Stream API"]
        S[Stream] -->|at_relative / tap / scan_data| S
        S -->|search k=None| B
        S -->|cache| MEM[MemoryStore]
        S -->|transform| T[Transforms]
    end
    subgraph Transforms
        T --> throttle
        T --> speed
        T --> smooth
        T --> peaks
        peaks --> significant
    end
    subgraph Vis ["Visualization"]
        P[Plot] -->|to_svg| SVGPlot[plot/svg.py]
        SP[Space] -->|to_svg| SVGSpace[space/svg.py]
        P -->|add Stream/Obs| SM[Series/Markers]
        SP -->|add Obs/Msg| PAP[Pose/Arrow/Point]
    end
    subgraph Modules ["Module Classes ⚠️"]
        MM[MemoryModule] -->|_store uninitialized| BUG1((AttributeError))
        SS[SemanticSearch] -->|clip undefined| BUG2((NameError))
        SS -->|search result dropped| BUG3((silent None))
        REC[Recorder] -->|store=None| BUG4((AttributeError))
    end
    Stream --> Vis
    Storage --> Stream
```
Reviews (1): Last reviewed commit: "plot.md work"
```python
.live() \
.filter(lambda obs: obs.data.brightness > 0.1) \
.transform(QualityWindow(lambda img: img.sharpness, window=0.5)) \
.transform(EmbedImages(clip, batch_size=2)) \
```
clip is undefined — NameError at runtime
EmbedImages(clip, batch_size=2) references clip which is never imported or defined in scope. The intent is clearly self.model (the CLIPModel instance stored in start()). As written this raises NameError: name 'clip' is not defined the first time a live frame arrives.
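A minimal sketch of the shape of the fix (the class, `model_factory`, and the toy "model" below are illustrative stand-ins, not the real CLIPModel API): keep the model on `self` in `start()` and reference it from the pipeline instead of the bare name `clip`.

```python
# Hypothetical sketch: the pipeline must reference self.model,
# not the undefined name `clip`.
class SemanticSearchSketch:
    def start(self, model_factory):
        self.model = model_factory()  # model is created and kept in start()
        # pipeline steps close over self.model, so no bare `clip` is needed
        return lambda batch: [self.model(x) for x in batch]

search = SemanticSearchSketch()
embed = search.start(lambda: (lambda x: x * 2))  # toy "model": doubles input
result = embed([1, 2, 3])
```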
```python
self.embeddings \
    .search(query_vector) \
    .transform(peaks(key=lambda obs: obs.similarity, distance=1.0))
# fmt: on
```
search() result is silently discarded
The pipeline chain on lines 204–206 constructs a Stream but the result is never returned or stored. Every call to search() builds the chain and immediately discards it, so callers always get None. The method needs return in front of the chain (or the result stored to self.results).
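A toy sketch of why the chain must be returned (the `StreamSketch` class below is a stand-in, not the real Stream API):

```python
# Stand-in Stream with a chainable transform(); illustrative only.
class StreamSketch:
    def __init__(self, items):
        self.items = list(items)

    def transform(self, fn):
        return StreamSketch(fn(self.items))

def search(stream):
    # Without `return`, the chain below would be built and discarded
    # and every caller would receive None.
    return stream.transform(lambda xs: sorted(xs, reverse=True))

top = search(StreamSketch([0.2, 0.9, 0.5]))
```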
```python
def store(self):
    if self._store is not None:
        return self._store

    self._store = self.register_disposable(
        SqliteStore(path=str(self.config.db_path)),
    )
    self._store.start()
    return self._store
```
_store backing attribute is never initialized — AttributeError on first access
MemoryModule.store is a @property that reads self._store on line 156, but _store is never declared or set to None anywhere. On first access Python raises AttributeError: 'MemoryModule' object has no attribute '_store'. Additionally, line 188 (self._store.streams.color_image) bypasses the property and accesses the uninitialized backing attribute directly. The property guard should use getattr(self, '_store', None), or _store should be declared (e.g. as a Pydantic PrivateAttr(default=None)).
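A minimal sketch of the `getattr`-based guard (the stand-in `object()` replaces the real `SqliteStore` construction):

```python
class MemoryModuleSketch:
    # note: no `_store` declared anywhere, mirroring the bug scenario

    @property
    def store(self):
        # getattr with a default avoids AttributeError when the backing
        # attribute was never initialised
        cached = getattr(self, "_store", None)
        if cached is not None:
            return cached
        self._store = object()  # stand-in for SqliteStore(path=...)
        return self._store

m = MemoryModuleSketch()
first = m.store
second = m.store  # second access returns the cached instance
```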
```python
def __init__(self) -> None:
    # optionally delete existing recording on init if overwrite is True
    # TODO: store reset API/logic is not implemented yet
    # this module should have no relation to actual files (this is SqliteStore specific)
    # .live() subs need to know how to re-sub
    # in case of a restart of this module in a deployed blueprint
    db_path = Path(self.config.db_path)
    if db_path.exists():
        if self.config.overwrite:
            db_path.unlink()
            logger.info("Deleted existing recording %s", db_path)
        else:
            raise FileExistsError(f"Recording already exists: {db_path}")
```
Recorder.__init__ reads self.config before base class is initialized
Recorder.__init__ accesses self.config.db_path without calling super().__init__() first. Module (and likely its Pydantic base) initialises all fields, including config, during super().__init__(). Accessing self.config before that call means the field doesn't exist yet, raising AttributeError. Additionally, super().__init__() is never called at all in this override, which will skip all of Module's own setup.
The file-deletion side-effect also belongs in a model_validator(mode='after') rather than in __init__.
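A plain-Python sketch of the ordering problem (the stand-in base class replaces the real Module/Pydantic machinery): `super().__init__()` must run before `self.config` is touched.

```python
class BaseModuleSketch:
    def __init__(self, config):
        self.config = config  # the base initialiser is what populates `config`

class RecorderSketch(BaseModuleSketch):
    def __init__(self, config):
        super().__init__(config)  # must come first, or self.config is missing
        self.overwrite = bool(self.config.get("overwrite", False))

rec = RecorderSketch({"overwrite": True})
```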
```python
        return

    for name, port in self.inputs.items():
        stream: Stream[Any] = self.store.stream(name, port.type)
```
Recorder.store is always None — AttributeError in start()
Recorder extends Module, not MemoryModule, so it does not inherit the store property that lazily initialises the SqliteStore. The class attribute store: SqliteStore | None = None is never assigned, so self.store.stream(name, port.type) raises AttributeError: 'NoneType' object has no attribute 'stream' every time start() is called. Recorder should extend MemoryModule instead of Module, or it needs its own store initialisation in start().
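A toy sketch of lazy store initialisation in `start()` (the `dict` factory below stands in for constructing and registering a real `SqliteStore`):

```python
class RecorderStoreSketch:
    store = None  # class attribute, mirrors the annotation in the PR

    def start(self, store_factory):
        if self.store is None:
            # initialise the store before any stream() access
            self.store = store_factory()
        return self.store

rec = RecorderStoreSketch()
first = rec.start(dict)   # dict() stands in for SqliteStore(...)
second = rec.start(dict)  # second call reuses the same store
```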
```python
def _gap_threshold(values: list[float]) -> float:
    """Largest log-ratio gap between consecutive sorted values."""
    sorted_vals = sorted(v for v in values if v > 0)
    n = len(sorted_vals)
    best_ratio, best_idx = 0.0, n - 1
    for i in range(n - 1):
        ratio = sorted_vals[i + 1] / sorted_vals[i]
        if ratio > best_ratio:
            best_ratio, best_idx = ratio, i
    return 0.5 * (sorted_vals[best_idx] + sorted_vals[best_idx + 1])
```
_gap_threshold raises IndexError when fewer than two positive values exist
sorted_vals is filtered to values > 0. If the result has 0 or 1 entries, best_idx is -1 or 0 respectively, and the final return accesses sorted_vals[best_idx + 1] which is out-of-bounds. This is reachable when significant(method="gap") is called after peaks(prominence=0.0) — a documented usage pattern — and all peak prominences are zero or only one is positive.
A guard after computing `n` fixes the edge case:

```python
if n < 2:
    return sorted(values)[len(values) // 2] if values else 0.0
```

```python
print(f"sim={obs.similarity:.3f} ts={obs.ts} pose={obs.pose}")
assert obs.similarity is not None
assert obs.pose is not None
print(f"sim={obs.similarity:.3f} ts={obs.ts} pose={obs.pose}")
print(f"sim={obs.similarity:.3f} ts={obs.ts} pose={obs.pose}")
```
Pull request overview
This PR advances the “memory2” work by adding SVG-based visualization utilities (Space + Plot), updating replay/test plumbing toward memory2 SQLite datasets, and tightening Rerun initialization/bridge behavior.
Changes:
- Add `memory2.vis` visualization primitives (Space renderer + Plot renderer) with tests, docs/examples, and LFS-tracked rendered assets.
- Migrate parts of replay/test flows to use memory2 SQLite datasets (`replay_db`) and update related config/docs.
- Introduce shared Rerun initialization (`rerun_init`) and update the Rerun bridge configuration/ports.
Reviewed changes
Copilot reviewed 75 out of 93 changed files in this pull request and generated 12 comments.
Show a summary per file
| File | Description |
|---|---|
| uv.lock | Updates lockfile entries (incl. md-babel-py) and extras resolution. |
| pyproject.toml | Bumps md-babel-py; adjusts dds extra deps. |
| flake.nix | Adds uv to Nix dev tooling. |
| docs/usage/cli.md | Updates CLI docs from --replay-dir to --replay-db. |
| docs/capabilities/memory/reruntest.py | Adds memory2 visualization experiment script. |
| docs/capabilities/memory/plot.md | Adds Plot/Space usage documentation and examples. |
| docs/capabilities/memory/index.py | Adds memory capability example script (generates assets). |
| docs/capabilities/memory/index.md | Adds memory capability documentation page and examples. |
| docs/capabilities/memory/assets/speed.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plot_robot_data.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plot_plantness_significant.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plot_plantness_marked.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plot_plantness_gap_fill.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plot_plantness_brightness.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plot_plantness_autopeaks2.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plot_plantness_autopeaks.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plot_plantness.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plot_named.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plot_colors.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plants_peak_detections.png | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plants_meaningful.png | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plants_auto.png | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/plants.png | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/peak_space.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/peak_detections.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/grid.png | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/embedding_focused.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/embedding.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/color_image.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/brightness.svg | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/all_images.png | Adds LFS-tracked rendered asset. |
| docs/capabilities/memory/assets/.gitattributes | Adds LFS attributes for docs assets. |
| dimos/visualization/rerun/init.py | Introduces shared rerun_init() wrapper. |
| dimos/visualization/rerun/bridge.py | Switches to rerun_init, changes ports, adds config fields. |
| dimos/utils/testing/test_replay.py | Updates replay tests to use LegacyPickleStore directly. |
| dimos/utils/testing/replay.py | Adds Memory2-backed TimedSensorReplay shim adapter. |
| dimos/utils/testing/moment.py | Updates moment helper to use legacy pickle store directly. |
| dimos/utils/test_data.py | Updates repo-root helper usage (get_project_root). |
| dimos/utils/data.py | Renames repo root helper to get_project_root and updates call sites. |
| dimos/types/test_timestamped.py | Updates timestamped tests to use legacy pickle store directly. |
| dimos/simulation/engines/mujoco_sim_module.py | Removes default_config assignment line(s). |
| dimos/robot/unitree/testing/test_tooling.py | Updates tooling test to use legacy pickle store directly. |
| dimos/robot/unitree/modular/detect.py | Updates unitree detect tooling to use legacy pickle store directly. |
| dimos/robot/unitree/go2/connection.py | Switches replay dataset handling to replay_db; updates replay video stream name. |
| dimos/robot/unitree/go2/blueprints/smart/unitree_go2.py | Adds recorder-based “memory” blueprint wiring. |
| dimos/robot/unitree/go2/blueprints/basic/unitree_go2_basic.py | Updates Rerun overrides/visual conversions. |
| dimos/robot/drone/test_drone.py | Updates patches/imports to LegacyPickleStore. |
| dimos/robot/drone/mavlink_connection.py | Updates replay usage to LegacyPickleStore. |
| dimos/robot/drone/dji_video_stream.py | Updates replay usage to LegacyPickleStore. |
| dimos/robot/all_blueprints.py | Adds new blueprint/module entries for memory/recorder. |
| dimos/protocol/pubsub/impl/test_rospubsub.py | Updates replay usage to memory2 dataset naming for lidar test. |
| dimos/perception/test_spatial_memory_module.py | Updates replay usage to LegacyPickleStore. |
| dimos/perception/detection/type/imageDetections.py | Adds annotated_image() helper for drawing detections. |
| dimos/perception/detection/type/detection2d/bbox.py | Adds bbox drawing utilities (draw_on, annotated_image). |
| dimos/perception/detection/conftest.py | Updates test fixture replay usage to LegacyPickleStore. |
| dimos/msgs/sensor_msgs/test_image.py | Updates tool test to use LegacyPickleStore. |
| dimos/msgs/sensor_msgs/PointCloud2.py | Adds bottom_cutoff option to rerun conversion. |
| dimos/msgs/sensor_msgs/Image.py | Adds brightness property. |
| dimos/msgs/nav_msgs/OccupancyGrid.py | Adds RGBA texture generation helper; tweaks to_rerun defaults. |
| dimos/models/vl/test_vlm.py | Updates VLM tests to use legacy pickle store directly. |
| dimos/memory2/vis/utils.py | Adds mosaic() helper for tiling images/detections into a grid. |
| dimos/memory2/vis/space/test_space.py | Adds Space element dispatch/rendering tests. |
| dimos/memory2/vis/space/svg.py | Adds SVG renderer for Space. |
| dimos/memory2/vis/space/space.py | Adds Space builder/dispatcher. |
| dimos/memory2/vis/space/rerun.py | Adds Rerun renderer for Space. |
| dimos/memory2/vis/space/elements.py | Adds Space element dataclasses. |
| dimos/memory2/vis/plot/test_plot.py | Adds Plot tests (SVG output, axes, gaps, palette behavior). |
| dimos/memory2/vis/plot/svg.py | Adds matplotlib-based Plot SVG renderer. |
| dimos/memory2/vis/plot/rerun.py | Adds placeholder rerun renderer for Plot. |
| dimos/memory2/vis/plot/plot.py | Adds Plot builder/dispatcher. |
| dimos/memory2/vis/plot/elements.py | Adds Plot element dataclasses/enums. |
| dimos/memory2/vis/color.py | Adds Color utilities/palette/deferred colormap handling. |
| dimos/memory2/type/observation.py | Adds data_type + tagging helpers for observations. |
| dimos/memory2/transform.py | Adds/renames transforms (downsample/throttle/speed/smooth/peaks/significant/etc). |
| dimos/memory2/test_e2e.py | Updates e2e tests; adds import alignment trimming; embeds + search. |
| dimos/memory2/test_cache.py | Adds cache tests for replayable streams. |
| dimos/memory2/stream.py | Adds stream helpers (at_relative, tap, scan_data, map_data, cache, etc.). |
| dimos/memory2/store/sqlite.py | Ensures data_type is reconstructed into backends. |
| dimos/memory2/store/base.py | Adds StreamAccessor.items(); passes data_type into backend construction. |
| dimos/memory2/observationstore/sqlite.py | Adds scalar value column and query/insert support. |
| dimos/memory2/module.py | Adds memory2 module abstractions (StreamModule/Recorder/SemanticSearch). |
| dimos/memory2/codecs/test_codecs.py | Updates replay path for codec test input. |
| dimos/memory2/backend.py | Adds backend data_type validation + scalar inline storage behavior. |
| dimos/memory/test_embedding.py | Updates embedding test replay source to new dataset/stream naming. |
| dimos/mapping/test_voxels.py | Updates voxel tests to use legacy pickle store directly. |
| dimos/mapping/pointclouds/test_occupancy_speed.py | Updates occupancy speed test to new dataset naming and iterate API. |
| dimos/hardware/sensors/fake_zed_module.py | Updates replay usage to legacy pickle store directly. |
| dimos/hardware/sensors/camera/gstreamer/gstreamer_sender.py | Adjusts typing ignores for gi imports. |
| dimos/hardware/sensors/camera/gstreamer/gstreamer_camera.py | Adjusts typing ignores for gi imports. |
| dimos/core/global_config.py | Renames replay config field to replay_db. |
| data/.lfs/go2_bigoffice.db.tar.gz | Updates LFS dataset archive pointer. |
| README.md | Updates replay command docs to --replay-db. |
| .gitattributes | Adds LFS rule for docs memory assets. |
```python
"go2-connection": "dimos.robot.unitree.go2.connection.GO2Connection",
"go2-fleet-connection": "dimos.robot.unitree.go2.fleet_connection.Go2FleetConnection",
"go2-memory": "dimos.robot.unitree.go2.blueprints.smart.unitree_go2.Go2Memory",
"go2-recorder": "dimos.robot.unitree.go2.blueprints.basic.unitree_go2_basic.Go2Recorder",
"google-maps-skill-container": "dimos.agents.skills.google_maps_skill_container.GoogleMapsSkillContainer",
```
all_modules now includes "go2-recorder": "...unitree_go2_basic.Go2Recorder", but there is no Go2Recorder definition in the repository (search shows no matches). This mapping will cause import failures when resolving modules/blueprints. Either add/rename the class in unitree_go2_basic.py or remove/update this entry to the correct symbol.
```python
store.streams.color_image \
    # here we will take 4fps because brigtness calculation loads the actual image
    # observation.data triggers another db query to fetch the data
    # otherwise observations only hold positions and timestamps
    .transform(throttle(0.25)) \
```
Spelling: "brigtness" → "brightness".
```python
config: MemoryModuleConfig
store: SqliteStore | None = None

@property
def store(self):
    if self._store is not None:
        return self._store

    self._store = self.register_disposable(
        SqliteStore(path=str(self.config.db_path)),
    )
    self._store.start()
    return self._store
```
MemoryModule defines both a store attribute and a store @property that uses self._store, but _store is never declared/initialized. This will raise AttributeError on first access and also overwrites the declared store: SqliteStore | None field. Consider renaming the cached field to _store: SqliteStore | None = None and keeping store as a plain property, or removing the store attribute annotation entirely and initializing the store in __init__/start.
```python
self.model = self.register_disposable(self.config.embedding_model())

self.model.start()

self.embeddings = self.register_disposable(
    self.store.stream("color_image_embedded", Image),
)

# fmt: off
self.register_disposable(
    self._store.streams.color_image \
        .live() \
        .filter(lambda obs: obs.data.brightness > 0.1) \
        .transform(QualityWindow(lambda img: img.sharpness, window=0.5)) \
        .transform(EmbedImages(clip, batch_size=2)) \
        .save(self.embeddings) \
        .drain())
```
SemanticSearch.start() references several undefined names (Image, QualityWindow, EmbedImages, clip, and self._store). As written this will fail at runtime on module start. Import the required symbols, create a clip/embedding model instance consistently (likely self.model), and use self.store (not _store) for streams.
```python
class GO2Connection(Module, Camera, Pointcloud):
    config: ConnectionConfig
    cmd_vel: In[Twist]
    pointcloud: Out[PointCloud2]
    odom: Out[PoseStamped]
    lidar: Out[PointCloud2]
    color_image: Out[Image]
    camera_info: Out[CameraInfo]

    connection: Go2ConnectionProtocol
    camera_info_static: CameraInfo = _camera_info_static()
    _camera_info_thread: Thread | None = None
    _latest_video_frame: Image | None = None

    @property
    def camera_info(self) -> Out[CameraInfo]:
        return _camera_info_static()
```
GO2Connection declares camera_info: Out[CameraInfo] as an output port, but then overrides it with a @property def camera_info(...) that returns a CameraInfo message. This replaces the Out port descriptor at class creation time and will break transport wiring and .publish() calls. Remove/rename the property (e.g., camera_info_static) and publish the static message via the existing camera_info Out port instead.
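A minimal demonstration of the shadowing mechanics (toy `PortSketch` class, not the real Out descriptor): when a `@property` with the same name appears later in the class body, it replaces the earlier attribute at class-creation time.

```python
class PortSketch:
    """Stand-in for an Out[...] port descriptor."""

class ConnSketch:
    camera_info = PortSketch()  # intended port declaration

    @property
    def camera_info(self):  # same name: replaces the PortSketch above
        return "static camera info"

conn = ConnSketch()
value = conn.camera_info
# the class dict now holds the property, not the port
port_survived = isinstance(ConnSketch.__dict__["camera_info"], PortSketch)
```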
```python
If *k* is given, return only the top-k results.
If *k* is omitted, return **all** observations with similarity scores.
"""
```
Stream.search() docstring says omitting k returns all observations, but the backend vector search path caps results (currently defaults to 4096 when search_k is None). Either require callers to pass an explicit k, or implement a true “all results” mode (e.g., fetch ids via metadata ordering / a separate API) and update the docstring/behavior so they match.
Suggested change:

```python
*k* must be provided explicitly and returns only the top-k results.
Omitting *k* is not supported because backend vector search may apply
an implicit result cap.
"""
if k is None:
    raise ValueError(
        "search() requires an explicit 'k' value; omitting 'k' may "
        "return only a backend-capped subset of results"
    )
```
```python
else:
    print(f)
    raise TypeError(f"Cannot extract Image from {type(f).__name__}")
```
mosaic() prints the offending frame (print(f)) right before raising TypeError. This is a library utility and the unconditional stdout write will be noisy in tests/notebooks; prefer removing the print or using the project logger at debug level.
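A sketch of the suggested change (the function signature and logger name below are hypothetical): route the diagnostic through a logger at debug level so it stays silent by default instead of writing to stdout.

```python
import logging

logger = logging.getLogger("dimos.memory2.vis")  # hypothetical logger name

def extract_image(frame):
    if isinstance(frame, (bytes, bytearray)):
        return bytes(frame)
    # debug-level log is quiet by default, unlike an unconditional print()
    logger.debug("cannot extract Image from frame: %r", frame)
    raise TypeError(f"Cannot extract Image from {type(frame).__name__}")

got_error = None
try:
    extract_image(42)
except TypeError as exc:
    got_error = str(exc)
```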
```python
# this is not neccessary but we use a global map as a nice base for a drawing
drawing.add(global_map)
```
Spelling: "neccessary" → "necessary".
```python
config: RecorderConfig
store: SqliteStore | None = None

def __init__(self) -> None:
    # optionally delete existing recording on init if overwrite is True
    # TODO: store reset API/logic is not implemented yet
    # this module should have no relation to actual files (this is SqliteStore specific)
    # .live() subs need to know how to re-sub
    # in case of a restart of this module in a deployed blueprint
    db_path = Path(self.config.db_path)
    if db_path.exists():
        if self.config.overwrite:
            db_path.unlink()
            logger.info("Deleted existing recording %s", db_path)
        else:
            raise FileExistsError(f"Recording already exists: {db_path}")

@rpc
def start(self) -> None:
    super().start()

    if not self.inputs:
        logger.warning("Recorder has no In ports — nothing to record, subclass the Recorder")
        return

    for name, port in self.inputs.items():
        stream: Stream[Any] = self.store.stream(name, port.type)
        self.register_disposable(
            Disposable(port.subscribe(lambda msg, s=stream: s.append(msg)))  # type: ignore[misc]
        )
        logger.info("Recording %s (%s)", name, port.type.__name__)
```
Recorder.__init__ doesn’t call super().__init__(**kwargs) and uses self.config before the base Module initializer has a chance to populate it. Also Recorder.store is annotated but never initialized, so start() will fail on self.store.stream(...). Consider either inheriting from MemoryModule (so store is created) or adding proper initialization (call super().__init__, create/start a SqliteStore, and register it for disposal).
```python
def find_closest(self, timestamp: float, tolerance: float | None = None) -> T | None:
    try:
        obs = self._stream.at(timestamp, tolerance or 1.0).first()
    except LookupError:
```
find_closest() uses tolerance or 1.0, which silently replaces valid tolerances like 0.0 with 1.0. Use an explicit None check instead (e.g., tolerance if tolerance is not None else 1.0).
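A tiny sketch of the falsy-value pitfall and the explicit None check:

```python
def resolve_tolerance(tolerance):
    # `tolerance or 1.0` would silently replace a valid 0.0 with 1.0;
    # the explicit None check preserves it
    return tolerance if tolerance is not None else 1.0

exact = resolve_tolerance(0.0)     # stays 0.0
default = resolve_tolerance(None)  # falls back to 1.0
buggy = 0.0 or 1.0                 # the original expression yields 1.0
```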
Problem
Closes DIM-XXX
Solution
Breaking Changes
How to Test
Contributor License Agreement