25 commits
15743fc
feat(traceability): add coverage checker and reporting docs
FScholPer Apr 13, 2026
58ae80d
add coverage check
FScholPer Apr 13, 2026
4e9c60e
fix lint
FScholPer Apr 13, 2026
0ec5217
refactoring the coverage, metrics and dashboard
FScholPer Apr 14, 2026
764da8d
add generic filters
FScholPer Apr 14, 2026
ec2e994
Update src/extensions/score_metamodel/traceability_metrics.py
FScholPer Apr 16, 2026
a364257
Apply suggestions from code review
FScholPer Apr 17, 2026
ecd1caf
readd genai headers
FScholPer Apr 17, 2026
313ff9b
Merge branch 'main' into score-2774-traceability
FScholPer Apr 20, 2026
6287c69
changed to new json structure
FScholPer Apr 20, 2026
5876e16
Merge branch 'main' into score-2774-traceability
FScholPer Apr 27, 2026
b4ec35b
removed md and refactored gate
FScholPer Apr 27, 2026
7ce6835
Added the uml from the comment
FScholPer Apr 27, 2026
a6029b7
refactoring to the new json approach and refactoring of dashboards fo…
FScholPer Apr 27, 2026
a93233b
lint fix
FScholPer Apr 27, 2026
6e1e0aa
fixed liniting issues
FScholPer Apr 27, 2026
0d8d75c
Merge branch 'main' into score-2774-traceability
FScholPer Apr 27, 2026
c8e4058
improved description
FScholPer Apr 27, 2026
4437580
fix warnings
FScholPer Apr 27, 2026
243aa21
fix docs build
FScholPer Apr 28, 2026
cf19ba8
fixed review comments(removed coverage py to utilize extension)
FScholPer Apr 28, 2026
2b8aace
review comment fixes
FScholPer Apr 28, 2026
f137f03
replaced for loop by list
FScholPer Apr 28, 2026
e927339
fix linting
FScholPer Apr 28, 2026
a81e4ff
Merge branch 'main' into score-2774-traceability
FScholPer Apr 30, 2026
4 changes: 4 additions & 0 deletions .gitignore
@@ -1,4 +1,7 @@
# Commonly used for local settings and secrets
# ╓ ╖
# ║ Some portions generated by Github Copilot ║
# ╙ ╜
.env

# Bazel
@@ -26,3 +29,4 @@ __pycache__/

# bug: This file is created in repo root on test discovery.
/consumer_test.log
.clwb
4 changes: 3 additions & 1 deletion docs.bzl
@@ -293,10 +293,12 @@ def docs(source_dir = "docs", data = [], deps = [], scan_code = [], known_good =
"--jobs",
"auto",
"--define=external_needs_source=" + str(data),
"--define=score_sourcelinks_json=$(location :sourcelinks_json)",
"--define=score_source_code_linker_plain_links=1",
],
formats = ["needs"],
sphinx = ":sphinx_build",
tools = data,
tools = data + [":sourcelinks_json"],
visibility = ["//visibility:public"],
# Persistent workers cause stale symlinks after dependency version
# changes, corrupting the Bazel cache.
183 changes: 183 additions & 0 deletions docs/how-to/dashboards_and_quality_gates.rst
@@ -0,0 +1,183 @@
..
# *******************************************************************************
# Copyright (c) 2026 Contributors to the Eclipse Foundation
#
# See the NOTICE file(s) distributed with this work for additional
# information regarding copyright ownership.
#
# This program and the accompanying materials are made available under the
# terms of the Apache License Version 2.0 which is available at
# https://www.apache.org/licenses/LICENSE-2.0
#
# SPDX-License-Identifier: Apache-2.0
# *******************************************************************************

Build Dashboards and Quality Gates
==================================

This guide is for repositories that *consume* docs-as-code as a Bazel
dependency. Examples are module repositories and integration repositories that
want to:

1. publish their own traceability dashboards,
2. export ``metrics.json`` during documentation builds, and
3. enforce quality gates in CI.

The docs-as-code repository itself documents tooling coverage. Consumer
repositories use the same extensions to document *their own* requirements,
architecture, source-code links, and verification evidence.

What You Get
------------

When a consumer repository integrates docs-as-code correctly, it can:

- build an HTML dashboard from its own Sphinx needs,
- include external needs from other repositories when desired,
- export ``needs.json`` and ``metrics.json`` for machine-readable reporting,
- gate CI on traceability thresholds via ``traceability_gate``.

Typical Setup
-------------

1. Add docs-as-code as a Bazel dependency as described in :ref:`setup`.
2. Define the documentation target via the ``docs(...)`` macro.
3. Provide process or upstream needs via the ``data`` argument when cross-repo
traceability is required.
4. Provide implementation sources via ``scan_code`` so ``source_code_link`` can
be generated.
5. Add test metadata so ``testlink`` and testcase needs can be generated.

Minimal Consumer Example
------------------------

In ``BUILD``:

.. code-block:: starlark

load("@score_docs_as_code//:docs.bzl", "docs")

filegroup(
name = "module_sources",
srcs = glob([
"src/**/*.py",
"src/**/*.cpp",
"src/**/*.h",
"src/**/*.rs",
]),
)

docs(
source_dir = "docs",
data = [
"@score_process//:needs_json",
],
scan_code = [":module_sources"],
)

In ``docs/conf.py``:

.. code-block:: python

score_metamodel_requirement_types = "feat_req,comp_req,aou_req"
score_metamodel_include_external_needs = False

Use ``score_metamodel_include_external_needs = True`` only in repositories that
intentionally aggregate traceability across dependencies, such as integration
repositories.

Building the Dashboard
----------------------

Run:

.. code-block:: bash

bazel run //:docs

This generates HTML output under ``_build/``.

Run:

.. code-block:: bash

bazel build //:needs_json

This generates machine-readable output under:

- ``bazel-bin/needs_json/_build/needs/needs.json``
- ``bazel-bin/needs_json/_build/needs/metrics.json``

The HTML dashboard and the exported ``metrics.json`` are backed by the same
traceability metric implementation, so the charts and the CI gate evaluate the
same data.
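As a rough illustration of how a gate could evaluate such an export, the sketch
below checks percentage fields against the thresholds used later in this guide.
The real schema (v1) is defined by the ``score_metamodel`` extension; every
field name in this payload is an assumption for illustration only.

```python
import json

# Hypothetical metrics.json payload -- field names are illustrative, not the
# actual v1 schema written by the score_metamodel extension.
metrics = json.loads("""
{
  "schema_version": 1,
  "tool_req": {
    "req_code_pct": 82.0,
    "req_test_pct": 74.5,
    "req_fully_linked_pct": 68.0,
    "tests_linked_pct": 91.0
  }
}
""")

# Thresholds mirroring the example gate invocation in this guide.
thresholds = {
    "req_code_pct": 70,
    "req_test_pct": 70,
    "req_fully_linked_pct": 60,
    "tests_linked_pct": 70,
}

# Collect every metric that falls below its minimum.
failures = [name for name, minimum in thresholds.items()
            if metrics["tool_req"][name] < minimum]
print("PASS" if not failures else f"FAIL: {failures}")
```

Because both the dashboard charts and the gate read the same computed values, a
threshold failure here corresponds directly to what the HTML dashboard shows.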

Inputs for Linkage Metrics
--------------------------

To get meaningful dashboard and gate values, consumer repositories typically
need three inputs:

1. Requirement and architecture needs in the documentation itself.
2. Source code references via :doc:`source_to_doc_links`.
3. Test metadata via :doc:`test_to_doc_links`.

If one of those inputs is missing, the related chart or gate metric will remain
empty or low.
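To see why a missing input flattens a metric, consider a toy computation
(record shape and field names are assumptions, not the real metrics code):
with no test metadata at all, the test-linkage percentage drops to zero while
the code-link percentage is unaffected.

```python
# Two requirements with code links but no test metadata ingested yet.
needs = [
    {"id": "comp_req__1", "source_code_link": True, "testlink": False},
    {"id": "comp_req__2", "source_code_link": True, "testlink": False},
]

def pct(flag):
    # Percentage of needs for which the given link flag is set.
    return 100.0 * sum(n[flag] for n in needs) / len(needs)

print(pct("source_code_link"), pct("testlink"))  # 100.0 0.0
```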

Choosing Local vs Aggregated Views
----------------------------------

There are two common modes:

**Module repository**

- Set ``score_metamodel_include_external_needs = False``.
- Gate only on the needs owned by the repository itself.
- Use this for per-module implementation progress and traceability.

**Integration repository**

- Set ``score_metamodel_include_external_needs = True``.
- Aggregate requirements across module dependencies when that is the intended
repository purpose.
- Use this for system or integration-level dashboards.

CI Quality Gate
---------------

After building ``//:needs_json``, run the gate on the exported metrics:

.. code-block:: bash

bazel run @score_docs_as_code//scripts_bazel:traceability_gate -- \
--metrics-json bazel-bin/needs_json/_build/needs/metrics.json \
--min-req-code 70 \
--min-req-test 70 \
--min-req-fully-linked 60 \
--min-tests-linked 70

Useful flags:

- ``--require-all-links`` for strict 100 percent gating
- ``--fail-on-broken-test-refs`` to fail when testcase references point to
unknown requirement IDs

Recommended Rollout
-------------------

For a new consumer repository:

1. Start with local-only metrics.
2. Enable ``scan_code`` and verify ``source_code_link`` coverage first.
3. Add test metadata and verify ``testlink`` coverage.
4. Introduce modest thresholds in CI.
5. Raise thresholds over time as the repository matures.

Related Guides
--------------

- :ref:`setup`
- :doc:`other_modules`
- :doc:`source_to_doc_links`
- :doc:`test_to_doc_links`
3 changes: 3 additions & 0 deletions docs/how-to/get_started.rst
@@ -24,3 +24,6 @@ In an existing S-CORE repository, you can build the documentation using Bazel:
Open the generated site at ``_build/index.html`` in your browser.

In a new S-CORE repository, see :ref:`setup`.

After the initial setup, continue with :doc:`dashboards_and_quality_gates` to
build a repository dashboard and enforce CI quality gates.
1 change: 1 addition & 0 deletions docs/how-to/index.rst
@@ -27,6 +27,7 @@ Here you find practical guides on how to use docs-as-code.
write_docs
faq
other_modules
dashboards_and_quality_gates
source_to_doc_links
test_to_doc_links
add_extensions
6 changes: 6 additions & 0 deletions docs/how-to/setup.md
@@ -86,3 +86,9 @@ bazel run //:docs
#### 6. Access your documentation at

`/_build/index.html`

## Next Step

After basic setup, see {doc}`dashboards_and_quality_gates` to configure
traceability dashboards, export `metrics.json`, and enforce CI quality gates in
consumer repositories.
118 changes: 118 additions & 0 deletions docs/how-to/test_to_doc_links.rst
Expand Up @@ -12,6 +12,10 @@
# SPDX-License-Identifier: Apache-2.0
# *******************************************************************************

# ╓ ╖
# ║ Some portions generated by Github Copilot ║
# ╙ ╜

Reference Docs in Tests
=======================

@@ -53,3 +57,117 @@ Limitations
- Partially provided requirement properties prevent Testlink creation:
  if you want a test to be linked, ensure all requirement properties are provided.
- Tests must be executed by Bazel first so `test.xml` files exist.


CI/CD Gate for Linkage Percentage
---------------------------------

The traceability tooling uses a **two-step architecture**:

1. The **Sphinx build** computes metrics via the ``score_metamodel`` extension and
writes a machine-readable ``metrics.json`` (schema v1) to the build output
directory alongside ``needs.json``.
2. ``traceability_gate`` reads that ``metrics.json`` and enforces configurable
coverage thresholds.

Separating computation (Sphinx extension, during docs build) from gating (thin
CLI, in CI) keeps the gate decoupled from the Sphinx/Bazel build: the extension
has direct access to all sphinx-needs data, while the gate never parses
``needs.json`` itself.

.. note::

``metrics.json`` is the **single source of truth** for traceability data.
It is written by the Sphinx docs build (via the ``score_metamodel`` extension)
to ``<outdir>/metrics.json``. The same computation that powers the dashboard
pie charts produces this file, so the gate and the dashboard always show
the same numbers.

.. plantuml::

@startuml
skinparam componentStyle rectangle
skinparam defaultTextAlignment center

rectangle "docs build (Sphinx + score_metamodel extension)" {
component "calc metrics\n(Sphinx extension\nbuild-finished hook)" as coverage
}

usecase "test" as test
database "needs.json\n(sphinx-needs)" as needsjson
database "metrics.json\n(v1: metrics per needs type,\ne.g. tool_req)" as metricsjson
component "gate\n(traceability_gate)" as gate

test --> coverage : xml
needsjson --> coverage : sphinx-needs data\n(already loaded)
coverage --> metricsjson
metricsjson --> gate
gate --> (Pretty output)

@enduml

Current workflow:

1. Run tests.
2. Build docs (``score_metamodel`` extension writes ``metrics.json`` automatically).
3. Run the gate against the exported metrics.

.. code-block:: bash

bazel test //...
bazel build //:needs_json

bazel run //scripts_bazel:traceability_gate -- \
--metrics-json bazel-bin/needs_json/_build/needs/metrics.json \
--min-req-code 100 \
--min-req-test 100 \
--min-req-fully-linked 100 \
--min-tests-linked 100 \
--fail-on-broken-test-refs

In repository CI, wire the gate target to depend on the test-report and
``//:needs_json`` targets so Bazel handles the build order automatically.

The ``--require-all-links`` shortcut is equivalent to setting all ``--min-*``
flags to 100 and enabling ``--fail-on-broken-test-refs``.
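That equivalence can be sketched with dictionary keys named after the CLI
flags (the expansion rule comes from the text above; the helper function and
argument shape are hypothetical):

```python
def expand(args: dict) -> dict:
    # If --require-all-links is set, force all four --min-* thresholds to 100
    # and enable --fail-on-broken-test-refs.
    if args.pop("require_all_links", False):
        for key in ("min_req_code", "min_req_test",
                    "min_req_fully_linked", "min_tests_linked"):
            args[key] = 100
        args["fail_on_broken_test_refs"] = True
    return args

# A lower explicit threshold is overridden by the strict shortcut.
print(expand({"require_all_links": True, "min_req_code": 70}))
```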

The gate reports:

- Percentage of requirements with ``source_code_link``
- Percentage of requirements with ``testlink``
- Percentage of requirements with both links (fully linked)
- Percentage of testcases linked to at least one requirement
- Broken testcase references (testcases referencing an unknown requirement ID)
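The five checks can be sketched over toy records as follows; the metric names
follow the list above, while the record shapes and field names are assumptions
for illustration:

```python
requirements = [
    {"id": "tool_req__a", "source_code_link": True, "testlink": True},
    {"id": "tool_req__b", "source_code_link": True, "testlink": False},
]
testcases = [
    {"id": "testcase__1", "refs": ["tool_req__a"]},
    {"id": "testcase__2", "refs": ["tool_req__missing"]},  # broken reference
]

known_ids = {r["id"] for r in requirements}

def pct(hits, total):
    return 100.0 * hits / total if total else 0.0

req_code = pct(sum(r["source_code_link"] for r in requirements), len(requirements))
req_test = pct(sum(r["testlink"] for r in requirements), len(requirements))
fully = pct(sum(r["source_code_link"] and r["testlink"] for r in requirements),
            len(requirements))
tests_linked = pct(sum(bool(t["refs"]) for t in testcases), len(testcases))
broken = [t["id"] for t in testcases
          if any(ref not in known_ids for ref in t["refs"])]

print(req_code, req_test, fully, tests_linked, broken)
# 100.0 50.0 50.0 100.0 ['testcase__2']
```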

.. note::

Testcase-based metrics depend on testcase needs being present in the
exported ``needs.json``. Testcases are currently generated as external
needs, so values such as testcase linkage percentage or broken testcase
references are only meaningful if those external testcase needs are also
included in the exported dataset.

To restrict which need types are treated as requirements when computing metrics,
set ``score_metamodel_requirement_types`` in your Sphinx ``conf.py``
(default: ``tool_req``):

.. code-block:: python

score_metamodel_requirement_types = "tool_req,comp_req"
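Presumably the comma-separated value is normalized into a set of need types
along these lines (a sketch only; the actual parsing lives in the
``score_metamodel`` extension):

```python
# Split on commas, trim whitespace, and drop empty entries.
raw = "tool_req, comp_req"
requirement_types = {t.strip() for t in raw.split(",") if t.strip()}
print(sorted(requirement_types))  # ['comp_req', 'tool_req']
```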

By default, the dashboard and the gate use only needs defined in the current
repository (``is_external == False``), which supports per-repo CI gates.
For integration repositories that intentionally aggregate across dependencies,
you can include external needs in both dashboard and gate by setting:

.. code-block:: python

score_metamodel_include_external_needs = True
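The local-versus-aggregated selection can be pictured as a simple filter; the
``is_external`` field name comes from this guide, while the record shape and
helper are assumptions:

```python
needs = [
    {"id": "tool_req__local", "is_external": False},
    {"id": "feat_req__upstream", "is_external": True},
]

def select(needs, include_external):
    # Mirrors score_metamodel_include_external_needs: keep external needs
    # only when aggregation is explicitly enabled.
    return [n for n in needs if include_external or not n["is_external"]]

print([n["id"] for n in select(needs, False)])  # ['tool_req__local']
print([n["id"] for n in select(needs, True)])   # both needs
```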

You can also override dashboard behaviour per pie chart via filter args:

.. code-block:: rst

.. needpie:: Requirements with Codelinks
:filter-func: src.extensions.score_metamodel.checks.traceability_dashboard.pie_requirements_with_code_links(tool_req,true)

Use lower thresholds during rollout and tighten towards 100% over time.