# ontology


<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Overview

This module implements Stages 1-2 of the trajectory: Define the Ontology
“Context Model” and provide bounded view primitives for progressive
disclosure.

**Stage 1**: Meta-graph scaffolding with navigation indexes  
**Stage 2**: Bounded view primitives for safe graph exploration

### Design Principles

- **Handles, not dumps**: Return graph handles with bounded view
  operations
- **Meta-graph scaffolding**: Build navigation indexes (labels,
  hierarchy, properties)
- **Progressive disclosure**: Small summaries guide exploration
- **RLM-compatible**: Works with namespace-explicit `rlm_run()`

### Context Model

From the trajectory document:

> The *root model never gets a graph dump*. It gets a handle name (e.g.
> `ont`, `res_0`) and uses bounded view operations.

## Imports

## Graph Loading

------------------------------------------------------------------------

### load_ontology

``` python

def load_ontology(
    path:str | pathlib.Path, ns:dict, name:str='ont'
)->str:

```

*Load an RDF ontology file into namespace as a Graph handle.*

Args:

- `path`: Path to ontology file (.ttl, .rdf, .owl)
- `ns`: Namespace dict where the Graph will be stored
- `name`: Variable name for the Graph handle

Returns: Summary string describing what was loaded

``` python
# Test loading prov.ttl
test_ns = {}
result = load_ontology('ontology/prov.ttl', test_ns, name='prov_ont')
print(result)
assert 'prov_ont' in test_ns
assert isinstance(test_ns['prov_ont'], Graph)
assert len(test_ns['prov_ont']) > 0
print(f"✓ Loaded {len(test_ns['prov_ont'])} triples")
```

    Loaded 1664 triples from prov.ttl into 'prov_ont'
    ✓ Loaded 1664 triples

## Meta-Graph Navigation

Build navigation scaffolding from a Graph to enable progressive
disclosure. This is what goes in the REPL environment, not the graph
itself.

------------------------------------------------------------------------

### GraphMeta

``` python

def GraphMeta(
    graph:Graph, name:str='ont'
)->None:

```

*Meta-graph navigation scaffolding for an RDF Graph.*

This is REPL-resident and provides bounded views over the graph. Its
indexes follow the exploration in `dialogs/inspect_tools.ipynb`.

``` python
# Test GraphMeta with prov ontology
prov_g = test_ns['prov_ont']
meta = GraphMeta(prov_g, name='prov')

print(meta.summary())
print()
print(f"Sample classes (first 5): {meta.classes[:5]}")
print(f"Sample properties (first 5): {meta.properties[:5]}")
print(f"Namespaces: {list(meta.namespaces.keys())}")
```

    Graph 'prov': 1,664 triples
    Classes: 59
    Properties: 89
    Individuals: 1
    Namespaces: brick, csvw, dc, dcat, dcmitype, dcterms, dcam, doap, foaf, geo, odrl, org, prof, qb, schema, sh, skos, sosa, ssn, time, vann, void, wgs, owl, rdf, rdfs, xsd, xml, prov

    Sample classes (first 5): ['http://www.w3.org/2002/07/owl#Thing', 'http://www.w3.org/ns/prov#Accept', 'http://www.w3.org/ns/prov#Activity', 'http://www.w3.org/ns/prov#ActivityInfluence', 'http://www.w3.org/ns/prov#Agent']
    Sample properties (first 5): ['http://www.w3.org/2000/01/rdf-schema#comment', 'http://www.w3.org/2000/01/rdf-schema#isDefinedBy', 'http://www.w3.org/2000/01/rdf-schema#label', 'http://www.w3.org/2000/01/rdf-schema#seeAlso', 'http://www.w3.org/2002/07/owl#topObjectProperty']
    Namespaces: ['brick', 'csvw', 'dc', 'dcat', 'dcmitype', 'dcterms', 'dcam', 'doap', 'foaf', 'geo', 'odrl', 'org', 'prof', 'qb', 'schema', 'sh', 'skos', 'sosa', 'ssn', 'time', 'vann', 'void', 'wgs', 'owl', 'rdf', 'rdfs', 'xsd', 'xml', 'prov']

## Bounded View Functions (Stage 1)

Basic operations on GraphMeta that return small, bounded summaries:

- **graph_stats()**: Overall graph statistics
- **search_by_label()**: Simple label-based search
- **describe_entity()**: Get entity description with sample triples

These provide the foundation for progressive disclosure.

------------------------------------------------------------------------

### graph_stats

``` python

def graph_stats(
    meta:GraphMeta
)->str:

```

*Get graph statistics summary.*
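The kind of bounded summary `graph_stats()` returns can be sketched over toy triples (`toy_graph_stats` is a hypothetical illustration, not the module's implementation, which reads a GraphMeta's indexes):

``` python
from collections import Counter

# Toy triples; the real stats come from GraphMeta's prebuilt indexes.
triples = [
    ('ex:A', 'rdf:type', 'owl:Class'),
    ('ex:B', 'rdf:type', 'owl:Class'),
    ('ex:p', 'rdf:type', 'owl:ObjectProperty'),
    ('ex:A', 'rdfs:label', 'Alpha'),
]

def toy_graph_stats(triples, name='ont'):
    "One small summary string, mirroring the shape of graph_stats()."
    types = Counter(o for s, p, o in triples if p == 'rdf:type')
    return (f"Graph '{name}': {len(triples)} triples, "
            f"{types['owl:Class']} classes, "
            f"{types['owl:ObjectProperty']} properties")

print(toy_graph_stats(triples))
# → Graph 'ont': 4 triples, 2 classes, 1 properties
```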

------------------------------------------------------------------------

### search_by_label

``` python

def search_by_label(
    meta:GraphMeta, search:str, limit:int=10
)->list:

```

*Search for entities by label substring (case-insensitive).*

Backward-compatible wrapper around search_entity().

Args:

- `meta`: GraphMeta to search
- `search`: Substring to search for in labels
- `limit`: Maximum results to return

Returns: List of (URI, label) tuples
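The search logic can be sketched in a few lines, assuming a `labels` dict mapping URI to label as GraphMeta is described as keeping (`toy_search_by_label` is an illustrative name, not the module's code):

``` python
# Minimal sketch of case-insensitive substring search over a label index.
labels = {
    'http://example.org/Activity': 'Activity',
    'http://example.org/ActivityInfluence': 'ActivityInfluence',
    'http://example.org/Entity': 'Entity',
}

def toy_search_by_label(labels, search, limit=10):
    "Return up to `limit` (URI, label) tuples whose label contains `search`."
    needle = search.lower()
    hits = [(uri, lbl) for uri, lbl in labels.items() if needle in lbl.lower()]
    return hits[:limit]

print(toy_search_by_label(labels, 'activity'))
```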

------------------------------------------------------------------------

### search_entity

``` python

def search_entity(
    meta:GraphMeta, query:str, limit:int=10, search_in:str='all'
)->list:

```

*Search for entities by label, IRI, or localname.*

Args:

- `meta`: GraphMeta to search
- `query`: Search string (case-insensitive substring match)
- `limit`: Maximum results to return
- `search_in`: Where to search - 'label', 'iri', 'localname', or 'all'

Returns: List of dicts:
`[{'uri': str, 'label': str, 'match_type': str}, …]`

``` python
# Test search_entity
results = search_entity(meta, 'activity', limit=5)
print(f"Found {len(results)} matches for 'activity':")
for r in results:
    print(f"  {r['label']}: {r['uri']} ({r['match_type']})")

# Test different search modes
print("\nSearch by IRI only:")
iri_results = search_entity(meta, 'prov', search_in='iri', limit=3)
for r in iri_results:
    print(f"  {r['label']}: {r['uri']}")

# Test backward compatibility
print("\nBackward compatibility test:")
legacy_results = search_by_label(meta, 'activity', limit=5)
print(f"Found {len(legacy_results)} matches using search_by_label():")
for uri, label in legacy_results:
    print(f"  {label}: {uri}")
```

    Found 5 matches for 'activity':
      Activity: http://www.w3.org/ns/prov#Activity (label)
      ActivityInfluence: http://www.w3.org/ns/prov#ActivityInfluence (label)
      activity: http://www.w3.org/ns/prov#activity (label)
      hadActivity: http://www.w3.org/ns/prov#hadActivity (label)
      activityOfInfluence: http://www.w3.org/ns/prov#activityOfInfluence (label)

    Search by IRI only:
      Attribution: http://www.w3.org/ns/prov#Attribution
      invalidatedAtTime: http://www.w3.org/ns/prov#invalidatedAtTime
      Derivation: http://www.w3.org/ns/prov#Derivation

    Backward compatibility test:
    Found 5 matches using search_by_label():
      Activity: http://www.w3.org/ns/prov#Activity
      ActivityInfluence: http://www.w3.org/ns/prov#ActivityInfluence
      activity: http://www.w3.org/ns/prov#activity
      hadActivity: http://www.w3.org/ns/prov#hadActivity
      activityOfInfluence: http://www.w3.org/ns/prov#activityOfInfluence

------------------------------------------------------------------------

### describe_entity

``` python

def describe_entity(
    meta:GraphMeta, uri:str, limit:int=20
)->dict:

```

*Get bounded description of an entity.*

Args:

- `meta`: GraphMeta containing the entity
- `uri`: URI of the entity to describe (supports prefixed forms like `prov:Activity`)
- `limit`: Max number of triples to include

Returns: Dict with label, types, and sample triples

``` python
# Test describe_entity
# Find the Activity class
activity_uri = 'http://www.w3.org/ns/prov#Activity'
desc = describe_entity(meta, activity_uri)

print(f"Label: {desc['label']}")
print(f"Types: {desc['types']}")
print(f"Comment: {desc['comment'][:100]}..." if desc['comment'] else "No comment")
print(f"Outgoing triples: {len(desc['outgoing_sample'])}")
```

    Label: Activity
    Types: ['http://www.w3.org/2002/07/owl#Class']
    No comment
    Outgoing triples: 10

### Stage 2: Progressive Disclosure Primitives

Advanced bounded view operations that enable root models to explore
graphs iteratively:

- **search_entity()**: Multi-mode entity search (label/IRI/localname)
- **probe_relationships()**: One-hop neighbor exploration with filtering
- **find_path()**: BFS path finding between entities
- **predicate_frequency()**: Usage analysis for understanding graph
  structure

These primitives answer questions like:

- “Is X defined?” → `search_entity()`
- “What connects A to B?” → `find_path()`
- “What are the most important predicates?” → `predicate_frequency()`
- “What does X relate to?” → `probe_relationships()`

------------------------------------------------------------------------

### probe_relationships

``` python

def probe_relationships(
    meta:GraphMeta, uri:str, predicate:str=None, direction:str='both', limit:int=20
)->dict:

```

*Get one-hop neighbors of an entity, optionally filtered by predicate.*

Args:

- `meta`: GraphMeta containing the entity
- `uri`: URI of the entity to probe (supports prefixed forms like `prov:Activity`)
- `predicate`: Optional predicate URI to filter by (supports prefixed forms)
- `direction`: 'out', 'in', or 'both' (default: 'both')
- `limit`: Maximum neighbors to return per direction

Returns:

    {
      'uri': str,
      'label': str,
      'outgoing': [{'predicate': str, 'pred_label': str, 'object': str, 'obj_label': str}, …],
      'incoming': [{'subject': str, 'subj_label': str, 'predicate': str, 'pred_label': str}, …],
      'outgoing_count': int,
      'incoming_count': int,
    }
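The one-hop probe can be sketched over a toy triple list (`toy_probe` is a hypothetical illustration; the real function also resolves labels and prefixed URIs):

``` python
# Sketch of bounded one-hop probing: gather edges touching `uri`,
# count totals, but only return up to `limit` per direction.
triples = [
    ('ex:Activity', 'rdf:type', 'owl:Class'),
    ('ex:used', 'rdfs:domain', 'ex:Activity'),
    ('ex:endedAtTime', 'rdfs:domain', 'ex:Activity'),
]

def toy_probe(triples, uri, direction='both', limit=20):
    "Bounded one-hop view: outgoing and incoming edges for `uri`."
    out_all = [(p, o) for s, p, o in triples if s == uri]
    in_all = [(s, p) for s, p, o in triples if o == uri]
    return {'uri': uri,
            'outgoing': out_all[:limit] if direction in ('out', 'both') else [],
            'incoming': in_all[:limit] if direction in ('in', 'both') else [],
            'outgoing_count': len(out_all),
            'incoming_count': len(in_all)}

probe = toy_probe(triples, 'ex:Activity')
print(probe['outgoing_count'], probe['incoming_count'])  # 1 2
```

Note that counts report the totals while the lists stay truncated, so the caller always knows how much was elided.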

------------------------------------------------------------------------

### find_path

``` python

def find_path(
    meta:GraphMeta, source:str, target:str, max_depth:int=2, limit:int=10
)->list:

```

*Find predicates connecting two entities using BFS.*

Answers “What predicates connect A to B?”

Args:

- `meta`: GraphMeta to search
- `source`: Source entity URI (supports prefixed forms like `prov:Activity`)
- `target`: Target entity URI (supports prefixed forms like `prov:Entity`)
- `max_depth`: Maximum path length (default: 2)
- `limit`: Maximum paths to return

Returns: List of paths; each path is a list of steps:
`[{'from': uri, 'predicate': uri, 'to': uri, 'direction': 'out'|'in'}, …]`
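The BFS idea can be sketched over a toy edge list, traversing edges in both directions and recording each step with the same shape as above (`toy_find_path` is an illustrative single-path variant, not the module's implementation):

``` python
from collections import deque

# BFS over toy triples; edges are walkable forwards ('out') and
# backwards ('in'), mirroring the step dicts find_path() returns.
edges = [
    ('ex:Activity', 'owl:disjointWith', 'ex:Entity'),
    ('ex:used', 'rdfs:domain', 'ex:Activity'),
    ('ex:used', 'rdfs:range', 'ex:Entity'),
]

def toy_find_path(edges, source, target, max_depth=2):
    "Return the first path found as a list of step dicts, or None."
    queue, seen = deque([(source, [])]), {source}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        if len(path) >= max_depth:
            continue
        steps = []
        for s, p, o in edges:
            if s == node:
                steps.append((o, {'from': s, 'predicate': p, 'to': o, 'direction': 'out'}))
            if o == node:
                steps.append((s, {'from': o, 'predicate': p, 'to': s, 'direction': 'in'}))
        for nxt, step in steps:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [step]))
    return None

path = toy_find_path(edges, 'ex:Activity', 'ex:Entity')
print(path)  # one step: owl:disjointWith, direction 'out'
```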

------------------------------------------------------------------------

### predicate_frequency

``` python

def predicate_frequency(
    meta:GraphMeta, limit:int=20, predicate_type:str=None
)->list:

```

*Get predicates ranked by frequency of use.*

Args:

- `meta`: GraphMeta to analyze
- `limit`: Maximum predicates to return
- `predicate_type`: Optional filter - 'object', 'datatype', or 'annotation'

Returns: List of dicts:
`[{'predicate': str, 'label': str, 'count': int, 'sample_subject': str, 'sample_object': str}, …]`
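The core of the ranking is a frequency count over predicate positions, sketched here with `collections.Counter` (`toy_predicate_frequency` is illustrative; the real function also attaches labels and sample triples):

``` python
from collections import Counter

# Rank predicates by how often they appear in toy triples.
triples = [
    ('ex:A', 'rdfs:label', 'Alpha'),
    ('ex:B', 'rdfs:label', 'Beta'),
    ('ex:A', 'rdf:type', 'owl:Class'),
]

def toy_predicate_frequency(triples, limit=20):
    "Rank predicates by use count, most frequent first."
    counts = Counter(p for s, p, o in triples)
    return [{'predicate': p, 'count': n} for p, n in counts.most_common(limit)]

print(toy_predicate_frequency(triples))
# → [{'predicate': 'rdfs:label', 'count': 2}, {'predicate': 'rdf:type', 'count': 1}]
```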

``` python
# Test probe_relationships
activity_uri = 'http://www.w3.org/ns/prov#Activity'
probe_result = probe_relationships(meta, activity_uri, limit=5)

print(f"Probing: {probe_result['label']}")
print(f"Outgoing relationships: {probe_result['outgoing_count']} total, showing {len(probe_result['outgoing'])}")
for rel in probe_result['outgoing'][:3]:
    print(f"  --{rel['pred_label']}--> {rel['obj_label']}")

print(f"\nIncoming relationships: {probe_result['incoming_count']} total, showing {len(probe_result['incoming'])}")
for rel in probe_result['incoming'][:3]:
    print(f"  <--{rel['pred_label']}-- {rel['subj_label']}")

# Test find_path
# Find path between two PROV classes
entity_uri = 'http://www.w3.org/ns/prov#Entity'
paths = find_path(meta, activity_uri, entity_uri, max_depth=2, limit=3)

print(f"\n\nPaths from Activity to Entity:")
if paths:
    for i, path in enumerate(paths, 1):
        print(f"Path {i}:")
        for step in path:
            direction_sym = '-->' if step['direction'] == 'out' else '<--'
            pred_label = meta.labels.get(step['predicate'], step['predicate'])
            print(f"  {direction_sym} {pred_label}")
else:
    print("  No paths found")
```

    Probing: Activity
    Outgoing relationships: 10 total, showing 5
      --http://www.w3.org/1999/02/22-rdf-syntax-ns#type--> http://www.w3.org/2002/07/owl#Class
      --http://www.w3.org/2000/01/rdf-schema#isDefinedBy--> W3C PROVenance Interchange Ontology (PROV-O)
      --http://www.w3.org/2000/01/rdf-schema#label--> Activity

    Incoming relationships: 34 total, showing 5
      <--http://www.w3.org/2000/01/rdf-schema#range-- activity
      <--http://www.w3.org/1999/02/22-rdf-syntax-ns#first-- n0fe42a034f254bbc9cc97fe482231e2cb5
      <--http://www.w3.org/2000/01/rdf-schema#domain-- endedAtTime


    Paths from Activity to Entity:
    Path 1:
      --> http://www.w3.org/2002/07/owl#disjointWith

``` python
# Test predicate_frequency
print("Top 10 predicates by frequency:")
freq_results = predicate_frequency(meta, limit=10)
for r in freq_results:
    print(f"  {r['count']:4d} uses - {r['label']}")

# Test filtering by predicate type
print("\nTop 5 object properties:")
obj_props = predicate_frequency(meta, limit=5, predicate_type='object')
for r in obj_props:
    print(f"  {r['count']:4d} uses - {r['label']}")
```

    Top 10 predicates by frequency:
       184 uses - http://www.w3.org/2000/01/rdf-schema#isDefinedBy
       175 uses - http://www.w3.org/1999/02/22-rdf-syntax-ns#type
       161 uses - http://www.w3.org/2000/01/rdf-schema#label
       107 uses - http://www.w3.org/2000/01/rdf-schema#comment
       104 uses - http://www.w3.org/ns/prov#category
        85 uses - http://www.w3.org/ns/prov#component
        64 uses - http://www.w3.org/2000/01/rdf-schema#domain
        63 uses - http://www.w3.org/ns/prov#definition
        60 uses - http://www.w3.org/2000/01/rdf-schema#range
        55 uses - http://www.w3.org/2000/01/rdf-schema#subClassOf

    Top 5 object properties:
         7 uses - wasDerivedFrom
         3 uses - wasRevisionOf
         3 uses - specializationOf

## Additional Exploration Functions

Functions discovered in `dialogs/inspect_tools.ipynb` for deeper
ontology exploration.

------------------------------------------------------------------------

### ont_describe

``` python

def ont_describe(
    ont:str, uri:str, name:str='desc', ns:dict=None, limit:int=100
)->str:

```

*Get triples about a URI, store in namespace.*

Returns both triples where URI is subject and where it’s object.

Args:

- `ont`: Name of ontology variable in namespace
- `uri`: URI to describe
- `name`: Variable name for storing the result
- `ns`: Namespace dict
- `limit`: Maximum triples to return per direction (default: 100)

Returns: Summary string

------------------------------------------------------------------------

### ont_meta

``` python

def ont_meta(
    ont:str, name:str='meta', ns:dict=None
)->str:

```

*Extract ontology metadata (prefixes, annotation predicates, imports).*

Args:

- `ont`: Name of ontology variable in namespace
- `name`: Variable name for storing the result
- `ns`: Namespace dict

Returns: Summary string

------------------------------------------------------------------------

### ont_roots

``` python

def ont_roots(
    ont:str, name:str='roots', ns:dict=None
)->str:

```

*Find root classes (no declared superclass), store in namespace.*

Args:

- `ont`: Name of ontology variable in namespace
- `name`: Variable name for storing the result
- `ns`: Namespace dict

Returns: Summary string

------------------------------------------------------------------------

### setup_ontology_context

``` python

def setup_ontology_context(
    path:str | pathlib.Path, ns:dict, name:str='ont', dataset_meta:NoneType=None
)->str:

```

*Load ontology and create meta-graph for RLM use.*

This sets up both the Graph and GraphMeta in the namespace.

NEW: Dataset integration - if `dataset_meta` is provided, the ontology is
automatically mounted into the dataset as the `onto/<name>` graph.

Args:

- `path`: Path to ontology file
- `ns`: Namespace dict
- `name`: Base name for graph handle
- `dataset_meta`: Optional DatasetMeta for auto-mounting

Returns: Summary string

## Ontology Sense Building

### What is a “Sense Document”?

When an LLM needs to work with an ontology, loading the entire graph
into context is wasteful and may exceed limits. Instead, we build a
**sense document** - a compact summary that captures:

- **Formalism**: Which OWL/RDFS/SKOS constructs are used
- **Metadata structure**: Which annotation properties exist (labels,
  descriptions, etc.)
- **Domain/scope**: What the ontology is about
- **Navigation hints**: How to effectively search and traverse

This approach was developed through experiments in
`dialogs/inspect_tools.ipynb` exploring progressive disclosure patterns.

### Why Sense Building Matters

**Design Decision Response** (from ISSUE_ANALYSIS.md):

> *GraphMeta.labels only uses rdfs:label* - This is a limitation because
> different ontologies use different annotation properties:
>
> - `rdfs:label`, `skos:prefLabel`, `skos:altLabel` for labels
> - `rdfs:comment`, `skos:definition`, `dcterms:description` for descriptions
> - `vann:preferredNamespacePrefix`, `owl:versionInfo` for metadata

Rather than hardcode support for all possible properties,
`build_sense()` **detects which annotation properties this specific
ontology uses**, enabling intelligent search.

### References

- [Widoco Metadata
  Guide](https://github.com/dgarijo/Widoco/blob/master/doc/metadataGuide/guide.md) -
  Recommended ontology metadata properties
- [Anthropic: Building Effective
  Agents](https://www.anthropic.com/engineering/building-effective-agents) -
  Orchestrator-workers pattern
- [Anthropic: Progressive
  Disclosure](https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents) -
  Context engineering strategy

### Implementation Pattern

The sense-building workflow (not agentic):

1. **Metadata collection** - Extract prefixes, detect annotation
   predicates, find ontology-level metadata
2. **Structural exploration** - Build hierarchy, property signatures,
   detect OWL axioms
3. **LLM synthesis** - One LLM call to identify domain, patterns, and
   navigation hints
4. **Structured storage** - Store as a retrievable AttrDict in the REPL
   namespace

------------------------------------------------------------------------

### build_sense

``` python

def build_sense(
    path:str, name:str='sense', ns:dict=None
)->str:

```

*Build ontology sense document using workflow + LLM synthesis.*

Detects annotation properties per the Widoco metadata guide:

- Label properties: rdfs:label, skos:prefLabel, skos:altLabel, dcterms:title
- Description properties: rdfs:comment, skos:definition, dcterms:description
- Ontology metadata: vann:preferredNamespacePrefix, owl:versionInfo, etc.

This function:

1. Loads the ontology and extracts metadata/roots programmatically
2. Detects which annotation properties are actually used
3. Builds the hierarchy (2 levels), property info, and characteristics
4. Makes one LLM call to synthesize domain/scope/patterns/hints
5. Returns a structured AttrDict stored in the namespace

Args:

- `path`: Path to ontology file
- `name`: Variable name for sense document (default: 'sense')
- `ns`: Namespace dict

Returns: Summary string

``` python
# Test build_sense with PROV ontology
# Note: Requires API key, marked eval:false to avoid CI failures

test_ns = {}
result = build_sense('ontology/prov.ttl', name='prov_sense', ns=test_ns)
print(result)
print()

# Inspect the sense document
sense = test_ns['prov_sense']
print(f"Ontology: {sense.ont}")
print(f"Ontology Metadata: {sense.ont_metadata}")
print(f"Stats: {sense.stats}")
print()

# NEW: Show detected annotation properties
print(f"Label properties detected: {sense.label_properties}")
print(f"Description properties detected: {sense.description_properties}")
print()

print(f"Roots: {sense.roots}")
print(f"Root branches: {list(sense.hier.keys())}")
print(f"Top properties (first 3): {sense.top_props[:3]}")
print(f"Property characteristics: {sense.prop_chars}")
print(f"OWL constructs: {sense.owl_constructs}")
print(f"URI pattern: {sense.uri_pattern}")
print()
print("LLM Summary:")
print(sense.summary)
```

## Structured Sense Data

**NEW**: JSON-schemaed sense data for ReasoningBank integration.

The original `build_sense()` produces free-form prose in the `summary`
field. The new system creates:

- **sense_card**: Compact, always-injected structured data (~500 chars)
- **sense_brief**: Detailed sections retrieved when needed (~2000 chars)
- **Grounding validation**: All URIs must exist in the ontology

See `docs/ont-sense-improvements.md` for full specification.

------------------------------------------------------------------------

### validate_sense_grounding

``` python

def validate_sense_grounding(
    sense:dict, meta:GraphMeta
)->dict:

```

*Validate all URIs in sense exist in the ontology.*

Args:

- `sense`: Sense document with sense_card (and optional sense_brief)
- `meta`: GraphMeta to validate against

Returns: `{'valid': bool, 'errors': list[str], 'error_count': int}`
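The grounding check can be sketched against a plain set of known URIs (`toy_validate_grounding` and the miniature `sense` dict are hypothetical; the real check walks GraphMeta's indexes and all URI-bearing fields of the card and brief):

``` python
# Grounding sketch: every URI referenced by the sense card must exist
# in the known-URI set, otherwise it is reported as an error.
known_uris = {'http://www.w3.org/ns/prov#Activity'}
sense = {'sense_card': {'key_classes': [
    {'uri': 'http://www.w3.org/ns/prov#Activity', 'label': 'Activity'},
    {'uri': 'http://example.org/Bogus', 'label': 'Bogus'},
]}}

def toy_validate_grounding(sense, known_uris):
    "Return a dict shaped like validate_sense_grounding()'s result."
    errors = [c['uri'] for c in sense['sense_card']['key_classes']
              if c['uri'] not in known_uris]
    return {'valid': not errors, 'errors': errors, 'error_count': len(errors)}

print(toy_validate_grounding(sense, known_uris))
```

This is what makes the "100% grounded, no hallucinated URIs" guarantee checkable rather than asserted.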

------------------------------------------------------------------------

### build_sense_structured

``` python

def build_sense_structured(
    path:str, name:str='sense', ns:dict=None
)->dict:

```

*Build structured sense document with card and brief.*

Returns JSON-schemaed output instead of free-form prose.

Args:

- `path`: Path to ontology file
- `name`: Variable name for sense document
- `ns`: Namespace dict

Returns: Dict with 'sense_card', 'sense_brief', and '_validation' keys

## Integration with RLM

Helper to setup ontology context for `rlm_run()`.

``` python
# Test RLM with structured sense context
# Note: Requires API key, marked eval:false to avoid CI failures

from rlm.core import rlm_run

print("=" * 70)
print(" RLM INTEGRATION TEST: Structured Sense as Context")
print("=" * 70)

# Setup: Build structured sense for PROV ontology
ns = {}
sense_result = build_sense_structured('ontology/prov.ttl', name='prov_sense', ns=ns)

# Get formatted sense card as context
sense_context = format_sense_card(sense_result['sense_card'])

print(f"\n📋 Context Type: Structured Sense Card")
print(f"   Size: {len(sense_context)} chars")
print(f"   Grounding: {'PASS' if sense_result['_validation']['valid'] else 'FAIL'}")

# Test query
query = "What is the Activity class in PROV?"

print(f"\n❓ Query: {query}")
print("\n" + "-" * 70)
print("Running RLM with sense card context...")
print("-" * 70)

# Run RLM with sense context
answer, iterations, final_ns = rlm_run(
    query,
    sense_context,
    ns=ns,
    max_iters=5
)

print(f"\n✓ Answer: {answer}")
print(f"\n📊 Iterations: {len(iterations)}")
print(f"   Max allowed: 5")

# Show iteration details
print(f"\n🔍 Iteration Breakdown:")
for i, iteration in enumerate(iterations, 1):
    print(f"   {i}. {iteration.get('action', 'unknown action')}")

print("\n" + "=" * 70)
print(" TEST RESULT")
print("=" * 70)

if len(iterations) <= 5:
    print(f"\n✓ PASS: RLM converged in {len(iterations)} iterations")
    print(f"  The structured sense card provides sufficient context for RLM")
else:
    print(f"\n✗ FAIL: RLM did not converge within iteration limit")

print("\n💡 Benefits of Structured Sense:")
print("  • Compact context (~600 chars vs full ontology)")
print("  • 100% grounded (no hallucinated URIs)")
print("  • Ontology-aware (detects label/description predicates)")
print("  • Progressive disclosure ready (can add hierarchy brief)")
```

### Test RLM Integration with Structured Sense

Test if `rlm_run()` works with the new structured sense documents as
context.

``` python
# Test formatting functions
print("=" * 60)
print("FORMATTING FUNCTIONS TEST")
print("=" * 60)

# Test format_sense_card
formatted_card = format_sense_card(card)
print(f"\n✓ Formatted Sense Card ({len(formatted_card)} chars):")
print("-" * 60)
print(formatted_card)
print("-" * 60)

# Test format_sense_brief_section
formatted_hier = format_sense_brief_section(brief, 'hierarchy_overview')
print(f"\n✓ Formatted Hierarchy Overview ({len(formatted_hier)} chars):")
print("-" * 60)
print(formatted_hier)
print("-" * 60)

# Test get_sense_context
query = "What are the subclasses of Activity?"
context = get_sense_context(query, result)
print(f"\n✓ Auto-detected Context for: '{query}'")
print(f"  Context length: {len(context)} chars")
print(f"  Includes hierarchy: {('Hierarchy Overview' in context)}")

print(f"\n{'=' * 60}")
print("FORMATTING TESTS PASSED")
print("=" * 60)
```

``` python
# Test build_sense_structured with PROV ontology
test_ns = {}
result = build_sense_structured('ontology/prov.ttl', name='prov_sense_structured', ns=test_ns)

print("=" * 60)
print("STRUCTURED SENSE TEST")
print("=" * 60)

# Check validation
print(f"\n✓ Validation: {'PASS' if result['_validation']['valid'] else 'FAIL'}")
if not result['_validation']['valid']:
    print(f"  Errors: {result['_validation']['errors']}")
else:
    print("  All URIs grounded in ontology")

# Check sense_card
card = result['sense_card']
print(f"\n✓ Sense Card:")
print(f"  Ontology ID: {card['ontology_id']}")
print(f"  Triple count: {card['triple_count']:,}")
print(f"  Class count: {card['class_count']}")
print(f"  Property count: {card['property_count']}")
print(f"  Label predicates: {len(card['label_predicates'])}")
print(f"  Key classes: {len(card['key_classes'])}")
print(f"  Key properties: {len(card['key_properties'])}")
print(f"  Quick hints: {len(card['quick_hints'])}")

# Verify key_classes are grounded
print(f"\n✓ Key Classes (grounded URIs):")
for cls in card['key_classes'][:3]:
    print(f"  - {cls['label']}")
    print(f"    URI: {cls['uri'][:50]}...")

# Verify key_properties are grounded
print(f"\n✓ Key Properties (grounded URIs):")
for prop in card['key_properties'][:3]:
    print(f"  - {prop['label']}: {prop['role']}")
    print(f"    URI: {prop['uri'][:50]}...")

# Check sense_brief
brief = result['sense_brief']
print(f"\n✓ Sense Brief:")
print(f"  Hierarchy roots: {len(brief['hierarchy_overview']['root_classes'])}")
print(f"  Max depth: {brief['hierarchy_overview']['max_depth']}")

print(f"\n{'=' * 60}")
print("ALL TESTS PASSED")
print("=" * 60)
```

------------------------------------------------------------------------

### get_sense_context

``` python

def get_sense_context(
    query:str, sense:dict
)->str:

```

*Auto-detect and return relevant sense sections for a query.*

Args:

- `query`: User query
- `sense`: Full sense document (with sense_card and sense_brief)

Returns: Formatted context string

------------------------------------------------------------------------

### format_sense_brief_section

``` python

def format_sense_brief_section(
    brief:dict, section:str
)->str:

```

*Format a specific brief section.*

Args:

- `brief`: sense_brief dict
- `section`: Section name (e.g., 'hierarchy_overview', 'patterns')

Returns: Formatted markdown string

------------------------------------------------------------------------

### format_sense_card

``` python

def format_sense_card(
    card:dict
)->str:

```

*Format sense card for context injection (~500 chars).*

Args:

- `card`: sense_card dict

Returns: Formatted markdown string
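The rendering step can be sketched as a few joined lines of markdown (`toy_format_sense_card` and the truncated card fields are illustrative only; the real card carries more fields, such as hints and predicates):

``` python
# Sketch: render a card dict into a compact markdown block for injection.
card = {'ontology_id': 'prov', 'triple_count': 1664,
        'class_count': 59, 'property_count': 89,
        'key_classes': [{'label': 'Activity'}, {'label': 'Entity'}]}

def toy_format_sense_card(card):
    "Compact markdown suitable for direct context injection."
    lines = [f"## Ontology: {card['ontology_id']}",
             f"{card['triple_count']:,} triples, "
             f"{card['class_count']} classes, {card['property_count']} properties",
             "Key classes: " + ', '.join(c['label'] for c in card['key_classes'])]
    return '\n'.join(lines)

print(toy_format_sense_card(card))
```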

``` python
# NOTE: setup_ontology_context() is defined above in cell-27
# This cell previously contained a duplicate definition
```

``` python
# Test setup for RLM
test_ns = {}
result = setup_ontology_context('ontology/prov.ttl', test_ns, name='prov')
print(result)
print()
print("Namespace contains:")
for k in test_ns.keys():
    print(f"  {k}: {type(test_ns[k]).__name__}")
```

    Loaded 1664 triples from prov.ttl into 'prov'
    Created meta-graph 'prov_meta' with 59 classes, 89 properties

    Namespace contains:
      prov: Graph
      prov_meta: GraphMeta
      prov_graph_stats: partial
      prov_search_by_label: partial
      prov_describe_entity: partial
      prov_search_entity: partial
      prov_probe_relationships: partial
      prov_find_path: partial
      prov_predicate_frequency: partial
      graph_stats: partial
      search_by_label: partial
      describe_entity: partial
      search_entity: partial
      probe_relationships: partial
      find_path: partial
      predicate_frequency: partial

``` python
# Test new exploration functions
# Reuse the test_ns from previous cell with loaded prov ontology
# Note: prov_meta is a GraphMeta object in test_ns

# Test that new indexes work
meta = test_ns['prov_meta']
assert len(meta.by_label) > 0  # inverted label index
assert len(meta.subs) > 0 or len(meta.supers) > 0  # class hierarchy
print(f"✓ New GraphMeta indexes work: by_label has {len(meta.by_label)} entries")

# Test ont_describe (need to pass GraphMeta object as namespace entry)
result = ont_describe('prov_meta', 'http://www.w3.org/ns/prov#Activity', name='activity_desc', ns=test_ns)
assert 'activity_desc' in test_ns
print(f"✓ ont_describe works: {result}")

# Test ont_meta  
result = ont_meta('prov_meta', name='prov_metadata', ns=test_ns)
assert 'prov_metadata' in test_ns
print(f"✓ ont_meta works: {result}")

# Test ont_roots
result = ont_roots('prov_meta', name='prov_roots', ns=test_ns)
assert 'prov_roots' in test_ns
print(f"✓ ont_roots works: {result}")
```

    ✓ New GraphMeta indexes work: by_label has 156 entries
    ✓ ont_describe works: Stored 10 + 34 triples about 'http://www.w3.org/ns/prov#Activity' into 'activity_desc'
    ✓ ont_meta works: Stored metadata into 'prov_metadata': 29 prefixes, 16 annotation predicates, 9 imports
    ✓ ont_roots works: Stored 10 root classes into 'prov_roots'

## Test with RLM

Now let’s test asking a question about the PROV ontology using
`rlm_run()`.

``` python
from rlm.core import rlm_run

# Setup namespace with PROV ontology
ns = {}
setup_ontology_context('ontology/prov.ttl', ns, name='prov')

# Ask a question
# The context is the GraphMeta summary, not the full graph
context = ns['prov_meta'].summary()

answer, iterations, ns = rlm_run(
    "What is the Activity class in the PROV ontology?",
    context,
    ns=ns,
    max_iters=3
)

print(f"Answer: {answer}")
print(f"Iterations: {len(iterations)}")
```

## Sense Validation Gate

Validate sense data before RLM operations (precondition check).

------------------------------------------------------------------------

### validate_sense_precondition

``` python

def validate_sense_precondition(
    sense:dict, meta
)->dict:

```

*Gate 0: Validate sense data before RLM operations.*

Checks:

- URI grounding (all URIs exist in the ontology)
- Card size (under 800 chars)
- Required fields present

Args:

- `sense`: Sense document from build_sense_structured()
- `meta`: GraphMeta object for grounding validation

Returns: Dictionary with a proceed flag and validation details

``` python
# Test sense validation gate (requires real ontology)
from rlm.ontology import setup_ontology_context, build_sense_structured

print("Test: validate_sense_precondition()")
print("=" * 60)

ns = {}
setup_ontology_context('ontology/prov.ttl', ns, name='prov')
sense = build_sense_structured('ontology/prov.ttl', name='prov_sense', ns=ns)

result = validate_sense_precondition(sense, ns['prov_meta'])

print(f"Proceed: {result['proceed']}")
print(f"Grounding valid: {result['grounding_valid']}")
print(f"Card size: {result['card_size']} chars (ok: {result['card_size_ok']})")
print(f"Has required fields: {result['has_required_fields']}")

if result['proceed']:
    print("\n✓ Sense validation gate passed")
else:
    print(f"\n✗ Validation failed: {result['reason']}")
```
