HHW Brick: Heating Hot Water System Brick Schema Toolkit
A Python package for converting heating hot water system data to Brick Schema models with comprehensive validation and portable analytics
Overview
HHW Brick provides tools for converting building heating hot water system data to Brick Schema models and running portable analytics applications.
Core Capabilities:
- CSV-to-Brick Converter: Automated conversion from tabular BMS data to Brick Schema 1.4 RDF models
- Multi-Level Validators: Ontology, point count, equipment count, and structural pattern validation
- Portable Analytics: Building-agnostic applications that use SPARQL to auto-discover required sensors
Key Benefits:
- Interoperability: Standardized semantic models work across different BMS platforms
- Portability: Write analytics once, run on any qualified building without recoding
- Quality Assurance: Comprehensive validation ensures model correctness
The package supports five heating hot water system types (condensing boilers, non-condensing boilers, generic boilers, district hot water, district steam) and has been tested on 216 real buildings.
Installation
```shell
# For users (when published to PyPI)
pip install hhw-brick

# For development (current method)
git clone https://github.com/CenterForTheBuiltEnvironment/HHW_brick.git
cd HHW_brick
pip install -e .
```
Requirements: Python 3.8 or higher
📘 Detailed Installation Guide →
Quick Start Example
Convert, validate, and analyze a building in under 5 minutes:
Sample Data: For input data format examples, see https://doi.org/10.5061/dryad.t4b8gtj8n or use test data in tests/fixtures/
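The authoritative input schema is defined by the dataset linked above; as a rough sketch only (these column names are invented for illustration and are not the dataset's real schema), per-building metadata can be loaded like this:

```python
import csv
import io

# Hypothetical metadata rows embedded inline; the real column names come
# from the Dryad dataset linked above, not from this sketch.
sample_metadata = """building_tag,system_type,num_boilers
105,condensing_boiler,2
212,district_hot_water,0
"""

reader = csv.DictReader(io.StringIO(sample_metadata))
buildings = {row["building_tag"]: row for row in reader}
print(buildings["105"]["system_type"])  # condensing_boiler
```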
Step 1: Convert CSV to Brick Model
Transform your CSV data into a standardized Brick Schema RDF model with automatic system type detection and sensor mapping.
```python
from pathlib import Path
from hhw_brick import CSVToBrickConverter

# Setup paths
fixtures = Path("tests/fixtures")
metadata_csv = fixtures / "metadata.csv"
vars_csv = fixtures / "vars_available_by_building.csv"

# Convert CSV to Brick model
converter = CSVToBrickConverter()
graph = converter.convert_to_brick(
    metadata_csv=str(metadata_csv),
    vars_csv=str(vars_csv),
    building_tag="105",
    output_path="building_105.ttl",
)

print(f"✓ Converted: {len(graph)} RDF triples")
```
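The converter infers the system type during conversion. The package's actual detection heuristics are internal; as a minimal stand-alone sketch of keyword-based detection (the keywords and type labels here are assumptions, not the package's real values):

```python
from typing import Optional

# Keyword → system type mapping for the five supported HHW system types.
# These keywords and labels are illustrative assumptions; the converter's
# real detection logic is internal to the package.
SYSTEM_TYPE_KEYWORDS = {
    "non-condensing": "noncondensing_boiler",   # must be checked before "condensing"
    "condensing": "condensing_boiler",
    "district hot water": "district_hot_water",
    "district steam": "district_steam",
    "boiler": "generic_boiler",                 # fallback when no qualifier matches
}

def detect_system_type(description: str) -> Optional[str]:
    """Return the first matching system type for a free-text description."""
    text = description.lower()
    for keyword, system_type in SYSTEM_TYPE_KEYWORDS.items():
        if keyword in text:
            return system_type
    return None

print(detect_system_type("Non-condensing gas boiler plant"))  # noncondensing_boiler
```

Because dicts preserve insertion order, the more specific keyword ("non-condensing") is tested before its substring ("condensing").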
Step 2: Validate the Model
Ensure your Brick model is correct through multi-level validation: ontology compliance (SHACL), point counts, and equipment counts.
```python
from hhw_brick import BrickModelValidator
from hhw_brick.validation import GroundTruthCalculator

# 2a. Ontology validation (Brick Schema compliance)
validator = BrickModelValidator(use_local_brick=True)
result = validator.validate_ontology("building_105.ttl")
print(f"✓ Ontology valid: {result['valid']}")

# 2b. Generate ground truth from CSV
calculator = GroundTruthCalculator()
calculator.calculate(
    metadata_csv=str(metadata_csv),
    vars_csv=str(vars_csv),
    output_csv="ground_truth.csv",
)

# 2c. Validate point counts
validator = BrickModelValidator(ground_truth_csv_path="ground_truth.csv")
point_result = validator.validate_point_count("building_105.ttl")
print(f"✓ Point count match: {point_result['match']}")

# 2d. Validate equipment counts
equip_result = validator.validate_equipment_count("building_105.ttl")
print(f"✓ Equipment match: {equip_result.get('overall_success', False)}")
```
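Conceptually, point-count validation compares per-point-type counts extracted from the Brick model against the ground-truth counts derived from the CSV. A simplified stand-in for that comparison (the real validator's result dictionary may have a different shape):

```python
def compare_point_counts(model_counts, expected_counts):
    """Compare per-point-type counts from a Brick model against ground truth.

    Simplified stand-in for the package's point-count validation; the real
    validator's result format may differ.
    """
    all_types = set(model_counts) | set(expected_counts)
    mismatches = {
        t: {"expected": expected_counts.get(t, 0), "found": model_counts.get(t, 0)}
        for t in all_types
        if expected_counts.get(t, 0) != model_counts.get(t, 0)
    }
    return {"match": not mismatches, "mismatches": mismatches}

result = compare_point_counts(
    {"Supply_Water_Temperature_Sensor": 2, "Return_Water_Temperature_Sensor": 2},
    {"Supply_Water_Temperature_Sensor": 2, "Return_Water_Temperature_Sensor": 2},
)
print(result["match"])  # True
```

Reporting both the expected and found count per mismatched type makes batch validation reports easy to audit.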
Step 3: Run Analytics Application
Deploy portable analytics that automatically discover required sensors using SPARQL queries. Save configuration templates for easy customization.
```python
from hhw_brick import apps
import yaml

# Load application
app = apps.load_app("secondary_loop_temp_diff")

# Check if building qualifies
qualified = app.qualify("building_105.ttl")

if qualified:
    # Get the default config template
    config = apps.get_default_config("secondary_loop_temp_diff")

    # Save the config template for easy editing
    with open("app_config.yaml", "w") as f:
        yaml.dump(config, f, default_flow_style=False, sort_keys=False)
    print("✓ Config template saved: app_config.yaml")

    # Customize the config (or edit the YAML file directly)
    config["output"]["output_dir"] = "results/"
    config["output"]["generate_plots"] = True

    # Run the analysis
    results = app.analyze(
        "building_105.ttl",
        "tests/fixtures/TimeSeriesData/105hhw_system_data.csv",
        config,
    )
    print("✓ Analysis complete! Results in: results/")
```
That's it! From CSV to insights in 3 simple steps.
Key Features
🔄 Automated Conversion
Convert CSV data to Brick Schema 1.4 models with automatic system type detection and sensor mapping.
🏭 5 System Types
Support for condensing boilers, non-condensing boilers, generic boilers, district hot water, and district steam.
✅ Multi-Level Validation
Ontology (SHACL) + point counts + equipment counts + structural patterns ensure model quality.
📊 Portable Analytics
Applications use SPARQL to auto-discover sensors, working across any qualified building.
⚡ Batch Processing
Convert and validate 100+ buildings in parallel with progress tracking and error handling.
🎯 Ground Truth Validation
Independent validation using expected counts calculated directly from source CSV data.
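The batch workflow above can be sketched with `concurrent.futures`; `convert_building` here is a hypothetical stand-in for the package's per-building convert-and-validate call, not an API the package actually exposes:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def convert_building(tag):
    """Hypothetical stand-in for a per-building convert + validate call.

    In practice this would wrap CSVToBrickConverter().convert_to_brick(...)
    and the validator calls shown in the quick start.
    """
    return tag, "ok"

building_tags = [str(n) for n in range(101, 106)]
statuses = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(convert_building, tag): tag for tag in building_tags}
    for future in as_completed(futures):
        tag, status = future.result()   # surface per-building errors here
        statuses[tag] = status          # progress tracking hooks in as well
print(f"Converted {len(statuses)} buildings")  # Converted 5 buildings
```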
Documentation
🚀 Getting Started
Installation guide, 5-minute quick start tutorial, understanding Brick Schema, and CSV data format requirements.
Read Guide →

🔄 Conversion Guide
Single building conversion, batch processing, system type configuration, and sensor mapping customization.
Read Guide →

✅ Validation Guide
Ontology validation, ground truth comparison, structural pattern matching, and batch validation workflows.
Read Guide →

📱 Available Apps
Browse ready-to-use analytics applications: temperature differential analysis, efficiency monitoring, and more.
View Apps →

📊 User Guide
Application management, running apps, and detailed usage instructions for all features.
Read Guide →

👨‍💻 Developer Guide
Create your own applications: step-by-step tutorials, SPARQL queries, visualization, and best practices.
Read Guide →

Resources
Ready to Get Started?
Transform your heating hot water system data into standardized Brick models
Developed by Mingchen Li
Making building heating hot water system data standardized and analyzable