Why This Comparison

This report compares KPG Run (within KPG Platform) with commercial and open-source reference tools to answer two questions:

  1. How numerically consistent are KPG Run results against trusted reference tools?
  2. When assumptions differ, what explains the observed gaps?

The primary goal is to provide quantitative evidence of technical credibility and establish a baseline for future improvements.

What Was Compared

Target tools

Tool               Type         Why included
PLEXOS             Commercial   Practical UC reference widely used in planning and operations
PSS/E              Commercial   Industry-standard transmission power-flow benchmark for AC behavior
UnitCommitment.jl  Open-source  Transparent and reproducible UC benchmark
PowerModels.jl     Open-source  Transparent and reproducible DCOPF/ACOPF benchmark

Comparison scope

KPG Run function  Commercial comparison  Open-source comparison
UC                PLEXOS                 UnitCommitment.jl
DCOPF             -                      PowerModels.jl (DC)
ACOPF             PSS/E                  PowerModels.jl (AC)

Baseline versions and period

  • Baseline versions: KPG193 v1.5, KPG Run v1.0, KPG View v1.0
  • Main body period: Day 1-90
  • Full-period appendix window: Day 1-365

Key Results (Day 1-90)

  • UC vs UnitCommitment.jl: average objective absolute difference 0.0447%, average status mismatch rate 1.17%
  • DCOPF vs PowerModels.jl: no objective absolute difference observed (0.0%)
  • ACOPF vs PowerModels.jl: average objective absolute difference 0.0303% (max 0.105%, Day 29)
  • UC vs PLEXOS: average status mismatch 28.1%, average dispatch MAE 207.7 MW (primarily due to modeling and temporal-linkage differences)
  • ACOPF vs PSS/E: high alignment under simplified HVDC treatment (equiv_load), with larger gaps under native two-terminal DC modeling
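The headline metrics above (objective absolute difference, status mismatch rate, dispatch MAE) are standard comparison measures; a minimal sketch of how they can be computed, with hypothetical helper names and illustrative values rather than the report's data:

```python
def objective_abs_diff_pct(obj_test: float, obj_ref: float) -> float:
    """Absolute objective difference as a percentage of the reference objective."""
    return abs(obj_test - obj_ref) / abs(obj_ref) * 100.0

def status_mismatch_rate(status_test: list[int], status_ref: list[int]) -> float:
    """Share of (unit, period) entries whose 0/1 commitment status differs, in percent."""
    mismatches = sum(1 for a, b in zip(status_test, status_ref) if a != b)
    return mismatches / len(status_ref) * 100.0

def dispatch_mae(p_test: list[float], p_ref: list[float]) -> float:
    """Mean absolute error of dispatched power (MW)."""
    return sum(abs(a - b) for a, b in zip(p_test, p_ref)) / len(p_ref)

# Illustrative values only (not taken from the report)
print(round(objective_abs_diff_pct(1000.447, 1000.0), 4))  # 0.0447
print(status_mismatch_rate([1, 1, 0, 0], [1, 0, 0, 0]))    # 25.0
print(dispatch_mae([100.0, 200.0], [110.0, 190.0]))        # 10.0
```

Per-day values of these metrics are what the averages and maxima above summarize over the Day 1-90 window.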

Full Report (Embedded)

Open KPG Comparison Report PDF

The full report PDF is provided in English; this page adds a quick summary and reproducibility guidance.

Reproducibility Assets

Comparison Data and Conversion Code

The comparison workflow is organized in the KPG Platform Converters Repository, which keeps conversion scripts lightweight and reproducible while loading base datasets from tagged KPG TestGrid releases.

Converter Modules Used in This Report

Module path                Purpose                                           Main script(s)
converters/plexos          Build PLEXOS profile CSV inputs from KPG dataset  build_plexos_profiles.jl
converters/psse            Convert MATPOWER .mat case to PSS/E .raw          convert_mat_to_raw.m, save2psse.m
converters/unitcommitment  UC reference-run pipeline                         run_uc.jl
converters/powermodels     OPF reference-run pipeline                        run_opf.jl

Output Artifacts (PSS/E and PLEXOS)

The repository writes generated artifacts to outputs/ so the same pipeline can be rerun with pinned dataset tags. For PLEXOS, this workflow generates profile CSV inputs; the full XML model is provided separately.

Expected paths used for this report include:

  • outputs/plexos/profile/demand/PLEXOS_demandP.csv
  • outputs/plexos/profile/renewables/PLEXOS_solarP.csv
  • outputs/plexos/profile/renewables/PLEXOS_windP.csv
  • outputs/plexos/profile/renewables/PLEXOS_hydroP.csv
  • outputs/plexos/profile/renewables_ratio/PLEXOS_solarRatio.csv
  • outputs/plexos/profile/renewables_ratio/PLEXOS_windRatio.csv
  • outputs/plexos/profile/renewables_ratio/PLEXOS_hydroRatio.csv
  • outputs/psse/KPG193_ver1_5.raw
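Before a rerun or when auditing results, it can help to confirm the expected artifacts are in place; a minimal sketch (paths taken from the list above, intended to run from the repository root; the helper name is hypothetical):

```python
from pathlib import Path

# Expected artifact paths listed in this report's workflow
EXPECTED = [
    "outputs/plexos/profile/demand/PLEXOS_demandP.csv",
    "outputs/plexos/profile/renewables/PLEXOS_solarP.csv",
    "outputs/plexos/profile/renewables/PLEXOS_windP.csv",
    "outputs/plexos/profile/renewables/PLEXOS_hydroP.csv",
    "outputs/plexos/profile/renewables_ratio/PLEXOS_solarRatio.csv",
    "outputs/plexos/profile/renewables_ratio/PLEXOS_windRatio.csv",
    "outputs/plexos/profile/renewables_ratio/PLEXOS_hydroRatio.csv",
    "outputs/psse/KPG193_ver1_5.raw",
]

def missing_artifacts(paths, root="."):
    """Return the subset of expected paths that do not exist under root."""
    base = Path(root)
    return [p for p in paths if not (base / p).exists()]

if __name__ == "__main__":
    missing = missing_artifacts(EXPECTED)
    if missing:
        print("Missing artifacts:")
        for p in missing:
            print(" -", p)
    else:
        print("All expected artifacts present.")
```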

Regeneration Commands

# PLEXOS profile CSV outputs
julia converters/plexos/build_plexos_profiles.jl \
  --dataset-dir data/external/kpg-testgrid/KPG193_ver1_5 \
  --output-dir outputs/plexos

# PSS/E RAW output
matlab -batch "addpath('converters/psse', '-begin'); convert_mat_to_raw('input.mat', 'outputs/psse/KPG193_ver1_5.raw')"
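A regenerated profile CSV can also be sanity-checked before it is handed to PLEXOS; a minimal sketch, assuming an hourly profile with one header row and a leading timestamp/index column (this column layout is an assumption, not the repository's documented schema):

```python
import csv
import io

def check_profile_csv(text: str, expected_rows: int = 8760) -> list[str]:
    """Return a list of problems found in an hourly profile CSV (empty if it looks OK)."""
    problems = []
    rows = list(csv.reader(io.StringIO(text)))
    if not rows:
        return ["file is empty"]
    header, body = rows[0], rows[1:]
    if len(body) != expected_rows:
        problems.append(f"expected {expected_rows} data rows, found {len(body)}")
    for i, row in enumerate(body, start=2):
        if len(row) != len(header):
            problems.append(f"line {i}: {len(row)} fields, header has {len(header)}")
        for cell in row[1:]:  # assumes the first column is a timestamp/index
            try:
                float(cell)
            except ValueError:
                problems.append(f"line {i}: non-numeric value {cell!r}")
    return problems

# Tiny synthetic example with 2 expected rows
sample = "hour,demand_MW\n1,100.5\n2,98.0\n"
print(check_profile_csv(sample, expected_rows=2))  # []
```

A check like this catches truncated reruns (wrong row count) or units/encoding issues before they surface as spurious comparison gaps.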

Notes

  • Commercial comparisons (PLEXOS, PSS/E) require licensed environments.
  • Differences versus commercial tools should be interpreted in light of modeling assumptions (e.g., temporal linkage and HVDC treatment), not solver behavior alone.
  • Full-period reruns can be reproduced with the same commands and dataset-tag workflow.