{"modules":[{"audience":"Quantitative analysts and CE model builders","category":"CE Architecture","color":"blue","description":"Understand how CE's data-derived combined models are structured, trained, and evaluated \u2014 including the role of component weights, fusion strategies, and provenance maintenance when multiple source models disagree.","difficulty":"Advanced","duration_min":120,"icon":"fa-layer-group","lessons":[{"content":"## The Case Against Single-Source Dependence\n\nCE is not built on a single authoritative climate model or a single authoritative economic model. It is built on a combining strategy \u2014 and understanding why requires understanding the failure modes of single-source dependence.\n\n### Three failure modes of single-source forecasting\n\n**Structural fragility**: Any single model embeds structural assumptions that may fail precisely when the forecast matters most. Models calibrated to stable historical periods are often worst at tails and transitions \u2014 exactly the conditions CE users care about.\n\n**Silent error propagation**: When all outputs derive from one source, errors that affect that source propagate invisibly. There is no second signal to flag disagreement. Combined models surface disagreement explicitly rather than hiding it inside one black-box output.\n\n**Over-precision at the integration step**: A single model produces a single-point output that looks confident by construction. That false precision is then carried forward into the integration step and ultimately into guidance \u2014 without any honest representation of underlying model uncertainty.\n\n### What CE's combining strategy provides\n\nCE's combining strategy provides three things a single-source approach cannot:\n\n1. **Signal diversity** \u2014 multiple components may agree on direction but disagree on magnitude; that disagreement is informative\n2. **Failure resilience** \u2014 if one component produces an outlier in a revision cycle, the others act as a check\n3. **Explicit provenance** \u2014 the contribution of each component is recorded, enabling retrospective review and weight revision\n\n| Property | Single Source | CE Combined Model |\n| --- | --- | --- |\n| Signal diversity | None \u2014 one output | Multiple \u2014 contributions preserved |\n| Failure signal | Silent | Flagged via disagreement |\n| Precision representation | Over-precise by construction | Calibrated via confidence scoring |\n| Weight revision | Not applicable | Supported via structured review |\n\n[Q: What is the fundamental reason CE uses combined models rather than relying on a single best source model, and what does the combining strategy enable that single-source forecasting cannot provide? | Single-source forecasting embeds structural fragility, propagates errors silently, and produces false precision. CE's combining strategy surfaces disagreement explicitly, provides failure resilience across components, and preserves provenance so weights can be reviewed and revised over time.]","duration_min":22,"lesson_id":"cm_01","title":"Why Combined Models Exist In CE"},{"content":"## How A Combined Model Is Built, Used, And Revised\n\nCE combined models do not emerge fully formed \u2014 they follow a structured lifecycle. 
Understanding this lifecycle is essential for analysts who build models and for users who need to interpret what a combined model's outputs represent at any given stage.\n\n### Stage 1: Component Registration\n\nBefore combining, each contributing model must be registered in CE with a defined asset type, data vintage, scenario scope, and known limitation record. Registration is not bureaucratic overhead \u2014 it is the precondition for honest provenance.\n\n**Key registration fields**:\n- Asset class (climate-physical, economic macro, industry-specific)\n- Data vintage (historical range and refresh cadence)\n- Scenario scope (what scenarios this asset can responsibly contribute to)\n- Known limitations (weaknesses that should limit or exclude this model under certain conditions)\n\n### Stage 2: Initial Weight Assignment\n\nWeights are not set by algorithm alone in CE \u2014 they are set by combining algorithmic calibration with analyst judgment about the task context. A model that performs well on aggregate metrics may still deserve lower weight for a specific industry or horizon where its structural assumptions are known to be weak.\n\n**Weight assignment criteria**:\n- Historical out-of-sample accuracy for similar conditions\n- Structural alignment with the target scenario family\n- Vintage freshness relative to current conditions\n- Known limitation flags triggered by the scenario\n\n### Stage 3: Running Integration And Confidence Scoring\n\nDuring active use, each component contributes its signal. The fusion layer computes the combined output and simultaneously computes the confidence score as a function of component agreement, scenario stability, and data freshness.\n\nThe confidence score is not a separate opinion \u2014 it is derived from the same information the combined model uses to generate the integrated output.\n\n### Stage 4: Review And Weight Revision\n\nAfter a defined operating period, the model undergoes a structured review. Historical output accuracy is compared against realized conditions, and component weights are revised if one or more components systematically over- or under-performed.\n\n**Review triggers**:\n- Scheduled periodic review (e.g., annual)\n- Post-event review triggered by a major realized scenario departure\n- Analyst-initiated review following an identified structural change in the target domain\n\n| Stage | Key Output | Who Is Responsible |\n| --- | --- | --- |\n| Registration | Component record with limitations | Model admin |\n| Weight assignment | Weight set with rationale | Senior analyst |\n| Running integration | Integrated outputs + confidence | CE pipeline |\n| Review and revision | Updated weights + revision notes | Domain lead |\n\n[Q: Why is the weight assignment step in a CE combined model not delegated entirely to an algorithm, and what additional input does CE require? | Algorithmic calibration on aggregate metrics may miss structural weaknesses that matter for specific industries or scenario horizons. CE requires analyst judgment about task context \u2014 including scenario family alignment, known limitation flags, and vintage freshness \u2014 to supplement statistical calibration.]","duration_min":28,"lesson_id":"cm_02","title":"The Four Stages Of A Combined Model Lifecycle"},{"content":"## What Weights Mean And What Disagreement Reveals\n\nComponent weights are the most visible part of a combined model, but they are often the most misunderstood. 
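Before unpacking those misunderstandings, it helps to see roughly where a weight sits in the arithmetic. The sketch below is a minimal invented illustration of weighted fusion with preserved contributions, not CE's fusion code.\n\n```python\n# Minimal invented illustration; not CE's fusion code.\nsignals = {'model_a': 0.8, 'model_b': 0.6, 'model_c': -0.2}  # component outputs\nweights = {'model_a': 0.5, 'model_b': 0.3, 'model_c': 0.2}   # conditional trust\n\n# The integrated value is a weighted sum, but each signed contribution\n# is kept so provenance can show what pulled the result where.\ncontributions = {k: weights[k] * signals[k] for k in signals}\nintegrated = sum(contributions.values())\n\nprint(round(integrated, 2))                # 0.54\nprint(round(contributions['model_c'], 2))  # -0.04: low weight, dissent still visible\n```\n\n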
A high weight does not mean a component is always right \u2014 it means that under current conditions and scenario context, that component has earned more influence in the fusion.\n\n### What weights represent\n\nWeights represent conditional trust. They are calibrated for the current scenario context and data vintage. A model that receives a 0.60 weight for an energy sector stress scenario may receive a 0.30 weight for an agriculture sector baseline \u2014 and that is correct behavior.\n\n**Mistakes analysts make with weights**:\n- Treating weights as fixed properties of a model rather than context-dependent\n- Assuming the highest-weighted component is the most important source of truth\n- Ignoring low-weighted components because they \"don't count much\"\n\n### Why low-weighted components still matter\n\nA low-weighted component that disagrees strongly with the consensus is more informative than a low-weighted component that agrees. Disagreement from a minority signal is often the first indicator of a regime change or a scenario mismatch.\n\nCE surfaces this through the disagreement metric \u2014 when component disagreement exceeds a threshold, the confidence score is penalized regardless of how strong the weighted average looks.\n\n### Reading the disagreement signal\n\n| Disagreement Level | Confidence Adjustment | Analyst Interpretation |\n| --- | --- | --- |\n| Low (components agree) | None \u2014 confidence reflects other factors | Scenario may be well-suited to existing weight set |\n| Moderate | Moderate reduction | At least one component is reading conditions differently |\n| High (strong minority outlier) | Significant reduction | Review whether the outlier is detecting a scenario mismatch |\n| Extreme (near-reversal) | Major reduction | Pause before publishing \u2014 investigate component alignment |\n\n### The disagreement review habit\n\nEvery time a combined model output shows high disagreement, the analyst review habit should be to ask: is the minority component seeing something the majority is missing, or is it a known weak component producing noise under these specific conditions? The answer determines whether to revise the scenario, revise the weight, or publish with a cautionary note.\n\n[Q: Why does strong disagreement from a low-weighted component in a CE combined model sometimes deserve more attention than consensus among the high-weighted components? | Because a low-weighted component that strongly disagrees may be detecting a scenario mismatch or regime change that the majority components are not structured to identify. CE penalizes confidence when disagreement is high, specifically to prevent strong consensus among dominant components from masking a meaningful minority signal.]","duration_min":24,"lesson_id":"cm_03","title":"Component Weights And Disagreement Signals"},{"content":"## Making Every Combined Model Output Auditable\n\nCE treats provenance as a first-class output. 
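To make that concrete before walking through the fields, a provenance record could be sketched as the following hypothetical structure; the field names mirror the list in this lesson but are illustrative rather than CE's actual schema.\n\n```python\n# Hypothetical provenance record; field names are illustrative.\nfrom dataclasses import dataclass\nfrom datetime import datetime\n\n@dataclass\nclass ProvenanceRecord:\n    components: dict[str, float]     # registered model -> weight at run time\n    contributions: dict[str, float]  # signed contribution per component\n    disagreement: float              # measured disagreement at integration\n    scenario_tag: str                # exact scenario envelope for the run\n    vintages: dict[str, str]         # data vintage per component\n    run_at: datetime                 # when the output was generated\n\n    def explains(self, integrated: float, tol: float = 1e-9) -> bool:\n        # The audit invariant: contributions must reconstruct the output.\n        return abs(sum(self.contributions.values()) - integrated) < tol\n```\n\n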
Any integrated result that cannot be explained from its component contributions is not a CE output \u2014 it is a black box wearing CE packaging.\n\n### What a provenance record contains\n\nA CE provenance record for a combined model output contains:\n\n- **Component list**: Every registered model that contributed to this output, with its weight at the time of the run\n- **Contribution breakdown**: The signed contribution of each component to the integrated signal (direction and magnitude)\n- **Disagreement level**: The measured disagreement among components at the time of integration\n- **Scenario tag**: The exact scenario envelope under which this output was generated\n- **Vintage markers**: The data vintage of each component\n- **Run timestamp**: When this output was generated\n\n### Confidence decomposition\n\nThe confidence score is decomposed into three sub-dimensions in CE:\n\n1. **Agreement score** \u2014 how much do components agree on direction and magnitude?\n2. **Freshness score** \u2014 how current is the data vintage feeding each component?\n3. **Scenario fitness score** \u2014 how well is the scenario envelope matched to these components' calibrated scope?\n\nAll three sub-dimensions contribute to the final confidence value. A model can have high agreement but low freshness \u2014 and the resulting confidence should reflect that combination honestly.\n\n### The audit path discipline\n\nWhen a combined model output is reviewed retrospectively, the provenance record allows the reviewer to reconstruct what each component contributed and what conditions existed at that time. This is why CE requires running provenance to be stored, not just output values.\n\n| Review Question | Where To Find The Answer In Provenance |\n| --- | --- |\n| Which component drove the high-pressure signal? | Contribution breakdown |\n| Was the confidence justified given component alignment? | Agreement score |\n| Was the scenario appropriate for these components? | Scenario tag + scenario fitness score |\n| Have conditions changed since this output was generated? | Vintage markers + run timestamp |\n\n[Q: What is the confidence decomposition in CE, and why does CE compute it as three separate sub-dimensions rather than a single score? | CE decomposes confidence into agreement score, freshness score, and scenario fitness score because each dimension can fail independently. A combined model could show high component agreement but still deserve low confidence because the data is stale or the scenario envelope mismatches the components' calibrated scope. One aggregated number cannot communicate which dimension is the source of uncertainty.]","duration_min":24,"lesson_id":"cm_04","title":"Provenance Records And Confidence Decomposition"},{"content":"## When The Combined Model Is Producing Bad Outputs\n\nCombined models are not self-healing. They require structured diagnosis when outputs systematically diverge from realized conditions or when outputs seem implausible given the scenario context.\n\n### Diagnosis step 1: Isolate the source\n\nThe first question is always: is the problem in a component, in the weights, in the scenario design, or in the fusion logic? Each has different signatures.\n\n**Component failure signature**: One component consistently contributes an outlier signal that pulls the integrated output in a direction other components do not support. 
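A rough way to spot this signature in review tooling, sketched with an invented cutoff and names, is a robust distance check against the cross-component median:\n\n```python\n# Toy outlier check; the cutoff value is invented, not a CE default.\nimport statistics\n\ndef outlier_components(signals: dict[str, float], cut: float = 3.0) -> list[str]:\n    med = statistics.median(signals.values())\n    mad = statistics.median(abs(v - med) for v in signals.values()) or 1e-12\n    # Median absolute deviation keeps the yardstick robust to the outlier itself.\n    return [k for k, v in signals.items() if abs(v - med) / mad > cut]\n\nprint(outlier_components({'a': 0.5, 'b': 0.6, 'c': 0.4, 'd': 0.5, 'e': 3.0}))\n# ['e']: one component pulling alone, the signature described above\n```\n\n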
Fix: inspect that component's data vintage, known limitations, and whether the current scenario is outside its calibrated scope.\n\n**Weight failure signature**: The combined output tracks too closely to one component because weights are poorly calibrated for the current scenario context. Fix: review weight assignment rationale against the current scenario family.\n\n**Scenario design failure signature**: Multiple components produce internally consistent outputs that look contradictory only because the scenario itself is inconsistent. Fix: return to scenario design review before revising the model.\n\n**Fusion logic failure signature**: The confidence score moves independently of the disagreement level, or contributions do not sum to the integrated output. Fix: escalate to model admin \u2014 this is a system-level issue.\n\n### Diagnosis step 2: Distinguish bias from noise\n\nA component that is occasionally wrong is different from a component that is systematically biased. Bias requires weight revision. Noise may require only that the confidence score is appropriately penalized.\n\n### Calibration review cadence\n\n| Review Type | Trigger | Expected Outcome |\n| --- | --- | --- |\n| Scheduled review | Annual or semi-annual | Weight revision and notation |\n| Post-event review | Major realized departure | Component inspection and possible deregistration |\n| Analyst-triggered review | Implausible output detected | Scenario design check before model revision |\n| System-triggered review | Fusion logic anomaly | Admin escalation |\n\n[Q: What is the most important first question to ask when a CE combined model appears to be producing systematically wrong outputs, and why does the question matter before any weight changes are made? | The first question is whether the problem is in a component, the weights, the scenario design, or the fusion logic. Changing weights when the real problem is a flawed scenario design would leave the root cause intact. Diagnosis must isolate the source before any revision is attempted to avoid introducing new calibration errors while the original problem remains.]","duration_min":22,"lesson_id":"cm_05","title":"Diagnosing Calibration Failures"}],"module_id":"combined-model-design","objectives":["Explain why CE uses combined models rather than a single authoritative source model.","Describe the four stages of a CE combined model lifecycle.","Identify the criteria CE uses to assign and revise component weights.","Interpret a combined model's provenance record and confidence decomposition.","Diagnose calibration failures and recommend correction strategies."],"title":"Combined Model Design"},{"audience":"Sector analysts, credit analysts, and CE product users with industry coverage responsibilities","category":"Sector Analysis","color":"red","description":"Master how climate-economy pressures and opportunities translate into the six CE-covered industries through operating, financing, and supply-chain channels. Learn which signals matter most for each sector and how to weight them correctly.","difficulty":"Intermediate","duration_min":135,"icon":"fa-industry","lessons":[{"content":"## How Climate-Economy Pressures Reach Industries\n\nCE organizes the path from climate-economy conditions to industry impact through three channels. Every transmission analysis begins with identifying which channels dominate for the target industry and scenario.\n\n### Channel 1: Operating channel\n\nThe operating channel captures direct impacts on revenue, costs, and capacity. 
This includes physical climate impacts on production facilities, input costs from energy and material price changes, labor availability, and regulatory compliance costs.\n\nThe operating channel is most important for industries with physical assets at risk, high energy consumption, or labor exposure in climate-vulnerable regions.\n\n### Channel 2: Financing channel\n\nThe financing channel captures impacts arriving through capital access, credit pricing, insurance availability, and investor risk appetite. An industry may face no immediate operating impact but still experience significant pressure if its cost of capital rises because of perceived transition exposure.\n\nThe financing channel is most important for capital-intensive industries and for sectors where transition risk is being rapidly repriced by capital markets \u2014 whether or not the physical risk has materialized yet.\n\n### Channel 3: Supply-chain channel\n\nThe supply-chain channel captures indirect exposure arriving through input suppliers or customer demand chains. A sector that appears resilient in isolation may be highly exposed if its key suppliers are operating- or financing-constrained.\n\nThe supply-chain channel is most important for industries with concentrated supplier bases, global input dependencies, or customers who are themselves heavily exposed.\n\n### Channel hierarchy by sector\n\n| Sector | Dominant Primary Channel | Secondary Channel |\n| --- | --- | --- |\n| Energy | Operating | Financing |\n| Agriculture | Operating | Supply-chain |\n| Manufacturing | Supply-chain | Operating |\n| Transport | Operating | Supply-chain |\n| Insurance | Financing | Operating |\n| Real Estate | Operating (physical) | Financing |\n\n[Q: Why can an industry with no immediate operating exposure still show significant pressure in CE outputs? | Because the financing channel can transmit transition risk repricing before physical impacts arrive. If capital markets are rapidly reassessing transition exposure for a sector, the cost of capital rises and credit access tightens even when operations look stable. CE captures this separation explicitly.]","duration_min":20,"lesson_id":"trans_01","title":"The Three Transmission Channels"},{"content":"## Energy: Stranded Assets, Transition Pressure, And Operating Volatility\n\nThe energy sector is the most directly exposed of CE's six covered industries. It sits at the intersection of transition policy, physical climate conditions, and capital repricing \u2014 and all three channels fire simultaneously in most transition scenarios.\n\n### Operating channel in energy\n\nFor upstream fossil fuel producers and utilities, the operating channel transmits through:\n- **Production cost changes** driven by carbon pricing, regulatory compliance, and methane leakage rules\n- **Demand shifts** as the energy mix transitions and end-user behavior changes\n- **Physical hazard** to infrastructure from extreme weather (flooding, heat stress, wind events)\n\n### Financing channel in energy\n\nThe financing channel is the most structurally important for the energy sector because transition risk repricing happens here first. Equity and debt capital has been repricing fossil fuel exposure for years before operational disruption arrives in many scenarios. 
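One way an analyst might track that repricing is a simple cost-of-capital spread; the figures below are invented for illustration and are not CE outputs.\n\n```python\n# Invented figures; a toy repricing indicator, not a CE output.\nfossil_coc = [0.072, 0.078, 0.085, 0.094]  # cost of capital by period\nclean_coc  = [0.065, 0.064, 0.063, 0.062]\n\n# A widening spread signals financing-channel pressure even while\n# operating conditions for legacy assets look unchanged.\nspread = [f - c for f, c in zip(fossil_coc, clean_coc)]\nwidening = all(b > a for a, b in zip(spread, spread[1:]))\nprint([round(s, 3) for s in spread], widening)  # steadily widening -> True\n```\n\n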
Insurance withdrawal from fossil fuel infrastructure is a leading signal.\n\n**Key financing indicators for CE energy analysis**:\n- Cost of capital divergence between fossil fuel and clean energy assets\n- Insurance availability and pricing for extraction and transmission infrastructure\n- Divestment pressure as a proxy for investor sentiment shift\n\n### Supply-chain channel in energy\n\nThe supply chain channel matters for clean energy specifically \u2014 wind turbine components, solar panels, battery minerals, and grid infrastructure all have concentrated global supply chains that introduce fragility independent of the energy transition's pace.\n\n### Scenario interaction\n\nIn a fast-transition scenario, the energy sector typically shows simultaneous operating pressure (legacy assets), financing pressure (repricing), and supply-chain opportunity risk (clean energy bottlenecks). CE is designed to surface all three rather than reducing the picture to a single direction.\n\n[Q: In a fast-transition scenario, why might a clean energy company still show supply-chain channel pressure in CE despite benefiting from the overall transition direction? | Because clean energy buildout depends on critical mineral inputs and manufactured components with highly concentrated global supply chains. Rapid transition pace strains those supply chains, potentially introducing cost volatility and delivery delays even for operators fully aligned with the transition direction.]","duration_min":22,"lesson_id":"trans_02","title":"Energy Sector Transmission"},{"content":"## Two Sectors With Very Different Transmission Profiles\n\nAgriculture and manufacturing sit at opposite ends of the supply-chain complexity spectrum. Agriculture is highly operating-channel sensitive and geographically granular. Manufacturing is heavily supply-chain exposed and often appears resilient at the operating level until supply chain disruptions arrive.\n\n### Agriculture transmission\n\n**Primary channel: operating**\n\nAgriculture's operating channel is among the most direct in CE. Physical climate conditions \u2014 temperature anomalies, precipitation changes, growing season shifts, extreme event frequency \u2014 translate into yield volatility and input cost changes with relatively short lags.\n\n**Key operating transmission mechanisms**:\n- Crop yield changes from heat stress and precipitation variability\n- Water availability and irrigation cost shifts\n- Pest and disease pressure changes with warming\n- Growing season shifts that affect crop mix and capital investment returns\n\n**Secondary channel: supply-chain**\nDownstream food processing and retail are heavily exposed through agriculture's supply chain. Price volatility in agricultural commodities is a rapid transmission path to broader inflation pressure.\n\n**CE caution for agriculture**: Regional granularity matters enormously. A climate signal that is adverse for temperate grain production may be beneficial for higher-latitude production. CE is designed to preserve regional distinctions rather than collapsing agriculture into a single global picture.\n\n### Manufacturing transmission\n\n**Primary channel: supply-chain**\n\nManufacturing's most dangerous exposure is typically upstream \u2014 not at the factory but in its supplier network. 
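A toy aggregation shows why, with every figure invented for illustration: a firm whose own operations look resilient can still carry high upstream exposure.\n\n```python\n# All figures invented; a toy upstream-exposure aggregation.\nsuppliers = {                     # input-spend share, supplier pressure\n    'energy_inputs': (0.40, 0.8),\n    'metals':        (0.35, 0.6),\n    'logistics':     (0.25, 0.8),\n}\nfactory_own_pressure = 0.1        # the factory itself looks resilient\n\n# Spend-weighted supplier pressure dwarfs the factory's own reading.\nupstream = sum(share * p for share, p in suppliers.values())\nprint(round(upstream, 2), factory_own_pressure)  # 0.73 vs 0.1\n```\n\n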
Energy price volatility, raw material constraints, and logistics disruptions arrive through the supply chain before they appear in the manufacturing firm's own operating costs.\n\n**Key supply-chain transmission mechanisms**:\n- Energy input cost propagation from power and fuel price changes\n- Critical raw material price and availability shifts\n- Logistics costs from weather disruption and port/rail capacity constraints\n- Supplier-country regulatory divergence creating compliance-chain complexity\n\n**Secondary channel: operating**\nHeat stress on factory workers, cooling costs, and physical infrastructure exposure are growing operating-channel concerns especially for manufacturing facilities in equatorial or high-temperature regions.\n\n| Sector | What CE Analysts Should Focus On | What Is Often Missed |\n| --- | --- | --- |\n| Agriculture | Regional yield variability by crop type | Downstream food price transmission |\n| Manufacturing | Upstream supplier exposure | Physical heat stress on labor |\n\n[Q: Why does CE treat agriculture as primarily operating-channel sensitive but manufacturing as primarily supply-chain channel sensitive, even though both involve physical production? | Agriculture translates climate conditions directly into yield and input costs with relatively short lags \u2014 the operating channel is immediate. Manufacturing typically absorbs climate impacts indirectly through its supplier network first, because energy and raw material costs change before factory-level physical exposure becomes significant. The transmission path reflects the different structures of exposure, not just the physical nature of production.]","duration_min":24,"lesson_id":"trans_03","title":"Agriculture And Manufacturing Transmission"},{"content":"## Three Sectors With Distinct Transmission Logic\n\nTransport, insurance, and real estate each have transmission profiles that cannot be generalized from the energy or agriculture patterns. Each has a different dominant channel and a different time structure for how pressure arrives.\n\n### Transport transmission\n\n**Dominant channel: operating**\n\nTransport infrastructure and operations are physically exposed \u2014 roads, rails, ports, and air infrastructure face direct climate hazard from flooding, heat damage, and extreme events. Fuel cost shifts are an immediate operating input pressure.\n\n**Financing channel**: Carbon pricing is forcing rapid capital reallocation decisions for fleet owners and infrastructure operators. The gap between traditional and low-carbon fleet assets is being priced into debt markets faster than fleet turnover can accommodate.\n\n**CE distinctive for transport**: CE disaggregates transport into freight and passenger modes because transition exposure differs sharply. Freight is more supply-chain sensitive (fuel costs, logistics routing); passenger is more operating-sensitive (infrastructure, fleet assets).\n\n### Insurance transmission\n\n**Dominant channel: financing (underwriting as a proxy)**\n\nInsurance is unusual among the six CE sectors because its dominant transmission is through the underwriting and repricing cycle rather than through operating costs. 
Physical climate risk arrives as rising claims frequency and severity; transition risk arrives through portfolio investment exposure and regulatory capital requirements.\n\n**Key insurance transmission mechanisms**:\n- Claims frequency and severity shifts from physical climate events\n- Withdrawal from uninsurable risk zones creating protection gaps\n- Portfolio investment repricing driven by transition and physical climate exposure in asset holdings\n- Regulatory capital requirements increasing for climate-exposed underwriting portfolios\n\n**CE caution for insurance**: Insurance availability withdrawal is itself a transmission channel to other sectors. When insurance retreats from an asset class or geography, it amplifies the financing pressure on those sectors. CE models this second-order effect.\n\n### Real estate transmission\n\n**Dominant channel: operating (physical asset exposure)**\n\nReal estate sits at the end of the physical risk exposure chain. Property value declines, rising insurance costs, and increasing maintenance burden from climate events arrive through the operating channel as direct asset performance impacts.\n\n**Financing channel**: Mortgage repricing, lender retreat from flood and fire zones, and transition-motivated green premium/brown discount dynamics are all active financing-channel effects in real estate.\n\n**CE distinctive for real estate**: CE separates commercial and residential real estate because physical exposure profiles, financing channels, and transition sensitivity differ substantially.\n\n[Q: Why does CE treat insurance as financing-channel dominant rather than operating-channel dominant when most industries face the reverse? | Because insurance transmits climate risk through claims pricing, portfolio investment exposure, and regulatory capital requirements rather than through direct physical impacts on its own operating infrastructure. The insurer's primary exposure is to the risks it has underwritten and the assets it holds \u2014 not to weather events hitting its own offices. That makes the financing and underwriting channels the primary transmission mechanism.]","duration_min":24,"lesson_id":"trans_04","title":"Transport, Insurance, And Real Estate Transmission"},{"content":"## When One Sector's Pressure Becomes Another Sector's Input\n\nThe most dangerous CE risk scenarios are not single-sector shocks \u2014 they are transmission cascades where pressure in one sector amplifies through supply-chain, financing, and operating channels into adjacent sectors.\n\n### How cascades develop\n\nCross-sector cascades typically follow one of three structural patterns:\n\n**Pattern 1: Energy-to-manufacturing cost cascade**\nEnergy price volatility or supply disruption becomes a cost shock for energy-intensive manufacturing. If the disruption is sustained, manufacturing output falls, which flows into retail and transport as a demand contraction. Insurance repricing of manufacturing assets may follow.\n\n**Pattern 2: Real estate-to-insurance-to-financing cascade**\nPhysical climate damage in a real estate market triggers rising insurance claims, which causes insurer withdrawal from that region, which removes a key risk transfer mechanism, which causes lenders to reprice or withdraw mortgage availability, which accelerates property value declines. 
Each step amplifies the next.\n\n**Pattern 3: Agriculture-to-food-price-to-macro cascade**\nCrop yield shocks in major producing regions generate food price volatility that feeds into CPI inflation, which affects monetary policy settings, which flows through to financing costs across all sectors. Supply-chain transmission here has macro-level breadth.\n\n### What CE captures and what it does not\n\nCE is designed to capture first and second-order cross-sector transmission \u2014 it is not a full general equilibrium model. It models named channels explicitly and uses scenario design to incorporate aggregate conditions.\n\nCE does not automatically compute a complete economy-wide cascade from a single shock. Users who need cascade analysis should construct a sequence of scenarios and review how outputs evolve across that sequence.\n\n### Using CE for systemic risk awareness\n\nThe most useful CE practice for cross-sector work is to run the same scenario through multiple industry lenses and compare the direction, magnitude, and confidence of each output. Where multiple sectors show aligned pressure under the same scenario, the probability of a cascade warrants explicit review.\n\n| Cascade Type | Trigger | First-Order Sector | Second-Order Sectors |\n| --- | --- | --- | --- |\n| Energy-manufacturing | Energy price shock | Energy | Manufacturing, transport |\n| Real estate cascade | Physical climate damage | Real estate | Insurance, financing |\n| Agriculture-macro | Crop yield shock | Agriculture | Food retail, macro/inflation |\n\n[Q: If CE shows high pressure in both the energy sector and the manufacturing sector under the same scenario, why should the analyst consider a cross-sector cascade interpretation rather than treating them as independent outputs? | Because the two pressures may not be independent \u2014 energy sector disruption is one of the most direct input-cost transmission paths into manufacturing. If both sectors are under pressure in the same scenario, the manufacturing pressure may be partially or wholly caused by the energy pressure arriving through the supply-chain channel. The aligned pressure is a signal to investigate cross-sector transmission, not just two separate sector readings.]","duration_min":25,"lesson_id":"trans_05","title":"Cross-Sector Transmission And Systemic Risk"}],"module_id":"industry-transmission","objectives":["Describe the three primary transmission channels CE uses and how each differs by sector.","Explain how energy sector operating exposure differs from real estate physical exposure.","Interpret transmission weights for agriculture versus manufacturing scenarios.","Identify where financing channel effects dominate over operating channel effects and why.","Apply sector-appropriate analysis using CE industry transmission logic."],"title":"Industry Transmission Deep Dive"},{"audience":"Analysts and product builders","category":"External Models","color":"cyan","description":"Build a working understanding of how leading climate models are selected, interpreted, stress-tested, and translated into decision-ready signals for CE.","difficulty":"Intermediate","duration_min":96,"icon":"fa-cloud-sun-rain","lessons":[{"content":"## The Selection Problem\n\nTeams often ask for the best climate model as if climate modeling were a leaderboard. That framing is wrong. 
The better question is: best for what decision, at what spatial scale, under what scenario, with what tolerance for uncertainty?\n\n### Model quality is task-specific\n\nA model that performs well for long-run global temperature response may be weak for regional precipitation or compound hazard interpretation. CE treats model choice as a portfolio decision, not a winner-take-all decision.\n\n- **CMIP-class ensembles** are strongest for scenario diversity and long-horizon transition analysis\n- **Reanalysis products** such as ERA5 are strongest for reconstructing recent observed state and calibrating thresholds\n- **Institutional model families** such as NASA GISS and NOAA GFDL often contribute distinctive process strengths that matter for sector interpretation\n\n### Why ensembles dominate single-model claims\n\nSingle-model confidence is usually false confidence. The ensemble discipline matters because it forces teams to examine disagreement explicitly. In CE, disagreement is not hidden after integration; it is preserved as provenance and component variance.\n\n| Use Case | Strongest Asset Type | Why |\n| --- | --- | --- |\n| Long-run policy scenario comparison | CMIP ensemble | Captures broad structural and scenario diversity |\n| Observed climate baseline | Reanalysis | Best alignment to measured historical conditions |\n| Hazard process insight | Institutional specialty models | Better domain-specific process interpretation |\n| Operational short-term forecast | Weather and seasonal systems | Climate models are not built for day-to-day operations |\n\n### What CE does with this\n\nCE forces analysts to separate three questions:\n\n1. What climate state is most defensible historically?\n2. What future pathways should be explored?\n3. How should model disagreement alter confidence?\n\nThat separation matters because decision systems fail when a historical baseline, a scenario projection, and a confidence score get collapsed into one misleading number.\n\n[Q: Why does CE prefer an ensemble-oriented view over a single best climate model claim? | Because model performance is task-specific. Ensembles preserve disagreement across scenarios and structures, which is essential for defensible forecasting. CE uses that disagreement to inform provenance and confidence instead of pretending one model is universally correct.]","duration_min":22,"lesson_id":"climate_01","title":"Why There Is No Single Best Climate Model"},{"content":"## Climate Outputs Are Conditional, Not Absolute\n\nClimate projections are conditional on forcing pathways. They are not unconditional predictions. This distinction is one of the most important training points in CE because users often read a scenario output as if it were a direct forecast of what will happen.\n\n### The three major uncertainty buckets\n\nCE keeps uncertainty legible by separating it into three buckets:\n\n- **Scenario uncertainty**: differences caused by policy, emissions, energy, and adaptation pathways\n- **Model uncertainty**: differences caused by structural assumptions and parameterization choices\n- **Internal variability**: differences caused by the chaotic behavior of the climate system itself\n\n### Time horizon changes what dominates\n\nNear-term interpretation should be more conservative because internal variability and local noise can dominate. 
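How that balance shifts with horizon can be made concrete with a toy decomposition; the variance shares below are invented for illustration only.\n\n```python\n# Invented illustrative shares of total uncertainty; not model output.\nshares = {                        # (scenario, model, internal variability)\n    '1-5y':  (0.10, 0.30, 0.60),  # noise dominates the near term\n    '5-15y': (0.35, 0.40, 0.25),\n    '15y+':  (0.60, 0.30, 0.10),  # pathways dominate the long term\n}\nfor horizon, (scen, model, internal) in shares.items():\n    assert abs(scen + model + internal - 1.0) < 1e-9  # shares must sum to one\n    buckets = zip((scen, model, internal), ('scenario', 'model', 'internal'))\n    print(horizon, 'largest bucket:', max(buckets)[1])\n```\n\n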
Longer-term interpretation shifts more weight toward scenario differences.\n\n| Horizon | Dominant Source Of Caution | Implication For CE |\n| --- | --- | --- |\n| 1-5 years | Internal variability and data noise | Emphasize ranges and trigger thresholds |\n| 5-15 years | Mixed variability and scenario effects | Preserve multiple narratives |\n| 15+ years | Scenario and structural divergence | Use scenario envelopes and stress families |\n\n### Why regional detail is dangerous\n\nUsers love precise local outputs. Precision is not the same as reliability. Downscaling may be useful, but only when accompanied by clear caveats about model spread, local observation quality, and hazard-specific limitations. CE intentionally avoids presenting local detail without accompanying provenance notes.\n\n### Good practice inside CE\n\n- Never present a single scenario as the only plausible future\n- Keep scenario labels attached to outputs throughout the pipeline\n- Carry uncertainty into confidence scoring rather than discarding it at the integration step\n- Document when a regional interpretation is based on thin historical support\n\n[Q: What are the three uncertainty buckets CE separates when working with climate projections? | CE separates scenario uncertainty, model uncertainty, and internal variability so users can see whether differences come from pathway assumptions, model structure, or the climate system's own chaotic behavior.]","duration_min":24,"lesson_id":"climate_02","title":"Scenario Conditioning And Uncertainty"},{"content":"## Raw Climate Output Is Not A Decision Product\n\nCE is not a data dump. Users do not need raw gridded outputs without interpretation. They need normalized signals that map to sector exposure and operational decision logic.\n\n### The translation layer\n\nClimate model outputs become useful only after a translation layer converts them into signals such as heat pressure, water stress, coastal exposure, wildfire pressure, or transition burden.\n\nThat layer should be disciplined. Each signal needs:\n\n- A defensible source lineage\n- A consistent normalization method\n- A clear interpretation rule\n- A stated caveat about what the signal does **not** capture\n\n### Physical and transition signals belong together\n\nMany systems treat physical climate risk separately from transition dynamics. CE does not. Climate-driven regulation, insurance repricing, energy shifts, and capital costs often matter as much as direct hazard intensity.\n\n| Signal Family | Example Inputs | Example CE Output |\n| --- | --- | --- |\n| Heat and labor stress | Degree days, wet-bulb indicators | Operating pressure |\n| Water and drought stress | Soil moisture, runoff, precipitation variability | Supply-chain pressure |\n| Coastal and flood exposure | Sea-level, surge, flood recurrence proxies | Asset pressure |\n| Transition burden | Carbon pricing, fuel mix shifts, policy tightening | Financing pressure |\n\n### Provenance is part of the product\n\nEvery signal should carry enough provenance that a reviewer can see how it was built. In CE, provenance is not optional documentation after the fact. It is a first-class feature of the forecast object.\n\n[Q: Why does CE translate climate outputs into normalized decision signals instead of exposing only raw variables? | Because raw climate variables are not directly actionable. 
CE converts them into normalized signals tied to sector exposure, interpretation rules, and provenance so users can see how climate information affects operations, financing, and supply chains.]","duration_min":25,"lesson_id":"climate_03","title":"From Climate Variables To Decision Signals"},{"content":"## How Good Climate Inputs Still Produce Bad Decisions\n\nThe most common failures do not start with bad science. They start with bad interpretation. CE training is designed to stop these errors before they become product behavior.\n\n### Failure mode 1: Treating scenario outputs as probabilities\n\nAn SSP-style pathway is a conditioned pathway, not a probability-weighted forecast. Saying a pathway is the most likely without explicit probability work is a category error.\n\n### Failure mode 2: Mistaking local precision for confidence\n\nA map that looks precise can still be deeply uncertain. Users often trust high-resolution visuals more than the underlying evidence supports. CE counters this by pairing local outputs with confidence and caveat layers.\n\n### Failure mode 3: Ignoring adaptation and buffers\n\nThe same hazard level can imply very different business consequences depending on adaptive capacity, infrastructure, insurance access, labor practices, and capital availability. CE therefore avoids direct hazard-to-loss shortcuts unless the transmission assumptions are explicit.\n\n### Failure mode 4: Mixing historical observations and future projections without telling the user\n\nHistorical state and future pathway outputs are both useful, but they should never be blended invisibly. CE preserves the distinction through scenario envelopes and provenance notes.\n\n### A disciplined review checklist\n\n- Is the climate output historical, projected, or blended?\n- What pathway assumptions condition the result?\n- What uncertainty bucket is largest here?\n- What sector translation assumptions were applied?\n- What evidence would falsify the current interpretation?\n\n[Q: What is the error in treating a climate scenario pathway as a probability forecast? | A climate pathway is a conditioned future under stated assumptions, not a statement about what is most likely. Treating it as a probability forecast confuses scenario design with probabilistic estimation.]","duration_min":25,"lesson_id":"climate_04","title":"Common Failure Modes In Climate Interpretation"}],"module_id":"climate-model-foundations","objectives":["Explain why no single climate model should be treated as universally best.","Differentiate Earth system models, reanalysis products, and operational forecasting assets.","Interpret uncertainty across scenario, model, and internal variability dimensions.","Translate climate outputs into CE physical and transition pressure signals.","Identify common forecast misuse patterns and how CE prevents them."],"title":"Climate Model Foundations"},{"audience":"Analysts and strategy teams","category":"External Models","color":"green","description":"Understand how leading macroeconomic and climate-economy models differ, what they are actually good at, and how CE converts them into transparent economic signals.","difficulty":"Intermediate","duration_min":102,"icon":"fa-chart-line","lessons":[{"content":"## Economic Models Need The Same Discipline As Climate Models\n\nThere is no universally best economic model either. IMF, FRB/US, NiGEM, NGFS-based macro overlays, and sector-specific stress tools solve different problems. 
CE needs the user to understand the operating question before selecting a model family.\n\n### Typical roles of major model families\n\n| Model Family | Strongest Use | Main Limitation |\n| --- | --- | --- |\n| IMF outlook style models | Global baseline and cross-country framing | Weaker for institution-specific tactical policy transmission |\n| FRB/US style policy models | Interest rate, expectations, and US policy channels | Country-specific emphasis limits global generalization |\n| NiGEM-style global models | Cross-border propagation and spillovers | Requires care when translated to firm-level exposure |\n| Stress-test and supervisory overlays | Capital and downside vulnerability | Often scenario-heavy and not built for broad strategic storytelling |\n\n### Why CE asks for multiple sources\n\nEconomic disagreement is valuable. A policy model can disagree with a global baseline for good reasons. CE exposes that disagreement before integration so users can see what comes from domestic policy logic versus international spillovers or scenario overlays.\n\n### The model selection question set\n\n- Is this a baseline outlook problem or a stress problem?\n- Is the main transmission domestic, international, or financial?\n- Does the user need medium-term policy interpretation or long-run structural framing?\n- Is climate being represented as a shock, a pathway, or a structural damage channel?\n\n[Q: Why does CE encourage using multiple economic models instead of presenting one baseline as definitive? | Because different model families explain different transmission channels. CE uses disagreement to show where outlooks diverge across policy, global spillover, and stress frameworks rather than hiding that uncertainty behind one authoritative baseline.]","duration_min":24,"lesson_id":"economic_01","title":"Institutional Macro Models By Use Case"},{"content":"## DICE, NGFS, And The Problem Of Hidden Assumptions\n\nClimate-economy modeling traditions are often discussed as if they are interchangeable. They are not. Integrated assessment models and scenario taxonomies answer different questions.\n\n### IAM logic\n\nIntegrated assessment models such as DICE-style traditions are useful for long-run policy trade-offs. They help structure questions about mitigation cost, damage functions, discounting, and welfare over long horizons.\n\n### NGFS logic\n\nNGFS-style frameworks are useful for macro-financial scenario design. They provide structured transition narratives, sector pathways, and reference assumptions that institutions can use for risk work.\n\n### The assumption burden\n\nThe most important lesson is this: **damage assumptions can dominate outputs**. A seemingly precise economic result may be mostly a reflection of hidden assumptions about physical damages, adaptation speed, policy coordination, or financial repricing.\n\n| Modeling Tradition | What It Clarifies | What It Can Hide |\n| --- | --- | --- |\n| IAM | Long-run trade-offs and welfare logic | Sensitivity to discounting and damage functions |\n| NGFS scenario sets | Structured transition narratives | False sense of precision if scenario labels are over-trusted |\n| Supervisory stress overlays | Downside resilience questions | Narrow use-case design mistaken for a general forecast |\n\n### What CE requires\n\nCE requires assumptions to stay visible as they move through the integration pipeline. 
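One hypothetical way to honor that requirement is to carry an explicit assumptions block on every output; the names below are invented, not CE's actual schema.\n\n```python\n# Hypothetical assumptions block attached to an output; names invented.\nfrom dataclasses import dataclass\n\n@dataclass(frozen=True)\nclass AssumptionSet:\n    damage_function: str   # e.g. 'quadratic-in-warming'\n    discount_rate: float   # often dominates long-horizon welfare results\n    policy_path: str       # e.g. 'coordinated-carbon-price'\n    adaptation_speed: str  # 'slow' / 'moderate' / 'fast'\n\noutput = {\n    'value': -1.8,  # illustrative damage estimate, percent of output\n    'assumptions': AssumptionSet('quadratic-in-warming', 0.03,\n                                 'coordinated-carbon-price', 'moderate'),\n}\n# A reviewer can now ask which field is doing the work, instead of\n# reading -1.8 as if it were assumption-free.\nprint(output['assumptions'].discount_rate)\n```\n\n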
If damages or policy paths dominate the economics, that should be visible in provenance and guidance, not buried in a black-box summary.\n\n[Q: What is the major risk in using climate-economy outputs without exposing assumptions? | The outputs can appear precise while actually being driven by hidden assumptions such as damage functions, discount rates, adaptation speed, or policy coordination. CE therefore keeps those assumptions visible in provenance and guidance.]","duration_min":25,"lesson_id":"economic_02","title":"Climate-Economy Model Traditions"},{"content":"## Converting Macro Complexity Into Comparable Signals\n\nCE does not attempt to preserve every macro variable in the final surface. That would overwhelm the user. Instead, it normalizes the most decision-relevant outputs into comparable signals while preserving lineage.\n\n### Core economic signal families\n\n- **Growth pressure**: baseline demand and output conditions\n- **Inflation pressure**: cost environment and pricing stress\n- **Financing pressure**: interest, spreads, refinancing, and capital access\n- **Labor pressure**: wage competition, productivity, workforce constraints\n- **Trade and supply pressure**: external demand, logistics, and import dependence\n\n### Why normalization must stay transparent\n\nNormalization is not simplification for its own sake. It is a contract. Once a raw output becomes a CE signal, reviewers need to know what transformation occurred and what was lost.\n\n| Raw Input | Example Normalization | CE Signal |\n| --- | --- | --- |\n| GDP growth forecast | Relative to industry sensitivity bands | Growth contribution |\n| CPI and input inflation | Weighted against cost structure | Inflation contribution |\n| Policy rate and spreads | Mapped to debt sensitivity | Financing pressure |\n| Trade slowdown indicators | Weighted by import dependence | Supply-chain pressure |\n\n### Keep baseline and climate-induced economic shocks distinct\n\nA common failure is to mix ordinary macro baseline conditions with climate-induced economic effects and then treat the result as one undifferentiated signal. CE avoids this by keeping climate-conditioned channels explicit until the integration stage.\n\n[Q: Why does CE keep climate-induced economic stress separate from baseline macro conditions until integration? | Because users need to know what comes from the ordinary macro baseline and what comes from climate-conditioned channels. Mixing them too early destroys explainability and makes it harder to audit the resulting forecast.]","duration_min":26,"lesson_id":"economic_03","title":"Economic Normalization For CE"},{"content":"## Good Models Still Fail In Products When Context Is Lost\n\nEconomic forecast misuse is often a product design problem rather than a research problem. CE training is designed to force that distinction.\n\n### Failure mode 1: Treating one institutional source as ground truth\n\nInstitutional credibility matters, but no source should be treated as ground truth across every use case. Product surfaces that silently elevate one model create false authority.\n\n### Failure mode 2: Presenting point estimates without scenario context\n\nA point forecast without scenario framing is a weak decision aid. Users need to know the policy, trade, and energy conditions under which an estimate was produced.\n\n### Failure mode 3: Ignoring sensitivity by industry\n\nThe same macro move affects insurance, transport, agriculture, and manufacturing differently. 
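A toy example makes the point; the sensitivities below are invented for illustration.\n\n```python\n# Invented sector sensitivities to one macro move; illustration only.\nrate_shock = 1.0              # policy rate up one point\nsensitivity = {               # financing-pressure response per point\n    'insurance': 0.2, 'transport': 0.6,\n    'agriculture': 0.4, 'manufacturing': 0.5,\n}\nfor sector, s in sensitivity.items():\n    print(sector, round(rate_shock * s, 2))  # same shock, different readings\n```\n\n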
CE therefore translates economic outputs through industry transmission instead of exposing only top-line macro readings.\n\n### Failure mode 4: Confusing confidence with precision\n\nA number with two decimal places is not more reliable than a range with a clear rationale. CE uses confidence as a first-class output rather than implying it through formatting.\n\n### Review checklist\n\n- Which model family produced this output?\n- What scenario or baseline conditions frame it?\n- What industry sensitivities were applied?\n- What assumptions dominate the result?\n- What disagreement was preserved and what was collapsed?\n\n[Q: Why is a precise point estimate not the same thing as a high-confidence estimate? | Precision is only formatting. Confidence depends on model agreement, scenario stability, and assumption quality. CE therefore reports confidence explicitly instead of letting precise-looking numbers imply reliability.]","duration_min":27,"lesson_id":"economic_04","title":"Failure Modes In Economic Forecast Use"}],"module_id":"economic-model-foundations","objectives":["Differentiate global baseline, policy, international, and stress-testing model families.","Explain how IAM and NGFS traditions connect climate and macroeconomics.","Recognize how assumptions about damages and policy transmission dominate outputs.","Translate macro outputs into CE decision signals without oversimplifying them.","Identify common economic-model misuse patterns in product settings."],"title":"Economic Model Foundations"},{"audience":"Platform builders and domain leads","category":"Internal CE Models","color":"blue","description":"Learn how CE converts climate and economic model families into a transparent integrated forecast through scenario envelopes, industry transmission, fusion logic, and provenance.","difficulty":"Advanced","duration_min":108,"icon":"fa-bezier-curve","lessons":[{"content":"## Every CE Forecast Starts With A Shared Scenario Object\n\nCE does not let model adapters improvise the governing context. Every forecast begins with a scenario envelope that captures policy stance, energy conditions, trade posture, shock assumptions, and industry focus.\n\n### Why the envelope matters\n\nIf economic and climate adapters interpret different scenarios, integration becomes meaningless. The scenario envelope is the contract that keeps cross-domain outputs comparable.\n\n### Canonical signals as the shared language\n\nThe atomic CE contract is the signal, not the narrative. A canonical signal keeps:\n\n- The value\n- The unit or index meaning\n- The source lineage\n- The scenario context\n- The explanatory note\n\n### What this prevents\n\n| Without canonical signals | With canonical signals |\n| --- | --- |\n| Adapters invent incompatible scales | Signals land on a shared interpretive contract |\n| Scenario assumptions drift by source | Scenario envelope remains consistent |\n| Integration hides translation decisions | Translation stays inspectable |\n\n### Design rule\n\nIf a value cannot be explained through the scenario envelope and a canonical signal definition, it should not be promoted into the integrated forecast.\n\n[Q: Why does CE use a shared scenario envelope before any integration work happens? | Because economic and climate outputs are only comparable if they are conditioned on the same governing assumptions. 
The shared scenario envelope prevents adapters from drifting into incompatible contexts.]","duration_min":26,"lesson_id":"ce_arch_01","title":"Scenario Envelopes And Canonical Signals"},{"content":"## Climate And Economics Meet At The Industry Layer\n\nThe most important architectural point in CE is that climate does not become operationally useful by mapping directly to GDP, and macroeconomics does not become climate-aware by simply adding a hazard score. The real bridge is industry transmission.\n\n### Transmission channels in CE\n\nCE centers three first-order transmission outputs:\n\n- **Operating pressure**: what changes on the ground for production, labor, uptime, or service delivery\n- **Financing pressure**: what changes in debt service, underwriting, capital availability, or insurance cost\n- **Supply-chain pressure**: what changes in logistics, sourcing, input volatility, or downstream continuity\n\n### Why this is better than direct fusion\n\nA hazard index plus a GDP forecast does not tell an operator what actually happens. Transmission logic does. It forces the system to express how a sector experiences combined macro and climate stress.\n\n| Domain Input | Transmission Question | Example Output |\n| --- | --- | --- |\n| Heat stress | Does labor productivity or asset reliability fall? | Operating pressure up |\n| Rate and spread tightening | Does refinancing become more expensive? | Financing pressure up |\n| Drought and trade bottlenecks | Are critical inputs harder to source? | Supply-chain pressure up |\n\n### Why the user needs to see this\n\nWhen integrated outputs are shown without transmission logic, users cannot tell whether the platform actually understands the business mechanism. CE makes transmission explicit so the bridge between science and strategy is inspectable.\n\n[Q: Why is industry transmission the core bridge in CE instead of directly combining hazard scores with macro forecasts? | Because users need to see the mechanism by which climate and macro conditions affect an industry. Transmission logic shows how operating, financing, and supply-chain conditions change, which is more actionable and auditable than direct score blending.]","duration_min":28,"lesson_id":"ce_arch_02","title":"Industry Transmission As The Real Bridge"},{"content":"## Integration Must Increase Clarity, Not Hide It\n\nCE's fusion layer produces integrated outputs, but it is designed to avoid black-box behavior. The integrated forecast is only useful if the user can still inspect what fed it.\n\n### The four visible outputs\n\nCE keeps four summary outputs visible after integration:\n\n- **Pressure**\n- **Resilience**\n- **Opportunity**\n- **Confidence**\n\nThese are intentionally separate. A scenario can have high pressure and high opportunity at the same time. A forecast can show strong directional pressure but low confidence. Hiding those tensions would make the platform less honest.\n\n### Guidance is not generic advice\n\nGuidance in CE is generated from scenario, transmission, and source logic. It should explain what the user should inspect next and why. If guidance cannot cite the logic behind it, it should not exist.\n\n### Provenance as an audit trail\n\nProvenance records:\n\n- Which model families were used\n- Which scenario conditioned the run\n- Which transformation layers were applied\n- Which component contributions drove the final outputs\n- Which caveats remain unresolved\n\n### Product rule\n\nAn integrated score with no provenance is a liability. 
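Enforced in code, the rule might look like the following hypothetical guard; the API is invented rather than CE's actual surface.\n\n```python\n# Hypothetical publish guard; the API is invented.\ndef publish(output: dict) -> dict:\n    if not output.get('provenance'):\n        # Refuse silently-authoritative results rather than shipping them.\n        raise ValueError('integrated output rejected: no provenance attached')\n    return output\n\npublish({'pressure': 0.7, 'resilience': 0.4, 'opportunity': 0.6,\n         'confidence': 0.5, 'provenance': {'scenario_tag': 'baseline-2030'}})\n```\n\n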
A strong CE surface makes users more capable of questioning the result, not less.\n\n[Q: What are the four distinct summary outputs CE keeps visible after integration, and why are they kept separate? | CE keeps pressure, resilience, opportunity, and confidence separate because a scenario can be favorable in one dimension and weak in another. Keeping them distinct preserves the real tensions in the forecast instead of flattening them into one misleading score.]","duration_min":27,"lesson_id":"ce_arch_03","title":"Fusion, Guidance, And Provenance"},{"content":"## The Point Of CE Is Not To Hide Disagreement\n\nOne of CE's core product commitments is that users can inspect forecasts both before and after integration. This is not decoration. It is the control surface that prevents false authority.\n\n### Before integration\n\nUsers should be able to inspect:\n\n- Individual economic model forecasts\n- Individual climate model forecasts\n- Combined-model overlays where relevant\n- Scenario context and source notes\n\n### After integration\n\nUsers should be able to inspect:\n\n- Integrated metrics and their component contributions\n- Transmission rationale\n- Guidance linked to those contributions\n- Historical review context for why older models were accurate or inaccurate\n\n### Why older models matter\n\nA platform that never revisits forecast quality becomes a presentation layer instead of a learning system. CE includes historical review surfaces so users can see why specific models or narratives were accurate, late, or misleading.\n\n### Final architectural test\n\nAsk this question of every CE feature: Does this step make the integrated result easier to audit or easier to trust blindly? If the answer is the second one, the feature likely violates CE's design intent.\n\n[Q: Why does CE expose model outputs before integration as well as after integration? | Because transparency requires users to inspect disagreement, source behavior, and scenario context before those inputs are fused. That prevents the integrated result from becoming an unauditable authority signal.]","duration_min":27,"lesson_id":"ce_arch_04","title":"Transparent Reviews Before And After Integration"}],"module_id":"ce-integration-architecture","objectives":["Explain why scenario envelopes govern all CE outputs.","Show how industry transmission is the practical bridge between climate and economics.","Describe how fusion preserves pressure, resilience, opportunity, and confidence as distinct outputs.","Explain how provenance and guidance make CE auditable.","Identify what must remain visible before and after integration."],"title":"CE Integration Architecture"},{"audience":"Analysts, strategy leads, and domain advisors","category":"Scenario Practice","color":"purple","description":"Learn how to design defensible CE scenarios, build stress families, and interpret how scenario assumptions propagate into economic, climate, and integrated outputs.","difficulty":"Intermediate","duration_min":110,"icon":"fa-sliders","lessons":[{"content":"## The Governing Context For Every CE Forecast\n\nIn CE, no forecast is produced without a scenario envelope. The envelope is not a label or a name \u2014 it is a structured set of assumptions that governs how every model adapter interprets its inputs and what it is allowed to say about the future.\n\n### The five governing fields\n\nA CE scenario envelope holds:\n\n1. **Policy stance** \u2014 the assumed direction and intensity of government regulation, carbon pricing, and fiscal posture\n2. 
**Energy context** \u2014 the assumed fuel mix, transition pace, and energy price trajectory\n3. **Trade posture** \u2014 the assumed openness or fragmentation of global trade flows\n4. **Shock profile** \u2014 any assumed extraordinary disruptions such as physical climate events, financial crises, or supply chain breakdowns\n5. **Industry focus** \u2014 the sector lens that determines which transmission channels are most relevant\n\n### Why all five must be specified together\n\nEach field influences the others. A tightening policy stance that is not paired with an energy assumption is incomplete \u2014 the same carbon pricing regime implies very different things depending on whether clean energy is abundant and cheap or constrained and expensive.\n\n| Field | Under-specified Example | Why It Matters |\n| --- | --- | --- |\n| Policy stance | \"Moderate regulation\" without sector detail | Different sectors experience the same policy very differently |\n| Energy context | \"Transition underway\" with no pace assumption | Fast and slow transition produce diverging costs |\n| Trade posture | \"Globalized\" without fragmentation tail | Supply concentration risk becomes invisible |\n| Shock profile | No shocks assumed | Baseline becomes indistinguishable from stress |\n| Industry focus | Multiple industries without weights | Transmission logic cannot prioritize channels |\n\n### The envelope as a contract\n\nOnce a scenario is set, it travels with the forecast. Every output produced under that scenario should be interpretable by anyone who can read the envelope \u2014 no hidden assumptions allowed.\n\n[Q: Why does CE require all five scenario fields to be specified together rather than allowing partial scenario definitions? | Because the fields are interdependent. A policy assumption without an energy assumption is ambiguous, and an energy context without a trade posture cannot reveal supply concentration risk. Partial scenario definitions leave hidden assumptions that corrupt downstream integration.]","duration_min":22,"lesson_id":"scenario_01","title":"What Is A CE Scenario Envelope"},{"content":"## Three Scenario Families Every CE User Must Know\n\nNot all scenarios serve the same purpose. CE recognizes three structural families, and confusing them is one of the most common sources of bad integrated outputs.\n\n### Baseline scenarios\n\nA baseline scenario represents the most defensible central-tendency view given current information. It does not represent what is most likely in an absolute sense \u2014 it represents the starting point against which stress and transition scenarios are measured.\n\n**Key discipline**: A baseline should never be treated as a forecast. It is an anchor. The question a baseline answers is: what do current fundamentals imply if conditions evolve roughly as expected?\n\n### Stress scenarios\n\nA stress scenario amplifies one or more shock assumptions to test how the system responds under adverse conditions. 
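\n\nAs a toy illustration of what amplifying a shock assumption means in practice, consider the sketch below. The envelope fields and the helper function are assumptions made for illustration, not CE's actual API:\n\n```python\n# Toy sketch: derive a stress variant from a baseline envelope by\n# amplifying one named shock assumption. Hypothetical, not CE's API.\nbaseline = {\n    'policy_stance': 'moderate carbon pricing',\n    'energy_context': 'steady renewable buildout',\n    'trade_posture': 'semi-open',\n    'shock_profile': {'drought_severity': 1.0},  # 1.0 = baseline intensity\n    'industry_focus': 'agriculture',\n}\n\ndef stress_variant(envelope, shock, factor):\n    # Copy the envelope, then amplify a single shock assumption while\n    # keeping every other governing field identical for comparability.\n    stressed = {**envelope, 'shock_profile': dict(envelope['shock_profile'])}\n    stressed['shock_profile'][shock] *= factor\n    return stressed\n\ndrought_stress = stress_variant(baseline, 'drought_severity', 2.5)\n```\n\nOnly the shock field changes; holding the other four fields fixed is what keeps the stressed run comparable to its baseline.\n\n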
Stress is not pessimism for its own sake \u2014 it is disciplined interrogation of vulnerability.\n\n**Key disciplines**:\n- Stress scenarios should be plausible, not merely extreme\n- Each stressed assumption should be linked to a transmission mechanism\n- Stress that cannot be traced to an industry impact is not useful stress\n\n### Transition scenarios\n\nA transition scenario explores how the economy and climate system co-evolve as major structural changes unfold \u2014 energy system shifts, regulatory regimes, capital reallocation, or behavioral adaptation. These are longer-horizon and more uncertain than stress scenarios.\n\n**Key disciplines**:\n- Transition scenarios require explicit assumptions about pace\n- They typically run over horizons of 5 to 20 years, not quarterly windows\n- Transition winners and losers are sector-specific \u2014 a single transition narrative can imply simultaneously high opportunity in some industries and high pressure in others\n\n### The comparison discipline\n\n| Scenario Type | Primary Question | Horizon | Output Focus |\n| --- | --- | --- | --- |\n| Baseline | What does current momentum imply? | 1-5 years | Central tendency and ranges |\n| Stress | Where is the system most vulnerable? | 1-3 years | Downside and resilience |\n| Transition | How does the system transform? | 5-20 years | Structural shifts and opportunity |\n\n[Q: What is the core discipline that distinguishes a stress scenario from simply running a pessimistic forecast? | A stress scenario must link each stressed assumption to a specific transmission mechanism. Pessimism without transmission logic cannot be interrogated or falsified. CE requires that stressed assumptions trace through to identifiable industry impacts.]","duration_min":24,"lesson_id":"scenario_02","title":"Baseline, Stress, And Transition Scenarios"},{"content":"## Internal Consistency Is The First Test Of Scenario Quality\n\nThe most common scenario design failure is internal contradiction \u2014 policy assumptions that conflict with energy assumptions, or trade posture that contradicts the shock profile. An inconsistent scenario produces outputs that seem surprising or incoherent, but the surprise is in the design, not in the model.\n\n### The consistency test framework\n\nBefore submitting any scenario to CE, apply this four-step test:\n\n**Step 1: Policy-energy alignment**\nDoes the policy stance support or contradict the energy context? Aggressive carbon pricing and no clean energy buildout are weakly consistent \u2014 prices would rise sharply. Moderate carbon pricing with rapid clean energy buildout is more internally consistent.\n\n**Step 2: Trade-energy alignment**\nDoes the trade posture support the energy context? A fragmented global trade scenario with heavy clean energy import dependence is inconsistent \u2014 fragmentation strands the supply chains those imports require.\n\n**Step 3: Shock-transmission alignment**\nDoes the shock profile match the claimed transmission? A physical climate shock should connect to operating and supply-chain channels. A financial shock should connect to financing channels. Shocks that are not routed through transmission are decorative, not analytical.\n\n**Step 4: Industry-channel alignment**\nAre the dominant transmission channels appropriate for the industry focus? An insurance-focused scenario should emphasize underwriting, repricing, and liability channels. A manufacturing scenario should emphasize input costs, logistics, and labor. 
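\n\nA minimal sketch of that alignment check, assuming a hypothetical mapping from industries to their expected channels:\n\n```python\n# Minimal sketch of the Step 4 industry-channel alignment check.\n# The expected-channel mapping is an illustrative assumption.\nEXPECTED_CHANNELS = {\n    'insurance': {'underwriting', 'repricing', 'liability'},\n    'manufacturing': {'input-costs', 'logistics', 'labor'},\n}\n\ndef misaligned_channels(industry, scenario_channels):\n    # Return every dominant channel in the scenario that is not among\n    # the channels expected for this industry focus.\n    expected = EXPECTED_CHANNELS.get(industry, set())\n    return [c for c in scenario_channels if c not in expected]\n\nprint(misaligned_channels('insurance', ['underwriting', 'logistics']))\n# ['logistics'] -> needs an insurance-specific rationale or removal\n```\n\n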
Generic transmission without industry calibration is a design weakness.\n\n### A worked example: energy sector, accelerated transition\n\n| Field | Chosen Assumption | Consistency Check |\n| --- | --- | --- |\n| Policy stance | Strong carbon pricing + renewable mandates | Consistent \u2014 supports the transition direction |\n| Energy context | Fast buildout of renewables, fossil divestment pressure | Consistent \u2014 aligns with policy stance |\n| Trade posture | Semi-open with mineral supply concentration risk | Consistent \u2014 transition minerals require global supply |\n| Shock profile | Stranded-asset repricing in fossil fuel infrastructure | Consistent \u2014 fast transition triggers this |\n| Industry focus | Energy sector, upstream operators | Correct \u2014 highest transmission sensitivity here |\n\n[Q: What is the four-step consistency test CE analysts should apply before submitting a scenario, and why does each step matter? | Step 1 checks policy-energy alignment to avoid contradictory cost signals. Step 2 checks trade-energy alignment to surface supply chain dependencies. Step 3 checks shock-transmission alignment to prevent decorative shocks that are not routed through industry channels. Step 4 checks that dominant transmission channels match the industry focus. Each step prevents a hidden assumption from producing incoherent outputs.]","duration_min":24,"lesson_id":"scenario_03","title":"Designing An Internally Consistent Scenario"},{"content":"## How Scenario Assumptions Flow Through The CE Pipeline\n\nUnderstanding how a scenario assumption reaches the final integrated output is essential for interpreting CE results. The path from assumption to output is not a black box \u2014 it is a documented pipeline.\n\n### The propagation chain\n\nEvery scenario assumption follows this chain inside CE:\n\n```\nScenario envelope\n  \u2192 Model adapter interprets assumption\n    \u2192 Canonical signal produced\n      \u2192 Industry transmission applied\n        \u2192 Component contribution computed\n          \u2192 Integrated output produced\n            \u2192 Guidance generated with provenance\n```\n\n### Reading component contributions\n\nThe integrated output preserves component contributions so users can trace which scenario field drove which output dimension. If pressure is elevated, analysts should be able to identify whether the driving component was economic, climate-physical, or transition-related.\n\n### What to look for when outputs feel wrong\n\n| Symptom | Likely Cause | Where To Investigate |\n| --- | --- | --- |\n| Integrated pressure contradicts input signals | Transmission weights miscalibrated | Transmission layer |\n| Confidence is high but signal disagreement is visible | Confidence scoring not reading disagreement correctly | Fusion layer |\n| Guidance text does not match component contributions | Guidance generated from stale context | Guidance logic |\n| Opportunity and pressure both appear very high | Scenario may cover multiple industries with different dynamics | Scenario design |\n\n### The analyst's review habit\n\nBefore accepting any CE output, work backwards: can the integrated result be explained from the scenario assumptions through the transmission chain? If the path is not legible, the output should not be published.\n\n[Q: If an integrated CE output shows elevated pressure but the raw economic and climate inputs both look benign, what is the most likely place to investigate first? 
| The transmission layer \u2014 specifically whether the transmission weights and channel assignments are correctly calibrated for the target industry. The propagation chain means elevated pressure must be traceable to either an economic, climate-physical, or transition component, not to the integrated output appearing on its own.]","duration_min":22,"lesson_id":"scenario_04","title":"Reading Scenario Propagation Through CE Outputs"},{"content":"## The Errors That Produce Misleading Integrated Forecasts\n\nScenario quality is the upstream determinant of CE output quality. A precisely computed integration of a badly designed scenario is still a badly designed scenario.\n\n### Failure 1: The optimism trap\n\nBaseline scenarios tend to drift toward optimism because analysts naturally favor central tendency over tail risk. Baselines should be regularly stress-tested and compared against historical baseline accuracy reviews.\n\n### Failure 2: Scenario inflation\n\nAdding shocks to a scenario for narrative impact rather than analytical purpose \u2014 \"making it realistic\" \u2014 creates signals that cannot be routed through transmission. Every shock must answer: which transmission channel does this amplify, and by how much?\n\n### Failure 3: Time horizon confusion\n\nRunning a transition scenario over a one-year horizon and interpreting it like a stress scenario conflates very different dynamics. CE outputs look different across short, medium, and long horizons because different uncertainty buckets dominate each.\n\n### Failure 4: Industry agnosticism\n\nA scenario that works for all industries simultaneously almost never produces useful industry-specific transmission outputs. The power of CE is granular translation \u2014 scenario design should start with the industry question, not end there.\n\n### Failure 5: Scenario-to-output over-confidence\n\nA well-constructed scenario is still not a prediction. Even the most internally consistent scenario only tells you what would follow *if the assumptions held*. CE's confidence output should reflect that conditionality.\n\n### Design review checklist\n\n- Does every shock have a transmission route?\n- Are the baseline, stress, and transition roles kept distinct rather than conflated?\n- Is the time horizon appropriate for the scenario family?\n- Is the scenario industry-specific rather than generic?\n- Does the resulting output require any hidden assumptions to be interpretable?\n\n[Q: Why is a well-constructed, internally consistent scenario still not a prediction, and how should CE handle that distinction? | A scenario only tells you what would follow if its stated assumptions held \u2014 it is not a probability-weighted claim about the future. 
CE handles this by presenting confidence as a first-class output reflecting the degree of model agreement and assumption stability, not by implying that internal consistency equals predictive accuracy.]","duration_min":18,"lesson_id":"scenario_05","title":"Common Scenario Design Failures"}],"module_id":"scenario-design","objectives":["Explain the five fields that define a CE scenario envelope.","Distinguish between a baseline scenario, a stress scenario, and a transition scenario.","Design a plausible and internally consistent scenario for a target industry.","Interpret how policy, energy, and trade assumptions propagate through CE outputs.","Identify scenario design errors that produce misleading integrated forecasts."],"title":"Scenario Design and Stress Testing"},{"audience":"All CE users \u2014 analysts, governance leads, and senior reviewers","category":"Quality and Governance","color":"gold","description":"Learn why reviewing the accuracy of past forecasts is not an optional compliance exercise but a core analytical discipline. Build a working understanding of how CE surfaces historical forecast performance and how to use that evidence to calibrate confidence in future outputs.","difficulty":"Beginner","duration_min":85,"icon":"fa-clock-rotate-left","lessons":[{"content":"## Past Forecast Performance Is Present Information\n\nMost teams treat forecast review as a governance formality \u2014 something that happens after the fact to satisfy audit requirements. CE is designed around the opposite view: past forecast performance is live information that should actively shape how much confidence to place in current and future outputs.\n\n### What happens when retrospective review is skipped\n\nWhen teams skip systematic retrospective review, several failure patterns become persistent:\n\n- **Overconfidence inheritance**: Forecasts that were overconfident in the past continue to be overconfident in the future because no correction mechanism exists\n- **Silent model drift**: Models that gradually lose calibration are not detected until the drift is large\n- **Scenario illusion**: Scenarios that looked well-constructed but produced poor outcomes continue to be used because no review compares their assumptions to what actually occurred\n\n### What CE's review discipline provides\n\nCE surfaces three types of retrospective evidence that feed directly into current outputs:\n\n1. **Directional accuracy** \u2014 did the forecast correctly identify the direction of change (pressure, opportunity, neutral)?\n2. **Magnitude calibration** \u2014 was the magnitude of the predicted change consistent with realized conditions, or systematically over- or under-scaled?\n3. **Confidence calibration** \u2014 when CE assigned high confidence to a forecast, was that confidence justified by the subsequent outcome?\n\nThese three dimensions are reviewed separately because they can fail independently. A model can be directionally reliable but magnitude-overconfident. A model can be well-calibrated on magnitude but poorly calibrated on confidence.\n\n### The retrospective review as a design tool\n\nRetrospective review is most valuable when it is used before building the next scenario \u2014 not just as an audit of the last one. The question is always: given what we know about past forecast accuracy under similar conditions, what should our confidence be today?\n\n[Q: Why does CE treat retrospective forecast review as live information that shapes current outputs rather than as a historical record kept separately? 
| Because past accuracy on directional calls, magnitude, and confidence calibration is the best available evidence for how much to trust current outputs under similar conditions. Without that feedback loop, overconfidence and model drift persist silently. CE makes retrospective evidence directly accessible so analysts can calibrate current confidence against what history has shown.]","duration_min":16,"lesson_id":"hfr_01","title":"Why Retrospective Review Is A Core Discipline"},{"content":"## Measuring Accuracy Along Three Independent Axes\n\nCE does not collapse past forecast performance into a single score. A composite accuracy number obscures the differences between directional reliability, magnitude calibration, and confidence calibration \u2014 differences that matter enormously for how you should use a model going forward.\n\n### Dimension 1: Directional accuracy\n\nDirectional accuracy measures how often the forecast correctly identified the direction of change relative to baseline.\n\n**Why directional accuracy is necessary but not sufficient**: A model that is directionally correct 80% of the time but routinely overestimates magnitude by a factor of two is not a well-calibrated model. Directional accuracy is the minimum bar, not the full picture.\n\n**Common causes of directional inaccuracy**:\n- Scenario assumptions that placed shocks in the wrong direction\n- Transmission weights that amplified a channel that was not the dominant mechanism\n- Regime changes in the target industry that invalidated historical calibration\n\n### Dimension 2: Magnitude calibration\n\nMagnitude calibration measures whether the forecast correctly sized the impact \u2014 not just its direction.\n\n**Systematic overestimation** is the most common failure in stress scenarios \u2014 analysts often apply stress assumptions that were plausible but larger than what materialized. Knowing this pattern should cause analysts to discount extreme stress magnitude claims.\n\n**Systematic underestimation** is more common in slow-moving transition scenarios \u2014 analysts tend to underestimate the pace and scope of structural change once it begins, because historical calibration reflects a pre-transition world.\n\n### Dimension 3: Confidence calibration\n\nConfidence calibration measures whether CE's stated confidence levels were consistent with realized outcomes. When CE assigned 85% confidence, did that roughly correspond to an 85% hit rate?\n\n| Confidence Level Assigned | Calibrated Meaning | Common Miscalibration |\n| --- | --- | --- |\n| Very high (>80%) | Outcome consistent with forecast nearly all the time | Over-assigned in novel scenarios |\n| High (60-80%) | Outcome consistent most of the time | Under-penalized for model disagreement |\n| Moderate (40-60%) | Genuine uncertainty; both directions were plausible | Most honest level for complex scenarios |\n| Low (<40%) | Significant uncertainty; use as exploratory | Rarely assigned proactively, often applied post-hoc |\n\n[Q: Why is directional accuracy necessary but not sufficient as a complete measure of forecast quality, and what does magnitude calibration add? | Directional accuracy tells you whether the model identified the correct direction of change, but not whether the size of the predicted change was reliable. A model can be consistently correct in direction while dramatically overstating or understating magnitude \u2014 which means it would misdirect severity assessments and capital allocation even while being technically directionally correct. 
Magnitude calibration provides the scaling dimension that directional accuracy alone cannot.]","duration_min":18,"lesson_id":"hfr_02","title":"The Three Dimensions Of Forecast Accuracy In CE"},{"content":"## When CE Forecasts Are Most Likely To Be Less Reliable\n\nCE provides honest confidence scores rather than false confidence across all conditions. Understanding the structural conditions that systematically reduce reliability is essential for analysts who must decide how much weight to put on current outputs.\n\n### Condition 1: Regime change in the target domain\n\nModels calibrated to historical patterns perform worst when the structure of the target domain changes \u2014 a policy shift, a technology breakthrough, or a market structural change that makes historical relationships unreliable. CE's data vintage markers help surface this risk, but analysts must apply judgment about whether the current scenario implies a regime change.\n\n### Condition 2: Novel scenario combinations\n\nScenarios that combine multiple stresses simultaneously have less historical precedent than single-stress scenarios. When CE runs a scenario that stacks physical climate disruption with financial market stress and policy tightening simultaneously, the confidence in any individual forecast component should be lower because historical calibration was conducted under conditions that rarely combined all three at once.\n\n### Condition 3: Long horizon in a transitional environment\n\nForecasts beyond 10 years in a period of active structural transition carry fundamentally higher uncertainty. CE is designed to express this through wider confidence intervals at longer horizons, but analysts should proactively reduce confidence in long-horizon outputs generated during active transition.\n\n### Condition 4: High model disagreement at baseline\n\nIf the models contributing to a combined output disagree even before stress is applied, that baseline disagreement is a reliability warning signal. Agreement under stress means less when the baseline is already contested.\n\n### Condition 5: Thin historical coverage for the target geography or sector\n\nCalibration quality is a direct function of historical data coverage. For sectors or geographies where data density is low, model calibration is thinner and reliability estimates should be discounted accordingly.\n\n| Condition | Reliability Impact | Analyst Action |\n| --- | --- | --- |\n| Regime change | High reduction | Inspect calibration vintage and compare to current structure |\n| Novel scenario stacking | Moderate to high reduction | Run scenarios individually first, then combined |\n| Long horizon in transition | High reduction | Widen confidence intervals manually |\n| Baseline disagreement | Moderate reduction | Investigate disagreement source before proceeding |\n| Thin data coverage | Moderate reduction | Note in provenance, reduce stated confidence |\n\n[Q: Why should analysts proactively reduce confidence in long-horizon CE forecasts generated during active structural transitions, beyond what the model's own confidence score assigns? | Because model calibration is based on historical relationships that may not persist during structural transitions. The model's own confidence scoring is itself calibrated to historical patterns, so it may not adequately penalize for the type of structural break that transitions represent. 
Analyst judgment must supplement model-derived confidence in these conditions.]","duration_min":18,"lesson_id":"hfr_03","title":"Structural Conditions That Reduce Forecast Reliability"},{"content":"## Turning Retrospective Evidence Into Better Forecasts\n\nHistorical accuracy evidence is only useful if it changes analyst behavior going forward. This lesson covers four concrete ways CE users should apply retrospective accuracy findings.\n\n### Application 1: Scenario construction\n\nBefore finalizing a scenario, review historical accuracy under similar scenario conditions. If past transition scenarios systematically underestimated structural change pace, build that bias into current scenario design by testing a faster-transition variant alongside the central case.\n\n### Application 2: Confidence calibration\n\nWhen CE assigns a confidence level to a current output, compare that level to the historical confidence calibration curve. If similar scenarios in the past showed confidence was systematically over-stated, apply a manual discount before publishing.\n\n### Application 3: Weight review triggers\n\nIf retrospective review shows one component consistently underperformed under specific scenario conditions, that is a trigger for a weight review \u2014 even if the scheduled review period has not arrived.\n\n### Application 4: Communication calibration\n\nForecasts communicated to stakeholders should explicitly note the historical accuracy track record under similar conditions. Presenting a high-confidence output without noting that similar scenarios historically showed weaker confidence calibration is an incomplete disclosure.\n\n### The accuracy-to-scenario feedback loop\n\nThe most disciplined CE practice is to treat each retrospective review as an input to the next scenario-building cycle. The feedback loop looks like:\n\n```\nRun scenario \u2192 Produce output \u2192 Compare to realized conditions\n  \u2192 Identify directional, magnitude, and confidence accuracy\n    \u2192 Feed findings into next scenario construction\n      \u2192 Adjust component weights if component-level bias detected\n        \u2192 Update confidence calibration benchmarks\n```\n\nThis loop does not close automatically \u2014 it requires deliberate analyst action at each review point.\n\n[Q: Why is it insufficient to simply note historical accuracy findings internally without also reflecting them in stakeholder-facing communications about current forecast confidence? | Because stakeholders who receive high-confidence CE outputs without visibility into the historical accuracy track record for similar conditions cannot make appropriately calibrated decisions. The accuracy record is material information for any stakeholder who is acting on the forecast \u2014 withholding it creates a false impression of reliability that historical evidence may not support.]","duration_min":16,"lesson_id":"hfr_04","title":"Using Historical Accuracy Evidence In Practice"},{"content":"## A Practical Guide To The CE Accuracy View\n\nThe CE accuracy summary provides a structured view of past forecast performance organized by module, scenario family, and time horizon. 
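\n\nAs a purely illustrative sketch (not CE's actual schema), one row of such a summary might be represented like this:\n\n```python\n# Illustrative sketch of one accuracy-summary row; the field names\n# and values are assumptions, not CE's actual schema.\nsummary_row = {\n    'module': 'energy-combined',\n    'scenario_family': 'stress',    # baseline, stress, or transition\n    'horizon': 'medium',            # short, medium, or long\n    'directional_accuracy': 0.74,   # hit rate on direction of change\n    'magnitude_error': 0.18,        # mean overestimate vs realized magnitude\n    'confidence_gap': 0.12,         # stated confidence minus actual hit rate\n}\n\n# A positive confidence_gap means stated confidence exceeded the realized\n# hit rate, i.e. the model was over-confident for this slice.\n```\n\n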
This lesson walks through how to read that summary and what actions it should trigger.\n\n### Summary structure\n\nA CE forecast accuracy summary shows:\n\n- **Module**: Which combined model generated the forecast\n- **Scenario family**: Baseline, stress, or transition\n- **Time horizon**: Short (1-2 yr), medium (3-5 yr), or long (5+ yr)\n- **Directional accuracy %**: Hit rate on direction of change\n- **Magnitude error**: Average over- or under-estimate relative to realized magnitude\n- **Confidence calibration**: Measured difference between stated confidence and actual hit rate\n\n### What good looks like\n\nA well-calibrated combined model shows:\n- Directional accuracy above 70% for short-to-medium horizons\n- Magnitude error within a reasonable range (the exact range depends on scenario volatility)\n- Confidence calibration within \u00b110 percentage points of stated levels\n\n### Reading signals of concern\n\n| Pattern In Summary | What It Suggests |\n| --- | --- |\n| Directional accuracy drops sharply for one scenario family | That scenario family may be poorly calibrated |\n| Magnitude error large and one-directional | Systematic bias \u2014 review component weights and scenario design |\n| Confidence significantly higher than actual hit rate | Confidence is over-stated \u2014 apply manual discount |\n| Long-horizon accuracy significantly worse than short | Expected, but review if the gap is larger than prior periods |\n| One module consistently worst performer | Component inspection and possible deregistration |\n\n### The summary as a living document\n\nThe accuracy summary should be reviewed before every major scenario run, not just at scheduled review intervals. The discipline is to ask: does this summary give me reason to adjust my confidence in the outputs I am about to generate?\n\n[Q: If a CE forecast accuracy summary shows that a particular combined model consistently assigns confidence levels that are 20 percentage points higher than the actual historical hit rate, what should an analyst do before publishing a new output from that model? | The analyst should apply a manual confidence discount of approximately 20 percentage points to stated confidence before publishing, and should flag the calibration gap in the provenance record. 
If the gap is persistent across multiple review cycles, it should also trigger a formal weight review and potentially a fusion logic inspection to determine whether the confidence scoring mechanism itself needs recalibration.]","duration_min":17,"lesson_id":"hfr_05","title":"Reading A CE Forecast Accuracy Summary"}],"module_id":"historical-forecast-review","objectives":["Explain why retrospective forecast review is integral to forecast quality, not separate from it.","Identify the three dimensions CE uses to evaluate historical forecast accuracy.","Read a CE forecast accuracy summary and identify where confidence should be adjusted.","Describe at least three structural conditions that historically caused CE forecasts to be less reliable.","Apply historical accuracy evidence to scenario construction and confidence calibration."],"title":"Historical Forecast Review and Accuracy"},{"audience":"New CE users, analysts beginning their first CE engagement","category":"Onboarding","color":"cyan","description":"A complete end-to-end walkthrough of the CE Workbench: selecting and comparing models, running the integration desk, saving and retrieving runs, comparing archive outputs, and using training and documentation resources.","difficulty":"Beginner","duration_min":72,"icon":"fa-desktop","lessons":[{"content":"## Finding Your Way Around CE\n\nThe CE Workbench is organized around a single dark-themed shell with a collapsible sidebar on the left and a main content area on the right. All of CE's analytical tools, documentation resources, training modules, and administration surfaces are accessible from the sidebar.\n\n### The sidebar\n\nThe sidebar is divided into two sections:\n\n**Core Tools** \u2014 the analytical surfaces you will use for most CE work:\n- **Dashboard**: The CE home view, showing key system status and recent activity\n- **Models**: Source model registry \u2014 browse, add, and manage individual models\n- **Run Model**: The primary interface for selecting and running a single model\n- **Integration Desk**: The combining interface for building and running combined models\n- **Saved Runs**: Your run history, available for retrieval and comparison\n- **Archive Compare**: A side-by-side or sequential comparison of saved run outputs\n\n**Workbench** \u2014 support resources:\n- **Sources**: Documentation sources registered in CE\n- **Docs**: The CE documentation browser\n- **Training Modules**: The training library you are using right now\n- **Admin**: System administration (restricted access)\n\n### Navigation conventions\n\nThe sidebar collapses on narrow screens. The hamburger icon at the top left toggles the sidebar on any device. Breadcrumbs appear at the top of content pages to show where you are within a surface.\n\n[Q: What is the purpose of the Integration Desk in the CE Workbench, and how does it differ from the Run Model surface? | The Run Model surface is for selecting and running a single source model to produce one output. The Integration Desk is for combining multiple source models into a CE combined model output \u2014 it manages component selection, weight assignment, and fusion configuration so that the combined output reflects contributions from multiple models rather than a single one.]","duration_min":14,"lesson_id":"wb_01","title":"Orientation: The CE Workbench Shell"},{"content":"## Your First CE Run\n\nThe most common first action in CE is to select a source model, configure the run parameters, and review the output. 
This lesson walks through that workflow step by step.\n\n### Step 1: Browse the model registry\n\nNavigate to **Models** in the sidebar. The model registry shows all source models available in your CE instance. Each model card shows:\n- Model name and type\n- Category (climate-physical, economic macro, industry-specific, or combined)\n- Last data vintage\n- Description and domain notes\n\nUse the category filter to narrow to the model type relevant to your analysis.\n\n### Step 2: Review model details before running\n\nClick any model card to open the model detail page. Before running, review:\n- **Known limitations** \u2014 conditions where this model is less reliable\n- **Scenario scope** \u2014 what scenario families this model was calibrated for\n- **Data vintage** \u2014 how recent the underlying data is\n\nSavvy CE analysts read model limitations before running, not after. A model that looks powerful may be poorly suited to the specific scenario you have in mind.\n\n### Step 3: Configure the run\n\nNavigate to **Run Model** in the sidebar. Select your model from the dropdown and set the scenario parameters:\n- Scenario family (baseline, stress, or transition)\n- Target industry\n- Horizon\n- Any custom notes for this run\n\n### Step 4: Review the output\n\nAfter running, CE presents:\n- The integrated signal (pressure / opportunity / neutral)\n- Component contributions (if combined)\n- Confidence score and decomposition\n- Provenance summary\n\n**First-run checklist**: Before acting on the output, read the confidence score. Check whether component disagreement is flagged. Review the provenance to confirm the scenario was applied as expected.\n\n[Q: What is the most important thing to review before running a model in CE, and why does the documentation recommend reviewing it before rather than after the run? | The model's known limitations and scenario scope should be reviewed before running because it may reveal that the model is not calibrated for the scenario you intend to run. Reviewing limitations after the fact introduces confirmation bias \u2014 analysts who see an output they like are less likely to apply a limitation flag that would qualify their confidence in that output.]","duration_min":15,"lesson_id":"wb_02","title":"Selecting A Model And Running Your First Output"},{"content":"## Making Your Work Retrievable And Comparable\n\nCE is designed for iterative analysis \u2014 you will often run the same model under different scenarios, revisit past outputs to compare against new runs, or share outputs with collaborators. The saved runs and archive features support all of these workflows.\n\n### Saving a run\n\nAfter any run, use the **Save Run** button to save the output to your run history. When saving, you can:\n- Assign a descriptive label (be specific \u2014 generic labels like 'test' are hard to retrieve later)\n- Add free-text notes about the intent of this run\n- Tag the run with a scenario family label for filtering later\n\n**Good label discipline**: Include the model name, industry, scenario type, and date in your label. Example: `energy-combined-stress-fast-transition-2025-07`\n\n### Retrieving a run\n\nNavigate to **Saved Runs** in the sidebar. Runs are listed in reverse chronological order by default. Use the search and filter controls to locate runs by model, industry, scenario tag, or date range.\n\nClick any saved run to reopen the full output view including the provenance record exactly as it appeared at the time of the run. 
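\n\nTo see why this matters, here is a sketch of snapshot-style saving; the structures are hypothetical, not CE's API:\n\n```python\n# Hypothetical sketch: a saved run is a frozen snapshot, so later\n# weight or vintage changes cannot alter what the run reported.\nimport copy\n\ndef save_run(output, weights, vintage):\n    return {\n        'output': copy.deepcopy(output),\n        'weights': copy.deepcopy(weights),  # weights as of run time\n        'data_vintage': vintage,            # vintage as of run time\n    }\n\nsaved = save_run(\n    {'signal': 'pressure', 'confidence': 0.72},\n    {'macro_model': 0.6, 'hazard_model': 0.4},\n    '2025-06',\n)\n\n# Reopening returns the stored record unchanged; it is never recomputed\n# against whatever the current weights happen to be.\ndef reopen(saved_run):\n    return saved_run\n```\n\n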
This historical fidelity is important \u2014 do not assume current model weights or settings would produce the same output today.\n\n### Using Archive Compare\n\nNavigate to **Archive Compare** in the sidebar to place two or more saved runs side by side. The compare view shows:\n- Signal direction comparison\n- Confidence score comparison\n- Component contribution comparison if both runs used combined models\n- Scenario tag and vintage comparison\n\n**Most useful archive compare patterns**:\n- Same model, same industry, different scenarios (what did the baseline vs. stress assume and what did they produce?)\n- Same model, same scenario, different dates (has the signal changed over time as data was refreshed?)\n- Different models, same scenario (how much do model choices affect the output?)\n\n[Q: Why does CE maintain historical run provenance exactly as it appeared at the time of the run, rather than recomputing when you reopen a saved run? | Because component weights, data vintages, and model configurations can change between runs. If reopening a saved run recomputed with current settings, it would no longer represent what was actually generated and communicated at the time. Accurate retrospective review requires the exact provenance of the original run, not a current-settings recomputation.]","duration_min":14,"lesson_id":"wb_03","title":"Saving Runs And Using The Archive"},{"content":"## Combining Multiple Models Into A Single CE Output\n\nThe Integration Desk is where the combining power of CE is most directly visible. Rather than accepting a single source model's output, the Integration Desk lets you assemble a combined model from registered sources, configure weights, and produce an integrated output with full provenance.\n\n### Opening the Integration Desk\n\nNavigate to **Integration Desk** in the sidebar. The desk presents a three-panel layout:\n\n- **Left panel**: Available source models by category\n- **Center panel**: Your active combined model configuration (components added and their weights)\n- **Right panel**: Configuration summary and run controls\n\n### Adding components\n\nDrag or select models from the left panel into the center panel to add them as components of your combined model. For each component, review the suggested weight (algorithmically derived) and adjust manually if your analytical judgment about scenario alignment warrants it.\n\nThe weight total must sum to 1.0. CE will warn you if the sum is out of bounds before allowing a run.\n\n### Reviewing the weight rationale\n\nBefore running, click **Weight Rationale** to see the basis for the algorithmically suggested weights. This shows which criteria drove the suggested distribution \u2014 historical accuracy, vintage freshness, scenario fitness, and limitation flags.\n\n### Running and reviewing the combined output\n\nAfter running, the Integration Desk output shows:\n- The combined signal with confidence decomposition\n- Each component's signed contribution (how much each model pushed the output and in which direction)\n- The disagreement metric across components\n- The full provenance record for this combined run\n\n**Analyst discipline**: After every combined run, read the component contributions before reading the integrated signal. Understanding what drove the output is more important than the output's direction alone.\n\n[Q: After running a combined model on the Integration Desk, why should analysts read the component contributions before reading the integrated signal? 
| Because the integrated signal only tells you the direction and magnitude of the combined output \u2014 it does not tell you which components drove it or whether the components agreed. A high-pressure signal driven by one dominant outlier component with two others disagreeing is a very different situation from a high-pressure signal with strong component consensus. Reading contributions first prevents analysts from accepting a direction without understanding its composition.]","duration_min":16,"lesson_id":"wb_04","title":"The Integration Desk: Building A Combined Model"},{"content":"## CE's Built-In Knowledge Resources\n\nCE is designed to be self-documenting. Every piece of domain knowledge, methodology rationale, and operational guidance is available inside the workbench \u2014 you do not need to leave CE to answer most analytical questions.\n\n### The Docs surface\n\nNavigate to **Docs** in the sidebar to access CE's documentation browser. Documentation is organized by:\n- **Architecture**: How CE's integration pipeline, fusion layer, and confidence scoring work\n- **Methodology**: The analytical frameworks behind scenario design, transmission weights, and model calibration\n- **Data Sources**: Details on each registered data source including coverage, vintage, and limitations\n- **Operational guides**: Step-by-step guides for common CE workflows\n\n**When to use Docs**: When you encounter an output that does not match your intuition, the Docs section is the first place to investigate. Look for the methodology document covering the relevant component or transmission channel.\n\n### The Training Modules surface\n\nYou are currently using the CE Training Modules surface. Training is organized into seven categories:\n- **External Models**: Understanding climate and economic model inputs\n- **CE Architecture**: How CE integrates and combines models\n- **Internal CE Models**: How CE converts source model families into a transparent integrated forecast\n- **Scenario Practice**: How to design and stress-test scenarios\n- **Sector Analysis**: Industry transmission deep dives\n- **Quality and Governance**: Forecast review and accuracy practice\n- **Onboarding**: Getting started with the CE Workbench\n\nEach module includes a notes sidebar where you can capture and save your own observations as you learn. Your notes persist between sessions and can be retrieved from the **Training Notes** page.\n\n### Getting help with a specific output\n\nIf you encounter an output that is confusing or appears inconsistent:\n1. Open the provenance record \u2014 trace the output to its components\n2. Read the scenario tag \u2014 confirm the scenario matches your intent\n3. Check Docs for the relevant methodology\n4. Review historical accuracy for similar scenarios in the Archive Compare view\n5. If the issue persists, flag the run for review through the admin surface\n\n[Q: What is the recommended first step when a CE output does not match your analytical intuition, and why is it more effective than immediately escalating to support? | The recommended first step is to open the provenance record and trace the output to its component contributions. Most unexpected outputs have a legible explanation in the provenance \u2014 an outlier component, a scenario mismatch, or a low freshness score. 
Reading provenance first resolves the majority of output questions without escalation and trains analysts to interpret CE outputs more fluently over time.]","duration_min":13,"lesson_id":"wb_05","title":"Docs, Training, And Getting Help"}],"module_id":"using-the-ce-workbench","objectives":["Navigate the CE Workbench and locate the core tool surfaces.","Select a source model, configure a run, and interpret the output.","Save a run and retrieve it from the archive for comparison.","Use the integration desk to combine two or more models into a CE combined output.","Access training modules and documentation to answer analytical questions."],"title":"Using the CE Workbench"}],"stats":{"categories":["CE Architecture","External Models","Internal CE Models","Onboarding","Quality and Governance","Scenario Practice","Sector Analysis"],"total_duration_min":828,"total_lessons":37,"total_modules":8}}
