~/ arboretum-consulting

Cognitive Language Runtime

Compose gated AI workflows across models. Evidence trails prove they did what you asked.

Coglan is a cognitive programming language and coding-agent environment. The .cog format gives teams structured, resumable, and verifiable workflows that run across CLI, TUI, VS Code, CI/CD, MCP, or any tool wired to the runtime.

  • Clear, declarative .cog syntax
  • Confidence-building semantic gates
  • Flexible cross-model execution
  • Audit-ready outputs

Start Anywhere

One workflow language, multiple execution surfaces.

Teams can keep their existing environment and share one .cog contract everywhere. Runtime behavior, checkpoints, and gate semantics stay consistent across each execution surface.

CLI

Launch from terminal-first workflows

Run orchestrated jobs locally, checkpoint progress, and resume from verified states without replacing your current tooling.

TUI

Track runtime state in real time

Follow agent execution, render technical markdown, and review evidence from a live terminal dashboard.

VS Code

Work directly beside your code

Run workflows from the editor sidebar with hover docs and artifact visibility where your team already ships software.

Core Syntax

Readable workflow definitions with built-in verification.

Coglan keeps orchestration, validation, and output shaping in a single workflow definition. Every stage is auditable, and dependencies can be resumed with predictable behavior.

health_check.cog
# 1. Load project metadata into context
load package.json as pkg

# 2. Autonomous worker investigates the project
worker health_check with tools read, shell using pkg:
  Assess this project's health. Check:
  - Git state (current branch, clean or dirty working tree)
  - Dependency health (outdated or missing packages)
  Return JSON with health (ready/caution/blocked),
  findings as an array, and recommendations.
  -> report

# 3. Gate enforces output structure
check report:
  report has health, findings, recommendations
  every finding has area, status, detail

# 4. Write verified results to a file
output report as json to reports/health-check.json

Execution Flow: health_check.cog
load

Load project metadata into context

load package.json as pkg

Input: package.json from project root
Output: pkg (project metadata)

Injects project metadata so the worker has real context to investigate against.

worker

Investigate git state and dependency health

worker health_check with tools read, shell using pkg -> report

Input: pkg metadata + read, shell tools
Output: report (structured JSON assessment)
Tools: read, shell · Task: autonomous investigation

The worker runs shell commands and reads files autonomously to assess project health.

check

Gate enforces report structure

report has health, findings, recommendations; every finding has area, status, detail

Input: report from worker
Rule: Must contain health, findings, recommendations; each finding needs area, status, detail

Retries automatically with specific feedback if the worker output is missing required fields.

output

Write health report to a named file

output report as json to reports/health-check.json

Final Artifact: JSON written to reports/health-check.json
Evidence Trail: Full execution history saved to ~/.coglan/runs/

Verified report lands in a named file alongside a complete evidence trail.

Advanced Routing

Coglan offers model orchestration at the language level.

In Coglan, the language is the router. Use Gemini 3.1 Pro for massive-context log ingestion, Opus 4.6 for deep architectural reasoning, and Sonnet 4.6 for fast, highly steerable synthesis, while maintaining one transparent verification chain.

multi_model_pipeline.cog
# 1. Import raw logs from your runtime source
call shell:cat ./logs/app.log -> raw_logs

# 2. Convert raw logs into structured critical events
pass triage with model google/gemini-3.1-pro-preview using raw_logs:
  Extract all critical crash events as strict JSON.
  -> critical_events

check critical_events:
  every event has timestamp, error_code, stack_trace
  at least 1 event

# 3. Use deep reasoning to identify root cause
worker deep_research with model anthropic/claude-opus-4.6 using critical_events:
  Return a detailed JSON architectural report.
  -> architecture_report

check architecture_report:
  has_keys root_cause, recommended_fix_architecture

# 4. Synthesize a concise executive brief
worker executive_summary with model anthropic/claude-sonnet-4.6 using architecture_report:
  Return strict JSON with keys summary, root_cause, recommended_fix_architecture.
  -> final_brief

check final_brief:
  has_keys summary, root_cause, recommended_fix_architecture

# 5. Render verified data as a readable report
pass format_brief using final_brief:
  Write a clear incident report from this data with headings
  for Summary, Root Cause, and Recommended Fix.
  -> incident_report

output incident_report as markdown to reports/incident-brief.md

Model Routing Timeline: multi_model_pipeline.cog
call

Import raw logs from the runtime source

call shell:cat ./logs/app.log -> raw_logs

Input: ./logs/app.log (runtime log stream)
Output: raw_logs (unfiltered telemetry text)

Loads log data into the workflow context so downstream model steps operate on explicit inputs.

pass

Triage high-volume logs with large context capacity

pass triage with model google/gemini-3.1-pro-preview using raw_logs -> critical_events

Input: raw_logs (unfiltered telemetry)
Output: critical_events (strict JSON array)
Model: Gemini 3.1 Pro · Task: extraction

Uses large context capacity to convert noisy logs into structured incidents ready for deeper analysis.

check

Validate each critical event payload

every event has timestamp, error_code, stack_trace; at least 1 event

Input: critical_events array
Rule: Every event includes required forensic fields

Ensures the architecture model receives complete, machine-checkable incident records.

worker

Perform root-cause analysis on architecture-level failures

worker deep_research with model anthropic/claude-opus-4.6 using critical_events -> architecture_report

Input: critical_events
Output: architecture_report (JSON diagnosis)
Model: Claude Opus 4.6 · Task: root cause

Converts event-level evidence into architectural conclusions and clear remediation paths.

check

Ensure report includes actionable decision fields

check architecture_report: has_keys root_cause, recommended_fix_architecture

Input: architecture_report
Rule: Must contain both causal diagnosis and architecture fix

Keeps synthesis grounded in complete analysis output.

worker

Generate an executive JSON brief with a fast synthesis model

worker executive_summary with model anthropic/claude-sonnet-4.6 using architecture_report -> final_brief

Input: architecture_report
Output: final_brief (strict JSON brief)
Model: Claude Sonnet 4.6 · Task: synthesis

Creates a direct, evidence-first brief that leadership can act on immediately.

check

Confirm final brief includes required decision fields

check final_brief: has_keys summary, root_cause, recommended_fix_architecture

Input: final_brief (structured summary)
Rule: Must include summary, root cause, and recommended fix architecture

Guarantees the final artifact carries the exact fields needed for downstream decisions.

pass

Convert structured JSON into a readable incident report

pass format_brief using final_brief -> incident_report

Input: final_brief (verified JSON)
Output: incident_report (readable markdown)

Gates verified the structure. This step renders it as a clear report with headings for Summary, Root Cause, and Recommended Fix.

output

Write the incident report to a named file

output incident_report as markdown to reports/incident-brief.md

Final Artifact: Readable markdown at reports/incident-brief.md for incident reviews, handoffs, and ticketing
Traceability: Every model handoff linked to explicit gate checks in the evidence trail

Completes a multi-model run with a human-readable report and a verifiable execution trail.

Why Coglan?

Orchestration clarity and verification confidence, at scale.

Agent Orchestration

Route each step to the model that fits best.

Coglan lets you route one step to a frontier reasoning model and the next to a fast extraction model, all declared in readable, natively orchestrated syntax. Checkpoints anchor progress so complex multi-agent workflows can resume seamlessly from any verified state.
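The same per-step routing can be declared inline. As an illustrative sketch (the variable names and prompts here are hypothetical, reusing the pass, worker, and check syntax from the examples above):

pass extract with model google/gemini-3.1-pro-preview using meeting_notes:
  Pull every action item as strict JSON.
  -> action_items

check action_items:
  every item has owner, task, due_date

worker plan with model anthropic/claude-opus-4.6 using action_items:
  Return a JSON rollout plan with risks and milestones.
  -> rollout_plan

Each step names its model explicitly, so the routing decision lives in the workflow file rather than in application code.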

Verification Confidence

Shift verification from manual review to explicit contracts.

Coglan introduces semantic gates that act like a type system for LLM output. Assertions derive schema intent from plain English, shifting teams from "review everything manually" to "verify every step by contract."
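A gate reads like a sentence but enforces structure. For example, a check over a hypothetical invoice variable (illustrative only, following the check syntax shown in the examples above) might be:

check invoice:
  invoice has vendor, total, line_items
  every line_item has description, amount

If the producing step's output is missing a required field, the runtime retries that step with specific feedback instead of passing malformed data downstream.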

Product Surface

Language, environment, and extension working as one system.

01 / Language

Structured primitives for dependable execution

Coglan provides seven step types, structural primitives for control flow and scoping, and a compact syntax readable by humans, models, and the runtime.

02 / Environment

Terminal-native operations and checkpoint clarity

Follow worker progress, inspect markdown artifacts, fork conversations when needed, and persist execution state safely inside the runtime loop.

03 / Extension

Editor-integrated execution and evidence visibility

Run the full Coglan console inside a VS Code sidebar with hover docs and local evidence visibility right beside your source code.

The Self-Programming Loop

Models can author and execute .cog workflows directly.

Models can write .cog because the syntax is compact and grounded in natural language. Coglan's entire syntax fits in a single system prompt, creating a recursive loop where agents plan, execute, verify, and continue with confidence.

Native Authoring

From complex task to executable workflow

An agent encounters a complex task, writes a .cog file, and runs it directly through the full runtime.

Verified Feedback

Verified outputs feed the next decision

Verified results flow back into the conversation, so each next decision starts from trusted context.

Community & Adoption

Open runtime foundation.

Layer 1 / Open Source

Runtime stack under Apache 2.0

The core runtime, CLI, TUI, VS Code extension, GitHub Actions, and example library stay free and extensible.

  • Cross-model execution
  • Checkpointing and dependency resumes
  • Local evidence generation

Share Your Coglan Experience

I'd love to hear how you're using Coglan

Whether you're experimenting solo or rolling Coglan out across a team or organization, tell me what's working, what feels promising, and where you want to go next.

Prefer direct email? christopher@arboretumconsulting.io