4/28/2026

The first ticker

There is a particular kind of self-consciousness that comes from pointing a model you built yourself at a company everyone you know has an opinion on. It is not the self-consciousness of being wrong, exactly. It is the awareness that comes from finding out that the model and the market disagree, and from having to decide which one you actually believe.
I built Argos over the last several months for reasons I have written about elsewhere. The short version is that single-point valuation has always struck me as a polite fiction. A discounted cash flow produces one share price. A peer multiple produces another. An analyst target produces a third. Each of them is a slice of the future presented as if it were the picture. The honest description of any equity valuation is that you are pricing a probability distribution and pretending it is a number. Argos, true to the hundred-eyed metaphor I borrowed for it, tries to look at the distribution.
The architecture itself I have covered in earlier pieces, so the shorthand will do here: ten thousand and one Monte Carlo paths, twenty-eight forward quarters, eight correlated drivers per quarter, a Markov regime switch on top, Bayesian-shrunk drift priors and Ornstein-Uhlenbeck cost-ratio dynamics underneath. A bottom-up DCF runs alongside on expected-value inputs and serves as a sanity check on the simulation median. If the deterministic point and the distribution median diverge materially, something is wrong with the calibration. If they agree, neither of them is right, but at least they are wrong in the same way.
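To make that shorthand concrete, here is a minimal sketch of the two pieces that do most of the work inside a single path: a two-state Markov regime switch over the quarterly drift, and correlated shocks across the drivers via a Cholesky factor. The transition matrix, driver list, correlations, and volatilities below are placeholder assumptions for illustration, not the calibrated Argos inputs.

```python
import numpy as np

# One simulated path in miniature: a two-state Markov regime switch layered over
# correlated quarterly driver shocks. Every number here is an illustrative
# placeholder, not a calibrated input.
rng = np.random.default_rng(42)

QUARTERS = 28                                                  # seven forward years
drivers = ["revenue", "cogs_ratio", "opex_ratio", "capex"]     # 4 of the 8 drivers, for brevity

# Regime transition matrix: rows = current regime (0 = expansion, 1 = contraction)
P = np.array([[0.90, 0.10],
              [0.30, 0.70]])
regime_drift = {0: 0.02, 1: -0.01}      # quarterly revenue drift in each regime

# Correlation across driver shocks, turned into a Cholesky factor
corr = np.array([[ 1.0, -0.3, -0.2,  0.4],
                 [-0.3,  1.0,  0.5, -0.1],
                 [-0.2,  0.5,  1.0, -0.1],
                 [ 0.4, -0.1, -0.1,  1.0]])
chol = np.linalg.cholesky(corr)
vol = np.array([0.04, 0.01, 0.01, 0.05])   # per-quarter shock volatilities

regime = 0
revenue = 100.0                            # indexed starting revenue
for q in range(QUARTERS):
    # Evolve the regime, then draw correlated shocks for all drivers at once
    regime = rng.choice(2, p=P[regime])
    shocks = chol @ rng.standard_normal(len(drivers)) * vol
    revenue *= 1.0 + regime_drift[regime] + shocks[0]

print(f"terminal quarterly revenue index after {QUARTERS} quarters: {revenue:.1f}")
```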
The justification for this much machinery is not that any single name deserves it. It is that the machinery only has to be built once. After that, it asks the same set of questions about every company you feed into it. The discipline of mechanical consistency is undervalued in an industry where most analysts run a different model for every ticker and call the inconsistency judgment.
Why Apple, and why now
There were two reasons to start with Apple. The first is that a verdict on a company nobody knows tells you nothing about whether the verdict is any good. If I say the model produces a sensible valuation for, say, a mid-cap Italian industrial that two of my readers have heard of, the response is going to be a polite nod. If the model produces a verdict on Apple, the response is going to be either agreement, which is interesting, or disagreement, which is more interesting still. A flagship name is a stress test for the user, not just the model.
The second reason is timing. Apple prints fiscal Q2 on Thursday. That gives the call a real expiration date. The model says what it says today, the market says what it says today, and on Friday we will all know slightly more than we did this morning about which of us was closer. Posting a valuation note three weeks before earnings is leisurely. Posting it three days before is a small commitment device.
So: Apple. Spot is $266.89. The Argos call is HOLD with negative bias. The Monte Carlo base case is $215.41, which is nineteen percent below the market. Eighty-two percent of the ten thousand and one simulated paths trade below today’s price. The bottom-up DCF sanity check lands at $239.19, ten percent below market. The deterministic point and the distribution median sit within twenty-four dollars of each other, which is the kind of agreement that makes you trust the calibration. The sell-side mean target is $297.71.
The first thing to notice is what the call is not. It is not a claim that Apple is broken. Return on invested capital exceeds the cost of capital by seven percentage points across the seven-year forecast. Net debt is thirty cents on the dollar of EBITDA. Distress probability is zero in every year of the simulation. The buyback is funded out of free cash flow at a hundred and ten billion a year. None of these are the numbers of a company in trouble. The thesis is not that the franchise is impaired.
The thesis is that the price already reflects the franchise.
The disagreement is the shape, not the slope
This is the part that took me the longest to articulate clearly, and it is the part I think most short-form summaries get wrong, including, I will admit, the first version of my own LinkedIn post on the same topic.
The Street has Apple growing twelve percent in fiscal 2026 and seven percent in fiscal 2027. Argos has Apple growing roughly ten percent in fiscal 2026 and five percent in fiscal 2027. In Year One, the gap is around two hundred basis points. That is not enormous. Reasonable people can disagree about Year One revenue growth by two hundred basis points without either of them being foolish.
The disagreement is what happens after Year One.
In the Argos path, growth decays toward a revenue-weighted segment blend of about four point eight percent. The decay is not a forecast in the usual sense. It is the consequence of the Ornstein-Uhlenbeck dynamics on the cost-ratio side and the Bayesian shrinkage on the drift side, both of which pull the model toward the long-run prior at a rate determined by the data, not by the analyst. The long-run prior is itself a weighted average of segment-level priors, and those priors are anchored on what each segment has actually done over the last decade, adjusted for what nominal GDP can support. Americas at four percent, Europe at four and a half, Greater China at two, Japan at two, Rest of Asia Pacific at six. None of these is heroic. None of these is conservative. They are what the data says.
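For readers who prefer code to prose, here is a toy version of those two mechanics: a revenue-weighted blend of the segment priors quoted above, and a simple shrinkage of an observed segment growth rate toward its prior. The revenue weights and the shrinkage weight are illustrative assumptions, so the blend printed here will not reproduce the roughly 4.8 percent the model itself lands on with its own forward weights.

```python
# Toy illustration of the segment blend and the drift shrinkage described above.
# The revenue weights and the shrinkage weight k are assumptions for the example.
segment_priors = {            # long-run growth priors quoted in the text
    "Americas": 0.040,
    "Europe": 0.045,
    "Greater China": 0.020,
    "Japan": 0.020,
    "Rest of Asia Pacific": 0.060,
}
revenue_weights = {           # hypothetical revenue mix, must sum to 1
    "Americas": 0.42, "Europe": 0.26, "Greater China": 0.17,
    "Japan": 0.06, "Rest of Asia Pacific": 0.09,
}

# With these placeholder weights the blend will not match the ~4.8% quoted in the
# text, which uses the model's own forward revenue weights.
blended_prior = sum(segment_priors[s] * revenue_weights[s] for s in segment_priors)
print(f"revenue-weighted long-run prior: {blended_prior:.3%}")

def shrink(observed_cagr: float, prior: float, k: float = 0.6) -> float:
    """Pull an observed growth rate toward the long-run prior; k is the prior weight."""
    return k * prior + (1.0 - k) * observed_cagr

print(f"shrunk Greater China drift: {shrink(0.00, segment_priors['Greater China']):.3%}")
```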
The Street’s path stays elevated. To sustain twelve percent in 2026 and seven percent in 2027, the implicit segment math requires growth above the highest single-segment prior in the model, applied across the whole portfolio. This is arithmetically possible. It is also a strong claim that requires every segment to outperform its own historical prior simultaneously, which is the sort of thing you would want a reason for, and the reason offered is generally Apple Intelligence.
I do not have a strong view on whether Apple Intelligence is transformational or incremental. I do have a view that the price as of this morning is paying for transformational, and the cost of being wrong about that is wider than the cost of being wrong about, say, Greater China stabilization, which is the other live debate.
The shape of the distribution
The other thing the model insists on is that the median is not the only number on the table. The tenth percentile of the simulation is one hundred and sixty dollars. The ninetieth percentile is two hundred and eighty-nine dollars. Spot sits between the seventy-fifth and the ninetieth percentile. It is high in the distribution. It is not above it.
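For the mechanically minded, placement is just a percentile rank. The sketch below uses a synthetic lognormal cloud as a stand-in for the 10,001 Argos paths; the spot price is the only real number in it, and the simulated percentiles printed here are not the ones quoted above.

```python
import numpy as np

# Where does spot sit inside a simulated distribution? A toy illustration with a
# synthetic lognormal cloud standing in for the actual Argos paths.
rng = np.random.default_rng(7)
simulated = rng.lognormal(mean=np.log(215.41), sigma=0.22, size=10_001)  # stand-in paths

spot = 266.89
share_below = (simulated < spot).mean()                # fraction of paths below spot
p10, p50, p90 = np.percentile(simulated, [10, 50, 90])

print(f"share of stand-in paths below spot: {share_below:.0%}")
print(f"p10 ≈ {p10:.0f}, median ≈ {p50:.0f}, p90 ≈ {p90:.0f}")
```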
This matters because the actionable signal is asymmetric placement, not the gap between the median and the spot. A point estimate of $215 against a market of $267 sounds like a strong sell. A distribution that places the market in its upper third with no path going to zero in seven years is a different story. The honest framing is that the equity is priced near the upper edge of a wide central tendency in a name where the underlying franchise is intact and the catalysts that would close the gap are dated to the next twelve months. The triggers fall out of the geometry: trim above $290, hold between $200 and $290, accumulate below $200. That is not a recommendation in the regulated sense. It is what the distribution looks like.
What I am actually testing
I said in the original Argos piece that two tickers would pass through the engine in public, and only two. The point is to put the model in front of an audience and take pushback, not to launch a research note service. There are plenty of those already, written by people who do this for a living and who have compliance departments to keep them honest. I have neither.
What I am testing, more specifically, is whether the model produces verdicts that an experienced reader looks at and finds defensible, even where they disagree. A model that everyone disagrees with is broken. A model that everyone agrees with is sycophantic. The interesting middle ground is a model that produces a result a reader would not have produced themselves but can reconstruct the reasoning for. Whether Argos sits in that middle ground is something I cannot evaluate from inside my own head.
So that is the actual ask. Not whether the call on Apple is right, which time will settle without my help. Whether the way Argos got to the call is the kind of reasoning you would want a model to do.
The second ticker will be the last. Suggestions in the comments.
This piece does not constitute investment research, financial advice, or a recommendation to buy, sell, or hold any security. It is shared for educational and illustrative purposes only, to demonstrate the Argos valuation engine. Information is believed accurate but is not guaranteed, and forward-looking estimates rely on assumptions that may not hold. Modeled outcomes, including Monte Carlo distributions and price triggers, are not indicative of future results and should not be relied upon as the basis for any investment decision. Readers should conduct their own due diligence and consult a licensed financial advisor before transacting in any security.

Aapl 28042026 · 2.56 MB · PDF

4/26/2026

Argos - A hundred-eyed watcher


A hundred-eyed watcher, a valuation engine, and the architecture they share.

The model has a name.

Argos. After the hundred-eyed watcher of Greek mythology, set to guard what mattered, with vision that never fully closed because some of his eyes were always awake while the others rested.

The metaphor is the architecture.

I went looking for a name only after the system was finished. Whatever I called it had to come from how it actually worked, not from how I wanted to market it. The myth fit on the first pass.


What Argos does

Three stages, one system, filings to thesis.

Onboarding. An Analyst drafts a configuration from the 10-K. A Reviewer challenges every field. A peer track runs the same exercise across comparable filers. A Manager adjudicates. The output, segments.json, bridges Stage 1 to Stage 2. Once written, it stays.

Valuation. Seven-year FCFF, Hamada-relevered WACC, 10,001-path Monte Carlo. GBM revenue, Ornstein-Uhlenbeck cost dynamics. Stress and tornado on top. No agents. No improvisation.

Investment Report. Twin analysts draft in parallel. A reviewer consolidates. A storyteller assembles the narrative. A CIO signs off, or sends it back, with up to three revision cycles before approval. Out in Word, PowerPoint, and Markdown.

Many eyes. One judgment.

Why the myth fits

The useful part of the myth is not the hundred eyes. It is the structure of the watching.

Argos did not see everything by being everywhere at once. He saw everything because his attention was distributed across many independent observers, none of which ever fully shut down. That is what a continuous valuation system has to do. No single agent, no single calculation, no single report holds the whole picture. The picture emerges from the coordination of partial views.

The other piece that earned its place is the role of judgment. Argos was a guardian, not a generator. He watched over what already existed. He did not invent it.

The system does the same. It reads filings that already exist. It calculates with data that already exists. It produces a thesis that has to survive an internal critic before it leaves. The agents propose. The math computes. The CIO judges. None of those steps is decorative.


The visual

A central eye, navy and gold, surrounded by five colored dots in a pentagon. The eye is the watcher. The dots are the workflow.

Each color carries a function.

Dusty blue. Data. Filings, market data, prior configurations.

Gold. Agents. Analyst, Reviewer, Manager, Storyteller, CIO.

Sage. Engine. DCF, Monte Carlo, WACC, calibration.

Navy. Bridge. Configuration files between stages.

Teal. Output. Word, PowerPoint, Excel.

The five colors are not decoration. They are a navigation system. A reader who sees gold in a chart should know it represents agent activity. A reader who sees sage should know it represents calculation. Consistency compounds into recognition.

What this is not

For now, Argos is not a product. Not a startup. Not a service for sale.

It is the operational arm of a personal investment practice, built for the quality of analysis I want to apply to my own decisions and to the family office context that surrounds them. The architecture happens to be rigorous enough to share. The system stays in-house.

This shapes the design choices. A commercial product would optimize for breadth and ease of use. Argos optimizes for methodological defensibility, auditability, and the ability to argue with itself before producing a conclusion. Different objectives, different architectures.

I am sharing the methodology and the worked output. Not the code.

What comes next


Two worked examples land this week. Real tickers, full output, no shortcuts. The system at work, not a description of it.

If there is a specific company you would like to see valued next, drop a ticker in the comments. The most requested names go in the next round.

Then the next build. A valuation model is only as useful as the questions you point it at. Argos answers “what is this worth?” It does not yet answer “which companies should we even be asking about?”

That is the screener. A tool that scans the market, applies a coarse filter on financial health, valuation gap, and earnings quality, and surfaces the candidates worth feeding into the pipeline. Valuation tells you the answer. The screener tells you which questions to ask.

The watcher needs something to watch.

A Risk Premium project.




4/25/2026

The Machine Is Complete

 

A 10-K goes in. Nine agents argue over what it means. A Monte Carlo engine simulates 10,001 versions of the future. A storyteller turns the numbers into a thesis. A CIO signs off, or sends it back for another pass.

Out comes a full investment report.

That sentence is, in compressed form, what I have been building for the last several weeks. With Pillar 3 now in place, the valuation architecture is finished. Three stages, one system, filings to thesis.

This piece is a walk through what each stage does, what each stage explicitly does not do, and where the project goes from here.

The principle: agents author judgment, code does math

Before describing the stages, the design principle that runs through all three.

Anywhere a number can be calculated from data, no agent gets to touch it. Anywhere a structured opinion is needed about what a 10-K actually means, no piece of code gets to invent it. The line between the two is hard-coded into the system. Calibrated volatilities, working capital days, cost of debt, beta, all computed. Drift priors, FX exposures, peer selection, capital structure thresholds, all proposed by agents and adjudicated by other agents.

This is the discipline that makes the rest of the architecture defensible. When something goes wrong, you can always tell whether the failure was a model error or a judgment error, and which agent or which line of code is responsible.

Stage 1: Onboarding

Onboarding runs once per ticker. Its job is to read the 10-K, peer filings, and any prior configuration, and emit a single structured file called segments.json. Everything downstream depends on it.

Five phases run in sequence. Phase 0 fetches the raw filings. Phase 1 is a data-quality loop with retry logic, since SEC filings are not always well-formed. Phase 2 splits into two parallel tracks: a Ticker Analyst reads the company’s own filing in depth, while a Peer Analyst runs the same exercise across comparable filers in a red-team configuration. Phase 3 is the Reviewer layer, where each track’s output is challenged by an independent agent. Phase 4 is the Manager, who weighs the two tracks, adjudicates contested fields, and writes the final configuration.

Every numerical assumption proposed by an agent must be backed by a filing citation. No citation, no override. The Manager’s reasoning is logged for every field where the two tracks disagreed.
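A minimal sketch of that gate, with a made-up field name and proposal structure rather than the actual Argos schema: a proposed override without a citation simply never replaces the default.

```python
# "No citation, no override" in miniature. Field names and the proposal structure
# are illustrative assumptions, not the real schema.
from dataclasses import dataclass

@dataclass
class Proposal:
    field: str
    value: float
    citation: str | None    # e.g. "10-K FY2025, Item 7, Segment Operating Performance"
    agent: str

def adjudicate(default: float, proposals: list[Proposal]) -> tuple[float, str]:
    """Return the adopted value and a one-line log of the reasoning."""
    cited = [p for p in proposals if p.citation]
    if not cited:
        return default, "no cited override; default retained"
    chosen = cited[-1]       # stand-in for the Manager's adjudication between tracks
    return chosen.value, f"{chosen.agent} override adopted, citation: {chosen.citation}"

value, log = adjudicate(
    default=0.048,
    proposals=[Proposal("services_drift_prior", 0.09, None, "Ticker Analyst")],
)
print(value, "//", log)      # 0.048 // no cited override; default retained
```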

The output, segments.json, is the contract that bridges Stage 1 and Stage 2. Once written, it stays. Subsequent valuation runs read from it without re-invoking the agent pipeline, which keeps both cost and reproducibility under control.
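For illustration only, segments.json might look something like the sketch below. The keys and values are assumptions chosen to echo the fields discussed in this piece, not the real schema the Manager writes.

```python
import json

# An illustrative shape for segments.json, the Stage 1 -> Stage 2 contract.
# Every key and value here is an assumption for the sake of example.
segments = {
    "ticker": "AAPL",
    "fiscal_year_end": "late September",
    "segments": [
        {"name": "Americas", "drift_prior": 0.040,
         "citation": "10-K FY2025, Segment Operating Performance"},
        {"name": "Greater China", "drift_prior": 0.020,
         "citation": "10-K FY2025, Segment Operating Performance"},
    ],
    "fx_exposures": {"EUR": 0.20, "CNY": 0.17, "JPY": 0.06},
    "capital_structure": {"max_net_debt_to_ebitda": 1.0},
    "peers": ["MSFT", "GOOGL", "SSNLF"],
    "adjudication_log": "manager_notes_fy2025.md",
}

# Once written, the file is the frozen contract read by Stage 2.
with open("segments.json", "w") as fh:
    json.dump(segments, fh, indent=2)
```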

Stage 2: Valuation

Stage 2 is fully deterministic. Same inputs, same outputs, every time. No agents, no improvisation, just math.

The engine reads ten years of SEC filings via EDGAR, market data via yfinance, and the configuration written in Stage 1. It builds a financial-analysis report covering twenty-eight ratios, parses the segment structure, computes the cost of capital using a Hamada-relevered CAPM, and calibrates the stochastic processes that drive the forecast.
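The Hamada step is the textbook one. A minimal sketch, with placeholder inputs rather than the calibrated Apple figures:

```python
# Hamada relevering and WACC: the standard mechanics, with placeholder inputs.
def hamada_relever(beta_unlevered: float, debt: float, equity: float, tax: float) -> float:
    """Relever an asset beta for the company's own capital structure."""
    return beta_unlevered * (1.0 + (1.0 - tax) * debt / equity)

def wacc(beta_l: float, rf: float, erp: float, kd: float,
         debt: float, equity: float, tax: float) -> float:
    ke = rf + beta_l * erp                     # CAPM cost of equity
    v = debt + equity
    return equity / v * ke + debt / v * kd * (1.0 - tax)

beta_l = hamada_relever(beta_unlevered=1.05, debt=100e9, equity=4000e9, tax=0.16)
print(f"relevered beta: {beta_l:.3f}")
print(f"WACC: {wacc(beta_l, rf=0.042, erp=0.05, kd=0.045, debt=100e9, equity=4000e9, tax=0.16):.2%}")
```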

Two valuation paths run in parallel. A seven-year discounted cash flow on free cash flow to the firm, used as a single-path sanity check. And a 10,001-path Monte Carlo simulation built on Geometric Brownian Motion for revenue dynamics and an Ornstein-Uhlenbeck process for cost ratio mean reversion. The Monte Carlo is the primary output. The DCF is the discipline that catches simulation pathologies before they propagate.
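A stripped-down version of that simulation, to show the shape of the loop rather than the calibration: GBM for revenue, an Ornstein-Uhlenbeck pull on a single cost ratio, 10,001 paths over 28 quarters. Every parameter below is a placeholder assumption.

```python
import numpy as np

# Stage 2 in miniature: GBM revenue, OU cost-ratio mean reversion, 10,001 paths.
# All parameters are placeholders, not the calibrated inputs.
rng = np.random.default_rng(1)
N_PATHS, N_QUARTERS, DT = 10_001, 28, 0.25

mu, sigma = 0.05, 0.12                 # annual revenue drift and volatility
kappa, theta, eta = 1.5, 0.56, 0.03    # OU reversion speed, long-run cost ratio, OU vol

revenue = np.full(N_PATHS, 400e9)      # starting annual revenue run-rate
cost_ratio = np.full(N_PATHS, 0.54)    # starting total cost ratio

for _ in range(N_QUARTERS):
    z_rev = rng.standard_normal(N_PATHS)
    z_cost = rng.standard_normal(N_PATHS)
    # Geometric Brownian Motion step for revenue
    revenue *= np.exp((mu - 0.5 * sigma**2) * DT + sigma * np.sqrt(DT) * z_rev)
    # Ornstein-Uhlenbeck step for the cost ratio: mean-reverts toward theta
    cost_ratio += kappa * (theta - cost_ratio) * DT + eta * np.sqrt(DT) * z_cost

cash_flow_proxy = revenue * (1.0 - cost_ratio)   # crude cash-flow proxy at the horizon
p10, p50, p90 = np.percentile(cash_flow_proxy, [10, 50, 90]) / 1e9
print(f"horizon cash-flow proxy, $bn: p10 {p10:.0f} / median {p50:.0f} / p90 {p90:.0f}")
```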

Stress scenarios and tornado sensitivity sit on top. Four predefined stress paths: a severe recession, margin compression, an FX shock, and a working-capital squeeze. The tornado isolates the three assumptions to which the valuation is most sensitive, which is often more useful than the central estimate itself.
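The tornado is one-at-a-time sensitivity: bump each input to its low and high case, hold everything else at base, and rank by the swing in value. A toy version, with an illustrative valuation function and ranges that are not the Apple inputs:

```python
# One-at-a-time sensitivity in the spirit of a tornado chart. The toy valuation
# function, base case, and ranges are illustrative assumptions.
def toy_value(growth: float, margin: float, wacc: float) -> float:
    # crude growing perpetuity on a normalized revenue of 100
    return 100 * margin * (1 + growth) / (wacc - growth)

base = {"growth": 0.03, "margin": 0.26, "wacc": 0.085}
ranges = {"growth": (0.015, 0.045), "margin": (0.22, 0.30), "wacc": (0.075, 0.095)}

swings = []
for name, (lo, hi) in ranges.items():
    v_lo = toy_value(**{**base, name: lo})
    v_hi = toy_value(**{**base, name: hi})
    swings.append((abs(v_hi - v_lo), name, v_lo, v_hi))

# Largest swing first: the bars of the tornado, widest at the top
for swing, name, v_lo, v_hi in sorted(swings, reverse=True):
    print(f"{name:7s}  low {v_lo:7.1f}  high {v_hi:7.1f}  swing {swing:6.1f}")
```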

Outputs include the DCF per share, the Monte Carlo median, the 25th and 75th percentile range, the four stress scenarios, the top three tornado drivers, an Excel workbook with thirty-three sheets, and a full stdout log. The Excel is the audit trail. Every number in the final report can be traced back to a cell.

Stage 3: Investment Report

This is the newest layer. Where Stage 1 builds the configuration and Stage 2 runs the math, Stage 3 turns the output into a narrative that an investor can actually read.

The pipeline runs four phases. Phase 1A and 1B are twin Analysts drafting in parallel, independently, from the same Excel and segment data. Parallel drafting was a deliberate choice. Two analysts working from the same numbers will reach overlapping but not identical conclusions, and the differences are usually where the interesting insight lives.

Phase 2 is a Reviewer who consolidates the two drafts, resolves disagreements, and produces a single structured content file called report_content.md. Phase 3 is the Storyteller, an agent whose job is purely narrative: take the consolidated content and turn it into prose that flows. Phase 4 is the CIO. The CIO either approves, or sends the narrative back to the Storyteller for revision, with a maximum of three cycles before forced approval.
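The control flow of that last phase fits in a few lines. The sketch below stands in for the agent calls with stubs; only the three-cycle cap is taken from the design described here.

```python
# Phase 4 control flow in miniature: approve, or send back, with at most three
# revision cycles before forced approval. The review and revision functions are
# stubs standing in for the actual agent calls.
MAX_CYCLES = 3

def cio_review(draft: str) -> tuple[bool, str]:
    """Stand-in for the CIO agent: returns (approved, feedback)."""
    return "hedged" in draft, "tie the target range back to the stress cases"

def storyteller_revise(draft: str, feedback: str) -> str:
    """Stand-in for the Storyteller agent acting on the feedback."""
    return draft + f" [revised: {feedback}]" + (" hedged" if "stress" in feedback else "")

draft = "Initial narrative from the Storyteller."
for cycle in range(1, MAX_CYCLES + 1):
    approved, feedback = cio_review(draft)
    if approved:
        print(f"approved on cycle {cycle}")
        break
    draft = storyteller_revise(draft, feedback)
else:
    print(f"forced approval after {MAX_CYCLES} cycles")
```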

The CIO’s veto is the most important agent in the pipeline. It is the only one designed to say no.

Once approved, a deterministic builder assembles the final report into three formats: a Word document of roughly twenty-eight pages, a PowerPoint deck of twenty slides, and the underlying Markdown source. A trace file logs every agent’s input, output, and reasoning, in case the report ever needs to be audited.

What it does not do

Three honest limitations are worth naming.

Peer ticker hallucination is real. The Peer Analyst occasionally proposes comparables that look right semantically but do not actually trade in the same market structure. The Reviewer catches most of these, but not all.

The Reviewer can be over-cautious. When management is genuinely right about a forward outlook that breaks from history, the Reviewer’s instinct is to pull the prior back toward the historical CAGR, which is the wrong move in those cases.

The self-healing XBRL loop has not been stress-tested on a genuinely broken filer. Most US large-caps file clean XBRL. The system has not yet seen a small-cap with a chaotic filing history.

These are known issues. They go on the next quarter’s roadmap.

What comes next

Two things, in order.

First, two full valuation reports go out this week as worked examples. Real companies, real filings, the entire methodology stack applied. The output of the system, not a description of it. They will land on this Substack and on LinkedIn.

Then, the next build. A valuation model is only as useful as the questions you point it at. The architecture as it stands today answers “what is this worth?” with rigor. It does not yet answer “which companies should we even be asking about?”

That is the screener. A tool that scans the market, applies a coarse filter on financial health, valuation gap, and earnings quality, and surfaces the candidates worth feeding into the pipeline. Valuation tells you the answer. The screener tells you which questions to ask. Together they close the loop from universe to investment decision.
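In code, the coarse filter is not much more than a boolean mask over a universe table. The column names, thresholds, and sample rows below are all illustrative assumptions, not the screener's actual criteria:

```python
import pandas as pd

# A first pass at the screener logic: coarse filters on financial health,
# economic spread, valuation gap, and earnings quality. Columns, thresholds,
# and rows are illustrative.
universe = pd.DataFrame([
    {"ticker": "AAA", "nd_ebitda": 0.3, "roic_minus_wacc": 0.07, "px_to_value": 1.24, "accruals": 0.04},
    {"ticker": "BBB", "nd_ebitda": 2.8, "roic_minus_wacc": 0.01, "px_to_value": 0.80, "accruals": 0.12},
    {"ticker": "CCC", "nd_ebitda": 1.1, "roic_minus_wacc": 0.05, "px_to_value": 0.78, "accruals": 0.03},
])

candidates = universe[
    (universe["nd_ebitda"] < 2.0)            # financial health
    & (universe["roic_minus_wacc"] > 0.02)   # economic spread
    & (universe["px_to_value"] < 0.90)       # valuation gap vs a coarse model value
    & (universe["accruals"] < 0.08)          # earnings quality
]
print(candidates["ticker"].tolist())          # -> ['CCC']
```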

Which would you like to see first? The two worked examples, or the screener architecture? And while you wait, drop a ticker in the comments. The most requested names will be the next ones the system runs.

The third pillar is in place, the architecture is finished, and the question shifts from how to build it to where to point it.
