April 5, 2026 · J. Hirschfeld
Why I Trust Recipes More Than Models
I develop frozen desserts from volatile compound matrices, thermodynamic models of sugar behavior at subzero temperatures, and a database of 230+ ingredients mapped by their shared aromatic molecules. I built the database myself. It lives at modernist.food, and it exists because I wanted to understand why raspberry and shiso work together before I put them in a sorbet.
I also build decision infrastructure for real estate operators and land developers: deterministic data pipelines, confidence-scored analytics, AI narrative layers. These sound like unrelated disciplines, but they are not. The mechanism underneath both is identical, and understanding why has made me better at each.
Here's the claim: a well-built recipe is a more honest model than almost any analytical framework I've encountered in business. Not because recipes are simple (the good ones aren't), but because recipes operate in a domain where the feedback loop is fast enough to enforce rigor and the knowledge corpus is large enough to enable genuine discovery. Most business models have neither property, and most AI implementations don't either. That's the problem.
What a Recipe Actually Is
A recipe, at its most basic, is a set of instructions that transforms inputs into an output. So is a financial model, and so is a machine learning pipeline. The question is what kind of knowledge the instructions encode, and how that knowledge was validated.
In classical cooking, recipes encode tradition. Somebody's grandmother made it this way; it works; do it again. The knowledge is experiential, the validation is generational, and the failure mode is stagnation. You can reproduce but you can't discover. This is the equivalent of a business running on institutional knowledge and quarterly reviews. It works until the environment changes, and then you're making your grandmother's recipe for a customer who doesn't eat gluten.
In molecular gastronomy, and specifically in the frozen dessert work I do, recipes encode something structurally different. They encode experimentally derived relationships between ingredients at the molecular level. Raspberry and ume plum share volatile aromatic compounds. Shiso bridges both of them through a terpene profile that also connects to limoncello's citral content. I didn't invent these relationships; they exist in the chemistry. What I did was build an infrastructure (the ingredient database, the compound matrices, the pairing maps) that makes them discoverable.
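To make the discovery mechanism concrete, here is a toy sketch of how a compound matrix turns shared chemistry into discoverable pairings. The volatile lists are illustrative placeholders, not the real database contents (the real matrices track hundreds of compounds per ingredient), and the overlap score is one simple choice among many.

```python
# Toy sketch of compound-overlap pairing discovery.
# Compound lists are illustrative placeholders, not real chemistry data.

VOLATILES = {
    "raspberry":  {"beta-ionone", "raspberry ketone", "linalool", "geraniol"},
    "ume":        {"benzaldehyde", "linalool", "gamma-decalactone"},
    "shiso":      {"perillaldehyde", "limonene", "linalool"},
    "limoncello": {"limonene", "citral", "linalool"},
}

def bridges(a: str, b: str) -> set[str]:
    """Compounds shared by both ingredients: the aromatic bridge."""
    return VOLATILES[a] & VOLATILES[b]

def pairing_score(a: str, b: str) -> float:
    """Jaccard overlap of volatile sets: shared / total distinct compounds."""
    sa, sb = VOLATILES[a], VOLATILES[b]
    return len(sa & sb) / len(sa | sb)

# Rank every candidate against raspberry by shared aromatic architecture.
ranked = sorted(
    ((other, pairing_score("raspberry", other))
     for other in VOLATILES if other != "raspberry"),
    key=lambda t: t[1],
    reverse=True,
)
```

The point of the sketch is the shape of the query, not the scoring function: once the relationships are stored structurally, "what bridges shiso and limoncello" is a lookup, not an intuition.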
The difference matters. A traditional recipe says "these ingredients taste good together" and you either trust the source or you don't. A compound-matrix-informed recipe says "these ingredients share molecular architecture at the aromatic level, which predicts perceptual compatibility, which has been validated experimentally across thousands of pairings." The knowledge isn't anecdotal. It's structural. And because it's structural, it enables novel combinations that no human palate would have arrived at through intuition alone, but that a human palate still has to evaluate.
That last clause is the whole argument. I'll come back to it.
The Feedback Loop Problem
When I develop a new frozen dessert, say, a smoked peach whisky sorbet, the development cycle looks like this: I consult the compound matrices to identify aromatic bridges between smoked stone fruit, barrel-aged whisky, and whatever complementary flavor I'm testing. I model the sugar architecture (the ratio of sucrose to dextrose to trehalose that will produce the right freezing point depression, the right texture at serving temperature, the right resistance to ice crystal formation over storage). I make a batch. I taste it. The whole cycle takes hours.
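For the sugar architecture step, the core arithmetic looks something like this. The PAC (anti-freezing power) and POD (relative sweetness) figures below are the commonly cited sucrose-relative approximations from frozen dessert formulation; treat the exact numbers as illustrative, not lab-grade constants.

```python
# Sketch of a sugar-architecture calculation in the style of frozen
# dessert formulation math. PAC (anti-freezing power) and POD (relative
# sweetness) use sucrose = 100 as the baseline; values are approximate.

SUGARS = {
    #            PAC   POD
    "sucrose":   (100, 100),
    "dextrose":  (190,  75),
    "trehalose": (100,  45),
}

def blend_profile(grams: dict[str, float]) -> tuple[float, float]:
    """Return (total PAC, total POD) for a sugar blend, in sucrose-equivalents.

    Raising PAC while holding POD down (e.g. swapping some sucrose for a
    dextrose/trehalose mix) depresses the freezing point further, giving a
    softer sorbet at serving temperature without making it sweeter.
    """
    pac = sum(g * SUGARS[s][0] / 100 for s, g in grams.items())
    pod = sum(g * SUGARS[s][1] / 100 for s, g in grams.items())
    return pac, pod

# All-sucrose baseline vs. a mixed architecture at the same total sugar mass.
baseline = blend_profile({"sucrose": 180.0})
mixed    = blend_profile({"sucrose": 100.0, "dextrose": 50.0, "trehalose": 30.0})
```

Same mass of sugar, different architecture: the mixed blend carries more anti-freezing power and less sweetness than the baseline, which is exactly the lever being tuned.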
If the sorbet is grainy, I know the sugar ratio is wrong. If the smoke note disappears at low temperature, I know the volatile compounds are too light and I need to reinforce with a more thermally stable aromatic. If the whisky is harsh, I know the alcohol content is suppressing the freezing point beyond what the sugar architecture can compensate for. The feedback is immediate, specific, and structurally informative. It tells me not just that something is wrong but what mechanism produced the failure.
Compare this to a land acquisition model. An operator underwrites a deal, commits capital, builds or acquires, and discovers 18 to 36 months later whether the model was right. If it was wrong, the feedback is diffuse. Was it the absorption assumption? The competitive entry? The rate environment? The construction cost escalation? Usually it's some combination, and untangling the individual contributions is genuinely hard. The model was a point estimate that either hit or missed, and the miss doesn't cleanly decompose into its causes.
In trading, I got spoiled by fast feedback. A strategy either makes money today or it doesn't, and if it doesn't, the loss attribution is specific enough to diagnose. That discipline of building, testing, tasting, and adjusting is the same loop whether I'm tuning a pairs trade or a grapefruit sorbet. Real estate operators don't have that luxury. Their feedback loops are measured in years, and their models are structured as if the feedback loop doesn't matter.
Simply put: the length of your feedback loop should determine the structure of your model. Short loops can tolerate point estimates because you'll find out fast if you're wrong. Long loops demand explicit uncertainty characterization because by the time you find out you're wrong, the capital is deployed and the optionality is gone. Most operators build short-loop models for long-loop decisions, and that's the structural error.
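A minimal sketch of the structural difference, with invented numbers throughout: the same toy deal expressed once as a point estimate (the short-loop habit) and once as a characterized distribution (what a long loop demands).

```python
# Sketch: one toy deal modeled as a point estimate vs. a characterized
# distribution. The return function and all inputs are invented.
import random

random.seed(7)

def deal_return(absorption, exit_cap, cost_overrun):
    """Hypothetical return-on-cost function; stands in for an underwriting model."""
    noi = 1_000_000 * absorption           # stabilized income scales with absorption
    value = noi / exit_cap                 # exit valuation at the assumed cap rate
    cost = 12_000_000 * (1 + cost_overrun) # budget plus overrun
    return (value - cost) / cost

# Short-loop style: a single base-case number.
point = deal_return(absorption=0.92, exit_cap=0.055, cost_overrun=0.05)

# Long-loop style: sample the input uncertainty and keep the whole output space.
samples = sorted(
    deal_return(
        absorption=random.gauss(0.92, 0.06),
        exit_cap=random.gauss(0.055, 0.004),
        cost_overrun=random.gauss(0.05, 0.04),
    )
    for _ in range(10_000)
)
p10, p50, p90 = (samples[int(q * len(samples))] for q in (0.10, 0.50, 0.90))
downside_odds = sum(r < 0 for r in samples) / len(samples)
```

The point estimate and the median can agree completely while the p10 and the downside odds tell you whether the deal survives being wrong. Only the second form carries that information.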
What This Has to Do with AI
The current wave of AI adoption in business has a recipe problem, and it's the grandmother problem, not the molecular gastronomy problem.
Here's what I mean. The dominant implementation pattern right now is what I call "prompt-and-pray": take your existing documents, your existing data, your existing processes, feed them to a language model, and get a summary back. Maybe it drafts your memos. Maybe it answers questions about your internal wiki. Maybe it generates reports that look like the reports your analysts used to write, except faster.
This is the grandmother's recipe. The knowledge being encoded is tradition: this is how we've always done it, now do it faster. The AI is a regurgitation engine that reproduces existing patterns at lower cost. Like the grandmother's recipe, it works fine as long as the environment doesn't change. The moment the environment shifts (a new competitor, a regulatory change, a market dislocation), the AI is reproducing patterns that were validated under conditions that no longer apply, and it's doing so with the fluency and confidence that make the output look authoritative even when it's stale.
What I build is the other thing. The molecular gastronomy version.
A deterministic data pipeline is the compound matrix. It doesn't contain opinions or narratives; it contains structured, validated relationships between inputs. Permit status and construction timelines. Demographic feeds and absorption rates. Cost indices and competitive supply. The relationships are real; they exist in the data the way aromatic compounds exist in the chemistry. The pipeline makes them discoverable and keeps them current.
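A minimal sketch of what "deterministic" means in practice, with invented field names and rules: every record passes an explicit validation gate before it enters the corpus, and its provenance travels with it. Same inputs, same corpus, every time.

```python
# Sketch of a deterministic ingestion gate. Field names, statuses, and
# freshness rules are invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class PermitRecord:
    parcel_id: str
    status: str      # e.g. "submitted", "approved", "expired"
    as_of: date      # when the source last asserted this fact
    source: str      # provenance travels with the data

VALID_STATUSES = {"submitted", "under_review", "approved", "expired"}

def validate(rec: PermitRecord, today: date, max_age_days: int = 90) -> list[str]:
    """Return a list of rule violations; an empty list means admissible."""
    problems = []
    if rec.status not in VALID_STATUSES:
        problems.append(f"unknown status: {rec.status!r}")
    if (today - rec.as_of).days > max_age_days:
        problems.append("stale: older than freshness window")
    if not rec.source:
        problems.append("missing provenance")
    return problems

def ingest(records, today):
    """Deterministic gate: no opinions, no narratives, just admissible facts."""
    return [r for r in records if not validate(r, today)]
```

Nothing in this layer summarizes or interprets; it only decides what the rest of the system is allowed to treat as true, and why.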
A confidence-scored analytics layer is the sugar architecture model. It doesn't produce a single answer; it produces a characterized output space that explicitly accounts for what it knows and what it doesn't. Just as I model the freezing point depression curve across different sugar ratios to understand how the sorbet will behave at different temperatures, the analytics layer models the return profile across different demand scenarios to understand how the investment will behave under different conditions. The output is a surface, not a point.
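A sketch of the surface-not-a-point idea, again with invented numbers: sweep a toy response function across a scenario grid and deliver the whole grid, not the one cell the base case happens to land on.

```python
# Sketch of an output surface: toy return-on-cost across demand and rate
# scenarios instead of a single base case. All numbers are invented.

def return_on_cost(demand: float, rate: float) -> float:
    """Hypothetical response function; stands in for the analytics layer."""
    noi = 900_000 * demand
    value = noi / rate
    cost = 13_000_000
    return (value - cost) / cost

demand_grid = [0.80, 0.90, 1.00, 1.10]
rate_grid = [0.050, 0.055, 0.060, 0.065]

surface = {
    (d, r): round(return_on_cost(d, r), 3)
    for d in demand_grid
    for r in rate_grid
}

# The deliverable is the whole surface: where the return stays positive,
# where it flips negative, and how steeply it degrades between the two.
worst = min(surface.values())
best = max(surface.values())
```

On this toy grid the return spans from negative to comfortably positive; knowing where that boundary sits is the confidence characterization, and it's invisible in a point estimate.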
The LLM narrative layer, the part that actually uses a language model, is the tasting. It sits on top of the deterministic layers and describes what they produce. It synthesizes the quantitative output into language that a committee or a board can engage with, but it doesn't generate the analysis. It doesn't freelance. It narrates a reality that the pipeline and the analytics layer have already established.
The LLM, in this architecture, is the human palate. It evaluates, communicates, and translates structured output into a form that humans can act on. What it does not do is substitute for the corpus of validated knowledge underneath it. You wouldn't ask a palate to invent flavor chemistry from scratch, and you shouldn't ask a language model to invent analytics from scratch either.
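A sketch of that contract, with illustrative field names and the model call stubbed out: the language model receives only the validated output of the analytics layer, plus an instruction to narrate and nothing else.

```python
# Sketch of the narrative layer's contract. The prompt shape and field
# names are illustrative; the actual model call is stubbed out, since any
# provider could sit behind it.
import json

def build_narrative_prompt(analytics_output: dict) -> str:
    """Everything quantitative arrives pre-computed; the model only narrates."""
    return (
        "You are the narrative layer of a decision platform.\n"
        "Describe the analysis below for an investment committee.\n"
        "Do not compute, extrapolate, or add numbers not present here.\n\n"
        + json.dumps(analytics_output, indent=2)
    )

analytics_output = {
    "deal": "hypothetical-parcel-14",
    "return_p10": -0.04,
    "return_p50": 0.18,
    "return_p90": 0.35,
    "downside_probability": 0.12,
    "confidence": "moderate (two inputs past freshness window)",
}

prompt = build_narrative_prompt(analytics_output)
# narrative = llm.complete(prompt)  # stubbed: the call site, not the analysis
```

The constraint is architectural, not aspirational: the model never sees raw data, so there is nothing for it to freelance about.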
The Corpus Is the Point
This is where the metaphor stops being a metaphor and becomes a structural argument.
I built modernist.food because I wanted a corpus: a structured, queryable body of experimentally validated knowledge about how ingredients interact at the molecular level. Not a collection of recipes or a database of "things that taste good," but a relational map of validated chemical interactions that enables novel construction. The corpus is what allows me to create a raspberry-ume-shiso-limoncello sorbet that has never existed before and have reasonable confidence it will work, because the aromatic bridges are real, the thermodynamic behavior is modeled, and the experimental validation of the underlying pairings is extensive even though this specific combination is new.
The parallel to what I build for operators is exact. A decision infrastructure platform is a corpus: a structured, continuously updating body of validated data about how a market, a portfolio, or an operation actually behaves. Not a collection of reports or a dashboard of metrics someone chose because they were available, but a relational model of validated operational relationships that enables novel analysis. The corpus is what allows an operator to evaluate a deal they've never seen before, in a market configuration that hasn't existed before, and have characterized confidence in the outcome, because the data relationships are real, the uncertainty is modeled, and the validation of the underlying inputs is continuous.
In both cases, the corpus is the point. The recipe is an output. The model is an output. The AI-generated narrative is an output. The value is in the infrastructure that makes the output possible and the discipline that keeps the infrastructure honest.
This is what most AI implementations get backwards. They start with the output ("we want AI-generated reports") and work backward to whatever data happens to be available. That's like starting with "I want a sorbet" and grabbing whatever's in the fridge. You might get something edible, but you will not get something that leverages the full possibility space of what the ingredients can do together. And you definitely won't get something that improves over time, because there's no structured corpus accumulating validated knowledge about what works and why.
The Taste Test
I said I'd come back to this: molecular gastronomy enables novel combinations that no human palate would have arrived at through intuition alone, but that a human palate still has to evaluate.
This is my position on AI in decision-making, stated as precisely as I can state it: AI systems should expand the possibility space that human judgment operates on. They should surface combinations, patterns, and relationships that humans wouldn't find on their own, not because humans aren't smart enough, but because the corpus is too large and the interactions are too complex for unaided cognition to traverse. But the human still tastes the sorbet. The human still makes the investment decision. The human still evaluates whether the output of the system is good, applicable, and worth acting on.
The organizations that will get this right are the ones that invest in the corpus (the structured, validated, continuously updating knowledge infrastructure) and use AI to traverse it. The ones that will get it wrong are the ones that skip the corpus and ask the AI to be the corpus. That's the grandmother's recipe with better handwriting. It's faster, but it's not smarter.
To my eye, the difference between these two approaches is the difference between an operator who understands their own kitchen and one who's ordering delivery and calling it cooking.