#causalinference

P2: #causalinference #causality #inference #statistic #observability #llm #reasoning

```text
| Data types | Requires model integration | Handles via prompting in one model |
```
**LLM Limitations:** LLMs rely on pattern matching rather than explicit causal modeling.
- No explicit causal graphs or mechanisms—only patterns and correlations.
- No modular separation; reasoning functions are entangled.
- Risk of hallucinated causal links; unreliable for interventions.
- Formal counterfactuals need extensive external scaffolding.

**Fields:**
- **Healthcare:** Predict treatment outcomes (reasoner), explain intervention effects (explainer), recommend actions (producer).
- **Economics/Policy:** Assess impacts, clarify causal pathways, propose policies.
- **Recommendation Systems:** Infer preferences, explain choices, personalize outputs.

Text of original post: try-codeberg.github.io/static/

P2: P1: #causalinference #causality #inference #statistic #observability #llm #reasoning
- Not suited to applications needing rigorous, transparent causality.
- Effective for quick prototyping or low-risk tasks where simulated causal logic suffices.

```text
|              | **Causal Inference Neural Networks** | **Prompt-Engineered Multimodal LLM**      |
|--------------+--------------------------------------+-------------------------------------------|
| Causality    | Explicit, modeled, testable          | Pattern-based, plausible but implicit     |
| Reliability  | High (given good data/model)         | Medium, can produce errors/hallucinations |
| Transparency | Modular, explainable                 | Opaque, explanation quality varies        |
| Scalability  | Harder (custom per domain/signal)    | Easier (generalizable across domains)     |
```

P1: P1: #causalinference #causality #inference #statistic #observability #llm #reasoning
Topic: Causal LLM or splitting the LLM
Causal Cooperative Networks (CCNets) - Causal Learning Framework - Reasoner, Explainer, Producer.

Causal inference finds causes by showing they covary with effects, occur beforehand, and by ruling out alternatives.

LLMs use pattern matching, not explicit causal models or separate reasoning modules.
- Insufficient for regulated or high-stakes domains.

Registration is still possible for the GMDS ACADEMY 2025 (Hannover, October 20-23).
There will be three parallel workshops on meta analysis, causal inference and time-to-event analysis involving Wolfgang Viechtbauer (@wviechtb), Christian Röver, Sebastian Weber, Vanessa Didelez, Arthur Allignol, Oliver Kuß, Alexandra Strobel, Hannes Buchner, Xiaofei Liu and Ann-Kathrin Ozga.
See here for more details:
👉 gmds.de/fileadmin/user_upload/

My Road to Bayesian Stats

By 2015, I had heard of Bayesian Stats but didn’t bother to go deeper into it. After all, significance stars and p-values worked fine. I started to explore Bayesian Statistics when considering small sample sizes in biological experiments. How much can you say when you are comparing means of 6 or even 60 observations? This is the nature of work at the edge of knowledge. Not knowing what to expect is normal. Multiple possible routes to an observed result are normal. Not knowing how to pick the route to the observed result is also normal. Yet our statistics fail to capture this reality and the associated uncertainties. There must be a way, I thought.

Free Curve to the Point: Accompanying Sound of Geometric Curves (1925) print in high resolution by Wassily Kandinsky. Original from The MET Museum. Digitally enhanced by rawpixel.

I started by searching for ways to overcome small sample sizes. There are minimum sample sizes recommended for t-tests; thirty is an often-quoted number, with qualifiers. Bayesian stats has no minimum sample size. This had me intrigued. Surely, this can’t be a thing. But it is. Bayesian stats creates a mathematical model using your observations and then samples from that model to make comparisons. If you have any exposure to AI, you can think of this a bit like training an AI model. Of course, the more data you have, the better the model can be. But even with a little data we can make progress.
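A minimal sketch of that idea, assuming PyMC and NumPy are available (the group sizes, priors, and variable names here are illustrative, not from the original post):

```python
# Build a model from a handful of observations per group, then sample from it.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
control = rng.normal(10.0, 2.0, size=6)  # stand-ins for six real measurements
treated = rng.normal(12.0, 2.0, size=6)

with pm.Model() as model:
    # Wide priors; with so little data they keep the model honest about uncertainty.
    mu_control = pm.Normal("mu_control", mu=10, sigma=10)
    mu_treated = pm.Normal("mu_treated", mu=10, sigma=10)
    sigma = pm.HalfNormal("sigma", sigma=5)

    pm.Normal("obs_control", mu=mu_control, sigma=sigma, observed=control)
    pm.Normal("obs_treated", mu=mu_treated, sigma=sigma, observed=treated)

    # The quantity we actually care about: the difference between group means.
    pm.Deterministic("difference", mu_treated - mu_control)
    idata = pm.sample(2000, tune=1000, chains=4, random_seed=1)
```

Sampling works the same whether each group has 6 observations or 600; only the width of the resulting posterior changes.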

How do you say: there is something happening and it’s interesting, but we are only x% sure? Frequentist stats have no way through. All I knew was to apply the t-test, and if there are “***” in the plot, I’m golden. That isn’t accurate though. Low p-values indicate the strength of evidence against the null hypothesis. Let’s take a minute to unpack that. The null hypothesis is that nothing is happening: if you have a control set and apply a treatment to the other set, the null hypothesis says there is no difference between them. A low p-value says that the observed data would be unlikely if the null hypothesis were true. But that does not imply that the alternative hypothesis is true. What’s worse, there is no way for us to say that the control and treatment have no difference; we can’t accept the null hypothesis using p-values either.

Guess what? Bayesian stats can do all those things. It can measure differences, accept or reject both the null and the alternative hypothesis, and even communicate how uncertain we are (more on this later). All without imposing rigid distributional assumptions, like normality, on our data.
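Continuing the sketch above (it assumes the `idata` object from the previous block; ArviZ and the 0.5-unit "practically zero" band are my own illustrative choices):

```python
import numpy as np
import arviz as az

posterior_diff = idata.posterior["difference"].values.ravel()

# Point estimate and credible interval for the difference between groups.
print(az.summary(idata, var_names=["difference"]))

# "We are only x% sure": the share of posterior mass where treated > control.
print("P(treated > control) =", (posterior_diff > 0).mean())

# A ROPE-style check: if most of the posterior sits inside a band we consider
# practically zero (here +/- 0.5 units), we can accept "no meaningful difference".
print("P(|difference| < 0.5) =", (np.abs(posterior_diff) < 0.5).mean())
```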

It’s often overlooked, but frequentist tests also require the data to have certain properties, like normality and equal variance. Biological processes show complex behavior, and unless you have checked, assuming normality and equal variance is perilous. The danger only grows with small sample sizes. Bayes, by contrast, lets you model the data as they are: whatever shape the distribution has, so-called outliers and all, it all goes into the model. Small sample sets do produce weaker fits, but this is kept transparent.

Transparency is one of the key strengths of Bayesian stats. It requires you to work a little harder on two fronts, though. First, you have to think about your data generating process (DGP): how the data points you observe came to be. As we said, the process is often unknown; we have, at best, some guesses about how it could happen. Thankfully, we have a nice way to represent this. DAGs, directed acyclic graphs, are a fancy name for a simple diagram showing what affects what. Most of the time we are trying to discover the DAG, i.e. the pathway behind a biological outcome. Even if you don’t do Bayesian stats, using DAGs to lay out your thoughts is a great habit. In Bayesian stats, the DAG can be used to test whether your model fits the data we observe: if the DAG captures the data generating process, the fit is good; if it doesn’t, the fit is poor.
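A hedged sketch of writing such a guess down as a DAG, here with networkx (the node names are invented for illustration):

```python
import networkx as nx

# Hypothesized data generating process: treatment changes a protein level,
# the protein level drives cell growth, and batch effects touch both.
dgp = nx.DiGraph()
dgp.add_edges_from([
    ("treatment", "protein_level"),
    ("protein_level", "cell_growth"),
    ("batch", "protein_level"),
    ("batch", "cell_growth"),
])

assert nx.is_directed_acyclic_graph(dgp)          # the "acyclic" in DAG
print(sorted(nx.ancestors(dgp, "cell_growth")))   # everything upstream of the outcome
```

Laying the graph out like this is exactly the "stating your assumptions" step: anyone reading it can point at an arrow and disagree.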

The other hard bit is doing the analysis and communicating the results. Bayesian stats forces you to be explicit about the assumptions in your model. This part is almost magicked away in t-tests: frequentist stats also assumes a model that your data are supposed to follow, but it all happens so quickly that there isn’t even a second to think about it. You put in your data, click t-test, and woosh! You see stars. In Bayesian stats, stating the assumptions you make in your model (using DAGs and hypotheses about the DGP) communicates to the world what you think this phenomenon is and why it occurs.

Discovering causality is the whole reason for doing science. Knowing the causality allows us to intervene in the form of treatments and drugs. But if my tools don’t allow me to be transparent and, worse, if they block people from correcting me, why bother?

Richard McElreath says it best:

There is no method for making causal models other than science. There is no method to science other than honest anarchy.

Beyond the Dataset

On the recent season of the show Clarkson’s Farm, J.C. goes to great lengths to buy the right pub. As with any sensible buyer, the team does a thorough tear-down followed by a big build-up before the place opens for business. They survey how the place is built, located, and accessed. In the refresh they ensure that each part of the pub is built with purpose, even the tractor on the ceiling. The art is in answering the question: how was this place put together?

A data scientist should be equally fussy. Until we trace how every number was collected, corrected, and cleaned—who measured it, what tool warped it, what assumptions skewed it—we can’t trust the next step in our business to flourish.

Old sound (1925) painting in high resolution by Paul Klee. Original from the Kunstmuseum Basel Museum. Digitally enhanced by rawpixel.

Two load-bearing pillars

While there are many flavors of data science, I’m concerned with the analysis done in scientific spheres and startups. In this world, the structure is held up by two pillars:

  1. How we measure — the trip from reality to raw numbers. Feature extraction.
  2. How we compare — the rules that let those numbers answer a question. Statistics and causality.

Both of these relate to having a deep understanding of the data generation process, each from a different angle. A crack in either pillar and whatever sits on top crumbles: plots, significance, and AI predictions mean nothing.

How we measure

A misaligned microscope is the digital equivalent of crooked lumber. No amount of massaging can birth a photon that never hit the sensor. In fluorescence imaging, the point-spread function tells you how a pin-point of light smears across neighboring pixels; noise reminds you that light arrives, and is recorded, with at least some randomness. Misjudge either and the cell you call “twice as bright” may be a mirage.
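A small simulation of that measurement story, assuming NumPy and SciPy (the Gaussian PSF width and photon counts are illustrative stand-ins):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# "Reality": a single bright point source in an otherwise dark field.
truth = np.zeros((64, 64))
truth[32, 32] = 1000.0

# The point-spread function smears that point across neighboring pixels...
blurred = gaussian_filter(truth, sigma=2.0)

# ...and the recorded image adds photon (shot) noise on top.
measured = rng.poisson(blurred)

print(truth.max(), blurred.max(), measured.max())
# The peak pixel value drops and fluctuates; "twice as bright" claims need to
# account for both effects before they mean anything.
```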

In this data generation process, the instrument’s nuances control what you see. Understanding them lets us judge what kind of post-processing is right and which kind may destroy or invent data. For simpler analyses the post-processing can stop at cleaner raw data. For developing AI models, the process extends to labeling and analyzing data distributions. Andrew Ng’s data-centric AI approach insists that tightening labels, fixing sensor drift, and writing clear provenance notes often beat fancier models.

How we compare

Now suppose Clarkson were to test a new fertilizer, fresh goat pellets, only on sunny plots. Any bumper harvest that follows says more about sunshine than about the pellets. Sound comparisons begin long before the data arrive: a deep understanding of the science behind the experiment is critical before doing any statistics. Wrong randomization, missing controls, and lurking confounders eat away at the foundation of any statistical claim.
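A toy simulation of that fertilizer scenario (all numbers invented) shows how the confounder manufactures an effect out of nothing:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

sunny = rng.binomial(1, 0.5, size=n)   # half the plots are sunny
fertilized = sunny                      # pellets applied only to sunny plots
# True pellet effect is zero; sunshine alone adds 3 units of yield.
yield_ = 5 + 3 * sunny + 0 * fertilized + rng.normal(0, 1, size=n)

naive = yield_[fertilized == 1].mean() - yield_[fertilized == 0].mean()
print(f"naive 'pellet effect': {naive:.2f}")   # ~3.0, entirely due to sunshine
```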

This information is not in the data. Only understanding how the experiment was designed, and which events preclude others, lets us build a model of the world of the experiment. Taking this lightly carries large risks for startups with limited budgets and smaller experiments: a false positive leads to wasted resources, while a false negative carries opportunity costs.

The stakes climb quickly. Early in the COVID-19 pandemic, some regions bragged of lower death rates. Age, testing access, and hospital load varied wildly, yet headlines crowned local policies as miracle cures. When later studies re-leveled the footing, the miracles vanished. 

Why the pillars get skipped

Speed, habit, and misplaced trust. Leo Breiman warned in 2001 that many analysts chase algorithmic accuracy and skip the question of how the data were generated, what he called the “two cultures.” Today’s tooling tempts us even more: auto-charts, one-click models, pretrained everything. They save time—until they cost us the answer.

The other issue is the lack of a culture that communicates and shares a common language. Only in academic training is it possible to train a single person to understand the science, the instrumentation, and the statistics well enough for their research to be taken seriously. Even then we prefer peer review. There is no such scope in startups: tasks and expertise must be split. It falls to the data scientist to ensure clarity and to collect information horizontally, and it is the job of leadership to enable this or accept dumb risks.

Opening day

Clarkson’s pub opening was a monumental task, with a thousand details tracked and tackled by an army of experts. Follow the journey from phenomenon to file, guard the twin pillars of measuring and comparing, and reinforce them with careful curation and an open culture. Do that, and your analysis leaves room for the most important thing: inquiry.

My PR to the #EconML #PyWhy #opensource #causalai project was merged! 🎉 I made a small contribution by allowing a flexible choice of evaluation metric for scoring both the first stage and final stage models in Double Machine Learning (#DML). Before, only the mean square error (MSE) was implemented. But as an ML practitioner "in the trenches" I have found that MSE is hard to interpret and compare across models. My new functions allow that 🙂 #CausalInference #machinelearning #datascience

Registration is open for the GMDS ACADEMY 2025 (Hannover, October 20-23).
There will be three parallel workshops on meta analysis, causal inference and time-to-event analysis involving Wolfgang Viechtbauer (@wviechtb), Christian Röver, Sebastian Weber, Vanessa Didelez, Arthur Allignol, Oliver Kuß, Alexandra Strobel, Hannes Buchner, Xiaofei Liu and Ann-Kathrin Ozga.
See here for more details:
👉 gmds.de/fileadmin/user_upload/

#statstab #307 The C-word, the P-word, and realism in epidemiology

Thoughts: A comment on #306. Causal inference in observational research is a confusing matter. Read both.

#causalinference #observational #research #commentary

link.springer.com/article/10.1

SpringerLink: The C-word, the P-word, and realism in epidemiology (Synthese). This paper considers an important recent (May 2018) contribution by Miguel Hernán to the ongoing debate about causal inference in epidemiology. Hernán rejects the idea that there is an in-principle epistemic distinction between the results of randomized controlled trials and observational studies: both produce associations which we may be more or less confident interpreting as causal. However, Hernán maintains that trials have a semantic advantage. Observational studies that seek to estimate causal effect risk issuing meaningless statements instead. The POA proposes a solution to this problem: improved restrictions on the meaningful use of causal language, in particular “causal effect”. This paper argues that new restrictions in fact fail their own standards of meaningfulness. The paper portrays the desire for a restrictive definition of causal language as positivistic, and argues that contemporary epidemiology should be more realistic in its approach to causation. In a realist context, restrictions on meaningfulness based on precision of definition are neither helpful nor necessary. Hernán’s favoured approach to causal language is saved from meaninglessness, along with the approaches he rejects.

Covid-19 Pandemic as a Natural Experiment: The Case of Home Advantage in Sports

journals.sagepub.com/doi/10.11

"The COVID-19 pandemic, with its unparalleled disruptions, offers a unique opportunity to isolate causal effects and test previously impossible hypotheses. Here, we examine the home advantage (HA) in sports—a phenomenon in which teams generally perform better in front of their home fans—and how the pandemic-induced absence of fans offered... natural experiment. "