Let me describe a scene that will probably feel familiar.
You've got a NICE submission coming up. You need clinical expert input on a key parameter — let's say a long-term survival assumption that your model is going to be extremely sensitive to. So you get a few KOLs on a call, you ask them what they think, you write down some numbers, and then you do your best to reconcile them into something defensible.
Maybe you average them out. Maybe you go with whoever sounded most confident. And then you put a footnote in the model that says "based on clinical expert opinion" and quietly hope the EAG doesn't dig too deep.
I've done this. Most of us have.
The problems with this approach aren't a secret. Anchoring bias runs rampant in group discussions. You're not capturing uncertainty in any meaningful way. And the audit trail wouldn't survive serious scrutiny. But knowing the problems and having a practical way to fix them are two different things entirely.
This week's video is about bridging that gap.
Here's the short version of what good practice actually looks like, according to the NICE manual and the MRC protocol it references.
Experts should form their judgements independently before any group discussion — otherwise you're measuring who's most confident in the room, not what the evidence supports. They should express uncertainty as a probability distribution, not just a point estimate. You need a principled way to aggregate across multiple experts. And the whole process needs to be documented clearly enough that you can defend every step in a technical engagement.
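To make the "distributions plus aggregation" part concrete, here is a rough sketch of what it can look like in code. This is purely illustrative and not taken from the NICE manual, the MRC protocol, or SEE: the parameter (a 5-year survival probability), the elicited quantiles, and the choice of equal-weight linear opinion pooling are all my own assumptions for the example.

```python
# Illustrative sketch only: fit each expert's elicited quantiles to a beta
# distribution, then combine experts with equal-weight linear opinion pooling.
# The parameter (5-year survival probability) and quantile values are made up.
import numpy as np
from scipy import stats, optimize

# Each expert gives a 10th, 50th and 90th percentile for 5-year survival.
elicited = {
    "expert_A": {0.10: 0.15, 0.50: 0.25, 0.90: 0.40},
    "expert_B": {0.10: 0.20, 0.50: 0.35, 0.90: 0.55},
    "expert_C": {0.10: 0.10, 0.50: 0.20, 0.90: 0.30},
}

def fit_beta(quantiles):
    """Find beta(a, b) whose quantiles best match the elicited ones."""
    probs = np.array(list(quantiles.keys()))
    values = np.array(list(quantiles.values()))

    def loss(params):
        a, b = params
        return np.sum((stats.beta.ppf(probs, a, b) - values) ** 2)

    res = optimize.minimize(loss, x0=[2.0, 2.0],
                            bounds=[(0.01, None), (0.01, None)])
    return res.x

fits = {name: fit_beta(q) for name, q in elicited.items()}

# Equal-weight linear pool: sample from each expert's fitted distribution
# and mix the samples, so between-expert disagreement is preserved.
rng = np.random.default_rng(42)
n = 10_000
pooled = np.concatenate([
    stats.beta.rvs(a, b, size=n // len(fits), random_state=rng)
    for a, b in fits.values()
])

print("Pooled median: %.3f" % np.median(pooled))
print("Pooled 90%% interval: %.3f to %.3f"
      % tuple(np.percentile(pooled, [5, 95])))
```

The pooled samples can then feed straight into a probabilistic sensitivity analysis, which is the point of eliciting distributions rather than point estimates in the first place.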
The platform I cover in the video is SEE from Dark Peak Analytics — built from the ground up for the HEOR and HTA context, rather than adapted from something more generic. The feature I find most compelling is the comparative validation step: it plots each expert's long-term survival estimates side by side and catches logical inconsistencies before they make it into your model. An 8-year PFS estimate sitting higher than a 4-year one is the kind of thing that can easily slip through a manual process. Here, the expert sees it immediately.
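The logic behind that kind of consistency check is simple enough to sketch. To be clear, this is not SEE's code, just a toy illustration with made-up numbers of the rule being enforced: survival probabilities cannot increase over time.

```python
# Illustrative sketch (not SEE's implementation): flag logically inconsistent
# survival estimates before they reach the model. Survival must not increase
# over time, so an 8-year PFS above a 4-year PFS gets flagged.
# All estimates below are hypothetical.
expert_estimates = {
    "expert_A": {1: 0.70, 2: 0.55, 4: 0.35, 8: 0.40},  # inconsistent at 8 years
    "expert_B": {1: 0.65, 2: 0.50, 4: 0.30, 8: 0.15},  # consistent
}

def check_monotone(estimates):
    """Return (earlier_year, later_year) pairs where survival increases."""
    issues = []
    years = sorted(estimates)
    for earlier, later in zip(years, years[1:]):
        if estimates[later] > estimates[earlier]:
            issues.append((earlier, later))
    return issues

for expert, est in expert_estimates.items():
    problems = check_monotone(est)
    if problems:
        print(f"{expert}: survival increases between years {problems}")
```

The value of having this built into the elicitation workflow is that the expert gets the flag while they are still answering, not weeks later when the modeller spots it.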
If you want to check it out, you can sign up here:
I've also negotiated a 50% discount on your first survey with the code MIRKO50, and if you're in academia, reach out to the Dark Peak Analytics team directly as there are separate arrangements available.
The question to ponder is this: would your current expert elicitation process hold up under EAG scrutiny? For most teams, the answer is probably no — not because people aren't trying, but because the tools to do it properly haven't been accessible until recently.
That's changing.
We've been defending weak processes for years because there was no better option. That's a harder argument to make now…

