Ian Marschner Profile
Ian Marschner

@IanMarschner

Followers: 156 · Following: 71 · Statuses: 62

Biostatistics Professor at @Sydney_Uni and @TrialsCentre

Joined September 2017
@IanMarschner
Ian Marschner
6 months
Non-concurrent controls occur in adaptive platform trials. Our paper gives a new more rigorous definition based on randomization cohort rather than time. Cohort adjustment and implications for databases & software are discussed. @austrim_cre @TrialsCentre
0
3
14
@IanMarschner
Ian Marschner
7 months
@JAMAOnc Treatments that improve survival provide greater opportunity for adverse events to occur. See the Limitations Section: "improved survival with ARSIs could manifest as a higher captured incidence of CV events. It is impossible to account for this given lack of time to event data"
0
0
2
@IanMarschner
Ian Marschner
9 months
Recent @TheLancet trial uses confidence distribution to conclude “the confidence that the risk ratio is lower than 1 is 97.2%”. Full confidence distribution plot in supplementary material. Excellent way to present trial results as described in Reference 24
1
3
15
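The "confidence that the risk ratio is lower than 1" quoted above is the confidence distribution evaluated at RR = 1. A minimal sketch of that calculation, using a normal approximation on the log scale — the estimate and standard error below are invented for illustration, not the trial's actual values:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def confidence_rr_below(threshold, log_rr_hat, se):
    """Confidence-distribution mass below a risk-ratio threshold,
    treating the log risk ratio estimate as approximately normal."""
    return normal_cdf((math.log(threshold) - log_rr_hat) / se)

# Invented illustrative values (assumptions, not from the trial):
log_rr_hat = math.log(0.80)   # estimated risk ratio of 0.80
se = 0.117                    # standard error of the log risk ratio
conf_below_1 = confidence_rr_below(1.0, log_rr_hat, se)
print(f"Confidence that RR < 1: {conf_below_1:.1%}")
```

Plotting `confidence_rr_below` over a grid of thresholds gives the full confidence distribution curve of the kind the supplementary material presents.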
@IanMarschner
Ian Marschner
10 months
@KertViele @syctong @GuyattGH I agree that small concurrently randomised cohorts are a challenge for reporting. I would interpret this as an argument for avoiding design features that produce small cohorts, e.g. frequent interim analyses that lead to frequent design adaptations
0
0
1
@IanMarschner
Ian Marschner
11 months
@f2harrell @vandy_biostat Not always. If design choice reflects prior belief then Bayesian inference is affected by multiplicity. As Kass & Wasserstein said (JASA 1996 p.1359): “It could be argued that choice of design is informative and so the prior should depend on the design”
1
0
0
@IanMarschner
Ian Marschner
11 months
RT @monsoon0: “Removing the advanced mathematics prerequisite does nothing to address the decline in mathematics enrolments at schools and…
0
6
0
@IanMarschner
Ian Marschner
1 year
@KertViele Sounds like a great session. I agree that if anyone’s saying data should be thrown away that’s problematic. Also important to recognise that different types of data have different levels of quality. If we pool them then we need to understand the relative contributions of each
0
0
0
@IanMarschner
Ian Marschner
1 year
@GuyattGH Here is closely related research conducted independently by PhD student Anne Lyngholm Soerensen published last year. Clearly an important topic
0
1
4
@IanMarschner
Ian Marschner
1 year
@MaartenvSmeden Not the probability of an hypothesis being true but perhaps the confidence?
0
0
0
@IanMarschner
Ian Marschner
1 year
@JasonConnorPhD @ADAlthousePhD You might look at these two old papers (and papers citing them), they sound relevant
0
0
0
@IanMarschner
Ian Marschner
1 year
@predict_addict @austrim_cre @TrialsCentre Thanks for the interesting references but you are talking about prediction whereas I'm talking about inference (a common difference between data scientists and statisticians). It's unclear what you mean by "lack validity" when the context is inference from a randomized experiment
1
0
0
@IanMarschner
Ian Marschner
1 year
@lakens Your opening argument is based on: “it is just as likely to observe a p-value of 0.001 as it is to observe a p-value of 0.999”. Both of these values have zero probability of being observed (for a continuous p-value distribution) so I’m not sure what the point is you are making
0
0
0
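The point about continuous p-values can be checked by simulation: under H0 a continuous p-value is Uniform(0, 1), so equal-width intervals carry equal probability while any single exact value has probability zero. A small stdlib-only sketch (function names are my own):

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value of a standard normal test statistic under H0."""
    return math.erfc(abs(z) / math.sqrt(2.0))  # = 2 * (1 - Phi(|z|))

rng = random.Random(12345)
ps = [two_sided_p(rng.gauss(0.0, 1.0)) for _ in range(200_000)]

# Uniformity: a small interval near 0.001 and one near 0.999 each
# capture probability equal to the interval width (0.002 here),
# while P(p == 0.001) = P(p == 0.999) = 0 exactly.
frac_near_low = sum(p < 0.002 for p in ps) / len(ps)
frac_near_high = sum(p > 0.998 for p in ps) / len(ps)
```

Both fractions come out close to 0.002, the interval width, illustrating the equal-density (but zero point-probability) behaviour the tweet refers to.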
@IanMarschner
Ian Marschner
1 year
@syctong Need to be careful of non-proportional odds in such an analysis. The odds ratio for “considerable improvement” is about 12.4 compared to 4.7 for survival, yet they are assumed to be the same. Might be chance variation but important to check or else the odds ratio could be biased
0
0
1
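The check suggested above — comparing cutpoint-specific odds ratios before assuming a common one — can be sketched directly. The counts below are invented to demonstrate the diagnostic, not taken from the trial under discussion:

```python
# Hypothetical 3-category ordinal outcome (ordered worst -> best) by arm.
cats = ["died", "survived", "considerable improvement"]
control = {"died": 30, "survived": 55, "considerable improvement": 15}
treatment = {"died": 15, "survived": 35, "considerable improvement": 50}

def cutpoint_or(treat, ctrl, categories, k):
    """Odds ratio for being at or above cutpoint k (categories worst -> best)."""
    above_t = sum(treat[c] for c in categories[k:])
    below_t = sum(treat[c] for c in categories[:k])
    above_c = sum(ctrl[c] for c in categories[k:])
    below_c = sum(ctrl[c] for c in categories[:k])
    return (above_t / below_t) / (above_c / below_c)

or_survival = cutpoint_or(treatment, control, cats, 1)  # survived or better
or_improve = cutpoint_or(treatment, control, cats, 2)   # considerable improvement
```

With these invented counts the two cutpoint odds ratios differ by more than a factor of two, which is exactly the kind of divergence that should prompt scepticism about fitting a single common odds ratio.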
@IanMarschner
Ian Marschner
1 year
@stephensenn @cjsnowdon In Australian 1970s classrooms we had to drink it warm on 35 degree days!
0
0
0
@IanMarschner
Ian Marschner
1 year
@f2harrell The terms frequentist and Bayesian are outdated and should be retired. We should be discussing methods not people
0
0
5
@IanMarschner
Ian Marschner
1 year
1
0
1
@IanMarschner
Ian Marschner
1 year
@reverendofdoubt @ADAlthousePhD @statsepi @f2harrell @EpiEllie @ProfHayward Stopping at the 1st interim analysis always results in overestimation (on average). The time of stopping affects the magnitude of bias. Stopping at 85% would lead to very minor bias, whereas stopping at 25% could lead to large bias. I’d advocate not stopping before 50% information
2
0
0
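The overestimation from early stopping can be seen in a simple single-look simulation. This is a hedged sketch with invented parameters (true effect, sample size, and efficacy boundary are all assumptions), contrasting a stop at 25% information with a stop at 85%:

```python
import math
import random

def stopped_estimate_mean(true_delta, n_total, frac, z_stop, nsim, seed=0):
    """Mean effect estimate among trials that stop for efficacy at a
    single interim look taken at frac * n_total observations.

    Each simulated 'trial' observes unit-variance normal outcomes with
    mean true_delta; it stops if the interim z-statistic exceeds z_stop.
    """
    rng = random.Random(seed)
    n_int = int(frac * n_total)
    estimates = []
    for _ in range(nsim):
        mean = sum(rng.gauss(true_delta, 1.0) for _ in range(n_int)) / n_int
        if mean * math.sqrt(n_int) > z_stop:  # efficacy boundary crossed
            estimates.append(mean)
    return sum(estimates) / len(estimates) if estimates else float("nan")

# Invented scenario: true effect 0.3, 400 planned observations, z > 2 to stop.
early = stopped_estimate_mean(0.3, 400, 0.25, 2.0, nsim=5000)
late = stopped_estimate_mean(0.3, 400, 0.85, 2.0, nsim=5000)
```

In this setup the mean estimate among trials stopped at 25% information sits visibly above the true effect of 0.3, while stopping at 85% leaves the conditional mean essentially unbiased, matching the qualitative claim in the tweet.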