Early-career clinical researchers usually celebrate a "significant" result, but… is it really significant?
Well, that's a big question!
In fact, p-values are not percentages. If you find a p-value of 0.1, you cannot say "I am 90% confident that my results are not due to random chance alone" — the p-value by itself can't provide that kind of evidence.
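To make that concrete, here is a tiny simulation sketch (Python with numpy/scipy, purely made-up data, not from any real trial): when chance alone is at work, p-values land all over the interval from 0 to 1, so a single p = 0.1 describes how surprising the data would be under chance, not how confident you can be that chance is (or isn't) the explanation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 10,000 fake two-arm "trials" in which chance alone is at work
# (both arms are drawn from the exact same distribution).
pvals = np.array([
    stats.ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue
    for _ in range(10_000)
])

# Under chance alone, p-values are spread roughly evenly between 0 and 1,
# so about 10% of these null trials land below 0.1 -- the p-value tells you
# how surprising your data would be under chance, not how likely chance is.
print(f"Share of null trials with p < 0.1: {(pvals < 0.1).mean():.2f}")
```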
It is not only about the p-value: you should deeply understand the context of your clinical research, and I would recommend calculating the effect size.
Imagine you are performing or reading clinical research about a magic pill that has a significant effect (p-value = 0.01) on weight loss in just 6 months… and this conclusion was drawn from a paper that showed only a 0.1 difference in mean weight loss.
Imagine also a study that concluded a new herbal medicine would non-significantly increase the median progression-free survival (PFS) time by 3 months compared with the standard of care.
Calculating an effect size is easy and provides truly valuable insight.
For the first example, Cohen's d will give you the standardized difference in mean weight loss between the two arms.
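As a rough sketch (made-up numbers, not real trial data), Cohen's d is just the difference in means divided by the pooled standard deviation:

```python
import numpy as np

# Hypothetical weight change (kg) after 6 months -- made-up numbers,
# only here to show the calculation.
pill    = np.array([-3.2, -1.1, -4.5, -0.8, -2.6, -1.9])
placebo = np.array([-3.1, -1.0, -4.4, -0.7, -2.5, -1.8])

def cohens_d(a, b):
    """Difference in means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# A d this close to zero flags a trivial effect, whatever the p-value says.
print(f"Cohen's d = {cohens_d(pill, placebo):.2f}")
```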
For the second one, a hazard ratio will give you the relative difference in the hazard of progression between the herbal-medicine arm and the standard of care.
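Here is a minimal sketch of that second example, assuming the lifelines package and a toy PFS dataset (times in months, invented values): the exp(coef) for the group term is the hazard ratio.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy PFS data (months) -- made-up values, not a real trial.
# event = 1 means progression was observed; group = 1 is the herbal-medicine arm.
df = pd.DataFrame({
    "months": [5, 6, 7, 8, 9, 10, 12, 7, 9, 10, 11, 13, 14, 15],
    "event":  [1, 1, 1, 1, 1, 1,  0,  1, 1, 1,  1,  1,  0,  0],
    "group":  [0, 0, 0, 0, 0, 0,  0,  1, 1, 1,  1,  1,  1,  1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")

# exp(coef) for 'group' is the hazard ratio: a value below 1 means a lower
# hazard of progression on the herbal medicine than on the standard of care.
print(cph.summary[["exp(coef)", "p"]])
```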
Of course, a 0.1 difference in mean weight loss is not strong evidence, while even a modest increase in median PFS could be promising, even if the p-value was above 0.05.
It's about the context; it's about the real evidence.
I'm including a masterpiece of a peer-reviewed paper below as further reading on p-value misconceptions:
https://link.springer.com/article/10.1007/s10654-016-0149-3#Sec11