My go-to reference for questions like this is Popper's *The Logic of Scientific Discovery*, which has direct practical recommendations. Simplifying: there are two steps to the method, hypothesis generation and hypothesis testing. Step 1 can come from anywhere, including a hallucination or a false belief. But maybe the most fertile ground for finding hypotheses to test experimentally is the *non-experimental* results of a prior study.
For your situation this means: 1) Tie your hands to one or two measures, for the purpose of hypothesis testing, labeled as such. Start with the one most influential in the literature, full stop. Don't agonize too much about whether it's the 'right' or 'best' index, because: 2) Also show the results with other indices/weightings as non-experimental results, labeled unmistakably as such, along with a theory about why results using different indices might differ. Step 2 is hypothesis generation, not testing, and it's every bit as important to the scientific method as hypothesis testing. Properly distinguished for readers, both are science. The hypotheses generated by ex-post subgroup analysis or alternative outcome indices in one paper can be tested in another paper. If a generated hypothesis turns out to be an illusion, that becomes clear in the other paper.
In other words, Paper A pre-commits to testing the effect of a pill on "well-being" as an average of pain-free days and self-reported happiness. Experimental result: nil. A clearly-labeled non-experimental section notes that there's a big positive effect on pain-free days in isolation, but not on self-reported happiness. Paper B then pre-commits to testing the effect of the same pill on pain-free days. All of this is science, including the non-experimental analyses in Paper A that depart from the pre-analysis plan, because choosing the hypothesis for Paper B to test is hypothesis generation, the first step of the method.
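The Paper A scenario can be sketched with a toy simulation (all names, sample sizes, and effect sizes here are hypothetical, chosen only to illustrate the mechanism): the pill shifts pain-free days, leaves happiness untouched, and the noisy happiness component dilutes the pre-registered composite toward a nil result.

```python
# Toy simulation of the Paper A scenario: a clear effect on pain-free days,
# no effect on happiness, and a composite "well-being" index that dilutes
# the signal. All numbers are hypothetical.
import random
import statistics

random.seed(0)
N = 200  # hypothetical participants per arm

def simulate_arm(treated):
    # Pain-free days: a +2-day treatment effect on a tight scale.
    pain = [random.gauss(10 + (2 if treated else 0), 3) for _ in range(N)]
    # Self-reported happiness: no treatment effect, much noisier scale.
    happy = [random.gauss(50, 12) for _ in range(N)]
    return pain, happy

t_pain, t_happy = simulate_arm(True)
c_pain, c_happy = simulate_arm(False)

def diff_and_t(treat, control):
    """Difference in means and a rough t-statistic for that difference."""
    d = statistics.mean(treat) - statistics.mean(control)
    se = (statistics.variance(treat) / len(treat)
          + statistics.variance(control) / len(control)) ** 0.5
    return d, d / se

# Pre-registered composite index: simple average of the two outcomes.
t_index = [(p + h) / 2 for p, h in zip(t_pain, t_happy)]
c_index = [(p + h) / 2 for p, h in zip(c_pain, c_happy)]

d_pain, z_pain = diff_and_t(t_pain, c_pain)
d_happy, z_happy = diff_and_t(t_happy, c_happy)
d_idx, z_idx = diff_and_t(t_index, c_index)

print(f"pain-free days: diff={d_pain:+.2f}  t={z_pain:+.1f}")
print(f"happiness:      diff={d_happy:+.2f}  t={z_happy:+.1f}")
print(f"composite:      diff={d_idx:+.2f}  t={z_idx:+.1f}")
```

In this setup the component-level t-statistic for pain-free days is large while the composite's is attenuated, because averaging in a noisy, unaffected outcome adds variance without adding signal; whether the composite clears any particular significance threshold depends on the noise levels assumed.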
In short: Don't suppress analysis that departs from the pre-analysis plan; just make the departure fully transparent and set it apart in its own section. Then worry less about getting the pre-analysis plan perfect, and try to ignore the (real) pressure we all face for every paper to be a "home run". (Sorry Kiwis... er, a "sixer".)