The ‘strategy’ is coming along nicely, and the process has generated valuable introspection and discussion. Intuitively, the strategic directions of (1) design risk mitigation, (2) knowledge management, and (3) skills and staffing "feel" right. However, to me there seems to be a gap between the specific levers listed under each direction and the strategic directions themselves (perhaps some items are missing?). I wonder whether we have a complete picture of the weaknesses, whether we are picking the weakest links to address, and whether the areas we have chosen to focus on have interventions that will in fact most directly address the problems identified.

One approach that might be followed to help assure a complete picture is to:

• List all the current problems: the reasons the projects are problematic or underperforming, why we think learning is not occurring, and the staffing issues.

• Address how (and whether) one would objectively measure these problems; or, what would the ideal situation look like, and could we measure it if it were achieved? Assigning relative weights to the problems, in terms of how much we think each contributes to the poor performance (at least on a consensus basis), and arraying them from relatively more problematic to less, would help.

• Identify which interventions would most directly address the issues. Perhaps array these from the measures that would most directly or substantively address the problems, and select the few interventions deemed most likely to do so.

This would provide a monitorable framework, in terms of both actions and measuring whether the problems are being corrected. My fear is that there are mismatches, partial problem definitions, and partial solutions. If this is the case, we will not have much to show at the end of the next cycle. For example, strategic direction 1 on design risk has as its first element (1) improving responsiveness to country programs.
The measure or intervention under this heading is to ‘enable more flexible TA and dialogue through flexible instruments to reduce dependence on PSM projects.’ The problem this implicitly addresses would appear to be inflexible instruments and an inability to carry on a dialogue with our clients. This hypothesis should be stated explicitly as such, so that it can be challenged and tested. From my experience, it is not clear that we need a new instrument, or that we lack instruments with which to maintain a dialogue with clients. Whenever we engage a client on public sector (or other) issues, we should think of all the current engagements, and all the instruments, we might employ. It would not be unusual to start some work as TA under a DPL discussion, follow it with work via ESW, and then move to a specific lending operation that also leverages donor resources under a trust fund to support project implementation and reforms. Over a period of 5-10 years, the dialogue and support would appear continuous, and would support results on the ground, even though the vehicle or instrument (financing) varied at any point in time. (This ‘holistic’ approach would also be a more appropriate means of evaluating impact: looking at the impact of the ESW work alone, for example, would give a very partial picture, and would not show the impact within the short time horizon of the ESW.) So, if the PS strategy is positing that public sector projects or reforms perform poorly because of inappropriate or inflexible instruments, I would suggest that this is not the case, and that it is not the problem or constraint to having impact. We would have the wrong solution to the wrong problem, and any efforts would be unlikely to much improve public sector reform fortunes when this strategy is evaluated in the future.
I am not advocating a dry, pedantic exercise, but I would encourage a more rigorous approach to problem definition, metrics, and interventions, to be sure we are tackling the most important problems, have the right solutions, can monitor implementation and results, and have the highest likelihood of improving impact.