I have been reflecting for quite a while on my proof planning at the human meta-level, which I have been observing for many years in order to put it to use for object-level proof planning. It can be considered an AI planning process with some peculiarities:

1. There are free existential variables in the theorem. These variables take care of technical details that cannot easily be guessed at the high level of abstraction where the theorems are speculated. Moreover, they are there to admit additional requirements that emerge during the expansion of the plans to the calculus level (my meta-calculus, I mean). (A minimal representation sketch follows this list.)

2. Then, on a very high level and with semantical means, I construct a proof plan. Although it is order-independent, it may be a situation-space plan, as I do not take care of threats. Nevertheless, it is not at all clear to me how this planning process could be automated.

3. Then, I try to expand the proof plan to the calculus level. This is usually successful, both because the expansion may change the theorem via instantiation of meta-variables and because I am very experienced on this very high semantical level.

4. Then, I start a semantical analysis of the instantiated theorem. If it is what I want and the additional requirements indeed seem necessary, I stop. Otherwise, I start proof rewriting on the calculus level (mostly peephole optimization; see the last sketch below). If this is successful, I repeat step 4.

5. Otherwise, I replan and continue with step 3. (The loop formed by steps 3 to 5 is also sketched below.)
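
To make point 1 concrete, here is a minimal sketch of how a speculated theorem with a free existential meta-variable might be represented. All names are hypothetical illustrations, not any existing system's API; the point is only that the placeholder ?N defers a technical detail (here a bound) until expansion commits to a value, which also changes the theorem actually proved.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MetaVar:
        name: str          # e.g. "?N", a placeholder in the speculated theorem

    @dataclass(frozen=True)
    class Term:
        op: str
        args: tuple = ()

    def instantiate(t, subst):
        """Replace meta-variables by the terms the expansion committed to."""
        if isinstance(t, MetaVar):
            return subst.get(t.name, t)   # stays open if not yet determined
        return Term(t.op, tuple(instantiate(a, subst) for a in t.args))

    # Speculated theorem: forall n. f(n) <= ?N, with ?N left open on purpose.
    theorem = Term("<=", (Term("f", (Term("n"),)), MetaVar("?N")))

    # Expansion later discovers ?N = 2*n + 1, instantiating the theorem.
    bound = Term("+", (Term("*", (Term("2"), Term("n"))), Term("1")))
    print(instantiate(theorem, {"?N": bound}))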
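
The control structure of steps 3 to 5 can be written down as a loop. The following is a hedged sketch under my own reading of the steps; every helper is a hypothetical stand-in with toy behaviour, included only so the loop runs, and none of it is a real planner's interface.

    def expand_to_calculus(plan):
        # Toy step 3: expansion succeeds and commits ?N := 3,
        # thereby changing the theorem.
        return (["axiom", "mp"], plan.replace("?N", "3"))

    def acceptable(theorem):             # step 4's semantic check, toy version
        return "?" not in theorem

    def requirements_necessary(theorem): # toy: accept the side conditions
        return True

    def peephole_optimize(proof):        # calculus-level rewriting, toy version
        return None if len(proof) <= 2 else proof[:-1]

    def replan(theorem, old_plan):       # step 5, toy version
        return old_plan + " (revised)"

    def prove(theorem):
        plan = theorem                   # toy "plan": just the goal itself
        while True:
            proof, inst = expand_to_calculus(plan)    # step 3
            while True:
                if acceptable(inst) and requirements_necessary(inst):
                    return proof, inst                # step 4: stop
                rewritten = peephole_optimize(proof)  # rewrite on calculus level
                if rewritten is None:
                    break                             # rewriting failed
                proof = rewritten                     # repeat step 4
            plan = replan(inst, plan)                 # step 5, back to step 3

    print(prove("f(n) <= ?N"))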
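
Finally, a minimal sketch of what "peephole optimization" on a calculus-level proof could look like, if the proof is viewed as a linear list of rule applications. The particular cancelling rule pairs are my own illustrative assumption, not a fixed calculus.

    CANCELLING = {("and_intro", "and_elim"), ("weaken", "contract")}

    def peephole(proof):
        """One pass: drop adjacent rule pairs that cancel each other out."""
        out = []
        for step in proof:
            if out and (out[-1], step) in CANCELLING:
                out.pop()            # the pair contributes nothing; drop both
            else:
                out.append(step)
        return out

    proof = ["hyp", "and_intro", "and_elim", "weaken", "contract", "mp"]
    print(peephole(proof))           # -> ['hyp', 'mp']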