\author{Claus-Peter Wirth}
\title{Against the First Part of Lincoln A. Wallen's ``Automated Proof Search in Non-Classical Logics''}

In Part\,I of his monograph ``Automated Proof Search in Non-Classical Logics'', Lincoln A. Wallen blames classical sequent and tableau calculi for redundancy on the following three counts:
\begin{enumerate}
\item \emph{Notational redundancy:} the same subformula occurs again and again in a proof tree.
\item \emph{Irrelevance:} a formula may be expanded although this expansion cannot further the proof.
\item \emph{Non-permutability:} the ordering of $\gamma$- and $\delta$-steps in the construction of a proof may be crucial for whether the proof goes through with the proper instantiations.
\end{enumerate}

\emph{Ad 1.} Notational redundancy should be avoided by a clever implementation using some form of structure sharing, as in [BM72], but should not be discussed on the level of the logical calculus. Firstly, because these problems have to be reconsidered when implementing the calculus anyway, and local optimizations tend to turn out to be non-optimal for the implementation as a whole. Secondly, because the resulting optimization blocks improvements of the calculus, such as lemma application, application of induction hypotheses, contextual reasoning, and contextual rewriting.

\emph{Ad 2.} In practice it is hardly possible to tell in advance which formula's expansion will contribute to a proof. On the contrary, the lemmas speculated from seemingly non-contributing expansions may be the key to finding a proof. Moreover, to be useful in Wallen's approach, all lemmas already present must be included in the formula or sequent to be proved. This creates problems for lemma management, management of change in the small, goal-directedness, and \emph{descente infinie}, either by making them impossible or by forcing them to be treated on an inadequate level.

\emph{Ad 3.} The author considers this last point the most important one.
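For orientation, the $\gamma$- and $\delta$-rules in question can be sketched in free-variable style as follows; this is a generic textbook presentation, not Wallen's own notation, and exact formulations vary between calculi:
\[
  \gamma:\quad \frac{\forall x.\,A}{A\{x \mapsto y\}}
  \qquad \text{($y$ a fresh free variable; $\forall x.\,A$ is retained),}
\]
\[
  \delta:\quad \frac{\exists x.\,A}{A\{x \mapsto f(y_1,\dots,y_n)\}}
  \qquad \text{($f$ fresh; $y_1,\dots,y_n$ \emph{all} free variables of the branch),}
\]
\[
  \delta^+:\quad \frac{\exists x.\,A}{A\{x \mapsto f(y_1,\dots,y_k)\}}
  \qquad \text{($f$ fresh; $y_1,\dots,y_k$ only the free variables of $\exists x.\,A$).}
\]
The liberalization in $\delta^+$ (fewer arguments in the Skolem term) rules out fewer substitutions in the subsequent unification, which is why the relative order of $\gamma$- and $\delta$-steps matters less under the liberalized rules.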
He describes the order of $\gamma$- and $\delta$-steps as crucial for the success of a proof. But when liberalized versions of the $\delta$-rules are used, this problem disappears completely, as far as his examples are concerned. A modern implementation of his methods had better not use his primitive version of the admissibility check for substitutions, but a modern one (such as can be found at \url{http://www.ags.uni-sb.de/~cp/p/d}), which admits more substitutions and thereby more proofs at a given level of multiplicity.

There is, however, a new problem that becomes evident only with the $\delta^+$-rules: the ordering of $\beta$-steps becomes relevant for the success of the proof, unless we disregard proof size and adopt very liberalized forms of the $\delta$-rules, such as the $\delta^\epsilon$-rules. Cf.\ \url{http://www.ags.uni-sb.de/~cp/teaching/hotp/Digest1/all.pdf}, p.\,192\,ff., for an example of this.

\emph{Conclusion.} All in all, we should stay with sequent and tableau calculi, because they are more comprehensible for human beings and give us the freedom required to design more powerful inference systems; or we should ask Serge Autexier how to use indexed formula trees for much more than matrix computation, namely to get rid of any visible calculus or proof trees in the presentation of proof states.
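To make the structure sharing mentioned under point 1 concrete, here is a minimal hash-consing sketch in the spirit of [BM72]; the class and its interface are hypothetical illustrations, not Wallen's or Boyer and Moore's actual code:

```python
# A minimal sketch of structure sharing ("hash-consing") for formula trees.
# All names here are illustrative, not taken from [BM72] or Wallen's work.

class Formula:
    # Maps (connective, ids of children) to the unique node for that shape.
    # The pool keeps every node alive, so the ids in the keys stay valid.
    _pool = {}

    def __new__(cls, connective, *children):
        key = (connective, tuple(id(c) for c in children))
        cached = cls._pool.get(key)
        if cached is not None:
            return cached  # reuse the existing node: no notational redundancy
        node = super().__new__(cls)
        node.connective = connective
        node.children = children
        cls._pool[key] = node
        return node

# Building the "same" subformula twice yields one shared node:
p = Formula("P")
q = Formula("Q")
a = Formula("and", p, q)
b = Formula("and", p, q)
assert a is b  # physically identical: stored once, referenced twice
```

With such a representation, repeated occurrences of a subformula in a proof tree cost one pointer each rather than a fresh copy, which is the implementation-level answer to point 1 that the text argues for.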