From: J Moore
Subject: notes
Date: 15 October 2012 19:22:22 CEST
To: wirth@logic.at

; Notes to CP Wirth by J Moore
; 13 Oct, 2012

; Different topics are addressed in different sections.  Sections delineated
; by hyphens.

; -----------------------------------------------------------------
; Is it ``necessary'' to reason about an explicit induction hypothesis?

; Here is an example in which simplification of the induction hypothesis is
; important.  Clearly it is not logically necessary -- one could ``reverse''
; every rewrite performed on the hypothesis and apply it to the conclusion
; instead, but that would require using rules ``in both directions'' (e.g.,
; both unfolding and folding definitions).  By having the induction hypothesis
; explicitly available we allow our rules to be used in ``one direction''
; (see note).

; [Note: This ``one direction'' stuff is of course not strictly true.  There
; are some beautiful proofs in which it really is best to use the rules in
; both directions, e.g., sometimes put an arithmetic expression into
; polynomial form and at other times in the same proof factor a polynomial
; into a product of other polynomials.  When you want to do that with ACL2,
; you have to fight the system with hints that enable and disable sets of
; rules.]

; Here, (len x) is the number of conses in the cdr-chain of x:

; (defun len (x)
;   (if (consp x)
;       (+ 1 (len (cdr x)))
;       0))

; It is pre-defined in ACL2.

(set-gag-mode nil)

(defun rev1 (x a)
 (if (consp x)
     (rev1 (cdr x) (cons (car x) a))
     a))

(thm (<= (LEN A) (LEN (REV1 X A))))

; The key observation is that ACL2's induction hypothesis is formed by
; instantiating X with (CDR X) and A with (CONS (CAR X) A) to get:

; (<= (LEN (CONS (CAR X) A))
;     (LEN (REV1 (CDR X) (CONS (CAR X) A))))

; Here is the ACL2 proof:

; ACL2 !>(thm (<= (LEN A) (LEN (REV1 X A))))
;
; Name the formula above *1.
;
; Perhaps we can prove *1 by induction.  Two induction schemes are suggested
; by this conjecture.  However, one of these is flawed and so we are
; left with one viable candidate.
;
; We will induct according to a scheme suggested by (REV1 X A).  This
; suggestion was produced using the :induction rule REV1.  If we let
; (:P A X) denote *1 above then the induction scheme we'll use is
; (AND (IMPLIES (NOT (CONSP X)) (:P A X))
;      (IMPLIES (AND (CONSP X)
;                    (:P (CONS (CAR X) A) (CDR X)))
;               (:P A X))).
; This induction is justified by the same argument used to admit REV1.
; Note, however, that the unmeasured variable A is being instantiated.
; When applied to the goal at hand the above induction scheme produces
; two nontautological subgoals.
;
; Subgoal *1/2
; (IMPLIES (NOT (CONSP X))
;          (<= (LEN A) (LEN (REV1 X A)))).
;
; This simplifies, using the :definition REV1, to
;
; Subgoal *1/2'
; (IMPLIES (NOT (CONSP X))
;          (<= (LEN A) (LEN A))).
;
; But we reduce the conjecture to T, by the :executable-counterpart of
; TAU-SYSTEM.
;
; Subgoal *1/1
; (IMPLIES (AND (CONSP X)
;               (<= (LEN (CONS (CAR X) A))
;                   (LEN (REV1 (CDR X) (CONS (CAR X) A)))))
;          (<= (LEN A) (LEN (REV1 X A)))).
;
; This simplifies, using the :definitions LEN and REV1, primitive type
; reasoning and the :rewrite rule CDR-CONS, to
;
; Subgoal *1/1'
; (IMPLIES (AND (CONSP X)
;               (<= (+ 1 (LEN A))
;                   (LEN (REV1 (CDR X) (CONS (CAR X) A)))))
;          (<= (LEN A)
;              (LEN (REV1 (CDR X) (CONS (CAR X) A))))).
;
; But we reduce the conjecture to T, by the :executable-counterpart of
; TAU-SYSTEM.
;
; That completes the proof of *1.
;
; Q.E.D.
;
; Summary
; Form:  ( THM ...)
; Rules: ((:DEFINITION LEN)
;         (:DEFINITION NOT)
;         (:DEFINITION REV1)
;         (:EXECUTABLE-COUNTERPART TAU-SYSTEM)
;         (:FAKE-RUNE-FOR-TYPE-SET NIL)
;         (:INDUCTION REV1)
;         (:REWRITE CDR-CONS))
; Time:  0.01 seconds (prove: 0.00, print: 0.00, other: 0.00)
; Prover steps counted:  326
;
; Proof succeeded.

; [proof ends here]

; The (CONS (CAR X) A) was chosen as the appropriate instance for A in the
; theorem because of the way REV1 recurs.  But A occurs elsewhere in the
; theorem too.  When we replace A by the CONS, we create the non-canonical
; term (LEN (CONS (CAR X) A)) in the hypothesis.  This term ``has to be''
; unfolded in the hypothesis to be useful in the conclusion.

; This example illustrates a general class of theorems in which we may do a
; lot of rewriting in the hypothesis:  any theorem in which there is an
; ``accumulator'' (e.g., A).

; I might add a very common proof paradigm for a simple theorem of the form

; (P X) --> (Q X)

; to be proved by induction on X is to break the induction step:

; ((CONSP X)
;  &
;  (P (CDR X)) --> (Q (CDR X)))
; -->
; (P X) --> (Q X)

; into two cases:

; [a] (CONSP X) & (P (CDR X)) & (Q (CDR X)) & (P X) --> (Q X)
; [b] (CONSP X) & (P X) --> (P (CDR X))

; This is sometimes depicted with a diagram:

; (P (CDR X))   --->   (Q (CDR X))
;     ^                     |
;     |                     v
;   (P X)       --->     (Q X)

; by which I mean [a] use (Q (CDR X)) to prove (Q X) and [b] use (P X) to
; prove (P (CDR X)).

; But in step [b] part of the induction hypothesis occurs in the conclusion
; of the subgoal and hence you would ``expect'' to manipulate it.
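; (A hypothetical aside, not in the original notes: a minimal concrete
; instance of this paradigm, with P = TRUE-LISTP and (Q X) =
; (EQUAL (APPEND X NIL) X), which today's ACL2 proves automatically:

(thm (implies (true-listp x)
              (equal (append x nil) x)))

; In the induction step, [a] the hypothesis (EQUAL (APPEND (CDR X) NIL)
; (CDR X)) rewrites the conclusion once APPEND has been unfolded, and [b]
; (TRUE-LISTP X) together with (CONSP X) yields (TRUE-LISTP (CDR X)).)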
; -----------------------------------------------------------------
; Challenge to use Quod Libet to do Operational Semantics Code Proofs

; Next I illustrate an operational semantics proof.  I will prove the
; correctness of a factorial program implemented in a simplified model of a
; JVM.  I think the problem illustrates the strength of the descente infinie
; style of induction.  I'll comment on that below.

; I show two proof approaches.  First, the ``direct'' method, where the ACL2
; user has to provide a hint specifying the right induction.  Second, the
; ``indirect'' way, where the ACL2 user employs in the crucial lemma a
; specially designed function that ``just happens'' to suggest the right
; induction.

; Rather than explain every move below, I'll just type the events and then
; explain them to you when we meet.  I can provide you with the source files
; later.

; Get the ``JVM'' model into ACL2.  Disclaimer: This model is so far removed
; from our actual JVM model that I call it M1 rather than ``JVM'' and I'd be
; embarrassed to advertise this proof as being about the JVM!  The book below
; includes the model and some lemmas that make ACL2 behave well on the model.

(include-book "/u/moore/courses/cs378/jvm/spring-12/m1/m1-lemmas")

; Select the M1 symbol package as the default.

(in-package "M1")

; Define what we mean by a ``halted'' state:  the next instruction is HALT.

(defun haltedp (s)
 (equal (next-inst s) '(HALT)))

; Define the factorial function:

(defun ! (n)
 (if (zp n)
     1
     (* n (! (- n 1)))))

; Define an M1 factorial program, which I call ``pi'' (for ``program'').

(defconst *pi*
  '((ICONST 1)  ;  0  put 1 on the stack
    (ISTORE 1)  ;  1  pop and assign to variable 1
    (ILOAD 0)   ;  2  put the value of variable 0 on the stack
    (IFEQ 10)   ;  3  pop and if it is 0, jump ahead 10 (to pc 13)
    (ILOAD 1)   ;  4  put the value of variable 1 on the stack
    (ILOAD 0)   ;  5  put the value of variable 0 on the stack
    (IMUL)      ;  6  multiply the top two values
    (ISTORE 1)  ;  7  pop and assign to variable 1
    (ILOAD 0)   ;  8  put the value of variable 0 on the stack
    (ICONST 1)  ;  9  put 1 on the stack
    (ISUB)      ; 10  subtract them
    (ISTORE 0)  ; 11  pop and assign to variable 0
    (GOTO -10)  ; 12  jump back 10, to pc 2
    (ILOAD 1)   ; 13  put the value of variable 1 on the stack
    (HALT))     ; 14  halt
  )

; If I were to give the names N and A to variables 0 and 1 respectively, I
; could write this program as the following pseudo-code:

(defconst *pi*
  '((ICONST 1)  ;  0
    (ISTORE 1)  ;  1       A := 1;
    (ILOAD 0)   ;  2  loop:
    (IFEQ 10)   ;  3       If N=0, goto exit (pc 13);
    (ILOAD 1)   ;  4
    (ILOAD 0)   ;  5
    (IMUL)      ;  6
    (ISTORE 1)  ;  7       A := A*N;
    (ILOAD 0)   ;  8
    (ICONST 1)  ;  9
    (ISUB)      ; 10
    (ISTORE 0)  ; 11       N := N-1;
    (GOTO -10)  ; 12       goto loop (pc 2);
    (ILOAD 1)   ; 13  exit:
    (HALT))     ; 14       ``return'' A;
  )

; Define the ``schedule'' function that specifies how many steps it takes to
; run this program to completion, as a function of its input(s).

(defun loop-sched (n)
 (if (zp n)
     (repeat 'TICK 3)
     (ap (repeat 'TICK 11)
         (loop-sched (- n 1)))))

(defun sched (n)
 (ap (repeat 'TICK 2)
     (loop-sched n)))

; A simple test to illustrate the model: Start the program with N = 5 and
; A = 0, and run it from pc 0 to completion.  I get a halted state (pc 14)
; with 5! = 120 on top of the stack.

(run (sched 5) (make-state 0 (list 5 0) nil *pi*))
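; (A hypothetical aside, not in the original notes: the schedule lengths can
; be checked by arithmetic.  (LOOP-SCHED N) has length 11N + 3 and (SCHED N)
; has length 11N + 5, so the test above takes 60 steps.  Assuming LEN is
; visible in the "M1" package:

(len (sched 5))   ; = 60 = 11*5 + 5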
; To prove any program correct, one must first prove the loop(s) correct and
; then work outward to the top-level entry to the program.  This program has
; just one loop.  The crux of any proof is thus the proof that the loop
; computes what we meant it to.  The loop in program pi computes (* A (! N)),
; when entered with arbitrary (natural numbers) N and A.  Furthermore,
; if run the right number of steps, the loop jumps to the HALT statement.
; We will prove this about the loop.

; That is, we will prove that if you put the pc at the loop (pc 2) with
; arbitrary N and A in the register file, and run the program as specified by
; loop-sched, you reach pc 14 with (* A (! N)) on top of the stack.

; Induction is of course necessary.  The basic idea is to use induction to
; unwind the control flow of the program.  Either the program is in its base
; case (at the top of the loop and N is 0), in which case it runs to HALT
; with the expected answer, or the program is going to traverse the loop.
; Our induction hypothesis should tell us what machine state we will be in
; when we next arrive at the loop.  Obviously, N will have been decremented
; to N-1.  But what about A?  It is changed, but the change does not occur
; ``in one place'' but is instead composed of multiple little state changes
; during the passage through the loop body.

; So the two approaches address the question of specifying the right value
; for A in the induction hypothesis.

; First Approach -- An explicit user-supplied hint

; Define an induction hint.

(defun irrel-hint-fn (n a)
 (if (zp n)
     (list n a)
     (irrel-hint-fn (- n 1) (* a n))))

; Prove the loop correct by providing an induction hint.

(defthm loop-is-correct
 (implies (and (natp n)
               (natp a))
          (equal (run (loop-sched n)
                      (make-state 2
                                  (list n a)
                                  nil
                                  *pi*))
                 (make-state 14
                             (list 0 (* a (! n)))
                             (push (* a (! n)) nil)
                             *pi*)))
 :hints (("Goal" :induct (irrel-hint-fn n a))))

; Prove that pi halts and returns (! n).

(defthm main
 (implies (natp n)
          (let ((s_fin (run (sched n)
                            (make-state 0 (list n a) nil *pi*))))
            (and (haltedp s_fin)
                 (equal (top (stack s_fin)) (! n))))))

; Note that with Descente Infinie one might hope that the case split on
; whether N is 0 or not, followed by symbolic evaluation until the HALT or
; re-arrival at the same pc 2, would build the necessary state so that the
; choice of (- N 1) and (* A N) could be easily identified.

; Personally, I think the problem Quod Libet might have with this proof has
; less to do with the induction than with the representation of a ``big''
; state and its repeated simplification to drive it through the loop.  But
; those are ``engineering'' issues.

; Now for the second approach.  Undo the first approach.

(ubt 'irrel-hint-fn)

; Second Approach -- factor the correctness proof

; Define the algorithm that this program implements.

(defun fact-alg (n a)
 (if (zp n)
     a
     (fact-alg (- n 1) (* a n))))

; Prove that the loop code implements the algorithm.  Observe there is no
; hint because the algorithm is the hint.

(defthm loop-is-fact-alg
 (implies (and (natp n)
               (natp a))
          (equal (run (loop-sched n)
                      (make-state 2
                                  (list n a)
                                  nil
                                  *pi*))
                 (make-state 14
                             (list 0 (fact-alg n a))
                             (push (fact-alg n a) nil)
                             *pi*))))

; Prove that the algorithm computes factorial (in the appropriate sense
; given the extra algorithm variable A).

(defthm fact-alg-is-factorial
 (implies (and (natp n)
               (natp a))
          (equal (fact-alg n a) (* a (! n)))))
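; (A hypothetical aside, not in the original notes: a quick concrete check
; that the algorithm agrees with the specification on an input.  The two
; theorems above compose -- LOOP-IS-FACT-ALG reduces the code to FACT-ALG,
; and FACT-ALG-IS-FACTORIAL rewrites (FACT-ALG N A) to (* A (! N)), which
; together re-establish LOOP-IS-CORRECT.)

(fact-alg 5 1)   ; = 120 = (! 5)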
; Prove that pi halts and returns (! n).

(defthm main
 (implies (natp n)
          (let ((s_fin (run (sched n)
                            (make-state 0 (list n a) nil *pi*))))
            (and (haltedp s_fin)
                 (equal (top (stack s_fin)) (! n))))))

; First Approach Again -- ``Let it be whatever it is...''

(ubt! 'fact-alg)

; Sometimes it is just ``too hard'' to write a function that mimics the
; computation perfectly.  (Imagine many case splits and many state components
; being changed conditionally along the various paths.)  In that case it is
; sometimes easier to use the program itself to compute the correct
; instantiation of the auxiliary variables, like A.  Here is a hint that does
; that: note that to compute the new value of A, we put the old value into a
; state like the one we have, run the state 11 steps, and then recover from
; it the new value of A.  We don't have to know that it is (* A N).  It is
; ``whatever it is.''

(defun future-value-of-a (n a k)
 (nth 1 (locals (run (repeat 'tick k)
                     (make-state 2
                                 (list n a)
                                 nil
                                 *pi*)))))

(defun generic-hint (n a)
 (if (zp n)
     a
     (generic-hint (- n 1)
                   (future-value-of-a n a 11))))
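; (A hypothetical aside, not in the original notes: a concrete check of the
; idea.  Starting at the loop (pc 2) with N = 5 and A = 1, eleven steps
; later variable 1 holds 5, i.e., the old (* A N), though the hint never
; has to say so.)

(future-value-of-a 5 1 11)   ; = 5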
; Now we prove the loop correct yet again, but give this induction hint,
; just as in the first proof.  But we've never had to analyze the program to
; determine that A is (* A N) after one iteration.  This kind of reminds me
; of Descente Infinie except, of course, I am specifying the choice of A but
; just doing it in a ``convenient'' way.  When I use this approach I am
; depending on the fact that we rewrite the induction hypothesis.

(defthm loop-is-correct
 (implies (and (natp n)
               (natp a))
          (equal (run (loop-sched n)
                      (make-state 2
                                  (list n a)
                                  nil
                                  *pi*))
                 (make-state 14
                             (list 0 (* a (! n)))
                             (push (* a (! n)) nil)
                             *pi*)))
 :hints (("Goal" :induct (generic-hint n a))))

(defthm main
 (implies (natp n)
          (let ((s_fin (run (sched n)
                            (make-state 0 (list n a) nil *pi*))))
            (and (haltedp s_fin)
                 (equal (top (stack s_fin)) (! n))))))

; -----------------------------------------------------------------
; On the Edinburgh Pure Lisp Theorem Prover as described in my Dissertation

; Page numbers in these notes are written xx[yy], meaning xx is the page
; of the pdf scan of the dissertation and yy is the page number typed on
; that physical page of the bound copy of the dissertation.  So for example,
; the INTRODUCTION TO PART II is on page 84[77].  The difference between
; xx and yy is always 7, e.g., page 77 in the printed dissertation is
; page 84 in the pdf.

; By the way, occasionally the non-word ``normalization'' is used.  In the
; version of the dissertation that I defended in November, 1973, I introduced
; that ``word'' to mean what we now call ``simplification'', namely, the
; composition of symbolic evaluation, IF-normalization, and reduction.  My
; committee objected to a made-up word and asked me to replace it everywhere
; by ``simplification''.  I did that with whiteout and retyping.  But I
; missed some occurrences and I suspect some scanned copies of the document
; were made from un-corrected sources.

; There is a good informal description of the induction mechanism on 87[80].
; Also on that page, we characterize the prover as ``designed to be able to
; prove simple theorems the same way a good programmer might''.

; We note (page 95[88]) ``If we allow the addition of arbitrary axioms
; purporting to define ``functions'' we immediately open the door to
; inconsistency'' and show an example.  We exhibit one of several general
; schemes for primitive recursive functions and say it is safe to allow such
; definitions.

; But I haven't seen a definitive statement that general recursion is
; prohibited, and there is even talk (page 198[191]) of partial functions in
; the chapter on EXTENSIONS.  Indeed, there we show a theorem about a partial
; function that we say the theorem prover could prove:

; (IMPLIES (MEMBER A B) (LTE (CHOP A B) B))

; We claim that every ground instance of this that terminates under the Lisp
; interpreter (given adequate resources, I presume) is true.  Of course, we
; explicitly say this is the case only if the axioms (including
; ``definitions'') are consistent.

; But we also say ``it is difficult to establish whether an extension is
; consistent after the introduction of a partial function'' and we do not
; offer any concrete help for doing it.  We do not discuss what we called
; measures (in the 1979 book and onward).

; In general, today I find the dissertation somewhat vague on logical
; details.  It could be you will find it vague too!  On the other hand, it
; could be just normal mathematics-speak and that I have since become
; over-burdened by formality!

; In section 5.2 Iteration (201[194]) we discuss what today we'd call
; ``accumulator''-using functions.  These are functions that build up some
; arguments (accumulators, or Bundy's ``sinks'') while being controlled by
; other arguments.  The classic example is REV1 above and it is discussed on
; page 202[195].  But we say of such functions (201[194]) ``The current
; program cannot deal with them.''  I believe what is meant is that the
; generalization and induction heuristics described in the dissertation are
; not effective for them.  We describe some desirable extensions for dealing
; with such functions, including a method of generalizing to reach, perhaps,
; an inductively provable result starting from the overly specific
; (REV1 X NIL) case, and a way to select induction instances:

;    ... and the extension desired is clear: If when evaluation is
;    determining what to induct upon, it is noted that skolem constants are
;    being built up by the recursion, then let those skolem constants be
;    free in the hypotheses (provided they are not being inducted upon).

; We sketch a proof about REV1 using this technique and comment on the
; difficulties of implementing it, including handling free variables.
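; (A hypothetical aside, not in the original notes: in today's ACL2 the
; generalization in question is the familiar lemma relating REV1 to the
; simple reverse.  Assuming a fresh session with REV1 as defined near the
; top of these notes, something like:

(defun app (x y)            ; list concatenation
 (if (consp x)
     (cons (car x) (app (cdr x) y))
     y))

(defun rv (x)               ; simple reverse, via APP
 (if (consp x)
     (app (rv (cdr x)) (cons (car x) nil))
     nil))

(defthm app-assoc
 (equal (app (app x y) z) (app x (app y z))))

(defthm rev1-is-app-rv
 (equal (rev1 x a) (app (rv x) a)))

; The overly specific (REV1 X NIL) fact then follows by letting A be NIL.)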
; So the lesson here is that we did not, in 1973, handle accumulators
; effectively but were thinking about it.  The free-variable problem was
; daunting and we wrote 206[199]:

;    Both problems can be avoided if EVAL is used to predict exactly which
;    instances will be needed and then have INDUCT supply just those
;    instances.  This is the direction of current research.

; So this clearly points the way that we headed and described more fully in
; the induction discussion of 1979.

; Section 3.8 page 166[159] discusses induction, but it crucially depends on
; Section 3.2 page 116[109] where we discuss ``evaluation'' (which we
; sometimes called ``symbolic interpretation'' or ``rewriting'' with axioms
; and definitions).  The evaluator set up a data structure, called ANALYSIS,
; which recorded why recursively defined functions could not be unfolded.

; As I noted to you on Friday, the 1973 induction did not use destructors in
; the hypotheses but rather used constructors in the conclusion, e.g.,
; (P A) --> (P (CONS A1 A)).  The ANALYSIS also gave induction enough
; information to do things like:

; (P A) & (P A1) --> (P (CONS (CONS A1 A2) A)),

; introducing names for parts of the structure being inducted upon and
; providing induction hypotheses (or not) about the named parts.

; (The fact that induction introduced CONS is why the prover did not need
; ``destructor elimination'' as talked about in the 1979 book.  By 1979,
; induction always put destructors into the hypotheses and occasionally
; used destructor elimination to trade (CAR X) and (CDR X) for A and B
; by replacing all X by (CONS A B).  Of course, destructor elimination is
; a more general lemma-driven process and ``CAR/CDR elimination'' is just
; a special case.)

; While the prover merged induction schemes suggested by ANALYSIS, it did
; not implement subsumption of one scheme by another.

; It contained so-called ``special induction'' rules for handling some
; supposedly rare cases, e.g., induction by two steps at a time
; (CDR (CDR X)), rather than a general mechanism.  It could NOT do, say,
; 3-step induction, but the way forward was pretty clear given what we did.
; We apparently just didn't feel it was necessary to implement.  We wrote
; ``such a routine is not justified by its usefulness'' and we note that 90%
; of the theorems in the Appendix use the general scheme described and only
; 10% use the ``special'' schemes.

; The dissertation does not discuss what we called ``flawed induction
; suggestions'' in the 1979 book.  An induction scheme is flawed if it
; inducts on a variable that is unchanged in another suggested scheme's
; recursion.  So in (APPEND A (APPEND B C)) = (APPEND (APPEND A B) C),
; induction on B is flawed and so not chosen.  But in 1973, we chose A
; because it allowed more recursions than B (page 169[162]).  We later
; realized that allowing recursions was not as important as instantiating
; variables that had to remain unchanged elsewhere.

; In the event that multiple suggestions survive, we rate the candidates by
; a score that favors those that allow a lot of simplification steps
; (169[162]), or, to break that tie, the one suggested by the largest number
; of new suggestions (not inducted upon already in the proof).  We have
; similar tie-breaking rules today, although not those exact ones.

; I found all of chapter 6 (207[200]) quite interesting.  Part of it talks
; about how nice it is to have the prover generate beautiful lemmas and it
; gives many now classic examples.  It also talks about the design philosophy
; of the program.  One point made several times here is the novelty of our
; total commitment to Lisp as the language in which we express things:
; propositional connectives, hypotheses, ``types'', etc.  (Oddly, since in
; 1973 the system itself was written in POP-2, we interacted with it by
; typing commands in POP-2.  It wasn't until we returned to the States and
; were running on machines that supported Lisp that we coded the prover in
; Lisp and interacted with it in Lisp.  Even then, it was a prover for Pure
; Lisp but was implemented in unrestricted Interlisp and later Symbolics
; Lisp Machine Lisp.  It wasn't until ACL2 in 1989 that Boyer and I committed
; to coding it in the same language it supported as a prover and interacting
; with it in that language.)  Another thing I like is the last two sentences
; in the design philosophy part 215[208]:

;    ... its methods allow the theorem to be proved in the way a programmer
;    might "observe" it.  The program is designed to make the right guess
;    the first time, and then pursue one goal with power and perseverance.

; (By the way, we introduced lemmas and the more general lemma-driven
; rewriter after I got to PARC.  The paper describing how we introduced
; lemmas was ``A Lemma Driven Automatic Theorem Prover for Recursive
; Function Theory,'' with R. S. Boyer,
; Proceedings of the 5th International Joint Conference on Artificial
; Intelligence, 1977, pp. 511-519.  We introduced meta-functions and
; efficient execution of functions on ground constants in ``Metafunctions:
; Proving Them Correct and Using Them Efficiently as New Proof Procedures,''
; with R. S. Boyer, in R. S. Boyer and J S. Moore (eds), The Correctness
; Problem in Computer Science, Academic Press, London, 1981.  We described
; the integration of a linear arithmetic decision procedure in ``Integrating
; Decision Procedures into Heuristic Theorem Provers: A Case Study of Linear
; Arithmetic,'' with R. S. Boyer, in Machine Intelligence 11, Oxford
; University Press, 1988, pp. 83-124.)

; In the Related Work section (page 227[220]) I wrote (page 228[221]) that
; ``Burstall points out the similarity between the form of the proofs and
; the form of the recursive functions involved.  However he does not
; explicitly describe heuristics for choosing what to induct upon, or how to
; generate the induction formula.''  I don't give a reference, but the
; sentence occurs in a paragraph about Burstall's 1969 paper on proving
; properties by structural induction.

; Later I wrote (230[223]):

;     No previous system (known to this author) has automated the "creative"
;     parts of the proof process, namely: generalization of the theorem to
;     be proved, application of automatic program writing to simplify the
;     problem [a reference to the generation of recursive predicates to
;     express constraints during generalization], and the automatic
;     generation of induction formulas.

; I think that establishes that our theorem prover was the first -- at least
; that I believed that in 1973 (and still do).

; I point out (236[229]) that

;      Bledsoe (1971) incorporated an induction rule in a set theory theorem
;      prover.  However, its use was triggered by the occurrence in the
;      theorem of a statement of the form ``for all natural numbers, n'',
;      and caused induction on n.  Such a trigger in our system could cause
;      induction on C in

;      (TIMES A (TIMES B C)) = (TIMES (TIMES A B) C)

;      even though C is not being recursed upon.

; I also point out (236[229]) that Brotz and Floyd (1973) have a theorem
; prover similar to ours but it inducts on the ``right-most argument
; appearing in the theorem.  ... No attempt is made to analyze the recursive
; structure of the functions involved.''  That quote is more solid
; contemporary evidence that our thinking at the time was novel.

; I'll be happy to answer questions you may raise after you've read the
; dissertation, if I can.