TODO: further explanation on the approach of defining P/NP, what worked and what did not.

So... ehm... here we go. Let's begin by defining/explaining the abstractions used in the theory:
`comp c x r` is a predicate meaning that the code `c` computes, for the input `x`, the result `r`. More formally: if you run the code `c` on a state where the register `input` contains the value `x`, the code will halt in a state where the register `input` contains the value `r`. Using the same register `input` for both input and output is a design choice that allows easy composition of functions by directly sequencing codes (otherwise you would always need to transfer data between registers between the two codes). For each code `c`, `comp c` defines a binary relation on `nat`. Due to the determinism of the IMP- semantics, we prove it is actually a partial function. This is expressed by the lemma `comp_det`, defined as `comp c x r ==> comp c x r' ==> r=r'`, and it IS a life saver. You can think of `comp c` as the partial function computed by the code `c`. `comp` is useful for defining reductions and deciders, which will come later.
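To make this concrete, here is a minimal sketch of what such a definition could look like, assuming a big-step judgement written `(c, s) ⇒ s'` and states as functions from register names to `nat`. All names here (`com`, the register name `''input''`) are illustrative assumptions; the actual theory may differ.

```isabelle
(* Sketch only: ''input'' is the distinguished input/output register. *)
definition comp :: "com ⇒ nat ⇒ nat ⇒ bool" where
  "comp c x r ≡ ∀s. s ''input'' = x ⟶ (∃s'. (c, s) ⇒ s' ∧ s' ''input'' = r)"

(* Determinism of the semantics makes comp c a partial function: *)
lemma comp_det: "comp c x r ⟹ comp c x r' ⟹ r = r'"
  oops  (* proof omitted in this sketch *)
```

`verif` is analogous to `comp` but additionally involves a certificate.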
`verif c x z r` means that the code `c` verifies, for input `x` and certificate `z`, the result `r`. It follows the same formal idea as `comp`: running the code `c` on any state where the register `input` has value `x` and the register `certificate` has value `z` results in halting in a state where the register `input` contains `r`. As the name says, this will be useful to define verifiers. You might expect verification to return a boolean value, but it is helpful for the course of our work to be more specific and pin down the exact `nat` the verifier returns; semantically, we consider a non-zero value as True and 0 as False. Analogously to `comp`, `verif` defines a relation that we can prove to be a partial function.

Remark: the universal and existential quantifiers are somewhat similar and easy to confuse when using the IMP- semantics to define our abstractions, due to the determinism of the semantics. There is still a difference, though. The universal quantifier does not exclude the possibility of the computation not halting: it only says that *if* the computation halts, then such and such will happen. The existential quantifier gives us more precise information: the computation *will* halt and give us such a result. In both cases we know that at most one computation can happen/halt, but only in the existential case do we know that at least one will.
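To see the difference concretely (same assumed notation as above), compare the two propositions phrasing "running `c` on `x` yields `r`":

```isabelle
(* Universal phrasing: only constrains runs that happen to halt. *)
"∀s s'. s ''input'' = x ∧ (c, s) ⇒ s' ⟶ s' ''input'' = r"

(* Existential phrasing: additionally asserts that the run halts. *)
"∀s. s ''input'' = x ⟶ (∃s'. (c, s) ⇒ s' ∧ s' ''input'' = r)"
```

By determinism, each phrasing admits at most one `r`, but only the existential one guarantees that a halting run exists.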
To summarize the key lemmas about `comp` and `verif`:

- `comp_det`, defined as `comp c x r ==> comp c x r' ==> r=r'`, and its analogue for `verif`.
- `[| comp f x y ; comp g y z |] ==> comp (f;;g) x z`, which results directly from the big-step semantics definition (see the sketch below).
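A sketch of why the composition lemma holds (the rule name `Seq` for the big-step sequencing rule is assumed):

```isabelle
lemma comp_seq:
  assumes "comp f x y" and "comp g y z"
  shows "comp (f ;; g) x z"
  (* Unfolding comp: any s with s ''input'' = x gives (f, s) ⇒ s' with
     s' ''input'' = y; then (g, s') ⇒ s'' with s'' ''input'' = z; the
     big-step Seq rule glues the two runs into (f ;; g, s) ⇒ s''. *)
  oops  (* proof omitted in this sketch *)
```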
By lifting the aforementioned abstractions to the language level (a language is formally defined as a `nat set`), one easily obtains definitions for deciders, reductions and verifiers, which we state informally here (and sketch after the list):

- An IMP- program is a decider for L iff it computes, for every input `nat` `x`, some result `r` (i.e. it computes a total function; this does not mean the same result for every `x`), and that result is non-zero iff `x` is in L.
- An IMP- program is a verifier for L iff, in the sense of `verif`, `x` is in L exactly when there is a certificate `z` for which it computes a non-zero result.
- An IMP- program is a reduction from L to L' iff it computes a result for every input, and the result is in L' iff the input is in L. The reduction must also preserve the register `certificate`, i.e. the reduction code always halts in a state where that register holds the same value as in the initial state. This might look like a weird side-effect requirement, and the reason is purely technical: we aim to obtain a verifier for a new language by sequencing a reduction with a base verifier, and this would be ruined if our reduction were mean enough to delete our certificate. Even when we don't need the content of that register, it does no harm to refrain from using it in a machine with infinitely many registers.
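A sketch of these notions in the same assumed notation (the names `decides`, `reduces`, `verifies` are hypothetical):

```isabelle
definition decides :: "com ⇒ nat set ⇒ bool" where
  "decides c L ≡ ∀x. ∃r. comp c x r ∧ (r ≠ 0 ⟷ x ∈ L)"

definition reduces :: "com ⇒ nat set ⇒ nat set ⇒ bool" where
  "reduces c L L' ≡ ∀x. ∃r. comp c x r ∧ (x ∈ L ⟷ r ∈ L')"
  (* plus, in the real development, the technical side condition that
     the register ''certificate'' is left unchanged *)

definition verifies :: "com ⇒ nat set ⇒ bool" where
  "verifies c L ≡ ∀x. (x ∈ L ⟷ (∃z r. verif c x z r ∧ r ≠ 0))"
```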
The most important point here is that composing a reduction with a decider/verifier gives a decider/verifier: totality of each part yields totality of the sequence, and the characteristic property of a reduction (the result is in L' iff the input is in L) transfers the decision/verification process from L' to L.
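Stated as a sketch in the assumed notation:

```isabelle
lemma reduction_transfers:
  assumes "reduces f L L'" and "decides g L'"
  shows "decides (f ;; g) L"
  (* For any x: f halts with some y where x ∈ L ⟷ y ∈ L'; g halts on y
     with r where r ≠ 0 ⟷ y ∈ L'. By comp_seq, comp (f ;; g) x r holds,
     and r ≠ 0 ⟷ x ∈ L. The verifier version additionally uses that f
     keeps the register ''certificate'' intact. *)
  oops  (* proof omitted in this sketch *)
```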
Since we're interested in complexity theory, rather than pure computability theory, some space and time bounds should be involved. We represent bounding functions as `nat => nat`, and we need our bounding functions to be polynomial.
The theory on polynomial growth developed by Manuel Eberl is really helpful here. `poly p` means that `p` is a polynomially bounded function (so we can think of it as a polynomial in our context). `make_mono` takes a bounding function and returns one with more convenient properties (one plausible construction is sketched after the list):

- `make_mono p` is always a monotonically increasing function.
- `make_mono p` is an upper bound for `p`, i.e. for every `x`: `p x <= make_mono p x`. So if a code is bounded by `p`, it is also bounded by `make_mono p`.
- `poly p ==> poly (make_mono p)`
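One construction with exactly these properties (the definition actually used in the theory may differ; this is just a plausible sketch) is the running maximum:

```isabelle
definition make_mono :: "(nat ⇒ nat) ⇒ nat ⇒ nat" where
  "make_mono p n = Max (p ` {..n})"
(* Monotone: m ≤ n gives {..m} ⊆ {..n}, so the Max can only grow.
   Upper bound: p n ∈ p ` {..n}, hence p n ≤ make_mono p n. *)
```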
Let's start with time bounds:
`time_bounded c p` is a predicate saying that the code `c` is time-bounded by the bounding function `p`: the code `c`, run on any initial state, halts with a time cost bounded by `p (length x)`, where `x` is the content of the register `input` at the beginning. This already delivers a definition of the complexity class P: the set of problems/languages that have a polynomially time-bounded decider. One can already see the benefits of this approach: through the presented abstractions we separate the concepts of correctness and complexity (even though there is still some overlap between them, since both imply that the program always halts).
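With a time-annotated big-step judgement, written here as `(c, s) ⇒^t s'` (notation assumed), and `length` denoting the length of a `nat` as in the text, the definitions could look like:

```isabelle
(* Sketch: (c, s) ⇒^t s' means c run on s halts in s' with time cost t. *)
definition time_bounded :: "com ⇒ (nat ⇒ nat) ⇒ bool" where
  "time_bounded c p ≡ ∀s. ∃t s'. (c, s) ⇒^t s' ∧ t ≤ p (length (s ''input''))"

definition P :: "nat set set" where
  "P ≡ {L. ∃c p. poly p ∧ time_bounded c p ∧ decides c L}"
```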
What about polynomial reductions? We try to define them as reductions that are polynomially time-bounded. We know that sequencing a reduction with a decider results in a decider for the "easier" language, so we need to show that sequencing a polynomial reduction with a polynomial decider also halts in polynomial time.
This is not as trivial as it sounds: the time bound of the second program depends on its own input, which is the result of the first program, not the initial input. So we also need to know that the result of the first program (i.e. the reduction) is bounded.
`result_bounded c p` holds iff `c` computes, for every `x`, some result `r` whose size is bounded by `p`, i.e. `length r <= p (length x)`.
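As a sketch, together with the composed time bound it enables (all names assumed as before):

```isabelle
definition result_bounded :: "com ⇒ (nat ⇒ nat) ⇒ bool" where
  "result_bounded c p ≡ ∀x. ∃r. comp c x r ∧ length r ≤ p (length x)"

(* If the reduction f is time-bounded by q and result-bounded by p, and
   the decider g is time-bounded by q', then f ;; g on input x runs in
   time at most q (length x) + make_mono q' (p (length x)) -- still
   polynomial. The monotone upper bound make_mono q' is exactly what
   lets us plug the result bound p (length x) into g's time bound. *)
```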