zhilin02 / slreplace


2expspace lower bound for reversal bounded counters #13

Closed matthewhague closed 6 years ago

matthewhague commented 6 years ago

From Zhilin's mail:

One issue that I found is that the complexity upper bound for reversal-bounded two-way parametric transducers is not EXPSPACE but 2-EXPSPACE.

Let's consider a 1-reversal-bounded two-way PT (the reading head reverses once). The size of the pre-image is f(|T|, |A|) = (|T||A|)^2, where |T| is the size of the transducer and |A| is the size of the conjunctive FA.

If there are two transducer assignments in the string constraints, then we get a pre-image of size at most f(|T|, f(|T|, |A|)) = |T|^2 (|T|^2|A|^2)^2 = |T|^{2+2^2} |A|^{2^2}.

In general, if there are n transducer assignments in the string constraints, then we get a pre-image of size at most

|T|^{2+2^2+...+2^n} |A|^{2^n},

which is doubly exponential.
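This recurrence is easy to sanity-check mechanically. The following sketch (plain Python, not from the repo) tracks only the exponents (a, b) in a pre-image size of the form |T|^a |A|^b, where one transducer elimination maps a size s to (|T| s)^2:

```python
# One elimination step for a 1-reversal two-way transducer maps an FA
# of size s to a pre-image FA of size (|T| * s)^2.  Track the exponents
# (a, b) where the current size is |T|^a * |A|^b, so a step is
# (a, b) -> (2a + 2, 2b).
a, b = 0, 1  # initially the constraint is just A, i.e. |T|^0 * |A|^1
for n in range(1, 11):
    a, b = 2 * a + 2, 2 * b
    # Closed forms: a = 2 + 2^2 + ... + 2^n = 2^(n+1) - 2 and b = 2^n
    assert a == sum(2 ** i for i in range(1, n + 1)) == 2 ** (n + 1) - 2
    assert b == 2 ** n
```

Since b = 2^n, the pre-image size is indeed doubly exponential in the number of assignments.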

Then the natural question is whether we can match the upper and lower bounds for reversal-bounded two-way parametric transducers, i.e. can we get a 2-EXPSPACE lower bound?

matthewhague commented 6 years ago

I suspect this applies to non-parametric transducers as well.

chentaolue commented 6 years ago

Matt, Zhilin and I wonder whether you could adapt the lower bound you provided for replaceall in the summer. We even think that reverse (which uses one reversal) may be sufficient for the lower bound.

matthewhague commented 6 years ago

I was going to ask something along these lines already. I don't see how to adapt the proof yet, but i have begun to think that if you introduce the reverse function, then you can reduce single-reversal transducers to one-way transducers.

What I can't figure out is why being able to reverse a string should cause such a blow-up. It doesn't add any obvious power to the EXPSPACE lower bound technique.

In fact, i was going to ask if the EXPSPACE algorithm we already have can be adapted to include the reverse function. Do you have any good examples showing why not (or examples showing an extra blow-up because of reverse)?

matthewhague commented 6 years ago

(Sorry, meant to cancel a comment, but closed the issue instead... reopened.)

matthewhague commented 6 years ago

I think for a single reversal at least it should be expspace. This is because

    x = T(y)

for a single-reversal transducer T can be simulated by

    yq = truncate(y);
    xf = Tf(yq);
    xb = Tb(yq);
    xb' = reverse(xb);
    x = concat(xf, xb')

where

  • truncate is a transducer which outputs an arbitrary prefix of y followed by a state f of T. This marks the turning point of T when processing y and gives the state reached at that turning point.
  • Tf is T restricted to forward moves only (a one-way transducer), with final state f.
  • Tb is T restricted to backward moves only, with those moves reversed (i.e. it has an edge q -- a/b --> q' for every transition (q', a, q, b, -1) in T, where -1 means "move left"). Its initial states are the final states of T, and its final state is f. This is also one-way.
  • reverse reverses the output of Tb: since Tb is reversed relative to T, its output comes out the wrong way around.
  • x is then the concatenation of the forward part and the (re-reversed) backward part.

The only new function here is reverse. For a regular language L given by a one-way finite state automaton A, computing Pre_reverse(A) is simple: just reverse the transitions of A, with no blow-up.
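As a quick illustration of the no-blow-up claim, here is a minimal NFA sketch (toy Python encoding, not code from this repository): reversing an automaton just flips each transition and swaps initial and final states, leaving the state set unchanged.

```python
from itertools import product

# Toy NFA encoding: transitions as (state, letter, state) triples.

def accepts(trans, initials, finals, word):
    """Standard NFA subset simulation."""
    current = set(initials)
    for c in word:
        current = {q for (p, a, q) in trans if p in current and a == c}
    return bool(current & set(finals))

def pre_reverse(trans, initials, finals):
    """Pre-image under reverse: flip every transition and swap the
    initial and final states.  Same state set, so no blow-up."""
    return [(q, a, p) for (p, a, q) in trans], list(finals), list(initials)

# A accepts words over {a, b} ending in 'ab'.
trans = [(0, 'a', 0), (0, 'b', 0), (0, 'a', 1), (1, 'b', 2)]
rtrans, rinit, rfin = pre_reverse(trans, [0], [2])

# x = reverse(y) /\ x in L(A)  iff  y in L(pre_reverse(A))
for w in map(''.join, product('ab', repeat=4)):
    assert accepts(trans, [0], [2], w) == accepts(rtrans, rinit, rfin, w[::-1])
```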

zhilin02 commented 6 years ago

Cool.

Well done, Matt.

Just a minor remark on the complexity: to get a single-exponential nondeterministic space upper bound, we need a refined complexity analysis for conjunctive representations of FAs ((Q, delta), S) with S subseteq Q times Q.

Suppose in the beginning we have an FA A for x; then by using Matt's transformation, we get two regular constraints of size |T||A| for y.

Suppose there are n assignments, say

x0 = T1(x1), ..., x(n-1) = Tn(xn),

and initially we have a regular constraint A for x0,

then after applying Matt's transformation and removing the assignments one by one, in the end we get

2^n regular constraints for xn, each of them is of size |T1|...|Tn| |A|.

Let N be the maximum size of |T1|, ..., |Tn|.

1) If A is a normal FA, say (Q, q_0, F, delta), then the size of the product of FAs for xn is (N^n |A|)^{2^n}, which is 2-exp. Since the nonemptiness of FAs is solvable in nondeterministic logarithmic space, we get an exp nondeterministic space algorithm.

2) If A is a conjunctive representation of an FA, say ((Q, delta), S), then the size of the product of conjunctive FAs for xn, say ((Q', delta'), S'), satisfies |Q'| <= (N^n |A|)^{2^n}; but what is the size of S'?

As a matter of fact, |S'| = |S|. Therefore, the normal FA corresponding to ((Q', delta'), S') is of size N^{n 2^n |S|} |A|^{2^n |S|}, which is still doubly exponential, and its nonemptiness can be solved in single exponential space.

The argument is as follows:

let ((Q1, delta1), S1) and ((Q2, delta2), S2) be two conjunctive NFAs, suppose S1 = {(p1, q1), ..., (pk, qk)} and S2 = {(p1', q1'), ..., (pl', ql')} with k <= l,

then the product of ((Q1, delta1), S1) and ((Q2, delta2), S2)  is ((Q1 times Q2, delta1 times delta2), S'), where

S' = {((p1, p1'), (q1, q1')), ..., ((pk, pk'), (qk, qk')), ((p1, p(k+1)'), (q1, q(k+1)')), ..., ((p1, pl'), (q1, ql'))}, which is of size l.
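This padding construction can be written out as a short Python sketch (illustrative only; the pair names below are made up): the shorter pair list is padded with its own first pair, which is harmless because repeating a conjunct does not change the language.

```python
# Conjunctive FA ((Q, delta), S): a word is accepted iff it has a run
# from p to q for EVERY pair (p, q) in S.  In the product, the shorter
# pair list is padded with its first pair, so |S'| = max(|S1|, |S2|),
# not |S1| * |S2|.

def product_pairs(S1, S2):
    if len(S1) > len(S2):
        S1, S2 = S2, S1                  # ensure k = |S1| <= l = |S2|
    padded = S1 + [S1[0]] * (len(S2) - len(S1))
    return [((p, p2), (q, q2)) for (p, q), (p2, q2) in zip(padded, S2)]

S1 = [('p1', 'q1'), ('p2', 'q2')]
S2 = [('r1', 's1'), ('r2', 's2'), ('r3', 's3')]
assert len(product_pairs(S1, S2)) == max(len(S1), len(S2))  # = l, not k * l
```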

Zhilin


zhilin02 commented 6 years ago


Another remark: suppose we start with a conjunctive FA A.

If we do not apply Matt's transformation, then essentially we construct the product of the two regular constraints and get a conjunctive FA of size (|T||A|)^2.

After n iterations, we get a conjunctive FA of doubly exponential size, so we need 2-exp space to store it.


chentaolue commented 6 years ago

Cool! Can we just hide this transform via reverse? Essentially, isn't this a (probably trickier) way to get the pre-image of rb-automata?

zhilin02 commented 6 years ago


We could use the transformation as a smart way to compute the preimage.

But we need a more general definition of conjunctive FAs.

Namely, for a tuple (A1, A2) in the representation of some recognisable relation, if A1 = A11 times A12 and A2 = A21 times A22, then instead of representing the tuple as (A11 times A12, A21 times A22), we represent it as ((A11, A12), (A21, A22)), to save space.


matthewhague commented 6 years ago

@zhilin02

Suppose in the beginning we have a FA A for x, then by using the Matt's transformation, we get two regular constraints of size |T||A| for y. ... 2^n regular constraints for xn...

Isn't this an issue even for one-way automata? You already had an example for the replaceAll fragment (POPL 18) where we generated 2^n constraints on the input variable.

zhilin02 commented 6 years ago


Yes, that is why we compute a set of regular constraints for each variable, to save space during the computation, and we only conceptually do the product when we solve the nonemptiness problem at the end.

Otherwise, if we compute a single regular constraint, which is then a product of all the regular constraints for a variable, then we may have an exponential blow-up in the size of the FA.

But the decomposition of a regular constraint constructed from a reversal-bounded 2FT and an FA into the conjunction of two regular constraints is definitely nontrivial.


matthewhague commented 6 years ago

@zhilin02

Yes, that is why we compute a set of regular constraints for each variable, to save space during the computation, and we only conceptually do the product when we solve the nonemptiness problem in the end.

Thanks, this makes sense, but does it mean that our current generic algorithm is wrong, in that even for the replaceall fragment it could run into the doubly-exponential problem? That is, what is it about the encoding into one-way regular constraints that necessitates the reworking of the complexity argument? It looks to my superficial understanding like it's an issue with the algorithm in general.

zhilin02 commented 6 years ago


Thanks, this makes sense, but does it mean that our current generic algorithm is wrong, in that even for the replaceall fragment it could run into the doubly-exponential problem?

In the POPL paper, we already use this conjunctive representation ((Q, delta), S), instead of making a product automaton.

That is, what is it about the encoding into one-way regular constraints that necessitates the reworking of the complexity argument? It looks to my superficial understanding like it's an issue with the algorithm in general.

For reversal-bounded two-way transducers, this simple conjunctive representation is insufficient; we need a more general one: if A is the product of A1, ..., An, then we represent A as (A1, ..., An), instead of A1 times ... times An.

Note that in this more general conjunctive representation, if A = ((Q, delta), S) with S = {(p1, q1), ..., (pm, qm)}, then A is represented as a tuple (A1, ..., Am), where Ai = (Q, delta, pi, qi). The transition graphs of these Ai's are isomorphic, while in general, in (A1, ..., Am), the Ai's can have non-isomorphic transition graphs. That's why I call it a more general conjunctive representation. The complexity upper bound holds even if we interpret ((Q, delta), S) in this (slightly) redundant way.


matthewhague commented 6 years ago

@zhilin02

In the POPL paper, we already use this conjunctive representation ((Q, delta), S), instead of making a product automaton.

Ok, maybe to put my concern another way. Let's take my reduction from above, but remove the reverse function and a bunch of other stuff we don't need

    x1 = T1(y);
    x2 = T2(y);
    x = concat(x1, x2)

This program is in our CAV submission fragment, and behaves pretty much the same as the reversal-bounded reduction, bar the reverse and truncation of y. If we chain n of these together, i would expect, by your previous arguments, that we need to modify the paper to avoid a 2EXPSPACE algorithm. However, afaik the current claim of the paper is that this generic algorithm should be EXPSPACE. Does this mean there is a bug (that we will fix with the new complexity analysis)?

zhilin02 commented 6 years ago


According to the current generic decision procedure,

I suppose T1 and T2 are one-way transducers.

The current decision procedure works as follows: from x = concat(x1, x2) and x in A, we get A(q0, q1) and A(q1, q2) for x1 and x2 respectively, where q2 in F. Then from x2 = T2(y) and A(q1, q2), we get an FA for y of size |T2||A|. Similarly, from x1 = T1(y) and A(q0, q1), we get an FA for y of size |T1||A|. Therefore, in the end, we get two FAs for y, of sizes |T2||A| and |T1||A|. Note that even in the current decision procedure, we already do not compute the product of the FAs. Iterating n times, where n is the number of assignments, and letting N be the maximum size of the transducers and M the maximum size of the FAs in the original program, we get at most exponentially many FAs of size N^n M.

On the other hand, for reversal-bounded two-way transducers, e.g. with 2 reversals, if we do not apply your transformation, then from x2 = T2(y) we get an FA for y of size (|T2||A|)^2. Iterating this n times, we get exponentially many FAs of size roughly N^{2^n} M^{2^n}, i.e. of doubly exponential size. It is (|T2||A|)^2, instead of |T2||A|, that makes the trouble. But instead of one FA of size (|T2||A|)^2, we can represent it as (A1, A2) where each has size |T2||A|; then we avoid the doubly exponential space blow-up after iterating n times.
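The space difference between the two representations can be sketched with a toy size count (the numbers N and M below are hypothetical stand-ins for the size bounds |T| and |A|):

```python
# Toy size accounting over n = 5 iterations.
N, M = 4, 8   # hypothetical sizes |T| and |A|

single = M      # one stored FA; each two-way step squares: s -> (N * s)^2
factors = [M]   # conjunctive pair representation: each step replaces a
                # factor of size s by TWO factors of size N * s
for _ in range(5):
    single = (N * single) ** 2
    factors = [N * s for s in factors for _ in (0, 1)]

# The single FA is doubly exponential in n ...
assert single == N ** (2 ** 6 - 2) * M ** (2 ** 5)
# ... while the pair representation keeps 2^n factors of size N^n * M:
# exponentially many singly-exponential objects, storable in exp space.
assert len(factors) == 2 ** 5
assert all(s == N ** 5 * M for s in factors)
```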

I hope I have convinced you that the current decision procedure works well for one-way transducers; it is only problematic for reversal-bounded two-way transducers.


matthewhague commented 6 years ago

@zhilin02

Thanks for the explanation -- the confusion may be that we have different things in mind:

On the other hand, for reversal bounded two-way transducers, e.g. 2 reversal, if we do not apply your transformation...

I was thinking that the complexity argument needs changing if we do apply the transformation. In the case where we do not, i am convinced of the blow up.

Are you saying that we can fix the complexity without the transformation by generalising the conjunctive representation?

zhilin02 commented 6 years ago


No, we need both your transformation and the generalised conjunctive representation.

Let me use an example to illustrate this.

x = T(y, z) /\ x in A, where T is a 2PT with one-reversal (turn right, then left) and A = (Q, delta, q0, F)

The current procedure works as follows: guess a final state of A, say q, and a set of state pairs of A, say S, and construct B_{T, S, q}. B_{T, S, q} is obtained by constructing a 2FA from T and A, then turning the 2FA into an FA, which is of size (|T||A|)^2. The pre-image is a set of tuples (B_{T, S, q}, (Q, delta, S)) for S subseteq Q times Q and q in F. If we iterate this n times, then we need 2-exp space.

The adapted procedure, applying your transformation: guess q in F and S subseteq Q times Q; instead of constructing one FA B_{T, S, q}, we construct two FAs, say B_{T, S, q, 1} and B_{T, S, q, 2}, each of size |T||A|, to represent B_{T, S, q}. Then the pre-image is a set of tuples ((B_{T, S, q, 1}, B_{T, S, q, 2}), (Q, delta, q, {q'})_{(q, q') in S}) for S subseteq Q times Q and q in F, where B_{T, S, q} is conjunctively represented by (B_{T, S, q, 1}, B_{T, S, q, 2}), and the representation (Q, delta, S) is changed to (Q, delta, q, {q'})_{(q, q') in S}.

Is this clear to you ?


matthewhague commented 6 years ago

@zhilin02

Is this clear to you ?

I'm afraid not. I am trying to follow the details in your reply, but really i have a more fundamental misunderstanding. This is what i see as the case

  1. In the fragment where all transducers are one-way, we have an EXPSPACE algorithm (without modifying the algorithm).
  2. My reduction (for all intents and purposes) reduces a formula in the two-way fragment into a formula in the one-way fragment (of polynomial size).
  3. This reduced formula requires 2EXPSPACE to solve (unless we modify the algorithm).

Point 3 appears to conflict with point 1 to me.

matthewhague commented 6 years ago

@zhilin02 Perhaps this is the sticking point:

Then the pre-image is a set of tuples ((B{T, S, q, 1}, B{T, S, q, 2}), (Q, delta, q, {q'})_{(q, q') in S} )

Here it seems like the single-reversal transducer T is being treated separately from the rest of the formula. That is, we keep T in the formula, then compute the pre-image by an algorithm based on my reduction, then package up the results into a recognisable relation to be passed back to the processing of the main formula.

In my mind we would first replace all the uses of single-reversal transducers with the reduced formula, and then proceed exactly as if we had

    x1 = T1(y)
    x2 = T2(y)
    x = concat(x1,x2)

in the formula. That is, we still get B{T, S, q, 1} and B{T, S, q, 2}, but we don't need to pair them together into the result of a single pre-image computation -- we just get two FAs for y.

zhilin02 commented 6 years ago

  1. Point 1 is incorrect: reverse is not one-way. But suppose we ignore this fact and go to point 3.

  2. If the reverse function is the only two-way transducer in the constraint, then the current algorithm also works in EXPSPACE, so there is no contradiction: from x = reverse(y) /\ x in A, the pre-image is of size |A|.

But this is not the case for arbitrary two-way transducers with one reversal. For instance, consider a two-way transducer T which, on input w, outputs w w^r. Then from x = T(y) /\ x in A, if we compute the pre-image without your reduction, we get a two-way FA with one reversal, of size |T||A| <= c|A| for some constant c, and transform it into a one-way FA of size (c|A|)^2.
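For this w w^r example the squared construction is concrete enough to sketch (illustrative Python; the NFA encoding is a toy one): w is in the pre-image iff A can read w forward from an initial state while the transition-reversed A reads w from a final state, and the two runs meet in a common midpoint state. This is exactly a product of A with reverse(A), hence the |A|^2 state bound.

```python
# Pre-image of x = T(y) /\ x in A, where T outputs w w^r on input w.
# w is in the pre-image iff w . reverse(w) is in L(A): simulate A forward
# on w from the initial states, and the reversed A forward on w from the
# final states, accepting iff the runs can meet in the same midpoint state.

def in_preimage(trans, initials, finals, w):
    fwd, bwd = set(initials), set(finals)
    for c in w:
        fwd = {q for (p, a, q) in trans if p in fwd and a == c}
        bwd = {p for (p, a, q) in trans if q in bwd and a == c}
    return bool(fwd & bwd)

# A accepts words containing at least one 'b'.
trans = [(0, 'a', 0), (0, 'b', 1), (1, 'a', 1), (1, 'b', 1)]

assert not in_preimage(trans, [0], [1], "a")   # "aa" has no 'b'
assert in_preimage(trans, [0], [1], "ab")      # "abba" contains 'b'
```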



matthewhague commented 6 years ago

@zhilin02 thanks, this addresses my concern/confusion and i'm happy to address the blow-up by generalising the conjunctive representation. Sorry for the long discussion!

My hunch was that it may be possible to eliminate reversal-bounded two-way transducers by introducing reverse as a primitive. I.e. we'd take a formula using two-way transducers, and reduce it to a formula using one-way transducers and the reverse primitive. I suspect this would get quite hairy though and not be super clean.

chentaolue commented 6 years ago

I am a bit confused:

reverse is not one-way, but in terms of pre-image computation, it can be treated as a one-way transducer, right?

For x=T(y) and x in A, where T is a 1-rb two way transducer, applying Matt's transform, we get

yq = truncate(y); xf = Tf(yq); xb = Tb(yq); xb' = reverse(xb); x = concat(xf, xb')

can we get an automaton A_y for y such that the size of A_y is bounded by |T||A|, ignoring those intermediate variables yq, xf... ?

@zhilin02 , maybe the answer is: with the current way of representing A, the answer is NO, but we could change the way of representing A so that the answer becomes YES.

matthewhague commented 6 years ago

Perhaps we should have expected #13 to get difficult!

chentaolue commented 6 years ago

not sure whether this matters,

yq = truncate(y); xf = Tf(yq);

here, one should consider a prefix of yq for the backward part, not yq itself. So z = T_prefix(yq) and xb = Tb(z); the rest is fine.

xb = Tb(yq); xb' = reverse(xb); x = concat(xf, xb')

zhilin02 commented 6 years ago

Hi Matt,

Many thanks to you as well.

Your reduction to one-way transducers + reverse is cute. I will adopt this approach.

I will not introduce the very general conjunctive representation, but just use ((Q, delta), S).

Best,

Zhilin


matthewhague commented 6 years ago

Closed, delegated to #17.