Open · saraedum opened this issue 2 years ago
If you only need the Eisenstein polynomial over Qp, you can do:
sage: K.base().polynomial()
e54^54 - 9*e54^48 - 9*e54^45 + 9*e54^39 - 3*e54^36 + 27*e54^28 + 27*e54^27 + 54*e54^25 + 27*e54^24 + 54*e54^22 + 27*e54^19 + 36*e54^18 + 27*e54^16 + 54*e54^15 + 27*e54^13 + 18*e54^12 + 54*e54^4 + 54*e54^3 + 3
That's the defining polynomial of the extension of Q that the Henselization is using internally.
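A quick sanity check (my own, not from the original session) that this polynomial is indeed Eisenstein at 3, i.e., all non-leading coefficients have positive 3-adic valuation and the constant term has valuation exactly 1:
sage: f = K.base().polynomial()
sage: v3 = QQ.valuation(3)
sage: all(v3(c) >= 1 for c in f.list()[:-1]) and v3(f[0]) == 1
True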
We can verify that the polynomial actually splits into linear factors over this field:
sage: R.<e54> = Qp(3)[]
sage: L.<e54> = Qp(3).extension(K.base().polynomial().change_ring(Qp(3)))
sage: S.<T> = L[]          # these two lines are implicit in the original session
sage: v = L.valuation()
sage: v.montes_factorization(T^9 - 3, assume_squarefree=True, required_precision=1)
((1 + O(e54^1080))*T + e54^6 + e54^10 + 2*e54^14 + 2*e54^15 + 2*e54^18 + e54^19 + e54^21 + 2*e54^22 + e54^27 + 2*e54^28 + e54^29 + e54^30 + 2*e54^31 + e54^32 + 2*e54^33 + e54^36 + e54^37 + e54^38 + e54^39 + 2*e54^40 + e54^41 + 2*e54^42 + 2*e54^43 + e54^44 + 2*e54^45 + 2*e54^46 + e54^47 + e54^49 + 2*e54^53 + O(e54^1086)) * ((1 + O(e54^1080))*T + e54^6 + e54^10 + 2*e54^14 + 2*e54^15 + 2*e54^18 + e54^19 + e54^21 + 2*e54^22 + e54^27 + 2*e54^28 + e54^29 + e54^30 + 2*e54^31 + e54^32 + e54^33 + e54^36 + e54^38 + e54^39 + 2*e54^40 + 2*e54^41 + 2*e54^43 + e54^44 + e54^46 + e54^47 + 2*e54^48 + 2*e54^49 + 2*e54^53 + O(e54^1086)) * ((1 + O(e54^1080))*T + e54^6 + e54^10 + 2*e54^14 + 2*e54^15 + 2*e54^18 + e54^19 + e54^21 + 2*e54^22 + e54^27 + 2*e54^28 + e54^29 + e54^30 + 2*e54^31 + e54^32 + e54^36 + 2*e54^37 + e54^38 + e54^39 + 2*e54^40 + e54^42 + 2*e54^43 + e54^44 + e54^45 + e54^47 + e54^48 + 2*e54^53 + O(e54^1086)) * ((1 + O(e54^1080))*T + e54^6 + e54^10 + 2*e54^14 + 2*e54^18 + 2*e54^19 + e54^21 + 2*e54^22 + 2*e54^23 + e54^24 + 2*e54^28 + e54^29 + 2*e54^30 + e54^31 + 2*e54^32 + 2*e54^33 + e54^37 + 2*e54^38 + e54^39 + 2*e54^40 + 2*e54^42 + 2*e54^43 + e54^44 + 2*e54^45 + e54^47 + 2*e54^48 + e54^49 + e54^51 + 2*e54^52 + e54^53 + O(e54^1086)) * ((1 + O(e54^1080))*T + e54^6 + e54^10 + 2*e54^14 + 2*e54^18 + 2*e54^19 + e54^21 + 2*e54^22 + 2*e54^23 + e54^24 + 2*e54^28 + e54^29 + 2*e54^30 + e54^31 + 2*e54^32 + 2*e54^37 + 2*e54^38 + e54^39 + 2*e54^40 + 2*e54^41 + 2*e54^42 + 2*e54^43 + e54^44 + e54^45 + 2*e54^46 + e54^47 + 2*e54^50 + 2*e54^51 + 2*e54^52 + e54^53 + O(e54^1086)) * ((1 + O(e54^1080))*T + e54^6 + e54^10 + 2*e54^14 + 2*e54^18 + 2*e54^19 + e54^21 + 2*e54^22 + 2*e54^23 + e54^24 + 2*e54^28 + e54^29 + 2*e54^30 + e54^31 + 2*e54^32 + e54^33 + 2*e54^38 + e54^39 + 2*e54^40 + e54^41 + 2*e54^42 + 2*e54^43 + e54^44 + e54^46 + e54^47 + e54^48 + 2*e54^49 + e54^50 + 2*e54^52 + e54^53 + O(e54^1086)) * ((1 + O(e54^1080))*T + e54^6 + e54^10 + 2*e54^14 + e54^15 + 2*e54^18 + e54^21 + 2*e54^22 + e54^23 + 2*e54^27 + e54^29 + 2*e54^32 + 2*e54^33 + e54^36 + 2*e54^39 + e54^40 + e54^41 + 2*e54^43 + e54^44 + 2*e54^45 + 2*e54^47 + 2*e54^49 + 2*e54^50 + e54^51 + e54^52 + O(e54^1086)) * ((1 + O(e54^1080))*T + e54^6 + e54^10 + 2*e54^14 + e54^15 + 2*e54^18 + e54^21 + 2*e54^22 + e54^23 + 2*e54^27 + e54^29 + 2*e54^32 + e54^33 + e54^36 + 2*e54^37 + 2*e54^39 + e54^40 + 2*e54^41 + 2*e54^42 + 2*e54^43 + e54^44 + 2*e54^47 + 2*e54^48 + e54^50 + e54^51 + e54^52 + O(e54^1086)) * ((1 + O(e54^1080))*T + e54^6 + e54^10 + 2*e54^14 + e54^15 + 2*e54^18 + e54^21 + 2*e54^22 + e54^23 + 2*e54^27 + e54^29 + 2*e54^32 + e54^36 + e54^37 + 2*e54^39 + e54^40 + e54^42 + 2*e54^43 + e54^44 + e54^45 + 2*e54^47 + e54^48 + e54^49 + e54^51 + e54^52 + O(e54^1086))
sage: len(_)
9
If you want to understand how this works, it's implemented here: https://github.com/MCLF/henselization/blob/bded9bafb454d12223f6b54b132a5b1df353d5ec/henselization/sage/rings/padics/henselization/henselization.py#L1444
The idea (as far as I recall) is to take a uniformizing element (which can be constructed from the MacLane description of the valuation, see https://github.com/MCLF/henselization/blob/bded9bafb454d12223f6b54b132a5b1df353d5ec/henselization/sage/rings/padics/henselization/henselization.py#L557, and then you'll have to follow the implementation into the SageMath code in sage.rings.valuation.) We then compute the charpoly of this element and hope that it's irreducible. If it's not, we perturb the uniformizing element a bit to get another uniformizing element. Repeating this process, we eventually have to find an irreducible charpoly because of the primitive element theorem.
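Here is a toy version of that idea in plain SageMath (my sketch, using sage.rings.valuation rather than the Henselization internals): take a uniformizer of a totally ramified extension and check that the charpoly of its multiplication matrix is irreducible.
sage: R.<t> = QQ[]
sage: K.<t> = QQ.extension(t^3 - 3)
sage: v = K.valuation(3)
sage: pi = v.uniformizer()
sage: pi.matrix().charpoly().is_irreducible()
True
In this toy case $v(\pi) = 1/3$ forces $[\mathbb{Q}(\pi):\mathbb{Q}] = 3$, so the charpoly is automatically irreducible; in general the perturbation step is what saves you.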
If you wanted to write only the relative extension with an Eisenstein polynomial, the same approach should work. However, the Henselization code cannot compute the matrix of an element w.r.t. a relative extension yet; it currently fails because of this: https://github.com/MCLF/henselization/blob/bded9bafb454d12223f6b54b132a5b1df353d5ec/henselization/sage/rings/padics/henselization/henselization.py#L1074
Even with that fixed, I am not sure that the arithmetic of the elements of the Henselization would allow us to compute a charpoly directly.
You can, however, also use the valuation code in SageMath directly to compute that polynomial:
sage: R.<a9> = QQ[]
sage: K.<a9> = QQ.extension(a9^9 - 3)
sage: R.<a54> = K[]
sage: L.<a54> = K.extension(a54^6 + a9^3*a54^3 + a9^6)
sage: v = L.valuation(3)
sage: f = v.uniformizer().matrix(base=K).charpoly()
sage: [v(c) for c in f.coefficients()]
[1/9, 1/3, 2/9, 2/9, 1/9, 7/9, 0]
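With $v$ normalized so that $v(3)=1$ and $e(K/\mathbb{Q}_3)=9$, a uniformizer of $K$ has valuation $1/9$, so these values say exactly that $f$ is Eisenstein over $K$. A quick check along those lines (my addition):
sage: all(v(c) >= 1/9 for c in f.list()[:-1]) and v(f[0]) == 1/9
True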
Is there any self-contained reference that explains the mechanism and theory behind the Henselization package? I'm lost jumping through the code. In detail, I don't understand how the uniformizer is given by the limit valuation, and what's the matrix of it?
Is there any self-contained reference that explains the mechanism and theory behind the Henselization package?
There is no proper documentation for this package unfortunately. The underlying MacLane valuation theory is explained in Chapter 4 of my thesis.
I'm lost jumping through the code.
Sorry for that. I never had the time to write something up to specifically explain how this code works.
In detail, I don't understand how the uniformizer is given by the limit valuation
Given an extension $L = K[x]/(G)$, suppose the valuation on $L$ is given by a limit valuation with approximants $[v_0, v_1(\phi_1) = \lambda_1, \ldots, v_n(\phi_n) = \lambda_n]$ and value group $\frac{1}{n}\mathbb{Z}$. To get a uniformizer, find some $\mathbb{Z}$-linear combination of the $\lambda_i$ (and of $v(\pi_K)$) that sums to $1/n$. Multiplying the same combination of powers of the $\phi_i$ (and of $\pi_K$) gives you a uniformizer.
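A small illustration of that recipe in plain SageMath (my example): for $K = \mathbb{Q}[t]/(t^2-8)$ with the 2-adic valuation we have $v(t) = 3/2$, and subtracting $v(2) = 1$ gives $1/2$, so $t/2$ is a uniformizer.
sage: R.<t> = QQ[]
sage: K.<t> = QQ.extension(t^2 - 8)
sage: v = K.valuation(2)
sage: v(t), v(2)
(3/2, 1)
sage: v(t/2)  # 3/2 - 1 = 1/2 generates the value group 1/2*ZZ
1/2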
and what's the matrix of it?
Sorry, I meant the matrix of multiplication by that element. (Given an extension $L/K$, consider $L$ as a $K$-vector space and fix a basis $B = (b_1, \ldots, b_n)$. Given an element $x$ in $L$, write the $x b_i$ in the basis $B$ and then just organize this in a matrix.) The implementation of this is very short but might be cryptic:
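A minimal illustration in plain SageMath (my example, not the Henselization code): the rows of the matrix are the coordinate vectors of the $x b_i$.
sage: R.<s> = QQ[]
sage: K.<s> = QQ.extension(s^2 - 2)
sage: x = 1 + s
sage: matrix([(x * b).vector() for b in [K(1), s]])  # rows are x*1 and x*s in the basis (1, s)
[1 1]
[2 1]
sage: x.matrix()  # the built-in shortcut for the same matrix
[1 1]
[2 1]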
I spent some time reading Section 4 of your thesis, where you mainly focus on function fields (am I right?) instead of the $p$-adic case (the theory also works in the mixed-characteristic setting, just like the Henselization package does, right?). The key to constructing such an Eisenstein polynomial is to find a uniformizer of the extension (basically, this is equivalent to characterizing the valuation on the extension), and the matrix part is easy to understand after reviewing the code. I'm still reading Section 4.6 of your thesis intensively. Maybe some further questions will appear. :neckbeard:
Yes, I wrote Section 4 with function fields in mind so I could apply the theory to describe models of curves. But it works the same for the p-adic case.
Let $K=\mathbb{Q}_3(\zeta_3)$, with the uniformizer of its valuation $v_K$ given by $\zeta_3-1$. Let $G=t^3-3$. I want to extend $v_K$ to $K[t]/(G)$, i.e., find a discrete pseudo-valuation $v$ on $K[t]$ which extends the Gauss valuation $v_0$ induced by $v_K$ and sends exactly $(G)$ to infinity. Since $K$ is complete, by Corollary 4.67 $v$ is an infinite inductive valuation. (Right?) The following is an imitation of your Example 4.78:
What's wrong here?
By Lemma 4.77, $3\lambda_1$ should equal $2$, i.e. $\lambda_1=\frac{2}{3}$.
You get $\lambda_1 = \frac{1}{3}$: since $3v(t) = v(t^3) = v(3) = 1$, you get $v(t) = \frac{1}{3}$. If I understood your example correctly, you somehow got a 2 in your computation that looks wrong to me.
To go on, we construct the key polynomial over $v_1$.
The reduction of G is zero. That looks correct to me. To construct the key polynomial via reduction you need to shift $G$ to valuation zero first. You should find that the reduction is something trivial like $y_1$, which means that your next key polynomial will have degree 3. With these degrees, nothing else could have happened here: $G$ has degree 3 and you already found a ramification of degree 3 with $\lambda_1$, so $G$ must be a key polynomial now.
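Here is that reduction step spelled out in plain SageMath (my sketch): shift $G$ to valuation zero with an equivalence unit, reduce, and observe that the reduction is linear, so the next key polynomial has degree $3 \cdot \deg \phi_1 = 3$.
sage: R.<t> = QQ[]
sage: v0 = GaussValuation(R, QQ.valuation(3))
sage: v1 = v0.augmentation(t, 1/3)
sage: G = t^3 - 3
sage: v1(G)
1
sage: F = v1.reduce(G * v1.equivalence_unit(-1))  # shift G to valuation zero first
sage: F.degree()
1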
Here's the computation with SageMath:
sage: K.<ζ> = NumberField(x^2 + x + 1)
sage: vK = K.valuation(3)
sage: vK.uniformizer()
ζ + 2
sage: R.<t> = K[]
sage: G = t ^ 3 - 3
sage: L.<t> = K.extension(G)
sage: vK.extensions(L)
[3-adic valuation]
sage: v = _[0]
sage: w = v._base_valuation
sage: w
[ Gauss valuation induced by 3-adic valuation, v(t) = 1/3 , … ]
sage: w._approximation
[ Gauss valuation induced by 3-adic valuation, v(t) = 1/3 ]
sage: w._improve_approximation()
sage: w._approximation
[ Gauss valuation induced by 3-adic valuation, v(t) = 1/3, v(t^3 - 3) = +Infinity ]
You get $\lambda_1 = \frac{1}{3}$: since $3v(t) = v(t^3) = v(3) = 1$, you get $v(t) = \frac{1}{3}$. If I understood your example correctly, you somehow got a 2 in your computation that looks wrong to me.
Well, it seems that I normalized the valuation on $K$ to $v_K(\zeta_3-1)=1$, i.e. $v_K(3)=2$, which leads to the problem. But why is it not acceptable? Instead of viewing $K$ as an extension of $\mathbb{Q}_3$ and using the $3$-adic valuation, using the normalized valuation on $K$ seems to be the natural choice. As you wrote in the Conventions and Standing Assumptions part of your article,
If we do not discuss extensions of valued fields, then all valuations are normalized such that a uniformizer $\pi\in K$ has valuation $v_K(\pi)=1$.
You can normalize like that. It should not make a difference here.
Let $v_n$ be an approximation of the pseudo-valuation over $(K,v)$ which sends $G$ to infinity. Now we need to construct the next key polynomial $\phi_{n+1}$. This can be divided into the following steps, if I understand the algorithm correctly:
To 1. You are right, the $S$ needs to be chosen appropriately, otherwise the divisibility changes. I believe that the lemmata in Section 4.1.2 are essentially talking about that. Choosing an $S$ as constructed in these lemmata should work, but I forget the details now.
To 2. Exactly, that choice should not matter.
To 3. What to do next: I believe that you need to multiply by some $S$ to shift the valuation to what it needs to be for a key polynomial, and then fix the leading coefficient (because key polynomials are assumed to be monic). Does that make sense? I can try to figure out how this works exactly, but I don't remember the details right now.
See here for the math-rendered version.
To 3. What to do next: I believe that you need to multiply by some $S$ to shift the valuation to what it needs to be for a key polynomial, and then fix the leading coefficient (because key polynomials are assumed to be monic).
Here is my thought: our goal is to find $\phi_{n+1}$ which $v_n$-divides $G$. By multiplying $G$ by a $v_n$-unit with the proper valuation, we may assume that $v_n(G)=0$. We factorize $G$ in $k_{v_n}$ and get a factor $\bar{\phi}_{n+1}$. By Theorem 13.1 of [Mac1936I], there exists a key polynomial $\phi_{n+1}$ such that the reduction of $\phi_n^{\tau_n\flat_n\text{deg}\ \bar{\phi}_{n+1}}\,\phi_{n+1}$ to $k_{v_n}$ is $\bar{\phi}_{n+1}$. Here $\phi_n^{\tau_n\flat_n\text{deg}\ \bar{\phi}_{n+1}}$ is a $v_n$-unit (right?) used to shift the valuation. Up to now, is everything right?
As we have discussed, the first step of the construction in Theorem 13.1 of [Mac1936I] is lifting $\bar{\phi}_{n+1}$ to $\tilde{\phi}_{n+1}\in K[t]$. After that,
multiply $\tilde{\phi}_{n+1}$ by $\phi_{n}^{\tau_n\sharp_n \text{deg}\ \bar{\phi}_{n+1}}$ and in the expansion of the resulting product drop all terms not of minimum value and then replace the leading coefficient of $\phi_n$ by $1$.
I'm not sure what's happening here. The term $\phi_{n}^{\tau_n\sharp_n \text{deg}\ \bar{\phi}_{n+1}}$ seems to be used to shift the valuation to $0$? The following steps are also confusing, especially
replace the leading coefficient of $\phi_n$ by $1$.
Of course, it's used to make the polynomial monic. But why should I replace the leading coefficient of a key polynomial instead of that of the expanded product?
P.S.: Sorry for the poor math reading experience. I'm using a Chrome extension (which I modified a little) to render MathJax. Next time I will try to use GitHub's math rendering, as you do. P.P.S.: Thanks for your patience. It's really helpful.
No worries about the math formatting. I am using some silly hack because GitHub does not support LaTeX at all, unfortunately. (I am fine with the way you are writing your comments, but just in case you want to try it out: I am using <img src="https://render.githubusercontent.com/render/math?math=FORMULA"> to typeset a formula. Note that you have to replace every + with %2B.)
Here is my thought: our goal is to find $\phi_{n+1}$ which $v_n$-divides $G$. By multiplying $G$ by a $v_n$-unit with the proper valuation, we may assume that $v_n(G)=0$. We factorize $G$ in $k_{v_n}$ and get a factor $\bar{\phi}_{n+1}$. By Theorem 13.1 of [Mac1936I], there exists a key polynomial $\phi_{n+1}$ such that the reduction of $\phi_n^{\tau_n\flat_n\text{deg}\ \bar{\phi}_{n+1}}\,\phi_{n+1}$ to $k_{v_n}$ is $\bar{\phi}_{n+1}$. Here $\phi_n^{\tau_n\flat_n\text{deg}\ \bar{\phi}_{n+1}}$ is a $v_n$-unit (right?) used to shift the valuation. Up to now, is everything right?
It's been a long time since I thought about the details here. To be honest, I am mostly looking at the source code to understand what is going on with such details.
I don't understand your statement about the reduction of $\phi_n^{\tau_n\flat_n\text{deg}\ \bar{\phi}_{n+1}}\,\phi_{n+1}$. This expression has positive valuation and therefore reduction 0.
As we have discussed, the first step of the construction in Theorem 13.1 of [Mac1936I] is lifting $\bar{\phi}_{n+1}$ to $\tilde{\phi}_{n+1}\in K[t]$. After that,
Yes. That's implemented in lift.
multiply $\tilde{\phi}_{n+1}$ by $\phi_{n}^{\tau_n\sharp_n \text{deg}\ \bar{\phi}_{n+1}}$ and in the expansion of the resulting product drop all terms not of minimum value and then replace the leading coefficient of $\phi_n$ by $1$.
I don't think that's correct. The implementation of this is in lift_to_key. Let's try to understand what this does.
It starts with:
coefficients = self.lift(F, report_coefficients=True)[:-1]
So, it lifts all but the leading coefficient of F, which you call $\bar{\phi}_{n+1}$. Then
coefficients = [c*self._Q(F.degree()) for i,c in enumerate(coefficients)] + [self.domain().one()]
It takes 1 as the lift of the leading coefficient that was missing earlier, and then multiplies this _Q onto all the coefficients. I forget why exactly this is necessary, but I could imagine that it becomes clear when reading the documentation of reduce.
Finally, it creates the actual key polynomial by taking the polynomial $\sum_i c_i T^i$, where the $c_i$ are the coefficients from above, and replacing $T$ with a power of $\phi_n$:
ret = self.domain().change_ring(self.domain())([c for c in coefficients])(self.phi()**tau)
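Putting these pieces together, here is a small end-to-end run (my example) with the valuation $v_1 = [v_0, v_1(t) = 1/3]$ from earlier in this thread: lifting a linear residue polynomial produces the next key polynomial of degree 3.
sage: R.<t> = QQ[]
sage: v0 = GaussValuation(R, QQ.valuation(3))
sage: v1 = v0.augmentation(t, 1/3)
sage: F = v1.residue_ring().gen() + 2  # a linear residue polynomial
sage: phi2 = v1.lift_to_key(F)
sage: phi2.degree(), v1.is_key(phi2)
(3, True)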
Sorry for just giving code references here. I hope this helps to make some sense of all this. I'll have more time in a few days; if you need more help to understand how this all works, I am happy to work through the details again. (I am working on code related to that, so I'll eventually have to do that anyway.)
With your help, I have understood most parts of the construction of lift_to_key. I still have two little questions.
Let me quote MacLane's theorem here:
We use the same notation as above. MacLane says to “drop all terms not of minimum value”. This is not completely reflected in your code, although I don't think the terms with higher valuation must be erased thoroughly. In $$\phi_n^{m\tau}+\sum_{k=0}^{m-1}Q^{\text{deg}\bar{\phi}_n}\cdot a_{k\tau}(x)\cdot\phi_n^{k\tau},$$ the degree of $Q^{\text{deg}\bar{\phi}_n}\cdot a_{k\tau}(x)$ may exceed $\text{deg}\ \phi_n$ (this is why you reduce the second-highest coefficient mod $\phi_n$ to ensure the result is monic). Consequently, some terms with non-minimal valuation might be kept. Why don't they matter?
Sorry for the late reply, I missed the notification of your comment. I remember that the exact proof here also puzzled me for a while, but I am not sure anymore what my conclusion was. I vaguely remember that I first did exactly what the proof told me to do and it did not work out. Whether that was my mistake or a problem in the proof, I don't recall anymore.
It's been too long since I thought about this, so I'll have to ask a naive question. Why would terms of non-minimal valuation be a problem in the proof?
Btw., it might be to some extent relevant to your original question that we implemented general extensions of p-adics in https://gitlab.com/sagemath/sage/-/merge_requests/32. There's still a lot of work to do, but it already works for quite big degrees and it also rewrites the original extension as totally ramified over unramified.
It's been too long since I thought about this, so I'll have to ask a naive question. Why would terms of non-minimal valuation be a problem in the proof?
After struggling with this theorem for several days, I think I have figured out the trick here. Let's recall MacLane's construction again: given an inductive valuation $v$ and a monic irreducible polynomial $\psi(y)\neq y$, we are going to find a key polynomial $\phi$ over $v$ which reduces to $\psi$ after being multiplied by some equivalence unit.
Now the essence of the third step is to make sure that after this operation, $\deg f^\prime$ equals its effective degree. This ensures that the term of highest degree won't vanish when passed to the reduction map. And the fourth step works only when $\deg f^\prime$ equals its effective degree.
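For a concrete instance of this (my example): with $v_1 = [v_0, v_1(t)=\frac{1}{3}]$ over $\mathbb{Q}_3$, the polynomial $f = 3t^3 + t$ has $v_1(3t^3) = 2 > v_1(t) = \frac{1}{3}$, so its effective degree is $1 < \deg f = 3$ and the cubic term vanishes under reduction; for $f = t^3 - 3$ the minimum $v_1(f) = 1$ is attained by the leading term $t^3$, so degree and effective degree agree and the leading term survives the reduction map.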
I have a related doubt when reading your code (I'm not a fan of Python, so... :-P). In https://github.com/sagemath/sage/blob/1e8ba0aac4cd2975141950daa5c9e0d294b070e4/src/sage/rings/valuation/augmented_valuation.py#L1448 you lift the coefficients. Can I read this as: you just replace $u$ by the variable (of the residue ring of the base valuation of $v$), where $u$ is the root of $\psi$, i.e. the residue field generator? (Since the reduction map acts on the coefficients by evaluating their reduction w.r.t. the base valuation of $v$ at $u$, undoing it basically replaces $u$ by the variable.)
I don't think this is Python's fault here. The complication of that code comes from inconsistencies in ring extensions in SageMath (that we actually worked on fixing the past week). Anyway, I think what you are saying is correct.
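As a sanity check of that reading (my example), reduce and lift are inverse to each other on the residue ring:
sage: R.<t> = QQ[]
sage: v0 = GaussValuation(R, QQ.valuation(3))
sage: v1 = v0.augmentation(t, 1/3)
sage: F = v1.residue_ring().gen() + 1
sage: v1.reduce(v1.lift(F)) == F
True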
I vaguely remember that I first did exactly what the proof told me to do and it did not work out.
I implemented MacLane's original construction. It seems to be faultless. https://gist.github.com/YijunYuan/7604b845459a77eff26585f52f692f7a
Nice. It would be interesting to see if the whole test suite passes with this approach.
Do you see any advantages to your approach?
Do you see any advantages to your approach?
No... I just wanted to verify that MacLane is right. Actually, I think your implementation might be more efficient. Maybe some benchmarks can be run once all the test suites pass.
The other interesting part would be which implementation produces smaller coefficients typically.
I have to ask a silly question: why does the whole algorithm terminate? Since mac_lane_step, i.e., a single step of the approximation, doesn't guarantee a strict increase of the degree of the key polynomial (until the degree equals $\deg G$), is there a case where, in the process of approximation, the degree of the key polynomials stagnates and never grows larger?
Sure. The degree stagnates while the valuation goes to infinity but the process can go on forever and never get there, e.g., when the polynomial does not factor over the uncompleted ground field. In SageMath this is handled by https://github.com/sagemath/sage/blob/develop/src/sage/rings/valuation/limit_valuation.py#L358
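For illustration (my example): $x^2 + 1$ is irreducible over $\mathbb{Q}$ but splits over $\mathbb{Q}_5$, so each of the two approximants keeps a degree-1 key polynomial forever and only the valuation assigned to the key can be improved; mac_lane_approximants stops as soon as the approximants are separated.
sage: R.<x> = QQ[]
sage: QQ.valuation(5).mac_lane_approximants(x^2 + 1)
[[ Gauss valuation induced by 5-adic valuation, v(x + 2) = 1 ],
 [ Gauss valuation induced by 5-adic valuation, v(x + 3) = 1 ]]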
Sure. The degree stagnates while the valuation goes to infinity but the process can go on forever and never get there, e.g., when the polynomial does not factor over the uncompleted ground field. In SageMath this is handled by https://github.com/sagemath/sage/blob/develop/src/sage/rings/valuation/limit_valuation.py#L358
Sorry for the late reply. After some discussion and calculation with my supervisor, we have come to the conclusion that MacLane's theory is not very useful for the problem I am mainly concerned with, except that it guarantees some kind of finiteness. So I am afraid this is where I should stop. Nevertheless, I thank you sincerely for your help, and I'm still willing to test my version of lift_to_key.
I created https://trac.sagemath.org/ticket/33623 to track this issue in SageMath. If you feel like implementing it within SageMath, I would be curious to see how it performs. However, if we don't think that the current implementation is faulty or we don't expect your version to perform much better, it's probably not worth it to implement it within SageMath.
The easiest way to benchmark whether this is beneficial is probably to add your file to the corresponding place in henselization/sage here and see whether the examples in https://github.com/MCLF/henselization/tree/master/henselization/benchmarks improve with that choice of coefficients.
The easiest way to benchmark whether this is beneficial is probably to add your file to the corresponding place in henselization/sage here and see whether the examples in https://github.com/MCLF/henselization/tree/master/henselization/benchmarks improve with that choice of coefficients.
The asv command gives me an error:
❯ sage -sh -c "asv run --quick"
Traceback (most recent call last):
File "/usr/local/bin/asv", line 33, in <module>
sys.exit(load_entry_point('asv==0.5.1', 'console_scripts', 'asv')())
File "/home/yijunyuan/.sage/local/lib/python3.10/site-packages/asv/main.py", line 38, in main
result = args.func(args)
File "/home/yijunyuan/.sage/local/lib/python3.10/site-packages/asv/commands/__init__.py", line 49, in run_from_args
return cls.run_from_conf_args(conf, args)
File "/home/yijunyuan/.sage/local/lib/python3.10/site-packages/asv/commands/run.py", line 183, in run_from_conf_args
return cls.run(
File "/home/yijunyuan/.sage/local/lib/python3.10/site-packages/asv/commands/run.py", line 211, in run
environments = list(environment.get_environments(conf, env_spec))
File "/home/yijunyuan/.sage/local/lib/python3.10/site-packages/asv/environment.py", line 375, in get_environments
yield cls(conf, python, requirements, tagged_env_vars)
TypeError: Sage.__init__() takes 4 positional arguments but 5 were given
I recently updated my SageMath installation to 9.5. I don't know if it matters.
Is that a completely unmodified version of henselization?
Is that a completely unmodified version of henselization?
Yes.
Someone asked per email:
[…] For example, in the following code
It defines the relative extension […], which is not Eisenstein over [the intermediate field]. This is reasonable but somewhat inconvenient. Someone told me that the corresponding Eisenstein polynomial can be obtained via the Ore-MacLane/Okutsu-Montes algorithm, but I can't see their connection. The only software supporting such a feature may be MAGMA (MAGMA calculates the Eisenstein polynomial of the (totally ramified) splitting field of polynomials with Z coefficients in Qp), which is not open-sourced. So I wonder if you have any idea about this problem?