iheartla / Iheartla.github.io


Assessment of whether I❤️LA code in gallery appears similar to source LaTeX output equations #7

Open alecjacobson opened 3 years ago

alecjacobson commented 3 years ago

pmp_32

The source equation has the form X = Y = Z

We skip the = Y part.

(lhsfunc) The source is defining a function on the left-hand side: x(θ,φ)

We translate this to a variable with the name of the function and its arguments as a single string in backticks `x(θ,φ)`

pmp_41

(rhsdef) The source equation defines variables x₁,x₂,x₃ on the right-hand side of an equation T = (x₁,x₂,x₃).

We translate this into three lines, defining each xᵢ as a column of T treated as a matrix. (How did we decide T is a matrix rather than a sequence of vectors?)
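As a minimal NumPy sketch of this (rhsdef) convention — the values are hypothetical, only the column-extraction pattern matters — the translated code reads each xᵢ back out of T as a column:

```python
import numpy as np

# Hypothetical T: the source writes T = (x1, x2, x3) on the
# right-hand side; the translation instead defines each x_i
# as a column of T, interpreted as a matrix.
T = np.array([[1.0, 4.0, 7.0],
              [2.0, 5.0, 8.0],
              [3.0, 6.0, 9.0]])

x1 = T[:, 0]  # first column
x2 = T[:, 1]  # second column
x3 = T[:, 2]  # third column
```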

(frac) The source contains a fraction on two lines.

We linearize the fraction.

See (lhsfunc)

pmp_42

The source equation uses n as both an input function from triangles T to 3D vectors and an output function from vertex indices v to 3D vectors.

We define n as a function from T as an index to a 3D vector. We translate the output `n(v)` as a variable instead of a function, see (lhsfunc).

The source uses T both as an input argument to n (unclear type) and as a subscript to α.

For this equation we assume T is an integer index and use it as both function input and subscript index.

See also: (frac)

pmp_74

The source sums over an unnamed set of triangles; each triangle T uses a right-hand side definition of three vertex indices i,j,k. See (rhsdef)

We translate this by expecting that the mesh vertex UV locations have already been "exploded" into per-triangle u and v lists of 3-vectors (one for each triangle corner) and then use T as an index.

This further means that we do not match the vertical vector formation of (vᵢ;vⱼ;vₖ), replacing it with v_T.
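A small NumPy sketch of what "exploding" means here, with a hypothetical 4-vertex, 2-triangle mesh (the arrays are made up; only the indexing pattern is the point): per-vertex UVs are gathered into per-triangle corner lists, so an expression indexed by T replaces the explicit stack (vᵢ;vⱼ;vₖ).

```python
import numpy as np

# Hypothetical mesh: 4 vertices with 2D UV coordinates, 2 triangles.
UV = np.array([[0.0, 0.0],
               [1.0, 0.0],
               [1.0, 1.0],
               [0.0, 1.0]])
F = np.array([[0, 1, 2],
              [0, 2, 3]])  # each row: vertex indices i, j, k of one triangle

# "Explode" per-vertex UVs into per-triangle corner lists:
# row T stacks the UVs of triangle T's three corners.
uv_exploded = UV[F]           # shape (num_triangles, 3, 2)
v_T = uv_exploded[0].ravel()  # corners of triangle 0, flattened
```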

(block) The source equation includes a block matrix definition.

We include this block matrix as a multi-line construction which causes the subsequent symbols to fall on the next line.

convex_optimization_154

(bounds) Summations have explicit bounds.

We translate without explicit bounds (they are inferred from the where-block).

convex_optimization_208

See also: (lhsfunc) (frac) (bounds)

convex_optimization_220

See also: (lhsfunc) (bounds)

convex_optimization_276

The source equation has no explicit variable of minimization.

We explicitly write it as x and specify its size.

The norms in the source use a subscript ₂ to indicate two-norm.

We default to two-norm so this is omitted.

(min) The source uses the full word "minimize".

We use min.

See also: (bounds)

convex_optimization_384

(seqexp) The source explicitly defines the length of y as a sequence i=1,...,m

Our translation infers consistent sequence sizes without needing to explicitly specify them.

See also: (bounds)

convex_optimization_650

The source implicitly defines the sizes of B,C,D.

We translate this by introducing m (as a stand-in for n-k) and explicitly writing out all sizes in the where-block.

convex_optimization_680

The source equation is far from the matrices from which sizes and dimensions would need to be inferred.

We translate this by explicitly defining matrices of the most generic consistent sizes.

The source uses I with implied size.

We translate this with an explicit subscript _n indicating its size.

See also (block)

anisotropic_elasticity_7

The source uses 0-based indexing (for entries of A, but not for decorations on other symbols, e.g., I₅).

We translate into 1-based indexing.

(iden) The source uses I_(3×3).

We translate this with a single _3: our identity matrices are square.

See also (block)

anisotropic_elasticity_47

The source defines J₃ in prose.

We translate this into an expression using 1₃,₃.

See (frac)

symmetric_objective_function_9

analytic_eigensystems_13

See also (frac) (block)

plenoptic_modeling_22

See also (lhsfunc) (frac)

morphable_model_5

The source equation uses bounds for the first two summations, but not the third. It is unclear from the paper what the dimensions of the ρ variables are.

We translate ρ variables as sequences of the same implied length.

See also (frac) (bounds)

course_registration

The source uses extra parentheses.

These are omitted.

The source uses underbraces to define terms (A,b,constant).

We omit these.

See also (min) (frac) (block) (bounds)

multi_frame_1

See also (lhsfunc) (frac)

multi_frame_4

See also (bounds)

atlas_refinement_3

The source uses b_j as a summation variable.

We translate this using j as a summation variable.

The source is a definition of a function whose argument depends on an index i, and the right-hand side also uses this index i.

We backtick-protect the left-hand side and the variable using i on the right-hand side.

The source uses a superscript for the iteration number k (not a power).

We translate this as part of the backtick-protected variable names.

See also (frac) (lhsfunc)

optimal_sampling_16

The source defines X as a sequence of variable-length vectors so that X_ij is the jth entry of the ith vector.

We translate this by defining X as a padded array with vectors as rows. The summation is modified to only use the appropriate number (n_i) of entries in each row.
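A minimal NumPy sketch of the padding convention, with hypothetical data (three variable-length vectors): X becomes a rectangular array padded with zeros, and the summation over each row stops at n_i.

```python
import numpy as np

# Hypothetical input: X as a sequence of variable-length vectors.
rows = [np.array([1.0, 2.0]),
        np.array([3.0, 4.0, 5.0]),
        np.array([6.0])]
n = [len(r) for r in rows]  # n_i: number of valid entries in row i

# Pad into a rectangular array with the vectors as rows.
X = np.zeros((len(rows), max(n)))
for i, r in enumerate(rows):
    X[i, :n[i]] = r

# A summation over X_ij then only runs j up to n_i for each row.
total = sum(X[i, :n[i]].sum() for i in range(len(rows)))
```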

See also (frac) (bounds)

hand_modeling_3

The source defines the dimensions of C in the prose and does not repeat it under the min.

We translate this definition under the min.

See also (bounds) (iden)

delta_mush_1

See also (bounds) (seqexp)

course_parameterization

The source equation uses a single case statement for L_ij and recursively defines the diagonal entries with a summation over its own (off-diagonal) entries.

We split this into two statements.

The source writes the summation over ℓ ≠ i.

We translate this to ℓ for ℓ ≠ i.

The source defines E as a matrix of bounded indices.

We translate this as a set of 2D vectors of integers.

The source does not define w as a matrix, only the quantity w_ij per edge ij.

We translate this as an input matrix w∈ℝ^(n×n)
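The two-statement split above can be sketched in NumPy. The weights, edge set, and sign convention here are hypothetical (the source's sign is not stated in these notes); the point is that off-diagonal entries come from w, and the diagonal is then defined by a separate statement summing the off-diagonal entries of its own row.

```python
import numpy as np

# Hypothetical 3-vertex example: w is a symmetric per-edge weight
# matrix, E lists edges as integer index pairs (2D vectors).
n = 3
w = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])
E = [(0, 1), (0, 2), (1, 2)]

# Statement 1: off-diagonal entries of L from w.
L = np.zeros((n, n))
for i, j in E:
    L[i, j] = w[i, j]
    L[j, i] = w[j, i]

# Statement 2: each diagonal entry is the (negated, by the sign
# convention assumed here) sum over l != i of row i's off-diagonals,
# instead of a recursive case inside one statement.
for i in range(n):
    L[i, i] = -sum(L[i, l] for l in range(n) if l != i)
```

With this convention every row of L sums to zero, which is easy to check after the second statement runs.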

course_curvature

The source uses sub and superscripts for the integral bounds.

We use [0, 2π] after an underscore _.

See also (lhsfunc) (frac)

alecjacobson commented 3 years ago

From @yig via email:

lhsfunc: (a) We drop the left-hand sides entirely. I think that's too aggressive. The output will be visually missing something. (b) We could drop the parameters, so f(a,b) = ... would become just f = . This makes it possible to access the output value in the returned struct as .f, since the name won't be mangled. (c) We do what we're doing. The output looks similar to the original LaTeX, and the value can still be accessed in the returned struct as .ret.

and

course_parameterization: The input matrix w∈ℝ^(n×n) should probably be declared sparse, as in w∈ℝ^(n×n) sparse.

and

course_curvature: We support ∫a^b in addition to ∫[a,b].

yig commented 3 years ago

The course_curvature suggestion depends on issue #11