gettalong / kramdown

kramdown is a fast, pure Ruby Markdown superset converter, using a strict syntax definition and supporting several common extensions.
http://kramdown.gettalong.org

Some problems/questions with in-line math when working with MathJax. #813

Closed: ZhengYuan-Public closed this issue 2 months ago

ZhengYuan-Public commented 2 months ago

I have been bothered by some problems with inline math for a while, and I still couldn't find the best solution, so I want to seek some help here. The most problematic case is inline math inside a list.

I have read the kramdown syntax documentation on math here, but it did not solve my problem.

Here are some examples (I also found that $ LaTeX_equation $ can be used to add inline math, so I included it in the examples).

The raw markdown file content

1. Sets
    - $$ \mathcal{S} $$
    - \$$ \mathcal{A}(s_i) $$
    - \$\$ \mathcal{R(s, a)} $$
    - State: $$ \mathcal{S} $$
    - Action: \$$ \mathcal{A}(s_i) $$
    - Reward: \$\$ \mathcal{R(s, a)} $$
2. Sets
    - State: $ \mathcal{S} $
    - Action: $ \mathcal{A}(s_i) $
    - Reward: $ \mathcal{R(s, a)} $
    - $ E = (b_{30}b_{29}...b_{23})_2 = (01111100)_2 = (124)_{10} \in \{1, ..., (2^8-1) - 1 \} = \{1, ..., 254\} $
    - Some text: $ E = (b_{30}b_{29}...b_{23})_2 = (01111100)_2 = (124)_{10} \in \{1, ..., (2^8-1) - 1 \} = \{1, ..., 254\} $
    - \$$ E = (b_{30}b_{29}...b_{23})_2 = (01111100)_2 = (124)_{10} \in \{1, ..., (2^8-1) - 1 \} = \{1, ..., 254\} $$
    - Some text: \$$ E = (b_{30}b_{29}...b_{23})_2 = (01111100)_2 = (124)_{10} \in \{1, ..., (2^8-1) - 1 \} = \{1, ..., 254\} $$
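For reference, the HTML below was presumably produced by converting the markdown above with kramdown (directly or via Jekyll). A minimal Ruby sketch of such a conversion (the file name is hypothetical; kramdown's math_engine option already defaults to 'mathjax'):

```ruby
# Reproduce the conversion (assumes the kramdown gem is installed).
require 'kramdown'

source = File.read('example.md')  # hypothetical input file
puts Kramdown::Document.new(source, math_engine: 'mathjax').to_html
```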

The HTML file generated

<ol>
  <li>Sets
    <ul>
      <li>
\[\mathcal{S}\]
      </li>
      <li>\(\mathcal{A}(s_i)\)</li>
      <li>$$ \mathcal{R(s, a)} $$</li>
    </ul>
    <ul>
      <li>State: \(\mathcal{S}\)</li>
      <li>Action: $$ \mathcal{A}(s_i) $$</li>
      <li>Reward: $$ \mathcal{R(s, a)} $$</li>
    </ul>
  </li>
  <li>Sets
    <ul>
      <li>State: $ \mathcal{S} $</li>
      <li>Action: $ \mathcal{A}(s_i) $</li>
      <li>Reward: $ \mathcal{R(s, a)} $
      - $ E = (b_{30}b_{29}…b_{23})<em>2 = (01111100)_2 = (124)</em>{10} \in {1, …, (2^8-1) - 1 } = {1, …, 254} $
      - Some text: $ E = (b_{30}b_{29}…b_{23})<em>2 = (01111100)_2 = (124)</em>{10} \in {1, …, (2^8-1) - 1 } = {1, …, 254} $
      - $$ E = (b_{30}b_{29}…b_{23})<em>2 = (01111100)_2 = (124)</em>{10} \in {1, …, (2^8-1) - 1 } = {1, …, 254} \(- Some text: \\) E = (b_{30}b_{29}…b_{23})<em>2 = (01111100)_2 = (124)</em>{10} \in {1, …, (2^8-1) - 1 } = {1, …, 254} $$</li>
    </ul>
  </li>
</ol>

Output on Google Chrome

(screenshot)

Summary

It's a little hard to summarize, but it seems that when inline math appears in lists:

  1. $ LaTeX_equation $ sometimes works as expected, but some _ characters get parsed as emphasis markers for italics. Since _ is so important in LaTeX equations, this is not a good option most of the time (see the sketch after this list).
  2. The recommended method \$$ LaTeX_equation $$ also works, as long as there is nothing else in the list item.
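To illustrate point 1, here is a minimal check (a sketch; the exact output can vary with the kramdown version). To kramdown, single dollar signs are ordinary text, so everything between them goes through normal markdown parsing, while $$ ... $$ is kramdown's math syntax and its content is passed to the math engine untouched:

```ruby
require 'kramdown'

# "$ ... $" is plain text: underscores that are not inside a word can be
# picked up as emphasis delimiters and mangle the formula with <em> tags.
puts Kramdown::Document.new('Some text: $ (124)_{10} = (01111100)_2 $').to_html

# "$$ ... $$" is kramdown math syntax: the content is forwarded to the
# math engine verbatim, e.g. as \((124)_{10} = (01111100)_2\).
puts Kramdown::Document.new('Some text: $$ (124)_{10} = (01111100)_2 $$').to_html
```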

What I want the text to be

(screenshot)

What I have to write in markdown now

1. Sets
    - State: $ \mathcal{S} $
    - Action: $ \mathcal{A}(s_i) $
    - Reward: $ \mathcal{R(s, a)} $
2. Probability Distributions
    - State Transition Probability: $ p(s' \vert s, a) $
    - Reward Probability: $ p(r \vert s, a) $
3. Policy: At state $ s $, the probability to choose the action $ a $ is $ \pi (a \vert s) $
4. Markov Property: memoryless
    - $ p(s_{t+1} \vert a_{t+1}, s_t, \dots, a_1, s_0) = p(s_{t+1} \vert a_{t+1}, s_t) $
    - $ p(r_{t+1} \vert a_{t+1}, s_t, \dots, a_1, s_0) = p(r_{t+1} \vert a_{t+1}, s_t) $
    - \$$ E = (b_{30}b_{29}...b_{23})_2 = (01111100)_2 = (124)_{10} \in \{1, ..., (2^8-1) - 1 \} = \{1, ..., 254\} $$
gettalong commented 2 months ago

Your "raw markdown file content" has a few problems: the list items need to be indented using spaces only, and $ ... $ as well as \$\$ ... $$ are not valid kramdown math delimiters (kramdown's math syntax is $$ ... $$; \$$ ... $$ forces inline math at the start of a list item).

Here is "what I have to write in markdown now", corrected (only using spaces and the correct delimiters):

1. Sets
   - State: $$ \mathcal{S} $$
   - Action: $$ \mathcal{A}(s_i) $$
   - Reward: $$ \mathcal{R(s, a)} $$
2. Probability Distributions
   - State Transition Probability: $$ p(s' \vert s, a) $$
   - Reward Probability: $$ p(r \vert s, a) $$
3. Policy: At state $$ s $$, the probability to choose the action $$ a $$ is $$ \pi (a \vert s) $$
4. Markov Property: memoryless
   - \$$ p(s_{t+1} \vert a_{t+1}, s_t, \dots, a_1, s_0) = p(s_{t+1} \vert a_{t+1}, s_t) $$
   - \$$ p(r_{t+1} \vert a_{t+1}, s_t, \dots, a_1, s_0) = p(r_{t+1} \vert a_{t+1}, s_t) $$
   - \$$ E = (b_{30}b_{29}...b_{23})_2 = (01111100)_2 = (124)_{10} \in \{1, ..., (2^8-1) - 1 \} = \{1, ..., 254\} $$

This gives the following output:

<ol>
  <li>Sets
    <ul>
      <li>State: \(\mathcal{S}\)</li>
      <li>Action: \(\mathcal{A}(s_i)\)</li>
      <li>Reward: \(\mathcal{R(s, a)}\)</li>
    </ul>
  </li>
  <li>Probability Distributions
    <ul>
      <li>State Transition Probability: \(p(s' \vert s, a)\)</li>
      <li>Reward Probability: \(p(r \vert s, a)\)</li>
    </ul>
  </li>
  <li>Policy: At state \(s\), the probability to choose the action \(a\) is \(\pi (a \vert s)\)</li>
  <li>Markov Property: memoryless
    <ul>
      <li>\(p(s_{t+1} \vert a_{t+1}, s_t, \dots, a_1, s_0) = p(s_{t+1} \vert a_{t+1}, s_t)\)</li>
      <li>\(p(r_{t+1} \vert a_{t+1}, s_t, \dots, a_1, s_0) = p(r_{t+1} \vert a_{t+1}, s_t)\)</li>
      <li>\(E = (b_{30}b_{29}...b_{23})_2 = (01111100)_2 = (124)_{10} \in \{1, ..., (2^8-1) - 1 \} = \{1, ..., 254\}\)</li>
    </ul>
  </li>
</ol>

As you can see, all math elements are converted to their correct representation.
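A quick way to verify this locally is a sketch like the following (assumes the kramdown gem is installed; fixed.md is a hypothetical file containing the corrected markdown):

```ruby
require 'kramdown'

# Convert the corrected markdown and inspect the result.
puts Kramdown::Document.new(File.read('fixed.md')).to_html

# The three delimiter forms used above then behave as follows:
#   $$ ... $$  in the middle of a line    -> inline math  \( ... \)
#   $$ ... $$  alone as a block           -> display math \[ ... \]
#   \$$ ... $$ at the start of a list item -> forced inline math \( ... \)
```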