When I generate new SP sequences, a perplexity value is reported for each sequence in the output CSV file. However, when I re-calculate perplexity for those same sequences with the run_perplexity.py script, the re-calculated values are generally higher than the ones reported at generation time. Is this expected behavior? Since the paper uses perplexity as an indicator of SP efficiency, which of the two values should I trust?
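For context, my mental model of the re-scoring step is roughly the following generic causal-LM perplexity computation (a sketch only, not the actual run_perplexity.py code; the checkpoint name and example sequence are placeholders):

```python
# Generic sketch of perplexity for a causal LM (not the repo's run_perplexity.py).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "your/protein-lm"  # placeholder; replace with the actual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(sequence: str) -> float:
    # Score the sequence against itself so the returned loss is the
    # mean per-token cross-entropy (negative log-likelihood).
    inputs = tokenizer(sequence, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    # Perplexity is the exponential of the mean negative log-likelihood.
    return torch.exp(loss).item()

print(perplexity("MKKTAIAIAVALAGFATVAQA"))  # placeholder SP sequence
```

If the generation-time value is computed differently (e.g. over a different token span, or including/excluding prompt or special tokens), that alone could explain a systematic offset, which is why I'd like to know which value the paper's analysis is based on.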