Closed asusdisciple closed 5 months ago
Hi, thanks for bringing this up!
That is indeed a problem / characteristic at the moment. The model has unlimited lookahead (limited only by the 512-character context size), so later characters can influence the newline probabilities of earlier characters. Because the model is trained on chunks of 512 characters, it may perform worse on texts shorter than that. I have no immediate plans to fix this; I don't think it is a big issue in practice. If you have very short input texts, you could try padding them to 512 characters (e.g. by repeating the text). Possible ways to fix this on the model side are:
I'll keep both of these in mind for when I train another model.
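The repeat-padding workaround mentioned above can be sketched as follows. This is a minimal illustrative helper, not part of wtpsplit; the function name and the 512-character target are assumptions:

```python
def pad_by_repetition(text: str, target: int = 512) -> str:
    """Repeat `text` until it is at least `target` characters long.

    The idea is that only the predictions for the first len(text)
    characters would be kept; the repeated tail merely gives the
    model a full-size context window.
    """
    if not text or len(text) >= target:
        return text
    reps = -(-target // len(text))  # ceiling division
    return (text * reps)[:target]
```

A caller would then run the splitter on the padded string and discard any boundaries predicted past the original text length.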
Thanks for your fast answer! That makes sense. If character length influences the result, my question is: how does the model behave if the input is longer than 512 characters? For example, if the total input length were 712 characters, would it be split into 512 and 200? Do you think it makes sense to take the actual input length and split it so that each part is close to 512 characters, e.g. splitting 712 into 2x356 instead of leaving one very short segment? Or maybe it would make sense to pad the input to a multiple of 512 characters? I'm not sure what the best way to implement this would be.
Thanks in advance!
For texts longer than 512 characters, this is already handled well. The text is split into partially overlapping chunks, with the overlap controlled by the `stride` argument to `wtp.split` (e.g. `stride=256` means moving forward by 256 characters per chunk), and the predictions for characters contained in more than one chunk are averaged together.
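The averaging step can be illustrated in plain Python. This is a sketch only; the real implementation operates on model logits, and all names here are made up:

```python
def average_chunk_predictions(n, spans, chunk_preds):
    """Average per-character predictions over overlapping chunks.

    spans[i] is the (start, end) character range of chunk i, and
    chunk_preds[i][j] is the prediction for character spans[i][0] + j.
    Characters covered by several chunks get the mean of all their
    predictions.
    """
    sums = [0.0] * n
    counts = [0] * n
    for (start, _end), preds in zip(spans, chunk_preds):
        for j, p in enumerate(preds):
            sums[start + j] += p
            counts[start + j] += 1
    return [s / c for s, c in zip(sums, counts)]
```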
If the last chunk does not fit perfectly, it is computed from the right instead of from the left, i.e. in your example with `stride=512`, the result would be two chunks: one covering characters 0-512 and one covering 200-712.
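Under these rules, the chunk boundaries for a given text length can be sketched like this. This is a simplified reconstruction of the behaviour described above, not the actual wtpsplit code:

```python
def chunk_spans(n, chunk_size=512, stride=256):
    """Return (start, end) character spans for chunked inference.

    Chunks advance by `stride` characters; if the final chunk would
    run past the end of the text, it is instead anchored at the
    right edge, so it overlaps the previous chunk.
    """
    if n <= chunk_size:
        return [(0, n)]
    spans = []
    start = 0
    while start + chunk_size < n:
        spans.append((start, start + chunk_size))
        start += stride
    spans.append((n - chunk_size, n))  # last chunk anchored at the right
    return spans
```

For the 712-character example with `stride=512`, this yields the two chunks described above: 0-512 and 200-712.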
The only problem is (as in your original message) when the overall text is shorter than 512 characters.
@bminixhofer Based on the conversation and your explanation of the model's 512-character chunk characteristic as it relates to accuracy, is it correct to say that the best direction to pad strings shorter than 512 characters is pad-to-left rather than pad-to-right? Or does the padding direction not matter? Since we don't want the padding to influence the short text (right influencing left), I thought padding on the left might be the ideal direction for this model. Thanks.
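For anyone who wants to experiment with the padding direction themselves, a left-padding helper might look like this. It is purely illustrative; `filler` and the offset bookkeeping are my own additions, not a wtpsplit API:

```python
def left_pad(text: str, filler: str, target: int = 512) -> tuple[str, int]:
    """Prepend filler text so `text` ends at the right edge of the window.

    Returns the padded string and the offset at which the real text
    starts; boundary predictions before that offset would be discarded.
    """
    if len(text) >= target:
        return text, 0
    need = target - len(text)
    pad = (filler * (need // len(filler) + 1))[:need]
    return pad + text, need
```

Comparing split results on `left_pad(...)` versus right-padded input would show empirically whether the direction matters for a given model.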
Hi, we recently released SaT models, which significantly improve upon WtP. We also specifically tackled the observed issues with short sequences, via a limited lookahead mechanism and by packing fewer sentences for the `-sm` models. For details, please see our SaT paper. :)
Considering your example, it now gets split as expected for both the 2-sentence and the 7-sentence input:
```python
sentence_short = "ሪንግ ከተፎካካሪ የደህንነት ኩባንያም ADT ኮርፖሬሽን፣ ጋር ክስ መስርቷል። አንደ የሙከራ ክትባት የኢቦላን ገዳይነት ቢቀንስም፣ እስካሁን፣ ነባር በሽታዎችን እንዲያክም አመቺ ሆኖ የቀረበ ምንም መድሃኒት የለም።"
sentence_long = "1.ፓናሉ ለንግድ መጀመር ገንዘብ በተከለከለበት በ2013 በሻርክ ታንክ ምዕራፍ ላይ ከቀረበ ወዲህ ሽያጭ እንደጨመረ ሲሚኖፍ ተነግሯል። በ2017 መጨረሻ ላይ፣ ሲሚኖፍ በሽያጭ የቴሌቪዥን ጣቢያ ላይ ቀርቦ ነበር። ሪንግ ከተፎካካሪ የደህንነት ኩባንያም ADT ኮርፖሬሽን፣ ጋር ክስ መስርቷል። አንደ የሙከራ ክትባት የኢቦላን ገዳይነት ቢቀንስም፣ እስካሁን፣ ነባር በሽታዎችን እንዲያክም አመቺ ሆኖ የቀረበ ምንም መድሃኒት የለም። አንድ የጸረ እንግዳ አካል፣ ZMapp፣ በዚህ መስክ ላይ ተስፋን አሳይቶ ነበር፣ ግን መደበኛ ጥናቶች ሞትን ለመከላከል ከተፈለገው ጥቅም ያነሰ እንዳለው ያሳያል። በPALM ሙከራ፣ ZMapp እንደ መቆጣጠሪያ ያገለግል ነበር፣ ማለት ተመራማሪዎች እንደ መነሻ ይጠቀሙበት እና ከሌሎች ሶስት ህክምናዎች ጋር ያነጻጽሩታል።የአሜሪካ ጂምናስቲ የዩናይትድ ስቴትስ ኦሎፒክ ኮሚቴ ደብዳቤ ይደግፋል እናም በሙሉ አስፈላጊነት የኦሎምፒክ ቤተሰብ ደህንነቱ የተጠበቀ አካባቢ ለሁሉም አትሌቶቻችን ማስተዋወቅ እንዳለበት ይቀበላል።"

sat_sm.split(sentence_short)
# ['ሪንግ ከተፎካካሪ የደህንነት ኩባንያም ADT ኮርፖሬሽን፣ ጋር ክስ መስርቷል። ', 'አንደ የሙከራ ክትባት የኢቦላን ገዳይነት ቢቀንስም፣ እስካሁን፣ ነባር በሽታዎችን እንዲያክም አመቺ ሆኖ የቀረበ ምንም መድሃኒት የለም።']

sat_sm.split(sentence_long)
# ['ፓናሉ ለንግድ መጀመር ገንዘብ በተከለከለበት በ2013 በሻርክ ታንክ ምዕራፍ ላይ ከቀረበ ወዲህ ሽያጭ እንደጨመረ ሲሚኖፍ ተነግሯል። ', 'በ2017 መጨረሻ ላይ፣ ሲሚኖፍ በሽያጭ የቴሌቪዥን ጣቢያ ላይ ቀርቦ ነበር። ', 'ሪንግ ከተፎካካሪ የደህንነት ኩባንያም ADT ኮርፖሬሽን፣ ጋር ክስ መስርቷል። ', 'አንደ የሙከራ ክትባት የኢቦላን ገዳይነት ቢቀንስም፣ እስካሁን፣ ነባር በሽታዎችን እንዲያክም አመቺ ሆኖ የቀረበ ምንም መድሃኒት የለም። ', 'አንድ የጸረ እንግዳ አካል፣ ZMapp፣ በዚህ መስክ ላይ ተስፋን አሳይቶ ነበር፣ ግን መደበኛ ጥናቶች ሞትን ለመከላከል ከተፈለገው ጥቅም ያነሰ እንዳለው ያሳያል። ', 'በPALM ሙከራ፣ ZMapp እንደ መቆጣጠሪያ ያገለግል ነበር፣ ማለት ተመራማሪዎች እንደ መነሻ ይጠቀሙበት እና ከሌሎች ሶስት ህክምናዎች ጋር ያነጻጽሩታል። ', 'የአሜሪካ ጂምናስቲ የዩናይትድ ስቴትስ ኦሎፒክ ኮሚቴ ደብዳቤ ይደግፋል እናም በሙሉ አስፈላጊነት የኦሎምፒክ ቤተሰብ ደህንነቱ የተጠበቀ አካባቢ ለሁሉም አትሌቶቻችን ማስተዋወቅ እንዳለበት ይቀበላል።']
```
Hope this helps and fixes your issue!
I found that your splitter model gives very inconsistent results. Take, for example, Amharic (`lang_code="am"`).
If I take, for example, these two sentences from the FLORES-200 test dataset:
If I concatenate these two strings and feed them into `wtp.split()`, it produces 10 sentences:
However, if I give the algorithm more (7) sentences concatenated into one string, it splits them all perfectly (note that the two sentences in the example above are included in the text below, as sentences 3 and 4):
1.ፓናሉ ለንግድ መጀመር ገንዘብ በተከለከለበት በ2013 በሻርክ ታንክ ምዕራፍ ላይ ከቀረበ ወዲህ ሽያጭ እንደጨመረ ሲሚኖፍ ተነግሯል።
Can you explain this behaviour? To be honest, it makes your algorithm very unpredictable, and I fear this problem is also present in other languages, unless I made a mistake somewhere. I called the splitter with the appropriate language at all times. Let me know what you think of this.