zhenhao-huang / CPM-1-Finetune-Text-Generation

Finetune CPM-1 For Text Generation
MIT License

Fine-tuning on domain data with 4 GPUs: loss does not decrease in the first few epochs #3

Open DTDXG opened 2 years ago

DTDXG commented 2 years ago

Epoch 0, global step 10, total step 97155, train lm loss: 9.976178240776061
Epoch 0, global step 20, total step 97155, train lm loss: 10.820466804504395
Epoch 0, global step 30, total step 97155, train lm loss: 10.784950971603394
Epoch 0, global step 40, total step 97155, train lm loss: 10.853230118751526
Epoch 0, global step 50, total step 97155, train lm loss: 10.960585021972657
Epoch: 0, Global step: 50, Dev loss: 10.90192997455597
Epoch 0, global step 60, total step 97155, train lm loss: 10.847557139396667
Epoch 0, global step 70, total step 97155, train lm loss: 10.860098052024842

DTDXG commented 2 years ago

This doesn't look normal.

zhenhao-huang commented 2 years ago

How large is your dataset? Train for one full epoch and see how it goes.

DTDXG commented 2 years ago

How large is your dataset? Train for one full epoch and see how it goes.

About 130,000 text documents, a bit over 500 MB; after generating the JSON file it's about 150,000 entries. Training is slow: I've already reduced the number of layers to 6, and with four M40 GPUs one epoch takes about 18 hours. The first epoch is roughly 40% done, but the loss keeps rising, which is a bit demoralizing. [screenshot of loss log]

zhenhao-huang commented 2 years ago

You've set the layer count too low; the model doesn't have enough capacity to learn.

DTDXG commented 2 years ago

I've tried 32 layers and 24 layers; the loss behaves essentially the same in both cases. Could it be a problem with the template? The format of my preprocessed text is as follows [screenshot], and I then tokenize it using the sliding window you provided.
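
For reference, a minimal sketch of what sliding-window chunking over a tokenized document typically looks like; the function name, window, and stride values below are illustrative assumptions, not the repo's actual preprocessing code:

```python
# Sketch: split one long token sequence into overlapping fixed-size samples.
# window/stride are assumed values, not necessarily what this repo uses.
def sliding_window_chunks(token_ids, window=1024, stride=512):
    chunks = []
    for start in range(0, len(token_ids), stride):
        chunks.append(token_ids[start:start + window])
        if start + window >= len(token_ids):  # last window reached the end
            break
    return chunks

# e.g. a 2500-token article with window=1024, stride=512 yields 4 chunks
```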

DTDXG commented 2 years ago

This is the result with 24 layers. [screenshot of loss log]

zhenhao-huang commented 2 years ago

Your dataset is fairly large. At the beginning of each epoch the loss will rise slightly before coming down; for the reason, see the note on momentum loading for the CPM pretrained model. Also, I'd suggest not setting the layer count too low.
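
As I read this, the mechanism is that only the pretrained weights are restored while the optimizer is created fresh, so the Adam moment buffers start at zero and the earliest updates are poorly scaled. A rough sketch of that situation; the checkpoint path and state-dict key are hypothetical:

```python
import torch

model = torch.nn.Linear(768, 768)  # stand-in module for the CPM transformer
state = torch.load("pretrained_cpm.pt", map_location="cpu")  # hypothetical file/key
model.load_state_dict(state["model"])

# Only weights are restored; the fresh optimizer's moment estimates
# (exp_avg, exp_avg_sq) start at zero, so the first steps can push loss up.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```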

DTDXG commented 2 years ago

Looks like it. I ran 3 epochs last night; the loss was still rising through epoch 1 and then began to fall slowly. I'll keep watching it. Thanks.

DTDXG commented 2 years ago

Over the weekend I set the batch size to 8 and trained on 5,000 documents. After 10 epochs the loss had not dropped. Is it just not enough training? I saw that on the ChID task the loss only starts to fall after 1000~2000 steps. Besides, my domain text isn't very complex, so fine-tuning shouldn't behave like this.

Epoch 0, global step 10, total step 2142, train lm loss: 10.077742505073548
Epoch 0, global step 20, total step 2142, train lm loss: 10.831316995620728
Epoch 0, global step 30, total step 2142, train lm loss: 10.841767644882202
Epoch 0, global step 40, total step 2142, train lm loss: 10.86684582233429
Epoch 0, global step 50, total step 2142, train lm loss: 10.797928595542908
Epoch 0, global step 60, total step 2142, train lm loss: 10.903214621543885
Epoch 0, global step 70, total step 2142, train lm loss: 10.892094326019286
Epoch 0, global step 80, total step 2142, train lm loss: 10.932651281356812
Epoch 0, global step 90, total step 2142, train lm loss: 10.941517567634582
Epoch 0, global step 100, total step 2142, train lm loss: 10.979631972312927
Epoch 0, global step 110, total step 2142, train lm loss: 11.064398932456971
Epoch 0, global step 120, total step 2142, train lm loss: 11.136101150512696
Epoch 0, global step 130, total step 2142, train lm loss: 11.153024458885193
Epoch 0, global step 140, total step 2142, train lm loss: 11.03482003211975
Epoch 0, global step 150, total step 2142, train lm loss: 10.982774353027343
Epoch 0, global step 160, total step 2142, train lm loss: 10.935764694213868
Epoch 0, global step 170, total step 2142, train lm loss: 10.82078013420105
Epoch 0, global step 180, total step 2142, train lm loss: 10.80727174282074
Epoch 0, global step 190, total step 2142, train lm loss: 10.778411698341369
Epoch 0, global step 200, total step 2142, train lm loss: 10.816575837135314
Epoch 0, global step 210, total step 2142, train lm loss: 10.736823296546936
Epoch 1, global step 220, total step 2142, train lm loss: 10.696767902374267
Epoch 1, global step 230, total step 2142, train lm loss: 10.72139070034027
Epoch 1, global step 240, total step 2142, train lm loss: 10.81094183921814
Epoch 1, global step 250, total step 2142, train lm loss: 10.972432374954224
Epoch 1, global step 260, total step 2142, train lm loss: 11.109806656837463
Epoch 1, global step 270, total step 2142, train lm loss: 11.17758595943451
Epoch 1, global step 280, total step 2142, train lm loss: 11.220130038261413
Epoch 1, global step 290, total step 2142, train lm loss: 11.225976514816285
Epoch 1, global step 300, total step 2142, train lm loss: 11.244247841835023
Epoch 1, global step 310, total step 2142, train lm loss: 11.266067957878112
Epoch 1, global step 320, total step 2142, train lm loss: 11.253532338142396
Epoch 1, global step 330, total step 2142, train lm loss: 11.344405150413513
Epoch 1, global step 340, total step 2142, train lm loss: 11.349774980545044
Epoch 1, global step 350, total step 2142, train lm loss: 11.356926369667054
Epoch 1, global step 360, total step 2142, train lm loss: 11.315776109695435
Epoch 1, global step 370, total step 2142, train lm loss: 11.389334392547607
Epoch 1, global step 380, total step 2142, train lm loss: 11.336841559410095
Epoch 1, global step 390, total step 2142, train lm loss: 11.404839777946473
Epoch 1, global step 400, total step 2142, train lm loss: 11.351309013366699
Epoch 1, global step 410, total step 2142, train lm loss: 11.463918471336365
Epoch 1, global step 420, total step 2142, train lm loss: 11.467179441452027
Epoch 2, global step 430, total step 2142, train lm loss: 11.416150069236755
Epoch 2, global step 440, total step 2142, train lm loss: 11.352361416816711
Epoch 2, global step 450, total step 2142, train lm loss: 11.353506374359132
Epoch 2, global step 460, total step 2142, train lm loss: 11.487883448600769
Epoch 2, global step 470, total step 2142, train lm loss: 11.465776753425597
Epoch 2, global step 480, total step 2142, train lm loss: 11.467491698265075
Epoch 2, global step 490, total step 2142, train lm loss: 11.489777183532714
Epoch 2, global step 500, total step 2142, train lm loss: 11.499632716178894
Epoch 2, global step 510, total step 2142, train lm loss: 11.465987300872802
Epoch 2, global step 520, total step 2142, train lm loss: 11.470770025253296
Epoch 2, global step 530, total step 2142, train lm loss: 11.453624367713928
Epoch 2, global step 540, total step 2142, train lm loss: 11.48928394317627
Epoch 2, global step 550, total step 2142, train lm loss: 11.545708107948304
Epoch 2, global step 560, total step 2142, train lm loss: 11.528135871887207
Epoch 2, global step 570, total step 2142, train lm loss: 11.483091831207275
Epoch 2, global step 580, total step 2142, train lm loss: 11.493202781677246
Epoch 2, global step 590, total step 2142, train lm loss: 11.514823198318481
Epoch 2, global step 600, total step 2142, train lm loss: 11.482706975936889
Epoch 2, global step 610, total step 2142, train lm loss: 11.51557023525238
Epoch 2, global step 620, total step 2142, train lm loss: 11.524994897842408
Epoch 2, global step 630, total step 2142, train lm loss: 11.585730075836182
Epoch 2, global step 640, total step 2142, train lm loss: 11.53557288646698
Epoch 3, global step 650, total step 2142, train lm loss: 11.483806848526001
Epoch 3, global step 660, total step 2142, train lm loss: 11.532571744918823
Epoch 3, global step 670, total step 2142, train lm loss: 11.561785197257995
Epoch 3, global step 680, total step 2142, train lm loss: 11.589659023284913
Epoch 3, global step 690, total step 2142, train lm loss: 11.510289907455444
Epoch 3, global step 700, total step 2142, train lm loss: 11.587843704223634
Epoch 3, global step 710, total step 2142, train lm loss: 11.549294066429137
Epoch 3, global step 720, total step 2142, train lm loss: 11.54594361782074
Epoch 3, global step 730, total step 2142, train lm loss: 11.534197807312012
Epoch 3, global step 740, total step 2142, train lm loss: 11.541052722930909
Epoch 3, global step 750, total step 2142, train lm loss: 11.526718473434448
Epoch 3, global step 760, total step 2142, train lm loss: 11.59164137840271
Epoch 3, global step 770, total step 2142, train lm loss: 11.59092619419098
Epoch 3, global step 780, total step 2142, train lm loss: 11.554410099983215
Epoch 3, global step 790, total step 2142, train lm loss: 11.55160436630249
Epoch 3, global step 800, total step 2142, train lm loss: 11.574915051460266
Epoch 3, global step 810, total step 2142, train lm loss: 11.528852033615113
Epoch 3, global step 820, total step 2142, train lm loss: 11.589702773094178
Epoch 3, global step 830, total step 2142, train lm loss: 11.540002965927124
Epoch 3, global step 840, total step 2142, train lm loss: 11.630426454544068
Epoch 3, global step 850, total step 2142, train lm loss: 11.614045882225037
Epoch 4, global step 860, total step 2142, train lm loss: 11.551578998565674
Epoch 4, global step 870, total step 2142, train lm loss: 11.510226058959962
Epoch 4, global step 880, total step 2142, train lm loss: 11.502415418624878
Epoch 4, global step 890, total step 2142, train lm loss: 11.560726237297057
Epoch 4, global step 900, total step 2142, train lm loss: 11.487877821922302
Epoch 4, global step 910, total step 2142, train lm loss: 11.618546724319458
Epoch 4, global step 920, total step 2142, train lm loss: 11.615844416618348
Epoch 4, global step 930, total step 2142, train lm loss: 11.605739164352418
Epoch 4, global step 940, total step 2142, train lm loss: 11.581577777862549
Epoch 4, global step 950, total step 2142, train lm loss: 11.601444292068482
Epoch 4, global step 960, total step 2142, train lm loss: 11.551263546943664
Epoch 4, global step 970, total step 2142, train lm loss: 11.61181275844574
Epoch 4, global step 980, total step 2142, train lm loss: 11.646592450141906
Epoch 4, global step 990, total step 2142, train lm loss: 11.628769874572754
Epoch 4, global step 1000, total step 2142, train lm loss: 11.573649048805237
Epoch 4, global step 1010, total step 2142, train lm loss: 11.581990218162536
Epoch 4, global step 1020, total step 2142, train lm loss: 11.619111442565918
Epoch 4, global step 1030, total step 2142, train lm loss: 11.591527462005615
Epoch 4, global step 1040, total step 2142, train lm loss: 11.584144186973571
Epoch 4, global step 1050, total step 2142, train lm loss: 11.635941171646119
Epoch 4, global step 1060, total step 2142, train lm loss: 11.664949798583985
Epoch 4, global step 1070, total step 2142, train lm loss: 11.622552633285522
Epoch 5, global step 1080, total step 2142, train lm loss: 11.544819831848145
Epoch 5, global step 1090, total step 2142, train lm loss: 11.528185105323791
Epoch 5, global step 1100, total step 2142, train lm loss: 11.597781443595887
Epoch 5, global step 1110, total step 2142, train lm loss: 11.652670073509217
Epoch 5, global step 1120, total step 2142, train lm loss: 11.607676434516907
Epoch 5, global step 1130, total step 2142, train lm loss: 11.6506023645401
Epoch 5, global step 1140, total step 2142, train lm loss: 11.612306690216064
Epoch 5, global step 1150, total step 2142, train lm loss: 11.627357172966004
Epoch 5, global step 1160, total step 2142, train lm loss: 11.588149881362915
Epoch 5, global step 1170, total step 2142, train lm loss: 11.627081179618836
Epoch 5, global step 1180, total step 2142, train lm loss: 11.585710716247558
Epoch 5, global step 1190, total step 2142, train lm loss: 11.659010529518127
Epoch 5, global step 1200, total step 2142, train lm loss: 11.651221418380738
Epoch 5, global step 1210, total step 2142, train lm loss: 11.604456758499145
Epoch 5, global step 1220, total step 2142, train lm loss: 11.623193740844727
Epoch 5, global step 1230, total step 2142, train lm loss: 11.646228480339051
Epoch 5, global step 1240, total step 2142, train lm loss: 11.562894654273986
Epoch 5, global step 1250, total step 2142, train lm loss: 11.660567116737365
Epoch 5, global step 1260, total step 2142, train lm loss: 11.598304653167725
Epoch 5, global step 1270, total step 2142, train lm loss: 11.69379289150238
Epoch 5, global step 1280, total step 2142, train lm loss: 11.660201907157898
Epoch 6, global step 1290, total step 2142, train lm loss: 11.58299777507782
Epoch 6, global step 1300, total step 2142, train lm loss: 11.574216270446778
Epoch 6, global step 1310, total step 2142, train lm loss: 11.649238419532775
Epoch 6, global step 1320, total step 2142, train lm loss: 11.675712728500367
Epoch 6, global step 1330, total step 2142, train lm loss: 11.606507658958435
Epoch 6, global step 1340, total step 2142, train lm loss: 11.661307215690613
Epoch 6, global step 1350, total step 2142, train lm loss: 11.660458064079284
Epoch 6, global step 1360, total step 2142, train lm loss: 11.615161633491516
Epoch 6, global step 1370, total step 2142, train lm loss: 11.625925517082214
Epoch 6, global step 1380, total step 2142, train lm loss: 11.617619466781616
Epoch 6, global step 1390, total step 2142, train lm loss: 11.607029056549072
Epoch 6, global step 1400, total step 2142, train lm loss: 11.657765412330628
Epoch 6, global step 1410, total step 2142, train lm loss: 11.666300988197326
Epoch 6, global step 1420, total step 2142, train lm loss: 11.659992790222168
Epoch 6, global step 1430, total step 2142, train lm loss: 11.597860407829284
Epoch 6, global step 1440, total step 2142, train lm loss: 11.637781572341918
Epoch 6, global step 1450, total step 2142, train lm loss: 11.61467981338501
Epoch 6, global step 1460, total step 2142, train lm loss: 11.637713170051574
Epoch 6, global step 1470, total step 2142, train lm loss: 11.597967433929444
Epoch 6, global step 1480, total step 2142, train lm loss: 11.682739090919494
Epoch 6, global step 1490, total step 2142, train lm loss: 11.676209783554077
Epoch 6, global step 1500, total step 2142, train lm loss: 11.631771969795228
Epoch 7, global step 1510, total step 2142, train lm loss: 11.578112125396729
Epoch 7, global step 1520, total step 2142, train lm loss: 11.561094045639038
Epoch 7, global step 1530, total step 2142, train lm loss: 11.623381900787354
Epoch 7, global step 1540, total step 2142, train lm loss: 11.571000862121583
Epoch 7, global step 1550, total step 2142, train lm loss: 11.554937815666198
Epoch 7, global step 1560, total step 2142, train lm loss: 11.671279549598694
Epoch 7, global step 1570, total step 2142, train lm loss: 11.641465592384339
Epoch 7, global step 1580, total step 2142, train lm loss: 11.643608593940735
Epoch 7, global step 1590, total step 2142, train lm loss: 11.619866061210633
Epoch 7, global step 1600, total step 2142, train lm loss: 11.616799926757812
Epoch 7, global step 1610, total step 2142, train lm loss: 11.632605481147767
Epoch 7, global step 1620, total step 2142, train lm loss: 11.677462220191956
Epoch 7, global step 1630, total step 2142, train lm loss: 11.685198783874512
Epoch 7, global step 1640, total step 2142, train lm loss: 11.621504020690917
Epoch 7, global step 1650, total step 2142, train lm loss: 11.628706884384155
Epoch 7, global step 1660, total step 2142, train lm loss: 11.660326170921326
Epoch 7, global step 1670, total step 2142, train lm loss: 11.595205950737
Epoch 7, global step 1680, total step 2142, train lm loss: 11.644860029220581
Epoch 7, global step 1690, total step 2142, train lm loss: 11.6437077999115
Epoch 7, global step 1700, total step 2142, train lm loss: 11.717098259925843
Epoch 7, global step 1710, total step 2142, train lm loss: 11.658128476142883
Epoch 8, global step 1720, total step 2142, train lm loss: 11.601220750808716
Epoch 8, global step 1730, total step 2142, train lm loss: 11.588545560836792
Epoch 8, global step 1740, total step 2142, train lm loss: 11.593594717979432
Epoch 8, global step 1750, total step 2142, train lm loss: 11.608234977722168
Epoch 8, global step 1760, total step 2142, train lm loss: 11.629752779006958
Epoch 8, global step 1770, total step 2142, train lm loss: 11.686305356025695
Epoch 8, global step 1780, total step 2142, train lm loss: 11.66082272529602
Epoch 8, global step 1790, total step 2142, train lm loss: 11.641206121444702
Epoch 8, global step 1800, total step 2142, train lm loss: 11.632195544242858
Epoch 8, global step 1810, total step 2142, train lm loss: 11.636396861076355
Epoch 8, global step 1820, total step 2142, train lm loss: 11.626308703422547
Epoch 8, global step 1830, total step 2142, train lm loss: 11.680176830291748
Epoch 8, global step 1840, total step 2142, train lm loss: 11.67740204334259
Epoch 8, global step 1850, total step 2142, train lm loss: 11.66936194896698
Epoch 8, global step 1860, total step 2142, train lm loss: 11.612979745864868
Epoch 8, global step 1870, total step 2142, train lm loss: 11.675817584991455
Epoch 8, global step 1880, total step 2142, train lm loss: 11.615044331550598
Epoch 8, global step 1890, total step 2142, train lm loss: 11.664568996429443
Epoch 8, global step 1900, total step 2142, train lm loss: 11.606658387184144
Epoch 8, global step 1910, total step 2142, train lm loss: 11.7001962184906
Epoch 8, global step 1920, total step 2142, train lm loss: 11.701418113708495
Epoch 9, global step 1930, total step 2142, train lm loss: 11.641951966285706
Epoch 9, global step 1940, total step 2142, train lm loss: 11.579303550720216
Epoch 9, global step 1950, total step 2142, train lm loss: 11.584128975868225
Epoch 9, global step 1960, total step 2142, train lm loss: 11.700719237327576
Epoch 9, global step 1970, total step 2142, train lm loss: 11.657628464698792
Epoch 9, global step 1980, total step 2142, train lm loss: 11.662394046783447
Epoch 9, global step 1990, total step 2142, train lm loss: 11.682838821411133
Epoch 9, global step 2000, total step 2142, train lm loss: 11.673422980308533
Epoch 9, global step 2010, total step 2142, train lm loss: 11.636272835731507
Epoch 9, global step 2020, total step 2142, train lm loss: 11.637586236000061
Epoch 9, global step 2030, total step 2142, train lm loss: 11.62152886390686
Epoch 9, global step 2040, total step 2142, train lm loss: 11.654696369171143
Epoch 9, global step 2050, total step 2142, train lm loss: 11.70830466747284
Epoch 9, global step 2060, total step 2142, train lm loss: 11.685967874526977
Epoch 9, global step 2070, total step 2142, train lm loss: 11.624574518203735
Epoch 9, global step 2080, total step 2142, train lm loss: 11.640417194366455
Epoch 9, global step 2090, total step 2142, train lm loss: 11.667964005470276
Epoch 9, global step 2100, total step 2142, train lm loss: 11.616567754745484
Epoch 9, global step 2110, total step 2142, train lm loss: 11.650010251998902
Epoch 9, global step 2120, total step 2142, train lm loss: 11.66224172115326
Epoch 9, global step 2130, total step 2142, train lm loss: 11.70895450115204
Epoch 9, global step 2140, total step 2142, train lm loss: 11.676241469383239

zhenhao-huang commented 2 years ago

How much data do you have? Too little will overfit. You could also test the generated output after each epoch.
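
To make that per-epoch check concrete, here is a rough greedy-decoding sketch; the model/tokenizer interface shown is a generic HuggingFace-style causal-LM one, not necessarily this repo's API:

```python
import torch

@torch.no_grad()
def greedy_generate(model, tokenizer, prompt, max_new_tokens=100):
    """Eyeball-check the model after an epoch: greedily extend a prompt."""
    ids = torch.tensor([tokenizer.encode(prompt)])
    for _ in range(max_new_tokens):
        logits = model(ids).logits           # assumes HF-style model output
        next_id = logits[0, -1].argmax()     # pick the most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    return tokenizer.decode(ids[0].tolist())
```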

DTDXG commented 2 years ago

The training set has 6,861 examples, the dev set 607, and the test set 706. The dev losses basically track the training losses, so it doesn't feel like an overfitting problem.

Dev losses:
Epoch: 0, Global step: 50, Dev loss: 10.897177935356186
Epoch: 0, Global step: 100, Dev loss: 11.064051420206116
Epoch: 0, Global step: 150, Dev loss: 10.948829458582969
Epoch: 0, Global step: 200, Dev loss: 10.709407380649022
Epoch: 1, Global step: 250, Dev loss: 11.056733254875455
Epoch: 1, Global step: 300, Dev loss: 11.254790682168235
Epoch: 1, Global step: 350, Dev loss: 11.34470785515649
Epoch: 1, Global step: 400, Dev loss: 11.40517230899561
Epoch: 2, Global step: 450, Dev loss: 11.444464611155647
Epoch: 2, Global step: 500, Dev loss: 11.471168018522716
Epoch: 2, Global step: 550, Dev loss: 11.490788605951128
Epoch: 2, Global step: 600, Dev loss: 11.506804388193856
Epoch: 3, Global step: 650, Dev loss: 11.519389268897829
Epoch: 3, Global step: 700, Dev loss: 11.530630763087954
Epoch: 3, Global step: 750, Dev loss: 11.540287361258553
Epoch: 3, Global step: 800, Dev loss: 11.549256132472129
Epoch: 3, Global step: 850, Dev loss: 11.557329172889391
Epoch: 4, Global step: 900, Dev loss: 11.572540203730265
Epoch: 4, Global step: 950, Dev loss: 11.579047784918831
Epoch: 4, Global step: 1000, Dev loss: 11.583820020868664
Epoch: 4, Global step: 1050, Dev loss: 11.588044154502096
Epoch: 5, Global step: 1100, Dev loss: 11.593776200498853
Epoch: 5, Global step: 1150, Dev loss: 11.59767199485075
Epoch: 5, Global step: 1200, Dev loss: 11.600352341930071
Epoch: 5, Global step: 1250, Dev loss: 11.602488824299403
Epoch: 6, Global step: 1300, Dev loss: 11.604372731276921
Epoch: 6, Global step: 1350, Dev loss: 11.605740200905572
Epoch: 6, Global step: 1400, Dev loss: 11.607391542622022
Epoch: 6, Global step: 1450, Dev loss: 11.609376152356466
Epoch: 6, Global step: 1500, Dev loss: 11.611124405548686
Epoch: 7, Global step: 1550, Dev loss: 11.614347740298225
Epoch: 7, Global step: 1600, Dev loss: 11.615619137883186
Epoch: 7, Global step: 1650, Dev loss: 11.616617286489124
Epoch: 7, Global step: 1700, Dev loss: 11.617309817246028
Epoch: 8, Global step: 1750, Dev loss: 11.620190438770113
Epoch: 8, Global step: 1800, Dev loss: 11.620503862698873
Epoch: 8, Global step: 1850, Dev loss: 11.621262658919607
Epoch: 8, Global step: 1900, Dev loss: 11.622025273385502
Epoch: 9, Global step: 1950, Dev loss: 11.62244535840693
Epoch: 9, Global step: 2000, Dev loss: 11.622625055057663
Epoch: 9, Global step: 2050, Dev loss: 11.623031420367104
Epoch: 9, Global step: 2100, Dev loss: 11.623468299706778

zhenhao-huang commented 2 years ago

Try generating some text and see how it looks.

DTDXG commented 2 years ago

The generated text is basically unusable. I adapted your novel example for this; the loss is still far too high, with no downward trend at all. I wonder if it's a problem with the generation template.

zhenhao-huang commented 2 years ago

What text generation task are you working on?

DTDXG commented 2 years ago

Long-form text generation for the medical domain. The training corpus is tens of thousands of medical documents, each saved as a txt file. The goal is: given a title, the model generates a concrete passage of explanation or description. That's why I followed your novel template.

zhenhao-huang commented 2 years ago

Are you doing summarization, or title-conditioned article writing? Could you show what your data structure looks like? The novel setup here does text continuation, so that template may not fit your task.

DTDXG commented 2 years ago

I'd say it belongs to title-conditioned writing. My data format is as follows [screenshot]: the txt file's title plus the body text inside the txt. The model's task is to generate a passage from the title. In my current preprocessing I simply put the title into the text as well and treat the whole thing as one passage.
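
A hypothetical sketch of the preprocessing just described, folding each txt file's title into its body and treating the result as one training passage; the paths, separator, and output layout are all assumptions:

```python
import json
import pathlib

samples = []
for path in sorted(pathlib.Path("corpus").glob("*.txt")):  # one article per file
    title = path.stem                        # title taken from the file name
    body = path.read_text(encoding="utf-8")
    samples.append(title + "\n" + body)      # title prepended to the body text

with open("train.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, ensure_ascii=False)
```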

zhenhao-huang commented 2 years ago

What does your preprocessing look like before converting to token ids?

DTDXG commented 2 years ago

[screenshot] The output format is [article]\n[article]\n[article]; the output content is what I pasted above.

zhenhao-huang commented 2 years ago

Taking the data you showed as an example: after preprocessing, are there no newline characters or spaces left?

DTDXG commented 2 years ago

I haven't done any handling of newlines or spaces, because each article in my corpus naturally contains quite a few paragraphs and I kept them all. Here's the latest version. [screenshot]

zhenhao-huang commented 2 years ago

\n is the newline character. After preprocessing, the output format should be [paragraph]\n[paragraph]\n[paragraph]. It's best to have no newline characters inside a [paragraph]; adjacent [paragraph]s are separated from each other by \n.
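
As I read this recommendation, the normalization would look roughly like the sketch below: remove whitespace inside each paragraph, then rejoin paragraphs with single \n separators. The function name is illustrative, not the repo's code:

```python
def normalize_article(raw_text):
    # Keep paragraph boundaries as single "\n"; no whitespace inside a paragraph.
    paragraphs = [p for p in raw_text.split("\n") if p.strip()]
    cleaned = ["".join(p.split()) for p in paragraphs]  # drop inner spaces/tabs
    return "\n".join(cleaned)
```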

zhenhao-huang commented 2 years ago

I previously tried keeping \n inside the [paragraph]s; the results were poor, so I removed them.

DTDXG commented 2 years ago

I've actually also tried the version with \n removed; the loss showed no obvious change. [screenshot]

zhenhao-huang commented 2 years ago

Have you tried this combination: at least 20 layers, enough data, training within 10 epochs, and no newlines or spaces in the text?

DTDXG commented 2 years ago

The first version I ran did exactly that: all newlines removed (but spaces left untouched), 32 layers, 10 epochs, and the loss still kept rising. That's the result I pasted earlier.

DTDXG commented 2 years ago

I'll try a version with the spaces removed as well. By the way, what did your loss look like back when you didn't remove \n?

zhenhao-huang commented 2 years ago

In every epoch it first rose and then converged; the other templates behaved similarly to the log below.

Epoch 0, global step 10, total step 50500, train lm loss: 4.772784179449081
Epoch 0, global step 20, total step 50500, train lm loss: 5.451375675201416
Epoch 0, global step 30, total step 50500, train lm loss: 5.140510568022728
Epoch 0, global step 40, total step 50500, train lm loss: 5.800495904684067
Epoch 0, global step 50, total step 50500, train lm loss: 5.095267969369888
Epoch 0, global step 60, total step 50500, train lm loss: 5.326895987987518
Epoch 0, global step 70, total step 50500, train lm loss: 5.162915593385696
Epoch 0, global step 80, total step 50500, train lm loss: 5.433998364210129
Epoch 0, global step 90, total step 50500, train lm loss: 5.283599418401718
Epoch 0, global step 100, total step 50500, train lm loss: 5.200229948759079
Epoch 0, global step 110, total step 50500, train lm loss: 5.001580744981766
Epoch 0, global step 120, total step 50500, train lm loss: 4.570914876461029
Epoch 0, global step 130, total step 50500, train lm loss: 4.8354296565055845
Epoch 0, global step 140, total step 50500, train lm loss: 4.561116194725036
Epoch 0, global step 150, total step 50500, train lm loss: 4.587766915559769
Epoch 0, global step 160, total step 50500, train lm loss: 4.481195455789566
Epoch 0, global step 170, total step 50500, train lm loss: 4.334920901060104
Epoch 0, global step 180, total step 50500, train lm loss: 4.089014765620232
Epoch 0, global step 190, total step 50500, train lm loss: 3.594325539469719
Epoch 0, global step 200, total step 50500, train lm loss: 3.79841234087944
...
Epoch 0, global step 10080, total step 50500, train lm loss: 2.523105573654175
Epoch 0, global step 10090, total step 50500, train lm loss: 2.435372691601515
Epoch 0, global step 10100, total step 50500, train lm loss: 2.3292624175548555
Epoch 1, global step 10110, total step 50500, train lm loss: 2.860862870514393
Epoch 1, global step 10120, total step 50500, train lm loss: 2.762403818964958
...
Epoch 4, global step 50460, total step 50500, train lm loss: 0.12245796332717873
Epoch 4, global step 50470, total step 50500, train lm loss: 0.12343024034053088
Epoch 4, global step 50480, total step 50500, train lm loss: 0.14352914304472505
Epoch 4, global step 50490, total step 50500, train lm loss: 0.11289326981641352
Epoch 4, global step 50500, total step 50500, train lm loss: 0.11984070185571909