gancx closed this issue 10 months ago.
Thank you for your interest in OS-KDFT! We are currently developing a learning strategy for OS-KDFT that improves on the version published in the paper. In the paper we reported an equal error rate of 5.64% on the VoxCeleb1 dataset, while our improved version achieves an equal error rate of 3.85%. We are re-running that experiment so we can share its loss trends, trained model weights, etc., and will post them as soon as we are done, within the next day or two.
Thank you. Looking forward to your latest work.
Dear Authors,
After running your code, I have some questions. I found that in your code wav2vec2 outputs features from only the last hidden layer, whereas WavLM and HuBERT output features from all hidden layers. In addition, did you average the features from the hidden layers for the loss computation? Thank you.
Regards, CX Gan
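For reference, the behavior asked about above can be inspected through the HuggingFace transformers API. This is a minimal sketch under the assumption that the repository loads its pre-trained models via transformers; the checkpoint name is illustrative, not necessarily the one used in the repository:

```python
# Sketch: with output_hidden_states=True, the model exposes every encoder layer;
# without it, only the final hidden state is returned.
import torch
from transformers import WavLMModel  # Wav2Vec2Model / HubertModel behave the same way

wav = torch.randn(1, 16000)  # dummy 1-second, 16 kHz waveform

model = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")  # illustrative checkpoint
with torch.no_grad():
    out = model(wav, output_hidden_states=True)

print(out.last_hidden_state.shape)  # (batch, frames, hidden_dim) - last layer only
print(len(out.hidden_states))       # feature projection + each encoder layer
```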
Dear @gancx,
This is one of the changes in our improved experiments. In previous versions of our experiments we used only the output of the PLM's last transformer encoder layer, but many studies have found that additionally utilizing the outputs of intermediate layers yields superior performance. To follow this trend, we also ran experiments that use the intermediate-layer outputs, and instead of averaging the features from the hidden layers, we aggregate the output of each layer with a weighted sum.
Best wishes, JW Heo
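To make the aggregation concrete, below is a minimal PyTorch sketch of a learnable weighted sum over hidden-layer outputs of the kind described above. The module name and shapes are illustrative; this is not necessarily the exact implementation used in the improved experiments:

```python
import torch
import torch.nn as nn

class LayerWeightedSum(nn.Module):
    """Aggregate per-layer PLM features with learnable, softmax-normalized weights."""
    def __init__(self, num_layers: int):
        super().__init__()
        # One scalar weight per layer, learned jointly with the rest of the model.
        self.weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states):
        # hidden_states: tuple/list of (batch, frames, dim) tensors, one per layer.
        stacked = torch.stack(hidden_states, dim=0)       # (layers, batch, frames, dim)
        norm_w = torch.softmax(self.weights, dim=0)       # weights sum to 1
        return (norm_w.view(-1, 1, 1, 1) * stacked).sum(dim=0)  # (batch, frames, dim)

# Illustrative usage with a HuggingFace-style output:
# out = plm(wav, output_hidden_states=True)
# pooled = LayerWeightedSum(len(out.hidden_states))(out.hidden_states)
```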
Thanks for your answer. I wonder whether you will update wav2vec2 accordingly, so that its outputs are also weighted-summed.
Dear @gancx,
In the near future (likely sometime in February), we will add experimental results using wav2vec 2.0 and data2vec. We, too, are looking forward to those results.
Dear @gancx,
I was wondering if you could provide some explanation about the link you forwarded.
Best regards, Jungwoo Heo
Never mind. I may have clicked the wrong button. Sorry for bothering you.
Thanks for your hard work. I wonder whether you will share the training log, so that we can refer to it?