Open kundtx opened 1 year ago
G1 Haizhou Liu: Is there a formatting or version error? Many sections are left blank. Also, in the title, it might be better to write "Knowledge Graph" instead of "KG".
(Instructor) Please upload a complete version of this poster.
@yang2011 G18: Thanks for your reminder. I have updated the poster file. Thanks♪(・ω・)ノ
@Prof-Greatfellow G18: Thanks for your comment. I have updated the poster file. Thanks♪(・ω・)ノ
@tangyuanbo1
Can you show some generated output from your model?
@tangyuanbo1
Sure, here is an example. These multi-scale summaries are generated from a source text of 708 words.
Long-level (345 words):
sequence transduction models have been widely explored in many natural language processing tasks . but the target sequence usually consists of discrete tokens which represent word indices in a given vocabulary . this data set is specifically designed to carry out the task of caption-to-action generation . the caption is comprised of a sequence of words describing the interactive movement between two people, and the action is a captured sequence of poses representing the movement . we propose a model to innovatively combine Multi-Head Attention (M) . this paper proposes an approach for applying GANs to NMT . we build a conditional sequence generative adversarial net . the generator aims to generate sentences which are hard to be discriminated from human-translated sentences (i.e., the golden target sentences), and the discriminator makes efforts to discriminate the machine-generated sentences from human ones . both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. . .generative model G captures the data distribution, and discriminative model D that estimates the probability that a sample came from the training data rather than G . the training procedure for G corresponds to a minimax two-player game . in the space of arbitrary functions G and D, a unique solution exists, with G recovering the training information distribution and D equal to 12 everywhere . there is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples . we propose a . .. the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated . in this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems . the generative model has enjoyed considerable success in generating real-valued data, but has limitations when the goal is for generating discrete tokens . a major reason lies in that the discrete output ''g
Medium-level (185 words):
sequence transduction models have been widely explored in many natural language processing tasks . but the target sequence usually consists of discrete tokens which represent word indices in a given vocabulary . this data set is specifically designed to carry out the task of caption-to-action generation . this paper proposes an approach for applying GANs to NMT . we build a conditional sequence generative adversarial net . the generator aims to generate sentences which are hard to be discriminated from human-translated sentences . generative model G captures the data distribution, and discriminative model D that estimates the probability that a sample came from the training data rather than G . the framework corresponds to a minimax two-player game . in the space of arbitrary functions G and D, a unique solution exists . the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated . in this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems .
Short-level (119 words):
sequence transduction models have been widely explored in many natural language processing tasks . but the target sequence usually consists of discrete tokens which represent word indices in a given vocabulary. this paper proposes an approach for applying GANs to NMT . we build a conditional sequence generative adversarial net . the generator aims to generate sentences hard. generative model G captures the data distribution, and discriminative model D that estimates the probability that a sample came from the training data rather than G . the framework corresponds to .the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire
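For anyone who wants to reproduce a similar multi-scale effect, here is a minimal sketch that runs an off-the-shelf Hugging Face summarizer three times with different length budgets. This is only an illustration, not the SeqGAN-based model from the poster; the model name (`facebook/bart-large-cnn`), the length settings, and the `source.txt` path are assumptions.

```python
# Minimal sketch of multi-scale summarization with an off-the-shelf model.
# NOTE: this is NOT the poster's SeqGAN-based summarizer; the model name and
# the length budgets below are assumptions chosen only to illustrate
# long / medium / short outputs.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Rough budgets mirroring the example above (345 / 185 / 119 words).
# max_length/min_length are counted in tokens, so the word counts of the
# outputs will only approximate these targets.
scales = {
    "Long-level": dict(max_length=400, min_length=300),
    "Medium-level": dict(max_length=220, min_length=150),
    "Short-level": dict(max_length=140, min_length=90),
}

def multi_scale_summaries(source_text: str) -> dict:
    """Return one summary per scale for the given source text."""
    results = {}
    for name, length_cfg in scales.items():
        out = summarizer(source_text, do_sample=False, truncation=True, **length_cfg)
        results[name] = out[0]["summary_text"]
    return results

if __name__ == "__main__":
    with open("source.txt") as f:  # hypothetical 708-word source document
        text = f.read()
    for name, summary in multi_scale_summaries(text).items():
        print(f"----- {name} ({len(summary.split())} words) -----")
        print(summary)
```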
http://8.129.175.102/lfd2022fall-poster-session/18.html