Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2016
DOI: 10.18653/v1/p16-1057
Latent Predictor Networks for Code Generation

Abstract: Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of gen…
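The marginalisation the abstract describes can be illustrated with a minimal sketch (not the paper's implementation; the predictor names and distributions below are invented for illustration): the probability of emitting an output token is the sum, over the latent choice of predictor, of the probability of picking that predictor times the probability it assigns the token.

```python
# Minimal sketch of marginalising over latent predictors:
# p(token) = sum_k p(predictor_k) * p(token | predictor_k).
# Predictor names and probabilities here are illustrative, not from the paper.

def marginal_token_prob(token, predictor_weights, predictor_dists):
    """Sum out the latent choice of predictor for one output token."""
    return sum(
        w * dist.get(token, 0.0)
        for w, dist in zip(predictor_weights, predictor_dists)
    )

# Two toy predictors: a character-level generator and a token-copy predictor.
weights = [0.6, 0.4]               # p(predictor | context)
dists = [
    {"foo": 0.2, "bar": 0.1},      # p(token | char-level predictor)
    {"foo": 0.5},                  # p(token | copy predictor)
]

p_foo = marginal_token_prob("foo", weights, dists)
# 0.6 * 0.2 + 0.4 * 0.5 = 0.32
```

Because the sum runs over all predictors, training can maximise this marginal likelihood directly, without supervising which predictor produced each token.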

Cited by 255 publications (260 citation statements). References 17 publications.
“…<s.food> to food or type of food, and (2) replacing delexicalised values by the actual attribute values of the entity currently selected by the DB pointer. This is similar in spirit to the Latent Predictor Network (Ling et al, 2016) where the token generation process is augmented by a set of pointer networks to transfer entity specific information into the response.…”
Section: Generation Network
confidence: 99%
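The lexicalisation step this excerpt describes can be sketched as follows (a toy illustration, not the cited system; the slot syntax and entity attributes are assumptions): delexicalised placeholders such as `<s.food>` in a generated template are replaced by the attribute values of the entity currently selected by the DB pointer.

```python
# Illustrative sketch of lexicalisation: replace delexicalised slots like
# "<s.food>" with the selected entity's attribute values. The "<s.attr>"
# slot format and the entity fields are assumptions for this example.

def lexicalise(template_tokens, entity):
    """Replace <s.attr> placeholders with the entity's attribute values."""
    out = []
    for tok in template_tokens:
        if tok.startswith("<s.") and tok.endswith(">"):
            attr = tok[3:-1]
            out.append(entity.get(attr, tok))  # keep the slot if attribute missing
        else:
            out.append(tok)
    return out

entity = {"food": "italian", "area": "centre"}
tokens = ["serving", "<s.food>", "food", "in", "the", "<s.area>"]
result = lexicalise(tokens, entity)
# → ["serving", "italian", "food", "in", "the", "centre"]
```

A pointer network plays the analogous role at generation time: instead of a post-hoc string substitution, the model learns a distribution over positions in the entity's attributes and copies values directly into the response.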
“…LSTMs in Dam et al [17]) describe context distributed representations while sequentially generating code. Ling et al [18] and Allamanis et al [19] combine the code-context distributed representation with distributed representations of other modalities (e.g., natural language) to synthesize code.…”
Section: A Deep Code Representation
confidence: 99%
“…Ling et al [34] focus on the problem of generating valid code from natural language descriptions on Hearthstone and Magic cards. This is in a sense the other side of the coin, and would be needed as a part of a functioning card generation system, to use the code for these decks for artificial agents that can playtest new cards.…”
Section: Generating Cards
confidence: 99%