Program Synthesis
138 papers with code • 3 benchmarks • 5 datasets
Program synthesis is the process of automatically generating a program or code snippet that satisfies a given specification or set of requirements. This can include generating code from a formal specification, a natural language description, or example inputs and outputs. The primary goal of program synthesis is to minimize human intervention in the coding process, reduce errors, and improve productivity.
Program synthesis often involves advanced algorithms, artificial intelligence, and machine learning techniques that search the space of possible programs for one that meets the given constraints. The search can be guided by a variety of techniques, such as constraint solving, symbolic execution, and genetic algorithms.
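To make the search concrete, below is a minimal sketch of enumerative synthesis from input-output examples. The toy DSL (the variable x, the constants 1 and 2, addition, and multiplication) and all names are illustrative rather than taken from any particular system.

```python
import itertools

# Target behaviour f(x) = 2*x + 1, given only as input-output examples.
EXAMPLES = [(1, 3), (2, 5), (5, 11)]

def evaluate(prog, x):
    """Interpret a program tree ('x', an int, or (op, left, right)) on input x."""
    if prog == "x":
        return x
    if isinstance(prog, int):
        return prog
    op, left, right = prog
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a * b

def programs(depth):
    """Yield every program of at most the given expression depth."""
    if depth == 0:
        yield from ("x", 1, 2)
        return
    smaller = list(programs(depth - 1))
    yield from smaller
    for op in ("+", "*"):
        for left, right in itertools.product(smaller, repeat=2):
            yield (op, left, right)

def synthesize(examples, max_depth=2):
    """Return the first enumerated program consistent with every example."""
    for prog in programs(max_depth):
        if all(evaluate(prog, x) == y for x, y in examples):
            return prog
    return None

print(synthesize(EXAMPLES))  # ('+', 'x', ('+', 'x', 1)), i.e. 2*x + 1
```

Real systems replace this brute-force enumeration with the guidance techniques above, pruning candidates with constraint solvers or prioritizing them with learned models.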
Most implemented papers
CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis
To democratize this, we train and release a family of large language models up to 16.1B parameters, called CODEGEN, on natural language and programming language data, and open-source the training library JAXFORMER.
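As a usage sketch, the released checkpoints can be loaded through Hugging Face transformers; the prompt and decoding settings below are illustrative, and the small 350M mono (Python) model is chosen only to keep the example lightweight.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

# Describe the desired program in a comment, then let the model complete it.
prompt = "# Return the n-th Fibonacci number\ndef fib(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,  # CodeGen defines no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```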
Neural Program Synthesis with Priority Queue Training
Implemented in tensorflow/models, the repository of models and examples built with TensorFlow.
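The core loop, sketched below, maintains a small priority queue of the highest-reward programs found so far and repeatedly fits the generative model to them; `sample_program`, `reward`, and `train_step` are hypothetical stand-ins for the paper's RNN policy and its program-scoring reward.

```python
import heapq

def priority_queue_training(sample_program, reward, train_step, steps=1000, k=16):
    queue = []  # min-heap of (score, program); the worst entry is evicted first
    for _ in range(steps):
        program = sample_program()
        score = reward(program)
        if len(queue) < k:
            heapq.heappush(queue, (score, program))
        elif score > queue[0][0]:
            heapq.heapreplace(queue, (score, program))
        # Supervised objective: fit the model to the best programs so far.
        train_step([program for _, program in queue])
    return max(queue)[1]  # highest-reward program in the queue
```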
Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing
We present Memory Augmented Policy Optimization (MAPO), a simple and novel way to leverage a memory buffer of promising trajectories to reduce the variance of policy gradient estimate.
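A toy illustration of the estimator with a categorical policy over ten candidate "programs": trajectories in the buffer contribute their expected reward exactly (no sampling variance), while the remaining probability mass is covered by sampling outside the buffer. The setup is invented for illustration; the paper applies this to sequence policies for semantic parsing.

```python
import torch

rewards = torch.tensor([0., 0., 1., 0., 0., 0.2, 0., 0., 0.5, 0.])
buffer = torch.tensor([2, 8])              # promising programs found so far
logits = torch.zeros(10, requires_grad=True)
opt = torch.optim.SGD([logits], lr=0.5)

for _ in range(200):
    probs = torch.softmax(logits, dim=0)
    # Buffer trajectories: exact expectation, zero variance.
    inside = (probs[buffer] * rewards[buffer]).sum()
    # Everything else: one REINFORCE sample drawn outside the buffer,
    # weighted by the probability mass that lies outside it.
    outside_probs = probs.detach().clone()
    outside_probs[buffer] = 0.0
    a = torch.multinomial(outside_probs, 1).item()
    outside_weight = (1.0 - probs[buffer].sum()).detach()
    outside = outside_weight * rewards[a] * torch.log(probs[a])
    opt.zero_grad()
    (-(inside + outside)).backward()
    opt.step()

print(torch.softmax(logits, dim=0).argmax().item())  # -> 2, the best program
```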
DeepCoder: Learning to Write Programs
We develop a first line of attack for solving programming competition-style problems from input-output examples using deep learning.
RobustFill: Neural Program Learning under Noisy I/O
Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation.
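An interface-level sketch of that distinction (both classes are hypothetical stand-ins, not the paper's architecture):

```python
from typing import List, Tuple

Example = Tuple[str, str]  # (input string, output string)

class NeuralSynthesizer:
    """Program synthesis: condition on I/O examples, emit an explicit program."""
    def synthesize(self, examples: List[Example]) -> str:
        """Return a program (e.g. in a string-transformation DSL) that can
        be inspected and executed on unseen inputs."""
        ...

class NeuralInducer:
    """Program induction: no explicit program is ever produced."""
    def apply(self, examples: List[Example], new_input: str) -> str:
        """Condition on the examples and decode the output for new_input
        directly, token by token."""
        ...
```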
DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning
It builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages.
Programming Puzzles
The dataset is comprehensive, spanning a range of difficulties and domains: from trivial string manipulation problems, to classic programming puzzles (e.g., Tower of Hanoi), to interview/competitive-programming problems (e.g., dynamic programming), to longstanding open problems in algorithms and mathematics (e.g., factoring).
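Concretely, each puzzle is a short Python function, and solving it means finding an argument that makes the function return True; the example below follows the dataset's format but is illustrative rather than quoted from the benchmark.

```python
# A puzzle is a function `sat`; a solution is any input that satisfies it.
def sat(s: str) -> bool:
    return "Hello " + s == "Hello world"

# A synthesizer (or a person) must produce a satisfying input:
answer = "world"
assert sat(answer)
```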
Learning Program Synthesis for Integer Sequences from Scratch
We present a self-learning approach for synthesizing programs from integer sequences.
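A minimal sketch of the specification in this setting: a candidate program is accepted when it reproduces a prefix of the target sequence (the indexing convention and prefix length are illustrative choices).

```python
def matches(program, prefix):
    """Accept a candidate iff it reproduces the given sequence prefix."""
    return all(program(n) == v for n, v in enumerate(prefix))

print(matches(lambda n: n * n, [0, 1, 4, 9, 16]))  # True: candidate accepted
print(matches(lambda n: 2 * n, [0, 1, 4, 9, 16]))  # False: rejected
```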
InCoder: A Generative Model for Code Infilling and Synthesis
Our model is the first generative model that is able to directly perform zero-shot code infilling, which we evaluate on challenging tasks such as type inference, comment generation, and variable re-naming.
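A rough sketch of single-span infilling with the released facebook/incoder-1B checkpoint, assuming InCoder's sentinel-token format (the prefix and suffix are joined by <|mask:0|>, and a second <|mask:0|> triggers generation of the missing span); consult the model card for the authoritative recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-1B")
model = AutoModelForCausalLM.from_pretrained("facebook/incoder-1B")

prefix = "def count_lines(path):\n    "
suffix = "\n    return n\n"
prompt = prefix + "<|mask:0|>" + suffix + "<|mask:0|>"

inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48, do_sample=False)
# The generated infill for the masked span ends at the <|endofmask|> token.
print(tokenizer.decode(out[0]))
```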
xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval
Recently, pre-trained large language models (LLMs) have shown impressive abilities in generating code from natural language descriptions, repairing buggy code, translating code between languages, and retrieving relevant code segments.