InverseCoder is a series of code LLMs instruction-tuned on data that the model generates from itself through Inverse-Instruct. This repo (under development) mainly contains the code for data generation (i.e., Inverse-Instruct).
```bash
pip install -r requirements.txt
```
Specify the paths of the datasets, then extract code snippets from the responses:
```bash
python src/scripts/extract_code.py
```
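As a rough picture of what this step does, the sketch below pulls fenced code blocks out of a JSONL file of model responses. The file names and the `response`/`code` field names are illustrative assumptions, not the exact logic of `extract_code.py`:

```python
# Hypothetical illustration of the extraction step: collect markdown
# code fences from each response into a snippets file.
import json
import re

CODE_BLOCK = re.compile(r"```(?:\w+)?\n(.*?)```", re.DOTALL)

def extract_snippets(input_path: str, output_path: str) -> None:
    with open(input_path) as fin, open(output_path, "w") as fout:
        for line in fin:
            record = json.loads(line)
            for snippet in CODE_BLOCK.findall(record["response"]):
                fout.write(json.dumps({"code": snippet.strip()}) + "\n")

extract_snippets("dataset.jsonl", "code_snippets.jsonl")
```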
Use vLLM to generate instructions from the code snippets:
```bash
python src/InstGen/sample_vllm_parallel_problem_prompt_evol.py \
    --model_path=$model_path \
    --input_path=$input_path \
    --save_path=$save_path \
    --num_gpus 8
```
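Conceptually, this step asks the model to invent an instruction that each code snippet would answer. A minimal vLLM sketch is below; the prompt wording and the inline example snippet are placeholders, not the repo's actual templates:

```python
# Minimal vLLM sketch of instruction generation from code.
from vllm import LLM, SamplingParams

code_snippets = ["def add(a, b):\n    return a + b"]  # from the previous step

# tensor_parallel_size mirrors --num_gpus 8 above; adjust to your hardware.
llm = LLM(model="wyt2000/InverseCoder-CL-7B", tensor_parallel_size=8)
params = SamplingParams(temperature=0.8, max_tokens=512)

prompts = [
    f"Write a programming problem that the following code solves:\n\n{code}"
    for code in code_snippets
]
outputs = llm.generate(prompts, params)
instructions = [out.outputs[0].text.strip() for out in outputs]
```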
Then combine the sampled instructions with their source code:
```bash
python src/scripts/merge_evol_and_summary_samples.py
```
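The merge can be pictured as grouping all candidate instructions by the code snippet they were generated from. The sketch below assumes both sampling runs emit JSONL records with `code` and `instruction` fields; the file names are illustrative:

```python
# Illustrative merge: one record per code snippet, holding all of its
# candidate instructions from the evol and summary sampling runs.
import json
from collections import defaultdict

def load_jsonl(path):
    with open(path) as f:
        return [json.loads(line) for line in f]

candidates = defaultdict(list)
for record in load_jsonl("evol_samples.jsonl") + load_jsonl("summary_samples.jsonl"):
    candidates[record["code"]].append(record["instruction"])

with open("merged_samples.jsonl", "w") as f:
    for code, instructions in candidates.items():
        f.write(json.dumps({"code": code, "instructions": instructions}) + "\n")
```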
Use vLLM to generate self-evaluations and calculate LM-scores:
```bash
python src/SelectData/sample_vllm_parallel_inst_pair.py \
    --model_path=$model_path \
    --input_path=$input_path \
    --save_path=$save_path \
    --num_gpus 8
```
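To illustrate the scoring idea, here is a minimal sketch in the spirit of AutoMathText's yes/no self-evaluation: the LM-score is taken from the model's logits for "YES" vs. "NO" when asked whether an instruction matches its code. The evaluation prompt and the single-token approximation are assumptions, not the repo's exact implementation:

```python
# Hedged sketch of an LM-score from a yes/no self-evaluation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "wyt2000/InverseCoder-CL-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

def lm_score(instruction: str, code: str) -> float:
    prompt = (
        "Does the following code correctly answer the instruction? "
        "Reply YES or NO.\n\n"
        f"Instruction:\n{instruction}\n\nCode:\n{code}\n\nAnswer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    # First-subword approximation of the YES/NO token logits.
    yes = logits[tokenizer.encode("YES", add_special_tokens=False)[0]]
    no = logits[tokenizer.encode("NO", add_special_tokens=False)[0]]
    return torch.softmax(torch.stack([yes, no]), dim=0)[0].item()
```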
Then select the best instruction for each response to obtain the new dataset:
```bash
python src/scripts/sorted_data_samples.py
```
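The selection step amounts to keeping, for each code response, the candidate instruction with the highest LM-score. The sketch below assumes a scored JSONL file with `code`, `instruction`, and `lm_score` fields; the names are illustrative:

```python
# Illustrative selection: argmax over LM-scores per code snippet, then
# write (instruction, response) pairs as the new instruction-tuning set.
import json

best = {}
with open("scored_pairs.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        code = rec["code"]
        if code not in best or rec["lm_score"] > best[code]["lm_score"]:
            best[code] = rec

with open("inverse_instruct_dataset.jsonl", "w") as f:
    for rec in best.values():
        f.write(json.dumps(
            {"instruction": rec["instruction"], "response": rec["code"]}) + "\n")
```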
We first fine-tune the base models for 1 epoch on the synthetic data generated through Inverse-Instruct, then continue fine-tuning for 2 epochs on the original instruction-tuning dataset to obtain the InverseCoder models. For a fair comparison, we use the same hyperparameter and prompt settings as Magicoder.
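For orientation, the two-stage schedule might look like the following HF `Trainer` sketch. The base model, dataset paths, prompt formatting, and hyperparameters here are placeholders; the actual training follows Magicoder's settings:

```python
# Hedged sketch of the two-stage fine-tuning schedule.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "codellama/CodeLlama-7b-Python-hf"  # base for InverseCoder-CL
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

def tokenize(batch):
    # Placeholder formatting; the real runs use Magicoder's prompt template.
    texts = [f"{i}\n{r}" for i, r in zip(batch["instruction"], batch["response"])]
    return tokenizer(texts, truncation=True, max_length=1024)

def prepare(path):
    ds = load_dataset("json", data_files=path)["train"]
    return ds.map(tokenize, batched=True, remove_columns=ds.column_names)

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

# Stage 1: 1 epoch on the Inverse-Instruct synthetic data.
Trainer(model=model,
        args=TrainingArguments("stage1", num_train_epochs=1),
        train_dataset=prepare("inverse_instruct_dataset.jsonl"),
        data_collator=collator).train()

# Stage 2: 2 more epochs on the original instruction-tuning data.
Trainer(model=model,
        args=TrainingArguments("stage2", num_train_epochs=2),
        train_dataset=prepare("original_dataset.jsonl"),
        data_collator=collator).train()
```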
Similar to Magicoder-S-DS-6.7B, use the code below to get started with the model. Make sure you have installed the transformers library.
```python
from transformers import pipeline
import torch

INVERSECODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.

@@ Instruction
{instruction}

@@ Response
"""

instruction = <Your code instruction here>

prompt = INVERSECODER_PROMPT.format(instruction=instruction)
generator = pipeline(
    model="wyt2000/InverseCoder-CL-7B",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
result = generator(prompt, max_length=1024, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
arXiv: https://arxiv.org/abs/2407.05700
Please cite the paper if you use the code, models or datasets from InverseCoder.
```bibtex
@misc{wu2024inversecoderunleashingpowerinstructiontuned,
      title={InverseCoder: Unleashing the Power of Instruction-Tuned Code LLMs with Inverse-Instruct},
      author={Yutong Wu and Di Huang and Wenxuan Shi and Wei Wang and Lingzhe Gao and Shihao Liu and Ziyuan Nan and Kaizhao Yuan and Rui Zhang and Xishan Zhang and Zidong Du and Qi Guo and Yewen Pu and Dawei Yin and Xing Hu and Yunji Chen},
      year={2024},
      eprint={2407.05700},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.05700},
}
```
- Magicoder: Training code, original datasets and data decontamination
- DeepSeek-Coder: Base model for InverseCoder-DS
- CodeLlama: Base model for InverseCoder-CL
- AutoMathText: Self-evaluation and data selection method