Our paper, “SALLM: Security Assessment of Generated Code”, has been accepted to the 6th International Workshop on Automated and verifiable Software sYstem DEvelopment (ASYDE), co-located with the Automated Software Engineering conference (ASE 2024).
This is the first paper of its kind to introduce a framework for the automated security evaluation of generated code using both dynamic and static analysis. The framework includes 100 Python prompts, each paired with unit tests for functionality and security. We benchmarked several models with the SALLM framework and found that GPT-3.5 best balanced functional and secure code generation.
Posted on 07 Sep 2024