Developers using current AI assistants risk producing buggier, less secure, and potentially litigable code.

Published: 2023-03-10

According to an article by Ryan Daws for AInews.com, developers who use AI assistants tend to write buggier, less secure code.

The research, titled "Do Users Write More Insecure Code with AI Assistants?", investigates how developers use AI coding tools such as the contentious GitHub Copilot.

The authors noted that participants with access to an AI assistant often produced more security flaws than those without, with particularly notable effects for string encryption and SQL injection.
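To make the SQL injection finding concrete, the sketch below is a generic Python illustration (not code from the study) contrasting a query built by string formatting, which an input like `x' OR '1'='1` can hijack, with a parameterized query that treats input purely as data:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # VULNERABLE: interpolating user input into the SQL string lets an
    # input like "x' OR '1'='1" rewrite the query (SQL injection).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # SAFE: the ? placeholder binds the value as a parameter,
    # so it can never be parsed as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```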

The study also found that programmers who use AI assistants have misplaced confidence in the quality of their code.

The authors said, "We also discovered that participants who had access to an AI assistant were more likely to believe they developed secure code than participants who did not have access to the AI assistant."

As part of the study, 47 participants were asked to write code in response to several prompts. Some participants had access to AI assistance; the others did not.

The first prompt asked them to write two functions in Python: one that encrypts and one that decrypts a given string using a given symmetric key.
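The paper does not publish a reference solution, but a minimal sketch of one secure answer, assuming the widely used third-party cryptography package, might look like this; Fernet provides authenticated symmetric encryption, so tampered ciphertext is rejected rather than silently decrypted:

```python
# Assumes: pip install cryptography (not part of the study's materials).
from cryptography.fernet import Fernet

def encrypt(message: str, key: bytes) -> bytes:
    # key is a 32-byte URL-safe base64 key, e.g. from Fernet.generate_key()
    return Fernet(key).encrypt(message.encode("utf-8"))

def decrypt(token: bytes, key: bytes) -> str:
    # Raises cryptography.fernet.InvalidToken if the ciphertext was
    # tampered with -- the authenticity check the study looked for.
    return Fernet(key).decrypt(token).decode("utf-8")

key = Fernet.generate_key()
assert decrypt(encrypt("attack at dawn", key), key) == "attack at dawn"
```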

Of the developers working without AI help, 79 percent answered that prompt correctly, compared with 67 percent of the assisted group.

Welch's unequal variances t-test revealed that the aided group was "substantially more likely to propose an unsafe solution (p < 0.05), significantly more likely to utilize simple ciphers, such as substitution ciphers (p < 0.01), and not conduct an authenticity check on the final returned result."
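For contrast, the "simple ciphers" the study flags look roughly like the hypothetical Caesar-style substitution below (a reconstruction, not code from the paper): every key can be brute-forced in 26 attempts, and nothing authenticates the decrypted result.

```python
def encrypt(message: str, key: int) -> str:
    # Insecure: shifts each ASCII letter by `key` positions and leaves
    # other characters untouched. Trying all 26 shifts (or simple
    # frequency analysis) breaks it immediately.
    out = []
    for c in message:
        if c.isascii() and c.isalpha():
            base = ord("A") if c.isupper() else ord("a")
            out.append(chr(base + (ord(c) - base + key) % 26))
        else:
            out.append(c)
    return "".join(out)

def decrypt(ciphertext: str, key: int) -> str:
    # Decryption is just the reverse shift; there is no integrity check,
    # so corrupted or forged ciphertext decrypts without any error.
    return encrypt(ciphertext, -key)
```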

One participant described the AI help as "like [developer Q&A community] Stack Overflow but better, because it never tells you that your question was silly," and expressed hope that it gets deployed.

Last month, a lawsuit over the GitHub Copilot coding assistant was filed against OpenAI and Microsoft. Copilot is trained on "billions of lines of public code... produced by others."

According to the lawsuit, Copilot violates developers' rights by copying their code without giving them proper credit. Developers who use Copilot's suggested code may therefore unwittingly infringe copyright.

"Copilot provides the user with the task of ensuring copyleft compliance. As Copilot gets better, users probably risk increased responsibility, according to Bradley M. Kuhn of the Software Freedom Conservancy.

In conclusion, developers who use today's AI assistants risk writing code that is buggier, less secure, and open to legal action.