GitHub, owned by Microsoft, today launched its Copilot AI tool, which suggests lines of code to developers in their code editor. GitHub originally partnered with OpenAI last year to release a preview of Copilot, and as of today it is generally available to all developers.
Priced at $10 per month or $100 per year, GitHub Copilot suggests the next line of code as developers type in an integrated development environment (IDE) such as Visual Studio Code, Neovim, or a JetBrains IDE. Copilot can suggest complete methods and complex algorithms, as well as boilerplate code and assistance with unit testing.
Over 1.2 million developers have signed up to use the GitHub Copilot preview in the past 12 months, and the tool will remain free for verified students and maintainers of popular open source projects. GitHub says that in files where Copilot is enabled, it now writes nearly 40% of the code.
“Much like the rise of compilers and open source, we believe AI-assisted coding will fundamentally change the nature of software development, giving developers a new tool to write code easier and faster so they can be happier in their lives,” says GitHub CEO Thomas Dohmke.
Microsoft’s billion-dollar investment in OpenAI, the research company now run by former Y Combinator president Sam Altman, led to the creation of GitHub Copilot. It is built on OpenAI Codex, a descendant of OpenAI’s flagship GPT-3 language model. GitHub Copilot has been controversial, however. Just days after its preview launch, questions arose about the legality of training Copilot on publicly available code posted to GitHub. Aside from copyright concerns, one study also found that roughly 40% of Copilot’s output in security-relevant scenarios contained vulnerabilities.
Microsoft isn’t the only company working on automated AI tools to help with coding. Last year, Google-owned DeepMind revealed an artificial intelligence system called AlphaCode, designed to write computer programs “at a competitive level.” Tested on Codeforces, a competitive programming platform, AlphaCode achieved an “estimated rank” placing it within the top 54% of human coders. Such competition problems differ from the tasks an everyday programmer faces, but the results hint at how AI coding systems could assist developers in the future.