Tue, 07 Feb 2023 17:00 UTC by garethbrown

Watching technically minded people use AI services built on Large Language Models (LLMs), like OpenAI's ChatGPT and Google's Bard, provides insight into a future where the need for some of the core competencies required of today's developers will be diminished.

Specific knowledge of programming languages, libraries and APIs will take a back seat to a broad understanding of software engineering concepts and the ability to distil generalised descriptions of needs into clear, specific requirements.

One thing that stands out when watching commands being given to ChatGPT to produce code is that the commands need to be clear and specific, usually to a level that requires the author to have programming experience. In the not-so-distant future, developers will move towards directing AIs to generate code rather than authoring it themselves. While I believe these AI tools will open up application development to less technical types, enterprise-level development will remain the domain of those who understand code and computing.

To ensure that the code produced by AIs is robust and true to the requirements, full test coverage will need to be specified to exercise the code. While the writing of that test code may also be AI-assisted, the assertions will be determined by humans (otherwise the AI-generated code can only be tested against the AI's original interpretation of the requirements).
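To make the division of labour concrete, here is a minimal sketch in Python. The function name `normalise_email` and its behaviour are hypothetical, standing in for any AI-generated implementation; the point is that the assertions encode requirements a human has decided on, independently of how the generated code happens to behave.

```python
def normalise_email(address: str) -> str:
    """Illustrative stand-in for an AI-generated implementation."""
    return address.strip().lower()

# Human-authored assertions, derived directly from the requirements
# rather than from the generated code's own behaviour:
assert normalise_email("  Alice@Example.COM ") == "alice@example.com"
assert normalise_email("bob@example.com") == "bob@example.com"
```

If the generated implementation drifts from the requirements, these human-determined checks fail, whereas tests that an AI derived from its own reading of the requirements might happily pass.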

At this point the Dev role has become ‘BaDevTest’ (Business Analyst Dev Test), later to become ‘BaTest’ (Business Analyst Test) as the AI tools improve.

How long will this take? I don't know. As of today, GitHub Copilot is occasionally useful, and ChatGPT allows us to get away with being lazy for basic coding tasks. It's easy, however, to foresee rapid improvement in the capability of these tools and their integration into developer workflows. I'm anticipating a clear move in this direction within five years.
