Rise of the Semi-Pro: How LLMs Empower a New Generation of Software Builders

By Jono Bergquist

Jun 5, 2023

If you’ve been following the public experimentation with ChatGPT, you already know that working code can be produced from natural language descriptions. Large language models (LLMs) like ChatGPT enable non-developers to unlock the power of software engineering. But they aren't perfect yet. LLMs still hallucinate and return broken but convincing code. While it’s a great party trick, it isn't a reliable method for non-technical normals to productively create software.

With the introduction of LLMs, we've entered the next evolutionary stage of the democratization of software development. Before LLMs there were low/no-code platforms; before that, frameworks on top of interpreted languages (e.g. Ruby on Rails); before that, systems languages (e.g. C/C++/C#); and before all of that, machine languages. Each new paradigm was a method for abstracting away something harder or more complex.

Each new paradigm has lowered the bar for who can build software, regardless of whether they fit the previous generation's definition of who a "developer" is. All of this makes me wonder: given the constraints of the current generation of LLMs, who is the next tier of builders to be unlocked?

Enter the Semi-Pro

For lack of a better term, I'm calling this group the Semi-Pros. A Semi-Pro inhabits the liminal space between the land of computer science (CS) and normals-burg. They typically act as the bridge between the worlds of users and builders. You can think of them as the equivalent of mechanics or gearheads in the world of cars: They are the people you go to first when you have a car question, but they aren’t designing cars or the manufacturing lines that produce them. Imagine if your “car-girl” or “-guy” suddenly had a robot assistant that could help them turn their ideas into reality, whether that’s a way to improve the handling of a car or a novel way to manufacture a part that frequently fails. Suddenly a world that was reserved for a specific class of individuals would be opened to an entirely new demographic.

You've probably run into a Semi-Pro with a job title like Sales Engineer, Solutions Architect, Customer Support Engineer or Technical Account Manager. They are technically proficient but they are not software engineers (SWE). They have high EQ and can defuse an irate customer after a service outage — but don't expect them to get hyped to work the booth at a trade show.

They are the non-ironic manifestation of Tom Smykowski from the movie Office Space:

If you're feeling either seen or activated by this meme then you're probably a Semi-Pro.

The important point is that Semi-Pros are comfortable in CS-land, but they spend most of their time acting like tour guides rather than contributing to building the world. GPT-4 and the current generation of LLMs invert this narrative. Suddenly, a group that was previously unable to participate in development can transform from tour guide to contributor.

Why the Semi-Pro?

Two reasons:

1. They are literate in the language

2. They have broad domain knowledge

Writing Software With Literacy but not Mastery

Before the advent of LLMs, writing software wasn't an impossible task for Semi-Pros, but it was an expensive one. Semi-Pros, by definition, haven't fully internalized the syntax of a programming language. This meant, more often than not, they spent a greater portion of their time on Reddit, Stack Overflow, or Quora looking for solutions to syntax issues than actually churning out functional code.

Imagine trying to write an essay, but your least favorite grammar teacher is standing over your shoulder and grading your spelling and grammar in real-time. He lets out quiet gasps of horror or clears his throat in disapproval each time you make a misstep, but offers no suggestions on how to fix your error. Worse still, if you try to move on to the next sentence without solving the mistake, he slaps your wrist and forces you to fix your mistake before moving on. When you finally break down and beg him for help, he points you to the MLA Handbook and expects you to find the mistake you made and the solution. Learning in this environment is excruciating.

One of my favorite distillations of this comes from this article on the impact of LLMs by Paul Kedrosky and Eric Norlin of SK Ventures:

“Programming languages are the naggiest of grammar nags, which is intensely frustrating for many would-be coders (A missing colon?! That was the problem?! Oh FFS!).”

LLMs allow Semi-Pros to express instructions in more forgiving natural language and have the model spit out the syntactically formal code.
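To make that concrete, here is a minimal sketch of the workflow in Python. It assumes the openai package (v1 or later) is installed and an API key is available in the environment; the model name and the prompt are purely illustrative placeholders.

```python
# Minimal sketch: describe the task in plain English and let the model
# produce the syntactically formal code.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model name and prompt below are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Write a Python function that reads a CSV of customer records and "
    "returns the ten customers with the most open support tickets."
)

response = client.chat.completions.create(
    model="gpt-4",  # any capable chat model works here
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # the generated, formal code
```

The Semi-Pro's job shifts from remembering syntax to describing the problem clearly and reviewing what comes back.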

Some might balk at the idea of people building software without fully internalizing all of the skills of a domain. I would argue that we don't require anyone to write grammatically perfect English before they are allowed to write a blog post or novel. I'm a native English speaker and this post has been edited by a number of people, but I'd wager that there are still a few imperfections left. If English had the same level of stringent formality as most programming languages, my blog post would throw an error and you wouldn't be reading it right now. A single misplaced comma would prevent all of the value created at the higher semantic level of my words and ideas from being consumed.

It is important to point out that when the LLM spits out hallucinations or convincing but broken code, the Semi-Pro can still utilize their literacy to catch the mistake — or, at least, challenge the LLM when it attempts to do something dodgy. This mitigates most of the dangers of the current generation of LLMs while still giving Semi-Pros the productivity boost that allows them to contribute to the building of CS-land.

Compounding Domain Knowledge

Building software isn't only about knowing how to write grammatically correct sentences (lines of code). It also requires that you know the patterns for how the pieces fit together. REST APIs, sandboxing, container orchestration, and relational databases are all useful patterns for designing software. It takes time to integrate all of these mental models into the complex network of knowledge in your brain. In the process of being tour guides par excellence, Semi-Pros have internalized a lot of these models, giving them a head start and allowing them to be productive right out of the gate.
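To make one of those patterns concrete: the "REST API" idea boils down to exposing resources as URLs and manipulating them with standard HTTP verbs. Here is a minimal, illustrative sketch using Flask; the route and data are made up for the example.

```python
# Minimal illustration of the REST API pattern: a resource (a customer)
# exposed at a URL and fetched with an HTTP GET.
# Assumes Flask 2.x is installed (`pip install flask`); the data is made up.
from flask import Flask, jsonify

app = Flask(__name__)

CUSTOMERS = {1: {"id": 1, "name": "Acme Corp", "plan": "enterprise"}}

@app.get("/customers/<int:customer_id>")
def get_customer(customer_id):
    customer = CUSTOMERS.get(customer_id)
    if customer is None:
        return jsonify(error="not found"), 404
    return jsonify(customer)

if __name__ == "__main__":
    app.run(debug=True)
```

A Semi-Pro who has spent years demoing and debugging APIs already carries this shape around in their head; the LLM just helps them write it down.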

The thing about all of these design patterns is that they compound in value as your library grows. The most unintuitive part about compound growth is that in the beginning, the amount of work that you put in is a lot more than the value that you get out.

Author James Clear has named this effect the Plateau of Latent Potential, and his accompanying visual makes the relationship abundantly clear.

In the beginning, linear growth gives you roughly equal results for each hour of time spent. Compound growth, on the other hand, gives you significantly less value for each hour of input until you have built up a base, and then suddenly the growth rate takes off. Learning a new skill like programming follows the compounding growth rate, which is great in the long term but can be disappointing in the near term.
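To put rough numbers on the shape of those two curves, here is a toy comparison; the rates are arbitrary and purely illustrative, not a model of actual learning:

```python
# Toy illustration: "linear" growth adds the same amount every week, while
# "compounding" growth builds a small percentage on top of the existing base.
# The specific numbers are arbitrary and only meant to show the shape.
linear_gain_per_week = 1.0
compound_rate = 0.05  # 5% improvement on the existing base each week

linear, compound = 0.0, 1.0
for week in range(1, 101):
    linear += linear_gain_per_week
    compound *= 1 + compound_rate
    if week in (10, 30, 60, 100):
        print(f"week {week:3d}: linear={linear:6.1f}  compound={compound:8.1f}")

# week  10: linear=  10.0  compound=     1.6
# week  30: linear=  30.0  compound=     4.3
# week  60: linear=  60.0  compound=    18.7
# week 100: linear= 100.0  compound=   131.5
```

For the first few dozen weeks the compounding learner looks like they are falling behind; past the plateau, they pull away fast.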

This effect is why Semi-Pros are well positioned to take advantage of the current generation of LLMs. They have already started building their library, which means that the asymmetric relationship between input and output will flip to being positive sooner.

Not only do Semi-Pros have a jump start on filling their library, they can also utilize the mental models that the LLM has been trained on, have the LLM explain a pattern back to them, and then integrate this knowledge into their library faster and more efficiently.

There is a great example from the same article by Paul Kedrosky and Eric Norlin. First, they ask ChatGPT to write them a function that removes emojis from a file:
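The screenshot of that exchange doesn't reproduce here, but a sketch of the kind of function they describe, using the encoding trick discussed below, looks roughly like this (the file names are made up, and this is not ChatGPT's verbatim output):

```python
# Illustrative sketch, not ChatGPT's verbatim output.
# Emoji live outside the ASCII range, so encoding the text to ASCII and
# ignoring anything that can't be represented strips the emoji out.
def remove_emojis_from_file(input_path: str, output_path: str) -> None:
    with open(input_path, "r", encoding="utf-8") as infile:
        text = infile.read()

    cleaned = text.encode("ascii", errors="ignore").decode("ascii")

    with open(output_path, "w", encoding="utf-8") as outfile:
        outfile.write(cleaned)


remove_emojis_from_file("notes.txt", "notes_no_emoji.txt")
```

(Note that this blunt approach also strips any other non-ASCII characters, such as accented letters; spotting that kind of limitation is exactly where a Semi-Pro's literacy earns its keep.)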

Then they ask ChatGPT to explain how it did this:

ChatGPT does an interesting thing here. It utilizes a somewhat obscure difference between two character encoding schemes to differentiate between emoji and regular ASCII text. Someone somewhere in the collective knowledge of humanity generated this novel way of approaching this particular problem, and ChatGPT was trained on this solution. This is exactly the kind of useful problem-solving mental model that LLMs have access to, and it becomes available to Semi-Pros without them having to re-generate it de novo. By having the LLM explain what it’s doing, the Semi-Pro can take the next step and integrate this knowledge into their own library.
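The underlying fact is easy to demonstrate: the same string behaves very differently under different encodings. A quick, illustrative Python session:

```python
text = "thumbs up 👍"

# UTF-8 can represent the emoji; it simply takes more than one byte.
print(text.encode("utf-8"))
# b'thumbs up \xf0\x9f\x91\x8d'

# ASCII cannot represent it at all...
try:
    text.encode("ascii")
except UnicodeEncodeError as err:
    print(err)  # 'ascii' codec can't encode character '\U0001f44d' ...

# ...which is exactly why "encode to ASCII and ignore errors" drops emoji.
print(text.encode("ascii", errors="ignore").decode("ascii"))
# thumbs up
```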

Without the advantage of having built up a library of design patterns from their time in CS-land, a Semi-Pro wouldn't necessarily be able to understand what is happening here. This mental model is only useful to those who already understand that text strings can be encoded in multiple different ways. This continues to reinforce the idea that the Semi-Pro is best positioned to take advantage of the latest progress in the democratization of software development.

What Does This Unlock for the Semi-Pro?

This unlocks the ability for Semi-Pros to build full applications and software systems, rather than the scripts and glue code for integrating APIs that were more typical of the programming Semi-Pros contributed in the past.

A friend of mine uses the analogy that developers build the Legos and Semi-Pros put the Legos together in a way that solves a novel problem. LLMs unlock the ability for Semi-Pros to wade into the world of building the factory and machines that print the Lego blocks, rather than only re-mixing the blocks that software developers have designed for them.

For most Semi-Pros, this is the equivalent of being called up to the majors after spending most of their life kicking around the AAA circuit.

What Opportunities are Within Reach?

So what does this unlocking translate into? It's difficult to accurately predict the future, but I see a few new paths on the horizon:

Semi-Pros as Technical Co-Founder

The role of the technical co-founder is to build the MVP when the other co-founder(s) cannot. Does it have to be scalable? No, not really. Does it have to follow every software development best practice? Definitely not. Does it have to do something demonstrably better than the current incumbent and allow early users to visualize the future? Yes. Do they need to be reasonably productive in writing software? Yes.

LLMs + Semi-Pros = high enough output that Semi-Pros can now fill the role of the technical co-founder.

Semi-Pros as Solopreneurs (MicroSaaS)

MicroSaaS and solopreneurship are a relatively new movement, but one that is gaining steam. The idea is essentially that you can build a collection of useful software and create a number of revenue streams that aggregate into a strong side hustle or even a full-fledged business.

If Semi-Pros are truly capable of building software, then this has to be an option for them now.

Experimental Hypothesis

Before we get too far ahead of ourselves, I want to be explicit that this is a theory. Some of the ideas in this article are more fully baked than others. But the great part about theories is that you can test them.

And that's what I'm planning to do. This theory came out of my own experiments with ChatGPT over the past few months. I’ve always self-identified as the prototypical Semi-Pro. I taught myself to build a PC in high school but got a BA in college. I taught myself JavaScript in my twenties but never went back for a Masters. I parked myself at the literal intersection of liberal arts and technology, but never ventured down either path to the point of expertise. Because of this, the act of building software was never a reality for me — not practically, at least.

As I’ve spent time augmenting my skills in conversation with ChatGPT, I’m beginning to believe that the barriers that have kept me from writing software are dissolving. My aim is to test these ideas and publish the results as a way to validate or disprove this hypothesis.

About Jono Bergquist

A Startupland Semi-Pro writing weekly, long-form content about the transformation of Semi-Pros from CS-land tour guides to software system builders via my newsletter, and sharing a high-frequency, low-filter journey on Twitter.

Earned my Semi-Pro credentials as a Solutions Engineer and Product Marketer, with 10+ years of experience scaling high-growth startups, primarily in the network and application security spaces. Alumnus of Cloudflare, Akamai, Sqreen (=> Datadog), Shape Security (=> F5), PerimeterX (=> HUMAN Security), and Vantage.

When I’m not engaging in the startup community, I am reading fantasy or sci-fi, playing video games, gardening or hiking with my partner and fur-child.
