Fascinating tension here between efficiency gains and the loss of communal knowledge-building. The irony is that LLMs were trained on exactly those Stack Exchange threads, yet now they're replacing the very ecosystem that generated that training data in the first place.
This phenomenon is nothing unusual. Tools are created that make complicated things easier to use. High-level languages supplanted machine language, voice recognition supplanted typing, and now LLMs are being used to generate code: just another higher-level language. But, as Larry Tesler famously said, "Complexity is conserved." What is gained in speed of use can lead to errors when the result is not clearly specified, or is incorrectly applied to a problem. As with all tools, learning to use this one well requires study, experience, and critical thinking.
Can confirm - I am a SW engineer.
Talking to ChatGPT is about 100x as time-efficient as googling things. I rarely ask ChatGPT to literally write code for me, but it's fantastic for learning how to use a library, a program, etc., extremely quickly.
And it's always available, never frustrated or angry, and knows seemingly everything.
It does make mistakes - though some AIs are better than others - but in general, for high-level explanations of things, it's amazing.
Another great use case: copy/paste an error message and ask the AI what it means. Programming errors are often cryptic, especially if you are new to something. An AI can translate that error message into something meaningful and will often suggest a fix.
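A made-up Python illustration of the kind of thing I mean (the snippet and the "AI decoding" in the comments are hypothetical, not output from any particular model):

```python
scores = [3, 1, 2]
try:
    # Classic gotcha: list.sort() sorts in place and returns None,
    # so chaining onto it blows up with a cryptic message.
    best = scores.sort()[-1]
except TypeError as err:
    print(err)  # 'NoneType' object is not subscriptable
    # Pasted into an AI, that message typically gets decoded as:
    # "sort() modified the list and returned None; you then indexed into None.
    #  Use the sorted() built-in, which returns a new list." Suggested fix:
    best = sorted(scores)[-1]
print(best)  # 3
```

The raw TypeError says nothing about sort(); the plain-language explanation plus a one-line fix is exactly what a newcomer needs.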
Yes, StackExchange communities are far from what they used to be. I still use them but I use AI as well, and for good reason: I get to have a "conversation" with a fallible but very well-informed and capable "friend" who -- crucially -- responds immediately and has all the time in the world. In the economics world, to take the example we have in common, no faculty person or graduate student or research assistant has that kind of time or can be available in that way. I've had threads with OpenAI and Claude that have lasted for weeks or months and which -- edited, condensed, validated, rewritten -- have potential as book chapters. Hard to beat that.
This raises the question of who owns the copyright when AI writes a substantial portion of a book, and of who the author actually is if one simply edits and compiles the information. Even decades ago, when I compiled a white-paper book about certain new pharmaceuticals, I hesitated to say that I wrote the book. I usually refer to it as a compendium.
The US Copyright Office has ruled that purely AI-generated works can't be copyrighted, so no one owns it. I suspect powerful corporations will eventually challenge this, but we'll see.
That assumes the person who edits and rewrites what the AI provided doesn't take credit for the rewriting. The person can claim that the AI provided research but that the writing was done by the purported author.
Thank you for this thoughtful commentary. It highlights a different angle in the discussion about how AI is shifting how work is done. I think one of the biggest losses with the integration of AI into virtually every area of work is the loss of human connection and interaction. The power of AI as a tool to enhance productivity is a paradox: removing menial tasks is a win, but doing so at the cost of dismantling interpersonal communication is a major threat to how we connect as humans. My biggest concern as an AI Emotional Safety Strategist is exactly that. The more we disconnect from each other, the easier it is to bypass human interaction altogether.
I retired from tech in 2018, and it's amazing how much has changed in this space in only eight years. What I get from the comments is something I fear will be pretty devastating to the software industry, and probably many other fields. If programmers stop programming in favor of LLM-generated code, what is the point of having programmers in the first place? And you can take away legions of testers, PMs, version-control engineers, architects, and sysadmins too. Think of the cost savings! Then the former techies can figure out what to do next.

Take a minute and think critically about what in your life cannot be touched by AI and automation. Amazon has just downsized a bunch of back-office positions. Imagine a world where no human touch is required from the moment you order a bag of toilet paper to the time it gets to your door. Automated warehouse systems exist. The systems needed to process the order are self-maintained. Payment systems are self-maintained. Accounting and finance systems are self-maintained. Autonomous taxis and trucks will deliver it to your door. None of this is sci-fi.

Now comes the hard part: how are you going to pay for the toilet paper? Your old job is gone. There are now legions of tradesmen looking for work, as modular housing has replaced stick-built houses, and even that is being automated. A person can't just sit thinking on the shitter, elbow on the wrong knee, all day and bring home a good paycheck. Even thinking has been outsourced.

Let's take it a bit further: at some point, is there even a need for money? If everything is provided for you and the infrastructure of humanity maintains itself, the whole concept of money becomes irrelevant. The human condition can remain venal, corrupt, hopeful, reverent, and the machine doesn't care. Your needs are met; you have a place to stay, things to eat, places to go, people to talk to. No need for oligarchs, despots, diplomats, or leaders of the free world. The machine won't make weapons for anyone. So what do we do? Maybe we get the machine to make us a starship and go for long, long rides.

Anyway, thanks for the article. I have to take off my tinfoil hat now.
The code is SQL. Standard database query.
And it's only a matter of time before it replaces spouses…