Will my children code?
November 19, 2023 5:30 PM   Subscribe

A coder considers the waning days of the craft (archive link) - the pleasures of coding, the rise of GPT-4 and the future of hacking.
posted by dorothyisunderwood (57 comments total) 24 users marked this as a favorite
 
Speaking from personal experience, as a developer and coder, I'm more productive than I've ever been, working in languages I didn't know how to work with, because of GPT-4.

More than ever, I'm the conductor of an orchestra. In the past, I felt like first (maybe second) violin.

Executive decisions about what to do next, where to take the code and the project, are more important and occupy more of my time.

Less time worrying about minor details of syntax or errors means more time focusing on how to harmonize all instruments in the symphony.

GPT-4 has instigated, for me, a coding renaissance.

The question is - at some point - will there be no need for a conductor?
posted by carlodio at 5:57 PM on November 19, 2023 [18 favorites]


The orchestra seems like a particularly apt metaphor. You used to have to pay every member of the orchestra. If we lived in a world where nobody was going to lose the roof over their head because of this, I might be able to muster a little enthusiasm for it. Spoiler warning: we don't.

> The thing I’m relatively good at is knowing what’s worth building, what users like, how to communicate both technically and humanely. A friend of mine has called this A.I. moment “the revenge of the so-so programmer.” As coding per se begins to matter less, maybe softer skills will shine.

"Maybe softer skills will shine" is the kind of thing people say who have never tried to get paid for "softer skills." Methinks there are gonna be a lot of shocked pikachu faces when this guy and his colleagues find out how much they can expect to get paid for "knowing how to communicate both technically and humanely". I mean, sorry to be a buzzkill, and maybe I'm wrong. I'd love to be.
posted by Sing Or Swim at 6:08 PM on November 19, 2023 [27 favorites]


also what if you didn't know whether the orchestra was playing Beethoven's 5th, or a version of that music where, embedded in the music which you didn't write or understand, was an integral tone that caused the head of every 1000th person who listened to the symphony to explode
posted by lalochezia at 6:25 PM on November 19, 2023 [10 favorites]


We call this “Roko’s Bassoon.”
posted by GenjiandProust at 6:28 PM on November 19, 2023 [67 favorites]


Speaking from personal experience, as a developer and coder, I'm more productive than I've ever been, working in languages I didn't know how to work with, because of GPT-4.

Have fun fixing bugs in programming languages you don't understand.
posted by Pope Guilty at 6:32 PM on November 19, 2023 [40 favorites]


I guess you need to be too far inside the game to get it, but I find an alumnus of Pivotal Labs and Genius saying this shit fucking hilarious.

The world was never meant for you, dude. You are lamenting the end of something you never understood.
posted by Back At It Again At Krispy Kreme at 6:37 PM on November 19, 2023 [6 favorites]


"Maybe softer skills will shine" is the kind of thing people say who have never tried to get paid for "softer skills." Methinks there are gonna be a lot of shocked pikachu faces when this guy and his colleagues find out how much they can expect to get paid for "knowing how to communicate both technically and humanely."

Many people already get paid a lot of money for that (including people whose job title is “software engineer”) so I’m not quite sure what you’re saying here.
posted by atoxyl at 7:14 PM on November 19, 2023 [14 favorites]


On the other hand it doesn’t feel quite right to say that LLMs are bad at that, or good at the fiddly parts of programming. At the moment they are pretty good at translating natural language into the general shape of software at small to medium scale. They are bad at really fiddly stuff - actually simulating the machine - and really high-level architecture and long-term planning.
posted by atoxyl at 7:18 PM on November 19, 2023 [1 favorite]


I've been a professional software engineer for 30 years now. The death of my field has been predicted several times before. 4GLs, visual programming tools with reusable components, offshoring. Each one snagged some low-hanging fruit and spurred thousands of articles.

Actually writing new code in a greenfield situation is the easiest part of the job. Analyzing existing code and making changes that preserve the original intent is much harder.
posted by microscone at 8:12 PM on November 19, 2023 [56 favorites]


Tools get more powerful over time. Software engineers no longer write assembly and use punchcards. Now AI tools will continue to make software engineering even more powerful. It’s a good thing.
posted by chasing at 8:14 PM on November 19, 2023 [3 favorites]


I've used Copilot today to fill in some functions I've written 100 times before. It's kind of like having a very eager junior dev that you know not to trust 50% of the time, but you have to know which 50% that is.

Some devs may never have to use this, like there are still people coding COBOL or mainframe assembly with everything they need to know contained in crusty dead-tree manuals. Lucky them.

We're still in early days, like spreadsheets, and figuring things out. Soon there will be tools and best practices. Many will ignore them.

I do know that as I get older I'm going to need another force multiplier to offset my cramped hands and sore back and tired brain. I'll take it.
posted by credulous at 8:36 PM on November 19, 2023 [6 favorites]


In some ways this is just another rung on the ladder of abstraction, with all the usual tradeoffs that a new abstraction brings. It still takes skill and experience to effectively orchestrate the lower levels, and it takes some familiarity with the new tools to understand their limitations.

It's an interesting time, though, because right now the people who are using GPT for code generation are mostly those who already know at least the basics of software development, how things fit together conceptually, which components are needed for a particular architecture, how to run and validate the program, etc.

As the AI tools are used more and more, it's likely new rungs will appear at a quicker pace. Things like debuggers and editor-centric IDEs will become specialized tools, and for prompt-developers, using them will feel clunky and frustrating.

And because the models are trained on existing text and past prompts, and because human hands will touch less and less code, software will become more homogenous and less creative in its implementation, even as it becomes harder to understand and fix by human developers. Every permutation of a CRUD app, every iteration of a marketing page, every in-app purchase game, every dark pattern in every e-commerce site will be copied and explored.

I find these tools fascinating and amazing, but I can't help feeling like something is being lost.

Blake said it best:
What the hammer? what the chain,
In what furnace was thy brain?
What the anvil? what dread grasp,
Dare its deadly terrors clasp!
posted by swift at 8:51 PM on November 19, 2023 [10 favorites]


Did he who made the lambda make thee?
posted by gwint at 9:21 PM on November 19, 2023 [20 favorites]


Did he who made the lambda make thee?

Yes, in a lazy-evaluation sort of way.
posted by aws17576 at 10:27 PM on November 19, 2023 [3 favorites]


I mean, has anyone seen the kind of code humans write? How much of all code is un-debuggable unfixable garbage that was made to be re-written 'someday'? 60%? 70%?

How much of perfectly readable and robust code is written to support evil corporations doing evil things supporting their evil billionaire owners? 60%? 70%?

Will either of those things change with AI coding assistance?
posted by UN at 10:39 PM on November 19, 2023 [1 favorite]


The guy's from Pivotal. Here's someone discussing the future of coding who's never been allowed to code on his own.
posted by Cardinal Fang at 12:22 AM on November 20, 2023


"Maybe softer skills will shine" is the kind of thing people say who have never tried to get paid for "softer skills."

The guy who wrote the article doesn't actually know what 'soft skills' are. What he talks about are not soft skills. They're part of what it takes to be a quality programmer.

'Knowing what's worth building and what users like' means understanding that the problem to be solved has a market, and what that market is. Otherwise you're just coming at it with a call-centre mentality, where every task is a ticket, a piece of code to be written, with no need to understand the bigger picture. It's why so much big-corp produced software is such egregious crap.

'Knowing how to communicate both technically and humanely'? First of all, if I may be neurodiversely pedantic for a moment, 'humanely' is not synonymous with 'like a human being'. It means 'in a compassionate or benevolent manner'. I think what he wanted to say is 'knowing how to explain the problem / solution to both technical and non-technical people' - which means 'know when and when not to blind them with science'. This isn't a 'people skill'. In fact, in my experience, the more neurodiverse you are (or appear to be), the more your audience is likely to trust your technical knowledge. That's how I got the job in the first place so many times.

The ironic thing is there are definite indicators of neurodiversity in the author himself. 'A few months ago, I came home from the office and told my wife about what a great day I’d had wrestling a particularly fun problem.' There is then a long paragraph all about how he approached the problem. But we never find out what his wife thought. Now, that's a soft skill.

Ultimately I think the problem with the article is that the author isn’t really a programmer. Here’s the giveaway for me: ‘Yes, our jobs as programmers involve many things besides literally writing code, such as coaching junior hires and designing systems at a high level.’ = ‘Our jobs as programmers also involve being team leaders / project managers and system architects.’ No, they don’t. I make my money purely as a software guy. You wouldn’t want me anywhere near a management position, either of people or of projects. I think the author has himself been programmed into a particular corporate vision of what a ‘programmer’ is, or should be.

The bottom line as far as I'm concerned: yes, AI will indeed replace coders, but only the call-centre type of coders described above. The rest of us will carry on getting paid money to fix the code they produce.

Whether we notice, or care, whether the junior coder is a human or a bot is an interesting question. I am quite sure Microsoft or similar will produce an AI coder-bot which constantly interrupts us on Slack to talk about going for lunch and / or discussing the latest episode of 'Survivor'.
posted by Cardinal Fang at 1:08 AM on November 20, 2023 [9 favorites]


If you come to my job interview and say 'I will use chatgpt to write code', have fun on the other side of that door.
posted by adept256 at 1:15 AM on November 20, 2023 [9 favorites]


As for the implied question, I will tell you how I learned to program as a child. I really wanted a Nintendo, but my dad bought a kit from [Australian Radio Shack] and built my first computer. On this I would carefully type code listings from magazines every time I wanted to play Dig Dug. I knew BASIC when we got an IBM XT. I talked to Eliza. My Zork-style RPGs were a haywire of nested IFs that never hatched until I found Turbo Pascal, which gave me objects. By the time I had a 486 I was having fun with Conway's Life and fractals.

It all started because I wanted to play Dig Dug. So this is what I suggest for your child. Son, if you want a Minecraft server, you will have to install Linux on this refurbished office PC, PuTTY into it, and install your server from the command line. When you do that, I will get you the toy you actually want.
posted by adept256 at 1:34 AM on November 20, 2023 [10 favorites]


adept256: getting kids to write games is a really excellent way of getting them motivated to learn to program. On no account allow them to turn this into their career.

(source: recovering games developer. Yes, it was 20 years ago, and yes, I'm still recovering.)
posted by parm at 2:37 AM on November 20, 2023 [7 favorites]


Hate to be snarky, but the following caused a raised eyebrow...

> At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling.

That just seems like the kind of thing I'd expect in an interview for a junior dev position.
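(For reference, the whole thing is only a few lines; a rough Python sketch, assuming a newline-delimited word list like /usr/share/dict/words:)

    import random

    # Rough sketch: print a hundred random lines from a dictionary file.
    # Assumes a newline-delimited word list such as /usr/share/dict/words.
    with open("/usr/share/dict/words") as f:
        words = f.read().splitlines()

    for word in random.sample(words, 100):
        print(word)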
posted by tomp at 3:48 AM on November 20, 2023 [17 favorites]


This very article was largely written by an LLM, you can tell by the pixels.
posted by aspersioncast at 4:33 AM on November 20, 2023 [1 favorite]


I can't recall whether Asimov or Clarke had characters with contempt for "non-comps" who couldn't cope without a computer tool of some kind. I guess there will always be people who keep some how-to knowledge even if it's redundant.

There's crafted code and industrial engineering. People will still write programs and scripts because it's useful to understand how to think inside the machine.
posted by k3ninho at 4:36 AM on November 20, 2023


That just seems like the kind of thing I'd expect in an interview for a junior dev position.

Early in my coding career someone gave me Jon Bentley's Programming Pearls. Goldmine. Many of the examples involve constraints one is unlikely to encounter now but that's not the point; it's learning to reason about stuff.

"hundred random lines from a dictionary file" seems like it could come from that book. I'm guessing the author of the article not only has never heard of it but does not even know anyone who has.
posted by Ayn Marx at 4:37 AM on November 20, 2023 [2 favorites]


I’ve been around computer graphics and software development long enough to remember, decades ago, the days when graphic/visual design was going to be an extinct craft because of the wonders of desktop publishing and, no joke, things like clip-art. What will happen to the profession once everyone can create their own poster, logo or book? One click and print we go. Who needs new fonts when so many are available on disk or online? Many articles were written about the fear — it was something that truly scared people.

Executives and 'business leaders' were super excited about all the money they’d earn by having their MBAs and secretaries make their powerpoint presentations and advertisements. Money money money!

And, unfortunately, there was a long, long time when those people adopted a no-design orthodoxy. It’s not that there was no collateral damage. The profession suffered.

But, in the long run, they were wrong. There are as many or more high-paying design jobs and specializations than ever before. Want to get into something as niche as type design? There are a ton of foundries making custom fonts for pay. Companies spend millions on what are essentially power-point presentations, much of that being spent on design work. And so on and so on.

To a non-professional (and even many professionals), AI code assistants look extremely impressive — but they’re essentially clip-art code generators and will remain so. Tools based around AI code generation will improve and will allow new things to happen; the kinds of projects and tasks that engineers and designers work on will change; new kinds of specializations will be born and some will go extinct.... but the end of a profession or craft? Not really.

BUT!

There’s already a hype around "no code" startups. One or two companies will hit the jackpot and get rich without hiring a single programmer (or whatever), the hype will reach epic proportions and the business people will go bananas. And at that point I think we may enter a sort of dark-ages period in the profession similar to what designers went through not too long ago. The question is, how long will it last and how bad will it get?
posted by UN at 5:06 AM on November 20, 2023 [5 favorites]


That just seems like the kind of thing I'd expect in an interview for a junior dev position.
That threw me, too – also not a good question for evaluating LLM performance since there are many full examples of that exact problem in the training data.
posted by adamsc at 5:11 AM on November 20, 2023 [2 favorites]


I maintain a product that has code that dates back to the 80s, in a mixture of C++, C# and Fortran. The Visual Studio workspace has 220,000+ files in it.

The way I see it, an LLM is about as useful for this job as a unicycle without the wheel. I just laugh-snort every time I hear about how ChatGPT has totally changed software development. When people talk about "coding" in this context, they can't actually mean professional software development, can they?
posted by Foosnark at 5:58 AM on November 20, 2023 [7 favorites]


I guess there will always be people who keep some how-to knowledge even if it's redundant.

That's the beautiful thing: how-to knowledge is never quite redundant. There are indeed still people making a tidy rate coding COBOL, because it has never been considered cost-effective to replace the banking mainframes that run it. These coders are extremely unlikely to be replaced by AI any time soon.

I studied computer science in the mid-80's. Basically we spent three years lying around drunk. One friend lay around more drunk than the rest of us, and ended up with a 3rd class degree. So, while we scooted off into various cutting-edge tech jobs, he ended up in the basement of a retail bank, maintaining Hitachi mainframes.

Scroll forward a few years, and the Y2K hysteria hit. Being one of the extremely few people in the country who knew anything about Hitachi mainframes, he could charge what he liked, and eventually bought two apartments in London.
posted by Cardinal Fang at 6:02 AM on November 20, 2023 [5 favorites]


I studied computer science in the mid-80's. Basically we spent three years lying around drunk.

Is that you Ballmer?
posted by adept256 at 6:07 AM on November 20, 2023 [5 favorites]


I just laugh-snort every time I hear about how ChatGPT has totally changed software development. When people talk about "coding" in this context, they can't actually mean professional software development, can they?

In the late 80's every other ad on the London Underground screamed, 'We're looking for Computer Programmers! Apply now!' What they actually wanted, of course, was data entry clerks.

That was around the time when they renamed 'Data Processing' to 'Information Technology' to try and make it sound sexier. I think that shortly they'll have to similarly rename 'Artificial Intelligence'.
posted by Cardinal Fang at 6:11 AM on November 20, 2023


i’ve also never found a practical use for llms in my software development. the code is too complex, with interactions between the data model and with other services that require careful thought. explaining what to do to the llm would be the same process as writing the code, except when the llm produces it i have to parse it all again to figure out if it’s correct. maybe if you could train it entirely on your codebase it would be useful for things like: write a test that calls these apis with this data and asserts something. but again by the time i’ve described it i’ve basically written it. maybe the time will come when i can say “test that the system works correctly” and it can figure that out. but i don’t have a high expectation.
posted by dis_integration at 6:25 AM on November 20, 2023 [2 favorites]


"Maybe softer skills will shine" is the kind of thing people say who have never tried to get paid for "softer skills." Methinks there are gonna be a lot of shocked pikachu faces when this guy and his colleagues find out how much they can expect to get paid for "knowing how to communicate both technically and humanely". I mean, sorry to be a buzzkill, and maybe I'm wrong. I'd love to be.

I'm not in tech, so no argument from me that this could be the way it works there, with the "softer"/communicative skills being devalued. But in my field (STEM, but not tech), there are basically two professional tracks. You can be a technical person (e.g., an engineer or graphic artist), where if you are a person who is driven to advance your career (not everyone is, and that is ok), you become over time a more and more senior technical person, possibly (but not mandatorily) taking on some low-level leadership role like managing a team of junior technical people. Or, you get on the track that involves communicating, managing, planning, marketing, coordinating, strategizing, etc. You still have your technical background and do some amount of that work, but you are now primarily a project manager or program manager or senior VP or whatever, and most of what you do is communicate with internal and external people.

And, again describing my field, not tech, while the senior technical people can earn good salaries, the people on the more communicative track absolutely pass them by in terms of salary potential. The technical track is safer in the sense that there are very few ways you could fail, and the other track is riskier since if something goes wrong, the blame will probably come your way, and there are a lot of ways things can go wrong. But overall, if your focus is the money, in my field you will get way further by developing "softer" human/communication type skills.
posted by Dip Flash at 6:34 AM on November 20, 2023 [3 favorites]


Foosnark: I maintain a product that has code that dates back to the 80s, in a mixture of C++, C# and Fortran. The Visual Studio workspace has 220,000+ files in it.

The way I see it, an LLM is about as useful for this job as a unicycle without the wheel. I just laugh-snort every time I hear about how ChatGPT has totally changed software development. When people talk about "coding" in this context, they can't actually mean professional software development, can they?

A friend is building Kortical Chat, one of the surprising tools in the chatbot space that let you say “use these documents” (e.g. a customer service script or legacy unmaintainable code) and then have it answer questions about them, including how to rewrite parts of programs.

The LLM can hold more tokens in its representation of live-state than you can, so it can propose unit tests for legacy code that match the code -- assuming you get permission to feed your legacy system's codebase to the Generalised Adversary.

dis_integration: [complexity out of control]
Yeah, you want to fix that. Feeding your codebase to the Generalised Adversary will probably also help you map out and set boundaries between interface (and schema) and implementation, and also to break down the tight coupling that makes it hard for anyone to reason through your code.
posted by k3ninho at 6:34 AM on November 20, 2023


sometimes complexity is just a consequence of the domain. that’s the case for the product i work on now. we of course provide interfaces to manage the complexity but at a certain point you just have to deal with it on its own terms, and adding chatgpt just means now i have two complex systems to manage instead of the one i’m working on
posted by dis_integration at 7:34 AM on November 20, 2023


I use ChatGPT when coding, but not to write code for me. Usually, I use it for help with naming or general conceptual issues. I might describe a function or type to it and ask it to generate several alternative descriptive names. Or I might describe a general need that I have and ask it about which data structures might be commonly used to meet that need, or which search terms are most likely to give me a good start towards solving the problem.

I can also ask it to perform code review, though not always in so many words. I might paste in some code I wrote and ask it to describe what the code does, or ask it to (unnecessarily, but don't tell) add/change some minor aspect of its functionality. If ChatGPT fails these tests, then I consider whether I might need to make my code clearer.

Another key job for ChatGPT is as a rubber-duck debugger whose patience I never have to worry about wearing thin, and whose time I don't have to feel bad about interrupting.
posted by a faded photo of their beloved at 7:37 AM on November 20, 2023 [5 favorites]


I just typed the prompt "FizzBuzz" into chatgpt, and it gave a correct answer so it looks like we should be good.
posted by BigHeartedGuy at 7:58 AM on November 20, 2023 [11 favorites]


i’ve also never found a practical use for llms in my software development.

I've used ChatGPT exactly once in my day job so far--to generate a just-complex-enough-to-be-annoying regular expression for a set of IDs that I needed to pick out of a text file. It worked just fine. I still wouldn't trust it within a mile of the kind of problems I need to solve the rest of the time.
posted by Mr. Bad Example at 7:59 AM on November 20, 2023 [1 favorite]


Oh boy, I really needed more completely opaque regex.
posted by adept256 at 8:04 AM on November 20, 2023 [11 favorites]


The guy who wrote the article doesn't actually know what 'soft skills' are. What he talks about are not soft skills

Not sure you can pin this on the author. The usage of “soft skills” in the wild is extremely broad and often not even limited to “people skills.”
posted by atoxyl at 8:30 AM on November 20, 2023


It sounds great, like an actual dream from the early days of CASE tools, to be able to tell the computer what you want the program to be and have it do that. Except that most of the tedium in the work is centered around working with sensitive data that professionals are obligated to keep from flying around the Internet willy-nilly.

When I have experimented with ChatGPT, what I found is that simple things I don't really need assistance with work fine, but if I get off into the weeds at all it's completely out of its element. Not to mention the problem where it will just invent things. Like the time I followed a prompt from The Neuron Daily asking for a curriculum to learn something. So I asked for a SQL Server DevOps curriculum and it wrote out something that sounded amazing, but none of the links were real. It just made the whole thing up. That was the last day I seriously considered ChatGPT.

The no-code tools I have evaluated are similar. Sure, they make creating a toy example so easy the proverbial vice-president could do it, but getting one to create an effective-dated primary-and-detail table and interface is no easier than doing it in Django or whatever.
posted by ob1quixote at 8:36 AM on November 20, 2023 [5 favorites]


But in my field (STEM, but not tech), there are basically two professional tracks.

In “tech” (i.e. software) today, “engineering,” “engineering management” and “product management” often all exist as parallel tracks. Sometimes there are other specialties, too, positioned at different intersections of “business” and “technical.”
posted by atoxyl at 8:55 AM on November 20, 2023


I have an amateur's working hypothesis for the question "Will an LLM be useful to solve this problem?"

( How common is the problem + how common is the solution ) / how precise must the output be to succeed

The first two terms are being addressed by blowing up the training data sets to astronomical size. But that third term is just killing LLMs as replacements for any tasks that require high precision. Filling out yearly reviews? Yes! Writing software? Legal contracts? Generating recipes that people actually want to eat?

It's possible that that third term has a fix coming down the pike, but given the non-linear nature of the advancements we've seen so far, I don't know if it will ever actually happen.
posted by SunSnork at 8:59 AM on November 20, 2023 [3 favorites]


Back in the late ’70s, Intel and a network of Community Colleges put together a program where CS students would go to school three days a week, and basically work at Intel as interns the other two days. I, at Intel, was put in charge of interviewing the students to find out where they were in their programming skills. Back then, learning to program meant learning the syntax of a programming language and a little bit of what those words meant and what the code did. But they didn’t learn how to think about programming, how to break down a problem into data structures and how to design code based on best practices, etc. In other words, programming before instantiating it into a particular language. ChatGPT sounds like one of those students: it “knows” the syntax of a language, but it doesn’t “understand” programming. (Nor anything else for that matter.)
posted by njohnson23 at 9:08 AM on November 20, 2023 [1 favorite]


I have been a programmer(1) for long enough to have seen the trade's imminent extinction arise and pass several times now. I have nothing special to add to the thread, other than to agree that

1. TFA was not written by somebody whose professional programming experience is like mine.
2. Just as the (near-) extinction of punched cards and assembly language changed the experience of programmers(2) while making programming accessible to more people, LLM-assisted programming can potentially lower the barriers to entry for programming without really diminishing the demand for programming skill, providing most programmers with a scutwork assistant.
3. The hype around new code creation from scratch is hype about something that's not what work is like for most programmers most of the time. Most programming most of the time is fixing/enhancing something existing, that must keep working the way its users are accustomed to, and it is far from clear that LLM-assisted programming is going to translate to big gains there.

OK I lied, I do have one thing to add: In my organization, we're not using these things yet, or only with obfuscated subsets of our code base, while our lawyers and consultants angle for access to and control of private instances. Because they don't want our code base (and corporate IP) to become training material for our competitors. I suspect that similar reservations are widespread where large existing proprietary code bases are concerned.

(1) If I were to show you my resume I would be referring to myself as a "software engineer," but on my tax forms I list my occupation as "programmer."

(2) And the arrival of VDTs transformed working with a text editor, compared to a hardcopy terminal. And the advent of screen editors. And IDEs... does anybody remember what it was like, manually parsing compiler error output?
posted by Aardvark Cheeselog at 9:14 AM on November 20, 2023 [3 favorites]


@njohnson, @SunSnork, and maybe others:

> ChatGPT sounds like one of those students, it “knows” the syntax of a language, but it doesn’t “understand” programming. (Nor anything else for that matter.)

> ( How common is the problem + how common is the solution ) / how precise must the output be to succeed

These critiques are both reflections of something fundamental about LLMs: they don't have any notion of knowledge representation. They don't attempt to "know" anything. It's possible that their architects and boosters hope that eventually, with a big enough training set, that knowledge representation will arise spontaneously out of hidden internal connections in language, but I'm not betting on it. For this reason, I scrupulously adhere to describing them as "LLM systems" and not "AI systems." I feel intuitively sure that any system that exhibits "AI behavior" will have some way of representing knowledge about the world: the failure of the AI program has largely been the failure to realize that goal.

LLMs are big statistical inference widgets. You initialize it with some text and it generates more text. This works really surprisingly well with natural language utterances. Though you do have to check the references to make sure it didn't just make shit up, if you weren't just jogging your own memory.

It's even more surprising that LLMs work as well as they do to generate programming language code. But there are real limitations on how well they can do that (the most surprising thing about a dancing bear is not how well it dances), limitations that mean their output is really untrustworthy. If LLM-assisted programming is to really transform software development, it will be via forcing us into better-thought-out designs, where every single detail of the system's reaction to data can be subjected to automated testing. The authoring of those tests will keep a lot of people busy.
posted by Aardvark Cheeselog at 9:31 AM on November 20, 2023 [5 favorites]


Hate to be snarky, but the following caused a raised eyebrow...

> At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling.

That just seems like the kind of thing I'd expect in an interview for a junior dev position.


I almost made the exact same comment. But later in the article the author describes having earlier written a program that would generate random haikus from Ulysses:

...after reading “Ulysses” in an English class, I wrote a program that pulled random sentences from the book, counted their syllables, and assembled haikus...

I can't imagine a coder who could do the second but not the first, so I wonder if he left out some key detail about the problem.
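(For what it's worth, even the haiku generator isn't much beyond junior-level work; a hand-wavy sketch, where the file name and the syllable-counting heuristic are my own guesses rather than anything from the article:)

    import random
    import re

    # Hand-wavy sketch of the "random haikus from Ulysses" idea; the file name
    # and the syllable heuristic (count vowel groups) are guesses, not the
    # author's code. Assumes sentences with the right counts actually exist.
    def syllables(text):
        words = re.findall(r"[A-Za-z']+", text)
        return sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)

    with open("ulysses.txt") as f:
        sentences = re.split(r"(?<=[.!?])\s+", f.read())

    def random_line(n):
        # Pick a random sentence whose estimated syllable count is exactly n.
        return random.choice([s for s in sentences if syllables(s) == n])

    print("\n".join(random_line(n) for n in (5, 7, 5)))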
posted by justkevin at 9:47 AM on November 20, 2023 [3 favorites]


After a 20 minute dog walk, and an unwillingness to let a ridiculous idea go:

(( How common is the problem * How common is the solution ) + Bullshit factor ) / (Precision required by the answer)^2

Problems with no representation in the model can still be solved with random bullshit if the required precision is low enough.

Think I can get published?
posted by SunSnork at 9:59 AM on November 20, 2023 [2 favorites]


The library card catalog when I was a kid was drawers of physical cards. Checking a book out required stamping a card. All of that stuff has since been replaced by software, which has probably been rewritten several times.

The same has happened to every little thing I interact with every day. I have trouble comprehending the amount of work involved. So many processes that someone had to think through in huge detail, and then explain to a machine, and does that process look anywhere near done? I look around and see a lot that still sucks, or could obviously be more efficient or more helpful in one way or another.

As a programmer I've never felt for a moment, "uh-oh, what if I run out of stuff to do"? If I figured out how to get my year's work in a day, I wouldn't be giving up. I'd be thinking, oh, good, now that's out of the way, maybe I can tackle one of those really hard projects people have been pestering me about, that I've had languishing as notes for years....

No idea where LLMs might be able to make a contribution, but if they can, and can do a good job of it, great, there's work to go around.

I've had good fortune on the labor market, and I know others haven't. But my feeling there is also not: oh dear, maybe there's not enough work. It's: how can we do a better job of bringing everyone on board that wants to? We need the help, damnit!
posted by bfields at 12:02 PM on November 20, 2023 [2 favorites]


These critiques are both reflections of something fundamental about LLMs: they don't have any notion of knowledge representation. They don't attempt to "know" anything. It's possible that their architects and boosters hope that eventually, with a big enough training set, that knowledge representation will arise spontaneously out of hidden internal connections in language, but I'm not betting on it.

Well, they... kinda represent knowledge... for some value of "knowledge".

I wish I could link to the tweeter that first pointed this out, but an excellent example of this is that language models have much stronger representations of famous people than they do of non-famous people. They also do not have the ability to identify or reason about systematic relationships, like “when person A is person B's child, that means that person B is person A's parent”. This is why a language model can likely tell you that famous person A has non-famous person B as their parent, but cannot tell you any of non-famous person B's children.

It makes me think of early language learning. There is a time in toddlerhood that you have to just memorize everything. Eventually you start forming rules, and you overapply them to occasionally hilarious effect, and then you learn just the exceptions you need to those rules. I have the feeling that LLMs are essentially in that first stage of memorization; they do not consider that "parent" and "child" are reciprocal relations, or that some relationships are symmetric and some are antisymmetric, or anything like that. They are not yet set up to have any sort of mechanism to consider these relationships.

There are some researchers that are working on systematic generalization of this sort. Even as I type this I can hear Jürgen Schmidhuber privately grumbling that if AI is going to be done right, he will have to do it himself.
posted by a faded photo of their beloved at 1:06 PM on November 20, 2023 [2 favorites]


The difference between coding to make a light blink and producing useful outcomes is maybe more acute over here in "data science" (whatever that means in 2023). I already have PMs sending me ChatGPT-generated SQL that 1) sometimes flat-out doesn't work because ChatGPT can't reliably remember the difference between Redshift and MySQL, or 2) returns results but is wrong, because every data warehouse is full of reasonable-sounding columns that are LIES, and no data warehouse is accurately documented.

My skepticism of the whole project is rooted in the fact that... LLMs just don't make anything that we don't already have in abundance, because they are trained on that abundance. The big obstacles in software development are bad incentives, over-reliance on metrics, and clueless organizational structures that are about grouping like with like, instead of forming coherent problem-solving units. Solutions to these problems are rare, so... LLMs can't actually help with that.

(I am in kind of a Pascal's Wager place with all this, though: if LLMs let product managers write queries for themselves, I'm happy and move onto more interesting work. If they don't, I get to feel smug about my predictions. Yay!)
posted by McBearclaw at 6:44 PM on November 20, 2023 [9 favorites]


I've been playing around with this as a sign of good or bad use cases:

Are you using the LLM to think better, or to think less?

If the latter, you'll probably come out worse on balance. In other words, you can't throw away the fundamentals of your discipline and just hope the LLM will produce the same outcome.

Another one - using an LLM to extract information is far more powerful than to generate text from nothing. I really think these will be the most impactful integrations into products, rather than "write this for me". But they're 'boring' and hard to hype.

For those extractive use cases, the input should usually be significantly longer than the output. And the input has to be relevant and high-quality. And the task itself has to be relevant and helpful. So the LLM is really only a tiny piece of the picture.
posted by lookoutbelow at 8:19 PM on November 20, 2023 [2 favorites]


We are being pushed to use AI at work. I don't code, but I have to understand what is being coded, and it's been useful for looking at something and answering questions about it, so I can then go and ask better questions of an actual person. Most of our communications already sound like they were written by a business-buzzword AI, so it gets used for emails a lot.

Oddly, the place where I find it most useful is for complex Excel functions and regex - I can do them, but rather like a slide rule vs a calculator, it's much faster.

There are a bunch of work processes that would highly benefit from automation but for security reasons, we're not allowed to do custom solutions on them. This is where I've also used AI - we have an internal instance we can use, and I've processed a bunch of text data through it. It's a bit like using an elephant to lift a twig into place, but hey! I have a lot of twigs.
posted by dorothyisunderwood at 9:15 PM on November 20, 2023


ChatGPT is a first-year programmer googling. It has no concept of the ecosystem. It’s naive statistical patchwork.

I have a blog post in the works on all of this. It interests me because I'm hearing a *lot* about ChatGPT from those who aren't deeply embedded in some sort of software engineering/programming/development role.

We pretty much all need to be able to use computers for our jobs/life these days but the concepts have been so abstracted that a lot of people simply don't know how to debug issues. There was a blog post a decade ago on this. The snark in it is a little high but it has valid points, still.

What makes anyone think ChatGPT is going to improve this in the long run? I get it. It can be used for trivial solutions, algorithms, and code snippets, but for that kind of thing why aren't you using a third-party library or existing solution? NIH? You fail software engineering.

I also get it - it can be useful for assembling all of the parts, a lot of which is what modern software engineering is. But, paraphrasing Kernighan - if you couldn't assemble the pieces on your own, because you didn't understand them, how are you going to debug it? Oh you'll just ask ChatGPT again? Too bad, the world has changed since you last asked.

We (collectively, as software engineers) rail against duplication. I even have the opinion that blog posts and HOWTOs that duplicate sections of official documentation are bad - they eventually go out of date. The problem is the internet never forgets all that crap, and the LLMs slurp it all up and don't know any different.
posted by lawrencium at 2:26 AM on November 21, 2023 [4 favorites]


dorothyisunderwood: Oddly, the place where I find it most useful is for complex Excel functions and regex - I can do them, but rather like a slide rule vs a calculator, it's much faster.

Nth-ing the query about comprehension and debugging, I came here with the “I know, I’ll solve this with a regex” / “Now you have two problems” gag, only to find it already made earlier in this thread.
posted by k3ninho at 3:36 AM on November 21, 2023 [1 favorite]


Uhg.
posted by AlSweigart at 6:59 AM on November 21, 2023


The vision capabilities of GPT4-V are also useful for programming, including the "soft" stuff. I've used it to debug an annoying CSS bug by screenshotting the UI and uploading it to GPT4-V, pasting in the HTML/CSS and describing the issue. It came up with the goods first try. You can also screenshot a UI and ask it for UX feedback, which is obviously not going to be as good as asking a human UX expert, but it's still pretty impressive in my opinion. When I tried it, it pointed out that 2 colours in a graph would be hard for a colourblind person to distinguish and suggested a better Font Awesome icon for one of our buttons. You can also upload marketing materials and ask it for feedback. This was the most impressive for me: I uploaded a brochure for a user conference and it gave detailed feedback - suggesting that "call to action" words be in a different colour to stand out from the rest, spotting that the name of a guest speaker wasn't properly centered on her silhouetted image, suggesting a URL shortener instead of having long URLs on printed material, and so on.

As a programming assistant, I think it starts to realise what was described at the end of "The Mythical Man Month", where instead of a team of programmers with juniors and seniors you model the team on a surgeon and a team of assistants. GPT4 won't do the surgeon's job (yet), but it can speed up so many little tasks as an assistant - "generate a C# class based on this example JSON data", "convert this powershell script to C# code", "write me a python script to read a CSV file and rename all files named a value in the OldName column to the name in the NewName column" and so on. I don't think that's all it's going to be useful for however. It seems like a lot of skeptics have moved on from the denial stage "it's just like ELIZA, people are falling for a parlour trick" to the bargaining stage "well sure it can do trivial programming tasks, but it's never going to do X, you need a human for that". But it's done so many things that I would have guessed would be impossible for a machine that I'm not confident saying that any more.
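(To give a sense of the scale of that scutwork, the CSV-rename prompt above comes out to roughly the sketch below; the file name renames.csv is my assumption, and the column names OldName and NewName come from the prompt, not from anything GPT4 actually produced for me:)

    import csv
    import os

    # Sketch of the "rename files from a CSV" assistant task mentioned above.
    # Assumes a file renames.csv with columns named OldName and NewName.
    with open("renames.csv", newline="") as f:
        for row in csv.DictReader(f):
            old_name, new_name = row["OldName"], row["NewName"]
            if os.path.exists(old_name):
                os.rename(old_name, new_name)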
posted by L.P. Hatecraft at 2:12 AM on November 22, 2023 [3 favorites]


In my experience, the big differences between pro coders and hobby coders like me are speed, readability, and security. Give me enough time and motivation and I can give you a working program, but it's not likely to be efficient, not likely to be easy to maintain (version control means naming things "hello worldb1finalfinished2abeta", right?) and may well contain giant, laughably obvious security holes.

I am having fun imagining what the attack surface looks like in a startup where nobody knows how to code. Are they like "trust me bro, we had it vetted by AI?"
posted by surlyben at 11:50 AM on November 22, 2023 [2 favorites]




This thread has been archived and is closed to new comments