powersnail
People learn things differently.

I really need the "core concept" first, before diving into examples (unless the core concept is extremely simple).

Many tutorials are like hand-holding Lego building: here are your Lego pieces, watch and follow me in building this toy project, and you'll know how to Lego by the end of the day.

I just don't function very well in this model. I want to know how and why decisions are made. I want to see things from the author's perspective. I want to know what each Lego piece feels like, how the pieces connect to each other, and how the author arrived at a particular design. Trying to follow a tutorial before at least some high-level conceptual discussion feels like reverse-engineering something I shouldn't need to reverse-engineer.

Most of the time, if I'm approaching a new library or framework, I read the introductory text and skip the "Getting Started" code samples. Usually there's some sort of "Advanced" section where a lot more discussion of concepts happens, and that's what I'd like to dive into first. I'll go for the API references next, to try to grasp what the important interfaces look like, and finally I'll get back to the basic code samples at the beginning of the tutorial.

austin-cheney
There was an article similar to this less than 2 weeks ago: https://news.ycombinator.com/item?id=41566097

This whole issue of writing for people really distills down to two skills:

1. Empathy

2. Writing

There is a world of difference between writing some code and writing an application, a product. That is all this article is about, though less explicitly. Empathy is a factor because it's the difference between self-orientation and external-orientation. Self-oriented developers are primarily concerned with easiness, convenience, code vanity, and other subjective criteria. For them, it comes down only to their own effort of delivery.

Externally-oriented developers are primarily concerned with architecture and documentation because for them success is all about how other people receive their work product. Simplicity is more important than easiness because externally-oriented developers know they cannot read minds and have no idea what other people find easy, but they do know how to reduce steps and keep their code small.

In the brain, writing an application, from a holistic product perspective, is no different from writing an essay, article, or book. It's all about organization and features. The code is something that comes later, like words on a page. People who only write pieces of code never develop the higher-order organizational skills that bring it all together. It also works in the inverse: a person who cannot write an essay with ease cannot envision writing a new application.

Those are the reasons I super detest frameworks. Frameworks deprive developers of the practice necessary to write original software, which means they never develop those organizational skills. It's a massive gap that the afflicted cannot see, but it is enormously apparent to those who can. From a behavioral perspective it's no different from a learning or neurological disorder, in that the afflicted know something is missing but have no means of seeing what that something is, and that drives massive emotional insecurity.

photonthug
> “Humans learn from examples, not from ‘core concepts’”

Nitpicking maybe, but I disagree with TFA on this point; not all humans work this way. Those of us who actually prefer the general -> specific direction are already largely ignored in K-12 and may only begin to thrive in higher education. Since we're already kind of underserved, there's no need to also deny that we exist!

CharlieDigital
From Code Complete:

    “The smaller part of the job of programming is writing a program so that the computer can read it; the larger part is writing it so that other humans can read it.” (P.733)
Has stayed with me for ~20 years.
animal531
Bit of a side issue for me: I was working on my Unity game the other day and thought to myself, have IDEs really not progressed all that much in the last 10-20 years?

Default intellisense has definitely gotten a lot better, but apart from that and a few other minor things the whole concept of coding feels pretty much the same today as back then.

The biggest positive change for me is outside the editor: things have become easier thanks to much greater access to libraries and documentation, and the sheer volume of user questions and answers we can now draw on (and finally some new tools like ChatGPT that can aggregate those answers and, on occasion, deliver a reasonable one).

But overall the act of writing code seems to be stuck. As a result, I'm currently taking some time out from my game to run some experiments. I don't want to create a new language; instead, I want to try to offload everything I can to the computer and let it do the drudge work while I create.

Just three of the initial things I want to test:

- Why do I need to worry about small language specifics like brackets, terminators, and so on, when tools should be able to auto-complete them for me? The same goes for the private/public access chain (as well as other modifiers such as unsafe) when tools can auto-determine the most efficient set.

- You're editing a file (or parts of different files) and are focusing on, say, 5 methods that are interacting. I want to see all of them on the screen at the same time, without struggling to open and manage many windows with, for example, VS horizontal/vertical sliders.

- Data conversion (see the sketch below). Say I created a HashSet for something but realize I need to change it to a Dictionary or a Tuple: just make it happen. If it requires brainwork, show me all the places that require supervision, where I have to say OK or make an edit myself. In the case of Unity, I also want to be able to click on a method and/or data set and tell it to convert it to a Burst Job with its accompanying NativeData sets.
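
On that data-conversion point, the kind of edit I imagine the tool making might look like this (a hypothetical before/after sketch in Python rather than Unity C#; the names are invented):

    # Before: we only ever test membership, so a set is enough.
    seen_ids = set()
    seen_ids.add(42)
    print(42 in seen_ids)  # True

    # After the imagined tool rewrites set -> dict:
    seen_positions = {}              # set() -> {}: mechanical, no review needed
    seen_positions[42] = (1.0, 2.0)  # .add(42) -> [42] = ???: the value needs a human
    print(42 in seen_positions)      # membership check unchanged: no review needed

The interesting part is the middle line: every call site that added an element now needs a value, and that is exactly the set of places the tool should surface for supervision.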

bambax
The title of the post is debatable, as code is only written for humans. Computers don't need "code", and especially not high level code. They're happy with machine instructions. We write code because machine instructions are much too hard for us to write, and even harder to read.

We should not think of code as a way to interact with computers. Code is a way for us humans to formalize our thoughts so that they become so unambiguous that (even) a machine can follow them.

rkagerer
I write all of my code for humans.

Whether that's me, or some other poor schmuck who'll have to figure out my intent years from now.

I don't find code-writing hard. Reasoning comprehensively about your problem, collaborating with other stakeholders to discover and guide them along the best path, learning specialized skills (e.g. new math or industry conventions), devising efficient algorithms, and conveying your program's structure and patterns such that they're obvious and have elegant boundaries - that's the part that showcases talent. A lot of it comes down to communication and clarity.

Olshansky
Shameless shill of a blog post I wrote & shared last week:

   Move Fast & Document Things [1]
My goal wasn't to be philosophical but to share actual tips on how our small team [2] enforces (not automated, not AI, but deep, hard reviews) a culture of writing code for ourselves and each other.

All my personal friends who are engineering leaders at other orgs said "We do the same thing but you actually wrote it down".

Would appreciate ↑ if it brought anyone value!

[1] https://olshansky.substack.com/p/move-fast-and-document-thin... [2] https://github.com/pokt-network/poktroll/graphs/contributors

0xbadcafebee
> Too many programming books and tutorials are like “let's build a house starting from scratch, brick by brick” when what I want is “here is a functioning house, let's learn about it by changing something and then seeing what happens”

That's how I taught myself how to program. I spent years getting good at writing small, simple, kinda crappy programs. Later on I learned I wasn't eligible for better software development jobs, because I had absolutely no fundamental knowledge about software design, programming languages, and computers. It was humbling walking out of a job interview realizing how much I didn't know because I never learned the boring way.

Always read the whole manual. Always learn the fundamentals.

teddyh
To interpret the headline literally: writing code for humans is actually relatively easy; it's called “literature” (or “technical writing”). What's hard is writing code (for computers) which is also easy for humans to understand. Anyone who has written polyglot code knows the scale of the challenge, but also knows the tricks to make it work. I.e., you have to do a lot of things which mean something to one “reader” but are meaningless to the other, and vice versa. For example, variable names are meaningless to the computer, but very important to humans. And so on.
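
A minimal illustration of that last point (my own example): these two functions are the same program as far as the computer is concerned, but only one is legible to the human reader:

    # Identical operations as far as the interpreter cares.
    def f(a, b, c):
        return (a - b) / c

    # The names carry all the meaning, and only humans read them.
    def z_score(value, mean, std_dev):
        return (value - mean) / std_dev

    print(f(10, 8, 2) == z_score(10, 8, 2))  # True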
auggierose
A nice read, but I think there is a contradiction here that needs to be cleared up:

1) On one hand, the author says that humans learn from examples, not core concepts.

2) On the other hand, the author emphasises the importance of reducing "conceptual overload", by reducing the number of concepts while maintaining their expressiveness.

So it is not that core concepts are not important for learning. Rather, it is essential to have a set of well-defined and well-documented core concepts which cover what the system can do. But of course, you also need plenty of insightful examples, and of course a "Getting Started" guide should start with examples, not core concepts. But if the core concepts are simple and few enough to fit into a "Getting Started", that's a win.

jeroen
As Spolsky said a long time ago:

> It’s harder to read code than to write it.

k2so
Easier-to-use libraries have a significant advantage over highly complicated (supposedly performant) ones in driving adoption.

Recently I was trying to generate text embeddings from a Hugging Face model. Nvidia Triton and text-embeddings-inference (built by Hugging Face) were my two options.

> why large companies are generally incapable of delivering great developer experience

I wanted to curl up and cry while trying to make Nvidia Triton spit out embeddings. The error messages are cryptic and you need Jedi-like intuition to get it to work. I finally managed to get it to work after like 2 days of wrangling with the extremely verbose and long-winded documentation (thanks in part to Claude, which helped me understand with better examples).

Triton's documentation starts off with core principles, and throughout the entire documentation there are hyperlinks to other badly written documentation to ensure you know the core concepts. The only reason I endured this was the supposed performance gains Triton promised but underdelivered on (quite possibly because I had missed some config/core concept and didn't get all the juice).

On the other hand, text-embeddings-inference has a two-line, front-and-centre command to pull the Docker image and get running. The only delay was my internet speed before it started serving embeddings. Deploying it on our k8s infra was a breeze: minor modifications to the Dockerfile and we were running. And on top of that, it's more performant than Triton!
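
For a sense of scale, once the container is up, getting embeddings back is a single HTTP call. A rough Python sketch (the endpoint path and payload shape are from memory, so verify against the TEI docs):

    import requests

    # Assumes a text-embeddings-inference container is already serving on port 8080.
    resp = requests.post(
        "http://localhost:8080/embed",
        json={"inputs": "Hello, embeddings!"},
        timeout=30,
    )
    resp.raise_for_status()
    embedding = resp.json()[0]  # one vector per input string
    print(len(embedding))       # dimensionality depends on the model being served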

mgaunard
If humans understand how computers work, then you just have to write code for computers and can ignore the human element.

Unfortunately, over the last few decades we decided that software engineers don't need to know how computers work anymore.

ants_everywhere
My approach to dealing with lots of concepts is pretty much stolen from how babies learn language.

Grownups talk around non-verbal babies as if they're not there. We refer to all the objects in the room (or anywhere else) whether the baby understands them or not. "How was your day at work?" "Oh it was okay, but traffic was bad so I didn't have time to get my usual coffee." Babies don't understand what traffic or coffee is, and they don't have to. They still eventually learn the language and really focus on the things that matter to them.

At some point, a lot of us try to simplify by reducing the number of concepts we're exposed to, and we try to feel like we understand those fewer concepts. I've switched my approach to just being immersed in the way experts talk about the field, and getting used to not really knowing what most things mean. It turns out you get a ton of information this way. Not only do you learn the vocabulary before you need it (reducing the time required later when you eventually do), but you also pick up a sense of which things are fundamental (they come up a lot in conversation) and which are extraneous detail (they're barely mentioned, or only mentioned when something goes wrong).

JimDabell
On a side note, I have wondered if LLMs work more effectively with code that is well-structured and easy for humans to read than they do with spaghetti. Has anybody researched this?
082349872349872
A view from 1985: https://pages.cs.wisc.edu/~remzi/Naur.pdf

[pedantically: the theory generated by a program and the theory generated by the axiomatisation in the heads of its programmers should be equivalent, but if you only have one it'll be easier to derive the former given the latter than the other way around]

throwaway14356
The flood of feedback from power users shouldn't prevent a simple experience for a new user. The hard part is exposing the new user to ever more complicated goodness in the correct order, upgrading them gradually.
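
One common way to pull that off in an API is progressive disclosure through defaults: a new user sees one required argument, and the power-user knobs only appear when looked for. A generic Python sketch (the function and all its parameters are invented for illustration):

    import gzip

    def export_report(data, *, fmt="pdf", page_size="A4", compression=None,
                      renderer=None):
        """New users call export_report(data); every advanced knob has a default."""
        renderer = renderer or (lambda d: repr(d).encode())  # pluggable by power users
        body = renderer(data)
        if compression == "gzip":
            body = gzip.compress(body)
        return {"format": fmt, "page_size": page_size, "bytes": body}

    # Day one:
    export_report([1, 2, 3])
    # Month six, same function, more goodness exposed:
    export_report([1, 2, 3], fmt="html", compression="gzip")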
WillAdams
Well, there is at least one effort at a solution:

http://literateprogramming.com/

and I've found that John Ousterhout's recent book, _A Philosophy of Software Design_, is one of the most notable programming books of the past decade and speaks to many of these difficulties so well that I added it to my effort at a list of (mostly) Literate Programming books:

https://www.goodreads.com/review/list/21394355-william-adams...

The other issue here is the still unanswered question:

>What does an algorithm look like?

and by extension, the further question of:

How does one manage a visual representation of a program when it gets beyond the size of one screen/window, or a page in a book, or for the largest ones, a poster?

With a bit of help from tex.stackexchange.com I was able to put together a Literate Programming system which allows me to use (La)TeX without the comment character which docstrip mandates:

https://github.com/WillAdams/gcodepreview/blob/main/literati...

(it's a little clunky, since that file has to be customized for the files in a given project)

but it allowed me to switch from having three files open in three different OpenPythonSCAD windows to a single .tex file which makes a .pdf: https://github.com/WillAdams/gcodepreview/blob/main/gcodepre... which has a ToC and multiple indices, all nicely hyperlinked, and which turns a search/review of the code into a vertical scroll.

That said, I sympathize w/ the author quite a bit, and often work up snippets of code using either Blockly or BlockSCAD3D: https://www.blockscad3d.com/editor/ or https://github.com/derkork/openscad-graph-editor

https://raw.githubusercontent.com/WillAdams/gcodepreview/mai...

osigurdson
I've seen the aircraft / bullet heatmap diagram a few times. However, I'm always left wondering "was this a purely empirical analysis?". Clearly engineers would have some sense of what areas would bring an aircraft down (loss of engine, loss of tail strike me as obvious with no experience in aircraft design beyond paper planes). It is a great prop for survivorship bias of course!
bodeadly
Writing code is easy. Knowing /what/ to write is hard. I know how to write English. But that doesn't mean I can write a book (that someone would want to read). AI can write code. But it still has to be told what to write.
osigurdson
>> Humans learn from examples, not from “core concepts”

So true. Humans are great at building mental models from raw data. Only after we learn our mental model is wrong do we RTFM (unless your role is very specialized, of course).

shahzaibmushtaq
First you write code for computers, then you rewrite the same code for humans.
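
In miniature, that rewrite might look something like this (my own before/after example):

    # First pass: the computer is satisfied.
    def p(l):
        return [x for x in l if x % 2 == 0]

    # Second pass: so is the next human reader.
    def keep_even_numbers(numbers):
        """Return only the even values, preserving order."""
        return [n for n in numbers if n % 2 == 0]

    print(p([1, 2, 3, 4]) == keep_even_numbers([1, 2, 3, 4]))  # True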
nextworddev
Yes, basically. That's why AWS has such a steep learning curve: the implementation details leak heavily into the API surface.
Jtsummers
Someone, a one day old account, wrote and then deleted this while I was writing a reply:

> No other engineering discipline thinks this way. You design circuits for performance, manufacturing, and cost, not other engineers.

Yeah, that's why we don't produce schematics, diagrams, and blueprints or maintain those things over the years.

Software development is a design discipline, not a construction discipline. The analogs in engineering disciplines to source code are not the circuit, the car, or the bridge artifacts, but the schematics, diagrams, and models that go into their development.

And yes, practitioners in engineering disciplines absolutely care about communicating with other people (including other engineers), that's a substantial portion of their job in fact.

samatman
The major part of this post is about documentation, and would have benefitted greatly from reference to the 4doc model: https://docs.divio.com/documentation-system/

It's basically saying: don't just provide a reference, provide how-tos as well, and lead with them, because they're the part of the total documentation that users generally want to see first. Generally, mind you; I myself tend to go straight to the reference material, though not always.

Not that 4doc is a silver bullet or a law of nature; Hillel Wayne has some good things to say about that here: https://www.hillelwayne.com/post/problems-with-the-4doc-mode...

WesSouza
I thought this was going to be about genetics or something.
ForOldHack
Sacrifice nothing for clarity.
jakobov
Codeisforhumans.com
fredgrott
I kind of have no comment, as my favorite song is by Pink Floyd, where it's encouraged to ignore teachers...
jbverschoor
It’s not.
luxuryballs
It definitely helps if you design it to be easy for people before you start writing the code; that's the primary goal of the project architecture design, imo.
mikkom
Yeah, bro, but I read on Twitter that I can just write "Code a health care app, make it very profitable!" to an AI and be a billionaire!
known
[dead]
SilHunter
[flagged]