Read the Room

A Field Guide to Computational Creatures

You explain things differently to different people. When talking to a five-year-old, you use simple words, concrete examples, perhaps some pictures involving friendly cartoonish animals. When talking to an expert, you can use specialized vocabulary and skip the fundamentals they mastered years ago. When talking to someone who doesn’t speak your language fluently, you slow down, enunciate with care, gesture more than usual, and watch for signs of comprehension.

Apparently there isn’t a very sharp line between condescension and communication. We are always adjusting our message to match our audience.

Programming works the same way. Except instead of different people, you find yourself working with different types of computers, different programs, different computational contexts and levels of abstraction. And each one requires a different approach, a different “language,” a different way of thinking about the problem. The computer is not a single species but an entire zoo.

Same, but Different

When people say “I’m learning to program,” they are actually describing dozens of different activities that happen to share a name. It’s like saying “I’m learning to write.” Technically true, yet not the most informative.

Learning to build websites is programming. Learning to make video games is programming. Learning to control robots is programming. Learning to analyze data is programming. Learning to build mobile apps is programming.

All of these resemble one another about as much as poetry resembles technical manuals, graffiti, or legal briefs. Yes, they’re all writing. But the skills, the style, the context, the constraints, the audience: everything is different. A sonnet and a software license agreement are both made of words, yet no one confuses the two. If you are fluent in French, you may discover that Italian, while clearly related, is not merely French with a different accent.

We say “learn to program,” as if it’s one thing when it’s a whole set of related but distinct skills, each with its own customs and peculiarities.

The good news is that fundamentals transfer. The way of thinking, the basic building blocks, the problem-solving approach: those work everywhere. But the specifics, the syntax, the idioms, the patterns change depending on who you’re talking to.

A Tour of the Computational Zoo

Consider the diverse habitats where programmers work. Each environment has its own constraints, customs, and ways of thinking.


Websites: The Conversation Across a Distance

When you build a website and someone views it, you are orchestrating a conversation between two entities who live far apart and communicate through letters that travel at the speed of light and arrive in milliseconds.

On one side is the server, a computer sitting in a data center somewhere, possibly thousands of miles away, humming quietly in a room full of blinking lights. When you type a URL into your browser, you’re sending a message to this server: “Hello, I’d like to see this page, please.”

The server receives this message, considers it (which might involve consulting databases, performing calculations, verifying whether you’re logged in and authorized), and then sends back a response. This is usually a document full of instructions written in HTML, CSS, and JavaScript. It is, in effect, a very elaborate recipe for what to show you.

Then your browser (the program through which you’re most likely reading these words) receives those instructions and follows them. It draws the page, makes things clickable, runs animations, waits for you to interact or scroll so it can change what is on the screen to give you a cohesive experience. It is a remarkably obedient interpreter of remote instructions.

The request/response cycle: how browsers and servers communicate.

Programming for the web means programming for this back-and-forth. You write code that runs on the server (handling requests, managing data) and code that runs in the browser (making things interactive, responding to clicks). Two different programs, in two different places, cooperating across the void.

These two programs can only talk to each other through messages. They cannot reach into each other’s memory or control each other directly beyond what each one permits (assuming they’re secure against vulnerabilities). They communicate by sending data back and forth, like pen pals who happen to be very fast writers and have attended couples therapy together.
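To make this concrete, here is a minimal sketch of the server’s half of the conversation, using Python’s built-in http.server module. The greeting page and the port number are invented for illustration; a real server would consult databases and check logins before answering.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def build_page() -> bytes:
    """The 'recipe' the server sends back: a tiny HTML document."""
    return b"<html><body><h1>Hello from the server!</h1></body></html>"

class GreetingHandler(BaseHTTPRequestHandler):
    """Answers every GET request with the page above."""

    def do_GET(self):
        body = build_page()
        # First the status line, then the headers, then the document itself.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run it for real:
#   HTTPServer(("", 8000), GreetingHandler).serve_forever()
# then visit http://localhost:8000 in a browser, which plays
# the other half of the conversation.
```

The browser never sees this code. It only sees the message the code sends back, which is the whole point: two programs, two places, messages in between.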

Games: Sixty Instructions Per Second

Now imagine a completely different scenario. You’re making a game, say, a platformer where a character (who may or may not be of Italian origin) jumps around collecting coins while avoiding things that wish to do them harm.

To appear as smooth as a movie to the human eye, your game needs to redraw the screen 60 times every second. That’s once every 16 milliseconds or so. Sixteen milliseconds is not very long. For reference, it is shorter than the time it takes to blink. And in that tiny sliver of time, you need to:

  • Check what buttons the player pressed (or other allowed inputs)
  • Move the character’s position in the world accordingly
  • Check if the character hit anything in the world
  • Update the animation frame
  • Move all the enemies (and anything else that moves on its own)
  • Check if the player collected any coins
  • Play sounds if needed
  • Draw everything on the screen

Every frame. Sixty times a second. Forever, or until the player quits or throws the controller in frustration.

This is a completely different conversation than web programming. There’s no polite request/response. There’s no waiting. There’s just a relentless loop: update, draw, update, draw, update, draw. Forever. It’s a very demanding metronome that never stops ticking.

The code you write for games reflects this constraint. Everything is organized around “what needs to happen every frame?” You become obsessed with speed. If any frame takes longer than 16 milliseconds, the game stutters, and players notice. They always notice.
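The shape of that relentless loop can be sketched in a few lines of Python. This is a toy: the physics constants, the button names, and the flat floor at y = 0 are all invented, and the “draw” step is only a comment. But the structure (update, draw, sleep out whatever remains of the 16-millisecond budget) is the real thing.

```python
import time

FRAME_BUDGET = 1 / 60  # roughly 16 milliseconds per frame

def update(state, pressed):
    """One frame's worth of game logic: move, jump, apply gravity, land."""
    if pressed.get("right"):
        state["x"] += 2
    if pressed.get("jump") and state["on_ground"]:
        state["vy"] = -10
        state["on_ground"] = False
    state["y"] += state["vy"]
    state["vy"] += 1                 # gravity pulls the character back down
    if state["y"] >= 0:              # the floor lives at y = 0 in this toy world
        state["y"], state["vy"], state["on_ground"] = 0, 0, True
    return state

def run(frames):
    """The metronome: update, draw, wait out the rest of the frame budget."""
    state = {"x": 0, "y": 0, "vy": 0, "on_ground": True}
    for _ in range(frames):
        start = time.perf_counter()
        state = update(state, {"right": True})
        # draw(state) would go here in a real game
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)
    return state
```

If `update` plus the drawing ever takes longer than the budget, there is nothing left to sleep, the frame arrives late, and the player notices.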

Robots and Embedded Systems: The Careful Conversation

Picture a tiny computer embedded in a toaster, a robot vacuum, or a pacemaker. These are the creatures of the embedded world, and they live under constraints that would make a web developer weep.

This computer might have less memory than a single photo on your phone. It might run on a battery that needs to last for years, not hours. It might control motors or sensors or, in the case of the pacemaker, functions upon which human life depends. And it needs to be absolutely, utterly reliable. There is no “have you tried turning it off and on again” for a pacemaker.

Programming for embedded systems means counting every byte of memory, measuring every millisecond of execution time, testing exhaustively because there is no update button if something goes wrong after deployment. The device is out there in the world, doing its job, and you had better hope you got it right.

The conversation here is slow, careful, and extremely precise. You cannot simply throw more resources at a problem. You cannot afford to be wasteful. Every instruction drains a little battery, and every byte of memory is precious.

But in exchange, you get direct control of physical reality. Your code doesn’t just move pixels on a screen. It moves motors, measures temperature, and turns things on and off in the actual, physical world. There’s something rather magical about that.

Mobile Apps: The Interrupted Conversation

Your phone’s screen is small. Its battery is limited. You might be using it while walking, on a bus, or in a floor-hopping Faraday cage (an elevator, that is). The operating system might interrupt your app at any moment: to take a phone call, because the battery is low, or because the user has the attention span of a caffeinated squirrel and switched to another app.

Programming for mobile means designing for interruption and constraint. Your app needs to save its state constantly, because it might be terminated without warning at any second. It needs to handle losing its network connection gracefully, with the dignity of a program that expected this and planned accordingly. It needs to be stingy with the battery, because nobody likes an app that drains their phone before lunchtime.

But you also get capabilities that web apps can only dream of, including the camera, GPS location, the ability to send notifications, and access to the user’s contacts and calendar (with permission, of course).

Data Analysis: The Batch Conversation

Imagine you have a million customer records and you need to find patterns. Or you have sensor data from a year and you need to graph it. Or you have text from thousands of documents and you need to summarize them.

This isn’t interactive. You’re not drawing things 60 times a second or responding to user clicks. You’re processing data in batches: transforming it, filtering it, aggregating it, analyzing it. It’s rather like being a librarian with very specific questions and a lot of time.

The conversation here is: “Here’s the data, here’s what I want to know about it, take your time and tell me the answer.” There’s no urgency frame-by-frame. The constraint is in getting the right answer, not in getting an answer quickly.

The code you write reflects this. You’re not worried about keeping the frame rate smooth. You’re worried about processing a million records efficiently. You’re using different tools, different patterns, different ways of thinking about the problem, and some stats. Actually, a lot of stats.
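A batch pipeline can be sketched in plain Python. The customer records below are invented stand-ins for the million real ones, but the shape of the work (filter, group, aggregate) is exactly what the prose describes:

```python
from collections import Counter
from statistics import mean

# Invented sample data standing in for a million customer records.
records = [
    {"region": "north", "spend": 120.0},
    {"region": "south", "spend": 80.0},
    {"region": "north", "spend": 200.0},
    {"region": "south", "spend": 40.0},
]

def summarize(rows):
    """Batch processing: filter, group, aggregate. No frame rate in sight."""
    big_spenders = [r for r in rows if r["spend"] >= 100]    # filter
    per_region = Counter(r["region"] for r in rows)           # group and count
    avg_spend = mean(r["spend"] for r in rows)                # aggregate
    return {
        "big_spenders": len(big_spenders),
        "per_region": dict(per_region),
        "avg_spend": avg_spend,
    }
```

Nothing here cares whether it finishes in 16 milliseconds or 16 minutes. It cares about being right.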

The Cloud and Beyond

Cloud Computing: Someone Else’s Computer

Instead of running programs on your own computer, you can now rent time on someone else’s, especially if yours is not beefy enough to do what you need. “Cloud computing” is a rather poetic name, implying that your programs float somewhere in a nebulous “cloud” of computers, and you don’t really know (or need to know) exactly which physical machine is running them at any given moment.

This changes the conversation in interesting ways:

  • Your program might run on a different computer each time it runs
  • You don’t manage the hardware. If a computer fails, the cloud provider replaces it
  • You can ask for more computers when you’re busy and fewer when you’re not
  • Someone else handles the electricity bills, the cooling systems, the security guards

But you also give up control. You’re renting, not owning. You’re trusting someone else to keep the lights on, and if that cloud provider has problems, so do you. It’s a trade-off, as most things in life turn out to be.

Serverless: The Sleeping Computer

Imagine if you only paid for your car when you were actually driving it. Not when it sits in your garage overnight. Not when it’s parked at work. Just the moments when the engine is running and wheels are turning.

“Serverless” computing is like that. Your code sits dormant, sleeping, costing you nothing (which is economically pleasant). When someone needs it (when a user clicks a button, when a scheduled event occurs), your code wakes up, does its job, and goes back to sleep. Serverless is cloud computing’s Gen Z sibling.

This creates a fascinating constraint. Your code must be able to start quickly, do something useful, and shut down. It cannot assume it’ll be running continuously. It has no memory of previous runs (unless you explicitly store something elsewhere). Each awakening is like a new conversation with someone who has amnesia. You must explain everything from scratch, every single time.
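The shape of a serverless function can be sketched like this. The handler signature and the event fields are a generic illustration, not any particular cloud provider’s API; each provider has its own variation on the same idea.

```python
def handler(event):
    """Wakes up, does one job, goes back to sleep.

    No memory of previous runs: everything it needs must arrive in the
    event, or be fetched from external storage at the start."""
    name = event.get("name", "stranger")
    # Any state worth keeping would be written to external storage here,
    # because the next invocation starts from scratch, amnesia and all.
    return {"status": 200, "body": f"Hello, {name}!"}
```

Notice there is no loop and no waiting around. The function is the whole program: it starts, answers one event, and ends.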

Machine Learning: Teaching Instead of Telling

And now for something genuinely different.

In all the domains we’ve discussed, you tell the computer exactly what to do. Step by step. Precise instructions. The computer doesn’t figure anything out. It follows your recipe with the literal-mindedness we’ve come to expect. But there’s another approach that has become remarkably powerful. Instead of telling the computer what to do, you show it examples and let it figure out the pattern. This is machine learning, and it turns the programming conversation entirely on its head.

You don’t write rules like “if the email contains words like ‘prize’ and ‘winner’ and ‘click here’, mark it as spam.” That would be cheating.

Instead, you show the computer thousands of emails, some marked spam, some not spam. Through mathematics that we’ll not dive into here, the computer learns to recognize spam it’s never seen before.

This is less like giving instructions and more like teaching a child. You don’t tell a child “a dog has four legs, fur, a tail, barks, and comes in various sizes.” You show them dogs. Many dogs. Eventually, they recognize dogs they’ve never seen before, including unusual dogs that would have broken any rule-based system, some of which are actually bear cubs. The computer can learn to generalize.
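Here is a toy version of teaching-by-example: a word-counting classifier, vastly simpler than real machine learning, with a made-up four-message training set. The point is only the shape of the idea: the code below contains no spam rules, yet after “training” it can label messages it has never seen.

```python
from collections import Counter

def train(examples):
    """Learn word counts from labeled examples instead of hand-written rules."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Score a new message by which label's vocabulary it resembles more."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

# A (comically small) pile of labeled examples: the "many dogs."
examples = [
    ("win a free prize click here", "spam"),
    ("claim your prize winner click now", "spam"),
    ("meeting moved to tuesday", "ham"),
    ("lunch on tuesday sounds good", "ham"),
]
model = train(examples)
```

Real systems use far more data and far better mathematics, but the inversion is the same: examples in, pattern out.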

Abstraction Layers: The Hierarchy of Communication

Computers are organized in layers, rather like a company has layers of management (though with considerably less office politics).

At the very bottom is the hardware: the actual circuits, the physical electrons moving through silicon. This is the reality of what’s actually happening. (Pretend you didn’t see the microcode and firmware behind yonder closed doors.)

Above that is the operating system (Windows, macOS, Linux, iOS, Android). This is software that manages the hardware, decides which programs get to run when, and handles files and memory and networking. The operating system is like a manager, coordinating everything.

Above that is the programming language runtime, the environment where your code actually runs. This translates your relatively readable code into instructions the operating system and hardware can execute.

And at the top is your program, the thing you’re building.

When you program, you’re typically working at one layer and trusting the layers below to handle the details. You tell the browser “draw a button here,” and you trust that the browser knows how to tell the operating system to tell the graphics card to actually light up the pixels. It’s delegation all the way down.

This is wonderful because you don’t need to understand everything. You can, in effect, stand on the shoulders of giants without needing to understand the anatomy of giants.

But it also means you need to know which layer you’re working at, what it can do, and what it expects from you. Each layer has its own language, its own customs, its own expectations.

Why So Many Programming Languages?

By now you might be asking: why are there hundreds of different programming languages? It does seem rather excessive.

The answer is the same reason there are different kinds of writing. Different tasks call for different tools. You would not write a sonnet in the style of a legal contract, nor a legal contract in the style of a sonnet. Well, you could, but the results might be unfortunate.

Easy to learn: Some languages are designed to hide complexity, provide helpful error messages, and offer lots of friendly documentation. These are great for beginners, great for prototyping, great for scripts and automation. Python is a good example.

Very fast: Some languages give you low-level control, let you manage memory precisely, and compile directly to machine code. These are great when performance is critical. C and Rust are examples.

Domain-specific: Some languages are designed for specific domains. SQL is amazing for working with databases but useless for making games. JavaScript runs in web browsers, so if you want to make websites interactive, you’re probably using JavaScript.

This is why “which language should I learn first?” is such a frustrating question. It depends entirely on what you want to make! Want to build websites? Learn JavaScript and maybe Python. Want to make games? Look at C# or C++. Want to analyze data? Python or R. They’re all good answers depending on where you want to go.

The good news is that once you learn one language well, learning others becomes considerably easier. The concepts transfer. Variables work roughly the same way in Python and JavaScript and C. Loops are loops. Functions are functions. You’re just learning a different vocabulary for expressing the same ideas, rather like how learning Spanish makes Italian somewhat less mysterious.

The Same Problem, Different Conversations

Here’s a useful exercise to understand how domain shapes solutions. Imagine you want to make something blink. A simple thing, you might think. On and off, on and off. How hard could it be?

The same goal, it turns out, requires completely different approaches on different platforms.
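As one concrete version, here is the blink sketched in Python for a terminal. The on/off pattern itself is platform-independent; only the last step changes. The comments note, as an assumption rather than working code, where the other platforms would differ.

```python
import time

def blink_states(cycles):
    """The abstract blink: the alternating on/off pattern, platform-free."""
    return ["on" if i % 2 == 0 else "off" for i in range(cycles * 2)]

def blink_console(cycles, interval=0.5):
    """One platform's version: 'blink' by printing to the terminal.

    On a microcontroller this print would be a pin toggle; on the web,
    a timer flipping a CSS class; in a game, a change drawn next frame."""
    for state in blink_states(cycles):
        print(f"light {state}")
        time.sleep(interval)
```

Same pattern, different conversation. The blink is easy; talking to the platform is the work.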

What’s Universal

Despite all these differences (and they are substantial), some things remain true everywhere.

  • You need to think in steps
  • You need to handle things that go wrong (because they will)
  • You need to test your assumptions (because they are often wrong)
  • Precision matters (always, every time)
  • Clear code is better than clever code (your future self will thank you)
  • Debugging is part of the process, not a sign of failure (everyone does it, constantly)

These fundamentals work whether you’re programming a website or a robot, a game or a data pipeline, a mobile app or an embedded system. They are the universal laws of the computational universe.

The specific syntax changes. The tools change. The constraints change. The problems change.

But the discipline of thinking clearly, expressing precisely, and working systematically is universal. That’s the skill you’re really learning, regardless of which creature in the computational zoo you happen to be addressing.

Finding Your Domain

So how do you choose? How do you decide which kind of programming to learn, which audience to talk to?

The honest answer is to follow your interest. This sounds like advice from a greeting card, but it happens to be true.

What do you want to make? What excites you? What problems do you want to solve? Your answers determine where you should start.

If you dream about making games, learn game programming. If you’re fascinated by data and patterns, learn data analysis. If you want to build the apps you use every day, learn web or mobile development. The domain that captures your imagination is the domain where you’ll have the patience to push through the hard parts.

You don’t need to know your entire career path before you start. You don’t need to make the “right” choice. You just need to start somewhere that interests you. The right choice is, almost always, the one that gets you started.

And if you discover that web development isn’t your thing but you love working with hardware, or you thought you’d love game dev but actually you prefer building tools, that’s fine. That’s normal. The fundamentals you learned transfer. You haven’t wasted your time. You’ve merely taken the scenic route.

The important thing is to start. Pick something that interests you, something you want to build, and learn to talk to that particular audience. Once you can hold one conversation fluently, other conversations become easier to learn. The first language is always the hardest.


Next: Now that we know who we might be talking to, it’s time to learn the lingo and the building blocks of every programming language.