CodingItWrong.com: Trying to steer computers towards more value and less harm.
https://codingitwrong.com/
Sun, 01 Mar 2026 15:49:38 +0000 (Jekyll v4.4.1)

<h1>Don't Sleep on Exhaustive Dependencies</h1>
<p>In React, if you have a <code class="language-plaintext highlighter-rouge">useEffect</code> hook that accesses a dependency not listed in the dependencies array, the lint rule <a href="https://github.com/facebook/react/blob/main/packages/eslint-plugin-react-hooks/README.md"><code class="language-plaintext highlighter-rouge">react-hooks/exhaustive-deps</code></a> (assuming you have it enabled) will give the error:</p>
<blockquote>
<p>React Hook useEffect has a missing dependency: ‘…’. Either include it or remove the dependency array.</p>
</blockquote>
<p>Oftentimes, following the suggestion and adding the missing dependency to the array can cause the effect to run more frequently than you want, or even produce an infinite loop. When this happens, I often see developers simply disable the lint rule for that line. When asked why, they say that the code breaks unless the rule is disabled. When I point out that there should be a way to rearrange the code to satisfy the rule, I often get the response that doing so makes the code much more complex, and that it isn’t worth the effort.</p>
<p>So should you just disable the exhaustive-deps rule at places where it’s causing you issues? Or maybe disable the rule entirely? I don’t think so. Here’s why.</p>
<p>The rule about exhaustive dependencies (not just the ESLint rule, but <a href="https://react.dev/reference/react/useEffect#specifying-reactive-dependencies">the general rule about how to use dependencies</a>) was put in place by the React core team because it gets at something important about what effects are intended to be, and the situations in which they’re intended to be used. If we use effects in ways the React core team explicitly warns us <em>not</em> to use them, we risk bugs in our code now, breakage in the future, and making our code harder for other React developers to understand.</p>
<p>So what would I recommend instead? First, see if you can avoid using an effect entirely, because often (but not always) there’s a better alternative. Second, if you find you <em>do</em> need to use an effect, think about it from the mental model the React docs recommend, and let that guide how you design it in a way that will satisfy the exhaustive-deps rule.</p>
<p>First, see if you can avoid using an effect. The React docs have an entire page dedicated to this topic: <a href="https://react.dev/learn/you-might-not-need-an-effect">You Might Not Need an Effect</a>. It’s worth reading in its entirety. A few key cases that often come up (see the page for more explanation and examples):</p>
<ul>
<li><strong>To load data</strong> from an API, use a React framework or a data-fetching library.</li>
<li><strong>To calculate data</strong> from other state or props, just perform that calculation directly in the render function instead of in an effect. If you find this causes a performance issue, memoize the data.</li>
<li><strong>To respond to an action</strong> the user took, put that logic in the event handler function that is called immediately in response to the action (<code class="language-plaintext highlighter-rouge">onClick</code>, etc.).</li>
<li><strong>To reset all the state</strong> when a prop changes (often in the case of a form), set a <code class="language-plaintext highlighter-rouge">key</code> on the component to force React to create a new instance of the component.</li>
</ul>
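<p>For the “calculate data” case, the fix can be as small as moving the calculation into the render body. Here’s a minimal sketch; all the names (<code class="language-plaintext highlighter-rouge">deriveFullName</code>, <code class="language-plaintext highlighter-rouge">firstName</code>) are hypothetical:</p>

```javascript
// Sketch: derive data during render instead of mirroring it into state
// with an effect. All names here are hypothetical.

// Instead of this (extra state plus an effect to keep it in sync):
//   const [fullName, setFullName] = useState('');
//   useEffect(() => {
//     setFullName(`${firstName} ${lastName}`);
//   }, [firstName, lastName]);

// ...compute it directly with an ordinary function:
function deriveFullName(firstName, lastName) {
  return `${firstName} ${lastName}`.trim();
}

// In the component body:
//   const fullName = deriveFullName(firstName, lastName);
// And only if profiling shows the calculation is expensive:
//   const fullName = useMemo(
//     () => deriveFullName(firstName, lastName),
//     [firstName, lastName]
//   );

console.log(deriveFullName('Ada', 'Lovelace')); // "Ada Lovelace"
```

<p>Reaching for <code class="language-plaintext highlighter-rouge">useMemo</code> only after a measured performance problem keeps the common case simple: no state to keep in sync, and nothing for exhaustive-deps to complain about.</p>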
<p>The reason to avoid effects is not that they’re bad. There are two reasons. First, they are complex to use: that’s why the React team feels the exhaustive-deps rule is needed, and why we often feel that satisfying the rule will require a lot of work. Second, effects aren’t really designed for these scenarios. According to the React docs, what effects are for is <a href="https://react.dev/learn/synchronizing-with-effects">synchronizing with external systems</a>, and that’s all. So if you need to do something other than synchronize with an external system, and there’s a way to do it <em>other</em> than an effect, it’s best to do it that way. And when effects are the only way (as in the case of client-side data loading), it’s best for a library to implement it once, so that all the effort of handling edge cases and making it reliable happens in one place.</p>
<p>If you can’t avoid using an effect, design it in a way that will satisfy the exhaustive-deps rule. I wish I could give more helpful instructions on how to do this, but I haven’t been able to so far; it’s just a hard problem. I’ll share some general thoughts here, but I’d also recommend finding your favorite React trainer or instructor and checking out any resources they have on <code class="language-plaintext highlighter-rouge">useEffect</code>.</p>
<p><code class="language-plaintext highlighter-rouge">useEffect</code> is designed to synchronize with external systems based on the updated dependencies. Every time the dependencies change, your cleanup function (if present) runs to “tear down” the previous synchronization, and then the hook runs with the new deps to set up the <em>new</em> synchronization. React wants you to tell it “run this code for any given state of these dependencies.” So if an effect is “running too often,” that means you have some logic that you want to run for <em>some</em> states of the dependencies and not others. Well, we already have a way to run code sometimes and not others: the simple <code class="language-plaintext highlighter-rouge">if</code> statement. You can add conditionals to the effect’s function to check the values of the dependencies and only run a given bit of logic some of the time, not every time a dependency changes. In other words, it’s not that “the effect” is running too often; it’s that your bit of logic is, and you can use conditionals to get it to run only when you want. These conditionals can get complex, though; that’s part of the pushback.</p>
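<p>To make the “tear down, then set up” cycle concrete, here’s a simplified model of it in plain JavaScript. This is a sketch of the mental model, <em>not</em> React’s actual implementation, and the names (<code class="language-plaintext highlighter-rouge">status</code>, <code class="language-plaintext highlighter-rouge">roomId</code>) are made up:</p>

```javascript
// A simplified model of the effect lifecycle (NOT React's actual
// implementation): when the deps change, run the previous cleanup,
// then run the effect for the new dependency state.
function makeEffectRunner() {
  let prevDeps = null;
  let cleanup = null;

  return function run(effect, deps) {
    const changed =
      prevDeps === null || deps.some((dep, i) => !Object.is(dep, prevDeps[i]));
    if (!changed) return; // same deps: the effect is skipped entirely
    if (cleanup) cleanup(); // tear down the previous synchronization
    cleanup = effect() || null; // set up the new one
    prevDeps = deps;
  };
}

// Hypothetical usage: both status and roomId are listed as deps,
// but an `if` guard means the sync logic only runs when status is 'ready'.
const log = [];
const run = makeEffectRunner();
const effect = (status, roomId) => () => {
  if (status !== 'ready') return; // conditional inside the effect
  log.push(`sync ${roomId}`);
  return () => log.push(`close ${roomId}`);
};

run(effect('connecting', 1), ['connecting', 1]); // guard skips the logic
run(effect('ready', 1), ['ready', 1]); // logs "sync 1"
run(effect('ready', 2), ['ready', 2]); // logs "close 1", then "sync 2"
run(effect('ready', 2), ['ready', 2]); // deps unchanged: effect skipped
// log is now ['sync 1', 'close 1', 'sync 2']
```

<p>Listing every dependency and guarding with a conditional keeps the rule satisfied, while your bit of logic still runs only for the dependency states you care about.</p>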
<p>If you find you need an effect and you just don’t have time right now to design it to satisfy the exhaustive-deps rule, at least leave a <code class="language-plaintext highlighter-rouge">TODO</code> comment to remind you to do so in the future. It could be a good idea to also record what specifically breaks when you add in the extra dependencies.</p>
Thu, 30 May 2024 00:00:00 +0000
https://codingitwrong.com/2024/05/30/dont-sleep-on-exhaustive-dependencies.html
<h1>Four Essentials for an Effective Software Development Process</h1>
<p>Working in software consulting for nine years, I’ve gotten to work on teams with all kinds of different processes. Some have very heavyweight waterfall processes. Some have no processes at all. Some have tried scrum and moved to kanban, or tried kanban and moved to scrum. When I join a team I try not to bring too many preconceptions. There isn’t just one right process; any of these approaches can work for a team. (Except waterfall, obviously!)</p>
<p>Instead of recommending just one approach, over time I’ve seen a pattern of what are the most essential elements of a project process to work effectively together. These are the elements that, when absent, I feel the pain the most. And when they are introduced, they are the elements that result in the biggest improvement in delivering reliable software quickly.</p>
<p>All of these patterns address two different perceived problems: slow delivery and buggy delivery. We all want reliable features delivered quickly, and we’re frustrated when that doesn’t happen. When I ask clients what they’re looking for help with, they often ask “can we get features delivered more quickly?” or “can we have fewer bugs?” It’s tempting to address these issues by trying to go faster or trying to be more careful. But this reminds me of an illustration Kent Beck uses: trying to make a car go faster by pushing the speedometer needle higher. The speedometer is an indicator of speed, not a cause of it.</p>
<p>What does help with speed and reliability is this: work needs to be split into pieces that are <strong>visible, small, prioritized, and completed:</strong></p>
<ul>
<li><strong>Visible:</strong> visible to all team members in some kind of tracking system</li>
<li><strong>Small:</strong> split into the smallest user-facing chunks possible</li>
<li><strong>Prioritized:</strong> assigned to people as their main priority for the majority of their work week</li>
<li><strong>Completed:</strong> work should be brought all the way to completion before moving on</li>
</ul>
<p>Let’s talk about what each of these means.</p>
<h2 id="visible">Visible</h2>
<p>Work should be visible in some kind of system for tracking projects, stories, or tasks.</p>
<p>I’ve always used electronic systems for this. I know early Agile movement folks are often in favor of physical systems instead: cards or sticky notes on a board. I’ve had good experiences with electronic systems, and as I’ve often worked remotely an electronic system is a necessity. One of the advantages of physical systems, though, is that they prevent these tracking systems from getting overly complex.</p>
<p>The reason for tracking work in a system is for the visibility, so that everyone can see what is being worked on. This means keeping the <em>status</em> of the work visible as well: is it in progress? Blocked? Finished development but waiting to be tested? Tested but waiting to be deployed? A given team might have any number of different stages for the work; if a stage is important to the team, make sure that stage is visible in the system.</p>
<p>Some developers might bristle at this kind of tracking because of the fear that it will be used for micromanagement. That’s definitely a possibility, and that kind of micromanagement does cause problems! Adding a lot of overhead to individual tasks can slow a developer down. But there are <em>good</em> reasons for visibility of work. What are they?</p>
<ul>
<li>Avoiding duplicate work: if everyone is working on things that are not visible, multiple developers might be working on the same thing, or overlapping things.</li>
<li>Avoiding delays: if work is not visible, a unit of work might be done, but not everyone on the team might know that. For example, if something is finished being coded, it’s important for the person who will test it to know it’s in their court. Otherwise there can be a long delay between when the work is done and when the testing begins. This can cause delays finding and addressing problems, as well as making the fixes take longer because the original developer has lost context on what they were working on.</li>
<li>Avoiding getting stuck: any developer can get stuck. I like to think I can avoid it, but really I can get mired in a tricky story and not realize that I’ve been stuck for a while and could use help. When work is visible, others on the team can see what I’m working on and how long it’s taken. Ideally we trust each other and give each other time to work on things. But if someone seems stuck, we can offer to help, make suggestions, brainstorm whether things can be split into smaller pieces, or discuss whether we should try a different approach.</li>
<li>Getting a sense of progress: it’s important to emphasize that tracking work doesn’t mean you can perfectly predict how long a project will take. Estimates are never exact, and I’m not even suggesting <em>any</em> estimation or velocity tracking here. All I’m saying is that when your work is visible, the team can review it and get <em>some sense</em> of progress. Even without any estimation, it’s easy to tell the difference between three bugs to be fixed and three complex screens to be built. If you aren’t aware of what work remains to be done before a given release, that’s a lack of visibility.</li>
</ul>
<p>Some teams that prefer low process will say: instead of tracking that information in a system, we just ask the team what they’re working on, how it’s coming along, and how much is remaining. There are downsides to this approach, however. Asking me these questions puts the pressure on me, each time, to remember everything that’s remaining without any omissions. If I do forget part of it, I’ve given a misleadingly optimistic response, and worrying about doing that causes stress. If it’s valuable to have this information, track it in a system. Systems are better at keeping track of information than humans, and anyone can see the state of the work at any time without taking up others’ time. If there are obstacles to entering that data and keeping it up-to-date, the right answer isn’t to give up on using systems, but to work through those obstacles.</p>
<p>Now, a system doesn’t take the place of a daily standup; I recommend a daily standup <em>alongside</em> a system for tracking work. Specifically, I recommend that the focus of the standup is precisely to <em>review the system</em>. I recommend looking at the system together (on a screen share virtually, or on a large monitor if in person). Each person points out what they are working on, and if it’s out-of-date, they update it right then so it <em>becomes</em> up-to-date. Even though I value tracking systems, I often forget to update the status of an item in it; a standup is good accountability.</p>
<p>Visible work helps with “bugs”, because if what you think of as a bug was not actually recorded as functionality that needed to be built, that’s an issue of visibility. Track the work and then you should see fewer such “bugs.”</p>
<p>Visible work helps with speed of delivery, because if not everything being worked on is visible, then effort is going towards things you don’t know about. You’re not able to readjust team priorities.</p>
<h2 id="small">Small</h2>
<p>The units of work that are tracked should be small. <em>Very</em> small. Probably smaller than you think.</p>
<p>To start, try to split work into the smallest chunks of user-visible work that you can. User-visible means that you don’t work on building out the whole database access layer in a way that isn’t visible to users. Work in vertical slices. If you’re adding a new type of data to the system, maybe the first story you work on is a list screen. Don’t build the detail screen at the same time, or the ability to add, edit, or delete records. Just build the list and get it displaying the data. Viewing details, adding, editing, and deleting can all be separate additional stories.</p>
<p>A lot of the time, we can resist splitting up work into smaller chunks because it’s not clear how to split it up, and splitting it up adds overhead.</p>
<p>But there are a few different reasons to split work up into small pieces. First, there are a number of reasons it helps with speed of delivery:</p>
<ul>
<li>You get <em>something</em> working sooner, rather than nothing until the very end.</li>
<li>A small unit of work keeps you focused, so that you bring it to completion. Otherwise you might be making progress on multiple parts of a large unit of work, without getting any of them to the point of being useful.</li>
<li>It’s easier to keep a small unit of work in your head. That way you can understand it more deeply, so you’ll be more effective at completing it.</li>
<li>A small unit of work gets feedback sooner, because it can be delivered to someone to test it. When we think of “speed” we probably don’t mean we want non-working code faster. If we want something <em>working</em> faster, then getting feedback sooner is important.</li>
<li>Splitting up a large chunk of work into smaller pieces can often lead to parallelization. Even if just two developers can work within that large area, that means that the overall large chunk will be finished twice as fast. And you can often parallelize more than two. (The practice of “swarming” is related to this.)</li>
<li>It’s motivating to be able to bring a small chunk of work to completion, and that motivation helps with speed.</li>
<li>Once you commit work to the main branch (rather than keeping it local or on a feature branch), you no longer have to repeatedly resolve merge conflicts against it. When you work in smaller units, they can be committed to the main branch sooner, avoiding that merge-conflict cost and further speeding you up.</li>
</ul>
<p>Small units of work help with “bugs” as well. Thinking through splitting work into small chunks can help you think through all the different edge cases that need to be handled, so that they’re handled from the start instead of being reported as bugs.</p>
<p>One more benefit of small units of work is flexibility. When a developer is working on a large unit of work, something higher-priority may come up. You’d like to pause the in-progress work for that higher priority, but you know pausing it will incur overhead. So you ask how soon the developer will be done, and they feel like they should say it will be soon, even if it won’t be. You’re now in a lose-lose situation. But if you’re working in small chunks, a developer can switch priorities as soon as one small chunk is complete. Even if that small chunk doesn’t add up to a fully-complete large feature, the progress that has been made is complete and tested.</p>
<p>One of the best resources I’ve run across on the value of smaller steps is this talk by “GeePaw” Hill with an ironically large title: <a href="https://youtu.be/1mOs1_pvS9A">“Want More Value Faster? Take Many More Much Smaller Steps”</a></p>
<h2 id="prioritized">Prioritized</h2>
<p>Work assigned to people should be their main priority for the majority of their work week.</p>
<p>Ideally, developers should be fully allocated to working on one development task. They shouldn’t have other responsibilities outside of development, such as support for other systems. They shouldn’t have development responsibilities across two different projects. Either of these takes time away from the forward development of the system.</p>
<p>For similar reasons, it’s also best for developers not to be actively working on two different stories within the <em>same</em> project either. (Kanban follows this via the concept of work-in-progress (WIP) limits.) If you’re using a pull-request-based workflow like many developers today, it can seem impossible to work on just one story at a time: what do you do when your pull request is up and waiting for review? First, work on all the other things you can do that <em>aren’t</em> starting another story (the things that it’s tempting to delay when you’re heads-down in the code):</p>
<ul>
<li>Review <em>others’</em> pull requests to help <em>them</em> get unblocked for their next story.</li>
<li>Look over your own PR to ensure you’ve covered all the edge cases.</li>
<li>Research your next potential story to understand it, ask any clarification questions, and begin thinking through an approach <em>without</em> starting coding.</li>
<li>Catch up on emails and messages, other responsibilities you have that aren’t coding.</li>
</ul>
<p>If you’ve done all of this and <em>still</em> don’t have pull request feedback, as a team you may decide that it’s okay to pick up a second story—but if you do, you should consider it fully interruptible. Responding to pull request feedback takes priority over working on the second story, no matter how much overhead it causes to stop and start it. (See Completed below.)</p>
<p>What if the team <em>needs</em> you to work on something other than the story you have assigned? Sometimes it happens. This is another good reason to work on small chunks: this way you may be able to get it finished quickly before you work on this other responsibility. If you haven’t started coding, I’d recommend unassigning the story from yourself and letting the team know it’s available to be picked up by someone else; that way it’s not blocked on you. If you’re in the middle of coding it, things are trickier. It may be tempting to keep it assigned to you, but this means it will be blocked on you, as well as any other work that it blocks. It can be better to put your work-in-progress code somewhere that anyone can get back to in the future (for example, a git branch on the origin server, or a draft pull request), then make a note of the state of the story in the story itself, including a link to the in-progress code. (This is one good reason for using an electronic system that allows comments).</p>
<p>How do we fit in work to handle urgent bugs and other production issues? If you have several team members, a common practice is a rotation where, at any given time, one team member is assigned to support. A rotation means support isn’t stuck with just one person, or left for whoever is most stressed by the issues to pick them up. If you’re using a sprint-based approach, you can rotate which developer is on support each sprint. When a person is on support rotation, <em>do not</em> assign them any feature work. Even if you say “support is your main priority and this feature comes second,” it is very hard to resist the temptation to focus first on the feature stories and let support slip.</p>
<p>What can the support developer do if there are no bugs or issues? They can look through error logs for anything that may indicate hidden issues, investigate them, write up findings, or research possible fixes. They can also take on optional, non-urgent “tech debt” cleanup tasks, but it’s important for them to treat any support needed as the higher priority.</p>
<p>For teams and organizations that are very small, you may not have the luxury of having a developer fully allocated to one system. If not, splitting the developer’s attention across multiple things may be unavoidable. Just be aware of the impact it will have on speed.</p>
<h2 id="completed">Completed</h2>
<p>A unit of work should be brought all the way to completion before moving on to another unit of work.</p>
<p>When a developer has a story assigned, they’re responsible for moving it all the way to done: getting it reviewed, merged, and recorded in the tracker in a state that lets the next person know it’s ready for testing.</p>
<p>In a pull-request-based workflow, it can be easy to lose attention once you’ve opened a pull request and asked for reviews, especially if there’s a significant delay on your team between when a pull request is opened and when it gets reviews. It’s the responsibility of the author of the pull request to ask for reviews, follow up if they haven’t received them, actively check for feedback and respond to it with discussion or code changes, and ask the reviewers for re-review after updates—whatever is needed to get the pull request to the point that it’s approved and merged.</p>
<p>If this flow isn’t going efficiently, think about what you can do to make it more efficient. Can GitHub notifications be set up to automatically inform people via email and the GitHub notifications button that their review is needed? Would a Slack integration help to automatically post new PRs? Should team members just manually ask for reviews in Slack?</p>
<p>If you have moved on to something else, including potentially coding another story, remember that finishing your first story is the highest priority. When pull request feedback comes in, stop what you’re doing and act on it. This will likely feel like an interruption, because it is. But ultimately it doesn’t help the team to have multiple in-progress stories. Your goal is to get stories finished, so it’s best to prioritize whatever is needed to get the <em>first</em> story finished. That inefficiency is worth it to that end.</p>
<p>Once a pull request is merged and the code is ready for testing, the developer should update the state of the story in the tracking tool to let the person who needs to test it know that it’s ready for them.</p>
<p>Another thing that can get in the way of completing stories is not coding everything that’s needed. Maybe a developer hesitates to open up a pull request because the last 10% of the functionality is tricky. It can be tempting to respond to this in a few different ways.</p>
<ul>
<li>You might want to move on to another story, because it feels more productive. It may feel that way, but we need to finish the work eventually, so it’s best to stay on one story and bring it to completion.</li>
<li>You might want to mark the story as complete even though 10% of the coding is remaining. This won’t help because that work is no longer visible, and it might not be discovered until it’s an emergency, resulting in nights-and-weekends work, a delayed release, or issues in production.</li>
</ul>
<p>Instead, if part of the functionality will take a lot more effort, consider splitting that functionality out into a separate story in the tracker. Be sure to update the first story to make it clear what is <em>not</em> included in it. This way, you can land the earlier work in a way that’s visible, and you have a clear indication of the work that’s remaining. It’s a way of discovering that a story could be smaller than it initially was, and gets you all the benefits of small stories we discussed above.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Think through how your team is operating now. Does it feel like things are going slower than you like, or that the code is less reliable than you’d like? If so, think about the four elements we’ve discussed:</p>
<ul>
<li>Is all of your work visible to the team in a system, and kept up-to-date there?</li>
<li>Is it split into the smallest units of user-visible work it can be?</li>
<li>Are developers empowered to put the majority of their effort into a single unit of work?</li>
<li>Are developers bringing the work all the way to completion?</li>
</ul>
<p>If the answer isn’t a clear “yes” for all of them, consider spending some time on it. I think you’ll find it has major payoffs in speed and reliability.</p>
<p>If you’d like to learn more about these dynamics, I recommend checking out <a href="https://pragprog.com/titles/rjnsd/the-nature-of-software-development/"><em>The Nature of Software Development</em></a> by Ron Jeffries. It’s a methodology-agnostic look at the dynamics of software development and how we can tailor our approach to be more effective. (This isn’t an affiliate link, so feel free to get the book wherever you like; I’ve linked to it at The Pragmatic Bookshelf as they offer an option for DRM-free ebooks.)</p>
Thu, 18 Apr 2024 00:00:00 +0000
https://codingitwrong.com/2024/04/18/four-essentials-for-effective-software-development-process.html
<h1>Simple Design: Passes the Tests</h1>
<p><em>This is the fourth post in a <a href="/2024/03/06/simple-design-reveals-intention.html">series about the Four Rules of Simple Design</a>.</em></p>
<p>The Rule of Simple Design I’m discussing last is actually the first one listed in order: “passes the tests.”</p>
<p>I don’t save it for last because it’s the lowest priority for me. On the contrary, having good tests is such a high priority that I would be more likely to bring it up with a team than the rest of Simple Design. And there’s so much that could be said about the importance of testing that one post only scratches the surface.</p>
<p>At the same time, I’m glad Kent Beck included it in the list of Simple Design rules instead of assuming it. This way, you can’t talk about the Rules without mentioning testing as part of it.</p>
<p>I could talk at length about the general benefits of testing for catching bugs, positively influencing design, communicating intent, fostering conversations with the business, and letting you know when you’re done with a unit of work. But instead, in this post I want to talk about one specific benefit: <strong>comprehensive tests are necessary to enable Simple Design.</strong> In this series I’ve presented the benefits of the other three Rules of Simple Design. But unless you have a suite of comprehensive tests, you’ll be severely limited in your ability to get the benefit of those three Rules.</p>
<p>And in all three cases, the reason the tests are needed is to support change. Let’s talk about why.</p>
<h2 id="how-tests-enable-simple-design">How tests enable Simple Design</h2>
<p>When you first write code, even without tests you can focus on making it Reveal Intention as much as possible. But once you receive feedback that someone who <em>didn’t</em> write the code is having trouble understanding your intention, you’ll be hesitant to make changes if they risk breaking the program. Revealing Intention doesn’t matter if the program doesn’t work, so how important is it really if the names aren’t as clear as possible, if there are a few nonessential differences, if there are some large pieces in the code, or if the public interface is broader than needed? And as the code changes, a previously-small function will get large, a previously-accurate name will no longer cover the scope of what the function is used for, or something in the public interface will no longer need to be there. Is it worth breaking the program just to clean those up? Static typing and IDE support might provide safety for some of those changes, but not all of them.</p>
<p>Also, when you first write code, even without tests you can try to avoid duplicating business rules. You don’t put the same boolean logic in two different places; you put it somewhere central. But what about when you discover that the same business rule was mistakenly implemented in two places? Or that something you thought of as just an incidental bit of logic turns out to be a broadly-shared business rule? Or that two things that seemed to be separate business rules are actually related? Minimizing duplication doesn’t matter if the program is broken, so how big of a problem is it really if there is a little bit of duplication?</p>
<p>Finally, when you first write code, even without tests you can focus on writing the fewest elements. You remove things that aren’t needed and you write things in the simplest possible way. But what about as the code changes, when code that used to be needed is no longer used, when a once-necessary return value is no longer necessary, when something that previously required a low-level algorithm turns into something that a simple call to <code class="language-plaintext highlighter-rouge">map</code> will handle? Using the fewest elements doesn’t matter if the program doesn’t work, so how important is it really if there are some unnecessary elements in the program?</p>
<p>In all these cases, we can see that when there is a significant risk of changes to the code breaking the functionality of the code, this pits the Rules of Simple Design against program functionality. You have to choose: you can change the code to follow the Rules of Simple Design and get all its benefits, but you risk breaking functionality for your users; or, you can protect the functionality for your users, but miss out on the benefits of the Rules of Simple Design.</p>
<h2 id="cutting-the-gordian-knot">Cutting the Gordian knot</h2>
<p>Now, it may seem like the Rules of Simple Design are optional, that your program’s functionality is the thing that matters, and that in the long run it’s no big deal if you’re not following the rules of Simple Design. But that hasn’t been my experience. Over time, if your code does not reveal intention, if there is significant duplication, and if there are unnecessary elements, the code gets harder and harder to understand. It takes longer to make changes, the changes take more effort to make, and they are more likely to cause unexpected breakage in the software. So refusing to make changes to your code can prevent bugs in the short run, but in the long run it leads to more bugs, higher cost, and the inability to adjust your software to users’ changing needs.</p>
<p>But with a suite of comprehensive tests, you don’t have to choose between the Rules of Simple Design and program functionality: you can have both. You can keep your program working now, and continue to keep the code Simple so it works and is adjustable in the long term. If you make a change that breaks something in your program, you can trust the tests to tell you. And if the tests <em>don’t</em> tell you, your conclusion is not “we need to stop improving the code;” instead, your conclusion is “ah, there was a hole in our test coverage, let’s fix it and learn from it so that similar bugs won’t get through in the future.”</p>
<p>This is not to say that testing is easy; far from it. It takes a lot of work to learn not only the mechanics of using testing tools, but also the principles of how to structure your code and tests to get a high level of safety for a low maintenance cost. Testing has been a primary professional focus of mine for ten years, and I still run into challenging testing situations all the time. Testing is hard work.</p>
<p>What I’m arguing is that testing is <em>worth</em> that hard work. In addition to all the other benefits that comprehensive tests provide, they also transform the tradeoffs between short-term quality and Simple Design. You no longer have to make the impossible choice between whether to keep your code working now or keep it sustainable for the future. You can have both, fully.</p>
<p>If you’ve been applying the other three Rules of Simple Design but you wouldn’t say your tests are comprehensive, I’m not saying that you haven’t gotten any benefits of Simple Design. You almost certainly have! You can do your best to apply the rules when you first write the code, and when you need to change the code you can apply the Rules when the upside of doing so clearly offsets the risk of making changes. The question I would ask is: does it feel like you’re keeping up? Or does it feel like the code is deteriorating over time? That despite your best efforts the intention is getting more obscured, duplication is proliferating, and there are more and more unnecessary elements in the code?</p>
<p>That deterioration of the code is not inevitable. If you focus on writing comprehensive tests, you’re freed up to take your Simple Design as far as possible, and get all the benefits it provides.</p>
Mon, 11 Mar 2024 00:00:00 +0000
https://codingitwrong.com/2024/03/11/simple-design-passes-the-tests.html
https://codingitwrong.com/2024/03/11/simple-design-passes-the-tests.htmlSimple Design: Fewest Elements<p><em>This is the third post in a <a href="/2024/03/06/simple-design-reveals-intention.html">series about the Four Rules of Simple Design</a>.</em></p>
<p>“Fewest elements” means that if there is a way to accomplish the same functionality with fewer programming elements, prefer the option with fewer.</p>
<h2 id="many-examples-of-fewest-elements">Many examples of fewest elements</h2>
<p>Examples of unnecessary elements you can remove include:</p>
<p><strong>Unreachable code.</strong> Most modern programming languages have code linters that can alert you to unreachable code. Leaving in unreachable code is confusing to the reader, because it suggests that it’s needed, but it’s not. You could put anything in there that compiles and it wouldn’t affect the behavior of the program.</p>
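<p>As a sketch (the function and values here are invented), the code after the final <code class="language-plaintext highlighter-rouge">return</code> below can never run, so removing it cannot change the program’s behavior; a linter such as ESLint’s <code class="language-plaintext highlighter-rouge">no-unreachable</code> rule would flag it:</p>

```javascript
// Hypothetical example: everything after the final return is unreachable.
function discountFor(total) {
  if (total >= 100) {
    return 0.1;
  }
  return 0;
  // Unreachable: execution never gets past the return above, so this
  // line could say anything that compiles without affecting the program.
  console.log("this never runs");
}
```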
<p><strong>Commented-out code.</strong> Sometimes developers will leave in commented-out code just because they didn’t think to delete it when they finished their unit of work. Delete it now. If you leave it, a future developer will have more uncertainty than you about whether it is needed for something, and it will stay in the codebase forever. Since you’re using version control, you can always get deleted code back. (If you aren’t sure how to use your version control tool to get deleted code back, delete this code and then take this opportunity to learn; it’s a good skill to have!) Sometimes developers will comment out a unit test that they know they need but that is currently failing; they can’t fix it yet but plan to soon. That may be a reason to leave in a commented-out test with a <code class="language-plaintext highlighter-rouge">TODO</code> comment. But it’s even better if the test tool has a way to skip a test or to mark that it’s expected to fail—for example, <a href="https://jestjs.io/docs/api#testskipname-fn">Jest’s <code class="language-plaintext highlighter-rouge">.skip()</code></a> or <a href="https://rspec.info/features/3-13/rspec-core/pending-and-skipped-examples/">RSpec’s pending and skipped features</a>. These are better than commenting out the code because the test runner will report them as a reminder that they exist.</p>
<p><strong>Code that is not currently used.</strong> If there are functions or branches in your codebase that aren’t currently being called by the running app, delete them. If you need them later, remember that you can get deleted code back from version control. Keeping them in the codebase incurs an ongoing cost: the cost of reading over them, the risk that they don’t actually work right (since nothing calls them to verify that they do), and the cost of changing them when you do other refactorings in the code.</p>
<p><strong>Comments that don’t add value.</strong> As mentioned in <a href="/2024/03/06/simple-design-reveals-intention.html">Reveals Intention</a>, if the code can be updated to communicate the same thing a comment does, update the code and then remove the comment. And if you have comments that just duplicate what the code already says, delete them—unless they’re used to generate API documentation that’s <em>actually</em> used by somebody.</p>
<p><strong>Unnecessary low-level code.</strong> If a higher-level abstraction exists and works for your purposes, use it instead of reimplementing it. If you’re writing a loop by hand that is the equivalent of a <code class="language-plaintext highlighter-rouge">map</code> or <code class="language-plaintext highlighter-rouge">filter</code> function, that’s unnecessary complexity in your code; just use the <code class="language-plaintext highlighter-rouge">map</code> or <code class="language-plaintext highlighter-rouge">filter</code> function directly.</p>
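<p>For example (a minimal sketch with made-up data), the hand-written loop and the <code class="language-plaintext highlighter-rouge">map</code> call below produce the same result, but the <code class="language-plaintext highlighter-rouge">map</code> version states the intent directly:</p>

```javascript
const prices = [5, 12, 8]; // made-up example data

// Hand-written loop: the reader must check the index handling and push logic.
const doubledLoop = [];
for (let i = 0; i < prices.length; i++) {
  doubledLoop.push(prices[i] * 2);
}

// Equivalent with map: "transform each element" is immediate.
const doubled = prices.map((price) => price * 2);
```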
<p><strong>Unused abstractions and indirection.</strong> Some design approaches involve thinking in advance about the ways a system will need to change, and adding in an inheritance hierarchy or configuration points to support all of them. The problem is that programmers are notoriously bad fortune tellers; we can’t know the future. When a codebase contains abstractions and indirection that aren’t used, they make the codebase harder to understand and harder to change. They only provide benefit once they begin being used—if in fact they ever are. It’s better to remove abstractions and indirection that aren’t currently being used. Instead, to prepare for the future, set up your code to be easy to change so that you can add in just the bits of flexibility that are needed <em>when</em> they’re needed.</p>
<p><strong>Unneeded syntax elements.</strong> This one may not be very clear, so let me give a variety of examples: a variable you assign a value to but never read. A return value that isn’t used. Calling a function that returns a value you ignore, instead of calling an equivalent function that doesn’t return a value (<code class="language-plaintext highlighter-rouge">map</code> instead of <code class="language-plaintext highlighter-rouge">forEach</code>). A catch block that simply re-throws the error. In JavaScript, <code class="language-plaintext highlighter-rouge">await</code>ing the result of a promise and then returning the result unchanged (since an <code class="language-plaintext highlighter-rouge">async</code> function returns a promise, this is the same as if you just returned the promise directly). In all these cases the program functions exactly the same without the element I mentioned. If this happens, you should remove that unnecessary element.</p>
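<p>The <code class="language-plaintext highlighter-rouge">await</code> case can be sketched like this (with a hypothetical <code class="language-plaintext highlighter-rouge">fetchUser</code> standing in for any promise-returning call); both functions below behave identically:</p>

```javascript
// Hypothetical promise-returning call used for illustration.
async function fetchUser(id) {
  return { id, name: "Ada" };
}

// Awaiting and then returning the result unchanged...
async function getUserVerbose(id) {
  const user = await fetchUser(id);
  return user;
}

// ...is equivalent to returning the promise directly.
function getUser(id) {
  return fetchUser(id);
}
```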
<h2 id="fewer-elements-fewer-problems">Fewer elements, fewer problems</h2>
<p>If you apply the Fewest Elements rule in this way, the result is that every syntax element in your program has to justify its existence. If it is removed, something in the program should break, and a test should report that.</p>
<p>Why does Fewest Elements matter? Is it just an aesthetic preference? No. Keeping unnecessary elements in the codebase incurs an ongoing cost: the cost of reading over them, the cost of intent being less clearly revealed, the cost of having more code to change when you do other refactorings.</p>
<p>For this rule, there is an extreme you can take it to: <a href="https://en.wikipedia.org/wiki/Code_golf">code golf</a>, where you attempt to shorten some code to the fewest possible characters. Like real golf, code golf can be fun recreationally, but it’s not a good idea to do it in an office. Fewest Elements is not quite the same as code golf because the latter focuses on fewest <em>characters</em>, not elements. The reason not to go so far as focusing on minimizing characters is that it can hinder Revealing Intent. Code that has been golfed works, but it is often much harder to understand (see examples from the <a href="https://www.ioccc.org/">International Obfuscated C Code Contest</a>). The goal of Fewest Elements is to remove elements that don’t add value, not to remove elements that add clarity. For example, you might have a complex calculation used in only one place, but instead of using it directly, you assign it to an explaining variable to give it a descriptive name. That variable is not functionally necessary, but it is valuable to reveal intent.</p>
<p>While I’ve contrasted a beneficial focus on Fewest Elements and detrimental code golfing, that is not to suggest that the difference will always be clear. There are tensions around the rules of Reveals Intention, No Duplication, and Fewest Elements, and you’ll need to make judgment calls for yourself or among your team about what is best in a given situation. In cases like this, these rules give you a way to see some of the tradeoffs so you can make an informed decision.</p>
Fri, 08 Mar 2024 00:00:00 +0000
https://codingitwrong.com/2024/03/08/simple-design-fewest-elements.html
https://codingitwrong.com/2024/03/08/simple-design-fewest-elements.htmlSimple Design: No* Duplication<p><em>This is the second post in a <a href="/2024/03/06/simple-design-reveals-intention.html">series about the Four Rules of Simple Design</a>.</em></p>
<p>“No duplication” means that when a domain concept is represented in two or more places in the code, you adjust the code so that that domain concept is represented in just one place. This usually involves putting shared data or logic in a single shared place so they can be accessed from wherever they’re needed.</p>
<h2 id="why-no-duplication">Why no duplication?</h2>
<p>Consider a user account record. Say there are multiple conditions that determine whether the account is considered “active:” the user needs to have accepted the latest terms and conditions, and their subscription payment needs to be up-to-date—unless they created their account during the beta period, in which case their account is free. If a user’s account is not active, they aren’t locked out of the application entirely, but there are a number of features throughout the app that they can’t see or are read-only instead of interactive. An example of duplication would be if at each of these places there was a separate calculation that checks if the account is active. It would be better to have a single function or method that can tell us if a given user is active, and that function or method is called everywhere that information is needed.</p>
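<p>Sketched in code (the field names here are assumptions for illustration, not from a real system), the shared check might look like this:</p>

```javascript
// Single place where the "active" business rule lives.
// Field names (createdDuringBeta, etc.) are invented for illustration.
function isActive(user) {
  // Beta-era accounts are free, so they skip the terms and payment checks.
  if (user.createdDuringBeta) {
    return true;
  }
  return user.acceptedLatestTerms && user.paymentUpToDate;
}
```

<p>Every feature that needs to know whether an account is active calls <code class="language-plaintext highlighter-rouge">isActive(user)</code> instead of repeating the conditions.</p>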
<p>Why does removing the duplication of multiple active checks matter? There are a number of benefits. It makes the code easier to understand because it operates at a single level of abstraction: instead of reading the details of that calculation at each place it’s needed, we just see that we’re asking if the user is active. It prevents errors where a developer might not implement the logic the same way in each place. It reduces testing burden because you can test the function against all the factors that affect being active in one place. Then, when testing each place that the function is called, you don’t have to test all the factors, you can just check one case where the user is active and one where they are inactive.</p>
<p>Those are all important reasons, but there’s one I think is the most important of all: supporting change. If your code is never going to change, having duplicate logic to check for active isn’t too big a deal. But consider if a new condition is added someday—I dunno, say instead of paying cash, users can donate NFTs they don’t want anymore. If the logic to check active users is duplicated, you need to update everywhere that logic is run to add that condition in. If you miss a case, the application won’t work consistently. That’s a big impediment to changing the app, and can result in you either avoiding change or the app becoming unstable as you make changes. That, I think, is the ultimate reason to avoid duplication of domain concepts: to enable change.</p>
<h2 id="an-invalid-counterexample">An invalid counterexample</h2>
<p>The absolute of “no” duplication is quite extreme, and objections may immediately rise in your mind. It’s been asked before whether multiple <code class="language-plaintext highlighter-rouge">if</code> statements constitute duplication, and whether this rule means they should be replaced by a function call.</p>
<p>I have a few responses to that. First, this is why I used the term “domain concept” above. <code class="language-plaintext highlighter-rouge">if</code> statements aren’t a domain concept, they’re a programming construct, and of course it’s fine to use the same programming construct in multiple places. A domain concept is something that belongs to the problem domain of the application, something that might change in the future based on business rules or laws or user preferences. Another way I’ve heard this said is “no duplication of <em>ideas</em>.”</p>
<p>Second, imagine you actually did write a function that took a condition and a function and only executed the function if the condition were true. That doesn’t gain us anything over the <code class="language-plaintext highlighter-rouge">if</code> statement. It’s no shorter to write. It won’t save us any work when something changes, because whereas business rules change all the time, I can’t imagine programming <code class="language-plaintext highlighter-rouge">if</code> constructs changing. And it will be less familiar to readers of the code than <code class="language-plaintext highlighter-rouge">if</code> statements.</p>
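<p>To make that concrete, here is what such a wrapper would look like (a deliberately pointless sketch):</p>

```javascript
// The hypothetical wrapper: it executes fn only when condition is true.
function callIf(condition, fn) {
  if (condition) {
    fn();
  }
}

const log = [];

// The wrapper version...
callIf(2 > 1, () => log.push("ran"));

// ...buys nothing over the plain if statement, which is just as short
// and far more familiar to readers.
if (2 > 1) {
  log.push("ran");
}
```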
<h2 id="a-valid-counterexample">A valid counterexample</h2>
<p>I do know of at least one important qualification to “no duplication,” however. Sandi Metz, a prominent software design teacher, has said:</p>
<blockquote>
<p>Duplication is far cheaper than the wrong abstraction… Prefer duplication over the wrong abstraction.</p>
</blockquote>
<p>What does “the wrong abstraction” mean? Her <a href="https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction">blog post about the wrong abstraction</a> explains in detail. But in short the point is that if you remove duplication, it can turn out that those two things were <em>not</em> actually the same, and the way you find that out is when they later have to change differently.</p>
<p>There aren’t any foolproof answers for finding out what the right or wrong abstractions are. One heuristic is to think in terms of the business domain: is this duplication based on a common business rule? Another, less abstract, is the “rule of three:” don’t remove duplication when there are two instances but wait until there are three, to give you greater confidence that it really is duplication.</p>
<p>Regardless of what heuristics you apply, if you ever prefer duplication over the wrong abstraction, clearly you aren’t following the “no duplication” rule to the letter. So this rule has qualifications. Whereas I wouldn’t say “sometimes you should make sure to have code that doesn’t communicate intent,” sometimes you <em>should</em> have duplication in the code.</p>
Thu, 07 Mar 2024 00:00:00 +0000
https://codingitwrong.com/2024/03/07/simple-design-no-duplication.html
https://codingitwrong.com/2024/03/07/simple-design-no-duplication.htmlSimple Design: Reveals Intention<p><em>This is the first post in a series about the Four Rules of Simple Design. Click a link in the list below to navigate to a post about the linked rule.</em></p>
<p>In the first edition of the book <em>Extreme Programming Explained</em>, Kent Beck introduced four rules of simple design. Martin Fowler has a succinct <a href="https://www.martinfowler.com/bliki/BeckDesignRules.html">summary</a> of them:</p>
<blockquote>
<ul>
<li><a href="/2024/03/11/simple-design-passes-the-tests.html">Passes the tests</a></li>
<li>Reveals intention <em>(this post)</em></li>
<li><a href="/2024/03/07/simple-design-no-duplication.html">No duplication</a></li>
<li><a href="/2024/03/08/simple-design-fewest-elements.html">Fewest elements</a></li>
</ul>
</blockquote>
<p>Let’s discuss all of them in a series of posts, skipping over “passes the tests” for the moment; we’ll look at it at the end in light of the other rules. In this first post we’ll look at “reveals intention”.</p>
<h2 id="why-reveal-intention">Why reveal intention</h2>
<p>The rule “reveal intention” indicates that code should help make it clear to the reader of the code what it’s intended to do.</p>
<p>Harold Abelson, the author of <em>Structure and Interpretation of Computer Programs</em>, famously wrote “programs must be written for people to read, and only incidentally for machines to execute.” This might or might not be a dramatic exaggeration, but it counteracts the common assumption programmers can tend to make, which is “programs are written for machines to execute, and <em>only incidentally for people to read</em>.”</p>
<p>Why does it matter how readable a program is by people? When a programmer is first writing the code, presumably they understand it at the time (although this is not always the case!). But when another programmer needs to make changes to that code, they might have a much harder time understanding it because they don’t have the same context as the original programmer. And sometimes the other programmer is the author in six months, when they’ve forgotten what they were thinking at the time.</p>
<p>Difficult-to-understand code causes a lot of problems in development: it takes more time to get ready to make changes to it, it’s harder to figure out what change will accomplish your current goal, and it’s more likely you’ll break something unintentionally.</p>
<h2 id="how-to-reveal-intention">How to reveal intention</h2>
<p>So what contributes to code revealing intention? Here are some of the most common ways I’ve found it’s helpful to do so.</p>
<p><strong>Format the code</strong> in a consistent way, ideally with an automatic formatter if there is one available for the language. Really bad formatting can lead to missing things, but even slightly different formatting throughout a project slows down the reader.</p>
<p><strong>Avoid nonessential differences.</strong> If two bits of code do the same thing, they should do it in the same way. If two bits of code do something in a different way, there should be a reason for that difference. Differences take cognitive effort to process, and so they should always be used for a reason. Doing the same thing in the same way helps patterns emerge that can help with removing duplication, the next rule of simple design.</p>
<p><strong>Choose names</strong> for clarity. Avoid abbreviations unless they’re well-known concepts in general or at the company (but remember that new people do join the company who don’t know the acronyms). Avoid single-letter variable names except in contexts where it’s very clear what they mean: maybe loop variables and single-line closure functions passed into another function. Make sure variable names aren’t too general (<code class="language-plaintext highlighter-rouge">count</code> may often be) or too specific (if a button is used for both create and editing, don’t call it <code class="language-plaintext highlighter-rouge">createButton</code>). Remember that you don’t need to name a function argument the same as the variable you’re passing into it—for example, if you’re using a button to submit a form, you might pass a <code class="language-plaintext highlighter-rouge">handleSubmit</code> function to it. But if that button is used for other uses in addition to submitting forms, it doesn’t know or care whether it’s being used for submitting a form or not, so don’t name that argument <code class="language-plaintext highlighter-rouge">handleSubmit</code> or <code class="language-plaintext highlighter-rouge">onSubmit</code>—name it <code class="language-plaintext highlighter-rouge">onClick</code>, because that is what the button knows about.</p>
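<p>Here is a framework-free sketch of that naming point (the <code class="language-plaintext highlighter-rouge">Button</code> below is a plain function, not a real React component): the button only knows it was clicked, so its parameter is named <code class="language-plaintext highlighter-rouge">onClick</code>, and the caller maps its own concept (submitting) onto the button’s concept (clicking):</p>

```javascript
// The button's vocabulary: it knows about clicks, not forms.
function Button({ label, onClick }) {
  return { label, click: onClick };
}

// The caller's vocabulary: it knows this particular click means "submit".
let submitted = false;
const handleSubmit = () => {
  submitted = true;
};
const saveButton = Button({ label: "Save", onClick: handleSubmit });
saveButton.click();
```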
<p><strong>Instead of adding comments, make the code more clear</strong> when you can. This isn’t an absolute rule (“never use comments”), it’s a priority: add comments only when you’ve made the code as clear as you can, and there is still more to write. The reason to do this is that comments can be ignored and the code can drift so that it no longer matches what the comments say, which is at best time-wasting and at worst can lead to misunderstandings and bugs. If a comment clarifies the purpose of a function or variable, see if you can rename the function or variable to make it clear on its own. Requirements that every function and property on an object must have a comment can sound good on paper, but when the functions and properties are named well they often devolve into a repeat of the function/property name. Don’t be embarrassed about long function and variable names—clarity is the goal, and if a forty-letter-long variable name makes it clearer, name it that way. React has a few great examples of this: the names of its <code class="language-plaintext highlighter-rouge">dangerouslySetInnerHTML</code> and <code class="language-plaintext highlighter-rouge">UNSAFE_componentWillReceiveProps</code> APIs make sure that any developer reading them has a warning that these should be used carefully, and often lead to good conversations during code review that catch potential issues. One case where comments can be the best option is when you need to record <em>why</em> code is written a certain way, which is often hard to make clear in the code itself.</p>
<p><strong>Splitting code into smaller pieces</strong> (smaller functions, classes, and files) is another way to help reveal the intention of the code. If you have a 500-line-long function with lots of loops and conditionals, you have to read the whole function and keep it in your head to understand what it’s doing. Sometimes developers will put a comment at the top of different sections in the function to describe what each is doing. Instead, for each section, create a function whose name matches the comment you would use to describe that section, move the code for that section into that function, and call that function instead. That way, the reader can read that function to get a high-level overview of what it does, and can dig into individual step functions only when they want the details for that step. If you are concerned about extra function calls hindering the performance of your program, the increased understandability of the code is probably more important than any minuscule performance hit unless you’re working on embedded systems.</p>
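<p>A small sketch of that extraction (the step names and the order shape are invented): instead of one long function with section comments, each section becomes a named function, and the top-level function reads as an outline:</p>

```javascript
// Each function name replaces what would have been a section comment.
function normalizeEmail(order) {
  return { ...order, email: order.email.trim().toLowerCase() };
}

function validateOrder(order) {
  if (!order.email.includes("@")) {
    throw new Error("invalid email");
  }
}

function orderTotal(order) {
  return order.items.reduce((sum, item) => sum + item.price, 0);
}

// The top-level function is now a high-level overview of the steps;
// a reader digs into a step function only when they want that detail.
function processOrder(order) {
  const normalized = normalizeEmail(order);
  validateOrder(normalized);
  return { ...normalized, total: orderTotal(normalized) };
}
```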
<p><strong>Work at the highest level of abstraction</strong> that enables you to solve the problem. For example, decades ago the most common way to iterate over a list was to create a counter variable and repeatedly increment it by 1 in a <code class="language-plaintext highlighter-rouge">for</code> loop. But now many languages have built-in collection libraries that allow you to <code class="language-plaintext highlighter-rouge">forEach</code> or <code class="language-plaintext highlighter-rouge">map</code> over a list, or even dedicated syntax to loop over each element of a list in turn. This means that the reader of your code doesn’t have to process the incrementing of the counter itself (and think about the possibility of a bug in that code); they can just focus on what you are doing at each step. Another example is the <code class="language-plaintext highlighter-rouge">reduce</code> function: although it’s a collection function that automatically handles incrementing, it’s lower-level than <code class="language-plaintext highlighter-rouge">map</code> and <code class="language-plaintext highlighter-rouge">filter</code> and so is harder to understand at a glance. If you are just transforming each list element, use <code class="language-plaintext highlighter-rouge">map</code> instead of <code class="language-plaintext highlighter-rouge">reduce</code>; if you are just removing some elements from the list, use <code class="language-plaintext highlighter-rouge">filter</code> instead of <code class="language-plaintext highlighter-rouge">reduce</code>. Only reach for the extra complexity of <code class="language-plaintext highlighter-rouge">reduce</code> when a higher-level abstraction won’t meet your need.</p>
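<p>A quick sketch of the contrast (with made-up data): the <code class="language-plaintext highlighter-rouge">reduce</code> version below works, but the reader has to trace the accumulator to see that it is just filtering:</p>

```javascript
const scores = [4, 9, 7]; // made-up example data

// Lower-level: reduce can do anything, so the reader must work out
// what this particular one does.
const passingViaReduce = scores.reduce((acc, score) => {
  if (score >= 5) {
    acc.push(score);
  }
  return acc;
}, []);

// Higher-level: filter says it directly.
const passing = scores.filter((score) => score >= 5);
```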
<p><strong>Use the closest-fit function</strong> for what you’re trying to accomplish. For example, the purpose of <code class="language-plaintext highlighter-rouge">map</code> is to return a new list that has transformed versions of the list elements. If you want to loop over a list for the sake of <em>side effects</em>, not for the sake of transforming the list elements, don’t use <code class="language-plaintext highlighter-rouge">map</code>: that’s what <code class="language-plaintext highlighter-rouge">forEach</code> and similar language constructs are for. Using <code class="language-plaintext highlighter-rouge">map</code> is misleading because it suggests to the reader of the code that it’s being used for something that it isn’t really used for.</p>
<p><strong>Limit the public interface</strong> for a unit of code. Make it clear to users of that code how it’s intended to be used. In object-oriented languages this often means only marking object methods public if you intend them to be used externally, and otherwise using a more restrictive visibility like private. In JavaScript modules this means only exporting the functions and variables you intend to be called from elsewhere.</p>
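<p>As a single-file sketch of that idea (using a closure to stand in for module exports; in a real ES module the same effect comes from exporting only the public function), only <code class="language-plaintext highlighter-rouge">isActive</code> is part of the public interface and the helper stays private:</p>

```javascript
// Closure-based module sketch: the returned object plays the role of
// a module's exports.
function makeAccountModule() {
  // Private helper: not reachable from outside the module.
  function paymentUpToDate(user) {
    return user.balanceDue === 0;
  }

  // Public function: the only intended entry point.
  function isActive(user) {
    return user.acceptedTerms && paymentUpToDate(user);
  }

  return { isActive }; // the "exports"
}
```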
<p><strong>Limit the scope of variables</strong> to the narrowest scope needed. If a variable is only needed in the body of a function, make it local to that function. If the variable is only needed in the body of a conditional or loop, and the language supports block-scoped variables, make the variable local to that block. This lets readers of the code know at a glance that that variable is only used in that narrow scope, and they don’t need to think about the effects of that variable elsewhere in the code. It also prevents a future developer from using the variable in a context that it wasn’t designed to be used. Avoid global variables: they have the widest scope possible and so have the highest chance of having unintended effects.</p>
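<p>A small sketch (the function is invented): <code class="language-plaintext highlighter-rouge">result</code> needs to survive across loop iterations, so it lives at function scope, while <code class="language-plaintext highlighter-rouge">length</code> is only needed inside the loop body and is declared there:</p>

```javascript
function firstLongName(names) {
  // Needed across iterations: function scope.
  let result = null;
  for (const name of names) {
    // Only needed inside this block: block scope, so it cannot leak out
    // or be reused in a context it wasn't designed for.
    const length = name.length;
    if (length > 5 && result === null) {
      result = name;
    }
  }
  return result;
}
```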
<p>When a constant value is hard-coded in the code but its purpose is not clear, consider giving it an <strong>explaining variable name</strong>: assign it to a variable so you can name that variable after what it’s intended to be used for. For example, if you have a three-column layout, instead of using the number <code class="language-plaintext highlighter-rouge">3</code> throughout the code, consider assigning it to a <code class="language-plaintext highlighter-rouge">numColumns</code> variable. That will help the reader of the code know <em>why</em> a 3 is being used in a given case. It also handles the possibility of the same number being used for reasons <em>other</em> than the number of columns—which could lead to bugs if you need to update the number of columns in the future and change a 3 that should not have been changed.</p>
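<p>In code (a minimal sketch), the explaining variable looks like this:</p>

```javascript
// numColumns names the 3 used for layout, distinct from any other 3
// that might appear elsewhere in the code for other reasons.
const numColumns = 3;

function columnWidth(totalWidth) {
  return totalWidth / numColumns;
}
```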
<h2 id="conclusion">Conclusion</h2>
<p>Focusing on revealing intention as you write code is helpful, but the proof is in how that code fares in code review. A reviewer may not be able to figure out what code is doing; if so, the author should not only explain it to them, but also try to update the code so that it explains itself. Or a reviewer may <em>think</em> they know what the code is doing but misunderstand; this is another occasion to update the code to reveal intention more clearly. Even if a reviewer is able to figure out what the code means, if it took them some effort or they see a way the code could be more clear, they should suggest it, and the author should make changes accordingly. As an author getting your code reviewed, err on the side of making changes in response to review comments instead of leaving it as-is. Whether you make the exact change requested, or whether you come up with a third option that satisfies both you and the reviewer, either works.</p>
<p>If you apply these principles to make your code more intention-revealing, you’ll see a significant benefit in your ability to make changes to the codebase quickly and reliably. You’ll also get a feeling of increased confidence in your ability to work in the codebase. You’ll likely think of other principles that help the code reveal intention as well: maybe principles specific to your language, framework, or project.</p>
Wed, 06 Mar 2024 00:00:00 +0000
https://codingitwrong.com/2024/03/06/simple-design-reveals-intention.html
https://codingitwrong.com/2024/03/06/simple-design-reveals-intention.htmlA Brief Summary of Evolutionary Design<p><em>For years I’ve wanted to write something to advocate for evolutionary design. There is so much that can be said, and I hope to write more in the future. But I wanted to begin with this short definition of what I mean by “evolutionary design” and explanation of why it’s helpful.</em></p>
<p>Most developers have experienced a codebase that is hard to work with. What contributes to making code hard to work with? One major factor is over-design or under-design. Under-designed software results when people hack in changes the code was never designed to accommodate, making it harder and harder to work with. Over-designed software results when the developer tries to predict all the ways it will change and plans for them in advance, making a codebase that has a lot of unnecessary indirection, most of which never gets used.</p>
<p>A way to find the middle path between over-design and under-design is evolutionary design. It involves two steps:</p>
<ol>
<li>Building the simplest possible implementation of the requirements of the system today, and</li>
<li>When a new requirement comes in, rearranging the code so it’s the simplest possible implementation of the new requirement.</li>
</ol>
<p>The reason for evolutionary design is that we can’t actually know what will change in the codebase in the future. But by setting up the code so it’s as easy to understand and change as possible, you can change it in any way it needs to change in the future.</p>
<p>Some key principles for ensuring the code is as simple as possible were <a href="https://www.martinfowler.com/bliki/BeckDesignRules.html">described by Kent Beck</a>:</p>
<ul>
<li>Reveals intention: that is, it is set up to help the reader of the code understand what the author of the code was trying to do.</li>
<li>No duplication: that is, that every bit of business logic in the code is only in one place, so that if the business rule changes you don’t risk two different spots in the code getting out of sync.</li>
<li>Fewest elements: that is, you avoid statements, functions, and types that are unnecessary. This can be because they aren’t executed, don’t accomplish anything, or because there is a more straightforward way to accomplish the same thing.</li>
</ul>
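<p>As a tiny illustration of the “reveals intention” and “no duplication” principles, consider a hypothetical JavaScript example (not from the article) where a discount rule had been computed inline in two different places:</p>

```javascript
// Hypothetical example: the discount rule used to be computed inline in
// two call sites; extracting it removes the duplication, and the
// function's name reveals the intention behind the calculation.
function discountedTotal(subtotal, discountRate) {
  return subtotal - subtotal * discountRate;
}

// Both call sites now share the single source of the business rule.
const memberTotal = discountedTotal(100, 0.25); // 75
const saleTotal = discountedTotal(80, 0.5);     // 40
```

<p>If the discount rule changes, there is now exactly one place to change it.</p>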
<p>The process of changing the code to best fit the new requirements involves refactoring. It’s important to note that this “refactoring” doesn’t involve pre-planned rewrites of large portions of the codebase. Instead, it’s the kind of refactoring described in Martin Fowler’s <a href="https://martinfowler.com/books/refactoring.html"><em>Refactoring</em> book</a>: small changes as you go to support the need you have in the moment. You rearrange a bit of code to be a better fit for the new feature you need to implement.</p>
<p>In order to refactor in this way, you need a suite of comprehensive tests: tests that, if they pass, give you confidence that the code is still working. The reason is that if refactoring is going to be a regular occurrence, you want to automate checking that you haven’t broken anything, instead of having to check manually. If you have to check manually, the effort involved and the risk of breaking something mean that you will tend to defer refactoring until absolutely necessary, which means the code is not regularly updated to best fit its new requirements. You don’t necessarily need to use test-driven development to write this suite, but test-driven development is one tool you can choose to help ensure your test suite is comprehensive enough to support refactoring.</p>
<p>To learn more about evolutionary design and the kind of simple code, refactoring, and comprehensive tests that support it, I recommend reading the introductory chapters of Fowler’s <em>Refactoring</em>.</p>
Mon, 29 Jan 2024 00:00:00 +0000
https://codingitwrong.com/2024/01/29/brief-summary-of-evolutionary-design.html
https://codingitwrong.com/2024/01/29/brief-summary-of-evolutionary-design.htmlFrontend Web Architectures<p>For a few years now I’ve wanted to write about the various architectural approaches for web frontends. I’ve hesitated, however, because of a lack of experience with one particular approach: streaming HTML from the server (e.g. Phoenix LiveView). I looked for an opportunity to try out that approach to get more experience with it before I wrote, but that opportunity never came: it didn’t seem to be a good fit for the professional and side projects I had.</p>
<p>But there’s been a shift in the frontend ecosystem that’s motivated me to finally write. <a href="https://react.dev/learn/start-a-new-react-project">The new React docs now recommend using a third-party framework</a> for React web apps created from scratch. Previously it felt like single-page apps (SPAs) were the default React approach with the first-party Create React App tool, and frameworks were for if you needed enhanced behavior such as server rendering. But now that frameworks are the official recommendation, I don’t feel like I have a default anymore. I don’t want frameworks to be my default because I strongly feel the downsides, but SPAs’ downsides are now more pronounced too.</p>
<p>Without a default to lean on, each time I start a project I need to think through the tradeoffs of the different architectural approaches. Which isn’t necessarily a bad thing! Having processed through the tradeoffs more extensively, I wanted to share my current thoughts on them.</p>
<p>Take this as a snapshot of what I’ve had the opportunity to learn so far—both “here’s what I know” and “here’s what I know could use more research.” <a href="https://meta.wikimedia.org/wiki/Cunningham%27s_Law">Cunningham’s Law</a> may help me out here. If any readers have information or experiences to fill in gaps in my knowledge I’d welcome that—feel free to contact me via <a href="https://tdd.social/@CodingItWrong">Mastodon</a>. I tend to give more consideration to openhanded opinions, so “here’s a way to mitigate a downside” will carry more weight with me than “here’s why my preferred approach actually has no downsides.”</p>
<p>This article focuses on rich web applications rather than static web pages and documents, as rich applications are what I spend most of my time building, so I don’t go into the topic of static site generation. The boundary between web pages and web apps is fuzzy, hence frameworks like Next.js that mix static, server, and client rendering.</p>
<p>With that said, let’s look at the frontend architectures as I understand them.</p>
<h2 id="javascript-sprinkles">JavaScript Sprinkles</h2>
<p>At the beginning of the web, JavaScript didn’t exist. All web pages were rendered by generating an HTML document on a server and returning it to the client. When something on the page needed to change, the user would click a link or button, a request was made to the server, and the server returned a new HTML page.</p>
<p>When JavaScript was later added to web browsers, its capabilities were limited—as were ideas for what to do with it. The use of JavaScript was generally limited to adding a little animation or interactivity to small portions of the page. DHH, the creator of the Ruby on Rails framework, refers to this approach as “JavaScript sprinkles,” in contrast to some of the other approaches we’ll discuss next that use significantly more JavaScript.</p>
<p>Pros of this approach include that almost all of the UI is defined in one place: the server templates. There is also a strong separation between backend and frontend: you can use any JavaScript library with any backend, even using different JavaScript libraries on different pages as needed. This approach also still relies on the web browser for navigation, so the URL, back button, and history work as you would expect.</p>
<p>A key downside of the JavaScript sprinkles approach was its limited support for rich interactions: because almost all of the initial HTML came from the server, the DOM didn’t change much after the page loaded.</p>
<p>It’s still possible to use JS sprinkles today, but now that so much JavaScript development has shifted to other approaches, it can be difficult to find third-party libraries that work with this approach.</p>
<h2 id="frontend-embedded-in-backend">Frontend Embedded in Backend</h2>
<p>To allow for richer, more app-like interactions on the web, more substantial JavaScript UI frameworks emerged that allowed developers to implement larger widgets. Each page would still be rendered by the server, but a significant chunk of that page might be created by JavaScript. Embedding individual components in a server-rendered page isn’t the way we usually think about React development, but the React docs still do describe <a href="https://react.dev/learn/add-react-to-an-existing-project">how to use React for part of your existing page</a>. And <a href="https://vuejs.org/">Vue.js</a> is called “the progressive JavaScript framework” because it’s incrementally adoptable, and its docs have been organized to emphasize how it can easily be dropped into an existing server-rendered application.</p>
<p>The simplest way to embed frontend widgets within backend-rendered pages is by manually writing script tags. However, if you’re using the frontend framework extensively throughout your application then this can get boilerplate-heavy, especially when you want to pass data from backend to frontend. Some libraries exist to handle this boilerplate for you, including the recently-released <a href="https://github.com/thoughtbot/superglue">Superglue</a> for React and Rails, and <a href="https://inertiajs.com">Inertia</a>, which is from the Laravel ecosystem but says it can be used with any backend.</p>
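<p>If you do wire this up manually, one common pattern is for the server template to serialize props into a data attribute that the frontend reads back when mounting the widget. A sketch of the reading side (the attribute name and the <code>PageEditor</code> component are hypothetical; the commented-out mounting code assumes React’s <code>createRoot</code> API):</p>

```javascript
// Parse JSON props that a server template wrote into a data attribute,
// e.g. <div id="editor" data-props='{"pageId": 42}'></div>.
// (The attribute name and props shape here are hypothetical.)
function readEmbeddedProps(attrValue) {
  return JSON.parse(attrValue || "{}");
}

// In the browser, mounting the widget would then look something like:
//   const el = document.getElementById("editor");
//   createRoot(el).render(<PageEditor {...readEmbeddedProps(el.dataset.props)} />);
```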
<p>Like the sprinkles approach, frontend embedded in backend has a strong separation between backend and frontend: you can mix-and-match different frameworks on different pages, and migrating from one UI framework to another can be done incrementally. This approach also relies on native browser navigation functionality for an intuitive experience.</p>
<p>A downside of this approach is that it splits your UI logic across two different technologies: the server templating engine, and the frontend framework. When you begin to create a bit of UI you need to decide which of the two to implement it in, and if you later need to change that approach it requires significant reimplementation. And developers need to be familiar with and think through two different UI technologies to understand what will be rendered for a given page.</p>
<h2 id="cdn-hosted-single-page-application">CDN-Hosted Single Page Application</h2>
<p>A single-page application is a frontend web application that runs on a single HTML page returned from the server. Generally the JavaScript is responsible for rendering everything visible on the screen, so the initial HTML page might not include any visible HTML tags at all, only tags that load assets like JavaScript, CSS, and fonts. Because the frontend consists only of static assets, it does not need to be hosted on a high-cost application server: it can be served from a Content Delivery Network (CDN).</p>
<p>For a long time SPAs were the de facto “default” way to use frontend frameworks. First-party tools like <a href="https://create-react-app.dev/">Create React App</a> and <a href="https://cli.vuejs.org/">Vue CLI</a> provided a low-effort way to create an SPA in the given framework. They handled needs like transpiling custom syntax and newer JavaScript features to a form the target browsers could run, as well as bundling many source files into a form optimized for fast loading. For more custom needs, <a href="https://webpack.js.org/">webpack</a> was the most widely adopted JavaScript build tool for a long time; later, <a href="https://vitejs.dev/">Vite</a> gained a lot of adoption. Recently, browser <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules#importing_modules_using_import_maps">importmap</a> functionality has provided a way to potentially skip the bundling step. Build tools are not highly coupled to the code you write, so it should be fairly easy to migrate from one tool to another as long as the latter supports all the features you need.</p>
<p>One of the main benefits of SPAs is having your whole UI in a single technology, while getting the rich interactions that modern JavaScript frameworks provide. SPAs also fully decouple the frontend and backend so that you don’t even need to run the backend for local development: your local frontend can connect to a backend on a different server. This means developers only need JavaScript experience and tooling to work on the frontend code, which has fueled the rise of the frontend developer role as separate from backend development. Because it is hosted on a CDN, traffic cost is minimal and transfer rates tend to be better. For applications that need additional clients such as native mobile apps, an SPA accesses the backend in the same way as other clients: through an API. This can prevent inconsistency and extra work, because once you have an API endpoint for one client it can likely work for others as well. Finally, since an SPA is effectively a complete application running in the browser, you have the option to provide offline support for either connection drops or extended usage offline.</p>
<p>The most visible downside to SPAs is performance: rather than the browser receiving a web page and rendering it immediately, the browser has to go through several steps: receive an HTML page, load all needed JavaScript files, parse them, and execute them—and only then does the application appear for users. This may be fine for web applications that users spend a significant amount of time in on fast connections on fast devices. But for high-visibility public web sites and on slower connections and devices, this speed issue can significantly hinder a user’s experience, and SEO scores reflect that.</p>
<p>SPAs tend to have the concept of pages, but since they aren’t actually separate HTML pages, the browser doesn’t know what to do with them. So custom JavaScript code is needed to update the address bar to a new path so that the back button and saved links work. And it’s possible to get that navigation logic wrong. Another significant limitation is that there are fewer options for secure credential storage. Most commonly, local storage is used, but it is vulnerable to <a href="https://owasp.org/www-community/attacks/xss/">cross-site scripting (XSS) attacks</a>, and some authorities recommend never storing access tokens in local storage. Apparently, <a href="https://auth0.com/docs/secure/security-guidance/data-security/token-storage#browser-in-memory-scenarios">web workers</a> or service workers can also be used, but I’ve only seen this offered by third-party authentication providers, never an example of how to implement it yourself.</p>
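<p>To make that navigation logic concrete: an SPA router typically intercepts link clicks, calls <code>history.pushState</code> to update the address bar, and matches the new path against a route table to decide what to render. A toy version of the matching step (a real app would use a router library rather than hand-rolling this):</p>

```javascript
// Match a path like "/users/42" against a pattern like "/users/:id",
// returning the extracted params, or null if it doesn't match.
function matchRoute(pattern, path) {
  const patternParts = pattern.split("/");
  const pathParts = path.split("/");
  if (patternParts.length !== pathParts.length) return null;

  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(":")) {
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null;
    }
  }
  return params;
}

// After a successful match, the app would call
// history.pushState(null, "", path) and render the matching page component.
```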
<h2 id="server-hosted-spa">Server-Hosted SPA</h2>
<p>Instead of hosting an SPA on a CDN, it can instead be hosted from within a server-based web application. In this approach the frontend application code is very similar to CDN-hosted SPAs, but the differences in tradeoffs are significant enough that I’m describing this option separately.</p>
<p>In this approach, a server-rendered web application renders the mostly-empty HTML page that boots the SPA. This can be done easily with any web library or framework that accepts HTTP connections and returns HTML, whether something lightweight like Express.js or something heavyweight like Ruby on Rails. The key advantage that a server-rendered application gets you is the ability to use cookies for credential storage. Cookies can be configured to be secure against XSS attacks, and although they’re vulnerable to <a href="https://owasp.org/www-community/attacks/csrf">cross-site request forgery attacks</a>, that risk can be mitigated.</p>
<p>The coupling between frontend and backend is still fairly light because the frontend is rendered without knowledge of the backend it’s running within, and it receives its data via backend-agnostic API requests. A downside is that these requests will likely need to be proxied through the backend so that the cookie can be translated into an access token sent via some other mechanism—and this means more traffic on more expensive application servers.</p>
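<p>That proxying step amounts to a small translation: look up the session identified by the incoming cookie, then forward the request with a bearer token instead. A sketch of just the translation (the <code>session</code> cookie name and the session store are hypothetical; a real implementation would live in your server framework’s middleware):</p>

```javascript
// Given the raw Cookie header and a server-side session store, produce
// the Authorization header to attach to the proxied API request.
// (The "session" cookie name and the Map-based store are hypothetical.)
function authHeaderFromCookie(cookieHeader, sessionStore) {
  const match = /(?:^|;\s*)session=([^;]+)/.exec(cookieHeader || "");
  if (!match) return null;
  const token = sessionStore.get(match[1]);
  return token ? `Bearer ${token}` : null;
}
```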
<p>Another downside is that you need to run both frontend and backend technologies locally for development. And the performance is no better than a CDN-hosted SPA; possibly worse, as you don’t have a CDN helping with initial load times.</p>
<h2 id="server-rendered-frontend-framework-srff">Server-Rendered Frontend Framework (SRFF)</h2>
<p>Within frontend circles this approach is simply called “server-side rendering” (SSR). This can elicit the pushback that “server-side rendering isn’t new, it’s the original approach of the web” (as in, JavaScript sprinkles in a server-rendered HTML page). But dunks aside, SSR is genuinely different, and to make that clear I’m referring to it by the more precise name “Server-Rendered Frontend Framework” (SRFF).</p>
<p>In this approach, the same frontend framework code that runs in the browser for rich interactivity also runs on the server to render the initial HTML for the page. It may not be immediately obvious why this is faster than an SPA—after all, the same work is happening, whether on the server or in the browser. The speed benefit comes from fast servers with cached assets sending a visually complete web page down to the client so that as soon as it receives the first HTTP response it can render with no additional requests, parsing, or execution needed. And this speed benefit comes while still having the benefit of all of the UI implemented in a single technology.</p>
<p>SRFF has been developed over the years via frameworks like <a href="https://nextjs.org/">Next.js</a>, <a href="https://nuxt.com/">Nuxt</a>, and <a href="https://remix.run/">Remix</a>. The technologies it’s implemented with have advanced over time: for example, <a href="https://nextjs.org/docs/app/building-your-application/rendering/server-components">React Server Components</a> allow for more granular control over what happens on the server and more minimization of what code is sent to the client, reducing JavaScript bundle sizes. Server-side rendering is one of the reasons <a href="https://react.dev/learn/start-a-new-react-project">the React docs recommend using a framework</a>, although not the only reason: they point out that many such frameworks allow exporting static assets while providing other benefits.</p>
<p>Running frontend code on the server provides another option that is sometimes helpful: skipping the API layer entirely. You can make direct database calls from that server code, so that if the web is your only client you can skip developing and maintaining an API server and API client code. If you have other clients such as native mobile apps, though, you will likely need an API regardless. In those cases, your SRFF application can make calls to that API from its server-side code.</p>
<p>Another benefit of SRFF is that it handles integration between frontend and backend for you, letting you skip writing boilerplate-heavy integration code yourself. But this benefit leads right into the major downside: coupling. Your application is no longer a general React or Vue application; it is now specific to the SRFF framework you’re using, and it cannot easily be ported to another framework. If there is a bug in the SRFF framework or a breaking change that will take you a lot of effort to address, you’re out of luck. And although none of the frameworks <em>require</em> hosting on a single server platform, they are very complex applications, so deviating from the recommended hosting platform may require a lot of setup work. For example, Next.js is created by Vercel so their priority is on making it run easily on their own hosting platform—and you may have challenges running it on other server platforms.</p>
<h2 id="incremental-server-rendering">Incremental Server Rendering</h2>
<p>Parallel to the rise of JavaScript UI frameworks there have been efforts to get a richer web user experience while keeping UI rendering logic on the server.</p>
<p>As early as 2013 Ruby on Rails apps had a common pattern to allow Ajax requests to return JavaScript to execute, which was commonly used to <a href="https://web.archive.org/web/20140118105330/http://ruby.railstutorial.org/chapters/following-users#sec-a_working_follow_button_with_ajax">replace parts of the page with updated HTML rendered on the server</a>.</p>
<p>An alternative to returning HTML over HTTP responses is to use WebSockets. This has the advantage of avoiding overhead for repeatedly establishing server connections, as well as the option to proactively push updates in response to server events. There are a number of backend-framework-specific libraries for streaming HTML updates over WebSockets. Microsoft’s <a href="https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blazor">Blazor</a> may have been the first (2018), but outside the Microsoft ecosystem Phoenix <a href="https://github.com/phoenixframework/phoenix_live_view">LiveView</a> got more attention (announced in 2019). Following suit were <a href="https://laravel-livewire.com/">LiveWire</a> in Laravel and <a href="https://hotwired.dev/">Hotwire</a> in Rails. And, uniquely, <a href="https://htmx.org/">HTMX</a> provides this architecture in a backend-agnostic way, so you can use it regardless of your server-side framework.</p>
<p>This approach is often referred to as “streaming HTML,” but the term “streaming” specifically refers to WebSocket connections. I also want to include HTML updates in response to HTTP requests because the tradeoffs are similar, so I’m using “incremental server rendering” as a broader term.</p>
<p>Incremental server rendering libraries give you the advantage of keeping your UI in a single technology—the backend templating engine. They mostly remove the need for a frontend framework. They also provide the advantage of initial quick rendering, because the initial render <em>is</em> just a normal HTML page.</p>
<p>A downside is that most of these libraries are coupled to the backend framework they run within. You’ll hope that the maintainers of that framework continue to maintain this library in addition to their overall framework, because if they stop maintaining it your code can’t easily be migrated to another approach. HTMX mitigates this problem because it <em>isn’t</em> coupled to a specific backend. This may help it gain more momentum, as teams using any server-side technology have the option of adopting it and contributing to it.</p>
<p>Since all interactions require a trip to the server, this necessarily introduces a delay, even in the smallest interactions. Even if the server responded instantaneously, the request and response take time to travel. By contrast, in an SPA running in the browser, small changes like toggling a button can be handled as quickly as the JavaScript can execute. Now, I haven’t investigated interaction times in incremental server-rendering libraries myself, so I would appreciate any data on it that a reader might want to share. But the speed of light does exist, so there is going to be a limit on how quickly data transmission over a distance can happen, barring some radical rethinking.</p>
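<p>A back-of-envelope check on that physical limit (the distance is an assumed example, not a measurement): light in optical fiber travels at roughly two-thirds of its speed in a vacuum, about 200 km per millisecond, so distance alone puts a floor under interaction latency:</p>

```javascript
// Rough lower bound on interaction latency from distance alone.
const kmEachWay = 1500;      // assumed client-to-server distance
const fiberKmPerMs = 200;    // approximate speed of light in fiber
const minRoundTripMs = (2 * kmEachWay) / fiberKmPerMs; // 15 ms
```

<p>Even before the server does any work, that round trip costs 15 ms; a local JavaScript handler pays no such cost.</p>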
<p>A more significant downside of incremental server rendering is that it generally isn’t robust for offline use—whether protracted offline work or just handling connection hiccups. There is no persistent application running in the browser, only on the server, and even the smallest interactions require server connectivity. If your application is online-only then this may not be too much of a problem, but it can still magnify disruptions to a user’s experience from connection drops.</p>
<p>A broader limitation is that incremental server rendering has not received a lot of adoption yet. It may be trending upward so we’ll see where it goes in the next few years. But without a high degree of adoption you have fewer well-supported third-party libraries (not to mention that libraries are fragmented across the different backend technologies) so you’ll need to write more low-level code from scratch. This isn’t to say this approach is bad because it hasn’t received adoption (that would be circular reasoning), but it is a pragmatic consideration.</p>
<h2 id="decisions">Decisions</h2>
<p>To conclude, I don’t have a neat set of criteria for how to pick the frontend architecture for a given project. Really, the whole article is the criteria: you’ll want to take all the tradeoffs I discussed into consideration. Instead, I’ll share some of the decisions I’ve made on projects recently.</p>
<p>One of my side projects is a server-side web application with only one sprinkle of JavaScript. Slapdash is an app for creating <a href="https://slapdash.codingitwrong.com/josh">public topical notes</a>; I use it for reference information for programming languages and frameworks. It’s more of a web site than a web app: it displays pages of content and allows the logged-in users to edit it in a text box. The only JavaScript it uses is a library for applying syntax highlighting to code snippets. It’s such a perfect fit for the server-side web that I didn’t see any advantage to considering a more JavaScript-heavy approach, especially with all the niceties I got by using Ruby on Rails.</p>
<p>The client project I worked on in 2019 had a frontend embedded in the backend. It was a custom CMS for an organization’s public web site. Most of the public-facing pages only needed to be static HTML, and most of the admin pages were simple HTML forms. However, there was one key place that needed a richer user interface: a block-based page editor allowing admins to lay out pages. We implemented that page editor as a React application embedded within that admin page.</p>
<p>For most of my side projects—such as the latest, <a href="https://about.riverbed.app/">Riverbed</a>—I’ve gone with a CDN-hosted SPA. Access tokens are stored in local storage, and to mitigate the risk of XSS attacks I use only widely-trusted dependencies and don’t allow any rendering of user-supplied HTML. Because I’m not looking to scale these side projects to hundreds of thousands of users, and because they’re personal utility apps that don’t inherently track super-secure information, this seems to me to be secure enough that the effort of setting up a server-hosted SPA was not warranted. Initial load time isn’t very important either for SEO or for user experience; users will generally be running the applications in a browser tab for an extended period of time.</p>
<p>I’ve thought about trying an SRFF on one of these side projects just to get more experience with it, but what makes me hesitate is the coupling it introduces to that framework. I want to maintain these projects indefinitely, so it’s not appealing to take on the burden of keeping the app current with that framework’s updates when (as I just mentioned) the apps work perfectly fine as vanilla React SPAs.</p>
<p>If I had a client that was deeply invested in one of the ecosystems that has a streaming HTML approach (.NET, Rails, Phoenix, Laravel) then I would recommend that they look into that option. But, because I’ve been professionally focused on React, the opportunities I tend to get are projects using React or looking to do so.</p>
Wed, 17 Jan 2024 00:00:00 +0000
https://codingitwrong.com/2024/01/17/frontend-web-architectures.html
https://codingitwrong.com/2024/01/17/frontend-web-architectures.htmlRunning OPENSTEP 4.2 in a Virtual Machine<p>An article on Adafruit.com describes <a href="https://learn.adafruit.com/build-your-own-next-with-a-virtual-machine/overview">how to run OPENSTEP 4.2 in a virtual machine</a>. I went through the steps, and it went smoothly. The only hiccup was a few settings in VirtualBox that are phrased slightly differently in 7.0 for macOS.</p>
<p>Here are screenshots and info about the VirtualBox 7.0 settings to use when following along with the Adafruit article. This article isn’t a complete tutorial for setting up OPENSTEP 4.2 in VirtualBox, as there are a lot more steps beyond this; follow along with both articles side-by-side for the best experience.</p>
<p><img src="/img/posts/openstep-vm/00-openstep-desktop.png" alt="OPENSTEP 4.2 desktop with a File Viewer displaying the Demos folder, and BoinkOut.app selected" /></p>
<p>When first creating the virtual machine, the “Type” dropdown is set to “Other”, and the “Version” dropdown is set to “Other/Unknown”. Note that you don’t specify an “ISO Image” at this point; leave it at the default “&lt;not selected&gt;”.</p>
<p><img src="/img/posts/openstep-vm/01-vm-name.jpg" alt="The VirtualBox "Create Virtual Machine" dialog, with ISO Image not selected, Type set to "Other", and Version set to "Other/Unknown"" /></p>
<p>In the Hardware step, the “Base Memory” value is what you set to 128 MB.</p>
<p><img src="/img/posts/openstep-vm/02-hardware.jpg" alt="The VirtualBox "Hardware" dialog, with Base Memory set to 128 megabytes" /></p>
<p>In the Virtual Hard disk step, make sure the disk size is set to 2 GB. Note that you will only be prompted for the disk size; there is no need to specify VDI or Dynamically allocated.</p>
<p><img src="/img/posts/openstep-vm/03-virtual-hard-disk.jpg" alt="The VirtualBox "Virtual Hard disk" dialog, with Disk Size set to 2 gigabytes" /></p>
<p>When you open Settings, go to Storage, and choose the Optical Drive, the setting you should change it to is now called “IDE Primary Device 1”.</p>
<p><img src="/img/posts/openstep-vm/04-storage-optical.jpg" alt="The VirtualBox "Storage" settings window, with the Optical drive set to IDE Primary Device 1" /></p>
<p>To add a floppy disk controller, click the “Add New Controller” icon, then choose “I82078 (Floppy)”.</p>
<p><img src="/img/posts/openstep-vm/05-storage-floppy.jpg" alt="The VirtualBox "Storage" settings window, with a Floppy Controller added and a floppy disk image named 4.2_Install_Disk.img" /></p>
<p>To deactivate USB, click the Ports icon then the USB tab, then uncheck “Enable USB Controller”.</p>
<p><img src="/img/posts/openstep-vm/06-ports.jpg" alt="The VirtualBox "Ports" settings window showing the USB tab, with "Enable USB Controller" unchecked" /></p>
<p>This covers all the VirtualBox settings you’ll need to change. If you continue along in the Adafruit tutorial, you should be set to run OPENSTEP 4.2 in a virtual machine!</p>
Mon, 08 Jan 2024 00:00:00 +0000
https://codingitwrong.com/2024/01/08/openstep-vm.html
https://codingitwrong.com/2024/01/08/openstep-vm.htmlHow to Play Marathon over a LAN on Classic Macs<p><strong>tl;dr: to play the original Classic Mac Marathon games over a LAN, enable AppleTalk and set it to connect via Ethernet. No TCP/IP configuration is needed.</strong></p>
<hr />
<p>One of my favorite memories of growing up as a Mac user is playing Marathon over a LAN with my brothers. When I started collecting Macs again, being able to play Marathon on a LAN was one of my main goals. I had trouble getting it working, and now that I’ve found the answer, I wanted to share it so it’ll show up in others’ search results.</p>
<p>Note that another option to play Marathon over the network is <a href="https://alephone.lhowon.org/">Aleph One</a>, a project for running Marathon on modern machines. But in my case I wanted the authentic original experience on original hardware.</p>
<p>First, get Macs with classic Mac OS and a working Ethernet port. All the Macs I used had on-board Ethernet:</p>
<ul>
<li>Power Macintosh 7500</li>
<li>Power Macintosh 8500</li>
<li>Twentieth Anniversary Macintosh</li>
<li>Powerbook G3 Pismo</li>
<li>Power Mac G3 Tower</li>
<li>Power Mac G4 Tower (MDD)</li>
</ul>
<p>Get the Macs connected over Ethernet. I used a Netgear WPN824 router.</p>
<p>Next, enable AppleTalk and configure it to run over Ethernet.</p>
<p>If you have an AppleTalk control panel, open it and choose “Ethernet” from the dropdown.</p>
<p><img src="/img/posts/marathon-lan/appletalk-over-ethernet.png" alt="The Mac OS 9 AppleTalk control panel, with Ethernet selected from the "Connect via" dropdown." /></p>
<p>On some computers running systems that predate Open Transport, you may need to do this in the Network control panel instead of AppleTalk.</p>
<p>Next, make sure AppleTalk is enabled. You can do this in one of two places. The first is in Chooser:</p>
<p><img src="/img/posts/marathon-lan/chooser-appletalk-active.png" alt="The Mac OS 9 Chooser, showing AppleTalk as Active" /></p>
<p>And the second is in the AppleTalk Control Strip module:</p>
<p><img src="/img/posts/marathon-lan/control-strip-appletalk-active.png" alt="The Mac OS 9 AppleTalk control strip module, with AppleTalk Active selected" /></p>
<p>Note that you do <em>not</em> need to do any TCP/IP configuration at all. 2023 me equated Ethernet with TCP/IP, but I forgot that TCP/IP seems to be an <em>alternative</em> to AppleTalk.</p>
<p>At this point, you can test AppleTalk connectivity using file sharing, without needing to go into Marathon. Open the File Sharing control panel, make sure there is an owner name, password, and computer name, and click Start File Sharing.</p>
<p><img src="/img/posts/marathon-lan/file-sharing-on.png" alt="The Mac OS 9 File Sharing control panel, with File Sharing on" /></p>
<p>Then open Chooser and pick AppleShare. If the machines are connected, they should all see each other in the list of servers.</p>
<p><img src="/img/posts/marathon-lan/chooser-appleshare.png" alt="The Mac OS 9 Chooser, with AppleShare selected and a file server named "Josh's TAM" showing in the list" /></p>
<p>Next you’ll need one of the Marathon games installed on each machine. There is a Marathon <a href="https://marathon.bungie.org/maraserialgen/">serial number generator</a> that Bungie is not opposed to, so you can get a different serial number for each machine. I went with Marathon Infinity.</p>
<p><img src="/img/posts/marathon-lan/serial-generator.jpg" alt="The Marathon Serial Generator, with generated serial numbers blurred out" /></p>
<p>Start the games up. Pick one machine to be the server (it probably makes sense to choose the highest-powered one) and click Gather Network Game on it. Make sure Ethernet is chosen from the Network dropdown. Enter a name, color, team, and any game settings you want. Then click OK.</p>
<p><img src="/img/posts/marathon-lan/setup-network-game.png" alt="The Marathon Infinity Setup Network Game dialog, configured to use Ethernet as the Network" /></p>
<p>On the other machines, choose Join Network Game instead. Enter a name, color, and team for each.</p>
<p>Once you’re done, click Join on each. Back on the Gather machine, all the machines’ names should appear in the “Players On Network” list.</p>
<p><img src="/img/posts/marathon-lan/gather-network-game.png" alt="The Marathon Infinity Gather Network Game dialog, with a player named Pismo in game and a player named TAM on the network" /></p>
<p>Click each name and click Add.</p>
<p><img src="/img/posts/marathon-lan/gather-network-game-added.png" alt="The Marathon Infinity Gather Network Game dialog, with players named Pismo and TAM in game" /></p>
<p>Then click OK and the game will start!</p>
<p><em>Thanks to cheesestraws and treellama on 68kmla.org for additions and corrections!</em></p>
Mon, 04 Dec 2023 00:00:00 +0000
https://codingitwrong.com/2023/12/04/marathon-infinity-lan.html
https://codingitwrong.com/2023/12/04/marathon-infinity-lan.html