{
    "version": "https://jsonfeed.org/version/1",
    "title": "Samuel Plumppu",
    "feed_url": "https://samuelplumppu.se/feed.json",
    "description": "Experienced system developer in Switzerland, primarily working with TypeScript and Rust. Curious learner who enjoy writing and building projects. Beyond programming, I combine tech, systems thinking and Doughnut design for business to create a positive impact.",
    "icon": "https://samuelplumppu.se/images/favicon.svg",
    "author": {
        "name": "Samuel Plumppu",
        "url": "https://samuelplumppu.se/"
    },
    "items": [
        {
            "id": "https://samuelplumppu.se/blog/practicing-systems-thinking",
            "content_html": "<article><p>Regularly practicing systems thinking is worthwhile for your career, but also because it gives a valuable lens to explore and better understand the world we live in. Everything is connected, and by understanding the systems around us, we can identify <a href=\"https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/\" class=\"link\">leverage points</a> where we can make effective changes.</p><hr/><p>This text assumes some familiarity with the fundamental concepts of systems thinking, but should still be worthwhile for anyone curious about improving their craft and creating impactful software projects. If you're curious to learn more, I can highly recommend the following books:</p><ol><li>Fundamental Theory: <a href=\"https://libro.fm/audiobooks/9781603588478\" class=\"link\">Thinking in Systems: A Primer</a></li><li>Why and how this matters for the IT world: <a href=\"https://libro.fm/audiobooks/9781663747143\" class=\"link\">Learning Systems Thinking: Essential Non-Linear Skills and Practices for Software Professionals</a></li></ol><p>These books are some of the most influential and inspiring books I've read on the topic so far. After practicing systems thinking for several years and exploring many resources, they both strike a good balance between being approachable while introducing in-depth topics for further exploration.</p><hr/><h2>Software: one part of a larger socio-technical system</h2><p>As software systems evolve and change over time, both the complexity of the code itself and the interconnections between different parts of the system tend to increase. A technical change like this will also have social and cultural implications. This is because a similar increase in complexity and interconnections also happens for the people, teams and organizations involved in developing, maintaining and using the software.</p><p>For developers, working in this kind of environment requires a quite different perspective and approach compared to working with one isolated software module. The software itself needs to adapt to changing requirements and conditions, but also the needs and preferences of the people and organizations using the software. We could make this easier, for example by making the software flexible and ready for change by designing the software as a collection of composable modules.</p><p>But that's just how we could structure the software itself. 
There are also other expectations on both the software and the team developing it:</p><ul><li>The software adapts to the changing expectations of the world around us, including new regulation.</li><li>The software needs to work well for the people and organizations who use it, while also being accessible and performant.</li><li>Both the software and the team around it can handle changes within the organization, given that people come and go, while software can often stay in production for years or decades.</li><li>Ideally, the software should be delivered as quickly and smoothly as possible, making effective use of the resources, skills and time available in the (often cross-functional) team of product owners, designers, and developers.</li><li>The software receives regular updates and improvements, patching security issues and keeping a high development velocity.</li></ul><p>Combining all these perspectives, it becomes clear that professional software development requires more skills than programming: it's also about the surrounding system of people, processes and communication involved in making the software achieve the desired outcomes.</p><p>In other words, the software itself is part of a larger socio-technical system which surrounds and supports it. In some ways, software is quite similar to a living organism, interacting with its environment and gradually evolving. Both are systems of systems that would not exist without their environment, and they are both constantly shaping their environment just like it shapes them.</p><hr/><h2>Working with systems of systems</h2><p>There are many factors that determine how well a given piece of software and the teams and processes around it work together.</p><p>In this kind of environment, it's hard to find all solutions by only applying logic and thinking step by step, using an approach commonly referred to as <a href=\"https://en.wikipedia.org/wiki/Vertical_thinking#Vertical_thinking_vs_lateral_thinking\" class=\"link\">linear thinking</a>. While it's very useful when solving specific problems with clear conditions and boundaries, it's much harder (if even possible) to use linear thinking to solve problems in the complex socio-technical systems we often work with. Of course, there are still pure software and programming problems, but there are also many other kinds of work that need to be done which can't be solved in a clear, linear way. The software is often just one piece in a much larger puzzle, where the ability to deliver high-quality products and services relies on successful collaboration between many people.</p><p>It's also worth pointing out that this applies to all other software your project interacts with, each piece surrounded by its own socio-technical system. Even upstream and downstream dependencies - both internal and external - often need to be taken into account. How is your software project influenced by the risk of supply chain attacks?</p><p>Let me rephrase that - how is your organization contributing to the software you depend on, to make sure it's well-funded and well-maintained?</p><p>The main point is that all these socio-technical systems can be viewed both separately and as one large interconnected system. Given all their interconnections and varying conditions, how do we make all parts work well both on their own and together? There is no easy answer, but there are practices you can use to ask better questions and find where you can make changes. 
Systems thinking is one such practice.</p><hr/><h2>How systems thinking can help developers</h2><p>With this complex environment surrounding most projects, thinking about yet another layer of interactions can easily get overwhelming if you are not used to it. But it will get easier with practice.</p><p>Before we explore how systems thinking can help, it's important to note that this is just one approach among others. This is not <em>the</em> ultimate way to view the world in every situation. However, after practicing systems thinking for some years, I've found it very useful for getting a better understanding - and asking better questions.</p><p>Systems thinking is useful because it makes it easier and more enjoyable to work in this kind of environment. But like most skills worth learning, it takes practice, reflection and patience. To support your learning, it's helpful to create an intentional culture of curiosity, prototyping and willingness to improve not just your software development skills, but also your communication and collaboration skills. Ultimately, systems thinking can also help you grow as a person by increasing your self-awareness and making reflective learning a habit.</p><p>However, if I could only choose one main benefit of systems thinking for developers, it would be how it teaches you to identify <a href=\"https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/\" class=\"link\">leverage points</a> in systems.</p><hr/><h2>Using systems thinking to identify leverage points</h2><p>Sometimes, changing the software or the processes around its development is not enough. This is when we need to map out the surrounding systems in greater detail, exploring how the software interacts with people, organizations and even society and the living planet.</p><p>With a gradually more nuanced understanding of the system and its interconnections, we can start exploring <a href=\"https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/\" class=\"link\">leverage points</a>. These are places where we can make changes to affect the behaviour of the system. Just like a lever lets you move heavier objects with less effort, leverage points let you find places in a system where you can create effective change.</p><p>This is one of the most important skills to learn as a software developer: one of your primary career objectives should be to get better at identifying the highest-impact work you can do at any given moment. This will help you set priorities, and keep a healthy balance between proactive work (long-term improvements) and reactive work (short-term bug fixes). How? By saying no to less important tasks and by separating potential future ideas from the actual high-, mid- and low-impact work.</p><p>Working with systems of systems, you can learn to identify leverage points where you can make changes to subsystems to affect the outcome of the larger system. For example, identifying the backend service where you could invest your time and energy to get the biggest improvement to the overall user experience. 
Or even identifying ways to improve the communication and collaboration within your team, increasing your iteration speed while creating better conditions for team members to grow professionally.</p><p>This might not seem like system development if you just started working as a professional developer, but perspectives like these get more important as you gain more responsibility within your team and organization. Sometimes, it's more impactful to really get to know your team and find out how you can improve your collaboration, rather than purely focusing on completing tasks. Though this is usually not a problem if the organization has a strong learning culture where people genuinely want to improve both their craft and their processes.</p><hr/><h2>Beyond development: systems thinking as a core skill for the 21st century</h2><p>I've been actively exploring and practicing systems thinking for several years now, and the more I learn, the more convinced I become that systems thinking is a fundamental skill for anyone working with software. Not only for developers or architects, but also for people working with design, product and even the underlying <a href=\"https://doughnuteconomics.org/themes/business-enterprise\" class=\"link\">business design</a>.</p><p>Business design, you might think - how is this related to the software? In my view, the business design shapes the purpose of the organization and determines how features or improvements are prioritized, and which values and worldview get baked into the software. These things shape the product or service itself, which in turn shapes how it interacts with the surrounding world, in both positive and negative ways.</p><p>If your goal is to increase the positive impact while reducing the negative impact that your software has in the world, you need to understand at which layer to make effective changes. To achieve some types of real-world changes, you need to change your business design, consisting of purpose, networks, governance, ownership and finance. I highly recommend <a href=\"https://doughnuteconomics.org/themes/business-enterprise\" class=\"link\">Doughnut design for business</a> if you're curious. This is in stark contrast to trying to get different results while keeping the same structures that caused the problems in the first place. Not every organization is ready for this type of exploration yet, though <a href=\"https://doughnuteconomics.org/tools/doughnut-design-for-business-case-studies\" class=\"link\">more and more are</a>.</p><p>Some organizations that have successfully used systems thinking and identified leverage points are: <a href=\"https://libro.fm/\" class=\"link\">Libro.fm</a> for buying audiobooks while supporting bookshops, <a href=\"https://subvert.fm/\" class=\"link\">Subvert.fm</a> for buying music while supporting artists, and <a href=\"https://www.fairphone.com/\" class=\"link\">Fairphone</a> for repairable phones, modular hardware and software longevity. All of these organizations have found real-world problems where old, obsolete business models caused negative externalities for people and planet, and are now working towards fixing them. The leverage points they used were partly in the software layer, but mainly in the business design.</p><p>By practicing systems thinking and learning to map out systems and how they are interconnected, you can more easily identify opportunities for improvement. 
This can also help you mitigate risks, increase resilience and improve the maintainability of not just the software, but also the people and organization around it.</p><hr/><h2>Practicing systems thinking</h2><p>Here are some ideas for how to start learning - and most importantly, practicing - systems thinking:</p><ul><li><strong>Explore theory</strong> - Read books and research papers, watch talks or listen to podcasts. Be an active learner and write notes about things you find interesting and useful. Write down your questions about things you don't understand, and later on follow up when you find good answers. Also, <a href=\"https://www.nature.com/articles/s44222-025-00323-4\" class=\"link\">writing is thinking</a>: it helps you organize your thoughts and improves your communication, because now you have a structure of words, concepts and ideas that you can share with others.</li><li><strong>Use systems thinking in practice</strong> - View the world (and your projects) as systems. Try to understand how they work, and how they are interconnected. Apply the tools and methods from systems thinking, and with time you'll start to find things like new connections that you wouldn't have thought about before, or even leverage points where you could make the most impactful changes.</li><li><strong>Experiment and learn</strong> - Approach problems with curiosity and view them as learning opportunities. Most often, you can experiment and learn how to find better solutions. Remember that everything is a process and the best solutions often take several iterations.</li><li><strong>Reflect on your learning</strong> - No matter the outcome of the experiments, write down what went well, what didn't and what can be improved in the next iteration. Maybe you missed a critical detail of how a system works, causing your solution to not work as intended. Maybe something happened in the social aspect of software development that affected your outcome. Also make sure to highlight the things that did go well, and prioritise new iterations and experiments instead of overanalyzing.</li><li><strong>Iterate</strong> - Repetition is key. Start small and make it a habit.</li></ul><hr/><h2>The developer as a learning system</h2><p>As for my personal journey, the last few years have been highly rewarding and taught me a lot about systems thinking for software development. Sometimes helping me identify and take care of high-impact work and make meaningful contributions. Sometimes finding and fixing a potential problem before it causes too much trouble. Sometimes failing to do so and instead gaining experience and learning valuable lessons. Most importantly though, the core practice of regular experimentation, reflection and learning is probably the most valuable skill of all.</p><p>A reflective and systematic approach to learning changes your perspective so that the only true failures are when things don't go as planned and you <em>also</em> fail to learn anything from the experience. In my view, this is the difference between feeling content with passively gaining more <em>experience</em> of various situations, and actively improving your <em>proficiency</em> by following your curiosity and deliberately practicing your skills and craft.</p><p>Given that everything changes constantly, skills like continuous learning and the ability to both understand systems and find leverage points to make effective interventions will only become more important. 
Especially for software developers.</p><p><strong>To wrap up, I'd like to repeat what I wrote in the beginning:</strong></p><p>Regularly practicing systems thinking is worthwhile not only for your career, but also because it gives you a valuable lens to explore and better understand the world we live in. Everything is connected, and by understanding the systems around us, we can identify <a href=\"https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/\" class=\"link\">leverage points</a> where we can make effective changes.</p><p>Let's move fast and fix things 🌱</p></article>",
            "url": "https://samuelplumppu.se/blog/practicing-systems-thinking",
            "title": "Practicing systems thinking to become a better developer",
            "date_modified": "2026-01-22T00:00:00.000Z",
            "date_published": "2026-01-22T00:00:00.000Z",
            "tags": [
                "Systems Thinking",
                "Productivity"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/testing-rust-cli-apps",
            "content_html": "<article><p>I recently finished reading the book <a href=\"https://rust-cli.github.io/book/\" class=\"link\">Command Line Applications in Rust</a>, experimenting with the exercises and adding more test cases. While the testing chapter covers the <a href=\"https://rust-cli.github.io/book/tutorial/testing.html\" class=\"link\">basics of testing CLI apps</a>, it didn't show the full potential of the recommended crates <a href=\"https://crates.io/crates/assert_cmd\" class=\"link\">assert_cmd</a>, <a href=\"https://crates.io/crates/assert_fs\" class=\"link\">assert_fs</a> and <a href=\"https://crates.io/crates/predicates\" class=\"link\">predicates</a>.</p><p>Specifically, I wondered how to create a temporary directory with multiple nested subdirectories and files. This is very useful for testing CLI tools that scaffold projects. Or in the case of my current Rust project, building a simple Git clone to learn more about how Git works internally, how to structure unit and integration tests in Rust, and to practice using the language.</p><p>Since I'm used to reading technical documentation and learning new technologies, I found what I needed without too much trouble by exploring <a href=\"https://docs.rs/assert_fs/latest/assert_fs/struct.TempDir.html\" class=\"link\">docs.rs/assert_fs</a> to learn about temporary directories, and <a href=\"https://docs.rs/assert_cmd/latest/assert_cmd/cmd/struct.Command.html#method.current_dir\" class=\"link\">docs.rs/assert_cmd</a> for executing the CLI app in a specific working directory. However, this seemed like a good opportunity to improve the Rust CLI book itself, to make common testing techniques easier to discover for people who might be new(er) to programming and not yet comfortable with jumping into technical documentation. I remember how much I valued these types of friendly and accessible descriptions when I started more than a decade ago, and these days I think they are important to make software development more accessible.</p><p>So I ended up <a href=\"https://github.com/rust-cli/book/pull/284\" class=\"link\">contributing</a> a new section describing the following example to hopefully make the book even better. If you're curious about how it works, I encourage you to read the <a href=\"https://github.com/rust-cli/book/pull/284/files\" class=\"link\">full version</a> which gives some more context.</p><pre><code>#[test]\nfn find_content_in_file_of_tmp_dir() -> Result&#x3C;(), Box&#x3C;dyn std::error::Error>> {\n    let cwd = assert_fs::TempDir::new()?;\n\n    let child_dir = cwd.child(\"nested/child_dir\");\n    let child_file = child_dir.child(\"sample.txt\");\n\n    child_file.write_str(\"The first\\ntest file.\\nLast line of first file.\")?;\n\n    // Files can be nested several levels within the temporary directory\n    assert!(child_file.path().ends_with(\"nested/child_dir/sample.txt\"));\n\n    cargo_bin_cmd!(\"grrs\")\n        // Execute in the temporary directory\n        .current_dir(cwd.path())\n        .arg(\"first\")\n        .arg(child_file.path())\n        .assert()\n        .success()\n        .stdout(predicate::str::contains(\n            \"The first\\nLast line of first file.\",\n        ));\n\n    Ok(())\n}\n</code></pre><p>Making this change was also a good opportunity to learn more about how <a href=\"https://github.com/rust-lang/mdBook\" class=\"link\">mdBook</a> works, which is commonly used for many Rust books and technical tutorials. 
I really enjoyed how fast it builds, and how it even runs tests for the code samples to verify everything works as expected. This makes technical writing such a smooth experience.</p><p>And speaking of the Git clone, it's coming together nicely and has already taught me a lot about both Rust programming and how to create integration tests that simulate a Git repository. In fact, these testing techniques helped me catch a regression while refactoring to reuse some code between <code>git cat-file</code> and <code>git ls-tree</code>, so this knowledge has already proven useful!</p></article>",
            "url": "https://samuelplumppu.se/blog/testing-rust-cli-apps",
            "title": "Testing Rust-based CLI applications using temporary directories",
            "date_modified": "2026-01-18T03:42:21.000Z",
            "date_published": "2026-01-17T00:00:00.000Z",
            "tags": [
                "Rust",
                "Terminal",
                "Open Source"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/git-cms-tip-automatic-updated-at-timestamp",
            "content_html": "<article><p>One of the main benefits of using a database to power either a backend API or a CMS is the ability to get consistent <code>updatedAt</code> timestamps for entries like posts, comments or other forms of content. Until recently, this has been one of the main features that made me consider database-powered CMSes and backends even in cases where 90% of the project would work much better with a Git-based CMS.</p><p>However, it turns out that even Git-based CMSes can get this functionality, with only a few lines of code. By taking advantage of the fact that Git tracks the modification time for each file in commits, we can get reliable results since there is a difference between the <em>commit timestamp</em> and the <em>filesystem modification timestamp</em>. This is needed since the filesystem likely will change more often, and might not even be reliable at all in CI or production environments.</p><p>This means that blogs and other projects using Git-based CMSes to store content in Git like Markdown, JSON or YAML now can get the <code>updatedAt</code> timestamp automatically. Since it's built into the <code>git</code> command, this works with any programming language where you can spawn child processes - or where you use a dedicated git library directly to get the same information.</p><h2>How to extract <code>updatedAt</code> by using <code>git log</code></h2><p>To keep things simple, this technique assumes your content is stored as separate files, like for example <code>posts/*.md</code>. However, it should also be possible to extract the git commit timestamp for specific lines within each file, perhaps by using the Git blame information.</p><p>For now though, let's focus on the simple use case with separate files like <code>posts/*.md</code>. If you group data together in to for example one big <code>posts.json</code> file, then the following code will only give you the modification time for the entire file. It might still be good enough for some kinds of data, but this is worth considering when modeling your data and how it's stored.</p><pre><code># Get the unix timestamps for when file was last modified.\n# `--follow` allows us to take file renames into account.\n# The extra `--` prevents the file name from clashing with\n# git flags or options.\n#\n# By sorting the timestamps, we can reliably find the\n# newest/oldest timestamp, even if the Git commits show in\n# a different order due to rebases/merges. 
When reversing\n# the sort, the most recent timestamp will be at the start,\n# and can be retrieved with `head -n1`:\ngit log --follow --format=%ad --date unix -- &#x3C;FILE> |\\\n    sort --reverse |\\\n    head -n1\n\n# To get the timestamp for when the file was created,\n# use `tail -n1` instead:\ngit log --follow --format=%ad --date unix -- &#x3C;FILE> |\\\n    sort --reverse |\\\n    tail -n1\n</code></pre><hr/><h2>Example implementation</h2><p>You could make use of this in Node.js and TypeScript like this:</p><pre><code>import { execSync } from 'node:child_process'\n\n/**\n * Use Git to determine when a file was last modified.\n *\n * This is more accurate than using the file system,\n * where changes happen more freqeuntly.\n *\n * @param path The file path to operate on.\n * @returns The `updatedAt` Date, or undefined if the\n * file has not yet been modified.\n */\nfunction getFileUpdatedAtFromGit(path: string) {\n    // Get the most recent UNIX timestamp for when file was\n    // modified in Git.\n    //\n    // By using `--follow`, we get the full history even if\n    // the file was renamed. This uses the Git author timestamp,\n    // because the commit timestamp is not as accurate and\n    // may change during rebases and merges\n    //\n    // Timestamps are sorted since rebases/merges might have\n    // caused commits to show in a different order\n    const rawTimestamps = execSync(\n        `git log --follow --format=%ad --date unix -- ${path} | sort --reverse`,\n    ).toString()\n\n    // Sepearate entries and only keep valid ones\n    const timestamps = rawTimestamps.split('\\n').filter(Boolean)\n\n    // If we only have one timestamp, the file was\n    // just created and has not yet been updated.\n    if (timestamps.length &#x3C; 2) {\n        return\n    }\n\n    // Git stores timestamps in seconds, so we need to\n    // convert to ms to get the expected JS date.\n    const updatedAt = new Date(parseInt(timestamps[0]) * 1000)\n    return updatedAt\n}\n</code></pre><p>While Git-based CMSes are no perfect solution for all problems, this makes them even more viable, simplifying apps and websites into low-cost and easy maintenance systems that don't require any server components or databases. This is ideal for resource-constrained environments, and happens to be great for security and performance too.</p></article>",
            "url": "https://samuelplumppu.se/blog/git-cms-tip-automatic-updated-at-timestamp",
            "title": "Git-based CMS tip: Automatic updatedAt timestamps with Git",
            "date_modified": "2026-01-18T05:34:57.000Z",
            "date_published": "2025-12-16T00:00:00.000Z",
            "tags": [
                "Git",
                "Shell Scripting",
                "TypeScript"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/live-reloading-db-schema-with-drizzle-sqlite",
            "content_html": "<article><p>During development, automatically reloading modified TypeScript code and restarting the app gives a significant productivity boost. This is very common for all kinds of frontend and backend projects.</p><p>But what if you could live reload and automatically migrate your local development database whenever you change your <a href=\"https://orm.drizzle.team/\" class=\"link\">Drizzle</a> schema?</p><p>This seemed a bit crazy at first, but it turns out this works quite well! Let's explore how.</p><hr/><h2>Prototyping database schemas with Drizzle</h2><p>When using Drizzle, you define your database schema in a <code>schema.ts</code> file. For local development, it's usually a good idea to use the command <code>drizzle-kit push</code> to automatically migrate the database and smoothly iterate on your schema.</p><p>The <code>push</code>-based workflow usually requires you either:</p><ol><li>Stop the dev server, run <code>drizzle-kit push</code> and then start the dev server again.</li><li>Or keep two terminals open to manually run the <code>drizzle-kit push</code> in one of them and then saving a file to trigger a dev server restart in the other.</li></ol><p>If you make several rapid schema changes, both these alternatives require manual interactions (and your attention) even for tiny updates. For larger schema changes you will almost always need to manually review the <code>push</code>-based migration. Alternatively, you could just reset the DB and insert new seeding data that has been updated to the latest schema.</p><h2>Solution: live reloading database schemas</h2><p>I've so far only confirmed this solution works with SQLite, but it could likely also work for other databases supported by Drizzle, like Postgres.</p><p>Also note that the following Drizzle versions were used, and there might be changes for this to work with the upcoming Drizzle <code>v1</code>:</p><pre><code>{\n    \"drizzle-orm\": \"0.44.7\",\n    \"drizzle-kit\": \"0.31.7\"\n}\n</code></pre><p>The core idea is pretty simple. Only in development, run a small script when the backend starts, performing the following actions:</p><ol><li>Run <code>drizzle-kit push</code> in a non-interactive child-process (without <code>stdio</code>) and capture the output. 
The non-interactive child process ensures that the CLI prompt (if any is shown) will be aborted, preventing accidental data loss.</li><li>Parse the output and determine if there is any prompt about data loss.</li><li>If there were schema changes that would cause data loss, abort the backend startup process with an error, and make it clear that manual actions are required.</li><li>However, most small schema changes just work, and in those cases Drizzle automatically applies the schema changes and lets the backend start as normal.</li></ol><p>This is the most important part of the script, implementing what was described above:</p><pre><code>// dev-db-check.ts\nimport { execSync } from 'node:child_process'\nimport { styleText } from 'node:util'\n\n// Adjust this to however your project exposes its dev flag,\n// e.g. an environment variable or a build-time constant.\nconst DEV = process.env.NODE_ENV !== 'production'\n\nif (!DEV) {\n    throw new Error('This module should only be imported during development')\n}\n\n// `drizzle-kit push` validates the DB schema changes\n// and attempts to migrate your database.\n// Schema changes without data loss are applied immediately.\n// This effectively works like a \"live reload\" for your\n// DB schema, which is very useful during development.\nconst dbCheckResult = execSync('npx drizzle-kit push').toString()\n\n// However, abort with an error if data loss could happen\nif (/warning|data loss|revert|abort/gi.test(dbCheckResult)) {\n    const msg = `\\nSchema changes with potential data loss detected. Please resolve manually:\\n`\n\n    console.error(styleText('red', msg))\n    console.error(dbCheckResult + '\\n')\n    process.exit(1)\n}\n</code></pre><p>I chose to put this code in a separate module, to clearly distinguish it from the main code path during production.</p><p>You can explore the <a href=\"https://github.com/paccao/allerthsbageri.se/pull/141/files#diff-6eb830c2d4ca2deb49aa3df603d79a9f6341f4be13c21d39959b79384748c873\" class=\"link\">full code for live reloading database schemas</a>. The same <a href=\"https://github.com/paccao/allerthsbageri.se/pull/141\" class=\"link\">PR</a> also implements some related DX enhancements explained below:</p><h2>Improving the local development experience</h2><p>Now that the core feature of live reloading the DB schema is in place, we can add several related quality of life improvements too:</p><ol><li>Automatically validate the DB schema to ensure the database is in a good state before starting the app. This runs every time you (re)start the dev server, letting you focus on other problems than keeping your development database in sync with the latest Drizzle schema.</li><li>Simplify first-time local setup by automating the creation of a development DB and adding seeding data if it does not exist.</li><li>Automatically re-create the development DB and add seeding data if you have just removed or replaced the development database. Common when working with SQLite and renaming/removing databases used to test various states.</li></ol><h2>Closing thoughts</h2><p>I would only use this for local development where I host the DB locally, and not with so-called \"serverless\" database services.</p><p>Again, I've only tested this with SQLite, so if you try this method with another DB - <a href=\"https://fosstodon.org/@Greenheart\" class=\"link\">let me know</a> how it works!</p><p>And if you have important data in your development database - please make sure you have backups before using scripts like this to automatically modify your database.</p><p>With that said, I hope this will make it easier to experiment and iterate on your database schema just like it did for me!</p></article>",
            "url": "https://samuelplumppu.se/blog/live-reloading-db-schema-with-drizzle-sqlite",
            "title": "Live Reloading Database Schemas with Drizzle and SQLite",
            "date_modified": "2026-01-16T18:16:56.000Z",
            "date_published": "2025-12-03T00:00:00.000Z",
            "tags": [
                "TypeScript",
                "Drizzle",
                "SQLite"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/keystatic-sveltekit-markdoc",
            "content_html": "<article><p>Have you ever worked on a <a href=\"https://svelte.dev/docs/kit/introduction\" class=\"link\">SvelteKit</a> project where you want to use the Git-based <a href=\"https://keystatic.com\" class=\"link\">Keystatic CMS</a>? Up until now that has usually meant installing a separate web framework like Astro/Remix/Next.js just to run the CMS, which might not always be desirable.</p><p>After some experimentation, I found that it's actually possible to use Keystatic directly in your SvelteKit project! This makes it possible to use the same dev server and, if you want, the same production server.</p><p>You can even combine Keystatic with <a href=\"https://github.com/CollierCZ/markdoc-svelte\" class=\"link\">markdoc-svelte</a> to make your SvelteKit project render <a href=\"https://markdoc.dev/\" class=\"link\">Markdoc</a> content with custom formatting, interactive Svelte components and use other powerful features of Markdoc.</p><p>In combination, this gives you a solid foundation to build apps and websites where you want to make content editing accessible to your entire team via Keystatic CMS, and especially with their <a href=\"https://keystatic.com/docs/github-mode\" class=\"link\">GitHub</a> mode.</p><p><strong>Quick start: Check out the <a href=\"https://github.com/Greenheart/keystatic-sveltekit\" class=\"link\">keystatic-sveltekit</a> repository if you want the simplest way to add Keystatic to your SvelteKit project.</strong></p><p>If you also want to understand <em>when</em> to use this setup as well as <em>why</em> and <em>how</em> I designed it the way I did - then you've come to the right place! :)</p><hr/><h2>Part 1: Why use a Git-based CMS like Keystatic?</h2><p>Most mobile apps, web apps and websites don't need a complex backend or even a database. Instead, you can use a Git-based CMS like Keystatic to store content together with your code. This simplifies your tech stack, reduces hosting costs and can help you increase the security of your system.</p><p>Keystatic CMS allows non-technical people to use a graphical, web-based interface to make content changes that automatically syncs with your Git repository in the background.</p><h3>When to use a Git-based CMS like Keystatic:</h3><ul><li>You can easily represent content as Markdown/Markdoc/JSON/YAML files, images, and other static assets and doesn't need a backend server or database just to manage content.</li><li>You don't need a backend server or database at all.</li><li>You want to keep your project as simple and with as few moving pieces as possible.</li><li>You don't want to maintain Docker-containers for a traditional CMS or for its database, and you don't want to maintain any database backups, apart from your regular Git repository.</li><li>You want to be able to Git checkout any historical commit and automatically get the correct content, in the right format, matching the code implementation at the time. This allows you to quickly achieve what you want, instead of first having to find and restore an old DB backup - if it even still exists years later!</li><li>You want to keep hosting costs as low as possible for your web app, mobile app or website, for example by building static content and caching it via a CDN.</li></ul><h3>When to look for other CMS solutions:</h3><ul><li>You need a CMS that integrates with your existing backend system to read and modify custom data types.</li><li>You need complex content types that can't be represented as static files. 
You can achieve surprisingly much with <a href=\"https://keystatic.com/docs/fields/relationship\" class=\"link\">relationships</a> in Keystatic, especially if you add content build scripts that run together with your regular project build to verify content and transform it into the format used by your app or website (<a href=\"https://github.com/Greenheart/idg.tools/blob/a70dc2cf41507e1e2036a181632ee13298bb8923/content/scripts/build-content.ts\" class=\"link\">example</a>). However, for more complex relationships and cascading updates for related entries, you might want to use another solution.</li><li>You are implementing e-commerce features, or similar cases where you need a server and database to keep track of orders and products, for example to prevent multiple people from ordering the final item.</li><li>If you are working on an open source project and want to allow anyone (not just trusted collaborators) to propose content changes that automatically get submitted as pull requests to your project. In this case you need Open Authoring, currently an <a href=\"https://github.com/Thinkmill/keystatic/issues/1433\" class=\"link\">open feature request</a> for Keystatic but supported in other Git-based CMSes like <a href=\"https://decapcms.org/docs/open-authoring/\" class=\"link\">Decap CMS</a>.</li></ul><hr/><h2>Part 2: Integrating Keystatic CMS with SvelteKit projects</h2><p>I've worked on many projects where a Git-based CMS made development and content collaboration straightforward and enjoyable. For example, the multilingual mobile app <a href=\"https://github.com/29ki/29k\" class=\"link\">Aware (29k)</a>, the web app <a href=\"https://github.com/Greenheart/idg.tools\" class=\"link\">IDG.tools</a> and several websites. After using other Git-based CMSes, I started using Keystatic CMS in 2023 and found it to be both reliable and full of useful features.</p><p>Up until now, using Keystatic together with SvelteKit usually meant creating a separate Astro/Remix/Next.js project just to serve the CMS. In some cases, it might be desirable to run Keystatic entirely separately from the main SvelteKit app or website, since this isolates dependencies and could improve security and performance. In smaller projects though, it's more convenient and usually preferable to only have one Vite dev server, and only deploy one SvelteKit app to production.</p><p>After thinking about this in several projects, I implemented a solution that evolved into <a href=\"https://github.com/Greenheart/keystatic-sveltekit\" class=\"link\">keystatic-sveltekit</a>.</p><p>Thanks to the fact that the Keystatic API is framework-agnostic, this was a lot simpler than expected. And it works surprisingly well for both development and production usage.</p><h3>Keystatic consists of two parts:</h3><ol><li><p><strong>Backend:</strong> Keystatic exports the <code>makeGenericAPIRouteHandler</code> function that can be called with a <a href=\"https://keystatic.com/docs/configuration\" class=\"link\">keystatic.config.ts</a> to create a generic API endpoint that handles routing internally. The Keystatic API endpoint accepts a standard <a href=\"https://developer.mozilla.org/en-US/docs/Web/API/Request\" class=\"link\">Request</a> object and returns <code>{ body, headers, status, statusText }</code> which can be returned as a standard <a href=\"https://developer.mozilla.org/en-US/docs/Web/API/Response\" class=\"link\">Response</a>. 
This is framework-agnostic as long as you serve it at the route <code>/api/keystatic/[...rest]</code>.</p></li><li><p><strong>Frontend:</strong> A React-based SPA that only renders on the client side. Similar to the API, the frontend expects to be served from <code>/keystatic/[...rest]</code>.</p></li></ol><h3>Integrating Keystatic with SvelteKit</h3><p>There are multiple ways to run Keystatic together with SvelteKit within the same Vite server. What follows are some of the alternatives I've considered, along with the pros and cons of each. I found this exploration quite useful for deepening my understanding of how Keystatic, SvelteKit and Vite work, and hopefully you will too.</p><h3>Goals and requirements</h3><p>Our main goal is to make both the API endpoint and the frontend routes available inside the SvelteKit app. Starting the <code>vite</code> dev server should make it possible to use Keystatic locally, reloading the CMS as soon as the <code>keystatic.config.ts</code> changes. Similarly, running <code>vite build &amp;&amp; vite preview</code> should make a production build and serve it.</p><p>The first version will only explicitly support Node.js, but it should be possible to add support for other SvelteKit adapters and runtimes.</p><p>When using Keystatic, it could be fine to use the <code>local</code> storage and only enable the CMS in the local development environment. Though most projects probably want to use the <code>github</code> storage to allow simpler content collaboration. The integration with SvelteKit should be flexible and support all modes and options of Keystatic.</p><h3>Serving the Keystatic API within a SvelteKit project</h3><p>This was straightforward and went smoothly thanks to the <code>makeGenericAPIRouteHandler</code>.</p><ul><li><p>Serving the Keystatic API from a <a href=\"https://svelte.dev/docs/kit/routing#server-Fallback-method-handler\" class=\"link\">fallback</a> handler. If we add this to the <code>/api/keystatic/[...rest]</code> route, it will respond to all incoming requests, no matter which HTTP verb was used.</p><ul><li>Works well, but requires modifications to the project routes.</li></ul></li><li><p><strong>Chosen solution:</strong> Serving the Keystatic API from the SvelteKit <a href=\"https://svelte.dev/docs/kit/hooks#Server-hooks-handle\" class=\"link\">handle</a> hook. Achieves the same result with minimal code changes, making it much simpler to add Keystatic. A minimal sketch of this approach is shown in the overview section below.</p></li></ul><h3>Serving the Keystatic frontend within a SvelteKit project</h3><p>Integrating custom frontend routes in a SvelteKit app is a bit trickier, but totally doable.</p><ul><li><p>Serving the Keystatic SPA within the SvelteKit app with <a href=\"https://github.com/bfanger/svelte-preprocess-react\" class=\"link\">svelte-preprocess-react</a>, and by adding the route <code>/keystatic/[...rest]</code>. Works quite well with hot module reloading for dev, which is important when you change the <code>keystatic.config.ts</code>.</p><ul><li>The major problem with this approach is that the styles likely will interfere with each other and cause problems, since technically both apps share the same styles. This can be worked around by importing styles to specific routes only, or more drastically, by adding SvelteKit <a href=\"https://svelte.dev/docs/kit/advanced-routing#Advanced-layouts-%28group%29\" class=\"link\">layout groups</a> to completely isolate the CMS routes from the rest of your project. 
However, this requires significant code changes and makes it much harder to integrate with a SvelteKit project.</li><li>Another drawback with this approach is that it adds an extra dependency, and renders React inside the SvelteKit app, which is unnecessary overhead since we don't use any SvelteKit or Svelte code at all on the CMS routes.</li><li>Since the Keystatic SPA doesn't support SSR or prerendering, we can't use the full potential of <code>svelte-preprocess-react</code> either.</li></ul></li><li><p>Prebuilding the Keystatic SPA separately and serving it as static assets. The basic idea is good since it clearly separates React from SvelteKit, removing some unnecessary JS and keeping the styles separate so you don't need to make drastic routing changes like adding SvelteKit layout groups. It works quite well if you always start the CMS by visiting the <code>/keystatic</code> path. However, opening a specific route like <code>/keystatic/collection/posts</code> won't work, unless you add custom routing logic in, for example, the <code>handle</code> hook.</p></li><li><p><strong>Chosen solution:</strong> Prebuilding the Keystatic SPA and serving it from the <code>handle</code> hook together with the API routes. This makes the internal implementation of the integration more complex, but makes it as simple as possible to add Keystatic to SvelteKit projects. It also offers the best developer experience since the expected features like hot reloading during development work out of the box.</p></li></ul><p>Both the API and the frontend are best served from the <code>handle</code> hook. However, one critical piece we haven't explored yet is how to build the Keystatic React SPA so it can be served by the <code>handle</code> hook.</p><h3>Building the CMS in the background</h3><p>A few alternatives were considered:</p><ul><li><p>Building the CMS when the SvelteKit app starts and the first request is sent to the <code>handle</code> hook. This could work for basic cases, but since we want to integrate more deeply with the Vite dev server and adapt to the project configuration, we get many benefits from building in a Vite plugin. Also, it's better to run the CMS build step as early as possible.</p></li><li><p><strong>Chosen solution:</strong> Building the CMS in the background with a Vite plugin. This gives a lot of flexibility and deep integration with the underlying server as well as the Vite build process. This makes it simpler to implement features like hot reloading during development.</p><ul><li>The Vite plugin build went through several iterations: Initially, it all happened in the same process to get a working prototype, but this blocked the SvelteKit app from starting.</li><li>Then, the Vite plugin started child processes to build in the background. This unblocked the main thread, but added significant overhead for starting and stopping each process.</li><li>Most recently, builds run as a one-off worker for production, and a reusable worker pool during development. We can use Node.js workers to get efficient builds without the overhead of starting and stopping child processes. One good reason for using workers instead of child processes is that the CMS build is CPU-bound (compilation and bundling) rather than I/O-bound (file system). 
By reusing the worker multiple times (worker pool) during development, we get further performance improvements.</li><li>Another thing worth noting is that the CMS is bundled with all dependencies, including both the Keystatic CMS and the <code>keystatic.config.ts</code> of the current project. Since we use <code>esbuild</code>, performance is no problem. It would be nice to only rebuild <code>keystatic.config.ts</code> for hot reloads during development, and only build the CMS bundle when restarting the <code>vite</code> process, since that's the only time the CMS bundle might need to be updated. However, bundling everything together is much simpler, and has good enough performance for now.</li></ul></li></ul><h3>Finding the right trade-offs</h3><p>As of October 2025, the best way to add Keystatic to a SvelteKit project is via the <code>handle</code> hook in <code>hooks.server.ts</code> to serve both the API and the frontend. This should be combined with a Vite plugin added in <code>vite.config.ts</code> to build (and rebuild) the CMS.</p><p>With the <code>handle</code> hook, we get complete control to handle incoming requests and return responses. However, since routes implemented in the <code>handle</code> hook are not part of the regular SvelteKit router, they need to be manually added to the build output, and will only be available in production if we use an adapter like <code>@sveltejs/adapter-node</code>.</p><p>Looking to the future, this might get even simpler if we could register routes programmatically from a SvelteKit plugin/integration, similar to how this is implemented in the Keystatic integration for Astro: <a href=\"https://github.com/Thinkmill/keystatic/blob/63c767bbb8b9bbc96c30535862bcccfbbc4ea346/packages/astro/src/index.ts\" class=\"link\">@keystatic/astro</a>. A related feature request would be to make it possible to control which routes should be prerendered when programmatically defining routes.</p><p>There is an open <a href=\"https://github.com/sveltejs/kit/issues/8896\" class=\"link\">issue</a>, so let's see what the future brings.</p><p>For now, registering the API route via the <code>handle</code> hook is a good workaround.</p><hr/><h3>Overview of how <code>keystatic-sveltekit</code> works:</h3><p>Now that we have explored why the integration was implemented the way it is, here's an overview of how to add Keystatic to your SvelteKit project. 
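</p><p>To make the chosen <code>handle</code> hook approach concrete, here's a minimal sketch of how the Keystatic API can be served. This is a simplified illustration rather than the actual implementation - the real integration also serves the prebuilt CMS frontend and handles more edge cases - and it assumes your <code>keystatic.config.ts</code> has a default export:</p><pre><code>// hooks.server.ts - simplified sketch of the API part only\nimport type { Handle } from '@sveltejs/kit'\nimport { makeGenericAPIRouteHandler } from '@keystatic/core/api/generic'\n\nimport config from '../keystatic.config'\n\nconst keystaticAPI = makeGenericAPIRouteHandler({ config })\n\nexport const handle: Handle = async ({ event, resolve }) => {\n    // Let the Keystatic API handle all requests to its routes,\n    // no matter which HTTP verb was used.\n    if (event.url.pathname.startsWith('/api/keystatic')) {\n        const { body, headers, status, statusText } = await keystaticAPI(\n            event.request,\n        )\n        return new Response(body, { headers, status, statusText })\n    }\n\n    return resolve(event)\n}\n</code></pre><p>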
The simplest way is to make a copy of the <a href=\"https://github.com/Greenheart/keystatic-sveltekit\" class=\"link\">keystatic-sveltekit</a> repository to use as a foundation for your project.</p><p>Here are the most important parts of the project that make it work together:</p><ol><li><p>The <code>lib/keystatic/</code> directory implements the integration.</p></li><li><p><code>keystatic.config.ts</code> defines your content collections and how they show up in the CMS editor.</p></li><li><p>The Vite plugin (re)builds the CMS frontend:</p></li></ol><pre><code>// vite.config.ts\nimport { defineConfig } from 'vite'\nimport { sveltekit } from '@sveltejs/kit/vite'\nimport { keystatic } from '$lib/keystatic'\n\nexport default defineConfig({\n    plugins: [keystatic(), sveltekit()],\n})\n</code></pre><ol start=\"4\"><li>The <code>handleKeystatic</code> hook serves the CMS frontend and API:</li></ol><pre><code>// hooks.server.ts\nimport { handleKeystatic } from '$lib/keystatic'\n\nexport const handle = handleKeystatic()\n</code></pre><p>Alternatively, if you have multiple hooks:</p><pre><code>// hooks.server.ts\nimport { sequence } from '@sveltejs/kit/hooks'\nimport { handleKeystatic } from '$lib/keystatic'\n\nexport const handle = sequence(...yourOtherHandleHooks, handleKeystatic())\n</code></pre><ol start=\"5\"><li>And finally, to support prerendering, you can customise the <code>svelte.config.ts</code>:</li></ol><pre><code>// svelte.config.ts\nimport { type Config } from '@sveltejs/kit'\nimport { isKeystaticRoute } from './src/lib/keystatic/index.ts'\n\nconst config = {\n    kit: {\n        prerender: {\n            handleHttpError({ path, message }) {\n                // Ignore prerendering errors for Keystatic CMS\n                // since it's a SPA that only supports CSR.\n                if (isKeystaticRoute(path)) return\n\n                // Fail the build in other cases.\n                throw new Error(message)\n            },\n        },\n    },\n} satisfies Config\n\nexport default config\n</code></pre><h2>Part 3: How to render Markdoc content with interactive Svelte components</h2><p>You can find a working implementation in the <a href=\"https://github.com/Greenheart/keystatic-sveltekit\" class=\"link\">keystatic-sveltekit</a> repository, but I won't cover Markdoc rendering further in this post since it's already long. Let me know if you would like to explore it in a future post though.</p><hr/><h2>Future improvements: official Keystatic integration, easier project setup</h2><p>Thanks to the generic API handler, it's possible to integrate the Keystatic API with basically any backend framework for Node.js/Deno/Bun. Now that we know this works for SvelteKit, Astro, and Remix, it should also be possible to integrate Keystatic with other Vite-based frameworks too. Rendering the React-based Keystatic frontend is the tricky part (for non-React-based frameworks), but definitely possible.</p><p>I considered whether it would be worth creating a Vite plugin like <code>vite-plugin-keystatic</code> to support any Vite-based meta-framework like SvelteKit, Astro, Remix and more. 
However, since the routing is deeply integrated and highly framework-specific, it's probably a better idea to maintain separate, minimal adapters, like <code>@keystatic/astro</code> and soon, perhaps even a <code>@keystatic/sveltekit</code> adapter that simplifies and standardizes the solutions we explored in this blog post.</p><p>Speaking of which - do you think it would be worth creating an adapter like <code>@keystatic/sveltekit</code> along with a starter project, and contributing it to the Keystatic project? That would take some initial work, and maintenance in the future, but would make it possible to use the Keystatic CLI to rapidly scaffold a Keystatic project. And if we have the <code>@keystatic/sveltekit</code> adapter, it would be possible to create a <code>keystatic</code> addon for the <a href=\"https://github.com/sveltejs/cli\" class=\"link\">Svelte CLI</a>, to simplify adding Keystatic in both new and existing projects.</p><h2>Closing thoughts</h2><p>If you take one thing away from all this, let it be the fact that it's really important to create good public APIs for your library. Just look at what happened thanks to <code>@keystatic/core</code> making the right building blocks available (<code>makeGenericAPIRouteHandler</code>) to allow customization beyond what was originally intended.</p><p>This way of integrating Keystatic with SvelteKit has already simplified several of my projects. It could certainly be refined though, so you're welcome to join the discussion and help make it better. One interesting area would be to explore how it works with other SvelteKit adapters, and submit issues and pull requests to make the integration easier to use.</p><p><strong>Check out the <a href=\"https://github.com/Greenheart/keystatic-sveltekit\" class=\"link\">keystatic-sveltekit</a> repository to learn how to add Keystatic to your project.</strong></p><p>I'm looking forward to hearing what you build using SvelteKit and Keystatic!</p><p>Happy hacking!</p></article>",
            "url": "https://samuelplumppu.se/blog/keystatic-sveltekit-markdoc",
            "title": "Integrate Keystatic CMS with SvelteKit to Render Markdoc Content with Interactive Svelte Components",
            "date_modified": "2026-01-16T21:16:38.000Z",
            "date_published": "2025-10-01T00:00:00.000Z",
            "tags": [
                "SvelteKit",
                "TypeScript",
                "Keystatic"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/detect-vite-plugin-restarts-to-avoid-rerunning-expensive-tasks",
            "content_html": "<article><p>When developing <a href=\"https://vite.dev\" class=\"link\">Vite</a> plugins, you sometimes need to detect when the Vite server restarts. Both the <code>vite</code> dev server and <code>vite build</code> can run your plugin multiple times within the same Node.js process. In my case, I wanted to only execute an expensive task once, and avoid duplicate work.</p><p>It's usually convenient to let Vite restart the dev server whenever the configuration or other imported modules change, and rerunning your Vite plugins. However, this also means that expensive setup work could happen multiple times, wasting both time and resources. This is especially important if your plugin does expensive computations which could slow down the development experience, and if your production build involves a prerendering step, like <a href=\"https://svelte.dev/docs/kit/introduction\" class=\"link\">SvelteKit</a>.</p><p>To make a Vite plugin only execute some task on the first run, we can take advantage of the fact that Vite (I'm using version 7.1.5 at the time of writing) runs the dev server as well as the production build within the same Node.js process. This means we can define properties on <code>globalThis</code> to communicate between different executions of the Vite plugin.</p><p>While I generally think global state like <code>globalThis</code> should be avoided as much as possible, this seems like a good time to use it. Here is a minimal example of how you can use this technique in your Vite plugin:</p><pre><code>// Add type safety\ndeclare global {\n    /** Ensure the Vite plugin only runs some expensive task once */\n    var HAS_RUN_BEFORE: boolean | undefined\n}\n\nfunction yourVitePlugin() {\n    // Logs `undefined` the first time and then `true`.\n    // Use `process.uptime()` to easily identify whenever Vite reruns your plugin.\n    console.log(globalThis.HAS_RUN_BEFORE, process.uptime())\n\n    if (!globalThis.HAS_RUN_BEFORE) {\n        runExpensiveTask()\n        globalThis.HAS_RUN_BEFORE = true\n    }\n\n    console.log(globalThis.HAS_RUN_BEFORE, process.uptime()) // Always logs `true`.\n\n    // (...)\n}\n</code></pre><h2>A real-world example</h2><p>I'm building a Vite plugin specifically to integrate with the SvelteKit production build and prerendering, where the Vite server first creates the production build, which is then prerendered by SvelteKit in a separate restart.</p><p>In the following example, <code>globalThis.HAS_CMS_BUILD_STARTED</code> is only assigned once, and can then be read by all future instances of the Vite plugin to prevent the build from running more than once. Without the <code>globalThis</code> workaround, this would have meant the expensive build logic would be run two times, or even more for development server restarts.</p><pre><code>import type { ConfigEnv } from 'vite'\n\ndeclare global {\n    var HAS_CMS_BUILD_STARTED: boolean | undefined\n}\n\ntype BuildMode = 'prio' | boolean\n\n/**\n * Ensure the initial CMS build only happens once.\n *\n * Since the `vite` command restarts the server multiple times both during\n * development and production builds within the same parent process, we use this\n * function to avoid duplicate builds in the same `vite` process. 
<pre><code>import type { ConfigEnv } from 'vite'\n\ndeclare global {\n    var HAS_CMS_BUILD_STARTED: boolean | undefined\n}\n\ntype BuildMode = 'prio' | boolean\n\n/**\n * Ensure the initial CMS build only happens once.\n *\n * Since the `vite` command restarts the server multiple times both during\n * development and production builds within the same parent process, we use this\n * function to avoid duplicate builds in the same `vite` process. This also\n * makes the initial build faster.\n */\nfunction getBuildMode(env: ConfigEnv): BuildMode {\n    if (globalThis.HAS_CMS_BUILD_STARTED) {\n        return false\n    } else {\n        // We can use `globalThis` to reliably determine if there has been a previous build.\n        // This is possible since `globalThis` is shared in the Vite parent process that restarts the build,\n        // and because both the Vite config loading and the SvelteKit dev/build process are run by the same parent process.\n        globalThis.HAS_CMS_BUILD_STARTED = true\n    }\n\n    if (env.mode !== 'development') {\n        if (env.command === 'build') {\n            // For production, make sure the CMS build finishes before other parts of the app build.\n            return 'prio'\n        } else {\n            // Don't build when serving in production (e.g. preview). In these cases the CMS should already be built.\n            return false\n        }\n    }\n\n    // Build the first time during development\n    return true\n}\n</code></pre>
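<p>For completeness, here is a minimal sketch (not the exact plugin from my project) of how <code>getBuildMode</code> could be wired into a Vite plugin, assuming it lives in the same module. The <code>startCmsBuild</code> function is a placeholder for your own expensive build step:</p><pre><code>import type { ConfigEnv, Plugin } from 'vite'\n\n// Placeholder for your own expensive build logic\ndeclare function startCmsBuild(): Promise&#x3C;void>\n\nexport function cmsPlugin(): Plugin {\n    return {\n        name: 'cms-build',\n        async config(_config, env: ConfigEnv) {\n            const mode = getBuildMode(env)\n            if (mode === false) return\n\n            if (mode === 'prio') {\n                // Block until the CMS build finishes before the rest of the production build\n                await startCmsBuild()\n            } else {\n                // During development, start the build without blocking the dev server\n                void startCmsBuild()\n            }\n        },\n    }\n}\n</code></pre></article>",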
            "url": "https://samuelplumppu.se/blog/detect-vite-plugin-restarts-to-avoid-rerunning-expensive-tasks",
            "title": "Detect Vite Plugin Restarts to Avoid Rerunning Expensive Tasks",
            "date_modified": "2026-01-08T17:06:35.000Z",
            "date_published": "2025-09-18T00:00:00.000Z",
            "tags": [
                "Node.js",
                "TypeScript",
                "Vite"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/git-ignore-files-and-directories-without-using-gitignore",
            "content_html": "<article><p>By adding a pattern with the <code>.gitignore</code> syntax to the special file <code>.git/info/exclude</code>, it's possible to ignore files and directories without using <code>.gitignore</code> files.</p><pre><code>echo \"src/content/posts/_*\" >> .git/info/exclude\n</code></pre><h2>When to use <code>.git/info/exclude</code></h2><p>This is primarily useful to handle draft content, local config files, and when working with a Git-based CMS where you want to keep placeholder content out of the Git history.</p><h2>Why not use alternatives?</h2><p>Both <code>git update-index --assume-unchanged</code> and <code>git update-index --skip-worktree</code> are good alternatives. However, both of them only apply to one file at a time. While this could be solved with a shell script, I usually prefer <code>git update-index</code> commands to temporarily ignore a few files.</p><p>This make <code>.git/info/exclude</code> easier to work with when you need to ignore many files and directories.</p><h2>Potential drawbacks of using <code>.git/info/exclude</code></h2><ul><li>Search results may be missing from your repository since you've explicitly told Git to ignore these files no matter what. This can be solved by manually including the directories you want to search, like <code>src/**/*</code> or just <code>src/</code>.</li><li>The <code>.git/info/exclude</code> is only applied to your local Git repository. For many use cases, the regular <code>.gitignore</code> files are a better choice. In particular, <code>.gitignore</code> is better when you need to apply ignore files consistently for a large team, or in cases where it's critical to keep sensitive information away from your repositories.</li></ul><h2>Summary</h2><p>Taking this all into account, <code>.git/info/exclude</code> is very useful for local overrides - especially when working with Git-based content. However, like with any solution, make sure to consider what works best in your project.</p></article>",
            "url": "https://samuelplumppu.se/blog/git-ignore-files-and-directories-without-using-gitignore",
            "title": "Make Git Ignore Files and Directories Without Using .gitignore",
            "date_modified": "2025-09-12T00:00:00.000Z",
            "date_published": "2025-09-12T00:00:00.000Z",
            "tags": [
                "Git",
                "Terminal"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/automated-text-extraction-from-pdf-images-with-ocrmypdf",
            "content_html": "<article><p>When extracting text content from PDF files, you occasionally find embedded images without any text nodes. For tiny PDFs this can usually be solved manually, but it's not feasible to manually retype text from many PDF pages. Especially not as part of a data pipeline processing many thousands of documents.</p><p>Luckily, there are ways to automate the text extraction by using Optical Character Recognition (<a href=\"https://en.wikipedia.org/wiki/Optical_character_recognition\" class=\"link\">OCR</a>) software.</p><p>One great example is the open source program <a href=\"https://github.com/ocrmypdf/OCRmyPDF\" class=\"link\">OCRmyPDF</a>, which in turn is built on top of <a href=\"https://github.com/tesseract-ocr/tesseract\" class=\"link\">Tesseract</a>. The best thing about this tool compared to others is that it runs completely locally on your computer which allows you to keep sensitive data private. Since it's a command-line tool, it's easy to automate and process many files in parallel.</p><p><code>ocrmypdf</code> can usually be <a href=\"https://ocrmypdf.readthedocs.io/en/latest/installation.html\" class=\"link\">installed with one command</a> to let you start using it. Though, as always, make sure you are installing from a reputable source, or build the program yourself from the source code.</p><h2>Using <code>ocrmypdf</code> to extract text from PDFs</h2><p>If you only need to extract text from PDF files with English content, you can use the default language pack which usually comes preinstalled.</p><p>Here's how to perform OCR on a PDF with English content:</p><pre><code>ocrmypdf in.pdf out.pdf\n</code></pre><p>If some pages have text content already, you can skip them with <code>--skip-text</code>:</p><pre><code>ocrmypdf --skip-text in.pdf out.pdf\n</code></pre><h2>Using specific languages</h2><p>If you need support for additional languages, you can <a href=\"https://ocrmypdf.readthedocs.io/en/latest/languages.html\" class=\"link\">install additional language packs</a>. If you for example want to use German, you would <a href=\"https://ocrmypdf.readthedocs.io/en/latest/languages.html\" class=\"link\">install</a> the <code>deu</code> language pack and then use it like this:</p><pre><code>ocrmypdf -l deu in.pdf out.pdf\n</code></pre><p>If you want both German and English, you can enable multiple language packs:</p><pre><code>ocrmypdf -l deu+eng in.pdf out.pdf\n</code></pre><h2>Conclusion</h2><p>These commands usually solve most cases for me with great results. Even though it's not always perfect, the output from <code>ocrmypdf</code> is a much better starting point for manually reviewing the PDF texts when it's important to make 100% correct conversions.</p><p>There are also plenty of options to explore with <code>ocrmypdf</code> to improve your results. If you find cases where it doesn't work, both <code>ocrmypdf</code> and <code>tesseract</code> are open source projects that could become even better with your contributions. In other cases, there are other OCR tools available, many of which are libre software. However, I've not needed them so far.</p></article>",
            "url": "https://samuelplumppu.se/blog/automated-text-extraction-from-pdf-images-with-ocrmypdf",
            "title": "Automated Text Extraction from PDF Images with OCRmyPDF",
            "date_modified": "2026-01-16T18:16:56.000Z",
            "date_published": "2025-09-04T00:00:00.000Z",
            "tags": [
                "PDF",
                "Data Pipelines",
                "OCR"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/using-sqlite-triggers-to-boost-performance-of-select-count",
            "content_html": "<article><p>I recently developed a website where the landing page shows two important numbers, both derived from the <code>users</code> table. Initially, these numbers were retrieved by executing <code>SELECT COUNT(*)</code> for every page load. This worked well in the beginning but got slower as the number of users grew and the website traffic increased. However, by using SQLite triggers and a dedicated <code>stats</code> table, I made the website load much faster, and using fewer resources. This blog post describes my process and how to implement similar solutions in your own projects.</p><h2>The website requirements</h2><p>During the sign-up, users choose if they want to become members and whether they want to be visible on the website. The count of members and non-members are then visible on the landing page, only including the users who want to be visible.</p><p>Here are the two relevant boolean columns in the <code>users</code> table, represented as the integers <code>1</code> or <code>0</code> in SQLite:</p><pre><code>CREATE TABLE users (\n    \"is_member\" integer NOT NULL   -- boolean: 1 or 0\n    \"is_visible\" integer NOT NULL  -- boolean: 1 or 0\n)\n</code></pre><p>To show the number of members and non-members on the landing page, I first used the following <code>SELECT COUNT(*)</code> query, only including those who want to be visible:</p><pre><code>SELECT\n(\n    SELECT COUNT(*) FROM \"users\" WHERE (\n        \"users\".\"is_member\" = 1\n        AND \"users\".\"is_visible\" = 1\n    )\n) as \"members\",\n(\n    SELECT COUNT(*) FROM \"users\" WHERE (\n        \"users\".\"is_member\" = 0\n        AND \"users\".\"is_visible\" = 1\n    )\n) as \"non_members\";\n</code></pre><p>This worked well initially, but it gradually slowed down as the <code>users</code> table grew and the website got more traffic. The reason why <code>SELECT COUNT(*)</code> is slower for larger tables is because SQlite needs to scan the table and check every row, which takes longer time for larger tables with many rows. <code>SELECT COUNT(*)</code> is essentially an <code>O(n)</code> operation in terms of time complexity, and since we did two queries for every visit to the landing page, the total time complexity is more like <code>2 * O(n)</code>.</p><h2>Why SQLite indexes are not suitable for boolean columns</h2><p>To improve query performance, I first thought about using a DB index and started experimenting. In some cases, this made the <code>SELECT COUNT(*)</code> queries faster. However, there were also other cases that actually resulted in worse performance than just letting SQLite do full table scans for each <code>SELECT COUNT(*)</code> query.</p><p>The reason for this is that indexes are better suited for columns that have many different values, such as strings. Since we have two boolean columns which only can have the values <code>1</code> or <code>0</code>, we don't get reliable benefits from using an index.</p><p>This made me take a step back to think more about the problem I wanted to solve: To avoid counting members and non-members on every page load.</p><h2>Exploring various caching options</h2><p>What if we could somehow cache the latest counts and retrieve them when they were needed? This could be solved with many different kinds of caching. 
<h2>Why SQLite indexes are not suitable for boolean columns</h2><p>To improve query performance, I first thought about using a DB index and started experimenting. In some cases, this made the <code>SELECT COUNT(*)</code> queries faster. However, there were also other cases that actually resulted in worse performance than just letting SQLite do full table scans for each <code>SELECT COUNT(*)</code> query.</p><p>The reason for this is that indexes are better suited for columns that have many different values, such as strings. Since we have two boolean columns which can only have the values <code>1</code> or <code>0</code>, we don't get reliable benefits from using an index.</p><p>This made me take a step back to think more about the problem I wanted to solve: to avoid counting members and non-members on every page load.</p><h2>Exploring various caching options</h2><p>What if we could somehow cache the latest counts and retrieve them when they were needed? This could be solved with many different kinds of caching. On one hand, in-memory caching could be an alternative, but it would not be reliable since the website backend runs as multiple instances, which means we could end up showing different results depending on which instance served the incoming request. On the other hand, a dedicated cache implemented with, for example, <a href=\"https://valkey.io/\" class=\"link\">Valkey</a> would introduce another component to the backend system just to cache two numbers, which would not be worth the increased maintenance. I like to keep the tech stack as simple as possible, and neither of these solutions was appropriate for this specific problem.</p><p>Since we have plenty of capacity in the SQLite database for both more read and write operations, what if we could cache the latest counts directly in SQLite?</p><p>I looked for inspiration and realized this would be a good opportunity to explore SQLite triggers, which seemed like a good fit for my problem. Using SQLite triggers, we could re-evaluate the <code>SELECT COUNT(*)</code> only when there are meaningful changes in the DB, and save the counts in a separate <code>stats</code> table that can easily be looked up when needed.</p><h2>A brief intro to SQLite triggers</h2><p><a href=\"https://www.sqlite.org/lang_createtrigger.html\" class=\"link\">SQLite triggers</a> can be used to execute SQL statements when certain events happen in the database. These events can be, for example, <code>BEFORE INSERT</code> to validate data before saving it, or <code>AFTER UPDATE</code> to react when some table is updated. The basic syntax looks like this:</p><pre><code>CREATE TRIGGER trigger_name\n[BEFORE | AFTER] [INSERT | UPDATE | DELETE]\nON table_name\nBEGIN\n   -- SQL statements to run here\nEND;\n</code></pre><p>SQLite triggers are very flexible and powerful. Notably, they can also use references to the <code>NEW</code> and <code>OLD</code> rows to know which rows and columns changed, and how.</p>
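<p>For example, here's a sketch (not part of my final migration) of how <code>OLD</code> and <code>NEW</code> can be combined with an <code>UPDATE OF</code> column list and a <code>WHEN</code> clause to limit how often a trigger runs:</p><pre><code>CREATE TRIGGER trigger_example\nAFTER UPDATE OF \"is_member\", \"is_visible\"\nON \"users\"\nWHEN OLD.\"is_member\" != NEW.\"is_member\"\n    OR OLD.\"is_visible\" != NEW.\"is_visible\"\nBEGIN\n    -- Recalculate the stats here, like in the migration shown later\n    SELECT 1;\nEND;\n</code></pre>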
<p>However, triggers should be tested carefully, since their execution can block other DB operations. Changing triggers also requires deploying a new DB migration, where the faulty triggers are dropped and then re-created with updated SQL statements. It's doable, but could have real consequences in a production environment. Therefore, it's important to test properly before deploying triggers.</p><h2>The solution: Using SQLite triggers to cache stats</h2><p>With this knowledge, I started experimenting by executing SQL statements in a shell. First creating the <code>stats</code> table and inserting the initial state based on the current <code>users</code> table. Then adding triggers that updated the <code>stats</code> whenever the <code>users</code> table had meaningful changes. And finally, verifying that the triggers behaved as expected and correctly updated the <code>stats</code> when I inserted, updated and deleted users.</p><p>Once I had working triggers, I combined the SQL statements into the following DB migration:</p><pre><code>CREATE TABLE `stats` (\n\t`id` integer PRIMARY KEY AUTOINCREMENT NOT NULL,\n\t`members` integer NOT NULL,\n\t`non_members` integer NOT NULL\n);\n\n-- For all stats calculations, we only include users\n-- who want to be visible --> where is_visible = 1\n\n-- During the migration when the stats table is created,\n-- we need to add the initial stats based on the current DB state.\nINSERT INTO \"stats\" (\"members\", \"non_members\")\nSELECT\n(\n    SELECT COUNT(*) FROM \"users\" WHERE (\n        \"users\".\"is_member\" = 1\n        AND \"users\".\"is_visible\" = 1\n    )\n) as \"members\",\n(\n    SELECT COUNT(*) FROM \"users\" WHERE (\n        \"users\".\"is_member\" = 0\n        AND \"users\".\"is_visible\" = 1\n    )\n) as \"non_members\";\n\n-- Update stats when a new user was added\nCREATE TRIGGER trigger_update_stats_after_insert\nAFTER INSERT\nON \"users\"\nBEGIN\n    UPDATE \"stats\"\n    SET \"members\" = (\n        SELECT COUNT(*) FROM \"users\" WHERE (\n                \"users\".\"is_member\" = 1\n                AND \"users\".\"is_visible\" = 1\n            )\n        ),\n        \"non_members\" = (\n            SELECT COUNT(*) FROM \"users\" WHERE (\n                \"users\".\"is_member\" = 0\n                AND \"users\".\"is_visible\" = 1\n            )\n        )\n    WHERE id = 1;\nEND;\n\n-- Update stats when a user was updated\nCREATE TRIGGER trigger_update_stats_after_update\nAFTER UPDATE\nON \"users\"\nBEGIN\n    UPDATE \"stats\"\n    SET \"members\" = (\n        SELECT COUNT(*) FROM \"users\" WHERE (\n                \"users\".\"is_member\" = 1\n                AND \"users\".\"is_visible\" = 1\n            )\n        ),\n        \"non_members\" = (\n            SELECT COUNT(*) FROM \"users\" WHERE (\n                \"users\".\"is_member\" = 0\n                AND \"users\".\"is_visible\" = 1\n            )\n        )\n    WHERE id = 1;\nEND;\n\n-- Update stats when a user was deleted\nCREATE TRIGGER trigger_update_stats_after_delete\nAFTER DELETE\nON \"users\"\nBEGIN\n    UPDATE \"stats\"\n    SET \"members\" = (\n        SELECT COUNT(*) FROM \"users\" WHERE (\n                \"users\".\"is_member\" = 1\n                AND \"users\".\"is_visible\" = 1\n            )\n        ),\n        \"non_members\" = (\n            SELECT COUNT(*) FROM \"users\" WHERE (\n                \"users\".\"is_member\" = 0\n                AND \"users\".\"is_visible\" = 1\n            )\n        )\n    WHERE id = 1;\nEND;\n</code></pre><p>After applying the migration above, the landing page query could be updated to read the latest <code>stats</code> instead of re-evaluating the member count for every incoming request:</p><pre><code>SELECT \"members\", \"non_members\" FROM \"stats\" WHERE \"stats\".\"id\" = 1\n</code></pre><p>Note that I always use <code>id = 1</code> to read and update stats. This could potentially break in the future since the <code>id</code> is theoretically not guaranteed to always be <code>1</code>.</p><p>I still went with this solution though, because the first row is created in the migration above, no code other than the triggers updates the <code>stats</code> table, and no additional rows are inserted, either by the database or by the application code. Selecting an explicit <code>id</code> also offered better performance compared to using <code>LIMIT 1</code>.</p>
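<p>If you want to make this single-row assumption explicit, one option (a sketch, not something I've needed so far) is to enforce it directly in the schema with a <code>CHECK</code> constraint:</p><pre><code>CREATE TABLE \"stats\" (\n    \"id\" integer PRIMARY KEY CHECK (\"id\" = 1),\n    \"members\" integer NOT NULL,\n    \"non_members\" integer NOT NULL\n);\n</code></pre>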
<p>While it might not be ideal to save derived state like this that could get out of date, I intentionally kept the SQLite triggers and the <code>stats</code> table as simple as possible, so potential future problems should be easy to fix.</p><h2>Impact and performance improvements</h2><p>The result of these changes turned out way better than expected. The read operations now take constant time, <code>O(1)</code>, even after the database more than doubled in size. Best of all, this problem could be solved without introducing another dependency only needed to cache two numbers. Simple and effective solutions are key to making technology easier to understand, maintain and improve.</p><p>Let's explore how and why the performance improved:</p><h3>Before</h3><pre><code>SELECT\n(\n    SELECT COUNT(*) FROM \"users\" WHERE (\n        \"users\".\"is_member\" = 1\n        AND \"users\".\"is_visible\" = 1\n    )\n) as \"members\",\n(\n    SELECT COUNT(*) FROM \"users\" WHERE (\n        \"users\".\"is_member\" = 0\n        AND \"users\".\"is_visible\" = 1\n    )\n) as \"non_members\";\n</code></pre><p>Even though it didn't take more than <code>2 * 22 ms = 44 ms</code> to run the two <code>SELECT COUNT(*)</code> queries for every page load, the query time was growing significantly and could get out of hand as we got more users.</p><p>Another issue with this query was that it caused two full table scans, reading all rows from the <code>users</code> table twice and using unnecessary resources. In some cloud-hosted databases where you pay per usage, excess row reads could be a limiting and potentially costly problem.</p><h3>After</h3><pre><code>SELECT \"members\", \"non_members\" FROM \"stats\" WHERE \"stats\".\"id\" = 1\n</code></pre><p>The new query only reads one row, and its execution time stays constant at <code>27 ms</code>, even as the database more than doubled in size. This might not seem like much of an improvement in the beginning, but compared to the previous solution, the difference will only get bigger as the database grows. It didn't take many days before it was way faster than running two <code>SELECT COUNT(*)</code> operations for every landing page visit. With a much more predictable query time, this approach will continue to perform over time.</p><p>I'm curious to understand why it needs <code>27 ms</code> for what seems like a very simple read operation. Please let me know if you know a way to make this even faster.</p><h2>Conclusion: When to use SQLite triggers</h2><p>Like any solution, this one has two trade-offs worth mentioning:</p><ol><li>Every <code>INSERT</code>, <code>UPDATE</code> and/or <code>DELETE</code> will take slightly longer to complete because it also executes the corresponding SQLite trigger. In our case, this is acceptable since this part of the project needs to optimize for fast reads on the landing page. However, if your system needs to optimize for fast writes, this might be a problem.</li><li>We need to store an additional <code>stats</code> table in the DB, which might get outdated if something happens with the triggers or the tables used to calculate the <code>stats</code>. However, the table is only one row and is only modified by triggers, never by application code. 
Thus, the risk is low, and SQLite triggers remain a good tool as long as you test them properly.</li></ol><p>In our case, these are very good trade-offs to make the website load significantly faster - and stay fast over time.</p><p>Even though SQLite triggers worked well in this case, I would carefully consider other options before implementing them in other projects. For simple use cases, they might be worth it, but for more complex business logic like data validations and transformations, consider implementing that as part of your regular backend code instead.</p><p>In cases where more complex validations and transformations are needed, you can instead keep one backend module responsible for a part of the database and give that module exclusive rights to write to and read from that part. This achieves the same result as SQLite triggers, while also making integration testing much easier. SQLite triggers are possible to test, but harder to debug if you make errors, so it seems reasonable to mostly use them for simpler cases.</p><h2>Bonus: When to optimize performance</h2><p>This kind of work should usually happen after the main functionality is implemented, deployed and confirmed to be valuable with users and stakeholders. Only then is it a good time to start optimizing a system, by identifying potential bottlenecks and deciding when and how you need to deal with them. Even if you don't implement performance optimizations immediately, making a note about potential issues will make them easier to identify and fix when (or rather, if) the need arises in the future.</p><p>Just don't optimize too early. Instead, focus on the core value first and improve with each iteration.</p></article>",
            "url": "https://samuelplumppu.se/blog/using-sqlite-triggers-to-boost-performance-of-select-count",
            "title": "Using SQLite Triggers to Boost the Performance of SELECT COUNT(*)",
            "date_modified": "2026-01-22T01:21:17.000Z",
            "date_published": "2025-08-27T00:00:00.000Z",
            "tags": [
                "SQLite",
                "Caching",
                "Performance"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/install-playwright-on-linux-with-distrobox",
            "content_html": "<article><h2>Prerequisites</h2><p>This guide assumes you have a container runtime like <a href=\"https://podman.io/\" class=\"link\">Podman</a> (strongly recommended) or Docker already installed.</p><h2>Set up distrobox</h2><p>Since Playwright only supports Ubuntu, you can use <a href=\"https://distrobox.it/\" class=\"link\">Distrobox</a> to run Playwright within an Ubuntu container. This allows you to run Playwright on your development machine despite which Linux distribution you prefer.</p><p>On Fedora, you can install it like this:</p><pre><code>sudo dnf install distrobox\n</code></pre><p>It's a good practice to separate your actual home directory from the home directories of your distrobox containers. I suggest you store them all in once place in <code>~/distrobox</code>, and then create subdirectories with the actual home directories of each distrobox container:</p><pre><code>mkdir ~/distrobox\n</code></pre><p>To use distrobox, we begin by creating a new container named <code>ubuntu</code> and set its home directory to <code>~/distrobox/ubuntu</code>. This will also install some additional packages needed for Playwright.</p><pre><code>distrobox create \\\n--name ubuntu --image ubuntu:24.04 \\\n--home ~/distrobox/ubuntu \\\n--additional-packages \"git vim nodejs npm\"\n</code></pre><p>Next, let's enter into the <code>ubuntu</code> container:</p><pre><code>distrobox enter ubuntu\n</code></pre><p><strong>NOTE:</strong> The first time you run this command it will start installing dependencies, which might take some time depending on your network.</p><p>Once this is completed, your current terminal will have access to the environment of your distrobox container called <code>ubuntu</code>. You can now run commands specific to the Ubuntu environment, such as:</p><pre><code>apt --version\n</code></pre><p>Great! Now let's get started with Playwright.</p><h2>Installing Playwright</h2><p>In the same terminal with access to the <code>ubuntu</code> container, navigate to your project directory and run the following two commands.</p><p>First, install the system dependencies (Ubuntu packages) needed by Playwright:</p><pre><code>npx playwright install-deps\n</code></pre><p>Then, install the latest browsers used to run tests:</p><pre><code>npx playwright install\n# or\nnpx playwright install firefox \t# specific browser\n</code></pre><p>And now you're ready to start testing with Playwright!</p><h2>Running tests with Playwright</h2><p><strong>NOTE:</strong> Make sure to run the Playwright tests in your native Terminal application and not in an integrated terminal such as the one in your code editor. This way, you keep the Playwright process - which spawns several browsers - separate from your code editor. This is much better for system stability and memory usage.</p><p>To get access to the <code>ubuntu</code> distrobox container, run the following in a new native terminal:</p><pre><code>distrobox enter ubuntu\n</code></pre><p>In the same terminal, you can then navigate to your project directory and run the tests:</p><pre><code>npx playwright test\n# or replace with your test command\npnpm test\n</code></pre><p>And that's all - happy testing!</p></article>",
            "url": "https://samuelplumppu.se/blog/install-playwright-on-linux-with-distrobox",
            "title": "Installing Playwright on non-Ubuntu Linux distributions",
            "date_modified": "2026-01-08T17:06:35.000Z",
            "date_published": "2025-08-23T00:00:00.000Z",
            "tags": [
                "Playwright",
                "Distrobox",
                "Testing"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/run-typescript-directly-in-nodejs-22",
            "content_html": "<article><p><strong>NOTE: Since Node.js 22.18.0 or 23.6.0, \"type stripping\" is enabled by default.</strong> See the latest Node.js docs for <a href=\"https://nodejs.org/en/learn/typescript/run-natively\" class=\"link\">more info</a>.</p><hr/><p><strong>It's about to get much easier to run TypeScript directly with Node.js.</strong> As of Node.js <code>22.7.0</code>, there are two experimental command line flags to strip TypeScript types and convert TypeScript-only syntax into JavaScript that can be executed by Node.js.</p><p>This even works with import aliases if you make some config and code changes, as demonstrated later. But let's start with the basics first:</p><h2>How to run TypeScript directly with Node.js:</h2><pre><code>node --experimental-strip-types main.ts\n</code></pre><p>If your code (or any dependencies) use TypeScript-only features like <code>enum</code> and <code>namespace</code>, you need to use the following command:</p><pre><code>node \\\n--experimental-strip-types \\\n--experimental-transform-types main.ts\n</code></pre><p>If you start many <code>node</code>-processes and want to filter out the <code>ExperimentalWarning</code>s from the log output, you can pass the flag <code>--no-warnings=ExperimentalWarning</code> to get a much cleaner output:</p><pre><code>node \\\n--no-warnings=ExperimentalWarning \\\n--experimental-strip-types \\\n--experimental-transform-types main.ts\n</code></pre><p>Reading the <a href=\"https://nodejs.org/en/learn/typescript/run-natively\" class=\"link\">official guide</a>, I'm especially excited that this brings us a step closer to full TypeScript-support without any external tools or command line flags.</p><blockquote><p>Future versions of Node.js will include support for TypeScript without the need for a command line flag.</p></blockquote><h2>How to run TypeScript code with import aliases</h2><p>One limitation as of Node.js is that import aliases defined via <code>tsconfig.json</code> and the <code>paths</code> option (<a href=\"https://www.typescriptlang.org/docs/handbook/modules/reference.html#paths\" class=\"link\">docs</a>) don't work.</p><p>However, there is a workaround available by adding Node.js <a href=\"https://nodejs.org/api/packages.html#subpath-patterns\" class=\"link\">subpath patterns</a>, defined in the <code>imports</code> field of <code>package.json</code> to achieve the same effect. Let's look at an example:</p><h3>1. Update configuration</h3><p>If you have a <code>tsconfig.json</code> defining an <code>@app</code> import alias like this:</p><pre><code>// tsconfig.json\n{\n    \"compilerOptions\": {\n        \"paths\": {\n            \"@app/*\": [\"./src/*\"]\n        }\n    }\n}\n</code></pre><p>Then you can add the following import alias in <code>package.json</code> to make it work almost the same way (more on that in a moment):</p><pre><code>// package.json\n{\n    \"imports\": {\n        \"#app/*\": \"./src/*\"\n    }\n}\n</code></pre><p><strong>NOTE:</strong> Import aliases within your own module/code base need to start with the <code>#</code> character. If you know a way to make this work with a custom character like <code>@</code> or <code>$</code>, please <a href=\"https://fosstodon.org/@Greenheart\" class=\"link\">let me know</a>!</p><h3>2. 
<h3>2. Update code import statements</h3><p>To make your code run again, you need to update the imports in your code with a global find-and-replace from <code>@app</code> to <code>#app</code>.</p><p>For example, all your imports like this:</p><pre><code>import { something } from '@app/lib/something'\n</code></pre><p>They need to be updated into:</p><pre><code>import { something } from '#app/lib/something.ts'\n</code></pre><p>Also note that Node.js requires explicit file extensions when using import aliases. Perhaps it's possible to make this work without <code>.ts</code> extensions, but at the same time, I think it's good to be explicit - especially since most of the imports are added automatically by the code editor.</p><h2>Run TypeScript in production</h2><p>Since this is an experimental feature, it's currently recommended to transpile TypeScript using <code>tsc</code> when building for production. To easily run TypeScript without a separate transpilation step, <code>tsx</code> is still a great choice - <a href=\"https://github.com/privatenumber/tsx/\" class=\"link\">learn more here</a>.</p><p>However, for development and quick scripts, this is a big boost to productivity!</p>
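<p>To avoid retyping the flags, you could add them to your <code>package.json</code> scripts. Here's a minimal sketch, where <code>src/main.ts</code> is just an example entry point:</p><pre><code>// package.json\n{\n    \"scripts\": {\n        \"dev\": \"node --experimental-strip-types --experimental-transform-types src/main.ts\"\n    }\n}\n</code></pre></article>",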
            "url": "https://samuelplumppu.se/blog/run-typescript-directly-in-nodejs-22",
            "title": "Run TypeScript Directly with Node.js 22",
            "date_modified": "2026-01-08T17:06:35.000Z",
            "date_published": "2024-12-29T00:00:00.000Z",
            "tags": [
                "TypeScript",
                "Node.js"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/find-files-without-substring-with-grep",
            "content_html": "<article><p>After resolving a merge conflict in a large React codebase, the following error appeared:</p><pre><code>Element type is invalid: expected a string (for built-in components) or a class/function\n</code></pre><p>It seemed like one of the components was <a href=\"https://stackoverflow.com/questions/44897070/element-type-is-invalid-expected-a-string-for-built-in-components-or-a-class\" class=\"link\">missing the export keyword</a>. The question was just which component?</p><p>It's easy to find all files in a directory that do include a substring like <code>export</code> in the IDE and various terminal commands, but what about finding the files that don't include the <code>export</code> keyword?</p><p>Shell scripting to the rescue: After some research, I learned about some new CLI flags that could be used to solve this with the <code>grep</code> command, available in most Unix shells.</p><p>The following command recursively finds all files in the <code>components</code> directory (including its subdirectories) that don't include the <code>export</code> search string:</p><pre><code>grep -riL \"export\" components\n</code></pre><p>There's also an option to match files that do include a keyword, by replacing the <code>-L</code> flag with <code>-l</code>, as described <a href=\"https://stackoverflow.com/a/56486664\" class=\"link\">here</a>:</p><pre><code>grep -ril \"export\" components\n</code></pre><p>Quite a powerful way to quickly find what you need!</p></article>",
            "url": "https://samuelplumppu.se/blog/find-files-without-substring-with-grep",
            "title": "Find Files Without a Substring with Grep",
            "date_modified": "2025-12-16T22:40:59.000Z",
            "date_published": "2024-05-29T00:00:00.000Z",
            "tags": [
                "React",
                "Shell Scripting",
                "Terminal"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/chalmers-guest-lecture-social-entrepreneurship",
            "content_html": "<article><p>Imagine ten minutes of observing a room full of passionate students as they share and discuss their visions for a sustainable future. So many emotions, hopes, and dreams expressed even without hearing specific words. Imagine noticing how the room gradually warms up, and eventually reaches a point where the positive energy in some groups spread to others around them, reaching almost the entire room by the end of the exercise.</p><p><strong>That's the potential of generative social fields</strong> - to create conditions where people have positive social interactions that create a good feeling and atmosphere. This is not the only kind of social field, but it's probably the easiest one to recognize. And by paying attention to how these generative social fields make us feel and influence how we think and act, maybe with time, we can start creating more of these positive interactions with people we meet in life.</p><h2>Social Entrepreneurship in Practice</h2><p>Today, I had the privilege to give my third guest lecture/workshop for students at Chalmers University of Technology, with the goal to explore social entrepreneurship in practice. Specifically focusing on how some friends and I co-founded the non-profit tech agency <a href=\"https://greenheart.coop\" class=\"link\">Greenheart Co-operative</a>.</p><p>This time was my most in-depth and cohesive presentation on this subject so far. <a href=\"https://samuelplumppu.se/talks/2023-05-08-chalmers-entrepreneurship/\" class=\"link\">See the presentation slides here</a>. I'm really happy with how it all turned out, and especially the questions and discussions this sparked (some of us kept going for a full hour after the official end)! I especially want to give a big thanks to my partner Sara (<a href=\"https://saranewmountain.earth\" class=\"link\">check out her website</a>) for your help and feedback to structure the presentation!</p><p>Although it felt like we barely had time to scratch the surface of some topics, I'm really happy with the format of my presentation. And who knows - I might write more in-depth about social entrepreneurship and related topics in the future.</p><h2>It Matters How We Show Up</h2><p>Coming back to the social fields - it matters how we show up, because our interactions ripple out and influence our surroundings, just like we're influenced by people around us.</p><p>Given the potential to transform the energy in a room, today's exercise to explore future visions could prove to be valuable in more ways. If you want to try it yourself, see <a href=\"https://samuelplumppu.se/talks/2023-05-08-chalmers-entrepreneurship/#/12\" class=\"link\">the second part of the presentation</a>. Maybe bringing this exercise into everyday life could help us strengthen (or even regain) our ability to imagine how things could be better. If anything, this kind of imagination is what social entrepreneurship is about.</p><p>To imagine different futures, we need good conditions. For example, by making time, finding a safe space and being fully present. But perhaps most importantly, it matters how we show up.</p><p>So, let's help each other show up in ways that allow us to imagine!</p></article>",
            "url": "https://samuelplumppu.se/blog/chalmers-guest-lecture-social-entrepreneurship",
            "title": "Chalmers Guest Lecture on Social Entrepreneurship",
            "date_modified": "2026-01-08T17:06:35.000Z",
            "date_published": "2023-05-08T00:00:00.000Z",
            "tags": [
                "Entrepreneurship",
                "Co-operatives",
                "Economics"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/improving-shell-startup-with-lazy-loading",
            "content_html": "<article><p>Using <a href=\"https://ohmyz.sh/\" class=\"link\">Oh My Zsh</a> is usually a great experience. However, adding heavy plugins (like <code>nvm</code>) to load at startup time can really hurt performance. Luckily there's a way to lazy load them.</p><h2>A Simple Solution</h2><pre><code># ~/.zshrc\nplugins=(nvm git) # 1\nzstyle ':omz:plugins:nvm' lazy yes # 2\n\nsource $ZSH/oh-my-zsh.sh # 3\n</code></pre><ol><li>Add the nvm plugin to your <code>.zshrc</code> file.</li><li>Enable lazy loading for the <code>nvm</code> plugin.</li><li>Make sure you source Oh My Zsh at the end.</li></ol><p>This method reduced my shell startup time from <em>~1.5 s</em> to <em>~200 ms</em>. A <strong>huge</strong> improvement for a common action I perform many times daily.</p><p>However, I soon realized some of my projects had external dependencies that relied on commands like <code>node</code> and <code>npm</code> (and other package managers) to always be defined in the shell environment. This caused weird crashes, since lazy loading <code>nvm</code> means commands like <code>node</code> and <code>npm</code> only gets enabled when they first got used.</p><h2>Conditionally Lazy Loading for Specific Directories</h2><p>Adding an if statement to avoid lazy loading in specific directories:</p><pre><code># ~/.zshrc\nplugins=(nvm git)\n\n# This excludes any subdirectory of \"/your-project/\"\n# =~ is used for RegExp matching.\nif ! [[ $PWD =~ \"/your-project/\" ]]; then\n  zstyle ':omz:plugins:nvm' lazy yes\nfi\n\nsource $ZSH/oh-my-zsh.sh\n</code></pre><h2>Add more commands that should load nvm</h2><p>In the <a href=\"https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins/nvm\" class=\"link\">documentation</a> for <code>nvm</code> plugin, there are also features that can be useful. For example, this is how to make sure <code>npx</code> and <code>pnpx</code> work even in new terminals.</p><pre><code>zstyle ':omz:plugins:nvm' lazy-cmd npx pnpx\n</code></pre><p>Hopefully this saves some time, allowing you move at the speed of thought 💭</p></article>",
            "url": "https://samuelplumppu.se/blog/improving-shell-startup-with-lazy-loading",
            "title": "Improving Oh My Zsh Startup Time with Lazy Loading",
            "date_modified": "2025-12-16T22:40:59.000Z",
            "date_published": "2023-03-23T00:00:00.000Z",
            "tags": [
                "DX",
                "Code Snippet"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/automatic-internal-external-links-in-sveltekit",
            "content_html": "<article><p><strong>Update 2023-03-23:</strong> This method is heavily outdated. See <a href=\"https://kit.svelte.dev/docs/link-options\" class=\"link\">https://kit.svelte.dev/docs/link-options</a> for modern options.</p><p>Markdown content on blogs often require you to support both internal and external links at the same time. Usually, you need to separate behaviors for the different kinds of links, like for example prefetching internal links to improve page load times on your blog, while simultaneously opening external links in separate tabs without prefetching but instead with other attributes like <code>rel=\"noopener noreferrer\"</code>.</p><p>Fortunately, Svelte and SvelteKit provides a good solution to this problem.</p><p>First, we need to test if a given URL is external. This can be solved with a helper function like this:</p><pre><code>/**\n * Test if an URL is external.\n *\n * @param href {string} The URL to test.\n * @returns True if the link is external, and false otherwise.\n */\nfunction isExternalURL(href: string): boolean {\n    const a = document.createElement('a')\n    a.href = href\n    return window.location.host !== a.host\n}\n</code></pre><p>Then you can use <code>isExternalURL()</code> to create a Svelte <code>&lt;Link /></code> component that automatically handles the right attributes for both internal and external links. And with SvelteKit's <code>sveltekit:prefetch</code> directive, your users will get a really smooth experience browsing your website, without compromising on how you handle external links.</p><h2>Finished SvelteKit <code>&lt;Link /></code> Component</h2><pre><code>&#x3C;!-- Link.svelte -->\n\n&#x3C;script lang=\"ts\" module>\n    import { onMount } from 'svelte'\n\n    /**\n     * Test if an URL is external.\n     *\n     * @param href {string} The URL to test.\n     * @returns True if the link is external, and false otherwise.\n     */\n    function isExternalURL(href: string): boolean {\n        const a = document.createElement('a')\n        a.href = href\n        return window.location.host !== a.host\n    }\n&#x3C;/script>\n\n&#x3C;script lang=\"ts\">\n    export let href = ''\n    let additionalProps: object\n    const classes = [$$props.class ?? '', 'default'].join(' ').trim()\n\n    onMount(() => {\n        if (isExternalURL(href)) {\n            additionalProps = {\n                rel: 'noopener noreferrer',\n                target: '_blank',\n            }\n        } else {\n            additionalProps = {\n                'sveltekit:prefetch': true,\n            }\n        }\n    })\n&#x3C;/script>\n\n&#x3C;a {href} class={classes} {...$$props} {...additionalProps}>\n    &#x3C;slot />\n&#x3C;/a>\n</code></pre><h2>Some Thoughts About This Implementation</h2><ol><li>It uses two separate script contexts: One with <code>module</code> in order to only import external dependencies and create functions once during the runtime, and the other one for the component context which handles component instances and re-renders.</li><li><code>$$props.class</code> is an unfortunate workaround to support external classes passed down via the regular class attribute, since <code>class</code> is a reserved keyword in JavaScript. Let me know if you have a better solution for this!</li></ol></article>",
            "url": "https://samuelplumppu.se/blog/automatic-internal-external-links-in-sveltekit",
            "title": "Automatic Internal and External Links in SvelteKit",
            "date_modified": "2025-12-16T22:40:59.000Z",
            "date_published": "2021-07-31T00:00:00.000Z",
            "tags": [
                "TypeScript",
                "Svelte",
                "SvelteKit",
                "Code Snippet"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/generate-password-in-browser-web-crypto-api",
            "content_html": "<article><p>Strong, cryptographically safe passwords are an essential foundation to live a secure digital life. With an open source password manager like <a href=\"https://bitwarden.com/\" class=\"link\">Bitwarden</a>, it's never been more accessible to generate unique, strong passwords for every online account, and then storing them in your password vault.</p><p>But what if you want to add password generation directly to your web app? That's recently been getting much more accessible as well thanks to the standard <a href=\"https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto\" class=\"link\">Web Crypto API</a>.</p><h2>Use the Web Crypto API in Any Environment</h2><p>In order to make the generator work in both browsers and Node.js, we need an abstraction. This ensures we can use the same Web Crypto API no matter where the generator is used.</p><pre><code>// crypto.js\n\n/**\n * Get a reference to the Web Crypto API in any modern JS environment\n *\n * @returns An object implementing the Web Crypto API.\n */\nasync function loadCrypto() {\n    if (\n        (typeof window !== 'undefined' &#x26;&#x26; window.crypto) ||\n        (globalThis &#x26;&#x26; globalThis.crypto)\n    ) {\n        // Running in browsers released after 2017, and other\n        // runtimes with `globalThis` like Deno or CloudFlare Workers\n        const crypto = window.crypto || globalThis.crypto\n\n        return new Promise((resolve) => resolve(crypto))\n    } else {\n        // Running in Node.js >= 15\n        const nodeCrypto = await import('crypto')\n        return nodeCrypto.webcrypto\n    }\n}\n\nconst crypto = await loadCrypto()\nexport default crypto\n</code></pre><h2>Creating a Password by Selecting Random Characters</h2><p>The way our generator is going to work is by creating an array of a given length (matching the password length), and then filling it with random characters.</p><p>First, we'll import the crypto abstraction and define the character set we want to use.</p><pre><code>// generate-password.js\n\nimport crypto from './crypto'\n\nconst digits = '0123456789'\nconst upper = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'\nconst lower = upper.toLowerCase()\nconst CHAR_SET = digits + upper + lower\n</code></pre><p>Using <code>Array.from()</code>, we can provide a <code>Array.prototype.map()</code> callback to add the random characters directly while the array is created. Then, we just have to join the password into a string and we're done - except for the details of <code>getRandomCharacter()</code> which we'll cover soon.</p><pre><code>/**\n * Generate a random password of a given length.\n *\n * @param {number} length The password length.\n * @param {string} characters The set of characters to pick from.\n * @returns A random password.\n */\nexport function generatePassword(length = 80, characters = CHAR_SET) {\n    return Array.from({ length }, (_) =>\n        getRandomCharacter(characters),\n    ).join('')\n}\n</code></pre><h2>Cryptographically Secure Random Number Generation</h2><p>Let's implement <code>getRandomCharacter()</code>. To ensure the characters are randomized in a cryptographically safe way, we use <code>crypto.getRandomValues()</code>. 
<h2>Cryptographically Secure Random Number Generation</h2><p>Let's implement <code>getRandomCharacter()</code>. To ensure the characters are randomized in a cryptographically safe way, we use <code>crypto.getRandomValues()</code>. This is <em>strongly</em> recommended instead of using <code>Math.random()</code>, which may seem simpler but is not secure enough for our needs.</p><pre><code>/**\n * Get a random character from a given set of characters.\n *\n * @param {string} characters The set of characters to pick from.\n * @returns A random character.\n */\nfunction getRandomCharacter(characters) {\n    const randomNumber = crypto.getRandomValues(new Uint8Array(1))[0]\n    return characters[randomNumber % characters.length]\n}\n</code></pre><p>To explain <code>getRandomCharacter()</code>, let's start by thinking about the character set again. Since our character set has fewer than 256 characters (the range of 8 bits), we can pass a <code>Uint8Array</code> to <code>crypto.getRandomValues()</code> to fill it with random numbers. In our case, this will be a single number between 0 and 255, since we created a <code>Uint8Array</code> holding 1 byte. We retrieve the <code>randomNumber</code> and can then use it to calculate the index from where to pick the next character.</p><p>Since our character set contains fewer than 256 characters, we need to ensure the random number isn't out of range to avoid crashes. This can be done using <code>%</code> - the <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Remainder\" class=\"link\">Remainder operator</a>, which allows us to use a random number potentially much larger than our character set length, and always get a value within our desired range.</p><p>However, this method has a severe security issue - it will cause the first characters in our set to appear more often, reducing the password's security. This is caused by the fact that the result of the remainder operator restarts from 0 every time <code>randomNumber</code> reaches another multiple of the character set length. <code>39 % 40</code> yields <code>39</code> and <code>40 % 40</code> yields <code>0</code>, meaning we'll get the last character and then the first character again. This repeats for larger multiples such as 80 and so on, up to the largest multiple below 256.
The remaining values above that multiple then give extra probability to the characters with the lowest indices.</p><pre><code>const characters = '...'\nconst characterLength = 40\n\nconst randomNumber1 = 39\nconst randomNumber2 = 40\n\nconst index1 = randomNumber1 % characterLength // 39 % 40 = 39\nconst index2 = randomNumber2 % characterLength // 40 % 40 = 0\n\nconst first = characters[index1] // Returns the last character\nconst second = characters[index2] // Returns the first character!\n</code></pre><h2>Ensure Random Characters Have Equal Distribution</h2><p>To work around the issue caused by the remainder operator, we can only allow random numbers that are smaller than the largest multiple of the character set length that fits within the 256 possible values of a <code>Uint8Array</code> element.</p><p>To calculate the maximum value, we can use the following expression:</p><pre><code>const max = 256 - (256 % characters.length)\n</code></pre><p>To give an example, a character set length of 60 would yield <code>max = 240</code>, since 240 is the largest number that is both less than 256 and evenly divisible by 60. Any random number equal to or above <code>max</code> is then regenerated.</p><pre><code>const max = 256 - (256 % 60) // 240\n</code></pre>
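<p>For our actual character set of 62 characters, the same arithmetic (my own numbers, to give a feel for the cost of this approach) gives <code>max = 248</code>, so only 8 out of 256 possible values - about 3% - ever need to be regenerated:</p><pre><code>const max = 256 - (256 % 62) // 248\nconst rejectionRate = (256 - max) / 256 // 8 / 256 = 0.03125\n</code></pre>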
<p>Getting back to implementing <code>getRandomCharacter()</code>, the next step would be to ensure that we regenerate <code>randomNumber</code> as long as it's equal to or larger than our maximum allowed value. In the final version of <code>getRandomCharacter()</code>, we'll use a <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/do...while\" class=\"link\">do...while</a> loop to achieve this:</p><pre><code>/**\n * Get a random character from a given set of characters.\n *\n * @param {string} characters The set of characters to pick from.\n * @returns A random character.\n */\nfunction getRandomCharacter(characters) {\n    let randomNumber\n    /**\n     * Due to the repeating nature of results from the remainder\n     * operator, we potentially need to regenerate the random number\n     * several times. This is required to ensure all characters have\n     * the same probability to get picked. Otherwise, the first\n     * characters would appear more often, resulting in weaker\n     * password security.\n     */\n    do {\n        randomNumber = crypto.getRandomValues(new Uint8Array(1))[0]\n    } while (randomNumber >= 256 - (256 % characters.length))\n\n    return characters[randomNumber % characters.length]\n}\n</code></pre><h2>The Finished Password Generator</h2><p>Here's how the generator looks when all pieces come together!</p><pre><code>// generate-password.js\n\nimport crypto from './crypto'\n\nconst digits = '0123456789'\nconst upper = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'\nconst lower = upper.toLowerCase()\nconst CHAR_SET = digits + upper + lower\n\n/**\n * Generate a random password of a given length.\n *\n * @param {number} length The password length.\n * @param {string} characters The set of characters to pick from.\n * @returns A random password.\n */\nexport function generatePassword(length = 80, characters = CHAR_SET) {\n    return Array.from({ length }, (_) =>\n        getRandomCharacter(characters),\n    ).join('')\n}\n\n/**\n * Get a random character from a given set of characters.\n *\n * @param {string} characters The set of characters to pick from.\n * @returns A random character.\n */\nfunction getRandomCharacter(characters) {\n    let randomNumber\n    /**\n     * Due to the repeating nature of results from the remainder\n     * operator, we potentially need to regenerate the random number\n     * several times. This is required to ensure all characters have\n     * the same probability to get picked. Otherwise, the first\n     * characters would appear more often, resulting in weaker\n     * password security.\n     */\n    do {\n        randomNumber = crypto.getRandomValues(new Uint8Array(1))[0]\n    } while (randomNumber >= 256 - (256 % characters.length))\n\n    return characters[randomNumber % characters.length]\n}\n</code></pre><h2>Generate Passwords with <code>pagecrypt</code> in Your Next Project</h2><p>This post is based on what I learned while creating the <a href=\"https://github.com/greenheart/pagecrypt\" class=\"link\">pagecrypt</a> package, which implements the code from this blog post, along with other related Web Crypto utilities. Since pagecrypt is just a standard ES module, it works with any JavaScript framework, both on the frontend and the backend.</p><p>Install it with</p><pre><code>npm i pagecrypt\n</code></pre><p>Then, you can generate random passwords both in <a href=\"https://caniuse.com/cryptography\" class=\"link\">browsers released after 2018</a> and in Node.js newer than v15.</p><pre><code>import { generatePassword } from 'pagecrypt/core'\n\nconst password = generatePassword(64)\n</code></pre><p><strong>Enjoy!</strong></p><p>Let me know if you have any suggestions and further improvements!</p></article>",
            "url": "https://samuelplumppu.se/blog/generate-password-in-browser-web-crypto-api",
            "title": "Use the Web Crypto API to Generate a Cryptographically Secure Password in the Browser and Node.js",
            "date_modified": "2025-12-16T22:40:59.000Z",
            "date_published": "2021-07-28T00:00:00.000Z",
            "tags": [
                "JavaScript",
                "Web Crypto API",
                "Node.js"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/nodejs-rename-file-extensions",
            "content_html": "<article><p>A Node.js script to rename the file extension for all matching files in a directory.</p><pre><code>#!/usr/bin/env node\n\nimport { readdir, rename } from 'fs/promises'\nimport { resolve } from 'path'\n\n/**\n * Rename the file extension for all matching files in a directory.\n *\n * @param {string} baseDir Where to find the files.\n * @param {function} shouldRenameFile Filter function that should return a\n *   boolean for whether or not to rename the file.\n * @param {string} beforeExt The file extension to replace. If `beforeExt` is an\n *   empty string, the `afterExt` will be added to the original filename.\n * @param {string} afterExt The new file extension to use instead.\n * @returns The number of files renamed.\n */\nasync function updateFileExtensions({\n    baseDir,\n    shouldRenameFile,\n    beforeExt,\n    afterExt,\n}) {\n    const files = (await readdir(baseDir)).filter(shouldRenameFile)\n\n    const renamed = await Promise.all(\n        files.map((f) => {\n            const before = resolve(baseDir, f)\n            const after = beforeExt.length\n                ? before.replace(beforeExt, afterExt)\n                : before + afterExt\n            return rename(before, after)\n        }),\n    )\n\n    return renamed.length\n}\n\nconst renamedCount = await updateFileExtensions({\n    baseDir: resolve(process.cwd(), 'images'),\n    shouldRenameFile: (f) => f.length === 64,\n    beforeExt: '',\n    afterExt: '.jpg',\n})\n\nconsole.log(renamedCount)\n</code></pre></article>",
            "url": "https://samuelplumppu.se/blog/nodejs-rename-file-extensions",
            "title": "Rename File Extensions with Node.js",
            "date_modified": "2025-12-16T22:40:59.000Z",
            "date_published": "2021-07-27T00:00:00.000Z",
            "tags": [
                "Code Snippet",
                "JavaScript",
                "Node.js"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/git-rewrite-commit-email",
            "content_html": "<article><p>Here's a quick way to update commit author email and display name for previous commits in a local project.</p><p>Two things worth mentioning before using this:</p><ol><li><p>If you change your email, it might no longer count as contributions to your GitHub/GitLab profile. But as long as you keep the old email as a hidden email connected to your account, it should work.</p></li><li><p>Remember that rewriting history in shared projects is a bad idea. Especially when working in a collaborative environment with other people. But for old local projects that you want to upload to a public Git repository, this method could be useful to hide some personal information.</p></li></ol><p>Let's use <a href=\"https://github.com/newren/git-filter-repo\" class=\"link\">git-filter-repo</a> which is a modern replacement to <code>git filter-branch</code> and can be installed via package managers or by following the official <a href=\"https://github.com/newren/git-filter-repo/blob/main/INSTALL.md\" class=\"link\">installation guide</a>.</p><p>Once installed, we can update the email like this:</p><pre><code>git-filter-repo \\\n--email-callback 'return email.replace(b\"old@email.com\", b\"new@email.com\")'\n</code></pre><p>If you also want to update your name, you can run this command:</p><pre><code>git-filter-repo \\\n--name-callback 'return name.replace(b\"OldName\", b\"NewName\")' \\\n--email-callback 'return email.replace(b\"old@email.com\", b\"new@email.com\")'\n</code></pre><p>Credit: <a href=\"https://stackoverflow.com/a/60364176\" class=\"link\">StackOverflow</a></p></article>",
            "url": "https://samuelplumppu.se/blog/git-rewrite-commit-email",
            "title": "Update Your Git Commit Email Address Before Pushing to Remote Repository",
            "date_modified": "2025-12-16T22:40:59.000Z",
            "date_published": "2021-07-23T00:00:00.000Z",
            "tags": [
                "Git",
                "Code Snippet"
            ]
        },
        {
            "id": "https://samuelplumppu.se/blog/firefox-bookmark-keywords",
            "content_html": "<article><p>One of my favourite Firefox features is the ability to add bookmark keywords. These allow near instantaneous navigation when visiting the most commonly used bookmarks.</p><p>It's worth noting that this feature is missing in Chrome and other Chromium-based browsers such as Edge.</p><h2>How to Use Firefox Bookmark Keywords</h2><p>To open a commonly visited bookmark by using its keyword:</p><ol><li>Open a new tab with <code>CTRL/CMD + T</code>.</li><li>Type a single character to find the right bookmark using a keyword. For example <code>c</code> for calendar. This will make the bookmark appear as the selected suggestion in the address bar.</li><li>Press <code>ENTER</code> to open.</li></ol><p>Since this can be done with just the keyboard and only requires 4 keystrokes, this really helps when you want to stay in the flow. Focus on what to do with the bookmarked website instead of how to open it.</p><h2>Add Bookmark Keywords to Websites You Visit Often</h2><ol><li>Open the Firefox Bookmarks Library with <code>CTRL/CMD + SHIFT + O</code>.</li><li>Add a new bookmark, or edit one of your favourites.</li><li>Click on the bookmark to view detailed fields, and you will find <code>Keywords</code> at the bottom.</li><li>In the <code>Keywords</code> field, enter a single character or single word keyword that you want to use. For example, I use <code>c</code> for calendar and <code>m</code> for mail.</li><li>The bookmarks automatically save as you edit them, so once you've added the keyword you want, you can try it out using the steps above. Enjoy!</li></ol><p>Given how much time is spent in the browser, searching for information or using web apps, it's well worth learning how to use the browser efficiently. What's your best browser productivity tip?</p></article>",
            "url": "https://samuelplumppu.se/blog/firefox-bookmark-keywords",
            "title": "Use Firefox Bookmark Keywords to Quickly Get to Websites You Visit Often with Only 4 Keystrokes",
            "date_modified": "2025-12-16T22:40:59.000Z",
            "date_published": "2021-07-22T00:00:00.000Z",
            "tags": [
                "Firefox",
                "Productivity"
            ]
        }
    ]
}