<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[A pragmatic engineer blog, written by a human]]></title><description><![CDATA[Senior backend engineer writing about distributed systems, observability, AI in software development, and pragmatic career growth for developers.]]></description><link>https://nguyengineer.dev</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 12:31:57 GMT</lastBuildDate><atom:link href="https://nguyengineer.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Developers, what should we do next in the age of AI?]]></title><description><![CDATA[At this point, April 2026, everyone is familiar with AI integrated deeply into the software development process. Some worry that AI will eventually replace or devalue them. I know that's the common co]]></description><link>https://nguyengineer.dev/developers-what-should-we-do-next-in-the-age-of-ai</link><guid isPermaLink="true">https://nguyengineer.dev/developers-what-should-we-do-next-in-the-age-of-ai</guid><category><![CDATA[TDD (Test-driven development)]]></category><category><![CDATA[AI]]></category><category><![CDATA[Product Engineering]]></category><category><![CDATA[Career]]></category><category><![CDATA[SRE]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Sat, 11 Apr 2026 19:46:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/64479ed989b09f5069ab7a18/c3d07aeb-2c10-46a9-aea8-e78c7b9ceee3.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At this point, April 2026, everyone is familiar with AI integrated deeply into the software development process. Some worry that AI will eventually replace or devalue them. 
I know that's the common concern, and I don't want to dismiss it or dwell on it. Instead, there are some practices I believe developers should focus on next.</p>
<h2>Strengthen the tech stack and domain expertise</h2>
<p>"AI can write all the code, why bother choosing the language?" You may have heard this already, but the answer is simple: a language is tied to its ecosystem, it takes years to become an expert in a tech stack, and we rely on that expertise to judge the solutions provided by AI.</p>
<p>The code looks fine and all the tests pass, but is it safe to deploy? Not quite. AI makes architectural decisions no one is aware of, like setting a long expiration time on the auth token in a fintech product, pulling in a barely maintained npm package from an unknown developer, or sinking time into edge cases that never occur in real life.</p>
<h3>Taste and judgement</h3>
<p>So, being fundamentally strong in your company's tech stack is even more valuable in the AI age. AI can write code, but we still need to review it, sometimes rewrite it, and redirect the AI toward better code. We are opinionated here, not just accepting whatever comes out. Our taste is proven and solid; let it dictate how new code is written. You're familiar with the system you own, your team's coding conventions, your battle scars. Even when all the conventions are written into instruction files, AI can still go off the rails sometimes.</p>
<p>Regarding domain knowledge, AI can't replace anyone outright yet. Even on a complex system built over many years, where anyone on the team can drop in Opus 4.6 with a few lines of requirements and have it implement features or fix bugs tangled in layers of complexity, that is still not the end of your expertise. Companies still need people to drive development. The metrics below show exactly where that human edge lives.</p>
<h3>Drive the output via TDD</h3>
<p>This is real-life experience from the team I'm working on, and several other teams have seen the same: nothing drives AI output better than a TDD approach. Spec-driven development fits the same mindset: we must understand what we need to do first, and force the AI to actually understand what it's going to build.</p>
<blockquote>
<p>When you're directing thousands of lines of code generation, you need a forcing function that makes you actually understand what's being built. Tests are that forcing function. -- Martin Fowler</p>
</blockquote>
<p>We don't even need to write the tests line by line. Given the existing tests in the system, we can instruct the AI to build new tests based on existing patterns and feed it our requirements for the expected output. It needs to write the test first and run it, watch it fail, then write the code and run the test again, fixing the code in a loop until everything is green. With AI, TDD is easier than ever.</p>
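<p>Here is a minimal sketch of that loop in Go. The function and the discount rule are made up for illustration; the point is that the assertions exist, and fail, before the implementation does, and a passing run is the definition of done we hand to the AI.</p>
<pre><code class="lang-go">package main

import "fmt"

// Hypothetical requirement: orders of 100.0 or more get a 10% discount.
// In a TDD flow, the assertions in main are written and run (and fail)
// before ApplyDiscount exists; the AI's job is then to make them pass.
func ApplyDiscount(total float64) float64 {
    if total >= 100.0 {
        return total * 0.9
    }
    return total
}

func main() {
    // The "test" that defines done, written before the implementation.
    cases := []struct{ in, want float64 }{
        {100.0, 90.0}, // discounted
        {50.0, 50.0},  // below the threshold, unchanged
    }
    for _, c := range cases {
        if got := ApplyDiscount(c.in); got != c.want {
            fmt.Printf("FAIL: ApplyDiscount(%v) = %v, want %v\n", c.in, got, c.want)
            return
        }
    }
    fmt.Println("all green")
}
</code></pre>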
<h3>Context Engineering</h3>
<p>AI providers keep improving models to handle larger and larger context windows, but that doesn't mean we can put the whole codebase into one prompt. Models don't know where to focus their attention when the provided context is too large. In enterprises, codebases can run to millions of lines of code, not to mention that logs and real production data can be large as well.</p>
<p>Also, code is not everything; logs, metrics, ticket systems, and human discussions are inputs as well. We need to carefully compose good context for a task: that can mean referencing the relevant ticket, including meeting notes on the solution, querying related data and logs, and embedding them into the prompt. This process is often called context engineering; in an agentic setup, it becomes one part of a broader “harness engineering” discipline that also covers tools, repositories, and feedback loops.</p>
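<p>As a rough sketch of what composing context can look like in practice (every name and value below is illustrative, not from a real system), the idea is simply to gather the relevant pieces and label them clearly before the actual request:</p>
<pre><code class="lang-go">package main

import (
    "fmt"
    "strings"
)

// TaskContext gathers the inputs listed above: the ticket, human
// discussion, a narrow slice of logs, and only the files in scope --
// not the whole codebase. All field contents are placeholders.
type TaskContext struct {
    Ticket   string
    Notes    string
    LogLines []string
    Files    []string
}

// Compose renders the pieces into one prompt with clear section
// headers, so the model knows what each chunk of context is.
func (c TaskContext) Compose(request string) string {
    var b strings.Builder
    b.WriteString("## Ticket\n" + c.Ticket + "\n")
    b.WriteString("## Meeting notes\n" + c.Notes + "\n")
    b.WriteString("## Relevant logs\n" + strings.Join(c.LogLines, "\n") + "\n")
    b.WriteString("## Files in scope\n" + strings.Join(c.Files, "\n") + "\n")
    b.WriteString("## Task\n" + request + "\n")
    return b.String()
}

func main() {
    ctx := TaskContext{
        Ticket:   "PAY-123: refunds fail for partial captures",
        Notes:    "Agreed to retry once, then alert.",
        LogLines: []string{`level=error msg="capture not found"`},
        Files:    []string{"internal/payments/refund.go"},
    }
    fmt.Println(ctx.Compose("Fix the refund flow and add a regression test."))
}
</code></pre>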
<h3>Observability and Debugging</h3>
<p>I mapped my debugging skills into an agent and shared it with the team. In general, it's the skill of looking for related data in logs, metrics, and traces. In detail, it means grabbing traceable identifiers like traceId and entityId, finding the related logs to query in our Elasticsearch cluster, locating the code that produces those logs, asking for a real piece of data, and looking for related tickets and discussions around the issue. That expertise, collected over years of working on the system, is not something AI can easily establish without a human to orchestrate it. Not to mention production access: we can't give it to AI directly. Who knows what a hallucinating AI agent might do.</p>
<p>As a result, this AI agent alone reduced my MTTR by 12x, from 1 hour to 5 minutes. In most cases, within 5 minutes I can identify what is wrong with the incident we're facing, compared to hours of investigation before the AI age.</p>
<p>But if I stop feeding my skills to AI, stop providing the correct data through my authority, and stop judging the AI's investigation results, the output becomes meaningless. As I keep improving, the AI also improves through my discoveries.</p>
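<p>To make the first step of that skill concrete, here is a toy Go sketch: pull a traceId out of a free-form message, then turn it into an Elasticsearch-style term query. The ID pattern, index layout, and field names are assumptions for illustration, not our real setup.</p>
<pre><code class="lang-go">package main

import (
    "encoding/json"
    "fmt"
    "regexp"
)

// traceIDPattern pulls a traceId out of free-form text (an error
// message, a ticket comment). The pattern is illustrative; real
// trace IDs depend on your tracing setup.
var traceIDPattern = regexp.MustCompile(`traceId=([0-9a-f]{8,})`)

func extractTraceID(s string) string {
    m := traceIDPattern.FindStringSubmatch(s)
    if m == nil {
        return ""
    }
    return m[1]
}

// buildLogQuery renders an Elasticsearch-style term query for that
// trace. The field names here are assumptions for this sketch.
func buildLogQuery(traceID string) string {
    q := map[string]any{
        "query": map[string]any{
            "term": map[string]string{"traceId": traceID},
        },
        "sort": []map[string]string{{"@timestamp": "asc"}},
    }
    b, _ := json.Marshal(q) // maps of strings always marshal cleanly
    return string(b)
}

func main() {
    id := extractTraceID(`payment failed traceId=9f86d081a3b4c5d6 entityId=42`)
    fmt.Println(buildLogQuery(id))
}
</code></pre>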
<h2>Metrics</h2>
<p>Technical expertise sets the foundation, but metrics are how we keep score. Here's where the human edge becomes measurable.</p>
<h3>Coding metrics</h3>
<p>Thanks to AI, we can now produce more options and more solutions to the same problem. For code, there are some simple metrics we can measure right after solutions are provided, like Cognitive Complexity.</p>
<blockquote>
<p>Cognitive Complexity specifically measures how difficult code is for humans to understand, heavily penalizing nested control structures like if statements, loops, and switches.</p>
</blockquote>
<p>We can reinforce it through instructions, but sometimes I still need to edit the code manually to maintain readability. Other metrics we can use are Cyclomatic Complexity, Maximum Nesting Level, and Duplicated Lines (%).</p>
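<p>A small made-up Go example of what that penalty means: the two functions below behave identically, but the flat version with guard clauses scores far lower on Cognitive Complexity because nothing is nested.</p>
<pre><code class="lang-go">package main

import (
    "errors"
    "fmt"
)

// Nested version: each level of nesting adds to Cognitive Complexity.
func canShipNested(inStock, paid bool, address string) (bool, error) {
    if inStock {
        if paid {
            if address != "" {
                return true, nil
            }
            return false, errors.New("missing address")
        }
        return false, errors.New("not paid")
    }
    return false, errors.New("out of stock")
}

// Flat version with guard clauses: same behavior, lower complexity,
// and the kind of edit I often still make by hand after an AI pass.
func canShip(inStock, paid bool, address string) (bool, error) {
    if !inStock {
        return false, errors.New("out of stock")
    }
    if !paid {
        return false, errors.New("not paid")
    }
    if address == "" {
        return false, errors.New("missing address")
    }
    return true, nil
}

func main() {
    ok, err := canShip(true, true, "12 Main St")
    fmt.Println(ok, err)
}
</code></pre>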
<h3>Product metrics</h3>
<p>As we developers step into higher-value work, product-centric metrics are what we need to be aware of and contribute to beyond the coding step. Metrics like feature adoption, clicks, retention, conversion, and search matching rate are usually the product team's KPIs, but understanding them brings developers closer to the customer. It's what companies usually dream of: a product engineer.</p>
<blockquote>
<p>A product engineer (or product-minded engineer) blends software engineering with product thinking. They code features while considering user needs, business impact, and data, beyond pure tech tasks.</p>
</blockquote>
<h3>Service Level Metrics</h3>
<p>Service Level Metrics are the most important ones for developers. They're tied to the contracts we sign with customers, so they must be satisfied; a breach has a direct impact on the company. An example:</p>
<table>
<thead>
<tr>
<th>#</th>
<th>Service</th>
<th>SLI (Measured)</th>
<th>SLO (Target)</th>
<th>SLA (Promise)</th>
</tr>
</thead>
<tbody><tr>
<td>1</td>
<td>Web Latency</td>
<td>p95=200ms</td>
<td>&lt;300ms 99%</td>
<td>&lt;500ms 99%</td>
</tr>
<tr>
<td>2</td>
<td>Login Time</td>
<td>Avg=1.5s</td>
<td>&lt;2s 95%</td>
<td>&lt;3s 99%</td>
</tr>
<tr>
<td>3</td>
<td>Checkout</td>
<td>99.2% success</td>
<td>&gt;=99.95%/7d</td>
<td>&gt;=99.9%</td>
</tr>
<tr>
<td>4</td>
<td>Email Delivery</td>
<td>97% inboxed</td>
<td>&gt;=99%/day</td>
<td>&gt;=98% or refund</td>
</tr>
<tr>
<td>5</td>
<td>Support Tickets</td>
<td>Avg=30min</td>
<td>&lt;45min 95%</td>
<td>&lt;1hr response</td>
</tr>
<tr>
<td>6</td>
<td>Cache Hit</td>
<td>85% hits</td>
<td>&gt;=90%/hour</td>
<td>&gt;=85%</td>
</tr>
<tr>
<td>7</td>
<td>Uptime</td>
<td>99.92%</td>
<td>&gt;=99.95%/mo</td>
<td>&gt;=99.9% or refund</td>
</tr>
</tbody></table>
<p>Take this example from the internet to understand it better:</p>
<blockquote>
<p>During a traffic spike: SLI drops to 99.7% (outages from nested if bugs in your API). SLO breached → investigate, error budget exhausted → prioritize fixes over features. SLA holds (still &gt;99.9%), avoiding penalties. Ties back to code reviews: Low Cognitive Complexity prevents such failures.</p>
</blockquote>
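<p>The bookkeeping behind "error budget exhausted" is simple enough to sketch. Using the uptime row from the table above (SLO 99.95%, measured 99.92%):</p>
<pre><code class="lang-go">package main

import "fmt"

// errorBudget returns how much error budget is left, given an SLO
// target and a measured availability, both as fractions of 1.
// A negative result means the SLO is breached.
func errorBudget(slo, measured float64) float64 {
    allowed := 1 - slo    // failure rate the SLO permits
    spent := 1 - measured // failure rate actually observed
    return allowed - spent
}

func main() {
    // Uptime row of the table: SLO 99.95%, measured 99.92%.
    remaining := errorBudget(0.9995, 0.9992)
    fmt.Printf("budget remaining: %+.4f%%\n", remaining*100)
    if remaining < 0 {
        fmt.Println("SLO breached: prioritize fixes over features")
    }
}
</code></pre>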
<p>Metrics lead all the way back to decision-making when prompting, reviewing, and refining code to ensure business value. We haven't given AI this level of thinking yet, and AI doesn't go to a client and sign the SLA contract, nor does it take charge of the penalties. Only humans are accountable for that. It leaves us with questions like:</p>
<ul>
<li><p>Should we implement this feature at all? What is the benefit?</p>
</li>
<li><p>Among 3 proposed solutions, what technical debt can we tolerate?</p>
</li>
<li><p>The solution is clear, but it requires 48 hours of work for a low-value ticket, is it worth it?</p>
</li>
<li><p>Our biggest client can produce 30M events at peak, what do we need to do to ensure the SLA?</p>
</li>
</ul>
<h2>Ownership and Accountability Mindset</h2>
<p>AI can't be assigned to PagerDuty and join an incident call at 3:00 AM. It can help in the incident investigation process, but do customers want to show up on a call with an AI agent during an incident? Definitely not. When SLAs are breached and penalties land, that weight falls on a human, and always will.</p>
<p>Based on the foundation of technical expertise and metrics-centric development above, we can see that the more AI absorbs the doing, the more valuable the person who takes responsibility becomes.</p>
<h2>Communication and Collaboration</h2>
<p>One of my favorite communication skills is translating technically focused discussions into language that product people can understand, while keeping it relevant and precise. People love the summarize button, but who can ensure that the summary is still correct? Only the person sitting between these layers can take responsibility for it.</p>
<p>At this point, I think we can all agree that we can't just prompt all the way to production, and on the other side, engineers can't work only on code when AI can do that part more efficiently. Pure software engineers are elevating into product engineers, who can orchestrate the technical components to satisfy requirements from the product and the customer, and who can also communicate well between stakeholders.</p>
<img src="https://static.wikia.nocookie.net/matrix/images/f/f3/Architect_%26_neo.png/revision/latest" alt="Neo and the Architect" style="display:block;margin:0 auto" />

<p><em>I remember the scene from The Matrix Reloaded where Neo tells the Architect that the machines need human beings to survive. Even though AI has assimilated most human knowledge, without us, it would stagnate, waiting for new human discoveries. We are not becoming obsolete. We are becoming the ones who decide what gets built, why it matters, and who answers for it.</em></p>
]]></content:encoded></item><item><title><![CDATA[Don't start your system with microservice]]></title><description><![CDATA[The idea is simple: you have yourself and probably a few devs, you should focus on functionality instead of scaling. Don’t let the fear of a million users on day one open the door and try your website at the same time. Even with that, the vertical sc...]]></description><link>https://nguyengineer.dev/dont-start-your-system-with-microservice</link><guid isPermaLink="true">https://nguyengineer.dev/dont-start-your-system-with-microservice</guid><category><![CDATA[Microservices]]></category><category><![CDATA[monolith]]></category><category><![CDATA[architecture]]></category><category><![CDATA[Go Language]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Fri, 30 May 2025 08:11:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748592635254/c117adff-497b-4f47-9417-c620345b6cc0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The idea is simple: you have yourself and probably a few devs, you should focus on functionality instead of scaling. Don’t let the fear of a million users on day one open the door and try your website at the same time. Even with that, the vertical scaling, stateless server, and a good old proxy like Traefik are enough to hold the door.</p>
<p>I worked on a few projects that later became a maintenance disaster, some of them luckily rescued by a ton of time and money, but the technical debt haunted them for years. We don’t have to live like that.</p>
<h2 id="heading-here-is-how-we-should-build-a-new-system">Here is how we should build a new system.</h2>
<p>Let’s start with a majestic monolith, key requirements:</p>
<ul>
<li><p>Follow clean architecture, where your business logic has nothing to do with the infrastructure.</p>
<ul>
<li>Why? That’s separation of concerns. Later, you can easily split it into microservices; only the infrastructure changes, while the business logic stays the same, regardless of whether you run it on a VM, a k8s pod, a Lambda function, a cloud function…</li>
</ul>
</li>
<li><p>Run your app on one single server at first, written in any language that can max out all the server's resources. I would say Go, or C#, or Java, and never Node.js or Python. Never build your monolith with Node.js, please. Ask yourself: can this Node.js app max out a 64-core CPU and 196 GB of RAM, running for 6 months without interruption? If the answer is no, just don’t use JS for the backend.</p>
<ul>
<li>Why? I came a long way here to tell you not to use Node.js for your monolith because I have felt all the pain from this language and its ecosystem over 7 years. It may be fine for a microservice, a serverless function, or a small-scale project. But in the long term, the thing we end up scaling is not the business, but the JS problems.</li>
</ul>
</li>
<li><p>Use a cloud-based managed database, so you don’t have to think about scaling your database (yet). Postgres or MongoDB is enough for most use cases.</p>
<ul>
<li>Why? You don’t have to think about sharing a database between microservices, or giving each microservice its own database, because there are no microservices at all. The cloud provider handles the scaling for us; if we stress one table, it won’t take down the whole database.</li>
</ul>
</li>
<li><p>Stateless server</p>
<ul>
<li><p>Why? We already have a single-server monolith. That doesn’t mean we can’t have many instances. Here is the beauty of it: if you need to split a hot spot (like the whole user domain) into a microservice, simply put a proxy like Traefik or a load balancer in front, spin up a few servers with the exact same codebase, and configure the proxy to re-route the traffic for that domain to those servers. Ka-boom, you still get microservices when needed.</p>
<ul>
<li>We should not use sticky sessions or similar tricks; let’s use token-based authentication, so a request can be served by any of your servers.</li>
</ul>
</li>
</ul>
</li>
<li><p>Use message bus for async work</p>
<ul>
<li><p>This is the fun part, where the debate gets hottest. You don’t need a swarm of Kubernetes pods pulling messages from the message bus and calling a lot of microservices to complete the work. Even when you can, and several companies have already gone this way, it is still not recommended.</p>
</li>
<li><p>Solution: The business logic of the async work stays in the core; just expose it through an API. Configure Pub/Sub to push instead of pull: when you publish a message to the message bus, it pushes the message to the API. These endpoints allow long-running tasks, which process the work and then ack the message. It works well with message ordering, and you can adjust the processing rate too: if your service is stressed, Pub/Sub will notice and decrease the rate of messages sent to your server. So beautiful.</p>
</li>
<li><p>If the tasks are intensive, spin up another server just for the async work.</p>
</li>
</ul>
</li>
</ul>
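<p>A minimal Go sketch of that push-based setup (the envelope shape follows Google Cloud Pub/Sub's push format; the handler path and the order-processing logic are invented). The HTTP status code is the ack: a 2xx acknowledges the message, while a 5xx tells the broker to redeliver later, which is exactly how the backpressure described above works.</p>
<pre><code class="lang-go">package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/http/httptest"
    "strings"
)

// pushMessage mirrors the envelope a push-based broker such as Google
// Cloud Pub/Sub POSTs to your endpoint (fields trimmed for this sketch).
type pushMessage struct {
    Message struct {
        Data      []byte `json:"data"` // JSON base64 decodes into []byte
        MessageID string `json:"messageId"`
    } `json:"message"`
}

// handlePush exposes the async work as an API: the broker pushes here,
// the core business logic runs, and the response status is the ack.
func handlePush(w http.ResponseWriter, r *http.Request) {
    var m pushMessage
    if err := json.NewDecoder(r.Body).Decode(&m); err != nil {
        http.Error(w, "bad envelope", http.StatusBadRequest)
        return
    }
    if err := processOrder(m.Message.Data); err != nil {
        http.Error(w, "retry later", http.StatusServiceUnavailable) // nack
        return
    }
    w.WriteHeader(http.StatusNoContent) // ack
}

// processOrder stands in for the real business logic living in the core.
func processOrder(data []byte) error {
    fmt.Printf("processing %d bytes\n", len(data))
    return nil
}

func main() {
    // Simulate one push delivery ("aGVsbG8=" is base64 for "hello").
    body := strings.NewReader(`{"message":{"data":"aGVsbG8=","messageId":"1"}}`)
    rec := httptest.NewRecorder()
    handlePush(rec, httptest.NewRequest(http.MethodPost, "/tasks/orders", body))
    fmt.Println("status:", rec.Code)
}
</code></pre>
<p>In the sketch, <code>main</code> simulates a single push delivery with <code>httptest</code> instead of starting a real server.</p>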
<p>Later, you will realize most of the microservices are unnecessary. The internal function call is much faster than the API call, not to mention the network cost, latency, and a million things that can happen in the network.</p>
<p>The database gets stressed in any architecture anyway; I rarely see the code itself become the bottleneck. Most API work is just some data manipulation, waiting for a database query, or waiting for a third-party service. Time spent on scaling microservices should be spent on scaling the database.</p>
<h2 id="heading-when-you-need-a-microservice">When you need a microservice</h2>
<p>Things that use CPU or RAM intensively need to run in a separate service so they don’t affect the rest of the system:</p>
<ul>
<li><p>Composing emails, processing HTML and PDFs</p>
</li>
<li><p>Image/video processing</p>
</li>
<li><p>AI model</p>
</li>
</ul>
<p>Assume you are a CTO or a tech lead and you need many people to work on the same project. You still don’t need to start with microservices on day one. You will start to separate the teams by domain (by concern). Once a domain is assigned to a team, we don’t care whether it is internally a monolith or whatever architecture it uses; we only care about the interface it exposes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748592580232/b1a2570c-e7df-4516-9047-939dc58084fe.jpeg" alt="Conway's Law" class="image--center mx-auto" /></p>
<p>The thing we should focus on here is delivering value with the least effort, while enjoying the simplicity and cleverness of the design.</p>
]]></content:encoded></item><item><title><![CDATA[Self-host your Wakatime stats for (almost) free with Supabase Postgres + Google Cloud Run]]></title><description><![CDATA[In my previous post, I introduce the project Wakapi - an alternative to Wakatime but it totally free, it mean you have to self hosted it somewhere.
https://hashnode.com/post/cm3i8c9mp001609ku3rhieuku
 
The database
Wakapi provide several choices of d...]]></description><link>https://nguyengineer.dev/self-host-your-wakatime-stats-for-almost-free-with-supabase-postgres-google-cloud-run</link><guid isPermaLink="true">https://nguyengineer.dev/self-host-your-wakatime-stats-for-almost-free-with-supabase-postgres-google-cloud-run</guid><category><![CDATA[wakatime]]></category><category><![CDATA[wakapi]]></category><category><![CDATA[#cloudrun]]></category><category><![CDATA[google cloud]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[supabase]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Fri, 15 Nov 2024 07:54:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731659047290/5da37d5d-be07-4c88-ae3d-e39586163868.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my previous post, I introduced the project Wakapi - an alternative to Wakatime that is totally free, which means you have to self-host it somewhere.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://hashnode.com/post/cm3i8c9mp001609ku3rhieuku">https://hashnode.com/post/cm3i8c9mp001609ku3rhieuku</a></div>
<p> </p>
<h2 id="heading-the-database">The database</h2>
<p>Wakapi provides several database options.</p>
<blockquote>
<h3 id="heading-supported-databases">Supported databases</h3>
<p>Wakapi uses <a target="_blank" href="https://gorm.io/">GORM</a> as an ORM. As a consequence, a set of different relational databases is supported.</p>
<ul>
<li><p><a target="_blank" href="https://sqlite.org/">SQLite</a> (<em>default, easy setup</em>)</p>
</li>
<li><p><a target="_blank" href="https://hub.docker.com/_/mysql">MySQL</a> (<em>recommended, because most extensively tested</em>)</p>
</li>
<li><p><a target="_blank" href="https://hub.docker.com/_/mariadb">MariaDB</a> (<em>open-source MySQL alternative</em>)</p>
</li>
<li><p><a target="_blank" href="https://hub.docker.com/_/postgres">Postgres</a> (<em>open-source as well</em>)</p>
</li>
<li><p><a target="_blank" href="https://www.cockroachlabs.com/docs/stable/install-cockroachdb-linux.html">CockroachDB</a> (<em>cloud-native, distributed, Postgres-compatible API</em>)</p>
</li>
<li><p><a target="_blank" href="https://hub.docker.com/_/microsoft-mssql-server">Microsoft SQL Server</a> (<em>Microsoft SQL Server</em>)</p>
</li>
</ul>
</blockquote>
<p>I was looking for a free service to host my database and discovered that the Supabase free tier is sufficient for this small application. They offer a managed Postgres service, which I really appreciate as I love using Postgres.</p>
<p>Get started with:</p>
<ul>
<li><p>Unlimited API requests</p>
</li>
<li><p>50,000 monthly active users</p>
</li>
<li><p>500 MB database space</p>
<p>  Shared CPU • 500 MB RAM</p>
</li>
<li><p>5 GB bandwidth</p>
</li>
<li><p>1 GB file storage</p>
</li>
<li><p>Community support</p>
</li>
</ul>
<p>Supabase is a company, and it needs to make a profit. I don't encourage spamming their free tier; we only use it here as an experiment. Also, remember to back up your data, because we don’t know when they might stop offering the free plan.</p>
<p>To get started with the free package, follow these steps:</p>
<ol>
<li><p>Visit <a target="_blank" href="https://supabase.com/pricing">https://supabase.com/pricing</a> and register for an account.</p>
</li>
<li><p>Create a project on the platform.</p>
</li>
<li><p>On the home page, click the "Connect" button to obtain the connection string.</p>
</li>
</ol>
<p>Make sure to copy the connection string for Session mode, as it supports concurrency and is suitable for a long-running app. Transaction mode is only appropriate for one-off commands.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731653948300/ef0925c2-d2fd-4215-acd4-3e2096423964.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-wakapi-app-on-google-cloud">The Wakapi app on Google Cloud</h2>
<p>This is the hard part. You need a credit card to continue 😆</p>
<p><img src="https://i.pinimg.com/736x/ca/42/4c/ca424c76464672e6a9f63bd1039379aa.jpg" alt="Picture memes g7G8wky07 — iFunny" /></p>
<p>However, if you can afford it, it's really cheap.</p>
<p>Cloud Run is incredibly affordable for low workloads. I created a new Google Cloud account to experiment with Kubernetes, but in October I only used a single Cloud Run instance to host my Wakapi. The total cost for that was 2,733 VND, which is approximately $0.11. That's why I say it feels almost free. Please ignore the free trial credit in the image below, as it expires in three months. However, the monthly fee for Cloud Run remains minimal.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731654880154/267c6681-73e0-404f-a0fa-f549e02aa18d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-how-to-set-up-your-app">How to set up your app</h2>
<p><strong>Steps:</strong></p>
<ol>
<li><p>Pull the original Docker image.</p>
</li>
<li><p>Push it to your Google Cloud Artifact Registry.</p>
</li>
<li><p>Create a Cloud Run instance that uses the most recent image you just pushed.</p>
</li>
<li><p>Set up the necessary configurations.</p>
</li>
</ol>
<p><strong>Pull the docker image</strong></p>
<pre><code class="lang-bash"> docker pull --platform linux/amd64 ghcr.io/muety/wakapi:latest
</code></pre>
<p>Remember to pull with <code>--platform linux/amd64</code>. If you are on macOS with an Apple Silicon chip, Docker will otherwise pull the arm64 image, which you can’t use with Artifact Registry and Cloud Run.</p>
<p><strong>Push it to your Google Cloud Artifact Registry</strong></p>
<p>After you pull it, the image is tagged <code>ghcr.io/muety/wakapi:latest</code>. Re-tag it for your Google project, then push it:</p>
<pre><code class="lang-bash">docker tag ghcr.io/muety/wakapi:latest gcr.io/YOUR_GOOGLE_PROJECT_ID/wakapi:latest
docker push gcr.io/YOUR_GOOGLE_PROJECT_ID/wakapi:latest
</code></pre>
<p><strong>Create a Cloud Run instance that uses the most recent image you just pushed.</strong></p>
<p>Create a new Cloud Run service, deploying from a container image.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731655974389/41e09915-4217-4cdd-b33e-d1749e74528a.png" alt class="image--center mx-auto" /></p>
<p>Provide the required settings:</p>
<ul>
<li><p>Select the image you just pushed</p>
</li>
<li><p>App entry point</p>
</li>
<li><p>2 GiB of RAM and 2 CPUs are more than enough</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731655922809/a174b6e6-5ca1-4fd7-b2c8-230cee0ea9f2.png" alt class="image--center mx-auto" /></p>
<p><strong>Set up the necessary configurations.</strong></p>
<p>Don’t click Deploy yet. Remember your database secret? Wakapi documents all its configuration options here: <a target="_blank" href="https://github.com/muety/wakapi?tab=readme-ov-file#-configuration-options">https://github.com/muety/wakapi?tab=readme-ov-file#-configuration-options</a></p>
<p>You can set it however you want. Here is mine.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731659235449/5e3db3cd-9356-4fa4-8760-00970011179c.png" alt class="image--center mx-auto" /></p>
<p>Example:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Database Configuration</span>
<span class="hljs-built_in">export</span> WAKAPI_DB_AUTOMIGRATE_FAIL_SILENTLY=<span class="hljs-literal">true</span>
<span class="hljs-built_in">export</span> WAKAPI_DB_TYPE=postgres
<span class="hljs-built_in">export</span> WAKAPI_DB_HOST=aws-0-ap-southeast-1.redacted.supabase.co
<span class="hljs-built_in">export</span> WAKAPI_DB_PORT=5432
<span class="hljs-built_in">export</span> WAKAPI_DB_USER=postgres.redacted
<span class="hljs-built_in">export</span> WAKAPI_DB_NAME=postgres
<span class="hljs-built_in">export</span> WAKAPI_DB_PASSWORD=********  <span class="hljs-comment"># Replace with your actual database password</span>

<span class="hljs-comment"># Application Settings</span>
<span class="hljs-built_in">export</span> WAKAPI_LISTEN_IPV4=0.0.0.0
<span class="hljs-built_in">export</span> WAKAPI_ALLOW_SIGNUP=<span class="hljs-literal">false</span>
<span class="hljs-built_in">export</span> WAKAPI_INSECURE_COOKIES=<span class="hljs-literal">true</span>

<span class="hljs-comment"># Security Settings</span>
<span class="hljs-built_in">export</span> WAKAPI_PASSWORD_SALT=********  <span class="hljs-comment"># Replace with your secure random salt</span>
<span class="hljs-built_in">export</span> WAKAPI_SENTRY_DSN=********    <span class="hljs-comment"># Replace with your Sentry DSN if using error tracking</span>

<span class="hljs-comment"># Environment Type</span>
<span class="hljs-built_in">export</span> ENVIRONMENT=production
</code></pre>
<p>The most crucial configuration is the database secret. Once you're satisfied with your settings, click "Deploy" and wait a few minutes for your Cloud Run service to boot up. At the top of the page, you will see a URL; this is your public URL for accessing your Wakapi. You also have the option to purchase a domain and point it to your Cloud Run service. Google <code>Cloud Run domain mapping</code> for further instructions.</p>
<h2 id="heading-setting-up-the-client"><strong>Setting Up the Client</strong></h2>
<p>Wakapi uses the same client as Wakatime, which means that all Wakatime plugins can send data to Wakapi. You have the option to disable data sending to Wakatime and send data exclusively to Wakapi, and that is completely acceptable.</p>
<p>To get started, navigate to your deployed Cloud Run URL and register your first account. Make sure to retrieve the API key and update it in the client config on your machine, located at <code>~/.wakatime.cfg</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731656753259/ec6b7107-bfbe-4326-98e0-6f0a97db14f0.png" alt class="image--center mx-auto" /></p>
<p>That’s pretty much it. If you run into any issues, let me know in the comments.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Thanks to <a target="_self" href="https://wakatime.com/">Wakatime</a> for open-sourcing your plugins, and thanks to <a target="_self" href="https://github.com/muety">Ferdinand Mütsch</a> for making this project.</div>
</div>]]></content:encoded></item><item><title><![CDATA[Wakapi: my first contribution to the open source]]></title><description><![CDATA[I have been using Wakatime since 2017, but I stopped using it in recent years because the free plan, which only provides a two-week range report, is not very helpful. I primarily relied on it for the yearly report to see how much I coded over the pas...]]></description><link>https://nguyengineer.dev/wakapi-my-first-contribution-to-the-open-source</link><guid isPermaLink="true">https://nguyengineer.dev/wakapi-my-first-contribution-to-the-open-source</guid><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Fri, 15 Nov 2024 04:19:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731640229146/1523069a-22af-4416-8dc3-e869ee80255a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I have been using Wakatime since 2017, but I stopped using it in recent years because the free plan, which only provides a two-week range report, is not very helpful. I primarily relied on it for the yearly report to see how much I coded over the past year, just for fun.</p>
<p>I realized that I was freely providing my valuable coding statistics to them; as a free service, I became the product. I’ve been searching for alternatives and found Wakapi. Wakapi is an open-source project that sends data to your self-hosted server using the same plugins and protocols as Wakatime. It’s quite an ingenious solution. Additionally, since the Wakatime plugin is open-source, we can modify it to send data anywhere we want, and that's perfectly legal.</p>
<p>I set up my server and used it for a while, and I found it fascinating. I started thinking about contributing to it, having noticed several small problems with this alternative, such as performance issues and a lack of features. So, I looked for a good first issue from the list to start with:</p>
<p><a target="_blank" href="https://github.com/muety/wakapi/issues">https://github.com/muety/wakapi/issues</a></p>
<p>I'm not familiar with the codebase yet, so working on an issue that requires me to walk through it would be a good starting point, rather than just updating the README. The code is written in Go, and Go developers tend to utilize the standard library as much as possible.</p>
<p>In line with the preferences of the repository owner, I think I can do some refactoring, and I found a suitable ticket for that. The issue isn't difficult, but it does involve a lot of changes. Specifically, it requires replacing the <code>logbuch</code> library with <code>slog</code>. Structured logging was introduced in Go version 1.21, making many structured logging libraries obsolete. Therefore, replacing third-party libraries like <code>logbuch</code> is an obvious choice.</p>
<p>I picked this issue: <a target="_blank" href="https://github.com/muety/wakapi/issues/480">Migrate from <code>logbuch</code> to <code>slog</code> #480</a></p>
<p><a target="_blank" href="https://github.com/muety/wakapi/issues/480"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731642253546/27fc443f-7b17-40de-8b1c-919cb7f7552b.png" alt class="image--center mx-auto" /></a></p>
<p>Collaborating with the repository owner fascinates me as well. After a few back-and-forth PR updates, the pull request was merged. Yay 🎉🎉</p>
<p>Following up with a few more discussions and an additional pull request, the issue was resolved.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731642940227/56a7cb0a-91a4-4c47-baa2-9ed1e27d5d9e.png" alt class="image--center mx-auto" /></p>
<p>Contributing to this issue alone made me the 9th most significant contributor out of 50 on the project, probably thanks to the number of lines changed 😜</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731643095052/d0d64f41-c852-4079-95af-8b4f5ad21c44.png" alt class="image--center mx-auto" /></p>
<p>Thanks to <a target="_blank" href="https://github.com/muety">Ferdinand Mütsch</a>, who is such a nice guy. I still use Wakapi every day and have found many improvements I can implement. I just need to find the time.</p>
<h3 id="heading-my-take-away-to-start-contribute-to-oss">My takeaways for starting to contribute to OSS</h3>
<ul>
<li><p><strong>Journey</strong></p>
<p>  Let’s dive into your project. Identify the challenges you face and seek open-source solutions. Utilize these solutions and pinpoint areas for improvement. If you’re familiar with the tech stack, consider reviewing the issue list or submitting an issue ticket to start collaborating with others.</p>
</li>
<li><p><strong>Expertise</strong></p>
<p>  To begin contributing, you need a certain level of expertise. The open-source software (OSS) you develop will be used by many people, potentially thousands or more, running the code you write on their own infrastructure or computers. We don’t want to create issues for them with any new bugs.</p>
</li>
<li><p><strong>Be Proactive</strong></p>
<p>  The time frame for a pull request (PR) should be within a few weeks; otherwise, we risk losing context. While it's great that we are using our free time to create open-source software, maintaining a professional approach is what everyone expects in a collaborative environment. Be proactive and assist the maintainer when needed.</p>
</li>
</ul>
<p>That's all for today. I hope you find inspiration to start your journey as an open source contributor. Here is my GitHub profile; let's connect for more interesting projects: <a target="_blank" href="https://github.com/finnng">https://github.com/finnng</a></p>
]]></content:encoded></item><item><title><![CDATA[Series Building a Chat System that Scales: A Developer's Journey]]></title><description><![CDATA[A Developer's Journey
I've always wanted to build a chat system, just for the joy of it. The original plan was simple: set up an old HP Elitedesk as a server, NAT the ports, point a domain to it, and share it with friends. But as I looked at today's ...]]></description><link>https://nguyengineer.dev/series-building-a-chat-system-that-scales-a-developers-journey</link><guid isPermaLink="true">https://nguyengineer.dev/series-building-a-chat-system-that-scales-a-developers-journey</guid><category><![CDATA[htmx]]></category><category><![CDATA[go-htmx]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[React]]></category><category><![CDATA[templ]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Sat, 09 Nov 2024 12:40:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731152494473/e9ad5a0a-c86b-4293-a2a2-1150202bb2c4.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-a-developers-journey">A Developer's Journey</h1>
<p>I've always wanted to build a chat system, just for the joy of it. The original plan was simple: set up an old HP Elitedesk as a server, NAT the ports, point a domain to it, and share it with friends. But as I looked at today's job market, I realized this could be more than just a fun project.</p>
<p>These days, job requirements read like a technical encyclopedia. Companies want developers who can:</p>
<ul>
<li><p>Master frontend technologies from service workers to real-time vanilla JS</p>
</li>
<li><p>Code proficiently in multiple languages (typically Go and JS) with at least 7 years of production experience</p>
</li>
<li><p>Be an expert in both SQL and NoSQL databases, plus columnar databases like Druid or Cassandra for analytics</p>
</li>
<li><p>Handle pub/sub systems like Kafka for microservices</p>
</li>
<li><p>Implement solutions like Debezium or Postgres listen/notify for replication lag</p>
</li>
<li><p>Set up comprehensive monitoring with logging, tracing, and metrics</p>
</li>
<li><p>Deploy and manage Kubernetes clusters (CKA certification preferred)</p>
</li>
<li><p>Build and maintain CI/CD pipelines</p>
</li>
<li><p>And of course, demonstrate experience with systems handling 100M+ requests per day</p>
</li>
</ul>
<p>And that's just to get past the CV screening - we haven't even gotten to the LeetCode challenges yet!</p>
<p>So, I decided to turn my chat system project into a learning journey. I'm setting an ambitious goal: build a system that can handle 100 million requests per day. Not just because it's a common job requirement, but because it's an excellent way to learn these technologies in a practical context.</p>
<p>In this series, I'll document my journey building a scalable chat application from the ground up. We'll cover everything from frontend implementation to deployment and scaling strategies. No shortcuts, no oversimplification - just real, hands-on experience with the tools and techniques that modern tech companies use.</p>
<p>Let's start with where users first interact with our system: the frontend.</p>
<h2 id="heading-the-unexpected-journey-back">The Unexpected Journey Back</h2>
<p>Back in 2012, when I started my career, Node.js was everywhere. The job market was flooded with Node.js opportunities, and I dove straight in. For the next decade, that's where I lived. In doing so, I completely missed the era where people built websites with PHP and jQuery - a gap that would later prove interesting in my HTMX journey.</p>
<h2 id="heading-beyond-pet-projects-building-real-systems">Beyond Pet Projects: Building Real Systems</h2>
<p>Everything changed when I started building a complete system rather than just another small app. Let me tell you - building a system is an entirely different beast from creating "pet" projects where performance, design, and scaling aren't critical concerns. It requires a bird's-eye view while still demanding attention to every line of code.</p>
<p>The scale of the project made me realize something crucial: I needed to minimize the technology stack I had to maintain. Fewer moving parts mean a more stable system. While our backend was solid (even with complex pieces like Kafka, Debezium, Postgres, Centrifugo, Go webserver, and k8s), the frontend remained our Achilles' heel, especially its build process. Despite my seven years of React experience, it still occasionally drives me crazy.</p>
<h2 id="heading-the-frontend-fatigue">The Frontend Fatigue</h2>
<p>Let's talk about our frontend journey - it's quite a tale:</p>
<ul>
<li><p>We needed Babel just to write code with new ES specs</p>
</li>
<li><p>Webpack became our daily wrestling partner</p>
</li>
<li><p>Node.js runtime version upgrades felt like walking through a minefield</p>
</li>
<li><p>Abandoned projects kept us up at night</p>
</li>
<li><p>Deprecated libraries became our regular headache</p>
</li>
<li><p>Security issues? Don't get me started</p>
</li>
</ul>
<p>And let's not forget how OOP implementation in JavaScript was a mess back then. In my opinion, most issues in the JS ecosystem stem from the language's inherent fragility – it created an environment where mistakes could slip by unnoticed.</p>
<h2 id="heading-the-revelation">The Revelation</h2>
<p>This realization led me to reconsider JavaScript's original purpose: a lightweight scripting language to enhance HTML's user experience, bridging the gap between solid backend-rendered HTML and the browser's flexibility. Maybe it was time to put JavaScript back where it belonged – on the client side, in a more focused role.</p>
<p>Here's what this looks like in practice:</p>
<h3 id="heading-before-the-react-way">Before (The React Way):</h3>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> UserProfile = <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-keyword">const</span> [user, setUser] = useState(<span class="hljs-literal">null</span>);
  <span class="hljs-keyword">const</span> [loading, setLoading] = useState(<span class="hljs-literal">true</span>);
  <span class="hljs-comment">// More state management...</span>

  useEffect(<span class="hljs-function">() =&gt;</span> {
    fetchUser()
      .then(<span class="hljs-function"><span class="hljs-params">data</span> =&gt;</span> setUser(data))
      .catch(<span class="hljs-function"><span class="hljs-params">err</span> =&gt;</span> setError(err));
  }, []);

  <span class="hljs-comment">// Complex rendering logic...</span>
};
</code></pre>
<h3 id="heading-after-the-htmx-go-way">After (The HTMX + Go Way):</h3>
<pre><code class="lang-javascript">&lt;!-- Simple, direct, <span class="hljs-attr">effective</span>: get user after the page loaded --&gt;
<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">hx-get</span>=<span class="hljs-string">"/api/user"</span> <span class="hljs-attr">hx-trigger</span>=<span class="hljs-string">"load"</span>&gt;</span>
  <span class="hljs-comment">&lt;!-- Server sends exactly what we need --&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
</code></pre>
<h2 id="heading-learning-htmx-a-personal-challenge">Learning HTMX: A Personal Challenge</h2>
<p>I took HTMX for a test drive in two projects to overcome the honeymoon phenomenon. I'll be honest - it was challenging at first, mainly because of my background. Remember how I mentioned missing the PHP and jQuery era? That gap became apparent. Early in my career, I was deep in mobile app and game development, working with React and React Native, before transitioning directly into backend development.</p>
<p>So, when it came to HTML-over-the-wire, it felt like learning to write with my left hand. Sending partial HTML and using a dedicated API for form validation? Updating the UI via a backend API? It all felt strange initially. But here's the thing - the more I work with it, the more I understand how it makes sense. It isn't that it's difficult; it's that my thinking was conditioned to use the more complicated way.</p>
<h2 id="heading-why-go-htmx-clicks">Why Go + HTMX Clicks</h2>
<p>Here's what I've discovered works brilliantly:</p>
<ol>
<li><p><strong>One Model to Rule Them All</strong></p>
<ul>
<li><p>No more juggling between frontend and backend models</p>
</li>
<li><p>No more omitting fields before sending to the frontend</p>
</li>
<li><p>Everything lives where it should - on the server</p>
</li>
</ul>
</li>
<li><p><strong>Go's Superpowers in Frontend Code</strong></p>
<ul>
<li><p>Imagine writing frontend code with Go's compiler watching your back</p>
</li>
<li><p>If it builds, most issues are already caught</p>
</li>
<li><p>No more undefined/null/empty string gymnastics</p>
</li>
</ul>
</li>
<li><p><strong>Development Joy</strong></p>
<p> Instead of:</p>
<pre><code class="lang-javascript"> <span class="hljs-comment">// Dealing with JavaScript uncertainty</span>
 <span class="hljs-keyword">const</span> userEmail = user &amp;&amp; user.email || <span class="hljs-string">''</span>;
</code></pre>
<p> We get (this is <code>Go templ</code> syntax):</p>
<pre><code class="lang-go"> &lt;span&gt;{ User.Email }&lt;/span&gt;
</code></pre>
<p> Because statically typed languages already have solid default values: <code>User.Email</code> is a string, so its default value is <code>""</code>. No more <code>null</code>, <code>undefined</code>, <code>''</code> madness.</p>
</li>
</ol>
<h2 id="heading-real-world-impact">Real-World Impact</h2>
<p>In our production environment, this approach has meant:</p>
<ul>
<li><p>Dramatically simpler deployment process</p>
</li>
<li><p>Faster feature implementation</p>
</li>
<li><p>Fewer moving parts to maintain</p>
</li>
<li><p>Better sleep at night (seriously!)</p>
</li>
</ul>
<h2 id="heading-looking-forward">Looking Forward</h2>
<p>This journey has taught me that sometimes, simpler really is better. While this doesn't mean we should abandon React or other frontend frameworks entirely – it depends on your needs – it's shown me a more sustainable path for certain types of applications.</p>
<p>In my next post, I'll dive deep into a real project built with this stack. I'll share the nitty-gritty details:</p>
<ul>
<li><p>How we structured our templates</p>
</li>
<li><p>Where HTMX really shines</p>
</li>
<li><p>The challenges we faced and overcame</p>
</li>
<li><p>Practical patterns we discovered along the way</p>
</li>
</ul>
<p>The web development world keeps evolving, and sometimes evolution means rediscovering what we left behind. Stay tuned for more concrete examples and detailed code walkthrough!</p>
]]></content:encoded></item><item><title><![CDATA[Replace string contains special character in Vim]]></title><description><![CDATA[A Simple Solution to a Tricky Problem
If you've ever tried to replace a string containing special characters in Vim, especially across multiple files, you know it can be a real headache. The usual search and replace commands often fall short, getting...]]></description><link>https://nguyengineer.dev/replace-string-contains-special-character-in-vim</link><guid isPermaLink="true">https://nguyengineer.dev/replace-string-contains-special-character-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[neovim]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Wed, 16 Oct 2024 13:00:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/SWfcRVm-o0E/upload/a828a2327f2fff146b4e12181187e415.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A Simple Solution to a Tricky Problem</p>
<p>If you've ever tried to replace a string containing special characters in Vim, especially across multiple files, you know it can be a real headache. The usual search and replace commands often fall short, getting tripped up by those pesky special characters.</p>
<p>After much trial and error, I've found a reliable method that makes this task straightforward. Here's how to do it:</p>
<ol>
<li><p>First, use fzf's live grep to find all occurrences of your string:</p>
<pre><code class="lang-bash"> :Rg <span class="hljs-string">"w.Header().Set(\"Content-Type\", \"text/html; charset=utf-8\")"</span>
</code></pre>
<p> This populates your quickfix list with all matches.</p>
</li>
<li><p>Now, here's the key command that does the heavy lifting:</p>
<pre><code class="lang-bash"> :cfdo %s/\V\Cw.Header().Set(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"text\/html; charset=utf-8"</span>)/w.Header().Set(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application\/json"</span>)/g
</code></pre>
<p> Let's break it down:</p>
<ul>
<li><p><code>cfdo</code>: Applies the command to all files in the quickfix list.</p>
</li>
<li><p><code>\V</code>: "Very nomagic" mode, which treats every character except the backslash literally.</p>
</li>
<li><p><code>\C</code>: Ensures case-sensitive matching.</p>
</li>
<li><p><code>\/</code>: Escapes forward slashes so they aren't read as the substitution delimiter.</p>
</li>
</ul>
</li>
<li><p>Finally, save all your changes:</p>
<pre><code class="lang-bash"> :wa
</code></pre>
</li>
</ol>
<p>And there you have it! This method reliably replaces your string, special characters and all, across your entire project.</p>
<p>No more wrestling with escape characters or pulling your hair out over missed replacements. With this approach, you can handle even the trickiest of string replacements in Vim with ease.</p>
<p>Give it a try next time you're faced with a challenging find-and-replace task. You might be surprised at how smoothly it goes!</p>
]]></content:encoded></item><item><title><![CDATA[Connect Cloud Run Services to VPC via Terraform]]></title><description><![CDATA[Introduction
In this guide, we'll walk through setting up two Cloud Run services - a public frontend and a private backend - using Terraform. We'll use Google Artifact Registry to store our Docker images. This setup demonstrates how to create a secur...]]></description><link>https://nguyengineer.dev/connect-cloud-run-services-to-vpc-via-terraform</link><guid isPermaLink="true">https://nguyengineer.dev/connect-cloud-run-services-to-vpc-via-terraform</guid><category><![CDATA[Terraform]]></category><category><![CDATA[#cloudrun]]></category><category><![CDATA[gcloud]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[vpc]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Wed, 28 Aug 2024 01:58:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724810153301/0ad78723-8a40-4b08-899a-b21d2829065c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>In this guide, we'll walk through setting up two Cloud Run services - a public frontend and a private backend - using Terraform. We'll use Google Artifact Registry to store our Docker images. This setup demonstrates how to create a secure, scalable microservices architecture on Google Cloud Platform.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p>A Google Cloud Platform account</p>
</li>
<li><p><code>gcloud</code> CLI installed and configured</p>
</li>
<li><p>Terraform installed</p>
</li>
<li><p>Docker installed</p>
</li>
<li><p>Git installed</p>
</li>
</ul>
<h2 id="heading-getting-started">Getting Started</h2>
<ol>
<li><p>Clone my demo repository:</p>
<pre><code class="lang-yaml"> <span class="hljs-string">git</span> <span class="hljs-string">clone</span> <span class="hljs-string">git@github.com:finnng/demo-cloud-run-vpc.git</span>
 <span class="hljs-string">cd</span> <span class="hljs-string">demo-cloud-run-vpc</span>
</code></pre>
</li>
<li><p>Set up your Google Cloud project:</p>
<pre><code class="lang-yaml"> <span class="hljs-string">export</span> <span class="hljs-string">PROJECT_ID=your-project-id</span>
 <span class="hljs-string">gcloud</span> <span class="hljs-string">config</span> <span class="hljs-string">set</span> <span class="hljs-string">project</span> <span class="hljs-string">$PROJECT_ID</span>
</code></pre>
</li>
<li><p>Enable necessary APIs:</p>
<pre><code class="lang-yaml"> <span class="hljs-string">gcloud</span> <span class="hljs-string">services</span> <span class="hljs-string">enable</span> <span class="hljs-string">run.googleapis.com</span> <span class="hljs-string">artifactregistry.googleapis.com</span> <span class="hljs-string">compute.googleapis.com</span>
</code></pre>
</li>
<li><p>Create a Google Artifact Registry repository:</p>
<pre><code class="lang-yaml"> <span class="hljs-string">gcloud</span> <span class="hljs-string">artifacts</span> <span class="hljs-string">repositories</span> <span class="hljs-string">create</span> <span class="hljs-string">cloud-run-demo</span> <span class="hljs-string">--repository-format=docker</span> <span class="hljs-string">--location=us-central1</span>
</code></pre>
</li>
</ol>
<h2 id="heading-building-and-pushing-docker-images">Building and Pushing Docker Images</h2>
<ol>
<li><p>Build and push the backend image:</p>
<pre><code class="lang-yaml"> <span class="hljs-string">cd</span> <span class="hljs-string">backend</span>
 <span class="hljs-string">docker</span> <span class="hljs-string">build</span> <span class="hljs-string">--platform</span> <span class="hljs-string">linux/amd64</span> <span class="hljs-string">-t</span> <span class="hljs-string">us-central1-docker.pkg.dev/$PROJECT_ID/cloud-run-demo/backend:v1</span> <span class="hljs-string">.</span>
 <span class="hljs-string">docker</span> <span class="hljs-string">push</span> <span class="hljs-string">us-central1-docker.pkg.dev/$PROJECT_ID/cloud-run-demo/backend:v1</span>
 <span class="hljs-string">cd</span> <span class="hljs-string">..</span>
</code></pre>
</li>
<li><p>Build and push the frontend image:</p>
<pre><code class="lang-yaml"> <span class="hljs-string">cd</span> <span class="hljs-string">frontend</span>
 <span class="hljs-string">docker</span> <span class="hljs-string">build</span> <span class="hljs-string">--platform</span> <span class="hljs-string">linux/amd64</span> <span class="hljs-string">-t</span> <span class="hljs-string">us-central1-docker.pkg.dev/$PROJECT_ID/cloud-run-demo/frontend:v1</span> <span class="hljs-string">.</span>
 <span class="hljs-string">docker</span> <span class="hljs-string">push</span> <span class="hljs-string">us-central1-docker.pkg.dev/$PROJECT_ID/cloud-run-demo/frontend:v1</span>
 <span class="hljs-string">cd</span> <span class="hljs-string">..</span>
</code></pre>
</li>
</ol>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Remember to set <code>--platform linux/amd64</code>, especially when you are building on an Apple Silicon (M1) Mac.</div>
</div>

<h2 id="heading-configuring-terraform">Configuring Terraform</h2>
<ol>
<li><p>Update <code>terraform.tfvars</code> with your project details:</p>
<pre><code class="lang-yaml"> <span class="hljs-string">project_id</span>     <span class="hljs-string">=</span> <span class="hljs-string">"your-project-id"</span>
 <span class="hljs-string">region</span>         <span class="hljs-string">=</span> <span class="hljs-string">"us-central1"</span>
 <span class="hljs-string">frontend_image</span> <span class="hljs-string">=</span> <span class="hljs-string">"us-central1-docker.pkg.dev/your-project-id/cloud-run-demo/frontend:v1"</span>
 <span class="hljs-string">backend_image</span>  <span class="hljs-string">=</span> <span class="hljs-string">"us-central1-docker.pkg.dev/your-project-id/cloud-run-demo/backend:v1"</span>
</code></pre>
</li>
<li><p>Initialize Terraform:</p>
<pre><code class="lang-yaml"> <span class="hljs-string">terraform</span> <span class="hljs-string">init</span>
</code></pre>
</li>
<li><p>Plan and apply the Terraform configuration:</p>
<pre><code class="lang-yaml"> <span class="hljs-string">terraform</span> <span class="hljs-string">plan</span>
 <span class="hljs-string">terraform</span> <span class="hljs-string">apply</span>
</code></pre>
</li>
</ol>
<h2 id="heading-understanding-the-configuration">Understanding the Configuration</h2>
<p>Let's look at some key parts of our Terraform configuration:</p>
<h3 id="heading-vpc-and-connector-setup">VPC and Connector Setup</h3>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"google_compute_network"</span> <span class="hljs-string">"vpc_network"</span> {
  <span class="hljs-string">name</span>                    <span class="hljs-string">=</span> <span class="hljs-string">"cloud-run-network"</span>
  <span class="hljs-string">auto_create_subnetworks</span> <span class="hljs-string">=</span> <span class="hljs-literal">false</span>
}

<span class="hljs-string">resource</span> <span class="hljs-string">"google_vpc_access_connector"</span> <span class="hljs-string">"connector"</span> {
  <span class="hljs-string">name</span>          <span class="hljs-string">=</span> <span class="hljs-string">"vpc-con"</span>
  <span class="hljs-string">ip_cidr_range</span> <span class="hljs-string">=</span> <span class="hljs-string">"10.8.0.0/28"</span>
  <span class="hljs-string">network</span>       <span class="hljs-string">=</span> <span class="hljs-string">google_compute_network.vpc_network.name</span>
  <span class="hljs-string">region</span>        <span class="hljs-string">=</span> <span class="hljs-string">var.region</span>
}
</code></pre>
<p>This creates a VPC network and a connector, allowing our Cloud Run services to communicate securely.</p>
<h3 id="heading-backend-service">Backend Service</h3>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"google_cloud_run_v2_service"</span> <span class="hljs-string">"backend"</span> {
  <span class="hljs-string">name</span>     <span class="hljs-string">=</span> <span class="hljs-string">"backend"</span>
  <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">var.region</span>

  <span class="hljs-string">template</span> {
    <span class="hljs-string">containers</span> {
      <span class="hljs-string">image</span> <span class="hljs-string">=</span> <span class="hljs-string">var.backend_image</span>
    }
    <span class="hljs-string">service_account</span> <span class="hljs-string">=</span> <span class="hljs-string">google_service_account.demo_backend_sa.email</span>
    <span class="hljs-string">vpc_access</span> {
      <span class="hljs-string">connector</span> <span class="hljs-string">=</span> <span class="hljs-string">google_vpc_access_connector.connector.id</span>
      <span class="hljs-string">egress</span>    <span class="hljs-string">=</span> <span class="hljs-string">"PRIVATE_RANGES_ONLY"</span>
    }
  }

  <span class="hljs-string">ingress</span> <span class="hljs-string">=</span> <span class="hljs-string">"INGRESS_TRAFFIC_INTERNAL_ONLY"</span>
}
</code></pre>
<p>Note the <code>egress = "PRIVATE_RANGES_ONLY"</code> setting, which allows the backend to make external API calls while maintaining security.</p>
<h3 id="heading-frontend-service">Frontend Service</h3>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"google_cloud_run_v2_service"</span> <span class="hljs-string">"frontend"</span> {
  <span class="hljs-string">name</span>     <span class="hljs-string">=</span> <span class="hljs-string">"frontend"</span>
  <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">var.region</span>

  <span class="hljs-string">template</span> {
    <span class="hljs-string">containers</span> {
      <span class="hljs-string">image</span> <span class="hljs-string">=</span> <span class="hljs-string">var.frontend_image</span>
      <span class="hljs-string">env</span> {
        <span class="hljs-string">name</span>  <span class="hljs-string">=</span> <span class="hljs-string">"BACKEND_URL"</span>
        <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-string">google_cloud_run_v2_service.backend.uri</span>
      }
    }
    <span class="hljs-string">service_account</span> <span class="hljs-string">=</span> <span class="hljs-string">google_service_account.demo_frontend_sa.email</span>
    <span class="hljs-string">vpc_access</span> {
      <span class="hljs-string">connector</span> <span class="hljs-string">=</span> <span class="hljs-string">google_vpc_access_connector.connector.id</span>
      <span class="hljs-string">egress</span>    <span class="hljs-string">=</span> <span class="hljs-string">"ALL_TRAFFIC"</span>
    }
  }
}
</code></pre>
<p>The frontend service is configured to access the backend securely through the VPC connector.</p>
<h2 id="heading-testing-the-setup">Testing the Setup</h2>
<p>After applying the Terraform configuration, you can test your setup:</p>
<ol>
<li><p>Get the URLs:</p>
<pre><code class="lang-yaml"> <span class="hljs-string">terraform</span> <span class="hljs-string">output</span> <span class="hljs-string">frontend_url</span>
 <span class="hljs-string">terraform</span> <span class="hljs-string">output</span> <span class="hljs-string">backend_url</span>
</code></pre>
</li>
<li><p>Open the URL in a web browser. You should see the frontend HTML, which means the frontend is publicly accessible.</p>
</li>
<li><p>To verify the backend is private, try accessing its URL directly (you can find it in the GCP Console). It should not be publicly accessible.</p>
</li>
<li><p>Now call https://{frontend_url}/request-backend. This request goes to the backend, which makes an external call to https://example.com and returns the HTML. You should see the content of example.com.</p>
</li>
</ol>
<h2 id="heading-cleaning-up">Cleaning Up</h2>
<p>To avoid incurring charges, remember to destroy the resources when you're done:</p>
<pre><code class="lang-yaml"><span class="hljs-string">terraform</span> <span class="hljs-string">destroy</span>
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>This guide demonstrates how to set up a secure microservices architecture using Cloud Run and Terraform. By keeping the backend private and allowing the frontend to communicate with it through a VPC, we've created a scalable and secure setup.</p>
<p>Remember to always consider security best practices, especially in production environments. Regularly review and update your configurations to maintain cloud security standards.</p>
<p>Happy coding, and enjoy exploring Cloud Run and Terraform!</p>
]]></content:encoded></item><item><title><![CDATA[Building a data analytic Slack App with Machine Learning]]></title><description><![CDATA[As developers, we've all been there – drowning in a sea of Slack alerts, desperately trying to spot the critical issues amidst the noise. It's a common problem, but what if we could use machine learning to make sense of this chaos? That's exactly wha...]]></description><link>https://nguyengineer.dev/building-a-data-analytic-slack-app-with-machine-learning</link><guid isPermaLink="true">https://nguyengineer.dev/building-a-data-analytic-slack-app-with-machine-learning</guid><category><![CDATA[golang]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[slack]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Sun, 04 Aug 2024 13:00:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/bmmcfZqSjBU/upload/58df1ea428fbf521a84c6075cbfb6f14.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As developers, we've all been there – drowning in a sea of Slack alerts, desperately trying to spot the critical issues amidst the noise. It's a common problem, but what if we could use machine learning to make sense of this chaos? That's exactly what I set out to do, and I'm excited to share my journey with you, including the intricate details of how I built and deployed this solution using modern cloud architecture.</p>
<h2 id="heading-the-challenge">The Challenge</h2>
<p>Picture this: You're working on a complex system, and your Slack channel is constantly bombarded with alerts. Some are critical, some are noise, and distinguishing between them becomes a time-consuming task. This was my reality, and I knew there had to be a better way.</p>
<h2 id="heading-the-solution-unsupervised-machine-learning-and-cloud-architecture">The Solution: Unsupervised Machine Learning and Cloud Architecture</h2>
<p>I decided to tackle this problem head-on by applying unsupervised machine learning to cluster Slack messages and identify error patterns. But the solution went beyond just the ML algorithms – it involved creating a robust Slack app and leveraging cloud services for seamless deployment and scalability. Here's why I chose this approach:</p>
<ol>
<li><p><strong>Learn by doing</strong>: There's no better way to understand machine learning and cloud architecture than by applying them to a real-world problem.</p>
</li>
<li><p><strong>Slack app development</strong>: This project allowed me to create a non-workflow Slack app, a valuable skill for future needs.</p>
</li>
<li><p><strong>Cloud services practice</strong>: Implementing this solution gave me hands-on experience with Google Cloud Platform services.</p>
</li>
<li><p><strong>Real-life ML application</strong>: It's one thing to understand ML theoretically, but applying it in a production environment is a whole different ball game.</p>
</li>
<li><p><strong>CI/CD implementation</strong>: Setting up continuous integration and deployment for the Slack app provided practical experience with modern DevOps practices.</p>
</li>
<li><p><strong>More Golang practice</strong>: As an added bonus, I got to code in Go, which is always fun!</p>
</li>
</ol>
<h2 id="heading-the-technical-deep-dive">The Technical Deep Dive</h2>
<p>Now, let's get into the nitty-gritty of how this system works, from the ML algorithms to the cloud architecture.</p>
<h3 id="heading-machine-learning-pipeline">Machine Learning Pipeline</h3>
<ol>
<li><p><strong>Preprocessing the Text</strong></p>
<ul>
<li><p>Convert all text to lowercase</p>
</li>
<li><p>Remove punctuation</p>
</li>
<li><p>Split the text into individual tokens (words)</p>
</li>
<li><p>Remove common "stop words" that don't add much meaning</p>
</li>
<li><p>Apply stemming to reduce words to their root form</p>
</li>
</ul>
</li>
<li><p><strong>Vectorizing with TF-IDF</strong> We use the Term Frequency-Inverse Document Frequency (TF-IDF) algorithm to convert our preprocessed text into numerical vectors.</p>
</li>
<li><p><strong>Clustering with DBSCAN</strong> With our text converted to vectors, we apply the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to group similar messages.</p>
</li>
</ol>
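<p>To make the pipeline concrete, here's a minimal Python sketch of the same three stages using scikit-learn. The sample messages and the <code>eps</code>/<code>min_samples</code> values are illustrative, and the preprocessing is simplified compared to the full stemming pipeline described above:</p>

```python
# Minimal sketch of the ML pipeline: preprocess -> TF-IDF -> DBSCAN.
# Stemming is omitted; stop-word removal is delegated to the vectorizer.
import re
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer

def preprocess(text: str) -> str:
    text = text.lower()                  # lowercase
    return re.sub(r"[^\w\s]", "", text)  # strip punctuation

messages = [
    "ERROR: payment service timeout after 30s",
    "error: payment service timeout after 30s!",
    "DB connection pool exhausted",
    "db connection pool exhausted.",
    "Deploy finished successfully",
]

# TF-IDF turns each message into a weighted term vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(
    [preprocess(m) for m in messages]
)

# DBSCAN groups dense regions of similar vectors; label -1 means noise.
labels = DBSCAN(eps=0.5, min_samples=2, metric="cosine").fit_predict(vectors)
print(labels)
```

<p>Label <code>-1</code> marks messages DBSCAN considers noise, which is exactly the kind of one-off alert we want to separate from recurring error patterns.</p>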
<h3 id="heading-slack-app-pipeline">Slack App Pipeline</h3>
<p>The Slack app pipeline is where the magic happens. Here's how it works:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722775174421/98da6b53-80b9-4d5b-906b-dd8a6983b905.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>A user interacts with the Alligator Slack app, initiating the process.</p>
</li>
<li><p>The app sends a request to our Google Cloud Run service.</p>
</li>
<li><p>The service responds with the Block Kit schema that defines the Slack modal.</p>
</li>
<li><p>The app presents a modal to the user, allowing them to select a time range for analysis.</p>
</li>
<li><p>Once the user submits the time range, another request is sent to the Cloud Run service.</p>
</li>
<li><p>The service sends a link to the report back to the Slack app. At this point the report isn't ready yet, so the user will need to wait a little bit.</p>
</li>
<li><p>A background worker in the service processes the messages within the specified time range using our ML pipeline, which includes downloading and processing the relevant Slack messages.</p>
</li>
<li><p>Finally, the app displays the clustering results to the user in a neatly formatted message.</p>
</li>
</ol>
<p>This pipeline allows for seamless interaction between the user, the Slack interface, and our backend ML processing.</p>
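<p>For the modal in steps 3 and 4, the service returns a Block Kit view definition. Here's a hypothetical Python sketch of what that payload could look like. Note that the <code>callback_id</code>, block IDs, and time-range options here are made up for illustration, not the app's actual schema:</p>

```python
# Hypothetical sketch of the Block Kit modal the Cloud Run service could
# return. Field names follow Slack's Block Kit "modal" view structure;
# the ids and option values are illustrative.
import json

def build_time_range_modal() -> dict:
    return {
        "type": "modal",
        "callback_id": "select_time_range",  # hypothetical id
        "title": {"type": "plain_text", "text": "Analyze alerts"},
        "submit": {"type": "plain_text", "text": "Run"},
        "blocks": [
            {
                "type": "input",
                "block_id": "time_range",  # hypothetical id
                "label": {"type": "plain_text", "text": "Time range"},
                "element": {
                    "type": "static_select",
                    "action_id": "range_select",
                    "options": [
                        {
                            "text": {"type": "plain_text", "text": label},
                            "value": value,
                        }
                        for label, value in [
                            ("Last hour", "1h"),
                            ("Last 24 hours", "24h"),
                            ("Last 7 days", "7d"),
                        ]
                    ],
                },
            }
        ],
    }

print(json.dumps(build_time_range_modal(), indent=2))
```

<p>When the user submits this modal, Slack posts the selected value back to the service, which kicks off the background analysis.</p>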
<h3 id="heading-result">Result</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722775931870/1913aaa6-b303-493c-a774-04884f26fecd.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-google-cloud-platform-architecture">Google Cloud Platform Architecture</h3>
<p>To ensure scalability, reliability, and ease of deployment, I leveraged several GCP services:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722775491055/b56176f6-4221-41d7-8b32-202689a2ca63.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>Cloud Source Repositories</strong>: This is where our code lives. It's directly integrated with other GCP services, making our CI/CD pipeline smooth.</p>
</li>
<li><p><strong>Cloud Build</strong>: Whenever a commit is pushed to the master branch, Cloud Build automatically triggers a new build.</p>
</li>
<li><p><strong>Artifact Registry</strong>: My built Docker images are stored here, ready for deployment.</p>
</li>
<li><p><strong>Cloud Run</strong>: This is where our application runs. Cloud Run automatically deploys new versions of our app whenever a new image is pushed to the Artifact Registry.</p>
</li>
</ol>
<p>The workflow looks like this:</p>
<ol>
<li><p>I make changes and commit to the master branch in Cloud Source Repositories.</p>
</li>
<li><p>This triggers Cloud Build, which builds a new Docker image.</p>
</li>
<li><p>The new image is pushed to Artifact Registry.</p>
</li>
<li><p>Cloud Run detects the new image and automatically deploys it.</p>
</li>
</ol>
<p>This setup ensures that our Slack app is always running the latest version of our code, with zero downtime during updates.</p>
<h2 id="heading-lessons-learned">Lessons Learned</h2>
<p>This project taught me a ton, and I want to share some key takeaways:</p>
<ol>
<li><p><strong>LLMs aren't everything</strong>: Sure, Large Language Models are cool, but traditional ML techniques still have their place. Don't forget about them!</p>
</li>
<li><p><strong>Fundamentals matter</strong>: Understanding and applying basic ML techniques is super valuable. It gives you the flexibility to solve unique problems.</p>
</li>
<li><p><strong>Cloud architecture is key</strong>: A great algorithm is only as good as its deployment. Cloud services give you the scalability and reliability you need for real-world applications.</p>
</li>
<li><p><strong>CI/CD streamlines development</strong>: Setting up a good CI/CD pipeline makes development and deployment so much smoother. It's worth the effort!</p>
</li>
<li><p><strong>Humans in the loop</strong>: Even with all this automation, ML and AI apps still need human oversight and tweaking.</p>
</li>
<li><p><strong>Use the right tool for the job</strong>: While Go is awesome, it's not always the best choice for every task, especially when it comes to machine learning.</p>
</li>
</ol>
<h3 id="heading-the-go-vs-python-saga">The Go vs. Python Saga</h3>
<p>Here's a funny story - after I built the whole thing in Go, I realized it wasn't the best fit for the ML parts. Don't get me wrong, I love Go, but sometimes you gotta know when to switch gears.</p>
<p>Initially, I implemented both TF-IDF and DBSCAN in Go. It worked, but man, was it slow! Processing 4MB of data, with vectors of about 2,000 feature dimensions, took a whopping 9 minutes. That's when I knew I had to rethink my approach.</p>
<p>The problem wasn't Go itself, but the lack of a mature ML ecosystem around it. I couldn't find optimized implementations of the data structures and math formulas I needed, which are readily available in languages like Python.</p>
<h4 id="heading-the-hybrid-solution">The Hybrid Solution</h4>
<p>So, I came up with a hybrid approach:</p>
<ol>
<li><p>Keep the main app structure in Go, because it's great for building efficient, concurrent systems.</p>
</li>
<li><p>Switch to Python for the core ML algorithms (TF-IDF and DBSCAN), taking advantage of its rich ecosystem of ML libraries.</p>
</li>
<li><p>Use Go to wrap around the Python script, calling it when needed for the ML tasks.</p>
</li>
</ol>
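<p>The Python side of that hybrid can be as small as a script the Go service shells out to. This is a hypothetical sketch (the real script isn't shown in this post): Go would pipe the messages in as JSON and read the cluster labels back from stdout.</p>

```python
# cluster.py -- hypothetical sketch of the Python half of the hybrid setup.
# The Go wrapper would run it as a subprocess (e.g. `python3 cluster.py`),
# writing a JSON array of messages to stdin and reading labels from stdout.
import json

from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster(messages: list[str]) -> list[int]:
    """Vectorize with TF-IDF, cluster with DBSCAN; -1 means noise."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(messages)
    model = DBSCAN(eps=0.5, min_samples=2, metric="cosine")
    return model.fit_predict(vectors).tolist()

if __name__ == "__main__":
    # In production the Go side pipes JSON via stdin; quick local demo:
    demo = ["payment timeout error", "payment timeout error", "disk failure alert"]
    print(json.dumps(cluster(demo)))
```

<p>Keeping the interface to plain JSON over pipes means the Go code stays simple: build the subprocess, write, read, and unmarshal.</p>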
<p>The results? Mind-blowing. The same dataset that took 9 minutes in pure Go now took just a few seconds in Python. Talk about a performance boost!</p>
<h3 id="heading-key-takeaway">Key Takeaway</h3>
<p>This experience really drove home a crucial point: <strong>use the right tool for the right job</strong>. It's tempting to stick with one language or tech stack, but sometimes the best solution involves mixing and matching.</p>
<p>In the world of ML and data processing, the ecosystem around a language can be just as important as the language itself. Python's extensive ML libraries make it a powerhouse for these tasks, even if we might prefer other languages for different parts of the app.</p>
<h2 id="heading-keeping-our-secret-sauce">Keeping Our Secret Sauce</h2>
<p>Now, let's talk about LLMs for a second. They're all the rage right now, and for good reason - they're pretty amazing. But here's the thing: they're not the answer to everything. By using fundamental algorithms to solve our problem without sending data to OpenAI or similar services, we're keeping our core competencies in-house.</p>
<p>Why is this important? A few reasons:</p>
<ol>
<li><p>We keep our data private and secure.</p>
</li>
<li><p>We can customize our algorithms exactly how we want.</p>
</li>
<li><p>We can optimize performance for our specific needs.</p>
</li>
<li><p>For high-volume stuff, it might even be cheaper than using API-based services.</p>
</li>
</ol>
<p>By developing and maintaining these core capabilities ourselves, we're making sure that our critical know-how stays, well, ours.</p>
<h2 id="heading-whats-next">What's Next?</h2>
<p>As with any project, there's always room for improvement. Some next steps include:</p>
<ul>
<li><p>Optimizing for larger datasets</p>
</li>
<li><p>Integrating LLM-based summarization of errors</p>
</li>
<li><p>Improving the app's distribution and installation process</p>
</li>
<li><p>Adding more robust monitoring tools</p>
</li>
<li><p>Exploring other cloud services to further enhance scalability and performance</p>
</li>
</ul>
<p>Remember, while it's great to leverage cutting-edge tools and services, don't outsource your core competencies. This project wasn't just about solving a problem - it was about expanding our skills in crucial areas of modern software development. From ML to cloud architecture to DevOps practices, we've grown a lot. And perhaps most importantly, we've learned when to adapt our approach and use different tools to get the best results.</p>
<p>Have you faced similar challenges in your projects? How do you decide when to use external services versus building in-house capabilities? Drop your thoughts in the comments - I'd love to hear about your experiences!</p>
<h2 id="heading-and-i-will-release-this-cool-app-to-the-slack-app-marketplace-soon">And I will release this cool app to the Slack app marketplace soon!</h2>
]]></content:encoded></item><item><title><![CDATA[My problem-solving frameworks]]></title><description><![CDATA[A problem-solving framework is a checklist of questions that help you break down a problem and figure out how to solve it. It's like a step-by-step guide to understanding the issue, brainstorming solutions, picking the best one, and then implementing...]]></description><link>https://nguyengineer.dev/unlock-the-mystery-how-top-engineers-use-frameworks-to-solve-problems-faster</link><guid isPermaLink="true">https://nguyengineer.dev/unlock-the-mystery-how-top-engineers-use-frameworks-to-solve-problems-faster</guid><category><![CDATA[#softwareengineering]]></category><category><![CDATA[problem solving skills]]></category><category><![CDATA[framework]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Wed, 10 Apr 2024 23:50:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1732763417506/f2d19663-3b47-4cd5-9d6a-2fbb94b4ce9f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>A</em> <strong><em>problem-solving framework</em></strong> <em>is a checklist of questions that help you break down a problem and figure out how to solve it. It's like a step-by-step guide to understanding the issue, brainstorming solutions, picking the best one, and then implementing that solution.</em></p>
<p>As software engineers, our day-to-day tasks are mostly problem-solving. Given a problem, we find the solution. We gather a lot of skills and practice, and time after time, we become proficient with it. We automatically follow some frameworks but don't realize it.</p>
<p>One of the challenges for an organization is avoiding dependence on a key person: the one who has gathered a lot of knowledge and practice. They can resolve problems easily; they're a superhero, a firefighter. But if they leave the company, all of that is lost.</p>
<p>Documentation won't help much; it will be outdated and no one will look at it.</p>
<p>In my opinion, we should focus on building the process, not the key person.</p>
<p>A well-defined process can help a newcomer work through a checklist with only basic, fundamental knowledge of the system. In an emergency, a team member without experience can try the process for the first time; if it works for them but something is missing from their perspective, they can add it to the process.</p>
<p>To be more specific, we call the process a framework. A framework is opinionated by its creator: a place where we can see how the top performers work, extract their methodologies, turn them into a process, and make excellence the standard for everyone.</p>
<h2 id="heading-example-framework-to-investigate-a-bug">Example: Framework to Investigate a Bug</h2>
<p>If we have a bug report that something went wrong in the last 3 months, what should we do?</p>
<ol>
<li><p>Understand the problem, and read the problem statement at least twice to ensure the reported issue is correct</p>
</li>
<li><p>If the resolver lacks knowledge, find the help page or documentation, or consult another team member to understand how it works</p>
</li>
<li><p>Define the clear expectation</p>
</li>
<li><p>Dig into the log, understand why the system doesn't work as expected</p>
</li>
<li><p>Dig into the code, read it, and find the wrong logic</p>
</li>
<li><p>If the code looks right, maybe it was changed during the last 3 months and then reverted. Check the git log.</p>
</li>
<li><p>Find the ticket related to the code changes to find out what we expected at that time</p>
</li>
<li><p>Search the Slack messages from around that day to uncover any decision-making that wasn't properly recorded in the ticket.</p>
</li>
</ol>
<h2 id="heading-framework-to-implement-a-new-feature">Framework to Implement a New Feature</h2>
<ol>
<li><p>Understand the requirement, and read the problem statement at least twice to make sure you've understood it correctly</p>
</li>
<li><p>Define the acceptance criteria</p>
</li>
<li><p>Look at the design (if any) twice, and notice all the details missing in the requirement</p>
</li>
<li><p>Ask the ticket creator all the questions needed to clarify the requirement, until you're sure every detail is covered</p>
</li>
<li><p>Now dig into the codebase and the system design to find a solution</p>
</li>
<li><p>Write a solution for preview, maybe a list of bullet points, a diagram, or a document on how you resolve the problem</p>
</li>
<li><p>Consult the team members, get an agreement on the proposed solution</p>
</li>
<li><p>Start coding; write the tests first if your team follows TDD</p>
</li>
<li><p>Submit PR, ask your team for review</p>
</li>
<li><p>Resolve the feedback; the rest is your normal deployment process...</p>
</li>
</ol>
<h2 id="heading-lets-start">Let’s start</h2>
<p>Implementing problem-solving frameworks isn't just about following a checklist – it's about fostering a culture of shared knowledge and continuous improvement.</p>
<p>Picture this: a team where every member, regardless of experience level, can confidently tackle complex issues. A workplace where knowledge isn't siloed but flows freely, captured in living, breathing frameworks that evolve with each challenge overcome.</p>
<p>This isn't just a pipe dream – it's a reality we can build, one framework at a time.</p>
<p>Here's my challenge to you:</p>
<ol>
<li><p>Start small: Pick one recurring problem in your team. It could be bug investigation, feature implementation, or even code review.</p>
</li>
<li><p>Document your process: Next time you tackle this problem, jot down each step you take. Be specific – what worked? What didn't?</p>
</li>
<li><p>Refine and share: Discuss your nascent framework with your team. Gather their insights, refine the steps, and create a shared resource.</p>
</li>
<li><p>Iterate and improve: As your team uses the framework, encourage feedback. Let it evolve based on real-world application.</p>
</li>
</ol>
<p>Remember, the goal isn't perfection – it's progress. By taking these steps, you're not just solving today's problems; you're building a foundation for tackling tomorrow's challenges.</p>
<p>I'd love to hear about your experiences. Have you used problem-solving frameworks in your work? What hurdles did you face? What unexpected benefits did you discover? Drop your thoughts in the comments below – let's learn from each other and elevate our craft together.</p>
]]></content:encoded></item><item><title><![CDATA[Design (and code) a job scheduling system]]></title><description><![CDATA[User story (simple)
As a user, I want to set up a time delay for my actions (e.g., send email) in a certain period, a certain day of the week, or a specific day of the year. So that my action can be executed with timing accuracy.
Function requirement...]]></description><link>https://nguyengineer.dev/design-and-code-a-job-scheduling-system</link><guid isPermaLink="true">https://nguyengineer.dev/design-and-code-a-job-scheduling-system</guid><category><![CDATA[scheduling]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[System Design]]></category><category><![CDATA[distributed system]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Thu, 04 Jan 2024 12:06:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/laJW5pp-6Yw/upload/54f6eabf2a5386614e4981eb21505534.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-user-story-simple">User story (simple)</h1>
<p>As a user, I want to set up a time delay for my actions (e.g., send email) in a certain period, a certain day of the week, or a specific day of the year. So that my action can be executed with timing accuracy.</p>
<h2 id="heading-function-requirements">Function requirements</h2>
<p>The requirement isn't tied to any specific use of the scheduling system, so I map its concepts onto an online marketing system, in which users set up a marketing campaign with a series of action steps running over a period of time.</p>
<ul>
<li><p>[x] Ensure at least one delivery of every single job</p>
<ul>
<li>[x] Extra: prevent duplicate sends</li>
</ul>
</li>
<li><p>[x] The scheduler should be horizontally scalable to handle ~5m timers at peak</p>
</li>
<li><p>[x] The user can change the value or cancel timers on the fly</p>
</li>
<li><p>[x] Fairness: prioritize certain groups of users</p>
<ol>
<li><p>New signed-up users</p>
</li>
<li><p>Tenants with a small audience list</p>
</li>
<li><p>The rest of the tenants, with the workload shared among them</p>
</li>
</ol>
</li>
<li><p>[x] Monitoring metrics <a target="_blank" href="https://www.notion.so/Monitoring-system-4e1f58613ab3476f8ae878b04ccac00c?pvs=21">Monitoring system</a></p>
<ul>
<li><p>[x] Time-delay report: end-to-end processing time, from the moment the job is due to the moment it is pulled out of the sending queue for processing.</p>
</li>
<li><p>[x] Monitor the number of jobs processed per unit of time (e.g., per second)</p>
</li>
</ul>
</li>
<li><p>[x] TPS: 5m new timers/hour, and 5m due timers/hour</p>
</li>
<li><p>[x] The scheduler should have a p95 scheduling deviation below 10 seconds</p>
</li>
<li><p>[x] Cost. The worker isn't designed to scale other workers internally; I defer that to a dedicated service such as the Kubernetes HPA, or a self-managed service that reads the metrics and schedules new worker instances</p>
</li>
</ul>
<h2 id="heading-scope">Scope</h2>
<ol>
<li><p><strong>Backend Development</strong>: Focus on the backend, interacting with the system via APIs.</p>
</li>
<li><p><strong>Scheduling System Implementation</strong>:</p>
<ul>
<li><p>Design and implement a distributed job scheduling system without using/extending any existing Cloud or Open-source job scheduler.</p>
</li>
<li><p>Ensure the system can handle 10 million new and/or due timers per hour, with the potential to double this capacity.</p>
</li>
<li><p>Include a benchmark to demonstrate the system's capacity.</p>
</li>
</ul>
</li>
<li><p><strong>Exclusions</strong>:</p>
<ul>
<li>The implementation does not cover the actual job execution; the focus is on the scheduling aspect only. Jobs sent to the execution queue are considered done.</li>
</ul>
</li>
</ol>
<h1 id="heading-system-design">System design</h1>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2Fc02c87e4-0540-4afd-b8b8-73c3016924f1%2FUntitled.png?table=block&amp;id=ad3b0491-40c4-4bde-9e38-2fd26d1fd741&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt class="image--center mx-auto" /></p>
<h2 id="heading-core-components">Core components</h2>
<ol>
<li><p><strong>API server</strong>: handles user requests to schedule jobs and calculates the job delay times</p>
</li>
<li><p><strong>PostgreSQL database</strong>: the central database, which plays the key role in data consistency.</p>
</li>
<li><p><strong>Job processor</strong>: the due-job checker, the heart of the system. At intervals, it concurrently checks for due jobs and sends them to the queue</p>
</li>
<li><p><strong>Due job fixer</strong>: by the nature of a distributed system, a process may get stuck due to network errors or code bugs. The job fixer ensures no job is left behind.</p>
</li>
<li><p><strong>Data feeder</strong>: feeds data into the system, for demo purposes only</p>
</li>
<li><p><strong>Monitoring service</strong>: uses an agent to collect metrics from the service and the workers</p>
</li>
</ol>
<h2 id="heading-the-erd">The ERD</h2>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2F04b483c7-1d3d-4f90-9a54-08c916c53411%2FUntitled.png?table=block&amp;id=3c3fbbb0-9aa0-43d1-abfa-12559e97ad61&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt /></p>
<h2 id="heading-entities">Entities</h2>
<p>I only describe the entities involved in the scope of this project</p>
<ul>
<li><p><strong>Tenant</strong>: to get the priority information</p>
</li>
<li><p><strong>Sequence</strong>: The holder of the template for the jobs, including steps and subscribers</p>
</li>
<li><p><strong>Step</strong>: The detailed information to build a job, could be a wait step or a job step</p>
</li>
<li><p><strong>Subscriber</strong>: The subscriber subscribes to the sequence. In this scope, we simply treat it as a number counting how many jobs we schedule for the subscribers of a sequence.</p>
</li>
<li><p><strong>Job</strong>: The base unit of the scheduler system, the most important information is status and due time</p>
</li>
</ul>
<p>For the simplicity of the demo, I only define the table for the Job entity</p>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> public.jobs
(
    <span class="hljs-keyword">id</span>        <span class="hljs-built_in">serial</span>
        <span class="hljs-keyword">CONSTRAINT</span> jobs_pk
            PRIMARY <span class="hljs-keyword">KEY</span>,
    due_at    <span class="hljs-built_in">timestamp</span> <span class="hljs-keyword">DEFAULT</span> <span class="hljs-keyword">NOW</span>() <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
    <span class="hljs-keyword">priority</span>  <span class="hljs-built_in">integer</span>   <span class="hljs-keyword">DEFAULT</span> <span class="hljs-number">0</span>,
    tenant_id <span class="hljs-built_in">integer</span>   <span class="hljs-keyword">DEFAULT</span> <span class="hljs-number">1</span>,
    <span class="hljs-keyword">status</span>    <span class="hljs-built_in">integer</span>   <span class="hljs-keyword">DEFAULT</span> <span class="hljs-number">0</span>,
    metadata  <span class="hljs-built_in">varchar</span>(<span class="hljs-number">100</span>)
);

<span class="hljs-keyword">ALTER</span> <span class="hljs-keyword">TABLE</span> public.jobs
    OWNER <span class="hljs-keyword">TO</span> postgres;

<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> jobs_due_at_index
    <span class="hljs-keyword">ON</span> public.jobs (due_at);

<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> jobs_priority_index
    <span class="hljs-keyword">ON</span> public.jobs (<span class="hljs-keyword">priority</span>);

<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-keyword">IF</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> jobs_status_index
    <span class="hljs-keyword">ON</span> public.jobs (<span class="hljs-keyword">status</span>);
</code></pre>
<h2 id="heading-workflows">Workflows</h2>
<p>Users send a POST request to the API server, and the system automatically starts the rest of the workflow. Below are the key workflows.</p>
<ol>
<li><strong>Schedule job process</strong></li>
</ol>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2F525a79ae-dca4-427b-905d-9e8f9a4d4a42%2FUntitled.png?table=block&amp;id=433a4810-bc64-4d5b-9509-cfc823626293&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt /></p>
<p>Example request</p>
<pre><code class="lang-json">POST http://localhost:8081/schedule-job
Content-Type: application/json

{
  <span class="hljs-attr">"type"</span>: <span class="hljs-string">"sequence"</span>,
  <span class="hljs-attr">"steps"</span>: [
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"wait_certain_period"</span>,
      <span class="hljs-attr">"delay_period"</span>: <span class="hljs-number">1</span>,
      <span class="hljs-attr">"delay_unit"</span>: <span class="hljs-string">"minute"</span>
    },
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"job"</span>,
      <span class="hljs-attr">"metadata"</span>: <span class="hljs-string">"{ 'any': 'thing' }"</span>
    },
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"wait_weekday"</span>,
      <span class="hljs-attr">"weekdays"</span>: [
        <span class="hljs-string">"monday"</span>,
        <span class="hljs-string">"tuesday"</span>,
        <span class="hljs-string">"wednesday"</span>,
        <span class="hljs-string">"thursday"</span>,
        <span class="hljs-string">"friday"</span>
      ]
    },
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"job"</span>,
      <span class="hljs-attr">"metadata"</span>: <span class="hljs-string">"job 2"</span>
    },
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"wait_specific_date"</span>,
      <span class="hljs-attr">"date"</span>: <span class="hljs-string">"2023-12-29T18:48:34.200Z"</span>
    },
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"job"</span>,
      <span class="hljs-attr">"metadata"</span>: <span class="hljs-string">"job 3"</span>
    }
  ],
  <span class="hljs-attr">"subscribers"</span>: <span class="hljs-number">20</span>
}
</code></pre>
<p>To satisfy the design requirement, the schedule-job API accepts a sequence of steps. If a step is a wait step, we calculate its relative due time, and this due time becomes the start of the next step in the list.</p>
<p>No matter where a job step sits in the step list, when we calculate its due time we only need the due time produced by the previous step as the reference.</p>
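<p>In other words, the due-time calculation is a single pass that carries a cursor through the steps. Here's a simplified Python sketch of that chaining logic (the real implementation is in Go; this version handles only two of the wait types, with delays fixed to minutes, and the field names are illustrative):</p>

```python
# Simplified sketch of the due-time chaining: wait steps advance a cursor,
# job steps emit the cursor as their due time.
from datetime import datetime, timedelta

def calculate_due_times(steps: list[dict], started_at: datetime) -> list[datetime]:
    cursor = started_at
    due_times = []
    for step in steps:
        if step["type"] == "wait_certain_period":
            cursor += timedelta(minutes=step["delay_period"])
        elif step["type"] == "wait_specific_date":
            cursor = step["date"]
        elif step["type"] == "job":
            due_times.append(cursor)  # due time left by the previous wait step
    return due_times

start = datetime(2023, 12, 28, 12, 0)
steps = [
    {"type": "wait_certain_period", "delay_period": 1},
    {"type": "job", "metadata": "job 1"},
    {"type": "wait_specific_date", "date": datetime(2023, 12, 29, 18, 48)},
    {"type": "job", "metadata": "job 2"},
]
print(calculate_due_times(steps, start))
# job 1 is due at 12:01; job 2 at the specific date
```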
<p>Refer to this unit test to understand the expected input and output</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">TestCalculateNextJobs</span><span class="hljs-params">(t *testing.T)</span></span> {
    <span class="hljs-comment">// Define a start time for the sequence</span>
    startedAt := time.Date(<span class="hljs-number">2023</span>, <span class="hljs-number">12</span>, <span class="hljs-number">28</span>, <span class="hljs-number">12</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, time.UTC)

    <span class="hljs-comment">// Setup Sequence with steps</span>
    sequence := entity.Sequence{
        Steps: []entity.Step{
            &amp;entity.StepWaitCertainPeriod{DelayPeriod: <span class="hljs-number">1</span>, DelayUnit: entity.DelayUnitMinute},
            &amp;entity.StepJob{Metadata: <span class="hljs-string">"{ 'any': 'thing' }"</span>},
            &amp;entity.StepWaitWeekDay{WeekDays: []entity.WeekDay{entity.Monday, entity.Tuesday, entity.Wednesday, entity.Friday}},
            &amp;entity.StepJob{Metadata: <span class="hljs-string">"job 2"</span>},
            &amp;entity.StepWaitSpecificDate{Date: <span class="hljs-string">"2023-12-29T18:48:34.200Z"</span>},
            &amp;entity.StepJob{Metadata: <span class="hljs-string">"job 3"</span>},
        },
        Subscribers: <span class="hljs-number">2</span>,
    }

    <span class="hljs-comment">// Expected due dates for jobs</span>
    expectedDates := []time.Time{
        startedAt.Add(<span class="hljs-number">1</span> * time.Minute),                           <span class="hljs-comment">// 1 minute from startedAt (Job 1)</span>
        time.Date(<span class="hljs-number">2023</span>, <span class="hljs-number">12</span>, <span class="hljs-number">29</span>, <span class="hljs-number">12</span>, <span class="hljs-number">1</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, time.UTC),           <span class="hljs-comment">// Next weekday (Friday) for Job 2</span>
        time.Date(<span class="hljs-number">2023</span>, <span class="hljs-number">12</span>, <span class="hljs-number">29</span>, <span class="hljs-number">18</span>, <span class="hljs-number">48</span>, <span class="hljs-number">34</span>, <span class="hljs-number">200000000</span>, time.UTC), <span class="hljs-comment">// Specific date for Job 3</span>
    }

    got, err := controllers.CalculateNextJobs(sequence, startedAt)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        t.Fatalf(<span class="hljs-string">"CalculateNextJobs() error = %v"</span>, err)
    }

    <span class="hljs-keyword">if</span> <span class="hljs-built_in">len</span>(got) != <span class="hljs-built_in">len</span>(expectedDates) {
        t.Fatalf(<span class="hljs-string">"Expected %d jobs, got %d"</span>, <span class="hljs-built_in">len</span>(expectedDates), <span class="hljs-built_in">len</span>(got))
    }

    <span class="hljs-keyword">for</span> i, job := <span class="hljs-keyword">range</span> got {
        log.Print(i, job)
        <span class="hljs-keyword">if</span> !job.DueAt.Equal(expectedDates[i]) {
            t.Errorf(<span class="hljs-string">"Job %d due at %v, want %v"</span>, i, job.DueAt, expectedDates[i])
        }
    }
}
</code></pre>
<p>After the jobs are calculated, they are sent to the database in a bulk insert.</p>
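<p>As a sketch of that step, the bulk insert can be expressed as a single multi-row statement. The table and column names below are my assumptions for illustration, not necessarily the ones in the repo, and real code would bind each job's due date and metadata as query parameters.</p>

```go
package main

import (
	"fmt"
	"strings"
)

// bulkInsertJobsSQL builds one multi-row INSERT for n calculated jobs.
// Table and column names are illustrative placeholders.
func bulkInsertJobsSQL(n int) string {
	var b strings.Builder
	b.WriteString("INSERT INTO jobs (due_at, status) VALUES ")
	placeholders := make([]string, 0, n)
	for i := 0; i < n; i++ {
		// Each row gets its own positional parameter for the due date.
		placeholders = append(placeholders, fmt.Sprintf("($%d, 'initialized')", i+1))
	}
	b.WriteString(strings.Join(placeholders, ", "))
	return b.String()
}

func main() {
	fmt.Println(bulkInsertJobsSQL(3))
}
```

<p>A single round trip like this keeps the insert path cheap compared to issuing one INSERT per job.</p>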
<ol>
<li><strong>Due job checker</strong></li>
</ol>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2F44ce17a0-8bca-4a89-a69e-d2b9951bebeb%2FUntitled.png?table=block&amp;id=eb5d0b1c-45d9-4186-99e8-ba6f7ebaca77&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt /></p>
<p>After the jobs are inserted into the database with their due dates, the due-job-checker workers can pull them from the database.</p>
<p>I use a Postgres advisory lock here to ensure only one worker pulls jobs at a time.</p>
<p>I also use <code>SELECT ... FOR UPDATE SKIP LOCKED</code> to ensure no other process updates the records while I select them.</p>
<pre><code class="lang-go">rows, err := conn.Query(<span class="hljs-string">`
              UPDATE jobs 
              SET status = $1
              WHERE id IN (
                  SELECT id FROM jobs 
                  WHERE due_at &lt;= NOW() AND status = $2
                  ORDER BY priority 
                  LIMIT $3
                  FOR UPDATE SKIP LOCKED
              )
              RETURNING id, due_at`</span>, entity.JobStatusInProgress, entity.JobStatusInitialized, dueJobBatchSize)
</code></pre>
<p>The lock is released early, right after the job status is updated to in-progress. This increases the throughput of the system, since the remaining processing doesn’t rely on the database.</p>
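<p>A minimal sketch of that locking pattern, assuming a session-level advisory lock. The <code>Execer</code> interface, the fake connection, and the lock key are hypothetical stand-ins so the flow can be shown (and exercised) without a live database; they are not the repo's real types.</p>

```go
package main

import "fmt"

// Execer is a tiny stand-in for a *sql.Conn (hypothetical helper).
type Execer interface {
	Exec(query string, args ...any) error
}

// withAdvisoryLock runs fn while holding a Postgres advisory lock,
// so only one worker claims a batch at a time. The lock is released
// as soon as fn returns, mirroring the early release described above.
func withAdvisoryLock(conn Execer, key int64, fn func() error) error {
	if err := conn.Exec("SELECT pg_advisory_lock($1)", key); err != nil {
		return fmt.Errorf("acquire advisory lock: %w", err)
	}
	defer conn.Exec("SELECT pg_advisory_unlock($1)", key)
	return fn()
}

// fakeConn records queries so the flow can be demonstrated offline.
type fakeConn struct{ queries []string }

func (f *fakeConn) Exec(q string, args ...any) error {
	f.queries = append(f.queries, q)
	return nil
}

func main() {
	fc := &fakeConn{}
	_ = withAdvisoryLock(fc, 42, func() error {
		fmt.Println("claim due jobs here")
		return nil
	})
	fmt.Println(fc.queries)
}
```

<p>Keeping the critical section down to "claim the batch" is what lets the lock be released early and the throughput stay high.</p>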
<ol>
<li><strong>Due job fixer</strong></li>
</ol>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2Feeda3483-df43-43b2-a234-ed65c570ddca%2FUntitled.png?table=block&amp;id=c8ad9ad8-5e35-41da-9b0e-82db3e414ef6&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt /></p>
<p>To keep the queue table tidy, we need to clean it up frequently. This worker archives processed jobs on an interval (every 15 seconds). In a real system I would move them to an archive table, but in this demo I simply delete them.</p>
<p>If a job is still unfinished after the maximum processing time limit (let’s say 10s), it is considered failed and moved back to the queue.</p>
<p>This is out of scope so I won’t go further here, but in a real system we would need more criteria than exceeded processing time to decide whether a job has failed and should be retried.</p>
<h2 id="heading-monitoring-system">Monitoring system</h2>
<p>I use external tools for monitoring: Prometheus to collect the metrics and Grafana to plot the data. Both tools are defined in the docker-compose.yml file.</p>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2F2043c63d-8d91-489f-82e0-90dcf53cea15%2FUntitled.png?table=block&amp;id=9fe651bb-a33f-4434-8f8b-75e9224b68bb&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt /></p>
<p>The general idea is to use the Prometheus SDK to collect the metrics and send them to the Prometheus push gateway, then configure Grafana to use Prometheus as its data source.</p>
<p>The metrics I collect:</p>
<ul>
<li><p><strong>Job process TPS</strong>. This is the number of jobs the system can process per second. As you can see in the screenshot below, the average TPS is ~20,000 jobs/second.</p>
<p>  The requirement is 5M new timers + 5M due timers = 10M jobs/hour.</p>
<p>  That can double at any time = 20M jobs/hour.</p>
<p>  At 20,000 jobs/sec × 60 × 60 = 72,000,000 jobs/hour, this design covers the requirement more than 3 times over.</p>
<p>  Note that this ran on my local machine (MacBook M2 Pro, base edition) with the Postgres database in Docker under resource constraints, so the real system in the cloud could handle a much higher load.</p>
<p>  <img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2F0214aea4-8372-49a6-8429-8177fa49e38c%2Ftps.png?table=block&amp;id=16678d0e-d84d-4f47-8b12-0bb546fb9820&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt /></p>
</li>
<li><p><strong>The p95 of the job due time delay</strong></p>
<p>  The 95th percentile of the job delay is 1,148 ms, measured from the exact UTC time a job is due to the time it is marked as processed.</p>
<p>  This metric is the delay between when the user expects the job to be sent and when it is actually sent.</p>
<p>  <img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2F59d5bc77-1560-4575-af25-d1c67367152e%2FUntitled.png?table=block&amp;id=f025852a-65ca-4859-8c76-5eb5cb21e13c&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt /></p>
</li>
<li><p><strong>Jobs in queue</strong></p>
<p>  In my data feed, I schedule one job immediately and another delayed by 1 minute, so the system processes both new timers and due timers. Since the data feed keeps running every second, this chart won’t drop unless I stop the feed, but we can expect every job to be processed in about 1 second.</p>
<p>  <img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2Fc9e36c1b-240c-4b82-8c66-4cba31c9632b%2Fjobs_in_queue.png?table=block&amp;id=cd67c13f-977d-44bf-b33b-5c92950a3f10&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt /></p>
</li>
</ul>
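<p>The headroom arithmetic above can be sanity-checked in a couple of lines; the 20,000 jobs/sec figure is the observed average from the TPS panel, and the 20M jobs/hour figure is the doubled requirement.</p>

```go
package main

import "fmt"

// capacityPerHour converts the observed per-second throughput into an
// hourly capacity to compare against the 20M jobs/hour requirement.
func capacityPerHour(tps int) int {
	return tps * 60 * 60
}

func main() {
	perHour := capacityPerHour(20000)
	fmt.Printf("capacity: %d jobs/hour (%.1fx the 20M requirement)\n",
		perHour, float64(perHour)/20000000)
}
```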
<h1 id="heading-design-decision">Design decision</h1>
<ul>
<li><p>Why Go</p>
<ul>
<li><p>I need the robustness of the language and its ecosystem</p>
</li>
<li><p>Foolproof: it is hard to make subtle code mistakes, and many good practices and standards can be found online</p>
</li>
<li><p>Concurrency model.</p>
</li>
</ul>
</li>
<li><p>Why PostgreSQL</p>
<ul>
<li><p>Close to a silver bullet for most projects that need robustness and flexibility. For example:</p>
<ul>
<li><p>I used advisory locks and <code>SELECT ... FOR UPDATE SKIP LOCKED</code> for this project</p>
</li>
<li><p>In another project, I used <code>jsonb</code> queries and views.</p>
</li>
</ul>
</li>
<li><p>Rich documentation and resources</p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-extending-ideas">Extending ideas</h2>
<ul>
<li><p>If I had more time on this project, I would refactor it to follow Clean Architecture, to avoid logic fragmenting across the many worker services. For the scope of this project, though, the current structure works fine.</p>
</li>
<li><p>Add more integration tests and load tests for all the critical functions. I already have some tests, but they are not enough to ensure the robustness of this system, especially in a concurrent environment.</p>
</li>
</ul>
<h1 id="heading-read-me">Read me</h1>
<h3 id="heading-repository">Repository</h3>
<p><a target="_blank" href="https://github.com/finnng/job-scheduling-system">https://github.com/finnng/job-scheduling-system</a></p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li><p>Go version 1.21.0</p>
</li>
<li><p>docker-compose (you may need to update docker-compose.yml for Intel-based computers)</p>
</li>
</ul>
<h3 id="heading-steps">Steps</h3>
<ol>
<li><p>Pull the source code</p>
</li>
<li><p>Start the databases: <code>docker compose up -d</code></p>
</li>
<li><p>Create a Postgres test database. Use any database client to create a database named <code>test</code> and grant permissions on it to the default user <code>postgre</code>.</p>
</li>
</ol>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">DATABASE</span> <span class="hljs-keyword">test</span>;
<span class="hljs-keyword">GRANT</span> <span class="hljs-keyword">ALL</span> <span class="hljs-keyword">PRIVILEGES</span> <span class="hljs-keyword">ON</span> <span class="hljs-keyword">DATABASE</span> <span class="hljs-keyword">test</span> <span class="hljs-keyword">TO</span> postgre;
</code></pre>
<ol>
<li>Start the API server; it should automatically provision the tables. From the repo’s root directory, run:</li>
</ol>
<pre><code class="lang-bash">go run api-server/app.go
</code></pre>
<ol>
<li>Start the other workers to complete the full system; open separate terminal tabs for these commands:</li>
</ol>
<pre><code class="lang-bash">go run worker-due-job-checker/app.go
</code></pre>
<pre><code class="lang-bash">go run worker-job-fixer/app.go
</code></pre>
<pre><code class="lang-bash">go run data-feed/app.go
</code></pre>
<p>The data-feed worker randomizes test data and sends it to the API server to keep the system busy for demo purposes. You can edit the test request to cover all the scheduling scenarios.</p>
<pre><code class="lang-go">payload := Payload{
            Type: <span class="hljs-string">"sequence"</span>,
            Steps: []Step{
                {
                    Type:     <span class="hljs-string">"job"</span>,
                    Metadata: <span class="hljs-string">"{ 'any': 'thing 1' }"</span>,
                },
                {
                    Type:        <span class="hljs-string">"wait_certain_period"</span>,
                    DelayPeriod: <span class="hljs-number">1</span>,
                    DelayUnit:   <span class="hljs-string">"minute"</span>,
                },
                {
                    Type:     <span class="hljs-string">"job"</span>,
                    Metadata: <span class="hljs-string">"{ 'any': 'thing 2' }"</span>,
                },
            },
            Subscribers: rand.Intn(<span class="hljs-number">10000</span>) + <span class="hljs-number">1</span>, <span class="hljs-comment">// Random number between 1 and 10000</span>
        }
</code></pre>
<p>Your terminal panels should look like this.</p>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2Fe9420eb4-0520-4581-b340-fb12d789394f%2FUntitled.png?table=block&amp;id=5fcf772e-3d3c-462e-9b9f-e1e69a4e5009&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt /></p>
<h3 id="heading-monitoring">Monitoring</h3>
<p>I haven’t handled the Grafana dashboard migration yet, so you need to configure it manually. Head to the Grafana dashboard at <a target="_blank" href="https://www.notion.so/Monitoring-system-4e1f58613ab3476f8ae878b04ccac00c?pvs=21"><code>http://localhost:3000</code></a> (the Grafana port defined in the docker-compose file).</p>
<ol>
<li><p>Setup Prometheus as the data source</p>
</li>
<li><p>Play around with the metrics sent from the scheduling system</p>
</li>
</ol>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2F5d685a22-c0bd-4828-adf9-20cb86b8323c%2FUntitled.png?table=block&amp;id=856caad0-da74-408b-8987-4d6930ebc1e9&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt /></p>
<p>The complete dashboard should look like this:</p>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F36f2141f-862d-4a6b-841d-8ce8a3da65cf%2F56d91f73-f935-4ce8-9819-427bc9e8b1a2%2FUntitled.png?table=block&amp;id=88fbbc56-9497-4a23-8b80-fa18c0f8fa20&amp;spaceId=36f2141f-862d-4a6b-841d-8ce8a3da65cf&amp;width=2000&amp;userId=4c4612be-099f-4160-992e-ab12600ad036&amp;cache=v2" alt /></p>
]]></content:encoded></item><item><title><![CDATA[How I emerging AI into daily life (2023)]]></title><description><![CDATA[I'm using the latest commercial AI technologies, including Copilot for business, Copilot chat, Grammarly, Notion AI, ChatGPT Plus, and maybe other AI features embedded in a product that I am not aware]]></description><link>https://nguyengineer.dev/how-i-emerging-ai-into-daily-life-2023</link><guid isPermaLink="true">https://nguyengineer.dev/how-i-emerging-ai-into-daily-life-2023</guid><category><![CDATA[#ai-tools]]></category><category><![CDATA[chatgptplus]]></category><category><![CDATA[documentation]]></category><category><![CDATA[virtual assistant]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Sat, 02 Dec 2023 04:51:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1701491907853/478727b8-9f26-4d71-9229-def4ccf62a4a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I'm using the latest commercial AI technologies, including Copilot for business, Copilot chat, Grammarly, Notion AI, ChatGPT Plus, and maybe other AI features embedded in a product that I am not aware of.</p>
<p>I took a basic course on building an AI algorithm: it covered gradient descent, and it was very basic. I didn't go further into more advanced algorithms because it is not my area of expertise, but it gave me a big picture of how AI works. Prove me wrong: today's AI is essentially statistics, picking the most probable output given the input.</p>
<p>As long as AI is still based on training data and probability, it still needs input to produce output; it won't be able to destroy humankind, and it can't "think" on its own. So what can it do?</p>
<p>For me, I use AI for those purposes</p>
<ol>
<li><p>Coding</p>
<ol>
<li><p>Code suggestion</p>
</li>
<li><p>Code fixing</p>
</li>
<li><p>Chat with code.</p>
</li>
<li><p>Chat with code base (not yet, still waiting for GitHub Enterprise to release this feature - this is freaking cutting edge)</p>
</li>
</ol>
</li>
<li><p>Research</p>
<ol>
<li><p>Anything, from the new plant I found on the road to the software design philosophy or rocket science</p>
</li>
<li><p>Product research</p>
</li>
</ol>
</li>
<li><p>Documentation</p>
<ol>
<li><p>Fix grammar</p>
</li>
<li><p>Find the missing argument in my document</p>
</li>
</ol>
</li>
<li><p>Helping my wife with her work</p>
<ol>
<li><p>Refine the requirement</p>
</li>
<li><p>Draw the diagram</p>
</li>
</ol>
</li>
<li><p>Making AI-based product</p>
</li>
</ol>
<p>And many more purposes.</p>
<h2>Coding</h2>
<p>My company subscribes to GitHub Copilot for Business, so I have access to the IDE plugin for my Neovim. But I still need the chat, and Copilot Chat didn't have a Neovim plugin yet, so I had to create my own plugin, naturally by utilizing AI to generate the Lua code. I have another post here on a plugin I created together with AI: <a href="https://finnng.hashnode.dev/quickly-create-a-scatch-in-neovim">https://finnng.hashnode.dev/quickly-create-a-scatch-in-neovim</a></p>
<p>I work for a company that partners with other enterprises, so data security is the top concern when using AI for enterprise work. All work-related code has to go through the APIs provided by the company.</p>
<h3>Benefits</h3>
<ul>
<li><p><strong>I can focus on ideas, rather than coding.</strong> Because I know coding is easy when ideas are clear.</p>
</li>
<li><p><strong>Write code faster</strong>. Copilot is amazing: it can guess my next lines with high accuracy. It speeds me up a lot on repetitive tasks and reminds me of syntax I don't remember.</p>
</li>
<li><p><strong>Faster experiments</strong>. I don't know Lua or Rust, and I don't have deep expertise in Python, but when I need to modify a library or a snippet of code I can always ask AI for help.</p>
</li>
<li><p><strong>Faster onboarding.</strong> When Copilot Chat with your codebase is released, I believe it will be the <em><strong>game changer</strong></em>. In the past, we needed an expert on the team to tell you where to change the code to satisfy a new requirement. Now newcomers and all developers can benefit from this. We still need humans to work on the code, weigh the pros and cons of a solution, and think through the system and how it affects users (<em><strong>AI can't do that</strong></em>), but at least it can connect the dots to help you gain understanding faster, especially in the huge codebases of enterprise companies.</p>
</li>
</ul>
<h3>Drawbacks</h3>
<ul>
<li><p><strong>It makes me lazy</strong>. I need to be mindful when using AI: to keep in mind how it works and how to judge its results. But sometimes I am still too lazy to do that and just use the output. This is a dangerous habit. Getting your hands dirty and scratching your head are part of the process; they help develop the brain, and we shouldn't let AI take that from us. You can research this yourself, or start with a few sources:</p>
<ul>
<li><p>Learning by doing helps students perform better in science</p>
</li>
<li><p><a href="https://news.harvard.edu/gazette/story/2021/10/study-finds-students-learn-better-through-physical-participation/#:~:text=Study%20finds%20students%20learn%20better,Siliezar%20Harvard%20Staff%20Writer%20Date">Finding hands-on approaches to remote learning</a></p>
</li>
</ul>
</li>
<li><p><strong>It changes my working habits.</strong> This one follows from the laziness above, and you can list many other drawbacks. The solution is mindfulness when using these tools.</p>
</li>
</ul>
<h2>Researching</h2>
<p>ChatGPT Plus has a crucial role in my research process. Google Search is too noisy and not focused on the answer; it makes me imagine the day Google becomes another Yahoo.</p>
<p>My research process has changed, too. ChatGPT Plus can access the internet in real time, and that is a game changer. Since it is based on statistics, it can list the most popular keywords and terminologies around the thing I'm researching.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701489617042/5f0512f3-0027-4413-9c83-974ade8a0e26.png" alt="" style="display:block;margin:0 auto" />

<p>After I have the consolidated keywords, articles, and terminologies, I can ask ChatGPT for more details, or do a Google search myself to verify the output.</p>
<p>Again, we need to be mindful of AI's results and do our own research. Keep in mind that it is a copilot, not a pilot.</p>
<h2>Documentation</h2>
<p>I'm using Notion AI at work, it is under an enterprise license for sensitive data protection. That's why I can only compose the work documentation with it.</p>
<p>I mostly use it for grammar fixes, or sometimes to review my docs and find pieces that are missing or unclear.</p>
<p>Once again, it isn't a "can't live without" tool; it helps me work faster, but I can't say by what percentage.</p>
<p>I find Grammarly much more useful, as I'm not a native English speaker. Grammarly has a macOS app that can fix grammar everywhere. It has a clear agreement not to access sensitive user data, but I'm not certain, so I disabled it in Notion to keep it from reading company-related documents.</p>
<h2>Helping my wife with her work</h2>
<p>This is where AI really shines. My wife is not a developer or a technical person, so the output of AI has to read naturally to humans.</p>
<p>I often ask AI to produce a diagram as PlantUML code and paste it into <a href="https://www.planttext.com/">https://www.planttext.com/</a> to view the diagram</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701490678972/53c79e7d-5ca8-43ec-b15b-a96eb797d802.png" alt="" style="display:block;margin:0 auto" />

<p>ChatGPT Plus also has a plugin to draw diagrams right in the chat, but it is a paid solution, so I'm fine with PlantUML.</p>
<p>ChatGPT Plus is very good at refining documents, since the GPT-4 model is trained on text. I have no comment about this, as the whole world is already impressed by it. But if you use the free account with GPT-3.5, it may not be as good as you expect.</p>
<p>I installed the app, and it can take pictures as input too. Now I can ask it what plant is in a picture. After a few back-and-forth corrections it can show me the right answer.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701491440241/2c028757-0c40-4e65-8b93-7fa80b4d95c6.png" alt="" style="display:block;margin:0 auto" />

<h3>Voice chat with AI</h3>
<p>The ChatGPT app also has a voice mode that lets you chat with the AI by voice. I usually use it on my morning walk while my mind is still wandering over some research. It makes me feel like a scene from a movie where a man talks with his AI assistant as if talking with a person. Interesting, right? Now everyone can hire an assistant.</p>
<img src="https://149695847.v2.pressablecdn.com/wp-content/uploads/2016/10/her5.jpg" alt="Movie HER – a Portrayal of Future AI Capabilities" style="display:block;margin:0 auto" />

<h2>Making AI-based product</h2>
<p>I made a few small products and editor plugins based on AI. But since they are tightly coupled to OpenAI, with their core competency resting on OpenAI's shoulders, I am never going to build a business on them. You can read more about this opinion here: <a href="https://finnng.hashnode.dev/dont-outsource-your-core-competencies">https://finnng.hashnode.dev/dont-outsource-your-core-competencies</a></p>
<p>I may share a few projects in the future if I have time.</p>
<h1>Conclusion</h1>
<p>That's it. Writing a blog is hard, so please give it an upvote if you read it all the way through.</p>
<p>I hope it gives you some insights into using AI to improve your daily life.</p>
]]></content:encoded></item><item><title><![CDATA[I'm surfing on the trend of HTMX]]></title><description><![CDATA[I'm tired of JS, not to mention TS and mostly I blame the whole JS ecosystem.
If I can come in time to tell one thing to my younger self I will tell him all in BTC back in 2012! Just kidding, I will tell him to choose Java or at least C# Asp.net. Don...]]></description><link>https://nguyengineer.dev/im-surfing-on-the-trend-of-htmx</link><guid isPermaLink="true">https://nguyengineer.dev/im-surfing-on-the-trend-of-htmx</guid><category><![CDATA[htmx]]></category><category><![CDATA[Go Language]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Sat, 11 Nov 2023 09:09:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/yRv2F4NN3BE/upload/78801738b15c7b39ff148f71cb3f425c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I'm tired of JS, not to mention TS and mostly I blame the whole JS ecosystem.</p>
<p>If I could go back in time to tell my younger self one thing, I would tell him to go all in on BTC back in 2012! Just kidding, I would tell him to choose Java, or at least C# ASP.NET. Don't choose anything trendy while you are still a novice. Just kidding again: who knows what the future holds? And we can't time travel.</p>
<p>JS is still my main tech stack on the backend, along with Go and C#. But on the frontend, I have no choice but JS. While the backend can change easily, the frontend is different: I'm not a frontend guy, so I have no passion for refactoring or rewriting frontend code, whatever the technology.</p>
<p>I am also tired of Webpack, frontend build pipelines, and dependency chains that break because an author adopted something new in their repo. I want to stay on a stable version, but then Dependabot says I have security risks if I stay on some specific version. What can I do but keep my dependencies up to date?</p>
<p>It is time to experiment. But plain server-side rendering takes me back to the days of PHP and template engines, and to AJAX and jQuery if I want the page not to reload on every click. Then I found HTMX, and these points make sense:</p>
<blockquote>
<ul>
<li><p>Why should only <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a"><code>&lt;a&gt;</code></a> and <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/form"><code>&lt;form&gt;</code></a> be able to make HTTP requests?</p>
</li>
<li><p>Why should only <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Element/click_event"><code>click</code></a> &amp; <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/HTMLFormElement/submit_event"><code>submit</code></a> events trigger them?</p>
</li>
<li><p>Why should only <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/GET"><code>GET</code></a> &amp; <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/POST"><code>POST</code></a> methods be <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods">available</a>?</p>
</li>
<li><p>Why should you only be able to replace the <strong>entire</strong> screen?</p>
</li>
</ul>
<p>By removing these arbitrary constraints, htmx completes HTML as a <a target="_blank" href="https://en.wikipedia.org/wiki/Hypertext">hypertext</a></p>
<p>htmx is a library that allows you to access modern browser features directly from HTML, rather than using javascript.</p>
</blockquote>
<p>With HTMX I don't have to write JS anymore; instead, the server works with the page to produce updated HTML (no page reload). The communication between backend and frontend is now HTML, not JSON, so the server is in charge of the frontend logic.</p>
<p>I like this idea. In my experience, putting business logic on the frontend is a mess: frontend developers tend to work with the frontend only, doing some magic with the backend JSON response to satisfy the business logic and leaving the mess there. We asked each other many times: where is the bug, frontend or backend? We could trace it, but it was still time-consuming.</p>
<p>We should let the frontend folks do what they do best: frontend, styling, animation. The backend folks do the computation and data formatting.</p>
<p>Enough talking, in the next part, I will start writing an htmx application.</p>
]]></content:encoded></item><item><title><![CDATA[Quickly create a scatch in Neovim]]></title><description><![CDATA[I missed the feature of quickly creating a scratch when I switched back to Vim from Webstorm.

A scratch file is handy, it auto-saves somewhere on your computer and allows you to paste temporary content here to do whatever you want.
In Vim or Neovim ...]]></description><link>https://nguyengineer.dev/quickly-create-a-scatch-in-neovim</link><guid isPermaLink="true">https://nguyengineer.dev/quickly-create-a-scatch-in-neovim</guid><category><![CDATA[nvim, ]]></category><category><![CDATA[Lua]]></category><category><![CDATA[scratch]]></category><category><![CDATA[WebStorm]]></category><category><![CDATA[dotfiles]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Sat, 11 Nov 2023 08:18:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1699690666500/f0421036-25f6-40d2-81d6-370d7f54b7d0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I missed the feature of quickly creating a scratch when I switched back to Vim from Webstorm.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699689525829/d023e417-f141-43b6-8998-b65af2dc244c.png" alt class="image--center mx-auto" /></p>
<p>A scratch file is handy, it auto-saves somewhere on your computer and allows you to paste temporary content here to do whatever you want.</p>
<p>In Vim or Neovim you need to open a new buffer with <code>:new</code> or <code>:vnew</code>, then <code>:set filetype={sometype}</code>. Paste the content, then <code>:w</code> to save it somewhere; you also need to name it. What a cumbersome process.</p>
<p>Say no more, this is my Neovim Lua script.</p>
<pre><code class="lang-lua">-- Require the fzf-lua plugin
local fzf_lua = require("fzf-lua")

-- Function to open a new scratch buffer with a specific filetype
local function open_scratch_buffer(filetype)
    -- Generate a timestamp for unique file naming
    local date_time = os.date("%Y-%m-%d_%H-%M-%S")
    -- Define the directory where scratch files will be stored
    local scratch_dir = vim.fn.expand("~/.vim/scratches/")
    -- Create the full path for the new scratch file
    local filename = scratch_dir .. date_time .. "." .. filetype

    -- Create the scratch directory if it does not exist
    vim.fn.mkdir(scratch_dir, "p")
    -- Open a new vertical split with the created file
    vim.cmd("vnew " .. vim.fn.fnameescape(filename))
    -- Set the filetype for the new buffer for appropriate syntax highlighting
    vim.api.nvim_buf_set_option(0, "filetype", filetype)
    -- Save the file to create it on disk
    vim.api.nvim_command("write")
end

-- Global function to select a filetype and create a scratch buffer
function _G.select_filetype_and_create_scratch()
    -- Retrieve the list of Vim syntax files from the runtime path
    local syntax_dir = vim.fn.globpath(vim.fn.getenv("VIMRUNTIME"), "syntax/*.vim", false, true)
    local filetypes = {}

    -- Extract filetype names <span class="hljs-keyword">from</span> the retrieved syntax file paths
    <span class="hljs-keyword">for</span> _, filepath <span class="hljs-keyword">in</span> ipairs(syntax_dir) <span class="hljs-keyword">do</span>
        local filetype = filepath:match(<span class="hljs-string">"syntax[/\\](.+).vim$"</span>)
        <span class="hljs-keyword">if</span> filetype then
            -- Add the filetype to the list
            table.insert(filetypes, filetype)
        end
    end

    -- Execute fzf <span class="hljs-keyword">with</span> the list <span class="hljs-keyword">of</span> filetypes
    fzf_lua.fzf_exec(filetypes, {
        prompt = <span class="hljs-string">"Filetypes&gt; "</span>,
        actions = {
            -- Action to perform when a filetype is selected
            [<span class="hljs-string">"default"</span>] = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">selected</span>)
                <span class="hljs-title">open_scratch_buffer</span>(<span class="hljs-params">selected[<span class="hljs-number">1</span>]</span>)
            <span class="hljs-title">end</span>,
        },
    })
<span class="hljs-title">end</span>

-- <span class="hljs-title">Set</span> <span class="hljs-title">a</span> <span class="hljs-title">key</span> <span class="hljs-title">mapping</span> <span class="hljs-title">for</span> <span class="hljs-title">the</span> <span class="hljs-title">function</span>
-- &lt;<span class="hljs-title">leader</span>&gt;<span class="hljs-title">t</span> <span class="hljs-title">will</span> <span class="hljs-title">trigger</span> <span class="hljs-title">the</span> <span class="hljs-title">filetype</span> <span class="hljs-title">selection</span> <span class="hljs-title">and</span> <span class="hljs-title">scratch</span> <span class="hljs-title">buffer</span> <span class="hljs-title">creation</span>
<span class="hljs-title">vim</span>.<span class="hljs-title">api</span>.<span class="hljs-title">nvim_set_keymap</span>(<span class="hljs-params"><span class="hljs-string">"n"</span>, <span class="hljs-string">"&lt;leader&gt;t"</span>, <span class="hljs-string">":lua _G.select_filetype_and_create_scratch()&lt;CR&gt;"</span>, { noremap = true }</span>)</span>
</code></pre>
<p>What does it do?</p>
<ol>
<li><p>Get the list of Vim-supported filetypes by globbing the runtime's syntax files, keeping only the base names to build the list</p>
</li>
<li><p>Use the fzf-lua API <code>fzf_exec</code> to serve the list and give you a UI for fuzzy searching</p>
</li>
<li><p>Take the selected filetype, open a new buffer in a vertical split to the right, name it after the current datetime, and save it. Now you can paste your content there.</p>
</li>
</ol>
<p>Why is the auto-save step important?</p>
<p>Because some commands, like <code>:EslintAutoFix</code>, won't run on an unsaved buffer.</p>
<p>To use this, you need:</p>
<ol>
<li><p>Your Neovim is configured with init.lua</p>
</li>
<li><p>You need to install fzf-lua. I tried fzf.vim, but no luck; it's incompatible with this script.</p>
</li>
<li><p>Load the script from your init.lua with <code>require("scratch_config")</code>, assuming you saved it as <code>scratch_config.lua</code> in your Lua path</p>
</li>
<li><p>Enjoy the result:</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699690132946/28e39378-5ed7-4641-8406-676e8fc330dd.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699690402002/1d4b2446-ad8d-47ae-8af8-e3f5f0065bc4.png" alt class="image--center mx-auto" /></p>
<p>For more information, check my latest update <a target="_blank" href="https://github.com/finnng/dotfiles/blob/master/nvim/lua/scratch_config.lua">https://github.com/finnng/dotfiles/blob/master/nvim/lua/scratch_config.lua</a></p>
<p>This is my repo, so it may change at any time; in that case, you can visit the original commit: <a target="_blank" href="https://github.com/finnng/dotfiles/commit/c09b79cd455e940cdb07ad836bfce2ca255fd2de">https://github.com/finnng/dotfiles/commit/c09b79cd455e940cdb07ad836bfce2ca255fd2de</a></p>
]]></content:encoded></item><item><title><![CDATA[Finally, I found the best way to keep dot files in sync]]></title><description><![CDATA[I was struggling to keep my dotfiles in GitHub in sync with the dot files in the $HOME directory. But now it is so smooth.
Cut to the chase, here is what I do:

The $HOME folder will be the single source of truth. I keep all the original files here

...]]></description><link>https://nguyengineer.dev/finally-i-found-the-best-way-to-keep-dot-files-in-sync</link><guid isPermaLink="true">https://nguyengineer.dev/finally-i-found-the-best-way-to-keep-dot-files-in-sync</guid><category><![CDATA[dotfiles]]></category><category><![CDATA[dotfile]]></category><category><![CDATA[vim]]></category><category><![CDATA[neovim]]></category><category><![CDATA[karabiner-elements]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Wed, 08 Nov 2023 14:31:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/nAYC7M-7M00/upload/d776d84dfda95c80ed3f058665952285.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was struggling to keep my dotfiles in GitHub in sync with the dot files in the $HOME directory. But now it is so smooth.</p>
<p>Cut to the chase, here is what I do:</p>
<ol>
<li><p>The $HOME folder will be the <strong>single source of truth.</strong> I keep all the original files here</p>
</li>
<li><p>Use rsync to sync all the files from $HOME to dotfiles Github local repo</p>
</li>
<li><p>Commit the files to the origin to backup.</p>
</li>
</ol>
<p>You may have many directories to sync, and therefore many rsync commands, so pack them into a bash function and add it to your ~/.zshrc:</p>
<pre><code class="lang-bash"><span class="hljs-function"><span class="hljs-title">dotsync</span></span>() {
  rsync -r ~/.config/nvim/* ~/projects/dotfiles/nvim
  rsync -r ~/.config/karabiner/* ~/projects/dotfiles/karabiner
  rsync -r ~/.config/kitty/* ~/projects/dotfiles/kitty
}
</code></pre>
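<p>Step 3, committing the synced files to the origin, can be folded into the same workflow. Here is a hypothetical companion function, a sketch only: the repo path, the <code>DOTFILES_REPO</code> override, and the commit message are my assumptions for illustration, not part of the setup above.</p>
<pre><code class="lang-bash"># Hypothetical companion to dotsync: snapshot whatever was synced.
# DOTFILES_REPO and the commit message are assumptions for illustration.
dotbackup() {
  local repo="${DOTFILES_REPO:-$HOME/projects/dotfiles}"
  (
    cd "$repo" || exit 1
    git add -A
    # Commit only when something actually changed
    if ! git diff --cached --quiet; then
      git commit -m "Sync dotfiles $(date +%F)"
    fi
  )
}
</code></pre>
<p>Run <code>dotsync</code>, then <code>dotbackup</code>, and push whenever you like.</p>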
<p>My dotfiles repo is public here <a target="_blank" href="https://github.com/finnng/dotfiles">https://github.com/finnng/dotfiles</a></p>
<p>There are a lot of benefits to this approach</p>
<ol>
<li><p>My dotfiles repo is pristine since the file itself is the single source of truth</p>
</li>
<li><p>No more symlink issues, like having to force the link with <code>ln -fs</code></p>
</li>
<li><p>Stress-free: no more file conflicts; let the rsync command deal with them.</p>
</li>
</ol>
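<p>On a new machine, the flow simply reverses: clone the repo, then copy outward. Here is a minimal sketch of what that could look like; the helper name, repo location, and directory list are my assumptions, not something from my actual dotfiles.</p>
<pre><code class="lang-bash"># Hypothetical inverse of dotsync for a fresh machine.
# The repo location and the directory list are assumptions.
dotrestore() {
  local repo="${DOTFILES_REPO:-$HOME/projects/dotfiles}"
  mkdir -p "$HOME/.config"
  for dir in nvim karabiner kitty; do
    if [ -d "$repo/$dir" ]; then
      # Trailing slash: copy the contents of each repo directory
      rsync -r "$repo/$dir/" "$HOME/.config/$dir"
    fi
  done
}
</code></pre>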
]]></content:encoded></item><item><title><![CDATA[Don't outsource your core competencies]]></title><description><![CDATA[I have scheduled myself to write a weekly blog, so I am now thinking about what I want to share this week. There's always so much to talk about when it comes to AI and its impact on businesses.
This w]]></description><link>https://nguyengineer.dev/dont-outsource-your-core-competencies</link><guid isPermaLink="true">https://nguyengineer.dev/dont-outsource-your-core-competencies</guid><category><![CDATA[generative ai]]></category><category><![CDATA[SMEs]]></category><category><![CDATA[Product Design]]></category><category><![CDATA[product development]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Sat, 21 Oct 2023 03:31:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/64479ed989b09f5069ab7a18/0969cada-5193-49e1-bc68-7b98e3a92c65.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I have scheduled myself to write a weekly blog, so I am now thinking about what I want to share this week. There's always so much to talk about when it comes to AI and its impact on businesses.</p>
<p>This week, I just finished a task related to AI. It was nothing special, mostly just calling another third-party service to outsource some work. But even in this seemingly mundane task, there were some interesting insights that I would like to share with you.</p>
<p>Small AI businesses nowadays mostly rely on generative AI to process their core business operations. It's an efficient approach, but it also comes with its fair share of risks. Just yesterday, I observed that the OpenAI service was down, which had a direct impact on a feature in our product. Until the service was back up and running, that feature simply didn't work. Thankfully, it was just a small feature with a lot of fault tolerance built in. But it got me thinking, what if an AI business heavily relied on the OpenAI service? They would go down together, potentially causing significant disruptions to their operations.</p>
<p>This situation made me realize the importance of not outsourcing our core competence. It's easy for anyone to build an application around OpenAI without truly understanding how AI works. However, there's a better way. By using an open-source model, we can take control of our AI implementation. We can run it on our own infrastructure, gaining a deeper understanding of its inner workings. We can train and fine-tune the model to better suit our needs, all while ensuring the security and privacy of our customer data. This approach empowers us to be the true owners of our core competency, allowing us to navigate the ever-changing AI landscape with confidence.</p>
<p>In our company, we take a practical approach to AI. While we use AI to improve certain features and content of our product, our main focus is still on the traditional market. Our product is already leading the market, even without AI. However, AI allows us to enhance our offerings and provide more value to our customers. It's important to note that our success in the market is not solely dependent on AI. We have a team of industry experts who drive our success.</p>
<p>Reflecting on my recent experiences with AI and its role in our business, I appreciate the need for a balanced approach. While AI brings great benefits, we must also be aware of the potential risks and maintain our core competencies. By doing so, we can leverage the power of AI to propel our businesses forward while maintaining control.</p>
]]></content:encoded></item><item><title><![CDATA[An underrated problem solving skill]]></title><description><![CDATA[It is the ability to transform the verbal, rough description into programmable tasks. And this skill is hard to test out during the interview.
Normally we - software engineers go to an interview, and they throw a competitive programming problem to us...]]></description><link>https://nguyengineer.dev/an-underrated-problem-solving-skill</link><guid isPermaLink="true">https://nguyengineer.dev/an-underrated-problem-solving-skill</guid><category><![CDATA[problem solving skills]]></category><category><![CDATA[pragmatic]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[engineering-management]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Sat, 14 Oct 2023 04:28:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/mpN7xjKQ_Ns/upload/f1be945a3ffc8009920150aa8e2f48d9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It is the ability to transform the verbal, rough description into programmable tasks. And this skill is hard to test out during the interview.</p>
<p>Normally we software engineers go to an interview, and they throw a competitive programming problem at us and say, solve it or **** off. They say that is the best way for engineers to show their problem-solving skills, but I found that isn't true. Here we go; I'm not a big influencer, so it won't hurt to keep reading this perspective.</p>
<p>Some of my coworkers are good at solving LeetCode problems, and they are good at requirement-level problem-solving too. But here is the underlying issue: some of them still cannot</p>
<ul>
<li><p>Explain their technical understanding to stakeholders in the stakeholders' language.</p>
</li>
<li><p>Think through the entire system: how a new change affects the other parts, and what the consequences are.</p>
</li>
<li><p>Write and speak clear English. I'm not talking about fluency here; I have worked with native UK and US coworkers, and even in their native language their presentations were not always as clear as I expected. This skill includes drawing diagrams, writing docs, giving presentations, and leading meetings.</p>
</li>
<li><p>Communicate and collaborate: delegation, listening, low ego, being easy to work with, and teaching others in the day-to-day work.</p>
</li>
</ul>
<p>LeetCode problems are, somehow, on the other side of the pragmatic problem-solving skills above. I don't understand why companies test mathematical skills they never use in real work. What they need is a person who can transform the product owner's thinking into a running feature, while maintaining a harmonious atmosphere in the team. Live long and prosper.</p>
<p>People with strong mathematical skills are good for the product at some point. I worked with some 10x engineers, and they rock, but lately they also burned the whole team to the ground with their technology-centric thinking; high IQ doesn't always come with high EQ. Sadly, I usually find myself in pro-IQ interviews where competitive programming is not my expertise. Sometimes I pass the interview, but then I have to work with those 10x engineers, and that was a terrible, stressful experience.</p>
<p>Finally, with 10 years of experience in this industry, I can say that people with my kind of problem-solving skills still have a place, with high pay and an open road to higher positions, but companies that respect those skills are not common.</p>
<p>Advice? Here are my 2 cents.</p>
<p>For engineers, just keep moving, you know you are the unpopular group, and there are always the right places for you.</p>
<p>For companies, let's say EQ and IQ range from 0 to 10. You'd better look for a person who has both IQ and EQ above 5; that's enough. Don't just look at the IQ 10 and ignore the EQ aspect: a low-EQ person is a destroyer. Mathematical problem-solving is not everything; you need engineers with pragmatic skills too. The perfect person may still exist if you keep looking, and maybe someday you will find them.</p>
]]></content:encoded></item><item><title><![CDATA[Building a system that automatic searches things for you]]></title><description><![CDATA[That is the idea of my recent failed project. Yet another project that going to blow your mind in... never.

As a human, I sometimes find interesting and secondhand stuff or something rare. But I can't check the web search every day, so I hope there ...]]></description><link>https://nguyengineer.dev/building-a-system-that-automatic-searches-things-for-you</link><guid isPermaLink="true">https://nguyengineer.dev/building-a-system-that-automatic-searches-things-for-you</guid><category><![CDATA[#cleanarchitecture #golang #typescript #postgres #queuetable #monolith #monorepo #chromedp #proxypool #nlp #bert #vectordatabase #tokenbucketratelimiterwithpostgresonly #react #reactquery]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Wed, 26 Jul 2023 08:32:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1698836100754/5852d23a-ae59-4b2b-b45c-b04f558effe1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>That is the idea of my recent failed project. Yet another project that going to blow your mind in... never.</p>
<p><img src="https://programmerhumor.io/wp-content/uploads/2022/06/programmerhumor-io-programming-memes-1ddc40bda08dbae-scaled.jpg" alt="This guy is lucky he didn't post the problem on stackoverflow –  ProgrammerHumor.io" /></p>
<p>As a human, I sometimes come across interesting secondhand stuff or something rare. But I can't run the web search every day, so I hoped there was a tool that would search for that item daily and notify me when it's found.</p>
<p>Sounds great! The tool will run in the background, diligently looking for your item on the web day by day. And one day, the system finds it! Ding ding! You receive a notification to look for the item.</p>
<p>I put a lot of effort into building this system, hoping it would become a viable SaaS. However, there are some obstacles that I don't want to overcome. Let me rephrase that: "don't want", not "can't". And that's the story I'm sharing today. It won't be a business insight story, but rather a purely technical one.</p>
<p>The keywords for this system are:</p>
<ul>
<li><p>Clean architecture</p>
</li>
<li><p>Golang</p>
</li>
<li><p>Typescript</p>
</li>
<li><p>Postgres</p>
</li>
<li><p>Queue table</p>
</li>
<li><p>Monolith</p>
</li>
<li><p>Mono repo</p>
</li>
<li><p>Chromedp</p>
</li>
<li><p>Proxy pool</p>
</li>
<li><p>NLP</p>
</li>
<li><p>BERT</p>
</li>
<li><p>Vector database</p>
</li>
<li><p>Token bucket rate limiter with Postgres only</p>
</li>
<li><p>React</p>
</li>
<li><p>React-query</p>
</li>
</ul>
<p>This system also comes with some interesting challenges in data crawling and NLP.</p>
<p>If anyone is interested in this topic, I will keep writing until the end of this series.</p>
]]></content:encoded></item><item><title><![CDATA[Tame the wild horse Windows]]></title><description><![CDATA[This is my third attempt at migrating the dev environment from Mac to Windows. I used to develop on Mac for 5 years. With all my muscle memory on keymaps, productivity tools, and the general UX is stuck to Mac, I can’t think of moving to another OS, ...]]></description><link>https://nguyengineer.dev/tame-the-wild-horse-windows</link><guid isPermaLink="true">https://nguyengineer.dev/tame-the-wild-horse-windows</guid><category><![CDATA[WSL]]></category><category><![CDATA[Windows]]></category><category><![CDATA[Developer Tools]]></category><dc:creator><![CDATA[Nguyen Engineer]]></dc:creator><pubDate>Sun, 21 Nov 2021 17:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/f40PF2gDBKU/upload/0a38233999e9fde2b53a1eae336aa183.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is my third attempt at migrating the dev environment from Mac to Windows. I used to develop on Mac for 5 years. With all my muscle memory on keymaps, productivity tools, and the general UX is stuck to Mac, I can’t think of moving to another OS, until the quarantine day. I got locked down at home for a month, and my 2017 Mac started to slow since I started C# development with Rider. And I have my gaming PC stay here just for Doto2.</p>
<p>The first attempt, a year ago, was about native prebuilt apps for Windows like nvim-qt, VS Code, and Windows Terminal… I quit after a few days because I missed my keymaps.</p>
<p>A few months later, the second attempt was about WSL. This time I dug deeper into the real command-line tools like curl, git, grep, gcloud, kubectl, and autojump… I was too lazy to learn the PowerShell way to do it, so I went with WSL. And you know what? WSL sucks, with a ton of trouble while installing it and using it through Windows Terminal.</p>
<p>And this time is the third.</p>
<p>I have to accept that I can’t make the experience on Windows exactly like Mac. Since I know how to live with compromise, I know how to deal with this situation. Besides, I don’t want to be a sheeple that sticks to every generation of Mac even when it sucks. So, the first thing that comes to mind is to learn to compromise.</p>
<h1 id="heading-the-editor">The editor</h1>
<p>The most frustrating thing with Windows is that I can't configure Vim, my favorite editor, exactly like I have it on Mac. For example, on Mac I use Karabiner Elements to map jk to ESC, which is super productive with Escape right under your fingers, but I don't have it here. Fzf support on Windows is experimental; it's buggy and doesn't work at all in my case, and Windows Terminal is not very good with Vim running inside…</p>
<p><em>I have to config Rider and Vscode as much as I can, compromised with something missing.</em></p>
<h1 id="heading-the-command-line-tools">The command-line tools</h1>
<p>Git is the second most important tool. I have several alias functions for git that combine multiple commands into one, saving time on repetitive tasks. They work in pretty much any shell except PowerShell… So I had to learn the PowerShell way and port those commands one by one. The syntax is quite similar to bash, but each command still took some research to translate. I have around 10 frequently used commands like that, e.g., tagging, rebasing, showing the log, pulling with rebase… For example, merging the current branch into master:</p>
<pre><code class="lang-bash">function gmerge() {
  $CURRENT_BRANCH = &amp; Invoke-Expression gcurrent 2&gt;&amp;1
  $TO_BRANCH = "master"
  "&gt; Merging $CURRENT_BRANCH -&gt; $TO_BRANCH"
  "&gt; git checkout $TO_BRANCH"
  git checkout $TO_BRANCH
  "&gt; git merge --no-ff $CURRENT_BRANCH"
  git merge --no-ff $CURRENT_BRANCH
  "&gt; git commit -m 'Merge branch $CURRENT_BRANCH'"
  git commit -m "Merge branch $CURRENT_BRANCH"
  "&gt; git push origin $TO_BRANCH"
  git push origin $TO_BRANCH
  "&gt; Delete $CURRENT_BRANCH branch from local"
  git branch -D $CURRENT_BRANCH
}
</code></pre>
<p>For the other commands, Windows has <code>scoop</code>, which installs prebuilt CLI programs that work much like their Linux or Mac counterparts. If Scoop doesn't have something, MSYS2 comes to the rescue. The ones I use most are curl, gcloud, autojump, grep, cat, tail, pwd… So far the command line feels much the same.</p>
<h1 id="heading-the-terminal">The terminal</h1>
<p>Windows doesn't offer much choice in terminal apps, because most of them are ugly af. Windows Terminal is the best so far, with true color, tabs, and pane support, and it can open all kinds of shells.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:1400/0*b59SnBpdRI9zM6LU.png" alt /></p>
<p>It still needs a lot of improvement, but at least it is good enough for now.</p>
<h1 id="heading-productivity-tools">Productivity tools</h1>
<h2 id="heading-window-management">Window management</h2>
<p>Let me introduce you to PowerToys, a collection of productivity tools for Windows. One tool, FancyZones, splits windows into predefined layouts. But I found it complex, a hundred times harder to use than my favorite Mac tool, Spectacle. So the basic default Win + up/down/left/right is enough for me.</p>
<p>If you want to press the Space key to preview a file, there is a tool named QuickLook on the Windows Store that can help.</p>
<h2 id="heading-spotlight-search">Spotlight search</h2>
<p>PowerToys also includes PowerToys Run, which searches files and applications, works as a calculator, runs shell commands, opens URLs, and runs window commands.</p>
<h2 id="heading-key-remapping">Key remapping</h2>
<p>PowerToys Keyboard Manager can remap simple keystrokes. I don't want to retrain my muscle memory, so I map all the familiar keystrokes, like Alt+C and Alt+V to Ctrl+C and Ctrl+V, and so on.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:1400/0*fRqg2Ci3kB2zFVeO.png" alt /></p>
<p>I also use the F keys to quickly switch apps, which is super productive for me. For example, I usually jump between Chrome, Terminal, Vim, and Postman during development, so I map them directly to F1, F2, F3, and F8… Check out my AutoHotkey dotfile:</p>
<pre><code class="lang-bash">F1::
if WinExist("ahk_exe chrome.exe",,"Picture-in-Picture")
    WinActivate
return

F2::
if WinExist("ahk_exe code.exe")
    WinActivate
return

F3::
if WinExist("ahk_exe WindowsTerminal.exe")
    WinActivate
return

F4::
if WinExist("ahk_exe rider64.exe")
    WinActivate
return

F5::
if WinExist("ahk_exe firefox.exe",,"Picture-in-Picture")
    WinActivate
return

!F5::
if WinExist("ahk_exe telegram.exe")
    WinActivate
return

F6::
if WinExist("ahk_exe discord.exe")
    WinActivate
return

F7::
if WinExist("ahk_exe Slack.exe")
    WinActivate
return

F8::
if WinExist("ahk_exe Postman.exe")
    WinActivate
return

F9::
if WinExist("ahk_exe pritunl.exe")
    WinActivate
return

F10::
if WinExist("ahk_exe clickup.exe")
    WinActivate
return
</code></pre>
<h1 id="heading-conclusion">Conclusion</h1>
<p>It is not the end of the road yet; I switched to Windows only a few weeks ago, and this post will be updated. Check out my dotfiles for more detail: <a target="_blank" href="https://github.com/finnng/dotfiles">https://github.com/finnng/dotfiles</a></p>
<p>Some things still make me uncomfortable. My favorite editor is Neovim, but it is not fully working on Windows yet, and my beloved fzf.vim is still unstable and buggy, which prevents me from using it as a daily driver. There are no Apple Photos, iMessage, Notes, or the other productivity apps of the Apple ecosystem here. I compromise because the compute power is huge compared to my MacBook Pro. I think a big Mac Pro with 28 cores and 64 GB of RAM would resolve all the trouble above.</p>
]]></content:encoded></item></channel></rss>