Extremely Serious

Month: March 2026

Why Do We Need Modern Software and Tools?

Modern software and tools are no longer “nice to have”; they are the infrastructure that lets individuals and organizations work faster, more accurately, and more securely in a digital economy.

The role of modern tools in today’s world

We now build, run, and maintain most services through software, from banking and healthcare to logistics and entertainment. Modern tools encapsulate current best practices, regulations, and technologies, allowing us to keep up with rapidly changing requirements and expectations.

Efficiency and productivity at scale

Modern tools automate repetitive work such as deployments, testing, reporting, and coordination, which dramatically reduces manual effort and context switching. This automation scales: one team can now manage systems that would previously have required many more people, simply because the tools handle orchestration and routine checks.

Accuracy, reliability, and reduced risk

Contemporary platforms embed validation, type checking, automated tests, and monitoring capabilities that reduce the likelihood of human error. As a result, systems become more reliable, analytics more trustworthy, and business decisions less exposed to mistakes arising from inconsistent or incorrect data.

Collaboration in a distributed world

Work has become inherently distributed across locations and time zones, and modern software is designed to support this reality. Shared repositories, real‑time document and code collaboration, integrated chat, and task tracking make it feasible for cross‑functional teams to coordinate effectively without being physically co‑located.

Security, compliance, and maintainability

Security threats evolve constantly, and older tools tend not to receive timely patches or support for new standards. Modern platforms incorporate stronger authentication, encryption, audit trails, and compliance features, helping organizations protect data and meet regulatory obligations while keeping maintenance overhead manageable.

Innovation and competitive advantage

New capabilities—AI-assisted development, advanced analytics, low‑code platforms, cloud‑native services—are exposed primarily through modern tools and ecosystems. Organizations that adopt them can experiment faster, ship features more quickly, and create better user experiences, while those tied to outdated tooling tend to move slowly and lose competitive ground.

In short, we use modern software and tools because they are the practical way to achieve speed, quality, security, and innovation in a world where all of these are moving targets.

Cloud Native Applications and the Twelve‑Factor Methodology

Cloud native and the twelve‑factor methodology describe two tightly related but distinct layers of modern software: cloud native is primarily about the environment and platform you deploy to, while twelve‑factor is about how you design and implement the application so it thrives in that environment.

What “cloud native” actually means

Cloud‑native applications are designed to run on dynamic, elastic infrastructure such as public clouds, private clouds, or hybrid environments. They assume that:

  • Infrastructure is ephemeral: instances can disappear and be recreated at any time.
  • Scale is horizontal: you handle more load by adding instances, not vertically scaling a single machine.
  • Configuration, networking, and persistence are provided by the platform and external services, not by local machine setup.

Typically, cloud‑native systems use:

  • Containers (OCI images) as the primary packaging and deployment unit.
  • Orchestration (e.g., Kubernetes) to schedule, scale, heal, and roll out workloads.
  • Declarative configuration and infrastructure‑as‑code to describe desired state.
  • Observability (logs, metrics, traces) and automation (CI/CD, auto‑scaling, auto‑healing) as first‑class concerns.

From an architect’s perspective, “cloud native” is the combination of these platform capabilities with an application design that can exploit them. Twelve‑factor is one of the earliest and still influential descriptions of that design.

The twelve‑factor app in a nutshell

The twelve‑factor methodology was introduced to codify best practices for building Software‑as‑a‑Service applications that are:

  • Portable across environments.
  • Easy to scale horizontally.
  • Amenable to continuous deployment.
  • Robust under frequent change.

The original factors (Codebase, Dependencies, Config, Backing services, Build/Release/Run, Processes, Port binding, Concurrency, Disposability, Dev/prod parity, Logs, Admin processes) constrain how you structure and operate the app. The key idea is that by following these constraints, you produce an application that is:

  • Stateless in its compute tier.
  • Strict about configuration boundaries.
  • Explicit about dependencies.
  • Friendly to automation and orchestration.

Notice how those properties line up almost one‑for‑one with cloud‑native expectations.

How twelve‑factor underpins cloud‑native properties

Let’s connect specific twelve‑factor principles to core cloud‑native characteristics.

Portability and containerization

Several factors directly support packaging and running your app in containers:

  • Dependencies: All dependencies are declared explicitly and isolated from the base system. This maps naturally to container images, where your application and its runtime are packaged together.
  • Config: Configuration is stored in the environment, not baked into the image. That means the same image can be promoted across environments (dev → test → prod) simply by changing environment variables, ConfigMaps, or Secrets.
  • Backing services: Backing services (databases, queues, caches, etc.) are treated as attached resources, accessed via configuration. This decouples code from specific infrastructure instances, making it easy to bind to managed cloud services.
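As a small sketch of the Config and Backing services factors in TypeScript: all environment-specific values, including the connection string for an attached database, come from the environment rather than the code or the image. The variable names (DATABASE_URL, LOG_LEVEL) and defaults are illustrative, not prescribed by the methodology.

```typescript
// Sketch: twelve-factor Config + Backing services.
// The same compiled artifact runs in dev, test, and prod; only the
// environment differs. DATABASE_URL binds a backing service as an
// attached resource.

interface AppConfig {
  databaseUrl: string;
  logLevel: string;
}

export function loadConfig(
  env: Record<string, string | undefined> = process.env
): AppConfig {
  return {
    // The database is an attached resource, swapped by changing config only.
    databaseUrl: env.DATABASE_URL ?? "postgres://localhost:5432/dev",
    logLevel: env.LOG_LEVEL ?? "info",
  };
}
```

Promoting the image from dev to prod then means changing only the injected environment (e.g. via Kubernetes ConfigMaps or Secrets), never rebuilding.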

Result: your artifact (image) becomes environment‑agnostic, which is a prerequisite for true cloud‑native deployments across multiple clusters, regions, or even cloud providers.

Statelessness and horizontal scalability

Cloud‑native platforms shine when workloads are stateless and scale horizontally. Several factors enforce that:

  • Processes: The app executes as one or more stateless processes; any persistent state is stored in external services.
  • Concurrency: Scaling is achieved by running multiple instances of the process rather than threading tricks inside a single instance.
  • Disposability: Processes are fast to start and stop, enabling rapid scaling, rolling updates, and failure recovery.

On an orchestrator like Kubernetes, these characteristics translate directly into:

  • Replica counts controlling concurrency.
  • Pod restarts and rescheduling being safe and routine.
  • Auto‑scaling policies that can add or remove instances in response to load.

If your app violates these factors (e.g., uses local disk for state, maintains sticky in‑memory sessions, or takes minutes to start), it fights the cloud‑native platform rather than benefiting from it.
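The Processes factor can be sketched as follows: the request handler keeps nothing in process memory, so any replica can serve any request and any instance can be killed safely. The Store interface and the in-memory test double are illustrative stand-ins for a shared backing service such as Redis.

```typescript
// Sketch: stateless Processes — persistent state lives in a backing store,
// never in the instance itself.

interface Store {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

// Test double only; in production this would be a shared backing service.
class InMemoryStore implements Store {
  private data = new Map<string, string>();
  get(key: string) { return this.data.get(key); }
  set(key: string, value: string) { this.data.set(key, value); }
}

// Because the handler holds no state of its own, the orchestrator can kill
// this instance and route the next request to a fresh replica without loss.
function handleRequest(store: Store, sessionId: string): number {
  const visits = Number(store.get(sessionId) ?? "0") + 1;
  store.set(sessionId, String(visits));
  return visits;
}
```

The sticky in-memory session anti-pattern is exactly what this avoids: two replicas sharing the store see the same session, so load balancers need no affinity rules.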

Reliability, operability, and automation

Cloud‑native systems rely heavily on automation and observability. Twelve‑factor anticipates this:

  • Dev/prod parity: Minimizing the gap between development, staging, and production environments reduces surprises and supports continuous delivery.
  • Logs: Treating logs as an event stream, written to stdout/stderr, fits perfectly with container logging and centralized log aggregation. The platform can capture, ship, and index logs without the application managing log files.
  • Admin processes: One‑off tasks (migrations, batch jobs) run as separate processes (or jobs), using the same codebase and configuration as long‑running services. This aligns with Kubernetes Jobs/CronJobs or serverless functions.
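The Logs factor, treating logs as an event stream, can be sketched as one structured event per line on stdout; the field names (ts, level, msg) follow a common convention rather than anything the methodology mandates.

```typescript
// Sketch: twelve-factor Logs — the app writes one JSON event per line to
// stdout and does no file management; the platform (container runtime,
// log shipper) captures, routes, and indexes the stream.

type Level = "debug" | "info" | "warn" | "error";

export function logEvent(
  level: Level,
  msg: string,
  fields: Record<string, unknown> = {}
): string {
  const line = JSON.stringify({
    ts: new Date().toISOString(),
    level,
    msg,
    ...fields,
  });
  // stdout is the only sink: no log files, rotation, or routing in the app.
  console.log(line);
  return line;
}
```

Because every instance emits the same machine-parseable stream, aggregated logs from many replicas remain queryable as a single system-wide view.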

Together, these make it far easier to build reliable CI/CD pipelines, perform safe rollouts/rollbacks, and operate the system with minimal manual intervention—hallmarks of cloud‑native operations.

How to use twelve‑factor as a cloud‑native checklist

You can treat twelve‑factor as a practical framework for assessing the cloud readiness of an application, regardless of language or stack.

For each factor, ask: “If I deployed this on a modern orchestrator, would this factor hold, or would it cause friction?” For example:

  • Config: Can I deploy the same container image to dev, QA, and prod, changing only environment settings? If not, there is a cloud‑native anti‑pattern.
  • Processes & Disposability: Can I safely kill any instance at any time without data loss and with quick recovery? If not, the app is not truly cloud‑native‑friendly.
  • Logs: If I run multiple instances, can I still understand system behavior from aggregated logs, or is there stateful, instance‑local logging?

You will usually discover that bringing a legacy application “into Kubernetes” without addressing these factors leads to brittle deployments: liveness probes fail under load, rollouts are risky, and scaling is unpredictable.

Conversely, if an app cleanly passes a twelve‑factor review, it tends to behave very well in a cloud‑native environment with minimal additional work.

How to position twelve‑factor today

Twelve‑factor is not the whole story in 2026, but it remains an excellent baseline:

  • It does not cover all modern concerns (e.g., multi‑tenant isolation, advanced security, service mesh, zero‑trust networking, event‑driven patterns).
  • It is, however, an excellent “minimum bar” for application behavior in a cloud‑native context.

I recommend treating it as:

  • A design standard for service teams: code reviews and design docs should reference the factors explicitly where relevant.
  • A readiness checklist before migrating a service to a Kubernetes cluster or similar platform.
  • A teaching tool for new engineers to understand why “just dockerizing the app” is not enough.

Scaffolding a Modern VS Code Extension with Yeoman

In this article we focus purely on scaffolding: generating the initial VS Code extension project using the Yeoman generator, with TypeScript and esbuild, ready for you to start coding.


Prerequisites

Before you scaffold the project, ensure you have:

  • Node.js 18+ installed (check with node -v).
  • Git installed (check with git --version).

These are required because the generator uses Node, and the template can optionally initialize a Git repository for you.


Generating the extension with Yeoman

VS Code’s official generator is distributed as a Yeoman generator. You don’t need to install anything globally; you can invoke it directly via npx:

# One-time scaffold (no global install needed)
npx --package yo --package generator-code -- yo code

This command:

  • Downloads yo (Yeoman) and generator-code on demand.
  • Runs the VS Code extension generator.
  • Prompts you with a series of questions about the extension you want to create.

Recommended answers to the generator prompts

When the interactive prompts appear, choose:

? What type of extension do you want to create? → New Extension (TypeScript)
? What's the name of your extension?            → my-ai-extension
? What's the identifier?                        → my-ai-extension
? Initialize a git repository?                  → Yes
? Which bundler to use?                         → esbuild
? Which package manager?                        → npm

Why these choices matter:

  • New Extension (TypeScript) – gives you a typed development experience and a standard project layout.
  • Name / Identifier – the identifier becomes the technical ID used in the marketplace and in settings; pick something stable and lowercase.
  • Initialize a git repository – sets up Git so you can immediately start version-controlling your work.
  • esbuild – a modern, fast bundler that creates a single bundled extension.js for VS Code.
  • npm – a widely used default package manager; you can adapt to pnpm/yarn later if needed.

After you answer the prompts, Yeoman will generate the project in a new folder named after your extension (e.g. my-ai-extension).


Understanding the generated structure

Open the new folder in VS Code. The generator gives you a standard layout, including:

  • src/extension.ts
    This is the entry point of your extension. It exports activate and (optionally) deactivate. All your activation logic, command registration, and other behaviour start here.
  • package.json
    This acts as the extension manifest. It contains:

    • Metadata (name, version, publisher).
    • "main" field pointing to the compiled bundle (e.g. ./dist/extension.js).
    • "activationEvents" describing when your extension loads.
    • "contributes" describing commands, configuration, views, etc., that your extension adds to VS Code.

From an architectural perspective, package.json is the single most important file: it tells VS Code what your extension is and how and when it integrates into the editor.
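For orientation, the command-related parts of a freshly generated manifest look roughly like this (trimmed and illustrative; the exact fields vary by generator version):

```json
{
  "name": "my-ai-extension",
  "main": "./dist/extension.js",
  "activationEvents": [],
  "contributes": {
    "commands": [
      {
        "command": "my-ai-extension.helloWorld",
        "title": "Hello World"
      }
    ]
  }
}
```

Note that recent versions of VS Code infer activation from entries under "contributes" such as commands, which is why "activationEvents" can remain empty for a simple command-based extension.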

You’ll also see other generated files such as:

  • tsconfig.json – TypeScript compiler configuration.
  • Build scripts in package.json – used to compile and bundle the extension with esbuild.
  • .vscode/launch.json – debug configuration for running the extension in a development host.

At this stage, you don’t need to modify any of these to get a working scaffold.


Running the scaffolded extension

Once the generator finishes:

  1. Install dependencies:

    cd my-ai-extension
    npm install
  2. Open the folder in VS Code (if you aren’t already).

  3. Press F5.

    VS Code will:

    • Run the build task defined by the generator.
    • Launch a new Extension Development Host window.
    • Load your extension into that window.

In the Extension Development Host:

  • Open the Command Palette.
  • Run the sample command that the generator added (typically named something like “Hello World”).

If the command runs and shows the sample notification, you have a fully working scaffolded extension. From here, you can start replacing the generated sample logic in src/extension.ts and adjusting package.json to declare your own contributions.
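For reference, the generated src/extension.ts is roughly of this shape (a sketch, assuming the identifier my-ai-extension; exact contents vary by generator version):

```typescript
// Entry point sketch — runs inside the VS Code extension host, not standalone.
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
  // The command ID must match the "contributes.commands" entry in package.json.
  const disposable = vscode.commands.registerCommand(
    "my-ai-extension.helloWorld",
    () => {
      vscode.window.showInformationMessage("Hello World from my-ai-extension!");
    }
  );
  // Registering with subscriptions lets VS Code dispose the command on unload.
  context.subscriptions.push(disposable);
}

export function deactivate() {}
```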