
Understanding and Using Shutdown Hooks in Java

When building Java applications, it’s often important to ensure resources are properly released when the program exits. Whether you’re managing open files, closing database connections, or saving logs, shutdown hooks give your program a final chance to perform cleanup operations before the Java Virtual Machine (JVM) terminates.

What Is a Shutdown Hook?

A shutdown hook is a special thread that the JVM executes when the program is shutting down. This mechanism is part of the Java standard library and is especially useful for performing graceful shutdowns in long-running or resource-heavy applications. It ensures key operations, like flushing buffers or closing sockets, complete before termination.

How to Register a Shutdown Hook

You can register a shutdown hook using the addShutdownHook() method of the Runtime class. Here’s the basic pattern:

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    // Cleanup code here
}));

When the JVM begins to shut down (via System.exit(), Ctrl + C, or a normal program exit), it will execute this thread before exiting completely.

Example: Adding a Cleanup Hook

The following example demonstrates a simple shutdown hook that prints a message when the JVM terminates:

public class ShutdownExample {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.out.println("Performing cleanup before exit...");
        }));

        System.out.println("Application running. Press Ctrl+C to exit.");
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

When you stop the program (using Ctrl + C, for example), the message “Performing cleanup before exit...” appears — proof that the shutdown hook executed successfully.

Removing Shutdown Hooks

If necessary, you can remove a registered hook using:

Runtime.getRuntime().removeShutdownHook(thread);

This method returns true if the hook was previously registered and was successfully de-registered. Keep in mind that hooks can only be removed before the shutdown sequence begins.
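
For example, a minimal sketch (class and variable names are illustrative) that keeps a reference to the registered hook thread so the same object can later be deregistered:

public class RemovableHook {
    public static void main(String[] args) {
        // Keep a reference so the same Thread object can be removed later
        Thread cleanupHook = new Thread(() -> System.out.println("Cleaning up..."));
        Runtime.getRuntime().addShutdownHook(cleanupHook);

        // ... application work ...

        // Deregister the hook; returns true because it is still registered
        boolean removed = Runtime.getRuntime().removeShutdownHook(cleanupHook);
        System.out.println("Hook removed: " + removed);
    }
}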

When Shutdown Hooks Are Triggered

Shutdown hooks run when:

  • The application terminates normally.
  • The user presses Ctrl + C.
  • The program calls System.exit().

However, hooks do not run if the JVM is abruptly terminated — for example, when executing Runtime.halt() or receiving a kill -9 signal.

Best Practices for Using Shutdown Hooks

  • Keep them lightweight: Avoid long or blocking operations that can delay shutdown.
  • Handle concurrency safely: Use synchronized blocks, volatile variables, or other concurrency tools as needed.
  • Avoid creating new threads: Hooks should finalize existing resources, not start new tasks.
  • Log carefully: Writing logs can be important, but ensure that log systems are not already shut down when the hook runs.

Final Thoughts

Shutdown hooks provide a reliable mechanism for graceful application termination in Java. When used correctly, they help ensure your program exits cleanly, freeing up resources and preventing data loss. However, hooks should be used judiciously — they’re not a substitute for proper application design, but rather a safety net for final cleanup.

Infrastructure as Code (IaC): A Practical Introduction

Infrastructure as Code (IaC) revolutionizes how teams manage servers, networks, databases, and cloud services by treating them like application code—versioned, reviewed, tested, and deployed via automation. Instead of manual console clicks or ad-hoc scripts, IaC uses declarative files to define desired infrastructure states, enabling tools to provision and maintain them consistently.

Defining IaC

IaC expresses infrastructure in machine-readable formats like YAML, JSON, or HCL (HashiCorp Configuration Language). Tools read these files to align reality with the specified state, handling creation, updates, or deletions automatically. Changes occur by editing code and reapplying it, eliminating manual tweaks that cause errors or "configuration drift."

Key Benefits

IaC drives efficiency and reliability across environments.

  • Consistency: Identical files create matching dev, test, and prod setups, minimizing "it works on my machine" problems.
  • Automation and Speed: Integrates into CI/CD pipelines for rapid provisioning and updates alongside app deployments.
  • Auditability: Version control provides history, reviews, testing, and rollbacks to catch issues early.

Declarative vs. Imperative Approaches

Declarative IaC dominates modern tools: specify what you want (e.g., "three EC2 instances with this security group"), and the tool handles how. Imperative styles outline step-by-step actions, resembling scripts but risking inconsistencies without careful management.

Mutable vs. Immutable Infrastructure

Mutable infrastructure modifies running resources, leading to drift over time. Immutable approaches replace them entirely (e.g., deploy a new VM image), simplifying troubleshooting and ensuring predictability.

Tool Categories

IaC tools split into provisioning (creating resources like compute and storage) and configuration management (software setup inside resources). Popular examples include Terraform for provisioning and Ansible for configuration.

Security and Governance

Scan IaC files for vulnerabilities like open ports before deployment. Code-based definitions enforce standards for compliance, tagging, and networking across teams.

Understanding Java Spliterator and Stream API

The Java Spliterator, introduced in Java 8, powers the Stream API by providing sophisticated traversal and partitioning capabilities. This enables both sequential and parallel stream processing with optimal performance across diverse data sources.

What Is a Spliterator?

A Spliterator (split + iterator) traverses elements while also supporting partitioning of its data source for concurrent processing. Unlike a traditional Iterator, it offers a trySplit() method that divides the data source into multiple Spliterators, making it well suited to parallel streams.

Spliterator's Role in Stream API

Stream API methods like collection.stream() and collection.parallelStream() internally call the collection's spliterator() method. The StreamSupport.stream(spliterator, parallel) factory creates the stream pipeline.
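
As a rough illustration of this flow, the snippet below obtains a collection's Spliterator directly and hands it to StreamSupport.stream(); the list contents are arbitrary:

import java.util.List;
import java.util.Spliterator;
import java.util.stream.StreamSupport;

public class SpliteratorToStream {
    public static void main(String[] args) {
        List<String> names = List.of("Ada", "Grace", "Linus");

        // The same spliterator() call that stream() uses internally
        Spliterator<String> spliterator = names.spliterator();

        // Build a sequential stream on top of it (pass true for a parallel stream)
        StreamSupport.stream(spliterator, false)
                .map(String::toUpperCase)
                .forEach(System.out::println);
    }
}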

Enabling Parallel Processing

The Fork/Join framework uses trySplit() to recursively partition data across threads. Each split creates smaller Spliterators processed independently, then results merge efficiently.

Core Spliterator Methods

Method                       Purpose
tryAdvance(Consumer)         Process next element
forEachRemaining(Consumer)   Process all remaining elements
trySplit()                   Partition data source
estimateSize()               Estimate remaining elements
characteristics()            Data source properties

Spliterator Characteristics

Characteristics describe data source properties, optimizing stream execution:

Characteristic   Description
ORDERED          Defined encounter order
DISTINCT         No duplicate elements
SORTED           Elements follow comparator
SIZED            Exact element count known
NONNULL          No null elements
IMMUTABLE        Source cannot change
CONCURRENT       Thread-safe modification
SUBSIZED         Split parts have known sizes

These flags enable Stream API optimizations like skipping redundant operations based on source properties.
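
For instance, you can inspect these flags on an ordinary collection's Spliterator; the comments below assume an immutable List.of source:

import java.util.List;
import java.util.Spliterator;

public class CharacteristicsDemo {
    public static void main(String[] args) {
        Spliterator<String> sp = List.of("a", "b", "c").spliterator();

        System.out.println(sp.hasCharacteristics(Spliterator.SIZED));    // true: exact size is known
        System.out.println(sp.hasCharacteristics(Spliterator.ORDERED));  // true: list encounter order
        System.out.println(sp.estimateSize());                           // 3
    }
}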

Custom Spliterator Example: Square Generator

Here's a production-ready custom Spliterator that generates squares of numbers in a range, with full parallel execution support:

import java.util.Spliterator;
import java.util.function.Consumer;
import java.util.stream.StreamSupport;

/**
 * A Spliterator that generates squares of numbers in a range.
 * This implementation properly supports parallel execution because
 * each element can be computed independently without shared mutable state.
 */
public class SquareSpliterator implements Spliterator<Integer> {
    private int start;
    private final int end;

    public SquareSpliterator(int start, int end) {
        this.start = start;
        this.end = end;
    }

    @Override
    public boolean tryAdvance(Consumer<? super Integer> action) {
        if (start >= end) {
            return false;
        }
        int value = start * start;
        action.accept(value);
        start++;
        return true;
    }

    @Override
    public Spliterator<Integer> trySplit() {
        int remaining = end - start;

        // Only split if we have at least 2 elements
        if (remaining < 2) {
            return null;
        }

        // Split the range in half
        int mid = start + remaining / 2;
        int oldStart = start;
        start = mid;

        // Return a new spliterator for the first half
        return new SquareSpliterator(oldStart, mid);
    }

    @Override
    public long estimateSize() {
        return end - start;
    }

    @Override
    public int characteristics() {
        return IMMUTABLE | SIZED | SUBSIZED | NONNULL | ORDERED;
    }

    public static void main(String[] args) {
        System.out.println("=== Sequential Execution ===");
        var sequentialStream = StreamSupport.stream(new SquareSpliterator(1, 11), false);
        sequentialStream.forEach(n -> System.out.println(
            Thread.currentThread().getName() + ": " + n
        ));

        System.out.println("\n=== Parallel Execution ===");
        var parallelStream = StreamSupport.stream(new SquareSpliterator(1, 11), true);
        parallelStream.forEach(n -> System.out.println(
            Thread.currentThread().getName() + ": " + n
        ));

        System.out.println("\n=== Computing Sum in Parallel ===");
        long sum = StreamSupport.stream(new SquareSpliterator(1, 101), true)
                .mapToLong(Integer::longValue)
                .sum();
        System.out.println("Sum of squares from 1² to 100²: " + sum);

        System.out.println("\n=== Finding Max in Parallel ===");
        int max = StreamSupport.stream(new SquareSpliterator(1, 51), true)
                .max(Integer::compareTo)
                .orElse(0);
        System.out.println("Max square (1-50): " + max);

        System.out.println("\n=== Filtering Even Squares in Parallel ===");
        long countEvenSquares = StreamSupport.stream(new SquareSpliterator(1, 21), true)
                .filter(n -> n % 2 == 0)
                .count();
        System.out.println("Count of even squares (1-20): " + countEvenSquares);
    }
}

Key Features Demonstrated:

  • Perfect parallel splitting via balanced trySplit()
  • Thread-independent computation (no shared mutable state)
  • Rich characteristics enabling Stream API optimizations
  • Real-world stream operations: sum, max, filter, count

Sample Output shows different threads processing different ranges, proving effective parallelization.

Why Spliterators Matter

Spliterators provide complete control over stream data sources. They enable:

  • Custom data generation (ranges, algorithms, files, networks)
  • Optimal parallel processing with balanced workload distribution
  • Metadata-driven performance tuning through characteristics

This architecture makes Java Stream API uniquely scalable, from simple collections to complex distributed data processing pipelines.

Java Streams: mapMulti vs flatMap

Java Streams offer powerful ways to transform data, especially for one-to-many mappings. mapMulti, introduced in Java 16, provides an imperative alternative to the classic flatMap, optimizing performance by skipping intermediate Stream creation.

Core Distinctions

mapMulti uses a BiConsumer that receives the input element and a downstream Consumer, enabling direct emission of multiple (or zero) values without generating Streams per element. This reduces overhead, making it ideal for conditional expansions or small output sets. flatMap, in contrast, applies a Function returning a Stream for each input, then flattens them; it's elegant for functional styles but creates an intermediate Stream per element, including empty ones when elements are filtered out.

The imperative nature of mapMulti allows seamless integration of filtering and mapping logic in one step, streamlining pipelines.

Practical Examples

Consider processing languages, expanding those containing 'o' to both original and uppercase forms.

With mapMulti (efficient, direct emission):

List<String> result = Stream.of("Java", "Groovy", "Clojure")
        .<String>mapMulti((lang, downstream) -> {
            if (lang.contains("o")) {
                downstream.accept(lang);
                downstream.accept(lang.toUpperCase());
            }
        })
        .toList();
IO.println(result);
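// Output: ["Groovy", "GROOVY", "Clojure", "CLOJURE"]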

With flatMap (functional, but with Stream overhead):

List<String> result = Stream.of("Java", "Groovy", "Clojure")
        .filter(lang -> lang.contains("o"))
        .flatMap(lang -> Stream.of(lang, lang.toUpperCase()))
        .toList();
IO.println(result);
// Output: ["Groovy", "GROOVY", "Clojure", "CLOJURE"]

Choosing the Right Tool

Select mapMulti for high-performance scenarios like microservices processing, where avoiding Stream instantiation boosts throughput, or for complex imperative conditions. Stick with flatMap for declarative codebases or transformations naturally producing Streams, such as string splitting.

Understanding package-info.java in Java

In Java, package-info.java is a special source file used to document and annotate an entire package rather than individual classes. It does not define any classes or interfaces; instead, it holds Javadoc comments and package-level annotations tied to the package declaration.

Why Package-Level Documentation Matters

As projects grow, the number of classes and interfaces increases, and understanding their relationships becomes harder. Class-level Javadoc explains individual types but often fails to describe the “big picture” of how they fit together, which is where package-level documentation becomes valuable.

By centralizing high-level information in package-info.java, teams can describe the purpose of a package, its design rules, and how its types should be used without scattering that information across many files.

The Structure of package-info.java

A typical package-info.java file contains three elements in this order:

  1. A Javadoc comment block that describes the package.
  2. Optional annotations that apply to the package as a whole.
  3. The package declaration matching the directory structure.

This structure makes the file easy to scan: documentation at the top, then any global annotations, and finally the declaration that links it to the actual package.

A Comprehensive Example

Imagine an application with a com.example.billing package that handles invoicing, payments, and tax calculations. A rich package-info.java for that package could look like this:

/**
 * Provides the core billing and invoicing functionality for the application.
 *
 * <p>This package defines:
 * <ul>
 *   <li>Immutable value types representing invoices, line items, and monetary amounts.</li>
 *   <li>Services that calculate totals, apply discounts, and handle tax rules.</li>
 *   <li>Integration points for payment providers and accounting systems.</li>
 * </ul>
 *
 * <h2>Design Guidelines</h2>
 * <ul>
 *   <li>All monetary calculations use a fixed-precision type and a shared rounding strategy.</li>
 *   <li>Public APIs avoid exposing persistence details; repositories live in a separate package.</li>
 *   <li>Domain objects are designed to be side‑effect free; state changes go through services.</li>
 * </ul>
 *
 * <h2>Thread Safety</h2>
 * <p>Value types are intended to be thread‑safe. Service implementations are stateless or guarded
 * by application-level configuration. Callers should not share mutable collections across threads.
 *
 * <h2>Usage</h2>
 * <p>Client code typically starts with the {@code InvoiceService} to create and finalize
 * invoices, then delegates payment processing to implementations of {@code PaymentGateway}.
 */
@javax.annotation.ParametersAreNonnullByDefault
package com.example.billing;

Note on the Annotation

The annotation @javax.annotation.ParametersAreNonnullByDefault used here comes from JSR-305, a proposal for standard Java annotations for software defect detection and nullability contracts. This particular annotation indicates that, by default, all method parameters in this package are considered non-null unless explicitly annotated otherwise.

Using JSR-305 annotations like this in package-info.java helps enforce global contract assumptions and allows static analysis tools (such as FindBugs or modern IDEs) to detect possible null-related errors more effectively.

Using Package-Level Annotations Effectively

Beyond documentation, package-info.java is also the natural place to declare package-wide rules via annotations. Typical examples include nullness defaults from JSR-305, deprecation of an entire package, or framework-specific configuration.

By keeping only meaningful annotations, you avoid clutter while benefiting from centralized configuration.
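
As a hypothetical illustration (the package name is made up), deprecating an entire legacy package takes only a short package-info.java:

/**
 * Legacy billing APIs kept only for backward compatibility.
 *
 * @deprecated Use the {@code com.example.billing} package instead.
 */
@Deprecated
package com.example.legacy.billing;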

When and How to Introduce package-info.java

The workflow for introducing package-info.java stays the same:

  1. Create package-info.java inside the target package directory.
  2. Write a clear Javadoc block that answers “what lives here” and “how it should be used.”
  3. Add only those package-level annotations that genuinely express a package-wide rule.
  4. Keep the file up to date whenever the package’s design or guarantees change.

With this approach, your package-info.java file becomes a concise, accurate source of truth about each package in your codebase, while clearly documenting the use of important annotations like those defined by JSR-305.

Java Method and Constructor References: Concepts and Practical Examples

Java method references, introduced in Java 8, offer a succinct and expressive way to refer to existing methods or constructors using the :: operator. They serve as a powerful alternative to verbose lambda expressions, helping developers write clearer and more maintainable code in functional programming contexts. This article covers the four types of method references.

Types of Java Method References

Java supports four main types of method references, grouped by the kind of method they refer to:

  1. Reference to a Constructor
    References a constructor to create new objects or arrays.
    Syntax: ClassName::new
    Examples:

    List<Person> people = names.stream().map(Person::new).toList();

    This creates a new Person instance for each element in the stream via the constructor reference.

    Additionally, constructor references can be used for arrays:

    import java.util.function.IntFunction;
    
    IntFunction<String[]> arrayCreator = String[]::new;
    String[] myArray = arrayCreator.apply(5);
    System.out.println("Array length: " + myArray.length);  // Prints 5

    This is especially useful in streams to collect into arrays:

    String[] namesArray = names.stream().toArray(String[]::new);
  2. Reference to a Static Method
    This refers to a static method in a class.
    Syntax: ClassName::staticMethodName
    Example:

    Integer[] numbers = {5, 2, 8};
    Arrays.sort(numbers, Integer::compare);
  3. Reference to an Instance Method of a Particular Object (Bound Method Reference)
    This is a bound method reference, tied to a specific, existing object instance. The instance is fixed when the reference is created.
    Syntax: instance::instanceMethodName
    Example:

    List<String> names = List.of("Alice", "Bob");
    names.forEach(System.out::println);  // System.out is a fixed object

    Here, System.out::println is bound to the particular System.out object.

  4. Reference to an Instance Method of an Arbitrary Object of a Particular Type (Unbound Method Reference)
    This is an unbound method reference where the instance is supplied dynamically when the method is called.
    Syntax: ClassName::instanceMethodName
    Important Rule:
    The first parameter of the functional interface method corresponds to the instance on which the referenced instance method will be invoked. That is, the instance to call the method on is passed as the first argument, and any remaining parameters map directly to the method parameters.
    Example:

    List<String> team = Arrays.asList("Dan", "Josh", "Cora");
    team.sort(String::compareToIgnoreCase);

    In this example, when the comparator functional interface’s compare method is called with two arguments (a, b), it is equivalent to calling a.compareToIgnoreCase(b) on the first parameter instance.
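
    Written as a lambda, the same comparator reads like this (a minimal sketch reusing the team list above):

    // Equivalent lambda: the first argument becomes the receiver of the instance method
    team.sort((a, b) -> a.compareToIgnoreCase(b));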

Summary

  • Java method references simplify code by allowing concise references to methods and constructors.
  • The first type—constructor references—expresses object and array instantiation clearly.
  • The second type references static methods.
  • The third type—instance method reference of a particular object—is a bound method reference, fixed on a single object instance.
  • The fourth type—instance method reference of an arbitrary object of a particular type—is an unbound method reference, where the instance is provided at call time.
  • Constructor references are especially handy for arrays like String[].
  • System.out::println is a classic example of a bound method reference.

Locks and Semaphores in Java: A Guide to Concurrency Control

Locks and semaphores are foundational synchronization mechanisms in Java, designed to control access to shared resources in concurrent programming. Proper use of these constructs ensures thread safety, prevents data corruption, and manages resource contention efficiently.

What is a Lock in Java?

A lock provides exclusive access to a shared resource by allowing only one thread at a time to execute a critical section of code. The simplest form in Java is the intrinsic lock obtained by the synchronized keyword, which guards methods or blocks. For more flexibility, Java’s java.util.concurrent.locks package offers classes like ReentrantLock that provide advanced features such as interruptible lock acquisition, timed waits, and fairness policies.

Using locks ensures that when multiple threads try to modify shared data, one thread gains exclusive control while others wait, thus preventing race conditions.

Example of a Lock (ReentrantLock):

import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private int count = 0;
    private final ReentrantLock lock = new ReentrantLock();

    public void increment() {
        lock.lock();  // acquire lock
        try {
            count++;  // critical section
        } finally {
            lock.unlock();  // release lock
        }
    }

    public int getCount() {
        return count;
    }
}

What is a Semaphore in Java?

A semaphore controls access based on a set number of permits, allowing a fixed number of threads to access a resource concurrently. Threads must acquire a permit before entering the critical section and release it afterward. If no permits are available, threads block until a permit becomes free. This model suits scenarios like connection pools or task throttling, where parallel access is limited rather than exclusive.

Example of a Semaphore:

import java.util.concurrent.Semaphore;

public class WorkerPool {
    private final Semaphore semaphore;

    public WorkerPool(int maxConcurrent) {
        this.semaphore = new Semaphore(maxConcurrent);
    }

    public void performTask() throws InterruptedException {
        semaphore.acquire();  // acquire permit
        try {
            // critical section
        } finally {
            semaphore.release();  // release permit
        }
    }
}
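
A brief usage sketch reusing the WorkerPool class above (thread count and pool size are arbitrary); at most three threads can be inside performTask() at the same time:

public class WorkerPoolDemo {
    public static void main(String[] args) {
        WorkerPool pool = new WorkerPool(3);  // at most three concurrent permits

        for (int i = 0; i < 10; i++) {
            int taskId = i;
            new Thread(() -> {
                try {
                    pool.performTask();
                    System.out.println("Task " + taskId + " done on " + Thread.currentThread().getName());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}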

Comparing Locks and Semaphores

Aspect         Lock                                      Semaphore
Concurrency    Single thread access (exclusive)          Multiple threads up to a limit (concurrent)
Use case       Mutual exclusion in critical sections     Limit concurrent resource usage
API examples   synchronized, ReentrantLock               Semaphore
Complexity     Simpler, single ownership                 More flexible, requires permit management

Best Practices for Using Locks and Semaphores

  • Always release locks or semaphore permits in a finally block to avoid deadlocks.
  • Use locks for strict mutual exclusion when only one thread should execute at a time.
  • Use semaphores when allowing multiple threads limited concurrent access.
  • Keep the critical section as short as possible to reduce contention.
  • Avoid acquiring multiple locks or permits in inconsistent order to prevent deadlocks.

Mastering locks and semaphores is key to writing thread-safe Java applications that perform optimally in concurrent environments. By choosing the right synchronization mechanism, developers can effectively balance safety and parallelism to build scalable, reliable systems.

Java 25 Compact Source Files: A New Simplicity with Instance Main Methods

Java continues evolving to meet the needs of developers—from beginners learning the language to pros writing quick scripts. Java 25 introduces Compact Source Files, a feature that lets you write Java programs without explicit class declarations, coupled with Instance Main Methods which allow entry points that are instance-based rather than static. This combination significantly reduces boilerplate, simplifies small programs, and makes Java more approachable while preserving its power and safety.

What Are Compact Source Files?

Traditionally, a Java program requires a class and a public static void main(String[] args) method. However, this requirement adds ceremony that can be cumbersome for tiny programs or for learners.

Compact source files lift this restriction by implicitly defining a final top-level class behind the scenes to hold fields and methods declared outside any class. This class:

  • Is unnamed and exists in the unnamed package.
  • Has a default constructor.
  • Extends java.lang.Object and implements no interfaces.
  • Must contain a launchable main method, which can be an instance method (not necessarily static).
  • Cannot be referenced by name in the source.

This means a full Java program can be as simple as:

void main() {
    IO.println("Hello, World!");
}

The new java.lang.IO class introduced in Java 25 provides simple convenient methods for console output like IO.println().

Implicit Imports for Compact Source Files

To keep programs concise, compact source files automatically import all public top-level classes and interfaces exported by the java.base module. This includes key packages such as java.util, java.io, java.math, and java.lang. So classes like List or BigDecimal are immediately available without import declarations.

Classes from modules other than java.base still require explicit import declarations.
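
A small compact-file sketch of the effect: List and BigDecimal both live in java.base, so they resolve without any import declarations (the values are arbitrary):

void main() {
    // java.util.List and java.math.BigDecimal are implicitly available from java.base
    var prices = List.of(new BigDecimal("19.99"), new BigDecimal("5.25"));
    IO.println("Total: " + prices.stream().reduce(BigDecimal.ZERO, BigDecimal::add));
}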

Limitations and Constraints

Compact source files have some structural constraints:

  • The implicit class cannot be named or explicitly instantiated.
  • No package declarations are allowed; the class is always in the unnamed package.
  • Static members cannot be referenced via method references.
  • The IO class’s static methods require qualification.
  • Complex multi-class or modular programs should evolve into regular class-based files with explicit imports and package declarations.

This feature targets small programs, scripts, and educational use while preserving Java’s rigorous type safety and tooling compatibility.

Using Command-Line Arguments in a Compact Source File

Compact source files support standard command-line arguments passed to the main method as a String[] parameter, just like traditional Java programs.

Here is an example that prints provided command-line arguments:

void main(String[] args) {
    if (args.length == 0) {
        IO.println("No arguments provided.");
        return;
    }
    IO.println("Arguments:");
    for (var arg : args) {
        IO.println(" - " + arg);
    }
}

Save this as PrintArgs.java, then run it with:

java PrintArgs.java apple banana cherry

Output:

Arguments:
 - apple
 - banana
 - cherry

This shows how you can easily handle inputs in a script-like manner without boilerplate class syntax.

Growing Your Program

If your program outgrows simplicity, converting from a compact source file to a named class is straightforward. Wrap methods and fields in a class declaration and add imports explicitly. For instance:

import module java.base;

class PrintArgs {
    void main(String[] args) {
        if (args.length == 0) {
            IO.println("No arguments provided.");
            return;
        }
        IO.println("Arguments:");
        for (var arg : args) {
            IO.println(" - " + arg);
        }
    }
}

The logic inside main remains unchanged, enabling an easy migration path.

Conclusion

Java 25’s compact source files paired with instance main methods introduce a fresh, lightweight way to write Java programs. By reducing ceremony and automatically importing core APIs, they enable rapid scripting, teaching, and prototyping, while maintaining seamless interoperability with the full Java platform. Handling command-line arguments naturally fits into this new model, encouraging exploration and productivity in a familiar yet simplified environment.

This innovation invites developers to write less, do more, and enjoy Java’s expressive power with less friction.

Scoped Values in Java 25: Context Propagation for Modern Java

With the release of Java 25, scoped values come out of preview and enter the mainstream as one of the most impactful improvements for concurrency and context propagation in Java’s recent history. Designed to address the perennial issue of safely sharing context across method chains and threads, scoped values deliver a clean, immutable, and automatically bounded solution that dethrones the error-prone ThreadLocal for most scenarios.

Rethinking Context: Why Scoped Values?

For years, Java developers have used ThreadLocal to pass data down a call stack—such as security credentials, logging metadata, or request-specific state. While functional, ThreadLocal suffers from lifecycle ambiguity, memory leaks, and incompatibility with lightweight virtual threads. Scoped values solve these problems by making context propagation explicit in syntax, immutable in nature, and automatic in cleanup.

The Mental Model

Imagine code execution as moving through a series of rooms, each with its unique lighting. A scoped value is like setting the lighting for a room—within its boundaries, everyone sees the same illumination (data value), but outside those walls, the setting is gone. Each scope block clearly defines where the data is available and safe to access.

How Scoped Values Work

Scoped values require two ingredients:

  • Declaration: Define a ScopedValue as a static final field, usually parameterized by the intended type.
  • Binding: Use ScopedValue.where() to create an execution scope where the value is accessible.

Inside any method called within the binding scope—even dozens of frames deep—the value can be retrieved by .get(), without explicit parameter passing.

Example: Propagating User Context Across Methods

// Declare the scoped value
private static final ScopedValue<String> USERNAME = ScopedValue.newInstance();

public static void main(String[] args) {
    ScopedValue.where(USERNAME, "alice").run(() -> entryPoint());
}

static void entryPoint() {
    printCurrentUser();
}

static void printCurrentUser() {
    System.out.println("Current user: " + USERNAME.get()); // Outputs "alice"
}

In this sample, USERNAME is accessible in any method within the binding scope, regardless of how far it’s called from the entry point.

Nested Binding and Rebinding

Scoped values provide nested rebinding: within a scope, a method can establish a new nested binding for the same value, which is then available only to its callees. This ensures truly bounded context lifetimes and avoids unintended leakage or overwrites.

// Declare the scoped value
private static final ScopedValue<String> MESSAGE = ScopedValue.newInstance();

void foo() {
    ScopedValue.where(MESSAGE, "hello").run(() -> bar());
}

void bar() {
    System.out.println(MESSAGE.get()); // prints "hello"
    ScopedValue.where(MESSAGE, "goodbye").run(() -> baz());
    System.out.println(MESSAGE.get()); // prints "hello"
}

void baz() {
    System.out.println(MESSAGE.get()); // prints "goodbye"
}

Here, the value "goodbye" is visible only inside the nested scope (and therefore in baz), while bar continues to see "hello" outside that sub-scope.

Thread Safety and Structured Concurrency

Perhaps the biggest leap: scoped values are designed for modern concurrency, including Project Loom’s virtual threads and Java's structured concurrency. Scoped values are automatically inherited by child threads launched within the scope, eliminating complex thread plumbing and ensuring correct context propagation.

// Declare the scoped value
private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

public static void main(String[] args) throws InterruptedException {
    String requestId = "req-789";
    usingVirtualThreads(requestId);
    usingStructuredConcurrency(requestId);
}

private static void usingVirtualThreads(String requestId) {
    try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
        // Launch multiple concurrent virtual threads, each with its own scoped value binding
        Future<?> taskA = executor.submit(() ->
                ScopedValue.where(REQUEST_ID, requestId).run(() -> processTask("Task VT A"))
        );
        Future<?> taskB = executor.submit(() ->
                ScopedValue.where(REQUEST_ID, requestId).run(() -> processTask("Task VT B"))
        );
        Future<?> taskC = executor.submit(() ->
                ScopedValue.where(REQUEST_ID, requestId).run(() -> processTask("Task VT C"))
        );

        // Wait for all tasks to complete
        try {
            taskA.get();
            taskB.get();
            taskC.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

private static void usingStructuredConcurrency(String requestId) {
    ScopedValue.where(REQUEST_ID, requestId).run(() -> {
        try (var scope = StructuredTaskScope.open()) {
            // Launch multiple concurrent virtual threads
            scope.fork(() -> {
                processTask("Task SC A");
            });
            scope.fork(() -> {
                processTask("Task SC B");
            });
            scope.fork(() -> {
                processTask("Task SC C");
            });

            scope.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    });
}

private static void processTask(String taskName) {
    // Scoped value REQUEST_ID is automatically visible here
    System.out.println(taskName + " processing request: " + REQUEST_ID.get());
}

No need for explicit context passing—child threads see the intended value automatically.

Key Features and Advantages

  • Immutability: Values cannot be mutated within scope, preventing accidental overwrite and race conditions.
  • Automatic Cleanup: Context disappears at the end of the scope, eliminating leaks.
  • No Boilerplate: No more manual parameter threading across dozens of method signatures.
  • Designed for Virtual Threads: Plays perfectly with Java’s latest concurrency primitives.

Use Cases

  • Securely propagate authenticated user or tracing info in web servers.
  • Pass tenant, locale, metrics, or logger context across libraries.
  • Enable robust structured concurrency with context auto-inheritance.

Asynchronous Programming in Java with CompletableFuture and Virtual Threads

Java's CompletableFuture provides a powerful and flexible framework for asynchronous programming. Introduced in Java 8, it allows writing non-blocking, event-driven applications with simple and readable code. With Java 21 and Project Loom, virtual threads can be combined with CompletableFutures to achieve highly scalable concurrency with minimal overhead. This article explores the core usage patterns of CompletableFuture and how to leverage virtual threads effectively.

Basics of CompletableFuture Usage

At its simplest, a CompletableFuture represents a future result of an asynchronous computation. You can create one that runs asynchronously using the supplyAsync() or runAsync() methods.

  • supplyAsync() runs a background task that returns a result.
  • runAsync() runs a background task with no returned result.

Example:

CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> "Hello World");
System.out.println(future.get());  // Blocks until the result is ready

In this example, the supplier runs asynchronously, and the main thread waits for its result using get(). The computation executes in the common ForkJoinPool by default.

Chaining and Composing Tasks

CompletableFuture excels at composing asynchronous tasks without nested callbacks:

  • thenApply() transforms the result of a completed future.
  • thenAccept() consumes the result without returning anything.
  • thenRun() runs a task once the future is complete.
  • thenCombine() combines results of two independent futures.
  • thenCompose() chains dependent futures for sequential asynchronous steps.

Example of chaining:

CompletableFuture.supplyAsync(() -> 10)
    .thenApply(result -> result + 20)
    .thenAccept(result -> System.out.println("Result: " + result));

Example of combining:

CompletableFuture<Integer> f1 = CompletableFuture.supplyAsync(() -> 10);
CompletableFuture<Integer> f2 = CompletableFuture.supplyAsync(() -> 20);
CompletableFuture<Integer> combined = f1.thenCombine(f2, Integer::sum);
System.out.println(combined.get());  // 30

These patterns allow building complex, non-blocking workflows with clean and expressive code.
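
A short sketch of thenCompose() for dependent steps, where the second asynchronous lookup needs the result of the first (the values here are placeholders):

CompletableFuture<String> userId = CompletableFuture.supplyAsync(() -> "user-42");

// thenCompose flattens the nested future returned by the dependent step
CompletableFuture<String> profile = userId.thenCompose(id ->
        CompletableFuture.supplyAsync(() -> "Profile of " + id));

System.out.println(profile.join());  // Profile of user-42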

Exception Handling

CompletableFuture allows robust error handling without complicating the flow:

  • Use exceptionally() to recover with a default value on error.
  • Use handle() to process outcome result or exception.
  • Use whenComplete() to perform an action regardless of success or failure.

Example:

CompletableFuture.supplyAsync(() -> {
    if (true) throw new RuntimeException("Failure");
    return "Success";
}).exceptionally(ex -> "Recovered from " + ex.getMessage())
  .thenAccept(System.out::println);  // Outputs: Recovered from java.lang.RuntimeException: Failure

Waiting for Multiple Futures

The allOf() method is used to wait for multiple CompletableFutures to finish:

List<CompletableFuture<String>> futures = List.of(
    CompletableFuture.completedFuture("A"),
    CompletableFuture.completedFuture("B"),
    CompletableFuture.completedFuture("C"));

CompletableFuture<Void> allDone = CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
allDone.join();  // Wait until all futures complete

This enables executing parallel asynchronous operations efficiently.

Using CompletableFuture with Virtual Threads

Java 21 introduces virtual threads, lightweight threads that allow massive concurrency with minimal resource consumption. To use CompletableFutures on virtual threads, create an executor backed by virtual threads and pass it to async methods.

Example:

try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
        System.out.println("Running in virtual thread: " + Thread.currentThread());
        try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        System.out.println("Task completed");
    }, executor);

    future.join();
}

  • The executor is created with Executors.newVirtualThreadPerTaskExecutor().
  • Async tasks run on virtual threads, offering high scalability.
  • The executor must be closed to release resources and stop accepting tasks; using try-with-resources is recommended.

All operations such as thenApplyAsync() or thenCombineAsync() can similarly take the virtual thread executor to keep subsequent stages on virtual threads.
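
For instance, a minimal sketch (mirroring the executor setup above) that keeps every stage on virtual threads:

try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    CompletableFuture.supplyAsync(() -> 21, executor)
            .thenApplyAsync(n -> n * 2, executor)
            .thenAcceptAsync(n -> System.out.println(
                    "Answer: " + n + " on " + Thread.currentThread()), executor)
            .join();
}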

Summary

  • CompletableFuture allows flexible, readable asynchronous programming.
  • Tasks can be created, chained, combined, and composed easily.
  • Robust exception handling is built-in.
  • allOf() allows waiting on multiple futures.
  • With virtual threads, CompletableFuture scales brilliantly by offloading async tasks to lightweight threads.
  • Always close virtual thread executors to properly release resources.

Using CompletableFuture with virtual threads simplifies asynchronous programming and enables writing performant scalable Java applications with clean and maintainable code.
