Extremely Serious

Month: January 2026 (Page 2 of 3)

MapStruct Mapper Composition: Pure Java

Mapper composition in MapStruct creates modular mappings by delegating between mappers using @Mapper(uses = {...}). The examples below are fully self-contained, compilable classes with complete domain models, mappers, and main methods.

Complete Domain Classes

import java.time.LocalDate;

// Source domain classes
public class Address {
    private String street;
    private String city;

    public Address(String street, String city) {
        this.street = street;
        this.city = city;
    }

    // Getters and setters
    public String getStreet() { return street; }
    public void setStreet(String street) { this.street = street; }
    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }
}

public class User {
    private String name;
    private Address address;

    public User(String name, Address address) {
        this.name = name;
        this.address = address;
    }

    // Getters and setters
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }
}

// Extended User for annotation example
public class UserEnriched {
    private String firstName;
    private String lastName;
    private int age;
    private Address address;

    public UserEnriched(String firstName, String lastName, int age, Address address) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.age = age;
        this.address = address;
    }

    // Getters
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
    public Address getAddress() { return address; }
}

// Target DTO classes
public class AddressDto {
    private String street;
    private String city;

    public AddressDto() {}

    // Getters and setters
    public String getStreet() { return street; }
    public void setStreet(String street) { this.street = street; }
    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }

    @Override
    public String toString() {
        return street + ", " + city;
    }
}

public class UserDto {
    private String name;
    private AddressDto address;

    public UserDto() {}

    // Getters and setters
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public AddressDto getAddress() { return address; }
    public void setAddress(AddressDto address) { this.address = address; }

    @Override
    public String toString() {
        return name + " <" + address + ">";
    }
}

public class UserEnrichedDto {
    private String fullName;
    private String ageGroup;
    private AddressDto address;

    public UserEnrichedDto() {}

    // Getters and setters
    public String getFullName() { return fullName; }
    public void setFullName(String fullName) { this.fullName = fullName; }
    public String getAgeGroup() { return ageGroup; }
    public void setAgeGroup(String ageGroup) { this.ageGroup = ageGroup; }
    public AddressDto getAddress() { return address; }
    public void setAddress(AddressDto address) { this.address = address; }

    @Override
    public String toString() {
        return fullName + " (" + ageGroup + ") <" + address + ">";
    }
}

Basic Composition Example

import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

@Mapper
public interface AddressMapper {
    AddressMapper INSTANCE = Mappers.getMapper(AddressMapper.class);

    AddressDto toDto(Address address);
}

@Mapper(uses = AddressMapper.class)
public interface UserMapper {
    UserMapper INSTANCE = Mappers.getMapper(UserMapper.class);

    UserDto toDto(User user);
}

// Runnable test
public class BasicComposition {
    public static void main(String[] args) {
        User user = new User("John Doe",
            new Address("123 Main St", "Auckland"));

        UserDto dto = UserMapper.INSTANCE.toDto(user);

        System.out.println(dto);
        // Output: John Doe <123 Main St, Auckland>
    }
}

Multiple Mapper and Custom Conversion Composition

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.factory.Mappers;
import java.time.format.DateTimeFormatter;
import java.time.LocalDate;

public class DateUtils {
    public static String toString(LocalDate date) {
        if (date == null) {
            return null;
        }
        return date.format(DateTimeFormatter.ISO_LOCAL_DATE);
    }
}

public class Event {
    private String title;
    private LocalDate date;
    private Address address;

    public Event(String title, LocalDate date, Address address) {
        this.title = title;
        this.date = date;
        this.address = address;
    }

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public LocalDate getDate() { return date; }
    public void setDate(LocalDate date) { this.date = date; }
    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }
}

public class EventDto {
    private String title;
    private String dateString;
    private AddressDto address;

    public EventDto() {}

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getDateString() { return dateString; }
    public void setDateString(String dateString) { this.dateString = dateString; }
    public AddressDto getAddress() { return address; }
    public void setAddress(AddressDto address) { this.address = address; }

    @Override
    public String toString() {
        return title + " on " + dateString + " at " + address;
    }
}

@Mapper(uses = {AddressMapper.class, DateUtils.class})
public interface EventMapper {
    EventMapper INSTANCE = Mappers.getMapper(EventMapper.class);

    @Mapping(source = "date", target = "dateString")
    EventDto toDto(Event event);
}

// Runnable test
public class CustomComposition {
    public static void main(String[] args) {
        Event event = new Event("Tech Conference",
            LocalDate.of(2026, 3, 15),
            new Address("Convention Centre", "Auckland"));

        EventDto dto = EventMapper.INSTANCE.toDto(event);

        System.out.println(dto);
        // Output: Tech Conference on 2026-03-15 at Convention Centre, Auckland
    }
}

Advanced: Reusable Annotation Composition

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.Named;
import org.mapstruct.Qualifier;
import org.mapstruct.factory.Mappers;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Custom utility for age grouping
public class AgeUtils {
    @Named("ageToGroup")
    public static String ageToGroup(int age) {
        if (age < 18) return "Minor";
        if (age < 65) return "Adult";
        return "Senior";
    }
}

// 1. Custom composed annotation (bundles multiple @Mapping rules)
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.CLASS)
@Mapping(target = "fullName", expression = "java(user.getFirstName() + \" \" + user.getLastName())")
@Mapping(target = "ageGroup", source = "age", qualifiedByName = "ageToGroup")
public @interface UserEnrichment {}

// 2. Mapper using the composed annotation
@Mapper(uses = {AddressMapper.class, AgeUtils.class})
public interface UserEnrichedMapper {
    UserEnrichedMapper INSTANCE = Mappers.getMapper(UserEnrichedMapper.class);

    @UserEnrichment  // <- One annotation applies both @Mapping rules bundled above
    UserEnrichedDto toEnrichedDto(UserEnriched user);
}

// Runnable test
public class AnnotationComposition {
    public static void main(String[] args) {
        UserEnriched user = new UserEnriched(
            "John", "Doe", 42,
            new Address("123 Main St", "Auckland")
        );

        UserEnrichedDto dto = UserEnrichedMapper.INSTANCE.toEnrichedDto(user);

        System.out.println(dto);
        // Output: John Doe (Adult) <123 Main St, Auckland>
    }
}

Global Configuration

See the Shared MapperConfig in MapStruct: Pure Java

Key Benefits Summary

Feature              | Pure Java Benefit
uses = {...}         | Automatic instantiation, no DI container
Mappers.getMapper()  | Single point of access per mapper
@MapperConfig        | Consistent behavior without framework
Multiple uses        | Type-based delegation selection
@UserEnrichment      | Zero-boilerplate reusable mapping rules
Custom utils         | Seamless integration with plain classes

Each example compiles and runs independently after MapStruct annotation processing, demonstrating self-contained mapper hierarchies without a dependency-injection framework.

Shared MapperConfig in MapStruct: Pure Java

MapStruct's @MapperConfig centralizes common mapping rules, global settings, and shared utilities across multiple mappers. Without Spring, MapStruct generates plain implementation classes that you instantiate manually or obtain via Mappers.getMapper().

Why Shared @MapperConfig?

  • DRY principle: Define ignore rules, nullValuePropertyMappingStrategy, etc. once
  • Consistency: All mappers share identical base behavior
  • Inheritance: Prototype methods propagate @Mapping annotations to concrete mappers
  • Flexibility: Works as interface (pure config) or abstract class (config + utilities)

Complete Domain Models

public class BaseDto {
    private Long id;
    private Long createdAt;
    private Long version;

    // constructors, getters, setters
    public BaseDto() {}

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public Long getCreatedAt() {
        return createdAt;
    }

    public void setCreatedAt(Long createdAt) {
        this.createdAt = createdAt;
    }

    public Long getVersion() {
        return version;
    }

    public void setVersion(Long version) {
        this.version = version;
    }
}

public class BaseEntity {
    private Long id;
    private Long createdAt;
    private Long version;
    private Long lastModified;

    // constructors, getters, setters
    public BaseEntity() {}

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public Long getCreatedAt() {
        return createdAt;
    }

    public void setCreatedAt(Long createdAt) {
        this.createdAt = createdAt;
    }

    public Long getVersion() {
        return version;
    }

    public void setVersion(Long version) {
        this.version = version;
    }

    public Long getLastModified() {
        return lastModified;
    }

    public void setLastModified(Long lastModified) {
        this.lastModified = lastModified;
    }
}

public class CustomerDto extends BaseDto {
    private String firstName;
    private String lastName;
    private String email;

    // constructors, getters, setters
    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }
}

public class Customer extends BaseEntity {
    private String firstName;
    private String lastName;
    private String customerEmail;

    // constructors, getters, setters
    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getCustomerEmail() {
        return customerEmail;
    }

    public void setCustomerEmail(String customerEmail) {
        this.customerEmail = customerEmail;
    }
}

public class OrderDto extends BaseDto {
    private String orderNumber;
    private CustomerDto customer;

    // constructors, getters, setters
    public String getOrderNumber() {
        return orderNumber;
    }

    public void setOrderNumber(String orderNumber) {
        this.orderNumber = orderNumber;
    }

    public CustomerDto getCustomer() {
        return customer;
    }

    public void setCustomer(CustomerDto customer) {
        this.customer = customer;
    }
}

public class Order extends BaseEntity {
    private String orderNumber;
    private String customerEmail;
    private String status;

    // constructors, getters, setters
    public String getOrderNumber() {
        return orderNumber;
    }

    public void setOrderNumber(String orderNumber) {
        this.orderNumber = orderNumber;
    }

    public String getCustomerEmail() {
        return customerEmail;
    }

    public void setCustomerEmail(String customerEmail) {
        this.customerEmail = customerEmail;
    }

    public String getStatus() {
        return status;
    }

    public void setStatus(String status) {
        this.status = status;
    }
}

@MapperConfig as Interface (Pure Configuration)

GlobalMapperConfig.java:

import org.mapstruct.MapperConfig;
import org.mapstruct.Mapping;
import org.mapstruct.ReportingPolicy;
import org.mapstruct.MappingInheritanceStrategy;

@MapperConfig(
    unmappedTargetPolicy = ReportingPolicy.ERROR,
    mappingInheritanceStrategy = MappingInheritanceStrategy.AUTO_INHERIT_FROM_CONFIG
)
public interface GlobalMapperConfig {

    @Mapping(target = "id", ignore = true)
    @Mapping(target = "createdAt", ignore = true)
    @Mapping(target = "version", ignore = true)
    BaseEntity toBaseEntity(BaseDto dto);
}

CustomerMapper.java:

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.MappingTarget;

@Mapper(config = GlobalMapperConfig.class)
public interface CustomerMapper {

    // AUTO inherits ignore rules for id/createdAt/version
    @Mapping(target = "customerEmail", source = "email")
    @Mapping(target = "lastModified", ignore = true)
    Customer toEntity(CustomerDto dto);

    @Mapping(target = "customerEmail", source = "email")
    @Mapping(target = "lastModified", ignore = true)
    void updateEntity(CustomerDto dto, @MappingTarget Customer entity);
}

Generated CustomerMapperImpl.java:

import javax.annotation.processing.Generated;

@Generated(
    value = "org.mapstruct.ap.MappingProcessor",
    comments = "version: 1.6.3, compiler: IncrementalProcessingEnvironment from gradle-language-java-9.2.0.jar, environment: Java 25 (Oracle Corporation)"
)
public class CustomerMapperImpl implements CustomerMapper {

    @Override
    public Customer toEntity(CustomerDto dto) {
        if ( dto == null ) {
            return null;
        }

        Customer customer = new Customer();

        customer.setCustomerEmail( dto.getEmail() );
        customer.setFirstName( dto.getFirstName() );
        customer.setLastName( dto.getLastName() );

        return customer;
    }

    @Override
    public void updateEntity(CustomerDto dto, Customer entity) {
        if ( dto == null ) {
            return;
        }

        entity.setCustomerEmail( dto.getEmail() );
        entity.setFirstName( dto.getFirstName() );
        entity.setLastName( dto.getLastName() );
    }
}

@MapperConfig in Abstract Mapper Class

BaseEntityMapper.java:

import org.mapstruct.BeforeMapping;
import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.MappingTarget;
import org.mapstruct.Named;

@Mapper(config = GlobalMapperConfig.class)
public abstract class BaseEntityMapper {

    // Inherits GlobalMapperConfig ignore rules
    @Mapping(target = "lastModified", ignore = true)
    public abstract BaseEntity toEntity(BaseDto dto);

    // Shared utility method
    @Named("normalizeEmail")
    protected String normalizeEmail(String email) {
        return email != null ? email.trim().toLowerCase() : null;
    }

    @Mapping(target = "lastModified", ignore = true)
    public abstract void updateEntity(BaseDto dto, @MappingTarget BaseEntity entity);

    @BeforeMapping
    protected void enrichTimestamps(BaseDto dto, @MappingTarget BaseEntity entity) {
        entity.setLastModified(System.currentTimeMillis());
    }
}

Generated BaseEntityMapperImpl.java:

import javax.annotation.processing.Generated;

@Generated(
    value = "org.mapstruct.ap.MappingProcessor",
    comments = "version: 1.6.3, compiler: IncrementalProcessingEnvironment from gradle-language-java-9.2.0.jar, environment: Java 25 (Oracle Corporation)"
)
public class BaseEntityMapperImpl extends BaseEntityMapper {

    @Override
    public BaseEntity toEntity(BaseDto dto) {
        if ( dto == null ) {
            return null;
        }

        BaseEntity baseEntity = new BaseEntity();

        enrichTimestamps( dto, baseEntity );

        return baseEntity;
    }

    @Override
    public void updateEntity(BaseDto dto, BaseEntity entity) {
        if ( dto == null ) {
            return;
        }

        enrichTimestamps( dto, entity );
    }
}

OrderMapper.java:

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;

@Mapper(config = GlobalMapperConfig.class, uses = BaseEntityMapper.class)
public abstract class OrderMapper {

    // Inherits GlobalConfig + BaseEntityMapper utilities
    @Mapping(target = "customerEmail", source = "customer.email", qualifiedByName = "normalizeEmail")
    @Mapping(target = "status", constant = "PENDING")
    @Mapping(target = "lastModified", ignore = true)
    public abstract Order toEntity(OrderDto dto);

    public Order createProcessedOrder(OrderDto dto) {
        Order order = toEntity(dto);
        order.setStatus("PROCESSED");
        return order;
    }
}

Generated OrderMapperImpl.java:

import javax.annotation.processing.Generated;
import org.mapstruct.factory.Mappers;

@Generated(
    value = "org.mapstruct.ap.MappingProcessor",
    comments = "version: 1.6.3, compiler: IncrementalProcessingEnvironment from gradle-language-java-9.2.0.jar, environment: Java 25 (Oracle Corporation)"
)
public class OrderMapperImpl extends OrderMapper {

    private final BaseEntityMapper baseEntityMapper = Mappers.getMapper( BaseEntityMapper.class );

    @Override
    public Order toEntity(OrderDto dto) {
        if ( dto == null ) {
            return null;
        }

        Order order = new Order();

        baseEntityMapper.enrichTimestamps( dto, order );

        order.setCustomerEmail( baseEntityMapper.normalizeEmail( dtoCustomerEmail( dto ) ) );
        order.setOrderNumber( dto.getOrderNumber() );

        order.setStatus( "PENDING" );

        return order;
    }

    private String dtoCustomerEmail(OrderDto orderDto) {
        CustomerDto customer = orderDto.getCustomer();
        if ( customer == null ) {
            return null;
        }
        return customer.getEmail();
    }
}

Usage in Application Code

public class OrderService {
    private final CustomerMapper customerMapper = new CustomerMapperImpl();
    private final OrderMapper orderMapper = new OrderMapperImpl();

    public Customer createCustomer(CustomerDto dto) {
        return customerMapper.toEntity(dto);
    }

    public Order processOrder(OrderDto dto) {
        // Gets 3 layers of configuration
        return orderMapper.createProcessedOrder(dto);
    }
}

// Fluent factory (alternative)
// import org.mapstruct.factory.Mappers;
// CustomerMapper mapper = Mappers.getMapper(CustomerMapper.class);
// Customer customer = mapper.toEntity(dto);

Inheritance Layers Summary

Layer              | Source         | Provides
GlobalMapperConfig | Interface      | id/createdAt/version ignore rules
BaseEntityMapper   | Abstract class | + normalizeEmail(), @BeforeMapping
CustomerMapper     | Interface      | Specific Customer mappings
OrderMapper        | Abstract class | Specific Order mappings + createProcessedOrder()

Key Benefits

  1. Scalable hierarchy - Global → Base → Specific mappers
  2. Full type safety - No runtime surprises

This pattern handles dozens of entities while maintaining clean, consistent mapping behavior across your entire application.

Java Stream Reduce: A Practical Guide

Java Streams' reduce operation transforms a sequence of elements into a single result through repeated application of an accumulator function, embodying the essence of functional reduction patterns.

Core Method Overloads

Three primary signatures handle different scenarios. The basic Optional<T> reduce(BinaryOperator<T> accumulator) pairwise combines elements, returning Optional for empty stream safety.

The identity form T reduce(T identity, BinaryOperator<T> accumulator) supplies a starting value like 0 for sums, guaranteeing results even from empty streams.

The advanced form <U> U reduce(U identity, BiFunction<U, ? super T, U> accumulator, BinaryOperator<U> combiner) supports parallel execution and type conversion from stream elements of type T to a result of type U.

Reduction folds elements left-to-right: begin with identity (or first element), accumulate each subsequent item. For [1,2,3] summing, compute ((0+1)+2)+3.

Parallel streams divide work into subgroups, requiring an associative combiner to merge partial results reliably.
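
A minimal sketch of that behaviour, using the three-argument overload (the class and variable names here are illustrative, not from the original examples): the accumulator squares and adds, the associative combiner merges the partial sums, so sequential and parallel runs agree.

import java.util.List;
import java.util.stream.IntStream;

public class ParallelReduceDemo {
    public static void main(String[] args) {
        List<Integer> numbers = IntStream.rangeClosed(1, 100).boxed().toList();

        // Identity 0, accumulator adds each square, combiner merges partial sums.
        int sequential = numbers.stream()
                .reduce(0, (acc, n) -> acc + n * n, Integer::sum);
        int parallel = numbers.parallelStream()
                .reduce(0, (acc, n) -> acc + n * n, Integer::sum);

        System.out.println(sequential); // 338350
        System.out.println(parallel);   // 338350 - identical, because the operations are associative
    }
}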

Basic Reductions

Sum integers: int total = IntStream.range(1, 11).reduce(0, Integer::sum); // 55.

Maximum value: OptionalInt max = IntStream.range(1, 11).reduce(Math::max); //OptionalInt[10].

String concatenation: String joined = Stream.of("Hello", " ", "World").reduce("", String::concat); //Hello World.

Object comparison:

record Car(String model, int price) {
}

var cars = List.of(
    new Car("Model A", 20000),
    new Car("Model B", 30000),
    new Car("Model C", 25000)
);

Optional<Car> priciest = cars.stream().reduce((c1, c2) -> c1.price() > c2.price() ? c1 : c2); // Optional[Car[model=Model B, price=30000]]

Advanced: Different Types

The three-argument overload converts a stream of T elements into a result of a different type U:

// IntStream → formatted String
String squares = IntStream.of(1,2,3)
    .boxed()
    .reduce("",
        (accStr, num) -> accStr + (num * num) + ", ",
        String::concat);  // "1, 4, 9, "

Employee list → summary:

record Employee(String name, String dept) {
}

var employees = List.of(
    new Employee("John", "IT"),
    new Employee("Tom", "Sales")
);

String summary = employees.stream()
        .reduce("",
                (acc, emp) -> acc + emp.name() + "-" + emp.dept() + " | ",
                String::concat);  // "John-IT | Tom-Sales | "

Parallel execution relies on the combiner to merge the thread-local partial strings.

Sequential Execution | Parallel (with combiner)
((0+1)+2)+3          | (0+1) + (2+3) → 1 + 5
""+"1"+"4"           | (""+"1") + (""+"4") → "1"+"4"

Performance Tips

Use parallelStream() with proper combiner: list.parallelStream().reduce(0, (a,b)->a+b, Integer::sum).

Opt for primitive streams (IntStream, LongStream) to eliminate boxing overhead.

Prefer sum(), max(), collect(joining()) for simple cases; reserve custom reduce for complex logic or type transformations.
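
A minimal sketch of that last tip (class name is illustrative): the built-in collector states the intent more directly than a hand-rolled reduce, and avoids building a new String on every step.

import java.util.List;
import java.util.stream.Collectors;

public class JoiningVsReduce {
    public static void main(String[] args) {
        List<String> words = List.of("alpha", "beta", "gamma");

        // Hand-rolled reduce: correct, but creates an intermediate String per step.
        String reduced = words.stream()
                .reduce("", (a, b) -> a.isEmpty() ? b : a + ", " + b);

        // Built-in collector: clearer intent, StringBuilder under the hood.
        String joined = words.stream().collect(Collectors.joining(", "));

        System.out.println(reduced); // alpha, beta, gamma
        System.out.println(joined);  // alpha, beta, gamma
    }
}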

Data-Oriented Programming in Modern Java

Data-oriented programming (DOP) in Java emphasizes immutable data structures separated from business logic, leveraging modern features like records, sealed interfaces, and pattern matching for safer, more maintainable code.

Core Principles of DOP

DOP models data transparently using plain structures that fully represent domain concepts without hidden behavior or mutable state. Key rules include making data immutable, explicitly modeling all variants with sealed types, preventing illegal states at the type level, and handling validation at boundaries.

This contrasts with traditional OOP by keeping data passive and logic in pure functions, improving testability and reducing coupling.

Java's Support for DOP

Records provide concise, immutable data carriers with built-in equality and toString. Sealed interfaces define closed hierarchies for exhaustive handling, while pattern matching in switch and instanceof enables declarative operations on variants.

These features combine to enforce exhaustiveness at compile time, eliminating visitor patterns or runtime checks.

Practical Example: Geometric Shapes

Model 2D shapes to compute centers, showcasing DOP in action.

sealed interface Shape permits Circle, Rectangle, Triangle {}

record Point(double x, double y) {}

record Circle(Point center, double radius) implements Shape {}

record Rectangle(Point topLeft, Point bottomRight) implements Shape {}

record Triangle(Point p1, Point p2, Point p3) implements Shape {}

Operations remain separate and pure:

public static Point centerOf(Shape shape) {
    return switch (shape) {
        case Circle c -> c.center();
        case Rectangle r -> new Point(
            (r.topLeft().x() + r.bottomRight().x()) / 2.0,
            (r.topLeft().y() + r.bottomRight().y()) / 2.0
        );
        case Triangle t -> new Point(
            (t.p1().x() + t.p2().x() + t.p3().x()) / 3.0,
            (t.p1().y() + t.p2().y() + t.p3().y()) / 3.0
        );
    };
}

The sealed interface ensures exhaustive coverage, records keep data transparent, and the function is stateless.
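
A minimal usage sketch, assuming Java 21+ and that the records and the centerOf function above are visible to the (purely illustrative) ShapeDemo class:

public class ShapeDemo {
    public static void main(String[] args) {
        Shape circle = new Circle(new Point(2, 3), 5);
        Shape box = new Rectangle(new Point(0, 0), new Point(4, 2));

        // Pure function applied to transparent data: no mutation, no instanceof chains.
        System.out.println(centerOf(circle)); // Point[x=2.0, y=3.0]
        System.out.println(centerOf(box));    // Point[x=2.0, y=1.0]
    }
}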

DOP vs. Traditional OOP

Aspect         | DOP in Java                        | Traditional OOP
Data           | Immutable records, sealed variants | Mutable objects with fields/methods
Behavior       | Separate pure functions            | Embedded in classes
State Handling | None; inputs → outputs             | Mutable instance state
Safety         | Compile-time exhaustiveness        | Runtime polymorphism/overrides
Testing        | Easy unit tests on functions       | Mocking object interactions

DOP shines in APIs, events, and rules engines by prioritizing data flow over object lifecycles.

Python f-Strings vs. t-Strings: A Comparison

Python f-strings provide immediate string interpolation for everyday use, while t-strings (Python 3.14+, PEP 750) create structured Template objects ideal for safe, customizable rendering in scenarios like HTML or logging.

Core Differences

F-strings eagerly evaluate {} expressions into a final str, losing all template structure. T-strings preserve segments and interpolations as an iterable Template, allowing renderers to process values securely without direct concatenation.

Basic Example with Iteration:

name = "Alice"
age = 30

f_result = f"Hello, {name}! You are {age} years old."
print(f_result)  # Hello, Alice! You are 30 years old.
print(type(f_result))  # <class 'str'>

t_result = t"Hello, {name}! You are {age} years old."
print(type(t_result))  # <class 'string.templatelib.Template'>
print(list(t_result))  # ['Hello, ', Interpolation('Alice', 'name', None, ''), '! You are ', Interpolation(30, 'age', None, ''), ' years old.']

T-strings expose components for targeted processing.

Syntax and Formatting

Format specifiers work in both, but t-strings defer final application.

Formatting Example:

pi = 3.14159

f_pi = f"Pi ≈ {pi:.2f}"
print(f_pi)  # Pi ≈ 3.14

t_pi = t"Pi ≈ {pi:.2f}"
result = ""
for i, s in enumerate(t_pi.strings):
    result += s
    if i < len(t_pi.interpolations):
        interp = t_pi.interpolations[i]
        if interp.format_spec:
            result += format(interp.value, interp.format_spec)
        else:
            result += str(interp.value)
print(result)  # Pi ≈ 3.14

Consumers can override or enhance formatting in t-strings.

HTML Rendering Example

T-strings prevent XSS by enabling per-value escaping.

Safe HTML Generation:

user_name = "<script>alert('XSS')</script>"
greeting = "Welcome"

html_tmpl = t"""
<html>
  <h1>{greeting}</h1>
  <p>Hello, {user_name}!</p>
</html>
"""

# Custom HTML renderer (no external libs needed)
def html_render(template):
    parts = []
    for segment in template:
        if isinstance(segment, str):
            parts.append(segment)
        else:
            # Simulate HTML escaping
            escaped = str(segment.value).replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;').replace("'", '&#x27;')
            parts.append(escaped)
    return ''.join(parts)

safe_html = html_render(html_tmpl)
print(safe_html)
# <html>
#   <h1>Welcome</h1>
#   <p>Hello, &lt;script&gt;alert(&#x27;XSS&#x27;)&lt;/script&gt;!</p>
# </html>

This shows t-strings' strength: structure enables selective escaping.

Logging Renderer Example

Safe Logging with Context:

import datetime
timestamp = datetime.datetime.now()

user_id = "user123"
level = "ERROR"

log_tmpl = t"[{level}] User {user_id} logged in at {timestamp:%Y-%m-%d %H:%M:%S}"

def log_render(template):
    parts = []
    for segment in template:
        if isinstance(segment, str):
            parts.append(segment)
        else:
            parts.append(format(segment.value, segment.format_spec))  # Honor format specs (e.g. the timestamp pattern)
    return ''.join(parts)

log_entry = log_render(log_tmpl)
print(log_entry)  # [ERROR] User user123 logged in at 2026-01-11 13:35:00

T-strings keep logs structured yet safe.

Practical Use Cases Table

Scenario         | F-String Approach          | T-String + Render Benefit
Debug Logging    | f"{var=}" → instant string | Custom formatters per field
HTML Generation  | Manual escaping everywhere | Auto-escape via renderer
Config Templates | Direct substitution        | Validate/transform values before render
CLI Output       | Simple trusted data        | Colorize/structure fields selectively

T-strings complement f-strings by enabling secure, modular rendering without sacrificing Python's concise syntax.

Understanding Python Generators

Python generators are a powerful feature that allows developers to create iterators in a simple, memory-efficient way. Instead of computing and returning all values at once, generators produce them lazily—one at a time—whenever requested. This design makes them highly useful for handling large datasets and infinite sequences without exhausting system memory.

What Are Generators?

A generator is essentially a special type of Python function that uses the yield keyword instead of return. When a generator yields a value, it pauses its execution while saving its internal state. The next time the generator is called, it resumes right from where it stopped, continuing until it runs out of values or reaches a return statement.

When you call a generator function, Python doesn’t actually execute it immediately. Instead, it returns a generator object—an iterator—that can be used to retrieve values on demand using either a for loop or the next() function.

How Generators Work

Let’s look at a simple example:

def count_up_to(max):
    count = 1
    while count <= max:
        yield count
        count += 1

for number in count_up_to(5):
    print(number)

Output:

1
2
3
4
5

Here’s what happens under the hood:

  1. The count_up_to function is called, returning a generator object.
  2. The first iteration executes until the first yield, producing the value 1.
  3. Each call to next() continues execution from where it paused, yielding the next number in the sequence.
  4. When the condition count <= max is no longer true, the function ends, and the generator signals completion with a StopIteration exception (demonstrated below).
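
As a small sketch, the same generator can be driven manually with next(), which makes the pause/resume behaviour and the final StopIteration visible:

gen = count_up_to(2)   # Returns a generator object; no body code has run yet
print(next(gen))       # 1 - runs until the first yield, then pauses
print(next(gen))       # 2 - resumes right after the yield
try:
    next(gen)          # Loop condition fails, so the generator is exhausted
except StopIteration:
    print("done")      # done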

Why Use Generators?

Generators offer several benefits:

  • Memory Efficiency: Since they yield one value at a time, generators don’t store entire sequences in memory.
  • Lazy Evaluation: They compute values only when needed, making them suitable for large or infinite data sources (see the sketch after this list).
  • Clean and Readable Code: They provide a simple way to implement iterators without managing internal state manually.
  • Performance: Generators can lead to faster code for streaming or pipeline-based data processing.
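
To make the memory-efficiency and lazy-evaluation points concrete, here is a minimal sketch of an infinite generator consumed on demand (itertools.islice simply takes the first few values):

from itertools import islice

def naturals():
    n = 1
    while True:        # Infinite sequence: safe because values are produced lazily
        yield n
        n += 1

print(list(islice(naturals(), 5)))  # [1, 2, 3, 4, 5]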

Generator Expressions

Python also supports a shorthand syntax known as generator expressions, which resemble list comprehensions but use parentheses instead of square brackets.

Example:

squares = (x * x for x in range(5))
for num in squares:
    print(num)

This creates the same effect as a generator function—producing numbers lazily, one at a time.

Final Thoughts

Generators are one of Python’s most elegant tools for working with data efficiently. Whether you’re reading files line by line, processing data streams, or building pipelines, generators can help you write cleaner, faster, and more scalable code.

Python Generics

Python's generics system brings type safety to dynamic code, enabling reusable functions and classes that work across types while aiding static analysis tools like mypy. Introduced in Python 3.5 through PEPs 483 and 484 and refined in later releases such as 3.12, generics use type variables without runtime overhead, leveraging duck typing for flexibility.

What Are Generics?

Generics parameterize types, allowing structures like lists or custom classes to specify element types at usage time. The core building block is TypeVar from typing; Python 3.12 (PEP 695) adds an inline bracket syntax for declaring type parameters. Type variables exist purely for static checking: nothing is enforced at runtime.

from typing import TypeVar
T = TypeVar('T')  # Placeholder for any type

Generic Functions in Action

Create flexible utilities by annotating parameters and returns with type variables. A practical example is a universal adder for any comparable types.

from typing import TypeVar

T = TypeVar('T')  # Any type supporting +

def add(a: T, b: T) -> T:
    return a + b

# Usage
result1: int = add(5, 3)           # Returns 8, type int
result2: str = add("hello", "world")  # Returns "helloworld", type str
result3: float = add(2.5, 1.7)     # Returns 4.2, type float

Mypy infers and enforces matching types—add(1, "a") fails checking. Another example: identity function.

def identity(value: T) -> T:
    return value

This works seamlessly across any type.

Building Generic Classes

Inherit from Generic[T] for type-aware containers (or use class Stack[T]: in 3.12+). A real-world Result type handles success/error cases like Rust's Result<T, E>.

from typing import Generic, TypeVar

T = TypeVar('T')  # Success type
E = TypeVar('E')  # Error type

class Result(Generic[T, E]):
    def __init__(self, value: T | None = None, error: E | None = None):
        self.value = value
        self.error = error
        self.is_ok = error is None

    def unwrap(self) -> T | None:
        if self.is_ok:
            return self.value
        raise ValueError(f"Error: {self.error}")

class Stack(Generic[T]):
    def __init__(self) -> None:
        self.items: list[T] = []

    def push(self, item: T) -> None:
        self.items.append(item)

    def pop(self) -> T:
        return self.items.pop()

Sample Usage:

# Result usage
def divide(a: float, b: float) -> Result[float, str]:
    if b == 0:
        return Result(error="Division by zero")
    return Result(value=a / b)

success = divide(10, 2)
print(success.unwrap())  # 5.0

failure = divide(10, 0)
# failure.unwrap() #raises ValueError

# Stack usage
int_stack: Stack[int] = Stack()
int_stack.push(1)
int_stack.push(42)
print(int_stack.pop())  # 42

str_stack: Stack[str] = Stack()
str_stack.push("hello")
print(str_stack.pop())  # "hello"

Advanced Features

  • Multiple TypeVars: K = TypeVar('K'); V = TypeVar('V') for dict-like classes: class Mapping(Generic[K, V]):.
  • Bounds: T = TypeVar('T', bound=str) restricts T to str and its subclasses (see the sketch after this list).
  • Variance: TypeVar('T', contravariant=True) for input-only types.
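
A short sketch of a bound in practice (the function name emphasize is illustrative); Python 3.12+ can declare the same bound inline as def emphasize[S: str](value: S) -> S:

from typing import TypeVar

S = TypeVar('S', bound=str)  # Accepts str and any subclass of str

def emphasize(value: S) -> S:
    print(value.upper())  # str methods are safe to call thanks to the bound
    return value          # Returning the input preserves its exact type

emphasize("hello")        # OK, prints HELLO
# emphasize(42)           # Rejected by mypy: int does not satisfy the str bound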

Mypy in Practice

Save the Stack class to stack.py. Run mypy stack.py—no errors for valid code.

Test errors: Add stack: Stack[int] = Stack[str]() then mypy stack.py:

stack.py: error: Incompatible types in assignment (expression has type "Stack[str]", variable has type "Stack[int]")  [assignment]

Fix by matching types. Correct usage passes silently.

Practical Benefits and Tools

Generics catch errors early in IDEs and CI pipelines. Run mypy script.py to validate. No performance hit—type hints erase at runtime. Ideal for libraries like FastAPI or Pydantic.

A Guide to Python Dataclasses

Python dataclasses, introduced in Python 3.7 via the dataclasses module, streamline class definitions for data-heavy objects by auto-generating boilerplate methods like __init__, __repr__, __eq__, and more. They promote cleaner code, type safety, and IDE integration without sacrificing flexibility. This article covers basics to advanced usage, drawing from official docs and practical patterns.

Defining a Dataclass

Start by importing and decorating a class with @dataclass. Fields require type annotations; the decorator handles the rest.

from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

p = Point(1.5, 2.5)
print(p)  # Point(x=1.5, y=2.5)

Customization via parameters: @dataclass(eq=True, order=False, frozen=False, slots=False) toggles comparisons, immutability (frozen=True prevents attribute changes), and memory-efficient slots (Python 3.10+).
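
A small sketch of those parameters in action (the Config class is purely illustrative):

from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True, order=True, slots=True)  # slots=True requires Python 3.10+
class Config:
    host: str
    port: int

a = Config("localhost", 8080)
b = Config("localhost", 9090)

print(a < b)       # True - order=True generates comparison methods
print(hash(a))     # frozen=True (with eq=True) makes instances hashable
try:
    a.port = 80    # frozen=True blocks attribute assignment
except FrozenInstanceError:
    print("immutable")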

Field Defaults and Customization

Use assignment for immutables; field(default_factory=...) for mutables to avoid shared state.

from dataclasses import dataclass, field

@dataclass
class Employee:
    name: str
    dept: str = "Engineering"
    skills: list[str] = field(default_factory=list)
    id: int = field(init=False, default=0)  # Skipped in __init__, set later

Post-init logic: Define __post_init__ for validation or computed fields.

def __post_init__(self):
    self.id = hash(self.name)

Other field() options: repr=False, compare=False, hash=None, metadata={...} for extras, kw_only=True (3.10+) for keyword-only args.

Inheritance and Composition

Dataclasses support single/multiple inheritance; parent fields prepend in __init__.

@dataclass
class Employee(Person):  # assumes a Person dataclass defined elsewhere
    salary: float

Nested dataclasses work seamlessly; use InitVar for init-only vars.

from dataclasses import dataclass, InitVar

@dataclass
class Logger:
    name: str
    level: str = "INFO"
    log_file: str | None = None  # Computed during init

    config: InitVar[dict] = None

    def __post_init__(self, config):
        if config:
            self.level = config.get('default_level', self.level)
            self.log_file = config.get('log_path', f"{self.name}.log")
        else:
            self.log_file = f"{self.name}.log"

app_config = {'default_level': 'DEBUG', 'log_path': '/var/logs/app.log'}
logger = Logger("web_server", config=app_config)
print(logger)  # Logger(name='web_server', level='DEBUG', log_file='/var/logs/app.log')
logger = Logger("web_server")
print(logger)  # Logger(name='web_server', level='INFO', log_file='web_server.log')

The __dataclass_fields__ mapping lists fields in declaration order, which aids introspection and debugging.

Utilities and Patterns

  • replace(): Immutable updates: new_p = replace(p, x=3.0) (see the sketch after the table below).
  • Exports: asdict(p), astuple(p) for serialization.
  • Introspection: fields(p), is_dataclass(p), make_dataclass(...).

Feature             | Use Case          | Python Version
frozen=True         | Immutable data    | 3.7+
slots=True          | Memory/attr speed | 3.10+
kw_only=True        | Keyword args      | 3.10+
field(metadata=...) | Annotations       | 3.7+
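
A brief sketch of these utilities, applied to the Point dataclass from the start of the article:

from dataclasses import asdict, astuple, fields, replace

p = Point(1.5, 2.5)

moved = replace(p, x=3.0)           # New instance; the original is untouched
print(moved)                        # Point(x=3.0, y=2.5)
print(asdict(p))                    # {'x': 1.5, 'y': 2.5}
print(astuple(p))                   # (1.5, 2.5)
print([f.name for f in fields(p)])  # ['x', 'y']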

Best Practices and Gotchas

Prefer dataclasses over namedtuples for mutability needs; use frozen=True for hashable configs. Avoid overriding generated methods unless necessary—extend via __post_init__. For production, validate inputs and consider slots=True for perf gains.

Production Readiness Guidelines: Ensuring Robust Deployments

Production readiness guidelines provide a structured checklist to confirm applications are reliable, secure, and scalable before live deployment.

Core Checklist Categories

Teams assess applications across key areas using pass/fail criteria during production readiness reviews (PRRs).

Functional Testing

Comprehensive testing verifies feature completeness and performance under load.

  • Unit, integration, and end-to-end tests pass defined thresholds with peer-reviewed code changes.
  • Benchmarks for response times, throughput, and error rates meet SLOs.
  • Code coverage exceeds standards, confirmed via peer validation.

Security and Compliance

Security gates protect against threats and ensure regulatory alignment.

  • Vulnerability scans, encryption, API security, and access controls (e.g., OAuth2) are implemented.
  • Compliance checks validated by peers in CI/CD pipelines.
  • Automated blocks for non-compliant builds.

Observability and Monitoring

Full visibility enables proactive issue detection and recovery.

  • Logging, metrics (latency, errors, resource usage), and alerting tied to SLOs.
  • Incident response runbooks, on-call rotations, and scalability tests with SRE peer input.
  • Regular backup and disaster recovery validation.

Deployment and Operations

Repeatable processes support safe, scalable releases.

  • Automated CI/CD pipelines with rollbacks, staging mirrors, and IaC; peer-reviewed configs.
  • Operational training and capacity planning confirmed.

Peer Review Process

Cross-functional reviews catch issues early and build deployment confidence.

  • At least one approving review per production change from developers, leads, and SREs; CI/CD gates enforce this.
  • Documented outcomes and threaded discussions in PRs/MRs for audits.
  • Metrics tracking (e.g., review time) ensures efficiency, with streamlined hotfix paths.

Documentation and Review

Clear artifacts aid maintenance and audits.

  • Up-to-date API docs, architecture diagrams, and onboarding guides in version control.
  • Final PRR with peer sign-offs as gated criteria.

Implementation Tips

Automate checklist items in tools like GitLab or GitHub for consistency, reserving manual peer reviews for high-impact changes. Regularly refine based on post-deployment metrics to evolve readiness over time.

Navigating the Risks of Solo Development for Non-Trivial Applications

Solo development of non-trivial applications promises independence but introduces severe vulnerabilities like single points of failure, knowledge silos, and undetected errors that cascade in production. Blind spots from lacking diverse perspectives, absent accountability, and handoff risks further compound these challenges for complex projects spanning architecture, security, scalability, and maintenance. Targeted mitigations can help, though they require discipline and external support.

Single Points of Failure

Relying solely on one developer creates a critical single point of failure, where illness, burnout, or sudden departure halts all progress. Knowledge silos emerge as tribal knowledge stays undocumented, rendering recovery impossible without that individual. In production, these amplify into outages or data loss from unshared insights.

Blind Spots and Error Amplification

Solo developers miss subtle bugs, security flaws, or scalability issues due to absent diverse perspectives that teams provide. These oversights lead to breaches, downtime, or expensive rewrites when flaws emerge under real-world loads. Assumptions persist without peer challenges, escalating minor issues into systemic failures.

Accountability and Quality Erosion

Without code reviews, shortcuts erode quality over time, with hotfixes becoming untraceable and root cause analysis infeasible. This builds technical debt as unvetted changes accumulate, prioritizing short-term speed over sustainable rigor. Releases grow unstable, undermining user trust.

Burnout and Handoff Risks

Over-reliance accelerates burnout from endless multitasking across coding, testing, ops, and support, stalling timelines and onboarding. Departure wipes out tribal knowledge, crippling maintenance or scaling efforts. Handoffs turn chaotic absent structured documentation.

Time, Skill, and Scope Challenges

Juggling every phase stretches timelines unpredictably, with personal disruptions grinding work to a halt. Skill gaps in areas like DevOps or UX lead to suboptimal decisions, while isolation fuels scope creep and doubt, risking abandonment.

Mitigation Strategies

Mandate reviews—even for small changes—via GitHub PRs with external contributors or AI linters. Build modular architecture, rigorous documentation, and MVPs for early validation; leverage open-source tools and scheduled breaks to combat burnout and ease handoffs.
