Francesco Montelli
From port-forward to Ingress: How to configure a professional local Kubernetes environment with NGINX

The Problem: Accessing Services in a Local Development Environment
Let’s face it: we all started this way. You have your brand new app on Kubernetes, and to test it you open 12 different terminals, one for each kubectl port-forward .... It works, but it’s awkward and doesn’t simulate a real environment at all. For the underlying concepts, the official Kubernetes documentation and the kubectl reference are good starting points.
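To sketch the idea (hostnames, service names, and the `demo-ingress` file below are illustrative, not taken from the post), a single Ingress object can route several local hostnames to their Services, replacing one port-forward terminal per service:

```shell
# Write a demo Ingress that routes two hostnames to two Services.
# Names (demo-ingress, app1/app2, *.localtest.me) are illustrative only.
cat > demo-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app1.localtest.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1
            port:
              number: 80
  - host: app2.localtest.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2
            port:
              number: 80
EOF
# With a cluster and an NGINX ingress controller available you would run:
#   kubectl apply -f demo-ingress.yaml
grep -c "host:" demo-ingress.yaml   # → 2 (two routing rules in one object)
```

One object, two hostnames, zero open terminals: that is the trade the post argues for.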

  • Kubernetes
  • kind
  • Ingress
  • NGINX
  • DevOps
  • Local Development
Tuesday, October 21, 2025 | 14 minutes Read
CAPI Part 1: From Chaos to Automation

The Problem of Manual Kubernetes Management
Managing Kubernetes clusters is one of the most complex challenges in the modern cloud-native ecosystem. As the number of nodes and clusters grows, operational complexity increases exponentially, quickly making operations such as provisioning new workers, coordinating control plane upgrades, managing network configuration, and maintaining the underlying infrastructure unmanageable.

Limitations of Traditional Methods
Traditional methods for managing Kubernetes clusters typically rely on:
  • Custom scripts for node provisioning and configuration
  • Manual upgrade and maintenance procedures, documented at best
  • Static configurations that are difficult to version and replicate
  • Imperative approaches that describe “how to do it” rather than “what to achieve”

Concrete Operational Problems
According to CNCF surveys, operational complexity is one of the main challenges in enterprise Kubernetes adoption.
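The declarative alternative can be sketched with a minimal Cluster API `Cluster` object (names and the Proxmox infrastructure kind below are illustrative, not from the post): you declare “what to achieve” and the CAPI controllers reconcile the infrastructure toward it.

```shell
# A minimal Cluster API "Cluster" object: declare the desired cluster,
# and CAPI controllers reconcile infrastructure toward that state.
# All names and the infrastructure provider kind are illustrative.
cat > demo-cluster.yaml <<'EOF'
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["10.244.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: ProxmoxCluster
    name: demo-proxmox
EOF
grep -q "kind: Cluster" demo-cluster.yaml && echo "declarative spec written"
```

The manifest replaces the custom provisioning scripts: it can be versioned, reviewed, and replicated like any other code.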

  • Kubernetes
  • CAPI
  • Cluster API
  • Infrastructure as Code
  • DevOps
  • Automation
Tuesday, August 5, 2025 | 6 minutes Read
CAPI Part 2: Anatomy of Cluster API - Components and Mechanisms

CAPI Component Architecture
Cluster API implements a modular architecture based on the Kubernetes controller pattern, where each component has specific, well-defined responsibilities. This separation of responsibilities ensures the extensibility, maintainability, and testability of the system.

Management Cluster vs Workload Cluster
The fundamental distinction in CAPI is the separation between the cluster that manages infrastructure and the clusters that run application workloads.

Management Cluster
The Management Cluster serves as the central control hub for Kubernetes infrastructure. Its main characteristics include:
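One way to see the management/workload split (a sketch; every name below is illustrative): the worker pools of a workload cluster are themselves declared as objects stored in the management cluster, for example as a MachineDeployment.

```shell
# A MachineDeployment lives in the management cluster but describes a
# pool of worker machines for the "demo" workload cluster.
cat > demo-md.yaml <<'EOF'
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: demo-workers
spec:
  clusterName: demo
  replicas: 3
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: demo
      version: v1.29.0
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: demo-workers
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: ProxmoxMachineTemplate
        name: demo-workers
EOF
grep -q "replicas: 3" demo-md.yaml && echo "worker pool declared"
```

Scaling the workload cluster then means editing `replicas` in the management cluster, not touching the workload nodes themselves.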

  • Kubernetes
  • CAPI
  • Cluster API
  • Infrastructure as Code
  • DevOps
  • Automation
Tuesday, August 5, 2025 | 9 minutes Read
CAPI Part 3: Talos Linux - The Operating System for Kubernetes

The Immutable OS Paradigm for Kubernetes
Traditional operating system management in Kubernetes environments presents numerous challenges: configuration drift, an extended attack surface, maintenance complexity, and inconsistency between environments. Talos Linux represents a revolutionary approach that completely redefines how operating systems interact with Kubernetes.

Problems with Traditional Operating Systems
Configuration Drift and Snowflake Servers
Traditional operating systems (Ubuntu, CentOS, RHEL) in Kubernetes environments suffer from structural problems:

```shell
# Typical scenario on an Ubuntu node
ssh worker-node-01
sudo apt update && sudo apt upgrade -y
sudo systemctl restart kubelet

# One month later...
ssh worker-node-02
sudo apt update && sudo apt upgrade -y
# Different versions, divergent configurations, inconsistent behaviors
```

According to the 2023 State of DevOps Report, over 60% of organizations struggle with inconsistent configuration management in distributed systems.
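By contrast, Talos nodes have no SSH and no package manager; they are configured entirely through a declarative machine config. A heavily trimmed sketch (addresses, disk, and endpoint values are illustrative):

```shell
# A trimmed Talos machine config: the whole node is described
# declaratively, so drift between workers cannot accumulate.
# All values below are illustrative.
cat > worker.yaml <<'EOF'
version: v1alpha1
machine:
  type: worker
  install:
    disk: /dev/sda
cluster:
  controlPlane:
    endpoint: https://192.168.1.10:6443
EOF
# With a real node you would apply it with:
#   talosctl apply-config --nodes 192.168.1.21 --file worker.yaml
grep -q "type: worker" worker.yaml && echo "declarative node config written"
```

Every worker applied from the same file is identical, which is precisely what the apt-based scenario above cannot guarantee.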

  • Kubernetes
  • CAPI
  • Cluster API
  • Infrastructure as Code
  • DevOps
  • Automation
Tuesday, August 5, 2025 | 10 minutes Read
CAPI Part 4: Practical Setup - Day 1 Operations

Fourth article in the series “Deploy Kubernetes with Cluster API: Automated Cluster Management”. In the previous parts we explored the theoretical foundations of Cluster API, its component architecture, and the integration with Talos Linux. It is now time to put these concepts into practice through a complete implementation of Day 1 Operations. This part will guide you through every step of the initial deployment process: from configuring the Proxmox infrastructure to the first functional, verified workload cluster, using the Python generator to automate the creation of parametric configurations.
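The idea behind parametric generation can be sketched in a few lines of shell (the post uses a Python generator; the template, placeholder, and cluster names below are illustrative):

```shell
# Render one manifest per cluster from a single parametric template.
cat > cluster.tmpl <<'EOF'
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: __NAME__
EOF

# One rendered file per environment; __NAME__ is the only parameter here.
for name in dev staging prod; do
  sed "s/__NAME__/${name}/" cluster.tmpl > "cluster-${name}.yaml"
done

grep -l "name: prod" cluster-*.yaml   # → cluster-prod.yaml
```

The real generator parameterizes far more (versions, node counts, networks), but the shape is the same: one template, many rendered manifests.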

  • Kubernetes
  • CAPI
  • Cluster API
  • Infrastructure as Code
  • DevOps
  • Day 1 Operations
Tuesday, August 5, 2025 | 8 minutes Read
DevContainers: Your Portable and Reproducible Development Environment

The Development Environment Problem: “It Works on My Machine!”
How many times have you heard or said the infamous phrase “It works on my machine!”? It’s the bane of collaboration between developers. Each team member configures their environment slightly differently: mismatched language versions, missing dependencies, or misaligned environment variables. This leads to hours wasted debugging the setup, slowing the onboarding of new team members and creating inconsistencies between development and production environments.
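A minimal `.devcontainer/devcontainer.json` captures the whole environment in the repository, so everyone opens the same toolchain (the image, feature, and `make setup` command below are illustrative):

```shell
# A minimal devcontainer.json: the environment definition lives in the
# repo, not on anyone's machine. Image/feature/command are illustrative.
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "demo",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  "postCreateCommand": "make setup"
}
EOF
# Sanity-check that the file is well-formed JSON.
python3 -m json.tool .devcontainer/devcontainer.json > /dev/null && echo "valid JSON"
```

Checking this one file into version control is what turns “works on my machine” into “works on every machine”.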

  • Docker
  • Linux
  • Containerization
  • DevOps
Wednesday, July 30, 2025 | 6 minutes Read
Observability in Distributed Systems: From Monitoring to Understanding

The Maze Without an Exit: A New Analogy for Observability
Imagine you are a brilliant architect responsible for a huge, intricate building full of complex systems: heating, ventilation, lighting, security, elevators. You have installed sensors everywhere: every temperature, every pressure, every watt of energy consumed is recorded. Your control dashboards are a profusion of charts and data; every parameter is monitored to perfection, and every line is green and reassuring. You know exactly what is happening in every single corner of the building.

  • Observability
  • OpenTelemetry
  • Distributed Systems
  • Tracing
  • Metrics
  • Logging
  • DevOps
  • Architecture
  • Monitoring
Tuesday, July 29, 2025 | 21 minutes Read
OpenTelemetry: Anatomy of Observability in Distributed Systems

The Pre-OpenTelemetry Fragmentation Problem
Before the advent of OpenTelemetry, the observability ecosystem was a maze of protocols, APIs, and proprietary formats. Each vendor had developed its own “dialect”:
  • Jaeger used its own span format and ingestion protocol.
  • Zipkin had a different data model and specific REST APIs.
  • Prometheus required metrics in a specific format with rigid naming conventions.
  • AWS X-Ray, Google Cloud Trace, Azure Monitor: each came with proprietary SDKs.
This fragmentation created systemic problems:
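The unification OpenTelemetry brings can be sketched with a Collector configuration: applications speak OTLP once, and the Collector fans signals out to whatever backends you choose (the exporter endpoints below are illustrative):

```shell
# One OpenTelemetry Collector replaces per-vendor agents: receive OTLP
# once, export to different backends per signal. Endpoints illustrative.
cat > otel-collector.yaml <<'EOF'
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
  otlp/tempo:
    endpoint: tempo:4317
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
    traces:
      receivers: [otlp]
      exporters: [otlp/tempo]
EOF
grep -q "pipelines:" otel-collector.yaml && echo "collector config written"
```

Swapping a backend means editing an exporter in this file; the instrumented applications never change, which is exactly the dialect problem solved.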

  • OpenTelemetry
  • Observability
  • Tracing
  • Metrics
  • Logging
  • Instrumentation
  • DevOps
  • Telemetry
  • Distributed Systems
Tuesday, July 29, 2025 | 10 minutes Read
The LGTM Stack and OpenTelemetry: Complete Observability for Your Distributed Systems

We have explored the principles of observability and the fundamental role of OpenTelemetry as a unifying standard for telemetry. OpenTelemetry gives us the tools to generate and collect high-quality data (metrics, logs, and traces) in an agnostic, consistent format. But once these valuable signals have been collected, where are they stored, queried and, most importantly, displayed in a meaningful way? This is where the LGTM stack comes into play: a powerful combination of open-source tools, developed and primarily supported by Grafana Labs, that forms a complete and integrated observability solution.
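The four components map onto a compose file almost one-to-one; as a sketch (image tags and the bare-bones services are illustrative, a real deployment needs per-service config files):

```shell
# The LGTM stack as a compose skeleton: Loki (logs), Grafana (UI),
# Tempo (traces), Mimir (metrics). Tags and wiring are illustrative.
cat > docker-compose.yaml <<'EOF'
services:
  loki:
    image: grafana/loki:latest
  grafana:
    image: grafana/grafana:latest
    ports: ["3000:3000"]
  tempo:
    image: grafana/tempo:latest
  mimir:
    image: grafana/mimir:latest
EOF
# With Docker installed you would start it with:
#   docker compose up -d
grep -c "image: grafana/" docker-compose.yaml   # → 4 (one per component)
```

Grafana is the single pane of glass; the other three are the storage and query backends it federates.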

  • OpenTelemetry
  • Observability
  • LGTM Stack
  • Loki
  • Grafana
  • Tempo
  • Mimir
  • Prometheus
  • Tracing
  • Metrics
  • Logging
  • Instrumentation
  • DevOps
  • Telemetry
  • Distributed Systems
  • Docker
Tuesday, July 29, 2025 | 9 minutes Read
Self-Hosted n8n Deployment in Homelab

Context and Motivations
n8n is a powerful open-source workflow automation tool that lets you connect a wide range of services through configurable nodes. Unlike SaaS (Software as a Service) alternatives such as Zapier or Make, n8n can be installed in self-hosted mode, ensuring full control over data, privacy, and extensibility. The goal of this project is to show how to create a fully automated n8n instance within a home lab environment. The following technologies will be used:
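The core of a self-hosted setup fits in a short compose file, following n8n’s official Docker image and default port (the timezone value is an illustrative assumption):

```shell
# Minimal self-hosted n8n: official image, default port 5678, and a
# named volume so workflows survive restarts. Timezone is illustrative.
cat > n8n-compose.yaml <<'EOF'
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - GENERIC_TIMEZONE=Europe/Rome
    volumes:
      - n8n_data:/home/node/.n8n
volumes:
  n8n_data:
EOF
# With Docker installed you would start it with:
#   docker compose -f n8n-compose.yaml up -d
grep -q "5678:5678" n8n-compose.yaml && echo "n8n service defined"
```

The named volume is the important part: n8n keeps credentials and workflow definitions under /home/node/.n8n, so losing it means losing your automations.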

  • n8n
  • Automation
  • Homelab
  • DevOps
  • Self-Hosted
Sunday, July 20, 2025 | 10 minutes Read
Contact me:
  • francesco@montelli.dev
  • monte97
  • Francesco Montelli

Francesco Montelli
P.IVA: 02726990399