Building High-Performance Energy IoT Systems with .NET and Azure

Building a production-ready IoT platform for smart grid energy management that processes 400+ billion data points annually with millisecond-level response times using .NET, Azure cloud services, and edge computing.

Fernando

Energy companies process 400+ billion data points annually from IoT devices while requiring millisecond-level decision making for grid stability. This showcase demonstrates a comprehensive .NET-based architecture that delivers real-time processing at scale, combining Azure cloud services with edge computing capabilities.

Architecture Goal: A “two-loop” design separating sub-second operational control from cloud-based optimization, processing millions of events per second while maintaining grid stability and operational efficiency.

Solution Architecture Overview #

The solution architecture implements a multi-tier approach combining cloud-native services with edge computing to handle the massive scale and performance requirements of modern smart grids.

graph TB
    subgraph "Field Layer"
        D1
        D2 
        D3
        D4
        D5
    end
    
    subgraph "Edge Layer"
        E1[IoT Edge Gateway]
        E2[Local Processing]
        E3[Critical Control]
    end
    
    subgraph "Cloud Platform"
        subgraph "Ingestion"
            I1[Azure IoT Hub]
            I2[Event Hubs]
        end
        
        subgraph "Processing"
            P1[Stream Analytics]
            P2[Azure Functions]
            P3
        end
        
        subgraph "Storage"
            S1
            S2
            S3
        end
        
        subgraph "Intelligence"
            AI1[ML.NET Models]
            AI2[Azure ML]
            AI3[Predictive Analytics]
        end
        
        subgraph "Presentation"
            U1
            U2
            U3[Mobile Apps]
        end
    end
    
    D1 --> E1
    D2 --> E1
    D3 --> E1
    D4 --> E1
    D5 --> E1
    
    E1 --> I1
    E2 --> I1
    E3 --> I1
    
    I1 --> P1
    I2 --> P1
    
    P1 --> S1
    P1 --> S2
    P1 --> AI1
    
    P2 --> P3
    P3 --> S1
    
    AI1 --> U1
    AI2 --> U1
    AI3 --> U1
    
    U1 --> U2
    U1 --> U3

Data Flow Architecture #

The data processing pipeline handles three distinct paths optimized for different latency and processing requirements:

graph LR
    subgraph "IoT Devices"
        DEV[Sensors & Meters]
    end
    
    subgraph "Edge Processing"
        EDGE[IoT Edge Gateway]
        LOCAL[Local Analytics]
        CACHE[Edge Cache]
    end
    
    subgraph "Cloud Ingestion"
        HUB[Azure IoT Hub]
        EVENTS[Event Hubs]
    end
    
    subgraph "Stream Processing"
        STREAM[Stream Analytics]
        FUNC[Azure Functions]
    end
    
    subgraph "Data Paths"
        HOT[Hot Path<br/>Real-Time Alerts]
        WARM[Warm Path<br/>Business Logic]
        COLD[Cold Path<br/>Analytics & ML]
    end
    
    subgraph "Storage & Services"
        TSDB[(TimescaleDB)]
        COSMOS[(Cosmos DB)]
        BLOB[(Blob Storage)]
        SIGNALR[SignalR Service]
        MICRO[Microservices]
        ML[Azure ML]
    end
    
    DEV --> EDGE
    EDGE --> LOCAL
    EDGE --> CACHE
    EDGE --> HUB
    
    HUB --> EVENTS
    EVENTS --> STREAM
    STREAM --> FUNC
    
    STREAM --> HOT
    STREAM --> WARM
    STREAM --> COLD
    
    HOT --> SIGNALR
    WARM --> MICRO
    COLD --> ML
    
    MICRO --> TSDB
    MICRO --> COSMOS
    ML --> BLOB

Core Architecture: Stream Processing + Microservices #

The foundational architecture combines Azure Stream Analytics for real-time event processing with .NET microservices for business logic, creating a scalable system capable of handling millions of events per second.

Real-Time Stream Processing #

Azure Stream Analytics processes continuous data streams using SQL-based queries optimized for temporal operations:

-- Grid frequency monitoring with 1-second windows
WITH GridFrequency AS (
    SELECT 
        SubstationId,
        AVG(Frequency) as AvgFrequency,
        MIN(Frequency) as MinFrequency,
        MAX(Frequency) as MaxFrequency,
        System.Timestamp() as WindowEnd
    FROM TelemetryInput TIMESTAMP BY EventTime
    GROUP BY SubstationId, TumblingWindow(second, 1)
)
SELECT * INTO FrequencyAlerts
FROM GridFrequency
WHERE AvgFrequency < 49.8 OR AvgFrequency > 50.2
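The same 49.8–50.2 Hz band check sometimes needs to be mirrored in downstream .NET consumers. A minimal sketch (the class and constant names are ours for illustration, not part of the pipeline):

```csharp
using System;

// Hypothetical helper mirroring the Stream Analytics alert condition above:
// flag any window whose average frequency leaves the 49.8–50.2 Hz band
// around the 50 Hz nominal.
public static class FrequencyBand
{
    public const double NominalHz = 50.0;
    public const double LowerHz = 49.8;
    public const double UpperHz = 50.2;

    public static bool IsAnomalous(double avgFrequencyHz) =>
        avgFrequencyHz < LowerHz || avgFrequencyHz > UpperHz;

    // Signed deviation from nominal, useful for sizing control actions.
    public static double Deviation(double avgFrequencyHz) =>
        avgFrequencyHz - NominalHz;
}
```

Keeping the thresholds as shared constants avoids the cloud query and the .NET services drifting apart on what counts as an anomaly.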

Grid Optimization Service Architecture #

Grid balancing is reframed as an optimization problem, not a hard-real-time control loop. This service analyzes trends and dispatches high-level commands, using SignalR for operator notifications and gRPC for internal service-to-service communication.

public class GridOptimizationService : BackgroundService
{
    private readonly IHubContext<GridMonitoringHub> _hubContext;
    private readonly IDeviceControlService _deviceControl;
    private readonly IGridAnalyzer _analyzer;
    // Incoming measurement stream (e.g., a Channel fed by the ingestion layer)
    private readonly ChannelReader<GridMeasurement> _telemetryStream;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var measurement in _telemetryStream.ReadAllAsync(stoppingToken))
        {
            var analysis = await _analyzer.AnalyzeFrequencyStability(measurement);

            if (analysis.RequiresImmediateAction)
            {
                // Analyze trends and dispatch high-level (non-real-time) control actions
                var controlAction = GenerateControlAction(analysis);
                await _deviceControl.ExecuteAsync(controlAction);

                // Notify operators in real-time
                await _hubContext.Clients.Group($"grid-{measurement.Region}")
                    .SendAsync("GridAnomalyDetected", new
                    {
                        Severity = analysis.Severity,
                        Action = controlAction.Description,
                        Timestamp = DateTime.UtcNow,
                        PredictedImpact = analysis.PredictedImpact
                    }, stoppingToken);
            }
        }
    }

    private ControlAction GenerateControlAction(GridAnalysis analysis)
    {
        return analysis.AnomalyType switch
        {
            AnomalyType.OverFrequency => new ControlAction
            {
                Type = ControlActionType.LoadIncrease,
                Magnitude = CalculateLoadAdjustment(analysis.FrequencyDeviation),
                TargetDevices = SelectOptimalLoads(analysis.AffectedRegion)
            },
            AnomalyType.UnderFrequency => new ControlAction
            {
                Type = ControlActionType.GenerationIncrease,
                Magnitude = CalculateGenerationAdjustment(analysis.FrequencyDeviation),
                TargetDevices = SelectAvailableGenerators(analysis.AffectedRegion)
            },
            _ => ControlAction.NoAction
        };
    }
}

Microservices Architecture Pattern #

The microservices architecture implements domain-driven design with energy-specific bounded contexts, enabling independent scaling and deployment of different system components. This architecture includes gateway services responsible for translating legacy operational protocols (like DNP3 or IEC 61850) from SCADA systems into modern APIs for the cloud.
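Such a gateway can be sketched as a set of protocol adapters behind one contract. This is a hypothetical design: `IProtocolAdapter`, the record payload types, and the routing logic are illustrative, not a specific DNP3 or IEC 61850 library API.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Placeholder payload types for illustration only.
public record TelemetryReadingDto(string DeviceAddress, double Value);
public record ControlCommandDto(string Action);

// Contract each legacy-protocol adapter (DNP3, IEC 61850, ...) implements.
public interface IProtocolAdapter
{
    string Protocol { get; }
    Task<TelemetryReadingDto> ReadAsync(string deviceAddress, CancellationToken ct);
    Task WriteAsync(string deviceAddress, ControlCommandDto command, CancellationToken ct);
}

// Routes on protocol type so legacy SCADA devices surface through the same
// modern API as native IoT devices.
public class ProtocolGateway
{
    private readonly IReadOnlyDictionary<string, IProtocolAdapter> _adapters;

    public ProtocolGateway(IEnumerable<IProtocolAdapter> adapters) =>
        _adapters = adapters.ToDictionary(a => a.Protocol);

    public Task<TelemetryReadingDto> PollAsync(
        string protocol, string deviceAddress, CancellationToken ct = default) =>
        _adapters.TryGetValue(protocol, out var adapter)
            ? adapter.ReadAsync(deviceAddress, ct)
            : throw new NotSupportedException($"No adapter registered for {protocol}");
}
```

New protocols then become new adapter registrations rather than changes to the services behind the gateway.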

graph TB
    subgraph "API Gateway"
        GATEWAY[Azure API Management]
    end
    
    subgraph "Core Services"
        DEV[Device Management]
        TEL[Telemetry]
        GRID[Grid Optimization]
        ALERT[Alerting]
        PRED[Prediction]
    end
    
    subgraph "Supporting Services"
        AUTH[Identity & Access]
        CONFIG[Configuration]
        LOG[Logging]
    end
    
    subgraph "Data Layer"
        TSDB[(TimescaleDB)]
        COSMOS[(Cosmos DB)]
        REDIS[(Redis Cache)]
    end
    
    subgraph "Message Bus (Dapr Pub/Sub)"
        SERVICEBUS[Azure Service Bus]
    end
    
    subgraph "External Systems"
        SCADA[SCADA Systems]
        EMS[Energy Management Systems]
        MARKET[Energy Markets]
    end
    
    GATEWAY --> DEV
    GATEWAY --> TEL
    GATEWAY --> GRID
    GATEWAY --> ALERT
    GATEWAY --> PRED
    
    DEV --> SERVICEBUS
    TEL --> SERVICEBUS
    GRID --> SERVICEBUS
    ALERT --> SERVICEBUS
    PRED --> SERVICEBUS
    
    DEV --> TSDB
    TEL --> TSDB
    GRID --> COSMOS
    ALERT --> REDIS
    PRED --> COSMOS
    
    DEV --> AUTH
    TEL --> CONFIG
    GRID --> LOG
    
    SCADA --> GATEWAY
    EMS --> GATEWAY
    MARKET --> GATEWAY

Device Management Service (with Dapr) #

To build a resilient, loosely coupled system, the microservices use Dapr (Distributed Application Runtime) for pub/sub messaging, which keeps broker-specific SDKs out of the business logic.

[ApiController]
public class DeviceController : ControllerBase
{
    private readonly IDeviceService _deviceService;
    private readonly DaprClient _daprClient; // Injected Dapr Client
    private readonly ILogger<DeviceController> _logger;

    public DeviceController(IDeviceService deviceService, DaprClient daprClient, ILogger<DeviceController> logger)
    {
        _deviceService = deviceService;
        _daprClient = daprClient;
        _logger = logger;
    }

    [HttpPost("{deviceId}/telemetry")]
    public async Task<IActionResult> ReceiveTelemetry(
        string deviceId, TelemetryData data)
    {
        var stopwatch = Stopwatch.StartNew();

        try
        {
            // Validate device and data integrity
            var validationResult = await _deviceService.ValidateTelemetryAsync(deviceId, data);
            if (!validationResult.IsValid)
                return BadRequest(validationResult.Errors);

            // Process telemetry with enrichment
            var enrichedData = await _deviceService.EnrichTelemetryAsync(data);

            // Publish using Dapr for downstream processing
            await _daprClient.PublishEventAsync("pubsub", "telemetry-received", new TelemetryReceived
            {
                DeviceId = deviceId,
                Data = enrichedData,
                Timestamp = DateTime.UtcNow,
                CorrelationId = HttpContext.TraceIdentifier
            });

            stopwatch.Stop();

            return Ok(new
            {
                Status = "Processed",
                ProcessingLatency = stopwatch.ElapsedMilliseconds,
                DataPoints = enrichedData.Measurements.Count
            });
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error processing telemetry for device {DeviceId}", deviceId);
            return StatusCode(500, "Processing error");
        }
    }

    [HttpGet("{deviceId}/health")]
    public async Task<IActionResult> GetDeviceHealth(string deviceId)
    {
        var health = await _deviceService.GetHealthStatusAsync(deviceId);
        return Ok(health);
    }
}

High-Performance Time-Series Processing #

Energy IoT systems require optimized time-series data handling for both real-time operations and historical analysis. The architecture leverages TimescaleDB for superior performance with high-cardinality datasets.
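The `telemetry_readings` table used throughout this section is a natural TimescaleDB hypertable. A sketch of the DDL, with column names taken from the repository queries and the chunking and index choices left as illustrative defaults:

```sql
-- Sketch only: column names match the repository code; chunk interval,
-- compression, and retention policies would be tuned per deployment.
CREATE TABLE telemetry_readings (
    device_id    TEXT        NOT NULL,
    timestamp    TIMESTAMPTZ NOT NULL,
    voltage      DOUBLE PRECISION,
    current      DOUBLE PRECISION,
    power_factor DOUBLE PRECISION
);

SELECT create_hypertable('telemetry_readings', 'timestamp');

-- Supports the per-device, time-ordered aggregation queries below.
CREATE INDEX ON telemetry_readings (device_id, timestamp DESC);
```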

Time-Series Repository Pattern #

This repository demonstrates two critical performance patterns:

  1. Correct Connection Pooling: Connections are not held. They are created and disposed of for each operation, allowing ADO.NET’s underlying connection pool to manage resources efficiently.
  2. Binary Bulk Import: Using BeginBinaryImportAsync is the fastest way to ingest large batches of data into PostgreSQL/TimescaleDB.[7]
public class OptimizedTelemetryRepository
{
    // Inject IServiceProvider or NpgsqlDataSource (Npgsql 7+) to resolve scoped
    // connections, not a singleton NpgsqlConnection.
    private readonly IServiceProvider _serviceProvider;

    public OptimizedTelemetryRepository(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public async Task<IEnumerable<TelemetryReading>> GetAggregatedReadingsAsync(
        string deviceId, DateTime from, DateTime to, TimeSpan interval)
    {
        // Create/dispose connection for each call. The pool makes this fast and thread-safe.
        await using var scope = _serviceProvider.CreateAsyncScope();
        await using var connection = scope.ServiceProvider.GetRequiredService<NpgsqlConnection>();
        await connection.OpenAsync();

        // Use time bucket aggregation for efficient querying
        const string sql = @"
            SELECT 
                time_bucket(@interval, timestamp) AS bucket,
                device_id,
                AVG(voltage) as avg_voltage,
                AVG(current) as avg_current,
                AVG(power_factor) as avg_power_factor,
                MIN(voltage) as min_voltage,
                MAX(voltage) as max_voltage,
                COUNT(*) as sample_count
            FROM telemetry_readings 
            WHERE device_id = @deviceId 
              AND timestamp >= @from 
              AND timestamp <= @to
            GROUP BY bucket, device_id
            ORDER BY bucket DESC";

        await using var command = new NpgsqlCommand(sql, connection);
        command.Parameters.AddWithValue("deviceId", deviceId);
        command.Parameters.AddWithValue("from", from);
        command.Parameters.AddWithValue("to", to);
        command.Parameters.AddWithValue("interval", interval);

        var readings = new List<TelemetryReading>();
        await using var reader = await command.ExecuteReaderAsync();

        while (await reader.ReadAsync())
        {
            readings.Add(new TelemetryReading
            {
                DeviceId = reader.GetString("device_id"),
                Timestamp = reader.GetDateTime("bucket"),
                AvgVoltage = reader.GetDouble("avg_voltage"),
                //... map other fields
            });
        }

        return readings;
    }

    public async Task BulkInsertOptimizedAsync(IEnumerable<TelemetryReading> readings)
    {
        await using var scope = _serviceProvider.CreateAsyncScope();
        await using var connection = scope.ServiceProvider.GetRequiredService<NpgsqlConnection>();
        await connection.OpenAsync();

        // Use binary COPY for maximum insert performance
        const string copyCommand =
            "COPY telemetry_readings (device_id, timestamp, voltage, current, power_factor) " +
            "FROM STDIN (FORMAT BINARY)";

        await using var writer = await connection.BeginBinaryImportAsync(copyCommand);

        foreach (var reading in readings)
        {
            await writer.StartRowAsync();
            await writer.WriteAsync(reading.DeviceId);
            await writer.WriteAsync(reading.Timestamp, NpgsqlDbType.TimestampTz);
            await writer.WriteAsync(reading.Voltage);
            await writer.WriteAsync(reading.Current);
            await writer.WriteAsync(reading.PowerFactor);
        }

        await writer.CompleteAsync();
    }
}

Edge Computing with Azure IoT Edge #

Critical grid protection requires local processing capabilities. This edge layer’s role is to monitor and react to events from dedicated Intelligent Electronic Devices (IEDs), buffer operational data locally, and forward it to the cloud, continuing to operate autonomously when connectivity is compromised.[8]
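The buffer-and-forward behavior can be sketched with a small queue that drains only while the uplink is healthy. The two delegates are stand-ins for a connectivity probe and the real `ModuleClient` send path; everything here is illustrative.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Sketch of the store-and-forward pattern: telemetry is queued locally and
// drained to the cloud only while the uplink reports healthy.
public class StoreAndForwardBuffer
{
    private readonly ConcurrentQueue<string> _pending = new();
    private readonly Func<bool> _isConnected;       // connectivity probe
    private readonly Func<string, Task> _send;      // e.g. wraps ModuleClient.SendEventAsync

    public StoreAndForwardBuffer(Func<bool> isConnected, Func<string, Task> send)
    {
        _isConnected = isConnected;
        _send = send;
    }

    public int PendingCount => _pending.Count;

    // Enqueue first so nothing is lost if connectivity drops mid-send.
    public async Task SubmitAsync(string payload, CancellationToken ct = default)
    {
        _pending.Enqueue(payload);
        await FlushAsync(ct);
    }

    // Drain in FIFO order; stop as soon as the uplink goes unhealthy.
    public async Task FlushAsync(CancellationToken ct = default)
    {
        while (!ct.IsCancellationRequested && _isConnected() && _pending.TryPeek(out var next))
        {
            await _send(next);
            _pending.TryDequeue(out _);
        }
    }
}
```

A production version would bound the queue and spill to the local TimescaleDB instance shown in the diagram rather than holding everything in memory.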

graph TB
    subgraph "Substation Edge"
        subgraph "IoT Edge Runtime"
            EDGE[Edge Runtime]
            DOCKER[Container Runtime]
        end
        
        subgraph "Protection Modules"
            PROT[Protection Logic<br/>.NET Module]
            RELAY[Relay Control<br/>Module]
            LOG[Local Logging<br/>Module]
        end
        
        subgraph "Local Storage"
            TSDB_LOCAL[(Local TimescaleDB)]
            CACHE_LOCAL[Redis Cache]
        end
    end
    
    subgraph "Field Devices (IEDs)"
        CT[Current Transformers]
        PT[Potential Transformers]
        RELAY_DEV[Protection Relays]
        BREAKER[Circuit Breakers]
    end
    
    subgraph "Cloud Services"
        IOT_HUB[Azure IoT Hub]
        STREAM[Stream Analytics]
        SERVICES[Microservices]
    end
    
    CT --> PROT
    PT --> PROT
    RELAY_DEV --> PROT
    
    PROT --> RELAY
    PROT --> LOG
    PROT --> TSDB_LOCAL
    
    RELAY --> BREAKER
    LOG --> CACHE_LOCAL
    
    EDGE --> IOT_HUB
    IOT_HUB --> STREAM
    STREAM --> SERVICES
    
    SERVICES --> IOT_HUB
    IOT_HUB --> EDGE

Edge Protection Module #

This .NET module runs on IoT Edge. It is not a hard-real-time controller (that is the job of dedicated IEDs) but a high-frequency monitor that coordinates local logic and communicates with the cloud. A System.Threading.Timer drives a “best effort” polling cycle, acknowledging that it runs on a non-real-time OS.

public class GridProtectionModule : IDisposable
{
    private readonly ConcurrentDictionary<string, DeviceState> _deviceStates = new();
    private readonly Timer _protectionTimer;
    // ModuleClient cannot be subclassed; it is created via a factory
    // (e.g. ModuleClient.CreateFromEnvironmentAsync) and injected here.
    private readonly ModuleClient _moduleClient;
    private readonly ILogger _logger;

    public GridProtectionModule(ModuleClient moduleClient, ILogger logger)
    {
        _moduleClient = moduleClient;
        _logger = logger;
        // 5ms monitoring cycle for high-frequency polling
        _protectionTimer = new Timer(ExecuteProtectionCycle, null,
            TimeSpan.Zero, TimeSpan.FromMilliseconds(5));
    }

    private async void ExecuteProtectionCycle(object state)
    {
        var cycleStart = DateTime.UtcNow;

        try
        {
            // Logic here *monitors* state from dedicated hardware (IEDs)
            foreach (var (deviceId, deviceState) in _deviceStates)
            {
                await EvaluateProtectionLogic(deviceId, deviceState);
            }
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Protection cycle error");
        }

        var cycleTime = DateTime.UtcNow - cycleStart;
        if (cycleTime.TotalMilliseconds > 5) // Alert if cycle exceeds monitoring interval
        {
            _logger.LogWarning("Protection cycle time exceeded: {CycleTime}ms",
                cycleTime.TotalMilliseconds);
        }
    }

    private async Task EvaluateProtectionLogic(string deviceId, DeviceState state)
    {
        // Example: Overcurrent protection logic
        if (state.Current > state.InstantaneousThreshold)
        {
            await TripBreakerAsync(deviceId, "Instantaneous Overcurrent");
            return;
        }
        //... other protection logic
    }

    // Direct-method handler, registered via _moduleClient.SetMethodHandlerAsync
    public Task<MethodResponse> UpdateMeasurements(MethodRequest methodRequest, object userContext)
    {
        var measurements = JsonSerializer.Deserialize<DeviceMeasurements>(methodRequest.DataAsJson);

        _deviceStates.AddOrUpdate(measurements.DeviceId,
            new DeviceState(measurements),
            (key, existing) => existing.UpdateMeasurements(measurements));

        return Task.FromResult(new MethodResponse(200));
    }

    private async Task TripBreakerAsync(string deviceId, string reason)
    {
        var tripCommand = new
        {
            DeviceId = deviceId,
            Action = "TRIP",
            Reason = reason,
            Timestamp = DateTime.UtcNow.Ticks
        };

        // Forward trip command to dedicated relay hardware via a separate module
        await _moduleClient.SendEventAsync("protection-trips",
            new Message(JsonSerializer.SerializeToUtf8Bytes(tripCommand)));

        _logger.LogCritical("Circuit breaker trip signal sent: {DeviceId}, Reason: {Reason}",
            deviceId, reason);
    }

    public void Dispose() => _protectionTimer.Dispose();
}

ML.NET Predictive Analytics Integration #

This architecture provides a “best-of-both-worlds” ML strategy. While the .NET services consume the models for high-performance inference, data science teams are not limited to ML.NET for training. They can use Python-based frameworks like PyTorch or TensorFlow to build and train models, which are then exported to the interoperable ONNX (Open Neural Network Exchange) format. The .NET EquipmentPredictionService can then load these .onnx files for inference, decoupling the data science workflow from the application runtime.
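Loading such an exported model might look like the following sketch using the ONNX Runtime C# API. The model path, the input name (`features`), and the assumption that the first output is a single probability are all placeholders that must match whatever the data science team actually exported.

```csharp
using System;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Sketch of scoring an ONNX model exported from Python, under the assumed
// input/output contract described above.
public sealed class OnnxFailureScorer : IDisposable
{
    private readonly InferenceSession _session;

    public OnnxFailureScorer(string modelPath) =>
        _session = new InferenceSession(modelPath);

    public float ScoreFailureProbability(float[] features)
    {
        // Shape [1, N]: a single row of N engineered features.
        var input = new DenseTensor<float>(features, new[] { 1, features.Length });

        using var results = _session.Run(new[]
        {
            NamedOnnxValue.CreateFromTensor("features", input)
        });

        // Assumes the model's first output is a single probability value.
        return results.First().AsEnumerable<float>().First();
    }

    public void Dispose() => _session.Dispose();
}
```

In practice the session is long-lived (it is thread-safe for concurrent `Run` calls), so one scorer per model per process is the usual pattern.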

Predictive Maintenance Architecture #

graph LR
    subgraph "Data Sources"
        TELEMETRY[Telemetry Data]
        MAINTENANCE[Maintenance History]
        WEATHER[Weather Data]
        MARKET[Market Data]
    end
    
    subgraph "Feature Engineering"
        EXTRACT[Feature Extraction]
        TRANSFORM[Transformation]
        AGGREGATE[Aggregation]
    end
    
    subgraph "ML Pipeline (Polyglot)"
        TRAIN[Model Training]
        VALIDATE[Cross Validation]
        EXPORT[Export to ONNX]
    end
    
    subgraph "Prediction Service (.NET)"
        CONSUME[Load ONNX Model]
        SCORE[Scoring]
        FEEDBACK[Feedback Loop]
    end
    
    subgraph "Actions"
        ALERTS[Maintenance Alerts]
        SCHEDULE[Maintenance Scheduling]
        INVENTORY[Parts Ordering]
    end
    
    TELEMETRY --> EXTRACT
    MAINTENANCE --> EXTRACT
    WEATHER --> EXTRACT
    MARKET --> EXTRACT
    
    EXTRACT --> TRANSFORM
    TRANSFORM --> AGGREGATE
    
    AGGREGATE --> TRAIN
    TRAIN --> VALIDATE
    VALIDATE --> EXPORT
    
    EXPORT --> CONSUME
    CONSUME --> SCORE
    
    SCORE --> ALERTS
    SCORE --> SCHEDULE
    SCHEDULE --> INVENTORY
    
    ALERTS --> FEEDBACK
    FEEDBACK --> TRAIN

Equipment Failure Prediction Service #

public class EquipmentPredictionService
{
    // The _predictionEngine can be loaded from an ML.NET-trained model
    // or, more likely, an ONNX model exported from Python.
    // Note: PredictionEngine is not thread-safe; use PredictionEnginePool
    // when serving concurrent requests.
    private readonly PredictionEngine<EquipmentData, FailurePrediction> _predictionEngine;
    private readonly ILogger<EquipmentPredictionService> _logger;

    public EquipmentPredictionService(PredictionEngine<EquipmentData, FailurePrediction> predictionEngine,
                                      ILogger<EquipmentPredictionService> logger)
    {
        _predictionEngine = predictionEngine;
        _logger = logger;
    }

    public async Task<EquipmentHealthAssessment> AssessEquipmentHealthAsync(
        string equipmentId, TelemetryData recentData)
    {
        // Feature engineering from telemetry data
        var features = ExtractFeatures(recentData);
        var equipmentData = new EquipmentData
        {
            EquipmentId = equipmentId,
            Temperature = features.AvgTemperature,
            Vibration = features.VibrationRms,
            //... other features
        };

        // Generate prediction
        var prediction = _predictionEngine.Predict(equipmentData);

        // Calculate time to failure estimate
        var timeToFailure = EstimateTimeToFailure(prediction.Probability, features);

        return new EquipmentHealthAssessment
        {
            EquipmentId = equipmentId,
            HealthScore = 1.0f - prediction.Probability,
            FailureProbability = prediction.Probability,
            EstimatedTimeToFailure = timeToFailure,
            RiskLevel = DetermineRiskLevel(prediction.Probability),
            RecommendedActions = GenerateRecommendations(prediction, timeToFailure),
            ConfidenceLevel = prediction.Score,
            AssessmentTimestamp = DateTime.UtcNow
        };
    }

    private EquipmentFeatures ExtractFeatures(TelemetryData data)
    {
        //... feature extraction logic
        return new EquipmentFeatures();
    }

    private List<MaintenanceRecommendation> GenerateRecommendations(
        FailurePrediction prediction, TimeSpan timeToFailure)
    {
        //... recommendation logic
        return new List<MaintenanceRecommendation>();
    }
}

Performance Characteristics and Scaling #

The architecture delivers specific performance benchmarks optimized for energy sector requirements:

Performance Metrics #

  • Stream Processing: 1M+ events/second per Stream Analytics unit
  • Hot Path Latency: <50ms for critical alerts (Cloud)
  • Warm Path Latency: <200ms for business logic processing (Cloud)
  • Edge Processing: <10ms for local monitoring & control functions
  • Time-Series Queries: <100ms for 1M+ record aggregations
  • ML Predictions: <25ms for equipment failure analysis (ONNX)

Horizontal Scaling Patterns #

The microservices architecture enables linear scaling with proper partitioning strategies:

public class PartitionedTelemetryProcessor
{
    private readonly IServiceProvider _serviceProvider;
    private readonly int _partitionCount;

    public PartitionedTelemetryProcessor(IServiceProvider serviceProvider, int partitionCount)
    {
        _serviceProvider = serviceProvider;
        _partitionCount = partitionCount;
    }

    public async Task RouteToProcessorAsync(string deviceId, TelemetryData data)
    {
        // Deterministic hashing ensures even distribution across partitions
        var partitionKey = CalculatePartition(deviceId);
        var processor = _serviceProvider.GetRequiredKeyedService<ITelemetryProcessor>(partitionKey);

        await processor.ProcessAsync(data);
    }

    private int CalculatePartition(string deviceId)
    {
        // Use the device's region prefix for locality. string.GetHashCode is
        // randomized per process in .NET, so a stable hash is required when
        // routing decisions must agree across nodes.
        var region = deviceId.Substring(0, 3);
        var hash = 0;
        foreach (var ch in region)
            hash = unchecked(hash * 31 + ch);
        return (hash & 0x7FFFFFFF) % _partitionCount;
    }
}

Enterprise-Grade Foundations: Security and Observability #

A solution for critical infrastructure must be built on a production-ready foundation.

  • Security: For a production-grade enterprise deployment, device identity is paramount. Instead of simple device IDs, this architecture would leverage the Azure IoT Device Provisioning Service (DPS) with X.509 certificates. This ensures every device connecting to the grid is authenticated via secure hardware (like a TPM) using cryptographic attestation, a non-negotiable for critical infrastructure.

  • Observability: To manage a distributed system of this scale, end-to-end observability is built in using OpenTelemetry. Instrumentation in both the .NET edge modules and the cloud microservices provides distributed tracing, allowing an operator to follow a single event from the sensor, through the edge, across the Dapr pub/sub bus, and into the database. This is essential for debugging bottlenecks and managing fleet health.
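As a sketch, the OpenTelemetry wiring for one of the .NET microservices might look like this. The service name, the collector endpoint, and the specific instrumentation packages (ASP.NET Core, HttpClient, and the Npgsql.OpenTelemetry package for TimescaleDB spans) are illustrative choices, not prescribed by the architecture.

```csharp
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService("device-management-service"))
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()   // inbound HTTP spans
        .AddHttpClientInstrumentation()   // outbound HTTP, incl. Dapr sidecar calls
        .AddNpgsql()                      // TimescaleDB query spans (Npgsql.OpenTelemetry)
        .AddOtlpExporter(options =>
            options.Endpoint = new Uri("http://otel-collector:4317")));

var app = builder.Build();
app.Run();
```

Because Dapr propagates W3C trace context across the pub/sub bus, spans emitted here stitch together with those from downstream services automatically.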

This solution architecture showcases a comprehensive, enterprise-ready approach to smart grid IoT integration using .NET. The combination of a “two-loop” design, modern microservice patterns like Dapr, and a polyglot ML strategy creates a robust platform capable of handling the massive scale and performance requirements of modern energy systems.

References #