Understanding the Renewable Forecasting Problem
Forecasting energy generation—especially from solar—has always been a tricky problem. Weather conditions change rapidly. Sensors behave unpredictably. And operational decisions often rely on assumptions rather than accurate forecasts.
Recently, we worked on a use case aimed at bringing more structure to this uncertainty. The goal was simple: predict solar power output up to three days in advance using historical data, real-time weather, and a bit of machine learning. What made this project interesting was how everything came together through Microsoft Fabric.
This blog isn’t about theoretical models or textbook solutions. It’s about what we built, what worked well, and where the challenges were.
Our Motivation: Why We Set Out to Build This
Our starting point was a solar power setup where data was already being collected—equipment readings, temperature sensors, and external weather APIs. The operational team had access to dashboards but no reliable way to forecast power output in advance.
The motivation came from the ground:
- Planning dispatch based on estimated generation
- Improving load balancing
- Avoiding overdraws or inefficiencies on cloudy days
- Supporting maintenance teams with insights into underperformance trends
We weren’t looking to over-engineer it. The idea was to use the data we already had, apply a reliable model, and integrate the results into the tools the team already used.
The Solution We Designed
At a high level, the system works like this:
- Data from AVEVA PI and weather APIs lands in Microsoft Fabric
- We use Dataflows to transform and clean the data
- The data is stored in OneLake, simplifying access
- A time-series LSTM model is trained on this data
- The model is deployed as a REST API using Azure ML Studio
- Predictions are displayed in a simple PowerApps UI for end users
We chose an LSTM (long short-term memory) model because it is designed to handle time-ordered data.
In traditional forecasting, you often have to manually define relationships between input variables and future outputs. But LSTM models—unlike standard regression or decision tree models—can learn those relationships over time, especially when the data includes seasonality, lag effects, or delayed impacts.
For example:
- A sudden drop in temperature might not affect generation immediately, but it could influence the system 2–3 hours later.
- Consecutive days of partial cloud cover might show a compounding effect on performance.
LSTM is particularly good at capturing these temporal patterns because it remembers past inputs over long sequences. It treats the data like a narrative, not a snapshot.
We started with simple features—ambient temperature, irradiation, and module temperature—then added lagged values, rolling means, and timestamp-based features like hour of day and day of week.
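As a rough sketch of that feature-engineering step, here is how the lagged values, rolling means, and timestamp features could be built with pandas. The column names (`ambient_temp`, `irradiation`, `module_temp`) and the specific lag/window sizes are illustrative, not the actual PI tag names or tuned values from the project.

```python
import pandas as pd
import numpy as np

def build_features(df: pd.DataFrame) -> pd.DataFrame:
    """Add lagged, rolling, and timestamp features to hourly sensor data.

    Assumes a DatetimeIndex and illustrative column names, not the
    real PI tag names used in the project.
    """
    out = df.copy()
    for col in ["ambient_temp", "irradiation", "module_temp"]:
        # Lagged values: readings 1h and 3h ago, to capture delayed effects
        out[f"{col}_lag1"] = out[col].shift(1)
        out[f"{col}_lag3"] = out[col].shift(3)
        # Rolling mean over the previous 6 hours, to smooth sensor noise
        out[f"{col}_roll6"] = out[col].rolling(6).mean()
    # Timestamp-based features: solar output is strongly cyclical
    out["hour_of_day"] = out.index.hour
    out["day_of_week"] = out.index.dayofweek
    # Drop the warm-up rows where lags/rolls are undefined
    return out.dropna()
```

The lag features are what let the model see effects like a temperature drop influencing generation a few hours later.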
Once trained, the LSTM model outputs a forecast curve for the next 72 hours, updating predictions every time new sensor or weather data arrives.
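Before training, the hourly feature table has to be sliced into overlapping input/target windows, which is the shape LSTM layers in Keras or PyTorch expect. This is a minimal numpy sketch under assumed settings (a 24-hour lookback window, the 72-hour horizon from the article, power in column 0); the real pipeline's window sizes and column layout may differ.

```python
import numpy as np

def make_sequences(values: np.ndarray, window: int, horizon: int):
    """Slice a (time, features) array into supervised training pairs.

    Each input is `window` consecutive timesteps of all features;
    each target is the next `horizon` power readings, assumed here
    to live in column 0.
    """
    X, y = [], []
    for i in range(len(values) - window - horizon + 1):
        X.append(values[i : i + window])
        y.append(values[i + window : i + window + horizon, 0])
    return np.array(X), np.array(y)
```

With hourly data, `make_sequences(values, window=24, horizon=72)` yields inputs of shape `(samples, 24, n_features)` and targets of shape `(samples, 72)`, one full three-day forecast curve per sample.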
System Architecture: How It All Comes Together
Process Flow:
- Data Acquisition: IoT sensors and the PI System collect real-time environmental and operational data (e.g., solar radiation, temperature).
- Data Storage: The collected data is stored in OneLake, Microsoft Fabric’s data lake, for centralized access and analysis.
- Model Training: Using Acuprism (Fabric), the stored data is used to train machine learning models:
  - Historical and live sensor data
  - Weather data inputs
  - Training compute and experiment logs are generated
- Model Governance and Registry: The trained model is registered and tracked using Unity Catalog, providing version control, metadata tracking, and access control.
- Model Deployment: The registered model is deployed via Azure ML, where an endpoint is created for inference.
- Input Integration: Live weather data (via Weather API) and temperature sensor data are fed into the deployed model through a PowerApps interface.
- Prediction Output: The model generates solar power predictions, visualized or used directly within PowerApps.
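The inference call behind that flow is an ordinary HTTPS POST to the Azure ML endpoint. The sketch below shows the general shape; the URL, key, and field names are placeholders, since the real values and request schema come from the specific deployment's "Consume" page, not from this article.

```python
import json
import urllib.request

# Placeholder endpoint and key; the real values come from the
# Azure ML deployment, not from this sketch.
ENDPOINT_URL = "https://example-workspace.azureml.net/score"
API_KEY = "<api-key>"

def build_payload(ambient_temp: float, irradiation: float,
                  module_temp: float) -> dict:
    """Shape live readings into a JSON body for the scoring endpoint.
    Field names are illustrative, not the deployed model's schema."""
    return {"data": [{"ambient_temp": ambient_temp,
                      "irradiation": irradiation,
                      "module_temp": module_temp}]}

def get_forecast(payload: dict) -> dict:
    """POST the payload to the endpoint and return the parsed forecast."""
    req = urllib.request.Request(
        ENDPOINT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

PowerApps plays the same role as `get_forecast` here: it collects the inputs, calls the endpoint, and renders the returned forecast curve.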
The architecture isn’t flashy, but it’s functional. Each component is replaceable, and the workflow is transparent.
What Worked Better Than Expected
One unexpected win was how smoothly Microsoft Fabric handled data operations. For real-time ingestion and transformation, Dataflows were more useful than we anticipated. We didn’t need to build custom pipelines or set up external schedulers.
Another pleasant surprise was how lightweight the PowerApps integration was. We could expose the model’s output to operations staff with minimal friction. No one needed to learn a new dashboard tool.
Challenges We Encountered
Not everything was smooth. A few issues we had to manage:
- Aligning time zones across weather API data and PI tags
- Handling missing values in temperature sensors
- Preventing the LSTM model from overfitting during cloudy weeks
- Balancing accuracy with model complexity—we deliberately avoided overengineering the ML component
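For the first two items, a workable pattern is to convert everything to one UTC hourly index before joining, then interpolate only short sensor gaps. This is a simplified pandas sketch: the source time zone (`Asia/Kolkata`), the 2-hour interpolation limit, and the column names are assumptions for illustration, not the project's actual settings.

```python
import pandas as pd

def align_and_fill(pi_df: pd.DataFrame, weather_df: pd.DataFrame) -> pd.DataFrame:
    """Put PI tags and weather API data on one UTC hourly index,
    then patch short sensor gaps.

    Assumes pi_df has a tz-naive local-time index (taken here to be
    Asia/Kolkata as an example) and weather_df is already in UTC.
    """
    pi_utc = pi_df.tz_localize("Asia/Kolkata").tz_convert("UTC")
    merged = pi_utc.join(weather_df, how="outer").resample("1h").mean()
    # Interpolate gaps of up to 2 hours; longer outages are left as NaN
    # rather than training the model on fabricated readings.
    return merged.interpolate(limit=2)
```

Capping the interpolation window was the kind of small decision that mattered: filling long outages silently is exactly how a model learns patterns that were never in the plant.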
There’s still more tuning we could do, especially around seasonal adjustments and anomaly handling.
Why This Approach Was the Right Fit
We didn’t set out to build a complex platform—we just wanted something practical, stable, and useful for operations. Microsoft Fabric provided data orchestration, and Azure ML supported straightforward model deployment. It gave us a good balance of control and simplicity.
From a user’s perspective, the result is simple:
You enter weather inputs, and you get a generation forecast.
Behind the scenes, it’s a blend of historical trends, sensor data, and machine learning. But for the people who need the forecast, what matters is that it’s useful—and in this case, it is.
Conclusion: Progress Through Practical Innovation
There’s a lot of excitement around AI and predictive systems, but not every use case needs to be revolutionary. Sometimes, it’s about getting the basics right—stitching together existing tools cleanly and making things easier for the teams on the ground.
This forecasting setup won’t solve every energy problem. But it helps make better decisions, plan more effectively, and rely less on guesswork.
And that’s a step forward.