Nick Jetten | 26 May 2024

Renewable Energy: Building an AI-driven energy forecasting capability

Building an energy forecasting capability to navigate a highly volatile market

With the rise of renewables and the continued decentralization and digitalization of grids, market volatility has increased, making it more difficult for clients to stay in control of their energy needs.

Many start-ups, scale-ups, and corporate energy companies offer tools that give their clients insight into their energy usage and production, technology that adjusts energy demand to energy supply by steering assets in the market, and much more.

For many of these applications, data and artificial intelligence (AI) are a core capability. The better a company can predict the future, the better its clients can proactively steer their assets within a highly volatile grid and, as a result, maximize green energy usage at an affordable price.

However, the challenge when building AI solutions within the energy sector is that the number of machine learning models explodes quite rapidly: from energy consumption to energy production, from volume to price predictions, from solar panels to other assets like batteries, and from long-term forecasts down to short-term forecasts per PTU (Programme Time Unit, the 15-minute settlement period used in many energy markets). Hence, the question is how to build an AI-based energy forecasting system that can accommodate all these different algorithms while staying in control and iterating fast. And second, where to start?
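
To make that explosion concrete, here is a minimal Python sketch that enumerates a hypothetical forecasting-task matrix. The dimension values are illustrative placeholders, not a fixed taxonomy:

```python
from itertools import product

# Illustrative dimensions only; the concrete lists depend on your portfolio.
targets = ["consumption volume", "production volume", "price"]
assets = ["solar", "wind", "battery", "flexible demand"]
horizons = ["per PTU (15 min)", "day-ahead", "month-ahead"]

# Every combination is potentially a separate model to build, deploy,
# and monitor, so the space grows multiplicatively with each dimension.
tasks = list(product(targets, assets, horizons))
print(f"{len(tasks)} candidate forecasting models")  # 3 * 4 * 3 = 36
```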

This article shares our three key lessons learned:

  1. While building an AI capability and its underlying use cases, make solid decisions on what you buy versus what you build in-house.
  2. Utilizing internal data can outperform the existing benchmark for energy consumption forecasting.
  3. Spend as much time on an MLOps platform as on building your first algorithm.

Let’s have a look at the first lesson learned.

  1. Buy versus build. To proactively steer within the energy industry, a multitude of different algorithms is required. Building all of them internally will not give a scaling organization the velocity it needs, nor is it a wise investment decision, as external parties can potentially build better forecasting algorithms. Fortunately, there are many providers of energy forecasts, covering both volume and pricing, up to specific use cases like trading and asset steering. This creates a business case for every energy forecasting use case. I can buy externally at a certain cost, which increases speed of delivery, creates dependency, and guarantees a certain level of performance. Or I can build an algorithm at a certain cost, which decreases speed of delivery, creates independence, and potentially outperforms (or underperforms) the benchmark that is out there in the market.

And that latter point is important: if there is real potential to build an algorithm that performs as well as or better than what’s out there in the market, there is a concrete upside to building it internally (next to increasing a company’s independence). And the key to good algorithms is not the modelling technique or clever thinking, but mostly the data that is fed into them. Hence, understanding the quality and quantity of your internal (and externally available) data, and how it can be combined and utilized as an asset in building high-performing algorithms, is crucial for the make-or-buy decision.
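
To make this trade-off tangible, below is a minimal make-versus-buy sketch. All figures and parameter names are hypothetical placeholders, and it assumes the performance side of the trade-off can be expressed as an expected yearly cost of forecast errors (for example, imbalance charges):

```python
# A minimal make-versus-buy sketch; all numbers below are made up.

def build_minus_buy(buy_fee_per_year: float,
                    build_cost_one_off: float,
                    build_maintenance_per_year: float,
                    error_cost_buy_per_year: float,
                    error_cost_build_per_year: float,
                    years: int) -> float:
    """Net benefit of building over buying across the horizon.

    The error_cost_* arguments capture the performance side of the
    trade-off: the expected yearly cost of forecast errors (e.g.
    imbalance charges) under each option.
    """
    total_buy = years * (buy_fee_per_year + error_cost_buy_per_year)
    total_build = build_cost_one_off + years * (
        build_maintenance_per_year + error_cost_build_per_year)
    return total_buy - total_build

# Positive means building pays off within the chosen horizon.
print(build_minus_buy(buy_fee_per_year=120_000, build_cost_one_off=250_000,
                      build_maintenance_per_year=60_000,
                      error_cost_buy_per_year=80_000,
                      error_cost_build_per_year=60_000, years=3))  # -10000
```

In this made-up example, building still loses at a three-year horizon but turns positive at five years; the decision is exactly this sensitive to the horizon and the assumed performance gap.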

  2. Outperforming the existing benchmark. Every organization possesses data that other companies do not have, simply through the service it provides to its customers. An energy company typically knows its customers better than an external energy consumption forecasting company does. Hence, the question is: how can an energy company utilize this data within its modelling approach?

Through experimentation, a data scientist can assess the value of internal data and compare the resulting model forecasts to the existing benchmark. Typically, the first experiments do not result in the best algorithm, so iteration is key. A plan can then be drafted on how to further outperform the existing benchmark. Several approaches are: gather more and higher-quality data, improve the modelling technique by testing and validating several models, or combine multiple models into one. The latter is interesting, as it offers a gradual way of moving from a “buy” situation to a “make” situation, as sketched below.
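
Here is a minimal sketch of such an experiment, using synthetic data in place of real PTU-level consumption and a real provider’s forecast (both are assumptions, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
actual = rng.uniform(50, 150, size=96)         # one day of 15-minute PTUs
external = actual + rng.normal(0, 8, size=96)  # bought benchmark forecast
internal = actual + rng.normal(0, 6, size=96)  # model using internal data

def mae(forecast: np.ndarray, truth: np.ndarray) -> float:
    """Mean absolute error, a common headline metric for volume forecasts."""
    return float(np.mean(np.abs(forecast - truth)))

print("external MAE:", mae(external, actual))
print("internal MAE:", mae(internal, actual))

# Blending both forecasts is a gradual path from "buy" to "make":
# start with a small internal weight and raise it as the model proves itself.
for w in (0.25, 0.5, 0.75):
    blended = w * internal + (1 - w) * external
    print(f"blend w={w}: MAE {mae(blended, actual):.2f}")
```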

  3. Build an MLOps platform before deploying ML models. Given the investments needed to build a data & AI infrastructure and the challenges of adopting AI within businesses, the first use case typically will not earn back the investments made. However, spending time on your MLOps platform from the beginning will already pay off in the short term.

In all our projects, we set a high standard for training, deploying, and monitoring the first use case in a standardized way. This is driven by multiple reasons. First, when an algorithm is deployed on an infrastructure that is not robust, the data science team must spend time on maintenance work to get the model running again, instead of on developing new models and improving existing ones. Second, by standardizing the way models are trained, deployed, and monitored, multiple people can work on multiple use cases in parallel, and the dependency on that one data scientist who knows all the ins and outs is reduced. Lastly, it speeds up delivery, as components are reusable. The sketch below illustrates what such standardization can look like.
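
In this minimal sketch, every use case implements one shared contract, so a single pipeline can train, deploy, and monitor all models. The class and method names are illustrative, not a reference to any specific MLOps framework:

```python
from abc import ABC, abstractmethod
from typing import Any

class ForecastingUseCase(ABC):
    """Shared contract that every forecasting use case implements."""

    name: str

    @abstractmethod
    def train(self, training_data: Any) -> None: ...

    @abstractmethod
    def predict(self, features: Any) -> Any: ...

    @abstractmethod
    def monitoring_metrics(self, predictions: Any, actuals: Any) -> dict: ...

def run_pipeline(use_case: ForecastingUseCase, data: Any,
                 features: Any, actuals: Any) -> None:
    """One shared pipeline instead of one bespoke script per model."""
    use_case.train(data)
    predictions = use_case.predict(features)
    metrics = use_case.monitoring_metrics(predictions, actuals)
    # In a real platform this would push to a metrics store and alerting;
    # here we just print, to keep the sketch self-contained.
    print(use_case.name, metrics)
```

Because every new use case plugs into the same pipeline, components stay reusable and no single data scientist becomes the bottleneck.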

To summarize: setting up an AI-driven energy forecasting capability requires a solid strategy, one that enables an organization to make critical make-or-buy decisions, to spend its resources well, and to unlock the potential of the internal data that external forecast providers do not have. Moreover, the underlying tech stack plays an important role in scaling your ML use cases. Putting time and effort into your engineering stack always pays off.