The E-files - Part 6 - At your service!
During development of the app that monitors the energy production of my solar panels at home, I had to manually start the program each morning and stop it when the sun set. Let's see how we can make this more 'production grade' by running it as a service. On a Raspberry Pi. Only when there is daylight.
As explained in part 2, it is not that hard to turn the worker program into a Linux service by adding the Microsoft.Extensions.Hosting.Systemd package. That is, if you just want to start your service when the Pi reboots and don't need to do anything fancy. In my case, I only need the service to run when energy is actually being produced, i.e. during daylight. The inverter shuts itself down (and with it the WiFi dongle) when no output comes from the panels. When the sun rises, the inverter starts running again, and so should the service. I found an interesting article that describes how to use .service files in combination with .timer files to achieve exactly that. More about that later. First, there were a couple of other things I needed to take care of to make the program ready for production.
Secrets Management
I started building the app using a regular appsettings.json and, of course, I stored the connection string to the Azure Storage Table in there for easy access. When the time came to write these blog posts, I kind of forgot that I had put it there. I wanted to make the source available and pushed the code to GitHub. About 3 minutes later I had an email in my mailbox warning me that I had checked in a secret! So, after first regenerating the keys, I had to change the way I store the connection string, preferably without having to change too much code. .NET Core has facilities for that baked in, by means of a Secret Manager and a secrets.json file. From the docs:
The Secret Manager tool stores sensitive data during the development of an ASP.NET Core project. In this context, a piece of sensitive data is an app secret. App secrets are stored in a separate location from the project tree. The app secrets are associated with a specific project or shared across several projects. The app secrets aren't checked into source control.
Perfect. The docs include all the steps for using this mechanism on both Windows and Linux. What the docs don't talk about is publishing your solution to another environment. On the Windows side, I have the project file and can just run the steps. On the Linux side, though, I don't have a project file (because of the publish command). To get it working, you need to make sure to copy over the .csproj file and create a folder in the .microsoft/usersecrets folder with the name of the GUID that was generated on the Windows side with the init command.
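For reference, these are the Secret Manager commands involved; the key name matches my configuration layout and the value is obviously a placeholder:

# Run inside the project folder; adds a UserSecretsId GUID to the .csproj
dotnet user-secrets init
# Writes to ~/.microsoft/usersecrets/<UserSecretsId>/secrets.json
dotnet user-secrets set "Configuration:ConnectionString" "<your connection string>"
# Show what is currently stored
dotnet user-secrets list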
Another part of the puzzle is that the structure in the JSON file for the user secrets is completely flat. I used a nice hierarchical setup, with the connection string being part of the Configuration object:
{
    "Configuration": {
        "ConnectionString": "not falling for showing it again..."
    }
}
It is not possible to store it in the secrets.json file in this way. To denote a hierarchy, you need to separate the levels with a colon, like this:
{ "Configuration:ConnectionString": "not falling for showing it again..." }
The good news is that the configuration system handles this automatically. After I had created the secrets.json files on both environments, the code I had in place to read the configuration and put the values in a POCO class worked as if the connection string was still in the appsettings file. That is, if your code is running in a Development environment. In Visual Studio, this is automatically set up for you when creating the project, in the Environment variables section of the project properties. To make sure I could use this when debugging the code (running on the Pi) through Visual Studio Code (running on the PC), I needed to add an "env" section to the launch.json file. Again, not easy to find. It looks like this:
"env": {
"DOTNET_ENVIRONMENT" : "Development"
}
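For context, the POCO-reading code amounts to something like this minimal sketch (MonitorConfig is a hypothetical name and shape; the binding mechanism is the standard one). The binding is source-agnostic, which is exactly why no code changes were needed:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class MonitorConfig // hypothetical POCO
{
    public string ConnectionString { get; set; } = "";
}

public static class ProgramSketch
{
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices((hostContext, services) =>
            {
                // Bind the "Configuration" section to the POCO; the values can
                // come from appsettings.json, secrets.json or, later, Key Vault.
                services.Configure<MonitorConfig>(
                    hostContext.Configuration.GetSection("Configuration"));
            });
}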
As stated in the docs, you should only use the Secret Manager like this in a development situation. After following the steps described above, I was able to run the app from VS Code. I also wanted to be able to start the program stand-alone every day, to make sure I would capture all the data, and I created a bash file for that. But how would the bash file know to start the program within the Development environment? It turns out you can use the export command for that. Adding a line with export DOTNET_ENVIRONMENT=Development before starting the actual program did the trick.
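The bash file itself is tiny; a sketch with a hypothetical install path:

#!/bin/bash
# Run with the Development configuration so secrets.json is picked up
export DOTNET_ENVIRONMENT=Development
# Hypothetical location of the published app
cd /home/pi/growattmonitor
./GrowattMonitor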
Using Azure Key Vault
The next step in running the program as a service was to get rid of the environment variables and the secrets.json files completely. The way to do that with Azure is to use a service called Key Vault. You can use Azure Key Vault to encrypt keys and small secrets like passwords and connection strings that use keys stored in hardware security modules (HSMs). For more assurance, your keys are processed in FIPS 140-2 Level 2 validated HSMs (hardware and firmware). With Key Vault, Microsoft cannot see or extract your keys. To top it off, there is also an Azure Key Vault Configuration Provider. This plugs Key Vault into the regular parsing of the configuration settings and, just like with the Secret Manager before, you don't need to change your code to use it.
When your app is running on Azure, you can use Managed identities for Azure resources to authenticate the app to Azure Key Vault with Azure AD authentication, without storing credentials in the app's code or configuration. In my case, I'm using Azure services, but the app itself runs on a Raspberry Pi, not on Azure. That means I need to configure Azure AD, Azure Key Vault, and the app to use an Azure Active Directory Application ID and an X.509 certificate to authenticate to the key vault. So, I need a certificate. Now what?
The docs describe a way to generate your own self-signed certificate (fine for my setup but DO NOT do this for any real-world production app) with MakeCert or OpenSSL. Doable? Yes, but it takes a bit of time to familiarize yourself with the commands and parameters to get it right. And on top of that, you will need to copy the certificate file from your local machine to Azure and the Pi. So, is there an easier way? Yes! As it happens, Azure Key Vault has certificate generation capabilities built in. Once you have created a Key Vault in Azure, go to 'Certificates' under 'Settings' and click on 'Generate/import'. Make sure you give it a unique name and subject. The rest can stay at the default values. Click on 'Create' and your self-signed certificate will be available in a couple of minutes.
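If you prefer the command line over the portal, the Azure CLI can do the same; a sketch with placeholder names:

# Create a self-signed certificate in the vault, using the default policy
az keyvault certificate create \
  --vault-name <your-vault-name> \
  --name growattmonitor-cert \
  --policy "$(az keyvault certificate get-default-policy)"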
In Azure Key Vault you can have multiple versions of every secret you store; only one of them is the current version. When you click on the generated certificate in the overview screen, you'll see all the versions of this secret. Click on the current version to go to its details, and in the blade you then get, you'll see options to download the certificate in .cer and/or .pfx/.pem format. You'll need the .pfx file to install the certificate on your local machine and on the Pi. You also need the .pfx file to register your app in Azure Active Directory. To complete the registration, you also need the thumbprint of the certificate.
Installing the certificate on Windows is pretty easy: just double-click the file and follow the wizard. There is nothing like that available on the Pi. To be able to read the certificate from .NET, the .pfx file needs to be placed in a special folder. Some Binging revealed that it needed to be placed in .dotnet/corefx/cryptography/x509stores/my (relative to the home directory).
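Getting the file onto the Pi is then a matter of copying it over, for example with scp (host and file names are hypothetical):

# The store folder does not exist by default, so create it first
ssh pi@raspberrypi 'mkdir -p ~/.dotnet/corefx/cryptography/x509stores/my'
# Copy the downloaded .pfx into it
scp growattmonitor-cert.pfx pi@raspberrypi:~/.dotnet/corefx/cryptography/x509stores/my/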
The code to get it wired up is actually pretty straightforward. You just need to hook it up in the CreateHostBuilder method like this:
Host.CreateDefaultBuilder(args)
    .UseSystemd()
    .ConfigureAppConfiguration((hostContext, config) =>
    {
        // Only use Key Vault in Production; Development uses secrets.json
        if (hostContext.HostingEnvironment.IsProduction())
        {
            // Build the configuration gathered so far to read the
            // Key Vault settings (vault name, application id, thumbprint)
            var buildConfig = config.Build();

            // Find the certificate in the current user's store
            // (~/.dotnet/corefx/cryptography/x509stores/my on the Pi)
            using var store = new X509Store(StoreLocation.CurrentUser);
            store.Open(OpenFlags.ReadOnly);
            var certs = store.Certificates
                .Find(X509FindType.FindByThumbprint,
                    buildConfig["AZUREADCERTTHUMBPRINT"], false);

            // Add Key Vault as an extra configuration source
            config.AddAzureKeyVault(
                $"https://{buildConfig["KEYVAULTNAME"]}.vault.azure.net/",
                buildConfig["AZUREADAPPLICATIONID"],
                certs.OfType<X509Certificate2>().Single());

            store.Close();
        }
    })
This takes care of adding the Azure Key Vault Configuration Provider, using the certificate thumbprint, the Azure AD application id and the Key Vault name. No secrets needed! These three lookup values are not secrets themselves, so they can simply come from appsettings.json or environment variables. And because of the provider model, there are again no further code changes needed.
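One way to supply those three values is through appsettings.json; a sketch with placeholders (the key names match the lookups in the code above):

{
    "KEYVAULTNAME": "<your-vault-name>",
    "AZUREADAPPLICATIONID": "<application id from the app registration>",
    "AZUREADCERTTHUMBPRINT": "<thumbprint of the certificate>"
}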
Active Directory
To bind all the Azure services together, I needed to set up an Azure Active Directory. Not so much for storing users, but for storing the application in there as an object, to be able to grant it access to the Key Vault. You do this by creating a new App Registration (in the Manage section). The steps are pretty straightforward: you give the registration a name and indicate who has access to the application.
Once you have registered the app, you get an application id. This, together with the certificate thumbprint and the name of the Key Vault, is everything you need to be able to read configuration values from the Key Vault. You can see how these three are referenced in the CreateHostBuilder source displayed above.
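Granting the registered app access to the vault can be done in the portal (Key Vault, 'Access policies'), or scripted with the Azure CLI; a sketch with placeholder names:

# Allow the app's service principal to read secrets and certificates
az keyvault set-policy \
  --name <your-vault-name> \
  --spn <application-id> \
  --secret-permissions get list \
  --certificate-permissions get list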
Systemd timers
To start and stop the service at certain times, I could have used a cron job, but I found a very nice sample that does this with the help of some systemd timers. The article that describes this can be found at https://www.linux.com/training-tutorials/systemd-timers-two-use-cases-0/. The .service and .timer files I use can be found in the GitHub repository, in the systemd folder (https://github.com/vnbaaij/GrowattMonitor/tree/master/GrowattMonitor/systemd). The pattern looks roughly like the sketch below.
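To give an idea of that pattern (the real files are in the repo; the unit names, paths and times here are made up for illustration): one timer starts the monitor in the morning, and a second timer fires a small oneshot unit that stops it in the evening.

# growattmonitor.service -- the monitor itself
[Unit]
Description=Growatt solar panel monitor

[Service]
ExecStart=/home/pi/growattmonitor/GrowattMonitor
Restart=on-failure

# growattmonitor-start.timer -- start the monitor every morning
[Unit]
Description=Start the Growatt monitor in the morning

[Timer]
OnCalendar=*-*-* 06:30:00
Unit=growattmonitor.service

[Install]
WantedBy=timers.target

# growattmonitor-stop.service -- oneshot helper that stops the monitor
[Unit]
Description=Stop the Growatt monitor

[Service]
Type=oneshot
ExecStart=/bin/systemctl stop growattmonitor.service

# growattmonitor-stop.timer -- trigger the stop helper every evening
[Unit]
Description=Stop the Growatt monitor in the evening

[Timer]
OnCalendar=*-*-* 22:00:00
Unit=growattmonitor-stop.service

[Install]
WantedBy=timers.target

After copying the files to /etc/systemd/system, run systemctl daemon-reload and enable everything with systemctl enable --now growattmonitor-start.timer growattmonitor-stop.timer.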
That is it for the final part of this series for now. Questions? Remarks? Let me know in the comments below. If there is any part of the monitoring process you would like to know more about, let me know and I'll do a blog post about it!