Analyzing the gathered data
Now, we are ready to check whether everything works as expected. To do so, we can use Microsoft Azure Storage Explorer. When we access the queue we defined during the event subscription process, we can see that some events have already been gathered:
A quick look at one of the records reveals all the data gathered here:
{
  "subject": "/subscriptions/.../resourceGroups/azureadministrator-euw-rg/providers/Microsoft.Storage/storageAccounts/azureadministrator",
  "eventType": "Microsoft.Resources.ResourceActionFailure",
  "eventTime": "2019-02-18T19:17:52.7587534Z",
  "id": "407dc633-1157-46ac-97e5-92175fe78099",
  "data": {
    ...
  },
  "dataVersion": "2",
  "metadataVersion": "1",
  "topic": "/subscriptions/.../resourceGroups/azureadministrator-euw-rg"
}
As you can see, the JSON document contains the following information:
- The event type (eventType)
- A timestamp telling us when an event occurred (eventTime)
- The identifier of an event (id)
- The origin of an event (subject)
You can leverage this information to implement a complete automation solution (for example, to find out when somebody changes a resource, or even just tries to change something). Now, let's try to extend this architecture and connect it to Azure Logic Apps for an even greater set of possibilities.
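If you prefer to inspect the queue programmatically instead of through Microsoft Azure Storage Explorer, a short script can pull the gathered events and filter them by eventType. The following is a minimal sketch using the azure-storage-queue Python package; the connection string placeholder, the events-queue queue name, and the Base64 fallback are my assumptions, so adjust them to your own setup:

import base64
import json

from azure.storage.queue import QueueClient

# Assumed placeholders - replace with your storage account connection string
# and the queue you selected as the destination of the event subscription.
CONNECTION_STRING = "<storage-account-connection-string>"
QUEUE_NAME = "events-queue"

queue_client = QueueClient.from_connection_string(CONNECTION_STRING, QUEUE_NAME)

for message in queue_client.receive_messages():
    # Event Grid may deliver the payload Base64-encoded; fall back to
    # decoding it when plain JSON parsing fails.
    try:
        event = json.loads(message.content)
    except (ValueError, TypeError):
        event = json.loads(base64.b64decode(message.content))

    # React only to failed resource actions, as in the sample record above.
    if event.get("eventType") == "Microsoft.Resources.ResourceActionFailure":
        print(f"{event['eventTime']}: {event['eventType']} on {event['subject']}")

    # Remove the message once it has been processed.
    queue_client.delete_message(message)

Deleting each message after it has been processed prevents the script from handling the same event again on the next run.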
As we used a Storage Queue here, we can integrate the events generated by our resource group with any other service that can read messages from that queue. For an administrator, the best choice is to use Azure Logic Apps.
Once your Azure Logic App has been provisioned, follow these steps:
- Go to the Logic app designer blade to start the process of designing a new application:
- In the designer window, you have multiple options to get started – there are many examples and templates that will speed up the process of creating a new app. For now, we need a custom one, so I selected Blank Logic App.
- On the new screen, you will be able to search for connectors you are interested in. Since we used a queue from the Azure Storage service, this is the component we are searching for:
Each connector has its own settings that you have to configure. For Azure Queues, the very first choice is to decide between the following triggers:
- Trigger an Azure Logic App when a specific number of messages is waiting in the queue
- Trigger it whenever there is a message in the queue
- In this particular scenario, I went for the latter, as I wanted the app to be triggered immediately when a new event is available. The second step is related to the storage account you want to use along with the connector. Obviously, you have to select the storage account you used previously as the destination of the gathered events:
- The last thing needed here is to configure the interval of checks for the queue. For the purpose of this exercise, I chose 5 minutes (in a real scenario, you can select any interval that suits your needs, such as every 5 seconds):
- Now, you can save the application and test whether it works. If you followed my tutorial, you should be able to see a similar result to mine:
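If you do not want to wait for a real event, you can also verify the trigger by dropping a test message into the queue yourself; with the polling interval configured above, the run should show up within about five minutes. The snippet below is a sketch that reuses the assumed connection string and queue name from the earlier example and sends a minimal, hand-crafted payload:

import json

from azure.storage.queue import QueueClient

CONNECTION_STRING = "<storage-account-connection-string>"  # assumed placeholder
QUEUE_NAME = "events-queue"                                # assumed queue name

# A minimal, hand-crafted payload that only mimics the shape of the real
# Event Grid record shown earlier - just enough to exercise the trigger.
test_event = {
    "eventType": "Microsoft.Resources.ResourceActionFailure",
    "eventTime": "2019-02-18T19:17:52Z",
    "subject": "/subscriptions/.../resourceGroups/azureadministrator-euw-rg",
}

queue_client = QueueClient.from_connection_string(CONNECTION_STRING, QUEUE_NAME)
queue_client.send_message(json.dumps(test_event))
print("Test message sent - the Logic App should pick it up on its next poll.")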
Congratulations! You have just created your very own automation pipeline that you can customize and extend to meet all your requirements.
There are over 200 different connectors available in Azure Logic Apps that you can leverage at any time to create advanced workflows that react to incoming events and send emails or SMS messages, raise alerts, or trigger reports. Azure Logic Apps are a great integration tool for those who do not want to develop a solution programmatically. As they rely on an intuitive and robust toolset, you can quickly implement the functionality you want. I strongly encourage you to give them a try and make them one of the tools you reach for regularly; they can make you much more productive and open up many additional integration opportunities.
Services such as Azure Event Grid and Azure Logic Apps become increasingly important when you are looking for flexibility and automation in Azure administration. With some experience, you can easily build complex pipelines that audit resources and gather insights into their usage and configuration. The next section includes some guidance regarding naming conventions so that you can administer your resources in an even better fashion.