DevKimchi

Author: Justin Yoo

SOAP over Azure API Management, Logic Apps and Functions

ACKNOWLEDGEMENT: This was originally posted on the Mexia blog

When we work on a service integration project for a customer’s information systems, not all of those systems use cutting-edge technologies. Rather, many of them still use legacy approaches to get integration work done. For example, some legacy applications still drop files into a designated folder so that other applications can pick them up periodically. Others support SOAP (Simple Object Access Protocol) based web services. In the .NET world, we can easily handle those SOAP web services through WCF by creating service references. Now, everything has changed: we use Azure API Management, Logic Apps and Functions for service integration more than ever.

SOAP over API Management and Logic Apps

API Management supports SOAP out-of-the-box using WSDL. However, we have to be aware that API Management has some restrictions on its WSDL support. In other words, some SOAP-based web services with complex message structures might not be suitable for API Management.

[Diagram: APIM - ??? - SOAP]

What about Logic Apps? If we create a custom connector and import a WSDL through it, we can use SOAP web services directly from Logic App instances. However, there are still concerns around Logic Apps using SOAP-based web services. As the custom connector uses API Management behind the scenes, it has the same restrictions that API Management has. Moreover, Logic Apps’ custom connector itself has limitations. Therefore, this also needs to be considered when we design our integration architecture.

[Diagram: LOGIC APPS - ??? - SOAP]

Then we have the last one standing – Azure Functions. It’s basically C# code, so we can easily make use of service references, which looks perfect. Let’s have a look.

The sample code used for this post can be found here.

Analysing Service References

When we create a service reference proxy, it contains a service client class inheriting ClientBase<TChannel>. It also has several constructors taking no parameters, string parameters, or instance parameters.
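
The generated proxy isn’t reproduced in this archive; a sketch of its typical shape, with hypothetical service and contract names, looks like:

    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.Threading.Tasks;

    [ServiceContract]
    public interface ISoapService
    {
        [OperationContract]
        Task<string> GetDataAsync();
    }

    // Typical shape of a generated WCF client (names are hypothetical).
    public partial class SoapServiceClient : ClientBase<ISoapService>, ISoapService
    {
        public SoapServiceClient() { }

        public SoapServiceClient(string endpointConfigurationName)
            : base(endpointConfigurationName) { }

        public SoapServiceClient(string endpointConfigurationName, string remoteAddress)
            : base(endpointConfigurationName, remoteAddress) { }

        public SoapServiceClient(string endpointConfigurationName, EndpointAddress remoteAddress)
            : base(endpointConfigurationName, remoteAddress) { }

        public SoapServiceClient(Binding binding, EndpointAddress remoteAddress)
            : base(binding, remoteAddress) { }

        public Task<string> GetDataAsync() => this.Channel.GetDataAsync();
    }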

Except for the first and last constructors, all the others take string parameters for binding and endpoint information, which are stored in Web.config or App.config. In fact, the configuration file looks like:
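
The actual configuration isn’t preserved here; a representative system.serviceModel section, assuming a basicHttpBinding endpoint (address and contract are placeholders), would be:

    <system.serviceModel>
      <bindings>
        <basicHttpBinding>
          <binding name="BasicHttpBinding_ISoapService" />
        </basicHttpBinding>
      </bindings>
      <client>
        <endpoint address="https://legacy.example.com/SoapService.svc"
                  binding="basicHttpBinding"
                  bindingConfiguration="BasicHttpBinding_ISoapService"
                  contract="ISoapService"
                  name="BasicHttpBinding_ISoapService" />
      </client>
    </system.serviceModel>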

Both the basicHttpBinding and endpoint nodes are used to set up the WCF service client. That’s actually not possible for Azure Functions, because it doesn’t have a Web.config file! Wow, that’s kind of critical if we want to use WCF in our Azure Functions code, yeah?

Fortunately, the last constructor of the service client accepts both a Binding instance and an EndpointAddress instance as its parameters. In other words, as long as we can create both instances and pass them to the service client as dependencies, we can still use WCF service references without the Web.config file.

SOAP over Azure Functions

Let’s have a look at the function code. Note that both binding and endpoint instances are created in code to instantiate the WCF service client. This doesn’t require the system.serviceModel node in Web.config, but only needs the actual endpoint URL, which is defined in the Application settings blade of the Azure Functions instance (equivalent to local.settings.json in our local development environment).
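
The function body isn’t preserved in this archive; here’s a minimal sketch of the idea, reusing the hypothetical SoapServiceClient proxy from earlier and assuming an app setting named SoapEndpointUrl:

    using System;
    using System.Net;
    using System.Net.Http;
    using System.ServiceModel;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Azure.WebJobs.Host;

    public static class SoapHttpTrigger
    {
        [FunctionName("SoapHttpTrigger")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequestMessage req,
            TraceWriter log)
        {
            // Binding and endpoint are created in code; no system.serviceModel section needed.
            var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
            var endpoint = new EndpointAddress(Environment.GetEnvironmentVariable("SoapEndpointUrl"));

            // The last constructor of the generated client takes both instances.
            var client = new SoapServiceClient(binding, endpoint);

            var result = await client.GetDataAsync().ConfigureAwait(false);
            return req.CreateResponse(HttpStatusCode.OK, result);
        }
    }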

Depending on the connection type, both the binding instance and an appropriate BasicHttpSecurityMode value should be carefully chosen. In addition, if necessary, user credentials like username and password should be provided from app settings.
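
For example, if the service requires basic authentication over HTTPS, the wiring might look like this (the setting names are assumptions):

    // Transport security for HTTPS endpoints; use BasicHttpSecurityMode.None for plain HTTP.
    var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
    binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

    var client = new SoapServiceClient(binding, endpoint);
    client.ClientCredentials.UserName.UserName = Environment.GetEnvironmentVariable("SoapUsername");
    client.ClientCredentials.UserName.Password = Environment.GetEnvironmentVariable("SoapPassword");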

Once we implement this and run it, we can get an expected result like:

[Screenshot: request to the Azure Function through Postman]

Service Client as Singleton Dependency

Now we can send requests to SOAP web services through Azure Functions. There’s one more thing to keep in mind from the implementation design perspective. As we can see, the WCF service client is just a proxy, and the majority of SOAP applications are monolithic. In other words, the service client is reusable over time, so it’s always a good idea to register it within an IoC container as a singleton instance. I have written many posts about dependency management in Azure Functions, and this is the most recent one, which is worth checking. With this approach, the service client instance can be registered as a singleton and injected into functions where necessary.
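
A minimal Autofac registration sketch, reusing the hypothetical SoapServiceClient proxy and its ISoapService contract from earlier:

    var builder = new ContainerBuilder();

    // Build the WCF proxy once and share it across function invocations.
    builder.Register(_ => new SoapServiceClient(
               new BasicHttpBinding(BasicHttpSecurityMode.Transport),
               new EndpointAddress(Environment.GetEnvironmentVariable("SoapEndpointUrl"))))
           .As<ISoapService>()
           .SingleInstance();

    var container = builder.Build();

Functions can then resolve ISoapService from the container instead of constructing the proxy themselves.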

Which One to Choose?

So far, we have discussed which Azure service is good to handle SOAP webservices. Here’s a simple chart for comparison:

                              Logic Apps    API Management    Functions
Connector Limit                   x               o               o
Complex Message Structure         x               x               o
(x: limitation applies, o: no issue)
  • Azure Logic Apps has connector limitations – the number of connectors and the number of requests per connector. So frequent access to a SOAP web service through a Logic App wouldn’t be ideal.
  • Azure API Management has restrictions on complex SOAP message structures. As Azure Logic Apps relies on API Management, it has the same restrictions.
  • Azure Functions doesn’t have a concept of connectors and can directly use WCF proxy libraries, so it has virtually no limitations, but it requires heavier coding effort.

From these perspectives, we can choose the most appropriate service for our integration application structure. So, have you decided what to choose?

Securing SAS Token from Azure Logic Apps

ACKNOWLEDGEMENT: This was originally posted at https://blog.mexia.com.au/securing-sas-token-from-azure-logic-apps

When we use Azure Logic Apps, especially with an HTTP trigger, their endpoint URLs are overwhelmingly long. Here is an example:
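
The original URL isn’t preserved in this archive, but a Logic Apps HTTP trigger endpoint generally takes this shape (angle brackets mark placeholders):

    https://prod-00.<region>.logic.azure.com:443/workflows/<workflow-id>/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<sas-signature>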

The purpose of the SAS token used in Logic Apps is authentication and authorisation, which is good. The problem is that this SAS token belongs to the querystring. In other words, a bad guy out there can easily take the token and send requests for inappropriate purposes. The querystring itself is protected in transit as long as we use an HTTPS connection. However, querystrings can always be logged, secure connection or not. Therefore, it’s always good practice to hide sensitive information from the querystring as much as we can. Instead, it should be passed through a request header.

Unfortunately, Azure Logic Apps doesn’t currently support passing the SAS token through a request header. Therefore, we have to find a workaround to give more protection to Logic Apps. In this post, I’m going to show how to secure the SAS token using Azure API Management and Azure Functions Proxies.

Preparing Logic App Instance – HTTP Trigger

First of all, we need a Logic App instance with an HTTP trigger. As this is just an example, it has a relatively simple workflow – whatever it receives through the request body, it just returns as the response payload.
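
As a sketch, the workflow definition behind it is roughly this echo shape (trimmed to the essentials):

    {
      "definition": {
        "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
        "triggers": {
          "manual": {
            "type": "Request",
            "kind": "Http",
            "inputs": { "schema": {} }
          }
        },
        "actions": {
          "Response": {
            "type": "Response",
            "inputs": {
              "statusCode": 200,
              "body": "@triggerBody()"
            },
            "runAfter": {}
          }
        },
        "outputs": {}
      }
    }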

When we send an HTTP request through Postman, we expect to see a result like this:

Nothing special, huh? Now, let’s hide the SAS token from querystring.

Using Azure API Management

Azure API Management offers a policy management feature for comprehensive control over registered APIs. Therefore, we can remove the Logic App’s SAS token from the querystring and put the token value into the request header through policy management. It sounds somewhat complicated, but it’s actually not that hard. Let’s have a look. In API Management, we can easily import a Logic App HTTP trigger.

Select the Logic App instance you want to import, enter the other details – URL suffix and products – then click the Create button. It’s now imported. How easy!

This import does all the dirty work for us. All we need to do now is to set up policies for the Logic App instance. Let’s get into the API structure screen. We’re only interested in the Frontend and Inbound processing tiles. Click the pencil icon on the Frontend tile.

In the Headers tab, we need to define a header key that accepts the SAS token value. You can call it whatever you think is meaningful; let’s call it X-Sas-Token for now. As this header key is mandatory for this API call, we must tick the required box. We don’t need an actual value here because we’ll send it with the HTTP request. Once everything is done, save it.

Now we need to actually set up the inbound request processing policies. Click the pencil icon on the Inbound processing tile – or rather, click the little triangle right next to the pencil and select the Code editor menu.

Then we’ll be able to see a set of policies written in XML. Look at the rewrite-uri node. Its template attribute determines where API requests coming through API Management are eventually redirected – in this case, the Logic App instance. When you look at the template value, it’s the querystring we need to work on.

It contains liquid template markup surrounded by double curly braces, like {{lamanual5a27d3ff5eec5fd4fc847565}}. This is actually the SAS token we’re looking to replace. lamanual5a27d3ff5eec5fd4fc847565 is a key defined within the API Management instance.

This is not the key we’re going to keep using; it’s the part we’re replacing with the request header value. Let’s get back to the policy editor and remove the liquid template part from the template attribute of the rewrite-uri node.
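
The exact template isn’t preserved here, but the change is roughly this (ampersands XML-escaped, path shortened):

    <!-- Before: the sig value comes from a named value stored in API Management. -->
    <rewrite-uri template="/manual/paths/invoke?api-version=2016-06-01&amp;sp=%2Ftriggers%2Fmanual%2Frun&amp;sv=1.0&amp;sig={{lamanual5a27d3ff5eec5fd4fc847565}}" />

    <!-- After: the sv/sig pair is removed; it will be rebuilt from the request header. -->
    <rewrite-uri template="/manual/paths/invoke?api-version=2016-06-01&amp;sp=%2Ftriggers%2Fmanual%2Frun" />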

Now we need to get the SAS token value from the request header. Add another policy node called set-variable and give it a name of sasToken. For its value, we can use a policy expression to access the request header. Policy expressions are pretty much C#-compliant, so if you’re used to C#, you can easily pick them up.
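
A sketch of that policy, using the documented context.Request.Headers accessor:

    <set-variable name="sasToken"
                  value="@(context.Request.Headers.GetValueOrDefault("X-Sas-Token", ""))" />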

Now we’ve got the SAS token value from the request header; we need to append it to the querystring. Let’s add another policy node called set-query-parameter. We give it the parameter name sv because the SAS token part of the querystring always consists of sv and sig.

Of course, the sp parameter is also part of the SAS token, but it’s already part of the template and won’t be changing, so we’re not considering it here.

Set the parameter value like below and save it.
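
I can’t vouch for the exact original value, but one expression matching the sv and sig layout described above would be (ampersand XML-escaped):

    <set-query-parameter name="sv" exists-action="override">
      <value>@("1.0&amp;sig=" + (string)context.Variables["sasToken"])</value>
    </set-query-parameter>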

Now we’re all set. Let’s send a request through Postman – to the API Management URL, with an X-Sas-Token header. This is what we expected, yeah?

That’s how Azure API Management can secure an Azure Logic App instance’s SAS token. By doing so, we can keep the SAS token in a secure and separate place.

Using Azure Functions Proxies

I know – API Management is not cost-effective for this purpose alone. Unless we’re heavily using API Management anyway, it wouldn’t be a good choice. Fortunately, there’s an alternative – Azure Functions. With Azure Functions Proxies, we can achieve the same goal. Let’s have a look. Open an Azure Functions instance, go to the Proxies blade, and create a new proxy.

Here is the interesting part. When we put the Logic App instance’s endpoint URL into the Backend URL field, we need to read the SAS token value from the request header. Fortunately, Azure Functions Proxies allow us to use request and response parameters. Therefore, in order to get the SAS token value from the header, we replace the SAS token part of the URL with {request.headers.x-sas-token}. By doing so, the proxy reads the header value and appends it to the querystring of the Logic App instance.
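
Expressed in proxies.json form, the proxy might look like this (route name and workflow details are placeholders):

    {
      "$schema": "http://json.schemastore.org/proxies",
      "proxies": {
        "LogicApp": {
          "matchCondition": {
            "methods": [ "POST" ],
            "route": "/logicapp"
          },
          "backendUri": "https://prod-00.region.logic.azure.com/workflows/WORKFLOW_ID/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig={request.headers.x-sas-token}"
        }
      }
    }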

Now we’re all set. Send an HTTP request through Postman to the function proxy endpoint, with the X-Sas-Token header value.

That’s how Azure Functions Proxies can secure an Azure Logic App instance’s SAS token. This approach is far easier than using API Management, isn’t it?

Which One to Choose?

So far, we have looked at both Azure API Management and Azure Functions Proxies to secure the SAS token for Azure Logic App instances. Both provide a great way of securing Azure Logic Apps. If you’re looking for a much simpler and easier way, Azure Functions Proxies is good for you. On the other hand, if you want a more controlled and integrated way, Azure API Management will be a good fit.

Outbound IP Registration to Azure SQL Using Azure Functions

As Azure SQL Database is PaaS, it has its own firewall settings. Due to its white-list nature, only traffic from registered IP addresses is allowed to access the server instance. Of course, there is an option to allow all Azure resources to access the server. However, this is not secure, because malicious traffic might come from other Azure resources. Therefore, registering only the outbound IP addresses assigned to other Azure resources, like Azure Web App instances, is strongly recommended.

Interestingly, according to this article, the outbound IP addresses assigned to a specific Azure Web App instance can change from time to time, when the app instance is restarted or scaled. If those outbound IP addresses change, there is no way to let the Azure SQL Database server instance know, other than updating them manually. I expected Azure Event Grid to support this scenario but, at the time of this writing, it’s apparently not yet possible. However, there is still a workaround if we use Azure Functions. In this post, I’m going to show how to update the firewall rules of an Azure SQL Database instance, using Azure Functions and the Azure Fluent SDK.

Why Fluent SDK?

There is the Azure SDK library, and the Fluent SDK used to be a part of it. From the functionality point of view, both are the same. However, the Fluent SDK offers a more succinct way of handling Azure resources and provides better code readability. For example, this is how to get authenticated and authorised to access Azure resources:
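
The original snippet isn’t preserved in this archive; a representative sketch, following the Fluent SDK’s documented authentication pattern, would be:

    // Requires Microsoft.Azure.Management.Fluent; values come from app settings.
    var clientId = Environment.GetEnvironmentVariable("ClientId");
    var clientSecret = Environment.GetEnvironmentVariable("ClientSecret");
    var tenantId = Environment.GetEnvironmentVariable("TenantId");
    var subscriptionId = Environment.GetEnvironmentVariable("SubscriptionId");

    // Authenticate with a service principal and scope to one subscription.
    var credentials = SdkContext.AzureCredentialsFactory
        .FromServicePrincipal(clientId, clientSecret, tenantId, AzureEnvironment.AzureGlobalCloud);

    var azure = Azure.Configure()
                     .Authenticate(credentials)
                     .WithSubscription(subscriptionId);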

Can you see what the code looks like? It looks dead simple, yeah? With this Fluent SDK, let’s move on.

The sample code used in this post can be found here.

Building an HTTP Trigger

If I can draw a user story reflecting this scenario, it would be:

  • AS a DevOps engineer,
  • GIVEN the name of Azure Resource Group,
  • I WANT to retrieve all outbound IP addresses from Azure Web App instances and all firewall rules registered to Azure SQL Databases, from the resource group,
  • SO THAT new IP addresses are registered to the firewall rules and unused ones are deleted from them.

First of all, for local debugging purposes, it’s always a good idea to start with an HTTP trigger. Let’s create a simple HTTP trigger function like:
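
Something like this minimal scaffold (the function name is arbitrary):

    [FunctionName("UpdateSqlFirewallRules")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequestMessage req,
        TraceWriter log)
    {
        log.Info("Retrieving outbound IP addresses ...");

        // Placeholder until the real async calls below are added.
        await Task.CompletedTask;

        log.Info("Firewall rules updated.");

        return req.CreateResponse(HttpStatusCode.OK);
    }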

This is just a scaffolded function, so it does nothing with Azure resources but leaves logs on the console. Let’s put in the basic authentication logic using the Fluent SDK.
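
Folding the earlier Fluent snippet into the function, with the Config class described below:

    // Authenticate inside the function body, using the strongly-typed Config class.
    var credentials = SdkContext.AzureCredentialsFactory
        .FromServicePrincipal(Config.ClientId, Config.ClientSecret, Config.TenantId,
                              AzureEnvironment.AzureGlobalCloud);

    var azure = Azure.Configure()
                     .Authenticate(credentials)
                     .WithSubscription(Config.SubscriptionId);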

There are a few noticeable spots:

  1. Azure credentials are handled by SdkContext and Fluent API.
  2. Azure context is handled by Azure and Fluent API.
  3. All environment variables are converted to Config, a strongly-typed object.

The Config is a static class that exposes environment variables. You can still use ConfigurationManager.AppSettings["KEY"] for it, but ConfigurationManager won’t be a good idea once Azure Functions moves to .NET Standard. So, it’s much safer to use Environment.GetEnvironmentVariable("KEY"). Of course, this Config class and its properties wouldn’t need the static modifier if you considered dependency injection; for convenience’s sake, I’m sticking with the static approach for now. Here’s the code:
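
A sketch of that class; the property and key names are assumptions:

    public static class Config
    {
        // Each key maps 1:1 to an app setting / local.settings.json entry.
        public static string ClientId => Environment.GetEnvironmentVariable("ClientId");
        public static string ClientSecret => Environment.GetEnvironmentVariable("ClientSecret");
        public static string TenantId => Environment.GetEnvironmentVariable("TenantId");
        public static string SubscriptionId => Environment.GetEnvironmentVariable("SubscriptionId");
        public static string ResourceGroupName => Environment.GetEnvironmentVariable("ResourceGroupName");
    }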

Now we need to get the outbound IP addresses from the web apps in the given resource group. Between the two log lines, put several lines of code that retrieve all web app instances and fetch all their outbound IP addresses.
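
With the Fluent SDK, that part might look like the following sketch (inside the async function body):

    // Retrieve every web app in the resource group, then flatten their outbound IPs.
    var webApps = await azure.WebApps.ListByResourceGroupAsync(Config.ResourceGroupName);
    var outboundIps = webApps.SelectMany(app => app.OutboundIPAddresses)
                             .Distinct()
                             .ToList();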

Those IP addresses need to be registered in the firewall settings of each Azure SQL Database instance. If there are discrepancies between the outbound IP addresses and the registered IP addresses, the unnecessary IPs should be removed from the firewall rules and only the newly updated IP addresses should be added. Let’s finish up the function code. Once all Azure SQL Database instances are populated, the code loops through them. In the loop, all registered IP addresses are fetched and compared to the outbound IPs, so that we know which IP addresses are to be removed and which are to be inserted.
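
A sketch of that loop, assuming the Fluent SDK’s firewall rule API and rule names derived from the IP address:

    var sqlServers = await azure.SqlServers.ListByResourceGroupAsync(Config.ResourceGroupName);
    foreach (var server in sqlServers)
    {
        var rules = server.FirewallRules.List();

        // Remove rules that no longer match any web app outbound IP.
        foreach (var rule in rules.Where(r => !outboundIps.Contains(r.StartIPAddress)))
        {
            rule.Delete();
        }

        // Register outbound IPs that are not in the firewall rules yet.
        var registered = rules.Select(r => r.StartIPAddress).ToList();
        foreach (var ip in outboundIps.Where(p => !registered.Contains(p)))
        {
            server.FirewallRules
                  .Define($"webapp-{ip.Replace(".", "-")}")
                  .WithIPAddress(ip)
                  .Create();
        }
    }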

Yeah, the coding part is done. Now let’s run this on our local machine. The database instance has the following firewall rules – allow all internal IPs from Azure resources (pointed to by the red arrow), one web app outbound IP (13.x.x.x) and one public IP (115.x.x.x) from my laptop.

Punch the F5 key and send an HTTP request through Postman. The function runs smoothly. Now we expect that all the internal Azure IP addresses and my public IP will be removed, while the existing web app IP remains.

Go back to the Azure Portal and check the firewall settings. As expected, all internal Azure IP addresses have been blocked (pointed to by the red arrow), my public IP has been removed, and the other outbound IPs have been registered.

Unfortunately, there is no SDK for Azure Database for MySQL at the time of this writing. In order to apply this approach to it, we would have to use the REST API to register outbound IP addresses.

Converting to a Timer Trigger

Once you confirm this works fine, you can simply copy all the code bits into a timer trigger function so that it runs on a schedule. The following code snippet makes the timer function trigger once a day at midnight UTC.
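
A sketch of the timer trigger; the six-field CRON expression 0 0 0 * * * fires at 00:00 UTC daily:

    [FunctionName("UpdateSqlFirewallRulesTimer")]
    public static async Task Run(
        [TimerTrigger("0 0 0 * * *")] TimerInfo timer,
        TraceWriter log)
    {
        log.Info("Retrieving outbound IP addresses ...");

        // Copy the code bits from the HTTP trigger here.
        await Task.CompletedTask;

        log.Info("Firewall rules updated.");
    }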


So far, we have walked through how to check Azure Web App instances’ outbound IP addresses regularly and register them in the firewall rules of an Azure SQL Database instance. As I stated above, once Azure Event Grid supports Azure Web App, this will become much easier.

Azure Functions with IoC Container

I’ve been talking about managing dependencies and unit testing in Azure Functions quite a few times in previous articles.

Throughout those articles, the service locator pattern always took centre stage in dependency management. The combination of Common Service Locator and Autofac certainly convinced me it was the only way to handle dependencies for Azure Functions.

A few weeks back, I was asked to take a coding test before being engaged with a client. The topic was simple – given a JSON payload as a source of truth, I needed to build an application that processes the payload to display an instructed result. I, of course, decided to use Azure Functions to fulfil the requirements. Because the test itself was pretty straightforward, it could have been done within a couple of hours with spaghetti code. However, they also wanted a degree of over-engineering, including dependency injection, SOLID principles, unit testing, etc.

So, I started writing an Azure Functions application for it. As soon as I started, I realised that:

“Why can’t I upgrade my Autofac version? Is it because of the common service locator locks-in the Autofac version?”

It turned out that Azure Functions doesn’t yet support assembly binding redirects out-of-the-box. Apparently, it’s possible for libraries used internally; however, that doesn’t apply to my case, where the Azure Functions app itself has dependencies that need binding redirects. Even though there is a workaround for this concern, I was reluctant to use that approach for Autofac.

What if I could use Autofac directly, without relying on Common Service Locator? Could I do it? It would be worth trying, yeah? Let’s move on.

Here’s my coding test repository as an example.

No More ServiceLocatorBuilder

In my previous post, I introduced ServiceLocatorBuilder for Autofac integration like:
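
The snippet isn’t preserved in this archive; it was roughly this shape, wrapping Autofac registrations behind the Common Service Locator abstraction:

    // Requires the Autofac, Autofac.Extras.CommonServiceLocator and CommonServiceLocator packages.
    public class ServiceLocatorBuilder
    {
        private readonly ContainerBuilder _builder = new ContainerBuilder();

        public ServiceLocatorBuilder RegisterModule<TModule>() where TModule : Module, new()
        {
            this._builder.RegisterModule<TModule>();
            return this;
        }

        public IServiceLocator Build()
        {
            return new AutofacServiceLocator(this._builder.Build());
        }
    }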

This was called within FunctionFactory like:
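
Roughly like this (AppModule is a hypothetical Autofac module):

    public class FunctionFactory
    {
        private readonly IServiceLocator _locator;

        public FunctionFactory()
        {
            this._locator = new ServiceLocatorBuilder()
                                .RegisterModule<AppModule>()
                                .Build();
        }

        public TFunction Create<TFunction>()
        {
            // Function instances are resolved through the service locator.
            return this._locator.GetInstance<TFunction>();
        }
    }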

It seemed redundant. I wasn’t happy about it, but justified to myself that this was the only way to do it. Now is the time to rewrite it. Let’s do it.

New FunctionFactory

In the constructor of the new FunctionFactory class, we instantiate the Autofac.IContainer instance directly from an Autofac.ContainerBuilder instance. Then, within the Create<TFunction>() method, the given function type is resolved directly from the Autofac.IContainer instance. Here’s the code.
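
A minimal sketch of the rewrite:

    public class FunctionFactory
    {
        private readonly IContainer _container;

        public FunctionFactory(ContainerBuilder builder)
        {
            // The container comes straight from Autofac; no service locator involved.
            this._container = builder.Build();
        }

        public TFunction Create<TFunction>()
        {
            // Resolve the function instance directly from the container.
            return this._container.Resolve<TFunction>();
        }
    }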

HttpTrigger with FunctionFactory

The basic usage is the same as with the previous version of FunctionFactory. Simply create an instance and use it within the function method.
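
For example, with hypothetical IGetProductsFunction and AppModule types:

    public static class GetProductsHttpTrigger
    {
        private static readonly FunctionFactory Factory = new FunctionFactory(BuildContainerBuilder());

        [FunctionName("GetProductsHttpTrigger")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", Route = "products")] HttpRequestMessage req,
            TraceWriter log)
        {
            var function = Factory.Create<IGetProductsFunction>();
            return await function.InvokeAsync(req).ConfigureAwait(false);
        }

        private static ContainerBuilder BuildContainerBuilder()
        {
            var builder = new ContainerBuilder();
            builder.RegisterModule<AppModule>();
            return builder;
        }
    }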

With this approach, you no longer need the service locator pattern for your dependency management. I hope this helps.

Renewal

Finally, the new website has now launched!

When my Azure subscription was about to expire, I was unable to back up the blog posts for migration, which was a shame. Eventually, I lost the old DevKimchi blog. Fortunately, I had a backup of the posts themselves, so I managed to publish them using GitHub Pages. There are still some images missing, though.

My initial plan with this renewed blog was:

  1. To restore all the blog posts from the GitHub pages, and
  2. To run this one as if nothing happened.

However, life was not that easy. The restore process didn’t work very well. Therefore, Plan B was executed:

  1. To keep the existing GitHub pages as is, with a different domain name, http://old.devkimchi.com, and
  2. To run this new one as a sort of Reboot.

Now you can see my old blog posts at http://old.devkimchi.com, while this site will be filled with new posts.

Stay tuned!

Copyright © 2017 DevKimchi
