Introducing our comprehensive webinar on “The Ultimate Logic Apps Best Practices Guide.” Join us as we delve into the intricacies of optimising Logic Apps for performance, security, and efficiency. Led by industry experts Andrew Rivers, CEO and co-founder of 345 Technology, and Dave Phelps, a senior cloud solution architect at Microsoft, this session is packed with invaluable insights and practical advice.
Discover the nuances between stateful and stateless Logic Apps, the impact of built-in versus managed connectors, and the significance of synchronous versus asynchronous processing. We’ll also explore the importance of action management, the art of scaling Logic Apps effectively, and strategies for organising workflows to maximise performance.
Whether you’re new to Azure Logic Apps or looking to enhance your existing solutions, this webinar offers a wealth of knowledge, from introductory concepts for newcomers to advanced strategies for seasoned professionals. Engage with live polls, Q&A sessions, and real-world examples to fully grasp the potential of Logic Apps in your integration projects.
Don’t miss out on this opportunity to elevate your integration solutions with Azure Logic Apps. Join us for a session filled with actionable insights and become part of the action.
Here’s the video transcript:
Hello, everyone. Welcome to the Ultimate Logic Apps Best Practices Guide. You won’t believe what’s just happened. Literally 20 seconds ago, they did a fire alarm test in the building, but fortunately, it’s just finished. Welcome, everyone. It’s really good to have so many of you here live, both on StreamYard and on LinkedIn. We’ll just do the intros, let a few more people join, and then we’ll get into the meat of the presentation.
So, what we’re going to cover today is a bit of introductions to me and Dave. We’re going to look at the problem space overall, just a very quick intro to that for people that aren’t really deep into this topic. And then, what we’re really going to focus on is Logic Apps’ performance today. Hopefully, we’ll have a bit of time for Q&A.
Please feel free to join us at Slido because there will be some polls. If you actually scan this QR code, you can then join in and be part of the action. We’d love to hear your opinions on Slido.
Okay, speakers. I’ve got Dave Phelps, who’s a senior cloud solution architect at Microsoft. Dave, would you like to say a few words about yourself and what you do in the integration Logic App space?”
“Yeah, sure. So, I’m Dave. As Andrew said, I’m a cloud solution architect at Microsoft. I’ve been at Microsoft now for about seven years. I specialize in the integration space on Azure. That’s typically Azure services such as Logic Apps, Functions, API Management, Service Bus, and Event Grid. I work with customers to help them on their journeys into the cloud. And as I said, integration is my specialist subject. So, really pleased to be part of this webinar.”
“Great stuff. Thanks, Dave. And as for me, I’m obviously Andrew, Andrew Rivers. I’m the CEO and co-founder of 345 Technology. Now, you’ve probably heard of Microsoft, so we probably don’t need to introduce Microsoft too much. But if you don’t know 345 Technology, we’re the number one integration partner on Azure. We do the full solution lifecycle, from envisioning and design through build and support. We’ve got just over 30 people in the UK, and we’re focused solely on the integration space. We have a product that we call 345 Air, which is a rapid-start footprint for Azure Integration Services, so you can get an out-of-the-box, pre-canned integration infrastructure up and running in record time. And that’s us.
So, talking a bit about Logic Apps’ ultimate best practices. Now, we are going to have more than one webinar on this because it’s such a massive topic. We realized that when Dave and I started putting this together: there are so many different elements to Logic Apps best practices. There’s monitoring and alerting, there’s security, there’s DevOps and FinOps for a start, but the one that everyone talks about first is performance. So, we are going to start out of the box and talk mostly about performance today. One other thing I would say as well: if you leave comments or questions on Slido, or use the comments on the webinar, we’ll address them during the Q&A later. We probably aren’t going to address questions as we go through. So, looking at the problem space first, we’ve got just a very quick introduction to where Logic Apps fit in the integration picture.
What is integration? The way I usually describe it is that integration is the movement of data between two or more systems. And obviously, since almost every organization has two or more systems that need data moved between them, effectively almost every organization has an integration need. But not everyone recognizes it as a discipline in itself. And it is a discipline, with a toolset of its own and products that address this space. In terms of the capabilities we need for integration: APIs and connecting things together, that’s a key part. Workflow is a key part. Messaging, event handling, streaming data, and the ability to embed code and logic in there somewhere so you can actually do some work. Those are the main capabilities that underpin integration. In terms of Azure and the Azure product stack, as Dave said when he reeled off the product set: APIs are covered by API Management. Logic Apps actually cover a few things, because Logic Apps can be exposed as APIs, they can connect to APIs, and they can do workflow. So Logic Apps are a bit of a Swiss army knife in the integration toolset. Event Grid helps us handle events. Event Hubs is a Kafka-compatible streaming data service. Service Bus helps us deal with messages. Data Factory helps us deal with large quantities of data, often for reporting and BI purposes. And then Azure Functions allow us to write serverless bits of code that encapsulate business logic. So in terms of the overall integration space, there’s a suite of services provided by Azure, and we’re really focusing on Logic Apps, which do a lot of the heavy lifting in this space.
Without further ado, you can rescan the QR code, because I’ve got a little bit of a poll here. So please join in and just let us know: are you using Logic Apps Standard, are you using Consumption Logic Apps, are you using both, or are you using neither? You should be able to see the code there. Okay, we’ll start to get some results in. Very interesting that a lot of people are using both; we’ll talk a little bit about that in a second. Okay, thanks, everyone. I’m going to move on in just a second, so get your final votes in now. It’s really interesting that so many of you are using both. And for those of you who are using neither, we’re really glad to have you aboard and hope you get something out of this. So let’s move on. I’m going to talk about Logic Apps performance, and what we’re going to cover is stateful versus stateless. We’re going to cover built-in connectors versus managed connectors, and synchronous versus asynchronous. We’re going to look at the effect that the number of actions has. We’re going to look at scaling, and we’re going to look at the relationship between Logic Apps and workflows and what that does. Again, back on the poll: next poll, hot off the press. Please let us know whether you are experiencing any performance issues on Logic Apps, because if you are, then this is definitely the webinar for you. And if you’re not, I’m glad. I’ll give you another minute to get some answers in. It’s already looking really interesting. Hopefully, by the end of this webinar, you’ll have a lot of actionable things you can do to address some of those performance issues. And if not, then people like me and Dave are around to have a chat, so you can take that away and improve it. But an overwhelming majority are having some performance issues. Let’s draw a line on the poll there. 
Thanks so much for responding, and we’ll move on.
Okay, so factors for performance. In the world of computing, clouds move things on, but ultimately it still boils down to the same things: you’ve got CPU, you’ve got memory, you’ve got disk IO, and you’ve got network. The difference when moving into the cloud is that a lot of things behind the scenes actually talk via HTTP, so you have a lot more network, but it’s much easier to scale out CPU and memory. And every piece of technology is built differently. The way that Logic Apps are put together, you have a hosting plan, which is effectively an App Service plan. Behind the hosting plan are virtual machines that are running your Logic Apps. They can scale out, so you can set minimum and maximum limits for scaling, and you can also set the size of the instances. On that hosting plan you host Logic Apps, and a Logic App, and we’re talking Logic Apps Standard here rather than Consumption, can host multiple workflows, which you can then treat as a unit of deployment. You develop a Logic App and deploy it onto a hosting plan, and you can deploy multiple Logic Apps onto a hosting plan as well. And then Logic Apps themselves talk to storage accounts, especially when we get to talking about stateful Logic Apps. Just bear in mind that behind every stateful Logic App, there’s a storage account that’s being written to. So the first thing we’re going to talk about is stateful versus stateless. Dave, do you want to take us through some of the main points here?”
“Yeah, sure. So if we think about how a Logic App works with a stateful workflow, just looking at the one on the screen here, what’s actually happening under the hood is that when a request is received, it’s being written onto a set of queues in the storage account. Those queues are then being watched by workers that are running on the compute, and the actions are picked up and executed. When an action completes, its state is written back to the storage account, and it’s then picked up again by another worker when the next action runs. You can see that the reason this is really important is for scenarios where the workflow may be long-running. Maybe there’s some kind of process where you need to wait for an event that could happen at some point in the future. Or maybe we’re talking to some kind of service or database that might be unavailable, in which case we want the workflow to be able to recover from a failure, and for that retry process to be reliable. So it’s very well suited to long-running processes, and to business processes where it’s really important that something completes. Stateful workflows do scale out, but the speed of scale-out is less than that of HTTP-based scale-out, which we’ll see in a few seconds. And are we moving on to stateless?”
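To make the checkpointing behaviour Dave describes a bit more concrete, here is a toy Python sketch of a workflow engine that persists state after every action so an interrupted run can resume from where it left off. This is purely illustrative: the class, the storage dictionary, and the run-state shape are invented for this sketch, not the actual Logic Apps runtime.

```python
class StatefulWorkflow:
    """Toy model of a stateful workflow: state is checkpointed to
    'storage' (standing in for the storage account) after each action."""

    def __init__(self, actions, storage):
        self.actions = actions   # ordered list of (name, fn) pairs
        self.storage = storage   # dict standing in for the storage account

    def run(self, run_id, trigger_input):
        # Resume from the last checkpoint if this run was interrupted,
        # otherwise start fresh from the trigger input.
        state = self.storage.get(run_id, {"next": 0, "data": trigger_input})
        for i in range(state["next"], len(self.actions)):
            name, fn = self.actions[i]
            state["data"] = fn(state["data"])    # execute the action
            state["next"] = i + 1
            self.storage[run_id] = dict(state)   # persist after each action
        return state["data"]


# A two-action workflow: double the input, then add one.
storage = {}
wf = StatefulWorkflow([("double", lambda x: x * 2), ("inc", lambda x: x + 1)], storage)
result = wf.run("run-1", 5)   # checkpoints twice on the way through
```

The round trip to storage per action is exactly the overhead the stateless comparison later in the session avoids, which is the trade against resumability.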
“Yes, move on to stateless now.”
“Yeah, so a stateless workflow, in contrast to stateful, as the name would imply, runs completely in memory. As these actions on the screen are running, there’s no persistence going on to storage, so you can get much higher throughput and much lower latency when using a stateless workflow. Take the example of a website, where you’ve got a user who clicks a button and it calls a workflow. If that workflow is just reading some data from a database, there really isn’t any need for it to be stateful, because we don’t need that persistence. And even if we did, we’d have to think about what happens to that user if the Logic App were to go into some kind of retry mechanism that might take minutes to complete. The user’s probably long gone; they’ve just refreshed the page and clicked the button again. So even in that scenario, it may still be beneficial to use a stateless workflow. And with HTTP-based triggers, because behind Logic Apps is the Functions runtime running on essentially premium Functions, we have this concept of pre-warmed instances. So when an HTTP request is received, we have an instance that’s ready to go. As more traffic comes in, we add more instances, but those pre-warmed instances are pre-provisioned so that when we need to scale out, we can do so immediately. So if you have an HTTP-based request executing a stateless workflow, we can get some really good performance by using these pre-warmed instances. The one thing about stateless is that you don’t get run history, not out of the box anyway; you can enable it. But Application Insights, if it’s configured for that stateless workflow, will write to the various tables that App Insights uses, like requests, exceptions, etc. 
And so you can usually gain the insights that you want using Application Insights with stateless workflows. Stateful workflows, for that matter, should definitely be enabling App Insights too, but that’s a topic for another day. Okay, so one of the things we’ve done is take a workflow. If we look at this workflow on the right, here’s one that Dave put together earlier: it receives an HTTP request, takes the current time, creates a response, calls a backend service, and then returns the response to the caller, all in a synchronous HTTP call. We’ve got two flavours of this, a stateless one and a stateful one, and we’ve run them through Azure Load Testing to actually apply some load. Now, running a load test does not make great TV, I’ll say that right now, so in a very premeditated way we’ve gone and got the results first, because we don’t need to wait five minutes to see a load of instances spin up. If we look first at some baseline results for HTTP using a stateful Logic App, and chip in on this if you like, Dave: on one instance, we’ve been able to do 36 requests per second on average, but over time, as the test’s gone on, there’s a fairly slow ramp-up and then we’re peaking at almost 50 requests a second. 
And similarly with latency: on startup, we’re talking getting on for two and a half seconds, and that latency gradually comes down to just over half a second, around 600 milliseconds. Every workflow is different, but that’s just the baseline. Now, if we take that exact same Logic App and convert it to stateless, we’ve gone from 36 to 125 requests per second that the Logic App has been able to handle, and the average latency has gone down from over 600 milliseconds to 241. So we’ve cut the latency by about 60 percent and been able to more than triple the throughput. That’s just an indication of what you might get with stateless versus stateful Logic Apps in terms of performance. Do you have any immediate thoughts on that, Dave?”
“Yeah. With the stateful one, what’s interesting is that we’re receiving HTTP requests, but because we have a number of actions that need to execute, those are being written onto queues, as I mentioned, and being picked up and processed by workers. So it might be that we need additional compute in order to cope with that additional demand, because remember, a stateful workflow is using queues and storage underneath, for good reason. So what we end up with are the HTTP requests coming in being serviced, but in order for the response to go back, we are reliant on those actions being written onto queues and processed. And it can take some time to add that compute in the event that we need to scale out, because it isn’t HTTP at that point. Whereas with the stateless workflow, you can see on the right there that the response time comes down really quickly, because we can scale up faster and we don’t have any dependencies on workers picking up other actions. The new compute that this needs is satisfied by the scale-out of the HTTP 
requests coming in and using those pre-warmed instances that I mentioned earlier.”
“Okay, good. So really, just to summarize, and we’re going to use this format for a lot of things, in terms of what you may be missing and what the takeaways are: stateful Logic Apps are really good for resilience and recoverability, because you can resume them when you have an error, and you’ve got a full run history, so you know what happened. People tend to use them by default; I think a lot of people maybe aren’t thinking about stateful versus stateless at all. But stateful Logic Apps, like Dave said, are always reading from and writing to these queues, so they’re naturally going to be more resource-hungry. With stateless, you don’t get the run history, but there are other ways of getting that, such as Application Insights. And then best practices. The first thing is just to think about whether you need things to be persisted or not. Like Dave said, if something’s happening in an HTTP request, does it need to be persisted? Because you’ve got a caller there that can retry. Whereas if it’s fire and forget, maybe you do need persistence, but maybe in a fire-and-forget scenario the latency isn’t an issue. So you’ve got a couple of things to weigh there. One of the other things you can look at is that, because a Logic App contains multiple workflows, you can refactor your workflow so that some of it is stateless and some of it is stateful. So maybe, for resilience, you kick off a stateful workflow, but large parts of the work and the actions inside it can be performed by a stateless one that you call directly into. Anything else you’d like to add on the best practices there, Dave?”
“Yeah, I think what you just said is very valid. If we need persistence, maybe not everything needs to be persisted. So if you’re calling out to an HTTP endpoint, for example, or reading data from a database, that could be offloaded: you could kick off with a stateful workflow, but it could call a stateless one that does that kind of read operation, and then go back to the stateful one to do whatever needs to be done, like writing to a service or a queue where you want that to be guaranteed to happen.”
“Okay, so I think we’ve covered a lot on stateful versus stateless, and that’s a really big part of performance. What we’re going to look at next is a little bit about built-in versus managed connectors, and again some of the key points. A built-in connector effectively executes in the same process as the Logic App, so it’s actually running under your hosting plan on a thread there, whereas the managed connectors are hosted in Azure in a completely separate process that you connect to via HTTP. So they’re a very, very different way of running things. Now, for performance, as you can imagine, because you can auto-scale your hosting plan, the built-in connectors will scale naturally with that. But also, for a lot of our customers, security is a paramount concern, and they use a lot of private endpoints and have Logic Apps talking inside private networks to other line-of-business systems within the corporate network. Built-in connectors can do that, because you can have your hosting plan inside that virtual network, whereas a managed connector is never going to be able to talk inside your network. The other thing is, Dave, do you want to talk a little bit about things like the throttling limits that you get on managed connectors?”
“Yeah. Also, if you just think back to the Consumption tier of Logic Apps, they are all managed connectors. So it is possible to connect to private resources with some of those using the on-premises data gateway, but it isn’t the native VNet connectivity that you get with the built-in ones. And of course, because the managed connectors are in a hosting platform outside of your Logic App runtime and your Logic App compute, they all have throttling limits on them. It’s worth knowing what they are upfront so that you can be sure that your business NFRs are going to be met, because, as it says on the screen here, Service Bus has a limit of 6,000 requests per 60 seconds. Other ones have lower throughput. And so what you can find, if you have fairly high throughput, is that suddenly your Logic App, or the actions in your Logic App, go into a retry loop. That’s because they’re being told to retry after a period of time, because they’ve hit the throttling limit. So suddenly something that you think would take a fairly short amount of time can take a lot longer as it goes into that retry loop. Whereas the built-in connectors have no throttling limits; it’s completely controlled by the scale-out that you have with the plan and how the Logic App runtime decides to scale out. But it’s important to say, as it says at the bottom of the screen there, that the managed connectors do have a lot of functionality that would take you a long time to write, so there are definitely cases where they’re really valid. It’s just that in some cases where you need particularly high throughput, like Service Bus, for example, where you might be processing millions of records, you really want to scale out and run that Logic App at full scale, sending those messages to Service Bus in order to meet your needs, and that’s something you’d never be able to do with a managed connector. And as with all these things, performance isn’t absolute. 
Performance is about what your application needs; measure against that, because fastest isn’t always best. Sorry, there’s one more thing to say: the built-in connectors don’t have a cost associated with them, whereas the managed ones do. So it’s worth bearing in mind, if you are using the managed connectors, that there is a per-action cost; it’s worth checking that out.”
“Yeah, and because the built-in connectors run on compute you’re already paying for through your hosting plan, there’s no additional cost for them actually executing. So if performance is a concern, then obviously look at using the built-in connectors where they’re available. But where you are using the managed connectors, just understand their characteristics, and understand that if you try to exceed their throttling limits, that’s where you’re going to get into some of those scenarios that Dave talked about: retries, and things held up due to throttling. So look at what your non-functional requirements are and then choose appropriately. It’s just good design practice to be going through this stuff and knowing where you are with it.”
“I’ll just say one more thing, sorry. If you do need higher throughput with the managed connectors, it’s worth remembering that the throttling limits I spoke about are at the connection level. So if you have one Logic App talking to a managed connector, that throttling is for that connection. If you have another Logic App, it gets its own throttling. Or if you add another connection into your Logic App, so that you have two, then you get essentially double the throughput. It’s often the case that the service you’re talking to, whether it’s Office 365 or Salesforce or something, will have its own throttling limits too, so it’s worth bearing in mind what they are. 
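A quick back-of-the-envelope sketch shows why the connection-level throttle matters and how extra connections raise the ceiling. The function below is our own illustrative model, not an official formula; the only figure taken from the session is the Service Bus managed connector limit of 6,000 requests per 60 seconds.

```python
import math

def time_to_send(total_requests, limit_per_window, window_secs, connections=1):
    """Lower-bound time to push requests through a managed connector that
    throttles at limit_per_window requests per window_secs.
    Throttling is per connection, so extra connections multiply the ceiling."""
    per_window = limit_per_window * connections
    windows = math.ceil(total_requests / per_window)
    return windows * window_secs

# 24,000 sends against the quoted 6,000-per-60s Service Bus limit:
one_conn = time_to_send(24_000, 6_000, 60, connections=1)   # 240 seconds
two_conn = time_to_send(24_000, 6_000, 60, connections=2)   # 120 seconds
```

The same arithmetic is worth running against the downstream service's own limits, since doubling your connections does nothing if Salesforce or Office 365 throttles you first.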
But by creating multiple connections, you can get more throughput.”
“Good stuff. Okay, the next thing we’ll talk about is synchronous versus asynchronous. A bit like in that example we showed before: when you use an HTTP trigger for a Logic App, you receive a request over HTTP, and at some point you send a response. Now, if you’re used to writing procedural code, you get to the end of a particular operation and then you return to the caller. So a lot of people naturally think, okay, with the Logic App I’ll receive the HTTP request, I’ll do my work, and then at the end of it I’ll return to the caller. And obviously, the more you do between the request and the response, the longer your caller has to wait and the higher the latency of that call. And the slower that call is, the fewer connections you’re going to be able to handle, and the worse the performance will appear. Now, one of the things we can do is minimize the amount of work we do while that HTTP request is open. You can take a request, create your response, and send the response back; that closes the HTTP connection, but the rest of the Logic App can continue to run. So if the HTTP request is just telling the Logic App to do some work, you don’t have to keep the request open while it does that work, and that can get you a lot more throughput. The other thing is you can have a Logic App that writes to Service Bus. So you might synchronously write to Service Bus to make sure: yes, I’ve captured that message, it’s on Service Bus, we’ve got it, it’s now going to be processed. But then we can process it asynchronously. Any more thoughts around Logic Apps and Service Bus, Dave?”
“Yeah, I think quite a few people don’t realize that the connector can take a batch. I’ve seen a lot of customers writing to Service Bus one message at a time, or even in some kind of loop, and we’ll get onto loops. But if you send messages to Service Bus in a batch, it is significantly faster and the performance is way better. And Andrew makes a really good point about sending a response back to the client, because maybe all the client wants to know is that the message has been received and persisted, and that’s good enough. I’ve seen scenarios too where there’s an HTTP request that calls some backend system, and that backend system isn’t really able to cope with the demand of the requests coming in, in which case offloading to a queue, as Andrew said, and then just sending a response back makes real sense, because then the backend can process at the speed it wants to. So we need to think about what it is that the client needs, and it could be that you send back some kind of intermediate identifier to the client before the work even hits the backend. Then, if they make a further call, you can use that identifier to tie the two things up.”
“Yeah, because you can almost have another operation which is like a get status that passes that identifier and you can go and check on that.”
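The accept-now, check-later pattern Andrew and Dave describe can be sketched in a few lines. The JobStore class, its field names, and the status values below are invented for illustration; in a real solution the store might be a database, and the processing step a queue-triggered stateful workflow.

```python
import uuid

class JobStore:
    """Toy sketch of the accept-then-poll pattern: respond to the caller
    immediately with an identifier, do the work asynchronously, and let
    the caller poll a 'get status' operation with that identifier."""

    def __init__(self):
        self.jobs = {}

    def accept(self, payload):
        # The synchronous part: persist the request and return an ID at once.
        job_id = str(uuid.uuid4())
        self.jobs[job_id] = {"status": "Accepted", "payload": payload, "result": None}
        return job_id

    def process(self, job_id):
        # Stands in for the asynchronous worker picking the job up later.
        job = self.jobs[job_id]
        job["result"] = job["payload"].upper()   # placeholder "work"
        job["status"] = "Completed"

    def get_status(self, job_id):
        # The 'get status' operation the caller polls with its identifier.
        job = self.jobs[job_id]
        return {"status": job["status"], "result": job["result"]}
```

The caller's HTTP connection is only open for the `accept` call, which is exactly the latency win described above.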
“Yeah, so there’s a whole load of different patterns here that you can employ. Obviously, one of them is the webhook callback pattern as well, which allows you to pass back a correlation identifier, and then later on, when the work’s finished, we call back and tell the originating system that the work’s completed. One of the things I’m actually going to do now, and this is the thing I’m most nervous about today, is a demo to cover some of the batching side of Service Bus. Now, as we all know, doing a live demo on the cloud while streaming live is asking for bad things to happen. But what I’ve got here, very simply, is two different Logic Apps, and all they do is read off Service Bus and then store the message in Blob Storage. This one here does single messages, so it’s just reading a message at a time off Service Bus and sending it to Blob Storage. And then I’ve got a batch version, which is almost exactly the same, but this one has the shape of ‘when one or more messages arrive on a topic’ on Service Bus, and it’s set to do up to 100 messages. So the Logic App can read in batches and then send those to Blob Storage. And what I have here is some Blob Storage: a blob container for batch, and a blob container for single, and they’re both empty at the moment. So what I’m going to do now is kick off a little script that will write 1,000 messages onto the batch topic. This takes a second to run, and I’ve got a little console app here that’s watching it. Now, the trigger on the Logic App is polling, so in a while it will reach its polling interval and then start to pull those off in batches. Again, this isn’t amazing TV. But as we can see, the Logic App has now fired up, it’s pulling off that batch, and we’ve got through 1,000 in 3, 4, 5, 6, 7… you know, in about eight seconds, we’ve got through 1,000 messages. 
So that’s because the expensive bit is making the connection to Service Bus. The number of messages you read or write matters, of course; if you read a lot of messages, it’s going to be some more work, but there’s so much overhead associated with establishing that connection that batching gives us really good performance. So what I’m going to do now, in contrast, is, let’s find the command, there we are, the same thing, but writing 1,000 messages onto the topic that’s being picked up by the single processor. Again, I’ve got another similar console app here that’s polling that queue. There’s the 1,000. The Logic App in this case is going to be picking those up one at a time. At some point we’ll hit the polling interval, and then we’ll see how fast that runs down. There we go, we’ve hit the polling interval. You might not be able to see that; can you magnify this? Text size large. There we go, hopefully you can see the numbers. So we started off at a thousand and we’re working through it, but we seem to be doing somewhere around 10 to 13 messages a second. It is chugging through the work, but compare that with the batch version: we got through a thousand in about eight seconds with batching, versus well over a minute one at a time. So that’s roughly a ten-times improvement in throughput, just by changing that one shape on the Logic App from reading a single message to reading a batch. Hopefully that’s a good illustration of the performance improvements you can get by going to a batch model. And there we go, here we are a minute later, and it’s still chugging through that work. It’ll do it, there’s nothing wrong with it, but we’re not getting that level of performance. 
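The demo's roughly tenfold gap falls out of a simple cost model: each receive pays a fixed round-trip overhead, so batch size 1 pays it once per message while batch size 100 amortises it across the batch. The overhead and per-message figures below are made-up constants chosen to echo the demo's numbers, not measurements.

```python
import math

def elapsed_secs(total_messages, batch_size,
                 round_trip_overhead=0.08, per_message_cost=0.007):
    """Toy cost model: time = (round trips * fixed connection overhead)
    plus a small per-message processing cost."""
    round_trips = math.ceil(total_messages / batch_size)
    return round_trips * round_trip_overhead + total_messages * per_message_cost

single = elapsed_secs(1_000, 1)     # ~87s: 1,000 round trips dominate
batched = elapsed_secs(1_000, 100)  # ~7.8s: 10 round trips, overhead amortised
```

Once the per-trip overhead dominates, throughput scales almost linearly with batch size until the per-message work takes over, which is why the demo's single-message run crawls along at 10-13 messages a second.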
So it really does make a difference if you’re using Service Bus. Service Bus is good because it allows you to process things asynchronously and have control over rates, but it really does make a difference when you read it as a batch. Okay, let’s get back to the presentation and talk about the number of actions. I’m going to let you talk a bit now, Dave, about actions. Just to kick things off: writing a Logic App is not the same as writing procedural code. So do you want to add a little bit more on actions, Dave?”
“And it’s not necessarily a bad thing to have a larger number of actions, but it’s worth bearing in mind that actions need to be executed on compute. If it’s a stateful workflow, we’re writing those steps onto queues and processing them, which just means we may need more compute to get through those actions, and that may take a bit more time to provision. And if it’s a stateless workflow, it’s going to take more time to complete from start to finish. So it’s definitely worth thinking about the number of actions you have. You might be thinking the number of actions you have is the number you need, and that’s often not the case. I’ve seen quite a few workflows that customers have created with a lot of undue complexity that isn’t required, so it’s definitely worth considering the complexity of the workflow. I think some of that is just having your mind in procedural code: where you might do a loop, there can be other ways of achieving the same thing. Looping is certainly one of those big areas where people can go a little bit crazy, do loops within loops within loops, and then wonder why everything’s going really slowly. But there are some other things as well, such as parameters versus variables. That’s a big one we see all the time, isn’t it, Dave?”
“Yeah, and parameters are great because you can put objects in parameters and they can be deployed with the Logic App through your pipeline. Then if you want to do some kind of lookup, you can do that directly through a parameter, and they’re really quick. I just want to pick up on a point at the top that we didn’t talk about: looping being slower than parallel execution. Andrew’s batching example was a really good one. If, for example, you thought you would create a Logic App that starts, has a loop, reads data from some place, does a transformation, and then creates another message to send somewhere, a much better way of doing that is to rely on the parallelism that Logic Apps has: point it at a queue, let it run in parallel, pick those messages up in a batch, process them, do the transformation, and send it on, rather than thinking you’re in some kind of loop reading messages one at a time. Using the parallelism that Logic Apps has is definitely the way to go. And often, if you find yourself building a Logic App or a workflow and it intuitively feels like it’s getting complex, like Andrew said, loops within loops and appending to variables, and you’re bending the workflow to make it do what you want, we have a newer feature where you can write custom code, essentially an inline Azure function. If you can do it in code, just do it in code, because it will be a lot quicker, easier to understand, and ultimately cheaper as well, requiring less compute. Because sometimes you do need to write a small amount of code.
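Dave’s point about relying on parallelism rather than a sequential loop can be shown with a pure-Python analogy. This is not Logic Apps code; the message handler and timings are invented purely to illustrate the principle of fanning work out instead of looping:

```python
# Pure-Python analogy: handling work items concurrently instead of in a
# sequential loop, the same principle as letting Logic Apps' built-in
# parallelism fan out over queue messages. The sleep stands in for an
# I/O-bound action such as an HTTP call or database write.
import time
from concurrent.futures import ThreadPoolExecutor

def handle(message: str) -> str:
    time.sleep(0.05)          # simulated I/O-bound action
    return message.upper()

messages = [f"msg-{i}" for i in range(40)]

start = time.perf_counter()
sequential = [handle(m) for m in messages]          # loop: ~40 * 0.05s
t_seq = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:    # fan out: ~2 * 0.05s
    parallel = list(pool.map(handle, messages))
t_par = time.perf_counter() - start

print(f"sequential {t_seq:.2f}s vs parallel {t_par:.2f}s")
```

The results are identical either way; only the elapsed time differs, which is the same trade-off as loop-shaped workflows versus letting the runtime process queue messages in parallel.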
And if you do that in a separate function project, but within the Logic App, which you can do in VS Code, that can be a really good option. Code and Logic Apps are definitely better together; it doesn’t have to be one or the other.” “Yeah, absolutely. And the other thing is, as well as the inline functions Logic Apps has, if you need to write code you can put it in a separate Azure Function and call that from your Logic App. But if it’s really about speed, it’s going to be much quicker to execute if you call that code directly from your Logic App, because you haven’t got the HTTP hop to go and call a function. So again, just be aware of this when you’re doing your design. Okay, so we’re going to talk a little bit now about how Logic Apps scale out. There’s not an awful lot to it: ultimately a hosting plan has one or more virtual machines that run in parallel, just like any other kind of web farm application. But one of the things we see is that sometimes people say, I’ve got to cap my scale-out at four instances, otherwise it’s going to be too expensive. The thing is, if those Logic Apps aren’t doing any work, they’ll auto-scale back down to one, so they’re not going to cost you any more. And when you actually have a lot of work to do, the area under the curve is the same: one instance running for 10 hours costs the same as 10 instances running for one hour. So why not set your limits higher and get through that work quicker? By limiting the maximum number of instances because you’re worried about cost, you might actually give yourself artificially poor performance when there’s no need.
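The area-under-the-curve argument in numbers. The hourly rate below is hypothetical, purely for illustration; actual plan pricing varies:

```python
# "Area under the curve": N instances for T hours costs the same as
# one instance for N*T hours, so capping scale-out doesn't save money,
# it only makes the work take longer.
HOURLY_RATE = 0.20  # assumed cost per instance-hour (hypothetical)

def cost(instances: int, hours: float) -> float:
    return instances * hours * HOURLY_RATE

slow = cost(instances=1, hours=10)    # capped scale-out: 10 hours of work
fast = cost(instances=10, hours=1)    # uncapped: same work done in 1 hour

print(slow, fast)  # identical cost, but the work finishes ten times sooner
```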
And you’re not incurring a cost penalty by getting through the same amount of work quicker with more machines in parallel. So do bear that one in mind. But the other thing is, maybe Dave, you can tell us a little bit more about scaling.”
“Yeah, it’s really true that pinning the maximum number of instances in your plan rarely makes sense. And load testing is really important: if you’re building a solution, before you put it into production, make sure you load test it so that you understand those behaviours and what the costs are likely to be up front. A Logic App scaling up to 20 instances and running for 10 minutes actually costs incredibly little. It could even cost you more by pinning the instances, because runs take longer and keep retrying when there isn’t enough compute to cope with the demand. So it just doesn’t really make sense. We talked about scale-out earlier and HTTP having pre-warmed instances. Batch processing is scaled differently, but there have been some improvements there recently to the scale-out time and how many machines we add when we need to scale out. Looking at the takeaways, the main one is: don’t be afraid of scaling out, because scaling out is actually your friend. And finally, I think this is the last thing we can talk about: Logic Apps and workflows. This also has quite a big impact on performance. Imagine you’ve got a lot of different workflows. You could put them all in one massive Logic App, or you could split them up: if you’ve got 80 workflows, you could have one Logic App with 80 workflows, or 10 Logic Apps with an average of eight workflows each. This makes a difference to execution, because when you run a single workflow, everything in that Logic App is loaded into memory. If some workflows aren’t being used most of the time, they don’t need to be loaded into memory, so you get more memory available for the ones that actually are executing.
The other thing is that as your hosting plan scales out, you can let the underlying Azure services worry about what’s running on which node and do the work of being most efficient at that. So on the same hosting plan, you’ve got a lot more flexibility if you break these up. Have you got anything else to add to that, Dave?”
“Yeah, I’ve seen this quite a bit, where we have one plan and one Logic App, and that Logic App has multiple workflows in it serving different purposes: one workflow dealing with batch, another dealing with HTTP. In that particular scenario, what you really want is for the HTTP request, which might be coming from a website, to be processed quickly. What you don’t want is for that to be in the mix with a massive batch process churning through huge numbers of records. So it’s the noisy neighbor thing, isn’t it?”
“Exactly. And in that instance, it might make sense to actually put it on a different plan even.”
“Yeah. But it’s worth bearing in mind that Logic Apps, and premium Functions for that matter, and App Service, scale at the app level, not at the workflow level. So if you have four Logic Apps and one of them needs to scale, it will scale that one out onto multiple machines, but the others aren’t loaded unless they start executing, in which case they can start to reuse the compute that’s there, if it makes sense. There are runtime decisions taking place that figure out whether or not workflows are packed and run on the same machine. But it is worth remembering how that scaling works.”
“Yeah, and one of the other things we haven’t really talked about, but which is also very relevant here, is the model you see in the diagram: a Logic App points to a storage account. That means if you’ve got one Logic App with a massive number of workflows in it, they’ve all got to write to the same storage account, so you can potentially start to get contention on that storage account. If they’re split up across separate Logic Apps, each can have its own storage account, so you don’t get the same level of contention at the storage or IO level. That’s always useful to bear in mind: you can separate your resource-hungry workflows into separate Logic Apps, even put those on different hosting plans, and keep your storage accounts separate, because that again is a point of contention. Okay, quick deep breath. I think we’ve covered this; we’ve been going pretty fast for 50 minutes. So we’re going to start to wrap up now, and hopefully we’ve got a little bit of time for questions and comments. The main takeaways: a lot of people have experienced issues with Logic Apps performance, and it’s one of the things people often complain about, but Logic Apps give you a lot of other benefits over just writing code; it’s not always all about performance. And secondly, Logic Apps do perform. If you understand the tuning strategies we’ve talked about today, you can actually get orders-of-magnitude improvements.
One of the other examples, from a customer we’ve been working with on a project over the last six months: they had a Logic App that takes a load of data from web services and processes it in loops, and that was taking a day to execute. We refactored it to write that data into a database and run a couple of queries against it from the Logic App, and now it takes something like 10 seconds. So just by taking a different approach to design, very often replacing loops within loops within loops, you can make orders-of-magnitude differences. Okay, Q&A. We’ve got loads and loads of things in the comments; thank you so much for those. I’m just going to have a quick flick through. Let’s have a look at this one. Mark is asking: can you cache a response in a stateful Logic App, so that if the same inbound message is received we bypass, for example, a database request? Or would you do that in a separate cache? Any thoughts on that?”
“Yeah, I think in that instance you would have to use some kind of caching technology, a lower-latency store; I’m guessing the thing you’re trying to cache is quite expensive, or maybe has throughput limits. There’s no built-in cache in Logic Apps, so you’d have to use Redis or some other service. I suppose the thing there is that in a stateful Logic App you can store something in a parameter or variable, but once that Logic App has finished executing, that state’s gone. So if you need to preserve it across executions, you’d need to cache it elsewhere. Whereas if you had one Logic App executing, then yes, you could preserve a response in the state of the Logic App over a long-running transaction. It’s worth noting as well that lots of organisations that use Logic Apps also have APIM, API Management. API Management has a built-in cache, and it’s a reasonable cache. So if you’re using that, you can spin up an API super quickly, use the built-in cache from APIM, call it from a Logic App, and you’re done: you’ve got a really low-latency cache you can use. Funnily enough, looking through the comments further down, that is exactly what Sandro suggested. Thank you very much, Sandro. Now, this is quite an interesting one, and it might be something we’ll cover under monitoring and alerting, but very quickly: we talked about not going through the run history but using Application Insights. Does that give us the inputs and outputs of each action in a stateless Logic App?”
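The caching pattern described here can be sketched in a few lines. This in-memory version is illustrative only; as the speakers note, workflow state doesn’t survive across executions, so in practice you’d back a cache like this with Redis or the APIM built-in cache. All names here are invented for the sketch:

```python
# Minimal in-memory TTL cache sketch: avoid repeating an expensive
# lookup (e.g. a database request) for the same inbound key.
import time

class TtlCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def get_or_fetch(self, key: str, fetch):
        """Return a cached value, or call `fetch()` and cache the result."""
        entry = self._store.get(key)
        if entry and time.monotonic() - entry[0] < self.ttl:
            return entry[1]
        value = fetch()
        self._store[key] = (time.monotonic(), value)
        return value

calls = 0
def expensive_db_lookup():
    """Stand-in for the database request we want to bypass on repeats."""
    global calls
    calls += 1
    return {"customer": "Contoso"}

cache = TtlCache(ttl_seconds=60)
a = cache.get_or_fetch("order-123", expensive_db_lookup)
b = cache.get_or_fetch("order-123", expensive_db_lookup)  # served from cache
print(calls)  # 1: the backing lookup was only hit once
```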
“No, it doesn’t. But yes, we are going to cover this in a separate session. It’s very straightforward to track and log custom data from Logic Apps, and from other services for that matter, into Application Insights, which would give you the information you need. Often it’s not that you want to see the whole payload; maybe you want to pick out an order ID or something so you can do a customised search. But you can write the payload: it’s tracked properties in Logic Apps, yes. By default it isn’t written, but there are ways around that. Okay, good stuff. Another one we’ve got here from Mohit: how do you use managed connectors in an isolated environment? I’m assuming we mean a private VNet, and I think you’ve kind of alluded to that, haven’t you, Dave?”
“Yeah, the only mechanism you’ve really got is the on-premises data gateway. I mean, there’s nothing to stop you calling managed connectors, but if the thing the connector is talking to is private, then the only option really is the data gateway, if that’s supported, or opening up the firewall rules, which isn’t necessarily a good idea. There isn’t an easy way around that. Maybe we can take this offline; it would be interesting to know which connector it is you’re trying to use.”
“Yeah. I mean, the other thing is that it depends where the Logic App is sitting: whether it’s inside your private network, or sitting outside and needing to call in, because your Logic App can also call API Management, which can then feed through into something in your internal network. So again, there are lots of different options in the design there. Sorry, just one really quick thing on that: private networking isn’t necessarily the only option for security, because we have Azure AD, or Microsoft Entra ID, available to us, where we can protect different resources with OAuth, and that can be a very valid option as well. I know we’re almost at time, but just one last one: how many workflows can we have in a single Logic App? Thanks for that question, Rishi. I don’t know what the upper limit is, if there is one, but given the conversation we had earlier, I’d definitely not have lots of workflows in one Logic App. Apart from anything else, it’s good practice to separate them into multiple Logic Apps so they can be deployed independently, tested independently, scaled independently, and so on. So although the limit is probably quite high, you might not want to go near it. It takes me back to the old BizTalk days: I remember a colleague of mine who’d hit the limit on the number of shapes you could have in an orchestration, which I think was 256. And it was like, at what point did you think that having this many shapes was a good idea before you split this down a bit? So there will be a limit, but be sensible and understand how you’re going to balance that workload out; I think that’s the key thing there. Great. Okay.
So moving on. Oh, someone’s already answered this: how would you rate the webinar? Please give us a final vote on how much you’ve enjoyed it. We’ve really enjoyed having so many of you on the webinar today. I’m not quite sure what the LinkedIn numbers have been, but we’ve had really good numbers on StreamYard, so thank you so much, and thank you for so much engagement in the comments. One of the things we’ll say is that we’re going to go through the comments after this; any questions we’ve missed, we’ll be in touch and get back to you. But feel free to pass on any additional things. Send us an email; it’s a shared mailbox and will go to me and some of my colleagues, and we’ll do our best to field your questions. So thanks very much, everyone. Glad you enjoyed it. I think we’ve definitely pleased a lot of people. Thank you so much, and we’ll catch you for the next best practices guide quite soon. All right. Thanks a lot, everyone. Bye now.