It Ain't Boring

PowerPlatform Environments: Order vs Chaos


  It seems that, in the effort to promote PowerApps, Microsoft has created a situation where there had to be a way to bring this fast-spreading PowerApps movement under control. Otherwise, since everyone with the proper license can create environments, an organization with 100 licensed users may end up with 100 trial environments showing up on the list, and with a lot of production environments adding to the total storage consumption. This can be a nightmare from the management standpoint if you are not prepared.

  Of course, if we found ourselves in this situation, it would mean our users are actively exploring the platform, so that might be just what we need. However, this is where it becomes a question of what you prefer to live with: do you want more order by introducing some restrictions, or are you ok with more chaos by allowing your users to explore freely?

The choice is yours, since you can do it either way, and here is how:

 https://docs.microsoft.com/en-us/power-platform/admin/control-environment-creation

image

This setting, I believe, applies to the production environments, since they would be taking up the space. But, if you also want to disable trials, there is a PowerShell command for that:

image
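In case the screenshot is hard to read, here is roughly what that command looks like. This is only a sketch assuming the Microsoft.PowerApps.Administration.PowerShell module, and the exact setting name should be verified against the docs page above:

# Requires the Power Apps admin module
Install-Module -Name Microsoft.PowerApps.Administration.PowerShell -Scope CurrentUser
Add-PowerAppsAccount

# Only tenant admins will be able to create trial environments after this
# (setting name as per the "control environment creation" docs - double-check it there)
$requestBody = @{ "disableTrialEnvironmentCreationByNonAdminUsers" = $true }
Set-TenantSettings -RequestBody $requestBody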


Use AAD Groups to set up your PowerApps / Dynamics teams


 

Have you tried creating a team lately? If not, give it a try, and you may see a couple of team types that were not there before and that might actually get your AD admins excited:

clip_image002

Now imagine that there is a group in AAD which you want to use to set up a team in PowerApps. Previously, it would be a relatively complicated task that would require some kind of integration to be set up. Now you can do it out of the box. Set up the teams in PowerApps, assign security roles, and let AD/Office admins manage the membership outside of Dynamics.

Here is an AAD group:

clip_image006

Here is a team in PowerApps that’s using the ID of the AAD group above:

clip_image004
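By the way, if you need to grab that group ID without going to the Azure portal, PowerShell can do it, too. A minimal sketch, assuming the AzureAD module and a group called "Sales Team" (the group name is just an example):

Connect-AzureAD

# Returns the ObjectId you would use when setting up the team in PowerApps
(Get-AzureADGroup -SearchString "Sales Team").ObjectId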

 

Here is something important that I did not realize at first: any Azure AD group membership maintenance done on the team member in Azure AD will not be reflected until the next time the team member logs in or when the system refreshes the cache (after 8 hours of continuous log-in).

https://docs.microsoft.com/en-us/dynamics365/customer-engagement/admin/manage-teams

But, once I’ve logged out and logged in under each of the user accounts above, I can see them added to the team:

clip_image008

And, once it happens, the users will get their permissions derived from the team. Which means that, the next time you need to create a Sales Person user account in Dynamics, you might be able to just add that user to the corresponding AD group, as shown in the sketch below.
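Continuing with the AzureAD module from the snippet above (the UPN below is hypothetical):

Connect-AzureAD

$group = Get-AzureADGroup -SearchString "Sales Team"
$user  = Get-AzureADUser -ObjectId "jane.doe@contoso.com"   # a UPN works here, too

# Group membership is what drives the Dynamics team membership on the next login / cache refresh
Add-AzureADGroupMember -ObjectId $group.ObjectId -RefObjectId $user.ObjectId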

DevOps for Dynamics/PowerApps – Q & A


 

In the last several weeks, I wrote a few blog posts on the devops topic. However, I also meant to conclude those series with a dedicated Q & A post, so here it goes…

1. Why did I use PowerApps build tools?

Quite frankly, it was, mostly, out of curiosity. There are other community tools available out there, but I was curious to see how the "native" tools would measure up. It turned out they can be very useful; however, they are missing a bunch of parameters for solution import and/or for the solution packager:

  • we can’t “stage for upgrade” our managed solutions while importing
  • we also can’t choose a mapping file for the solutionpackager

2. What are the caveats of XML merge?

This is a painful topic. We can probably go on and on about how merging XML files might be confusing and challenging – the only way to mitigate this would be to merge often, but even that is not, always, possible.

However, that issue aside, there is one very specific problem which you might encounter.

Imagine that you have a dev branch where you’ve been doing some work on the solution, and you need to merge the changes that occurred on the master branch into your dev branch. Imagine that, as part of those changes, one or more of the solution components have been removed. For example, another developer might have deleted a workflow that’s no longer required.

In this scenario, you will need to also delete the files which are no longer needed from the branch you’ll be merging into. So you might try to use the following git command to ensure that, when merging, git would prioritize the “master”:

git merge --squash -X theirs master

The problem with "theirs" is that git might not delete any of the files you have in your local branch. And, still, it would merge the XML, so the XML wouldn't be referencing the workflow file anymore, but the file itself would still be sitting there in the solution folders.

So, when you try to re-package your solution with the solutionpackager, that operation will fail.

If this happens, you may need to actually delete the file/folder, and, then, checkout that folder from the master branch into your local branch:

git rm -r <folder>

git checkout master -- <folder>
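Putting it together, and using a hypothetical "Workflows" folder inside the unpacked solution just for illustration, the sequence might look like this:

# Bring the changes from master into the dev branch, preferring master's version of the XML
git checkout dev
git merge --squash -X theirs master

# "theirs" still leaves the orphaned workflow files behind, so remove the folder explicitly...
git rm -r Workflows

# ...and restore it from master, so only the files master still knows about come back
git checkout master -- Workflows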

For more details on this issue, have a look at the discussion below, for example: https://stackoverflow.com/questions/25254037/git-merge-strategy-theirs-is-not-resolving-modify-delete-conflict

3. How do we store passwords for EasyRepro?

EasyRepro is not supposed to work with the connection strings – it’s a UI testing framework, so it needs to know the user name and password in order to “type them into” the login dialog. Which means those parameters may have to be stored somewhere as plain text. It’s not the end of the world, but you may want to limit test user account permissions in your environment since that password won’t be too secure anymore.

4. What about the unit tests for various plugins etc?

I have not used them in my demo project. If you want to explore the options, Fake Xrm Easy might be a good place to start.

5. What about the “configuration data”?

Again, this is something I did not do in the demo project; however, things are starting to get interesting in this area. It used to be that utilizing the Configuration Migration tool would be a semi-manual process. However, there is a powershell module now:

https://www.powershellgallery.com/packages/Microsoft.Xrm.Tooling.ConfigurationMigration/1.0.0.12

Eventually, this is going to be a “go to” option.
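I have not tried it myself yet, but a first experiment would probably start with something like this (the cmdlet names mentioned in the comments are what the package lists at the time of writing, so verify them with Get-Command):

# Install the module from the PowerShell Gallery
Install-Module -Name Microsoft.Xrm.Tooling.ConfigurationMigration -Scope CurrentUser

# See which cmdlets it actually exposes (Export-CrmDataFile / Import-CrmDataFile are the interesting ones)
Get-Command -Module Microsoft.Xrm.Tooling.ConfigurationMigration

# And check the parameters before wiring it into a pipeline
Get-Help Export-CrmDataFile -Detailed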

For the time being, since it’s only the first release (and I still have to try it…. maybe it’s what I’ll do next), you may still find other options useful. For example, I’ve been using the approach described here on one of the client projects.

 

6. What about more complicated scenarios where patching is required?

There are situations where that “simple pipeline” approach does not work. After all, as the solution keeps growing, it may become close to impossible to keep moving the complete solution between different environments every time, and, instead, we may have to start splitting that solution into smaller ones and/or we may want to start using solution patches. In either case, there are a few problems with that, and I am not even sure what the solution would be here:

  • Even though the solution packager supports patches, I am not quite certain what and how we should be merging in source control
  • The pipelines have to be re-designed to understand this patching process

 

New licensing is just around the corner now – more details keep coming out


Microsoft has just published more details on the upcoming licensing changes – don’t miss this update:

https://www.microsoft.com/en-us/licensing/news/updates-to-microsoft-365-dynamics-365-powerapps-and-microsoft-flow-licensing

I know there are different ways to look at it – some of us are working more on the model-driven applications side, yet others are using CanvasApps and/or Flows. What’s coming might affect us differently. However, looking at it from my point of view (which is, mostly, model-driven apps), I think it’s worth keeping in mind a few things:

  • We are getting a cheaper “introductory” plan: $10 per app plan. It can be used for both model-driven and canvas apps
  • Users licensed for Dynamics won’t be able to work in the Core CDS environments. I am not exactly sure why it matters, since they can still work in the Dynamics CDS environments where PowerApps-licensed users can work as well. In other words, we just need to make sure those Power Apps (model-driven or canvas) are deployed in the Dynamics CDS environments to allow our Dynamics-licensed users to work with them. I have a feeling this is a bit of a license hack, so it might not work this way forever
  • It’s probably more important from the Dynamics-licensed users perspective that they will be losing general-purpose Flow use rights
  • Embedded Canvas Apps will not be counted towards the limit
  • Irrespective of the “app licensing”, there will be API limits per user. This is what actually bothers me, because I am not sure if there is an easy way to estimate usage at the moment, and, also, since this may or may not affect everyday usage but will certainly have to be accounted for on data-migration projects
  • That API limit above will affect all types of accounts (including application user accounts and non-interactive accounts)
  • Building a portal for Power Apps is becoming an exercise in cost estimates. On the one hand, we can get a portal starting at $2 per external login. The way I understand it, API calls from the portal are not counted in this case. On the other hand, we can build a custom portal, but we will have to count those API calls now

All that said, I think this licensing change should have been expected – in the cloud, we always pay per use, so now that PowerApps licensing will be clearly accounting for the utilization, the whole licensing model will probably start becoming more straightforward. Although, we might also have to rethink some common scenarios we got used to (for example: data integration, data migration, sharepoint permissions replication, etc)

Upcoming API call limits: a few things to keep in mind


 

Microsoft is introducing daily API call limits in the October licensing:

image

https://docs.microsoft.com/en-ca/power-platform/admin/api-request-limits-allocations

This will affect all user accounts, all types of licenses, all types of applications/flows. Even non-interactive/application/admin user accounts will be affected.

This may give you a bit of a panic attack when you are reading it the first time, but there are a few things to keep in mind:

1. There will be a grace period until December 31, 2019

You also have an option to extend that period till October 1, 2020. Don’t forget to request an extension:

image

This grace period applies to the license plans, too.


2. From the WebAPI perspective, a batch request will still be counted as one API call

Sure this might not be very helpful in the scenarios where we have no control over how the requests are submitted (individually or via a batch), but, when looking at the data integrations/data migrations, we should probably start using batches more aggressively, especially since such SSIS connectors as KingswaySoft or CozyRoc would allow you to specify the batch size.

Although, you may also have to be careful about various text lookups, since, technically, they would likely break the batch. From that standpoint, the “local DB” pattern I described a couple of years ago might be helpful here as well – instead of using a lookup, we might load everything into the local DB using a batch operation, then do a lookup locally, then push data to the target without having to do lookups there:

https://www.itaintboring.com/dynamics-crm/data-migration-how-do-you-usually-do-it/

3. Those limits are not just for the CDS API calls

https://docs.microsoft.com/en-us/power-platform/admin/api-request-limits-allocations#what-is-a-microsoft-power-platform-request

image

If there are errors you can’t easily see in the Flow designer, look at the complex actions – the errors might be hidden inside


Having deployed my Flow in the Test environment earlier today, I quickly realized it was not working. Turned out Flow designer was complaining about the connections in the Flow with the following error:

Some of the connections are not authorized yet. If you just created a workflow from a template, please add the authorized connections to your workflow before saving. 

image

I’ve fixed the connection and clicked “save” hoping to see the error gone, but:

image

Huh?

It seems the Flow designer is a little quirky when it comes to highlighting where the errors are. There was yet another connection error in the “Apply to each”, but I could not really see it until I expanded that step:

image

Once that other connection error had been fixed, the Flow came back to life. By the way, it did not take me long to run into exactly the same situation with yet another Flow, but, this time, the error was hidden inside a condition action.

PCF controls in Canvas Apps and why using Xrm namespace might not be a good idea


 

I wanted to see how a custom PCF control would work in a canvas app, and, as it turned out, it just works if you make sure this experimental feature has been enabled in the environment:

https://powerapps.microsoft.com/en-us/blog/announcing-experimental-release-of-the-powerapps-component-framework-for-canvas-apps/

You also need to enable components for the app:

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/component-framework-for-canvas-apps

So, since I tried it for the Validated Input control, here is how it looks in the canvas app:

image

image

image

Here is what I realized, though.

If you tried creating PCF components before, you would have probably noticed that you can use WebAPI to run certain queries (CRUD + retrieveMultiple):

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/reference/webapi

Often, those 5 methods provided in that namespace are not enough – for example, we can’t associate N:N records using WebAPI.

So, when implementing a different control earlier, I sort of cheated and assumed that, since my control would always be running in a model-driven application entity form, there would always be an Xrm object. Which is why I was able to do this in the index.ts file:

declare var Xrm: any;

And, then, I could use it this way:

var url: string = (<any>Xrm).Utility.getGlobalContext().getClientUrl();

Mind you, there is probably a different method of getting the url, so, technically, I might not have to use Xrm in this particular scenario.

Still, that example above actually shows how easy it is to get access to the Xrm namespace from a PCF component, so it might be tempting. Compare that to the web resources, where you have to go to the parent window to find that Xrm object, and you may also have to wait until Xrm gets initialized.

However, getting back to the Canvas Apps. There is no Xrm here… who would have thought, huh?

WebAPI might become available, even though it’s not there yet. Keep in mind it’s still an experimental preview, so things can change completely. However, there might be no reason at all for the Canvas Apps to surface the complete Xrm namespace, which means that, if you decide to use Xrm in your PCF component, you will only be able to utilize such a component in model-driven applications. It seems almost obvious, but, somehow, I only realized it once I started to look at my control in the Canvas App.

Choosing the right Model-Driven App Supporting Technology


 

While switching between Visual Studio, where I was adding yet another plugin, and Microsoft Flow designer, where I had to tweak a flow earlier this week, I found myself going over another episode of self-assessment which, essentially, was all about trying to answer this question: “why am I using all those different technologies on a single project”?

So, I figured why don’t I dig into it a little more? For now, let’s not even think about stand-alone Canvas Apps – I am mostly working with model-driven applications, so, if you look at the diagram below, it might be a good enough approximation of what model-driven application development looks like today. Although, you will notice that I did not mention such products as Logic Apps, Azure Functions, Forms Pro, etc. This is because those are not PowerPlatform technologies, and they all fall into the “Custom or Third-Party” boxes on this diagram.

image

On a high level, we can put all those “supporting technologies” into a few buckets (I used “print forms”, “integrations”, “automation”, “reporting”, “external access” on the diagram above); however, there will usually be a few technologies within each bucket, and, at the moment, I would have a hard time identifying a single technology that would be superior to the others in that same bucket. Maybe with the exception of external access where Power Platform can offer only one solution, which is the Portals. Of course we can always develop something custom, so “custom or third-party” would normally be present in each bucket.

So, just to have an example of how the thinking might go:

  • Flows are the latest and greatest, of course, but there are no synchronous Flows
  • Workflows will probably be deprecated
  • Plugins are going to stay, so might work well for synchronous
  • However, we need developers to create plugins

 

I could probably go on and on with this “yes but” pattern – instead, I figured I’d create a few quick comparison tables (one per bucket), so here is what I ended up with – you might find it useful, too.

1. Print forms

image

It seems Word Templates would certainly be better IF we could use unlimited relationships, and, possibly, subreports. However, since we can’t, and since, at some point, we will almost inevitably need to add data to the report that can only be reached through a relationship Word Templates can’t follow, that would, sooner or later, represent a problem. So, even if we start with Word Templates only, at some point we may still end up adding an SSRS report.

2. Integrations and UI

image

Again, when comparing, I tried to make sure that each “technology” has something unique about it. What is “ad-hoc development”? Basically, it’s the ability to adjust the script right in the application without having to first compile the TypeScript and re-deploy the whole thing (which is one of the differences between web resources and PCF).

3. Automation

image

So, as far as automation goes, Microsoft Flow is the latest and greatest except that it does not support synchronous events. And, of course, you might not like the idea of having custom code, but, quite often, it’s the only efficient way to achieve something. Classic workflows are not quite future-proof keeping in mind that Flow has been slowly replacing them. Web Hooks require urls, so those may have to be updated once the solution has been deployed. And, also, web hooks are a bit special in the sense that they still have to be developed, it’s just that we don’t care about how and where they are developed (as long as they do exist) while on the PowerApps side.

4. Reporting

image

Essentially, SSRS is a very capable tool in many cases, and we don’t need a separate license to use it. However, compatible dev tools are, usually, a few versions behind. Besides, you would need a “developer” to create SSRS reports. None of the other tools are solution-aware yet. Excel templates have limitations on what we can do with the relationships. Power BI requires a separate license. Power Query is  not “integrated” with PowerApps.

5. External access

This one is really simple since, out-of-the-box, there is nothing to choose from. It’s either the Portals or it has to be something “external”.

So, really, in my example with the plugins and Flows, it’s not that I want to make the project more complicated by utilizing plugins and Flows together. In fact, I’m trying to utilize Flow where possible to avoid “coding”, but, since Flows are always asynchronous, there are situations where they just don’t cover the requirements. And, as you can see from the above tables, it’s pretty much the same situation with everything else. It’s an interesting world we live in!


Managed solutions – let’s debunk some misconceptions


I have a strange attitude towards managed solutions. In general, I don’t always see great benefits in using them. On the other hand, this is what Microsoft recommends for production environments, so should I even be bringing this up?

This is why I probably don’t mind it either way now (managed or unmanaged); although, if somebody decides to go with unmanaged, I will be happy to remind them about this:

image

Still, it’s good to have a clear recommendation, but it would be even better to understand the “why-s” of it. Of course it would be much easier if we could describe the difference between managed/unmanaged in terms of the benefits each of those solution types is bringing.

For example, consider Debug and Release builds of your C# code in the development world. Debug version will have some extra debugging information, but it could also be slower/bigger. Release version might be somewhat faster and smaller, so it’s better for production. However, it’s not quite the same as managed/unmanaged in PowerApps since we can’t, really, say that managed is faster or slower, that there is some extra information in the unmanaged that will help with debugging, etc.

I am not sure I can do it that clearly for “managed” vs “unmanaged”, but let’s look at a few benefits of the managed solutions which are, sometimes, misunderstood.

1. There is no ability to edit solution components directly

The key here is “directly”. The screenshot below explains it very clearly:

image

In other words, you would not be able to lock solution components just by starting to use a managed solution. You’d have to disable customizations of those components (and, yes, those settings will be imported as part of the solution). Without that, you might still go to the default solution and modify the settings.

However, locking those components makes your solution much less extendable. This is probably one of the reasons why Microsoft is not overusing that technique, and we can still happily customize contacts, accounts, and other “managed” entities:

image

Despite the fact that Account is managed (see screenshot above), we can still add forms, update fields, create new fields, etc.

Then, do managed solutions help with “component locking”? From my standpoint, the answer is “sometimes”.

2. Ability to delete solution components is always great

This can be as risky as it is useful. It’s true that, with the unmanaged solutions, we can add components but we can’t delete them (we can do it “manually” or through the API, but not as part of solution removal). Managed solutions can be of great help there. However, even with managed there can be some planning involved.

Would you delete an attribute just like that or would you need to ensure data retention? What if an attribute is removed from the dev environment by a developer who did not think of the data retention, and, then, that solution is deployed in production? The attribute, and the data, will be gone forever.

Is it good or bad? Again, it depends. You may not want to automate certain things to that extent.

 

3. Managed can’t be exported, and that’s ok

For a long time, this has been my main concern. If the dev environment gets broken or becomes out of sync with production, how do we restore it?

This is where, I think, once we start talking about using managed solutions in production, we should also start talking about using some form of source control and devops. Whether it’s Azure DevOps or something else, we need a way to store solution sources somewhere just in case we have to re-build our dev environment, and, also, we need to ensure we don’t deploy anything in prod “manually” – we always do it from the source control.

Which is great, but, if you ever looked at setting up devops for PowerApps, you might realize that it is not such a simple exercise. Do you have the expertise (or developers/consultants) for that?

So, at this point, do you still want to use managed solutions?

If you are not that certain anymore, I think that’s exactly what I wanted to achieve, but, if you just said “No”, maybe that’s taking it too far.

All the latest enhancements in that area (solution history, for example) are tied into the concept of managed solutions. The product team has certainly been pushing managed solutions lately. I still think there is a missing piece that will make the choice obvious as far as internal development goes, but, of course, as far as ISV-s are concerned managed is exactly what they need. Besides, you might be able to put “delete” feature to good use even for internal development, and you may be ok with having to figure out proper source control, and, possibly, ci/cd for your solutions. Hence, “managed” might just be the best way to go in those scenarios.

Test harness for PCF controls – we can also use “start npm start”


While developing a PCF control, you might want to start the test harness.

Although, if you are, like me, not doing PCF development daily, let me clarify what the “test harness” is. Essentially, it’s what you can/should use to test your PCF control without having to deploy it into the CDS environment. Once it’s started, you’ll be able to test the behavior of your PCF control outside of the CDS in a special web page – you’ll be able to modify property values, to see how your control responds to those changes, debug your code, etc.

If you need more details, you might want to open the docs and have a look:

https://docs.microsoft.com/en-us/powerapps/developer/component-framework/debugging-custom-controls

Either way, in order to start the test harness, you would use the following command:

npm start

This is fast and convenient, but this command will lock your terminal session. For example, if you are doing PCF development in Visual Studio Code, here is what you will see in the terminal window:

image

You won’t be able to use that window for any additional commands until you have terminated the test harness. Of course you could open yet another terminal window, but it just occurred to me that we could also type

start npm start

Those two “start”-s surrounding the “npm” have completely different meanings. When done this way, a new command prompt window will show up and “npm start” will run in that additional window:

image

It’ll be the same result – you will have the harness started, but, also, your original terminal session will continue to work, and you won’t need to open another one.
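As a side note, if your terminal is PowerShell rather than cmd, “start” is an alias for Start-Process, so the spelled-out equivalent would be something like this (going through cmd so that npm.cmd resolves):

# Launches "npm start" in its own window and returns control to the current terminal
Start-Process cmd -ArgumentList "/k npm start"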

Using PowerShell to export/import solutions, data, and Word Templates


 

I blogged about it before, but now that ItAintBoring.CDS.PowerShell library has been fine tuned, it might be time to do it again.

There are three parts to this post:

  • Introduction
  • ItAintBoring.CDS.PowerShell installation instructions
  • ItAintBoring.CDS.PowerShell usage example

 

INTRODUCTION

There are different ways we can set up ci/cd – we can use Azure DevOps, we can use PowerApps Build Tools, we can use various community tools, or we can even create our own PowerShell library.

Each has some pros and cons, but this post is all about that last option, which is using our own “PowerShell library”.

What are the main benefits? Well, it’s the amount of control we have over what we can do. You might say that “no code” is always better, but I would argue that, when you are setting up ci/cd, “no code” will probably be the least of your concerns.

Anyway, in this case we can use the library to export/import solutions, and, also, to export/import configuration data – including, as of v1.0.1, Word Templates.

INSTALLATION INSTRUCTIONS

1. Create a new folder, call it “CDS Deployment Scripts” (although you can give it a different name if you prefer)

2. Create a new ps1 file in that folder with the content below

function Get-NuGetPackages{

    # Download the latest nuget.exe into the current folder
    $sourceNugetExe = "https://dist.nuget.org/win-x86-commandline/latest/nuget.exe"
    $targetNugetExe = ".\nuget.exe"
    Remove-Item .\Tools -Force -Recurse -ErrorAction Ignore
    Invoke-WebRequest $sourceNugetExe -OutFile $targetNugetExe
    Set-Alias nuget $targetNugetExe -Scope Global -Verbose

    # Pull the ItAintBoring.CDS.PowerShell package into the current folder
    ./nuget install ItAintBoring.CDS.PowerShell -O .
}
Get-NuGetPackages

Here is how it should all look:

image

3. Run the file above with PowerShell – you will have the scripts downloaded

image

image

4. Update system path variable so it has a path to the deployment.psm1

image
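The exact folder depends on where you put the scripts and on the package version, so the path below is just a placeholder – point it at whatever folder ends up containing deployment.psm1:

# Hypothetical path - adjust it to your own folder
$deploymentFolder = "C:\CDS Deployment Scripts\<folder containing deployment.psm1>"
[Environment]::SetEnvironmentVariable("Path", "$env:Path;$deploymentFolder", "User")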

 

USAGE EXAMPLE

For the remaining part of this post, you will need to download sources from Git (just clone the repo if you prefer):

https://github.com/ashlega/ItAintBoring.Deployment
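For example, cloning it is a one-liner:

git clone https://github.com/ashlega/ItAintBoring.Deployment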

That repository includes all script sources, but, also, a sample project.

1. In the file explorer, navigate to the Setup/Projects/DeploymentDemo folder

image

2. Update settings.ps1 with the correct environment url-s

Open settings.ps1 and update your connection settings for the source/destination. If you are just trying the demo, don’t worry about the source, but make sure to update the destination.

image

3. IF YOU HAD A SOURCE environment

Which you don’t, at least while working on this example. But the idea is that, if you start using the script in your ci/cd, you would have the source.

So, if you had a source environment, you would now need to update Export.ps1. The purpose of that file is to export solutions and data from the source:

image

You can see how, for the data, it’s using FetchXml, which also works for the Word Document Templates.

4. Run import_solutions.ps1 to deploy solutions and data to the destination environment

image

5. Run import_data.ps1 to deploy data (including Word Templates) for the new entities to the destination environment

image

 

As a result of those last two steps above, you will have the solution deployed and some data uploaded:

image

image

image

Most importantly, it literally takes only a few clicks once those export/import files have been prepared.

And what about the source control? Well, it can still be there if you want. Once the solution file has been exported, you can run the solution packager to unpack the file, then use Git to put it in Azure DevOps or on GitHub. Before running the import, you will need to get those sources from source control, package them using the solution packager, and run the import scripts.
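Just to illustrate what that might look like (the file and folder names below are purely hypothetical, and SolutionPackager.exe ships with the CoreTools NuGet package):

# Unpack the exported solution zip into source-control-friendly files
.\SolutionPackager.exe /action:Extract /zipfile:.\Export\DeploymentDemo.zip /folder:.\SolutionSources

# Commit the unpacked sources
git add .\SolutionSources
git commit -m "Latest DeploymentDemo solution export"

# Later, before running the import scripts, re-package the solution from those sources
.\SolutionPackager.exe /action:Pack /zipfile:.\Import\DeploymentDemo.zip /folder:.\SolutionSources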

Lookups behavior in Wave 2 – recent items, wildcard search, magnifying glass button, etc


 

I am wondering how many of us have missed this post:

“Preview for usability enhancements to lookups in Unified Interface”

https://powerapps.microsoft.com/en-us/blog/preview-for-usability-enhancements-to-lookups-in-unified-interface/

I certainly did. So, to summarize, here is what’s happening now when you are clicking that magnifying glass button in the lookup field:

  • If “recent list” is enabled for the lookup, you will see recent items only
  • If “recent list” has been disabled through the configuration, you will see nothing (there will still be “new <record>” link though)
  • If you try typing “*” and clicking the magnifying glass after that, you will get nothing as well (there seems to be a bug when using “*” on its own)

 

If you wanted to bring up the list of records (not the “recent” records), there seem to be two options:

  • When in the lookup field, while it is still empty, push “enter” button
  • OR enter some text to search for and press “Enter” or click the magnifying glass button (for example, “T*123” would work fine… as long as it’s not just “*”)

Trying out Microsoft bot technologies


 

I got a little different idea for the month of October, and it is not directly related to the PowerApps development, but it depends on what I can do with the bots. So, naturally, I’ve been trying Bot Framework, Virtual Agent, and may need to try Virtual Assistant samples.

If this interests you, carry on reading!

image

First of all, Bot Framework is at the core of any “agent/assistant/bot” technology from Microsoft. However, the way I see it:

  • Bot Framework is the underlying technology
  • Virtual Agent is a self-driving vehicle
  • Virtual Assistant is a Formula 1 car

And, of course, you can always develop your own bot from scratch directly on top of the Bot Framework.

Either way, let’s talk about the Virtual Agent in this post.

First of all, Virtual Agent is a Dynamics/PowerApps-connected technology. You can easily utilize Microsoft Flow directly from the Virtual Agent (but you can’t do much more in terms of “development”… which is how PowerPlatform often is – it’s a “low code”/”no code” platform in many cases)

Then, it’s in a preview. Don’t start posting questions to the community forum in the first 10 minutes after you’ve created a trial through the link below:

https://dynamics.microsoft.com/en-us/ai/virtual-agent-for-customer-service/

Wait till you see the confirmation:

image

Until then, you will see what can probably be described as “deployment work in progress” – some areas will be working, but others won’t. For example, all the buttons to create new topics or new bots will be greyed out and unavailable.

Either way, here is what I wanted to make my virtual agent do:

  • Wake up – easy to do, just need to define some trigger phrases. That said, the agent does not seem to recognize a trigger phrase if it’s even slightly different
  • Greet the visitor – not a problem, it’s just something the bot needs to say
  • Take the answer and get the name

 

This last one seems simple, but it actually requires a Flow. The user might say “I am…”, “My name is…”, etc. However, all I can do in the agent designer is take that answer into a variable and use an “is equal to” condition in the expressions:

image

Which is not quite what I need, since I’d rather know the visitor’s name. Hence, I need to call a Flow through what is called an “action” in the Virtual Agent to try to parse the answer. Actually, there is a demo of a similar Flow here:

https://www.youtube.com/watch?v=joXCzvi38Fo&feature=youtu.be

That’s overly simplified, though, since I need to parse the user response to remove “I am…”, “My name is…”, etc.

This is where I quickly found out that error reporting between Flows and Virtual Agent may still need some work because, once I had my Flow created and added as an action to the bot, I started to get this:

image

If only I could see what was wrong somehow… Apparently, the Flow was running ok:

image

It turned out that was just a matter of the HTTP response being formed incorrectly in the Flow:

image

The agent was expecting “VisitorName”, since that’s how the parameter was named in the schema, but the Flow was returning “Name” instead. In the absence of detailed error reporting, it’s always the simplest stuff that you verify last, so it took me a while to figure it out – it was a quick fix after that.

In the end, it was not too complicated, and, apparently, this relative simplicity is why we’d want to use a Virtual Agent:

 

image

From there, if I wanted this agent to run on my own web site, there is an iframe deployment option. Basically, as I mentioned before, this is all in line with the low-code/no-code approach.

And, because of the Flow integration (which, in turn, can connect to just about anything), the possibilities there are just mind-blowing. We can use a virtual agent to:

  • Confirm site visitor identity by comparing some info with what’s stored in Dynamics/PowerApps
  • Help site visitors open a support ticket
  • Provide an update on their support tickets
  • Surface knowledge base articles from Dynamics
  • Help them navigate through the phone directory
  • Search for the special offers
  • Etc etc

Besides, there is a Flow connector for LUIS, and I could probably add intent recognition and do other cool stuff using Flows:

https://flow.microsoft.com/en-US/connectors/shared_luis/luis/

 

I would definitely keep trying it, but I really wanted to integrate my bot with speech services, and, since this feature is not available yet (as per this thread: https://community.dynamics.com/365/virtual-agent-for-customer-service/f/dynamics-365-virtual-agent-for-customer-service-forum/358349/does-dynamics-365-virtual-agent-for-ce-support-voice-bot), I will be moving on to the Bot Framework and Virtual Assistant for now.

Which means leaving the familiar PowerPlatform domain. Let’s see what’s out there in the following posts…

 

Readonly = impression, FieldSecurity = impression + access restrictions, Plugins = controlled changes


 

Why is it not enough to make a field readonly on the form if you want to secure your data from unintended changes?

Because there are at least 2 simple ways to unlock such fields:

1. Level up extension from Natraj Yegnaraman

https://chrome.google.com/webstore/detail/level-up-for-dynamics-crm/bjnkkhimoaclnddigpphpgkfgeggokam

Here is how read-only fields look before you apply “God Mode”:

image

Then I apply the “God Mode”:

image

And I can happily update my read-only field:

image

Which is absolutely awesome when I am a system administrator trying to quickly fix some data in the system, but it can become a nightmare from the data consistency standpoint if I am either not a system administrator or if, even as a system administrator, I am not really supposed to do those things.

2. When in the grid view, you can use “Open in Excel Online” option

image

“Read-only” is how you mark fields on the forms, not what really applies to the server side/excel/etc. So, once you’ve updated the records in excel, you can just save the changes:

image

Of course you can also export/import, but “online” is the quickest in that sense.

What about the field-level security, though?

It does help when the user in question is not a system admin:

https://docs.microsoft.com/en-us/power-platform/admin/field-level-security

“Record-level permissions are granted at the entity level, but you may have certain fields associated with an entity that contain data that is more sensitive than the other fields. For these situations, you use field level security to control access to specific fields.”

Is that good enough? This is certainly better, but it does not help with the opportunistic system admins who just want to deal with the immediate data entry problem. They will still have access to all the fields.

Is there a way to actually control those changes?

Well, strictly speaking, you can’t beat a system admin in this game. However, you might create a synchronous plugin, or, possibly, a real-time workflow to run on update of some entities and to raise errors whenever a user is not supposed to modify certain data.

Would it help all the time? To some extent – a system admin can just go into the environment and disable a plugin. Of course that’s a little more involved, and that’s the kind of intervention not every system admin would be willing to do, but still. However, for the most critical data you could, at least, use this method to notify the system administrator why updating such fields would not be desirable. Which is, often, part of the problem – if there is a read-only field and no explanation of why it is read-only, then… why not unlock it, right? But, if a plugin throws an error after that with some extra details, even the system admin might decide not to proceed. Technically, you might also use a real-time workflow for this (just stop a real-time workflow with “cancelled” status), but it might be difficult/impossible to verify conditions properly in the workflow.

Anyway, those are the options, and, of course, in terms of complexity, making a field read-only would be the easiest, but it would also be the least restrictive. Using field-level security would be more involved, but it would restrict data access for anyone but the system administrators. Plugins might give even more control, but that would certainly be development.

PowerPlatform: when a CanvasApp saves a project


 

Sometimes, a Canvas App can save a project. This would have been almost impossible for me to say even a year ago; however, the way my current project has evolved in the last two weeks, I can think of no better tool to quickly deliver the functionality required by the client.

And, so, if you are more on the professional development side, and if you are still not quite buying those Canvas Apps, here is the story you may want to read. There will be a Canvas App, there will be a model-driven app, there will even be a plugin just so you don’t get bored… besides, there will be json parsing, regular expressions, tablets, web browsers. And it actually won’t take that long.

Either way, here is an additional requirement I got from the client after having spent a few months developing a model-driven app: “we need to be able to do product inspections as quickly as possible, and we can’t afford all those mistakes the inspectors can make if they have to learn how to navigate in the model-driven app”. That was the essence of it, and there were some caveats about extra validations, color coding, etc. Basically, what I was hearing is that “a model-driven app will work for everyone else but not for the inspector”. Oh, nice…

First I went into panic mode, then I thought about the options, and this is when I realized that a Canvas App could cover all of that, even though the developer in me is naturally biased against “Excel-like” development.

For the rest of this post (or posts?), I‘ll be using a simplified version of what I ended up with, but, I think, that will still be good enough evidence of why a Canvas App can, sometimes, save a project.

I’ll be using two entities for the demo solution:

  • “Inspection” entity
  • “Inspection item” entity

 

The idea is that a user can create an inspection, add inspection items, and pass/fail each of the items so that the whole inspection will pass or fail. In the model-driven app, here is how it would look:

image

Remember this is a simplified version – on the actual project, there were inspection types, templates, dates, they would have to be booked in advance by different users, the products to inspect might not be available, etc. And, since the inspectors would be using this application only sporadically, you can probably imagine why they would not be comfortable doing it unless the process were streamlined. Not to mention they’d rather be doing it on their tablets.

Come to think of it, that particular part of the model-driven application can be implemented as a canvas app with only three screens:

image

That’s, really, all the inspectors need.

Again, in the actual app there was a bit more to it. There were about 30 inspection items per inspection, different tabs, different colors, inspection item templates, inspection setup logic, etc. But those screenshots above should already give you a good idea of why a Canvas app can work better in this scenario. And, yes, it can all run on tablets.

I don’t, really, mean to list each and every step we have to take to create a canvas app, so I’ll only focus on what I found challenging or not that straightforward. Namely:

  • Formula-based development
  • Interface and designer limitations
  • Run-time and design-time being mixed up in the PowerApp Studio
  • Workaround for the absence of reusable functions
  • Working with JSON and regular expressions
  • Implementing business logic in a CDS plugin

1. Formula-based development

Basically, you are working with some version of Excel. I would not go as far as to say that “if you know Excel you know PowerApps”, but there is some truth there.

Just about everything is either a static value or a formula. It’s not a function you can define and reuse, it’s a formula you can write to calculate a value for a property. For example, on the screenshot below, the “Text” property of the label will take its value from the ProductNumber variable defined somewhere else, and, as soon as that variable gets updated, that “Text” property will immediately reflect the updated value:

image

Can you set that “Text” property directly? Nope.

I am not quite sure why – we can set variables, but we can’t set properties. Go figure.

Can we use a more complicated formula? Not a problem, except that you may need to get used to the syntax:

image

In that sense, Excel it is.

For the whole list of out-of-the-box functions you can use in those formulas, have a look here:

https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/formula-reference

2. Interface and designer limitations

This is a weird one. There are out of the box controls we can add to the screens. There are certain properties we can configure. Anything beyond that? Not an option.

Well, unless you go for an experimental PCF control, which can be very interesting and promising, but it’s not quite feasible for production yet:

https://powerapps.microsoft.com/en-us/blog/announcing-experimental-release-of-the-powerapps-component-framework-for-canvas-apps/

Just to give you an example: I wanted to have a 3-state checkbox for those pass-fail-n/a options. Of course I would not be able to do it in the model-driven application either without some kind of custom development, but, at least, I’d have web resources and PCF on that side. In the Canvas App, I had to opt for a radio button group. Which works, but it’s not quite ideal if you consider how much space those radio button groups will be taking on the screen when there are 30 different “groups”.

In the actual app, I also had to implement a version of a tab control. Which would be a no-brainer in HTML/JavaScript, but, with Canvas Apps, those had to be buttons, and each button would have to change the visibility of some controls on the screen.

Why would I not use a separate screen per tab? Because the “Select” method can only work with the controls on the same screen – I’ll talk about it in the “workaround for reusable functions” below. It helps that in Canvas Apps we can group controls:

image

We can select multiple controls (either using the mouse, or by clicking on them one after the other while holding “Shift”), and, then, group them.

One advantage is that we can, then, control visibility of all those controls using group properties:

image

And, then, if you have multiple groups on the same screen, you can hide all but one, update control settings in that group, then switch to the other, etc. That way, you can design each group separately.

Although, this is where you will quickly realize that there is no big difference between run-time and design-time in the PowerApps Studio, and that causes a bit of a problem, though that also presents an interesting workaround. Because, for example, if you use a variable to control group visibility (that’s what you would need to make it all work at run-time), you will not have control over the Visible property in the designer anymore – it’ll be greyed out:

image

And what if you wanted to switch to a different group? The trick is, you need to start your screen, ensure the conditions for that “Visible” property are met for your group, and, then, return to the designer. The group will be visible, and you can keep working on the design of the grouped controls:

image

When trying to implement something like a “tab control”, this seems to help… a lot.

3. Run-time and design-time being mixed up in the PowerApp Studio

It’s very unusual when your design-time is affected by the results of the most recent start of your application. It’s also unusual when you can start any screen independently without going through all the screens which are supposed to be opened first.

At the same time, there is a difference between how the application behaves in the PowerApps Studio and how the end users work with the application. When we click the start button, that opens up a specific application screen:

image

The end users don’t have that option – in order to open that particular application screen, they would have to go through the previous screens.

Another way to think about it would be to say that we are not just designing the app, but we are actually debugging an app (and we are allowed to design it at the same time). We can get to a screen and switch to the “design mode”. From there, we can check current values of all the variables and/or collections:

image

Maybe we can’t update those values directly, but we can bring up another screen and do it. For that matter, we can even create a special screen which would not ever become visible for the regular users since there would be no navigation path to that screen, and, then, use it to set the variables the way we need them. Sounds weird? Well, leave your biases behind for now, and just give it a try. Chances are, a CanvasApp would still be the best fit for what you need to do under some circumstances.

4. Workaround for the absence of reusable functions

There are no reusable functions. I am not sure why Microsoft did it this way. Don’t get me wrong – I now have a couple of examples when a Canvas App saved a project for me, but there are things I just don’t understand, and this is one of them.

No matter what, it’s almost impossible to do any kind of development without having some sort of a function – it’s simply impossible to keep copy-pasting code from one place to another.

And there is a workaround, but, as any workaround, it’s not perfect.

See, we can use “Select” function.

We can create a button, put it on the screen, make it invisible, add some code to the “OnSelect” property, and, then, use “Select(buttonName)” to call that code from anywhere on the same screen.

Here is an example. Let’s say I wanted to store “fail/pass” results for each item in variables (separately for each item). Of course I could do it in the “OnChange” of each radio button control, but I could also add a new button to the screen, hide it, and add the following code to the “OnSelect” of that button:

image

Which would allow me to call “Select(buttonSetVariables);” from any other place on the same screen:

image

How does it help?

What if I wanted to add common actions to each of those OnChange above? For instance, what if I wanted to update a record in the CDS database every time a radio button is updated? Now I could add a Patch call to just one place, pass all those variables to the Patch, and I would not have to remember to update every single OnChange instead.

Of course you might say that using Patch that way would cause an update to each and every attribute, and that’s not, necessarily, what I may want to do, but you can see how it simplifies code maintenance. Essentially, we are getting some kind of a function. If only Microsoft could introduce real functions…

It seems there is a related technique. You can use global variables, and you can add initialization code to the “OnVisible” property of the application screens. For example, in the inspection application above, I would initialize all global variables on the second screen (once the product # has been selected), and, then would keep updating them on the third screen through that hidden button. The idea is to keep those big chunks of code in one place, even if that means doing a bit more than you would, normally, do in a function.

At this point, there are two remaining topics which would be better discussed together, but which would qualify for a separate post: JSON/Regex and plugin-based business logic.

Hence, have fun for now, and stay tuned for a follow up post.


PowerPlatform: when a plugin meets a Canvas App


One problem with my inspection canvas app was that I did need some “business logic” to be there, and, of course, with the limited coding we can do in the canvas applications, it just had to be implemented somewhere else.

A Microsoft Flow maybe? Well… I am talking about a CDS application, and this is where I would probably go with a plugin over a flow just about any time for anything more complex than a simple query.

Let’s see what a plugin can do, then.

Here is a diagram:

image

The idea is that there are a few places I would use the plugin:

  • In Pre-RetrieveMultiple to pre-process the request. This is where I would parse the query to get the product ID so I could use it in Post-RetrieveMultiple
  • In Post-RetrieveMultiple to post-process the request. This is where I would use the product id to check if an inspection is allowed, if all the conditions have been met, probably to create the inspection record, and, also, to add error messages to a dedicated field on the inspection entity. Also, the same plugin would add json-serialized data to a dedicated field on the inspection record so that the canvas app could extract results for every inspection item from that field
  • And, finally, when updating the inspection, the canvas app would prepare the json and pass it back to CDS. An OnUpdate plugin would kick in to parse that json and to update inspection item results as required

 

You will find the exported solution, including the canvas app and the plugin sources, here:

https://github.com/ashlega/ItAintBoring.CanvasInspectionDemo

On the high level, here is what’s happening there:

1. Parsing JSON in the canvas app

In the OnVisible of the second screen (where the user is supposed to confirm the product #), the Canvas App would use a Lookup to load the inspection record by product number:

image

This is where “RetrieveMultiple” plugin would kick in.

ConfirmationGroup would become visible once SelectedInspection variable has been initialized, but only if there was no error:

image

Otherwise, ErrorGroup would show up:

image

Once the user clicks the “Confirm” button in the confirmation group, the Canvas App would use a regex to parse the json (which would be coming from the plugin):

image

But, before that, if something goes completely wrong, here is what Canvas App user would see:

image

On the other hand, if the plugin decides there is a problem, it would pass it back to the Canvas App through a field on the “fake” inspection record:

image

And the error would look like this:

image

Otherwise, if there were no errors, the user would be able to continue with the inspection:

image

2. Passing JSON back to the plugin to create inspection items

Once the inspection is finished, the user would click the Finish button, and the canvas app would pack the inspection results into a json string:

image

Here is how “OnSelect” for that “Finish” button looks:

image

The plugin, in turn, would parse that json to create inspection items:

image

Of course I might add all those inspection items as “checkboxes” to the inspection entity itself, but, when it’s done using a separate entity, I have more flexibility when it comes to adding new inspection items later.

And there we are… There is a Canvas Application that delivers better user experience just where it’s required, but all the business logic is implemented in the plugin, and, even more, that plugin can sort of talk to the canvas app by returning errors through a field, and, also, by providing (and accepting) additional data through json.

 

Business logic for Canvas Apps: Plugins vs Flows vs WorkFlows


In the previous post I talked about using a “RetrieveMultiple” (+Update/Create) plugin to incorporate business logic into a Canvas App. And, of course, the very first question I got was “why use a plugin if you could use a Flow?”

It’s a fair question, and I figured I’d try to provide a fair answer. Have a feeling it might end up being not that clear-cut at all, though.

First of all, whatever business logic I was going to implement, it was always meant to work with CDS. I was not going to connect to other data sources there, so, from that standpoint, a Flow would not offer any benefits.

A plugin would have to be developed, of course, and that assumes there would have to be a “professional” developer. That would be another advantage of using a Flow where a “citizen” developer could actually do the job. Although, in reality this is a little more complicated, since, if you look at the classic workflows and compare them with plugins and Flows from the development effort perspective, I think the comparison would go like this:

image

*That’s if we leave custom workflow activities out of this, of course.

Of course plugins are way more on the development side than Flows, but Flows seem to require more effort from the “citizen” developers than classic workflows.

Either way, in my case there would be a “professional” developer to maintain the plugins, so I was not too concerned about that part.

Now, with the Flows stripped of their two main advantages over the plugins (in this particular situation), let’s see why a plugin might be a better option:

  • When compared to the classic “code” development, no-code development will always be limited no matter how great the tools are. To some extent it’s fine, but, then, once the logic becomes really complicated, it’s actually much easier to maintain it in code, since it’s a lot more concise there
  • Plugins offer better control of where and how they will start. Actually, neither the Canvas App nor the model-driven app has to worry about whether the plugin will start or not – it will happen automatically
  • Flows are asynchronous, and plugins can be both synchronous and asynchronous. That’s not always a big deal, but can be useful every now and then
  • Plugins can be added to the solutions and moved between the environments. The same is true for the Flows; although, I believe we still need to reset Flow connections in the target environment after importing the Flow there. Plugins will just start to work
  • When developing a plugin, we can rely on the usual source control merge tools since plugins are all about the source code

 

In other words, the reason I went with a plugin in the case of the inspection canvas application is, basically, that I did not need those advantages Flows can offer, and, as such, it seemed a plugin would fit better.

Hope it makes sense.

Self-Service Purchase Capabilities for Power Platform


 

You may have heard of it already, but there is going to be a new way to purchase Power Platform services soon – Self-Service Purchase Capabilities are arriving in November:

image

There is a corresponding FAQ page that answers some of the questions you may have:

https://docs.microsoft.com/en-us/microsoft-365/commerce/subscriptions/self-service-purchase-faq

One thing to keep in mind, though, is that there will be no opt-out – every tenant in the commercial cloud will receive this update. In general, I’m guessing it’s not a bad idea to give more flexibility to individuals and groups within the organization, but there are a few related concerns, and, the way I see it, Microsoft leaves it to the organizations to resolve those concerns:

Organizations can then rely on their own internal policies, procedures and communications to ensure that those individuals making self-service purchases are complying with company policies.

Tenant administrators will be able to see who made the purchases, for instance, but, in general, this may create a bit of a shadow IT situation, and that might be more concerning for organizations that have to comply with additional regulatory, auditing, data security, or even just budgeting requirements.

From that standpoint, the key thing to remember is that it’s still the organization that would be responsible if anything goes wrong (from the compliance standpoint) with the solutions purchased and implemented through the self-service.

That said, I believe self-service can be a good thing. Somebody willing to purchase a service won’t need to go through the long process of procurement anymore, and that might make the process faster/easier.

There is a possibility somebody will decide to exploit this new option and bypass approvals/procurement completely. Are the risks that big? It would not be that different from somebody in your company going rogue and buying laptops for their department without getting the purchase approved first – not a very likely scenario.

Still, if you do need those approvals to happen, you may want to add a few more notes to the self-service purchase policies your company has established so far.

Environment Variables preview


 

A number of folks have pointed out that solution environment variables are, now, in preview:

https://docs.microsoft.com/en-us/powerapps/maker/common-data-service/environmentvariables

I was looking at it yesterday, and, it seems, I have somewhat mixed emotions.

Environment variables seem to represent a hybrid experience: we can work with them in the model-driven application, but we can also work with them in the solution designer.

For instance, I can find Environment Variables Definitions in the advanced find, run a query, get the list, and review the variables:

image

image

I can even update the values on the screen above:

image

Then, if I open https://make.preview.powerapps.com/ (notice “preview” in the url), I can open the same environment variable definition in the admin experience. As you can see on the screenshot below, the change I’ve made in the regular interface is now reflected in the admin interface as well:

image

In other words, it’s a regular (well, not quite regular, but still) entity that’s also exposed through the admin interface. Come to think of it, that’s not that unusual since there are, also, entities for such “metadata” as views, for example. So, maybe, at some point environment variables will simply be hidden from the regular user interface.

That’s, really, just a little unusual twist, but what’s really interesting is the usage scenarios.

First of all, we can include environment variable definitions in our solutions. And, then, we can also include special json files which will override those default values.

Basically, when using the solution packager to re-package our solution, we can choose to overwrite those values in the destination depending on the environment where the solution will be deployed.

However, if you look at it from the build/release pipeline standpoint… if a solution represents a build artefact, we can’t just take that zip file and push it through QA/UAT/PROD when every environment needs a different value for each environment variable.

Of course we can re-package every time to add environment-specific values, but that breaks the idea of having solution files as build artefacts.

The other option, it seems, would be to set environment variables in the target environment after solution import using some kind of a script, but, then, is it that beneficial to have environment variables in the solution?
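For reference, here is a minimal sketch of what that kind of script might look like in C#. The schema name “ita_ApiUrl” and the target value are made-up placeholders; the environmentvariabledefinition / environmentvariablevalue logical names are the ones used by the preview at the time of writing, but verify them (and the required attributes) against your environment:

using System;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Tooling.Connector;

class SetEnvironmentVariables
{
    static void Main(string[] args)
    {
        // args[0]: connection string to the target environment (e.g. supplied by the release pipeline)
        var service = new CrmServiceClient(args[0]);
        SetValue(service, "ita_ApiUrl", "https://qa.example.com/api"); // made-up variable and value
    }

    static void SetValue(IOrganizationService service, string schemaName, string newValue)
    {
        // Find the definition by schema name
        var defQuery = new QueryExpression("environmentvariabledefinition")
        {
            ColumnSet = new ColumnSet("environmentvariabledefinitionid")
        };
        defQuery.Criteria.AddCondition("schemaname", ConditionOperator.Equal, schemaName);
        var definition = service.RetrieveMultiple(defQuery).Entities.FirstOrDefault();
        if (definition == null) throw new Exception($"Definition {schemaName} was not found");

        // There should be at most one "current value" record per definition
        var valQuery = new QueryExpression("environmentvariablevalue")
        {
            ColumnSet = new ColumnSet("environmentvariablevalueid")
        };
        valQuery.Criteria.AddCondition("environmentvariabledefinitionid", ConditionOperator.Equal, definition.Id);
        var value = service.RetrieveMultiple(valQuery).Entities.FirstOrDefault();

        if (value == null)
        {
            value = new Entity("environmentvariablevalue");
            value["environmentvariabledefinitionid"] = new EntityReference("environmentvariabledefinition", definition.Id);
            value["value"] = newValue;
            service.Create(value);
        }
        else
        {
            value["value"] = newValue;
            service.Update(value);
        }
    }
}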

There are a few other caveats in the preview – you can read about them in the docs (see the link at the beginning of this post) – but you might also want to consider a few more:

  • Both the default value and the value attributes are configured with a max length of 2000. It seems environment variables are not meant to store large amounts of data (as in, it would not be “by design” to serialize all your reference data into a json string, pass that string to the target environment through a solution, and de-serialize the json there through a plugin, for instance)
  • Using environment variables in classic workflows would still be problematic (without some kind of custom workflow activity – there is a sketch of one right after this list)
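Speaking of that custom workflow activity, here is, roughly, what it might look like. Just a sketch, assuming the same environmentvariabledefinition / environmentvariablevalue entities as above; the input/output argument names are mine:

using System.Activities;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Sdk.Workflow;

public class GetEnvironmentVariableActivity : CodeActivity
{
    [RequiredArgument]
    [Input("Schema Name")]
    public InArgument<string> SchemaName { get; set; }

    [Output("Value")]
    public OutArgument<string> Value { get; set; }

    protected override void Execute(CodeActivityContext executionContext)
    {
        var workflowContext = executionContext.GetExtension<IWorkflowContext>();
        var factory = executionContext.GetExtension<IOrganizationServiceFactory>();
        var service = factory.CreateOrganizationService(workflowContext.UserId);

        var schemaName = SchemaName.Get(executionContext);

        // Load the definition (for the default value) together with any current value record
        var query = new QueryExpression("environmentvariabledefinition")
        {
            ColumnSet = new ColumnSet("defaultvalue")
        };
        query.Criteria.AddCondition("schemaname", ConditionOperator.Equal, schemaName);
        var valueLink = query.AddLink("environmentvariablevalue", "environmentvariabledefinitionid",
            "environmentvariabledefinitionid", JoinOperator.LeftOuter);
        valueLink.EntityAlias = "val";
        valueLink.Columns = new ColumnSet("value");

        var definition = service.RetrieveMultiple(query).Entities.FirstOrDefault();
        var currentValue = definition?.GetAttributeValue<AliasedValue>("val.value")?.Value as string;

        // Fall back to the default value when no environment-specific value is set
        Value.Set(executionContext, currentValue ?? definition?.GetAttributeValue<string>("defaultvalue"));
    }
}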

 

With all that said, I think it’s a good thing the product team is moving in that direction, since, in the end, having some kind of solution-aware configuration entity which everyone is using would be really great.

Unless you have other thoughts? Let me know…

Workflow “TimeOut Until” vs Flow “delay until”


I was looking for a way to implement “timeout until” in Microsoft Flow, and, although there seems to be a way to do it, there is at least one caveat.

First of all, here is the problem. Let’s say I wanted to send a notification email (or assign a case to somebody) on the Follow Up By date:

image

In the classic workflows, I would set up a workflow like this:

image

Once the workflow reaches that step, it would be postponed until the follow up date:

image

Where it becomes interesting is what happens if I update that date on the case:

image

Looking at the same workflow session, I can see that the new date has been reflected there – exactly the same workflow session is now postponed till Nov 11:

image

Which takes care of a lot of “date-based” notification scenarios.

Can we do it with Microsoft Flows? Well, sort of. There is “delay until” in Flows, so a Flow which seems similar to the workflow above might look like this:

 

 

image

Looks good? Let’s move the follow-up date to the 15th of November, then.

The classic workflow is, now, waiting for the 15th:

image

The Flow is still waiting till the 12th, though:

image

Although, since the Flow is configured to start on the update of the “Follow Up” date, there are, actually, two Flows running now:

image

One of those flows is waiting till the 15th, and the other one is waiting till the 12th.

There are, also, multiple workflow sessions running (for the same reasons – every update/create would start a session):

image

The difference between those is that “delay until” conditions in the Flows are not updated with the modified date value, so each of those Flows is waiting for a different date/time, but all classic workflows have picked up the new date and are, now, waiting for exactly the same date/time.

Having multiple workflows or Flows trying to take the same action against the record might be a bit of a problem either way (unless it does not matter how many times the action happens). But, more importantly, where classic workflows will run on time as expected, each Flow might be taking that action at a different time, and the “context” for the action might be different. For all we know, that “follow up date” may have moved by a week, but some instance of the Flow will still be using the original follow-up date.

In theory, I guess we could add a condition to the flow to ensure that the flow should still take the action – possibly by checking whether the planned time is within a few minutes of the current time:

image

There is, also, a 30-day limit on flow runs, so this may only work for short-term reminders.

For everything else, it seems we should start using the “recurrence” trigger – on every run, we would need to query the list of records that are past the “due date”, then use “for each” to go over all those records and take the action. Although, if we wanted those actions not to be delayed by a few hours, we would likely have to set the recurrence to run more often (for example, have the flow start every hour). Or, for the time being, we might keep using classic workflows in these kinds of scenarios, though this would not be the recommended approach.
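Just to illustrate the shape of that recurrence-based logic (in a Flow it would be a Recurrence trigger, a “List records” step filtered on the follow-up date, and an “Apply to each” loop), here is the equivalent expressed in C# against the Organization Service. The ita_reminderprocessedon field is a hypothetical marker added so the same case is not picked up twice:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class FollowUpReminders
{
    public static void Run(IOrganizationService service)
    {
        // All active cases whose "Follow Up By" date is already in the past
        var query = new QueryExpression("incident")
        {
            ColumnSet = new ColumnSet("title", "followupby")
        };
        query.Criteria.AddCondition("followupby", ConditionOperator.LessEqual, DateTime.UtcNow);
        query.Criteria.AddCondition("statecode", ConditionOperator.Equal, 0); // active cases only

        foreach (var incident in service.RetrieveMultiple(query).Entities)
        {
            // Take whatever "due date" action is required here - send an email,
            // re-assign the case, etc. (left out of this sketch), then stamp the
            // record so the next run does not pick it up again
            var update = new Entity("incident", incident.Id);
            update["ita_reminderprocessedon"] = DateTime.UtcNow; // hypothetical custom field
            service.Update(update);
        }
    }
}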

 
