
XRM, CDS, Microsoft Dataflex… What’s in the name?


image

So, CDS is DataFlex now. Actually, it’s important to call it Microsoft Dataflex since there is a separate non-Microsoft product called DataFlex.

It’s the second rebranding of what we used to know as XRM, but it may be the one that will stick for a little longer (I know what you are thinking… but we should have some faith).

To start with, I never really understood what Common Data Service meant. In the last few years, I’ve been on a lot of projects utilizing CDS, but, even when those projects were for the same client, most of the data was unique. There might be some Common Data Model, but that data model would still be customized and extended for every project's needs. And, besides, what if we take it one step further and look at the data utilized by different clients? We will likely see different models, and we will certainly see different data.

Then why call it “common” if it’s not that common at all?

That said, I think the intention was to really start building around a common data model when CDS/CDM first came out. It’s just not necessarily how things worked out, it seems.

In that sense, XRM might make more sense even today, but XRM has a different flavour to it. It has a lot of “CRM” legacy, and what became of it has very little to do with CRM.

This is why, leaving our personal preferences and attachments aside, neither CDS nor XRM seem to be the right name for this product/service.

Is it different with Microsoft Dataflex? Quite frankly, the main difference seems to be that it does not have any meaning embedded into it except that it’s dealing with the data, and it’s a product from Microsoft. From that standpoint, it could easily be called something else, and it would not matter. New services may be added and existing services can be removed, the product may keep shifting its focus, licensing may keep changing, Microsoft may choose to rename Common Data Model to something else… None of that would automatically imply another renaming for Microsoft Dataflex at this point.

Which might actually be great for everyone in the long term.

But, for the time being, it seems we all just got some work to do. Updating the slide decks, writing these blog posts, making the clients aware of this new naming, etc. Still, for the reasons I mentioned above, I’m hoping that, once the dust settles, this name will stay for a little longer :)


Setting up sprints for dev/qa in Azure DevOps


It seems to be a very common situation: a project is trying to implement SCRUM, but it can never really do it because of the inherent problem which occurs when there is a dedicated QA team.

SCRUM assumes QA happens within the sprint, but, when the SCRUM development team is different from the QA team, it can get really messy.

Since, of course, QA can’t start till development is done. Which means QA usually starts close to the end of the development sprint, and that leads to all sorts of workarounds, each of which eventually breaks the SCRUM process:

  • There might be 3-week sprints with the last week being reserved for the QA. But what are the developers supposed to do during that last week?
  • QA team may choose to do their work outside of SCRUM; however, if “QA test passed” is part of the definition of done for the dev team, they’ll never be able to close work items within their sprints, and, at the very least, that’s going to render burndown charts useless

 

The problem here is, usually, that it’s almost impossible to blend traditional QA and development within the same sprint, and, one way or another, we end up with the diagram below if we are trying to complete both development and QA within the sprint:

image

So what if we treated DEV and QA processes as two different SCRUMs?

That would require a few adjustments to the definitions, but, I guess, we could think of it this way:

  • The dev team would still be responsible for their own testing, and they’d be doing it within dev sprints. This might not be the same level of comprehensive testing a QA team can deliver, but, as far as the Dev team is concerned, there would be release-ready candidates after each sprint; it’s just that they’d be released to the QA team first
  • QA team would be responsible for comprehensive testing, possibly for adding test automation, etc. They’d be doing it within their own set of sprints, and, by the end of each sprint, they’d have added known issues to the release candidate provided to them by the dev team. That release candidate (with the associated list of known issues) could be made available to the end users

 

Of course, this would assume some level of communication between the two teams, but this is where Azure DevOps can really help because:

  • Within the same project, we can create multiple teams
  • If we choose so, each team will be able to see the same work items
  • Each team can have its own set of iterations

 

And, of course, we can have joint sprint reviews, scrum of scrum meetings, etc.

So, how do we set it up in Azure DevOps? Here is an example:

For a Scrum project in Azure DevOps, create a Dev team and a QA team

image

There will always be a default project team as well, btw. You can use that one in place of one of the other ones if you want.

Configure project iterations

image

Notice how the DEV and QA sprints are aligned. They don’t have to be.

For my example, I’ve set up each team to have access to the default area AND to all sub-areas (sub-areas are included in the screenshot below)

image

Dev team should have access to the “dev” set of iterations

image

QA team should have access to the QA set of iterations

image

With the setup done, I can now go to the sprint boards for the Dev Team (notice the team selection at the top), and add a work item to the board + a task:

image

There would still be nothing on the taskboard for the QA team:

image

By the end of the sprint, my task #5077 would move to the “done” state on the Dev Team taskboard:

image

And I (as a developer) could create another task for the QA and put it into the upcoming QA sprint (which is not quite in compliance with SCRUM principles… but the QA team can, then, decide to take those items off the upcoming sprint and handle them in a later sprint):

image

Now, if I look at the QA sprint taskboard, here is what shows up:

image

And there you go: the Dev Team will be working on their own set of sprints, and the QA Team will be working on theirs. Each team can plan their work, they don’t depend on each other when closing the sprints, and it’s just that the “release candidate” has to go through a two-step process now:

image

July 16 PowerStorm session summary & lessons learned


Have you heard of the PowerStorms? As Microsoft keeps changing the application development climate for the better, some fascinating things start happening around the Globe. Four of us happened to find ourselves right in the center of one of those on July 16 – that’s when the first ever PowerStorm session happened!

There were four of us to witness it: Arjun, Greg, Linn, and myself.

By the way, between the four of us, we had a 16-hour time difference, and nobody was in the same time zone.

Originally, I thought this would be more of a training-like event, but, since there was no set agenda other than “let’s do something interesting with Power Platform”, I guess I just wanted to see how it works out.

So… Arjun brought up a problem to solve, Greg was absolutely instrumental in organizing the process, Linn and I were mostly messing with the Flow, and, despite all this… we ended up not being able to solve the problem at hand.

How so?

Well, we just wanted to achieve a seemingly simple goal of creating a Flow that would trigger whenever a new file is added to a blob and that would post that file to a Teams channel, so it all looks like this:

image

As a result of this, we were hoping to get a message posted to Teams which would have a link to the uploaded file:

image

What we found out is:

  • We can use Azure Blob Storage connector to trigger our Flow
  • We can actually connect to a completely different tenant (since we did not have Blob Storage in the trial environment we created that time)
  • We can also use that connector to retrieve the file
  • We can use the SharePoint connector to upload that file to the Team channel’s SharePoint folder
  • And we can use Teams connector to post a message

What we have not quite figured out is how to display that file link in the same manner it’s done on the screenshot above. It must be something simple we’ve missed? Adaptive Cards, maybe? Have no idea how to use them yet :)
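If it is indeed adaptive cards, one thing we could try next time is a simple card with an Action.OpenUrl button pointing at the uploaded file. Here is a minimal sketch of that idea (we did not build this during the session – the text, the file name, and the URL below are just placeholders, and I have not verified it renders exactly like the screenshot above):

// A minimal adaptive card sketch (placeholder values only) - the idea would be to post
// this JSON through the Teams connector and let the "Open file" button point at the
// SharePoint URL of the uploaded file
const fileCard = {
  type: "AdaptiveCard",
  version: "1.2",
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  body: [
    { type: "TextBlock", text: "New file uploaded", weight: "Bolder" },
    { type: "TextBlock", text: "report.docx", wrap: true } // file name from the blob trigger
  ],
  actions: [
    {
      type: "Action.OpenUrl",
      title: "Open file",
      url: "https://contoso.sharepoint.com/sites/Team/Shared%20Documents/report.docx"
    }
  ]
};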

Anyway, it seems there are really a few ways to conduct these sessions, so it’s something to think about in my spare time.

In the meantime, there are a few other lessons learned:

  • If we are to do another hackathon-style session, I should have the trial ready in advance. Otherwise, it can easily take half an hour just to set everything up and to add participants as trial users
  • For those interested in the E5 trial licenses, you might also want to look at the following link: http://aka.ms/m365devprogram These kinds of tenants won’t have D365 instances, but you will get everything that comes with E5, including Power Apps for Office 365 (https://docs.microsoft.com/en-us/office/developer-program/microsoft-365-developer-program-faq). These developer training instances are for 90 days, and they can be extended. Although, they are not necessarily the best option for Power Platform trainings/hackathons

Well, it was a good 3 hours of learning/trying/brainstorming. We have not solved the problem, and it’s still bugging me, but I’ve definitely learned quite a few things.

Thank you folks, hope to see you around next time!

PowerStorm watch for July 30 – Adaptive Cards possible


Hey everybody – another PowerStorm watch has just been confirmed, and, apparently, it’s going to happen on Thursday, July 30, at 9 PM EST.

 

 

According to the itaintboring powerologists, here are some examples of the conditions you may expect during the event:

  • Different ideas of using Adaptive Cards with Power Platform will be floated around
  • The session will start with a quick overview of the adaptive cards
  • Following the overview, possible scenarios of adaptive cards usage will be reviewed
  • We will have a quick brainstorming session to see what other ideas we may come up with (hopefully, those who end up in the center of this event will feel recharged enough to start generating ideas).
  • Finally, and this may depend on the experience of the folks attending the event, we will try building out a few samples of using adaptive cards in the Flows/Teams/Canvas Apps/Model-Driven Apps

 

There are still a few slots available – register now!

Adaptive Cards – PowerStorm session findings


Just had a really cool PowerStorm session with Aric Levin and Linn Zaw Win. You would think that’s not a lot of people, and there would be no argument there, but, that said, the idea of those sessions is not to do a demo/presentation, but, rather, to try something out together.

Long story short, I like how it worked out, since we’ve managed not only to run into a bunch of issues along the way, but, also, to resolve them. Which is exactly what makes up the experience.

So, fresh off the storm, here is my recollection of what we’ve learned today:

1. What are adaptive cards?

“Adaptive Cards are platform-agnostic snippets of UI, authored in JSON, that apps and services can openly exchange. When delivered to a specific app, the JSON is transformed into native UI that automatically adapts to its surroundings. It helps design and integrate light-weight UI for all major platforms and frameworks.”

 

To be a little more specific, we can use adaptive cards with Teams, Outlook, Bots, Javascript, etc. We can even create a PCF control to render adaptive cards in the canvas apps/model-driven apps (there is an example here).

To be absolutely specific, here is an example of the rendered adaptive card:

image

You can have a look at the json for that card here: https://adaptivecards.io/samples/CalendarReminder.html

Which brings me to the next point.

2. There is an adaptive cards designer

Using the adaptive cards designer, you can quickly build your own adaptive cards

It’s worth mentioning that different host apps (Teams, Outlook, etc) may be using slightly different schemas for the adaptive cards; however, the adaptive cards designer is aware of those differences, and this is exactly why it allows us to select a host app:

image

For instance, Outlook allows the usage of Adaptive Cards to create so-called actionable messages, and there is a special action called Action.Http which we might use to post card data to a URL. That action is only available in Outlook, and it won’t work anywhere else. An adaptive card meant for Teams, on the other hand, might use the Action.Submit action, but would not be able to use the Action.Http action.
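To make that difference a bit more concrete, here is a rough sketch of what the actions part of the card JSON might look like in each case (the titles, data, and URL are placeholders, not something we built during the session):

// For Teams (and the Power Automate "wait for a response" actions) - Action.Submit
const teamsActions = [
  { type: "Action.Submit", title: "Approve", data: { decision: "approve" } }
];

// For Outlook actionable messages only - Action.Http posts the card data to a URL
const outlookActions = [
  {
    type: "Action.Http",
    method: "POST",
    title: "Approve",
    url: "https://example.com/api/approve", // placeholder endpoint
    body: "{\"decision\":\"approve\"}"
  }
];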

3. So, how do you send an adaptive card to Teams?

We were using PowerAutomate Flows during this session. Which is, of course, just one of the options.

Still, in order to send an adaptive card from the Flow, we need to use a connector. With Teams, it turned out to be relatively straightforward – there are a few actions we can use:

image

There are actions to send adaptive cards to a user or to a channel. And, for each of those, you can choose to wait for the response (in which case the Flow will pause) or not to wait for the response (in which case the Flow will continue running)

There are a few caveats there:

When a card is sent to a channel, a Flow that’s set up to wait for the response will resume after the first response.

When a card is sent to multiple users from the same flow, you can either do a “for each” loop to send the cards concurrently, or you can send them one after another. In the first case, all users will see the card right away. However, the Flow will still have to wait for everyone’s response.

In the second case, adaptive cards will be showing up sequentially. Once the first user provides their response, the Flow will continue by sending the same card to the second user, then it will wait for that user to respond, and so on.

Which means it might be challenging to implement a Flow which will be sending a card to multiple users, but which will be analyzing each and every response as those responses start coming in (without waiting for all of them first).

Because, as it turned out, we can’t terminate a Flow from within the foreach.

So that’s one of the challenges we did not have time to dig into.

4. And how do you send an adaptive card by email?

There are a few good resources:

https://docs.microsoft.com/en-us/outlook/actionable-messages/adaptive-card

https://spodev.com/flow-and-adaptive-cards-post-1/

Sending an adaptive card by email proved to be extremely simple and, yet, quite complicated at the same time:

image

Btw, pay attention to that script tag – it’s important.

Anyway, originally we tried sending an adaptive card without that highlighted originator attribute. It worked… but it only worked when an email was sent to ourselves. I could send an email to Aric, and he would not see the adaptive card. Aric could send an email to Linn, and Linn would not see the card. But, when I was sending an email to myself, it was all working. It was the same for Linn and Aric.

It did not take long for Aric to find a page talking about the security requirements:

https://docs.microsoft.com/en-us/outlook/actionable-messages/security-requirements

Then we’ve also found this excerpt:

https://docs.microsoft.com/en-us/outlook/actionable-messages/email-dev-dashboard

image

While Aric and I were messing with the Flow, Linn was trying a different approach. He found the actionable messages debugger for Outlook:

https://appsource.microsoft.com/en-us/product/office/WA104381686?tab=Overview

image

Once it was installed, we could finally see the error:

image

So, it was a problem with the security. We needed to set that originator field. And the url in that message led us straight to where we needed to register a new originator:

https://outlook.office.com/connectors/oam/publish

So we did, and, once we had the originator id, we put it in the adaptive card json:

image
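For reference, here is roughly where that attribute ends up in the card payload (a sketch from memory – the guid below is just a placeholder for the provider id you get from the registration page):

const actionableCard = {
  type: "AdaptiveCard",
  version: "1.0",
  originator: "00000000-0000-0000-0000-000000000000", // provider id issued at https://outlook.office.com/connectors/oam/publish
  body: [
    { type: "TextBlock", text: "PowerStorm test card" }
  ]
};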

That was the last step, after which the card started to show up for all 3 of us no matter who was sending it.

5. What have we not tried?

Of course there is probably more we have not tried than what we have tried. One thing I am thinking of trying on my own (unless somebody does it before) is creating a Flow which would be triggered by a POST http request (sent “from” the outlook actionable message). This would allow such a Flow to kick in once a user responds to the adaptive card, and, essentially, that would mean we can use actionable messages to create a completely custom email-based approval workflow.
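I have not built that Flow yet, so treat this as a sketch of the idea only: the actionable message would carry an Action.Http action pointing at the Flow’s HTTP trigger URL (both the URL and the body below are placeholders):

// Sketch only: an Outlook actionable message action that would POST the user's response
// to an HTTP-triggered Flow (the trigger URL below is a placeholder)
const approvalAction = {
  type: "Action.Http",
  method: "POST",
  title: "Approve",
  url: "https://<flow-http-trigger-url>",
  body: "{\"decision\":\"approve\"}"
};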

Anyway, those turned out to be 2.5 hours where I learnt quite a bit, so this session format seems to make sense. Will try it again in a couple of weeks, so stay tuned.

Flow connections and CI/CD


I am wondering: can Power Automate flows really be part of CI/CD when there are non-CDS connections?

There seem to be a few problems here:

  • Once deployed, the flow is turned off, and all non-CDS connections have to be re-wired in order to turn it on. That’s a manual step
  • While re-wiring the connections, we’ll be creating an unmanaged customization for a managed flow (assuming all deployments are using managed solutions)

The first item undermines the idea of fully automated deployments.

The second item means that we might not be able to deploy flow updates through a managed solution unless we remove the unmanaged customizations (or the flow) first.

 

Here is what the flow looks like once it’s been deployed through a managed solution:

image

It’s off, since, in addition to the CDS (current environment) connector used for the trigger and one of the actions, there is an Office 365 Outlook connector in that flow, and the connection needs to be re-wired for that one:

image

If I tried turning the Flow on in the target environment, I’d get this error:

image

So… Have to edit the flow, and, to start with, have to sign into that Outlook connection:

image

Surprisingly, I can’t. Well, I can’t from the managed solution. Which is not that surprising, come to think of it, but still…

From the default solution, I can do it:

image

The CDS connection re-wires automatically once I click “continue” (even though, presumably, it does not need to. At least, it does not need to be re-wired when there are other connections in the Flow), and, now, I can activate the Flow.

image

So far, it seems, I’ve just managed to demonstrate how automated deployment becomes broken.

But what about those unmanaged customizations?

Well, by re-wiring the connections, I got an unmanaged customizations layer for the Flow:

image

What if the Flow were updated in the source environment?

For example, let’s change the email body. It used to be like this in the first version:

image

Let’s make it slightly different:

image

Once deployed to the target environment, the Flow is on. But that email action is still using the original text:

image

Now, when importing the solution, we have a few options. What if I used the one which is not recommended?

image

This will take care of the updates, but the flow will be turned off. Because, I’m assuming, those connections were originally fixed in the unmanaged layer, and now at least some of those changes have been rolled back. Which means the connections have to be re-wired again before I can turn on the flow.

From the CI/CD perspective, this all seems to be a little cumbersome, so I am wondering: how is everybody else doing CI/CD with flows?
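For what it’s worth, one partial workaround I’ve been considering (just a sketch, not something I’ve fully tested as part of a pipeline) is to activate the flow right after deployment with a script against the Web API – modern flows are stored in the workflow table, so setting the state should turn them back on, assuming the connections have already been re-wired:

// A sketch: activate a flow after solution import by updating its workflow record
// (orgUrl, flowId, and the access token are placeholders / assumptions)
const orgUrl = "https://yourorg.crm.dynamics.com";
const flowId = "00000000-0000-0000-0000-000000000000"; // workflowid of the flow
async function activateFlow(accessToken) {
  const response = await fetch(`${orgUrl}/api/data/v9.1/workflows(${flowId})`, {
    method: "PATCH",
    headers: {
      "Authorization": `Bearer ${accessToken}`,
      "Content-Type": "application/json",
      "OData-MaxVersion": "4.0",
      "OData-Version": "4.0"
    },
    body: JSON.stringify({ statecode: 1, statuscode: 2 }) // 1/2 = Activated
  });
  if (!response.ok) {
    console.log(await response.text()); // e.g. the "connection not configured" error shown above
  }
}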

Business rules and editable subgrids


What seems to be the most popular reason why a business rule would not be working?

There is very little that can really break in the business rules, except for one thing: we can include certain fields into the business rule conditions, and, then, forget to add those fields to the context (which can be a form, or it can also be an editable grid).

When working with the forms, we can always make a field hidden, so it won’t be visible, but it will still allow the business rule to work.

When it comes to the editable grids, though, it seems to be just plain dangerous to use the business rules.

Because:

  • Editable grids are using views
  • Those views can be updated any time
  • Whoever is updating the views will, sooner or later, forget (or simply won’t know) to add a column for one of the fields required by the business rules

 

And voila, the business rule will not be working anymore. What’s worse, this kind of bug is not that easy to notice. There will be no errors, no notifications, no signs of a problem at all. Instead, you’ll suddenly realize something is off (and you might not even know why it’s off by that time)… or, maybe, it’s the users who will notice long after the changes are in production…

This just happened to me again today – there is an editable subgrid for an entity, and that subgrid shows up on two different forms (even more, those forms are for different entities). There is an attribute that must be editable on one of the forms, but it should be read-only on the other form. The condition in my business rule would have to look at whether there is data in a certain lookup field, and that would only work if I had that lookup field added to the subgrid. Which means the interface would become more crowded, so the users would immediately want to get rid of that column.

Anyway, this is exactly why I removed a couple of business rules from the system just now and  replaced them with the following javascript:

// Attached to the subgrid's OnRecordSelect event (only on the forms where the attribute should be read-only)
function onFeeSelect(executionContext) {
	// For grid events, getFormContext() returns the context of the selected grid row
	var gridContext = executionContext.getFormContext();
	if (gridContext.getAttribute("<attribute_name>") != null) {
		// Disable the cell for the selected record
		gridContext.getAttribute("<attribute_name>").controls.get(0).setDisabled(true);
	}
}

 

That script is now attached to the OnRecordSelect subgrid event, and only on the forms where I need it.

And this should do it – users will no longer be updating that attribute in the editable subgrid on that particular form.

 

Using flow-generated word documents in model-driven apps


Document Templates have been available in model-driven apps for a while now – they are integrated with the model-driven apps, and it’s easy for the users to access them.

They do have limitations, though. We cannot filter the related records, we can only use 1 level of relationships, for each relationship we can only load 100 records max, etc.

There is a “Populate a Microsoft Word Template” action in Power Automate, which might be even more powerful, but the problem here is that it’s not quite clear how to turn this into a good user experience. We’d have to let users download those generated documents from the model-driven apps somehow, and, ideally, the whole thing would work like this:

image

So, while thinking about it, I recalled an old trick we can use to download a file through javascript: https://www.itaintboring.com/dynamics-crm/dynamics-365-the-craziest-thing-i-learned-lately/

It proved to be quite useful in the scenario above, since, in the end, here is how we can make it all work with a PCF control:

 

As usual, you will find all the source code on github:

https://github.com/ashlega/ITAintBoring.PCFControls

For this particular component, look in the ITAWordTemplate folder.

If using the component as is, you’ll need to configure a few properties. In a nutshell, here is how it works:

  • You will need a flow that is using HTTP request trigger
  • You will need to configure that trigger to accept a docId parameter (there is a schema sketch right after this list):

image

  • After that, you can do whatever you need to generate the document, and, eventually, you’ll need to pass that document back through the response action:

image

Here is a great post that talks about the nuances of working with the “word template” action (the Flow above is, apparently, a much simpler version):

https://flow.microsoft.com/en-us/blog/intermediate-flow-of-the-week-create-pdf-invoices-using-word-templates-with-microsoft-flow/

  • Then you will need to put ITAWordTemplate component on the form, configure its properties (including Flow url), and that’s about it
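Going back to the trigger configuration mentioned in the second bullet, the request body JSON schema can be as simple as this (a sketch – your Flow may, of course, expect more parameters):

// Request body JSON schema for the "When a HTTP request is received" trigger (sketch)
const requestBodySchema = {
  type: "object",
  properties: {
    docId: { type: "string" } // the record id passed in by the PCF control
  }
};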

 

Technically, most of the work will be happening in these two javascript methods:

public downloadFile(blob: any) {
	if (navigator.msSaveBlob) { // IE 10+
		navigator.msSaveBlob(blob, this._fileName);
	} else {
		var link = document.createElement("a");
		if (link.download !== undefined) {
			// Create a temporary object url for the blob and "click" a hidden link to trigger the download
			var url = URL.createObjectURL(blob);
			link.setAttribute("href", url);
			link.setAttribute("download", this._fileName);
			link.style.visibility = 'hidden';
			document.body.appendChild(link);
			link.click();
			document.body.removeChild(link);
		}
	}
}

public getFile() {
	// Pass the current record id to the Flow through the docId parameter
	var docId: string = this.getUrlParameter("id");
	var data = {
		docId: docId
	};
	fetch(this._flowUrl, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json'
		},
		body: JSON.stringify(data)
	})
		.then(response => response.blob())
		.then(blob => this.downloadFile(blob))
		.catch(error => console.log(error));
}

 

Just one note on the usage of “fetch” (it has nothing to do with FetchXML, btw). At first, I tried using XMLHttpRequest, but it kept breaking the encoding, so I figured I’d try fetch. And it worked like a charm. Well, it’s the preferred method these days anyway, so there you go – there is no XMLHttpRequest in this code.

One question you may have here is: “what about security?” After all, that’s an http request trigger, so it’s not quite protected. If that’s what you are concerned about, there is another great post you might want to read: https://demiliani.com/2020/06/25/securing-your-http-triggered-flow-in-power-automate/


Add intelligent File Download button to your model-driven (or canvas) apps


How do we add a file download button to the model-driven apps? To start with, why would we even want to do it?

There can be some interesting scenarios there, one being to allow your users to download PowerAutomate-generated word templates (see my previous post).

That, of course, requires some custom development, since you may want to pass the current record id and/or other parameters to the API/url that you’ll be using to download the file. You may also need to use different HTTP methods, you may need to specify different titles for that button, and you may need to have the downloaded file name adjusted somehow.

So, building on my earlier post, here is another PCF control – it’s a generic file download button this time (which we can also use with PowerAutomate):

 

downloadbutton.pcf

Unlike the earlier control, this one has a few other perks:

  • First of all, there is a separate solution (to make it easier to try)
  • Also, the download url is represented by 3 parameters this time. This is in case the url is longer than 100 characters (just split it as needed between those 3 parameters) – it seems this is still an issue for PCF components
  • There is HTTP method parameter (should normally be “GET” or “POST”. Should be “POST” for PowerAutomate flows)
  • In the model-driven apps, you can use attribute names to sort of parameterize those parameters (just put those names within ## tags). You can also use the “id” parameter, which is just the record id

Here is an example of control settings – notice how file name template is parameterized with the ita_name attribute:

image

Last but not least, this PCF control can work in two modes: it can download the file, or it can open that file in a new tab. What happens after that depends on whether the file can be viewed in the browser, so, for example, a pdf file will show up in the new tab right away:

You can control the component’s behavior through the highlighted parameter below – use “TRUE” or “FALSE” for the value:

To add this control to your forms, just put some text field on the form, and replace the out-of-the-box control with the ITAFileDownloadButton.

The source code is on github: https://github.com/ashlega/ITAintBoring.PCFControls/tree/master/Controls/ITAFileDownloadButton

And here is a link to the packaged (unmanaged) solution:

https://github.com/ashlega/ITAintBoring.PCFControls/raw/master/Controls/Deployment/Solutions/ITAFileDownloadButtonSolution.zip

Connection references


Connection references have been released (well, not quite, but they are in public preview, which is close enough), and, from the ALM support perspective, it might be one of the most anticipated features for those of us who have been struggling with all those connections in the Flows (chances are, if you are using Flows, and if your Flows are doing anything other than connecting to the current CDS environment, you have most likely been struggling).

The announcement came out earlier today:

https://powerapps.microsoft.com/en-us/blog/announcing-the-new-solution-import-experience-with-connections-and-environment-variables/

And, right away, when looking at the connections in my newly created Flow, I see connection references instead of connections:

image

Which is, honestly, a very pro-dev way of calling things, but, I guess, they should have been called differently from the former connections… and there we go, there are connection references now. Still, that captures the nature of this new thing quite accurately.

It’s interesting that my older Flows are still using “Connections”, not “Connection References”:

image

Yet, it does not matter whether I am adding new actions or updating existing ones. It seems older Flows just keep using connections.

This can be solved by re-importing the Flow (unmanaged in my case), though:

image

Not sure if there is an easier way to reset the Flow so it starts using connection references, but I just added it to a new solution, exported the solution, deleted both the Flow and my new solution, then imported it back.

By the way, I made a rookie mistake while trying it all out. When I tried importing my new solution to another environment, I did not get that connections setup dialog.

This is because I should have included connection references into the solution to get it to work:

image

Yeah, but… well, I added my connection reference, and it just did not show up. Have to say PowerApps were a bit uncooperative this afternoon:

image

Turned out there is a magic trick. Instead of using “All” filter, make sure it’s “Connection References”:

image

Now we are talking! And now I’m finally getting the connections setup dialog when importing my solution to another environment:

image

Although, to be fair, maybe I did not even need connection references for CDS (current environment). But, either way, even if only for the sake of the experiment :)

PS. As exciting as it is, the sad part about this last screen is that we finally have to say farewell to the classic solution import experience. It does not support this new feature, and, so, as of now it’s, technically, obsolete. You might still want to do some of the things in the classic solution designer, but make sure you are not using it for import.

For example, here is a Flow that’s using Outlook connector. I just imported a managed solution through the new solution import experience. My flow is on, and there is a correctly configured connection reference in it:

image

When the same solution is imported using classic experience, the flow is off:

image

From “just make it work” to Low-Code to Pro-Dev


A few years ago, there was a common mantra on pretty much any project I was on:

“Stick to the configuration. There should be no javascripts and/or plugins”

This was happening since quite a few people had run into problems with those low-level customizations in the past. Which is understandable – to start with, you need somebody who can support those customizations moving forward, and, quite often, even the plugin source code would have been lost.

That’s about when Microsoft came up with the concept of “low code” – those are your Canvas Apps and Microsoft Flow (which is Power Automate now). At first, the idea seemed quite ridiculous, but, by constantly pushing the boundaries of low code, Canvas Apps and Power Automate have turned into very powerful tools.

Which did not come without some sacrifices, since, if you think “low code” means “low effort”, that is not always the case anymore. Learning the syntax and figuring out various tricks and limitations of those tools takes time. Besides, “low code” is not the same as “no code” – just think about all that json parsing in Power Automate, organizing actions into correct sequences, writing up canvas app formulas, etc. And it presents other problems – what somebody can do easily with a few lines of code may actually require a few separate actions in Power Automate or a tricky formula in Canvas Apps. Does it save time? Not necessarily. Does it open up “development” to those folks who would not know how to create a javascript/.NET app? For sure.

In the meantime, plugins and custom workflow activities were still lingering there. Those of us not afraid of these monsters kept using them to our advantage, since, for instance, there are situations when you need synchronous server-side logic. Not to mention that it may be faster and easier to write a for loop in .NET than to do it in Power Automate. But, it seemed, those technologies were not quite encouraged anymore.

On the client side, we got Business Rules. Which were supposed to become a replacement for various javascript web resources… except that, of course, it did not quite work out. The Business Rules designer went through a few iterations and, eventually, got stuck at the point where it’s only usable for simple scenarios. For example, if I have 20 fields to lock on the form, I’ll go with javascript vs the business rules designer, since it would be faster to do and easier to support. For something simpler, though, I might create a business rule.

But then we got PCF components, and, so, the whole “low code” approach was somewhat ditched.

How come?

Well, think of it. There are lots of components in the PCF gallery, but none of the clients I know would agree to rely on the open-source code unless that code is, somehow, supported. And, since a lot of those components are released and supported by PCF enthusiasts (rather than by Microsoft partners, for example), there is absolutely no guarantee that support will last.

At least I know I can’t even support my PCF components beyond providing occasional updates. Basically, if there is a bug… and if you discover it once the component is in production… you are on your own.

Which means anyone considering to use PCF components in their environments should assume that a pro-dev person would be required to support such solutions.

PCF is only one example, though. There has been a lot of emphasis on proper ALM and integration with DevOps in recent years, and those are topics which are pro-dev by definition.

What else… Custom Connectors? Data providers for Virtual Entities? Azure Functions to support various integrations and/or as extensions for the Apps/Flows? Web resources are still alive since there is no replacement (PCF-s were never meant to replace the web resources), and plugins are still there.

The whole concept of Dynamics CRM/D365/PowerApps development has come a full circle, it seems. From the early days when everything was allowed, all the way through the days when scared clients would insist on not having anything to do with the plugins/javascripts, and back to the point where we actually do need developers to support our solutions.

So, for now, see ya “no code”. I guess we’ll be there again, but, for the time being, we seem to be very much on the opposite side.

Reporting and document generation in Power Platform


For some reason, the part of Power Platform that used to be Dynamics CRM (and was, then, transformed into what’s called “first party” applications) has always been limited in its reporting and document generation capabilities. It’s still the case, it seems, and it’s a strange situation for an enterprise-grade platform. After all, what’s the point of having all that structured data at your disposal when you cannot even report on it or, for that matter, issue a detailed invoice to the client?

Although, you might not agree with what you just read, so I’ll try to explain.

Back in the early days, we had Advanced Find and SSRS for reporting, and we had Mail Merge for document generation.

Of course there was that reporting wizard (which is still there), but, quite frankly,  I don’t think it was ever up for the job.

The problem with all those “technologies” has always been that they were never user-friendly. There is no way a business user would ever want to write an SSRS report. Advanced Find has never been about reporting – it’s mostly a data query tool. Quite frankly, I don’t really remember a lot about Mail Merge, but, since it was deprecated  and replaced with document templates years ago, there is not a lot to talk about anyway.

In short, there was no user-friendly report generation tool which would be natively supported by Dynamics CRM in those early versions. And there was no user-friendly document generation tool either.

Even when using SSRS (and that would require a report developer), it was close to impossible to schedule reports, to send them by email, etc (on-premise was a little different, but does anyone remember there was… oh, wait… there still is… an on-premise version?)

In the recent years, we’ve got cloud version, then Power Platform, we got document templates, we still have SSRS, and we got Power BI.

But, if we take a closer look at all this variety of tools, here is what we’ll find:

  • Word Templates have tons of limitations (100 related records max, 1 level of relationships only, 1 root entity, no sorting or filtering on the relationships, etc)
  • It is possible to use Word Templates in Power Automate, but they don’t seem to support nested repeaters, and they are not that well integrated with model-driven apps
  • On the reporting side, it is still possible to create SSRS reports. With all the same limitations we always had there – no scheduling/automation, no “self-service” (need a developer). But, most importantly, SSRS authoring tools for Dynamics 365 have been pretty much abandoned by Microsoft – the last update was released in 2017, and, even then, it would only work with Visual Studio 2015 at best (I am writing this in Q3 of 2020). If it’s not the definition of being “abandoned” then what is?
  • There is still advanced find, of course. Which is not that much of a reporting tool
  • It’s possible to do certain things in Power Automate (by creating an HTML table, for example). But, again, this is not a reporting/templating tool at all

 

I guess this is where I should have mentioned Power BI, and this should have been the end of my post… if not for the licensing.

Power BI does not come with Power Platform or D365 licenses. Which means if we wanted to stick to Power Platform / D365 licenses only, we would have no reporting tool. Which is kind of interesting, since I’d argue that any decent enterprise implementation would need a reporting tool, and, given that SSRS does not look like a well supported option these days, Power BI seems to be the only other option.

However, where Power BI Pro might look relatively inexpensive, it’s not, necessarily, what we need. If we wanted to do the same kind of reporting we could do with SSRS (so, to generate PDF files, for example), we would have to get Power BI paginated reports.

Actually, there is a good hint there of why SSRS might never be coming back (I’m not saying it won’t be coming back to Power Platform, but I definitely would not bet on that happening any time soon):

image

If Power BI Report Builder is sharing the same foundation as SSRS, why even bother to keep supporting both?

However, what it all means is that if we wanted to use Power BI (and, again, it seems there is nothing else that would be available “out of the box” and that would be able to cover reporting / document generation needs in model-driven apps), we would have to

  • Get Power BI Premium for the organization (since that would allow the organization to generate pretty decent paginated documents)
  • Get Power BI Pro for every user who should be able to create and share those reports

 

image

A fair question would be: at which point does this all become cost-effective, and how big should the organization be to benefit from this? It seems it would have to have at least a few hundred users.

Those implementing Power Platform on a smaller scale are still stuck with the same old question: what tool should we use for reporting / document generation in model-driven apps?

Actually, I don’t have a good answer to that – there are some awesome third-party tools such as Xpertdoc, for example. However, the only reason they exist is that Power Platform itself is not offering those capabilities out of the box.

PS. In the next post, I’m going to demonstrate how to create an Azure function that may help with document generation (and that can be utilized from Power Automate), but, even so, that’s not going to answer the question above. This is one of the options I looked at on the project, and it might still be my  backup plan, but I’d very much prefer for those reporting / document generating needs to be covered out of the box.

PPS (Oct 2): that Azure Function post may have to wait a little – had to dig a little more into the paginated reports

Paginated reports in Power BI – not a replacement for SSRS yet?


There are a few reasons I still have not, really, touched Power BI (beyond some very basic things just to see what it’s capable of), but, having looked at my reporting options recently, I figured Power BI paginated reports might be what’s required on the project.

Luckily, it turned out we might be fine on the licensing side, so, having looked at it quickly this morning, I am now starting to realize a few things:

1. Paginated reports are, essentially, SSRS reports

You would recognize the interface, even though it’s a separate tool now:

image

You would also recognize the file extension:

image

Although, to be fair, what we used to call SSRS reports were, really, reports utilizing rdl file format, and they would be running on the SSRS server.

Now we have Power BI reports, since they would be using the Power BI Service… but they are still using the rdl file format. And that’s where Power BI Report Builder comes in, since it allows us to store those reports in the Power BI service:

image

That kind of clears things up, it seems. We are not talking about a new reporting format – we are talking about a different engine to run those reports, and, also, we are talking about a different tool we’d be using to build them.

2. There is no Fetch XML data source

image

Which immediately brings up the question of how we actually build paginated reports for CDS. With SSRS, we had report authoring tools for Dynamics 365; now we don’t have those, but we do have the Common Data Service (Preview) datasource.

3. The problem with CDS data source, at least right now, is exactly that it’s in preview

I tried it, it did not work, so I was looking for what could have gone wrong (especially since I did try it with SQL Management Studio about a month ago, and all worked well)… till I suddenly spotted this warning:

image

https://docs.microsoft.com/en-us/powerapps/developer/common-data-service/cds-sql-query

In a way, that’s an extreme example of why preview features should not be used in production – they are not covered by any SLA-s, so, in situations like this, preview features can be disabled. It’s kind of hard to believe, but that’s why it’s called “preview”.

This brings me back to the list of reporting/document-generating options I could use with model-driven applications… and the only working option right now is, actually, SSRS. I’m running in circles here, it seems.

PS. Sure we might use Azure SQL Database, but that would mean syncing CDS data to the database. Which might not be a huge issue, but that’s just extra work piling up, and, besides, there would be no integrated security (in the sense that it’ll be for the report developer to add query conditions to do data access validations)

Using OpenXml with Power Automate Flows


Earlier this month, I wrote a couple of posts about reporting/document generation. While working on it, I also tried one other option which you may find useful, too.

One of the biggest problems with Word Templates, be it the classic word templates in model-driven apps or the new populate word template action in Power Automate, is that there are always limitations.

However, if there is a wish, there is a way.

The Open XML SDK library has all we need to manipulate Word documents programmatically, so it’s entirely possible to even create custom templating functionality if we wanted to. Actually, I know some clients who did just that (to be honest, not sure if they used Open XML).

In this post, I will walk you through the process of building an Azure Function which will be accepting two word documents, and which will merge those two into one.

Once there is an Azure Function, we can use it in a Flow like this:

image

When the flow runs, it will merge two files into one. Here is what it looks like:

  • You will see each individual file first
  • Then I’ll run the Flow
  • And, as a result, a new file will be created and downloaded. That file will have the content of the other two merged

How does it work?

There is an Azure Function that’s called in that HTTP Request action. This function would extract documents from the incoming HTTP request:

image

And there is a helper class that’s handling the merge:

image

You will find complete source code here:

https://github.com/ashlega/ItAintBoring.Functions

I struggled a little bit figuring out how to pass file content to the Azure Function, so the screenshot below might help – it turned out the trick was to use [‘$content’] parameter of the output:

image

But, realistically, does it solve the problem of generating large word documents out of individual files? It turned out not really – I tried with a hundred individual documents, and this whole thing just timed out. But, as a proof of concept… it was an interesting exercise :)

Can’t see “Edit Filters” button in the classic solution designer for a view? Here is one reason why


There is, really, not a lot to write about it. The screenshot below says it all:

image

There is one link missing on the right side, which is “Edit Filter Criteria”. It does show up for some other views, though:

image

But, from what I could see, it disappears once the view (or, possibly, the filters) has been updated in the new designer.


Thursday rant – long ignored issues with the Admin UI


With Wednesday being a holiday (Remembrance Day in Canada), Thursday feels a bit like Monday. Which means I am still in the “holiday mode”, I don’t necessarily want to do anything at all, and, hence, some of the minor issues with Power Platform which should have been fixed long ago seem somewhat inflated on this bleak morning.

And there could be no better time to rant about it!

Why is it that there is no sorting in the list of solution components?

image

Why is it that the list above has components which are simply not actionable in the UI?

image

I can try creating a new relationship attribute, but here is all I get:

image

 

Why is it that security roles, field security profiles, web resources, option sets, site maps, and a few others are all grouped under “Other” in the list of solution components?

image

How come we can’t use a wildcard in the search?

image

Why is it that the highlighted columns are not sortable (while the other ones are)?

image

Is there any reason why web resources are given this strange customization type?

image

And where the heck are my javascript web resources this morning?

There are a few in the classic UI:

They do show up under “All” in the new UI (though I have to order by type and scroll to the right place):

But only one of them shows up under “other” – so what’s that “other”, then?

Well, there you go. I have missed a few for sure, but the rant is over – going back to work now!

Retrieving environment variable value in Javascript


The script below might help if you wanted to read a CDS environment variable value in your javascript web resource.

top.environmentVariables = [];

function getEnvironmentVariableInternal(varName){
  "use strict";
   top.environmentVariables[varName] = null;
   Xrm.WebApi.retrieveMultipleRecords("environmentvariabledefinition", `?$top=1&fetchXml=<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='true'>
	  <entity name='environmentvariabledefinition'>
		<attribute name='defaultvalue' />
		<filter type='and'>
		  <condition attribute='schemaname' operator='eq' value='` + varName + `' />
		</filter>
		<link-entity name='environmentvariablevalue' from='environmentvariabledefinitionid' to='environmentvariabledefinitionid' link-type='outer' alias='varvalue'>
		<attribute name='value' />      
		</link-entity>
	  </entity>
	</fetch>`).then(
		function success(result) {
			for (var i = 0; i < result.entities.length; i++) {
				// Prefer the overridden value (from the linked environmentvariablevalue record), fall back to the default value
				if (typeof (result.entities[i]["varvalue.value"]) != "undefined") {
					top.environmentVariables[varName] = result.entities[i]["varvalue.value"];
				}
				else if (typeof (result.entities[i].defaultvalue) != "undefined") {
					top.environmentVariables[varName] = result.entities[i].defaultvalue;
				}
				else {
					top.environmentVariables[varName] = null;
				}
			}
		},
		function (error) {
			console.log(error.message);
			
		}
	  );
	  
}

function getEnvironmentVariable(executionContext)
{
  "use strict";
   getEnvironmentVariableInternal("SCHEMA_NAME_OF_YOUR_VARIABLE");	
}

Just a couple of notes:

1. I’m using WebAPI + FetchXML to get the values

I think this is just because I’m so used to FetchXml that it’s my first choice. As Diana Birkelbach just noted, it should actually be easier with Web API. So I will be updating this post soon to add a Web API-only version of the same function.

2.  I’m storing variable value (default or overridden) in the top.environmentVariables array

This way, I can access that array from the script associated with a ribbon button (which is a completely separate script).
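For example, a ribbon button command could do something like this (the function name and the way the value is used below are hypothetical – adjust for your scenario):

// Hypothetical ribbon button handler: reads the value cached by getEnvironmentVariableInternal
// (assumes the form script above has already run for "SCHEMA_NAME_OF_YOUR_VARIABLE")
function onPrintButtonClick(primaryControl) {
	var flowUrl = top.environmentVariables["SCHEMA_NAME_OF_YOUR_VARIABLE"];
	if (flowUrl == null) {
		Xrm.Navigation.openAlertDialog({ text: "The environment variable has not been loaded yet." });
		return;
	}
	// Use the value - for instance, open the url stored in the variable
	Xrm.Navigation.openUrl(flowUrl);
}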

PS. As promised, here is an updated version that’s not using FetchXML:

top.environmentVariables = [];

function getEnvironmentVariableInternal(varName){
  "use strict";
   top.environmentVariables[varName] = null;
   Xrm.WebApi.retrieveMultipleRecords("environmentvariabledefinition", "?$select=defaultvalue,displayname&$expand=environmentvariabledefinition_environmentvariablevalue($select=value)&$filter=schemaname eq '"+varName+"'").then(
		function success(result) {
			for (var i = 0; i < result.entities.length; i++) {
				// Prefer the overridden value (from the expanded environmentvariablevalue records), fall back to the default value
				var values = result.entities[i]["environmentvariabledefinition_environmentvariablevalue"];
				if (typeof (values) != "undefined" && values.length > 0) {
					top.environmentVariables[varName] = values[0].value;
				}
				else if (typeof (result.entities[i].defaultvalue) != "undefined") {
					top.environmentVariables[varName] = result.entities[i].defaultvalue;
				}
				else {
					top.environmentVariables[varName] = null;
				}
			}
		},
		function (error) {
			console.log(error.message);
			
		}
	  ); 
}


function getEnvironmentVariable(executionContext)
{
  "use strict";
   getEnvironmentVariableInternal("coo_InvoicePrintFlowUrl");	
}

 

Think twice when using functions to filter your data sets in the Canvas Apps


Here is a warning message which I keep running into:

image

(The “Filter” part of this formula might not work correctly on large data sets)

It’s not that I keep running into it every day. But I find myself looking at this warning every time I’m setting up data sources for a new application.

So, maybe, if I write it a few times here, I’ll remember to do it right from the start the next time I’m doing it.

Think twice when using functions in the filter conditions

Think twice when using functions in the filter conditions

Think twice when using functions in the filter conditions

Think twice when using functions in the filter conditions

Now, if you are new to this, and if this does not make a lot of sense so far, let me explain.

Canvas Apps are lazy – they know how to delegate work to the data sources. For example, if I wanted to find all accounts that are called “Big Corp”, ignoring the character case, I could use the Filter function like this:

image

As a result, I’d get that “Big Corp” account. Yet, there would be no delegation warning.

This is because, for this kind of straightforward condition, the Canvas App would just delegate the filtering work to the data source – instead of loading all accounts and filtering them on the client, all that work will be happening on the “server” (in this particular case, it means the CDS Web API service would be doing the filtering).
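Just to illustrate what “delegating to the data source” means here, the work that ends up happening server-side is conceptually similar to this Web API query (an illustration only – I’m not claiming this is the exact request the canvas app sends; the org url is a placeholder):

// Conceptually, the delegated filter becomes a server-side OData query like this one,
// so only the matching rows travel back to the app (the org url is a placeholder)
fetch("https://yourorg.crm.dynamics.com/api/data/v9.1/accounts?$select=name&$filter=name eq 'Big Corp'", {
	headers: { "Accept": "application/json", "OData-MaxVersion": "4.0", "OData-Version": "4.0" }
})
	.then(response => response.json())
	.then(data => console.log(data.value.length)); // only the "Big Corp" accounts are returned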

Not everything can be delegated, though. Actually, there are only a few basic functions/operators which are delegable:

image

https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/delegation-overview

By the way, I’m not going to speculate why “Upper” would not be delegable for the SQL data source either. There is a corresponding MS SQL “Upper” function, so, it seems, this might have been delegable… But it’s not.

There is one scenario where we can use “non-delegable” functions in the conditions. It’s when those functions are applied not to the “columns”, but to constant values (to the variables, for example):

image

In this case, the Canvas App would know that it can calculate Upper("big corp") beforehand, and then delegate the rest of the filtering work to the data source.

Hope this makes sense so far?

How about this one, then:

image

Compare the last two screenshots. Can you tell why, in the last example, there is no data that matches the condition, whereas in the previous example “Big Corp” account showed up in the results?

PS. You are welcome to reply in the comments or on LinkedIn if that’s how you landed here :)

Polymorphic lookup delegation in Canvas Apps


Right on the heels of my previous post where I was talking about delegation in Canvas Apps, here is another one on the same topic.

We can’t help but break delegation when filtering polymorphic lookups, right? Since, of course, “AsType” cannot be delegated:

image

Well, if you are up to writing a little plugin, it’s, actually, quite doable:

image

The idea is very simple:

  • Let’s create a dummy field (“Dummy Account Name”)
  • Let’s create a plugin to kick in on RetrieveMultiple
  • And let’s update the query in the pre-operation so that the filter we specify for the “Dummy Account Name” is converted into a filter on the linked account entity

In other words, in the pre-operation, the plugin will receive this query:

image

The plugin will convert this query into another one:

image

And the rest of the execution pipeline will work as is.

So, to start with, we’ll need to add the “Dummy Account Name” field to the contact entity:

image

We’ll need a plugin:

image

And we’ll need to register that plugin:

image

There you go. Don’t you ever forget about plugins :)

PS. And you will find the source code here: https://github.com/ashlega/ITAintBoring.PolymorphicDelegation

Entities are Tables now, so what?


You have probably heard that Entities are Tables now? If not, have a look here:

https://docs.microsoft.com/en-ca/powerapps/maker/common-data-service/data-platform-intro#terminology-updates

image

Well, am I thrilled about it? Am I concerned about it?

Quite frankly, we should all be used, by now, to all those changes in the product names and/or in the terminology around Microsoft products. Sometimes, those changes are successful, and, sometimes, they are not. One thing is certain – they did happen in the past, they keep happening, and they will be happening in the future.

And I just think I reached the point where it does not matter to me what the name is, since:

  • I don’t know the reasons behind the renaming (other than vague references to user feedback, etc)
  • If anyone tells me they don’t like new names, I’m just going to say “it’s not worse or better than it used to be. As long as this is what Microsoft will be using these days, I’m fine with that”

 

For example, in the case of entities and tables, we’ve all got used to “entities” over the years. But the concept is rather vague, to be honest. It’s not a table, it’s not a view… it’s some combination of metadata and business logic.

It is vague to the point where even XRM SDK has it wrong. There is “Entity” class in the SDK, but, realistically, it should have been called EntityInstance. Or, maybe, EntityRecord. Or even just “Instance”.

If it’s easier to call it Table when discussing these concepts with new clients/developers, so be it. Although, of course, in this new terminology we will likely always have to add “well, it’s not quite the same table you’d have in SQL. But it’s a good enough approximation”

In that sense, it seems I almost became immune to the renaming virus. I know it’s there, but I’m staying cool.

Although, on a more personal level, this change may affect me, and not in the best way.

See, half a year later, when new terminology settles in, everyone will be searching for “CDS tables…” in google. But all my blog posts up until now used different terminology, so there will be two immediate consequences:

  • Those older posts might stop showing in the search results
  • Even if they do show up, blog readers (especially those new to Power Platform) might actually get confused even more when they start seeing the old terminology

 

Almost inevitably, there will be some period of adjustment, when old and new terminology will have to co-exist, and, yet, every Power Platform user/client/developer will have to be familiar with both sets of names to be able to understand older posts/articles/blogs/recordings.

From that perspective, it might be quite a conundrum, of course. Although, everyone is going to be in this boat, so we might, as well, simply keep sailing – just need to adopt new terminology and start using it moving forward.
