Dominic Cronin's weblog

Showing blog entries tagged as: DXA

So what use is that Discover-EnvironmentCapabilities.ps1 script anyway?

Not long ago, I posted a PowerShell script on Tridion Practice that allows you to connect to a Tridion discovery service and read out the capabilities offered by the corresponding content delivery environment. That's a pretty good party trick as far as it goes, but maybe you already have a pretty good idea of how you've got things set up, so it sounds like a bit of a one-trick pony, eh? Well, it turns out that the script is much more useful than that.

For starters, I'd like to mention that I've recently updated the script so that it also lists the Model service if it's there, mostly because I was interested in querying it directly. When you're building a DXA application, you can always throw the logging into DEBUG to see the service calls, but it's also very handy to be able to make the service calls yourself, without your application in the way. After all, "divide and conquer" is the most ancient debugging technique of all: isolate your problem to get a better look at it. Still, as many of the APIs aren't public or documented, the debug logging is probably your first port of call to figure out how the query ought to look.

Especially if you're dealing with an automatically provisioned system such as SDL's cloud, the only reliable way to get the service URLs is to ask the discovery service for them, so when I wanted to query the model service, I'd need to do that first anyway. Actually, I first needed to know the localization, so I was starting with the content service. I didn't want to be re-typing URLs, so for my first quick-and-dirty attempt I just copied all the discovery code into my script and hacked in a $capabilities variable to capture the output from the foreach loop. Something like this:

$capabilities = foreach($capabilityName in $capabilityNames) {
    # invoking the discovery service and returning PSObjects
}
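
If capturing the output of a foreach loop in a variable is new to you, it's a standard PowerShell trick: the loop emits objects, and the assignment scoops them all up into a list. A toy sketch of the same pattern:

# Toy example, nothing to do with Tridion: the loop emits one PSObject per iteration,
# and the assignment collects them all into a list
$squares = foreach($n in 1..5) {
    [PSCustomObject]@{ Number = $n; Square = $n * $n }
}
$squares | Format-Table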

Then I could just follow up with

# Find the content service endpoint among the discovered capabilities
$contentUri = ($capabilities | ? {$_.Capability -eq 'ContentServiceCapability'}).'Service URI'
# Point at the v4 client endpoint and ask for the publication mappings for $url
$queryUri = ($contentUri -replace '/content.svc','/client/v4/content.svc') + "/GetPublicationMappingsFunctionImport(Url='$url')"
Invoke-RestMethod -Method Get -Uri $queryUri -Headers @{Authorization=$Authorization}

(You've probably realised by now that if you want to follow along at home, it's best to start by visiting the cookbook recipe from the link above and downloading the script.) Anyway, this worked great, and gave me an XML document from which I could dig out the PublicationId. I'd simply used the same $Authorization variable that I'd created for the Discovery service, which of course is valid for the other services too. (BTW, if JSON is your poison, just fix up the -Headers parameter to be @{Authorization=$Authorization;Accept='application/json'}.)
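
Digging the PublicationId out of that response is then just a question of walking the payload. The exact shape depends on whether you asked for XML or JSON (and on the OData flavour in play), so treat this as a sketch rather than a recipe:

# Sketch only: with the JSON Accept header, an OData v4 style payload puts the results
# in a 'value' array; with the XML payload you'd be walking feed/entry/content/properties instead
$mappings = Invoke-RestMethod -Method Get -Uri $queryUri -Headers @{Authorization=$Authorization;Accept='application/json'}
$publicationId = $mappings.value[0].PublicationId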

Now I was ready to call the Model service, but the thought of yet another script that copied in the discovery code was starting to make me feel a bit itchy around the DRY principle. I mean... no need to be a fanatic, but enough is enough, eh? So then I realised I didn't have to copy it. All I needed was to "dot source" the existing script. I had the discovery script in the same folder, so my entire script ended up being 4 lines:

$capabilities = . .\Discover-EnvironmentCapabilities.ps1
$modelUri = ($capabilities | ? {$_.Capability -eq 'ModelServiceCapability'}).'Service URI'

$queryUri = $modelUri + "/PageModel/tcm/309/nl/blah/foo/index?includes=INCLUDE"
Invoke-RestMethod -Method Get -Uri $queryUri -Headers @{Authorization=$Authorization}

"Dot sourcing" in powershell (and also in some other shells) means using the dot operator to execute another script in your current context. The dot operator is the first of the two dots on the right hand side of the $capabilities assignment. This meant that not only did I manage to populate $capabilities with the return value of the script (a list of PSObjects describing capabilities) but any variables that were assigned in the discovery script were now also available for use locally. This meant that I could just use  $Authorization and thereby avoid having to do all the tedious OAuth wrangling again.

So while the architectural purist in me is quietly cursing, spitting and mumbling about dependencies and side effects, my inner scripting hacker is bouncing around with glee. This is great! Re-use FTW!!! (Yes, yes, I should probably factor out the OAuth stuff too, some day, maybe...)

Anyway - this is just too handy not to share. Hope you all enjoy it.

Room for a little YAGNI in the DXA :-)

Posted by Dominic Cronin at Sep 27, 2019 08:37 AM

I wouldn't usually call out open source code in a blog post, but honestly, this made me Laugh Out Loud in the office yesterday. I'd ended up poking around in the part of the Tridion Digital Experience Accelerator framework (DXA) that deals with media items. Just to be clear, the media items in question would usually be either binaries that have some role in displaying your web site, or in this case, more specifically, downloads such as a PDF or whatever. The thing that made me laugh was in a function called getFriendlyFileSize(). A common use case for this would be to display a file size next to your download link so the visitor knows that they can download the PDF fairly quickly, or that maybe they'd better wait until they're on the Wifi before attempting that 10GB ISO file.

getFriendlyFileSize() converts a raw number of bytes into something like 13MB, 7KB, or 5GB. What made me laugh was the fact that the author has also very helpfully included support not only for GigaBytes, but also TeraBytes, PetaBytes and ExaBytes.
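
The idea itself is simple enough. This isn't the DXA code (that lives in the framework's own Java and .NET implementations); it's just a quick PowerShell paraphrase of the same trick, Exabytes and all:

# Not the DXA implementation - just a PowerShell paraphrase of the idea, exabytes and all
function Get-FriendlyFileSize([long]$bytes) {
    $units = 'bytes', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB'
    $size = [double]$bytes
    $unitIndex = 0
    while ($size -ge 1024 -and $unitIndex -lt $units.Count - 1) {
        $size /= 1024
        $unitIndex++
    }
    '{0:N1} {1}' -f $size, $units[$unitIndex]
}
Get-FriendlyFileSize 13631488    # -> 13.0 MB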

Sitting in my office right now, I'm getting about "140 down" from Speedtest. That is to say, my download speed from the Internet is about 140 Mbps, which in practice means that if I want to grab the latest CentOS-with-all-the-bells-and-whistles.ISO (let's say 10GB) it'll take me about 10 minutes. Let's say I want to scale that up to 10PB: then we're talking about 18 years or so, and an Exabyte-sized download would run into the millennia, which somewhat exceeds the longest web server uptime known to mankind by an order of magnitude and then some.
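
For the sceptics, the back-of-an-envelope sums go something like this (rounded, and taking a GB as a round billion bytes):

# Rough download-time sums at 140 Mbps (1 GB taken as 10^9 bytes, everything rounded)
$bytesPerSecond = 140e6 / 8                                       # ~17.5 MB/s
$isoMinutes = (10e9 / $bytesPerSecond) / 60                       # 10GB ISO: roughly 9.5 minutes
$petabyteYears = (10e15 / $bytesPerSecond) / (3600 * 24 * 365)    # 10PB: roughly 18 years
"{0:N1} minutes for the ISO, {1:N0} years for 10PB" -f $isoMinutes, $petabyteYears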

Well maybe this is just old-fashioned thinking, but I'm inclined to think we don't need friendly Exabyte file sizes for website media downloads just yet. In the words of the old Extreme Programming mantra, "You ain't gonna need it". (YAGNI)

I'm not here to take a rise out of the hard-working hackers that contribute so much to us all. Really. I can't say that strongly enough. It made me laugh out loud, that's all.