Dominic Cronin's weblog

Programmatically changing the Publishable flag on a Category

Not long ago I was writing a script which, among many other things, needed to set the Publishable property of a category. In the Tridion user interface, a category has a checkbox, large as life, that says "Publishable". How hard could it be, I thought. :-) 

It turns out that when you work with the API (in this case, the core service), it's not called Publishable (or any variation on that), but UseForNavigation. 

 The documentation for the UseForNavigation property

I kind of get it. Back when categories first could be published, the focus was on using them to build navigations. There's even a note in the documentation that says "Before SDL Tridion 2009 the behavior was get or set whether the taxonomy can be used for navigation."

Well I suppose every product as complex (and powerful) as Tridion will have its history and quirks. In fact, it only cost me a few minutes to figure this out, so it's not really a problem. I'm still going to file this post under "gotchas" though! 
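
By way of illustration, here's roughly how that looks from the core service in PowerShell. This is just a minimal sketch: it assumes you already have an open core service client in $client (however you normally create one) and the category's TCM URI in $categoryId.

# Minimal sketch - $client and $categoryId are assumed to exist already
$readOptions = New-Object Tridion.ContentManager.CoreService.Client.ReadOptions
$category = $client.Read($categoryId, $readOptions)
$category.UseForNavigation = $true   # this is the "Publishable" checkbox
$client.Save($category, $readOptions) | Out-Null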

 

Don't mount your Hyper-V disk and change the contents if it has checkpoints

Posted by Dominic Cronin at Oct 16, 2020 05:54 PM

I just managed to get myself into trouble with Hyper-V. I'm busy setting up a new Tridion image, so I'd started with a fresh Windows Server Essentials install, and then once I had that running, I wanted to copy a handful of installers from the host computer to the new image. What could be simpler than just mounting the VHDX, I thought. Wrong!

So.... mounting the virtual disk is easy: you just right-click on the file, and Windows offers you a Mount option in the context menu. You have to stop the virtual server first, but OK. So I did this - copied my installers over, unmounted the drive with the Eject option from the context menu of what had become E:, and went back to start the image again. This promptly failed with various messages saying there was a mismatch between the differencing virtual disk and the parent disk. I hadn't really wanted a differencing disk (which it turns out is what a checkpoint really is). Checkpoint shmeckpoint... this is not a highly available, super-reliable server I'm building. Anyway - checkpoints... it looks like it's just journaling: all your edits go in the checkpoint file, and I suppose a restore is just a matter of deleting the checkpoint file.

Enough speculation - what it means is that for the disk to work, Hyper-V has to know the order in which the various slices are layered on top of each other, and when it does that it also does some integrity checking. Not a bad thing, you might think, but adding some files had presumably borked a checksum or whatever, and it was throwing up the mismatch message. Apparently you can fix this through the Inspect button in the user interface, but then I got a different error. Fortunately, everything in Hyper-V works from the command line too, so from PowerShell I was able to do the following and it was all good again.

Set-VHD .\some_checkpoint_or_other.avhdx -ParentPath .\TheVirtualDisk.vhdx -IgnoreIdMismatch

If you have more checkpoints, you can make each one in turn a parent of the other in the same way.
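
For example, with a couple of (made-up) checkpoint file names, oldest first:

# Re-parent each differencing disk to the one before it in the chain
Set-VHD .\Checkpoint-A.avhdx -ParentPath .\TheVirtualDisk.vhdx -IgnoreIdMismatch
Set-VHD .\Checkpoint-B.avhdx -ParentPath .\Checkpoint-A.avhdx -IgnoreIdMismatch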

Mostly - this post is about the fact that it's apparently stupid to mount a VHDX and edit it, and nobody had told me. Next time I'll just run up a share.

So what use is that Discover-EnvironmentCapabilities.ps1 script anyway?

Not long ago, I posted a PowerShell script on Tridion Practice that allows you to connect to a Tridion discovery service and read out the capabilities offered by the corresponding content delivery environment. Well, that's a pretty good party trick as far as it goes, but hey, maybe you already had a fair idea of how you've got things set up, so it's a bit of a one-trick pony, eh? Well, it turns out that the script is rather more useful than just that.

For starters, I'd like to mention that I've recently updated the script so that it also lists the Model service if it's there, mostly because I was interested in querying it directly. When you are building a DXA application, you can always throw the logging into DEBUG to see the service calls, but it's also very handy to be able to do the service calls yourself, without your application in the way. After all, "divide and conquer" is the most ancient debugging technique of all: isolate your problem to get a better look at it. Still - as many of the APIs are not publicly documented, the debug logging is probably your first port of call to figure out how the query ought to look.

Especially if you're dealing with an automatically provisioned system such as SDL's cloud, the only reliable way to get the service URLs is to ask the discovery service for them, so when I wanted to query the model service, I'd first need to do that anyway. Actually I first needed to know the localization, so I was starting with the content service. I didn't want to be re-typing URLs, so for my first quick-and-dirty attempt, I just copied all the discovery code into my script and hacked in a $capabilities variable to capture all the output from the foreach loop. Something like this:

$capabilities = foreach($capabilityName in $capabilityNames) {
# invoking the discovery service and returning PSObjects
}

Then I could just follow up with

$contentUri = ($capabilities | ? {$_.Capability -eq 'ContentServiceCapability'}).'Service URI'
$queryUri = ($contentUri -replace '/content.svc','/client/v4/content.svc')  + "/GetPublicationMappingsFunctionImport(Url='$url')"
Invoke-RestMethod -Method Get -Uri $queryUri -Headers @{Authorization=$Authorization}

(You've probably realised by now that if you want to follow along at home, it's probably best to start by visiting the cookbook recipe from the link above and downloading the script.) Anyway - this worked great, and gave me an XML document from which I could dig out the PublicationId. I'd simply used the same $Authorization variable that I'd created to use with the Discovery service, which of course is valid for the other services too. (BTW - if JSON is your poison, just fix up the -Headers parameter to be @{Authorization=$Authorization;Accept='application/json'})

Now I was ready to call the Model service, but the thought of yet another script that copied in the discovery code was starting to make me feel a bit itchy around the DRY principle. I mean... no need to be a fanatic, but enough is enough, eh? So then I realised I didn't have to copy it. All I needed was to "dot source" the existing script. I had the discovery script in the same folder, so my entire script ended up being 4 lines:

$capabilities = . .\Discover-EnvironmentCapabilities.ps1
$modelUri = ($capabilities | ? {$_.Capability -eq 'ModelServiceCapability'}).'Service URI'

$queryUri = $modelUri + "/PageModel/tcm/309/nl/blah/foo/index?includes=INCLUDE"
Invoke-RestMethod -Method Get -Uri $queryUri -Headers @{Authorization=$Authorization}

"Dot sourcing" in powershell (and also in some other shells) means using the dot operator to execute another script in your current context. The dot operator is the first of the two dots on the right hand side of the $capabilities assignment. This meant that not only did I manage to populate $capabilities with the return value of the script (a list of PSObjects describing capabilities) but any variables that were assigned in the discovery script were now also available for use locally. This meant that I could just use  $Authorization and thereby avoid having to do all the tedious OAuth wrangling again.

So while the architectural purist in me is quietly cursing, spitting and mumbling about dependencies and side effects, my inner scripting hacker is bouncing around with glee. This is great! Re-use FTW!!! (Yes, yes, I should probably factor out the OAuth stuff too, some day, maybe...)

Anyway - this is just too handy not to share. Hope you all enjoy it.

Constructing an ImportExport ItemsSelector in Powershell

Posted by Dominic Cronin at Apr 01, 2020 07:17 PM

I've used the Tridion ImportExport API a couple of times from the PowerShell, and until now, I didn't really have any reason to use anything except a Subtree selector for my exports. If you put your items in a bundle, this is what you use, and for the rest, mostly what you want is everything in a folder or structure group. Invoking the constructor of SubtreeSelection usually looks something like this: 

$selection = New-Object Tridion.ContentManager.ImportExport.SubtreeSelection $someOrgItemUrl,$true

This is fine because the arguments are both single variables. The trouble comes when you want to construct an Items Selector. Your first attempt probably looks like:

$items = @($itemUrl)
$selection = New-Object Tridion.ContentManager.ImportExport.ItemsSelection $items

You're probably thinking: I only want one item, but the constructor expects an [IEnumerable[string]] so I'll just use the array subexpression operator @() to force my single item to be an array and let Powershell take care of the rest of the magic of casting to IEnumerable. Powershell for the easy life, eh?

But it doesn't work. You get back some message like

New-Object : Cannot convert argument "0", with value: "foobar", for "ItemsSelection" to type "System.Collections.Generic.IEnumerable`1[System.String]": "Cannot convert the "foobar" value of type "System.String" to type "System.Collections.Generic.IEnumerable`1[System.String]"."

So what's going on here? It turns out that PowerShell thinks that constructor parameters shouldn't be collections. However you want to imagine that, its type resolution logic ends up converting your collection back to a single item (presumably the first), which the constructor promptly rejects. I went through various hoops trying to force things to be an array, or a single item containing an array. You can create your array either with the subexpression operator @(), or just with a unary comma operator ($foo = ,$itemUrl), but I ended up calling split with an empty delimiter. I'm not saying it's pretty, but it worked for me. I then also cast it explicitly to the expected collection type. In PowerShell v5, the constructor is available using the static method syntax on the type, and calling the constructor this way is less prone to type resolution magic messing things up. Don't ask me exactly how. I have no idea. Anyway - this is what worked eventually:

[System.Collections.Generic.IEnumerable`1[System.String]]$items = $itemUrl.split('')
$selection = [Tridion.ContentManager.ImportExport.ItemsSelection]::new($items)
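
For what it's worth, explicitly building a generic List[string] and handing that to the constructor should also side-step the unrolling. It's untested against this particular constructor, so treat it as a sketch:

# Sketch only: a List[string] already implements IEnumerable[string]
$items = New-Object 'System.Collections.Generic.List[string]'
$items.Add($itemUrl)
$selection = [Tridion.ContentManager.ImportExport.ItemsSelection]::new($items)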

I hope this saves somebody some hair pulling and Googling.

Docker integration with WSL2


Posted by Dominic Cronin at Jan 06, 2020 09:35 PM

I have just set up the Docker/WSL2 integration on my computer, and it looks very promising. 

Update: I've now just set my WSL back to version 1 and reinstalled Docker. As I said - it looks promising, but we're not there yet. Fair enough - running on the insider release of Windows and with a "beta" flag set in Docker, you can't really complain if it stops working. For now, I need it working, so it's back to the old setup. I'm still looking forward to when they get it stable.

 

Adding an authorization header for the Tridion content service using Fiddler

I've started to experiment with the GraphQL API offered by Tridion Sites 9's Content service. The obvious way to do this is to use the GraphiQL endpoint. On my system I can do this by pointing my browser at http://cd.local:8081/cd/api/graphiql. The only fly in the ointment is that the service expects an OAuth header, so you have to take care of that yourself. The guidance I've seen so far is to use a browser plugin like Requestly to do this, so I duly installed it, and was able to get successful query responses instead of the dreaded 'invalid_grant'. All well and good, but honestly, it's a right faff. Firstly, the plugin itself is clunky, so to open the relevant config window, you're at least several clicks away from sorting out your authorization header, which wouldn't be too bad, but the darned things keep timing out, so you keep having to repeat the procedure. Maybe there's a better plugin, but I figured life's too short. I use Fiddler quite often for faking various scenarios and making test setups work a bit more like they are supposed to in the real world, so why not knock off a quick Fiddler script and be done with it.... I thought!

Actually - it turned out to be a bit fiddly, but I now have it working, so time to share. Usual disclaimers.... it's not very polished. It works for my scenario, and if yours is different you'll have to use the source, Luke. 

So - go and open up Fiddler and head for the FiddlerScript button, or go to the Rules > Customize Rules menu option. Once you have a script editing screen in view, you should be able to find the function OnBeforeRequest(oSession: Session). Inside this function, paste in the following code and fix it up to meet your own bizarre preferences:

if (oSession.uriContains("http://cd.local:8081/cd/api")) {
    // Build a client_credentials request body for the token service
    var client_id = "cduser";
    var client_secret = 'CDUserP@ssw0rd';
    var strBody = "client_id=$client_id&client_secret=$client_secret&grant_type=client_credentials&resources=%2F".replace("$client_id",encodeURIComponent(client_id)).replace("$client_secret",encodeURIComponent(client_secret));

    // The request body has to be passed as a byte array
    var arrBody = new byte[strBody.length];
    for (var i = 0;i < strBody.length;i++){
        arrBody[i] = strBody.charCodeAt(i);
    }

    var oHeaders = new HTTPRequestHeaders();
    oHeaders.RequestPath ="http://cd.local:8082/token.svc";
    oHeaders["Content-Type"] = "application/x-www-form-urlencoded";
    oHeaders["Host"] = "cd.local:8082";
    oHeaders.HTTPMethod = "POST";
    oHeaders["Content-Length"] = arrBody.length;

    // Call the token service synchronously and, if all is well, copy the token
    // into an Authorization header on the original request
    var oAuthSession = FiddlerApplication.oProxy.SendRequestAndWait(oHeaders, arrBody, null, null);
    if (200 == oAuthSession.responseCode) {
        var oJSON = Fiddler.WebFormats.JSON.JsonDecode(oAuthSession.GetResponseBodyAsString());
        oSession.RequestHeaders.Add("Authorization", oJSON.JSONObject["token_type"] + ' ' + oJSON.JSONObject["access_token"]);
    }
    else {
        MessageBox.Show("Bad Auth:  " + oAuthSession.responseCode);
    }
}

If you now go back to your graphiql page, you should find that your requests are authorised. If it doesn't work, make sure that you've removed your rule from Requestly or whatever you've been using; given two Authorization headers, the service will very likely not behave nicely.

There are plenty of obvious improvements that can still be made. For example, it's probably fairly easy to switch this on and off with a setting in Fiddler, or to check for an existing Authorization header. 

Anyway - this is going to make my life much nicer as I play with the API. 

2020 foresight. How will the New Year look for Tridion specialists?

Posted by Dominic Cronin at Dec 31, 2019 02:42 PM

A year ago, I wrote a similar blog post about the year 2019 which we were then entering. Looking back, it wasn't a bad set of predictions. More Sites 9, more DXA, the end of vbScript, the beginning of GraphQL, more cloud computing and devops. OK - so I wasn't exactly making bizarre claims or strange predictions: something more akin to "more of the same", "keep up the good work" and generally keeping steadily on towards the projects and architectures of the future.

So I didn't get it too badly wrong. How well will I do this time? Well let me start with my first prediction. This coming year will really, really, definitely and for ever see the absolute, total and utter end of vbScript templating as part of what Tridion people have to do. Really. Truth be told, this is going to be a better prediction than it was last year. At my current customer, there are still one or two pockets of resistance, but the new architecture is in place, and a relatively straightforward implementation should see the job done.

For myself, I've seen a very welcome increase in the amount of DXA work and devops, including Java, Jenkins and OpenShift, combined with SDL's cloud offering. I've also had the chance to bring my front-end skills up to date, and to embrace the notion of being a "T-shaped" agile team member. The reality for Tridion specialists has always been that we are generalists too, and most of us have spent our days doing whatever it takes to get enterprise level web applications up and running. As we now see further shifts in architectural emphasis, there will be more going on in the browser, so we'll be there. I hesitate to say "full-stack", because that's a stereotype in itself, so perhaps the old term "n-tier" is closer. I expect in 2020 that we'll see further differentiation in front-end work, so that there'll perhaps be a clearer division between front-end-front-end and back-end-front-end. With modern frameworks, there's plenty for a programmer to do in the browser without getting intimately involved in the intricacies of the presentation layer.

Tridion itself is also steadily moving towards supporting these architectures, with the GraphQL API and expected architectural shifts towards getting a very efficient "publishing pipeline". It can be tempting to see things through the lens of how we'd build our current applications on the new architecture, but at the Tridion Developer Summit 2019, I was involved in a very interesting round-table discussion that was supposed to be about headless WCMS. We ended up discussing how new architectures could bring new possibilities to the kind of web sites people create, or want to create. The availability of advanced queries on a published data store opens the door to designs that part company with the "traditional" model of a hierarchy of web pages. Of course, these possibilities have been available for quite some time, but mostly for very big sites where search is a far more comfortable navigation paradigm than hierarchy. As it gets progressively easier to do, we'll see some shifts in what people expect. I'm not saying every site will end up as a single-page application, but we'll see variations on all the traditional themes, and some we've not seen yet. The future is out there. 

As the worlds of web site design and web application architecture morph into something new, Tridion's role is both the same and different. If you want headless, of course, Tridion can do that, but the reasons you'll want Tridion are much the same as they've always been: superb content management features, technical excellence, scalability, blueprinting, and the ability to integrate with anything. It's great to be able to take the new stuff for granted, but also the old stuff.

In 2020, I think we will see an acceptance in the industry that Tridion is back! In the early days, it was always pitched as a "best of breed" system that did what it did very well and integrated very well with other nearby systems. The typical Tridion customer didn't want it to be a document management system or a customer relations management system: they had those already. They also clearly didn't want it to become a pervasive platform that would be a one stop shop for a one size fits all. A few years ago, SDL departed from the "best of breed" identity that had served it so well, and in doing so, damaged its own performance in the market. Fortunately, these things were corrected, but it's like steering a super-tanker; it took some time to see the results of the correction, and then people wanted to just check the course for a little while longer. We're now far enough to be able to say that the good ship Tridion is on course and sailing for better weather.

2020 is going to be a good year!

Happy New Year.

Querying the Tridion GraphQL service with Powershell

Yet another in the ongoing series of "If it's Tridion, I want to be able to do it in Powershell" :-) 

Now available for your delectation up on Tridion Practice
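
Querying a GraphQL service from PowerShell boils down to an Invoke-RestMethod POST with a JSON body. Here's a minimal sketch; the endpoint and the $Authorization header are assumptions based on the local set-up described elsewhere on this page, and the real script is in the recipe:

# Sketch only: a standard GraphQL introspection query against the content service
$body = @{ query = '{ __schema { queryType { name } } }' } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri 'http://cd.local:8081/cd/api' `
    -ContentType 'application/json' `
    -Headers @{ Authorization = $Authorization } `
    -Body $body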

Is it the end of the component presentation?

Posted by Dominic Cronin at Nov 17, 2019 03:20 PM

Among the most interesting talks at the recent Tridion Developer Summit, was one by Raimond Kempees and Anton Minko, in which they looked into their crystal ball to give some hints about the direction content delivery is going in. In brief, the news is that Tridion R&D are now following through on the consequences of recent changes in the way web content management is done. The market is demanding headless Web CMS systems, and although Tridion's current offering is fully able to punch its weight as a headless system, the focus looks as though it's going even further in that direction. Buzzwords aside, the reality is that templating is moving further and further away from the content manager, and will probably end up being done largely in the browser.

This blog post is not about the death of the server-based web application: that might be a little premature, but it's really clear that we won't be doing our templating on the content manager. Apart from legacy work, this is already the case for most practical purposes. If you are using a framework like DXA, you will most likely not wish to modify the templating that comes with the framework. The framework templating doesn't render output, but presents the data from your components and pages in a generic format such as JSON. Assuming that all the relevant data is made available, you shouldn't ever need to intervene at this stage in the proceedings. Any modifications or customisations you may wish to make will probably be done in C#, and not in a templating language. (Tridion's architecture is very flexible, and I'm talking here about where mainstream use of the product is going. There are still organisations whose current work looks very different to this, and they have their own very good reasons for that.)

So the templating has already moved out to DXA, and there's an advanced content service which offers JSON data which you specify in GraphQL queries. Architectural decisions in your project are still likely to be about what should happen on the server and what in the browser, but you won't be doing much on the content manager beyond, erm... managing content.

Among the interesting new directions sketched out in the talk were the following:

  1. A "native" data format for publishing. Effectively - instead of templates generating JSON, Tridion itself will do this, so you won't need templates any more.
  2. A fast publishing pipeline to ensure that the content gets from the content manager to the content service in a highly efficient manner.

 

All well and good, but somewhere in there, almost as a throwaway line, they touched on the end of the component presentation. Well, it's kind of a logical conclusion in some ways, but when they mentioned this, I had a kind of "whoa" moment. So don't throw the baby out with the bath water, guys!

Seriously - for sure, if we don't have component templates any more, we can't really have component presentations, can we? Well yes, of course, we can, and both DD4T and DXA have followed the route of using good-old-fashioned page composition in Tridion, with the editors selecting component templates to indicate how they'd like to see the component rendered. In practice, all the component templates are identical, but the choice serves to trigger a specific "view" in the web application. The editorial experience remains the same as ever, and everyone knows what they are doing.

So in practice, instead of a page being a list of component presentations, it's become a list of components, with some sort of metadata indicating the choice of view for each component. That metadata doesn't need to be a component template, and you can see why they'd want to tidy this up. If you're writing a framework, you use the mechanisms available to you, but Tridion R&D can make fundamental changes when it's the right thing to do.

So they could get rid of component presentations. My initial reaction was to think they'd still have to do something pretty similar to "page composition as we know it", but that got me thinking. We now have page regions, which effectively turns a page from one list into a number of smaller lists. With this in place, it comes down to the fundamental reasons that we have traditionally used different component presentations. It's very common to have page template/view logic that does something like: get all the link-list components and put them in the right-hand side bar, or get the main component and use it to render the detail view in the main content area, or get all the content components and put them in the main content area. These very common use patterns are easily coped with by using regions, but maybe there are other cases where it would still be handy to specify your choice of view for a given component.

Actually, a lot of our work in page templates over the years has been about working around the inflexibility of a page being simply a list of component presentations. We've written logic that switched on which schema it was, or maybe a dozen other things, to achieve the results we needed to. Still, that simple familiar model has a lot to be said for it, and I suspect there are cases where regions on their own probably aren't enough.

I've really appreciated the way product managers at Tridion have reached out to the community in recent times to validate their ideas and share inspiration. I have every confidence that the future of pages in a world without component presentations will be the subject of similar consultations. The Tridion of two or three versions hence might look a lot different, but it will probably also have a lot of familiar things. It's all rather exciting.

Room for a little YAGNI in the DXA :-)

Posted by Dominic Cronin at Sep 27, 2019 08:37 AM

I wouldn't usually call out open source code in a blog post, but honestly, this made me Laugh Out Loud in the office yesterday. I'd ended up poking around in the part of the Tridion Digital Experience Accelerator (DXA) framework that deals with media items. Just to be clear, the media items in question would usually be either binaries that have some role in displaying your web site, or in this case, more specifically, downloads such as a PDF or whatever. The thing that made me laugh was in a function called getFriendlyFileSize(). A common use case for this would be to display a file size next to your download link so the visitor knows that they can download the PDF fairly quickly, or that maybe they'd better wait until they're on the Wi-Fi before attempting that 10GB ISO file.

getFriendlyFileSize() converts a raw number of bytes into something like 13MB, 7KB, or 5GB. What made me laugh was the fact that the author has also very helpfully included support not only for GigaBytes, but also TeraBytes, PetaBytes and ExaBytes.

Sitting in my office right now, I'm getting about "140 down" from Speedtest. That is to say, my download speed from the Internet is about 140 Mbps, which in practice works out to mean that if I want to grab the latest Centos-with-all-the-bells-and-whistles.ISO (let's say 10GB) it'll take me about 10 minutes. Let's say I want to scale that up to 10PB: then we're talking about getting on for twenty years, which is in the same league as the longest web server uptime known to mankind, and the 10EB download would take a thousand times longer still.

Well maybe this is just old-fashioned thinking, but I'm inclined to think we don't need friendly Exabyte file sizes for website media downloads just yet. In the words of the old Extreme Programming mantra, "You ain't gonna need it". (YAGNI)
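
Just for fun, a YAGNI-compliant take on the same idea might look something like this in PowerShell (purely illustrative - the real DXA code is not PowerShell, obviously):

# Stop at gigabytes - you ain't gonna need the exabytes
function Get-FriendlyFileSize([long]$bytes) {
    switch ($bytes) {
        { $_ -ge 1GB } { return '{0:N1} GB' -f ($bytes / 1GB) }
        { $_ -ge 1MB } { return '{0:N1} MB' -f ($bytes / 1MB) }
        { $_ -ge 1KB } { return '{0:N1} KB' -f ($bytes / 1KB) }
        default        { return "$bytes B" }
    }
}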

I'm not here to take a rise out of the hard-working hackers that contribute so much to us all. Really. I can't say that strongly enough. It made me laugh out loud, that's all.