
Dominic Cronin's weblog

Showing blog entries tagged as: Powershell

Don't mount your Hyper-V disk and change the contents if it has checkpoints

Posted by Dominic Cronin at Oct 16, 2020 05:54 PM

I just managed to get myself into trouble with Hyper-V. I'm busy setting up a new Tridion image, so I'd started with a fresh Windows Server Essentials install, and then once I had that running, I wanted to copy a handful of installers from the host computer to the new image. What could be simpler than just mounting the VHDX, I thought. Wrong!

So... mounting the virtual disk is easy: you just right-click on the file, and Windows offers you a Mount option in the context menu. You have to stop the virtual server first, but OK. So I did this, copied my installers over, unmounted the drive with the Eject option from the context menu of what had become E:, and went back to start the image again. This promptly failed with various messages saying there was a mismatch between the differencing virtual disk and the parent disk. I hadn't really wanted a differencing disk (which, it turns out, is what a checkpoint really is). Checkpoint shmeckpoint... this is not a highly available, super-reliable server I'm building. Anyway - checkpoints... it looks like it's just journaling: all your edits go in the checkpoint file, and I suppose a restore is just a matter of deleting the checkpoint file.

Enough speculation - what it means is that for the disk to work, Hyper-V has to know the order that the various slices are layered on top of each other, and when it does that - it also does some integrity checking. Not a bad thing, you might think, but adding some files had presumably borked a checksum or whatever, and it was throwing up the mismatch message. Apparently you can fix this through the Inspect button in the user interface, but then I got a different error. Fortunately - everything in Hyper-V works from the command line too, so from the powershell I was able to do the following and it was all good again.

Set-VHD .\some_checkpoint_or_other.avhdx -ParentPath .\TheVirtualDisk.vhdx -IgnoreIdMismatch

If you have more checkpoints, you can repair the chain by making each one the parent of the next in the same way.
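Just as a sketch - the .avhdx names here are made up (in real life they're GUID-ish) - repairing a chain of two checkpoints would look something like this, working from the newest checkpoint down to the base disk:

# Each differencing disk points at the one below it; the oldest points at the base VHDX
Set-VHD .\checkpoint_2.avhdx -ParentPath .\checkpoint_1.avhdx -IgnoreIdMismatch
Set-VHD .\checkpoint_1.avhdx -ParentPath .\TheVirtualDisk.vhdx -IgnoreIdMismatch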

Mostly - this post is about the fact that it's apparently stupid to mount a VHDX and edit it, and nobody had told me. Next time I'll just run up a share.

So what use is that Discover-EnvironmentCapabilities.ps1 script anyway?

Not long ago, I posted a powershell script on Tridion Practice that allows you to connect to a Tridion discovery service and read out the capabilities offered by the corresponding content delivery environment. That's a pretty good party trick as far as it goes, but maybe you already had a fair idea of how you've got things set up, so it's a bit of a one-trick pony, eh? Well, it turns out that the script is much more useful than just that.

For starters, I'd like to mention that I've recently updated the script so that it also lists the Model service if it's there, mostly because I was interested in querying it directly. When you are building a DXA application, you can always throw the logging into DEBUG to see the service calls, but it's also very handy to be able to make the service calls yourself, without your application in the way. After all, "divide and conquer" is the most ancient debugging technique of all: isolate your problem to get a better look at it. Still, as many of the APIs are not publicly documented, the debug logging is probably your first port of call to figure out how the query ought to look.

Especially if you're dealing with an automatically provisioned system such as SDL's cloud, the only reliable way to get the service urls is to ask the discovery service for them, so when I wanted to query the model service, I'd first need to do that anyway. Actually I first needed to know the localization, so I was starting with the content service. I didn't want to be re-typing URLs, so actually for my first quick-and-dirty, I just copied all the discovery code into my script and hacked in a $capabilities variable to capture all the output from the foreach loop. Something like this:

$capabilities = foreach($capabilityName in $capabilityNames) {
# invoking the discovery service and returning PSObjects
}

Then I could just follow up with

$contentUri = ($capabilities | ? {$_.Capability -eq 'ContentServiceCapability'}).'Service URI'
$queryUri = ($contentUri -replace '/content.svc','/client/v4/content.svc')  + "/GetPublicationMappingsFunctionImport(Url='$url')"
Invoke-RestMethod -Method Get -Uri $queryUri -Headers @{Authorization=$Authorization}

(You've probably realised by now that if you want to follow along at home, it's probably best to start by visiting the cookbook recipe from the link above and downloading the script.) Anyway - this worked great, and gave me an XML document from which I could dig out the PublicationId. I'd simply used the same $Authorization variable that I'd created to use with the Discovery service, which of course is valid for the other services too. (BTW - if JSON is your poison, just fix up the -Headers parameter to be @{Authorization=$Authorization;Accept='application/json'})
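So if you do want JSON, the whole call is just a small variation on the one above - nothing new beyond the extra Accept header:

$headers = @{Authorization=$Authorization; Accept='application/json'}
Invoke-RestMethod -Method Get -Uri $queryUri -Headers $headers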

Now I was ready to call the Model service, but the thought of yet another script that copied in the discovery code was starting to make me feel a bit itchy around the DRY principle. I mean... no need to be a fanatic, but enough is enough, eh? So then I realised I didn't have to copy it. All I needed was to "dot source" the existing script. I had the discovery script in the same folder, so my entire script ended up being 4 lines:

$capabilities = . .\Discover-EnvironmentCapabilities.ps1
$modelUri = ($capabilities | ? {$_.Capability -eq 'ModelServiceCapability'}).'Service URI'

$queryUri = $modelUri + "/PageModel/tcm/309/nl/blah/foo/index?includes=INCLUDE"
Invoke-RestMethod -Method Get -Uri $queryUri -Headers @{Authorization=$Authorization}

"Dot sourcing" in powershell (and also in some other shells) means using the dot operator to execute another script in your current context. The dot operator is the first of the two dots on the right hand side of the $capabilities assignment. This meant that not only did I manage to populate $capabilities with the return value of the script (a list of PSObjects describing capabilities) but any variables that were assigned in the discovery script were now also available for use locally. This meant that I could just use  $Authorization and thereby avoid having to do all the tedious OAuth wrangling again.

So while the architectural purist in me is quietly cursing, spitting and mumbling about dependencies and side effects, my inner scripting hacker is bouncing around with glee. This is great! Re-use FTW!!! (Yes, yes, I should probably factor out the OAuth stuff too, some day, maybe...)

Anyway - this is just too handy not to share. Hope you all enjoy it.

Constructing an ImportExport ItemsSelector in Powershell

Posted by Dominic Cronin at Apr 01, 2020 07:17 PM

I've used the Tridion ImportExport API a couple of times from the PowerShell, and until now, I didn't really have any reason to use anything except a Subtree selector for my exports. If you put your items in a bundle, this is what you use, and for the rest, mostly what you want is everything in a folder or structure group. Invoking the constructor of SubtreeSelection usually looks something like this: 

$selection = New-Object Tridion.ContentManager.ImportExport.SubtreeSelection $someOrgItemUrl,$true

This is fine because the arguments are both single variables. The trouble comes when you want to construct an ItemsSelection. Your first attempt probably looks like this:

$items = @($itemUrl)
$selection = New-Object Tridion.ContentManager.ImportExport.ItemsSelection $items

You're probably thinking: I only want one item, but the constructor expects an [IEnumerable[string]] so I'll just use the array subexpression operator @() to force my single item to be an array and let Powershell take care of the rest of the magic of casting to IEnumerable. Powershell for the easy life, eh?

But it doesn't work. You get back some message like

New-Object : Cannot convert argument "0", with value: "foobar", for "ItemsSelection" to type "System.Collections.Generic.IEnumerable`1[System.String]": "Cannot convert the "foobar" value of type "System.String" to type "System.Collections.Generic.IEnumerable`1[System.String]"."

So what's going on here? It turns out that Powershell thinks constructor parameters shouldn't be collections. However you want to imagine that, its type resolution logic ends up converting your collection back to a single item (presumably the first), which the constructor promptly rejects. I went through various hoops trying to force things to be an array, or a single item containing an array. You can create your array either with the subexpression operator @(), or just with a unary comma operator ($foo = ,$itemUrl), but I ended up calling split with an empty delimiter. I'm not saying it's pretty, but it worked for me. I then also cast it explicitly to the expected collection type. In Powershell v5, the constructor is available using the static method syntax on the type, and calling the constructor this way is less prone to type resolution magic messing things up. Don't ask me exactly how; I have no idea. Anyway - this is what worked eventually:

[System.Collections.Generic.IEnumerable`1[System.String]]$items = $itemUrl.split('')
$selection = [Tridion.ContentManager.ImportExport.ItemsSelection]::new($items)
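For more than one item, the same static method syntax seems the way to go. These WebDAV URLs are just made-up examples, and I'm assuming the same IEnumerable[string] constructor that the error message complains about:

# A string[] satisfies IEnumerable[string], and the static ::new syntax won't unroll it
$itemUrls = [string[]]@(
    '/webdav/MyPublication/Building Blocks/Content/Foo.xml',
    '/webdav/MyPublication/Building Blocks/Content/Bar.xml'
)
$selection = [Tridion.ContentManager.ImportExport.ItemsSelection]::new($itemUrls)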

I hope this saves somebody some hair pulling and Googling.

Docker integration with WSL2

Posted by Dominic Cronin at Jan 06, 2020 09:35 PM

I have just set up the Docker/WSL2 integration on my computer, and it looks very promising. 

Update: I've now just set my WSL back to version 1 and reinstalled Docker. As I said - it looks promising, but we're not there yet. Fair enough - running on the insider release of Windows and with a "beta" flag set in Docker, you can't really complain if it stops working. For now, I need it working, so back to the old set up. I'm still looking forward to when they get it stable.

 

Querying the Tridion GraphQL service with Powershell

Yet another in the ongoing series of "If it's Tridion, I want to be able to do it in Powershell" :-) 

Now available for your delectation up on Tridion Practice

Discovering content delivery environment capabilities... on Tridion practice

I recently wrote a script to query the capabilities of a Tridion content delivery environment. Rather than post it here, I've put it up on Tridion Practice. After all, it's about time "practice" got a bit of love. I think I'll try to get a bit more fresh material up there soon. 

I hope you'll find the script useful. Dare I also hope that some of you might be inspired to contribute something to Tridion Practice? It's always been the intention that we'd try to get contributions from a range of practitioners, so please feel welcome. Cookbook recipes are probably the most prominent part of the site, but the patterns and practices sections are also interesting. Come on down! 

Enumerating the Tridion config replacement tokens

OK - I get it. It's starting to look like I've got some kind of monomania regarding the replacement tokens in Tridion config files, but bear with me. In my last blog post, I'd hacked out a regex that could be used for replacing them with their default values, but had thought better of actually doing so. But still, the idea of being able to grab all the tokens has some appeal. I can't bear to waste that regex, so now I'm looking for a reasonable use for it.

It occurred to me that at some point in an installation, it might be handy to have a comprehensive list of all the things you can pass in as environment variables. Based on what I'd done yesterday, this was quite straightforward:

gci -r -include *.xml -exclude logback.xml | sls '\$\{.*?\}' `
  | select {$_.RelativePath((pwd))},LineNumber,{$_.Matches.value} `
  | Export-Csv SitesNineTokens.csv

By going to the folder where I'd unzipped the Tridion zip and running this in the "Content Delivery/roles" folder, I had myself a spreadsheet with a list of all the tokens in Sites 9. Similarly, I created a spreadsheet for Web 8.5. (As you can see, I've excluded the logback files just to keep the volume down a bit, but in real life, you might also want to see those listed.)

The first thing you see when comparing Sites 9 with Web 8.5 is that there are a lot more of the things. More than twice as many. (At this point I should probably confess to some possible inaccuracy, as I haven't gone to the trouble of stripping out XML comments, so there could be some duplicates.)
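If you want to put a number on it, counting the rows in the two CSVs is enough for a rough comparison (the Web 8.5 file name is just whatever you called your second export):

(Import-Csv .\SitesNineTokens.csv).Count
(Import-Csv .\WebEightFiveTokens.csv).Count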

65 of these come from the addition of XO and another 42 from IQ, but in general, there are just more of them. The bottom line is that to get a Tridion system up and running these days, you are dealing with hundreds of settings. To be fair, that's simply what's necessary in order to implement the various capabilities of such an enterprise system.

One curious thing I noticed is that the ambient configs all have a token to allow you to disable OAuth security, yet no tokens for the security settings of the various roles. I wonder if this reflects the way people actually use Tridion.

Of course, you aren't necessarily limited by the tokens in the example configs of the shipped product. Are customers defining their own as they need them?

That's probably enough about this subject, though, isn't it?

Removing the replacement tokens from Tridion configuration files, and choosing not to

In SDL Web 8.5 we saw the introduction of replacement tokens in the content delivery configuration files. Whereas previously we'd simply had XML files with attributes and elements that we filled in with the relevant values, the replacement tokens allowed for the values to be provided externally when the configuration file is used. The commonest way to do this is probably by using environment variables, but you can also pass them as arguments to the java runtime. (A while ago, I wasted a bunch of time writing a script to pass environment variables in via java arguments. You don't need to do this.)

So anyway - taking the deployer config as my example, we started to see this kind of thing:

<Queues>
 <Queue Id="ContentQueue" Adapter="FileSystem" Verbs="Content" Default="true">
  <Property Value="${queuePath}" Name="Destination"/>
 </Queue>
 <Queue Id="CommitQueue" Adapter="FileSystem" Verbs="Commit,Rollback">
  <Property Value="${queuePath}/FinalTX" Name="Destination"/>
 </Queue>
 <Queue Id="PrepareQueue" Adapter="FileSystem" Verbs="Prepare">
  <Property Value="${queuePath}/Prepare" Name="Destination"/>
 </Queue>

Or from the storage conf of the disco service:

<Role Name="TokenServiceCapability" Url="${tokenurl:-http://localhost:8082/token.svc}">

So if you have an environment variable called queuePath, it will be used instead of ${queuePath}. In the second example, you can see that there's also a syntax for providing a default value, so if there's a tokenurl environment variable, that will be used, and if not, you'll get http://localhost:8082/token.svc.
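Purely as a sketch (the queuePath value here is invented), providing these from the PowerShell before starting the service would look like:

# These need to be visible to the Java process that reads the config
$env:queuePath = 'C:\Tridion\deployer-queues'
$env:tokenurl = 'http://localhost:8082/token.svc'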

This kind of replacement token is very common in the *nix world, where it's taken to even further extremes. Most of this is based on the venerable Shell Parameter Expansion syntax.

All this is great for automated deployments and I'm sure the team running SDL's cloud services makes full use of this technique. For now, I'm still using my own scripts to replace values in the config files, so a recent addition turned out to be a bit inconvenient. In Tridion Sites 9, the queue Ids in the deployer config have also been tokenised. So now we have this kind of thing:

<Queue Default="true" Verbs="Content" Adapter="FileSystem" Id="${contentqueuename:-ContentQueue}">
  <Property Name="Destination" Value="${queuePath}"/>
</Queue>

Seeing as I had an XPath that locates the Queue elements by ID, this wasn't too helpful. (Yes, yes, in the general case it's great, but I'm thinking purely selfishly!) Shooting from the hip I quickly updated my script with an awesome regex :-) , so instead of

$config = [xml](gc $deployerConfig)

I had

$config = [xml]((gc $deployerConfig) -replace '\$\{(?:.*?):-(.*?)\}','$1')

About ten seconds after finishing this, I realised that what I should be doing instead is fixing my XPath to glom onto the Verbs attribute instead, but you can't just throw away a good regex. So - I present to you, this beautiful regex for converting shell parameter expansions (or whatever they are called) into their default values while using the PowerShell. In other words, ${contentqueuename:-ContentQueue} becomes ContentQueue.
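You can see it do its thing directly at the prompt:

'${contentqueuename:-ContentQueue}' -replace '\$\{(?:.*?):-(.*?)\}','$1'
# ContentQueue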

How does it work? Here it comes, one piece at a time:

'         single quote. Otherwise Powershell will interpret characters like $ and {, which you don't want
\         a backslash to escape the dollar from the regex
$         the opening dollar of the expansion expression
\{        match the {, also escaped from the regex
(?:.*?)   match zero or more of anything, non-greedily, and without capturing
:-        match the :-
(.*?)     match zero or more of anything, non-greedily. No ?: this time, so it's captured for use later as $1
\}        match the }
'         single quote

The second parameter of -replace is '$1', which translates to "the first capture". Note the single quotes, for the same reason as before.

So there you have it. Now if ever you need to rip through a bunch of config files and blindly accept the defaults for everything, you know how. But meh... you could also just not provide any values in the environment. I refuse to accept that this hack is useless. A reason will emerge. The universe abhors a scripting hack with no purpose.

Docker from the powershell, take two

Posted by Dominic Cronin at Mar 29, 2019 02:50 PM

Take one

Back in 2016, I posted a quick and dirty technique for parsing the output from the docker CLI utilities. I recently returned to this looking for a slightly more robust approach. Back then I'd also pointed out the existence of a PowerShell module from Microsoft that made things a bit easier, but this has now been deprecated, with a recommendation to use the docker CLI directly or to use Docker.DotNet. The latter is a full-blown API that might be really handy for some tasks, but not for what I had in mind. The Docker CLI, OTOH, is the problem I'm trying to solve. Yes, sure, I might get further by spending more time figuring out filters and formatting, but the bottom line is that the PowerShell has made me lazy, and I plan to stay that way. A quick Google turns up any number of people hacking away at Bash to solve docker's CLI inadequacies. Whatever. This is Windows, and I expect to see a pipeline full of objects with properties. :-)

Take two

It turns out that a reasonably generic approach to parsing docker's output gives you exactly that: a pipeline full of objects. I've just added a couple of functions to my $profile, which means I can do something like

docker ps -a | parseColumns

and get the columns from ps in my pipeline as object properties. Here is the code:

function parseColumnsFromHeader ($line){
    $cols = @()
    # Lazily match chunks of text followed by at least two whitespace or a line end
    $re = [regex]"((.*?)(\s{2,}|$))*"
    $matches = $re.Match($line)
    # Group 0 is the whole match, then you count left parens, so
    # group 1 is ((.*?)(\s{2,}|$)),
    foreach ($capture in $matches.Groups[1].Captures){
        if ($capture.Length -gt 0) {
            # The captures are therefore both the chunk of text and the following whitespace
            # so the length is right, but we trim for the name
            $col = @{
                Name = $capture.Value.Trim()
                Index = $capture.Index
                Length = $capture.Length
            }
            $cols += New-Object PSObject -Property $col;
        }
    }
    return $cols
}

filter parseColumns {

    if (-not $headerDone) {
        $cols = parseColumnsFromHeader $_;
        $headerDone = $true;
    } else {
        $propertiesHash = [ordered]@{}        
        foreach($col in $cols) {                
            $propertiesHash.Add($col.Name, ([string]$_).Substring($col.Index, $col.Length))            
        }
        New-Object PSObject -Property $propertiesHash
    }

}

This works equally well for other docker commands, such as 'docker images'. The only proviso is that the output begins with a single line with column headers. As long as the headers themselves have at least two white space characters between them, and no more than one in the header text itself, it should be fine.
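For example, to pick through your images (the filter is just an illustration; the property names come straight from docker's header row, so they're upper case):

docker images | parseColumns | ? { $_.REPOSITORY -like '*tridion*' } | select REPOSITORY, TAG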

OK - it's fairly generic. Maybe it's better than my previous approach. That said, I'm irritated by the reliance on having to parse columns based on white space. This could break at any moment. Should I be looking for something better?

Not good enough

The thing is, I'm always on the lookout for techniques that will work everywhere. The reason I'm reasonably fluent in the vi editor today is that some years ago, I consciously chose to stop learning Emacs. This isn't a holy wars thing. Emacs was probably better. Maybe... but it wasn't everywhere. Certainly at the time, vi was available on every Unix machine in the world. (These days I have to type "apt-get update && apt-get install -y vim && vi foobar.txt" far more often than I'd like, but on those machines, there's no editor installed at all, and I understand why.)

One of the reasons I never really got along with the Powershell module is that on any given day, I can't guarantee I'll be able to install modules on the system I'm working on. I probably can paste some code into my $profile, or perhaps even more commonly, grab a one-liner from this blog and paste it directly into my shell. But general hacks at your fingertips FTW.

A better way?

So maybe I just have to learn to love the lowest common denominator. If I'm on a bare Windows machine doing Docker, can I get the pain threshold down far enough? Well, just maybe!

If you look at the Docker CLI documentation, you'll see that pretty much every command takes a --format flag, which allows you to pass a GO template. If you want to output the results as json, it's fairly simple, and then of course, the built-in ConvertFrom-Json cmdlet will get you the rest of the way.

I'm reasonably sure that before too long I'll be typing this kind of thing instead of using the functions above:

docker ps -a --format '{{json .}}' | ConvertFrom-Json 
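...and then you're back to objects in the pipeline, so you can pick out just what you want (Names, Image and Status are among the fields docker emits for ps):

docker ps -a --format '{{json .}}' | ConvertFrom-Json | select Names, Image, Status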

Tridion Core service PowerShell settings for SSO-enabled CMS

Posted by Dominic Cronin at Nov 18, 2018 07:30 PM

In a Single-Sign-On (SSO) configuration, it's necessary to use Basic Authentication for web requests to the Tridion Content Manager from the browser. This is probably the oldest way of authenticating a web request, and involves sending the password in plain text over the wire. This allows the SSO system to make use of the password, which would be impossible if you used, for example, Windows Authentication. The downside is that you'd be sending the password in plain text over the wire... can't have that, so we encrypt the connection with HTTPS.

What I'm describing here is the relatively simple use case of using the powershell module to log in to an SSO-enabled site using a domain account. Do please note that this won't work if you're expecting to authenticate using SSO. Then you'll need to mess around with federated security tokens and such things. My use case is a little simpler as I have a domain account I can log in with. As the site is set up to support most of the users coming in via SSO, these are the settings I needed, and hence this "note to self" post. If anyone has gone the extra mile to get SSO working, I'd be interested to hear about it.

So this is how it ends up looking:

Import-Module Tridion-CoreService
Set-TridionCoreServiceSettings -HostName 'contentmanager.company.com'
Set-TridionCoreServiceSettings -Version 'Web-8.5'
Set-TridionCoreServiceSettings -CredentialType 'Basic'
Set-TridionCoreServiceSettings -ConnectionType 'Basic-SSL'
$ServiceAccountPassword = ConvertTo-SecureString 'secret' -AsPlainText -Force
$ServiceAccountCredential = New-Object System.Management.Automation.PSCredential ('DOMAIN\login', $ServiceAccountPassword)
Set-TridionCoreServiceSettings -Credential $ServiceAccountCredential

$core = Get-TridionCoreServiceClient
$core.GetApiVersion() # The simplest test

This is just an example, so I've stored my password in the script. The password is 'secret'. It's a secret. Don't tell anyone. Still - even though I'm a bit lacking in security rigour, the PowerShell isn't. It only wants to work with secure strings, and so should you. In fact, it's not much more fuss to work with ConvertTo-SecureString and friends to keep everything shipshape and Bristol fashion.
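And if you'd rather not have the password in the script at all, prompting for it is barely any extra work. Just a sketch - adjust the account name to whatever you actually use:

# Prompts for the password and hands back a PSCredential with a SecureString inside
$ServiceAccountCredential = Get-Credential -UserName 'DOMAIN\login' -Message 'Tridion core service account'
Set-TridionCoreServiceSettings -Credential $ServiceAccountCredential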