Dominic Cronin's weblog

Removing the replacement tokens from Tridion configuration files, and choosing not to

In SDL Web 8.5 we saw the introduction of replacement tokens in the content delivery configuration files. Whereas previously we'd simply had XML files with attributes and elements that we filled in with the relevant values, the replacement tokens allowed for the values to be provided externally when the configuration file is used. The commonest way to do this is probably by using environment variables, but you can also pass them as arguments to the java runtime. (A while ago, I wasted a bunch of time writing a script to pass environment variables in via java arguments. You don't need to do this.)

So anyway - taking the deployer config as my example, we started to see this kind of thing:

<Queues>
 <Queue Id="ContentQueue" Adapter="FileSystem" Verbs="Content" Default="true">
  <Property Value="${queuePath}" Name="Destination"/>
 </Queue>
 <Queue Id="CommitQueue" Adapter="FileSystem" Verbs="Commit,Rollback">
  <Property Value="${queuePath}/FinalTX" Name="Destination"/>
 </Queue>
 <Queue Id="PrepareQueue" Adapter="FileSystem" Verbs="Prepare">
  <Property Value="${queuePath}/Prepare" Name="Destination"/>
 </Queue>
</Queues>

Or from the storage conf of the disco service:

<Role Name="TokenServiceCapability" Url="${tokenurl:-http://localhost:8082/token.svc}">

So if you have an environment variable called queuePath, it will be used instead of ${queuePath}. In the second example, you can see that there's also a syntax for providing a default value, so if there's a tokenurl environment variable, that will be used, and if not, you'll get http://localhost:8082/token.svc.

This kind of replacement token is very common in the *nix world, where it's taken to even further extremes. Most of this is based on the venerable Shell Parameter Expansion syntax.

All this is great for automated deployments and I'm sure the team running SDL's cloud services makes full use of this technique. For now, I'm still using my own scripts to replace values in the config files, so a recent addition turned out to be a bit inconvenient. In Tridion Sites 9, the queue Ids in the deployer config have also been tokenised. So now we have this kind of thing:

<Queue Default="true" Verbs="Content" Adapter="FileSystem" Id="${contentqueuename:-ContentQueue}">
  <Property Name="Destination" Value="${queuePath}"/>
</Queue>

Seeing as I had an XPath that locates the Queue elements by Id, this wasn't too helpful. (Yes, yes, in the general case it's great, but I'm thinking purely selfishly!) Shooting from the hip, I quickly updated my script with an awesome regex :-) so instead of

$config = [xml](gc $deployerConfig)

I had

$config = [xml]((gc $deployerConfig) -replace '\$\{(?:.*?):-(.*?)\}','$1')

About ten seconds after finishing this, I realised that what I should really be doing is fixing my XPath to glom onto the Verbs attribute instead, but you can't just throw away a good regex. So - I present to you this beautiful regex for converting shell parameter expansions (or whatever they're called) into their default values in PowerShell. In other words, ${contentqueuename:-ContentQueue} becomes ContentQueue.

How does it work? Here it comes, one piece at a time:

'            single quote. Otherwise PowerShell will interpret characters like $ and {, which you don't want
\            a backslash to escape the dollar in the regex
$            the opening dollar of the expansion expression
\{           match the {, also escaped for the regex
(?:.*?)      match zero or more of anything, non-greedily, and without capturing
:-           match the :-
(.*?)        match zero or more of anything, non-greedily. No ?: this time, so it's captured for use later as $1
\}           match the }
'            closing single quote

 The second parameter of -replace is '$1', which translates to "the first capture". Note the single quotes, for the same reason as before
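
Just to see it in action, here's the replacement run against one of the tokenised values from the deployer config above:

'${contentqueuename:-ContentQueue}' -replace '\$\{(?:.*?):-(.*?)\}','$1'     # outputs: ContentQueue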

So there you have it. Now if ever you need to rip through a bunch of config files and blindly accept the defaults for everything, you know how. But meh... you could also just not provide any values in the environment. I refuse to accept that this hack is useless. A reason will emerge. The universe abhors a scripting hack with no purpose.

Getting SSH working on WSL

Posted by Dominic Cronin at Jun 01, 2019 08:39 PM |

Or in longhand: getting the Secure Shell working on the Windows Subsystem for Linux.

I've run a Unix command line on my Windows systems for years using Cygwin. I'm not one of those Unix nerds that can't function in a native Windows world, but there was always one particular use case that Windows was spectacularly poor in. If you wanted to connect to *nix systems, the obvious way to do this was via the Secure Shell (SSH) and Windows just didn't have an SSH client. Full stop. Nothing, nada, etc. Windows had its own mechanisms for connecting securely to.... another Windows box. If you wanted to connect to something that wasn't Windows.... well who'd want to do that?

Those of us that did installed Cygwin. This was an implementation of a *nix kernel in a DLL, and a bunch of the standard utilities built to use it. You could (and still can) do pretty much anything: if you couldn't search a file system without grep, Cygwin made it OK for you. I didn't use many of the utilities apart from occasionally Ghostscript to manipulate PDFs, but I used SSH every day.

Eventually Microsoft wised up and realised that open source wasn't the enemy. Linux was cool, and even Microsofties could learn to love it. So they implemented a Linux-compatible kernel interface as a Windows driver and called it the Windows Subsystem for Linux. They first teamed up with Ubuntu to get the user-space stuff running, and then later with SUSE and Debian, so you've got a fair choice if you're fussy about your distros.

And still - the killer use case is opening up a secure shell session to a Linux box. This is why we want WSL. So it's a bit rubbish when you discover that the standard way of logging in to such a remote session doesn't work. I'm talking about public key authentication. The basic idea is that you have two files holding the two parts of a public/private key pair. One lives on the server, and the other on the client. With this setup, you just make the connection and you're logged in. In order to keep this secure, the standard SSH client software insists that the key file is secured so that it's private to you. If anyone else can read it, the software will just refuse to play ball.

This is all well and good as long as you can set the security up to do that, but under WSL, in its out of the box configuration, you can't. This has been a source of great irritation to me, and I have now figured out the solution for the second time, having failed to write a note-to-self blog post the last time. This time, I'm writing it. See?

The bottom line is that you need to have WSL enable file system metadata so that you can override the security settings as needed. Here's an article explaining why, and here's one explaining how.

TL;DR

From within your WSL shell, create "/etc/wsl.conf". You'll probably need to use sudo with vi to do this, or you won't be able to save it. In the file, add the following:

[automount]
options="metadata"

With this in place, the next time you start the shell, metadata will be enabled, and you'll be able to "chmod 700" your key files to your heart's content.

Octotree is a nice thing

Posted by Dominic Cronin at Apr 14, 2019 07:55 AM |

https://www.octotree.io/

It's only rock and roll, but I like it.

Docker from the powershell, take two

Posted by Dominic Cronin at Mar 29, 2019 02:50 PM |

Take one

Back in 2016, I posted a quick and dirty technique for parsing the output from the docker CLI utilities. I recently returned to this looking for a slightly more robust approach. Back then I'd also pointed out the existence of a PowerShell module from Microsoft that made things a bit easier, but this has now been deprecated with a recommendation to use the docker cli directly or to use Docker.DotNet. This latter is a full-blown API that might be really handy for some tasks, but not for what I had in mind. The Docker CLI, OTOH, is the problem I'm trying to solve. Yes, sure, I might get further by spending more time figuring out filters and formatting, but the bottom line is that the PowerShell has made me lazy, and I plan to stay that way. A quick Google turns up any number of people hacking away at Bash to solve docker's CLI inadequacies. Whatever. This is Windows, and I expect to see a pipeline full of objects with properties. :-)

Take two

It turns out that a reasonably generic approach to parsing docker's output gives you exactly that: a pipeline full of objects. I've just added a couple of functions to my $profile, which means I can do something like

docker ps -a | parseColumns

and get the columns from ps in my pipeline as object properties. Here is the code:

function parseColumnsFromHeader ($line){
    $cols = @()
    # Lazily match chunks of text followed by at least two whitespace or a line end
    $re = [regex]"((.*?)(\s{2,}|$))*"
    $matches = $re.Match($line)
    # Group 0 is the whole match, then you count left parens, so
    # group 1 is ((.*?)(\s{2,}|$)),
    foreach ($capture in $matches.Groups[1].Captures){
        if ($capture.Length -gt 0) {
            # The captures are therefore both the chunk of text and the following whitespace
            # so the length is right, but we trim for the name
            $col = @{
                Name = $capture.Value.Trim()
                Index = $capture.Index
                Length = $capture.Length
            }
            $cols += New-Object PSObject -Property $col;
        }
    }
    return $cols
}

filter parseColumns {

    if (-not $headerDone) {
        $cols = parseColumnsFromHeader $_;
        $headerDone = $true;
    } else {
        $propertiesHash = [ordered]@{}        
        foreach($col in $cols) {                
            $propertiesHash.Add($col.Name, ([string]$_).Substring($col.Index, $col.Length))            
        }
        New-Object PSObject -Property $propertiesHash
    }

}

This works equally well for other docker commands, such as 'docker images'. The only proviso is that the output begins with a single line of column headers. As long as the headers have at least two whitespace characters between them, and no more than one consecutive space within the header text itself, it should be fine.
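
And once the columns are properties, the ordinary PowerShell idioms just work. (The property names below - 'CONTAINER ID', IMAGE, STATUS - are simply whatever column headers the current docker CLI happens to print, so adjust to taste.)

docker ps -a | parseColumns | ? { $_.STATUS -like 'Exited*' } | select 'CONTAINER ID', IMAGE, STATUS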

OK - it's fairly generic. Maybe it's better than my previous approach. That said, I'm irritated by the reliance on having to parse columns based on white space. This could break at any moment. Should I be looking for something better?

Not good enough

The thing is, I'm always on the lookout for techniques that will work everywhere. The reason I'm reasonably fluent in the vi editor today is that some years ago, I consciously chose to stop learning Emacs. This isn't a holy wars thing. Emacs was probably better. Maybe... but it wasn't everywhere. Certainly at the time, vi was available on every Unix machine in the world. (These days I have to type "apt-get update && apt-get install -y vim && vi foobar.txt" far more often than I'd like, but on those machines, there's no editor installed at all, and I understand why.)

One of the reasons I never really got along with the PowerShell module is that on any given day, I can't guarantee I'll be able to install modules on the system I'm working on. I can probably paste some code into my $profile, or perhaps even more commonly, grab a one-liner from this blog and paste it directly into my shell. But generic hacks at your fingertips FTW.

A better way?

So maybe I just have to learn to love the lowest common denominator. If I'm on a bare Windows machine doing Docker, can I get the pain threshold down far enough? Well, just maybe!

If you look at the Docker CLI documentation, you'll see that pretty much every command takes a --format flag, which allows you to pass a GO template. If you want to output the results as json, it's fairly simple, and then of course, the built-in ConvertFrom-Json cmdlet will get you the rest of the way.

I'm reasonably sure that before too long I'll be typing this kind of thing instead of using the functions above:

docker ps -a --format '{{json .}}' | ConvertFrom-Json 
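
The property names you get back that way are the Go template field names rather than the column headers (Names, Image, Status and so on - at least in the CLI versions I've seen), so the same kind of filtering would look like this:

docker ps -a --format '{{json .}}' | ConvertFrom-Json | ? { $_.Status -like 'Exited*' } | select Names, Image, Status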

I'm an Idea Machine!

Apparently, I'm an "Idea Machine". Who'da thought it?

So.... the SDL community site just gave me the Idea Machine badge. Apparently you get this for submitting five ideas. Alright, alright - I'll get over myself, really. Just let me enjoy the moment!! :-)

But seriously... for all the nonsense of online achievements, badges and whatever... submitting ideas is a really good... um.. idea. Many of us are working with Tridion software day in and day out, and sometimes you come across something that could be better, or even just something a bit irritating. Chances are you aren't the only one who would appreciate an improvement. So do us all a favour, hold that thought and pop along to https://community.sdl.com/ideas/sdl-tridion-dx-ideas/sdl-tridion-sites-ideas/, or  the relevant site for you, and post your idea.

Once your idea is in, people can vote it up or down and comment on it. You can also be sure that it will be seen by people from R&D and product management. The only down side of the whole thing is that they have an "ideation wiki". Ideation?  Ideation? Really. Maybe they'll have to find some cweatives for that.

Oh what the heck?! Come on folks! Get ideating!!!

Out with the old, in with the new. How will 2019 look for Tridion specialists?

Posted by Dominic Cronin at Dec 31, 2018 05:12 PM |

It's New Year's eve: a traditional time to look backwards and forwards. I've spent a little time contemplating these grand themes in the context of my life as a Tridion specialist. Where are we now and what will the new year bring?

Let's start with 2018. The major event this year, of course, was the release of SDL Tridion DX, incorporating SDL Tridion Sites 9. So the first thing you see is that we got the Tridion name back. Hurrah! OK - enough of that: getting the name back is good, but there are other things to get excited about.

Firstly - Tridion DX brought SDL's two content management systems together: the web content management system "formerly known as Tridion", and the SDL Knowledge Center. The new branding names the first "SDL Tridion Sites" and the second "SDL Tridion Docs". So now we have both the web content management and structured content management features in the same product. To be honest, I suspect at first the number of customers who want to combine the two will be small, but for companies that do need to straddle these two worlds, the integration provided by DX will be a killer feature. As time goes on, we'll probably find that having both approaches available helps to prevent the need to knock a round peg into a square hole in some implementations. It's also clear that this represents a significant engineering effort at SDL. They haven't just put everything in the same shrink-wrap, but for example, the content delivery architecture has had a major revamp to get the two systems to play nicely together. Even this got a new branding: the Unified Delivery Platform!

I suspect, though, that in 2019 I'll be mostly busy with pure WCM work. Sites 9 brings a raft of enhancements that help to keep it current in the fast-moving world of modern web development. The most interesting is perhaps the new model service. We've seen a model service as part of the DXA framework, but Sites 9 has a "Public Content API", which boils down to a GraphQL endpoint. Tridion's architecture has always had great separation of concerns, so in many ways it can take the current trend for headless sites in its stride, but a GraphQL service will make it easier to consume content directly, without having to build server-side support as part of your implementation. GraphQL allows you to specify exactly which data you want to get back, and will enable developers to ensure the data traffic between server and client is clean and lean.

There are also other interesting new features. A good example is regions within pages. In practice, the build-up of a web page is done this way - we have different areas of the page showing different kinds of content, and it's great to see that this kind of structure can now be modeled directly in the content manager. I'll stop there; there are far too many new features for a short blog post.

The Sites 9 release has meant a matching update (2.1) to the DXA framework, which is now using the new public content API and of course has support for the new page regions.

So going into 2019, things are looking really great for anyone beginning a greenfield project on SDL Tridion. That's not the whole story, though. At the other end of the spectrum, there are always customers who are waiting for the right moment to upgrade from an older version. This might be the year when we finally say goodbye to our old friend vbScript. As I understand it, from the Sites 9 release onwards, the legacy support won't even install any more, so organisations that still have vbScript will be planning how to migrate before the next "major" release puts them out of support. To be fair to SDL, by my calculation it's 16 years since compound templating was introduced. That ought to be ample time, you'd have thought. Putting that a bit more positively, we now have far better ways of doing things, and a Tridion 9/DXA 2.1 approach is a much better place to be.

I suspect the other main themes for 2019 will be cloud computing and devops. As organisations move forward to the new product versions, they are also looking at their architectures and working practices. Fortunately, Tridion as a product is already highly cloud-capable, and the move away from templating on the content manager has definitely had an impact on how easy it is to implement continuous integration and delivery/deployment.

It will be a year of transition for many of our customers: not only the technical transitions that I've mentioned, to new architectures and techniques, but also a transition for the business people who are looking to take the next step towards a unified online experience for their customers and visitors.

I'm looking forward to it. Bring it on!

A happy new year to you all.

Unboxing SDL Tridion Sites 9

Posted by Dominic Cronin at Dec 26, 2018 10:40 PM |

It's Boxing Day, so I thought I'd treat myself by unboxing Tridion Sites 9, or more to the point, installing the Content Manager. Just to give a bit of context, this is not a production installation, but rather a "fifth environment". (Chris Summers once suggested this term for a developer's own setup, as distinct from the Development, Test, Acceptance and Production environments of a traditional delivery street. The name seems to have stuck.)

I'm installing on the Google Cloud Platform (GCP), so I selected a "SQL Server 2017 Standard on Windows Server 2016 Datacenter" image with a standard 50GB boot disk. By default I got a system with 3.75GB of memory, but during installation I got a notification from GCP that this might not be enough, so I accepted the suggested upgrade to 4.75GB. I'm not sure the installation process is a sufficiently typical workload to determine memory sizing, but well... 4 gigs is pretty small these days, eh? I'm trying to run this on a tight budget, but an extra gig of memory won't be what breaks the bank. (If anything turns out to be expensive, it will be the Windows license, but you pay by the second and I plan to be very disciplined about shutting things down when I don't need them.) The shoestring budget is why I chose a version that already has MSSQL installed. The last time I did this, I ended up running up two separate Windows Servers, but this time I'm just starting with the version with MSSQL, which might help to keep the costs down. Of course, in production, a separate database server or two would still make sense, but this is a research rig. As for where to run the database, I'm also looking at the dockerised version of MSSQL, which has some attractions, but to get going quickly, a Windows image with it installed will be fine.

For SDL Web 8.5 I already had some scripts that took care of most of the content manager installation. I'm pleased to say that these ran with only very minor modifications for Tridion 9. So, for example, the layout of the installer directory is relatively predictable, but the installer executable is now called SDLTridionSites9.exe. But let's start at the beginning. After the usual fuss trying to get the files I needed up to the cloud and available from my image, I was able to do the following:

  • Run my script that kicks off all the Tridion database install scripts. Nothing very exciting here, just a rinse-and-repeat operation for most of the time. Having a script for this saves you typing the same input parameters a dozen times, and gives you an audit trail. Before I could do this, I had to install Microsoft SQL Server Management Studio and use it to set up the sa account as I wanted it. For content delivery I may well choose to put the databases somewhere else, but I'll definitely remember to read through this note to self I wrote a while ago.
  • As it's an all-in-one install, it's necessary to disable the loopback check. If you're happy with a quick-and-dirty, this will get you there:
New-ItemProperty HKLM:\System\CurrentControlSet\Control\Lsa -Name DisableLoopbackCheck -Value 1 -PropertyType dword
    • After that I ran my installation script. Mostly this is a question of passing some pre-determined parameters to the installer, mainly so it's repeatable. I do, however, do a couple of things first. One is to create the content manager system account (tridionsys, or mtsuser if you're old-fashioned) - there's a sketch of that below, just after the feature setup. I also set up a couple of optional Windows features, like this -
$desiredFeatures = "IIS-ApplicationInit", `
            "IIS-ASPNET",`
            "IIS-HttpCompressionDynamic", `
            "IIS-ManagementService", `
            "Web-Mgmt-Service", `
            "WAS-NetFxEnvironment", `
            "Windows-Identity-Foundation"

Get-WindowsOptionalFeature -Online `
| ?{$desiredFeatures -contains $_.FeatureName -and -not ($_.State -eq 'Enabled')} `
| Enable-WindowsOptionalFeature -Online
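
For completeness, here's the kind of thing I mean by creating the system account first. This is just a sketch using the local accounts cmdlets - whether you want a local or a domain account, and which groups it needs to be in, will depend entirely on your own setup:

# Prompt for a password rather than hard-coding one
$password = Read-Host -AsSecureString -Prompt 'Password for tridionsys'
New-LocalUser -Name 'tridionsys' -Password $password -PasswordNeverExpires -AccountNeverExpires
Add-LocalGroupMember -Group 'Administrators' -Member 'tridionsys'   # only if your setup needs it to be an admin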

The old installer had problems with installing the content manager and topology manager on the same port with distinct host headers. My installation approach involves letting the installer put them on separate ports and then running a separate script to fix things up how I like it. I haven't tested whether the new installer makes this unnecessary, as for now my priority is just getting a working system. Perhaps it's also interesting to test whether my pre-install fixup of Windows features is still necessary. UPDATE: I have now tested whether this problem still exists in the Sites 9 installer, and it does.

Anyway, I now have the content manager and topology manager running, and can move on to content delivery. My overall assessment so far is that it's a pretty straightforward setup. I'm looking forward to my further adventures with Tridion Sites 9.

Using environment variables to configure the Tridion microservices

Within a day of posting this, Peter Kjaer informed me that the microservices already support environment variables, so this entire blog post is pointless. So my life just got simpler, but it cost me a blog post to find out. Oh well. I'm currently trying to decide whether to delete the post entirely or work it into something useful. In the meantime at least be aware that it's pointless! :-) Anyway - thanks Peter.

When setting up a Tridion content delivery infrastructure, one of the most important considerations is how you are going to manage all the configuration values. The microservices have configuration files that look very similar to those we're familiar with from versions of Tridion going back to R5. Fairly recently (in 8.5, I think) they acquired a "new trick", which is that you can put replacement tokens in the files, and these will be filled in with values that you can pass as JVM parameters when starting up your java process. Here's an example taken from cd_discovery_conf.xml:

<ConfigRepository ServiceUri="${discoveryurl:-http://localhost:8082/discovery.svc}"
    ConnectionTimeout="10000"
    CacheEnabled="true"
    CacheExpirationDuration="600"
    ServiceMonitorPollDuration="10"
    ClientId="registration"
    ClientSecret="encrypted:HzfQh9wYwAKShDxCm4DnnBnysAz9PtbDMFXMbPszSVY="
    TokenServiceUrl="${tokenurl:-http://localhost:8082/token.svc}">

Here you can see the tokens "discoveryurl" and "tokenurl" delimited from the surrounding text with ${} and followed by default values after the :- symbol.

This is really handy if you are doing any kind of managed provisioning where the settings have to come from some external source. One word of warning, though. If you are setting up your system by hand and intending to maintain it that way, it's most likely a really bad idea to use this technique. In particular, if you are going to install the services under Windows, you'll find that the JVM parameters are stored in a deeply obscure part of the registry. More to the point, you really don't want two versions of the truth, and if you have to look every time to figure out whether tokenurl is coming from the default in your config or from deep underground, I don't hold out much hope for your continued sanity if you ever have to troubleshoot the thing.

That said, if you do want to provision these values externally, this is the way to go. Or at least, in general, it's what you want, but personally I'm not really too happy with the fact that you have to use JVM parameters for this. I've recently been setting up a dockerised system, and I found myself wishing that I could use environment variables instead. That's partly because this is a natural idiom with docker. Docker doesn't care what you run in a container, and has absolutely no notion of a JVM parameter. On the other hand, Docker knows all about environment variables, and provides full support for passing them in when you start the container. On the command line, you can do this with something like:

> docker run -it -e dbtype=MSSQL -e dbclass=com.microsoft.sqlserver.jdbc.SQLServerDataSource -e dbhost=mssql -e dbport=1433 -e dbname=Tridion_Discovery \
    -e discoveryurl=http://localhost:8082/discovery.svc -e tokenurl=http://localhost:8082/token.svc discovery bash

I'm just illustrating how you'd pass command-line environment arguments, so don't pay too much attention to anything else here, and of course, even if you had a container that could run your service, this wouldn't work. It's not very much less ugly than constructing a huge set of command parameters for your start.sh and passing them as a command array. But bear with me; I still don't want to construct that command array, and there are nicer ways of passing in the environment variables. For example, here's how they might look in a docker-compose.yaml file (Please just assume that any YAML I post is accompanied by a ritual hawk and spit. A curse be on YAML and its benighted followers.)

   environment: 
      - dbtype=MSSQL
      - dbclass=com.microsoft.sqlserver.jdbc.SQLServerDataSource
      - dbhost=mssql
      - dbport=1433
      - dbname=Tridion_Discovery
      - dbuser=TridionBrokerUser
      - dbpassword=Tridion1
      - discoveryurl=http://localhost:8082/discovery.svc
      - tokenurl=http://localhost:8082/token.svc

This is much more readable and manageable. In practice, rather than docker-compose, it's quite likely that you'll be using some more advanced orchestration tools, perhaps wrapped up in some nice cloudy management system. In any of these environments, you'll find good support for passing in some neatly arranged environment variables. (OK - it will probably degenerate to YAML at some point, but let's leave that aside for now.)

Out of the box, the Tridion services are started with a bash script "start.sh" that's to be found in the bin directory of your service. I didn't want to mess with this: any future updates would then be a cause for much fiddling and cursing. On top of that, I wanted something I could generically apply to all the services. My approach looks like this:

#!/bin/bash
# vim: set fileformat=unix

scriptArgs=""
tcdenvMatcher='^tcdconf_([^=]*)=(.*)'
for tcdenv in $(printenv); do
    if [[ $tcdenv =~ $tcdenvMatcher ]]; then
        scriptArgs="$scriptArgs -D${BASH_REMATCH[1]}=${BASH_REMATCH[2]}"
    fi
done

script_path="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
$script_path/start.sh $scriptArgs

(I'm sticking with the docker-compose example to illustrate this. In fact, with docker-compose, you'd also need to script some dependency-management between the various services, which is why you'd probably prefer to use a proper orchestration framework.)

The script is called "startFromEnv.sh". When I create my docker containers, I drop this into the bin folder right next to start.sh. When I start the container, the command becomes something like this (but YMMV, depending on how you build your images):

command: "/Discovery/bin/startFromEnv.sh"

instead of:

command: "/Discovery/bin/start.sh"

And the environment variables get some prefixes, so the relevant section of the setup looks like this:

    environment: 
      - tcdconf_dbtype=MSSQL
      - tcdconf_dbclass=com.microsoft.sqlserver.jdbc.SQLServerDataSource 
      - tcdconf_dbhost=mssql
      - tcdconf_dbport=1433
      - tcdconf_dbname=Tridion_Discovery
      - tcdconf_dbuser=TridionBrokerUser
      - tcdconf_dbpassword=Tridion1
      - tcdconf_discoveryurl=http://localhost:8082/discovery.svc
      - tcdconf_tokenurl=http://localhost:8082/token.svc

The script is written in bash, as evidenced by the hashbang line at the top. (Immediately after is a vim modeline that you can ignore or delete unless you happen to be using an editor that respects such things and you are working on a Windows system. I've left it as a reminder that the line endings in the file do need to be unix-style.)

The rest of the script simply(!) loops through the environment variables that are prefixed with "tcdconf_" and converts them to -D arguments, which it then passes on to start.sh (which it looks for in the same directory as itself). So, for example, tcdconf_dbname=Tridion_Discovery in the environment ends up being passed to start.sh as -Ddbname=Tridion_Discovery.

I'm still experimenting, but for now I'm assuming that this approach has improved my life. Please do let me know if it improves yours. :-)

If you think the script is ugly, apparently this is a design goal of bash, so don't worry about it. At least it's not YAML (hack, spit!)

Tridion Core service PowerShell settings for SSO-enabled CMS

Posted by Dominic Cronin at Nov 18, 2018 07:30 PM |

In a Single-Sign-On (SSO) configuration, it's necessary to use Basic Authentication for web requests to the Tridion Content Manager from the browser. This is probably the oldest way of authenticating a web request, and involves sending the password in plain over the wire. This allows the SSO system to make use of the password, which would be impossible if you used, for example, Windows Authentication. The down side of this is that you'd be sending the password in plain over the wire... can't have that, so we encrypt the connection with HTTPS.

What I'm describing here is the relatively simple use case of using the powershell module to log in to an SSO-enabled site using a domain account. Do please note that this won't work if you're expecting to authenticate using SSO. Then you'll need to mess around with federated security tokens and such things. My use case is a little simpler as I have a domain account I can log in with. As the site is set up to support most of the users coming in via SSO, these are the settings I needed, and hence this "note to self" post. If anyone has gone the extra mile to get SSO working, I'd be interested to hear about it.

So this is how it ends up looking:

Import-Module Tridion-CoreService
Set-TridionCoreServiceSettings -HostName 'contentmanager.company.com'
Set-TridionCoreServiceSettings -Version 'Web-8.5'
Set-TridionCoreServiceSettings -CredentialType 'Basic'
Set-TridionCoreServiceSettings -ConnectionType 'Basic-SSL'
$ServiceAccountPassword = ConvertTo-SecureString 'secret' -AsPlainText -Force
$ServiceAccountCredential = New-Object System.Management.Automation.PSCredential ('DOMAIN\login', $ServiceAccountPassword)
Set-TridionCoreServiceSettings -Credential $ServiceAccountCredential

$core = Get-TridionCoreServiceClient
$core.GetApiVersion() # The simplest test

This is just an example, so I've stored my password in the script. The password is 'secret'. It's a secret. Don't tell anyone. Still - even though I'm a bit lacking in security rigour, the PowerShell isn't. It only wants to work with secure strings and so should you. In fact, it's not much more fuss to work with ConvertTo-SecureString and friends to keep everything shipshape and Bristol fashion.
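
If you'd rather not have even a throwaway password sitting in the script, a small variation is to let PowerShell prompt you for the credential instead; Get-Credential hands you the same kind of PSCredential object:

$ServiceAccountCredential = Get-Credential -UserName 'DOMAIN\login' -Message 'Tridion core service account'
Set-TridionCoreServiceSettings -Credential $ServiceAccountCredential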

Using the Tridion PowerShell module in a restricted environment

At some point, pretty much every Tridion specialist is going to want to make use of Peter Kjaer's Tridion Core Service PowerShell modules. The modules come with batteries included, and if you look at the latest version, you'll see that they are available from the PowerShell Gallery, and therefore a simple install via Install-Module should "just work".

Most of us spend a lot of our time on computers that are behind a corporate firewall, and on which the operating system is managed for us by people whose main focus is on not allowing us to break anything. I recently found myself trying to install the modules on a system with an older version of PowerShell where Install-Module wasn't available. The solution for this is usually to install the PowerShellGet module which makes Install-Module available to you. In this particular environment, I knew that various other difficulties existed, notably with the way the PowerShell module path is managed. Installing a module would first require a solution to the problem of installing modules. In the past, I'd made a custom version of the Tridion module as a workaround, but now I was trying to get back to a clean copy of the latest, greatest version. Hacking things by hand would defeat my purpose.

It turned out that I was able to clone the Git repository, so I had the folder structure on disk. (Failing that, I could have tried downloading a zip file from GitHub.)

Normally, you install your modules in a location on the Module Path of your PowerShell, and the commonest of these locations is the WindowsPowerShell folder in your Documents folder. (There are other locations, and you can check these with "gc Env:\PSModulePath".) As I've mentioned, in this case, using the normal Module Path mechanism was problematic, so I looked a little further. It turned out the solution was much simpler than I had feared. You can simply load a module by specifying its location when you call Import-Module. I made sure that the tridion-powershell-modules folder I'd got from Git was in a known location relative to the script file from which I wanted to invoke it, and then called Import-Module using the location of Tridion-CoreService.psd1:

$scriptLocation = Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path 
import-module $scriptLocation\..\tridion-powershell-modules\CoreService\Tridion-CoreService.psd1

Getting the script location from the built-in MyInvocation variable is ugly, but pretty much standard PowerShell. Anyway - this works, and I now have a strategy for setting up my scripts to use the latest version of the core service module. Obviously, if you want the Alchemy or Content Delivery module, a similar technique ought to work.
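
For what it's worth, on PowerShell 3.0 or later you can get the same effect rather more readably with the automatic $PSScriptRoot variable, assuming the same relative folder layout as above:

Import-Module "$PSScriptRoot\..\tridion-powershell-modules\CoreService\Tridion-CoreService.psd1"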