
Dominic Cronin's weblog


Room for a little YAGNI in the DXA :-)

Posted by Dominic Cronin at Sep 27, 2019 08:37 AM

I wouldn't usually call out open source code in a blog post, but honestly, this made me Laugh Out Loud in the office yesterday. I'd ended up poking around in the part of the Tridion Digital Experience Accelerator (DXA) that deals with media items. Just to be clear, the media items in question would usually be either binaries that have some role in displaying your web site, or in this case, more specifically, downloads such as a PDF or whatever. The thing that made me laugh was in a function called getFriendlyFileSize(). A common use case for this would be to display a file size next to your download link, so the visitor knows that they can download the PDF fairly quickly, or that maybe they'd better wait until they're on the WiFi before attempting that 10GB ISO file.

getFriendlyFileSize() converts a raw number of bytes into something like 13MB, 7KB, or 5GB. What made me laugh was the fact that the author has also very helpfully included support not only for GigaBytes, but also TeraBytes, PetaBytes and ExaBytes.
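For what it's worth, the logic amounts to dividing by 1024 until the number stops being scary, then tacking on the right unit. Here's a rough PowerShell sketch of the idea (my own names; the real DXA code is Java):

function Get-FriendlyFileSize([long]$bytes) {
    # Step down through the units until the value is small enough to read
    $units = "B", "KB", "MB", "GB", "TB", "PB", "EB"
    $size = [double]$bytes
    $i = 0
    while ($size -ge 1024 -and $i -lt $units.Length - 1) {
        $size /= 1024
        $i++
    }
    "{0:0.#}{1}" -f $size, $units[$i]
}

Get-FriendlyFileSize 13631488    # 13MB
Get-FriendlyFileSize 10PB        # 10PB - yes, PowerShell has numeric suffixes up to PB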

Sitting in my office right now, I'm getting about "140 down" from Speedtest. That is to say, my download speed from the Internet is about 140 Mbps, which works out in practice to mean that if I want to grab the latest Centos-with-all-the-bells-and-whistles.ISO (let's say 10GB) it'll take me about 10 minutes. Let's say I want to scale that up to 10PB: that's a million times as much data, so we're talking about 18 years or so, which is already in the same league as the longest web server uptime known to mankind. Scale up again to the Exabyte range, and the download would outlast that record by a couple of orders of magnitude and then some.
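Checking that on the back of an envelope:

# 10 PB at 140 Mbps (i.e. 17.5 MB/s), converted to years
PS > [math]::Round(10e15 / (140e6 / 8) / (3600 * 24 * 365), 1)
18.1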

Well maybe this is just old-fashioned thinking, but I'm inclined to think we don't need friendly Exabyte file sizes for website media downloads just yet. In the words of the old Extreme Programming mantra, "You ain't gonna need it". (YAGNI)

I'm not here to take a rise out of the hard-working hackers that contribute so much to us all. Really. I can't say that strongly enough. It made me laugh out loud, that's all.

Discovering content delivery environment capabilities... on Tridion Practice

I recently wrote a script to query the capabilities of a Tridion content delivery environment. Rather than post it here, I've put it up on Tridion Practice. After all, it's about time "practice" got a bit of love. I think I'll try to get a bit more fresh material up there soon. 

I hope you'll find the script useful. Dare I also hope that some of you might be inspired to contribute something to Tridion Practice? It's always been the intention that we'd try to get contributions from a range of practitioners, so please feel welcome. Cookbook recipes are probably the most prominent part of the site, but the patterns and practices sections are also interesting. Come on down! 

Watch out for this Tridion quantum leap!

Posted by Dominic Cronin at Sep 07, 2019 04:20 PM

With the release of SDL Tridion's Sites 9.1, they've made a startling leap in their platform support for Java.

In Sites 9, Java 8 is supported. It's not deprecated, and Java 11 is not yet mentioned.

In Sites 9.1, Java 8 support is totally gone, and the only supported version is Java 11.

I was certainly caught out. In trying to puzzle out a plausible upgrade path, before I'd actually read through and absorbed all the details, I'd raised a support ticket that turned out to be unnecessary, or at least premature. We'll be going to Sites 9, so the leap to Java 11 is a little further up the road.

I get it that they have all sorts of good reasons for this. The only possible scenario I can think of for not deprecating Java 8 is that they didn't yet know it was going to be dropped when they released Sites 9. It's also true that we have very, very little of the Tridion product actually baked into the application these days.

I'm still calling this a gotcha.

Discovery service in Tridion Sites 9 has two storage configs

I just got bitten by a little "gotcha" in SDL Tridion Sites 9. When you unpack the installation zip, you'll find that in the Content Delivery/roles/Discovery folder, there's a separate folder for registration, with the registration tool and its own copy of cd_storage_conf.xml. The idea seems to be that running the service and registering capabilities are two separate activities. I kind of get that. When I first saw the ConfigRepository element at the bottom of Discovery's configuration, I felt like it had been shoehorned into a somewhat awkward place. Yet now, it seems even more awkward. So, sure: both the service and the registration tool need access to the storage settings for the discovery database, while only the registration tool needs the configuration repository.

The main difference seems to be one of security. The version of the config that goes with the registration tool has the ClientId and ClientSecret attributes while the other doesn't. This, in fact, is the gotcha that caught me out; I'd copied the storage config from the service, and ended up being unable to perform an update. The error output did mention being unable to get an OAuth token, but I didn't immediately realise that the missing ClientId and ClientSecret were the reason. Kudos to Damien Jewett for his answer on Stack Exchange, which saved me some hair-pulling.
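To illustrate (from memory, so treat this as a sketch rather than gospel), the registration tool's copy has the extra attributes on the ConfigRepository element, something like this:

<ConfigRepository ServiceUri="${discoveryurl:-http://localhost:8082/discovery.svc}"
    ClientId="registration"
    ClientSecret="encrypted:..."
    TokenServiceUrl="${tokenurl:-http://localhost:8082/token.svc}">
    ...
</ConfigRepository>

The service's copy is the same, minus the ClientId and ClientSecret.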

I'm left wondering if this is the end game, or whether a future version will see some further tidying up or separation of concerns.

EDIT: On looking at this again, I realised that even in 8.5 we had two storage configs. The difference is that in 8.5 both had the ClientId and ClientSecret attributes.

Enumerating the Tridion config replacement tokens

OK - I get it. It's starting to look like I've got some kind of monomania regarding the replacement tokens in Tridion config files, but bear with me. In my last blog post, I'd hacked out a regex that could be used for replacing them with their default values, but had thought better of actually doing so. But still, the idea of being able to grab all the tokens has some appeal. I can't bear to waste that regex, so now I'm looking for a reasonable use for it.

It occurred to me that at some point in an installation, it might be handy to have a comprehensive list of all the things you can pass in as environment variables. Based on what I'd done yesterday, this was quite straightforward:

gci -r -include *.xml -exclude logback.xml| sls '\$\{.*?\}' `
| select {$_.RelativePath((pwd))},LineNumber,{$_.Matches.value} `
| Export-Csv SitesNineTokens.csv

By going to my unzipped Tridion zip and running this in the "Content Delivery/roles" folder, I had myself a spreadsheet with a list of all the tokens in Sites 9. Similarly, I created a spreadsheet for Web 8.5. (As you can see, I've excluded the logback files just to keep the volume down a bit, but in real life, you might also want to see those listed.)

The first thing you see when comparing Sites 9 with Web 8.5 is that there are a lot more of the things. More than twice as many. (At this point I should probably confess to some possible inaccuracy, as I haven't gone to the trouble of stripping out XML comments, so there could be some duplicates.)
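(If you did want to rule the comments out, a rough sketch would be to parse each file, throw away the comment nodes, and only then apply the regex:

gci -r -include *.xml -exclude logback.xml | %{
    $xml = [xml](gc $_.FullName -Raw)
    # Drop the comment nodes, so commented-out tokens don't get counted
    $xml.SelectNodes('//comment()') | %{ [void]$_.ParentNode.RemoveChild($_) }
    [regex]::Matches($xml.OuterXml, '\$\{.*?\}').Value
} | sort -unique

I haven't bothered, though; the rough numbers tell the story well enough.)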

65 of these come from the addition of XO and another 42 from IQ, but in general, there are just more of them. The bottom line is that to get a Tridion system up and running these days, you are dealing with hundreds of settings. To be fair, that's simply what's necessary in order to implement the various capabilities of such an enterprise system.

One curious thing I noticed is that the ambient configs all have a token to allow you to disable OAuth security, yet no tokens for the security settings of the various roles. I wonder if this reflects the way people actually use Tridion.

Of course, you aren't necessarily limited by the tokens in the example configs of the shipped product. Are customers defining their own as they need them?

That's probably enough about this subject, though, isn't it?

Removing the replacement tokens from Tridion configuration files, and choosing not to

In SDL Web 8.5 we saw the introduction of replacement tokens in the content delivery configuration files. Whereas previously we'd simply had XML files with attributes and elements that we filled in with the relevant values, the replacement tokens allowed for the values to be provided externally when the configuration file is used. The commonest way to do this is probably by using environment variables, but you can also pass them as arguments to the java runtime. (A while ago, I wasted a bunch of time writing a script to pass environment variables in via java arguments. You don't need to do this.)

So anyway - taking the deployer config as my example, we started to see this kind of thing:

<Queues>
 <Queue Id="ContentQueue" Adapter="FileSystem" Verbs="Content" Default="true">
  <Property Value="${queuePath}" Name="Destination"/>
 </Queue>
 <Queue Id="CommitQueue" Adapter="FileSystem" Verbs="Commit,Rollback">
  <Property Value="${queuePath}/FinalTX" Name="Destination"/>
 </Queue>
 <Queue Id="PrepareQueue" Adapter="FileSystem" Verbs="Prepare">
  <Property Value="${queuePath}/Prepare" Name="Destination"/>
 </Queue>
</Queues>

Or from the storage conf of the disco service:

<Role Name="TokenServiceCapability" Url="${tokenurl:-http://localhost:8082/token.svc}">

So if you have an environment variable called queuePath, it will be used instead of ${queuePath}. In the second example, you can see that there's also a syntax for providing a default value, so if there's a tokenurl environment variable, that will be used, and if not, you'll get http://localhost:8082/token.svc.

This kind of replacement token is very common in the *nix world, where it's taken to even further extremes. Most of this is based on the venerable Shell Parameter Expansion syntax.
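You can play with the idiom directly at a bash prompt; here's the token example from above, with a made-up value:

$ tokenurl=https://tokens.example.com/token.svc
$ echo "${tokenurl:-http://localhost:8082/token.svc}"
https://tokens.example.com/token.svc
$ unset tokenurl
$ echo "${tokenurl:-http://localhost:8082/token.svc}"
http://localhost:8082/token.svc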

All this is great for automated deployments and I'm sure the team running SDL's cloud services makes full use of this technique. For now, I'm still using my own scripts to replace values in the config files, so a recent addition turned out to be a bit inconvenient. In Tridion Sites 9, the queue Ids in the deployer config have also been tokenised. So now we have this kind of thing:

<Queue Default="true" Verbs="Content" Adapter="FileSystem" Id="${contentqueuename:-ContentQueue}">
  <Property Name="Destination" Value="${queuePath}"/>
</Queue>

Seeing as I had an XPath that locates the Queue elements by ID, this wasn't too helpful. (Yes, yes, in the general case it's great, but I'm thinking purely selfishly!) Shooting from the hip, I quickly updated my script with an awesome regex :-), so instead of

$config = [xml](gc $deployerConfig)

I had

$config = [xml]((gc $deployerConfig) -replace '\$\{(?:.*?):-(.*?)\}','$1')

About ten seconds after finishing this, I realised that what I should be doing instead is fixing my XPath to glom onto the Verbs attribute, but you can't just throw away a good regex. So - I present to you this beautiful regex for converting shell parameter expansions (or whatever they are called) into their default values using PowerShell. In other words, ${contentqueuename:-ContentQueue} becomes ContentQueue.

How does it work? Here it comes, one piece at a time:

'          single quote. Otherwise PowerShell will interpret characters like $ and {, which you don't want
\          a backslash to escape the dollar in the regex
$          the opening dollar of the expansion expression
\{         match the {, also escaped for the regex
(?:.*?)    match zero or more of anything, non-greedily, and without capturing
:-         match the :-
(.*?)      match zero or more of anything, non-greedily. No ?: this time, so it's captured for use later as $1
\}         match the }
'          single quote

 The second parameter of -replace is '$1', which translates to "the first capture". Note the single quotes, for the same reason as before
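To see it in action:

PS > '<Queue Id="${contentqueuename:-ContentQueue}"/>' -replace '\$\{(?:.*?):-(.*?)\}','$1'
<Queue Id="ContentQueue"/>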

So there you have it. Now if ever you need to rip through a bunch of config files and blindly accept the defaults for everything, you know how. But meh... you could also just not provide any values in the environment. I refuse to accept that this hack is useless. A reason will emerge. The universe abhors a scripting hack with no purpose.

I'm an Idea Machine!

Apparently, I'm an "Idea Machine". Who'da thought it?

So.... the SDL community site just gave me the Idea Machine badge. Apparently you get this for submitting five ideas. Alright, alright - I'll get over myself, really. Just let me enjoy the moment!! :-)

But seriously... for all the nonsense of online achievements, badges and whatever... submitting ideas is a really good... um.. idea. Many of us are working with Tridion software day in and day out, and sometimes you come across something that could be better, or even just something a bit irritating. Chances are you aren't the only one who would appreciate an improvement. So do us all a favour, hold that thought and pop along to https://community.sdl.com/ideas/sdl-tridion-dx-ideas/sdl-tridion-sites-ideas/, or the relevant site for you, and post your idea.

Once your idea is in, people can vote it up or down and comment on it. You can also be sure that it will be seen by people from R&D and product management. The only downside of the whole thing is that they have an "ideation wiki". Ideation? Ideation? Really. Maybe they'll have to find some cweatives for that.

Oh what the heck?! Come on folks! Get ideating!!!

Out with the old, in with the new. How will 2019 look for Tridion specialists?

Posted by Dominic Cronin at Dec 31, 2018 05:12 PM

It's New Year's eve: a traditional time to look backwards and forwards. I've spent a little time contemplating these grand themes in the context of my life as a Tridion specialist. Where are we now and what will the new year bring?

Let's start with 2018. The major event this year, of course, was the release of SDL Tridion DX, incorporating SDL Tridion Sites 9. So the first thing you see is that we got the Tridion name back. Hurrah! OK - enough of that: getting the name back is good, but there are other things to get excited about.

Firstly - Tridion DX brought SDL's two content management systems together: the web content management system "formerly known as Tridion", and the SDL Knowledge Center. The new branding names the first "SDL Tridion Sites" and the second "SDL Tridion Docs". So now we have both the web content management and structured content management features in the same product. To be honest, I suspect at first the number of customers who want to combine the two will be small, but for companies that do need to straddle these two worlds, the integration provided by DX will be a killer feature. As time goes on, we'll probably find that having both approaches available helps to prevent the need to knock a round peg into a square hole in some implementations. It's also clear that this represents a significant engineering effort at SDL. They haven't just put everything in the same shrink-wrap, but for example, the content delivery architecture has had a major revamp to get the two systems to play nicely together. Even this got a new branding: the Unified Delivery Platform!

I suspect, though, that in 2019 I'll be mostly busy with pure WCM work. Sites 9 brings a raft of enhancements that help to keep it current in the fast-moving world of modern web development. The most interesting is perhaps the new model service. We've seen a model service as part of the DXA framework, but Sites 9 has a "Public Content API", which boils down to a GraphQL endpoint. Tridion's architecture has always had great separation of concerns, so in many ways, it can take the current trend for headless sites in its stride, but a GraphQL service will make it easier to consume content directly, without having to build server-side support as part of your implementation. GraphQL allows you to specify exactly which data you want to get back, and will enable developers to ensure the data traffic between server and client is clean and lean.

There are also other interesting new features. A good example is regions within pages. In practice, the build-up of a web page is done this way - we have different areas of the page showing different kinds of content, and it's great to see that this kind of structure can now be modeled directly in the content manager. I'll stop there; there are far too many new features for a short blog post.

The Sites 9 release has meant a matching update (2.1) to the DXA framework, which is now using the new public content API and of course has support for the new page regions.

So going into 2019, things are looking really great for anyone beginning a greenfield project on SDL Tridion. That's not the whole story, though. At the other end of the spectrum, there are always customers who are waiting for the right moment to upgrade from an older version. This might be the year when we finally say goodbye to our old friend VBScript. As I understand it, from the Sites 9 release onwards, the legacy support won't even install any more, so organisations that still have VBScript will be planning how to migrate before the next "major" release puts them out of support. To be fair to SDL, by my calculation it's 16 years since compound templating was introduced. That ought to be ample time, you'd have thought. Putting that a bit more positively, we now have very much better ways of doing things, and a Tridion 9/DXA 2.1 approach is a very much better place to be.

I suspect the other main themes for 2019 will be cloud computing and devops. As organisations move forward to the new product versions, they are also looking at their architectures and working practices. Fortunately, Tridion as a product is already highly cloud-capable, and the move away from templating on the content manager has definitely had an impact on how easy it is to implement continuous integration and delivery/deployment.

It will be a year of transition for many of our customers: not only the technical transitions that I've mentioned, to new architectures and techniques, but also for the business people who are looking to take the next step towards a unified on-line experience for their customers and visitors.

I'm looking forward to it. Bring it on!

A happy new year to you all.

Unboxing SDL Tridion Sites 9

Posted by Dominic Cronin at Dec 26, 2018 10:40 PM

It's Boxing Day, so I thought I'd treat myself by unboxing Tridion Sites 9, or more to the point, installing the Content Manager. Just to give a bit of context, this is not a production installation, but rather a "fifth environment". (Chris Summers once suggested this term for a developer's own setup, as distinct from the Development, Test, Acceptance and Production environments of a traditional delivery street. The name seems to have stuck.)

I'm installing on the Google Cloud Platform (GCP) so I selected a "SQL Server 2017 Standard on Windows Server 2016 Datacenter" with a standard 50GB boot disk. By default I got a system with 3.75GB of memory, but during installation I got a notification from GCP that this might not be enough, so I accepted the suggested upgrade to 4.75GB. I'm not sure if the installation process is sufficiently typical use to determine memory sizing, but well... 4 gigs is pretty small these days, eh? I'm trying to run this on a tight budget, but an extra gig of memory won't be what breaks the bank. (If anything turns out to be expensive, it will be the Windows license, but you pay by the second and I plan to be very disciplined about shutting things down when I don't need them.) The shoestring budget is why I chose a version that already has MSSQL installed. The last time I did this, I ended up running up two separate Windows Servers, but this time I'm just starting with the version with MSSQL, which might help to keep the costs down. Of course, in production, a separate database server or two would still make sense, but this is a research rig. As for where to run the database, I'm also looking at the dockerised version of MSSQL, which has some attractions, but to get going quickly, a Windows image with it installed will be fine.
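For the record, spinning up the dockerised MSSQL would be roughly this one-liner (the image tag is from memory, so double-check it, and obviously choose a better SA password):

docker run -d -p 1433:1433 -e ACCEPT_EULA=Y -e 'SA_PASSWORD=yourStrong(!)Password' mcr.microsoft.com/mssql/server:2017-latest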

For SDL Web 8.5 I already had some scripts that took care of most of the content manager installation. I'm pleased to say that these ran with only very minor modifications for Tridion 9. So, for example, the layout of the installer directory is relatively predictable, but the installer executable is now called SDLTridionSites9.exe. But let's start at the beginning. After the usual fuss trying to get the files I needed up to the cloud and available from my image, I was able to do the following:

  • Run my script that kicks off all the Tridion database install scripts. Nothing very exciting here, just a rinse-and-repeat operation for most of the time. Having a script for this saves you typing the same input parameters a dozen times, and gives you an audit trail. Before I could do this, I had to install Microsoft SQL Server Management Studio and use it to set up the sa account as I wanted it. For content delivery I may well choose to put the databases somewhere else, but I'll definitely remember to read through this note to self I wrote a while ago.
  • As it's an all-in-one install, it's necessary to disable the loopback check. If you're happy with a quick-and-dirty, this will get you there:
New-ItemProperty HKLM:\System\CurrentControlSet\Control\Lsa -Name DisableLoopbackCheck -Value 1 -PropertyType dword
  • After that I ran my installation script. Mostly this is a question of passing some pre-determined parameters to the installer, mainly so it's repeatable. I do, however, do a couple of things first. One is to create the content manager system account (tridionsys, or mtsuser if you're old-fashioned). I also set up a couple of optional Windows features, like this:
$desiredFeatures = "IIS-ApplicationInit", `
            "IIS-ASPNET",`
            "IIS-HttpCompressionDynamic", `
            "IIS-ManagementService", `
            "Web-Mgmt-Service", `
            "WAS-NetFxEnvironment", `
            "Windows-Identity-Foundation"

Get-WindowsOptionalFeature -Online `
| ?{$desiredFeatures -contains $_.FeatureName -and -not ($_.State -eq 'Enabled')} `
| Enable-WindowsOptionalFeature -Online

The old installer had problems with installing the content manager and topology manager on the same port with distinct host headers. My installation approach involves letting the installer put them on separate ports and then running a separate script to fix things up how I like it. I haven't tested whether the new installer makes this unnecessary, as for now my priority is just getting a working system. Perhaps it's also interesting to test whether my pre-install fixup of Windows features is still necessary. UPDATE: I have now tested whether this problem still exists in the Sites 9 installer, and it does.

Anyway, I now have the content manager and topology manager running, and can move on to content delivery. My overall assessment so far is that it's a pretty straightforward setup. I'm looking forward to my further adventures with Tridion Sites 9.

Using environment variables to configure the Tridion microservices

Within a day of posting this, Peter Kjaer informed me that the microservices already support environment variables, so this entire blog post is pointless. So my life just got simpler, but it cost me a blog post to find out. Oh well. I'm currently trying to decide whether to delete the post entirely or work it into something useful. In the meantime at least be aware that it's pointless! :-) Anyway - thanks Peter.

When setting up a Tridion content delivery infrastructure, one of the most important considerations is how you are going to manage all the configuration values. The microservices have configuration files that look very similar to those we're familiar with from versions of Tridion going back to R5. Fairly recently (in 8.5, I think), they acquired a "new trick", which is that you can put replacement tokens in the files, and these will be filled in with values that you can pass as JVM parameters when starting up your java process. Here's an example taken from cd_discovery_conf.xml:

<ConfigRepository ServiceUri="${discoveryurl:-http://localhost:8082/discovery.svc}"
    ConnectionTimeout="10000"
    CacheEnabled="true"
    CacheExpirationDuration="600"
    ServiceMonitorPollDuration="10"
    ClientId="registration"
    ClientSecret="encrypted:HzfQh9wYwAKShDxCm4DnnBnysAz9PtbDMFXMbPszSVY="
    TokenServiceUrl="${tokenurl:-http://localhost:8082/token.svc}">

Here you can see the tokens "discoveryurl" and "tokenurl" delimited from the surrounding text with ${} and followed by default values after the :- symbol.
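So, for example, when starting the service by hand you could override the default like this (an imaginary URL, and assuming the stock start.sh, which passes -D arguments through to the JVM):

./bin/start.sh -Ddiscoveryurl=http://disco.example.com:8082/discovery.svc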

This is really handy if you are doing any kind of managed provisioning where the settings have to come from some external source. One word of warning, though. If you are setting up your system by hand and intending to maintain it that way, it's most likely a really bad idea to use this technique. In particular, if you are going to install the services under Windows, you'll find that the JVM parameters are stored in a deeply obscure part of the registry. More to the point, you really don't want two versions of the truth, and if you have to look every time to figure out whether tokenurl is coming from the default in your config or from deep underground, I don't hold out much hope for your continued sanity if you ever have to troubleshoot the thing.

That said, if you do want to provision these values externally, this is the way to go. Or at least, in general, it's what you want, but personally I'm not really too happy with the fact that you have to use JVM parameters for this. I've recently been setting up a dockerised system, and I found myself wishing that I could use environment variables instead. That's partly because this is a natural idiom with docker. Docker doesn't care what you run in a container, and has absolutely no notion of a JVM parameter. On the other hand, Docker knows all about environment variables, and provides full support for passing them in when you start the container. On the command line, you can do this with something like:

> docker run -it \
    -e dbtype=MSSQL \
    -e dbclass=com.microsoft.sqlserver.jdbc.SQLServerDataSource \
    -e dbhost=mssql \
    -e dbport=1433 \
    -e dbname=Tridion_Disc \
    -e discoveryurl=http://localhost:8082/discovery.svc \
    -e tokenurl=http://localhost:8082/token.svc \
    discovery bash

I'm just illustrating how you'd pass command-line environment arguments, so don't pay too much attention to anything else here, and of course, even if you had a container that could run your service, this wouldn't work. It's not very much less ugly than constructing a huge set of command parameters for your start.sh and passing them as a command array. But bear with me; I still don't want to construct that command array, and there are nicer ways of passing in the environment variables. For example, here's how they might look in a docker-compose.yaml file. (Please just assume that any YAML I post is accompanied by a ritual hawk and spit. A curse be on YAML and its benighted followers.)

   environment: 
      - dbtype=MSSQL
      - dbclass=com.microsoft.sqlserver.jdbc.SQLServerDataSource
      - dbhost=mssql
      - dbport=1433
      - dbname=Tridion_Discovery
      - dbuser=TridionBrokerUser
      - dbpassword=Tridion1
      - discoveryurl=http://localhost:8082/discovery.svc
      - tokenurl=http://localhost:8082/token.svc

This is much more readable and manageable. In practice, rather than docker-compose, it's quite likely that you'll be using some more advanced orchestration tools, perhaps wrapped up in some nice cloudy management system. In any of these environments, you'll find good support for passing in some neatly arranged environment variables. (OK - it will probably degenerate to YAML at some point, but let's leave that aside for now.)

Out of the box, the Tridion services are started with a bash script "start.sh" that's to be found in the bin directory of your service. I didn't want to mess with this: any future updates would then be a cause for much fiddling and cursing. On top of that, I wanted something I could generically apply to all the services. My approach looks like this:

#!/bin/bash
# vim: set fileformat=unix

# Gather every environment variable prefixed with tcdconf_, and turn each
# one into a -Dname=value argument, stripping off the prefix as we go
scriptArgs=""
tcdenvMatcher='^tcdconf_([^=]*)=(.*)'
for tcdenv in $(printenv); do
    if [[ $tcdenv =~ $tcdenvMatcher ]]; then
        scriptArgs="$scriptArgs -D${BASH_REMATCH[1]}=${BASH_REMATCH[2]}"
    fi
done

# Hand the arguments on to the standard start.sh in the same directory
script_path="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
$script_path/start.sh $scriptArgs

(I'm sticking with the docker-compose example to illustrate this. In fact, with docker-compose, you'd also need to script some dependency-management between the various services, which is why you'd probably prefer to use a proper orchestration framework.)

The script is called "startFromEnv.sh". When I create my docker containers, I drop this into the bin folder right next to start.sh. When I start the container, the command becomes something like this (but YMMV depending on how you build your images):

command: "/Discovery/bin/startFromEnv.sh"

instead of:

command: "/Discovery/bin/start.sh"

And the environment variables get some prefixes, so the relevant section of the setup looks like this:

    environment: 
      - tcdconf_dbtype=MSSQL
      - tcdconf_dbclass=com.microsoft.sqlserver.jdbc.SQLServerDataSource 
      - tcdconf_dbhost=mssql
      - tcdconf_dbport=1433
      - tcdconf_dbname=Tridion_Discovery
      - tcdconf_dbuser=TridionBrokerUser
      - tcdconf_dbpassword=Tridion1
      - tcdconf_discoveryurl=http://localhost:8082/discovery.svc
      - tcdconf_tokenurl=http://localhost:8082/token.svc

The script is written in bash, as evidenced by the hashbang line at the top. (Immediately after is a vim modeline that you can ignore or delete unless you happen to be using an editor that respects such things and you are working on a Windows system. I've left it as a reminder that the line endings in the file do need to be unix-style.)

The rest of the script simply(!) loops through the environment variables that are prefixed with "tcdconf_" and converts them to -D arguments, which it then passes on to start.sh (which it looks for in the same directory as itself).

I'm still experimenting, but for now I'm assuming that this approach has improved my life. Please do let me know if it improves yours. :-)

If you think the script is ugly, apparently this is a design goal of bash, so don't worry about it. At least it's not YAML (hack, spit!)